no-problem/9906/gr-qc9906067.html
# Electromagnetic Zero Point Field as Active Energy Source in the Intergalactic Medium

Alfonso Rueda and Hiroki Sunahata, California State University, Long Beach, CA 90840. E-mail: arueda@csulb.edu

Bernhard Haisch, Solar & Astrophysics Laboratory, Lockheed Martin, 3251 Hanover St., Palo Alto, CA 94304. E-mail: haisch@starspot.com

Revised version of invited presentation at the 35th AIAA/ASME/SAE/ASEE Joint Propulsion Conference and Exhibit, 20–24 June 1999, Los Angeles, CA. AIAA paper 99-2145. Copyright © 1999 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.

ABSTRACT For over twenty years the possibility that the electromagnetic zero point field (ZPF) may actively accelerate electromagnetically interacting particles in regions of extremely low particle density (such as those extant in intergalactic space (IGS), with $`n\sim 1`$ particle $`\mathrm{m}^{-3}`$) has been studied and analyzed. This energizing phenomenon has been one of the few contenders for the acceleration of cosmic rays (CR), particularly at ultrahigh energies. The recent finding by the AGASA collaboration (Phys. Rev. Lett. 81, 1163, 1998) that the CR energy spectrum does not display any sign of the Greisen-Zatsepin-Kuzmin cut-off (which should be present if these CR particles were indeed generated in localized ultrahigh-energy CR sources, e.g., quasars and other highly active galactic nuclei) may indicate the need for an acceleration mechanism that is distributed throughout IGS, as is the case with the ZPF. Other unexplained phenomena that receive an explanation from this mechanism are the generation of the X-ray and gamma-ray backgrounds and the existence of Cosmic Voids. Recently, however, a statistical-mechanical challenge to the classical (not the quantum) version of the zero-point acceleration mechanism has been posed (de la Peña and Cetto, The Quantum Dice, 1996). Here we briefly examine the consequences of this challenge and a prospective resolution.

INTRODUCTION The idea that the vacuum may play a fundamental role in the early development and future evolution of the universe has been “in the air” for a long time. Both the Steady State and the old (pre-inflation) Big Bang cosmological scenarios needed to invoke this idea in one way or another. The Steady State proposed the perennial creation of particles (protons and electrons, presumably from the Dirac vacuum of particle-antiparticle pairs) throughout the universe. The old Big Bang model postulated a localized and instantaneous generation of all the matter and energy of the universe together with the subsequent gradual generation of space-time, which sprang from literally nothing. And the new Big Bang, with its more sophisticated Inflationary Cosmological Model, started from the postulation of an unstable vacuum (the false vacuum) that collapsed, giving rise to the simultaneous creation of matter-energy and space-time followed by an inflation which produced a rapid expansion of space-time. This inflation was brief but extremely dynamic and, as new space was actually being created, the process gave the illusion of being superluminal. Once this stage terminated, the universe entered its much slower current Hubble expansion, the era in which we now are and have been for the last $`12\text{–}20\times 10^9`$ years or so. If the vacuum played such an active role in the initial stages of the universe, a valid question is whether it is still playing some active role in the present expansion of the universe.

Recent observational findings by two different teams of astronomers point in this direction. It has been discovered that, instead of a decreasing rate of expansion due to cosmological gravitational attraction, as was assumed for many years, the universe actually experiences an accelerated rate of expansion. This accelerated expansion may be physically explainable in terms of an ongoing involvement of the vacuum in energizing the expansion.

ACCELERATED EXPANSION AND COSMIC VOIDS There is a paucity of known mechanisms by which the vacuum might yield some of its energy to steadily contribute to an accelerated expansion of the universe. Recently, however, within a different astrophysical context, a mechanism that could accomplish this was proposed. Rueda, Haisch and Cole investigated a mechanism that could account for the phenomenon of Cosmic Voids. It is now known that the structure of the universe presents a peculiar distribution of matter in which clusters and superclusters of galaxies are found in “sheets” that surround “voids”, i.e., large regions of space with typical diameters around 100 Megaparsecs which are practically devoid of matter and in which no galaxies are found. The whole structure may be represented by a soap-foam-like model in which the particles and magnetic fields are mainly found on the sheets, along with galactic clusters and superclusters, surrounding enormous spaces in which much lower particle densities and much smaller concomitant magnetic field intensities prevail. It has been proposed that the ZPF of traditional quantum electrodynamics (QED), via a well-known mechanism (discussed below), is responsible for the effect. This mechanism also seems to contribute to other astrophysical phenomena: the X-ray and gamma-ray backgrounds and the acceleration of cosmic rays (CR), particularly at very high energies, $`E\gtrsim 10^{17}`$ eV (see the references for an extensive review). It is important to emphasize that when the ZPF is applied as an energizing entity to produce the expansion associated with the Cosmic Voids, an expansion of the universe itself must automatically be produced. The ZPF acceleration mechanism expands the Voids by creating a pressure imbalance, transferring energy to, and thereby increasing the pressure in, those regions where matter densities happen to be comparatively lower in the IGS plasma. As a consequence, the low-density regions tend to expand and to expel the trapped magnetic fields. It is well known in astrophysics that space plasma regions of higher densities are colder, while regions of lower density are almost exclusively occupied by a highly energetic (hot) plasma of electrons and ionized nuclei, mainly protons. This temperature-density anticorrelation is what occurs in the Voids, and it is also what is happening in many places throughout IGS. Moreover, such a distribution would be a natural outcome of the ZPF acceleration mechanism when combined with ordinary radiative collisional cooling.

THE ZERO-POINT FIELD ACCELERATION MECHANISM The origins of this mechanism go back to the early work of Einstein and his immediate collaborators. Einstein realized that when a gas of electromagnetically interacting particles (Einstein restricted his consideration to polarizable particles; it can be shown, however, that in the case of monopolar particles the mechanism is even more effective) is submitted to the action of a random electromagnetic background (e.g., the case of thermal radiation), two simultaneous phenomena take place.
Due to the action of the random electromagnetic medium, electromagnetically interacting particles become energized, steadily increasing their translational kinetic energy. They perform a random walk in velocity space that takes them, on average, systematically away from the origin. Simultaneously, however, as their velocities increase, the particles find themselves submerged in a random electromagnetic medium that each of them views as Doppler-shifted and thereby as having lost its isotropy. This causes the random electromagnetic background to appear distorted and thereby to produce a drag force on the particle that is of a frictional character, because it is exactly proportional to the velocity $`\vec{v}`$, $$\vec{F}=-A\left[\rho (\omega ,T)-\frac{1}{3}\omega \frac{\partial \rho (\omega ,T)}{\partial \omega }\right]\frac{\vec{v}}{c},$$ $`(1)`$ where $`A`$ is a positive constant. In a Hohlraum at equilibrium temperature $`T`$, $`\rho (\omega ,T)`$ represents the volumetric spectral energy density of the radiation: $$\rho (\omega ,T)\,d\omega =\frac{\hbar \omega ^3}{2\pi ^2c^3}\,\mathrm{coth}\left(\frac{\hbar \omega }{2kT}\right)d\omega =\frac{\hbar \omega ^3}{\pi ^2c^3}\left[\frac{1}{\mathrm{exp}(\hbar \omega /kT)-1}+\frac{1}{2}\right]d\omega .$$ $`(2)`$ The first term in the last bracket represents the thermal part: a Planck distribution at temperature $`T`$. It disappears at zero temperature $`(T\to 0)`$, leaving the last part, $$\rho (\omega ,0)\,d\omega =\rho _0(\omega )\,d\omega =\frac{\hbar \omega ^3}{2\pi ^2c^3}\,d\omega .$$ $`(3)`$ This is the spectral energy density of the ZPF. It originates in quantum theory from the harmonic oscillator behavior of the individual cavity modes. At $`T=0`$, each individual cavity mode behaves as a quantized harmonic oscillator with minimum, or zero-point, ground state energy $`\hbar \omega /2`$. When the energy in each oscillator is multiplied by the density of modes per unit volume, $`(\omega ^2/\pi ^2c^3)`$, one obtains the ZPF spectral energy density above. It can be shown that such a ZPF is also present in free space, and cogent arguments can be given for its reality. Observe, however, that when the temperature is set to zero (or close to zero), $`\rho (\omega ,T)\,d\omega \to \rho _0(\omega )\,d\omega `$ and, because of the $`\omega ^3`$ dependence, the Einstein-Hopf drag disappears. This last fact, first realized by Boyer, is at the basis of the ZPF acceleration mechanism. So, under circumstances in which there are negligible particle collisions and negligible ambient radiation fields other than the ZPF, and when the temperature is low or negligible, particles are still translationally energized by the random background ZPF radiation, but the Einstein-Hopf drag due to the ZPF is zero. In the original formulation of a ZPF acceleration mechanism, it was assumed for simplicity that the translational displacement of the particle was restricted to a single dimension (say, the $`x`$-axis) and that the internal dipole vibrated along a single direction (say, the $`z`$-axis). These restrictions were removed by one of us when proposing this concept as a mechanism for the actual acceleration of cosmic ray (CR) particles in IGS. Soon after, it was realized that monopolar particles could also be accelerated by the ZPF, and in a much more effective manner than polarizable particles. This conforms with the well-known observational constraint on CR acceleration mechanisms that restricts the acceleration to fully ionized nuclei.
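The key point, that the $`\omega ^3`$ zero-point spectrum makes the drag bracket of Eq. (1) vanish while the thermal part does not, is easy to verify symbolically. A minimal sketch (Python with sympy; setting $`\hbar =c=k=1`$ is my simplification, not the paper's):

```python
import sympy as sp

w, T = sp.symbols('omega T', positive=True)

rho_zp = w**3 / (2 * sp.pi**2)                     # ZPF part, Eq. (3)
rho_th = (w**3 / sp.pi**2) / (sp.exp(w / T) - 1)   # Planck part of Eq. (2)

def drag_bracket(rho):
    """Bracket of the Einstein-Hopf force, Eq. (1):
    rho - (omega/3) d(rho)/d(omega)."""
    return sp.simplify(rho - (w / 3) * sp.diff(rho, w))

print(drag_bracket(rho_zp))        # 0: the omega^3 spectrum produces no drag
print(drag_bracket(rho_th) == 0)   # False: the thermal part does
```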
Another well-known constraint is that electrons appear in CR only at very low energies, $`E\lesssim 10^{12}`$ eV, and not beyond. This could be explained by the ultrarelativistic Zitterbewegung that, because of a time dilation effect, decorrelates the actions of the electric and of the magnetic fields in the Einstein-Hopf mechanism. This therefore prevents the acceleration of electrons to ultrahigh CR energies. This decorrelation does not take place in the case of the much more massive and sturdy protons that, if allowed, can be carried up to the highest CR energies, beyond $`10^{20}`$ eV. The ZPF CR acceleration mechanism can be derived in a quantum way. It was found that it occurs in a time-symmetric or Wheeler-Feynman version of QED, but acceleration does not occur in the more ordinary time-unidirectional version of QED. However, as the time-symmetric QED version and the time-unidirectional version are equivalent (as long as certain initial boundary conditions are assumed for the radiation in space-time, and as those conditions seem to hold in the original universe), there is no clear reason for taking one or the other version of QED: no reason other than the fact that we are more used to the time-unidirectional version. It is moreover very interesting to mention that both the Wheeler-Feynman version of QED and the classical stochastic theory give exactly the same final form for the translational kinetic energy rate of growth $`\mathrm{\Omega }`$, namely $$\mathrm{\Omega }=\frac{3}{5\pi }(\mathrm{\Gamma }\omega _0)^2\left(\frac{\hbar \omega _0}{mc^2}\right)(\hbar \omega _0)\,\omega _0,$$ $`(4)`$ where $`\mathrm{\Gamma }`$ is the Abraham-Lorentz parameter ($`\mathrm{\Gamma }=2e^2/3mc^3`$), with $`e`$ the charge and $`m`$ the mass of the particle (proton). The frequency $`\omega _0`$ is a parameter that depends on other considerations, e.g., what entity really performs the oscillations, whether the whole proton or some component (like quarks, proton vibration modes, etc.) inside the proton. In the most simplistic case $`\omega _0`$ comes out to be half the proton Compton frequency $`(\omega _0=mc^2/2\hbar )`$, but this applies only under unrealistic, strictly subrelativistic considerations. In practice $`\omega _0`$ is taken as a free parameter to be phenomenologically fitted by observation. The ZPF acceleration mechanism could be found to satisfy all standard CR observational constraints, certainly at energies $`E\gtrsim 10^{17}`$ eV. Lower energies could not be immediately excluded, though the situation there was somewhat less certain. The least that we can say, then, is that the mechanism, up to now, seems to be one of the strongest contenders for CR acceleration at ultrahigh energies, $`E\gtrsim 10^{17}`$ eV.

CHALLENGE TO THE ZPF ACCELERATION CONCEPT Recently, in their 1996 textbook on the theory of Stochastic Electrodynamics (SED) and related theories, de la Peña and Cetto have challenged at least some aspects of the ZPF acceleration concept. In a lucid reanalysis of the Boyer derivation of the translational kinetic energy growth, they argue that if an arguably more realistic non-Markovian stochastic process is assumed for the phenomenon in its classical form, no systematic translational kinetic energy growth takes place. This would then rather fit, according to de la Peña and Cetto, the time-unidirectional version of the QED acceleration mechanism, which indeed yields no systematic translational kinetic energy growth. We discuss the de la Peña and Cetto argument in the Appendix.
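Before turning to that challenge, a quick numerical sketch gives a feel for the quantities entering Eq. (4). The constants are Gaussian cgs values; the choice $`\omega _0=mc^2/2\hbar `$ below is the "most simplistic case" mentioned above, which the text itself flags as unrealistic, so the printed output is illustrative only:

```python
import math

# Gaussian cgs constants for a proton
e = 4.803e-10        # charge [esu]
m = 1.6726e-24       # mass [g]
c = 2.9979e10        # speed of light [cm/s]
hbar = 1.0546e-27    # [erg s]

Gamma = 2 * e**2 / (3 * m * c**3)    # Abraham-Lorentz parameter [s]

def growth_rate(omega0):
    """Translational kinetic-energy growth rate Omega of Eq. (4) [erg/s]."""
    return (3 / (5 * math.pi)) * (Gamma * omega0)**2 \
           * (hbar * omega0 / (m * c**2)) * (hbar * omega0) * omega0

omega0 = m * c**2 / (2 * hbar)       # "most simplistic" choice, ~7e23 rad/s
print(f"Gamma = {Gamma:.3e} s, Gamma*omega0 = {Gamma * omega0:.3e}")
print(f"Omega = {growth_rate(omega0):.3e} erg/s")
```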
Formally, of course, once the non-Markovian behavior is assumed, the argument of de la Peña and Cetto seems faultless. The situation, however, is far from clear. Time-symmetric QED still gives the acceleration, and there is no certainty at all that, even when a classical viewpoint is implemented, the process has to be non-Markovian, i.e., to have some memory. Moreover, recent work of Cole strongly suggests the thermodynamic soundness of the ZPF acceleration mechanism both in its physical and in its astrophysical context. This step is important: before it there remained a thermodynamic challenge to ZPF acceleration, since ZPF acceleration apparently seemed to violate standard interpretations of the first and second laws of thermodynamics. It has been shown that this is indeed not the case.

DISCUSSION Given the overwhelming astrophysical explanatory possibilities of the ZPF acceleration mechanism (to mention a few: accelerated cosmic expansion, ultrahigh-energy CR, part of the X-ray and gamma-ray backgrounds, Cosmic Voids, etc.), it is of paramount importance to clarify the situation and decide whether or not the de la Peña and Cetto challenge is a surmountable difficulty. There are also other important possibilities for the ZPF acceleration mechanism. If valid, the mechanism should eventually provide a means to transfer energy, back and forth, but most importantly forth, from the vacuum electromagnetic ZPF into a suitable experimental apparatus. A more far-fetched but not trifling possibility is that a better understanding of the Einstein-Hopf process that accompanies ZPF acceleration would lead to an understanding of the recently proposed ZPF contribution to inertia, and presumably also to some means for influencing inertia and, by the Einstein Principle of Equivalence, gravity as well. This has very interesting prospective engineering applications.

ACKNOWLEDGEMENTS BH and AR acknowledge partial support from NASA research contract NASW-5050. AR and BH acknowledge interesting exchanges with Prof. Daniel C. Cole (Boston University).

APPENDIX: Possible suppression of the secular ZPF acceleration

De la Peña and Cetto have reached the conclusion that the zero-point field may not produce secular acceleration. In this Appendix, the arguments leading to this conclusion are discussed.

1. Dipole Oscillator Model

The Boyer SED approach to the problem is based upon the model originally developed by Einstein and Hopf. In this model, which for simplicity we follow here, it is assumed that the translational motion of the particle is along the $`x`$-direction and that the oscillator dipole vibrates along the $`z`$-axis. (An extension to fully three-dimensional motions was obtained later.) During a time interval $`\delta t`$, this oscillator experiences two forces due to electromagnetic radiation, namely the impulse $`\mathrm{\Delta }`$ transferred to the dipole via the interaction with the fluctuating field, and the force of resistance $`Rp`$ due to the anisotropy of the field as seen by the moving particle. Then, if at time $`t`$ the momentum of the translational motion of the oscillator is $`p`$, after a short time $`\delta t`$ its momentum becomes $`p+\mathrm{\Delta }-Rp\,\delta t`$.
Since in equilibrium the mean-square momentum has to be constant in time, we get the equilibrium condition $$\langle p^2\rangle =\langle (p+\mathrm{\Delta }-Rp\,\delta t)^2\rangle ,$$ $`(A1)`$ where the impulse $`\mathrm{\Delta }=\int F\,dt`$ and the drag coefficient $`R`$ are given respectively by $$\langle \mathrm{\Delta }^2\rangle =\left\langle \left(\int _t^{t+\delta t}dt\,ez\frac{\partial E_z}{\partial x}\right)^2\right\rangle =\frac{4\tau \pi ^4c^4}{5\omega ^2}\rho ^2(\omega ,T)\,\delta t,$$ $`(A2)`$ and $$R=\frac{6\pi ^2c\tau }{5m}\left(\rho -\frac{1}{3}\omega \frac{\partial \rho }{\partial \omega }\right).$$ $`(A3)`$ Equation (A1), when expanded and after neglecting the term of second order in $`\delta t`$, yields $$\langle \mathrm{\Delta }^2\rangle +2\langle p\mathrm{\Delta }\rangle -2R\langle p^2\rangle \delta t-2R\langle p\mathrm{\Delta }\rangle \delta t=0.$$ $`(A4)`$ At $`T=0`$, however, there is no drag force, since $`\rho _0`$ is Lorentz-invariant, so that the above equation yields $$\langle \mathrm{\Delta }^2\rangle _0+2\langle p\mathrm{\Delta }\rangle _0=0.$$ $`(A5)`$ This suggests that, since $`\langle \mathrm{\Delta }^2\rangle _0\ne 0`$ due to the presence of the zero-point field, the momentum $`p`$ and the fluctuation $`\mathrm{\Delta }`$ have to be correlated, contrary to the assumption $`\langle p\mathrm{\Delta }\rangle =0`$ correctly made by Einstein and his coworkers, but only for the case of pure thermal radiation. It is here that de la Peña and Cetto disagree with Boyer, who assumed that $`\langle p\mathrm{\Delta }\rangle =0`$ always, even when the ZPF is present. So de la Peña and Cetto sensibly claim that the fluctuation at a given time is not independent of past ones, i.e., the process $`\delta p=p-\overline{p}`$ is not Markovian and the system acquires a certain degree of memory in its interactions with the ZPF. Now let us combine (A4) with (A5) using the approximation $`\langle p\mathrm{\Delta }\rangle \approx \langle p\mathrm{\Delta }\rangle _0`$ (which is justified because the thermal component of the field is not expected to contribute significantly to the correlation $`\langle p\mathrm{\Delta }\rangle `$) to yield $$\langle \mathrm{\Delta }^2\rangle -\langle \mathrm{\Delta }^2\rangle _0=2R\langle p^2\rangle \delta t-R\langle \mathrm{\Delta }^2\rangle _0\delta t.$$ $`(A6)`$ It can be shown, and is well known to experts dealing with the Einstein-Hopf model in SED, that the stochastic average of the square of the fluctuating impulse, $`\langle \mathrm{\Delta }^2\rangle _0`$, is of order $`\delta t`$. Thus the last term is of order $`(\delta t)^2`$ and can be neglected. Hence the equation simplifies to $$\langle \mathrm{\Delta }^2\rangle -\langle \mathrm{\Delta }^2\rangle _0=2R\langle p^2\rangle \delta t.$$ $`(A7)`$ The first term includes both the thermal and the zero-point fluctuations and reduces to $`\langle \mathrm{\Delta }^2\rangle _0`$ at $`T=0`$. The drag force on the right-hand side is also zero at $`T=0`$, due to the $`\omega ^3`$ dependence of $`\rho _0`$. Since both sides reduce to zero at $`T=0`$, it can be argued that $`\langle \mathrm{\Delta }^2\rangle _0`$ is no longer of the form $`\mathrm{const}\times \delta t`$ that was responsible for the steady translational kinetic energy growth. Hence there is no acceleration of a free particle due to the zero-point field: the averaged square of the fluctuating impulse should be a constant, as in standard theory.

2. Quantum analysis

Our quantum work does not entirely support this de la Peña and Cetto conclusion. We find that the secular acceleration disappears only under ordinary time-unidirectional QED. However, the secular acceleration mechanism is resurrected, and in full force (with a much more detailed algebraic expression that reduces in a suitable limit to the standard classical case), under the Wheeler-Feynman time-symmetric form of QED. A detailed discussion of this can be found in the Appendix of our earlier work.
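A toy simulation of the update $`p\to p+\mathrm{\Delta }-Rp\,\delta t`$ makes the classical picture concrete: with Markovian, uncorrelated impulses, $`\langle p^2\rangle `$ saturates when the drag $`R`$ is finite but grows without bound when $`R=0`$, which is the secular acceleration Boyer found; de la Peña and Cetto's objection is precisely that at $`T=0`$ the impulses should not be taken as uncorrelated. A sketch in arbitrary units (Gaussian impulses and all parameter values are illustrative choices of mine, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_square_p(R, D, steps=2000, walkers=5000, dt=1e-3):
    """Markovian Einstein-Hopf walk p -> p + Delta - R p dt with
    independent impulses, <Delta^2> = D dt.  Returns <p^2> at the end."""
    p = np.zeros(walkers)
    for _ in range(steps):
        p += rng.normal(0.0, np.sqrt(D * dt), walkers) - R * p * dt
    return float(np.mean(p**2))

D = 1.0
print(mean_square_p(R=5.0, D=D))   # drag on:  saturates near D/(2R) = 0.1
print(mean_square_p(R=0.0, D=D))   # drag off (T = 0): grows as D*t = 2.0
```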
Our new exploration of the secular acceleration mechanism involves, among other things, a reanalysis of these results and a comparison of the classical Markovian case of traditional ZPF SED secular acceleration, the non-Markovian counterpart of de la Peña and Cetto, and the corresponding time-asymmetric and time-symmetric QED versions. We also intend to pursue an experimental approach to the problem (proposed in a NASA research proposal with D.C. Cole as PI) in order to check whether the secular acceleration can be validated, for example in a Paul trap.

REFERENCES

Börner, G., The Early Universe (Springer-Verlag, Heidelberg, 1988), and references therein.
Weinberg, S., Gravitation and Cosmology (Wiley, New York, 1972), and references therein.
Gliner, E.B., Sov. Phys. JETP 22, 378 (1965).
Sato, K., Mon. Not. R. Astron. Soc. 195, 487 (1981).
Guth, A., Phys. Rev. D 23, 347 (1981).
Linde, A.D., Phys. Lett. 108B, 389 (1982).
Perlmutter, S. et al., LBNL preprint 41801 (1998); detailed updated information at www-supernova.lbl.gov.
Perlmutter, S. et al., Nature 391, 51 (1998), and references therein.
Rueda, A., Space Science Reviews 53, 223–345 (1990), and references therein.
Rueda, A., Phys. Lett. A 147, 423 (1990).
Rueda, A., Haisch, B. & Cole, D.C., Astrophys. J. 445, 7 (1995).
De Lapparent, V., Geller, M.J. & Huchra, J.P., Astrophys. J. 302, L1 (1986), and references therein.
Rueda, A., in The Galactic and Extragalactic Background Radiation, IAU Symposium 139, Heidelberg, S. Bowyer and C. Leinert, eds. (Kluwer, Dordrecht, 1990), pp. 424–425.
Rueda, A., Nuovo Cimento A 48, 155 (1978).
Ostriker, J., in Arons, J., McKee, C. & Max, C. (eds.), Particle Acceleration Mechanisms in Astrophysics (AIP, New York, 1979), p. 357.
Einstein, A. & Hopf, L., Ann. Phys. (Leipzig) 33, 1105 and 1096 (1910).
Einstein, A. & Stern, O., Ann. Phys. (Leipzig) 40, 551 (1913).
Rueda, A., Phys. Rev. A 23, 2020 (1981).
Brody, T., The Philosophy Behind Physics, L. de la Peña & P.E. Hodgson (eds.) (Springer-Verlag, Heidelberg, 1993).
de la Peña, L. & Cetto, A.M., The Quantum Dice (Kluwer, Dordrecht, 1996).
Boyer, T.H., Phys. Rev. 182, 1374 (1969).
Rueda, A. & Cavalleri, G., Nuovo Cimento C 6, 239 (1983).
Rueda, A., Phys. Rev. A 30, 2221 (1984).
Rueda, A., Nuovo Cimento B 96, 64 (1986).
Davies, P.C.W., The Physics of Time Asymmetry (Univ. Calif. Press, Berkeley, 1974), and references therein.
Rueda, A., Nuovo Cimento C 6, 523 (1983).
Ref. , pp. 147–152 and pp. 252–253.
Cole, D.C., Phys. Rev. E 51, 1663 (1995).
Ref. , p. 299 ff.
Rueda, A. & Haisch, B., Found. Phys. 28, 1057 (1998); also Phys. Lett. A 240, 115 (1998).
no-problem/9906/cond-mat9906436.html
# Pattern selection in a lattice of pulse-coupled oscillators

## I Introduction

The study of the collective behavior of populations of interacting nonlinear oscillators has attracted the interest of physicists and mathematicians for many years, since such populations can be used to model several chemical, biological and physical systems. Among them, we should mention cardiac pacemaker cells, integrate-and-fire neurons, and other systems made of excitable units. Most of the theoretical papers that have appeared in the scientific literature deal with oscillators interacting through continuous-time couplings, which allows the system to be described by means of coupled differential equations and analyzed with most of the modern techniques of nonlinear dynamics. More challenging from a theoretical point of view is to consider pulse coupling or, in other words, oscillators coupled through instantaneous interactions that take place at a very specific moment of the period. The richness of behavior of these pulse-coupled oscillatory systems includes synchronization phenomena, spatio-temporal pattern formation (we could mention, for instance, traveling waves, chessboard structures, and periodic waves), rhythm annihilation, self-organized criticality, etc. Most of the work on pattern formation has been done in mean-field models or in populations of just a few oscillators. However, such restrictions do not allow one to consider certain variables whose effect can be crucial for realistic systems. The specific topology of the connections or the geometry of the system are typical examples which usually induce important changes in the collective behavior of these models. Pattern formation usually takes place when oscillatory units interact in an inhibitory way, although it has also been shown that the shape of the interacting pulse, when the spike lasts for a certain amount of time, or time delays in the interactions, can lead to spatio-temporal pattern formation in the case of excitatory couplings as well. Only recently have general solutions been worked out for the general case, in which the existence and stability of the patterns is proved. The aim of this paper is to study some pattern properties and to obtain a quantitative estimation of the probability of pattern selection under arbitrary initial conditions or, in the language of dynamical systems, of the volume of the basin of attraction of each pattern. Keeping this goal in mind, we will use previously derived general results: assuming a system defined on a ring, a mathematical formalism was developed that is powerful enough to extract analytic information about the system, not only about the mechanisms which are responsible for synchronization and the formation of spatio-temporal structures, but also, as a complement, to prove under which conditions these are stable solutions of the dynamical equations. Despite the apparent simplicity of the model, ring lattices of pulse-coupled oscillators are currently used to model certain types of cardiac arrhythmias in which there is an abnormally rapid heartbeat whose period is set by the time that an excitation takes to travel the circuit. Moreover, there are experiments in which rings of a few R15 neurons from Aplysia are constructed and stable patterns are reported. Our 1d model allows us to study analytically the simplest patterns and understand their mechanisms of selection. The structure of this paper is as follows. In Sec. II we review the model and set the notation used throughout the paper.
In Sec. III we study some pattern properties which will be useful when, in Sec. IV, we propose an estimation of the probability of selection of each pattern. In the last section we present our conclusions.

## II The model

Our system consists of a ring of $`(N+1)`$ pulse-coupled oscillators. The phase of each oscillator $`\varphi _i`$ evolves linearly in time, $$\frac{d\varphi _i}{dt}=1,\qquad i=0,\mathrm{},N,$$ (1) until one of them reaches the threshold value $`\varphi _{th}=1`$. When this happens the oscillator fires and changes the state of its rightmost neighbor according to $$\varphi _i=1\Rightarrow \left\{\begin{array}{l}\varphi _i\to 0\\ \varphi _{i+1}\to \varphi _{i+1}+\epsilon \varphi _{i+1}\equiv \mu \varphi _{i+1}\end{array}\right.$$ subject to periodic boundary conditions, i.e. $`N+1\equiv 0`$, where $`\epsilon `$ denotes the strength of the coupling and $`\mu =1+\epsilon `$. We have assumed that, from an effective point of view, the pulse interaction between oscillators, as well as the state of each unit of the system, can be described in terms of changes in the phase or, in other words, in terms of the so-called phase response curve (PRC), $`\epsilon \varphi `$ in our case. A PRC for a given oscillator represents the phase advance or delay resulting from receiving an external stimulus (the pulse) at different moments in the cycle of the oscillator. We will assume $`\epsilon <0`$ throughout the paper, as we are only interested in spatio-temporal pattern formation and $`\epsilon >0`$ always leads to the globally synchronized state. This linear PRC has physical sense in some situations. For instance, it shows up when we expand the nonlinear PRC of the Peskin model of cardiac pacemaker cells in powers of the convexity of the driving, or in neuronal modelling. In practice, however, this condition can be relaxed, since a nonlinear PRC does not change the qualitative behavior of the model provided the number of fixed points of the dynamics is not altered. Moreover, a linear PRC has the advantage of making the system tractable from an analytical point of view. Let us describe the notation used in the paper. The population is ordered according to the following criterion: the oscillator which fires is always labeled as unit 0, and the rest of the population is ordered clockwise from this unit. After the firing, the system is driven until another oscillator reaches the threshold. Then we relabel the units such that the oscillator at $`\varphi =1`$ is now unit number 0, and so on. This firing + driving (FD) process for $`N+1`$ oscillators can be described through a suitable transformation, $$\vec{\varphi }^{\,\prime }=T_k(\vec{\varphi })\equiv \vec{1}+\mathbb{M}_k\vec{\varphi },$$ (2) where $`\mathbb{M}_k`$ is an $`N\times N`$ matrix, $`\vec{\varphi }`$ is a vector with $`N`$ components, $`\vec{1}`$ is a vector with all its components equal to one, and $`k`$ stands for the index of the oscillator which will fire next. We call this kind of transformation a firing map, and we have to define as many firing maps as there are oscillators that could fire; that is, the index $`k`$ runs from $`k=1`$ ($`\varphi _1`$ fires) to $`k=N`$ ($`\varphi _N`$ fires).
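As an aside, the threshold rule above is easy to simulate event by event. A minimal sketch (Python; parameter values illustrative) drives the ring to its attractor and reads off the phase vector at a firing instant, which identifies the selected pattern (cf. Eq. (7) below):

```python
import numpy as np

def settle(phases, eps, transients=2000):
    """Event-driven ring of pulse-coupled oscillators: drive all phases to
    the next threshold, reset the firer, scale its right neighbour's phase
    by mu = 1 + eps (eps < 0, inhibitory).  Returns the phase vector at a
    firing instant, relabeled so the firing unit is index 0."""
    phi = np.array(phases, dtype=float)
    n = len(phi)
    for _ in range(transients):
        k = int(np.argmax(phi))          # next unit to reach threshold
        phi += 1.0 - phi[k]              # driving at unit rate
        state = np.roll(phi, -k)         # snapshot at the firing instant
        phi[k] = 0.0                     # firing: reset
        phi[(k + 1) % n] *= 1.0 + eps    # inhibitory pulse to the neighbour
    return state

rng = np.random.default_rng(1)
st = settle(rng.random(4), eps=-0.1)
print(st)   # st[1] -> m/(N+1+m*eps) for one of m = 1, 2, 3 (Eq. (7) below)
```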
For example, in the $`N+1=4`$ oscillator case, the firing map corresponding to the FD process in which $`\varphi _2`$ is the next oscillator to fire,

| $`\varphi _0=1`$ | $`\stackrel{\text{firing}}{\to }`$ | $`0`$ | $`\stackrel{\text{driving}}{\to }`$ | $`1-\varphi _2=\varphi _2^{\prime }`$ |
| --- | --- | --- | --- | --- |
| $`\varphi _1`$ | $`\to `$ | $`\mu \varphi _1`$ | $`\to `$ | $`\mu \varphi _1+1-\varphi _2=\varphi _3^{\prime }`$ |
| $`\varphi _2`$ | $`\to `$ | $`\varphi _2`$ | $`\to `$ | $`1=\varphi _0^{\prime }`$ |
| $`\varphi _3`$ | $`\to `$ | $`\varphi _3`$ | $`\to `$ | $`\varphi _3+1-\varphi _2=\varphi _1^{\prime }`$ |

would be $`T_2(\vec{\varphi })`$, $$\left(\begin{array}{c}\varphi _1^{\prime }\\ \varphi _2^{\prime }\\ \varphi _3^{\prime }\end{array}\right)=\left(\begin{array}{c}1\\ 1\\ 1\end{array}\right)+\underset{\mathbb{M}_2}{\underbrace{\left(\begin{array}{ccc}0&-1&1\\ 0&-1&0\\ \mu &-1&0\end{array}\right)}}\left(\begin{array}{c}\varphi _1\\ \varphi _2\\ \varphi _3\end{array}\right)$$ (3) and so on. Once we have defined all possible firing maps for a given number of oscillators, we can proceed to deal with the attractors or fixed points of the system dynamics. As has been proved previously, these fixed points must be cycles of $`N+1`$ firings. We define a cycle as a sequence of consecutive firings in which each oscillator fires once and only once. Mathematically, each cycle is described by means of a return map. The return map is the transformation that gives the evolution of $`\vec{\varphi }`$ during a cycle and is the composition of all firing maps involved in the firing sequence of that cycle, $$\vec{\varphi }^{\,\prime }=T_{c_1}\circ T_{c_2}\circ \mathrm{\cdots }\circ T_{c_{N+1}}(\vec{\varphi })\equiv \vec{R}_c+\mathbb{M}_c\vec{\varphi },$$ (4) where $`T_{c_i}\circ T_{c_j}(\varphi )`$ is the usual composition operation $`T_{c_i}(T_{c_j}(\varphi ))`$, and $$\vec{R}_c=\vec{1}+\sum _{i=c_1}^{c_N}\left(\prod _{j=c_1}^{i}\mathbb{M}_j\right)\vec{1}\qquad \text{and}\qquad \mathbb{M}_c=\prod _{j=c_1}^{c_{N+1}}\mathbb{M}_j.$$ Note that not all possible combinations of firing maps are allowed, just those whose indices $`c_i`$ sum to $`p(N+1)`$ without any partial sum equal to $`q(N+1)`$, where $`p>q`$ are positive integers. As all firing maps are linear transformations, return maps are also linear. There are $`N!`$ possible cycles in the $`N+1`$ oscillator case (all permutations of firing sequences with the initial firing oscillator $`\varphi _0`$ fixed). Following our previous example, for the four-oscillator case all possible firing sequences and their associated return maps are

$`A:0,1,2,3\to T_1\circ T_1\circ T_1\circ T_1`$
$`B:0,1,3,2\to T_2\circ T_3\circ T_2\circ T_1`$
$`C:0,2,1,3\to T_1\circ T_2\circ T_3\circ T_2`$
$`D:0,2,3,1\to T_3\circ T_2\circ T_1\circ T_2`$
$`E:0,3,1,2\to T_2\circ T_1\circ T_2\circ T_3`$
$`F:0,3,2,1\to T_3\circ T_3\circ T_3\circ T_3`$

Now, in order to find the attractors of the dynamics, we must solve the fixed point equation $$\vec{\varphi }_c^{\,*}=\vec{R}_c+\mathbb{M}_c\vec{\varphi }_c^{\,*},$$ (5) for every cycle $`c`$. Formally, $$\vec{\varphi }_c^{\,*}=(\mathbb{I}-\mathbb{M}_c)^{-1}\vec{R}_c.$$ (6) As was shown previously, there are $`N`$ different stable solutions to the whole set of fixed point equations. Their stability is assured by the fact that $`\epsilon <0`$, since this guarantees that all eigenvalues of $`\mathbb{M}_c`$ lie inside the unit circle for all cycles $`c`$.
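The matrices $`\mathbb{M}_k`$ can be assembled mechanically from the elementary steps (fire, kick, drive, relabel). A sketch under my reading of the relabeling convention, which reproduces the printed $`\mathbb{M}_2`$ of Eq. (3); the construction is a paraphrase of the text, not code from the paper:

```python
import numpy as np

def firing_map(k, n_total, eps):
    """Matrix M_k of the affine firing map phi' = 1 + M_k phi (Eq. 2).
    phi = (phi_1, ..., phi_N); phi_0 = 1 is implicit, and k is the unit
    that fires next.  Steps: unit 0 fires and resets, unit 1 is scaled
    by mu = 1 + eps, all units are driven by the gap left to unit k,
    and indices are relabeled so the new firer becomes unit 0."""
    N = n_total - 1
    mu = 1.0 + eps
    # post-firing phase of old unit j, as a row over (phi_1, ..., phi_N)
    post = np.zeros((n_total, N))
    post[1, 0] = mu                       # the kicked neighbour
    for j in range(2, n_total):
        post[j, j - 1] = 1.0              # untouched units (old unit 0 -> 0)
    drive = -post[k]                      # driving adds 1 - (phase of unit k)
    M = np.empty((N, N))
    for new in range(1, N + 1):           # relabel: new i <- old (k+i) mod n
        M[new - 1] = post[(k + new) % n_total] + drive
    return M

print(firing_map(2, 4, eps=-0.2))   # reproduces the M_2 of Eq. (3), mu = 0.8
# [[ 0.  -1.   1. ]
#  [ 0.  -1.   0. ]
#  [ 0.8 -1.   0. ]]
```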
In our four-oscillator example these solutions are $`\vec{\varphi }_A^{\,*}=(1,\frac{3}{4+3\epsilon },\frac{2}{4+3\epsilon },\frac{1}{4+3\epsilon })`$, $`\vec{\varphi }_B^{\,*}=\vec{\varphi }_C^{\,*}=\vec{\varphi }_D^{\,*}=\vec{\varphi }_E^{\,*}=(1,\frac{1}{2+\epsilon },1,\frac{1}{2+\epsilon })`$, and $`\vec{\varphi }_F^{\,*}=(1,\frac{1}{4+\epsilon },\frac{2+\epsilon }{4+\epsilon },\frac{3+\epsilon }{4+\epsilon })`$, which are a kind of four-oscillator traveling wave, chessboard, and inverse traveling wave structure, respectively. From now on we will label such solutions with the index $`m`$ ($`m=1,\mathrm{},N`$), since their first component always satisfies $$\varphi _1^{*}=\frac{m}{N+1+m\epsilon }.$$ (7) Therefore, in the example, we relabel pattern $`\vec{\varphi }_A^{\,*}`$ as $`m=3`$; patterns $`\vec{\varphi }_B^{\,*}`$, $`\vec{\varphi }_C^{\,*}`$, $`\vec{\varphi }_D^{\,*}`$, $`\vec{\varphi }_E^{\,*}`$ as $`m=2`$; and $`\vec{\varphi }_F^{\,*}`$ as $`m=1`$. Since there are $`N!`$ possible cycles and only $`N`$ solutions of the form (7), some fixed points or patterns will appear more than once, so we shall use $`C(N+1,m)`$ to characterize these degeneracies. In the example, the values of the degeneracies are $`C(4,1)=C(4,3)=1`$ and $`C(4,2)=4`$. In general, patterns which are solutions of a cycle consisting of the iterative application of the same firing map (like A and F in our example) have no periodicities, whereas those which are solutions of mixtures of different firing maps (B, C, D and E) have a periodic structure that is also a solution of Eq. (7) for a case with fewer oscillators. In Fig. 1 we can visualize the solutions for $`N+1=2,3`$ and $`4`$ oscillators and see that solution $`m=2`$ for the four-oscillator case is a periodic composition of solution $`m=1`$ for the two-oscillator case.

## III Pattern properties

As we have seen, the stability of all the pattern solutions of Eq. (6) is guaranteed by the fact that $`\epsilon <0`$, but the existence of such solutions is not ensured. In fact, for small values of the coupling strength $`|\epsilon |`$ all patterns do exist but, as we increase it, some patterns disappear. The reason is that the solution loses its physical meaning because $`\varphi _1^{*}>1`$. The first component is always the one that becomes larger than unity first, and this happens, for each $`m`$ and according to Eq. (7), when $$\epsilon <\epsilon _m^{*}=1-\frac{N+1}{m}.$$ (8) Our coupling strength range of interest ends at $`\epsilon =-1`$, since for $`\epsilon \le -1`$ we always find the same pathological dynamics, which does not have any physical or biological sense; realistic couplings never reach such high values. Therefore, as $`\epsilon `$ runs from $`0`$ to $`-1`$, all patterns whose $`m`$ satisfies $`m>\frac{N+1}{2}`$ disappear. There is another interesting pattern property, which has to do with the calculation of the pattern degeneracy $`C(N+1,m)`$. In principle, to calculate this degeneracy we should solve the fixed point equation (6) for all possible cycles and count how many of them lead to the same pattern. Although for a few oscillators the problem is quite straightforward, as we deal with higher and higher numbers of oscillators the number of cycles increases (it grows as $`N!`$) and solving Eq. (6) becomes more difficult.
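For small systems the direct route is nonetheless easy to carry out numerically. A sketch reusing `firing_map` from the previous block (so it is not standalone); cycles are given in order of application, e.g. cycle B, written $`T_2\circ T_3\circ T_2\circ T_1`$ above, applies $`T_1`$ first:

```python
# assumes numpy as np and firing_map() from the previous sketch
def cycle_fixed_point(applied, n_total, eps):
    """Compose the firing maps of one cycle, first-applied first, and
    solve phi* = R_c + M_c phi* (Eqs. (4)-(6))."""
    N = n_total - 1
    R, M = np.zeros(N), np.eye(N)
    for k in applied:
        Mk = firing_map(k, n_total, eps)
        R = np.ones(N) + Mk @ R            # affine composition
        M = Mk @ M
    return np.linalg.solve(np.eye(N) - M, R)   # Eq. (6)

eps = -0.2
for name, applied in [("A", [1, 1, 1, 1]),     # T1 o T1 o T1 o T1
                      ("B", [1, 2, 3, 2]),     # T2 o T3 o T2 o T1
                      ("F", [3, 3, 3, 3])]:    # T3 o T3 o T3 o T3
    print(name, np.round(cycle_fixed_point(applied, 4, eps), 4))
# phi_1* matches m/(4 + m*eps) with m = 3, 2, 1 respectively (Eq. (7))
```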
Fortunately, there is another way of calculating $`C(N+1,m)`$ which reduces the problem to a combinatorial question. Let us show it through an example. In the previous four-oscillator case, if we count, for each firing sequence, the number of oscillators which have received the pulse before firing, we can easily see that this number is the same as the value of $`m`$:

$`A:0,\overline{1},\overline{2},\overline{3}\qquad m=3`$
$`B:0,\overline{1},3,\overline{2}\qquad m=2`$
$`C:0,2,\overline{1},\overline{3}\qquad m=2`$
$`D:0,2,\overline{3},\overline{1}\qquad m=2`$
$`E:0,3,\overline{1},\overline{2}\qquad m=2`$
$`F:0,3,2,\overline{1}\qquad m=1`$

Here an upper bar means that the oscillator has already received a pulse during the cycle. The point is that every pattern $`m`$ turns out to correspond to firing sequences in which exactly $`m`$ oscillators, when they fire, have already received a pulse from their leftmost neighbor. This property (which we have checked for several values of $`N+1`$) therefore allows us to associate every cycle with the pattern it leads to, just by counting such firings. Now, calculating $`C(N+1,m)`$ becomes a straightforward matter. In Table I we have computed $`C(N+1,m)`$ for several values of $`N+1`$. Apart from brute-force counting, the degeneracy distribution $`C(N+1,m)`$ can also be determined from the following recursion relation, $$C(N+1,m)=mC(N,m)+(N+1-m)C(N,m-1),$$ (10) for $`2\le m\le N-1`$. This recursion relation is closed by $$C(N+1,1)=C(N+1,N)=1,$$ (11) which correspond to the firing sequences $`0,N,(N-1),\mathrm{},2,\overline{1}`$ and $`0,\overline{1},\overline{2},\mathrm{},\overline{(N-1)},\overline{N}`$, respectively. From the previous relations one can deduce by induction the symmetry of the distribution with respect to its extremes at $`m=1`$ and $`m=N`$, $$C(N+1,m)=C(N+1,N+1-m),$$ (12) and the sum rule $$\sum _mC(N+1,m)=N!.$$ (13) Another interesting property is the period $`\mathrm{\Delta }_m^{N+1}`$ of each spatio-temporal pattern $`m`$. Since all oscillators are in a phase-locked state, they must oscillate with the same period. Then, as the intrinsic period of each oscillator is one, and any oscillator has a phase equal to $`\varphi _1^{*}`$ when it receives the delaying pulse from its neighbor, one can easily see that the effective period is $$\mathrm{\Delta }_m^{N+1}=1-\epsilon \varphi _1^{*}=\frac{N+1}{N+1+m\epsilon }.$$ (14) Therefore, the larger the value of $`m`$, the longer the period of its associated pattern. It is important to notice that we have not fixed the value of these periods (each pattern has its own, which is different from the others), since some authors fix all periods equal to some constant and use this as a condition to find the structures.

## IV Pattern selection

Once we have characterized all the spatio-temporal patterns, we proceed to find a general formula which gives an estimation of the probability of each pattern being selected or, in other words, of the volume of its basin of attraction. In order to achieve this objective, we should understand the mechanism which leads to the selection of a certain spatio-temporal structure and how it is modified as the parameters of the model ($`\epsilon `$ in our case) change.
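Since the estimate developed below is built on the degeneracies, the recursion (10) with closure (11) is worth tabulating; a minimal sketch, also checking the sum rule (13):

```python
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def C(n_total, m):
    """Degeneracy C(N+1, m) from the recursion (10) and closure (11)."""
    if m < 1 or m > n_total - 1:
        return 0
    if m == 1 or m == n_total - 1:
        return 1
    return m * C(n_total - 1, m) + (n_total - m) * C(n_total - 1, m - 1)

for n in (4, 6, 10):
    row = [C(n, m) for m in range(1, n)]
    print(n, row, sum(row) == factorial(n - 1))   # sum rule, Eq. (13)
# 4 [1, 4, 1] True   <- the C(4, m) values quoted in the text
```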
There is an easy and straightforward way to capture the essential features of this mechanism by assuming that the probability that a given oscillator fires next is, basically, proportional to its phase (that is, if it has a phase slightly below $`1`$ it has a high probability of being the next firing oscillator, whereas if it has a smaller phase it will rarely fire next). Imagine the phases of all oscillators randomly distributed over the interval $`(0,1)`$. Then we let the system evolve until one of the oscillators reaches the phase $`\varphi _i=1`$ and emits a pulse that is received by its rightmost neighbor, which lowers its phase by an amount $`|\epsilon |\varphi _{i+1}`$. Now we assume that all phases are again randomly distributed over $`(0,1)`$, except for the one which received the pulse, whose phase is distributed over $`(0,1+\epsilon )`$. In this way we get rid of memory effects (we know the oscillator that has fired should now have a phase equal to zero) and just keep track of whether or not each oscillator has received a pulse. The point is that, under these conditions, the probability that an oscillator which has not yet received a pulse fires is some constant while, for those which have, it is this constant times the factor $`(1+\epsilon )`$. We can then characterize the probability of a given cycle just by counting how many oscillators fire having previously received a pulse during that cycle. Basically, this probability is proportional to $`(1+\epsilon )^n`$, where $`n`$ stands for the number of oscillators which fire having already received a pulse (the product of all the constant terms will be absorbed in a normalization factor). This approach, in which we treat all firings as almost-independent events, can be viewed as a kind of mean-field approximation. Then, as has been shown before, since cycles leading to the same pattern $`m`$ always have exactly $`m`$ oscillators that fire having received the interacting pulse, we can give an estimation of the probability of selection of pattern $`m`$ in the $`N+1`$ oscillator case, $$p_m^{N+1}(\epsilon )\approx \mathcal{N}(\epsilon )\,C(N+1,m)(1+\epsilon )^m.$$ (15) Here $`\mathcal{N}(\epsilon )`$ is chosen so that the summation of the probabilities over $`m`$ gives 1, $$\sum _mp_m^{N+1}(\epsilon )=1.$$ (16) In the limit of small coupling strength, $`\epsilon \to 0`$, which is the most interesting case for the majority of physical and biological systems, one can assume that the interaction plays almost no role when pattern selection takes place. That is, the fact that an oscillator has received the pulse from its neighbor does not lower its probability of firing, since the pulse does not appreciably modify its phase. Then we can consider that all cycles have approximately the same probability of being selected, $`(1+\epsilon )^m\approx 1`$, and only the pattern degeneracy has to be considered to get a good estimation of $`p_m^{N+1}`$, $$p_m^{N+1}\approx \frac{C(N+1,m)}{N!}.$$ (17) The dominant pattern, that is, the one which has the largest probability of being selected, coincides with the mean value of $`m`$ (due to the symmetric behavior of $`C(N+1,m)`$), $$\langle m\rangle _{N+1}=\sum _mm\frac{C(N+1,m)}{N!}=\frac{N+1}{2}.$$ (18) For an odd number of oscillators $`\langle m\rangle _{N+1}`$ is not an integer, and we have a competition between the two closest patterns, $`m=N/2`$ and $`m=(N+2)/2`$. Recall that the most probable patterns turn out to be the ones with the “shortest wavelengths”, a fact that was already reported in simulations of this sort of system.
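The mean-field estimate of Eq. (15) is straightforward to tabulate. A sketch reusing the `C(n_total, m)` helper from the earlier block (the values of $`\epsilon `$ are purely illustrative):

```python
# assumes C() from the degeneracy sketch above
import numpy as np

def p_selection(n_total, eps):
    """Mean-field estimate of Eq. (15): p_m proportional to
    C(N+1, m) (1 + eps)^m, normalized as in Eq. (16)."""
    m = np.arange(1, n_total)
    w = np.array([C(n_total, mm) for mm in m], dtype=float) * (1 + eps) ** m
    return m, w / w.sum()

for eps in (-0.05, -0.4):
    m, p = p_selection(10, eps)
    print(eps, np.round(p, 3))
# small |eps|: p_m tracks C(10, m)/9!; larger |eps| shifts weight to small m
```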
In Figs. 2 and 3 we check this new approximation for the $`N+1=10`$ and $`9`$ cases and see that the predicted results are in good agreement with the simulation data. There is also the interesting question of how this probability distribution is modified when the number of oscillators increases. In Fig. 4 we show $`p_m`$ for different values of $`N+1`$. Since there are more possible values of $`m`$ available as we increase $`N+1`$, $`p_m^{N+1}`$ diminishes. The distribution also gets narrower as we increase $`N+1`$, and this becomes clear when one studies the variance of $`p_m`$. It can be found that $$\langle m^2\rangle _{N+1}=\sum _mm^2\frac{C(N+1,m)}{N!}=\frac{(N+1)^2}{4}+\frac{N+1}{12}.$$ (20) We could not prove this without an explicit expression for $`C(N+1,m)`$, but we have checked it for $`N`$ up to $`170`$. Therefore $$\sigma _{N+1}^2=\frac{N+1}{12}=\frac{\langle m\rangle _{N+1}}{6}.$$ (21) It turns out that for a large number of oscillators almost all initial conditions lead to a pattern whose $`m`$ falls approximately in the interval $`\langle m\rangle _{N+1}\pm \sqrt{\langle m\rangle _{N+1}}`$. In order to compare distributions for different numbers of oscillators we have to normalize $`m`$ by dividing by $`N+1`$. In that case one observes that $`\sigma _{N+1}\propto 1/\sqrt{N+1}`$, so that as we increase $`N+1`$ the spread of $`p_m^{N+1}`$ diminishes and the distribution gets sharper. As Eq. (15) does not take into account the disappearance of the different patterns $`m`$ at the different values of $`\epsilon _m^{*}`$ predicted by Eq. (8), it cannot give a good quantitative estimation of pattern selection for higher coupling values. Nevertheless, we can expand Eq. (15) to leading order in $`\epsilon `$. For small $`\epsilon `$, $`p_m^{N+1}`$ is approximated by $$p_m^{N+1}\approx \frac{C(N+1,m)}{N!}\left(1+\left(m-\frac{N+1}{2}\right)\epsilon \right).$$ (22) In Fig. 5 we compare this approximation with simulated data; the slopes near $`\epsilon =0`$ agree with Eq. (22). In our simulations we calculate the probability of each pattern being selected just by counting how many realizations (with $`\varphi _0=1`$ and the rest of the oscillators with random initial conditions) lead to each pattern $`m`$, divided by the total number of realizations. Although we only have a good quantitative estimation of $`p_m^{N+1}`$ for small values of $`\epsilon `$, Eq. (15) captures the two basic mechanisms responsible for pattern selection. On the one hand, it is clear that for higher values of the coupling strength $`|\epsilon |`$, when an oscillator receives a pulse its phase drops to almost zero and, consequently, so does its firing probability. Therefore the pattern selection probability $`p_m^{N+1}(\epsilon )`$ is strongly controlled by the number of oscillators which have to fire having already received a pulse, that is, by the probabilistic factor $`(1+\epsilon )^m`$. As a consequence, the larger $`m`$ is, the sooner $`p_m^{N+1}`$ begins to decrease as $`|\epsilon |`$ increases. On the other hand, for small values of the coupling strength, the interaction plays almost no role and $`p_m^{N+1}(\epsilon )`$ is dominated by the degeneracy factor $`C(N+1,m)`$. Therefore the $`p_m^{N+1}(\epsilon )`$ for the different values of $`m`$ are basically ordered as $`C(N+1,m)`$. In Figs. 6, 7 and 8 we show simulation results for $`p_m^{N+1}(\epsilon )`$ for different numbers of oscillators.
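Both moment identities, Eqs. (18) and (20), can be confirmed directly from the recursion, echoing the numerical check quoted above; a sketch reusing `C()` from the degeneracy block:

```python
# assumes C() from the degeneracy sketch above
from math import factorial

def first_two_moments(n_total):
    """<m> and <m^2> of the distribution C(N+1, m)/N!."""
    norm = factorial(n_total - 1)
    mean = sum(m * C(n_total, m) for m in range(1, n_total)) / norm
    mean2 = sum(m * m * C(n_total, m) for m in range(1, n_total)) / norm
    return mean, mean2

for n in (8, 32, 128):
    mean, mean2 = first_two_moments(n)
    print(n, mean == n / 2,                          # Eq. (18)
          abs(mean2 - (n**2 / 4 + n / 12)) < 1e-9)   # Eq. (20)
```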
## V Conclusions

In this paper we have studied some properties of the spatio-temporal patterns that appear in a ring of pulse-coupled oscillators with inhibitory interactions. We have focused our attention on estimating the probability of selecting a certain pattern under arbitrary initial conditions and have shown the two basic mechanisms responsible for it: the degeneracy distribution $`C(N+1,m)`$, for small values of $`|\epsilon |`$, and $`m`$, the number of oscillators that fire having already received a pulse, for higher values of $`|\epsilon |`$. According to this, the probabilities of selecting the different patterns $`m`$ start out distributed following the degeneracy distribution $`C(N+1,m)`$ and, as $`\epsilon `$ decreases, these probabilities diminish in a hierarchical way: the larger the value of $`m`$, the sooner its selection probability decreases, so that only patterns with smaller $`m`$ survive for higher values of $`|\epsilon |`$. Moreover, some of the structures disappear, at the different values of $`\epsilon _m^{*}`$, during this process. We have found an approximate formula for $`p_m^{N+1}(\epsilon )`$ which takes all these mechanisms into account and gives a quantitative estimation of the different selection probabilities for small $`\epsilon `$. The estimation of the volume of the basin of attraction of each spatio-temporal pattern $`m`$ also gives us an idea of the stability of the different structures with respect to additive noise fluctuations (for instance, we can add some random quantity $`\eta `$ to all phases after each firing event, or a continuous-time $`\eta (t)`$ to the driving). Simulations of arrays of noisy pulse-coupled oscillators showed that our most probable patterns were also the most stable. The present paper only concerns spatio-temporal pattern formation in a ring of oscillators; nevertheless, all results are trivially generalized to bidirectional couplings. Although the question of what happens in higher-dimensional lattices remains open, some simulation results in 2d showed that almost all realizations lead to a chessboard pattern, in analogy with our results on the ring. This makes us believe that we have captured the basic features of the problem in our 1d model.

## Acknowledgments

The authors are indebted to C. J. Pérez and A. Arenas for very fruitful discussions. They also acknowledge extremely constructive suggestions from an anonymous referee. This work has been supported by DGICYT of the Spanish Government through grant PB96-0168 and EU TMR Grant ERBFMRXCT980183. X.G. also acknowledges financial support from the Generalitat de Catalunya.
no-problem/9906/astro-ph9906358.html
# Helical motions in the jet of blazar 1156+295

## 1 Introduction

The blazar 1156+295 ($`z=0.729`$; Véron-Cetty & Véron 1998) is among the most active of the highly polarized (HPQ) and optically violent variable (OVV) sources (Glassgold et al. 1983; Wills et al. 1983, 1992). In its active phase, this source has shown flux-density fluctuations at optical wavelengths with an amplitude of $`\sim `$5–7% on time scales of 0.5 hour. Both the position angle and the fraction of optical polarization also vary dramatically. At cm wavelengths, $`10\%`$ variations in total flux density have been observed on time scales of less than several days. Finally, 1156+295 has flared in 100 MeV $`\gamma `$-rays at least three times since 1992, while the quiescent $`\gamma `$-ray emission remains undetected (Mukherjee et al. 1997). High-resolution radio imaging has revealed arcsecond and milliarcsecond structure consistent with the activity described above as well as with the standard blazar model of a jet aligned nearly along the line of sight. On the arcsecond scale, the 1.5 GHz VLA image shows a symmetrical structure in the north-south direction (Antonucci & Ulvestad 1985). The MERLIN image at 1.6 GHz and the VLA image at 5 GHz (McHardy et al. 1990) show extended emission on the arcsecond scale and a knotty jet of length $`2\text{.}^{\prime \prime }5`$ at p.a. $`19^{\circ }`$. The northern end of the jet has a hotspot, at which point it turns through about $`90^{\circ }`$ to the east, ending in a diffuse region of size $`1\text{.}^{\prime \prime }5`$. There is a region of similar diffuse emission to the south, about $`2^{\prime \prime }`$ from the core, at p.a. $`200^{\circ }`$. From the maps, 1156+295 appears to resemble a classical double radio source seen end-on, with its northern jet relativistically beamed towards us, and hence dominating over any southern jet, while the diffuse emission is not beamed and appears symmetric. Based on VLBI observations at three frequencies, McHardy et al. (1990, 1993) estimated an apparent superluminal velocity of $`26h^{-1}c`$. This is much larger than any proper motion reliably found for any other source: $`2.5`$ times greater than that of the next fastest object in its redshift bin on the $`\mu `$–$`z`$ diagram (Vermeulen & Cohen 1994). Such a high apparent proper motion could in principle be a projection effect: when the jet axis is aligned near the line of sight, small changes in the direction of the jet axis can produce large changes in the apparent proper motion. Lower apparent velocities, in the range of 3.5–$`8.8h^{-1}c`$, were reported on the basis of ten epochs of geodetic observations at 8.4 and 2.3 GHz (Piner & Kingham 1997). Arguments against highly beamed synchrotron emission from the milliarcsecond core (and thus not necessarily a small angle between the very inner part of the jet and the viewing direction) have been reported on the basis of VSOP Space VLBI observations at 1.6 GHz with an angular resolution of 4.4 $`\times `$ 1.4 mas (Hirabayashi et al. 1998). In this paper, we discuss observations of the blazar 1156+295 at 5 GHz with the EVN + MERLIN (February 1997) in comparison with an earlier VLBA observation. Throughout this paper the values $`H_0=100h`$ km s$`^{-1}`$ Mpc$`^{-1}`$ and $`q_0=0.5`$, and the convention $`S\propto \nu ^\alpha `$, will be used.

## 2 The observations and data reduction

A full-track 12-hour EVN + MERLIN observation of the blazar 1156+295 was carried out at 5 GHz in February 1997.
The MERLIN array comprised 6 antennas (Defford, Cambridge, Knockin, Darnhall, Mark 2, and Tabley). The central wavelength was 4994 MHz and the observing bandwidth was 14 MHz. The amplitude calibration was carried out at Jodrell Bank with the calibrator source OQ208. The imaging was done using the NRAO Astronomical Image Processing System (AIPS) package. The EVN array comprised Effelsberg, WSRT, Jodrell Bank (Mark 2), Cambridge, Onsala, Medicina, Torun, Shanghai, Urumqi, and Hartebeesthoek. The data were acquired with the Mk III VLBI recording system in Mode B with an effective bandwidth of 28 MHz and correlated at the Max-Planck-Institut für Radioastronomie in Bonn. No fringes were found on the baselines to Urumqi. The EVN data were calibrated and fringe-fitted using the NRAO AIPS package. The initial amplitude calibration was accomplished using the system temperature measurements made during the observations and the a priori station gain curves. The snapshot-mode observation of 1156+295 with the VLBA at 5 GHz was made in June 1996 as part of the VSOP pre-launch VLBA Survey (Edwards & Fomalont 1998). Eight intermediate frequency channels, each 8 MHz wide, were recorded for a total bandwidth of 64 MHz. The data were correlated at the NRAO VLBA correlator (Socorro, NM, USA). The fringe-fitted and calibrated VLBA data were kindly made available to us by the VSOP/VLBA pre-launch survey group (Edwards & Fomalont 1998). The post-processing, including editing, phase and amplitude self-calibration, imaging and model fitting, was performed within AIPS for both the EVN and VLBA data sets.

## 3 Results

Our MERLIN image shows traces of a straight jet at a position angle of $`20^{\circ }`$ and $`2^{\prime \prime }`$ in length (Hong et al., in preparation). This is consistent with the lower-resolution VLA and MERLIN images reported by McHardy et al. (1990). There appear to be several regularly spaced knots within 1 arcsecond of the core, possibly indicating quasi-periodic activity that has propagated from the base of the jet. Because of the missing Urumqi data, the u-v coverage of the EVN data has a large gap between 25 and 90 M$`\lambda `$. Therefore, we first imaged using only the data from the European telescopes. The image (Fig. 1) reveals a jet that bends from North to North-East before turning back to the North, presumably to meet the arcsecond-scale jet. The VLBA image presents a similar structural pattern (Fig. 1). It shows more clearly that the jet is bent from the North to the East and then back to the North-East. A higher-resolution EVN image was obtained by using all the data, including the data from Hartebeesthoek and Shanghai (Hong et al., in preparation). The results show that a new jet component starts at about p.a. $`50^{\circ }`$ before bending to the North within 2 milliarcseconds of the core, which is consistent with the 8 GHz images (Piner & Kingham 1997). In Piner and Kingham's images, one can also note that the jet components start at a position angle of $`45^{\circ }`$, then move clockwise to the North before bending sharply to the East.

## 4 Discussion

We note that the peak flux density varies from 2.09 Jy/beam in the VLBA map (epoch 1996.43) to 1.2 Jy/beam in the later EVN map (epoch 1997.14), which is clear evidence of a decrease in the core brightness (the VLBA synthesized beam of Fig. 1 is smaller than that of the EVN, Fig. 1). From the data of the University of Michigan Radio Astronomy Observatory, the total flux densities at 14.5, 8.0, and 4.8 GHz all decreased from mid-1996 to early 1997 (UMRAO database 1998). In 15 GHz VLBA maps, the peak flux density decreased from 1.95 Jy/beam (16 May 1996) to 0.67 Jy/beam (13 March 1997) (Kellermann et al. 1998). All of this indicates clearly that the flare of 1156+295 was due to an outburst of the central core component, at which time a new jet component was ejected. Helical jets have been proposed to explain the bimodal distribution of the difference between arcsecond and milliarcsecond structural axes observed for core-dominated radio sources (Conway & Murphy 1993). The helical pattern could result from precession of the base of the jet (e.g. Linfield 1981) or from fluid-dynamical instabilities in the interaction between the jet material and the surrounding medium (Hardee 1987). The structural oscillations on mas scales can also be explained by the orbital motion of a binary black hole (e.g. Roos et al. 1993 for the case of 1928+738). The jet of 1156+295 has a structural oscillation on the mas scale and a large $`\mathrm{\Delta }`$p.a. between its pc-scale and kpc-scale directions, which may indicate that the oscillation of the jet is a projection effect of a helical jet. The observed properties of the jet could be explained by two effects: precession of the spin axis of the black hole emitting the jet, and the orbital motion of a binary black hole. In this case, the jet of 1156+295 could be ejected in a direction very closely aligned with the line of sight. The jet would then have high Doppler boosting but would exhibit a relatively low apparent proper motion. As the direction of the jet curves away from the line of sight, the Doppler factor decreases (and thus the flux density decreases as well), but the apparent proper-motion velocity increases. As the bright radio component moves outward, it reaches the viewing angle of maximum apparent transverse velocity, while its Doppler boosting factor continues to decrease monotonically. This qualitative model can explain the evolution of the outburst in the core (McHardy et al. 1990) and the apparent initial acceleration of the proper motion of the jet components (Piner & Kingham 1997). It is also generally consistent with the $`\gamma `$-ray observations in the GeV band: the source demonstrated several short active periods at high $`\gamma `$-ray energies but remained in a quiescent state most of the time between flares (Mukherjee et al. 1997). The model suggests that the $`\gamma `$-ray flares may precede radio outbursts, since the former require higher Doppler boosting.
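The geometry behind this argument is the standard relativistic-beaming tradeoff. The formulas below are the textbook relations for apparent transverse speed and Doppler factor; they are not written out in the paper, and the Lorentz factor $`\gamma =15`$ is an arbitrary illustrative choice:

```python
import numpy as np

def beta_apparent(beta, theta):
    """Apparent transverse speed, in units of c."""
    return beta * np.sin(theta) / (1.0 - beta * np.cos(theta))

def doppler_factor(beta, theta):
    """Relativistic Doppler factor delta = 1 / (gamma (1 - beta cos theta))."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta)))

gamma = 15.0                          # illustrative bulk Lorentz factor
beta = np.sqrt(1.0 - 1.0 / gamma**2)
for deg in (1.0, 3.8, 10.0):          # 3.8 deg is roughly 1/gamma radians
    th = np.radians(deg)
    print(f"theta = {deg:4.1f} deg:  beta_app = {beta_apparent(beta, th):5.2f}"
          f"   delta = {doppler_factor(beta, th):5.2f}")
# beta_app peaks near theta = 1/gamma, while delta only decreases with theta
```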
From the data of the University of Michigan Radio Astronomy Observatory, the total flux densities at 14.5, 8.0, and 4.8 GHz all decreased from mid-1996 to early 1997 (UMRAO database, 1998). In 15 GHz VLBA maps, the peak flux density decreased from 1.95 Jy/beam (16 May 1996) to 0.67 Jy/beam (13 March 1997) (Kellermann et al., 1998). All these indicate clearly that the flare of 1156+295 was due to an outburst of the central core component. At that time a new jet component was ejected. Helical jets have been proposed to explain the bi-modal distribution of the difference between arcsecond and milliarcsecond structural axes observed for core-dominated radio sources (Conway & Murphy, 1993). The helical pattern could result from precession of the base of the jet (e.g., Linfield 1981) or from fluid-dynamical instabilities in the interaction between the jet material and the surrounding medium (Hardee, 1987). Structural oscillations at mas scales can also be explained by the orbital motion of a binary black hole (e.g., Roos et al. 1993 for the case of 1928+738). The jet of 1156+295 has a structural oscillation on the mas scale and a large $`\mathrm{\Delta }`$p.a. between its pc-scale and kpc-scale directions, which may indicate that the oscillation of the jet is a projection effect of a helical jet. The observed properties of the jet could be explained by two effects: the precession of the spin axis of the black hole emitting the jet and the orbital motion of a binary black hole. In this case, the jet of 1156+295 could be ejected in a direction very closely aligned with the line of sight. The jet would then be strongly Doppler boosted but would show a relatively low apparent proper motion. As the direction of the jet curves away from the line of sight, the Doppler factor will decrease (and thus the flux density will decrease as well), but the apparent proper motion will increase. As the bright radio component moves outward, it will reach the viewing angle of maximum apparent transverse velocity, while its Doppler boosting factor will continue to decrease monotonically (a numerical sketch of these standard beaming relations is appended at the end of the paper). This qualitative model can explain the evolution of the outburst in the core (McHardy et al., 1990) and the apparent initial acceleration of the proper motion of the jet components (Piner & Kingham, 1997). It is also generally consistent with the $`\gamma `$-ray observations at the GeV band: the source demonstrated several short active periods at the high-energy $`\gamma `$-ray band but remained in a quiescent state most of the time between flares (Mukherjee et al., 1997). The model suggests that the $`\gamma `$-ray flares may precede radio outbursts, since the former require higher Doppler boosting.

Acknowledgements. This research was supported by the National Science Foundation and the Pan Deng Plan of China. XYH thanks JIVE for the hospitality during his visit in 1998. LIG acknowledges partial support from the European Commission TMR programme, Access to Large-Scale Facilities, under contract ERBFMGECT950012. The authors are grateful to the staff of the EVN, MERLIN and VLBA for support of the observing projects. The authors express their gratitude to the team of the VSOP/VLBA pre-launch survey, particularly Ed Fomalont and Phil Edwards, for permission to use their data. This research has made use of data from the University of Michigan Radio Astronomy Observatory, which is supported by the National Science Foundation and by funds from the University of Michigan.
The National Radio Astronomy Observatory is operated by Associated Universities, Inc. under a Cooperative Agreement with the National Science Foundation.
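As a purely illustrative supplement to the qualitative model of Section 4, the following minimal Python sketch (added here; it is not part of the original analysis, and the Lorentz factor $`\mathrm{\Gamma }=15`$ and the angle grid are assumed round values) evaluates the standard beaming relations $`\beta _{app}=\beta \mathrm{sin}\theta /(1-\beta \mathrm{cos}\theta )`$ and $`\delta =1/[\mathrm{\Gamma }(1-\beta \mathrm{cos}\theta )]`$. It shows that as a component swings away from the line of sight the Doppler factor falls monotonically, while the apparent speed first rises to its maximum $`\beta \mathrm{\Gamma }`$ at $`\mathrm{cos}\theta =\beta `$, exactly the behaviour invoked above to reconcile a strongly boosted core with an apparently accelerating jet component.

```python
import numpy as np

# Illustrative Lorentz factor (an assumption, not a fitted value for 1156+295)
gamma = 15.0
beta = np.sqrt(1.0 - 1.0 / gamma**2)

theta = np.radians(np.linspace(0.5, 30.0, 60))   # viewing angles, degrees -> rad

beta_app = beta * np.sin(theta) / (1.0 - beta * np.cos(theta))   # apparent speed / c
doppler = 1.0 / (gamma * (1.0 - beta * np.cos(theta)))           # Doppler factor

i_max = np.argmax(beta_app)
print(f"max beta_app = {beta_app[i_max]:.1f} c near theta = "
      f"{np.degrees(theta[i_max]):.1f} deg (analytic: beta*Gamma = {beta*gamma:.1f} c "
      f"at theta = {np.degrees(np.arccos(beta)):.1f} deg)")

# Doppler factor decreases monotonically with theta, while beta_app first rises:
for i in (0, i_max, len(theta) - 1):
    print(f"theta = {np.degrees(theta[i]):5.1f} deg : "
          f"beta_app = {beta_app[i]:5.1f} c, delta = {doppler[i]:5.1f}")
```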
# HOW PENGUINS STARTED TO FLY The 1999 Sakurai Prize Lecture<sup>1</sup><sup>1</sup>1Talk at the 1999 Centennial Meeting of the American Physical Society, March 20-26, on the occasion of receiving the 1999 Sakurai Prize for Theoretical Particle Physics.

## 1 History of the idea

It was an exciting period, with Quantum Chromodynamics (QCD) emerging as the theory of strong interactions, when three of us – Valya Zakharov, Misha Shifman and I – started in 1973 to work on QCD effects in weak processes. The most dramatic signature of strong interactions in these processes is the so-called $`\mathrm{\Delta }I=1/2`$ rule in nonleptonic weak decays of strange particles. Let me remind you what this rule means by presenting the experimental value for the ratio of the widths of $`K_S\to \pi ^+\pi ^{-}`$ and $`K^+\to \pi ^+\pi ^0`$ decays $$\frac{\mathrm{\Gamma }(K_S\to \pi ^+\pi ^{-})}{\mathrm{\Gamma }(K^+\to \pi ^+\pi ^0)}=450.$$ (1) The isotopic spin $`I`$ of hadronic states is changed by 1/2 in the $`K_S\to \pi ^+\pi ^{-}`$ weak transition and by 3/2 in $`K^+\to \pi ^+\pi ^0`$, so the $`\mathrm{\Delta }I=1/2`$ dominance is evident. What does theory predict? The weak interaction has a current$`\times `$current form. Based on this, Julian Schwinger suggested estimating nonleptonic amplitudes as a product of matrix elements of currents, i.e. as a product of semileptonic amplitudes. This approximation, which implies that the strong interaction does not affect the form of the weak nonleptonic interaction, gives 9/4 for the ratio (1). Thus, the theory is off by a factor of two hundred! We see that strong interactions crucially affect nonleptonic weak transitions. The conceptual explanation was suggested by Kenneth Wilson in the context of the Operator Product Expansion (OPE) which he introduced<sup>2</sup><sup>2</sup>2In Russia we had something of a preview of the OPE ideas worked out by Sasha Patashinsky and Valery Pokrovsky (see their book) in applications to phase transitions and by Sasha Polyakov in field theories.. Assuming scaling for the OPE coefficients at short distances, Wilson related the enhancement of the $`\mathrm{\Delta }I=1/2`$ part of the interaction to its more singular behavior at short distances as compared with the $`\mathrm{\Delta }I=3/2`$ part. In the pre-QCD era it was difficult to test this idea, there being no real theory of the strong interaction. With the advent of QCD all this changed. The phenomenon of asymptotic freedom gives full theoretical control of short distances. Note in passing that the American Physical Society also followed this development: the discoverers of asymptotic freedom, Gross, Politzer, and Wilczek, became recipients of the 1986 Sakurai Prize. In QCD the notion of the OPE in application to nonleptonic weak interactions can be made quantitative: one can calculate the effective Hamiltonian for weak transitions at short distances. The weak interactions are carried by $`W`$ bosons, so the characteristic distances are $`1/m_W`$, with $`m_W=80`$ GeV. The QCD analysis of the effective Hamiltonian at these distances was done in 1974 by Mary K. Gaillard with Ben Lee, and by Guido Altarelli with Luciano Maiani. Asymptotic freedom at short distances means that the strong interaction effects have a logarithmic dependence on momentum rather than the power-like behavior assumed in Wilson's original analysis. The theoretical parameter determining the effect is $`\mathrm{log}(m_W/\mathrm{\Lambda }_{\mathrm{QCD}})`$, where $`\mathrm{\Lambda }_{\mathrm{QCD}}`$ is a hadronic scale.
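To give a feel for the size of this parameter, here is a minimal Python sketch (an illustration added here, using the one-loop running formula quoted later in Eq. (8); the input values $`\alpha _S(m_W)=0.12`$, $`\mathrm{\Lambda }_{\mathrm{QCD}}=0.2`$ GeV and the quark masses are assumed round numbers, not figures from the lecture):

```python
import math

def alpha_s_run(alpha0, mu0, mu, n_flavors):
    """One-loop running between two scales, as in Eq. (8):
    alpha(mu) = alpha(mu0) / (1 + b_N * alpha(mu0)/(2 pi) * ln(mu/mu0))."""
    b = 11.0 - 2.0 / 3.0 * n_flavors
    return alpha0 / (1.0 + b * alpha0 / (2.0 * math.pi) * math.log(mu / mu0))

m_W, m_b, m_c = 80.0, 4.8, 1.4   # GeV; m_b and m_c are illustrative values
a_W = 0.12                        # assumed alpha_s(m_W), roughly its measured value

a_b = alpha_s_run(a_W, m_W, m_b, n_flavors=5)   # run down with 5 active flavors
a_c = alpha_s_run(a_b, m_b, m_c, n_flavors=4)   # then with 4 active flavors

print(f"alpha_s: {a_W:.3f} at m_W -> {a_b:.3f} at m_b -> {a_c:.3f} at m_c")
print(f"ln(m_W/Lambda_QCD) ~ {math.log(80.0 / 0.2):.1f}  (Lambda_QCD ~ 0.2 GeV assumed)")
```

The logarithm is only about 6, and the coupling roughly triples between $`m_W`$ and $`m_c`$; this sets the modest scale of the leading-log effects discussed next.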
These pioneering works brought both good and bad news. The good news was that, indeed, strong interactions at short distances logarithmically enhance the $`\mathrm{\Delta }I=1/2`$ transitions and suppress the $`\mathrm{\Delta }I=3/2`$ ones. The bad news was that quantitatively the effect fell short of an explanation of the ratio (1). Besides $`1/m_W`$ and $`1/\mathrm{\Lambda }_{\mathrm{QCD}}`$ there are scales provided by the masses of the heavy quarks $`t`$, $`b`$ and $`c`$. In 1975 the object of our study was distances of order $`1/m_c`$ – the top and bottom quarks were not yet discovered. The introduction of the top and bottom quarks has practically no effect on the $`\mathrm{\Delta }S=1`$ nonleptonic transitions. However, it is different with charm. At first sight, the $`c`$ quark loops looked to be unimportant for nonleptonic decays of strange particles in view of the famous Glashow-Iliopoulos-Maiani (GIM) cancellation with the corresponding up quark loops. In 1975 the belief that this cancellation produced the suppression factor $$\frac{m_c^2-m_u^2}{m_W^2}$$ was universal, which is the reason why the effect of heavy quarks was overlooked. We found instead that: * The cancellation is distance dependent. Denoting $`r=1/\mu `$, we have $`{\displaystyle \frac{m_c^2-m_u^2}{\mu ^2}},\mathrm{for}m_c\le \mu \le m_W;`$ $`\mathrm{log}{\displaystyle \frac{m_c^2}{\mu ^2}},\mathrm{for}\mu \le m_c.`$ (2) No suppression below $`m_c`$! * Moreover, new operators appearing in the effective Hamiltonian at distances larger than $`1/m_c`$ are qualitatively different – they contain right-handed light quark fields, in contrast to the purely left-handed structures at distances much smaller than $`1/m_c`$ (see the next section for their explicit form). It was surprising that right-handed quarks become strongly involved in weak interactions in the Standard Model with its left-handed weak currents. The right-handed quarks are coupled via gluons, which carry no isospin; for this reason the new operators contribute to $`\mathrm{\Delta }I=1/2`$ transitions only. * For the mechanism we suggested it was crucial that the matrix elements of the novel operators were much larger than those of the purely left-handed operators. The enhancement appears via the ratio $$\frac{m_\pi ^2}{m_u+m_d}\approx 2\mathrm{GeV},$$ which is large due to the small light quark masses. The smallness of these masses was a new idea at the time, advocated in 1974 by Heiri Leutwyler and Murray Gell-Mann. The origin of this large scale is not clear to this day, but it shows that in the world of light hadrons there is, besides the evident momentum scale $`\mathrm{\Lambda }_{\mathrm{QCD}}`$, some other scale, numerically much larger. Thus, the explanation of the $`\mathrm{\Delta }I=1/2`$ enhancement comes from a nontrivial interplay of the OPE, the GIM cancellation, the heavy quark scale, and different intrinsic scales in light hadrons. I will discuss the construction in more detail below. However, I will first digress to explain what the mechanism we suggested has in common with penguins. We had a hard time communicating our idea to the world. Our first publication was a short letter published on July 20, 1975 in the Letters to the Journal of Experimental and Theoretical Physics (JETP Letters). Although an English translation of JETP Letters was available in the West, we sent a more detailed version to Nuclear Physics shortly after. What happened then was a long fight for publication; we answered numerous referee reports – our paper was considered by quite a number of experts.
The main obstacle for the referees was to overcome their conviction about the GIM suppression $`(m_c^2-m_u^2)/m_W^2`$ and to realize that there is no such suppression at distances larger than $`1/m_c`$. Probably our presentation was too concise for them to follow. Eventually the paper was published in the March 1977 issue of Nuclear Physics without any revision, but only after we appealed to David Gross, who was then on the editorial board. The process took more than a year and a half! We were so exhausted by this fight that we decided to send our next publication, containing a detailed theory of nonleptonic decays, to Soviet Physics JETP instead of an international journal. Lev Okun negotiated a special deal for us with the editor Evgenii Lifshitz to submit a paper almost twice the existing size limit of the journal – paper was in short supply, as was almost everything in the USSR. We paid our price: the paper was published in 1977, but even years later many theorists referred to our preprints of the paper without mentioning the journal publication. On a personal note, let me mention that a significant part of the JETP paper was done over the phone line – Valya and Misha worked at ITEP, Moscow, while I was at the Budker Institute of Nuclear Physics, Novosibirsk. The phone connection was not very good and we paid terrific phone bills out of our own pockets. I think that, in recognition of our work in the world at large, it was Mary K. Gaillard who first broke the ice – she mentioned the idea in one of her review talks. Moreover, she collaborated with John Ellis, Dimitri Nanopoulos, and Serge Rudaz in the work in which they applied a similar mechanism to B physics. It is in this work that the mechanism was christened the penguin. How come? Figure 1 shows the key Feynman diagram for the new operators in the form we drew it in our original publications. It does not look at all penguin-like, right? Now look how a similar diagram is drawn in the paper of the four authors mentioned above. You see that some measures were taken to make the diagram reminiscent of a penguin. Let me refer here to John Ellis’ recollections<sup>3</sup><sup>3</sup>3John sent his recollections to Misha Shifman in 1995, who published them in the preface to his book. on how it happened: > “ Mary K, Dimitri and I first got interested in what are now called penguin diagrams while we were studying CP violation in the Standard Model in 1976 … The penguin name came in 1977, as follows. > > In the spring of 1977, Mike Chanowitz, Mary K and I wrote a paper on GUTs predicting the $`b`$ quark mass before it was found. When it was found a few weeks later, Mary K, Dimitri, Serge Rudaz and I immediately started working on its phenomenology. That summer, there was a student at CERN, Melissa Franklin who is now an experimentalist at Harvard. One evening, she, I and Serge went to a pub, and she and I started a game of darts. We made a bet that if I lost I had to put the word penguin into my next paper. She actually left the darts game before the end, and was replaced by Serge, who beat me. Nevertheless, I felt obligated to carry out the conditions of the bet. > > For some time, it was not clear to me how to get the word into this $`b`$ quark paper that we were writing at the time. Then, one evening, after working at CERN, I stopped on my way back to my apartment to visit some friends living in Meyrin, where I smoked some illegal substance.
Later, when I got back to my apartment and continued working on our paper, I had a sudden flash that the famous diagrams look like penguins. So we put the name into our paper, and the rest, as they say, is history.” I learned some extra details of the story from Serge Rudaz, who is now my colleague at Minnesota. He recollects that for him to beat John at darts was a miraculous event. John was a very strong player and had his own set of darts, which he brought to the pub.

## 2 Effective Hamiltonian

Application of Wilson's OPE to nonleptonic $`\mathrm{\Delta }S=1`$ decays means the construction of the effective Hamiltonian $`H^{\mathrm{eff}}`$ as a sum over local operators $`𝒪_i`$, $$H^{\mathrm{eff}}(\mu )=\sqrt{2}G_FV_{us}^{*}V_{ud}\underset{i}{\sum }c_i(\mu )𝒪_i(\mu ).$$ (3) Here $`V_{us}`$, $`V_{ud}`$ are elements of the Cabibbo-Kobayashi-Maskawa mixing matrix, $`\mu `$ denotes the so-called normalization point, which is the inverse of the shortest distance at which the effective Hamiltonian is to be applied, $`𝒪_i`$ are gauge-invariant local operators made out of quark and gluon fields, and $`c_i`$ are OPE coefficients ($`c`$ numbers). At distances larger than $`1/m_c`$, i.e. at $`\mu <m_c`$, only the light $`u,d,s`$ quarks and gluons remain as building material for the $`𝒪_i`$. Operators can be ordered according to their canonical dimension $`d`$, and the corresponding OPE coefficients are proportional to $`(1/m_W)^{d-4}`$. As a selection criterion for operators we used their transformation properties in the limit of chiral SU(2)<sub>L</sub>$`\times `$SU(2)<sub>R</sub> symmetry, picking out operators which are SU(2)<sub>R</sub> singlets. Under this criterion, the operator of lowest dimension ($`d=5`$) is of gluomagnetic type (the magnetic penguin), $$T=i\overline{s}_R\sigma _{\mu \nu }t^ad_LG_{\mu \nu }^a,$$ (4) where $`G_{\mu \nu }^a`$ is the gluon field strength tensor, and $`t^a(a=1,\mathrm{}8)`$ are the 3$`\times `$3 generators of color SU(3). The corresponding OPE coefficient is proportional to the strange quark mass $`m_s`$. For this reason the magnetic penguins turn out to be unimportant in $`\mathrm{\Delta }S=1`$ transitions. They are important, however, for the $`b`$ quark, whose mass is large. We will return to this point later. The operator basis of SU(3)<sub>R</sub> invariant operators of $`d=6`$ consists of six four-fermion operators.
The first four operators are constructed from left-handed quarks (and their antiparticles, which are right-handed), $`𝒪_1`$ $`=`$ $`\overline{s}_L\gamma _\mu d_L\overline{u}_L\gamma ^\mu u_L-\overline{s}_L\gamma _\mu u_L\overline{u}_L\gamma ^\mu d_L,(\mathrm{𝟖}_𝐟,\mathrm{\Delta }I=1/2),`$ $`𝒪_2`$ $`=`$ $`\overline{s}_L\gamma _\mu d_L\overline{u}_L\gamma ^\mu u_L+\overline{s}_L\gamma _\mu u_L\overline{u}_L\gamma ^\mu d_L+2\overline{s}_L\gamma _\mu d_L\overline{d}_L\gamma ^\mu d_L`$ $`+2\overline{s}_L\gamma _\mu d_L\overline{s}_L\gamma ^\mu s_L,(\mathrm{𝟖}_𝐝,\mathrm{\Delta }I=1/2),`$ $`𝒪_3`$ $`=`$ $`\overline{s}_L\gamma _\mu d_L\overline{u}_L\gamma ^\mu u_L+\overline{s}_L\gamma _\mu u_L\overline{u}_L\gamma ^\mu d_L+2\overline{s}_L\gamma _\mu d_L\overline{d}_L\gamma ^\mu d_L`$ $`-3\overline{s}_L\gamma _\mu d_L\overline{s}_L\gamma ^\mu s_L,(\mathrm{𝟐𝟕},\mathrm{\Delta }I=1/2),`$ $`𝒪_4`$ $`=`$ $`\overline{s}_L\gamma _\mu d_L\overline{u}_L\gamma ^\mu u_L+\overline{s}_L\gamma _\mu u_L\overline{u}_L\gamma ^\mu d_L`$ $`-\overline{s}_L\gamma _\mu d_L\overline{d}_L\gamma ^\mu d_L,(\mathrm{𝟐𝟕},\mathrm{\Delta }I=3/2).`$ (5) Every quark field is a color triplet $`q^i`$ and summation over color indices is implied, $`\overline{q}_2\gamma ^\mu q_1=(\overline{q}_2)_i\gamma ^\mu (q_1)^i`$. Marked in the brackets are the SU(3) and isospin features of the operators. Two more four-fermion operators entering the set also contain right-handed quarks (in SU(3)<sub>R</sub> singlet form), $`𝒪_5`$ $`=`$ $`\overline{s}_L\gamma _\mu t^ad_L\left(\overline{u}_R\gamma ^\mu t^au_R+\overline{d}_R\gamma ^\mu t^ad_R+\overline{s}_R\gamma ^\mu t^as_R\right),(\mathrm{𝟖},\mathrm{\Delta }I=1/2),`$ $`𝒪_6`$ $`=`$ $`\overline{s}_L\gamma _\mu d_L\left(\overline{u}_R\gamma ^\mu u_R+\overline{d}_R\gamma ^\mu d_R+\overline{s}_R\gamma ^\mu s_R\right),(\mathrm{𝟖},\mathrm{\Delta }I=1/2).`$ (6) The operators $`𝒪_5`$ and $`𝒪_6`$ differ only by color flow. The operator basis we introduced is now generally accepted (although some doubts were expressed in the literature at the beginning). The standard set in use consists of linear combinations of $`𝒪_{1-6}`$. Actually, that set contains five instead of six combinations – the completeness of the basis was lost on the way – although this is not important within the Standard Model.

### 2.1 Evolution

The effective Hamiltonian (3) may remind the reader of the Fermi theory of beta decay with its numerous variants for four-fermion operators. While in many respects the analogy makes sense, the difference is that the Standard Model together with QCD allows us to fix all the coefficients $`c_i`$. In the leading logarithmic approximation the evolution of the effective Hamiltonian at $`\mu >m_c`$ was found in Refs. .
Penguins do not appear in this range and the result for $`H^{\mathrm{eff}}(m_c)`$ has a simple form: $$\left(\begin{array}{c}c_1(m_c)\\ c_2(m_c)\\ c_3(m_c)\\ c_4(m_c)\\ c_5(m_c)\\ c_6(m_c)\end{array}\right)=\left(\begin{array}{c}\left[\frac{\alpha _S(m_b)}{\alpha _S(m_W)}\right]^{4/b_5}\left[\frac{\alpha _S(m_c)}{\alpha _S(m_b)}\right]^{4/b_4}\\ \frac{1}{5}\left[\frac{\alpha _S(m_b)}{\alpha _S(m_W)}\right]^{-2/b_5}\left[\frac{\alpha _S(m_c)}{\alpha _S(m_b)}\right]^{-2/b_4}\\ \frac{2}{15}\left[\frac{\alpha _S(m_b)}{\alpha _S(m_W)}\right]^{-2/b_5}\left[\frac{\alpha _S(m_c)}{\alpha _S(m_b)}\right]^{-2/b_4}\\ \frac{2}{3}\left[\frac{\alpha _S(m_b)}{\alpha _S(m_W)}\right]^{-2/b_5}\left[\frac{\alpha _S(m_c)}{\alpha _S(m_b)}\right]^{-2/b_4}\\ 0\\ 0\end{array}\right),$$ (7) where $`\alpha _S(\mu )`$ is the running coupling $$\alpha _S(\mu )=\frac{\alpha _S(\mu _0)}{1+b_N\frac{\alpha _S(\mu _0)}{2\pi }\mathrm{ln}\frac{\mu }{\mu _0}},b_N=11-\frac{2}{3}N$$ (8) in the range with $`N`$ “active” flavors. The modification due to the $`b`$ quark is not significant, a few percent numerically, and the $`t`$ quark effects do not appear in the leading logarithmic approximation. The penguin operators $`𝒪_{5,6}`$ show up due to the evolution at $`\mu `$ below $`m_c`$, $$c_i(\mu )=\left[\mathrm{exp}\left\{\frac{\rho }{b_3}\mathrm{ln}\frac{\alpha _S(\mu )}{\alpha _S(m_c)}\right\}\right]_{ij}c_j(m_c),$$ (9) where the anomalous dimension matrix $`\rho `$ is $$\rho =\left(\begin{array}{cccccc}34/9& 10/9& 0& 0& 4/3& 0\\ 1/9& 23/9& 0& 0& 2/3& 0\\ 0& 0& 2& 0& 0& 0\\ 0& 0& 0& 2& 0& 0\\ 1/6& 5/6& 0& 0& 6& 3/2\\ 0& 0& 0& 0& 16/3& 0\end{array}\right).$$ (10) Numerical results for the OPE coefficients depend on the normalization point $`\mu `$; pushing $`\mu `$ as low as possible maximizes the effect of the evolution. We chose the lowest $`\mu `$ as the point where $`\alpha _S(\mu )=1`$. The values of $`c_{1,2,3,4}`$ are relatively stable, say, under variation of the $`c`$ quark mass, but the penguin coefficients $`c_{5,6}`$ depend on it rather strongly. This is not surprising, of course, since the penguins are generated in the interval of virtual momenta between $`\mu `$ and $`m_c`$. Numerically, even for $`\mu `$ as low as 200 MeV, the coefficients $`c_{5,6}`$ are rather small. Our 1975 estimates for $`c_5`$ were in the interval $$c_5=0.06÷0.14.$$ (11) With the present-day coupling $`\alpha _S(m_Z)=0.115`$, it would be about half as large. The values of the coefficients $`c_{5,6}`$ are small and unstable. We will return to the problem of the OPE coefficients at the low normalization point in connection with the procedure of calculating matrix elements; essentially, it is a problem of matching perturbative and nonperturbative effects. We also found the coefficient for the magnetic penguin operator (4); two-loop calculations were necessary for the purpose. Nowadays, due to the efforts of a few groups, the next-to-leading approximation has been found for all types of weak processes; see, e.g., the review . Although it is nice to have accurate values of the OPE coefficients, the main phenomenological effects come from the matrix elements, as we will see in the next section. The theoretical uncertainty is much larger there.

## 3 Matrix elements and phenomenology of nonleptonic decays

We used the naive quark model to find the matrix elements of the four-fermion operators $`𝒪_{1-6}`$. The model implies factorization for the amplitudes of $`K`$ meson decays; in hyperon decays this is also the case for all operators but $`𝒪_1`$.
It is clear that factorization cannot hold in the range of $`\mu `$ where the strong coupling $`\alpha _S(\mu )`$ is small, being in contradiction with the calculable evolution. For this reason, if the naive quark model works at all, it is only at very low values of $`\mu `$ where the evolution is complete, i.e. where $`\alpha _S(\mu )\sim 1`$. But then the theoretical accuracy of the perturbative OPE coefficients is not good, because it is governed by the same $`\alpha _S(\mu )`$. So we employed the following strategy: use the naive quark model for hadrons together with factorization at some small $`\mu `$, but allow adjustment of the OPE coefficients from phenomenological fits. This does not involve many parameters. Predominantly three coefficients, $`c_1`$, $`c_4`$ and $`c_5`$, plus a few nonfactorizable matrix elements in hyperon decays, determine the bulk of the nonleptonic amplitudes. New relations arising from such a fit are in good agreement with experimental data.

### 3.1 $`\mathrm{\Delta }I=3/2`$ transitions

Let us start with the decay $`K^+\to \pi ^+\pi ^0`$. It is a $`\mathrm{\Delta }I=3/2`$ transition and its amplitude is determined by the matrix element of $`𝒪_4`$, $$M(K^+\to \pi ^+\pi ^0)=\sqrt{2}G_FV_{us}^{*}V_{ud}\langle \pi ^+\pi ^0|c_4𝒪_4|K^+\rangle .$$ (12) In the valence quark model the factorization of this matrix element is visible from the Feynman diagrams presented in Fig. 3. Consider, for instance, the diagram $`a`$, $`M_4^a`$ $`=`$ $`\langle \pi ^0|\overline{u}_L\gamma _\mu u_L|0\rangle \langle \pi ^+|\overline{s}_L\gamma ^\mu d_L|K^+\rangle +\langle \pi ^0|(\overline{u}_L)_i\gamma _\mu (u_L)^j|0\rangle \langle \pi ^+|(\overline{s}_L)_j\gamma ^\mu (d_L)^i|K^+\rangle `$ (13) $`=`$ $`{\displaystyle \frac{4}{3}}\langle \pi ^0|\overline{u}_L\gamma _\mu u_L|0\rangle \langle \pi ^+|\overline{s}_L\gamma ^\mu d_L|K^+\rangle ,`$ where, out of the three terms entering the definition of $`𝒪_4`$, the first one factorizes, the second factorizes after a Fierz transformation, and the third does not contribute. The matrix elements in Eq. (13) are known from semileptonic $`\pi \to \mu \nu `$ and $`K\to \pi e\nu `$ transitions, $`\langle \pi ^0|\overline{u}_L\gamma _\mu u_L|0\rangle ={\displaystyle \frac{if_\pi }{2\sqrt{2}}}q_\mu ,f_\pi =0.95m_\pi ,`$ $`\langle \pi ^+|\overline{s}_L\gamma _\mu d_L|K^+\rangle ={\displaystyle \frac{1}{2}}\left[(p+q_+)_\mu f_++(p-q_+)_\mu f_{-}\right].`$ (14) Accounting for all the diagrams in Fig. 3 in a similar way, we get $$M(K^+\to \pi ^+\pi ^0)=ic_4G_FV_{us}^{*}V_{ud}m_K^2f_\pi .$$ (15) Comparing it with the experimental value $$\left|M(K^+\to \pi ^+\pi ^0)\right|_{\mathrm{exp}}=0.05G_Fm_K^2m_\pi $$ (16) we find $$c_4\approx 0.25,$$ (17) which is about 1.6 times smaller than the theoretical estimate of $`c_4`$. A consistency check comes from the $`\mathrm{\Delta }I=3/2`$ hyperon decays. In Fig. 4 the quark diagrams for the $`\mathrm{\Lambda }\to p\pi ^{-}`$ decay are presented. The symmetry of the wave functions and operators under permutations of color indices (first discussed by Pati and Woo) is important for the analysis. Namely, the operator $`𝒪_1`$ is antisymmetric under permutation of quark color indices, the operators $`𝒪_{2,3,4}`$ are symmetric, and the operators $`𝒪_{5,6}`$ have no specific symmetry, since in them the quarks differ by their helicities. Baryon wave functions are antisymmetric in color, so only antisymmetric operators survive in the diagrams $`a,b,c`$. For the symmetric $`\mathrm{\Delta }I=3/2`$ operator $`𝒪_4`$ only the diagram $`c`$ remains.
For this diagram factorization takes place, $`\langle \pi ^{-}p|𝒪_4^{\prime }|\mathrm{\Lambda }\rangle `$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}\langle \pi ^0n|𝒪_4|\mathrm{\Lambda }\rangle ={\displaystyle \frac{4}{3}}\langle \pi ^{-}|\overline{d}_L\gamma _\mu u_L|0\rangle \langle p|\overline{u}_L\gamma ^\mu s_L|\mathrm{\Lambda }\rangle `$ (18) $`\approx `$ $`{\displaystyle \frac{i}{\sqrt{6}}}f_\pi q_\mu \overline{u}_p\left(\gamma ^\mu +{\displaystyle \frac{5}{9}}g_A\gamma ^\mu \gamma _5\right)u_\mathrm{\Lambda }.`$ Using the value (17) for the coefficient $`c_4`$ we get predictions for the $`\mathrm{\Delta }I=3/2`$ hyperon decay amplitudes. They are collected in Table 1, where the $`s`$ and $`p`$ wave amplitudes $`A`$ and $`B`$ are defined by $$M=iG_Fm_\pi ^2\overline{u}_f\left(A+B\gamma _5\right)u_i.$$ (19) We see that the predictions for the $`s`$ waves are in reasonable agreement with the data; for the $`p`$ waves the experimental accuracy is too low for a conclusion.

### 3.2 $`\mathrm{\Delta }I=1/2`$ transitions

The above analysis of color symmetry and factorization in the quark diagrams of Fig. 4 shows that the operators $`𝒪_1^{\prime }`$ (diagrams $`a`$, $`b`$, and $`c`$) and $`𝒪_5^{\prime }`$ (the diagram $`d`$) are dominant in hyperon decays. The matrix elements of $`𝒪_5^{\prime }`$ are factorizable but those of $`𝒪_1^{\prime }`$ are not. There exists a combination of amplitudes, $$M\left(\mathrm{\Xi }^{-}\to \mathrm{\Sigma }^{-}\pi ^0\right)=\sqrt{3}\mathrm{\Lambda }_{}^0\mathrm{\Sigma }_0^++\sqrt{3}\mathrm{\Xi }_{}^{},$$ (20) which does not contain $`𝒪_1^{\prime }`$. Thus, the ratio of $`p`$ and $`s`$ waves for this combination is predicted, $$\frac{B\left(\mathrm{\Xi }^{-}\to \mathrm{\Sigma }^{-}\pi ^0\right)}{A\left(\mathrm{\Xi }^{-}\to \mathrm{\Sigma }^{-}\pi ^0\right)}=g_A\frac{m_\mathrm{\Xi }+m_\mathrm{\Sigma }}{m_\mathrm{\Xi }-m_\mathrm{\Sigma }}\approx 25.$$ (21) Experimentally this value is $`33\pm 10`$. The uncertainty can be reduced if the Lee-Sugawara relation $`2\mathrm{\Xi }_{}^{}+\mathrm{\Lambda }_{}^+\sqrt{3}\mathrm{\Sigma }_0^+=0`$ is used. Then the value is $`27.4\pm 2.5`$. Thus, we see an experimental confirmation of our description. The comparison of the $`s`$ wave amplitude $$A\left(\mathrm{\Xi }^{-}\to \mathrm{\Sigma }^{-}\pi ^0\right)=\left(c_5+\frac{3}{16}c_6\right)V_{us}V_{ud}^{*}\frac{2}{9}\frac{f_\pi m_\pi ^2}{m_u+m_d}\frac{m_\mathrm{\Xi }-m_\mathrm{\Sigma }}{m_s-m_u}$$ (22) with the corresponding experimental value $`0.51\pm 0.10`$ fixes $$c_5+\frac{3}{16}c_6\approx 0.25,$$ (23) which is about four times larger than the theoretical estimate. The operators $`𝒪_{5,6}`$ are dominant in $`K`$ decays. The naive quark model, together with $`\sigma `$ meson ($`m_\sigma \approx 700\mathrm{MeV}`$) dominance in the $`s`$ wave $`\pi \pi `$ channel, gives for the matrix element $`\langle \pi ^+\pi ^{-}|𝒪_5|K_S\rangle `$ $`=`$ $`i{\displaystyle \frac{2\sqrt{2}}{9}}{\displaystyle \frac{f_\pi m_K^2m_\pi ^2}{(m_u+m_d)m_s}}\left[{\displaystyle \frac{f_K}{f_\pi }}{\displaystyle \frac{m_\sigma ^2}{m_\sigma ^2-(q_++q_{-})^2}}-1\right]`$ (24) $`\approx `$ $`i{\displaystyle \frac{2\sqrt{2}}{9}}{\displaystyle \frac{f_\pi m_K^2m_\pi ^2}{(m_u+m_d)m_s}}\left[{\displaystyle \frac{f_K}{f_\pi }}-1+{\displaystyle \frac{f_K}{f_\pi }}{\displaystyle \frac{m_K^2}{m_\sigma ^2}}\right].`$ The result for the amplitude of the $`K_S\to \pi ^+\pi ^{-}`$ decay $$\langle \pi ^+\pi ^{-}|H^{\mathrm{eff}}|K_S\rangle =(0.85+0.20)iG_Fm_K^2m_\pi ,$$ (25) where the first number comes from $`𝒪_{5,6}`$ and the second from $`𝒪_{1-4}`$, matches the experimental value.
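As a quick numerical cross-check of two key numbers quoted above (a sketch added for illustration; the hyperon and pion masses and $`g_A\approx 1.25`$ are standard inputs assumed here, and the light quark masses are illustrative low-scale values, since their precise definition is scheme dependent):

```python
# Numerical check of Eq. (21) and of the large scale m_pi^2/(m_u + m_d).
# Masses in MeV; sign conventions of the amplitudes are left aside.
m_Xi, m_Sigma, m_pi = 1321.7, 1197.4, 139.6
g_A = 1.25

ratio_BA = g_A * (m_Xi + m_Sigma) / (m_Xi - m_Sigma)
print(f"B/A = g_A (m_Xi + m_Sigma)/(m_Xi - m_Sigma) = {ratio_BA:.1f}")
# ~ 25, to be compared with the experimental 27.4 +/- 2.5 quoted above

m_u_plus_m_d = 11.0   # MeV, assumed; the advocated small light quark masses
scale = m_pi**2 / m_u_plus_m_d
print(f"m_pi^2/(m_u + m_d) = {scale / 1000:.1f} GeV")   # ~ 2 GeV, as in Sec. 1
```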
## 4 Further developments

### 4.1 Limit of large $`N_{\mathrm{color}}`$, chiral loops

As we have seen, in the approach presented above the main uncertainty comes from the range of momenta of the order of hadronic scales. To make a consistent analysis in this range, Bardeen, Buras, and Gerard suggested using a description in terms of chiral meson dynamics . This means that QCD is used to fix the effective Hamiltonian at $`\mu `$ below the charm threshold but large enough to have $`\alpha _S(\mu )<1`$; in particular, they chose $`\mu \approx 1\mathrm{GeV}`$. Matrix elements of this effective Hamiltonian, which account for momenta below 1 GeV, are calculated with mesonic loops instead of quark loops. The use of chiral meson loops can be parametrically justified in the limit of a large number of colors $`N_c`$. Notice that the penguin operators appear due to mixing with the left-handed operators $`𝒪_{1-4}`$; this mixing is suppressed as $`1/N_c`$. In the dual mesonic picture it is the hadronic vertices which bring in $`1/N_c`$. The smallness of $`1/N_c`$ emphasizes once more the necessity of a large hadronic scale in the problem; in the naive quark model it was $`m_\pi ^2/m_q`$. In chiral dynamics the $`\mathrm{\Delta }I=1/2`$ amplitudes also come out enhanced, which confirms the penguin explanation of $`\mathrm{\Delta }I=1/2`$. The chiral loops also improve the predictions for $`\mathrm{\Delta }I=3/2`$ transitions, providing an additional suppression.

### 4.2 Penguin decays of $`B`$ mesons

We mentioned that “magnetic” penguins, i.e. the $`d=5`$ operators of the kind given by Eq. (4), are of particular importance for B decays (see the review ). The Feynman diagram for $`b\to sg(\gamma )`$ transitions is presented in Fig. 5. In contrast to strange particle decays, there is no suppression due to a small quark mass ($`m_b`$ enters instead of $`m_s`$). From the diagram it is seen that “magnetic” penguins are sensitive to heavy quarks in the loop and, for this reason, to possible deviations from the Standard Model prediction, $$\mathrm{Br}(b\to s\gamma )=(3.5\pm 0.3)\times 10^{-4}.$$ (26) The inclusive rate of $`B\to X_s\gamma `$ decays was measured by the CLEO and ALEPH collaborations, CLEO: $`\mathrm{Br}(B\to X_s\gamma )=`$ $`(2.32\pm 0.57\pm 0.35)\times 10^{-4}`$ ALEPH: $`(3.11\pm 0.80\pm 0.72)\times 10^{-4}`$ No deviation was observed. This CLEO experiment was recognized at this meeting: Ed Thorndike received the 1999 Panofsky Prize.

### 4.3 Direct $`CP`$ violation in $`K`$ decays

At this meeting Robert Tschirhart reported a new measurement of direct $`CP`$ violation in $`K`$ decays by the KTeV collaboration, $$\frac{ϵ^{\prime }}{ϵ}=(2.8\pm 0.41)\times 10^{-3},$$ (27) which is in agreement with the CERN NA31 experiment but different from the previous measurement in the Fermilab E731 experiment. Direct $`CP`$ violation is a crucial test of the Standard Model. Does the result fit the Standard Model? In his presentation Tschirhart characterized the answer to this question as debatable. Indeed, in a recent work the authors claim that the Standard Model leads to predictions substantially (a few times) lower than the experimental result (27), and that only for extreme values of the input parameters can the theory be consistent with the data. My statement on the issue is that our theory of nonleptonic decays naturally predicts $`ϵ^{\prime }/ϵ`$ consistent with Eq. (27). Leaving details for discussion elsewhere , let me make a few remarks. Actually, relatively large values of $`ϵ^{\prime }/ϵ`$ within such an approach were obtained long ago by Gilman and Wise , and by Voloshin , but the approach was then unfairly abandoned.
The argument was, see e.g. Ref. , that the operator $`𝒪_5`$ enters with a small coefficient at $`\mu =m_c`$ or $`\mu =1`$ GeV, so that other operators are important. This criticism, however, is not relevant to the dominance of the operator $`𝒪_5`$ at the low normalization point where factorization takes place. In the one-loop approximation direct CP violation shows up as an imaginary part of the coefficient $`c_5`$, $$\mathrm{Im}c_5(m_c)=\frac{\mathrm{Im}(V_{cs}^{*}V_{cd})}{V_{us}^{*}V_{ud}}\frac{\alpha _S(m_W)}{12\pi }\mathrm{ln}\frac{m_W^2}{m_c^2}\approx 0.12\mathrm{Im}(V_{cs}^{*}V_{cd}),$$ due to the diagram of Fig. 1 with the $`c`$ quark in the loop (the $`t`$ quark contribution is small). In contrast to the CP-even part, i.e. $`\mathrm{Re}c_5`$, which comes from virtual momenta between $`\mu `$ and $`m_c`$ (thanks to the GIM cancellation), the CP-odd part, $`\mathrm{Im}c_5`$, comes from the larger range between $`m_c`$ and $`m_W`$. The corrections due to the logarithmic evolution can be simply accounted for; they increase the value of $`\mathrm{Im}c_5(m_c)`$. Additionally, $`\mathrm{Im}c_5(\mu )`$ is increased by the evolution down to the normalization point $`\mu `$ where $`\alpha _S(\mu )\sim 1`$ and factorization is applied. Accounting for the phenomenological value (23) of the CP-even part, we find $`ϵ^{\prime }/ϵ`$ in the ballpark of the data. Besides confirming the quark-mixing nature of CP violation in the Standard Model, this serves, somewhat surprisingly, as one more confirmation of our mechanism of $`\mathrm{\Delta }I=1/2`$ enhancement. It is nice to get such a surprise on the eve of the Sakurai Prize.

## 5 Conclusions

Summarizing the development of twenty-four years, I cannot, unfortunately, say that the theoretical understanding of $`\mathrm{\Delta }I=1/2`$ has advanced very much. The progress of recent years in the technically difficult calculations of higher-loop corrections to the OPE coefficients is not crucial when the effect comes from large numbers in the matrix elements. As we discussed above, the enhancement of $`\mathrm{\Delta }I=1/2`$ reflects the existence of a large momentum scale in light hadrons. In the case of glueballs the scale could be even larger, as is discussed in the next talk by Valya Zakharov . Moreover, the momentum scale in light hadrons related to this enhancement is so large that the treatment of the $`c`$ quark as heavy becomes questionable. In this sense the $`c`$ quark is not heavy enough (let me remind you, in passing, that a low upper limit for its mass was theoretically found in the Standard Model before the experimental discovery of the $`c`$ quark). Thus, we still have some distance to go, so let me finish with the sentence: Penguins spread out but have not landed yet.

## Acknowledgments

My great thanks to Valya Zakharov and Misha Shifman for the pleasure of our long-term collaboration. I am grateful to the American Physical Society for the honor of being a recipient of the Sakurai Prize. My appreciation goes to colleagues, collaborators and friends in the theory groups of the Budker Institute and ITEP, especially to B.L. Ioffe, I.B. Khriplovich, I.I. Kogan, V.N. Novikov, L.B. Okun, E.V. Shuryak, A.V. Smilga, V.V. Sokolov, and M.B. Voloshin. This work is supported in part by DOE under the grant number DE-FG02-94ER40823.

## References
# The Cosmological Constant and the Time of Its Dominance

## I INTRODUCTION

During the past year and a half, two groups have presented (independently) strong evidence that the expansion of the universe is accelerating rather than decelerating. This surprising result comes from distance measurements to more than fifty supernovae Type Ia (SNe Ia) in the redshift range $`z=0`$ to $`z=1.2`$. While possible ambiguities related to evolution, and to the nature of SNe Ia progenitors, still exist, the data are consistent with the cosmological constant (or vacuum energy) contributing to the total energy density about 70% of the critical density ($`\mathrm{\Omega }_\mathrm{\Lambda }\approx 0.7`$). At the same time, other methods, and measurements of the anisotropy of the cosmic microwave background, indicate that matter alone contributes about $`\mathrm{\Omega }_M\approx 0.3`$, which when combined with the cosmological constant suggests a flat universe. These findings, however, raise an extremely intriguing question. It is difficult to understand why we happen to be living in the first and only time in cosmic history in which $`\rho _M\sim \rho _\mathrm{\Lambda }`$ (where $`\rho _M`$ is the matter density, and $`\rho _\mathrm{\Lambda }`$ the vacuum energy density associated with the cosmological constant). That is, why $$t_0\sim t_\mathrm{\Lambda },$$ (1) where $`t_0`$ is the present time and $`t_\mathrm{\Lambda }`$ is the time at which the cosmological constant starts to dominate. Observers living at $`t\ll t_\mathrm{\Lambda }`$ would find $`\mathrm{\Omega }_M\approx 1`$ ($`\mathrm{\Omega }_\mathrm{\Lambda }\approx 0`$), while observers at $`t\gg t_\mathrm{\Lambda }`$ would find $`\mathrm{\Omega }_\mathrm{\Lambda }\approx 1`$ ($`\mathrm{\Omega }_M\approx 0`$). There is another, less frequently discussed “coincidence”, which also calls for an explanation. Observationally, the epoch of structure formation, when giant galaxies were assembled, is at $`z\approx 1`$–3, or $`t_G\sim t_0/3`$–$`t_0/8`$. For the value of $`\mathrm{\Lambda }`$ suggested by observations, this is within one order of magnitude of $`t_\mathrm{\Lambda }`$, $$t_G\sim t_\mathrm{\Lambda }.$$ (2) It is not clear why these seemingly unrelated times should be comparable. We could have, for example, $`t_G\ll t_\mathrm{\Lambda }`$. In the present work, we explore whether the above “coincidences” [eqs. (1) and (2)] could be due to anthropic selection effects. The approach that we use is one in which it is assumed that some of the constants of nature are actually random variables, whose range and a priori probabilities are nevertheless determined by the laws of physics. Under this assumption, some values which are allowed in principle may be incompatible with the very existence of observers. Hence, such values of the constants cannot be measured. The values in the observable range will be measured by civilizations in different parts of the universe, and we can define the probability $`d𝒫=P(\chi )d\chi _1\mathrm{\cdots }d\chi _n`$ for the variables $`\chi _a`$ to be in the intervals $`d\chi _a`$ as being proportional to the number of civilizations that will measure $`\chi _a`$ in those intervals. Following Ref. , we shall use the “principle of mediocrity”, which assumes that we are “typical” observers. Namely, we can expect to observe the most probable values of $`\chi _a`$. An immediate objection to this approach is that we are ignorant about the origin of life, let alone intelligence, and therefore the number of civilizations cannot be calculated.
However, the approach can still be used to find the probability distribution for parameters which do not affect the physical processes involved in the evolution of life. The cosmological constant $`\mathrm{\Lambda }`$ and the amplitude of density fluctuations at horizon crossing $`Q`$ are examples of such parameters. If the parameters $`\chi _a`$ belong to this category, then the probability for a carbon-based civilization to evolve on a suitable planet is independent of $`\chi _a`$, and instead of the number of civilizations we can use the number of habitable planets or, as a rough approximation, the number of suitable galaxies. We can then write $$P(\chi )d^n\chi \propto d𝒩,$$ (3) where $`d𝒩`$ is the number of galaxies that are formed in regions where $`\chi _a`$ take values in the intervals $`d\chi _a`$. The problem of calculating the probability distribution $`d𝒫(\chi )`$ can be split into two parts. The number of galaxies $`d𝒩(\chi )`$ in Eq. (3) is proportional to the volume of the comoving regions where $`\chi _a`$ take specified values and to the density of galaxies in those regions. The volumes and the densities can be evaluated at any time. Their product should be independent of the choice of this reference time, as long as we include both galaxies that formed in the past and those that are going to be formed in the future. For some purposes it is convenient to evaluate the volumes and the densities at the time of recombination, $`t_{rec}`$. We can then write $$d𝒫(\chi )=\nu (\chi )d𝒫_{*}(\chi ).$$ (4) Here, $`d𝒫_{*}(\chi )=P_{*}(\chi )d^n\chi `$ is proportional to the volume of those parts of the universe where $`\chi _a`$ take values in the intervals $`d\chi _a`$, and $`\nu (\chi )`$ is the average number of galaxies that form per unit volume with cosmological parameters specified by the values of $`\chi _a`$. $`d𝒫_{*}(\chi )`$ is an a priori probability distribution<sup>*</sup><sup>*</sup>*We use the term a priori in the sense that this distribution is independent of the existence of observers. which should be determined from the theory of initial conditions (e.g., from an inflationary model). On the other hand, the calculation of $`\nu (\chi )`$ is a standard astrophysical problem, unrelated to the calculation of the volume factor $`d𝒫_{*}(\chi )`$. The principle of mediocrity (which is closely related to the “Copernican principle”) has been applied to determine the likely values of the cosmological constant , of the density parameter $`\mathrm{\Omega }`$ , and of the density fluctuations at horizon crossing $`Q`$ . A very similar approach was used by Carter , Leslie and Gott to estimate the expected lifetime of our civilization. Gott also applied it to estimate the lifetimes of various political and economic structures, including the journal “Nature”, where his article was published. Related ideas have also been discussed by Linde et al. and by Albrecht . Spatial variation of the “constants” can naturally arise in the framework of inflationary cosmology . The dynamics of light scalar fields during inflation are strongly influenced by quantum fluctuations, causing different regions of the universe to thermalize with different values of the fields. For example, what we perceive as a cosmological constant could be a potential $`U(\varphi )`$ of some field $`\varphi (x)`$. If this potential is very flat, so that the evolution of $`\varphi `$ is much slower than the Hubble expansion, then observations will not distinguish between $`U(\varphi )`$ and a true cosmological constant.
Observers in different parts of the universe would then measure different values of $`U(\varphi )`$. Quite similarly, the potential of the inflaton field $`\mathrm{\Phi }`$ that drives inflation can depend on a slowly-varying field $`\varphi `$. In this case, regions of the universe thermalizing with different values of $`\varphi `$ will be characterized by different amplitudes of the cosmological density fluctuations. Examples of models of this sort have been given in Refs. . The application of the principle of mediocrity in our case will require comparing the expected numbers of civilizations in parts of the universe with different values of $`\mathrm{\Lambda }`$ and $`Q`$, which will be treated as random variables. In fact, for our purposes, it will be convenient to deal with an additional random variable, $`t_G`$. This is because one of the questions we are addressing is the coincidence (2), and galaxy formation can itself be modeled as a random process which takes place over a range of times for given $`Q`$ and $`\mathrm{\Lambda }`$. Instead of $`Q`$, it will be more convenient to use the density contrast on the galactic scale at the time of recombination, $`\sigma _{rec}`$. Throughout the paper we assume that the universe is flat, $`\mathrm{\Omega }_\mathrm{\Lambda }+\mathrm{\Omega }_M=1`$. The paper is organized as follows. We shall first consider the situation in which only the cosmological constant is allowed to vary, with all other parameters being fixed. In Section II we will show that the most likely values of $`\mathrm{\Lambda }`$ and $`t_G`$ in this case are such that $`t_\mathrm{\Lambda }\sim t_G`$. In Section III we shall argue that the most likely epoch for the existence of intelligent observers is $`t_0\sim t_G`$. This completes the argument that coincidences (1) and (2) are indeed to be expected in this class of models. In Section IV we discuss models where both $`\mathrm{\Lambda }`$ and $`\sigma _{rec}`$ are variable and outline the calculation of the probability distribution for $`t_\mathrm{\Lambda }`$ and $`t_G`$. In our analysis of these models we go beyond the issue of the cosmic time coincidence and discuss the values of $`t_\mathrm{\Lambda }`$ and of the density contrast $`\sigma _{rec}`$ detected by typical observers. Our conclusions are summarized in Section V.

## II Why is $`t_\mathrm{\Lambda }`$ $`\sim `$ $`t`$<sub>G</sub>?

In this and the following section we assume that the cosmological constant $`\mathrm{\Lambda }`$ is the only variable parameter. Weinberg was the first to point out that not all values of $`\mathrm{\Lambda }`$ are consistent with the existence of conscious observers. In a spatially flat universe with a cosmological constant, gravitational clustering effectively stops at a redshift $`(1+z_\mathrm{\Lambda })\sim (\rho _\mathrm{\Lambda }/\rho _{M0})^{1/3}`$, when $`\rho _\mathrm{\Lambda }`$ becomes comparable to the matter density $`\rho _M`$. (Here, $`\rho _{M0}`$ is the present matter density.) At later times, the vacuum energy dominates and the universe enters a de Sitter stage of exponential expansion. An anthropic bound on $`\rho _\mathrm{\Lambda }`$ can be obtained by requiring that it does not dominate before the redshift $`z_{max}`$ when the earliest galaxies are formed, $$\rho _\mathrm{\Lambda }\lesssim (1+z_{max})^3\rho _{M0}.$$ (5) Weinberg took $`z_{max}\approx 4`$, which gives $`\rho _\mathrm{\Lambda }\lesssim 100\rho _{M0}`$.
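As a back-of-the-envelope illustration of this bound (a Python sketch added here; the present matter density in particle-physics units, $`\rho _{M0}\sim 10^{47}`$ GeV<sup>4</sup>, is a rough assumed round value, as are the three energy scales introduced in the next paragraph), one can compare the anthropically allowed range with the natural particle-physics scales:

```python
# Rough numbers only: rho_M0 in GeV^4 is an assumed order of magnitude
# (the critical density is ~ 1e-47 GeV^4 for h ~ 0.7).
z_max = 4.0
bound = (1.0 + z_max)**3
print(f"anthropic bound: rho_Lambda <~ {bound:.0f} * rho_M0")   # ~ 100 rho_M0

rho_M0 = 1e-47                                              # GeV^4, assumed
scales = {"Planck": 1e19, "GUT": 1e16, "electroweak": 1e2}  # eta, in GeV
for name, eta in scales.items():
    print(f"eta^4 / rho_M0 at the {name} scale: {eta**4 / rho_M0:.0e}")
```

For any of these choices of $`\eta `$, the natural scale $`\eta ^4`$ exceeds the allowed range by some 55 to 123 orders of magnitude, which motivates the flat a priori distribution adopted below.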
One expects that the a priori probability distribution $`𝒫_{*}(\rho _\mathrm{\Lambda })`$ should vary on some characteristic particle physics scale, $`\mathrm{\Delta }\rho _\mathrm{\Lambda }\sim \eta ^4`$. The energy scale $`\eta `$ could be the Planck scale $`\eta _{pl}\sim 10^{19}`$ GeV, the grand unification scale $`\eta _{GUT}\sim 10^{16}`$ GeV, or the electroweak scale $`\eta _{EW}\sim 10^2`$ GeV. For any reasonable choices of $`\eta `$ and $`z_{max}`$, $`\mathrm{\Delta }\rho _\mathrm{\Lambda }`$ exceeds the anthropically allowed range of $`\rho _\mathrm{\Lambda }`$ by many orders of magnitude. We can therefore set $$𝒫_{*}(\rho _\mathrm{\Lambda })=const$$ (6) in the range of interest . With this flat distribution, a value of $`\rho _\mathrm{\Lambda }`$ picked randomly from an interval $`|\rho _\mathrm{\Lambda }|\lesssim \rho _\mathrm{\Lambda }^m`$ is likely to be comparable to $`\rho _\mathrm{\Lambda }^m`$ (the probability of picking a much smaller value is small). In this sense, the flat distribution (6) favors larger values of $`\rho _\mathrm{\Lambda }`$. The anthropic bound (5) specifies the value of $`\rho _\mathrm{\Lambda }`$ which makes galaxy formation barely possible. However, the principle of mediocrity suggests that we are most likely to observe not these marginal values, but rather the ones that maximize the number of galaxies. This suggests that $`\mathrm{\Lambda }`$-domination should not occur before a substantial fraction of matter has collapsed into galaxies. The largest values of $`\mathrm{\Lambda }`$ consistent with this requirement are such that $`t_\mathrm{\Lambda }\sim t_G`$. Hence, the coincidence (2) is to be expected if we are typical observers . Let us now try to make this more quantitative. It will be convenient to introduce the variable $$x=\frac{\mathrm{\Omega }_\mathrm{\Lambda }}{\mathrm{\Omega }_M}=\mathrm{sinh}^2\left(\frac{t}{t_\mathrm{\Lambda }}\right),$$ (7) where we have defined $`t_\mathrm{\Lambda }`$ as the time at which $`\mathrm{\Omega }_\mathrm{\Lambda }=\mathrm{sinh}^2(1)\mathrm{\Omega }_M\approx 1.38\mathrm{\Omega }_M`$. At the time of recombination, for values of $`\rho _\mathrm{\Lambda }`$ within the anthropic range, $`x_{rec}\approx \rho _\mathrm{\Lambda }/\rho _{rec}\ll 1`$, where the matter density at recombination, $`\rho _{rec}`$, is independent of $`\mathrm{\Lambda }`$. We can therefore express the probability distribution for $`\rho _\mathrm{\Lambda }`$ as a distribution for $`x_{rec}`$, $$d𝒫(x_{rec})\propto \nu (x_{rec})dx_{rec},$$ (8) where $`\nu (x_{rec})`$ is the number of galaxies formed per unit volume in regions with a given value of $`x_{rec}`$. The calculation of the distribution (8) was discussed in detail by Martel et al. . A simplified version of their analysis is given in the Appendix. Galaxies form at the time when the density contrast (evolved according to the linear theory) exceeds a certain critical value $`\mathrm{\Delta }_c(x)`$. For small values of $`x`$, when the cosmological constant is negligible, we have $`\mathrm{\Delta }_c(x)\approx 1.69`$, as in the Einstein-de Sitter model. However, it is known that $`\mathrm{\Delta }_c`$ is slightly dependent on $`x`$, with $`\mathrm{\Delta }_c(\mathrm{\infty })\approx 1.63`$. Thus, $`\mathrm{\Delta }_c`$ varies by no more than $`4\%`$ in the whole relevant range, and in what follows we shall ignore its $`x`$-dependence.
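Before proceeding, it is instructive to evaluate the growth function of Eq. (10) below numerically. The following minimal Python sketch (an illustration added here; it assumes scipy is available) confirms the two limits quoted in the text, $`F(x)\approx x^{1/3}`$ for $`x\ll 1`$ and $`F(\mathrm{\infty })=(5/6)\beta (2/3,5/6)\approx 1.44`$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta

def F(x):
    """Growth factor of Eq. (10):
    F = (5/6) ((1+x)/x)^(1/2) * int_0^x dw / (w^(1/6) (1+w)^(3/2))."""
    integral, _ = quad(lambda w: w**(-1.0 / 6.0) * (1.0 + w)**(-1.5), 0.0, x)
    return 5.0 / 6.0 * np.sqrt((1.0 + x) / x) * integral

for x in (1e-3, 1e-2, 0.1, 1.0, 10.0, 100.0):
    print(f"x = {x:8.3f} : F = {F(x):.4f}   (x^(1/3) = {x**(1.0/3.0):.4f})")

# Late-time plateau: linear growth saturates once the cosmological
# constant dominates.
print(f"F(inf) = {5.0 / 6.0 * beta(2.0 / 3.0, 5.0 / 6.0):.4f}")
```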
The number of galaxies which have assembled up to a given time $`t`$ for a given value of the cosmological constant (that is, up to a given $`x`$ for a given value of $`x_{rec}`$) can thus be estimated as $$\nu (x,x_{rec})=\mathrm{erfc}\left(\frac{\mathrm{\Delta }_c}{\sqrt{2}\sigma _{rec}G(x,x_{rec})}\right).$$ (9) The factor $`G(x,x_{rec})=x_{rec}^{-1/3}F(x)`$, where $$F=\frac{5}{6}\left(\frac{1+x}{x}\right)^{1/2}\int _0^x\frac{d\omega }{\omega ^{1/6}(1+\omega )^{3/2}},$$ (10) accounts for the growth of the dispersion in the density contrast $`\sigma `$ on the galactic scale from its value $`\sigma _{rec}`$ at the time of recombination until the time $`t(x)`$. For small $`x`$ we have $`F\approx x^{1/3}`$, and perturbations grow as in the Einstein-de Sitter model. However, at large $`x`$ the growth of perturbations is stalled and we have $`F(\mathrm{\infty })=(5/6)\beta (2/3,5/6)\approx 1.44`$. The number of galaxies that will assemble in a given interval of $`x`$ will thus be given by $$d\nu (x,x_{rec})\propto \mathrm{exp}\left[-\frac{1}{2}\left(\frac{\mathrm{\Delta }_c}{F(x)}\frac{x_{rec}^{1/3}}{\sigma _{rec}}\right)^2\right]\frac{F^{\prime }(x)}{F^2(x)}\frac{x_{rec}^{1/3}}{\sigma _{rec}}dx.$$ (11) Multiplying by a flat a priori distribution for $`x_{rec}`$, we have $$d𝒫(x,x_{rec})\propto d\nu (x,x_{rec})dx_{rec}.$$ (12) The probability for an observer to live in a galaxy that formed in a given logarithmic interval of $`t_G/t_\mathrm{\Lambda }`$ can now be obtained by integrating (12) with respect to $`x_{rec}`$ while keeping $`x`$ fixed. The result is $$d𝒫(t_G/t_\mathrm{\Lambda })\propto \sigma _{rec}^3F^2F^{\prime }\frac{dx}{d\mathrm{ln}(t_G/t_\mathrm{\Lambda })}d\mathrm{ln}(t_G/t_\mathrm{\Lambda }).$$ (13) This distribution is shown in Fig. 1 (curve $`a`$). It has a broad peak which almost vanishes outside of the range $`0.1\lesssim (t_G/t_\mathrm{\Lambda })\lesssim 10`$. The maximum of the distribution is at $`t_G/t_\mathrm{\Lambda }\approx 1.7`$ and the median value is at $`t_G/t_\mathrm{\Lambda }\approx 1.5`$. Thus, most observers will find that their galaxies formed at $`t\sim t_\mathrm{\Lambda }`$, and therefore the coincidence $$t_G\sim t_\mathrm{\Lambda }$$ (14) is explained. It is also of some interest to consider the distribution (12) without performing any integrations. By changing from the variables $`x`$ and $`x_{rec}`$ to the variables $`t_G`$ and $`t_\mathrm{\Lambda }`$ we have $$d𝒫\propto \sigma _{rec}^3\mathrm{exp}\left[-\frac{(t_\sigma /t_\mathrm{\Lambda })^{4/3}}{2F^2}\right]\frac{F^{\prime }(x)}{F^2(x)}\left(\frac{t_\sigma }{t_\mathrm{\Lambda }}\right)^{8/3}\left(\frac{t_G}{t_\mathrm{\Lambda }}\right)\mathrm{sinh}\left(\frac{2t_G}{t_\mathrm{\Lambda }}\right)d\mathrm{ln}t_Gd\mathrm{ln}t_\mathrm{\Lambda },$$ (15) where $`x=x(t_G/t_\mathrm{\Lambda })`$ and $$t_\sigma \equiv (\mathrm{\Delta }_c/\sigma _{rec})^{3/2}t_{rec}$$ (16) is the time at which the density contrast on galactic scales would reach the critical value $`\mathrm{\Delta }_c`$ in an Einstein-de Sitter model. Here we are not allowing for variations of $`\sigma _{rec}`$, and therefore this time is just a constant. The probability density (15) per unit area in the $`(\mathrm{log}t_G,\mathrm{log}t_\mathrm{\Lambda })`$ plane is plotted in Fig. 2. Note that the peak is in the region where $`t_G\sim t_\mathrm{\Lambda }\sim t_\sigma `$. Different projections of this plot are useful. If we integrate along the vertical axis, then we obtain the probability distribution for the time when $`\mathrm{\Lambda }`$ dominates, which is equivalent to (8), whereas if we integrate diagonally along $`(t_G/t_\mathrm{\Lambda })=const`$ lines, we obtain (13).

## III Why Now?
As we noted in the Introduction, one of the most puzzling aspects of the value of $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is related to the fact that the coincidence $`t_0\sim t_\mathrm{\Lambda }`$ appears to be implying that we live in a special time. A similar problem exists even if a quintessence component is assumed (see Section V). As we have shown in Section II, the epoch when giant galaxies are assembled, $`t_G`$, is expected to roughly coincide with the epoch of cosmological constant dominance, $`t_\mathrm{\Lambda }`$. Therefore, if we could explain why $`t_0\sim t_G`$, the puzzle of the cosmic age coincidence would be resolved. Most carbon-based life may be expected to have appeared in the universe around the peak in the universal carbon production rate, at $`t_{\mathrm{carbon}}`$. The main contributors to carbon in the interstellar medium are stars in the mass range 1–2 $`M_{\odot }`$, through carbon stars and planetary nebulae . Consequently, detailed simulations show that the peak in the cosmic carbon production rate is delayed only by less than a billion years compared to the peak in the cosmic star formation rate, $`t_{\mathrm{SFR}}`$, namely, $$t_{\mathrm{carbon}}\sim t_{\mathrm{SFR}}.$$ (17) The appearance of intelligent life is further delayed by no more than a fraction of the main sequence lifetime of stars in the spectral range mid-F to mid-K ($`\sim `$5–20 Gyr; ). Following the main sequence, the expansion and increase in luminosity of stars spell the end of the possible existence of a biosphere on planets. Only stars in the above spectral range are expected to have continuously habitable zones around them (namely, ensuring the presence of liquid water and the absence of catastrophic cooling by $`\mathrm{CO}_2`$ clouds on planetary surfaces; ). Thus we have $$t_{\mathrm{IL}}\sim t_{\mathrm{carbon}}\sim t_{\mathrm{SFR}}.$$ (18) The “present time” $`t_0`$ can be defined as the time when a civilization evolves to the point where it is capable of measuring the cosmological constant and becomes aware of the coincidence (1). The experience of our own civilization suggests that, on a cosmological timescale, this time is not much different from $`t_{\mathrm{IL}}`$, $$t_0\sim t_{\mathrm{IL}}.$$ (19) Carter and others used the principle of mediocrity to argue that the lifetime of our civilization is unlikely to be much longer than the time it has already existed, that is, $`\sim 10^4`$ yrs. If we are typical, then this should be the characteristic lifetime of a civilization. This would imply that Eq. (19) is valid even if $`t_0`$ is understood as the time when any astronomical observations can be made. Carter's argument has some force, but it is based on a single data point, and one may be reluctant to accept it, considering in particular its pessimistic implications. We note, however, that with our definition of $`t_0`$, Eq. (19) is likely to be valid regardless of the validity of Carter's argument (that is, even if civilizations are likely to survive much longer than $`t_{\mathrm{IL}}`$). Combining (19) with (18), we have that for a typical civilization $$t_0\sim t_{\mathrm{SFR}}.$$ (20) Finally, models of galaxy formation in hierarchical clustering theories propose that Lyman-break galaxies (at $`z\sim 3`$) are the first objects of galactic size which experience vigorous star formation . These objects therefore signal the onset of the epoch of galaxy formation, with cosmic star formation and galaxy formation being closely linked.
In fact, the mergers and collisions of “sub-galactic” objects that produce galactic-size structures are responsible for the enhanced star formation. In hierarchical models, therefore, $$t_G\sim t_{\mathrm{SFR}}.$$ (21) The above relation is also supported by observations of the star formation history, showing that the star formation rate rises from the present to about $`z\approx 1`$, with a broad peak (of roughly constant star formation rate) in the redshift range $`z\sim 1`$–3. This corresponds roughly to $`t_{\mathrm{SFR}}\sim t_0/3`$–$`t_0/8`$, in agreement with Eq. (21). In fact, more than 80% of the stars have already formed ($`\mathrm{\Omega }_{gas}/\mathrm{\Omega }_{stars}\approx 0.18`$). Combining Eqs. (14), (20), and (21) above we obtain the desired relation $$t_0\sim t_G\sim t_\mathrm{\Lambda }.$$ (22)

## IV Models with variable $`\mathrm{\Lambda }`$ and $`\sigma _{rec}`$

In the previous discussion we have assumed a fixed value of the density contrast at recombination $`\sigma _{rec}`$ (or equivalently, a fixed value of $`Q`$). This determines the parameter $`t_\sigma \equiv (\mathrm{\Delta }_c/\sigma _{rec})^{3/2}t_{rec}`$ appearing in the distribution (15) and therefore, as is clear from Fig. 2, the most probable time at which the cosmological constant will dominate, $`t_\mathrm{\Lambda }\sim t_\sigma `$. If $`\sigma _{rec}`$ is itself treated as a random variable, with a priori distribution $`𝒫_{\ast }(\sigma _{rec})d\mathrm{ln}\sigma _{rec}`$, then the most probable value of $`t_\mathrm{\Lambda }`$ will of course have some dependence on $`𝒫_{\ast }`$. However, as we shall argue, this dependence is not too strong provided that $`𝒫_{\ast }`$ satisfies some qualitative requirements, in which case the most probable values of $`t_\mathrm{\Lambda }`$ and $`\sigma _{rec}`$ are actually determined by the fundamental constants involved in the cooling processes which take place in collapsing gas clouds.

### A The cooling boundary

So far, we have assumed that all the galactic-size objects collapsing at any time form luminous galaxies. However, galaxies forming at later times will have a lower density and shallower potential wells. They are thus vulnerable to losing all their gas due to supernova explosions. Moreover, a collapsing cloud fragments into stars only if the cooling timescale of the cloud $`\tau _{cool}`$ is smaller than the collapse timescale $`\tau _{grav}`$. Otherwise, the cloud stabilizes into a pressure-supported configuration. The cooling rate of such pressure-supported clouds is exceedingly low, and it is possible that star formation in the relevant mass range will be suppressed in these clouds even when they eventually cool. Hence, it is conceivable that galaxies that fail to cool during the initial collapse give a negligible contribution to $`\nu `$. Fragmentation of a cloud into stars will be suppressed after a certain critical time, which we shall refer to as the “cooling boundary” $`t_{cb}`$. To determine $`t_{cb}`$, let us first consider the case of a matter-dominated universe (not necessarily flat) without a cosmological constant. An overdensity which is destined to collapse can be described in the spherical model as a part of a closed Friedmann-Robertson-Walker (FRW) universe. The size of this spherical region at the time of recombination is such that it basically contains the mass of a galaxy. The virialization temperature and the density after virialization will be quite independent of what happens outside the region, depending only on its gravitational energy at the time $`t_{vir}`$ when it collapses.
The virial velocity will then be given by $`v_{vir}\sim (GM_g/L)^{1/2}`$, where $`L`$ is the size of the collapsing object at $`t_{vir}`$. The density of the virialized collapsing cloud $`\rho _{vir}`$ is given by $$\rho _{vir}\sim 10^2(Gt_{vir}^2)^{-1}.$$ (23) The virialization temperature can be estimated as $`T_{vir}\sim m_pv_{vir}^2\sim m_p(G^3\rho _{vir}M_g^2)^{1/3}`$. Here $`m_p`$ is the proton mass. The later an object collapses, the colder and more dilute it will be. If there is a cosmological constant, then these estimates still hold to good approximation. Indeed, a spherical region will only collapse if its intrinsic “curvature” term is always dominant with respect to the cosmological constant term. The “potential” energy at the time of collapse and the properties of the virialized cloud will basically remain unaltered. In principle, a spherical region with a cosmological constant could enter a “quasistatic” phase where the gravitational pull is nearly balanced by the repulsion due to the cosmological constant. After a long period of time, this region might finally collapse and virialize to a large enough temperature. However, since the quasistatic phase is unstable, we shall disregard this marginal possibility. The cooling rate $`\tau _{cool}^{-1}`$ of a gas cloud of fixed mass depends only on its density and temperature, but as shown above both of these quantities are determined by $`t_{vir}`$. The timescale needed for gravitational collapse is $`\tau _{grav}\sim t_{vir}`$. Therefore, the condition $`\tau _{cool}<\tau _{grav}`$ gives an upper bound $`t_{cb}`$ on the time at which collapse occurs. Various cooling processes such as bremsstrahlung and line cooling in neutral hydrogen and helium were considered in Ref. For a cloud of mass $`M_g\sim 10^{12}M_{\mathrm{\odot }}`$, cooling turns out to be efficient for $$t<t_{cb}\approx 3\times 10^{10}\mathrm{yr}.$$ (24) In any case, this value of $`t_{cb}`$ should be taken only as indicative, since the present status of the theory of star formation does not allow for very precise estimates.

### B Likely values of $`t_\mathrm{\Lambda }`$

Let us now consider the probability distribution for the three independent variables $`x,x_{rec}`$ and $`\sigma _{rec}`$. This will be proportional to the number of galaxies forming at a time characterized by $`x`$ in a region with given values of $`\sigma _{rec}`$ and $`x_{rec}`$, $$d𝒫(x,x_{rec},\sigma _{rec})\propto 𝒫_{\ast }(\sigma _{rec})\mathrm{exp}\left[-\frac{1}{2}\left(\frac{\mathrm{\Delta }_c}{F}\frac{x_{rec}^{1/3}}{\sigma _{rec}}\right)^2\right]\frac{F^{\prime }}{F^2}\frac{x_{rec}^{1/3}}{\sigma _{rec}}dxdx_{rec}d\mathrm{ln}\sigma _{rec}.$$ (25) Let us assume for simplicity a power-law a priori distribution, $$𝒫_{\ast }(\sigma _{rec})\propto \sigma _{rec}^{-\alpha },$$ (26) where $`\alpha `$ is a constant. Then we can immediately integrate over $`\sigma _{rec}`$ and obtain $$d𝒫(x,x_{rec})\propto x_{rec}^{-\alpha /3}F^{\alpha -1}F^{\prime }dxdx_{rec}.$$ (27) Now we can integrate with respect to the “time” $`x`$ at which galaxies assemble, from the time of recombination $`x_{rec}`$ to the cooling boundary $$x_{cb}=\mathrm{sinh}^2(t_{cb}/t_\mathrm{\Lambda }).$$ (28) The integral is simply the difference in $`F^\alpha `$ between the two boundaries in the integration range, and we shall neglect the contribution at $`x_{rec}`$.
Finally, using $`t_\mathrm{\Lambda }=t_{rec}x_{rec}^{-1/2}`$, we obtain a probability distribution for $`t_\mathrm{\Lambda }`$ $$d𝒫(t_\mathrm{\Lambda })\propto F^\alpha (\mathrm{sinh}^2(t_{cb}/t_\mathrm{\Lambda }))t_\mathrm{\Lambda }^{\frac{2\alpha }{3}-2}d\mathrm{ln}(t_\mathrm{\Lambda }/t_{cb}).$$ (29) Thus, the most probable value of $`t_\mathrm{\Lambda }`$ is determined by $`t_{cb}`$ and $`\alpha `$. In Fig. 3, this distribution is plotted for different values of $`\alpha `$ ranging from 4 to 15. In all these cases we have $$t_\mathrm{\Lambda }\sim t_{cb}.$$ (30) The behaviour of the distribution is different for $`\alpha \le 3`$. Note that $`F(y)\sim y^{1/3}`$ for small $`y`$, whereas $`F`$ saturates at a constant value for large $`y`$. This means that if $`\alpha <3`$, the distribution (29) would favour very small values of $`t_\mathrm{\Lambda }`$. The reason is that for a small $`\alpha `$ the a priori distribution is not too suppressed at large $`\sigma _{rec}`$, and it pays off to increase $`\sigma _{rec}`$ in order to obtain a large number of collapsed objects very soon after recombination. Therefore the time of $`\mathrm{\Lambda }`$ domination can be very short without interfering with galaxy formation. Of course, this would result in an overwhelming majority of the galaxies being in regions which do not look anything like ours. On the other hand, if $`\alpha >3`$, small values of $`\sigma _{rec}`$ are preferred. However, the value of $`\sigma _{rec}`$ should at least be large enough for galaxy formation to occur marginally before the cooling boundary $`t_{cb}`$. Therefore, if the cosmological constant is not to interfere with galaxy formation, the result (30) is expected. More generally, we expect the relation (30) to be valid if the a priori distribution decreases faster than $`\sigma _{rec}^{-3}`$ at small $`\sigma _{rec}`$. With $`t_{cb}`$ from Eq. (24) and the $`t_\mathrm{\Lambda }`$ suggested by observations, the relation (30) is indeed satisfied.

### C Likely values of $`\sigma _{rec}`$

A probability distribution for $`\sigma _{rec}`$ can be obtained by integrating (25) first over $`x_{rec}`$ over the relevant range $`\mathrm{sinh}(x_{rec}^{1/2}t_{cb}/t_{rec})>x^{1/2}`$, and then over $`x`$. The result can be expressed as $$d𝒫(\beta )\propto \beta ^{2(\alpha -3)/3}G(\beta )d\mathrm{ln}\beta ,$$ (31) where we have introduced the $`\sigma _{rec}`$-dependent parameter $$\beta =\frac{t_\sigma }{t_{cb}}=\left(\frac{\mathrm{\Delta }_c}{\sigma _{rec}}\right)^{3/2}\frac{t_{rec}}{t_{cb}}$$ (32) and the function $$G(\beta )=\int _0^{\mathrm{\infty }}\mathrm{exp}\left[-\frac{1}{2F^2}\left(\frac{\beta t}{t_\mathrm{\Lambda }}\right)^{4/3}\right]F^{\prime }\left[F^2+\frac{1}{2}\left(\frac{\beta t}{t_\mathrm{\Lambda }}\right)^{4/3}\right]dx.$$ (33) The function $`G(\beta )`$ is plotted in Fig. 4 (thick solid line). It stays constant for $`\beta <1`$ (towards large $`\sigma _{rec}`$) and it drops to zero around $`\beta \sim 10`$. For larger $`\beta `$ it falls off as $`\beta ^{4/3}\mathrm{exp}(-\beta ^{4/3}/2)`$. This function is multiplied in (31) by the factor $`\sigma _{rec}^{3-\alpha }`$, which depends on the a priori distribution for $`\sigma _{rec}`$. If this factor is a decreasing function of $`\sigma _{rec}`$ (i.e. $`\alpha >3`$) then (31) peaks between $`1<\beta \lesssim 10`$. This is illustrated in Fig. 4 (thin curves) for $`\alpha =4`$ and $`5`$.
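The qualitative behaviour of $`G(\beta )`$ described above is easy to verify numerically. The rough sketch below (assuming SciPy) uses $`t(x)/t_\mathrm{\Lambda }=\mathrm{arcsinh}(x^{1/2})`$, as implied by the parametrization $`x=\mathrm{sinh}^2(t/t_\mathrm{\Lambda })`$ of Eq. (28); it approximates $`F^{\prime }`$ by a central difference and truncates the $`x`$ integral at a large but finite value, both of which are numerical shortcuts rather than part of the analysis:

```python
import numpy as np
from scipy.integrate import quad

def F(x):
    """Growth-suppression factor of Eq. (10)."""
    I, _ = quad(lambda w: w**(-1/6) * (1 + w)**(-3/2), 0.0, x, limit=200)
    return (5/6) * np.sqrt((1 + x) / x) * I

def Fp(x, h=1e-5):
    return (F(x * (1 + h)) - F(x * (1 - h))) / (2 * x * h)

def G(beta):
    """Eq. (33), with the integral truncated at x = 1e4."""
    def integrand(x):
        u = (beta * np.arcsinh(np.sqrt(x)))**(4/3)   # (beta t / t_Lambda)^(4/3)
        f = F(x)
        return np.exp(-u / (2 * f**2)) * Fp(x) * (f**2 + u / 2)
    val, _ = quad(integrand, 1e-8, 1e4, limit=200)
    return val

for b in (0.1, 1.0, 3.0, 10.0):
    print(b, G(b))   # roughly flat for beta < 1, strongly suppressed by beta ~ 10
```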
From the definition of $`\beta `$ we have $$\sigma _{rec}=\frac{\mathrm{\Delta }_c}{(1+z_{rec})}\left(\frac{2}{3\beta H_0t_{cb}\sqrt{\mathrm{\Omega }_M}}\right)^{2/3}\approx 1.1\times 10^{-3}\beta ^{-2/3}.$$ (34) Here, we have used the relation $`Hx^{1/2}=2/(3t_\mathrm{\Lambda }\sqrt{\mathrm{\Omega }_M})`$, where all quantities (including the matter density parameter $`\mathrm{\Omega }_M`$) are evaluated at the present time, and for our numerical estimate we have taken $`z_{rec}=1000`$, $`t_{cb}=3\times 10^{10}\mathrm{yr}`$, $`H_0=100h\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$, with $`h=0.7`$, and $`\mathrm{\Omega }_M=0.3`$. For $`\beta \sim 1`$, as suggested by the distribution (31), we have that the most likely values of $`\sigma _{rec}`$ are of the order of $`10^{-3}`$. This is close to the observationally suggested values $`\sigma _{rec}=(3.3–2.4)\times 10^{-3}`$. Anthropic bounds on the density contrast have recently been discussed by Tegmark and Rees. Instead of $`\sigma _{rec}`$, they used the amplitude of the density fluctuations at horizon crossing, $`Q`$; the relation between the two is roughly $`Q\sim 10^{-2}\sigma _{rec}`$. They imposed a lower bound on $`Q`$ by requiring that galaxies form prior to the “cooling boundary”, $`t_\sigma \lesssim t_{cb}`$. This gives $`Q\gtrsim 10^{-6}`$. To obtain an upper bound, it has been argued that for large values of $`Q`$ galaxies would be too dense and frequent stellar encounters would disrupt planetary orbits. To estimate the rate of encounters, the relative stellar velocity was taken to be the virial velocity $`v_{vir}\sim 200\mathrm{km}\mathrm{s}^{-1}`$, resulting in a bound $`Q\lesssim 10^{-4}`$. However, Silk has pointed out that the local velocity dispersion of stars in our Galaxy is an order of magnitude smaller than $`v_{vir}`$. This gives $`Q\lesssim 10^{-3}`$, which is a rather weak constraint. This issue does not arise in the approach we take in the present paper, since in our case large values of $`Q`$ are suppressed by the a priori distribution $`𝒫_{\ast }(\sigma _{rec})`$.

### D The time coincidence

Finally, we should check that the introduction of a cooling boundary does not spoil the coincidence $`t_G\sim t_\mathrm{\Lambda }`$. In fact, this seems rather clear from Fig. 2. Introducing the cooling boundary basically amounts to disregarding the probability density above a certain horizontal line $`t_G=t_{cb}`$. The probability distribution for $`t_G/t_\mathrm{\Lambda }`$ below the horizontal line is somewhat different from that in the whole plane, but clearly it still peaks around $`t_G\sim t_\mathrm{\Lambda }`$. To quantify this effect, we have integrated (12) with respect to $`x_{rec}`$ over the range $`\mathrm{sinh}(x_{rec}^{1/2}t_{cb}/t_{rec})>x^{1/2}`$. The resulting distribution for $`x`$ is proportional to the integrand in the right-hand side of (33). For $`\beta =2`$, this probability density is shown in Fig. 1 (curve $`b`$). The peak is only slightly shifted towards smaller values of $`t_G/t_\mathrm{\Lambda }`$. Cooling failure is not the only mechanism that can in principle inhibit the number of civilizations at low $`\sigma _{rec}`$. It is possible, for example, that the stellar initial mass function (IMF) depends on the protogalactic density $`\rho _{vir}`$, so that the number of carbon-forming stars drops rapidly towards very low values of $`\rho _{vir}`$. If the a priori distribution $`𝒫_{\ast }(\sigma _{rec})`$ is a decreasing function of $`\sigma _{rec}`$, this can result in a peaked distribution $`d𝒫/d\mathrm{ln}t_\mathrm{\Lambda }`$.
Quite similarly, if the number of relevant stars grows towards smaller $`\rho _{vir}`$, a peaked distribution is obtained for an increasing function $`𝒫_{\ast }(\sigma _{rec})`$. Our present understanding of star formation is insufficient to determine the dependence of the IMF on $`\rho _{vir}`$, but once it is understood, the probability distribution for $`t_\mathrm{\Lambda }`$ can be calculated as outlined above.

## V Conclusions

In this paper we suggested a possible explanation for the near-coincidence of the three cosmological timescales: the time of galaxy formation $`t_G`$, the time when the cosmological constant starts to dominate the energy density of the universe $`t_\mathrm{\Lambda }`$, and the present age of the universe $`t_0`$. Since this coincidence involves specifically the time of our existence as observers, it lends itself most naturally to the consideration of anthropic selection effects. We considered a model in which the cosmological constant is a random variable with a flat a priori probability distribution. We showed that a typical galaxy in this model forms at a time $`t_G\sim t_\mathrm{\Lambda }`$. We further demonstrated that a typical civilization should determine the value of the cosmological constant at $`t_0\sim t_G`$. Thus we should not be surprised to find ourselves discussing the cosmic time coincidence. We also considered a model in which both the cosmological constant $`\mathrm{\Lambda }`$ and the density contrast $`\sigma _{rec}`$ are random variables. The galaxy formation in this case is spread over a much wider time interval, and we had to account for the fact that the cooling of protogalactic clouds collapsing at very late times is too slow to allow for efficient fragmentation and star formation. We therefore disregarded all galaxies formed after the “cooling boundary” time $`t_{cb}`$. We assumed that the a priori distribution for $`\sigma _{rec}`$ is a decreasing power law and found that, for a sufficiently steep power, a typical observer detects $`\sigma _{rec}\sim 10^{-3}`$–$`10^{-4}`$, close to the values inferred from observations. Such observers are likely to find themselves living at $`t_0\sim t_\mathrm{\Lambda }`$ in a galaxy formed at $`t_G\sim t_\mathrm{\Lambda }`$ in a region of the universe where $`t_\mathrm{\Lambda }\sim t_{cb}`$, also close to the observationally suggested value. Our model with variable $`\mathrm{\Lambda }`$ and $`\sigma _{rec}`$ can be developed further in several directions. Instead of taking a flat distribution for $`\rho _\mathrm{\Lambda }`$ and a power-law distribution for $`\sigma _{rec}`$, one could use the methods of Refs. to calculate the a priori distributions for these variables in the framework of some inflationary model. One could also use a more refined model of structure formation and improve on our treatment of cooling failure, replacing the sharp cutoff at $`t=t_{cb}`$ with a more realistic model. We believe, however, that even in the present, simplified form our model indicates that an anthropic selection for $`\mathrm{\Lambda }`$ and $`\sigma _{rec}`$ is a viable possibility. Finally, we should note that the coincidence in the timescales requires an explanation even in models involving a quintessence component. In models of quintessence the universe at late times is dominated by a scalar field $`\varphi `$, slowly evolving down its potential $`V(\varphi )`$. It has been argued (by Zlatev et al.
) that such models do not suffer from the cosmic time coincidence problem, because the time $`t_\varphi `$ of $`\varphi `$-domination is not sensitive to the initial conditions. This time, however, does depend on the details of the potential $`V(\varphi )`$, and observers should be surprised to find themselves living at the epoch when quintessence is about to dominate. More satisfactory would be a model in which the potential depends on two fields, say $`\varphi `$ and $`\chi `$, with $`\chi `$ slowly varying in space, making the time of $`\varphi `$-domination position-dependent. Such models are not difficult to construct in the context of inflationary cosmology. One could then apply the principle of mediocrity to determine the most likely value of $`t_\varphi `$.

## Acknowledgements

We are grateful to Ken Olum for his comments on the manuscript. J.G. acknowledges support from CIRIT grant 1998BEAI400244. M.L. acknowledges support from NASA Grant NAG5-6857. A.V. was supported in part by the National Science Foundation.

## Appendix: The probability distribution for $`\mathrm{\Lambda }`$

In this Appendix we briefly discuss the probability distribution for the cosmological constant, giving a simplified version of the calculation presented in Ref. In a universe where the cosmological constant is non-vanishing, a primordial overdensity will eventually collapse provided that its value at the time of recombination exceeds a certain critical value $`\delta _c^{rec}`$. In the spherical collapse model this is estimated as $`\delta _c^{rec}=1.13x_{rec}^{1/3}`$ (see e.g. ). Hence, the fraction of matter that eventually clusters in galaxies can be roughly approximated as: $$\nu (x_{rec})\approx \mathrm{erfc}\left(\frac{\delta _c^{rec}}{\sqrt{2}\sigma _{rec}(M_g)}\right)=\mathrm{erfc}\left(\frac{0.80x_{rec}^{1/3}}{\sigma _{rec}(M_g)}\right).$$ (35) Here, erfc is the complementary error function and $`\sigma _{rec}(M_g)`$ is the dispersion in the density contrast at the time of recombination on the relevant galactic mass scale $`M_g\sim 10^{12}M_{\mathrm{\odot }}`$. The logarithmic distribution $`d𝒫/d\mathrm{ln}x_{rec}\propto x_{rec}\nu (x_{rec})`$ is plotted in Fig. 5. It has a rather broad peak which spans two orders of magnitude in $`x_{rec}`$, with a maximum at $$x_{rec}^{peak}\approx 2.45\sigma _{rec}^3.$$ (36) In accordance with the principle of mediocrity, we should expect to measure a value of the cosmological constant within this broad peak of the distribution. And indeed, this may actually be the case. The distribution (35) is characterized by the parameter $`\sigma _{rec}`$. As noted by Martel et al., this parameter can be inferred from observations of the cosmic microwave background anisotropies, although its value depends on the assumed value of the cosmological constant today. For instance, assuming that the present cosmological constant is $`\mathrm{\Omega }_{\mathrm{\Lambda },0}=0.8`$, and the relevant galactic co-moving scale is in the range $`R=(1`$–$`2)`$ Mpc, Martel et al. found $`\sigma _{rec}=(2.3–1.7)\times 10^{-3}`$. In this estimate, they also assumed a scale-invariant spectrum of density perturbations, a value of $`70\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ for the present Hubble rate, and they defined recombination to be at redshift $`z_{rec}\approx 1000`$ (this definition is conventional, since the probability distribution for the cosmological constant does not depend on the choice of reference time).
Thus, taking into account that $`x`$ scales like $`(1+z)^3`$ in equation (36), one finds that the peak of the distribution for the cosmological constant today is at $`x_0^{peak}\approx 29.8`$–$`12`$. The value corresponding to the assumed $`\mathrm{\Omega }_{\mathrm{\Lambda },0}=0.8`$ is $`x_0=4`$, certainly within the broad peak of the distribution and not far from its maximum. If instead we assume that the measured value is $`\mathrm{\Omega }_{\mathrm{\Lambda },0}=0.7`$, which corresponds to $`x_0=2.33`$, the new inferred values for $`\sigma _{rec}`$ correspond to the peak value $`x_0^{peak}\approx 88`$–$`34`$. In this case, the measured value would be at the outskirts of the broad peak, where the logarithmic probability density is about an order of magnitude smaller than at the peak, but still significant. Thus, even though there may be uncertainties in the inferred value of $`\sigma _{rec}`$ on the relevant scales, it seems fair to say that any observed value of $`\mathrm{\Omega }_{\mathrm{\Lambda },0}\gtrsim 0.7`$ is in good agreement with the principle of mediocrity.
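The coefficient in Eq. (36) and the present-day peak values quoted above can be checked with a few lines of code; the sketch below (assuming SciPy) maximizes the logarithmic distribution $`x\nu (x)`$ of Eq. (35) and then applies the $`(1+z)^3`$ scaling with $`z_{rec}=1000`$:

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import erfc

# Maximize x * erfc(0.80 x^(1/3) / sigma).  With u = 0.80 x^(1/3) / sigma,
# this is equivalent to maximizing u^3 * erfc(u).
res = minimize_scalar(lambda u: -u**3 * erfc(u), bounds=(0.1, 5.0),
                      method='bounded')
coeff = (res.x / 0.80)**3
print(coeff)                             # ~2.45, as in Eq. (36)

# Present-day peak, x ~ (1+z)^3, for the Martel et al. range of sigma_rec
for sigma in (2.3e-3, 1.7e-3):
    print(coeff * sigma**3 * 1001**3)    # ~29.8 and ~12
```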
# Surface versus crystal-termination effects in the optical properties of surfaces

## I Introduction

Reflectance Anisotropy Spectroscopy (RAS) and Surface Differential Reflectance (SDR) are surface-sensitive optical techniques, and are used to obtain information on the atomic and electronic structures of surfaces. In the early days of these spectroscopies, the spectra were explained in terms of transitions across surface states, although this view has been challenged by calculations showing that surface-geometry effects could also determine the spectra through surface perturbations on the optical matrix elements of transitions across bulk states. Now the attitude seems to be reversed: after realizing that many RA lineshapes are similar to the imaginary part of the bulk dielectric function, or to its energy derivative, the belief is growing that surface optical spectra are mostly determined by bulk effects, and are therefore not very useful as a tool of surface characterization. In this paper, we discuss the origin of these bulk-like features, and at the same time emphasize the presence in optical spectra of other features, more related to the surface structure. In 1996, Rossow, Mantese and Aspnes recognized that RAS data on chemically saturated surfaces generally resemble the energy-derivative lineshapes of the corresponding bulk spectra, $`d\mathrm{Im}[\epsilon _b(\omega )]/d\omega `$, while surfaces with unsaturated dangling bonds (DBs) often yield RAS lineshapes resembling the bulk spectrum, $`\mathrm{Im}[\epsilon _b(\omega )]`$. They explained the latter lineshapes in terms of surface-induced changes of the electron-hole interaction and of local fields, while derivative-like spectra were explained in terms of surface perturbations on the energies of bulk states. From these findings, they inferred the occurrence of shorter lifetimes of electrons and holes near the surface. We show here that this deduction is not necessary. Furthermore, we demonstrate the existence of other mechanisms able to produce energy-derivative lineshapes. A further step along the way of attributing most RAS and SDR features to bulk effects has been taken by Uwai and Kobayashi (UK) in 1997. They measured surface photoabsorption (SPA) spectra for different conditions of the GaAs (001) and (111) surfaces, from which the changes of the surface dielectric tensor were extracted. The imaginary parts of such changes have peaks at 2.6-3 eV and at 4.5-4.7 eV, close to the main structures, $`E_1`$ and $`E_2`$, of the bulk dielectric function. The lineshapes are similar to the imaginary part of the bulk dielectric function in the case of the (001) surface, and to its derivative for the (111) surface. UK conclude that these two peaks are not due to transitions involving surface states, but to modified bulk electronic transitions. They claim that the surface termination effect, first discussed by one of the present authors in 1975, is responsible for the occurrence of bulk-like features in surface spectra. According to UK, this effect mostly consists in a reduction of the polarizability below the surface, arising from the quenching of bulk-state wavefunctions near the surface, due to their vanishing outside the crystal. This might be a long-range effect, extending a hundred Ångströms below the surface, which could hardly be included in slab calculations.
We show here that, although the crystal-termination effect is in fact present, the way it has been described by UK is rather naive; not a bare reduction of the polarizability, but a distortion of its lineshape must occur (and indeed occurs) to produce a nonvanishing RAS or SDR signal. However, the resulting effect is by no means of long range, and is in fact included in slab calculations. Moreover, while the crystal-termination effect often yields derivative-like lineshapes, we have not found bulk-like spectra arising from it. The GaAs(110) surface is a good test case for our calculations and discussions, because of its well-defined atomic structure and because of the occurrence of (modest) surface effects partially overlapping in energy with (predominant) bulk effects. The As-rich GaAs(100) $`\beta 2`$(2x4) surface will also be considered.

## II Theory

We calculate the surface contribution to reflectance, that is, its relative deviation with respect to the Fresnel formulas, according to the three-layer model: $$\frac{\mathrm{\Delta }R_i}{R}=\frac{4\omega }{c}\mathrm{cos}\theta \,d\,\mathrm{Im}\left(\frac{\epsilon _{si}(\omega )-\epsilon _b(\omega )}{\epsilon _b(\omega )-1}\right),$$ (1) for s-light polarized parallel to the i-direction (i = x or y) in the surface plane, where $`\theta `$ is the angle of incidence, $`d`$ the depth of the surface layer, $`\epsilon _{si}(\omega )`$ is the ii diagonal component of the surface-layer dielectric tensor, and $`\epsilon _b(\omega )`$ is the isotropic bulk dielectric function. For p–light incident in the iz–plane, the anisotropic three–layer model yields: $$\frac{\mathrm{\Delta }R_i}{R}=\frac{4\omega }{c}\mathrm{cos}\theta \,d\,\mathrm{Im}\left(\frac{(\epsilon _{si}(\omega )-\epsilon _b(\omega ))(\epsilon _b(\omega )-sin^2\theta )+\epsilon _b^2(\omega )sin^2\theta (1/\epsilon _{sz}(\omega )-1/\epsilon _b(\omega ))}{(\epsilon _b(\omega )-1)(\epsilon _b(\omega )cos^2\theta -sin^2\theta )}\right).$$ (2) The surface-layer dielectric tensor, assumed to be diagonal, is obtained by subtracting the bulk dielectric function from the calculated slab dielectric tensor, with a suitable choice of the surface-layer depth, $`d`$. The reflectivity for s light turns out to be independent of the choice of $`d`$, and coincides with the microscopic formulas that do not rely on the three-layer model. This model is instead needed to obtain the p-light reflectivity while avoiding the computationally very demanding inversion of the dielectric susceptibility tensor.

## III Results

We start by calculating the normal-incidence reflectance anisotropy (RA), $`2(R_x-R_y)/(R_x+R_y)`$, of GaAs(110). The latter is the cleavage surface of GaAs, and, despite not being reconstructed, undergoes large relaxations. Its equilibrium structure, known as the “rotation-relaxation model”, is well known both from the experimental and the theoretical sides: the surface As atoms relax toward the vacuum, and Ga atoms move in the opposite direction, recovering a quasi-planar $`sp^2`$ bonding with their three As neighbors. We represent the surface using a slab of 31 atomic layers, where the actual atomic positions are taken from a Car-Parrinello total energy minimization. Since the slab has two equivalent surfaces, the computed slab polarizability must be divided by two. We then also consider a polar surface of GaAs: the As-rich (100) $`\beta 2`$(2x4).
The latter is known to be the stable reconstruction for this surface, and is a regular array of two As dimers and two dimer vacancies (the unit cell contains only two As dimers), aligned along the \[$`\overline{1}10`$\] direction. Also in this case the actual atomic positions are taken from a Car-Parrinello total energy minimization. In the case of GaAs(100), since the geometry does not allow one to build a slab with two equivalent surfaces, the calculation is done for a system with only one reconstructed surface, i.e. by including a real-space cutoff function (a squared cosine, approaching one on the surface of interest and zero on the other) in the optical transition-probability calculations, to eliminate the contribution of the back surface. The electronic states of the slab, as well as those of bulk GaAs, are calculated according to the $`sp^3s^{\ast }`$ tight-binding method, as in Ref. The As–As tight-binding interaction parameters are those of Ref. The imaginary part of the slab dielectric function is obtained by considering transitions at a number of k points in the irreducible part of the two-dimensional Brillouin Zone (IBZ). The first issue we address is the number of k points needed to obtain good convergence. In Fig. 1 we show the RA of GaAs(110) calculated with 256, 1024 and 4096 special k-points in the IBZ; the curves corresponding to the first two cases are clearly distinct from each other. The calculation with 4096 k points, instead, is almost coincident with that with 1024 k points. This means that 1024 k points are needed to achieve full quantitative convergence of the GaAs(110) RA. This result might be a peculiar property of this and similar surfaces. In the case of the GaAs(100) $`\beta 2`$(2x4) surface, a good convergence of the spectrum is already obtained using a number of k-points equivalent to 64 in the (1x1) surface cell. Similar calculations carried out on Si(110):H show that the RA spectrum is already converged with 64 k points (usually, calculations are made with 64 k points or fewer, since, even with a well-converged k-point summation, only qualitative accuracy can be achieved, due to the neglect of excitonic and local-field effects). The calculated RAS for GaAs(110) is qualitatively similar to previous calculations, carried out using tight-binding or ab-initio methods, and to experiments. The peak at about 2.9 eV embodies a substantial contribution of transitions across surface states or resonances (at variance with Ref., but in agreement with Refs.), while the higher-energy structures are essentially due to transitions across surface-perturbed bulk states. The main effect of the k-point convergence achieved in the present calculation was to reduce the intensity of the dip just above the 2.9 eV peak and of the subsequent structures. Having achieved quantitative convergence with respect to the number of k points, we can now look at the convergence with respect to the number of layers, which is the main interest here, since changes in lineshapes occurring for very thick slabs would indicate the presence of the long-range effect assumed by UK. In Fig. 2a we show the GaAs(110) RA calculated using slabs of 11, 31 and 93 layers and 1024 k points. The latter two curves are almost indistinguishable, while the 11-layer curve is also close to them. This means that the calculation has already converged with 31 layers, and that the aforementioned long-range effect does not occur.
The same is true for the polar (100) surface: in Fig. 2c we show the calculated RA for the GaAs(100) $`\beta 2`$(2x4) surface, where small differences show up using slabs of 16, 20 and 40 layers. The slow convergence with slab thickness observed in calculations when a smaller set of k points is used is therefore due to the error caused by the small number of k points, which varies randomly with the number of planes. This is nicely demonstrated by Fig. 2b, where the same series of slabs as in Fig. 2a (11–31–93 layers) has been used to compute the GaAs(110) RAS spectrum with a set of 256 k–points. In view of the different structures of Eqs. (1), for s light, and (2), for p light, one could speculate that the long-range effect might cancel in the former case, and appear in the latter. To check this possibility, we present in Fig. 3 the surface contribution to p-light reflectance calculated using the anisotropic three-layer model at an angle of incidence of 60 degrees. Again no difference is present between the curves calculated with 31 and 93 layers, definitely showing that surface optical properties are well converged with slabs of 31 layers. The present results increase our confidence in slab calculations, not only since very long-range effects, which can hardly be embodied therein, are excluded, but also because thin slabs, such as the 11- and 16-layer ones, which are the only ones affordable in ab-initio calculations, already yield rather good results. Let us now discuss in more detail the crystal-termination effect invoked by UK. It was first addressed by one of the present authors in connection with the reflectivity at the direct minimum gap of a semiconductor. By disregarding the microscopic structure of the surface, and describing it just as an infinite potential barrier preventing electrons from escaping into vacuum (the crystal termination), the reflectivity was obtained starting from the wavefunctions calculated according to the effective-mass approximation. Since the wavefunctions must vanish at the crystal-termination plane, the envelope plane-waves occurring in an infinite crystal are replaced by sine-type standing waves. When looking at the imaginary part of the local dielectric function, $`\mathrm{Im}[\epsilon (z,\omega )]`$, this yields a region below the surface where this quantity is smaller than in the bulk crystal. The depth of such a region is of the order of $`\pi /k_z`$, $`k_z`$ being the largest wavevector of the relevant transitions. In the case of the minimum gap, hence, $`k_z=[2m^{\ast }(\mathrm{\hbar }\omega -E_g)/\mathrm{\hbar }^2]^{1/2}`$, where $`m^{\ast }`$ is the reduced electron-hole effective mass and $`E_g`$ the direct-gap energy. By taking an effective mass of 0.1 and $`\mathrm{\hbar }\omega -E_g`$ as 0.1 eV, we estimate this depth to be of the order of 60 Ångström. This is the quenching of the (bulk) dielectric function below the surface that UK assume to be the most important effect. However, crystal termination affects the optical properties also in another way: since the matrix elements of the momentum operator must be calculated between the surface-perturbed wavefunctions (sine-type in the case discussed here), they are different from those of the infinite crystal. More explicitly, $`k_z`$ is no longer conserved in subsurface optical transitions; the breaking of this selection rule yields spectra distorted with respect to (namely, broader than) the corresponding bulk spectra, acting as an additional broadening localized near the surface.
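The depth estimate quoted above is straightforward to reproduce; here is a minimal sketch, using the illustrative values $`m^{\ast }=0.1m_e`$ and $`\mathrm{\hbar }\omega -E_g=0.1`$ eV from the text:

```python
import numpy as np

hbar = 1.0546e-34        # J s
m_e  = 9.109e-31         # kg
eV   = 1.602e-19         # J

m_star = 0.1 * m_e       # reduced electron-hole effective mass
E_exc  = 0.1 * eV        # hbar*omega - E_g, excess energy above the gap

k_z   = np.sqrt(2 * m_star * E_exc) / hbar   # largest relevant wavevector
depth = np.pi / k_z
print(depth * 1e10)      # ~60 Angstrom, as quoted in the text
```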
We will show below that this additional broadening is the most important crystal-termination effect influencing the surface optical properties of GaAs. In contrast to the case discussed above, the main structures of bulk spectra, which yield the most prominent bulk-related structures in RAS and SDR, are due to transitions at saddle points of the joint density of states. The characteristic $`k_z`$ at saddle points is much larger than at the direct gap, because a large region of the Brillouin Zone is available for optical transitions at the saddle-point energy. For instance, vertical transitions along the whole $`\mathrm{\Lambda }`$ line are responsible for the $`E_1`$ structure in GaAs. Hence the largest $`k_z`$ is of the order of the BZ boundary, $`\pi /a`$, and the depth of the surface-perturbed region is of the order of the lattice constant, $`a`$. This explains why we have not found in Figs. 2 and 3 any indication of long-range effects close to saddle-point energies. The simplest model of quenching is to assume that the polarizability is completely suppressed within some depth $`d`$ below the surface. This, however, would be equivalent to shifting the surface by $`d`$, and would not give any contribution to the reflectance. Hence we consider a slightly different model, where the polarizability is partly quenched, say by 50 percent, within a depth $`d`$. In practice we assume, within the depth $`d`$, a surface dielectric function of the form: $$\epsilon _{si}(\omega )=\mathrm{f}_i\epsilon _b(\omega -\mathrm{\Delta }\omega _i,\gamma _i)$$ (3) where $`\epsilon _b(\omega )`$ is the bulk dielectric function, $`\mathrm{f}_i`$ ($`\le 1`$) represents the quenching, $`\gamma _i`$ is the broadening (possibly different from the bulk one), and $`\mathrm{\Delta }\omega _i`$ is a possible frequency shift. When $`\mathrm{\Delta }\omega _i=0`$, $`\gamma _i=\gamma _{bulk}`$, and $`\mathrm{f}_i=1`$, $`\epsilon _{si}(\omega )`$ coincides with the bulk dielectric function. Taking $`\mathrm{f}_i<1`$ with $`\gamma _i=\gamma _{bulk}`$ and $`\mathrm{\Delta }\omega _i=0`$ would not modify the s-light reflectivity, since the numerator and denominator in equation (1) are proportional to each other, and hence the fraction is a real number, with vanishing imaginary part. However, this is not the case for p-light reflectivity, which may undergo some change. The full line in Fig. 4a shows the surface contribution calculated in this way. It is clear from the figure that this model has no relation to the output of the slab calculation (dashed line); hence the pure quenching effect cannot account for the surface contribution to reflectance. We consider next the pure broadening model, i.e. $`\mathrm{f}_i=1`$, $`\mathrm{\Delta }\omega _i=0`$, and $`\gamma _i>\gamma _{bulk}`$. Now the surface is assumed to have the same dielectric function as the bulk has, but with broader lineshapes, as a consequence of the breaking of $`k_z`$ conservation near the surface. In Fig. 4b we show the surface contribution to the reflectivity of normally incident light, with polarization perpendicular and parallel to the $`[1\overline{1}0]`$ chains, as calculated from the slab polarizability (dashed and dotted lines), and according to the broadening model (full line). We assume a broadening of 100 meV at the surface, while it is 30 meV in the bulk. The curves are rather similar, showing lineshapes resembling the energy-derivative of the imaginary part of the dielectric function.
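To see how pure broadening generates derivative-like lineshapes, one can combine Eq. (1) with a toy model of the bulk dielectric function. The sketch below uses a single Lorentz oscillator standing in for the $`E_1`$ critical point; the oscillator position, strength, and surface-layer depth are made-up illustrative values, not parameters from our calculation. It shows that the resulting $`\mathrm{\Delta }R/R`$ closely tracks the energy derivative of $`\mathrm{Im}[\epsilon _b]`$:

```python
import numpy as np

def eps_lorentz(w, w0=3.0, A=8.0, gamma=0.03):
    """Toy bulk dielectric function: one Lorentz oscillator (energies in eV)."""
    return 1.0 + A / (w0**2 - w**2 - 1j * gamma * w)

w = np.linspace(2.5, 3.5, 2000)
eps_b = eps_lorentz(w, gamma=0.03)    # bulk broadening, 30 meV
eps_s = eps_lorentz(w, gamma=0.10)    # surface broadening, 100 meV

# Surface contribution to s-light reflectance, Eq. (1), at normal incidence
d = 5e-10                                       # surface-layer depth (m)
c = 3e8
omega = w * 1.602e-19 / 1.0546e-34              # eV -> rad/s
dR_over_R = (4 * omega / c) * d * np.imag((eps_s - eps_b) / (eps_b - 1.0))

# Compare with the derivative lineshape d Im[eps_b] / d omega
deriv = np.gradient(np.imag(eps_b), w)
print(np.corrcoef(dR_over_R, deriv)[0, 1])      # close to +1: derivative-like
```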
The surface-state related peak at about 2.9 eV is also embodied in the broadening-model spectrum (this occurs only because the peak mentioned above overlaps in energy with the $`E_1`$ bulk structure around 3 eV). However, the differences between the broadening model and microscopic calculations, which seem to be small in this spectrum, become very large in the RA spectrum, shown in Fig. 4c. Here the dashed line is obtained as in Fig. 1, that is from the 31-layer slab calculation. We can produce an RA curve according to the broadening model by assuming that the depth where the dielectric function is broader than in bulk is different for the two polarizations, or, in an equivalent manner, that also some quenching of the dielectric function occurs ($`\mathrm{f}_i<1`$), whose amount depends on the direction of light polarization ($`\mathrm{f}_x\ne \mathrm{f}_y`$). By assuming a suitable depth or quenching difference, we produce the full line in Fig. 4c, which is of course proportional to the continuous line in Fig. 4b. The two curves in Fig. 4c are markedly different, although the peak at 2.9 eV (the only spectral feature related to transitions across surface states!) is present in both curves. Hence we can conclude by recognizing the occurrence of bulk-derivative-like features in surface optical spectra calculated for a given polarization of light, due to the broader lineshape of the dielectric function near the surface. This broader lineshape is due to the breaking of the $`k_z`$ conservation rule (namely, it is a crystal-termination effect), is included in slab calculations, and does not imply a shorter lifetime of electrons near the surface than in bulk. When anisotropy difference spectra are taken for GaAs(110), however, these features largely cancel, so that the surviving RA has almost no relation to the broadening model. Of course, such cancellation may be smaller at other surfaces, so that derivative-like lineshapes may be present in RAS and SDR spectra. As a last point, we can assume that the (bulk) dielectric function near the surface can undergo small shifts of peak positions ($`\mathrm{\Delta }\omega _i\ne 0`$), in addition to broadening and quenching. To this aim it is not necessary, as assumed in Ref., that electrons and holes excited in optical transitions be kept close to the surface by their short lifetimes, in order to be shifted in energy by the surface potential. The required small shifts of the peaks of the surface dielectric function may be produced by the surface perturbation of the wavefunctions and, consequently, of the local polarizability. It is a matter of fact that the layer-projected density of states may be different from the bulk one. The same, of course, can occur for the z-dependent dielectric function, whose average over the first few layers yields the surface dielectric function. We have tried to obtain bulk-like difference (RAS or SDR) spectra by suitably varying $`\mathrm{f}_i`$, $`\gamma _i`$, and $`\mathrm{\Delta }\omega _i`$, i.e. by shifting, broadening and quenching the bulk spectrum. By varying the parameters above, we often obtained derivative-like spectra, never obtained bulk-like spectra, and sometimes hybrid spectra (see Fig. 5a, full line). It is worth noticing that this hybrid spectrum is rather similar (although energy-shifted) to the microscopically calculated RAS spectrum, also shown in Fig. 5a (dashed line).
For some choices of the parameters we obtained approximately bulk-like difference spectra, that is, spectra showing peaks close to the two bulk critical-point energies but, unlike the bulk spectrum, with a negative region in between (Fig. 5b, full line). A similar RA spectrum is the result of a realistic TB slab calculation carried out for another GaAs surface, the (polar) (100) $`\beta 2`$(2x4). The calculated RAS is shown in Fig. 5b by the short–dashed line, while the experimental spectrum, more similar to the bulk one, corresponds to the long–dashed line. This suggests that the surface-exciton and surface local-field effects may be decisive in yielding bulk-like surface spectra. A recent calculation for Si(110):H, where the experimental RA lineshape is bulk-like, shows indeed that the surface local-field effect, treated therein according to the point-dipole approximation, is crucial to obtain a bulk-like theoretical lineshape.

## IV Summary and Conclusions

To summarize, we have shown that surface effects on the optical properties of GaAs are localized within a few monolayers of the surface. As discussed in Section III, these results do not depend explicitly on the particular system considered, suggesting a more general validity, i.e. indicating that for a wide class of semiconductor surfaces the surface effects on optical properties, including the crystal-termination effect, are well described using slabs of a few tens of monolayers. In the absence of peculiar features due to surface states, the crystal-termination effect can be phenomenologically modeled as a shift, broadening and reduction of the bulk spectrum. Many combinations of these parameters yield surface spectra resembling the derivative of the bulk absorption spectrum. It must be emphasized that the amounts of shift, broadening and reduction are ultimately determined by the microscopic structure of the surface; furthermore, these bulk-derived structures coexist with transitions directly involving surface states. After subtraction of individual spectra to obtain RAS or SDR spectra, the resulting lineshape can be qualitatively different from a derivative-like lineshape, as in the case of GaAs(110). On the other hand, approximately bulk-like spectra are obtained for some values of the parameters. However, truly bulk-like spectra can hardly be obtained in terms of the crystal-termination effect, and they did not even occur as results of our realistic slab calculations. Hence, many-body effects like the surface-exciton or the surface local-field effect, not included in the one-electron theory, seem to be necessary to obtain truly bulk-like lineshapes. In conclusion, we agree with Rossow et al. and with UK that some features of surface spectra originate from transitions across bulk states. Surface termination effects, however, involve a more complex mechanism than that described by UK. In fact, the broadening of the bulk dielectric function near the surface is the most important crystal-termination effect, due to the breaking of $`k_z`$ conservation at surfaces. It does not imply, however, that photogenerated electrons and holes have shorter lifetimes than in the bulk. This effect yields derivative-like lineshapes, such as those obtained at many chemically saturated surfaces. Finally, we would like to stress that many effects concur to determine surface optical properties.
It is not possible to interpret the optical spectra of all surfaces in terms of a single effect, be it the crystal-termination effect or transitions across surface states. Caution must also be used in assigning spectral features to bulk-state transitions solely because of their energy positions, as exemplified by the case of GaAs(110), where we found that the main peak of the calculated spectrum, occurring almost at the same energy as the $`E_1`$ bulk feature, contains a substantial contribution of transitions across surface states.

## V Acknowledgements

We thank D.E. Aspnes for stimulating our interest in the problem, and A.I. Shkrebtii for his contributions to the tight-binding code used in this work. The calculations were performed on a parallel platform (Cray T3D) at the Interuniversity Consortium of Northeastern Italy for Automatic Computing (CINECA), under the INFM parallel computing initiative, account cmprmpi0. This work has been financially supported in part by the Italian Ministry of the University and Scientific Research (MURST–COFIN 97).
# Formal Modeling in a Commercial Setting: A Case Study

## 1 Introduction

For a long time, researchers and practitioners have been seeking ways to improve productivity in the software development process. Precise documentation of software specifications has been advocated as one of the viable approaches. If high-quality specifications are crucial to the success of system developments, it seems logical to apply rigorous specification techniques to the requirements to ensure their completeness and consistency. The majority of successful applications of formal modeling have been confined to safety-critical projects where software correctness is the pivotal goal. In contrast, the commercial software industry seeks practical techniques that can be seamlessly integrated into the existing development processes and improve productivity; absolute quality is often a desirable but not crucial objective. Although the feasibility of formal specifications has been demonstrated in commercial settings, the overall adoption of the idea has been slow. Most companies, such as the Canadian-based telecommunications company Nortel (which, for the purpose of this paper, refers to the Toronto Multimedia Applications Center of Nortel Networks), opt to rely on manual inspections of natural-language specifications as the only technique to detect errors in them, even though the results have been suboptimal. If the advantages of better-quality specifications, such as a better understanding of the system and less error-prone designs, do not provide an adequate justification, more benefits can be obtained by leveraging the investment in the formalization process to other stages of the software lifecycle, i.e. generating code or test cases from the formal specifications. Not only does this amortize the cost of creating the specifications, but the productivity improvement can also be more immediate and easily measurable. Driven by the need to shorten and improve the software development process, Nortel and the Formal Methods Laboratory at the University of Toronto have jointly proposed a pilot research project to investigate the feasibility of formal modeling techniques in a commercial setting. The goal of the project is to find means of using formal modeling to improve productivity in various stages of the software lifecycle in an economical manner. Specifically, the emphasis is placed on deriving test cases from the formal model, as the Nortel engineers have expressed concerns about the feasibility of code generation for their proprietary platform. Our exploratory project was organized as a hybrid quantitative and qualitative case study. As it was extremely important to choose the right system/language combination for the formalization process, we began the study by selecting a typical system to specify and conducting a qualitative evaluation of formal modeling languages. The chosen language was applied to model the system, and the resulting model was used to identify errors in the software requirements document and to derive test suites, shadowing the existing development process. Throughout the study, we collected a variety of productivity data for comparison with similar information from the actual development process. We also noted the qualitative impact of the formalization process. The rest of the paper is organized as follows: Section 2 provides a brief description of the software system selected for the study. Section 3 discusses the criteria used in choosing a suitable modeling language.
In Section 4, we discuss the formalization process. Section 5 presents findings from the study. The experience gained during the study is summarized in Section 6. Section 7 briefly describes a usability workshop that we conducted at Nortel, and Section 8 concludes the paper.

## 2 System Selection

To make the project meaningful, we did not want to be directly involved in choosing a system, hoping to work on something representative of typical projects of the TorMAC division of Nortel. We also felt that it was important to do the formalization in parallel with the development cycle. Thus, a group of Nortel engineers, consisting of developers from the design team and testers from the verification team, decided that we should work on a subsystem of the Operation, Administration and Maintenance (OAM) software of a multimedia-messaging system connected to a private branch exchange. The subsystem, called ServiceCreator in our paper, is a voice-service creation environment that lets administrators build custom telephony applications in a graphical workspace by connecting predefined voice-service components together. Figure 1 illustrates the graphical view of one such telephony application consisting of four components: start, end, password-check, and fax. When the application is activated, a call session begins at the start component, and a caller is required to enter a numerical password in order to retrieve a fax document from the fax component. The caller is directed to the end component if an incorrect password is entered. In both scenarios, the call session ends when the end component is reached. The lines connecting various components represent potential control flow of the call session, and the actions performed by the caller in an active component determine the output path taken. In the password-check component, for instance, the caller exits via the path password if a correct password is entered, or the path max. invalid if there are too many invalid password attempts. In our study, we analyzed the run-time behavior of 15 such components, described by an 80-page natural-language specification. We illustrate the approach using the password-check component, described by a 5-page natural-language specification. Figure 2 shows a graphical view of this component. The purpose of this component is to validate digits entered by a caller against any of the passwords (up to five) defined by the administrator. For instance, the path password 1 is taken if the entered digits match the first defined password. The caller is forced to leave the component using the max. invalid output if the maximum number of invalid password attempts is reached. Such attempts are also monitored on a per-call session basis, and the caller leaves via the max. invalid/session output if the per-call session limit is reached. The caller can also enter the * key to retrieve the help prompt, which has a side effect of clearing the password entry, or the # key to exit prematurely via the # (cancel) path if no password has been entered. If the caller stays idle for a certain time period and has not previously keyed in any digits prior to entering the password-check component, she is assumed to be using a rotary phone and is transferred to a different voice-service component. Otherwise, one of the two delay prompts may be played depending on whether she has begun keying in the password. After two more timeouts, the caller exits via the no response path.
In order to generalize from the results of the study, we need a good characterization of the type of applications that ServiceCreator represents. First of all, it is clearly a reactive system in the telecommunications domain. Additionally, it has relatively stable requirements and is fairly self-contained, having a loose coupling with the underlying system. Finally, it is not very complex although non-trivial.

## 3 Evaluation of Modeling Methods

A successful formalization of the system in a commercial setting depends crucially on a modeling language, supported by an appropriate tool. Thus, in this section we use the term method to indicate both the modeling language and its tool support. The goal of the evaluation was to select a suitable modeling method to be used in the feasibility study. Easily readable and reviewable artifacts as well as a simple notation were the two basic requirements for a modeling method to be usable in a commercial setting. Moreover, since one of the overall objectives was to amortize the cost of creating a formal specification, we began the evaluation by conducting a broad survey of available tools that provided support for both modeling and testing. These constraints turned out to be extremely limiting, as most of the surveyed methods either had just the modeling or just the testing support, or did not have a formal notation. Some were simply too difficult to be used in industry. We eventually narrowed down our search to the following candidates:

* Telelogic SDT — an integrated software modeling and testing tool suite that utilizes Specification and Description Language (SDL), which is based on extended finite state machines (EFSMs), for behavioral modeling and Message Sequence Charts (MSCs) for component-interaction specification. MSCs, which can be used as test cases, can be derived semi-automatically from an SDL model. Alternatively, SDT can verify the correctness of the model with respect to independently created MSCs.
* Aonix Validator/Req (V/Q) — a test-generation tool that allows black-box requirements of a system to be specified in the form of parameterized UML use cases. Validator/Req generates test cases for data-flow testing from the model automatically and produces documents that conform to the IEEE standard for software requirements specifications.
* Teradyne TestMaster (TM) — a test-generation tool that automatically generates test cases for both control-flow and limited data-flow testing from models based on EFSMs. The number of test cases can be flexibly tuned to suit different testing needs.

To perform a detailed assessment, we structured our evaluation as a feature analysis exercise and refined our focus to choosing among the remaining methods using additional evaluation criteria gathered from the Nortel engineers. These criteria comprised factors, such as usability and smooth integration, that were crucial to the use of formal modeling in their environment. After the methods were used to model the password-check component, they were ranked against the criteria based on our impressions of the tools and the models produced. Table 1 shows the results of this evaluation. It lists the evaluation criteria in column one, their relative importance using a scale from 1 (least important) to 5 (most important) in column two (these weights were assigned after consultations with Nortel engineers), and the degree to which the methods satisfy the criteria using a scale from 1 (unsatisfactory) to 5 (completely satisfactory) in columns three to five.
A confounding factor in feature analysis is the potentially biased opinions of the evaluator. Although we tried to ensure the objectivity of the evaluation, the assignment of the scores inevitably contained our subjective judgment, and we did not feel that we could accurately evaluate factors such as usability. To mitigate this potential problem and to gain more confidence in our assessment, we demonstrated the tools and the models to the engineers. They agreed that SDT satisfied their criteria more closely than the other tools.

## 4 Modeling and Testing the System

The next step was to formalize the ServiceCreator application. This formalization was undertaken by us in parallel with the actual development process. ServiceCreator was modeled as a 70-page SDL system in which the environment contained the underlying OAM software and the messaging-system hardware. Of the 15 voice-service components, 10 were modeled as separate SDL processes (see Appendix A for an example) that communicated with the environment through a “driver” SDL process. This process models the control-flow information of the telephony application. Figure 3 shows a simplified view of the “driver” process created for the application of Figure 1. This process is responsible for activating voice-service components and responding to their termination. The functionality of the five remaining components (e.g., start and end) was incorporated into the “driver” processes. A total of 23 signals were used in the SDL system; eight of them were external (used for communicating with the environment) and the rest internal. Persistent data, such as the predefined passwords for the password-check component, were represented as parameters to the SDL processes.

### 4.1 The Level of Abstraction

The modeling process was relatively straightforward, as we encountered no major problems in modeling ServiceCreator; we also felt that a background in formal methods was not required. However, the biggest concern was to determine the appropriate level of abstraction, which was dictated by two opposing needs: the model should be constructed from a black-box view to reduce its complexity, while the exact behavior of the system needs to be modeled for deriving detailed test cases. In addition, a more detailed model would help in identifying problems in the natural-language specification. Our approach was to start from a high level of abstraction, filling in details about some parts of the behavior if the natural-language specification required it. Mostly, such details were necessary in dealing with external input. For example, in modeling the password-check component, we represented the various timeouts by an SDL timer timeout (see Figure 4), as the actual length of the timeouts was relatively unimportant. Processing of user input, on the other hand, required reasoning on the level of single digits. In our model, a received digit, digit, was actually stored in a variable numberRecv. While this treatment could potentially lead to large and cluttered models, we sometimes had to resort to it to be able to derive sufficiently detailed test cases.
### 4.2 Test Case Derivation

To obtain immediate benefits from the formalization process, 120 MSCs were derived from the SDL model for testing the implementation. The derivation was not automatic: SDT recorded the interactions between the SDL system and its environment as MSCs while we animated the SDL model manually. We felt that automation was not necessarily desirable, since this exercise gave us confidence in the content and the coverage of the test cases. During the test case derivation, we took advantage of the modular nature of the voice-service components and generated test cases for each of them separately, achieving full transition coverage in the corresponding SDL processes. However, some functionality of the system could be covered only by testing multiple components together, i.e., by integration testing. More than 20 integration test cases were identified. For instance, to derive test cases for testing the initial timeout behavior for touch-tone callers who had not yet keyed in any password digits, we created a telephony application model in which a caller was required to key in some digits in a component, such as a menu, prior to entering the password-check component (see Appendix B for the corresponding MSC). The procedure for deriving such test cases was as follows:

* During the modeling phase, note the cases where the input comes not only from the environment but also from other components. If some input to component $`A`$ comes from component $`B`$, we say that there is a relationship between $`A`$ and $`B`$.
* After the modeling phase, use the resulting model to create test cases that specifically ensure that the relationship between the components is correctly implemented.

Derivation of test cases for integration testing was the most labor-intensive part of this phase; it also required a fair bit of skill.

### 4.3 Specification and Implementation Problems

The SDL model and the experience gained from the formalization process allowed us to identify specification errors that had escaped a series of manual inspections by Nortel engineers. As the components modeled were not particularly complicated, most of the errors we found were caused by vagueness and missing information. In fact, the most time-consuming part of the whole effort was to understand the natural-language specifications and to consult the engineers for clarifications. We estimated that these activities took as much time as the formalization itself. Some of the problems found in the specification of the password-check component are listed below:

1) it was never mentioned whether the various voice prompts were interruptible, and no conventions for such behavior were defined;
2) the lengths of the various timeouts were not specified;
3) it was unclear which exit path should be taken when the administrator set two or more passwords to be the same;
4) the maximum and minimum lengths of passwords were not defined.

Some of these problems were of very low criticality and could easily be fixed in the implementation. However, the testers were required to interact with the developers to clarify the correct behavior of the system, spending an unnecessarily long time in the test case execution phase (see Section 5.1). In addition, problem 4) propagated itself into the implementation. That is, a malicious caller could crash the system by entering an abnormally long password in the password-check component. Thus, this requirement omission became a critical implementation error. Moreover, since the MSCs derived from the SDL model were used to identify errors in the implementation after the Nortel engineers had officially completed testing of the voice-service components, we were able to observe that this implementation error had not been discovered: the Nortel test cases were derived from the same incomplete specification and thus missed a critical error.
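Before turning to the findings, a brief aside on the transition coverage criterion used in Section 4.2. For a deterministic EFSM skeleton one can mechanically compute, for every transition, an input sequence from the start state that exercises it. The sketch below is illustrative only (our MSCs were in fact recorded by animating the SDL model by hand in SDT), and all function names are ours, but it makes the coverage criterion concrete.

```python
from collections import deque

def covering_tests(transitions, start):
    """For each transition of a deterministic machine, given as a labeled graph
    state -> list of (input, next_state), return one input sequence from the
    start state that exercises it."""
    covered, tests = set(), []
    for s, edges in transitions.items():
        for inp, t in edges:
            if (s, inp, t) in covered:
                continue
            prefix = shortest_inputs(transitions, start, s)
            if prefix is None:
                continue                    # transition unreachable from start
            seq = prefix + [inp]
            tests.append(seq)
            state = start                   # mark everything this test touches
            for i in seq:
                nxt = dict(transitions[state])[i]
                covered.add((state, i, nxt))
                state = nxt
    return tests

def shortest_inputs(transitions, start, goal):
    """Breadth-first search for the shortest input sequence start -> goal."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, inputs = queue.popleft()
        if state == goal:
            return inputs
        for inp, nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, inputs + [inp]))
    return None
```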
## 5 Findings

We began our analysis by seeking quantitative evidence to measure the effects of the formalization process on productivity. However, as the study progressed, we felt that it was also crucial to identify the qualitative factors (e.g., the perceived usability of SDL and the commitment of the development team) and the limitations of the study in order to reach accurate and unbiased conclusions. Unfortunately, the Nortel engineers did not keep track of many essential metrics and, due to the exploratory nature of our study, we could not create a controlled environment in which such metrics could be obtained. In particular, we do not know the exact amount of time it takes to fix a bug found during the inspection vs. the design vs. the testing phase. The lack of metrics significantly impairs our ability to draw quantitative conclusions.

### 5.1 Quantitative Results

The entire modeling process, which consisted of activities such as understanding and formalizing the specification as well as deriving and executing test cases, took about two person-months to complete. During this period, we kept track of a variety of productivity data in the study (column two of Table 2) for comparison with similar information from the existing development process (column three). Effort measurements in this table are rounded to the nearest person-day. Since the sizes of test cases varied greatly, we did not use a “test case” as the unit of comparison for testing-related data. Instead, we counted test units, the smallest externally visible pieces of functionality of the system, in each test case to ensure a fair comparison. Highlights from the table are summarized below:

* The time to model value included only the time used for modeling the SDL processes. While the modeling task did not have an equivalent in the actual development process, manual inspection was a similar activity that was also performed at the completion of the specification phase. Certainly, the formalization process was not meant to be a complete replacement. However, if a large number of specification errors can be identified in a relatively short amount of time, the modeling task can be considered a way to decrease the time needed for inspection. (We discuss this point later in this section.)
* The number of specification errors reported could not be used for comparison, as the Nortel engineers did not keep track of such statistics.
* More test units could be derived from the SDL model (which translated to better test coverage) in roughly the same amount of time, possibly because the model eased the creation task by providing a more in-depth understanding of the system as well as a better sense of completeness. One other reason for the difference in quantity was that the test units from Nortel were sometimes vaguely specified (see Figure 5 for an example); the missing details contributed to a decrease in the number of test units reported.
* The time needed for test unit execution in our study was much smaller for two major reasons: the derived test units were more detailed and thus easier to execute, and the Nortel engineers were observed to spend a lot of time revising the existing test cases because of changes in the requirements, and creating more detailed test scenarios based on the vaguely specified test units. However, due to tight schedules, most testers did not document these extra units until the end of the entire testing phase, which spanned more than four months. They admitted that some of these test units would inevitably be lost, contributing to a decrease in their total number.
* The number of implementation errors identified in the study was twice as large as that of the existing development process. Many of them were missed because the testers created test cases from an incorrect and incomplete specification, as indicated in the third row of Table 2. Problems such as incompleteness propagate to the test cases and affect the test coverage. In fact, 18 of the 50 problems could be linked to problems in the specification. Their criticality ranged from minor ones, dealing with the availability and the interruptibility of voice prompts, to critical ones, affecting the core functionality of the voice-service components or causing the system to crash. Most of these errors resulted from undocumented assumptions or incorrect or missing error handling.

To obtain an accurate cost/benefit figure, we would need additional statistics, such as the average cost of fixing a requirements error discovered during the inspection and implementation phases. As we mentioned above, the Nortel development team did not keep track of such metrics; however, a conservative cost/benefit estimate is still possible. Without taking the improvement in software quality into account, we can estimate the net cost of formalization by subtracting from the time spent on the modeling task the direct savings in:

* the test unit creation (0 person-days),
* the test unit execution (9 person-days), and
* the manual inspection.

The formalization process did not include a manual inspection phase, whereas the actual development spent 50 person-days on it. The modeling task would therefore come at no net cost if it reduced the manual inspection by just 2 person-days, or 4%; in other words, the quoted figures imply that the direct testing savings already offset all but about 2 person-days of the modeling effort. Of course, the actual cost/benefit figure is significantly more promising if the long-term benefits, coming from a better quality product, ease of maintenance and regression testing, and the ability to reuse a good specification, are taken into account.

### 5.2 Qualitative Observations

While all the quantitative data was in favor of the use of formal modeling, it was clear that these results alone constituted only a part of our findings. Some observations from the formalization process that are not evident from Table 2 are discussed below.

* The most frequent complaint from the test engineers was that missing information in the specifications often complicates the task of test case creation. SDL models encourage and assist developers in stating system requirements more precisely and completely, which should allow testers to create better quality test cases (e.g., more detailed and with more clearly defined expected results) and reduce the time needed for test case creation and execution. Developers should also benefit from the more complete specifications during the design and implementation phases. This is an area where SDL can potentially significantly improve the development process. In fact, SDL has been successfully applied in the telecommunications field: from the traditional use in protocol specification to high-level specification, prototyping, design, code generation, and testing of telecommunications applications. Although the results reported in those studies were similar to ours, their goals were different: they aimed to investigate the technical advantages or the feasibility of SDL in a given environment, or emphasized only one of the development activities.
* As with any other formal specification technique, a successful integration of SDL into the development process requires a firm commitment from the entire development team. For instance, the developers must ensure that the SDL model is always kept consistent with the system requirements and the code, e.g., that last-minute changes in the design and implementation are propagated back to the model. Testers likewise need to ensure that their test cases always reflect the model accurately. While this is possibly one of the biggest obstacles to applying a formal modeling technique, the advantages provided by SDL justify the extra effort.
* Compared to other formal modeling techniques, the strengths of SDL lie in its ease of use and its ability to express nontrivial behavior in a reviewable notation. Unlike many other formal modeling languages, SDL does not require an explicit use of formal logic. The graphical, user-friendly notation allows developers without a strong mathematical background to effectively create EFSMs. Compared to natural language specifications, such EFSMs give a much better sense of completeness, making it easy to detect missing scenarios, e.g., problem 1) in Section 4.3. In addition, the formal syntax helps clarify ambiguities and inconsistencies, e.g., problem 3) in the same section. However, SDL tends to blur the line between requirements and design. If proper abstractions are not applied, the model may become too detailed and unnecessarily large, possibly duplicating the design effort.

### 5.3 Limitations

Based on the opinions expressed by the Nortel developers and testers, ServiceCreator was representative of the types of systems they work with, so we are fairly confident that the results of the study would apply to similar projects in this environment. We were also fortunate to find a method that is well suited to modeling telecommunication systems. However, it would be difficult to generalize our findings outside the Nortel domain, since they depend on the current development methodology, the types of applications, and the choice of a modeling language and tool.

Other limitations came from the fact that we had prior experience with SDL and were not constrained by development pressures. That is, we took the time necessary to produce high quality models and detailed test cases and felt that the process was straightforward. If time pressures prevent Nortel developers from applying the modeling techniques carefully, they may not achieve equally good results. In addition, novice users of SDL would take more time and possibly create less effective models of their systems. However, we believe that appropriate training and the availability of an SDL expert can ensure that Nortel engineers use SDL successfully.

## 6 Lessons Learned

We were able to show that formal modeling techniques can shorten the development cycle while improving the quality of software.
This can be done by amortizing the cost of creating the model over time-intensive phases of the software development lifecycle, such as testing or code generation. However, a net decrease in the development cycle is achievable only if the formalization can be done fairly inexpensively: by utilizing a notation that is easy to use and review, by formalizing only selected components, and by staying at a fairly high level of abstraction. It is also essential to achieve immediate results by using the approach incrementally, that is, being able to stop at any time and get partial benefits from partial modeling. Such a lightweight approach to formalization has been advocated by many researchers and applied successfully in several projects.

What about verification? We feel that in the current commercial environment the majority of systems do not require any verification: there is typically less need for absolute assurance and a greater need for the rapid development of reasonably correct systems. In fact, our use of SDL showed that, if verification is not involved, it is not essential to use a modeling language with a fully defined formal semantics to achieve immediate and measurable benefits.

## 7 Measuring Usability of SDL

There is no doubt that the usability of formal modeling techniques plays an important role in their acceptance in industry. An easy-to-use technique encourages experimentation and reduces the cost of integration. More importantly, the reality is that practitioners do not try to adapt to an inconvenient technique—they simply abandon it. Thus, it is essential that SDL is perceived to be usable by Nortel engineers; only then will they be willing to apply it to their projects.

To collect some information about the usability of SDL, we conducted a one-day workshop in which six Nortel engineers participated. In the first part of the workshop we provided the participants with natural language descriptions of two small software systems. After inspecting the descriptions manually and noting problems in them, the participants were asked to model the described systems in SDL. By formalizing the behavior, they were able to discover many additional specification errors; some of them found even more errors than we had originally seeded, i.e., the descriptions contained errors that we ourselves had not noticed. A few minor usability problems were noted, but the consensus reached among the participants was that the use of a formal, yet user-friendly, notation could help uncover problems hidden in the seemingly simple exercises much more effectively than manual inspections.

In the second part of the workshop we asked the participants to fill out a questionnaire. The goal of the questionnaire was to obtain opinions about the usability of SDL and its perceived role in the development environment. Some of the results are summarized in Table 3; the remaining results are not reproduced here. The column on the right contains the average score on a scale from 1 (strongly disagree) to 5 (strongly agree). The results from the questionnaire strengthened our finding that SDL is a user-friendly formal modeling technique that can be used effectively by Nortel engineers to improve their development process. Encouraged by the prospects of SDL, Nortel and the University of Toronto are in the process of setting up another joint project in which the engineers will carry out the formalization process themselves, and we will only observe the progress and provide consulting, if necessary.
Free of many of the limitations of our study, this new project should provide a more accurate insight into the technical and economic feasibility of SDL in a commercial setting.

## 8 Conclusions

In this case study we formalized the behavior of a multimedia-messaging system in a commercial setting. The success of the study lay in finding a representative system, carefully selecting a suitable modeling method, and taking a lightweight approach to formalization. Although we did not have access to some development metrics that would fully quantify our findings, the results of the study clearly show that software requirements can be formalized effectively and economically, yielding significant improvements over the existing development process.

Acknowledgments

The authors would like to thank Albert Loe, Steve Okun, Avinash Persaud, and Shawn Turner of Nortel for their technical assistance and continual support throughout the study. We are also grateful to the anonymous referees for suggesting ways to improve the paper. The study was supported in part by Nortel Networks, NSERC, and an Ontario Graduate Scholarship (OGS).

## Appendix 0.A An SDL Block Diagram

The diagram below illustrates the SDL block diagram of the telephony application shown in Figure 1. The “driver” block (which contains a “driver” process) acts as the signal router between the environment and the SDL blocks of the voice-service components by routing the signal lists inputSigLst and outputSigLst. It is also responsible for activating the appropriate component, according to the control flow of the application and the actions of the caller, by using the signal activate. Please refer to Section 4 for more details.

## Appendix 0.B A Message Sequence Chart

The MSC below shows the interactions between a caller and a telephony application in ServiceCreator. The application requires the caller to press button 1 in the menu component prior to entering the password-check component. Refer to Section 4.2 for more details.
# Where are we in the theory of High Temperature Superconductors?

(Talk given at the K S Krishnan Birth Centenary Conference on Condensed Matter Physics, Department of Physics, University of Allahabad, Allahabad, India; 4–7 December 1998.)

## I Introduction

I feel honored to speak at this conference, which commemorates the Birth Centenary of an eminent son of India, Sir K S Krishnan, the co-discoverer of the Raman effect, and the Platinum Jubilee year of this Physics Department that has nurtured many excellent physicists over the years. Krishnan would have enjoyed seeing the developments in the field of high Tc superconductivity, in which a two dimensional metallic character and quantum magnetism play fundamental roles. Krishnan, in the late 1930s, pioneered the study of the anisotropic transport and magnetic properties of graphite, an excellent example of a two dimensional metal. He had deep insights into the magnetism of transition metal and rare earth ions through his innovative susceptibility measurements of families of magnetic systems such as $`CuSO_4.5H_2O`$, which are actually Mott insulators in the current parlance.

Nearly 12 years ago Bednorz and Muller discovered superconductivity in Ba doped LSCO and broke the barrier of the record Tc of 23 K exhibited by the A-15 family member $`Nb_3Ge`$. Soon many cuprates were synthesized, and by now the maximum Tc is nearly 160 K in some Tl/Hg based cuprates under pressure. From the experimental point of view, the quality of single crystals has improved considerably over the years, and we have a good set of reproducible experimental results that begs for quantitative explanation. From the theoretical point of view, the Resonating Valence Bond (RVB) theory of Anderson, which had a lead right from the beginning in view of its strong foundation on the available body of experimental facts, has made significant progress, considering the nature of the hard quantum many body problem that the cuprates posed. This fertile theory, by its fairly penetrating character, also initiated a resurgence in the theory of strongly correlated systems and quantum magnetism. It has remained the leading guide in the sense of providing the right directions and emphasizing the crucial aspects of the problem, albeit with occasional changes compelled by new experimental results. In this talk I will enumerate some of the important issues and point out our current understanding from the point of view of RVB theory. The talk will be sketchy; some details and many helpful hints for making further progress can be found in Anderson’s book.

## II Anderson’s Original Proposal and Initial Progress

Anderson’s original proposal, presented at the Bangalore conference in January 1987, identified the relevant interactions and presented them in a succinct form as a one band large U Hubbard model or the equivalent t-J model. The insulating parent compound LSCO was suggested to be a 2d Mott insulator in a disordered spin liquid or RVB state. The resonating spin singlets are neutral in the insulating state - they do not transport charge at low energies. On doping they start transporting charge, leading to superconductivity. Anderson also suspected the presence of neutral spin half excitations (later named spinons) with their own pseudo fermi surface. RVB mean field theories that brought out the neutral spinons and their pseudo fermi surface in the insulating state, and superconductivity in the doped case, were discussed by Anderson and collaborators.
Affleck, Marston and Kotliar brought out an energetically better mean field state, namely the d-RVB or flux state. Inspired by Anderson’s suggestions, Kivelson and collaborators discussed short range RVB in some detail, focusing on spinon and holon excitations. Slave boson theories and gauge theories followed suit, and there were intense activities and speculations, including possible parity violating superconducting states with a connection to Laughlin’s quantum Hall state.

Looking back, the proposals of Anderson, with their emphasis on strong correlation, the ensuing non fermi liquid states, spin charge decoupling, and the spinon fermi surface, have remained robust and have given us a good way to think about this complex problem. In particular, the ARPES, neutron scattering, NMR relaxation and transport properties can be qualitatively understood from the point of view of the above proposal. However, quantitative understanding is yet to be achieved. The precise mechanism of superconductivity, particularly in the single layer materials, is something that has eluded a sharper theoretical understanding so far, even though it was one of the first issues that caught the attention of the condensed matter community. Very fundamental and new ideas have nevertheless emerged through the notion of interlayer pair tunneling, which we will discuss at the end. The electron kinetic energy gain as the origin of the superconducting condensation energy is also a novel aspect of the present system.

## III 2d Quantum Antiferromagnet and the Underdoped Regime

A good understanding of the Mott insulator should help one to understand the doped Mott insulator better. In the insulating state, kinetic or super exchange dominates and only spin degrees of freedom, governed by the Heisenberg Hamiltonian, are present at low energies. An emerging local gauge symmetry in the insulating state was found and formalized by Anderson and the present author as a gauge theory. It was later discovered that this gauge field captures the physics of chiral fluctuations among the interacting spins in the lightly doped regime. The d-RVB state, or the Affleck-Marston-Kotliar phase, can be thought of as a uniform RVB state in which $`\pi `$ fluxes are condensed at low temperatures.

There are good theoretical indications that the 2d Heisenberg model has long range antiferromagnetic order. Several static and quasi-static phenomena are well explained by the spin wave theory inspired non linear sigma model analysis. However, the dominant correlations in the ground state are those of a d-RVB state; as suggested by Hsu, it is meaningful to think of the ordered state as a spinon density wave in a spin liquid state. The antiferromagnetic order is fragile and disappears at about $`1.5\%`$ doping. A recent ARPES study of underdoped and insulating layered cuprates points out that the d-RVB state, with its massless Dirac like spinon spectrum, is a good reference state for describing the Mott insulating and underdoped states. This questions the real relevance of the non linear sigma model, in its present form, in the doped quantum melted region.

The physics in the underdoped regime is complicated by disorder, long range coulomb interaction, charge localization and micro phase separation effects. This is the origin of the stripe phase. Some theoretical work is going on in this regime. A recent work by Fisher and collaborators addresses the issue of how an insulator to superconductor transition takes place in the ground state through their theory of nodal fermi liquids.
Their reference state is a d-wave superconductor, in which quantum fluctuations induced by coulomb correlations drive a metal (or insulator) to superconductor transition. This theory captures some of the physics of the t-J model but overemphasizes pair fluctuations of the charges. There are also some fundamental questions about whether we can have a boson metal ground state at low doping.

## IV ‘Solving’ the t-J Model

Several experimental results indicated the validity of the t-J model for the conducting cuprates. From the theory point of view, the derivation of the t-J model has been established rather satisfactorily through a sharpening of the Zhang-Rice singlet arguments by detailed cluster calculations. The t-J model itself, however, remains unsolved in a satisfactory fashion, in view of the on site constraint involved. In the absence of an exact solution or a good many body theory, a natural way to attack the t-J model is to look for the next level of effective theories, guided by experiments and sometimes by theoretical arguments, that will point to the correct final solution.

In constructing the next level of effective theories there has been considerable work using slave boson mean field theories and the related gauge field theories. This approach, though it presents many new possibilities that often agree with the experimental trends, involves uncontrolled approximations and is far from satisfactory as a quantitatively correct many body theory. These theories are in a sense a reaction against the conventional, restrictive fermi liquid type of perturbative many body theories that fail in this class of correlated conductors. On the numerical front there have been many efforts by several groups, but they have not been very helpful in understanding the rich low energy physics offered by the t-J model - they most often capture some high energy features and face serious difficulties when it comes to low energy physics.

On the analytical front there is hope, however, in the sense explained below, arising from Anderson’s proposal of the failure of fermi liquid theory in the 2d Hubbard model for arbitrarily small repulsion and the associated notion of the 2d tomographic Luttinger liquid. This means that $`U^*=0`$ is an unstable fermi liquid fixed point, as in the 1d Hubbard model; and in principle the strong coupling non fermi liquid fixed point $`U^*=\infty `$ can be understood by a careful study of small U. Anderson, through a phase shift analysis of two particles on the fermi surface in the $`2k_F`$, singlet channel, argues for the presence of a singular Landau parameter that leads to different spin and charge velocities on the fermi surface and an anomalous exponent for the electron propagator. This point has remained controversial, and the present author has given some supporting arguments for Anderson’s proposal. The present author has also brought out a related mechanism, involving zero sound, that could destabilize the fermi liquid state in 2 and 3 dimensions.

The presence of singular forward scattering, Anderson argues, leads to the so called tomographic Luttinger liquid (TLL) state, a non fermi liquid state exhibiting spin charge decoupling as well as branch point singularities for the electron propagator on the fermi surface. The TLL theory of Anderson is essentially a Landau fermi liquid theory, but with a singular forward scattering. I view it as a natural generalization of Landau’s fermi liquid theory in the following sense.
In Landau’s fermi liquid theory, occupied low energy quasi particle states influence each other pairwise only in a mean field fashion, irrespective of their relative momenta. In the tomographic Luttinger liquid, those electrons that have vanishingly small relative momenta (that is, belonging to a given tomograph) influence each other pairwise in a non mean field fashion, leading to a finite phase shift in the relative momentum channel, whereas electrons belonging to different tomographs do not influence each other. This has profound consequences, as argued by Anderson. In view of its simple form, TLL theory should lend itself to more detailed analysis and comparison with experimental results, particularly for the normal state. Through Anderson’s proposal, which has good phenomenological support, we have the possibility of analyzing the t-J model, a strong coupling limit, through a Landau type of theory. The parameters of this non fermi liquid theory can be determined from experiments. This scenario should work well for the optimal doping regime and beyond. However, it becomes a difficult problem, with new possibilities, when we go to the underdoped situation. The physics near this region involves the spin gap phenomenon, and it requires some fresh thinking, or a return to the old RVB ideas, to make further progress. As mentioned earlier, disorder, long range interactions and the corresponding charge localization effects cloud the real issues that we are after.

## V The Normal State as a Tomographic Luttinger Liquid

The normal state of the cuprates is generally accepted to be anomalous and non fermi liquid like, thanks to a variety of experimental results - frequency and temperature dependent conductivity, Hall effect, NMR relaxation, thermal conductivity, the non fermi liquid spectral functions seen in ARPES measurements, and so on. While the semi phenomenological theory of Anderson’s tomographic Luttinger liquid suggests anomalous exponents, a satisfactory derivation of the exponents and other details awaits further theoretical developments. At another level, the two scattering rates on the fermi surface - one corresponding to the longitudinal resistance, which scales as $`T`$, and the other measured from the Hall angle, which scales as $`T^2`$ - need to be formalized. Anderson’s suggestions and heuristics remain fundamental proposals that call for a satisfactory derivation in order to make quantitative progress.

The development of the spin gap at low temperatures becomes prominent when we go to the underdoped situation. Within the old RVB picture this has a natural explanation as the development of neutral spinon pair condensation. However, the importance of interlayer pair tunneling and interlayer super exchange in producing the spin gap phenomenon has also been suggested by Anderson and collaborators. The real origin of the spin gap, identifying the correct in plane and inter plane contributions, needs to be sharpened further, in view of the contrast between the strong interlayer correlations in YBCO and the equally high Tc compound Tl-2201 with its weak interlayer interaction.

Charged stripes in the normal state are an interesting phenomenon of localization of the heavy holes, which also suppresses superconductivity. It is becoming clear that they do not provide any obvious mechanism for superconductivity.
However, there are some intriguing experimental suggestions, at low doping, of very low energy stripe and perhaps antiferromagnetic long range order at an incommensurate peak position in the superconducting state. These are likely to be complications that are not essential for our understanding of the underlying robust physics of superconductivity and the anomalous normal state. However, we need a good theoretical understanding of them.

## VI Confinement and Interlayer Pair Tunneling

The original proposal of Anderson relied on the neutral singlets of the insulating RVB state getting charged on doping and leading to superconductivity. This suggestion was however challenged by the interlayer regularity of the Tc in various cuprates. So it was felt that perhaps the quantum fluctuations arising from the strong correlations in a single plane are strong enough to suppress superconductivity through enhanced gauge field or phase fluctuations. Around the same time, the notion of confinement was introduced by Anderson and Zou by looking at the large anisotropy of the normal state resistivity - the ratio of the c-axis resistivity to the ab-plane resistivity was too large compared to the band effective mass anisotropy. It was then argued that the spin charge decoupling in the anomalous normal state strongly suppresses the electron spectral weight close to the fermi surface, leading to the absence of coherent one electron transport between neighboring planes. This is called confinement. One electron kinetic energy between two conducting planes is frustrated. This frustration leading to incoherent transport is in a way more subtle than the way one electron kinetic energy is frustrated in a Mott insulator. There is excellent experimental support for the confinement phenomenon from optical sum rule measurements.

A pair of electrons on the fermi surface in a spin singlet state with zero center of mass momentum, however, retains its identity without any quantum number fractionization. Hence it can tunnel coherently between two planes. The selective suppression of one electron coherent tunneling, and not of two electron tunneling, is the origin of the interlayer pair tunneling mechanism of superconductivity of Wheatley, Hsu and Anderson. The frustrated one electron kinetic energy loss is regained by pair delocalization between two planes. Anderson formalized the above through a BCS type of effective Hamiltonian that expressed the major source of pair condensation energy as a pair tunneling term between two fermi liquid planes:

$$H_{pair}=\sum _kT_J(k)\,c_{k\uparrow 1}^{\dagger }c_{-k\downarrow 1}^{\dagger }c_{-k\downarrow 2}c_{k\uparrow 2}$$

together with a small intraplane BCS type scattering term, non local in k-space. Notice that in the pair tunneling term the individual electron momentum is conserved, making it a resonant tunneling between the two planes. It is this resonant, local in k-space, character that leads to a large Tc proportional to the pair tunneling matrix element $`T_J`$ - a non-BCS dependence of Tc on the pairing interaction. At the level of models, formalizing Anderson’s BCS type of approach is a very important issue. An early derivation by Muthukumar in terms of slave boson variables can be modified to bring out the electron momentum conservation. However, a satisfactory derivation, one that also addresses the issue of one electron incoherence and two electron coherence between planes or chains, is still needed. The absence of bilayer splitting in the ARPES spectral functions is a good indication of low energy one electron incoherence between the two layers.
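To see schematically why the gap scale is linear in $`T_J`$, consider a rough caricature that keeps only the pair tunneling term and drops the small in-plane piece; the zero-temperature mean field gap equation is then local in k:

$$\Delta _k=T_J(k)\,\frac{\Delta _k}{2E_k},\qquad E_k=\sqrt{\xi _k^2+\Delta _k^2},$$

so that wherever $`\Delta _k\ne 0`$ one must have $`E_k=T_J(k)/2`$, i.e. $`\Delta _k=\sqrt{(T_J(k)/2)^2-\xi _k^2}`$ near the fermi surface, and the gap at $`\xi _k=0`$ is simply $`T_J(k)/2`$. Because there is no k-space averaging, there is no BCS exponential suppression: the gap, and with it Tc, is of order $`T_J`$ itself. This is only an illustrative sketch; the full treatment by Anderson and collaborators retains the small in-plane scattering term, which selects the detailed k-dependence of the gap.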
The interlayer coherence phenomenon, and the possibility of applying these ideas to the closely related organic conductors, has been studied by some authors. The present author has argued that the extreme sensitivity of the Tc of the organic superconductors to off chain or off plane disorder points towards an interlayer pairing mechanism of superconductivity in these systems, and an apparent violation of Anderson’s theorem for c-axis disorder.

## VII Is Interlayer Pair Tunneling the Only Mechanism of Superconductivity in the Cuprates?

The pair tunneling effective Hamiltonian has been successfully used by Anderson and collaborators to understand the origin of the large $`T_c`$ and also certain features of the gap function in k-space. In this context Anderson also proposed an important test for the interlayer pair tunneling mechanism: since the superconducting condensation energy arises primarily from pair tunneling between planes, the c-axis Josephson plasma energy should be the same as the pair condensation energy. This remarkable prediction was verified for bilayer materials such as YBCO and for the one layer compound LSCO. However, the one layer Tl and Hg cuprates have remained an exception and do not seem to follow the interlayer pair tunneling mechanism. The present author has suggested that part of this discrepancy could be accounted for through a particle-hole pair tunneling mechanism. This still does not solve the problem completely, and there are also other suggestions. This has become a challenge, and perhaps a revision of part of Anderson’s central dogma, to the effect that ‘a single layer of cuprate may be superconducting’, may be called for. Thus we have to go back to the original one layer RVB mechanism and see how it can explain the large $`T_c`$ of the one layer materials. RVB gauge theory ideas have been pursued extensively along these directions, including some instanton ideas and the idea of quantum tunneling of RVB $`\pi `$ flux. It is becoming clear that an in plane kinetic mechanism is also operating in addition to the interplane kinetic mechanism. It is also found that the interlayer and intralayer mechanisms do not help each other and dominate in different regimes of doping.

## VIII Sharp Resonance in Neutron Scattering and the Quasi Particle Peak in ARPES

Another outstanding experimental result is the 41 meV resonance. This sharp resonance, limited only by the instrumental resolution, seen in neutron scattering in YBCO in the superconducting state, has a natural explanation in terms of the pair tunneling mechanism, as proposed by Anderson and collaborators. The peak corresponds to a transition between the bonding and antibonding states of an electron pair between the bilayers, induced by the spin flip scattering of the neutron within a layer. This resonance is strongly pinned to $`(\pi ,\pi )`$ - a point which, in my opinion, has no satisfactory explanation so far. Similarly, in the superconducting state one sees a rather sharp Bogoliubov quasi particle around $`(\pi ,0)`$ at a finite energy of about 20 meV. These quasi particles are rather heavy and do not disperse in k-space. A satisfactory explanation for this phenomenon, including why this Bogoliubov quasi particle peak is confined to the Brillouin zone boundary, away from the normal state fermi surface, is not available so far.

## IX Symmetry of the Gap Function and Magnetic Field Effects

As has been emphasized by Anderson and coworkers, the symmetry of the superconducting order parameter is not strongly dictated by the kinetic pairing mechanism.
In view of its strong locality in k-space, this mechanism determines only the magnitude of the gap. The detailed symmetry of the gap function is determined by the in plane short range repulsion effects, which favor a d-wave. That is, the local on site constraint $`n_{i\uparrow }+n_{i\downarrow }\ne 2`$ (no double occupancy) leads to a global constraint on the pair amplitude in k-space:

$$\langle c_{i\uparrow }c_{i\downarrow }\rangle =0\;\Rightarrow \;\sum _k\langle c_{k\uparrow }c_{-k\downarrow }\rangle =0$$

This global constraint in k-space is easily satisfied if the pair amplitude has d-wave symmetry, since a gap function $`\Delta _k\propto \mathrm{cos}k_x-\mathrm{cos}k_y`$ averages to zero over the Brillouin zone.

In the case of conventional s-wave superconductors, a small magnetic field does not modify the symmetry of the gap function or collapse the gap. In the case of a cuprate with d-wave nodes, even a small magnetic field strongly modifies the nature of the superconducting state, as recently discovered by Krishana and Ong in YBCO. Their observation of a magnetic field induced removal of the d-wave node has revived the possibility of a $`d_{x^2-y^2}+id_{xy}`$ state at low temperatures and low magnetic fields. The proposal of Laughlin, Wilczek and others of an anyonic superconducting state with spontaneous P and T violation is perhaps realized now, however with a small help from a magnetic field. The physics at the Dirac nodes of a d-wave superconductor seems to be filled with rich possibilities.

## X Other Approaches to the t-J Model - Spin Fluctuation, Spin Bag, SO(5) Symmetry etc.

The spin fluctuation theories work on a fermi liquid basis and suggest pairing mediated by the exchange of spin fluctuation quanta. There are deep issues related to the incompatibility of real space super exchange and the fermi liquid background, apart from the fact that the normal state anomalies and the interlayer regularities in Tc are not explained satisfactorily. The spin bag theory is essentially a spin fluctuation theory with a real space tinge to it, and it suffers from similar criticism. On the SO(5) front, Zhang and collaborators view the zero temperature superconducting order of the doped 2d cuprates as one obtained by a rotation of the antiferromagnetic order; a so-called $`\pi `$ operator is the generator of this rotation. Serious criticisms, ranging from technical points to points of fundamental principle, have been raised.

## XI Conclusion

Just when one thought the mist was clearing, one sees further mist that challenges us in the cuprate game. However, the direction provided by the RVB related ideas has been a constant source of real understanding of these systems, and with some more concerted effort a satisfactory picture should emerge soon.

## XII Acknowledgement

I thank P.W. Anderson for a critical reading of the manuscript and comments.
# Radial Velocity Studies of Close Binary Stars. II

## 1 INTRODUCTION

This paper is a continuation of the radial velocity studies of close binary stars started by Lu & Rucinski (1999 = Paper I). The main motivation of this program has been the determination of mean ($`\gamma `$) velocities for Hipparcos stars in order to determine spatial velocity vectors, with an important by-product of preliminary values of the mass ratio from simple sine-curve fits to the data. The program was started with contact binaries, mostly because of the evidence of their very high spatial frequency of occurrence relative to F–K dwarfs, at the level of 1/100 – 1/80 (Rucinski 1998), but is slowly expanding to include other close binary systems accessible to 1.8 meter class telescopes at a medium spectral resolution of about R = 10,000 – 15,000.

The paper is structured in the same way as Paper I in that it consists of two tables containing the radial velocity measurements (Table 1) and their sine-curve solutions (Table 2) and of brief summaries of previous studies of the individual systems. The reader is referred to Paper I for the technical details of the program. In short, all observations described here were made with the 1.88 meter telescope of the David Dunlap Observatory (DDO) of the University of Toronto. The Cassegrain spectrograph, giving a scale of 0.2 Å/pixel, or about 12 km/s/pixel, was used; the pixel size of the CCD was 19 $`\mu `$m. A relatively wide spectrograph slit of 300 $`\mu `$m corresponded to an angular size on the sky of 1.8 arcsec and a projected width of 4 pixels. The spectra were centered at 5185 Å with a spectral coverage of 210 Å. The exposure times were typically 10 minutes long, with the longest exposures for the fainter systems not exceeding 15 minutes.

The data in Table 1 are organized the same way as in Paper I. Table 2 is slightly different from that in Paper I as it now provides information about the relation between the spectroscopically observed epoch of the primary eclipse, T<sub>0</sub>, and the recent photometric determinations, in the form of the (O–C) deviations for the number of elapsed periods E. It also contains, in the first column, below the star name, our new spectral classifications of the program objects. The classification spectra, typically two per object, were obtained using the same spectrograph, but with a different grating giving a dispersion of 0.62 Å/pixel in the range 3850 – 4450 Å. Several spectral-classification standards were observed, and the program-star spectra were then “interpolated” between them in terms of the relative strengths of lines known as reliable classification criteria.

In the radial-velocity solutions of the orbits, the data have been assigned weights on the basis of our ability to resolve the components and to fit independent Gaussians to each of the broadening-function peaks. Weight equal to zero in Table 1 means that an observation was not used in our orbital solutions; however, these observations may be utilized in detailed modeling of the broadening functions, if such is undertaken for the available material. The full-weight points are marked in the figures by filled symbols, while half-weight ones are marked by open symbols. Phases of the observations with zero weights are shown by short markers in the lower parts of the figures; they were usually obtained close to the phases of orbital conjunctions.
Because our data had been collected usually within one or two consecutive observing seasons, the orbital solutions were done with the value of the orbital period held fixed. The solutions for the four circular-orbit parameters, $`\gamma `$, K<sub>1</sub>, K<sub>2</sub> and T<sub>0</sub>, were obtained in the following way: First, two independent least-squares solutions for each star were made using the same programs as described in Paper I. They provided preliminary values of the amplitudes $`K_i`$, the mean velocities $`\gamma `$, and the initial (primary-eclipse) epochs T<sub>0</sub>. Then, one combined solution for both amplitudes and the common $`\gamma `$ was made with the fixed mean value of T<sub>0</sub>. Next, differential corrections to K<sub>1</sub>, K<sub>2</sub>, and T<sub>0</sub> were determined, providing the best values of the four parameters. These values are given in Table 2. The corrections to $`\gamma `$, K<sub>1</sub>, K<sub>2</sub>, and T<sub>0</sub> were finally subjected to a “bootstrap” process (several thousand solutions with randomly drawn data, with repetitions) to provide the median values and ranges of the parameters. As is common in the application of this method, the bootstrap one-sigma ranges were found to be systematically larger than the differential-correction, linear least-squares estimates, so we have adopted them as the measures of uncertainty of the parameters in Table 2; a schematic version of this fitting and bootstrap procedure is sketched below. Throughout the paper, when the errors are not written otherwise, we use the notation of standard mean errors in terms of the last quoted digits, e.g. the number 0.349(29) should be interpreted as $`0.349\pm 0.029`$.
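The sketch below is illustrative only (our actual reduction codes differ in detail); it assumes the usual circular-orbit convention in which the two components move in antiphase about the common $`\gamma `$ velocity, and all function names are ours.

```python
import numpy as np
from scipy.optimize import least_squares

def rv_model(p, t, P, sign):
    # circular orbit: V(t) = gamma + sign * K * sin(2*pi*(t - T0)/P), P fixed
    gamma, K1, K2, T0 = p
    K = K1 if sign < 0 else K2
    return gamma + sign * K * np.sin(2.0 * np.pi * (t - T0) / P)

def residuals(p, t1, v1, w1, t2, v2, w2, P):
    r1 = np.sqrt(w1) * (v1 - rv_model(p, t1, P, -1))   # primary component
    r2 = np.sqrt(w2) * (v2 - rv_model(p, t2, P, +1))   # secondary component
    return np.concatenate((r1, r2))

def fit_orbit(t1, v1, w1, t2, v2, w2, P, p0):
    """Combined weighted least-squares solution for (gamma, K1, K2, T0)."""
    return least_squares(residuals, p0, args=(t1, v1, w1, t2, v2, w2, P)).x

def bootstrap(t1, v1, w1, t2, v2, w2, P, p0, n=2000, seed=1):
    """Refit n resamples drawn with repetitions; return medians and spreads."""
    rng = np.random.default_rng(seed)
    sols = []
    for _ in range(n):
        i = rng.integers(0, len(t1), len(t1))
        j = rng.integers(0, len(t2), len(t2))
        sols.append(fit_orbit(t1[i], v1[i], w1[i], t2[j], v2[j], w2[j], P, p0))
    sols = np.asarray(sols)
    return np.median(sols, axis=0), np.std(sols, axis=0)
```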
## 2 RESULTS FOR INDIVIDUAL SYSTEMS

### 2.1 AH Aur

The binary was discovered by Tsesevich (1954), and early light curves were published by Hinderer (1960). The system has never been observed for a study of radial velocity variations; even photometrically, it is one of the least observed contact binaries. It shows a light curve with an amplitude slightly larger than 0.5 mag and with indications of partial eclipses. Although it was included in the Hipparcos observing list, it was not included in the analysis by Rucinski & Duerbeck (1997 = RD97) because of the large error in its parallax, resulting in an absolute-magnitude error of 0.72 mag. The $`(BV)`$ color from the Tycho experiment, 0.55(8), suggested a spectral type of late F – early G, which is in excellent agreement with our spectral type of F7V; this is also consistent with the absolute-magnitude calibration $`M_V=M_V(\mathrm{log}P,BV)`$ of RD97, which gives $`M_V=3.1`$.

The most recent available photometric timing of the primary eclipse was that by Agerer & Hubscher (1996); this timing was used as the first guess for our radial-velocity solution. In our orbital solution, we assumed the orbital period following the 1985 edition of the General Catalogue of Variable Stars, $`P=0.4942624`$ days. The $`(OC)`$ deviation for the primary eclipse epoch T<sub>0</sub> is relatively large and equals 0.0206 days, which is much larger than the error of determination of T<sub>0</sub>. This shift may be partly due to an obvious asymmetry in the radial-velocity curve of the less-massive component in the first half of the orbital cycle (see Figure 3). If we fix T<sub>0</sub> at the epoch given by Agerer & Hubscher (1996), the asymmetry remains and the fit is only slightly modified. The values of the orbital parameters are then: ($`\gamma `$, K<sub>1</sub>, K<sub>2</sub>) = (30.96, 47.36, 279.68) km/s. The mass ratio then remains at the relatively low value of $`q=0.17`$, but the values of the velocity amplitudes, which determine the masses, are slightly changed. The system needs a modern photometric study.

### 2.2 CK Boo

Bond (1975) noted diffuse spectral lines and then obtained a fragmentary light curve indicating that the star is a W UMa-type binary. Since then, the binary has been the subject of several time-of-minima studies, the most recent one by Muyesseroglu, Gurol & Selam (1996). We have taken the value of the period, $`P=0.3551501`$ days, from the study of Aslan & Derman (1986). Light curves of the system were presented by Krzesinski, Mikolajewski, Pajdosz & Zola (1991) and Jia, Liu & Huang (1992). The light curves are relatively shallow, indicating partial eclipses and, consequently, difficulties with light-curve-synthesis solutions. Krzesinski et al. attempted a solution which included a determination of the photometric mass ratio. They derived $`q_{ph}=0.52`$ or 0.59 (depending on the assumptions concerning spots) and found that the system is of the W-type, i.e. with the primary, deeper minimum corresponding to eclipses of the smaller, less massive star in the system. The system was included in the Hipparcos study (RD97) with a relatively poor determination of the absolute magnitude, $`M_V=2.99(44)`$, which is however consistent with $`(BV)_0=0.54`$ and a spectral type of F6V, as estimated by Krzesinski et al. (1991). Our estimate of the spectral type is slightly later, F7/8V. The only radial velocity study that we are aware of is by Hrivnak (1993). His preliminary spectroscopic determination of the mass ratio, $`q_{sp}=0.16`$, was entirely different from the photometric solution of Krzesinski et al. (1991). Also, the type of the system was found to be A, not W; i.e., it is the more massive star which is eclipsed at the deeper minimum. Our data fully confirm that the system is of the A-type, but our mass ratio is even smaller than that of Hrivnak, $`q_{sp}=0.11`$, although the error of the spectroscopic determination is relatively large for such small values of $`q`$.

### 2.3 DK Cyg

This contact system has not been studied spectroscopically before, although it has been the subject of numerous photometric investigations since its discovery by Guthnick & Prager (1927). The most recent photometry and light curve have been given by Awadalla (1994); this study suggested a systematic period change. We adopted $`P=0.470691`$ days for our data, a number based on the values given by Awadalla (1994) and Binnendijk (1964). Because of the total eclipses at the shallower minimum (a clear A-type), Mochnacki & Doughty (1972) recognized that the system would be a convenient one for their (one of the first) contact-model light-curve-synthesis solutions; they used the light curve of Binnendijk (1964). Their photometric mass ratio was $`q_{ph}=0.33\pm 0.02`$. Our entirely independent spectroscopic result of $`q_{sp}=0.32\pm 0.04`$ agrees fully with this determination, confirming that systems with total eclipses can give excellent photometric solutions, in contrast to systems with partial eclipses. Mochnacki and Doughty (1972) pointed out that the spectral type of the system, judged by its color, is probably F0–F2. This prediction did not agree with the direct estimate of the spectral type at A6–8V given by Hill, Hilditch, Younger & Fischer (1975). We also estimated the spectral type at A8V. The Stromgren color $`b-y=0.24`$ (Hilditch & Hill 1975) suggests a spectral type A8 – F0.
The Hipparcos parallax is small and has a large error ($`\epsilon M_V=0.9`$ mag), so the system was not included in RD97. The system was included in the list of near-contact binaries of Shaw (1994), but, as far as we can establish, it is an excellent example of an A-type contact system without any particular complications, so there are no reasons to consider it a “near-contact” one.

### 2.4 SV Equ

This relatively long-period system (0.881 days) was discovered by Catalano & Rodono (1966) and then photometrically studied by Eggen (1978). Because of the color $`b-y=0.145`$, suggesting a spectral type of A5, and of the shallow (0.15 mag) light curve, Eggen considered the system to be a contact binary of little interest in the context of genuine early-type systems. The system has been somewhat forgotten, except for its inclusion among the near-contact binaries in the compilation of Shaw (1994). The Hipparcos parallax is poor in this case, giving $`M_V=3.7(9)`$. Recent photometric observations of SV Equ were reported by Cook (1997), who gave a new time-of-minimum prediction with the period $`P=0.88097307`$ days. These observations were obtained very close in time to our observations, but they disagree in the time of minimum T<sub>0</sub>. We do not see any obvious reason why the $`(O-C)`$ for contemporaneous observations should be as large as $`0.028`$ days, but we note that the graph of the data in Cook (1997) indicates rather large photometric errors. In spite of attempts to isolate lines of the secondary component with different template spectra, our spectroscopic observations led to the detection of only one component. Thus, the star is not a contact binary (EW), as it was classified before, but most probably a semi-detached system (EB) seen at a low orbital inclination angle.

### 2.5 V842 Her (NSV 7457)

The variability was discovered by Geyer, Kippenhahn & Strohmeier (1965), but the real nature of the star as a contact binary was identified relatively recently by Vandenbroere (1993) and, apparently independently, by Nomen-Torres & Garcia-Moreno (1996). We analyzed all the recent determinations of eclipse timings, as published by Vandenbroere (1993), Diethelm (1994), Nomen-Torres & Garcia-Moreno (1996) and Agerer & Huebscher (1997, 1998a), and found the following ephemeris: JD(Hel) = 2,447,643.1771(29) + 0.41903201(56) E. As for the other systems, only the period was used to determine the phases of the spectroscopic observations. V842 Her has not yet been observed for radial velocity changes. There are also no previous estimates of its spectral type or color. It is the only W-type system in the current study; coincidentally, it is also the only system without a Hipparcos parallax measurement in this group of ten systems. The orbital period of 0.42 days is somewhat long for a typical W-type system, and our spectral classification of F9V is unusually early for a W-type system. The light curve has a moderately large amplitude of about 0.6 mag, and the primary (deeper) eclipses appear to be total or very close to total, so the system has the potential for an excellent combined light and radial-velocity solution.

### 2.6 UZ Leo

Variability of this star was discovered by Kaho (1937), but the correct type was identified much later by Smith (1954). Since then, there have been many photometric studies of the system, but no radial-velocity studies. For guidance on the orbital phases, we used the recent timing determination of Agerer & Huebscher (1998a).
To phase our observations, we used the value of the period from the study of Binnendijk (1972). The modern light-curve synthesis solutions of Vinko, Hegedues & Hendry (1996) arrived at two similar possible values of the mass-ratio: $`q_{ph}=0.233`$ and 0.227. They differ rather substantially from our spectroscopic value, $`q_{sp}=0.303(24)`$. The system is clearly of the A-type, with a large-amplitude light curve and total eclipses offering excellent prospects for a combined radial-velocity/photometric solution. The Hipparcos parallax placed the system slightly outside the $`M_V`$ error threshold of 0.5 mag used in RD97. The resulting relatively faint absolute magnitude of $`M_V=3.75(55)`$ agrees better with the spectral type of F2–3V, implied by $`(B-V)=0.373`$, than with the direct estimate of A7V (Vinko et al. 1996). Our estimate of the spectral type is A9/F1V.

### 2.7 XZ Leo

The system was discovered by Hoffmeister (1934). The recent time of minimum has been taken from Agerer & Huebscher (1997), while the period is that determined by Niarchos, Hoffman & Duerbeck (1994). The system was never observed spectroscopically. Niarchos et al. (1994) made a plausible and apparently correct assumption that the system is of the A-type and attempted to determine the mass-ratio. Their value, $`q_{ph}=0.726`$, is very far from our spectroscopic determination, $`q_{sp}=0.348(29)`$, once again demonstrating the dangers of spectroscopically unconstrained light-curve solutions for partially eclipsing systems. They attempted to estimate the spectral type and preferred the range A7 to F0 rather than the previous estimates of A5 to A7. The Tycho experiment color $`(B-V)_T=0.45(7)`$ indicates a mid-F spectral type. Our spectral type is A8/F0V, so the color does not agree with the spectral type.

### 2.8 V839 Oph

Variability of the star was discovered by Rigollet (1947). The system has not yet been observed for radial-velocity variations. The recent timing of the minimum is by Agerer & Huebscher (1998b), while the period used for phasing of our observations was determined by Akalin & Derman (1997). In spite of partial eclipses and light-curve instabilities, the system has been the subject of many photometric contact-model solutions. The most recent one, by Akalin & Derman (1997), arrived at a photometric mass-ratio of $`q_{ph}=0.40`$. This differs substantially from our spectroscopic determination of $`q_{sp}=0.305(24)`$. The system is of the A-type. V839 Oph was included in the study of the Hipparcos data (RD97) with a moderately accurate determination of $`M_V=3.08(38)`$. This absolute magnitude is consistent with the Tycho color $`(B-V)_T=0.62`$, with $`(b-y)=0.41`$ (Hilditch & Hill 1975) and with the spectral type of F8V (Akalin & Derman 1997), under the assumption that the reddening is relatively large, $`E_{B-V}=0.09`$ (RD97). Our spectral type is F7V.

### 2.9 GR Vir

Strohmeier, Knigge & Ott (1965) noticed the light variations of the star, but it was the independent discovery by Harris (1979) that led to its identification as a close binary system. The assumed period as well as the recent timing of the eclipse come from the photometric study of Cereda, Misto, Niarchos & Poretti (1988). Since the system has not been observed recently, the accumulated uncertainty in the period, as well as a likely change in its length since the observations of Cereda et al., has led to a large difference of 0.2208 days between the spectroscopic and predicted values of T<sub>0</sub>.
We handled the implied problem of relating our radial-velocity data to the photometric data of Cereda et al. by assuming that the system is of the A-type, as indicated by the fact that the secondary (shallower) eclipses are apparently total. The mass-ratio of the system, $`q=0.12`$, is one of the smallest known for contact systems. Because of the total eclipses and the availability of the spectroscopic mass-ratio, the system is an ideal candidate for a new light-curve synthesis solution. GR Vir is the third and last system in this series which was included in the Hipparcos analysis (RD97). The absolute magnitude $`M_V=4.17(14)`$ was the best determined among the three determinations. Because of its brightness and the features mentioned above, the system deserves much attention. The colors $`(b-y)=0.37`$ (Olsen 1983) and $`(B-V)=0.55`$ (Cereda et al. 1988) suggest the spectral type F9 – G0. Our spectral classification is F7/8V.

### 2.10 NN Vir (HD 125488)

This is one of the stars whose variability was detected by the Hipparcos satellite. Gomez-Ferrellad & Garcia-Melendo (1997) correctly identified the type of variability and determined the period and the initial epoch T<sub>0</sub>. The radial velocity variations have been observed by us for the first time. In spite of its large apparent brightness ($`V=7.6`$), it was excluded from the radial-velocity survey of early F-type stars by Nordstroem, Stefanik, Latham & Andersen (1997) because of the strong broadening of the spectral lines, indicating rapid rotation and/or the close-binary character of the star. NN Vir has a relatively large mass-ratio, $`q_{sp}=0.491(11)`$, which is rather infrequent among the A-type contact systems. The light curve of Gomez-Ferrellad & Garcia-Melendo has a moderately large amplitude, so the system should be the subject of a combined radial-velocity/photometric solution. NN Vir was not included in RD97, but its parallax is moderately large, leading to a relatively secure determination of the absolute magnitude, $`M_V=2.52(26)`$. The color $`b-y=0.25`$ (Olsen 1983) suggests the spectral type F3. This is confirmed by our direct classification of F0/1V.

## 3 SUMMARY

The paper presents radial velocity data for the second group of ten close binary systems that we have observed, this time all at the David Dunlap Observatory. All but SV Equ, which is probably a short-period semi-detached system, are contact binaries with both components clearly detected. Although we do not give the calculated values of $`(M_1+M_2)\mathrm{sin}^3i=1.0385\times 10^{-7}(K_1+K_2)^3P(\mathrm{day})M_{\odot }`$, we note that for all 9 contact binaries they are relatively large and range between $`1.17M_{\odot }`$ (CK Boo) and $`2.25M_{\odot }`$ (UZ Leo), indicating that the orbital inclination angles for all of them are not far from 90 degrees and that the final, combined solutions of the light and radial-velocity variations should give reliable values of the masses. Although our observational selection of this group of ten systems was purely random, we found – after exclusion of SV Equ – that 8 of the 9 W UMa-type systems are of the A-type; only V842 Her is of the W-type.
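For reference, the minimum total masses quoted above follow directly from the velocity amplitudes and the period; a minimal numerical sketch (ours, using the alternative AH Aur solution of Section 2.1 as a worked example) is:

```python
def min_total_mass(K1, K2, P_days):
    # (M1 + M2) sin^3(i) in solar masses, for K1, K2 in km/s:
    # (M1 + M2) sin^3 i = 1.0385e-7 * (K1 + K2)^3 * P(day)
    return 1.0385e-7 * (K1 + K2) ** 3 * P_days

# AH Aur, alternative solution with T0 fixed (Section 2.1):
print(min_total_mass(47.36, 279.68, 0.4942624))  # ~1.80 M_sun
```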
A combination of factors could lead to the unusual preference of A-type over W-type systems noted above: (1) Although it is generally more difficult to detect low-mass secondaries in A-type systems than in W-type systems, because of the more extreme mass-ratios, our instrumental setup is ideal for observations of the type presented here; (2) The A-type systems are, on average, hotter and brighter, so they would be preferentially selected close to the faint limit of a magnitude-limited survey like ours; (3) It is possible that, in cases where the spectral lines of the secondary components had not been detected during the first attempts by previous observers, the A-type systems were preferentially discarded in favor of the W-type systems, since the latter would normally give full, two-component orbital solutions. We point out which systems will be of greatest interest in the individual descriptions in Section 2. Here we note that three of our systems have very small mass-ratios: 0.11, 0.12 and 0.17 for CK Boo, GR Vir and AH Aur, respectively. Small values are common among the A-type systems, but the physical state of such extreme mass-ratio systems remains elusive. We note that the only W-type system, V842 Her, has a relatively small (for a W-type system) mass-ratio of 0.26, while NN Vir has a relatively large mass-ratio of 0.49 (for an A-type system).

The research has made use of the SIMBAD database, operated at the CDS, Strasbourg, France and accessible through the Canadian Astronomy Data Centre, which is operated by the Herzberg Institute of Astrophysics, National Research Council of Canada.
# Lasing on the 𝐷₂ line of sodium in a helium atmosphere due to optical pumping on the 𝐷₁ line (up-conversion)

## Abstract

A new method is proposed to produce population inversion on transitions involving the ground state of atoms. The method is realized experimentally with sodium atoms. Lasing at the frequency corresponding to the sodium $`D_2`$ line is achieved in the presence of pump radiation resonant with the $`D_1`$ line, with helium as a buffer gas.

In recent years, growing interest has been attracted by new schemes of laser action in gases based on quantum interference induced by laser fields; see the review papers and citations therein. Especially attractive is the opportunity to achieve lasing from highly excited levels into the ground state of atoms and molecules, or into the lowest possible energy levels, which usually have the highest population (in the absence of perturbing fields). It is such schemes that are considered promising for short-wavelength generation. There are numerous works investigating the amplification of radiation in alkali-metal vapors on the transition into the ground state in the $`V`$-scheme. Some of them exploit coherence effects to suppress absorption on the transition from the ground state to the appropriate upper level, while the upper level is weakly populated by various methods, such as a discharge current or collisions with buffer-gas particles.

In this paper we aim to draw attention to new possibilities for producing laser action on transitions into the ground state. The method is based on manipulation of the polarization of the pump waves and on a specific collisional population-transfer mechanism. In earlier papers, the possibility of laser action at the frequency corresponding to the $`D_1`$ line was demonstrated in the presence of a strong pump field tuned in resonance with the $`D_2`$ line in alkali-metal vapors. The effect is generated by collisions with buffer-gas particles. The collisions should be frequent enough to establish a Boltzmann distribution of the populations of the fine-structure components ($`P_{3/2}`$ and $`P_{1/2}`$) during the pump-pulse action. Under these conditions, the magnetic-sublevel populations of the $`P_{1/2}`$ state turn out to be higher, by the Boltzmann factor, than those of the $`P_{3/2}`$ state. A strong pump field is used to equalize the magnetic-sublevel populations of the ground state $`S_{1/2}`$ and the resonant upper state $`P_{3/2}`$. As a result, the population of the level $`P_{1/2}`$ becomes higher than the population of the level $`S_{1/2}`$. Thus, population inversion on the transition from the upper state $`P_{1/2}`$ into the ground state $`S_{1/2}`$ is achieved, and laser action at the frequency of the $`D_1`$ line becomes possible. In those experiments, lasing reached the superluminosity regime. In correspondence with the nature of the process, the generated frequency was lower than the pump frequency (down-conversion). This naturally leads to the question of whether up-conversion is possible, i.e., whether laser action (on the $`D_2`$ line) can be achieved at a frequency higher than that of the pump field (resonant with the $`D_1`$ line). The present paper gives a positive answer. Such an opportunity appears if one combines the above-mentioned collisional processes with specific polarization effects. Let us prove this statement. Consider the interaction of a pump pulse (specifically polarized) with a gas of atoms mixed with a buffer gas at high pressure. For definiteness, let us take sodium atoms, with the corresponding level scheme shown in Fig. 1.
The pump pulse, with carrier frequency resonant with the $`D_1`$ line, has the following temporal and polarization structure. It consists of a long, low-intensity, circularly polarized prepulse and a much more intense, short main pulse with the orthogonal circular polarization. The prepulse is used for optical orientation of the sodium atoms in the ground state. From this one can set the requirement on its duration (it has to be longer than the upper-state relaxation time) and on its intensity (it may be low, but must be sufficient for optical orientation). After the end of the prepulse, almost all the population is optically pumped into one of the magnetic sublevels of the ground state (its population is shown in Fig. 1(a) by a symbolic column). The main pulse may be shorter than the relaxation time of the excited levels. It transfers the population from the ground state onto the sublevel of the excited level $`P_{1/2}`$ that is initially empty (the corresponding transition is shown in Fig. 1(b) by a solid arrow). The intensity of the main pulse must be high enough to maintain equality of the populations of the coupled sublevels. Furthermore, at high buffer-gas pressure, collisions are frequent enough to mix the states $`P_{1/2}`$ and $`P_{3/2}`$, as well as their magnetic sublevels. As a result, the population is distributed almost equally between the sublevels, as shown in Fig. 1(b) by small columns. The sublevel $`M=1/2`$ of the ground state is almost empty (recall that it is pumped out by the prepulse), whereas the other sublevels of the ground state and the excited states are populated almost equally (the population difference, by the Boltzmann factor, between the $`P_{1/2}`$ and $`P_{3/2}`$ states is insignificant here and may be neglected). One can see that population inversion between some upper magnetic sublevels (including those of the $`P_{3/2}`$ level) and the sublevel $`M=1/2`$ of the ground state is created. Hence, the necessary conditions for laser action, specifically from the level $`P_{3/2}`$ to the ground state $`S_{1/2}`$, are met. Thus, we have shown that conditions exist for producing laser action on the $`D_2`$ line with pulsed excitation resonant with the $`D_1`$ line. Taking into account the partial oscillator strengths, we see that the highest gain is achieved on the transition $`P_{3/2}(M=3/2)\to S_{1/2}(M=1/2)`$, as shown in Fig. 1(b) by a wavy arrow. This means that the generated wave presumably has the same polarization as the main pump pulse. To avoid misunderstanding, we should point out that collisions with particles of a non-magnetic buffer gas (a noble gas, for example) mix the magnetic sublevels of the ground state of alkali metals only very weakly, so we can safely neglect this factor.

On the basis of the above treatment, we have experimentally realized such a scheme, with population inversion on the $`3P_{3/2}\to 3S_{1/2}`$ transition of sodium. A general schematic of our experimental setup is shown in Fig. 2. In the experiment it was easier and more convenient to use CW radiation instead of the prepulse; in this case the CW radiation, supplied by a separate laser, maintains the orientation of the ground state between pump pulses. We used a CW dye laser $`DL1`$ with linear vertical polarization (3 GHz spectral linewidth) and a pulsed dye laser $`DL2`$ with horizontal polarization (10 GHz linewidth and 5 ns pulse duration). In the spectrum of the pulsed laser, together with the narrow-band laser radiation, broadband luminescence of the R6G dye is present, but its spectral density is 3 orders of magnitude lower.
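As a rough numerical check of the claim that the Boltzmann population difference between $`P_{1/2}`$ and $`P_{3/2}`$ is insignificant, one can evaluate $`e^{-\Delta E/kT}`$ for the Na $`3P`$ fine-structure splitting of about 17.2 cm<sup>-1</sup>; the cell temperature of roughly 550 K below is our assumption, chosen to be typical of the vapor densities quoted later:

```python
import math

# Na 3P fine-structure splitting and an assumed cell temperature
# consistent with vapor densities of order 1e13 cm^-3.
delta_E_cm = 17.2       # cm^-1, splitting between 3P_3/2 and 3P_1/2
k_cm_per_K = 0.6950     # Boltzmann constant in cm^-1 per K
T = 550.0               # K (assumption)

boltzmann = math.exp(-delta_E_cm / (k_cm_per_K * T))
print(f"exp(-dE/kT) = {boltzmann:.3f}")  # ~0.96, i.e. only a few-percent effect
```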
After passing through the quarter-wave ($`\lambda /4`$) plate, the beam of the CW laser becomes clockwise polarized, and the beam of the pulsed laser counter-clockwise polarized. The CW radiation prepares the medium by converting it into the $`S_{1/2}(M=1/2)`$ state, and the high-power pulses are used for populating the upper levels. Both beams propagate in the same direction and are focused by the lens $`L_1`$, with focal length $`F=55`$ cm, into the cell with the sodium-helium mixture. The intensity near the beam waist is $`60`$ W/cm<sup>2</sup> for the CW laser, and up to $`6`$ MW/cm<sup>2</sup> for the pulsed laser. The cell has a 1.5-cm diameter and is 22 cm long, with a heated zone ($`BC`$) of 4.5 cm in the central part. The sodium vapour density is controlled by varying the temperature, which is measured by a thermocouple. The cell is placed between Helmholtz coils ($`HC`$) that provide an external longitudinal magnetic field to eliminate the deorienting effect of the transverse component of the laboratory field. External fields up to $`B\approx 80`$ G are available. The output radiation is focused by the lens $`L_2`$ onto the slit of the RAMANOR HG.2S monochromator ($`M`$), with an apparatus width of about 0.5 cm<sup>-1</sup>. Data from the photomultiplier $`D`$, connected to an amplifier and integrator, are registered with a computer, which allows us to store and average the measured data. The generation at the $`D_2`$-line frequency was measured in the direction of the pump beam as well as in the opposite direction. For this purpose the beam splitter ($`BS_2`$) was inserted into the pump-beam pathway, as shown in Fig. 2.

First of all, it has been ascertained that, in the absence of the external magnetic field and at low buffer-gas (helium) pressure, there is no coherent radiation at the $`D_2`$-line frequency over a broad range of the other experimental parameters. After an external magnetic field $`B>0.5`$ G is applied, intense coherent radiation at the $`D_2`$-line frequency appears at helium pressures higher than 200 torr, with the CW and pulsed lasers tuned to exact resonance with the $`D_1`$ line; see curve 1 in Fig. 3(a). The divergence of the output beam appears to be no more than that of the pump beam. The registered spectral width is about $`0.7`$ cm<sup>-1</sup>, close to the apparatus width. The radiation at the $`D_2`$-line frequency has nearly the same polarization as the strong pulsed field. Curve 2 in Fig. 3(a) illustrates the measurement without magnetic field ($`B=0`$). In this case only the absorption line is observed, because the broadband R6G dye luminescence is absorbed in the optically thick medium. The coherent radiation at the $`D_2`$-line frequency is observed both in the direction of the pump beam and in the opposite direction (Fig. 3(b)). The spectral widths of the forward and backward output radiation are nearly the same. In contrast to the forward radiation, the backward radiation is also observable in the absence of the external magnetic field. Note that the intensity of the forward output radiation is 80 times as high as that of the backward radiation (in the presence of the magnetic field). This fact can be explained by the effect of the dye luminescence, which serves as a seed leading to amplification in its direction. It is interesting to note that the backward generation occurs in the absence of the longitudinal magnetic field, i.e. when the optical orientation is noticeably destroyed.
We suppose that this happens because the intensity of the CW radiation is sufficient to transfer the residual population from the sublevel $`S_{1/2}`$ $`M=1/2`$ into the excited states, thus helping to create inversion on the operating transition. The absence of forward generation under identical conditions is most likely explained as follows. The CW radiation is absorbed as it propagates along the heated zone. Because of this, the inversion condition is no longer valid in the output part of the zone; hence, the forward-generated radiation is absorbed in the output part.

The output intensity of the generated radiation, with the other conditions fixed, reaches its maximum when the frequencies $`\omega _L`$ of both lasers (CW and pulsed) are tuned in resonance with the $`D_1`$ line of frequency $`\omega _{D_1}`$. Detuning from the exact resonance by $`|\mathrm{\Omega }|=|\omega _L-\omega _{D_1}|\gtrsim 4`$ cm<sup>-1</sup> leads to the disappearance of the output signal. Appearing at a helium pressure of 200 torr, the intensity at the $`D_2`$-line frequency rises monotonically with increasing pressure up to 810 torr, the highest available in the experiment. The maximum of the output intensity, measured as a function of the sodium vapour density, is reached at $`N\approx 8\times 10^{12}`$ cm<sup>-3</sup>. Under these conditions the cell transmission is $`90`$% for the pulsed radiation, and $`80`$% for the CW laser beam. Starting at an external magnetic field strength $`B`$ as low as 0.5 G (comparable to the laboratory field), the output signal grows almost linearly with increasing field up to $`B\approx 5`$ G, where it saturates. Note that the application of the external magnetic field considerably helps to orient the sodium atoms by the circularly polarized CW radiation. The luminescence intensity is attenuated by a factor of 3–4 after the field is applied, which means the fraction of oriented atoms reaches more than 80%. When the pump intensity is attenuated down to 1.5–2 MW/cm<sup>2</sup>, the output intensity does not change significantly. Further attenuation of the pump radiation leads to a smooth decrease of the generated power, while its spectral width remains constant. At the same time, attenuation of the CW laser power leads to an abrupt fall of the generated power at the $`D_2`$-line frequency. At optimal conditions ($`\mathrm{\Omega }=0`$, $`N\approx 10^{13}`$ cm<sup>-3</sup>, helium pressure 810 torr, $`B\approx 5`$ G), the output intensity of the generated radiation reaches 1.5–2% of the absorbed pump intensity (of the pulsed laser). The intensity of the generated radiation inside the cell is estimated to be $`5`$ kW/cm<sup>2</sup>, corresponding to strong saturation, $`\varkappa \gg 1`$ (the saturation parameter is defined as $`\varkappa =\left(d_{mn}E/2\hbar \right)^2/\mathrm{\Gamma }\mathrm{\Gamma }_m`$, where $`d_{mn}`$ is the matrix element of the dipole moment on the transition $`m`$–$`n`$, $`E`$ the electric field amplitude, $`\mathrm{\Gamma }`$ the collisional linewidth, and $`\mathrm{\Gamma }_m`$ the radiative relaxation rate of the upper level $`m`$). This suggests that the generation occurs in the superluminosity regime (at least in the forward direction), i.e. the generated wave is capable of utilizing a considerable portion of the inversion on the operating transition. We have also considered parametric processes, such as wave mixing, as an alternative interpretation of our experimental results.
It is well known that wave-mixing processes are suppressed at resonance due to absorption, while our signal, on the contrary, has a maximum at the frequency resonant with the $`D_2`$ line and decays rapidly with detuning from the resonance; moreover, a signal generated by wave mixing in the backward direction should not be present, due to phase mismatch. Thus our original interpretation presented above seems more realistic and reasonable. Our work is based on the combination of well-known phenomena (laser excitation, optical pumping, collisional population transfer), but we stress that only the appropriate combination of these phenomena leads to the new results, which were not obvious beforehand.

Thus, we have shown in the present work that the proposed combination of polarization and collisional-transfer mechanisms opens new opportunities for resonant radiative processes. As a development of the general method applied here to a special case, we propose a variant of the scheme that allows one to achieve generation of violet and UV radiation on transitions into the ground state. For this purpose, instead of a single-photon process, two-step or two-photon excitation into higher-lying states can be exploited (see Fig. 4). In this case, the radiative transition from the excited state ($`m`$) into the ground state is parity-forbidden. However, if there is another close (within the $`kT`$ range) level $`l`$ of different parity, coupled radiatively to the ground state $`n`$, the developed method remains applicable. Using a high-pressure buffer gas, efficient collisional mixing of the upper levels $`m`$ and $`l`$ can be provided. Then, due to the Boltzmann factor (if level $`l`$ lies lower than level $`m`$) and high-intensity pump waves, population inversion on the transition $`l\to n`$ can be achieved, thus resulting in laser action in the short-wavelength range. In case the available intensity and the Boltzmann factor are not sufficient for generation, or level $`l`$ lies higher than level $`m`$, it is possible to prepare the system using the same polarization technique: optical orientation of the ground state $`n`$ by radiation resonant with the first-step transition opens an additional opportunity to build up population inversion between the corresponding sublevels of the transition $`l\to n`$.

We gratefully acknowledge fruitful discussions with S.G. Rautian, Ye.V. Podivilov and M.G. Stepanov, and the opportunity to use the CW dye laser provided by the ”INVERSion” and ”Technoskan” companies. This work was supported in part by the Russian Foundation for Basic Research, grants 96-15-96642 and 98-02-17924.
# The critical exponents of fracture precursors

## Abstract

The acoustic emission of fracture precursors is measured in heterogeneous materials. The statistical behaviour of these precursors is studied as a function of the load features and the geometry. We find that the time intervals $`\delta t`$ between events (precursors) and the event energies $`\epsilon `$ are power-law distributed and that the exponents of these power laws depend on the load history and on the material. In contrast, the cumulated acoustic energy $`E`$ presents a critical divergence near the breaking time $`\tau `$, $`E\sim \left(\frac{\tau -t}{\tau }\right)^{-\gamma }`$. The positive exponent $`\gamma `$ is independent, within error bars, of all the experimental parameters.

PACS: 62.20.Mk, 05.20.-y, 81.40.Np

Heterogeneous materials are widely studied not only for their great utility in applications but also because they could give more insight into our understanding of the role of macroscopic disorder in material properties. The statistical analysis of the failure of these materials is a topical and fundamental problem which has received a lot of attention in the last decade, both theoretically and experimentally. When a heterogeneous material is stretched, its evolution toward breaking is characterized by the appearance of microcracks before the final break-up. Each microcrack produces an elastic wave which is detectable by a piezoelectric microphone. The microcracks constitute the so-called precursors of fracture. The purpose of this letter is to describe a detailed statistical analysis of fracture precursors performed under many different experimental conditions and in several heterogeneous materials. Analyses of this kind can give very useful information for constructing realistic statistical models of material failure.

In our experiments we apply a pressure $`P`$ to a heterogeneous sample until failure. The parameters that we consider are the elapsed time $`\delta t`$ between two consecutive events, the acoustic energy $`\epsilon `$ released by a single microcrack, and the acoustic energy $`E`$ cumulated since the beginning of the loading. In this paper we discuss the statistical behavior of these parameters as a function of the load applied to the sample, the material elastic properties and the geometry. In previous experiments it has been shown that if a quasi-static (i.e., varying slowly with respect to the relaxation time of the system) constant pressure rate is imposed, that is $`P=A_pt`$, the sample breaks in a brittle way. In this case the cumulated acoustic energy $`E(t)`$ (i.e. the total energy released up to time $`t`$ by the microfractures) scales with the reduced time or pressure (in this case $`P`$ and $`t`$ are proportional) in the following way:

$$E\sim \left(\frac{\tau -t}{\tau }\right)^{-\gamma },$$

where $`t`$ is the time, the critical time $`\tau `$ is the time at which the sample breaks, and $`\gamma =0.27\pm 0.05`$ for all the materials we have checked. In contrast, if a constant strain rate $`u=Bt`$ is imposed, a plastic fracture is observed and the released energy shows no critical behavior. We have also shown that in the case of a constant $`P`$ imposed on the sample (creep test), the total energy $`E`$ becomes, near failure, a function of $`t`$ and scales as $`E\sim \left(\frac{\tau -t}{\tau }\right)^{-\gamma _c}`$.
Notably, the exponent found when a constant stress is applied is the same as the one found in the case of a constant stress rate: $`\gamma =\gamma _c`$. In all of these processes, at constant pressure and at constant pressure rate, the actual control parameter for failure seems to be the time. The appearance of a microcrack seems to be due to a nucleation process, and the probability of nucleation determines the lifetime $`\tau `$ of the entire sample. In fact, we find that $`\tau `$ is given by the equation:

$$\int _0^\tau \frac{1}{\tau _o}e^{-(\frac{P_o}{P})^4}dt=1,$$

where $`P`$ is the pressure and $`\tau _o`$ and $`P_o`$ are constants, which depend on the material and on the geometry. In the case of a constant load rate ($`P=A_pt`$ or $`u=Bt`$) the system has no characteristic scale of energy or time: the histogram $`N(\epsilon )`$ of the released energies and the histogram $`N(\delta t)`$ of the elapsed times $`\delta t`$ between two consecutive events reveal power laws, i.e. $`N(\epsilon )\sim \epsilon ^{-\alpha }`$ and $`N(\delta t)\sim \delta t^{-\beta }`$. The exponents $`\alpha `$, $`\beta `$ and $`\gamma `$ do not depend on the load rate $`A_p`$ or $`B`$. In this paper we are interested in studying the exponents in different geometries and when a constant (creep test), cyclic or erratic load is imposed.

The tests are performed by monitoring the acoustic emission (AE) released before the final break-up of a sample in a high-pressure chamber (HPC) machine. The sample separates two chambers, and a pressure difference $`P`$ is imposed between them. A sketch of the apparatus is shown in Figs. 1a and 1b. We have prepared circular wood and fiberglass samples of 22 cm diameter and 4 mm thickness; the Young moduli of these materials are $`Y=2\times 10^8`$ N/m<sup>2</sup> for wood and $`Y=2\times 10^8`$ N/m<sup>2</sup> for fiberglass. The AE consists of ultrasound bursts (events) produced by the formation of microcracks inside the sample. For each AE event, we record the energy $`\epsilon `$ detected by the four microphones (defined as the integral of the sum of the squared signals), the place where it originated, the time at which the event was detected, and the instantaneous pressure and displacement at the center of the sample. We are able to record up to 33 events per second. The experimental apparatus is the same as that used to obtain the previously cited results; a more detailed description of the experimental methods can be found in the references.

To check the dependence of $`\alpha `$, $`\beta `$ and $`\gamma `$ on the geometry, we used a classical tensile machine, Fig. 1c. The force applied to the sample is slowly and constantly increased up to the final break-down of the sample. During the load we measure the applied force $`F`$, the strain, the AE produced by microcracks and the times at which the events were detected. The samples have a rectangular shape, with a length of 29 cm, a height of 20 cm and a thickness of 4 mm. More details of this experimental setup can also be found in the references.

In the experiments performed with the first apparatus (HPC), power laws are obtained for the distributions of $`\epsilon `$ and of $`\delta t`$. As an example of two typical distributions obtained at constant imposed pressure, we plot $`N(\delta t)`$ and $`N(\epsilon )`$ in Figs. 2a and 2b, respectively. The exponents of these power laws ($`\alpha _c`$ for energies and $`\beta _c`$ for times) depend on $`P`$.
In Fig. 2c, $`\alpha _c`$ and $`\beta _c`$ are plotted versus $`P`$. Note that both exponents grow with pressure. We observe that the rate of emissions increases with pressure, so that the weight of large values of $`\delta t`$ decreases; this explains the fact that $`\beta _c`$ grows with pressure. We have compared the histograms of the energy $`\epsilon `$ for several pressures, and we noticed that the number of high-energy emissions is almost the same, while the number of low-energy emissions increases with pressure, so that the exponent $`\alpha _c`$ increases as well. Moreover, as the pressure increases, the exponents $`\alpha _c`$ and $`\beta _c`$ approach the values $`\alpha =1.9\pm 0.1`$ and $`\beta =1.51\pm 0.05`$ obtained in the case of a constant loading rate. We imposed on the sample a cyclic and an erratic load, which are plotted as functions of time in Figs. 3a and 3b, respectively. Power laws are obtained for the distributions of $`\epsilon `$ and of $`\delta t`$. The exponents of these power laws do not depend on the load behavior; their values are the same as those at constant loading rate. These and previous results allow us to state that if $`\frac{dP}{dt}\ne 0`$, the histograms of the released energies $`\epsilon `$ and of the time intervals $`\delta t`$ do not depend on the load history.

The fact that $`\alpha `$ and $`\beta `$ do not depend on $`\frac{dP}{dt}`$ seems to be in contrast with the fact that $`\alpha _c`$ and $`\beta _c`$ depend on $`P`$. This result can be interpreted by considering that the microcrack formation process is not the same when $`\frac{dP}{dt}=0`$ and when $`\frac{dP}{dt}\ne 0`$. In the former case (imposed constant $`P`$), the mechanism of microcrack nucleation is the dominant one, and the nucleation time depends on pressure. In the other case, $`\frac{dP}{dt}\ne 0`$, the dominant mechanism is not nucleation but the fact that, as the pressure increases as a function of time, several parts of the sample may have to support a pressure larger than the local critical stress needed to break bonds. The fact that at high constant pressure $`\alpha _c`$ and $`\beta _c`$ recover the values $`\alpha `$ and $`\beta `$ has a simple explanation. Indeed, in order to reach a very high pressure $`P_h`$, $`\frac{dP}{dt}`$ is different from zero for a time interval which is comparable to, or even longer than, the time interval spent at the constant pressure $`P_h`$. Thus at high constant pressure the system is close to the case $`\frac{dP}{dt}\ne 0`$.

Using the cyclic and the erratic pressures, plotted respectively in Figs. 3a and 3b, we can check the dependence of $`\gamma `$ on the history of the sample, i.e. on the behavior of the imposed pressure. The cumulated energy $`E`$ for the cyclic and the erratic pressure, shown in Figs. 3a and 3b as a function of $`t`$, is plotted in log-log scale as a function of the reduced parameter $`\frac{\tau -t}{\tau }`$ in Figs. 4a and 4b, respectively. We observe that, in spite of the fluctuations due to the strong oscillations of the applied pressure, near the final break-up the energy $`E`$, as a function of $`\frac{\tau -t}{\tau }`$, is fitted by a power law with $`\gamma \approx 0.27\pm 0.02`$. In Fig. 4c, $`E`$ measured when a constant pressure is applied to the sample is plotted as a function of $`\frac{\tau -t}{\tau }`$. A power law is found in this case too. The exponent $`\gamma `$ is, within error bars, the same in the three cases. Hence it depends neither on the applied pressure history nor on the material.
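A minimal sketch of the kind of fit used to extract $`\gamma `$ (ordinary least squares on the log-log data; the synthetic data and variable names are ours, for illustration only) is:

```python
import numpy as np

def fit_gamma(t, E, tau):
    # Fit E ~ ((tau - t)/tau)^(-gamma) by linear regression in log-log scale.
    x = np.log((tau - t) / tau)
    y = np.log(E)
    slope, intercept = np.polyfit(x, y, 1)
    return -slope  # gamma

# Synthetic check: data generated with gamma = 0.27 plus multiplicative noise.
rng = np.random.default_rng(1)
tau = 1000.0
t = np.linspace(100.0, 990.0, 200)
E = ((tau - t) / tau) ** -0.27 * np.exp(0.02 * rng.standard_normal(t.size))
print(f"gamma = {fit_gamma(t, E, tau):.3f}")  # ~0.27
```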
Further, experiments made with the tensile machine show that $`\gamma `$ is independent of the geometry. In fact, we observe that the behavior of the energy near the fracture, as a function of $`\left(\frac{\tau -t}{\tau }\right)`$, is still a power law of exponent $`\gamma \approx 0.27`$, as shown in Fig. 4d.

Considering the experimental data presented here and those already published, we claim that, if a load is imposed on a heterogeneous material, power laws are obtained for the histograms of the released energies $`\epsilon `$ and of the time intervals $`\delta t`$. The exponents of these power laws depend on the material and, if $`\frac{dP}{dt}=0`$, on the applied pressure $`P`$. In contrast, at imposed pressure, the behavior of the cumulated energy $`E`$ near the final breaking point does not depend on the load, on the geometry or on the material. We find that time is the control parameter of the system and that $`E\sim \left(\frac{\tau -t}{\tau }\right)^{-\gamma }`$, where the critical exponent is $`\gamma =0.27\pm 0.05`$. These results are quite similar to those obtained in numerical simulations of a democratic fiber bundle model with thermal noise. These facts and the observed dependence of $`\tau `$ on $`P`$ allow us to conclude that the microcrack nucleation process plays a fundamental role in the entire dynamics of the system.
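As a closing illustration of the nucleation picture, the lifetime equation given earlier can be integrated numerically for an arbitrary load history $`P(t)`$; the following sketch (with made-up values of $`\tau _o`$ and $`P_o`$, purely for illustration) finds the predicted breaking time $`\tau `$:

```python
import math

def breaking_time(P_of_t, tau0, P0, dt=1.0, t_max=1e9):
    # Accumulate the hazard rate (1/tau0) * exp(-(P0/P)^4) until it reaches 1.
    acc, t = 0.0, 0.0
    while acc < 1.0 and t < t_max:
        P = P_of_t(t)
        if P > 0.0:
            acc += math.exp(-(P0 / P) ** 4) / tau0 * dt
        t += dt
    return t

# Illustration only: constant load P = 0.7*P0 with made-up constants.
tau0, P0 = 1.0, 1.0
print(breaking_time(lambda t: 0.7 * P0, tau0, P0))  # ~64 time units
```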
# Quantum superluminal communication does not result in the causal loop

## Abstract

We show that quantum superluminal communication based on the quantum nonlocal influence, if it exists, will not result in a causal loop. This conclusion is essentially determined by the peculiarity of the quantum nonlocal influence itself, according to which there must exist a preferred Lorentz frame for consistently describing the quantum nonlocal process.

When we say that quantum mechanics permits no superluminal communication, we refer to the present quantum theory, and we should realize that the concrete reason is not related to the peculiarity of the quantum nonlocal influence, which is manifested in Bell’s theorem and has been confirmed by more and more experiments; on the contrary, this kind of quantum nonlocal influence may help to achieve superluminal communication when one transcends the present quantum theory. On the other hand, people may naturally argue that, even setting aside the limitations of present quantum theory, special relativity will also prohibit such superluminal communication based on the quantum nonlocal influence, owing to the causal loop, so that superluminal communication is definitely hopeless. This view, however, underestimates the peculiar quantum nonlocal influence: it is not only independent of the limitation that present quantum theory places on superluminal communication, but it also rejects special relativity to some extent. Here we will demonstrate that the description of the quantum nonlocal influence needs a preferred Lorentz frame, and that quantum superluminal communication based on such quantum nonlocal influence, if it exists, does not result in a causal loop; this undoubtedly opens the first door to superluminal communication.

First, Hardy’s theorem states that any dynamical theory describing the quantum nonlocal process, whose predictions agree with those of ordinary quantum theory, must have a preferred Lorentz frame, so that the description of the quantum nonlocal influence is no longer independent of the selection of inertial frame; this evidently breaks the first assumption of special relativity, which asserts that the description of any physical process is independent of the selection of inertial frame. But in his proof Hardy presupposed that the collapse process happens simultaneously in all observing inertial frames, or that there is no backward causality in quantum systems, and the validity of these assumptions is still not clear; this weakens the strength of his conclusion. Percival then extended Hardy’s theorem. He gave a different derivation based on classical links between two Bell experiments in different experimental inertial frames, the so-called double Bell experiment, and his proof is independent of any assumptions about causality in the quantum domain. Thus it is the quantum nonlocal influence itself that requires a dynamical theory describing it to have a preferred Lorentz frame; otherwise, forbidden causal loops would exist in systems with classical inputs and outputs.
On the other hand, Suarez’s analysis of multisimultaneity has also indicated that the description of the causal order of nonlocally correlated events essentially needs a preferred Lorentz frame, although he did not realize this fact himself. In fact, his elegant one-Bell experiment involving 2-after impacts will also generate a forbidden logical causal loop if we assume that no preferred Lorentz frame exists, or that the quantum nonlocal influence happens simultaneously in all experimental frames: for the two space-like separated classical events in the experiment, each would be the cause of the other, which is evidently a logical contradiction. Thus Suarez’s one-Bell experiment also demonstrates that there must exist a preferred Lorentz frame in order to describe the quantum nonlocal process consistently.

Now, all the above demonstrations clearly indicate that the consistent description of the quantum nonlocal influence needs a preferred Lorentz frame, in which all quantum nonlocal influences are simultaneous and the causal relations between the correlated quantum nonlocal events are exclusively determined. According to the Lorentz transformations, these quantum nonlocal influences will no longer be simultaneous in other inertial frames; in fact, in those frames, the quantum nonlocal influence (or quantum simultaneous communication, if it exists) will proceed forward in time along one direction in space and backward in time along the contrary direction in space. The causal relations between correlated quantum nonlocal events in those frames will no longer relate directly to their time orders, being determined only by their time orders in the preferred Lorentz frame. It is then evident that no causal loops can arise for the quantum nonlocal influence, or for possible quantum simultaneous communication based on it, since the causal relations of the correlated quantum nonlocal events are exclusively determined by their time orders in the preferred Lorentz frame, and causes always come before effects.

Finally, we give a simpler apagogical demonstration: if quantum simultaneous communication based on the quantum nonlocal influence led to forbidden causal loops, then the quantum nonlocal influence itself would also lead to forbidden causal loops. The reason is simple: in Percival’s double Bell experiment, if we devise the experimental settings so that quantum simultaneous communication based on the quantum nonlocal influence leads to forbidden causal loops with certainty, then the quantum nonlocal influence itself will also lead to forbidden causal loops with a nonzero probability, as Percival has demonstrated in detail. This is not permitted either; thus we again reach the conclusion that quantum superluminal communication, if it exists, does not result in a causal loop.

Acknowledgments: Thanks for helpful discussions with A. Suarez (Center for Quantum Philosophy) and Dr. S.X. Yu (Institute of Theoretical Physics, Academia Sinica).
DOE/ER/40561-53-INT

# Imaging proton sources and space-momentum correlations

## Abstract

The reliable extraction of information from the two-proton correlation functions measured in heavy-ion reactions is a long-standing problem. Recently introduced imaging techniques give one the ability to reconstruct source functions from the correlation data in a model-independent way. We explore the applicability of two-proton imaging to realistic sources with varying degrees of transverse space-momentum correlations. By fixing the freeze-out spatial distribution, we find that both the proton images and the two-particle correlation functions are very sensitive to these correlations. We show that one can reliably reconstruct the source functions from the two-proton correlation functions, regardless of the degree of the space-momentum correlations.

The sensitivity of the two-proton correlations to the space-time extent of nuclear reactions was first pointed out by Koonin and later emphasized by many authors. Since then, measurements of the two-proton correlations have been used, along with pion HBT data, as a probe of the space-time properties of heavy-ion collisions (recent experimental results on two-particle interferometry are reviewed in the references). A prominent “dip+peak” structure in the proton correlation function is due to the interplay of the strong and Coulomb interactions along with the effects of quantum statistics. Because of the complex nature of the two-proton final-state interactions, only model-dependent and/or qualitative statements were possible in proton correlation analysis. Typically, the proton source is assumed to be a chaotic source with a Gaussian profile that emits protons instantaneously. For simple static chaotic sources, it has been shown that the height of the correlation peak approximately scales inversely with the source volume. Heavy-ion collisions are complicated dynamic systems with strong space-momentum correlations (such as flow) and a nonzero lifetime; hence the validity of the assumptions behind such simplistic sources is questionable. In order to address the limitations of this type of analysis (and to incorporate collective effects), some authors utilize transport models to interpret the proton correlation functions. Although this approach is a step in the right direction, it is still highly model-dependent. Recently, it was shown that one can perform model-independent extractions of the entire source function $`S(r)`$ (the probability density for emitting protons a distance $`r`$ apart) from two-particle correlations, and not just its radii, using imaging techniques. Furthermore, one can do this even with the relatively complicated proton final-state interactions and without making any a priori assumptions about the source geometry, lifetime, etc. First results from the application of imaging to proton correlation data can be found in the references. While these results look promising, tests of the imaging technique have only been performed on static (Gaussian and non-Gaussian) sources. It is important to understand the limitations and robustness of this technique, especially in light of the ongoing experimental program at SIS, AGS and SPS as well as the upcoming experiments at RHIC. In this letter, we study the applicability of proton imaging to realistic sources with transverse space-momentum correlations.
In particular, we explore how correlations between $`\vec{r}_T`$ and $`\vec{p}_T`$ directly affect the proton sources and, hence, the shapes of the experimentally observable correlation functions. Here $`\vec{r}_T`$ and $`\vec{p}_T`$ are the transverse radius and transverse momentum vectors, respectively, of a proton at the time when it decouples from the system (freeze-out). It has been argued for both the pion and the proton cases that the apparent source size (or the effective volume) decreases as the collective motion increases. We will verify this expectation and show that one can reliably reconstruct the source function, even in the presence of extreme space-momentum correlations. The outline of this letter is as follows. First, we briefly describe the imaging procedure used to extract the source function from experimental correlations. We discuss proton sources, but most of our arguments and conclusions are valid for any two-particle correlations. Next, we describe how we implement the varying degrees of space-momentum correlations using the RQMD model. Finally, we discuss the influence of these correlations on the proton correlation functions and imaged sources. Since we can also construct the sources directly within RQMD, this serves as a more demanding test of the imaging procedure than has been performed to date.

With imaging, one extracts the entire source function $`S(r)`$ from the two-proton correlation function $`C(q)`$. Here the source function is the probability density for emitting protons with a certain relative separation in the pair center-of-mass (CM) frame. The source function and the correlation function are related by the equation:

$$C(q)-1=4\pi \int _0^{\mathrm{\infty }}dr\,r^2K(q,r)S(r)$$ (1)

In eq. (1), $`q=\frac{1}{2}\sqrt{(p_1-p_2)^2}`$ is the invariant relative momentum of the pair, $`r`$ is the pair CM separation after the point of last collision, and $`K`$ is the kernel. The kernel is related to the two-proton relative wavefunction via

$$K(q,r)=\frac{1}{2}\sum _{js\mathrm{}\mathrm{}^{}}(2j+1)\left(g_{js}^{\mathrm{}\mathrm{}^{}}(r)\right)^2-1$$ (2)

Here $`g_{js}^{\mathrm{}\mathrm{}^{}}`$ are the relative proton radial wavefunctions for orbital angular momenta $`\mathrm{},\mathrm{}^{}`$, total angular momentum $`j`$, and total spin $`s`$. In what follows, we calculate the proton relative wavefunctions by solving the Schrödinger equation with the REID93 nucleon-nucleon and Coulomb potentials. Because (1) is an integral equation with a non-singular kernel, it can be inverted. To perform the inversion, we first discretize eq. (1), giving a set of linear equations, $`C_i-1=\sum _{j=1}^MK_{ij}S_j`$, with $`N`$ data points and $`M`$ source points. Given that the data have experimental errors $`\mathrm{\Delta }C_i`$, one cannot simply invert this matrix equation. Instead, we search for the source vector that gives the minimum $`\chi ^2`$:

$$\chi ^2=\sum _{i=1}^N\frac{(C_i-1-\sum _{j=1}^MK_{ij}S_j)^2}{(\mathrm{\Delta }C_i)^2}.$$ (3)

The source that minimizes this $`\chi ^2`$ is (in matrix notation):

$$S=(K^\mathrm{T}BK)^{-1}K^\mathrm{T}B(C-1)$$ (4)

where $`K^\mathrm{T}`$ is the transpose of the kernel matrix and $`B`$ is the inverse covariance matrix of the data, $`B_{ij}=\delta _{ij}/(\mathrm{\Delta }C_i)^2`$. In general, inverse problems such as this one are ill-posed; in practical terms, small fluctuations in the data can lead to large fluctuations in the imaged source.
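A minimal numerical sketch of the least-squares inversion of eqs. (3)-(4) is given below; the kernel matrix is assumed to be precomputed, and no regularization is applied, so the ill-posedness just mentioned will show up for noisy data:

```python
import numpy as np

def image_source(C, dC, K):
    """Least-squares source from eq. (4): S = (K^T B K)^{-1} K^T B (C - 1).

    C  : measured correlation values, shape (N,)
    dC : their one-sigma errors, shape (N,)
    K  : discretized kernel matrix, shape (N, M)
    Returns the source vector S, shape (M,), and its covariance matrix.
    """
    B = np.diag(1.0 / dC**2)   # inverse covariance matrix of the data
    A = K.T @ B @ K            # normal matrix
    cov = np.linalg.inv(A)     # covariance of the imaged source
    S = cov @ K.T @ B @ (C - 1.0)
    return S, cov
```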
One can avoid this problem by using the method of Optimized Discretization discussed in the references. In short, the Optimized Discretization method varies the sizes of the $`r`$-bins of the source (or, equivalently, the resolution of the kernel) to minimize the relative error of the source. The source function that one reconstructs is directly related to the space-time development of the heavy-ion reaction in the Koonin-Pratt formalism:

$$S(r,\vec{q})=\int _{4\pi }d\mathrm{\Omega }_r\int dt_1dt_2d^3R\,D(\vec{R}+\vec{r}/2,t_1;\vec{q})\,D(\vec{R}-\vec{r}/2,t_2;\vec{q}),$$ (5)

where $`\vec{q}=\frac{1}{2}(\vec{p}_1-\vec{p}_2)`$, making $`q=|\vec{q}|`$. Here the $`D`$’s are the normalized single-particle sources in the pair CM frame, and they have the conventional interpretation as the normalized phase-space distributions of protons after the last collision (freeze-out) in a transport model. In computing $`S(r,\vec{q})`$ in a transport model, one does not need to consider the contribution of large relative momentum ($`q>q_{\mathrm{cut}}`$) pairs to the source, as the kernel cuts off the contribution from these pairs. The kernel does this because it is highly oscillatory, while the source varies weakly on the scale of these oscillations, so the integral in (1) averages to zero. We can estimate $`q_{\mathrm{cut}}`$ directly from the correlation function, as $`q_{\mathrm{cut}}`$ is roughly the momentum where the correlation goes to one. Nevertheless, for the imaging in (1) to be unique, one must require that the $`q`$ dependence of the correlation comes from the kernel alone, whereas eq. (5) seems to indicate that the source itself has a $`q`$ dependence. In practice, $`S(r,\vec{q})`$ has only a weak $`\vec{q}`$ dependence for $`q<q_{\mathrm{cut}}`$, and this dependence may be neglected. Since $`S(r)`$ is the probability density for finding a pair with a separation of emission points $`r`$, one can compute it directly from the freeze-out phase-space distribution given by some model. First one scans through the freeze-out density of protons, then histograms the number of pairs in relative distance in the CM, and finally normalizes the distribution: $`4\pi \int dr\,r^2S(r)=1`$. As mentioned above, only low relative momentum pairs may enter this histogram, as the kernel cuts off the contribution from pairs with $`q>q_{\mathrm{cut}}`$.

In our studies we used the Relativistic Quantum Molecular Dynamics (RQMD) model. It is a semi-classical microscopic model which includes stochastic scattering and classical propagation of the particles. It includes baryon and meson resonances, color strings and some quantum effects such as Pauli blocking and finite particle-formation times. This model has been successfully used to describe many features of relativistic heavy-ion collisions at AGS and SPS energies. Our approach is as follows: first we take the freeze-out phase-space distributions generated by RQMD and alter the orientation of the transverse momentum relative to the transverse radius, obtaining a subset of the phase-space points. Following this, we use the Lednicky-Lyuboshitz method to construct the proton-proton correlation function. This method gives a description of the final-state interactions between two protons, including the antisymmetrization of their relative wavefunction. Finally, using the imaging technique described above, we compute the proton source functions.
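The construction of $`S(r)`$ from a freeze-out distribution, as just described, can be sketched as follows (the input format is our assumption: one array of CM separations for the pairs already selected with $`q<q_{\mathrm{cut}}`$):

```python
import numpy as np

def source_from_freezeout(r_cm, r_max=50.0, n_bins=50):
    """Histogram pair separations into S(r), normalized so that
    4*pi * integral dr r^2 S(r) = 1.

    r_cm : array of pair CM separations in fm, one entry per accepted pair.
    """
    counts, edges = np.histogram(r_cm, bins=n_bins, range=(0.0, r_max))
    r = 0.5 * (edges[:-1] + edges[1:])   # bin centers
    dr = np.diff(edges)                  # bin widths
    S = counts.astype(float)
    norm = 4.0 * np.pi * np.sum(r**2 * S * dr)
    return r, S / norm
```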
We used $`4000`$ simulated events of $`4`$ GeV/A Au-Au reactions with impact parameter $`b\le 3`$ fm. We utilized only pairs in the central rapidity region, $`|y|\le 0.3`$, and applied no cut on the transverse momentum $`p_T`$. We consider three different degrees of alignment between the transverse position $`\vec{r}_T`$ and the transverse momentum $`\vec{p}_T`$ of each proton used to construct the correlation function. These alignments are implemented in the same manner as in previous work:

1. We orient $`\vec{p}_T`$ at a random angle with respect to $`\vec{r}_T`$. We refer to this as the random case. One can think of this case as being “thermal”, as the transverse flow component is completely removed.

2. We do not change the orientation of $`\vec{p}_T`$. We refer to this as the unmodified case.

3. We align $`\vec{p}_T`$ with $`\vec{r}_T`$ and refer to this as the aligned case. One can think of this case as one of extreme transverse flow.

Note that the rotation occurs in the rest frame of the colliding nuclei. In all cases, we only rotate $`\vec{p}_T`$, so these procedures do not change the spatial distribution at freeze-out. However, it is clear that these procedures do change the phase-space density.

The upper panels of Fig. 1 show the correlation functions for the three different cases. It is clear from the figure that the degree of space-momentum correlation has a strong influence on the correlation function: the peak height of the correlation function changes from about 1.45 for unmodified RQMD to about 2.2 for the aligned case and 1.2 for the randomized case. We would like to stress again that in all cases the spatial part of the source, e.g. its “radius” or “volume,” remains unaltered, as do the transverse momentum spectrum and the rapidity distribution of the protons. Hence, the upper panels of Fig. 1 illustrate the danger of ignoring space-momentum correlations when analyzing correlation data. In the lower panels of Fig. 1 we show the proton sources obtained with the help of the imaging procedure outlined above. Notice that, as the degree of alignment increases (going from right to left), the source function becomes narrower and higher. One can understand this shift to lower separations in the way sketched in Fig. 2. In the aligned case, it is more probable that nearby protons have a small relative momentum $`q`$. In the random case, any pair can have a small $`q`$, regardless of their separation. Given that the kernel cuts off contributions from pairs with larger $`q`$, we expect the aligned case to have a narrower source than the unmodified case, and the unmodified case to have a narrower source than the random case. Also shown in Fig. 1 are the sources constructed directly from the RQMD freeze-out distribution following eq. (5). In these sources, we considered all pairs with a relative momentum smaller than $`q_{\mathrm{cut}}=60`$ MeV/c. We explored a range of $`q_{\mathrm{cut}}`$ from 60 to 100 MeV/c, all beyond the point where the correlation is consistent with one, and found no cutoff dependence. In all cases we see a general agreement of the imaged sources with the low relative momentum sources constructed directly from RQMD. In order to check the quality of the imaging and the numerical stability of the inversion procedure, the two-proton correlation functions were calculated using the extracted relative source functions shown in the lower panels of Fig. 1 as input to eq. (1).
The result of this “double inversion” procedure is shown in the upper panels of Fig. 1 with solid circles. The agreement between the measured and reconstructed correlation functions is quite good, confirming that imaging produces numerically stable and unbiased results. In conclusion, we have explored the applicability of proton imaging to realistic sources with transverse space-momentum correlations. By fixing the freeze-out spatial distribution and varying the degree of transverse space-momentum correlation, we found that both the images and the two-particle correlation functions are very sensitive to these correlations. In particular, we have shown that the source function narrows (i.e. the probability of emitting pairs with small relative separation grows) and the peak of the proton correlation function increases as the degree of alignment increases. Finally, we have demonstrated that one can reliably reconstruct the source functions even with extreme transverse space-momentum correlations. We would like to point out that the effects of space-momentum correlations should be even more pronounced in the shapes of three-dimensional proton sources. Note that three-dimensional proton imaging is now possible . An important direction for the future is a detailed study of the change of the phase-space density and entropy (extracted from imaged sources ) with the varying degree of space-momentum correlation. Such work should provide information complementary to ongoing studies in the pion sector . We gratefully acknowledge stimulating discussions with Drs. G. Bertsch, P. Danielewicz, D. Keane, A. Parreño, S. Pratt, S. Voloshin and N. Xu. We also wish to thank Drs. R. Lednicky and J. Pluta for making their correlation afterburner code available. Finally, we thank Dr. H. Sorge for providing the code of the RQMD model. This research is supported by the U.S. Department of Energy grants DOE-ER-40561 and DE-FG02-89ER40531.
# Direct and Indirect Searches for Low-Mass Magnetic Monopoles (paper contributed to Kurt Haller’s Festschrift) ## I Introduction The notion of magnetic charge has intrigued physicists since Dirac showed that it was consistent with quantum mechanics provided a suitable quantization condition was satisfied: For a monopole of magnetic charge $`g`$ in the presence of an electric charge $`e`$, that quantization condition is (in this paper we use rationalized units) $$\frac{eg}{4\pi }=\frac{N}{2}\mathrm{}c,$$ (1) where $`N`$ is an integer. For a pair of dyons, that is, particles carrying both electric and magnetic charge, the quantization condition is replaced by $$\frac{e_1g_2-e_2g_1}{4\pi }=\frac{N}{2}\mathrm{}c,$$ (2) where $`(e_1,g_1)`$ and $`(e_2,g_2)`$ are the charges of the two dyons.<sup>*</sup><sup>*</sup>*An additional factor of 2 appears on the right hand side of these conditions if a symmetrical solution is adopted—see Eq. (20) below. With the advent of “more unified” non-Abelian theories, classical composite monopole solutions were discovered . The mass of these monopoles would be of the order of the relevant gauge-symmetry breaking scale, which for grand unified theories is of order $`10^{16}`$ GeV or higher. But there are models where the electroweak symmetry breaking can give rise to monopoles of mass $`\sim 10`$ TeV . Even the latter are not yet accessible to accelerator experiments, so limits on heavy monopoles depend either on cosmological considerations , or on detection of cosmologically produced (relic) monopoles impinging upon the earth or moon . However, a priori, there is no reason that Dirac/Schwinger monopoles or dyons of arbitrary mass might not exist: In this respect, it is important to set limits below the 1 TeV scale. Such an experiment is currently in progress at the University of Oklahoma , where we expect to be able to set limits on direct monopole production at Fermilab up to several hundred GeV. This will be a substantial improvement over previous limits . But indirect searches have been proposed and carried out as well. De Rújula proposed looking at the three-photon decay of the $`Z`$ boson, where the process proceeds through a virtual monopole loop. If we use his formula for the branching ratio for the $`Z\to 3\gamma `$ process, compared to the current experimental upper limit for the branching ratio of $`10^{-5}`$, we can rule out monopole masses lower than about 400 GeV, rather than the 600 GeV quoted in Ref. . Similarly, Ginzburg and Panfil and more recently Ginzburg and Schiller considered the production of two photons with high transverse momenta by the collision of two photons produced either from $`e^+e^-`$ or quark-(anti-)quark collisions. Again the final photons are produced through a virtual monopole loop. Based on this theoretical scheme, an experimental limit has been published by the D0 collaboration , which sets the following bounds on the monopole mass $`M`$: $$\frac{M}{N}>\{\begin{array}{cc}610\text{ GeV}& \text{ for }S=0\\ 870\text{ GeV}& \text{ for }S=1/2\\ 1580\text{ GeV}& \text{ for }S=1\end{array},$$ (3) where $`S`$ is the spin of the monopole. It is worth noting that a mass limit of 120 GeV for a Dirac monopole has been set by Graf, Schäfer, and Greiner , based on the monopole contribution to the vacuum polarization correction to the muon anomalous magnetic moment. (Actually, we believe that the correct limit, obtained from the well-known textbook formula for the $`g`$-factor correction due to a massive Dirac particle, is 60 GeV.)
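Equation (1) already indicates why perturbation theory in $`g`$ is hopeless: the minimal Dirac/Schwinger charge is enormous. A small numeric sketch of ours (it simply evaluates the relation quoted later in Eq. (54)):

```python
# Magnitude of the Dirac coupling implied by Eq. (1), eg/4pi = (N/2) hbar c:
# g/e = N/(2*alpha) and alpha_g = g^2/(4*pi*hbar*c) = N^2/(4*alpha).
alpha = 1.0 / 137.036          # fine structure constant

for N in (1, 2, 3):
    g_over_e = N / (2.0 * alpha)       # ratio of magnetic to electric charge
    alpha_g = N**2 / (4.0 * alpha)     # effective coupling, cf. Eq. (54) below
    print(f"N={N}: g/e = {g_over_e:.1f}, alpha_g = {alpha_g:.1f}")
# N=1 already gives g/e ~ 68.5 and alpha_g ~ 34.3.
```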
The purpose of the present paper is to, first, describe the key theoretical elements necessary for the establishment of a direct limit for production of monopoles at Fermilab: An estimate of the production cross section for monopole-antimonopole pairs at the collider must be made, and then an estimate of the binding probability of such produced monopoles with matter must be given, so that we can predict how many monopoles would be bound to the detector elements that are run through our induction detector. Such estimates were given in our proposal to Fermilab; the complete analysis will be given in the experimental papers to follow. Here the emphasis will be on the elementary processes involved. A secondary purpose of this paper is to critique the theory of Refs. , , , and , and thereby demonstrate that experimental limits, such as that of Ref. , based on virtual processes are unreliable. We will show that the theory is based on a naive application of electromagnetic duality; the resulting cross section cannot be valid because unitarity is violated for monopole masses as low as the quoted limits, and the process is subject to enormous, uncontrollable radiative corrections. It is not correct, in any sense, as Refs. and state, that the effective expansion parameter is $`g\omega /M`$, where $`\omega `$ is some external photon energy; rather, the factors of $`\omega /M`$ emerge kinematically from the requirements of gauge invariance at the one-loop level. If, in fact, a correct calculation introduced such additional factors of $`\omega /M`$, arising from the complicated coupling of magnetic charge to photons, we argue that no limit could be deduced for monopole masses from the current experiments. It may even be the case, based on preliminary field-theoretic calculations, that processes involving the production of real photons vanish. ## II Eikonal Approximation for Electron- (or Quark-) Monopole Scattering It is envisaged that if monopoles are sufficiently light, they would be produced by a Drell-Yan type of process occurring in $`p\overline{p}`$ collisions at the Tevatron. The difficulty is to make a believable estimate of the elementary process $`q\overline{q}\to \gamma ^{*}\to M\overline{M}`$, where $`q`$ stands for quark and $`M`$ for magnetic monopole. It is not known how to calculate such a process using perturbation theory; indeed, perturbation theory is inapplicable to monopole processes because of the quantization condition (1). It is only because of that consistency condition that the Dirac string, for example, disappears from the result. Only formally has it been shown that the quantum field theory of electric and magnetic charges is independent of the string orientation, or, more generally, is gauge and rotationally invariant . It has not yet proved possible to develop generally consistent schemes for calculating processes involving real or virtual magnetically charged particles. Partly this is because a sufficiently general field theoretic formulation has not yet been given; this defect will be addressed elsewhere . However, the nonrelativistic scattering of magnetically charged particles is well understood<sup>†</sup><sup>†</sup>†Contrary to the statement in the second reference in Ref. , the nonrelativistic calculation is exact, employs the quantization condition, and uses no “unjustified extra prescription.” . Thus it should not be surprising that an eikonal approximation gives a string-independent result for electron-monopole scattering provided the condition (1) is satisfied.
More than two decades ago Schwinger proposed and Urrutia carried out such a calculation . Indeed, this subject has had arrested development. Since this is the only successful field-theoretic calculation yet presented, it may be useful to review it here. (A more detailed discussion will be presented in Ref. .) The interaction between electric ($`J^\mu `$) and magnetic ($`{}^{*}J^{\mu }`$) currents is given by $$W^{(eg)}=ϵ_{\mu \nu \sigma \tau }\int (dx)(dx^{})(dx^{\prime \prime })J^\mu (x)\partial ^\sigma D_+(x-x^{})f^\tau (x^{}-x^{\prime \prime }){}^{*}J^{\nu }(x^{\prime \prime }).$$ (4) Here $`D_+`$ is the usual photon propagator, and the arbitrary “string” function $`f_\mu (x-x^{})`$ satisfies $$\partial _\mu f^\mu (x-x^{})=\delta (x-x^{}).$$ (5) It turns out to be convenient for this calculation to choose a symmetrical string, which satisfies $$f^\mu (x)=-f^\mu (-x).$$ (6) In the following we choose a string lying along the straight line $`n^\mu `$, in which case the function may be written as a Fourier transform $$f_\mu (x)=\frac{n_\mu }{2i}\int \frac{(dk)}{(2\pi )^4}e^{ikx}\left(\frac{1}{n\cdot k-iϵ}+\frac{1}{n\cdot k+iϵ}\right).$$ (7) In the high-energy, low-momentum-transfer regime, the scattering amplitude between electron and monopole is obtained from Eq. (4) by inserting the classical currents, $`J^\mu (x)`$ $`=`$ $`e{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}d\lambda {\displaystyle \frac{p_2^\mu }{m}}\delta \left(x-{\displaystyle \frac{p_2}{m}}\lambda \right),`$ (9) $`{}^{*}J^{\mu }(x)`$ $`=`$ $`g{\displaystyle \int _{-\mathrm{\infty }}^{\mathrm{\infty }}}d\lambda ^{}{\displaystyle \frac{p_2^{\mathrm{\prime }\mu }}{M}}\delta \left(x+b-{\displaystyle \frac{p_2^{}}{M}}\lambda ^{}\right),`$ (10) where $`m`$ and $`M`$ are the masses of the electron and monopole, respectively. Let us choose a coordinate system such that the incident momenta of the two particles have spatial components along the $`z`$-axis: $$p_2=(p,0,0,p),p_2^{}=(p,0,0,-p),$$ (11) and the impact parameter lies in the $`xy`$ plane: $$b=(0,𝐛,0).$$ (12) Apart from kinematical factors, the scattering amplitude is simply the transverse Fourier transform of the eikonal phase, $$I(𝐪)=\int d^2b\,e^{i𝐛\cdot 𝐪}\left(e^{i\chi }-1\right),$$ (13) where $`\chi `$ is simply $`W^{(eg)}`$ with the classical currents substituted, and $`𝐪`$ is the momentum transfer. First we calculate $`\chi `$; it is immediately seen to be, if $`n^\mu `$ has no time component, $$\chi =\frac{eg}{2}\int \frac{d^2k_{\perp }}{(2\pi )^2}\frac{\widehat{𝐳}\cdot (\widehat{𝐧}\times 𝐤_{\perp })}{k_{\perp }^2-iϵ}e^{i𝐤_{\perp }\cdot 𝐛}\left(\frac{1}{\widehat{𝐧}\cdot 𝐤_{\perp }-iϵ}+\frac{1}{\widehat{𝐧}\cdot 𝐤_{\perp }+iϵ}\right),$$ (14) where $`𝐤_{\perp }`$ is the component of the photon momentum perpendicular to the $`z`$ axis. From this expression we see that the result is independent of the angle $`𝐧`$ makes with the $`z`$ axis. We next use proper-time representations for the denominators in Eq.
(14), $$\frac{1}{k_{\perp }^2}=\int _0^{\mathrm{\infty }}ds\,e^{-sk_{\perp }^2},$$ (16) $$\frac{1}{\widehat{𝐧}\cdot 𝐤_{\perp }-iϵ}+\frac{1}{\widehat{𝐧}\cdot 𝐤_{\perp }+iϵ}=\frac{1}{i}\left[\int _0^{\mathrm{\infty }}d\lambda -\int _{-\mathrm{\infty }}^0d\lambda \right]e^{i\lambda \widehat{𝐧}\cdot 𝐤_{\perp }}e^{-|\lambda |ϵ}.$$ (17) We then complete the square in the exponential and perform the Gaussian integration to obtain $$\chi =\frac{eg}{4\pi }\widehat{𝐳}\cdot (\widehat{𝐧}\times 𝐛)\int d\lambda \frac{1}{(\lambda +𝐛\cdot \widehat{𝐧})^2+b^2-(𝐛\cdot \widehat{𝐧})^2},$$ (18) or $$\chi =\frac{eg}{2\pi }\mathrm{tan}^{-1}\left(\frac{\widehat{𝐧}\cdot 𝐛}{\widehat{𝐳}\cdot (𝐛\times \widehat{𝐧})}\right).$$ (19) Because $`e^{i\chi }`$ must be continuous when $`\widehat{𝐧}`$ and $`𝐛`$ lie in the same direction, we must have the Schwinger quantization condition for an infinite string, $$eg=4\pi N,$$ (20) where $`N`$ is an integer. To carry out the integration in Eq. (13), choose $`𝐛`$ to make an angle $`\psi `$ with $`𝐪`$, and the projection of $`\widehat{𝐧}`$ in the $`xy`$ plane to make an angle $`\varphi `$ with $`𝐪`$; then $$\chi =\frac{eg}{2\pi }(\psi -\varphi -\pi /2).$$ (21) To avoid the appearance of a Bessel function, we first integrate over $`b=|𝐛|`$, and then over $`\psi `$: $`I(𝐪)`$ $`=`$ $`{\displaystyle \int _0^{2\pi }}d\psi {\displaystyle \int _0^{\mathrm{\infty }}}b\,db\,e^{ibq(\mathrm{cos}\psi -iϵ)}e^{2iN(\psi -\varphi -\pi /2)}`$ (22) $`=`$ $`{\displaystyle \frac{4}{i}}{\displaystyle \frac{e^{-2iN(\varphi +\pi /2)}}{q^2}}{\displaystyle \oint _C}{\displaystyle \frac{dz\,z^{2N-1}}{(z+1/z-iϵ)^2}}`$ (23) $`=`$ $`{\displaystyle \frac{4\pi N}{q^2}}e^{-2iN\varphi },`$ (24) where $`C`$ is a unit circle about the origin, and where again the quantization condition (20) has been used. Squaring this and putting in the kinematical factors we obtain Urrutia’s result $$\frac{d\sigma }{dt}=\frac{(eg)^2}{4\pi }\frac{1}{t^2},t=-q^2,$$ (25) which is exactly the same as the nonrelativistic, small-angle result found, for example, in Ref. . This calculation, however, points the way toward a proper relativistic treatment, and will be extended to the crossed process, quark-antiquark production of monopole-antimonopole pairs, elsewhere. ## III Binding of monopoles to matter Once the monopoles are produced in a collision at the Tevatron, they travel through the detector, losing energy in a well-known manner (see, e.g., Ref. ), presumably ranging out, and eventually binding to matter in the detector (Be, Al, Pb, for example). The purpose of this section is to review the theory of the binding of magnetic charges to matter. We consider the binding of a monopole of magnetic charge $`g`$ to a nucleus of charge $`Ze`$, mass $`\mathcal{M}=Am_p`$, and magnetic moment $$𝝁=\frac{e}{m_p}\gamma 𝐒,$$ (26) $`𝐒`$ being the spin of the nucleus. (We will assume here that the monopole mass $`M\gg \mathcal{M}`$, which restriction could be easily removed.) Other notations for the magnetic moment are $$\gamma =1+\kappa =\frac{g_S}{2}.$$ (27) The charge quantization condition is given by Eq. (1). Because the nuclear charge is $`Ze`$, the relevant angular momentum quantum number is \[recall $`N`$ is the magnetic charge quantization number in Eq. (1)\] $$l=\frac{NZ}{2}.$$ (28) We do not address the issue of dyons , which for the correct sign of the electric charge will always bind electrically to nuclei. ### A Nonrelativistic binding for $`S=1/2`$ In this subsection we follow the early work of Malkus and the more recent paper of Bracci and Fiorentini . (There are also the results given in Ref.
, but this reference seems to contain errors.) The neutron ($`Z=0`$) is a special case. Binding will occur in the lowest angular momentum state, $`J=1/2`$, if $$|\gamma |>\frac{3}{2N}.$$ (29) Since $`\gamma _n=-1.91`$, this condition is satisfied for all $`N`$. In general, it is convenient to define a reduced gyromagnetic ratio, $$\widehat{\gamma }=\frac{A}{Z}\gamma ,\widehat{\kappa }=\widehat{\gamma }-1.$$ (30) This expresses the magnetic moment in terms of the mass and charge of the nucleus. Binding will occur in the special lowest angular momentum state $`J=l-\frac{1}{2}`$ if $$\widehat{\gamma }>1+\frac{1}{4l}.$$ (31) Thus binding can occur here only if the anomalous magnetic moment $`\widehat{\kappa }>1/4l`$. The proton, with $`\kappa =1.79`$, will bind. Binding can occur in higher angular momentum states $`J`$ if and only if $$|\widehat{\kappa }|>\kappa _c=\frac{1}{l}\left|J^2+J-l^2\right|.$$ (32) For example, for $`J=l+\frac{1}{2}`$, $`\kappa _c=2+3/4l`$, and for $`J=l+\frac{3}{2}`$, $`\kappa _c=4+15/4l`$. Thus $`{}_{2}^{3}`$He, which is spin 1/2, will bind in the first excited angular momentum state because $`\widehat{\kappa }=-4.2`$. Unfortunately, to calculate the binding energy, one must regulate the potential at $`r=0`$. The results shown in Table 1 assume a hard core. ### B Nonrelativistic binding for general $`S`$ The reference here is . The assumption made here is that $`l\ge S`$. (There are only 3 exceptions, apparently: <sup>2</sup>H, <sup>8</sup>Li, and <sup>10</sup>B.) Binding in the lowest angular momentum state $`J=l-S`$ is governed by the same criterion (31) as for spin 1/2. Binding in the next state, with $`J=l-S+1`$, occurs if $`\lambda _\pm >\frac{1}{4}`$, where $$\lambda _\pm =\left(S-\frac{1}{2}\right)\frac{\widehat{\gamma }}{S}l-2l-1\pm \sqrt{(1+l)^2+(2S-1-l)\frac{\widehat{\gamma }}{S}l+\frac{1}{4}l^2\left(\frac{\widehat{\gamma }}{S}\right)^2}.$$ (33) The previous result for $`S=1/2`$ is recovered, of course. $`S=1`$ is a special case: then $`\lambda _{-}`$ is always negative, while $`\lambda _+>\frac{1}{4}`$ if $`\widehat{\gamma }>\gamma _c`$, where $$\gamma _c=\frac{3}{4l}\frac{(3+16l+16l^2)}{9+4l}.$$ (34) For higher spins, both $`\lambda _\pm `$ can exceed $`1/4`$: $`\lambda _+>{\displaystyle \frac{1}{4}}`$ for $`\widehat{\gamma }>\gamma _{c-}`$ (35) $`\lambda _{-}>{\displaystyle \frac{1}{4}}`$ for $`\widehat{\gamma }>\gamma _{c+}`$ (36) where for $`S=\frac{3}{2}`$ $$(\gamma _c)_{\mp }=\frac{3}{4l}(6+4l\mp \sqrt{33+32l}).$$ (37) For $`{}_{4}^{9}`$Be, for which $`\widehat{\gamma }=-2.66`$, we cannot have binding because $`3>\gamma _{c-}>1.557`$, $`3<\gamma _{c+}<8.943`$, where the ranges come from considering different values of $`N`$ from 1 to $`\mathrm{\infty }`$. For $`S=\frac{5}{2}`$, $$(\gamma _c)_{\mp }=\frac{36+28l\mp \sqrt{1161+1296l+64l^2}}{12l}.$$ (38) So $`{}_{13}^{27}`$Al will bind in either of these states, or the lowest angular momentum state, because $`\widehat{\gamma }=7.56`$, and $`1.67>\gamma _{c-}>1.374`$, $`1.67<\gamma _{c+}<4.216`$. ### C Relativistic spin-1/2 Kazama and Yang treated the Dirac equation . See also and . In addition to the bound states found nonrelativistically, deeply bound states, with $`E_{\mathrm{binding}}=M`$, are found. These states always exist for $`J\ge l+1/2`$. For $`J=l-1/2`$, these (relativistic) $`E=0`$ bound states exist only if $`\kappa >0`$. Thus (modulo the question of form factors) Kazama and Yang expect that electrons can bind to monopoles.
(We suspect that one must take the existence of these deeply bound states with a fair degree of skepticism. See also .) As expected, for $`J=l-1/2`$ we have weakly bound states only for $`\kappa >1/4l`$, which is the same as the nonrelativistic condition (31), and for $`J\ge l+1/2`$, only if $`|\widehat{\kappa }|>\kappa _c`$, where $`\kappa _c`$ is given in Eq. (32). ### D Relativistic spin-1 Olsen, Osland, and Wu considered this situation . In this case, no bound states exist unless an additional interaction is introduced (this is similar to what happens nonrelativistically, because of the bad behavior of the Hamiltonian at the origin). Bound states are found if an “induced magnetization” interaction (quadratic in the magnetic field) is introduced. Binding is then found for the lowest angular momentum state $`J=l-1`$ again if $`\widehat{\kappa }>1/4l`$. For the higher angular momentum states, the situation is more complicated: * for $`J=l`$: bound states require $`l\ge 16`$, and * for $`J\ge l+1`$: bound states require $`J(J+1)-l^2\ge 25`$. But these results are probably highly dependent on the form of the additional interaction. The binding energies found are inversely proportional to the strength $`\lambda `$ of this extra interaction. ### E Remarks on binding Clearly, this summary indicates that the theory of monopole binding to nuclear magnetic dipole moments is rather primitive. The angular momentum criterion for binding is straightforward; but in general (except for relativistic spin 1/2) additional interactions have to be inserted by hand to regulate the potential at $`r=0`$. The results for binding energies are clearly very sensitive to the nature of that additional interaction. It cannot even be certain that binding occurs in the allowed states. In fact, however, it seems nearly certain that monopoles will bind to all nuclei, even, for example, Be, because the magnetic field in the vicinity of the monopole is so strong that the monopole will disrupt the nucleus and will bind to the nuclear, or even the subnuclear, constituents. ### F Binding of monopole-nucleus complex to material lattice Now the question arises: Is the bound complex of nucleus and monopole rigidly attached to the crystalline lattice of the material? This is a simple tunneling situation. The decay rate is estimated by the WKB formula $$\mathrm{\Gamma }\sim \frac{1}{a}e^{-2\int _a^bdr\sqrt{2\mathcal{M}(V-E)}},$$ (39) where the potential is crudely $$V=\frac{\mu g}{4\pi r^2}-gBr,$$ (40) $`\mathcal{M}`$ is the nuclear mass ($`\mathcal{M}\ll `$ the monopole mass), and the inner and outer turning points, $`a`$ and $`b`$, are the zeroes of $`E-V`$. Provided the following inequality holds, $$E^3\gg \frac{g^3\mu B^2}{4\pi },$$ (41) which should be very well satisfied, since the right hand side equals $`10^{-20}N^3`$ MeV<sup>3</sup>, we can write the decay rate as $$\mathrm{\Gamma }\sim N^{-1/2}10^{23}\text{s}^{-1}\mathrm{exp}\left[-\frac{8\sqrt{2}}{3\cdot 137}\left(\frac{E}{m_e}\right)^{3/2}\frac{B_0}{NB}A^{1/2}\left(\frac{m_p}{m_e}\right)^{1/2}\right],$$ (42) where the characteristic field, defined by $`eB_0=m_e^2`$, is $`4\times 10^9`$ T. If we put in $`B=1.5`$ T, $`A=27`$, and $`E=2.6`$ MeV, appropriate for $`{}_{13}^{27}`$Al, we have for the exponent, for $`N=1`$, $`-2\times 10^{11}`$, corresponding to a rather long time! To get a 10 yr lifetime, the binding energy would have to be only of the order of 1 eV. Monopoles bound with kilovolt or more energies will stay around forever. Then the issue is whether the entire Al atom can be extracted with the 1.5 T magnetic field present in CDF.
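Before turning to that question, the size of the exponent just quoted is easy to reproduce. The following sketch is our own arithmetic, using $`B_0\approx 4.4\times 10^9`$ T from $`eB_0=m_e^2`$ and the other numbers as given in the text:

```python
import math

# Numerical check of the WKB exponent in Eq. (42) for 27-Al in a 1.5 T field.
alpha_inv = 137.0
E_over_me = 2.6 / 0.511        # binding energy E = 2.6 MeV, m_e = 0.511 MeV
B0_over_B = 4.4e9 / 1.5        # characteristic field B0 / applied field B
A = 27.0                       # mass number of Al
mp_over_me = 1836.0
N = 1

exponent = (8.0 * math.sqrt(2.0) / (3.0 * alpha_inv)
            * E_over_me**1.5 * B0_over_B / N
            * math.sqrt(A) * math.sqrt(mp_over_me))
print(f"WKB exponent ~ {exponent:.1e}")   # ~ 2e11: the complex never tunnels out
```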
The answer seems to be unequivocally NO. The point is that the atoms are rigidly bound in a lattice, with no nearby site into which they can jump. A major disruption of the lattice would be required to dislodge the atoms, which would probably require kilovolts of energy . Some such disruption was made by the monopole when it came to rest and was bound in the material, but that disruption would be very unlikely to be in the direction of the accelerating magnetic field. Again, a simple Boltzmann argument shows that any effective binding slightly bigger than 1 eV will result in monopole trapping “forever.” This argument applies equally well to binding of monopoles in ferromagnets. If monopoles bind strongly to nuclei there, they will not be extracted by 5 T fields, contrary to the arguments of Goto et al. The corresponding limits on monopoles from ferromagnetic samples of Carrigan et al. are suspect. ## IV Duality and the Euler-Heisenberg Lagrangian Finally, let us consider the process contemplated in Refs. and , that is $$\left(\begin{array}{c}qq\to qq\\ \overline{q}q\to \overline{q}q\end{array}\right)+\gamma \gamma ,\gamma \gamma \to \gamma \gamma ,$$ (43) where the photon scattering process is given by the one-loop light-by-light scattering graph shown in Fig. 1. If the particle in the loop is an ordinary electrically charged electron, this process is well-known . If, further, the photons involved are of very low momentum compared to the mass of the electron, then the result may be simply derived from the well-known Euler-Heisenberg Lagrangian , which for a spin-1/2 charged-particle loop in the presence of homogeneous electric and magnetic fields is<sup>‡</sup><sup>‡</sup>‡We emphasize that Eq. (44) is only valid when $`\partial _\alpha F_{\mu \nu }=0`$. $$\mathcal{L}=-\frac{1}{8\pi ^2}\int _0^{\mathrm{\infty }}\frac{ds}{s^3}e^{-m^2s}\left[(es)^2\mathcal{G}\frac{\text{Re}\mathrm{cosh}\,esX}{\text{Im}\mathrm{cosh}\,esX}-1-\frac{2}{3}(es)^2\mathcal{F}\right].$$ (44) Here the invariant field strength combinations are $$\mathcal{F}=\frac{1}{4}F^2=\frac{1}{2}(𝐇^2-𝐄^2),\mathcal{G}=-\frac{1}{4}F{}^{*}F=𝐄\cdot 𝐇,$$ (45) $`{}^{*}F_{\mu \nu }=\frac{1}{2}ϵ_{\mu \nu \alpha \beta }F^{\alpha \beta }`$ being the dual field strength tensor, and the argument of the hyperbolic cosine in Eq. (44) is given in terms of $$X=[2(\mathcal{F}+i\mathcal{G})]^{1/2}=[(𝐇+i𝐄)^2]^{1/2}.$$ (46) If we pick out those terms quadratic, quartic and sextic in the field strengths, we obtain<sup>§</sup><sup>§</sup>§Incidentally, note that the coefficient of the last term is 36 times larger than that given in Ref. . $$\begin{array}{cc}\hfill \mathcal{L}=& -\frac{1}{4}F^2+\frac{\alpha ^2}{360}\frac{1}{m^4}[4(F^2)^2+7(F{}^{*}F)^2]\hfill \\ & -\frac{\pi \alpha ^3}{630}\frac{1}{m^8}F^2[8(F^2)^2+13(F{}^{*}F)^2]+\mathrm{\cdots }.\hfill \end{array}$$ (48) The Lagrangian for a spin-0 and spin-1 charged particle in the loop is given by similar formulas which are derived in Ref. and (implicitly) in Ref. , respectively. Given this homogeneous-field effective Lagrangian, it is a simple matter to derive the cross section for the $`\gamma \gamma \to \gamma \gamma `$ process in the low energy limit. (These results can, of course, be directly calculated from the corresponding one-loop Feynman graph with on-mass-shell photons. See Refs. .) Explicit results for the differential cross section are given by Ref.
: $$\frac{d\sigma }{d\mathrm{\Omega }}=\frac{139}{32400\pi ^2}\alpha ^4\frac{\omega ^6}{m^8}(3+\mathrm{cos}^2\theta )^2,$$ (49) and the total cross section for a spin-1/2 charged particle in the loop is<sup>¶</sup><sup>¶</sup>¶The numerical coefficient in the total cross section for a spin-0 and spin-1 charged particle in the loop is $`119/20250\pi `$ and $`2751/250\pi `$, respectively. Numerically the coefficients are $`0.00187`$, $`0.0306`$, and $`3.50`$ for spin 0, spin 1/2, and spin 1, respectively. $$\sigma =\frac{973}{10125\pi }\alpha ^4\frac{\omega ^6}{m^8}.$$ (50) Here, $`\omega `$ is the energy of the photon in the center of mass frame, $`s=4\omega ^2`$. This result is valid provided $`\omega /m\ll 1`$. The dependence on $`m`$ and $`\omega `$ is evident from the Lagrangian (48), the $`\omega `$ dependence coming from the field strength tensor. Further note that perturbative quantum corrections are small, because they are of relative order $`3\alpha \sim 10^{-2}`$ . Processes in which four final-state photons are produced, which may be easily calculated from the last displayed term in Eq. (48), are even smaller, being of relative order $`\alpha ^2(\omega /m)^8`$. So light-by-light scattering, which has been indirectly observed through its contribution to the anomalous magnetic moment of the electron , is completely under control for electron loops. How is this applicable to photon scattering through a monopole loop? At first blush this calculation seems formidable. The interaction of a magnetically charged particle with a photon involves a “string,” as described by the function $`f_\mu `$ given in Eq. (7). The interaction between electric and magnetic charges is given by the complicated expression (4). This coupling is equivalent to the interaction between the magnetic current $`{}^{*}J^{\mu }`$ and the electromagnetic field, $$W_{\mathrm{int}}=\int (dx)(dx^{}){}^{*}F_{\mu \nu }(x^{})f^\nu (x^{}-x){}^{*}J^{\mu }(x).$$ (51) From Eqs. (4) and (7) one obtains the relevant string-dependent monopole-photon coupling vertex in momentum space, $$\mathrm{\Gamma }_\mu (q)=ig\frac{ϵ_{\mu \nu \sigma \tau }n^\nu q^\sigma \gamma ^\tau }{n\cdot q-iϵ},$$ (52) where we have, for variety’s sake, chosen a semi-infinite string. As we have noted, the choice of the string is arbitrary; reorienting the string is a kind of gauge transformation. In fact, it is this requirement that leads to the quantization conditions (1) and (2). The authors of Refs. , , and do not attempt a calculation of the “box” diagram with the interaction (51). Rather, they (explicitly or implicitly) appeal to duality, that is, the symmetry that the introduction of magnetic charge brings to Maxwell’s equations: $$𝐄\to 𝐇,𝐇\to -𝐄,$$ (53) and similarly for charges and currents. Thus the argument is that for low energy photon processes it suffices to compute the fermion loop graph in the presence of zero-energy photons, that is, in the presence of static, constant fields. The box diagram shown in Fig. 1 with a spin-1/2 monopole running around the loop in the presence of a homogeneous $`𝐄,𝐇`$ field is then obtained from the analogous process with an electron in the loop in the presence of a homogeneous $`𝐇,-𝐄`$ field, with the substitution $`e\to g`$. Since the Euler-Heisenberg Lagrangian (48) is invariant under the substitution (53) on the fields alone, this means we obtain the low energy cross section $`\sigma _{\gamma \gamma \to \gamma \gamma }`$ through the monopole loop from Eq.
(50) by the substitution $`e\to g`$, or $$\alpha \to \alpha _g=\frac{137}{4}N^2,N=1,2,3,\mathrm{\dots }.$$ (54) ### A Inconsistency of the Duality Approximation It is critical to emphasize that the Euler-Heisenberg Lagrangian is an effective Lagrangian for calculations at the one-fermion-loop level for low energy, i.e., $`\omega /M\ll 1`$. It is commonly asserted that the Euler-Heisenberg Lagrangian is an effective Lagrangian in the sense used in chiral perturbation theory . This is not true. The QED expansion generates derivative terms which do not arise in the effective Lagrangian expansion of the Euler-Heisenberg Lagrangian . One can only say that the Euler-Heisenberg Lagrangian is a good approximation for light-by-light scattering (without monopoles) at low energy because radiative corrections are down by factors of $`\alpha `$. However, it becomes unreliable if radiative corrections are large.<sup>∥</sup><sup>∥</sup>∥The same has been noted in another context by Bordag et al. . In this regard, both the Ginzburg and the De Rújula articles, particularly Ref. , are rather misleading as to the validity of the approximation sketched in the previous section. They state that the expansion parameter is not $`g`$ but $`g\omega /M`$, $`M`$ being the monopole mass, so that the perturbation expansion may be valid for large $`g`$ if $`\omega `$ is small enough. But this is an invalid argument. It is only when external photon lines are attached that extra factors of $`\omega /M`$ occur, due to the appearance of the field strength tensor in the Euler-Heisenberg Lagrangian. Moreover, the powers of $`g`$ and $`\omega /M`$ are the same only for the $`F^4`$ process. The expansion parameter is $`\alpha _g`$, which is huge. Instead of radiative corrections being of the order of $`\alpha `$ for the electron-loop process, these corrections will be of order $`\alpha _g`$, which implies an uncontrollable sequence of corrections. For example, the internal radiative corrections to the box diagram in Fig. 1 have been computed by Ritus and by Reuter, Schmidt, and Schubert in QED. In the $`O(\alpha ^2)`$ term in Eq. (48) the coefficients of the $`(F^2)^2`$ and the $`(F\stackrel{~}{F})^2`$ terms are multiplied by $`\left(1+\frac{40}{9}\frac{\alpha }{\pi }+O(\alpha ^2)\right)`$ and $`\left(1+\frac{1315}{252}\frac{\alpha }{\pi }+O(\alpha ^2)\right)`$, respectively. The corrections become meaningless when we replace $`\alpha \to \alpha _g`$. This would seem to be a devastating objection to the results quoted in Ref. and used in Ref. . But even if one closes one’s eyes to higher order effects, it seems clear that the mass limits quoted are inconsistent. If we take the cross section given by Eq. (50) and make the substitution (54), we obtain for the low energy light-by-light scattering cross section in the presence of a monopole loop $$\sigma _{\gamma \gamma \to \gamma \gamma }\approx \frac{973}{2592000\pi }\frac{N^8}{\alpha ^4}\frac{\omega ^6}{M^8}=4.2\times 10^4N^8\frac{1}{M^2}\left(\frac{\omega }{M}\right)^6.$$ (55) If the cross section were dominated by a single partial wave of angular momentum $`J`$, the cross section would be bounded by $$\sigma \le \frac{\pi (2J+1)}{s}\sim \frac{3\pi }{s},$$ (56) if we take $`J=1`$ as a typical partial wave. Comparing this with the cross section given in Eq.
(55), we obtain the following inequality for the cross section to be consistent with unitarity, $$\frac{M}{\omega }\gtrsim 3N.$$ (57) But the limits quoted for the monopole mass are less than this: $$\frac{M}{N}>870\text{ GeV},\text{spin }1/2,$$ (58) because, at best, a minimum $`\omega \sim 300`$ GeV; the theory cannot sensibly be applied below a monopole mass of about 1 TeV. (Note that changing the value of $`J`$ in the unitarity limit has very little effect on the bound (57), since an 8th root is taken: replacing $`J`$ by 50 reduces the limit (57) only by 50%.) Similar remarks can be directed toward the De Rújula limits . That author, however, notes the “perilous use of a perturbative expansion in $`g`$.” However, although he writes down the correct vertex, Eq. (52), he does not, in fact, use it, instead appealing to duality, and even so he admittedly omits enormous radiative corrections of $`O(\alpha _g)`$ without any justification other than what we believe is a specious reference to the use of effective Lagrangian techniques for these processes. ### B Proposed Remedies Apparently, then, the formal small-$`\omega `$ result obtained from the Euler-Heisenberg Lagrangian cannot be valid beyond a photon energy $`\omega /M\sim 0.1`$. The reader might ask why one cannot use duality to convert the monopole coupling with an arbitrary photon to the ordinary vector coupling. The answer is that little is thereby gained, because the coupling of the photon to ordinary charged particles is then converted into a complicated form analogous to Eq. (51). This point is stated and then ignored in Ref. in the calculation of $`Z\to 3\gamma `$. There is, in general, no way of avoiding the complication of including the string. We are currently undertaking realistic calculations of virtual (monopole loop) and real (monopole production) magnetic monopole processes. These calculations are, as the reader may infer, somewhat difficult and involve subtle issues of principle involving the string, and it will be some time before we have results to present. Therefore, here we wish to offer plausible qualitative considerations, which we believe suggest bounds that call into question the results of Ginzburg et al. . Our point is very simple. The interaction (51) couples the magnetic current to the dual field strength. This corresponds to the velocity suppression in the interaction of magnetic fields with electrically charged particles, or to the velocity suppression in the interaction of electric fields with magnetically charged particles, as most simply seen in the magnetic analog of the Lorentz force, $$𝐅=g\left(𝐁-\frac{𝐯}{c}\times 𝐄\right).$$ (59) That is, the force between an electric charge $`e`$ and a magnetic charge $`g`$, moving with relative velocity $`𝐯`$ and with relative separation $`𝐫`$, is $$𝐅=\frac{eg}{c}\frac{𝐯\times 𝐫}{4\pi r^3}.$$ (60) This velocity suppression is reflected in nonrelativistic calculations. For example, the energy loss in matter of a magnetically charged particle is approximately obtained from that of a particle with charge $`Ze`$ by the substitution $$\frac{Ze}{v}\to \frac{g}{c}.$$ (61) And the classical nonrelativistic dyon-dyon scattering cross section near the forward direction is $$\frac{d\sigma }{d\mathrm{\Omega }}\approx \frac{1}{(2\mu v)^2}\left[\left(\frac{e_1g_2-e_2g_1}{4\pi c}\right)^2+\left(\frac{e_1e_2+g_1g_2}{4\pi v}\right)^2\right]\frac{1}{(\theta /2)^4},\theta \ll 1,$$ (62) the expected generalization of the Rutherford scattering cross section at small angles.
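The unitarity estimate (57) is simple arithmetic on Eqs. (55) and (56), as the following sketch of ours makes explicit:

```python
import math

# Setting 4.2e4 * N^8 * omega^6 / M^8 equal to 3*pi/s, with s = 4*omega^2,
# and solving for M/omega reproduces Eq. (57).
coeff = 4.2e4                                  # prefactor of Eq. (55)
bound = (coeff * 4.0 / (3.0 * math.pi)) ** (1.0 / 8.0)
print(f"M/omega >~ {bound:.2f} * N")           # ~ 3.4 N, i.e. Eq. (57)

# With omega ~ 300 GeV the theory's validity starts near M ~ 1 TeV:
print(f"minimum sensible M ~ {bound * 300:.0f} GeV (N = 1)")
```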
Of course, the true structure of the magnetic interaction and the resulting scattering cross section is much more complicated. For example, classical electron-monopole or dyon-dyon scattering exhibits rainbows and glories, and the quantum scattering exhibits a complicated oscillatory behavior in the backward direction . These reflect the complexities of the magnetic interaction between electrically and magnetically charged particles, which can be represented as a kind of angular momentum . Nevertheless, for the purpose of extracting qualitative information, the naive substitution, $$e\to \frac{v}{c}g,$$ (63) seems a reasonable first step.<sup>\**</sup><sup>\**</sup>\**This, and the extension of this idea to virtual processes, leaves aside the troublesome issue of radiative corrections. The hope is that an effective Lagrangian can be found by approximately integrating over the fermions which incorporates these effects. A first estimate of the effect of incorporating radiative corrections, however, may be made by applying Padé summation of the leading corrections found in Refs. : $$\sigma \to \frac{\sigma ^{\mathrm{PT}}}{(1-\alpha _g)^2}\sim \frac{\sigma ^{\mathrm{PT}}}{1000N^4},$$ (64) which reduces the quoted mass limits of Ref. a bit. Indeed, such a substitution was used in the proposal to estimate production rates of monopoles at Fermilab. The situation is somewhat less clear for the virtual processes considered here. Nevertheless, the interaction (51) suggests there should, in general, be a softening of the vertex. In the current absence of a valid calculational scheme, we will merely suggest two plausible alternatives to the mere replacement procedure adopted in Refs. . We first suggest, as seemingly Ref. does, that the approximate effective vertex incorporates an additional factor of $`\omega /M`$. Thus we propose the following estimate for the $`\gamma \gamma `$ cross section in place of Eq. (55), $$\sigma _{\gamma \gamma \to \gamma \gamma }\sim 10^4N^8\frac{1}{M^2}\left(\frac{\omega }{M}\right)^{14},$$ (65) since there are four suppression factors in the amplitude. Now a considerably larger value of $`\omega `$ is consistent with unitarity, $$\frac{M}{\omega }\gtrsim \sqrt{3N},$$ (66) if we take $`J=1`$ again. We now must re-examine the $`\sigma _{pp\to \gamma \gamma X}`$ cross section. In the model given in Ref. , where the photon energy distribution is given in terms of the functions $`f(y)`$, $`y=\omega /E`$, the physical cross section is given by $$\sigma _{pp\to \gamma \gamma X}=\left(\frac{\alpha }{\pi }\right)^2\int \frac{dy_1}{y_1}\frac{dy_2}{y_2}f(y_1)f(y_2)\sigma _{\gamma \gamma \to \gamma \gamma }=\int dy_1dy_2\frac{d\sigma }{dy_1dy_2},$$ (67) where now (cf. Eq. (25) of the first reference in Ref. ) $$\frac{d\sigma }{dy_1dy_2}=\left(\frac{\alpha }{\pi }\right)^2RE^6\left(\frac{E}{M}\right)^8y_1^6f(y_1)y_2^6f(y_2),$$ (68) where, for spin 1/2 (up to factors of order unity), $$R\approx \frac{10^4}{\alpha ^4}\left(\frac{N}{M}\right)^8.$$ (69) The result in (68) differs from that in Ref. by a factor of $`(E/M)^8y_1^4y_2^4`$. The photon distribution function $`y^2f(y)`$ used is rather strongly peaked at $`y\sim 0.3`$. (This peaking is necessary to have any chance of satisfying the low-frequency criterion.) When we multiply by $`y^4`$, the amplitude is greatly reduced and the peak is shifted above $`y=1/2`$, violating even the naive criterion for the validity of perturbation theory.
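To get a feel for the size of this $`y^4`$ suppression, one can fold a toy spectrum with the stated peak position. The sketch below is our own illustration: the beta-like shape for $`y^2f(y)`$ is an assumption standing in for the actual distribution of Ref. , so only the order of magnitude is meaningful:

```python
import numpy as np

# Toy check of the suppression ratio quoted in Eq. (70) below, assuming a
# beta-like shape for y^2 f(y) peaked near y ~ 0.3 (not the actual f(y)).
y = np.linspace(1e-4, 1.0, 10_000)
y2f = y**6 * (1.0 - y)**14            # y^2 f(y), peaks at 6/20 = 0.3

ratio = np.trapz(y**4 * y2f, y) / np.trapz(y2f, y)   # = <y^4>
print(f"int y^6 f / int y^2 f ~ {ratio:.1e}")        # of order 1e-2
```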
Nevertheless, the integral of the distribution function is reduced by two orders of magnitude, that is, $$\frac{\int _0^1dy\,y^6f(y)}{\int _0^1dy\,y^2f(y)}\sim 10^{-2}.$$ (70) This reduces the mass limit quoted in by a factor of $`1/\sqrt{3}`$, to about 500 GeV, where $`\omega /M\sim 0.9`$. This dubious result makes us conclude that it is impossible to derive any limit for the monopole mass from the present data. As for the De Rújula limit<sup>††</sup><sup>††</sup>††We note that De Rújula also considers the monopole vacuum polarization corrections to $`g_V/g_A`$, $`g_A`$, and $`m_W/m_Z`$, proportional to $`(m_Z/M)^2`$ in each case, once again ignoring both the string and the radiative correction problem. He assumes that the monopole is a heavy vector-like fermion, and obtains a limit of $`M/N>8m_Z`$. Our ansatz changes $`(m_Z/M)^2`$ to $`(m_Z/M)^4`$, so that $`M/\sqrt{N}>\sqrt{8}m_Z\approx 250`$ GeV, a substantial reduction. from the $`Z\to 3\gamma `$ process, if we insert a suppression factor of $`\omega /M`$ at each vertex and integrate over the final state photon distributions, given by Eq. (18) of Ref. , the mass limit is reduced to $`M/\sqrt{N}\approx 1.4m_Z\approx 120`$ GeV, again grossly violating the low energy criterion. And the limit deduced from the vacuum polarization correction to the anomalous magnetic moment of the muon due to virtual monopole pairs is reduced to about 2 GeV. The reader might object that this $`\omega /M`$ softening of the vertex has little field-theoretic basis. Therefore, we propose a second possibility that does have such a basis. The vertex (52) suggests, and detailed calculation supports (based on the tensor structure of the photon amplitudes<sup>‡‡</sup><sup>‡‡</sup>‡‡For example, the naive monopole loop contribution to vacuum polarization differs from that of an electron loop (apart from charge and mass replacements) entirely by the replacement in the latter of $`(g_{\mu \nu }-q_\mu q_\nu /q^2)\to (𝐪^2/q_0^2)(\delta _{ij}-q_iq_j/𝐪^2)`$, when $`n^\mu `$ points in the time direction. Apart from this different tensor structure, the vacuum polarization is given by exactly the usual formula, found, for example, in Ref. . Details of this and related calculations will be given elsewhere.) the introduction of the string-dependent factor $`\sqrt{q^2/(n\cdot q)^2}`$ at each vertex, where $`q`$ is the photon momentum. Such a factor is devastating to the indirect monopole searches—for any process involving a real photon, such as that of the D0 experiment or for $`Z\to 3\gamma `$ discussed in , the amplitude vanishes. Because such factors can and do appear in full monopole calculations, it is clearly premature to claim any limits based on virtual processes involving real final-state photons. ## V Conclusions The field theory of magnetic charge is still in a rather primitive state. Indeed, it has been in an arrested state of development for the past two decades. With serious limits now being announced based on laboratory measurements, it is crucial that the theory be raised to a useful level. At the present time, we believe that theoretical estimates for the production of real monopoles are more reliable than are those for virtual processes. This is because, in effect, the former are dominated by tree-level processes. We have indicated why the indirect limits cannot be taken seriously at present; and of course only the real production processes offer the potential of discovery.
Perhaps the arguments here will stimulate readers to contribute to the further development of the theory, for it remains an embarrassment that there is no well-defined quantum field theory of magnetic charge. ## ACKNOWLEDGMENTS We are very pleased to dedicate this paper to Kurt Haller on the occasion of his 70th birthday, in view of his many contributions to non-Abelian gauge theory, of which dual QED is a disguised variant. We thank Igor Solovtsov for helpful conversations and we are grateful to the U.S. Department of Energy for financial support.
# Magnetoresistance of magnetic multilayers in the CPP mode: evidence for non-local scattering \[ ## Abstract We have carried out measurements of the magnetoresistance MR(H) in the CPP (Current Perpendicular to the Plane) mode for two types of magnetic multilayers which have different layer ordering. The series resistor model predicts that CPP MR(H) is independent of the ordering of the layers. Nevertheless, the measured MR(H) curves were found to be completely different for the following two configurations: \[Co(10Å)/Cu(200Å)/Co(60Å)/Cu(200Å)\]<sub>N</sub> and \[Co(10Å)/Cu(200Å)\]<sub>N</sub> \[Co(60Å)/Cu(200Å)\]<sub>N</sub> showing that the above model is incorrect. We have carried out a calculation showing that these results can be explained quantitatively in terms of the non-local character of the electron scattering, without the need to invoke spin-flip scattering or a short spin diffusion length. \] Since the discovery a decade ago of the giant magnetoresistance exhibited by magnetic multilayers, interest in this phenomenon has not abated . Recent research has focused on the magnetoresistance MR(H) in the CPP mode (current perpendicular to the plane of the layers) . Measurements of MR(H) are technically more difficult in the CPP mode than in the CIP mode (current in plane). However, there are advantages to the MR(H) data in the CPP mode. For example, it has been shown that experimental values of MR(H) in the CPP mode can shed light on the spin diffusion length. Here we present evidence for the importance of MR(H) measurements in the CPP mode for determining the role of non-local electron scattering in the giant magnetoresistance (GMR). We shall show that because of the long electron mean free path, non-local scattering makes the series resistor model inappropriate. As is well known, the GMR occurs in magnetic multilayers because the spin-up electrons and the spin-down electrons have different scattering rates. If the electron does not flip its spin upon scattering, then the spin-up and spin-down electrons constitute two separate currents, with different resistivities, as if flowing in two parallel wires. In the CPP mode, the resistances of the different layers add in series . Therefore, it would seem that two magnetic multilayers that differ only in the ordering of the layers would yield identical results for MR(H) in the CPP mode. To test this idea, Pratt and co-workers at Michigan State University (Chiang $`et`$ $`al.`$ ) measured CPP MR(H) for the two configurations \[Py/Cu/Co/Cu\]<sub>N</sub> and \[Py/Cu\]<sub>N</sub>\[Co/Cu\]<sub>N</sub> (denoted as ‘interleaved’ and ‘separated’ configurations, respectively), where Py is Ni<sub>84</sub>Fe<sub>16</sub>. Although the expectation was that identical MR(H) curves would be obtained for the interleaved and the separated configurations, these workers found that the resulting two MR(H) curves were completely different. Chiang $`et`$ $`al.`$ attributed their results to the short spin diffusion length in Py. They had previously analyzed resistivity data within the framework of Valet-Fert theory and obtained for Py a spin diffusion length of only 55 Å, thus implying significant mixing between the spin-up and spin-down electron currents. Chiang $`et`$ $`al.`$ proposed that this spin-flipping was responsible for the different CPP MR(H) curves they observed for the separated and interleaved configurations. We have investigated these ideas by measuring MR(H) for multilayers whose magnetic layers do $`not`$ exhibit a short spin diffusion length. 
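Before describing the experiment, it is worth making the series-resistor expectation concrete. The following sketch is our own illustration, with arbitrary resistance values rather than measured ones; it shows that the two-current series-resistor model yields identical CPP resistance for the interleaved and separated stacking orders, whatever the set of layer moments:

```python
# Two-current series-resistor model of CPP transport: each spin channel adds
# layer resistances in series, and the two channels conduct in parallel.
# Resistance values below are arbitrary illustrative numbers.
def cpp_resistance(layers):
    r_up = sum(r_maj if m > 0 else r_min for (r_maj, r_min, m) in layers)
    r_dn = sum(r_min if m > 0 else r_maj for (r_maj, r_min, m) in layers)
    return r_up * r_dn / (r_up + r_dn)

thin = (1.0, 4.0)    # (majority, minority) resistances of a Co(10 A)/Cu unit
thick = (2.0, 9.0)   # same for a Co(60 A)/Cu unit

ms = [+1, -1, +1, +1, -1, +1, -1, -1]        # layer moments in some field state
interleaved = [(*thin, m) if i % 2 == 0 else (*thick, m)
               for i, m in enumerate(ms)]
separated = sorted(interleaved, key=lambda l: l[:2])  # group equal layers
print(cpp_resistance(interleaved), cpp_resistance(separated))  # identical
```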
For the different magnetic layers, we used Co of two different thicknesses, since Co is known to have a long spin diffusion length. Measurements were carried out of CPP MR(H) for \[Co(10Å)/Cu(200Å)/Co(60Å)/Cu(200Å)\]<sub>N</sub> and \[Co(10Å)/Cu(200Å)\]<sub>N</sub>\[Co(60Å)/Cu(200Å)\]<sub>N</sub> for N = 4, 6, 8. The thickness (200 Å) of the non-magnetic layers was chosen to be large enough to ensure complete magnetic decoupling between the ferromagnetic layers. In spite of the fact that the interleaved and separated configurations differ only in the ordering of the layers, the measured MR(H) curves were found to be very different for the two different configurations. We shall show that these results can be explained quantitatively in terms of non-local electron scattering. The multilayers were grown in our VG-80M MBE facility, which has a base pressure of typically $`4\times 10^{-11}`$ mbar. Our CPP measurements used the superconducting Nb electrode technique, as developed by Pratt et al. . The superconducting equipotential ensures that the current is perpendicular to the layers. We used a SQUID-based current comparator, working at 0.1$`\%`$ precision, to measure changes in the sample resistance of order 10 p$`\mathrm{\Omega }`$. To avoid driving the Nb normal, the CPP measurements were performed at 4.2 K in magnetic fields below 3 kOe. Consistency between the interleaved and separated samples was enhanced by growing the two configurations during the same run for each value of N. The magnetoresistance was measured in the CPP mode for the two configurations: interleaved and separated. The measured curves for MR(H) are presented for three values of N in Figs. 1a-1c. The squares represent the MR(H) data in the interleaved configuration whereas the circles give the data in the separated configuration. For each sample, the saturation magnetic field was about 2 kOe. There are several characteristic features of these data, all of which can be explained in terms of non-local electron scattering. (i) The most important feature is surely the striking difference between the MR(H) curves for the two configurations, both in shape and in magnitude. (ii) For each N, the maximum value of MR(H) is larger for the interleaved configuration. (iii) The MR(H) curve for the interleaved configuration exhibits a single peak, whereas for the separated configuration, MR(H) is the superposition of two peaks, with the second being much broader and less delineated than the first. Another interesting feature of the data, not displayed in Fig. 1, is that the saturated resistance itself is always greater for the interleaved configuration. To ensure that the differing results for MR(H) for the two configurations are not due to differences in their magnetic properties, the magnetization as a function of field was measured for each sample. We found that the two configurations yield the same magnetization. This confirms that the magnetic layers are uncoupled and become magnetized independently. At low fields, the magnetization curves are dominated by the contribution of the thicker Co layers. After the thicker Co layers reach saturation, the magnetization continues to increase as the thinner Co layers approach saturation. The magnitudes of the saturation fields for the two thicknesses of Co layers correspond closely to the saturation fields of MR(H). Kinetic theory arguments show that the electron mean free path is far longer than the thicknesses of the magnetic layers (10 Å and 60 Å).
Therefore, the potential “felt” by the electron is the combined potential of a neighboring pair of magnetic layers. This may be termed “non-local” electron scattering in the sense that one cannot speak of the resistivity of a $`single`$ Co layer. Rather, the resistivity is determined by a property of $`pairs`$ of neighboring layers. Gittleman $`et`$ $`al.`$ have shown that for such a case, the contribution of the spin-direction-dependent resistivity depends on the cosine of the angle $`\theta _{ij}`$ between the moments of neighboring magnetic layers, i and j. This is the key to understanding the data. Because the mean free path is larger than the layer thicknesses, it is necessary to carry out a full band structure calculation to calculate properly the resistivity and magnetoresistance. However, one can understand the basic physics with a simple phenomenological model. For the interleaved configuration, the neighboring magnetic layers are different, and hence the maximum angle $`\theta _{ij}`$ is large, whereas for the separated configuration, the neighboring magnetic layers are the same (except for one boundary layer), and hence the maximum angle $`\theta _{ij}`$ is small. Therefore, there is no reason to expect MR(H) to be the same for the two configurations. This explains the first feature of the data mentioned above. From the above considerations, it also immediately follows that MR(H) will be larger for interleaved multilayers than for separated multilayers, because the angle $`\theta _{ij}`$ is larger for the former configuration. This explains the second feature of the data mentioned above. This has been confirmed by measurements of the GMR as a function of the number of bilayers. A Fuchs-Sondheimer analysis of these data shows that the mean free path in sputtered and MBE samples is about 500 Å and 700 Å, respectively. For the interleaved configuration, there is only $`one`$ angle $`\theta _{ij}`$ that is relevant, namely, the angle between the moments of the different (10Å and 60Å) neighboring magnetic layers. Therefore, there will be only $`one`$ peak, as the angle $`\theta _{ij}`$ becomes progressively larger, passes through a maximum at the saturation field of the Co (60Å) layer and then becomes smaller as the Co (10Å) layer also saturates. By contrast, for the separated configuration, there are $`two`$ angles $`\theta _{ij}`$ that are relevant, namely, the angle between neighboring moments for each set of layers (the 10 Å set and the 60 Å set). As each angle $`\theta _{ij}`$ passes through its maximum, a peak will be obtained for MR(H), leading to two overlapping peaks, with each maximum occurring at a different value of the magnetic field, corresponding roughly to the coercive field of each type of magnetic layer. This explains the third feature of the data mentioned above. These ideas can be made quantitative. If the spin diffusion length is very long, it is known that a simple expression is obtained for MR(H). According to the phenomenological theory of Wiser , for the geometry under consideration here and assuming a very long spin diffusion length, the magnetoresistance due to an $`ij`$-pair of neighboring magnetic layers is: $$MR_{ij}(H)=c_{ij}(1-\mathrm{cos}\theta _{ij}(H))^2.$$ (1) The spin diffusion length of Co has been measured yielding values of 450 Å and 1000 Å . These values are very much larger than the thickness of the Co layers, and so one may safely employ the expression for MR(H) given in (1).
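The single peak predicted for the interleaved configuration follows directly from Eq. (1) once $`\mathrm{cos}\theta _i`$ is parametrized as linear in the field, as explained below. A minimal numerical sketch of ours (the saturation fields and the amplitude $`c_{12}`$ are arbitrary illustrative values, not fitted parameters):

```python
import numpy as np

# Toy evaluation of Eq. (1) for the interleaved configuration, taking
# cos(theta_i) linear in H up to the saturation field of each Co thickness.
H = np.linspace(0.0, 2.5, 500)           # field in kOe
Hs1, Hs2, c12 = 0.8, 2.0, 1.0            # assumed saturation fields, amplitude

cos1 = np.clip(-1.0 + 2.0 * H / Hs1, -1.0, 1.0)   # thick Co reverses first
cos2 = np.clip(-1.0 + 2.0 * H / Hs2, -1.0, 1.0)   # thin Co reverses more slowly
theta12 = np.arccos(cos1) - np.arccos(cos2)

MR = c12 * (1.0 - np.cos(theta12))**2
print(f"peak at H = {H[np.argmax(MR)]:.2f} kOe")  # single peak near Hs1
```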
For our samples, there are three parameters $`c_{ij}`$ corresponding to the three different types of neighboring pairs of magnetic layers: i = j = 1; i = j = 2; i = 1, j = 2, where 1 refers to Co (60Å) layers and 2 refers to Co (10Å) layers. The interleaved configuration contains only type i = 1, j = 2 neighbors, whereas the separated configuration contains all three types. For a sample containing N repeats, the separated configuration consists of N-1 pairs of type i = j = 1 neighbors, followed by one pair of type i = 1, j = 2 neighbors (the boundary layer), followed by N-1 pairs of type i = j = 2 neighbors. First consider the interleaved configuration. The saturation magnetic field $`H_{s1}`$ of the thicker Co layers is smaller than $`H_{s2}`$ of the thinner Co layers. Thus, as the magnetic field is increased, the angle $`\theta _{1,2}`$ increases, since the thicker Co layers are reversing their direction of magnetization faster than the thinner Co layers. According to Eq. (1), increasing the angle $`\theta _{1,2}`$ implies an increase in MR(H). When the magnetic field reaches $`H_{s1}`$, the angle $`\theta _{1,2}`$ reaches its maximum value, and begins to decrease as the thinner Co layers continue to reverse their direction of magnetization while the thicker Co layers have already reached saturation. According to Eq. (1), decreasing $`\theta _{1,2}`$ leads to a decrease in MR. Finally, when the field reaches $`H_{s2}`$, the angle $`\theta _{1,2}`$ is again zero, and MR vanishes. Thus, we expect, and find, a single peak for MR(H) for the interleaved configuration. The field dependence of $`\theta _{ij}`$ is determined as follows. The magnetization increases linearly with field (except near saturation, where it increases more slowly). Since the magnetization is proportional to the cosine of the angle between the magnetic moment and the field, it follows that cos$`\theta _i`$ and cos$`\theta _j`$ are each linear in the field. Equation (1) contains cos$`\theta _{ij}`$ = cos($`\theta _i-\theta _j`$). Expanding the cosine readily gives the required field dependence. The calculated results for the interleaved configuration are given by the curves in Fig. 2. For each value of N, the parameter $`c_{1,2}`$ was determined by fitting to the MR(H) data. The agreement between the calculated curves and the data is evident from the figure. We now consider the separated configuration. If the Co layers were ideal single-domain structures, then the magnetic moment of each Co layer would react identically to the magnetic field and the angles $`\theta _{1,1}`$ and $`\theta _{2,2}`$ would both be zero at all fields. However, because of the presence of domains and of structural imperfections in the Co layers, each layer reverses its magnetization at a somewhat different rate. As a result, the angles $`\theta _{1,1}`$ and $`\theta _{2,2}`$ become non-zero as the field is increased, pass through a maximum at the coercive field, and then decrease to zero as saturation is approached. We assumed a simple parabolic form for each of the two angles. The maximum value of each parabola, $`\theta _{max,1,1}`$ and $`\theta _{max,2,2}`$, cannot be determined by fitting to the data for the following reason. Because these angles are small, Eq. (1) can be expanded to yield $$MR_{ii}=c_{ii}\left(\frac{1}{2}\theta _{ii}^2\right)^2\sim c_{ii}(\theta _{max,ii})^4$$ (2) and this $`combination`$ of $`c_{ii}`$ and $`\theta _{max,ii}`$ serves as a $`single`$ fitting parameter.
Nevertheless, some numerical tests we have carried out suggest that both $`\theta _{max,1,1}`$ and $`\theta _{max,2,2}`$ lie in the range of $`15^{\circ }-30^{\circ }`$. These values are, of course, much smaller than the maximum value of the angle $`\theta _{1,2}`$. This explains why MR(H) is larger for the interleaved configuration than for the separated configuration. The calculated results for the separated configuration are given by the curves in Fig. 3. For each value of N, the three parameters $`c_{1,1}(\theta _{max,1,1})^4`$, $`c_{2,2}(\theta _{max,2,2})^4`$, and $`c_{1,2}`$ were determined by fitting to the MR(H) data. The agreement between the calculated curves and the data is evident from the figure. To confirm that MR(H) for the separated configuration contains the contributions of \[Co(10Å)/Cu(200Å)\]<sub>N</sub> and of \[Co(60Å)/Cu(200Å)\]<sub>N</sub>, we also measured MR(H) for a multilayer containing only \[Co(10Å)/Cu(200Å)\]<sub>N</sub> and for another multilayer containing only \[Co(60Å)/Cu(200Å)\]<sub>N</sub>. For each of these two multilayers, MR(H) consists of a single peak, located at the same magnetic field as one of the two peaks in the separated configuration. Thus, the two peaks observed for the separated configuration do indeed correspond to the two individual peaks. In conclusion, we have shown that the principal features of the MR(H) data can be explained quantitatively, for both the interleaved and the separated configurations, by invoking non-local electron scattering. It is a pleasant duty to acknowledge that this research was supported by grants from the UK-Israel Science and Technology Research Fund and the UK-EPSRC. We appreciate discussions with C. H. Marrows and A. Carrington. D. Bozec thanks the University of Leeds for financial support.
# The effects of nonextensive statistics on fluctuations investigated in event-by-event analysis of data

## Abstract

We investigate the effect of nonextensive statistics as applied to the chemical fluctuations in high-energy nuclear collisions discussed recently using the event-by-event analysis of data. It turns out that even a very minute nonextensivity changes drastically the expected experimental output for the fluctuation measure. This result is in agreement with similar studies of nonextensivity performed recently for the transverse momentum fluctuations in the same reactions. PACS numbers: 25.75.-q 24.60.-k 05.20.-y 05.70.Ln

Some time ago a novel method of investigating fluctuations in event-by-event analysis of high energy multiparticle production data was proposed . It consists of defining a suitable measure $`\mathrm{\Phi }`$ of a given observable $`x`$, constructed to be exactly the same for nucleon-nucleon and nucleus-nucleus collisions if the latter are simple superpositions of the former:

$$\mathrm{\Phi }_x=\sqrt{\frac{\langle Z^2\rangle }{\langle N\rangle }}-\sqrt{\overline{z^2}}\mathrm{where}Z=\underset{i=1}{\overset{N}{}}z_i.$$ (1)

Here $`z_i=x_i-\overline{x}`$, where $`\overline{x}`$ denotes the mean value of the observable $`x`$ calculated for all particles from all events (the so called inclusive mean) and $`N`$ is the number of particles analysed in the event. In (1) $`\langle N\rangle `$ and $`\langle Z^2\rangle `$ are averages of event-by-event observables over all events, whereas the last term is the square root of the second moment of the inclusive $`z`$ distribution. By construction, $`\mathrm{\Phi }_x=0`$ if particles are produced independently. When applied to the recent NA49 data from central $`PbPb`$ collisions at $`158`$ A$``$GeV , this method revealed that fluctuations of transverse momentum ($`x=p_T`$) decreased significantly with respect to elementary NN collisions. This has been interpreted as a possible sign of equilibration taking place in heavy ion collisions, thus providing an environment for the possible creation of quark-gluon plasma (QGP). It was quickly realised that existing models of multiparticle production lead in this matter to conflicting statements . On the other hand, the use of fluctuations as a very sensitive tool for the analysis of the dynamics of multiparticle reactions has been advocated for some time already , and especially their role in searching for some special features of the QGP equation of state has been shown to be of special interest . However, it was demonstrated recently in that the corresponding fluctuation measure calculated for a pion gas in global equilibrium (defined within standard extensive thermodynamics) is almost an order of magnitude greater than the experimental value. Although the recent NA49 paper presents a new value, which is more like the prediction in , the controversy that arose around $`\mathrm{\Phi }`$ resulted in a number of presentations trying to clarify and extend the meaning of the $`\mathrm{\Phi }`$ variable (cf., for example, and references therein; for some other recent discussions of event-by-event fluctuations see ). We would like to point out here only that, if there are some additional fluctuations (not arising from quantum statistics, like those caused by experimental errors) which add in the same way to both terms in the definition (1) of $`\mathrm{\Phi }`$, it would perhaps be better to use $`\mathrm{\Phi }\to \mathrm{\Phi }^{}=\frac{\langle Z^2\rangle }{\langle N\rangle }-\overline{z^2}`$, where they would cancel. 
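For orientation, the $`\mathrm{\Phi }`$ measure of Eq. (1) is simple to evaluate on simulated events. The toy event generator below is our own illustration (not a detector simulation); it merely confirms that $`\mathrm{\Phi }_x`$ vanishes, up to statistical noise, for independently produced particles.

```python
import numpy as np

def phi_measure(events):
    """Phi fluctuation measure of Eq. (1); `events` is a list of 1-D
    arrays, one array of single-particle observables x per event."""
    all_x = np.concatenate(events)
    xbar = all_x.mean()                       # inclusive mean
    z2bar = ((all_x - xbar) ** 2).mean()      # inclusive second moment
    Z = np.array([np.sum(ev - xbar) for ev in events])
    N = np.array([len(ev) for ev in events])
    return np.sqrt((Z ** 2).mean() / N.mean()) - np.sqrt(z2bar)

# independent emission (Poisson multiplicities, iid x): Phi -> 0
rng = np.random.default_rng(0)
events = [rng.exponential(0.4, rng.poisson(50)) for _ in range(20000)]
print(phi_measure(events))   # close to zero, up to statistical noise
```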
In the meantime the use of this variable has been extended, so far only theoretically, to the possible study (actually already planned by NA49) of the event-by-event fluctuations of the "chemical" (particle type) composition of the final stage of high energy collisions . Departing from the above discussion, we would like to follow another path of research. Namely, it was suggested recently in that the extreme conditions of density and temperature occurring in ultrarelativistic heavy ion collisions can lead to memory effects and long-range colour interactions and to the presence of non-Markovian processes in the corresponding kinetic equations (cf., for example ). It turns out that such effects in many other branches of physics are best described phenomenologically in terms of a single parameter $`q`$ by using the so called nonextensive statistics . This statistics is based on a new definition of the $`q`$-entropy (which for $`q\to 1`$ coincides with the usual Boltzmann-Gibbs definition):

$$S_q=\frac{1}{q-1}\underset{k=1}{\overset{W}{}}p_k\left(1-p_k^{q-1}\right)\stackrel{q\to 1}{}S=-\underset{k=1}{\overset{W}{}}p_k\mathrm{ln}p_k$$ (2)

(defined for the probability distribution $`\{p_k\}`$ of a system of $`W`$ microstates). Such entropy is nonextensive, i.e., for a system $`(A+B)`$ composed of two independent systems $`A`$ and $`B`$:

$$S_q(A+B)=S_q(A)+S_q(B)+(1-q)S_q(A)S_q(B).$$ (3)

The value of $`|q-1|`$ characterises the deviation from extensivity; see for more details. As was shown in , a nonextensive approach with $`q`$ as minute as $`q=1.01`$ to $`1.015`$ eliminates the abovementioned discrepancy between the first NA49 data and the ideal quantum gas (Boltzmann statistics) estimate of . Because the NA49 Collaboration plans to study also the chemical fluctuations, predictions have already appeared concerning the expected form of the fluctuation measure $`\mathrm{\Phi }`$ in this case . They are based on the use of normal (i.e., Boltzmann) statistics. In the present note we generalize them to the case of nonextensive statistics, in a manner identical to that presented in for the case of transverse momenta. As in we have computed the $`\mathrm{\Phi }`$ measure for a system of particles of two sorts, $`\pi ^{}`$ and $`K^{}`$, i.e., nonstrange and strange hadrons with multiplicities $`n_\pi `$ and $`n_K`$, respectively. Since

$$N=n_\pi +n_K$$ (4)

one immediately finds that in definition (1)

$$\overline{z^2}=\frac{\langle n_\pi \rangle \langle n_K\rangle }{\langle N\rangle ^2}$$ (5)

and

$$\langle Z^2\rangle =\frac{\langle n_\pi \rangle ^2\langle n_K^2\rangle +\langle n_\pi ^2\rangle \langle n_K\rangle ^2-2\langle n_\pi \rangle \langle n_K\rangle \langle n_\pi n_K\rangle }{\langle N\rangle ^2}.$$ (6)

We have now consistently replaced the mean occupation numbers by their $`q`$-equivalents, which under some approximations, valid for small values of the nonextensivity $`|1-q|`$, can be expressed in the following analytical form :

$$n_q=\left\{\left[1+(q-1)\beta (E-\mu )\right]^{1/(q-1)}\pm 1\right\}^{-1},$$ (7)

where $`\beta =1/kT`$, $`\mu `$ is the chemical potential and the $`+/-`$ sign applies to fermions/bosons. Notice that in the limit $`q\to 1`$ (extensive statistics) one recovers the conventional Fermi-Dirac and Bose-Einstein distributions. We shall not dwell on the details of this procedure; they are essentially the same as those discussed in . It is only necessary to mention that in this approximation one retains the basic factorised formula for correlations used in , namely that

$$\langle n_in_j\rangle =\langle n_i\rangle \langle n_j\rangle .$$ (8)

As in (where $`\mathrm{\Phi }`$ for transverse momenta $`p_T`$ was considered), a rather large sensitivity of the predictions to the parameter $`q`$ has been observed. 
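Equation (7) is likewise easy to evaluate; the following sketch uses illustrative values of $`T`$ and $`\mu `$ (not the fitted ones) and checks that the q-deformed Bose occupation approaches the Bose-Einstein limit as $`q\to 1`$.

```python
import numpy as np

def n_q(E, T, mu, q, boson=True):
    """Approximate q-deformed occupation number of Eq. (7), valid
    for small |q - 1|; the -/+ sign in the denominator distinguishes
    bosons from fermions (q = 1 itself must be taken as a limit)."""
    base = 1.0 + (q - 1.0) * (E - mu) / T
    sign = -1.0 if boson else +1.0
    return 1.0 / (base ** (1.0 / (q - 1.0)) + sign)

E = np.linspace(0.1, 2.0, 5)   # energies in GeV (illustrative)
T = 0.17                       # temperature in GeV (illustrative)
print(n_q(E, T, 0.0, 1.015))             # q-deformed Bose occupation
print(1.0 / (np.exp(E / T) - 1.0))       # Bose-Einstein, the q -> 1 limit
```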
According to the nonextensive statistics philosophy this fact indicates a large sensitivity to the (initial and boundary) conditions present in ultrarelativistic heavy ion collisions and the existence of some kind of memory effects in such systems, as mentioned in references . Our results are presented in Figs. 1 and 2, where the modifications caused by the nonextensivity $`q=1.015`$ (chosen in such a way as to fit the $`p_T`$ spectra in Fig. 3, see the discussion below) to the results of for directly produced particles are shown. For simplicity we have restricted ourselves here only to comparison with the results of without resonances. (One can argue that resonance production belongs in our philosophy already to the nonextensive case, being therefore responsible for at least a part of the effect leading to a nonzero $`|1-q|`$. This is best seen by inspecting the results of with resonances included, which show that $`\mathrm{\Phi }`$ in this case also changes sign. The use of the parameter $`q`$ is, however, more general, as it includes all other possible effects as well.) Notice (cf. also ) that, as is clearly seen in Fig. 3, the same pattern of fluctuations is already present in the transverse momentum spectra of produced secondaries, i.e., the same value of $`q`$ brings the new "$`q`$-thermal" curve into agreement with experiment in the whole range of $`p_T`$ presented. If a similar value of the parameter $`q`$ (modulo experimental errors) also emerged in the future data on the fluctuations of the chemical composition discussed here, it would signal that both observables whose fluctuations are investigated are similarly affected by the external conditions mentioned before and that they can be easily parametrized phenomenologically by a single parameter $`q`$, i.e., by the measure of the nonextensivity of the nuclear collision process. (It is worth mentioning here that methods of nonextensive statistics have already been used in the field of high energy physics to analyse some aspects of cosmic ray data and in the description of hadronization in $`e^+e^{}`$ annihilation processes . Also the recently proposed use of quantum groups in studying Bose-Einstein correlations observed in all multiparticle reactions belongs to this category because, as was shown in , there is a close correspondence between the deformation parameter of quantum groups and the nonextensivity parameter of Tsallis statistics. In fact, as can be seen from , works on intermittency using the so called Lévy stable distributions (for example ) belong to this category as well.) Figure Captions: * $`\mathrm{\Phi }`$ - measure of the kaon multiplicity fluctuations (in the $`\pi ^{}K^{}`$ system of particles) as a function of temperature for three values of the pion chemical potential. The kaon chemical potential vanishes. The resonances are neglected. $`(a)`$ - results of (in linear scale); $`(b)`$ - our results for $`q=1.015`$. * $`\mathrm{\Phi }`$ - measure of the kaon multiplicity fluctuations (in the $`\pi ^{}K^{}`$ system of particles) as a function of temperature for three values of the kaon chemical potential. The pion chemical potential vanishes. The resonances are neglected. $`(a)`$ - results of (in linear scale); $`(b)`$ - our results for $`q=1.015`$. * The results for the $`p_T`$ distribution: notice that the $`q=1.015`$ result also describes the tail of the distribution not fitted by the conventional exponent (i.e., $`q=1`$ in our case, cf. also ). 
Data are taken from .
# Implementation of the Quantum Fourier Transform

## Abstract

The quantum Fourier transform has been implemented on a three bit nuclear magnetic resonance (NMR) quantum computer, providing a first step towards the realization of Shor's factoring and other quantum algorithms. Implementation of the QFT is presented with fidelity measures and state tomography. Experimentally realizing the QFT is a clear demonstration of NMR's ability to control quantum systems. PACS numbers 03.67.-a, 03.67.Lx, 02.70.-c, 89.70.+c

Quantum computers are devices that process information in a way that preserves quantum coherence. Unlike a classical bit, a quantum bit, or 'qubit,' can be in a superposition of $`0`$ and $`1`$ at once. This nonclassical feature of quantum information allows quantum computers to perform some computations faster than classical computers. For example, quantum computers, if constructed, could factor large numbers more rapidly , search databases more quickly , and simulate quantum systems more efficiently than is possible using current classical algorithms . A key subroutine of algorithms for factoring and simulation is the quantum Fourier transform (QFT) . In essence the QFT takes a 'position' state $`|x`$ to the corresponding 'momentum' state $`|p`$ and is defined as follows:

$$QFT_q|x=\frac{1}{\sqrt{q}}\underset{p=0}{\overset{q-1}{}}e^{2\pi ixp/q}|p,$$ (1)

where $`q`$ is the dimension of the system's Hilbert space. In general the $`QFT_q`$ transforms the input amplitudes as

$$QFT_q\underset{x}{}f(x)|x\to \underset{p}{}\stackrel{~}{f}(p)|p,$$ (2)

where the coefficients $`\stackrel{~}{f}(p)`$ are

$$\stackrel{~}{f}(p)=\frac{1}{\sqrt{q}}\underset{x}{}e^{2\pi ixp/q}f(x).$$ (3)

For example, the two qubit QFT corresponds to the unitary operator $`QFT_4`$ $`=`$ $`{\displaystyle \frac{1}{2}}\left(\begin{array}{cccc}1& 1& 1& 1\\ 1& i& -1& -i\\ 1& -1& 1& -1\\ 1& -i& -1& i\end{array}\right).`$ (4) This operator shows the QFT separating the input states by 0 degrees in the first row and column, and then by 90 degrees, 180 degrees and 270 degrees, multiples of $`\frac{\pi }{2}`$. Equation (4) shows that the QFT has effects similar to those of the classical Fourier transform. In particular, if $`f(x)`$ is periodic with period $`r`$, then $`\stackrel{~}{f}(p)`$ will exhibit spikes at multiples of $`q/r`$. This is the key to Shor's algorithm, which allows a quantum computer to factor very large numbers in polynomial time. The classical Fourier transform reveals the periodicity of functions; the QFT reveals the periodicity of wavefunctions. As formulated by Coppersmith, the QFT can be constructed from two basic unitary operations: $`A_j`$, operating on the jth qubit, $`A_j`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}\left(\begin{array}{cc}1& 1\\ 1& -1\end{array}\right)`$ (5) and $`B_{jk}`$, operating on the jth and kth qubits, $`B_{jk}`$ $`=`$ $`\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& 1& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& e^{i\theta _{jk}}\end{array}\right),`$ (6) where $`\theta _{jk}=\pi /2^{k-j}`$. To implement the QFT, the gates

$$B_{j,j+1}B_{j,j+2}\mathrm{}B_{j,L-1}A_j$$ (7)

are implemented on the lead bit, $`j=L-1`$. Repeating the above sequence of gates on all $`L`$ bits, as $`j`$ is indexed from $`L-1`$ to $`0`$, completes the QFT. This sequence of quantum logic gates can be realized in NMR. The idea of using nuclear spins as the basic unit of a quantum computer was proposed by Lloyd , and detailed schemes for using NMR as a method of quantum computing were proposed by Cory et al and by Gershenfeld and Chuang . 
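As a cross-check of Coppersmith's construction, the following numpy sketch builds the n-qubit QFT from the $`A_j`$ (Hadamard) and $`B_{jk}`$ (controlled phase, $`\theta _{jk}=\pi /2^{k-j}`$) gates and compares it with the matrix defined by Eq. (1). The qubit indexing convention and the final bit-reversal permutation are our own bookkeeping; the latter plays the role of the bit reordering mentioned below.

```python
import numpy as np

def qft_circuit(n):
    """n-qubit QFT built from Hadamard (A_j) and controlled-phase
    (B_jk) gates; qubit 0 is the most significant bit, and the gate
    sequence leaves the output bit-reversed, which we undo at the end."""
    N = 2 ** n
    U = np.eye(N, dtype=complex)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    def apply_1q(U, g, j):               # single-qubit gate on qubit j
        ops = [np.eye(2)] * n
        ops[j] = g
        G = ops[0]
        for o in ops[1:]:
            G = np.kron(G, o)
        return G @ U

    def apply_cphase(U, j, k, theta):    # phase e^{i theta} if bits j,k both 1
        D = np.ones(N, dtype=complex)
        for s in range(N):
            if (s >> (n - 1 - j)) & 1 and (s >> (n - 1 - k)) & 1:
                D[s] = np.exp(1j * theta)
        return D[:, None] * U

    for j in range(n):
        U = apply_1q(U, H, j)                                 # A_j gate
        for k in range(j + 1, n):
            U = apply_cphase(U, j, k, np.pi / 2 ** (k - j))   # B_jk gate

    perm = [int(format(s, f'0{n}b')[::-1], 2) for s in range(N)]
    return U[perm, :]                    # undo the bit reversal

n = 3
N = 2 ** n
F = np.array([[np.exp(2j * np.pi * x * p / N) / np.sqrt(N)
               for x in range(N)] for p in range(N)])
print(np.allclose(qft_circuit(n), F))    # True
```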
In NMR a series of radio frequency pulses is used to control the excess magnetization of an ensemble of quantum states. NMR experiments are easily visualized by picturing the excess magnetization as a vector pointing in some direction and the pulses as rotations about the various axes. In addition, a bilinear coupling term in the Hamiltonian allows for quantum superposition. The Hamiltonian of a three spin (qubit) NMR sample with $`J`$-coupling is $`H`$ $`=`$ $`\begin{array}{c}\omega _1I_1^z+\omega _2I_2^z+\omega _3I_3^z+\\ 2\pi (J_{1,2}I_1^zI_2^z+J_{1,3}I_1^zI_3^z+J_{2,3}I_2^zI_3^z)\end{array}`$ (8) where $`I_i=\sigma _i/2`$. The three bit QFT was implemented via NMR using the three carbon-13 spins of an alanine sample. The resonant frequency of carbon-13 at 9.4 Tesla is approximately 100.617 MHz. The carbonyl was labeled spin 1, $`C_\alpha `$ was labeled spin 2, and $`C_\beta `$ spin 3. The chemical shifts of the three alanine carbons are 12587 Hz, 0 Hz, and -3435 Hz respectively. The coupling constants between the three spins are $`J_{12}`$ = 54 Hz, $`J_{23}`$ = 35 Hz, and $`J_{13}`$ = 1.2 Hz. The relaxation time $`T_1`$ for alanine is approximately 1.56 s, while $`T_2`$ is about 420 ms. The $`A_j`$ matrix described above can be decomposed in terms of the idempotents $`E_\pm =(1\pm \sigma _z)/2`$ as $`(E_+-E_{-}+\sigma _x(E_++E_{-}))/\sqrt{2}`$. The pulse sequence for the $`A_j`$ gate can now be determined using the geometric algebra formalism ,

$$A_j=\left(\frac{\pi }{2}\right)_y^j\left(\pi \right)_x^j.$$ (9)

This pulse program reads: apply a pulse along the $`y`$-axis that rotates spin $`j`$ 90 degrees, then apply a pulse along the $`x`$-axis that rotates $`j`$ 180 degrees. Magnetization on the $`z`$-axis would be rotated to the positive $`x`$-axis. Since this experiment starts with the spins at thermal equilibrium (pointing along the $`z`$-axis), the above sequence for the $`A_j`$ gate can be replaced by the simpler $`\frac{\pi }{2}`$ pulse along the positive $`y`$-axis. The $`B_{jk}`$ gate, which can be constructed using the coupling between qubits, reduces in terms of idempotents to $`1-E_{-}^1E_{-}^2+e^{i\theta }E_{-}^1E_{-}^2`$. Again using geometric algebra, this yields the following pulse sequence: $`B_{jk}`$ $`=`$ $`\begin{array}{c}\left(\pi \right)_\varphi ^j\left(\frac{\theta }{2\pi J_{jk}}\right)\left(\pi \right)_\varphi ^j\\ \\ \left(\frac{\pi }{2}\right)_y^{j,k}\left(\frac{\theta }{2}\right)_x^{j,k}\left(\frac{\pi }{2}\right)_y^{j,k}.\end{array}`$ (10) The notation $`\theta /2\pi J_{jk}`$ represents an interval of spin evolution under the coupling Hamiltonian. The final three pulses effectively perform a rotation around the $`z`$-axis. These pulses are not necessary, however, since the same effect may be achieved by rotating the prior pulses of the experiment. The complete pulse program is the combination of the $`A_j`$ and $`B_{jk}`$ gates described above. In this implementation, the necessity of performing a swap gate has been removed by reordering the bits at the appropriate interval. 
The complete pulse program is $`QFT_3`$ $`=`$ $`\begin{array}{c}\left(\frac{\pi }{2}\right)_{sin(\frac{3\pi }{8})x+cos(\frac{3\pi }{8})y}^1\left(\pi \right)_x^2\\ \\ \left(\frac{1}{8J_{12}}\right)\left(\pi \right)_x^3\left(\frac{1}{8J_{12}}\right)\left(\pi \right)_x^2\\ \\ \left(\frac{\pi }{2}\right)_{\frac{x+y}{\sqrt{2}}}^2\left(\frac{1}{16J_{13}}\right)\left(\pi \right)_x^2\\ \\ \left(\frac{1}{16J_{13}}\right)\left(\pi \right)_x^2\left(\frac{1}{8J_{23}}\right)\left(\pi \right)_x^1\\ \\ \left(\frac{1}{8J_{23}}\right)\left(\pi \right)_x^2\left(\pi \right)_x^1\left(\pi \right)_x^2\\ \\ \left(\pi \right)_x^3\left(\pi \right)_x^2\left(\frac{\pi }{2}\right)_y^3\left(\pi \right)_x^2.\end{array}`$ (11) This sequence includes a number of $`\left(\pi \right)`$ pulses to refocus couplings during the intervals in which they should be inactive. The pulse sequence takes advantage of knowledge of the starting state of the system at the beginning and end of the program by replacing Hadamard transforms with $`\frac{\pi }{2}`$ pulses. In the middle of the sequence the full Hadamard was indeed used. Figure 1 shows selected theoretical and experimental spectra following the quantum Fourier transform of the state $`I_z^1+I_z^2+I_z^3`$ on the three qubit NMR quantum computer. The fidelity of the QFT calculated using the measure

$$F=\frac{1}{2}+\frac{1}{2}\frac{Tr(\rho _{theory}\rho _{exp})}{\sqrt{Tr(\rho _{theory}^2)}\sqrt{Tr(\rho _{exp}^2)}}$$ (12)

is 87%. Here $`\rho `$ is the density matrix minus the part that is proportional to the identity (in NMR, this is called the 'reduced' density matrix; it should not be confused with the reduced density matrix obtained by partially tracing the density matrix of a composite quantum system over some of its subsystems). This measure reflects both imperfections in the applied pulses and delays, and decoherence. To a first approximation, decoherence during the course of the QFT attenuates the entire density matrix. This is shown in figure 2. Therefore, we can approximately separate out the errors caused by experimental imperfections by renormalizing $`\rho _{exp}`$ to its attenuated average. Using this, the fidelity of the operations themselves is above 98% over the 6 gates in (7). The overall fidelity of 87% corresponds to a fidelity of about 97.7% per gate, i.e., an error rate of roughly 2.3% per gate over the six gates, which does not attain the error rate of $`10^{-4}`$ required for robust quantum computation . These errors arise primarily from spatial inhomogeneities in the radio frequency fields, which we believe can be improved. In conclusion, using NMR, the QFT has been implemented on a three bit quantum system and the fidelity with which we can transform an initially diagonal state has been measured. Although the fidelity does not reach that required for fault tolerant computing, it is easily high enough to permit studies on small quantum systems, including quantum simulations. A particularly straightforward use of the QFT is in quantum chaos: as Balazs and Voros pointed out, a simple version of the quantum baker's map can be performed by QFTs, and Schack has shown how such a quantum map might be realized on a quantum computer . The authors thank S. S. Somaroo and C. H. Tseng for helpful discussions. This work was supported by DARPA.
# Low mass lepton pair production in hadron collisions

## 1 INTRODUCTION

The production of lepton pairs in hadron collisions, $`h_1h_2\to \gamma ^{(*)}X;\gamma ^{*}\to l\overline{l}`$, proceeds through an intermediate virtual photon and its subsequent leptonic decay. Traditionally, interest in this Drell-Yan process has concentrated on lepton pairs with large mass $`Q`$, which allows for the application of perturbative QCD and the extraction of the antiquark density in the proton . Prompt photon production, $`h_1h_2\to \gamma X`$, can be calculated in perturbative QCD if the transverse momentum $`Q_T`$ of the photon is sufficiently large. This process then provides essential information on the gluon density in the proton at large $`x`$ . Unfortunately, it suffers from considerable fragmentation, isolation, and intrinsic transverse momentum uncertainties. Alternatively, the gluon density can be constrained from the production of jets with large transverse momentum at hadron colliders , which however suffers from ambiguous information coming from different experiments and colliders. In this paper we demonstrate that, like prompt photon production, lepton pair production is dominated by quark-gluon scattering in the region $`Q_T>Q/2`$. This leads to sensitivity to the gluon density in kinematical regimes that are accessible both at collider and fixed target experiments, while eliminating the theoretical and experimental uncertainties. In Sec. 2, we briefly discuss the relationship between virtual and real photon production in hadron collisions in next-to-leading order QCD. In Sec. 3 we present our numerical results, and Sec. 4 contains a summary.

## 2 NEXT-TO-LEADING ORDER QCD FORMALISM

In leading order (LO) QCD, two partonic subprocesses contribute to the production of virtual and real photons with non-zero transverse momentum: $`q\overline{q}\to \gamma ^{(*)}g`$ and $`qg\to \gamma ^{(*)}q`$. The cross section for lepton pair production is related to the cross section for virtual photon production through the leptonic branching ratio of the virtual photon, $`\alpha /(3\pi Q^2)`$. The virtual photon cross section reduces to the real photon cross section in the limit $`Q^2\to 0`$. The next-to-leading order (NLO) QCD corrections arise from virtual one-loop diagrams interfering with the LO diagrams and from real emission diagrams. At this order, processes with incident gluon pairs $`(gg)`$, quark pairs $`(qq)`$, and non-factorizable quark-antiquark $`(q\overline{q}_2)`$ processes contribute also. Singular contributions are regulated in $`n=4-2ϵ`$ dimensions and removed through $`\overline{\mathrm{MS}}`$ renormalization, factorization, or cancellation between virtual and real contributions. An important difference between virtual and real photon production arises when a quark emits a collinear photon. Whereas the collinear emission of a real photon leads to a $`1/ϵ`$ singularity that has to be factorized into a fragmentation function, the collinear emission of a virtual photon gives a finite logarithmic contribution, since it is regulated naturally by the photon virtuality $`Q`$. In the limit $`Q^2\to 0`$ the NLO virtual photon cross section reduces to the real photon cross section if this logarithm is replaced by a $`1/ϵ`$ pole. A more detailed discussion can be found in . The situation is completely analogous to hard photoproduction, where the photon participates in the scattering in the initial state instead of the final state. 
For real photons, one encounters an initial-state singularity that is factorized into a photon structure function. For virtual photons, this singularity is replaced by a logarithmic dependence on the photon virtuality $`Q`$ .

## 3 NUMERICAL RESULTS

In this section we present numerical results for the production of lepton pairs in $`p\overline{p}`$ collisions at the Tevatron with center-of-mass energies $`\sqrt{S}=1.8`$ and 2.0 TeV and in proton-deuterium collisions in fixed target experiments with $`\sqrt{S}=38.8`$ GeV. We analyze the invariant cross section $`Ed^3\sigma /dp^3`$ averaged over the rapidity interval -1.0 $`<y<`$ 1.0 at the Tevatron and averaged over the scaled longitudinal momentum interval 0.1 $`<x_F<`$ 0.3 at fixed target experiments. We integrate the cross section over various intervals of $`Q`$ and plot it as a function of the transverse momentum $`Q_T`$. Our predictions are based on a NLO QCD calculation and are evaluated in the $`\overline{\mathrm{MS}}`$ renormalization scheme. The renormalization and factorization scales are set to $`\mu =\mu _f=\sqrt{Q^2+Q_T^2}`$. If not stated otherwise, we use the CTEQ4M parton distributions and the corresponding value of $`\mathrm{\Lambda }`$ in the two-loop expression of $`\alpha _s`$ with four flavors (five if $`\mu >m_b`$). The Drell-Yan factor $`\alpha /(3\pi Q^2)`$ for the decay of the virtual photon into a lepton pair is included in all numerical results. In Fig. 1 we display the NLO QCD cross section for lepton pair production at the Tevatron at $`\sqrt{S}=1.8`$ TeV as a function of $`Q_T`$ for four regions of $`Q`$. The regions of $`Q`$ have been chosen carefully to avoid resonances, i.e. between the $`\rho `$ and the $`J/\psi `$ resonances, between the $`J/\psi `$ and the $`\mathrm{{\rm Y}}`$ resonances, above the $`\mathrm{{\rm Y}}`$'s, and a high mass region. The cross section falls both with the mass of the lepton pair $`Q`$ and, more steeply, with its transverse momentum $`Q_T`$. Unfortunately, no data are available yet from the CDF and D0 experiments. However, data exist for prompt photon production out to $`Q_T\simeq 100`$ GeV, where the cross section is about $`10^{-3}`$ pb/GeV<sup>2</sup>. It should therefore be possible to analyze Run I data for lepton pair production up to at least $`Q_T\simeq 30`$ GeV, where one can probe the parton densities in the proton up to $`x_T=2Q_T/\sqrt{S}\simeq 0.03`$. The UA1 collaboration measured the transverse momentum distribution of lepton pairs at $`\sqrt{S}=630`$ GeV up to $`x_T=0.13`$ , and their data agree well with our theoretical results . The fractional contributions from the $`qg`$ and $`q\overline{q}`$ subprocesses up through NLO are shown in Fig. 2. It is evident from Fig. 2 that the $`qg`$ subprocess is the most important subprocess as long as $`Q_T>Q/2`$. The dominance of the $`qg`$ subprocess diminishes somewhat with $`Q`$, dropping from over 80 % for the lowest values of $`Q`$ to about 70 % at its maximum for $`Q\simeq 30`$ GeV. In addition, for very large $`Q_T`$, the significant luminosity associated with the valence dominated $`\overline{q}`$ density in $`p\overline{p}`$ reactions begins to raise the fraction of the cross section attributed to the $`q\overline{q}`$ subprocesses. Data obtained by the Fermilab E772 collaboration from an 800 GeV proton beam incident on a deuterium target are shown in Fig. 3 along with theoretical calculations. For our analysis we have chosen a lepton pair mass region between the $`J/\psi `$ and $`\mathrm{{\rm Y}}`$ resonances. 
The solid line shows the purely perturbative NLO expectation. The transition to low $`Q_T`$ can be described by the soft-gluon resummation formalism and is shown as the dashed curve in Fig. 3 . The resummed result can be expanded in a power series in $`\alpha _s`$ asymptotically around $`Q_T=0`$. Its NLO component (dotted curve) can then be matched to the perturbative result (dot-dashed curve) . From Fig. 3 it becomes clear that resummation is not needed and fixed order perturbation theory can be trusted when $`Q_T>Q/2`$. Unfortunately, the data from E772 do not extend into this region. However, the cross section should be measurable in forthcoming experiments down to $`10^{-3}`$ pb/GeV<sup>2</sup>, i.e. out to at least $`Q_T\simeq 6`$ GeV or $`x_T\simeq 0.31`$, where the gluon density is poorly constrained now. In Fig. 4 we demonstrate that in fixed target experiments also the lepton pair cross section is dominated by quark-gluon scattering at the level of 80 % once $`Q_T\gtrsim Q`$. The results in Fig. 4 also prove that subprocesses other than those initiated by the $`q\overline{q}`$ and $`qg`$ initial channels are of negligible import. We will now turn to a previously unpublished study of the sensitivity of collider and fixed target experiments to the gluon density in the proton. The full uncertainty in the gluon density is not known. Here we estimate this uncertainty from the variation of different recent parametrizations. We choose the latest global fit by the CTEQ collaboration (5M) as our point of reference and compare it to their preceding analysis (4M ) and to a fit with a higher gluon density (5HJ) intended to describe the CDF (and D0) jet data at large transverse momentum. We also compare to global fits by MRST , who provide three different sets with a central, higher, and lower gluon density, and to GRV98 (in this set a purely perturbative generation of heavy flavors (charm and bottom) is assumed; since we are working in a massless approach, we resort to the GRV92 parametrization for the charm contribution and assume the bottom contribution to be negligible). For this study we update the Tevatron center-of-mass energy to Run II conditions ($`\sqrt{S}=2.0`$ TeV), which increases the invariant cross section for the production of lepton pairs with mass 5 GeV $`<Q<`$ 6 GeV by 5 % at low $`Q_T\simeq 1`$ GeV and 20 % at high $`Q_T\simeq 100`$ GeV. In Fig. 5 we plot the cross section for lepton pairs between the $`J/\psi `$ and $`\mathrm{{\rm Y}}`$ resonances at Run II of the Tevatron, which should be measurable up to at least $`Q_T\simeq 30`$ GeV ($`x_T\simeq 0.03`$). For the CTEQ parametrizations we find that the cross section increases from 4M to 5M by 2.5 % ($`Q_T=30`$ GeV) to 5 % ($`Q_T=10`$ GeV) and from 5M to 5HJ by 1 % in the whole $`Q_T`$-range. The largest differences from CTEQ5M are obtained with GRV98 at low $`Q_T`$ (minus 10 %) and with MRST(g$`\downarrow `$) at large $`Q_T`$ (minus 7 %). A similar analysis for conditions as in Fermilab's E772 experiment is shown in Fig. 6. In fixed target experiments one probes substantially larger regions of $`x_T`$ than in collider experiments. Therefore one expects a much larger sensitivity to the gluon distribution in the proton. Indeed we find that CTEQ5HJ increases the cross section by 7 % (26 %) w.r.t. CTEQ5M at $`Q_T=3`$ GeV ($`Q_T=6`$ GeV) and even by 134 % at $`Q_T=10`$ GeV. For MRST(g$`\downarrow `$) the CTEQ5M cross section drops by 17 %, 40 %, and 59 % at these three values of $`Q_T`$. 
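The kinematic reach quoted in this section follows directly from the definition $`x_T=2Q_T/\sqrt{S}`$; a trivial check with the numbers used above:

```python
def x_T(Q_T, sqrt_S):
    """Scaled transverse momentum probed: x_T = 2 Q_T / sqrt(S) (GeV units)."""
    return 2.0 * Q_T / sqrt_S

print(x_T(30.0, 2000.0))   # ~0.03: Tevatron Run II, Q_T ~ 30 GeV
print(x_T(6.0, 38.8))      # ~0.31: fixed target, Q_T ~ 6 GeV
```

This is why fixed target experiments, despite their much lower $`\sqrt{S}`$, probe the gluon density at substantially larger $`x_T`$.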
## 4 SUMMARY In summary, we have demonstrated that the production of Drell-Yan pairs with low mass and large transverse momentum is dominated by gluon initiated subprocesses. In contrast to prompt photon production, uncertainties coming from fragmentation, isolation, and intrinsic transverse momentum are absent. The hadroproduction of low mass lepton pairs is therefore an advantageous source of information on the gluon density in the proton at large $`x`$ in collider experiments and even more in fixed target experiments. Massive lepton pair production data could provide new insights into the parametrization and size of the gluon density. ## Acknowledgement It is a pleasure to thank L.E. Gordon for his collaboration.
# Theory of Two Dimensional Mean Field Electron Magnetohydrodynamics

## I Introduction

The transport and amplification properties of a large scale magnetic field remain an area of active investigation. This is primarily due to their relevance in a variety of physical phenomena. For example, the existence of magnetic fields in the universe is understood on the basis of amplification by some kind of dynamo mechanism. Another interesting phenomenon is the release of high energy bursts in solar flares and the like, believed to occur as a result of the reconnection of magnetic fields, which can happen in the presence of finite diffusivity. However, there is only modest quantitative understanding of these processes. The amount of magnetic energy released by reconnection depends on the value of the diffusivity, which turns out to be too small to provide an explanation of the vast energy released in these bursts. There have therefore been attempts to understand these phenomena on the basis of turbulent magnetic field diffusivity, which is directly related to the question of the transport of a large scale magnetic field in the presence of turbulence. Most theories put forward in these areas are cast within the magnetohydrodynamic (MHD) system. Lately, however, there has been some work which makes use of models pertaining to faster time scales. It is on this regime that we focus here. In the present work we address the question of the diffusion of a long scale magnetic field in the presence of small scale turbulent magnetic fluctuations occurring at time scales which are faster than the ion response time. For such phenomena the evolution of the magnetic field is governed by the electron flow velocity. The ions being stationary, the flow velocity of the electrons determines the current and is thus directly related to the curl of the magnetic field. Thus, unlike in MHD, in this approximation, hereafter referred to as the Electron Magnetohydrodynamic (EMHD) approximation, the magnetic field itself evolves through an explicitly nonlinear equation. This should be contrasted with the MHD model, in which the nonlinear effects creep in indirectly through the Lorentz force operating on the plasma flow. The paper is organized as follows. In section II we present the salient features of the Electron Magnetohydrodynamics (EMHD) model. In section III we study the evolution of the mean magnetic field in two dimensions within the framework of the EMHD description. In two dimensions there is no amplification of the large scale field; it can only diffuse. We obtain an expression for the effective diffusion coefficient and show that it is suppressed relative to the naive quasilinear estimates. For complete whistlerization, i.e. when the turbulence is composed only of randomly interacting whistler waves (whistler modes being the normal modes of the EMHD model), we show that there is no turbulent contribution to the diffusivity. This then raises the pertinent question of the constituents of the turbulent state in this particular model: does the turbulent state consist entirely of randomly interacting whistler waves, is it merely a collection of random eddies, or is it a combination of both whistlers and eddies? We address these questions in section IV by numerically simulating decaying turbulence of the EMHD equations. The initial condition is chosen to be random, i.e. there are no whistlers to begin with. The study of the final state reveals evidence of whistlerization. 
In section V we numerically investigate the problem of diffusion; the results show suppression of the magnetic field diffusivity, essentially confirming our analytical findings of section III. Section VI contains the discussion and conclusion.

## II The Model

Electron Magnetohydrodynamics (EMHD) is the theory of the motion of a magnetized electron fluid in the presence of self consistent and external electric and magnetic fields. Such a theory is applicable when the time scales of interest are fast (e.g. lying between the electron and ion gyrofrequencies), so that the ions, being massive and unmagnetized, play a passive role as a neutralizing background, and the dominant role in the dynamics is played by the strongly magnetized electron species. Phenomena having such time scales are often encountered in a number of plasma operated devices (e.g. switches, focusing devices, fast Z-pinches, etc.). Moreover, the EMHD paradigm is frequently invoked in the description of collisionless magnetic reconnection as well as in certain problems related to the ionosphere. The entire whistler physics is premised on the EMHD regime of dynamics. The EMHD model is obtained by using (i) the electron momentum equation, (ii) the current expressed in terms of the electron velocity, $`\vec{J}=-n_ee\vec{v}_e`$, as the ions are stationary at the fast time scales depicted by the model; and (iii) Ampere's law, where the displacement current is ignored under the assumption ($`\omega <<\omega _{pe}^2/\omega _{ce}`$). The magnetic field then evolves through the following equation

$$\frac{\partial }{\partial t}(\nabla \times \vec{P})=\nabla \times (\vec{v}_e\times (\nabla \times \vec{P}))-m_e\nu \nabla \times \vec{v}_e$$ (1)

Here $`m_e`$ and $`\vec{v}_e`$ are the electron mass and velocity respectively, $`\vec{P}`$ is the canonical momentum defined as $`\vec{P}=m_e\vec{v}_e-e\vec{A}/c`$ ($`\vec{A}`$ being the vector potential of the magnetic field), and $`\nu `$ represents the electron ion collision frequency. Using the relation between the current and the electron velocity we obtain $`\nabla \times \vec{P}=e(d_e^2\nabla ^2\vec{B}-\vec{B})/c`$, where $`d_e=c/\omega _{pe}`$ is the skin depth. It is clear from Eq. 1 that $`(d_e^2\nabla ^2\vec{B}-\vec{B})`$ is frozen into the electron fluid flow. In the limit when the electron inertia can be ignored, it is simply the magnetic field which is carried along with the electron fluid. Since $`\vec{v}_e\propto \nabla \times \vec{B}`$, the evolution equation for the magnetic field is nonlinear in $`\vec{B}`$. This should be contrasted with the MHD model, where the magnetic field evolution is governed by an equation which is intrinsically linear in $`\vec{B}`$. In MHD, the nonlinear effects then arise as a result of the back reaction on the fluid flow through the Lorentz force term. Basically, in EMHD $`\vec{v}_e\propto \nabla \times \vec{B}`$, and so the flow is directly related to the instantaneous magnetic field; whereas in MHD the evolution of the flow velocity $`\vec{v}`$ depends on the magnetic field through the Lorentz force term, and hence $`\vec{v}`$ has a memory of the past magnetic field configuration. The MHD model is applicable for scale lengths which are longer than the ion skin depth. EMHD, on the other hand, depicts phenomena having scale lengths shorter than the ion skin depth. Another distinction from MHD arises from the presence of an intrinsic scale, viz. 
the electron skin depth $`d_e=c/\omega _{pe}`$, in the EMHD model, which separates two regimes: one in which electron inertia is important and the other in which electron inertia plays no role. The character of the EMHD equation changes in these two disparate regimes of scale lengths. In two dimensions (i.e. when the variations are confined to the $`xy`$ plane) Eq. 1 can be simplified and cast in terms of two scalar variables $`\psi `$ and $`b`$, which define the total magnetic field through the expression $`\vec{B}=\widehat{z}\times \nabla \psi +b\widehat{z}`$. The following coupled set then represents the evolution of these scalar variables

$$\frac{\partial }{\partial t}(\psi -\nabla ^2\psi )+\widehat{z}\times \nabla b\cdot \nabla (\psi -\nabla ^2\psi )=\eta \nabla ^2\psi $$ (2)

$$\frac{\partial }{\partial t}(b-\nabla ^2b)-\widehat{z}\times \nabla b\cdot \nabla \nabla ^2b+\widehat{z}\times \nabla \psi \cdot \nabla \nabla ^2\psi =\eta \nabla ^2b$$ (3)

Here we have chosen to normalize length by the electron skin depth $`d_e=c/\omega _{pe}`$, the magnetic field by a typical amplitude $`B_0`$, and time by the corresponding electron gyrofrequency. In the nonresistive limit the above coupled equations support the following quadratic invariants:

$$E=\frac{1}{2}\int [(\nabla \psi )^2+b^2+(\nabla ^2\psi )^2+(\nabla b)^2]dxdy$$

which represents the total energy (the sum of the magnetic and the kinetic energy),

$$H=\int (\psi -\nabla ^2\psi )^2dxdy$$

the mean square magnetic potential, and

$$K=\int (\psi -\nabla ^2\psi )(b-\nabla ^2b)dxdy$$

the cross helicity. The fields $`b`$ and $`\psi `$ are chosen to be uncorrelated initially in our numerical simulations. On the basis of the existence of these quadratic invariants it can be inferred that the mean square magnetic potential cascades towards longer scales. We will make use of this later in our derivation of the turbulent diffusivity. Linearizing the evolution equations in the presence of a uniform magnetic field $`B_0`$ pointing in the $`y`$ direction leads to the following dispersion relation

$$\omega =\pm \frac{kk_yd_e^2\omega _{ce}}{(1+k^2d_e^2)}$$

for whistlers, the normal modes of oscillation in the EMHD regime. It is clear from the dispersion relation that the propagation of these waves is preferentially parallel to the magnetic field. Furthermore, whistler wave excitation leads to a coupling of the form $`b_k=\pm k\psi _k`$ between the two perturbed fields. This relation between the perturbed fields then leads to an equipartition between the energy associated with the poloidal and the axial fields. An initially unequal distribution of energy in the poloidal and axial fields thus ultimately tends to redistribute and approach equipartition as a result of the whistlerization of the spectrum. It is observed that, time asymptotically, the turbulent state in EMHD consists of a gas of whistlers interspersed with a collection of random eddies. There has been considerable interest lately in understanding features of EMHD turbulence, both in two and three dimensions, in terms of power spectra and the cascade properties of the quadratic invariants . Our attempt here, however, is to understand the role of EMHD turbulence in determining the diffusion of a long scale magnetic field.

## III Suppression of turbulent magnetic diffusivity in 2D

In this section we concentrate on the transport of a magnetic field in two dimensions. In 2D the magnetic field can only diffuse; thus our endeavour here is to estimate the effective magnetic diffusivity in the presence of turbulence. We will concentrate here on turbulent scale lengths longer than the electron skin depth. In this regime of scale lengths, i.e. 
for $`kd_e<<1`$, the electron inertia effects are unimportant and, as mentioned in the earlier section, the magnetic field lines are frozen into the electron fluid flow. Thus turbulence in the electron velocity leads to the diffusion of magnetic flux. This diffusion of magnetic field lines, arising as a result of turbulence and not due to resistivity, is termed the turbulent diffusivity of the magnetic field. The effective turbulent diffusivity thus depends on the electron fluid flow velocity. A naive quasilinear estimate would then predict a magnetic field diffusivity $`\beta \sim \tau \langle v_e^2\rangle \sim \tau \langle (\nabla b)^2\rangle `$, where $`\tau `$ is some averaged correlation time for the electron flow velocity $`v_e=\widehat{z}\times \nabla b`$ in the $`xy`$ plane, and $`b`$ is the $`z`$ component of the turbulent small scale magnetic field. This suggests that the magnetic field diffusion in the $`xy`$ plane is solely determined by the turbulent properties of the $`z`$ (i.e. the axial) component of the magnetic field. However, this does not represent the complete picture. We will now show that the presence of small scale turbulence in the poloidal magnetic field results in the suppression of such estimates of the diffusivity. This is similar to the work carried out by Gruzinov , Cattaneo and others in the context of MHD. In MHD the magnetic field lines are tied to the plasma flow velocity. It is observed that the magnetic field diffusivity is suppressed relative to the quasilinear estimates given solely in terms of the plasma flow velocity. The presence of small scale turbulence in the magnetic field, which opposes the fluid motion through the $`\vec{J}\times \vec{B}`$ backreaction, is found to be responsible for such a suppression. We choose to represent the small scale turbulence in the fields $`b`$ and $`\psi `$ as

$$b(x,t)=\underset{k}{}b_k(t)exp(i\vec{k}\cdot \vec{r})$$

$$\psi (x,t)=\underset{k}{}\psi _k(t)exp(i\vec{k}\cdot \vec{r})$$

In addition to this we assume the existence of a large scale magnetic field pointing along the $`y`$ direction, characterized by a magnetic stream function of the following form

$$\psi _0=\psi _qexp(iq_xx)+c.c.$$

This magnetic field has a scale length $`q^{-1}>>k^{-1}`$ and hence, when considering averaging over the scale of the turbulence, this field can essentially be treated as a constant in space. We are interested in understanding the process of diffusion of this long scale field in the presence of small scale turbulence in the variables $`b`$ and $`\psi `$, i.e. we seek an equation of the kind

$$\frac{\partial \psi _q}{\partial t}=-\beta q_x^2\psi _q$$ (4)

and are interested in determining $`\beta `$ in terms of the properties of the small scale turbulence. The $`q^{th}`$ Fourier component of Eq. 2 yields

$$(1+q_x^2)\frac{d\psi _q}{dt}+\langle \widehat{z}\times \nabla b\cdot \nabla (\psi -\nabla ^2\psi )\rangle _q=-\eta q_x^2\psi _q$$ (5)

The second term in the equation signifies the generation of the $`q^{th}`$ mode as the result of nonlinear coupling between the high $`k`$ turbulent fields. The angular brackets indicate the ensemble average. The above equation can be rewritten as

$$(1+q_x^2)\frac{d\psi _q}{dt}+i\vec{q}\cdot \langle \widehat{z}\times \nabla b(\psi -\nabla ^2\psi )\rangle _q=-\eta q_x^2\psi _q$$

We denote $`\langle \widehat{z}\times \nabla b(\psi -\nabla ^2\psi )\rangle _q`$ by $`\vec{\mathrm{\Gamma }}`$, representing the nonlinear flux. Since $`q_y=0`$, $`i\vec{q}\cdot \vec{\mathrm{\Gamma }}=iq_x\mathrm{\Gamma }_x`$. The suffix $`x`$ stands for the $`x`$ component. 
Now

$$\mathrm{\Gamma }_x=-\langle \frac{\partial b}{\partial y}(\psi -\nabla ^2\psi )\rangle _q=-\underset{k}{}ik_y(1+k_1^2)\langle b_k\psi _{k_1}\rangle $$

where $`k_1=q-k`$. To estimate the correlation $`\langle b_k\psi _{k_1}\rangle `$ we make use of the quasilinear approximation, where each of these fields is generated from the other through the interaction with the large scale field. Thus we can write

$$\langle b_k\psi _{k_1}\rangle =\langle b_k\delta \psi _{k_1}\rangle +\langle \delta b_k\psi _{k_1}\rangle ,$$

where it is understood that $`\delta \psi _{k_1}`$ is the magnetic perturbation in the plane arising as the result of turbulent stretching of the mean magnetic field by the electron flow velocity $`\widehat{z}\times \vec{k}b_k`$, and $`\delta b_k`$ is the perturbation in the electron flow (viz. $`\widehat{z}\times \vec{k}\delta b_k`$) arising from the Lorentz force $`\widehat{z}k_1^2\psi _{k_1}\times \widehat{y}q_x\psi _q`$. It should be noted here that the first term corresponds to that derived from a kinematic treatment, wherein the back reaction of the magnetic field on the flow is not considered. The second term takes account of the back reaction of the magnetic field on the electron velocity. Thus, dropping the second term would be tantamount to a purely kinematic approximation. We will now show that the second term leads to a significant suppression of the estimates of the diffusivity obtained from the purely kinematic treatment. The equations for $`\delta b_k`$ and $`\delta \psi _{k_1}`$ are

$$(1+k_1^2)(i\omega _k+\delta \omega _k)\delta \psi _{k_1}=-\eta k_1^2\delta \psi _{k_1}-ik_yb_kiq_x(1+q^2)\psi _q$$

and

$$(1+k^2)(i\omega _k+\delta \omega _k)\delta b_k=-\eta k^2\delta b_k-ik_{1y}(k_1^2-q^2)\psi _{k_1}iq_x\psi _q$$

Here $`\omega `$ represents the linear frequency and $`\delta \omega `$ stands for the eddy decorrelation effect arising from the coherent mode coupling. Substituting the above expressions for $`\delta b_k`$ and $`\delta \psi _{k_1}`$, we obtain the following expression for the nonlinear flux

$$\mathrm{\Gamma }_x=\underset{k}{}\left(\tau _k(k_y^2b_k^2-k_{1y}^2k_1^2\psi _{k_1}^2)\right)iq_x\psi _q$$ (6)

where

$$\tau _k=\frac{1}{(1+k^2)(i\omega _k+\delta \omega _k)+\eta k^2}$$

Here $`\tau _k`$ represents the spectral correlation times of the turbulent fields. We have assumed that the turbulent scales are much longer than the electron skin depth (i.e. $`k<<1`$) in the above derivation. The evolution equation for $`\psi _q`$ under the approximation $`q<<k<<1`$ can then be written as

$$\frac{d\psi _q}{dt}=-q_x^2\left[\underset{k}{}\tau _kk_y^2(b_k^2-k^2\psi _k^2)\right]\psi _q-\eta q_x^2\psi _q$$ (7)

The factor inside the square bracket on the right hand side of the above equation represents the turbulent contribution to the diffusivity. It is made up of two parts. The first part, depending on $`k_y^2b_k^2`$, represents the kinematic contribution, and the second part arises as the result of small scale turbulence in the poloidal component of the magnetic field. It is clear that turbulence in the poloidal component of the magnetic field contributes towards suppressing the magnetic field diffusivity. It should be noted here that for complete whistlerization, the spectral components of the two fields would be related as $`b_k=\pm k\psi _k`$, for which the turbulent diffusivity vanishes exactly. In this extreme case, the diffusion of $`\psi _q`$ is determined by resistivity alone. It appears, then, that understanding the question of the whistlerization of the spectrum in the turbulent state is of paramount importance. We will study this issue in the next section. 
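Before turning to that issue, note that the vanishing of the bracket of Eq. (7) for a whistlerized spectrum is easy to verify numerically; the random test spectrum below is purely illustrative.

```python
import numpy as np

def turbulent_beta(k, ky, b_k, psi_k, tau):
    """Turbulent part of the diffusivity, the bracket of Eq. (7):
    sum over modes of tau_k * k_y^2 * (|b_k|^2 - k^2 |psi_k|^2)."""
    return np.sum(tau * ky ** 2 * (np.abs(b_k) ** 2 - k ** 2 * np.abs(psi_k) ** 2))

rng = np.random.default_rng(2)
kx, ky = rng.normal(size=(2, 200))
k = np.hypot(kx, ky)
psi_k = rng.normal(size=200) + 1j * rng.normal(size=200)
tau = np.ones(200)            # equal correlation times, as assumed later

print(turbulent_beta(k, ky, 2.0 * k * psi_k, psi_k, tau) > 0)  # eddy-dominated
print(turbulent_beta(k, ky, k * psi_k, psi_k, tau))            # whistlerized: 0.0
```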
We rewrite Eq. 7 as

$$\frac{d\psi _q}{dt}=-q_x^2\underset{k}{}\tau _k(\langle v_x^2\rangle _k-k^2\langle \stackrel{~}{B}_x^2\rangle _k)\psi _q-\eta q_x^2\psi _q$$ (8)

$$=-\frac{q_x^2}{2}\underset{k}{}\tau _k(\langle v^2\rangle _k-k^2\langle \stackrel{~}{B}^2\rangle _k)\psi _q-\eta q_x^2\psi _q$$ (9)

In the above expression $`\stackrel{~}{B}_x`$ is the $`x`$ component of the turbulent field. In writing the second equality we have assumed that the turbulence is isotropic. Thus we can write

$$\beta =\underset{k}{}\frac{\tau _k}{2}(\langle v^2\rangle _k-k^2\langle (\nabla \psi )^2\rangle _k)+\eta $$

The kinematic diffusivity $`\beta _0`$ would be just $`\beta _0=_k\tau _kv_k^2/2+\eta `$, dependent on the turbulent velocity alone. We can then express $`\beta `$ in terms of the kinematic diffusivity as $`\beta =\beta _0-_k\tau _kk^2\langle (\nabla \psi )^2\rangle _k/2`$. Following Gruzinov et al we assume an equivalence of correlation times (i.e. assume $`\tau _k=\tau `$ for each mode) and write $`\beta =\beta _0-\tau \langle k^2\rangle \langle (\nabla \psi )^2\rangle /2`$. To estimate $`\langle (\nabla \psi )^2\rangle `$ we use the stationarity of the mean square magnetic potential. This can be justified on the basis of the inverse cascade property of the mean square potential. At longer scales dissipation due to resistivity is small and the assumption of stationarity of the mean square potential is reasonably good. We multiply Eq. 2 by $`\psi `$ and take the ensemble average. This yields

$$\langle \psi \frac{d\psi }{dt}\rangle =\frac{1}{2}\langle \frac{d\psi ^2}{dt}\rangle =0$$

$$\langle \psi \widehat{z}\times \nabla b\cdot \nabla \psi \rangle =\frac{1}{2}\langle \widehat{z}\times \nabla b\cdot \nabla \psi ^2\rangle =0$$

and we thus obtain

$$\eta \langle (\nabla \psi )^2\rangle =B_0\langle \psi \frac{\partial b}{\partial y}\rangle =\beta B_0^2$$

Substituting for $`\langle (\nabla \psi )^2\rangle `$ and writing $`\tau /2`$ as $`\beta _0/\langle v^2\rangle =\beta _0/\langle (\nabla b)^2\rangle `$ we obtain

$$\beta =\frac{\beta _0}{1+\frac{\langle k^2\rangle \beta _0B_0^2}{\eta \langle (\nabla b)^2\rangle }}=\frac{\beta _0}{1+R_m\frac{\langle k^2\rangle B_0^2}{\langle v^2\rangle }}$$ (10)

Here $`R_m`$ is the magnetic Reynolds number. It is clear that for $`R_m>>1`$ the suppression of the magnetic field diffusivity occurs even when the turbulent velocity is larger than the effective whistler speed in the presence of the magnetic field $`B_0`$.

## IV Whistlerization

We have observed in the earlier section that for a turbulent state which is a collection of whistlers alone, the effective turbulent diffusivity goes to zero. Thus it is of significance to understand the whistlerization of the turbulent spectra. This is identical to studying the question of Alfvenization in the context of the MHD model. It is interesting to note, however, that while in the MHD model Alfvenization leads to an equipartition between the magnetic and the fluid energies, there can be no such equipartition between the magnetic and kinetic energies as a result of the whistlerization of the spectrum. Instead, the dominance of magnetic or kinetic energy depends on whether the typical scales of turbulence are longer or shorter than the electron skin depth, respectively. In this paper we have concentrated on the case where the turbulent scales are much longer than the electron skin depth. Thus the total energy is predominantly magnetic. Whistlerization of the spectrum then leads to an equipartition between the poloidal and the axial field energies. We seek to understand the question of whistlerization by carrying out numerical simulations. We evolve the two fields $`\psi `$ and $`b`$ through Eq. 2 and Eq. 3 respectively, using a fully de-aliased pseudospectral scheme. In this scheme the fields $`b`$ and $`\psi `$ are Fourier decomposed. 
Each of the Fourier modes is then evolved, the linear part exactly, whereas the nonlinear terms are calculated in real space and then Fourier transformed to $`k`$ space. This requires going back and forth between real space and $`k`$ space at each time step, for which Fast Fourier Transform (FFT) routines were used. The time stepping is done using a predictor corrector method with the mid-point leapfrog scheme. The simulation was carried out with a resolution of $`128\times 128`$ modes as well as at a higher resolution of $`256\times 256`$ modes. The initial spectrum of the two fields $`b`$ and $`\psi `$ was chosen to be concentrated in a band of scales, and the phases were taken to be random. The two fields were chosen to be entirely uncorrelated to begin with. In Fig. 1 we show a plot of $`b_k`$ vs. $`k\psi _k`$ for the initial spectrum. It is clear from the figure that the initial spectrum is totally different from a spectrum of whistler waves, which would have shown up in the figure as a straight line passing through the origin with unit slope, depicting the relationship $`b_k=k\psi _k`$ for whistlers. In Fig. 2 and Fig. 3 we plot $`b_k`$ vs. $`k\psi _k`$ for the evolved spectrum for $`B_0=0`$ and $`0.5`$ respectively. It is clear that most of the points now cluster close to the origin. When contrasted with the initial condition of Fig. 1, this suggests that the modes are trying to acquire the whistler wave relationship. The scatter in the plots indicates that both eddies and whistlers constitute the final state. Thus a quantitative assessment of the turbulent state as regards the whistlerization of the spectrum is required. For this purpose we introduce a variable

$$w_k=\frac{|b_k^2-k^2\psi _k^2|}{(b_k^2+k^2\psi _k^2)}$$ (11)

which essentially indicates the fractional deviation of the $`k^{th}`$ mode from being whistlerized. In Table I we list the fraction of modes in the spectrum for which $`w_k`$ is within a certain percentage.

TABLE - I: Fraction of modes whistlerized

| Permissible % deviation | Initial condition | Evolved state $`B_0=0`$ | Evolved state $`B_0=0.5`$ |
| --- | --- | --- | --- |
| 2.5 | 0 | 0.028 | 0.031 |
| 5 | 0 | 0.053 | 0.054 |
| 7.5 | 0 | 0.077 | 0.080 |
| 10 | 0 | 0.101 | 0.102 |

It is clear from Table I that the initial state has no modes whistlerized to within a deviation of even $`10\%`$, whereas in the evolved state a reasonable fraction of the modes is whistlerized to within a given deviation as measured by the parameter $`w_k`$. We also introduce an integral quantity signifying the overall whistlerization, $`w=\int w_kdk/\int dk`$. For a completely whistlerized spectrum the variable $`w`$ would take the value $`0`$, and the maximum value that $`w`$ can have is unity. For our initial spectrum $`w=0.9957`$; after evolution, (i) for $`B_0=0`$ (corresponding to Fig. 2), $`w=0.5020`$, and (ii) for $`B_0=0.5`$ (Fig. 3), $`w=0.4912`$. More detailed studies of this kind, addressing the evolution of whistlerization with time (e.g. by studying how $`w`$ evolves with time), its dependence on the external magnetic field, etc., are presently being carried out and will be presented in a subsequent publication. The question of Alfvenization of the spectrum in the context of MHD is also being pursued along similar lines and will be presented elsewhere. It is clear from our studies that the whistlerization of the spectrum is not complete. Random eddies are also present in the evolved spectrum. 
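The measure of Eq. (11) and the entries of Table I are straightforward to compute from the spectra; in the sketch below, the array layout and the synthetic test spectrum are our own illustration.

```python
import numpy as np

def whistlerization(b_k, psi_k, k, tol):
    """w_k of Eq. (11) per mode, the global measure w (a uniform-grid
    discretization of the integral), and the fraction of modes
    whistlerized to within `tol` (cf. Table I)."""
    bb = np.abs(b_k) ** 2
    pp = (k * np.abs(psi_k)) ** 2
    w_k = np.abs(bb - pp) / (bb + pp)
    return w_k, w_k.mean(), np.mean(w_k <= tol)

# synthetic check: a perfectly whistlerized spectrum (b_k = k psi_k)
k = np.linspace(0.1, 5.0, 100)
psi_k = np.random.default_rng(1).normal(size=100) + 0j
w_k, w, frac = whistlerization(k * psi_k, psi_k, k, tol=0.10)
print(w, frac)   # 0.0 and 1.0
```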
## V Numerical results on diffusion

We saw in section III that the final expression for the effective diffusivity was based on the assumption that the effective correlation times of the interacting modes are ultimately the same for each of them. Whether this really happens can only be verified by a fully nonlinear numerical simulation. We have carried out a set of numerical studies to investigate the question of magnetic diffusivity, and the results agree with the expression obtained earlier, thereby suggesting that the ansatz of local equivalence of correlation times is indeed correct. The numerical scheme is the same as outlined in the last section. However, in addition to evolving the two fields $`b`$ and $`\psi `$, a number of tracer particles ($`N=1600`$) were placed in the two dimensional spatial $`xy`$ region of integration. The particles were initially placed uniformly in the $`xy`$ plane, and were then evolved using the Lagrangian electron velocity at their location (viz. $`\widehat{z}\times \nabla b`$). Since the magnetic field lines are tied to the electron flow velocity, the behaviour of the magnetic field diffusivity can be discerned from the diffusion of these particles. Thus the averaged mean square displacement of the particles is used as a measure of the magnetic diffusivity (e.g. $`\beta =d<(\delta x)^2>/dt`$). This method of evaluating the tracer particle diffusivity to study the diffusion of magnetic fields in two dimensions has been adopted by Cattaneo in the context of the MHD model .
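The estimator $`\beta =d<(\delta x)^2>/dt`$ can be illustrated on synthetic data. A minimal sketch (Python; a plain random walk stands in for the tracer particles, and the diffusivity value is illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# A 1D random walk stands in for the x-displacements of the tracers
# (in the simulation the particles are advected by z-hat x grad b).
n_particles, n_steps, dt = 1600, 2000, 0.05
D = 0.3                                   # illustrative diffusion coefficient
x = np.zeros(n_particles)
t = np.arange(1, n_steps + 1) * dt
msd = np.empty(n_steps)
for i in range(n_steps):
    x += rng.normal(0.0, np.sqrt(2 * D * dt), n_particles)
    msd[i] = np.mean(x**2)

# beta = d<(dx)^2>/dt from a least-squares slope; for a 1D walk the
# slope recovers 2D, and the fit averages out the jagged curve.
beta = np.polyfit(t, msd, 1)[0]
print(f"measured slope = {beta:.3f}, expected 2D = {2 * D:.3f}")
```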
It is clear that for $`\eta \ne 0`$ and an initial distribution of power with random phases in the various modes of the two fields $`b`$ and $`\psi `$, Eq.2 and Eq.3 represent the case of 'decaying' EMHD turbulence. We refrain from using a random stirring force to achieve a stationary state, as this might lead to the particle displacement being dependent on the characteristics of the random stirrer. We therefore investigate the case of decaying turbulence and present results in the regime where the variations can be considered slow, i.e. we treat the problem in the quasistatic limit. The derivation of our main result, Eq.10, for the suppression of the magnetic field diffusivity was premised on the stationarity of the mean square magnetic potential. As discussed earlier, the cascade of the mean square magnetic potential towards longer scales ensures that such a state is attained. This can be clearly seen in Fig.4, which shows the evolution of the mean square magnetic potential with time: the percentage variation in $`\int \psi ^2dxdy`$ is small after $`t=200`$. In all our numerical runs we have restricted the calculations to the region where the percentage variation in $`\int \psi ^2dxdy`$ is below $`3\%`$. In Fig.5 we show the mean square displacement of the tracer particles with time. The thick line labelled 'kinematic' corresponds to the displacement when the uniform magnetic field $`B_0`$ in the $`y`$ direction is chosen to be zero. We designate the slope of this curve as $`\beta _{kin}`$, the kinematic diffusivity. The other two lines correspond to the longitudinal and the transverse displacements in the presence of a uniform magnetic field $`B_0=1`$ along the $`y`$ direction. It is clear from the figure that the slope of the kinematic curve is larger than that of the other two curves, which correspond to the displacement for finite $`B_0`$. This clearly indicates that the presence of $`B_0`$ suppresses the diffusivity, the conclusion we arrived at in the last section. However, the longitudinal displacements of the tracer particles are larger than their transverse displacements, suggesting that the assumption of isotropic turbulence is not valid in the presence of a uniform magnetic field. There have been indications in earlier works, both in MHD and in EMHD , that the presence of a strong magnetic field results in anisotropy of the spectrum. Our results, showing distinct values for the longitudinal and the transverse diffusivity, are further evidence for anisotropic turbulence in the presence of an external magnetic field. We next investigate whether the suppression of diffusivity with increasing magnetic field is indeed given by the kind of expression (Eq.10) that we obtained in the earlier section. For this purpose we carry out several numerical runs with varying strengths of the magnetic field. The diffusivity $`\beta `$ for each case is then given by the slope of the displacement curve of the tracer particles. It is clear from Fig.5 that the curve is jagged, signifying that $`\beta `$, the diffusivity estimated from the slope of such a curve, is a statistical quantity. We therefore take a time average $$\beta (t_2-t_1)=\frac{1}{t_2-t_1}\int _{t_1}^{t_2}\beta (t)dt$$ The choice of $`t_2-t_1`$ is such that in this duration the turbulence can essentially be treated as quasistationary. The averaging procedure eliminates the statistical fluctuations in the estimate of the diffusivity, and it is observed that with varying $`t_2`$ the slope asymptotes to a constant value in each case. In Fig.6 the $`y`$ axis represents $`\beta _{kin}/\beta `$ and along the $`x`$ axis we vary $`B_0^2`$. It is clear from the plot that the data points nicely fit a straight line, as our analytical expression predicts.
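The linearity test of Fig.6 amounts to fitting $`\beta _{kin}/\beta =1+cB_0^2`$. A minimal sketch (Python, on synthetic data; the slope $`c`$ stands in for $`R_m<k^2>/<v^2>`$ and is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data of the form implied by eq. (10), with 5% noise.
B0 = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
c_true = 3.0
ratio = (1.0 + c_true * B0**2) * (1.0 + 0.05 * rng.standard_normal(B0.size))

slope, intercept = np.polyfit(B0**2, ratio, 1)
print(f"fit: beta_kin/beta = {intercept:.2f} + {slope:.2f} B0^2")
```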
## VI Discussion

There are two important results in the present work. First, we have been able to show that the turbulent EMHD state shows tendencies towards whistlerization. The spectrum is only partially whistlerized, suggesting that both eddies and randomly interacting whistlers constitute the turbulent state. Secondly, we have carried out studies to understand the diffusion of the long-scale magnetic field in the context of Electron Magnetohydrodynamics. We have shown that the effective diffusivity due to turbulence in the electron flow velocity is suppressed in the presence of small scale turbulence of the magnetic field. For complete whistlerization the turbulent diffusivity vanishes; since the turbulent state is only partially whistlerized, the effective diffusivity does not vanish but is merely suppressed relative to pure kinematic estimates. We have confirmed these results numerically. The problem of diffusion of magnetic field lines is of great interest, as it provides a mechanism for the reconnection of magnetic field lines, which is thought to underlie the rapid release of energy in several solar and astrophysical contexts. Resistive diffusion turns out to be too small to explain the large amount of energy released. This has motivated efforts towards understanding the phenomenon of turbulent diffusivity of magnetic field lines. Earlier attempts were based on the magnetohydrodynamic approximation. However, it was shown theoretically by Gruzinov et al and numerically by Cattaneo that the turbulent diffusivity is suppressed in the presence of turbulence in the small scale magnetic field. Recently, attempts are being made to understand the reconnection phenomenon in the context of Electron Magnetohydrodynamics . Our work is relevant in this context, as we have shown here that naive quasilinear estimates do not provide a complete picture: the effective diffusivity is suppressed in the presence of turbulence in the magnetic field, with whistlerization of the spectrum playing an important role. Another issue that we would like to point out in this regard is the role of whistlers in EMHD turbulence. Some recent studies of EMHD turbulence categorically rule out the presence of the whistler effect in determining the energy transfer rate, on the basis of the numerically observed scaling of the power spectrum . We have, on the other hand, shown here that there is a tendency towards whistlerization of the turbulent spectra, and that this directly influences the effective diffusivity of the magnetic field lines. Invoking the Prandtl mixing length argument, which relates the transfer rate to the effective diffusivity, the question of whether the whistler effect is present remains debatable. Moreover, we also have evidence of anisotropization of the turbulent spectrum in the presence of an external magnetic field (this work will be presented elsewhere), which further points towards a subtle role of whistlers in governing EMHD turbulence. Acknowledgement: We would like to thank the San Diego Supercomputer Center, an NSF funded site of NPACI, for providing computing time on the T90 supercomputer for this work. This research was supported by DOE Grant No. DE-FG03-88ER-53275. FIGURE CAPTIONS * Plot of $`b_k`$ vs. $`k\psi _k`$ for the initial spectrum. * Plot of $`b_k`$ vs. $`k\psi _k`$ for the evolved spectrum when the external field $`B_0=0`$. * Plot of $`b_k`$ vs. $`k\psi _k`$ for the evolved spectrum when the external field $`B_0=0.5`$. * Evolution of the mean square magnetic potential. * Mean square displacement of the tracer particles with time. The thick line ('kinematic') shows the displacement in the absence of any external field; the lines labelled 'longitudinal' and 'transverse' show the mean square displacement along and across the external magnetic field $`B_0=1`$. * A plot of $`\beta _{kin}/\beta `$ vs. $`B_0^2`$.
# High-Redshift Galaxies: Their Predicted Size and Surface Brightness Distributions and Their Gravitational Lensing Probability ## 1 Introduction Current observations reveal the existence of galaxies out to redshifts as high as $`z\simeq 6.7`$ (Chen et al. 1999; Weymann et al. 1998; Dey et al. 1998; Spinrad et al. 1998; Hu, Cowie, & McMahon 1998) and bright quasars out to $`z\simeq 5`$ (Fan et al. 1999). Based on sources for which high resolution spectra are available, the intergalactic medium appears to be predominantly ionized at this epoch, implying the existence of ionizing sources at even higher redshifts (Madau 1999; Madau, Haardt, & Rees 1999; Haiman & Loeb 1998a,b; Gnedin & Ostriker 1997). Hierarchical cold dark matter (CDM) models for structure formation predict that the first baryonic objects appeared near the Jeans mass ($`10^5M_{\odot }`$) at redshifts as high as $`z\sim 30`$ (Haiman & Loeb 1998b, and references therein). The Next Generation Space Telescope (NGST), planned for launch in 2008, is expected to reach an imaging sensitivity better than 1 nJy in the infrared, which will allow it to detect galaxies or mini-quasars at $`z\gtrsim 10`$. In this paper we explore the ability of NGST to extend gravitational lensing studies well beyond their current limits. Due to the increased path length along the line-of-sight to the most distant sources, their probability for being lensed is expected to be the highest among all possible sources. Sources at $`z>10`$ will often be lensed by $`z>2`$ galaxies, whose masses can then be determined with lens modeling. Similarly, the shape distortions (or weak lensing) caused by foreground clusters of galaxies will be used to determine the mass distributions of less massive and higher redshift clusters than currently feasible. In addition to studying the lensing objects, observers will exploit the magnification of the sources to resolve and study more distant galaxies than would otherwise be possible. The probability for strong gravitational lensing depends on the abundance of lenses, their mass profiles, and the angular diameter distances among the source, the lens and the observer. The statistics of existing lens surveys have been used at low redshifts to constrain the cosmological constant (for the most detailed work see Kochanek 1996a, and references therein), although substantial uncertainties remain regarding the luminosity function of early-type galaxies and their dark matter content (Cheng & Krauss 1999; Chiba & Yoshii 1999). The properties of dark matter halos will be better probed in the future by individual as well as statistical studies of the large samples of lenses expected from quasar surveys such as the 2-Degree Field (Croom et al. 1998) and the Sloan Digital Sky Survey (SDSS Collaboration 1996). Given the early stage of observations of the redshift evolution of galaxies and their dark halos, we adopt a theoretical approach in our analysis and use the abundance of dark matter halos as predicted by the Press-Schechter (1974, hereafter PS) model. A similar approach has been used previously for calculating lensing statistics at low redshifts, with an emphasis on lenses with image separations above $`5^{\prime \prime }`$ (Narayan & White 1988; Kochanek 1995; Nakamura & Suto 1997) or on lensing rates of supernovae (Porciani & Madau 1999). Even when multiple images are not produced, the shape distortions caused by weak lensing can be used to determine the lensing mass distribution.
Large numbers of sources are required in order to average away the noise due to the intrinsic ellipticities of sources, and so the mass distribution can only be determined for the extended halos of rich clusters of galaxies (e.g., Hoekstra et al. 1998; Luppino & Kaiser 1997; Seitz et al. 1996) or statistically for galaxies (e.g., Brainerd et al. 1996; Hudson et al. 1998). Schneider & Kneib (1998) have noted that the ability of NGST to take deeper exposures than is possible with current instruments will increase the observed density of sources on the sky, particularly of those at high redshifts. The large increase might allow such applications as a detailed weak lensing mapping of substructure in clusters. Obviously, the source galaxies must be well resolved to allow an accurate shape measurement. Unfortunately, the characteristic galaxy size is expected to decrease with redshift for two reasons: (i) the mean density of collapsed objects scales as the density of the Universe at the collapse redshift, namely as $`(1+z)^3`$. Hence, objects of a given mass are expected to be more compact at high redshifts, and (ii) the characteristic mass of collapsed objects decreases with increasing redshift in the bottom-up CDM models of structure formation. In the following, we attempt to calculate the size distribution of high redshift sources. Aside from the obvious implications for weak lensing studies, the finite size of sources also has important implications for their detectability with NGST above the background noise of the sky brightness. The outline of the paper is as follows. In §2 we employ the PS halo abundance in several hierarchical models of structure formation to estimate the lensing rate of the high redshift objects that will be observed with NGST. This lensing rate has been calculated by Marri & Ferrara (1998) assuming point mass lenses. We use the simple but more realistic model of a singular isothermal sphere (SIS) profile for dark matter halos and obtain a substantially lower lensing rate. The formation of galactic disks and the distributions of their various properties have been studied by Dalcanton, Spergel, & Summers (1997) and Mo, Mao, & White (1998) in the framework of hierarchical models of structure formation. In §3 we apply their models to high redshift sources, and find the angular size distribution of galactic disks as a function of redshift. We use this distribution to predict whether observations with NGST will be significantly limited by confusion noise. We also calculate the redshift evolution of the mean surface brightness of disks. Finally, §4 summarizes the implications of our results. ## 2 Lensing Rate of High-Redshift Sources ### 2.1 Calculation Method We calculate the abundance of lenses based on the PS halo mass function. Relevant expressions for various CDM cosmologies are given, e.g., in Navarro, Frenk, & White (1997, hereafter NFW). The PS abundance agrees with N-body simulations on the mass scale of galaxy clusters, but may over-predict the abundance of galaxy halos at present by a factor of 1.5–2 (e.g., Gross et al. 1998). At higher redshifts, the characteristic mass scale of collapsed objects drops and the PS abundance becomes more accurate for the galaxy-size halos which dominate the lensing rate. The probability for producing multiple images of a source at a redshift $`z_S`$ due to gravitational lensing by SIS lenses is obtained by integrating over lens redshift $`z_L`$ the differential optical depth (Turner, Ostriker, & Gott 1984; Fukugita et al. 
1992; Peebles 1993) $$d\tau =16\pi ^3n\left(\frac{\sigma }{c}\right)^4(1+z_L)^3\left(\frac{D_{OL}D_{LS}}{D_{OS}}\right)^2\frac{cdt}{dz_L}dz_L,$$ (1) in terms of the comoving density of lenses $`n`$, velocity dispersion $`\sigma `$, look-back time $`t`$, and angular diameter distances $`D`$ among the observer, lens and source. More generally we replace $`n\sigma ^4`$ by $$n\sigma ^4=\int \frac{dn(M,z_L)}{dM}\sigma ^4(M,z_L)dM,$$ (2) where $`dn/dM`$ is the PS halo mass function. We assume that $`\sigma (M,z)=V_c(M,z)/\sqrt{2}`$, and we calculate the circular velocity $`V_c(M,z)`$ corresponding to a halo of a given mass as in NFW, except that we vary the virialization overdensity using the fitting formula of Bryan & Norman (1998). The lensing rate depends on a combination of redshift factors, as well as the evolution of halo abundance. At higher redshifts, halos of a given mass are more concentrated and have a higher $`\sigma `$, but lower-mass halos contain most of the mass in the Universe. When calculating the angular diameter distances we assume the standard distance formulas in a homogeneous universe. Inhomogeneities, however, cause a dispersion around the mean distance. The non-Gaussian, skewed distribution of distances in hierarchical models is best studied with numerical simulations (e.g., Wambsganss et al. 1998), and can in principle be included self-consistently in more elaborate calculations of the lensing statistics. We consider cosmological models with various values of the cosmological density parameters of matter and vacuum (cosmological constant), $`\mathrm{\Omega }_0`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$. In particular, we show results for $`\mathrm{\Lambda }`$CDM (with $`\mathrm{\Omega }_0=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0.7`$), OCDM (with $`\mathrm{\Omega }_0=0.3`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$), and SCDM (with $`\mathrm{\Omega }_0=1`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$). The models assume a Hubble constant $`h=0.5`$ if $`\mathrm{\Omega }_0=1`$ and $`h=0.7`$ otherwise (where $`H_0=100h\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$). They also assume a primordial scale invariant ($`n=1`$) power spectrum, normalized to the present cluster abundance, $`\sigma _8=0.5\mathrm{\Omega }_0^{-0.5}`$ (e.g., Pen 1998 and references therein), where $`\sigma _8`$ is the root-mean-square amplitude of mass fluctuations in spheres of radius $`8h^{-1}`$ Mpc.
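To make the calculation concrete, here is a minimal sketch (Python with astropy) that integrates eq. (1) over lens redshift for a non-evolving lens population; the density and velocity dispersion are the no-evolution values quoted in §2.2, so this illustrates the machinery rather than the full PS calculation of eq. (2):

```python
import numpy as np
import astropy.units as u
from astropy.constants import c
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # our LambdaCDM parameters

def dtau_dz(zl, zs, n=6.1e-3 * 0.7**3 / u.Mpc**3, sigma=225 * u.km / u.s):
    """Differential SIS optical depth of eq. (1) for a non-evolving
    comoving density n of lenses with velocity dispersion sigma."""
    Dol = cosmo.angular_diameter_distance(zl)
    Dos = cosmo.angular_diameter_distance(zs)
    Dls = cosmo.angular_diameter_distance_z1z2(zl, zs)
    cdt_dz = c / ((1 + zl) * cosmo.H(zl))          # |c dt/dz_L|
    val = (16 * np.pi**3 * n * (sigma / c)**4 * (1 + zl)**3
           * (Dol * Dls / Dos)**2 * cdt_dz)
    return val.to(u.dimensionless_unscaled).value

zs = 10.0
z = np.linspace(1e-3, zs - 1e-3, 500)
f = np.array([dtau_dz(zi, zs) for zi in z])
tau = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))  # trapezoidal rule
print(f"tau(z_s = {zs}) ~ {tau:.2e}")
```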
For a given source redshift, the distribution of lens redshifts is proportional to $`d\tau /dz_L`$, which is given by eqs. (1) and (2). In Figure 2 we show the probability density $`p(z_L)`$, defined so that the fraction of lenses between $`z_L`$ and $`z_L+dz_L`$ is $`p(z_L)dz_L`$. We assume PS halos in $`\mathrm{\Lambda }`$CDM (solid curves), OCDM (dashed curves), or SCDM (dotted curves). In each cosmological model, we consider a source at $`z_S=5`$ or at $`z_S=10`$, where the higher curve at $`z_L<1`$ corresponds to $`z_S=5`$. The curves peak around $`z_L=1`$ in the low-$`\mathrm{\Omega }`$ models and around $`z_L=0.7`$ in SCDM. In each case a significant fraction of the lenses are above redshift 2: $`20\%`$ for $`z_S=5`$ and $`36\%`$ for $`z_S=10`$ in $`\mathrm{\Lambda }`$CDM. The $`z_L>2`$ fractions are higher in OCDM ($`26\%`$ for $`z_S=5`$ and $`48\%`$ for $`z_S=10`$) and lower in SCDM ($`13\%`$ for $`z_S=5`$ and $`26\%`$ for $`z_S=10`$). The fraction of lensed sources in an actual survey is enhanced, relative to the above lensing probability, by the so-called magnification bias. At a given observed flux level, unlensed sources compete with lensed sources that are intrinsically fainter. Since fainter galaxies are more numerous, the fraction of lenses in an observed sample is larger than the optical depth discussed above. The magnification bias is calculated in detail below, but for the purpose of the discussion here we adopt a uniform enhancement factor of 5 when computing the lensing fraction. Our results for the different cosmological models are summarized in Table 1. At $`z_S=2`$ we compare the results from the hierarchical PS models to a no-evolution model of the lens population based on the local luminosity function of galaxies. The last column of Table 1 shows the results (with a magnification bias factor of 5), for example, for the parameters of the no-evolution model of Kochanek (1996a), who adopted a number density $`n_e=6.1\times 10^{-3}h^3`$ Mpc<sup>-3</sup> of E/S0 galaxies, a Schechter function slope $`\alpha =-1`$, a Faber-Jackson exponent $`\gamma =4`$, and a characteristic dark matter velocity dispersion $`\sigma _{*}=225\mathrm{km}\mathrm{s}^{-1}`$. The PS models yield a higher lensing fraction, although the difference is small for the $`\mathrm{\Lambda }`$CDM model. In all the PS models, the fraction of multiply imaged systems at $`z_S=10`$ is around $`5\%`$ if the magnification bias is 5. In the SIS model, the two images of a multiply-imaged source have a fixed angular separation, independent of source position, of $`\mathrm{\Delta }\theta =8\pi (\sigma /c)^2(D_{LS}/D_{OS})`$. The overall distribution of angular separations is shown in Figure 3 for $`\mathrm{\Lambda }`$CDM (solid curves), OCDM (dashed curves), and SCDM (dotted curves). The results are illustrated for $`z_S=2`$, 5, and 10 in each model. Image separations are typically reduced by a factor of 2–3 between $`z_S=2`$ and $`z_S=10`$, almost entirely due to the evolution of the lenses. With the NGST resolution of $`0.06^{\prime \prime }`$, a large majority ($`85\%`$) of lenses with $`\mathrm{\Delta }\theta <5^{\prime \prime }`$ can be resolved even for $`z_S=10`$. Note, however, that a ground-based survey with $`1^{\prime \prime }`$ seeing is likely to miss $`60\%`$ of these lenses. There is also a tail of lenses with separations $`\mathrm{\Delta }\theta >5^{\prime \prime }`$. These large separation lenses, and the observational difficulties in identifying them, have been previously explored both analytically (Narayan & White 1988; Kochanek 1995) and with numerical simulations (Cen et al. 1994; Wambsganss et al. 1995).
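For orientation, the SIS separation formula is simple enough to evaluate directly. A minimal sketch (Python with astropy; the lens redshift and velocity dispersion are illustrative):

```python
import numpy as np
import astropy.units as u
from astropy.constants import c
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def sis_separation(sigma, zl, zs):
    """Image separation of an SIS lens: 8 pi (sigma/c)^2 D_LS/D_OS."""
    dls = cosmo.angular_diameter_distance_z1z2(zl, zs)
    dos = cosmo.angular_diameter_distance(zs)
    dtheta = 8 * np.pi * (sigma / c)**2 * (dls / dos)
    return (dtheta.to(u.dimensionless_unscaled).value * u.rad).to(u.arcsec)

# Illustrative lens: sigma = 225 km/s at z_L = 1.
for zs in (2, 5, 10):
    print(f"z_S = {zs:2d}:  {sis_separation(225 * u.km / u.s, 1.0, zs):.2f}")
```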
The magnification bias is determined by the distribution of image magnifications and by the source luminosity function. Denoting the probability distribution of magnifications by $`q(A)`$ (independent of $`z_L`$ and $`z_S`$ for the SIS), and the number counts of sources per unit flux at a flux $`F`$ by $`dN/dF`$, the fraction of lensed sources at the observed flux $`F`$ is increased by a bias factor $$B=\int \frac{dN}{dF}|_{F/A}\left[\frac{dN}{dF}|_F\right]^{-1}q(A)\frac{dA}{A}.$$ (3) As noted above, NGST will resolve almost all double images and so we count them as two apparent sources. Thus we compute the bias factors separately for the two images, using $`q(A)=2/(A-1)^3`$ and $`A>2`$ for the brighter image, and $`q(A)=2/(A+1)^3`$ and $`A>0`$ for the fainter image. We then find the sum, which is dominated by the brighter image of the two. This sum includes the contributions to sources observed at a flux $`F`$ from all lensed images (each of which is either the bright image or the faint image of a lensed pair). The product of the resulting bias factor and the lensing optical depth yields the fraction of all apparent sources which are part of a lensed system. We note that any attempt to estimate the magnification bias of high-redshift sources is highly uncertain at present, due to several tentative assumptions about their characteristic mass-to-light ratio, star formation history, initial stellar mass function, dust extinction amplitude, and quasar formation history. Figure 4 illustrates the magnification bias for the NGST number count model of Haiman & Loeb (1998b; 1997), who assumed cosmological parameters nearly equivalent to our $`\mathrm{\Lambda }`$CDM model. Solid lines are for mini-quasars, dashed lines are for galaxies undergoing starbursts which convert $`20\%`$ of the gas of each halo into stars, and dotted lines are for starbursts which use only $`2\%`$ of the gas of each halo. For each type of source, we show separate curves corresponding to all sources at redshifts $`z_S>5`$ or to all sources at redshifts $`z_S>10`$. Although the $`z_S>10`$ number counts are smaller, they are steeper than the $`z_S>5`$ counts and produce a larger magnification bias. Similarly, for low $`(2\%)`$ star-formation efficiency, galaxies are detected only if they lie in relatively massive halos, which have a steeper mass function and thus a larger magnification bias than for a higher star-formation efficiency. These results indicate a magnification bias around 3–6, but this factor could be much higher if the actual number counts are only somewhat steeper than predicted by these models. Indeed, the number counts fall off roughly as power laws $`dN/dF_\nu \propto F_\nu ^{-\beta }`$ with $`\beta `$ 2–2.5, while for the SIS, the magnification bias diverges at the critical value $`\beta =3`$.
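To make eq. (3) concrete: for power-law counts $`dN/dF\propto F^{-\beta }`$ the ratio of counts in eq. (3) reduces to $`A^\beta `$, so the bias is $`B=\int A^{\beta -1}q(A)dA`$. A minimal sketch (Python) evaluating the bright- and faint-image contributions:

```python
import numpy as np
from scipy.integrate import quad

def bias(beta):
    """Magnification bias of eq. (3) for dN/dF ~ F^-beta, summing the
    SIS bright-image (q = 2/(A-1)^3, A > 2) and faint-image
    (q = 2/(A+1)^3, A > 0) contributions."""
    bright = quad(lambda A: A**(beta - 1) * 2 / (A - 1)**3, 2, np.inf)[0]
    faint = quad(lambda A: A**(beta - 1) * 2 / (A + 1)**3, 0, np.inf)[0]
    return bright + faint

# The bias grows quickly with the slope and diverges as beta -> 3;
# for beta = 2 the bright image alone contributes a factor of 3.
for beta in (2.0, 2.5, 2.9):
    print(f"beta = {beta}:  B = {bias(beta):.2f}")
```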
Using Figure 4, Table 1, and the number counts of Haiman & Loeb (1998b), the estimated number of sources (lensed sources) above 1 nJy per $`4^{\prime }\times 4^{\prime }`$ field of view is 90 (5) for $`z>10`$ quasars, 300 (12) for $`z>5`$ quasars, 400 (17) for $`z>10`$ galaxies with $`20\%`$ star-formation efficiency, $`10^4`$ (200) for $`z>5`$ galaxies with $`20\%`$ efficiency, 20 (1) for $`z>10`$ galaxies with $`2\%`$ efficiency, and $`2\times 10^3`$ (30) for $`z>5`$ galaxies with $`2\%`$ efficiency. Note, however, that the number counts for galaxies are reduced when we include the fact that most galaxies are resolved by NGST and cannot be treated as point sources (see §3). We have assumed that each lensing halo can be approximated as a SIS, although the mass distributions in actual halos might be more complicated. Numerical simulations of pure dark matter indicate a roughly universal profile (NFW) with a $`1/r`$ density profile in the core. This result is supported by very high resolution simulations of a small number of halos (Moore et al. 1999), although simulations of large numbers of halos typically find a shallower inner density profile, in agreement with observed rotation curves of dark-matter-dominated galaxies (Kravtsov et al. 1998). In addition, galaxy halos undergo adiabatic compression when the baryons cool and contract (e.g., Flores et al. 1993). Halos with the NFW profile have a smaller lensing cross-section than the SIS, but this is partly compensated for by the higher mean magnification and thus the higher magnification bias produced by NFW lenses (Keeton 1999, in preparation). In the above discussion, we have also assumed spherical symmetry. If the SIS is made ellipsoidal, with an ellipticity of 0.3, then the total lensing cross-section is changed only slightly, but lenses above a total magnification of $`8`$ are then mostly four-image systems (see, e.g., Kochanek 1996b). We have also assumed that each halo acts as an isolated lens, while in reality galaxies are clustered and many galaxies lie in small groups. The large dark matter halo associated with the group may combine with the halos of the individual galaxies and enhance their lensing cross-section. External shear due to group halos will also tend to increase the fraction of four-image systems. On the other hand, dust extinction may reduce the number of lensed systems below our estimates, especially since high redshift galaxies are observed at rest-frame UV wavelengths. Significant extinction may arise from dust in the source galaxy itself as well as dust in the lens galaxy, if the image path passes sufficiently close to the center of the lens galaxy. ## 3 Size Distribution of High-Redshift Disk Galaxies ### 3.1 Semi-Analytic Model The formation of disk galaxies within hierarchical models of structure formation was explored by Fall & Efstathiou (1980). More recently, the distribution of disk sizes was derived and compared to observations by Dalcanton, Spergel, & Summers (1997; hereafter DSS) and Mo, Mao, & White (1998; hereafter MMW). In order to estimate the ability of NGST to resolve high redshift disks, we adopt the simple model of an exponential disk in a SIS halo.
We consider a halo of mass $`M`$, virial radius $`r_{\mathrm{vir}}`$, total energy $`E`$, and angular momentum $`J`$, for which the spin parameter is defined as $$\lambda \equiv J|E|^{1/2}G^{-1}M^{-5/2}.$$ (4) If the disk mass is a fraction $`m_d`$ of the halo mass and its angular momentum is a fraction $`j_d`$ of that of the halo, then the exponential scale radius of the disk is given by (MMW) $$R_d=\frac{1}{\sqrt{2}}\left(\frac{j_d}{m_d}\right)\lambda r_{\mathrm{vir}}.$$ (5) The observed distribution of disk sizes suggests that the specific angular momentum of the disk is similar to that of the halo (see DSS and MMW), and so we assume $`j_d/m_d=1`$. The distribution of disk sizes is then determined by the PS halo abundance and by the distribution of spin parameters (for a halo of a given mass and redshift, we determine $`r_{\mathrm{vir}}`$ using NFW and eq. 6 of Bryan & Norman 1998; see also §2.1). The latter approximately follows a lognormal distribution, $$p(\lambda )d\lambda =\frac{1}{\sigma _\lambda \sqrt{2\pi }}\mathrm{exp}\left[-\frac{\mathrm{ln}^2(\lambda /\overline{\lambda })}{2\sigma _\lambda ^2}\right]\frac{d\lambda }{\lambda },$$ (6) with $`\overline{\lambda }=0.05`$ and $`\sigma _\lambda =0.5`$ following MMW, who determined these values based on the N-body simulations of Warren et al. (1992). Unlike MMW, we do not include a lower cutoff on $`\lambda `$ due to disk instability. If a dense bulge exists, it can prevent bar instabilities, or if a bar forms it may be weakened or destroyed when a bulge subsequently forms (Sellwood & Moore 1999). The distribution of disks is truncated at the low-mass end due to the fact that gas pressure inhibits baryon collapse and disk formation in shallow potential wells, i.e. in halos with a low circular velocity $`V_c`$. In particular, photo-ionization heating by the cosmic UV background heats the intergalactic gas to a characteristic temperature of $`10^4`$–$`10^5\mathrm{K}`$ and prevents it from settling into systems with a lower virial temperature. Using a spherical collapse code, Thoul & Weinberg (1996) found a reduction of $`50\%`$ in the collapsed gas mass due to heating, for a halo of $`V_c=50\mathrm{km}\mathrm{s}^{-1}`$ at $`z=2`$, and a complete suppression of infall below $`V_c=30\mathrm{km}\mathrm{s}^{-1}`$. Three-dimensional numerical simulations (Quinn, Katz, & Efstathiou 1996; Weinberg, Hernquist, & Katz 1997; Navarro & Steinmetz 1997) found a suppression of gas infall into even larger halos, with $`V_c\lesssim 75\mathrm{km}\mathrm{s}^{-1}`$. We adopt a typical cutoff value $`V_{\mathrm{cut}}=50\mathrm{km}\mathrm{s}^{-1}`$ in the PS halo function, requiring $`V_c>V_{\mathrm{cut}}`$ for the formation of disks. We note, however, that the appropriate $`V_{\mathrm{cut}}`$ could be lower at both very low and very high redshifts, when the cosmic UV background was weak. In particular, the decline of the UV background at $`z\lesssim 1`$ allowed gas to condense in halos down to $`V_c\sim 25\mathrm{km}\mathrm{s}^{-1}`$ (Kepner, Babul, & Spergel 1997). Similarly, gaseous halos that had formed prior to reionization, when the cosmic UV background had been negligible, could have survived photo-ionization heating at later times as long as they satisfied $`V_c\gtrsim 13\mathrm{km}\mathrm{s}^{-1}`$ (Barkana & Loeb 1999). Aside from its relevance to lensing studies, the distribution of disk sizes is useful for assessing the level of overlap of sources on the sky, namely the confusion noise.
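Before turning to the sky-coverage statistics, equations (4)–(6) can be illustrated with a minimal sketch (Python with astropy), which for simplicity replaces the Bryan & Norman overdensity with the spherical-collapse value $`18\pi ^2`$ and uses an illustrative halo mass:

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
rng = np.random.default_rng(3)

def r_vir(mass_msun, z, delta_c=18 * np.pi**2):
    """Virial radius for a halo collapsing with overdensity delta_c
    relative to the critical density at redshift z."""
    rho = delta_c * cosmo.critical_density(z)
    return ((3 * mass_msun * u.Msun / (4 * np.pi * rho))**(1 / 3)).to(u.kpc)

# Disk scale radii R_d = lambda r_vir / sqrt(2) (eq. 5 with j_d/m_d = 1)
# for an illustrative 1e10 Msun halo at z = 10, with lognormal spin
# (lambda-bar = 0.05, sigma_lambda = 0.5; eq. 6).
lam = np.exp(rng.normal(np.log(0.05), 0.5, 10000))
Rd = lam * r_vir(1e10, 10.0) / np.sqrt(2)
theta_d = (2 * Rd / cosmo.angular_diameter_distance(10.0)).to(
    u.dimensionless_unscaled).value * u.rad  # angular diameter
print(f"median R_d      = {np.median(Rd.value):.2f} kpc")
print(f"median diameter = {np.median(theta_d.to(u.arcsec).value):.3f} arcsec")
```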
We first compute the geometric optical depth of galactic disks, i.e., the fraction of the sky covered by galactic disks. This corresponds to the probability of encountering a galactic disk (within one exponential scale length) inside an infinitesimal aperture. Averaging over all random orientations, a circular disk of radius $`R_d`$ at redshift $`z_S`$ occupies an angular area of $`(\pi /2)(R_d/D_{OS})^2`$. The total optical depth then depends on $`V_{\mathrm{cut}}`$. For $`\mathrm{\Lambda }`$CDM with $`V_{\mathrm{cut}}=50\mathrm{km}\mathrm{s}^{-1}`$, we find the geometric optical depth to be $`2.0\times 10^{-4}`$ when integrated over all $`z>10`$ sources, $`5.5\times 10^{-3}`$ for $`z>5`$ sources, $`1.7\%`$ for $`z>3`$ sources, $`4.6\%`$ for $`z>1`$ sources, and $`6.8\%`$ for sources at all redshifts. If we lower $`V_{\mathrm{cut}}`$ to $`30\mathrm{km}\mathrm{s}^{-1}`$, the optical depth becomes $`8.8\times 10^{-4}`$ for $`z>10`$ sources, $`3.5\%`$ for $`z>3`$ sources, and $`11.3\%`$ for all source redshifts. A more realistic estimate of confusion noise must include the finite resolution of the instrument as well as its detection limit for faint sources. We characterize the instrument’s resolution by a minimum circular aperture of angular diameter $`\theta _a`$. We include as sources only those galactic disks which are brighter than some threshold. This threshold is dictated by $`F_\nu ^{\mathrm{ps}}`$, the minimum spectral flux required to detect a point-like source, i.e., a source which is much smaller than $`\theta _a`$ (note that $`F_\nu ^{\mathrm{ps}}`$ refers to the total spectral flux of the source, not just the portion contained within the aperture). For an extended source of diameter $`\theta _s\gg \theta _a`$, we assume that the signal-to-noise ratio can be improved by using a larger aperture, with diameter $`\theta _s`$. The noise amplitude scales as the square root of the number of noise (sky) photons, or the square root of the corresponding sky area. Thus, the total flux needed for detection of an extended source at a given signal-to-noise threshold is larger than $`F_\nu ^{\mathrm{ps}}`$ by a factor of $`\theta _s/\theta _a`$. We adopt a simple interpolation formula between the regimes of point-like and extended sources, and assume that a source is detectable if its flux is at least $`\sqrt{1+(\theta _s/\theta _a)^2}F_\nu ^{\mathrm{ps}}`$. We can now compute the “intersection probability”, namely the probability of encountering a galactic disk (within one exponential scale length) anywhere inside the aperture of diameter $`\theta _a`$. A face-on circular disk of diameter $`\theta _s=2R_d/D_{OS}`$ will overlap the aperture if its center lies within a radius of $`(\theta _a+\theta _s)/2`$ about the center of the aperture. Assuming a random orientation of the disk, the average cross-section is then $`\pi \theta _a^2/4+1.323\theta _a\theta _s+\theta _s^2/2`$. We integrate this cross-section over the spin parameter distribution and over the abundance of halos at all masses and redshifts. The resulting intersection probability is closely related to the confusion noise: if this probability is small then individual sources are resolved from each other, since the aperture typically contains at most a single detectable source. We can also obtain a limit on the confusion noise from sources below the flux detection threshold, by computing the same intersection probability but including sources at all fluxes.
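The detection threshold and the aperture-overlap cross-section used above are both one-liners. A minimal sketch (Python; the aperture and flux values mirror the NGST numbers assumed in the text):

```python
import numpy as np

THETA_A = 0.06  # aperture diameter in arcsec (expected NGST resolution)
F_PS = 1.0      # point-source detection threshold in nJy

def flux_threshold(theta_s, theta_a=THETA_A, f_ps=F_PS):
    """Total flux (nJy) needed to detect a disk of angular diameter
    theta_s (arcsec): sqrt(1 + (theta_s/theta_a)^2) * F_ps."""
    return np.sqrt(1.0 + (theta_s / theta_a)**2) * f_ps

def overlap_cross_section(theta_s, theta_a=THETA_A):
    """Orientation-averaged cross-section (arcsec^2) for a disk of
    face-on diameter theta_s to overlap the circular aperture."""
    return np.pi * theta_a**2 / 4 + 1.323 * theta_a * theta_s + theta_s**2 / 2

for ts in (0.02, 0.06, 0.12, 0.4):
    print(f"theta_s = {ts:5.2f}  F_min = {flux_threshold(ts):5.2f} nJy"
          f"  sigma = {overlap_cross_section(ts):.4f} arcsec^2")
```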
The flux $`F_\nu `$ of a given disk depends on its mass-to-light ratio, which in turn depends on its star formation history and stellar mass function. We adopt a semi-analytic starburst model similar to that of Haiman & Loeb (1998b), but different in detail. We assume that each halo of mass $`M`$ hosts a disk of mass $`m_dM`$, of which a fraction $`f_d`$ participates in star formation. Adopting a cosmological baryon density of $`\mathrm{\Omega }_bh^2=0.02`$, we define the star formation efficiency $`\eta `$ so that $`f_dm_d=\eta (\mathrm{\Omega }_b/\mathrm{\Omega }_0)`$. We assume a fixed universal value of $`\eta `$, and illustrate our results for a high efficiency of $`\eta =20\%`$ (assumed unless indicated otherwise) and for a low efficiency $`\eta =2\%`$. These values cover the range of efficiencies suggested by observations of the metallicity of the Ly$`\alpha `$ forest at $`z=3`$ (Haiman & Loeb 1998b) and the cumulative mass density of stars in the Universe at present (Fukugita, Hogan, & Peebles 1998). Note that $`\eta =20\%`$ and a particular value of $`F_\nu ^{\mathrm{ps}}`$ are equivalent to $`\eta =2\%`$ and a tenfold decrease in $`F_\nu ^{\mathrm{ps}}`$. In order to determine the mass-to-light ratio of a halo of mass $`M`$ at a redshift $`z`$, we assume that the mass $`\eta (\mathrm{\Omega }_b/\mathrm{\Omega }_0)M`$ is distributed in stars with a Salpeter mass function ($`dN\propto m^{-\alpha }dm`$ with $`\alpha =2.35`$) from 1 $`M_{\odot }`$ up to 100 $`M_{\odot }`$. If the mass function were extended to masses below 1 $`M_{\odot }`$, the additional stars would contribute significant mass but little luminosity, so this would essentially be equivalent to a reduction in $`\eta `$.
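The insensitivity to the low-mass end of the IMF can be checked directly. A minimal sketch (Python), which integrates the Salpeter mass function and uses the toy main-sequence scaling $`L\propto m^{3.5}`$ (an assumption adopted here purely for illustration):

```python
from scipy.integrate import quad

ALPHA = 2.35  # Salpeter slope, dN/dm proportional to m^-ALPHA

def mass(lo, hi):
    """Total stellar mass per unit IMF normalization."""
    return quad(lambda m: m * m**-ALPHA, lo, hi)[0]

def light(lo, hi):
    """Toy luminosity, assuming the main-sequence scaling L ~ m^3.5."""
    return quad(lambda m: m**3.5 * m**-ALPHA, lo, hi)[0]

# Extending the IMF from 1-100 Msun down to 0.1 Msun multiplies the
# stellar mass by ~2.5 while adding a negligible amount of light,
# i.e. it acts like a reduction in the efficiency eta.
print(f"mass(0.1-100) / mass(1-100) = {mass(0.1, 100) / mass(1, 100):.2f}")
print(f"light(0.1-1) / light(1-100) = {light(0.1, 1) / light(1, 100):.1e}")
```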
We use the stellar population code of Sternberg (1998) with Z=0.001 stellar tracks and Z=0.006 stellar spectra. We assume that the age of the stellar population equals that of the dark matter halo, whose age is determined from its merger history. The formation redshift $`z_{\mathrm{form}}>z`$ is defined as the time at which half the mass of the halo was first contained in progenitors more massive than a fraction $`f`$ of $`M`$. We set $`f=0.5`$ and estimate the formation redshift (and thus the age) using the extended Press-Schechter formalism (see, e.g., Lacey & Cole 1993). At high redshifts, the young age of the Universe and the high halo merger rate imply young stellar populations which are especially bright at rest-frame UV wavelengths. At each redshift $`z`$ we calculate the halo spectral flux by averaging the composite stellar spectrum over the wavelengths corresponding to the observed NGST spectral range of 0.6–3.5$`\mu `$m. We also include a Ly$`\alpha `$ cutoff in the spectrum due to absorption by the dense Ly$`\alpha `$ forest at all redshifts up to that of the source. We do not, however, include dust extinction. Despite the generally low metallicity at high redshifts, extinction could be significant since observations correspond to rest-frame UV wavelengths (Loeb & Haiman 1997). Our starburst model is expected to describe galaxies at high redshifts, but it may fail at redshifts $`z\lesssim 2`$. The model relies on two key assumptions, namely that stars form in disks, and that the stars in each galaxy have formed since the last major merger of its halo. At high redshifts, the fraction of gas that has collapsed into halos is small, and the fraction that has turned into stars is even smaller. Thus, a high-redshift galaxy is expected to be gas-rich whether it forms in a merger or accretes most of its gas from the intergalactic medium. Such a galaxy is likely to form most of its stars in a disk after the gas cools and settles onto a plane. At low redshifts, on the other hand, disk galaxies may have converted most of their gas into stars by the time they merge. In this case, the merger may form a massive elliptical galaxy rather than a disk-dominated galaxy. Indeed, elaborate semi-analytic models indicate that the stars in elliptical galaxies are typically much older than their halo merger age (e.g., Thomas & Kauffmann 1999), in agreement with the red colors of ellipticals which suggest old stellar populations. Although the increased presence of elliptical galaxies invalidates our model for the mass-to-light ratios of galaxies at low redshifts, our results for the size distribution of galaxies may remain approximately valid. Theoretical considerations based on the virial theorem, as well as numerical simulations, suggest that the characteristic size of a galactic merger remnant is smaller by a factor of $`1.5`$ than the size expected for a disk galaxy of the same mass and velocity dispersion (Hausman & Ostriker 1978; Hernquist et al. 1993). ### 3.2 Numerical Results Figure 5 shows the total intersection probability as a function of limiting flux (right panel), for all sources with $`z>0`$, $`z>2`$, $`z>5`$, and $`z>10`$, from top to bottom. The total probability is dominated by the contribution of sources at low redshifts, which is relatively insensitive to the limiting flux (or to $`\eta `$). All curves assume the $`\mathrm{\Lambda }`$CDM model with a circular-velocity cutoff for the host halo of $`V_{\mathrm{cut}}=50\mathrm{km}\mathrm{s}^{-1}`$. The aperture diameter is chosen to be $`\theta _a=0.06^{\prime \prime }`$, close to the expected NGST resolution at $`2\mu `$m. With $`F_\nu ^{\mathrm{ps}}=1`$ nJy, the total intersection probability for all redshifts is $`8.9\%`$ (or $`5.6\%`$ if $`\eta =2\%`$) in $`\mathrm{\Lambda }`$CDM (and it is $`10\%`$ or less also in the SCDM and OCDM models). The probability increases to $`15\%`$ ($`6.2\%`$ if $`\eta =2\%`$) if $`V_{\mathrm{cut}}=30\mathrm{km}\mathrm{s}^{-1}`$ instead of $`50\mathrm{km}\mathrm{s}^{-1}`$. The contribution from sources at $`z>5`$ is $`1.0\%`$ ($`9.0\times 10^{-4}`$ if $`\eta =2\%`$). Thus the chance for overlapping sources will be small for NGST. If the resolution were $`\theta _a=0.12^{\prime \prime }`$, the probability would be $`12\%`$ ($`6.6\%`$ if $`\eta =2\%`$) for all redshifts and $`1.7\%`$ ($`0.14\%`$ if $`\eta =2\%`$) for sources at $`z>5`$. If we include all sources regardless of flux then the probability becomes independent of $`\eta `$, and (with $`\theta _a=0.06^{\prime \prime }`$) it equals $`9.1\%`$ if $`V_{\mathrm{cut}}=50\mathrm{km}\mathrm{s}^{-1}`$ and $`18.8\%`$ if $`V_{\mathrm{cut}}=30\mathrm{km}\mathrm{s}^{-1}`$. The contribution from sources below the detection threshold is small due to the $`V_c`$ cutoff, i.e. the fact that the photo-ionizing background prevents the formation of galaxies in small dark matter halos. This fact should eventually result in a turnover, where the number counts no longer increase with decreasing flux. However, the turnover occurs somewhat below 1 nJy, a flux much smaller than the detection threshold of current observations such as the Hubble Deep Field.
In summary, we have shown that confusion noise for NGST will be low, assuming that there is one galaxy per halo and that the luminous stars form primarily in disks. Note that we have not included the possible confusion noise from multiple galaxies per halo, from clustered or interacting galaxies, or from galaxies being observed as separate fragments rather than smooth disks. We also have not included the confusion noise from stars and other sources in our own galaxy. Also note that with no flux limit on sources, the intersection probability approaches unity only if the aperture is increased to $`0.9^{\prime \prime }`$. Our model predicts the size distribution of galaxies at various redshifts. Figure 6 shows the fraction of the total number counts contributed by sources with diameters greater than $`\theta `$, as a function of $`\theta `$. The size distributions are shown for a high efficiency ($`\eta =20\%`$, solid curves) and for a low efficiency ($`\eta =2\%`$, dotted curves) of star formation. Each curve is marked by the lower limit of the corresponding redshift range, with ‘0’ indicating sources with $`0<z<2`$, and similarly for $`2<z<5`$, $`5<z<10`$, and $`z>10`$. All curves include a cutoff of $`V_{\mathrm{cut}}=50\mathrm{km}\mathrm{s}^{-1}`$ and a limiting point source flux of 1 nJy, and all are for the $`\mathrm{\Lambda }`$CDM model. The vertical dashed line in Figure 6 indicates the NGST resolution of $`0.06^{\prime \prime }`$. Note that increasing $`\eta `$ leads to a decrease in the typical angular size of galaxies, since the set of observable galaxies then includes galaxies which are less massive, and thus generally smaller. However, a tenfold increase in $`\eta `$ decreases the observed angular sizes of $`z>10`$ galaxies by only a factor of two. The typical observed size of faint disks (i.e., of all disks down to 1 nJy) is $`0.4^{\prime \prime }`$ for sources at $`0<z<2`$, $`0.2^{\prime \prime }`$ for sources at $`2<z<5`$, $`0.10^{\prime \prime }`$ (or $`0.15^{\prime \prime }`$ if $`\eta =2\%`$) for sources at $`5<z<10`$, and $`0.065^{\prime \prime }`$ (or $`0.11^{\prime \prime }`$ if $`\eta =2\%`$) for sources at $`z>10`$. Roughly $`60\%`$ of all $`z>10`$ sources (or $`90\%`$ if $`\eta =2\%`$) can be resolved by NGST, and the fraction is at least $`85\%`$ among lower redshift sources. Thus, the high resolution of NGST should make most of the detected sources useful for weak lensing. If reliable shape measurements require a diameter equal to twice the resolution scale (probably overly pessimistic), then the useful ($`\theta >0.12^{\prime \prime }`$) fractions are $`13\%`$ for $`z>10`$, $`40\%`$ for $`5<z<10`$, and $`80\%`$ for $`2<z<5`$ sources. If $`\eta =2\%`$, the corresponding fractions are $`40\%`$ for $`z>10`$, $`65\%`$ for $`5<z<10`$, and $`80\%`$ for $`2<z<5`$. These results are all in the $`\mathrm{\Lambda }`$CDM model, but disk sizes in the SCDM and OCDM models differ by only about $`10\%`$. As noted by Schneider & Kneib (1998), ground-based telescopes that are not equipped with adaptive optics or interferometry would be unable to resolve most of the high-redshift sources, even if they could reach the same flux sensitivity as NGST. For example, a ground-based survey down to 1 nJy with, e.g., $`0.75^{\prime \prime }`$ seeing at $`2\mu `$m, could resolve only $`0.003\%`$ of the $`z>10`$ sources, with corresponding fractions of $`0.1\%`$ for $`5<z<10`$, and $`2\%`$ for $`2<z<5`$.
If $`\eta =2\%`$ then the resolved fractions are $`0.03\%`$ for $`z>10`$, $`0.8\%`$ for $`5<z<10`$, and $`4\%`$ for $`2<z<5`$. Thus, the high resolution of NGST is crucial for resolving faint galaxies at the redshifts of interest. Current observations of galaxy sizes at $`z>2`$ are inadequate for a detailed comparison with our models. Gardner & Satyapal (1999, in preparation) have determined the sizes of galaxies in the Hubble Deep Field South, finding typical half-light radii of $`0.1^{\prime \prime }`$ with a very large scatter. This sample likely includes a wide range of redshifts, and it is expected to be strongly biased toward small galaxy sizes. Given the steep luminosity function of the detected galaxies, most of them are detected very close to the detection limit, especially those at high redshift. Of course, galaxies near the flux threshold can be detected only if they are nearly point sources, while large galaxies are excluded from the sample because of their low surface brightness. Since most galaxies will be resolved by NGST, predictions for the total number counts are affected by the higher flux needed for the detection of extended objects relative to point sources. For a point source flux limit of 1 nJy and $`\eta =20\%`$, the total number counts are reduced (relative to a size-independent flux limit of 1 nJy) by a factor of 2 for $`z>10`$ and by only $`10\%`$ for $`5<z<10`$. The reduction for $`z<10`$ sources is small if $`\eta =20\%`$, since in this case the total flux of most $`z<10`$ sources is greater than 1 nJy, and these galaxies can still be detected even as extended objects. However, the reduction in number counts is more significant if $`\eta =2\%`$, with a factor of 8 for $`z>10`$, 4 for $`5<z<10`$, and 2.5 for $`2<z<5`$. We show in Figure 7 the resulting prediction for the redshift distribution of the galaxy population observed with NGST. We assume the $`\mathrm{\Lambda }`$CDM model and plot $`dN/dz`$, where $`N`$ is the number of galaxies per NGST field of view. The solid curve assumes a high efficiency ($`\eta =20\%`$) of star formation and the dashed curve assumes a low efficiency ($`\eta =2\%`$). All curves assume a limiting point source flux of 1 nJy. The total number per field of view of galaxies at all redshifts is $`N=59,000`$ for $`\eta =20\%`$ and $`N=15,000`$ for $`\eta =2\%`$. The fraction of galaxies above redshift 5 is sensitive to the value of $`\eta `$ – it equals $`40\%`$ for $`\eta =20\%`$ and $`7.4\%`$ for $`\eta =2\%`$ – but the number of $`z>5`$ galaxies is large ($`\sim 1000`$) even for the low efficiency. The number of $`z>5`$ galaxies predicted in SCDM is close to that in $`\mathrm{\Lambda }`$CDM, but in OCDM there are twice as many $`z>5`$ galaxies. ### 3.3 The Surface Brightness of Lensed Sources In our estimates of the lensing rate in §2 we implicitly made two important assumptions: (i) the source is smaller than the image separation, so that the two images of the source are not blended; (ii) the surface brightness of the background source is comparable to or higher than that of the foreground lens, since otherwise the background source could not be detected when superimposed on the lens galaxy. These assumptions are trivially justified for the point-like images of quasars. In the context of galactic sources, we can apply our estimates of disk sizes to test these two assumptions quantitatively. A lensed galaxy is generally much smaller than the separation of its two images.
The combination of Figures 3 and 6 shows that, regardless of the source redshift, the typical image separation is at least four times as large as the typical diameter of a source galaxy detected by NGST. Thus, the majority of all lensed sources will not be blended. Note, however, that if ellipticity or shear are included then some of the resulting four-image systems may include arcs produced by several blended images. In order to compare the surface brightness of source galaxies to that of lens galaxies, we calculate the redshift evolution of the mean surface brightness of galaxies. At high redshifts, we may apply our disk starburst model to find the surface brightness of a galaxy from the predicted size, mass, and mass-to-light ratio of its disk. We compute the average surface brightness (as observed in the NGST spectral range) out to one exponential scale length. Figure 8 shows this surface brightness $`\mu `$ (expressed in nJy per square arcsecond) averaged over all galaxies at each redshift, in the $`\mathrm{\Lambda }`$CDM model only (as the OCDM and SCDM models yield very similar results). Solid lines show the mean at $`z>2`$, where galaxies are weighed by their number density and their mass-to-light ratios are derived from the starburst model. As discussed at the end of §3.1, although our model for the size distribution of galaxies should remain approximately valid at low redshifts, the starburst model may fail to predict the correct mass-to-light ratio of the stellar population at $`z\lesssim 2`$, particularly for the lens galaxies. These lenses tend to be massive elliptical galaxies, with stellar populations that may be much older than the merger timescale assumed in our starburst model. In order to estimate the surface brightness of lens galaxies, we adopt a simple alternative model in which all their stars are uniformly old. The dashed lines in Figure 8 show (for $`z<2`$) the mean surface brightness of lensing galaxies (i.e., where galaxies are weighed by the product of their number density and their lensing cross-section), assuming that their stars formed at $`z=5`$. In each case (i.e., for source galaxies or for lens galaxies), the upper curve assumes a high efficiency ($`\eta =20\%`$) and the lower curve assumes a low efficiency ($`\eta =2\%`$) of incorporating baryons into stars in the associated halos. All curves include a cutoff velocity of $`V_{\mathrm{cut}}=50\mathrm{km}\mathrm{s}^{-1}`$ and a limiting point source flux of 1 nJy. As is apparent from Figure 8, the mean surface brightness of galaxies varies, for a fixed $`\eta `$, by only a factor of $`2`$ over all redshifts above 2, despite the large range in luminosity distances from the observer. Several different factors combine to keep the surface brightness nearly constant. Except for redshift factors, the surface brightness is proportional to the luminosity over the square of the disk radius, and the luminosity is in turn equal to the disk mass divided by its mass-to-light ratio. Although the typical mass of halos decreases at high redshifts, two other effects tend to increase the surface brightness. First, high redshift disks are compact due to the increased mean density of the Universe. The second effect results from the low mass-to-light ratio of the young stellar populations of high redshift disks, which makes these galaxies highly luminous despite their small masses.
For example, the mean ratio of halo mass to disk luminosity for $`z=2`$ galaxies (with $`\eta =20\%`$ and $`F_\nu ^{\mathrm{ps}}=1`$ nJy) is 14 in solar units, and this decreases to 3.8 at $`z=5`$ and 1.2 at $`z=10`$. This evolution in the mass-to-light ratio includes the so-called K-correction, i.e. the fact that for higher-redshift sources the NGST filter corresponds to shorter rest-frame wavelengths. Acting alone, the factors discussed above would result in a sharp increase with redshift in the surface brightness of galaxies. Additional redshift effects, however, counter-balance these other factors. According to the Tolman surface brightness law, the expansion of the Universe yields a dimming factor of $`(1+z)^{-4}`$ regardless of the values of the cosmological parameters. This redshift factor dominates and produces an overall decrease in $`\mu `$ among lens galaxies at low redshifts (up to $`z\sim 1.5`$). At these low redshifts, all galaxies are detected regardless of $`\eta `$, so the overall $`\mu `$ is exactly proportional to $`\eta `$. At higher redshifts, the 1 nJy flux limit preferentially removes low surface brightness galaxies from the detected sample. The resulting bias toward high surface brightness is larger if $`\eta =2\%`$, and this decreases the difference in $`\mu `$ between the cases of $`\eta =2\%`$ and $`\eta =20\%`$. The mass-to-light ratio begins to decrease rapidly at $`z\sim 1.5`$, and at $`z>2`$ the various factors combine to produce a slow variation in $`\mu `$. Although there is only a modest redshift evolution in the surface brightness of galaxies, there is an additional difficulty in detecting lensed sources. Lensing galaxies are biased toward larger circular velocities, i.e. toward larger masses at each redshift. Since a galaxy that is more massive is usually also more luminous, its surface brightness tends to be larger. As shown in Figure 8, this tendency makes the mean surface brightness of lenses somewhat higher than that of sources, despite our assumption of an old stellar population in lens galaxies. Consider, for example, a source at redshift 5 which is multiply imaged. The mean lens redshift for $`z_S=5`$ is $`z_L=1.4`$. If we select the source from the general galaxy population and the lens from the population of lenses, then the typical source-to-lens surface brightness ratio is 1:3 if $`\eta =20\%`$ (or close to 1:1 if $`\eta =2\%`$). Even though lens galaxies might have a somewhat higher mean surface brightness than the sources which they lens, it should be possible to detect lensed sources since (i) the image center will typically be some distance from the lens center, of order half the image separation, and (ii) the younger stellar population and higher redshift of the source will make its colors different from those of the lens galaxy, permitting an easy separation of the two in multi-color observations. These two helpful features, along with the source being much smaller than the lens and the image separation, are evident in the currently known systems which feature galaxy-galaxy lensing. These include two four-image ‘Einstein cross’ gravitational lenses discovered by Ratnatunga et al. (1995) in the Groth-Westphal strip, and a lensed three-image arc detected in the Hubble Deep Field South and studied in detail by Barkana et al. (1999). In these cases of moderate redshifts and optical/UV observations, the sources appear bluer than the lens galaxies.
In the infrared range of NGST, high-redshift sources are expected to generally be redder than their low redshift lenses, since the overall redshift has a dominant effect on the spectrum. Suppose, e.g., that $`z_S=5`$ and $`z_L=1.4`$. We divide the NGST spectral range into four logarithmically-spaced parts (in order of increasing frequency). For a given spectrum, we find the fraction of the total luminosity which is emitted in each frequency quadrant. The mean fractions for $`z_S=5`$ galaxies are 0.37, 0.21, 0.26, and 0.16, respectively, while the fractions for $`z_L=1.4`$ lenses (assuming, as above, that their stars formed at redshift 5) are 0.16, 0.29, 0.39, and 0.16. Thus, if we use the lowest frequency quadrant, the source will be brighter than the lens by an additional factor of 2.3 relative to the source-to-lens luminosity ratio when we use the full NGST bandwidth. Note that we have not included extinction here, which could further redden the colors of lensed sources.
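The factor of 2.3 quoted above follows directly from the listed quadrant fractions (which each sum to unity over the full bandwidth); a one-line check in Python, using the numbers as given in the text:

```python
source = [0.37, 0.21, 0.26, 0.16]  # z_S = 5 galaxies, increasing frequency
lens   = [0.16, 0.29, 0.39, 0.16]  # z_L = 1.4 lenses, stars formed at z = 5
print(source[0] / lens[0])         # ~2.3: relative gain in the lowest quadrant
```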
## 4 Conclusions

We have calculated the lensing probability of high-redshift galaxies or quasars by foreground dark matter halos. We found that the lensing optical depth for multiple imaging of sources increases by a factor of 4–6 from $`z_S=2`$ to $`z_S=10`$. With a magnification bias of $`\sim 5`$ expected for $`z_S>5`$ sources, the fraction of apparent sources which form one of the images of a lensed source reaches $`\sim 5\%`$ for sources at $`z_S=10`$ (see Table 1). Among lenses with image separations below $`5^{\prime \prime }`$, the typical image separation (in $`\mathrm{\Lambda }`$CDM) drops from $`1\stackrel{\prime \prime }{\mathrm{.}}1`$ at $`z_S=2`$ to $`0.5^{\prime \prime }`$ at $`z_S=10`$. With its expected $`0\stackrel{\prime \prime }{\mathrm{.}}06`$ resolution, NGST can resolve $`\sim 85\%`$ of the lenses with $`z_S=10`$. Assuming the number counts predicted by Haiman & Loeb (1998b), the estimated number of lensed sources above 1 nJy per field of view of NGST is roughly 5 for $`z>10`$ quasars, 10 for $`z>5`$ quasars, 1–15 for $`z>10`$ galaxies, and 30–200 for $`z>5`$ galaxies. Note that these values are for a $`\mathrm{\Lambda }`$CDM cosmology; the number of $`z>10`$ galaxies is smaller by a factor of $`\sim 3`$ in SCDM but larger by a factor of $`\sim 10`$ in OCDM.

Although only a small fraction of the sources are multiply imaged, all sources are mildly altered by gravitational lensing due to foreground objects. For a source that is not multiply imaged, the cross-section for an amplification of at least $`A`$ varies as $`1/(A-1)^2`$ for a SIS lens. Thus, for $`z_S=10`$ the optical depth is unity for an amplification of $`A=1.1`$ or greater. This implies that extended sources at high redshifts are significantly distorted by lensing. A typical $`z=10`$ source is magnified or de-magnified by $`\sim 10\%`$ and also has an ellipticity of at least $`\sim 10\%`$ due to lensing.

We have also predicted the size distribution of galactic disks at high redshifts (see Figure 6) and found that the angular resolution of NGST will be sufficiently high to avoid confusion noise due to overlapping sources. Indeed, with a 1 nJy flux limit the probability of encountering a galactic disk inside an aperture of $`0\stackrel{\prime \prime }{\mathrm{.}}06`$ diameter is $`8.9\%`$ for $`\mathrm{\Lambda }`$CDM, of which $`\sim 4\%`$ comes from $`z>2`$ sources, $`\sim 1\%`$ comes from $`z>5`$ sources, and only $`0.02\%`$ is contributed by $`z>10`$ sources (see Figure 5). These values are for a high star formation efficiency of $`\eta =20\%`$, and they are reduced if $`\eta =2\%`$.

In our estimates of the lensing rate in §2, we assumed that a lensed source can be detected even when its images overlap the lensing galaxy. We showed in §3 that the mean surface brightness of galaxies evolves modestly above redshift 2 (see Figure 8). Although the surface brightness of a background source will typically be somewhat lower than that of the foreground lens, the lensed images should be detectable since they are offset from the lens center and their colors are expected to differ from those of the lens galaxy.

Although the typical size of sources decreases with increasing redshift, at least $`60\%`$ of the $`z>10`$ galaxies above 1 nJy can still be resolved by NGST. This implies that the shapes of these high redshift galaxies can be studied with NGST. We have also found that the high resolution of NGST is crucial in making the majority of sources on the sky useful for weak lensing studies. When we assumed a 1 nJy flux limit for detecting point sources, we included the fact that resolved sources require a higher flux in order to be detected with the same signal-to-noise ratio. Therefore, estimates of number counts that assume a constant flux limit of 1 nJy for all sources overestimate the counts by a factor of 2 for $`z>10`$ sources and a star formation efficiency of $`\eta =20\%`$, or by as much as a factor of 8 if $`\eta =2\%`$. Even with this limitation, though, NGST should detect a total (over all redshifts) of roughly one galaxy per square arcsecond for $`\eta =20\%`$ (or one per 4 square arcseconds if $`\eta =2\%`$).

In conclusion, the field of gravitational lensing is likely to benefit greatly over the next decade from the combination of unprecedented sensitivity and high angular resolution of NGST.

###### Acknowledgements.

We thank Zoltan Haiman for providing number count data from earlier work. We are also grateful to Tal Alexander and Amiel Sternberg for numerical results of their stellar population model, and to David Hogg for useful discussions. RB acknowledges support from Institute Funds. This work was supported in part by NASA grants NAG 5-7039 and NAG 5-7768 for AL.
## 1 Introduction

The Planck Surveyor is a European Space Agency (ESA) satellite mission to map spatial anisotropy in the Cosmic Microwave Background (CMB) over a wide range of frequencies with an unprecedented combination of sensitivity, angular resolution, and sky coverage (Bersanelli et al. 1996). The data gathered by this mission will revolutionize modern cosmology by shedding light on fundamental cosmological questions such as the age and present expansion rate of the universe, the average density of the universe, and the amount and kind of dark matter. As with any CMB experiment, achieving the desired performance requires careful attention to the control of systematic effects. $`1/f`$-type noise in the radiometer output is one of the most critical systematic effects for the Low Frequency Instrument (LFI) radiometers, because it may lead to striping in the final sky maps and increase the noise level. In general, a value of the $`1/f`$ knee frequency, $`f_k`$, significantly greater than the spacecraft rotation frequency, $`f_s`$, will lead to some degradation in sensitivity (Janssen et al. 1996).

In this paper we examine the impact of $`1/f`$-type noise by adopting a realistic estimate of the $`1/f`$ knee frequency as a function of the load, amplifier noise and payload environment temperatures, of the radiometer bandwidth, and of the level of fluctuations of each stage of the radiometer amplifiers, based on recent analytical studies of systematic effects in Planck LFI radiometers (Seiffert et al. 1997). In section 2 we summarize the concepts relevant for understanding the basic properties of instrumental noise. In section 3 we present the mathematical formalism of our numerical code for the simulation of the Planck observations, including instrumental noise and the data stream generation; we typically refer here to Planck-like scanning strategies, but our code is versatile enough to allow the study of other observational schemes. In section 4 we describe how we have converted our simulated data streams into simulated observed maps. In section 5 we discuss in detail the mathematical formalism of the proposed destriping technique, including some considerations about numerical efficiency. The "standard" estimators that quantify the magnitude of the striping effect and the efficiency of destriping techniques are presented in section 6. Some preliminary results are presented in section 7. Finally, in section 8 we draw the main conclusions of our analysis, compare our results with those of previous works and outline a brief guideline for future work. We discuss there the main implications of our study, focusing on their impact on the optimization of the Planck observational strategy.

## 2 Sources of instrumental noise

Planck LFI radiometers are modified Blum correlation receivers (Blum 1959, Colvin 1961). The modification is that the temperature of the reference load is quite different from the sky temperature (Bersanelli et al. 1995). To compensate, different DC gains are applied after the two detector diodes. Adjusting the ratio of DC gains, $`r`$, allows one to null the output signal, minimize sensitivity to RF gain fluctuations, and achieve the lowest white noise in the output. Although it may not be immediately apparent, the fact that the reference load is not at the same temperature as the sky does not increase the white noise level compared to a standard correlation receiver.
The ideal sensitivity of our radiometer for a single observation with an integration time $`\tau `$ is

$$\mathrm{\Delta }T_{\mathrm{white}}=\frac{\sqrt{2}\left(T_n+T_x\right)}{\sqrt{\beta \tau }},$$ (1)

where $`\beta `$ is the effective bandwidth, $`T_x`$ is the noise temperature of the signal entering one of the two horns and $`T_n`$ is the amplifier noise temperature. In order to null the average output signal of the radiometer, the ratio of the DC gains after the two detector diodes, $`r`$, must be set to the proper value. In the simple case in which the gains of the two amplifiers and their noise temperatures can be considered equal, $`r`$ must be set to the value

$$r=\frac{T_x+T_n}{T_y+T_n},$$ (2)

where $`T_y`$ is the reference load temperature entering the other horn. The above temperatures are antenna temperatures; $`T_x`$ is properly the sum of the sky temperature (essentially the CMB monopole antenna temperature, related to the CMB thermodynamic temperature $`T_0\simeq 2.726`$ K, plus "minor" contributions from the CMB dipole and anisotropies, galactic and extragalactic foregrounds, bright sources, Zodiacal light, …) and of the "environment" temperature (of about 1 K) due to the satellite emission.
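As a quick numerical cross-check of eqs. (1) and (2), a minimal Python sketch (the function names are ours, and the temperatures, bandwidth and integration time are illustrative values taken from the text, not instrument specifications):

```python
import math

def delta_t_white(T_n, T_x, beta, tau):
    """Ideal radiometer sensitivity of eq. (1), in K (beta in Hz, tau in s)."""
    return math.sqrt(2.0) * (T_n + T_x) / math.sqrt(beta * tau)

def gain_ratio(T_x, T_y, T_n):
    """Null-balancing DC gain ratio r of eq. (2)."""
    return (T_x + T_n) / (T_y + T_n)

# Illustrative 30 GHz values: T_n = 10 K, T_x = 3 K, T_y = 20 K,
# beta = 6 GHz, tau ~ 0.03 s (a single sampling)
print(delta_t_white(10.0, 3.0, 6.0e9, 0.03))  # ~1.4e-3 K
print(gain_ratio(3.0, 20.0, 10.0))            # ~0.43
```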
There are several potential concerns for the current radiometer scheme. Variations of the amplifier noise temperature, which derive from gain fluctuations, could be confused with the sky signal variations that we are interested in measuring, since they introduce a change in the observed signal, $`\mathrm{\Delta }T_{\mathrm{equiv}}`$, which can mimic a true sky fluctuation. The amplifier noise temperature variations have the characteristics of $`1/f`$ noise, and this leads to $`1/f`$-type noise in the radiometer output. We have recently estimated the $`1/f`$ knee frequency of our radiometer (Burigana et al. 1997a, Mandolesi et al. 1997a, Seiffert et al. 1997) under the simplifying assumption that there is no $`1/f`$ contribution from the detector diodes, neglecting the effects of the phase shifting that is designed to control this contribution; the diodes are assumed to be perfect square-law detectors and we have assumed that the bandpass of the signal is the same in both legs of the radiometer. Here we briefly summarize the main arguments relevant for the expected magnitude of gain and noise temperature fluctuations and the main concepts relevant for the estimate of the $`1/f`$ knee frequency.

The contribution to the $`1/f`$ noise directly due to amplifier gain fluctuations is zero at first order. Under quite reasonable assumptions, the noise contributions due to reference load fluctuations and to fluctuations in the ratio of DC gains are much smaller than the contribution due to the amplifier noise temperature fluctuations, which is then the dominant source of $`1/f`$ noise in the radiometer output. Imperfect isolation does not significantly modify this conclusion. Furthermore, the sensitivity of our radiometer to differences between the gains and noise temperatures of the two amplifiers is not critical. As a consequence, all these complications cannot significantly change the knee frequency with respect to the results we derive below.

Cryogenic HEMT amplifiers have noise temperature fluctuations with a $`1/f`$-type spectrum, since they are induced by the $`1/f`$-type gain fluctuations of the amplifiers (Wollack 1995, Jarosik 1996, Seiffert et al. 1996). The magnitude of the noise temperature fluctuations can be computed from the following argument. Assuming that each stage of the amplifier has the same level of fluctuation, we can conclude that the transconductance of an individual HEMT device also fluctuates according to:

$$\frac{\mathrm{\Delta }g_m}{g_m}=\frac{1}{2\sqrt{N_s}}\frac{\mathrm{\Delta }G}{G},$$ (3)

where $`N_s`$ is the number of stages of the amplifier, typically $`\sim 5`$. An optimal low noise amplifier design will have equal noise contributions from the gate and drain of the HEMT, which means the changes in $`g_m`$ will lead to changes in $`T_n`$ (Pospieszalsky 1989). This can be expressed as

$$\frac{\mathrm{\Delta }T_n}{T_n}\simeq \frac{\mathrm{\Delta }g_m}{g_m}.$$ (4)

We can write the $`1/f`$ spectrum of the gain fluctuations as:

$$\frac{\mathrm{\Delta }G}{G}=\frac{C}{\sqrt{f}}.$$ (5)

Putting this together we get:

$$\frac{\mathrm{\Delta }T_n}{T_n}\simeq \frac{1}{2\sqrt{N_s}}\frac{C}{\sqrt{f}}.$$ (6)

We can therefore write the amplifier noise temperature fluctuations as

$$\frac{\mathrm{\Delta }T_n}{T_n}=\frac{A}{\sqrt{f}},$$ (7)

with $`A=C/(2\sqrt{N_s})`$; a normalization $`A\simeq 1.8\times 10^{-5}`$ (relying on the references above) is appropriate for the Planck radiometers at 30 and 45 GHz. Throughout, we will use units of $`\mathrm{K}/\sqrt{\mathrm{Hz}}`$ for $`\mathrm{\Delta }T`$ so that we will not need to refer to the sampling frequency of the radiometer. In these units, then, $`\mathrm{\Delta }T/T`$ has units of $`\mathrm{Hz}^{-1/2}`$ and $`A`$ is dimensionless. We also note that the value of $`A`$ will generally depend on the physical temperature of the amplifier. The values for $`A`$ given here should be regarded as estimates rather than precise values. For the radiometers at higher frequencies, it will be necessary to use HEMT devices with a smaller gate width to achieve the lowest amplifier noise figure. We expect that the gate widths will be roughly $`1/2`$ that of the devices used for the lower frequency radiometers, and this will lead to $`g_m`$ fluctuations that are roughly a factor of $`\sqrt{2}`$ higher (Gaier 1997, Weinreb 1997). We will therefore adopt a normalization of $`A=2.5\times 10^{-5}`$ for the 70 and 100 GHz radiometers.

Starting from the expression for the average output of the differential radiometer, one derives the change in the output signal for the above small change in the noise temperature of one of the amplifiers; multiplying it by a factor $`\sqrt{2}`$, because both amplifiers (which have uncorrelated noise) can contribute to this effect, we obtain the change in the observed signal, $`\mathrm{\Delta }T_{\mathrm{equiv}}`$, given by:

$$\mathrm{\Delta }T_{\mathrm{equiv}}=\sqrt{2}\mathrm{\Delta }T_n\left[\frac{1-r}{2}\right].$$ (8)

We define the "knee" frequency as the post-detection frequency, $`f_k`$, at which the $`1/f`$ contribution and the ideal white noise contribution are equal, i.e. $`\mathrm{\Delta }T_{\mathrm{equiv}}=\mathrm{\Delta }T_{\mathrm{white}}`$. For the computation of the knee frequency we use an integration time $`\tau =1/(2\mathrm{\Delta }f)`$ and $`\mathrm{\Delta }f=1`$ Hz, according to the chosen units for $`A`$ and $`\mathrm{\Delta }T_n`$.
The knee frequency is then given by:

$$f_k=\frac{A^2\beta }{8}\left(1-r\right)^2\left(\frac{T_n}{T_n+T_x}\right)^2.$$ (9)

This expression shows how the knee frequency depends on several factors, including the radiometer bandwidth, the reference load temperature, and the intrinsic level of fluctuation in the HEMT devices; values of a few $`\times `$ 0.1 Hz, depending on the frequency, should be reached with only passive cooling of the radiometer (to about 50 K), whereas active cooling (to about 20 K or less) can further reduce the knee frequency. As examples, assuming a 20% bandwidth for our frequency channels and an antenna temperature $`T_x=3`$ K, the knee frequency is $`\simeq 0.046`$ Hz at 30 GHz (assuming $`T_y=20`$ K and $`T_n=10`$ K) and $`\simeq 0.11`$ Hz at 100 GHz (assuming $`T_y=20`$ K and $`T_n=40`$ K).

For a more realistic evaluation we would have to repeat carefully the analytical considerations of Seiffert et al. (1997) in order to distinguish between the amplifier noise temperature, $`T_n`$, and the system temperature, $`T_{sys}`$. In fact, it would be more appropriate to insert $`T_{sys}`$ in eq. (1) and to consider carefully where each of the two temperatures enters in the determination of the knee frequency. On the other hand, the difference between these two temperatures is estimated to be of a few K, while the instrumental situation is still partially unclear; a careful distinction between them is therefore presently not necessary from a practical point of view, although interesting. We will address this point in a future work.

The knee frequency must be compared with the spin frequency $`f_s`$; for the Planck observational strategy proposed for the Phase A study $`f_s=1`$ r.p.m., i.e. $`\simeq 0.017`$ Hz. As a comparison, for a total power radiometer laboratory measurements have found knee frequencies between 10 and 100 Hz; the modified correlation radiometer scheme reduces the knee frequency by more than two orders of magnitude.
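The worked examples above are easy to reproduce; here is a short Python sketch of eq. (9), using the same illustrative parameter values quoted in the text:

```python
def knee_frequency(A, beta, r, T_n, T_x):
    """1/f knee frequency of eq. (9), in Hz (beta in Hz)."""
    return (A**2 * beta / 8.0) * (1.0 - r)**2 * (T_n / (T_n + T_x))**2

# 30 GHz: A = 1.8e-5, 20% bandwidth, T_x = 3 K, T_n = 10 K, T_y = 20 K
r30 = (3.0 + 10.0) / (20.0 + 10.0)                      # eq. (2)
print(knee_frequency(1.8e-5, 6.0e9, r30, 10.0, 3.0))    # ~0.046 Hz

# 100 GHz: A = 2.5e-5, 20% bandwidth, T_x = 3 K, T_n = 40 K, T_y = 20 K
r100 = (3.0 + 40.0) / (20.0 + 40.0)
print(knee_frequency(2.5e-5, 20.0e9, r100, 40.0, 3.0))  # ~0.11 Hz
```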
## 3 Simulation of the mission

We have written a code that simulates the basic properties of Planck observations in order to study the striping effect due to $`1/f`$-type noise on the measured sky temperatures. For simplicity our sky map includes only CMB fluctuations, generated in equi-cylindrical pixelisation (ECP) using the method of Muciaccia et al. (1997); we have then projected it onto the COBE-cube pixelisation in order to have quasi equal-area pixels. For our tests we have considered CMB fluctuations in a typical CDM scenario ($`\mathrm{\Omega }_b=0.05`$, with COBE/$`DMR`$ normalization). We have used here an input map $`𝐌_{\mathrm{𝐢𝐧}}`$ at a resolution of $`19.4^{\prime }`$ [i.e. at COBE-cube resolution 9] that we have derived from an ECP map with 1024 grid points along each parallel, computed using the multipoles up to $`l=512`$. The typical dimension of a pixel of an input map at resolution 9 is comparable with the beam FWHM of $`30^{\prime }`$ of the channels at 30 GHz, which we have considered for the present tests. Our input map is shown in Figure 1.

### 3.1 The simulation of the sky observation

The instrument design for the Planck mission calls for multi-frequency focal plane arrays placed at the focus of off-axis optical systems, in order to achieve proper angular resolution, sensitivity, and spectral coverage. As a consequence, not all the feedhorns can be located very close to the centre of the focal plane; the study of the implications of the related beam optical distortions on temperature measurements has been presented in other works (Burigana et al. 1997b,c).

For the present purpose we do not convolve the sky map with the beam response, but simply read the map temperature corresponding to the pixel identified by the central direction of a given beam during the sky scanning. Figure 2 shows a schematic representation of the observational geometry. Let $`i`$ be the angle between a unit vector $`\stackrel{\rightarrow }{s}`$, along the satellite spin axis (pointing away from the Sun), and the normal to the ecliptic plane, and let $`\stackrel{\rightarrow }{p}`$ be the unit vector of the direction of the optical axis of the telescope, at an angle $`\alpha `$ from the spin axis ($`i=90^{\circ }`$ and $`\alpha =70^{\circ }`$ for the Phase A study; Bersanelli et al. 1996). We choose two coordinates $`x`$ and $`y`$ on the plane tangent to the celestial sphere in the telescope optical axis direction, with unit vectors $`\stackrel{\rightarrow }{u}`$ and $`\stackrel{\rightarrow }{v}`$ respectively; we choose the $`x`$ axis according to the condition that the unit vector $`\stackrel{\rightarrow }{u}`$ always points toward the satellite spin axis; indeed, for the standard Planck observational strategy, this condition is preserved as the telescope scans different sky regions. With this choice of reference frame, we have $`\stackrel{\rightarrow }{v}=\stackrel{\rightarrow }{p}\times \stackrel{\rightarrow }{s}/|\stackrel{\rightarrow }{p}\times \stackrel{\rightarrow }{s}|`$ and $`\stackrel{\rightarrow }{u}=\stackrel{\rightarrow }{v}\times \stackrel{\rightarrow }{p}/|\stackrel{\rightarrow }{v}\times \stackrel{\rightarrow }{p}|`$ (here $`\times `$ indicates the vector product).

In general the coordinates $`(x_0,y_0)`$ of the beam centre will be identified by two angles; we use here the colatitude $`\theta _B`$ and the longitude $`\varphi _B`$ in the $`\stackrel{\rightarrow }{u},\stackrel{\rightarrow }{v},\stackrel{\rightarrow }{p}`$ reference frame (see Appendix A for details on the geometrical transformations). For the present test we assume a typical off-axis location for the considered beam: $`\theta _B=2.8^{\circ }`$, $`\varphi _B=45^{\circ }`$. We note that our choice of $`\theta _B`$ is representative of a typical LFI beam position for a telescope with a primary mirror of 1.5 m aperture in the new Planck optical configuration (see Mandolesi et al. 1997b). Our choice of $`\varphi _B`$ corresponds to a case of intermediate efficiency with respect to the destriping technique; the cases $`\varphi _B=0^{\circ }`$ (beam located along the $`\stackrel{\rightarrow }{u}`$ direction in the TICRA U-V plane) and $`\varphi _B=90^{\circ }`$ (beam located along the $`\stackrel{\rightarrow }{v}`$ direction in the U-V plane) are equivalent respectively to an on-axis beam and to a beam that suitably distributes the crossings between different circles in two regions, close to the ecliptic poles, of maximum size. The off-axis choice for $`\theta _B`$, when $`\varphi _B`$ differs significantly from 0, also ensures that, even in the particular case of an angle of $`90^{\circ }`$ between the spin axis and the telescope direction, the scanning circles do not always cross exactly the ecliptic poles but rather two somewhat larger regions around them, also for scanning circles corresponding to significantly different spin axis positions.
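The construction of the $`\stackrel{\rightarrow }{u}`$, $`\stackrel{\rightarrow }{v}`$, $`\stackrel{\rightarrow }{p}`$ frame defined above is simple to sketch with NumPy; the specific spin-axis and telescope directions below are arbitrary examples, not the adopted scanning parameters:

```python
import numpy as np

def beam_frame(s, p):
    """Unit vectors u, v of the tangent plane at the telescope optical
    axis p: v = p x s / |p x s|, u = v x p / |v x p|, so that u points
    toward the spin axis s."""
    s = s / np.linalg.norm(s)
    p = p / np.linalg.norm(p)
    v = np.cross(p, s)
    v /= np.linalg.norm(v)
    u = np.cross(v, p)
    u /= np.linalg.norm(u)
    return u, v

# Example: spin axis on the ecliptic plane (x axis), telescope at
# alpha = 85 deg from the spin axis, in the x-z plane
alpha = np.radians(85.0)
s = np.array([1.0, 0.0, 0.0])
p = np.array([np.cos(alpha), 0.0, np.sin(alpha)])
u, v = beam_frame(s, p)
```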
We recall that in the Planck scanning strategy of the Phase A study the sampling time of each receiver is chosen so as to have three samplings while the telescope axis describes on the celestial sphere an arc with a length equal to the beam FWHM ($`\simeq `$ three samplings per beam). For any value of the angle $`\alpha `$, this condition determines the number of samplings, $`n_p`$, per scan circle. We will study in detail the effect introduced by the telescope motion in a future work.

For the present purpose, we simply assign the proper beam directions to each sampling, extract the corresponding pixel and read its temperature in the simulated input map; given the assumed FWHM of $`30^{\prime }`$ and the pixel dimension of $`19.4^{\prime }`$ we have typically 2–3 different pixels explored in an integration time corresponding to 3 samplings. We recall that, in order to be able to reduce the white noise in a simple way, we want to "close" the scanning circle, i.e. we need to slightly modify the integration time by requiring that the telescope always points at the same set of directions when it repeats the cycles (120 in the Phase A study) with the same spin axis direction. Finally, we have applied a further adjustment of the integration time to make $`n_p`$ a multiple of 12, in order to be able to apply in the future the destriping technique not only with one but also with 2, 3 or 4 level constants per circle (see sections 5 and 7.3).

Our code leaves us free to implement arbitrary scanning strategies (see also Appendix A); we consider here cases with the spin axis always on the ecliptic plane and with an angle $`\alpha `$ between the spin axis and the telescope direction of $`80^{\circ },85^{\circ },90^{\circ }`$; lower values of $`\alpha `$, like the $`70^{\circ }`$ assumed for the Phase A study, would require wide oscillations ($`15^{\circ }`$–$`20^{\circ }`$) of the spin axis on the ecliptic plane (with relevant problems for the thermal stability) in order to observe the regions close to the ecliptic poles, which are indeed very informative, since they are not significantly contaminated by the galactic emission. We have also considered a case with $`\alpha =90^{\circ }`$ and ten $`10^{\circ }`$ sinusoidal oscillations of the spin axis on the ecliptic plane. For all the cases we have considered a 360-day mission, in order to compare results obtained with the same mission duration.

### 3.2 The generation of instrumental noise series

We generate white noise and 1/f noise using a random number generator code (see Press et al. 1992). For the white noise this is very simple: we just rescale a Gaussian random noise distribution, taking into account the radiometer $`rms`$ white noise [see eq. 1]. For including the $`1/f`$-type noise we have adapted to FORTRAN an original IDL code provided to us by M. Seiffert, based on the power spectrum expansion in Fourier space of the noise components. We generate white and $`1/f`$ noise together. First a random Gaussian distribution of (white) noise is generated; then we calculate its power spectrum using an FFT code (see Press et al. 1992); the amplitude of this power spectrum is then multiplied by $`(1+f_k/f)^{1/2}`$ to include the $`1/f`$ contribution. Finally, we obtain the time series containing both noise components by computing the inverse FFT of the spectrum.

When we take into account the real properties of the 30 GHz receivers, eq. (9) (Seiffert et al. 1997) gives the $`1/f`$ knee frequency of our radiometers. We adopt as reference for the present simulations $`f_k=0.05`$ Hz, a value appropriate for a cooling efficiency that allows one to keep a load temperature $`T_y\simeq 20`$ K (we will use here $`T_n=9`$ K, $`\beta =6`$ GHz). Our noise series consist of about $`2\times 10^6`$ evaluations, and a single series covers the integrations for 8 different contiguous spin axis directions (16 hours).
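The spectral-shaping recipe just described (rescale Gaussian white noise, take the FFT, multiply the spectrum by $`(1+f_k/f)^{1/2}`$, transform back) can be sketched in a few lines of Python; the series length, sampling time and noise level below are the illustrative values used in this section:

```python
import numpy as np

def noise_stream(n, dt, sigma, f_knee, seed=0):
    """White + 1/f noise: shape the spectrum of a Gaussian white series
    by (1 + f_k/f)^(1/2), as described in the text."""
    rng = np.random.default_rng(seed)
    white = rng.normal(0.0, sigma, n)
    spec = np.fft.rfft(white)
    f = np.fft.rfftfreq(n, dt)
    shape = np.ones_like(f)
    shape[1:] = np.sqrt(1.0 + f_knee / f[1:])  # leave the f = 0 term alone
    return np.fft.irfft(spec * shape, n)

# ~2e6 samples at ~0.03 s each, white-noise level 1.322 mK, f_k = 0.05 Hz
stream = noise_stream(2**21, 0.03, 1.322e-3, 0.05)
```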
We use a single series for white noise and $`1/f`$ noise together, but we also generate a noise series that includes white noise only; this increases the computation time by only a few percent (see above) but is useful for comparison and for the quantification of the stripes magnitude and of the destriping efficiency (see sections 6–10). In our code the generation of the noise series is coupled with the sky observation; more precisely, because of the dependence of the noise on the antenna temperature $`T_x`$, we include the exact local sky temperature in the noise magnitudes. This kind of flexibility may also be useful in the future for simulations that will include the Galaxy emission and thermal drifts. The only simplification is then in the estimate of the knee frequency for the $`1/f`$ noise series, which is assumed to be constant (i.e. with constant $`T_x`$, not allowing for spatial or time variations); the adopted value is consistent with the CMB monopole antenna temperature and a typical environment temperature of 1 K.

### 3.3 The data stream generation and recording

We have recorded our data streams in 4 matrices; each row of these matrices refers to the data obtained from a given spin axis direction; the number of columns is equal to the number of samplings ($`n_p\simeq 2100`$, depending on $`\alpha `$); the number of rows is equal to the number, $`n_s`$, of different spin axis directions (4320 for a shift of 5’ in the spin axis direction). We have recorded the following data:

* the matrix $`𝐍`$, which contains the pixel numbers – 4 bytes per pixel – (in COBE-cube pixelisation) corresponding to the different integrations;
* the matrix $`𝐓`$, which contains the "global" temperatures observed by the receiver in the above sky directions, directly averaged over the number of cycles per scan circle (120 in the Phase A study scheme) in order to avoid using a needlessly large amount of memory. $`𝐓`$ includes: the "input" sky temperature fluctuation as observed in the adopted geometrical scheme, which obviously can be computed a single time for any given spin axis direction; and the white noise and the 1/f noise averaged over the number of cycles for each spin axis direction;
* the matrix $`𝐖`$, generated in the same way as the matrix $`𝐓`$, but containing the temperatures that would be observed in the presence of white noise only (see section 3.2);
* the matrix $`𝐆`$, which contains the temperatures that would be observed in the absence of instrumental noise and is useful to check the correctness of the geometrical part of our flight simulation code.

Of course the matrices $`𝐖`$ and $`𝐆`$ have no counterpart in a real observation. In principle, if we simulate the mission for a time long enough that the spin axis returns to the same direction after a certain period (360 days, as a reference allowing for the case $`\alpha =90^{\circ }`$), we can average the data of the second period with the corresponding rows of the first period, keeping track of the fact that they will be affected by a (statistical) error $`\sqrt{2}`$ times smaller than that of rows observed for a single period. For a more realistic simulation, we need to record a further matrix $`𝐄`$ with the statistical sensitivities corresponding to the pixels of the matrix $`𝐍`$, which takes this into account, as well as a possible degradation in sensitivity for some elements of the data streams due, for example, to cosmic rays, spurious effects, etc.
We neglect these kinds of complications in the present analysis, which is equivalent to saying that all the elements of $`𝐄`$ take a constant value. Nevertheless, in section 5 we derive our formulas including this possible effect, for the sake of generality. For implementing the destriping techniques (see section 5) we need to recognize when the pointing directions for different spin axis directions are substantially identical. The sky pointing direction is stored in this scheme only through the corresponding pixel number. Then, from a statistical point of view, two pointing directions are considered identical provided that their distance is smaller than the pixel size. The resulting number of pixels in common is then related to the assumed pixel size. We have therefore also recorded:

* additional matrices $`𝐍_𝐇`$ (H=1,2,…,$`n_R`$) containing the pixel numbers corresponding to the different integrations for a certain number, $`n_R`$, of resolutions higher than that used for the input/output maps (for example at resolution 10 or 11 for an input map at resolution 9). By exploiting these matrices we will be able to test the adopted destriping technique under more stringent conditions on the average distance between the pointing directions in the search for the pixels in common.

## 4 From data streams to observed simulated maps

Given the above simulated data streams, it is quite simple to obtain the following simulated observed maps (see also section 5.2), which can be easily compared with one another. We compute the sensitivity map, $`𝐌_𝐒`$, with which any map pixel is observed, by recognizing how many times a given pixel is observed from the analysis of the whole matrix $`𝐍`$ (and $`𝐄`$ if this is the case). From the matrices $`𝐍`$ and $`𝐓`$, we average the temperatures corresponding to the same pixel in different matrix positions to obtain the observed temperature map $`𝐌_𝐓`$ (including the noise). In a similar way, from the matrices $`𝐍`$ and $`𝐖`$ (or $`𝐆`$) we obtain the observed temperature map $`𝐌_𝐖`$ (or $`𝐌_𝐆`$) computed in the presence of white noise only (or in the absence of instrumental noise). We have verified that the map $`𝐌_𝐆`$ is identical to the "input" map for all the observed pixels, confirming the validity of the geometrical part of our flight simulation code.
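The map-making step just described is a simple co-addition per pixel. A minimal NumPy sketch (the function name is ours; the hit counts play the role of the sensitivity map $`𝐌_𝐒`$, since the sensitivity per pixel scales as the inverse square root of the number of observations):

```python
import numpy as np

def coadd_map(N, T, n_pix):
    """Average the stream temperatures T falling into each pixel listed
    in the pointing matrix N; returns the map and the hit counts."""
    hits = np.bincount(N.ravel(), minlength=n_pix)
    tsum = np.bincount(N.ravel(), weights=T.ravel(), minlength=n_pix)
    m = np.zeros(n_pix)
    seen = hits > 0
    m[seen] = tsum[seen] / hits[seen]
    return m, hits
```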
## 5 Destriping techniques

We have developed a technique to eliminate the effects of gain drifts in the Planck signal due to the 1/$`f`$ noise. The method is derived from that proposed for the Phase A Study and re-analyzed by Delabrouille (1997). On the other hand, our treatment of the simulation of the Planck observations, although simplified, is general enough to be close to the "real" Planck observations; we therefore derive below the mathematical formalism of the destriping in a way directly applicable to our simulated data streams.

### 5.1 Mathematical formalism

In this section we discuss how to eliminate the effects of gain drifts on timescales greater than that for which the spin axis points at a given direction (2 hours for the Phase A scheme, or 1 hour or 40 minutes for possible new observational strategies in which the spin axis shifts by 2.5’ or 1.6’ every hour or 40 minutes respectively), i.e. during which the satellite scans a given circle in the sky.

After removing the drifts within any given scan circle by averaging the observations over the corresponding cycles (see section 3.1), each set of observations at a given spin axis direction, denoted by the index $`i`$, is characterized by an additive level $`A_i`$ which is related to the "mean" $`1/f`$ noise level during the observation of that scan circle. These levels $`A_i`$ are different for different circles, owing to the gain fluctuations. Our goal is to obtain a reduced set of observations of the different scan circles by removing the contamination that affects each circle. We will therefore subtract from all sets of observations on a given scan circle their own characteristic level $`A_i`$. As a variant, we can attribute several constant levels, say $`n_l`$ levels, to each single scan circle. From the computational point of view this is exactly equivalent to rearranging all the matrices of section 3.3 by dividing their rows into $`n_l`$ parts that are appropriately relocated to construct new matrices with $`n_s\times n_l`$ rows and $`n_p/n_l`$ columns, which can then be analysed exactly as in the case of a single constant per circle.

For estimating all these levels we use a computation scheme able to simultaneously find the pixels in common between different scan circles and generate a linear system whose solution gives the unknowns $`A_i`$. The observations of the different directions in the sky explored by the satellite have been recorded in three matrices $`𝐍`$, $`𝐓`$, $`𝐄`$ of $`n_s`$ rows and $`n_p`$ columns, where $`n_s`$ is the number of different spin axis directions, and $`n_p`$ is the number of samplings at different horn pointing directions in a given circle. Here $`N_{il}`$ ($`i=1,\mathrm{\dots },n_s`$, $`l=1,\mathrm{\dots },n_p`$) contains the pixel number corresponding to the observed direction in the sky, and $`T_{il}`$ and $`E_{il}`$ are respectively the corresponding observed temperature (full signal due to the sky plus noise) and the estimate of the $`rms`$ noise, essentially due to the white noise. We observe that $`E_{il}`$ is properly related to the amplifier noise temperature and to the observed antenna temperature, which in a real case depends on the local thermal conditions and on the true sky temperature, but may also depend on possible "spurious effects". At this level we have in reality only "first-order" information about all these quantities; on the other hand, their accurate knowledge is not crucial, since the amplifier noise temperature is typically higher than the observed temperature. In any case we expect to obtain more accurate information from accurate thermal models and from iterating our data reduction scheme, so as to achieve a good determination of the sky temperature, which is of course our goal. In the following the first index will denote the row index and the second the column index.

We must check for all the possible crossing points of two different circles for the whole ensemble of $`n_s`$ circles (see also section 3.3). Let $`\pi `$ be an index that identifies a generic couple of different observations corresponding to the same pixel in the sky, i.e. a pixel in common between two scan circles: $`\pi `$ ranges from $`1`$ to $`n_c`$, where $`n_c`$ is the total number of couples found. Therefore the index $`\pi `$ is related to two elements of the matrix $`𝐍`$: $`\pi \leftrightarrow (il,jm)`$. Here $`i`$ and $`j`$ identify the two circles for which we found common pixels, and $`l`$ and $`m`$ are the positions of the common pixels on the circle $`i`$ and on the circle $`j`$ respectively. So we have $`N_{il}=N_{jm}`$.
As a variant, we can replace the matrix $`𝐍`$ with one of the matrices $`𝐍_𝐇`$ (see sections 3.3 and 7.4) in the search for the pixels in common, according to the adopted average maximum distance for recognizing two pixels as being in common. We want to minimize the quantity:

$$S=\sum_{\mathrm{all}\;\mathrm{couples}}\left[\frac{\left[(A_i-A_j)-(T_{il}-T_{jm})\right]^2}{E_{il}^2+E_{jm}^2}\right]=\sum_{\pi =1}^{n_c}\left[\frac{\left[(A_i-A_j)-(T_{il}-T_{jm})\right]^2}{E_{il}^2+E_{jm}^2}\right]_\pi$$ (10)

with respect to the set of the unknown levels $`A_i`$; the index $`\pi `$ on the right-hand side of this equation indicates that each set $`(il,jm)`$ derives from a given pixel $`\pi `$. $`S`$ is quadratic in all the unknowns $`A_i`$; on the other hand, only the differences between the levels $`A_i`$ enter this expression, so that the solution will be indeterminate, i.e. the levels are determined up to an arbitrary additive constant (with no physical meaning, as is obvious for anisotropy measurements). To remove this indetermination, we add a constraint on the $`A_i`$:

$$\sum_{h=1}^{n_s}A_h=0.$$ (11)

This is equivalent to minimizing the quantity:

$$S^{\prime }=S+\left(\sum_{h=1}^{n_s}A_h\right)^2$$ (12)

Now let us go through some algebra. Differentiating the previous equation, we finally obtain:

$$\frac{1}{2}\frac{\partial S^{\prime }}{\partial A_k}=\sum_{\pi =1}^{n_c}\left[\frac{\left[(A_i-A_j)-(T_{il}-T_{jm})\right]\left[\delta _{ik}-\delta _{jk}\right]}{E_{il}^2+E_{jm}^2}\right]_\pi +\sum_{h=1}^{n_s}A_h=0$$ (13)

for *all* $`k=1,\mathrm{\dots },n_s`$ (here the $`\delta `$ are the usual Kronecker symbols). So we have a set of $`n_s`$ equations:

$$\sum_{t=1}^{n_s}C_{kt}A_t=B_k,\qquad k=1,\mathrm{\dots },n_s.$$ (14)

We denote by $`𝐂`$ and $`𝐁`$, respectively, the matrix of the coefficients $`C_{kt}`$ and the vector of the coefficients $`B_k`$. To be concrete, we show here how $`𝐂`$ and $`𝐁`$ are built up as we extract the pixels in common between the different rows. First of all, we set $`𝐁=0`$ and $`C_{kt}=1`$ for all $`k,t`$ (setting all the $`C_{kt}`$ to $`1`$ takes into account the second term of eq. 13). Then for each couple $`\pi `$ of pixels in common between two scan circles we define:

$$\chi _\pi =\left[\frac{1}{E_{il}^2+E_{jm}^2}\right]_\pi$$ (15)

and

$$\tau _\pi =\left[\frac{T_{il}-T_{jm}}{E_{il}^2+E_{jm}^2}\right]_\pi$$ (16)

From the above equation and the definition of the Kronecker symbol, we easily see that a given couple $`\pi `$ contributes only to two equations of our linear system, those for $`k=i`$ or $`k=j`$, where as usual $`i`$ and $`j`$ correspond to two different observations of the same pixel. If we iteratively increment the coefficients of $`𝐂`$ and $`𝐁`$ as we find a new couple, explicitly we have [remember $`\pi \leftrightarrow (il,jm)`$]:

$$C_{ii}\leftarrow C_{ii}+\chi _\pi$$ (17)

$$C_{ij}\leftarrow C_{ij}-\chi _\pi$$ (18)

$$C_{ji}\leftarrow C_{ji}-\chi _\pi$$ (19)

$$C_{jj}\leftarrow C_{jj}+\chi _\pi$$ (20)

$$B_i\leftarrow B_i+\tau _\pi$$ (21)

$$B_j\leftarrow B_j-\tau _\pi$$ (22)

Summing up, each couple $`\pi `$ contributes to only six terms, and the resulting system shows a complete symmetry with respect to the exchange of the indexes $`i`$ and $`j`$.
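The incremental rules (15)–(22) translate directly into code. The following Python sketch assembles $`𝐂`$ and $`𝐁`$ for one level per circle ($`n_l=1`$); initializing all $`C_{kt}`$ to 1 implements the constraint term of eq. (13). A production code would exploit the symmetry and the disk buffering discussed below:

```python
import numpy as np

def build_system(couples, T, E, n_s):
    """Assemble C and B from the couples pi = (i, l, j, m),
    following eqs. (15)-(22)."""
    C = np.ones((n_s, n_s))   # the constant 1 enforces sum_h A_h = 0
    B = np.zeros(n_s)
    for i, l, j, m in couples:
        chi = 1.0 / (E[i, l]**2 + E[j, m]**2)   # eq. (15)
        tau = (T[i, l] - T[j, m]) * chi         # eq. (16)
        C[i, i] += chi
        C[j, j] += chi
        C[i, j] -= chi
        C[j, i] -= chi
        B[i] += tau
        B[j] -= tau
    return C, B
```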
The linear system defined by eq. (14) has some interesting properties which considerably simplify the numerical computation of its solution. In particular, the matrix $`𝐂`$:

* is *symmetric*, so we can hold in memory only half of the matrix (say the upper-right part), solve the system and speed up the code by computing only half of the matrix coefficients. This is possible because the Gauss reduction algorithm preserves at each step the symmetry of the remaining part of the matrix;
* is *positive definite*, so we never find a null pivot when reducing a non-singular matrix (Strang 1976). This allows us to solve the system without having to exchange rows or columns, thereby preserving the symmetry;
* is *non-singular* (provided that there are enough intersections between scan circles), because the only fundamental indetermination has been removed by imposing the constraint (11).

In any case, after the system has been solved, we take care to substitute the solution back into the original matrix $`𝐂`$, to verify its correctness and to check for rounding errors and/or accidental degeneracies.

### 5.2 Some remarks on numerical efficiency, RAM requirements and off-sets

In order to speed up the construction of the matrix $`𝐂`$ and of the vector $`𝐁`$, we have found it very advantageous to first sort (we use the quick sort algorithm) all the elements of the matrix $`𝐍`$, i.e. the observed pixels (4-byte integers), into the first column of a new "matrix" $`𝐔`$ (of $`n_p\times n_s`$ rows and 3 "columns"), keeping track of their locations (2$`\times `$2-byte integers) in the original matrix $`𝐍`$ in the other two "columns" of $`𝐔`$. In this way we extract once and for all each pixel in common between two scan circles, for all scan circles, since the same pixel is located in contiguous rows of the matrix $`𝐔`$: we consider all the possible pairs of rows of $`𝐔`$ with the same element in the first column, with the simple caution that the elements of the second column of $`𝐔`$, i.e. the original rows in the matrix $`𝐍`$, must be different. In this way the "scanning" of the matrix $`𝐍`$ and the construction of $`𝐂`$ and $`𝐁`$ according to the rules of section 5.1 turn out to be very fast. It is immediate to use the matrices $`𝐍_𝐇`$, containing the pixel numbers at higher resolutions, in the construction of $`𝐂`$ and $`𝐁`$ if one wants to adopt more stringent conditions on the distance between pixels in common. In addition, working with the auxiliary matrix $`𝐔`$ optimizes the construction of the simulated maps from the simulated data streams (see section 4), since it is immediate to recognize in the matrix $`𝐔`$ when the same pixel has been observed.

For the solution of the linear system (14) we have found that the Gauss elimination method works very well. We prefer to construct and solve the system in double precision, to have high numerical accuracy and to be sure to avoid artificial numerical singularities; moreover, thanks to the matrix symmetry and positive definiteness, we do not need pivoting.
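Both steps can be sketched compactly in Python: the sort-based search for couples mirrors the auxiliary matrix $`𝐔`$, while for the solution we use here a Cholesky factorization from SciPy instead of the plain Gauss elimination described above (both approaches rely on the symmetry and positive definiteness of $`𝐂`$):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def find_couples(N):
    """Sort the (pixel, row, column) triples and pair up observations
    of the same pixel that belong to different scan circles."""
    rows, cols = np.indices(N.shape)
    order = np.argsort(N.ravel(), kind="stable")
    pix = N.ravel()[order]
    r, c = rows.ravel()[order], cols.ravel()[order]
    couples, start = [], 0
    for k in range(1, len(pix) + 1):
        if k == len(pix) or pix[k] != pix[start]:
            group = range(start, k)
            couples += [(r[a], c[a], r[b], c[b])
                        for a in group for b in group
                        if a < b and r[a] != r[b]]
            start = k
    return couples

def solve_levels(C, B):
    """C is symmetric positive definite: Cholesky needs no pivoting."""
    return cho_solve(cho_factor(C), B)
```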
To build up the linear system, we have to keep the memory space for the system matrix, the system known terms, the observed temperature matrix and the auxiliary (integer) matrix $`𝐔`$; by taking advantage of the symmetry, and by considering in general $`n_l`$ constant levels per scan circle, the memory requirement is: $`8\mathrm{b}\mathrm{y}\mathrm{t}\mathrm{e}\mathrm{s}\times [n_ln_s(n_ln_s+1)/2+n_ln_s+n_sn_p]+4\mathrm{b}\mathrm{y}\mathrm{t}\mathrm{e}\mathrm{s}\times 2\times n_sn_p`$. For example, at 30 GHz (FWHM $`\simeq 30^{\prime }`$, $`n_p\simeq 2100`$), for the case of a 5’ shift of the spin axis ($`n_s=4320`$) we need about 220 (440) Mbytes when working with $`n_l=1`$ ($`n_l=2`$); for a 2.5’ shift of the spin axis, $`n_s=8640`$ and we need about 590 (1500) Mbytes when working with $`n_l=1`$ ($`n_l=2`$).

For the sake of illustration, if we have a beam of $`10^{\prime }`$ (like the nominal 100 GHz beams) and we want to record 4 samplings per beam, we will have $`n_p\simeq 8400`$; for a 2.5’ shift of the spin axis and working with $`n_l=1`$ ($`n_l=2`$), the memory requirement is about 1500 (2400) Mbytes. This memory problem can be solved by taking advantage of disk buffers; we discuss our solution in Appendix B. For solving the system we only need to keep in memory the system matrix and the known terms, and the memory requirement is: $`8\mathrm{b}\mathrm{y}\mathrm{t}\mathrm{e}\mathrm{s}\times [n_ln_s(n_ln_s+1)/2+n_ln_s]`$. At 30 GHz and with 3 samplings per beam, we need about 75 Mbytes if $`n_s=4320`$ and $`n_l=1`$, 300 Mbytes if $`n_s=4320`$ and $`n_l=2`$ or $`n_s=8640`$ and $`n_l=1`$, and 1200 Mbytes if $`n_s=8640`$ and $`n_l=2`$. This problem may be crucial depending on the available amount of RAM, especially because the Gauss elimination continuously overwrites the system coefficients. Also this memory problem can be solved by taking advantage of disk buffers (Appendix B).
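The memory budget above is easy to tabulate; a small Python helper reproducing the figures quoted in the text (byte counts exactly as given in this section):

```python
def build_bytes(n_s, n_p, n_l):
    """RAM to build the system: half of C plus B (doubles), the
    temperature matrix (doubles) and the 2-integer auxiliary matrix U."""
    n = n_l * n_s
    return 8 * (n * (n + 1) // 2 + n + n_s * n_p) + 4 * 2 * n_s * n_p

def solve_bytes(n_s, n_l):
    """RAM to solve the system: half of C plus B (doubles)."""
    n = n_l * n_s
    return 8 * (n * (n + 1) // 2 + n)

print(build_bytes(4320, 2100, 1) / 1e6)  # ~220 Mbytes
print(solve_bytes(8640, 2) / 1e6)        # ~1200 Mbytes
```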
After the solution of the linear system, we obtain a "destriped" matrix $`𝐃`$ by subtracting the level $`A_i`$ from the $`i`$-th row ($`i=1,\mathrm{\dots },n_s`$) of the matrix $`𝐓`$. We then apply to this matrix the same treatment of section 4 and obtain the observed destriped temperature map $`𝐌_𝐃`$. We observe that, contrary to the case of pure white noise, the average of a $`1/f`$-type noise series can be significantly different from zero. Therefore the map $`𝐌_𝐓`$ (as well as the map $`𝐌_𝐃`$, but in general with a somewhat different value) may present an off-set with respect to the map $`𝐌_𝐆`$; on the contrary, the off-set between the map $`𝐌_𝐖`$ and the map $`𝐌_𝐆`$ is negligible. As a typical example, for the simulation with $`\alpha =90^{\circ }`$ (see Table 1) we find that these off-sets are $`\simeq 4.8\mu `$K; for comparison, the off-set we find between the maps $`𝐌_𝐖`$ and $`𝐌_𝐆`$ is much smaller, $`\simeq 0.075\mu `$K. The off-set between $`𝐌_𝐓`$ ($`𝐌_𝐃`$, $`𝐌_𝐖`$) and $`𝐌_𝐆`$ is of course not relevant for anisotropy measurements, nor would we be able to subtract it in a real case (on the other hand, we must pay attention to the fact that off-sets may be present between maps produced by different receivers at the same frequency). The off-set of the map $`𝐌_𝐓`$ ($`𝐌_𝐃`$, $`𝐌_𝐖`$) must be removed by subtracting the difference between the average of the map $`𝐌_𝐓`$ ($`𝐌_𝐃`$, $`𝐌_𝐖`$) and that of the map $`𝐌_𝐆`$: this is necessary for a correct quantitative analysis of the stripes magnitude and of the destriping efficiency. We indicate with $`\stackrel{~}{𝐌}_𝐓`$, $`\stackrel{~}{𝐌}_𝐃`$ (and $`\stackrel{~}{𝐌}_𝐖`$, although it is not relevant in practice for this matrix) the above maps once this kind of off-set has been removed.

## 6 Estimators of the destriping efficiency

In the previous sections we have described our simulations of the Planck observations and the basic treatment to convert observational data streams into sky maps, including destriping techniques. Here we analyse the efficiency of the adopted destriping technique, by considering well-known estimators.

### 6.1 Ratio between the $`\chi _r^2`$’s

We expect that the average of the squares of the differences between the elements of the maps $`\stackrel{~}{𝐌}_𝐖`$ and $`𝐌_𝐆`$, divided by the observation sensitivity [essentially the estimator $`\chi _r^2`$; we will call it in this case $`(\chi _r^2)_W`$], is very close to 1, because only the white noise is present in this case. We can compute the same estimator for $`\stackrel{~}{𝐌}_𝐓-𝐌_𝐆`$ [$`(\chi _r^2)_T`$, undestriped case] and $`\stackrel{~}{𝐌}_𝐃-𝐌_𝐆`$ [$`(\chi _r^2)_D`$, destriped case]. (As follows from the above considerations, using observed maps without removing the off-sets would yield a meaningless amplification of the $`\chi _r^2`$.) On the other hand, we find that the exact value of $`(\chi _r^2)_W`$ may be slightly different from 1, depending on the assumed sensitivity; for example it changes slightly depending on whether we divide $`\stackrel{~}{𝐌}_𝐖-𝐌_𝐆`$ by the map $`𝐌_𝐒`$ (i.e. by the sensitivity proper to each pixel – we will use this definition in our tables), by the average of the sensitivities in the map $`𝐌_𝐒`$, or by the estimate of the average sensitivity obtained on the basis of the global mission time, the observed number of pixels (393216 for our maps at COBE-cube resolution 9) and the properties of the considered receiver. We therefore prefer to use as estimator the ratio between $`(\chi _r^2)_T`$ – or $`(\chi _r^2)_D`$ – and $`(\chi _r^2)_W`$. This "renormalized" estimator (which we will denote by $`\chi _{r,n,T}^2`$ and $`\chi _{r,n,D}^2`$ respectively for the undestriped and destriped cases) is independent of the choice of the sensitivity adopted for the $`\chi _r^2`$ calculation, so allowing a better understanding of the magnitude of the striping effect and of the destriping efficiency. We will quantify the destriping efficiency by using the relative decrease of the renormalized $`\chi _r^2`$, i.e. the quantity $`[(\chi _r^2)_D-(\chi _r^2)_T]/[(\chi _r^2)_T-1]`$.

### 6.2 Magnitude of the stripes temperature

From the values of $`\chi _{r,n,T}^2`$ and $`\chi _{r,n,D}^2`$ defined above, and from the average $`rms`$ white noise, $`rms_W`$, of the observed pixels derived from the sensitivity map $`𝐌_𝐒`$, we can easily give an estimate of the $`rms`$ temperature of the stripes before, $`rms_T`$, and after destriping, $`rms_D`$. Under the hypothesis that the error introduced by the noise in each pixel may be thought of as the sum of two uncorrelated contributions from white noise and $`1/f`$ noise, we have $`rms_T=rms_W\sqrt{\chi _{r,n,T}^2-1}`$ and $`rms_D=rms_W\sqrt{\chi _{r,n,D}^2-1}`$ [Method (a)]. Another estimate [Method (b)] of the $`rms`$ stripes temperature can be obtained by directly evaluating the global temperature $`rms`$ difference before, $`rms_{tot,T}`$, or after, $`rms_{tot,D}`$, the destriping from the comparison with the map $`𝐌_𝐆`$, and by assuming that these are given by the sum in quadrature of $`rms_W`$ and $`rms_T`$ or $`rms_D`$. From the values of $`rms_W`$, $`rms_{tot,T}`$ and $`rms_{tot,D}`$ we can then calculate $`rms_T`$ or $`rms_D`$. Finally [Method (c)], we can treat the $`1/f`$ contribution to the total noise as a systematic (and not statistical) error and therefore assume that $`rms_{tot,T}`$ or $`rms_{tot,D}`$ are simply given by the sum of $`rms_W`$ and $`rms_T`$ or $`rms_D`$. For estimating the destriping efficiency in terms of residual stripes temperature we will use the relative decrease of the stripes temperature, i.e. the quantity $`(rms_D-rms_T)/rms_T`$.
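For reference, the three recipes of this subsection in compact Python form (the function name is ours; all inputs are in the same temperature units, and `chi2_T`, `chi2_D` denote the renormalized $`\chi _{r,n}^2`$ values):

```python
import numpy as np

def stripe_rms(chi2_T, chi2_D, rms_W, rms_tot_T, rms_tot_D):
    """rms stripe temperature (before, after) destriping, using the
    three methods (a), (b), (c) defined above."""
    a = (rms_W * np.sqrt(chi2_T - 1.0), rms_W * np.sqrt(chi2_D - 1.0))
    b = (np.sqrt(rms_tot_T**2 - rms_W**2), np.sqrt(rms_tot_D**2 - rms_W**2))
    c = (rms_tot_T - rms_W, rms_tot_D - rms_W)
    return {"a": a, "b": b, "c": c}

# e.g. method (a) with the Table 1 values: 27.18*sqrt(1.1770-1) ~ 11.4 uK
```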
## 7 Results

From the visual inspection of the simulated maps the effect of the noise is of course not clearly evident; we only recognize some degradation of the map details. The stripes become more evident when we plot the noise map alone (see Figure 3); it can be obtained by subtracting the CMB fluctuation map ($`𝐌_𝐆`$) from the observed map. The stripe patterns closely reproduce the adopted scanning strategy.

### 7.1 Destriping versus scanning strategy

The visual inspection of our "stripes" maps does not allow us to quantify the striping magnitude and the reduction obtained with the destriping procedure (see Figures 3 and 4). The statistical analysis of the maps allows a much better understanding of the efficiency of the destriping procedure. We present here (see Tables 1–4) the results of our simulations (for a typical channel at 30 GHz) in terms of the reduced $`\chi _{r,n}^2`$ and of the stripes temperature for the three chosen values of the (constant) angle $`\alpha `$ between the spin axis and the telescope direction, and for the considered case with $`\alpha =90^{\circ }`$ and spin axis oscillations (see also Appendix A). We also report in the tables information on some relevant quantities: the ideal white noise for a single sampling time, which weakly increases with $`\alpha `$ for geometrical reasons if we want to have 3 samplings per beam; the (single receiver) average $`rms`$ white noise, $`rms_W`$ (expressed in $`\mu `$K in the tables), for the observed pixels for a 360-day mission; the square of the ratio $`R`$ between these white noises, normalized to the intermediate case of $`\alpha =85^{\circ }`$; the percentage of sky which is observed by the considered single off-axis beam, which also increases with $`\alpha `$; the number of couples in common found in the destriping procedure and the number of constant levels per scan circle used in the destriping code; the map resolution and the resolution used for searching the pixels in common. We recall the adopted values of the $`1/f`$ knee frequency, 0.05 Hz, and of the bandwidth, 6 GHz, and that the sampling time is about 0.03 sec, the exact value depending on the chosen value of $`\alpha `$.

In the tables we report our values of $`\chi _{r,n}^2`$ before and after the destriping procedure and the relative (%) decrease, both without the multiplicative factor $`R^2`$ and taking it into account. Indeed, the (white noise) sensitivity per pixel is different for different scanning strategies; by including the factor $`R^2`$ we "renormalize" the values of $`\chi _{r,n}^2`$ to the same (white noise) sensitivity level, thereby making them essentially independent of the $`\alpha `$–dependent sensitivity.

| Table 1: destriping results; $`\alpha =90^{\circ }`$. | | | |
| --- | --- | --- | --- |
| Some global parameters | | | |
| $`\mathrm{\Delta }T_W=1.322`$mK | $`rms_W=27.18\mu `$K | $`R^2=1.00666`$ | % sky = 99.98 |
| Map Res. = 9 | Res. common pix. = 9 | $`n_l=1`$ | pix. in common = $`2.70\times 10^8`$ |
| Before destriping | After destriping | % improvement | Method |
| $`\chi _{r,n,T}^2=1.1770`$ | $`\chi _{r,n,D}^2=1.0216`$ | 87.8 % | |
| $`\chi _{r,n,T}^2R^2=1.1848`$ | $`\chi _{r,n,D}^2R^2=1.0284`$ | 84.6 % | |
| $`rms_T=11.4\mu `$K | $`rms_D=3.99\mu `$K | 65.1 % | (a) |
| $`rms_T=17.3\mu `$K | $`rms_D=6.93\mu `$K | 59.9 % | (b) |
| $`rms_T=5.03\mu `$K | $`rms_D=0.87\mu `$K | 82.7 % | (c) |

| Table 2: destriping results; $`\alpha =85^{\circ }`$. | | | |
| --- | --- | --- | --- |
| Some global parameters | | | |
| $`\mathrm{\Delta }T_W=1.318`$mK | $`rms_W=27.09\mu `$K | $`R^2=1`$ | % sky = 99.43 |
| Map Res. = 9 | Res. common pix. = 9 | $`n_l=1`$ | pix. in common = $`2.34\times 10^8`$ |
| Before destriping | After destriping | % improvement | Method |
| $`\chi _{r,n,T}^2=1.2709`$ | $`\chi _{r,n,D}^2=1.0142`$ | 94.7 % | |
| $`\chi _{r,n,T}^2R^2=1.2709`$ | $`\chi _{r,n,D}^2R^2=1.0142`$ | 94.7 % | |
| $`rms_T=14.1\mu `$K | $`rms_D=3.23\mu `$K | 77.1 % | (a) |
| $`rms_T=21.6\mu `$K | $`rms_D=7.31\mu `$K | 66.2 % | (b) |
| $`rms_T=7.98\mu `$K | $`rms_D=0.969\mu `$K | 87.9 % | (c) |

| Table 3: destriping results; $`\alpha =80^{\circ }`$. | | | |
| --- | --- | --- | --- |
| Some global parameters | | | |
| $`\mathrm{\Delta }T_W=1.311`$mK | $`rms_W=26.90\mu `$K | $`R^2=0.98602`$ | % sky = 98.12 |
| Map Res. = 9 | Res. common pix. = 9 | $`n_l=1`$ | pix. in common = $`2.20\times 10^8`$ |
| Before destriping | After destriping | % improvement | Method |
| $`\chi _{r,n,T}^2=1.0396`$ | $`\chi _{r,n,D}^2=1.0292`$ | 26.19 % | |
| $`\chi _{r,n,T}^2R^2=1.0251`$ | $`\chi _{r,n,D}^2R^2=1.0148`$ | 41.0 % | |
| $`rms_T=5.35\mu `$K | $`rms_D=4.60\mu `$K | 14.1 % | (a) |
| $`rms_T=8.05\mu `$K | $`rms_D=6.94\mu `$K | 13.8 % | (b) |
| $`rms_T=1.17\mu `$K | $`rms_D=0.88\mu `$K | 24.8 % | (c) |

| Table 4: destriping results; $`\alpha =90^{\circ }\pm 10^{\circ }`$ (10 sinusoidal oscillations). | | | |
| --- | --- | --- | --- |
| Some global parameters | | | |
| $`\mathrm{\Delta }T_W=1.322`$mK | $`rms_W=29.07\mu `$K | $`R^2=1.15152`$ | % sky = 100 |
| Map Res. = 9 | Res. common pix. = 9 | $`n_l=1`$ | pix. in common = $`2.36\times 10^8`$ |
| Before destriping | After destriping | % improvement | Method |
| $`\chi _{r,n,T}^2=1.1165`$ | $`\chi _{r,n,D}^2=1.0153`$ | 86.9 % | |
| $`\chi _{r,n,T}^2R^2=1.2857`$ | $`\chi _{r,n,D}^2R^2=1.1691`$ | 40.8 % | |
| $`rms_T=9.92\mu `$K | $`rms_D=3.60\mu `$K | 63.8 % | (a) |
| $`rms_T=15.2\mu `$K | $`rms_D=8.62\mu `$K | 43.3 % | (b) |
| $`rms_T=3.73\mu `$K | $`rms_D=1.25\mu `$K | 66.5 % | (c) |

### 7.2 Destriping versus $`1/f`$ knee frequency

For the interesting case $`\alpha =90^{\circ }`$ with no oscillations, we carried out another simulation with a much larger value of the $`1/f`$ knee frequency, $`f_k=10`$ Hz, of the order of that expected for total power radiometers. It is interesting to study the stripes effect and the destriping performance under this very pessimistic condition. Table 5 shows our results, which have to be compared with those of Table 1, based on the theoretical estimate of $`f_k`$ for our kind of radiometers.

| Table 5: destriping results; $`\alpha =90^{\circ }`$; $`f_k=10`$Hz. | | | |
| --- | --- | --- | --- |
| Some global parameters | | | |
| $`\mathrm{\Delta }T_W=1.322`$mK | $`rms_W=27.18\mu `$K | $`R^2=1.00666`$ | % sky = 99.98 |
| Map Res. = 9 | Res. common pix. = 9 | $`n_l=1`$ | pix. in common = $`2.70\times 10^8`$ |
| Before destriping | After destriping | % improvement | Method |
| $`\chi _{r,n,T}^2=9.2846`$ | $`\chi _{r,n,D}^2=2.6769`$ | 79.8 % | |
| $`\chi _{r,n,T}^2R^2=9.3464`$ | $`\chi _{r,n,D}^2R^2=2.6947`$ | 79.7 % | |
| $`rms_T=78.2\mu `$K | $`rms_D=35.4\mu `$K | 54.8 % | (a) |
| $`rms_T=258\mu `$K | $`rms_D=67.7\mu `$K | 73.7 % | (b) |
| $`rms_T=233\mu `$K | $`rms_D=45.8\mu `$K | 80.3 % | (c) |

### 7.3 Destriping with more than one constant per scan circle

For the reference case $`\alpha =90^{\circ }`$, both with the theoretical prediction for $`f_k`$ and with the value of $`f_k`$ representative of total power radiometers, we have applied our destriping code using two constants per scan circle. The results are shown in Tables 6 and 7, which must be compared with Tables 1 and 5 respectively.
We find that using more than one constant per scan circle does not help the destriping technique.

| Table 6: destriping results; $`\alpha =90^{\circ }`$. | | | |
| --- | --- | --- | --- |
| Some global parameters | | | |
| $`\mathrm{\Delta }T_W=1.322`$ mK | $`rms_W=27.18\mu `$K | $`R^2=1.00666`$ | % sky = 99.98 |
| Map Res. = 9 | Res. common pix. = 9 | $`n_l=2`$ | pix. in common = $`2.70\times 10^8`$ |
| Before destriping | After destriping | % improvement | Method |
| $`\chi _{r,n,T}^2=1.1770`$ | $`\chi _{r,n,D}^2=1.0280`$ | 84.2 % | |
| $`\chi _{r,n,T}^2R^2=1.1848`$ | $`\chi _{r,n,D}^2R^2=1.0348`$ | 81.2 % | |
| $`rms_T=11.4\mu `$K | $`rms_D=4.55\mu `$K | 60.1 % | (a) |
| $`rms_T=17.3\mu `$K | $`rms_D=7.62\mu `$K | 56.0 % | (b) |
| $`rms_T=5.03\mu `$K | $`rms_D=1.05\mu `$K | 79.2 % | (c) |

| Table 7: destriping results; $`\alpha =90^{\circ }`$; $`f_k=10`$ Hz. | | | |
| --- | --- | --- | --- |
| Some global parameters | | | |
| $`\mathrm{\Delta }T_W=1.322`$ mK | $`rms_W=27.18\mu `$K | $`R^2=1.00666`$ | % sky = 99.98 |
| Map Res. = 9 | Res. common pix. = 9 | $`n_l=2`$ | pix. in common = $`2.70\times 10^8`$ |
| Before destriping | After destriping | % improvement | Method |
| $`\chi _{r,n,T}^2=9.2846`$ | $`\chi _{r,n,D}^2=2.7584`$ | 78.8 % | |
| $`\chi _{r,n,T}^2R^2=9.3464`$ | $`\chi _{r,n,D}^2R^2=2.7768`$ | 78.7 % | |
| $`rms_T=78.2\mu `$K | $`rms_D=36.2\mu `$K | 53.7 % | (a) |
| $`rms_T=258\mu `$K | $`rms_D=70.0\mu `$K | 72.9 % | (b) |
| $`rms_T=233\mu `$K | $`rms_D=47.9\mu `$K | 79.4 % | (c) |

### 7.4 Destriping versus distance conditions

For the reference case $`\alpha =90^{\circ }`$, both for the theoretical prediction of $`f_k`$ and for a value of $`f_k`$ representative of total-power radiometers, we have applied our destriping code using the map pixels at higher resolution to search for pixels in common. The results are shown in Tables 8 and 9, which must be compared with Tables 1 and 5, respectively. We conclude that using a more stringent condition to find the coincidences of the pointing directions in different scan circles does not help the destriping technique.

| Table 8: destriping results; $`\alpha =90^{\circ }`$. | | | |
| --- | --- | --- | --- |
| Some global parameters | | | |
| $`\mathrm{\Delta }T_W=1.322`$ mK | $`rms_W=27.18\mu `$K | $`R^2=1.00666`$ | % sky = 99.98 |
| Map Res. = 9 | Res. common pix. = 10 | $`n_l=1`$ | pix. in common = $`6.97\times 10^7`$ |
| Before destriping | After destriping | % improvement | Method |
| $`\chi _{r,n,T}^2=1.1770`$ | $`\chi _{r,n,D}^2=1.0235`$ | 86.7 % | |
| $`\chi _{r,n,T}^2R^2=1.1848`$ | $`\chi _{r,n,D}^2R^2=1.0303`$ | 83.6 % | |
| $`rms_T=11.4\mu `$K | $`rms_D=4.73\mu `$K | 58.5 % | (a) |
| $`rms_T=17.3\mu `$K | $`rms_D=7.16\mu `$K | 58.6 % | (b) |
| $`rms_T=5.03\mu `$K | $`rms_D=0.928\mu `$K | 81.6 % | (c) |

| Table $`8^{\prime }`$: destriping results; $`\alpha =90^{\circ }`$. | | | |
| --- | --- | --- | --- |
| Some global parameters | | | |
| $`\mathrm{\Delta }T_W=1.322`$ mK | $`rms_W=27.18\mu `$K | $`R^2=1.00666`$ | % sky = 99.98 |
| Map Res. = 9 | Res. common pix. = 11 | $`n_l=1`$ | pix. in common = $`1.62\times 10^7`$ |
| Before destriping | After destriping | % improvement | Method |
| $`\chi _{r,n,T}^2=1.1770`$ | $`\chi _{r,n,D}^2=1.0334`$ | 81.1 % | |
| $`\chi _{r,n,T}^2R^2=1.1848`$ | $`\chi _{r,n,D}^2R^2=1.0403`$ | 78.2 % | |
| $`rms_T=11.4\mu `$K | $`rms_D=5.46\mu `$K | 52.1 % | (a) |
| $`rms_T=17.3\mu `$K | $`rms_D=8.10\mu `$K | 53.2 % | (b) |
| $`rms_T=5.03\mu `$K | $`rms_D=1.18\mu `$K | 76.5 % | (c) |

| Table 9: destriping results; $`\alpha =90^{\circ }`$; $`f_k=10`$ Hz. | | | |
| --- | --- | --- | --- |
| Some global parameters | | | |
| $`\mathrm{\Delta }T_W=1.322`$ mK | $`rms_W=27.18\mu `$K | $`R^2=1.00666`$ | % sky = 99.98 |
| Map Res. = 9 | Res. common pix. = 10 | $`n_l=1`$ | pix. in common = $`6.97\times 10^7`$ |
| Before destriping | After destriping | % improvement | Method |
| $`\chi _{r,n,T}^2=9.2846`$ | $`\chi _{r,n,D}^2=2.6970`$ | 79.5 % | |
| $`\chi _{r,n,T}^2R^2=9.3464`$ | $`\chi _{r,n,D}^2R^2=2.7150`$ | 79.5 % | |
| $`rms_T=78.2\mu `$K | $`rms_D=23.0\mu `$K | 70.6 % | (a) |
| $`rms_T=258\mu `$K | $`rms_D=68.3\mu `$K | 73.5 % | (b) |
| $`rms_T=233\mu `$K | $`rms_D=46.3\mu `$K | 80.1 % | (c) |

| Table $`9^{\prime }`$: destriping results; $`\alpha =90^{\circ }`$; $`f_k=10`$ Hz. | | | |
| --- | --- | --- | --- |
| Some global parameters | | | |
| $`\mathrm{\Delta }T_W=1.322`$ mK | $`rms_W=27.18\mu `$K | $`R^2=1.00666`$ | % sky = 99.98 |
| Map Res. = 9 | Res. common pix. = 11 | $`n_l=1`$ | pix. in common = $`1.62\times 10^7`$ |
| Before destriping | After destriping | % improvement | Method |
| $`\chi _{r,n,T}^2=9.2846`$ | $`\chi _{r,n,D}^2=2.7752`$ | 78.6 % | |
| $`\chi _{r,n,T}^2R^2=9.3464`$ | $`\chi _{r,n,D}^2R^2=2.7937`$ | 78.5 % | |
| $`rms_T=78.2\mu `$K | $`rms_D=24.2\mu `$K | 69.0 % | (a) |
| $`rms_T=258\mu `$K | $`rms_D=70.5\mu `$K | 72.7 % | (b) |
| $`rms_T=233\mu `$K | $`rms_D=48.4\mu `$K | 79.2 % | (c) |

## 8 Discussion and conclusions

An analytical estimate of the maximum excess noise factor $`F`$ due to the stripes related to the $`1/f`$ effect has been given by Janssen et al. (1996); they found $`F\simeq [1+\tau f_k(2\mathrm{ln}n_p+0.743)]^{1/2}`$. By interpreting $`F^2`$ as equal to $`1+(\mathrm{\Delta }rms/rms_W)^2`$, the fractional additional $`rms`$ noise with respect to the $`rms`$ noise obtained in the case of pure white noise, $`rms_W`$, is given by $`(\mathrm{\Delta }rms/rms_W)^2=\tau f_k(2\mathrm{ln}n_p+0.743)`$. For example, in our simulations we have $`n_p\simeq 2100`$, a sampling time $`\tau \simeq 28`$ ms, corresponding to an angle of $`10^{\prime }`$ in the sky, and $`f_k=0.05`$ Hz (or 10 Hz). With these numbers we get $`(\mathrm{\Delta }rms/rms_W)^2\simeq 0.022`$ (or 4.5) for a pixel of $`10^{\prime }\times 10^{\prime }`$. Our map pixel is $`19.4^{\prime }\times 19.4^{\prime }`$ (COBE-cube resolution 9); therefore $`rms_W^2`$ is reduced by a factor $`\simeq 4`$ and we expect an additional $`(\mathrm{\Delta }rms/rms_W)^2`$ per pixel, i.e. an additional reduced $`\chi ^2`$, of about 0.1 (or 18). The results shown in our tables are in quite good agreement (always within a factor of 2) with these analytical estimates. Somewhat larger values may be expected from the longer observation time toward high ecliptic latitudes, the consequent reduction of the white noise, and the resulting increase of the relative weight of the $`1/f`$ noise.
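As a quick cross-check of these orders of magnitude, the following minimal Python sketch (the function and variable names are ours, not from the simulation code) evaluates the Janssen et al. (1996) estimate for the two knee frequencies considered here:

```python
import numpy as np

def excess_noise_sq(tau, f_k, n_p):
    """(Delta rms / rms_W)^2 = tau * f_k * (2 ln n_p + 0.743), Janssen et al. (1996)."""
    return tau * f_k * (2 * np.log(n_p) + 0.743)

tau, n_p = 0.028, 2100          # sampling time [s] and samplings per scan circle
for f_k in (0.05, 10.0):        # knee frequency [Hz]
    x = excess_noise_sq(tau, f_k, n_p)
    # our map pixel (19.4' side) has ~4 times the area of the 10'x10' one, so
    # rms_W^2 drops by ~4 and the additional reduced chi^2 grows by the same factor
    print(f"f_k = {f_k:5.2f} Hz: (Drms/rms_W)^2 = {x:.3f}, added reduced chi^2 = {4*x:.2f}")
```

Running it reproduces the values 0.022 (4.5) per $`10^{\prime }\times 10^{\prime }`$ pixel and about 0.1 (18) per map pixel quoted above.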
We stress that, contrary to the case of pure white noise, even for the same scanning strategy the final effect of the $`1/f`$ noise may be quite different in different simulations: larger or smaller effects can be obtained according to the (simulated) behaviour of the $`1/f`$ gain fluctuations. Only from a very large set of simulations for each considered scanning strategy can we derive a robust evaluation of the “averaged” final effect of the $`1/f`$ noise (see below). For this reason, the results shown in our tables have to be considered as first-order estimates of the final $`1/f`$ noise effect expected for a given scanning strategy rather than as detailed predictions.

Without applying the destriping procedure, the typical stripe temperature (i.e. the $`rms`$ value) per resolution element (of $`19.4^{\prime }\times 19.4^{\prime }`$) ranges from a few $`\mu `$K to one or two tens of $`\mu `$K (according to the method adopted for estimating it) and is always significantly less than the corresponding single beam sensitivity of about 27–30 $`\mu `$K. The residual stripes that remain after the reduction through our destriping code show typical $`rms`$ temperatures of a few $`\mu `$K, roughly independently of the adopted scanning strategy. The efficiency of the destriping algorithm is quite good ($`20\%`$–$`90\%`$, according to the adopted estimator and depending in part on the scanning strategy). We stress that, in any case, for our kind of radiometers the residual additional noise amounts to a few percent in terms of increased reduced $`\chi ^2`$ and to a few $`\mu `$K in terms of stripe temperature. This is not particularly critical. Nevertheless, it must be compared with the final Planck sensitivity obtained by combining the data from all the receivers at the same frequency (see below).

In the case of much larger $`1/f`$ contamination, like that expected for the higher knee frequencies of total-power radiometers, the final impact on the observed sky maps is much stronger. The destriping code removes a large fraction of the added noise (see Table 5); nevertheless, a significant increase of the final noise remains. Reducing the $`1/f`$ noise via hardware is therefore strongly recommended.

Concerning the dependence of the striping effect and of the destriping efficiency on the scanning strategy, our preliminary results suggest the following conclusions.

* The magnitude of the sensitivity degradation due to the stripes is only weakly dependent on $`\alpha `$, for undestriped as well as for destriped maps (see Tables 1–3).
* The final residual added noise is roughly independent of the scanning strategy and of the unreduced added noise, at least for unreduced contaminations that are not too large (see Tables 1–4).
* Oscillations of the spin axis do not significantly improve the destriping efficiency (see Table 4). On the contrary, it is well known that oscillations of the spin axis may introduce further systematic effects related to variations of the solar illumination and of the shielding performance.
* Our results are of course related to the beam position in the sky field of view. For on-axis beams and for beams located at $`\varphi _B\simeq 0^{\circ }`$ the case $`\alpha =90^{\circ }`$ is expected to give the worst results, whereas even in this case the destriping results are expected to improve for off-axis beams located at $`\varphi _B\simeq 90^{\circ }`$.
* We find that, in spite of the larger computing time, the efficiency of the destriping technique does not improve by using two level constants per scan circle (see Tables 6 and 7). Indeed, the number of “physical” conditions (the number of pixels in common) does not depend on the number of level constants adopted in the destriping procedure. Then, by searching (in the solution of the linear system obtained from the minimization condition on $`S`$, see Section 5.1) for a number of unknowns larger by a factor of two while using the same amount of information, we expect the uncertainty on each unknown to be larger. We infer that the advantage of using more constants per scan circle found by Delabrouille (1997), for a similar knee frequency but for the case of thermal drifts (noise spectrum proportional to $`1/f^2`$), has to be related to the different kind of noise spectrum. The noise fluctuations on long timescales are higher for a $`1/f^2`$ noise spectrum than for a $`1/f`$ one; in the former case the use of more constants per circle may allow a more appropriate subtraction of the gain fluctuations in each circle, whereas in our case this advantage is balanced by the increased uncertainty in the determination of the levels, and the net effect is a small decrease of the destriping efficiency.
* The use of more stringent conditions for identifying the pixels in common does not improve the destriping results (see Tables 8 and 9). This option allows a more accurate superposition of the pixels used in the destriping; nevertheless, the number of pairs decreases by about a factor of 4 (or 16) when using pixels 4 (or 16) times smaller, so a number of conditions significantly smaller than before enters the minimization of $`S`$ (see Section 5.1).

Given all these results, we believe that a simple scanning strategy with constant $`\alpha `$ in the range $`85^{\circ }`$–$`90^{\circ }`$ can offer the advantages of a large (practically full) sky coverage and quite good destriping performance, together with the minimization of all the systematic effects related to variations of the thermal conditions.

We outline below a brief guideline for future simulation work on this topic.

* We intend to test different sampling strategies: for example, a smaller shift ($`2.5^{\prime }`$) of the spin axis direction with a correspondingly shorter observation time per spin axis direction (1 hour). Such a sampling, possibly with the same number of samplings per scan circle ($`\simeq 8700`$, corresponding to 4 samplings per beam at 100 GHz, FWHM $`\simeq 10^{\prime }`$) independently of the frequency, may be the final LFI/Planck sampling. A strategy of this kind yields a larger number of pixels in common in the destriping procedure; on the other hand, the sensitivity of the temperature measurements on the individual scan circles degrades. It may be interesting to investigate the pros and cons of such a variant from the point of view of the $`1/f`$ noise reduction.
* We are also interested in an accurate verification of the validity of the chosen telescope rotation velocity about the spin axis.
* It may be interesting to apply our codes to higher ratios between the beam FWHM and the map pixel size, for example by working, at 30 GHz, with maps at COBE-cube resolution 10. Indeed, considering input maps at a higher resolution roughly tests the importance of assuming a better efficiency in the data stream deconvolution, for example by fully exploiting the beam oversampling.
* As a better approximation, it would be interesting to implement the convolution with the beam for a moving telescope in the observation simulation code, and to search for robust and fast criteria for establishing, in this situation, when integrations in different scan circles can really be referred to the same direction in the sky. Of course this problem is correlated with the technique adopted for deconvolving the data streams to obtain the observed maps; on the other hand, the stripe magnitude must be small enough not to significantly alter the deconvolution procedure.
* We have found that the noise added by the $`1/f`$ effect is not particularly critical compared to the single beam (white noise) sensitivity. Nevertheless, it must be compared with the final Planck sensitivity obtained by combining the data from all the receivers at the same frequency. Indeed, we do not expect the $`1/f`$ noise magnitude to decrease as the square root of the number of receivers, as white noise does: by carrying out several simulations with the same set of physical parameters for the same scanning strategy and averaging the corresponding maps, we can address this topic.
* We intend to apply the methods of inversion of CMB maps (Muciaccia et al. 1997) to derive the angular power spectrum of the observed maps. By comparing the CMB angular power spectrum obtained in the presence of $`1/f`$ noise contamination with that derived in the case of pure white noise (and of course with that of the input map), it is possible to estimate the impact of the $`1/f`$ noise on the extraction of the key cosmological information, at least in the case of Gaussian fluctuations like those expected in inflationary scenarios. Particular attention must be devoted to evaluating the impact in the context of topological defects, such as cosmic strings, which introduce non-Gaussian features in the power spectrum. The characteristic geometrical pattern of the $`1/f`$ noise stripes (related to the scanning strategy) can be used to disentangle instrumental from cosmological deviations from Gaussianity.
* Of course, we intend to extend our analysis in the near future to Planck measurements at higher frequencies.

The full success of missions like Planck and MAP requires good control of all the relevant sources of systematic effects. Discrete sources above the detection limit must be carefully removed, and accurate models of foreground radiation and anisotropies (Brandt et al. 1994, Danese et al. 1996, Bouchet et al. 1997, Toffolatti et al. 1995, 1997) are required to keep the sensitivity degradation in the knowledge of CMB anisotropies below a few tens of percent (Dodelson 1997). Optical distortions, which produce a non-symmetric beam response for feed-horns located away from the centre of the focal plane, introduce other systematic effects; they must be minimized by optimizing the design of the telescope and of the focal plane assembly (Mandolesi et al. 1997b). Thermal drifts (Bersanelli et al. 1996), which couple to the $`1/f`$-type noise discussed here, can also generate stripes in the observed maps; efficient shielding is required, together with accurate reduction of sidelobe effects and optimization of the thermal conditions during the mission.
All in all, maximum effort should be devoted to optimizing the cooling efficiency and the observational strategy and to improving the data analysis methods, in order to reduce the magnitude of the striping effect and of the other instrumental systematic effects.

Acknowledgements – We warmly thank M. Seiffert for useful discussions on the Planck LFI receivers and for having provided us with his original IDL code for the generation of $`1/f`$-type noise, J. Delabrouille and K. Gorski for useful discussions on simulations and destriping techniques during their visits to Bologna, and P. Natoli and N. Vittorio for having provided us with their code for the generation of CMB anisotropy maps.

## Appendix A: Geometrical transformations between coordinate systems

Let $`\stackrel{}{i},\stackrel{}{j},\stackrel{}{k}`$ be the standard unit vectors in ecliptic coordinates and $`\stackrel{}{s}`$ a unit vector along the satellite spin axis, pointing away from the Sun. Let $`i`$ be the angle between $`\stackrel{}{s}`$ and $`\stackrel{}{k}`$ (i.e. the ecliptic colatitude of $`\stackrel{}{s}`$) and $`\varphi `$ the angle between $`\stackrel{}{i}`$ and the projection of $`\stackrel{}{s}`$ on the ecliptic plane (i.e. the ecliptic longitude of $`\stackrel{}{s}`$). For general scanning strategies $`i`$ is described by a possibly non-constant function $`i=i(\varphi )`$. Then $`\stackrel{}{s}=\mathrm{sin}i\mathrm{cos}\varphi \stackrel{}{i}+\mathrm{sin}i\mathrm{sin}\varphi \stackrel{}{j}+\mathrm{cos}i\stackrel{}{k}`$. Let $`\stackrel{}{i}^{\prime }=\stackrel{}{s}`$ and let $`\stackrel{}{k}^{\prime }`$ be a unit vector orthogonal to $`\stackrel{}{s}`$ in the plane identified by the vectors $`\stackrel{}{k}`$ and $`\stackrel{}{s}`$, namely $`\stackrel{}{k}^{\prime }=\mathrm{cos}i\mathrm{cos}\varphi \stackrel{}{i}\mathrm{cos}i\mathrm{sin}\varphi \stackrel{}{j}+\mathrm{sin}i\stackrel{}{k}`$. Let $`\stackrel{}{j}^{\prime }=\stackrel{}{k}^{\prime }\wedge \stackrel{}{i}^{\prime }`$ (here $`\wedge `$ indicates the vector product). Let $`\stackrel{}{p}`$ be the unit vector that identifies the pointing direction of the telescope optical axis. In the reference frame $`\stackrel{}{i}^{\prime },\stackrel{}{j}^{\prime },\stackrel{}{k}^{\prime }`$ the vector $`\stackrel{}{p}`$ can be defined by two angles: the angle $`\alpha `$ from $`\stackrel{}{s}`$ ($`\alpha =70^{\circ }`$ for the Phase A study, Bersanelli et al. 1996) and the angle $`\psi `$ between $`\stackrel{}{k}^{\prime }`$ and the projection of $`\stackrel{}{p}`$ on the plane identified by $`\stackrel{}{j}^{\prime },\stackrel{}{k}^{\prime }`$, with the convention $`\stackrel{}{p}=\mathrm{cos}\alpha \stackrel{}{i}^{\prime }+\mathrm{sin}\alpha \mathrm{sin}\psi \stackrel{}{j}^{\prime }+\mathrm{sin}\alpha \mathrm{cos}\psi \stackrel{}{k}^{\prime }`$. Given $`\stackrel{}{i}^{\prime },\stackrel{}{j}^{\prime },\stackrel{}{k}^{\prime }`$ in terms of $`\stackrel{}{i},\stackrel{}{j},\stackrel{}{k}`$, it is easy to derive $`\stackrel{}{p}`$ in the same basis. We choose two coordinates $`x`$ and $`y`$ on the plane tangent to the celestial sphere along the telescope optical axis direction $`\stackrel{}{p}`$, with unit vectors $`\stackrel{}{u}`$ and $`\stackrel{}{v}`$ respectively; we choose the $`x`$ axis according to the condition that the unit vector $`\stackrel{}{u}`$ always points toward the satellite spin axis; indeed, for the standard Planck observational strategy, this condition is preserved as the telescope scans different sky regions. With this choice of reference frame we have $`\stackrel{}{v}=\stackrel{}{p}\wedge \stackrel{}{s}/|\stackrel{}{p}\wedge \stackrel{}{s}|`$ and $`\stackrel{}{u}=\stackrel{}{v}\wedge \stackrel{}{p}/|\stackrel{}{v}\wedge \stackrel{}{p}|`$.
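For concreteness, the chain of transformations above can be written out as a short Python sketch (the function and variable names are ours); it builds the pointing direction $`\stackrel{}{p}`$ and the tangent-plane unit vectors $`\stackrel{}{u}`$ and $`\stackrel{}{v}`$ from the angles $`i`$, $`\varphi `$, $`\alpha `$ and $`\psi `$:

```python
import numpy as np

def pointing_frame(i, phi, alpha, psi):
    """Unit vectors p, u, v of Appendix A (all angles in radians).
    i, phi: colatitude and longitude of the spin axis s; alpha, psi: telescope angles."""
    s = np.array([np.sin(i)*np.cos(phi), np.sin(i)*np.sin(phi), np.cos(i)])
    ip = s                                                    # i' = s
    kp = np.array([-np.cos(i)*np.cos(phi), -np.cos(i)*np.sin(phi), np.sin(i)])
    jp = np.cross(kp, ip)                                     # j' = k' ^ i'
    p = np.cos(alpha)*ip + np.sin(alpha)*np.sin(psi)*jp + np.sin(alpha)*np.cos(psi)*kp
    v = np.cross(p, s); v /= np.linalg.norm(v)                # v = p ^ s / |p ^ s|
    u = np.cross(v, p); u /= np.linalg.norm(u)                # u = v ^ p / |v ^ p|
    return p, u, v

# example: spin axis on the ecliptic (i = 90 deg), alpha = 90 deg
p, u, v = pointing_frame(np.pi/2, 0.3, np.pi/2, 1.0)
```

Note that $`\stackrel{}{v}`$ is undefined when $`\stackrel{}{p}`$ is parallel to $`\stackrel{}{s}`$ (i.e. $`\alpha =0`$), a configuration never used in practice.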
In general, the coordinates $`(x_0,y_0)`$ of the centre of a beam in the (“satellite”) reference frame $`x_T,y_T,z_T`$ corresponding to the unit vectors $`\stackrel{}{u},\stackrel{}{v},\stackrel{}{p}`$ can be identified by two angles; we use here the colatitude $`\theta _B`$ and the longitude $`\varphi _B`$ in this reference frame. Finally, the pointing direction of a generic (on-axis or off-axis) beam is given by the unit vector $`\stackrel{}{B}=\mathrm{cos}\theta _B\stackrel{}{p}+\mathrm{cos}\varphi _B\mathrm{sin}\theta _B\stackrel{}{u}+\mathrm{sin}\varphi _B\mathrm{sin}\theta _B\stackrel{}{v}`$.

For the sake of illustration, we briefly consider here three different kinds of scanning strategies (the first two options have been considered in the present simulations).

* Spin axis always on the ecliptic plane. In this simple case we have $`i=90^{\circ }`$ and $`\stackrel{}{s}=\mathrm{cos}\varphi \stackrel{}{i}+\mathrm{sin}\varphi \stackrel{}{j}`$. Given the angle $`\alpha `$, we need to specify the time dependences $`\varphi =\varphi (t)`$ and $`\psi =\psi (t)`$ of the spin axis longitude and of the telescope projection to fully determine the scanning strategy. The reference case is to change $`\varphi `$ by a given angle $`\mathrm{\Delta }\varphi `$ ($`5^{\prime }`$, for example) after a given time interval (2 hours, for example) and to choose a given spin frequency (1 r.p.m., for example) for the continuous rotation of $`\psi `$.
* Sinusoidal oscillations of the spin axis. In this case we also need to define the amplitude of the oscillations, $`\delta `$ ($`10^{\circ }`$, for example), and the number of complete oscillations, $`n_{osc}`$ (10 oscillations in 360 days, for example), per complete rotation of the spin axis along the ecliptic (in 360 days, for example). Then, choosing the spin axis to lie on the ecliptic at $`\varphi =0`$, we simply have $`i=90^{\circ }+\delta \mathrm{sin}(n_{osc}\varphi )`$.
* Precession of the spin axis. This case is just a little more complicated. We can consider a further unit vector $`\stackrel{}{f}`$ which always moves on the ecliptic plane; it is defined by its ecliptic longitude $`\eta =\eta (t)`$: $`\stackrel{}{f}=\mathrm{cos}\eta \stackrel{}{i}+\mathrm{sin}\eta \stackrel{}{j}`$. Let $`\stackrel{}{j}^{\prime \prime }=\mathrm{sin}\eta \stackrel{}{i}+\mathrm{cos}\eta \stackrel{}{j}`$. The satellite spin axis $`\stackrel{}{s}`$ precesses around $`\stackrel{}{f}`$ (we can consider, for example, 10 precessions in 360 days, with a complete rotation of the axis $`\stackrel{}{f}`$ along the ecliptic plane in 360 days). Let $`\xi `$ be the angle between $`\stackrel{}{k}`$ and the projection of $`\stackrel{}{s}`$ on the plane identified by $`\stackrel{}{j}^{\prime \prime }`$ and $`\stackrel{}{k}`$, and let $`\delta `$ be the angle between $`\stackrel{}{f}`$ and $`\stackrel{}{s}`$ ($`10^{\circ }`$, for example). Then the relation $`\stackrel{}{s}=\mathrm{cos}\delta \stackrel{}{f}+\mathrm{sin}\delta \mathrm{cos}\xi \stackrel{}{k}+\mathrm{sin}\delta \mathrm{sin}\xi \stackrel{}{j}^{\prime \prime }`$ easily gives the colatitude $`i`$ and the longitude $`\varphi `$ of the spin axis.

## Appendix B: System creation and solution with low memory usage

As discussed in Section 5.2, the creation of the linear system requires a large amount of memory, usually much more than the available RAM. This problem can be avoided by taking advantage of disk buffers, essentially by splitting the large matrix into smaller blocks and creating it one block at a time. Our strategy is very simple:

1. a memory buffer is created, large enough to keep $`L`$ lines;
2. the algorithm described in Section 5.1 is performed: for each pair $`\pi `$ of pixels in common between two scan circles $`i`$ and $`j`$, the quantities $`\chi _\pi `$ and $`\tau _\pi `$ are evaluated;
3. if $`i`$ is in the range $`[0,\dots ,L-1]`$, then equations $`(17)`$, $`(18)`$ and $`(21)`$ are applied; if $`j`$ is in the range $`[0,\dots ,L-1]`$, then equations $`(19)`$, $`(20)`$ and $`(22)`$ are applied;
4. after all the pairs have been evaluated, the memory buffer is saved to a file; steps 2–4 are then repeated for $`i`$ and $`j`$ in the range $`[L,\dots ,2L-1]`$, then for $`i`$ and $`j`$ in $`[2L,\dots ,3L-1]`$, and so on until all the $`N`$ matrix lines have been created.

At the end of this loop, the linear system has been created and stored in a file in binary format; the coefficients are organized in a matrix of $`N`$ rows and $`N+1`$ columns, where the last column contains the known terms. It is easy to see that this strategy saves memory space but considerably increases the CPU time, because the time required to create the whole matrix is directly proportional to the number of pieces $`N/L`$ into which the linear system is divided. Since the program execution time is essentially dominated by the routines which solve the system, while the creation time is negligible for present practical purposes, we did not attempt to optimize this part of the code.

A very different situation occurs for the system solution, because both a large amount of memory and a lot of computation time are usually required. While the time efficiency cannot usually be improved (except in a few particular cases, when the matrix which defines the system has special symmetry properties and many zero coefficients), the memory requirements can be considerably reduced by taking advantage of disk swapping. However, special care is required when choosing the strategy for disk operations, because the disk time can easily blow up and overtake the CPU time. Our algorithm behaves as follows:

1. as for the system creation, a memory buffer is allocated, large enough to keep $`L`$ lines;
2. the first $`L`$ lines ($`0,\dots ,L-1`$) are loaded into memory, and complete Gauss elimination is performed on them: each line is reduced by the preceding lines and is used to reduce the following lines;
3. each of the remaining $`N-L`$ lines is sequentially loaded into memory and reduced by *each of* the $`L`$ lines stored in the memory buffer. In this way, we perform $`L`$ steps of the Gauss elimination algorithm with a *single* disk operation;
4. the memory buffer is flushed, and the next $`L`$ lines ($`L,\dots ,2L-1`$) are loaded into memory, reduced by one another and used to sequentially reduce all the remaining lines of the system.

Steps 3–4 are repeated until the original matrix has been completely reduced. As can be seen, our algorithm differs from the “standard” version only in the *order* in which the elimination is performed; in this way, we can limit the memory requirements without increasing the total solution time.
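A minimal Python sketch of this blocked forward elimination (ours, written with numpy's memory-mapped arrays and assuming, as the scheme above implicitly does, that no row exchanges are needed) could look as follows:

```python
import numpy as np

def blocked_forward_elimination(path, N, L):
    """Gauss-eliminate an N x (N+1) augmented matrix stored row-wise on disk,
    keeping at most L rows in memory at a time (the Appendix B scheme).
    Assumes non-zero pivots, i.e. no pivoting is performed."""
    A = np.memmap(path, dtype=np.float64, mode="r+", shape=(N, N + 1))
    for b in range(0, N, L):                 # one block of rows per pass
        hi = min(b + L, N)
        block = np.array(A[b:hi])            # load the buffer (steps 2 and 4)
        for i in range(hi - b):              # complete elimination inside the block
            block[i] /= block[i, b + i]      # normalize the pivot row
            for j in range(i + 1, hi - b):
                block[j] -= block[j, b + i] * block[i]
        A[b:hi] = block
        for r in range(hi, N):               # step 3: reduce each remaining line by
            row = np.array(A[r])             # all L buffered rows, with a single
            for i in range(hi - b):          # read/write per line
                row -= row[b + i] * block[i]
            A[r] = row
    A.flush()
```

Back-substitution on the resulting upper-triangular system can then proceed in a single backward pass over the file, again one line at a time.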
It is easy to see that the total time spent on disk operations is

$$t_{\mathrm{disk}}=\beta _{\mathrm{disk}}\frac{N}{2}\left(\frac{N}{L}+1\right)(N+1)\simeq \beta _{\mathrm{disk}}\frac{N^3}{L}.$$ (23)

The CPU time scales as the cube of the linear size of the system:

$$t_{\mathrm{cpu}}=\beta _{\mathrm{cpu}}N^3.$$ (24)

We conclude that the ratio $`\eta `$ between disk time and CPU time is independent of the size $`N`$ of the linear system, and depends only on the number of lines $`L`$ that we can hold simultaneously in memory:

$$\eta =\frac{t_{disk}}{t_{cpu}}=\frac{\beta _{disk}}{\beta _{cpu}L}.$$ (25)

Of course, the memory necessary to hold a line is proportional to the line length, so we need a larger amount of memory to solve a larger system; however, the size of the buffer is *linear* in the system size $`N`$, while the memory required to hold all the coefficients simultaneously is *quadratic* in $`N`$.

REFERENCES

Bersanelli M., Mandolesi N., Weinreb S., Ambrosini R. & Smoot G.F., 1995, Int. Rep. ITESRE 177/1995 – COBRAS memo n. 5
Bersanelli M. et al., 1996, ESA, COBRAS/SAMBA Report on the Phase A Study, D/SCI(96)3
Blum E.J., 1959, Annales d’Astrophysique, 22-2, 140
Bouchet F. et al., 1997, in proceedings of the XVIth Moriond Astrophysics Meeting, Les Arcs, Savoie, France, 16-23 March 1996
Brandt W.N. et al., 1994, ApJ, 424, 1
Burigana C. et al., 1997a, Int. Rep. ITeSRE/CNR 186/1997
Burigana C. et al., 1997b, in proceedings of the Particle Physics and Early Universe Conference, Cambridge, 7-11 April 1997, http://www.mrao.cam.ac.uk/ppeuc/proceedings/
Burigana C. et al., 1997c, A&A, submitted
Colvin R.S., 1961, Ph.D. thesis, Stanford University
Danese L., Toffolatti L., Franceschini A., Bersanelli M. & Mandolesi N., 1996, Astroph. Lett. & Comm., 33, 257
Delabrouille J., 1997, A&A, submitted
Dodelson S., 1997, ApJ, 482, 577
Gaier T., 1997, private communication
Janssen M.A. et al., 1996, ApJ Lett., submitted
Jarosik N.C., 1996, IEEE Trans. Microwave Theory Tech., 44, 193
Mandolesi N. et al., 1997a, in proceedings of the Particle Physics and Early Universe Conference, Cambridge, 7-11 April 1997, http://www.mrao.cam.ac.uk/ppeuc/proceedings/
Mandolesi N. et al., 1997b, Int. Rep. ITeSRE/CNR 198/1997
Muciaccia P.F. et al., 1997, preprint astro-ph/9703084
Pospieszalsky, 1989, MTT, Sep., p. 1340
Press W.H. et al., 1992, “Numerical Recipes in Fortran”, Cambridge University Press
Seiffert M. et al., 1996, Rev. Sci. Instrum., submitted
Strang G., 1976, “Linear Algebra and Its Applications”, Academic Press, Inc.
Toffolatti L. et al., 1995, Astro. Lett. & Comm., 32, 125
Toffolatti L. et al., 1997, MNRAS, submitted
Weinreb S., 1997, private communication
Wollack E.J., 1995, Rev. Sci. Instrum., 66, 4305

FIGURE CAPTIONS

Figure 1: The simulated (input) map of CMB anisotropies (CDM model; the dipole term is neglected). Galactic coordinates have been used for the plot.

Figure 2: Schematic representation of the observational geometry.

Figure 3: The unreduced simulated noise map (white plus $`1/f`$ noise) for the simulation with $`\alpha =85^{\circ }`$. Note the two small circular regions close to the ecliptic poles that are not observed by the considered off-axis beam; for graphical purposes we have filled them with a random noise distribution with variance given by the noise variance of the observed pixels. Note also the elongated sky region with noise significantly larger than the average, which corresponds to the sky regions observed a single time only, owing to the chosen value of $`\alpha `$. (Galactic coordinates have been used for the plot.)

Figure 4: The reduced simulated noise map (white plus “reduced” $`1/f`$ noise) for the simulation with $`\alpha =85^{\circ }`$. Note again the two small circular regions close to the ecliptic poles filled with a random noise distribution. The elongated sky region with noise significantly larger than the average disappears as a result of the destriping procedure, and the stripes in the sky become much less evident. (Galactic coordinates have been used for the plot.)
# Composite Fermions with Spin Freedom

## I Introduction

An energy gap due to Landau quantization or to the spin Zeeman splitting is essential for the integer quantum Hall effect (IQHE). When the Landau level filling factor $`\nu `$ is equal to an integer $`N`$, the lowest $`N`$ spin-split Landau levels are filled, and the IQHE will, in principle, be observed. However, if we can change the Landau level spacing $`\hbar \omega _\mathrm{c}`$ and the size of the Zeeman splitting $`g^{*}\mu _\mathrm{B}B`$ independently, level crossings between Landau levels belonging to different spin states can be caused. Here $`\omega _\mathrm{c}`$ is the cyclotron frequency, $`g^{*}`$ is the $`g`$-factor of the two-dimensional electron and $`\mu _\mathrm{B}`$ is the Bohr magneton. As depicted in Fig. 1, the spin splitting of the Landau levels is proportional to $`g^{*}\mu _\mathrm{B}B`$, and the condition for a level crossing is given as

$$j\hbar \omega _\mathrm{c}=g^{*}\mu _\mathrm{B}B,$$ (1)

where $`j`$ is an integer. If the level crossing occurs for the Landau levels at the Fermi energy, the configuration of the electrons changes. This is a quantum phase transition, and the energy gap vanishes at the transition. As shown in Fig. 1, the phase transition occurs only at $`j=1`$ for $`\nu =2`$, while it occurs at $`j=2`$ and 4 for $`\nu =5`$.

Since the fractional quantum Hall effect (FQHE) can be understood as an integer quantum Hall effect of composite fermions (CF), similar quantum phase transitions between states of different spin polarization are expected to occur. Such transitions are indeed observed experimentally, and they have been used to deduce the effective mass $`m^{*}`$ and the $`g`$-factor $`g^{*}`$ of the composite fermion. In these experiments, tilting of the magnetic field is used to enhance the spin Zeeman splitting. The interpretation of the experiments is quite simple as long as the filling factor is expressed as $`\nu =p/(mp+1)`$ or its electron-hole conjugate $`\nu =2-p/(mp+1)`$, where $`p=\pm 1,\pm 2,\mathrm{}`$ is an integer and $`m`$ is an even integer. Here $`|p|`$ gives the filling factor of the composite fermions, and $`m`$ is the number of flux quanta attached to each electron to transform it into a composite fermion. On the other hand, for filling factors such as $`\nu =5/7`$ or 4/5 the interpretation is not so simple. As an example, let us consider the situation at $`\nu =5/7`$. According to Wu et al., $`\nu =(3p+2)/(4p+3)`$ is related to $`\nu ^{\prime }=(3p+2)/(2p+1)`$ by the transformation $`\nu =\nu ^{\prime }/(2\nu ^{\prime }-1)`$, which in turn is related to $`\nu ^{\prime \prime }=p/(2p+1)`$ by electron-hole symmetry. This state finally maps onto $`\nu ^{\prime \prime \prime }=p`$. Therefore $`\nu =5/7`$, which is obtained by inserting $`p=1`$ into the above series, is mapped to $`\nu ^{\prime \prime \prime }=1`$. If this mapping is appropriate, there should be no quantum phase transition, since there is no level crossing for the lowest Landau level. However, another mapping is possible. If the Zeeman splitting is quite large, $`\nu =5/7`$ can be considered as the electron-hole symmetric state of $`\nu =2/7`$. This filling has the form $`\nu =p/(mp+1)`$ with $`p=-2`$ and $`m=4`$. In this case, since the filling factor of the composite fermions is 2, a quantum phase transition between the spin-polarized and the spin-singlet ground states seems possible.
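These two competing mappings are simple exercises in fraction arithmetic; the following sketch (ours, using Python's exact rationals) verifies both chains for $`\nu =5/7`$:

```python
from fractions import Fraction

# Wu et al.'s chain for nu = (3p+2)/(4p+3), illustrated for p = 1 (nu = 5/7)
p = 1
nu = Fraction(3*p + 2, 4*p + 3)
nu1 = Fraction(3*p + 2, 2*p + 1)           # nu = nu'/(2 nu' - 1)
assert nu == nu1 / (2*nu1 - 1)
nu2 = 2 - nu1                              # electron-hole symmetry
assert nu2 == Fraction(p, 2*p + 1)         # maps onto nu''' = p = 1: no level crossing

# the competing mapping: within a fully spin-polarized lowest Landau level,
# 5/7 is the electron-hole conjugate of 2/7 = p/(mp+1) with p = -2, m = 4
assert 1 - nu == Fraction(-2, 4*(-2) + 1)  # two filled CF levels: crossing possible
```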
However, since the true electron-hole symmetric state of $`\nu =5/7`$ is not $`\nu =2/7`$ but $`\nu =1+2/7`$, we first need to establish a rule to handle the situation where both spin states are occupied before we can consider the possibility of the transition. Now, the experiment clearly shows that there is a quantum phase transition at $`\nu =5/7`$. Therefore, Wu et al.’s interpretation is not appropriate. Furthermore, we cannot simply treat $`\nu =5/7`$ as an electron-hole symmetric state of $`\nu =2/7`$. This is what Yeh et al. did in ref. (). In their treatment, they could not relate the observed collapse of the excitation gap to a level crossing at the Fermi level. Namely, they tried to determine $`m^{*}`$ and $`g^{*}`$ such that the energy gap collapse is related to a level crossing of the Landau levels. They found that a unique choice of the product $`m^{*}g^{*}`$ can relate every feature in the resistivity to level crossings in a consistent way. However, the most prominent feature in the resistivity at $`\nu =5/7`$ then had to be connected to the condition that twice the Landau level splitting equals the Zeeman splitting, i.e. it occurs at $`j=2`$. At this point the filling of the lowest two Landau levels does not change: the level crossing occurs between the third and fourth lowest Landau levels, as shown in Fig. 1. Therefore, $`\nu =5/7`$ should not be mapped to $`\nu =2/7`$, for which the CF filling factor is two. The situation is the same for the other filling factors around $`\nu =3/4`$: none of the prominent features in the resistivity could be related to the quantum phase transition if the states around $`\nu =3/4`$ are considered as electron-hole symmetric states of those around $`\nu =1/4`$.

I have briefly pointed out that this apparent discrepancy is resolved if we correctly take into account both spin states of the original electrons. There it was pointed out that the spin freedom of the composite fermion should be understood as arising from that of the electrons. In the present paper, I give the details of the theory. The organization of this paper is as follows. In §2 we consider the FQH states at $`\nu =p/(2p+1)`$ and their electron-hole symmetric states at $`\nu =2-p/(2p+1)`$. The latter states can be considered as those at $`\nu =1+p^{\prime }/(2p^{\prime }+1)`$. From this analysis we establish a set of rules for the composite fermion transformation at $`\nu =1\pm p/(mp+1)`$. These rules are applied to the states with $`m=4`$, i.e. at $`\nu =1\pm p/(4p+1)`$, in §3. We derive a condition for the quantum phase transition at these filling factors. Comparison with the experiments and a discussion are given in §4.

## II Electron-Hole Symmetry

In this section we consider states at

$$\nu =\frac{p}{2p+1},$$ (2)

and their electron-hole symmetric states, in order to deduce a set of rules for the states at

$$\nu =1+\frac{p^{\prime }}{2p^{\prime }+1},$$ (3)

where $`p`$ and $`p^{\prime }`$ are integers. In this paper we consider systems in the strong magnetic field limit. Namely, we take the electronic cyclotron energy $`\hbar \omega _\mathrm{c}`$ to be infinitely large, while we assume that the $`g`$-factor is small, so that the spin Zeeman splitting remains finite. Therefore, we retain only the lowest Landau level for each spin state. In this situation, the electron-hole symmetric state of that at

$$\nu =\frac{p}{2p+1}$$ (4)

is realized at

$$\nu ^{\prime }=2-\frac{p}{2p+1}.$$ (5)

When the Zeeman splitting is large enough, the state at $`\nu =p/(2p+1)`$ is spin-polarized and is mapped to a composite fermion state at $`\nu _{\mathrm{CF}}=|p|`$.
In the experiments the Zeeman splitting is enhanced by tilting the magnetic field: the spin Zeeman splitting is proportional to the total magnetic field $`B_{\mathrm{tot}}`$ and is given by $`g^{*}\mu _\mathrm{B}B_{\mathrm{tot}}`$. On the other hand, the Landau level splitting of the CF is given by $`\hbar \omega _\mathrm{c}^{*}`$, where

$$\omega _\mathrm{c}^{*}=\frac{eB_{\mathrm{eff}}}{m^{*}},$$ (6)

$`B_{\mathrm{eff}}`$ is the effective magnetic field for the CF, and $`m^{*}`$ is the effective mass of the CF. The effective field is proportional to the component of the magnetic field perpendicular to the two-dimensional plane, $`B_{\perp }`$:

$$B_{\mathrm{eff}}=\frac{B_{\perp }}{2p+1}=B_{\perp }-B_{\perp ,1/2}.$$ (7)

Here $`B_{\perp ,1/2}`$ is the magnetic field that realizes the half-filled state. When the Zeeman splitting is reduced while the Landau level splitting is kept fixed, successive quantum phase transitions to partially spin-polarized states occur, as stated in the introduction, and these are observed experimentally. The condition for the transitions is obtained as follows. We denote the filling factor of the up(down)-spin CF by $`\nu _{\mathrm{CF}\uparrow (\downarrow )}`$. We consider the transition from

$$\{\begin{array}{cc}\nu _{\mathrm{CF}\downarrow }& =|p|-k\\ \nu _{\mathrm{CF}\uparrow }& =k,\end{array}$$ (8)

to

$$\{\begin{array}{cc}\nu _{\mathrm{CF}\downarrow }& =|p|-k-1\\ \nu _{\mathrm{CF}\uparrow }& =k+1,\end{array}$$ (9)

where $`k`$ is a non-negative integer. In the state before the transition, the energy of the highest occupied Landau level (HOLL) of the down-spin CF is

$$E_{\downarrow }=(|p|-k-\frac{1}{2})\hbar \omega _\mathrm{c}^{*}-\frac{1}{2}g^{*}\mu _\mathrm{B}B_{\mathrm{tot}},$$ (10)

and that of the lowest unoccupied Landau level (LULL) of the up-spin CF is

$$E_{\uparrow }=(k+\frac{1}{2})\hbar \omega _\mathrm{c}^{*}+\frac{1}{2}g^{*}\mu _\mathrm{B}B_{\mathrm{tot}}.$$ (11)

At the transition these energies coincide, and the excitation energy vanishes. The condition is therefore

$$(|p|-2k-1)\hbar \omega _\mathrm{c}^{*}=g^{*}\mu _\mathrm{B}B_{\mathrm{tot}}.$$ (12)

This condition has been used in the experiments to deduce the $`g`$-factor and the effective mass of the CF. Notice that this condition is not identical to the general level crossing condition $`j\hbar \omega _\mathrm{c}^{*}=g^{*}\mu _\mathrm{B}B_{\mathrm{tot}}`$: the range and parity of $`j`$ are fixed in eq. (12). This restriction is important. Since the state at $`\nu ^{\prime }=2-p/(2p+1)`$ is the electron-hole symmetric state of that at $`\nu =p/(2p+1)`$, the same condition, eq. (12), should be satisfied at the quantum phase transition. The only difference is that the effective field has a different expression when written as a deviation from the field at $`\nu =3/2`$:

$$B_{\mathrm{eff}}=\frac{B_{\perp }}{2p+1}=-3(B_{\perp }-B_{\perp ,3/2}).$$ (13)

The extra factor of 3 comes from the fact that the number of holes changes as $`B_{\perp }`$ does when the total number of electrons is fixed. What we have written so far is known and has been used to analyze the experiments.

Now we derive a new relation by noticing that $`\nu =2-p/(2p+1)`$ can be written as $`\nu =1+p^{\prime }/(2p^{\prime }+1)`$ with $`p^{\prime }=-p-1`$. If the Zeeman splitting is large enough, the filled down-spin Landau level is inert and can be neglected. In this situation the system is therefore described by spinless CFs at filling factor $`|p^{\prime }|`$. What we want to do is to use this picture even when the Zeeman splitting becomes smaller, by treating the down-spin Landau level appropriately. However, before taking the down-spin Landau level into account, a remark is in order.
Namely, we want to stress that the spin freedom of the CF comes from that of the original electrons. To see this, let us assume instead that the CF had an independent spin freedom. Then the CF system at $`\nu _{\mathrm{CF}}=|p^{\prime }|`$ would have Zeeman-split Landau levels of the opposite spin, which are empty. When the Zeeman splitting is reduced, a quantum phase transition would occur between these spin-split Landau levels. The condition for the transition point would be obtained by replacing $`p`$ in eq. (12) by $`p^{\prime }`$:

$$(|p^{\prime }|-2k-1)\hbar \omega _\mathrm{c}^{*}=g^{*}\mu _\mathrm{B}B_{\mathrm{tot}}.$$ (14)

However, since $`p^{\prime }=-p-1`$, this condition contradicts eq. (12). This clearly shows that the spin freedom of the CF comes from that of the electrons. At the phase transition, electrons in the filled down-spin Landau level must move to the up-spin Landau level.

Now let us describe the transition. What we need is a set of rules for treating the filled down-spin Landau level in the composite fermion transformation. The spirit of the composite fermion transformation is to make the already existing correlations between electrons explicit by attaching fictitious flux quanta. Since only the minimal correlation required by Fermi statistics is possible in a filled Landau level, it does not seem appropriate to attach flux quanta to the down-spin electrons. However, the holes introduced in the down-spin Landau level repel each other, and they can have flux attached. Another clue is that the effective magnetic field should not change discontinuously at the quantum phase transition. From these considerations we derive the following rules:

(1) all the electrons are changed into CFs;
(2) however, each single-electron state in the lowest down-spin Landau level contributes two flux quanta of opposite sign.

These two rules guarantee that the filled level does not contribute to the fictitious flux, and that a spin flip of an electron does not change the effective magnetic field. The effective field is calculated as follows. We consider a finite-size system with $`N_\mathrm{e}`$ electrons and $`N_0`$ flux quanta; that is, the degeneracy of a Landau level is $`N_0`$ and $`N_\mathrm{e}=\nu N_0`$. The effective number of flux quanta is reduced from $`N_0`$ by $`2N_\mathrm{e}`$ and increased by $`2N_0`$, according to rules (1) and (2) respectively. Thus

$$N_{0,\mathrm{eff}}=N_0-2N_\mathrm{e}+2N_0=(3-2\nu )N_0$$ (15)
$$=\frac{1}{2p^{\prime }+1}N_0.$$ (16)

Namely, the magnetic field is reduced by a factor of $`|2p^{\prime }+1|`$: $`B_{\mathrm{eff}}=B_{\perp }/|2p^{\prime }+1|=B_{\perp }/|2p+1|`$. Since the magnetic field is weaker by a factor of $`|2p^{\prime }+1|`$ for the CFs, the down-spin CFs fill $`|2p^{\prime }+1|`$ Landau levels when the down-spin Landau level is filled by electrons. As we have restricted ourselves to the lowest Landau level, we cannot accommodate more down-spin electrons. This fact gives a restriction on the CF filling factors, i.e. an additional rule:

(3) the maximum allowed CF filling factor for each spin state is $`|2p^{\prime }+1|`$.

Finally, we impose the last rule:

(4) the Zeeman splitting of these levels is $`g^{*}\mu _\mathrm{B}B_{\mathrm{tot}}`$.

Now let us see that these rules give the correct result. When the electronic filling factor is $`\nu =1+p^{\prime }/(2p^{\prime }+1)`$, the total CF filling factor becomes $`\nu _{\mathrm{CF}}=|2p^{\prime }+1|+|p^{\prime }|=|3p^{\prime }+1|`$.
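As a sanity check, the flux counting implied by rules (1) and (2) is easy to verify mechanically; a small sketch with exact rationals (our own illustration, not part of the original derivation):

```python
from fractions import Fraction

def effective_flux(nu, N0=1):
    """Rules (1)-(2): each electron absorbs two flux quanta, each state of
    the filled down-spin level returns two, so N_eff = N0 - 2*Ne + 2*N0."""
    Ne = nu * N0
    return N0 - 2 * Ne + 2 * N0            # = (3 - 2 nu) N0, eq. (15)

for pp in (-3, -2, 2, 3):                  # sample values of p'
    nu = 1 + Fraction(pp, 2 * pp + 1)
    assert effective_flux(nu) == Fraction(1, 2 * pp + 1)   # eq. (16)
    assert abs(2 * pp + 1) + abs(pp) == abs(3 * pp + 1)    # total CF filling factor
```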
The quantum phase transition occurs from the state with

$$\{\begin{array}{cc}\nu _{\mathrm{CF}\downarrow }& =|2p^{\prime }+1|-k\\ \nu _{\mathrm{CF}\uparrow }& =|p^{\prime }|+k,\end{array}$$ (18)

to the state with

$$\{\begin{array}{cc}\nu _{\mathrm{CF}\downarrow }& =|2p^{\prime }+1|-k-1\\ \nu _{\mathrm{CF}\uparrow }& =|p^{\prime }|+k+1,\end{array}$$ (19)

where $`k`$ is a non-negative integer. As before, from the condition that the energy of the down-spin HOLL coincides with that of the up-spin LULL, we get

$$(|p^{\prime }+1|-2k-1)\hbar \omega _\mathrm{c}^{*}=g^{*}\mu _\mathrm{B}B_{\mathrm{tot}}.$$ (20)

Since $`|p^{\prime }+1|=|p|`$, this condition is the same as eq. (12). I have considered possible alternative rules; however, none of the plausible rules I devised gave a transition point consistent with eq. (12).

## III Rules and Application

In this section we generalize the rules to the series of filling factors related to $`\nu =p/(mp+1)`$, where $`m`$ is a positive even integer. As we have seen above, the states at $`\nu =p/(mp+1)`$ and at $`\nu =2-p/(mp+1)`$ are simple: they are transformed to CF systems at total filling factor $`|p|`$. The CF Landau levels have spacing $`\hbar \omega _\mathrm{c}^{*}=\hbar eB_{\mathrm{eff}}/m^{*}`$, and they are spin-split by $`g^{*}\mu _\mathrm{B}B_{\mathrm{tot}}`$. The new series of filling factors for which we assign rules are

$$\nu =1+\frac{p}{mp+1},$$ (21)

and their electron-hole symmetric states at

$$\nu =1-\frac{p}{mp+1}.$$ (22)

These states are described by CFs at total filling factor

$$\nu _{\mathrm{CF}}=|mp+1|+|p|,$$ (23)

according to the following rules:

(1) all the electrons are changed into CFs by attaching $`m`$ flux quanta;
(2) however, each electronic state in the lowest down-spin Landau level contributes $`|m|`$ flux quanta of opposite sign;
(3) the maximum allowed CF filling factor for each spin state is $`|mp+1|`$;
(4) the Zeeman splitting of these levels is $`g^{*}\mu _\mathrm{B}B_{\mathrm{tot}}`$, while the Landau level splitting is given by the effective magnetic field

$$B_{\mathrm{eff}}=\frac{B_{\perp }}{mp+1}.$$ (24)

We can also rewrite $`B_{\mathrm{eff}}`$ in a different way: it is proportional to the deviation of $`B_{\perp }`$ from its value at $`\nu =1\pm 1/m`$, denoted $`B_{\perp ,1\pm 1/m}`$:

$$B_{\mathrm{eff}}=\pm (m\pm 1)(B_{\perp }-B_{\perp ,1\pm 1/m}).$$ (25)

The condition for the gap collapse, i.e. for the quantum phase transition point, follows from these rules:

$$(|mp+1|-|p|-2k-1)\hbar \omega _\mathrm{c}^{*}=g^{*}\mu _\mathrm{B}B_{\mathrm{tot}},$$ (26)

where $`k`$ is a non-negative integer and $`\omega _\mathrm{c}^{*}=eB_{\mathrm{eff}}/m^{*}`$.

Now that we have the rules, we can analyze the case of $`\nu =5/7`$ mentioned in the introduction. This filling factor belongs to the series $`\nu =1-p/(4p+1)`$; $`\nu =5/7`$ is obtained by putting $`p=-2`$. Therefore, the transition occurs at

$$(4-2k)\hbar \omega _\mathrm{c}^{*}=g^{*}\mu _\mathrm{B}B_{\mathrm{tot}}.$$ (27)

The transition at $`k=1`$ is clearly observed at the correct magnetic field in Yeh et al.’s experiment, as shown in Fig. 2 of their paper. The transition at $`k=0`$ is also expected to occur, but it requires a stronger magnetic field and has not been observed yet.
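The level-crossing coefficients allowed by eq. (26) are easily tabulated; the following small sketch (ours) reproduces eq. (27) for $`\nu =5/7`$:

```python
def gap_collapse_coefficients(m, p):
    """Coefficients j in  j * (hbar omega_c*) = g* mu_B B_tot  from eq. (26):
    j = |mp+1| - |p| - 2k - 1 for k = 0, 1, ..., as long as j stays positive."""
    j_max = abs(m * p + 1) - abs(p) - 1
    return list(range(j_max, 0, -2))

print(gap_collapse_coefficients(4, -2))   # nu = 5/7: [4, 2], i.e. eq. (27) with k = 0, 1
print(gap_collapse_coefficients(2, 2))    # nu = 7/5: [2], matching eq. (12) via p' = -p-1
```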
## IV Discussion

So far, experiments have been done around $`\nu =3/2`$ and around $`\nu =3/4`$. The observed phase transition points are used to deduce the combination $`g^{*}m^{*}`$. Since $`\nu =3/2`$ is electron-hole symmetric to $`\nu =1/2`$, the previously known CF theory is almost sufficient to explain the experiments around $`\nu =3/2`$: the longitudinal resistance shows distinct peaks where the gap is expected to collapse, although weak peaks are also observed where eq. (12) is satisfied with half-integer $`k`$. For the interpretation of the experiments around $`\nu =3/4`$, we need the theory developed in this paper: if we used the previously known simple theory, we would have to consider the states around $`\nu =3/4`$ as electron-hole symmetric states of those around $`\nu =1/4`$; in that case the peak positions do not coincide with the phase transition points, as remarked in the report of the experiment and stated in the introduction. Comparison with the experiment shows that the present theory correctly explains the distinct resistance peaks at $`\nu =8/11`$ and 5/7. These filling factors belong to the negative-$`p`$ side of the series $`\nu =1-p/(4p+1)`$. On the other hand, at $`\nu =4/5`$ and 7/9, namely on the positive-$`p`$ side of the series, the stronger peaks correspond to half-integer $`k`$. This discrepancy and the existence of the weaker peaks are left as problems to be solved in the future. It should be remarked that Wu et al.’s three-step CF transformation also cannot resolve this discrepancy. The existence of the weaker peaks seems to indicate that the CF theory is too naive, even if the stronger peaks can be understood successfully. A possible origin of the discrepancy at $`\nu =4/5`$ and 7/9 could be the exchange enhancement of the Zeeman splitting or the effect of skyrmions.

The experimentally determined phase transition points give the combination $`g^{*}m^{*}`$ through the condition eq. (26). To obtain $`g^{*}`$ and $`m^{*}`$ separately, the experimentalists used the temperature dependence of the Shubnikov–de Haas data to deduce $`m^{*}`$. From the values of $`m^{*}`$ so obtained, $`g^{*}`$ of about 0.6 is deduced at $`\nu =5/7`$ as well as around $`\nu =3/2`$. They could not find a reason why the $`g`$-factor of the two-flux-quanta CF should equal that of the four-flux-quanta CF. In the present simple theory, however, the $`g`$-factor of the CF is nothing but that of the original electrons, so this coincidence is not strange at all. Actually, 0.6 is close to the $`g`$-factor of the electron, 0.44.

As stated in the introduction, a mapping of the $`\nu =5/7`$ state to $`\nu _{\mathrm{CF}}=1`$ is possible. Namely, there are several schemes to map electronic states to CF states. The present investigation has clarified that these mappings are not equivalent: in one mapping a quantum phase transition never occurs, while in another it does. Then how can we know which mapping is the correct one? Of course experiments can discriminate between them, but how can we do so as a theoretical problem? One criterion is whether the mapping is simple or not. In the case considered in this paper, our one-step mapping is much simpler than the three-step mapping by Wu et al., and it is consistent with the experiment. “Simple is best” is comfortable for physicists, but we need to give a foundation for this conjecture, which is left as a future problem.

In conclusion, we have developed a set of rules for the CF transformation which can explain the quantum phase transitions observed experimentally. We have also found that several problems still remain to be solved in the future.
## Acknowledgements

I thank Dan Tsui, who showed me the experimental results prior to publication while we stayed at the Aspen Center for Physics, where part of this work was done. This work is supported by Grant-in-Aid for Scientific Research (C) 10640301 from the Ministry of Education, Science, Sports and Culture.
Active–sterile neutrino oscillations in the early Universe are a fascinating possibility with far-reaching consequences, e.g. for nucleosynthesis \[1-12\] and the CMB radiation . Nucleosynthesis considerations in particular have made it possible to place stringent constraints on model building aimed at understanding the neutrino anomalies seen in terrestrial observations. In the very first papers it was observed that mixing with an active species ($`SU(2)`$-doublet) endows the sterile ($`SU(2)`$-singlet) neutrino with effective interactions, which can be strong enough to bring the sterile species into equilibrium. The ensuing excess energy density would result in a failure of the nucleosynthesis explanation of the observed light element abundances . This line of reasoning was put on a solid computational foundation in refs. , and these results were later reproduced in ref. . Of particular interest was the observation that nucleosynthesis is in conflict with the $`\nu _\mu \leftrightarrow \nu _s`$ oscillation solution to the atmospheric neutrino problem .

It was already noted in ref. that the nucleosynthesis constraints depend on the reasonable assumption that the leptonic asymmetries are not many orders of magnitude larger than the baryonic one. More specifically, e.g. for a mass squared difference $`|\delta m^2|=10^4\text{eV}^2`$, a large initial asymmetry $`L_\nu ^{\mathrm{in}}\gtrsim 10^{-5}`$ (here $`L_\alpha =(N_\alpha -N_{\overline{\alpha }})/N_\gamma `$) would suppress the effective mixing angle so much that the equilibration would never take place. This observation was later revived by Foot and Volkas , who, building on this and on another previously observed effect, the exponential growth of the leptonic asymmetry , suggested an interesting way to circumvent the nucleosynthesis constraints without invoking unnatural initial conditions . Their scenario assumes a novel mass-mixing pattern, where $`\nu _\tau \leftrightarrow \nu _s`$ mixing with carefully chosen parameters produces a large leptonic asymmetry (but does not equilibrate $`\nu _s`$), which suppresses the subsequent $`\nu _\mu \leftrightarrow \nu _s`$ mixing angle and thereby prevents the $`\nu _s`$ equilibration from taking place. Some details concerning the growth of the asymmetry in this scenario are still under debate . It was later observed by Shi that the period of exponential growth exhibits chaotic features, and that therefore, while the amplitude of the final asymmetry is robust, its sign appears to be essentially arbitrary. This raises some interesting questions: for example, is the sign of $`L_\nu `$ sensitive to fluctuations in the initial conditions, as is the baryon asymmetry? If so, one should expect a large suppression of the effective asymmetry present at the epoch important for the $`\nu _\mu \leftrightarrow \nu _s`$ oscillations, owing to diffusion effects. For this purpose it is important to establish the extent of the possible chaotic or regular regions in the parameter space.

In this letter we have studied the dependence of the sign of $`L_\nu `$ on the neutrino mixing parameters. We find a rather clear-cut division of the parameter space into non-chaotic and partly or completely chaotic regions (we do not claim that the system exhibits chaoticity in the strict mathematical sense; we merely mean that it is sensitive to small variations of the parameters). In the chaotic regions the final sign of the asymmetry is indeed found to be highly sensitive also to fluctuations in the initial conditions.
Another, more direct consequence follows from the fact that sign($`L`$) affects the computed <sup>4</sup>He abundance, either directly in the case of $`\nu _e\leftrightarrow \nu _s`$ oscillations, or, when induced by the large $`L_\nu `$ created in $`\nu _\mu \leftrightarrow \nu _s`$ or $`\nu _\tau \leftrightarrow \nu _s`$ oscillations, after being transferred to the $`\nu _e`$ sector via active–active oscillations . It then follows that possible chaotic behaviour will limit our chances of drawing definite conclusions about the effects of sterile neutrinos on Big Bang nucleosynthesis, as we discuss below.

In the early Universe neutrinos experience frequent scatterings, which tend to bring their distributions into thermal equilibrium. The requisite mathematical formalism is therefore very different from the one-particle approach valid for the description of accelerator (beam) physics and even of solar neutrinos. Indeed, the objects of interest are the (reduced) density matrices for the neutrino and antineutrino ensembles,

$$\rho _\nu \equiv \frac{1}{2}P_0(1+𝐏\mathbf{\sigma }),\rho _{\overline{\nu }}\equiv \frac{1}{2}\overline{P}_0(1+\overline{𝐏}\mathbf{\sigma }).$$ (1)

Solving the full momentum-dependent kinetic equations for $`\rho _\nu (p)`$ and $`\rho _{\overline{\nu }}(p)`$ is obviously a very difficult task. Instead, we employ the momentum-averaged equations for $`𝐏=𝐏(\langle p\rangle )`$, with $`\langle p\rangle \simeq 3.15T`$, which should be expected to give a good approximation to the full system . (Our preliminary studies with the full momentum-dependent equations support this assumption.) Moreover, for the parameters we are interested in, one can neglect the collision terms for $`P_0`$, so that $`P_0`$ remains constant and can be set to unity. The coupled equations of motion are then (for definiteness we focus here on $`\nu _\tau \leftrightarrow \nu _s`$ oscillations; the other cases are obtained from this one by simple redefinitions, which the interested reader can find, for example, in )

$$\dot{𝐏}=𝐕\times 𝐏-D𝐏_T$$
$$\dot{\overline{𝐏}}=\overline{𝐕}\times \overline{𝐏}-D\overline{𝐏}_T$$ (2)

where $`\dot{𝐏}\equiv d𝐏/dt`$ and we have defined $`𝐏_T\equiv P_x\widehat{𝐱}+P_y\widehat{𝐲}`$. In the case of $`\nu _\tau \leftrightarrow \nu _s`$ oscillations the damping coefficient is $`D\simeq 1.8G_F^2T^5`$ , and $`\overline{D}\simeq D`$ to a very high accuracy. It is convenient to decompose the rotation vector $`𝐕`$ as

$$𝐕=V_x\widehat{𝐱}+\left(V_0+V_L\right)\widehat{𝐳},$$ (3)

with the components

$$V_x=\mathrm{\Delta }\mathrm{sin}2\theta $$
$$V_0=-\mathrm{\Delta }\mathrm{cos}2\theta +\delta V_\tau $$
$$V_L=\sqrt{2}G_FN_\gamma L,$$ (4)

where $`\theta `$ is the vacuum mixing angle, $`\mathrm{\Delta }\equiv \delta m^2/2p`$ and the photon number density is $`N_\gamma =2\zeta (3)T^3/\pi ^2`$. The effective asymmetry $`L`$ appearing in the leading contribution $`V_L`$ to the neutrino effective potential is

$$L=-\frac{1}{2}L_n+L_{\nu _e}+L_{\nu _\mu }+2L_{\nu _\tau }(P),$$ (5)

where $`n`$ refers to neutrons and we have assumed electrical neutrality of the plasma. The remaining piece of the effective potential, $`\delta V_\tau `$, is given by

$$\delta V_\tau =-\sqrt{2}G_FN_\gamma A_\tau \frac{pT}{2M_Z^2},$$ (6)

with $`A_\tau =14\zeta (4)/\zeta (3)\simeq 12.61`$. The rotation vector for antineutrinos is simply $`\overline{𝐕}(L)=𝐕(-L)`$. The coupling of the particle and antiparticle sectors occurs through the asymmetry term, where

$$L_{\nu _\tau }(P)=L_{\nu _\tau }^{\mathrm{in}}+\frac{3}{8}(P_z-\overline{P}_z),$$ (7)

with $`L_{\nu _\tau }^{\mathrm{in}}`$ being the initial $`\nu _\tau `$ asymmetry.
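To make the scales concrete, a minimal numerical sketch of eqs. (3)–(6) is given below (in natural units with everything in MeV; the constant values, function names and the sign conventions follow our reconstruction above and are not from the original paper's code). Solving $`V_0=0`$ for $`|\delta m^2|=1\mathrm{eV}^2`$ indeed reproduces the resonance temperature of eq. (10):

```python
import numpy as np
from scipy.optimize import brentq

GF, MZ = 1.166e-11, 9.119e4      # Fermi constant [MeV^-2], Z mass [MeV]
ZETA3, ATAU = 1.20206, 12.61     # zeta(3) and A_tau = 14 zeta(4)/zeta(3)

def rotation_vector(T, dm2, sin2theta, L):
    """Components V_x, V_0, V_L and damping D of eqs. (3)-(6) at <p> = 3.15 T.
    T in MeV, dm2 in MeV^2 (1 eV^2 = 1e-12 MeV^2), L = effective asymmetry."""
    p = 3.15 * T
    Ngamma = 2 * ZETA3 * T**3 / np.pi**2
    Delta = dm2 / (2 * p)
    cos2theta = np.sqrt(1 - sin2theta**2)
    dVtau = -np.sqrt(2) * GF * Ngamma * ATAU * p * T / (2 * MZ**2)
    Vx = Delta * sin2theta
    V0 = -Delta * cos2theta + dVtau
    VL = np.sqrt(2) * GF * Ngamma * L
    D = 1.8 * GF**2 * T**5
    return Vx, V0, VL, D

dm2 = -1.0e-12                   # delta m^2 = -1 eV^2
Tres = brentq(lambda T: rotation_vector(T, dm2, 1e-4, 0.0)[1], 1.0, 100.0)
print(f"T_res = {Tres:.1f} MeV")  # ~16 MeV, in agreement with eq. (10)
```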
Even the simplified single-state quantum kinetic equations (2) are very difficult to handle numerically, because of the vast difference in the time scales involved (the Hubble expansion rate, the matter oscillation frequency and the width of the resonance, for example) on the one hand, and because of the extremely strong coupling induced by the asymmetry term on the other. The so-called static approximation employed in ref. reduces the system to a single first-order differential equation. Unfortunately it is not really suitable for the treatment of oscillations at the resonance, since many of its basic assumptions – that the system is adiabatic, that the MSW-effect can be neglected, and that the rate of change of the lepton number is dominated by the collisions – break down at the resonance. These considerations emphasize the need for a very careful numerical approach. In practice, the accuracy is much improved if one separates the large ($`\sim `$ number density) and small ($`\sim `$ asymmetry) components in equations (2). To this end we change variables to $$P_i^\pm \equiv P_i\pm \overline{P}_i,$$ (8) in terms of which (2) become $`\dot{P}_x^+`$ $`=`$ $`-V_0P_y^+-V_LP_y^{-}-DP_x^+`$ $`\dot{P}_y^+`$ $`=`$ $`V_0P_x^++V_LP_x^{-}-V_xP_z^+-DP_y^+`$ $`\dot{P}_z^+`$ $`=`$ $`V_xP_y^+`$ $`\dot{P}_x^{-}`$ $`=`$ $`-V_0P_y^{-}-V_LP_y^+-DP_x^{-}`$ $`\dot{P}_y^{-}`$ $`=`$ $`V_0P_x^{-}+V_LP_x^+-V_xP_z^{-}-DP_y^{-}`$ $`\dot{P}_z^{-}`$ $`=`$ $`V_xP_y^{-}.`$ (9) We have studied numerically the behaviour of the system described by (9) as a function of the oscillation parameters $`\delta m^2`$ and $`\mathrm{sin}^22\theta `$, and in particular the evolution of the asymmetry (7). Of crucial importance in this evolution is the occurrence of the resonance at $`V_0=0`$, which takes place if $`\delta m^2<0`$. Inserting the appropriate parameters, one finds that the resonance temperature is given by $$T_{\mathrm{res}}\simeq 16.0(|\delta m^2|\mathrm{cos}2\theta )^{1/6}\mathrm{MeV},$$ (10) where $`\delta m^2`$ is given in units of $`\mathrm{eV}^2`$. Far above the resonance the damping terms tend to suppress the off-diagonal elements $`𝐏_T`$, and moreover the system is driven towards the initially stable fixed point $`L=0`$. As soon as the system passes the resonance, however, $`L=0`$ becomes an unstable fixed point and two new locally stable and degenerate minima, corresponding to the solutions of $`V_L+V_0=0`$, appear; these are given by the condition $`|L|\simeq 11(|\delta m^2|/\mathrm{eV}^2)(\mathrm{MeV}/T)^4`$. The system is roughly analogous to a ball rolling down a valley that branches in two, passing via a saddle configuration. Initially the branching into these two new valleys can be very shallow, and it may stay that way for a long time. Once on one side of the bifurcation, the ball still keeps passing over the central barrier (the continuation of the old stable fixed point into an unstable extremum) until the barrier grows too high, or until friction (damping) reduces its energy enough, and it gets trapped in one of the valleys. It is easy to picture in one's mind that in the case of a very shallow bifurcation a small change in the initial conditions ($`L^{\mathrm{in}}`$), or in the shape of the valleys (oscillation parameters), can very much affect which minimum the system finally chooses to settle in. In Fig. 1 we show the asymmetry $`L_{\nu _\tau }`$ (7) resulting from solving equations (9) for two sets of parameters. In Fig. 1a the resonance is rather narrow and only one oscillation occurs before the system is trapped in the minimum with a negative sign of $`L_{\nu _\tau }`$.
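To make the structure of the computation concrete, here is a schematic integration of eqs. (9), with the following assumptions flagged explicitly: the standard radiation-era time-temperature relation $`dT/dt=-HT`$ with $`H=1.66\sqrt{g_{*}}T^2/M_{\mathrm{Pl}}`$ and $`g_{*}=10.75`$ is ours, not stated in the text; the small $`L_n`$ and other flavor asymmetries in eq. (5) are dropped so that $`L\approx 2L_{\nu _\tau }`$; and the tolerances are illustrative. A production run is far more demanding, for exactly the stiffness reasons discussed above.

```python
import numpy as np
from scipy.integrate import solve_ivp

G_F, M_Z, M_Pl = 1.166e-5, 91.19, 1.221e19     # GeV units
zeta3 = 1.2020569
A_tau = 14 * 1.0823232 / zeta3                 # = 14 zeta(4)/zeta(3)
g_star = 10.75                                 # assumed relativistic d.o.f.

dm2, s2, L_in = -1e-4 * 1e-18, 1e-8, 1e-10     # delta m^2 (GeV^2), sin^2 2theta, L^in

def rhs(T, y):
    """Right-hand side of eqs. (9), rewritten with T as the evolution variable."""
    pxp, pyp, pzp, pxm, pym, pzm = y
    p  = 3.15 * T
    Ng = 2 * zeta3 * T**3 / np.pi**2
    D  = 1.8 * G_F**2 * T**5
    Dl = dm2 / (2 * p)
    Vx = Dl * np.sqrt(s2)
    V0 = -Dl * np.sqrt(1 - s2) - np.sqrt(2)*G_F*Ng*A_tau*p*T/(2*M_Z**2)
    L  = 2 * (L_in + 0.375 * pzm)              # eq. (5) with only nu_tau kept
    VL = np.sqrt(2) * G_F * Ng * L
    dPdt = [-V0*pyp - VL*pym - D*pxp,
             V0*pxp + VL*pxm - Vx*pzp - D*pyp,
             Vx*pyp,
            -V0*pym - VL*pyp - D*pxm,
             V0*pxm + VL*pxp - Vx*pzm - D*pym,
             Vx*pym]
    H = 1.66 * np.sqrt(g_star) * T**2 / M_Pl   # radiation-dominated expansion
    return [d * (-1.0/(H*T)) for d in dPdt]    # dT/dt = -H T

# Start above T_res with both ensembles fully active: P_z = Pbar_z = 1.
sol = solve_ivp(rhs, (0.020, 0.002), [0, 0, 2, 0, 0, 0],
                method="Radau", rtol=1e-8, atol=1e-14)
print("sign(L) =", np.sign(L_in + 0.375 * sol.y[5, -1]))
```

The implicit Radau solver is chosen because the damping and oscillation rates exceed the expansion rate by many orders of magnitude; even so, the run time is dominated by resolving the post-resonance oscillations of $`L_{\nu _\tau }`$.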
The subsequent oscillations about this new local minimum are quickly washed away by the damping terms. In contrast, Fig. 1b shows an example of oscillation parameters for which the bifurcation into the new local minima is extremely slow, so that for a long time there is hardly any barrier between the two minima with opposite signs of $`L_{\nu _\tau }`$, and the system oscillates thousands of times before settling down into a minimum with positive $`L_{\nu _\tau }`$. After settling down, the further evolution of the asymmetry follows a power-law-like behaviour. These results agree well with those of ref. . It is instructive to look a little more carefully into how the system approaches the resonance. Before the resonance the off-diagonal $`P^\pm `$ components are very near zero and $`P_z^{-}`$ is near the value corresponding to $`L=0`$. Just before the resonance the $`P_{x,y}^+`$ components begin to increase, which triggers both the growth of the $`P_{x,y}^{-}`$ components and the decrease of $`P_z^+`$. As $`V_0`$ changes sign at the resonance it creates an instability in the equation for $`P_y^{-}`$, which eventually pushes $`P_y^{-}`$ strongly in the negative direction. The simple coupling of $`P_y^{-}`$ to $`P_z^{-}`$ in (9) then drags $`P_z^{-}`$ along, leading to a rapid growth of $`L_{\nu _\tau }`$. So far these phenomena have not much affected the evolution of the $`P^+`$ variables (which have continued to grow). Eventually, however, the exponential growth of the $`P^{-}`$ terms causes the $`V_L`$ term in the equations for $`P_x^+`$ and $`P_y^+`$, negligible until then, to become dominant. A large $`V_L`$ then forces $`P_x^+`$ and $`P_y^+`$ to change sign and grow in the opposite direction until $`V_L`$ again changes sign. Additionally, the ensuing oscillatory motion of the $`P_{x,y}^+`$ components induces oscillations in the other variables as well, leading to the exponentially large oscillation pattern observed in $`L_{\nu _\tau }`$. To find out the extent of chaotic and/or regular behaviour of sign($`L`$), we have scanned through the parameter space depicted in Fig. 2, which shows the sign of the final asymmetry $`L`$ for the initial value $`L^{\mathrm{in}}=10^{-10}`$. As can be seen, the structure of sign($`L`$) is highly complex. In the upper left-hand corner, extending downwards to large $`\theta `$, there is a regular region with no change in the asymmetry. Its existence is relatively easy to understand: this is the region where only the very first oscillation is carried out before the sign of the asymmetry is fixed. Since the direction of the first oscillation is determined by the sign of the initial asymmetry $`L^{\mathrm{in}}`$ (not necessarily the initial $`\nu _\tau `$-asymmetry), the sign of the final neutrino asymmetry in this part of the parameter space should indeed be regular and fully determined. The bands seen on the left-hand side of Fig. 2 are formed as the system goes through two or more oscillations. In this region the number of oscillations slowly increases as $`\theta `$ grows, leading to a less determined sign($`L`$), but it can still hardly be described as chaotic. In addition to the two more or less regular regions there are regions where sign($`L`$) appears to be chaotic. The interval $`10^{-2}<|\delta m^2|\lesssim 1`$ contains a very complicated structure. For $`|\delta m^2|\gtrsim 1`$ one may discern some tendency for positive $`L`$ to prevail, while the region with $`|\delta m^2|\lesssim 10^{-2}`$ appears to be pure white noise. In the lower right-hand corner of Fig.
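Organizationally, a scan of this kind is just a double loop over the mixing parameters; the sketch below shows the bookkeeping only, with a random stand-in where the full quantum-kinetic integration of the previous sketch would be called (the helper `final_sign` is hypothetical and must be replaced by the real solver).

```python
import numpy as np
rng = np.random.default_rng(1)

def final_sign(dm2, s2):
    # Hypothetical stand-in for the full integration of eqs. (9);
    # replace with the solve_ivp-based solver sketched above.
    return rng.choice([-1, 1])

dm2_grid = -np.logspace(-5, 1, 40) * 1e-18    # eV^2 -> GeV^2, negative branch
s2_grid  = np.logspace(-10, -1, 40)           # sin^2 2theta grid

sign_map = np.array([[final_sign(d, s) for s in s2_grid] for d in dm2_grid])
# With the real solver, uniform blocks of +/-1 in sign_map correspond to the
# regular regions and salt-and-pepper patches to the chaotic ones (cf. Fig. 2).
```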
2 (below the gray line), with $`|\delta m^2|/\mathrm{sin}^22\theta \lesssim 10`$, oscillations in $`L`$ will continue past the neutrino freeze-out and will not settle into any definite value. The boundary of the regular region above which $`L`$ is positive is given approximately by $$\left(\mathrm{log}\frac{|\delta m^2|}{10^{-5.8}\text{eV}^2}\right)^2-0.44\left(\mathrm{log}\frac{\mathrm{sin}^22\theta }{10^{-4.5}}\right)^2\simeq 1.$$ (11) We conjecture that the chaotic behaviour occurs only when the oscillatory period is long. We have also explored in finer detail a restricted region of parameters in the chaotic regime, without finding any structure. It is possible, however, that the final sign of $`L`$ in this region is affected by the accumulated numerical error originating from the extremely high number of oscillation periods. In this sense proving true chaoticity is of course not possible. Nevertheless, if the system is sensitive to numerical error, it should be expected to be sensitive to parameter fluctuations as well, so the general pattern of rapid sign($`L`$) fluctuations is expected to be robust. Finally, the region in the upper right-hand corner does not correspond to a large asymmetry; it is merely the region where $`\nu _s`$ is fully equilibrated and the absolute value of the final $`L`$ is very small. Changing the initial value $`L^{\mathrm{in}}`$ does not change the picture qualitatively, although the structures evident in Fig. 2 shift slightly to the left when $`L^{\mathrm{in}}`$ is increased. Moreover, the changes saturate at $`L^{\mathrm{in}}\gtrsim 10^{-9}`$. These changes, or their absence, are very interesting, however: in Fig. 3 we have plotted the value of $`L`$ at the temperature $`T=T_{\mathrm{res}}-2.5`$ MeV, as a function of $`L^{\mathrm{in}}`$ for three representative choices of parameters. The first set, with $`\mathrm{sin}^22\theta =10^{-7}`$ and $`\delta m^2=-10^{-4}`$ eV<sup>2</sup>, corresponds to the stable region with positive $`L`$ in Fig. 2. As expected, $`L`$ remains positive independently of the initial value $`L^{\mathrm{in}}`$. In fact the dependence turned out to be smooth and linear, showing not only that the sign is robust, but also that the numerical solution is well under control. The second set, with $`\mathrm{sin}^22\theta =10^{-6}`$ and $`\delta m^2=-1`$ eV<sup>2</sup>, lies in the intermediate region where positive $`L`$ predominates, and the same dominance is seen as a function of the initial value $`L^{\mathrm{in}}`$. The last set, with $`\mathrm{sin}^22\theta =10^{-6}`$ and $`\delta m^2=-10^{-3}`$ eV<sup>2</sup>, corresponds to the chaotic region. It is evident that sign($`L`$) is very sensitive to the initial conditions, displaying clear randomness as a function of $`L^{\mathrm{in}}`$. The final value of sign($`L`$) has consequences for both atmospheric neutrinos and primordial nucleosynthesis. It has been proposed that the Super-Kamiokande results for atmospheric neutrinos, which lie in the forbidden zone , might still allow an active-sterile mixing solution if the asymmetry growth is taken into account . Although the oscillation parameters in the case of atmospheric neutrinos are in the region where asymmetry growth is not expected, it has been argued that other neutrino oscillations could induce a large asymmetry in the active-sterile sector which the oscillations cannot damp. If the outcome of neutrino oscillations is highly chaotic, the validity of such a scenario might be suspect.
However, we found a large region in the parameter space where sign($`L`$) is very robust with respect to small variations of the mixing parameters. No chaoticity should be expected there with respect to other small perturbations, such as local perturbations in $`L^{\mathrm{in}}`$, either. It is in these stable domains that one would expect the mechanism of ref. to be successful. In the region where sign($`L`$) is chaotic in the oscillation parameter space, it was also found to be sensitive to fluctuations in $`L^{\mathrm{in}}`$; such fluctuations are predicted to be generated for example during the QCD phase transition, or in scenarios of electroweak baryogenesis . In that case causally disconnected regions would be expected to develop large asymmetries with a random sign distribution. It has been argued that the nucleosynthesis constraint on active-sterile mixing would then be even more stringent, because of the additional MSW conversion taking place at the boundaries of domains with different sign($`L`$). However, our results indicate that the new constraints obtained in may be overly optimistic, because for a large part of their excluded region we have found sign($`L`$) to be stable against small fluctuations; hence no domain formation should be expected to occur there in the first place. Determining the sign of $`L`$ is important also for considering the effect of the electron (anti)neutrino spectrum distortions on the light element abundances . When the momentum spectrum gets distorted from its thermal equilibrium form, the neutron-to-proton freezing ratio will change. Direct $`\nu _e\leftrightarrow \nu _s`$ oscillations can obviously induce such distortions, but also scenarios where a large asymmetry is first generated in $`\nu _{\mu ,\tau }\leftrightarrow \nu _s`$ oscillations and then transferred to the electron neutrinos via $`\nu _e\leftrightarrow \nu _{\mu ,\tau }`$ oscillations could have considerable effects on the electron neutrino spectrum. It turns out that a positive sign($`L`$) has the effect of decreasing, and a negative sign($`L`$) of increasing, the <sup>4</sup>He abundance , the difference being $`\mathrm{\Delta }Y\sim 10^{-2}`$, with some dependence on the oscillation parameters. Because the oscillation parameters cannot be measured with arbitrary accuracy, it follows that in the region where sign($`L`$) is chaotic, the role of resonant active-sterile neutrino mixing in Big Bang nucleosynthesis cannot be reliably estimated. Rather, in this region, depicted in Fig. 2, there always remains an arbitrariness in the <sup>4</sup>He abundance of order $`\mathrm{\Delta }Y\sim 10^{-2}`$, which should be considered a source of systematic error. In the region where the sign is stable, more concrete conclusions can be drawn. However, in this region $`L`$ is positive and only a rather small negative shift in the helium abundance, $`-0.005<\mathrm{\Delta }Y<0`$, was found for these parameters. Interestingly enough, such a shift could ameliorate the apparent conflict of nucleosynthesis theory vis-à-vis observations . Our results in this paper are based on a momentum-averaged description of the neutrino ensemble. Some effects, like the diffusion of the asymmetry between different momentum states, would seem to indicate the need for using the full momentum-dependent kinetic equations. This is rather hard, since one has to deal with exponential growth in every momentum state, and the width of the resonance is, for most of the parameter space, so small that one needs a very large number of bins to complete the task.
Our preliminary results with the full momentum-dependent kinetic equations support the results presented here. This work has been supported by the Academy of Finland under contract 101-35224.
## 1 Introduction

Finite many-body quantum systems, both fermionic and bosonic, exhibit collective oscillations, which are nowadays experimentally detected with sophisticated devices. In many cases, these collective oscillations can become strongly aperiodic and also chaotic \[1-3\]. In this short report, we analyze the collective modes of a trapped Bose condensate of alkali atoms. Bose-Einstein condensation (BEC) is the macroscopic occupation of the ground state of a system of bosons. Since 1995 there have been experimental results interpreted as evidence of BEC in dilute vapors of confined alkali-metal atoms (<sup>87</sup>Rb, <sup>23</sup>Na and <sup>7</sup>Li) \[4-6\]. The experiments with alkali-metal atoms generally consist of laser cooling and confinement in an external potential (a magnetic or magneto-optical trap), followed by evaporative cooling (down to temperatures of the order of $`100`$ nK) \[4-6\]. Nowadays a dozen experimental groups have achieved BEC by using different geometries of the confining trap and different atomic species. The dynamics of the Bose condensate can be accurately described by the Gross-Pitaevskii equation of the mean-field approximation. The theoretically predicted low-energy collective oscillations of the condensate have been experimentally confirmed by laser imaging techniques . In this short report we show that at higher energies non-linear effects appear and eventually the collective oscillations become chaotic.

## 2 Mean-field theory of BEC

The many-body problem of $`N`$ identical bosonic atoms in an external trapping potential can be formulated within non-relativistic quantum field theory \[9-11\]. The Lagrangian density of the Schrödinger field is given by $$\mathcal{L}=\widehat{\psi }^+(𝐫,t)\left[i\mathrm{\hbar }\frac{\partial }{\partial t}+\frac{\mathrm{\hbar }^2}{2m}\nabla ^2-U(𝐫)\right]\widehat{\psi }(𝐫,t)$$ $$-\frac{1}{2}\int d^3𝐫^{\prime }\widehat{\psi }^+(𝐫,t)\widehat{\psi }^+(𝐫^{\prime },t)V(|𝐫-𝐫^{\prime }|)\widehat{\psi }(𝐫^{\prime },t)\widehat{\psi }(𝐫,t),$$ (1) where $`U(𝐫)`$ is the external potential and $`V(|𝐫-𝐫^{\prime }|)`$ is the interatomic potential. The quantum bosonic field $`\widehat{\psi }(𝐫,t)`$ satisfies the standard equal-time commutation rules. The Lagrangian is invariant under the global $`U(1)`$ gauge transformation $$\widehat{\psi }(𝐫,t)\to e^{i\alpha }\widehat{\psi }(𝐫,t),$$ (2) which implies the conservation of the total number of particles $$\widehat{N}=\int d^3𝐫\widehat{\psi }^+(𝐫,t)\widehat{\psi }(𝐫,t).$$ (3) Bose-Einstein condensation is the macroscopic occupation of the $`N`$-body ground-state $`|𝐎>`$ of the system. To study BEC a useful mean-field prescription is to separate out the condensate contribution to the bosonic field operator in the following way $$\widehat{\psi }(𝐫,t)=\varphi (𝐫,t)+\widehat{\mathrm{\Sigma }}(𝐫,t),$$ (4) where $`\varphi (𝐫,t)=<𝐎|\widehat{\psi }(𝐫,t)|𝐎>`$ is the so-called macroscopic wavefunction (or order parameter) of the condensate, and $`\widehat{\mathrm{\Sigma }}(𝐫,t)`$ is the fluctuation operator, such that $`\widehat{\mathrm{\Sigma }}(𝐫,t)|𝐎>=0`$. Note that this prescription breaks the $`U(1)`$ global gauge symmetry of the system \[9-11\]. Alkali vapors are quite dilute, and at zero temperature the atoms are practically all in the condensate \[4-6\]. Thus we can neglect the quantum depletion due to the operator $`\widehat{\mathrm{\Sigma }}`$, and the macroscopic wavefunction is normalized to the total number of atoms.
Moreover, the range of the atom-atom interaction $`V(r)`$ is believed to be short in comparison with the typical length scale over which the atomic wave functions vary. The atom-atom interaction is usually replaced by an effective zero-range pseudo-potential, $`V(r)=g\delta ^3(r)`$, where $`g=4\pi \mathrm{\hbar }^2a_s/m`$ and $`a_s`$ is the s-wave scattering length. This scattering length is positive (repulsive interaction) for <sup>87</sup>Rb and <sup>23</sup>Na but negative (attractive interaction) for <sup>7</sup>Li. Within these approximations, the Lagrangian density becomes a local function of the condensate wavefunction, namely $$\mathcal{L}=\varphi ^{}(𝐫,t)\left[i\mathrm{\hbar }\frac{\partial }{\partial t}+\frac{\mathrm{\hbar }^2}{2m}\nabla ^2-U(𝐫)\right]\varphi (𝐫,t)-\frac{1}{2}g|\varphi (𝐫,t)|^4.$$ (5) By imposing the least action principle one obtains the following Euler-Lagrange equation $$i\mathrm{\hbar }\frac{\partial }{\partial t}\varphi (𝐫,t)=\left[-\frac{\mathrm{\hbar }^2}{2m}\nabla ^2+U(𝐫)+g|\varphi (𝐫,t)|^2\right]\varphi (𝐫,t),$$ (6) which is a nonlinear Schrödinger equation and is called the time-dependent Gross-Pitaevskii (GP) equation . Note that the GP equation is nothing but the mean-field (Hartree) approximation of the exact time-dependent Schrödinger equation of the N-body problem, where the totally symmetric many-particle wavefunction $`\mathrm{\Psi }`$ of the system is decomposed in the following way $`\mathrm{\Psi }(𝐫_1,𝐫_2\mathrm{},𝐫_N,t)=\varphi (𝐫_1,t)\varphi (𝐫_2,t)\mathrm{}\varphi (𝐫_N,t)`$.

## 3 Hydrodynamic equations of the Bose condensate

The complex macroscopic wavefunction $`\varphi (𝐫,t)`$ of the condensate can be written in terms of a modulus and a phase, as follows $$\varphi (𝐫,t)=\sqrt{\rho (𝐫,t)}e^{iS(𝐫,t)}.$$ (7) The phase $`S`$ fixes the velocity field $`𝐯=(\mathrm{\hbar }/m)\nabla S`$. The GP equation can hence be rewritten in the form of two coupled hydrodynamic equations for the density and the velocity field $$\frac{\partial }{\partial t}\rho +\nabla (𝐯\rho )=0$$ (8) $$m\frac{\partial }{\partial t}𝐯+\nabla \left(U+g\rho -\frac{\mathrm{\hbar }^2}{2m\sqrt{\rho }}\nabla ^2\sqrt{\rho }+\frac{mv^2}{2}\right)=0.$$ (9) If the repulsive interaction among atoms is strong enough, then the density profiles become smooth and one can safely neglect the kinetic pressure term in $`\mathrm{\hbar }^2`$ in the equation for the velocity field, which then takes the form $$m\frac{\partial }{\partial t}𝐯+\nabla \left(U+g\rho +\frac{mv^2}{2}\right)=0.$$ (10) In the current experiments with alkali-metal atoms, the external trap is well approximated by a harmonic potential $$U(𝐫)=\frac{m}{2}(\omega _1^2x^2+\omega _2^2y^2+\omega _3^2z^2).$$ (11) The ground-state solution ($`𝐯=0`$) of Eq. (10) is given by $$\rho (𝐫)=g^{-1}\left[\mu -U(𝐫)\right],$$ (12) in the region where $`\mu >U(𝐫)`$, and $`\rho =0`$ outside. The normalization condition on $`\rho (𝐫)`$ provides $`\mu =(\mathrm{\hbar }\omega _h/2)(15Na_s/a_h)^{2/5}`$, where $`\omega _h=(\omega _1\omega _2\omega _3)^{1/3}`$ and $`a_h=(\mathrm{\hbar }/m\omega _h)^{1/2}`$. An analytic class of solutions of the time-dependent hydrodynamic equations (8) and (10), which are valid when the condition $`Na_s/a_h\gg 1`$ is satisfied, is found by writing the density in the form $$\rho (𝐫,t)=a_0(t)-a_1(t)x^2-a_2(t)y^2-a_3(t)z^2$$ (13) in the region where $`\rho (𝐫,t)`$ is positive, and the velocity field as $$𝐯(𝐫,t)=\frac{1}{2}\nabla [b_1(t)x^2+b_2(t)y^2+b_3(t)z^2].$$ (14) The coefficient $`a_0`$ is fixed by the normalization of the density $`a_0=(15N/8\pi )^{2/5}(a_xa_ya_z)^{1/5}`$.
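As a small illustration of the Thomas-Fermi ground state, eq. (12), the sketch below evaluates the chemical potential from the normalization formula quoted above and the corresponding density profile. The numerical values (a <sup>87</sup>Rb-like mass and scattering length, the atom number, and the trap frequencies) are our own assumptions, chosen only to show typical magnitudes:

```python
import numpy as np

# Illustrative 87Rb-like numbers (assumed, not from the text); SI units.
hbar = 1.0546e-34
m    = 87 * 1.66e-27          # atomic mass
a_s  = 5.3e-9                 # s-wave scattering length (~100 Bohr radii)
N    = 1.0e5                  # number of condensed atoms
w1 = w2 = 2*np.pi*100.0       # trap frequencies (rad/s)
w3 = 2*np.pi*400.0

w_h = (w1*w2*w3)**(1.0/3.0)
a_h = np.sqrt(hbar/(m*w_h))
mu  = 0.5*hbar*w_h*(15*N*a_s/a_h)**0.4   # normalization formula of the text
g   = 4*np.pi*hbar**2*a_s/m

def rho_TF(x, y, z):
    """Thomas-Fermi density of eq. (12): g^-1 [mu - U(r)] where positive."""
    U = 0.5*m*(w1**2*x**2 + w2**2*y**2 + w3**2*z**2)
    return np.maximum(mu - U, 0.0)/g

print("N a_s / a_h =", N*a_s/a_h)        # >> 1 validates the TF regime
print("mu / (hbar w_h) =", mu/(hbar*w_h))
```

For these numbers $`Na_s/a_h\approx 600`$, well inside the strong-coupling regime where the hydrodynamic description above applies.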
By inserting these expressions into the hydrodynamic equations one finds 6 coupled differential equations for the time-dependent parameters $`a_i(t)`$ and $`b_i(t)`$. By introducing new variables $`q_i`$, defined by $`a_i=m\omega _i^2(2gq_i^2q_1q_2q_3)^{-1}`$, the hydrodynamic equations give $`b_i=\dot{q}_i/q_i`$ and $$\ddot{q}_i+\omega _i^2q_i=\frac{\omega _i^2}{q_iq_1q_2q_3},$$ (15) with $`i=1,2,3`$. The second and third terms give the effect of the external trap and of the interatomic forces, respectively. It is important to observe that, using the new variables $`q_i`$, the equations of motion do not depend on the value of the coupling constant $`g`$. In terms of $`q_i`$ the mean square radii of the condensate are $`<x_i^2>=(2\mu /m\omega _i^2)q_i^2`$ and the velocities are $`<v_i^2>=(2\mu /m\omega _i^2)\dot{q}_i^2`$ .

## 4 BEC Collective Modes and Chaos

The three differential equations (15) are the classical equations of motion of a system with coordinates $`q_i`$ and Lagrangian given by $$L=\frac{1}{2}\left(\frac{\dot{q}_1^2}{\omega _1^2}+\frac{\dot{q}_2^2}{\omega _2^2}+\frac{\dot{q}_3^2}{\omega _3^2}\right)-\frac{1}{2}(q_1^2+q_2^2+q_3^2)-\frac{1}{q_1q_2q_3}.$$ (16) This Lagrangian describes the collective modes of the Bose condensate for $`Na_s/a_h\gg 1`$ . As stressed previously, in such a case the collective dynamics of the condensate does not depend on the number of atoms or on the scattering length. The minimum of the effective potential is at $`q_i=1`$, $`i=1,2,3`$. The mass matrix $`M`$ of the kinetic energy and the Hessian matrix $`\mathrm{\Lambda }`$ of the potential energy at the equilibrium point are given by $$M=\left(\begin{array}{ccc}\omega _1^{-2}& 0& 0\\ 0& \omega _2^{-2}& 0\\ 0& 0& \omega _3^{-2}\end{array}\right)\text{and}\mathrm{\Lambda }=\left(\begin{array}{ccc}3& 1& 1\\ 1& 3& 1\\ 1& 1& 3\end{array}\right).$$ (17) The low-energy collective excitations of the condensate are the small oscillations of the variables $`q_i`$ around the equilibrium point. The calculation of the normal-mode frequencies $`\mathrm{\Omega }`$ for the motion of the condensate is reduced to the eigenvalue problem $`\mathrm{det}(\mathrm{\Lambda }-\mathrm{\Omega }^2M)=0`$, which gives $$\mathrm{\Omega }^6-3\left(\omega _1^2+\omega _2^2+\omega _3^2\right)\mathrm{\Omega }^4+8\left(\omega _1^2\omega _2^2+\omega _1^2\omega _3^2+\omega _2^2\omega _3^2\right)\mathrm{\Omega }^2-20\omega _1^2\omega _2^2\omega _3^2=0.$$ (18) Note that this formula has been recently obtained by using a variational approach and also by studying hydrodynamic density fluctuations of the condensate . For an axially symmetric trap, where $`\omega _1=\omega _2=\omega _{\perp }`$, the previous equation gives $$\mathrm{\Omega }_{1,2}=\left(2+\frac{3}{2}\lambda ^2\pm \frac{1}{2}\left(16+9\lambda ^4-16\lambda ^2\right)^{1/2}\right)^{1/2}\omega _{\perp },\mathrm{\Omega }_3=\sqrt{2}\omega _{\perp },$$ (19) where $`\lambda =\omega _3/\omega _{\perp }`$ is the asymmetry parameter of the trap . Observe that the experimental results obtained on sodium vapors at MIT ($`\lambda =\sqrt{8}`$) are in good agreement with the theoretical values predicted by (19) . In the case of an isotropic harmonic trap ($`\lambda =1`$) with frequency $`\omega `$, one obtains $`\mathrm{\Omega }_1=\sqrt{5}\omega `$ and $`\mathrm{\Omega }_2=\mathrm{\Omega }_3=\sqrt{2}\omega `$ . In most experiments the confining trap has axial symmetry \[4-6\]. Let us analyze this case in detail. Because of the axial symmetry, we can impose $`q_1=q_2=q_{\perp }`$.
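Since the characteristic polynomial (18) and the closed form (19) must agree, a few lines of Python provide a quick cross-check; here we take the MIT asymmetry $`\lambda =\sqrt{8}`$ quoted above (the code is our own check, not the paper's):

```python
import numpy as np

def mode_freqs(w1, w2, w3):
    """Roots Omega of the characteristic polynomial in Omega^2, eq. (18)."""
    c = [1.0,
         -3.0*(w1**2 + w2**2 + w3**2),
          8.0*(w1**2*w2**2 + w1**2*w3**2 + w2**2*w3**2),
         -20.0*(w1*w2*w3)**2]
    return np.sort(np.sqrt(np.roots(c).real))

lam, w_perp = np.sqrt(8.0), 1.0
print("from eq. (18):", mode_freqs(w_perp, w_perp, lam*w_perp))

# Closed form, eq. (19), plus the Omega_3 = sqrt(2) w_perp mode:
root = np.sqrt(16 + 9*lam**4 - 16*lam**2)
O_plus  = np.sqrt(2 + 1.5*lam**2 + 0.5*root)
O_minus = np.sqrt(2 + 1.5*lam**2 - 0.5*root)
print("from eq. (19):", np.sort([O_plus, O_minus, np.sqrt(2.0)]))
```

Both routes give $`\mathrm{\Omega }\approx (1.41,\,1.80,\,4.98)\,\omega _{\perp }`$, and at $`\lambda =1`$ the same code reproduces the isotropic values $`\sqrt{5}\omega `$ and $`\sqrt{2}\omega `$.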
Moreover, by using the dimensionless time $`\tau =\omega _{\perp }t`$, the Hamiltonian of the BEC collective modes becomes $$H=p_{\perp }^2+\frac{1}{2}\lambda ^2p_3^2+q_{\perp }^2+\frac{1}{2}q_3^2+\frac{1}{q_{\perp }^2q_3},$$ (20) where $`p_{\perp }=dq_{\perp }/d\tau `$ and $`p_3=\lambda ^{-2}dq_3/d\tau `$ are the conjugate momenta. Note that the condition $`q_1=q_2`$ restricts the collective modes to monopole oscillations, for which the third component of the angular momentum, which is a good quantum number, is zero . Near the minimum of the potential the trajectories in phase space are periodic or quasi-periodic. On the contrary, far from the minimum, the effect of the nonlinearity becomes important. As the KAM theorem predicts, parts of phase space become filled with chaotic orbits, while in other parts the toroidal surfaces of the integrable system are deformed but not destroyed. We use a symplectic Euler method (leapfrog) to numerically compute the trajectories. The time step is $`\mathrm{\Delta }\tau =10^{-4}`$ and the energy is conserved to the sixth digit. The conservation of energy restricts any trajectory of the four-dimensional phase space to a three-dimensional energy shell. At a given energy, the restriction $`q_{\perp }=1`$ defines a two-dimensional surface in the phase space, which is called a Poincaré section. Each time a particular trajectory passes through the surface a point is plotted at the position of intersection $`(q_3,p_3)`$. We employ a first-order interpolation process to reduce inaccuracies due to the use of a finite step length.

Figure 1: Poincaré sections with $`\lambda =\sqrt{2}`$. Each panel is at a fixed energy. From left to right and from top to bottom: $`\chi =1\%`$, $`\chi =12\%`$, $`\chi =28\%`$, $`\chi =36\%`$. $`\chi `$ is the relative increase of the energy with respect to the ground state (minimum of the potential energy).

In Figure 1 we plot Poincaré sections of the system with $`\lambda =\sqrt{2}`$. In each panel there is a Poincaré section at a fixed value of the energy of the system. At each energy value, we have chosen different initial conditions \[$`q_{\perp }(0)`$,$`q_3(0)`$,$`p_{\perp }(0)`$,$`p_3(0)`$\] for the dynamics; actually $`p_{\perp }(0)`$ has been fixed by the conservation of energy. The integration time is $`400`$ in dimensionless units, that is, less than $`1`$ second (the lifetime of the condensate is about $`10`$ seconds). Note that the CPU time needed to calculate a Poincaré section with a dozen initial conditions is about $`30`$ seconds. Chaotic regions on the Poincaré section are characterized by a set of randomly distributed points, and regular regions by dotted or solid curves. Let $`\chi `$ be the relative increase of the energy with respect to the ground state (minimum of the potential energy). For $`\chi =1\%`$ and $`\chi =12\%`$ the trajectories are still all regular, but for $`\chi =28\%`$ there is a chaotic sea. For $`\chi =36\%`$ most trajectories are chaotic. It is important to observe that a strong enhancement of nonlinear effects, and eventually chaos, can be obtained not only by increasing the energy but also by changing the anisotropy $`\lambda `$ of the trap. In fact, as shown in Ref. , for special values of $`\lambda `$, the frequencies of different modes, or of their harmonics, can coincide. We plan to investigate in detail the onset of chaos for different configurations of the external trap.

## 5 Conclusions

In this short paper we have discussed the mean-field equations which describe the collective motion of a trapped weakly-interacting Bose condensate.
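A minimal sketch of this procedure follows, using the equations of motion in the form of eq. (15) with $`q_1=q_2=q_{\perp }`$, the time step $`\mathrm{\Delta }\tau =10^{-4}`$ and the section surface $`q_{\perp }=1`$ quoted above; the starting point is an arbitrary illustrative choice, and in pure Python a $`\tau =400`$ run is slow, so this is meant to show the structure rather than to reproduce the figures:

```python
import numpy as np

lam = np.sqrt(2.0)    # trap anisotropy, as in Fig. 1

def acc(qp, q3):
    # Accelerations from eq. (15) with q1 = q2 = q_perp, time in 1/omega_perp
    return (-qp + 1.0/(qp**3*q3), lam**2*(-q3 + 1.0/(qp**2*q3**2)))

def poincare(qp, q3, vp, v3, dt=1e-4, n_steps=4_000_000):
    """Kick-drift-kick leapfrog; record (q3, p3) at upward crossings of q_perp = 1."""
    pts = []
    ap, a3 = acc(qp, q3)
    for _ in range(n_steps):
        vp += 0.5*dt*ap; v3 += 0.5*dt*a3
        qp_old, q3_old = qp, q3
        qp += dt*vp; q3 += dt*v3
        ap, a3 = acc(qp, q3)
        vp += 0.5*dt*ap; v3 += 0.5*dt*a3
        if qp_old < 1.0 <= qp:                      # section surface q_perp = 1
            s = (1.0 - qp_old)/(qp - qp_old)        # first-order interpolation
            pts.append((q3_old + s*(q3 - q3_old), v3/lam**2))  # p3 = v3/lam^2
    return np.array(pts)

pts = poincare(qp=1.0, q3=1.2, vp=0.1, v3=0.0)      # illustrative start point
print(len(pts), "section points")
```

The symplectic (area-preserving) update is what keeps the energy drift bounded over the long runs, which is essential for a trustworthy Poincaré section.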
We have shown by using Poincaré sections that for large energy values the system becomes chaotic. An important question is the following: can BEC chaotic dynamics be experimentally detected? In our opinion the answer is positive. Nowadays non-destructive images of the dynamics of the condensate can be obtained. The radius of the condensate as a function of time can be detected and its power spectrum analyzed. In fact, one finds that the power spectrum of a chaotic signal is much more complex than that of a regular one. Typically, one sees a few peaks in the regular signal and many peaks surrounded by a lot of noise in a chaotic one. Finally, we observe that various initial conditions for the collective dynamics of the condensate can be obtained by using laser beams or by modulating for a short period the magnetic fields which confine the condensate.

## References

V.R. Manfredi and L. Salasnich, Int. J. Mod. Phys. E 4, 625 (1995).
V.R. Manfredi, M. Rosa-Clot, L. Salasnich, and S. Taddei, Int. J. Mod. Phys. E 5, 519 (1996).
V.R. Manfredi and L. Salasnich, “Mean-Field and Nonlinear Dynamics in Many-Body Quantum Systems”, to appear in Proceedings of the 7th Workshop ’Perspectives on Theoretical Nuclear Physics’, Ed. A. Fabrocini et al. (Edizioni ETS, Pisa, 1999).
M.H. Anderson, J.R. Ensher, M.R. Matthews, C.E. Wieman, and E.A. Cornell, Science 269, 189 (1995).
K.B. Davis, M.O. Mewes, M.R. Andrews, N.J. van Druten, D.S. Durfee, D.M. Stamper-Kurn, and W. Ketterle, Phys. Rev. Lett. 75, 3969 (1995).
C.C. Bradley, C.A. Sackett, J.J. Tollet, and R.G. Hulet, Phys. Rev. Lett. 75, 1687 (1995).
E.P. Gross, Nuovo Cimento 20, 454 (1961); J. Math. Phys. 4, 195 (1963); L.P. Pitaevskii, Zh. Eksp. Teor. Fiz. 40, 646 (1961) \[Sov. Phys. JETP 13, 451 (1961)\].
D.S. Jin, J.R. Ensher, M.R. Matthews, C.E. Wieman, and E.A. Cornell, Phys. Rev. Lett. 77, 420 (1996).
A.L. Fetter and J.D. Walecka, Quantum Theory of Many-Particle Systems (McGraw-Hill, New York, 1971).
K. Huang, Statistical Mechanics (Wiley, New York, 1963).
S.J. Chang, Introduction to Quantum Field Theory, Lecture Notes in Physics, vol. 29 (World Scientific, Singapore, 1990).
S. Stringari, Phys. Rev. Lett. 77, 2360 (1996).
F. Dalfovo, C. Minniti, S. Stringari, and L. Pitaevskii, Phys. Lett. A 227, 259 (1997); F. Dalfovo, C. Minniti, and L. Pitaevskii, Phys. Rev. A 56, 4855 (1997).
E. Cerboneschi, R. Mannella, E. Arimondo, and L. Salasnich, Phys. Lett. A 249, 245 (1998).
S. Stringari, Phys. Rev. A 58, 2385 (1998).
D.M. Stamper-Kurn, H.J. Miesner, S. Inouye, M.R. Andrews, and W. Ketterle, Phys. Rev. Lett. , 500 (1998).
M.C. Gutzwiller, Chaos in Classical and Quantum Mechanics (Springer, New York, 1990); A.J. Lichtenberg and M.A. Lieberman, Regular and Stochastic Motion (Springer, New York, 1983).
# Dynamical Phase Transition in a Fully Frustrated Square Josephson Array

## I Introduction

There has recently been considerable interest in dynamical phases in superconductors. Such interest has been stimulated by much evidence (both theoretical and experimental) that vortex lattices, as well as other ordered systems, can exhibit various types of transitions from one type of dynamical phase to another, as a function of controllable external parameters, such as driving force and temperature. In the case of Type-II superconductors in a magnetic field, the driving force which acts on the vortex lattice is the applied current. In disordered superconducting films and crystals, there now appear to exist at least three distinct phases, as a function of driving current and temperature. At low driving current, the vortex lattice is pinned, and typically exhibits a glass-like order because of the random pinning centers which prevent the lattice from moving. At intermediate driving currents, the glass phase is depinned, and starts to move; this motion is thought to occur inhomogeneously in disordered superconductors (that is, some of the vortices move through random channels, while other vortices remain pinned). In this phase, the vortex system is usually said to exhibit plastic flow. Finally, at high driving currents, an event resembling a phase transition occurs, and the vortex system reverts back to an ordered state which closely resembles a moving crystalline phase. Most workers now believe that this third phase, especially in two dimensions, lacks long-range crystalline order but instead exhibits hexatic or smectic order, modified by the static disorder of the system. This high-current phase is also thought to exhibit a finite transverse critical current, i.e., once moving parallel to the driving force, the lattice requires a finite driving force perpendicular to the average direction of motion before acquiring a nonzero transverse velocity. The details of all these phases, as well as the transitions between them, remain the subjects of much experimental and theoretical investigation. Recently, there has been evidence that some similar behavior is to be found in ordered systems, both in two and three dimensions (2D and 3D). Extensive numerical studies have been carried out by Nori and collaborators, who have found a very complex structure of ordered and disordered (sometimes called “plastic”) phases as a function of the relative density of vortices and pinning sites, and of the direction of applied current relative to the axes of the periodic pinning lattice. Some of the observed structure can be described in terms of nomenclature frequently used for incommensurate dynamical systems, such as devil’s staircases, Arnol’d tongues, etc. These workers use a standard model whereby the vortices interact with each other via a pairwise potential involving a modified Bessel function, and with the pins by a suitable short-range pinning potential. A similar model had been used previously to study vortex lattices driven through a disordered array of pins. This earlier calculation revealed a somewhat simpler structure with three phases apparent: a pinned vortex lattice, a moving plastic phase, and a moving vortex lattice with a finite transverse critical current. In this paper, we investigate the possibility of dynamical phase transitions in a widely investigated model system: the overdamped square Josephson junction array in a transverse magnetic field.
This study is complementary to the work mentioned above, in that the phase angles of the complex superconducting order parameter are explicitly included as degrees of freedom. Although such ordered Josephson junction arrays have, of course, been extensively studied, little attention has been paid to the possibility of plastic phases in these materials. Strongly disordered arrays have recently been shown numerically to have both plastic and moving-lattice phases separated by a phase boundary which appears to exhibit critical phenomena associated with a diverging correlation length. Weakly disordered driven periodic systems are also predicted to have a nearly periodic temporal order at large values of the driving parameter. In order to search for possible dynamical phases, we study this system in a regime in which a current is applied with nonzero components parallel to both the $`x`$ and $`y`$ axes of the array. We consider only the so-called fully-frustrated case with exactly 1/2 flux quantum per plaquette of the square array. Even in this ordered array, we find three phases. These are ($`A`$) the pinned vortex lattice, ($`B`$) a moving lattice, and ($`C`$) a plastic phase occurring at currents between $`A`$ and $`B`$. Phases $`A`$ and $`B`$ have been much discussed previously, but phase $`C`$ seems not to have been observed in an ordered lattice. The plastic phase $`C`$ is characterized by aperiodic voltage noise and an absence of long-range vortex order. By contrast, the moving lattice $`B`$ is characterized by voltage signals which are periodic in time, and, most strikingly, by a finite transverse critical current. In an attempt to understand our numerical results, we also present some simple analytical models. One model is based on the assumption that the dynamics can be characterized in terms of periodically repeating 2 $`\times `$ 2 unit cells driven by a current applied at an angle to the cell axes. A second model considers the motion of a single vortex through the periodic potential formed by the lattice of Josephson-coupled grains. Both models give rise to distinct regimes in which none, one, or both of the voltage components parallel to the cell axes are nonzero. Thus, in particular, a nonzero transverse critical current is present in both analytical models. The remainder of this paper is organized as follows. In Section II, we briefly review our calculational model. Our numerical results are presented in Section III. Section IV is an interpretive discussion which includes the two analytical models mentioned above. Our conclusions are described in Section V.

## II Model

In all our simulations, we take as the starting point the well-known dynamical equations for an overdamped Josephson junction array in a transverse magnetic field. We assume that each Josephson junction is resistively shunted, and we neglect the loop inductance; that is, we assume that the magnetic field generated by the currents is negligible compared to the applied magnetic field. The equations then take the form $$I_{ab}=I_{c,ab}\mathrm{sin}\left(\varphi _a-\varphi _b-A_{ab}\right)+\frac{\mathrm{\hbar }}{2eR_{ab}}\frac{d}{dt}\left(\varphi _a-\varphi _b-A_{ab}\right),$$ (1) $$\sum _bI_{ab}=I_a^{ext}.$$ (2) Here $`I_{c,ab}`$ is the critical current of the junction connecting grains $`a`$ and $`b`$, $`R_{ab}`$ is the shunt resistance of that junction, and $`I_a^{ext}`$ is the external current fed into the $`a`$th grain.
We assume a constant, uniform external field $`\text{B}=B\widehat{z}`$ perpendicular to the array, and make the gauge choice $`\text{A}=Bx\widehat{y}`$. The phase factor $$A_{ab}=\frac{2\pi }{\mathrm{\Phi }_0}\int _a^b\text{A}𝑑\text{s}$$ (3) is then easily expressed in terms of $`f=Ba^2/\mathrm{\Phi }_0`$, the frustration or flux per plaquette of dimension $`a\times a`$, where $`\mathrm{\Phi }_0=hc/2e`$ is the flux quantum. The equations of motion can be put in dimensionless form with the definitions $`i_{ab}\equiv I_{ab}/I_c`$, $`i_{c,ab}\equiv I_{c,ab}/I_c`$, and $`g_{ab}\equiv R/R_{ab}`$, and the natural time unit $`\tau \equiv \frac{\mathrm{\hbar }}{2eRI_c}`$, where $`I_c`$ and $`R`$ are a typical critical current and a typical shunt resistance. The result of this substitution is $$i_{ab}=i_{c,ab}\mathrm{sin}\left(\varphi _a-\varphi _b-A_{ab}\right)+\tau g_{ab}\frac{d}{dt}\left(\varphi _a-\varphi _b\right),$$ (4) $$\sum _bi_{ab}=i_a^{ext}.$$ (5) Combining these equations yields a set of coupled differential equations which is easily reduced to matrix form and solved numerically. In our work, we employed a fourth-order embedded Runge-Kutta integration with variable time step. In most of our calculations, we have considered a lattice without disorder: all shunt resistances $`R_{ab}=R`$ (or $`g_{ab}=1`$), and all critical currents $`i_{c,ab}=1`$ on a square array. Our array is driven by nonzero current densities in both the $`x`$ and $`y`$ directions: a current $`i_x`$ is fed into each grain along the left-hand edge of the array, and extracted from each grain on the right-hand edge, and a current $`i_y`$ is similarly injected into each grain on the bottom edge and extracted from each grain on the top edge of the array (see Fig. 1). Starting from randomized initial phases, we integrate these equations of motion over an “equilibration” interval of 100$`\tau `$-5000$`\tau `$, followed by an averaging period of 100$`\tau `$-2000$`\tau `$. Typically, the external currents $`i`$ are ramped up or down (in steps of 0.001 to 0.1) without rerandomizing the phases. We calculate the spatially averaged but time-dependent voltage difference $`v(t^{\prime })=V(t^{\prime })/NRI_c`$ (where $`t^{\prime }=t/\tau `$ is the dimensionless time) between the input and output edges of the array in both the $`x`$ and $`y`$ directions, as well as its time average $`\langle v\rangle _t`$, in both directions. In some regions of the phase diagram, these voltages appear to be periodic in time, as revealed by an analysis of the power spectrum of the voltage. In other regions, as described below, this power spectrum reveals that the voltage is aperiodic. In some of our simulations, we also tracked the number and motion of vortices in the array. The vortex number in a given plaquette $`\alpha `$ is an integer $`n_{v,\alpha }`$ defined by the relation $$n_{v,\alpha }\equiv \frac{1}{2\pi }\sum _{ab}(\varphi _a-\varphi _b-A_{ab})=0,\pm 1,$$ (6) where the sum is taken clockwise around the $`\alpha ^{th}`$ plaquette, and each phase difference is restricted to the range \[$`-\pi ,\pi `$\]. Our calculations are carried out exclusively for a frustration $`f=1/2`$, i.e., an applied magnetic field equal to one flux quantum for every two plaquettes. In a square Josephson array, the ground state in this field is the well-known checkerboard vortex pattern, shown schematically in Fig. 1. Our simulations reproduce this pattern.

## III Results

The central results of our calculations are summed up concisely in Fig.
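The matrix form mentioned above can be made explicit: current conservation (5) gives a linear system $`G\dot{\varphi }=i^{ext}-s(\varphi )`$, where $`G`$ is the conductance (graph) Laplacian of the lattice and $`s`$ collects the Josephson sine terms. A minimal Python sketch of one such step follows, under the stated assumptions $`g_{ab}=i_{c,ab}=1`$; the plain Euler loop is a stand-in for brevity (the production integrator above is an embedded fourth-order Runge-Kutta), and the bias values are illustrative:

```python
import numpy as np

N, f = 8, 0.5                                # N x N grains, half flux quantum

# Bonds (a, b, A_ab) of the open square lattice; with the gauge A = B x y-hat,
# the phase factor A_ab = 2*pi*f*x lives on the vertical bonds only.
bonds = []
for y in range(N):
    for x in range(N):
        a = x + N*y
        if x < N - 1: bonds.append((a, a + 1, 0.0))
        if y < N - 1: bonds.append((a, a + N, 2*np.pi*f*x))

G = np.zeros((N*N, N*N))                     # conductance (graph) Laplacian
for a, b, _ in bonds:
    G[a, a] += 1; G[b, b] += 1; G[a, b] -= 1; G[b, a] -= 1

def phase_dot(phi, i_ext):
    """Solve G * dphi/dt' = i_ext - s, with s the sine terms of eq. (4)."""
    s = np.zeros(N*N)
    for a, b, A in bonds:
        I = np.sin(phi[a] - phi[b] - A)
        s[a] += I; s[b] -= I
    return np.linalg.lstsq(G, i_ext - s, rcond=None)[0]  # G is singular: lstsq

ix, iy = 0.6, 0.2                            # illustrative bias per edge grain
i_ext = np.zeros(N*N)
i_ext[0::N] += ix; i_ext[N-1::N] -= ix       # left in, right out
i_ext[:N]   += iy; i_ext[-N:]    -= iy       # bottom in, top out

rng = np.random.default_rng(0)
phi = 2*np.pi*rng.random(N*N)
dt = 0.02
for _ in range(5000):                        # plain Euler stand-in
    phi += dt*phase_dot(phi, i_ext)

pd = phase_dot(phi, i_ext)
# Instantaneous v_x ~ difference of edge phase velocities (units of R I_c);
# a real run would time-average this and normalize per junction.
print("instantaneous v_x ~", pd[0::N].mean() - pd[N-1::N].mean())
```

Because $`G`$ has a zero mode (the global phase), the least-squares solve picks the physical solution up to an irrelevant uniform shift; prefactorizing $`G`$ once would make a long run far cheaper.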
2, which shows the “dynamical phase diagram” for two square Josephson junction arrays with no disorder at $`f=1/2`$, driven by two orthogonal currents $`i_x`$ and $`i_y`$. We find three different phases: a pinned vortex lattice (time-averaged voltages $`\langle v_x\rangle _t=\langle v_y\rangle _t=0`$), a plastic flow regime ($`\langle v_x\rangle _t>0`$ and $`\langle v_y\rangle _t>0`$), and a moving vortex lattice ($`\langle v_x\rangle _t>0`$ with $`\langle v_y\rangle _t=0`$, or vice versa). The calculated boundaries are shown in Fig. 2. The results of Fig. 2 were obtained by two different methods. In the first procedure, the longitudinal current $`i_x`$ was first ramped up from zero to a finite value, with $`i_y`$ held at zero. Next, the transverse current $`i_y`$ was ramped up at fixed $`i_x`$. To determine the phase boundaries, we simply searched for the currents at which the time averages $`\langle v_x\rangle _t`$ or $`\langle v_y\rangle _t`$ (or both) became nonzero. In the second method, we ramped up $`i_x`$ and $`i_y`$ simultaneously, holding the ratio $`i_y/i_x`$ fixed. Both methods generally gave similar phase boundaries. Likewise, we found little indication of substantial hysteresis, except in determining the boundary between phases $`A`$ and $`C`$. In this case, if the integration time is too long, the system tended to jump abruptly back and forth between the two phases. We refer to the phase in which both voltages are nonzero as a “plastic” phase, by analogy with a similar phase exhibited by vortices in systems with quenched disorder. In this phase, $`v(t^{\prime })`$ is apparently non-periodic in time; the corresponding voltage power spectrum is only weakly frequency-dependent (see below). Although we use the nomenclature of plastic flow, we have not checked that the vortex motion in this phase is inhomogeneous (i.e. that only some vortices are in motion while others remain pinned in this phase). Such inhomogeneous motion is thought to occur in disordered systems. By contrast, in the driven lattice phase, where only $`\langle v_x\rangle _t`$ or $`\langle v_y\rangle _t`$ is nonzero, the power spectra of the voltage traces are sharply peaked at a fundamental frequency and its harmonics. Further, while one voltage has a nonzero time average, the other voltage is periodic and averages to zero over a cycle. We interpret this behavior as representing a vortex lattice being driven transverse to the larger of the two current components. Various minor numerical difficulties sometimes interfered with the calculations, but they could usually be overcome. For example, spurious voltage jumps were occasionally observed during these calculations; these jumps (unlike genuine jumps) could generally be eliminated by changing the initial conditions, the integration time, or the direction of current ramping. Only those voltage jumps which appeared at the same position on the phase diagram in different runs were deemed to be genuine. From the occurrence of these jumps, however, we conclude that at certain points on the phase diagram there are several metastable dynamical states which have similar energies. The occurrence of such states may suggest a first-order transition across the phase boundary, at least in the ordered system. In Fig. 3, we show time-dependent voltage traces at several points in the plastic and moving-lattice phases. The traces are plainly very different in the two phases. In the moving-lattice phase, the voltage traces are evidently periodic in time. By contrast, in the plastic phase, the voltages, in both the $`x`$ and $`y`$ directions, are obviously aperiodic. Another striking feature is apparent in the moving-lattice phase.
In this phase, as noted above, there is a non-zero time-averaged voltage only along one of the two directions, even though current is applied along both the $`x`$ and $`y`$ directions. Despite this feature, there is a finite time-dependent voltage in the $`y`$ direction, which averages to zero, and which is periodic like $`v_x(t^{\prime })`$. To learn more about the harmonic content of these voltage traces, we have also calculated the voltage power spectrum $`P(\omega \tau )`$, using the non-normalized Lomb method for variable-time-step data: $$P(\omega \tau )=\frac{1}{2}\left\{\frac{\left[\sum _j\left(v_j-\overline{v}\right)\mathrm{cos}\omega \left(t_j-t_o\right)\right]^2}{\sum _j\mathrm{cos}^2\omega \left(t_j-t_o\right)}+\frac{\left[\sum _j\left(v_j-\overline{v}\right)\mathrm{sin}\omega \left(t_j-t_o\right)\right]^2}{\sum _j\mathrm{sin}^2\omega \left(t_j-t_o\right)}\right\},$$ (7) where $`t_o`$ is defined by $$\mathrm{tan}\left(2\omega t_o\right)=\frac{\sum _j\mathrm{sin}2\omega t_j}{\sum _j\mathrm{cos}2\omega t_j}.$$ (8) Here, $`\omega `$ is the angular frequency, the $`t_j`$’s are the times at which the voltage is recorded, $`v_j=v\left(t_j\right)`$, and $`\overline{v}`$ is the arithmetic average of the $`v_j`$’s. We have carried out this calculation for several points in the plastic and moving-lattice phases, as indicated by triangles in Fig. 2(b). The results are shown in Fig. 4 for both $`v_x`$ and $`v_y`$. Clearly, the noise confirms that the voltage in the moving-lattice phase is periodic in time, while that in the plastic phase has a continuous spectrum which is relatively weakly dependent on frequency. The moving-lattice phase is characterized by a fundamental angular frequency $`\omega _0`$. It is readily shown that $`\omega _0`$ is related to the time-averaged voltage drop across the array, $`\langle V\rangle _t`$, by $$\omega _0=2e\langle V\rangle _t/\mathrm{\hbar }.$$ (9) This relation is consistent with the widely-accepted egg-carton picture of the vortex lattice in the moving phase. In this picture, the vortex lattice is viewed as a collection of eggs moving in a potential similar to an egg carton, consisting of a periodic distribution of wells on a square lattice (each well lying at the center of a plaquette formed by four grains). During one period, the vortex lattice moves by one row through the egg-carton potential. Since there is a phase slip of $`2\pi `$ between two opposite edges of the array each time a vortex crosses the line joining those two edges, one can readily deduce the above relationship. Fig. 2(a) shows that a finger of the driven lattice phase is interposed between the pinned and plastic flow phases in a $`10\times 10`$ array. This finger appears to be a finite-size artifact, because it is absent in the phase diagram for a $`20\times 20`$ array shown in Fig. 2(b). We believe that the phase diagram of Fig. 2(b) is likely to persist in an $`N\times N`$ lattice even at very large $`N`$. Thus, the ordered array at $`f=1/2`$ has three phases: pinned lattice, moving plastic phase, and moving lattice. Since a realistic Josephson lattice is certain to have some disorder, we have also carried out a limited number of calculations for an array at $`f=1/2`$ with weak disorder in the critical currents. Specifically, we assume that the critical currents are independent random variables uniformly distributed between $`0.9I_c`$ and $`1.1I_c`$. The resulting phase diagram, for a single realization of disorder, is shown in Fig. 5. It is calculated using the same techniques as for the ordered arrays.
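Eqs. (7)-(8) translate directly into code. The short sketch below (our own illustration, applied here to a synthetic periodic signal rather than to the array voltages) shows how the periodogram picks out a fundamental frequency from unevenly sampled data:

```python
import numpy as np

def lomb(t, v, omegas):
    """Non-normalized Lomb periodogram, eqs. (7)-(8), for uneven sampling."""
    t = np.asarray(t)
    v = np.asarray(v) - np.mean(v)            # subtract the arithmetic average
    P = np.empty(len(omegas))
    for k, w in enumerate(omegas):
        t0 = np.arctan2(np.sum(np.sin(2*w*t)),
                        np.sum(np.cos(2*w*t))) / (2*w)   # eq. (8)
        c, s = np.cos(w*(t - t0)), np.sin(w*(t - t0))
        P[k] = 0.5*((v @ c)**2/np.sum(c**2) + (v @ s)**2/np.sum(s**2))
    return P

# Example: a periodic "voltage" sampled at variable time steps
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 200.0, 3000))
v = np.sin(2.0*t) + 0.3*np.sin(4.0*t)
w = np.linspace(0.1, 6.0, 500)
P = lomb(t, v, w)
print("peak at omega =", w[np.argmax(P)])     # ~2.0, plus a harmonic at ~4.0
```

Applied to the array data, a spectrum of sharp peaks at multiples of $`\omega _0`$ signals the moving lattice, while a flat continuous spectrum signals the plastic phase.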
Once again, we see clear evidence of three phases: pinned lattice, plastic phase, and moving lattice. These have characteristics similar to those in the ordered case. For example, the moving-lattice phase has a finite transverse critical current $`i_c`$, which goes to zero near the phase boundary. In this $`10\times 10`$ sample, there is an even larger finger of $`B`$ phase interposed between $`A`$ and $`C`$ than there is in the ordered array; once again, we assume that this finger disappears in larger arrays. For strongly disordered square arrays at several different field strengths, a phase diagram resembling ours, though without the interposed finger, has been found by Dominguez. We have also calculated the voltage power spectra for the phases $`B`$ and $`C`$; they resemble those of Fig. 4 in that $`P(\omega \tau )`$ has peaks at multiples of a fundamental frequency in the moving lattice, while the spectra in the plastic phase are continuous and only weakly dependent on frequency.

## IV Simplified Analytical Models

In a further effort to understand the behavior found numerically, we have considered two simple analytical models. In this section, we give a brief description of the models used. As will be seen, while each can reproduce some of the numerical results, neither generates all the details of the simulations.

### A Four-Plaquette Unit Cell

Our first analytical model is a slight generalization of an approach previously used by Rzchowski et al. They consider the dynamics of a fully frustrated $`N\times N`$ array of overdamped resistively-shunted junctions. To treat this system analytically, they assume that the dynamical state is simply a periodic repetition of a square four-plaquette unit cell. It has long been known that the ground state of the fully frustrated lattice has such a unit cell, corresponding to the checkerboard vortex pattern shown in Fig. 1. If we maintain this assumption of periodicity, the equations of Ref. are readily extended to the case of currents applied at an angle to the plaquette edges. The resulting equations take the form $`\gamma +\alpha +\beta +\delta =\pi (mod2\pi )`$ (10) $`-\mathrm{sin}\beta -{\displaystyle \frac{d\beta }{dt^{\prime }}}-\mathrm{sin}\delta -{\displaystyle \frac{d\delta }{dt^{\prime }}}+\mathrm{sin}\gamma +{\displaystyle \frac{d\gamma }{dt^{\prime }}}+\mathrm{sin}\alpha +{\displaystyle \frac{d\alpha }{dt^{\prime }}}=0`$ (11) $`{\displaystyle \frac{d\gamma }{dt^{\prime }}}+\mathrm{sin}\gamma -{\displaystyle \frac{d\alpha }{dt^{\prime }}}-\mathrm{sin}\alpha =I_{tot,x}`$ (12) $`{\displaystyle \frac{d\beta }{dt^{\prime }}}+\mathrm{sin}\beta -{\displaystyle \frac{d\delta }{dt^{\prime }}}-\mathrm{sin}\delta =I_{tot,y}`$ (13) where $`\alpha `$, $`\beta `$, $`\delta `$, and $`\gamma `$ are the four inequivalent gauge-invariant phase differences describing the bonds of the $`2\times 2`$ primitive cell (cf. Fig. 1 of Ref. ), $`t^{\prime }=t/\tau `$ is a dimensionless time, and $`I_{tot,x}`$ and $`I_{tot,y}`$ are the total bias currents in the $`x`$ and $`y`$ directions per $`2\times 2`$ superlattice cell (in units of the single-junction critical current). We have solved these equations numerically, first reducing the system to three variables and then employing the same integration algorithm described above. In comparing with the equations for our first set of simulations, note that the quantity $`I_{tot,\alpha }=2i_\alpha (\alpha =x,y)`$ is twice the current injected into each boundary grain. The resulting phase diagram is shown in Fig. 6.
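The reduction to three variables mentioned above can be sketched concretely: the constraint (10) eliminates $`\delta =\pi -\alpha -\beta -\gamma `$ (so $`\dot{\delta }=-\dot{\alpha }-\dot{\beta }-\dot{\gamma }`$), after which eqs. (11)-(13) become a linear system for the three derivatives that can be solved in closed form. The sketch below assumes the sign conventions as written in eqs. (10)-(13) above, and the bias values are illustrative:

```python
import numpy as np
from scipy.integrate import solve_ivp

Ix, Iy = 0.9, 0.4          # illustrative bias currents per 2x2 cell

def rhs(t, y):
    a, b, g = y                           # alpha, beta, gamma
    d = np.pi - a - b - g                 # delta from the constraint (10)
    sa, sb, sg, sd = np.sin(a), np.sin(b), np.sin(g), np.sin(d)
    r1 = sb + sd - sg - sa                # eq. (11) with delta' eliminated
    r2 = Ix + sa - sg                     # eq. (12)
    r3 = Iy - sb + sd                     # eq. (13)
    da = 0.5*(0.5*r1 - r2)                # solving the 3x3 linear system
    dg = 0.5*(0.5*r1 + r2)
    db = 0.5*(r3 - 0.5*r1)
    return [da, db, dg]

sol = solve_ivp(rhs, (0.0, 2000.0), [0.3, -0.2, 0.1], max_step=0.05)
wind = (sol.y[:, -1] - sol.y[:, 0])/(sol.t[-1] - sol.t[0])
print("mean winding rates (alpha, beta, gamma):", wind)
# Nonzero winding of a gauge-invariant phase signals a dc voltage on the
# corresponding bonds; all rates ~0 would indicate the pinned regime A.
```

Scanning ($`I_{tot,x}`$, $`I_{tot,y}`$) with such a loop is what generates a Fig. 6-like diagram.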
As in the previous phase diagrams calculated in this paper, there are regions (denoted $`A`$, $`B`$, and $`C`$) in which none, one, or both of the time-averaged voltages $`\langle v_x\rangle _t`$ and $`\langle v_y\rangle _t`$ are nonzero. Despite the reduction in the number of variables and the enforced symmetry of this simulation, the voltages in regime $`C`$ are still aperiodic in time, with continuous power spectra at most points in regime $`C`$. A representative power spectrum for $`v_x(t^{\prime })`$ is shown in the inset to Fig. 6. It is calculated for a point in region $`C`$ indicated by a triangle. When the current is applied along the $`x`$ axis, the critical current is close to the analytically computed value $`i_x=\sqrt{2}-1`$. By contrast, when $`i_x\gg 1`$, the boundaries of region $`B`$ asymptotically approach the line $`i_y=1`$. In general, we conclude that this simplified version of the dynamics has some, but not all, features of the large array. In particular it does have a region of “plastic flow” at large $`i_x`$ and $`i_y`$. However, it fails to reproduce the region of plastic flow interposed between phases $`A`$ and $`B`$ seen in our larger-scale simulations.

### B Single Vortex in an “Egg-Carton” Potential

Our second analytical model is even simpler. It refers, not to an entire lattice of vortices, but to a single vortex moving in the “egg-carton” potential formed by the lattice. According to Lobb, Abraham, and Tinkham, a single vortex can be viewed, to a good approximation, as moving in a potential of the form $$V(x,y)=-V_0\left[\mathrm{cos}\left(\frac{2\pi x}{a}\right)+\mathrm{cos}\left(\frac{2\pi y}{a}\right)\right],$$ (14) where $`a`$ is the lattice constant of the array and $`V_0`$ is the depth of the potential felt by a single vortex. This potential has minima at $`𝐫(x,y)=(n_1a,n_2a)`$ where $`n_1,n_2=0,\pm 1,\pm 2,\mathrm{}`$. \[The grains of the Josephson lattice, in these coordinates, are located at $`((n_1+\frac{1}{2})a,(n_2+\frac{1}{2})a)`$, and correspond to maxima of the vortex potential.\] The current-voltage characteristics of this model are readily calculated. The Magnus force on a vortex due to an external current density J may be written (taking $`\widehat{z}`$ as the direction perpendicular to the array) $$𝐅_{ext}=\mathrm{\Phi }_0\widehat{z}\times 𝐉/c,$$ (15) where J, in this two-dimensional system, represents a current per unit length. In the steady state, this force has to be balanced by two other forces: the gradient of the egg-carton potential energy and the frictional force experienced by the vortex moving through the lattice. This condition may be written $$𝐅_{ext}-\nabla V(𝐫)-\eta \dot{𝐫}=0,$$ (16) or explicitly, in component form, $`\eta \dot{x}=-{\displaystyle \frac{\mathrm{\Phi }_0J_y}{c}}-{\displaystyle \frac{2\pi }{a}}V_0\mathrm{sin}\left({\displaystyle \frac{2\pi x}{a}}\right)`$ (17) $`\eta \dot{y}={\displaystyle \frac{\mathrm{\Phi }_0J_x}{c}}-{\displaystyle \frac{2\pi }{a}}V_0\mathrm{sin}\left({\displaystyle \frac{2\pi y}{a}}\right).`$ (18) Given the vortex velocities, the electric fields may be written down by using the relation between vortex velocity and electric field.
Here we have used the fact that the phase difference between $`P_1`$ and $`P_2`$ changes by $`2\pi `$ every time a vortex crosses that line, and have also availed ourselves of the Josephson relation between voltage and phase. We therefore deduce the following expressions for the components of electric field: $`E_x={\displaystyle \frac{hn\dot{y}}{2e}}`$ (20) $`E_y={\displaystyle \frac{hn\dot{x}}{2e}}.`$ (21) Eqs. (17) and (18) are identical in form to the equations for single Josephson junctions, with $`x/a`$ and $`y/a`$ playing the role of the Josephson phase, $`2\pi V_0/a`$ the role of the critical current, and $`\mathrm{\Phi }_0J_y/c`$ and $`\mathrm{\Phi }_0J_x/c`$ the roles of the driving currents. In view of eqs. (17) and (18), we deduce that the time-averaged voltage drop in the $`i`$ direction ($`i=x,y`$) becomes nonzero when $`|\mathrm{\Phi }_0J_i/c|>2\pi V_0/a`$. As has been shown by ref. , $`V_0`$ is related to the critical current $`I_c`$ of an individual Josephson junction by $$V_00.22\frac{\mathrm{}I_c}{2e}.$$ (22) Collecting all this information, we obtain the simple phase diagram shown in Fig. 7 for the voltage drops arising from motion of a single vortex in a square array. Once again, there are three regimes, denoted $`A`$, $`B`$, and $`C`$, where none, one, or both of the time-averaged voltage drops $`v_x_t`$ and $`v_y_t`$ are nonzero. However, the phase diagram is simplified by the fact the $`v_x_t`$ and $`v_y_t`$ are independent of one another, depending only on $`i_x`$ and $`i_y`$ respectively. The phase boundaries correspond to the vertical and horizontal lines $`v_x0.11`$ and $`v_y0.11`$. The results of Fig. 7 are obviously oversimplified compared to the numerical diagrams of Fig. 2. In addition to the other obvious differences, the voltages in region $`C`$ are not chaotic, but are instead just the superposition of two independent voltages in the $`x`$ and $`y`$ directions, each of which has its own fundamental frequency and harmonics of that fundamental. The two fundamentals may, of course, be incommensurate depending on the values of $`v_x_t`$ and $`v_y_t`$. ## V Discussion and Conclusions The present calculations clearly show that a fully frustrated array of overdamped Josephson junctions exhibits at least three dynamical phases as a function of the two orthogonal driving currents $`i_x`$ and $`i_y`$: a pinned vortex lattice, a plastic phase characterized by a continuous power spectrum for both $`v_x`$ and $`v_y`$, and a moving vortex lattice with only one of the voltages $`v_x`$ and $`v_y`$ nonzero. In this last phase, the power spectrum, at least for the limited lattice sizes we have investigated, contains only harmonic multiples of a fundamental frequency. Weak disorder in the critical currents appears not to change this phase diagram greatly. Regarding our phase diagram, it is natural to ask whether the boundaries between the different phases are analogous to first-order phase transitions. Although we have no conclusive evidence, our numerical results suggest that they may indeed be first-order, rather than continuous, at least for the ordered lattices. In support of this conjecture, we note the occasional occurrence of hysteresis in our simulations, and of discontinuous jumps between one phase and another near the phase boundaries. There is also little evidence that any quantities, such as the strength of the voltage noise, diverge near the phase boundaries, as might be expected of a continuous phase transition. 
## V Discussion and Conclusions The present calculations clearly show that a fully frustrated array of overdamped Josephson junctions exhibits at least three dynamical phases as a function of the two orthogonal driving currents $`i_x`$ and $`i_y`$: a pinned vortex lattice, a plastic phase characterized by a continuous power spectrum for both $`v_x`$ and $`v_y`$, and a moving vortex lattice with only one of the voltages $`v_x`$ and $`v_y`$ nonzero. In this last phase, the power spectrum, at least for the limited lattice sizes we have investigated, contains only harmonic multiples of a fundamental frequency. Weak disorder in the critical currents appears not to change this phase diagram greatly. Regarding our phase diagram, it is natural to ask whether the boundaries between the different phases are analogous to first-order phase transitions. Although we have no conclusive evidence, our numerical results suggest that they may indeed be first-order, rather than continuous, at least for the ordered lattices. In support of this conjecture, we note the occasional occurrence of hysteresis in our simulations, and of discontinuous jumps between one phase and another near the phase boundaries. There is also little evidence that any quantities, such as the strength of the voltage noise, diverge near the phase boundaries, as might be expected of a continuous phase transition. In this respect, these transitions differ somewhat from those seen in strongly disordered lattices. For these ordered arrays, in which the vortex lattice is commensurate with the underlying Josephson array, one might have expected other types of commensurate-incommensurate transitions as the angle between the applied current and the array symmetry axis is varied. Such “magic angle” effects, with a multitude of commensurate and disordered phases, are seen in other models in which vortex lattices are driven through periodic pinning arrays. In the present case, we have seen no clear evidence of any phases other than the three shown in our phase diagram. Possibly, such additional phases would appear if we studied larger arrays in which such delicate effects would be more stable. On the other hand, our open boundary conditions may constitute such a strong perturbation on the periodic array that such commensurability effects would be suppressed even for very large lattices. Finally, we comment briefly on the power spectra found in our simulations. In the moving lattice phase $`B`$, the voltage power spectra appear to contain only multiples of a fundamental frequency. Such a power spectrum represents an array which is phase-locked, and hence would radiate power only at multiples of that fundamental frequency. In this case, phase-locking clearly occurs between rows as well as along rows of junctions parallel to the voltage. Such phase locking must be present because it is required in order to produce the observed relation between $`\langle v_x\rangle _t`$ and the fundamental frequency. Furthermore, such locking survives weak disorder in the critical currents. The effectiveness of a magnetic field in producing phase locking in square arrays has been discussed previously. The present work provides additional evidence that a field $`f=1/2`$ is effective in producing phase locking, even when the applied current has nonzero components parallel to both array axes. In conclusion, we have numerically investigated the dynamic phases of fully frustrated square Josephson junction arrays driven by two independent, orthogonal currents. We find three phases: stationary lattice, driven lattice, and plastic flow. These phases appear also in weakly disordered arrays. We find that our numerical results can be partly understood by two simple analytical models, which show some of the features of the full simulation. ## VI Acknowledgments This work has been supported by the National Science Foundation, Grant No. DMR97-31511, and by the Midwest Superconductivity Consortium at Purdue University through Grant DE-FG 02-90 ER45427. We thank Profs. Predrag Cvitanovic and Franco Nori for useful conversations.
# EXPLORING NUCLEON-NUCLEON CORRELATIONS IN $`(e,e^{\prime }NN)`$ REACTIONS ## 1 NN Correlations Nuclei are very intriguing objects for exploring the theory of quantum many-body systems. One of the reasons is that realistic wave functions of nuclear systems must exhibit strong two-particle correlations. This can be demonstrated in a little ‘theoretical experiment’: Assuming a realistic model for the nucleon-nucleon (NN) interaction, i.e. an interaction which reproduces the empirical data of NN scattering below the pion threshold, one may calculate the energy of nuclear matter within the mean field or Hartree-Fock approximation. Results of such a calculation are listed in the first row of table 1. One finds that all these interactions yield a positive value for the energy per nucleon, which means that nuclear matter as well as all nuclei would be unbound. Only after the effects of two-body correlations are included does one obtain a value in rough agreement with the empirical value of -16 MeV per nucleon. This demonstrates that nuclear correlations are indispensable for describing the structure of nuclei. In order to explore the dominant components of these correlations, table 1 also lists the expectation value of the $`\pi `$-exchange contribution to the NN interaction using the HF approximation ($`V_{\pi HF}`$) and with inclusion of the correlation effects ($`V_{\pi Corr}`$). One finds that the gain in binding energy is not only due to the central short-range correlation effects, i.e. the tendency of the nuclear wave function to minimize the probability that two nucleons approach each other so closely that they feel the repulsive core of the interaction. A large part of this gain in binding energy is due to pionic correlations, which are dominated by the effects of the tensor force. The different interaction models all reproduce the same empirical NN scattering phase shifts. This is true in particular for the modern NN interactions: the charge-dependent Bonn potential (CDB), the Argonne V18 (ArgV18) and the Nijmegen interaction (Nijm1), which all yield an excellent fit of the same phase shifts. Nevertheless, they predict quite different correlations. This can be seen e.g. from inspecting the expectation values for the kinetic energies per nucleon (denoted as $`T`$ in table 1). This means that correlations are a significant fingerprint of the interaction of two nucleons in a nuclear medium. So if we find a way to measure details of these correlations, we shall obtain information on the validity of the various models for the NN interaction. ## 2 Correlations and exclusive $`(e,e^{\prime }p)`$ reactions The uncorrelated Hartree-Fock state of nuclear matter is given as a Slater determinant of plane waves, in which all states with momenta $`k`$ smaller than the Fermi momentum $`k_F`$ are occupied, while all others are completely unoccupied. Correlations in the wave function beyond the mean field approach lead to an occupation of states with $`k`$ larger than $`k_F`$. Therefore correlations should be reflected in an enhancement of the momentum distribution at high momenta. Indeed, microscopic calculations exhibit such an enhancement for nuclear matter as well as for finite nuclei. One could try to measure this momentum distribution by means of exclusive $`(e,e^{\prime }p)`$ reactions at low missing energies, such that the residual nucleus remains in the ground state or another well defined bound state.
From the momentum transfer $`q`$ of the scattered electron and the momentum $`p`$ of the outgoing nucleon one can calculate the momentum of the nucleon before the absorption of the photon and therefore obtain direct information on the momentum distribution of the nucleons inside the nucleus. This idea, however, suffers from a little inaccuracy. To demonstrate this we write down the momentum distribution $`n(k)`$, denoting the ground state wave function of the target nucleus by $`\mathrm{\Psi }`$ and the creation (annihilation) operator for a nucleon with momentum $`k`$ by $`a_k^{\dagger }`$ ($`a_k`$): $`n(k)`$ $`=`$ $`<\mathrm{\Psi }|a_k^{\dagger }a_k|\mathrm{\Psi }>`$ $`=`$ $`{\displaystyle \int _0^{\mathrm{\infty }}}𝑑E<\mathrm{\Psi }|a_k^{\dagger }|\mathrm{\Phi }_{A-1}(E)><\mathrm{\Phi }_{A-1}(E)|a_k|\mathrm{\Psi }>`$ $`=`$ $`{\displaystyle \int _0^{\mathrm{\infty }}}𝑑ES(k,E)`$ with $`S(k,E)`$ $`=`$ $`\left|<\mathrm{\Psi }|a_k^{\dagger }|\mathrm{\Phi }_{A-1}(E)>\right|^2.`$ In the second line of this equation we have inserted the complete set of eigenstates for the residual nucleus with $`A-1`$ nucleons and excitation energy $`E`$. Therefore, if one performs an exclusive $`(e,e^{\prime }p)`$ experiment leading to the residual nucleus in its ground state, one does not probe the momentum distribution but the spectral function at an energy $`E=0`$. While the total momentum distribution exhibits the enhancement at high momenta discussed above, the spectral function at small energies does not have this feature, and the momentum distribution extracted from such experiments is very similar to the one derived from Hartree-Fock wave functions. This is demonstrated by Figure 1, which compares experimental data of $`(e,e^{\prime }p)`$ experiments on $`{}_{}{}^{16}O`$ leading to the ground state of the residual nucleus $`{}_{}{}^{15}N`$, which were performed at MAMI in Mainz, with theoretical calculations. The calculations account for the final state interaction of the outgoing nucleon with the residual nucleus by means of a relativistic optical potential. One finds that the spectral function calculated with inclusion of correlations yields the same shape as the corresponding Hartree-Fock approximation, the only difference being the global normalization: the spectroscopic factor. Therefore exclusive $`(e,e^{\prime }p)`$ reactions yield a rather limited amount of information on correlation effects; they are sensitive to the mean field properties of the nuclear system.
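The difference between the full momentum distribution and what a low-missing-energy experiment samples can be made concrete with a schematic numerical sketch (the two-component spectral function below is invented purely for illustration and carries no input from realistic calculations):

```python
import numpy as np

# Schematic spectral function: a quasi-hole peak at small missing energy E
# carrying mean-field-like, mostly low-momentum strength, plus a correlated
# background at large E carrying the high-momentum strength.
k = np.linspace(0.0, 4.0, 200)        # momentum in units of k_F
E = np.linspace(0.0, 300.0, 600)      # missing energy in MeV
K, EE = np.meshgrid(k, E, indexing="ij")

S_qh   = 0.8 * np.exp(-K**2) * np.exp(-(EE / 5.0)**2)
S_corr = 0.2 * np.exp(-K / 1.5) * np.exp(-((EE - 100.0) / 60.0)**2)
S = S_qh + S_corr

n_k  = np.trapz(S, E, axis=1)   # n(k) = integral dE S(k,E): inherits the
                                # slowly falling exp(-k/1.5) tail
S_gs = S[:, 0]                  # what a low-missing-energy (e,e'p)
                                # measurement samples: essentially the
                                # Gaussian quasi-hole shape alone
```

Integrating the correlated background over $`E`$ produces the slowly falling high-momentum tail of $`n(k)`$, while the slice at $`E\approx 0`$ retains only the mean-field-like quasi-hole shape, mirroring the situation described above.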
There is a major discussion of these mean field properties in nuclear physics: motivated by the success of the Walecka model, attempts have been made to include relativistic features in microscopic nuclear many-body studies. Such attempts are often referred to as Dirac-Brueckner-Hartree-Fock calculations. The main prediction of these relativistic nuclear structure calculations is that the small component of the Dirac spinors for the nucleon inside a nucleus is enhanced relative to the small component of a free nucleon with the same momentum. This enhancement can be parameterized in terms of an effective Dirac mass $`m^{*}`$ which is significantly smaller than the bare nucleon mass. Can one observe this enhancement of the small component of the Dirac spinor by means of $`(e,e^{\prime }p)`$ experiments? Theoretical calculations predict that this may be possible if one performs a more detailed analysis of the corresponding cross section. For that purpose one decomposes the cross section into a contraction of hadronic response functions and the appropriate electron contributions, which are defined as in $$\frac{m|p_x|}{(2\pi )^3}\sigma _{Mott}\left(V_LR_L+V_TR_T+V_{LT}R_{LT}\mathrm{cos}\varphi +V_{TT}R_{TT}\mathrm{cos}2\varphi \right).$$ Results for the hadronic response functions with and without the relativistic effect are displayed in Figure 2. While the relativistic features do not affect the longitudinal response function $`R_L`$ and the transverse response function $`R_T`$, they predict an enhancement of the interference structure functions $`R_{LT}`$ and $`R_{TT}`$ as compared to the non-relativistic reduction. This feature is discussed in more detail by Moya de Guerra and Udias. First experimental results on $`R_{LT}`$ are presented by Bertozzi at this workshop.
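Since the out-of-plane angle $`\varphi `$ enters the cross section only through $`\mathrm{cos}\varphi `$ and $`\mathrm{cos}2\varphi `$, the interference responses can be projected out of a measured angular distribution by elementary Fourier analysis. A schematic sketch (all numerical values are invented; the coefficients stand for $`V_LR_L+V_TR_T`$, $`V_{LT}R_{LT}`$ and $`V_{TT}R_{TT}`$):

```python
import numpy as np

phi = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)

# Hypothetical "measured" angular distribution with the structure of the
# formula above: a0 + a1*cos(phi) + a2*cos(2*phi).
a0, a1, a2 = 1.00, 0.15, 0.05
sigma = a0 + a1 * np.cos(phi) + a2 * np.cos(2.0 * phi)

# Fourier projections recover the three combinations of responses:
a0_fit = sigma.mean()
a1_fit = 2.0 * (sigma * np.cos(phi)).mean()        # ~ V_LT * R_LT
a2_fit = 2.0 * (sigma * np.cos(2.0 * phi)).mean()  # ~ V_TT * R_TT
print(a0_fit, a1_fit, a2_fit)                      # recovers 1.00, 0.15, 0.05
```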
## 3 Kinematical study of $`(e,e^{\prime }NN)`$ in nuclear matter As exclusive one-nucleon knock-out experiments yield only limited information on NN correlations, one may try to investigate exclusive $`(e,e^{\prime }NN)`$ reactions, i.e. triple coincidence experiments in which the energies of the two outgoing nucleons and the energy of the scattered electron guarantee that the rest of the target nucleus remains in the ground state or a well defined excited state. The idea is that processes in which the virtual photon, produced by the scattered electron, is absorbed by a pair of nucleons should be sensitive to the correlations between these two nucleons. Unfortunately, however, this process, which is represented by the diagram in Figure 3a, competes with the other processes described by the diagrams of Figures 3b and c. These last two diagrams refer to the effects of final-state interaction (FSI) and contributions of two-body currents. Here we denote by final state interaction not just the feature that each of the outgoing nucleons feels the remaining nucleus in terms of an optical potential. Here we call FSI the effect that one of the nucleons absorbs the photon, propagates (on- or off-shell) and then shares the momentum and energy of the photon by interacting with the second nucleon, which is also knocked out of the target. The processes described in Figures 3a and 3b, correlations and FSI, are rather similar; they differ only by the time ordering of the NN interaction and the photon absorption. Therefore it seems evident that one must consider both effects in an equivalent way. Nevertheless, all studies up to now have ignored this equivalence and just included the correlation effect in terms of a correlated two-body wave function. In our approach we will assume the same interaction to be responsible for the correlations and the FSI: correlations are evaluated in terms of the Brueckner G-matrix, while the T-matrix derived from the very same interaction is used to determine the FSI. The two-body current contributions of Figure 3c include Meson Exchange Current (MEC) and Isobar Current (IC) contributions. The MEC effects should be calculated consistently with the meson exchange terms included in the NN interaction used to calculate correlations and FSI. In our calculations to date we only account for the contributions due to the exchange of pions. Note that the pion-seagull and pion-in-flight terms only contribute if the emitted pair contains a proton and a neutron. Contributions of other charged mesons, like e.g. the $`\rho `$ meson, have been considered e.g. by Vanderhaeghen et al. and shall also be included in future investigations. The IC contributions contain diagrams like the ones displayed in Figures 3a and b. The only difference is that the intermediate nucleon line is replaced by the propagation of the $`\mathrm{\Delta }`$ excitation. This demonstrates that IC contributions should also be treated in terms of baryon-baryon interactions, accounting for the admixture of $`\mathrm{\Delta }`$ configurations to the target wave functions as well as for FSI effects with intermediate isobar terms. Presently the IC terms are evaluated in terms of the Born diagrams, including again only $`\pi `$ exchange for the transition interactions $`NN\rightarrow N\mathrm{\Delta }`$. Also in this case one should account for the effects of the $`\rho `$ exchange. In this section I would like to present results for the various contributions just introduced, calculated for nuclear matter at saturation density. Of course, this study will not lead to any result which can directly be compared with experimental data for a specific target nucleus. The idea is to extract some general features which are independent of the specific target nucleus or the final state of the residual nucleus. We would like to see if we can provide general information about the importance of the various contributions just discussed. The hope is that one may find special kinematical situations in which one of the contributions mentioned above dominates over the others. All results discussed here have been obtained with the Bonn A potential defined by Machleidt. Details of these calculations are described in reference . As a first example we consider the longitudinal structure function for the knockout of a proton-proton pair. One of the protons is emitted parallel to the momentum of the virtual photon with an energy of $`T_{p,1}=156\mathrm{MeV}`$, while the second is emitted antiparallel to the photon momentum with an energy of $`T_{p,2}=33\mathrm{MeV}`$ (see Figure 4). This is called the ‘super-parallel kinematics’, which should be appropriate for a separation of the longitudinal and transverse structure functions. In this situation the dominant contribution to the longitudinal response function is due to correlation effects (red curve), but the FSI effects also contribute in a non-negligible way to the cross section (yellow curve), although the two protons are emitted in opposite directions. The effects of FSI are much more important if we require that the two protons be emitted in a more symmetric way. As an example we show the longitudinal structure function for ($`e,e^{\prime }pp`$), requiring that each of the protons carries away an energy of 70 MeV and is emitted at an angle of $`+30^\mathrm{o}`$ or $`-30^\mathrm{o}`$ with respect to the momentum transfer $`q`$ of the virtual photon. Corresponding results are displayed in Figure 5. For this kinematical situation the FSI contribution is much more important than the correlation effect. As a last example we present the results for the longitudinal structure function for the ($`e,e^{\prime }pn`$) reaction, assuming the same kinematical (super-parallel) setup as has been employed for the ($`e,e^{\prime }pp`$) reaction displayed in Figure 4. The resulting structure function for ($`e,e^{\prime }pn`$), displayed in Figure 6, is almost an order of magnitude larger than for the corresponding ($`e,e^{\prime }pp`$) case. In ($`e,e^{\prime }pn`$) reactions one also has to include the effects of MEC. Note, however, that for the case considered the MEC contributions are smaller than the correlation effects. This is due to a strong cancellation between the pion-seagull and the pion-in-flight contributions to the MEC.
The dominating contribution to the longitudinal response is again the correlation part. Comparison with Figure 4 demonstrates that the $`pn`$ correlations are significantly larger than those for the $`pp`$ pairs. This supports our conclusion from the discussion of the results of table 1 that the pionic or tensor correlations, which are different for isospin $`T=0`$ and $`T=1`$ pairs, play an important role and are even more important than the central correlations, which are independent of the isospin. More detailed results, including the transverse structure function and the effect of isobar currents, have partly been published already in . The discussion of further results is in preparation. ## 4 Correlations in finite nuclei Various quite different approaches have been developed to determine correlations in the nuclear wave function which are beyond the mean-field or Hartree-Fock approach. In the preceding section we have employed a calculation of correlation effects in terms of the Brueckner G-matrix. In this section we will consider the so-called coupled cluster or “$`exp(S)`$” method. The basic features of the coupled cluster method have been described already in the review article by Kümmel et al. . More recent developments and applications can be found in . Here we will only present some basic equations. The many-body wave function of the coupled cluster or $`\mathrm{exp}(S)`$ method can be written $$|\mathrm{\Psi }>=\mathrm{exp}\left(\sum _{n=1}^{A}\widehat{S}_n\right)|\mathrm{\Phi }>.$$ (1) The state $`|\mathrm{\Phi }>`$ refers to the uncorrelated model state, which we have chosen to be a Slater determinant of harmonic oscillator functions with an oscillator length $`b`$=1.72 fm, which is appropriate for the description of our target nucleus <sup>16</sup>O. The linked $`n`$-particle $`n`$-hole excitation operators can be written $$\widehat{S}_n=\frac{1}{n!^2}\sum _{\nu _i\rho _i}<\rho _1\mathrm{\dots }\rho _n|S_n|\nu _1\mathrm{\dots }\nu _n>a_{\rho _1}^{\dagger }\mathrm{\dots }a_{\rho _n}^{\dagger }a_{\nu _n}\mathrm{\dots }a_{\nu _1}.$$ Here and in the following the sum is restricted to oscillator states $`\rho _i`$ which are unoccupied in the model state $`|\mathrm{\Phi }>`$, while the states $`\nu _i`$ refer to states which are occupied in $`|\mathrm{\Phi }>`$. For the application discussed here we assume the so-called $`S_2`$ approximation, i.e. we restrict the correlation operator in (1) to the terms with $`\widehat{S}_1`$ and $`\widehat{S}_2`$. One may introduce one- and two-body wave functions $`\psi _1|\nu _1>`$ $`=`$ $`|\nu _1>+\widehat{S}_1|\nu _1>`$ $`\psi _2|\nu _1\nu _2>`$ $`=`$ $`𝒜\psi _1|\nu _1>\psi _1|\nu _2>+\widehat{S}_2|\nu _1\nu _2>`$ (2) with $`𝒜`$ denoting the operator antisymmetrizing the product of one-body wave functions.
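To make the content of the $`S_2`$ approximation explicit, it may help to expand the exponential of the truncated ansatz (a standard bookkeeping remark, added here for orientation): $$\mathrm{exp}\left(\widehat{S}_1+\widehat{S}_2\right)|\mathrm{\Phi }>=\left[1+\widehat{S}_1+\widehat{S}_2+\frac{1}{2}\left(\widehat{S}_1+\widehat{S}_2\right)^2+\mathrm{\dots }\right]|\mathrm{\Phi }>.$$ Although only the one- and two-body amplitudes are determined from the coupled equations below, product terms such as $`\frac{1}{2}\widehat{S}_2^2`$ still generate four-particle four-hole admixtures in $`|\mathrm{\Psi }>`$, which is what distinguishes the coupled cluster ansatz from a simple configuration-interaction truncation.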
Using these definitions one can write the coupled equations for the evaluation of the correlation operators $`\widehat{S}_1`$ and $`\widehat{S}_2`$ in the form $$<\alpha |\widehat{T}_1\psi _1|\nu >+\sum _{\nu _1}<\alpha \nu _1|\widehat{T}_2\widehat{S}_2+\widehat{V}_{12}|\nu \nu _1>=\sum _{\nu _1}ϵ_{\nu _1\nu }<\alpha |\psi _1|\nu _1>,$$ (3) where $`\widehat{T}_i`$ stands for the operator of the kinetic energy of particle $`i`$ and $`\widehat{V}_{12}`$ is the two-body potential. Furthermore we introduce the single-particle energy matrix defined by $$ϵ_{\nu _1\nu }=<\nu _1|\widehat{T}_1|\nu >+\sum _{\nu ^{\prime }}<\nu _1\nu ^{\prime }|\widehat{V}_{12}\psi _2|\nu \nu ^{\prime }>$$ The Hartree-Fock type equation (3) is coupled to a two-particle equation of the form $`0`$ $`=`$ $`<\alpha \beta |\widehat{Q}\left[(\widehat{T}_1+\widehat{T}_2)\widehat{S}_2+\widehat{V}_{12}\psi _2+\widehat{S}_2\widehat{P}\widehat{V}_{12}\psi _2\right]|\nu _1\nu _2>`$ (4) $`-{\displaystyle \sum _\nu }\left(<\alpha \beta |\widehat{S}_2|\nu \nu _2>ϵ_{\nu \nu _1}+<\alpha \beta |\widehat{S}_2|\nu _1\nu >ϵ_{\nu \nu _2}\right)`$ In this equation we have introduced the Pauli operator $`\widehat{Q}`$, projecting on two-particle states which are not occupied in the uncorrelated model state $`|\mathrm{\Phi }>`$, and the projection operator $`\widehat{P}`$, which projects on two-particle states which are occupied. If for a moment we ignore the term in (3) which is represented by the operators $`\widehat{T}_2\widehat{S}_2`$ and also the term in (4) characterized by the operator $`\widehat{S}_2\widehat{P}\widehat{V}_{12}`$, the solution of these coupled equations corresponds to the Brueckner-Hartree-Fock approximation, and we can identify the matrix elements of $`\widehat{V}_{12}\psi _2`$ with the Brueckner $`G`$-matrix. Indeed, the effects of these two terms are rather small, and we have chosen the coupled cluster approach mainly because it directly provides correlated two-body wave functions (see eq. (2)). More details about the techniques which are used to solve the coupled cluster equations can be found in . As an example we would like to present the effects of correlations on the two-body density obtained by removing two protons from oscillator $`p_{1/2}`$ states, coupled to total angular momentum $`J=0`$ and isospin $`T=1`$: $$\left|<\vec{r}_1\vec{r}_2|\psi _2|p_{1/2},p_{1/2}J=0,T=1>\right|^2$$ (5) In Figure 7 this two-body density is displayed for a fixed $`\vec{r}_1=(x_1=0,y_1=0,z_1=2\mathrm{fm})`$ as a function of $`\vec{r}_2`$, restricting the presentation to the $`x_2,z_2`$ half-plane with ($`x_2>0,y_2=0`$). The upper part of this figure displays the two-body density without correlations ($`\widehat{S}_2=0`$). One observes that the two-body density, displayed as a function of the position of the second particle $`\vec{r}_2`$, is not affected by the position of the first one, $`\vec{r}_1`$. Actually, the two-body density displayed is equivalent to the one-body density. This just reflects the feature of independent particle motion. If correlation effects are included, as is done in the lower part of Figure 7, one finds a drastic reduction of the two-body density at $`\vec{r}_2=\vec{r}_1`$, accompanied by a slight enhancement at medium separations between $`\vec{r}_1`$ and $`\vec{r}_2`$. In order to amplify the effect of correlations, Figure 8 displays the corresponding correlation densities (i.e. $`\psi _2`$ replaced by $`\widehat{S}_2`$; see also (2)). While the upper part shows the correlation density for the removal of a proton-proton pair, the corresponding density for a proton-neutron pair is displayed in the lower part. Comparing these figures one sees that the $`pn`$ correlations are significantly stronger than the $`pp`$ correlations. This is mainly due to the presence of pionic or tensor correlations in the case of the $`pn`$ pair. Figure 8 also exhibits quite nicely the range of the correlations.
This range is short compared to the size of the nucleus even in the case of the $`pn`$ correlations. All results displayed in this section have been obtained using the Argonne V14 potential for the NN interaction. ## 5 Two nucleon knockout on $`{}_{}{}^{16}O`$ The coincidence cross section for the reaction induced by an electron with momentum $`\stackrel{}{p}_0`$ and energy $`E_0`$, with $`E_0=|\stackrel{}{p}_0|=p_0`$, where two nucleons, with momenta $`\stackrel{}{p}_1^{}`$, and $`\stackrel{}{p}_2^{}`$ and energies $`E_1^{}`$ and $`E_2^{}`$, are ejected from a nucleus is given, in the one-photon exchange approximation and after integrating over $`E_2^{}`$, by $$\frac{\mathrm{d}^8\sigma }{\mathrm{d}E_0^{}\mathrm{d}\mathrm{\Omega }\mathrm{d}E_1^{}\mathrm{d}\mathrm{\Omega }_1^{}\mathrm{d}\mathrm{\Omega }_2^{}}=K\mathrm{\Omega }_\mathrm{f}f_{\mathrm{rec}}|j_\mu J^\mu |^2.$$ (6) In Eq. (6) $`E_0^{}`$ is the energy of the scattered electron with momentum $`\stackrel{}{p}_0^{}`$, $`K=e^4p_{0}^{}{}_{}{}^{2}/4\pi ^2Q^4`$ where $`Q^2=\stackrel{}{q}^2\omega ^2`$, with $`\omega =E_0E_0^{}`$ and $`\stackrel{}{q}=\stackrel{}{p}_0\stackrel{}{p}_0^{}`$, is the four-momentum transfer. The quantity $`\mathrm{\Omega }_\mathrm{f}=p_1^{}E_1^{}p_2^{}E_2^{}`$ is the phase-space factor and integration over $`E_2^{}`$ produces the recoil factor $$f_{\mathrm{rec}}^1=1\frac{E_2^{}}{E_\mathrm{B}}\frac{\stackrel{}{p}_2^{}\stackrel{}{p}_\mathrm{B}}{|\stackrel{}{p}_2^{}|^2},$$ (7) where $`E_\mathrm{B}`$ and $`\stackrel{}{p}_\mathrm{B}`$ are the energy and momentum of the residual nucleus. The cross section is given by the square of the scalar product of the relativistic electron current $`j^\mu `$ and of the nuclear current $`J^\mu `$, which is given by the Fourier transform of the transition matrix elements of the charge-current density operator between initial and final nuclear states $$J^\mu (\stackrel{}{q})=<\mathrm{\Phi }_\mathrm{f}|\widehat{J}^\mu (\stackrel{}{r})|\mathrm{\Phi }_\mathrm{i}>\mathrm{e}^{\mathrm{i}\stackrel{}{q}\stackrel{}{r}}d\stackrel{}{r}.$$ (8) If the residual nucleus is left in a discrete eigenstate of its Hamiltonian, i.e. for an exclusive process, and under the assumption of a direct knockout mechanism, Eq. (8) can be written as $`J^\mu (\stackrel{}{q})`$ $`=`$ $`{\displaystyle \varphi _\mathrm{f}^{}(\stackrel{}{r}_1𝝈_1,\stackrel{}{r}_2𝝈_2)J^\mu (\stackrel{}{r},\stackrel{}{r}_1𝝈_1,\stackrel{}{r}_2𝝈_2)\varphi _\mathrm{i}(\stackrel{}{r}_1𝝈_1,\stackrel{}{r}_2𝝈_2)}`$ (9) $`\times \mathrm{e}^{\mathrm{i}\stackrel{}{q}\stackrel{}{r}}\mathrm{d}\stackrel{}{r}\mathrm{d}\stackrel{}{r}_1\mathrm{d}\stackrel{}{r}_2\mathrm{d}𝝈_1\mathrm{d}𝝈_2.`$ Eq. (9) contains three main ingredients: the two-nucleon overlap integral $`\varphi _\mathrm{i}`$, the nuclear current $`J^\mu `$ and the final-state wave function $`\varphi _\mathrm{f}`$. In the model calculations the final-state wave function $`\varphi _\mathrm{f}`$ includes the interaction of each one of the two outgoing nucleons with the residual nucleus while their mutual interaction, which we have discussed as FSI in the preceeding section is here neglected. Therefore, the scattering state is written as the product of two uncoupled single-particle distorted wave functions, eigenfunctions of a complex phenomenological optical potential which contains a central, a Coulomb and a spin-orbit term. The nuclear current operator in Eq. (9) is the sum of a one-body and a two-body part. In the one-body part convective and spin currents are included. 
The cross section is given by the square of the scalar product of the relativistic electron current $`j^\mu `$ and of the nuclear current $`J^\mu `$, which is given by the Fourier transform of the transition matrix elements of the charge-current density operator between the initial and final nuclear states $$J^\mu (\vec{q})=\int <\mathrm{\Phi }_\mathrm{f}|\widehat{J}^\mu (\vec{r})|\mathrm{\Phi }_\mathrm{i}>\mathrm{e}^{\mathrm{i}\vec{q}\cdot \vec{r}}\mathrm{d}\vec{r}.$$ (8) If the residual nucleus is left in a discrete eigenstate of its Hamiltonian, i.e. for an exclusive process, and under the assumption of a direct knockout mechanism, Eq. (8) can be written as $`J^\mu (\vec{q})`$ $`=`$ $`{\displaystyle \int }\varphi _\mathrm{f}^{*}(\vec{r}_1𝝈_1,\vec{r}_2𝝈_2)J^\mu (\vec{r},\vec{r}_1𝝈_1,\vec{r}_2𝝈_2)\varphi _\mathrm{i}(\vec{r}_1𝝈_1,\vec{r}_2𝝈_2)`$ (9) $`\times \mathrm{e}^{\mathrm{i}\vec{q}\cdot \vec{r}}\mathrm{d}\vec{r}\mathrm{d}\vec{r}_1\mathrm{d}\vec{r}_2\mathrm{d}𝝈_1\mathrm{d}𝝈_2.`$ Eq. (9) contains three main ingredients: the two-nucleon overlap integral $`\varphi _\mathrm{i}`$, the nuclear current $`J^\mu `$ and the final-state wave function $`\varphi _\mathrm{f}`$. In the model calculations the final-state wave function $`\varphi _\mathrm{f}`$ includes the interaction of each one of the two outgoing nucleons with the residual nucleus, while their mutual interaction, which we have discussed as FSI in the preceding section, is neglected here. Therefore, the scattering state is written as the product of two uncoupled single-particle distorted wave functions, eigenfunctions of a complex phenomenological optical potential which contains a central, a Coulomb and a spin-orbit term. The nuclear current operator in Eq. (9) is the sum of a one-body and a two-body part. In the one-body part convective and spin currents are included. As discussed already in section 3, the two-body current includes the seagull and pion-in-flight diagrams and the diagrams with intermediate isobar configurations. The two-nucleon overlap integral $`\varphi _\mathrm{i}`$ contains the information on nuclear structure and allows one to write the cross section in terms of the two-hole spectral function. For a discrete final state of the <sup>14</sup>N nucleus, with angular momentum quantum number $`J`$, the state $`\varphi _\mathrm{i}`$ is expanded in terms of the correlated two-hole wave functions defined in the preceding section as $`\varphi _\mathrm{i}^{JT}(\vec{r}_1𝝈_1,\vec{r}_2𝝈_2)`$ $`=`$ $`{\displaystyle \sum _{\nu _1\nu _2}}a_{\nu _1\nu _2}^{JT}<\vec{r}_{12},\vec{R},𝝈_1,𝝈_2|\psi _2|\nu _1\nu _2JT>`$ (10) The expansion coefficients $`a_{\nu _1\nu _2}^{JT}`$ are determined from a configuration mixing calculation of the two-hole states in <sup>16</sup>O which can be coupled to the angular momentum and parity of the requested state. The residual interaction for this shell-model calculation is also derived from the Argonne V14 potential and corresponds to the Brueckner G-matrix. Note that these expansion coefficients $`a_{\nu _1\nu _2}^{JT}`$ account for the global or long-range structure of the specific nuclear states, while the information on short-range correlations is already contained in $`<\vec{r}_{12},\vec{R},𝝈_1,𝝈_2|\psi _2|\nu _1\nu _2JT>`$. Results for the cross section of exclusive ($`e,e^{\prime }pn`$) reactions on <sup>16</sup>O leading to the ground state of $`{}_{}{}^{14}N`$ are displayed in Figure 9. The calculations have been performed in the super-parallel kinematics which we already introduced before. The kinematical parameters correspond to those adopted in a recent <sup>16</sup>O(e,e′pp)<sup>14</sup>C experiment at MAMI . In order to allow a direct comparison of ($`e,e^{\prime }pp`$) with ($`e,e^{\prime }pn`$) experiments, the same setup has been proposed for the first experimental study of the <sup>16</sup>O(e,e′pn)<sup>14</sup>N reaction . This means that we assume an energy of the incoming electron $`E_0=855`$ MeV, an electron scattering angle $`\theta =18^\mathrm{o}`$, $`\omega =215`$ MeV and $`q=316`$ MeV/$`c`$. The proton is emitted parallel and the neutron antiparallel to the momentum transfer $`\vec{q}`$. Separate contributions of the different terms of the nuclear current are shown in the figure and compared with the total cross section. The contribution of the one-body current, entirely due to correlations, is large. It is of the same size as that of the pion seagull current. The contribution of the $`\mathrm{\Delta }`$-current is much smaller at lower values of $`p_\mathrm{B}`$, whereas for values of $`p_\mathrm{B}`$ larger than 100 MeV/$`c`$ it becomes comparable with that of the other components. It is worth noting that the total cross section is about an order of magnitude larger than the one evaluated for the corresponding ($`e,e^{\prime }pp`$) experiment. This confirms our finding about the relative cross sections for $`pp`$ and $`pn`$ knock-out, which we have observed already in section 3. In Fig. 10 the same quantities as in Fig. 9 are shown, but the two-nucleon overlap has been calculated with the simpler prescription of correlations, i.e. by the product of the pair function of the shell model, described for $`1_1^+`$ as a pure ($`p_{1/2}`$)<sup>-2</sup> hole, and of a Jastrow-type central and state-independent correlation function.
The large differences between the cross sections in Figs. 9 and 10 indicate that a refined description of the two-nucleon overlap, involving a careful treatment of both the aspects related to nuclear structure and the NN correlations, is needed to give reliable predictions of the size and the shape of the ($`e,e^{\prime }pn`$) cross section. The cross sections for the transition to the excited $`1_2^+`$ state are displayed in Fig. 11. The two-nucleon overlap function for this state contains the same components in terms of relative and c.m. wave functions and the same defect functions as for the $`1_1^+`$ ground state, but they are weighted with different amplitudes $`a_{\nu _1\nu _2}^J`$ in Eq. (10). In practice the two overlap functions have different amplitudes for $`p_{1/2}`$ and $`p_{3/2}`$ holes. This has the consequence that the cross sections in Figs. 9 and 11 have a different shape and are differently affected by the various terms of the nuclear current. Thus transitions to various states probe the ingredients of the transition matrix elements in different ways. More details will be presented in the contribution of Carlotta Giusti. ## 6 Conclusion It has been the aim of this contribution to demonstrate that exclusive ($`e,e^{\prime }NN`$) reactions are sensitive to NN correlations and therefore sensitive to the NN interaction in the nuclear medium at short inter-nucleon distances. The careful study and analysis of these reactions is a challenge for experimental as well as theoretical efforts. In particular it should be pointed out: * $`pp`$ as well as $`pn`$ knock-out experiments should be performed. The cross sections for $`pn`$ knock-out are significantly larger than for the corresponding $`pp`$ emission. This is partly due to the meson-exchange-current (MEC) contributions of the charged mesons, which are absent in $`pp`$ knock-out. The difference, however, also reflects the isospin dependence of nuclear correlations. While the study of ($`e,e^{\prime }pp`$) mainly explores the short-range central correlations, the corresponding ($`e,e^{\prime }pn`$) experiments also probe tensor correlations. * Effects of Final State Interaction (FSI) are non-negligible. Most of the studies up to now consider FSI effects only in a mean field approach. It must be emphasized, however, that the residual interaction between the two ejected nucleons has a non-negligible effect as well. This is true even when the two nucleons are emitted ‘back-to-back’. FSI effects, however, become much more important for other final states. * All contributions to the ($`e,e^{\prime }NN`$) cross section should be determined in a consistent way. In order to separate the various contributions, one should try to separate the various structure functions (longitudinal and transverse). One may also take advantage of the fact that transitions to various final states in the residual nucleus probe the different contributions differently. * The super-parallel kinematics seems to be quite appropriate for the study of correlation effects. Acknowledgments The results which have been presented here have been obtained in collaborations with many colleagues. In particular I would like to mention the PhD students Daniel Knödler, Markus Stauf and Stefan Ulrych. Furthermore, I would like to thank K. Allaart, K. Amir-Azimi-Nili, P. Czerski, W.H. Dickhoff, C. Giusti, F.D. Pacati, A. Polls and J. Udias. This work has been supported by grants from the DFG (SFB 382, GRK 135 and Wa728/3).
# 1 Introduction A consistently recurring topic at LEP2 has been the interpretation and combination of results from searches for new particles. The fundamental task is to interpret the collected dataset in the context of two complementary hypotheses. The first hypothesis – the null hypothesis – is that the dataset is compatible with non-signal Standard Model background production alone, and the second is that the dataset is compatible with the sum of signal and Standard Model background production. In most cases, the search for new particles proceeds via several parallel searches for final states. The results from all of these subchannels are then combined to produce a final result. All existing confidence level calculations follow the same general strategy . A test statistic or estimator is constructed to quantify the “signal-ness” of a real or simulated experiment. The “signal-ness” of a single observed experiment then leads to the confidence level on, for example, the hypothesis that the observed experiment is incompatible with signal and background both being produced. Most calculation methods use an ensemble of toy Monte Carlo experiments to generate the estimator distribution against which the observed experiment is compared. This generation can be rather time-consuming when the number of toy Monte Carlo experiments is great (as it must be for high precision calculations) or if the number of signal and background events expected for each experiment is great (as it is for the case of searches optimized to use background subtraction). In this note, we present an improved method for calculating confidence levels in the context of searches for new particles. Specifically, when the likelihood ratio is used as an estimator, the experiment estimator distribution may be calculated analytically with the Fourier transform. With this approach, the disadvantage of toy Monte Carlo experiments is avoided. The analytic method offers several advantages over existing methods, the most dramatic of which is the increase in calculation speed and precision. ## 2 Likelihood ratio estimator for searches The likelihood ratio estimator is the ratio of the probabilities of observing an event under two search hypotheses. The estimator for a single experiment is $$E=C\frac{\mathcal{L}_{s+b}}{\mathcal{L}_b}.$$ (1) Here $`\mathcal{L}_{s+b}`$ is the probability density function for signal+background experiments and $`\mathcal{L}_b`$ is the probability density function for background-only experiments. Because the constant factor $`C`$ appears in each event’s estimator, it does not affect the ordering of the estimators – an event cannot become more signal-like by choosing a different $`C`$. For clarity in this note, the constant is chosen to be $`e^s`$, where $`s`$ is the expected number of signal events. (When considering the two production hypotheses and calculating an exclusion, the expected signal $`s`$ is uniquely determined by the cross section; if the cross section is not fixed, then $`e^s`$ is not constant, and $`C`$ may be set to unity.) For the simplest case of event counting with no discriminant variables (or, equivalently, with perfectly non-discriminating variables), the estimator can be calculated with Poisson probabilities alone.
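For concreteness (this special case is not written out in the note, but follows directly from Eq. (1) and the Poisson probabilities): an experiment with $`n`$ observed events and no discriminant variables has $$E=e^s\frac{e^{-(s+b)}(s+b)^n/n!}{e^{-b}b^n/n!}=\left(1+\frac{s}{b}\right)^n,$$ so each observed event multiplies the experiment estimator by the same factor $`1+s/b`$, and $`F=\mathrm{ln}E=n\mathrm{ln}(1+s/b)`$ simply counts events.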
In practice, not every event is equally signal-like. Each search may have one or more event variables that discriminate between signal-like and background-like events. For the general case, the probabilities $`\mathcal{L}_{s+b}`$ and $`\mathcal{L}_b`$ are functions of the observed events’ measured variables. As an example, consider a search using one discriminant variable $`m`$, the reconstructed Higgs mass. The signal and background have different probability density functions of $`m`$, defined as $`f_s(m)`$ and $`f_b(m)`$, respectively. (For searches with more than one discriminant variable, $`m`$ is replaced by a vector of discriminant variables $`\vec{x}`$.) It is then straightforward to calculate $`\mathcal{L}_{s+b}`$ and $`\mathcal{L}_b`$ for a single event, taking into account the event weighting coming from the discriminant variables: $$E=e^s\frac{P_{s+b}}{P_b}=e^s\frac{e^{-(s+b)}\left[sf_s(m)+bf_b(m)\right]}{e^{-b}\left[bf_b(m)\right]}.$$ (2) The likelihood ratio estimator can be shown to maximize the discovery potential and exclusion potential of a search for new particles . Such an estimator, both with and without discriminant variables, has been used successfully by the LEP2 collaborations to calculate confidence levels for searches . ## 3 Ensemble estimator distributions via Fast Fourier Transform (FFT) One way to form an estimator for an ensemble of events is to generate a large number of toy Monte Carlo experiments, each experiment having a number of events generated from a Poisson distribution. Another way is to analytically compute the probability density function of the ensemble estimator given the probability density function of the event estimator. The discussion of this section pursues the latter approach. The likelihood ratio estimator is a multiplicative estimator. This means the estimator for an ensemble of events is formed by multiplying the individual event estimators. Alternatively, the logarithms of the estimators may be summed. In the following derivation, $`F=\mathrm{ln}E`$, where $`E`$ is the likelihood ratio estimator. For an experiment with 0 events observed, the estimator is trivial: $`E`$ $`=`$ $`e^s{\displaystyle \frac{e^{-(s+b)}}{e^{-b}}}=1`$ (3) $`F`$ $`=`$ $`0`$ (4) $`\rho _0(F)`$ $`=`$ $`\delta (F),`$ (5) where $`\rho _0(F)`$ is the probability density function of $`F`$ for experiments with 0 observed events. For an experiment with exactly one event, the estimator is, again using the reconstructed Higgs mass $`m`$, $`E`$ $`=`$ $`e^s{\displaystyle \frac{e^{-(s+b)}\left[sf_s(m)+bf_b(m)\right]}{e^{-b}\left[bf_b(m)\right]}},`$ (6) $`F`$ $`=`$ $`\mathrm{ln}{\displaystyle \frac{sf_s(m)+bf_b(m)}{bf_b(m)}},`$ (7) and the probability density function of $`F`$ is defined as $`\rho _1(F)`$. For an experiment with exactly two events, the estimators of the two events are multiplied to form an experiment estimator.
If the reconstructed Higgs masses of the two events are $`m_1`$ and $`m_2`$, then $`E`$ $`=`$ $`{\displaystyle \frac{\left[sf_s(m_1)+bf_b(m_1)\right]\left[sf_s(m_2)+bf_b(m_2)\right]}{\left[bf_b(m_1)\right]\left[bf_b(m_2)\right]}}`$ (8) $`F`$ $`=`$ $`\mathrm{ln}{\displaystyle \frac{sf_s(m_1)+bf_b(m_1)}{bf_b(m_1)}}+\mathrm{ln}{\displaystyle \frac{sf_s(m_2)+bf_b(m_2)}{bf_b(m_2)}}.`$ (9) The probability density function for exactly two particles $`\rho _2(F)`$ is simply the convolution of $`\rho _1(F)`$ with itself: $`\rho _2(F)`$ $`=`$ $`{\displaystyle \int \int \rho _1(F_1)\rho _1(F_2)\delta (F-F_1-F_2)𝑑F_1𝑑F_2}`$ (10) $`=`$ $`\rho _1(F)\ast \rho _1(F).`$ (11) The generalization to the case of $`n`$ events is straightforward and encouraging: $`E`$ $`=`$ $`{\displaystyle \prod _{i=1}^{n}}{\displaystyle \frac{sf_s(m_i)+bf_b(m_i)}{bf_b(m_i)}}`$ (12) $`F`$ $`=`$ $`{\displaystyle \sum _{i=1}^{n}}\mathrm{ln}{\displaystyle \frac{sf_s(m_i)+bf_b(m_i)}{bf_b(m_i)}}`$ (13) $`\rho _n(F)`$ $`=`$ $`{\displaystyle \int \mathrm{\cdots }\int \prod _{i=1}^{n}\left[\rho _1(F_i)dF_i\right]\delta \left(F-\sum _{i=1}^{n}F_i\right)}`$ (14) $`=`$ $`\underset{n\text{times}}{\underset{}{\rho _1(F)\ast \mathrm{\cdots }\ast \rho _1(F)}}.`$ (15) Next, the convolution of $`\rho _1(F)`$ is rendered manageable by an application of the relationship between the convolution and the Fourier transform. If $`A(F)=B(F)\ast C(F)`$, then the Fourier transforms of $`A`$, $`B`$, and $`C`$ satisfy $$\overline{A(G)}=\overline{B(G)}\overline{C(G)}.$$ (16) This allows the convolution to be expressed as a simple power: $$\overline{\rho _n(G)}=\left[\overline{\rho _1(G)}\right]^n.$$ (17) Note this equation holds even for $`n=0`$, since $`\overline{\rho _0(G)}=1`$. For any practical computation, the analytic Fourier transform can be approximated by a numerical Fast Fourier Transform (FFT) . How does this help to determine $`\rho _{s+b}`$ and $`\rho _b`$? The probability density function for an experiment estimator with $`s`$ expected signal and $`b`$ expected background events is $$\rho _{s+b}(F)=\sum _{n=0}^{\mathrm{\infty }}e^{-(s+b)}\frac{(s+b)^n}{n!}\rho _n(F),$$ (18) where $`n`$ is the number of events observed in the experiment. Upon Fourier transformation, this becomes $`\overline{\rho _{s+b}(G)}`$ $`=`$ $`{\displaystyle \sum _{n=0}^{\mathrm{\infty }}}e^{-(s+b)}{\displaystyle \frac{(s+b)^n}{n!}}\overline{\rho _n(G)}`$ (19) $`=`$ $`{\displaystyle \sum _{n=0}^{\mathrm{\infty }}}e^{-(s+b)}{\displaystyle \frac{(s+b)^n}{n!}}\left[\overline{\rho _1(G)}\right]^n`$ (20) $$\overline{\rho _{s+b}(G)}=e^{(s+b)\left[\overline{\rho _1(G)}-1\right]}$$ (21) The function $`\rho _{s+b}(F)`$ may then be recovered by using the inverse transform. In general, this relation holds for any multiplicative estimator. This final relation means that the probability density function for an arbitrary number of expected signal and background events can be calculated analytically once the probability density function of the estimator is known for a single event. This calculation is therefore just as fast for high background searches as for low background searches. In particular, it holds great promise for Higgs searches which, due to use of background subtraction and discriminant variables, are optimized to higher background levels than they have been in the past.
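Eq. (21) translates into a few lines of numpy. The following minimal sketch (ours; the grid size is arbitrary, and for concreteness the event distribution anticipates the Gaussian of the first example in the next section) evolves a binned single-event distribution into the experiment distribution:

```python
import numpy as np

def experiment_pdf(p1, mu):
    # p1: single-event estimator distribution as probabilities per bin on a
    #     uniform grid of F = ln E starting at F = 0, zero-padded generously
    #     so that the circular FFT convolution cannot wrap around.
    # mu: Poisson mean number of events (s + b, or b alone).
    # Implements Eq. (21): rho_bar(G) = exp(mu * (p1_bar(G) - 1)).
    p1_bar = np.fft.fft(p1)
    rho_bar = np.exp(mu * (p1_bar - 1.0))
    return np.real(np.fft.ifft(rho_bar))

# Toy event distribution: a Gaussian with mu = 2.0, sigma = 0.2, binned
# with an (arbitrary) bin width of 0.01.
n_bins, dF = 1 << 14, 0.01
F = np.arange(n_bins) * dF
p1 = np.exp(-0.5 * ((F - 2.0) / 0.2) ** 2)
p1 /= p1.sum()

rho_sb = experiment_pdf(p1, mu=20.0)          # s + b = 20.0 hypothesis
assert abs(rho_sb.sum() - 1.0) < 1e-9         # normalization is preserved
assert abs(rho_sb[0] - np.exp(-20.0)) < 1e-12 # n = 0 spike at F = 0
```

The isolated bin at $`F=0`$ carries exactly the Poisson weight $`e^{-(s+b)}`$ of the zero-event term, which is the spike discussed in the first example below.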
Two examples will provide practical proof of the principle. For the first, assume a hypothetical estimator results in a probability density function of simple Gaussian form $$\rho _1(F)=\frac{1}{\sqrt{2\pi }\sigma }e^{-\frac{(F-\mu )^2}{2\sigma ^2}},$$ (22) where $`\sigma =0.2`$ and $`\mu =2.0`$. For an expected $`s+b=20.0`$, both the FFT method and the toy Monte Carlo method are used to evolve the event estimator probability density function into an experiment estimator probability density function. The agreement between the two methods (Fig. 1) is striking. The higher precision of the FFT method is apparent, even when compared to 1 million toy Monte Carlo experiments. The periodic structure is due to the discontinuous Poisson distribution being convolved with a narrow event estimator probability function. In particular, the peak at $`\mathrm{ln}E=0`$ corresponds to the probability that exactly zero events be observed ($`e^{-(s+b)}=2.1\times 10^{-9}`$). The precision of the toy Monte Carlo method is limited by the number of Monte Carlo experiments, while the precision of the FFT method is limited only by computer precision. For the second example, a more realistic estimator is calculated using a discriminant variable distribution from an imaginary $`\text{HZ}\to \text{H}\tau \tau `$ search. The variable used here is the reconstructed Higgs mass of the event. This estimator’s probability density function is then calculated for an experiment with $`s=5`$ and $`b=3`$ expected events (Fig. 2). Again, the two methods agree well in regions where the toy Monte Carlo method is useful. These examples support the mathematical proof of the FFT method described above. Because the final calculations of $`c_{s+b}`$ and $`c_b`$ are simply integrals of the experiment estimator probability density function, any confidence levels calculated with the FFT method and the toy Monte Carlo method are identical. The examples also show the precision achievable with the FFT method, a precision that will be important when testing discovery hypotheses at the $`5\sigma =5\times 10^{-7}`$ level. ## 4 Combining results from several searches Given the multiplicative properties of the likelihood ratio estimator, the combination of several search channels proceeds intuitively. The estimator for any combination of events is simply the product of the individual event estimators. Consequently, construction of the estimator probability density function for the combination of channels parallels the construction of the estimator probability density function for the combination of events in a single channel. In particular, for a combination with $`N`$ search channels: $`\overline{\rho _{s+b}(G)}`$ $`=`$ $`{\displaystyle \prod _{j=1}^{N}}\overline{\rho _{s+b}^j(G)}`$ (23) $`=`$ $`e^{\sum _{j=1}^N(s_j+b_j)\left[\overline{\rho _1^j(G)}-1\right]}`$ (24) Due to the strictly multiplicative nature of the estimator, this combination method is internally consistent. No matter how subsets of the combinations are rearranged (i.e., combining channels in different orders, combining different subsets of data runs), the result of the combination does not change. Once results are obtained for $`\rho _{s+b}(F)`$ and $`\rho _b(F)`$, simple integration gives the confidence coefficients $`c_{s+b}`$ and $`c_b`$. From this point, confidence levels for the two search hypotheses may be calculated in a number of ways . Those straightforward calculations are outside the scope of this note.
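A sketch of the channel combination in the same numpy setting as above (the function name and interface are ours, not from the clfft package):

```python
import numpy as np

def combined_experiment_pdf(channels):
    # channels: list of (p1, mu) pairs, one per search channel, where each
    # p1 is the single-event estimator distribution (probability per bin)
    # sampled on one common grid of F = ln E.  Implements Eq. (24):
    # rho_bar(G) = exp( sum_j mu_j * (p1_bar_j(G) - 1) ).
    exponent = 0.0
    for p1, mu in channels:
        exponent = exponent + mu * (np.fft.fft(p1) - 1.0)
    return np.real(np.fft.ifft(np.exp(exponent)))
```

Because the channel terms simply add in the exponent, the result is manifestly independent of the order or grouping of the channels, which is precisely the internal-consistency property noted above.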
## 5 Final remarks and conclusions A few short remarks conclude this note and emphasize the advantages of calculations using the likelihood ratio with the Fast Fourier Transform (FFT) method. 1. The likelihood ratio estimator is an optimal ordering estimator for maximizing both discovery and exclusion potential. Such an estimator can only improve the discovery or exclusion potential of a search. 2. As a multiplicative estimator, the likelihood ratio estimator ensures internal consistency when results are combined. For example, if the dataset is split into several smaller pieces, the combined result always remains the same. 3. The probability density function of an ensemble estimator may be calculated analytically from the event estimator probability density function. Avoiding toy Monte Carlo generation brings revolutionary advances in speed and precision. For a $`\text{HZ}\to \text{4-jets}`$ search with 25 expected background events, a full confidence level calculation with $`2^{18}`$ toy MC experiments and 60 Higgs mass hypotheses takes approximately fifteen CPU hours. By contrast, the same calculation using the FFT method takes approximately two CPU minutes. This discrepancy only increases as the required confidence level precision and the number of toy MC experiments increase. For example, confidence level calculations for discovery at the $`5\sigma `$ level would require $`𝒪(10^8)`$ toy MC experiments. Given the approximately linear scaling of calculation time with the number of toy experiments, such a calculation would take almost a year in the 4-jet channel alone! The precision of the analytic FFT method is more than sufficient for a $`5\sigma `$ discovery. A fast confidence level calculation makes possible studies that might have otherwise been too CPU-intensive with the toy MC method. These include studies of improvements in the event selections, of various working points, and of systematic errors and their effects, among others. A precise calculation makes possible rejection of null hypotheses at the level necessary for discovery. The marriage of the likelihood ratio estimator and the FFT method seems well-suited for producing extremely fast and precise confidence level results, and the flexibility and ease of use of the clfft package should make this a powerful tool in interpreting searches for new particles.
# Mechanisms of decoherence in weakly anisotropic molecular magnets ## Abstract Decoherence mechanisms in crystals of weakly anisotropic magnetic molecules, such as V<sub>15</sub>, are studied. We show that an important decohering factor is the rapid thermal fluctuation of dipolar interactions between magnetic molecules. A model is proposed to describe the influence of this source of decoherence. Based on the exact solution of this model, we show that at relatively high temperatures, about 0.5 K, the quantum coherence in a V<sub>15</sub> molecule is not suppressed and, in principle, can be detected experimentally. Therefore, these molecules may be suitable prototype systems for the study of physical processes taking place in quantum computers. A new class of magnetic compounds, molecular magnets , has been attracting much attention. Each molecule of such a compound is a nanomagnetic entity with a large spin (or, in the antiferromagnetic case, a large staggered magnetization). The interaction between different molecules, being of the dipole-dipole type, is very small, so that the corresponding crystal is an arrangement of identical weakly interacting nanomagnets. Molecular magnets are ideal objects for studying phenomena of great scientific importance for mesoscopic physics, such as spin relaxation in nanomagnets, quantum tunneling of magnetization, topological quantum phase interference, quantum coherence, etc . Low-spin weakly anisotropic compounds, like V<sub>15</sub> , demonstrate well-pronounced quantum properties, such as a significant tunneling splitting of the low-lying spin states. As we show here, they are attractive prototype systems for studying mesoscopic quantum coherence and the physical processes which destroy it. Besides fundamental science, these studies are also important for the implementation of quantum computation . At present, for strongly anisotropic high-spin magnetic molecules such as Mn<sub>12</sub> and Fe<sub>8</sub>, different kinds of decohering interactions have been studied, and their interplay with quantum properties at low temperatures (below 1.5–2 K) is well understood. A general conclusion about strongly anisotropic systems is that the dissipative environment, especially the bath of nuclear spins, rapidly destroys coherence even at very low temperatures, and only incoherent tunneling survives. Decoherence in weakly anisotropic magnetic molecules has not yet received much study, and such a study is the main purpose of the present paper. We analyze various sources of decoherence for such molecular magnets as V<sub>15</sub>, and show that in the temperature range 0.2 K – 0.5 K the decoherence is governed by rapidly fluctuating dipole-dipole fields produced by thermally activated molecules. This mechanism in molecular magnets has not been considered before; estimates show that in strongly anisotropic magnets like Mn<sub>12</sub> or Fe<sub>8</sub> this effect is small. Based on an exactly solvable model, we demonstrate that even at temperatures as high as 0.5 K the quantum coherence in V<sub>15</sub> molecules is remarkably robust and, in principle, can be detected experimentally. Therefore, the V<sub>15</sub> molecular magnet is a promising candidate for the study of quantum coherence and may be a useful prototype system for the investigation of physical processes taking place in quantum computers.
The magnetic subsystem of the molecule K<sub>6</sub>\[V<sub>15</sub>As<sub>6</sub>O<sub>42</sub>(H<sub>2</sub>O)\]$`\cdot `$8H<sub>2</sub>O (denoted for brevity as V<sub>15</sub>) consists of fifteen V<sup>4+</sup> ions with spin 1/2 (see Fig. 1). The ions form two nonplanar hexagons (with total spin equal to zero) and a triangle sandwiched between them. Exchange interactions between the ions are reasonably large (from 30 K to 800 K), but, due to the strong spin frustration present in the molecule, the couplings of the central triangle spins with the hexagons cancel each other (see Fig. 1). The hexagon spins form a rather stiff antiferromagnetic structure, and the low-energy part of the spectrum is defined by only three weakly coupled spin-1/2 ions belonging to the central triangle. An effective exchange coupling $`J\approx 2.5`$ K between the triangle spins is present. Thus, the ground-state term, consisting of two doublets with $`S=1/2`$, is separated from the low-lying excited term $`S=3/2`$ by the distance $`\mathrm{\Delta }_1=3J/2\approx 3.8`$ K. Experimental results suggest that within the two ground-state doublets the states $`|S=1/2,S_z=+1/2>`$ and $`|S=1/2,S_z=-1/2>`$ are mixed (a small anisotropic interaction may be responsible, but it is not of concern for the arguments presented), so that tunneling between these levels occurs and the fourfold degeneracy of the ground state is partly lifted (it cannot be lifted completely because of Kramers’ theorem: in the absence of an external field all levels are doubly degenerate). The coherent tunneling leads to a splitting $`\mathrm{\Delta }_0\approx 0.2`$ K between the two pairs of Kramers-degenerate levels. The aim of this paper is to study the decohering influence of the environment upon this tunneling, i.e. the decoherence between the states $`|S=1/2,S_z=+1/2>`$ and $`|S=1/2,S_z=-1/2>`$. First, we consider decoherence caused by the spin-lattice relaxation. The rate of the relaxation due to direct one-phonon processes can be estimated as $$\left(\tau _{sl}^{dir}\right)^{-1}=9\pi \frac{\left|V_{sl}\right|^2}{Mv^2}\left(\frac{\mathrm{\Delta }_0}{\theta }\right)^3\mathrm{coth}\left(\frac{\mathrm{\Delta }_0}{2T}\right),$$ (1) where $`\mathrm{\Delta }_0`$ is the tunneling splitting of the ground state doublets, $`T`$ is the temperature, $`v\approx 2800`$ m/s is the sound velocity in the molecular crystal, $`M\approx 2.3\times 10^3`$ a.m.u. is the mass of the molecule, $`\theta =(6\pi ^2v^3/\mathrm{\Omega }_0)^{1/3}\approx 70`$ K is the Debye temperature ($`\mathrm{\Omega }_0`$ is the volume per molecule), and $`V_{sl}`$ is the characteristic “modulation” of the spin energy under long-wavelength acoustic deformation. At present, the physical mechanism of spin-lattice coupling is unclear, but the value of $`V_{sl}\approx 2.6`$ K has been estimated from the available experimental data . As a result, the estimate is $`\left(\tau _{sl}^{dir}\right)^{-1}\approx 2T\times 10^{-11}`$ K (where $`T`$ is the temperature in Kelvins). Here and below, we put $`\hbar =k_B=1`$ and express all quantities, including relaxation time, in the same units (Kelvins). Also, there is a contribution from Raman two-phonon processes, but at low temperatures the corresponding relaxation time $`\tau _{sl}^R`$ is very long: $`\tau _{sl}^{dir}/\tau _{sl}^R\sim T^6/(Mv^2\mathrm{\Delta }_0^2\theta ^3)\ll 1`$ , and it can be neglected.
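As a numerical cross-check of this estimate, Eq. (1) can be evaluated directly with the parameters quoted above (a minimal sketch; the unit conversion to kelvin-based $`\hbar =k_B=1`$ units is ours):

```python
import numpy as np

# Working units: hbar = k_B = 1, energies and rates in kelvin.
amu_K  = 1.08e13            # 1 a.m.u. * c^2 expressed in kelvin (our conversion)
V_sl   = 2.6                # K, spin-lattice matrix element quoted above
M      = 2.3e3 * amu_K      # molecular mass (rest energy) in K
v      = 2800.0 / 2.998e8   # sound velocity in units of c
theta  = 70.0               # K, Debye temperature
Delta0 = 0.2                # K, tunneling splitting

def rate_direct(T):
    """Direct one-phonon relaxation rate of Eq. (1), in K."""
    return (9.0 * np.pi * V_sl**2 / (M * v**2)
            * (Delta0 / theta)**3 / np.tanh(Delta0 / (2.0 * T)))

# For T >> Delta0, coth(Delta0/2T) ~ 2T/Delta0, and the rate approaches
# the 2*T*1e-11 K estimate quoted in the text:
print(rate_direct(0.5))     # ~1e-11 K
```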
We also consider Orbach two-step relaxation via the excited levels $`S=3/2`$ (see Fig. 1): $$\left(\tau _{sl}^{Or}\right)^{-1}=9\pi \frac{|V_{sl}|^2}{Mv^2}\left(\frac{\mathrm{\Delta }_1}{\theta }\right)^3\mathrm{exp}\left(-\frac{\mathrm{\Delta }_1}{T}\right)$$ (2) and for the parameters of V<sub>15</sub> we have $`\left(\tau _{sl}^{Or}\right)^{-1}\approx 10^{-8}\mathrm{exp}(-\mathrm{\Delta }_1/T)`$ K. Here, we assume that the spin-lattice matrix element $`V_{sl}`$ is of the same order as above (about 2.6 K). Along with triggering Orbach processes, the excitation of molecules to the level $`S=3/2`$ leads to a variation of the dipolar field exerted on a given molecule. As time goes by, some of the excited molecules relax back to $`S=1/2`$, while other molecules go up to the level $`S=3/2`$, and the dipolar field at a given point in the crystal fluctuates with time. In this paper, we use a mean-field approach to take into account the dipolar fields acting on molecules; it is justified since we are dealing with the case of relatively high temperatures (in comparison with the energy of dipolar interactions $`\mathrm{\Gamma }_0`$) and long-range dipolar forces. Within the mean-field approach, the dipolar field of the molecule with the spin $`𝐒_2`$ (equal to 3/2) can be imagined as a sum of the field created by a spin $`𝐒_1`$ (equal to 1/2) and a field created by the spin $`𝐒^{\prime }=𝐒_2-𝐒_1`$. Thus, the total dipolar field is a sum of two fields: the static demagnetizing field created by a uniform medium of spins 1/2, and a purely fluctuating field $`h`$ created by the spins $`𝐒^{\prime }`$. The spins $`𝐒^{\prime }`$ are distributed approximately uniformly over the sample at any instant, and their number $`N_1`$ is small in comparison with the total number $`N`$ of molecules, $`N_1=N\mathrm{exp}(-\mathrm{\Delta }_1/T)`$, so the fluctuating field $`h`$ at any instant obeys the Cauchy (Lorentz) distribution (Chapter IV, Ref. ): $$P\left(h\right)=\frac{\mathrm{\Gamma }}{\pi }\frac{1}{h^2+\mathrm{\Gamma }^2}$$ (3) with $`\mathrm{\Gamma }=\mathrm{\Gamma }_0(N_1/N)=\mathrm{\Gamma }_0\mathrm{exp}\left(-\mathrm{\Delta }_1/T\right)`$, where $`\mathrm{\Gamma }_0\approx 10^{-4}`$ K is of order of the dipole-dipole interaction energy in the ground state. Note that the fluctuating field $`h`$ is measured against the total static field, including the static dipolar field. A comparison with Eqs. (1) and (2) shows that at $`T>0.2`$ K the distribution width $`\mathrm{\Gamma }`$ is much larger than $`1/\tau _{sl}`$, so that the fluctuating field $`h`$ destroys coherence much faster than phonons do. Therefore, the fluctuating dipole-dipole field constitutes an important decoherence factor.
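The comparison just made is easy to reproduce numerically (a small sketch using the order-of-magnitude prefactors quoted in the text; the exact crossover location should not be taken as precise):

```python
import numpy as np

Gamma_0 = 1e-4    # K, order of magnitude of the dipolar energy (from the text)
Delta_1 = 3.8     # K, separation of the S = 3/2 term (from the text)

def gamma_width(T):
    """Width of the Cauchy distribution (3): Gamma_0 * exp(-Delta_1 / T)."""
    return Gamma_0 * np.exp(-Delta_1 / T)

def rate_sl_direct(T):
    """Order-of-magnitude phonon rate quoted below Eq. (1): ~2*T*1e-11 K."""
    return 2.0 * T * 1e-11

for T in (0.25, 0.30, 0.40, 0.50):
    print(f"T = {T:4.2f} K:  Gamma = {gamma_width(T):.1e} K,"
          f"  1/tau_sl = {rate_sl_direct(T):.1e} K")
# Already near T ~ 0.25 K the thermally activated width Gamma overtakes
# the phonon rate, and the ratio grows rapidly with temperature.
```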
During the time $`\tau _c`$, a majority of the molecules situated initially in the state $`S=3/2`$ relax to $`S=1/2`$, and other molecules are excited, causing the field to fluctuate. Thus, $`\tau _c`$ is the correlation time of the fluctuating dipolar field. The estimate gives $`\tau _c^{-1}\approx 10^{-8}`$ K for V<sub>15</sub>, so that $`\mathrm{\Gamma }\tau _c\ll 1`$ at $`T<0.5`$ K, i.e. the field fluctuations are fast in comparison with their amplitude.

Now, let us consider the hyperfine fields, which constitute an important source of decoherence . A typical time $`\tau _n`$ for fluctuations of the hyperfine field is of the order of the inverse linewidth of nuclear magnetic resonance and can be estimated from the dipole-dipole interactions between different nuclei : $`1/\tau _n\sim (\mu _n/\mu _e)^2\mathrm{\Gamma }_0`$, where $`\mu _n`$, $`\mu _e`$ are the nuclear and electronic magnetic moments, respectively. Therefore, for temperatures $`T>\mathrm{\Delta }_1/[2\mathrm{ln}(\mu _e/\mu _n)]\approx 0.2`$ K one has $`\tau _n\mathrm{\Gamma }\gg 1`$ and the hyperfine fields can be considered as static over time intervals of order $`\mathrm{\Gamma }^{-1}`$. As will be shown below, $`\mathrm{\Gamma }`$ defines the relaxation (decoherence) time, so that the hyperfine fields can be combined with the static demagnetizing fields to give some total static mean-field bias $`h_0`$ of the doublet levels. This bias is determined mainly by the hyperfine field exerted on a molecule, which is about $`\mathrm{\Gamma }_{hf}\approx 5\times 10^{-2}`$ K (the demagnetizing fields are weaker), and is of the order of the tunneling splitting $`\mathrm{\Delta }_0`$. Therefore, for a large fraction of the molecules the levels $`|S_z=+1/2\rangle `$ and $`|S_z=-1/2\rangle `$ are rather close to resonance. This is radically different from the case of strongly anisotropic molecular magnets (such as Mn<sub>12</sub> or Fe<sub>8</sub>), where the ground-state tunneling splitting is much smaller than the hyperfine fields.

Finally, we consider the static dipolar interaction $`\mathrm{\Gamma }_0\approx 10^{-4}`$ K between the molecules situated in the lowest four states (with $`S=1/2`$). Longitudinal dipolar interactions (the terms $`S_z^1S_z^2`$) are included in the mean field along with the static hyperfine field $`\mathrm{\Gamma }_{hf}`$, and can be neglected in comparison with the latter (since $`\mathrm{\Gamma }_0\ll \mathrm{\Gamma }_{hf}`$). The terms $`S_z^1S_x^2`$ etc. within the mean-field approximation just change the tunneling splitting negligibly (since $`\mathrm{\Gamma }_0\ll \mathrm{\Delta }_0`$). But the flip-flop terms ($`S_x^1S_y^2`$ etc.) cannot be incorporated into the mean-field scheme. A flip-flop between two molecules is a transition from the state $`|S_z^1=+1/2,S_z^2=-1/2\rangle `$ to the state $`|S_z^1=-1/2,S_z^2=+1/2\rangle `$. The matrix element of this transition is of order $`\mathrm{\Gamma }_0`$, but the energy difference between the initial and final states is determined by the difference in the local mean fields acting on the two molecules, which is of order $`\mathrm{\Gamma }_{hf}\gg \mathrm{\Gamma }_0`$. In this situation, known as Anderson localization, the levels of the molecule do not widen at all, and no spin diffusion is present. The localization can be lifted by the dynamic change of the hyperfine field at the molecule, but this happens on a timescale $`t\sim \tau _n`$. At temperatures $`T>0.2`$ K the coherence is already lost at these times, due to thermoactivated dipolar field fluctuations ($`\mathrm{\Gamma }\tau _n\gg 1`$). At lower temperatures, the mean-field approach is not valid, and the intermolecular correlations should be taken into account.
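To see how these scales delimit the temperature window quoted below, one can tabulate the dimensionless products $`\mathrm{\Gamma }\tau _{sl}`$, $`\mathrm{\Gamma }\tau _c`$ and $`\mathrm{\Gamma }\tau _n`$. The sketch below uses the order-of-magnitude inputs quoted above, with the magnetic-moment ratio approximated by the nuclear-to-Bohr magneton ratio (an assumption on our part), so the edges of the 0.2–0.5 K window come out only as rough, order-one crossovers.

```python
# Rough map of the decoherence scales (all in Kelvin, hbar = k_B = 1).
import numpy as np

Gamma0, Delta0, Delta1 = 1e-4, 0.2, 3.8     # K
inv_tau_c = 1e-8                            # K, from Eq. (4)
inv_tau_n = (1 / 1836.15)**2 * Gamma0       # ~ (mu_n/mu_e)^2 * Gamma_0

Gamma = lambda T: Gamma0 * np.exp(-Delta1 / T)   # width of distribution (3)

for T in (0.2, 0.3, 0.4, 0.5):
    g = Gamma(T)
    print(f"T = {T} K: Gamma = {g:.1e} K, Gamma*tau_sl = {g/(2*T*1e-11):.1e}, "
          f"Gamma*tau_c = {g/inv_tau_c:.1e}, Gamma*tau_n = {g/inv_tau_n:.1e}")

print(f"Gamma/Delta0 at 0.5 K = {Gamma(0.5)/Delta0:.1e}")   # cf. ~2e-7 below
```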
Summarizing the discussion above, the dynamic dipolar fluctuations constitute an important source of decoherence at 0.2 K $`<T<`$ 0.5 K.

Let us now formulate a model for magnetic relaxation under the fluctuating dipolar field $`𝐡=h_x𝐞_x+h_y𝐞_y+h_z𝐞_z`$. We consider a two-level system (the levels $`S_z=\pm 1/2`$ for V<sub>15</sub>) with the static tunneling splitting $`\mathrm{\Delta }_0`$ and static mean-field bias $`h_0`$ (the latter is governed mainly by the hyperfine static fields, since the demagnetizing fields are much weaker). The system is described by the density matrix $`\rho `$ written in the basis formed by the levels $`S_z=\pm 1/2`$. Its equation of motion is

$$\dot{\rho }=i[\rho ,\mathcal{H}]$$ (5)

where $`\mathcal{H}=-(\mathrm{\Delta }_0+h_x)\sigma _x-h_y\sigma _y-(h_0+h_z)\sigma _z`$ is the Hamiltonian of the system ($`\sigma _{x,y,z}`$ are the Pauli matrices). It can be conveniently written as

$`\dot{x}`$ $`=`$ $`h_yy-(\mathrm{\Delta }_0+h_x)z`$ (6)
$`\dot{y}`$ $`=`$ $`-h_yx+(h_0+h_z)z`$ (7)
$`\dot{z}`$ $`=`$ $`(\mathrm{\Delta }_0+h_x)x-(h_0+h_z)y`$ (8)

by introducing the variables $`x=(\rho _{11}-\rho _{22})/2`$, $`y=(\rho _{12}+\rho _{21})/2`$ and $`z=(\rho _{12}-\rho _{21})/(2i)`$. The static fields $`\mathrm{\Delta }_0`$ and $`h_0`$ can be eliminated by two rotations of the co-ordinate frame:

$`x`$ $`=`$ $`X\mathrm{cos}\phi -(Y\mathrm{cos}Et+Z\mathrm{sin}Et)\mathrm{sin}\phi `$ (9)
$`y`$ $`=`$ $`X\mathrm{sin}\phi +(Y\mathrm{cos}Et+Z\mathrm{sin}Et)\mathrm{cos}\phi `$ (10)
$`z`$ $`=`$ $`-Y\mathrm{sin}Et+Z\mathrm{cos}Et,`$ (11)

where $`\mathrm{sin}\phi =\mathrm{\Delta }_0/E`$, $`\mathrm{cos}\phi =h_0/E`$, $`E=\sqrt{\mathrm{\Delta }_0^2+h_0^2}`$, and Eqs. (6)–(8) take the form

$`\sqrt{2}\dot{X}`$ $`=`$ $`\left(h_{2a}-h_{3b}\right)Y-\left(h_{2b}+h_{3a}\right)Z`$ (12)
$`\sqrt{2}\dot{Y}`$ $`=`$ $`\sqrt{2}h_1Z+\left(h_{3b}-h_{2a}\right)X`$ (13)
$`\sqrt{2}\dot{Z}`$ $`=`$ $`\left(h_{2b}+h_{3a}\right)X-\sqrt{2}h_1Y.`$ (14)

The random fields acting on the system are $`h_1=h_z\mathrm{cos}\phi +h_x\mathrm{sin}\phi `$, $`h_{2a,3a}=\sqrt{2}h_{2,3}\mathrm{sin}Et`$, and $`h_{2b,3b}=\sqrt{2}h_{2,3}\mathrm{cos}Et`$, where $`h_2=-h_z\mathrm{sin}\phi +h_x\mathrm{cos}\phi `$ and $`h_3=h_y`$. As we discussed above, $`h_{x,y,z}`$ are independent random fields, at any instant distributed with the same law (3); the same is true for $`h_{1,2,3}`$. Since $`E\tau _c\ge \mathrm{\Delta }_0\tau _c\gg 1`$, one can consider $`h_{2a,3a}`$ and $`h_{2b,3b}`$ as independent, and in Eqs. (12)–(14) we have several independent fluctuating fields with the same Cauchy distribution and with very short autocorrelation time $`\tau _c`$. Eqs. (12)–(14) can be imagined as describing the evolution of a system with the Hamiltonian $`H=H_1+H_2+H_3`$,

$$\dot{𝐑}=iH𝐑$$ (15)

where $`𝐑=(X,Y,Z)`$, $`H_1=2^{-1/2}(h_{2a}-h_{3b})S_1`$, $`H_2=2^{-1/2}(h_{2b}+h_{3a})S_2`$, $`H_3=h_1S_3`$, and the noncommuting matrices $`S_{1,2,3}`$ are

$$S_1=\left(\begin{array}{ccc}0& -i& 0\\ i& 0& 0\\ 0& 0& 0\end{array}\right),S_2=\left(\begin{array}{ccc}0& 0& i\\ 0& 0& 0\\ -i& 0& 0\end{array}\right),S_3=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& -i\\ 0& i& 0\end{array}\right).$$ (16)

The formal solution of Eq. (15) can be represented in a path-integral-like form, by splitting the time interval $`(0,t)`$ into $`N\gg 1`$ equal pieces of length $`ϵ=t/N`$:

$$𝐑(t)=\mathrm{exp}[iϵH(t_{N-1})]\mathrm{\cdots }\mathrm{exp}[iϵH(0)]𝐑(0)$$ (17)

where $`t_n=nϵ`$.
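Before averaging Eq. (17) analytically, Eqs. (6)–(8) can be integrated numerically, redrawing the Cauchy-distributed field every correlation time $`\tau _c`$. The sketch below does this in scaled units ($`\mathrm{\Gamma }=1`$, resonant case $`h_0=0`$, $`\mathrm{\Delta }_0=100\mathrm{\Gamma }`$, $`\mathrm{\Gamma }\tau _c=0.05`$; the numbers are chosen only to separate the scales, not taken from V<sub>15</sub>). The ensemble-averaged rotated-frame component $`X`$ should then decay roughly as $`\mathrm{exp}(-2\sqrt{2}\mathrm{\Gamma }t)`$, anticipating Eq. (19) below.

```python
# Monte Carlo integration of Eqs. (6)-(8) with piecewise-constant Cauchy fields.
import numpy as np

rng = np.random.default_rng(1)
Gamma, Delta0, h0 = 1.0, 100.0, 0.0
tau_c, n_steps, n_real = 0.05, 200, 4000

E = np.hypot(Delta0, h0)
sphi, cphi = Delta0 / E, h0 / E

r = np.tile([cphi, sphi, 0.0], (n_real, 1))   # gives X(0) = 1 in the rotated frame
avg_X = np.empty(n_steps)
for i in range(n_steps):
    h = Gamma * rng.standard_cauchy((n_real, 3))          # (h_x, h_y, h_z), law (3)
    # Eqs. (6)-(8) are the rotation dr/dt = omega x r of the vector r = (x, y, z):
    omega = -np.stack([h0 + h[:, 2], Delta0 + h[:, 0], h[:, 1]], axis=1)
    nrm = np.linalg.norm(omega, axis=1, keepdims=True)
    n = omega / nrm
    a = nrm * tau_c                                       # rotation angle per step
    ca, sa = np.cos(a), np.sin(a)
    r = r * ca + np.cross(n, r) * sa + n * np.sum(n * r, axis=1, keepdims=True) * (1 - ca)
    avg_X[i] = np.mean(r[:, 0] * cphi + r[:, 1] * sphi)   # X = x cos(phi) + y sin(phi)

t = tau_c * (1 + np.arange(n_steps))
mask = avg_X > 0.1
rate = -np.polyfit(t[mask], np.log(avg_X[mask]), 1)[0]
print(f"fitted decay rate = {rate:.2f} Gamma; Eq. (19) gives 2*sqrt(2) = {2**1.5:.2f}")
```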
Each of the matrices $`H_k`$ is proportional to the fluctuating fields $`h_{1,2,3}`$, so if we choose $`ϵ\ll \mathrm{\Gamma }^{-1}`$ the Trotter decomposition formula can be applied to each factor:

$$\mathrm{exp}\left(iϵH\right)=\prod _k\mathrm{exp}(iϵH_k)+𝒪(ϵ^2),$$ (18)

where $`k=1,2,3`$. The correlation time of all the fields $`h_{1,2,3}`$ is $`\tau _c`$, so $`H_k(t)`$ and $`H_k(t+ϵ)`$ in Eq. (18) are decorrelated if $`ϵ\gtrsim \tau _c`$. Choosing $`\tau _c\lesssim ϵ\ll \mathrm{\Gamma }^{-1}`$, each term in the products (17) and (18) can be averaged independently over different realizations of the random processes represented by the fields $`h_{1,2,3}`$, thus giving the answer:

$`X(t)`$ $`=`$ $`X(0)\mathrm{exp}(-2\sqrt{2}\mathrm{\Gamma }t)`$ (19)
$`Y(t)`$ $`=`$ $`Y(0)\mathrm{exp}[-(\sqrt{2}+1)\mathrm{\Gamma }t]`$ (20)
$`Z(t)`$ $`=`$ $`Z(0)\mathrm{exp}[-(\sqrt{2}+1)\mathrm{\Gamma }t]`$ (21)

These results, together with Eqs. (9)–(11), represent an exact solution of the problem. The situation considered here is similar to that found in spin resonance, and the results can be conveniently expressed in the corresponding terms. The dynamics of the density matrix elements is represented as a sum of two terms: damped oscillations with frequency $`E`$ (with transverse relaxation rate $`T_2^{-1}=(\sqrt{2}+1)\mathrm{\Gamma }`$) and pure damping (with longitudinal damping rate $`T_1^{-1}=2\sqrt{2}\mathrm{\Gamma }`$). The decoherence times $`T_{1,2}`$ are both of order $`\mathrm{\Gamma }^{-1}`$. This holds in spite of the smallness of $`\tau _c`$, due to the peculiar properties of the Cauchy distribution: for Gaussian fluctuations with variance $`\sigma ^2`$ we would have the much smaller relaxation rate $`\sigma ^2\tau _c`$ (motional narrowing ). On the other hand, if $`\tau _c`$ were very large then the dipolar field would be almost static, and the decoherence time for a single molecule would be determined by the hyperfine fields, as it is for Mn<sub>12</sub> or Fe<sub>8</sub> . Nevertheless, for V<sub>15</sub> the decoherence rate is still small enough: $`\mathrm{\Gamma }/\mathrm{\Delta }_0\approx 2\times 10^{-7}`$ at $`T=0.5`$ K, i.e. the system tunnels about 5,000,000 times before the tunneling oscillations are wiped out by decoherence. We emphasize that each tunneling event in V<sub>15</sub> is not a single-spin event: it takes place between two states of the whole molecule. It is a tunneling of an antiferromagnetic system with a small uncompensated spin, and all 15 spins are involved.

Summarizing, we have considered possible sources of decoherence in V<sub>15</sub> molecules between the states $`S_z=\pm 1/2`$ of the ground-state doublets. We found that in the temperature region 0.2–0.5 K the main source of decoherence is the fluctuating dipolar field created by the molecules which are thermally activated to the higher $`S=3/2`$ level. Based on an exactly solvable model, a rather low decoherence rate is found: about 5,000,000 tunneling events occur before the coherence is destroyed. Such a low decoherence rate is unusual for magnetic systems of mesoscopic size at these temperatures.

The authors would like to thank W. Wernsdorfer, I. Chiorescu and B. Barbara for helpful discussions. This work was partially carried out at the Ames Laboratory, which is operated for the U. S. Department of Energy by Iowa State University under Contract No. W-7405-Eng-82, and was supported by the Director for Energy Research, Office of Basic Energy Sciences of the U. S. Department of Energy. This work was partially supported by the Russian Foundation for Basic Research, grant 98-02-16219.
no-problem/9906/hep-lat9906001.html
ar5iv
text
# On the 𝐼=2 channel 𝜋-𝜋 interaction in the chiral limit

Supported in part by NSF PHY-9700502, and by FWF P10468-PHY.

## 1 INTRODUCTION

The use of improved lattice actions allows one to work with lattice volumes large enough to accommodate systems of two hadrons, with manageable computational effort. We take advantage of this opportunity to study the residual effective interaction of two pseudo-scalar mesons on the lattice. This paper is a report on the current status of this project. An earlier exploratory study was made with a smaller number of gauge configurations and two, somewhat large, quark masses. The new results presented here are based on 208 configurations and 6 values of quark masses, with otherwise unchanged lattice parameters. The current simulation allows extrapolation of the extracted $`\pi `$-$`\pi `$ potential to the chiral limit and a comparison of lattice-based scattering phase shifts, computed with the potential, to experimental results in the isospin $`I=2`$ channel.

## 2 LATTICE PARAMETERS

An $`L^3\times T=9^3\times 13`$ lattice was used with an $`O(a^2)`$ tree-level and tadpole improved action based on next-nearest neighbor couplings . At $`\beta =6.2`$, in the conventions of , the corresponding lattice constant is $`a\approx 0.4`$ fm, or $`a^{-1}\approx 500`$ MeV. We have used $`N_U=208`$ quenched gauge configurations. The hopping parameters for Wilson fermions were set to $`\kappa ^{-1}=5.720,5.804,5.888,5.972,6.056,6.140`$. The critical value of $`\kappa ^{-1}`$ is $`\approx 5.5`$. Quark propagator matrix elements were computed using a random-source technique with $`N_R=8`$ Gaussian sources.

## 3 METHOD

An outline of the theoretical framework may be found in ; here we mention only the essential points. Suitable operators for the $`\pi ^+`$–$`\pi ^+`$ system, having isospin $`I=2`$, are

$$\mathrm{\Phi }_\stackrel{}{p}(t)=\varphi _{-\stackrel{}{p}}(t)\varphi _{+\stackrel{}{p}}(t),$$ (1)

where

$$\varphi _\stackrel{}{p}(t)=L^{-3}\sum _\stackrel{}{x}e^{-i\stackrel{}{p}\stackrel{}{x}}\overline{\psi }^\text{d}(\stackrel{}{x},t)\gamma _5\psi ^\text{u}(\stackrel{}{x},t)$$ (2)

describes single mesons with lattice momenta $`\stackrel{}{p}`$. The correlation matrix for the $`\pi ^+`$–$`\pi ^+`$ system

$$C_{\stackrel{}{p}\stackrel{}{q}}=\langle \mathrm{\Phi }_\stackrel{}{p}^{}(t)\mathrm{\Phi }_\stackrel{}{q}(t_0)\rangle =\overline{C}_{\stackrel{}{p}\stackrel{}{q}}+C_{I,\stackrel{}{p}\stackrel{}{q}}$$ (3)

is a sum of a free contribution, $`\overline{C}`$, and a residual-interaction contribution, $`C_\text{I}`$. The free $`\pi ^+`$–$`\pi ^+`$ correlator $`\overline{C}`$ is diagonal in $`\stackrel{}{p},\stackrel{}{q}`$. We also implement link variable fuzzing and operator smearing at the sink. We define an effective interaction through

$$H_\text{I}=-\frac{\partial }{\partial t}\mathrm{ln}(\overline{C}^{-1/2}C\overline{C}^{-1/2}).$$ (4)

Matrix elements of $`H_\text{I}`$ are obtained from linear fits to the logarithm of the eigenvalues of the correlators $`\overline{C}`$ and $`C`$. At this point, only the diagonal elements of $`C`$ were utilized. Momentum-space matrix elements $`(\stackrel{}{p}|H_\text{I}|\stackrel{}{q})`$ are computed in a truncated basis of small lattice momenta. The Fourier transform to coordinate space contains a local potential

$$V(\stackrel{}{r})=\sum _\stackrel{}{q}e^{2i\stackrel{}{q}\stackrel{}{r}}(-\stackrel{}{q}|H_\text{I}|+\stackrel{}{q}).$$ (5)

Its $`s`$-wave projection ($`\ell =0`$) makes use of only the irreducible representation $`A_1`$ of the lattice symmetry group $`O(3,𝒵)`$. In terms of the corresponding reduced matrix elements we have

$$V(r)=\sum _qj_0(2qr)(q|H_\text{I}^{(A_1)}|q),$$ (6)

where $`j_0`$ is a spherical Bessel function.
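The effect of the momentum truncation in Eq. (6) is easy to visualize. In the sketch below the reduced matrix elements are replaced by a hypothetical short-range form (they are not lattice data), so only the structural point matters: truncating the sum at $`k_{\text{max}}`$ leaves ripples of wavelength $`\pi /q_{max}=L/(2k_{\text{max}})`$ in $`V(r)`$, which is the behavior referred to in the Results section below.

```python
# Truncated s-wave Fourier sum of Eq. (6) with hypothetical matrix elements.
import numpy as np

L = 9.0                                   # spatial lattice extent, lattice units
r = np.linspace(0.01, 4.0, 400)

def matrix_element(q):
    return -0.1 * np.exp(-q**2)           # hypothetical (q|H_I|q); the q = 0 term
                                          # only adds an r-independent constant
def V(r, k_max):
    q = 2 * np.pi * np.arange(1, k_max + 1) / L
    j0 = np.sinc(2 * q[:, None] * r[None, :] / np.pi)   # j0(x) = sin(x)/x
    return np.sum(matrix_element(q)[:, None] * j0, axis=0)

for k_max in (1, 2, 3, 4):
    print(f"k_max = {k_max}: V(r=0.01) = {V(r, k_max)[0]:+.4f}, "
          f"ripple wavelength ~ {L / (2 * k_max):.2f}")
```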
## 4 RESULTS

Potentials according to (6) for one selected value of $`\kappa `$ are shown in Fig. 1. The sum over (on-axis) momenta $`q=\frac{2\pi }{L}k`$ was truncated at increasing $`k_{\text{max}}=0,1,2,3,4`$, respectively. A detailed error analysis is in progress; however, errors appear to increase with $`k`$. The results presented below are for the truncation at $`k_{\text{max}}=3`$.

Chiral extrapolation of the potential was done by linear fits of $`V(r)`$ versus $`m^2`$ for a fine (plot-grade) mesh of values of $`r`$, fixed one at a time. Using sets of 3 through 6 data points, corresponding to the smallest available values of $`m^2`$, gives very similar results. The subsequent analysis was performed with 3 data points. The extrapolated potential $`V(r)`$ is shown in Fig. 2 as a dashed line. The oscillations of $`V(r)`$ in the region $`r>2`$ are due to the Fourier transform of the truncated momentum sums. The wavelength is indicative of the lattice resolution at the current truncation. A parametric fit to $`V(r)`$ with

$$V^{(\alpha )}(r)=\alpha _1\frac{1-\alpha _2r^{\alpha _5}}{1+\alpha _3r^{\alpha _5+1}e^{\alpha _4r}}+\alpha _0,$$ (7)

at $`\alpha _5=2`$ fixed, was applied to the extrapolated potential. The result is shown in Fig. 2 as a solid line. It suggests attraction at short distances followed by a repulsive barrier. We have used $`V^{(\alpha )}(r)`$ in a Schrödinger equation to calculate s-wave scattering phase shifts $`\delta _{\ell =0}^{I=2}(p)`$, see Fig. 3. The pion mass was set to multiples of the experimental value, corresponding to $`m_\pi =0.28`$ in units of $`a^{-1}`$. The repulsive nature of the phase shifts is due to the hump of $`V^{(\alpha )}(r)`$ around, and extending beyond, $`r\approx 1`$, see Fig. 2. The data points in Fig. 3 are experimental results compiled from .

## 5 CONCLUSION

Scattering phase shifts for the $`I=2`$ channel $`\pi `$–$`\pi `$ system were computed from lattice QCD by way of extracting a non-relativistic potential. Since an error analysis is still pending at this point, the errors, particularly systematic ones, are unknown. The range of the extracted potential is short compared to the current spatial resolution. The latter is determined by the somewhat large value of the lattice constant, and by the limitations imposed by the momentum truncation for the correlator matrices. This situation makes it difficult to reliably extract details of $`V(r)`$. (The current lattice parameters should be better suited for studying interactions involving the larger-sized baryons.) In addition to $`V(r)`$ there is also present a nonlocal potential which has not yet been computed. The scattering phase shifts obtained from the underlying lattice study show repulsive behavior in the low-momentum region and, in this respect, compare favorably to experimental findings. Quantitatively, the computed phase shifts are too small by a sizeable factor. Relativistic corrections are at the 40% level at $`p\approx 0.6a^{-1}`$.

Acknowledgement: We would like to thank R.M. Woloshyn for a multiple-mass solver.
no-problem/9906/hep-ph9906316.html
ar5iv
text
## 1 Introduction

One of the main themes of particle physics is the striving for increased accuracy in the description of physical processes. This is required to test the current standard model in detail, and often also to control backgrounds in searches for new physics. A main road to improvements is the perturbative higher-order calculation of physical processes. For most processes, currently this means next-to-leading order (NLO), i.e. one order higher than the Born level of the process, either by the emission of one more parton or by the inclusion of one-loop corrections to the Born graph.

In principle, the perturbative expansion is a well-defined and successful technique but, for QCD processes, the large $`\alpha _\mathrm{s}`$ value makes the perturbative series only slowly convergent. This problem is especially severe in the collinear region, where the emission rate increases as one approaches the non-perturbative régime. Finite total cross sections are obtained only by a cancellation between large positive real and large negative virtual contributions. Therefore higher-order matrix elements (ME’s) are not of much use to describe the substructure of jets, apart from very crude features.

Parton showers (PS’s) have complementary strengths. By a resummation of the large logarithmic terms, e.g. into Sudakov form factors, it is possible to obtain a reasonable description also in regions of large $`\alpha _\mathrm{s}`$ values. Formally, for most generators, this resummation is only certified to leading logarithmic (LL) accuracy, but in reality many of the expected next-to-leading log improvements are already included, such as exact energy-momentum conservation, angular ordering, and an optimal scale choice for $`\alpha _\mathrm{s}`$. Furthermore, the cross section for any $`n`$-parton configuration is always positive definite. Finally, it is possible to terminate the parton showers at some process-independent lower cut-off $`Q_0`$ and attach a — thereby also process-independent — non-perturbative hadronization model for the physics below that scale.

The main PS weakness, on the other hand, is the crude treatment of wide-angle parton emission, where many Feynman diagrams may contribute with comparable strengths, and the final rate therefore may depend on detailed interference effects not present in the PS language. For some simple processes it has been possible to improve the showers by explicit NLO ME information in this region, thereby obtaining an improved description of the process . In general, however, this approach does not appear tractable, and one would like to find other methods to combine the advantages of the ME answer at large parton separations with the PS one at small separations (and with hadronization models for scales below that). In this note we will present some thoughts on a more general — although maybe less beautiful — ME/PS matching strategy that could be used for a larger set of NLO processes. To be specific, we will consider the example where the leading-order process is of the $`2\to 2`$ type, i.e. producing two high-$`p_{\perp }`$ jets, as observed in HERA photoproduction events. The NLO corrections then contain both $`2\to 3`$ processes and virtual corrections to the $`2\to 2`$ ones.

## 2 The NLO parton configurations

The first step is to use the ME’s to set up the starting configuration, either 2- or 3-“jet” events, where “jet” is a misnomer denoting that any nearby partons have been clustered.
The two event classes are distinguished by some jet resolution criteria, i.e. by requiring that none of the partons in a $`2\to 3`$ configuration are found in a soft or collinear region. This can be defined, e.g., in terms of a minimal $`p_{\perp }`$ and $`R=\sqrt{\mathrm{\Delta }\eta ^2+\mathrm{\Delta }\varphi ^2}`$, or minimal invariant masses between partons. While it is straightforward to ‘cut out’ the appropriate phase space regions from the 3-“jet” final state, to remain with a finite and positive differential cross section everywhere else, the consequences for the 2-“jet” configurations are more complicated. Here one will now receive contributions from (1) the leading-order $`2\to 2`$ ME’s, (2) the virtual terms, including (counter)terms coming from the scale-dependent parton distributions, also $`2\to 2`$, and (3) those $`2\to 3`$ parton configurations that are rejected as such by the criteria above, and thus should be reclassified as 2-“jet” events. Divergences from soft and collinear emissions should cancel between the two latter event classes. In the extreme divergent region, the three-body phase space naturally reduces to the two-body one, and so the singularities can be cancelled analytically. The finite pieces are often integrated numerically as three separate contributions, which are only combined in the end to give the correct cross section.

If this strategy is applied in an event generator, it becomes necessary to work with events with negative weight, from the virtual-corrections term. This is possible, but known to be a very unstable procedure in practice. For instance, it requires the hadronization model to be continuous in the limit that two partons are brought closer and eventually merged into one. This is true for the string model , although some fine print sets a practical limit, but many other hadronization models are flawed in this respect from the onset.

Instead we would propose a rearranged procedure, inspired by the so-called subtraction method . (Which does not exclude the use of the competing phase-space slicing method to handle the collinear regions.) In analogy with this method, the third term is subdivided in two: (3a) a strongly simplified matrix element, that reproduces the correct behaviour in the singular regions, but away from these can be chosen in a convenient way that allows simple integration over the extra 3-body phase space variables not present in the 2-body phase space (for fixed incoming partons), and (3b) the difference between the full and the approximate expressions, which is everywhere finite but messy to integrate analytically.

In a Monte Carlo context, this would work as follows (a toy sketch of the resulting weighting is given after the list):

* Pick a desired $`2\to 2`$ parton configuration, at random (but biased to the regions of large cross sections, of course).
* Evaluate the differential cross section contributions from the parts (1) and (2+3a) for this configuration. By the notation (2+3a) we imply that the singular contributions now explicitly cancel between (2) and (3a). We expect (1)+(2)+(3) to be clearly positive, in the sense that, were (2)+(3) anywhere to become negative of the same order as (1) is positive, one would be entering a collinear/soft region of large higher-order corrections, better described by showers. Therefore the parameters of the clustering algorithm must be chosen so as to avoid this. Provided that the approximate form in (3a) is not very badly chosen, also (1)+(2+3a) should always be positive, although this is not strictly required.
* Pick a 3-body phase-space point in the soft/collinear regions that, by the jet resolution criteria, should be classified as the 2-“jet” configuration picked above.
* Evaluate the difference (3b) in this point, and multiply by the integral of the extra 3-body phase space variables.
* Add (1)+(2+3a)+(3b) together to obtain the cross section for the two-“jet” configuration.

With a reasonable separation into (3a) and (3b), the grand sum should always be a non-negative number, that can then be used to accept/reject events in order to obtain a final event sample with unit weight. The key feature here is that, by the Monte Carlo nature of it, one and the same two-“jet” configuration will be assigned different (non-negative) weights each time the cross section is evaluated, since the associated three-parton configuration will differ, but it is arranged so that the average converges to the right cross section.
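The control flow of the recipe above can be condensed into a few lines. In the following sketch every “physics” function (`sigma_born`, `sigma_virt_plus_3a`, `diff_3b`, `sample_3body`) is a hypothetical stand-in returning arbitrary numbers; only the structure follows the text: the weight of the same two-“jet” configuration is re-evaluated with a fresh associated 3-body point, the running average is accumulated, and events are accepted against a bound to obtain a unit-weight sample.

```python
# Toy sketch of the unit-weight Monte Carlo procedure; all rates are made up.
import random

random.seed(7)

def sigma_born(cfg):                 # contribution (1)
    return 1.0

def sigma_virt_plus_3a(cfg):         # contribution (2+3a), singularities cancelled
    return -0.25

def sample_3body(cfg):
    """A soft/collinear 3-body point classified as this 2-jet configuration."""
    return random.random()

def diff_3b(cfg, point):             # (3b): finite difference times phase-space volume
    return 0.4 * (point - 0.3)

w_max = 2.0                          # assumed upper bound on the total weight
events = tries = 0
w_sum = 0.0
while events < 10000:
    tries += 1
    cfg = None                       # stands for a randomly picked 2-jet configuration
    w = sigma_born(cfg) + sigma_virt_plus_3a(cfg) + diff_3b(cfg, sample_3body(cfg))
    w_sum += w                       # the average weight estimates the cross section
    if random.random() < w / w_max:
        events += 1                  # one accepted unit-weight 2-jet event

print(f"average weight = {w_sum / tries:.3f}, acceptance = {events / tries:.2f}")
```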
## 3 The PS interface

A parton shower is organized in terms of some evolution variable, such that emissions are ordered to give e.g. a decreasing angle, transverse momentum or mass . Often the upper limit of the evolution is set to cover the full phase space of emissions, but some other maximum can also be indicated. For instance, if the ME regularization has been defined in terms of some minimum invariant mass (or angular) scale between resolved partons, a natural complement is a shower evolution in mass (or angle) from this scale downwards. However, such a match would never be perfect, so one would always need the capability to reject unwanted branchings in a shower. Fortunately, an acceptance/rejection step is part of shower algorithms anyway, so one only needs to add further rejection criteria matching the NLO ME cuts. The method would therefore be as follows:

* In two-“jet” events, the evolution is started from some conveniently large scale, chosen so that no allowed phase-space regions are excluded. When a potential new emission has been selected by the shower algorithm, the resulting new parton configuration is tested by the “jet” clustering algorithm. Any emission that gives a three-“jet” classification is rejected and the evolution is continued downwards. This scheme should be applied both to the initial- and the final-state showers. Ambiguities could arise, e.g. if one emission from the initial and one from the final state happen to overlap, so that they together define a third “jet”, although individually they do not. Occurrences of this kind are formally of higher order, so one is free to pick any sensible strategy. One extreme would be only to apply cuts to each shower separately, another to insert a final clustering test that makes use of all partons to accept/reject the full shower treatment.
* In three-“jet” events, in principle all further emission is allowed, also such that leads to four or more “jets”. However, if one were to allow emissions at scales harder than the ones in the basic $`2\to 3`$ graph itself, there would be a manifest risk of doublecounting in the jet cross section. Again, this can be avoided by applying a veto to such emissions. In Ref. an explicit algorithm is presented that sets up a final-state shower from a given parton configuration in a sensible way that avoids (or at least minimizes) doublecounting. A similar approach should be possible for the initial-state showers.

## 4 Outlook

If we want to improve the precision of NLO QCD tests in the future, new strategies need to be developed. This note is one such proposal. The main point is an alternative phase-space sampling/integration strategy for NLO ME’s. It would lead to events with positive definite weights, which could therefore be better interfaced to parton showers and hadronization models, and thus more realistically compared with data. Clearly many details need to be settled to make this a working proposition. While it should not be necessary to recalculate any of the NLO corrections to a process, the code for the evaluation of cross sections needs to be restructured compared with current practice. Especially, the phase-space generation machinery must be rewritten significantly. By comparison, the modifications required in existing parton-shower algorithms appear more straightforward.

Acknowledgment: We thank M. Klasen and B. Pötter for helpful conversations.
no-problem/9906/physics9906002.html
ar5iv
text
# Proton structure effects in muonic hydrogen

## Acknowledgments

I wish to thank M. Krawczyk and E. Rondio for information on proton structure functions, and F. Kottmann for pointing out a mistake in the sum of all contributions to the $`\mu `$H Lamb shift in . This work was supported by the Polish Committee for Scientific Research under contract No. 2 P03B 024 11.
no-problem/9906/cond-mat9906094.html
ar5iv
text
# Preasymptotic multiscaling in the phase-ordering dynamics of the kinetic Ising model

## Abstract

The evolution of the structure factor is studied during the phase-ordering dynamics of the kinetic Ising model with conserved order parameter. A preasymptotic multiscaling regime is found, as in the solution of the Cahn-Hilliard-Cook equation, revealing that the late stage of phase-ordering is always approached through a crossover from multiscaling to standard scaling, independently of the nature of the microscopic dynamics.

Recently we have made an extensive study of the overall time evolution in the phase-ordering process with scalar conserved order parameter in the framework of the Cahn-Hilliard-Cook (CHC) equation. Through the careful study of the global evolution of the structure factor, from the very beginning of the quench down to the fully developed late stage, we have identified a pattern whereby the very early behavior is followed by an intermediate nonlinear mean-field regime before the asymptotic fully nonlinear regime is attained. Furthermore, the time scales of these regimes have been found to be strongly dependent on (i) the lengthscale considered and (ii) the parameters of the quench, such as the amplitude $`\mathrm{\Delta }`$ of the initial fluctuations and the final temperature $`T_F`$. The concurrence of all these elements gives rise to a rich and interesting variety of behaviors in the preasymptotic phenomenology, which can be accounted for in a simple manner on the basis of a model introduced by Bray and Humayun (BH model).

In this paper we make an analogous study of the structure factor in the framework of the kinetic Ising model evolving with Kawasaki dynamics. The main point is that we observe also in this case the existence of an intermediate regime characterized by the mean-field multiscaling behavior, pointing to the generic nature of the above described structure in the overall time evolution.

There is abundant experimental and numerical evidence for the universality of the late stage scaling in the phase-ordering process, yielding the following form for the structure factor

$$C(\stackrel{}{k},t)\sim L^d(t)F(kL),$$ (1)

with

$$L(t)\sim t^{1/z}.$$ (2)

In particular, the dynamic exponent $`z`$ and the scaling function $`F(x)`$ are independent of the initial condition, of the final temperature and of the nature of the dynamics, be it Langevin for continuous spins or Kawasaki for discrete spins. In a way this is easy to understand. The behavior (1) applies in the large time regime, when ordered domains have formed and the process is dominated by the motion of interfaces, obeying an effective equation of motion which presumably takes the same form independently of the nature of the microscopic dynamics. Now, with the results presented in this paper, we are able to extend the independence from the nature of the microscopic dynamics also to the preasymptotic regime where mean-field behavior is observed. In order to explain this let us summarize the results of Ref. .
The CHC equation of motion for an $`N`$-component order parameter is given by

$$\frac{\partial \stackrel{}{\varphi }(\stackrel{}{x},t)}{\partial t}=\nabla ^2\left[\frac{\partial V(\stackrel{}{\varphi })}{\partial \stackrel{}{\varphi }}-\nabla ^2\stackrel{}{\varphi }\right]+\stackrel{}{\eta }(\stackrel{}{x},t),$$ (3)

where $`V(\stackrel{}{\varphi })=(r/2)\stackrel{}{\varphi }^2+(g/4N)(\stackrel{}{\varphi }^2)^2`$, with $`r<0`$ and $`g>0`$, while $`\stackrel{}{\eta }`$ is a gaussian white noise, with expectations

$$\{\begin{array}{ccc}<\stackrel{}{\eta }(\stackrel{}{x},t)>& =& 0\hfill \\ <\eta _\alpha (\stackrel{}{x},t)\eta _\beta (\stackrel{}{x}^{\prime },t^{\prime })>& =& -2T_F\delta _{\alpha \beta }\nabla ^2\delta (\stackrel{}{x}-\stackrel{}{x}^{\prime })\delta (t-t^{\prime }).\hfill \end{array}$$ (4)

For future reference let us denote by $`M_0^2=-r/g`$ the square of the order parameter at the bottom of the local potential, which we will take $`O(1)`$. Starting from this equation, through a combination of the $`1/N`$-expansion and the gaussian auxiliary field approximation of Mazenko, Bray and Humayun have derived a nonlinear closed equation for the structure factor

$$\frac{\partial C(\stackrel{}{k},t)}{\partial t}=-2k^2[k^2+R(t)]C(\stackrel{}{k},t)-2\frac{k^2}{N}R(t)D(\stackrel{}{k},t)+2k^2T_F,$$ (5)

where

$$R(t)=r+g\int \frac{d^dk}{(2\pi )^d}C(\stackrel{}{k},t)$$ (6)

and

$$D(\stackrel{}{k},t)=\int \frac{d^dk_1}{(2\pi )^d}\int \frac{d^dk_2}{(2\pi )^d}C(\stackrel{}{k}-\stackrel{}{k}_1,t)C(\stackrel{}{k}_1-\stackrel{}{k}_2,t)C(\stackrel{}{k}_2,t).$$ (7)

Integrating formally with the initial condition

$$C(\stackrel{}{k},t=0)=\mathrm{\Delta }$$ (8)

one can write the structure factor as the sum of three pieces

$$C(\stackrel{}{k},t)=\mathrm{\Delta }C_0(\stackrel{}{k},t)+T_FC_{T_F}(\stackrel{}{k},t)+\frac{1}{N}C_{nl}(\stackrel{}{k},t)$$ (9)

where

$$C_0(\stackrel{}{k},t)=\mathrm{exp}\left\{-2k^2\int _0^t𝑑t^{\prime }[k^2+R(t^{\prime })]\right\}$$ (10)

$$C_{T_F}(\stackrel{}{k},t)=2k^2\int _0^t𝑑t^{\prime }\frac{C_0(\stackrel{}{k},t)}{C_0(\stackrel{}{k},t^{\prime })}$$ (11)

$$C_{nl}(\stackrel{}{k},t)=-2k^2\int _0^t𝑑t^{\prime }\frac{C_0(\stackrel{}{k},t)}{C_0(\stackrel{}{k},t^{\prime })}R(t^{\prime })D(\stackrel{}{k},t^{\prime })$$ (12)

which are coupled together through the definitions of $`R`$ and $`D`$. In general this is a very complicated nonlinear integral equation which must be handled numerically. However, as we have shown in Ref. , progress in the understanding of the solution can be made by considering the scaling properties of the three terms separately. Then, the first two terms, which are what is left after taking the large-$`N`$ limit, obey multiscaling. Namely one has

$$C_{0,T_F}(\stackrel{}{k},t)\sim \mathcal{L}_1^{\alpha _{0,T_F}(x)}F_{0,T_F}(x)$$ (13)

with $`\mathcal{L}_1=L(t)[k_m(t)L(t)]^{2/d-1}`$, $`L(t)=t^{1/4}`$, $`x=k\mathcal{L}_2`$ and $`\mathcal{L}_2=k_m^{-1}(t)`$ is the inverse of the peak wave vector. The two lengths $`\mathcal{L}_1`$ and $`\mathcal{L}_2`$ are related by

$$\frac{\mathcal{L}_1}{\mathcal{L}_2}=(d\mathrm{ln}L)^{\frac{1}{2d}}.$$ (14)

The spectra of exponents $`\alpha _{0,T_F}(x)`$ are given by

$$\alpha _0(x)=d\psi (x)$$ (15)

with

$$\psi (x)=1-(1-x^2)^2$$ (16)

and

$$\alpha _{T_F}(x)=\{\begin{array}{ccc}2+(d-2)\psi (x)\hfill & & x<x^{*}\hfill \\ 2\hfill & & x>x^{*}\hfill \end{array}$$ (17)

where $`x^{*}=\sqrt{2}`$. Finally $`F_0(x)=1`$ while $`F_{T_F}(x)=T_F/x^2`$.
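The two spectra are easy to tabulate. The short sketch below evaluates Eqs. (15)–(17) in $`d=2`$: it shows that $`\alpha _0(x)`$ equals $`d`$ only at the peak $`x=1`$, vanishes at $`x^{*}=\sqrt{2}`$ and turns negative beyond it, while $`\alpha _{T_F}(x)`$ reduces to the constant 2 when $`d=2`$.

```python
# Tabulation of the multiscaling spectra, Eqs. (15)-(17), for d = 2.
import numpy as np

d = 2
x = np.array([0.5, 1.0, np.sqrt(2), 1.8, 2.5])
psi = 1 - (1 - x**2)**2                                    # Eq. (16)
alpha_0 = d * psi                                          # Eq. (15)
alpha_TF = np.where(x < np.sqrt(2), 2 + (d - 2) * psi, 2)  # Eq. (17)

for xi, a0, aT in zip(x, alpha_0, alpha_TF):
    print(f"x = {xi:.2f}: alpha_0 = {a0:+.2f}, alpha_TF = {aT:.2f}")
```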
The last term on the right hand side of (9) has been shown by Bray and Humayun to obey standard scaling

$$C_{nl}(\stackrel{}{k},t)\sim L^d(t)F_{nl}(x).$$ (18)

Summarizing, the behavior of the full solution of (5) is expected to be the outcome of the competition of three terms of the type

$$C(\stackrel{}{k},t)\sim \mathrm{\Delta }L^{\alpha _0(x)}+T_FL^{\alpha _{T_F}(x)}+(1/N)L^d$$ (19)

where, for simplicity, we have omitted logarithmic factors, taking $`\mathcal{L}_1=L`$. As is evident from the plot of the spectra of exponents in Fig. 1, since $`\alpha _{0,T_F}(x)\le d`$, for any finite $`N`$ the last term on the right hand side of (19) eventually dominates on all lengthscales, leading to the asymptotic standard scaling behavior of the structure factor. However, due to the $`x`$ dependence of the exponents, the competition takes place differently on different lengthscales. For the full analysis of the interplay of the three terms we refer to Ref. . In the present case, with $`d=2`$, the second and third term in Eq. (19) scale in the same way and the interplay is only between the first term and the other two together. Notice that the difference $`\delta \alpha (x)=d-\alpha _0(x)`$ vanishes for $`x=1`$, namely at the peak of the structure factor, while it is positive for all $`x\ne 1`$ and in particular becomes very large for $`x>x^{*}`$, where $`\alpha _0(x)`$ is negative. Therefore, even if the last two terms are bound to dominate, by modulating the choice of $`\mathrm{\Delta }`$ the relative weight of the first term may be adjusted so that multiscaling can be observed over a sizable preasymptotic time interval, with a spread in $`x`$ about the peak which depends on the actual value of $`\mathrm{\Delta }`$. With a continuous order parameter the value of $`\mathrm{\Delta }`$ can be varied at will, and in Ref. it was shown that when $`\mathrm{\Delta }`$ is very small ($`\mathrm{\Delta }\ll M_0^2`$) there exists an observable mean-field preasymptotic behavior practically only at the peak, while for large values of $`\mathrm{\Delta }`$ ($`\mathrm{\Delta }\sim M_0^2`$) preasymptotic mean-field multiscaling is clearly observed over the range $`x\lesssim x^{*}`$. The interest of this result is that it is not just a property of the BH model: the same pattern of behavior is observed in the simulation of the full CHC equation with $`N=1`$.

We now show that a very similar structure in the overall time evolution appears also in the dynamics of the Ising model. The analysis of the crossover between multiscaling and standard scaling is carried out by computing the effective spectrum of exponents $`\alpha (x,t)`$ that one obtains when fitting the structure factor to the form

$$C(\stackrel{}{k},t)=\mathcal{L}_1^{\alpha (x,t)}F(x),$$ (20)

with $`x=k\mathcal{L}_2`$ and $`\mathcal{L}_2(t)=k_m^{-1}(t)`$. A delicate point here is the choice of $`\mathcal{L}_1`$. Adopting straightforwardly the definition of the large-$`N`$ model given after Eq. (13) amounts to putting in by hand the growth law $`L(t)=t^{1/4}`$. On the other hand, in the large-$`N`$ model $`\mathcal{L}_1`$ is also given by $`[C_0(k_m,t)]^{1/d}`$. Therefore we take as an unbiased choice $`\mathcal{L}_1=[C(k_m,t)]^{1/d}`$, dropping an inessential constant factor $`F(1)`$. In practice, $`\alpha (x,t)`$ is obtained as the slope of the plot of $`\mathrm{ln}C(x/\mathcal{L}_2(t),t)`$ versus $`\mathrm{ln}\mathcal{L}_1(t)`$ for fixed $`x`$. When the time dependence of $`\alpha (x,t)`$ disappears there is scaling, which is of the standard type if $`\alpha (x)`$ is independent of $`x`$ and of the multiscaling type if $`\alpha (x)`$ actually displays a dependence on $`x`$.
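The extraction can be validated on synthetic data before being applied to the simulation. In the sketch below the structure factor is generated with a pure multiscaling form, $`C=\mathcal{L}_1^{\alpha _0(x)}`$, and $`\mathcal{L}_1`$ is simply taken as $`t^{1/4}`$ for this check (in the actual analysis it is $`[C(k_m,t)]^{1/d}`$ and $`C`$ comes from the Kawasaki runs); the fitted slopes then return the input spectrum.

```python
# Effective-exponent extraction of Eq. (20), validated on synthetic input.
import numpy as np

d = 2
alpha_0 = lambda x: d * (1 - (1 - x**2)**2)

times = np.array([10.0, 20.0, 40.0, 80.0])
L1 = times**0.25                       # synthetic choice for this check
x_grid = np.array([0.5, 1.0, 1.5])

C = np.array([[L**alpha_0(x) for x in x_grid] for L in L1])
for j, x in enumerate(x_grid):
    slope = np.polyfit(np.log(L1), np.log(C[:, j]), 1)[0]
    print(f"x = {x:.1f}: alpha(x,t) = {slope:+.2f} (input {alpha_0(x):+.2f})")
```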
We have considered an Ising system on a two dimensional lattice of size 512$`\times `$512 with periodic boundary conditions, initially prepared in an infinite-temperature configuration. We have then let the system evolve with Kawasaki spin-exchange dynamics after a sudden quench to a fixed temperature $`T_F`$ below the critical temperature $`T_c`$. We have computed the structure factor by averaging over $`10^2`$ realizations of the time histories. We have divided the entire duration of the simulation ($`10^5`$ Monte Carlo steps/spin) into subsequent and nonoverlapping time intervals, corresponding to temporal decades, and computed the effective exponent in the different time intervals.
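A heavily scaled-down sketch of this measurement chain is given below: Kawasaki (spin-exchange) Metropolis dynamics from an infinite-temperature start, followed by the structure factor from an FFT and a first-moment estimate of the growing length. The lattice size, duration and single realization are far below the production values above, so the output only illustrates the coarsening trend.

```python
# Miniature Kawasaki quench and structure-factor measurement.
import numpy as np

rng = np.random.default_rng(0)
Lside, T = 48, 0.5 * 2.269              # T = 0.5 T_c, in units of J/k_B
s = rng.choice(np.array([-1, 1]), size=(Lside, Lside))   # infinite-T start

def sweep(s):
    for _ in range(Lside * Lside):
        i, j = rng.integers(0, Lside, size=2)
        d = rng.integers(4)             # random neighbor for the exchange
        i2 = (i + (d == 0) - (d == 1)) % Lside
        j2 = (j + (d == 2) - (d == 3)) % Lside
        if s[i, j] == s[i2, j2]:
            continue
        nb1 = s[(i+1) % Lside, j] + s[(i-1) % Lside, j] + s[i, (j+1) % Lside] + s[i, (j-1) % Lside]
        nb2 = s[(i2+1) % Lside, j2] + s[(i2-1) % Lside, j2] + s[i2, (j2+1) % Lside] + s[i2, (j2-1) % Lside]
        dE = 2 * s[i, j] * (nb1 - nb2) + 4   # +4: the pair's own bond is in nb1, nb2
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j], s[i2, j2] = s[i2, j2], s[i, j]

def length_scale(s):
    Ck = np.abs(np.fft.fft2(s))**2 / s.size      # structure factor C(k)
    k = 2 * np.pi * np.fft.fftfreq(Lside)
    K = np.hypot(*np.meshgrid(k, k, indexing="ij"))
    m = K > 0                                    # drop the conserved k = 0 mode
    return 2 * np.pi * Ck[m].sum() / (K[m] * Ck[m]).sum()

t = 0
for checkpoint in (20, 80):
    while t < checkpoint:
        sweep(s)
        t += 1
    print(f"t = {t} sweeps: L(t) ~ {length_scale(s):.2f} lattice units")
```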
In Fig. 2 we have plotted the effective exponent as a function of $`x`$ in the five decades, for a quench to $`T_F=0.5T_c`$. The curves follow the pattern of the results reported in Ref. for the CHC equation when $`\mathrm{\Delta }`$ is large, as indeed is the case for the Ising system, where $`\mathrm{\Delta }=1`$. The gross feature is that there are two markedly distinct behaviors for $`x<x^{*}`$ and for $`x>x^{*}`$. To the right of $`x^{*}`$ the curves display a clear time dependence, eventually reaching the standard scaling behavior $`\alpha (x)\approx 2`$ in the latest time interval. To the left of $`x^{*}`$, instead, the curves are bunched together, displaying multiscaling with a spectrum of exponents which follows closely $`\alpha _0(x)`$. This pattern fits quite well with the results of the previous analysis of the competition between the first and the other two terms on the right hand side of Eq. (19). The multiscaling behavior for $`x<x^{*}`$ shows that the relative size of the prefactors is such that, for these values of $`x`$, there exists a long time interval during which the structure factor is dominated by $`\mathrm{\Delta }L^{\alpha _0(x)}`$. This is the preasymptotic mean-field scaling which precedes the crossover towards the eventual standard scaling. The duration of our simulation allows us to detect just the onset of the crossover; it is not long enough to see the definitive establishment of standard scaling. For $`x>x^{*}`$, instead, $`\delta \alpha (x)=d-\alpha _0(x)`$ is much too big for observing any multiscaling, and the time dependence of the effective exponent, jointly with the flat behavior as a function of $`x`$, shows that the crossover towards standard scaling is fully under way.

The same analysis has been performed for a quench to $`T_F=0.9T_c`$, obtaining results completely analogous to those of Fig. 2. Moreover, we have repeated the analysis for a one dimensional Ising model quenched from infinite temperature to $`T_F=0.5J/k_B`$. For such a value of $`T_F`$ the equilibrium correlation length is $`\xi (T_F)\approx 27.3`$ and the system orders for times such that the typical domain length is smaller than $`\xi (T_F)`$. At the longest time in our simulation the typical domain size, measured as the first zero of the correlation function, was $`R_g=9.33`$. The behavior of the effective exponent $`\alpha (x,t)`$ in Fig. 3 is very similar to the one in Fig. 2, displaying a neater approach to standard scaling for $`x>1`$.

In summary, the study of the overall time evolution in the kinetic Ising model evolving with Kawasaki dynamics displays a clear crossover from multiscaling to standard scaling, as observed in the simulation of the CHC equation with large initial fluctuations.

The first remark is that preasymptotic multiscaling is independent of the nature of the microscopic dynamics, as is asymptotic standard scaling. The second remark is that multiscaling is the characteristic signature of mean-field behavior. Therefore the results presented in this paper, together with those for the CHC equation in Ref. , show that a genuine feature of spinodal decomposition is that, before reaching asymptotics, the nonlinearity is governed by the mean-field mechanism on the large length scales. The reason for this phenomenon poses an interesting question for the theory of phase-ordering kinetics. It would also be interesting to look for systems with sufficiently large initial fluctuations in order to observe multiscaling experimentally.

We thank the Istituto di Cibernetica of CNR (Arco Felice, Napoli) for a generous grant of computer time. This work has been partially supported by the European TMR Network-Fractals c.n. FMRXCT980183 and by MURST through PRIN-97.
no-problem/9906/hep-ph9906349.html
ar5iv
text
Figure 1: Diagram of the generic process that we consider: a hadronic collision that leads to a pair of particles being produced, which each decay into one particle that is observed, with momenta $`p_1`$ and $`p_2`$ respectively, and one particle (shown as wavy lines) that is not directly detected, and whose presence can only be inferred from the missing transverse momentum, $`\not{𝐩}_T`$.

## Acknowledgements

We would like to thank Andy Parker for helpful conversations. CGL wishes to thank his funding body, the Particle Physics and Astronomy Research Council (PPARC), for financial support.
no-problem/9906/gr-qc9906003.html
ar5iv
text
# Measuring the Foaminess of Space-Time with Gravity-Wave Interferometers

## I Introduction

Quantum mechanics and general relativity, the two pillars of modern physics, are very useful in describing the phenomena in their respective domains of physics. Unfortunately, their synthesis has been considerably less successful. It is better known for producing a plethora of puzzles, from the embarrassing cosmological constant problem to the enigma of possible information loss associated with black hole evaporation. String theory is a reaction to this crisis. Nowadays, it is the main contender to be the microscopic theory of quantum gravity. But even without the correct theory of quantum gravity (be it string theory or something else), we know enough about quantum mechanics and gravity to study its low-energy limit. In particular, we would like to know what that limit of quantum gravity can tell us about the structure of space-time. In this article, we will combine the general principles of quantum mechanics with those of general relativity to address the problem of quantum measurements of space-time distances. But first, let us recall what quantum mechanics and general relativity have to say about the nature of space-time distance measurements.

In quantum mechanics, we specify a space-time point simply by its coordinates; hardly do we feel the need to give a prescription spelling out how the coordinates are to be measured. This lax attitude will not do with general relativity. According to general relativity, coordinates do not have any meaning independent of observations; in fact, a coordinate system is defined only by explicitly carrying out space-time distance measurements. In the following (the discussion is based on our earlier work ) we will abide by this rule of general relativity, and will follow Wigner in using clocks and light signals to measure space-time distances. In Section II, we will analyze a gedanken experiment designed to measure the distance between two spatially separated points, and will show that quantum mechanics and general relativity together imply that there is a limit on the accuracy with which we can measure that distance. That uncertainty in space-time measurements is interpreted as inducing an uncertainty in the space-time metrics; in other words, space-time undergoes quantum fluctuations. Some consequences of space-time fluctuations are listed in Section III. Section IV is devoted to showing how gravity-wave interferometers can be used to test this phenomenon of space-time fluctuations. We offer our conclusions in Section V.

## II From Space-time Measurements to Space-time Foams

Suppose we want to measure the distance between two separated points A and B. To do this, we put a clock (which also serves as a light-emitter and receiver) at A and a mirror at B. A light signal is sent from A to B, where it is reflected to return to A. If the clock reads zero when the light signal is emitted and reads $`t`$ when the signal returns to A, then the distance between A and B is given by $`l=ct/2`$, where $`c`$ stands for the speed of light. The next question is: What is the uncertainty (or error) in the distance measurement? Since the clock at A and the mirror at B are the agents in measuring the distance, the uncertainty in the distance $`l`$ is given by the uncertainties in their positions. We will concentrate on the clock, expecting that the mirror contributes a comparable amount to the uncertainty in the measurement of $`l`$.
Let us first recall that the clock is not stationary; its spread in speed at time zero is given by the Heisenberg uncertainty principle as

$$\delta v=\frac{\delta p}{m}\gtrsim \frac{\hbar }{2m\delta l},$$ (1)

where $`m`$ is the mass of the clock. This implies an uncertainty in the distance at time $`t`$,

$$\delta l(t)=t\delta v\gtrsim \left(\frac{\hbar }{m\delta l(0)}\right)\left(\frac{l}{c}\right),$$ (2)

where we have used $`t/2=l/c`$ (and we have dropped an additive term $`\delta l(0)`$ from the right hand side, since its presence complicates the algebra but does not change any of the results). Minimizing $`(\delta l(0)+\delta l(t))/2`$ we get

$$\delta l^2\gtrsim \frac{\hbar l}{mc}.$$ (3)

At first sight, it appears that we can make $`\delta l`$, the uncertainty in the position of the clock, arbitrarily small by using a clock with a large enough (inertial) mass. But that is wrong, as the (gravitational) mass of the clock would disturb the curvature. It is here that the principle of equivalence in general relativity comes into play: one cannot have a large inertial mass and a small gravitational mass, since they are equal. We can now exploit this equality of the two masses to eliminate the dependence on $`m`$ in the above inequality, making the uncertainty expression useful. Let the clock at A be a light-clock consisting of two parallel mirrors (each of mass $`m/2`$), a distance $`d`$ apart, between which bounces a beam of light. On the one hand, the clock must tick off time fast enough that $`d/c<\delta l/c`$, in order that the distance uncertainty is not greater than $`\delta l`$. On the other hand, $`d`$ is necessarily larger than the Schwarzschild radius $`Gm/c^2`$ of the mirrors ($`G`$ is Newton’s constant), so that the time registered by the clock can be read off at all. From these two requirements it follows that

$$\delta l>d>\frac{Gm}{c^2},$$ (4)

the product of which and Eq. (3) yields

$$\delta l\gtrsim (ll_P^2)^{1/3},$$ (5)

where $`l_P=(\frac{\hbar G}{c^3})^{1/2}`$ is the Planck length ($`10^{-33}`$ cm). In a similar way, we can deduce the uncertainty in time interval ($`t`$) measurements,

$$\delta t\gtrsim (tt_P^2)^{1/3},$$ (6)

where $`t_P=l_P/c`$ is the Planck time ($`10^{-42}`$ sec).

The intrinsic uncertainty in space-time measurements just described can be interpreted as inducing an intrinsic uncertainty in the space-time metric $`g_{\mu \nu }`$. Noting that $`\delta l^2=l^2\delta g`$ and using Eq. (5) we get

$$\delta g_{\mu \nu }\gtrsim (l_P/l)^{2/3}\sim (t_P/t)^{2/3}.$$ (7)

The fact that there is an uncertainty in the space-time metric means that space-time is foamy. The origin of the uncertainty is quantum mechanical. Therefore we can say that space-time undergoes quantum fluctuations, and this is an intrinsic property of space-time. The amount of fluctuation on a length scale $`l`$ or time scale $`t`$ is given by Eq. (7).

The uncertainty expressed in Eq. (3) is due to quantum effects, and it depends on $`m`$, the mass of the clock. In the above, we have used Eq. (4) to put a bound on $`m`$, eventually arriving at Eq. (5). Perhaps we should point out that, besides Eq. (5), there are (at least) two other expressions for the uncertainty in space-time measurements that have appeared in the literature, predicting different degrees of foaminess in the structure of space-time.
Instead of repeating the derivations used by the other workers, we find it instructive to “derive” them by adopting an argument similar to the one we have used above. We start with Eq. (3). For the bound on $`m`$, if one uses (instead of Eq. (4))

$$l\gtrsim \frac{Gm}{c^2},$$ (8)

then one finds

$$\delta l\gtrsim l_P,$$ (9)

the canonical uncertainty in distance measurements widely quoted in the literature. Eq. (8) gives a considerably more conservative bound on $`m`$, and the inequality is trivially satisfied because, otherwise, point B would be inside the Schwarzschild radius of the clock at A, an obviously nonsensical situation. So we do not expect the resulting inequality (given by Eq. (9)) to be very restrictive (or, for that matter, to be very useful, in our opinion). On the other hand, if, instead of Eq. (4), one uses

$$m_P\gtrsim m,$$ (10)

where $`m_P\equiv \hbar /cl_P`$ denotes the Planck mass ($`10^{-5}`$ gm), then combining it with Eq. (3) one gets

$$\delta l\gtrsim (ll_P)^{1/2},$$ (11)

a result for the uncertainty in space-time measurements found in Ref. . Since $`l\gg l_P`$ (which we have implicitly assumed), the distance uncertainty given by Eq. (11) is considerably bigger than the one proposed by us (Eq. (5)). But regardless of which of the three pictures of space-time foam we have in mind, they all predict a very small distance uncertainty: e.g., even on the size of the whole observable universe ($`10^{10}`$ light-years), Eq. (9), Eq. (5), and Eq. (11) yield a fluctuation of only about $`10^{-35}`$ m, $`10^{-15}`$ m and $`10^{-4}`$ m respectively. We leave it to the readers to decide for themselves which of the three pictures of space-time foam is the most reasonable.
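These figures follow directly from the three formulas; with all order-one prefactors set to unity, a few lines suffice to reproduce them:

```python
# The three uncertainty formulas over the size of the observable universe.
l_P = 1.6e-35          # m, Planck length
l = 1.0e26             # m, ~ 1e10 light-years

print(f"Eq. (9):  dl ~ l_P             = {l_P:.0e} m")
print(f"Eq. (5):  dl ~ (l l_P^2)^(1/3) = {(l * l_P**2)**(1/3):.0e} m")
print(f"Eq. (11): dl ~ (l l_P)^(1/2)   = {(l * l_P)**0.5:.0e} m")
```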
## III Other Properties of Space-time Foam

Let us return to the picture of space-time foam proposed by us, expressed in Eq. (5), Eq. (6), and Eq. (7). The metric fluctuations give rise to some rather interesting properties besides the uncertainties in space-time measurements. Here is a partial list:

(i) There is a corresponding uncertainty in energy-momentum measurements for elementary particles, given by

$$\delta p\gtrsim p\left(\frac{p}{m_Pc}\right)^{2/3},\delta E\gtrsim E\left(\frac{E}{m_Pc^2}\right)^{2/3}.$$ (12)

We should keep in mind that energy-momentum is conserved only up to this uncertainty.

(ii) Space-time fluctuations lead to decoherence phenomena. The point is that the metric fluctuation $`\delta g`$ induces a multiplicative phase factor in the wave-function of a particle (of mass $`m`$),

$$\psi \to e^{i\delta \varphi }\psi ,$$ (13)

given by

$$\delta \varphi =\frac{1}{\hbar }mc^2\int \delta g^{00}𝑑t.$$ (14)

One consequence of this additional phase is that a point particle with mass $`m>m_P`$ is a classical particle (i.e., it suffices to treat it classically). This fuels the speculation that the high energy limit of quantum gravity is actually classical. But in connection with this speculation, a cautionary remark is in order: by extrapolating the mass scale beyond the Planck mass, one runs the risk of going beyond the domain of validity of this work, viz. the low-energy limit of quantum gravity.

(iii) The energy density $`\rho `$ associated with the metric fluctuations (Eq. (7)) is actually very small. Regarding the metric fluctuation as a gravitational wave quantized in a spatial box of volume $`V`$, we find

$$\rho \sim m_Pc^2/V.$$ (15)

However, if one uses the ”root mean square” approach proposed in the first paper in Ref. , one gets an unacceptably large energy density of $`m_Pc^2/l_P^3`$.

(iv) Due to space-time fluctuations, the gravitational fields of individual particles with mass $`m\ll m_P`$ that make up ordinary matter are not observable. From this point of view, the gravitational field is a statistical phenomenon of bulk matter.

(v) There is a simple connection between spacetime quantum fluctuations as given by Eq. (5) and the holographic principle. The holographic principle asserts that the number of degrees of freedom of a region of space is bounded by the area of the region in Planck units. To see the connection, consider a region of space with linear dimension $`l`$. According to the conventional wisdom, the region can be partitioned into cubes as small as $`l_P^3`$. It follows that the number of degrees of freedom of the region is bounded by $`(l/l_P)^3`$, i.e., the volume of the region in Planck units. But according to our spacetime foam picture (Eq. (5)), the smallest cubes inside that region have a linear dimension of order $`(ll_P^2)^{1/3}`$. Accordingly, the number of degrees of freedom of the region is bounded by $`[l/(ll_P^2)^{1/3}]^3`$, i.e., the area of the region in Planck units, as stipulated by the holographic principle. Thus one may say that the holographic principle has its origin in the quantum fluctuations of spacetime. It has not escaped our attention that the effective dimensional reduction of the number of degrees of freedom may have a dramatic effect on the ultraviolet behaviour of a quantum field theory.

(vi) Fluctuations in space-time imply that metrics can be defined only as averages over local regions and cannot have meaning locally. This gives rise to some sort of non-locality. It has also been observed that the space-time measurements described above alter the space-time metric in a fundamental manner and that this unavoidable change in the metric destroys the commutativity (and hence locality) of position measurement operators. The gravitationally induced non-locality, in turn, suggests a modification of the fundamental commutators. Furthermore, we would not be surprised if this feature of non-locality is in some way related to the holographic principle.

## IV Probing the Structure of Space-time with Gravity-wave Interferometers

As noted above, the fluctuations that space-time undergoes are extremely small. Indeed, it is generally believed that no currently available technologies are powerful enough to probe into the space-time foam. But it has been shown recently by G. Amelino-Camelia that modern gravity-wave interferometers are already sensitive enough, or will soon be sensitive enough, to test two of the three pictures of space-time foam described in Section II.

First let us briefly recall the physics of modern gravity-wave interferometers. They consist of a laser light source, a beam splitter, and two mirrors placed at the ends of two (very long) arms arranged in an L-shaped pattern. The light beam is split by the beam splitter into a transmitted beam and a reflected beam. The transmitted beam is directed toward one of the mirrors, and the reflected beam is directed toward the other mirror. The two beams of light are reflected by the mirrors back to the beam splitter, where they are superposed.
The resulting interference pattern is very sensitive to changes in the distances between the beam splitter and the mirrors at the ends of each arm. Modern gravity-wave interferometers are sensitive to changes in distance to an accuracy of the order of $`10^{-18}`$ m and better. To reach such a sensitivity, one has to contend with all sorts of noises such as seismic noise, suspension thermal noise, and photon shot noise. Our claim is that even after one has subtracted away all these known noises, there is still a noise arising from space-time fluctuations. At first sight, it appears that the task of measuring space-time fluctuations is well beyond our reach; after all, even the extraordinary sensitivity down to an accuracy of order $`10^{-18}`$ m is nowhere near the Planck scale of $`10^{-35}`$ m. But the displacement sensitivity of an interferometer actually depends on frequencies $`f`$ (more on this below). Besides the $`10^{-18}`$ m length scale mentioned above, the physics of interferometers involves another length scale $`c/f`$ provided by $`f`$. Interestingly, as shown in Ref. , within a certain range of frequencies, the experimental limits are comparable to the theoretical predictions for two of the space-time foam pictures described above. The idea of using gravity-wave interferometers to probe the structure of space-time is actually fairly simple. Let us concentrate on the picture of space-time foam described by Eq. (7) and Eq. (5) or Eq. (6). Due to the foaminess of space-time, in any distance measurement that involves an amount of time $`t`$, there is a minute uncertainty $`\delta l\sim (ctl_{QG}^2)^{1/3}`$, where, for later use, we have introduced $`l_{QG}`$, which we expect to be of order $`l_P`$. (It is understood that the time of observation $`t`$ is much smaller than the time interval over which the space-time region where the observation is done experiences significant curvature effects.) But measuring minute changes in the relative distances (of the test masses or the mirrors) is exactly what an interferometer is designed to do. Hence, the intrinsic uncertainty in a distance measurement for a time $`t`$ manifests itself as a displacement noise (in addition to other sources of noises) that infests the interferometers
$$\sigma \sim (ctl_{QG}^2)^{1/3}.$$ (16)
In other words, quantum space-time effects provide another source of noise in the interferometers, and that noise is given by Eq. (16). It is customary to write the displacement noise in terms of the associated displacement amplitude spectral density $`S(f)`$ of frequency $`f`$. For a frequency-band limited from below by the time of observation $`t`$, $`\sigma `$ is given in terms of $`S(f)`$ by
$$\sigma ^2=\int _{1/t}^{f_{max}}[S(f)]^2\,df.$$ (17)
Now we can easily check that, for the displacement noise given by Eq. (16) corresponding to our picture of space-time foam, the associated $`S(f)`$ is
$$S(f)\sim f^{-5/6}(cl_{QG}^2)^{1/3}.$$ (18)
In passing, we should mention that since we are considering a time scale much larger than the Planck time, we expect this formula for $`S(f)`$ to hold only for frequencies much smaller than the Planck frequency ($`c/l_P`$). For consistency, this implies that if the $`S(f)`$ given by Eq. (18) is used in the integral in Eq. (17), the integral should be relatively insensitive to $`f_{max}`$. That is indeed the case, as the small frequency region dominates the integral for $`\sigma `$. Needless to say, to know the high frequency behavior of $`S(f)`$, one would need the correct theory of quantum gravity.
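As a consistency check (our own, filling in the step), substitute Eq. (18) into Eq. (17) and let $`f_{max}\to \infty `$:
$$\sigma ^2\approx (cl_{QG}^2)^{2/3}\int _{1/t}^{\infty }f^{-5/3}\,df=\frac{3}{2}\,(cl_{QG}^2)^{2/3}\,t^{2/3},$$
so that $`\sigma \approx (3/2)^{1/2}(ctl_{QG}^2)^{1/3}`$, reproducing Eq. (16) up to a factor of order unity and confirming that the integral is dominated by its low-frequency end.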
We can now use the existing noise-level data obtained at the Caltech 40-meter interferometer to put a bound on $`l_{QG}`$. In particular, by comparing Eq. (18) with the observed noise level of $`3\times 10^{-19}\ \mathrm{m}\,\mathrm{Hz}^{-1/2}`$ near 450 Hz, which is the lowest noise level reached by the interferometer, we obtain the bound $`l_{QG}\lesssim 10^{-29}`$ m, which is in accordance with our expectation $`l_{QG}\sim l_P\sim 10^{-35}`$ m. The exciting news is that the "advanced phase" of LIGO is expected to achieve a displacement noise level of less than $`10^{-20}\ \mathrm{m}\,\mathrm{Hz}^{-1/2}`$ near 100 Hz, and this would probe $`l_{QG}`$ down to $`10^{-33}`$ m, which is almost the length scale that we expect it to be. Moreover, since $`S(f)`$ goes like $`f^{-5/6}`$ according to Eq. (18), we can look forward to the post-LIGO/VIRGO generation of gravity-wave interferometers for improvement, by optimizing the performance at low frequencies. As lower frequency detection is possible only in space, we will probably need to wait for a decade or two for the LISA-type set-ups; but it will be worth the wait! We can also test the other two pictures of space-time foam by using the gravity-wave interferometers. The results are shown in the accompanying Table where, for convenience, we have rewritten Eq. (9) and Eq. (11) respectively as $`\delta l\gtrsim L_{QG}`$ and $`\delta l\gtrsim (l\tilde{l}_{QG})^{1/2}`$. We expect both $`L_{QG}`$ and $`\tilde{l}_{QG}`$ to be of order $`l_P\sim 10^{-35}`$ m. Note that the amplitude spectral density for each of the three space-time foam pictures has its own characteristic frequency dependence.

| Spacetime pictures with $`\delta l\gtrsim `$ | $`L_{QG}`$ | $`(ll_{QG}^2)^{1/3}`$ | $`(l\tilde{l}_{QG})^{1/2}`$ |
| --- | --- | --- | --- |
| Metric fluctuations with $`\delta g\gtrsim `$ | $`\frac{L_{QG}}{l}`$ | $`(\frac{l_{QG}}{l})^{2/3}`$ | $`(\frac{\tilde{l}_{QG}}{l})^{1/2}`$ |
| Displacement noise $`\sigma `$ | $`L_{QG}`$ | $`(ctl_{QG}^2)^{1/3}`$ | $`(ct\tilde{l}_{QG})^{1/2}`$ |
| Amplitude spectral density $`S(f)`$ | $`f^{-1/2}L_{QG}`$ | $`f^{-5/6}(cl_{QG}^2)^{1/3}`$ | $`f^{-1}(c\tilde{l}_{QG})^{1/2}`$ |
| Bound from 40-m interferometer | $`L_{QG}\lesssim 10^{-17}`$ m | $`l_{QG}\lesssim 10^{-29}`$ m | $`\tilde{l}_{QG}\lesssim 10^{-40}`$ m |
| Advanced phase of LIGO probes | $`L_{QG}`$ to $`10^{-19}`$ m | $`l_{QG}`$ to $`10^{-33}`$ m | $`\tilde{l}_{QG}`$ to $`10^{-45}`$ m |
| Present status | hard to check | waiting eagerly | ruled out? |

## V Conclusions

As the last column of the accompanying Table shows, the existing noise-level data obtained at the Caltech 40-m interferometer have already excluded all values of $`\tilde{l}_{QG}`$ down to $`10^{-40}`$ m, five orders of magnitude smaller than the Planck length. Thus, the third picture of space-time foam appears to be in serious trouble, if not already ruled out. It is interesting to reflect that, until recently, no one would have dreamed that there is a way to rule out a space-time foam model that predicts a mere $`10^{-4}`$ m uncertainty on the scale of the whole observable universe. Now, even the Planck scale is no longer regarded as so prohibitively small that quantum gravity cannot be probed by modern laser interferometry.
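For concreteness, Eq. (18) can be inverted to give $`l_{QG}=(S^3f^{5/2}/c)^{1/2}`$, and the bounds quoted above follow by plugging in the noise levels. The short script below does this for the Caltech 40-m noise level and for the advanced-LIGO goal (a rough sketch with round numbers; it reproduces the quoted bounds to within an order of magnitude):

```python
# Invert S(f) ~ f^{-5/6} (c l_QG^2)^{1/3}  =>  l_QG = sqrt(S^3 f^{5/2} / c)
c = 3.0e8  # speed of light, m/s

def l_qg_bound(S, f):
    """Bound on l_QG from an observed displacement amplitude
    spectral density S [m/sqrt(Hz)] at frequency f [Hz]."""
    return (S**3 * f**2.5 / c) ** 0.5

print(l_qg_bound(3e-19, 450.0))   # Caltech 40-m:  ~6e-29 m (the 1e-29 m class bound)
print(l_qg_bound(1e-20, 100.0))   # advanced LIGO: ~2e-32 m (the text quotes ~1e-33 m
                                  # for noise levels below this)
```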
On the other hand, the Table also shows that the quantum space-time effects predicted by the canonical picture of space-time foam (corresponding to fluctuations given by Eq. (9) in space-time measurements) are still far too small to be measured by interferometry technologies currently available or imaginable. Even the advanced phase of LIGO can probe $`L_{QG}`$ only down to $`10^{-19}`$ m, some 16 orders of magnitude away from the expected scale of the Planck length. Waiting for the confirmation of the canonical space-time foam picture with the techniques of interferometry is like waiting for Godot in Beckett’s play — the waiting may never end. Finally, here is the exciting news: modern gravity-wave interferometers are within striking distance of testing the space-time foam picture proposed by us. Incredibly, the advanced phase of LIGO will probe $`l_{QG}`$ down to $`10^{-33}`$ m. We can expect even more stringent bounds on $`l_{QG}`$ with future LISA-type projects. According to our space-time foam picture, a noise level corresponding to the associated amplitude spectral density given by Eq. (18), with $`l_{QG}`$ of the order of the Planck length, should be left in the read-out of an interferometer even after all classical-physics and ordinary quantum-mechanics noise sources have been eliminated. That noise is an intrinsic consequence of quantum gravity. If and when that noise is detected, we will have successfully taken a glimpse at the very fabric of space-time at very short distance scales. Eagerly we wait to catch that faint echo from space-time quantum fluctuations.

Acknowledgments

We thank G. Amelino-Camelia for a useful correspondence. This work was supported in part by the Department of Energy and by the Bahnson Fund of the University of North Carolina at Chapel Hill. One of us (YJN) gave a seminar on the topic of space-time measurements to a very receptive audience at the University of Connecticut in the fall of 1993. In the audience was Prof. Kurt Haller who, we believe, raised the question: How can we test the uncertainty expressed in Eq. (5)? At that time we had no concrete and practical idea. Now, five and a half years later, we are glad to report to Prof. Haller that there is a way to do it. This article is dedicated to him to celebrate his seventieth birthday.
# MEASUREMENT OF THE ATMOSPHERIC MUON NEUTRINOS WITH THE MACRO DETECTOR

## 1 Introduction

The interest in precise measurements of the flux of neutrinos produced in cosmic ray cascades in the atmosphere has been growing over recent years, due to the anomaly in the ratio of contained muon neutrino to electron neutrino events [1-5]. The anomaly finds an explanation in the scenario of neutrino oscillation. The effects of neutrino oscillations appear also at higher energies, as reported by many experiments [6-9]. The flux of muon neutrinos in the energy region from a few $`GeV`$ up to hundreds of $`GeV`$ is inferred from measurements of upgoing muons. The flux of upgoing muons is reduced as a consequence of $`\nu `$ oscillations. A clearer signature of $`\nu `$ oscillations in the range $`10^{-3}<\mathrm{\Delta }m^2<10^{-2}\ eV^2`$ is connected with the dependence of the reduction on the polar angle $`\theta `$ with respect to the zenith. The reduction in the number of events is stronger near the vertical than near the horizontal directions, due to the longer pathlength of neutrinos from production to observation near the nadir (Fig. 1). Furthermore, the flux of atmospheric muon neutrinos in the region of a few $`GeV`$ can be studied by looking at muons produced inside the detector and at muons externally produced and stopping inside it. If the atmospheric neutrino anomalies are the result of neutrino oscillations, a reduction of about a factor of two in the flux of upward-going low-energy atmospheric neutrinos is expected, but without any distortion in the shape of the angular distribution. In this paper the MACRO flux measurements are updated for high and low energy muon neutrinos. The new results are in agreement with the previous ones, and the indication for the $`\nu `$ oscillation hypothesis is now stronger.

## 2 The MACRO detector

The MACRO detector is located in the Gran Sasso Laboratory, with a minimum rock overburden of $`2700\ hg/cm^2`$. It is a large rectangular box ($`76.6\times 12\times 9.3\ m^3`$) divided longitudinally into 6 supermodules and vertically into a lower and an upper part (the latter called the attico). The active elements (see Fig. 2) are liquid scintillation counters for time measurement and streamer chambers for tracking, with $`27^{\circ }`$ stereo strip readouts. In the lower half of the detector streamer tube planes are alternated with trays of crushed rock absorbers. The attico is hollow and is used as a work area in which the electronics racks are placed. The streamer tube system achieves a tracking resolution for muons in the range $`0.2^{\circ }`$ – $`1^{\circ }`$ as a function of the track length. Hence the angular uncertainty is lower than the angular spread due to multiple scattering in the rock for a muon coming from above. The scintillator system consists of horizontal and vertical layers. The scintillator resolutions for muons are about $`0.5\ ns`$ in time and $`11\ cm`$ in position. Thanks to its large area, fine tracking granularity and electronics symmetry with respect to upgoing and downgoing flight directions, the MACRO detector is well suited to the study of upward-travelling muons generated by external $`\nu `$ interactions. Its mass ($`5.3\ kton`$) also allows the collection of a statistically significant sample of neutrino events due to internal interactions.

## 3 Neutrino events in MACRO

Fig. 2 displays the different kinds of neutrino events analyzed here. Most of the detected particles are muons generated in $`\nu _\mu `$ Charged Current interactions.
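Before turning to the individual event topologies, the pathlength argument of the Introduction can be made quantitative with a minimal two-flavour sketch (our own illustration, not the MACRO Monte Carlo; the production altitude, the energy and the oscillation parameters are assumed values):

```python
import math

# Two-flavour nu_mu survival probability,
#   P = 1 - sin^2(2 theta) * sin^2(1.27 * dm2[eV^2] * L[km] / E[GeV]),
# with maximal mixing and dm2 = 2.5e-3 eV^2 (the Up Through best fit).
R, h = 6371.0, 15.0        # Earth radius and assumed production altitude [km]
dm2, s2 = 2.5e-3, 1.0      # dm2 [eV^2] and sin^2(2 theta)

def pathlength(cos_zenith):
    """Neutrino pathlength [km] from production to the detector."""
    c = cos_zenith
    return math.sqrt((R * c) ** 2 + 2 * R * h + h * h) - R * c

def survival(cos_zenith, E):
    L = pathlength(cos_zenith)
    return 1.0 - s2 * math.sin(1.27 * dm2 * L / E) ** 2

print(survival(-1.0, 4.0))  # upgoing (L ~ 12800 km): strongly suppressed
print(survival(+1.0, 4.0))  # downgoing (L ~ 15 km): essentially unsuppressed
```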
In Fig. 3 the parent neutrino energy distributions for the different event topologies are shown:

* Up Through - These tracks come from interactions in the rock below MACRO and cross the whole detector ($`E_\mu >1`$ GeV). The time information provided by the scintillation counters allows the flight direction to be determined by means of the time-of-flight (T.o.F.) method. The data have been collected in three periods, with different detector configurations. In the first two periods (March 1989 – November 1991, December 1992 – June 1993) only lower parts of MACRO were working. In the last period (April 1994 – February 1999) the attico was also in acquisition.
* In Up - These partially contained events come from $`\nu `$ interactions inside the apparatus. Also in this case the analysis algorithm is based on the T.o.F. calculation made possible by the attico scintillator layers. Hence only the data collected with the attico (live-time 4.1 years) are useful for this analysis. A non-negligible fraction ($`13\%`$) of the events is expected to be induced by Neutral Currents or $`\nu _e`$ CC interactions.
* Up Stop + In Down - This sample is composed of two subsamples: external interactions with an upward-going track stopping in MACRO (Up Stop), and interactions in the lower part of the detector with a downgoing track (In Down). These events are recognized by means of topological criteria, and the lack of time information makes it difficult to distinguish the two subsamples. Up to now we treat them as a single sample. Assuming that neutrinos do not oscillate, the number of Up Stop events is expected to be almost equal to the number of In Down events, and the contribution of Neutral Currents and $`\nu _e`$ CC interactions is $`10\%`$. The attico is also used in the analysis of this sample, as a veto for downgoing tracks. Therefore the analyzed data are those collected with the whole detector, with the same effective live-time as the In Up sample.

## 4 Analysis procedure

The T.o.F. method uses the formula
$$\frac{1}{\beta }=\frac{c\times (T_1-T_2)}{L},$$ (1)
where $`T_1`$ and $`T_2`$ are the times measured in the lower and upper scintillator counters, respectively, and $`L`$ is the path between the two counters. Therefore $`1/\beta `$ is roughly $`+1`$ for downgoing tracks and $`-1`$ for upgoing tracks. In the sample of throughgoing muons a track may hit 3 scintillator layers. In this case the time measurements are redundant and $`1/\beta `$ is calculated by means of a linear fit of the time as a function of the pathlength. Several cuts are imposed to remove backgrounds from radioactivity and from showering events which may cause failures in the time reconstruction. Another background is connected with photonuclear processes such as $`\mu N\to \mu \pi X`$, where low-energy upgoing particles are produced at large angles by downgoing muons. The requirement of a minimum range of $`200\ g/cm^2`$ in the apparatus is applied to the Up Through sample in order to drastically reduce these low-energy tracks, which mimic neutrino-induced events when the downgoing muon misses MACRO. After all analysis cuts the signal peaks at $`1/\beta \sim -1`$ are well isolated for the first two samples (see Fig. 4). The Up Stop + In Down events are identified via topological constraints. The main requirement is the presence of a reconstructed track crossing the bottom scintillator layer. All the track hits must be at least $`1`$ m away from the supermodule walls.
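Returning briefly to Eq. (1), here is a minimal numerical illustration of the T.o.F. classification (a sketch with invented numbers, not MACRO data or code):

```python
C = 0.2998  # speed of light in m/ns

def inv_beta(t_lower, t_upper, L):
    """1/beta from the times measured in the lower (T1) and upper (T2)
    scintillator counters and the pathlength L [m] between them."""
    return C * (t_lower - t_upper) / L

# For a 10 m track, a relativistic muon needs ~33 ns between the two layers:
# a downgoing muon reaches the lower counter late, an upgoing one early.
print(inv_beta(33.4, 0.0, 10.0))   # ~ +1  -> downgoing
print(inv_beta(0.0, 33.4, 10.0))   # ~ -1  -> upgoing
```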
The criteria used to verify that the event vertex (or $`\mu `$ stop point) is inside the detector are similar to those used for the In Up search. The probability that an atmospheric muon produces a background event is negligible. To reject ambiguous and/or wrongly tracked events which survive the automated analysis cuts, real events are directly scanned with the MACRO Event Display. Because of the low energy of these events the minimum range of $`200\ g/cm^2`$ in the apparatus is not required. Therefore the background due to upward-going charged pions is estimated via simulation and subtracted on a statistical basis. Expected rates and angular distributions have been estimated assuming the atmospheric $`\nu `$ flux calculated by the Bartol University group. The estimate of the $`\nu `$ cross-section was based on the GRV94 parton distribution set, which changes the prediction by $`+1\%`$ with respect to the Morfin and Tung parton distributions used in the past by the MACRO Collaboration. For the In Up and Up Stop + In Down samples low-energy effects have also been taken into account. The propagation of muons in the rock was taken from Lohmann et al. The uncertainty on the expected muon flux is estimated to be $`17\%`$ for Up Through events and $`25\%`$ for the other events. The apparatus and the data acquisition are fully reproduced in a GEANT-based Monte Carlo program. Real and simulated data are analyzed by means of the same procedure. Particular care has been taken to minimize the systematic uncertainty in the detector acceptance simulation. For the Up Through sample different acceptance calculations, including separate electronic and data acquisition systems, have been compared. For each sample, two different analyses have been performed, yielding the same results. Furthermore, the trigger and streamer tube efficiencies, the background subtraction and the effects of the analysis cuts have been studied in detail. The efficiency of the visual scanning for the Up Stop + In Down sample has been estimated by analyzing real and simulated events after a random merging. The systematic error on the total number of events due to the acceptance has been estimated to be $`6\%`$ for the Up Through sample. The uncertainty is higher ($`10\%`$) for the low-energy samples because it depends strongly on the data-taking conditions, the analysis algorithm efficiency and the detector mass.

## 5 High energy sample - Analysis results

In the Up Through sample 642 events are in the signal range ($`-1.25<1/\beta <-0.75`$). Looking at the distribution of events outside the signal peak, we estimate a background contamination of $`12.5\pm 6`$ events. Furthermore, $`10.5\pm 4`$ events are expected to be upgoing charged particles produced by downgoing muons. Finally, we expect that $`12\pm 4`$ events are not true Up Through events, because they are generated by neutrino interactions in the very bottom layer of MACRO scintillators. Hence, removing these backgrounds, the observed number of Up Through muons integrated over all zenith angles is $`607`$. For this sample $`825`$ events are expected, and the ratio of observed to expected events is reported in Table 1. In Fig. 5 the $`\mathrm{cos}\theta `$ distribution of the measured flux is shown, compared with the expected ones assuming stable or oscillating neutrinos. The data error bars are the statistical errors with an extension due to the systematic errors, added in quadrature.
The observed zenith distribution does not fit well with the hypothesis of no oscillation, giving a maximum $`\chi ^2`$ probability of only $`0.35\%`$. (The data are normalized to the prediction; the last bin near the horizontal is not taken into account because of the higher acceptance uncertainty and the background connected with the scattering of quasi-horizontal downgoing muons.) Combining normalization and angular shape, the probability is still very low ($`0.36\%`$). These results can be explained in the scenario of $`\nu `$ oscillation. Assuming $`\nu _\mu \to \nu _\tau `$, the lowest $`\chi ^2`$ value for the angular distribution in the physical region is $`12.5`$, with maximal mixing and $`\mathrm{\Delta }m^2\simeq 0.0025\ eV^2`$. (The best $`\chi ^2`$ value is 10.6, in the unphysical range $`\mathrm{sin}^22\theta _{mix}=1.5`$.) In the first plot of Fig. 6 the independent probabilities for normalization and angular shape, and the combined probability, are shown as functions of $`\mathrm{\Delta }m^2`$ assuming maximal mixing. It is notable that the total number of events and the $`\mathrm{cos}\theta `$ distribution indicate very close values of $`\mathrm{\Delta }m^2`$. The second plot of Fig. 6 shows the same probabilities for $`\nu _\mu \to \nu _{sterile}`$ oscillation. The maximum of the combined probability is $`36.6\%`$ for $`\nu _\mu \to \nu _\tau `$ and $`8.4\%`$ for oscillation into a sterile neutrino. Fig. 7 shows the confidence regions at the $`90\%`$ and $`99\%`$ confidence levels, based on the application of the Monte Carlo prescription of Feldman et al. The sensitivity of the experiment is also plotted. The sensitivity is the $`90\%`$ contour which would result if the data were equal to the Monte Carlo prediction at the best-fit point.

## 6 Low energy samples - Analysis results

In the In Up sample the uncorrelated background is estimated from the $`1/\beta `$ distribution. After the background subtraction, $`116`$ events are accepted. The prediction is $`202`$ In Up events. In the Up Stop + In Down sample $`193`$ events survive the analysis cuts and the visual scanning, while $`274`$ events are expected. The ratios of the observed number of events to the predictions and the angular distributions of both samples are reported in Table 1 and in Fig. 8. The low-energy $`\nu _\mu `$ samples show a uniform deficit of the measured number of events over the whole angular distribution with respect to the predictions based on the absence of neutrino oscillations. We note the agreement between the results for the low-energy and Up Through events. Assuming the oscillation parameters suggested by the higher-energy sample, a disappearance of about $`50\%`$ of the $`\nu _\mu `$ is expected in the In Up and Up Stop samples, because of the neutrino pathlength (thousands of kilometres). No flux reduction is instead expected for In Down events, whose neutrino path is of the order of tens of kilometres. The ratios and the angular distributions estimated assuming $`\nu `$ oscillation are also reported in Table 1 and in Fig. 8. In order to reduce the effects of uncertainties coming from the neutrino flux and cross-section, the double ratio
$$\frac{\left(\frac{\mathrm{In\ Up}}{\mathrm{Up\ Stop}+\mathrm{In\ Down}}\right)_{observed}}{\left(\frac{\mathrm{In\ Up}}{\mathrm{Up\ Stop}+\mathrm{In\ Down}}\right)_{expected}}$$ (2)
has been studied. A residual theoretical error ($`5\%`$) survives, due to small differences between the energy spectra of the two samples.
Because of some cancellations the systematic uncertainty is also reduced, to $`6\%`$. The value of the double ratio over the zenith angle distribution is shown in Fig. 9. Assuming the oscillation parameters ($`\mathrm{sin}^22\theta _{mix}=1`$, $`\mathrm{\Delta }m^2=0.0025\ eV^2`$) suggested by the Up Through sample, the points are compatible with 1. The probability of obtaining a summed double ratio at least this far from 1 is $`6\%`$, assuming no oscillation and the Bartol flux as the parent $`\nu `$ flux.

## 7 Conclusions

The flux and the shape of the zenith distribution for the Up Through sample favour $`\nu _\mu \to \nu _\tau `$ oscillations. The experimental data have a $`36.6\%`$ probability assuming oscillation, against $`0.36\%`$ assuming stable neutrinos. Therefore the new data confirm the published MACRO results, with an increased probability for the oscillation hypothesis. The low-energy neutrino events are fewer than expected, and the deficit is quite uniform over the whole angular range. These results also suggest oscillation with maximal mixing and $`\mathrm{\Delta }m^2`$ of a few times $`10^{-3}\ eV^2`$. The combined analysis of the high and low energy data is in progress. For the present, we stress the strong coherence of the MACRO results in different energy ranges and with different event topologies.
# Figure captions

| Fig.1 | Comparison of QGSM calculations with preliminary experimental data on $`\mathrm{\Lambda }_c`$ spectra measured by the SELEX collaboration for: a) $`\pi ^{-}p`$, b) $`pp`$ and c) $`\mathrm{\Sigma }^{-}p`$ collisions at $`P_L=600\ GeV/c`$. |
| --- | --- |
| Fig.2 | The $`x_F`$–dependence of the $`\mathrm{\Lambda }_c,\overline{\mathrm{\Lambda }}_c`$–production asymmetry in $`\mathrm{\Sigma }^{-}p`$ collisions. The experimental data are from WA89 (full circles) at $`P_L=340\ GeV/c`$ and SELEX (open circles) at $`P_L=600\ GeV/c`$. The theoretical curves were calculated for 350 GeV/c (full line) and 600 GeV/c (dashed line). |
| Fig.3 | The $`x_F`$–dependence of the $`D^{*-}`$–meson cross section $`1/\sigma \,d\sigma /dx`$ in $`\pi ^{-}p`$ interactions at $`350\ GeV/c`$. |
| Fig.4 | Comparison of the model calculations with experimental data on the leading ($`D^{*-}`$) and nonleading ($`D^{*+}`$) charmed vector meson production asymmetry in $`\pi ^{-}p`$ interactions at $`P_L=350\ GeV/c`$. |
## 1 Introduction

Despite the success of quantum field theory in the description of the behaviour of elementary particles in the perturbative regime of interactions, it still remains a challenge to understand the non-perturbative domain satisfactorily. One of the methods which has gained attention in this regard in recent years is the study of Schwinger-Dyson equations (SDEs). Despite the difficulties involved in finding a non-perturbative truncation of these equations, this approach has been very successful in addressing issues like dynamical mass generation for fundamental fermions when they are involved in sufficiently strong interactions. Moreover, recent attempts, e.g., to improve the reliability of the approximations used have increased the credibility of the results obtained through such studies. Application of the Schwinger-Dyson formalism to supersymmetric (SUSY) models has been less extensive. In supersymmetric Quantum Electrodynamics (SQED), based upon the arguments of the non-renormalization theorem and gauge invariance, it is expected to be impossible to obtain dynamical mass generation for fermions, though some studies argue that it is probably possible to break chiral symmetry dynamically in SQED. We postpone the study of SQED to a future work. In this paper, we take the simplest SUSY model, i.e., the Wess-Zumino model, and attempt to solve the corresponding Schwinger-Dyson equations for the fermion and scalar propagators. We believe this exercise will provide us with a deeper insight into how the role of supersymmetry in the context of dynamical mass generation translates into the language of Schwinger-Dyson equations. Such a study should provide us with a better starting point for more complicated SUSY theories such as SQED and SQCD. In the latter theory, a need also exists to further explore connections between the holomorphic approach and that of the Schwinger-Dyson equations. We first study the Yukawa model with one real scalar and one Majorana fermion, which can be considered as a truncated Wess-Zumino model. We discuss this model in some detail for two reasons, the first being that it is interesting in its own right because, after all, it is Yukawa interactions which are responsible for giving masses to fermions in the Standard Model (SM). Secondly, extending the Yukawa model by doubling the scalar degrees of freedom provides us with a clear understanding of how supersymmetry works. We use the quenched approximation. Keeping in mind the perturbative expansion of the 3-point vertex beyond the lowest order and its transformation under charge conjugation symmetry, we propose an ansatz for the full effective vertex. One of the advantages of using this vertex is that the equations for the mass function $`\mathcal{M}(p^2)`$ and the wavefunction renormalization $`F(p^2)`$ decouple completely in the neighbourhood of the critical coupling $`\alpha _c`$, above which mass is generated for the fermions, and partly above it. We solve both equations to find analytical expressions for $`F(p^2)`$ and the anomalous mass dimensions in the neighbourhood of $`\alpha _c`$. The results show that the non-perturbative interaction of fermions with fundamental scalars can give masses to fermions in a dynamical way, provided the interaction is strong enough. We use a numerical calculation to plot the Euclidean mass of the fermions as a function of the coupling, and confirm that it obeys Miransky scaling. We also evaluate $`F(p^2)`$ numerically.
We then extend the particle spectrum by doubling the number of scalars and imposing the relations for the couplings that define the Wess-Zumino model. Due to the presence of the additional symmetry, we are able to extract useful information beyond the rainbow approximation.

## 2 The Yukawa Model

Consider a massless Lagrangian with one Majorana fermion and one real scalar interacting with each other through a $`\gamma _5`$-type interaction:
$$\mathcal{L}=\frac{1}{2}(\partial _\mu A)^2+\frac{1}{2}i(\overline{\psi }\gamma ^\mu \partial _\mu \psi )-\frac{1}{2}g^2A^4+ig\overline{\psi }\gamma _5\psi A.$$ (1)
The corresponding Schwinger-Dyson equation for the fermion propagator, $`S_F(p)`$, is displayed in Fig. 1. Motivated by the success of the quenched approximation in QED and QCD, we neglect the fermion loops. Moreover, as a first step towards truncating the infinite set of Schwinger-Dyson equations, we drop all 4-point functions. The full scalar propagator can then be replaced by its bare counterpart. Using the Feynman rules, the Schwinger-Dyson equation can be written as:
$$iS_F^{-1}(p)=iS_F^{0\,-1}(p)-\int \frac{d^4k}{(2\pi )^4}(2g\gamma _5)(iS_F(k))(2g\gamma _5\mathrm{\Gamma }_A(k,p))\left(\frac{i}{q^2}\right),$$ (2)
where $`q=k-p`$ and $`S_F(p)`$ can be expressed in terms of two Lorentz scalar functions, $`F(p^2)`$, the wavefunction renormalization, and $`\mathcal{M}(p^2)`$, the mass function, so that
$$S_F(p)=\frac{F(p^2)}{\not{p}-\mathcal{M}(p^2)}.$$ (3)
The bare propagator is $`S_F^0(p)=1/\not{p}`$, where the bare mass has been taken to be zero. We can project out equations for $`F(p^2)`$ and $`\mathcal{M}(p^2)`$ by taking the trace of Eq. (2), having multiplied it by $`\not{p}`$ and by 1 in turn. On Wick rotating to Euclidean space,
$$F(p^2)=1-\frac{\alpha }{\pi ^3}\frac{1}{p^2}\int d^4k\,\frac{F(k^2)F(p^2)}{k^2+\mathcal{M}^2(k^2)}\,\frac{k\cdot p}{q^2}\,\mathrm{\Gamma }_A(k,p)$$ (4)
$$\mathcal{M}(p^2)=\frac{\alpha }{\pi ^3}\int d^4k\,\frac{\mathcal{M}(k^2)}{k^2+\mathcal{M}^2(k^2)}\,\frac{F(k^2)F(p^2)}{q^2}\,\mathrm{\Gamma }_A(k,p)$$ (5)
where $`\alpha =g^2/4\pi `$. It is here that we cannot proceed any further without making an ansatz for $`\mathrm{\Gamma }_A(k,p)`$. Any ansatz for the 3-point vertex must fulfill at least the following requirements:

* Perturbatively, we must have $`\mathrm{\Gamma }_A(k,p)=1+\mathcal{O}(g^2)`$.
* It must be symmetric in $`k`$ and $`p`$.

Moreover, as the SDEs relate the 2-point function to the 3-point function, the expression for the full vertex is expected to involve the functions $`F(p^2)`$ and/or $`\mathcal{M}(p^2)`$. The ansatz commonly used in non-perturbative studies of the SDEs is the bare vertex ansatz. For the Yukawa model under discussion, it implies $`\mathrm{\Gamma }_A(k,p)=1`$. It agrees with the lowest order of perturbation theory. The only truncation of the complete set of Schwinger-Dyson equations known so far that avoids any assumptions other than the smallness of the coupling at every level of the approximation is perturbation theory. Therefore, it is natural to assume that physically meaningful solutions of the Schwinger-Dyson equations must agree with perturbative results in the weak coupling regime. This requires, e.g., that every non-perturbative ansatz chosen for the full vertex must reduce to its perturbative counterpart when the interactions are weak. The bare vertex fulfills this requirement to the lowest order in perturbation theory.
Any other vertex which fulfills this condition and does not violate the other requirements is at least as good as the bare vertex. One of the simplest non-perturbative vertices can be constructed by realizing that Eq. (4) yields the following expansion of $`F(p^2)`$ in perturbation theory:
$$F(p^2)=1+\mathcal{O}(\alpha )$$ (6)
Therefore, a simple candidate for the 3-point vertex can be written as:
$$\mathrm{\Gamma }_A(k,p)=\frac{1}{F(k^2)F(p^2)}.$$ (7)
Perturbatively, this gives
$$\mathrm{\Gamma }_A(k,p)=\frac{1}{[1+\mathcal{O}(\alpha )][1+\mathcal{O}(\alpha )]}=1+\mathcal{O}(\alpha )$$ (8)
Therefore, to the lowest order in perturbation theory, our vertex ansatz reduces to the bare vertex. It is exceedingly complicated to ensure that at higher orders the non-perturbative vertex reduces to its perturbative counterpart in the weak coupling regime. We do not aim at that in this paper. However, we demonstrate that even at the next-to-lowest order, i.e., to $`\mathcal{O}(\alpha )`$, our ansatz is correct to the extent that both the ansatz and the real vertex have the same logarithmically divergent behaviour in the ultraviolet regime. Fig. 2 represents the perturbative expansion of the 3-point fermion-scalar vertex to $`\mathcal{O}(\alpha )`$. Using the Feynman rules, we can write it as follows:
$$2g\gamma _5\mathrm{\Gamma }_A=2g\gamma _5+\int \frac{d^4w}{(2\pi )^4}(2g\gamma _5)\,iS_F(p-w)\,(2g\gamma _5)\,iS_F(k-w)\,(2g\gamma _5)\,\frac{i}{w^2}$$ (9)
which can be simplified to:
$$\mathrm{\Gamma }_A=16\pi i\alpha ^2\left[\not{k}\not{p}\,J^{(0)}-(\not{k}\gamma ^\nu +\gamma ^\nu \not{p})J_\nu ^{(1)}+K^{(0)}\right]$$ (10)
where
$$J^{(0)}=\int d^4w\,\frac{1}{w^2(p-w)^2(k-w)^2}$$ (11)
$$J_\mu ^{(1)}=\int d^4w\,\frac{w_\mu }{w^2(p-w)^2(k-w)^2}$$ (12)
$$K^{(0)}=\int d^4w\,\frac{1}{(p-w)^2(k-w)^2}$$ (13)
The exact analytical expressions for these three integrals are known. They involve basic functions of the momenta $`k`$ and $`p`$ and a Spence function. We believe that it is highly non-trivial to construct a non-perturbative vertex which reduces to this complicated form in the weak coupling regime. However, the asymptotic behaviour can be reproduced to some extent. Simple power counting reveals that the integrals $`J^{(0)}`$ and $`J_\mu ^{(1)}`$ are perfectly well behaved in the ultraviolet regime. However, $`K^{(0)}`$ is logarithmically divergent. We now show that our vertex ansatz also exhibits this behaviour. Using the fact that perturbatively $`\mathcal{M}(p^2)=0`$, we can re-write Eq. (4) as follows:
$$F(p^2)=1-\frac{\alpha }{\pi ^3}\frac{1}{p^2}\int d^4k\,\frac{F(k^2)F(p^2)}{k^2}\,\frac{k\cdot p}{q^2}$$ (14)
where we have employed Eq. (6) and the Feynman rule for the vertex. Carrying out the angular and radial integrations respectively, and retaining the leading log terms, we get
$$F(p^2)=1+\frac{\alpha }{2\pi }\mathrm{ln}\frac{p^2}{\mathrm{\Lambda }^2}$$ (15)
Therefore, the proposed vertex ansatz can be written perturbatively as follows:
$$\mathrm{\Gamma }_A(k,p)=\frac{1}{F(k^2)F(p^2)}=1+\frac{\alpha }{\pi }\mathrm{ln}\frac{k^2p^2}{\mathrm{\Lambda }^2}+\mathcal{O}(\alpha ^2)$$ (16)
which is logarithmically divergent in the ultraviolet regime, just like the real vertex to $`\mathcal{O}(\alpha )`$. Therefore, perturbatively, our vertex ansatz is more realistic than the bare vertex.
An added advantage of using the proposed vertex ansatz is that Eq. (5) can be solved independently of Eq. (4). Therefore, it serves a purpose similar to that of Mandelstam's choice of the 3-gluon vertex in studying the Schwinger-Dyson equation of the gluon propagator. As the unknown functions $`F`$ and $`\mathcal{M}`$ do not depend upon the angle between $`k`$ and $`p`$, we can perform the angular integration to arrive at
$$F(p^2)=1-\frac{\alpha }{2\pi }\int dk^2\,\frac{1}{k^2+\mathcal{M}^2(k^2)}\left[\frac{k^4}{p^4}\theta (p^2-k^2)+\theta (k^2-p^2)\right]$$ (17)
$$\mathcal{M}(p^2)=\frac{\alpha }{\pi }\int dk^2\,\frac{\mathcal{M}(k^2)}{k^2+\mathcal{M}^2(k^2)}\left[\frac{k^2}{p^2}\theta (p^2-k^2)+\theta (k^2-p^2)\right].$$ (18)
Such equations are known to have a non-trivial solution for the mass function above a critical value of the coupling, $`\alpha =\alpha _c`$. In the neighbourhood of the critical coupling, when the generated mass is still small, we can put $`\mathcal{M}^2=0`$. Then Eqs. (17,18) decouple from each other completely. The leading log solution for $`F(p^2)`$ is then:
$$F(p^2)=1+\frac{\alpha }{2\pi }\mathrm{ln}\frac{p^2}{\mathrm{\Lambda }^2}.$$ (19)
As for the mass function, multiplicative renormalizability demands a solution of the type $`\mathcal{M}(p^2)\sim (p^2)^s`$. Substituting this in Eq. (18) and performing the radial integration, we find
$$s=-\frac{1}{2}\pm \frac{1}{2}\sqrt{1-\frac{\alpha }{\alpha _c}}$$ (20)
where $`\alpha _c=\pi /4`$. For $`\alpha >\alpha _c`$ the solution for the mass function enters the complex plane, indicating that a phase transition has taken place from the perturbative to a non-perturbative solution, corresponding to the dynamical generation of mass. Numerically, above $`\alpha _c`$, we solve Eq. (18) in a two-step process. We first use the iterative method to get close to the solution and then refine the answer by converting the integral equation into a set of simultaneous nonlinear equations to be solved by the Newton-Raphson method. In Fig. 3 we have drawn the Euclidean mass $`M`$ (which can be taken to be $`\mathcal{M}(0)`$) as a function of the coupling $`\alpha `$. We see that it obeys the Miransky scaling law and can be fitted to the form
$$\frac{M}{\mathrm{\Lambda }}=\mathrm{exp}\left[-\frac{A}{\sqrt{\frac{\alpha }{\alpha _c}-1}}+B\right]$$ (21)
very well by the choice $`A=0.97\pi `$ and $`B=1.45`$. These numbers are close to the ones found in , although the value of the critical coupling is of course different. The slight mismatch in the values of $`A`$ and $`B`$ is due to the fact that in the logarithmic grid of momenta we choose 30 points per decade and do not extrapolate the result to an infinite number of points, an exercise carried out in . We also compute $`F(p^2)`$ for various values of $`\alpha `$ and find that, the closer we approach $`\alpha _c`$ (where the generated mass is still small) starting from a larger value of $`\alpha `$, the closer the numerical result gets to the analytical result, as expected (Fig. 4).
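The first, iterative step of this procedure is simple enough to sketch. Below is a minimal fixed-point solver for Eq. (18) on a logarithmic momentum grid (our own sketch, not the code behind Fig. 3; the grid size, seed and tolerance are arbitrary illustrative choices, and the Newton-Raphson refinement is omitted):

```python
import numpy as np

# Fixed-point iteration for the quenched mass-function equation, Eq. (18):
#   M(p^2) = (alpha/pi) Int dk^2  M(k^2)/(k^2 + M^2(k^2))
#                          * [ (k^2/p^2) theta(p^2-k^2) + theta(k^2-p^2) ],
# in units of the UV cutoff, Lambda^2 = 1.
alpha = 1.0                        # above alpha_c = pi/4: non-trivial solution
p2 = np.logspace(-10.0, 0.0, 300)  # p^2 grid, 30 points per decade
w = np.gradient(p2)                # dk^2 weights on the non-uniform grid
M = 0.1 * np.ones_like(p2)         # constant seed

for _ in range(5000):
    f = M / (p2 + M * M)                        # M(k^2)/(k^2 + M^2(k^2))
    kern = np.where(p2[None, :] <= p2[:, None], # rows: p^2, columns: k^2
                    p2[None, :] / p2[:, None],  # (k^2/p^2) for k^2 <= p^2
                    1.0)                        # 1 for k^2 > p^2
    M_new = (alpha / np.pi) * kern @ (w * f)
    if np.max(np.abs(M_new - M)) < 1e-12:
        break
    M = M_new

print("M(0)/Lambda ~", M[0])  # non-zero for alpha > pi/4, collapses to 0 below
```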
## 3 The Wess-Zumino Model

We now extend the particle spectrum by doubling the number of scalars to discuss the massless Wess-Zumino model, characterized by the following Lagrangian:
$$\mathcal{L}=\frac{1}{2}(\partial _\mu A)^2+\frac{1}{2}(\partial _\mu B)^2+\frac{1}{2}(i\overline{\psi }\gamma ^\mu \partial _\mu \psi )-\frac{1}{2}g^2(A^2+B^2)^2-g\overline{\psi }(B-iA\gamma _5)\psi .$$ (22)
The Schwinger-Dyson equation for the fermion propagator in this model is depicted in Fig. 5. Before we embark on solving this equation, we must re-address the validity of the ansatz for the 3-point vertex which we made in the Yukawa case. At the one-loop level in perturbation theory, one now has contributions from both scalars, as depicted in Fig. 6. Therefore, e.g., for the scalar $`A`$, we can write
$$2g\gamma _5\mathrm{\Gamma }_A=2g\gamma _5+\int \frac{d^4w}{(2\pi )^4}(2g\gamma _5)\,iS_F(p-w)\,(2g\gamma _5)\,iS_F(k-w)\,(2g\gamma _5)\frac{i}{w^2}+\int \frac{d^4w}{(2\pi )^4}(2ig)\,iS_F(p-w)\,(2g\gamma _5)\,iS_F(k-w)\,(2ig)\frac{i}{w^2}=2g\gamma _5,$$ (23)
i.e., the two one-loop integrals cancel each other. The same is the case for $`\mathrm{\Gamma }_B`$; that is, in the presence of both scalars $`A`$ and $`B`$, none of the 3-point vertices gets modified at $`\mathcal{O}(\alpha )`$ in perturbation theory. Therefore, in the Wess-Zumino case, it is more reasonable to use the bare vertex instead of the ansatz made earlier. Though one would now expect to solve coupled integral equations for $`F(p^2)`$ and $`\mathcal{M}(p^2)`$, a miraculous cancellation of terms takes place, as is evident from the following Schwinger-Dyson equation for the fermion propagator:
$$\frac{\not{p}-\mathcal{M}(p^2)}{iF(p^2)}=\frac{\not{p}}{i}-\frac{\alpha }{\pi ^3}\int d^4k\left[\frac{F(k^2)}{\not{k}-\mathcal{M}(k^2)}\frac{1}{q^2}\right]+\frac{\alpha }{\pi ^3}\int d^4k\left[\gamma _5\frac{F(k^2)}{\not{k}-\mathcal{M}(k^2)}\gamma _5\frac{1}{q^2}\right].$$ (24)
Taking the trace of this equation, we get
$$\mathcal{M}(p^2)=0.$$
As the cancellation of terms takes place at the very beginning, it is easy to see that dynamical mass generation will remain an impossibility for the full vertex and the full scalar propagator as long as they are identical for both scalars. The vertex corrections for $`A`$ and $`B`$ have been proven to be equal up to $`\mathcal{O}(\alpha ^2)`$. We shall shortly see that the same is true for the full scalar propagator, at least up to $`\mathcal{O}(\alpha )`$. This is in accordance with the arguments based on the non-renormalization theorem. SUSY plays a role in providing the same number of bosonic and fermionic degrees of freedom. We have seen from the case of the Yukawa model that without this equality it would not be possible to prevent dynamical mass generation. Secondly, SUSY imposes relations on the couplings of the two scalars to the fermions. This relationship is crucial in preventing dynamical mass generation. As far as the wavefunction renormalization $`F(p^2)`$ is concerned, its leading log behaviour gets modified slightly, by the inclusion of the other scalar, to:
$$F(p^2)=1+\frac{\alpha }{\pi }\mathrm{ln}\frac{p^2}{\mathrm{\Lambda }^2}.$$ (25)
Although it is an interesting conclusion in its own right that supersymmetry prevents dynamical mass generation for fermions in the Wess-Zumino model, another important issue to probe is whether supersymmetry itself remains intact, i.e., whether the scalars can also be kept massless. This is what we discuss now.
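To see the cancellation explicitly (an elementary step, spelled out here in our notation), note that sandwiching the fermion propagator between two $`\gamma _5`$'s flips the sign of the $`\not{k}`$ piece but not of the mass piece:
$$\frac{1}{\not{k}-\mathcal{M}}=\frac{\not{k}+\mathcal{M}}{k^2-\mathcal{M}^2},\qquad \gamma _5\,\frac{1}{\not{k}-\mathcal{M}}\,\gamma _5=\frac{-\not{k}+\mathcal{M}}{k^2-\mathcal{M}^2}.$$
The relative minus sign between the two integrals in Eq. (24) therefore removes the $`\mathcal{M}`$ (unit-matrix) pieces and leaves a term proportional to $`\not{k}`$ alone, whose trace vanishes; this is what forces $`\mathcal{M}(p^2)=0`$.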
The Schwinger-Dyson equation for the scalar (for example $`A`$) is depicted in Fig. 7. A scalar propagator, unlike a fermion one, needs only one unknown function to describe it. But we shall prefer to split it into two parts and write the full scalar propagator as follows:
$$S_A(p)=\frac{F_A(p^2)}{p^2-\mathcal{M}_A^2(p^2)}.$$ (26)
A non-zero value of the mass function $`\mathcal{M}_A(p^2)`$ will be responsible for shifting the pole from $`p^2=0`$ to some finite value, generating a mass for the scalar dynamically. $`F_A(p^2)`$, on the other hand, is the scalar wavefunction renormalization. The SD equation for the scalar propagator in Euclidean space can now be written as:
$$\frac{p^2+\mathcal{M}_A^2(p^2)}{F_A(p^2)}=p^2+\frac{3\alpha }{2\pi ^3}\int d^4k\,\frac{F_A(k^2)}{k^2+\mathcal{M}_A^2(k^2)}+\frac{\alpha }{2\pi ^3}\int d^4k\,\frac{F_B(k^2)}{k^2+\mathcal{M}_B^2(k^2)}-\frac{2\alpha }{\pi ^3}\int d^4k\,\frac{F(k^2)F(q^2)}{k^2q^2}\,\mathrm{\Gamma }_A(k,p)\,k\cdot q$$ (27)
where we have used the fact that the fermions do not acquire mass. If we want to preserve supersymmetry and do not want the scalars to acquire mass, we must have:
$$\mathcal{M}_A(p^2)=\mathcal{M}_B(p^2)=0.$$ (28)
We are then left with:
$$\frac{1}{F_A(p^2)}=1+\frac{\alpha }{2\pi ^3p^2}\int \frac{d^4k}{k^2}\left[3F_A(k^2)+F_B(k^2)-4\frac{k\cdot q}{q^2}F(k^2)F(q^2)\mathrm{\Gamma }_A(k,p)\right]$$ (29)
and there is a similar equation for the scalar $`B`$:
$$\frac{1}{F_B(p^2)}=1+\frac{\alpha }{2\pi ^3p^2}\int \frac{d^4k}{k^2}\left[3F_B(k^2)+F_A(k^2)-4\frac{k\cdot q}{q^2}F(k^2)F(q^2)\mathrm{\Gamma }_B(k,p)\right].$$ (30)
These equations should yield a solution for $`F_A(p^2)`$ and $`F_B(p^2)`$ such that it does not change the position of the pole of the scalar propagator and such that the quadratic divergences cancel. It is well known that this does happen in perturbation theory to $`\mathcal{O}(\alpha )`$. In fact, one can evaluate $`F_A(p^2)`$ and $`F_B(p^2)`$. The leading log expression for these functions to $`\mathcal{O}(\alpha )`$ is
$$F_A(p^2)=F_B(p^2)=1+\frac{\alpha }{\pi }\mathrm{ln}\frac{p^2}{\mathrm{\Lambda }^2}$$ (31)
which is exactly the same expression as that for $`F(p^2)`$ for the fermion propagator. This result indicates that supersymmetry need not be broken.

## 4 Conclusions

We have studied the Schwinger-Dyson equations for the Yukawa (a scalar interacting with a fermion through a $`\gamma _5`$-type interaction) and Wess-Zumino models. In the simple Yukawa model, we propose a vertex ansatz which we argue should perform better than the bare vertex. In the quenched approximation, we find dynamical mass generation for fermions above a critical value of the coupling, $`\alpha _c=\pi /4`$. The generated Euclidean mass obeys Miransky scaling. When we extend this Yukawa model to equate the scalar and fermionic degrees of freedom (the Wess-Zumino model), we find that a neat cancellation of terms occurs and there is no mass generation for the fermions. This fact remains true beyond the rainbow approximation and is supported by the perturbative calculations available for the 3-point vertex to $`\mathcal{O}(\alpha ^2)`$ and for the scalar propagator. This result was expected on the basis of the non-renormalization theorem.
The two approaches will remain in agreement provided the full vertex and the full scalar propagator remain identical for the two scalars at higher orders in perturbation theory as well. If supersymmetry is to be preserved, the scalars should also acquire no mass dynamically. Studying the Schwinger-Dyson equations for the scalars, we observe that such a solution is allowed and in fact leads to a wavefunction renormalization function for the scalars which is exactly the same as that for the fermion. It will be even more interesting to see the role of supersymmetry in more complicated theories such as SQED. The studies so far carried out in the superfield and component formalisms seem to arrive at different conclusions. We plan to present our work in this context in a future publication.

Acknowledgements

This work was partly supported by a TWAS-AIC award and CONACYT-SNI (México). AB is also grateful for the hospitality of the Instituto de Física, Benemérita Universidad Autónoma de Puebla (BUAP) during his stay there, where a part of the work was done.

Figure Captions

Fig. 1. Schwinger-Dyson equation for the fermion propagator in the Yukawa model. The solid lines represent fermions and the dashed line the scalar. The solid dots indicate full, as opposed to bare, quantities.

Fig. 2. One-loop perturbative expansion of the vertex in the Yukawa model.

Fig. 3. The dynamically generated mass $`M/\mathrm{\Lambda }`$ versus the 3-point coupling $`\alpha `$ in the Yukawa model. The critical coupling is $`\alpha _c=\pi /4`$, above which the mass can be seen to bifurcate away from the chirally symmetric solution. The numerical result is shown together with $`+`$s marking the fit to the form $`M=\mathrm{\Lambda }\mathrm{exp}\left[-A/\sqrt{\alpha /\alpha _c-1}+B\right]`$ with $`A=0.97\pi `$ and $`B=1.45`$.

Fig. 4. The wavefunction renormalization function $`F(p^2)`$ in the Yukawa model for various values of the coupling $`\alpha `$. The solid line corresponds to the analytical expression $`1-\alpha /4\pi +(\alpha /2\pi )\mathrm{ln}(p^2/\mathrm{\Lambda }^2)`$, valid in the case of no mass generation, for $`\alpha =0.78`$.

Fig. 5. Schwinger-Dyson equation for the fermion propagator in the Wess-Zumino model.

Fig. 6. One-loop perturbative expansion of the vertex in the Wess-Zumino model. We show the case of scalar $`A`$. A similar diagram exists for scalar $`B`$.

Fig. 7. Schwinger-Dyson equation for the scalar propagator in the Wess-Zumino model.
# Does a varying speed of light solve the cosmological problems?

## I Introduction

Inflationary models were originally proposed as a solution to some of the most fundamental problems of the standard cosmological model, namely the horizon, flatness and monopole problems. In the context of inflation the solution to these problems is achieved through a period of very rapid expansion induced by a huge vacuum energy. Despite the lack of a single, well motivated particle physics model for inflation, these models are highly successful in providing solutions to such cosmological puzzles. Still, it is crucial to investigate if other scenarios could also solve some of these cosmological problems, or even others which inflationary models do not address (such as the cosmological constant problem). In particular, it is important to establish what general conditions are required of a theory which is capable of providing solutions to such problems. There have been recent claims for a time-varying fine structure constant detected by comparing quasar spectral lines in different multiplets. These possible variations in the dimensionless parameter $`\alpha `$ can be interpreted as variations in dimensional constants such as the electric charge, the Planck constant or the speed of light in vacuum. Albrecht and Magueijo have recently proposed a generalisation of General Relativity incorporating a possible change in the speed of light in vacuum (c) and the gravitational constant (G)—see also . They have shown that their theory can solve many of the problems of the standard cosmological model, including the horizon, flatness and cosmological constant problems, at the price of breaking covariance and Lorentz invariance. They also make the additional ex nihilo assumption of minimal coupling at the level of Einstein’s equations. Here we take a pedagogical look at this problem by asking if one can restore some of the above principles and still obtain a theory which, prima facie, provides credible solutions to the standard cosmological enigmas. With this aim, we propose a new generalisation of General Relativity which also allows for arbitrary changes in the speed of light, $`c`$, and the gravitational constant, $`G`$. Our theory is both covariant and Lorentz invariant, and for $`G/c^2=\mathrm{constant}`$ both mass and particle number are conserved. We solve the Einstein equations for Friedmann universes and show that the solution of the flatness, horizon or $`\mathrm{\Lambda }`$ problems always requires conditions similar to the ones found in the context of the standard cosmological model. We therefore argue that a theory that reduces to General Relativity in the appropriate limit and solves the horizon and flatness problems of the standard cosmological model must either violate the strong energy condition (which is what inflation does), Lorentz invariance or covariance. Stronger requirements are needed in order to also solve the cosmological constant problem. In a subsequent publication we shall show that our approach and that of Albrecht and Magueijo can be further distinguished by an analysis of their corresponding structure formation scenarios.
Hence, before investigating the cosmological consequences of a variable speed of light theory, we must specify our choice of units. Here, we choose our unit of energy to be the Rydberg energy $`E_R=m_ee^4/2(4\pi \epsilon _0)^2\hbar ^2`$, our unit of length to be the Bohr radius ($`a_0=4\pi \epsilon _0\hbar ^2/m_ee^2`$) and our time unit to be $`\mathrm{\Delta }t=\hbar /E_R`$. Using these units, a measure of the velocity of light will be a measure of the dimensionless quantity
$$\frac{c\hbar }{a_0E_R}=\frac{8\pi \epsilon _0}{\alpha }$$ (1)
where $`\alpha =e^2/(\hbar c)`$ is the fine structure constant. Hence, by choosing appropriate units we are able to interpret a variation in the fine structure constant $`\alpha `$ as being due to a change in the speed of light, c. It is possible to redefine our unit of time in such a way that $`c`$ and $`e`$ remain fixed while $`\hbar `$ varies proportionally to $`\alpha ^{-1}`$, by making
$$\mathrm{\Delta }t^{\prime }=\alpha \mathrm{\Delta }t=\frac{\alpha \hbar }{E_R}.$$ (2)
In this article we shall not address the problem of which mechanism could induce a change in $`\alpha `$, but we concentrate on the cosmological implications of such a variation. Given that c is a constant in these units, we may specify our theory of gravity to be Einstein’s General Relativity with a variable gravitational constant $`G`$. The Planck constant ($`\hbar `$) and the gravitational constant ($`G`$) will be functions of the space-time position. However, we will assume, for the sake of simplicity, that to zeroth order $`\hbar `$ and $`G`$ are functions of the cosmological time only. The theory specified in this way will clearly be Lorentz invariant. Our theory implies a modification of quantum mechanics in order to incorporate a variable $`\hbar `$ (as indeed do the theories discussed in ). However, for $`E_R^{-1}\dot{\hbar }\ll 1`$ the variation of the Planck constant is very small on an atomic timescale. This means that quantum mechanical results for atomic behaviour will hold to a very good approximation, with only a simple modification $`\hbar \to \hbar (t)`$. Nevertheless, we do expect that such changes will have observational consequences (e.g., for black body curves). We will discuss this in more detail elsewhere, but here we simply point out that this (and many other constraints) will force any significant changes in the fundamental constants to happen very early in the history of the universe. If we now switch to our original (and more natural) choice of units, we will be left with a theory which has a variable $`c`$ but which is just analogous to General Relativity. The mass of an electron is a constant in these units, and it will be implicitly assumed that in the absence of any interactions (including gravity) the average distance between particles will remain a constant. From now on we will stick to our original choice of time unit, $`\mathrm{\Delta }t=\hbar /E_R`$. In passing, we note that it would also be possible, by making appropriate changes of units, to interpret the variation of the fine structure constant as a variation in the electric charge $`e`$. This different choice of units would lead to a different interpretation of the theory, but at the end of the day the physical consequences of the model would be the same. In our model the Einstein equations take the usual form
$$G_{\mu \nu }-g_{\mu \nu }\mathrm{\Lambda }=\frac{8\pi G}{c^4}T_{\mu \nu },$$ (3)
but now arbitrary variations in $`c`$ and $`G`$ will be allowed.
Contrary to Albrecht and Magueijo, we do not assume ab initio that variations in the speed of light do not introduce corrections to the curvature terms in the Einstein equations in the cosmological frame. In our model variations in the velocity of light are always allowed to contribute to the curvature terms. These contributions are computed from the metric tensor in the usual way. Note that our only assumption is that both $`\alpha `$ and (consequently) $`c`$ are functions of the cosmological time $`t`$ only, to zeroth order. One can see that this is essentially similar to the much more familiar assumption that both density and pressure are functions of the cosmological time only, to zeroth order in the metric perturbations. Consequently, this does not break the covariance of the theory. With the line element
$$ds^2=a^2\left[c^2dt^2-\frac{dr^2}{1-Kr^2}-r^2d\mathrm{\Omega }_2^2\right],$$ (4)
the Friedmann and Raychaudhuri equations in our theory are given by
$$\left(\frac{\dot{a}}{a}\right)^2=\frac{8\pi G}{3}\rho -\frac{Kc^2}{a^2}$$ (5)
$$\frac{\ddot{a}}{a}=-\frac{4\pi G}{3}\left(\rho +3\frac{p}{c^2}\right)+\frac{\dot{c}}{c}\frac{\dot{a}}{a}.$$ (6)
Here, $`\rho c^2`$ and $`p`$ are the energy and pressure densities, $`K=0,\pm 1`$ and $`G`$ are the curvature and the gravitational constants, and the dot denotes a derivative with respect to proper time. These can be combined into a conservation equation
$$d(G\rho a^{3\gamma }/c^2)/dt=0,$$ (7)
where $`\gamma `$ is defined by $`p=(\gamma -1)\rho c^2`$. If the factor $`G/c^2`$ is a constant then the mass density ($`\rho `$) is conserved. In general, however, the conserved quantity will not be what one usually defines as ‘energy’. This is due to our particular choices of ‘fundamental’ units. Unlike in the theory proposed by Albrecht and Magueijo, the curvature of the universe does not explicitly appear in the conservation equation. As we will show elsewhere, this difference is crucial for the ensuing structure formation scenarios. Note that it is easy to transform between our general coordinate system as specified by the line element (4) and one in which $`c_0=1`$ (we use the subscript zero to denote quantities measured in these coordinates). The transformation rules are
$$dt_0=c\,dt,$$ (8)
$$\rho _0=\rho c^2,$$ (9)
$$p_0=p,$$ (10)
$$G_0=G/c^4.$$ (11)
It is then straightforward to check that, for example, the Friedmann and Raychaudhuri equations (5,6) transform in the correct way. We note in passing that in our theory the Planck time, given by
$$t_{\mathrm{Pl}}=\sqrt{\frac{G\hbar }{c^5}},$$
will be a variable for $`G\hbar /c^5\neq \mathrm{constant}`$. This means that we may enter the quantum gravitational epoch sooner or later than in the standard cosmological scenario, depending on the behaviour of both $`c`$ and $`G`$ at early times.

## III The flatness, horizon and Lambda problems

In order to solve the flatness problem the curvature term in equation (5) needs to be subdominant at late times. From the conservation equation we have that $`G\rho \propto c^2a^{-3\gamma }`$ and so equation (5) can be re-written as
$$\left(\frac{\dot{a}}{a}\right)^2=C\frac{c^2}{a^{3\gamma }}-K\frac{c^2}{a^2},$$ (12)
where $`K`$ and $`C`$ are constants.
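As a quick check of the statement about the transformation of the Friedmann equation (our own verification, spelled out), combine Eqs. (8)-(11) with $`da/dt_0=(1/c)\,da/dt`$:
$$\left(\frac{1}{a}\frac{da}{dt_0}\right)^2=\frac{1}{c^2}\left(\frac{\dot{a}}{a}\right)^2=\frac{1}{c^2}\left[\frac{8\pi G}{3}\rho -\frac{Kc^2}{a^2}\right]=\frac{8\pi G_0}{3}\rho _0-\frac{K}{a^2},$$
which is the standard Friedmann equation in the coordinates with $`c_0=1`$.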
We can see that the condition necessary for the curvature term to be subdominant at large $`a`$ is

$$\gamma \le 2/3.$$ (13)

This is just the condition necessary to solve the flatness problem in the standard cosmological model. Consequently, no solution to the flatness problem arises naturally in this model. On the other hand, a solution to the horizon problem can be achieved by having a period in the history of the universe in which the scale factor can grow faster than the proper distance to the horizon,

$$d_H=a(t)\int_0^t\frac{c\,dt^{\prime}}{a(t^{\prime})}.$$ (14)

We can easily see that the scale factor can grow faster than $`d_H`$ only if

$$\gamma \le 2/3.$$ (15)

Hence, the condition necessary to solve the horizon problem is again identical to the one for the flatness problem and is no different from what we obtain in the standard cosmological model: we must violate the strong energy condition. Finally, a cosmological constant $`\mathrm{\Lambda }`$ can be accounted for by including the cosmological constant mass density ($`\rho _\mathrm{\Lambda }\equiv \mathrm{\Lambda }c^2/8\pi G`$) in equations (5) and (6). In this case equation (5) becomes

$$\left(\frac{\dot{a}}{a}\right)^2=C\frac{c^2}{a^{3\gamma }}-\frac{Kc^2}{a^2}+\frac{\mathrm{\Lambda }c^2}{3}.$$ (16)

The condition necessary for the cosmological constant term in equation (16) to become negligible at late times is just

$$\gamma \le 0.$$ (17)

Again this is exactly the same condition we get in the standard cosmological model. We therefore conclude that our theory does not provide a solution to the standard cosmological problems. Note that our results do not depend on any assumptions about the specific behaviour of $`c`$ and $`G`$. In particular, they hold whether or not mass and particle number are conserved.

## IV Discussion and conclusions

In this article we have explicitly constructed a generalisation of General Relativity which is both covariant and Lorentz invariant and obeys the strong energy condition, and shown that in such a theory arbitrary time-like variations in $`c`$ and $`G`$ will not lead to a solution of some of the most important problems of the standard cosmological model. The main drawback of the theory we have constructed is that it is incomplete, in the sense that a model for the dynamics of $`c`$ and $`G`$ is not presented. Also, another outstanding issue which needs further discussion is that of the possible observational consequences of the required modifications to quantum mechanics. From our discussion above, it is easy to see that the reason why such a theory cannot solve the standard cosmological enigmas is that one can always find a choice of time unit in which this theory will be identical (roughly speaking) to the standard cosmological model. Furthermore, this should be true of any theory which (a) is covariant and Lorentz invariant, (b) reduces to GR in the appropriate limit, and (c) obeys the strong energy condition. The first two conditions ensure that there will be a choice of units such that the theory in question will reduce to the standard one, and then the condition required to solve the cosmological problems should be that (c) is violated. Note that this argument is not strictly a proof, since we have not provided a general way to find the required choice of units. However, we believe that it is physically clear from the discussion above (and in section II) that such a choice of units should exist.
On the other hand, the postulates of the theory of Albrecht and Magueijo, when translated into the above language, correspond to the assumption that one cannot find any choice of time unit in which the theory reduces to the standard one. In their theory this is achieved by breaking covariance and Lorentz invariance. Hence the above discussion leads us to conclude that to solve the standard cosmological problems in a theory which reduces to General Relativity (possibly after some appropriate changes in the ‘fundamental’ units) one must violate either the strong energy condition, Lorentz invariance, or covariance. Of course the above condition is necessary but not sufficient. Inflation is an obvious example of a theory which violates the first of the above principles. A theory such as the one proposed by Albrecht and Magueijo, on the other hand, can work because it violates the latter two. In this context, the fact that such a theory has a variable speed of light is, we think, only a minor ‘side effect’ of their postulates, and other possibilities would do just as well (e.g., a varying electric charge, etc.). Furthermore, we anticipate that it should be possible to construct theories which break covariance and Lorentz invariance, have ‘constant constants’ and can still solve the horizon and flatness problems, although one might also expect such theories to be even more contrived than the ones discussed here. In a subsequent paper, we will show that the approach of Albrecht and Magueijo has some other difficulties, in particular at the level of structure formation scenarios. Some of these are due to the fact that there are varying constants (and will therefore also appear in the theory we have proposed in the present paper), but others are specifically due to the way in which Lorentz invariance and covariance are broken in their approach. This implies that it is not clear that this approach can be a viable alternative to inflation. It is therefore interesting to ask if there is any other paradigm, apart from these two, that can provide analogous solutions to the cosmological enigmas.

###### Acknowledgements.

We would like to thank Paulo Carvalho and Paulo Macedo for enlightening discussions. P.P.A. is funded by JNICT (Portugal) under ‘Programa PRAXIS XXI’ (grant no. PRAXIS XXI/BPD/9901/96). C.M. is funded by JNICT (Portugal) under ‘Programa PRAXIS XXI’ (grant no. PRAXIS XXI/BPD/11769/97).
# HUBBLE SPACE TELESCOPE IMAGING OF THE CFRS AND LDSS REDSHIFT SURVEYS — III. Field elliptical galaxies at $`0.2<z<1.0`$

Footnote 1: Based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy Inc., under NASA contract NAS 5-26555.

Footnote 2: Based in part on data obtained through the facilities of the Canadian Astronomy Data Centre.

## 1 INTRODUCTION

The simplest models of galaxy evolution have massive elliptical galaxies forming at high redshift in a rapid collapse and monolithic burst of star formation. Such scenarios have been described by Eggen, Lynden-Bell, & Sandage (1962) and Larson (1975). Thereafter, elliptical galaxy stellar populations age passively, with no further star formation. Hierarchical clustering models (White & Rees 1978, Kauffmann, White, & Guiderdoni 1993) require that the most massive objects form at late times via the merging of smaller subunits. In this scenario, massive ellipticals would be assembled recently and would be absent from high-redshift galaxy surveys. At redshift $`z=1`$ the space density would be lower by a factor of 2-3 (Kauffmann 1996, Baugh, Cole, & Frenk 1996) compared to $`z=0`$ for $`\mathrm{\Omega }_0=1`$ (with a less dramatic decrease expected for smaller $`\mathrm{\Omega }_0`$). These two views of the assembly of massive ellipticals (early monolithic formation and recent merging) are diametrically opposed to one another. The monolithic collapse/passive evolution scenario will be referred to as the “orthodox” model because of the rigidity of its prohibition of any further star formation following the initial burst at formation. This prohibition renders the color and luminosity evolution that accompany the aging of the galaxy more easily predictable by theoretical means. The late-epoch merging or hierarchical clustering model will be referred to as the “secular” model. It is also useful to define an intermediate or “reform” model which has massive ellipticals assembling most of their stellar mass early (e.g., $`z>3`$) but where some fraction of the stars form later ($`z<2`$). The reform model is useful both because it represents a plausible physical scenario and because it helps to characterise the sensitivity of different types of observational tests of elliptical galaxy formation and evolution. In summary, the orthodox and reform models both require the assembly of the majority of the stellar mass of elliptical galaxies at high redshift, whereas recent merging plays a central role in elliptical galaxy formation in the secular model. A large fraction of the work on elliptical galaxies has been concentrated on the cluster environment where early-type galaxies are the dominant population. That work may provide useful insights into the origin and evolution of the field population, but those studies might also be misleading if environment plays a major role in galaxy formation and evolution.

### 1.1 Evidence in favor of the orthodox view

In many respects, cluster ellipticals appear to form a remarkably homogeneous population. The existence of a well-defined color-luminosity relation for early-type galaxies suggests a degree of homogeneity in that population since color depends on age and metallicity. Bower, Lucey, & Ellis (1992) show that the dispersion in the color-luminosity relation in Coma is small enough to require either a high formation redshift ($`z\gtrsim 3`$) or a high degree of synchronicity in galaxy formation times.
The existence of a low-scatter fundamental plane for early-type galaxies in clusters (Djorgovski & Davis 1987, Dressler et al. 1987) based on size, surface brightness, and velocity dispersion further implies a degree of regularity in the mass-to-light ratios, and therefore the stellar populations, among the early-type population. Jorgensen, Franx, & Kjaergaard (1996) find no dependence of the fundamental plane coefficients or scatter on environment (e.g., cluster richness or velocity dispersion), a conclusion supported by Pahre, de Carvalho, & Djorgovski (1998). Guzman & Lucey (1993) disagree, suggesting that ellipticals in lower density environments may have slightly younger stellar populations than those residing in regions of higher density. Forbes, Ponman, & Brown (1998) analyse a sample of (mostly field) ellipticals and find a correlation between the time since the last major episode of star formation and fundamental plane residuals. This argues that field ellipticals span a wide range in age. More recently, measurements of the early-type galaxy population have been pushed out to higher redshift. The well-defined color-luminosity relation in clusters at $`z=0.55`$ (Ellis et al. 1997) shows that the dispersion among early-type galaxies is small even at that epoch. However, the debate is still active regarding exactly how strong a constraint on elliptical galaxy formation is presented by the colour-magnitude relation (Kauffmann & Charlot 1998, Bower, Kodama, & Terlevich 1998). Studies of the cluster elliptical fundamental plane (van Dokkum & Franx 1996, Kelson et al. 1997, van Dokkum et al. 1998) and the weakening of the Mgb absorption as a function of redshift (Bender, Ziegler, & Bruzual 1996) support simple passively-evolving (orthodox) models. Purely photometric techniques (Pahre, Djorgovski, & de Carvalho 1996, Barrientos, Schade, & López-Cruz 1996, Schade et al. 1996) have been used to compile larger samples of ellipticals covering a wide range in redshift. Schade, Barrientos & López-Cruz (1997) show that the surface brightness of cluster elliptical galaxies to $`z\sim 1`$ evolves substantially ($`\mathrm{\Delta }\mu (B)\sim 1`$ mag) in a manner broadly consistent with early-forming and passively-evolving models (e.g., Bruzual & Charlot 1993). Color evolution of cluster ellipticals (Dressler and Gunn 1990, Aragon-Salamanca et al. 1993, Rakos and Schombert 1995, Oke, Gunn, & Hoessel 1996, Stanford, Eisenhardt, & Dickinson 1997) is found to be broadly consistent with passive evolution of an old stellar population with little recent star formation. Lilly et al. (1995) examined the distributions of $`<V/V_{max}>`$ for a photometrically-defined sample of early-type galaxies (i.e., the subset of red galaxies) in the Canada-France Redshift Survey (CFRS) and found that this population is distributed in space in a manner consistent with a constant space density to $`z\sim 1`$. The evidence cited above suggests that “orthodox” models of elliptical galaxy formation provide at least broad agreement with the observed properties of the cluster elliptical population over a range ($`z=0`$ to $`z\sim 1`$) in redshift. In the field, the situation is less well-defined. Sandage & Visvanathan (1978) show that the $`z=0`$ field and cluster color-luminosity relations are very similar and have comparably small scatter ($`\sigma \sim 0.10`$ mag), implying an evolutionary history that does not depend strongly on environment. Schade et al.
(1996) use ground-based imaging to compare cluster and field ellipticals to $`z=0.55`$ and find roughly similar amounts of luminosity evolution in the two environments. On the other hand, the work by Forbes, Ponman, & Brown (1998) and Guzman & Lucey (1993) suggests evolutionary histories that depend on environment, a result supported by Mobasher & James (1996).

### 1.2 Evidence against the orthodox view

Observational tests of purely passive evolution come in two flavors. In the first, it is only the orthodox view that can be rejected. That is, the result cannot be used to discriminate between the reform or secular (merging) model. The second flavor constitutes a direct test of the merging model and can provide a rejection of both the orthodox and reform points of view. Examples of the first flavor of test are studies that search for high-redshift galaxies that have the colors predicted by population synthesis models for passively-evolving stellar systems. Kauffmann, Charlot, & White (1996) claim evidence for strong evolution of the E/S0 population over $`0.2<z<1.0`$ from a $`V/V_{max}`$ analysis of the Canada-France Redshift Survey (CFRS; Lilly et al. 1995). Selection of a sample of early-type (i.e., E or S0) galaxies was attempted on the basis of optical colors predicted by evolving population synthesis models. Note that Kauffmann, Charlot, & White compare the sample selected in this way to the predictions of semi-analytic galaxy formation models for the number density of objects with the morphological properties of ellipticals. Their claim of strong evolution requires either that early-type galaxies are absent at high redshift (which would support the secular model) or that they are present but have colors that are blue enough to avoid detection by the particular color selection that was applied to compile the sample. The latter case would support the reform model since the blue color is presumably produced by ongoing star formation. Totani & Yoshii (1998) have repeated an analysis of the CFRS sample similar to the analysis by Kauffmann, Charlot, & White (1996) but do not support the conclusions of the earlier work. In particular, Totani & Yoshii use different evolutionary models to select early-type galaxies and they restrict the sample redshift range to $`0.2<z<0.8`$. The analysis yields results consistent with passive evolution models without merging. There are other studies searching for high-redshift galaxies with passively-evolving colors. Zepf (1997) concludes from a deep search for objects with very red optical-infrared colors that typical ellipticals must have had significant star formation more recently than $`z=5`$. Barger et al. (1998) use infrared observations to estimate that roughly 50% of the local space density of elliptical galaxies was in place with passively-evolving colors at $`1.3<z<2.2`$. Menanteau et al. (1998) conclude that there is a deficit of a factor of 3 or more in the observed number of red ellipticals compared to monolithic collapse models. Abraham et al. (1998) find limited evidence for a diversity of star-formation histories among a sample of 12 bright, morphologically-selected spheroids in the Hubble Deep Field but do not attempt to determine space density variations with redshift. As has been noted, the observational tests described in the preceding several paragraphs are in principle capable of rejecting only the orthodox model, even if the experiments are perfectly executed.
In other words, they may fail to detect the expected number of galaxies at high redshift because either a) the galaxies are absent (because they form via merging at an epoch later than the epoch of observation) or b) the galaxies are present but bluer than the threshold color. These experiments rely on theoretical models to predict the expected colors. Therefore, actual galaxies may be bluer than the theoretical threshold because they have undergone recent star formation (and thus violate the tenets of the orthodox model) or because the theoretical colors for passively-evolving populations are in error. For example, Jimenez et al. (1998) present an alternative model for the evolution of a stellar population where more than 90% of the stellar mass of ellipticals is formed in the first gigayear of their existence, yet these galaxies are substantially bluer than standard population synthesis models at intermediate redshifts. This study illustrates that uncertainties in modelling old stellar populations may be a significant contributor to uncertainties in the conclusions drawn from searches for very red galaxies at intermediate redshifts.

### 1.3 Evidence in favor of the merging scenario

Theories based on hierarchical clustering models of structure formation (e.g. White & Rees 1978) naturally predict that massive galaxies will be the last galaxies to form by merging. Numerical simulations of disk-disk mergers (White & Negroponte 1973, Barnes & Hernquist 1996) showing that the remnants have a number of properties similar to elliptical galaxies give this scenario further plausibility. On the observational side, studies of close pairs of galaxies (Zepf & Koo 1989, Carlberg, Pritchet, & Infante 1994, Patton et al. 1997) indicate that the galaxy interaction or merging rate rises steeply with redshift ($`\propto (1+z)^3`$). This result is supported by a study of the pair fractions and merging rate in the CFRS sample (Le Fèvre et al. 1999). Schweizer & Seitzer (1992) studied the relation between morphological features and colors among local early-type galaxies. They find evidence that the colors of field and group E and S0 galaxies are correlated with the presence of fine morphological structure believed to be produced by merging events. They argue that the systematic shift toward bluer colors with increased structure is likely produced by systematic variations in the mean age of the stellar population. The elapsed times since the last merger events are estimated to be in the 3-10 Gyr range. The elliptical galaxy population contains many examples of galaxy cores that are kinematically decoupled from the main body of the galaxy (Bender 1992 and references therein). Although this is clearly evidence against monolithic formation of the entire stellar population in ellipticals, it is not clear whether it requires major mergers or whether more minor interactions might be responsible (see, e.g., Hau & Thompson 1994). At high redshift there are tests that bypass color information (and thus bypass the uncertainties in the population synthesis models) by selecting early-type galaxies directly on the basis of morphology. Counts of galaxies as a function of morphological type (Driver et al. 1998) in the Hubble Deep Field (HDF) provide an indication at roughly the $`2\sigma `$ level of a deficit of elliptical galaxies at $`z>1`$ relative to purely passively-evolving models. Franceschini et al.
(1998) use a K-band selected sample together with morphological information to define a sample of 35 elliptical and S0 galaxies in the HDF with spectroscopic or photometric redshifts. These authors suggest that the episodes of star formation that produce field ellipticals are spread over a wide range in redshift ($`1<z<4`$) rather than concentrated in a single burst at any epoch (a similar result is found by Kodama, Bower, & Bell 1998). They also note a deficit of early-type galaxies at $`z>1.3`$. The HDF samples are small and originate in a single small field, and so are susceptible to the effects of clustering along the line of sight. (Footnote 3: As a check on this problem a subsample matching the selection criteria of the CFRS sample ($`I_{AB}\le 22.5`$) can be extracted from the Franceschini et al. sample. Depending on the morphology classification chosen (including or excluding S0s) and the assumed redshift of completeness for the CFRS ($`z=0.8`$ or $`z=1.0`$), the ratio of Franceschini to CFRS early-type galaxies per unit area is in the range 2.2 to 2.9, indicating a significant surplus of early-type galaxies with $`I_{AB}\le 22.5`$ in the HDF (relative to the CFRS, which is averaged over many lines of sight) up to $`z=1`$.) Clearly multiple lines of sight offer distinct advantages.

### 1.4 Evidence against the merging scenario

In an attempt to confirm the results of Schweizer & Seitzer (1992), Silva & Bothun (1998a) used near-infrared imaging to search for an intermediate-age (1-3 Gyr) stellar population among a sample of ellipticals with morphological signatures of mergers (largely overlapping the Schweizer & Seitzer sample). They find that these galaxies cannot be distinguished from those ellipticals without signs of interaction and that a small fraction ($`<15\%`$ at most) of the stellar mass can be attributed to an intermediate-age population. The test is based on the search for an intermediate-age asymptotic giant-branch population. The search for evidence of mergers is extended (Silva & Bothun 1998b) into the central regions of the galaxies, where gas-rich disk-disk mergers would be expected to produce a concentration of younger stars, but no compelling evidence for such a population is found. These authors argue that any merging that has occurred has not been accompanied by a strong burst of star formation, either distributed globally or concentrated in the central regions of the galaxies. A measurement of the space density of large ellipticals as a function of look-back time could produce a direct resolution to the issue of merging as an important process in forming elliptical galaxies. Density evolution can be detected using either the $`V/V_{max}`$ statistic (which tests the hypothesis that a population is distributed uniformly in distance) or by direct comparison of estimated space densities of large ellipticals at high and low redshift. If merging is important for producing early-type galaxies then there must be fewer large elliptical galaxies at high redshift and a correspondingly low value of the $`<V/V_{max}>`$ statistic. In a key piece of work, Im et al. (1996) construct the luminosity function (LF) for a sample of 376 E/S0 galaxies selected by morphology and luminosity-profile analysis but independent of color. They use HST imaging and photometric redshifts based on a training set of their own and CFRS redshifts and detect evolution in the characteristic luminosity of the early-type LF. They find a value of $`<V/V_{max}>=0.58\pm 0.01`$ for their entire sample.
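As an aside on the statistic quoted here: for each galaxy in a flux-limited sample, $`V`$ is the volume enclosed within the galaxy's distance and $`V_{max}`$ the volume within which it would still pass the flux limit; a spatially uniform population gives $`<V/V_{max}>=0.5`$. The sketch below is a deliberately simplified Euclidean illustration (no K-corrections or cosmological volume elements, which a real analysis needs; all sample values are fabricated).

```python
import numpy as np

# Euclidean V/Vmax sketch: V/Vmax = (d/d_max)^3, with d_max the largest
# distance at which the object would still satisfy m <= m_lim.
def v_over_vmax(m, m_lim):
    dmax_over_d = 10.0**(0.2 * (m_lim - m))
    return dmax_over_d**(-3.0)

rng = np.random.default_rng(1)
d = 5000.0 * rng.uniform(0.0, 1.0, 200000)**(1.0 / 3.0)  # Mpc, uniform in volume
M = -21.0                                # single absolute magnitude, fabricated
m = M + 5.0 * np.log10(d * 1.0e5)        # distance modulus (d in Mpc)
m_lim = 22.0
sel = m <= m_lim                         # flux-limited subsample
print(np.mean(v_over_vmax(m[sel], m_lim)))   # ~0.5 for a uniform population
```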
Note that the high value of $`<V/V_{max}>`$ implies an excess probability of finding ellipticals at high redshift. Presumably this excess is produced by the positive luminosity evolution rather than by an increase in space density. A correction for that evolution would reduce $`<V/V_{max}>`$. The value of $`<V/V_{max}>`$ and the form of the luminosity function are both consistent with a constant space density of elliptical galaxies to $`z\sim 1`$. The great strength of this study lies in the large number of galaxies used, which provides a firm statistical footing, but the weakness lies in the small numbers of spectroscopic redshifts used as a training set for the redshifts based on $`V-I`$ color, apparent magnitude, and half-light radius. In summary, there are strong theoretical reasons to believe that recent merging may be an important process governing the assembly of massive galaxies. Much of the evidence that very red (i.e. old and passively-evolving) galaxies are deficient in space density at high redshift is based upon small samples, so that the statistical uncertainty is large. In addition there is uncertainty in the theoretical models used to define the color threshold for the sample selection. In any case, studies based on color selection cannot discriminate between the secular (merging) and reform models, so they do not provide direct support for the merging scenario. Comparisons of morphologically selected samples (number counts or luminosity functions) disagree with one another. However, the Im et al. (1996) study is a strong piece of evidence that the space density of large ellipticals has changed little since $`z=1`$.

### 1.5 The present study

The definition of an elliptical galaxy used throughout this paper is based on two-dimensional fitting procedures and is independent of color. Only those galaxies that are well-fit by an $`R^{1/4}`$ profile (de Vaucouleurs 1948) with no significant evidence of an additional disk component are used. Those with asymmetry index greater than 0.10 (Schade et al. 1995; roughly the fraction of the total flux contained in the asymmetric component of the galaxy within 5 kpc of the galaxy center) have been excluded. This is the third of a series of four papers based on an amalgamation of redshifts and HST imaging for galaxies in the Canada-France Redshift Survey (Lilly et al. 1995a and references therein) and LDSS Redshift Survey (Ellis et al. 1996). The series forms a comprehensive study of the evolution of galaxy populations to $`z=1`$ using a complete sample of 341 galaxies with HST imaging. Brinchmann et al. (1998) (hereafter referred to as Paper I) discuss the morphological properties of the sample as a whole and Lilly et al. (1998) (hereafter referred to as Paper II) discuss the quantitative morphologies of the late-type population. Le Fèvre et al. (1998) (Paper IV) deals with the merging history of luminous galaxies. The present paper discusses the galaxy sample and the image processing procedures (§2); results for elliptical galaxies are presented in §3 and discussed in §4. Throughout this paper (unless otherwise noted) we have adopted $`H_0=50h_{50}`$ km s$`{}^{-1}`$ Mpc$`{}^{-1}`$ and $`q_0=0.5`$.

## 2 DATA AND PROCEDURE

### 2.1 CFRS/LDSS Imaging

Details of the sample selection and imaging data are given in Paper I and are summarised here.
HST imaging was obtained for a sample of 341 galaxies from the combined Canada-France Redshift Survey (limiting magnitude $`I_{AB}=22.5`$) and LDSS Redshift Survey (limit $`b_j=24`$; Ellis et al. 1996) datasets. Typical exposures were 6300 seconds in the F814W filter. A subset of the fields also had F450W images of 6300 seconds duration. Archival images in F814W and F606W (typically 4500 seconds) from the region where the CFRS overlaps the “Groth strip” were also used. These overlaps provided multiple images for 38 galaxies. The typical signal-to-noise ratio (defined using the total flux within a 3 arcsecond diameter aperture) of the HST imaging is $`\sim 100`$ at $`I_{AB}=22`$ and this is sufficient to allow visual classification or calculation of concentration index (Paper I) or to allow two-dimensional surface photometry.

### 2.2 Two-dimensional surface photometry

Given the modest signal-to-noise ratio of the high-redshift galaxy observations, it is appropriate to adopt simple models to characterise the luminosity profile of these galaxies. We adopt models that have been shown to adequately represent the luminosity profiles of most galaxies in the nearby universe (e.g., de Vaucouleurs 1948, Freeman 1970, Kormendy 1977, Kent 1985, Kodaira, Watanabe, & Okamura 1986, van der Kruit 1987, de Jong 1996, Courteau, de Jong, & Broeils 1996). Two-dimensional surface photometry was performed on the HST images via the fitting of two-component parametric models: an exponential disk (Freeman 1970) and an $`R^{1/4}`$ bulge (de Vaucouleurs 1948) (see also de Jong 1996). These models were convolved with an empirically-defined point-spread-function (PSF) and integrated over each pixel of the HST image. Free parameters were size (scale length or half-light radius), axial ratio, and position angle for each component, and fractional bulge luminosity $`B/T`$. Together with the galaxy center (assumed to be coincident for the two components) the fits thus require 9 parameters. Fitting was done using a modified Levenberg-Marquardt algorithm (Press et al. 1992). The technique is simple and robust and provides structural and surface brightness information in addition to a morphological classification. The problem of fitting two-component luminosity profiles has been addressed by Simien & de Vaucouleurs (1986) and Schombert & Bothun (1987). Real galaxies show a large variety of morphological features (bars, dust lanes, spiral structure, companions) and are often asymmetric, whereas the models adopted here are idealised and symmetric. In order to deal with these features and also to allow the computation of an asymmetry index, each galaxy was “symmetrized” by the following procedure (see also Elmegreen, B., Elmegreen, D., & Montenegro 1992 and Schade et al. 1995). First, the image of the galaxy was rotated by $`180^{\circ}`$ and subtracted from itself. The resulting “asymmetric” image (with both positive and negative asymmetric pixel values) was clipped so that only positive asymmetries more significant than $`2\sigma `$ remained. This provides an estimate of the asymmetric component of the galaxy image, and the difference formed by subtracting this from the original image yields the “symmetric” image. This symmetric image was subjected to the fitting procedure and the ratio of the asymmetric to symmetric flux is taken as an index of the relative importance of the asymmetric component of the galaxy.
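For concreteness, the symmetrizing step just described can be sketched in a few lines of code. This is an illustrative reconstruction only (following the spirit of the procedure above and of Schade et al. 1995); details such as centroiding the galaxy before rotation are omitted, and the per-pixel noise level `sigma` is assumed known.

```python
import numpy as np

def symmetrize(image, sigma):
    """Split an image into 'symmetric' and 'asymmetric' parts (sketch only)."""
    rotated = np.rot90(image, 2)                     # rotate by 180 degrees
    diff = image - rotated                           # +/- asymmetric residuals
    asym = np.where(diff > 2.0 * sigma, diff, 0.0)   # keep positive > 2 sigma
    sym = image - asym                               # image passed to the fits
    return sym, asym

def asymmetry_index(image, sigma):
    sym, asym = symmetrize(image, sigma)
    return asym.sum() / sym.sum()                    # ratio of asym to sym flux

# Example: a symmetric blob plus an off-center lump is flagged as asymmetric.
y, x = np.mgrid[-32:33, -32:33]                      # odd size: exact rotation center
galaxy = np.exp(-(x**2 + y**2) / 50.0)
galaxy[40:46, 40:46] += 1.0                          # hypothetical companion/plume
print(asymmetry_index(galaxy, sigma=0.01))           # ~0.23, i.e. above the 0.10 cut
```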
From a practical standpoint, the symmetric image is free of companions and other problems and results in a much cleaner and more reliable fitting process. Sky background was measured from large areas of the chips and was not a parameter of the fit. Simulations showed that fitting the sky as a free parameter resulted in an increased scatter in the fitted galaxy parameters but did not otherwise affect the results. With the sky held fixed, the models were normalised at each iteration to yield a minimum $`\chi ^2`$ given the current structural parameters. The fitting process results in measurements for each of the bulge and disk components individually. This allows independent analyses of the evolution of the disk components separately from the galaxy as a whole, or of the bulge components in isolation. Perhaps most importantly, the classifications (based solely on fractional bulge luminosity $`B/T`$) are independent of color and spectroscopic properties, so that no implicit assumptions are made about the stellar populations in individual galaxies. This approach allows a check of whether spectral energy distribution and morphology are correlated at high redshift and whether that correlation evolves. A crucial feature of the two-dimensional modelling as a means to measure structural parameters and surface brightnesses is the fact that we have applied these same techniques (Schade, Barrientos & López-Cruz 1997) to nearby elliptical galaxies (albeit in clusters rather than the field) that were observed with similar physical resolution and at similar signal-to-noise ratios. This greatly reduces concerns that some of our trends are due to systematic differences between the analyses of low and high redshift samples.

### 2.3 Visual examination of the fits

The evaluation of the success of the fitting process and the validity of the model fits was done by a combination of statistics and visual examination of the galaxy, the models, and, most importantly, the residuals. This procedure has subjective elements and needs a clear explanation so that the issue of possible bias can be addressed. The main function of the visual inspection is to decide whether a single-component or two-component (bulge-plus-disk) model is the best representation of the galaxy under consideration. The smallest reduced $`\chi ^2`$ is not always accepted as the best fit. Two-component models were rejected in favor of single-component models if 1) the fitted bulge component was larger and of lower surface brightness than the disk component (this is an explicit bias that we have imposed on the procedure and was the most common type of failure of the fitting process: it occurred in 3% of the galaxies), or 2) the second component was effectively being fitted to low-level residual asymmetries (that escaped the symmetrizing process) in the galaxy rather than a legitimate bulge or disk component (this overlaps with reason 1). The visual examination relies mainly on the residuals. If an examination of the residuals after the bulge fit has been subtracted shows no evidence for a disk component then the pure bulge fit will be accepted. This is true even if, as happens in some cases, the bulge+disk model fit has a slightly smaller value of $`\chi ^2`$. This subjective element of the procedure was evaluated by multiple examinations by one of us (DJS) of the fits, separated by substantial periods of time.
These repeated judgements showed that about 10% of early-type galaxies drifted from E to S0 or vice versa, depending on whether a disk component was considered plausible. There are 65 early-type (E or S0) galaxies, so that the elliptical sample of 46 galaxies may have contamination of about 15%. We adopt this as an estimate of the reliability of our early-type classifications, although we will later examine the effect of 25% contamination of the E sample, which we consider a limiting case.

### 2.4 Internal errors from duplicate observations

There exist 38 duplicate galaxy observations. These are independent observations (most being widely separated in time) which were reduced and analysed independently, so that they give good estimates of the internal errors of our fitting procedure including potential sources of error such as centroiding, PSF variations, sky subtraction, and the effect of the subjective visual examination procedure. The results of duplicate fits are shown in Figures 1 and 2. Figure 1 shows $`B/T`$, total $`I`$ magnitude, and the index of asymmetry. Filled symbols represent galaxies with good residuals (symmetric or “normal galaxies”) and open symbols indicate asymmetric galaxies (those with asymmetry index as defined in Schade et al. 1995 of $`R>0.10`$). It is important to note that classifications are considered valid only for symmetric galaxies and discussions of evolution of “normal” galaxies in this paper pertain only to those galaxies; asymmetric objects are excluded. The dispersions are indicated on the figures for the symmetric galaxies only. The dispersions for the symmetric (and all) galaxies are 0.04 (0.14) for $`B/T`$, with 70% of the objects having duplicate observations within $`\pm 0.10`$. In total magnitude, the dispersion is 0.05 (0.11) magnitudes. The dispersion in the asymmetry index is 0.06. Since the index results in a binary decision (symmetric or asymmetric) it is appropriate to look at the errors in classification. A cutoff in asymmetry index of 0.10 is adopted here and this results in 5/38 (13%) disagreements between the two classifications, whereas adopting a cutoff of 0.14 in asymmetry index (as adopted by Schade et al. 1995) results in 1/38 (3%) differences in classification. In summary, the repeat observations show that the fitting procedure produces consistent measures of $`B/T`$, total magnitude, and asymmetry index, with errors in the range 5-15%. Figure 2 shows the multiple-fit results for structural parameters. The dispersions for the symmetric (and asymmetric) disk scale length measurements are 0.04 (0.05) arcseconds, or about 10% for a typical disk, the surface brightness errors are 0.20 (0.46) mag for disks, and the total disk magnitude errors are 0.10 (0.14) mag. The corresponding figures for the bulge (elliptical) measurements are 0.04 (0.04) arcseconds in size, 0.10 (0.14) mag in surface brightness, and 0.04 (0.04) mag in total magnitude. The duplicate observations are a robust test of the entire analysis process and include all sources of random errors. The size (scale length or half-light radius) and magnitude errors are $`\sim 10\%`$ as inferred from these comparisons. The surface brightness measurement errors are $`\sim 10`$-$`20\%`$.

### 2.5 Magnitudes and K-corrections

The HST $`I_{814}`$ magnitudes from the surface photometry were compared to the ground-based CFRS $`I`$-band isophotal imaging and the mean difference was $`0.006`$ with a dispersion of 0.35 magnitudes.
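The internal error estimates in §2.4, like the HST-versus-ground comparison just quoted, rest on paired independent measurements of the same objects. A minimal sketch of the standard way to turn such pairs into a per-measurement scatter (our own illustration, with fabricated numbers):

```python
import numpy as np

# If m1 and m2 are independent measurements of the same quantities with a
# common error sigma, then Var(m1 - m2) = 2*sigma^2, so the per-measurement
# scatter is std(m1 - m2)/sqrt(2).
def single_measurement_scatter(m1, m2):
    return np.std(m1 - m2, ddof=1) / np.sqrt(2.0)

rng = np.random.default_rng(3)
truth = rng.uniform(18.0, 22.5, 38)          # 38 duplicates, as in the text
m1 = truth + rng.normal(0.0, 0.10, 38)       # fabricated 0.10 mag errors
m2 = truth + rng.normal(0.0, 0.10, 38)
print(single_measurement_scatter(m1, m2))    # ~0.10 mag
```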
The k-corrections were calculated according to the procedure given in Lilly et al. (1995). The $`(V-I)_{AB}`$ color straddles rest-frame $`B`$ band over $`0.2<z<0.9`$, and the use of the observed $`I`$-band with an interpolation of the Coleman, Wu, & Weedman spectral energy distributions yields typically small corrections ($`I`$-band corresponds to rest-frame B at $`z\sim 0.9`$) to obtain rest-frame B magnitudes. The B-band surface brightness is derived from the HST F814W observations, and the k-corrections and rest-frame $`(U-V)_{AB}`$ colors are from interpolation of ground-based $`(V-I)_{AB}`$ colors.

### 2.6 Emission-line measurements

Measurements of the \[OII\] equivalent width were taken from Table 1 of Hammer et al. (1997, CFRS-XIV) for the CFRS objects, or from the Autofib survey (Ellis et al. 1996), or otherwise were re-measured from the CFRS spectra themselves. The values in our Table 1 are in the observed frame.

### 2.7 Spectroscopic failures

In the sample of 341 galaxies with HST imaging and spectroscopy there are 18 objects classified morphologically as galaxies where no redshift could be determined from their spectra. These objects were processed along with the galaxies with redshifts. These objects are generally near the magnitude limit of the survey. The morphology distribution is skewed toward early-type galaxies: 7/18 (40%) are ellipticals compared to 12% Es in the entire sample. This effect is expected because the low frequency of emission lines among early-type galaxies makes redshift determination difficult, particularly at high redshift. The color distribution (see Fig. 5) of the ellipticals without redshifts suggests that most are at $`z>0.5`$. Lilly et al. (1995) have derived estimated redshifts for these galaxies using the $`I_{AB}`$ magnitudes, the $`(V-I)_{AB}`$ and $`(I-K)_{AB}`$ colors, and a measure of the compactness of the images, and these estimated redshifts are all at $`z>0.6`$. These estimated redshifts will be used in some of the analysis that follows.

## 3 RESULTS

The output from the fitting process consists of a classification (fractional bulge luminosity $`B/T`$), size (disk scale length $`h`$ or bulge half-light radius $`R_e`$), surface brightness, and luminosity of each of the bulge and disk components individually. Although colors from HST observations are available for a subset of the galaxies, ground-based $`V-I`$ colors (Lilly et al. 1995, Paper I) are used here to ensure homogeneity. Table 1 shows the parameters for the sample of CFRS/LDSS galaxies that are classified as ellipticals by the profile-fitting procedure.

### 3.1 Surface brightness

Figure 3 shows the luminosity-size ($`M_B`$–$`\mathrm{log}R_e`$) relation for those galaxies classified as ellipticals by the fitting and residual examination process. The sample is divided into three slices of redshift and each panel shows the best-fit, fixed-slope, linear fit as a solid line. The dashed line represents the local $`M_B`$–$`\mathrm{log}R_e`$ relation for cluster ellipticals ($`M_B(AB)=-3.33\mathrm{log}R_e-18.65`$) derived by Schade, Barrientos, & López-Cruz (1997). As described in that paper, the local cluster elliptical relation was determined from ground-based data with similar physical resolution and signal-to-noise ratio to the HST data used here and in that paper. The intention of using such data was to minimise systematic errors between our local fiducial $`M_B`$–$`\mathrm{log}R_e`$ relation and those derived at high redshift, which might occur if local data with much higher signal-to-noise ratio and resolution were used.
Galaxies more luminous than $`M_B(AB)=-20`$ were used to determine the mean shifts of the $`M_B`$–$`\mathrm{log}R_e`$ locus with “discrepant” points included and excluded. “Discrepant” points are defined as those where the visual classification given in Paper I was Sab or later, thus presenting an apparent conflict with the classifications presented here based on model fits. Duplicate observations for galaxies are treated as independent points, but results are also calculated excluding them. The shifts were estimated by minimising the residuals in magnitude (minimising the size residuals was also tried and the difference in the results was negligible) while holding the slope of the linear relation fixed at the value found by Schade, Barrientos, & López-Cruz (1997). The estimated shifts in luminosity are given in Table 2. There is significant evolution in the $`M_B`$–$`\mathrm{log}R_e`$ relation, although there is substantial scatter in the relation, particularly in the interval $`0.5<z<0.75`$. Figure 3 uses symbols coded to indicate discrepant points (defined above) and galaxies with measurable \[OII\] 3727 emission lines. Elliptical galaxies with strong \[OII\] emission (indicating star formation) might be expected to be over-luminous relative to passively-evolving galaxies if a normal initial-mass function is assumed. In the absence of that assumption it is possible that a small number of massive, UV-bright stars could drive the \[OII\] emission and blue colors without significantly enhancing the luminosity. The observations show that they are roughly equally distributed on both sides of the mean $`M_B`$–$`\mathrm{log}R_e`$ relation: no luminosity enhancement is seen. The substantial scatter might mask the expected effect, but this issue needs further investigation with a larger sample. We have also examined what effect would be produced if S0 galaxies are mis-classified as ellipticals. Such objects would preferentially lie to the right of the mean $`M_B`$–$`\mathrm{log}R_e`$ relation because their disks would cause the bulge half-light radius to be fit too large (this assumes that their bulges conform to the passive-evolution model appropriate to the ellipticals). This is not seen in the data. The conclusion is that none of these classes of objects (\[OII\]-strong, discrepant, possible mis-classified S0s) is an important causal factor in the observed shifts in the $`M_B`$–$`\mathrm{log}R_e`$ relations. We treat spectroscopic failures by adopting their photometrically-estimated redshifts, plotting them on Figure 3 as stars, and indicating the trajectories they would follow as a function of redshift (from $`z=0.2`$ to $`z=1.0`$). The amount of evolution in the size-luminosity relation is consistent with what is expected from passively evolving models of old stellar populations (e.g., Bruzual & Charlot 1993) for reasonable values of the initial stellar mass function if the population is sufficiently old.

### 3.2 Colors

Figure 4 shows the color-luminosity relations for CFRS field ellipticals (we exclude LDSS galaxies because they were observed in a different color system) in three redshift slices. We compute rest-frame $`(U-V)_{AB}`$ as outlined in §2.5. The dotted line (which coincides with the solid line in the low-redshift panel) is the adopted local relation from Bower, Lucey, & Ellis (1992) for Coma assuming a redshift of 0.0232, so that $`(m-M)_V=35.73`$ and $`(B-V)=0.96`$ for ellipticals. Noting that $`(U-V)_{AB}=(U-V)+0.7`$ and that $`B_{AB}=B-0.17`$ yields a Coma relation $`(U-V)_{AB}=-0.079M_{AB}(B)+0.51`$.
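Here and in §3.1 the evolution is quantified as a zero-point shift of a linear relation whose slope is held fixed at the local value. For a least-squares fit this shift is simply the mean residual; the sketch below illustrates the idea for the $`M_B`$–$`\mathrm{log}R_e`$ relation (slope and zero point are the local cluster values quoted in §3.1; the data points are fabricated for the example, and the color-relation fit proceeds analogously).

```python
import numpy as np

SLOPE, ZP = -3.33, -18.65     # local M_B(AB) vs log Re relation (Sec. 3.1)

def zero_point_shift(M_B, log_Re):
    # Offset dM minimizing sum((M_B - (SLOPE*log_Re + ZP + dM))**2):
    # the least-squares solution is the mean residual.
    return np.mean(M_B - (SLOPE * log_Re + ZP))

# Fabricated points placed 0.6 mag brighter than the local relation:
rng = np.random.default_rng(0)
log_Re = np.array([0.3, 0.5, 0.8, 1.0])
M_B = SLOPE * log_Re + ZP - 0.6 + rng.normal(0.0, 0.1, log_Re.size)
print(zero_point_shift(M_B, log_Re))   # ~ -0.6 mag, i.e. a brightening
```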
The colors of the ellipticals with $`0.2<z<0.5`$ in Figure 4 agree very well with the Coma relation. In what follows we have assumed that the observed colors of field ellipticals can be represented by a linear relation with the same slope as the local cluster relation (given above), modified only by a uniform shift in color. In fact, a Spearman rank correlation test does not provide significant evidence of the existence of a correlation between color and magnitude for the data in Figure 4, but this is not surprising given the errors and the shallowness of the slope of the color-magnitude relation. The shifts in the color-luminosity relation are computed holding the slope fixed and minimising the $`\chi ^2`$ residuals in $`(U-V)`$. Note that the errors in color (derived from ground-based $`(V-I)_{AB}`$ measurements) and luminosity (from HST measurements) are independent, except to the extent that $`(V-I)_{AB}`$ was used to compute the K-corrections. The results are given in Table 3 and the mean shifts are shown as solid lines in Figure 4. Note that the shift in $`(U-V)`$ for an actual galaxy requires a correction if that galaxy evolves in luminosity as well as color. For luminosity evolution of $`\mathrm{\Delta }M_B`$ and an observed color shift of $`\mathrm{\Delta }(U-V)`$, a correction of $`\frac{d(U-V)}{dM}\times \mathrm{\Delta }M`$ is needed, in the sense that a physical galaxy undergoes a smaller blueward shift. Table 3 shifts are not corrected for this effect. Ellis et al. (1997) study the color-luminosity relation for cluster E/S0 galaxies at $`z\sim 0.55`$. At this redshift the $`V_{555}-I_{814}`$ colors approximate rest-frame $`(U-V)`$. The observed mean color-luminosity relation for 3 clusters is shifted blueward by $`0.3`$ mag in $`V_{555}-I_{814}`$, which is consistent with the luminosity evolution-corrected shift of $`\mathrm{\Delta }(U-V)=0.26\pm 0.08`$ at $`z(\mathrm{median})=0.59`$ for field ellipticals measured here. At higher redshifts, Stanford, Eisenhardt, & Dickinson (1998) estimate the shift in U-infrared color to be $`0.4`$-$`0.6`$ mag at $`z=0.8`$-$`0.9`$ for a sample of cluster early-type galaxies, and we estimate a shift of $`\mathrm{\Delta }(U-V)\sim 0.4`$-$`0.6`$ mag from Figure 1 of Kodama, Bower, & Bell (1998) for early-type galaxies in the Hubble Deep Field. Our value of $`\mathrm{\Delta }(U-V)=0.68\pm 0.11`$ is consistent with those values. An examination of the coded symbols on Figure 4 does not suggest that the shift in mean color is produced by any of the sub-populations, e.g., galaxies with \[OII\] emission, discrepant galaxies, or spectroscopic failures. Although there is a hint that the ellipticals with measurable \[OII\] emission are bluer than those without \[OII\], the distributions in color of the two populations cannot be distinguished by a Kolmogorov-Smirnov test. A second quantity of interest, in addition to the shift in color, is the dispersion in color ($`\sigma `$) at each redshift. The observed dispersion is a combination of the intrinsic dispersion in the color-magnitude relation and the measurement errors. We remove the contribution from the measurement errors using a procedure from the Statistical Consulting Center for Astronomy at Pennsylvania State University (see also Akritas & Bershady 1996). The observations are denoted by $`Y_i`$ and their observational errors by $`\sigma _i`$, and the “de-biased” sample variance is given by $`N^{-1}\sum _i(Y_i-\overline{Y})^2-N^{-1}\sum _i\sigma _i^2`$.
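A minimal sketch of this de-biasing step (our own illustration with fabricated numbers; the uncertainty estimate for the de-biased variance follows in the next paragraph):

```python
import numpy as np

# Intrinsic scatter estimate: subtract the mean measurement variance from
# the raw sample variance (the "de-biased" variance given above).
def intrinsic_dispersion(Y, sigma):
    raw_var = np.mean((Y - Y.mean())**2)    # N^-1 * sum (Y_i - Ybar)^2
    noise_var = np.mean(sigma**2)           # N^-1 * sum sigma_i^2
    return np.sqrt(max(raw_var - noise_var, 0.0))

# Example: true intrinsic scatter 0.20 mag plus 0.15 mag measurement errors.
rng = np.random.default_rng(2)
sigma = np.full(500, 0.15)
Y = rng.normal(0.0, 0.20, 500) + rng.normal(0.0, sigma)
print(intrinsic_dispersion(Y, sigma))       # ~0.20
```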
An estimate of the variance of the de-biased sample variance is $`N^{-1}\sum _i(\xi _i-\overline{\xi })^2`$, where $`\xi _i=Y_i^2-\overline{Y^2}-2\overline{Y}(Y_i-\overline{Y})`$. The observational errors in $`(V-I)_{AB}`$ were propagated through the interpolation procedure to obtain the errors in $`(U-V)_{AB}`$. Given these individual errors, we estimate the intrinsic dispersion (and its 1$`\sigma `$ error) in the color-luminosity relation to be $`0.19\pm 0.08`$, $`0.29\pm 0.11`$, and $`0.31\pm 0.16`$ magnitudes in $`(U-V)`$ in the three redshift slices, respectively. An examination of the HST imaging shows that crowding is not a serious problem for the colors. The dispersions in color that we find for field ellipticals are larger than the dispersions measured by Bower, Lucey, & Ellis (1992) ($`\sigma <0.06`$) in the Virgo and Coma clusters and by Ellis et al. (1997) in 3 clusters at $`z\sim 0.55`$. The difference is significant at roughly the $`2\sigma `$ level. Caution requires us to note that a few errors in our classifications might have a large impact on the observed dispersion. In addition, our photometric errors may be slightly underestimated if, for example, the photometric zero points vary slightly from field to field. On balance we conclude that we have evidence that field ellipticals have a larger color dispersion at a given luminosity than those in clusters, but that the evidence is suggestive rather than conclusive. Kodama, Bower, & Bell (1998) examined the color-magnitude relation for early-type galaxies with spectroscopic or photometric redshifts in the Hubble Deep Field and concluded that half of those galaxies are as old as rich cluster ellipticals. The intrinsic scatter found for those galaxies with $`(U-V)>0.7`$ (which is effectively the more luminous part of the sample and is comparable to our own sample) was $`\sigma (U-V)=0.12\pm 0.06`$, compared to our somewhat larger value of $`0.31\pm 0.16`$ at a similar redshift.

### 3.3 Selection of early-type galaxies: morphology versus color

The central tenet of this paper is that we can reliably isolate a sub-population of galaxies—independent of redshift—with luminosity profiles that fall into a particular class, namely those whose profiles are well-fit by a de Vaucouleurs $`R^{1/4}`$ profile (with no other component required). For the purposes of this paper we define such galaxies as ellipticals provided, in addition, that they are not strongly asymmetric (asymmetry index $`R<0.10`$). These criteria exclude obvious S0 galaxies because they possess disk components. Nevertheless there are borderline cases. The E-S0 boundary is fuzzy. Figure 5 shows the observed $`(V-I)_{AB}`$ colors versus redshift for the CFRS sample, with those galaxies classified (from profile fitting) as ellipticals shown as filled circles. The galaxies with fractional bulge luminosity $`0.4<B/T<1.0`$ (loosely called S0s) are indicated by open squares. The majority of the ellipticals are among the reddest galaxies in the sample. What are the other galaxies that are very red? About 1/3 of the red non-ellipticals are classified as S0 galaxies (defined here as galaxies with fractional bulge luminosity $`B/T\ge 0.5`$), most of the remainder are Sa-Sb galaxies (defined by $`0.2\le B/T<0.5`$), with a few presumably reddened edge-on disks. The color distribution of the elliptical sample demonstrates that we are selecting—by morphology alone—a sample of objects with spectral energy distributions that are very different from the average properties of the galaxies in the full sample. This is encouraging.
However, the color distribution of ellipticals has large scatter and some disk galaxies are as red as many ellipticals. This is true independent of redshift, and thus any color cut to segregate a morphological class appears difficult (perhaps impossible). Another test of the credibility of our classifications is a comparison with more conventional methods of classification. For this purpose we use the visual classifications given for this dataset in Paper I. In that system the early-type galaxies have classes E=0, E/S0=1, S0=2. Figure 6 shows the observed $`(V-I)_{AB}`$ versus redshift plot for these galaxies. Filled symbols are those with visual classifications of E/S0 or earlier (numerical visual class $`\le 1`$ from Paper I). Galaxies with classifications of S0 (visual class 2) are indicated by open squares. The visual classification system and the two-dimensional profile-fitting system yield similar populations in terms of color versus redshift, but the dispersion in color for early-type galaxies is large using either method of classification. There is a fundamental difference in philosophy between selection of a galaxy sub-population by color and selection by morphology. The physical structure of a massive galaxy is more likely to be a stable quantity than is the color of a galaxy. A starburst involving a moderate fraction ($`\sim 10\%`$) of a galaxy’s mass is capable of radically altering the color of the galaxy (albeit temporarily). Such an event is unlikely to have as strong an effect on the apparent structure of the galaxy, although admittedly this depends on the spatial distribution and strength of star formation. Morphological selection keys on the apparent structure of the galaxy whereas color selection keys on the current state of the stellar population. This current state depends on the age distribution of its stars and the contemporaneous rate of star formation (as well as other quantities such as metallicity and the initial mass function). There is also a strong dependence on dust content, in the sense that galaxies with young or intermediate-age stellar populations can be shifted into the selection region by reddening. Kauffmann, Charlot, & White (1996) (hereafter KCW) have applied a color selection technique to the entire CFRS sample (the present sample is the subset of CFRS galaxies with HST imaging). Details of their evolving stellar population models are given in that paper. The line plotted on Figures 5 and 6 is the selection line (intended to represent the blue edge of the E/S0 population) applied by KCW to the CFRS sample. This line is a very poor fit to our morphologically-selected sample but gives us the opportunity to directly compare the morphological and color selection procedures. KCW test the hypothesis that giant ellipticals all formed at high redshift in a brief burst of star formation and that thereafter they evolve passively with no further substantial episodes of star formation. Under this hypothesis the colors and luminosities of those galaxies can be predicted as a function of redshift by modelling the stellar populations. As a practical matter, S0 galaxies are lumped together with ellipticals because their colors are similar in the field (Buta et al. 1994) and nearly identical in nearby clusters (Bower, Lucey, & Ellis 1992) and clusters at higher redshift (Ellis et al. 1997).
If the stellar models are accurate and the transformation into the observational plane is reliable, then all early-forming galaxies with no ongoing star formation can be detected and counted. A comparison can then be made of the space densities of those populations at high and low redshift. If all red galaxies formed long ago ($`z\gg 1`$) and have since stopped forming stars then the space density will be conserved. If a constant space density is not observed then either red galaxies have been created by merging since $`z=1`$ or those galaxies are bluer than the selection threshold because of recent star formation. Figures 5 and 6 show that neither the profile-fitting method nor the visual classification yields a sample that corresponds well with that selected by the color criteria of KCW. If a comparison is made with the two-dimensional profile-fitting classification system used in this paper, color selection is both incomplete (success rate of $`<58\%`$ for E/S0s) and suffers from contamination (47% of the color-selected sample are not E/S0s but are other red galaxies). Furthermore, the rates of completeness and contamination are a function of redshift. For example, the percentage of ellipticals detected is 57% at $`0.2<z<0.5`$, 36% at $`0.5<z<0.75`$, and 21% at $`0.75<z<1.0`$. The rates of contamination (the fraction of galaxies redder than the threshold that are not E/S0 galaxies) are 56% at $`0.2<z<0.5`$, 22% at $`0.5<z<0.75`$, and 57% at $`0.75<z<1.0`$. Evidently, the selection threshold adopted by KCW does not trace the evolution of the elliptical galaxy population as it is defined here. Neither does it trace the evolution of the E/S0 population as defined by the visual classification system of Paper I. The difference is attributable to the fact that many of our ellipticals are bluer than the KCW threshold (some have emission lines) and many disk-dominated systems are redder than their threshold. Thus, the claim by KCW that they have detected strong evolution in the early-type population is not valid. This point will be addressed in section 3.6, where the $`V/V_{max}`$ analysis will be repeated using the samples selected here. Selection by morphology bypasses potential problems with modelling the stellar populations, is affected very little by moderate episodes of star formation, and is not prone to include disk galaxies because of the presence of dust. If the structure of these galaxies is stable then the question is: can a set of morphological criteria be consistently applied over a range of redshift where surface brightness dimming (a $`(1+z)^4`$ effect) is strong? The most likely error would be failure to detect disks around bright elliptical components at high redshift because of suppression by surface-brightness dimming. At low redshift such a galaxy would be classified as S0. At high redshift such an object would be classified as elliptical. In section 3.6 we will construct samples that address these questions of possible errors in the application of the morphological selection.

### 3.4 Modelling the luminosity and color evolution

For the moment we will ignore the presence of emission lines in these galaxies and compare the evolution in color and luminosity to Isochrone Synthesis Spectral Evolution models (Bruzual & Charlot 1993, 1996). The GISSEL95 and GISSEL96 libraries give synthetic colors and luminosities for simple stellar populations (and composite populations) as a function of age with a range of metallicities.
The available constraints are the present-day colors of elliptical galaxies (from Buta et al. 1994), together with the evolution in color and luminosity. If we restrict our examination to orthodox models where the entire stellar population of elliptical galaxies is formed in an instantaneous burst, then the available model parameters are the redshift of formation (assumed to be the same for all galaxies), the metal abundance, the stellar initial mass function, and the choice of cosmology. The choice of a particular stellar atlas in GISSEL96 has a negligible effect on the observables in the present case. Initially we adopted a Salpeter (1955) initial mass function which has been shown (Charlot & Bruzual 1991, Bruzual & Charlot 1993) to reproduce the colors of local galaxies and we assume solar metallicity unless otherwise noted. We explore only cosmologies with $`H_0=50`$ km sec<sup>-1</sup> Mpc<sup>-1</sup> and $`\lambda =0`$ and use $`q_0=0.0`$ and 0.5. If ellipticals were a constant surface brightness population then the derived luminosity evolution would be independent of $`q_0`$. This is not the case. However, the dependence on $`q_0`$ is rather weak. At $`z=0.5`$ using $`q_0=0.0`$ yields only $`0.09`$ mag more evolution than $`q_0=0.5`$ and at $`z=1`$ the effect is $`0.18`$ magnitudes. In the following comparisons, these small corrections necessary for $`q_0=0`$ are applied to the theoretical curves rather than the data. The major effect of varying $`q_0`$ is to change the age of the stellar populations for a given formation redshift. A single-burst population formed at $`z=10`$ with $`q_0=0.5`$ is plotted on figure 7. The model luminosity evolution from $`z=0`$ to $`z=1`$ is somewhat larger than observed whereas the model colors are redder than those observed at $`z\sim 0.5`$. Note that this plot shows both field and cluster elliptical galaxies from previous work (reduced in a similar manner). The model luminosity evolution can be reduced if $`q_0=0.0`$ (since the population is older) but this makes the $`z=0`$ colors much redder and the color evolution is then far too flat. This problem is not relieved by using either of the other initial mass functions (Scalo 1986 or Miller & Scalo 1979) available in these models. Changing the metallicity fails to produce the necessary steeper increase of color with redshift. Since it is likely that the color systems of the models are not identical to the observations, and because of uncertainties in the models themselves (Charlot, Worthey, & Bressan 1996), differences of 2-3 tenths of a magnitude in absolute terms will not be considered a serious disagreement. We choose to give more weight to the comparison of differential effects with redshift. The observed rapid change with redshift in the mean color of the elliptical galaxy population can be reproduced by lowering the redshift of formation. However, in order to obtain even a reasonable agreement between observed and model color evolution a very recent formation epoch ($`z\sim 1.5`$) is required and this enhances the luminosity evolution, creating a serious discrepancy. Another way for the models to reproduce the observed color evolution is to superimpose bursts of star formation comprised of some fraction of the galaxy mass on top of the old, passively-evolving population.
Bursts at $`1<z<3`$ with mass fractions in the range 10-25% are capable of producing the color behavior but they fail to avoid the problem of over-producing luminosity evolution; the same problem as with a more recent, single-burst, formation epoch. A similar difficulty was noted in Hammer et al. (1997) in fitting the 4000 angstrom break of many of the “quiescent” objects with an old, single-burst population. The fundamental problem is that a change in $`(U-V)`$ of $`0.68\pm 0.11`$ mag should be accompanied by a much larger brightening (roughly 2 magnitudes in the B band) in a passively evolving system. One particular solution to the modelling problem is to superimpose a small burst of star formation at $`z=1`$. A model that fits well is the onset at $`z=1`$ of exponentially-declining star formation with an e-folding time of 1 Gyr (a $`\tau `$ model) and a mass of 2.5% of the final stellar mass of the galaxy. Such an effect might be produced by accretion of low-mass companions (see Silva & Bothun 1998a) but precludes a major star-formation episode accompanying a large disk-disk merger. Such a model fits very well if we adopt $`q_0=0`$ and a high redshift ($`z_f=10`$) of formation of the dominant old population. We show such a model on Figure 7. (Here we have adopted an abundance of 40% solar for the old population and a solar abundance for the star formation starting at $`z=1`$. The difference in color at $`z=0`$ between the simple models and the two-component models is due mostly to the lower abundance chosen for the old population in these two-component models.) This model is, of course, completely ad hoc. An attempt to provide serious constraints on the modelling process would require accurate mean colors and luminosities for elliptical galaxies over a wide range of redshift. Nevertheless, it is evidently possible to match the observations presented here with simple models if the dominant stellar population is old (thus the preference for low $`q_0`$) in order to match the slope of the luminosity versus age relation, but then some more recent ($`z\sim 1`$) star formation is required in order to match the strong color evolution. Our estimates of color evolution at $`z\sim 1`$ are broadly consistent with both the cluster work of Stanford, Eisenhardt, & Dickinson (1998) and the results for field galaxies by Kodama, Bower, & Bell (1998). However, precise comparisons with the present work are not possible and we emphasise the uncertainties associated with constraining models with the presently very limited set of observations. ### 3.5 \[OII\] Emission and star formation In the previous section we have ignored the presence of \[OII\] emission lines which indicate that these galaxies are not composed exclusively of populations formed at $`z>1`$. The fraction of galaxies exhibiting strong (equivalent width $`>15`$ angstroms) \[OII\] is roughly 30% independent of redshift. This compares to $`<3\%`$ of a sample of 104 nearby elliptical galaxies (Caldwell 1984). Clearly there has been strong evolution in the emission-line properties of elliptical galaxies since $`z=1`$. We have estimated the star-formation rates (SFRs) for these galaxies using the prescriptions of Kennicutt (1992) based on \[OII\] fluxes or equivalent widths. As emphasised in that paper there are substantial uncertainties in such estimates. The SFRs derived from continuum luminosity coupled with \[OII\] equivalent width compare very well with those SFRs derived directly from the \[OII\] fluxes.
We have adopted the former method and normalised the SFRs in each redshift bin by the total elliptical galaxy mass in that redshift bin (regardless of \[OII\] strength). The masses are estimated from the best-fit stellar populations model in the previous section ($`q_0=0.0`$, old population plus star formation onset at $`z=1`$) which gives a B-band luminosity of 8.3 mag per solar mass at $`z=0`$. The observed luminosities were corrected to their “de-evolved” luminosities at $`z=0`$ to yield masses. Galaxies with photometrically-estimated redshifts were included with \[OII\] equivalent widths of zero so that they contributed to the mass but not the star-formation rate. Figure 8 shows the star-formation rate per unit stellar mass in elliptical galaxies as a function of redshift. It is assumed that the SFR is zero for local ellipticals (Kennicutt 1998). If the star formation were constant from $`z=1`$ to the present and remained at the observed high-redshift rate then $`\sim 5\%`$ of the stellar mass in present-day elliptical galaxies would have formed since $`z=1`$. If, instead, the star formation rate is modelled as exponentially declining, a smaller estimated mass fraction results. ### 3.6 Merging: The density evolution of the early-type population The present study shows that some level of star formation is maintained in the elliptical population at $`z<1`$. Therefore, the “orthodox” model for the formation and evolution of elliptical galaxies is rejected. The remaining (very large) question is the role of late-epoch merging in producing massive ellipticals. The role of merging can be addressed directly by examining the space distribution of a complete sample of these objects. The $`V/V_{max}`$ statistic (Schmidt 1968, Lilly et al. 1995, Kauffman, Charlot, & White 1996) is one way to address this question. In the case of no evolution $`V/V_{max}`$ is uniformly distributed between 0 and 1 with a mean of 0.5. The expected $`1\sigma `$ error in the mean of a sample of $`n`$ objects is $`(12n)^{-1/2}`$. We have analysed the sample of CFRS ellipticals (excluding LDSS ellipticals in order to produce a homogeneous sample) using this technique and we compare our results with those obtained by KCW using the same technique on their color-selected subset of the whole CFRS sample. Table 4 presents results for the CFRS elliptical sample as well as the early-type sample derived by Brinchmann et al. (1998) in Paper I using visual classification techniques. We compute the mean value of $`V/V_{max}`$ and the probability ($`P_{ks}`$) that the distribution of $`V/V_{max}`$ is drawn from a uniform parent population (calculated using a Kolmogorov-Smirnov test). This directly represents the probability that a constant-space-density model is acceptable for a specific subsample, cosmology, and assumption about luminosity evolution as indicated in Table 4. We also estimate the exponent $`\gamma `$ and its $`1\sigma `$ confidence interval for an evolution law of the functional form $`F\propto (1+z)^\gamma `$. The value of $`\gamma `$ and its error are estimated by varying $`\gamma `$ until $`<V/V_{max}>=0.5\pm (12n)^{-1/2}`$. Luminosity evolution is incorporated into the analysis assuming that $`\mathrm{\Delta }M_B=-z`$, i.e., one magnitude of evolution to $`z=1`$. Each galaxy is corrected from its observed luminosity to a redshift-zero fiducial luminosity $`M_B(0)=M_B(observed)+z`$.
Then its apparent magnitude as a function of redshift is computed from $`M_B(z)=M_B(0)-z`$ using distance modulus and k-corrections. We have constructed and analysed the following subsamples. Sample 1 is composed of all elliptical galaxies (as judged from profile fitting) with spectroscopic redshifts. Sample 2 adds spectroscopic failures adopting their photometric redshifts. In sample 3 we assign all of the spectroscopic failures a redshift of $`z=0.2`$. These samples all give results consistent with no evolution at the 95% significance level regardless of cosmology or luminosity evolution. We address the issue of possible errors in the classifications between E and S0 galaxies (described in the previous section) in two ways. First, from the residuals of the fit we can make some estimate of which ellipticals are most likely to be mis-classified. We adopt what we consider to be an extreme level of contamination (25%) and exclude all of these “possible S0s” from Sample 1. This gives us Sample 4 and the procedure has little effect on the $`<V/V_{max}>`$ results. The second way that we address the classification error problem is to add all S0 galaxies (as judged from profile fitting) to Sample 1 to yield Sample 5. All of these results are consistent with no-evolution models of the population. Note that if we were to add spectroscopic failures at their photometric redshifts we would make $`<V/V_{max}>`$ larger whereas $`<V/V_{max}>`$ needs to be smaller than $`0.5`$ in order to produce the KCW result. An alternative to using the classifications given by the model-fitting approach is to adopt the visual classifications from Paper I. Samples 6 and 7 are constructed using these classifications without and with photometric redshifts, respectively. Sample 6 results in rejection of the no-evolution hypothesis at the 95% significance level for $`q_0=0.01`$. Sample 7 is preferred over sample 6 because it includes spectroscopic failures at their photometrically-estimated redshifts. Sample 7 yields a result consistent with no-evolution models. The results in Table 4 provide no evidence of evolution in the space density of early-type galaxies regardless of whether that population is defined as elliptical galaxies (by the profile-fitting method) or as E/S0 galaxies using the visual classifications. The most reliable samples in Table 4 are those where photometric redshifts have been adopted where spectroscopic redshifts were not available. These samples are the most reliable because they correct for the expected incompleteness in spectroscopic redshifts for early-type galaxies at high redshift. Considering those “best” samples indicates that no-evolution models are consistent with these datasets independent of cosmology and independent of luminosity evolution. This conclusion about the evolution of ellipticals from the $`V/V_{max}`$ statistic contradicts that reached by KCW. The color selection method employed in that paper fails to produce a complete, uncontaminated sample of early-type galaxies. Totani & Yoshii (1998) employ their own models to select early-type galaxies on the basis of color. They apply a sample cut-off at $`z=0.8`$ based on the assumption that the CFRS is not complete at higher redshifts. With this cut-off they find $`<V/V_{max}>=0.478\pm 0.035`$, which is consistent with no evolution. It is important to appreciate that the size of the error bars allows a factor of 2-3 change in the space density at the $`1\sigma `$ level.
Therefore, although the present work provides no evidence for significant evolution in the space density of early-type galaxies it does not rule out such evolution. A larger sample would be required to decide the question using a $`<V/V_{max}>`$ test. Finally, we can compare the observed space density of elliptical galaxies in the CFRS sample to the predictions based on the local luminosity function. For this purpose we use the E/S0 LF of Marzke et al. (1998) (a conversion is required from their $`H_0=100`$ km sec<sup>-1</sup> Mpc<sup>-1</sup> to our value of $`H_0=50`$ km sec<sup>-1</sup> Mpc<sup>-1</sup>). The LF is integrated over all galaxies brighter than $`M_B(AB)=-20.2`$ to yield a space density of $`3.47\times 10^{-3}`$ Mpc<sup>-3</sup>. The effective area of the HST subsample of the CFRS is taken to be 0.0096 deg<sup>2</sup>. The expected numbers are computed for the three redshift shells $`0.2<z<0.5`$, $`0.5<z<0.75`$, $`0.75<z<1.0`$. In the present case a negligible error is introduced by assuming that each galaxy can be detected throughout its redshift shell and simply multiplying the integral of the luminosity function above by the total volume within the shell, given the area of coverage. We assume that the luminosity evolution of the elliptical population is $`M(z)=M(z=0)-z`$ which approximates the results of the present study. Thus we count galaxies in $`0.2<z<0.5`$ brighter than $`M_B(AB)=-20.2`$, those in $`0.5<z<0.75`$ brighter than $`-20.7`$, and galaxies in $`0.75<z<1.0`$ brighter than $`M_B(AB)=-21.2`$. The comparison between observed and predicted numbers is given in Table 5. Because we are comparing classifications derived from visual inspection with those derived from profile-fitting (and at very different redshifts), there is the potential for systematic differences in the populations being counted. On the other hand, we have shown that the visual classifications done on this sample agree broadly with the profile-fitting classifications. A puzzling feature of the result shown in Table 5 is that we count roughly the same number of elliptical galaxies as are predicted for $`q_0=0.01`$ by the LF for elliptical and S0 galaxies combined. Data from Buta et al. (1994) suggest that S0 galaxies will comprise roughly two-thirds of the E/S0 sample whereas the LFs from Marzke et al. (1994) suggest that only one-quarter of an E/S0 sample will consist of ellipticals. Table 5 provides no evidence for a deficit of elliptical galaxies at high redshift. On the contrary, the problem is that there are more than expected if ellipticals constitute only a fraction (1/3 to 1/4) of the galaxies in the E/S0 luminosity functions used in the calculations. The problem is substantially worse in the case where $`q_0=0.5`$. However, if isolated ellipticals gradually accrete halo gas to build a disk (as suggested in some formation scenarios) then these high-redshift ellipticals might evolve into present-day S0 galaxies. An alternative is that we have failed to detect the faint disks around some early-type galaxies at high redshift that would have resulted in a classification of S0 rather than E. Either of these interpretations would result in perfect agreement with the numbers in Table 5 if $`q_0\sim 0`$.
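To make the mechanics of the $`<V/V_{max}>`$ test in this section concrete, here is a minimal sketch in Python. It is our own illustration, not the authors' code: the cosmology is reduced to a simple Einstein-de Sitter ($`q_0=0.5`$) comoving volume, and the survey-specific selection limits, k-corrections and the $`\mathrm{\Delta }M_B=-z`$ evolution law are assumed to be already folded into each galaxy's $`z_{max}`$ (the largest redshift at which it would still pass the magnitude limit).

```python
# Hedged sketch of the <V/Vmax> no-evolution test; `sample` is a hypothetical
# list of (z, z_max) pairs, one per galaxy.
import math

def comoving_volume(z, H0=50.0):
    # Einstein-de Sitter (q0 = 0.5) comoving volume as a simple stand-in.
    c = 2.998e5  # speed of light, km/s
    d = (2.0 * c / H0) * (1.0 - 1.0 / math.sqrt(1.0 + z))  # comoving distance, Mpc
    return (4.0 / 3.0) * math.pi * d ** 3

def mean_v_over_vmax(sample):
    ratios = [comoving_volume(z) / comoving_volume(z_max) for z, z_max in sample]
    n = len(ratios)
    mean = sum(ratios) / n
    sigma = (12.0 * n) ** -0.5  # expected 1-sigma error on the mean
    return mean, sigma

# A mean consistent with 0.5 +/- (12n)^(-1/2) is what Table 4 reports as
# "consistent with no evolution".
```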
## 4 CONCLUSIONS The analysis of the CFRS/LDSS sample of field elliptical galaxies reveals evolution in the $`M_B`$–$`\mathrm{log}R_e`$ relation so that a galaxy of a given size is more luminous by $`0.97\pm 0.14`$ magnitudes at $`z=0.9`$ than its local counterpart as estimated from the cluster elliptical locus. This apparent evolution in luminosity is accompanied by a bluing of these galaxies with look-back time. At $`z=0.9`$ the mean shift in color is $`0.68\pm 0.11`$ mag in rest-frame (U-V). The luminosity evolution is consistent with simple models of passive evolution of an old, single-burst stellar population but a small quantity (by mass) of more recent star formation is required to make the models match the observed color evolution. Approximately 1/3 of the elliptical galaxies at $`0.2<z<1.0`$ display \[OII\] emission lines, indicating star formation at rates that we estimate at $`0.5M_{\odot }`$ yr<sup>-1</sup> per $`10^{11}M_{\odot }`$ of stellar mass in elliptical galaxies so that perhaps a few percent of the stellar populations of these galaxies have formed since $`z=1`$. The estimates of luminosity and color evolution found here are consistent with those found in clusters (Schade, Barrientos, & López-Cruz 1997, Stanford, Eisenhardt, & Dickinson 1998). On the other hand, we have found evidence that the dispersion in the color-luminosity relation may be larger among field ellipticals at $`z>0.5`$ than the dispersion observed for early-type galaxies in Coma by Bower, Lucey, & Ellis (1992) or those in clusters at $`z=0.55`$ by Ellis et al. (1997). The presence of \[OII\] emission indicates that star formation is occurring in $`\sim 30\%`$ of the elliptical galaxies in this sample. Thus the “orthodox” view of elliptical galaxy formation (a single burst followed by no further star formation) is false. A larger dispersion in color among field ellipticals than among those in clusters (which is suggested by these data) would also argue in favor of a more diverse star formation history in field ellipticals than in cluster early-type galaxies, in the sense that more recent activity occurred in field galaxies, possibly because continued gas infall that could fuel ongoing star formation is suppressed in the cluster environment. We have repeated the $`<V/V_{max}>`$ test for early-type CFRS galaxies done by KCW using a subset of the CFRS sample with HST imaging. Our sample was selected on the basis of profile fitting rather than color (the technique they used). We find no evidence for evolution in the space density of large ellipticals over the range $`0.2<z<1.0`$ within the CFRS survey. We cannot rule out fairly large changes (factors $`\sim `$ 2-3) in density because of the small numbers in our survey. Nevertheless, our result contradicts the claim for detection of strong evolution by KCW because the color selection applied in that work did not detect all of the E/S0 galaxies that were present and also suffered from contamination by red disk-dominated galaxies. A more powerful test for a change in space density (although more prone to systematic error because of differences in the classification method) is a comparison of our observed number of field elliptical galaxies at high redshift with the predictions based on the E/S0 luminosity function of Marzke et al. (1998). This test indicates no deficit of early-type galaxies at $`z\sim 1`$. In fact, there is a surplus of ellipticals at high redshift unless the ratio of E to S0 galaxies increases at high redshift.
Such an effect might be spurious. It is plausible that faint disks might be missed at high redshift, resulting in mis-classification of S0 galaxies as ellipticals. Alternatively, it might be a real evolutionary effect where ellipticals are transformed into S0s via the development of disk components. Formally, we can say that, at the 95% confidence level, the number of ellipticals that we detect at $`0.75<z<1.0`$ is between $`0.71`$ and $`2.03`$ times the number of E/S0 galaxies predicted by the Marzke et al. (1998) luminosity function assuming $`q_0=0.01`$ and including spectroscopic failures. In summary, the results presented here are consistent with an early formation epoch for these elliptical galaxies but with some degree of ongoing star formation at $`z<1`$. We reject the orthodox model of elliptical galaxy evolution. We find no evidence that the space density of large ellipticals has changed since $`z=1`$. The result of Im et al. (1996) agrees very well with our result and carries substantially more statistical weight. These results directly challenge the view that significant numbers of elliptical galaxies have been formed by mergers since $`z=1`$. A final note on the question of merging. Le Fèvre (1999) finds evidence—from the same sample of galaxies studied here—that the rate of bright galaxy mergers increases steeply with redshift. That work suggests that a typical ($`L^{*}`$ at $`z\sim 1`$) galaxy will undergo 1-2 major merging events between $`z=1`$ and the present. On the other hand, the present study finds no evidence for a significant change in the space density of massive elliptical galaxies since $`z=1`$. Furthermore, Lilly et al. (1998) find no significant change in the space density of large disk galaxies over this redshift range. These observations can be reconciled if the enhanced level of interaction and merging that is seen is, in fact, taking place among galaxies that are somewhat less massive than present-day $`L^{*}`$ galaxies and if such interactions do not frequently produce large elliptical galaxies. ###### Acknowledgements. We acknowledge the support of NATO in the form of a travel grant. The research of SJL was supported by NSERC.
Figure 1: Forward-jet cross section from [] as compared to ZEUS data []. The differences between the three histograms are indicative of the uncertainty on the theoretical result. ## Acknowledgements The work on implementing a CCFM description of final states was done in collaboration with Giulio Bottazzi, Giuseppe Marchesini and Massimo Scorletti. I would also like to thank Yuri Dokshitzer, Marcello Ciafaloni and Dimitri Colferai for discussions.
# Overcoming priors anxiety ## 1 Introduction The main resistance scientists have to Bayesian theory seems to be due to their reaction in the face of words such as “subjective”, “belief” and “priors” (to which the word “bet” might also be added). These words sound blasphemous to those who pursue the ideal of an objective Science. Given this premise, it is not surprising that frequentistic ideas, which advertise objective methods at low cost in a kind of demagogical way, became popular very quickly and are still the most widely used in all fields of application, despite the fact that they are indefensible from a rational point of view. As in commercials, what often matters is just the slogan, not the quality of the product, at least in the short term. And advertised objective methods are certainly easier to sell than subjective ones. When one adds to these psychological effects yet others based upon political reasons (see, for example, the very interesting philosophical and historical introduction to Lad’s book), life gets really hard for subjective probability. Moving from the slogan to the product, it is not difficult to see that, if they were to be taken literally, frequentistic ideas would lead nowhere. Indeed their success seems due to a mismatch between what they state and how scientists interpret them in good faith. In other words, frequentistic methods make sense only if they are - when they can be - reinterpreted from a subjective point of view. Otherwise they may cause serious mistakes to be made. In recent years I have investigated this question among particle physicists. For the convenience of the reader, I report here the main conclusions which I reached: > * there is a contradiction between a cultural background in statistics and the good sense of physicists; physicists’ intuition is closer to the Bayesian approach than one might naïvely think; > * there are cases in which good sense alone is not enough and serious mistakes can be made; it is then that the philosophical and practical advantages offered by the Bayesian approach become of crucial importance; > * there is a chance that the Bayesian approach can become widely accepted, if it is presented in a way which is close to physicists’ intuition and if it can solve the “existential” problem of reconciling two aspects which seem irreconcilable: subjective probability and the honest ideal of objectivity that scientists have. This last point was just sketched in the original paper, and I would like to discuss it here in a bit more detail, and to relate it to the “problem” of priors, the main subject of this article. I think, in fact, that it is impossible to talk about priors without putting them into the framework to which they belong. Only when one is aware of the role they have in Bayes’ theorem, and of the role of Bayes’ theorem itself, can one have a relaxed relationship with them. Once this is achieved, depending on the specific problem, one may choose the most suitable priors or ignore them if they are irrelevant; or one may decide, instead, that priors are so relevant that only Bayes’ factors can be provided; alternatively one may even skip the Bayes theorem altogether, or use it in a reverse mode to discover which kind of priors might give rise to the final beliefs that one unconsciously has. These situations will be illustrated by examples. Before going any further, some clarifications are in order. First, my comments will be from the viewpoint of the “experienced scientist” (i.e.
the scientist who is used to everyday confrontation with real data); this point of view is often neglected, since priors (and questions of subjectivity/objectivity) tend to be debated among mathematicians, statisticians and philosophers. Second, since I am an experimental particle physicist, I am aware that my knowledge about the literature concerning the arguments I am going to talk about is necessarily limited and fragmentary. I therefore apologize if people who may have expressed opinions similar to those stated in this paper are not acknowledged here. ## 2 Subjective degrees of belief and objective Science The question “can subjective degrees of belief build an objective Science?” is subtle. If we take it literally, the answer is NO. But this is not because of the subjective degrees of belief in themselves. It is simply because, from a logical point of view, “objective Science” is a contradiction in terms, if “Science” stands for Knowledge concerning Nature, and “objective” for something which has the same logical strength as a mathematical theorem. This has been pointed out many times by philosophers, the strongest defence of this point of view being due to Hume, to whom there is little to reply. If, instead, “objective Science” stands for what scientists refer to by this expression, the question becomes a tautology. In fact, using Galison’s words, “experiments begin and end in a matrix of beliefs. …beliefs in instrument types, in programs of experimental enquiry, in the trained, individual judgements about every local behaviour of pieces of apparatus…”. Any scientist knows already that the only objective thing in science is the reading of digital scales. When we want to transform this information into scientific knowledge we have to make use of many implicit and explicit beliefs. However, many scientists are reluctant to use the word “belief”<sup>1</sup><sup>1</sup>1But many other scientists, usually prominent ones, do. And, paradoxically, objective science is, for those who avoid the word “belief”, nothing but the set of beliefs held by the most influential scientists in whom they believe… for professional purposes. It seems to me that the reason for this attitude is due to a misuse of the word “belief”, which has somehow led to a deterioration of its meaning. In this connection I think a few remarks are of particular importance. The first is that we should have Hume’s distinction between “belief” and “imagination” clear in mind. Then, once we agree on what “belief” is, and on the fact that it can have a degree, and that this degree depends necessarily on the subject who evaluates it, another important concept which enters the game is that of de Finetti’s “coherent bet”. The “coherent bet” plays the crucial role of neatly separating “subjective” from “arbitrary”. In fact, coherence has the normative role of forcing people to be honest and to make the best (i.e. the “most objective”) assessments of their degree of belief<sup>2</sup><sup>2</sup>2The coherence is also important to avoid confusion between “belief” and “convenience” (or “wish”). In other words, the tasks of assessing probability and of decision making should be kept separate.. Finally comes Bayes’ rule, which is the logical tool for updating degrees of belief. In my opinion there is a really good chance that this way of presenting the Bayesian theory will be accepted by scientists.
In fact the ideal of objectivity is easily recovered, although in terms of intersubjectivity, if scientific knowledge is regarded as a very solid Bayesian network (Galison’s “matrix of beliefs”), based on centuries of experimentation, with fuzzy borders which correspond to the areas of current investigation. ## 3 Choosing priors: fear, misconception and good faith Once we have specified the exact meaning of each of the ingredients entering probabilistic induction (degree of belief - coherent bet - Bayes’ rule), there should, in principle, no longer be a problem. However, all Bayesians know by experience that the most serious concerns scientists have are related to the choice of priors (sometimes due to real technical problems, but more often due only to “prejudices on priors”). In fact, practitioners can avoid talking about “degree of belief” in their papers, replacing it by the nobler term “probability”; they can accept the use of Bayes’ theorem, because it is a theorem; but it seems they cannot escape from priors. And they often get stuck, or simply go back to “objective” frequentistic methods. In fact, the choice of the prior is usually felt to be a vital problem by all those who approach the Bayesian methods with a purely utilitarian spirit, that is, without having assimilated the spirit of subjective probability. Some use “Bayesian formulae” simply because they “have been proved”, by Monte Carlo simulation, to work in a particular application. Others seem convinced by the power of Bayesian reasoning, but they are embarrassed because of the apparent “arbitrariness” of the choice of priors. It might seem that reference priors (in this paper I will refer only to Jeffreys’ priors, the most common in Physics applications) have a chance of attracting people to Bayesian theory. In fact, reference priors enable practitioners to avoid the responsibility for choosing priors, and give them an illusion of objectivity analogous to that offered by frequentistic procedures. However, I have some perplexity about the uncritical use of reference priors, for philosophical, sociological and practical reasons which I am now going to explain. ### 3.1 Bayesian dogmatism and its dangers Although I agree, in principle, that a “concept of a ‘minimal informative’ prior specification - appropriately defined!” is valid, those who are not fully aware of the intentions and limits of reference analysis perceive the Bayesian approach to be dogmatic. Indeed, one can find indiscriminate use and uncritical recommendation of reference priors in books, lecture notes, articles and conference proceedings on Bayesian theory and applications. This gives to practitioners the impression that only those priors blessed by the official Bayesian literature are valid. This would be a minor problem if the use of reference priors, instead of more motivated ones, merely caused a greater or lesser difference in the numerical result. However, the question becomes more serious when the - perhaps unwanted - dogmatism is turned against the Bayesian theory itself. I would like to give an example of this kind which concerns me very much, because it may influence the High Energy Physics community to which I belong.
In a paper which appeared last year in Physical Review it is stated that > “For a parameter $`\mu `$ which is restricted to $`[0,\infty ]`$, a common non-informative prior in the statistical literature is $`P(\mu _t)=1/\mu _t`$…In contrast the PDG<sup>3</sup><sup>3</sup>3PDG stands for “Particle Data Group”, a committee that every second year publishes the Review of Particle Properties, a very influential collection of data, formulae and methods, including sections on Probability and Statistics.description is equivalent to using a prior which is uniform in $`\mu _t`$. This prior has no basis that we know of in Bayesian theory.” This example should be taken really very seriously. The authors, in fact, use the pulpit of a prestigious journal to make it seem as if they understand deeply both the Bayesian approach and the frequentistic approach and, on this basis, they discourage the use of Bayesian methods (“We then obtain confidence intervals which are never unphysical or empty. Thus they remove an original intention for the description of Bayesian intervals by the PDG”). So it seems to me that there is a risk that indiscriminate use of reference priors might harm the Bayesian theory in the long term, in a similar way to that which happened at the end of last century, as a consequence of the abuse of the uniform distribution. This worry is well expressed in Earman’s conclusions to his “critical examination of Bayesian confirmation theory”: > “We then seem to be faced with a dilemma. On the one hand, Bayesian considerations seem indispensable in formulating and evaluating scientific inference. But on the other hand, the use of the full Bayesian apparatus seems to commit the user to a form of dogmatism”. ### 3.2 Unstated motivations behind Jeffreys’ priors? Coming now to the specific case of Jeffreys’ priors, I must admit that, from the most general (and abstract) point of view, it is not difficult to agree that “in one-dimensional continuous regular problems, Jeffreys’ prior is appropriate”. Unfortunately, it is rarely the case that in practical situations the status of prior knowledge is equivalent to that expressed by the Jeffreys’ priors, as I will discuss later. Reading “between the lines”, it seems to me that the reasons for choosing these priors are essentially psychological and sociological. For instance, when utilized to infer $`\mu `$ (typically associated with the “true value”) from “Gaussian small samples”, the use of a prior of the kind $`f_{\circ }(\mu ,\sigma )\propto 1/\sigma `$ has two apparent benefits: * first, the mathematical solution is simple (this reminds me of the story of the drunk under the streetlamp, looking for the key lost in the dark alley); * second, one recovers the Student distribution, and for some it seems to be reassuring that a Bayesian result gets blessed by “well established” frequentistic methods. (“We know that this is the right solution”, a convinced Bayesian once told me…) But these arguments, never explicitly stated, cannot be accepted, for obvious reasons. I would like only to comment on the Student distribution. This is the “standard way” for handling small samples, although there is, in fact, no deep reason for aiming to get such a distribution for the posterior. This becomes clear to anyone who, having measured the size of this page twice and having found a difference of 0.3 mm between the measurements, then has to base his conclusion on that distribution.
Any rational person will refuse to state that, in order to be 99.9 % confident in the result, the uncertainty interval should be 9.5 cm wide (any carpenter would laugh…). This might be the reason why, as far as I know, physicists don’t use the Student distribution. Another typical application of the Jeffreys’ prior is in the case of inference on the $`\lambda `$ parameter of a Poisson distribution, having observed a certain number of events $`x`$. Many people have, in fact, a reluctance to accept, as an estimate of $`\lambda `$, a value which differs from the observed number of counts (for example, $`\text{E}(\lambda )=x+1`$ starting from a uniform prior) and which is deemed to be distorted by the “distorted” frequentistic criteria used to analyse the problem. In my opinion, in this case one should simply educate the practitioners about the difference between the concept of maximum belief and that of prevision (or expected value). An example in which the choice of priors becomes crucial is the case where no counts are observed, a typical situation for frontier physics, where new and rare phenomena are constantly looked for. Any reasonable prior consistent with what I like to call the “positive attitude of the physicists who have pursued the research”, allows reasonable upper limits compatible with the sensitivity of the experiment to be calculated (even a uniform prior is good for the purpose). Instead, a prior of the kind $`f_{\circ }(\lambda )\propto 1/\lambda `$ prevents use of probabilistic statements to summarize the outcome of the experiment, and the same result ($`0\pm 0`$) is obtained, independently of the size, sensitivity and running time of the experiment. I will return below to such critical situations which are typical of frontier science. ## 4 Priors for routine applications Let us discuss now the reasons which indicate that experimentally motivated priors for “routine measurements” are quite different from Jeffreys’ priors. This requires a brief reminder about how measurements are actually performed. I will also take the opportunity to introduce the International Organization for Standardization (ISO) recommendations concerning measurement uncertainty. ### 4.1 Unavoidable prior knowledge behind any measurement To understand why an “experienced scientist” has difficulty in accepting a prior of the kind $`f_{\circ }(\sigma )\propto 1/\sigma `$ (or $`f_{\circ }(\mathrm{ln}(\sigma ))=k`$), one has to remember that the process of measurement is very complex (even in everyday situations, like measuring the size of the page you are reading now, just to avoid abstract problems): * first one has to define the measurand, i.e. the quantity one is interested in; * then one has to choose the appropriate instrument, one which has known properties, well-suited range and resolution, and in which one has some confidence, achieved on the basis of previous measurements; * the measurement is performed and, if possible, repeated several times; * then, if one judges that this is appropriate, one applies corrections, also based on previous experience with that kind of measurement, in order to take into account known (within uncertainty) systematic errors; * finally<sup>4</sup><sup>4</sup>4This is not really the end of the story, if a researcher wishes his result to have some impact on the scientific community. Only if other people trust him will they use the result in further scientific reasoning, as if it were their own result.
This is the reason why one has to undergo an apprenticeship during one’s youth, when one must build up one’s reputation (i.e. again beliefs) in the eyes of one’s colleagues. one gets a credibility interval for the quantity (usually a best estimate with a related uncertainty); Each step involves some prior knowledge and, typically, each person who performs the measurement (be it a physicist, a biologist, or a carpenter) operates in his field of expertise. This means that he is well aware of the error he might make, and therefore of the uncertainty associated with the result. This is also true if only a single observation has been performed<sup>5</sup><sup>5</sup>5This defence of the possibility of quoting an uncertainty from a single measurement has nothing to do with mathematical games like those found elsewhere in the literature.: try to ask a carpenter how much he believes in his result, possibly helping him to quantify the uncertainty using the concept of the coherent bet. There is also another important aspect of the “single measurement”. One should note that many measurements, which seem to be due to a single observation, consist, in fact, of several observations made within a short time: for example, measuring a length with a design ruler, one checks the alignment of the zero mark with the beginning of the segment to be measured several times; or, measuring a voltage with a voltmeter or a mass with a balance, one waits until the reading is well stabilized. Experts also use, unconsciously, information of this kind when they have to figure out the uncertainty they attribute to the result, although they are unable to use it explicitly because this information cannot be accommodated in the standard way of evaluating uncertainty based on frequentistic methods. The fact that the evaluation of uncertainty does not necessarily come from repeated measurements has also been recognized by the International Organization for Standardization (ISO) in its “Guide to the expression of uncertainty in measurement”. There the uncertainty is classified > “into two categories according to the way their numerical value is estimated: > 1. those which are evaluated by statistical methods<sup>6</sup><sup>6</sup>6Here “statistical” should be seen as referring to “repeated observations on the same measurand”, and not to the general meaning of “probabilistic”.; > 2. those which are evaluated by other means;” Then, illustrating the ways to evaluate the “type B standard uncertainty”, the Guide states that > “the associated estimated variance $`u^2(x_i)`$ or the standard uncertainty $`u(x_i)`$ is evaluated by scientific judgement based on all of the available information on the possible variability of $`X_i`$.
The pool of information may include > * previous measurement data; > * experience with or general knowledge of the behaviour and properties of relevant materials and instruments; > * manufacturer’s specifications; > * data provided in calibration and other certificates; > * uncertainties assigned to reference data taken from handbooks.” It is easy to see that the above statements make sense only if the probability is interpreted as degree of belief, as explicitly recognized by the Guide: > “…Type B standard uncertainty is obtained from an assumed probability density function based on the degree of belief that an event will occur \[often called subjective probability…\].” It is also interesting to read the concern of the Guide regarding the uncritical use of statistical methods and of abstract formulae: > “the evaluation of uncertainty is neither a routine task nor a purely mathematical one; it depends on detailed knowledge of the nature of the measurand and of the measurement. The quality and utility of the uncertainty quoted for the result of a measurement therefore ultimately depend on the understanding, critical analysis, and integrity of those who contribute to the assignment of its value”. This appears to me perfectly in line with the lesson of genuine subjectivism, accompanied by the normative rule of coherence. ### 4.2 Rough modelling of realistic priors After these comments on measurement, it should be clear why a prior of the kind $`f_{\circ }(\mu ,\sigma )\propto 1/\sigma `$ does not look natural. As far as $`\sigma `$ is concerned, this prior would imply, in fact, that standard deviations ranging over many (“infinite”, in principle) orders of magnitude would be equally possible. This is unreasonable in most cases. For example, measuring the size of this page with a design ruler, no one would expect $`\sigma \sim 𝒪(10\text{cm})`$ or $`𝒪(1\mu \text{m})`$. As for $`\mu `$, the choice $`f_{\circ }(\mu )=k`$ is acceptable as long as $`\sigma \ll \mu `$ (the so-called Savage principle of precise measurement). But when the order of magnitude of $`\sigma `$ is uncertain, the prior on $`\mu `$ should also be revised (for example, most of the directly measured quantities are positively defined). Some priors which, in my experience, are closer to the typical prior knowledge of the person who makes routine measurements are those concerning the order of magnitude of $`\sigma `$, or the order of magnitude of the precision (quantified by the variation coefficient $`v=\sigma /|\mu |`$). For example<sup>7</sup><sup>7</sup>7For the sake of simplicity, let us stick to the case in which the fluctuations are larger than the intrinsic instrumental resolution. Otherwise one needs to model the prior (and the likelihood) with a discrete distribution., one might expect an r.m.s. error of 1 mm, but values of 0.5 or 2.0 mm would not look surprising. Even 0.2 or 4 mm would look possible, but certainly not 1 $`\mu `$m or $`10`$cm. So, depending on whether one is uncertain on the absolute or the relative error, a distribution which seems suitable for a rough modelling of this kind of prior is a lognormal in either $`\sigma `$ or $`v`$. For instance, the above example could be modelled with $`\mathrm{ln}\sigma `$ normally distributed with average 0 ($`=\mathrm{ln}1`$) and standard deviation 0.4. The 1, 2 and 3 standard deviation intervals on $`\sigma `$/mm would be $`[0.7,1.5]`$, $`[0.5,2.2]`$ and $`[0.3,3.3]`$, respectively, in qualitative agreement with the prior knowledge.
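As a quick numerical check of the prior just described, the following minimal sketch (plain Python with the standard library only; our own illustration, not anything from the original text) reproduces the quoted intervals from $`\mathrm{ln}\sigma \sim 𝒩(0,0.4)`$:

```python
# Lognormal prior on sigma: ln(sigma) ~ N(ln 1, 0.4); sigma expressed in mm.
import math

mu_ln, s_ln = math.log(1.0), 0.4  # prior centered on an r.m.s. error of 1 mm

for k in (1, 2, 3):
    lo, hi = math.exp(mu_ln - k * s_ln), math.exp(mu_ln + k * s_ln)
    print(f"{k}-sigma interval for sigma/mm: [{lo:.2f}, {hi:.2f}]")

# Prints [0.67, 1.49], [0.45, 2.23], [0.30, 3.32]: essentially the
# [0.7, 1.5], [0.5, 2.2], [0.3, 3.3] intervals quoted above, while
# 1 micron or 10 cm remain, as desired, effectively excluded.
```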
In the case of more sophisticated measurements, in which the measurand is a positively defined quantity of unknown order of magnitude, a suitable prior would again be a normal (or, in the limit, a constant) in $`\mathrm{ln}\mu `$ (before the first measurement one may be uncertain on the order of magnitude that will be obtained), while $`\sigma `$ is somehow correlated to $`\mu `$ (again, $`v`$ can be reasonably described by a lognormal). One might imagine other possible measurements which give rise to other priors, but I find it very difficult to imagine a real situation for which Jeffreys’ priors are appropriate. ### 4.3 Mathematics versus good sense The case of small samples seems to lead to an impasse. Either we have a simple and standard solution to a fictitious problem, given by the Student distribution, or we have to face complicated calculations if we want to solve specifically the problem we have in mind, formulated by modelling experience-motivated priors. I do not think that experimenters would be willing to calculate lognormal integrals to report the results of a couple of measurements. This could be done once, perhaps, to get a feeling of what is going on, or to solve an academic exercise, but certainly not as routine. The suggestions sketched above were in the framework of the Bayes’ theorem paradigm. But I don’t want to give the impression that this is the only way to proceed. The most important teaching of subjective probability is that probability is always conditioned by a given status of information. The probability is updated in the light of any new information. But it is not always possible to describe the updating mechanism using the neat scheme of the Bayes’ theorem. This is well known in many fields, and, in principle, there is no reason for considering the use of the Bayes theorem to be indispensable to assessing uncertainty in scientific measurements. The idea is to force the expert to declare (using the coherent bet) some quantiles within which he believes the true value is contained, on the basis of a few observations. It may be easier for him to estimate the uncertainty in this way, drawing on his past experience, rather than trying to model some priors and playing with the Bayes’ theorem. The message is in line with what experimentalists intuitively do: when you have just a few observations, what you already know is more important than what the standard deviation of the data teaches you. Some will probably be worried by the arbitrariness of this conclusion, but it has to be remembered that: an expert can make very good guesses in his field; 20, 30, or even 50 % uncertainty in the uncertainty is not considered as significantly spoiling the quality of a measurement; there are usually many other sources of uncertainty, due to possible systematic effects of unknown size, which can easily be more critical. I am much more worried by the attitude of giving up prior knowledge to a mathematical convenience, since this can sometimes lead to paradoxical results. ### 4.4 Uniform prior for $`\mu `$, $`\lambda `$ and $`p`$ in routine measurements I find, on the other hand, that for routine applications the use of the uniform distribution for the center parameter of the normal distribution, usually associated with the true value, is very much justified.
This is because, apart from pathological situations, or from particular cases in frontier research, even if one does not know if the associated uncertainty will be 0.1, 1, or 10 %, the prior knowledge on $`\mu `$ is so vague that it can be considered uniform for all practical purposes. The same holds when one is interested in $`\lambda `$ of a Poisson distribution (counting experiments) or in $`p`$ of the binomial distribution (measurements of proportions), under the condition that the normal approximation is roughly satisfied, which is a kind of desideratum for the planning of a good routine experiment (otherwise it becomes a non-routine one). Taking into account the fact that, for routine measurements, the difference between mode and average of the final distribution is much smaller than $`\sigma (\lambda )`$ or $`\sigma (p)`$, we “recover” maximum likelihood results, but with a natural, i.e. subjective, interpretation of the results. This corresponds, in fact, to the case where the intuitive “dog-hunter probability inversion” is reasonable. For example, indicating by $`x`$ the number of observed events in the case of the Poisson distribution or of successes in the case of the binomial one, with the number of trials of the latter indicated by $`n`$, we get, simply, $`\lambda \sim 𝒩(x,\sqrt{x})`$ (1) and $`p\sim 𝒩\left({\displaystyle \frac{x}{n}},\sqrt{{\displaystyle \frac{x}{n}}\left(1-{\displaystyle \frac{x}{n}}\right){\displaystyle \frac{1}{n}}}\right)`$ (2), where $`𝒩(\cdot ,\cdot )`$ is shorthand for a normal distribution of given average and standard deviation. The recommendation I usually like to give, to check “a posteriori” whether the uniform prior is suitable or not, is the following: first evaluate the central value and standard deviation according to the approximations (1) and (2); then try to judge if the central value “disturbs” you, and/or the standard deviation seems to be of the order of your prior vagueness; if this is the case, it is now that you need to model some priors, which will actually affect the posteriors; otherwise, priors will have no appreciable effect and the approximated result is good enough. This “a posteriori” consideration of priors might seem questionable, but I find it absolutely consistent with the spirit of subjective probability. In fact, the priors one plugs into Bayes’ theorem should reflect the status of knowledge as it is felt to be by the subject who performs the inference. But sometimes it can be difficult to model this information consciously, or it might simply take too much time. The comparison of the approximate result obtained from a uniform prior with the result that the researcher was ready to accept can help, indeed, to raise this status of prior knowledge from the unconscious to the conscious. ## 5 Priors for frontier science The question is completely reversed when one is interested in quantities whose value might be at the edge or beyond the sensitivity of the experiment (perhaps even orders of magnitude beyond it) and when it is not even certain that the quantity itself makes sense at all. This is a typical situation in particle physics or in astrophysics, and it is only this kind of measurement that I will refer to as “frontier science measurements”. However, even though they are “frontier”, most of the measurements performed in the above mentioned fields belong, in fact, to the class of “routine measurements”. I would like to illustrate this new situation with a numerical example.
Let us imagine that an experiment has been run for one year looking for rare events, like magnetic monopoles, proton decays, or gravitational waves. The physics quantity of interest (i.e. a decay rate, or a flux) is related to the intensity $`r`$ of a Poisson process. Usually there is also an additional Poisson process to be considered, associated with the physical or instrumental background which produces observables indistinguishable from the process of interest ($`r_B`$). The easy, although ideal, case is when the background is exactly zero and at least one event is observed. This case prompts researchers to make a discovery claim. Let us consider, instead, the situation when no candidate events are observed, still with zero background. The likelihood, considering 1 year as unit time, is $`f(x=0|r)=e^{-r}`$. Considering a uniform prior for $`r`$, we get $`f(r|x=0)=e^{-r}`$ (see figure 1), from which a 95 % probability upper limit ($`r_u`$) can be evaluated. This comes out to be $`r_u=3`$ events/year and it is a kind of standard way in HEP of reporting a negative search result<sup>8</sup><sup>8</sup>8It is worth noting that many physicists are convinced that the reason for this value is due to the fact that the probability of getting 0 from a Poisson of $`\lambda =3`$ is 5 %. This is the classical arbitrary probability inversion which in this case comes out to be correct, assuming a flat prior, due to the property of the exponential under integration.. The usual interpretation of this result is that, if the process looked for exists at all, then there is 95 % probability that $`r`$ is not greater than $`r_u`$. But I find that often one does not pay enough attention to all the logical implications contained in this statement, or in all the infinite probabilistic statements which can be derived from $`f(r|x=0)`$. This can be highlighted considering statements complementary to the standard ones, especially in those cases in which the experimenters feel that the detector sensitivity is not suitable for searching for such a rare process. The embarrassing reply to questions like “do you really believe 5 % that $`r`$ is greater than $`r_u`$?”, or “would you really place a 1 to 19 bet on $`r>r_u`$?” shows that, often, $`f(r|x=0)`$ does not describe coherent beliefs. And this is due to the fact that the priors were not appropriate to the problem. For example, a researcher could run a cheap monopole experiment for one day, using a 1 m<sup>2</sup> detector, find no candidates and present, without hesitation, his 95 % upper limit as $`r_u=3`$ monopoles/m<sup>2</sup>/day, or 1095 monopoles/m<sup>2</sup>/year. But he would react immediately if we made him aware that he is also saying that there is 5 % chance that the monopole flux is above 1095 monopoles/m<sup>2</sup>/year, because he knows that $`𝒪(1000)`$ m<sup>2</sup> detectors have been run for many years without observing a convincing signal. The situation becomes even more complicated when one has a non-zero expected background and a number of observed candidates in excess of it. For example, researchers could expect a background of 1 event per day and observe 5 events. Differently from the above example of the monopole search, let us imagine that the prior knowledge is not so strong that all the 5 events can be attributed with near certainty to background.
Instead, let us imagine that the experimenters are here in serious trouble: the $`p`$-value is below 0.5 %; they do not believe strongly that the excess is due to the searched-for effect; but neither do they feel that the probability is so low that they can decide not to publish the result and miss the chance of a discovery. If they perform a standard Bayesian analysis using a flat prior they will get a final distribution peaked at 4 which looks like a convincing signal, since it seems to be well separated from 0 (see figure 1). They could use, instead, a Jeffreys’ prior and find no result, since $`P(r\le r_{\circ })/P(r>r_{\circ })=\mathrm{\infty }`$ for any $`r_{\circ }>0`$. It is easy to see that in such a situation pedantic use of the Bayesian theory (“Prior, Likelihood $`\rightarrow `$ Final”) leads to an embarrassing outcome whatever one does. Therefore, in the case of real frontier science observables, the best solution seems to be to abstain from providing final distributions and to publish only likelihoods, which are degrees of belief too, but they are much less critical than priors. But reporting the likelihoods as such can be inconvenient, because often they do not give an intuitive and direct idea of the power of different experiments. Recently, faced with problems of the kind described above, I have realized that a very convenient quantity to use is a function that gives the Bayes factor of a generic value of interest with respect to the asymptotic value for which the experimental sensitivity is lost (if the asymptotic value exists and the Bayes factor is finite). In the simple case of the Poisson process with background that we are considering, we have $$\mathcal{R}(r)\equiv \frac{f(x|r,r_B)}{f(x|r=0,r_B)}$$ (3) The advantage of this function is that it has a simple intuitive interpretation as a shape distortion function of the p.d.f. (or a relative belief updating ratio<sup>9</sup><sup>9</sup>9In fact, in the case $`f_{\circ }(r)\ne 0`$ one can rewrite (3) in the following way $$\mathcal{R}(r)=\frac{f(r|x,r_B)/f_{\circ }(r)}{f(r=0|x,r_B)/f_{\circ }(r=0)}.$$ ) introduced by the new observations. As long as $`\mathcal{R}`$ is 1 it means that the experiment is not sensitive and the shape of the p.d.f. (and hence the relative beliefs) remains unchanged. Instead, when $`\mathcal{R}`$ goes to zero the beliefs go to zero too, no matter how strong they were before. Moreover, since $`\mathcal{R}`$ differs from the likelihood only by a multiplicative factor, it can be used directly in the Bayes formula when a scientist wants to turn it into probabilities, using subjective priors. Different experiments can easily be compared and the combination of independent data is performed by multiplying the different $`\mathcal{R}`$’s. The function $`\mathcal{R}`$ is particularly intuitive when plotted with the abscissa in log scale. For example, figure 2 shows the result in terms of the $`\mathcal{R}`$ function for the same cases shown in figure 1. Looking at the plot, one can immediately get an idea of what is going on. For example, it also becomes clear where the problems with the flat prior and with the Jeffreys’ prior come from. We can also now understand which kind of priors the hesitant researchers of the above example had in mind. Their prior beliefs were concentrated some orders of magnitude below the peak of $`\mathcal{R}`$, but with tails which could also accommodate $`r\sim 𝒪(4)`$. This is in agreement with the fact that after the observations the intuitive probability for $`r>𝒪(1)`$ becomes sizable (5, 10, 30 %?) and the researchers do not have the courage not to publish the result. Finally, let us comment on upper (or lower) limits.
Finally, let us comment on upper (or lower) limits. It is clear now that, exactly in those frontier situations in which the limit would be pertinent, a highly intersubjective 95 % probability limit does not exist. Therefore one has to be very careful in providing such a quantity. However, looking at the plots of figure 2 it is also clear that one can talk somehow about a bound which roughly separates the region of “possible” values from that of “impossible” ones. One could then take a conventional value, which could be the value for which $`ℛ=0.05`$, or 0.5, or any other. The important thing is to avoid calling this conventional value a 95 % or a 50 % probability limit. If instead one really wants to give a probability limit, one has to go through priors, which should be precisely stated. In this case, if I really had to recommend a prior, it would be the uniform distribution. This is not only for mathematical convenience, but also because it seems to me that it can do a good job in many cases of interest. In fact, one can see that it gives the same result as any other reasonable prior consistent with the positive attitude of the researchers who have planned and financed the experiment (for example, if an experimental team performs a dedicated proton decay experiment with the intention of making a good investment of public money, it means that the physicists really do hope to observe a reasonable amount of signal events for the planned sensitivity of the experiment).

## 6 Conclusions

The key point expressed in this paper is that there is no need to “objectivise” Bayesian theory, treating subjectivism as if it were something we should be ashamed of. Only when this point is accepted and Bayes’ theorem is correctly placed within the framework of subjective probability, with a clear role and limitations, can the anxiety about priors and their choice be overcome. Once this is achieved, either we can choose the priors which best describe the prior knowledge for a specific problem; or we can “ignore” them in routine applications, thus “recovering” maximum likelihood results, but with a transparent subjective interpretation, and with awareness of the assumptions we are using; or we can decide that priors are so critical that only likelihoods or Bayes factors can be provided as the outcome of the experiment; or we can use Bayes’ theorem in reverse mode, to find out which priors we had, unconsciously, that give rise to the beliefs we have after the new observation; finally, there are some cases in which it is even more practical to skip Bayes’ theorem and to assess the degree of belief directly. With respect to this last point, I would like to remind the reader that, in fact, if one thinks that probabilities must only be calculated using Bayes’ rule, one gets trapped in an endless prior-final chain.

As far as reference priors are concerned, they could, indeed, simplify the life of the practitioners in some well defined cases. However, their uncritical use should be discouraged. First, because they could lead to wrong, or even absurd, results in critical situations, if reference priors are preferred to case-motivated priors just for formal convenience. Second, and more important, because they might give the impression of dogmatism, which, together with the absurd results obtained through their misuse, could seriously damage the credibility of the Bayesian theory itself.
# Parallelizing the Symbolic Manipulation Program FORM

TTP99–15, NIKHEF 99–015, hep-ph/9906426

Talk presented by D. Fliegner at AIHENP’99, Heraklion, Greece, Apr 12–16 1999.

D. Fliegner, A. Rétey, J.A.M. Vermaseren

Institut für Theoretische Teilchenphysik, Universität Karlsruhe, D-76128 Karlsruhe, Germany; NIKHEF, P.O. Box 41882, 1009 DB, Amsterdam, The Netherlands

Abstract: After an introduction to the sequential version of FORM and the mechanisms behind it we report on the status of our ongoing project of its parallelization. An analysis of the parallel platforms used is given and the structure of a parallel prototype of FORM is explained.

\[Figure 1: MPI(CH) bandwidth and latency results for the PingPong benchmark.\]

1. The Sequential Version of FORM

FORM is a program for symbolic manipulation of algebraic expressions, specialized to handle very large expressions of millions of terms in an efficient and reliable way. It is used non-interactively by executing a program that contains several parts called modules. The execution of each module is again divided into three steps:

* Compilation: the input is translated into an internal representation.
* Generating: for each term of the input expressions the statements of the module are executed. This in general generates a lot of terms for each input term.
* Sorting: all the output terms that have been generated are sorted and equivalent terms are summed up.

FORM only allows local operations on single terms, like replacing parts of a term or multiplying something into it. Together with a sophisticated pattern matcher this seemingly strong limitation allows the formulation of general and efficient algorithms. The limitation to local operations makes it possible to handle expressions as “streams” of terms, which can be read in sequentially from a file and be worked on one at a time. The generation of terms is done in such a way that the output terms drop out term by term as well. These output terms are stored in two intermediate buffers and a temporary sortfile in a staged procedure: when the smaller buffer gets full, its content is sorted and copied to the larger buffer. Consequently the small buffer is free to be filled with the next patch of output terms. If the larger buffer gets full, its content is again sorted and copied to the temporary file, freeing the large buffer. At the end of the module all of the existing sorted patches residing in the two buffers and the sortfile have to be merged stage by stage into one single sorted output stream of terms, which is written to file and used as the input source of terms for the next module. For the sorting of the small buffer a slightly modified merge sort algorithm is used; merging sorted patches of terms in the other stages of sorting is done with a tree of losers. This results in a tree-like structure for the generation of terms as well as for the sorting.

Of course the first step before parallelizing the program was to profile and optimize the sequential code. One of the main achievements was a speedup of about a factor of 2 on the 64-bit architecture of the alpha processors by changing the internally used word-length from 16 bits (which results in 32-bit arithmetic operations) to 32 bits (64-bit arithmetic). Another speedup of about 1.5 was achieved simply by experimenting with the compiler optimizations. Profiling of FORM in typical applications shows that the time needed for compiling the program text into the internal representation does not depend on the size of the problem and is usually negligible. Most of the time is spent in generating and sorting of terms.
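The staged procedure lends itself to a compact illustration. The following toy sketch (ours, not FORM's actual C implementation; buffer sizes are arbitrary) mimics the small-buffer/large-buffer/sortfile cascade and the final merge:

```python
import heapq

SMALL, LARGE = 4, 16                      # toy buffer capacities, in terms

def staged_sort(term_stream):
    """Sort a stream of terms via two staged buffers and a 'sortfile' of
    sorted patches, then merge everything into one sorted output stream.
    (Summing of equivalent terms is omitted for brevity.)"""
    small, large, sortfile = [], [], []
    for term in term_stream:
        small.append(term)
        if len(small) == SMALL:           # small buffer full: sort, move up
            large.extend(sorted(small))
            small.clear()
        if len(large) >= LARGE:           # large buffer full: flush a patch
            sortfile.append(sorted(large))
            large.clear()
    patches = [sorted(small), sorted(large)] + sortfile
    return list(heapq.merge(*patches))    # heapq.merge plays the tree of losers

print(staged_sort([5, 3, 9, 1, 7, 7, 2, 8, 0, 6]))
```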
2. Evaluation of Different Parallel Platforms

One of the premises for the parallelization of FORM has been not to limit the approach by using overly specialized hardware, but instead to use standard message passing libraries (MPI(CH) and PVM) for the parallelization, which are available on a wide class of architectures and should be portable to new and more powerful systems. Usually both message passing libraries use specialized device drivers underneath to yield maximum performance. During the first part of our project the following combinations of hard- and software have been used:

* DEC alpha workstation cluster running DEC UNIX 4.0D; 8 processors, 600 MHz, 512 MB RAM and 2$`\times `$4 GB disk each; interconnection: Fast (100 MBit) & Gigabit (1000 MBit) Ethernet, 1.26 GBit Myrinet, 1.26 GBit ParaStation II; message passing: MPI(CH) and PVM (over IP and ParaStation II).
* IBM SP2 running AIX 4.2.1; 168 (in total 256) processors, 120 MHz, 512 MB RAM; interconnection: special low latency switches; message passing: MPI.
* ALR Quad6 SMP machine running Solaris 2.6; 4 processors, 200 MHz PentiumPro, 512 MB RAM and 2$`\times `$4 GB disks; interconnection: shared memory; message passing: MPICH (over shared memory).

In the case of the DEC alpha cluster different IP drivers and different implementations of device drivers for the message passing libraries exist. For a thorough understanding of the parallel system's behaviour it is of course crucial to compare the performance of the message passing libraries. The MPI(CH) libraries have been examined using the Pallas MPI benchmarks (PMB). A shell script is provided that can be used to measure the throughput and latency of the message passing operations under different circumstances without further interaction. We only present the results for the simplest possible single transfer benchmark, a ping-pong test: one process(or) sends a message of $`n`$ bytes to another process(or), which immediately sends that message back. In contrast to more complicated benchmarks there is no concurrency with other message passing activity during this test. In other words, the bandwidth and latency are measured under optimal conditions.

As can be seen from figure 1 (left) the bandwidth increases as the message size does. For large message sizes there is a saturation effect. The maximum throughput for the Fast Ethernet is about 40 MBit/s; for the Gigabit Ethernet and the Myrinet the maximum bandwidth is about 200 MBit/s. The ParaStation II software provides a special device driver for MPI(CH) on the Myrinet hardware that does not use the IP protocol and adds much less protocol overhead. With this specialized software a bandwidth of about 300 MBit/s is achieved. The IBM SP2 reaches a maximum MPI bandwidth of 800 MBit/s, which is even higher than the maximum MPI(CH) throughput on the SMP machine that uses shared memory segments for the data transfer.

In figure 1 (right) the measured MPI(CH) latency for the ping-pong benchmark is shown. The latency ranges from a few microseconds for small message sizes up to a second for large message sizes. Using MPI(CH) over the IP protocol results in a minimum latency of more than 200 microseconds. Lower latencies can be achieved with special device drivers only. It turns out that the minimum latency on the ParaStation II (about 20 microseconds) is even smaller than on the IBM SP2. Of course the SMP machine has the smallest latency, less than ten microseconds. Note that for our particular interest the latency at larger message sizes is most important. In this region again the IBM SP2 provides optimal performance.
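For reference, the single-transfer measurement can be reproduced with a few lines of mpi4py (our sketch, not the PMB code; run with two ranks, e.g. `mpiexec -n 2 python pingpong.py`):

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
REPS = 100

for n in [1, 1024, 1024**2]:               # message sizes in bytes
    buf = np.zeros(n, dtype='b')
    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(REPS):
        if rank == 0:
            comm.Send(buf, dest=1); comm.Recv(buf, source=1)
        elif rank == 1:
            comm.Recv(buf, source=0); comm.Send(buf, dest=0)
    dt = (MPI.Wtime() - t0) / (2 * REPS)   # one-way time per message
    if rank == 0:
        print(f"{n:>8} bytes: latency {dt*1e6:8.1f} us, "
              f"bandwidth {n/dt/1e6:8.2f} MB/s")
```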
3. The Parallelization of FORM

Allowing local operations only makes FORM very well suited for a straightforward parallelization: distribute the input terms among the available processors, let each of them perform the local operations on its input terms and generate and sort the arising output terms. At the end of a module the sorted streams of terms from all processors have to be merged into one final output stream again. The compilation of the program text into the internal representation was considered not to be worth parallelizing. This concept suggests using a master-slave structure for the parallelization, where the master stores the expressions and distributes and recollects all the terms of each expression. For the implementation of this raw concept we used a four step strategy:

* one process(or) generates terms, a second process(or) sorts the output terms
* instead of only one process, arbitrarily many processes perform the sorting
* the input terms are distributed and the term generation is also done in parallel
* final optimizations: avoid or handle worst cases, load leveling, fault tolerance

This approach has several advantages, the most important being that having working versions at every stage gives us a good idea of how good the parallelization is and the possibility of doing realistic tests even at a very early stage.

3.1 The Two-Processor Version

This first step turns out to be useful, since it gives a deep insight into how changes to the source code of FORM have to be made without affecting the efficiency of the well optimized sequential code. The two-processor version also serves as a check of whether and how the concept can possibly lead to a decent speedup, because we basically only add communication overhead (no speedup can be expected from separating the generation and the sorting of terms only). It turns out that for parallelizing software on a cluster of very fast workstations the importance of avoiding communication overhead cannot be overstated. Moreover these experiments show that the communication has to be done in a buffered way, since sending single terms increases the runtimes of the two-processor code by up to a factor of 20. With the buffered version the increase in runtime was limited to about a factor of 1.5 over the sequential code, with two workstations connected by a (slow) 10 MBit/s Ethernet using the PVM and MPI(CH) libraries.

3.2 Parallel Sorting

The second step is to distribute the output terms among arbitrarily many processors and do the sorting in parallel. Since this part of the sorting relies strongly on communication between the processors, it most probably sets the limits of the parallel speedup. Therefore already at this stage it could be predicted quite reliably whether a speedup of the whole program could be achieved or not. A first try was to map the “tree of losers” used in FORM to merge sorted patches onto the processors. While it would distribute the workload in an optimal way, this approach adds too much communication overhead. This is why in the end a much simpler communication structure was used, where all slaves send their sorted terms to the master process and this process uses a local “tree of losers” to merge the output streams of the slave processes.
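The resulting communication structure is easy to sketch: each slave ships its locally sorted terms in one buffered message (recall the factor-20 penalty for per-term sends), and the master merges the sorted streams. A schematic mpi4py version with made-up term data (run with at least two ranks):

```python
import heapq
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:                                  # master: merge slave streams
    streams = [comm.recv(source=s) for s in range(1, size)]  # one sorted list each
    merged = list(heapq.merge(*streams))       # stands in for the tree of losers
    print(f"master merged {len(merged)} terms from {size - 1} slaves")
else:                                          # slave: generate + sort locally
    local = sorted((rank * 7919 + 31 * i) % 1000 for i in range(100))
    comm.send(local, dest=0)                   # one buffered message, not per-term
```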
Additional effort was made to overlap the work on the master process with the sorting done on the slaves, which caused a much deeper interference with the sequential code, but finally resulted in a very fast and stable implementation. Of course the scaling of this approach with a large number of processors is a possible limitation, but tests on the 256-node IBM SP2 parallel computer showed that, at least for such a specialized network, the performance does not dramatically break down for up to 32 processors. If problems should occur with more processors or a different architecture, an intermediate layer of “foremen” could be used to circumvent this possible bottleneck.

Since we are only testing the sorting, any problem that produces a sufficient number of intermediate terms can be used to test the parallel speedup. We wrote a very simple program that expanded the expression $`(a_1+\dots +a_n)^2`$ and then replaced $`a_1`$ by $`(a_4+\dots +a_n)`$, which results in a short, easy to check result and, by choosing different values of $`n`$, can be scaled in an easy way. The runtimes we could achieve with this version on the alpha cluster with different combinations of communication soft- and hardware are shown in figure 2 for a medium ($`n=3000`$, $`\sim 500`$ MB) and a large ($`n=5000`$, $`\sim 1400`$ MB) problem (the sizes in MB are for the 64-bit architecture of the alphas; they are half as large for 32-bit systems; a more efficient compression to reduce disk access is in progress). They show that only for sufficiently large problems can a satisfying speedup be achieved without sophisticated network soft- and hardware.

The parallel sorting was implemented in a way that can take advantage of the non-blocking unbuffered MPI message passing functions by using a set of cyclic buffers, which eliminates any unnecessary overhead and also minimizes the time consumed for synchronizing. Choosing the number of cyclic buffers to be one results in using the send/receive functions in blocking mode. On a 4-processor SMP machine we could see a speedup of about 10% for some cases using this feature, while on the IBM SP2 there are no differences between the two versions (figure 3). It is also quite interesting to see that obviously the master process can generate just enough terms to keep about 4 processes busy, almost independently of the problem size; the difference is that the cost of the additional overhead for more than 4 processes is larger for the small problems, which of course is no surprise. It was known from the profiling of the sequential code that a large part of the runtime is spent in the generation routines, so the speedup we could see for very large problems is about the optimum we could expect to reach at this intermediate stage.
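The cyclic-buffer technique can be sketched as follows (a simplified sketch with assumed sizes, showing one slave's stream only; with NBUF = 1 the scheme degenerates into blocking receives, as noted above):

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
NBUF, MSG, NPATCH = 4, 4096, 8         # hypothetical buffer count and sizes

if rank == 0:                          # master side of one slave's stream
    bufs = [np.empty(MSG, dtype='i8') for _ in range(NBUF)]
    reqs = [comm.Irecv(bufs[k], source=1) for k in range(NBUF)]
    posted = NBUF                      # receives posted ahead of time
    for patch in range(NPATCH):
        k = patch % NBUF
        reqs[k].Wait()                 # data ready: merge terms in bufs[k] here
        if posted < NPATCH:            # keep the receive pipeline full
            reqs[k] = comm.Irecv(bufs[k], source=1)
            posted += 1
elif rank == 1:                        # slave: ship sorted patches of terms
    patch_data = np.arange(MSG, dtype='i8')
    for _ in range(NPATCH):
        comm.Send(patch_data, dest=0)
```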
3.3 Parallel Generating

The last step towards a working prototype was the distribution of input terms among all processes before the generation of the output terms. The slave processes of course also need all the information necessary for the generation of the terms, which is at the moment realized by having them all read the program text file and compile their own internal representation, and broadcasting only the rest of the necessary information from the master process. It was one of the main goals to get this version to run realistic problems as soon as possible, so the often used FORM package MINCER, which can calculate certain types of Feynman diagrams up to three loops, was used to serve as a testbed. First the necessary subset of FORM commands used in this package was made available, and some easy standard integrals served as a test of the correctness of the parallel computations. After these preliminary tests the computation of some diagrams of an ongoing project, the calculation of the 3-loop triple-gluon vertex function, is now serving as a “real world” test.

Just as with the sorting, the speedup is strongly dependent on the number of terms sent within one message. In the current implementation this number can be changed at the start of the program and results in very different speedups, as can be seen in figure 4. Of course choosing too coarse-grained a distribution results in the danger of running into worst cases, where all the work sits in only one of the input patches and only one processor is busy. On the other hand, a fine-grained distribution causes more overhead. The best setting turned out to be not only dependent on the underlying soft- and hardware, but also strongly dependent on the problem that is run. The distribution is organized such that the master sends a patch of terms to each slave process at the beginning of the module and then waits for the slaves to ask for new terms whenever they are finished with their last patch of input terms (see the sketch at the end of this subsection). This actually turns the concept into that of a client-server situation, which will also be useful to make the slaves receive any kind of global information when the need arises. It also produces a decent load levelling among the slaves, which can be controlled by the size of the input patches and could even be adjusted during runtime for further improvement. It must be understood that for this specific problem over a hundred modules were executed, where most of them were only seeing 1 to 10 terms, which makes them “worst cases” for the parallelization, and there is a lot of room for further optimizations of this parallel program. Also, the results are obtained from the FORM code written for the sequential version of FORM without any modification or optimization in these algebraic packages, which corresponds to perfect code reuse.

As was expected, on the slow nodes of the IBM SP2 with fast connections a speedup is quite easy to obtain (see figure 4, left). Also it can be seen that for the fully parallelized program there is indeed a speedup for up to 12 processors on the SP2. Another result is that the number of input terms that are distributed at once has a large influence on the speedup that can be achieved. Figure 4 (right) shows the runtimes that could be achieved on a 4-processor SMP machine. Also on that architecture the number of input terms sent at a time must be at least about 10 to see a speedup. As can be seen in figure 5 this holds also for the DEC alpha cluster using MPI(CH) over the ParaStation II hard- and software. Obviously a speedup is much harder to obtain in this case. Still, the parallel version on the IBM SP2 and the 4-processor SMP machine hardly reaches the runtimes measured with the sequential version on the DEC alpha cluster. From our experience with the parallel sorting and preliminary tests we expect a much larger speedup for larger problems, which are of course most interesting and are being investigated at the moment.
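The patch-on-request distribution described above is a classic self-scheduling pattern; a schematic version (grain size and term data are placeholders, not FORM's actual protocol) might look as follows:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
GRAIN = 10                                     # input terms handed out per request

if rank == 0:                                  # master: serve patches on demand
    terms = list(range(1000))                  # stand-in for the input expression
    pos, done = 0, 0
    for s in range(1, size):                   # initial patch for every slave
        comm.send(terms[pos:pos + GRAIN], dest=s); pos += GRAIN
    while done < size - 1:
        s = comm.recv(source=MPI.ANY_SOURCE)   # slave s asks for more work
        patch = terms[pos:pos + GRAIN]; pos += GRAIN
        comm.send(patch, dest=s)               # empty patch means "no more work"
        if not patch: done += 1
else:                                          # slave: process, then ask again
    patch = comm.recv(source=0)
    while patch:
        # ... generate and sort output terms for this patch ...
        comm.send(rank, dest=0)                # request more work
        patch = comm.recv(source=0)
```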
4. Conclusion & Outlook

The ongoing work is mainly to find and fix bugs, to make the parallel version run larger problems just as reliably as the sequential one, which has been tested and improved for over ten years. As the runtimes show, there is also some work to be done to further optimize the parallel version, especially to ensure that modules with only a few terms will not cause a slowdown through unnecessary communication overhead. The next step will be the implementation of the full FORM version 2.3 standard, so that all existing software for that version can be run in parallel (which is basically all software that exists for FORM). Since the parallel program is actually based on version 3.0, which is in preparation by the author J. Vermaseren and offers some new and powerful features, we will also investigate if and how these new features can be implemented in the parallel version. Another field of current and planned activities is porting the program to other architectures. We are especially interested in taking better advantage of the possibilities of SMP machines. The main goal of all these efforts is to get from the current stage of a working prototype to an easy to use, powerful and reliable program that is not an end in itself, but a useful tool in real life applications on a wide variety of (parallel) architectures, in the same manner as the current sequential version of FORM.

This work has been supported by DFG Forschergruppe under contract no. KU 502/8-2 and Digital Equipment under contract no. DE-98008.
# Single Spin State Detection for the Kane Model of Silicon-Based Quantum Computer

S.N. Molotkov and S.S. Nazin

Institute of Solid State Physics of Russian Academy of Sciences, Chernogolovka, Moscow district, 142432, Russia

## Abstract

The scheme for measurement of the state of a single spin (or a few spin system) based on the single-electron turnstile and injection of spin polarized electrons from magnetic metal contacts is proposed. Applications to the recent proposal concerning the spin gates based on a silicon matrix (B. Kane, Nature, 393, 133 (1998)) are discussed.

After the discovery of efficient quantum algorithms and a rigorous proof of the possibility of fault-tolerant quantum computing, various realizations of quantum logical gates were proposed based on cold ions, nuclear magnetic resonance, optical schemes, semiconductor heterostructures, and the Josephson effect. Recently, the possibility of fabrication of quantum gates employing a silicon matrix with dopant P³¹ atoms was suggested. The role of quantum bits is played by the nuclear and electron spins of the P³¹ atom. One of the problems in the scheme is the measurement of a single nuclear or electron spin (or both). The papers consider indirect spin measurement employing the single-electron transistor. However, the proposed schemes do not allow one to measure the state of a single spin; they only make it possible to measure the different charge states of a system of nuclear and/or electron spins.

The detection of a single spin is not in itself an exotic thing. The observation of Larmor precession of a single spin localized on the Si(111)7$`\times `$7 surface in ultra-high vacuum with the scanning tunnelling microscope (STM) was first reported by the IBM group (Demuth et al.) ten years ago. There were also reports on the detection of the electron paramagnetic resonance signal in the STM current from a single spin in an organic molecule. The STM with a magnetic tip was also demonstrated to be sensitive on the atomic level to the state of single spins on the surface of magnetic materials. The quantity which is directly measured in STM is the tunnel current, which depends on the lateral tip position relative to the sample on the atomic scale; in the case of a magnetic tip the tunnel current has a spin-dependent component
$$I_t(𝐱)\propto \rho _c(𝐱)\,\rho _t\,𝐦_c(𝐱)\cdot 𝐦_t,$$
where $`\rho _c(𝐱)`$ and $`\rho _t`$ are the local densities of states at the sample surface and the tip apex, respectively, while $`𝐦_c(𝐱)`$ and $`𝐦_t`$ are the local magnetizations at the sample surface at a point $`𝐱`$ and at the tip apex, respectively. However, steady-state current measurements cannot be directly applied to the detection of the state of quantum gates. Realization of quantum gates requires the possibility of measuring the state of the system at an arbitrary moment of time. According to the general theory of quantum-mechanical measurements \[15–17\], the most complete description of any particular measuring procedure which can be applied to a quantum system is given by the so-called instrument.
The instrument $`T(d\lambda )`$ is actually a mapping of the set of all quantum states of the system (density matrices) $`\rho _s`$ before the measurement to the system states (up to normalization) just after the measurement, $`\stackrel{~}{\rho }_s=T(d\lambda )\rho _s`$, which gave the result in the interval $`d\lambda `$, the probability of obtaining a measurement result in the interval $`d\lambda `$ being $`\mathrm{Tr}\stackrel{~}{\rho }_s=\mathrm{Tr}\{T(d\lambda )\rho _s\}`$. Any instrument can be represented in the form
$$T(d\lambda )\rho _s=\text{Tr}_A\{(I_s\otimes M_A(d\lambda ))\,U(\rho _s\otimes \rho _A)U^{-1}\},$$
i.e. any measurement can be realized by allowing the studied system to interact with a suitable auxiliary system (ancilla) prepared in a fixed initial state $`\rho _A`$ for some time (so that their joint evolution is described by certain unitary dynamics $`U`$) and a subsequent measurement generated by a suitable identity resolution $`M_A(d\lambda )`$ performed on the ancilla.

Proposed below is a method for detection of the state of a single spin (or a few spins, e.g. nucleus spin + electron spin) based on the “turnstile” concept. Explicitly present in the proposed scheme are the preparation of the ancilla ($`\rho _A`$) at an arbitrary moment of time, turning on the interaction between the ancilla and the studied system, their joint unitary evolution, turning off the interaction at an arbitrary moment of time, and detection of the state of the ancilla.

Consider the following model system. Suppose that we have a system of spins in the subsurface region, e.g. an atom with a nucleus possessing non-zero spin and an electron localized on that atom (Fig. 1). Suppose also that a system of tunnel-coupled quantum dots is fabricated on the surface in such a way that the central dot is located just above the system of spins acting as the quantum bits (Figs. 1, 2). Each dot has a single size-quantized level. The leftmost and the rightmost dots are tunnel-coupled to the magnetic metal electrodes (Figs. 1, 2). The Hamiltonian describing the interaction among the quantum dots and between the quantum dots and metallic electrodes can be written as
$$H=\sum _{k,\sigma ,\alpha =L,R}\epsilon _{k\sigma \alpha }a_{k\sigma \alpha }^{+}a_{k\sigma \alpha }+\sum _\sigma (\epsilon _cc_{c\sigma }^{+}c_{c\sigma }+\epsilon _Lc_{L\sigma }^{+}c_{L\sigma }+\epsilon _Rc_{R\sigma }^{+}c_{R\sigma })+\sum _{k\sigma }(T_{kL}c_{L\sigma }^{+}a_{k\sigma L}+T_{cL}c_{c\sigma }^{+}c_{L\sigma }+T_{cR}c_{c\sigma }^{+}c_{R\sigma }+\text{h.c.})+\sum _\sigma (U_Ln_{L\sigma }n_{L,-\sigma }+U_cn_{c\sigma }n_{c,-\sigma }+U_Rn_{R\sigma }n_{R,-\sigma }),$$ (1)
where the first two terms describe the electron states in the isolated electrodes and the dots, the third one represents the tunnel coupling between the dots and the electrodes, and the last term accounts for the Coulomb intradot repulsion (if it is important). We assume that the electrons in the electrodes are spin-polarized, with the magnetization vectors ($`𝐧_L`$ and $`𝐧_R`$ in the left and right electrodes, respectively) fixed by e.g. magnetic anisotropy. If the system is placed in an external magnetic field, the corresponding terms should be added to the Hamiltonian.
The spin system (quantum gate) Hamiltonian, for example, for the case of a nucleus spin + the electron spin localized on it, can be written as
$$H_s=\sum _\sigma \epsilon _sc_{s\sigma }^{+}c_{s\sigma }+g_s\mu _B\,c_{s\sigma }^{+}c_{s\sigma ^{\prime }}\,𝝈_{\sigma \sigma ^{\prime }}\cdot 𝐇+g_I\mu _B\,𝐈\cdot 𝐇+g_{sI}\,𝐈\cdot 𝝈_{\sigma \sigma ^{\prime }}\,c_{s\sigma }^{+}c_{s\sigma ^{\prime }}.$$ (2)
In an external magnetic field the contribution from the metal electrodes should also be taken into account. The Hamiltonian describing the interaction of the spins in the quantum gate and the electron localized in the central dot (see below) depends on the specific geometry of the considered structure. For example, if the wave functions of the central dot electron and the electron localized on the subsurface center overlap, then the Hamiltonian can be written as
$$H_{int}=\sum _\sigma (t_{sc}c_{s\sigma }^{+}c_{c\sigma }+\text{h.c.})+g_{cI}\,𝐈\cdot 𝝈_{\sigma \sigma ^{\prime }}\,c_{c\sigma }^{+}c_{c\sigma ^{\prime }}+\sum _\sigma U_{sc}\,n_{c\sigma }n_{s\sigma }$$ (3)
(summation over the repeated spin indices $`\sigma ,\sigma ^{\prime }`$ is implied). If the overlap is negligible, then only the dipole-dipole interaction should be retained.

The complete solution of the problem of finding the temporal evolution of the considered system is a difficult task. To go one step further, we shall assume that the characteristic times of the different processes occurring in the system form a certain hierarchy. Let $`\tau _{res}`$ be the typical time of electron tunneling from the central dot to the metal electrode when the energy levels in the adjacent quantum dots are tuned to the resonance (this time actually coincides with the time required for the electron to tunnel from the left (right) quantum dot to the metal electrode through a single barrier), $`\tau _{non}`$ be the characteristic tunneling time from the central dot to the metal electrodes when the levels in the dots are detuned from the resonance, and finally $`\tau _{dyn}`$ be the typical time of the joint evolution caused by the interaction between the electron in the central dot and the spins in the gate. We shall assume that $`\tau _{res}\ll \tau _{dyn}\ll \tau _{non}`$. Below we wish to take advantage of the well known point that for the tunneling through the two barriers (from the central dot to the metal electrodes), tuning of the levels into the resonance lifts the smallness associated with the additional barrier. The characteristic times are inversely proportional to the level width and depend on the level position; an estimate is given by (e.g., see Ref.)
$$\frac{1}{\tau (\omega )}=\gamma (\omega )=\frac{|T_{Lc}|^2\gamma _0^2}{[\stackrel{~}{\epsilon }_c(\omega )-\stackrel{~}{\epsilon }_L(\omega )]^2+\gamma _0^2},\qquad \gamma _0=\sum _k|T_{kL}|^2\delta (\omega -\epsilon _{kL})=|T_L|^2.$$
Here $`\gamma _0=|T_L|^2\sim |T_{cL}|^2=|T|^2`$, where $`|T|^2`$ is the bare tunnel transparency of the barrier between the dots and between the dots and the electrodes, which can be assumed to be the same without any loss of generality. Under the resonance conditions ($`\stackrel{~}{\epsilon }_c(\omega _r)=\stackrel{~}{\epsilon }_L(\omega _r)`$) one has $`1/\tau _{res}\sim |T|^2=\gamma _0`$. When the levels are detuned by an energy larger than the level width ($`\mathrm{\Delta }\gg \gamma _0`$), the characteristic time becomes $`1/\tau _{non}\sim \gamma _0(\gamma _0^2/\mathrm{\Delta }^2)\ll 1/\tau _{res}`$. Accounting for the Coulomb repulsion does not change the situation qualitatively. Let us now discuss the different stages of the measurement procedure (Fig. 2).
a) The size-quantized levels in the dots are initially empty (they lie above the chemical potentials in the electrodes). The dashed line shows the levels in the dots split off by the Coulomb repulsion.

b, c) The central and the left dots are subjected to voltage pulses with a duration $`\tau `$ such that $`\tau _{res}\ll \tau \ll \tau _{dyn}`$, and the central and left dot levels are tuned into the resonance and pulled down below the chemical potential $`\mu _L`$ in the left electrode. During a time of the order of $`\tau _{res}`$ the levels in the left and the central dots are filled by the electrons from the left electrode.

d) Then a voltage pulse with a duration $`\tau `$, $`\tau _{res}\ll \tau \ll \tau _{dyn}`$, is applied to the left dot, pushing its level above the chemical potential $`\mu _L`$. During a time of the order of $`\tau _{res}`$ the level in the left dot becomes empty, since the electron escapes back into the left electrode. At the same time the level in the central dot remains filled. On the time scale $`<\tau _{non}`$ one can assume that the electron does not remember about the electrodes and is effectively isolated, its spin state being determined by the left electrode state. The above described procedure results in the preparation at the initial moment of time (on the time scale $`\tau \ll \tau _{dyn}`$, i.e. instantly) of the ancilla in the state $`\rho _A(t=0)`$. Since the density matrix of a spin-1/2 system can always be written in the form $`\rho =\frac{1}{2}(I+𝝈\cdot 𝐮)`$, the state of the electron in the central dot which tunnelled from the left electrode is described by the density matrix $`\rho _A(t=0)=(1/2)(I+𝝈\cdot 𝐮_L)`$, where $`𝐮_L`$ is the vector describing both the direction and the magnitude of the spin polarization of the electrons in the left electrode. Then on the time scale $`\tau _{res}\ll t\ll \tau _{non}`$ one can assume that the electron in the central dot and the spins in the gate evolve according to the joint unitary dynamics
$$\stackrel{~}{\rho }(t)=U(t)(\rho _A(t=0)\otimes \rho _s(t=0))U^{-1}(t),\qquad U(t)=\mathrm{exp}\left(-i\int _0^tH_{int}(t^{\prime })dt^{\prime }\right).$$
Here $`\rho _s(t=0)`$ is the density matrix of the quantum gate at the time moment $`t=0`$. The Hamiltonian $`H_{int}`$ can be easily diagonalized since it describes a finite-dimensional system (e.g., the joint dynamics of the electron localized in the dot together with the nucleus spin and the electron spin localized on it is described by an $`8\times 8`$ matrix). The density matrix of the electron in the central dot at the time moment $`t`$ after the joint evolution is
$$\rho _A(t)=\text{Tr}_s\{U(t)(\rho _A(t=0)\otimes \rho _s(t=0))U(t)^{-1}\}=\frac{1}{2}(I+𝝈\cdot 𝐮_𝐀(t)),$$
where the vector $`𝐮_A(t)`$ specifies the spin of the electron localized in the central dot at the time moment $`t`$ resulting from the interaction with the spins in the quantum gate.

e) Detection of the electron state in the central dot is performed by measuring the current flowing into the right electrode. For that purpose the central and the right quantum dots are subjected to voltage pulses of duration $`\tau _1`$ (Fig. 2e) similar to those used to inject an electron from the left electrode into the central dot. If the time $`\tau _1`$ is short compared with $`\tau _{res}`$, the probability of electron escape to the right electrode is proportional to $`\tau _1`$. Since $`\tau _{dyn}\gg \tau _{res}`$, at the time moment $`t`$ the interaction between the ancilla and the quantum gate is turned off almost instantly (on the time scale characteristic of their joint dynamics).
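As an illustration of this reduced dynamics, the following toy computation (our sketch, not the full model: an electron-spin-only gate with an assumed isotropic exchange coupling, so the joint space is 4×4 rather than 8×8) evolves ρ_A ⊗ ρ_s and traces out the gate:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices and a toy exchange coupling g * sigma_A . sigma_s
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
g = 1.0
H_int = g * sum(np.kron(s, s) for s in (sx, sy, sz))  # ancilla (x) gate space

def rho_spin(u):
    """Density matrix (1/2)(I + sigma.u) for a polarization vector u."""
    return 0.5 * (np.eye(2) + u[0]*sx + u[1]*sy + u[2]*sz)

rho_A0 = rho_spin([1, 0, 0])        # electron injected with u_L along x
rho_s0 = rho_spin([0, 0, 0.6])      # gate spin partially polarized along z

for t in np.linspace(0.0, 1.5, 4):
    U = expm(-1j * H_int * t)
    rho = U @ np.kron(rho_A0, rho_s0) @ U.conj().T
    # partial trace over the gate spin (second factor) -> ancilla state
    rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    u_A = [np.trace(rho_A @ s).real for s in (sx, sy, sz)]
    print(f"t = {t:4.2f}  u_A = {np.round(u_A, 3)}")
```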
The probability of electron escape to the right electrode per unit time is (to within numerical factors)
$$\text{Pr}\propto |T|^2\text{Tr}_{sA}\{\rho _R\,\rho _A(t)\}=|T|^2\text{Tr}_{sA}\{(I_s\otimes \rho _R)(U(t)(\rho _s(t=0)\otimes \rho _A(t=0))U^{-1}(t))\},$$
where $`\rho _R`$ is the electron density matrix in the right electrode, $`\rho _R=\frac{1}{2}(I+𝝈\cdot 𝐮_R)`$. Therefore, the probability of appearance of a current pulse in the right electrode depends on the spin state of the electron in the central quantum dot and is
$$\mathrm{Pr}=C\tau _1|T|^2\{1+𝐮_R\cdot 𝐮_A(t)\},$$ (4)
where $`C`$ is a constant (the scalar product actually arises from the reduction to a single spin quantization axis of the two spinors describing the electron states in the central dot and the right electrode).

f) Finally, voltage pulses are applied to the left and the central quantum dots whose magnitude and duration are chosen in such a way that the electron escapes from the central dot (if after the previous stage there is still an electron in the central quantum dot) to the left electrode with unit probability.

Let the duration of the complete cycle consisting of the stages a)–f) be $`\tau _0`$. Then at fixed $`\tau _1`$ the current flowing through the system of quantum dots will be equal to $`\mathrm{Pr}\cdot e/\tau _0`$. The constant $`C`$ in Eq. (4) can be found from the current measurements for the case of parallel magnetizations in the left and right electrodes when the interaction with the gate is turned off ($`t=0`$), so that $`𝐮_R\cdot 𝐮_A(t)=|𝐮_R||𝐮_L|`$ (we assume that the magnetizations in the electrodes, $`|𝐮_R|`$ and $`|𝐮_L|`$, are known). Thus, the current measurements in the outlined scheme allow one to determine the ancilla polarization vector $`𝐮_A(t)`$, which depends on the initial state of the gate $`\rho _s`$ before the measurement.

Strictly speaking, finding the gate state from the measured current requires the determination of all three components of the vector $`𝐮_s`$ appearing in the density matrix of the gate. It is obvious that this can only be done if the current measurements are performed for at least three different combinations of the system parameters. For example, one can vary the magnetization direction in both electrodes and the duration of the ancilla interaction with the gate. However, the problem of whether or not the tunnel current behavior as a function of the indicated parameters provides sufficient information for the complete recovery of the vector $`𝐮_s`$ should be solved separately for each particular interaction between the quantum gate and the ancilla.

The characteristic time of the non-resonant tunneling can be made arbitrarily large by increasing the width of the double barrier, so that it imposes no restrictions. The typical time of the joint quantum dynamics of the gate and the electron in the central dot can be estimated as the typical time of the dynamics of an isolated gate, which is the inverse of the Larmor spin precession frequency in the external field. In a field $`B\sim 100`$ Gs (0.01 T) this gives $`1/\tau _{dyn}\sim 10^6`$ Hz. The resonant tunneling time can well be increased up to $`\tau _{res}\sim 10^{-9}`$ s, allowing one to measure the current pulses on the times $`\tau \sim \tau _{res}`$. To avoid the smearing of the Zeeman splitting, the temperature should not exceed 1 mK. Growth of the operational temperature shortens $`\tau _{dyn}`$ and, consequently, $`\tau _{res}`$, resulting in the reduction of the time during which the current pulses are measured.
Note also that both the quantum dots and the silicon matrix should not contain isotopes with non-zero nuclear spin, which prevents employment of the advanced technology of GaAs/GaAlAs materials and requires usage of Si/SiGe-based systems.

The authors are grateful to Prof. K.A. Valiev for discussion of the obtained results. The work was supported by the Russian Foundation for Basic Research (project No 98-02-16640), by the Program “Advanced technologies and devices of nano- and microelectronics” (project No 02.04.5.2.40.T.50), and by the Program “Surface Atomic Structures” (project N 1.1.99).
# Measuring longitudinal amplitudes for electroproduction of pseudoscalar mesons using recoil polarization in parallel kinematics

## Abstract

We propose a new method for measuring longitudinal amplitudes for electroproduction of pseudoscalar mesons that exploits a symmetry relation for polarization observables in parallel kinematics. This polarization technique does not require variation of electron scattering kinematics and avoids the major sources of systematic errors in Rosenbluth separation.

Transition form factors for electroexcitation of nucleon resonances provide important tests of QCD-inspired models of baryon structure. However, it is often very difficult to separate unpolarized longitudinal response functions from the dominant transverse response functions using the traditional Rosenbluth method without substantial systematic errors arising from the strong dependence of both acceptances and cross sections upon electron-scattering kinematics. Arnold, Carlson, and Gross demonstrated that the ratio between electric and magnetic nucleon elastic form factors can be measured using either recoil or target polarization; such techniques are now becoming standard for elastic scattering. In this Brief Report we demonstrate that polarization observables for electroproduction of pseudoscalar mesons in parallel kinematics can be used to separate longitudinal and transverse amplitudes without need of Rosenbluth separation.

When the nucleon momentum and spin are both parallel to the momentum transfer, conditions sometimes described as superparallel kinematics, the polarized and unpolarized transverse response functions become identical and the recoil polarization or the target polarization asymmetry can be used to determine the ratio between longitudinal and transverse cross sections. Some of the implications of this symmetry have been considered for nucleon knockout reactions upon spin-0 targets which leave the residual nucleus with spin-$`\frac{1}{2}`$ and for electron scattering by a polarized spin-$`\frac{1}{2}`$ target. Raskin and Donnelly also mention this symmetry for pion electroproduction. Using the nonstandard multipole expansion of Raskin and Donnelly and assuming dominance of the $`M_{1+}`$ amplitude for the $`N\rightarrow \mathrm{\Delta }`$ transition, Schmieden discussed the sensitivity of polarization observables for pion electroproduction to quadrupole amplitudes, but the symmetry is more general and it is not necessary to assume that a single resonance dominates. The complete tables of response functions expressed in terms of helicity amplitudes for pseudoscalar meson production which can be found in Refs. implicitly contain the results below also. Nevertheless, here we present some of the practical aspects of exploiting this symmetry, because the simplicity and utility of the polarization method for superparallel kinematics does not appear to be widely known or appreciated.
The reaction amplitudes for any $`A(e,e^{\prime }N)B`$ process where $`A`$ has spin-$`\frac{1}{2}`$ and $`B`$ spin-$`0`$ that is governed by the one-photon exchange mechanism can be expressed in terms of helicity amplitudes of the form
$$H_{\lambda _f\lambda _i\lambda _\gamma }(Q^2,W,\theta ,\varphi )=\langle \lambda _f|𝒥_\mu \epsilon ^\mu |\lambda _i,\lambda _\gamma \rangle $$ (1)
where $`\lambda _i`$ and $`\lambda _f`$ are the initial and final helicities of the nucleon, $`\lambda _\gamma `$ is the helicity of the virtual photon, $`𝒥^\mu `$ is an appropriately normalized transition current operator, and $`\epsilon ^\mu `$ is the virtual-photon polarization vector. The invariant mass of the meson-nucleon system is given by $`W`$, while $`Q^2=𝐪^2-\omega ^2`$ is the virtuality of a spacelike photon. The pion direction relative to the momentum transfer $`𝐪`$ and the electron-scattering plane is given by polar and azimuthal angles $`\theta `$ and $`\varphi `$. We label a nucleon recoil momentum that is along $`𝐪`$, such that $`\theta =\pi `$, as parallel, or a nucleon momentum in the opposite direction as antiparallel, and assign $`\varphi =0`$ to both. Phase conventions for helicity states follow the conventions of Jacob and Wick. The present derivation assumes that the reaction is mediated by one-photon exchange and conserves parity, but makes no other assumptions about the details of the transition amplitudes.

Since parity conservation requires $`|H_{\lambda _f\lambda _i\lambda _\gamma }|=|H_{-\lambda _f,-\lambda _i,-\lambda _\gamma }|`$, it is sufficient to consider six independent amplitudes $`H_i`$ for $`(\lambda _f,\lambda _i,\lambda _\gamma )`$ chosen as $`(\frac{1}{2},-\frac{1}{2},1)`$, $`(-\frac{1}{2},\frac{1}{2},1)`$, $`(-\frac{1}{2},-\frac{1}{2},1)`$, $`(\frac{1}{2},\frac{1}{2},1)`$, $`(\frac{1}{2},\frac{1}{2},0)`$, and $`(\frac{1}{2},-\frac{1}{2},0)`$, numbered sequentially. Due to the absence of orbital angular momentum in the initial state or spin in the undetected recoil particle ($`B`$), the angular momentum projected onto the virtual photon direction reduces to $`J_z=\lambda _\gamma -\lambda _i=\pm \lambda _f`$ for parallel or antiparallel kinematics, where the upper sign applies to parallel and the lower to antiparallel kinematics. Hence, only $`H_4`$ and $`H_6`$ contribute to parallel, and only $`H_2`$ and $`H_5`$ to antiparallel kinematics. It is convenient to define $`T_+=H_4`$ and $`L_+=H_6`$ as the transverse and longitudinal amplitudes relevant to parallel kinematics, and $`T_-=H_2`$ and $`L_-=H_5`$ as the corresponding amplitudes for antiparallel kinematics.
These amplitudes are related to the usual CGLN coefficients by
$$T_\pm =\sqrt{2}\,(ℱ_1\pm ℱ_2)$$ (3)
$$L_\pm =\frac{Q}{\omega }\,(ℱ_5^{\prime }\mp ℱ_6^{\prime })$$ (4)
where
$$i𝒥^0=\frac{q}{\omega }\left(ℱ_5^{\prime }\,\stackrel{}{\sigma }\cdot \widehat{q}+ℱ_6^{\prime }\,\stackrel{}{\sigma }\cdot \widehat{p}\right)$$ (6)
$$i\stackrel{}{𝒥}=ℱ_1\stackrel{}{\sigma }-iℱ_2\,(\stackrel{}{\sigma }\cdot \widehat{p})\,\stackrel{}{\sigma }\times \widehat{q}+ℱ_3\,\widehat{p}\,(\stackrel{}{\sigma }\cdot \widehat{q})+ℱ_4\,\widehat{p}\,(\stackrel{}{\sigma }\cdot \widehat{p})+ℱ_5\,\widehat{q}\,(\stackrel{}{\sigma }\cdot \widehat{q})+ℱ_6\,\widehat{q}\,(\stackrel{}{\sigma }\cdot \widehat{p})$$ (7)
and
$$ℱ_5^{\prime }=ℱ_5+ℱ_3\,\widehat{p}\cdot \widehat{q}+ℱ_1$$ (9)
$$ℱ_6^{\prime }=ℱ_6+ℱ_4\,\widehat{p}\cdot \widehat{q}.$$ (10)
Using the standard multipole expansion of CGLN amplitudes introduced by Dennery, the amplitudes for antiparallel kinematics become
$$T_-=\sqrt{\frac{1}{2}}\sum _{\ell }\left[(\ell +1)(\ell +2)E_{\ell +}+\ell (\ell -1)E_{\ell -}+\ell (\ell +1)(M_{\ell +}-M_{\ell -})\right]$$ (12)
$$L_-=\frac{Q}{q}\sum _{\ell }\left[(\ell +1)^2S_{\ell +}+\ell ^2S_{\ell -}\right]$$ (13)
while the summands for parallel kinematics require an extra factor of $`(-1)^{\ell }`$. Note that because Raskin and Donnelly confused $`\theta _\pi `$ with $`\theta _N`$, the multipole amplitudes used by Schmieden should be multiplied by $`(-1)^{\ell }`$. Furthermore, Raskin and Donnelly give the opposite sign for $`E_{\ell -}`$.

The differential cross section for the meson electroproduction reaction $`p(\stackrel{}{e},e^{\prime }\stackrel{}{N})x`$ can be expressed in the form
$$\frac{d^5\sigma }{dk_fd\mathrm{\Omega }_ed\mathrm{\Omega }_N}=\mathrm{\Gamma }_\gamma \sigma _v$$ (14)
where $`\sigma _v`$ is the center of mass cross section for the virtual photoproduction reaction $`\gamma _v+N\rightarrow x+N`$ and
$$\mathrm{\Gamma }_\gamma =\frac{\alpha }{2\pi ^2}\frac{k_f}{k_i}\frac{k_\gamma }{Q^2}\frac{1}{1-ϵ}$$ (15)
is the virtual photon flux for initial (final) electron momenta $`k_i`$ ($`k_f`$). Here $`ϵ=\left(1+2\frac{𝐪^2}{Q^2}\mathrm{tan}^2\frac{\theta _e}{2}\right)^{-1}`$ is the transverse polarization of the virtual photon, $`\theta _e`$ is the electron scattering angle, and $`k_\gamma =(W^2-m_p^2)/2m_p`$ is the laboratory energy a real photon would need to excite the same transition. The spin dependence of the virtual photoproduction cross section for an unpolarized target can be expressed in the form
$$\sigma _v=\overline{\sigma }\left[1+𝑷\cdot 𝝈+h(A+𝑷^{\prime }\cdot 𝝈)\right]$$ (16)
where $`\overline{\sigma }`$ is the unpolarized differential cross section, $`A`$ is the beam analyzing power, $`𝑷`$ is the induced or helicity-independent recoil polarization, $`𝑷^{\prime }`$ is the polarization transfer or helicity-dependent recoil polarization, and $`h`$ is the beam helicity. Thus, the net polarization of the recoil nucleon is $`𝚷=𝑷+h𝑷^{\prime }`$. A similar expression applies when the target is polarized and the recoil polarization is unobserved. We omit observables requiring both target and recoil polarization because they provide no new information for parallel kinematics and are so difficult to measure as to be of no practical interest.
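For orientation, the truncated multipole sums can be transcribed directly; the helper below (our sketch, with placeholder multipole values and normalization) evaluates T and L for parallel or antiparallel kinematics and the longitudinal-to-transverse ratio used below:

```python
import numpy as np

def T_L_amplitudes(E, M, S, Q_over_q, parallel=True):
    """Transverse and longitudinal amplitudes from multipoles, where
    E[l] = (E_{l+}, E_{l-}), M[l] = (M_{l+}, M_{l-}), S[l] = (S_{l+}, S_{l-});
    parallel kinematics carries the extra (-1)^l factor noted in the text."""
    T = L = 0j
    for l in E:
        sign = (-1)**l if parallel else 1
        Ep, Em = E[l]; Mp, Mm = M[l]; Sp, Sm = S[l]
        T += sign * np.sqrt(0.5) * ((l+1)*(l+2)*Ep + l*(l-1)*Em
                                    + l*(l+1)*(Mp - Mm))
        L += sign * Q_over_q * ((l+1)**2 * Sp + l**2 * Sm)
    return T, L

# toy s- and p-wave multipoles, M_{1+} dominant (all values are placeholders)
E = {0: (0.02+0j, 0), 1: (0.01+0j, 0)}
M = {0: (0, 0),       1: (0.30+0.05j, 0.01+0j)}
S = {0: (0.015+0j, 0), 1: (0.02+0j, 0.005+0j)}
T, L = T_L_amplitudes(E, M, S, Q_over_q=0.5, parallel=True)
print("R = 2|L|^2/|T|^2 =", 2*abs(L)**2 / abs(T)**2)
```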
For parallel kinematics it is simplest to refer polarizations to a basis in which $`\widehat{𝒛}=\widehat{𝒒}`$ is in the photon direction, $`\widehat{𝒚}=\widehat{𝒌}_i\times \widehat{𝒌}_f`$ is normal to the electron scattering plane, and $`\widehat{𝒙}=\widehat{𝒚}\times \widehat{𝒛}`$ is transverse. Recoil polarization observables can now be expressed in the form
$$\overline{\sigma }=\sigma _T+ϵ\sigma _L=𝒦({\textstyle \frac{1}{2}}|T|^2+ϵ|L|^2)$$ (18)
$$\mathrm{\Pi }_x\overline{\sigma }=h𝒦\sqrt{ϵ(1-ϵ)}\,\mathrm{Re}(TL^{*})$$ (19)
$$\mathrm{\Pi }_y\overline{\sigma }=𝒦\sqrt{ϵ(1+ϵ)}\,\mathrm{Im}(TL^{*})$$ (20)
$$\mathrm{\Pi }_z\overline{\sigma }=h𝒦\sqrt{1-ϵ^2}\,{\textstyle \frac{1}{2}}|T|^2$$ (21)
where $`𝒦=pW/k_\gamma m_p`$ is a kinematic factor and $`p`$ is the final center of mass momentum. We have left the $`\pm `$ subscripts on observables and amplitudes implicit in the interests of brevity. In the chosen basis, target and recoil polarization observables for parallel kinematics differ only in sign. Thus, using either recoil or target polarization, there are five observables that depend upon just four response functions (bilinear amplitude products). Therefore, there exists a relationship between polarization and cross section for parallel kinematics that provides an alternative method for separating the longitudinal and transverse cross sections. We define
$$ℛ_\pm =\frac{\sigma _{L\pm }}{\sigma _{T\pm }}=2\frac{|L_\pm |^2}{|T_\pm |^2}$$ (22)
as the ratio between longitudinal and transverse cross sections for parallel kinematics. The traditional Rosenbluth separation method relies on the variation of the cross section with $`ϵ`$, but this method requires measurements for two or more electron scattering kinematics with quite different acceptances. When the longitudinal contribution is small, the systematic errors due to acceptances and kinematic variables can become prohibitively large. Alternatively, for parallel kinematics it is possible to exploit the relationship between the longitudinal component of recoil (or target) polarization and the transverse contribution to the differential cross section to obtain
$$ℛ=\frac{h\sqrt{1-ϵ^2}-\mathrm{\Pi }_z}{ϵ\mathrm{\Pi }_z}$$ (23)
with fixed electron scattering kinematics without Rosenbluth separation; in fact, for the ratio one does not even need to normalize the cross section. Therefore, this polarization technique avoids the major sources of systematic error that afflict the Rosenbluth method. Assuming that $`ϵ`$ is known accurately, the uncertainty in $`ℛ`$ is related to the polarization uncertainty $`\delta \mathrm{\Pi }_z`$ by
$$\left|\frac{\delta ℛ}{\delta \mathrm{\Pi }_z}\right|=\frac{|h|\sqrt{1-ϵ^2}}{ϵ\mathrm{\Pi }_z^2}=\frac{(1+ϵℛ)^2}{|h|ϵ\sqrt{1-ϵ^2}}.$$ (24)
If $`ℛ`$ is small and if the minimum attainable value of $`\delta \mathrm{\Pi }_z`$ is governed by systematic errors, then the optimum kinematics for measurement of $`ℛ`$ are realized when $`ϵ=1/\sqrt{2}`$. Alternatively, when statistical uncertainties dominate $`\delta \mathrm{\Pi }_z`$, it becomes advantageous to employ the largest practical $`ϵ`$ because the virtual-photon flux is proportional to $`(1-ϵ)^{-1}`$; for given $`W`$ and $`Q^2`$ this implies that higher beam energies are more favorable. These relationships can also be used to establish a bound
$$|\mathrm{\Pi }_z|\le |h|\sqrt{1-ϵ^2}$$ (25)
upon the longitudinal polarization, where the limiting value is realized for a purely transverse electroproduction amplitude.
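Equations (23)–(24) translate directly into code; the numerical inputs below are assumed values for illustration only:

```python
import numpy as np

def R_from_polarization(Pi_z, eps, h=1):
    """Longitudinal-to-transverse ratio from the recoil polarization component
    along q in parallel kinematics, eq. (23)."""
    return (h * np.sqrt(1 - eps**2) - Pi_z) / (eps * Pi_z)

def dR_dPi(Pi_z, eps, h=1):
    """Error propagation factor |dR/dPi_z|, eq. (24)."""
    return abs(h) * np.sqrt(1 - eps**2) / (eps * Pi_z**2)

eps, h = 1/np.sqrt(2), 1      # eps = 1/sqrt(2) minimizes dR for small R
Pi_z = 0.65                   # assumed measurement; bound is sqrt(1-eps^2) ~ 0.707
print("R  =", R_from_polarization(Pi_z, eps, h))
print("dR =", dR_dPi(Pi_z, eps, h) * 0.02, "for dPi_z = 0.02")
```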
Recognizing that in parallel kinematics the transverse amplitudes flip the nucleon spin while the longitudinal amplitudes do not, one finds that reduction of the longitudinal polarization from its maximal value would be indicative of a measurable spin nonflip amplitude. Furthermore, a nonvanishing normal component of polarization is indicative of a phase difference between longitudinal and transverse amplitudes. The magnitudes of the longitudinal and transverse helicity amplitudes and the relative phase between them can be determined from
$$L=rTe^{i\delta }$$ (27)
$$|T|^2=\frac{2\overline{\sigma }\mathrm{\Pi }_z}{h𝒦\sqrt{1-ϵ^2}}$$ (28)
$$r^2=ℛ/2=\frac{h\sqrt{1-ϵ^2}-\mathrm{\Pi }_z}{2ϵ\mathrm{\Pi }_z}$$ (29)
$$\mathrm{tan}\delta =h\sqrt{\frac{1-ϵ}{1+ϵ}}\,\frac{\mathrm{\Pi }_y}{\mathrm{\Pi }_x}.$$ (30)
Finally, under some conditions these quantities provide useful constraints upon multipole amplitudes. For example, if we limit the expansions to $`s`$- and $`p`$-waves and include only contributions involving the dominant $`M_{1+}`$ amplitude for pion electroproduction near the $`P_{33}(1232)`$ resonance, we find
$$\mathrm{Re}\left[\left(S_{1-}+4S_{1+}\mp S_{0+}\right)M_{1+}^{*}\right]\approx \frac{\mathrm{\Pi }_{x\pm }\overline{\sigma }_\pm }{h\sqrt{2}\frac{Q}{q}𝒦\sqrt{ϵ-ϵ^2}}.$$ (31)
Thus, by comparing parallel versus antiparallel kinematics one can separate the combinations $`\mathrm{Re}\,S_{0+}M_{1+}^{*}`$ and $`\mathrm{Re}\,(S_{1-}+4S_{1+})M_{1+}^{*}`$. Most attempts to measure the $`S_{1+}`$ amplitude for pion electroproduction, which is sensitive to quadrupole deformation of the nucleon and $`\mathrm{\Delta }`$ wave functions, have relied upon the $`R_{LT}`$ response function obtained from the left-right asymmetry of the unpolarized cross section. However, Mertz et al. have shown that current models fail to reproduce the $`W`$ dependence of the cross section asymmetry, which casts doubt upon the reliability of fitted $`S_{1+}`$ resonance amplitudes. It is widely believed that the $`S_{0+}`$ contribution may be responsible for these difficulties, but it should be possible to measure this amplitude with relatively little model dependence using recoil polarization for parallel kinematics. Furthermore, although one cannot separate $`S_{1+}`$ from $`S_{1-}`$ without more comprehensive data, the $`S_{1-}`$ contribution is expected to be quite small and to display a distinctly different dependence upon $`W`$. Therefore, recoil polarization for parallel versus antiparallel kinematics offers an independent method for measuring $`S_{1+}`$ also.

In summary, we have proposed a polarization method for measuring the ratio between longitudinal and transverse cross sections for electroproduction of pseudoscalar mesons in parallel kinematics that employs fixed electron kinematics and avoids Rosenbluth separation. We have also developed the relationships needed to extract the magnitudes and relative phase of the corresponding helicity amplitudes. It is important to recognize that this method does not depend upon dominance of any particular resonance and applies equally well to the resonant and nonresonant contributions. However, it need not apply to more complicated background processes such as $`p(e,e^{\prime }N)\pi \pi `$.
Fortunately, for many interesting experiments, such as $`\gamma _vN\rightarrow P_{33}(1232)\rightarrow N\pi `$ or $`\gamma _vN\rightarrow S_{11}(1535)\rightarrow N\eta `$, those background contributions should vary slowly with missing mass and can be subtracted from the single-meson peak in the missing mass distribution. Nor can this method be used to obtain full angular distributions for longitudinal and transverse response functions. Nevertheless, the ability to separate longitudinal and transverse amplitudes in parallel and/or antiparallel kinematics using recoil polarization measurements without Rosenbluth separation can be very helpful in testing models of baryon structure and is a useful supplement to the traditional cross section method.
# Detecting Stellar Spots by Gravitational Microlensing

## 1 Introduction

Apart from having a venerable history (Schwarzschild 1975), the question of small-scale surface structure in normal stars is very important for stellar modeling. Direct interferometric evidence is scarce and inconclusive (Di Benedetto & Bonneau 1990; Hummel et al. 1994). Indirect evidence, such as Doppler imaging and photometry, is limited to specific types of stars (RS CVn, BY Dra, etc.). Photometric evidence for stellar spots comes from the Optical Gravitational Lensing Experiment (OGLE) survey of bulge giants (Udalski et al. 1995). Modeling of the spots on the stars selected from the OGLE database was reported by Guinan et al. (1997). Direct evidence for a bright spot is available for one red supergiant, $`\alpha `$ Ori (Uitenbroek, Dupree, & Gilliland 1998), although in this case the bright region may be associated with the long-period pulsation of the star. The evidence for spots is particularly scarce for normal red giants. Such information will be invaluable, given the current difficulty in calculating detailed red giant atmosphere models directly or extrapolating their physics from available models of red dwarfs.

The presence of spots can also be revealed by observations of Galactic microlensing events. During such an event the flux from a background star is amplified by gravitational lensing due to a massive object, such as a dim star, passing in the foreground. Over 350 such events have been observed already, primarily toward source stars in the Galactic bulge and the Magellanic Clouds. For a review of Galactic microlensing see Paczyński (1996). In events with a small impact parameter, when the lens passes within a few stellar radii of the line of sight toward the source, the lens resolves the surface of the background star, as the amplification depends on its surface brightness distribution. Various aspects of this effect have been studied by Simmons, Newsam, & Willis (1995), Gould & Welch (1996), Bogdanov & Cherepashchuk (1996), Heyrovský & Loeb (1997) and Valls-Gabaud (1998), to name a few. Spectral changes due to microlensing of a spotless source star were described in detail recently by Heyrovský, Sasselov, & Loeb (1999; hereafter HSL). In this complementary work we study the case when the circular symmetry of the surface brightness distribution of the source is perturbed by the presence of a spot. We illustrate the effect using a model of a source similar to the one lensed in the MACHO Alert 95-30 event (hereafter M95-30; see Alcock et al. 1997). During this microlensing event the lens directly transited a red giant in the Galactic bulge. As red giants in the bulge are generally the most likely sources to be resolved, Galactic microlensing appears to be ideally suited for filling a gap in our understanding of their atmospheres as well as of spots in general.

In the following section we study the broadband photometric effect of a spot on the light curve of a microlensing event. The spectroscopic signature on temperature-sensitive lines is illustrated in §3. In §4 we discuss the limitations of the simple spot model used here, as well as the possibility of confusing the effect of a spot with a planetary microlensing event. Our main results are summarized in §5.

## 2 Effect of a Spot on Microlensing Light Curves

We describe the lensing geometry in terms of angular displacements in the plane of the sky. The angular radius of the source star serves as a distance unit in this two-dimensional description.
During a point-mass microlensing event the flux from a spotless star with a limb-darkening profile $`B(r)`$ is amplified by a factor $$A_{\ast}(\vec{r}_L)=\frac{\int B(r)A_0(|\vec{r}-\vec{r}_L|)d\mathrm{\Sigma}}{\int B(r)d\mathrm{\Sigma}}.$$ (1) Here $`\vec{r}_L`$ is the displacement of the lens from the source center, $`\vec{r}`$ is the position vector of a point on the projected surface of the star $`\mathrm{\Sigma}`$, and the point-source amplification is $$A_0(\sigma )=\frac{\sigma ^2+2ϵ^2}{\sigma \sqrt{\sigma ^2+4ϵ^2}}.$$ (2) The Einstein radius of the lens <sup>1</sup><sup>1</sup>1 Also given in source radius units. is denoted by $`ϵ`$; the separation between a point source and the lens is $`\sigma `$. At large separations $`\vec{r}_L`$, formula (1) is well approximated by the point-source limit (2) with $`\sigma =r_L`$. However, when the lens approaches the source closer than three source radii, finite-source effects become observable (higher than 1%–2%), as shown in HSL. The light curve of such an event then contains information on the surface structure of the source, introduced through its dependence on the surface brightness $`B(\vec{r})`$. HSL dealt with spectral effects due to the wavelength dependence of $`B(r)`$ in the case of a spotless source. Here we study the case of a source in which the circular symmetry is perturbed by a spot. In such a case it is useful to separate the surface brightness distribution into the circularly symmetric component $`B(r)`$ (describing the source in the absence of the spot) and a brightness decrement $`B_D(\vec{r})`$. This decrement is zero beyond the area of the spot $`\mathrm{\Sigma}^{\prime}`$. The amplification of the spotted star can now be written $$A(\vec{r}_L)=\frac{\int B(r)A_0(|\vec{r}-\vec{r}_L|)d\mathrm{\Sigma}-\int B_D(\vec{r}^{\,\prime})A_0(|\vec{r}^{\,\prime}-\vec{r}_L|)d\mathrm{\Sigma}^{\prime}}{\int (B-B_D)(\vec{r})d\mathrm{\Sigma}},$$ (3) where we made use of the linearity of the integral in the numerator in $`B`$. Note that the second integral in the numerator is taken only over the area of the spot. The position vector $`\vec{r}^{\,\prime}`$ of a point within the spot as defined here originates at the source center. The relative deviation of the amplification from the amplification of a spotless source is $$\delta (\vec{r}_L)=\frac{A-A_{\ast}}{A_{\ast}}(\vec{r}_L)=\frac{\int B_D(\vec{r}^{\,\prime})d\mathrm{\Sigma}^{\prime}}{\int (B-B_D)(\vec{r})d\mathrm{\Sigma}}\left[1-\frac{\int B_D(\vec{r}^{\,\prime})A_0(|\vec{r}^{\,\prime}-\vec{r}_L|)d\mathrm{\Sigma}^{\prime}}{A_{\ast}(\vec{r}_L)\int B_D(\vec{r}^{\,\prime})d\mathrm{\Sigma}^{\prime}}\right].$$ (4) We employ a simple model for a small spot by using a constant decrement $`B_D`$ over a circular area with radius $`r_s\ll 1`$ on the projected face of the source star, centered at a distance $`s`$ from its center <sup>2</sup><sup>2</sup>2 Note that in this model the surface brightness within the spot is not constant; it decreases toward the limb of the source following the shape of the spotless profile $`B(r)`$.. This way we can compute both integrals in the numerator of equation (3) using the method for computing light curves of circularly symmetric sources described in Heyrovský & Loeb (1997).
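To make equations (1) and (2) concrete, the following minimal Python sketch evaluates the finite-source amplification by direct quadrature. The linear limb-darkening law and its coefficient are hypothetical stand-ins for the model-atmosphere profile actually used here, and the simple polar grid is only illustrative (it is crude when the lens sits very close to a grid node):

```python
import numpy as np

def A0(sigma, eps):
    # Point-source amplification of equation (2); eps is the Einstein
    # radius in units of the source radius.
    return (sigma**2 + 2*eps**2) / (sigma*np.sqrt(sigma**2 + 4*eps**2))

def limb_profile(r, c=0.6):
    # Hypothetical linear limb-darkening law standing in for the
    # V-band model-atmosphere profile B(r).
    return 1.0 - c*(1.0 - np.sqrt(np.clip(1.0 - r**2, 0.0, None)))

def A_spotless(rL, eps, n=400):
    # Finite-source amplification of equation (1) for a lens at
    # distance rL from the source center (polar-grid quadrature).
    r = (np.arange(n) + 0.5)/n
    phi = 2*np.pi*(np.arange(n) + 0.5)/n
    R, PHI = np.meshgrid(r, phi, indexing="ij")
    sigma = np.sqrt(R**2 + rL**2 - 2*R*rL*np.cos(PHI))   # |r - r_L|
    B, w = limb_profile(R), R        # w = polar area element r dr dphi
    return np.sum(B*A0(sigma, eps)*w)/np.sum(B*w)

# Example: M95-30-like Einstein radius, lens approaching the center.
for rL in (3.0, 1.0, 0.5):
    print(rL, A_spotless(rL, eps=13.23))
```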
The expression for the amplification deviation in this model simplifies to $$\delta (\vec{r}_L)=\frac{\pi r_s^2B_D}{\int (B-B_D)(\vec{r})d\mathrm{\Sigma}}\left[1-\frac{\int A_0(|\vec{r}^{\,\prime}-\vec{r}_L|)d\mathrm{\Sigma}^{\prime}}{\pi r_s^2A_{\ast}(\vec{r}_L)}\right].$$ (5) In the following we illustrate the effect of a test spot superimposed on a spotless model of a red giant similar to the M95-30 source. The 3750 K model atmosphere applied here is described in more detail in HSL. In most of the applications we use its $`V`$-band limb-darkening profile for the surface brightness distribution $`B(r)`$. The presented broadband results, however, do not change qualitatively for different spectral bands or source models in the small spot regime explored here. The value of the brightness decrement $`B_D`$ depends on the physical properties of the spot, as well as its apparent position on the source disk. We can separate these two by expressing the decrement as $`B_D=(1-\mu ^{-1})B(s)`$. Here we introduced the contrast parameter $`\mu `$, which is equal to the ratio of the spotless brightness at the position of the center of the spot $`B(s)`$ to the actual brightness at the same position $`B(s)-B_D`$. The dependence of $`\mu `$ on the spot position $`s`$ is weak except when very close to the limb; we therefore neglect it here. The parameter $`\mu `$ now depends purely on intrinsic physical properties - namely, on the temperature contrast of the spot $`\mathrm{\Delta }`$T, the effective temperature of the star, and the spectral band. Its value can be obtained numerically by comparing model atmospheres of different temperatures. For example, $`\mu \approx 10`$ for a $`\mathrm{\Delta }`$T=1000 K spot on the model source used. Values $`\mu <1`$ can be used to describe bright spots, while $`\mu =1`$ corresponds to zero contrast. Sample light curves computed using equation (3) are presented in Figure 1. In all cases a lens with the M95-30 Einstein radius of $`ϵ=13.23`$ transits the source star with zero impact parameter. The spot has a radius $`r_s=0.2`$ and contrast $`\mu =10`$. When the lens directly crosses the spot (solid line; spot centered at $`s=0.4`$), there is a significant dip in the light curve. On the other hand, if the spot lies further from the lens path (dashed line; closest approach to spot = 0.3, $`s=0.5`$), the effect is weak. It consists primarily of a slight shift due to the offset center of brightness, and a minor increase in peak amplification. To explore the range of possible spot signatures on light curves we study the relative amplification deviation using equation (5). The deviation, $`\delta `$, is primarily a function of parameters describing the lensing and spot geometries ($`\vec{r}_L,ϵ,s,r_s`$), and the spot contrast $`\mu `$ - six parameters in all. The dependence of $`\delta `$ on the lens position $`\vec{r}_L`$ is illustrated by the contour plots in Figure 2, for different spot positions $`s`$. The three other parameters are kept fixed at values $`ϵ=13.23,r_s=0.2`$ and $`\mu =10`$. First we note that the deviation from the spotless light curve of the same source can be positive as well as negative, for any spot position <sup>3</sup><sup>3</sup>3 A dark spot can produce a positive effect, because the amplification (3) is normalized by the lower intrinsic flux from the spotted star.. The negative effect peaks at 18%–19% at the spot position in all four cases. This region of the source is relatively dimmer than in the spotless case.
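Equation (5) can likewise be evaluated directly. The sketch below reuses A0, limb_profile and A_spotless from the previous sketch; for a lens sitting on the spot it should return a negative deviation of order 0.2, in line with the quoted 18%–19% peak (the exact value depends on the assumed limb-darkening law, and the quadrature is crude when the lens lies inside the spot):

```python
def delta_spot(xL, yL, s, r_s, mu, eps, n=200):
    # Relative deviation of equation (5): constant decrement B_D over
    # a circular spot of radius r_s centered at (s, 0).
    B_D = (1.0 - 1.0/mu)*limb_profile(s)
    r = (np.arange(n) + 0.5)/n
    # flux integral of the spotted star (denominator of equation (5))
    F_spotted = 2*np.pi*np.sum(limb_profile(r)*r)/n - np.pi*r_s**2*B_D
    # mean of the point-source amplification A0 over the spot area
    u = r_s*(np.arange(n) + 0.5)/n
    phi = 2*np.pi*(np.arange(n) + 0.5)/n
    U, PHI = np.meshgrid(u, phi, indexing="ij")
    sig = np.hypot(s + U*np.cos(PHI) - xL, U*np.sin(PHI) - yL)
    A0_mean = np.sum(A0(sig, eps)*U)/np.sum(U)
    Astar = A_spotless(np.hypot(xL, yL), eps)
    return (np.pi*r_s**2*B_D/F_spotted)*(1.0 - A0_mean/Astar)

# Lens sitting on the spot of Figure 2 (s = 0.4, r_s = 0.2, mu = 10):
print(delta_spot(0.4, 0.0, s=0.4, r_s=0.2, mu=10.0, eps=13.23))
```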
The weaker positive effect (2%–3%, less when $`s=0`$) peaks on the opposite side of the source close to the limb, a region relatively brighter than in the spotless case. Geometrically the actual deviation depends on the interplay of the distances of the lens from the spot, from the positive peak and from the limb. Deviation curves for any particular lensing event with the given spot geometry can be read off directly from the plots in Figure 2. Examples corresponding to the four lens paths marked in Figure 2 are shown in Figure 3. Orienting our coordinate system as in Figure 2 with the spot along the positive $`x`$-axis, we parametrize the lens trajectory $`\vec{r}_L=(x_L,y_L)=(p\mathrm{sin}\beta +t\mathrm{cos}\beta ,-p\mathrm{cos}\beta +t\mathrm{sin}\beta )`$, where $`t`$ is the time in units of source-radius crossing time measured from closest approach. The parameter $`\beta `$ is the angle between the spot position vector $`\vec{s}`$ and the lens velocity $`\dot{\vec{r}}_L`$. In this notation the impact parameter $`p`$ is given a sign depending on the lens motion - positive if $`\vec{r}_L`$ turns counterclockwise, negative if clockwise. The upper left-hand panel in Figure 3 corresponds to the maximum spot-transit effect. The three other panels demonstrate several other possible light curve deviations. According to equation (5), the time dependence of the deviation (i.e., $`\vec{r}_L`$-dependence) can be separated from the dependence on the spot contrast ($`\mu `$ through $`B_D`$). It follows that changing the contrast affects only the amplitude of the deviation during a microlensing event, not its time dependence. To see the change in amplitude as a function of $`\mu `$, it is sufficient to look at the change in the maximum deviation during an event <sup>4</sup><sup>4</sup>4 The maximum deviation is therefore parametrized by $`p`$ and $`\beta `$ instead of $`\vec{r}_L`$., $`\delta _M=\mathrm{max}_t|\delta [\vec{r}_L(t)]|`$. The dependence of $`\delta _M`$ on spot contrast is shown in Figure 4. In this generic example an $`ϵ=13.23`$ lens has a zero impact parameter and an $`r_s=0.1`$ spot is centered on the source ($`s=0`$). The dependence is steep for $`\mu <5`$, but changes only slowly for $`\mu >10`$. Values of $`\mu `$ between 2 and 10 are thought to be typical of stellar dark spots, roughly corresponding to $`\mathrm{\Delta }`$T values of 250 to 1000 K, which is at the high end of spots observed in active stars. In most of the calculations presented here we use $`\mu =10`$ to study the maximum spot effect. In a similar way we can study the dependence on the Einstein radius. We find that $`\delta _M`$ grows while $`ϵ<1`$, but remains practically constant for $`ϵ>2`$. This saturation is due to the linear dependence of amplification on $`ϵ`$ close to the light curve peak ($`|\vec{r}_L|\ll ϵ`$) for sufficiently large $`ϵ`$. The ratio of amplifications in equation (5) then cancels out the $`ϵ`$-dependence. The effect of spot size on $`\delta _M`$ is illustrated by the following two figures. Figure 5 demonstrates the detectability of spots on the projected stellar surface for various combinations of spot radius and impact parameter. Spots centered in the black regions will produce a maximum effect higher than 5%. Those centered in the gray areas have $`\delta _M<2\%`$, and thus would be difficult to detect. We can draw several conclusions for dark spots with sufficient contrast (here $`\mu =10`$).
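These conclusions can be reproduced by scanning $`\delta `$ along the trajectory just defined. The sketch below assumes the sign conventions of the parametrization above (including the reconstructed minus sign in the $`y`$-component) and reuses delta_spot from the previous sketch:

```python
def lens_trajectory(t, p, beta):
    # Lens position for signed impact parameter p and angle beta
    # between the spot direction (+x axis) and the lens velocity;
    # t is measured in source-radius crossing times.
    return (p*np.sin(beta) + t*np.cos(beta),
            -p*np.cos(beta) + t*np.sin(beta))

def max_deviation(p, beta, s, r_s, mu, eps, tmax=2.0, nt=81):
    # delta_M = max_t |delta(r_L(t))|, cf. footnote 4.
    ts = np.linspace(-tmax, tmax, nt)
    return max(abs(delta_spot(*lens_trajectory(t, p, beta),
                              s, r_s, mu, eps)) for t in ts)

# Generic example of Figure 4: central transit, centered r_s = 0.1 spot;
# the rule of thumb delta_M ~ r_s suggests a value near 0.1.
print(max_deviation(p=0.0, beta=0.0, s=0.0, r_s=0.1, mu=10.0, eps=13.23))
```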
As a rule of thumb, small spots with radii $`r_s\lesssim 0.15`$ could be detected ($`\delta _M>2\%`$) if the lens passes within $`1.5r_s`$ of the spot center. Larger spots with $`r_s\gtrsim 0.2`$ can be detected over a large area of the source during source-transit events, and possibly even marginally during near-transit events (e.g. $`p\approx 1.2`$). As a further interesting result, the maximum effect a spot of radius $`r_s`$ (within the studied range) can have during a transit event is numerically roughly $`\delta _M\approx r_s`$, irrespective of the actual impact parameter value. For example, a spot with radius 0.05 can have a maximum effect of 5%, and an $`r_s=0.2`$ spot can cause a 19% deviation. Figure 6 is closely related to Figure 5. For a fixed impact parameter we plotted contours of the minimum radius of a spot (centered at the particular position) necessary to be detectable ($`\delta _M>2\%`$). As hinted above, during any transit event a spot with $`r_s\gtrsim 0.3`$ located practically anywhere on the projected surface of the source will produce a detectable signature on the light curve. Turning to the case of a bright spot, we can use the same approach as above with a negative decrement $`B_D`$ in formula (5), corresponding to contrast parameter $`\mu <1`$. As noted earlier, a change in the contrast affects only the amplitude but not the time dependence of the deviation. The only difference for a bright spot is a change in sign of the deviation due to negative $`B_D`$. Therefore the geometry of the contour plots in Figure 2 remains the same, only the contour values and poles are changed. The maximum deviation at the position of the bright spot is now positive, the weaker opposite peak is negative. Changing the sign of the deviation in Figure 3 in fact gives us deviation curves for a bright spot in the same geometry with $`\mu \approx 0.5`$ instead of $`\mu =10`$. This correspondence can be seen from Figure 4, where both these values have the same maximum effect $`\delta _M`$. Unlike in the case of a dark spot, there is mathematically no upper limit on the relative effect of a bright spot. A very bright spot would achieve high magnification and dominate the light curve, acting as an individual source with radius $`r_s`$. ## 3 Change in Spectral Line Profiles Studying spectral effects requires computing light curves for a large set of wavelengths simultaneously. Changes in the observed spectrum of a spotless source star due to microlensing are described in HSL. Most individual absorption lines respond in a generic way - they appear less prominent when the lens is crossing the limb of the source and become more prominent if the lens approaches the center of the source. The effect can be measured by the corresponding change in the equivalent width of the line. The use of sensitive spectral lines can maximize the search for spots and active regions on the surfaces of microlensed stars (Sasselov 1997). Similar techniques are widely known and used in the direct study of the Sun. One example is observing the bandhead of the CH radical at 430.5 nm, which provides very high contrast to surface structure (Berger et al. 1995). This method will require spectroscopy of the microlensing event, but could be very rewarding. For red giants such as the M95-30 source, the H$`\alpha `$ line will be sensitive to active regions on the surface (which often, but not always, accompany spots).
To demonstrate the effect, we computed the H$`\alpha `$ profile using a five-level non-LTE solution for the hydrogen atom in a giant atmosphere, as described in HSL. We use the same source model as in the previous calculations (T=3750 K, $`\mathrm{log}g`$=0.5); for the active region ($`\mathrm{\Delta }`$T$`\approx `$800 K) we use the line profile of a $`\mathrm{log}g`$=2 giant with a chromosphere similar to that of $`\beta `$ Gem. In Figure 7 we show a time sequence of the changing profile of H$`\alpha `$ distorted by an $`r_s=0.1`$ active region on the surface of the star. In the calculation we used the M95-30 Einstein radius and a zero impact parameter. The presence of the H$`\alpha `$-bright region leads to a noticeable change in the line profile; in its absence the change is considerably weaker. In this particular case, more pronounced wings as well as wing emission can be seen when the lens passes near the active region. ## 4 Discussion The small spot model used in this work has obvious limitations. For example, the constant brightness decrement assumption is not adequate close to the limb, and spots can have various shapes and brightness structure (umbra, penumbra). However, most of these problems are not significant for sufficiently small spots and will not change the general character of the obtained results. The dependence of the deviation on the spotless brightness profile $`B(r)`$ was also neglected in the study (except for its effect on the contrast $`\mu `$). According to equation (5), it can be expected to have a weak effect on the amplitude, and due to the spotless amplification in equation (1) an even weaker effect on the time dependence of the amplification deviation in a microlensing event. More importantly, it should be noted that the deviations computed in this paper are deviations from the light curve of the underlying spotless source in the same lensing geometry. This is not necessarily the best-fit spotless light curve for the given event. In practice, this will limit the range of the marginally detectable events with $`\delta _M\approx 2\%`$. Good photometry and spectroscopy combined with an adequate model atmosphere for the source star can reduce this problem. The range of detectability will also be reduced if we consider the duration of the observable effect. If this effect occurs over too short a period, it could easily pass undetected. Source-crossing times in microlensing transit events can reach several days ($`3.5`$ days in M95-30 with $`p\approx 0.7`$; corresponding source-radius crossing time $`2.5`$ days). As seen from Figures 2 and 3, an effect $`\delta >2\%`$ can then last hours to days. Dense light curve sampling during any transit event can therefore lead to detections or at least provide good constraints on the presence of spots on the source star. Note that these timescales are too short to expect effects due to intrinsic changes in the spots or their significant motion in the case of red giant sources, which typically have slow rotation speeds. These effects should be considered only in long timescale events with smaller sources - events with an inherently low probability. In such events one has to additionally consider the foreshortening of a spot as it moves towards the limb of the source. The source star can be expected to have more than just a single spot. The lensed flux is linear in $`B(\vec{r})`$; therefore, it can be split again into terms corresponding to individual spots and the underlying spotless source.
An analysis similar to the one in this paper can then be performed. The single spot case provides helpful insight into the general case, even though the relative amplification deviation is not a linear combination of individual spot terms. The presence of spots on microlensed stars (e.g., red giants) could complicate the interpretation of light curves that may be distorted due to a planetary companion of the lens (Gaudi & Gould 1997, Gaudi & Sackett 1999). A dark spot could be confused with the effect of a planet perturbing the minor image of the source, while a bright spot ($`\mu <1`$) can resemble a major image perturbation (see Figure 3). However, the spot effect is always localized near the peak region of the light curve, which is itself affected by the finite source size. Deviations due to planetary microlensing are usually expected as perturbations offset from the peak of a simple point-source light curve. In general it would be sufficient to look for signatures of limb-crossing during the event, by photometry in two or more spectral bands or by spectroscopy. Any color effect observable in a planetary microlensing event will occur within the period of the amplification deviation effect (Gaudi & Gould 1997). In the case of a spotted source, the color effect due to the spot will be preceded and followed by color effects due to limb crossing (see HSL). High-magnification planetary microlensing events, in which the source crosses the perturbed caustic near the primary lens (Griest & Safizadeh 1998), can prove to be more difficult to distinguish, as they can also have a similar limb-crossing signature for a sufficiently large source. However, in such cases there will be no prominent perturbations between the two limb crossings, which can at least eliminate confusion with direct spot transits. ## 5 Summary Stellar spots can be detected by observations of source-transit microlensing events. The amplification deviation due to the spot can be positive as well as negative, depending on the relative configuration of the lens, source, and spot. In the small spot case ($`r_s\lesssim 0.2`$) studied here, we find that dark spots with radii $`r_s\lesssim 0.15`$ on the projected stellar disk can cause deviations $`\delta _M>2\%`$ if the lens passes within $`1.5r_s`$ of the spot center. Larger spots with $`r_s\gtrsim 0.2`$ can be detected over a large area of the surface of the source during any transit event, in some cases even in near-transit events. Numerically we find that the maximum effect of a dark spot with sufficient contrast is roughly equal to the fractional spot radius $`r_s`$, when the spot is directly crossed. On the other hand a very bright spot can dominate the shape of the light curve. The obtained results on the relative amplification deviation are largely independent of the Einstein radius of the lens in the range $`ϵ>2`$; most microlensing events toward the Galactic bulge fall well within this range. The presence of spots and especially active regions can also be detected efficiently by observing the changing profiles of sensitive spectral lines during microlensing events with a small impact parameter. Light curves due to sources with spots can resemble in some cases the effect of a low-mass companion of the lens. Good photometry and spectroscopy will suffice to distinguish the two in most cases by detecting additional limb-crossing effects in the case of a spotted star. Currently operating microlensing follow-up projects such as the Probing Lensing Anomalies NETwork (PLANET; Albrow et al.
1998) and Global Microlensing Alert Network (GMAN; Becker et al. 1997) can perform high-precision photometry with high sampling rates; both are sensitive enough to put constraints on the presence of spots in future source-transit events. As a result, over a few observing seasons statistical evidence for spots on red giants could be obtained, making an important contribution to our theoretical understanding of red giant atmospheres. We would like to thank Avi Loeb for stimulating discussions and the anonymous referee for helpful suggestions on the manuscript.
# Next-to-Leading Order QCD corrections to the Lifetime Difference of 𝐵_𝑠 Mesons ## 1 Non-expert-introduction As there were many students in the audience we will start with an elementary introduction. Neutral mesons are well known from lectures at the university and were mentioned here several times e.g. in . As in the $`K`$-system we have in the $`B_s`$-system flavour eigenstates which are defined by their quark content. $$|B_s\rangle =(\overline{b}s);|\overline{B}_s\rangle =(b\overline{s}).$$ (1) The mass eigenstates are linear combinations of the flavour eigenstates $`|B_H\rangle `$ $`=`$ $`p|B_s\rangle -q|\overline{B}_s\rangle `$ (2) $`|B_L\rangle `$ $`=`$ $`p|B_s\rangle +q|\overline{B}_s\rangle `$ (3) with the normalization condition $`|p|^2+|q|^2=1`$. $`B_H`$ and $`B_L`$ are the physical states. They have definite masses and lifetimes, but no definite CP-quantum numbers. The mass eigenstates are in general mixtures of CP-odd and CP-even eigenstates. The time evolution of the physical states is described by a simple Schrödinger equation $$i\partial _t\vec{B}=\widehat{H}\vec{B}$$ (4) with $$\vec{B}=\left(\genfrac{}{}{0pt}{}{|B_s\rangle }{|\overline{B}_s\rangle }\right);\widehat{H}=\left(\begin{array}{cc}M-\frac{i}{2}\mathrm{\Gamma }& M_{12}-\frac{i}{2}\mathrm{\Gamma }_{12}\\ M_{12}^{\ast}-\frac{i}{2}\mathrm{\Gamma }_{12}^{\ast}& M-\frac{i}{2}\mathrm{\Gamma }\end{array}\right).$$ (5) To find the mass eigenstates and the eigenvalues of the mass operator and the decay rate operator we have to diagonalize the Hamiltonian. We get $`\mathrm{\Delta }M_B`$ $`=`$ $`M_H-M_L=2Re(Q)`$ (6) $`\mathrm{\Delta }\mathrm{\Gamma }_B`$ $`=`$ $`\mathrm{\Gamma }_L-\mathrm{\Gamma }_H=-4Im(Q)`$ (7) with $$Q=\sqrt{(M_{12}-\frac{i}{2}\mathrm{\Gamma }_{12})(M_{12}^{\ast}-\frac{i}{2}\mathrm{\Gamma }_{12}^{\ast})}.$$ (8) If we neglect CP violation and expand in $`m_b^2/m_t^2`$ we can write with a very good precision $`\mathrm{\Delta }M_B`$ $`=`$ $`2|M_{12}|`$ (9) $`\mathrm{\Delta }\mathrm{\Gamma }_B`$ $`=`$ $`2\mathrm{\Gamma }_{12}.`$ (10) The different neutral meson systems gave rise to important contributions to the field of high energy physics. In 1964 Christenson, Cronin, Fitch and Turlay discovered indirect CP-violation<sup>1</sup><sup>1</sup>1Indirect or equivalently mixing induced CP violation means that the physical states $`K_{S/L}`$ are not pure CP-eigenstates. There is a big contribution of one CP-parity and a tiny one of the opposite CP-parity. If the small contribution decays, one speaks of indirect CP violation. in the $`K^0\overline{K}^0`$ -system. The mass difference in the $`B_d\overline{B}_d`$ -system was the first experimental hint for a very large top quark mass, before the indirect determination at LEP and before the discovery at Tevatron. As $`m_t`$ is by now quite well known, we can extract the CKM parameter $`|V_{td}V_{tb}|`$ from $`\mathrm{\Delta }M_{B_d}`$. The determination of the CKM parameters is crucial for a test of our understanding of the standard model and for the search for new physics. The mass difference in the $`B_s\overline{B}_s`$ -system is not measured yet, but we have a lower limit from which we already get an important bound on the parameters of the CKM matrix. The Heavy Quark Expansion (HQE) is the theoretical framework to handle inclusive $`B`$-decays.
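A quick numerical check of eqs. (4)-(8) is to diagonalize the 2x2 effective Hamiltonian directly. The sketch below uses purely illustrative numbers (not measured values), with the off-diagonal width element much smaller than the off-diagonal mass element, as appropriate for the $`B_s`$ system:

```python
import numpy as np

# Toy check of eqs. (4)-(8): diagonalize the 2x2 effective Hamiltonian
# and compare the eigenvalue splittings with Delta M = 2|M12| and
# Delta Gamma = 2 Gamma12.  All numbers are illustrative only.
M, Gamma = 5370.0, 0.7          # diagonal elements (arbitrary units)
M12, Gamma12 = 10.0, 0.05       # |Gamma12| << |M12|, as in the B_s system

H = np.array([[M - 0.5j*Gamma,                         M12 - 0.5j*Gamma12],
              [np.conj(M12) - 0.5j*np.conj(Gamma12),   M - 0.5j*Gamma]])

lam = np.linalg.eigvals(H)      # eigenvalues are M_i - (i/2) Gamma_i
dM = abs(lam[0].real - lam[1].real)
dGamma = abs(2*lam[0].imag - 2*lam[1].imag)
print(dM, 2*abs(M12))           # splitting ~ 2|M12|
print(dGamma, 2*abs(Gamma12))   # splitting ~ 2|Gamma12|
```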
It allows us to expand the decay rate in the following way $$\mathrm{\Gamma }=\mathrm{\Gamma }_0+\left(\frac{\mathrm{\Lambda }}{m_b}\right)^2\mathrm{\Gamma }_2+\left(\frac{\mathrm{\Lambda }}{m_b}\right)^3\mathrm{\Gamma }_3+\ldots .$$ (11) Here we have a systematic expansion in the small parameter $`\mathrm{\Lambda }/m_b`$. The different terms have the following physical interpretations: * $`\mathrm{\Gamma }_0`$: The leading term is described by the decay of a free quark (parton model); we have no non-perturbative corrections. * $`\mathrm{\Gamma }_1`$: In the derivation of eq. (11) we make an operator product expansion. For dimensional reasons we do not get an operator which would contribute to this order in the HQE. <sup>2</sup><sup>2</sup>2Strictly speaking, we get one operator of the appropriate dimension, but with the equations of motion we can incorporate it in the leading term. * $`\mathrm{\Gamma }_2`$: First non-perturbative corrections arise at the second order in the expansion due to the kinetic and the chromomagnetic operator. They can be regarded as the first terms in a non-relativistic expansion. * $`\mathrm{\Gamma }_3`$: In the third order we get the so-called weak annihilation and Pauli interference diagrams. Here the spectator quark is included for the first time . These diagrams give rise to lifetime differences in the neutral $`B`$-system. Each of these terms can be expanded in a power series in the strong coupling constant $$\mathrm{\Gamma }_i=\mathrm{\Gamma }_i^{(0)}+\frac{\alpha _s}{\pi }\mathrm{\Gamma }_i^{(1)}+\ldots .$$ (12) So $`\mathrm{\Delta }\mathrm{\Gamma }_B`$ has the following form $$\mathrm{\Delta }\mathrm{\Gamma }_B=\frac{\mathrm{\Lambda }^3}{m_b^3}\left(\mathrm{\Gamma }_3^{(0)}+\frac{\alpha _s}{\pi }\mathrm{\Gamma }_3^{(1)}+\ldots \right)+\frac{\mathrm{\Lambda }^4}{m_b^4}\left(\mathrm{\Gamma }_4^{(0)}+\ldots \right).$$ (13) After this short introduction for non-experts we motivate the special interest in the quantity $`\mathrm{\Delta }\mathrm{\Gamma }_{B_s}`$. ## 2 Motivation From a physical point of view one wants to know the exact value of the decay rate difference, because * $`(\mathrm{\Delta }\mathrm{\Gamma }/\mathrm{\Gamma })_{B_s}`$ is expected to be large. LO estimates give values up to $`20\%`$. This is on the border of the experimental visibility ; * a big value of $`\mathrm{\Delta }\mathrm{\Gamma }_{B_s}`$ would enable us to do novel studies of CP-violation without the need of tagging . Tagging is a major experimental difficulty in B-physics; * in the ratio $`\mathrm{\Delta }\mathrm{\Gamma }_{B_s}/\mathrm{\Delta }M_{B_s}`$ some of the non-perturbative parameters cancel . So we can get theoretically clean information on $`\mathrm{\Delta }M_{B_s}`$ from a measurement of $`\mathrm{\Delta }\mathrm{\Gamma }_{B_s}`$; * the decay rate difference can be used to search for non-SM physics. In it was shown that $`\mathrm{\Delta }\mathrm{\Gamma }_{newphysics}\leq \mathrm{\Delta }\mathrm{\Gamma }_{SM}`$. In order to fulfill this physics program we need a reliable prediction in the standard model. Therefore we need in addition to the LO estimate $`\mathrm{\Gamma }_3^{(0)}`$, which was calculated in * the $`1/m_b`$-corrections $`\mathrm{\Gamma }_4^{(0)}`$. They have been calculated by ; * the non-perturbative matrix elements for the $`\mathrm{\Delta }B=2`$ operators, which arise in the calculation. Here a reliable prediction is still missing; * the NLO QCD corrections to the leading term in the $`1/m_b`$ expansion, $`\mathrm{\Gamma }_3^{(1)}`$.
This was the aim of our work . Besides the better accuracy and a reduction of the $`\mu `$ dependence there is a very important point: NLO-QCD corrections are needed for the proper matching of the perturbative calculation to lattice calculations. From a technical point of view this calculation was very interesting because * our result provides the first calculation of perturbative QCD corrections beyond leading logarithmic order to spectator effects in the HQE. Soft gluon emission from the spectator $`s`$ quark leads to power-like infrared singularities in individual contributions. As a conceptual test of the HQE the final result has to be infrared finite . * a crucial point in the derivation of the HQE is the validity of the operator product expansion. This assumption is known under the name quark-hadron duality and can be tested via a comparison of theory and experiment. A recent discussion of that subject can be found in . In the next chapter we will describe the calculation. ## 3 Calculation The width difference in the $`B^0\overline{B}^0`$ -system is defined as $$\mathrm{\Delta }\mathrm{\Gamma }=\mathrm{\Gamma }_L-\mathrm{\Gamma }_H=2\mathrm{\Gamma }_{21}.$$ (14) The off-diagonal element of the decay-width matrix can be related to the so-called transition operator $`𝒯`$ via $$\mathrm{\Gamma }_{21}=\frac{1}{2M_{B_S}}\langle \overline{B}_S|𝒯|B_S\rangle $$ (15) with $$𝒯=\text{Im}\,i\int d^4x\,T\,\mathcal{H}_{eff}(x)\mathcal{H}_{eff}(0).$$ (16) In $`𝒯`$ we have a double insertion of the effective Hamiltonian with the standard form $$\mathcal{H}_{eff}=\frac{G_F}{\sqrt{2}}V_{cb}^{\ast}V_{cs}\left(\sum _{r=1}^{6}C_rQ_r+C_8Q_8\right).$$ (17) $`G_F`$ denotes the Fermi constant, $`V_{pq}`$ are the CKM matrix elements and $`Q_i`$ are local $`\mathrm{\Delta }B=1`$ operators. The Wilson coefficients $`C_i`$ describe the short distance physics and are known to NLO QCD. Formally we proceed now with an operator product expansion of that product of two Hamiltonians. In real life one has to calculate diagrams of the following form: One can do the calculation in two different ways (we did it in both ways, to have a check): * calculate the imaginary part of the two loop integrals or * use Cutkosky rules and calculate virtual and real one loop corrections, followed by a phase space integration. The result in LO QCD has the following form $$𝒯=\frac{G_F^2m_b^2}{12\pi }\left(V_{cb}^{\ast}V_{cs}\right)^2\left[G(z)Q+G_S(z)Q_S\right]$$ (18) with $`z=m_c^2/m_b^2`$ and the $`\mathrm{\Delta }B=2`$ operators $`Q`$ $`=`$ $`(\overline{b}_is_i)_{V-A}(\overline{b}_js_j)_{V-A}`$ $`Q_S`$ $`=`$ $`(\overline{b}_is_i)_{S-P}(\overline{b}_js_j)_{S-P}.`$ (19) In principle we have more operators, but we can reduce them to the two operators above with the use of Fierz identities <sup>3</sup><sup>3</sup>3This reduction is relatively tricky. For details see .. Equation (18) is an example of an operator product expansion of equation (16). We have reduced the double insertion of $`\mathrm{\Delta }B=1`$ operators, which appear in $`\mathcal{H}_{eff}`$, to a single insertion of a $`\mathrm{\Delta }B=2`$ operator. In principle we have integrated out the internal charm quarks in figure 1. For the NLO calculation we have to match the $`\mathrm{\Delta }B=1`$ double insertion with gluon exchange to a $`\mathrm{\Delta }B=2`$ insertion with gluon exchange. This means we have to calculate the following diagrams: These diagrams can be classified in the following way: 1. Virtual one loop corrections to a $`\mathrm{\Delta }B=2`$ operator insertion. 2.
Imaginary part of virtual two loop corrections to a double insertion of $`\mathrm{\Delta }B=1`$ operators. 3. Penguin contributions to the $`\mathrm{\Delta }B=1`$ double insertion. The calculation of all these diagrams gives us the NLO QCD result. ## 4 Results The result in NLO is: $$𝒯=\frac{G_F^2m_b^2}{12\pi }\left(V_{cb}^{\ast}V_{cs}\right)^2\left[G(z)Q-G_S(z)Q_S\right]$$ (20) with the following numerical values for the Wilson coefficients $$\begin{array}{cccc}& & & \\ \mu & m_b/2& m_b& 2m_b\\ & & & \\ G^{(0)}& 0.013& 0.047& 0.097\\ & & & \\ G& 0.023& 0.030& 0.036\\ & & & \\ G_S^{(0)}& 1.622& 1.440& 1.292\\ & & & \\ G_S& 0.743& 0.937& 1.018\end{array}$$ with $$G=G^{(0)}+\frac{\alpha }{4\pi }G^{(1)}.$$ (21) Here one can see two important points. First, the value for $`G_S`$ is numerically dominant and second, the NLO values are considerably smaller than the LO values. For the final result we parametrise the matrix elements of the $`\mathrm{\Delta }B=2`$ operators in the following way: $`\langle \overline{B}_s|Q|B_s\rangle `$ $`=`$ $`\frac{8}{3}f_{B_s}^2M_{B_s}^2B`$ $`\langle \overline{B}_s|Q_S|B_s\rangle `$ $`=`$ $`-\frac{5}{3}f_{B_s}^2M_{B_s}^2\frac{M_{B_s}^2}{(\overline{m}_b+\overline{m}_s)^2}B_S.`$ $`B`$ and $`B_S`$ are so-called bag parameters, $`f_{B_s}`$ is the decay constant. The values of these parameters have to be determined by non-perturbative methods like lattice simulations. $`\overline{m}_q`$ denotes the running quark mass in the $`\overline{MS}`$-scheme. With the following input parameters $$m_b=4.8\,\text{GeV},\quad \left(\frac{m_c}{m_b}\right)^2=0.085,\quad \overline{m}_s=0.2\,\text{GeV},$$ $$M_{B_s}=5.37\,\text{GeV},\quad B(B_s\to Xe\nu )=0.104$$ (22) we obtain for the relative decay rate difference $$\left(\frac{\mathrm{\Delta }\mathrm{\Gamma }}{\mathrm{\Gamma }}\right)_{B_s}=\left(\frac{f_{B_s}}{210\,\text{MeV}}\right)^2\left[0.006B(m_b)+0.150B_S(m_b)-0.063\right].$$ A definitive determination of the two bag parameters is still missing. From the literature we were able to extract preliminary values for the bag parameters $$B(m_b)=0.9,\quad B_S(m_b)=0.75.$$ (23) With those numbers at hand we obtain as a final result $$\left(\frac{\mathrm{\Delta }\mathrm{\Gamma }}{\mathrm{\Gamma }}\right)_{B_s}=\left(\frac{f_{B_s}}{210\,\text{MeV}}\right)^2\left(0.054_{-0.032}^{+0.016}\pm \mathrm{?}\mathrm{?}\mathrm{?}\right).$$ (24) The question marks remind us that we do not know the uncertainties in the numerical values for the bag parameters. ## 5 Discussion and outlook The LO estimate for the relative decay rate difference $`\mathrm{\Delta }\mathrm{\Gamma }_{B_s}/\mathrm{\Gamma }_{B_s}=𝒪(20\%)`$ is considerably reduced due to several effects: * the $`1/m_b`$ corrections are sizeable and give an absolute reduction of about -6.3% . * the pure NLO QCD corrections are sizeable too, and give an absolute reduction of about -4.8% . * with the NLO QCD corrections at hand we can perform a proper matching to the (preliminary) lattice calculations for the bag parameters. This tells us that we have to use a low value for the bag parameters, i.e. $`B_S(m_b)=0.75`$ . Compared to the naive estimate $`B_S=1`$, this is another absolute reduction of about -3.8%. Unfortunately the value of $`\mathrm{\Delta }\mathrm{\Gamma }_{B_s}/\mathrm{\Gamma }_{B_s}`$ has been pinned down to a value of about $`5\%`$. The LO prediction was just at the border of experimental visibility .
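As a quick consistency check of the numbers quoted above, the final formula can be evaluated directly; with the preliminary bag parameters B(m_b)=0.9 and B_S(m_b)=0.75 it indeed reproduces the central value 0.054:

```python
def dGamma_over_Gamma(f_Bs_MeV=210.0, B=0.9, B_S=0.75):
    # NLO formula quoted above, with bag parameters taken at the
    # scale m_b; f_Bs in MeV.
    return (f_Bs_MeV/210.0)**2*(0.006*B + 0.150*B_S - 0.063)

print(dGamma_over_Gamma())             # ~0.054 for the preliminary inputs
print(dGamma_over_Gamma(B_S=1.0))      # naive B_S = 1 for comparison
```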
Now we will have to wait for the forthcoming experiments like HERA-B, Tevatron (run II) and LHC. Another application of our calculation is inclusive indirect CP-asymmetries in the $`b\to u\overline{u}d`$ channel. For the complete NLO prediction of this quantity, $`\mathrm{\Gamma }_{12}`$ in the $`B_d`$ system was missing. We get this value from our calculation with a trivial exchange of the CKM parameters and the limit $`m_c\to 0`$. This allows a determination of the CKM-angle $`\alpha `$ . Acknowledgements. I want to thank the organizers of the Corfu Summer Institute on Elementary Particle Physics for their successful work, M. Beneke, G. Buchalla, C. Greub and U. Nierste for the pleasant collaboration and A.J. Buras for proofreading the manuscript.
# Memory in the aging of a polymer glass ## Abstract Low frequency dielectric measurements on plexiglass (PMMA) show that cooling and heating the sample at constant rate give a hysteretic dependence on temperature of the dielectric constant $`ϵ`$. A temporary stop of cooling produces a downward relaxation of $`ϵ`$. Two main features are observed: i) when cooling is resumed $`ϵ`$ goes back to the values obtained without the cooling stop (i.e. the low temperature state is independent of the cooling history); ii) upon reheating $`ϵ`$ keeps the memory of the aging history (Memory). The analogies and differences with similar experiments done in spin glasses are discussed. PACS: 75.10.Nr, 77.22Gm, 64.70Pf, 05.20-y. The aging of glassy materials is a widely studied phenomenon , which is characterized by a slow evolution of the system toward equilibrium, after a quench below the glass transition temperature $`T_g`$. In other words the properties of glassy materials depend on the time spent at a temperature smaller than $`T_g`$. In spite of the interesting experimental and theoretical progress made in the last years , the physical mechanisms of aging are not yet fully understood. In fact on the basis of available experimental data it is very difficult to distinguish which is the most suitable theoretical approach for describing the aging processes of different materials. In order to give more insight into this problem several experimental procedures have been proposed and applied to the study of the aging of various materials, such as spin-glasses (SG), orientational glasses (OG), polymers and supercooled liquids (SL). Among these procedures we may recall the applications of small temperature cycles to a sample during the aging time. These experiments have shown three main results in different materials: i) there is an important difference between positive and negative cycles and the details of the response to these perturbations are material dependent ; ii) for SG the time spent at the higher temperature does not contribute to the aging at a lower temperature whereas for plexiglass (PMMA) and OG it slightly modifies the long time behavior; iii) a memory effect has been observed for negative cycles. Specifically, when temperature goes back to the high temperature, the system recovers its state before the perturbation. In other words the time spent at low temperature does not contribute to the aging behavior at the higher temperature. These results clearly exclude models based on activation processes over temperature independent barriers, where the time spent at high temperature would help to find the equilibrium state more easily. At the same time it is difficult to decide which is the most appropriate theoretical approach to describe the response to these temperature cycles . For example a recent model explains the results in SG but not in OG and in PMMA . In order to have a better understanding of the free energy landscape of SG and OG, a new cooling protocol has been proposed and used in several experiments . This protocol, which is characterized by a temporary cooling stop, has revealed that in SG and in OG the low temperature state is independent of the cooling history <sup>1</sup><sup>1</sup>1This effect was named Chaos in ref. but in a more recent paper the relevance of chaos for aging has been disputed. The idea of chaos in this context might be inappropriate and that these materials keep the memory of the aging history (Memory effect) .
The purpose of this letter is to describe an experiment where we use the cooling protocol, proposed in ref., to show that a memory effect is present during the aging of the dielectric constant of plexiglass (PMMA), which is a polymer glass with $`T_g=388K`$ . We also compare the behavior of PMMA to that of SG and OG, submitted to the same cooling protocol. To determine the dielectric constant, we measure the complex impedance of a capacitor whose dielectric is the PMMA sample. In our experiment a disk of PMMA of diameter $`10cm`$ and thickness $`0.3mm`$ is inserted between the plates of a capacitor whose vacuum capacitance is $`C_o=230pF`$. The capacitor temperature is stable within $`0.1K`$ and it may be changed from $`300K`$ to $`500K`$. The capacitor is a component of the feedback loop of a precision voltage amplifier whose input is connected to a signal generator. We obtain the real and imaginary part of the capacitor impedance by measuring the response of the amplifier to a sinusoidal input signal. This apparatus allows us to measure the real and imaginary part of the dielectric constant $`ϵ=ϵ_1+iϵ_2`$ as a function of temperature $`T`$, frequency $`\nu `$ and time $`t`$. Relative variations of $`ϵ`$ smaller than $`10^{-3}`$ can be measured in all the frequency range used in this experiment, i.e. $`0.1Hz<\nu <100Hz`$. The following discussion will focus only on $`ϵ_1`$ , because the behavior of $`ϵ_2`$ leads to the same conclusions. The measurement is performed in the following way. We first reinitialize the PMMA history by heating the sample at a temperature $`T_{max}>T_g`$. The sample is left at $`T_{max}=415K`$ for a few hours. Then it is slowly cooled from $`T_{max}`$ to a temperature $`T_{min}=313K`$ at the constant rate $`|R|=|\mathrm{d}T/\mathrm{d}t|`$ and heated back to $`T_{max}`$ at the same $`|R|`$. The dependence of $`ϵ_1`$ on $`T`$ obtained by cooling and heating the sample at a constant $`|R|`$ is called the reference curve $`ϵ_r`$. As an example of a reference curve we plot in fig.1(a) $`ϵ_r`$, measured at $`0.1Hz`$ and at $`|R|=20K/h`$. We see that $`ϵ_r`$ presents a hysteresis between the cooling and the heating in the interval $`350K<T<405K`$. This hysteresis depends on the cooling and heating rates. Indeed, in fig.1(b), the difference between the heating curve ($`ϵ_{rh}`$) and the cooling curve ($`ϵ_{rc}`$) is plotted as a function of $`T`$ for different $`|R|`$. The faster we change temperature, the bigger hysteresis we get. Furthermore the temperature of the hysteresis maximum is a few degrees above $`T_g`$, specifically at $`T\approx 392K`$. The temperature of this maximum gets closer to $`T_g`$ when the rate is decreased. We neglect for the moment the rate dependence of the hysteresis and we consider as reference curve the one, plotted in fig.1(a), which has been obtained at $`\nu =0.1Hz`$ and at $`|R|=20K/h`$. The evolution of $`ϵ_1`$ can be quite different from $`ϵ_r`$ if we use the temperature cycle proposed in ref.. After a cooling at $`R=-20K/h`$ from $`T_{max}`$ to $`T_{stop}=374K`$ the sample is maintained at $`T_{stop}`$ for $`10h`$. After this time interval the sample is cooled again, at the same $`R`$, down to $`T_{min}`$. Once the sample temperature reaches $`T_{min}`$ the sample is heated again at $`R=20K/h`$ up to $`T_{max}`$. The dependence of $`ϵ_1`$ on $`T`$, obtained when the sample is submitted to this temperature cycle with the cooling stop at $`T_{stop}`$, is called the memory curve $`ϵ_m`$.
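For definiteness, the full temperature cycle just described (constant-rate ramps with a cooling stop) can be written down as a short sketch; all numbers are taken from the text, while the uniform 0.01 h sampling step is an arbitrary choice:

```python
import numpy as np

def temperature_protocol(T_max=415.0, T_min=313.0, rate=20.0,
                         T_stop=374.0, t_wait=10.0, dt=0.01):
    # Temperature program (K, hours) of the memory experiment: cool
    # at -rate, pause at T_stop for t_wait, resume cooling to T_min,
    # then reheat to T_max at +rate.
    def ramp(T0, T1):
        n = max(2, int(round(abs(T1 - T0)/rate/dt)))
        return np.linspace(T0, T1, n)
    T = np.concatenate([ramp(T_max, T_stop),              # first cooling branch
                        np.full(int(round(t_wait/dt)), T_stop),  # cooling stop
                        ramp(T_stop, T_min),              # resume cooling
                        ramp(T_min, T_max)])              # heating branch
    return np.arange(T.size)*dt, T

t, T = temperature_protocol()   # ~20 h of protocol sampled every 0.01 h
```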
In fig.2(a), $`ϵ_m`$ (solid line), measured at $`\nu =0.1Hz`$, is plotted as a function of $`T`$. The dashed line corresponds to the reference curve of fig.1(a). We notice that $`ϵ_m`$ relaxes downwards when cooling is stopped at $`T_{stop}`$: this corresponds to the vertical line in fig.2(a) where $`ϵ_m`$ departs from $`ϵ_r`$. When cooling is resumed $`ϵ_1`$ merges into $`ϵ_r`$ for $`T<340K`$. The aging at $`T_{stop}`$ has not influenced the result at low temperature. During the heating period the system keeps the memory of the aging at $`T_{stop}`$ (cooling stop) and for $`340K<T<395K`$ the evolution of $`ϵ_m`$ is quite different from $`ϵ_r`$. In order to clearly see this effect we divide $`ϵ_m`$ into the cooling part $`ϵ_{mc}`$ and the heating part $`ϵ_{mh}`$. In fig.2(b) we plot the difference between $`ϵ_m`$ and $`ϵ_r`$. Filled downward arrows correspond to cooling ($`ϵ_{mc}-ϵ_{rc}`$) and empty upward arrows to heating ($`ϵ_{mh}-ϵ_{rh}`$). The difference between the evolutions corresponding to different cooling procedures is now quite clear. The system keeps the memory of its previous aging history when it is reheated from $`T_{min}`$. The amplitude of the memory corresponds well to the amplitude of the aging at $`T_{stop}`$ but the temperature $`T_m`$ of the maximum is shifted a few degrees above $`T_{stop}`$. We checked that this temperature shift is independent of $`T_{stop}`$ for temperatures where aging can be measured in a reasonable time (from $`340K`$ to $`T_g`$). This effect can be seen in fig.3 where the difference between $`ϵ_m`$ and $`ϵ_r`$ measured for three different $`T_{stop}`$, is plotted as a function of $`T`$. We clearly see that $`T_m-T_{stop}`$ is independent of $`T_{stop}`$. Furthermore the amplitude of the downward relaxation at $`T_{stop}`$ is a decreasing function of $`T_{stop}`$. It almost disappears for $`T_{stop}<340K`$. For this reason double memory experiments are more difficult in PMMA than in SG. However it has to be pointed out that if two cooling stops are done the system keeps memory of both of them . The memory effect seems to be permanent because it does not depend on the waiting time at $`T_{min}`$. Indeed we performed several experiments in which we waited up to $`24h`$ at $`T_{min}`$, before restarting heating, without noticing any change in the heating cycle. In contrast the amplitude and the position of the memory effect depend on $`R`$ and on the measuring frequency. As an example of rate dependence, at $`\nu =0.1Hz`$ and waiting time at $`T_{stop}`$ of $`10h`$, we plot in fig.4 the difference $`ϵ_m-ϵ_r`$ as a function of $`T`$ for three different rates. The faster the rate, the larger the memory effect and the farther the temperature of its maximum is shifted above the aging temperature $`T_{stop}`$. Finally we checked the dependence of the memory effect on the measuring frequency. We find that the memory effect becomes larger at the lowest frequency and the positions of the maxima are at the same temperature. We can summarize the main results of the low frequency dielectric measurements on PMMA: (a) The reference curve, obtained at constant cooling and heating rate, is hysteretic. This hysteresis is maximum a few degrees above $`T_g`$. (b) The hysteresis of $`ϵ_r`$ increases with $`|R|`$. (c) Writing memory: a cooling stop produces a downward relaxation of $`ϵ_1`$. The amplitude of this downward relaxation depends on $`T_{stop}`$ and it decreases for decreasing $`T_{stop}`$. It almost disappears for $`T_{stop}<330K`$.
(d) When cooling is resumed $`ϵ_1`$ goes back to the cooling branch of the reference curve. This suggests that the low temperature state is independent of the cooling history. (e) Reading memory: upon reheating $`ϵ_1`$ keeps the memory of the aging history and the cooling stop (Memory). The maximum of the memory effect is obtained a few degrees above $`T_{stop}`$. (f) The memory effect does not depend on the waiting time at low temperature but it depends both on the cooling and heating rates . The memory effect increases with $`|R|`$. These results can be explained by a hierarchical free energy landscape, whose barriers grow when temperature is lowered . However the dependence of the memory effect and the hysteresis on $`|R|`$ and the independence of the waiting time at $`T_{min}`$ mean that, at least for PMMA, the free energy landscape has to depend not only on temperature but also on $`|R|`$ . The existence of the hysteresis and the temperature shift of the memory effect could also be explained by a dependence of the landscape on the sign of the rate (and not only on its magnitude). Many models and numerical simulations do not take this dependence into account because they consider just a static temperature after a quench. In contrast points a), b), e) and f) indicate that the whole temperature history is relevant too. Other models based on the idea of domain growth explain the rate dependence but not the memory effect . Analogies between points a-b) for the hysteresis and points e-f) for the rate dependence of the memory effect lead to a new interpretation of hysteresis, which can be seen as the memory of aging at a temperature $`T_{stop}\simeq T_g`$. Indeed, in a free energy landscape model, when cooling the sample just above $`T_g`$ the system is in its equilibrium phase, that is in a favorable configuration at this temperature. If this configuration is not strongly modified by aging at lower temperatures then, when heating back to $`T_g`$, the system keeps the memory of this favorable state, just as it does in the memory effect. It is interesting to discuss the analogies and the differences between this experiment and similar ones performed on SG and on OG . It turns out that, neglecting the hysteresis of the reference curve of PMMA and of OG, the behavior of these materials is quite similar to that of SG. During the heating period PMMA, SG and OG keep the memory of their aging history, although the precise way in which history is remembered is material dependent. Furthermore in these materials the low temperature state is independent of the cooling history (same response, same aging properties ). One can estimate the temperature range $`\delta T`$ where the material response is different from that of the reference curve because of the cooling stop. It turns out that the ratio $`\delta T/T_g`$ is roughly the same in PMMA, in SG and in OG, specifically $`\delta T/T_g\approx 0.2`$. The important difference between SG and PMMA is that the amplitude of the downward relaxation is a function of $`T_{stop}`$ in PMMA and it is not in SG. As a conclusion the “memory” effect seems to be a universal feature of aging whereas the hysteresis is present in PMMA and in OG but not in all kinds of spin glasses. It would be interesting to know if these effects are observed in other polymers and in supercooled liquids, and if the hysteresis interpretation in terms of a memory effect holds for other materials. As far as we know no other results are available at the moment. We acknowledge useful discussions with J.
Kurchan and technical support by P. Metz and L. Renaudin. This work has been partially supported by the Région Rhône-Alpes contract “Programme Thématique : Vieillissement des matériaux amorphes” .
## 1 Introduction The problem of the efficiency of particle acceleration at shocks by the first order Fermi acceleration process, and the strength of the back-reaction of these particles on the plasma flow, is intimately related to the injection process. This describes the rate at which particles are not only part of the thermal plasma, which is compressed and heated when it passes the shock, but become subject to energy gain due to the Fermi process, described by the diffusion-convection equation (e.g. Skilling 1975). We will follow closely a picture of this injection process which has been developed by Malkov & Völk (1995) and Malkov (1998). The spatial diffusion of particles, which is an essential part of their acceleration process, is produced by magneto-hydrodynamic waves, which are in turn generated by particles streaming along the magnetic field $`𝑩_0`$. We will assume the field $`𝑩_0`$ to be parallel to the shock normal ($`x`$-direction). The magnetic field, which corresponds to a circularly polarised wave, can be written as $`𝑩=B_0𝒆_x+B_{\perp}(𝒆_y\mathrm{cos}k_0x-𝒆_z\mathrm{sin}k_0x)`$. The amplitude $`B_{\perp}`$ will be amplified downstream by a factor $`\rho /\rho ^{\prime}=r`$, where $`\rho `$ and $`\rho ^{\prime}`$ are the plasma densities downstream and upstream respectively and $`r`$ is the compression ratio. The downstream field can be described by a parameter $`ϵ`$, for which we assume $`ϵ:=B_0/B_{\perp}\ll 1`$, in the case of strong shocks considered here. Note that the perpendicular component of the magnetic field leads effectively to a quasi-alternating field downstream of the shock for particles moving along the shock normal. Only particles with a gyro radius $`r_\mathrm{g}=pc\mathrm{sin}\alpha /(eB_{\perp})`$ for which the condition $`k_0r_\mathrm{g}>1`$ is fulfilled would be able to have a net velocity with respect to the wave frame, i.e. the downstream plasma would be transparent. The fraction of these particles which are also in the appropriate part of the phase space (depending on the shock speed) would be able to cross the shock from downstream to upstream. For the protons of the plasma, the resonance condition for the generation of the plasma waves gives $`k_0v_\mathrm{p}-\omega _0=\omega _{\perp}B_0/B_{\perp}`$, where the cyclotron frequency of protons is given by $`\omega _{\perp}=eB_{\perp}/(m_\mathrm{p}c)`$, and $`v_\mathrm{p}`$ is the mean downstream thermal velocity of the protons. We now have for the thermal protons $`k_0r_\mathrm{g}\sim ϵ\ll 1`$, which means that most of the downstream thermal protons would be confined by the wave, and only particles with higher velocity (the tail of the Maxwell distribution) are able to leak through the shock. Ions with a mass to charge ratio higher than that of protons have a proportionally larger gyro radius, so that the injection efficiency of protons would yield a lower limit for the ions. On the other hand, for thermal electrons a plasma with such proton generated waves would not be transparent. However, reflection off the shock could become efficient. In the following we will focus on the wave generating protons. To find the part of the thermal distribution for which the magnetised plasma is transparent, and which therefore forms the injection pool, Malkov (1998) solves analytically the equations of motion for protons in such self-generated waves. This is a nonlinear problem, because of the feedback of the leaking particles on the transparency of the downstream plasma, mediated by the waves generated in the upstream region, which are swept downstream with the plasma flow.
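A short numerical illustration of this confinement argument, using the resonance condition as reconstructed above and an assumed downstream thermal proton speed, shows directly that $`k_0r_\mathrm{g}\sim ϵ`$ for the thermal protons:

```python
import numpy as np

# Illustrative numbers (cgs): for the downstream thermal protons the
# resonance condition gives k0 ~ omega_perp*eps/v_p (neglecting
# omega_0), so that k0*r_g ~ eps and the thermal pool is confined.
e, m_p, c = 4.803e-10, 1.673e-24, 3.0e10
eps, B0 = 0.35, 1.0e-6             # eps = B0/B_perp; B0 = 1 microgauss
B_perp = B0/eps
v_p = 1.0e8                        # cm/s, assumed thermal proton speed

omega_perp = e*B_perp/(m_p*c)      # proton cyclotron frequency in B_perp
k0 = omega_perp*eps/v_p            # resonant wavenumber
r_g = v_p/omega_perp               # thermal gyro radius (sin(alpha) ~ 1)
print(k0*r_g, "~", eps)            # < 1: thermal protons are confined
```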
He finds a transparency function $`\tau _{\mathrm{esc}}`$ which depends mainly on the particle velocity $`v`$, the velocity of the shock in the downstream plasma frame $`u_2`$ and the parameter $`ϵ`$. This function expresses the fraction of particles which are able to leak through the magnetic waves, divided by the part of the phase space for which particles would be able to cross from downstream to upstream when no waves are present. Furthermore, as a result of the above described feedback, he is able to constrain the parameter $`ϵ`$, leaving essentially no free parameter. The plasma flow structure, of course, also depends on the cosmic rays. These provide an energy sink for the plasma and smooth the shock structure due to a gradual deceleration of the plasma flow. We use a numerical method of solving the gas dynamics equations together with the cosmic ray transport equation which has been developed by Kang & Jones (1995), and which is outlined in more detail in Sect. 3. We show in the next section how we incorporate the above described model in this numerical method. ## 2 Injection model The key part of the solution to the problem of proton (ion) injection is the above described transparency function of the plasma. Kang & Jones (1995) used a numerical injection model with two essentially free parameters which describe boundaries in momentum at which particles can be accelerated and from which these contribute to the cosmic ray pressure. The transparency function now provides a more physical definition of exactly this intermediate region. For the adiabatic wave particle interaction it is given by Malkov (1998) Eq. (33), with $`\tau _{\mathrm{esc}}=2\nu _{\mathrm{esc}}/(1-u_2/v)`$, where the wave escape function $`\nu _{\mathrm{esc}}`$ is divided by the fraction of the available phase space. We use here the following approximation: $`\tau _{\mathrm{esc}}(v,u_2)=H\left[\tilde{v}-(1+ϵ)\right]\left(1-\frac{u_2}{v}\right)^{-1}\left(1-\frac{1}{\tilde{v}}\right)\mathrm{exp}\left\{-\left[\tilde{v}-(1+ϵ)\right]^{-2}\right\},`$ (1) where the particle velocity is normalised to $`\tilde{v}=vk_0/\omega _{\perp}`$ and $`H`$ is the Heaviside step function. We argued above that $`\omega _{\perp}/k_0\approx u_2/ϵ`$ (see Malkov 1998, Eq. 42). The transparency function now solely depends on the shock velocity $`u_2`$, the particle velocity $`v`$, and the relative amplitude of the wave $`ϵ`$. An important result of Malkov (1998) is the constraint on the parameter $`ϵ`$. He found $`0.3\lesssim ϵ\lesssim 0.4`$, as a result of the feedback described above. Comparison with hybrid simulations suggests $`0.25\lesssim ϵ\lesssim 0.35`$ (Malkov & Völk 1998). The function (1) is plotted in Fig. 1 for $`ϵ=0.35`$ and protons vs. their kinetic energy for three different times during the evolution of a modified shock (see below). The time dependence arises from the modification of the downstream plasma velocity by the cosmic rays. We use this function to correct the result of the cosmic ray transport equation for the upstream phase space density after each numerical time step. That means the Maxwell distribution is restored (according to the appropriate plasma temperature) where $`\tau _{\mathrm{esc}}=0`$, because here the cosmic ray acceleration has no effect. For higher velocities we multiply the difference between the new and old phase space distribution by the transparency function.
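In practice this correction step requires a direct implementation of Eq. (1). A minimal sketch, assuming the normalization $`\omega _{\perp}/k_0=u_2/ϵ`$ from above and an illustrative downstream speed $`u_2`$, is:

```python
import numpy as np

def tau_esc(v, u2, eps):
    # Transparency function of Eq. (1), with v_tilde = v*k0/omega_perp
    # evaluated through omega_perp/k0 = u2/eps as argued in the text.
    v = np.asarray(v, dtype=float)
    vt = v*eps/u2
    out = np.zeros_like(vt)
    m = vt > 1.0 + eps                        # Heaviside factor H[...]
    out[m] = ((1.0 - u2/v[m])**-1 * (1.0 - 1.0/vt[m])
              * np.exp(-1.0/(vt[m] - (1.0 + eps))**2))
    return out

# The injection threshold sits at v = (1+eps)*u2/eps, a few times u2:
u2, eps = 1.25e8, 0.35                        # cm/s; illustrative u2
v = np.array([1.01, 1.5, 3.0, 10.0])*(1.0 + eps)*u2/eps
print(tau_esc(v, u2, eps))                    # rises from ~0 toward 1
```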
Where $`\tau _{\mathrm{esc}}=1`$, the solution of the transport equation is left unchanged, because for these particles the plasma is completely transparent and all of them are subject to first-order Fermi acceleration. The transition between these regions is then described by the shape of the transparency function (1). The phase space distribution then changes gradually (in energy) from a Maxwell distribution to a power-law distribution at high energies, and it is the difference between this distribution and the Maxwell distribution which we use to calculate the cosmic ray pressure $`P_\mathrm{c}`$.

## 3 Numerical method and results

In order to find the time evolution of the cosmic ray energy spectrum, we solve the time-dependent cosmic ray transport equation (using an implicit Crank–Nicolson scheme) together with the general equations of gas dynamics (using a TVD code, see Harten 1983), including the cosmic ray pressure $`P_\mathrm{c}`$ and the energy flux $`S`$ which couples these equations. We refer to Kang & Jones (1995) for a more detailed description, and emphasize here only the main differences with respect to that work.

Very important for the injection process is the energy transfer between plasma and cosmic rays. The injection energy-density loss term can be written as $`(\mathrm{d}e/\mathrm{d}t)_\mathrm{c}=-S`$, where $`S`$ is given by (with the momentum normalised as $`p\to p/(m_\mathrm{p}c)`$):

$$S=-\frac{2}{3}\pi m_\mathrm{p}c^2\frac{\partial u}{\partial x}\int _0^{\mathrm{\infty }}\frac{\partial \tau _{\mathrm{esc}}(p)}{\partial p}p^5f(p)\mathrm{d}p.$$ (2)

Here we have used $`\tau _{\mathrm{esc}}(0)=0`$, $`\tau _{\mathrm{esc}}(\mathrm{\infty })=1`$, and $`\partial \tau _{\mathrm{esc}}/\partial p\to 0`$ for momenta $`p\gg 1`$, which is true, of course, for the representation given in Eq. (1). For a step function $`\tau _{\mathrm{esc}}(p)=H(p-p_0)`$, the injection energy loss term used by, e.g., Kang & Jones (1995) is recovered.

The escape function $`\tau _{\mathrm{esc}}`$ depends on the downstream plasma velocity, which is averaged over the diffusion length of the particles with momentum at the injection threshold. This dependence is quite important for the injection efficiency, and leads to a regulation mechanism similar to the beam-wave interaction above. If the initial injection is so strong that a significant amount of energy is transferred from the gas to high-energy particles, the downstream plasma cools and becomes decelerated. Because the injection momentum lies in the high-energy cut-off of the Maxwell distribution of the plasma, the cooling significantly decreases the injection rate. However, the deceleration allows for a modest increase of the phase space of particles which can be injected. This is expressed by the $`u_2`$ dependence of Eq. (1), and can also be seen in Fig. 1, where the dotted line shows the escape function for the initial setup distribution and the solid line that at $`t=10t_0`$. This velocity dependence partly balances the reduction of injection due to the cooling of the plasma. These two effects lead to a very weak dependence of the injection efficiency on $`ϵ`$ in the vicinity of $`ϵ\approx 0.35`$.

We consider here diffusion proportional to Bohm diffusion, with $`\kappa =10\kappa _\mathrm{B}`$, where $`\kappa _\mathrm{B}=3\times 10^{22}p^2/(1+p^2)^{1/2}\mathrm{cm}^2\mathrm{s}^{-1}`$ for a magnetic field $`B=1\mu `$G. A short numerical sketch of these two ingredients is given below.
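For completeness, the loss term (2) and the diffusion coefficient can be evaluated with a few lines of quadrature; this is only a sketch under the sign conventions reconstructed above (momenta dimensionless, in units of $`m_\mathrm{p}c`$), not the production code:

```python
import numpy as np

M_P = 1.6726e-24      # proton mass [g]
C_LIGHT = 2.9979e10   # speed of light [cm/s]

def kappa_bohm(p):
    """Bohm coefficient for B = 1 microgauss; the runs above use
    kappa = 10 * kappa_bohm(p)."""
    return 3.0e22 * p**2 / np.sqrt(1.0 + p**2)        # [cm^2/s]

def injection_energy_sink(p, f, tau, du_dx):
    """Quadrature for Eq. (2):
    S = -(2*pi/3) m_p c^2 (du/dx) Int_0^inf (dtau_esc/dp) p^5 f(p) dp."""
    dtau_dp = np.gradient(tau, p)                     # dtau_esc/dp on the grid
    integrand = dtau_dp * p**5 * f
    return -(2.0 * np.pi / 3.0) * M_P * C_LIGHT**2 * du_dx * np.trapz(integrand, p)
```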
Figure 2 shows the gas density $`\rho `$, the gas pressure $`P_\mathrm{g}`$, the plasma velocity $`u`$ and the cosmic ray pressure $`P_\mathrm{c}`$ as functions of the spatial coordinate $`x`$, for different times. The scales are as follows: $`t_0=1.2\times 10^5`$ s, $`x_0=6.0\times 10^{13}`$ cm, $`u_0=5000\mathrm{km}\mathrm{s}^{-1}`$, $`\rho _0/m_\mathrm{p}=1\mathrm{cm}^{-3}`$, $`P_{\mathrm{g0}}=4.175\times 10^{-7}\mathrm{erg}\mathrm{cm}^{-3}`$. The initial cosmic ray adiabatic index is equal to the gas adiabatic index, $`\gamma _\mathrm{c}=\gamma _\mathrm{g}=5/3`$, and the compression ratio is $`r=3.97`$. We have used 20480 grid zones for $`x/x_0\in [-18,14]`$, with the shock initially at $`x=0`$, and 128 grid zones in $`\mathrm{log}(p/m_\mathrm{p}c)\in [-3.0,0.3]`$.

For the parameters introduced above, Fig. 3 shows the energy spectrum, in the form of the omni-directional flux $`F(E)\mathrm{d}E\propto vp^2f(p)\mathrm{d}p`$ (normalised at $`t=0`$), vs. proton kinetic energy downstream of the shock. At energies above the thermal particles we expect, for the strong shock ($`r\approx 4`$) simulated here, the result $`F(E)\propto E^{-\sigma }`$, with $`\sigma =(r+2)/[2(r-1)]=1`$, which is reproduced with high accuracy. In the thermal part of the distribution, the cooling of the plasma due to the energy flux into high-energy particles is responsible for the shift of the distribution towards lower energies. The initial injection rate thereby decreases to a quite stable value, as described above. Due to the steep dependence of both the Maxwell distribution and the transparency function (Fig. 1) on particle energy, the injection energy is quite well defined, leading to a power law due to Fermi acceleration starting shortly above thermal energies.

In using the standard cosmic ray transport equation, we have, of course, made use of the diffusion approximation, which may introduce an error especially for $`v\simeq u_2`$. Multiplication of the initial thermal distribution by $`\tau _{\mathrm{esc}}`$ suggests an effective initial injection velocity of about $`7\times 10^6\mathrm{m}\mathrm{s}^{-1}`$ (in the shock frame). Using an eigenfunction method, Kirk & Schneider (1989) have explicitly calculated the angular distribution of accelerated particles and accounted for the effects of a strong anisotropy, especially at low particle velocities. They were able to calculate the injection efficiency without recourse to the diffusion approximation, and always found lower efficiencies. For the injection velocity given above, $`r=4`$ and $`u_0=5\times 10^6\mathrm{m}\mathrm{s}^{-1}`$, they estimate a reduction effect of $`\approx 7\%`$, leaving the diffusion approximation quite reasonable even in this regime.

## 4 Conclusions

We have presented results from a solution of the time-dependent gas dynamics equations together with the cosmic ray transport equation. We have incorporated into these calculations an analytical solution of an injection model for a quasi-parallel shock, based on particle interaction with self-generated waves. We were therefore able to investigate the interaction of high-energy particles, accelerated by the Fermi process, with the shocked plasma flow without a free parameter for the efficiency of the injection. We found the energy flux $`EF(E)`$ of (non-relativistic) particles in the power-law region to be about two orders of magnitude less than at the peak of the thermal distribution. This result turns out to be quite stable, due to the self-regulating mechanisms between particle injection, wave generation and gas modification described above.

## 5 Acknowledgments

We are grateful to M.A. Malkov for helpful discussions. This work was supported by the University of Minnesota Supercomputing Institute, by NSF grant AST-9619438 and by NASA grant NAG5-5055.

## References
Harten A., 1983, J. Comput. Phys. 49, 357
Kang H., Jones T.W., 1995, ApJ 447, 944
Kirk J.G., Schneider P., 1989, A&A 225, 559
Malkov M.A., 1998, Phys. Rev. E 58, 4911
Malkov M.A., Völk H.J., 1995, A&A 300, 605
Malkov M.A., Völk H.J., 1998, Adv. Space Res. 21, 551
Skilling J., 1975, MNRAS 172, 557
# Multiple forms of intermittency in PDE dynamo models

## I Introduction

Intermittency has been observed in a variety of real settings as well as in a vast number of numerical models. A great deal of effort has therefore gone into understanding these modes of behaviour in the context of deterministic dynamical systems theory. These studies have demonstrated the existence of a number of different types of intermittency (such as Pomeau–Manneville, crisis, and on-off intermittencies), each with its own associated signatures and scalings. Many of these forms of intermittency have in turn been concretely shown to be present in experiments and numerical studies of dynamical systems in a variety of settings (see and references therein).

An important potential domain of applicability of such behaviour arises in understanding the mechanisms underlying the intermediate time scale variability in the Sun, namely the occurrence of the so-called Maunder or grand minima, during which solar activity (as deduced from the sunspot numbers) was greatly diminished. This behaviour is also confirmed by evidence coming from the analysis of proxy data. There is also some evidence for similar types of variability in solar-type stars.

The idea that some type of dynamical intermittency may underpin the grand minima type variability in the sunspot record (the intermittency hypothesis) goes back at least to the late 1970’s. This idea has been the subject of intense study over recent years and has involved the employment of various classes of dynamo models, including ordinary differential equation (ODE) models (e.g. ) as well as partial differential equation (PDE) models (e.g. ). In addition to the phenomenological evidence for the presence of intermittent-type behaviours in dynamo models, concrete evidence has recently been found for the presence of particular types of intermittency in ODE dynamo models, as well as for a recently discovered generalisation of on-off intermittency, referred to as in-out intermittency, in PDE models. Here we wish to report concrete evidence for the occurrence of two other types of intermittency, namely the crisis–induced and Pomeau–Manneville Type-I intermittencies, in PDE mean–field dynamo models.

The organisation of the paper is as follows. In Sec. II we briefly introduce the model studied here. Sec. III summarises our evidence demonstrating the presence of these types of intermittencies in this model, and finally in Sec. IV we draw our conclusions.

## II Model

Ideally one would wish to employ full 3-dimensional dynamo models with a minimum number of approximations and simplifying assumptions. Despite a number of important recent attempts, the difficulty of dealing with small scale turbulence makes a detailed and extensive self-consistent study of such fully turbulent regimes in stars still computationally impractical (see e.g. ). In view of this, an alternative approach in studies of stellar dynamos has been to employ mean–field models. We should mention that there is an ongoing debate regarding the nature and realistic value of such models. Nevertheless, 3-D turbulence simulations do seem to produce magnetic fields whose global properties (such as field parity and time dependence) are similar to those expected from corresponding mean–field dynamo models. In this way mean–field dynamo models seem to reproduce certain features of the more complicated models and allow the study of certain global properties of magnetic fields in the Sun and solar-type stars (see for example ).
The standard mean–field dynamo equation is given by

$$\frac{\partial 𝐁}{\partial t}=\nabla \times \left(𝐮\times 𝐁+\alpha 𝐁-\eta _t\nabla \times 𝐁\right),$$ (1)

where $`𝐁`$ and $`𝐮`$ are the mean magnetic field and mean velocity respectively, and the turbulent magnetic diffusivity $`\eta _t`$ and the coefficient $`\alpha `$ arise from the correlation of small scale turbulent velocities and magnetic fields ($`\alpha `$ effect). We consider the usual algebraic form of $`\alpha `$–quenching, namely

$$\alpha =\frac{\alpha _0\mathrm{cos}\theta }{1+|𝐁|^2},$$ (2)

where $`\alpha _0=\mathrm{constant}`$ and $`\theta `$ is the co-latitude.

We solve Eq. (1) in an axisymmetric configuration and, in the following, as is customary, we shall discuss the behaviour of the solutions by monitoring the total magnetic energy, $`E=\frac{1}{2\mu _0}\int 𝐁^2𝑑V`$, where $`\mu _0`$ is the induction constant and the integral is taken over the dynamo region. We split $`E`$ into two parts, $`E=E_A+E_S`$, where $`E_A`$ and $`E_S`$ are respectively the energies of the antisymmetric and symmetric parts of the field with respect to the equator. The overall parity $`P`$ is given by $`P=[E_S-E_A]/E`$, so $`P=-1`$ denotes an antisymmetric (dipole-like) pure parity solution and $`P=+1`$ a symmetric (quadrupole-like) pure parity solution.

For the numerical results reported in the following section, we used a modified version of the axisymmetric dynamo code of Brandenburg et al. (1989) employed recently in. These models are constructed from a complete sphere of radius $`R`$ by removing an inner concentric sphere of radius $`r_0`$ and a conical section of semi-angle $`\theta _0`$ about the rotation axis, from both the north and south polar regions (see for details of the model and the relevant parameters). To test the robustness of the code we verified that no qualitative changes were produced by employing a finer grid and a different temporal step length (we used a grid size of $`41\times 81`$ mesh points and a step length of $`10^{-4}R^2/\eta _t`$ in the results presented in this paper). For the following results we use $`C_\mathrm{\Omega }=10^4`$, which gives the magnitude of the differential rotation, and $`\theta _0=45^{\circ }`$. The magnitude of the $`\alpha `$-effect is given by the dynamo parameter $`C_\alpha `$. In the next section we show in turn concrete evidence for the occurrence of crisis–induced and Pomeau–Manneville Type-I intermittencies.

## III Results

### A Crisis–induced Intermittency

As far as their detailed underlying mechanisms and temporal signatures are concerned, crises come in three varieties. Here we shall be concerned with only one of these types, referred to as “attractor merging crisis”, whereby, as a system parameter is varied, two or more chaotic attractors merge to form a single attractor. There is both experimental and numerical evidence for this type of intermittency (see for example and references therein). In particular, this type of behaviour has been discovered in a 6-dimensional truncation of mean–field dynamo models.

Fig. 1 shows plots of the energy and parity for the above model as functions of time, calculated with $`r_0=0.2`$ and $`C_\alpha =25.202`$, which show a bimodal behaviour, switching intermittently between two different chaotic states. To determine the nature of this behaviour more precisely, we have plotted in Fig. 2 the return maps for the PDE models (1), showing the attractors before and after the merging (a sketch of how such return maps are extracted from the energy time series is given below).
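For reference, a return map of the kind shown in Fig. 2 can be built from the magnetic energy time series by recording successive local maxima; the following Python sketch (with a deliberately simple peak criterion of our own) outlines the construction:

```python
import numpy as np

def return_map(energy):
    """Collect successive local maxima E_n of a time series and return the
    pairs (E_n, E_{n+1}) that make up the return map."""
    e = np.asarray(energy, dtype=float)
    # a point is a local maximum if it exceeds both of its neighbours
    peaks = (e[1:-1] > e[:-2]) & (e[1:-1] > e[2:])
    maxima = e[1:-1][peaks]
    return maxima[:-1], maxima[1:]

# usage: E_n, E_n1 = return_map(E_of_t); scatter-plotting E_n1 against E_n
# for runs just below and just above C_alpha^* shows the attractors before
# and after the merging.
```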
As can be seen, the resulting merged attractor is, as expected, larger than the superposition of the two pre-existing attractors. These results can be taken as indications of the presence of crisis–induced intermittency in this model. To substantiate this further, we recall that another important signature of this type of intermittency is the way $`\tau `$, the average time between switches, scales with the system parameter, in this case $`C_\alpha `$. According to Grebogi et al., for a large class of dynamical systems this relation takes the form

$$\tau \propto \left|C_\alpha -C_\alpha ^{*}\right|^{-\gamma },$$ (3)

where the real constant $`\gamma `$ is the critical exponent characteristic of the system under consideration and $`C_\alpha ^{*}`$ is the critical value of $`C_\alpha `$ at which the two chaotic attractors merge.

The model under study here is a PDE system which is formally infinite dimensional. Such PDE models are numerically costly to integrate over the long intervals of time (sometimes in excess of 5000 time units) necessary in order to obtain a scaling of the type (3). Furthermore, the demonstration of such scaling requires a precise determination of the critical value $`C_\alpha ^{*}`$, which is difficult since, as one approaches this value, $`\tau `$ diverges and the integration time becomes prohibitive. Despite these difficulties, we have succeeded in obtaining strong evidence for the presence of such a scaling, as depicted in Fig. 3, with the corresponding $`\gamma =1.08\pm 0.05`$. Grebogi et al. conjecture that there may be a general tendency for $`\gamma `$ to be larger for higher–dimensional attractors. We do have a value of $`\gamma `$ higher than the previous one found for a related six-dimensional ODE dynamo model, but much lower than the value range suggested by Grebogi et al. Therefore, the conjectured range may need modification for large high–dimensional systems.

There is also evidence for an enlargement of the final attractor after merging, as shown by the larger amplitudes of variation in the parity, in the sense that the parity gets closer to $`-1`$ after the merging, as depicted in Fig. 2. This helped us to numerically arrive at a better estimate for the critical value $`C_\alpha ^{*}`$. These indicators, taken together, amount to strong evidence for the presence of crisis–induced intermittency in this model.

### B Pomeau–Manneville Type-I Intermittency

This type of intermittency, which is brought about through a tangent bifurcation, results in the system switching back and forth between a “ghost” periodic orbit and sudden bursts of chaotic behaviour. There is both experimental and numerical evidence for this type of intermittency (see for example and references therein). In particular, this type of behaviour has been discovered in a 12-dimensional truncation of a mean–field dynamo model.

To demonstrate the presence of this type of intermittency in the above PDE dynamo model, we have plotted in Fig. 4 the energy and parity as functions of time for the parameter values $`r_0=0.7`$ and $`C_\alpha =28.0`$, which clearly demonstrate switches between nearly periodic behaviour and sudden bursts. We note that, interestingly, the energy in this case shows strong modulation, which could be of interest in accounting for the occurrence of grand minima type episodes in sunspot activity. Another signature of this type of intermittency is provided by the specific characteristics of its corresponding power spectrum.
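The averaged spectrum used below can be estimated along the following lines; the periodogram average over runs started from different initial parities is the essential ingredient, while the mean removal is our own (standard) choice:

```python
import numpy as np

def averaged_power_spectrum(runs, dt):
    """Average the periodograms of several equal-length time series
    (one per initial condition), as done for the 16 runs of Fig. 5."""
    spectra = []
    for x in runs:
        x = np.asarray(x, dtype=float)
        x = x - x.mean()                    # remove the mean before the FFT
        spectra.append(np.abs(np.fft.rfft(x))**2)
    freqs = np.fft.rfftfreq(len(runs[0]), d=dt)
    return freqs, np.mean(spectra, axis=0)

# a log-log plot of the result should show the flat noise plateau below f_s
# and the approximate 1/f fall-off above it.
```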
By employing finite dimensional maps, it has been shown that the corresponding spectra have a broad-band feature whose shape obeys approximately the inverse-power law $`1/f`$ for $`f>f_s`$, where $`f_s`$ is the saturation frequency. Below this frequency there is a flat plateau induced by noise, which causes arbitrarily long laminar phases to become finite. As further evidence for this type of intermittency in the model (1), we have plotted in Fig. 5 the power spectrum at $`C_\alpha =28.0`$, obtained by averaging over 16 different initial conditions corresponding to different initial parities. As can be seen, the power spectrum shows both the flat plateau and the $`1/f`$ power-law scaling. Taken together, these indicators amount to strong evidence for the presence of Pomeau–Manneville Type-I intermittency in this model.

## IV Conclusion

We have obtained concrete evidence, in terms of phase space signatures, spectra and scalings, demonstrating the presence of crisis–induced and Pomeau–Manneville Type-I intermittencies in axisymmetric mean–field PDE dynamo models. Despite the rather idealised nature of these models, this is of potential importance since it shows the occurrence of two more types of intermittency (in addition to the in–out intermittency recently discovered) in these models, which may in turn be taken as an indication that more than one type of intermittency may occur in solar and stellar dynamos. This suggests that any observational programme for identifying the mechanisms underlying grand minima type variability needs to take into account the possibility that multiple intermittency mechanisms may be operative in different stars of a similar type, or even in the same star over different epochs. This would also be of importance in the interpretation of proxy data. In this way, a more appropriate hypothesis regarding such variability would be a multiple–intermittency hypothesis.

We would like to thank Axel Brandenburg for providing us with the original code and Andrew Tworkowski for helpful discussions. EC is supported by grant BD/5708/95 – PRAXIS XXI, JNICT. RT benefited from PPARC UK Grant No. L39094.
# Multishelled Gold Nanowires

## I Introduction

One-dimensional and quasi one-dimensional metallic structures are often used in various electronic devices. Wires of nanometre diameters and micrometre lengths have been produced and studied for some time. Recent advances in experimental techniques, such as Scanning Tunneling Microscopy (STM) and electron-beam lithography, are giving rise to the fabrication of wires with nanometre lengths. Many important results have recently been obtained for nanowires of different materials. For example, multishelled nanostructures were found in experiments for carbon, $`WS_2`$, $`MoS_2`$, and $`NiCl_2`$. In a jellium model calculation, multishelled structures were obtained for sodium nanowires. The results of a Molecular Dynamics (MD) simulation have shown that a gold wire with a length of $`4`$ nm and a radius of $`0.9`$ nm, at $`T=300`$ K, consists of three coaxial cylindrical shells and a thin core. Here we present an analysis of two additional multiwalled gold nanowires. We also propose that the unusual strands of gold atoms recently formed in STM and observed by a transmission electron microscope (TEM) are the image of cylindrical walls of multishelled gold nanowires. An explanation of the thinning process for the STM-supported multishelled gold nanowires is given.

## II Method

To simulate metals by the classical MD method one should use many-body potentials. Several implementations of these potentials are available, for example those developed within the embedded-atom and effective-medium theories. Gold nanowires were simulated using the glue realization of the embedded-atom potentials. This potential is well-tested and produces good agreement with a diversity of experimental results for the bulk, surfaces, and clusters. In contrast to most other potentials, it reproduces the different reconstructions on all low-index gold surfaces. Therefore, it is expected that simulated gold nanowires of more than $`50`$ atoms realistically model natural structures.

A time step of $`7.14\times 10^{-15}`$ s was employed in the simulation. The temperature was controlled by rescaling particle velocities. We started from ideal face-centered-cubic nanowires with the (111)-oriented cross-section at $`T=0`$ K, and included in the cylindrical MD boxes all particles whose distance from the nanowire axis was smaller than $`1.2`$ nm for the first nanowire, and $`0.9`$ nm for the second one. The initial lengths of the nanowires were $`6`$ and $`12`$ layers, and the numbers of atoms were $`689`$ and $`784`$. The samples were first relaxed, then annealed and quenched. To prevent melting and collapse into a drop, instead of the usual heating to $`1000`$ K used in MD simulations of gold nanostructures, our finite nanowires were heated only to $`600`$ K. Such a procedure gives the atoms a possibility to find local minima and models the constrained dynamical evolution present in fabricated nanowires. The structures were analyzed after a long MD run at $`T=300`$ K. A minimal sketch of this geometric setup and of the velocity-rescaling temperature control is given below.
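In the following Python sketch, the carving of the cylindrical box and the velocity-rescaling step are generic illustrations of the procedure described above; the glue (embedded-atom) force evaluation itself is not reproduced, and the function names are our own:

```python
import numpy as np

def carve_wire(positions, radius):
    """Keep the atoms of an fcc block whose distance from the wire axis
    (taken along z) is smaller than `radius` (1.2 nm or 0.9 nm above)."""
    r = np.linalg.norm(positions[:, :2], axis=1)
    return positions[r < radius]

def rescale_velocities(velocities, masses, t_target, kb=1.380649e-23):
    """Temperature control by velocity rescaling, as used during the runs."""
    kinetic = 0.5 * np.sum(masses[:, None] * velocities**2)
    t_now = 2.0 * kinetic / (3.0 * len(velocities) * kb)
    return velocities * np.sqrt(t_target / t_now)
```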
## III Results and Discussion

Figure 1 shows the shape of the MD box for the nanowire of $`784`$ atoms after $`7.1`$ ns of simulation at $`T=300`$ K. Top views of the particle trajectories in the whole MD boxes for the two nanowires are shown in Fig. 2 and Fig. 3. While the presence of a multishelled structure is obvious, after $`10^6`$ time steps of simulation the walls are still not completely homogeneous: several atoms remain about the walls. Three cylindrical shells exist for the nanowire shown in Fig. 2. The nanowire presented in Fig. 3 consists of two closely spaced coaxial walls and a large filled core. The filled core is well ordered, and its parallel vertical planes are at a spacing of $`0.18`$ nm. This double-walled structure suggests an application of similar gold nanowires as cylindrical capacitors. Therefore, we calculated the capacitance of finite nanometre-scale cylindrical capacitors and found values of the order of $`0.5`$ aF for the sizes at which multishelled nanowires appear in simulations.

As always in computer simulations of real materials, it is important to compare the results with experiments. Gold nanostructures have been the subject of recent STM studies. Unusual strands of gold atoms (down to one row) were simultaneously observed by an electron microscope. The structure of the strands and an understanding of the thinning process for these tip-supported nanostructures were left for future studies. The MD trajectory plots of atoms in the vertical slice of the box shown in Fig. 4 resemble the strands of gold atoms in Fig. 2 of Ref. . Therefore, we propose that the strands formed in STM are the image of the cylindrical walls of multishelled gold nanowires. Figure 4 shows that defects exist on some rows. Gold rows with a defect were sometimes observed by electron microscope (see Fig. 2 (a) in Ref. ).

The thinning process for a multishelled nanowire should start from its central part: either the filled thin core (Fig. 1 from Ref. ), the central part of a large filled core (Fig. 3 here), or an empty interior cylinder which first shrinks into a thin filled core (Fig. 2 here). When this central part is removed by diffusion of atoms to the tip, the next interior cylindrical wall shrinks into a new core, and then the process repeats. Therefore, the number of rows decreases by one, as observed in the experiment. At a final stage of the shrinkage process an empty cylinder shrinks to one row of atoms. For multishelled nanowires the shrinkage of an internal cylinder is followed by a decrease of the radii of the external cylinders, and the whole nanostructure thins down with time.

A mechanism of plastic deformation cycles of filled nanowires was suggested to explain the shape of necks formed in STM. In this model plastic deformation starts from the central cylindrical slab of a filled nanowire, which acts as the weakest spot. For multishelled nanowires such a central cylindrical weak spot most often forms naturally. In the STM/TEM experiments it was noted that the gap between the dark lines of their Fig. 2 (d) is greatly enlarged. This should be related to the special situation where the multishelled structure is lost and only two rows, i.e., an empty cylinder, are present. After that, at a final stage of the shrinkage process, one atomic chain remains.

## IV Conclusions

MD computer simulation based on the well-established embedded-atom potential shows that finite gold wires of nanometre dimensions are often multishelled. Recently, similar gold nanostructures were formed in STM and observed by TEM. The model of multishelled gold nanowires should be considered in explaining the conductance measured in STM. The results of computer simulations enable the fabrication of similar metallic nanowires which will be used in nanoelectronic and nanomechanical devices.
# Does macroscopic disorder imply microscopic chaos?

We argue that Gaspard and coworkers do not give evidence for microscopic chaos in the sense in which they use the term. The effectively infinite number of molecules in a fluid can generate the same macroscopic disorder without any intrinsic instability. But we argue also that the notion of chaos in infinitely extended systems needs clarification: in a wider sense, even some systems without local instabilities can be considered chaotic.

In a beautiful recent experiment, Gaspard and coworkers verified that the position of a Brownian particle behaves like a Wiener process with positive resolution-dependent entropy. More surprisingly and dramatically, they claim that this observation gives a first proof of “microscopic chaos”, a term they illustrate by examples of finite-dimensional dynamical systems which are intrinsically unstable. While the recent literature finds such chaos on a molecular level quite plausible, we argue that the observed macroscopic disorder cannot be taken as direct evidence. In fact, Brownian motion can be derived for systems which would usually be called non-chaotic; think of a tracer particle in a non-interacting ideal gas. All that is needed for diffusion is molecular chaos in the sense of Boltzmann, i.e. the absence of observable correlations in the motion of single molecules.

Part of the confusion is due to the lack of a unique definition of “microscopic chaos” for systems with infinitely many degrees of freedom. The authors of introduce the term by extrapolating from finite-dimensional dynamical systems, for which chaos is a well-defined concept: initially close states on average separate exponentially as time goes to infinity. The rate of separation, the Lyapunov exponent, is independent of the particular method used to measure “closeness”. However, the notions of diffusion and Brownian motion involve by necessity infinitely many degrees of freedom. In this thermodynamic limit, Lyapunov exponents are no longer independent of the metric. As a consequence, the large-system limit of a finite non-chaotic system will remain non-chaotic with one particular metric and become chaotic with another.

Let us illustrate this astonishing fact with an example introduced by Wolfram in the context of cellular automata. Consider two states $`𝐱=(\mathrm{},x_{-2},x_{-1},x_0,x_1,x_2,\mathrm{})`$ and $`𝐲=(\mathrm{},y_{-2},y_{-1},y_0,y_1,y_2,\mathrm{})`$ of a one-dimensional bi-infinite lattice system. If the distance between $`𝐱`$ and $`𝐲`$ is defined by $`d_{\mathrm{max}}(𝐱,𝐲)=\mathrm{max}_i|x_i-y_i|`$, it can grow exponentially only if the local differences do. This is what one usually means by “chaos”, and this is what the authors of had meant by “microscopic chaos”. However, the metric $`d_{\mathrm{exp}}(𝐱,𝐲)=\sum _i|x_i-y_i|e^{-|i|}`$ can also show exponential divergence if an initially far away perturbation just moves towards the origin without growing.

In finite-dimensional dynamical systems, chaos arises due to the de-focusing microscopic dynamics. The positive entropy is generated by the initially insignificant digits of the initial condition, which are brought forth by the dynamics.
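The metric dependence can be made explicit with a toy computation: a unit perturbation that merely travels toward the origin (as under a shift dynamics) keeps $`d_{\mathrm{max}}`$ constant while $`d_{\mathrm{exp}}`$ grows like $`e^t`$. A small Python sketch, with the truncation of the bi-infinite lattice as our own simplification:

```python
import numpy as np

N = 40
sites = np.arange(-N, N + 1)                 # truncated bi-infinite lattice

def d_max(x, y):
    return np.max(np.abs(x - y))

def d_exp(x, y):
    return np.sum(np.abs(x - y) * np.exp(-np.abs(sites)))

x = np.zeros(sites.size)
y = x.copy()
y[0] = 1.0                                   # unit perturbation at site i = -N
for t in range(N):
    print(t, d_max(x, y), d_exp(x, y))       # d_max stays 1, d_exp grows ~ e^t
    y = np.roll(y, 1)                        # shift: perturbation moves one site
```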
In the thermodynamic limit, a completely different mechanism also exists: perturbations coming from far away regions kick the tracer particle once and then move away again to infinity. The entropy is positive due to information stored in remote parts of the initial condition. Associated with this, one can also define suitable Lyapunov exponents.

To resolve the confusion, we suggest following Sinai and first letting the system size tend to infinity, and only then the observation time. In that case, a system observed in a particular metric $`\mu `$ is called $`\mu `$-chaotic when we find a positive Lyapunov exponent using this metric. However, Gaspard et al. obviously had in mind the type of chaos detectable with the metric $`d_{\mathrm{max}}`$, arising from a local instability. For this, they fall short of giving experimental evidence.
## 1 Introduction

In all probability a Higgs boson will be discovered at either LEP II, Run II of the Tevatron, or the Large Hadron Collider (LHC). It is then inevitable that the emphasis of Higgs physics will turn away from discovery and instead focus on the investigation of Higgs boson properties, such as its mass, width and branching ratios. Although much interesting Higgs phenomenology can be done at the LHC, many analyses are made infeasible by the rather messy nature of hadron colliders. Instead one must resort to the much cleaner environment of $`e^+e^-`$ annihilations, for example at the Linear Collider (LC), where precision measurements at the TeV scale can be made.

One particularly interesting task to be carried out at future colliders is the reconstruction of the Higgs potential itself, possibly confirming, or denying, the mechanism of spontaneous electroweak symmetry breaking. This can be achieved by measuring the trilinear $`\lambda _{HHH}`$ and quadrilinear $`\lambda _{HHHH}`$ Higgs self-couplings, which can then be compared with the predictions of the Standard Model (SM), or indeed the Minimal Supersymmetric Standard Model (MSSM)<sup>3</sup><sup>3</sup>3In principle, the former coupling is amenable to investigation at hadron and high energy photon colliders too, via double Higgs-strahlung off $`W^\pm `$ or $`Z`$ bosons, $`W^+W^-`$ or $`ZZ`$ fusion, gluon-gluon fusion or $`\gamma \gamma `$ fusion.

A measurement of the trilinear term, $`\lambda _{HHH}`$, is the first step in reconstructing the Higgs potential. At a future $`e^+e^-`$ collider, the $`\lambda _{HHH}`$ coupling of the SM is accessible through double Higgs-strahlung off $`Z`$ bosons, in the process $`e^+e^-\to HHZ`$. This is the mechanism with which we will be concerned in this paper (for the MSSM see Ref. ). The SM signal of the trilinear Higgs self-coupling has been thoroughly investigated in Ref. (with its MSSM counterparts), and was found to be small but measurable for an intermediate-mass Higgs boson, given a high integrated luminosity. In contrast, the quadrilinear vertex, $`\lambda _{HHHH}`$, is unmeasurable at the energy scale of the proposed LCs, due to suppression by an additional power of the electromagnetic coupling constant.

However, in measuring $`\lambda _{HHH}`$, one must be sure that the already small signal can be distinguished from its backgrounds without being appreciably reduced. Here we will examine the $`H\to b\overline{b}`$ decay channel over the Higgs mass range $`M_H\lesssim 140`$ GeV and present kinematic cuts to aid its selection. The case of off- and on-shell $`H\to W^{\pm (*)}W^{\mp }`$ decays for $`M_H\gtrsim 140`$ GeV is under examination elsewhere.

If one assumes very efficient tagging and high-purity sampling of $`b`$ quarks, the backgrounds to a $`\lambda _{HHH}`$ measurement from double Higgs events in the $`4b`$ decay channel are primarily the ‘irreducible’ ones via $`b\overline{b}b\overline{b}Z`$ intermediate states, which can be separated into EW and QCD backgrounds. Furthermore, the double Higgs-strahlung process (see Fig. 1):

$$e^+e^-\to HHZ\to b\overline{b}b\overline{b}Z$$ (1)

contains diagrams proceeding via an $`HHZ`$ intermediate state but not dependent on $`\lambda _{HHH}`$ (graphs 1–3 in Fig. 1), as well as the one sensitive to the trilinear Higgs self-coupling (graph 4 in Fig. 1).
In addition, we also include four extra diagrams, which differ only in the exchange of the four-momenta and helicities of two identical $`b`$ quarks (or, equivalently, antiquarks) and a minus sign (due to the Fermi-Dirac statistics pertinent to identical fermionic particles). However, the narrow width of the Higgs resonance ensures that the interference is negligible, so that these extra diagrams could be included by symmetry.

The other two backgrounds proceed via purely EW interactions (see Figs. 2 and 3),

$$e^+e^-\to \mathrm{EW}\mathrm{graphs}\to b\overline{b}b\overline{b}Z,$$ (2)

and via QCD couplings as well (see Fig. 4),

$$e^+e^-\to \mathrm{QCD}\mathrm{graphs}\to b\overline{b}b\overline{b}Z,$$ (3)

and both contain no more than one intermediate Higgs boson. The EW background, process (2), is of $`𝒪(\alpha _{em}^5)`$ away from resonances, but can, in principle, be problematic due to the presence of both $`Z`$ vectors and Higgs scalars yielding $`b\overline{b}`$ pairs. Finally, the QCD background, process (3), is of $`𝒪(\alpha _{em}^3\alpha _s^2)`$ away from resonances. Here, although there are no heavy objects decaying to $`b\overline{b}`$ pairs, the production rate itself could give difficulties, due to the presence of the strong coupling. As with process (1), one must include the diagrams with the interchange of the two identical $`b`$ (anti)quarks in the EW and QCD background processes as well. In contrast to the signal, here interference effects are sizable.

The plan of the paper is as follows. The next Section details the procedure adopted in computing the relevant scattering amplitudes. Sect. 3 displays our numerical results and contains our discussion. Finally, in the last Section, we summarize and conclude.

## 2 The matrix elements (MEs)

The double Higgs-strahlung process (1) proceeds at lowest order through the diagrams of Fig. 1, as explained in the Introduction. They are rather straightforward to calculate in the case of on-shell $`HHZ`$ production (see, e.g., Ref. for an analytic expression of the ME). The EW background (2) derives from many graphs: 550 in total (again, taking into account the $`b`$ (anti)quark statistics). However, they can conveniently be grouped into different ‘topologies’: that is, collections of diagrams with identical (non-)resonant structure. We have isolated 23 of these, displayed in Figs. 2 and 3 depending on whether one or zero Higgs intermediate states are involved, respectively. There are 214 graphs of the first kind and 336 of the second.

This approach, of splitting the ME into (non-)resonant subprocesses, facilitates the integration over the phase space and further provides an insight into the fundamental dynamics. On the one hand, one can compute each of the topologies separately, with the appropriate mapping of variables, thus optimizing the accuracy of the numerical integration. On the other hand, one is able to assess the relative weight of the various subprocesses in the full scattering amplitude, by comparing the various integrals with each other. However, one should recall that the squared amplitudes associated with each of these topologies are in general not gauge invariant. In fact, gauge invariance is recovered only when the various (non-)resonant terms are summed. For reasons of space, we will not dwell on technicalities any further here, as a good guide to this technique can be found in Ref. . (The resonant structure of the various subchannels ought to be self-evident in Figs. 2–3.)

The QCD diagrams associated with process (3) can be found in Fig. 4.
In total, one has 120 of these, with only five different (non-)resonant topologies. The integration in this case is much simpler than in the EW case and can in fact be carried out with percent accuracy directly over the full ME, using standard multichannel Monte Carlo methods.

Non-zero interference effects exist between processes (1), (2) and (3). However, given the very narrow width of the Higgs boson (always below 20 MeV over the mass range considered here), any interference with the signal can be safely neglected. Furthermore, we will see that the dominant subprocesses of the two backgrounds have very different topologies, so one also expects their interference to be negligible. Therefore, given that their calculation would be rather cumbersome, we do not consider them in our analysis.

The large number of amplitudes can easily and efficiently be dealt with in the numerical evaluation if one resorts to helicity amplitudes. In doing so, we have adopted the HELAS subroutines. The algorithm used to perform the multi-dimensional integrations was VEGAS.

Numerical inputs were as follows. The strong coupling constant $`\alpha _s`$ entering the QCD process (3) has been evaluated at two loops, with $`N_f=5`$ and $`\mathrm{\Lambda }_{\overline{\mathrm{MS}}}=160`$ MeV, at a scale equal to the collider CM energy, $`\sqrt{s}\equiv E_{\mathrm{cm}}`$. The EM coupling constant was $`\alpha _{em}=1/128`$. The sine squared of the Weinberg angle was $`\mathrm{sin}^2\theta _W=0.232`$. The fermionic (pole) masses were $`m_e=0`$ and $`m_b=4.25\mathrm{GeV}`$. As for the gauge boson masses (and their widths), we have used: $`M_Z=91.19\mathrm{GeV}`$, $`\mathrm{\Gamma }_Z=2.50\mathrm{GeV}`$, $`M_W=M_Z\mathrm{cos}\theta _W\approx 80\mathrm{GeV}`$ and $`\mathrm{\Gamma }_W=2.08\mathrm{GeV}`$. Concerning the Higgs boson, we have spanned its mass $`M_H`$ over the range 100 to 150 GeV and we have computed its width, $`\mathrm{\Gamma }_H`$, by means of the program described in Ref. , which uses a running $`b`$ mass in evaluating the $`H\to b\overline{b}`$ decay fraction. Thus, for consistency, we have evolved the value of $`m_b`$ entering the $`Hbb`$ Yukawa coupling of the $`H\to b\overline{b}`$ decay currents in the same way. We have adopted as CM energies typical for the LC the values $`E_{\mathrm{cm}}=500,1000`$ and 1500 GeV.

Notice that, in the remainder of this paper, total and differential rates are those at the partonic level, as we identify jets with the partons from which they originate. In order to resolve the latter as four separate systems, we impose the following acceptance cuts: $`E(b)>10`$ GeV on the energy of each $`b`$ (anti)quark and $`\mathrm{cos}(b,b)<0.95`$ on the relative separation of all possible $`2b`$ combinations; these criteria can be phrased as a simple predicate on the parton momenta, as sketched below. We further assume that $`b`$ jets are distinguishable from light-quark and gluon jets (e.g., by using $`\mu `$-vertex tagging techniques). However, no efficiency to tag the four $`b`$ quarks is included in our results. Also, the $`Z`$ boson is treated as on-shell and no branching ratio (BR) is applied to quantify its possible decays. In practice, in order to simplify the treatment of the final state, one may assume that the $`Z`$ boson decays leptonically (i.e., $`Z\to \mathrm{}^+\mathrm{}^{}`$, with $`\mathrm{}=e,\mu ,\tau `$) or hadronically into light-quark jets (i.e., $`Z\to q\overline{q}`$, with $`q\ne b`$).
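A minimal Python re-implementation of these acceptance cuts, for illustration only (this is not the code used for the results):

```python
import numpy as np

def cos_angle(p, q):
    """Cosine of the opening angle between two four-momenta (E, px, py, pz)."""
    return np.dot(p[1:], q[1:]) / (np.linalg.norm(p[1:]) * np.linalg.norm(q[1:]))

def passes_acceptance(b_momenta, e_min=10.0, cos_max=0.95):
    """E(b) > 10 GeV for each b (anti)quark and cos(b,b) < 0.95 for all pairs."""
    if any(p[0] <= e_min for p in b_momenta):
        return False
    n = len(b_momenta)
    return all(cos_angle(b_momenta[i], b_momenta[j]) < cos_max
               for i in range(n) for j in range(i + 1, n))
```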
Finally, we have not included Initial State Radiation (ISR) in our calculations. In fact, we would expect it to affect the various processes (1)–(3) rather similarly. As we are basically interested in relative rates among the latter, we are confident that the salient features of our results are insensitive to the presence or absence of photons radiated by the incoming electron-positron beams<sup>4</sup><sup>4</sup>4We also neglect beamstrahlung and Linac energy spread, by assuming a narrow beam design..

## 3 Results

The total cross sections for process (1), at the three CM energies considered here, can be found in the top-left frame of Fig. 5, as functions of $`M_H`$. The decrease of the total rates with increasing Higgs mass is mainly the effect of the BR of the decay channel $`H\to b\overline{b}`$, see, e.g., Fig. 1 of Ref. . This mode is dominant, and very close to 1, up to the opening of the off-shell $`H\to W^\pm W^{\mp *}`$ decay channel, which occurs at $`M_H\approx 140`$ GeV. In contrast, the production cross section for $`e^+e^-\to HHZ`$ is much less sensitive to $`M_H`$. In addition, because reaction (1) is an annihilation process proportional to $`1/s`$, a larger CM energy tends to deplete the production rates, as long as $`E_{\mathrm{cm}}\gg 2M_H+M_Z`$. When this is no longer true, e.g., at 500 GeV and $`M_H\gtrsim 140`$ GeV, phase space suppression can overturn the $`1/s`$ propagator effects. This is evidenced by the crossing of the curves for 500 and 1000 GeV. In practice, the maximum cross section for double Higgs-strahlung (1) is reached at energies $`E_{\mathrm{cm}}\approx 2M_H+M_Z+200`$ GeV. For Higgs masses in the lower part of the $`M_H`$ range considered here, e.g., $`M_H=110`$ GeV (where the bottom-antibottom channel is unrivaled by any other decay mode), this corresponds to $`E_{\mathrm{cm}}=500`$ GeV. Furthermore, the sensitivity of the production rates of reaction (1) to $`\lambda _{HHH}`$ is higher at lower collider energies. Thus, in order to illustrate the interplay between reactions (1)–(3), we will in the following focus on the case of a CM energy of 500 GeV (top-right corner of Fig. 5) as an illustrative example. In fact, the discussion for the other two choices, $`E_{\mathrm{cm}}=1000`$ and 1500 GeV (the two bottom plots of Fig. 5), would be rather similar, so we refrain from repeating it. (Also note that the signal-to-background ($`S/B`$) ratio improves with increasing energy.)

The rise at 500 GeV of the purely EW background (2) with the Higgs mass can be understood in the following terms. The dominant components of the EW process are those given by:

1. $`e^+e^-\to ZZZ\to b\overline{b}b\overline{b}Z`$, first from the left in the second row of topologies in Fig. 3. That is, triple $`Z`$ production with no Higgs boson involved.

2. $`e^+e^-\to HZZ\to b\overline{b}b\overline{b}Z`$, first(first) from the left(right) in the fifth(fourth) row of topologies in Fig. 2 (also including the diagrams in which the on-shell $`Z`$ is connected to the electron-positron line). That is, single Higgs-strahlung production in association with an additional $`Z`$, with the Higgs decaying to $`b\overline{b}`$. The cross sections of these two channels are obviously identical.

3. $`e^+e^-\to HZ\to Z^{*}Z^{*}Z\to b\overline{b}b\overline{b}Z`$, first from the right in the third row of topologies in Fig. 2. That is, single Higgs-strahlung production with the Higgs decaying to $`b\overline{b}b\overline{b}`$ via two off-shell $`Z^{*}`$ bosons.

4. $`e^+e^-\to ZH\to b\overline{b}Z^{*}Z\to b\overline{b}b\overline{b}Z`$, first(first) from the right(left) in the first(second) row of topologies in Fig. 2.
That is, two single Higgs-strahlung production channels with the Higgs decaying to $`b\overline{b}Z`$ via one off-shell $`Z^{*}`$ boson. Also the cross sections of these two channels are identical to each other, as in 2.

The production rates of 1.–4. as separate subprocesses can be found in the upper portion of Fig. 6. All other EW subprocesses are much smaller and rarely exceed $`10^{-3}`$ femtobarns, so we do not plot them here.

The QCD process (3) is dominated by $`e^+e^-\to ZZ`$ production with one of the two $`Z`$ bosons decaying hadronically into four $`b`$ jets. This subprocess corresponds to the topology in the middle of the first row of diagrams in Fig. 4. Notice that Higgs diagrams are involved in this process as well (bottom-right topology in the above figure). These correspond to single Higgs-strahlung production with the Higgs scalar subsequently decaying into $`b\overline{b}b\overline{b}`$ via an off-shell gluon. Their contribution is not negligible, owing to the large $`ZH`$ production rates, as can be seen in the lower portion of Fig. 6. The somewhat unexpected dependence of the latter upon $`M_H`$ (with a maximum at 130 GeV) is the result of the interplay between our acceptance cuts and phase space effects. The contribution of the other diagrams, which do not resonate, is typically one order of magnitude smaller than that of the $`ZZ`$- and $`ZH`$-mediated graphs, with the interferences even smaller (and generally negative).

One should note from Fig. 5 that the overall rates of the signal are quite small (recall also that we neglect the tagging efficiency as well as the $`Z`$ decay rates), even at low Higgs masses, where both the production and decay rates are largest. In fact, they are always below $`0.2`$ femtobarns for all energies from 500 to 1500 GeV, although this can be doubled simply by polarizing the incoming electron and positron beams. Thus, as already recognised in Ref. , where on-shell production studies of process (1) were performed, luminosities of the order of one inverse attobarn need to be collected before statistically significant measurements of $`\lambda _{HHH}`$ can be performed. This emphasizes the need for high luminosity at any future LC.

We now proceed by looking at several differential spectra of reactions (1)–(3), in order to find suitable kinematic cuts which will enhance the $`S/B`$ ratio. The distributions in $`E(b)`$ and $`\mathrm{cos}(b,b)`$ leave little to exploit in separating signal from background after the acceptance cuts are made, especially with respect to the EW background. We turn then to other spectra, for example invariant masses of $`b`$ (anti)quark systems. In this respect, we have plotted those of the following combinations:

* of $`2b`$ systems, both for the case in which the $`b`$ jets come from the same production vertex (‘right’ pairing) and for the opposite case (‘wrong’ pairing);

* of $`3b`$ systems, in which only two $`b`$ (anti)quarks have the same EM charge;

* of the $`4b`$ system.

We denote the mass spectra of the systems (a)–(d) as $`M_R(bb)`$ and $`M_W(bb)`$ (where $`R(W)`$ signifies the right(wrong) combination), $`M(bbb)`$ and $`M(bbbb)`$, respectively. In the first three cases, there exists more than one combination of $`b`$ quarks. In such instances, we bin them all in the same distribution, each with identical event weight. Further notice that the $`2b`$ invariant masses that can be reconstructed experimentally are actually appropriately weighted superpositions of $`M_R(bb)`$ and $`M_W(bb)`$. In particular, if the $`b`$ charge tag is available, then the measured spectrum is roughly the sum of the two. If not, the latter is about twice as large as the former. The combinatorics can be made explicit as in the sketch below.
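For definiteness, a short Python sketch of how the six $`2b`$ pairings of an event enter the same histogram with identical weight (the ‘right’/‘wrong’ split additionally requires the Monte Carlo truth and is not reproduced here):

```python
import itertools
import numpy as np

def inv_mass(momenta):
    """Invariant mass of a list of four-momenta (E, px, py, pz)."""
    tot = np.sum(momenta, axis=0)
    return np.sqrt(max(tot[0]**2 - np.dot(tot[1:], tot[1:]), 0.0))

def two_b_masses(b_momenta):
    """All six 2b invariant masses; each enters with identical event weight."""
    return [inv_mass([b_momenta[i], b_momenta[j]])
            for i, j in itertools.combinations(range(4), 2)]
```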
The invariant mass spectra can be found in Fig. 7, for the combination $`E_{\mathrm{cm}}=500`$ GeV and $`M_H=110`$ GeV. Here, one can appreciate the narrow Higgs peak<sup>5</sup><sup>5</sup>5Recall that for $`M_H=110`$ GeV one has $`\mathrm{\Gamma }_H\approx 3`$ MeV. The Higgs resonances in the top-left frame of Fig. 7 have been smeared out by incorporating a $`5`$ GeV bin width, emulating the finite efficiency of the detectors in determining energies and angles. in the $`M_R(bb)`$ distribution, which can certainly be exploited in the signal selection, especially against the QCD background, which is rather flat in the vicinity of $`M_H`$. Apparently, this is no longer true for the EW process, as it also displays a resonance at $`M_H`$ (induced by the diagrams in Fig. 2 which carry an external on-shell $`H\to b\overline{b}`$ current). However, events of the type (1) contain two $`2b`$ invariant masses naturally peaking at $`M_H`$, whereas only one would appear in samples produced by process (2) (apart from accidental resonant mispairings). Thus, even in the case of the EW background one can achieve a significant noise reduction. Finally, requiring that none of the $`2b`$ invariant masses reproduces a $`Z`$ boson will also be helpful in this respect, as is evident from the $`M_R(bb)`$ spectrum of process (2). However, in this case, the invariant mass resolution of di-jet systems must be at least as good as the difference $`(M_H-M_Z)/2`$, in order to resolve the $`Z`$ and $`H`$ peaks.

Other mass distributions can be useful too in reducing the noise while keeping a substantial portion of the signal. Of some help are the $`M(bbb)`$ and $`M(bbbb)`$ spectra. In particular, notice that the minimum value of the latter is about $`2M_H`$ for process (1), whereas for reaction (2) it is lower, typically around $`2M_Z`$ or $`M_H+M_Z`$, as driven by the two dominant components of the EW background at low Higgs mass (i.e., subprocesses 1. and 2., respectively; see the top of Fig. 6). The QCD background can stretch to $`M(bbbb)`$ values even further below the $`2M_H`$ end point of the signal (the more so the larger $`M_H`$), indeed showing peaks both at $`M_H`$ and $`M_Z`$, corresponding to the $`H\to 4b`$ and $`Z\to 4b`$ decays induced by the second and last topologies in Fig. 4. As for the $`M(bbb)`$ spectrum, its shape is strongly related to that of the $`4b`$ mass. In a sense, excluding one of the four $`b`$ quarks from the mass reconstruction corresponds to smearing the $`M(bbbb)`$ distributions, so that the broad prominent peak at $`M(bbb)\approx 90`$–$`100`$ GeV in the case of the QCD process can be viewed as the superposition of the remains of the two narrow ones seen in $`M(bbbb)`$. For this very reason, once a selection cut is imposed on one of the two masses, it is very likely to affect the other in a similar manner.

The spectacular differences seen in Fig. 7 between, on the one hand, process (1) and, on the other hand, reactions (2) and (3) (more in the former than in the latter) are a direct reflection of the rather different resonant structure of the various channels, that is, the form of the time-like propagators (i.e., $`s`$-channels) in the corresponding MEs. However, one should expect further kinematic differences, driven by the presence in the backgrounds of space-like propagators (i.e., $`t,u`$-channels), which are instead absent in the signal (see Figs. 1–4).
This is most evident in the properties of the four-quark hadronic system (d) recoiling against the $`Z`$ boson. As the internal dynamics of the four $`b`$ quarks is very different in each of the processes (1)–(3), we also study the cases (a)–(c) separately. One can appreciate the propagator effects by plotting, for example, the cosine of the polar angle (i.e., with respect to the beam axes) of the four $`b`$ quark system (or, indeed, the real $`Z`$). See Fig. 8, where, again, $`E_{\mathrm{cm}}=500`$ GeV and $`M_H=110`$ GeV. Notice that the backgrounds are much more forward peaked than the signal. This can be understood by recalling that the QCD events are mainly due to $`e^+e^-\to ZZ`$ production followed by the decay of one of the gauge bosons into four $`b`$ quarks. The $`ZZ`$ pair is produced via $`t,u`$-channel graphs, so the gauge bosons are preferentially directed forward and backward into the detector. In contrast, the signal (1) is always induced by $`s`$-channel graphs. The EW background (2) has a more complicated structure but is still sizably dominated by forward production.

The behaviour of $`\mathrm{cos}(bbb)`$ and $`\mathrm{cos}(bb)`$ is very similar to that of the four-quark system. In practice, it is the strong boost of the $`Z`$ bosons produced forwards and backwards in processes (2)–(3), combined with the small value of $`m_b`$ (compared to the typical process scale, i.e., $`E_{\mathrm{cm}}`$), that produces a similar angular pattern for all multi-$`b`$ systems of the background, regardless of their actual number. This is true for reaction (1) also. Therefore, all the angular distributions displayed can boast strong (though correlated) discriminatory powers, allowing one to separate signal and background events efficiently.

An alternative possible means of disentangling the effects of the propagators is to resort to the differential distributions of the above systems (a)–(d) in transverse momentum, $`p_T`$. These are plotted in Fig. 9, for the same $`E_{\mathrm{cm}}`$ and $`M_H`$ as in the previous two figures. However, this kinematic variable proves not to be useful. In fact, the only discriminating distribution is the $`p_T`$ of all four $`b`$ quarks (or, equivalently, of the final state $`Z`$ boson), and this only singles out the QCD background, a large fraction of which populates the range beyond 180–200 GeV. Neither the signal nor the EW background does so, and the two always look rather similar in shape (even in the spectra involving $`2b`$ quarks only, once these are appropriately combined together).

Notice that two of the $`b`$ quarks in the QCD background originate from gluon splitting and are therefore rather soft. Consequently, one expects the four $`b`$ quarks to be more planar, in the $`4b`$ rest frame, for the QCD background than for the signal, where they are all decay products of heavy bosons. We study this by plotting the thrust $`T`$ and sphericity $`S`$ distributions in Fig. 10. Indeed, such quantities could prove useful in reducing the QCD background, but they are ineffective against the EW background.

Before proceeding to apply dedicated selection cuts, we remark that kinematic features similar to those displayed in Figs. 7–10 can also be seen at the other two values of CM energy considered here and for other Higgs masses.
In fact, increasing the value of $`M_H`$ in those distributions mainly translates into a ‘movable’ resonant peak in $`M_R(bb)`$, as well as a lower end point in $`M(bbbb)`$, and into somewhat softer(harder) spectra in invariant mass(transverse momentum) than at the smaller $`M_H`$ value considered so far. Moreover, the $`t,u`$-channel dependence of the backgrounds, as opposed to the $`s`$-channel one of the signal, is more marked at higher $`E_{\mathrm{cm}}`$ values. Finally, for the angular distributions, a larger Higgs mass does not remove the big differences seen between the three channels (1)–(3).

Therefore, in all generality, following our discussions of Figs. 7–10, and recalling the need to minimize the loss of signal because of its rather small production and decay rates, one can optimise the $`S/B`$ ratio by imposing the cuts:

$$|M(bb)-M_H|<5\mathrm{GeV}(\mathrm{on}\mathrm{exactly}\mathrm{two}\mathrm{combinations}\mathrm{of}2b\mathrm{systems}),$$

$$|M(bb)-M_Z|>5\mathrm{GeV}(\mathrm{for}\mathrm{all}\mathrm{combinations}\mathrm{of}2b\mathrm{systems}),$$

$$M(bbbb)>2M_H,|\mathrm{cos}(2b,3b,4b)|<0.75.$$ (4)

In enforcing these constraints, we assume no $`b`$ jet charge determination. Moreover, the reader should recall that the spectra of the four hadronic systems (a)–(d) are all correlated, in each of the quantities studied above, and so are the invariant masses, transverse momenta and polar angles among themselves.

The counterpart of Fig. 5 after the implementation of the above cuts is Fig. 11. The effect of the latter is a drastic reduction of both background rates (2)–(3), while a large portion of the original signal (1) is maintained. Further notice how the imposition of the cuts (4) modifies the hierarchy of the cross sections for process (1) with the CM energy, as now the largest rates occur at $`E_{\mathrm{cm}}=1000`$ GeV and the smallest at $`E_{\mathrm{cm}}=500`$ GeV (compare to Fig. 5). The $`S/B`$ ratios turn out to be enormously large for not too heavy Higgs masses. For example, at $`E_{\mathrm{cm}}=500(1000)[1500]`$ GeV and for $`M_H=110`$ GeV, one gets $`S/B=25(60)[104]`$, where $`S`$ corresponds to the rates of reaction (1) and $`B`$ refers to the sum of the cross sections for processes (2)–(3). The reduction of both backgrounds amounts to several orders of magnitude, particularly for the strong process, whereas the loss of signal is much more contained. The acceptance of the latter is better at higher collider energies and lower Higgs masses. In fact, the poorest rate occurs for $`E_{\mathrm{cm}}=500`$ GeV at the upper end of the $`M_H`$ range, where more than 90% of the signal is sacrificed.

We should however remark that the suppression of the backgrounds comes largely from the invariant mass cuts on $`M_{bb}`$ advocated in (4). (In fact, they are crucial not only in selecting the $`M_H`$ resonance of the signal, but also in minimizing the rejection of the latter around $`M_Z`$ when mispairings occur: notice the shoulder at 90 GeV of the $`M_W(bb)`$ spectrum of reaction (1).) The value we have adopted for the resolution is rather high, considering the large uncertainties normally associated with the experimental determination of jet angles and energies, though not unrealistic in view of the most recent studies. The ability of the actual detectors to guarantee the performances currently foreseen is thus crucial for the feasibility of dedicated studies of double Higgs-strahlung events at the LC.
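An event-level sketch of the selection (4) might read as follows; the complementary-pairing bookkeeping is our own interpretation of ‘exactly two combinations’, `inv_mass` is the helper sketched earlier, and the $`|\mathrm{cos}|<0.75`$ cuts on the $`2b`$, $`3b`$ and $`4b`$ systems would be applied in the same spirit:

```python
import itertools
import numpy as np

def inv_mass(momenta):
    tot = np.sum(momenta, axis=0)            # four-momenta as (E, px, py, pz)
    return np.sqrt(max(tot[0]**2 - np.dot(tot[1:], tot[1:]), 0.0))

def passes_selection(b, m_h, m_z=91.19, window=5.0):
    """Cuts (4) on one event, assuming no b charge determination."""
    pair_mass = {pair: inv_mass([b[pair[0]], b[pair[1]]])
                 for pair in itertools.combinations(range(4), 2)}
    # exactly two (complementary) pairings within 5 GeV of M_H
    near_h = [set(pair) for pair, m in pair_mass.items()
              if abs(m - m_h) < window]
    if len(near_h) != 2 or (near_h[0] & near_h[1]):
        return False
    # veto any pairing compatible with a Z boson
    if any(abs(m - m_z) < window for m in pair_mass.values()):
        return False
    # hard lower cut on the four-b invariant mass
    if inv_mass(b) <= 2.0 * m_h:
        return False
    # the |cos(2b,3b,4b)| < 0.75 polar-angle cuts would follow here
    return True
```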
A related aspect is the efficiency of tagging the $`b`$ quarks necessarily present in the final state of reaction (1), particularly when the $`Z`$ boson decays hadronically. On the one hand, given the high production rate of six-jet events from QCD and from multiple gauge-boson resonances in light-quark and gluon jets, it is desirable to resort to heavy-flavour identification in hadronic samples. On the other hand, the poor statistics of the $`HHZ`$ signal requires a judicious approach in order not to deplete the latter below detection level. According to recent studies , the two requirements can be met simultaneously, as efficiencies for tagging $`b\overline{b}`$ pairs produced in Higgs decays were computed to be as large as $`ϵ_{b\overline{b}}\simeq 90\%`$, with mis-identification probabilities for light (charmed) quarks as low as $`ϵ_{q\overline{q}(c\overline{c})}\simeq 0.3(4)\%`$ (and negligible for gluons). If such a projection for the LC detectors proves to be true, then even the requirement of tagging exactly four $`b`$ quarks in double-Higgs events of the type (1) might be statistically feasible, thus suppressing the reducible backgrounds to truly marginal levels . One must also bear in mind that experimental considerations, such as the performance of the detectors, the fragmentation/hadronization dynamics and a realistic treatment of the $`Z`$ boson decays, are also important when determining which cuts should be made. Such considerations are beyond the scope of this paper and are under study elsewhere . Finally, the numbers of signal and background events seen per inverse attobarn of luminosity at $`E_{\mathrm{cm}}=500`$, $`1000`$ and $`1500`$ GeV, with $`M_H=110`$ GeV, can be seen in Tab. 1. Of course, one could relax one or more of the constraints we have adopted to improve the signal rates without letting the backgrounds become unmanageably large. For example, by removing the cuts on $`\mathrm{cos}(bb)`$ and $`\mathrm{cos}(bbb)`$ one can enhance the signal rates by about a factor of two. However, the EW background would increase by a comparable amount and the QCD rate by a somewhat larger factor, of at least three or four. Kinematic fits can also help in improving the $`S/B`$ ratios . ## 4 Summary In conclusion, the overwhelming irreducible background from EW and QCD processes of the type $`e^+e^-\to b\overline{b}b\overline{b}Z`$ to double Higgs production in association with $`Z`$ bosons, with decays in the channel $`H\to b\overline{b}`$, i.e., $`e^+e^-\to HHZ\to b\overline{b}b\overline{b}Z`$, should easily be suppressed to manageable levels by simple kinematic cuts, e.g., on invariant masses and polar angles. The number of signal events is generally rather low, but will be observable at the LC given the following ‘mandatory conditions’ (some of which have already been outlined in Ref. ): * very high luminosity; * excellent $`b`$-tagging performance; * high di-jet mass resolution. The requirement advocated in Ref. of good forward acceptance for jets may also be added to the above list, as we have explicitly verified (though not shown) that single-jet directions in the double Higgs-strahlung process can extend down to about 20 degrees in polar angle. Finally, beam polarization can also be invoked to increase the signal-to-background ratios . ## Acknowledgments We thank Peter Zerwas for suggesting the subject of this research and for useful discussions. DJM would also like to thank DESY for hospitality while part of this research was carried out.
Financial support from the UK-PPARC is gratefully acknowledged by both authors.
# NLL BFKL and NNLO ## 1 LL BFKL In the limit of centre-of-mass energy much greater than the momentum transfer, $`\widehat{s}\gg |\widehat{t}|`$, any scattering process is dominated by gluon exchange in the cross channel. Building upon this fact, the BFKL theory models strong-interaction processes with two large and disparate scales by resumming the radiative corrections to parton-parton scattering. This is achieved to leading logarithmic (LL) accuracy, in $`\mathrm{ln}(\widehat{s}/|\widehat{t}|)`$, through the BFKL equation, i.e. a two-dimensional integral equation which describes the evolution in transverse-momentum space and moment space of the gluon propagator exchanged in the cross channel, $`\omega f_\omega (k_a,k_b)=\frac{1}{2}\delta ^2(k_a-k_b)+\frac{\overline{\alpha }_s}{\pi }\int \frac{d^2k_{\perp }}{k_{\perp }^2}K(k_a,k_b,k),`$ (1) with $`\overline{\alpha }_s=N_c\alpha _s/\pi `$, $`N_c=3`$ the number of colors, $`k_a`$ and $`k_b`$ the transverse momenta of the gluons at the ends of the propagator, and with kernel $`K`$, $`K(k_a,k_b,k)=f_\omega (k_a+k,k_b)-\frac{k_a^2}{k_{\perp }^2+(k_a+k)_{\perp }^2}f_\omega (k_a,k_b),`$ (2) where the first term accounts for the emission of a gluon of momentum $`k`$ and the second for the virtual radiative corrections, which reggeize the gluon. Eq. (1) has been derived in the multi-Regge kinematics, which presumes that the produced gluons are strongly ordered in rapidity and have comparable transverse momenta. The solution of eq. (1), transformed from moment space to $`y`$ space and averaged over the azimuthal angle between $`k_a`$ and $`k_b`$, is $`f(k_a,k_b,y)=\int \frac{d\omega }{2\pi i}e^{\omega y}f_\omega (k_a,k_b)=\frac{1}{k_a^2}\int _{\frac{1}{2}-i\mathrm{\infty }}^{\frac{1}{2}+i\mathrm{\infty }}\frac{d\gamma }{2\pi i}e^{\omega (\gamma )y}\left(\frac{k_a^2}{k_b^2}\right)^\gamma ,`$ (3) with $`\omega (\gamma )=\overline{\alpha }_s\chi (\gamma )`$ the leading eigenvalue of the BFKL equation, determined by the characteristic function $$\chi (\gamma )=2\psi (1)-\psi (\gamma )-\psi (1-\gamma ).$$ (4) In eq. (3) the evolution parameter $`y`$ of the propagator is $`y=\mathrm{ln}(\widehat{s}/\tau ^2)`$. The precise definition of the reggeization scale $`\tau `$ is immaterial to LL accuracy; the only requirement is that it is much smaller than any of the $`s`$-type invariants, in order to guarantee that $`y\gg 1`$. The maximum of the leading eigenvalue, $`\omega (1/2)=4\mathrm{ln}2\overline{\alpha }_s`$, yields the known power-like growth of $`f`$ with energy . What does the BFKL theory have to do with reality? There is no evidence, as yet, of the necessity of a BFKL resummation either in the scaling violations in the evolution of the $`F_2`$ structure function in DIS (for a summary of the theoretical status, see ref. ), or in dijet production at large rapidity intervals . The most promising BFKL footprint, as of now, seems to be forward jet production in DIS, where the data show a better agreement with the BFKL resummation than with a NLO calculation (for a summary of dijet and forward jet production, see ref. ).
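The leading eigenvalue is straightforward to evaluate numerically. The sketch below (mine, not from the paper) implements Eq. (4) with the digamma function and checks the quoted maximum $`\omega (1/2)=4\mathrm{ln}2\overline{\alpha }_s`$; the value of $`\alpha _s`$ is an illustrative assumption.

```python
import numpy as np
from scipy.special import digamma

def chi(gamma):
    """LL BFKL characteristic function of Eq. (4):
    chi(gamma) = 2*psi(1) - psi(gamma) - psi(1 - gamma)."""
    return 2.0 * digamma(1.0) - digamma(gamma) - digamma(1.0 - gamma)

alpha_s = 0.2                        # illustrative value of the coupling
abar = 3.0 * alpha_s / np.pi         # \bar{alpha}_s = N_c alpha_s / pi
omega_max = abar * chi(0.5)          # leading eigenvalue at gamma = 1/2
print(omega_max, 4.0 * np.log(2.0) * abar)   # both ~0.53
```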
In a phenomenological analysis, the BFKL resummation is plagued by several deficiencies; the most relevant is that energy and longitudinal momentum are not conserved, and since the momentum fractions of the incoming partons are reconstructed from the kinematic variables of the outgoing partons, the BFKL prediction for a production rate may be affected by large numerical errors . However, energy-momentum conservation at each gluon emission in the BFKL ladder can be achieved through a Monte Carlo implementation of the BFKL equation (1). Besides, because of the strong rapidity ordering of the gluons emitted along the ladder, any two-parton invariant mass is large. Thus there are no collinear divergences, no QCD coherence and no soft emissions in the BFKL ladder. Accordingly, jets are determined only to leading order and have no non-trivial structure. Other resummations in the high-energy limit, like the CCFM equation, which has QCD coherence and soft emissions, seem thus better suited to describe more exclusive quantities, like multi-jet rates . However, it has been shown that, provided the jets are resolved, i.e. their transverse energy is larger than a given resolution scale, the BFKL and the CCFM multi-jet rates coincide to LL accuracy . ## 2 NLL BFKL and NNLO In addition to the problems mentioned above, the BFKL equation is determined at a fixed value of $`\alpha _s`$ (as a consequence, the solution (3) is scale invariant). All these problems can be partly alleviated by computing the next-to-leading logarithmic (NLL) corrections to the BFKL equation (1). In order to do that, the real and the one-loop corrections to the gluon emission in the kernel (2) had to be computed, while the reggeization term in eq. (2) needed to be determined to NLL accuracy . From the standpoint of a fixed-order calculation, the NLL corrections present features of both NLO and NNLO calculations. Namely, they contain only the one-loop running of the coupling; on the other hand, in order to extract the NLL reggeization term, an approximate evaluation of two-loop parton-parton scattering amplitudes had to be performed . In addition, the one-loop corrections to the gluon emission in the kernel (2) had to be evaluated to higher order in the dimensional-regularization parameter $`ϵ`$, in order to generate correctly all the singular and finite contributions to the interference term between the one-loop amplitude and its tree-level counterpart . This turns out to be a general feature in the construction of the infrared and collinear phase space of an exact NNLO calculation , and can be tackled in a partly model-independent way by using one-loop eikonal and splitting functions evaluated to higher order in $`ϵ`$ . Building upon the NLL corrections to the terms in the kernel (2), the BFKL equation was evaluated to NLL accuracy . Applying the NLL kernel to the LL eigenfunctions, $`(k_{\perp }^2)^\gamma `$, the solution still has the form of eq. (3), with leading eigenvalue $`\omega (\gamma )=\overline{\alpha }_s(\mu )\left[1-b_0\overline{\alpha }_s(\mu )\mathrm{ln}(k_a^2/\mu ^2)\right]\chi _0(\gamma )+\overline{\alpha }_s^2(\mu )\chi _1(\gamma ),`$ (5) where $`b_0=11/12-n_f/(6N_c)`$ is proportional to the one-loop coefficient of the $`\beta `$ function, with $`n_f`$ active flavours, and $`\mu `$ is the $`\overline{\mathrm{MS}}`$ renormalization scale. $`\chi _0(\gamma )`$ is given in eq. (4), and $`\chi _1(\gamma )`$ in ref. . In eq. (5), the running-coupling term, which breaks the scale invariance, has been singled out.
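A corresponding sketch of Eq. (5) is given below. Since $`\chi _1(\gamma )`$ is only available in the reference cited in the text, it is left here as an optional user-supplied function and dropped when absent, so the snippet mainly exhibits the scale-breaking running-coupling term; the numerical scales and $`n_f`$ are illustrative assumptions.

```python
import numpy as np
from scipy.special import digamma

Nc, nf = 3, 4
b0 = 11.0 / 12.0 - nf / (6.0 * Nc)     # as defined below Eq. (5)

def chi0(gamma):
    return 2.0 * digamma(1.0) - digamma(gamma) - digamma(1.0 - gamma)

def omega_nll(gamma, abar, ka2, mu2, chi1=None):
    """Eq. (5).  chi1(gamma) is given only in the cited reference, so it
    is an optional input here; without it only the running-coupling
    correction to the LL eigenvalue is kept."""
    nll = abar ** 2 * chi1(gamma) if chi1 is not None else 0.0
    return abar * (1.0 - b0 * abar * np.log(ka2 / mu2)) * chi0(gamma) + nll

# The scale-breaking logarithm flips sign under k_a^2 <-> mu^2:
print(omega_nll(0.5, 0.19, ka2=400.0, mu2=100.0),
      omega_nll(0.5, 0.19, ka2=100.0, mu2=400.0))
```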
Both the running-coupling and the scale-invariant terms in eq. (5) present problems that could undermine the whole resummation program (for a summary of its status see ref. ). Firstly, the NLL corrections at $`\gamma =1/2`$ are negative and large (however, eq. (5) no longer has a maximum at $`\gamma =1/2`$). Secondly, double transverse logarithms of the type $`\mathrm{ln}^2(k_a^2/k_b^2)`$, which are not included in the NLL resummation, can give a large contribution and need to be resummed . Double transverse logarithms appear because the NLL resummation is sensitive to the choice of the reggeization scale $`\tau `$; e.g. the choices $`\tau ^2=k_ak_b`$, $`k_a^2`$ or $`k_b^2`$, which are all equivalent at LL, introduce double transverse logarithms relative to one another at NLL. An alternative, but related, approach is to introduce a cut-off $`\mathrm{\Delta }`$ as the lower limit of integration over the rapidity of the gluons emitted along the ladder . This has the advantage of being similar in spirit to the dependence of a fixed-order calculation on the factorization scale: in a NLL resummation the dependence on the rapidity scale $`\mathrm{\Delta }`$ is moved on to the NNLL terms , just as in an exact NLO calculation the dependence on the factorization scale is moved on to the NNLO terms. Finally, we remark that so far the activity has mostly concentrated on the NLL corrections to the Green’s function for a gluon exchanged in the cross channel. However, in a scattering amplitude this is convoluted with process-dependent impact factors, which must be determined to the required accuracy. In a NLL production rate, the impact factors must be computed at NLO. For dijet production at large rapidity intervals, they are given in ref. .
# Jet and hadron production in photon-photon collisions<sup>1</sup> <sup>1</sup>Submitted to the proceedings of DIS99, DESY-Zeuthen, Berlin, Germany, April 1999. ## 1 Leading Order parton processes The interaction of quasi-real photons ($`Q^2\simeq 0`$) studied at LEP and the interaction of a quasi-real photon with a proton studied at HERA (photoproduction) are very similar processes. In leading order (LO) different event classes can be defined in $`\gamma \gamma `$ and $`\gamma `$p interactions. The photons can either interact as ‘bare’ photons (“direct”) or as hadronic fluctuations (“resolved”). Direct and resolved interactions can be separated by measuring the fraction $`x_\gamma `$ of the photon’s momentum participating in the hard interaction; in $`\gamma \gamma `$ interactions the fractions are labelled $`x_\gamma ^\pm `$ for the two photons. Ideally, direct $`\gamma \gamma `$ events with two bare photons are expected to have $`x_\gamma ^+=1`$ and $`x_\gamma ^{}=1`$, whereas for double-resolved events both values $`x_\gamma ^+`$ and $`x_\gamma ^{}`$ are expected to be much smaller than one. In photoproduction, the interaction of a bare photon with the proton is labelled ‘direct’ (corresponding to ‘single-resolved’ in $`\gamma \gamma `$) and the interaction of a hadronic photon is called ‘resolved’ (corresponding to ‘double-resolved’ in $`\gamma \gamma `$). ## 2 Di-jet production Studying jets gives access to the parton dynamics of $`\gamma \gamma `$ interactions. OPAL has therefore measured di-jet production in $`\gamma \gamma `$ scattering at $`\sqrt{s}_{\mathrm{ee}}=161`$–$`172`$ GeV using the cone jet-finding algorithm with $`R=1`$ . The differential cross-section $`\mathrm{d}\sigma /\mathrm{d}E_T^{\mathrm{jet}}`$ for di-jet events with pseudorapidities $`|\eta ^{\mathrm{jet}}|<2`$ is shown in Fig. 1. The measurements are compared to a parton-level NLO calculation for three different NLO parametrisations of the parton distributions of the photon: GRV-HO , AFG and GS . The calculations using the three different NLO parametrisations are in good agreement with the data points except in the first bin, where theoretical and experimental uncertainties are large. ## 3 Jet shapes The jet shape is characterised by the fraction of a jet’s transverse energy ($`E_T^{\mathrm{jet}}`$) that lies inside an inner cone of radius $`r`$ concentric with the jet-defining cone: $$\psi (r)=\frac{1}{N_{\mathrm{jet}}}\sum _{\mathrm{jets}}\frac{E_\mathrm{T}(r)}{E_\mathrm{T}(r=R)},$$ (1) where $`E_\mathrm{T}(r)`$ is the transverse energy within the inner cone of radius $`r`$ and $`N_{\mathrm{jet}}`$ is the total number of jets in the sample. The jet shapes are corrected to the hadron level using the Monte Carlo. It has been shown by OPAL that the jets become narrower with increasing $`E_T^{\mathrm{jet}}`$, that the jet shapes are nearly independent of $`\eta ^{\mathrm{jet}}`$, and that gluon jets are broader than quark jets . The measured jet shapes are compared to data from the HERA experiments in similar kinematic ranges. H1 has measured jet shapes for di-jet events produced in deep-inelastic scattering (DIS) with $`10<Q^2<120`$ GeV<sup>2</sup> and $`2\times 10^{-4}<x<8\times 10^{-3}`$ . The events are boosted into the Breit frame. ZEUS has measured jet shapes in di-jet photoproduction for quasi-real photons in the lab frame . In the regions $`\eta _{\mathrm{Breit}}<1.5`$ (H1 data) and $`-1<\eta ^{\mathrm{jet}}<0`$ (ZEUS data), most jets should be quark jets.
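The jet-shape observable of Eq. (1) is simple to compute from a list of jet constituents. The sketch below is a generic illustration, not the OPAL, H1 or ZEUS analysis code; the jet representation and the sample numbers are invented.

```python
import numpy as np

def jet_shape(jets, R=1.0, radii=np.linspace(0.1, 1.0, 10)):
    """psi(r) of Eq. (1): mean fraction of a jet's transverse energy
    inside an inner cone of radius r, averaged over the jet sample.
    Each jet is a list of (ET, dR) pairs, dR being the particle's
    distance from the jet axis in the (eta, phi) plane."""
    psi = []
    for r in radii:
        fractions = []
        for jet in jets:
            et = np.array([p[0] for p in jet])
            dr = np.array([p[1] for p in jet])
            fractions.append(et[dr <= r].sum() / et[dr <= R].sum())
        psi.append(np.mean(fractions))
    return radii, np.array(psi)

# A made-up two-jet sample: (ET in GeV, distance from the jet axis)
jets = [[(8.0, 0.05), (3.0, 0.3), (1.0, 0.8)],
        [(10.0, 0.1), (2.0, 0.5), (0.5, 0.9)]]
print(jet_shape(jets))
```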
The same is true for the OPAL data at relatively large $`E_T^{\mathrm{jet}}`$, where the direct process dominates. The comparison shown in Fig. 2, performed for similar transverse-energy ranges, shows good agreement between the OPAL $`\gamma \gamma `$ data and the H1 DIS data. The jets in photoproduction events measured by ZEUS are narrower than the $`\gamma \gamma `$ jets. This could be due to the slightly larger $`E_T^{\mathrm{jet}}`$ in the ZEUS data. A detailed comparison of jet widths measured in $`\gamma \gamma `$ interactions and other processes ($`\text{e}^+\text{e}^{},\overline{\text{p}}\text{p},\gamma `$p) has recently been published by TOPAZ . ## 4 Charged hadron production Hadron production at large transverse momenta is also sensitive to the partonic structure of the interactions, without the theoretical and experimental problems related to the various jet algorithms. Interesting comparisons of $`\gamma \gamma `$ and $`\gamma `$p data taken at LEP and HERA, respectively, should be possible in the future, since similar hadronic centre-of-mass energies $`W`$ of the order of 100 GeV are accessible to both types of experiment. The distributions of the transverse momentum $`p_\mathrm{T}`$ of hadrons produced in $`\gamma \gamma `$ interactions are expected to be harder than in $`\gamma `$p or hadron-p interactions due to the direct component. This is demonstrated in Fig. 3 by comparing $`\mathrm{d}\sigma /\mathrm{d}p_\mathrm{T}`$ for charged hadrons measured in $`\gamma \gamma `$ interactions by OPAL to the $`p_\mathrm{T}`$ distributions measured in $`\gamma `$p and hp (h$`=\pi ,`$K) interactions by WA69 . The WA69 data are normalised to the $`\gamma \gamma `$ data in the low-$`p_\mathrm{T}`$ region at $`p_\mathrm{T}\simeq 200`$ MeV/$`c`$, using the same factor for the hp and the $`\gamma `$p data. The hadronic invariant mass of the WA69 data is about $`W=16`$ GeV, which is similar in size to the average $`W`$ of the $`\gamma \gamma `$ data in the range $`10<W<30`$ GeV. Whereas only a small increase is observed in the $`\gamma `$p data compared to the hp data at large $`p_\mathrm{T}`$, there is a significant increase of the relative rate in the range $`p_\mathrm{T}>2`$ GeV/$`c`$ for $`\gamma \gamma `$ interactions due to the direct process. The $`\gamma \gamma `$ data are also compared to a ZEUS measurement of charged particle production in $`\gamma `$p events with a diffractively dissociated photon at $`W=180`$ GeV . The invariant mass relevant for this comparison should be the mass $`M_\mathrm{X}`$ of the dissociated system (the invariant mass of the ‘$`\gamma `$-Pomeron’ system). The average $`M_\mathrm{X}`$ equals 10 GeV for the data shown. The $`p_\mathrm{T}`$ distribution falls exponentially, similarly to the $`\gamma `$p and hadron-p data, and shows no flattening at high $`p_\mathrm{T}`$ due to a possible hard component of the Pomeron. NLO calculations of the cross-sections $`\mathrm{d}\sigma /\mathrm{d}p_\mathrm{T}`$ are shown in Fig. 4. The cross-sections are calculated using the QCD partonic cross-sections, the NLO GRV parametrisation of the parton distribution functions and fragmentation functions fitted to e<sup>+</sup>e<sup>-</sup> data. The renormalisation and factorisation scales are set equal to $`p_\mathrm{T}`$. The change in slope around $`p_\mathrm{T}=3`$ GeV/$`c`$ in the NLO calculation is due to the charm threshold. The agreement between the data and the NLO calculation is good.
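The qualitative statement above, that the $`\gamma \gamma `$ spectrum is harder because of the direct component, can be caricatured with a two-component spectrum. In the sketch below a power-law tail is added to an exponential soft part; every parameter value is invented purely for illustration and is not a fit to any of the data sets discussed.

```python
import numpy as np

def dsigma_dpt(pt, soft=1.0, b=3.5, hard=1.0e-3, n=4.0):
    """Purely schematic spectrum: an exponential 'soft' part plus a
    power-law 'hard' tail standing in for the direct contribution.
    All parameter values are invented for illustration, not fits."""
    return soft * np.exp(-b * pt) + hard * pt ** (-n)

pt = np.array([0.5, 1.0, 2.0, 4.0])            # GeV/c
print(dsigma_dpt(pt) / np.exp(-3.5 * pt))      # departure from a pure exponential
```

The ratio grows with $`p_\mathrm{T}`$, mimicking the flattening of the $`\gamma \gamma `$ spectrum relative to the exponentially falling $`\gamma `$p and hadron-p data.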
## 5 Prompt photons The production of prompt photons in $`\gamma \gamma `$ interactions can also be used to measure the quark and gluon content of the photon . At TRISTAN energies the single-resolved process $`\gamma \text{q}\to \gamma \text{q}`$ is expected to dominate the prompt-photon production cross-section, whereas at LEP2 energies double-resolved processes should become important. TOPAZ has measured the prompt-photon cross-section $`\sigma (\text{e}^+\text{e}^{}\to \text{e}^+\text{e}^{}\gamma X)`$ by fitting signal plus background (mainly from $`\pi ^0`$ decays) to variables describing the shower shape in the calorimeter (Fig. 5). The variables are the rms of the cluster width in the r$`\varphi `$ direction, $`\sigma _{\mathrm{R}\varphi }`$, and the ratio of the maximum energy in a cluster to the total cluster energy, $`F_{\mathrm{max}}`$. TOPAZ obtains $`\sigma (\text{e}^+\text{e}^{}\to \text{e}^+\text{e}^{}\gamma X)=(1.48\pm 0.4\pm 0.49)`$ pb for photons with energies greater than 2 GeV, using a data set with an integrated luminosity of $`L=288`$ pb<sup>-1</sup>. This result is about 1.5–2 standard deviations larger than the LO cross-sections $`\sigma (\text{e}^+\text{e}^{}\to \text{e}^+\text{e}^{}\gamma X)`$ of $`0.35`$ pb and $`0.50`$ pb obtained with PYTHIA using SaS-1D and LAC1 , respectively. ## Acknowledgements I want to thank Hisaki Hayashii for helping me with the TOPAZ data, Michael Klasen for providing the NLO calculations of the di-jet cross-sections, Tancredi Carli for discussions on the jet shapes, and the organizers for this interesting and enjoyable workshop.
# Spin gap and magnetic coherence in a clean high-𝑇_𝑐 superconductor A notable aspect of high-temperature superconductivity in the copper oxides is the unconventional nature of the underlying paired-electron state. A direct manifestation of the unconventional state is a pairing energy (that is, the energy required to remove one electron from the superconductor) that varies, between zero and a maximum value, as a function of momentum or wavevector: the pairing energy for conventional superconductors is wavevector-independent . The wavefunction describing the superconducting state includes not only the pairing of charges, but also the pairing of the spins of the paired charges. Each pair is usually in the form of a spin singlet , so there will also be a pairing energy associated with transforming the spin singlet into the higher-energy spin-triplet form without necessarily unbinding the charges. Here we use inelastic neutron scattering to determine the wavevector dependence of spin pairing in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> , the simplest high-temperature superconductor. We find that the spin pairing energy (or ‘spin gap’) is wavevector-independent, even though superconductivity significantly alters the wavevector dependence of the spin fluctuations at higher energies. The experimental technique that we use is inelastic neutron scattering, for which the cross-section is directly proportional to the magnetic excitation spectrum, which can thus be probed as a function of wavevector and energy transfer. In addition, we have selected La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> , the simplest of the high-temperature (high-$`T_c`$) superconductors. The material consists of nearly square CuO<sub>2</sub> lattices, with Cu atoms at the vertices and O atoms on the edges, alternating with LaSrO charge-reservoir layers. In the absence of Sr doping, the compound is an antiferromagnetic insulator, where the spin on each Cu<sup>2+</sup> ion is antiparallel to those on its four nearest neighbours. Because of the unit-cell doubling, magnetic Bragg reflections appear at wavevectors such as ($`\frac{1}{2}`$,$`\frac{1}{2}`$) (sometimes called ($`\pi `$,$`\pi `$) in the two-dimensional reciprocal space of the CuO<sub>2</sub> planes ). Doping yields a superconductor without long-range magnetic order but with low-energy magnetic excitations peaked at the quartet of wavevectors $`𝑸_\delta =(\frac{1}{2}(1\pm \delta ),\frac{1}{2})`$ and $`(\frac{1}{2},\frac{1}{2}(1\pm \delta ))`$, shown in Figure 1a. The recent discovery of nearly identical fluctuations in the high-$`T_c`$ YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-y</sub> bilayer materials clearly indicates their relevance to the larger issue of high-$`T_c`$ superconductivity and validates the continued study of La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> as the cuprate with the least structural and electronic complexity. The samples are single-crystal rods grown in an optical image furnace. The most reliable measure of the quality of bulk superconductors is the specific heat $`C`$. For our samples, there is a jump of $`\mathrm{\Delta }C/T_c`$ = 7 mJ/(mole K<sup>2</sup>) at $`T_c`$ = 38.5 K. As $`T\to 0`$, $`C`$ = $`\gamma _ST`$, where $`\gamma _S`$ is proportional to the electronic density of states at the Fermi level and has the value $`\gamma _S<`$ 0.8 mJ/(mole K<sup>2</sup>).
This, together with an estimate of 10 mJ/(mole K<sup>2</sup>) for the corresponding normal-state $`\gamma _N`$, indicates that the bulk superconducting volume fraction 1-$`\gamma _S`$/$`\gamma _N`$ of our samples is greater than 0.9. This, as well as the high value of $`T_c`$ and the narrowness of the transition, is evidence for the very high quality of our large single crystals. The basic experimental configurations are similar to those employed previously . Figure 1a shows the reciprocal-space regions probed. A series of scans like those indicated in the figure, performed for a range of energy transfers, was used to build up the $`𝑸`$-$`E`$ maps in Figures 1b and c, which show the scattering around the incommensurate peaks in the normal and superconducting states (here $`E`$ is the energy transfer). Figure 1b shows that the normal-state excitations at 38.5 K are localized near $`𝑸_\delta `$ but are entirely delocalized in $`E`$. In other words, the magnetic fluctuations which are favoured are those with a particular spatial period $`1/\delta `$ corresponding to $`𝑸_\delta `$, but no particular temporal period. Cooling below $`T_c`$ produces a very different image in $`𝑸`$-$`E`$ space. In Figure 1c all low-frequency excitations ($`E\lesssim 5`$ meV) seem to be eliminated and there is an enhancement of the signal above 8 meV at the incommensurate wavevectors. The signal now has obvious peaks at around $`E`$=11 meV and $`\delta =0.29\pm 0.03`$ reciprocal lattice units (r.l.u.). We can thus visualize the zero-point fluctuations in the superconductor as magnetic density waves undergoing (damped) oscillation with a frequency of 2.75 THz. In the normal paramagnetic state, the motion of the density waves becomes entirely incoherent. Figures 2a-2c show a series of constant-$`E`$ cuts through the data in Figure 1. These graphs demonstrate that superconductivity induces a complete loss of signal for $`E`$=2 meV (2a), a significant intensity-preserving sharpening of the incommensurate peaks for $`E`$=8 meV (2b), and a large enhancement of the peaks for $`E`$=11 meV (2c). The peak narrowing in (2b) and (2c) corresponds to a spectacular superconductivity-induced rise in the magnetic coherence lengths (defined as the resolution-corrected inverse half-widths at half-maxima, obtained as in ) from $`20.1\pm 0.9`$ Å to $`33.5\pm 2.0`$ Å and from $`25.5\pm 0.1`$ Å to $`34.3\pm 0.8`$ Å, respectively. Figures 3a-c display constant-$`𝑸`$ spectra both away from $`𝑸_\delta `$ (Figures 3a and b) and at $`𝑸_\delta `$ (Figure 3c). Superconductivity removes the low-$`E`$ signal below a threshold energy, while it enhances the higher-$`E`$ signal close to $`𝑸_\delta `$. The threshold for $`T<T_c`$ appears the same for the three wavevectors shown in Figures 3a-c, with the increase in intensity first visible in all cases at 6 meV.
To quantify how superconductivity changes the spectra, we fit the data with the convolution of the instrumental resolution (full-width at half-maximum = 2 meV) and $$S(𝑸,E)=\frac{1}{1-\mathrm{exp}(-E/k_BT)}\frac{AE^{\prime }\mathrm{\Gamma }}{\mathrm{\Gamma }^2+E^{\prime 2}}$$ (1) where $$E^{\prime }=\mathrm{Re}\left\{[(E-\mathrm{\Delta }+i\mathrm{\Gamma }_s)(E+\mathrm{\Delta }+i\mathrm{\Gamma }_s)]^{1/2}\right\}$$ (2) and $`A`$ is the amplitude, $`\mathrm{\Delta }`$ is the spin gap, $`\mathrm{\Gamma }`$ is the inverse lifetime of spin fluctuations with $`E\gg \mathrm{\Delta }`$ (if $`\mathrm{\Delta }\gg \mathrm{\Gamma }`$), $`E^{\prime }`$ is an odd function of $`E`$ which defines the degree to which the spectrum is gapped, and $`\mathrm{\Gamma }_s`$ is the inverse lifetime of the fluctuations at the gap edge. In the normal state, the best fits are obtained for $`\mathrm{\Delta }=0`$ meV, and the fitted value of $`\mathrm{\Gamma }`$ is essentially $`𝑸`$-independent (Figure 3d). Thus the lower-amplitude fluctuations with wavevectors different from $`𝑸_\delta `$ have lifetimes similar to those at the incommensurate peak positions. The $`𝑸`$-dependence of the signal is entirely accounted for by the $`𝑸`$-dependence of the real part $`\chi ^{}(𝑸)`$ of the magnetic susceptibility (Figure 3e), which, when $`\mathrm{\Delta }=0`$, is simply the amplitude $`A`$. In the superconducting state, $`\mathrm{\Gamma }`$ (Figure 3d), which characterizes the shape of the spectrum well above the spin gap, becomes strongly $`𝑸`$-dependent. At the same time, $`\chi ^{}(𝑸)`$ (Figure 3e), related via a Kramers-Kronig relation to the parameters in Equation (1), is suppressed. This explicitly demonstrates that superconductivity reduces the tendency towards static incommensurate magnetic order in La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> . Figure 4 shows the $`𝑸`$ dependence of the spin gap $`\mathrm{\Delta }`$. As anticipated from inspection of the data in Figure 3, $`\mathrm{\Delta }`$ is $`𝑸`$-independent and has the value 6.7 meV. The gap is quite sharp for our sample, with $`\mathrm{\Gamma }_s\lesssim 0.2`$ meV for all $`𝑸`$. Also shown in Figure 4 are the results for $`x`$=0.15 and 0.14 . $`\mathrm{\Delta }(𝑸_\delta )`$ is indistinguishable for the present $`x`$=0.163 sample and the older $`x`$=0.14 sample; the difference in the low-$`E`$ behaviour is primarily due to the much larger damping ($`\mathrm{\Gamma }_s=1.2`$ meV for $`x`$=0.14 ). In addition, the $`𝑸`$-independence of $`\mathrm{\Delta }(𝑸)`$ is consistent with the $`𝑸`$-independent but incomplete suppression of the magnetic fluctuations in the $`x`$=0.14 sample . In contrast, the results of and show a large discrepancy with $`x`$=0.163, where the spin gap quoted in these papers is defined as the threshold for visible scattering. Nevertheless, the results of are consistent with our work if we use the definition of $`\mathrm{\Delta }`$ given by Equation (2), advocated here and in . Fitting the data of to Equation (1) with $`\mathrm{\Delta }`$= 6.7 meV yields $`\mathrm{\Gamma }_s`$ = 0.5 meV, a value intermediate between our findings of 1.2 and 0.1 meV for $`x`$=0.14 and 0.163. Our experiments show that superconductivity produces strongly momentum-dependent changes in the magnetic excitations with energies above a momentum-independent spin gap. The data in their entirety do not resemble the predictions for any superconductors, be they $`s`$-wave or $`d`$-wave.
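Equations (1) and (2) are easy to code up for fitting or inspection. The sketch below implements them directly; the default Δ and Γ<sub>s</sub> follow the x = 0.163 values quoted in the text, while A and Γ are placeholders, and the overall normalization and resolution convolution are omitted.

```python
import numpy as np

kB = 0.08617  # meV per kelvin

def E_prime(E, Delta, Gamma_s):
    """Eq. (2): E' = Re{ sqrt[(E - Delta + i*Gamma_s)(E + Delta + i*Gamma_s)] }."""
    z = (E - Delta + 1j * Gamma_s) * (E + Delta + 1j * Gamma_s)
    return np.real(np.sqrt(z))

def S_model(E, T, A=1.0, Delta=6.7, Gamma=10.0, Gamma_s=0.1):
    """Eq. (1): detailed-balance (Bose) factor times the gapped
    relaxational response.  Delta and Gamma_s default to the x = 0.163
    values quoted in the text; A and Gamma are illustrative."""
    Ep = E_prime(E, Delta, Gamma_s)
    bose = 1.0 / (1.0 - np.exp(-E / (kB * T)))
    return bose * A * Ep * Gamma / (Gamma ** 2 + Ep ** 2)

E = np.array([2.0, 5.0, 8.0, 11.0])   # meV
print(S_model(E, T=5.0))              # signal suppressed below the 6.7 meV gap
```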
Most notably, all $`d`$-wave theories anticipate dispersion in the spin gap, which would have been observed over the wavevector range and with the energy resolution of the present experiment. At the same time, $`s`$-wave theory cannot account for the value of the spin gap. We are unaware of calculations which yield the dramatic incommensurate peak sharpening and enhancement seen above the spin gap while at the same time showing a large reduction in the real part of the magnetic susceptibility. There are other difficulties with the conventional weak-coupling $`d`$-wave approach, which posits nodes and therefore a smaller relative superconductivity-induced reduction in scattering between, rather than at, the incommensurate peaks. Figure 2b shows the opposite: just above the gap energy, the incommensurate peak intensities are preserved while the scattering between the peaks is suppressed. Furthermore, the peak sharpening in momentum space for $`\mathrm{}\omega >\mathrm{\Delta }`$ finds precedent only in quantum systems, such as $`S`$=1 antiferromagnetic (Haldane) spin chains and rotons in superfluid helium, which have well-defined gaps with non-zero minima. Thus, while our statistics and resolution cannot exclude a small population of spin-carrying subgap quasiparticles, the systematics of the signal found near the gap energy make such quasiparticles improbable. As for any other spectroscopic experiment, we can only place an upper bound on the signal below the dispersionless gap. Inspection of Figure 3 shows that in between the incommensurate peaks, at $`𝑸=(\frac{1}{2}(1+\frac{\delta }{2}),\frac{1}{2}(1-\frac{\delta }{2}))`$, where ordinary weak-coupling $`d`$-wave theories generally anticipate nodes in the spin gap, the intensity for 2 meV at 5 K is less than 14% of what was seen at $`T_c`$ and below 5% of that observed for the incommensurate peaks at 5 K. Given the overwhelming evidence for $`d`$-wave superconductivity in the hole-doped high-$`T_c`$ superconductors , we see our data not as evidence against $`d`$-wave superconductivity but as proof that the spin excitations in the superconducting state do not parallel the charge excitations in the fashion assumed for ordinary $`d`$\- and $`s`$-wave superconductors. Our measurements, which are sensitive exclusively to the spin sector, taken together with the evidence for $`d`$-wave superconductivity in the charge sector, suggest that the high-$`T_c`$ superconductors are actually Luther-Emery liquids, namely materials with gapped (triplet) spin excitations and gapless spin-zero charge excitations . Luther-Emery liquids arise in one-dimensional interacting Fermi systems, which formally resemble two-dimensional $`d`$-wave superconductors: the dimensionality (zero) of the nodal points where the gap vanishes in the two-dimensional copper oxide is the same as that of the Fermi surface of a one-dimensional metal. There are other arguments for the applicability of the concept of Luther-Emery liquids. The first is that theory indicates that such liquids are the ground states of ladder compounds, one-dimensional strips of finite width cut from CuO<sub>2</sub> planes . The second involves the breakdown of spin-charge separation when the spin gap collapses to zero, which can be brought about by a magnetic field whose Zeeman energy matches the spin gap energy.
The 6.7 meV spin gap which we measure is much closer to the Zeeman energy of the upper critical field measured for samples similar to ours than to an ordinary Bardeen-Cooper-Schrieffer pairing energy $`3.5k_BT_c=11.6`$ meV. We thank K. N. Clausen for his help and support during the experiments, and B. Batlogg, G. Boebinger, V. Emery, K. Kivelson, H. Mook, D. Morr, D. Pines, Z-X. Shen, C-C. Tsuei and J. Zaanen for helpful discussions. Work done at the University of Toronto was sponsored by the Natural Sciences and Engineering Research Council and the Canadian Institute for Advanced Research, while work done at Oak Ridge was supported by the US DOE. TEM acknowledges the financial support of the Alfred P. Sloan Foundation, and AS acknowledges the assistance of the TMR program. Correspondence and requests for materials should be addressed to BL (e-mail: bella@phonon.ssd.ornl.gov).
# $`V_{ub}`$ from inclusive charmless semileptonic decays of $`b`$ hadrons ## 1 $`V_{ub}`$ from inclusive charmless semileptonic decays of $`b`$ hadrons ### 1.1 A new method — a model-independent determination It was proposed recently to use the decay distribution in terms of the observable $`\xi _u=(q^0+|𝐪|)/M_B`$ in the $`B`$ rest frame to measure $`V_{ub}`$, where $`q`$ is the momentum transfer to the lepton pair. This decay spectrum is unique in that the tree-level and virtual-gluon processes $`b\to u\mathrm{}\nu `$ at the parton level generate a trivial $`\xi _u`$ spectrum — a discrete line at $`\xi _u=m_b/M_B`$, solely on kinematic grounds. Two distinct effects, gluon bremsstrahlung and hadronic bound-state effects, spread out the spectrum, but most of the decay rate remains at large $`\xi _u`$. Consequently, about $`99\%`$ of the $`b\to u`$ events pass the kinematic cut $`\xi _u>1-M_D/M_B`$, where no $`b\to c`$ transition is allowed. This discrimination between $`b\to u`$ signal and $`b\to c`$ background is even more efficient than the cut on the hadronic invariant mass. Because of the heaviness of the decaying hadron, the light-cone expansion is applicable to inclusive $`B`$ decays . The leading nonperturbative QCD effect is attributed to the distribution function, defined as the Fourier transform of the matrix element of non-local $`b`$ quark operators separated along the light cone. The spectrum $`d\mathrm{\Gamma }/d\xi _u`$ is directly proportional to the distribution function . The detailed form of the distribution function is not known. However, its normalization is exactly known, owing to $`b`$ quantum number conservation. Using the known normalization, the dependence on the distribution function can be eliminated in the weighted integral of the decay spectrum: $$\int _0^1d\xi _u\frac{1}{\xi _u^5}\frac{d\mathrm{\Gamma }}{d\xi _u}=\frac{G_F^2M_B^5}{192\pi ^3}|V_{ub}|^2.$$ (1) Thus a measurement of the above weighted integral of the $`\xi _u`$ spectrum determines $`V_{ub}`$. This method is based on the light-cone expansion, which is, in principle, model-independent, as it is in deep inelastic scattering. Note that this method does not rely on the heavy quark effective theory. Therefore, at least potentially, this theoretically sound, clean and experimentally efficient method allows for a model-independent determination of $`V_{ub}`$ with a minimum overall (experimental and theoretical) error. By this method the dominant hadronic uncertainty, associated with the distribution function, is avoided. The residual hadronic uncertainty due to higher-order, power-suppressed corrections of order $`O(\mathrm{\Lambda }_{\mathrm{QCD}}^2/M_B^2)`$ is expected to be at the level of $`1\%`$. The perturbative corrections are calculable. The study of these remaining theoretical uncertainties is in progress. The precision of this determination of $`V_{ub}`$ will mainly depend on its experimental feasibility. The method appears quite feasible with techniques similar to those used to measure the inclusive charmless semileptonic branching ratio of $`b`$ hadrons at LEP. It may be the best one available for the $`V_{ub}`$ determination. ### 1.2 From the inclusive charmless semileptonic branching ratio I have calculated the inclusive charmless semileptonic $`B`$ decay width in the approach based on the light-cone expansion and the heavy quark effective theory. This approach starts from first principles, and the nonperturbative QCD effects can be computed in a systematic way.
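To make the procedure concrete, the sketch below inverts Eq. (1) numerically for a toy $`\xi _u`$ spectrum. The spectrum shape and normalization are invented solely to exercise the formula; only $`G_F`$, $`M_B`$ and the $`1/\xi _u^5`$ weighting itself come from the text or standard values.

```python
import numpy as np

GF = 1.16637e-5     # Fermi constant, GeV^-2
MB = 5.279          # B meson mass, GeV

def extract_Vub(xi, dGamma_dxi):
    """Invert Eq. (1): |V_ub| from the 1/xi_u^5-weighted integral of the
    xi_u spectrum (spectrum in GeV)."""
    weighted = np.trapz(dGamma_dxi / xi ** 5, xi)
    norm = GF ** 2 * MB ** 5 / (192.0 * np.pi ** 3)
    return np.sqrt(weighted / norm)

# Toy spectrum: a bump near xi_u ~ m_b/M_B ~ 0.9, with an invented total
# width chosen only so that the output has a realistic size
xi = np.linspace(0.5, 1.0, 501)
spec = np.exp(-((xi - 0.9) / 0.05) ** 2)
spec *= 5.0e-16 / np.trapz(spec, xi)
print(extract_Vub(xi, spec))    # ~3e-3 for this toy input
```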
Additional properties of the distribution function were deduced from the heavy quark effective theory; they impose strong constraints on its functional form. This allows for a largely model-independent determination of $`V_{ub}`$ from the inclusive charmless semileptonic branching ratio of $`b`$ hadrons measured at LEP. A crucial observation is that both dynamic and kinematic effects of nonperturbative QCD must be taken into account. The latter results in the extension of phase space from the quark level to the hadron level, which obviously increases the decay width. It turns out that the net effect of nonperturbative QCD is to enhance the semileptonic decay width. The heavy quark expansion approach fails to take into account the kinematic effect of nonperturbative QCD and, as a result, the calculation of the decay width in this approach leads to a higher value of $`V_{ub}`$. This failure is a consequence of the theoretical limitations of the heavy quark expansion approach: the operator product expansion breaks down for low-mass final hadronic states, and the truncation of the expansion enforces the use of quark kinematics rather than physical hadron kinematics. These theoretical limitations were already indicated by the $`\tau (\mathrm{\Lambda }_b)/\tau (B_d)`$ measurements. The recent CLEO analysis of the hadronic mass and lepton energy moments in $`B\to X_c\mathrm{}\nu `$ may also hint at such limitations, if the experiment is correct. The preliminary experimental result shows an inconsistency between the values of the heavy quark expansion parameters extracted, respectively, from the measured hadronic mass moments and from the measured lepton energy moments. Moreover, the interplay between nonperturbative and perturbative QCD effects has been accounted for in our approach, since confinement implies that free quarks are not asymptotic states of the theory. ### 1.3 From the hadronic invariant mass spectrum I have analysed the hadronic invariant mass spectrum in the QCD-based approach . I found that the theoretical error on $`V_{ub}`$ depends strongly on the hadronic invariant mass cutoff: the higher it can experimentally be made, the smaller the theoretical error on $`V_{ub}`$. The hadronic invariant mass spectrum has also been analysed in the approach based on the resummation of the heavy quark expansion. Distinct approximations are made in this approach. A comparison found that this approach contains less information on nonperturbative QCD in the leading approximation than the approach based on the light-cone expansion. ### 1.4 From the lepton energy endpoint spectrum This determination of $`V_{ub}`$ has statistical power. Our analysis in the QCD-based approach showed that the theoretical uncertainty on $`V_{ub}`$ from the lepton energy endpoint spectrum is under control. A key step towards the improvement of the theoretical uncertainties on $`V_{ub}`$ from the inclusive charmless semileptonic branching ratio, the hadronic invariant mass spectrum or the lepton energy endpoint spectrum is a direct extraction of the distribution function from experiment. It was pointed out that the nonperturbative distribution function can be directly extracted by measuring either the spectra in $`\xi _f=(q^0+\sqrt{|𝐪|^2+m_f^2})/M_B`$ $`(f=u,c)`$ in inclusive semileptonic $`B`$ decays or the photon energy spectrum in inclusive radiative $`B`$ decays.
## 2 Model-independent determinations of the ratios of the CKM matrix elements It was found that the distribution function is universal, in the sense that the same distribution function encodes the leading nonperturbative QCD contributions to inclusive semileptonic $`B`$ decays as well as to inclusive radiative $`B`$ decays. It was proposed that a model-independent determination of the ratio $`|V_{ub}/V_{ts}|`$ can be obtained by measuring the ratio of the $`\xi _u`$ spectrum in $`B\to X_u\mathrm{}\nu `$ and the photon energy spectrum in $`B\to X_s\gamma `$, since the universal distribution function cancels in the ratio $`[d\mathrm{\Gamma }(B\to X_u\mathrm{}\nu )/d\xi _u]/[d\mathrm{\Gamma }(B\to X_s\gamma )/dE_\gamma ]|_{E_\gamma =M_B\xi _u/2}`$. By similar methods one can also obtain model-independent determinations of $`|V_{ub}/V_{cb}|`$ and $`|V_{cb}/V_{ts}|`$ . These methods depend on the validity of the universality of the distribution function, which can be tested experimentally. ## 3 $`V_{cb}`$ from inclusive charmed semileptonic decays of $`b`$ hadrons ### 3.1 From the inclusive semileptonic branching ratio I have calculated the semileptonic decay width of the $`B`$ meson, which can be used to gain a largely model-independent determination of $`V_{cb}`$. It was also shown that it is important to include the kinematic nonperturbative QCD effect, as in the $`b\to u`$ case discussed above. I found that the semileptonic decay width is enhanced by long-distance strong interactions, in contrast to the result of the heavy quark expansion, where a reduction of the free quark decay width is claimed. The primary reason for the difference is that the heavy quark expansion approach has to use quark-level kinematics rather than hadron-level kinematics, as mentioned above. Consequently, compared with the light-cone approach , the inclusive rate calculated in the heavy quark expansion approach leads to a larger gap between the inclusive and exclusive determinations of $`|V_{cb}|`$. Our prediction for the lepton energy spectrum was found to be in good agreement with the experimental data. This experimental test increases our confidence in the determination of $`V_{cb}`$. ### 3.2 A new method It was proposed to use the $`\xi _c`$ spectrum to obtain a model-independent determination of $`V_{cb}`$. The idea is the same as the use of the $`\xi _u`$ spectrum to determine $`V_{ub}`$ discussed in Section 1.1. However, this way of determining $`V_{cb}`$ may still suffer from large theoretical systematic errors. For $`B\to X_c\mathrm{}\nu `$, the maximum momentum transfer squared is $`q_{\mathrm{max}}^2=(M_B-M_D)^2`$. This means that $`q^2`$ is not large enough to neglect the higher-order corrections. Actually, the semileptonic $`b\to c`$ decay rate is dominated by a few exclusive decay modes ($`D`$, $`D^{*}`$ and $`D^{(*)}\pi `$), which suggests that the light-cone picture cannot be valid point by point. The theoretical prediction in the light-cone expansion refers only to the smeared spectrum. A related problem is the uncertainty in the charm quark mass. The method proposed for a model-independent determination of $`V_{ub}`$ is, on the other hand, theoretically very reliable. The light-cone expansion works much better for $`B\to X_u\mathrm{}\nu `$ because a much larger momentum transfer, up to the maximum $`q_{\mathrm{max}}^2=M_B^2`$, can occur in $`B\to X_u\mathrm{}\nu `$ than in $`B\to X_c\mathrm{}\nu `$.
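The kinematic contrast invoked here is a one-line computation with the meson masses (standard PDG-like values are assumed):

```python
MB, MD = 5.279, 1.865        # B and D meson masses, GeV
q2max_c = (MB - MD) ** 2     # b -> c: ~11.7 GeV^2
q2max_u = MB ** 2            # b -> u: ~27.9 GeV^2
print(q2max_c, q2max_u, q2max_u / q2max_c)   # b -> u reaches ~2.4x further
```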
Many final hadronic states contribute to the $`\xi _u`$ spectrum above the charm threshold, without any preferential weighting towards the low-lying resonance states. Both the theoretical and the experimental situation in this way of determining $`V_{ub}`$ are so attractive that the feasibility of the experiment is worth investigating. Note added. After this note was completed, a paper by Uraltsev appeared, in which the work of Refs. was criticized.<sup>2</sup> <sup>2</sup>The same criticism appeared in , which is addressed here. It is worth recalling that the semileptonic decay width Eq. (1) in (or Eq. (13) in ) was derived using quark kinematics from the heavy quark expansion. Contrary to claims in Ref. , there exists no rigorous proof in the literature (including the Refs. given in ) which demonstrates that Eq. (1) in (or Eq. (13) in ) recovers the full decay width from hadron kinematics. The sum rules are a key ingredient of the proof claimed in Ref. . These sum rules were obtained by assuming that the moments of the structure functions are identical to the moments of the structure functions at the quark level (see, e.g., Eq. (102) of the Ref. in ). However, this assumption is not valid and, as a matter of fact, under this assumption the sum rules were still obtained using quark kinematics. The claimed proof is therefore incorrect. Rather, it is logically apparent that the kinematic nonperturbative effect, due to the extension from quark phase space to hadron phase space, cannot be included in decay width calculations carried out in quark phase space itself, which lead to Eq. (1) in and Eq. (13) in . Quantitatively, the significant difference shown in between the semileptonic widths calculated in the light-cone approach using hadron kinematics and in the heavy quark expansion approach using quark kinematics confirms that the kinematic nonperturbative QCD contributions are missed in the heavy quark expansion approach. It has been observed in Refs. that the kinematic nonperturbative QCD effects are missed in the heavy quark expansion approach and must be incorporated additionally. The author of Ref. disputes that observation by criticizing the approach of Refs. . However, it should be stressed that the observation does not depend on any specific approach. The total decay rate receives an enhancement when the phase space is extended from the quark level, determined by the $`b`$ quark mass, to the hadron level, determined by the $`B`$ meson mass. This is physically quite obvious and general, without the intervention of any specific theoretical approach. The heavy quark expansion approach has to use the quark-level phase space, while the $`B`$-meson decay rate should be calculated using the physical hadron-level phase space. As a consequence, there is rate missing in the calculation of the inclusive charmless semileptonic decay rate in within the heavy quark expansion approach. Let me next turn to the issue of the theoretical foundation of the approach in . The light-cone expansion has long been recognized as the theoretical foundation for the description of deep inelastic scattering processes, which are dominated by light-cone singularities. The same formalism is at the basis of the approach to inclusive $`B`$ decays. Inclusive semileptonic $`B`$ decays involve large momentum transfer in most of phase space. The light-cone expansion is applicable to these inclusive decays.
On the other hand, the heavy quark expansion for inclusive semileptonic $`B`$ decays is grounded in the operator product expansion in the case of large energy release. Scrutinizing their formulations, one realizes that, as far as the theoretical foundation is concerned, the approach based on the light-cone expansion in is no less firm than the heavy quark expansion approach. Actually, these two approaches tackle nonperturbative QCD effects in different ways. The discrepancy between their results is an inevitable consequence of the difference between the underlying methods. Given the theoretical problems of the heavy quark expansion approach mentioned previously, it is not justified to regard such a discrepancy as a deficiency of the approach based on the light-cone expansion. The discrepancy could just be a reflection of the merit of the light-cone approach. As an example, Eq. (14) in Ref. results from the first three terms in the moment expansion of the distribution function in the light-cone approach: $$f(\xi )=\sum _{n=0}^{\mathrm{\infty }}\frac{(-1)^n}{n!}M_n\left(\frac{m_b}{M_B}\right)\delta ^{(n)}\left(\xi -\frac{m_b}{M_B}\right).$$ (2) In fact, the truncation of this expansion is not legitimate in the light-cone approach. Thus Eq. (14) given in is not a true result of the light-cone approach. Nevertheless, assuming that they are comparable, the emerging discrepancy between Eq. (14) and the heavy quark expansion result Eq. (1) shown in is conceivable and not surprising, and does not signal a problem with the light-cone approach itself. In the free quark limit, the $`B`$ meson and the $`b`$ quark in it move together with the same velocity. Eq. (2) of Ref. is the expression for the inclusive charmless semileptonic decay width of the $`B`$ meson at rest, from the light-cone expansion. In the free quark limit, corresponding to $`f(\xi )\to \delta (\xi -m_b/M_B)`$, it correctly reproduces the decay width of a free $`b`$ quark at rest. The meaning claimed for the heavy quark expansion correction in Eq. (1) of Ref. , corresponding to a free quark moving with a small velocity, is self-contradictory and not a result of QCD. A related point is that the results from QCD in $`1+1`$ dimensions, if true, may provide some insights when we have to deal with real systems that are not simple, but they cannot be regarded as the truth or a proof in real QCD. I would conclude that it is premature to dismiss the results of Refs. on the basis of the work in . It is an indisputable fact that the kinematic nonperturbative QCD effects identified in are missed in the heavy quark expansion approach. Acknowledgements: I am indebted to Emmanuel Paschos for collaboration. I would like to thank the organizers for inviting me to make a contribution. This work is supported by the Australian Research Council.
# Internal Vortex Structure of a Trapped Spinor Bose-Einstein Condensate ## Abstract The internal vortex structure of a trapped spin-$`1`$ Bose-Einstein condensate is investigated. It is shown that it has a variety of configurations depending on, in particular, the ratio of the relevant scattering lengths and the total magnetization. PACS number: 03.75Fi Recently the MIT group has succeeded in obtaining Bose-Einstein condensation (BEC) of <sup>23</sup>Na atoms in an optical trap. A novel aspect of this system is that <sup>23</sup>Na atoms possess a hyperfine spin, with $`f=1`$ in the lower multiplet. All three possible projections of the hyperfine spin can be optically trapped simultaneously. Thus, generally, the condensate has to be described by a spin-$`1`$ order parameter consisting of the spinor $`(\mathrm{\Psi }_u,\mathrm{\Psi }_0,\mathrm{\Psi }_d)`$, where $`\mathrm{\Psi }_{u,0,d}`$ are the macroscopic wavefunctions with hyperfine projection $`s=1,0,-1`$, respectively. The ground state of the condensate is determined by the spin dependence of the interaction between the bosons. In the dilute limit the interactions can be characterized by the (s-wave) scattering lengths $`a_0`$ and $`a_2`$ in the total (hyperfine-)spin $`0`$ and $`2`$ channels, respectively. The polar state, where the order parameter is proportional to the spinor $`(0,1,0)`$, is favored if $`a_2>a_0`$, while the axial state, where the spinor is $`(1,0,0)`$, is favored if the inequality is reversed. <sup>23</sup>Na belongs to the former case. However, <sup>87</sup>Rb is predicted to belong to the latter case. A particularly interesting feature of these systems is that, due to the weak spin dependence of the interatomic interaction, $`a_0\simeq a_2`$. For example, for <sup>23</sup>Na $`a_0\simeq 46a_B`$ and $`a_2\simeq 52a_B`$, where $`a_B`$ is the Bohr radius. Thus the difference $`a_2-a_0\simeq 6a_B`$ is only a small fraction of $`a_0`$ or $`a_2`$. As a result, the condensate needs to pay only very little extra energy to get into the “wrong” state. If there is a competing energy, such as that due to the presence of a gradient, it may be energetically favorable for the condensate to deviate locally from the polar state. An analogous remark also applies to <sup>87</sup>Rb, where $`a_0\simeq 110a_B`$ and $`a_2\simeq 107a_B`$. In this paper I shall illustrate this by considering vortices of this spin-$`1`$ Bose condensate. Vortices with a scalar order parameter in BEC have been discussed in many papers (e.g. ). In a cylindrically symmetric trap, there is no stable vortex state for angular momentum $`L<N\mathrm{}`$, where $`N`$ is the number of particles. A singly quantized vortex has $`L=N\mathrm{}`$, with the node of the order parameter located at the center of the trap. Here I shall show that vortices in a spinor condensate are even more interesting in that they exhibit a very rich internal structure. In general, locally the order parameter is in neither the polar nor the axial state. It may have broken cylindrical symmetry, with nodes of the order parameters of the individual species appearing at positions other than the trap center. The minimum angular momentum required for the formation of a vortex is also less than $`N\mathrm{}`$. Moreover, transitions between different internal vortex structures are possible. Vortices were also discussed in Refs. and , but these works did not consider structures which arise from deviations of the order parameter from the original polar or axial phases.
Since the order parameter has more than one component, it is convenient to distinguish between vortices of the individual order parameter components $`\mathrm{\Psi }_s`$ and the composite structure. I shall refer to the latter as the composite vortex (CV). The order parameter, in particular that of the CV, is found by minimization of the energy (restricting ourselves to $`T=0`$) under appropriate constraints. The energy density $`\epsilon `$ consists of the kinetic and potential contributions $`\sum _s\left[\frac{\mathrm{}^2|\mathrm{\nabla }\mathrm{\Psi }_s|^2}{2M_a}+V|\mathrm{\Psi }_s|^2\right]`$, where $`M_a`$ is the atomic mass, $`V`$ is the trap potential and the sum is over all spin components, and the interaction part, which can be written as $`\epsilon _{\mathrm{int}}=\frac{1}{2}(c_0+c_2)n^2-\frac{1}{2}c_2|2\mathrm{\Psi }_u\mathrm{\Psi }_d-\mathrm{\Psi }_0^2|^2`$, where $`n=\sum _s|\mathrm{\Psi }_s|^2`$ is the local density. Here $`c_0\equiv \frac{g_0+2g_2}{3}`$ and $`c_2\equiv \frac{g_2-g_0}{3}`$, where $`g_{0,2}`$ are in turn related to the scattering lengths in the total spin $`0`$ and $`2`$ channels via $`g_{0,2}=\frac{4\pi \mathrm{}^2a_{0,2}}{M_a}`$. The total particle number $`N`$, angular momentum $`L`$ and magnetization $`M`$ should be considered conserved if no exchange of the corresponding quantity is allowed between the atoms inside the trap and their environment (within the relevant experimental time scale). In this case we have to minimize the total energy for given $`N`$, $`L`$ and $`M`$. As usual, it is convenient to introduce and minimize the free energy $`G\equiv E-\mu N-\mathrm{\Omega }L-HM`$, where $`\mu ,\mathrm{\Omega }`$ and $`H`$ are Lagrange multipliers. $`\mu `$, $`\mathrm{\Omega }`$ and $`H`$ correspond to the chemical potential, the angular velocity and the magnetic field. $`\mathrm{\Omega }`$ is given by the angular velocity of the rotating trap if angular momentum can be exchanged between the trapped atoms and their environment. It is useful to note that the energy is invariant under relative rotations between real and spin space. Accordingly, in what follows the spin quantization axis will be chosen for the most convenient presentation (and always along the total magnetization if it is finite). In particular, the configurations presented below do not rely on any special relative orientation between the net magnetization and the rotational axis (which is always chosen as $`z`$). I shall also discuss the local magnetization density $`\stackrel{}{m}`$. The projection of $`\stackrel{}{m}`$ along a general direction is measurable in BEC experiments, since it is given by the difference in the number density between the $`u`$ and $`d`$ species when one uses that direction as the quantization axis. Setting the variation of the free energy with respect to $`\mathrm{\Psi }_s^{}`$ to zero, one obtains the familiar Gross-Pitaevskii (GP) equations (generalized due to the presence of multiple spin species ). If $`c_2>0`$, as in the case of <sup>23</sup>Na, in the absence of a net magnetization the order parameter can be chosen so that only $`\mathrm{\Psi }_0`$ is finite; it obeys the GP equation in the usual form: $`0=\left(-\frac{\mathrm{}^2}{2M_a}\mathrm{\nabla }^2+V-\mu \right)\mathrm{\Psi }_0+c_0|\mathrm{\Psi }_0|^2\mathrm{\Psi }_0`$. The order parameter profile for $`\mathrm{\Psi }_0`$ would then be completely analogous to that of a scalar order parameter with the interaction parameter given by $`c_0`$ (note that $`\mathrm{\Psi }_0`$ is then independent of $`c_2`$).
In particular, in the absence of any circulation and if one ignores the gradient term (the Thomas-Fermi (TF) approximation), $`|\mathrm{\Psi }_0|^2=\frac{\mu -V}{c_0}\theta (\mu -V)`$. The structure of a singly quantized vortex would also be exactly analogous to that of a scalar order parameter, as investigated by, e.g., Dodd et al . For the discussions below it is also convenient to re-consider the same CV with the quantization axis rotated by $`\pi /2`$ about a horizontal axis. In this basis the above CV becomes two coinciding vortices of the $`u`$ and $`d`$ components with $`|\mathrm{\Psi }_u|=|\mathrm{\Psi }_d|`$ and their nodes at the trap center. We shall see below that in general the CV is very different from the ones just discussed. Typically in the experiments the cloud is trapped by a potential harmonic in all three spatial directions. For numerical simplicity I shall instead consider a cloud subject to a harmonic potential only in the $`xy`$ plane but of uniform density within thickness $`d`$ along the rotational $`z`$ axis. It is reasonable to assume that the results below will be qualitatively applicable to a pancake-shaped cloud trapped by a three-dimensional, axially symmetric harmonic potential if the radii of the clouds perpendicular to the rotational axis are comparable. Rather than varying $`\mu `$ and $`\mathrm{\Omega }`$ to obtain a fixed total number of particles and angular momentum, I shall simply present the types of CV for fixed $`\mu `$’s and $`\mathrm{\Omega }`$’s. However, I shall continue to use the total magnetization (rather than $`H`$) as an independent variable . I shall eliminate $`\mu `$ in favor of $`R\equiv (2\mu /M_a\omega _o^2)^{1/2}`$, where $`\omega _o`$ is the (angular) trap frequency. In the absence of vortices and under the TF approximation, the radius of the cloud and the total number of particles are independent of the value of $`c_2`$ and given by $`R`$ and $`N_o=\frac{d}{16a}(\frac{R}{\lambda _o})^4`$ respectively, where $`\lambda _o`$ is the size of the harmonic oscillator ground-state wavefunction ($`\lambda _o=(\hbar /M_a\omega _o)^{1/2}`$). Here $`a\equiv \frac{(a_0+2a_2)}{3}`$ is an effective scattering length for the interaction parameter $`c_0`$. All the CVs presented below have $`N\approx N_o`$. I shall introduce the parameter $`ϵ\equiv (\lambda _o/R)^2`$, which measures the deviation from the TF ($`ϵ\to 0`$) limit. $`ϵ`$ depends only weakly on $`N`$ for given trap parameters. I shall express $`\mathrm{\Psi }_s`$ in units of $`\sqrt{\mu /c_o}`$ (correspondingly the number density $`n`$ and the magnetization density $`\vec{m}`$ in $`\mu /c_o`$), distances in units of $`R`$, and the total particle number and magnetization $`m_{\mathrm{tot}}`$ in units of $`N_o`$. With this, all physical results depend only on the dimensionless parameters $`ϵ`$, $`\stackrel{~}{\mathrm{\Omega }}\equiv \mathrm{\Omega }/\omega _o`$, $`\stackrel{~}{c}_2\equiv c_2/c_o`$ and $`m_{\mathrm{tot}}`$. Anticipating future experiments on other atoms, I will not fix $`\stackrel{~}{c}_2`$ to that of ²³Na (though confining myself to $`\stackrel{~}{c}_2>0`$). As a concrete example I shall consider mainly $`ϵ=0.1`$, $`\stackrel{~}{\mathrm{\Omega }}=0.45`$, with the corresponding phase diagram shown in Fig. 1. I shall comment on other values of the parameters as I proceed. We begin by considering $`m_{\mathrm{tot}}=0`$. I shall present the CV in two ways, each related to some of the CV structures discussed for $`m_{\mathrm{tot}}\ne 0`$ below.
The structure of the CV with the quantization axis chosen so that it resembles most closely a vortex of $`\mathrm{\Psi }_0`$ alone is as shown in Fig. 2. However, instead of an empty core, it is energetically favorable for some of the $`0`$ particles to convert to $`u`$ and $`d`$ species and appear near the center of the trap. For the present parameters, $`|\mathrm{\Psi }_u|=|\mathrm{\Psi }_d|`$ and each has two nodes with unit circulation. In Fig. 2 the order parameter along the line (chosen as the $`x`$ axis) going through these singularities is shown. Note that the CV has broken cylindrical symmetry. Another useful way of presenting the above CV is to use a quantization axis rotated by $`\pi /2`$ about a horizontal axis with respect to that above. In this basis only $`\mathrm{\Psi }_u`$ and $`\mathrm{\Psi }_d`$ are finite. Each has one node, displaced by equal but opposite distances from the trap center (Fig. 3). It follows that the local magnetization density is finite and points along the (present) $`\widehat{z}`$-axis, being negative for $`x<0`$ and positive for $`x>0`$. Notice that at the singularity of, say, the $`d`$-component, since $`|\mathrm{\Psi }_d|=0`$ and $`|\mathrm{\Psi }_u|\ne 0`$, locally the condensate is actually in the axial and not the polar state (even though $`m_{\mathrm{tot}}`$ and $`H`$ are zero). With the use of this quantization axis we can also easily understand the reason for the present CV structure. Due to the presence of the trap potential, a vortex has maximum kinetic energy if its node is located at the center of the trap. It is thus energetically favorable for the nodes of the $`u`$ and $`d`$ components to move away from the trap center. For the system to be at an energy minimum, they move opposite to each other, creating regions where $`|\mathrm{\Psi }_u|\ne |\mathrm{\Psi }_d|`$, eventually balanced by the tendency of the condensate to remain in the polar state. In this picture it is obvious that $`L/N<1`$. As $`\mathrm{\Omega }`$ increases, the $`\mathrm{\Psi }_{u,d}`$ singularities move closer to the center of the trap and $`L/N`$ increases \[e.g., for $`\stackrel{~}{c}_2=0.2`$, $`L/N=0.87(0.92)`$ at $`\stackrel{~}{\mathrm{\Omega }}=0.45(0.5)`$\]. The separation between these singularities, and the region where the local magnetization is non-zero, increase with decreasing $`\stackrel{~}{c}_2`$ \[correspondingly $`L/N`$ decreases: e.g., at $`\stackrel{~}{\mathrm{\Omega }}=0.45`$, $`L/N=0.84(0.80)`$ for $`\stackrel{~}{c}_2=0.1(0.05)`$\]. Now we are ready to consider the structure of the CV with finite total magnetization. I shall describe each region of the phase diagram Fig. 1 in turn. I: In this region the favorable configuration is similar to that of Fig. 3 except for an increase (decrease) in the amplitude of $`\mathrm{\Psi }_u`$ ($`\mathrm{\Psi }_d`$) (not shown). The local and total magnetization of this CV are always collinear. One can understand this configuration by considering the energy under the magnetic field $`H`$. The CV has an order parameter, and hence a magnetic susceptibility, which is anisotropic. For a given magnitude of the magnetization, the energy is minimal if the direction of $`\vec{m}_{\mathrm{tot}}`$ is along that of largest susceptibility. The quantization axes used in Figs. 2 and 3 above correspond to the principal directions of the susceptibility tensor. It is intuitively reasonable that the CV has the larger susceptibility along the quantization axis of Fig. 3 (cf. ).
II: For larger $`m_{\mathrm{tot}}`$ the vortex of the minority species $`d`$ disappears. The CV is replaced by a vortex of the $`u`$ species with a $`d`$ core (Fig. 4). This can be understood by recognizing that the effective chemical potential for the $`d`$ species is given by $`\mu -H`$. Increasing $`m_{\mathrm{tot}}`$ requires increasing $`H`$, hence decreasing $`\mu -H`$. Eventually the effective chemical potential is too low to overcome the kinetic energy required for forming a circulating $`d`$ component. This picture is supported by the fact that the critical $`m_{\mathrm{tot}}`$ needed for the I $``$ II transition increases with $`\mathrm{\Omega }`$. The $`\mathrm{\Omega }L`$ term in the free energy favors an order parameter with finite circulation; thus at higher angular velocity a larger $`H`$, and hence $`m_{\mathrm{tot}}`$, is required for the transition \[e.g., at $`\stackrel{~}{c}_2=0.2`$, the critical $`m_{\mathrm{tot}}\approx 0.2`$ for $`\stackrel{~}{\mathrm{\Omega }}=0.45`$ here (Fig. 1), whereas $`m_{\mathrm{tot}}\approx 0.4`$ for $`\stackrel{~}{\mathrm{\Omega }}=0.5`$\]. III: This occurs at still larger $`m_{\mathrm{tot}}`$ and only for small $`c_2`$. In this region the CV has a $`u`$ vortex with a core filled by the $`0`$ species (Fig. 5). The minimum magnitude of $`m_{\mathrm{tot}}`$ needed for this new CV increases with $`\stackrel{~}{c}_2`$. These features can be understood by considering again the effective chemical potentials for the $`0`$ and $`d`$ spins, which are $`\mu `$ and $`\mu -H`$ respectively. The spin $`0`$ species is more favored by $`H`$, but suffers a stronger repulsion (than the $`d`$ species) from the majority $`u`$ species due to the spin-dependent interaction $`c_2(>0)`$. Only at sufficiently small $`c_2`$ and large $`m_{\mathrm{tot}}`$ does this CV become favorable. In Fig. 6 we display the local magnetization density $`\vec{m}`$ of this CV at points on the $`x`$ axis, defined so that the phase difference between the $`u`$ and $`0`$ components vanishes for $`x>0`$. Near the trap center $`\vec{m}`$ points mainly along the horizontal, turning towards $`\widehat{z}`$, the direction of the net magnetization, only further away. The magnitude as well as the $`z`$-component of $`\vec{m}`$ depend only on the radial distance from the center of the trap. The azimuthal angle of $`\vec{m}`$ is the negative of that of the corresponding physical point in space. It is interesting to note that the presence of the CV may not be apparent if one examines only the particle number density $`n`$, in strong contrast to the case of a scalar condensate. IV: This is the most intriguing region. At very small $`\stackrel{~}{c}_2`$ (and not too small $`ϵ`$’s) the CV has spontaneous (spin) symmetry breaking, in the sense that it has a net magnetization even when $`H=0`$. The configuration is similar to that of Fig. 2 except that now the numbers of spin-up and spin-down particles are no longer equal (see Fig. 7). This configuration is stable (i.e. the topology of the CV remains the same except for a re-adjustment of the amplitudes of the $`\mathrm{\Psi }`$’s) so long as the total magnetization is close to the ‘spontaneous’ one. The corresponding local magnetization density is as shown in Fig. 8. The direction of $`\vec{m}`$ thus rotates from $`-\widehat{x}`$ through $`\widehat{z}`$ to $`\widehat{x}`$ as one moves along the physical $`x`$-axis.
The existence of this spontaneous $`m_{\mathrm{tot}}`$ means that more of the local order parameter is axial-like, and thus this state is possible only for sufficiently small $`\stackrel{~}{c}_2`$. For larger $`m_{\mathrm{tot}}`$, this configuration gives way to that of a $`u`$-vortex with a $`d`$ core (Fig. 4), which has the larger susceptibility discussed earlier. In conclusion, I have shown that the internal vortex structure of a spin-$`1`$ Bose condensate in a harmonic trap is much richer than that of a condensate with a scalar order parameter. I thank T.-L. Ho for his comments on the manuscript.
# Rotating QCD string and the meson spectrum ## 1 Introduction QCD is believed to be the fundamental theory of strong interactions, and meson spectroscopy is to be derived from QCD. The spectrum of mesons has been treated in a sequence of models which may be called QCD motivated, but still not directly derived from the QCD Lagrangian. The problem of the celebrated Regge behaviour of the hadron spectra has been discussed in the literature more than once (see e.g. and references therein) but still attracts considerable attention. The light–light meson spectrum obtained so far, which describes experiment reasonably well, can be written in the form $$M_{ll}^2(n,J)=(c_n^{(ll)}n+c_J^{(ll)}J+\mathrm{\Delta }M_p^2+\mathrm{\Delta }M_s^2),$$ (1) where $`n`$ is the radial quantum number and $`J`$ the total angular momentum, $`\mathrm{\Delta }M_p^2`$ contains the perimeter (self-energy) mass correction as well as corrections to the first two terms, while $`\mathrm{\Delta }M_s^2`$ takes into account spin splittings. For heavy–light mesons a similar relation holds with the subscript $`ll`$ changed to $`hl`$ in all coefficients: $$M_{hl}^2(n,J)=(c_n^{(hl)}n+c_J^{(hl)}J+\mathrm{\Delta }M_p^2+\mathrm{\Delta }M_s^2).$$ (2) From general physical considerations one expects the spectra (1) and (2) to follow from a string-like picture of confinement, which predicts the (inverse) Regge slopes $$c_J^{(ll)}=2\pi \sigma ,\quad c_J^{(hl)}=\pi \sigma .$$ (3) Additional daughter Regge trajectories are given by vibrational excitations missing in (1) and (2), which are due to hybrid excitations, i.e. constituent gluons attached to the fundamental string . In what follows we are interested only in radial and orbital excitations of the string. The string slope (3) is an important criterion and check for any QCD-inspired model, since it requires a correct account of the rotation of the string, which is not present in the potential models considered so far. For example, the relativistic spinless Salpeter equation with confinement reduced to a linearly rising potential between quarks yields $$c_J^{(ll)}(Salpeter)=8\sigma ,\quad c_J^{(hl)}(Salpeter)=4\sigma ,$$ (4) that is, about 25% larger than (3), whereas the one-body Dirac equation with a linearly rising potential leads to $$c_J^{(hl)}(Dirac)=4\sigma ,$$ (5) if the potential is added to the energy term (vector confinement; here we leave aside the well-known problem of the Klein paradox revealing itself in the case of vector confinement), and $$c_J^{(hl)}(Dirac)=2\sigma ,$$ (6) for the potential added to the mass term (scalar confinement). Both results lead to considerable discrepancies with (3) and, as will be shown later, this happens because the rotation of the string, and hence the momentum dependence of the effective potential, is not taken into account. It was found a few years ago that starting from the area law for Wilson loops one arrives at a relativistic Hamiltonian for the spinless quark and antiquark which possesses two different regimes: a potential regime for small angular momenta $`L`$ and any $`n`$, and a string-like one for large $`L`$ and fixed $`n`$. In the latter case the dominant term in the Hamiltonian indeed describes the rotating QCD string, so that the string Regge slope (3) is readily reproduced. Similar results were obtained independently by numerical analysis of the spinless quark–antiquark system .
In the present paper we concentrate on the quasiclassical approach to mesons, as the WKB method allows one to obtain analytic formulae for the meson spectra of surprisingly high accuracy, thus giving evidence for the quasiclassical dynamics of confined quarks in the meson. Therefore our first task will be to check the accuracy of the WKB approximation for those cases where exact solutions are feasible: the spinless Salpeter equation for light–light mesons (the potential regime of the general QCD string formalism ) and the Dirac equation with a linear confining potential for the case of a heavy–light system. We argue that the accuracy of the WKB results is very good even for the lowest states. However, the slopes in both cases are incorrect, as in (4) and (6) respectively. At this point we come to the main purpose of this study: to include the proper string dynamics, thereby abandoning the notion of a local potential and introducing a new entity, the QCD string, an effect which cannot be recast in terms of a local potential. We use the Hamiltonian derived in and calculate the quasiclassical spectrum of light mesons. The results reproduce the celebrated straight-line Regge trajectories even for low-lying states, with a slope very close to the expected string slope (3). In conclusion we demonstrate how other effects (spin and colour Coulomb interaction) can be included in the same Hamiltonian to make a direct comparison with experiment. ## 2 Meson spectrum and quasiclassical approximation We start with the spinless Salpeter equation which describes a relativistic quark and antiquark of equal masses $`m`$ with angular momentum $`l=0`$ and spin effects neglected (see for the derivation of this equation from the general meson Green function in QCD): $$(2\sqrt{p_r^2+m^2}+\sigma r)\psi _n=M_n^{(ll)}\psi _n$$ (7) The Bohr–Sommerfeld condition reads $$\int _0^{r_+}p_r(r)dr=\pi \left(n+\frac{3}{4}\right),\quad n=0,1,2,\mathrm{\ldots },\quad r_+=\frac{M_n^{(ll)}-2m}{\sigma }$$ (8) which yields $$M_n^{(ll)}\sqrt{\left(M_n^{(ll)}\right)^2-4m^2}-4m^2\mathrm{ln}\frac{\sqrt{(M_n^{(ll)})^2-4m^2}+M_n^{(ll)}}{2m}=4\sigma \pi \left(n+\frac{3}{4}\right)$$ (9) A similar consideration for the heavy–light system of masses $`m`$ and $`M`$ ($`M\to \mathrm{\infty }`$) gives $$(\sqrt{p_r^2+m^2}+\sigma r)\psi _n=M_n^{(hl)}\psi _n$$ (10) $$M_n^{(hl)}\sqrt{\left(M_n^{(hl)}\right)^2-m^2}-m^2\mathrm{ln}\frac{\sqrt{\left(M_n^{(hl)}\right)^2-m^2}+M_n^{(hl)}}{m}=2\sigma \pi \left(n+\frac{3}{4}\right).$$ (11) The accuracy of the WKB approximations (9), (11) can be tested against exact solutions of the Salpeter equations (recently the accuracy of the WKB approximation was checked for light–light mesons in ). In Table 1 this comparison is given for the light–light system with $`m=0`$ and the heavy–light one with $`m_q=0.01`$ GeV and $`M_{\overline{q}}=10`$ GeV. The mass $`M_n^{(hl)}`$ in the latter case actually refers to the difference between the total mass of the heavy–light system and the mass of the heavy antiquark. Summarizing, one can say that the spectra (9), (11) (as functions of $`n`$ for $`l=0`$) indeed have the form (1), (2) with corrections at large $`n`$ of the form $$\mathrm{\Delta }M^2=O\left(\frac{m^2}{M_n^2}\mathrm{ln}\frac{M_n}{m}\right)=O\left(\frac{\mathrm{ln}n}{n}\right).$$ (12) The WKB spectrum is linear in $`n`$ and its accuracy is about 3-4% even for the lowest state.
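The transcendental WKB condition (9) is easily solved numerically by root finding; a minimal sketch for the light–light case (the string tension value here is an assumption for illustration, as the text does not fix it):

```python
import numpy as np
from scipy.optimize import brentq

sigma = 0.18  # GeV^2, a typical string tension (assumed)

def wkb_ll(M, n, m):
    """Left-hand side minus right-hand side of Eq. (9)."""
    p = np.sqrt(M**2 - 4*m**2)
    return M*p - 4*m**2*np.log((p + M) / (2*m)) - 4*sigma*np.pi*(n + 0.75)

for n in range(4):
    m = 0.01                                   # light quark mass, GeV
    M = brentq(wkb_ll, 2*m + 1e-6, 10.0, args=(n, m))
    M0 = np.sqrt(4*np.pi*sigma*(n + 0.75))     # massless limit of Eq. (9)
    print(f"n={n}: M = {M:.4f} GeV (m=10 MeV),  {M0:.4f} GeV (m=0)")
```

In the massless limit the spectrum is exactly linear in $`n`$, $`M_n^2=4\pi \sigma (n+3/4)`$, which makes the linearity of the WKB spectrum explicit.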
We now turn to the case of the Dirac equation with a linear confining potential studied in . The WKB method for the Dirac equation was thoroughly investigated in and recently applied to the case of a confining potential . Let us briefly recall the results here. The Dirac equation with scalar ($`U`$) and vector ($`V`$) local potentials has the form $$(\vec{\alpha }\vec{p}+\beta (m+U)+V)\psi _n=\epsilon _n\psi _n,$$ (13) and the WKB quantization condition is $$\int _{r_{-}}^{r_+}\left(p+\frac{\kappa w}{pr}\right)dr=\pi \left(n+\frac{1}{2}\right),\quad n=0,1,2,\mathrm{\ldots },$$ (14) where $$p=\sqrt{(\epsilon -V)^2-\frac{\kappa ^2}{r^2}-(m+U)^2},$$ (15) $$w=\frac{1}{2r}-\frac{1}{2}\frac{U^{\prime }-V^{\prime }}{m+U+\epsilon -V},$$ $$|\kappa |=j+\frac{1}{2}$$ An approximate quasiclassical solution of (13) obtained in for the case $`m=0`$, $`V=0`$, $`U=\sigma r`$ is $$\epsilon _n^2=2\sigma \left(2n+j+\frac{3}{2}+\frac{\mathrm{sgn}\kappa }{2}+\frac{\kappa \sigma }{\pi \epsilon _n^2}\left(0.38+\mathrm{ln}\frac{\epsilon _n^2}{\sigma |\kappa |}\right)+O\left(\left(\frac{\kappa \sigma }{\epsilon _n^2}\right)^2\right)\right).$$ (16) The last two terms on the r.h.s. of equation (16) are sub-leading for large $`n`$ and are generated by the term $`\frac{\kappa w}{pr}`$ (see (14)). One can see that the (inverse) Regge slope in $`j`$ in (16) is equal to $`2\sigma `$, coinciding with the exact result (6), but it is not of the string type. As expected, a $`j`$-independent scalar potential does not describe the physical phenomenon of the rotating string. Still, the accuracy of the WKB approximation is impressive. In Table 2 one can see the comparison of exact eigenvalues computed in with quasiclassical ones and with those obtained from (16). The discrepancy is less than 1% even for the lowest state, and it is much better for higher states.
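Since the correction terms in (16) involve $`\epsilon _n^2`$ itself, the formula is naturally evaluated by fixed-point iteration; a minimal sketch in units $`\sigma =1`$ (the choice $`\kappa =-1`$, i.e. the $`j=1/2`$, $`\kappa <0`$ branch, is an assumption made purely for illustration):

```python
import numpy as np

def eps2(n, j, kappa, iters=50):
    """Iterate Eq. (16) for epsilon_n^2, with sigma = 1."""
    e2 = 2.0 * (2*n + j + 1.5)          # leading-order seed
    for _ in range(iters):
        corr = (np.sign(kappa) / 2.0
                + kappa / (np.pi * e2) * (0.38 + np.log(e2 / abs(kappa))))
        e2 = 2.0 * (2*n + j + 1.5 + corr)
    return e2

for n in range(3):
    print(f"n={n}, j=1/2: eps_n^2 = {eps2(n, 0.5, -1.0):.4f}")
```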
## 3 Rotating string in the spinless quark Hamiltonian Let us turn back to the spinless Salpeter equation for the light–light meson and take non-zero angular momentum into account. As shown above, the Salpeter equation with a local $`l`$-independent potential leads to the incorrect Regge slope (4), and therefore this case requires special treatment. One needs a Hamiltonian taking into account the dynamical degrees of freedom of the string, e.g. in the form of time derivatives of the string coordinates. This was done explicitly in , where it was shown that starting from the QCD Lagrangian and writing the gauge-invariant $`q\overline{q}`$ Green function for confined spinless quarks in the Feynman-Schwinger representation, one can arrive at the Lagrange function of the system in the well-known form $$L(\tau )=-m_1\sqrt{\dot{x}_1^2}-m_2\sqrt{\dot{x}_2^2}-\sigma \int _0^1d\beta \sqrt{(\dot{w}w^{\prime })^2-\dot{w}^2w^{\prime 2}},$$ (17) where $`\tau `$ denotes the proper time of the system, the first two terms stand for the quarks, whereas the last one describes the minimal string with tension $`\sigma `$ developed between the constituents, $`w_\mu (\tau ,\beta )`$ being the string coordinate. Adopting the straight-line ansatz for the minimal string, i.e. $`w_\mu (\tau ,\beta )=\beta x_{1\mu }+(1-\beta )x_{2\mu }`$, synchronizing the quarks’ proper times, $`x_{10}=x_{20}=\tau =t_{\mathrm{lab}}`$, and introducing auxiliary fields to get rid of the square roots (see e.g. ), one can obtain the following Hamiltonian in the centre-of-mass frame (we consider the case of equal masses $`m`$): $$H=\frac{p_r^2+m^2}{\mu (\tau )}+\mu (\tau )+\frac{\widehat{L}^2/r^2}{\mu +2\int _0^1(\beta -\frac{1}{2})^2\nu (\beta )d\beta }+\frac{\sigma ^2r^2}{2}\int _0^1\frac{d\beta }{\nu (\beta )}+\int _0^1\frac{\nu (\beta )}{2}d\beta ,$$ (18) where the two auxiliary positive functions $`\mu (\tau )`$ and $`\nu (\beta ,\tau )\equiv \nu (\beta )`$ are to be varied and to be found from the minimum of $`H`$, yielding the quark energy and the string energy density respectively. A more detailed analysis of the role played by the auxiliary fields can be found in e.g. . Note that Hamiltonian (18) has the form of a sum of “kinetic” and “potential” terms only due to the auxiliary fields $`\mu `$ and $`\nu `$. If one gets rid of them by substituting their extremal values, the resulting Hamiltonian possesses a very complicated form which makes its analysis and quantization hardly possible. The centrifugal potential in Hamiltonian (18) is of special interest to us and, most of all, the second term in its denominator. It is this term that describes the extra inertia due to the string connecting the quarks. Neglecting this term and taking extrema in the auxiliary fields, one easily arrives at the ordinary Salpeter Hamiltonian with a linearly rising potential, whereas account of this extra term describes the proper string rotation and brings the slope of the Regge trajectory into the correct form (3). In the nonrelativistic expansion of Hamiltonian (18) this term yields the so-called string correction to the leading confining potential $`\sigma r`$, $$\mathrm{\Delta }H_l=-\frac{\sigma \widehat{L}^2}{6m^2r},$$ the part of the interaction which explicitly depends on the angular momentum. Hamiltonian (18) assumes an especially simple form in the case of zero angular momentum and, after excluding the auxiliary fields, produces the Salpeter equation (7). Variation of (18) over $`\nu (\beta )`$ gives the stationary energy distribution along the string, with $`\beta `$ ($`0\le \beta \le 1`$) being the coordinate along the string. Thus one obtains $$\nu _0(\beta )=\frac{\sigma r}{\sqrt{1-4y^2(\beta -\frac{1}{2})^2}},$$ (19) where $`y`$ is to be found from the transcendental equation $$\frac{\widehat{L}}{\sigma r^2}=\frac{1}{4y^2}(\mathrm{arcsin}y-y\sqrt{1-y^2})+\frac{\mu y}{\sigma r},$$ (20) and $`\widehat{L}^2=l(l+1)`$. Note that the maximal possible value of $`y`$, $`y=1`$, yields the energy distribution $`\nu _0^{free}(\beta )`$ corresponding to the free open string (a string without quarks at the ends) . In the general case, inserting the extremal function $`\nu _0(\beta )`$ one obtains from (18) $$H=\frac{p_r^2+m^2}{\mu (\tau )}+\mu (\tau )+\frac{\sigma r}{y}\mathrm{arcsin}y+\mu (\tau )y^2$$ (21) with $`y=y(\widehat{L},r,\mu )`$ defined by equation (20). Unfortunately, no rigorous analytic calculations are possible anymore, so one has to rely upon numerical calculations. But let us first perform some analysis of Hamiltonian (21). Neglecting $`\mu `$ in (20) and $`\mu y^2`$ in (21) (which is justified for large $`\widehat{L}`$ and $`\sigma r`$, so that $`\frac{\mu }{\sigma r}\ll 1`$) and varying over $`\mu `$ in (21), one obtains $$H_{as}=2\sqrt{p_r^2+m^2}+\frac{\sigma r}{y}\mathrm{arcsin}y,$$ (22) so that the second term on the r.h.s. can be viewed as an effective potential, and we would like to emphasize that this potential is non-trivially $`l`$-dependent.
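In practice Eq. (20) is solved numerically for $`y`$ at each $`r`$ and $`\mu `$; a minimal root-finding sketch in units $`\sigma =1`$ (the values of $`l`$, $`r`$ and $`\mu `$ are arbitrary illustrations):

```python
import numpy as np
from scipy.optimize import brentq

def y_of(Lhat, r, mu):
    """Solve Eq. (20) for y in (0, 1), with sigma = 1."""
    def f(y):
        return ((np.arcsin(y) - y*np.sqrt(1.0 - y**2)) / (4.0*y**2)
                + mu*y/r - Lhat/r**2)
    return brentq(f, 1e-8, 1.0 - 1e-12)

l = 10
Lhat = np.sqrt(l * (l + 1))
print(f"y = {y_of(Lhat, r=5.0, mu=1.0):.4f}")
```

For a given $`\widehat{L}`$ a root with $`y<1`$ exists once $`\sigma r^2`$ is large enough; $`y\to 1`$ corresponds to the free open string limit mentioned above.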
In the general case one has a $`\mu `$-dependent Hamiltonian (18) with the “potential” $`U(\mu ,r)`$, $$U(\mu ,r)=\frac{\sigma r}{y}\mathrm{arcsin}y+\mu y^2.$$ (23) A simplifying approximation can be used at this step, namely, the standard WKB procedure can be applied to the Hamiltonian $$H=\frac{p_r^2+m^2}{\mu _0}+\mu _0+U(\mu _0,r),$$ (24) which slightly differs from the exact Hamiltonian (21), as it treats $`\mu _0`$ as a variational parameter not depending on $`\tau `$. We find the eigenvalues $`M(\mu _0,\widehat{L},n)`$ and minimize them with respect to $`\mu _0`$ to obtain the spectrum $`M(\mu _0^{*}(\widehat{L},n),\widehat{L},n)`$, with $`\mu _0^{*}(\widehat{L},n)`$ being the extremal value of $`\mu _0`$. To check the accuracy of such a procedure for the eigenvalues, two Hamiltonians were considered: $$H_1=2\sqrt{p_r^2+m^2}+\sigma r,$$ (25) $$H_2=\frac{p_r^2+m^2}{\mu _0}+\mu _0+\sigma r,\quad \mu _0\mathrm{varied},$$ (26) where $`H_1`$ is obtained from $`H_2`$ in the limit $`\mu _0\to \mu (\tau )`$. The results are listed in Table 3. One can see that the accuracy of the variational procedure (26) is better than 5%, and it remains reasonable even for $`m`$ tending to zero. As a next step we use the standard WKB method to find the spectrum of Hamiltonian (24). To this end we write the Bohr–Sommerfeld condition as $$\int _{r_{-}}^{r_+}p_r(r)dr=\pi \left(n+\frac{1}{2}\right),$$ (27) with $$p_r(r)=\sqrt{\mu _0(M-\mu _0-U(\mu _0,r))-m^2}.$$ (28) The eigenvalues $`M(\mu _0,\widehat{L},n)`$ were found numerically from (27), and the minimization procedure was then applied with respect to $`\mu _0`$. Results for $`M_{nl}`$ are given in Table 4 and depicted in Fig. 1, demonstrating very nearly straight lines with approximately the string slope $`(2\pi \sigma )^{-1}`$ in $`l`$ and a twice smaller slope in $`n`$. Let us add a brief comment concerning the effective potential $`U(\mu _0,r)`$. Its behaviour at large and small distances can be extracted analytically from Hamiltonian (18) and coincides with that of the Salpeter case: the centrifugal barrier at small $`r`$ and linear growth at large $`r`$. Meanwhile, in the region of intermediate values of $`r`$ this potential differs from what one would have in the Salpeter equation, and it is just this region which is important for obtaining the correct Regge slope. The form of the effective potential is depicted in Fig. 2 for several different angular momenta $`l`$. In the case of $`l=0`$ the effective potential equals $`\sigma r`$ for all values of $`r`$.
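A sketch of the numerical procedure around Eqs. (24), (27) and (28) for the simplest case $`l=0`$, where $`U(\mu _0,r)=\sigma r`$ and the scheme reduces to Hamiltonian (26): the Bohr–Sommerfeld phase fixes $`M(\mu _0,n)`$, which is then minimized over the variational parameter $`\mu _0`$ (units $`\sigma =1`$, $`m=0`$; for $`l>0`$ one would additionally solve (20) for $`y`$ at each $`r`$):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq, minimize_scalar

def phase(M, mu0):
    """Bohr-Sommerfeld integral (27) for U = sigma*r, sigma = 1, m = 0."""
    rp = M - mu0                       # turning point where p_r vanishes
    if rp <= 0.0:
        return -np.inf
    pr = lambda r: np.sqrt(mu0 * (M - mu0 - r))   # Eq. (28)
    return quad(pr, 0.0, rp)[0]

def M_of(mu0, n):
    return brentq(lambda M: phase(M, mu0) - np.pi * (n + 0.5),
                  mu0 + 1e-9, mu0 + 50.0)

for n in range(3):
    res = minimize_scalar(lambda mu0: M_of(mu0, n),
                          bounds=(0.05, 5.0), method="bounded")
    print(f"n={n}: mu0* = {res.x:.3f}, M = {res.fun:.4f} (units of sqrt(sigma))")
```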
## 4 Conclusion We have shown that the proper account of the string dynamics leads to practically linear Regge trajectories, shown in Fig. 1, with a slope numerically close to the conventional $`(2\pi \sigma )^{-1}`$. The exact form of the effective potential, incorporating the string rotation as well as the quark radial motion, was found numerically and is shown in Fig. 2. To make contact with experimental data on meson masses one should specify the corrections $`\mathrm{\Delta }M_p^2`$, $`\mathrm{\Delta }M_s^2`$ in (1), or, in the case of the Hamiltonian formalism, one should add to Hamiltonian (21) the colour Coulomb term $`V_C`$ and the spin-dependent interaction. Treating the latter as a perturbation one finds, e.g. for the $`\rho `$ meson, a negative shift of the mass due to $`V_C`$ of about $`160`$ MeV and a positive correction of about $`40`$ MeV due to the hyperfine term $`\mathrm{\Delta }H_{ss}`$. Taking this into account one obtains for the $`\rho `$ meson ($`l=0`$) a mass of about $`1.6`$ GeV. It is clear from Fig. 1 that these corrections practically do not violate the linearity of the Regge trajectories, as the $`\rho `$ meson lies on the continuation of the leading theoretical trajectory in $`l`$ (see the dashed line attached to the trajectory with $`n=0`$). In this way, starting from QCD and making the single assumption of the area law for the Wilson loop, we obtain linear Regge trajectories for light-quark mesons with the string slope. In this discussion quark spin effects have been taken into account perturbatively, which is a reasonable approximation for the $`\rho `$ trajectory but unacceptable for pions and kaons, since for the latter one needs the full implementation of chiral dynamics. Progress in this direction was achieved in recent papers of one of the authors (Yu.S.) , where an effective Dirac equation for a quark moving in the field of an infinitely heavy antiquark source was derived, and it was shown that its solutions display the properties of confinement and chiral symmetry breaking. The nonrelativistic limit of the resulting interaction leads to the conventional result for the confining term and to a spin-orbit interaction in agreement with the standard Eichten-Feinberg-Gromes results . Therefore pionic trajectories should be considered in this new formalism. There is yet another question unanswered by our paper (and, to our knowledge, by all other existing papers): the intercept $`L_0\equiv L(M^2=0)`$ of the Regge trajectories. The theoretical intercept for the leading trajectory in $`j`$ (see Fig. 1 and the caption to it) is around -0.5, whereas it is +0.5 for the experimental $`\rho `$ trajectory, also shown in Fig. 1. The customary way in potential models is to add to the Hamiltonian a large negative constant $`|C_0|\sim 1`$ GeV to reproduce the intercept, but this would obviously violate the linearity of the Regge trajectories. Therefore one expects that QCD provides a negative constant $`\mathrm{\Delta }M_p^2`$ in (1) but not in Hamiltonian (18). The authors are grateful to A.M.Badalian, A.B.Kaidalov, Yu.S.Kalashnikova and V.S.Popov for useful discussions. Financial support of RFFI through the grants 97-02-16404, 97-02-17491 and 96-15-96740 is gratefully acknowledged.
# Detection of the BCS transition of a trapped Fermi Gas ## Abstract We investigate theoretically the properties of a trapped gas of fermionic atoms in both the normal and the superfluid phases. Our analysis, which accounts for the shell structure of the normal phase spectrum, identifies two observables which are sensitive to the presence of the superfluid: the response of the gas to a modulation of the trapping frequency, and the heat capacity. Our results are discussed in the context of experiments on trapped Fermi gases. The observation of Bose-Einstein condensation in several atomic systems has recently sparked increasing interest in trapped fermionic atoms. These systems offer the prospect of a Bardeen-Cooper-Schrieffer (BCS) transition to a superfluid phase at low temperatures $`T<T_c`$. If the atoms are trapped in two hyperfine states, the phase transition temperature $`T_c`$ should be experimentally accessible, and several experimental groups are presently working to achieve this transition . However, as only a few percent of the atoms are likely to participate in Cooper pairing , it is not obvious how the transition could be observed in these dilute systems. Recently, it has been proposed that the propagation and scattering of light should be significantly altered by the presence of Cooper pairs . Since the quasiparticles (QP) with energies near the Fermi chemical potential, $`\mu _\mathrm{F}`$, are those most affected by the Cooper pairing, candidate observables for the detection of the BCS transition should be sought from phenomena sensitive to this low-energy region of the QP spectrum. In this paper, we consider two such observables: the response of the gas to a “shaking” of the trap, as first suggested by Baranov , and the heat capacity. For low $`T`$, both of these observables are dominated by contributions from the low-energy spectrum. By presenting a complete calculation of the properties of the trapped gas in both the normal and superfluid phases, which accounts exactly for the quantization of the single-particle energy levels, we are able to predict whether these two observables are suitable to detect the presence of Cooper pairing. Our analysis should have direct relevance to the ongoing experiments on trapped Fermi gases. We consider a gas of fermionic atoms of mass $`m`$, confined by a potential $`U_0(\mathbf{r})`$, with an equal number of atoms $`N_\sigma `$ in each of two hyperfine states, $`|\sigma =\pm \rangle `$. Two fermions in the same internal state $`\sigma `$ must have odd relative orbital angular momentum (minimally $`p`$-wave), and at low temperatures the centrifugal barrier suppresses their mutual interaction . Thus, we suppose the interaction to be effective only between atoms in different hyperfine states and to be dominated by the $`s`$-wave contribution. As the interplay between the discrete nature of the normal phase spectrum and the Cooper pairing is crucial for the quantities considered in this paper, we need a theory which can describe the interaction and Cooper pairing of atoms residing in different discrete trap levels. This precludes the use of a simple Thomas-Fermi treatment . A theory appropriate for the present paper has recently been presented.
It uses a zero-range pseudopotential to model the interaction between atoms in two different hyperfine states; this is appropriate when the scattering length for binary atomic collisions, $`a`$, has a larger magnitude than the effective range of the interaction, $`r_e`$, and when $`k_\mathrm{F}|a|\ll 1`$, where $`k_\mathrm{F}=\sqrt{2m\mu _\mathrm{F}}/\hbar `$ is the Fermi wavevector. The generalized mean field theory derived from this approach yields the eigenvalue problem : $$E_\eta u_\eta (\mathbf{r})=[\mathcal{H}_0+W(\mathbf{r})]u_\eta (\mathbf{r})+\mathrm{\Delta }(\mathbf{r})v_\eta (\mathbf{r})$$ (1) $$E_\eta v_\eta (\mathbf{r})=-[\mathcal{H}_0+W(\mathbf{r})]v_\eta (\mathbf{r})+\mathrm{\Delta }(\mathbf{r})u_\eta (\mathbf{r}).$$ (2) Here $`\mathcal{H}_0=-\frac{\hbar ^2}{2m}\nabla ^2+U_0(\mathbf{r})-\mu _\mathrm{F}`$ is the single-particle Hamiltonian; $`W(\mathbf{r})\equiv g\langle \widehat{\psi }_\sigma ^{\dagger }(\mathbf{r})\widehat{\psi }_\sigma (\mathbf{r})\rangle `$ is the Hartree potential, where $`\widehat{\psi }_\sigma (\mathbf{r})`$ is the atom field operator for component $`\sigma `$ at position $`\mathbf{r}`$, which obeys the usual fermion anticommutation relations. The coupling constant is $`g=4\pi a\hbar ^2/m`$ and the pairing field, $`\mathrm{\Delta }(\mathbf{R})`$, is defined by $$\mathrm{\Delta }(\mathbf{R})\equiv g\underset{r\to 0}{\mathrm{lim}}\partial _r[r\langle \widehat{\psi }_+(\mathbf{R}+\frac{\mathbf{r}}{2})\widehat{\psi }_{-}(\mathbf{R}-\frac{\mathbf{r}}{2})\rangle ].$$ (3) Our definition of the pairing field differs from that often employed in weak-coupling BCS theory . The main advantage of the definition given above is that it eliminates the ultraviolet divergence present in the usual weak-coupling theory. The elementary quasiparticles (QP) with excitation energies $`E_\eta `$ are described by the Bogoliubov wave functions $`u_\eta (\mathbf{r})`$ and $`v_\eta (\mathbf{r})`$. We solve the Bogoliubov-de Gennes (BdG) equations (2) for the case of an isotropic harmonic potential, $`U_0(r)=m\omega ^2r^2/2`$, using a self-consistent numerical procedure outlined elsewhere . In the absence of the pairing field, the QPs exhibit a discrete spectrum of energies $`E_\eta `$, with the index $`\eta `$ designating a triple of quantum numbers $`(n,l,m)`$, where $`l,m`$ are the usual angular momentum quantum numbers and $`n`$ is an index of radial excitation. In the presence of the pairing field, the self-consistent solution to the BdG equations with the lowest free energy is spherically symmetric. Thus, the Bogoliubov wavefunctions are given by $`u_\eta (\mathbf{r})=r^{-1}u_{nl}(r)Y_{lm}(\theta ,\varphi )`$ and $`v_\eta (\mathbf{r})=r^{-1}v_{nl}(r)Y_{lm}(\theta ,\varphi )`$, with $`n,l,m`$ being implicitly indexed by $`\eta `$. The pairing field, which is a scalar operator under rotations, couples a normal phase QP with $`(n,l,m)`$ to one with $`(n^{\prime },l,m)`$. Due to the spherical symmetry, in taking the sums over states needed to obtain the results of this paper, we can replace sums over $`m`$ by factors of $`(2l+1)`$. We now calculate the response of the gas to a harmonic time-dependent perturbation of the trapping potential, $`\mathrm{\Delta }(t)`$, of the form $$\mathrm{\Delta }(t)=\lambda \mathrm{sin}(\stackrel{~}{\omega }t)\underset{\sigma }{\sum }\int d^3r\frac{1}{2}m\omega ^2r^2\psi _\sigma ^{\dagger }(\mathbf{r})\psi _\sigma (\mathbf{r}),$$ (4) where $`\lambda `$ is a small parameter.
We expand the field operators in terms of the Bogoliubov wave functions and the QP operators in the usual way , and by applying Fermi’s golden rule we obtain the linear response $`R(\stackrel{~}{\omega })`$ of the gas to the perturbation, Eq. (4): $$R(\stackrel{~}{\omega })\propto 2\underset{n>n^{\prime },l}{\sum }(2l+1)\left|\int _0^{\mathrm{\infty }}dr(u_{nl}u_{n^{\prime }l}-v_{nl}v_{n^{\prime }l})r^2\right|^2(f_{n^{\prime }l}-f_{nl})\delta (\hbar \stackrel{~}{\omega }+E_{n^{\prime }l}-E_{nl})+\underset{n,n^{\prime },l}{\sum }(2l+1)\left|\int _0^{\mathrm{\infty }}dr(u_{nl}v_{n^{\prime }l}+v_{nl}u_{n^{\prime }l})r^2\right|^2(1-f_{nl}-f_{n^{\prime }l})\delta (\hbar \stackrel{~}{\omega }-E_{nl}-E_{n^{\prime }l}),$$ (5) where $`f_{nl}=[\mathrm{exp}(\beta E_{nl})+1]^{-1}`$, $`\beta =1/k_\mathrm{B}T`$, and $`k_\mathrm{B}`$ is Boltzmann’s constant. The physical interpretation of the two terms in Eq. (5) is straightforward: the first term describes the excitation of a QP due to the perturbation, whereas the second term describes the creation of two QPs. This latter process does not violate particle conservation, since the QPs in general are mixtures of real particles and holes. The response of the gas should be observable as density fluctuations of the trapped gas. As we have assumed a spherically symmetric perturbation, the transitions all have $`\mathrm{\Delta }l=0`$. A generalization to perturbations with arbitrary angular momentum $`l`$ is straightforward. In the non-interacting limit, Eq. (5) reduces to a sum of delta functions $`\delta (\stackrel{~}{\omega }-2n\omega )`$ with $`n=0,1,2,\mathrm{\ldots }`$ We now solve the BdG equations self-consistently and then calculate the response of the gas to a “shaking” of the trap from Eq. (5). In Fig. 1, we show a typical plot of the response $`R(\stackrel{~}{\omega })`$ for various values of $`T^{*}\equiv k_\mathrm{B}T/\hbar \omega `$. In this example, we have chosen the parameters $`g/(\hbar \omega l_h^3)=-0.8`$ and $`\mu _\mathrm{F}=51.5\hbar \omega `$, where $`l_h=(\hbar /m\omega )^{1/2}`$ is the characteristic length of the ground-state harmonic oscillator wavefunction. With the value of $`a=-2160a_0`$, appropriate to ⁶Li , these parameters correspond to $`N_\sigma \approx 3.8\times 10^4`$ atoms of each spin state in a trap with frequency $`\nu =\omega /2\pi \approx 520`$ Hz; a value of $`T_c\approx 5.6\hbar \omega /k_\mathrm{B}=140`$ nK for the transition temperature is obtained by linearizing Eq. (2). Fig. 1 shows the response for $`T^{*}=0,3.95`$ and $`4.55`$, where the gas is in the superfluid phase, and for $`T^{*}=6`$ (above $`T_c`$), where the gas is in the normal phase. For comparison, we also plot the $`T=0`$ response, assuming the gas is in the normal phase. Each delta function in Eq. (5), representing a $`t\to \mathrm{\infty }`$ resonance, is smoothed out over a frequency range of $`\omega /10`$ to model the finite frequency resolution of an actual experiment.
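Once the self-consistent Bogoliubov amplitudes are known, assembling the smoothed response (5) is straightforward; the sketch below shows the structure of the double sum, with the delta functions broadened to Lorentzians of width $`\gamma \approx \omega /10`$ as described above (the energies and radial matrix elements are random placeholders standing in for the actual BdG output; energies are in units of $`\hbar \omega `$):

```python
import numpy as np

rng = np.random.default_rng(0)
nl, nn, T, gamma = 6, 8, 4.0, 0.1    # l values, bands, T*, width (all assumed)
E = [np.sort(rng.uniform(0.5, 12.0, nn)) for _ in range(nl)]
Muu = [rng.normal(size=(nn, nn)) for _ in range(nl)]  # (uu' - vv') r^2 integrals
Muv = [rng.normal(size=(nn, nn)) for _ in range(nl)]  # (uv' + vu') r^2 integrals

def lorentz(x):
    """Delta function smoothed to width gamma."""
    return (gamma / np.pi) / (x**2 + gamma**2)

def response(wt):
    R = 0.0
    for l in range(nl):
        f = 1.0 / (np.exp(E[l] / T) + 1.0)   # quasiparticle occupations
        deg = 2 * l + 1                      # m-degeneracy factor
        for n in range(nn):
            for m in range(n):               # QP excitation term, n > n'
                R += 2 * deg * Muu[l][n, m]**2 * (f[m] - f[n]) \
                     * lorentz(wt + E[l][m] - E[l][n])
            for m in range(nn):              # two-QP creation term
                R += deg * Muv[l][n, m]**2 * (1.0 - f[n] - f[m]) \
                     * lorentz(wt - E[l][n] - E[l][m])
    return R

print(response(2.0))
```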
We now discuss these results, considering first the response for the normal phase. The resonance peaks for $`T=0`$ and $`T^{*}=6`$ are relatively narrow on the scale of $`\omega `$. This is perhaps surprising, as one might expect the Hartree field to wash out the shell structure of the QP spectrum in the normal phase . To understand this, we plot in Fig. 2 the lowest QP energies $`E_\eta `$ for the gas in the normal phase at $`T=0`$. To simplify the plot, we include only even values of $`l`$. The QP energies with odd $`l`$ behave in a completely analogous way. All energies are positive; negative normal-phase particle energies are simply holes ($`u_\eta (r)=0`$) with positive energy in this representation. For $`T=0`$, only the $`\delta (\hbar \stackrel{~}{\omega }-E_\eta -E_{\eta ^{\prime }})`$ term in Eq. (5) is non-zero. The thick vertical arrow in Fig. 2 indicates a typical transition: creation of a hole with energy $`E_h`$ and a particle with energy $`E_p`$, yielding $`\hbar \stackrel{~}{\omega }=E_h+E_p\approx 2.2\hbar \omega `$. The analysis of this normal-phase spectrum is basically the same as the one presented in Ref. . A key result of the present paper is the finding that, although the Hartree field has introduced a significant dispersion of the QP energies as a function of $`l`$, the dispersion is almost the same for each band. Hence, for $`\mathrm{\Delta }l=0`$, the difference of energies between two particle bands (or the sum of energies of a particle and a hole band, as in Fig. 2) varies much less with $`l`$ than the energies themselves, which results in a relatively narrow resonance peak. The resonance for $`T=0`$ is sharper than for $`T^{*}=6`$. This is because for $`T=0`$, only the energy bands immediately around $`\mu _\mathrm{F}`$ contribute to the response, due to the Fermi exclusion principle, whereas for higher $`T`$ there are transitions between several bands that yield slightly different transition energies. We now consider the response when the gas is in the superfluid phase. By comparing the results for $`T^{*}=6`$ and $`T^{*}=4.55`$ in Fig. 1, we see that when the gas enters the superfluid phase, there is a significant broadening of the resonance line. This is due to the fact that Cooper pairing starts to mix particles with holes, and the QP spectrum is altered. This is depicted in Fig. 3, which shows the lowest even-$`l`$ QP levels for $`T^{*}=4.55`$ for both the superfluid and normal phases. When the energies of particles and holes are almost degenerate in the normal phase ($`l\approx 26`$ in Fig. 3), the pairing strongly mixes these two states. This leads to the usual avoided crossing, and the QP spectrum is changed significantly. The strong mixing yields the broadening of the resonance line depicted in Fig. 1. There are now transitions with significantly lower energies than in the normal phase. Such a transition, which contributes to the $`\delta (\hbar \stackrel{~}{\omega }+E_{\eta ^{\prime }}-E_\eta )`$ term in Eq. (5) with $`E_\eta -E_{\eta ^{\prime }}\approx 0.8\hbar \omega `$, is indicated by the vertical arrow in Fig. 3. We also note that the effect of the pairing decreases with increasing $`l`$. This is simply because the centrifugal potential “pushes” the high-$`l`$ states into the region where the order parameter becomes very small. The inset in Fig. 1 shows $`\mathrm{\Delta }(r)`$ and $`|W(r)|`$. For $`T^{*}=4.55`$, the pairing only takes place around the center of the cloud, and QP states which have a small amplitude in this region are unaffected. For $`T\ll T_c`$, all the low-lying QP states are strongly influenced by the pairing. From the inset in Fig. 1, we see that Cooper pairing then takes place over the entire trapped cloud. The low-energy QP spectrum for $`T=0`$ plotted in Fig. 2 is qualitatively different from the normal phase spectrum. The low-energy QP wave functions are centered between the regions where the pairing field and the trapping potential are significant. These “in-gap” states, which were first discussed by Baranov , depend strongly upon the strength of the pairing. As $`T`$ decreases and $`\mathrm{\Delta }(r)`$ increases, their energy increases.
The response of the gas is completely dominated by these states for $`T\ll T_c`$. The broad peak for $`T=0`$ in Fig. 1 comes from the $`\delta (\hbar \stackrel{~}{\omega }-E_\eta -E_{\eta ^{\prime }})`$ term in Eq. (5) with $`E_\eta =E_{\eta ^{\prime }}`$ being the lowest energy for a given $`l`$. It reflects excitations of the kind $`\gamma _{\eta +}^{\dagger }\gamma _{\eta -}^{\dagger }|\mathrm{\Phi }_0\rangle `$, where $`|\mathrm{\Phi }_0\rangle `$ is the ground state and $`\gamma _{\eta \sigma }^{\dagger }`$ creates a QP with quantum numbers $`\eta `$ in hyperfine state $`\sigma `$. Hence, for $`T\ll T_c`$ the response of the gas is a broad peak coming from excitations of the lowest QP band. The resonance peak should be centered around an increasing frequency as $`T`$ is lowered, since $`\mathrm{\Delta }(r)`$ increases. This is confirmed in Fig. 1, where a broad peak has emerged in the response for $`T^{*}=3.95`$, the peak being centered at a lower frequency than for $`T=0`$. The qualitative behavior of the response of the gas described above depends on the fact that the resonance peaks are relatively well-defined in the normal phase. We have performed a number of calculations varying both the coupling strength and the number of atoms trapped. For experimentally realistic parameters, it turns out that the Hartree field does not wash out the resonance peaks in the normal phase. We therefore believe the analysis above should be valid for typical experimental conditions. The low-$`T`$ heat capacity is another observable which probes the low-lying QP spectrum. The usual way to measure the energy of a trapped gas is to turn off the trapping potential and then deduce the velocity distribution from the expanding cloud . As the trapping potential is turned off non-adiabatically, the energy observed is really $`E_{tot}-E_{pot}`$, where $`E_{tot}`$ is the total energy of the trapped gas and $`E_{pot}=\sum _\sigma \int d^3r\rho _\sigma (r)m\omega ^2r^2/2`$ with $`\rho _\sigma (r)=\langle \psi _\sigma ^{\dagger }(r)\psi _\sigma (r)\rangle `$. Thus, the most appropriate definition of the heat capacity for the present purpose is $`C_N\equiv \partial _T(E_{tot}-E_{pot})|_{N_\sigma }`$. Solving the BdG equations with varying $`T`$, we can calculate $`E_{tot}(T)`$ and $`E_{pot}(T)`$ and therefore $`C_N`$.
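With the self-consistent solver wrapped in a function returning $`E_{tot}`$ and $`E_{pot}`$ at fixed particle number, the heat capacity defined above follows from a symmetric finite difference; a minimal sketch (the function `energies` is a stand-in for the full BdG calculation, here replaced by a gapped toy model):

```python
import numpy as np

def heat_capacity(energies, T, dT=0.05):
    """C_N = d(E_tot - E_pot)/dT at fixed N_sigma, by central difference."""
    Ep_tot, Ep_pot = energies(T + dT)
    Em_tot, Em_pot = energies(T - dT)
    return ((Ep_tot - Ep_pot) - (Em_tot - Em_pot)) / (2.0 * dT)

# toy stand-in: E_tot - E_pot ~ const + A*exp(-Delta/T), a gapped spectrum
toy = lambda T: (10.0 + 3.0 * np.exp(-2.0 / T), 5.0)
for T in (0.5, 1.0, 2.0):
    print(f"T = {T}: C_N = {heat_capacity(toy, T):.4f}")
```

The toy model illustrates the exponential low-$`T`$ suppression by a factor $`\mathrm{exp}(-\beta \mathrm{\Delta })`$ discussed below.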
The total energy of the gas, given the solution of Eq. (2), is: $$E_{tot}=\underset{\eta }{\sum }f_\eta \int d^3r\,u_{\eta }^{*}(\mathbf{r})(\mathcal{H}_0+E_\eta )u_\eta (\mathbf{r})+\underset{\eta }{\sum }(1-f_\eta )\int d^3r\,v_\eta (\mathbf{r})(\mathcal{H}_0-E_\eta )v_{\eta }^{*}(\mathbf{r}).$$ (9) As an example, we plot in Fig. 4 $`c_N\equiv C_N/2N_\sigma `$ both for the normal and the superfluid phases. The number of particles in the trap is held constant at $`2N_\sigma =24860`$ and $`g/(\hbar \omega l_h^3)=-1`$, corresponding to a trapping frequency of $`820`$ Hz for ⁶Li. The critical temperature for this set of parameters is $`k_\mathrm{B}T_c\approx 4.5\hbar \omega `$. From Fig. 4, we see that the heat capacity is suppressed in the superfluid phase for low $`T`$. This is because the pairing removes the gapless excitations present in the normal phase. These excitations have equal particle and hole character, and are therefore strongly influenced by the pairing, as noted earlier. Therefore, the heat capacity is exponentially suppressed by a factor $`\mathrm{exp}(-\beta \mathrm{\Delta })`$, where $`\mathrm{\Delta }`$ is the gap in the QP spectrum coming from the Cooper pairing. However, the suppression is only significant for $`T\ll T_c`$, where all angular momentum states are affected by the pairing and the QP spectrum is truly gapped (compare Fig. 2 and Fig. 3). Since the system is finite and the superfluid correlations develop gradually, starting in the center of the trap at $`T_c`$ and then continuously extending outwards as $`T`$ decreases (see the inset of Fig. 1), there is no discontinuity in the heat capacity at $`T=T_c`$, in contrast to the case of an infinite homogeneous system . It is important that the normal phase spectrum is approximately gapless on the scale of $`\mathrm{\Delta }`$, so that the normal phase $`c_N`$ behaves linearly with $`T`$ for $`T<T_c`$, as depicted in Fig. 4. Otherwise, the heat capacity would be exponentially suppressed even in the normal phase, and one would not observe a significant change when the gas becomes superfluid. Fortunately, for realistic values of $`g`$ and $`N_\sigma `$, it turns out that the Hartree field indeed makes the normal phase QP spectrum essentially gapless . In conclusion, we have presented a detailed analysis of two possible ways of detecting the predicted BCS phase transition for a trapped gas of fermionic atoms. The onset of Cooper pairing significantly influences the response of the gas to a modulation of the trapping frequency. For $`T>T_c`$, the response has a relatively sharp peak, and the width of the peak should narrow as $`T`$ is lowered. Then, as $`T=T_c`$ is reached, one should observe a significant broadening of the response peak as the Cooper pairing starts to affect the low-lying QP spectrum. For $`T\ll T_c`$, the low-lying QP states are qualitatively different from the normal phase states, and the response of the gas to the shaking is predicted to be a broad peak coming from the lowest QP band. The center of the peak should move to increasing frequencies as the pairing increases with decreasing $`T`$. Also, one should be able to detect the phase transition by looking at the low-$`T`$ heat capacity: it should be exponentially suppressed for $`T\ll T_c`$, reflecting the gapped nature of the QP spectrum due to Cooper pairing. However, a measurement of the heat capacity is destructive, as one has to release the trap, and it requires several repetitions of the trapping experiment. Also, the suppression of $`C_N`$ is only significant for $`T\ll T_c`$. We therefore consider it a less direct way of detecting the transition than looking at the response to a modulation of the trapping frequency. The analysis presented here should be qualitatively correct for a non-spherically-symmetric trap as well, although the actual calculations would be more cumbersome in this case.
# Bell inequality and the locality loophole: Active versus passive switches ## 1 Introduction Quantum nonlocality, i.e. entanglement of distant systems, plays a central role in today’s natural sciences. It is at the core of quantum physics and of its holistic description of physical systems. These non-local (but not relativity-violating) correlations are nowadays exploited as a resource for Quantum Information Processing (QIP). Hence quantum nonlocality is intimately related both to relativity and to (Shannon) information theory, bringing together the three major conceptual scientific breakthroughs of this century. Some readers may object that information theory is not a natural science, but, precisely, the possibility of exploiting entanglement for QIP forces us to accept Landauer’s view that information is physical and that information theory is a natural science . Whatever views one holds on these deep issues, it is clearly desirable to test entanglement, as it leads to the infamous measurement problem, to the Schrödinger cat paradox, to nonlocality and to quantum information processing. The first two examples just mentioned are unfortunately not yet directly testable, hence of limited scientific interest (though see the pioneering work ). QIP, on the other extreme, is a very promising scientific field. It has been argued that it could become the science of the 21st century, similarly to electrodynamics for the 20th century . However, almost all the promises of QIP rely on the assumption that entanglement as described by quantum theory really exists even for systems composed of tens, hundreds or thousands of subsystems and that entanglement is robust enough to be manipulated. Unfortunately, in practice, only 2- and 3-particle entangled systems have been demonstrated so far. Moreover, even for the simplest case of 2-particle entangled systems, all experimental demonstrations suffer from technological limitations that leave open some loopholes. Admittedly these loopholes are somewhat artificial. Nevertheless, considering the conceptual and the practical importance of quantum nonlocality, these loopholes deserve further investigation. In this letter, we briefly review the two main loopholes (section 2), establish a connection between them and analyse their status (section 3) after the recent experiments carried out in Innsbruck and in Geneva . ## 2 Bell inequality and the detection and locality loopholes It is not our goal here to present the Bell inequality, but simply to fix the notation. The most common form of the Bell inequality reads : $$S\equiv E(\alpha ,\beta )+E(\alpha ,\beta ^{\prime })+E(\alpha ^{\prime },\beta )-E(\alpha ^{\prime },\beta ^{\prime })\le 2$$ (1) where $`\alpha `$ and $`\alpha ^{\prime }`$ are two possible settings of a measuring apparatus $`A`$ analysing the first subsystem, and similarly for $`\beta `$ and $`\beta ^{\prime }`$. The measurement outcomes are labelled $`a=\pm 1`$ and $`b=\pm 1`$. In (1), the correlation $`E(\alpha ,\beta )`$ is defined in terms of the coincidence function $`C(A=a,B=b|\alpha ,\beta )`$ as: $$E(\alpha ,\beta )=C(A=1,B=1|\alpha ,\beta )+C(A=-1,B=-1|\alpha ,\beta )-C(A=1,B=-1|\alpha ,\beta )-C(A=-1,B=1|\alpha ,\beta )$$ (2) The assumption of locality, as formulated by Bell in 1964 , implies that given the total state $`\lambda `$ the result on one side is independent of the setting on the other side: $$P(A=a,B=b|\lambda ,\alpha ,\beta )=P(A=a|\lambda ,\alpha )P(B=b|\lambda ,\beta )$$ (3) The $`\lambda `$ are called local hidden variables (lhv), although only their local character is important.
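For orientation, the quantum-mechanical violation of (1) is easily checked: for the singlet state, $`E(\alpha ,\beta )=-\mathrm{cos}(\alpha -\beta )`$, and one standard optimal choice of settings gives $`S=2\sqrt{2}`$ (a minimal sketch):

```python
import numpy as np

E = lambda a, b: -np.cos(a - b)      # singlet correlation
# one optimal set of analyzer settings for the CHSH combination (1)
a, ap, b, bp = 0.0, 3*np.pi/2, 3*np.pi/4, 5*np.pi/4
S = E(a, b) + E(a, bp) + E(ap, b) - E(ap, bp)
print(f"S = {S:.4f}  (2*sqrt(2) = {2*np.sqrt(2):.4f}, local bound = 2)")
```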
The only additional assumption on $`\lambda `$ is that they belong to some measurable space (in the mathematical sense), hence that mean values can be computed in the usual way, with some normalized probability distribution $`\rho (\lambda )`$: $$C(A=a,B=b|\alpha ,\beta )=\int P(A=a|\lambda ,\alpha )P(B=b|\lambda ,\beta )\rho (\lambda )d\lambda $$ (4) Let us emphasize that determinism is not an issue here. Indeed, the lhv $`\lambda `$ could incorporate enough randomly chosen data to play any chance game. It is irrelevant whether a coin is tossed at the analyzer site or at the source. The detection loophole is based on the fact that in real experiments only a fraction of the particle pairs emitted by the source are detected. Hence, the sample of detected pairs could be biased. A lhv model exploits the detection loophole if the following strict inequality holds: $$P(A=1|\lambda ,\alpha )+P(A=-1|\lambda ,\alpha )<1$$ (5) Note that for the detection loophole it is irrelevant whether the particle never gets to the detector or whether it gets there but does not get detected. Actually, such a distinction is ill-defined, first because it has no testable consequences, and next because the very concept of a detector is not sharp enough (is detection of a photon the creation of the first photoelectron? or the amplification of this? how large must the amplification be? etc.). For an explicit lhv model based on this loophole reproducing exactly the quantum correlation, see . The locality loophole is based on the fact that in most of the experiments the settings $`\alpha `$ and $`\beta `$ are set long before the particle pairs are produced. Hence it is logically possible that the source produces lhv with a probability distribution $`\rho (\lambda )`$ which depends on the settings: $$\rho (\lambda )=\rho (\lambda ,\alpha ,\beta )$$ (6) Both loopholes are based on a similar intuition, quite natural from the lhv point of view. The particle pairs have additional parameters (lhv) that enable them to answer certain questions and not others (i.e. to pass the analyzer for certain settings and not for others). If the actual setting does not correspond to the lhv of the particle, then, according to the detection loophole, the particles are simply not detected; according to the locality loophole, by contrast, this situation never happens because the source "knows" the settings in advance. In principle both loopholes can be closed by appropriate experiments. To close the detection loophole, however, one needs detection efficiencies higher than $`\frac{2}{1+\sqrt{2}}\approx 82.8\%`$ . No experiment today has achieved this. Hence, one has to face the annoying fact that the detection loophole resists after almost 30 years of research and progress! To close the locality loophole, the settings should be chosen only after the particles have left the source, i.e. after the assumed lhv is fixed. In 1982 A. Aspect and co-workers were the first to address this issue . In their remarkable experiment quasi-periodic modulators with different frequencies selected the settings on both sides (see also the criticism in ). More recently, experiments in Innsbruck and in Geneva have confirmed the 1982 results, as discussed in the next section.
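The origin of the 82.8% threshold quoted above can be made explicit: if undetected pairs are discarded, local models can reach $`S=4/\eta -2`$ in the post-selected sample (the standard bound for maximally entangled states), so the quantum value $`2\sqrt{2}`$ is only conclusive when $`\eta >2/(1+\sqrt{2})`$; a quick numerical check:

```python
import numpy as np

eta_crit = 2.0 / (1.0 + np.sqrt(2.0))
print(f"eta_crit = {eta_crit:.4f}")     # 0.8284...
for eta in (0.75, 0.85, 0.95):
    lhv_bound = 4.0 / eta - 2.0         # reachable by lhv with post-selection
    print(f"eta = {eta}: lhv bound S = {lhv_bound:.3f}  vs  2*sqrt(2) = 2.828")
```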
## 3 Active versus passive switches for the locality loophole In order to examine the locality loophole let us concentrate on Fig. 1. Ideally, the settings on Alice's and Bob's sides should be chosen randomly (by beings enjoying free will). The recent beautiful experiment of the Innsbruck group comes very close to this ideal situation . The randomness was provided by a quantum random number generator triggering a polarization modulator. Moreover, the data were registered locally and compared only at a later stage. In Alice’s black box as described in Fig. 1, the polarization modulator used in the Innsbruck experiment (i.e. a setting modulator) is replaced by an active switch which selects the setting $`\alpha `$ or $`\alpha ^{\prime }`$ used to analyse the particle. This is equivalent to the Innsbruck experiment and makes the comparison with the Geneva experiment more straightforward. The principle of the latter is illustrated in Bob’s black box of Fig. 1. A beam splitter (in practice a fiber optic coupler) is used as a passive switch connecting the incoming particle to analysers with settings $`\beta `$ and $`\beta ^{\prime }`$. Only one analyser, either the one with setting $`\beta `$ or the $`\beta ^{\prime }`$ one, is turned on at a time, while the detectors of the other analyser are turned off. Hence the choice of the settings can be done by a purely electronic switch, as for Alice’s box. Note that if the active switch in Alice’s box has 50% losses, then both black boxes are indistinguishable. Nevertheless, it is intuitively clear that the scheme with an active optical switch is closer to the ideal case. Let us thus analyse Alice’s box more closely, assuming a lossy switch, as in real experiments. One could argue that the probability distribution of the lhv could still depend on the settings: the source would randomly guess the setting, and whenever the position of the switch is such that the actual setting differs from the guessed setting, the particle would get lost in the switch. Such an argument is logically consistent, but admittedly very artificial. Moreover, such an argument is closer in spirit to the detection loophole than to the locality loophole. Accordingly, we argue that an experiment with lossy active switches does close the locality loophole (assuming that the data violate the Bell inequality). Hence, the Innsbruck experiment does indeed close the locality loophole, although from pure logic one cannot exclude (6). The above discussion shows that the two loopholes are not independent and that in practice (i.e. for lossy active switches) the implementations with active or passive switches, as illustrated for Alice and Bob in Fig. 1, are equivalent. Let us elaborate on the connection between the two loopholes. A real detector, that is, a detector of finite efficiency $`\eta `$, is equivalent to an ideal detector ($`\eta _{ideal}=100\%`$) with a passive beamsplitter in front removing a fraction $`1-\eta `$ of the photons. This beam splitter has an open port, and it is then natural to connect another analyzer to this open port. There is clearly no difference in principle between a detector with a coupler in front when the second output port of the coupler is left open and when the second port is connected to a turned-off analyser. What is disturbing in this reasoning is that it seems obvious that turning off the analyser connected to the second port has no influence on the main analyser connected to the first output port. Moreover, it also seems obvious that the two analysers play a symmetric role. From this discussion we conclude that as long as the active switch has losses higher than 50%, Alice’s implementation using an active switch is in practice as good (or as bad) as Bob’s implementation using a passive splitter.
Thus the Geneva experiment does also close the locality loophole. ## 4 Conclusion Quantum nonlocality plays a central role both for our understanding of quantum physics and for the promising field of quantum information technology. All the experimental evidence provides an overwhelming support for quantum nonlocality. Nevertheless, some loopholes prevent the conclusion that lhv are logically excluded. The two main loopholes have been recalled, defined and some relations established. A first conclusion is that the use of lossy active switches to test the locality loophole is equivalent to the use of a passive beam splitter. The second and main conclusion is that the recent experiments confirm Aspect’s result and close the locality loophole. The detection loophole, however, remains embarrassingly open, despite years of efforts. ## Acknowledgments Stimulating discussions with Jurgen Brendel, Bruno Huttner, Sandu Popescu, Abner Shimony and Woflgang Tittel were appreciated! This work was partially supported by the Swiss National Science Foundation and by the European TMR Network ”The Physics of Quantum Information” through the Swiss OFES. ## Figure Captions 1. General scheme of tests of Bell’s inequality. Alice’s (A) and Bob’s (B) ”black box measurement apparatuses” each have two possible settings, a,a’ and b,b’, respectively, and two possible outcomes r and g. The inside of both apparatuses are also shown. In Alice’s apparatus the setting a or a’ determines the state of an active optical switch which directs the incoming photons to the corresponding analyser. In Bob’s apparatus the setting b or b’ determines that only the detectors of the corresponding analyser are turned on; hence the passive optical switch (beam splitter) directs the photon at random (in superposition) but only the selected analyser can detect the photon. If the losses of the active switch are 3 dB (i.e. 50%) higher than the passive switch, then both ”black box measurement apparatuses” are undistinguishable.
no-problem/9906/hep-th9906181.html
ar5iv
text
# Boundary conditions in the Unruh problem ## I Introduction It was proposed more than 20 years ago that a detector moving with constant proper acceleration in empty Minkowski spacetime (MS) responses as if it had been immersed into thermal bath of Fulling particles at Davies - Unruh temperature $$T_{DU}=\frac{\mathrm{}g}{2\pi ck_B},$$ (1) where $`g`$ is proper acceleration of the observer and $`k_B`$ is Boltzmann constant. Moreover it is claimed that such response is universal in the sense that it is the same for any kind of the detector. This statement is now referred as the ”Unruh effect”, see e.g. Refs. and citation therein. More precisely the Unruh effect means that from the point of view of a uniformly accelerated observer the usual vacuum in MS occurs to be a mixed state described by thermal density matrix with effective temperature (1). In this paper we will give a critical analysis of this statement and will show that fundamental principles of quantum field theory do not give any physical grounds to assert that the Unruh effect exists. In fact there are two aspects of the Unruh problem: (i) behaviour of a particular accelerated detector and (ii) interpretation of properties of quantum field restricted to a subregion of MS. The second aspect seems to be more fundamental since one doesn’t need to consider the structure of the detector and details of its interaction with the quantum field. Indeed the original derivation of the Unruh effect (see also the later publications ) is based only on quantum field theory principles and use some special models of detectors only as illustration. Moreover exactly this approach gives grounds for the assertion about the universality of the detector response. In this paper we will deal basically with the quantum field theory aspect (ii) of the Unruh problem. It should be emphasized that this aspect of the problem is of general interest for quantum field theory. There are serious arguments (see e.g.) to think that the Unruh effect is closely related to the effect of quantum evaporation of black holes predicted by Hawking . It is claimed that both effects arise due to the presence of event horizons and that Schwarzschild observer in Kruskal spacetime may be considered by analogy with Rindler observer in MS. Furthermore very recently there were proposed some arguments that evaporation of an eternal Schwarzschild black hole may be considered as Unruh effect in the six dimensional embedding MS . The standard explanation of the Unruh effect is based on existence of the aforementioned event horizons $`h_\pm `$ bordering the part of MS which is accessible for a Rindler observer, the so-called $`R`$\- wedge, see Fig.1. In the context of (ii) aspect of the Unruh problem we refer the term ”Rindler observer” to a uniformly accelerated point object whose trajectory is entirely located in the $`R`$-wedge so that the totality of world lines of all Rindler observers completely cover the interior of the $`R`$-wedge. <sup>*</sup><sup>*</sup>*For definiteness we assume that Rindler observers are moving in $`z`$\- direction with respect to some inertial observer. Since such observer due to the presence of horizons have access only to a part of information possessed by inertial observers it is commonly accepted that he sees the usual vacuum state in MS as a mixed state. To put this idea on precise grounds Unruh suggested a new quantization scheme for a free field in MS alternative to the standard one. 
There are two sorts of particles in this scheme, namely $`r`$\- particles living everywhere but in $`L`$\- wedge and $`l`$-particles living everywhere but in $`R`$\- wedge. $`r`$-particles as seen by a Rindler observer turn out to be nothing but the Fulling particles . The corresponding modes carry only positive frequencies with respect to $`(t,z)`$\- plain Lorentz boost generator. The parameter of Lorentz boost may be chosen as time variable in the interior of the $`R`$-wedge and is called Rindler time. $`r`$\- and $`l`$-particle content of Minkowski vacuum in the Unruh construction can be found by some formal manipulations and after elimination of non-visible for a Rindler observer degrees of freedom corresponding to $`l`$\- particles the ”thermal” density matrix with temperature (1) can be obtained, see e.g. Refs.. Such construction is however inconsistent because the Unruh quantization is unitary inequivalent to the standard one associated with Minkowski vacuum. Therefore the aforementioned expression for ”Minkowski vacuum” in terms of $`r`$\- and $`l`$\- particles content doesn’t make direct mathematical sense and the ”thermal” density matrix which arises after elimination of $`l`$\- particles degrees of freedom actually vanishes. This difficulty was pointed out in literature by many authors (see e.g. Refs.). Therefore mathematically more rigorous methods based on algebraic approach to quantum field theory were applied to the problem. In the frame of this approach the notion of Kubo-Martin-Schwinger (KMS) state is usually used instead of thermal equilibrium state (which can not be rigorously defined in this problem). Reformulation of the Unruh construction on the language of algebraic approach was presented in Ref.. It is worth to note that mathematical physicists commonly identify the Unruh effect and the so-called Bisogniano- Wichmann theorem . This theorem is equivalent to the statement that the Minkowski vacuum state (understood in algebraic sense) when restricted to the wedge $`R`$ of MS satisfies the KMS condition with respect to Rindler time and the corresponding ”temperature” parameter after evaluation in terms of the observer proper time is exactly the same as given by Eq.(1). But apart from the difficulties related to unitary inequivalence between the Unruh and standard quantization schemes there are also inconsistencies in physical interpretation of the Unruh construction which (unlike mathematical difficulties) can not be resolved. Indeed it is well known that before any measurement could be carried out one should have prepared the initial state of the quantum system, the Minkowski vacuum state in our case. However only a part of MS consisting of the interior of the $`R`$-wedge is accessible for a Rindler observer. We will refer to the interior of $`R`$\- or $`L`$\- wedges as Rindler spacetimes (RS). These spacetimes are separated from the rest of MS by the event horizons. As a consequence any well defined scheme of quantization in RS should imply that the quantum field satisfies boundary condition at the edge $`h_0`$ of RS. In fact this condition is nothing but the usual requirement of vanishing of the field at spatial infinity. Certainly the horizons $`h_\pm `$ arise due to excessive idealization of the problem. It is evident that in any physical situation the only thing one can achieve is to accelerate the detector during arbitrary long but finite period of time. In the latter case no horizons arise at all. 
But in this case one deals with the (i) aspect of the Unruh problem which has nothing to do with quantization of field in RS and hence with the notion of Fulling particles. In virtue of boundary conditions the Unruh quantization actually can be performed only in the so-called double Rindler wedge rather than in MS. The former is a disjoint union of the interiors of the wedges $`R`$ and $`L`$ separated causally and in addition by a sort of ”topological obstacle” . The role of the topological obstacle is played by the boundary condition necessary for the notion of Fulling particles and absent in the real Minkowski spacetime. It follows then that Rindler observers have nothing in common with the field in MS and that Minkowski vacuum can not be prepared by any manipulations in double RS. Moreover since the left and right RS are separated state vectors of the field in double Rindler wedge are represented by tensor products of state vectors describing the fields from $`R`$\- and $`L`$\- Rindler spacetimes. Since the fields in these spacetimes eternally don’t have any influence on each other only such states are physically realizable which do not have any correlations between $`r`$ and $`l`$ particles. In other words there is a ”superselection rule” acting in the Hilbert space of states representing quantum fields in the double Rindler wedge. Let us stress that this sort of ”superselection rule” arises due to eternal absence of interaction between the $`R`$ and $`L`$ parts of the field. Therefore Minkowski vacuum state which is formally represented in the Unruh construction as a ”superposition” of states with different number of $`r`$ \- particles and the same number of $`l`$ \- particles is physically unrealizable for the quantum field in the double RS and discussion of it’s thermal or other properties becomes meaningless. Since the difficulties with interpretation of the Unruh construction are of physical nature they of course arise also when one interprets the results obtained in algebraic approach. In Ref. the notion of KMS state was related to the usual notion of the bath of heated up $`r`$\- and $`l`$\- particles. But it occurs that these states may be mathematically well-defined only for the observables which vanish at some neighborhood of the common edge of the right and left RS. It is nothing but the mentioned above boundary condition which leads to loss of any connection between a Rindler observer and MS. Elucidation of the role of boundary conditions in the Unruh problem is the central point of this paper. Therefore we begin in Section II with brief consideration of boundary conditions at spatial infinity for a free field quantized in a plain-wave basis in MS. The existence of boundary conditions is always implied but their discussion is usually omitted in text-books. Nevertheless disregarding of this point may lead to mistakes in treating some delicate problems as it happened in our opinion with the Unruh problem. In Section III we consider boundary conditions for quantum field in RS. This case slightly differs in some technical details from the case of a free field in MS due to the absence of mass gap for Fulling particles. Section IV is devoted to consideration of quantization of a free field in MS in the basis of ”boost modes”. This scheme of quantization is unitary equivalent to the usual plain wave quantization and is exploited in the Unruh construction. We consider the Unruh construction in Section V. 
We show that the Unruh quantization scheme is valid only for the double Rindler wedge and can not be used for derivation of ”thermal properties” of Minkowski vacuum with respect to a Rindler observer. Algebraic approach to the Unruh problem is discussed in Section VI. Our results are summarized in Section VII. In Appendix A we present some technical details of derivation of the expression for boost mode annihilation operator in terms of the field values on a Cauchy surface. The derived formula allows one to understand difference between the Unruh and Fulling operators. Appendix B includes discussion of analogy between the Unruh construction and the construction of squeezed states for a two dimensional harmonic oscillator. In this Appendix we have also included a proof of unitary inequivalence of the standard plain wave and Unruh schemes of quantization. Although this issue seems to be a well-known fact we actually could not find a detailed discussion of it in physical literature. In the paper we restrict our discussion to the case of massive neutral scalar field in $`1+1`$ dimensional MS. Generalization to higher dimensions may be obtained straightforwardly by introducing components of momentum $`\stackrel{}{q}`$ orthogonal to the direction of motion of a Rindler observer just changing definition of the mass of the field as $`m\sqrt{m^2+q^2}`$ and inserting additional integration over $`\stackrel{}{q}`$ in appropriate places (see e.g. Ref. for $`1+3`$ dimensional case and Ref. for the general case of $`1+n`$ dimensions). Short presentation of our results can be found in Refs. . ## II Quantization of a neutral scalar field in D=1+1 Minkowski spacetime (plane wave modes) In this section we will discuss the boundary conditions for a quantized free scalar field in two-dimensional Minkowski space-time (MS). Let $$x=\{t,z\},ds^2=dt^2dz^2,$$ (2) be global coordinates and metric of pseudoeuclidean plain From this place we use natural units $`\mathrm{}=c=k_B=1`$ throughout the paper.. Operator of a free neutral scalar field $`\varphi _M(x)`$ of mass $`m`$ satisfies the Klein-Fock-Gordon (KFG) equation $$\left\{\frac{^2}{t^2}+𝒦_M(z)\right\}\varphi _M(x)=0,𝒦_M(z)=\frac{^2}{z^2}+m^2.$$ (3) The plain waves $$\mathrm{\Theta }_p(x)=(2ϵ_p)^{1/2}e^{iϵ_pt}\phi _p(z),\phi _p(z)=(2\pi )^{1/2}e^{ipz},ϵ_p=\sqrt{p^2+m^2},\mathrm{}<p<\mathrm{},$$ (4) form a complete set of solutions of the equation (3) orthonormalized relative to scalar product in MS $$(f,g)_Mi\underset{\mathrm{}}{\overset{\mathrm{}}{}}f^{}(x)\frac{\genfrac{}{}{0pt}{}{}{}}{t}g(x)𝑑z.$$ (5) The completeness of the set (4) allows one to perform quantization by setting $$\varphi _M(x)=\underset{\mathrm{}}{\overset{\mathrm{}}{}}𝑑p\left[a_p\mathrm{\Theta }_p(x)+a_p^{}\mathrm{\Theta }_p^{}(x)\right],$$ (6) where $`a_p`$ and $`a_p^{}`$ are annihilation and creation operators obeying canonical commutation relations. The vacuum state $`|0_M`$ in MS is defined by the relations $$a_p|0_M=0,\mathrm{}<p<\mathrm{},$$ (7) and operators $`a_p`$ can be expressed in terms of field operator $`\varphi _M(x)`$ values on an arbitrary spacelike surface by $`a_p=(\mathrm{\Theta }_p,\varphi _M)_M`$. In particular $$a_p=i\underset{\mathrm{}}{\overset{\mathrm{}}{}}\mathrm{\Theta }_p^{}(x)\frac{\genfrac{}{}{0pt}{}{}{}}{t}\varphi _M(x)𝑑z.$$ (8) It is commonly assumed that the field $`\varphi _M(x)`$ vanishes at spatial infinity. 
Nevertheless it is worth emphasizing that the operators $`a_p`$, $`a_p^{}`$, $`\varphi _M(x)`$ are unbounded ones and therefore the requirement $`\varphi _M(x)0,z\pm \mathrm{}`$ as well as relations (6),(8) should be understood in weak sense . The latter means that these statements relate to arbitrary matrix elements of operators under discussion. Note that the requirement of vanishing of the field $`\varphi _M(x)`$ in the weak sense at spatial infinity is a necessary condition for finiteness of the energy of the field. For illustration of this statement we will consider one particle amplitude $$\varphi _f(x)=0_M|\varphi _M(x)|f,$$ (9) which determines all matrix elements of the free field operator. One particle state $`|f`$ in Eq.(9) is defined by $$|f=a^{}(f)|0_M,a^{}(f)=\underset{\mathrm{}}{\overset{\mathrm{}}{}}𝑑pf(p)a_p^{},f|f=\underset{\mathrm{}}{\overset{\mathrm{}}{}}𝑑p|f(p)|^2=1.$$ (10) The field Hamiltonian expectation value in the state $`|f`$ is given by the following expression $$f|H|f=f|\underset{\mathrm{}}{\overset{\mathrm{}}{}}𝑑z:T^{00}:|f=\frac{1}{2}\underset{\mathrm{}}{\overset{\mathrm{}}{}}𝑑z\left\{\left|\frac{\varphi _f(x)}{t}\right|^2+\left|\frac{\varphi _f(x)}{z}\right|^2+m^2\left|\varphi _f(x)\right|^2\right\}.$$ (11) One can easily see that the finiteness of the field energy $`f|H|f`$ implies $$\underset{\mathrm{}}{\overset{\mathrm{}}{}}|\varphi _f(x)|^2𝑑z<\mathrm{},\underset{\mathrm{}}{\overset{\mathrm{}}{}}\left|\frac{\varphi _f(x)}{z}\right|^2𝑑z<\mathrm{},$$ (12) and hence leads to continuity of $`\varphi _f(x)`$ and its vanishing at spatial infinity $$\varphi _f(t,z)0,z\pm \mathrm{}.$$ (13) Indeed from the inequality $`|\varphi _f^2(t,z_2)\varphi _f^2(t,z_1)|2{\displaystyle \underset{z_1}{\overset{z_2}{}}}𝑑z\left|\varphi _f(x){\displaystyle \frac{\varphi _f(x)}{z}}\right|2\left({\displaystyle \underset{z_1}{\overset{z_2}{}}}𝑑z|\varphi _f(x)|^2{\displaystyle \underset{z_1}{\overset{z_2}{}}}𝑑z\left|{\displaystyle \frac{\varphi _f(x)}{z}}\right|^2\right)^{1/2}`$ and square integrability of $`\varphi _f(x)`$ and $`\varphi _f(x)/z`$ it is evident that the function $`\varphi _f(t,z)`$ is continuous and there exists the limit $`\underset{z\pm \mathrm{}}{lim}\varphi _f(t,z)`$. But if $`\varphi _f(x)`$ is square integrable then this limit should be zero (for details of the proof in more general case see Sec. 5.6 in Ref.). Hence if we refuse from the condition (13) then the energy becomes infinite. This boundary condition in terms of Whightman functions is equivalent to vanishing of the two-point function for infinite spacelike separations. From Eqs.(6), (7) for positive-frequency function $`\mathrm{\Delta }^{(+)}(x,m)=i0_M|\varphi _M(x)\varphi _M(0)|0_M`$ we obtain $$\mathrm{\Delta }^{(+)}(x,m)=\frac{1}{4}\times \{\begin{array}{cc}H_0^{(2)}(m\sqrt{t^2z^2}),& t>|z|,\\ \frac{2i}{\pi }K_0(m\sqrt{z^2t^2}),& |t|<|z|,\\ H_0^{(1)}(m\sqrt{t^2z^2}),& t<|z|,\end{array}$$ (14) where $`H_\nu ^{(1,2)}`$ are Hankel and $`K_\nu `$ \- Macdonald (modified Bessel) functions. 
Therefore using asymptotic expansions for $`H_\nu ^{(1,2)}`$ and $`K_\nu `$ we get for $`|z|\mathrm{}`$, $`t=0`$ $$\mathrm{\Delta }^{(+)}(x,m)(m|z|)^{1/2}e^{m|z|}.$$ (15) The two-point commutator function $`\mathrm{\Delta }(xx^{},m)=i[\varphi _M(x),\varphi _M(x^{})]`$ reads $$\mathrm{\Delta }(x,m)=\frac{1}{4}\{\mathrm{sgn}(tz)+\mathrm{sgn}(t+z)\}J_0(m\sqrt{t^2z^2}),$$ (16) where $`J_\nu `$ denotes Bessel function, $`\theta (\tau )`$ is the Heavicide step function and $`\mathrm{sgn}(\tau )=\theta (\tau )\theta (\tau )`$. Note that the Cauchy data of the function $`\mathrm{\Delta }(x,m)`$ on the surface $`t=0`$ is $$\mathrm{\Delta }(x,m)|_{t=0}=0,\frac{\mathrm{\Delta }(x,m)}{t}|_{t=0}=\delta (z),$$ (17) in full analogy with the Cauchy data for the Pauli-Jordan function in four dimensional case . ## III Quantization of a neutral scalar field in D=1+1 Rindler space In this section we will consider quantization of a neutral scalar field in D=1+1 Rindler space the geometry of which is described by the metric $$ds^2=\rho ^2d\eta ^2d\rho ^2,\mathrm{}<\eta <+\mathrm{},0<\rho <\mathrm{}.$$ (18) This issue plays an important role for the Unruh problem because it defines the notion of Fulling particles. In the Sec.III A we will define the Fulling modes which form a basis for quantization and introduce the notion of Fulling particles . In the Sec.III B we will discuss boundary conditions arising in the procedure of Fulling quantization. ### A Fulling quantization KFG equation in RS takes the form $$\left\{\frac{^2}{\eta ^2}+𝒦_R(\rho )\right\}\varphi _R(\xi )=0,𝒦_R(\rho )=\rho \frac{}{\rho }\rho \frac{}{\rho }+m^2\rho ^2,\xi =\{\eta ,\rho \}$$ (19) Differential operator $`𝒦_R`$ is a self- adjoint positive operator in the Hilbert space $`L_\sigma ^2(0,\mathrm{})`$ of square integrable functions with measure $`d\sigma (\rho )=d\rho /\rho `$ and inner product $`(\chi ,\psi )_{L^2}=_0^{\mathrm{}}\chi ^{}(\rho )\psi (\rho )𝑑\rho /\rho `$. Functions from the domain of it’s definition $`𝒟(𝒦_R)`$ obey the conditions $$(\psi ,\psi )_{L^2}=\underset{0}{\overset{\mathrm{}}{}}\frac{d\rho }{\rho }|\psi (\rho )|^2<\mathrm{},\underset{0}{\overset{\mathrm{}}{}}\frac{d\rho }{\rho }\left|\rho \frac{\psi (\rho )}{\rho }\right|^2<\mathrm{},\psi (0)=0,$$ (20) where the last restriction is a consequence of the two previous (compare to Eqs.(12), (13)). The condition $`\psi (0)=0`$ is an automatic or built-in boundary condition, see Ref.. This means that at the point $`\rho =0`$ we encounter the case of the Weyl limit-point. From mathematical point of view it results from the fact that deficiency indices of the operator $`𝒦_R`$ are equal to $`(0,0)`$. The physical meaning of this condition was discussed by Fulling using the language of wave packets. The Eigenfunctions of the operator $`𝒦_R`$, $$\psi _\mu (\rho )=\pi ^1(2\mu \mathrm{sinh}\pi \mu )^{1/2}K_{i\mu }(m\rho ),𝒦_R\psi _\mu (\rho )=\mu ^2\psi _\mu (\rho ),$$ (21) satisfy the orthogonality and completeness conditions, $$\underset{0}{\overset{\mathrm{}}{}}\frac{d\rho }{\rho }\psi _\mu ^{}(\rho )\psi _\mu ^{}(\rho )=\delta (\mu \mu ^{}),\underset{0}{\overset{\mathrm{}}{}}𝑑\mu \psi _\mu (\rho )\psi _\mu ^{}(\rho ^{})=\rho \delta (\rho \rho ^{}).$$ (22) Note that the functions $`\psi _\mu (\rho )`$ satisfy the third condition in Eq.(20) in the sense of distributions. 
For the solution of the Cauchy problem for Eq.(19) we have $$\varphi _R(\eta ,\rho )=e^{i\eta 𝒦_R^{1/2}}\psi (\rho )+e^{i\eta 𝒦_R^{1/2}}\psi ^{}(\rho ),\psi (\rho )=\frac{1}{2}\varphi _R(0,\rho )+\frac{i}{2}𝒦_R^{1/2}\frac{}{\eta }\varphi _R(0,\rho ).$$ (23) Therefore positive-frequency with respect to timelike variable $`\eta `$ modes, Fulling modes , read $$\mathrm{\Phi }_\mu (\xi )=(2\mu )^{1/2}\psi _\mu (\rho )e^{i\mu \eta },\mu >0.$$ (24) These modes are orthonormal relative to the inner product in RS, $$(F,G)_R=i\underset{0}{\overset{\mathrm{}}{}}\frac{d\rho }{\rho }F^{}(\xi )\frac{\genfrac{}{}{0pt}{}{}{}}{\eta }G(\xi ),$$ (25) and together with $`\mathrm{\Phi }_\mu ^{}`$ form a complete set of solutions of KFG equation (19). Therefore they may be used for quantizing the field $`\varphi _R`$, $$\varphi _R(\xi )=\underset{0}{\overset{\mathrm{}}{}}𝑑\mu \{c_\mu \mathrm{\Phi }_\mu (\xi )+c_\mu ^{}\mathrm{\Phi }_\mu ^{}(\xi )\},[c_\mu ,c_\mu ^{}^{}]=\delta (\mu \mu ^{}),[c_\mu ,c_\mu ^{}]=[c_\mu ^{},c_\mu ^{}^{}]=0,$$ (26) and one can define vacuum state $`|0_R`$ for Rindler space by the condition $$c_\mu |0_R=0,\mu >0.$$ (27) The states which are created from this vacuum by operators $`c_\mu ^{}`$ correspond to Fulling (or sometimes also called Rindler) particles. The annihilation operator $`c_\mu `$ may be expressed in terms of the field $`\varphi _R`$ by $$c_\mu =(\mathrm{\Phi }_\mu ,\varphi _R)_R=\frac{i}{\sqrt{2\mu }}\underset{0}{\overset{\mathrm{}}{}}\frac{d\rho }{\rho }\psi _\mu (\rho )\left(\frac{\varphi _R(\xi )}{\eta }i\mu \varphi _R(\xi )\right)|_{\eta =0}.$$ (28) The secondly quantized operator corresponding to the Killing vector $`i/\eta `$, $$K=\underset{0}{\overset{\mathrm{}}{}}\mu c_\mu ^{}c_\mu 𝑑\mu ,$$ (29) plays a role of Hamiltonian. With the help of Eqs.(26), (27) one can calculate the two-point commutator $`D(\eta \eta ^{},\rho ,\rho ^{})=i[\varphi _R(\xi ),\varphi _R(\xi ^{})]`$ and the positive-frequency Whightman function $`D^{(+)}(\eta \eta ^{},\rho ,\rho ^{})=i0_R|\varphi _R(\xi )\varphi _R(\xi ^{})|0_R`$ for RS, $$D(\eta \eta ^{},\rho ,\rho ^{})=\frac{2}{\pi ^2}\underset{0}{\overset{\mathrm{}}{}}𝑑\mu \mathrm{sinh}\pi \mu \mathrm{sin}\mu (\eta \eta ^{})K_{i\mu }(m\rho )K_{i\mu }(m\rho ^{}),$$ (31) $$D^{(+)}(\eta \eta ^{},\rho ,\rho ^{})=\frac{i}{\pi ^2}\underset{0}{\overset{\mathrm{}}{}}𝑑\mu \mathrm{sinh}\pi \mu \mathrm{cos}\mu (\eta \eta ^{})K_{i\mu }(m\rho )K_{i\mu }(m\rho ^{})+\frac{1}{2}D(\eta \eta ^{},\rho ,\rho ^{}).$$ (32) Using the relation between Minkowski coordinates $`(t,z)`$ and Rindler coordinates $`(\eta ,\rho )`$ in the $`R`$-wedge of MS, $$\eta =\mathrm{artanh}(t/z),\rho =(z^2t^2)^{1/2},$$ (33) one can easily see that the two point commutation functions $`D`$ and $`\mathrm{\Delta }`$ coincide. In particular for $`\mathrm{\Delta }s^2=2\rho \rho ^{}\mathrm{cosh}(\eta \eta ^{})\rho ^2\rho ^2>0`$ we have $`D(\eta \eta ^{},\rho ,\rho ^{})={\scriptscriptstyle \frac{1}{2}}J_0\left(m(\mathrm{\Delta }s^2)^{1/2}\right)`$. This coincidence of two-point commutators means that the local properties of the quantum fields $`\varphi _M`$ and $`\varphi _R`$ are the same. Nevertheless global properties of these fields are different due to different definitions of vacuums $`|0_M`$ and $`|0_R`$, $`\mathrm{\Delta }^{(+)}D^{(+)}`$. 
Note that singularities of these functions for coinciding points are the same and cancel when one takes their difference, $$0_M|\varphi _M^2(\xi )|0_M0_R|\varphi _R^2(\xi )|0_R=\frac{1}{\pi ^2}\underset{0}{\overset{\mathrm{}}{}}𝑑\mu e^{\pi \mu }K_{i\mu }^2(m\rho )>0.$$ (34) ### B Boundary condition Let us now discuss the boundary conditions for the field $`\varphi _R(\xi )`$. As in section II we will consider one-particle amplitude $$\varphi _g(\xi )=0_R|\varphi _R(\xi )|g=\mathrm{exp}(i\eta 𝒦_R^{1/2})\psi _g(\rho ).$$ (35) (compare to Eq.(23)), where $$|g=c^{}(g)|0_R,c^{}(g)=\underset{0}{\overset{\mathrm{}}{}}\frac{d\mu }{\mu ^{1/2}}g(\mu )c_\mu ^{}.$$ (36) The spatial part $`\psi _g`$ of the one particle amplitude (35) is expressed in terms of the weight function $`g`$ as follows, $$\psi _g(\rho )=\underset{0}{\overset{\mathrm{}}{}}𝑑\mu G(\mu ,\rho ),G(\mu ,\rho )=\frac{g(\mu )}{\pi }\left(\frac{\mathrm{sinh}\pi \mu }{\mu }\right)^{1/2}K_{i\mu }(m\rho ).$$ (37) For inner product in RS we have $$(\varphi _g,\varphi _h)_R=2(𝒦_R^{1/4}\psi _g,𝒦_R^{1/4}\psi _h)_{L^2},$$ (38) and the state $`|g`$ is normalized by the condition $$g|g=\underset{0}{\overset{\mathrm{}}{}}\frac{d\mu }{\mu }|g(\mu )|^2=2\underset{0}{\overset{\mathrm{}}{}}\frac{d\rho }{\rho }|𝒦_R^{1/4}\psi _g|^2=1.$$ (39) We will discuss boundary conditions at both boundaries of RS, namely at the points $`\rho =\mathrm{}`$ and $`\rho =0`$. It’s worth to note that the point $`\rho =0`$ may also be considered as spatial infinity. It becomes evident after the Langer transformation $`u=\mathrm{ln}(m\rho )`$ ($`\mathrm{}<u<\mathrm{}`$) mapping the singular point $`\rho =0`$ into $`\mathrm{}`$. After this transformation operator $`𝒦_R`$ takes the form $$𝒦_R=\frac{^2}{u^2}+V(u),V(u)=m^2e^{2u}.$$ (40) Since $`V(u)`$ is a confining potential at $`u+\mathrm{}`$ the boundary condition $$\varphi _g(\eta ,+\mathrm{})=0,$$ (41) is obvious and is satisfied not only for the amplitude (35) but even for Eigenfunctions $`\psi _\mu (\rho )`$ of operator $`𝒦_R`$ . Therefore we concentrate below on the less evident case $`\rho =0`$ or $`u=\mathrm{}`$. The requirement of finiteness of the average energy (29) in the state $`|g`$, $$g|K|g=\frac{1}{2}\underset{0}{\overset{\mathrm{}}{}}\frac{d\rho }{\rho }\left\{\left|\frac{\varphi _g}{\eta }\right|^2+\rho ^2\left|\frac{\varphi _g}{\rho }\right|^2+m^2\rho ^2\left|\varphi _g\right|^2\right\}=\underset{0}{\overset{\mathrm{}}{}}𝑑\mu |g(\mu )|^2<\mathrm{},$$ (42) leads to the restrictions $$\underset{0}{\overset{\mathrm{}}{}}\frac{d\rho }{\rho }\left|\rho \frac{\varphi _g}{\rho }\right|^2<\mathrm{},\underset{0}{\overset{\mathrm{}}{}}\frac{d\rho }{\rho }\left|\rho \varphi _g\right|^2<\mathrm{}.$$ (43) But from these restrictions (unlike the case we had in MS) it does not immediately follow that $$\varphi _g(\eta ,0)=0.$$ (44) Therefore we need use more delicate procedure to prove the condition (44). Let us split the integral in Eq.(37) into three parts, $$\begin{array}{c}\psi _g(\rho )=I_1(\rho )+I_2(\rho )+I_3(\rho ),\hfill \\ I_1=\underset{0}{\overset{\mu _1}{}}G𝑑\mu ,I_2=\underset{\mu _1}{\overset{\mu _2}{}}G𝑑\mu ,I_3=\underset{\mu _2}{\overset{\mathrm{}}{}}G𝑑\mu ,\hfill \end{array}$$ (45) where $`\mu _1`$, $`\mu _2`$ are arbitrary numbers such that $`0<\mu _11\mu _2<\mathrm{}`$ and the function $`G(\mu ,\rho )`$ is defined in Eq.(37). 
After applying the known asymptotic behaviour of the Macdonald function we obtain for $`u=\mathrm{ln}(m\rho )<0`$, $`|u|1`$ $$G(\mu ,\rho )=\frac{g(\mu )}{\sqrt{\pi }\mu }\times \{\begin{array}{cc}\mathrm{sin}(\mu u\mu \mathrm{ln}2),\hfill & \mu 1,\\ & \\ \mathrm{cos}(\mu u\mu \mathrm{ln}2\mathrm{arg}\mathrm{\Gamma }(i\mu )),\hfill & \mu 1,\\ & \\ \mathrm{sin}(\mu \mathrm{ln}\mu \mu u+\mu (\mathrm{ln}21)+\pi /4),\hfill & \mu 1.\end{array}$$ (46) Let us first proceed with the last two terms in Eq.(45). It follows from the normalization condition Eq.(39) and the evident inequality $`|g|{\scriptscriptstyle \frac{1}{2}}(1+|g|^2)`$ that the integral $`_{\mu _1}^{\mu _2}|g(\mu )|𝑑\mu /\mu `$ should converge. Therefore we may apply the Riemann-Lebesgue lemma to conclude that $`I_2(\rho )`$ vanishes when $`\rho 0`$. To estimate $`I_3`$ we use the Schwartz inequality and get $$|I_3(\rho )|^2\frac{1}{\pi \mu _2}\underset{\mu _2}{\overset{\mathrm{}}{}}|g(\mu )|^2𝑑\mu .$$ (47) Now using the condition of finiteness of the energy Eq.(42) we conclude that $`I_3(\rho )`$ may be done arbitrary small by the appropriate (independent of the value of $`\rho `$) choice of $`\mu _2`$. Therefore the sum $`I_2(\rho )+I_3(\rho )`$ tends to zero when $`\rho 0`$. Note also that for differentiable functions $`g(\mu )`$ one has the estimation $`I_2+I_3\mathrm{ln}^1(1/m\rho )`$ for $`\rho m^1`$. Consider now the first part $`I_1(\rho )`$ of the integral in Eq.(45). We will discuss first the case of weight functions $`g(\mu )`$ which are continuous at the point $`\mu =0`$. From Eq.(39) it immediately follows that such weight functions should vanish for $`\mu 0`$, $`g(0)=0`$. Let $`g(\mu )`$ vanish for $`\mu 0`$ as a power of $`\mu `$, $$g(\mu )=a\mu ^\alpha ,\alpha >0,\mu 0.$$ (48) Then from Eqs.(45), (46) we obtain the estimation $$I_1(\rho )=\frac{a\mathrm{\Gamma }(\alpha )\mathrm{sin}(\pi \alpha /2)}{\sqrt{\pi }(\mathrm{ln}\frac{2}{m\rho })^\alpha },\rho m^1,\alpha 2,4,6,\mathrm{}.$$ (49) Thus for this case $`I_1(\rho )`$ decreases logarithmically when $`\rho 0`$. We see that for $`0<\alpha <1`$ the term $`I_1(\rho )`$ dominates in the sum in Eq.(45) for small $`\rho `$ and thus the whole integral over $`\mu `$ is defined by the behaviour of $`g(\mu )`$ at small $`\mu `$ while for $`\alpha >1`$ generally $`\psi _g(\rho )`$ does not depend on behaviour of $`g(\mu )`$ at small $`\mu `$. For even values of $`\alpha `$ the leading term (49) vanishes and $`I_1(\rho )`$ decreases for $`\rho 0`$ even faster. For example, for the particular weight function $$g_0(\mu )=\frac{a_0\mu (\mu \mathrm{sinh}\pi \mu )^{1/2}}{\sqrt{\pi }\mathrm{\Gamma }^2(ϵ)}|\mathrm{\Gamma }(ϵ+i\mu )|^2,(ϵ>0),g_0(\mu )a_0\mu ^2,\mu 0,$$ (50) ($`a_0`$ \- normalization constant) by performing Kontorovich - Lebedev transform we obtain $`\psi _g(\rho )(m\rho )^ϵ\mathrm{exp}(m\rho )\rho ^ϵ`$ for $`\rho m^1`$. If the weight function vanishes for small $`\mu `$ logarithmically, $$g(\mu )=b\left(\mathrm{ln}\frac{1}{\mu }\right)^\beta ,\beta >0,\mu 0,$$ (51) then for small $`\rho `$ we have $$I_1(\rho )=\frac{\sqrt{\pi }b}{2\left(\mathrm{ln}\mathrm{ln}\frac{2}{m\rho }\right)^\beta }\left(1\frac{\beta C}{\left(\mathrm{ln}\mathrm{ln}\frac{2}{m\rho }\right)^\beta }+\mathrm{}\right),$$ (52) where $`C=0.577\mathrm{}`$ is Euler constant. This asymptotic representation is valid in particular for $`0<\beta <1/2`$ when the normalization integral (39) diverges. 
Since the faster $`g(\mu )`$ decreases for $`\mu 0`$ the faster $`I_1(\rho )`$ tends to zero, we conclude that the one particle amplitude $`\mathrm{\Phi }_g(\xi )`$ tends to zero when $`\rho 0`$ for all weight functions which are continuous at $`\mu =0`$, satisfy the normalization condition and correspond to the states with finite energy. Note that continuity of weight functions $`g(\mu )`$ is not a necessary condition for the validity of the condition Eq.(44). As an example we will consider the weight function which for small $`\mu `$ behaves as $$g(\mu )=c\sigma ^\gamma \mathrm{exp}(d\sigma ^\delta \mathrm{sin}^2\sigma ),\gamma 0,\sigma =\mathrm{ln}(1/\mu ).$$ (53) For big values of $`\sigma `$ this function is localized almost near the points $`\sigma _n=\pi n`$ with characteristic width $`\mathrm{\Delta }\sigma _n=d^{1/2}(\pi n)^{\delta /2}`$. Therefore for the contribution of small $`\mu `$ to the normalization integral Eq.(39) we have $$\underset{0}{\overset{\mu _1}{}}\frac{d\mu }{\mu }|g(\mu )|^2=\underset{\sigma _1}{\overset{\mathrm{}}{}}𝑑\sigma |g(e^\sigma )|^2\underset{n}{}d^{1/2}(\pi n)^{2\gamma \delta /2},$$ (54) where $`\sigma _1=\mathrm{ln}(\frac{1}{\mu _1})`$. Therefore the function $`g(\mu )`$ with behaviour at small $`\mu `$ as given by Eq.(53) may be normalizable if $$\delta >4\gamma +2$$ (55) (the case $`\gamma =1/2`$, $`\delta =6`$ is known as Dirichlet-Reymond example ). But let us note that for the function Eq.(53) with parameters satisfying Eq.(55) the integral $`_0^{\mu _1}|g(\mu )|𝑑\mu /\mu =_{\sigma _1}^{\mathrm{}}|g(e^\sigma )|𝑑\sigma `$ also converges and thus in virtue of the Riemann-Lebesgue lemma we conclude that even in this case we have $`I_1(\rho )0`$ as $`\rho 0`$. It looks as if the condition Eq.(44) is valid for discontinues at $`\mu =0`$ weight functions as well. Let us also note that for arbitrary small but finite accuracy of measuring of energy of Fulling particles the weight functions of the type Eq.(53) are physically indistinguishable from those vanishing for small $`\mu `$. We can state therefore that the boundary condition Eq.(44) is satisfied at least for all physically realizable states $`|g`$. It is worth to emphasize that the considered boundary condition (44) for quantized field can not be reduced to the boundary condition $`\psi (0)=0`$ for functions from the domain of definition of the operator $`𝒦_R`$ because the expansion (26) is valid for nonvanishing (in weak sense) at the point $`\rho =0`$ functions $`\varphi _R(\xi )`$ as well. The situation is the same as in the plain-wave quantization scheme in MS where the boundary condition (13) does not necessarily follow from the self-adjointness of the operator $`𝒦_M(z)`$. If one deals with the states $`|g`$ for which $$g|K^1|g=\underset{0}{\overset{\mathrm{}}{}}\frac{d\mu }{\mu ^2}|g(\mu )|^2=2\underset{0}{\overset{\mathrm{}}{}}\frac{d\rho }{\rho }|\psi _g(\rho )|^2<\mathrm{},$$ (56) then the boundary condition Eq.(44) may be proved straightforwardly in the same way as it was done in section II. The condition Eq.(56) corresponds to the regularity condition which was proposed by Kay in Ref. to guarantee the existence of thermal states for Fulling quantization system. At the end of the section let us discuss the asymptotic behaviour of Whightman function $`D^{(+)}(\eta \eta ^{},\rho ,\rho ^{})`$ for fixed value of $`\rho ^{}`$ and $`\rho 0`$. In this case the two-point commutator vanishes and the contribution of small $`\mu `$ to the integral in Eq.(32) becomes principal. 
Thus we have $$D^{(+)}(\eta \eta ^{},\rho ,\rho ^{})=\frac{iK_0(m\rho ^{})}{\pi \mathrm{ln}(2/m\rho )}+O(\mathrm{ln}^2(1/m\rho )),(\mathrm{\Delta }s^2<0).$$ (57) We see that physically significant quantities die out at point $`\rho =0`$ in RS as they do at spatial infinity in MS. ## IV Quantization of a neutral scalar field in D=1+1 Minkowski spacetime (boost modes) The Rindler observer world line coincides with one of the orbits of Lorentz group and $`R`$-wedge is one of domains of MS invariant under Lorentz rotation. Therefore it is convenient dealing with the Unruh problem to quantize the field in the basis of Eigenfunctions of Lorentz boost operator rather than in the plane-wave basis. Since the secondary quantized boost operator $`L=M_{tx}`$ does not commute with the energy $`H`$ and momentum $`P`$ operators, $$[L,H]=iP,[L,P]=iH,$$ (58) it is not diagonal in terms of annihilation, creation operators of particles with given momentum $`p`$. In order to diagonalize the operator $`L`$ we replace first the variable $`p`$ by the rapidity $`q=\mathrm{artanh}(p/ϵ_p)`$ and introduce new operators $`\alpha _q`$ as $$\alpha _q=(m\mathrm{cosh}q)^{1/2}a_p,[\alpha _q,\alpha _q^{}^{}]=\delta (qq^{}).$$ (59) The energy, momentum and boost operators are expressed in terms of new operators as follows $$H=m\underset{\mathrm{}}{\overset{\mathrm{}}{}}𝑑q\mathrm{cosh}q\alpha _q^{}\alpha _q,P=m\underset{\mathrm{}}{\overset{\mathrm{}}{}}𝑑q\mathrm{sinh}q\alpha _q^{}\alpha _q,L=\frac{i}{2}\underset{\mathrm{}}{\overset{\mathrm{}}{}}𝑑q\alpha _q^{}\frac{\genfrac{}{}{0pt}{}{}{}}{q}\alpha _q.$$ (60) It is easy to see that in terms of operators $`b_\kappa `$ which are Fourier transforms of operators $`\alpha _q`$ $$b_\kappa =\frac{1}{\sqrt{2\pi }}\underset{\mathrm{}}{\overset{\mathrm{}}{}}e^{i\kappa q}\alpha _q𝑑q,\alpha _q=\frac{1}{\sqrt{2\pi }}\underset{\mathrm{}}{\overset{\mathrm{}}{}}e^{i\kappa q}b_\kappa 𝑑\kappa .$$ (61) the boost operator $`L`$ is diagonal $$L=\underset{\mathrm{}}{\overset{\mathrm{}}{}}\kappa b_\kappa ^{}b_\kappa 𝑑\kappa .$$ (62) Using the definition (59) and the relations (61) one can easily transform the Eq.(6) to the form $$\varphi _M(x)=\underset{\mathrm{}}{\overset{\mathrm{}}{}}𝑑\kappa \{b_\kappa \mathrm{\Psi }_\kappa (x)+b_\kappa ^{}\mathrm{\Psi }_\kappa ^{}(x)\},$$ (63) where functions $`\mathrm{\Psi }_\kappa `$ are defined by the integral representation $$\mathrm{\Psi }_\kappa (x)=\frac{1}{2^{3/2}\pi }\underset{\mathrm{}}{\overset{\mathrm{}}{}}𝑑q\mathrm{exp}\{im(z\mathrm{sinh}qt\mathrm{cosh}q)i\kappa q\}.$$ (64) It is assumed that an infinitely small negative imaginary part added to $`t`$, see Appendix A. It may be shown that functions (64) are Eigenfunctions of the boost generator $``$, $$\mathrm{\Psi }_\kappa (x)=\kappa \mathrm{\Psi }_\kappa (x),\mathrm{}<\kappa <+\mathrm{},=i(z/t+t/z).$$ (65) They are orthonormal relative to inner product in MS, $$(\mathrm{\Psi }_\kappa ,\mathrm{\Psi }_\kappa ^{})_M=\delta (\kappa \kappa ^{}),(\mathrm{\Psi }_\kappa ^{},\mathrm{\Psi }_\kappa ^{})_M=0,$$ (66) and together with their adjoints $`\mathrm{\Psi }_\kappa ^{}`$ form a complete set of solutions for KFG equation in MS. We will call this set of functions boost modes. The boost modes can serve as a basis for a new quantization scheme for the field $`\varphi _M(x)`$. 
Indeed according to Eqs.(59), (61) the commutation relations for $`b_\kappa `$ read $$[b_\kappa ,b_\kappa ^{}^{}]=\delta (\kappa \kappa ^{}),[b_\kappa ,b_\kappa ^{}]=[b_\kappa ^{},b_\kappa ^{}^{}]=0.$$ (67) The vacuum state with respect to operators $`b_\kappa `$ which obeys the condition $$b_\kappa |0_M=0.$$ (68) is exactly the usual Minkowski vacuum. This is because the transition from the operators $`\alpha _q`$ to $`b_\kappa `$ (61) is a unitary transformation, $$b_\kappa =F\alpha _\kappa F^{},FF^{}=F^{}F=1,F=\mathrm{exp}\left(i\frac{\pi }{4}\underset{\mathrm{}}{\overset{\mathrm{}}{}}𝑑q\left\{_q\alpha _q^{}_q\alpha _q+(q^21)\alpha _q^{}\alpha _q\right\}\right),$$ (69) and hence the solutions $`\mathrm{\Psi }_\kappa `$ correspond to positive frequencies relative to global time $`t`$. Note that quantization of scalar field defined by Eqs.(67), (68), (63) is equivalent to the one performed in Ref. by analytical extension of Green functions. There exists another representation of boost modes (see also Ref. for the fermion case) corresponding to splitting of MS into the right (R), future (F), left (L) and past (P) wedges, see Fig.1, $$\mathrm{\Psi }_\kappa =\theta (x_+)\theta (x_{})\mathrm{\Psi }_\kappa ^{(R)}+\theta (x_+)\theta (x_{})\mathrm{\Psi }_\kappa ^{(F)}+\theta (x_+)\theta (x_{})\mathrm{\Psi }_\kappa ^{(L)}+\theta (x_+)\theta (x_{})\mathrm{\Psi }_\kappa ^{(P)},$$ (70) where $`x_\pm =t\pm x`$ are null coordinates in MS. By performing integration in Eq.(64) under assumption $`x_+>0`$, $`x_{}<0`$ and using integral representation for Macdonald functions we obtain for $`\mathrm{\Psi }_\kappa ^{(R)}`$ $$\mathrm{\Psi }_\kappa ^{(R)}=\frac{1}{\pi \sqrt{2}}\mathrm{exp}\left(\frac{\pi \kappa }{2}i\frac{\kappa }{2}\mathrm{ln}\left(\frac{x_+}{x_{}}\right)\right)K_{i\kappa }\left(m\sqrt{x_{}x_+}\right).$$ (72) The explicit form for the boost modes in other wedges may be obtained from (72) by analytical extension. The branch points of the function Eq.(72) lie on the light cone and for transition from one wedge to another one should use the substitutions $`(x_{})x_{}e^{i\pi }`$ for the transition $`RF`$, $`x_+(x_+)e^{i\pi }`$ for $`FL`$, $`x_{}(x_{})e^{i\pi }`$ for $`LP`$ and $`(x_+)x_+e^{i\pi }`$ for $`PR`$. Thus we obtain $$\mathrm{\Psi }_\kappa ^{(F)}=\frac{i}{2^{3/2}}\mathrm{exp}\left(\frac{\pi \kappa }{2}i\frac{\kappa }{2}\mathrm{ln}\left(\frac{x_+}{x_{}}\right)\right)H_{i\kappa }^{(2)}\left(m\sqrt{x_{}x_+}\right).$$ (73) $$\mathrm{\Psi }_\kappa ^{(L)}=\frac{1}{\pi \sqrt{2}}\mathrm{exp}\left(\frac{\pi \kappa }{2}i\frac{\kappa }{2}\mathrm{ln}\left(\frac{x_+}{x_{}}\right)\right)K_{i\kappa }\left(m\sqrt{x_{}x_+}\right).$$ (74) $$\mathrm{\Psi }_\kappa ^{(P)}=\frac{i}{2^{3/2}}\mathrm{exp}\left(\frac{\pi \kappa }{2}i\frac{\kappa }{2}\mathrm{ln}\left(\frac{x_+}{x_{}}\right)\right)H_{i\kappa }^{(1)}\left(m\sqrt{(x_{})(x_+)}\right).$$ (75) After transition $`PR`$ we return to Eq.(72). The second linearly independent set of solutions for KFG equation $`\mathrm{\Psi }_\kappa ^{}`$ may be obtained from Eqs.(72) - (75) by substitutions $`x_\pm x_\pm e^{i\pi }`$. The possibility of unique recovery of the values of $`\mathrm{\Psi }_\kappa (x)`$ (and hence the values of the field $`\varphi _M(x)`$) in the full MS using its values only in $`R`$-wedge and the requirement of positivity of the energy is an illustration of the content of the Reeh-Schlieder theorem (see e.g. Refs.). 
The splitting (70) corresponds to the four families of orbits of the two-dimensional Lorentz group (compare to §6 of Chapter V in Ref.). As it was already mentioned the functions Eqs.(72) - (75) have branch points at the light cone which corresponds to the four degenerate orbits $`x_\pm =0`$, $`\mathrm{sgn}t=\pm 1`$. For example if $`tz>0`$ then using the expansion of Macdonald function $`K_\nu (\zeta )`$ for $`\zeta 0`$ we obtain $$\mathrm{\Psi }_\kappa ^{(R)}=\frac{1}{2^{3/2}\pi }e^{\pi \kappa /2}\left\{\mathrm{\Gamma }(i\kappa )\left(\frac{mx_+}{2}\right)^{i\kappa }+\mathrm{\Gamma }(i\kappa )\left(\frac{mx_{}}{2}\right)^{i\kappa }+\mathrm{}\right\}.$$ (77) The light cone asymptotic behaviour of the other functions Eqs.(73) - (75) may be derived from Eq.(77) by the described above procedure of analytical extension. For example, $$\mathrm{\Psi }_\kappa ^{(F)}=\frac{1}{2^{3/2}\pi }e^{\pi \kappa /2}\left\{\mathrm{\Gamma }(i\kappa )\left(\frac{mx_+}{2}\right)^{i\kappa }+e^{\pi \kappa }\mathrm{\Gamma }(i\kappa )\left(\frac{mx_{}}{2}\right)^{i\kappa }+\mathrm{}\right\}.$$ (78) After substituting these expressions in Eq.(70) we find the light-cone behaviour of the boost mode which reads $$\mathrm{\Psi }_\kappa (x)=\frac{1}{2^{3/2}\pi }e^{\pi \kappa /2}\left\{\mathrm{\Gamma }(i\kappa )\left(\frac{mx_+}{2}i0\right)^{i\kappa }+\mathrm{\Gamma }(i\kappa )\left(\frac{mx_{}}{2}+i0\right)^{i\kappa }+\mathrm{}\right\}.$$ (79) The distributions $`(\zeta \pm i0)^\lambda =\zeta ^\lambda \theta (\zeta )+e^{\pm i\lambda \pi }(\zeta )^\lambda \theta (\zeta )`$ in Eq.(79) were defined and studied in Ref.. It is clear from Eq.(79) that in spite of the presence of $`\theta `$-functions in Eq.(70) the modes $`\mathrm{\Psi }_\kappa (x)`$ obey the KFG equation without sources. In the vertex of the light cone $`t=z=0`$ (which is the fixed point for the Lorentz group) from Eq.(64) we get $$\mathrm{\Psi }_\kappa (0,0)=\frac{1}{\sqrt{2}}\delta (\kappa ).$$ (80) The same result may be derived either from Eqs.(72) - (75) by taking into account that $$\frac{i}{2}H_{i\kappa }^{(1)}(0)=\frac{i}{2}H_{i\kappa }^{(2)}(0)=\frac{1}{\pi }K_{i\kappa }(0)=\delta (\kappa ),$$ (81) or from Eq.(79). This result means that all modes $`\mathrm{\Psi }_\kappa (x)`$ except for the singular zero mode vanish at the vertex of the light cone. The expression for the annihilation operator $`b_\kappa `$ in terms of field operator on an arbitrary Cauchy surface (compare Eqs.(8),(28)) read $$b_\kappa =(\mathrm{\Psi }_\kappa ,\varphi _M)_M=i\underset{\mathrm{}}{\overset{\mathrm{}}{}}\mathrm{\Psi }_\kappa ^{}(t,z)\frac{\genfrac{}{}{0pt}{}{}{}}{t}\varphi _M(t,z)𝑑z,t0.$$ (82) For the surface $`t=0`$ we have $$\begin{array}{c}\hfill b_\kappa =\frac{i}{\pi \sqrt{2}}\left(e^{\pi \kappa /2}\underset{0}{\overset{\mathrm{}}{}}F_R(z,\kappa )𝑑z+e^{\pi \kappa /2}\underset{\mathrm{}}{\overset{0}{}}F_L(z,\kappa )𝑑z\right),F_{R,L}=K_{i\kappa }(\pm mz)\left(\frac{\varphi _M}{t}\frac{\varphi _M}{z}\right)_{t=0}\pm \\ \\ \hfill \pm \mathrm{\Gamma }(i\kappa )\left(\pm \frac{mz}{2}\right)^{\pm i\kappa }\left(\frac{\varphi _M}{z}\right)_{t=0}+m\left\{K_{i\kappa 1}(\pm mz)\frac{1}{2}\mathrm{\Gamma }(1i\kappa )\left(\pm \frac{mz}{2}\right)^{\pm i\kappa 1}\right\}\varphi _M(0,z),\end{array}$$ (83) with upper (lower) signs corresponding to the indices $`R`$ ($`L`$) consequently. (The derivation of this very important formula is given in Appendix A). 
Note that calculation of the Whightman function $`\mathrm{\Delta }^{(+)}(x,m)`$ by use of Eqs.(63), (68) of course leads to the result (14). It gives the independent proof that the plain waves quantization is unitary equivalent to the boost modes one. ## V The Unruh construction In this Section we will consider the quantum field theory aspect of the Unruh problem. The study of this problem was inspired by Fulling who suggested in 1973 a valid scheme for quantization of a massive scalar field in RS (see Sec.III). Fulling treated RS as a part of MS and hence considered Fulling-Rindler vacuum as a state of quantum field in MS. Therefore he tried to express the annihilation and creation operators of Fulling-Rindler particles (28) in terms of plain-wave operators (8) and argued that the Minkowski vacuum state could be interpreted as a many-particle Fulling-Rindler state. In virtue of the boundary conditions which the field $`\varphi _R`$ must obey in RS and which we have considered in Sec.III Fulling procedure in MS is physically meaningless. But even if one disregards the existence of boundary conditions the quantization scheme suggested by Fulling is incorrect for MS since it implied the assumption that the field modes which he used for quantization were equal to zero outside $`R`$-wedge of MS. In other words only the first term from Eq.(70) for boost modes was involved in the procedure of quantization. But due to the presence of $`\theta `$-functions this term obeys not the KFG equation for the free field in MS but the equation with sources of infinite power localized on the light cone. To avoid this difficulty Unruh with the help of a rather elegant trick made an attempt to construct a new scheme of quantization which should be valid in MS and in some sense repeat the Fulling scheme in $`R`$-wedge. The central point of Unruh suggestion was to use such superpositions $`R_\mu `$, $`L_\mu `$ of boost modes $`\mathrm{\Psi }_\kappa `$ with positive and $`\mathrm{\Psi }_\kappa ^{}`$ with negative frequencies that they vanish either in the left or right wedges of MS and coincide with Fulling modes respectively in the right or left wedges. We will present the explicit form of the Unruh modes and discuss the Unruh quantization scheme in Sec.V A. Then in the Sec.V B we will discuss the so-called Unruh effect. ### A The Unruh quantization The Unruh modes can be expressed in terms of boost modes as follows $$R_\mu (x)=\frac{1}{\sqrt{2\mathrm{sinh}\pi \mu }}\left\{e^{\pi \mu /2}\mathrm{\Psi }_\mu (x)e^{\pi \mu /2}\mathrm{\Psi }_\mu ^{}(x)\right\},L_\mu (x)=\frac{1}{\sqrt{2\mathrm{sinh}\pi \mu }}\left\{e^{\pi \mu /2}\mathrm{\Psi }_\mu ^{}(x)e^{\pi \mu /2}\mathrm{\Psi }_\mu (x)\right\},$$ (84) where $`\mu >0`$. These functions obey the normalization conditions $$(R_\mu ,R_\mu ^{})_M=(L_\mu ,L_\mu ^{})_M=\delta (\mu \mu ^{}),(R_\mu ,R_\mu ^{}^{})_M=(L_\mu ,L_\mu ^{}^{})_M=(R_\mu ,L_\mu ^{})_M=(R_\mu ,L_\mu ^{}^{})_M=0.$$ (85) With the help of Eqs.(72) - (75) one can easily check that for $`x`$ belonging to the right wedge $`R`$ the ”left” Unruh modes $`L_\mu (x)`$ vanish while the ”right” modes $`R_\mu (x)`$ coincide with Fulling modes $`\mathrm{\Phi }_\mu (\xi )`$, Eq.(24). 
For the light cone behaviour of Unruh modes in the right wedge $`R`$ we have $$R_\mu (x)=\mathrm{\Phi }_\mu (\xi )\frac{\sqrt{\mathrm{sinh}\pi \mu }}{2\pi }\left\{\mathrm{\Gamma }(i\mu )\left(\frac{mx_+}{2}\right)^{i\mu }+\mathrm{\Gamma }(i\mu )\left(\frac{mx_{}}{2}\right)^{i\mu }\right\},xR,x_\pm 0.$$ (86) In the future wedge $`F`$ they can be represented as $$R_\mu (x)=\frac{i}{2\sqrt{\mathrm{sinh}\pi \mu }}\left(\frac{x_+}{x_{}}\right)^{i\mu /2}J_{i\mu }\left(m\sqrt{x_+x_{}}\right)\frac{\sqrt{\mathrm{sinh}\pi \mu }}{2\pi }\mathrm{\Gamma }(i\mu )\left(\frac{mx_+}{2}\right)^{i\mu },xF,x_\pm 0,$$ (88) $$L_\mu (x)=\frac{i}{2\sqrt{\mathrm{sinh}\pi \mu }}\left(\frac{x_+}{x_{}}\right)^{i\mu /2}J_{i\mu }\left(m\sqrt{x_+x_{}}\right)\frac{\sqrt{\mathrm{sinh}\pi \mu }}{2\pi }\mathrm{\Gamma }(i\mu )\left(\frac{mx_{}}{2}\right)^{i\mu },xF,x_\pm 0.$$ (89) Note that in spite of Unruh statement made in Ref. functions (84) can not be obtained by analytical extension of Fulling modes (24) through the future and past horizons into the $`F`$\- and $`P`$-wedges. As a matter of fact the Unruh modes are combinations of functions defined on different sheets of Riemannian surface, see the rules for enclosing of brunch points of the boost modes in Sec.IV. By inverting the relations Eq.(84) and substituting the result in Eq.(63) one obtains $$\varphi _M(x)=\underset{0}{\overset{\mathrm{}}{}}𝑑\mu \{r_\mu R_\mu (x)+r_\mu ^{}R_\mu ^{}(x)+l_\mu L_\mu ^{}(x)+l_\mu ^{}L_\mu (x)\}.$$ (90) The Eq.(90) holds everywhere in MS except the origin of Minkowski coordinate frame because the boost mode $`\mathrm{\Psi }_\kappa (x)`$ is singular at the point $`x=(0,0)`$ (see Eq.(80)) and hence it is impossible to perform integration in Eq.(63) at this point over positive and negative values of $`\kappa `$ independently. This means that expansion (90) is valid for the field in MS with cut out point $`x=(0,0)`$ This fact was earlier proposed as the reason for ”thermal properties” of ”Minkowski vacuum” in Refs... The Unruh operators $`r_\mu `$ and $`l_\mu `$ from Eq.(90) are defined by the expressions $$r_\mu =\frac{1}{\sqrt{2\mathrm{sinh}\pi \mu }}\left\{e^{\pi \mu /2}b_\mu +e^{\pi \mu /2}b_\mu ^{}\right\},l_\mu =\frac{1}{\sqrt{2\mathrm{sinh}\pi \mu }}\left\{e^{\pi \mu /2}b_\mu +e^{\pi \mu /2}b_\mu ^{}\right\},(\mu >0).$$ (91) Using Eqs.(67) one can easily show that these operators obey the following commutation relations $$[r_\mu ,r_\mu ^{}^{}]=[l_\mu ,l_\mu ^{}^{}]=\delta (\mu \mu ^{}),[r_\mu ,r_\mu ^{}]=[l_\mu ,l_\mu ^{}]=[r_\mu ,l_\mu ^{}]=[r_\mu ,l_\mu ^{}^{}]=0.$$ (92) Though the Unruh operators satisfy canonical commutation relations (92) it is not enough they could be considered as annihilation, creation operators. A necessary condition for the latter is existence of a stationary ground state for the system, vacuum state. Neither the Unruh modes (84) nor their adjoints are positive frequency solutions for KFG equation in $`P`$\- and $`F`$-wedges with respect to any timelike variable and hence the Unruh operators (91) are composed of creation and annihilation operators which relate to particles with opposite sign of frequency. Therefore it is clear that in the global MS a stationary vacuum state with respect to the $`r`$\- and $`l`$-”particles” can not exist. 
To confirm this point let us see how the first term in parentheses in Eq.(90) behave in $`F`$-wedge at small $`\mu `$ (see Eqs.(72), (84), (91)) $$r_\mu R_\mu (x)\frac{i}{2^{3/2}\pi \mu }J_0\left(m\sqrt{x_+x_{}}\right)(b_0+b_0^{})\frac{1}{\mu },\mu 0,xF.$$ (93) It is clear that the contribution of this term to the integral (90) diverges logarithmically at the lower limit. Similarly we have $$l_\mu ^{}L_\mu (x)\frac{i}{2^{3/2}\pi \mu }J_0\left(m\sqrt{x_+x_{}}\right)(b_0+b_0^{})\frac{1}{\mu },\mu 0,xF.$$ (94) Though singularities cancel in the sum of these two terms in Eq.(90) it is clear that none of the matrix elements of these parts of the field operator $`\varphi _M(x)`$ exist separately. The same is true for contributions of terms with all other Unruh operators. This situation certainly holds also for the $`P`$-wedge. This consideration shows that the Unruh operators (91) may not be interpreted as annihilation, creation operators and the Unruh construction may not be regarded as a valid quantization scheme in the global MS. Nevertheless we may consider the integral (90) only for the world points located entirely in $`R`$\- and $`L`$-wedges i.e. inside the double Rindler wedge. It is important that being restricted to the double Rindler wedge the l.h.s. of Eq.(90) cannot be considered as the field $`\varphi _M(x)`$ in MS. Indeed taking into account that the Unruh (Eq. (84)) and Fulling (Eq.(24)) modes coincide in the $`R`$-wedge we may represent the l.h.s. of Eq.(90) there in the form $$\stackrel{~}{\varphi }_M(x)=\underset{\epsilon 0}{lim}\underset{\epsilon }{\overset{\mathrm{}}{}}𝑑\mu \{r_\mu \mathrm{\Phi }_\mu (x)+r_\mu ^{}\mathrm{\Phi }_\mu ^{}(x)\},xR.$$ (95) Then considering for the sake of simplicity the case of equal times $`t=t^{}=0`$ we have $$0_M|\stackrel{~}{\varphi }_M(0,z)\stackrel{~}{\varphi }_M(0,z^{})|0_M=\frac{1}{\pi ^2}\underset{\epsilon 0}{lim}\underset{\epsilon }{\overset{\mathrm{}}{}}𝑑\mu \mathrm{cosh}\pi \mu K_{i\mu }(mz)K_{i\mu }(mz^{}),z,z^{}>0.$$ (96) Since $`(1/2\pi )K_{i\mu }(0)=0`$ at $`\mu >0`$ (see Eq.(81)) we obtain for $`z^{}=0`$ $$0_M|\stackrel{~}{\varphi }_M(0,z)\stackrel{~}{\varphi }_M(0,0)|0_M=0.$$ (97) At the same time for the Whightman function in MS we have $$0_M|\varphi _M(0,z)\varphi _M(0,0)|0_M=\frac{1}{2\pi ^2}\underset{\mathrm{}}{\overset{\mathrm{}}{}}𝑑\kappa \mathrm{cosh}\pi \kappa K_{i\kappa }(mz)K_{i\kappa }(0)=\frac{1}{2\pi }K_0(mz),z>0,$$ (98) in full agreement with Eq.(14). It is clear after comparison of Eqs.(97),(98) that $`\mathrm{\Delta }^+(x,m)`$ is not equal to zero in the $`R`$-wedge only due to existence of singular zero boost mode (80). Since this mode is absent in the Unruh set (84) the latter is not a complete set of solutions for the KFG equation. Therefore from this moment we will denote the l.h.s. of Eq.(90) as $`\varphi _{DW}(x)`$ instead of $`\varphi _M(x)`$. Consider quantization of the field $`\varphi _{DW}(x)`$ in the double Rindler wedge now. Since there exists a timelike variable, namely Rindler time, with respect to which the Unruh modes (84) are positive frequency solutions of KFG equation we can attach to the Unruh operators the meaning of annihilation, creation operators of particles living in the double Rindler wedge. But the fields in $`R`$\- and $`L`$-wedges are absolutely independent of each other since any two points belonging to different wedges are separated by spacelike interval. 
Therefore double Rindler wedge is a disjoint union of $`R`$\- and $`L`$-wedges and quantization in these wedges should be carried out separately. We have discussed already quantization procedure in the $`R`$-wedge and found that it implies existence of boundary condition ensuring finiteness of the field energy. It is clear that the field in double Rindler wedge $`\varphi _{DW}(x)`$ should satisfy the same boundary condition. Taking into account these considerations we should rewrite Eq.(90) in the form $$\varphi _{DW}(x)=\underset{0}{\overset{\mathrm{}}{}}𝑑\mu \{r_\mu R_\mu (x)+r_\mu ^{}R_\mu ^{}(x)\}+\underset{0}{\overset{\mathrm{}}{}}𝑑\mu \{l_\mu L_\mu ^{}(x)+l_{\mu }^{}{}_{}{}^{}L_\mu (x)\},xRL,$$ (99) where Unruh operators should coincide with the corresponding Fulling operators $`c_\mu `$, $`c_\mu ^{}`$ and $`c_\mu ^{}`$, $`c_{\mu }^{}{}_{}{}^{}`$ for particles living in $`R`$\- and $`L`$-wedges respectfully if the field $`\varphi _{DW}(x)`$ satisfies the boundary condition $$\varphi _{DW}(0,0)=0.$$ (100) We will prove the latter statement for the operator $`r_\mu `$ as an example. Substituting the expression (82) for operators $`b_\kappa ,b_\kappa ^{}`$ into the first formula in Eq.(91) we obtain after transition to Rindler coordinates (33) $$r_\mu =\underset{\rho 0}{lim}(\stackrel{~}{c}_\mu (\rho )c_\mu ^{(s)}(\rho )),$$ (102) $$\stackrel{~}{c}_\mu (\rho )=\frac{i}{\sqrt{2\mu }}\underset{\rho }{\overset{\mathrm{}}{}}\frac{d\rho }{\rho }\psi _\mu (\rho )\left(\frac{\varphi _M}{\eta }i\mu \varphi _M\right)_{\eta =0},$$ (103) $$c_\mu ^{(s)}(\rho )=\frac{i\sqrt{\mathrm{sinh}\pi \mu }}{2\pi }\varphi _M(0,\rho )\left\{\mathrm{\Gamma }(i\mu )\left(\frac{m\rho }{2}\right)^{i\mu }\mathrm{\Gamma }(i\mu )\left(\frac{m\rho }{2}\right)^{i\mu }\right\},$$ (104) Note that both $`\stackrel{~}{c}_\mu (\rho )`$ and $`c_\mu ^{(s)}(\rho )`$ become singular for $`\rho 0`$ if $`\varphi _M(0,0)0`$ but the singularities cancel in their difference in Eq.(102). It is clear from Eqs.(102), (104) that although Eqs.(103), (28) look similar, one can not identify the right Unruh operator $`r_\mu `$ and the Fulling annihilation operator $`c_\mu `$ unless $$\varphi _M(0,0)=0.$$ (105) This condition should be understood in weak sense of course. From Eqs.(63), (80) we however formally have $$\varphi _M(0,0)=\frac{b_0+b_0^{}}{\sqrt{2}}=\underset{\mathrm{}}{\overset{\mathrm{}}{}}\frac{dp}{\sqrt{4\pi ϵ_p}}(a_p+a_p^{}),$$ (106) and therefore for the value of one-particle amplitude (9) at the vertex of the light cone we obtain $$\varphi _f(0,0)=\underset{\mathrm{}}{\overset{\mathrm{}}{}}\frac{dp}{\sqrt{4\pi ϵ_p}}f(p).$$ (107) Of course there are no any physical reasons to require vanishing of this quantity in MS. Thus if one understands Eq.(105) as a condition for the field in MS it means nothing but cutting out the point $`t=z=0`$. Cutting out even a single point is not however a ”painless operation” for MS because it dramatically changes its properties. In particular MS looses its property to be a globally hyperbolic spacetime. As a consequence the Cauchy data (17) for the two-point commutator corresponds now to zero solution of KFG equation unlike what we have in real MS. Since in four-dimensional MS the Pauli-Jordan function has Cauchy data similar to (17) this result is not a specific property of two dimensional case. This inconsistency disappears if we apply the Unruh construction to the double Rindler wedge rather than to MS. 
That means that we should use the expansion (99) instead of (90), substitute $`\varphi _{DW}`$ for $`\varphi _M`$ in Eqs.(103),(104), and read Eq.(105) as Eq.(100). As a result we conclude that the Unruh construction is a valid quantization scheme only in the double Rindler wedge.

### B The Unruh ”effect”

There are several ways to ”give proof” of the existence of the Unruh effect within the framework of conventional quantum field theory . One of them is based on the equation

$$\langle 0_M|r_\mu ^{\dagger }r_\mu ^{}|0_M\rangle =\left(e^{2\pi \mu }-1\right)^{-1}\delta (\mu -\mu ^{}),$$ (108)

which can easily be obtained using Eqs.(91), (68). The l.h.s. of Eq.(108) at $`\mu =\mu ^{}`$ is interpreted, after integration over $`\mu `$, as the Minkowski vacuum expectation value of the number operator of Fulling particles, while the r.h.s. of this equation, after the standard trick, is written as $`\delta (\mu -\mu ^{})|_{\mu =\mu ^{}}=\int_{\mathrm{\infty }}^{\mathrm{\infty }}e^{i(\mu -\mu ^{})\eta }|_{\mu =\mu ^{}}\frac{d\eta }{2\pi }=\frac{g\mathrm{\Delta }\tau }{2\pi }`$, where $`\tau =\eta /g`$ is the proper time and $`g`$ the proper acceleration of the Rindler observer. Then Eq.(108) is transformed to the form

$$\frac{\mathrm{\Delta }\overline{N}}{\mathrm{\Delta }\tau }=\int_0^{\mathrm{\infty }}\frac{d\omega }{2\pi }(e^{2\pi \omega /g}-1)^{-1},$$ (109)

where $`\omega =g\mu `$ is commonly understood as the energy of Fulling quanta. Finally, the integrand in Eq.(109) is identified with the thermal spectrum corresponding to the Davies-Unruh temperature (1). However, as was shown in the previous section, one cannot identify the Unruh operators $`r_\mu `$, $`r_\mu ^{\dagger }`$ with the Fulling annihilation and creation operators $`c_\mu `$, $`c_\mu ^{\dagger }`$ in MS. Moreover, the operators $`r_\mu `$, $`r_\mu ^{\dagger }`$ cannot serve as annihilation and creation operators of any particles in MS. Therefore the l.h.s. of Eq.(108) cannot be interpreted as the Minkowski vacuum expectation value of a number of particles. The Unruh operators coincide with the corresponding Fulling operators for the field obeying the boundary condition (100) in the double Rindler wedge. But an observer living in RS cannot define the Minkowski vacuum state. To conclude that the state of the field is the Minkowski vacuum, one must be able to perform measurements at every point of a Cauchy surface in the whole MS. This is impossible for an observer living in the $`R`$\- (or $`L`$-) wedge, because he cannot perform measurements at points belonging to the $`L`$\- (or $`R`$-) wedge. From the mathematical point of view this statement is a direct consequence of the Reeh-Schlieder theorem, see Refs.. On the other hand, observers living in the double Rindler wedge are not able to perform such measurements due to the existence of the boundary condition (100). Let us also note that the Bose factor in the r.h.s. of Eq.(108) should not necessarily be interpreted as a thermal distribution. This factor appears entirely due to specific properties of the Bogolubov transformation (91) and is encountered in many physical problems where the notion of temperature in no way arises. Two-mode squeezed photon states in quantum optics are a well-known example of such a situation, see also Ref.. The two-dimensional harmonic oscillator can serve as another example, see Appendix B.
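The purely kinematic origin of the Bose factor in Eq.(108) is easy to check numerically. The following minimal Python sketch (ours, not part of the original text) takes a two-mode Bogolubov relation of the form $`r_\mu =\mathrm{cosh}\theta _\mu \,b_\mu +\mathrm{sinh}\theta _\mu \,b_{-\mu }^{\dagger }`$ with $`\mathrm{tanh}\theta _\mu =e^{-\pi \mu }`$ (cf. Eq.(B14) of Appendix B) and confirms that the $`b`$-vacuum expectation value $`\mathrm{sinh}^2\theta _\mu `$ coincides with the ”Planckian” factor $`(e^{2\pi \mu }-1)^{-1}`$, with no notion of temperature entering anywhere:

```python
import numpy as np

# For a two-mode Bogolubov transformation
#   r = cosh(theta) b1 + sinh(theta) b2^dagger,  tanh(theta) = exp(-pi*mu),
# the b-vacuum expectation <r^dagger r> equals sinh^2(theta).
# The identity sinh^2(theta) = 1/(exp(2*pi*mu) - 1) shows that the
# "Planckian" factor in Eq.(108) is fixed by the transformation alone;
# no temperature has been introduced anywhere.
for mu in [0.1, 0.5, 1.0, 2.0]:
    theta = np.arctanh(np.exp(-np.pi * mu))
    n_bogolubov = np.sinh(theta) ** 2
    n_planck_like = 1.0 / (np.exp(2.0 * np.pi * mu) - 1.0)
    print(f"mu={mu:4.1f}  sinh^2(theta)={n_bogolubov:.6e}  "
          f"1/(e^(2 pi mu)-1)={n_planck_like:.6e}")
```

The same algebra underlies two-mode squeezing in quantum optics, which is why this factor cannot by itself be taken as evidence of thermality.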
Another ”derivation” of the Unruh effect is based on the relation

$$|0_M\rangle =Z^{-\frac{1}{2}}\sum_{n=0}^{\mathrm{\infty }}\int_0^{\mathrm{\infty }}d\mu _1\mathrm{}\int_0^{\mathrm{\infty }}d\mu _n\,e^{-\pi \sum_{i=1}^n\mu _i}|1_{\mu _1},\mathrm{}1_{\mu _n}\rangle _L|1_{\mu _1},\mathrm{}1_{\mu _n}\rangle _R.$$ (110)

This formula ”determines the $`r`$\- and $`l`$-particle content of the Minkowski vacuum” and allows one to introduce the density matrix describing states of the field in the $`R`$-wedge, see e.g. . The latter is achieved by taking the tensor product of the r.h.s. of Eq.(110) with its dual and then taking the trace over the states of the field in the $`L`$-wedge. So for an arbitrary observable $`\mathcal{B}`$ depending on the ”values of the field” $`\varphi _M(x)`$ for $`x`$ belonging only to the right Rindler wedge $`R`$, we have

$$\langle 0_M|\mathcal{B}|0_M\rangle =\mathrm{Sp}(\rho _R\mathcal{B}),\qquad \rho _R=Z^{-1}\mathrm{exp}(-H_R/T_{DU}).$$ (111)

In this equation $`\rho _R`$ is the density matrix and $`H_R=gK=g\int_0^{\mathrm{\infty }}\mu c_\mu ^{\dagger }c_\mu d\mu `$ is the second-quantized Hamiltonian with respect to the proper time $`\tau `$ of the accelerating observer. However, the l.h.s. of Eq.(110) may not be considered as the Minkowski vacuum state because, as we have shown in the previous section, the notion of Rindler particles, which is essentially used in the derivation of Eq.(110), makes sense only in the double Rindler wedge rather than in global MS. Therefore Eq.(110) could describe the vacuum state only in the double Rindler wedge. But it loses any physical meaning if one takes into account the existence of the boundary condition (100). Indeed, the derivation of Eq.(110) assumes (see e.g. ) that the one-particle Hilbert space in the double Rindler wedge is a direct sum of the one-particle Hilbert spaces in the $`R`$\- and $`L`$-wedges, $`\mathcal{H}_{DW}=\mathcal{H}_R\oplus \mathcal{H}_L`$, and that the Fock space of states of the field in the double Rindler wedge $`\mathcal{F}(\mathcal{H}_{DW})`$ is a tensor product of the Fock spaces on $`\mathcal{H}_R`$ and $`\mathcal{H}_L`$, $`\mathcal{F}(\mathcal{H}_{DW})\simeq \mathcal{F}(\mathcal{H}_R)\otimes \mathcal{F}(\mathcal{H}_L)`$. But in virtue of the boundary condition, which is equivalent to cutting out the vertex of the light cone, the $`R`$\- and $`L`$-wedges have no common points and therefore never interact. Therefore only such superpositions of state vectors from $`\mathcal{F}(\mathcal{H}_{DW})`$ can have physical sense which do not contain correlations between $`r`$\- and $`l`$-particles. This ”superselection rule” prohibits states of the type of Eq.(110). Besides, the normalization constant $`Z`$ in Eq.(110), which also has the meaning of the partition function in Eq.(111), is infinite, namely

$$Z=\mathrm{exp}(\delta (0)\pi /12),$$ (112)

see Appendix B. The divergence of the constant $`Z`$ means that the representations of the canonical commutation relations in terms of the Unruh and boost mode operators are unitarily inequivalent. There could exist two ways to formulate Eq.(111) in a mathematically meaningful way and avoid these difficulties. The first one is to place the field in a box, which in this problem could be constructed from two uniformly accelerated mirrors moving in the right and left Rindler wedges . However, such a regularization again leads to consideration of the double RS as the physical spacetime of the observer. The second opportunity is to use the algebraic approach and the notion of a KMS state as the definition of a thermal equilibrium state. Below we discuss the formulation of Eq.(111) which was developed in the framework of the algebraic approach to quantum field theory.
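To make the formal step from Eq.(110) to Eq.(111) transparent, here is a single-mode toy version (a sketch of ours; a single Unruh frequency $`\mu `$ is kept and all normalizations are finite only because of the Fock-space truncation). Tracing the pure state over the left factor yields a diagonal, Boltzmann-weighted reduced matrix with $`\beta \omega =2\pi \mu `$:

```python
import numpy as np

# Single-mode toy version of the step from Eq.(110) to Eq.(111):
# take |Omega> = Z^{-1/2} sum_n e^{-pi*mu*n} |n>_L |n>_R (one Unruh
# frequency mu only) and trace out the L factor.  The reduced matrix is
# diagonal, rho_R(n) = Z^{-1} e^{-2*pi*mu*n}, i.e. "thermal" with
# beta*omega = 2*pi*mu, which is the Davies-Unruh assignment.
N, mu = 40, 0.3                        # Fock truncation, Unruh frequency
amp = np.exp(-np.pi * mu * np.arange(N))
amp /= np.linalg.norm(amp)             # normalized Schmidt coefficients
psi = np.zeros((N, N))
psi[np.arange(N), np.arange(N)] = amp  # |Omega> as an L x R matrix
rho_R = psi.T @ psi                    # partial trace over L
boltz = np.exp(-2.0 * np.pi * mu * np.arange(N))
boltz /= boltz.sum()
print("max deviation from Boltzmann weights:",
      np.abs(np.diag(rho_R) - boltz).max())
```

In the full problem it is precisely this step that fails: Eq.(110) is not a normalizable state, and with the boundary condition (100) correlations between $`r`$\- and $`l`$-particles are prohibited.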
## VI Algebraic approach

The algebraic approach to quantum theory allows one to compare states which cannot be represented by vectors or density matrices in the same Hilbert space representation of the algebra of observables of the system. This is because the states in this approach are primarily considered as positive normalized linear functionals over the algebra of observables rather than as vectors in a Hilbert space. The physical meaning of a state $`\omega `$ in the algebraic approach is that the value $`\omega (A)`$ is the expectation value of the observable $`A`$ in the state $`\omega `$. The algebraic counterpart of the usual thermal equilibrium state is called the KMS state . Unlike the usual thermal equilibrium state, the KMS state exists even if the partition function of the system diverges. In the language of the algebraic approach, the Unruh effect means that the algebraic state corresponding to the Minkowski vacuum coincides with the KMS state for the double Fulling quantization. In this section we will show that such a conclusion implies the existence of a boundary condition at the origin of the Minkowski reference frame. Our consideration will make clear that the same inconsistencies are present in the algebraic derivation of the Unruh effect as in the traditional approach.

### A One mode model

In order to use simple and suitable notation, let us first present the construction of the KMS state over a one-mode quantum system (i.e. a one-dimensional harmonic oscillator). We will see that the case of the free Bose field in $`D=1+1`$ MS requires just a trivial generalization. Let $`R`$ be a one-mode quantum system. Its algebra of observables (which we denote by $`𝒰_R`$) may be characterized either in terms of the unbounded generators, the annihilation and creation operators $`r`$, $`r^{\dagger }`$ satisfying the commutation relation

$$[r,r^{\dagger }]=1,$$ (113)

or in terms of the Weyl generators

$$W(f)=\mathrm{exp}(fr-f^{*}r^{\dagger }),$$ (114)

labeled by a complex number $`f`$ (the unbounded operators $`r`$, $`r^{\dagger }`$ may be expressed in terms of the Weyl generators, for example $`r={\scriptscriptstyle \frac{1}{2}}(W^{}(f)-iW^{}(if))|_{f=0}`$, where the derivatives are taken with respect to $`f`$), and which satisfy the following requirements:

$$W(f_1)W(f_2)=\mathrm{exp}({\scriptscriptstyle \frac{1}{2}}(f_1^{*}f_2-f_2^{*}f_1))W(f_1+f_2),\qquad W(f)^{\dagger }=W(-f).$$ (115)

An arbitrary observable $`\mathcal{B}`$ may be written in the form $`\mathcal{B}=\mathcal{B}(r,r^{\dagger })`$. The time evolution of observables is determined by the equation

$$\mathcal{B}(t)=U^{\dagger }(t)\mathcal{B}U(t),\qquad U(t)=\mathrm{exp}(iH_Rt),$$ (116)

with $`H_R=ϵr^{\dagger }r`$ being the one-mode Hamiltonian. In particular,

$$\mathcal{B}(t)=\mathcal{B}(re^{iϵt},r^{\dagger }e^{-iϵt}),\qquad W(f,t)=W(f(t)),\qquad f(t)=fe^{iϵt}.$$ (117)

The vacuum state is defined by the relation $`r|0_R\rangle =0`$, and the Hilbert space $`\mathcal{H}_R`$ where the considered operators act is generated by the basis $`|n_R\rangle =(r^{\dagger })^n/\sqrt{n!}|0_R\rangle `$. The algebraic vacuum state $`\omega _R`$ is a prescription for calculating expectation values in the vacuum state:

$$\omega _R(\mathcal{B})=\langle 0_R|\mathcal{B}(r,r^{\dagger })|0_R\rangle .$$ (118)

The vacuum expectation value of the Weyl generator $`W(f)`$ may easily be shown to be equal to

$$\omega _R(W(f))=\mathrm{exp}(-{\scriptscriptstyle \frac{1}{2}}|f|^2).$$ (119)

The thermal equilibrium state with inverse temperature $`\beta `$ is defined as follows:

$$\omega _R^{(\beta )}(\mathcal{B})=\mathrm{Sp}(\rho _\beta \mathcal{B}),\qquad \rho _\beta =Z_\beta ^{-1}\mathrm{exp}(-\beta H_R)=Z_\beta ^{-1}\sum_n\mathrm{exp}(-\beta ϵn)|n_R\rangle \langle n_R|,$$ (120)

where $`Z_\beta =\sum_n\mathrm{exp}(-\beta ϵn)`$. Of course $`Z_\beta `$ is finite for this simple one-mode model, $`Z_\beta =(1-e^{-\beta ϵ})^{-1}`$.
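Relations such as Eq.(119) are easy to verify in a truncated Fock space. The short Python sketch below (ours; the truncation size $`N`$ and the value of $`f`$ are arbitrary) builds the annihilation matrix, exponentiates $`fr-f^{*}r^{\dagger }`$ and compares the vacuum matrix element with $`e^{-|f|^2/2}`$:

```python
import numpy as np
from scipy.linalg import expm

# Check of Eq.(119) in a truncated Fock space: with W(f) = exp(f r - f* r^dag),
# the vacuum expectation <0|W(f)|0> should equal exp(-|f|^2/2).
N = 60                                    # Fock-space truncation
r = np.diag(np.sqrt(np.arange(1, N)), 1)  # annihilation operator, r|n>=sqrt(n)|n-1>
f = 0.7 - 0.4j
W = expm(f * r - np.conj(f) * r.conj().T)
overlap = W[0, 0]                         # <0|W(f)|0>
print(overlap, np.exp(-abs(f) ** 2 / 2))  # the two numbers should agree
```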
Since, however, this may not be the case for quantum systems with an infinite number of degrees of freedom, it is important to reformulate Eq.(120) in a form not containing the value of $`Z_\beta `$ explicitly. In order to do this one should introduce another copy of the system $`R`$, say $`L`$, which does not interact with $`R`$, and consider the combined quantum system $`RL^{}`$, the ”double system”. Normally, in applications of KMS states to statistical mechanics, this additional copy of the initial system is considered as a mathematical trick used with the purpose of describing the thermal state by a vector in a Hilbert space, and there are no attempts to interpret it as really existent. The asterisk here indicates that we choose the Hamiltonian of the combined system to be

$$H_{RL^{}}=H_R\otimes 1-1\otimes H_L$$ (121)

rather than $`H_R\otimes 1+1\otimes H_L`$ (compare to Eq.(B1)). One can interpret this by saying that the time direction at $`L`$ is inverted. The vacuum state of the system $`RL^{}`$ is defined by the relations $`r|0_{RL^{}}\rangle =0,l|0_{RL^{}}\rangle =0,`$ where $`l`$ is the annihilation operator for $`L`$, and the vectors $`|n_R\rangle |m_L\rangle =\frac{(r^{\dagger })^n}{\sqrt{n!}}\frac{(l^{\dagger })^m}{\sqrt{m!}}|0_{RL^{}}\rangle `$ constitute the basis of the Hilbert space $`\mathcal{H}_{RL^{}}`$. Let us introduce the state

$$|\mathrm{\Omega }_\beta \rangle =Z_\beta ^{-1/2}\sum_n\mathrm{exp}(-\beta ϵn/2)|n_R\rangle |n_L\rangle .$$ (122)

It can be immediately verified that the thermal expectation value (120) may be rewritten in the form

$$\omega _R^{(\beta )}(\mathcal{B})=\langle \mathrm{\Omega }_\beta |\mathcal{B}(r\otimes 1,r^{\dagger }\otimes 1)|\mathrm{\Omega }_\beta \rangle ,$$ (123)

where the calculation is performed in the space $`\mathcal{H}_{RL^{}}`$. Note that the expectation value (123) does not depend on time. This property served as the reason for the choice of the Hamiltonian $`H_{RL^{}}`$ in the form (121). Now let us consider the operators

$$b_+=\mathrm{cosh}\theta \,r\otimes 1-\mathrm{sinh}\theta \,1\otimes l^{\dagger },\qquad b_{}=-\mathrm{sinh}\theta \,r^{\dagger }\otimes 1+\mathrm{cosh}\theta \,1\otimes l,$$ (124)

where $`\theta `$ is defined by the equation $`\mathrm{tanh}\theta =e^{-\beta ϵ/2}`$. Note that these operators depend on time as $`b_\pm (t)\propto e^{\pm iϵt}`$. The key observation is that these operators annihilate the state $`|\mathrm{\Omega }_\beta \rangle `$ and, together with $`b_+^{\dagger }`$, $`b_{}^{\dagger }`$, satisfy the usual commutation relations

$$b_\pm |\mathrm{\Omega }_\beta \rangle =0,\qquad [b_+,b_{}]=0,\qquad [b_+,b_{}^{\dagger }]=0,\qquad [b_\pm ,b_\pm ^{\dagger }]=1.$$ (125)

The span of the vectors of the form $`\frac{(b_+^{\dagger })^n}{\sqrt{n!}}\frac{(b_{}^{\dagger })^m}{\sqrt{m!}}|\mathrm{\Omega }_\beta \rangle `$ constitutes the Hilbert space $`\mathcal{H}`$, which in our one-mode case coincides with the space $`\mathcal{H}_{RL^{}}`$. Expressing the operators $`r,r^{\dagger }`$ in terms of the operators $`b_\pm `$, we can rewrite Eq.(123) in the form

$$\omega _R^{(\beta )}(\mathcal{B})=\langle \mathrm{\Omega }_\beta |\mathcal{B}(b_+\mathrm{cosh}\theta +b_{}^{\dagger }\mathrm{sinh}\theta ,b_{}\mathrm{sinh}\theta +b_+^{\dagger }\mathrm{cosh}\theta )|\mathrm{\Omega }_\beta \rangle .$$ (126)

The r.h.s. of this equation does not contain $`Z_\beta `$ and, in virtue of Eqs.(125), is just the vacuum expectation value of the observable $`\mathcal{B}`$, but calculated in the space $`\mathcal{H}`$ and with respect to the vacuum $`|\mathrm{\Omega }_\beta \rangle `$. In general, the algebraic state defined by Eq.(126) is called ”the KMS state”. Another equivalent definition of the KMS state is given by the requirement $`\omega _R^{(\beta )}(\mathcal{B}_1(t)\mathcal{B}_2(t^{}))=\omega _R^{(\beta )}(\mathcal{B}_2(t^{})\mathcal{B}_1(t+i\beta ))`$, where $`\mathcal{B}_1`$, $`\mathcal{B}_2`$ are arbitrary observables of the system $`R`$ . In our case the KMS state (126) is just the usual thermal equilibrium state.
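The defining properties (122)-(125) of the double system can be checked directly in a truncated two-mode Fock space. In the sketch below (ours; the parameter values are illustrative) the state $`|\mathrm{\Omega }_\beta \rangle `$ is annihilated by $`b_+`$ up to truncation error and reproduces the thermal occupation $`(e^{\beta ϵ}-1)^{-1}`$ as in Eq.(123):

```python
import numpy as np

# Truncated-Fock check of Eqs.(122)-(125): build |Omega_beta>, the operator
# b_+ = cosh(th) r x 1 - sinh(th) 1 x l^dag with tanh(th) = exp(-beta*eps/2),
# and verify b_+|Omega_beta> ~ 0 together with the thermal occupation
# <Omega| r^dag r x 1 |Omega> = 1/(exp(beta*eps) - 1).
N, beta_eps = 30, 1.2
a = np.diag(np.sqrt(np.arange(1, N)), 1)      # one-mode annihilation operator
I = np.eye(N)
r, l = np.kron(a, I), np.kron(I, a)           # r acts on R, l acts on L
th = np.arctanh(np.exp(-beta_eps / 2))
amp = np.exp(-beta_eps * np.arange(N) / 2)
amp /= np.linalg.norm(amp)
omega = np.zeros(N * N)
omega[np.arange(N) * N + np.arange(N)] = amp  # sum_n amp_n |n>_R |n>_L
b_plus = np.cosh(th) * r - np.sinh(th) * l.conj().T
print("|b_+ Omega| =", np.linalg.norm(b_plus @ omega))   # ~ 0 (truncation)
n_r = omega @ (r.conj().T @ r) @ omega
print(n_r, 1 / (np.exp(beta_eps) - 1))                   # should match
```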
One can generalize the definition of the KMS state to the observables of the double system $`RL^{}`$, which have the form $`A(r,r^{\dagger },l,l^{\dagger })`$, by setting

$$\omega ^{(\beta )}(A)=\langle \mathrm{\Omega }_\beta |A(b_+\mathrm{cosh}\theta +b_{}^{\dagger }\mathrm{sinh}\theta ,b_{}\mathrm{sinh}\theta +b_+^{\dagger }\mathrm{cosh}\theta ,b_{}\mathrm{cosh}\theta +b_+^{\dagger }\mathrm{sinh}\theta ,b_{}^{\dagger }\mathrm{cosh}\theta +b_+\mathrm{sinh}\theta )|\mathrm{\Omega }_\beta \rangle .$$ (127)

The state defined by Eq.(127) is called the double KMS state . The given definitions of the KMS and double KMS states may be generalized in an evident way to the case of any finite or infinite number of degrees of freedom. For the latter case the usual definition of the thermal equilibrium state is in general not valid. Let us give the formulas for the expectation values of the Weyl generators in the KMS and double KMS states. By a simple computation one gets

$$\omega _R^{(\beta )}(W(f))=\mathrm{exp}\left(-\frac{1}{2}\mathrm{coth}\left(\frac{\beta ϵ}{2}\right)|f|^2\right).$$ (128)

We define the Weyl generator $`W(f_R,f_L)`$ for the double system by the equation

$$W(f_R,f_L)=\mathrm{exp}(f_Rr-f_R^{*}r^{\dagger }-f_L^{*}l+f_Ll^{\dagger }).$$ (129)

The advantage of this definition is that the time dependence of the Weyl generator takes the form $`W(f_R,f_L,t)=W(f_Re^{iϵt},f_Le^{iϵt})`$, or in other words both $`f_R`$ and $`f_L`$ are positive frequency solutions of the ”one-mode” wave equation $`(\partial _t^2+ϵ^2)f=0`$. The expectation value of the operator (129) in the double KMS state may be shown to be equal to

$$\omega ^{(\beta )}(W(f_R,f_L))=\mathrm{exp}\left\{-\frac{1}{2}\mathrm{coth}\left(\frac{\beta ϵ}{2}\right)(|f_R|^2+|f_L|^2)-\frac{1}{\mathrm{sinh}(\beta ϵ/2)}\mathrm{Re}f_R^{*}f_L\right\}.$$ (130)

### B The Unruh problem in algebraic approach

Now let us turn back to the Unruh problem. At first sight, Eqs.(124) and the definition of the state $`|\mathrm{\Omega }_\beta \rangle `$ (125) look very similar to the inverted Eqs.(91) expressing the boost operators $`b_\kappa `$ in terms of the Unruh operators $`r_\mu `$, $`l_\mu `$ and the definition of the state $`|0_M\rangle `$ in Eq.(68). But we will show that it is not correct to apply the notion of the double KMS state to the Unruh problem. The physical reason is that the free field in Minkowski spacetime cannot be decomposed into two non-interacting fields living in the interiors of the right and left Rindler wedges. To reformulate Eq.(111) in terms of the algebraic approach, let us introduce the required definitions. The algebra $`𝒰`$ of observables of the free field in MS is a $`C^{}`$ algebra with Weyl generators $`W(\mathrm{\Phi })=\mathrm{exp}\{(\varphi _M,\mathrm{\Phi })_M\}`$, where $`\varphi _M(x)`$ is the quantum field operator and $`\mathrm{\Phi }(x)`$ is a real-valued solution of the KFG equation (3). More precisely, $`𝒰`$ contains arbitrary finite linear combinations of Weyl generators and their limits in the sense of convergence in the $`C^{}`$ norm. For a complete set of positive frequency orthonormal modes $`\mathrm{{\rm Y}}_\lambda (x)`$ we define $`f_\lambda `$ by the equation $`f_\lambda =(\mathrm{{\rm Y}}_\lambda ,\mathrm{\Phi })_M`$ and rewrite $`W(\mathrm{\Phi })`$ in the form

$$W(\mathrm{\Phi })=\mathrm{exp}\left\{\int d\lambda (f_\lambda \mathrm{a}_\lambda -f_\lambda ^{*}\mathrm{a}_\lambda ^{\dagger })\right\},$$ (131)

where $`\mathrm{a}_\lambda `$ is the appropriate annihilation operator.
The Weyl relations take the form

$$W(\mathrm{\Phi }_1)W(\mathrm{\Phi }_2)=\mathrm{exp}\left\{\frac{1}{2}\int d\lambda (f_{1\lambda }^{*}f_{2\lambda }-f_{2\lambda }^{*}f_{1\lambda })\right\}W(\mathrm{\Phi }_1+\mathrm{\Phi }_2),\qquad W(\mathrm{\Phi })^{\dagger }=W(-\mathrm{\Phi }),$$ (132)

(compare to Eqs.(114),(115)). Note that the solutions $`\mathrm{\Phi }(x)`$ are required to decrease sufficiently fast at spatial infinity (say, to have compact support on any Cauchy surface). The expectation value of the Weyl generator $`W(\mathrm{\Phi })`$ in the Minkowski vacuum state $`\omega _M`$ may be obtained by generalization of the formula (119):

$$\omega _M(W(\mathrm{\Phi }))=\mathrm{exp}\left(-\frac{1}{2}\int_{\mathrm{\infty }}^{\mathrm{\infty }}d\kappa |f_\kappa |^2\right),$$ (133)

where the coefficients $`f_\kappa =(\mathrm{\Psi }_\kappa ,\mathrm{\Phi })_M`$ are defined with respect to the complete set of boost modes $`\mathrm{\Psi }_\kappa `$ (64). By inverting the relations (84) one can rewrite Eq.(133) in terms of the Unruh modes. The result is

$$\omega _M(W(\mathrm{\Phi }))=\mathrm{exp}\left\{-\frac{1}{2}\int_0^{\mathrm{\infty }}d\mu \left(\mathrm{coth}\pi \mu \,(|f_\mu ^{(L)}|^2+|f_\mu ^{(R)}|^2)+\frac{2}{\mathrm{sinh}\pi \mu }\mathrm{Re}(f_\mu ^{(L)})^{*}f_\mu ^{(R)}\right)\right\},$$ (134)

where $`f_\mu ^{(R)}=(R_\mu ,\mathrm{\Phi })_M`$, $`f_\mu ^{(L)}=(L_\mu ,\mathrm{\Phi })_M`$. Finite linear combinations of elements of $`𝒰`$ of the form $`W(\mathrm{\Phi })`$, with $`\mathrm{\Phi }`$ vanishing in the closed wedge $`\overline{L}`$, together with the limits of sequences of such linear combinations in the uniform sense (i.e. limits in the sense of convergence in the $`C^{}`$ norm), constitute a $`C^{}`$ subalgebra $`𝒰_R`$ of $`𝒰`$, which is called the right wedge algebra. The left wedge algebra $`𝒰_L`$ and the double wedge algebra $`\stackrel{~}{𝒰}`$ are defined similarly, by restricting to solutions which vanish in the closed wedge $`\overline{R}`$ and in a neighborhood of $`h_0`$, respectively; see Fig.1. Now let us evaluate the expectation value of the Weyl generator in a double KMS state with temperature $`\beta ^{-1}`$ with respect to the Fulling quantization prescription. By generalizing Eq.(130) one obtains (compare to section 1.4 in Ref.)

$$\stackrel{~}{\omega }_F^{(\beta )}(W(\mathrm{\Phi }))=\mathrm{exp}\left\{-\frac{1}{2}\int_0^{\mathrm{\infty }}d\mu \left(\mathrm{coth}\left(\frac{\beta \mu }{2}\right)(|\zeta _\mu ^{(R)}|^2+|\zeta _\mu ^{(L)}|^2)+\frac{2}{\mathrm{sinh}(\beta \mu /2)}\mathrm{Re}(\zeta _\mu ^{(L)})^{*}\zeta _\mu ^{(R)}\right)\right\},$$ (135)

where $`\mathrm{\Phi }=\mathrm{\Phi }_R\oplus \mathrm{\Phi }_L`$, $`\zeta _\mu ^{(R)}=(\mathrm{\Phi }_\mu ^{(R)},\mathrm{\Phi })_R`$, $`\zeta _\mu ^{(L)}=(\mathrm{\Phi }_\mu ^{(L)},\mathrm{\Phi })_L`$, $`\mathrm{\Phi }_\mu ^{(R)}`$ is the complete set of Fulling modes (24) and $`\mathrm{\Phi }_\mu ^{(L)}`$ their analog in the wedge $`L`$. Note that the expression in the r.h.s. of Eq.(135) is well-defined only if the test functions $`\mathrm{\Phi }`$ in the double RS obey the requirement $`\int_0^{\mathrm{\infty }}\frac{d\mu }{\mu }\left|\zeta _\mu ^{(R,L)}\right|^2<\mathrm{\infty },`$ which is referred to as the regularity condition in Ref. (compare to Eq.(56)). Let us obtain the relation between the coefficients $`f_\mu ^{(R)}`$ and $`\zeta _\mu ^{(R)}`$ in Eqs.(134), (135).
For this purpose we first evaluate $`f_\mu ^{(R)}=(R_\mu ,\mathrm{\Phi })_M`$, supposing that the surface of integration is a surface of constant positive small $`t`$, and then take the limit $`t\to 0`$. For the Unruh mode we use the expression

$$R_\mu (x)=R_\mu ^{(\mathrm{R})}(x)\theta (x_+)\theta (-x_{})+R_\mu ^{(\mathrm{F})}(x)\theta (x_+)\theta (x_{})+R_\mu ^{(\mathrm{P})}(x)\theta (-x_+)\theta (-x_{}),$$ (136)

which can easily be obtained from Eqs.(70),(84). To calculate the inner product

$$f_\mu ^{(R)}=i\int_{\mathrm{\infty }}^{\mathrm{\infty }}dz\,R_\mu ^{*}(x)\stackrel{\leftrightarrow }{\partial _t}\mathrm{\Phi }(x),$$ (137)

we need the time derivative of (136). Taking into account that $`t>0`$, we write it in the following way:

$$\begin{array}{c}\hfill \frac{\partial }{\partial t}R_\mu (x)=\left(\frac{\partial }{\partial t}R_\mu ^{(\mathrm{R})}(x)\right)\theta (x_+)\theta (-x_{})+\left(\frac{\partial }{\partial t}R_\mu ^{(\mathrm{F})}(x)\right)\theta (x_+)\theta (x_{})+R_\mu ^{(\mathrm{F})}(x)\delta (x_+)+\\ \hfill +\left\{R_\mu ^{(\mathrm{F})}(x)-R_\mu ^{(\mathrm{R})}(x)\right\}\delta (x_{}).\end{array}$$ (138)

It is not very hard to verify, using Eqs.(84), (24), (72) and (33), that

$$\lim_{t\to 0}i\int_0^{\mathrm{\infty }}dz\,R_\mu ^{(\mathrm{R})*}(x)\stackrel{\leftrightarrow }{\partial _t}\mathrm{\Phi }(x)=\zeta _\mu ^{(R)}.$$ (139)

The second term in the r.h.s. of Eq.(138) vanishes when $`t\to 0`$. Therefore we consider only the last two terms. Substituting the expansions Eq.(86) and Eq.(88) into Eq.(137) and taking the limit $`t\to 0`$, we obtain (compare to Eqs.(102)-(104); see also Appendix A)

$$f_\mu ^{(R)}=\zeta _\mu ^{(R)}+\frac{i}{2\pi }\sqrt{\mathrm{sinh}\pi \mu }\lim_{z\to 0}\mathrm{\Phi }(0,z)\left\{\mathrm{\Gamma }(i\mu )\left(\frac{mz}{2}\right)^{-i\mu }-\mathrm{\Gamma }(-i\mu )\left(\frac{mz}{2}\right)^{i\mu }\right\},$$ (140)

and a similar relation between $`f_\mu ^{(L)}`$ and $`\zeta _\mu ^{(L)}`$. Comparing Eqs.(134),(135), one concludes that the equation

$$\omega _M(W(\mathrm{\Phi }))=\stackrel{~}{\omega }_F^{(2\pi )}(W(\mathrm{\Phi }))$$ (141)

holds if and only if

$$\mathrm{\Phi }(0,0)=0,$$ (142)

(compare to Eq.(100)), and hence by linearity

$$\omega _M=\stackrel{~}{\omega }_F^{(2\pi )}\text{on }\stackrel{~}{𝒰}.$$ (143)

This equation is the analog of Eq.(111) in the algebraic approach (see Ref.). The restriction of Eq.(143) to the right wedge algebra $`𝒰_R`$ is usually referred to as the Bisognano-Wichmann theorem . We see that Eq.(143) holds only on the double wedge subalgebra $`\stackrel{~}{𝒰}\subset 𝒰`$, which corresponds to the space of those solutions of the field equation which satisfy the boundary condition (142). The r.h.s. of Eq.(143) does not admit continuation to the whole algebra $`𝒰`$, while the l.h.s. does admit such a continuation. Therefore the functionals $`\omega _M`$ and $`\stackrel{~}{\omega }_F^{(2\pi )}`$ describe different algebraic states over the algebra of observables of the free field in MS. Let us consider two opportunities to interpret Eq.(143). The first one is to treat $`𝒰`$ as the true algebra of observables for the accelerated observer. In this case Eq.(143) does not hold for all observables, and therefore the Minkowski vacuum does not coincide with the thermal state $`\stackrel{~}{\omega }_F^{(2\pi )}`$. The second opportunity is to propose that $`\stackrel{~}{𝒰}`$ should be the true algebra of observables for the accelerated observer. In this case the true Minkowski vacuum state $`\omega _M`$ (which is a state over the algebra $`𝒰`$) is an unrealizable state for such an observer.
Then Eq.(143) is satisfied for all physical observables, and hence the restriction $`\omega _M|_{\stackrel{~}{𝒰}}`$ of the state $`\omega _M`$ to $`\stackrel{~}{𝒰}`$ coincides with the state $`\stackrel{~}{\omega }_F^{(2\pi )}`$ and admits an interpretation in terms of Fulling-Unruh quanta. But let us stress that the Minkowski vacuum state is physically distinguished among the other possible states of the theory not so much by its explicit expression (which of course is inherited by its restrictions to the subalgebras of $`𝒰`$) as by its key physical properties, such as Poincaré invariance, the spectral condition, local commutativity and the cluster property (see the Wightman reconstruction theorem, ). Although some of these properties are inherited by the restrictions to subalgebras, the other properties, such as Poincaré invariance, generally are not (the lack of Poincaré invariance for a field theory with boost time evolution is a consequence of the fact that the 1-parameter boost group does not constitute a normal subgroup of the Poincaré group). But exactly these properties are meant when one assumes that some quantum system is prepared initially in the state of Minkowski vacuum. Since the subalgebra $`\stackrel{~}{𝒰}`$ is not Poincaré invariant, there are no physical reasons to consider the state $`\omega _M|_{\stackrel{~}{𝒰}}`$ as the initial state of the field. Moreover, since the automorphisms of $`\stackrel{~}{𝒰}`$ corresponding to boost time evolution do not mix observables from $`𝒰_R`$ and $`𝒰_L`$, there is still no way for a Rindler observer to prepare the state $`\omega _M|_{\stackrel{~}{𝒰}}`$. We see that consideration of the Unruh problem in the algebraic approach leads to the same results as in the usual field-theoretical approach.

## VII Conclusions

We have analyzed the Unruh problem in the framework of quantum field theory and have shown that the Unruh quantization scheme is valid in the double Rindler wedge rather than in MS. The double Rindler wedge is composed of two disjoint regions which do not communicate with each other causally. Moreover, the Unruh construction implies the existence of a boundary condition at the point $`h_0`$ (or a 2-dimensional plane in the case of $`1+3`$-dimensional spacetime) of MS. Such a boundary condition may be interpreted as a topological obstacle which gives rise to a superselection rule prohibiting any correlations between $`r`$\- and $`l`$-particles. Thus a Rindler observer living in the $`R`$-wedge can in no way influence the part of the field in the $`L`$-wedge, and therefore elimination of the invisible ”left” degrees of freedom will have no effect for him. Hence averaging over states of the field in one wedge cannot lead to thermalization of the state in the other. In the algebraic approach the Unruh effect is commonly identified with the Bisognano-Wichmann theorem. According to the Bisognano-Wichmann theorem, the Minkowski vacuum expectation values of only those observables which are entirely localized in the interior of the $`R`$-wedge constitute an algebraic state which satisfies the KMS condition with respect to the Rindler timelike variable $`\eta `$. This statement implies two points essential for its physical interpretation. First, it is assumed that the observer who carries out the measurements lives in MS. Only then could he prepare the Minkowski vacuum state as the initial state of the field. Second, the variable $`\eta `$ must coincide with the proper time of the observer.
Only then can he interpret the KMS state as a thermal bath with the Davies-Unruh temperature. But the Rindler observer can carry out measurements only inside the $`R`$-wedge and hence cannot prepare the Minkowski vacuum state. On the other hand, the variable $`\eta `$ cannot coincide with the proper time of an observer who is inertial, at least asymptotically, in the far past and far future. Nevertheless, only such an observer, for whom inertial in- and out-regions exist, is able to prepare a state with a finite number of particles in MS. These are the reasons why the Bisognano-Wichmann theorem is irrelevant for consideration of the Unruh problem. Hence considerations of the Unruh problem, both in the standard and in the algebraic formulations of quantum field theory, lead us to the conclusion that the principles of quantum field theory give no grounds for the existence of the ”Unruh effect”. Nevertheless there exists another aspect of the Unruh problem, dealing with the behaviour of a particular detector uniformly accelerated in MS. The direct consideration of the behavior of a constantly accelerating physical detector is a very difficult problem, and its treatment in the literature is very contradictory and often simply erroneous. The major difficulty is that an object moving with a constant proper acceleration must be considered as a point object. This is because different points of a finite-size body, rigid with respect to the Rindler coordinate frame, actually move with different accelerations. Thus one should use an elementary particle or a microscopic bound system as a detector. In both cases the detector is a quantum object moving along a definite classical trajectory. Such an assumption is in contradiction with the uncertainty principle; its range of applicability is very limited and therefore it must be used with proper care. Moreover, it was shown by Nikishov and Ritus that elementary particles placed in a constant electric field do not demonstrate the universal thermal response. It is clear that a heavy atom, for which the WKB approach is valid, satisfies the physical requirements for the detector much better than an elementary particle. Unfortunately, a systematic relativistic theory of bound states is still absent. Utilization of nonrelativistic bound systems as detectors was discussed in Ref.. The ionization rate of a heavy ion moving with a constant acceleration was considered there. It was shown that the ionization rate differs from the one obtained by virtue of the detailed balance principle applied to an atom immersed in a thermal bath with the Davies-Unruh temperature. It was also shown that the time of ”thermal ionization” (if it were possible at all) is parametrically much greater than the time of destruction of the atom due to the tunneling ionization process in the electric field. It is worth adding that in the literature the Unruh effect is usually explained by the existence of event horizons for a constantly accelerated observer. But we understand that the notion of a constantly accelerated observer is an inadmissible idealization. It is clear that for any physical object the horizons are absent. We certainly understand that the behavior of accelerated detectors will differ from that of detectors at rest. We admit that under some circumstances detectors of some special configuration will follow the Unruh behavior. But no conclusive proof exists that this behavior is universal and independent of the nature of the detector and the accelerating field.

###### Acknowledgements.

We would like to thank A.A. Starobinski, U. Gerlach and B.S.
Kay for very helpful discussions. N.B. Narozhny and A.M. Fedotov are grateful to R. Ruffini for hospitality at Rome University ”La Sapienza” (Italy). V.A. Belinskii thanks the Institut des Hautes Etudes Scientifiques at Bures-sur-Yvette (France), where a part of the work for this paper was done, for hospitality and support. This work was supported in part by the Russian Fund for Fundamental Research under projects 97-02-16973 and 98-02-17007.

## A Derivation of Eq.(83) for $`b_\kappa `$

To derive Eq.(83) let us introduce a small parameter $`z_0`$ and split the integral in Eq.(82) as follows:

$$\begin{array}{c}b_\kappa =b_\kappa ^{(R)}(z_0)+b_\kappa ^{(0)}(z_0)+b_\kappa ^{(L)}(z_0),\qquad b_\kappa ^{(R)}=i\int_{z_0}^{\mathrm{\infty }}dz\,\mathrm{\Psi }_\kappa ^{(R)*}(0,z)\left\{\frac{\partial \varphi _M}{\partial t}-i\frac{\kappa }{z}\varphi _M\right\}_{t=0},\hfill \\ \\ b_\kappa ^{(0)}=i\int_{-z_0}^{z_0}dz\left(\mathrm{\Psi }_\kappa ^{*}(t,z)\stackrel{\leftrightarrow }{\partial _t}\varphi _M(t,z)\right)_{t=0},\qquad b_\kappa ^{(L)}=i\int_{\mathrm{\infty }}^{-z_0}dz\,\mathrm{\Psi }_\kappa ^{(L)*}(0,z)\left\{\frac{\partial \varphi _M}{\partial t}-i\frac{\kappa }{z}\varphi _M\right\}_{t=0}.\hfill \end{array}$$ (A1)

In the first and last integrals of Eq.(A1) we used Eq.(65) for the calculation of the time derivative of the boost mode at $`t=0`$. Let us first calculate the first term in Eq.(A1). Using Eq.(72) for $`\mathrm{\Psi }_\kappa ^{(R)}`$ we find (one should add a small negative imaginary part to $`t`$ in order to choose the right branches of the functions contained in the expression for $`\mathrm{\Psi }_\kappa ^{(R)}`$)

$$b_\kappa ^{(R)}=\frac{ie^{\pi \kappa /2}}{\pi \sqrt{2}}\int_{z_0}^{\mathrm{\infty }}dz\,K_{i\kappa }(mz)\left\{\frac{\partial \varphi _M}{\partial t}-i\frac{\kappa }{z}\varphi _M\right\}_{t=0}.$$ (A2)

Taking into account the well-known formula $`-\frac{i\kappa }{z}K_{i\kappa }(mz)=m\{K_{i\kappa }^{\prime }(mz)+K_{i\kappa -1}(mz)\},`$ we may rewrite Eq.(A2) in the form

$$b_\kappa ^{(R)}=\frac{ie^{\pi \kappa /2}}{\pi \sqrt{2}}\int_{z_0}^{\mathrm{\infty }}dz\left\{K_{i\kappa }(mz)\frac{\partial \varphi _M}{\partial t}+mK_{i\kappa }^{\prime }(mz)\varphi _M+mK_{i\kappa -1}(mz)\varphi _M\right\}_{t=0}.$$ (A3)

Finally, after the substitution $`mK_{i\kappa }^{\prime }(mz)\varphi _M=(K_{i\kappa }(mz)\varphi _M)_z^{}-K_{i\kappa }(mz)(\varphi _M)_z^{}`$ and adding and subtracting in the integrand the term $`\mathrm{\Gamma }(i\kappa )\{(mz/2)^{-i\kappa }\varphi _M\}_z^{}`$, we obtain

$$b_\kappa ^{(R)}=\frac{ie^{\pi \kappa /2}}{\pi \sqrt{2}}\left\{\int_{z_0}^{\mathrm{\infty }}F_R(z,\kappa )dz+\frac{1}{2}\left(\mathrm{\Gamma }(i\kappa )\left(\frac{mz_0}{2}\right)^{-i\kappa }-\mathrm{\Gamma }(-i\kappa )\left(\frac{mz_0}{2}\right)^{i\kappa }\right)\varphi _M(0,z_0)\right\},$$ (A4)

where $`F_R`$ was defined in Eq.(83). Note that we have chosen the regularization term from the requirement that the integral in Eq.(A4) converge when $`z_0`$ tends to zero. Note also that we have assumed that the field vanishes at spatial infinity. Substitution of Eq.(74) into the third integral in Eq.(A1) yields

$$b_\kappa ^{(L)}=\frac{ie^{-\pi \kappa /2}}{\pi \sqrt{2}}\int_{\mathrm{\infty }}^{-z_0}dz\,K_{i\kappa }(-mz)\left\{\frac{\partial \varphi _M}{\partial t}-i\frac{\kappa }{z}\varphi _M\right\}_{t=0}.$$ (A5)

It is easy to see that the r.h.s. of Eq.(A5) may be obtained from the r.h.s. of Eq.(A2) by changing the variable of integration $`z\to -z`$ and making the substitutions $`\kappa \to -\kappa `$, $`\varphi _M(t,z)\to \varphi _M(t,-z)`$.
Thus we obtain

$$b_\kappa ^{(L)}=\frac{ie^{-\pi \kappa /2}}{\pi \sqrt{2}}\left\{\int_{z_0}^{\mathrm{\infty }}F_L(z,\kappa )dz+\frac{1}{2}\left(\mathrm{\Gamma }(-i\kappa )\left(\frac{mz_0}{2}\right)^{i\kappa }-\mathrm{\Gamma }(i\kappa )\left(\frac{mz_0}{2}\right)^{-i\kappa }\right)\varphi _M(0,-z_0)\right\}.$$ (A6)

Since $`b_\kappa ^{(R)}(z_0)+b_\kappa ^{(L)}(z_0)`$ becomes singular when $`z_0`$ tends to zero, one should also consider the contribution of the second integral in Eq.(A1). Using the integral representation of the boost modes (64), this integral may be written in the form

$$b_\kappa ^{(0)}=\frac{i}{2^{3/2}\pi }\int_{-z_0}^{z_0}dz\int_{\mathrm{\infty }}^{\mathrm{\infty }}dq\,\mathrm{exp}(imz\mathrm{sinh}q+i\kappa q)\left\{\frac{\partial \varphi _M}{\partial t}-im\mathrm{cosh}(q)\varphi _M\right\}_{t=0}.$$ (A7)

For $`z_0`$ sufficiently small we may replace $`\varphi _M(0,z)`$ by $`\varphi _M(0,0)`$. Then, after performing the integration over $`z`$ and changing the variable of integration $`q\to u=mz_0\mathrm{sinh}q`$, this expression may be reduced to

$$b_\kappa ^{(0)}=\frac{\sqrt{2}}{\pi }\varphi _M(0,0)\int_0^{\mathrm{\infty }}\frac{du}{u}\mathrm{sin}u\,\mathrm{cos}(\kappa q(u)),$$ (A8)

(the term with the time derivative of the field vanishes when $`z_0`$ tends to zero). Since the effective values of the variable $`u`$ in the integral (A8) are of order $`1`$, we may use the approximation

$$\mathrm{cos}\kappa q\approx \frac{1}{2}\left\{\left(\frac{mz_0}{2u}\right)^{i\kappa }+\left(\frac{mz_0}{2u}\right)^{-i\kappa }\right\},$$ (A9)

which is valid if $`u\sim 1`$ and $`z_0`$ is small enough. Substitution of Eq.(A9) into Eq.(A8) and evaluation of the integral yields

$$b_\kappa ^{(0)}=\frac{i\mathrm{sinh}(\pi \kappa /2)}{\pi \sqrt{2}}\varphi _M(0,0)\left\{\mathrm{\Gamma }(i\kappa )\left(\frac{mz_0}{2}\right)^{-i\kappa }-\mathrm{\Gamma }(-i\kappa )\left(\frac{mz_0}{2}\right)^{i\kappa }\right\}.$$ (A10)

Finally, substituting Eqs.(A4), (A6) and (A10) into Eq.(A1) and taking the limit $`z_0\to 0`$, we obtain Eq.(83).

## B Analogy between the Unruh states and squeezed states of a harmonic oscillator

1. Consider a two-dimensional harmonic oscillator in the $`\{x,y\}`$-plane. The Hamiltonian of such an oscillator reads

$$H_{osc}=b_+^{\dagger }b_++b_{}^{\dagger }b_{}+1,$$ (B1)

where $`b_+=\frac{1}{\sqrt{2}}\left(x+\frac{\partial }{\partial x}\right),b_{}=\frac{1}{\sqrt{2}}\left(y+\frac{\partial }{\partial y}\right),`$ and the commutation relations are of the usual form,

$$[b_\pm ,b_\pm ^{\dagger }]=1,\qquad [b_\pm ,b_{\mp }^{\dagger }]=0.$$ (B2)

The ground state $`|0\rangle `$ satisfies the condition

$$b_\pm |0\rangle =0.$$ (B3)

Let us introduce the new operators $`r_\nu `$, $`l_\nu `$ as

$$r_\nu =S(\nu )b_+S(\nu )^{\dagger }=\mathrm{cosh}\theta \,b_++\mathrm{sinh}\theta \,b_{}^{\dagger },\qquad l_\nu =S(\nu )b_{}S(\nu )^{\dagger }=\mathrm{sinh}\theta \,b_+^{\dagger }+\mathrm{cosh}\theta \,b_{},$$ (B4)

where the unitary operator $`S(\nu )`$ reads

$$\begin{array}{c}S(\nu )=e^{i\theta 𝒢}=\mathrm{exp}\{\theta (b_+b_{}-b_+^{\dagger }b_{}^{\dagger })\}=\mathrm{exp}(-e^{-\nu }b_+^{\dagger }b_{}^{\dagger })\mathrm{exp}\{{\scriptscriptstyle \frac{1}{2}}\mathrm{ln}(1-e^{-2\nu })H_{osc}\}\mathrm{exp}(e^{-\nu }b_+b_{}),\\ \\ \nu =-\mathrm{ln}\mathrm{tanh}\theta >0,\qquad 0\le \theta <\mathrm{\infty }.\end{array}$$ (B5)

The operators (B4) obey the commutation relations

$$[r_\nu ,r_\nu ^{\dagger }]=[l_\nu ,l_\nu ^{\dagger }]=1,\qquad [r_\nu ,l_\nu ]=0.$$ (B6)

Note that the generator of the transformation (B5),

$$𝒢=-i\left(x\frac{\partial }{\partial y}+y\frac{\partial }{\partial x}\right),$$ (B7)

looks very similar to the boost operator of Eq.(65).
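The normal-ordered (disentangled) form of $`S(\nu )`$ in Eq.(B5) can be verified numerically. The Python sketch below (ours; the Fock truncation $`N`$ and the value of $`\nu `$ are arbitrary) compares the two expressions acting on the ground state and checks the vacuum matrix element $`\langle 0|S(\nu )|0\rangle =(1-e^{-2\nu })^{1/2}`$, which follows directly from the disentangled form:

```python
import numpy as np
from scipy.linalg import expm

# Check of the disentangled form (B5) of S(nu) in a truncated Fock space:
#   K_plus = b_+^dag b_-^dag,  K_minus = b_+ b_-,
#   H_osc  = b_+^dag b_+ + b_-^dag b_- + 1.
N = 25
a = np.diag(np.sqrt(np.arange(1, N)), 1)
I = np.eye(N)
bp, bm = np.kron(a, I), np.kron(I, a)
H = bp.conj().T @ bp + bm.conj().T @ bm + np.eye(N * N)
nu = 1.0                                   # nu = -ln tanh(theta)
theta = np.arctanh(np.exp(-nu))
S_direct = expm(theta * (bp @ bm - bp.conj().T @ bm.conj().T))
S_factor = (expm(-np.exp(-nu) * bp.conj().T @ bm.conj().T)
            @ expm(0.5 * np.log(1 - np.exp(-2 * nu)) * H)
            @ expm(np.exp(-nu) * bp @ bm))
vac = np.zeros(N * N); vac[0] = 1.0
print("||(S_direct - S_factor)|0>|| =",
      np.linalg.norm((S_direct - S_factor) @ vac))        # ~ 0
print("<0|S|0> =", S_direct[0, 0],
      " vs (1-e^{-2 nu})^{1/2} =", np.sqrt(1 - np.exp(-2 * nu)))
```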
According to Eqs.(B2), (B4) we have

$$r_\nu |s_\nu \rangle =0,\qquad l_\nu |s_\nu \rangle =0,$$ (B8)

where

$$|s_\nu \rangle =S(\nu )|0\rangle =(1-e^{-2\nu })^{1/2}\mathrm{exp}(-e^{-\nu }b_+^{\dagger }b_{}^{\dagger })|0\rangle $$ (B9)

is the two-dimensional squeezed vacuum state, which, in contrast to the ground state $`|0\rangle `$, is not stationary:

$$|s_\nu ,t\rangle =e^{-iHt}|s_\nu \rangle =(1-e^{-2\nu })^{1/2}\mathrm{exp}(-e^{-\nu -2it}b_+^{\dagger }b_{}^{\dagger })|0\rangle .$$ (B10)

For the amplitude of the transition between the ground and squeezed vacuum states, according to Eq.(B9), we have

$$\langle 0|s_\nu \rangle =Z_\nu ^{-1/2},\qquad Z_\nu =(1-e^{-2\nu })^{-1},$$ (B11)

and the expectation value of the ”number of squeezed quanta” $`N_r=r_\nu ^{\dagger }r_\nu `$ in the ground state reads

$$\langle 0|N_r|0\rangle =(e^{2\nu }-1)^{-1}=\mathrm{Sp}(N_r\rho _r),$$ (B12)

where

$$\rho _r=Z_\nu ^{-1}\mathrm{exp}(-2\nu N_r),$$ (B13)

and the ”partition function” $`Z_\nu `$ is defined in Eq.(B11). The analogy between Eqs.(B11),(B12) and Eq.(111) becomes evident after the substitution $`\nu =\pi \mu `$. It is clear that the appearance of the Bose factor in Eq.(108) results entirely from the properties of the Bogolubov transformation and is not related to any sort of thermal behaviour.

2. We will now show that, due to the infinite number of degrees of freedom in the Unruh problem, the considered analogy is not complete, and the representation of the canonical commutation relations in terms of the Unruh operators is unitarily inequivalent to the standard one (in terms of the plane wave or boost operators). One can write the relation between the Unruh and boost operators in the form

$$r_\mu =\mathrm{cosh}\theta _\mu \,b_\mu +\mathrm{sinh}\theta _\mu \,b_{-\mu }^{\dagger },\qquad l_\mu =\mathrm{sinh}\theta _\mu \,b_\mu ^{\dagger }+\mathrm{cosh}\theta _\mu \,b_{-\mu },$$ (B14)

with $`\theta _\mu `$ defined by $`\mathrm{tanh}\theta _\mu =\mathrm{exp}(-\pi \mu )`$. The vacuum state of the field in the double Rindler wedge $`|0_{DW}\rangle `$ (sometimes called the Fulling vacuum, ) is defined by the requirements

$$r_\mu |0_{DW}\rangle =0,\qquad l_\mu |0_{DW}\rangle =0,\qquad \mu >0.$$ (B15)

If the Fulling vacuum could be represented by a vector $`|0_{DW}\rangle `$ in the same Hilbert space where the Minkowski vacuum state $`|0_M\rangle `$ is defined, then there should exist a unitary operator $`S`$ such that

$$|0_{DW}\rangle =S|0_M\rangle ,\qquad Sb_\mu S^{\dagger }=r_\mu ,\qquad Sb_{-\mu }S^{\dagger }=l_\mu .$$ (B16)

A simple calculation shows that such an operator $`S`$ has the following formal representation (compare to Eq.(B5)):

$$S=\mathrm{exp}\left(\int_0^{\mathrm{\infty }}d\mu \,\theta _\mu (b_\mu b_{-\mu }-b_\mu ^{\dagger }b_{-\mu }^{\dagger })\right).$$ (B17)

It is obvious from this representation that $`|0_{DW}\rangle `$ has the form

$$|0_{DW}\rangle =\left(K^{(0)}+\int_0^{\mathrm{\infty }}d\mu \,K^{(2)}(\mu )b_\mu ^{\dagger }b_{-\mu }^{\dagger }+\int_0^{\mathrm{\infty }}d\mu _1\int_0^{\mathrm{\infty }}d\mu _2\,K^{(4)}(\mu _1,\mu _2)b_{\mu _1}^{\dagger }b_{-\mu _1}^{\dagger }b_{\mu _2}^{\dagger }b_{-\mu _2}^{\dagger }+\mathrm{}\right)|0_M\rangle .$$ (B18)

Consider the matrix element

$$f[\theta _\mu ]=\langle 0_{DW}|0_M\rangle =\langle 0_M|S^{\dagger }[\theta _\mu ]|0_M\rangle ,$$ (B19)

for an arbitrary function $`\theta _\mu `$. The derivative of the functional $`f[\theta _\mu ]`$ can be expressed as follows (see e.g. Sec.
2.4 of Ref.):

$$\begin{array}{c}-\frac{\delta f}{\delta \theta _\mu }=\langle 0_M|b_\mu b_{-\mu }S^{\dagger }|0_M\rangle =\langle 0_M|S^{\dagger }(Sb_\mu S^{\dagger })(Sb_{-\mu }S^{\dagger })|0_M\rangle =\langle 0_M|S^{\dagger }r_\mu l_\mu |0_M\rangle =\\ =f\mathrm{cosh}\theta _\mu \mathrm{sinh}\theta _\mu \delta (0)+\mathrm{sinh}^2\theta _\mu \frac{\delta f}{\delta \theta _\mu }.\end{array}$$ (B20)

For the last transformation in Eq.(B20) we have used Eqs.(B14) and the obvious formula

$$\frac{\delta f}{\delta \theta _\mu }=\langle 0_M|S^{\dagger }b_\mu ^{\dagger }b_{-\mu }^{\dagger }|0_M\rangle .$$ (B21)

After simplification of Eq.(B20) one obtains for the functional $`f[\theta _\mu ]`$ the differential equation

$$\frac{\delta f[\theta _\mu ]}{\delta \theta _\mu }=-\delta (0)\mathrm{tanh}\theta _\mu \,f[\theta _\mu ],$$ (B22)

the formal solution of which we can write as

$$f=\mathrm{exp}\left(-\delta (0)\int_0^{\mathrm{\infty }}d\mu \,\mathrm{ln}\mathrm{cosh}\theta _\mu \right),$$ (B23)

(the constant of integration is fixed by the requirement that $`f`$ should be equal to 1 for $`\theta _\mu =0`$). Evaluation of the integral yields $`\int_0^{\mathrm{\infty }}d\mu \,\mathrm{ln}\mathrm{cosh}\theta _\mu =-\frac{1}{2}\int_0^{\mathrm{\infty }}d\mu \,\mathrm{ln}(1-e^{-2\pi \mu })=\frac{\pi }{24}.`$ Thus we obtain

$$f=\langle 0_{DW}|0_M\rangle =\mathrm{exp}(-\delta (0)\pi /24)=0,$$ (B24)

i.e. $`K^{(0)}`$ in Eq.(B18) vanishes. Further, with the help of Eqs.(B21),(B23),(B24) we get

$$\langle 0_{DW}|b_\mu ^{\dagger }b_{-\mu }^{\dagger }|0_M\rangle =\frac{\delta f[\theta _\mu ]}{\delta \theta _\mu }=-\delta (0)\mathrm{exp}(-\pi \mu -\delta (0)\pi /24)=0,$$ (B25)

i.e. $`K^{(2)}=0`$. Proceeding further in this way, we conclude that $`|0_{DW}\rangle =0`$. This means that there is no Unruh vacuum state in the same Hilbert space where the Minkowski vacuum exists, and that the Unruh operators (B14) form a unitarily inequivalent representation of the commutation relations. It is clear from Eq.(122) (where for the current consideration one should change $`|\mathrm{\Omega }_\beta \rangle `$ to $`|0_M\rangle `$, $`\beta `$ to $`2\pi `$ and $`|0_R\rangle |0_L\rangle `$ to $`|0_{DW}\rangle `$) that $`\langle 0_{DW}|0_M\rangle =Z_{2\pi }^{-1/2}`$. Therefore we can also express Eq.(B24) in the form $`Z_{2\pi }=\mathrm{exp}(\delta (0)\pi /12)=\mathrm{\infty }`$ (compare to Eq.(112)).
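The value of the integral quoted after Eq.(B23) is easily confirmed by quadrature; the following short check (ours) uses $`\mathrm{ln}\mathrm{cosh}\theta _\mu =-\frac{1}{2}\mathrm{ln}(1-e^{-2\pi \mu })`$, which follows from $`\mathrm{tanh}\theta _\mu =e^{-\pi \mu }`$:

```python
import numpy as np
from scipy.integrate import quad

# Numerical check of the integral below Eq.(B23):
# integral_0^inf d(mu) ln cosh(theta_mu) with tanh(theta_mu) = exp(-pi*mu),
# i.e. the integrand is -0.5*ln(1 - exp(-2*pi*mu)); the result is pi/24.
f = lambda mu: -0.5 * np.log1p(-np.exp(-2 * np.pi * mu))
val = quad(f, 0, 1)[0] + quad(f, 1, np.inf)[0]  # split at the log singularity
print(val, np.pi / 24)                          # 0.1308996... = pi/24
```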
# K S Krishnan and the early experimental evidences for the Jahn-Teller Theorem

## Abstract

The Jahn-Teller theorem, proposed in 1937, predicts a distortional instability for a molecule that has symmetry-based degenerate electronic states. In 1939 Krishnan emphasized the importance of this theorem for the arrangement of water molecules around transition metal or rare earth ions in aqueous solutions and hydrated salts, in a short and interesting paper published in Nature, by pointing out at least four existing experimental results in support of the theorem. This paper of Krishnan has remained essentially unknown to the practitioners of the Jahn-Teller effect, even though it pointed to the best experimental results that were available, in the 30's and 40's, in support of the Jahn-Teller theorem. Some modern day experiments are also in conformity with some specific suggestions of Krishnan.

The Jahn-Teller effect<sup>1</sup> is a beautiful and simple quantum phenomenon that occurs in molecules and transition metal complexes, as well as in solids containing transition metal or rare earth ions. It states roughly that ‘a localized electronic system that has a symmetry based orbital degeneracy will tend to lift the degeneracy by a distortion that results in the reduction of the symmetry on which the degeneracy is based.’ In isolated systems such as a molecule or a transition metal complex it is a dynamical or quasistatic phenomenon; these are called the dynamic and static Jahn-Teller effects<sup>2</sup>. When it occurs co-operatively in crystals it is a spontaneous symmetry breaking phenomenon and a crystal structure change. This is called a co-operative Jahn-Teller effect. Even before the Jahn-Teller theorem appeared, Krishnan and collaborators<sup>3</sup> performed a series of pioneering magneto crystalline anisotropy studies of families of paramagnetic salts containing transition metal and rare earth ions, lending good support to various new quantum mechanical ideas, including those of Bethe, Kramers and Van Vleck on crystal field splitting and magneto crystalline anisotropy. In the Biographical Memoirs of the Royal Society of London, K. Lonsdale and H.J. Bhabha<sup>4</sup> wrote: ‘The papers published by Krishnan during this period (30's) in collaboration with B C Guha, S Banerjee and N C Chakravarty were the foundation stones of the modern fields of crystal magnetism and magnetochemistry’. In the paramagnetic salts that Krishnan and collaborators studied, the magnetic ions are well separated from each other by the intervening water molecules and also by anion groups. As a result, any direct, superexchange or dipolar interactions between the magnetic moments are weak; consequently any co-operative magnetic order, such as antiferromagnetic order, is pushed down to very low temperatures, below 1 K. This enabled Krishnan and others to study in detail the magnetic properties of essentially isolated paramagnetic ions in various crystal field environments. The Jahn-Teller theorem and its experimental consequences have been studied in great detail in chemistry and physics, particularly in the context of electron spin resonance experiments<sup>5</sup>. In the simplest of transition metal complexes there is a cubic (octahedral) environment around the transition metal ion, such as $`Cu^{2+}`$. The octahedral environment leads to a crystal field splitting of the fivefold degenerate d level into an orbital doublet ($`e_g`$) and a triplet ($`t_{2g}`$).
In Krishnan and collaborators' work one notices repeated reference to deviations from a cubic electric field at the center of rare earth ions in several cases, and of transition metal ions in some cases, as inferred from their own anisotropic magnetic susceptibility measurements. The most obvious causes for departure from the regular cubic symmetry of the coordination clusters are inequivalence of the ligand atoms in the first or second co-ordination shell and the forces of crystal packing. No one suspected that electronic orbital degeneracy can lead to an asymmetry such as a distortion of an octahedron with equivalent ligand atoms. It was at this juncture that the theoretical papers of Jahn and Teller appeared, which suggested another important cause for molecular asymmetry. And Krishnan readily appreciated the importance of this theorem for crystal field splitting and for the arrangement of water molecules around paramagnetic ions in aqueous solutions, and wrote an interesting short paper in Nature<sup>6</sup> in 1939 that we reproduce here. It is interesting that Van Vleck<sup>7</sup>, who very much admired<sup>8</sup> K S Krishnan, also developed his theory of the Jahn-Teller effect for orbital doublets in paramagnetic ions in the same year. In his paper KSK quotes at least four existing experimental results that support the Jahn-Teller theorem: i) x-ray data that is consistent with a small deviation from a perfect $`H_2O`$ octahedron around the paramagnetic ion in hydrated salts; ii) magnetic data, mostly from his group, that exhibits strong magnetic anisotropy of similar magnitudes in various salts, suggesting that the cause of any distortion arises from the electronic state of the paramagnetic ion rather than from the surrounding atoms; iii) asymmetry inferred from the electronic absorption spectra of a cation surrounded by water molecules in aqueous solutions, studied by Freed et al.; iv) magnetic double refraction exhibited by aqueous solutions of these salts, as observed experimentally by Raman and Chinchalkar and by Haenny; and v) the experimental observation of Chinchalkar that double refraction is absent when the paramagnetic ion is in an S-state (for example $`Gd^{3+}`$ or $`Mn^{2+}`$, which are orbital singlets). Krishnan's paper contained important suggestions and also looked at a slightly more complex system, namely rare earth ions in aqueous solutions, rather than the relatively simpler case of transition metal ions in solids. What Krishnan was looking for was perhaps a static Jahn-Teller distortion at room temperature. The issue of the timescale associated with the distortion dynamics and the nature of the experimental probe becomes important here. The Jahn-Teller cluster, being a finite system, will either quantum mechanically tunnel among equivalent distorted configurations in a phase coherent manner with a short time period, or will hop between the various equivalent distorted configurations incoherently through thermal fluctuations. Optical absorption, for example, is a short time scale measurement: it will see even dynamic distortions as static ones. The static susceptibility measurement, on the other hand, is a low frequency probe. Any indication of distortion through this experiment will mean a nearly frozen distortion. The choice of paramagnetic salts dissolved in water offers some special advantages too. A paramagnetic ion is typically surrounded by six water molecules forming a rigid octahedral complex. A coordination complex such as $`[Cu(H_2O)_6]^{2+}`$ is loosely coupled to the environment, namely water.
This makes the restoring force against the Jahn-Teller distortion weaker, making a nearly static Jahn-Teller distortion feasible even at room temperature. In the case of solids, on the other hand, the Jahn-Teller distortion has to work against the packing forces of the crystal, thereby reducing the Jahn-Teller stabilisation energy. Many of Krishnan's experiments involved rare earth ions. In view of the larger degeneracy and also the compactness of the f-orbitals, rare earth ions have some advantages as well as disadvantages. The compactness of the wave functions makes the crystal field splitting smaller and also the quenching of the angular momentum less important. On the other hand, the spin-orbit coupling, which can make the magnetic anisotropy stronger, is larger in the rare earth case. The larger degeneracy of the f-orbitals also leads to a proliferation of low lying multiplets, making the theory, as well as the interpretation of the spectra, hard. Perhaps because of these difficulties, most of the classic studies of the Jahn-Teller effect seem to be confined to transition metal ion systems. I will try to put some of Krishnan's suggestions in the modern context. It is now well known that the dynamic to static Jahn-Teller distortion crossover temperature, as measured by ESR experiments, ranges from 1 K to more than 300 K in a variety of systems<sup>10</sup>. Some of these room temperature systems include the $`Cu^{2+}`$ ion that Krishnan was referring to, based presumably on room temperature measurements. What I mean to say is that static Jahn-Teller distortion is not a very low temperature luxury that Krishnan could have missed. Secondly, Krishnan was referring to some x-ray structural data as evidence for octahedral distortion. Some modern EXAFS measurements<sup>11</sup> on $`Cu^{2+}`$ and $`Cr^{2+}`$ ions in aqueous solutions at room temperature show the presence of a distorted octahedron with two copper-oxygen distances of about 2.00 Å and 2.3 Å. Krishnan's interpretation of the large magnetic double refraction observed by Raman and Chinchalkar and by Haenny in aqueous solutions of paramagnetic salts is very interesting. This seems to be compelling evidence for the Jahn-Teller distortion, as explained by Krishnan: the distorted octahedral complexes, in view of the magnetic anisotropy arising from the large spin-orbit coupling, tend to align themselves in water along the magnetic field, causing double refraction. The absence of solid state effects and the diluteness of the paramagnetic ions make this argument very important: the system somewhat resembles an ideal gas of ‘octahedral molecules’ as far as the double refraction is concerned. I should also point out that, in my short literature survey, I have not been able to find any later study of the Jahn-Teller effect in aqueous solutions using magnetic double refraction as a probe. After this one paper on the Jahn-Teller effect, Krishnan essentially left the field of crystal paramagnetism, and he never wrote another paper on the issue of the Jahn-Teller effect. The insightful suggestions of Krishnan, which came soon after the Jahn-Teller theorem, were never taken up for further study, as far as I can see in the literature. This paper has remained unknown. One wonders why. One possible reason is the choice of rare earth ions, which, as we mentioned earlier, is complicated theoretically and also hard when it comes to the interpretation of the experimental spectra.
Another possible, non-scientific reason is as follows: a detailed follow-up paper or some further works by Krishnan<sup>9</sup> could have made Krishnan's work more widely known and opened up the field of the Jahn-Teller effect much earlier, making Krishnan a pioneer in one more beautiful field. Detailed experimental confirmation of the Jahn-Teller theorem became possible only after the ESR experiments of Bleaney and Bowers<sup>12</sup> in 1952 on $`CuSiF_6.6H_2O`$ and a related family of paramagnetic salts, about 15 years after the enunciation of the theorem and Krishnan's supporting suggestions. The milestones included careful observation of the split ESR line below about 50 K (indicating a frozen distortion on the ESR time scale) and a motionally narrowed unsplit line above about 50 K (indicating a dynamic distortion) for the copper salts. Nearly half a century of experimental work on the Jahn-Teller effect in the modern era is broadly consistent with the sixty-year-old suggestion of Krishnan. The practically unknown suggestions of Krishnan, however, remain the only experimental support for the Jahn-Teller theorem covering a period of more than ten years, before the beginning of the modern era. As I mentioned earlier, the suggestion of magnetic double refraction by Krishnan may still be useful in the modern context for the study of the Jahn-Teller effect in the rich variety of new and old coordination complexes involving transition metal and rare earth metal ions.

Acknowledgement

I wish to thank Dr Tapan Kumar Kundu (RSIC, IIT Madras) for an informative discussion.

References

1) H.A. Jahn and E. Teller, Proc. Roy. Soc. A161 220 (1937)

2) M.D. Sturge, in Solid State Physics, Eds. Seitz and Turnbull, vol. 20, page 91 (Academic Press, New York, 1967); I.B. Bersuker, Jahn-Teller effect and vibronic interactions in Modern Chemistry (Plenum, NY, 1984)

3) Starting from 1933, Krishnan and collaborators wrote more than a dozen important papers on the exhaustive study of magneto crystalline anisotropy in paramagnetic salts

4) K Lonsdale and H J Bhabha, in Biographical Memoirs of the Royal Society of London, 1962

5) A. Abragam and B. Bleaney, Electron Paramagnetic Resonance of transition metal ions (Clarendon Press, Oxford 1970)

6) K. S. Krishnan, Nature, 143 600 (1939)

7) J. H. Van Vleck, J. Chem. Phys. 7 72 (1939)

8) P.W. Anderson, in his K.S. Krishnan Memorial Lecture delivered at NPL, Delhi on 12th January 1987, recalls warmly Van Vleck's (Ph.D. supervisor of P W Anderson) admiration for K.S. Krishnan. Another such instance involving Van Vleck is quoted in the article by K. Lonsdale and H.J. Bhabha (ref. 4)

9) A close look at K.S. Krishnan's style of paper writing shows that typically he starts with a short letter-style paper and ends with a series of long papers presenting the details of his experimental investigations. Roughly every five or six years he moves to new pastures.

10) T.K. Kundu and P.T. Manoharan, Chem. Phys. Lett. 241 627 (1997); 264 338 (1997); T.K. Kundu, Ph.D. Thesis (Chemistry Department, IIT, Madras 1997); Kundu notes that in solutions the counter ions can affect (renormalise) the crossover temperature considerably in some cases

11) T.K. Sham, Chem. Phys. Lett. 83 391 (1981); B. Beagley et al., J. Phys.: Condens. Matter, 1 2395 (1989); A recent paper, Ralf Akesson et al., J. Phys. Chem. 96 150 (1992), refers to some modern EXAFS measurements by I. Watanabe and N. Matsubayashi (Osaka) on the two copper-oxygen distances in the octahedra.

12) B. Bleaney and K.D. Bowers, Proc. Phys. Soc. (London) A65 667 (1952)
# The Energy of a Plasma in the Classical Limit ## I Introduction There have been many classical calculations of the energy of a plasma . They are based on perturbation theory of an ideal gas, in terms of the plasma parameter $`g`$ (which is usually small). The treatment, to first order in $`g`$, is the Debye-Hückel theory. However, in the calculations that have been made it is assumed that $`\omega \ll T`$ ($`k_B=\mathrm{}=1`$). This is a very strong assumption. For example, in our previous analysis , we showed that the blackbody spectrum is obtained only by not assuming $`\omega \ll T`$. We evaluate the energy of a plasma, studying the electromagnetic fluctuations in a plasma without assuming that $`\omega \ll T`$. A plasma in thermal equilibrium sustains fluctuations of the magnetic and electric fields. The electromagnetic fluctuations are described by the fluctuation-dissipation theorem . The evaluation of the electromagnetic fluctuations in a plasma has been made in numerous studies . Recently, Cable and Tajima (see also ) studied the magnetic field fluctuations in a cold plasma description with a constant collision frequency, as well as for a warm, gaseous plasma described by kinetic theory. Using a model that extends the work of Cable and Tajima , we study an electron-proton plasma of temperature $`10^4`$–$`10^5K`$ with densities $`10^{13}`$–$`10^{19}cm^{-3}`$. The condition for a classical analysis is that $`\lambda _T<d_T`$, where $`\lambda _T`$ is the de Broglie wavelength of a thermal electron and $`d_T=e^2/T`$ the distance of closest approach. This condition is satisfied for $`T<3\times 10^5K`$ and for the plasmas studied here. In section II we recall the expressions for the electromagnetic fluctuations in a plasma, and in section III the electromagnetic energy is computed. Finally, we discuss our results in section IV. ## II Electromagnetic Fluctuations The spectra of the electromagnetic fluctuations in an isotropic plasma are given by , $$\frac{\langle E^2\rangle _{𝐤\omega }}{8\pi }=\frac{1}{e^{\omega /T}-1}\frac{\mathrm{Im}\,\epsilon _L}{|\epsilon _L|^2}+2\frac{1}{e^{\omega /T}-1}\frac{\mathrm{Im}\,\epsilon _T}{|\epsilon _T-\left(\frac{kc}{\omega }\right)^2|^2},$$ (1) $$\frac{\langle B^2\rangle _{𝐤\omega }}{8\pi }=2\frac{1}{e^{\omega /T}-1}\left(\frac{kc}{\omega }\right)^2\frac{\mathrm{Im}\,\epsilon _T}{|\epsilon _T-\left(\frac{kc}{\omega }\right)^2|^2}$$ (2) ($`\mathrm{}=k_B=1`$), where $`\epsilon _L`$ and $`\epsilon _T`$ are the longitudinal and transverse dielectric permittivities of the plasma. The first and second terms of Eq. (1) are the longitudinal and transverse electric field fluctuations, respectively. By using the fluctuation-dissipation theorem, we can estimate the energy in the electromagnetic fluctuations for all frequencies and wave numbers. The calculation includes not only the energy of the fluctuations in the well defined modes of the plasma, such as plasmons in the longitudinal component and photons in the transverse component, but also the energy in fluctuations that do not propagate. For the description of the plasma, we use the model described in detail in Opher and Opher . The description includes thermal and collisional effects. It uses the Vlasov equation to first order, with the BGK (Bhatnagar-Gross-Krook) collision term, a model equation for the Boltzmann collision term . We used the BGK collision term as a rough guide for the inclusion of collisions in a plasma. 
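As a quick numerical sanity check on the classicality condition $`\lambda _T<d_T`$ quoted above, the two length scales can be compared directly. The sketch below is our own illustration (Python, CGS units; $`\lambda _T`$ is taken as $`\mathrm{}/\sqrt{m_ek_BT}`$ with factors of order unity dropped, and the physical constants are standard values, not numbers from this paper):

```python
import numpy as np

# CGS constants (standard values, quoted here as assumptions)
hbar = 1.0546e-27   # erg s
m_e  = 9.109e-28    # g
k_B  = 1.3807e-16   # erg / K
e    = 4.8032e-10   # esu

def is_classical(T):
    """Compare the thermal de Broglie wavelength of an electron with the
    distance of closest approach d_T = e^2/(k_B T); the classical treatment
    requires lambda_T < d_T."""
    lam = hbar / np.sqrt(m_e * k_B * T)   # thermal de Broglie wavelength (O(1) factors dropped)
    d = e**2 / (k_B * T)                  # distance of closest approach
    return lam < d

# The two scales cross at k_B T ~ m_e e^4 / hbar^2 (~ 27 eV), i.e. T ~ 3e5 K,
# consistent with the bound quoted in the text.
for T in (1e3, 1e4, 1e5, 1e6):
    print(T, is_classical(T))
```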
From the BGK description above, the dielectric permittivities for an isotropic plasma are easily obtained: $$\epsilon _L(\omega ,𝐤)=1+\sum _\alpha \frac{\omega _{p\alpha }^2}{k^2v_\alpha ^2}\frac{1+\frac{\omega +i\eta _\alpha }{\sqrt{2}kv_\alpha }Z\left(\frac{\omega +i\eta _\alpha }{\sqrt{2}kv_\alpha }\right)}{1+\frac{i\eta _\alpha }{\sqrt{2}kv_\alpha }Z\left(\frac{\omega +i\eta _\alpha }{\sqrt{2}kv_\alpha }\right)},$$ (3) $$\epsilon _T(\omega ,𝐤)=1+\sum _\alpha \frac{\omega _{p\alpha }^2}{\omega ^2}\left(\frac{\omega }{\sqrt{2}kv_\alpha }\right)Z\left(\frac{\omega +i\eta _\alpha }{\sqrt{2}kv_\alpha }\right),$$ (4) where $`\alpha `$ labels the particle species, $`v_\alpha `$ is the thermal velocity of species $`\alpha `$ and $`Z(z)`$ is the Fried and Conte function. ## III Electromagnetic Energy In order to estimate the electromagnetic energy, we use the dielectric permittivities given by Eqs. (3) and (4), and calculate the magnetic and electric field spectra from Eqs. (1) and (2). Integrating the spectra over wave number and frequency (and dividing by $`(2\pi )^3`$), we obtain the energy densities of the magnetic field, $`\rho _B`$, and of the transverse and longitudinal electric fields, $`\rho _{E_T}`$ and $`\rho _L`$. Usually, when estimating the energy stored in the electromagnetic fluctuations from Eqs. (1) and (2), it is assumed that $`\omega \ll T`$ ($`k_B=\mathrm{}=1`$). With this assumption, the Kramers-Kronig relations can then be used, and a simple expression for the energy is obtained . However, the assumption that $`\omega \ll T`$ is very restrictive. For example, a large part of the fluctuations which create the blackbody electromagnetic spectrum has $`\omega >T`$ . It is therefore necessary to perform the integration of the spectra over frequency and wave number without using this assumption. Our model uses kinetic theory with a collision term that describes the binary collisions in the plasma. A cutoff has to be taken since, for very small distances, the energy of the Coulomb interaction exceeds the kinetic energy. This occurs for distances $`r_{min}\sim e^2/T`$, which defines our maximum wave number, $`k_{max}`$. A large $`k_{max}`$ is needed in order to reproduce the blackbody spectrum. In this study, we used a $`k_{max}`$ equal to the inverse of the distance of closest approach, which we previously found is able to do this . Any smaller $`k_{max}`$ was unable to reproduce the entire blackbody spectrum. In the usual classical calculations, the correction to the energy due to correlations between the particles is made through the correlation energy. To first order in the plasma parameter $`g`$, the correlation energy depends on the two-particle correlation function $`S(k)`$, $$E_C=\frac{n}{4\pi ^2}\int dk\,k^2\varphi _kS(k)-\frac{n}{4\pi ^2}\int dk\,k^2\varphi _k,$$ (5) where the second term is the energy of the particles due to their own fields. $`S(k)`$ can be estimated by the fluctuation-dissipation theorem or by the BBGKY hierarchy equations . Generally, it is assumed that $`\omega \ll T`$ (so that the Kramers-Kronig relation can be used) and $`S(k)`$ is obtained as $$S(k)=\frac{k^2}{k^2+k_D^2},$$ (6) where $`k_D`$ is the inverse of the Debye length. With this, the energy density of a plasma to first order in $`g`$ is given as $$U=\frac{3}{2}nT\left(1-\frac{g}{12\pi }\right),$$ (7) where $`n`$ is the number density of the particles. 
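As an aside on the numerics behind Eqs. (1)-(4): the Fried and Conte function can be written as $`Z(\zeta )=i\sqrt{\pi }w(\zeta )`$ in terms of the Faddeeva function $`w`$, which scipy provides. The sketch below is our own illustration, not code from the paper; the species list (plasma frequency, thermal velocity and collision frequency $`\eta _\alpha `$ per species) is an input the user must supply:

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function w(z)

def Z(zeta):
    """Fried-Conte plasma dispersion function, Z(zeta) = i*sqrt(pi)*w(zeta)."""
    return 1j * np.sqrt(np.pi) * wofz(zeta)

def eps_L(omega, k, species):
    """Longitudinal permittivity, Eq. (3).
    species: iterable of (omega_p, v_th, eta) tuples, one per species."""
    eps = 1.0 + 0.0j
    for omega_p, v, eta in species:
        zeta = (omega + 1j * eta) / (np.sqrt(2.0) * k * v)
        num = 1.0 + zeta * Z(zeta)
        den = 1.0 + (1j * eta / (np.sqrt(2.0) * k * v)) * Z(zeta)
        eps += (omega_p**2 / (k * v) ** 2) * num / den
    return eps
```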
From Eq. (7), the correlation energy to first order in $`g`$ is thus $$E_c=-\frac{3}{2}nT\left(\frac{g}{12\pi }\right).$$ (8) We define the energy of a plasma as $$U=\frac{3}{2}nT(1+\mathrm{\Delta }).$$ (9) With this definition, $`\mathrm{\Delta }=\mathrm{\Delta }_0=-g/12\pi `$ for the previous classical analysis (Eq. (7)), where the subscript “0” means that the assumption $`\omega \ll T`$ has been used. Higher order calculations of the correlation energy have been made, for example by O’Neil and Rostoker . However, in all treatments, the assumption $`\omega \ll T`$ has been made. As we commented above, the assumption $`\omega \ll T`$ is very strong. A large part of the fluctuations has $`\omega >T`$. To obtain the interaction energy, we need to subtract the energy of the particles due to their own fields, the second term of Eq. (5), from the longitudinal energy density, $`\rho _L`$. We thus have $`\rho _{int}=\rho _L-\frac{n}{4\pi ^2}\int dk\,k^2\varphi _k`$. Using Eq. (9), the interaction energy can be written as $`\rho _{int}=\frac{3}{2}(nT)\mathrm{\Delta }`$, where $`\rho _{int}`$ is the equivalent of the correlation energy. In fact, using the approximation $`\omega \ll T`$, $`\rho _{int}`$ is equal to the second term of Eq. (7). In order to compare $`\rho _{int}`$ with $`E_c`$, we define the parameter $$F\equiv \frac{\mathrm{\Delta }-\mathrm{\Delta }_0}{\mathrm{\Delta }_0}.$$ (10) We previously found that the transverse energy (summing the transverse electric and magnetic field energies, $`\rho _{E_T}`$ and $`\rho _B`$) has an additional energy, compared to the blackbody energy density in vacuum. The additional transverse energy is $$\mathrm{\Delta }\rho _\gamma =\rho _B+\rho _{E_T}-\rho _\gamma ,$$ (11) where $`\rho _\gamma `$ is the photon energy density, estimated as the blackbody energy density in vacuum. Adding the interaction energy $`\rho _{int}`$ to $`\mathrm{\Delta }\rho _\gamma `$, we obtain the total change in the energy density due to the transverse and longitudinal components, $$\rho _{new}=\mathrm{\Delta }\rho _\gamma +\rho _{int}.$$ (12) We calculate $`\rho _{new}`$ and $`\rho _{int}`$ for an electron-proton plasma at $`T=10^3K`$, $`T=10^4K`$ and $`T=10^5K`$, for densities ranging from $`10^3`$ to $`10^{19}cm^{-3}`$. The densities were chosen so as to assure that the plasma parameter $`g=1/n\lambda _D^3<1`$, in order that kinetic theory be valid. For these plasmas, the de Broglie wavelength is less than the distance of closest approach of thermal electrons, which justifies our classical treatment. In Figure 1, we plot $`\mathrm{\Delta }`$ as a function of the density for $`10^3cm^{-3}\le n\le 10^{19}cm^{-3}`$ and for the temperatures $`T=10^3K`$, $`10^4K`$, and $`10^5K`$. We extended each plot up to the density at which $`g=0.3`$ is reached. For each of the temperatures, the value of $`g`$ increases with the density. In the case of $`T=10^5K`$, for example, for $`n=10^3cm^{-3}`$, $`g=9.62\times 10^{-9}`$, and for $`n=10^9cm^{-3}`$, $`g=3.04\times 10^{-6}`$; $`g=0.3`$ is reached at $`n=10^{19}cm^{-3}`$. In the case of $`T=10^3K`$, for $`n=10^3cm^{-3}`$, $`g=3.04\times 10^{-6}`$, and for $`n=10^{10}cm^{-3}`$, $`g=9.62\times 10^{-3}`$; $`g=0.3`$ is reached at $`n=10^{13}cm^{-3}`$. We found a very good fit to the results of Figure 1, using a Fermi-Dirac functional form for the density dependence of $`\mathrm{\Delta }`$, $`\mathrm{\Delta }(T)=A1/(\mathrm{exp}[(x/A2)-A3]+1)`$, with $`x=\mathrm{log}(n)`$ and $`A1=a_{10}+a_{11}T+a_{12}T^2`$; $`A2=a_{20}+a_{21}T+a_{22}T^2`$ and $`A3=a_{30}+a_{31}T+a_{32}T^2`$. 
From Figure 1, we obtain $`A1=0.3522-0.1698(T/10^5)+0.1145(T/10^5)^2`$, $`A2=0.8255+0.4797(T/10^5)-0.4532(T/10^5)^2`$ and $`A3=17.650+33.027(T/10^5)-26.201(T/10^5)^2`$. The curves (full, dashed and dotted) are evaluated from the analytic expression; the filled squares are the values of $`\mathrm{\Delta }`$ calculated from Eqs. (1)-(4). The fit can be seen to be excellent. In Figure 2, we plot $`F=(\mathrm{\Delta }-\mathrm{\Delta }_0)/\mathrm{\Delta }_0`$ as a function of the density, for the temperatures $`T=10^3K`$, $`10^4K`$ and $`10^5K`$, which shows how $`\mathrm{\Delta }`$ differs from the usual correction $`\mathrm{\Delta }_0`$. The values of $`\mathrm{\Delta }`$ that we obtained are positive and larger in absolute value than $`\mathrm{\Delta }_0`$, whereas $`\mathrm{\Delta }_0`$ is negative. This indicates that the energy in the fluctuations dominates the interaction energy of the particles. We observe that $`F`$ can reach values of a thousand or greater in magnitude. The additional transverse energy $`\mathrm{\Delta }\rho _\gamma `$ is completely negligible for these temperatures and densities. For example, for $`T=10^5K`$ and $`n=10^{19}cm^{-3}`$, $`\mathrm{\Delta }\rho _\gamma \sim 10^{-3}\rho _\gamma `$. For this temperature and density, $`\rho _{par}=273\rho _\gamma `$ and $`\rho _{new}`$ is completely dominated by $`\rho _{int}=\mathrm{\Delta }\rho _{par}`$. As another example, for $`T=10^5K`$ and $`n=10^{17}cm^{-3}`$, $`\mathrm{\Delta }\rho _\gamma \sim 10^{-7}\rho _\gamma `$. As a check, we calculated $`\rho _{int}`$, integrating in frequency only up to $`\omega =\omega _p`$, the plasma frequency ($`\omega _p\ll T`$), and integrating in wave number up to $`k\sim k_D`$. As expected, we then found that $`\mathrm{\Delta }`$ is equal to $`\mathrm{\Delta }_0`$, the value obtained in previous analyses. ## IV Conclusions and Discussion We calculated $`\rho _{new}`$ and $`\rho _{int}`$ for an electron-proton plasma as a function of density for $`T=10^3`$–$`10^5K`$. For many interesting plasmas, we found that $`|\mathrm{\Delta }|\gg |\mathrm{\Delta }_0|`$. We used the BGK collision term as a rough guide to the inclusion of collisions. The BGK term is a model collision term for the Boltzmann collision term. Collisions, however, change the results very little. For example, for $`T=10^5K`$ and $`n=10^{10}cm^{-3}`$, the difference in $`\mathrm{\Delta }`$, with or without collisions, is less than $`10^{-6}`$. Since there is no significant difference between the energy density with or without the collision term, the use of a more accurate collision term than the BGK one is not necessary. Appreciably different values from the usual ones are obtained for the interaction energy of a plasma by not assuming $`\omega \ll T`$. To first order in $`g`$, we found that the energy of an ideal gas needs to be corrected by a positive value, approximately $`0.3\rho _{par}=0.3(3/2)nT`$ at the highest densities considered. This is very different from the usual (negative) corrections of magnitude $`10^{-3}`$–$`10^{-4}(3/2)nT`$. We obtained a general expression for the correction $`\mathrm{\Delta }`$ as a function of density and temperature: $`\mathrm{\Delta }(T)=A1/(\mathrm{exp}[(x/A2)-A3]+1)`$ with $`x=\mathrm{log}(n)`$, $`A1=0.3522-0.1698(T/10^5)+0.1145(T/10^5)^2`$, $`A2=0.8255+0.4797(T/10^5)-0.4532(T/10^5)^2`$ and $`A3=17.650+33.027(T/10^5)-26.201(T/10^5)^2`$. The total correction to the energy is completely dominated by the interaction energy. For these temperatures and densities, the additional transverse energy is negligible. 
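For convenience, the fitted correction can be packaged as a small routine. The following sketch is our own illustration of the fit just quoted; it assumes, as the quoted density ranges suggest, that $`x=\mathrm{log}(n)`$ is a base-ten logarithm with $`n`$ in $`cm^{-3}`$, and it is only meaningful inside the fitted range $`T=10^3`$–$`10^5K`$, $`n=10^3`$–$`10^{19}cm^{-3}`$:

```python
import numpy as np

def Delta_fit(n, T):
    """Fermi-Dirac fit Delta(n, T) with the coefficients quoted in the text.
    n: density in cm^-3 (x = log10(n) assumed); T: temperature in K."""
    t = T / 1.0e5
    A1 = 0.3522 - 0.1698 * t + 0.1145 * t**2
    A2 = 0.8255 + 0.4797 * t - 0.4532 * t**2
    A3 = 17.650 + 33.027 * t - 26.201 * t**2
    x = np.log10(n)
    return A1 / (np.exp(x / A2 - A3) + 1.0)

# High-density plateau at T = 1e5 K: Delta -> A1 ~ 0.3, the positive
# correction quoted in the conclusions.
print(Delta_fit(1e19, 1e5))
```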
Our results may be applied to the plasma before the recombination era, when the plasma had a temperature $`T>10^3K`$ and a density $`n>10^3cm^{-3}`$. Since the expansion rate of the Universe (the Hubble parameter) is proportional to the square root of the plasma energy density, our results indicate that the Universe before the recombination era was expanding appreciably faster than previously thought. The purpose of this work was to demonstrate that $`\omega \ll T`$ is an extremely strong assumption. By not making this assumption, there is a large change in the energy of the plasma. The authors would like to thank the anonymous referees for helpful comments. M.O. would like to thank the Brazilian agency FAPESP for support (no. 97/13427-8) and R.O. the Brazilian agency CNPq for partial support. Both authors would like to thank the Brazilian project Pronex/FINEP (no. 41.96.0908.00) for support.
# On convergence of the HFF expansion for meson production in NN collisions ## Abstract We consider the application of heavy fermion formalism based $`\chi `$PT to meson production in nucleon-nucleon collisions. It is shown that to each lower chiral order irreducible diagram there corresponds an infinite sequence of loop diagrams which are of the same momentum power order. This destroys the one-to-one correspondence between the loop and small momentum expansion and thus rules out the application of any finite order HFF $`\chi `$PT to the $`NN\to NN\pi `$ reaction. Key Words: hadroproduction, chiral perturbation theory, heavy fermion formalism. There have been several attempts recently to apply heavy fermion formalism (HFF) based chiral perturbation theory ($`\chi `$PT) to calculate the meson production rate in nucleon-nucleon collisions. It is well known that in a fully relativistic $`\chi `$PT there is no one-to-one correspondence between the loop and small momentum expansion. Such a correspondence is believed to be restored in the extremely non-relativistic approach of the HFF. An essential and most important clue to assessing the validity of these calculations resides in how rapidly the HFF expansion converges. Detailed $`\chi `$PT calculations which account for all contributions from tree and one loop diagrams to chiral order D=2 show that, within the framework of the HFF, one loop contributions are sizably bigger than the lowest-order impulse and rescattering terms, indicating that the HFF power series expansion does not converge fast enough and therefore may not be suitable for calculating the pion production rate in NN collisions. More recently Bernard et al. and Gedalin et al. have shown that the HFF power series expansion of the nucleon propagator is on the border of its convergence circle. Consequently, a finite order HFF can not possibly predict nucleon pole terms correctly and should not be applied to meson production. It is the purpose of the present comment to call attention to the fact that within the framework of HFF $`\chi `$PT, the one-to-one correspondence between the loop and small momentum expansion is badly destroyed for processes of sufficiently large momentum transfer. In particular, for each low chiral order $`D`$ diagram there corresponds an infinite sequence of $`n`$-loop diagrams, $`n=1,2,\dots `$, of chiral order $`D_n=D+2n`$, which have the same low momentum power as the original diagram. Therefore, any finite chiral order HFF based $`\chi `$PT calculation can not possibly explain meson production in NN collisions. To be specific we consider pion production via the $`NN\to NN\pi `$ reaction. Such a process necessarily involves large momentum transfer. The characteristic four momentum transferred at threshold is $`Q\simeq (m_\pi /2,\sqrt{Mm_\pi })`$, where $`M`$ and $`m_\pi `$ are the masses of the nucleon and of the produced meson. This stands in marked contrast with Weinberg’s standard power counting, where presumably the momentum transferred is considerably smaller, of the order $`Q^2\sim m_\pi ^2`$. Clearly, one can not use directly the original power counting scheme of Weinberg. However, we may apply the modified Weinberg power counting, a scheme tailored to deal specifically with the production process . 
This scheme includes the following rules: (i) a $`\pi NN`$ vertex of zero chiral order $`D=0`$, $`V_{\pi NN}^{(0)}`$, contributes a factor $`Q/F`$; (ii) a pion propagator contributes a factor $`Q^{-2}`$; (iii) a nucleon propagator contributes $`(v\cdot Q)^{-1}\sim m_\pi ^{-1}`$; (iv) a $`\pi NN`$ vertex of chiral order $`D=1`$, $`V_{\pi NN}^{(1)}`$, contributes a factor $`k^0Q/(FM)\sim m_\pi ^{3/2}/(FM^{1/2})`$; (v) a $`2\pi NN`$ $`D=1`$ vertex, $`V_{\pi \pi NN}^{(1)}`$, contributes a factor $`k^0Q^0/(F^2M)`$. In our notation (see Fig. 1) the four-momentum transfer squared is $`Q^2=(p_1-p_2)^2=(v\cdot Q)^2-\vec{Q}^2\sim -m_\pi M`$, with $`v`$, $`|\vec{Q}|\sim \sqrt{m_\pi M}`$, $`Q^0=v\cdot Q\sim m_\pi `$ and $`k^0`$ being the nucleon four-velocity, the transferred three-momentum, the transferred energy and the pion total energy, respectively. The radiative pion decay constant is denoted by $`F`$. In terms of $`Q=\sqrt{m_\pi M}`$ and $`v\cdot Q=m_\pi `$, the rules listed above are exactly the ones quoted by Cohen et al.. To calculate loop contributions one has to add three more rules: (vi) a loop integration contributes a factor $`(Q^2/4\pi )^2`$; (vii) a four pion vertex of zero order, $`V_{\pi \pi \pi \pi }^{(0)}`$, contributes a factor $`Q^2/F^2\sim m_\pi M/F^2`$; (viii) a $`3\pi NN`$ zero order vertex, $`V_{\pi \pi \pi NN}^{(0)}`$, contributes a factor $`Q/F^3\sim \sqrt{m_\pi M}/F^3`$. The last two factors originate, respectively, from the terms $`\pi ^2(\partial _\mu \pi )^2/6F^2`$ and $`S^\mu \pi ^2\partial _\mu \pi /6F^3`$ in the lowest-order Lagrangian (see for example Eqn. 2 of Ref.). We now turn to demonstrate that for each low chiral order $`D`$ diagram there exists an infinite sequence of loop diagrams of higher chiral order which have the same low momentum power as the original diagram. Consider for example the diagrams shown in Fig. 1. The simplest irreducible diagram is the so called impulse term (graph 1a), corresponding to chiral order $`D=1`$. As shown in Ref., using the rules quoted above, the low momentum power order of this term is $`\mathrm{\Theta }_1\sim F^{-3}(m_\pi /M)^{1/2}`$. Next, by adding two zero chiral order $`\pi NN`$ vertices, two lowest order nucleon propagators and one meson propagator to the diagram 1a, one obtains the irreducible one loop diagram 1b. We recall that a zero chiral order $`\pi NN`$ vertex is proportional to the meson three-momentum, i.e., $$V_{\pi NN}=\frac{g_A}{F}S\cdot Q\,\tau ,$$ (1) where $`S`$ is the nucleon spin operator, and contributes a factor $`QF^{-1}`$ (rule (i) above). Thus the two added vertices give a factor $`Q^2F^{-2}`$. Likewise, the meson propagator contributes a factor $`Q^{-2}`$, the two nucleon propagators give $`(v\cdot Q)^{-2}`$ and the loop integral contributes a factor $`Q^4/(4\pi )^2`$. Altogether the diagram 1b carries an additional factor $`Q^4/[(4\pi F)^2(v\cdot Q)^2]`$ with respect to the original diagram 1a. The power factor of diagram 1b is therefore $$\mathrm{\Theta }_3=\mathrm{\Theta }_1\frac{Q^4}{(4\pi F)^2(v\cdot Q)^2}.$$ (2) With $`4\pi F\sim M`$ and $`v\cdot Q\sim m_\pi `$, $`\mathrm{\Theta }_3=\mathrm{\Theta }_1`$, so that although higher in chiral order, the diagram 1b is of the same order as the diagram 1a. Similarly, by progressively adding two zero order $`\pi NN`$ vertices, a pion propagator and two nucleon propagators, as described above, one obtains the other irreducible n-loop diagrams shown in Fig. 1. 
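Before generalizing to $`n`$ loops, the bookkeeping behind Eq. (2) can be checked symbolically. The sketch below is our own illustration (sympy), in which the order-of-magnitude identifications $`Q^2=m_\pi M`$, $`4\pi F=M`$ and $`v\cdot Q=m_\pi `$ are imposed as exact relations:

```python
import sympy as sp

m_pi, M = sp.symbols('m_pi M', positive=True)

Q2 = m_pi * M        # Q^2 ~ m_pi * M at the production threshold
F = M / (4 * sp.pi)  # identify 4*pi*F with M
vQ = m_pi            # v.Q ~ m_pi

# Extra factor carried by each added loop, Eq. (2):
loop_factor = Q2**2 / ((4 * sp.pi * F) ** 2 * vQ**2)
assert sp.simplify(loop_factor) == 1  # hence Theta_3 = Theta_1

# Numerically, |Q| = sqrt(m_pi * M) ~ 0.36 GeV for m_pi ~ 0.14 GeV, M ~ 0.94 GeV:
print(sp.sqrt(Q2).subs({m_pi: 0.14, M: 0.94}))
```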
By making use of the same power counting rules as above, the momentum power of the n-loop diagram is $$\mathrm{\Theta }_{2n+1}=\mathrm{\Theta }_1\left(\frac{Q^4}{(4\pi F)^2(v\cdot Q)^2}\right)^n=\mathrm{\Theta }_1.$$ (3) Thus for the impulse diagram 1a there exists an infinite sequence of n-loop diagrams, $`n=1,2,\dots `$, of chiral order $`2n+1`$, all having the same characteristic momentum power as the lowest chiral order tree diagram. Quite obviously, such a sequence of loop diagrams can be constructed in a similar manner for any irreducible diagram that may contribute to the production process. Thus the basic principle of HFF $`\chi `$PT, the one-to-one correspondence between the loop and small momentum expansion, is badly destroyed. The primary production amplitude becomes a sum over infinite sequences of loop diagrams all of the same power order, thus excluding the possibility that a finite chiral order HFF based $`\chi `$PT calculation can explain meson production in NN collisions. This result, along with the observation made in Refs. that the HFF series of the nucleon propagator is on the border of its convergence circle, leads us to conclude that the $`NN\to NN\pi `$ production process falls outside the HFF validity domain. Acknowledgments This work was supported in part by the Israel Ministry of Absorption.
# Transport on an annealed disordered lattice \[ ## Abstract We study diffusion on an annealed disordered lattice with a local dynamical reorganization of bonds. We show that the typical rearrangement time depends on the renewal rate like $`t_r\sim \tau ^\alpha `$ with $`\alpha <1`$. This implies that the crossover time to normal diffusion in the slow rearrangement regime shows a critical behavior at the percolation threshold. New scaling relations for the dependence of the diffusion coefficient on the renewal rate are obtained. The derivation of the scaling exponents confirms the crucial role of singly connected bonds in the transport properties. These results are checked by numerical simulations in two and three dimensions. \] “Dynamic percolation” (“stirred percolation”) was introduced as a model of transport in environments that evolve in time, e.g. microemulsions or polymers (for further applications see ). The simplest version of the model is defined on a $`d`$-dimensional regular lattice. Each pair of nearest neighbor sites is connected by a bond, which can be either conducting or insulating. We denote by $`p`$ the proportion of conducting bonds. Time evolution of the environment is achieved by a reorganization of bonds, defined below. Diffusion of a tracer particle in such a network is conveniently described by the ant-in-the-labyrinth paradigm . Two basic algorithms are available. The “blind” ant chooses its direction randomly at each time step and moves only if the corresponding bond is conducting. The “myopic” ant chooses among the conducting bonds. Both algorithms lead to the same scaling behavior of the diffusion coefficient. Two qualitatively different dynamic percolation models have appeared in the literature. The global reorganization model is the simplest. After some renewal time $`T_r`$, the assignment of conducting bonds is updated throughout the lattice. The behavior of this model is well understood , as it is closely related to ordinary percolation. If $`\langle r^2\rangle _{T_r}`$ is the mean square distance traveled on the quenched lattice during the time $`T_r`$, the diffusion coefficient on the stirred lattice will be $`D=\langle r^2\rangle _{T_r}/(2dT_r)`$. The case of local reorganization, which is studied in this article, is more realistic, because the evolution of the network is continuous. The state of a bond evolves through a Poissonian process with a characteristic time $`\tau `$. At each iteration a conducting bond is cut with a probability $`1/(p\tau )`$, and a randomly chosen non-conducting bond becomes conducting, to ensure that the proportion $`p`$ of conducting bonds is conserved. No exact result is available for the dependence of the diffusion coefficient $`D`$ on $`p`$ and $`\tau `$, except in some particular one dimensional situations . Approximate solutions of the problem in any dimension can be obtained by means of a time-dependent version of the effective-medium approximation developed in . Here, we study the scaling of the diffusion coefficient $`D`$ in the vicinity of the percolation threshold $`p_c`$ of the quenched network. Several different scaling formulas for $`D(p-p_c,\tau )`$ have been proposed in the literature. They were derived for models with slightly different local evolution rules, but the details of the local rules are not relevant for the critical behavior around $`p_c`$ . As discussed below, our simulation results do not support current predictions. We derive a new scaling formula for the diffusion coefficient, which we confirm by extensive numerical simulations. 
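To make the local reorganization model concrete, a bare-bones Monte Carlo sketch of the blind ant on a two dimensional annealed lattice is given below. This is our own illustration, not the authors' code: it reads the renewal rule as an independent cut probability $`1/(p\tau )`$ per conducting bond per step (implemented through a Poisson draw), opens one randomly chosen insulating bond per cut so that the conducting fraction stays at $`p`$ on average, and the mean square displacement would still have to be averaged over many walkers and disorder histories:

```python
import numpy as np

rng = np.random.default_rng(0)

# Moves on the square lattice: +x, -x, +y, -y.
MOVES = ((1, 0), (-1, 0), (0, 1), (0, -1))

def bond_index(x, y, d, L):
    """Bond crossed when stepping from (x, y) in direction d.
    bonds[0, x, y] joins (x,y)-(x+1,y); bonds[1, x, y] joins (x,y)-(x,y+1)."""
    if d == 0:
        return (0, x, y)
    if d == 1:
        return (0, (x - 1) % L, y)
    if d == 2:
        return (1, x, y)
    return (1, x, (y - 1) % L)

def blind_ant_msd(L=128, p=0.5, tau=1e4, steps=50_000):
    bonds = rng.random((2, L, L)) < p  # True = conducting
    x = y = L // 2
    X = Y = 0                          # unwrapped coordinates of the ant
    msd = np.empty(steps)
    for t in range(steps):
        # Annealed renewal: each conducting bond is cut with probability
        # 1/(p*tau); each cut opens one randomly chosen insulating bond.
        for _ in range(rng.poisson(bonds.sum() / (p * tau))):
            while True:                # cut a random conducting bond
                i = tuple(rng.integers((2, L, L)))
                if bonds[i]:
                    bonds[i] = False
                    break
            while True:                # open a random insulating bond
                i = tuple(rng.integers((2, L, L)))
                if not bonds[i]:
                    bonds[i] = True
                    break
        # Blind ant: pick a direction at random, move only if that bond conducts.
        d = rng.integers(4)
        if bonds[bond_index(x, y, d, L)]:
            dx, dy = MOVES[d]
            x, y = (x + dx) % L, (y + dy) % L
            X, Y = X + dx, Y + dy
        msd[t] = X * X + Y * Y
    return msd
```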
The behavior at the percolation threshold is studied first, before we treat the general case of the behavior around $`p_c`$. The mean square displacement in the vicinity of $`p_c`$ on a quenched percolation network is given by $$R^2=t^{2/d_w^{}}f\left[(p-p_c)t^{1/(2\nu +\mu -\beta )}\right],$$ (1) where $`d_w^{}=(2\nu +\mu -\beta )/(\nu -\beta /2)`$ is the anomalous-diffusion exponent, and $`f(x)\sim x^\mu `$ as $`x\to +\infty `$, $`f(x)\sim (-x)^{\beta -2\nu }`$ as $`x\to -\infty `$, and $`f(x)\to const.`$ as $`x\to 0`$. At early times, anomalous diffusion is observed. The crossover to normal diffusion (if $`p>p_c`$) or to a localization regime (if $`p<p_c`$) appears at a time of the order of $`t_c\sim |p-p_c|^{\beta -2\nu -\mu }`$, which is the only relevant timescale of the problem. In the case of dynamically disordered lattices, another timescale, related to the cluster rearrangement process, has to be taken into account. We define this typical “rearrangement” time $`t_r`$ as the crossover time from anomalous to normal diffusion at the percolation threshold. It is only a function of the evolution rate $`\tau `$, and we assume a dependence of the form $`t_r\sim \tau ^\alpha `$. The mean square displacement in the presence of dynamical disorder is thus described by a scaling formula depending on two parameters, $`t/t_c`$ and $`t/t_r`$: $$R^2=t^{2/d_w^{}}g[(p-p_c)t^{1/(2\nu +\mu -\beta )};t/\tau ^\alpha ].$$ (2) At the percolation threshold, $`t_c`$ diverges, and the preceding expression reads $$R^2=t^{2/d_w^{}}\chi (t/\tau ^\alpha ),$$ (3) where $`\chi (y)\to const.`$ as $`y\to 0`$ and $`\chi (y)=Dy^{1-2/d_w^{}}`$ as $`y\to \infty `$. The diffusion coefficient $`D`$ is obtained in the limit $`t\to \infty `$: $$D\sim \tau ^{-\alpha \mu /(2\nu +\mu -\beta )}.$$ (4) Eq. (4) contains an unknown parameter $`\alpha `$. Several values of $`\alpha `$ have been proposed in the literature. In , the problem was mapped onto a continuous-time random walk, and lower and upper bounds for $`\alpha `$ were predicted. In , $`\alpha =1`$ is considered. The only justification for this value is the assumption that the global and local rearrangement models have the same behavior. We have performed Monte Carlo simulations to evaluate $`\alpha `$ numerically. The diffusion coefficient can only be measured directly for small values of $`\tau `$, where the crossover time $`t_r`$ is small. In order to explore a broader range of values, we have determined $`\alpha `$ from the finite size scaling relation (3). We measured $`R^2`$ for $`\tau `$ between $`5\times 10^3`$ and $`1.62\times 10^6`$ in two dimensions, and between $`7.8\times 10^3`$ and $`5.12\times 10^5`$ in three dimensions. In two dimensions the best data collapse with parameter $`\alpha `$ is obtained for $`\alpha =0.80\pm 0.02`$ (Fig. 1), in three dimensions for $`\alpha =0.79\pm 0.03`$ (Fig. 2). Identical results were obtained with both the myopic and the blind ant algorithms. As a matter of fact, the value of $`\alpha `$ can be evaluated as a function of known critical exponents using simple assumptions about the geometry of clusters. Clusters are composed of well connected blobs, interconnected by singly connected bonds (“red bonds”) . If a red bond is cut, the cluster breaks into two parts. We argue that the crossover time corresponds to the removal (or addition) of a red bond in the region visited by the tracer particle. The red bonds are the only paths by which a particle can escape from a blob, hence they control the diffusion. 
For $`t<t_r`$ a particle visits on average a hypersphere of diameter $`R\sim t^{1/d_w^{}}`$. The “network” of red bonds is fractal, and their number inside the hypersphere grows as $`N_{\mathrm{rb}}\sim R^{1/\nu }`$ . The crossover corresponds to the average time for the first of the $`N_{\mathrm{rb}}`$ red bonds to be cut. Hence $`t_r\sim \tau /N_{\mathrm{rb}}\sim \tau ^{d_w^{}/(d_w^{}+1/\nu )}`$, giving $$\alpha =\frac{d_w^{}}{d_w^{}+1/\nu }.$$ (5) In two dimensions, where $`\nu =4/3`$, $`\beta =5/36`$ and $`\mu =1.303`$ , we obtain $`\alpha =0.802`$. In three dimensions $`\alpha =0.81\pm 0.06`$ is obtained, using $`\nu =0.88\pm 0.02`$, $`\mu =2.003\pm 0.047`$ and $`\beta =0.405\pm 0.025`$ . These values of $`\alpha `$ are in excellent agreement with the numerical results. Relation (5) predicts that $`\alpha =1`$ for $`d\ge 6`$, so in this limit the local and the global reorganization rules lead to the same scaling. Knowing the value of $`\alpha `$, the complete scaling law for $`D`$ in the vicinity of the percolation threshold can be deduced from (2). The ratio $`t_c/t_r`$ separates two different regimes. In the fast rearrangement regime ($`t_c/t_r\gg 1`$) a tracer particle does not see the finiteness of cluster sizes, hence the scaling of $`D`$ is given by (4). In the slow rearrangement regime ($`t_c/t_r\ll 1`$) two cases have to be considered. For $`p>p_c`$ and $`\tau \to \infty `$, known results for diffusion on the quenched network should be recovered, hence $$D\sim |p-p_c|^\mu .$$ (6) For $`p<p_c`$ the situation is more complicated. At $`t\sim t_c\ll t_r`$, the network is not yet reorganized and the anomalous diffusion crosses over to a localization regime on a finite cluster exactly as for the quenched network. The mean square displacement is thus $`R^2\sim |p-p_c|^{\beta -2\nu }`$. For $`t>t_c`$ it grows as $`R^2\sim |p-p_c|^{\beta -2\nu }g^{}[|p-p_c|t^{1/(2\nu +\mu -\beta )};t/\tau ^\alpha ]`$. For $`t\to \infty `$ a diffusive regime is reached, and it is evident that $`D\propto 1/\tau `$ in this case. Thus the scaling function $`g^{}`$ behaves as $`g^{}[x,y]\sim x^{-a}y^{1/\alpha }`$ for $`x,y\to \infty `$, where the exponent $`a`$ reads $$a=(1/\alpha -1)(2\nu +\mu -\beta )=1-\frac{\beta }{2\nu }.$$ (7) Then $`a=0.948`$ in two dimensions and $`a=0.77\pm 0.017`$ in three dimensions. The scaling relation for $`t>t_c`$ can thus be written as a function of a unique parameter, $$R^2\sim |p-p_c|^{\beta -2\nu }g^{\prime \prime }\left[\frac{t}{|p-p_c|^a\tau }\right],$$ (8) with $`g^{\prime \prime }(y)\to const`$ for $`y\to 0`$ and $`g^{\prime \prime }(y)\sim y`$ for $`y\to \infty `$. It is readily seen that the crossover time $`t_c^{}\sim |p-p_c|^a\tau `$ has itself a critical behavior near $`p_c`$, with an exponent $`a`$. This fact had already been predicted in , but with a different exponent, $`a=1`$. The scaling of $`D`$ in the slow rearrangement regime is simply deduced from (8): $$D\sim \frac{|p-p_c|^{\beta -2\nu -a}}{\tau }.$$ (9) The complete scaling law for $`D`$, consistent with (4), (6) and (9), reads $$D=\frac{|p-p_c|^{\beta -2\nu -a}}{\tau }\varphi \left[(p-p_c)\tau ^{1/(2\nu +\mu +a-\beta )}\right],$$ (10) with $`\varphi (z)\to const.`$ as $`z\to -\infty `$, $`\varphi (z)\sim |z|^{2\nu +a-\beta }`$ as $`z\to 0`$, and $`\varphi (z)\sim z^{2\nu +\mu +a-\beta }`$ as $`z\to +\infty `$. To verify this relation, we have calculated the diffusion coefficient in two dimensions for different values of $`\tau `$ and for $`p`$ in the ranges $`[0.4;0.47]`$ and $`[0.53;0.7]`$, using the algorithm of the myopic ant. 
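Before turning to the data, the exponent combinations entering Eqs. (5), (7) and (10) can be tabulated directly from the static percolation exponents; the following minimal sketch (our own) reproduces the values quoted above:

```python
def dynamic_exponents(nu, beta, mu):
    """alpha of Eq. (5) and a of Eq. (7) from static percolation exponents."""
    d_w = (2 * nu + mu - beta) / (nu - beta / 2)  # anomalous-diffusion exponent
    alpha = d_w / (d_w + 1 / nu)
    a = 1 - beta / (2 * nu)
    return alpha, a

# 2D: nu = 4/3, beta = 5/36, mu = 1.303  ->  alpha ~ 0.802, a ~ 0.948
print(dynamic_exponents(4 / 3, 5 / 36, 1.303))
# 3D: nu = 0.88, beta = 0.405, mu = 2.003 ->  alpha ~ 0.81,  a ~ 0.77
print(dynamic_exponents(0.88, 0.405, 2.003))
```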
The results of these runs are presented in Figure 3. They are well rescaled by the relation (10) (Fig. 4). The best collapse seems to be reached for a slightly smaller value of $`a`$ ($`a=0.9`$) than predicted by (7) ($`a=0.948`$). However, the collapse is not very sensitive to the precise value of $`a`$, because the slow rearrangement regime is not explored in our range of ($`\tau ,p`$). It is difficult to attain this regime using a simple random walk, since the crossover time to the diffusive behavior becomes too long for large values of $`\tau `$. This is the reason why we used the following algorithm to verify Eq. (9). We start from a given site belonging to a cluster of $`s`$ sites. We suppose that the evolution of the network is quasistatic: before the network is rearranged, the particle is thermalized, so that the probability to find it on a given cluster site equals $`1/s`$. Thus we assign at first the probability $`1/s`$ to each cluster site. We then exchange one conducting bond with an insulating bond, find the new cluster distribution and thermalize the probability distribution on each cluster. We iterate this procedure and measure the mean square displacement. The Hoshen-Kopelman algorithm was used to obtain the distribution of clusters. To get good statistics, an average over more than 2000 realizations was performed, so we were limited to networks of relatively small size (up to $`400\times 400`$ sites). Since the diffusive regime is not attained on such a small network, we used the finite size scaling formula (8). We measured $`R^2`$ for $`p`$ ranging from 0.4 to 0.46. For higher values of $`p`$, clusters are too large, and much bigger networks would have to be used. The data collapse is obtained for $`a=0.87\pm 0.05`$ (Figure 5), that is, for a value slightly smaller than predicted by (7). The same effect as in the case of the data collapse of $`D(p-p_c,\tau )`$ is thus encountered. The discrepancy is due to the fact that we are already out of the critical region, so corrections to the exponents $`\alpha `$ and $`d_w`$ should be taken into account. For values of $`p`$ far from $`p_c`$, the probability of having a large cluster, corresponding to a long jump, grows more slowly than near $`p_c`$, and the growth of the diffusion coefficient with $`p`$ is thus also slower. In conclusion, we have derived a new scaling law for the diffusion coefficient in the case of a simple model of stirred percolation. The dependence of the scaling exponents on the basic exponents of percolation theory was found. We showed that the distribution of red bonds controls the transport in the network. These results are supported by extensive numerical simulations. In the slow rearrangement regime for $`p<p_c`$ the diffusion coefficient scales as $`D\sim |p-p_c|^{-s^{}}`$, where $`s^{}\approx 2.1`$ in three dimensions. The value of the scaling exponent in microemulsions ($`s^{}\approx 1.2`$) thus cannot be explained by this simple model, as suggested earlier . It is plausible that the difference is due to interparticle interactions present in microemulsions. They play an important role in the formation of clusters and they might also influence the dynamics of the reorganization of the environment. I would like to thank Jérôme Chave for many fruitful conversations and Hugues Chaté for a careful reading of the manuscript. I also thank Roger Bidaux and Marc A. Dubois for useful discussions.
# A Model for High Energy Scattering in Quantum Gravity ## 1 Introduction The problem of very high energy scattering is deeply intertwined with the history of string theory and the quantum theory of gravity. Indeed, string theory was originally invented as a model of the Regge behavior expected for high energy scattering in hadron physics. More recently, there has been a great deal of effort devoted to elucidating the behavior of high energy scattering in string theory - . In parallel to this activity, ’t Hooft and followers have initiated a study of high energy scattering in the “quantum theory of gravity”, which is to say that their considerations are supposed to be valid in any theory which obeys the principles of quantum theory and reduces to General Relativity at long distances. The aim of this latter program was, at least in part, to address the question of information loss in black hole formation. Not coincidentally, the work cited in - restricts attention to impact parameters larger than those at which General Relativity would predict that black holes are formed. The claim of the present note is that the gross features of high energy scattering far above the Planck scale can be extracted from semiclassical considerations in General Relativity. We will always consider situations with four or more asymptotically Minkowski dimensions. For simplicity, we will restrict our attention to initial scattering states consisting of two particles of mass far below the Planck scale, but it should be possible to extend it to more complicated initial states. The basis of this claim is the following simple observation. The classical picture of this initial state, insofar as gravitational interactions are concerned, consists of two Aichelburg-Sexl shock wave metrics. General Relativity predicts that when the impact parameter of the two shock waves is smaller than a critical value $`R_S`$, a black hole is formed. The No-Hair theorem tells us that the classical final state is then uniquely specified by its representation under the asymptotic symmetry group. $`R_S`$ is of order the Schwarzschild radius of the corresponding black hole and we will, by abuse of language, call it the Schwarzschild radius. The mass of the black hole is of order the center of mass energy of the collision. Thus, $`R_S`$ grows with the center of mass energy. On the other hand, for asymptotically large impact parameters, scattering is also described by classical General Relativity. Indeed, all existing calculations are consistent with the claim that high energy, large impact parameter scattering is dominated by eikonalized single graviton exchange. Thus, since $`R_S`$ grows with energy, we may expect that all aspects of the scattering up to the point of formation of the black hole are well described by the classical theory. Unfortunately, except in the case of $`2+1`$ dimensions with anti-de Sitter boundary conditions , the exact classical solution for black hole formation in the collision of Aichelburg-Sexl shock waves is unknown. The state of the art calculations for shock wave initiated processes in four dimensional flat space may be found in . This fact will make it impossible for us to exhibit a complete formula for scattering cross sections at all energies and impact parameters. For impact parameters below $`R_S`$ an exact description of scattering amplitudes would require us to enter into all of the mysteries of the black hole information problem. 
However, since for large energy the mass of the black hole is large, one needs only the familiar Hawking formulae to extract the gross features of inclusive cross sections. Furthermore, the thermal nature of these cross sections suggests that any more precise description of the scattering will be hopelessly complicated. We want to emphasize that, although we believe recent progress in M Theory suggests that the scattering matrix is unitary even in the presence of black hole production, our results do not depend heavily on this assumption since we only describe inclusive cross sections. To summarize then, our proposed model for high energy scattering is the following: at impact parameters greater than $`R_S`$, elastic and inelastic processes (gravitational radiation, photon bremsstrahlung for charged particles and the like) will in principle be described by solving the classical equations of the low energy theory with initial conditions described by a pair of shock waves with appropriate quantum numbers<sup>1</sup><sup>1</sup>1Note that we only attempt to describe the leading high energy behavior. The full series of corrections to this behavior would require more knowledge of M Theory than we possess. . Note that since $`R_S^2`$ grows much faster with energy than strong interaction cross sections are allowed to (by the Froissart bound), this behavior will be completely determined by the classical physics of the degrees of freedom with energies below the Planck scale (see however the discussion of weakly coupled string theory below). At smaller impact parameters, scattering will be dominated by “resonant” production of a single black hole with mass equal to the center of mass energy. We put the word “resonant” in quotes because, despite their long lifetimes, black holes do not fit the profile of a classic Breit-Wigner resonance. Indeed they are most peculiar from the Breit-Wigner point of view. The Breit-Wigner formula for the contribution of a particular resonance to the elastic cross section for two body scattering is proportional to the square of the partial width for the resonance to decay into this particular channel. This is because, for a single narrow resonance, unitarity implies that the amplitude to produce the resonant state is the same as that for the resonance to decay back into the initial state. For black hole production, we expect the initial amplitude to be of order one whenever the impact parameter is smaller than $`R_S`$. On the other hand, the decay of the black hole is thermal; there will be a very small probability for it to decay back into the initial high energy two particle state. Thus, we expect the elastic cross section to be linear rather than quadratic in the partial width of the two body final state; the elastic cross section is therefore larger than might have been expected. More striking is the inelastic resonant cross section, $`\sigma (A+B\to BH\to Anything)`$. This will be large even though the partial width to decay into the initial state is small. The cross section resembles that for a high energy collision of two bodies with an already existing, highly degenerate, macroscopic object. In such a situation, the energy of the initial particles is thermalized among the large number of degrees of freedom of the macroscopic body, the decay is thermal, and the probability of recreating the initial state is much smaller than that of the initial collision. A. Rajaraman has suggested to us that a fact from classical GR may help to explain this behavior. 
When a black hole is formed by the collapse of a thin spherical shell of matter, the horizon forms long before the shell has fallen past the Schwarzschild radius. Similarly, in the collision of two shock waves, we might expect the horizon to form long before the waves reach the distance $`R_S`$. In this sense we can view the scattering as being caused by the impact of the colliding particles with an already existing horizon, an object which has a macroscopic number of degrees of freedom. The other important difference between black hole and resonance production is that resonances occur at discrete energies. By contrast, for every center of mass energy $`E`$ above some threshold of order the Planck mass, and every impact parameter below $`R_S(E)`$, we expect the high energy cross section to be dominated by the production of a single object, almost stationary in the center of mass frame. The object will have a long lifetime and will decay thermally (and thus isotropically in the center of mass frame) . The elastic cross section will be small. Note in addition that two body final states with large momentum transfer will be even more highly suppressed than the generic two body state. This is in marked contrast with the behavior of the system below the black hole production threshold, where the proliferation of hadronic jets in the final state increases with energy. Once black holes with large enough radii can be formed, the colliding particles never get close enough to perform a hard QCD scattering. The dramatic nature of these processes suggests that they will be easy to see if we ever build a Planck energy accelerator. This is particularly exciting in view of recent suggestions that the world may have large extra dimensions and a true Planck scale of order a TeV<sup>2</sup><sup>2</sup>2We note that the authors of have investigated the properties of and astrophysical constraints on, black holes in theories with large extra dimensions. . However, we argue that most of the Hawking radiation of the higher dimensional black hole will, for phase space reasons, consist of Kaluza-Klein modes of gravitons, and thus be undetectable. In the absence of experimental information about the final state, it is hard to distinguish these missing energy signals from those which come from production of a few KK gravitons . At sufficiently high energies the Hawking temperature of the black hole is small compared to the KK energy scale and the Hawking radiation will be dominated by observable particles. We show this occurs at about the point where the Schwarzschild radius is equal to the KK radius, which is at energies above the four dimensional Planck mass. We suggest that the suppression of hard QCD processes is a possible signal for identifying this sort of invisible black hole production. Complete suppression of QCD jet phenomena requires that the Schwarzschild radius be larger than an inverse GeV, and this only occurs at unreachably high energies. However, the suppression of jets with transverse momenta higher than the inverse Schwarzschild radius should become apparent before this. The detailed investigation of this phenomenon is beyond the scope of the present paper. 
There can be a plethora of scales and a variety of different high energy regimes. Readers of a more phenomenological bent are advised to skip this section, which will be of interest mostly to string theorists. In Section 3, we present some remarks relevant to the regime discussed in this introduction, and assess the likelihood of observing black hole production experimentally if theories with large dimensions are correct. ## 2 High Energy Scattering in Weakly Coupled String Regimes Consider the moduli space of M Theory compactifications with four Minkowski dimensions. Much of this moduli space can be well approximated by compactifications of 11 dimensional supergravity (11D SUGRA) on manifolds with dimensions large compared to the Planck scale. The discussion in the introduction applies primarily to such regions of moduli space. An example of regions not covered by this description are weakly coupled string compactifications. These can be viewed as the proper limits of compactifications of 11D SUGRA on manifolds with some dimensions much smaller than the Planck scale. The same is true for F theory compactifications. We will begin by discussing the simple case in which all dimensions are of order or much larger than the eleven dimensional Planck length. Then, by examining the case of weakly coupled string theory, we will establish that other regimes have a much more complicated set of high energy behaviors. Imagine first that some dimensions are of order the Planck scale, while $`n`$ others are much larger. Let $`M_{4+n}`$ denote the Planck mass in the effective theory below the eleven dimensional Planck scale. Given our assumptions, it is of order the eleven dimensional Planck scale and we will not bother to distinguish between them. The four dimensional Planck scale is given by $$M_P^2=V_nM_{4+n}^{2+n},$$ (1) where $`V_n`$ is the volume of the large dimensions. Note that $`M_P>M_{4+n}`$. As the energy is raised, we will approach two thresholds, the first, the Kaluza Klein (KK) scale of the large dimensions, and the second, $`M_{4+n}`$. The observation of is that if $`V_n`$ is large in eleven dimensional Planck units, if the standard model lives on a brane embedded in the large dimensions, and if gravitons and other fields with only nonrenormalizable couplings to the fields on the brane are the only bulk fields, then the first threshold may show up only in very high precision experiments. The couplings of ordinary matter to the new states will be suppressed by powers of the energy divided by $`M_P`$ until we reach the threshold $`M_{4+n}`$. Most analyses of what happens above this threshold have concentrated on the production of KK modes. We will argue that somewhere around this energy regime, the (in principle) much more dramatic phenomenon of black hole production sets in. We will reserve a more detailed description of these processes for the next section. Here we merely observe that the appropriate form of GR to use in these estimates is $`4+n`$ dimensional gravity. We turn now to regimes described by weakly coupled string theory. Here there is generically a hierarchy of scales, starting with the string scale and proceeding to energy scales which are larger than the string scale by inverse powers of the coupling. Among these is the Planck scale associated with the $`4+n`$ dimensional space<sup>3</sup><sup>3</sup>3We now assume that $`6-n`$ dimensions are compactified at about the string scale. 
In string theory, T dualities and Mirror symmetries usually make this a lower bound on compactification dimensions in the weak coupling regime.. Let us turn first to the fixed angle regime studied in . A cartoon version of the analysis of these papers follows: all Lorentz invariants of the scattering process have the same order of magnitude, call it $`s`$, in this regime. $`k`$ loop amplitudes have the form $$A_k\sim \int dm\,e^{-s\alpha ^{}f_k(m)/k}$$ (2) where the integral is over the moduli space of genus $`k`$ Riemann surfaces with $`4`$ punctures and $`f_k`$ depends only weakly on $`k`$. For large $`s`$ one does the integral by steepest descents, obtaining an amplitude which falls off like $`e^{-s\alpha ^{}f_k(m_0)/k}`$. The Gaussian fluctuations around the stationary point in moduli space give a coefficient of order $`6k`$. The facts that the exponential becomes flatter for large $`k`$ and that the coefficient far exceeds the $`2k`$ growth expected for the large order behavior of string perturbation theory (facts which are mathematically related) lead us to be suspicious of this result at energies which scale like inverse powers of $`g_S`$ <sup>5</sup><sup>5</sup>5 Physically, the reason for this flattening was explained in : the lowest order amplitude gives an exponentially falling amplitude for large momentum transfer. At higher orders, the most efficient way to distribute the momentum transfer is to form a $`k`$ string intermediate state, with each subprocess transferring momentum squared of order $`s/k`$. . Indeed, the large $`k`$ behavior of the amplitude at fixed $`s`$ is constant in $`s`$ and is of order $`2k`$ (the estimate of the coefficient comes from the volume of moduli space). This suggests an $`s`$ independent, nonperturbative contribution to the amplitude of magnitude $`e^{-\frac{c}{g_S}}`$ such as that predicted by D-instantons in Type IIB string theory. One may expect similar pointlike contributions in other weakly coupled string theories (with the notable exception of the heterotic theory, where these contributions may have something to do with the throats of NS 5 branes) from components of the wave functions of scattering states which contain D-object anti-D-object pairs separated by distances of order the string length. At small impact parameter we should see a contribution from the pointlike scattering of individual D-branes . The nonperturbative amplitude competes with the perturbative one when $`s\alpha ^{}\sim \frac{1}{g_S}`$. Note that the ten dimensional Planck energy squared is $`M_{10}^2\sim (g_S^{1/2}\alpha ^{})^{-1}`$, which is much smaller than this crossover energy. The semiclassical analysis of the introduction and the following section is valid only when the energy is much larger than $`M_P`$ and the Schwarzschild radius is larger than the impact parameter as well as the string length. 
In ten dimensions the Schwarzschild radius is of order $`s^{\frac{1}{14}}g_S^{\frac{2}{7}}`$ in string units, so the semiclassical regime would apparently only set in for $`s\gtrsim g_S^{-4}`$. This would seem to be a valid estimate in IIB string theory, but in IIA this energy is above the inverse compactification radius of the M Theory circle, so we should really make an eleven dimensional estimate. The eleven dimensional Schwarzschild radius only exceeds the string scale when $`s\gtrsim g_S^{-6}`$ in string units. Thus in all cases it seems that the crossover between perturbative string and D-instanton behavior sets in in a regime in which gravitational corrections are negligible. If the interpretation of the pointlike nonperturbative cross section in terms of D-brane “sea partons” in the string wave function is correct, we may expect that the description we have given of the crossover is not complete. Indeed, the authors of showed that D0 brane scattering in weakly coupled type IIA string theory became soft at scales of order the eleven dimensional Planck mass, or $`g_S^{-1/3}`$ in string units. This is lower than the crossover scale. To conclude, in the weakly coupled string regime, the semiclassical analysis of the introduction is expected to be valid only at energies which are parametrically (in $`g_S`$) higher than any relevant Planck scale. In the IIA theory it is only 11 dimensional SUGRA which eventually becomes relevant, and only at an energy scale parametrically larger than the inverse compactification radius of the M Theory circle. At energies below this true asymptopia we expect to see a rich structure of high energy amplitudes, dominated successively by perturbative strings, and nonrelativistic followed by relativistic scattering of “Dirichlet sea” constituents of the incoming states. ## 3 Black hole cross sections We write the elastic amplitude for $`2\to 2`$ scattering in eikonal form $$A(s,q^2)\sim \int d𝐛\,e^{i𝐪\cdot 𝐛}e^{i\chi (𝐛,s)}$$ (3) where $`𝐛`$ is the impact parameter and $`s`$ the square of the center of mass energy. For $`n`$ relatively large compact dimensions, the Schwarzschild radius of a $`4+n`$ dimensional black hole of mass $`\sqrt{s}`$ is approximately $`R_S\sim M_{4+n}^{-1}(s/M_{4+n}^2)^{\frac{1}{2(n+1)}}`$. In order to use flat space black hole formulae, we must have $`R_S<L`$, the radius of the compact dimensions. In terms of the energy, this bound is $`\sqrt{s}<M_P(M_P/M_{4+n})^{\frac{n+2}{n}}`$. For applications to theories with low scale quantum gravity, this bound is never exceeded, so we will not discuss larger values of $`s`$. We note however that when $`R_S`$ exceeds the compactification radius, the most likely outcome is that the system is described as a four dimensional black hole (a black brane wrapped on the compact dimensions). For impact parameters smaller than $`R_S`$ the cross section will be completely dominated by black hole production. As outlined in the introduction, this will have the following consequences: * The elastic cross section will be suppressed by a Boltzmann factor $`e^{-\sqrt{s}/T_H}`$, where the Hawking temperature is $`T_H\sim M_{4+n}(M_{4+n}/\sqrt{s})^{\frac{1}{n+1}}`$. * Due to initial state bremsstrahlung the black hole will not be exactly at rest in the center of mass frame. The average energy emitted in bremsstrahlung should be calculable by the methods of . The final state will be a black hole at rest in the frame determined by this bremsstrahlung calculation. It will decay thermally, and therefore isotropically in this frame. 
This prescription only allows one to calculate inclusive cross sections, but the thermal nature of these indicates that any more precise calculation of the amplitudes for various final states is beyond the range of our abilities.

* In the standard model, we expect high energy collisions to be characterized by a larger and larger multiplicity of QCD jets with higher and higher transverse momenta. One of the most striking features of black hole production is that processes with transverse momenta larger than $`R_S^1`$ should be completely absent. The incoming particles never get close enough together to perform a hard QCD scattering. This characteristic shutoff of hadronic jets may be one of the most striking signals of black hole production processes.
* Although a long lived black hole will be produced at every sufficiently high energy and small impact parameter, the signature of these events does not look like a conventional Breit-Wigner resonance.

When the impact parameter is larger than $`R_S`$, we do not expect black hole formation to occur. When the impact parameter is very large, the elastic scattering is given by the eikonal formula coming from single graviton exchange. Note that here it is four dimensional gravitational physics which is relevant, since we are talking about asymptotically large impact parameter. At energies relevant for discussing theories of low scale quantum gravity , these amplitudes are completely negligible. Since, at sufficiently high energy, the Schwarzschild radius is larger than all microscopic scales besides the radius of the compact dimensions, we conjecture that the behavior of the elastic amplitude, and at least the gross features of multiparticle production cross sections, can be extracted from the solution of the equations of classical general relativity. It is possible that there is a small region in impact parameter near to but larger than the Schwarzschild radius where a more detailed quantum mechanical treatment is necessary. Thus, in summary, we conjecture that most gross features of scattering at energies much higher than the Planck scale can in fact be determined by solving classical equations. This is still a very involved task. Even the problem of colliding Aichelburg-Sexl waves in four flat dimensions is not solved. For scenarios of low scale quantum gravity one would have to solve an analogous problem in a partially compactified space. Also, one would have to learn how to extract information about multiparticle amplitudes from the classical solutions. Despite the complication, we would imagine that these problems are amenable at least to numerical solution. An important issue which might be clarified by this analysis is a more precise estimate of the threshold above which our description of high energy scattering would be expected to hold. At the moment we can only say that it should hold sufficiently far above the Planck scale. A better estimate of the threshold is crucial to any attempt to use the properties of black hole production to constrain theories of low scale gravity. We would guess that it is about an order of magnitude higher than the $`4+n`$ dimensional Planck mass. However, even when we reach this threshold, it is not clear that black hole production will have striking experimental signatures<sup>6</sup><sup>6</sup>6The following paragraphs were a response to questions raised by E. Witten..
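The scaling relations used in this section are easy to tabulate. The sketch below is ours, not the authors'; it drops every order-one coefficient and assumes, for concreteness, $`n=2`$ large extra dimensions and $`M_{4+n}=1`$ TeV. It evaluates the Schwarzschild radius and Hawking temperature quoted above, the Boltzmann suppression of the elastic channel, the energy at which $`R_S`$ reaches the compactification radius $`L`$, and the energy at which $`R_S`$ reaches an inverse GeV (relevant to the jet suppression discussed next).

```python
import numpy as np

# Illustrative only: all O(1) coefficients dropped.  Assumed inputs:
# n = 2 large extra dimensions, M_{4+n} = 1 TeV, M_P = 4d Planck mass.
n, M4n, MP = 2, 1.0e3, 1.2e19          # energies in GeV

def schwarzschild_radius(sqrt_s):
    """R_S ~ M^{-1} (sqrt(s)/M)^{1/(n+1)} for a 4+n dimensional black hole."""
    return (sqrt_s / M4n) ** (1.0 / (n + 1)) / M4n

def hawking_temperature(sqrt_s):
    """T_H ~ 1/R_S ~ M (M/sqrt(s))^{1/(n+1)}."""
    return M4n * (M4n / sqrt_s) ** (1.0 / (n + 1))

for sqrt_s in (1.0e4, 1.0e5, 1.0e6):   # GeV
    TH = hawking_temperature(sqrt_s)
    print(f"sqrt(s)={sqrt_s:.0e} GeV: R_S={schwarzschild_radius(sqrt_s):.1e} GeV^-1, "
          f"T_H={TH:.0f} GeV, elastic Boltzmann factor ~ {np.exp(-sqrt_s / TH):.1e}")

# Flat-space black hole formulae trusted only while R_S <= L, i.e. up to
# sqrt(s) ~ M_P (M_P/M_{4+n})^{(n+2)/n}:
print(f"R_S reaches L near sqrt(s) ~ {MP * (MP / M4n) ** ((n + 2) / n):.1e} GeV")

# Energy at which R_S ~ (1 GeV)^-1, i.e. complete suppression of hard jets:
print(f"R_S reaches 1 GeV^-1 near E ~ {M4n * (M4n / 1.0) ** (n + 1):.0e} GeV")
```

For these assumed inputs the last line reproduces the $`10^{12}`$ GeV figure quoted in the following paragraph.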
The most striking feature of black hole production is of course the Hawking decay of the final state, which will be nearly at rest in the center of mass frame. Unfortunately, for phase space reasons, this decay will be primarily into KK graviton modes, which are invisible to all detectors. Thus, although the final state of black hole decay is very different from that produced in the perturbative processes discussed in , it may not be different in a way that can be easily measured. One might hope that at sufficiently high energies, the Hawking temperature would be so low that KK modes could not be produced, and we would get a thermal distribution of standard model particles. This happens at temperatures where $`R_{KK}T<1`$. Since the Hawking temperature is just the inverse of the Schwarzschild radius, this is the point at which the Schwarzschild and KK radii cross. As noted above, this occurs only at energies larger than the four dimensional Planck mass. A more promising signal is the suppression of hard QCD processes. Complete suppression requires a Schwarzschild radius of order an inverse GeV. This occurs at energies of order $`(E/M_{4+n})\sim (M_{4+n}/1\mathrm{GeV})^{(n+1)}`$. Even for a six dimensional scenario with $`M_{4+n}=1`$ TeV, this is $`10^{12}`$ GeV. However, suppression of jets with transverse momenta larger than $`M_{4+n}`$ will occur as soon as the threshold for production of black holes is passed. Furthermore, the suppression will become more marked with increasing energy, i.e. the average transverse momenta of jets should go down with the energy, precisely the opposite of the QCD expectation. This question deserves more detailed study, but we feel confident that a relatively clean experimental signature will emerge from such a study. Note that the rate of increase of the Schwarzschild radius with energy may be measurable in this way, thus providing a direct measurement of the number of large compact dimensions. Clearly, all of these studies require the ability to probe a range of energies up to a few orders of magnitude above $`M_{4+n}`$, and it is unclear if anything can be seen in presently planned accelerators. If, however, evidence for large extra dimensions is found at the LHC, then one would be highly motivated to build a larger machine, which could study black hole production.

###### Acknowledgments.

We would like to thank G. Horowitz, A. Rajaraman, and E. Witten for useful discussions. The work of TB was supported in part by the DOE under grant number DE-FG02-96ER40559. The work of WF was supported in part by the Robert Welch Foundation and the NSF under grant number PHY-9219345.
no-problem/9906/physics9906028.html
ar5iv
text
# Untitled Document MY LIFE AS TUTOR Reflections on Two Recent Experiences Alessandro B. Romeo Onsala Space Observatory, Chalmers University of Technology, S-43992 Onsala, Sweden (romeo@oso.chalmers.se) 22 January 1999 Final Report for ‘Pedagogy of Group Tutoring’ (1998), Advanced Research Course at Chalmers University of Technology

SUMMARY In this final report, I briefly reflect on two parallel teaching experiences as tutor: one for ‘Computer Science and Engineering in Context’ (1998) and the other for ‘Introduction to Electrical Engineering’ (1998), obligatory courses at Chalmers University of Technology. Special emphasis is put on the former. Besides, I briefly view such experiences in interaction with my research work, private life and new teaching position. In harmony with my conception of teaching, I avoid the standard formal style of reports and try an interactive dialogue with the reader.

Why I am writing this report The course ‘Computer Science and Engineering in Context’ is finished, and now it is we, the tutors, who should write a final report on our teaching experience. Such reflections are important for at least two reasons: (1) for ourselves, because we can view our experience retrospectively and learn how to improve our role as educators; (2) for the others, either new tutors or course organizers, in order to improve the structure of the course and optimize its impact on the student education at Chalmers. In writing this report, I will use a colloquial style and imagine interacting directly with the reader. I always use a formal style in writing scientific papers. Here instead I want to experiment with a new way of communicating my ideas, something between an oral discussion and a written report, something more spontaneous and hopefully more effective. So let us discuss the main phases of my experience.

How I got involved in this course I had just come back from a three-month visit to the International School for Advanced Studies in Trieste, Italy, when I heard that Peter Jansson was recruiting tutors for the course. At that time, I had an urgent need for financial support, since my position at Onsala Space Observatory had ended a few months before. So I didn’t hesitate to confirm my participation. On the other hand, I also hoped to complete the scientific work started in Italy, and thus I was unwilling to teach intensively before the new year. The first meeting with Peter and the other tutors was a really positive start. What I liked most about that workshop was, apart from the beautiful place where it was held, the fact that it was structured as an interactive discussion between course organizers, pedagogues and tutors. This is precisely what I believe teaching should be: not a monologue, but an open dialogue between the participants. My previous teaching experience taught me that flexibility and intuition are important qualities in this respect. After this first workshop, I felt very charged and ready to start.

What I expected and what happened Before the actual start of the course I felt somewhat anxious. In fact, this was the first time that I would tutor research projects in fields other than physics or mathematics. I am an astrophysicist and know very little about computer science and engineering. Students always expect much from their teachers, and so I expected much from myself. But how could I explain to them that I knew nothing about their field and yet could still be a good tutor?
The best would be of course to clarify my role from the beginning, and so I did. The initial expectation analyses, one performed on myself and the other performed on my students, immediately gave good results. I felt accepted by the students, and thus I could also trust myself in the new role. This was the premise for a fruitful atmosphere and successful work.

How I acted: an ABC

A. Next came the problem of how to integrate the important points emphasized at the pedagogy course into my personal method of teaching. My conception of teaching is that it should be dynamic, as opposed to static. So what better opportunity than experimenting with brainstorming, snowball and related techniques from the beginning, that is, at our first group meeting? Students accepted this idea enthusiastically. They like to experiment with new forms of learning, just as we like to experiment with new forms of teaching. This is because our minds like change. Nobody can guarantee that one method is better than another; it depends on so many factors, most of which are out of our control. What is important is to adapt ourselves as teachers to the students, but such a process should be as swift and smooth as possible. Besides, and perhaps even more importantly, we should show the students that we are willing to learn from them. I believe that such a conception of reciprocal teaching and learning is what can make teachers and students active members of the same team: giving and receiving always work better together.

B. Now a crucial question arises: how can we optimize our interaction with the students? The answer that I found is: observing how students interact with one another, and deciding which role to take accordingly. If we want to observe the real behaviour of students, we should disturb them as little as possible with our presence and make them unaware of our role as observers. My solution was to say: “I see that you are discussing intensively specific topics of the project. Do you mind if, meanwhile, I read something urgent?” So, while pretending to read an astrophysical paper, I was instead concentrating on their interactions. How long? About half an hour, which is the typical time that students need to forget about the presence of an apparently inactive tutor. How many times? About once or twice a month, in order to monitor the evolution of the student interactions. This is especially important for finding out if there are conflicts in the group. In such a case, the best is of course to sort out these conflicts as soon as possible, and the tutor should then act as a real psycho-analyst.

C. Last but not least, what about student evaluation? A key factor to bear in mind is that, whenever people interact, they evaluate one another. This means that not only teachers evaluate students, but also students tacitly evaluate teachers. Both evaluations start from the beginning and last up to the end of the series of meetings, or even beyond. Our goal as educators is to give students a good example even concerning such a delicate point: we should show them that timely and constructive evaluations can be of benefit to both parts. We are human beings, and mistakes belong to our nature. It is through mistakes that children learn to live in the adults’ world, it is through mistakes that humanity has progressed to the present. The important thing is to recognize our mistakes and to improve. Yes, if we emphasize this goal as the reason for the evaluations, then we can be sure that students will agree and collaborate.
What better example than to start with ourselves? It is hard but necessary. If we show that we want to improve through the suggestions of the students, then we also show that we are willing to learn from them. What better satisfaction for the students? To teach their teachers! Everyone teaches and learns at the same time and democratically, as it should be. In summary: Be a good example for your students not only as an educator, but also as a student and a human being!

If you want to know more

The previous discussion can be extended by referring to my four progress reports, which are unpublished but available on request, and to appropriate literature. Let me first recommend the book and TV series ‘Filosofiska Frågor – Äventyr i Tankens Värld’, produced by the ‘UtbildningsRadion’ in 1998. Such a source does not directly concern pedagogy, but it helps us to reflect on fundamental problems about ourselves and our complex interactions with others, and thus to learn from our life experiences. The literature suggested at the pedagogy course covers important aspects of group tutoring and related problems. Lennéer-Axelson & Thylefors (1991) discuss the psycho-sociology of group work, and I find their book especially useful for learning how to cope with conflicts. Jaques (1992) discusses educational methods for improving the quality of group teaching and training. I find it especially useful for learning several things: techniques such as brainstorming and snowball (expectation analysis is discussed by Widerberg 1994); the tutor’s roles in his/her interaction with the students, and their effects on group dynamics and evolution; and group evaluation. In addition, Booth et al. (1997) and Jansson (1998) analyse the impacts of the course ‘Computer Science and Engineering in Context’ and the underlying project ‘D++’ on the Swedish education.

What I learned

What I learned from this experience is due to two major sources: (1) the excellent course ‘Pedagogy of Group Tutoring’ organized by Peter Jansson and Shirley Booth; (2) the interaction with my students. The first source represented a concrete example of how fruitful group work can be. It also helped to structure and rationalize my ideas about teaching, which had started to take form about 15 years ago, that is, at the time of my first private lessons. The second source was, I believe, fundamental: I learned to learn from my students.

Comparison with a parallel experience

My experience as tutor for the course ‘Computer Science and Engineering in Context’ was rich and joyful. I had an ideal group of students, the best among all first-year students that I had had in five years of teaching here at Chalmers. The course organization was excellent, a real team work between course organizers, pedagogues and tutors. What else to say? Parallel to that, I had a painful experience as tutor for the course ‘Introduction to Electrical Engineering’. The group of students was on the whole very lazy. Actually, there was a good student among them, but he was shy. So the group was dominated by the lack of internal motivation and the stubbornness of the other students. I did my best to stimulate their enthusiasm and activity, but with poor results. The course organization was, to say the least, chaotic. The recruitment of tutors was completed more than one week after the start of the course. Tutors were not informed about their duties until a few days before they were supposed to act.
For example, they had to play the role of mathematical ‘exercisers’ (not tutors) with no previous psychological preparation for that, and with a resulting substantial delay in the start of the project. All the steps of the project were decided by the course organizer without consulting the tutors, who just found weekly instructions for their work… The list is even longer… Last, and worst, I had a clear impression that the course organizer wanted to convey the idea that students should pass the exam; this impression was also shared by the students of my group. What else to say? An experience that I would never repeat! One may wonder whether such a picture is objective or not. How is it possible that, of two parallel experiences, one was so positive and the other so negative? What I can say is that I am not black-and-white in my way of thinking, and that I have other teaching experiences for comparison. Thus I believe that the picture sharply reflects the contrast between the two course organizations, which unavoidably influence the performance of both tutors and students. Viewing such organizations in context and monitoring the opinions of both tutors and students may help these courses to progress towards their important common goal.

How teaching interacted with my research work

When I came back from Italy, I was very excited and full of new ideas. It was the first time that I could stay in my natural environment for more than one month after seven years of residence in Sweden. At the International School for Advanced Studies I had started three important scientific collaborations, one of them together with the Astrophysics Group at Chalmers, and I had planned to complete these projects as soon as possible because of the official end of my position at Onsala Space Observatory. Thus the impact of intensive teaching on my research work was hard at the beginning: I was obliged to teach in order to earn money to survive. This was in sharp contrast to my previous teaching experience, in which I had got involved spontaneously. Did anything change afterwards? Yes, definitely! Peter and my students of the course ‘Computer Science and Engineering in Context’ succeeded in creating such a stimulating atmosphere around my role as tutor that I quickly became enthusiastic, even considering my disappointment about the other course. (Now, of course, I feel a bit embarrassed to write such things about Peter because he will read my report, but this is the pure truth.) In conclusion, my research work underwent substantial delays, but I gained an invaluably rich teaching experience.

How teaching interacted with my private life

Outside the academic world, I am a happy husband and father of three children. Apart from an obvious period of stress caused by the initial teaching-research conflict, my family life reflected the satisfaction of being a tutor. I was so absorbed in such a role that I proposed a practical project for the week-ends to my family. A brief work plan was ready by the end of November, and the result was the construction of a beautiful Christmas crib. There was no final report :-)

MY NEW LIFE AS LECTOR

Now my children would say: “snipp snapp snut, nu är sagan slut”, but the end of that story is the beginning of a new one. This Christmas I have received a surprising present: a tenure-track lector position, which the Swedish Natural Science Research Council will financially support to 80% for three years, that is, beyond the start of the new millennium.
This means a lot to me: more balance between my teaching and research work, possibly time for popular science, more economic security for my family, and thus a clearer view of my future; in other words, a new life.
no-problem/9906/astro-ph9906090.html
ar5iv
text
# Non-equilibrium excitation of methanol in Galactic molecular clouds: multi-transitional observations at 2 mm.

## 1 Introduction

Methanol in space has been intensively studied since its discovery (Ball et al. 1970). Most of the attention has been concentrated on methanol masers, and much less effort has been put into studies of thermal methanol. The methanol molecule is a slightly asymmetric top with hindered internal rotation, and possesses a large number of allowed transitions at radio frequencies. Multi-transitional observations can be used to determine the main properties of the ambient gas—temperature and density—and of the methanol itself (column density and abundance; Menten et al. 1988; Kalenskii et al. 1997). Ziurys & McGonagle (1993) detected a series of $`J_0\rightarrow J_1E`$ lines near 157 GHz toward Ori-KL. Observations in these lines have an additional advantage that, because they can be observed simultaneously with the same receiver, their relative intensities are free from calibration errors. We made an extensive survey of galactic star-forming regions in these lines. In addition, most sources were observed in the $`6_1\rightarrow 5_0E`$, $`6_2\rightarrow 7_1A^{}`$, and $`5_2\rightarrow 6_1E`$ lines near 133 GHz and six objects were observed in the $`7_1\rightarrow 7_0E`$ line near 166 GHz. According to statistical equilibrium (SE) calculations, both thermal and maser emission are possible in all these lines.

## 2 Observations

All lines were observed with the 12-m NRAO <sup>1</sup><sup>1</sup>1NRAO is operated by Associated Universities, Inc., under contract with the National Science Foundation. telescope at Kitt Peak, AZ. The observations at 157 GHz were made in March 1994. The receiver setup and observing mode were described in Slysh et al. (1995). A hybrid spectrometer with 150 MHz total bandwidth and 768 channels was used, providing a frequency resolution of 195 kHz (0.37 km s<sup>-1</sup>); the bandwidth of the spectrometer allowed us to observe the $`J=1`$–5 lines simultaneously. Many sources were also observed with a 256-channel filter spectrometer with 2 MHz resolution, operating in parallel with the hybrid spectrometer; the bandwidth of the spectrometer allowed us to observe simultaneously the $`J=1`$–6 lines. Eight $`J_0\rightarrow J_1E`$ methanol lines were observed in W3(OH), 345.01+1.79, W48 and Cep A by tuning the receiver to appropriate frequencies. The observations at 133 GHz were made on June 5–7, 1995, and the observations at 166 GHz were made on April 21, 1997, in a remote-observing mode from the Astro Space Center in Moscow. The 133 GHz observations are described in Slysh et al. (1997). The observing mode, receiver, and spectrometer setup for the 166 GHz observations were the same as for the 157 GHz observations. The spectra were calibrated using the standard vane method (Kutner & Ulich, 1981). The accuracy of calibration is better than 10% (Kutner, personal communication). The observed transitions are shown in Fig. . The frequencies and line strengths are presented in Table 1. All the data were reduced with the NRAO software package UNIPOPS.

## 3 Observational results

Seventy-three sources were detected at 157 GHz. Narrow maser lines were observed in 4, while broad quasi-thermal lines were observed in 72. Negative 157 GHz results are given in Table 3.
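As a quick arithmetic cross-check of the quoted spectrometer figures (our own check; the 157.27 GHz line frequency is an approximate, assumed value), the channel width converts to velocity via $`\mathrm{\Delta }v=c\mathrm{\Delta }\nu /\nu `$:

```python
# Consistency check: a 195 kHz channel at ~157.27 GHz in velocity units.
c = 2.99792458e5            # speed of light, km/s
dnu, nu = 195e3, 157.27e9   # channel width and line frequency, Hz
print(f"velocity resolution ~ {c * dnu / nu:.2f} km/s")   # -> 0.37 km/s
```

This reproduces the 0.37 km s<sup>-1</sup> resolution stated above.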
At 133 GHz, our results are the following: 33 quasi-thermal sources and 7 masers were found in the $`6_1\rightarrow 5_0E`$ line, 13 quasi-thermal sources in the $`6_2\rightarrow 7_1A^{}`$ line, and 12 emission sources in the $`5_2\rightarrow 6_1E`$ line. Six quasi-thermal sources and no masers were observed at 165 GHz. In this paper, we present our results on quasi-thermal emission; the results on the 157 and 133 GHz masers were presented elsewhere (Slysh et al. 1995, 1997, respectively). Gaussian parameters of the detected 157 GHz lines together with the positive and negative 133 GHz data are presented in Table . The 157 GHz lines $`J=1`$–3 are heavily blended. To obtain parameters of the lines, we made the assumption for the majority of the sources that the LSR velocity is equal to the LSR velocity of the $`4_0\rightarrow 4_1E`$ line. The center velocity and width of the quasi-thermal lines are very similar for all the sources, suggesting that approximately the same regions were probed in all lines. Table shows that the intensity of the broad ($`\mathrm{\Delta }V>5`$ km s<sup>-1</sup>) $`3_0\rightarrow 3_1E`$ lines is often lower than the value that is obtained by interpolation of the intensities of other 157 GHz lines. This can be understood if the lines are optically thick. If overlapping lines arise in a common region, then the shape of a spectral feature formed by them is not identical to the sum of their shapes. This results from the non-linearity of the radiation transfer equation. We calculated the optical thickness profile $`\tau (\nu )`$ for a blend of lines at the frequencies of the $`J=1`$ to 3 157 GHz lines, having identical peak opacities $`\tau _0`$. Then we simulated the shape of the blend, i.e., for the frequency of each spectral channel the value $`T(\nu )=T_{ex}(1e^{\tau (\nu )})`$ was calculated. Here $`T_{ex}`$ is an arbitrarily chosen ”excitation temperature” of the lines. We obtained Gaussian fits for the lines of which the model blend consists with the UNIPOPS package, in the same manner as we reduced the observational data. This procedure was repeated for different linewidths and $`\tau _0`$. We found that for optically thick lines the results of the Gaussian fitting depend on the linewidth. If the linewidth is less than about 5 km s<sup>-1</sup>, the intensities of the fitting Gaussians remain approximately equal. For larger linewidths, the overlapping becomes significant and a decrease of the $`J=3`$ Gaussian relative to the neighboring $`J=1`$ and $`J=2`$ Gaussians appears. The amplitudes of the latter two lines remained approximately correct, i.e., equal to $`T_{ex}(1e^{\tau _0})`$. Thus, we believe that the decrease of the $`3_0\rightarrow 3_1E`$ line intensities obtained from the Gaussian analysis can be attributed to the overlapping of optically thick lines.

## 4 Excitation temperature

### 4.1 Analysis

The excitation temperature is an important parameter of the molecular energy level population distribution. For a transition from the upper level $`u`$ to the lower level $`l`$ at the frequency $`\nu `$ the excitation temperature is defined as

$$T_{\mathrm{ex}}=\frac{h\nu }{k\mathrm{ln}\left(\frac{g_lN_u}{g_uN_l}\right)}$$ (1)

where $`N`$ is the population and $`g`$ the statistical weight of the upper and lower levels.
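As a quick check of Eq. (1) and of its sign convention (the restored leading minus), the toy snippet below, ours and purely illustrative, Boltzmann-populates two levels at a chosen kinetic temperature and recovers that temperature from the population ratio; the frequency and statistical weights are assumed values.

```python
import numpy as np

H_OVER_K = 4.7992e-11     # h/k in K per Hz

# Assumed, illustrative inputs: a ~157 GHz transition with equal weights.
nu, g_u, g_l, T_kin = 157.27e9, 13.0, 13.0, 30.0

pop_ratio = (g_u / g_l) * np.exp(-H_OVER_K * nu / T_kin)   # N_u / N_l (Boltzmann)
T_ex = -H_OVER_K * nu / np.log(g_l * pop_ratio / g_u)      # Eq. (1)
print(f"recovered T_ex = {T_ex:.1f} K (input {T_kin} K)")  # -> 30.0 K
```

With the minus sign in place, a sub-Boltzmann population ratio gives a positive temperature, and an overpopulated upper level (ratio above $`g_u/g_l`$) gives a negative one, exactly the inversion convention used below.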
In thermodynamic equilibrium, the energy level population follows the Boltzmann distribution, and the excitation temperature of each transition is equal to the gas kinetic temperature. Deviation from thermodynamic equilibrium is common in the interstellar medium, and the excitation temperature is usually different from the kinetic temperature. In low density regions collisional transitions are less frequent than the radiative transitions which depopulate energy levels. In simple molecules—diatomic or linear—this leads to a decrease of the upper level population relative to the lower level population, and to a decrease of the excitation temperature. In complex polyatomic molecules like methanol the energy levels are connected by allowed radiative transitions with several different levels, and the radiative transitions may cause deviation of the population distribution from the Boltzmann distribution in either direction: the upper level population may decrease relative to the lower level population, which means a decrease of the excitation temperature, or the upper level population may increase relative to the lower level population, meaning an increase of the excitation temperature. A further rise of the relative population of the upper level over the ratio of the statistical weights leads to a population inversion, which is described by a negative excitation temperature. In this study we will use the results of measurements of line intensities of several interconnected transitions of methanol in Galactic molecular clouds to derive the excitation temperature of some interesting transitions. One can determine the population ratio of the two levels of a given transition from the measured intensity ratio of two lines emitted from the upper and lower levels of this transition to some other levels. If several lines are emitted from a given level, any line can be chosen. This method is based on the fact that the intensity of an optically thin line is determined by the spontaneous emission rate from the upper to the lower level of the given transition and the population of the upper level. Note that for a small optical depth absorption is negligible and the line intensity does not depend on the population of the lower level. The level population $`N`$ is related to the intensity of a line emitted from this level by the usual equation

$$\frac{N}{g}=\frac{3kW}{8\pi ^3\nu S\mu ^2}$$ (2)

where $`W=T_R𝑑v`$ is the radiation temperature integrated over the line profile, and $`\mu `$ and $`S`$ are the permanent dipole moment and the line strength. Substituting the level populations from Eq. (2) into the equation for the excitation temperature (1), one obtains

$$T_{\mathrm{ex}}=\frac{h\nu }{k\mathrm{ln}\left(\frac{S_l\nu _lW_u}{S_u\nu _uW_l}\right)}$$ (3)

where $`S_u`$, $`\nu _u`$, $`W_u`$ and $`S_l`$, $`\nu _l`$, $`W_l`$ refer to the line strength, transition frequency and integrated radiation temperature of the lines emitted from the upper and lower level, respectively. Note that Eq. (2) provides the population of a given level even if the level gains or loses population via several transitions. Hence, Eq. (3) provides the excitation temperature in multi-level systems, such as the methanol molecule.
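A direct transcription of Eq. (3) is given below (our sketch; the line strengths, frequencies and integrated intensities are made-up numbers, not taken from the tables), showing how a bright line from the upper level drives the derived temperature negative:

```python
import numpy as np

H_OVER_K = 4.7992e-11   # h/k in K per Hz

def t_ex(nu, S_u, nu_u, W_u, S_l, nu_l, W_l):
    """Eq. (3): T_ex = -(h*nu/k) / ln[(S_l nu_l W_u) / (S_u nu_u W_l)].

    nu is the frequency of the transition whose excitation temperature is
    sought; (S, nu, W) are the line strength, frequency and integrated
    intensity of the lines tracing the upper- and lower-level populations."""
    return -H_OVER_K * nu / np.log((S_l * nu_l * W_u) / (S_u * nu_u * W_l))

# Illustrative inputs: an overbright upper-level tracer pushes the argument
# of the log above unity, so T_ex comes out negative (inversion).
print(t_ex(nu=132.89e9, S_u=4.0, nu_u=132.89e9, W_u=12.0,
           S_l=3.0, nu_l=157.18e9, W_l=2.0))   # ~ -3.8 K
```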
In this study the excitation temperature of the $`6_1\rightarrow 5_0E`$ transition was determined using the observed intensity of the same transition $`6_1\rightarrow 5_0E`$ as a measure of the column density of the upper level $`6_1E`$, and the intensity of the transition $`5_0\rightarrow 5_1E`$ as a measure of the column density of the lower level $`5_0E`$. Similarly, the excitation temperature of the transition $`5_0\rightarrow 4_0E`$ was determined from the observed intensity ratio of the transitions $`5_0\rightarrow 5_1E`$ and $`4_0\rightarrow 4_1E`$, giving the column density of the upper and lower levels, respectively. It is interesting that in this particular case the transition $`5_0\rightarrow 4_0E`$ at 96 GHz was not observed in this study, and its excitation temperature has been derived from observations of two other transitions. The third transition for which the excitation temperature was determined is $`5_2\rightarrow 6_1E`$, for which the population ratio was found from the observed intensity ratio of the transitions $`5_2\rightarrow 6_1E`$ and $`6_1\rightarrow 5_0E`$. In some sources the $`6_1\rightarrow 5_0E`$ line contains both narrow maser and broad quasi-thermal components. The results for the excitation temperature in this study refer to the quasi-thermal component. The excitation temperature for the transitions $`6_0\rightarrow 6_1E`$ and $`7_1\rightarrow 7_0E`$ was determined from the intensity ratio of the pairs of lines $`(6_0\rightarrow 6_1E)/(6_1\rightarrow 5_0E)`$ and $`(7_1\rightarrow 7_0E)/(7_0\rightarrow 7_1E)`$, respectively, which were observed in several sources. From the observations, we obtain the beam-averaged brightness temperature rather than the brightness temperature of the objects. The ratio of the upper-level populations, and hence the excitation temperature, may be affected by the different beam filling factors at 133, 157 and 165 GHz. We calculated the excitation temperatures for two extreme cases: (1) the sources uniformly fill the beams and (2) the sources are much smaller than the beams. In the former case, we can substitute the antenna temperature for the radiation temperature in Eq. (3). In the latter case, introducing beam filling factors leads to the expression

$$T_{\mathrm{ex}}=\frac{h\nu }{k\mathrm{ln}\left(\frac{S_l\nu _l^3W_u}{S_u\nu _u^3W_l}\right)}$$ (4)

which differs from (3) only in the exponent of the frequency ratio. Since this ratio is not far from unity (1.18 or 1.05), the difference between the excitation temperatures determined by (3) and (4) is not significant, as can be seen in Table , except when the absolute value of the excitation temperature is large and its determination depends critically on the parameters.

### 4.2 Results

We derived excitation temperatures for the $`5_0\rightarrow 4_0E`$, $`5_2\rightarrow 6_1E`$, $`6_1\rightarrow 5_0E`$, and $`7_1\rightarrow 7_0E`$ transitions. The results are shown in Fig. and Table <sup>1</sup><sup>1</sup>1The microwave background could not be taken into account in this study and is neglected here. Using Eq. A3 from Turner (1991) we estimated that this can result in an overestimation of the excitation temperature of the subthermally excited Class II transitions presented in Table .
The overestimation may be of the order of 1.5 if the excitation temperature given in Table is lower than 10 K. For the $`5_0\rightarrow 4_0E`$ and $`6_1\rightarrow 5_0E`$ transitions the errors caused by this simplification are smaller and typically within the errors presented in the table.. The derived excitation temperature was found to be different for all transitions, meaning that the population distribution is not a Boltzmann distribution. The mean harmonic excitation temperature $`M(T_{ex})`$ of the $`5_0\rightarrow 4_0E`$ transition is 20 K (Fig. ). This value is typical for the kinetic temperature in molecular clouds. Very strong deviations from this temperature are found in the other two transitions, $`6_1\rightarrow 5_0E`$ and $`5_2\rightarrow 6_1E`$. The $`6_1\rightarrow 5_0E`$ transition was found to be either inverted or overheated. The mean harmonic excitation temperature for our sample is $`M(T_{ex})=10`$ K. The $`5_2\rightarrow 6_1E`$ transition demonstrates the opposite behaviour: it is overcooled in most sources, with a mean harmonic excitation temperature $`M(T_{ex})=5`$ K, which is markedly less than the excitation temperature of the $`5_0\rightarrow 4_0E`$ transition. The difference between the excitation temperatures of the three transitions is clearly visible from a comparison of the respective histograms (Fig. ). The histograms were plotted for the inverse excitation temperature $`1/T_{ex}`$, in order to avoid the discontinuity in the transition from positive to negative temperature, which occurs through infinity. The $`6_0\rightarrow 6_1E`$ transition is also subthermally excited in most sources. It is inverted in W3(OH), provided that the source is extended. However, the source is most likely much smaller than our beam (see, e.g., Kalenskii et al. 1997); in this case the $`6_0\rightarrow 6_1E`$ transition is not inverted and its excitation temperature is lower than that of the $`5_0\rightarrow 4_0E`$ transition. The $`7_1\rightarrow 7_0E`$ transition is inverted in W3(OH) and Cep A. In 345.01+1.79, it is subthermally excited.

## 5 Statistical equilibrium calculations.

In order to demonstrate how physical conditions in molecular clouds can cause the observed deviations from the equilibrium Boltzmann distribution we calculated several models in the Large Velocity Gradient (LVG) approximation, varying the density and the external radiation intensity. Models a and b were calculated for warm gas with predominantly collisional excitation. Models c and d take into account external radiation. The modelling was performed using an LVG code made available by C.M. Walmsley (Walmsley et al. 1988). The results are presented in Figure and Table 6. The collisional selection rules are based on the paper by Lees & Haque (1974) and imply that $`\mathrm{\Delta }K=0`$ collisions are preferred. These collisional selection rules are known to be not accurate enough, leading to attempts (not fully successful) to derive them from astronomical observations (Sobolev, 1990; Turner, 1998). The opacities determined by the LVG calculations are small for all lines presented in Table 6 in all models, except for the line $`5_0\rightarrow 4_0E`$, which has optical depth 1.1 in model d.
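For clarity on the statistics quoted in the Results subsection above (our illustration, with invented values): the mean harmonic excitation temperature $`M(T_{ex})`$ is the reciprocal of the average of $`1/T_{ex}`$, and working in $`1/T_{ex}`$ is what keeps a sample containing both inverted and normal transitions continuous, since $`T_{ex}\to \pm \mathrm{\infty }`$ maps to $`1/T_{ex}=0`$.

```python
import numpy as np

# Made-up sample mixing inverted (T_ex < 0) and thermal values, in K.
t_ex = np.array([-8.0, -12.0, -15.0, 40.0, -9.0, -11.0])

inv = 1.0 / t_ex                      # continuous across the inversion point
print(f"mean(1/T_ex) = {inv.mean():+.3f} K^-1 "
      f"-> M(T_ex) = {1.0 / inv.mean():.1f} K")   # ~ -13 K for this sample
```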
In the absence of external radiation, except for the microwave background (model a), the levels in the backbone ladder ($`K=1`$ for $`E`$-methanol) are overpopulated relative to those from adjacent ladders, and transitions with upper levels on the backbone ladder and lower levels on lateral ladders, like $`6_1\rightarrow 5_0E`$ (Class I transitions), are inverted, whereas transitions like $`5_2\rightarrow 6_1E`$ and $`J_0\rightarrow J_1E`$, with upper levels on lateral ladders and lower levels on the backbone ladder (Class II transitions), are overcooled (the classification of methanol transitions is presented, e.g., in Menten (1991); Sobolev (1993); see also Table 1). The reason for the inversion of Class I transitions is the much longer lifetime against spontaneous decay of the upper level ($`6_1E`$), which belongs to the backbone ladder, as compared to the lower level ($`5_0E`$). The inversion disappears if the density is $`10^7`$ cm<sup>-3</sup> (model b). External radiation alters the inversion. It populates more efficiently the levels with shorter lifetimes against spontaneous decay, i.e., those on lateral ladders (Kalenskii 1995). If the radiation temperature exceeds the kinetic temperature, then the lateral ladders become more populated than the backbone ladder, leading to the inversion of some Class II transitions and the disappearance of the inversion in Class I transitions (Cragg et al. 1992). For example, if the radiation temperature is 50 K, then in a cold (25 K) gas the Class II $`2_0\rightarrow 3_1E`$ transition is inverted (model c). However, this radiation is not strong enough to invert some other Class II lines, in particular the $`J_0\rightarrow J_1E`$ or $`7_1\rightarrow 7_0E`$ lines. Stronger radiation ($`T_{\mathrm{RAD}}=150`$ K, model d) can invert these lines, too. The $`5_0\rightarrow 4_0E`$ line remains thermally excited with or without external radiation, with the excitation temperature slowly growing with the radiation temperature and close to the gas kinetic temperature, unless the density is lower than $`10^5`$ cm<sup>-3</sup> (model a). This can be the case for sources with a strongly inverted $`6_1\rightarrow 5_0E`$ line, i.e., when the absolute value of the $`6_1\rightarrow 5_0E`$ transition excitation temperature is about 10 K or lower.

## 6 Discussion

The observations show that in the majority of sources the $`6_1\rightarrow 5_0E`$ transition is inverted, while the $`5_2\rightarrow 6_1E`$ transition is overcooled. This agrees with models a and b, meaning that collisional excitation alone can lead to the inversion or overcooling of some methanol transitions. The excitation temperature of the Class I $`6_1\rightarrow 5_0E`$ transition is negative in the majority of sources, i.e. the lines are inverted. The $`1/<T_{\mathrm{ex}}>`$ value is equal to $`0.1`$. SE calculations show that the $`6_1\rightarrow 5_0E`$ transition is inverted if the density is lower than $`10^7`$ cm<sup>-3</sup> and external radiation is absent or weak. Thus, we can conclude that in most observed sources the gas density is below $`10^7`$ cm<sup>-3</sup>, and there is no significant external radiation. In spite of the inversion of the $`6_1\rightarrow 5_0E`$ transition, the sources show thermal emission profiles, without the line narrowing typical of maser emission. This is possible for low optical depth, when the maser gain is less than unity and line narrowing does not occur. According to Sobolev (1993), such objects can be called Class Ib masers.
In contrast to this, the excitation temperature of the Class II $`5_2\rightarrow 6_1E`$ and $`6_0\rightarrow 6_1E`$ transitions is typically positive and much lower than the excitation temperature of the $`5_0\rightarrow 4_0E`$ transition. Since the $`5_0\rightarrow 4_0E`$ transition is thermally or subthermally excited, the excitation of the $`5_2\rightarrow 6_1E`$ and $`6_0\rightarrow 6_1E`$ transitions is typically subthermal. The reason for the subthermal excitation of these Class II lines is the same as for the inversion of the $`6_1\rightarrow 5_0E`$ transition, i.e. the much longer lifetime against spontaneous decay of the levels on the backbone ladder. In the case of the $`5_2\rightarrow 6_1E`$ and $`6_0\rightarrow 6_1E`$ transitions, the lower level, $`6_1E`$, belongs to the backbone ladder and is overpopulated relative to the upper $`5_2E`$ or $`6_0E`$ levels. SE calculations indicate that the inversion of transitions that belong to Class II requires a strong radiation field (see also Cragg et al. (1992); Sobolev et al. (1997)). Since strong radiation is necessary to invert the $`7_1\rightarrow 7_0E`$ transition (Table ; Sobolev et al. 1997), it may belong to Class II. The $`7_1\rightarrow 7_0E`$ transition proved to be inverted in W3(OH) and Cep A. According to current models of methanol excitation, the inversion of Class I transitions prohibits the inversion of Class II transitions in the same region, and vice versa. Hence, the transitions $`6_1\rightarrow 5_0E`$ and $`7_1\rightarrow 7_0E`$, which belong to Class I and II, respectively, cannot be inverted simultaneously. For W3(OH) and Cep A, both of these transitions appear inverted, in conflict with this statement. Most likely, the inversion of either or both of these transitions is spurious and caused by a combination of calibration/optical depth effects (see the next section). However, a complex source structure is possible, or the $`7_1\rightarrow 7_0E`$ transition may not belong to Class II. For the $`5_0\rightarrow 4_0E`$ transition, we obtained positive excitation temperatures. The $`1/<T_{\mathrm{ex}}>`$ value is equal to 0.05. It is known from SE calculations that the $`5_0\rightarrow 4_0E`$ transition is not inverted over the whole range of temperature, density, and methanol abundance typical for Galactic molecular clouds, and its excitation temperature is close to the kinetic temperature, unless the gas density is about $`10^5`$ cm<sup>-3</sup> or lower. In the latter case, the excitation temperature of the $`5_0\rightarrow 4_0E`$ transition is lower than the kinetic temperature. According to Table , such a low density implies a strong inversion of the $`6_1\rightarrow 5_0E`$ transition. Hence, in the sources with a strongly inverted $`6_1\rightarrow 5_0E`$ transition, i.e., those with the absolute value of the $`6_1\rightarrow 5_0E`$ transition excitation temperature about 10 K or lower (see Table 6), the $`5_0\rightarrow 4_0E`$ transition may be subthermally excited, whereas for all other sources the excitation temperature of the $`5_0\rightarrow 4_0E`$ line, $`T_{54}`$, represents the kinetic temperature. Table shows that in the majority of the observed objects $`T_{54}`$ is limited to 15–50 K. Therefore the kinetic temperatures of at least the dense sources from our sample must lie in the same range.

### 6.1 Optical depth effects

To estimate the source properties, we made the usual assumption that the observed lines are optically thin.
However, this assumption may not always be valid (see Kalenskii et al. 1997). A question arises whether the inversion found in two lines can be an artifact caused by optical depth effects. If the excitation temperature is derived, as we do, from the ratio of the upper and lower level populations, the transition can appear to be (but not actually be) inverted if the upper level population is overestimated, the lower level population is underestimated, or both. The population of the upper level will be overestimated if the line used for the derivation of the level’s column density is optically thick and inverted. The population of the lower level will be underestimated if the line observed for the derivation of the level’s column density is optically thick and not inverted. We obtained inversion in the transitions $`6_1\rightarrow 5_0E`$ and $`7_1\rightarrow 7_0E`$. The same transitions were used to determine their upper level populations (see Table 4). Therefore optical depth can lead to an overestimation of the upper level population only if these transitions are really inverted. Hence, these transitions can appear inverted without actually being inverted only if the transitions used to determine the lower level populations ($`5_0\rightarrow 5_1E`$ and $`7_0\rightarrow 7_1E`$, respectively) are not inverted and are optically thick, leading to an underestimation of the $`5_0E`$ and $`7_0E`$ level populations. Below, we discuss whether optical depth can lead to the apparent detection of inversion in the $`6_1\rightarrow 5_0E`$ transition. We could not estimate the optical depth effect quantitatively, since the optical depth of the observed $`6_1\rightarrow 5_0E`$ and $`5_0\rightarrow 5_1E`$ lines (see Table 4) is unknown. We cannot find the excitation temperature corrected for optical depth effects. Instead, we estimated how large the optical depth of the $`5_0\rightarrow 5_1E`$ line must be in order to cause the apparent inversion of the $`6_1\rightarrow 5_0E`$ transition. We assume that the $`6_1\rightarrow 5_0E`$ line is optically thin; this is the best case for the appearance of a spurious inversion, since in the opposite case the ratio of the $`6_1E`$ and $`5_0E`$ level populations becomes lower, decreasing the chance for the spurious inversion to appear. We corrected the population of the $`5_0E`$ level for the optical depth effect

$$\frac{N_u^{\prime }}{g_u^{\prime }}=\frac{N_u}{g_u}\frac{\tau }{1\mathrm{exp}(\tau )}$$ (5)

where $`N_u/g_u`$ is the value obtained in the optically thin approximation, assuming an optical depth of the $`5_0\rightarrow 5_1E`$ line of 1, 2, and 3. Higher optical depth is not likely, according to Kalenskii et al. (1997). Then, using these corrected values, we calculated the $`6_1\rightarrow 5_0E`$ transition excitation temperature. The same procedure was applied to the $`7_1\rightarrow 7_0E`$ transition, i.e., we assumed that the $`7_1\rightarrow 7_0E`$ line is optically thin and examined how large the optical depth of the $`7_0\rightarrow 7_1E`$ line should be to cause the apparent inversion of the $`7_1\rightarrow 7_0E`$ transition. The results are shown in Table 7, and can be summarized for the $`6_1\rightarrow 5_0E`$ transition as follows:

* an optical depth equal to 1 ”quenches” the inversion if the modulus of the initial excitation temperature (i.e., that presented in Table ) is of the order of or larger than 10 K.
* if the modulus of the initial excitation temperature is of the order of or less than 5 K, then the inversion does not disappear, even if the optical depth is equal to 3.
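The quenching behaviour summarized in the list above is easy to reproduce with Eq. (5). The sketch below is ours; the thin-limit population ratio of 1.8 (corresponding to $`T_{ex}11`$ K, i.e. a modest inversion) is an invented, illustrative value.

```python
import numpy as np

H_OVER_K = 4.7992e-11   # h/k in K per Hz

def thick_correction(tau):
    """Eq. (5): scale an optically-thin column density by tau/(1 - exp(-tau))."""
    return tau / (1.0 - np.exp(-tau))

ratio_thin = 1.8        # assumed (N_u/g_u)/(N_l/g_l) from the thin analysis
nu = 132.89e9           # Hz, the 6_{-1} -> 5_0 E line frequency (approximate)
for tau in (0.0, 1.0, 2.0, 3.0):
    corr = 1.0 if tau == 0.0 else thick_correction(tau)
    ratio = ratio_thin / corr           # lower-level population grows by corr
    t_ex = -H_OVER_K * nu / np.log(ratio)
    print(f"tau = {tau}: population ratio {ratio:.2f}, T_ex = {t_ex:+.1f} K")
```

For this made-up input the derived temperature runs from about $`11`$ K at $`\tau =0`$ to positive values once $`\tau 2`$: an optical depth of order unity in the lower-level tracer can indeed fake or quench a weak inversion, as the table-based discussion above concludes.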
Thus, in most of the sources given in Table , the derived inversion of the $`6_1\rightarrow 5_0E`$ transition cannot be an artifact caused by optical depth effects. For the $`7_1\rightarrow 7_0E`$ transition, inversion was found in W3(OH) and Cep A. Table 7 shows that an optical depth of the $`7_0\rightarrow 7_1E`$ line of order 1 or larger can cause an apparent detection of this inversion. The excitation temperature of the thermal $`5_0\rightarrow 4_0E`$ transition was calculated from the ratio of the $`5_0\rightarrow 5_1E`$ and $`4_0\rightarrow 4_1E`$ line intensities. With increasing optical depth, this ratio tends to unity, and the excitation temperature calculated from Eq. (1) tends to 69 K, independent of the real excitation temperature. However, we calculated a number of models and found that optical depths of the $`4_0\rightarrow 4_1E`$ line as high as 3 can lead to $`T_{54}`$ errors not larger than 1.5. Hence, the excitation temperature of the $`5_0\rightarrow 4_0E`$ transition given in Table is not affected seriously by optical depth effects.

## 7 Summary and conclusions

1. We observed a large sample of star-forming regions in the $`J_0\rightarrow J_1E`$ series of methanol lines near 157 GHz. Emission was detected toward 73 sources. Many of them were additionally observed in the $`6_1\rightarrow 5_0E`$, $`6_2\rightarrow 7_1A^{}`$, and $`5_2\rightarrow 6_1E`$ lines near 133 GHz and six sources were observed in the $`7_1\rightarrow 7_0E`$ line at 166 GHz. The observations at 133 and 166 GHz were made in a remote observing mode from the Astro Space Center in Moscow.
2. The excitation of the methanol transitions proved to be quite diverse. Typically, the $`6_1\rightarrow 5_0E`$ transition is inverted, showing that in most observed sources the gas density is below $`10^7`$ cm<sup>-3</sup> and there is no significant external radiation. The $`5_2\rightarrow 6_1E`$ and $`6_0\rightarrow 6_1E`$ transitions are overcooled and the $`5_0\rightarrow 4_0E`$ transition is thermally excited. The $`7_1\rightarrow 7_0E`$ transition may be inverted in W3(OH) and Cep A, indicating that a strong radiation field may be present in these sources.

We are grateful to Dr. Phil Jewell and the staff of the Kitt-Peak 12-m telescope for help during the observations. We would like to thank Dr. Garwood (NRAO) for providing the UNIPOPS package. The work was done under partial financial support from the Russian Foundation for Basic Research (grant No 95-02-05826), the International Science Foundation (grants No MND000 and MND300), and the European Southern Observatory. The Sun Sparc station used in the remote observations was made available to the Astro Space Center through a grant from the National Science Foundation to the Haystack Observatory.
no-problem/9906/math9906186.html
ar5iv
text
# Regularity of Operators on Essential Extensions of the Compacts

## 1 Introduction

Hilbert $`C^{}`$-modules arise in many different areas, for example, in the study of locally compact quantum groups and their representations, in KK-theory, in noncommutative geometry, and in the study of completely positive maps between $`C^{}`$-algebras. A regular operator on a Hilbert $`C^{}`$-module is an analogue of a closed operator on a Hilbert space that naturally arises in many of the above contexts. A closed and densely defined operator $`T`$ on a Hilbert $`C^{}`$-module $`E`$ is called regular if its adjoint $`T^{}`$ is also densely defined and if the range of $`(I+T^{}T)`$ is dense in $`E`$. Every regular operator on a Hilbert $`C^{}`$-module $`E`$ is uniquely determined by a (bounded) adjointable operator on $`E`$, called its $`z`$-transform. This fact is exploited when dealing with regular operators, as the adjointable operators, being bounded, are more easily manageable than unbounded operators. But given an unbounded operator, the first and most basic problem is to decide whether or not it is regular. In , Woronowicz investigated this problem using graphs of operators, and proved a few results (see proposition 2.2, theorem 2.3 and examples 1–3 in ). In particular, he was able to conclude the regularity of some very simple functions of a regular operator $`T`$, like $`T+a`$ where $`a`$ is an adjointable operator, and $`Ta`$ and $`aT`$ where $`a`$ is an invertible adjointable operator. The problem was later attacked from a different angle in . A somewhat larger class of operators, called the semiregular operators, was considered. A semiregular operator is a closable densely defined operator whose adjoint is also densely defined. Though regularity is quite difficult to ascertain, semiregularity is not. The problem then investigated in was ‘when is a semiregular operator regular?’. The first step was to reduce the problem to a problem on $`C^{}`$-algebras by establishing that semiregular operators on a Hilbert $`C^{}`$-module $`E`$ correspond, in a canonical manner, to those on the $`C^{}`$-algebra $`𝒦(E)`$ of ‘compact’ operators on $`E`$. The question to be answered next is then ‘for what class of $`C^{}`$-algebras is a closed semiregular operator regular (or admits a regular extension)?’ Among other things, it was established that for abelian $`C^{}`$-algebras as well as for subalgebras of $`ℬ_0(ℋ)`$, closed semiregular operators are indeed regular. In the present paper, we will extend the results to a class of $`C^{}`$-algebras that contain $`ℬ_0(ℋ)`$ as an essential ideal. Most of the results, however, are valid in a more general situation where $`ℬ_0(ℋ)`$ is replaced by any essential ideal $`K`$. Since it involves almost no extra work, the results are stated in this general set-up. In section 2, we develop the necessary background for proving the main results, which are presented in section 3. Finally in section 4, we discuss two examples that arise in the context of quantum groups and are covered by the results in section 3. We have assumed elements of $`C^{}`$-algebra theory and Hilbert $`C^{}`$-module theory as can be found, for example, in Pedersen () and Lance () respectively. Now, why are essential extensions of the compacts important in the context of the problem? Firstly, because they cover examples that arise naturally, like the quantum complex plane, which is discussed later in this paper.
Secondly, and perhaps more importantly, because they arise as irreducible representations of all type I $`C^{}`$-algebras. For a large class of type I $`C^{}`$-algebras, one would be able to conclude by the results here that all irreducible ‘fibres’ of a semiregular operator $`S`$ are regular. This fact, along with some mild restrictions on $`S`$, should then lead to its regularity.

Notations. We will follow standard notations mostly. So, for example, $`ℋ`$ is a complex separable Hilbert space, $`ℬ_0(ℋ)`$ is the algebra of compact operators on $`ℋ`$; $`𝒜`$ is a $`C^{}`$-algebra, $`M(𝒜)`$ and $`LM(𝒜)`$ are the space of multipliers and left multipliers respectively of $`𝒜`$. For a topological space $`X`$, $`C_0(X)`$ will denote the $`C^{}`$-algebra of continuous functions on $`X`$ vanishing at infinity. The $`C^{}`$-algebra $`𝒜`$ that we will primarily be interested in will always be assumed to be separable (this of course will not be true for all $`C^{}`$-algebras that we deal with; for example, the multiplier algebra of a nonunital $`C^{}`$-algebra is never separable).

## 2 Restriction to an Ideal

Let $`𝒜`$ be a nonunital $`C^{}`$-algebra and let $`K`$ be an essential ideal in $`𝒜`$. Since $`𝒜`$ is essential in $`M(𝒜)`$, it follows that $`K`$ is essential in $`M(𝒜)`$. It is easy to see then that there is an injective homomorphism from $`M(𝒜)`$ to $`M(K)`$ through which $`M(𝒜)`$ can be thought of as sitting inside $`M(K)`$. For the rest of this paper, we will always assume that $`K𝒜M(𝒜)M(K)`$. Before we proceed further, let us recall the definition of a semiregular operator.

###### Definition 2.1 () Let $`E`$ and $`F`$ be Hilbert $`𝒜`$-modules. An operator $`T:EF`$ is called semiregular if 1. $`D_T`$ is a dense submodule of $`E`$ (i.e. $`D_T𝒜D_T`$), 2. $`T`$ is closable, 3. $`T^{}`$ is densely defined.

Next we list some elementary observations regarding the restriction of a semiregular operator to an essential ideal.

###### Proposition 2.2 Let $`S`$ be a closed semiregular operator on $`𝒜`$. Then 1. $`D_K:=D(S)K`$ is a dense right ideal in $`K`$, 2. $`S(D_K)K`$, 3. $`S_0:=S|_K`$ is closed and semiregular, 4. $`D(S)K`$ is a core for $`S_0`$, 5. $`(S|_K)^{}=S^{}|_K`$.

Proof: 1. That $`D_K`$ is a right ideal is obvious. Let us show that it is dense. Choose any $`aK`$. Let $`\{e_\alpha \}`$ be an approximate identity in $`K`$. For any $`ϵ>0`$, there is an $`\stackrel{~}{a}D(S)`$ such that $`a\stackrel{~}{a}<ϵ`$. Hence for large enough $`\alpha `$, $$\stackrel{~}{a}e_\alpha a\stackrel{~}{a}ae_\alpha +ae_\alpha a2ϵ.$$ Since $`\stackrel{~}{a}D(S)`$ and $`e_\alpha K`$, $`\stackrel{~}{a}e_\alpha D(S)K`$. 2. Take an $`aD_K`$. For any $`bD(S^{})`$, $`b^{}Sa=(S^{}b)^{}aK`$. Since $`D(S^{})`$ is dense in $`𝒜`$, we have $`b^{}(Sa)K`$ for all $`b𝒜`$. Put $`b=Sa`$ to get $`(Sa)^{}(Sa)K`$. Hence $`|Sa|^{1/2}K`$. Now in $`𝒜`$, there exists an element $`u`$ such that $`Sa=u|Sa|^{1/2}`$. Hence $`SaK`$. 3. For $`aD(S^{}|_K)`$ and $`bD(S|_K)`$, we have $$S^{}|_Ka,b=S^{}a,b=a,Sb=a,S|_Kb.$$ Therefore $`S^{}|_K(S|_K)^{}`$ and $`(S|_K)^{}`$ is densely defined. Now suppose $`a_nD(S|_K)=J_K`$, and $`a_na`$, $`S|_Ka_nb`$. Since $`S|_Ka_n=Sa_n`$ and $`S`$ is closed, we conclude that $`aD(S)`$ and $`Sa=b`$. But $`aK`$ also. Hence $`aJ_K`$, and $`S|_Ka=b`$. 4. Take $`aD(S|_K)`$. If $`\{e_\alpha \}`$ is an approximate identity for $`K`$, then $`ae_\alpha a`$ and $`S|_K(ae_\alpha )=(S|_Ka)e_\alpha S|_Ka`$. Since $`ae_\alpha D(S)K`$, $`D(S)K`$ is a core for $`S|_K`$. 5.
We have already seen that $`S^*|_K\subseteq (S|_K)^*`$. Let us prove the reverse inclusion here. For any $`a\in D((S|_K)^*)`$, $`b\in D(S)`$, $`k\in K`$, we have
$$\langle a,Sb\rangle k=\langle a,S(bk)\rangle =\langle (S|_K)^*a,bk\rangle =\langle (S|_K)^*a,b\rangle k.$$
Hence $`\langle a,Sb\rangle =\langle (S|_K)^*a,b\rangle `$, so that $`a\in D(S^*)`$. Thus $`D((S|_K)^*)\subseteq D(S^*)\cap K=D(S^*|_K)`$. $`\Box`$

###### Proposition 2.3

Let $`S`$ and $`T`$ be semiregular operators on $`𝒜`$ such that $`S|_K=T|_K`$. Then

1. $`S=T`$ on $`D(S)\cap D(T)`$,
2. $`S^*=T^*`$,
3. if $`(S|_K)^{**}=S|_K`$, then there exists a maximal closed semiregular operator on $`𝒜`$ whose restriction to $`K`$ equals $`S|_K`$.

Proof:

1. Take $`a\in D(S)\cap D(T)`$. For any $`k\in K`$, $`ak\in D(S|_K)=D(T|_K)`$. Hence $`(Sa)k=S(ak)=T(ak)=(Ta)k`$. Therefore $`Sa=Ta`$.

2. Take any $`a\in D(S^*)`$, $`b\in D(T)`$. Then for any $`k\in K`$,
$$\langle a,Tb\rangle k=\langle a,T(bk)\rangle =\langle a,S(bk)\rangle =\langle S^*a,bk\rangle =\langle S^*a,b\rangle k.$$
Hence $`\langle a,Tb\rangle =\langle S^*a,b\rangle `$. Thus $`S^*\subseteq T^*`$. Similarly $`T^*\subseteq S^*`$.

3. $`S^{**}`$ is the required operator. For, if $`T`$ is any other semiregular operator whose restriction to $`K`$ is $`S|_K`$, then $`T^*=S^*`$, thereby implying $`S^{**}=T^{**}`$, so that $`T\subseteq S^{**}`$. By part 5 of the foregoing proposition, $`S^{**}|_K=(S^*|_K)^*=(S|_K)^{**}=S|_K`$. $`\Box`$

Part 3 above tells us, in particular, that if $`S|_K`$ is regular then $`S^{**}`$ is the maximal semiregular operator on $`𝒜`$ whose restriction to $`K`$ is the same as that of $`S`$.

###### Lemma 2.4

If $`T`$ is regular on $`𝒜`$ with $`z`$-transform $`z`$, then $`T(K)\subseteq K`$, and $`T|_K`$ is a regular operator on $`K`$ with the same $`z`$-transform $`z`$.

Proof: Observe that $`z\in M(𝒜)\subseteq M(K)`$, and $`(I-z^*z)^{1/2}K`$ contains $`(I-z^*z)^{1/2}𝒜K=D(T)K`$, which is dense in $`K`$. Hence there exists a regular operator $`T_0`$ on $`K`$ with $`z`$-transform $`z`$. Clearly $`T_0\subseteq T|_K`$. By part 4 of proposition 2.2, $`T_0=T|_K`$. $`\Box`$

###### Proposition 2.5

Let $`S`$ be a closed semiregular operator on $`𝒜`$ such that $`S|_K`$ is regular with $`z`$-transform $`z\in M(K)`$. Then for any $`a\in D(S)`$, there is a $`c\in M(K)`$ such that
$$a=(I-z^*z)^{1/2}c,\qquad Sa=zc.$$

Proof: Take an $`a\in D(S)`$. Let $`\{e_\alpha \}`$ be an approximate identity for $`K`$. For each $`\alpha `$, one has $`ae_\alpha \in D(S)K\subseteq D(S|_K)`$. Hence there is a $`c_\alpha \in K`$ such that
$$ae_\alpha =(I-z^*z)^{1/2}c_\alpha ,\qquad S(ae_\alpha )=zc_\alpha .$$ (2.1)
From the above equations it follows that $`c_\alpha =(I-z^*z)^{1/2}(ae_\alpha )+z^*S(ae_\alpha )=ce_\alpha `$ (substitute (2.1) and use $`(I-z^*z)+z^*z=I`$), where $`c=(I-z^*z)^{1/2}a+z^*(Sa)`$. Now using the fact that $`e_\alpha `$ is an approximate identity, we get
$$ak=(I-z^*z)^{1/2}ck,\qquad (Sa)k=zck$$
for all $`k\in K`$, which proves the result. $`\Box`$

The above proposition, together with the one that follows, will be the key ingredient in proving the regularity of certain semiregular operators later.

###### Proposition 2.6

For any $`a\in D(S^*)`$, there exists $`c\in M(K)`$ such that
$$a=(I-zz^*)^{1/2}c,\qquad S^*a=z^*c.$$

Proof: Similar to the proof of the previous proposition. $`\Box`$

Let us denote by $`D`$ the set $`\{(I-z^*z)^{1/2}a+z^*(Sa):a\in D(S)\}`$ and by $`D_*`$ the set $`\{(I-zz^*)^{1/2}a+z(S^*a):a\in D(S^*)\}`$. Observe that for $`c\in D`$ and $`d\in D_*`$, $`zc`$ and $`z^*d`$ are in $`𝒜`$.

###### Lemma 2.7

Let $`D`$ be as above, and assume that $`S=S^{**}`$. Then

1. $`D`$ is a Hilbert $`𝒜`$-module contained in $`M(K)`$,
2.
$`D=\mathrm{\Gamma }(z):=(I-z^*z)^{-1/2}𝒜\cap z^{-1}𝒜=\{c\in M(K):(I-z^*z)^{1/2}c\in 𝒜,\ zc\in 𝒜\}.`$

Proof: Part 1 is straightforward. We will prove part 2 here. Define an operator $`\stackrel{~}{S}:(I-z^*z)^{1/2}\mathrm{\Gamma }(z)\to 𝒜`$ by
$$\stackrel{~}{S}((I-z^*z)^{1/2}c)=zc,\qquad c\in \mathrm{\Gamma }(z).$$
By proposition 2.5, $`D\subseteq \mathrm{\Gamma }(z)`$ and $`S\subseteq \stackrel{~}{S}`$. Hence $`\stackrel{~}{S}`$ is densely defined. From the injectivity of $`(I-z^*z)^{1/2}`$ it follows that $`\stackrel{~}{S}`$ is well-defined. It can easily be verified from the definition of $`\stackrel{~}{S}`$ that it is closed. By proposition 2.2, $`S^*|_K=(S|_K)^*`$ and hence has $`z`$-transform $`z^*`$. From proposition 2.6, we conclude that $`D_*\subseteq \mathrm{\Gamma }(z^*)`$. Now, for $`d\in D_*`$ and $`c\in \mathrm{\Gamma }(z)`$,
$$\langle (I-zz^*)^{1/2}d,\stackrel{~}{S}((I-z^*z)^{1/2}c)\rangle =\langle (I-zz^*)^{1/2}d,zc\rangle =\langle z^*(I-zz^*)^{1/2}d,c\rangle =\langle S^*((I-zz^*)^{1/2}d),(I-z^*z)^{1/2}c\rangle ,$$
so that $`D(S^*)\subseteq D((\stackrel{~}{S})^*)`$. Therefore $`S^*\subseteq (\stackrel{~}{S})^*`$. Thus $`S\subseteq \stackrel{~}{S}\subseteq (\stackrel{~}{S})^{**}\subseteq S^{**}=S`$. This implies $`D(S)=D(\stackrel{~}{S})\supseteq (I-z^*z)^{1/2}\mathrm{\Gamma }(z)`$, i.e. $`\mathrm{\Gamma }(z)\subseteq D`$. $`\Box`$

A similar statement about $`D_*`$ also holds, except that in that case one need not assume $`S^*=S^{***}`$; it is automatic.

The above proposition tells us that if $`S|_K`$ is regular, then even though $`S`$ may not be regular, it is uniquely determined by a bounded adjointable operator on $`K`$, as long as $`S`$ is sufficiently nice (i.e. $`S=S^{**}`$).

###### Proposition 2.8

Let $`S`$ be a closed semiregular operator on $`𝒜`$ such that $`S|_K`$ is regular with $`z`$-transform $`z`$. Then one has the following inclusions:
$$\begin{array}{ll}\mathrm{i}.\ z𝒜\subseteq \overline{(I-zz^*)^{1/2}𝒜},& \mathrm{ii}.\ z^*𝒜\subseteq \overline{(I-z^*z)^{1/2}𝒜},\\ \mathrm{iii}.\ 𝒜z\subseteq \overline{𝒜(I-z^*z)^{1/2}},& \mathrm{iv}.\ 𝒜z^*\subseteq \overline{𝒜(I-zz^*)^{1/2}},\\ \mathrm{v}.\ z^*z𝒜\subseteq \overline{(I-z^*z)𝒜},& \mathrm{vi}.\ zz^*𝒜\subseteq \overline{(I-zz^*)𝒜},\\ \mathrm{vii}.\ 𝒜\subseteq \overline{(I-z^*z)𝒜},& \mathrm{viii}.\ 𝒜\subseteq \overline{(I-zz^*)𝒜}.\end{array}$$
(Here the overline indicates closure in the norm topology.)

Proof: We will prove (i) here. The proof of (ii) is similar. All the other inclusions follow from these two. Take any $`a=(I-z^*z)^{1/2}d\in D(S)`$. Then $`za=z(I-z^*z)^{1/2}d=(I-zz^*)^{1/2}zd\in (I-zz^*)^{1/2}𝒜`$. Thus $`zD(S)\subseteq (I-zz^*)^{1/2}𝒜`$. Since $`D(S)`$ is dense in $`𝒜`$, we have the required inclusion. $`\Box`$

###### Corollary 2.9

With the notation as above, one has the following:
$$D\subseteq \overline{(I-z^*z)^{1/2}𝒜},\qquad D_*\subseteq \overline{(I-zz^*)^{1/2}𝒜}.$$

Proof: Any $`d\in D`$ is of the form $`(I-z^*z)^{1/2}a+z^*Sa`$ for some $`a\in D(S)`$. By part (ii) of the previous proposition, $`z^*Sa\in \overline{(I-z^*z)^{1/2}𝒜}`$. Hence we have the first inclusion. The proof of the other one is similar. $`\Box`$

###### Lemma 2.10

Let $`S`$ be as in proposition 2.8. If $`z\in M(𝒜)`$ then $`S^{**}`$ is regular.

Proof: From corollary 2.9 and the given condition, it follows that $`D\subseteq 𝒜`$. Therefore $`(I-z^*z)^{1/2}𝒜`$ contains $`D(S)`$ and is dense in $`𝒜`$. So $`z`$ is indeed the $`z`$-transform of some regular operator $`T`$ on $`𝒜`$. Clearly $`S\subseteq T`$, so that $`T^*\subseteq S^*`$. From corollary 2.9 we also have $`D_*\subseteq 𝒜`$. Therefore $`D(S^*)=(I-zz^*)^{1/2}D_*\subseteq (I-zz^*)^{1/2}𝒜=D(T^*)`$. It follows then that $`S^*=T^*`$. Hence $`S^{**}=T^{**}=T`$.
Thus $`S^{**}`$ is regular. $`\Box`$

###### Proposition 2.11

Let $`S`$ and $`z`$ be as in the previous proposition. If $`z^*z\in M(𝒜)`$ then $`S^{**}`$ is regular.

Proof: Let us first show that $`zz^*`$ is also in $`M(𝒜)`$. Take any $`a`$ and $`b`$ in $`D(S^*)`$. There are elements $`c`$, $`d`$ in $`D_*`$ such that $`a=(I-zz^*)^{1/2}c`$ and $`b=(I-zz^*)^{1/2}d`$. For any integer $`n\ge 1`$, we have $`a^*(zz^*)^nb=c^*(I-zz^*)^{1/2}z(z^*z)^{n-1}z^*(I-zz^*)^{1/2}d=(z^*c)^*(I-z^*z)^{1/2}(z^*z)^{n-1}(I-z^*z)^{1/2}z^*d\in 𝒜`$. Since $`D(S^*)`$ is norm dense in $`𝒜`$, one has $`a^*(zz^*)^nb\in 𝒜`$ for all $`a,b\in 𝒜`$. This means in particular that $`zz^*`$ and $`(zz^*)^2`$ are both in $`QM(𝒜)`$, the space of quasi-multipliers of $`𝒜`$. By proposition 5.3 in , $`zz^*\in LM(𝒜)`$, and since $`zz^*`$ is positive, it is actually in $`M(𝒜)`$. Now from parts (i) and (iii) of proposition 2.8 and the foregoing proposition, it follows that $`S^{**}`$ is regular. $`\Box`$

## 3 Regularity

We are now ready for the main results of this paper. Let $`\pi `$ be the canonical projection of $`M(K)`$ onto $`M(K)/K`$. The restriction of $`\pi `$ to $`𝒜`$ gives the canonical projection of $`𝒜`$ onto $`𝒜/K`$.

###### Theorem 3.1

Let $`S`$ be a closed semiregular operator on $`𝒜`$ such that its restriction to $`K`$ is regular. If
$$\left(Z(𝒜/K)\cap \pi (D(S))\right)\left(𝒜/K\right)\text{ is total in }𝒜/K,$$ (3.1)
where $`Z(𝒜/K)`$ is the centre of $`𝒜/K`$, then $`S^{**}`$ is regular.

Proof: Let $`z`$ be the $`z`$-transform of $`S|_K`$, and let $`\{e_\alpha \}_\alpha `$ be an approximate identity in $`𝒜`$. By part (iii) of proposition 2.8, there exist elements $`f_\alpha `$ in $`𝒜`$ such that $`lim_\alpha \|e_\alpha z-f_\alpha (I-z^*z)^{1/2}\|=0.`$ This implies that
$$\underset{\alpha }{lim}\|z^*e_\alpha ^2z-(I-z^*z)^{1/2}f_\alpha ^*f_\alpha (I-z^*z)^{1/2}\|=0,$$
which, in turn, implies that
$$\underset{\alpha }{lim}\|z^*zd-(I-z^*z)^{1/2}f_\alpha ^*f_\alpha (I-z^*z)^{1/2}d\|=0$$
for all $`d\in D`$. It follows then that
$$\underset{\alpha }{lim}\|(I-z^*z)^{1/2}d-(I-z^*z)(I+f_\alpha ^*f_\alpha )(I-z^*z)^{1/2}d\|=0$$
for all $`d\in D`$, i.e.
$$\underset{\alpha }{lim}\|a-(I-z^*z)(I+f_\alpha ^*f_\alpha )a\|=0\qquad \forall a\in D(S).$$
Applying $`\pi `$ now, we get
$$\underset{\alpha }{lim}\|\pi (a)-(I-\pi (z)^*\pi (z))(I+\pi (f_\alpha )^*\pi (f_\alpha ))\pi (a)\|=0\qquad \forall a\in D(S).$$
Now choose an $`a\in D(S)`$ such that $`\pi (a)\in Z(𝒜/K)`$; then $`I+\pi (f_\alpha )^*\pi (f_\alpha )`$ will commute with $`\pi (a)`$. Therefore, using the facts that $`\|(I+f_\alpha ^*f_\alpha )^{-1}\|\le 1`$ and that $`(I+\pi (f_\alpha )^*\pi (f_\alpha ))^{-1}`$ also commutes with $`\pi (a)`$, we get
$$\underset{\alpha }{lim}\|(I+\pi (f_\alpha )^*\pi (f_\alpha ))^{-1}\pi (a)-(I-\pi (z)^*\pi (z))\pi (a)\|=0$$ (3.2)
for all $`\pi (a)\in Z(𝒜/K)\cap \pi (D(S))`$. From condition (3.1), it follows that (3.2) holds for all $`\pi (a)\in 𝒜/K`$. That is, for any $`a\in 𝒜`$, $`\pi (z)^*\pi (z)\pi (a)\in 𝒜/K`$. Hence there is a $`b\in 𝒜`$ and a $`k\in K`$ such that $`z^*za=b+k`$, which implies that $`z^*za\in 𝒜`$. Thus $`z^*z\in M(𝒜)`$. From proposition 2.11, we conclude that $`S^{**}`$ is regular. $`\Box`$

The following two corollaries are now immediate.

###### Corollary 3.2

Let $`S`$ be a closed semiregular operator on $`𝒜`$ such that its restriction to $`K`$ is regular. If $`𝒜/K`$ is abelian, then $`S^{**}`$ is regular.

Proof: In this case, $`Z(𝒜/K)\cap \pi (D(S))=\pi (D(S))`$. Therefore condition (3.1) holds. $`\Box`$

###### Corollary 3.3

Let $`S`$ be as in the earlier theorem. If $`𝒜/K`$ is unital, then $`S^{**}`$ is regular.
Proof: Since $`\pi (D(S))`$ is a dense right ideal in $`\pi (𝒜)=𝒜/K`$, which is unital, we have $`\pi (D(S))=𝒜/K`$. Therefore $`I\in Z(𝒜/K)\cap \pi (D(S))`$. So (3.1) is satisfied. $`\Box`$

###### Remark 3.4

We will primarily be interested in the case $`K=\mathcal{B}_0(\mathcal{H})`$. By proposition 5.1 of , the condition that the restriction of $`S`$ to $`K`$ is regular is automatic in this case.

It is now natural to ask what happens in the general case, i.e. when $`𝒜/K`$ is neither unital nor abelian. We will give a counterexample to illustrate that the result may fail to hold in general. Before going to the example, let us observe that if $`S`$ is a semiregular operator on $`𝒜`$, then the prescription
$$D(\pi (S)):=\pi (D(S)),\qquad \pi (S)\pi (a):=\pi (Sa),\quad a\in D(S),$$
defines a semiregular operator on $`\pi (𝒜)`$. The example below, which appears in as an example of a nonregular operator, will in fact show that even if $`S|_K`$ and $`\pi (S)`$ are both regular, $`S`$ may fail to be so.

Let us first define an operator on the Hilbert $`C^*`$-module $`E=C[0,1]\otimes \mathcal{H}`$, where $`\mathcal{H}=L_2(0,1)`$. Let $`\beta `$ be the following function on the interval $`[0,1]`$:
$$\beta (\pi )=\{\begin{array}{cc}1\hfill & \text{if }\pi =0\text{,}\hfill \\ \mathrm{exp}(i/\pi )\hfill & \text{if }0<\pi \le 1\text{.}\hfill \end{array}$$
Let
$$D_\pi =\{f\in L_2(0,1):f\text{ absolutely continuous, }f^{\prime}\in L_2(0,1),\ f(0)=\beta (\pi )f(1)\}.$$
For $`f\in E`$, denote by $`f_\pi `$ the function $`f(\pi ,\cdot )`$ in $`\mathcal{H}`$. Let $`T`$ be the semiregular operator on $`E`$ defined as follows:
$$D(T)=\{f\in E:f_\pi \in D_\pi \ \forall \pi ,\ \pi \mapsto (f_\pi )^{\prime}\text{ is continuous}\},$$
$$(Tf)_\pi :=i(f_\pi )^{\prime}.$$
It has been shown by Hilsum () that this is a self-adjoint nonregular operator. Also, from proposition 2.9 in , it follows that the restriction of $`T`$ to the submodule $`F=C_0(0,1]\otimes \mathcal{H}`$ is a self-adjoint regular operator. Notice two things now. $`𝒜=C[0,1]\otimes \mathcal{B}_0(\mathcal{H})`$ is the $`C^*`$-algebra of 'compact' operators on $`E`$, and $`K=C_0(0,1]\otimes \mathcal{B}_0(\mathcal{H})`$ is the corresponding $`C^*`$-algebra for $`F`$. $`K`$ can easily be seen to be an essential ideal in $`𝒜`$, and $`𝒜/K\cong \mathcal{B}_0(\mathcal{H})`$. Let $`\varphi _1`$ be the map introduced in section 3 of for the Hilbert module $`E`$. Define $`S`$ to be the operator $`\overline{\varphi _1(T)}`$ on $`𝒜`$. Using lemmas 3.1, 3.2 and 3.5 in , one can prove that for any semiregular operator $`𝒕`$ on $`E`$, $`\overline{\varphi _1(𝒕^*)}=\varphi _1(𝒕)^*`$. Since in our case $`T`$ is self-adjoint, it follows that $`S`$ is self-adjoint too. Nonregularity of $`S`$ is also clear from the discussion at the end of section 3 in . The restriction of $`S`$ to $`K`$ is the $`\varphi _1`$-image of the restriction of $`T`$ to $`F`$. Therefore $`S|_K`$ is regular. Since $`𝒜/K\cong \mathcal{B}_0(\mathcal{H})`$, the projection of $`S`$ on $`𝒜/K`$ is also regular by proposition 5.1 in .

###### Remark 3.5

If we write $`z`$ for the $`z`$-transform of the restriction of $`S`$ to $`K`$, then the above example tells us that the inclusions in proposition 2.8 are not enough to guarantee that $`z\in M(𝒜)`$, as in that case $`S`$ would have been regular.

## 4 Examples

We will restrict ourselves to two examples in this section that occur naturally in the study of quantum groups.
The first one is the $`C^*`$-algebra corresponding to the quantum complex plane, and the other one is the crossed product algebra $`C_0(q^{\mathbb{Z}}\cup \{0\})\rtimes _\alpha \mathbb{Z}`$, where $`q`$ is a fixed real number in the interval (0,1), $`q^{\mathbb{Z}}`$ stands for the set $`\{q^k:k\in \mathbb{Z}\}`$, and the action $`\alpha `$ of $`\mathbb{Z}`$ on $`C_0(q^{\mathbb{Z}}\cup \{0\})`$ is given by
$$\alpha _kf(q^r)=f(q^{r-k}),\quad r,k\in \mathbb{Z},\qquad \alpha _kf(0)=f(0).$$
Let us start with the quantum complex plane. Let $`\mathcal{H}=L_2(\mathbb{Z})`$, with canonical orthonormal basis $`\{e_n\}_{n\in \mathbb{Z}}`$. Let $`\ell ^*`$ and $`q^N`$ denote the following operators:
$$\ell ^*e_k=e_{k+1},\quad k\in \mathbb{Z},\qquad q^Ne_k=q^ke_k,\quad k\in \mathbb{Z}.$$
Let $`D`$ denote the linear span of $`\{(\ell ^*)^kf_k(q^N):k\in \mathbb{Z},\ f_k\in C_0(q^{\mathbb{Z}}\cup \{0\}),\ f_k(0)=0\text{ for }k\ne 0\}`$. The $`C^*`$-algebra of 'continuous vanishing-at-infinity functions' on the quantum plane, which we denote by $`C_0(\mathbb{C}_q)`$, is the norm closure of $`D`$. The quantum complex plane can be looked upon as the homogeneous space $`E_q(2)/S^1`$ ($`S^1`$ being the one dimensional torus) for the quantum $`E(2)`$ group (,). $`C_0(\mathbb{C}_q)`$ was introduced in a slightly different form in (for a proof of the fact that the $`C^*`$-algebra described above is isomorphic to the one in , see ).

###### Lemma 4.1

$`C_0(\mathbb{C}_q)/\mathcal{B}_0(\mathcal{H})\cong \mathbb{C}`$.

Proof: It is easy to see that $`C_0(\mathbb{C}_q)`$ acts irreducibly on $`\mathcal{H}`$ and contains the compact operator $`|e_0\rangle \langle e_0|=I_{\{1\}}(q^N)`$. Therefore $`\mathcal{B}_0(\mathcal{H})\subseteq C_0(\mathbb{C}_q)`$. Define a map $`\varphi :C_0(\mathbb{C}_q)\to \mathbb{C}`$ by the prescription
$$\varphi \left(\underset{k}{\sum }(\ell ^*)^kf_k(q^N)\right)=f_0(0),\qquad \underset{k}{\sum }(\ell ^*)^kf_k(q^N)\in D.$$
It extends to a complex homomorphism of $`C_0(\mathbb{C}_q)`$. It is easy to see that $`\mathrm{ker}\varphi `$ is the closure of $`\{\sum (\ell ^*)^kf_k(q^N):k\in \mathbb{Z},\ f_k\in C_0(q^{\mathbb{Z}}\cup \{0\}),\ f_k(0)=0\text{ for all }k\}`$, i.e. it is isomorphic to $`C_0(\mathbb{Z})\rtimes \mathbb{Z}`$, which in turn is isomorphic to $`\mathcal{B}_0(\mathcal{H})`$. $`\Box`$

We can now apply corollary 3.3 to conclude that for any closed semiregular operator $`S`$ on $`C_0(\mathbb{C}_q)`$, $`S^{**}`$ is regular. Indeed, since the restriction of $`S`$ to $`\mathcal{B}_0(\mathcal{H})`$ is regular, by proposition 2.3, $`S^{**}`$ is an operator satisfying the assumptions of corollary 3.3.

Our second example, the crossed product algebra $`𝒜=C_0(q^{\mathbb{Z}}\cup \{0\})\rtimes \mathbb{Z}`$, is actually very similar to the previous one. Its relevance to quantum groups stems from the fact that for any infinite dimensional irreducible representation $`\pi `$ of the $`C^*`$-algebra $`C_0(E_q(2))`$ corresponding to the quantum $`E(2)`$ group, $`\pi (C_0(E_q(2)))`$ is isomorphic to $`𝒜`$. From the definition of a crossed product algebra, it can be shown quite easily that $`𝒜`$ is the norm closure of the linear span of $`\{(\ell ^*)^kf_k(q^N):k\in \mathbb{Z},\ f_k\in C_0(q^{\mathbb{Z}}\cup \{0\})\}`$. One then shows that $`𝒜/\mathcal{B}_0(\mathcal{H})\cong C(S^1)`$. The proof is similar to the proof of lemma 4.1, except that the map $`\varphi `$ in this case maps $`𝒜`$ onto $`C(S^1)`$ and is defined by $`\varphi (\sum _k(\ell ^*)^kf_k(q^N))=\sum _kf_k(0)\zeta ^k`$, where $`\zeta `$ stands for the function $`z\mapsto z`$ on $`S^1`$.
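As a small sanity check on the operators just defined (our own illustration, not part of the original text), one can realize $`\ell ^*`$ and $`q^N`$ as matrices on a truncated copy of $`\ell _2(\mathbb{Z})`$ and verify the commutation relation $`q^N\ell ^*=q\,\ell ^*q^N`$, which follows directly from $`\ell ^*e_k=e_{k+1}`$ and $`q^Ne_k=q^ke_k`$:

```python
# Truncated matrix model of the generators (our illustration).
# l* is the bilateral shift, q^N the diagonal operator q^N e_k = q^k e_k;
# we check the relation  q^N l* = q (l* q^N)  on the truncation k = -n..n.
import numpy as np

q, n = 0.5, 8
ks = np.arange(-n, n + 1)
dim = len(ks)

shift = np.zeros((dim, dim))            # l* : e_k -> e_{k+1}
for i in range(dim - 1):
    shift[i + 1, i] = 1.0

qN = np.diag(q ** ks.astype(float))     # q^N : e_k -> q^k e_k

lhs = qN @ shift
rhs = q * (shift @ qN)
assert np.allclose(lhs, rhs)            # holds exactly, also at the cut-off,
print("relation verified on a",         # since the top vector is sent to 0
      dim, "dimensional truncation")
```

Elements of the dense subalgebra $`D`$ can be manipulated in this truncated picture, which is convenient for quick experiments, although of course it proves nothing about the full $`C^*`$-algebra.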
## 1 Introduction

Neutron stars (NS) can appear as isolated objects (radiopulsars, old isolated accreting NS, etc.) and as X-ray sources in close binary systems. The most prominent of the latter are X-ray pulsars, for which important parameters of the NS can be determined. More than 40 X-ray pulsars are now known (see, for example, Bildsten et al., 1997). Observations of optical counterparts give an opportunity to obtain distances to these objects with high precision, and with cyclotron line detections one can obtain the magnetic field of a NS. But such lines are not detected in all sources of that type (partly because they can lie outside the range of the required spectral sensitivity of the instruments when the fields are high), and the magnetic field can then be estimated from period measurements (see Lipunov, 1982, 1992). Precise distance measurements are usually not available immediately after an X-ray discovery (especially if the error boxes, as for example in the BATSE case, are large). So methods of their determination based only on X-ray observations can be useful. Here we propose a simple method to determine the magnetic field of, and the distance to, an X-ray pulsar using only measurements of the X-ray flux and period variations.

## 2 Method

In Lipunov (1982) it was proposed to use maximum spin-up and spin-down values to obtain limits on the magnetic moment of X-ray pulsars in disk or wind models, using known values of the luminosity (the method based on the maximum spin-down is very insensitive to uncertainties in the luminosity and produces better results). In this short note we propose a rough but simple method to determine the magnetic field without a known distance, and to determine the distance itself. The method is based on several measurements of the period derivative, $`\dot{p}`$, and of the X-ray pulsar's flux, $`f`$. By fitting two parameters, the distance, $`d`$, and the magnetic moment, $`\mu `$, one can obtain good correspondence with the observed $`\dot{p}`$ and $`f`$, and in that way produce good estimates of the distance and the magnetic field. Here we consider only disk accretion. In that case one can write (see Lipunov, 1982, 1992):
$$\frac{d(I\omega )}{dt}=\dot{M}\left(GM\epsilon R_A\right)^{1/2}-k_t\frac{\mu ^2}{R_c^3},$$ (1)
where $`\omega `$ is the spin frequency of the NS, $`M`$ its mass, $`I`$ its moment of inertia, $`R_A`$ the Alfven radius, and $`R_c`$ the corotation radius. We use the following values: $`\epsilon =0.45`$, $`k_t=1/3`$ (see Lipunov, 1992). The first term on the right side represents the acceleration of the NS by the accretion disk, and the second term represents the deceleration. The form of the deceleration term is general; only the typical radius of interaction should be changed: it is equal to $`R_c`$ for accretors, to the light cylinder radius $`R_l`$ for ejectors, and to $`R_A`$ for propellers (see the details in Lipunov 1992). Let us rewrite eq. (1) in terms of the period and its derivative:
$$\dot{p}=\frac{4\pi ^2\mu ^2}{3GIM}-(0.45)^{1/2}\,2^{-1/14}\,\frac{\mu ^{2/7}}{I}\left(GM\right)^{-3/7}\left[p^{7/3}L\right]^{6/7}R^{6/7},$$ (2)
where $`L=4\pi d^2f`$ is the luminosity and $`f`$ the observed flux. So, in eq. (2) we know all parameters ($`I`$, $`M`$, $`R`$, etc.) except $`\mu `$ and $`d`$. By fitting the observed points with them we can obtain estimates of $`\mu `$ and $`d`$. If $`\mu `$ is known, one can immediately obtain $`d`$ from eq. (2) even from one determination of $`\dot{p}`$ (in that case it is better to use the spin-down value). The uncertainties mainly depend on the applicability of this simple model.
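To make the fitting procedure concrete, here is a minimal numerical sketch (ours, not the authors' code) of eq. (2) together with a two-parameter least-squares fit. The NS parameters are the standard values quoted in the next section; the "measurements" are synthetic placeholders generated from the model itself, so the fit should simply recover the input $`d`$ and $`\mu `$. The signs and exponents of eq. (2) are assumed as reconstructed above.

```python
# Schematic implementation of the proposed fit (our sketch).  CGS units.
import numpy as np
from scipy.optimize import least_squares

G = 6.674e-8                  # gravitational constant
M = 1.4 * 1.989e33            # NS mass, g
I = 1.0e45                    # moment of inertia, g cm^2
R = 1.0e6                     # NS radius, cm
kpc = 3.086e21                # cm

def pdot_model(p, f, d, mu):
    """Eq. (2): spin-down term minus accretion spin-up term."""
    L = 4.0 * np.pi * d**2 * f
    down = 4.0 * np.pi**2 * mu**2 / (3.0 * G * I * M)
    up = (0.45**0.5 * 2.0**(-1.0 / 14.0) * mu**(2.0 / 7.0) / I
          * (G * M)**(-3.0 / 7.0)
          * (p**(7.0 / 3.0) * L)**(6.0 / 7.0) * R**(6.0 / 7.0))
    return down - up

p_obs = np.array([93.5, 93.5, 93.5])          # s
f_obs = np.array([0.5e-9, 1.0e-9, 2.0e-9])    # erg cm^-2 s^-1 (placeholders)
pdot_obs = pdot_model(p_obs, f_obs, 5.8 * kpc, 37.6e30)  # synthetic "data"

def residuals(x):
    d, mu = x[0] * kpc, x[1] * 1.0e30
    return (pdot_model(p_obs, f_obs, d, mu) - pdot_obs) / 1.0e-9  # rescale

fit = least_squares(residuals, x0=[3.0, 20.0],
                    bounds=([0.1, 1.0], [20.0, 100.0]))
print("d = %.2f kpc,  mu = %.2f x 10^30 G cm^3" % tuple(fit.x))
```

With real data one would replace the synthetic arrays by the measured periods, fluxes and period derivatives; both spin-up and spin-down points can be fitted simultaneously, since the two terms of eq. (2) are of comparable size near the equilibrium period.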
## 3 Illustration of the method

To illustrate the method, we apply it to the X-ray pulsar GRO J1008-57, discovered by BATSE (Bildsten et al., 1997). It is a 93.5 s X-ray pulsar, with a flux of about $`10^{-9}\,\mathrm{erg\,cm^{-2}\,s^{-1}}`$. A 33 day outburst was observed by BATSE in August 1993. The source was also observed by EXOSAT (Macomb et al., 1994) and ASCA (Day et al., 1995). ROSAT made it possible to localize the source with high precision (Petre & Gehrels, 1994), and it was identified with a Be-system (Coe et al., 1994) with a $`135^d`$ orbital period (Shrader et al. 1999). We use here only the 1993 outburst, described in Bildsten et al. (1997). The authors of Bildsten et al. (1997) show the flux and frequency history of the source with 1 day integration. Near the maximum of the outburst the errors are rather small, and we neglect them. Points with large errors were not used. We used standard values of the NS parameters: $`I=10^{45}\,\mathrm{g\,cm^2}`$, the moment of inertia; $`R=10`$ km, the NS radius; $`M=1.4\,M_{\odot}`$, the NS mass. In figures 1–2 we show observations (as black dots) and theoretical curves (in the disk model; see Shrader et al. 1999, who proposed disk formation during the outbursts, in contrast with Macomb et al. (1994), who proposed wind accretion) on the plane $`\dot{p}`$ vs. $`p^{7/3}f`$, where $`f`$ is the observed flux (logarithms of these quantities are shown). The curves were plotted for different values of the source distance, $`d`$, and of the NS magnetic moment, $`\mu `$. The best fit (both for spin-up and spin-down) gives $`d\approx 5.8`$ kpc and $`\mu \approx 37.6\times 10^{30}\,\mathrm{G\,cm^3}`$. It is shown on both figures. The distance is in correspondence with the value in Shrader et al. (1999), and such a field value is not unusual for NSs in general and for X-ray pulsars in particular (see, for example, Lipunov, 1992 and Bildsten et al., 1997). Tests on some other X-ray pulsars with known distances and magnetic fields also gave good results.

## 4 Discussion and conclusions

The method is only approximate and depends on several assumptions (disk accretion, specified values of $`M`$, $`I`$, $`R`$, etc.). Estimates of $`\mu `$, for example, can be only in rough correspondence with observations of the magnetic field $`B`$ if the standard value of the NS radius, $`R=10`$ km, is used (see, for example, the case of Her X-1 in Lipunov 1992). Non-standard values of $`I`$ and $`M`$ can also make the picture more complicated. The method can, in principle, be generalized to wind-accreting systems, and to disk-accreting systems with complicated time behavior (when, for example, $`\dot{p}`$ changes appear at nearly constant flux, or even when $`\dot{p}`$ changes are uncorrelated with flux variations). If one uses the maximum spin-up or maximum spin-down values to evaluate the parameters of the pulsar, then one can obtain values different from the best fit (they are shown in the figures): $`d\approx 8`$ kpc, $`\mu \approx 37.6\times 10^{30}\,\mathrm{G\,cm^3}`$ for the maximum spin-up, and two values for the maximum spin-down: $`d\approx 4`$ kpc, $`\mu \approx 37.6\times 10^{30}\,\mathrm{G\,cm^3}`$ and the one close to our best fit (two similar values of the maximum spin-down were observed at different fluxes, but we note that formally the maximum spin-down corresponds to the values close to our best fit). This can be used as an estimate of the errors of our method: the accuracy is about a factor of 2 in distance, and about the same in magnetic field, as can be seen from the figures.
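As a quick consistency check of the best-fit numbers quoted above (our own sketch; the conversion $`B=2\mu /R^3`$ assumes the common polar-field convention), one can translate the observed flux into a luminosity and the magnetic moment into a surface field:

```python
# Consistency check of the best-fit values (our sketch).
import numpy as np

d = 5.8 * 3.086e21      # cm, best-fit distance
f = 1.0e-9              # erg cm^-2 s^-1, typical flux near the maximum
mu = 37.6e30            # G cm^3, best-fit magnetic moment
R = 1.0e6               # cm, standard NS radius

L = 4.0 * np.pi * d**2 * f    # isotropic luminosity, ~4e36 erg/s
B = 2.0 * mu / R**3           # polar surface field (assumed convention)
print("L = %.1e erg/s,  B = %.1e G" % (L, B))
```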
In some very uncertain situations, for example when only X-ray observations without a precise localization are available, our method can give (based on several observational points, not just one, as, for example, in the case of the maximum spin-down determination of the magnetic moment) rough but useful estimates of important parameters: the distance and the magnetic moment.

Acknowledgments

It is a pleasure to thank Prof. V.M. Lipunov for numerous discussions and suggestions and Drs. I.E. Panchenko, M.E. Prokhorov and K.A. Postnov for useful comments. The work was supported by the RFBR (98-02-16801) and the INTAS (96-0315) grants.

Figure captions

Figure 1. Dependence of the period derivative, $`\dot{p}`$, on the parameter $`p^{7/3}f`$, where $`f`$ is the observed flux. Both axes are on a logarithmic scale. Observations (Bildsten et al., 1997) are shown with black dots. Five curves are plotted for disk accretion for different values of the distance to the pulsar and of the NS magnetic moment. Solid curve: $`d=4`$ kpc, $`\mu =37.6\times 10^{30}\,\mathrm{G\,cm^3}`$. Dashed curve: $`d=8`$ kpc, $`\mu =37.6\times 10^{30}\,\mathrm{G\,cm^3}`$. Long-dashed curve: $`d=5.8`$ kpc, $`\mu =10\times 10^{30}\,\mathrm{G\,cm^3}`$. Dot-dashed curve: $`d=5.8`$ kpc, $`\mu =45\times 10^{30}\,\mathrm{G\,cm^3}`$. Dotted curve (the best fit): $`d=5.8`$ kpc, $`\mu =37.6\times 10^{30}\,\mathrm{G\,cm^3}`$.

Figure 2. Dependence of the period derivative, $`\dot{p}`$, on the parameter $`p^{7/3}f`$, where $`f`$ is the observed flux. Both axes are on a logarithmic scale. Observations (Bildsten et al., 1997) are shown with black dots. Five curves are plotted for disk accretion for different values of the distance to the pulsar and of the NS magnetic moment. Solid curve: $`d=4`$ kpc, $`\mu =10\times 10^{30}\,\mathrm{G\,cm^3}`$. Dashed curve: $`d=8`$ kpc, $`\mu =10\times 10^{30}\,\mathrm{G\,cm^3}`$. Long-dashed curve: $`d=8`$ kpc, $`\mu =45\times 10^{30}\,\mathrm{G\,cm^3}`$. Dot-dashed curve (the best fit): $`d=5.8`$ kpc, $`\mu =37.6\times 10^{30}\,\mathrm{G\,cm^3}`$. Dotted curve: $`d=4`$ kpc, $`\mu =45\times 10^{30}\,\mathrm{G\,cm^3}`$.
# Angular momentum sharing in dissipative collisions

## Abstract

Light charged particles emitted by the projectile-like fragment were measured in the direct and reverse collision of <sup>93</sup>Nb and <sup>116</sup>Sn at 25 AMeV. The experimental multiplicities of Hydrogen and Helium particles as a function of the primary mass of the emitting fragment show evidence for a correlation with net mass transfer. The ratio of Hydrogen and Helium multiplicities points to a dependence of the angular momentum sharing on the net mass transfer.

It is now experimentally established that binary or quasi-binary dissipative processes continue to dominate the heavy-ion reaction cross-section well into the intermediate energy regime . In this context, a still open field of investigation concerns the degree of equilibrium attained in the internal degrees of freedom and, in particular, the partition of energy and angular momentum between the two reaction partners. At low bombarding energies ($`\lesssim `$ 15 AMeV), several experimental findings (concerning mass and charge drift, variances, excitation energies of reaction products) are rather well accounted for, in some cases also quantitatively, by models based on the stochastic exchange of single nucleons (see, e.g., ). At larger bombarding energies, the relevance of such a mechanism becomes somewhat uncertain, due to the decrease of interaction times, to the increasing importance of the reaction dynamics and to associated non-equilibrium effects. In the <sup>120</sup>Sn + <sup>100</sup>Mo collision at 19.1 AMeV the fission probability $`P_{fiss}`$ of the projectile- and target-like fragments (PLF and TLF) was measured as a function of the primary mass $`A`$. For a given $`A`$, corresponding to different net mass transfers for the PLF and TLF, $`P_{fiss}`$ was found to be significantly larger for the TLF (which gained mass), even at large TKEL (Total Kinetic Energy Loss). The observed effect is a clear signature of the lack of an overall equilibrium between the two partners at the end of the interaction. In the <sup>100</sup>Mo + <sup>120</sup>Sn collision at 14.1 AMeV a similar behavior was found also in the binary exit channel, where the highly excited fragments de-excite mainly by light particle emission. The observed correlation between the total number of emitted nucleons and the net mass transfer indicates a non-equilibrium excitation energy partition between the reaction products, with an excess of excitation being deposited in the fragment which gains nucleons. Similar conclusions had been drawn by other authors , but remained quite controversial. Although the existence of such correlations is compatible, by itself, with a nucleon exchange picture, the fact that they are largely independent of the degree of inelasticity is difficult to understand within the present versions of the stochastic nucleon exchange model and deserves new investigation. Up to now no study has been performed concerning possible correlations between net mass transfer and angular momentum sharing. This letter presents for the first time direct evidence for a correlation between angular momentum sharing and net mass transfer. Beams of <sup>93</sup>Nb and <sup>116</sup>Sn at 24.9 AMeV were delivered by the GANIL accelerator with an excellent time resolution (about 550 ps and 350 ps FWHM for Nb and Sn, respectively). The system <sup>93</sup>Nb + <sup>116</sup>Sn was studied in direct and reverse kinematics, as already done in a previous experiment at lower energy .
This method allows one to obtain information on both partners of the collision, although the detection is optimized for the PLF and its associated particles. It is equivalent to studying, in two separate runs, the PLF and TLF of the Nb+Sn collision. The slight mass asymmetry guarantees a common range of PLF masses in the exit channels in both kinematic cases, even at moderate TKEL. Beam and target feasibility reasons (as well as the wish to have a system close to the <sup>100</sup>Mo+<sup>120</sup>Sn studied at 14 AMeV ) guided the choice of the system. The isotope of Sn, while abundant enough to make a beam, has an N/Z ratio similar enough to that of <sup>93</sup>Nb so as to reduce the possible role of isospin. Heavy ($`A\ge 20`$) reaction products were detected with position-sensitive gas detectors . The FWHM resolution was 3.5 mm for position and 600–750 ps for the time-of-flight (including the beam contribution). From the measured velocity vectors, primary (pre-evaporative) quantities were deduced event-by-event with an improved version of the kinematic coincidence method. For elastic events, the FWHM resolution of the primary mass was $`\approx `$ 2 amu. The background of incompletely measured events of higher multiplicity was estimated and subtracted from the results . Behind the forward gas detectors, on one side of the beam, an array of 46 Silicon detectors (covering a sizeable region below and around the grazing angle) allowed one to deduce the secondary (post-evaporative) mass of the PLF . Light charged particles were measured with the scintillator array "Le Mur" . It consists of 96 pads of fast plastic scintillator NE102, 2 mm thick (threshold $`\approx `$ 3.2 AMeV for protons and $`\alpha `$-particles), and covered in an axially symmetric geometry the region from 2° to 18.5° behind the gas and Silicon detectors. However, because of the shadows of these detectors, not all scintillator pads could be used. More details on the experimental set-up and analysis method are given in Ref. . The data presented in this letter focus on binary events in which light charged particles were detected in the scintillators in coincidence with the PLF and TLF, additionally requiring that the PLF hits a forward gas detector and one of the Silicon detectors behind it. Results of the analysis of the average number $`\mathrm{\Delta }A`$ of nucleons emitted by the PLF as a function of its primary mass $`A`$ are presented in detail elsewhere . Here we just need to mention that $`\mathrm{\Delta }A`$ versus $`A`$ gives rise to two distinct correlations for the Nb+Sn and Sn+Nb data, similarly to what was already observed in the system <sup>100</sup>Mo + <sup>120</sup>Sn at 14 AMeV , which gave direct evidence for the dependence of $`\mathrm{\Delta }A`$ on the net mass transfer. The data of "Le Mur" were used to deduce the multiplicity of light charged particles emitted by the PLF source. The limited solid angle of the scintillator array already selected mainly light particles emitted by the PLF, and this geometric selection was further strengthened in the analysis by rejecting all slow particles stopped in the pads. The remaining particles were cleanly identified in charge Z, but their isotopic composition could not be determined. Therefore in the following we will just refer to Hydrogen and Helium ions. Because of the shadows produced by the Silicon detectors, only light particles emitted on the other side of the beam with respect to the PLF were considered in the analysis.
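For readers unfamiliar with the kinematic coincidence method mentioned above, the following is a bare-bones sketch (ours; the actual analysis uses an improved, evaporation-corrected variant): for a binary event, mass and momentum conservation determine the primary mass split from the two measured velocity vectors alone.

```python
# Minimal kinematic coincidence reconstruction (our sketch).
import numpy as np

A_PROJ, A_TARG = 93, 116                  # Nb + Sn
V_BEAM = np.array([0.0, 0.0, 0.231])      # ~24.9 AMeV, units of c (non-rel.)

def primary_masses(v1, v2):
    """Solve A1*v1 + A2*v2 = A_proj*v_beam with A1 + A2 = A_proj + A_targ,
    in the least-squares sense over the three momentum components."""
    a_tot = A_PROJ + A_TARG
    dv = v1 - v2
    rhs = A_PROJ * V_BEAM - a_tot * v2
    a1 = float(np.dot(dv, rhs) / np.dot(dv, dv))
    return a1, a_tot - a1

# toy event with illustrative (invented) velocity vectors
v_plf = np.array([0.02, 0.0, 0.160])
v_tlf = np.array([-0.02, 0.0, 0.045])
print("A_PLF = %.1f, A_TLF = %.1f" % primary_masses(v_plf, v_tlf))
```

The toy event above reconstructs to a nearly symmetric split ($`A\approx 105`$ for each fragment), the exit channel on which much of the discussion below concentrates.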
Moreover, for a cleaner selection of the PLF source, only light particles emitted in a forward range (from about 14° to 70°) in the PLF frame were considered. In this frame, the experimental light-particle velocity spectra display an evaporation-like shape, consistent, within experimental errors, with the results of evaporation calculations obtained with the statistical code GEMINI . To deduce the multiplicities of particles emitted by the PLF, Monte Carlo efficiency corrections were performed, assuming isotropic evaporative emission. Due to these corrections the absolute values of the light particle multiplicities may be affected by uncertainties of up to $`\pm `$30%. However, the uncertainty is much smaller (of the order of $`\pm `$10%) for the relative values. The average multiplicities $`M_H`$ and $`M_{He}`$ of Hydrogen and Helium ions were determined as a function of the primary mass $`A`$ of the emitting PLF, for bins of TKEL. The left and right columns of Fig. 1 present the results for Hydrogen and Helium, respectively. The circles and squares refer to the light charged particles emitted from the PLF in the <sup>93</sup>Nb + <sup>116</sup>Sn and <sup>116</sup>Sn + <sup>93</sup>Nb reactions, respectively. Two assumptions on the excitation energy sharing were used in the Monte Carlo simulations to deduce the absolute multiplicities from the experimental ones. The solid symbols show the results obtained under the assumption of a non-equilibrium excitation energy sharing (dependent on the net mass transfer), as deduced from the analysis of the correlations between $`\mathrm{\Delta }A`$ and $`A`$ . The open symbols are the results obtained from the same data assuming an energy sharing independent of the net mass transfer. It is apparent that the results are rather insensitive to the particular physical hypothesis on energy sharing used for the correction. The absolute multiplicities for the channel without net mass transfer are lower than the values obtained by GEMINI calculations by a factor of about 1.5–2. However, they are in fair agreement with the experimental values obtained for the PLF emission in the similar systems Mo + Mo at 23.7 AMeV and Xe + Sn at 25 AMeV . Concerning the dependence on the primary mass $`A`$ of the PLF, the multiplicities grow with increasing $`A`$. We want to draw attention to the fact that, for a given TKEL, the multiplicities in the symmetric exit channel are quite different in the two kinematic cases, although the mass of the emitting PLF is the same ($`A\approx `$ 105). They are larger in the direct reaction (where the PLF has gained mass) than in the reverse reaction (where it has lost an equal amount of nucleons). Since the multiplicity of light charged particles is an increasing function of the excitation energy of the emitter, one can deduce that the nucleus gaining mass is more excited than the one losing mass. This behavior is evident beyond errors at all TKEL values for the Helium data, whereas for Hydrogen it becomes weaker with increasing TKEL. These results on the light charged particles are thus in good qualitative agreement with those concerning the total number of evaporated nucleons, $`\mathrm{\Delta }A`$, deduced on the basis of a kinematic reconstruction. It is worth noting that the measurements of light charged particles in the scintillator array "Le Mur" and of heavy reaction products in the gas detectors are independent. Therefore the results of Fig. 1 should be little affected by "instrumental" correlations of the kind discussed in Ref. .
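The Monte Carlo efficiency correction described above can be illustrated with a minimal sketch (ours, with invented source parameters; the real simulation also treats the shadowed pads and the slow-particle rejection): particles are emitted isotropically in the PLF frame, boosted to the laboratory, and counted inside the polar acceptance of "Le Mur".

```python
# Minimal Monte Carlo efficiency estimate for isotropic evaporation.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
V_PLF = 0.20        # PLF laboratory velocity, units of c (assumed)
V_EVAP = 0.10       # typical evaporation velocity (assumed)

cos_t = rng.uniform(-1.0, 1.0, N)       # isotropic in the PLF frame
phi = rng.uniform(0.0, 2.0 * np.pi, N)
sin_t = np.sqrt(1.0 - cos_t**2)
vz = V_EVAP * cos_t + V_PLF             # Galilean boost along the beam
vx = V_EVAP * sin_t * np.cos(phi)
vy = V_EVAP * sin_t * np.sin(phi)

theta_lab = np.degrees(np.arctan2(np.hypot(vx, vy), vz))
eff = np.mean((theta_lab > 2.0) & (theta_lab < 18.5))
print("geometric efficiency = %.3f" % eff)
print("raw multiplicity 0.12 -> corrected %.3f" % (0.12 / eff))
```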
As the relative multiplicities are less uncertain than the absolute ones, we further concentrated on the multiplicity ratio of Hydrogen and Helium, $`M_H/M_{He}`$. Figure 2(a) shows this ratio for the exit channel without net mass transfer. The solid dots refer to the direct reaction <sup>93</sup>Nb + <sup>116</sup>Sn at 25 AMeV, the solid squares to the reverse reaction at the same bombarding energy. Based on the results for the average excitation energy partition , the excitation energy of the PLF is estimated from TKEL assuming an equal division of the total excitation energy, although all the arguments that follow are rather insensitive to this hypothesis. The two sets of experimental data are very similar, as expected from the weak dependence of the light particle multiplicities on the mass of the emitter. In both cases the ratio $`M_H/M_{He}`$ strongly decreases with increasing excitation energy. It has long been known that in the statistical decay of a moderately excited nucleus the ratio of Hydrogen to Helium decreases with increasing spin of the emitter . Statistical-model calculations with GEMINI suggest that this finding may still hold in our range of energies and masses. In Fig. 2(a) the open symbols joined by dotted lines are the results of GEMINI calculations for an excited <sup>93</sup>Nb at various values of the spin. Indeed, the ratio between Hydrogen and Helium particles is found to be rather sensitive to the angular momentum of the evaporating nucleus, with large angular momenta favoring the emission of the more massive Helium particles with respect to the lighter Hydrogen isotopes. The comparison of the experimental data with the calculations suggests that the transfer of angular momentum from the orbital motion to the internal degrees of freedom of the colliding nuclei is weak for low TKEL (peripheral collisions) and becomes larger when going to larger TKEL values (that is, to more central collisions). However, quantitative estimates of the spin values are difficult, as there are experimental indications that at large excitations GEMINI tends to underestimate the emission of intermediate mass fragments . Let us concentrate for the remaining part of this letter on the events leading to a nearly symmetric mass division ($`A\approx `$ 105) in the exit channel. Figure 2(b) presents the ratio of light charged particles $`M_H/M_{He}`$ emitted by the PLF measured in the direct and reverse kinematics for the collision <sup>93</sup>Nb + <sup>116</sup>Sn at 25 AMeV (solid dots and squares, respectively). Here the excitation energy $`E^*`$ of the PLF has been estimated from the measured TKEL assuming an excitation energy division in agreement with that deduced from the total number of nucleons evaporated from the fragments, but again the arguments that follow do not depend on this hypothesis. For the PLF measured in the direct reaction, which therefore experienced a net mass gain, the experimental results (dots) correspond to rather low ratios $`M_H/M_{He}`$, thus being an indication of high spin. The opposite holds for the PLF measured in the reverse reaction, which experienced a net loss of nucleons. In this second case the results (squares) correspond to larger ratios $`M_H/M_{He}`$, thus pointing to a lower spin of the emitter. This observation clearly indicates that the net gain of nucleons is correlated with an excess not only of excitation energy , but also of angular momentum.
The same conclusion would hold true also in the case of a different excitation energy partition, as a relative shift of the two sets of experimental points along the horizontal axis would leave the data for the direct reaction below those for the reverse one, thus indicating in any case an asymmetric sharing of angular momentum. As verified with GEMINI calculations, this interpretation of the data in terms of a net-mass-transfer dependence of both the excitation energy and angular momentum partitions can explain the above-mentioned weaker dependence of the Hydrogen multiplicity on the net mass transfer at larger excitations. In fact, in the direct reaction the simultaneous increase of both excitation energy and spin gives rise to two opposite effects. The higher excitation energy of the PLF tends to increase the average multiplicity of Hydrogen, but the larger spin tends to depress it. On the contrary, for Helium both the higher excitation energy and the larger spin contribute to the increase of the average multiplicity. It is worth stressing that although the specific interpretation in terms of angular momentum sharing requires that the observed light charged particle emission be of statistical origin, in any case the experimental observation proves by itself that no full equilibrium in the relative degrees of freedom between the two final nuclei, both with $`A\approx `$ 105, has been achieved at the end of the interaction phase. In spite of their equal masses, the two nuclei bear memory of the different ways they have been produced, either by net gain or by net loss of nucleons. The present data, together with other observations (like the persistence of steep correlations between $`\mathrm{\Delta }A`$ and $`A`$ even at large TKEL and the systematically larger than expected mass variances $`\sigma _A^2`$ at large TKEL ), suggest the presence of some other mechanism besides the mere stochastic exchange of single nucleons across a window in the dinuclear system. For example, fluctuations in the rupture of a long stretched neck might be an essential ingredient to explain the experimental features. They can explain in a natural way the observation of huge mass variances in spite of rather short interaction times. Asymmetric neck ruptures, with the lighter collision partner obtaining a larger share of the neck matter, could sizably contribute to the net mass transfer leading to a symmetric mass division . In this case the two final nuclei, although of equal mass, would strongly differ in shape and moment of inertia. This neck remnant, if re-absorbed, might be responsible for the larger share of angular momentum. Alternatively, it might contribute to the increased production of intermediate mass fragments in the region between the two collision partners, which is presently a highly debated topic. Concluding, the collision <sup>93</sup>Nb + <sup>116</sup>Sn at 25 AMeV has been studied in direct and reverse kinematics. The light charged particles emitted by the PLF allow a qualitative estimate of the transferred angular momentum and give evidence of a non-equilibrium situation existing between the two collision partners at the end of the interaction. In fact the net gain of nucleons appears to be correlated with an excess of both excitation energy and angular momentum.
This experimental finding seems difficult to reconcile with existing models based on stochastic exchanges of single nucleons and calls for a better theoretical understanding of the microscopic interaction mechanism, including other effects like, e.g., an explicit treatment of the neck degrees of freedom. We wish to thank the GANIL staff for delivering high quality beams pulsed with a very good time structure. We also thank R. Ciaranfi and M. Montecchi for their skillful development of dedicated electronic modules and P. Del Carmine and F. Maletta for their valuable support in the preparation of the experimental set-up.
# Anomalous c-axis charge dynamics in copper oxide materials

## Abstract

Within the $`t`$-$`J`$ model, the c-axis charge dynamics of the copper oxide materials in the underdoped and optimally doped regimes is studied by considering the incoherent interlayer hopping. It is shown that the c-axis charge dynamics is mainly governed by the scattering from the in-plane fluctuations. In the optimally doped regime, the c-axis resistivity is linear in temperature and shows metallic-like behavior at all temperatures, while the c-axis resistivity in the underdoped regime is characterized by a crossover from high-temperature metallic-like to low-temperature semiconducting-like behavior. These results are consistent with experiments and numerical simulations.

The copper oxide materials are among the most complex systems studied in condensed matter physics. The complications arise mainly from (1) the strong anisotropy in the properties parallel and perpendicular to the CuO<sub>2</sub> planes, which are the key structural element in all the copper oxide superconducting materials, and (2) the extreme sensitivity of the properties to the compositions (stoichiometry), which control the carrier density in the CuO<sub>2</sub> plane . After over ten years of intense experimental study of the copper oxide materials, a significant body of reliable and reproducible data has been accumulated by using many probes, which indicates that the normal-state properties in the underdoped and optimally doped regimes are quite unusual in many aspects, suggesting that an unconventional metallic state is realized . Among the striking features of the normal state, the quantity which most evidently displays the anisotropic properties of the copper oxide materials is the charge dynamics , which is manifested by the optical conductivity and resistivity. It has been shown from the experiments that the in-plane charge dynamics is rather universal across the whole family of copper oxide materials . The in-plane optical conductivity for the same doping is nearly materials independent both in magnitude and in energy dependence, and shows non-Drude behavior at low energies and an anomalous midinfrared band in the charge-transfer gap, while the in-plane resistivity $`\rho _{ab}(T)`$ exhibits a linear temperature dependence in the optimally doped regime and a nearly linear temperature dependence with deviations at low temperatures in the underdoped regime . By contrast, the magnitude of the c-axis charge dynamics in the underdoped and optimally doped regimes is strongly materials dependent, i.e., it depends on the species of the building blocks in between the CuO<sub>2</sub> planes . Although the c-axis charge dynamics is very complicated, some qualitative features seem to be common, such as: (1) for the optimally doped YBa<sub>2</sub>Cu<sub>3</sub>O<sub>6+x</sub> and overdoped La<sub>2-x</sub>Sr<sub>x</sub>CuO<sub>4</sub> systems , the transferred weight in the c-axis conductivity decays as $`1/\omega `$ at low energies, in accordance with the metallic-like c-axis resistivity $`\rho _c(T)`$ at all temperatures; and (2) for lower dopings the temperature dependent c-axis resistivity $`\rho _c(T)`$ is characterized by a crossover from high-temperature metallic-like behavior to low-temperature semiconducting-like behavior . The nature of the c-axis charge dynamics in the copper oxide materials is of great importance, as the superconducting mechanism is closely associated with the anisotropic normal-state properties .
Since the undoped copper oxide materials are antiferromagnetic Mott insulators, upon hole doping the antiferromagnetic long-range order is rapidly destroyed and the unusual metallic state emerges . In this case, many researchers believe that the essential physics is contained in the doped antiferromagnets , which may be effectively described by the $`t`$-$`J`$ model acting on the space with no doubly occupied sites . On the other hand, there is a lot of evidence from experiments and numerical simulations in favour of the $`t`$-$`J`$ model as the basic underlying microscopic model . Within the two-dimensional (2D) $`t`$-$`J`$ model, the in-plane charge dynamics of the copper oxide materials has been extensively studied theoretically as well as numerically . Since the understanding of the charge dynamics in the copper oxide materials is not complete without an understanding of the c-axis charge dynamics, a number of alternative mechanisms for the c-axis charge dynamics have been proposed , and the most reliable results for the c-axis charge dynamics have been obtained by numerical simulation . To shed light on this issue, we apply in this paper the fermion-spin approach to study the c-axis charge dynamics based on the $`t`$-$`J`$ model by considering the interlayer coupling. Within each CuO<sub>2</sub> plane, the essential physics is described by the 2D $`t`$-$`J`$ model,
$$H_l=-t\underset{i\widehat{\eta }\sigma }{\sum }C_{li\sigma }^{\dagger }C_{li+\widehat{\eta }\sigma }+h.c.+\mu \underset{i\sigma }{\sum }C_{li\sigma }^{\dagger }C_{li\sigma }+J\underset{i\widehat{\eta }}{\sum }𝐒_{li}\cdot 𝐒_{li+\widehat{\eta }},$$ (1–2)
where $`\widehat{\eta }=\pm a_0\widehat{x},\pm a_0\widehat{y}`$, $`a_0`$ is the lattice constant of the square planar lattice, which is set as the unit hereafter, $`i`$ refers to planar sites within the $`l`$-th CuO<sub>2</sub> plane, $`C_{li\sigma }^{\dagger }`$ ($`C_{li\sigma }`$) are the electron creation (annihilation) operators, $`𝐒_{li}=C_{li}^{\dagger }\stackrel{}{\sigma }C_{li}/2`$ are the spin operators with $`\stackrel{}{\sigma }=(\sigma _x,\sigma _y,\sigma _z)`$ the Pauli matrices, and $`\mu `$ is the chemical potential. The Hamiltonian (1) is supplemented by the on-site local constraint, $`\sum _\sigma C_{li\sigma }^{\dagger }C_{li\sigma }\le 1`$, i.e., there are no doubly occupied sites. For discussing the c-axis charge dynamics, the hopping between CuO<sub>2</sub> planes is considered as
$$H=-t_c\underset{l\widehat{\eta }_ci\sigma }{\sum }C_{li\sigma }^{\dagger }C_{l+\widehat{\eta }_ci\sigma }+h.c.+\underset{l}{\sum }H_l,$$ (3)
where $`\widehat{\eta }_c=\pm c_0\widehat{z}`$ and $`c_0`$ is the interlayer distance, which has been determined from the experiments to satisfy $`c_0>2a_0`$. In the underdoped and optimally doped regimes, the experimental results show that the ratio $`R=\rho _c(T)/\rho _{ab}(T)`$ ranges from $`R\sim 100`$ to $`R>10^5`$; this large resistivity anisotropy reflects the fact that the c-axis mean free path is shorter than the interlayer distance, so that the carriers are tightly confined to the CuO<sub>2</sub> planes, and it is also evidence for incoherent charge dynamics in the c-axis direction. Therefore the c-axis momentum cannot be defined .
Moreover, the absence of coherent c-axis charge dynamics is a consequence of the weak interlayer hopping matrix element $`t_c`$, but also of the strong intralayer scattering, i.e., $`t_c\ll t`$, and therefore the common CuO<sub>2</sub> planes in the copper oxide materials clearly dominate most of the normal-state properties. In this case, the most relevant quantities for the study of the c-axis charge dynamics are the results for the in-plane conductivity $`\sigma _{ab}(\omega )`$ and the related single-particle spectral function $`A(k,\omega )`$. Based on the 2D $`t`$-$`J`$ model, a self-consistent mean-field theory in the underdoped and optimally doped regimes has been developed within the fermion-spin approach , which has been applied to study the photoemission, electron dispersion and electron density of states in the copper oxide materials, and the results are qualitatively consistent with experiments and numerical simulations. Moreover, the in-plane charge dynamics in the copper oxide materials has been discussed by considering the fluctuations around this mean-field solution, and the results exhibit a behavior similar to that seen in experiments and numerical simulations . In the fermion-spin theory , the constrained electron operators in the $`t`$-$`J`$ model are decomposed as,
$$C_{li\uparrow }=h_{li}^{\dagger }S_{li}^{-},\qquad C_{li\downarrow }=h_{li}^{\dagger }S_{li}^{+},$$ (4)
where the spinless fermion operator $`h_{li}`$ keeps track of the charge (holon), while the pseudospin operator $`S_{li}`$ keeps track of the spin (spinon); this decomposition naturally incorporates the physics of charge-spin separation. The main advantage of this approach is that the electron on-site local constraint can be treated exactly in analytical calculations. In this case, the low-energy behavior of the $`t`$-$`J`$ model (2) in the fermion-spin representation can be rewritten as ,
$$H=t_c\underset{l\widehat{\eta }_ci}{\sum }h_{l+\widehat{\eta }_ci}^{\dagger }h_{li}(S_{li}^{+}S_{l+\widehat{\eta }_ci}^{-}+S_{li}^{-}S_{l+\widehat{\eta }_ci}^{+})+\underset{l}{\sum }H_l,$$ (6)
$$H_l=t\underset{i\widehat{\eta }}{\sum }h_{li+\widehat{\eta }}^{\dagger }h_{li}(S_{li}^{+}S_{li+\widehat{\eta }}^{-}+S_{li}^{-}S_{li+\widehat{\eta }}^{+})-\mu \underset{i}{\sum }h_{li}^{\dagger }h_{li}+J_{eff}\underset{i\widehat{\eta }}{\sum }(𝐒_{li}\cdot 𝐒_{li+\widehat{\eta }}),$$ (7–8)
where $`J_{eff}=J[(1-\delta )^2-\varphi ^2]`$, with the holon particle-hole parameter $`\varphi =\langle h_{li}^{\dagger }h_{li+\widehat{\eta }}\rangle `$, and $`S_{li}^{+}`$ and $`S_{li}^{-}`$ are the pseudospin raising and lowering operators, respectively. These pseudospin operators obey the Pauli algebra, i.e., they behave as fermions on the same site and as bosons on different sites. It has been shown that the constrained electron operator in the $`t`$-$`J`$ model can be mapped exactly onto the fermion-spin transformation defined with an additional projection operator. However, this projection operator is cumbersome to handle in the actual calculations possible in 2D, and we have dropped it in Eq. (4). It has been shown in Ref. that such a treatment leads to errors of order $`\delta `$ in counting the number of spin states, which is negligible for small dopings $`\delta `$. In the framework of charge-spin separation, an electron is represented by the product of a holon and a spinon, so an external field can couple to only one of them.
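The Pauli algebra invoked above is easy to verify explicitly; the following small check (ours, not part of the original text) represents one site by 2×2 matrices and two sites by Kronecker products, confirming that the $`S^{\pm}`$ operators anticommute like fermions on the same site and commute like bosons on different sites:

```python
# Explicit check of the pseudospin (Pauli) algebra (our illustration).
import numpy as np

Sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # S^+ on one site
Sm = Sp.T                                 # S^-
I2 = np.eye(2)

# same site: {S^+, S^-} = 1 and (S^+)^2 = 0  (hard-core / fermion-like)
assert np.allclose(Sp @ Sm + Sm @ Sp, I2)
assert np.allclose(Sp @ Sp, 0.0)

# different sites (Kronecker products): the ordinary commutator vanishes
S1p = np.kron(Sp, I2)
S2m = np.kron(I2, Sm)
assert np.allclose(S1p @ S2m - S2m @ S1p, 0.0)
print("pseudospin operators: fermionic on-site, bosonic between sites")
```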
Ioffe and Larkin and Li et al. have shown that the physical conductivity $`\sigma (\omega )`$ is given by
$$\sigma ^{-1}(\omega )=\sigma ^{(h)-1}(\omega )+\sigma ^{(s)-1}(\omega ),$$ (9)
where $`\sigma ^{(h)}(\omega )`$ and $`\sigma ^{(s)}(\omega )`$ are the contributions to the conductivity from holons and spinons, respectively. Within the Hamiltonian (4), the c-axis current densities of holons and spinons are given by the time derivative of the polarization operator using Heisenberg's equation of motion as $`j_c^{(h)}=2\stackrel{~}{t}_ce\chi \sum _{l\widehat{\eta }_ci}\widehat{\eta }_ch_{l+\widehat{\eta }_ci}^{\dagger }h_{li}`$ and $`j_c^{(s)}=t_ce\varphi _c\sum _{l\widehat{\eta }_ci}\widehat{\eta }_c(S_{li}^{+}S_{l+\widehat{\eta }_ci}^{-}+S_{li}^{-}S_{l+\widehat{\eta }_ci}^{+})`$, respectively, where $`\stackrel{~}{t}_c=t_c\chi _c/\chi `$ is the effective interlayer holon hopping matrix element, and the mean-field spinon and holon order parameters are defined as $`\chi _c=\langle S_{li}^{+}S_{l+\widehat{\eta }_ci}^{-}\rangle `$, $`\chi =\langle S_{li}^{+}S_{li+\widehat{\eta }}^{-}\rangle `$, and $`\varphi _c=\langle h_{li}^{\dagger }h_{l+\widehat{\eta }_ci}\rangle `$. As in previous discussions , a formal calculation for the spinon part shows that there is no direct contribution to the current-current correlation from the spinons, but the strong correlation between holons and spinons is incorporated through the spinon order parameters entering the holon part of the contribution to the current-current correlation; therefore the charge dynamics in the copper oxide materials is mainly caused by the charged holons within the CuO<sub>2</sub> planes, which are strongly renormalized because of the strong interactions with fluctuations of the surrounding spinon excitations. In this case, the c-axis optical conductivity is expressed as $`\sigma _c(\omega )=-\mathrm{Im}\mathrm{\Pi }_c^{(h)}(\omega )/\omega `$, with the c-axis holon current-current correlation function $`\mathrm{\Pi }_c^{(h)}(t-t^{\prime })=\langle j_c^{(h)}(t)j_c^{(h)}(t^{\prime })\rangle `$. In the case of incoherent charge dynamics in the c-axis direction, i.e., weak interlayer hopping $`t_c\ll t`$, this c-axis holon current-current correlation function $`\mathrm{\Pi }_c^{(h)}(\omega )`$ can be evaluated in terms of the in-plane holon Green's function $`g(k,\omega )`$ ; we then obtain the c-axis optical conductivity as ,
$$\sigma _c(\omega )=\frac{1}{2}(4\stackrel{~}{t}_ce\chi c_0)^2\frac{1}{N}\underset{k}{\sum }\int _{-\infty }^{\infty }\frac{d\omega ^{\prime }}{2\pi }A_h(k,\omega ^{\prime }+\omega )A_h(k,\omega ^{\prime })\frac{n_F(\omega ^{\prime })-n_F(\omega ^{\prime }+\omega )}{\omega },$$ (10–11)
where $`n_F(\omega )`$ is the Fermi distribution function, the in-plane holon spectral function is $`A_h(k,\omega )=-2\mathrm{Im}g(k,\omega )`$, and the in-plane holon Green's function $`g(k,\omega )`$, obtained by considering the second-order correction for holons due to the antiferromagnetic fluctuations, is given in Ref. . As pointed out in Ref. , the approximation of independent electron propagation in each layer has been used in the above discussion, and is justified for $`t_c\ll t`$; therefore the c-axis conductivity is essentially determined by the properties of the in-plane spectral function.
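To make the structure of Eqs. (10)–(11) concrete, here is a toy numerical evaluation (ours, not from the paper): the true holon spectral function of Ref. is replaced by a Lorentzian of assumed width on a small 2D momentum grid, and all prefactors are dropped, so only the shape of $`\sigma _c(\omega )`$ is meaningful:

```python
# Toy evaluation of the convolution in Eqs. (10)-(11) (our sketch).
import numpy as np

T, GAMMA, t = 0.1, 0.3, 1.0                   # temperature, width (assumed)
ks = np.linspace(-np.pi, np.pi, 24, endpoint=False)
KX, KY = np.meshgrid(ks, ks)
xi = -2.0 * t * (np.cos(KX) + np.cos(KY))     # assumed holon dispersion

def A_h(w):                                   # Lorentzian spectral function
    return 2.0 * GAMMA / ((w - xi)**2 + GAMMA**2)

def n_F(w):                                   # Fermi function
    return 1.0 / (np.exp(w / T) + 1.0)

def sigma_c(w):
    wp = np.linspace(-8.0, 8.0, 801)
    dw = wp[1] - wp[0]
    s = sum(np.mean(A_h(x + w) * A_h(x)) * (n_F(x) - n_F(x + w)) for x in wp)
    return s * dw / (2.0 * np.pi * w)         # np.mean plays (1/N) sum_k

for w in (0.1, 0.5, 1.0, 2.0):
    print("omega = %.1f   sigma_c = %.4f (arb. units)" % (w, sigma_c(w)))
```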
We have performed a numerical calculation of the c-axis optical conductivity $`\sigma _c(\omega )`$, and the results at the dopings $`\delta =0.12`$ (solid line), $`\delta =0.09`$ (dashed line), and $`\delta =0.06`$ (dot-dashed line) for the parameters $`t/J=2.5`$, $`\stackrel{~}{t}_c/t=0.04`$, and $`c_0/a_0=2.5`$ at temperature $`T=0`$ are shown in Fig. 1, where the charge $`e`$ has been set as the unit. From Fig. 1, it is found that $`\sigma _c(\omega )`$ is composed of two bands separated at $`\omega \sim 0.4t`$. The higher-energy band, corresponding to the ”midinfrared band” in the in-plane optical conductivity $`\sigma _{ab}(\omega )`$ , shows a broad peak at $`\omega \sim 0.7t`$; moreover, the weight of this band is strongly doping dependent, decreasing rapidly with doping, but the peak position does not appreciably shift to higher energies, which is consistent with the experimental results . On the other hand, the transferred weight of the lower-energy band forms a sharp peak at $`\omega <0.4t`$, which can be described formally by a non-Drude formula, and our analysis indicates that this peak decays as $`1/\omega `$ at low energies, as in the case of $`\sigma _{ab}(\omega )`$ . In comparison with $`\sigma _{ab}(\omega )`$ , the present results also show that the values of $`\sigma _c(\omega )`$ are $`2`$–$`3`$ orders of magnitude smaller than those of $`\sigma _{ab}(\omega )`$ in the corresponding energy range. For a further understanding of the properties of $`\sigma _c(\omega )`$, we have also discussed the finite-temperature behavior of $`\sigma _c(\omega )`$; the numerical results at the doping $`\delta =0.12`$ for the parameters $`t/J=2.5`$, $`\stackrel{~}{t}_c/t=0.04`$, and $`c_0/a_0=2.5`$ with $`T=0.2J`$ (solid line) and $`T=0.5J`$ (dashed line) are plotted in Fig. 2. They show that $`\sigma _c(\omega )`$ is temperature dependent for $`\omega <1.2t`$ and almost temperature independent for $`\omega >1.2t`$, while the higher-energy band is severely suppressed with increasing temperature and vanishes at higher temperatures ($`T>0.4J`$). These results are also qualitatively consistent with the experimental results and numerical simulations . The quantity closely related to the c-axis conductivity is the c-axis resistivity $`\rho _c(T)`$, which can be expressed as, $`\rho _c={\displaystyle \frac{1}{lim_{\omega \rightarrow 0}\sigma _c(\omega )}}.`$ (12) This c-axis resistivity has been evaluated numerically, and the results at the dopings $`\delta =0.12`$ and $`\delta =0.06`$ for the parameters $`t/J=2.5`$, $`\stackrel{~}{t}_c/t=0.04`$, and $`c_0/a_0=2.5`$ are shown in Fig. 3(a) and Fig. 3(b), respectively. In the underdoped regime, the temperature dependence of $`\rho _c(T)`$ shows a crossover from metallic-like at high temperature ($`d\rho _c(T)/dT>0`$) to semiconducting-like at low temperature ($`d\rho _c(T)/dT<0`$), but the metallic-like temperature dependence dominates over a wide temperature range. In comparison with the in-plane resistivity $`\rho _{ab}(T)`$ , it is seen that the crossover to the semiconducting-like range in $`\rho _c(T)`$ is clearly linked with the crossover from the linear to the nonlinear temperature range in $`\rho _{ab}(T)`$; both are caused by the pseudogap observed in the normal state, but $`\rho _{ab}(T)`$ is only slightly affected by the pseudogap , while $`\rho _c(T)`$ is more sensitive to the underlying mechanism. Our results also show that these crossovers have a common origin.
Therefore there is a general trend that the copper oxide materials show a nonmetallic $`\rho _c(T)`$ in the underdoped regime at low temperatures, while in the optimally doped regime $`\rho _c(T)`$ is linear in temperature and shows metallic-like behavior at all temperatures. These results are qualitatively consistent with the experimental results and numerical simulations . It has been shown from experiments that the charge dynamics in some strongly correlated ladder materials shows similar behavior. In the above discussion, the central ingredients of the c-axis charge dynamics in the copper oxide materials are the two dimensionality of the electron state and the incoherent hopping between the CuO<sub>2</sub> planes; therefore the c-axis charge dynamics in the present fermion-spin picture is determined by the in-plane charged holon fluctuations. In this case, the c-axis scattering rate is associated with the in-plane scattering rate and can be roughly described by the imaginary part of the self-energy of the charged holons within the CuO<sub>2</sub> planes, which is consistent with the ”dynamical dephasing” theory proposed by Leggett . In the fermion-spin theory , the charge and spin degrees of freedom of the physical electron are separated as the holon and spinon, respectively. Although both holons and spinons contribute to the charge and spin dynamics, it has been shown that the scattering of spinons dominates the spin dynamics , while the results for the in-plane charge dynamics and the present c-axis charge dynamics show that the scattering of holons dominates the charge dynamics; the notion of charge-spin separation therefore naturally accounts for all the qualitative features of the normal-state properties of the copper oxide materials. To our present understanding, the main reasons why the fermion-spin theory based on charge-spin separation is successful in studying the normal-state properties of the strongly correlated copper oxide materials are the following. (1) The electron single-occupancy on-site local constraint is satisfied exactly in the analytic calculation. The anomalous normal-state properties of the copper oxide materials are caused by the strong electron correlation in these systems and can be effectively described by the $`t`$-$`J`$ model , but the strong electron correlation in the $`t`$-$`J`$ model manifests itself through the electron single-occupancy on-site local constraint; satisfying this local constraint is thus equivalent to treating the strong electron-electron interaction properly. This is why the crucial requirement in analytic discussions of the $`t`$-$`J`$ model is to treat this constraint exactly. (2) Since the local constraint is satisfied even in the mean-field approximation within the fermion-spin theory , the extra gauge degree of freedom related to the common ”flux”-phase problem occurring in the slave-particle approach does not appear here; this is confirmed by our previous discussions within the mean-field theory , where the photoemission, electron dispersion, and electron density of states in the copper oxide materials were studied, and the results are qualitatively similar to those seen in experiments and numerical simulations. (3) As mentioned above, dropping the projection operator in Eq. (4) only leads to errors of the order $`\delta `$ in counting the number of spin states within the common decoupling approximation .
This is because the constrained electron operators $`C_{i\sigma }`$ in the $`t`$-$`J`$ model can also be mapped onto the slave-fermion formalism as $`C_{i\sigma }=h_i^{\dagger }a_{i\sigma }`$ with the local constraint $`h_i^{\dagger }h_i+{\sum }_\sigma a_{i\sigma }^{\dagger }a_{i\sigma }=1`$. We can solve this constraint by rewriting the boson operators $`a_{i\sigma }`$ in terms of the CP<sup>1</sup> boson operators $`b_{i\sigma }`$ as $`a_{i\sigma }=(1-h_i^{\dagger }h_i)^{1/2}b_{i\sigma }\approx (1-h_i^{\dagger }h_i/2)b_{i\sigma }`$, supplemented by the local constraint $`{\sum }_\sigma b_{i\sigma }^{\dagger }b_{i\sigma }=1`$. Since the CP<sup>1</sup> boson operators $`b_{i\uparrow }`$ and $`b_{i\downarrow }`$ with the local constraint can be identified with the pseudospin lowering and raising operators in the fermion-spin approach , respectively, the spinon propagator in the restricted Hilbert space can be written as $`D_R(i-j,t-t^{\prime })=\langle [1-h_i^{\dagger }(t)h_i(t)/2];[1-h_i^{\dagger }(t^{\prime })h_i(t^{\prime })/2]\rangle D(i-j,t-t^{\prime })\approx [1-\delta +O(\delta ^2)]D(i-j,t-t^{\prime })`$, where $`D(i-j,t-t^{\prime })`$ is the spinon propagator within the fermion-spin approach. In this case, the extra spin degrees of freedom in the fermion-spin theory only lead to errors of the order $`\delta `$ in calculating the spinon propagator within the common decoupling approximation , which is negligible for small doping $`\delta `$. This is why the theoretical results for the spin dynamics within the fermion-spin approach are qualitatively consistent with experiments and numerical simulations. In summary, we have studied the c-axis charge dynamics of the copper oxide materials within the $`t`$-$`J`$ model by considering incoherent interlayer hopping. Our results show that the c-axis charge dynamics is mainly governed by the scattering from the in-plane charged holon fluctuations. The c-axis optical conductivity and resistivity have been discussed, and the results are qualitatively consistent with experiments and numerical simulations. Finally, we note that since the structure of the building blocks between the CuO<sub>2</sub> planes in the chain copper oxide materials is different from that in the chainless copper oxide materials, some subtle differences between the chain and chainless copper oxide materials in the c-axis charge dynamics have been found in experiments . It has been suggested that in the chainless copper oxide materials the doped holes may introduce disorder between the CuO<sub>2</sub> planes, contrary to the case of the chain copper oxide materials, where increasing doping reduces the disorder between the CuO<sub>2</sub> planes. It is possible that the disorder introduced by the doped holes residing between layers in the chainless copper oxide materials in the underdoped regime modifies the interlayer hopping matrix elements, which leads to the subtle differences between the chain and chainless copper oxide materials in the c-axis charge dynamics. These and other related issues are now under investigation. ###### Acknowledgements. The authors would like to thank Professor Ru-Shan Han and Professor H. Q. Lin for helpful discussions. This work was supported by the National Natural Science Foundation under Grant No. 19774014 and the State Education Department of China through the Foundation of Doctoral Training. The partial support from the Earmarked Grant for Research from the Research Grants Council of Hong Kong, China, is also acknowledged.
# Limits on the proton-proton reaction cross-section from helioseismology ## 1 Introduction The precisely measured frequencies of solar oscillations provide us with a powerful tool to probe the solar interior with considerable accuracy. These frequencies are primarily determined by mechanical quantities like the sound speed, the density, and the adiabatic index of the solar material. The primary inversions of the observed frequencies yield only the sound speed and density profiles inside the Sun. On the other hand, in order to infer the temperature and chemical composition profiles, additional assumptions regarding the input physics, such as opacities, equation of state and nuclear energy generation rates, are required. Gough & Kosovichev (1988) and Kosovichev (1996) have employed the equations of thermal equilibrium to express the changes in the primary variables ($`\rho ,\mathrm{\Gamma }_1`$) in terms of those in the secondary variables ($`Y,Z`$) and thus obtained equations connecting the frequency differences to variations in the abundance profiles. Shibahashi & Takata (1996), Takata & Shibahashi (1998) and Shibahashi, Hiremath & Takata (1998) adopt the equations of thermal equilibrium, standard opacities and nuclear reaction rates to deduce the temperature and hydrogen abundance profiles with the use of only the inverted sound speed profile. Antia & Chitre (1995, 1998) followed a similar approach, but they used the inverted density profile, in addition to the sound speed profile, for calculating the temperature and hydrogen abundance profiles, for a prescribed heavy element abundance ($`Z`$) profile. In general, the computed luminosity in a seismically computed solar model is not expected to be in agreement with the observed solar luminosity. By applying the observed luminosity constraint it is possible to estimate the cross-section of the proton-proton (pp) nuclear reaction. Antia & Chitre (1998) estimated this cross-section to be $`S_{11}=(4.15\pm 0.25)\times 10^{-25}`$ MeV barns. Similar values have been obtained by comparing computed solar models with helioseismic data (Degl’Innocenti, Fiorentini & Ricci 1998; Schlattl, Bonanno & Paterno 1998). The main source of error in these estimates is the uncertainty in the $`Z`$ profiles. In this work we try to find the region in the $`Z`$–$`S_{11}`$ plane that is consistent with the constraints imposed by the helioseismic data. It may even be argued that one can determine the pressure, in addition to the sound speed and density, from primary inversions using the equation of hydrostatic equilibrium. This profile can then be used as an additional constraint for determining the heavy element abundance profile. In this work we explore the possibility of determining the $`Z`$ profile in addition to the $`X`$ profile using this additional input. Alternately, we can determine the $`Z`$ profile (or opacities) instead of the $`X`$ profile (Tripathy & Christensen-Dalsgaard 1998). Roxburgh (1996) has also examined $`X`$ profiles which are obtained by suitably scaling the hydrogen abundance profiles from a standard solar model in order to generate the observed luminosity. The motivation of this study was to explore the possibility of reducing the neutrino fluxes yielded by the seismic models by allowing for variations in both the composition profiles and selected nuclear reaction rates.
## 2 The technique The sound speed and density profiles inside the Sun are inferred from the observed frequencies using a Regularized Least Squares technique (Antia 1996). The primary inversions, based on the equations of hydrostatic equilibrium along with the adiabatic oscillation equations, however, give only the mechanical variables like pressure, density and sound speed. This provides us with the ratio $`T/\mu `$, where $`\mu `$ is the mean molecular weight. In order to determine $`T`$ and $`\mu `$ separately, it becomes necessary to use the equations of thermal equilibrium, i.e., $`L_r`$ $`=`$ $`-{\displaystyle \frac{64\pi r^2\sigma T^3}{3\kappa \rho }}{\displaystyle \frac{dT}{dr}},`$ (1) $`{\displaystyle \frac{dL_r}{dr}}`$ $`=`$ $`4\pi r^2\rho ϵ,`$ (2) where $`L_r`$ is the total energy generated within a sphere of radius $`r`$, $`\sigma `$ is the Stefan-Boltzmann constant, $`\kappa `$ is the Rosseland mean opacity, $`\rho `$ is the density and $`ϵ`$ is the nuclear energy generation rate per unit mass. In addition, the equation of state needs to be adopted to relate the sound speed to chemical composition and temperature: $`c=c(T,\rho ,X,Z)`$. These three equations are sufficient to determine the three unknowns $`T,L_r,X`$, provided the $`Z`$ profile is prescribed (Antia & Chitre 1998). The resulting seismic model will not in general have the correct solar luminosity, which is an observed quantity. It turns out that we need to adjust the nuclear reaction rates slightly to obtain the correct luminosity, and we believe this boundary condition can be profitably used for constraining the nuclear reaction rates. The rate of nuclear energy generation in the Sun is mainly controlled by the cross-section of the pp nuclear reaction, which has not been measured in the laboratory. This nuclear reaction rate is thus calculated theoretically, and it is interesting to test the validity of the calculated results using helioseismic constraints. Since the computed luminosity in seismic models also depends on $`Z_c`$, the heavy element abundance in the solar core, we attempt to determine the region in the $`Z_c`$–$`S_{11}`$ plane which yields the correct solar luminosity. Using the density profile along with the equation of hydrostatic equilibrium, it should be possible to determine the pressure profile also from primary inversions. It may even be argued that with the additional constraint $`p=p(T,\rho ,X,Z)`$ it should be possible to determine the $`Z`$ profile as well. However, it is not clear whether these constraints are independent, and in section 3.2 we examine this possibility. ## 3 Results We use the observed frequencies from GONG (Global Oscillation Network Group) data for months 4–10 (Hill et al. 1996), which correspond to the period from 23 August 1995 to 30 April 1996, to calculate the sound speed and density profiles. A Regularized Least Squares (RLS) technique is adopted for the inversion. With the help of the inverted profiles for sound speed and density, along with the $`Z`$ profile from Model 5 of Richard et al. (1996), we obtain the temperature and hydrogen abundance profiles by employing the equations of thermal equilibrium. We adopt the OPAL opacities (Iglesias & Rogers 1996), the OPAL equation of state (Rogers, Swenson & Iglesias 1996) and the nuclear reaction rates of Adelberger et al. (1998) for obtaining the thermal structure.
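The logic of this construction can be summarized in a short numerical sketch. The Python fragment below integrates Eqs. (1)–(2) outward for $`T`$ and $`L_r`$, solving the equation of state for $`X`$ at each step from the inverted sound speed. Every piece of microphysics in it is a deliberately crude placeholder: a fully ionized ideal gas stands in for the OPAL equation of state, a Kramers-type power law for the OPAL opacities, a schematic scaling (with an invented normalization) for the pp energy generation rate, and analytic toy curves for the inverted profiles. It shows only the skeleton of the procedure, not the actual calculation.

```python
import numpy as np

# Skeleton of the seismic-model construction (illustrative only): integrate
# Eqs. (1)-(2) for T(r) and L_r(r), inverting c = c(T, rho, X, Z) for X(r).
kB, mH, sigSB, gam = 1.38e-16, 1.67e-24, 5.67e-5, 5.0 / 3.0   # cgs units
Z = 0.018                                   # prescribed (homogeneous) Z

def X_from_c2(c2, T):
    # invert c^2 = gam*kB*T/(mu*mH), with 1/mu = (3 + 5X - Z)/4
    X = (4.0 * c2 * mH / (gam * kB * T) - 3.0 + Z) / 5.0
    return np.clip(X, 0.0, 1.0)

def kappa(rho, T):                          # Kramers-like opacity (toy)
    return 4.0e25 * Z * rho * T**-3.5

def eps_pp(rho, T, X):                      # schematic pp rate; normalization
    return 0.5 * rho * X**2 * (T / 1.5e7)**4   # chosen only for illustration

Rsun = 6.96e10
r = np.linspace(0.05 * Rsun, 0.7 * Rsun, 4000)
rho = 150.0 * np.exp(-6.0 * r / Rsun)       # toy "inverted" density profile
c2 = (4.9e7 * np.exp(-0.5 * r / Rsun))**2   # toy "inverted" sound speed

T = np.empty_like(r); Lr = np.empty_like(r); X = np.empty_like(r)
T[0], Lr[0] = 1.5e7, 0.0                    # central boundary conditions
for i in range(len(r) - 1):
    dr = r[i + 1] - r[i]
    X[i] = X_from_c2(c2[i], T[i])
    Lr[i + 1] = Lr[i] + 4.0 * np.pi * r[i]**2 * rho[i] * eps_pp(rho[i], T[i], X[i]) * dr
    T[i + 1] = T[i] - 3.0 * kappa(rho[i], T[i]) * rho[i] * Lr[i] / (
        64.0 * np.pi * r[i]**2 * sigSB * T[i]**3) * dr
X[-1] = X_from_c2(c2[-1], T[-1])
print(f"computed L(0.7 Rsun) = {Lr[-1]:.2e} erg/s (compare with L_sun = 3.846e33)")
```

In the real calculation the placeholder functions are replaced by the OPAL tables and the Adelberger et al. rates, and the computed luminosity is then confronted with the observed value, which is the step exploited below.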
Recently, Elliot and Kosovichev (1998) have demonstrated that the inclusion of relativistic effects in the equation of state improves the agreement with helioseismic data. Since the OPAL equation of state does not include this effect, we have applied corrections as outlined by Elliot and Kosovichev (1998) to incorporate the relativistic effects. The inferred mean molecular weight profile is displayed in Fig. 1. The only difference between the present calculations and the earlier work of Antia & Chitre (1998) is in the adopted nuclear reaction rates and the application of the relativistic correction to the equation of state. ### 3.1 Cross-section for pp Reaction With the help of the inverted density, temperature and hydrogen abundance profiles, it is possible to compute the total energy generated by nuclear reactions, and this should be compared with the observed solar luminosity, $`L_{\odot }=3.846\times 10^{33}`$ ergs/sec. As emphasized by Antia & Chitre (1998), there is a ($`\sim 2\sigma `$) uncertainty of about 3% in computing the luminosity of seismic models. This arises from possible errors in the primary inversion, the solar radius, the equation of state, and the nuclear reaction rates for other reactions. The uncertainty arising from errors in the $`Z`$ profiles is much larger, and hence in this work we use seismic models with a homogeneous $`Z`$ profile, covering a wide range of $`Z`$ values. For each central value of $`Z`$ we estimate the range of the cross-section of the pp nuclear reaction which reproduces the luminosity to within 3% of the observed value. The results are shown in Fig. 2, which delineates the region in the $`Z_c`$–$`S_{11}`$ plane that is consistent with the helioseismic and luminosity constraints. It can be seen that the current best estimates for $`Z_c`$ and $`S_{11}`$ (Bahcall, Basu & Pinsonneault 1998) are only marginally consistent with the helioseismic constraints and probably need to be increased slightly. This figure also shows the limits on the values of $`Z_c`$ obtained by Fukugita & Hata (1998) as well as the range of $`S_{11}`$ inferred from various theoretical calculations so far (Bahcall & Pinsonneault 1995; Turck-Chièze & Lopes 1993). One, therefore, expects that the values of $`Z_c`$ and $`S_{11}`$ should fall within the region with vertical shading in Fig. 2. The neutrino fluxes in seismic models with the correct luminosity (for the value of $`S_{11}`$ corresponding to the central line in Fig. 2) as a function of $`Z_c`$ are shown in Fig. 3. It can be seen that the neutrino flux in the <sup>71</sup>Ga detector is never as low as the observed value, while the <sup>8</sup>B neutrino flux and the neutrino flux in <sup>37</sup>Cl are within the observed limits, although for disjoint values of $`Z_c`$. Thus, a variation of $`Z_c`$ does not yield neutrino fluxes that are simultaneously consistent with any two of the three solar neutrino experiments. Similar conclusions were reached from more general considerations by Hata, Bludman & Langacker (1994), Heeger & Robertson (1996), Bahcall (1996), Castellani et al. (1997), and Antia & Chitre (1997). ### 3.2 Determination of $`X`$ and $`Z`$ profiles It is clear that the $`Z`$ profile is the major source of uncertainty in the helioseismic constraint on the pp nuclear reaction cross-section.
We, therefore, explore the possibility of determining the $`Z`$ profile in addition to the $`T,X`$ profiles using the equations of thermal equilibrium, along with the sound speed, density and pressure profiles. This would require the determination of two of the three unknowns $`T,X,Z`$ from the two constraints obtained from primary inversions, namely, $`p=p(T,\rho ,X,Z)`$ and $`c=c(T,\rho ,X,Z)`$. We can thus write $`{\displaystyle \frac{\delta c}{c}}`$ $`=`$ $`\left({\displaystyle \frac{\partial \mathrm{ln}c}{\partial \mathrm{ln}\rho }}\right)_{T,X,Z}{\displaystyle \frac{\delta \rho }{\rho }}+\left({\displaystyle \frac{\partial \mathrm{ln}c}{\partial \mathrm{ln}T}}\right)_{\rho ,X,Z}{\displaystyle \frac{\delta T}{T}}`$ (3) $`+\left({\displaystyle \frac{\partial \mathrm{ln}c}{\partial X}}\right)_{\rho ,T,Z}\delta X+\left({\displaystyle \frac{\partial \mathrm{ln}c}{\partial Z}}\right)_{\rho ,T,X}\delta Z,`$ $`{\displaystyle \frac{\delta p}{p}}`$ $`=`$ $`\left({\displaystyle \frac{\partial \mathrm{ln}p}{\partial \mathrm{ln}\rho }}\right)_{T,X,Z}{\displaystyle \frac{\delta \rho }{\rho }}+\left({\displaystyle \frac{\partial \mathrm{ln}p}{\partial \mathrm{ln}T}}\right)_{\rho ,X,Z}{\displaystyle \frac{\delta T}{T}}`$ (4) $`+\left({\displaystyle \frac{\partial \mathrm{ln}p}{\partial X}}\right)_{\rho ,T,Z}\delta X+\left({\displaystyle \frac{\partial \mathrm{ln}p}{\partial Z}}\right)_{\rho ,T,X}\delta Z.`$ Since $`\rho `$ is known independently, we ignore the variation in $`\rho `$ and consider only $`T,X,Z`$. Now for a fully ionized nonrelativistic perfect gas, it is well known that $`2\left({\displaystyle \frac{\partial \mathrm{ln}c}{\partial \mathrm{ln}T}}\right)_{\rho ,X,Z}`$ $`=`$ $`\left({\displaystyle \frac{\partial \mathrm{ln}p}{\partial \mathrm{ln}T}}\right)_{\rho ,X,Z}=1,`$ (5) $`2\left({\displaystyle \frac{\partial \mathrm{ln}c}{\partial X}}\right)_{\rho ,T,Z}`$ $`=`$ $`\left({\displaystyle \frac{\partial \mathrm{ln}p}{\partial X}}\right)_{\rho ,T,Z}\simeq {\displaystyle \frac{5}{5X+3-Z}},`$ (6) $`2\left({\displaystyle \frac{\partial \mathrm{ln}c}{\partial Z}}\right)_{\rho ,T,X}`$ $`=`$ $`\left({\displaystyle \frac{\partial \mathrm{ln}p}{\partial Z}}\right)_{\rho ,T,X}\simeq -{\displaystyle \frac{1}{5X+3-Z}}.`$ (7) It is clearly not possible to determine any two of the three quantities $`T,X,Z`$ from $`c`$ and $`p`$, since if the $`\rho `$ variations are ignored, we always have $`2\delta c/c=\delta p/p`$, and these constraints are not independent. Thus, we need to check whether the actual equation of state used in solar model computations allows these quantities to be independent. Another basic problem in trying to determine $`Z`$ using Eqs. (3–4) is that in general we would expect $`|\delta Z|\ll |\delta X|`$, while the derivatives with respect to $`Z`$ are smaller than those with respect to $`X`$; hence we would expect the $`\delta Z`$ term to be much smaller than the $`\delta X`$ term, making it difficult to determine $`Z`$ from these equations. Thus we can only hope to use these equations to determine $`T`$ and $`X`$, while $`Z`$ can be determined from the equations of thermal equilibrium through the opacity, which depends sensitively on $`Z`$. Fig. 4 shows the ratio of the partial derivatives for $`c^2`$ and $`p`$ as a function of $`r`$ in a solar model, and it is clear that these derivatives are almost equal. The wiggles in the curve are probably due to errors in estimating these derivatives, and it is clear that the departure of the ratio from unity is comparable to these errors, particularly for the derivatives with respect to $`X`$. Thus, for the solar case these two constraints are not independent, and it is demonstrably not possible to get any additional information by using the pressure profile.
Any attempt to do so will only yield arbitrary results, magnifying the errors arising from those in the equation of state and the primary inversions. In order to estimate the extent of the error magnification we can compute the ratio $$R_{T,X}=\frac{\left(\frac{\partial \mathrm{ln}c}{\partial \mathrm{ln}T}\right)_{\rho ,X,Z}\left(\frac{\partial \mathrm{ln}p}{\partial X}\right)_{\rho ,T,Z}}{\left(\frac{\partial \mathrm{ln}c}{\partial \mathrm{ln}T}\right)_{\rho ,X,Z}\left(\frac{\partial \mathrm{ln}p}{\partial X}\right)_{\rho ,T,Z}-\left(\frac{\partial \mathrm{ln}p}{\partial \mathrm{ln}T}\right)_{\rho ,X,Z}\left(\frac{\partial \mathrm{ln}c}{\partial X}\right)_{\rho ,T,Z}},$$ (8) and similar ratios between derivatives with respect to $`(T,Z)`$ or $`(X,Z)`$. It turns out that all these quantities are greater than 200 over the entire solar model. Thus all errors will be magnified by a factor of at least 200 if we attempt to determine the $`Z`$ profile in addition to the $`T,X`$ profiles. Even if we do not impose the additional constraint arising from pressure, we can calculate the pressure profile using the OPAL equation of state from the inferred $`T,\rho ,X`$ and assumed $`Z`$ profiles. As mentioned earlier, we also apply the relativistic corrections (Elliot & Kosovichev 1998) to the equation of state. This $`p`$-profile can be compared with that inferred from primary inversions using the equation of hydrostatic equilibrium, and Fig. 5 shows the results. It is clear that even without applying the additional constraint from $`p=p(T,\rho ,X,Z)`$ the resulting profile comes out to be very close to the “independently” inferred profile, well within the $`1\sigma `$ error limits. Moreover, the inferred profile is rather insensitive to $`Z`$, and hence a change in $`Z`$ is unlikely to produce profiles that match the primary inversion exactly. It is, therefore, evident that the pressure profile does not provide an independent constraint. There are only two independent constraints (e.g., $`c,\rho `$) that can be calculated from the primary inversions, and it becomes well-nigh impossible to determine the $`Z`$ profile in addition to the $`T,X`$ profiles. ### 3.3 Computation of $`Z`$ profile We have stressed earlier that it is not feasible to determine both the $`X`$ and $`Z`$ profiles, in addition to the temperature, from the equations of thermal equilibrium and primary inversions. However, we can reverse the process and determine the $`Z`$ profile instead of the $`X`$ profile using these equations. We, therefore, prescribe an $`X`$ profile from some solar model and seek to determine the $`Z`$ profile using the equations described earlier. In this case the equation of state $`c=c(T,\rho ,X,Z)`$ is used to determine $`T`$, and then using Eqs. (1–2) we can determine $`L_r`$ and $`\kappa `$. From the opacity $`\kappa `$ we can determine the required value of $`Z`$ using the OPAL opacity tables. In this process we thus also get an estimate of the opacity variations required to make the solar model consistent with helioseismic data. This is similar to what has been done by Tripathy & Christensen-Dalsgaard (1998), except that they have used only the inverted sound speed profile, while we constrain, in addition, the density profile. The resulting $`Z`$ profiles from our calculations are shown in Fig. 6. This figure displays the results using an $`X`$ profile from a model without diffusion (Bahcall & Pinsonneault 1992) and from some models with diffusion (Bahcall et al. 1998; Richard et al. 1996). From Fig.
6 it is clear that for an $`X`$ profile from a solar model without diffusion, the required change in $`Z`$ or opacities is rather large, thus supporting other evidence for the diffusion of helium below the solar convection zone. The long-dashed line in Fig. 6 has been obtained using the $`X`$ profile inferred by Antia & Chitre (1998) with the $`Z`$ profile from Richard et al. (1996). The $`Z`$ profile is evidently reproduced, demonstrating the consistency of the calculations. It may be noted that the error limits displayed in this figure denote the statistical errors resulting from uncertainties in the observed frequencies and do not include systematic errors arising from other sources. Possible errors in the opacity tables may introduce much larger uncertainties in the inferred $`Z`$ profile, but it is difficult to estimate these errors and hence we have not included them in our analysis. The only purpose of this exercise is to estimate the extent of the opacity (or $`Z`$) modification required to obtain a solar model that is consistent with the helioseismic constraints. Of course, this does not give us an estimate of the actual error in opacity calculations, as there could be other uncertainties in solar models which have not been addressed. ## 4 Discussion and Conclusions Using the primary inversions for $`c,\rho `$, it is possible to infer the $`T,X`$ profiles in the solar interior, provided the $`Z`$ profile is known. The resulting seismic models have the correct solar luminosity only when the heavy element abundance $`Z_c`$ in the solar core and the cross-section of the pp nuclear reaction are within the shaded region shown in Fig. 2. It appears that the currently accepted values of $`Z_c`$ or $`S_{11}`$ need to be increased marginally to make them consistent with the helioseismic constraints. It is not possible to uniquely determine all three quantities $`T,X,Z`$ using the equations of thermal equilibrium along with the results from primary inversions, as there are only two independent constraints that emerge from primary inversions. Incorporation of the pressure profile as an additional input from primary inversions does not yield an independent constraint for determining $`Z`$ in addition to $`T`$ and $`X`$. However, it may be possible to determine the $`Z`$ profile using the equations of thermal equilibrium, provided the $`X`$ profile is independently prescribed. This gives an estimate of the variation in opacity required to match the helioseismic data. From these results it is clear that the $`X`$ profile for solar models without diffusion of helium is not consistent with helioseismic data, unless the opacity (or $`Z`$) is reduced by a large amount. ###### Acknowledgements. This work utilizes data obtained by the Global Oscillation Network Group (GONG) project, managed by the National Solar Observatory, a Division of the National Optical Astronomy Observatories, which is operated by AURA, Inc. under a cooperative agreement with the National Science Foundation. The data were acquired by instruments operated by the Big Bear Solar Observatory, High Altitude Observatory, Learmonth Solar Observatory, Udaipur Solar Observatory, Instituto de Astrofisico de Canarias, and Cerro Tololo Interamerican Observatory.
# On Uniquely List Colorable Graphs (The research is partially supported by the Institute for Studies in Theoretical Physics and Mathematics (IPM), Tehran, Iran.) ## 1 Introduction and preliminaries We consider simple graphs which are finite, undirected, with no loops or multiple edges. For the necessary definitions and notation we refer the reader to standard texts, such as . In this section we mention some of the definitions and results which are referred to throughout the paper. For each vertex $`v`$ in a graph $`G`$, let $`L(v)`$ denote a list of colors available for $`v`$. A list coloring from the given collection of lists is a proper coloring $`c`$ such that $`c(v)`$ is chosen from $`L(v)`$. We will refer to such a coloring as an $`L`$–coloring. The idea of list colorings of graphs is due independently to V. G. Vizing and to P. Erdős, A. L. Rubin, and H. Taylor . For a recent survey on list coloring we refer the interested reader to N. Alon . It is interesting to note that a list coloring of $`K_n`$ is nothing but a system of distinct representatives (SDR) for the collection of lists $`\{L(v)\mid v\in V(K_n)\}`$. Let $`G`$ be a graph with $`n`$ vertices and suppose that for each vertex $`v`$ in $`G`$ there exists a list of $`k`$ colors $`L(v)`$ such that there is a unique $`L`$–coloring for $`G`$; then $`G`$ is called a uniquely $`k`$–list colorable graph, or a U$`k`$LC graph for short. ###### Example . The graph $`K_4-e`$ is a uniquely $`2`$–list colorable graph. In Figure 1 a collection of lists is given, each of size two, and it can easily be checked that there is a unique coloring with these lists. ###### Remark . It is clear from the definition of uniquely $`k`$–list colorable graphs that each U$`k`$LC graph is also a U$`(k-1)`$LC graph. The following theorem of Marshall Hall, which is a corollary of the celebrated Marriage Theorem of P. Hall and gives a lower bound for the number of SDRs, is a motivation for the definition of U$`k`$LC graphs. ###### Theorem A . If $`n`$ sets $`S_1,S_2,\mathrm{},S_n`$ have an SDR and the smallest of these sets contains $`k`$ objects, then if $`k\ge n`$, there are at least $`k(k-1)\mathrm{}(k-n+1)`$ different SDRs; and if $`k<n`$, there are at least $`k!`$ different SDRs. ###### Corollary . If the sets $`S_1,S_2,\mathrm{},S_n`$ have an SDR and the smallest of these sets is of size $`k`$ ($`k>1`$), then they have at least two SDRs. Or equivalently, the complete graph $`K_n`$ is not U$`k`$LC. If in the above corollary we take any graph instead of $`K_n`$, it is natural to ask the following question. ###### Question . For which graphs does the result of the above corollary hold? We say that a graph $`G`$ has the property $`M(k)`$ ($`M`$ for Marshall Hall) if and only if it is not uniquely $`k`$–list colorable. So $`G`$ has the property $`M(k)`$ if for any collection of lists assigned to its vertices, each of size $`k`$, either there is no list coloring for $`G`$ or there exist at least two list colorings. Note that if one tries to relate the idea of uniqueness to list coloring, one reaches this definition naturally. M. Mahdian and E. S. Mahmoodian characterized uniquely $`2`$–list colorable graphs. They showed that, ###### Theorem B . A connected graph $`G`$ has the property $`M(2)`$ if and only if every block of $`G`$ is either a cycle, a complete graph, or a complete bipartite graph. It seems that characterizing U$`k`$LC graphs for arbitrary $`k`$ is not that easy; even the U$`3`$LC graphs seem to be difficult to characterize.
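Since all the objects involved are finite, unique list colorability can be tested by exhaustive search on small graphs. The short Python sketch below verifies a uniqueness certificate of the kind used in the Example above; the lists of Figure 1 itself are not reproduced in the text, so the 2-lists used here are our own choice of an assignment under which $`K_4-e`$ has exactly one list coloring, not necessarily the one in the figure.

```python
from itertools import product

# Brute-force enumeration of all list colorings of a small graph.
def list_colorings(edges, lists):
    found = []
    for c in product(*lists):
        # keep only proper colorings: endpoints of every edge differ
        if all(c[u] != c[v] for u, v in edges):
            found.append(c)
    return found

# K_4 - e on vertices 0..3, with the edge {2,3} removed.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]
lists = [[1, 2], [1, 3], [2, 3], [1, 2]]   # assumed 2-lists (our own choice)
cols = list_colorings(edges, lists)
print(cols)   # -> [(1, 3, 2, 2)]: exactly one coloring, so K_4 - e is U2LC
```

The same routine, run over all list assignments of a given size, gives a (very slow) decision procedure for the property $`M(k)`$ discussed below.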
For example, it will be shown below that, while some complete tripartite graphs have the property $`M(3)`$, the property does not hold for every complete tripartite graph. The following definition was first given in . ###### Definition . The m–number of a graph $`G`$, denoted by $`\text{m}(G)`$, is defined to be the least integer $`k`$ such that $`G`$ has the property $`M(k)`$. E. S. Mahmoodian and M. Mahdian in have obtained some results on the m–number of planar graphs and introduced some upper bounds on $`\text{m}(G)`$. It is obvious from the definition of a U$`k`$LC graph that the graph $`G`$ is U$`k`$LC if and only if $`k<\text{m}(G)`$. For example, one can easily see that the graph $`K_4-e`$ has the property $`M(3)`$, and in the above example we saw that it is U$`2`$LC, so $`m(K_4-e)=3`$. The concept of U$`k`$LC graphs also arises naturally in finding defining sets of graphs. In a given graph $`G`$, a set of vertices $`S`$ with an assignment of colors is called a defining set of a $`k`$–coloring if there exists a unique extension of the colors of $`S`$ to a $`k`$–coloring of the vertices of $`G`$. For more information on defining sets see . As is mentioned there, critical sets in latin squares are just the minimal defining sets of $`n`$–colorings of $`K_n\times K_n`$. A latin square is an $`n\times n`$ array of the numbers $`1,2,\mathrm{},n`$ such that each of these numbers occurs in each row and in each column exactly once. A critical set in an $`n\times n`$ array is a set $`S`$ of entries such that there exists a unique extension of $`S`$ to a latin square of size $`n`$ and no proper subset of $`S`$ has this property. For a survey on critical sets in latin squares see . Each set of vertices $`S\subseteq V(G)`$ with an assignment of colors induces a list of colors for each vertex in $`G-S`$. So to find out whether $`S`$ is a defining set or not, we need to know whether $`G-S`$ is uniquely list colorable with those lists. In this paper we state some results toward characterizing U$`k`$LC graphs. In Section 2 we introduce some results which are helpful in determining the m–number of some graphs. In Section 3 some theorems about complete multipartite graphs are discussed. In Section 4 we present some examples of U$`k`$LC graphs, and finally in the last section we pose some open problems. ## 2 Some general results The following lemma is very useful throughout the paper. ###### Lemma 1 . For every graph $`G`$ we have $`\text{m}(G)\le |E(\overline{G})|+2`$. ###### Proof. The proof is by induction on $`r=|E(\overline{G})|`$. In the case $`r=0`$, $`G`$ is a complete graph and by Theorem B it has the property $`M(2)`$. Assume that the statement is true for every graph $`H`$ with $`|E(\overline{H})|<r`$, and let $`G`$ be a graph whose complement has $`r`$ edges. Suppose that there are assigned some lists of colors $`L(w)`$ of size at least $`r+2`$ to the vertices of $`G`$ and $`G`$ has an $`L`$–coloring $`c`$. Let $`u`$ and $`v`$ be two nonadjacent vertices of $`G`$. To obtain another $`L`$–coloring for $`G`$, we consider two cases. If $`c(u)\ne c(v)`$, consider the graph $`G_1=G+uv`$. We have $`|E(\overline{G_1})|=r-1`$, and by the induction hypothesis $`\text{m}(G_1)\le r+1`$. So there exists another $`L`$–coloring for $`G_1`$, which is also legal for $`G`$ itself. Now if $`c(u)=c(v)`$, consider the graph $`G_2=G-\{w\mid c(w)=c(v)\}`$. If $`V(G_2)=\mathrm{\varnothing }`$, then $`G`$ is a null graph and the statement is trivial. Otherwise we have $`|E(\overline{G_2})|<r`$, and by the induction hypothesis $`\text{m}(G_2)\le r+1`$.
Assign to each vertex $`w`$ of $`G_2`$ the list $`L^{\prime }(w)=L(w)\setminus \{c(u)\}`$. Again $`c|_{V(G_2)}`$ is an $`L^{\prime }`$–coloring for $`G_2`$, and since $`|L^{\prime }(w)|\ge r+1`$ for each $`w\in V(G_2)`$, there exists another $`L^{\prime }`$–coloring for $`G_2`$, which can be extended to an $`L`$–coloring of $`G`$ different from $`c`$ by giving the color $`c(u)`$ to all the vertices of $`G`$ which are not in $`G_2`$. ∎ From the following theorem we can deduce a lower bound for the number of vertices in a U$`k`$LC graph. ###### Theorem 1 . If a graph $`G`$ has at most $`3k`$ vertices, then $`m(G)\le k+1`$. ###### Proof. The proof is by induction on $`k`$. For $`k=1`$ the statement obviously holds. Suppose that $`k\ge 2`$ and $`G`$ is a graph with at most $`3k`$ vertices; let there be lists of colors, each of size at least $`k+1`$, assigned to the vertices of $`G`$, and further suppose that there exists a list coloring $`c`$ for $`G`$ from these lists. We show that there exists another coloring for $`G`$ from these lists. If one color class has at least three vertices, we can remove that class from $`G`$ and its color from the lists of the remaining vertices, and by the induction hypothesis a new coloring exists for the remaining graph which extends to all of $`G`$. So assume that each color class has at most two vertices. By adding new edges between all vertices with different colors in $`c`$, we obtain a graph whose complement is a union of some $`K_1`$s and some $`K_2`$s. Denote the number of $`K_2`$s by $`r`$. If $`r\le k-1`$, we obtain a new coloring by the lemma above; otherwise $`r\ge k`$. Now if there exists a vertex $`v`$ whose list contains a color $`x`$ which is not used in the coloring $`c`$, then we can obtain a new coloring by changing the color of $`v`$ to $`x`$. Otherwise the union of all lists has exactly $`n-r\le 2k`$ elements. If $`u`$ and $`v`$ are two vertices such that $`c(u)=c(v)`$, then since the unused colors in the lists of $`u`$ and $`v`$ are chosen from a $`(2k-1)`$–set, $`u`$ and $`v`$ must have a common unused color. Consider a $`K_{n-r}`$ obtained by identifying all the vertices in each color class of $`c`$ to a single vertex. The list of each vertex in this $`K_{n-r}`$ is the intersection of the lists of the vertices in the corresponding color class. So each list of the vertices in $`K_{n-r}`$ has at least $`2`$ elements, and there exists a coloring for it from these lists. Hence by the property $`M(2)`$ of $`K_{n-r}`$ we obtain a new coloring on it, which gives a new coloring for $`G`$. ∎ The following two corollaries are immediate from the theorem above. The first one gives an upper bound for the m–number of a graph, and the second one introduces a lower bound for the number of vertices in a U$`k`$LC graph. ###### Corollary 1 . If a graph $`G`$ has $`n`$ vertices then $`m(G)\le \lceil n/3\rceil +1`$. ###### Corollary 2 . Every U$`k`$LC graph has at least $`3k-2`$ vertices. Corollary 2 implies that a necessary condition to have equality in Lemma 1 is $`|V(G)|\ge 3|E(\overline{G})|+1`$. In the following proposition we see that when the edges of $`\overline{G}`$ are independent this condition is also sufficient . ###### Proposition 1 . If $`F`$ is a set of $`r`$ independent edges in $`K_n`$ and $`n\ge 3r+1`$, then $`m(K_n-F)=r+2`$. ###### Proof. Suppose $`F=\{x_1y_1,\mathrm{},x_ry_r\}`$, and $`z_0,\mathrm{},z_s`$ are the vertices in $`K_n-V(F)`$. By the hypothesis $`s\ge r`$.
Assign the list $`\{0,1,\mathrm{},r\}`$ to each $`x_i`$ and to $`z_0`$; for each $`i\ge 1`$ assign the list $`\{r+1,\mathrm{},2r,i\}`$ to $`y_i`$, and the list $`\{1,\mathrm{},r,r+i\}`$ to $`z_i`$. Since the induced subgraph of $`K_n-F`$ on $`\{x_1,\mathrm{},x_r,z_0\}`$ is a complete graph, all the colors $`0,1,\mathrm{},r`$ must appear on these vertices in any coloring of $`K_n-F`$ from the assigned lists. So for each $`i\ge 1`$ the vertex $`z_i`$ must take the color $`r+i`$, and for each $`i\ge 1`$ $`y_i`$ receives the color $`i`$. Finally each $`x_i`$ must take the color $`i`$, and $`z_0`$ takes the color $`0`$. ∎ ## 3 Complete multipartite graphs It is shown in that any complete bipartite graph has the property $`M(2)`$. In the following theorem it is shown that one cannot expect a similar statement for complete tripartite graphs. ###### Theorem 2 . For each $`k\ge 2`$, there exists a complete tripartite U$`k`$LC graph. ###### Proof. Let $`A=\{a_1,\mathrm{},a_{k-1}\}`$, $`B=\{b_1,\mathrm{},b_{k-1}\}`$, and $`C=\{c_1,\mathrm{},c_{k-1}\}`$ be mutually disjoint sets. We denote the $`(k-1)`$-subsets of $`B\cup C`$ by $`\{A_1,\mathrm{},A_m\}`$, those of $`A\cup C`$ by $`\{B_1,\mathrm{},B_m\}`$, and those of $`A\cup B`$ by $`\{C_1,\mathrm{},C_m\}`$, where $`m=\left(\genfrac{}{}{0pt}{}{2k-2}{k-1}\right)`$. Now consider a complete tripartite graph $`K_{m(k-1),m(k-1),m(k-1)}`$ with the following lists of colors on the vertices in the three parts, respectively: $`A_i\cup \{a_j\}`$, $`B_i\cup \{b_j\}`$, and $`C_i\cup \{c_j\}`$, where $`i=1,2,\mathrm{},m`$ and $`j=1,2,\mathrm{},k-1`$. We show that there is a unique coloring for this graph from the assigned lists. First note that the union of all lists is $`A\cup B\cup C`$, which has $`3(k-1)`$ elements. We show that in any coloring of this graph there are at least $`k-1`$ colors present on the vertices of each part. To show this, suppose to the contrary that there exists a coloring in which one part uses fewer than $`k-1`$ colors. Without loss of generality let $`L`$ be the set of colors used to color the first part, with $`|L|<k-1`$. Then $`(B\cup C)\setminus L`$ has at least $`k`$ elements and $`A\setminus L`$ has at least one element. Now consider a set $`L^{\prime }`$ which contains $`k-1`$ elements from the set $`(B\cup C)\setminus L`$ and an element from $`A\setminus L`$. Then $`L\cap L^{\prime }=\mathrm{\varnothing }`$. But there is a vertex in the first part whose list is $`L^{\prime }`$, a contradiction. So each part has at least $`k-1`$ colors, and since we have $`3(k-1)`$ colors altogether, in any coloring each part has exactly $`k-1`$ colors. It can be easily verified that the colors of the three parts must then be $`A`$, $`B`$, and $`C`$, respectively. Therefore there is a unique coloring for $`K_{m(k-1),m(k-1),m(k-1)}`$ from the assigned lists. ∎ The following theorem and the propositions which follow are preparations for the proof of our main theorem of this section, Theorem 4, which is a characterization of uniquely $`3`$–list colorable complete multipartite graphs except for finitely many of them. The proof of the following useful lemma is immediate. ###### Lemma 2 . If $`L`$ is a $`k`$–list assignment to the vertices of the graph $`G`$, and $`G`$ has a unique $`L`$–coloring, then $`|\bigcup _vL(v)|\ge k+1`$ and all these colors are used in the (unique) $`L`$–coloring of $`G`$. ###### Theorem 3 . If $`G`$ is a complete multipartite graph which has an induced U$`k`$LC subgraph, then $`G`$ is U$`k`$LC. ###### Proof. Let $`H`$ be an induced subgraph of $`G`$ which is U$`k`$LC. Assume that $`L`$ is a $`k`$–list assignment to the vertices in $`H`$ by which $`H`$ has a unique list coloring.
For the vertices in $`G`$ we introduce lists of colors, each of size $`k`$, such that $`G`$ is uniquely colorable from these lists. Assign the list $`L(v)`$ to each vertex $`v`$ in $`H`$. For each part of $`G`$ that contains some vertices of $`H`$, consider a vertex $`v`$ of $`H`$ in that part and assign the list $`L(v)`$ to all vertices in $`G-V(H)`$ in that part. In any part of $`G`$ which does not contain any vertex of $`H`$, we assign a list $`A\cup \{i\}`$, where $`A`$ is a set of $`k-1`$ colors from the $`L`$–coloring of $`H`$ and $`i`$ is a new color. ∎ We use the notation $`K_{s*r}`$ for a complete $`r`$–partite graph in which each part is of size $`s`$. Notations such as $`K_{s*r,t}`$, etc., are used similarly. ###### Proposition 2 . The graphs $`K_{3,3,3}`$, $`K_{2,4,4}`$, $`K_{2,3,5}`$, $`K_{2,2,9}`$, $`K_{1,2,2,2}`$, $`K_{1,1,2,3}`$, $`K_{1,1,1,2,2}`$, $`K_{1*4,6}`$, $`K_{1*5,5}`$, and $`K_{1*6,4}`$ are U$`3`$LC. ###### Proof. First we show the truth of the statement for $`K_{1,1,2,3}`$ and $`K_{1*4,6}`$. For $`K_{1,1,2,3}`$, let $`\{a\}`$, $`\{b\}`$, $`\{c,d\}`$, and $`\{e,g,f\}`$ be the parts of $`K_{1,1,2,3}`$. We assign the following lists to the vertices of this graph: $`L(a)=L(c)=L(f)=\{1,3,4\}`$, $`L(b)=L(d)=L(g)=\{2,3,4\}`$, and $`L(e)=\{1,2,4\}`$. A unique coloring exists from the assigned lists: the vertices $`b`$, $`d`$ and $`g`$ form a triangle and all of them have the list $`\{2,3,4\}`$, so the colors $`2`$, $`3`$, and $`4`$ all occur on these vertices. The vertex $`a`$ is adjacent to these three vertices, so it is forced to take the color $`1`$. Now the colors $`3`$ and $`4`$ must both occur on $`c`$ and $`f`$, so $`b`$ must take the color $`2`$. Finally $`e`$ is forced to take the color $`4`$, $`c`$ and $`d`$ must take $`3`$, and the two remaining vertices $`f`$ and $`g`$ must take $`4`$. For $`K_{1*4,6}`$, assign the lists $`\{1,5,6\}`$, $`\{2,5,6\}`$, $`\{3,5,6\}`$, and $`\{4,5,6\}`$ to the vertices in the parts which have one vertex each, and the lists $`\{1,2,5\}`$, $`\{1,3,5\}`$, $`\{1,4,6\}`$, $`\{2,3,6\}`$, $`\{2,4,6\}`$, $`\{3,4,5\}`$ to the vertices in the last part. In any coloring we need all six colors, because the last part needs at least two colors. Now none of the colors $`1`$, $`2`$, $`3`$, and $`4`$ can appear on the last part, because in that case we would need more than two colors on the last part, a contradiction.
For each of the other eight graphs one can check by a similar argument that it has a unique coloring from the lists given below (in each list the underlined color is the one taken in the unique coloring):
$`K_{3,3,3}`$: $`\{\{\underline{1}34,\underline{1}35,\underline{2}45\},\{12\underline{3},1\underline{4}5,\underline{3}56\},\{13\underline{6},14\underline{5},23\underline{5}\}\}`$
$`K_{2,4,4}`$: $`\{\{\underline{1}35,\underline{2}46\},\{1\underline{3}5,2\underline{4}6,\underline{3}56,\underline{4}56\},\{12\underline{5},34\underline{5},14\underline{6},23\underline{6}\}\}`$
$`K_{2,3,5}`$: $`\{\{\underline{1}46,\underline{2}35\},\{1\underline{3}6,2\underline{3}5,\underline{4}56\},\{12\underline{5},34\underline{5},13\underline{6},23\underline{6},24\underline{6}\}\}`$
$`K_{2,2,9}`$: $`\{\{\underline{1}56,\underline{2}34\},\{1\underline{3}5,1\underline{4}6\},\{12\underline{5},13\underline{5},14\underline{5},12\underline{6},13\underline{6},14\underline{6},24\underline{5},34\underline{5},23\underline{6}\}\}`$
$`K_{1,2,2,2}`$: $`\{\{\underline{1}23\},\{1\underline{2}3,\underline{2}45\},\{12\underline{3},\underline{3}45\},\{12\underline{4},12\underline{5}\}\}`$
$`K_{1,1,1,2,2}`$: $`\{\{\underline{1}45\},\{\underline{2}45\},\{\underline{3}45\},\{12\underline{4},3\underline{4}5\},\{12\underline{5},34\underline{5}\}\}`$
$`K_{1*5,5}`$: $`\{\{\underline{1}67\},\{\underline{2}67\},\{\underline{3}67\},\{\underline{4}67\},\{\underline{5}67\},\{12\underline{6},34\underline{6},15\underline{6},25\underline{7},34\underline{7}\}\}`$
$`K_{1*6,4}`$: $`\{\{\underline{1}78\},\{\underline{2}78\},\{\underline{3}78\},\{\underline{4}78\},\{\underline{5}78\},\{\underline{6}78\},\{12\underline{7},34\underline{7},12\underline{8},56\underline{8}\}\}`$
∎ ###### Proposition 3 . $`m(K_{2,2,3})=m(K_{2,3,3})=3`$. ###### Proof. By Theorem B the graph $`K_{2,2,3}`$ is a U$`2`$LC graph, so $`m(K_{2,2,3})\ge 3`$. We show that $`m(K_{2,2,3})=3`$.
Suppose that there are assigned color lists, each of size at least $`3`$, to the vertices of $`K_{2,2,3}`$, and $`c`$ is a coloring from those lists. If all vertices in a part of $`K_{2,2,3}`$ have the same color in $`c`$, we can remove that color from the lists of the other two parts, and by the property $`M(2)`$ of complete bipartite graphs we obtain a different coloring on those parts which is extendible to $`K_{2,2,3}`$. So suppose that at least two colors appear on each part. Add new edges between those nonadjacent vertices that take different colors in $`c`$; the resulting graph is a $`K_7`$ or $`K_7-e`$, both of which have the property $`M(3)`$. So we obtain another coloring which is a legal coloring for $`K_{2,2,3}`$. The second graph is checked by a computer program, and it has the property $`M(3)`$, so by Theorem B its m–number is equal to 3. ∎ ###### Proposition 4 . Every complete tripartite graph $`K_{1,s,t}`$ has the property $`M(3)`$. Thus if $`\mathrm{max}\{s,t\}\ge 2`$, then $`m(K_{1,s,t})=3`$. ###### Proof. The proof is immediate by a technique similar to the one used in Proposition 3. ∎ ###### Proposition 5 . For each $`s\ge 2`$, $`m(K_{1,1,1,s})=3`$. ###### Proof. Suppose for each $`v\in V(K_{1,1,1,s})`$ there is assigned a color list $`L(v)`$ of size $`3`$, and $`K_{1,1,1,s}`$ has an $`L`$–coloring $`c`$. If one of the vertices of $`K_{1,1,1,s}`$ has a color in its list which is not used in $`c`$, we obtain a new $`L`$–coloring for $`K_{1,1,1,s}`$ by simply putting that unused color on that vertex. So suppose that each color in $`\bigcup _vL(v)`$ is used in the coloring. Call the vertices in the first three parts $`x`$, $`y`$, and $`z`$, and the vertices in the last part $`w_1,\mathrm{},w_s`$. Suppose that the colors of $`x`$, $`y`$, and $`z`$ in the coloring are $`1`$, $`2`$, and $`3`$, respectively. So for each $`i`$, $`L(w_i)`$ contains $`c(w_i)`$ and two colors from $`1`$, $`2`$, and $`3`$. If two of the vertices $`x,y,`$ and $`z`$, say $`x`$ and $`y`$, have some colors of the last part in their lists, $`c(w_p)\in L(x)`$ and $`c(w_q)\in L(y)`$ where $`c(w_p)\ne c(w_q)`$, then we obtain a new coloring $`c^{\prime }`$ for $`K_{1,1,1,s}`$ by putting $`c(w_p)`$ on $`x`$, $`c(w_q)`$ on $`y`$, and $`c(z)`$ on $`z`$; since for each $`i=1,2,\mathrm{},s`$ there exists $`c^{\prime }(w_i)\in L(w_i)\cap \{1,2\}`$, we change each $`c(w_i)`$ to this $`c^{\prime }(w_i)`$. Otherwise, either there is at most one color of the last part in $`L(x)\cup L(y)\cup L(z)`$, or there is one of $`x`$, $`y`$, and $`z`$, say $`x`$, whose list contains two colors of the last part, while the other two have no color of the last part in their lists. In the former case we can obtain a new coloring for the triangle induced on $`x`$, $`y`$, and $`z`$ from the lists $`L(v)\cap \{1,2,3\}`$ on each $`v\in \{x,y,z\}`$, by the property $`M(2)`$ of $`K_3`$. In the latter case a new coloring can be obtained by interchanging the colors of $`y`$ and $`z`$. We have shown that $`K_{1,1,1,s}`$ has the property $`M(3)`$, and so $`m(K_{1,1,1,s})\le 3`$. On the other hand, it has an induced $`K_{1,1,2}`$ subgraph which is a U$`2`$LC graph, and so we have $`m(K_{1,1,1,s})>2`$. ∎ ###### Proposition 6 . For every $`r\ge 2`$, we have $`m(K_{1*r,3})=3`$. ###### Proof. Suppose there are some lists of colors, each of size $`3`$, assigned to the vertices of $`K_{1*r,3}`$, which admit a coloring. We consider two cases and in each case obtain a new coloring for $`K_{1*r,3}`$ from these lists. First consider the case that all vertices in the last part take the same color in the given coloring.
By removing this color from the lists of the other vertices, they have a new coloring because complete graphs have the property $`M(2)`$. So suppose at least two colors appear on the vertices of the last part. Add new edges between those vertices in the last part that have different colors. The resulting graph is either a complete graph or a complete graph with an edge removed, and we know that both of these graphs have the property $`M(3)`$. So a new coloring can be obtained from the lists for the new graph. This coloring is also valid for $`K_{1*r,3}`$. ∎ Now we state our main theorem of this section. ###### Theorem 4 . Let $`G`$ be a complete multipartite graph that is not $`K_{2,2,r}`$ for $`r=4,5,\mathrm{},8`$, $`K_{2,3,4}`$, $`K_{1*4,4}`$, $`K_{1*4,5}`$, or $`K_{1*5,4}`$; then $`G`$ is U$`3`$LC if and only if it has one of the graphs in Proposition 2 as an induced subgraph. ###### Proof. If $`G`$ has one of the graphs of Proposition 2 as an induced subgraph, then it is U$`3`$LC by Theorem 3. So we prove the other direction of the statement. Assume that $`G`$ is not one of the graphs mentioned in the statement and that it does not have any of the graphs of Proposition 2 as an induced subgraph. We show that it is not U$`3`$LC. There are two cases to be considered. (i) $`G=K_{1*r,s}`$, for some $`r`$ and $`s`$. If $`r\le 3`$ or $`s\le 3`$, then by Proposition 5 and Proposition 6 it has the property $`M(3)`$. So assume $`r\ge 4`$ and $`s\ge 4`$. Since $`G`$ does not contain a $`K_{1*4,6}`$, we must have $`4\le s\le 5`$. If $`s=5`$ we have $`r=4`$, which is exempted. If $`s=4`$ we have $`r=4`$ or 5, which are also exempted. (ii) $`G`$ has at least two parts whose sizes are greater than 1. Since it does not contain a $`K_{1,1,1,2,2}`$, it is either 4–partite, tripartite, or bipartite. If $`G`$ is bipartite, it is not U$`3`$LC, by Theorem B. If $`G`$ is 4–partite, since it does not contain a $`K_{1,2,2,2}`$ or a $`K_{1,1,2,3}`$, it must be $`K_{1,1,2,2}`$, which is not U$`3`$LC by Theorem 1. So assume that $`G=K_{r,s,t}`$ for some $`t\le s\le r`$. Since it does not contain a $`K_{3,3,3}`$ we have $`t\le 2`$. If $`t=1`$ then it is not U$`3`$LC by Proposition 4. If $`t=2`$, since it does not contain a $`K_{2,4,4}`$ we must have $`s\le 3`$. If $`s=2`$ then $`G`$ must be a $`K_{2,2,r}`$ with $`r\le 8`$. But now, if $`r\le 3`$ it is not U$`3`$LC by Proposition 3, and the cases $`4\le r\le 8`$ are exempted. If $`s=3`$ then $`G=K_{2,3,r}`$ where $`r\le 4`$. Then if $`r\le 3`$ it is not U$`3`$LC, by Proposition 3, and for $`r=4`$ it is exempted. ∎ ## 4 Some examples of U$`k`$LC graphs In this section we introduce some examples of U$`k`$LC graphs. ###### Example 1 . The graph $`K_{1*k,2*(k-1)}`$ has m–number equal to $`k+1`$. ###### Proof. This is the example given in as a U$`k`$LC graph. It is a special case of the graphs discussed in Proposition 1. ∎ ###### Example 2 . The graph $`K_{1,2*(k-1),k-1}`$ has m–number $`k+1`$. ###### Proof. From each of the first $`k`$ parts choose a vertex and assign to it the list $`\{1,\mathrm{},k\}`$. To the other vertex in the $`i`$–th part ($`2\le i\le k`$) assign the list $`\{k+1,\mathrm{},2k-1,i\}`$. Finally, in the last part assign the list $`\{1,\mathrm{},k-1,k+j\}`$ to the $`j`$–th vertex of that part ($`1\le j\le k-1`$). Since this graph has a subgraph $`K_k`$ which has the list $`\{1,\mathrm{},k\}`$ on each of its vertices, by an argument similar to that in the proof of Proposition 2, a unique coloring of $`K_{1,2*(k-1),k-1}`$ from these lists can be obtained. ∎ ###### Example 3 .
The complete $`(k+1)`$–partite graph $`K_{1,1,2,\dots ,k}`$ is U$`k`$LC. ###### Proof. We use the colors from the set $`A=\{1,2,\dots ,k+1\}`$. Assign the list $`A\setminus \{k\}`$ to the vertex in the first part, and in the $`(i+1)`$–th part ($`1\le i\le k`$) assign the list $`A\setminus \{k-j+2\}`$ to the $`j`$–th vertex $`(1\le j\le i)`$. Since $`\chi (K_{1,1,2,\dots ,k})=k+1`$, we need $`k+1`$ colors to color this graph, so all of the colors must be used and in each part we must have exactly one color. Hence the vertices in the $`(k+1)`$–th part must all take the color $`1`$, the vertices in the $`k`$–th part must all take the color $`2`$, …, the single vertex in the second part must take the color $`k`$, and finally the single vertex in the first part is forced to take the color $`k+1`$. ∎ ###### Example 4. The graph $`𝒰_k`$ constructed below has m–number $`k+1`$: Let the set $`\{v_1,\dots ,v_{3k-2}\}`$ be the set of vertices in $`𝒰_k`$. The edges in $`𝒰_k`$ are the pairs $`v_iv_j`$ $`(i\ne j)`$ where: * $`1\le i,j\le k`$, * $`1\le i\le k`$ and $`k+1\le j\le 2k-1`$, * $`k+1\le i\le 2k-1`$ and $`2k\le j\le 3k-2`$, * $`1\le i\le k-1`$ and $`2k\le j\le 3k-i-1`$. ###### Proof. Assign the list $`\{1,\dots ,k\}`$ to $`v_1,\dots ,v_k`$, the list $`\{1,\dots ,k-1,i\}`$ to $`v_i`$ where $`k+1\le i\le 2k-1`$, and the list $`\{k+1,\dots ,2k-1,i-2k+1\}`$ to $`v_i`$ where $`2k\le i\le 3k-2`$. Again, since there exists a $`K_k`$ in $`𝒰_k`$ induced on $`\{v_1,\dots ,v_k\}`$, and with a similar argument as in the proof of Proposition 2, a unique coloring from these lists for $`𝒰_k`$ is obtained. ∎ ###### Example 5. The graph $`𝒯_k`$ constructed below is U$`k`$LC for each $`k\ge 2`$: $$V(G)=\{a_1,\dots ,a_{k-1},b_1,\dots ,b_k,c_1,\dots ,c_{k-1},d_1,\dots ,d_{2k-3}\},$$ and for the edges, * Make a $`K_{2k-1}`$ on the $`a_i`$s and $`b_i`$s, * Join the $`b_i`$s to the $`c_i`$s and the $`c_i`$s to the $`d_i`$s, * Join $`a_i`$ to $`d_j`$ for $`1\le i\le k-1`$ and $`i\le j\le k-1`$, * Join $`b_i`$ to $`d_j`$ for $`3\le i\le k`$ and $`k\le j\le k+i-3`$. ###### Proof. Assign some lists to the vertices in $`𝒯_k`$ as follows: $`L(a_i)=\{1,\dots ,k\}`$, $`L(b_1)=\{k,\dots ,2k-1\}`$, $`L(b_i)=\{i-1,k+1,\dots ,2k-1\}`$ for $`i>1`$, $`L(c_i)=\{k+1,\dots ,2k-1,2k+i-1\}`$, and $`L(d_i)=\{i+1,2k,\dots ,3k-2\}`$. It is easy to check that $`𝒯_k`$ has a unique coloring from these lists. ∎ ## 5 Some open problems The following problems arise naturally from the work. ###### Problem 1. Verify the property $`M(3)`$ for the graphs exempted in Theorem 4, i.e. $`K_{2,2,r}`$ for $`r=4,5,\dots ,8`$, $`K_{2,3,4}`$, $`K_{1*4,4}`$, $`K_{1*4,5}`$, and $`K_{1*5,4}`$. ###### Problem 2. Characterize all graphs with m–number $`3`$. ###### Problem 3. What is the computational complexity of the property $`M(3)`$? ## Acknowledgement We thank Bashir Sadjad, who pointed out that in Lemma 1 the edges of $`\overline{G}`$ are not necessarily supposed to be independent.
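The computer check invoked in the proof of Proposition 3 is straightforward to reproduce in principle: given a graph and a list assignment, one counts the proper list colorings by backtracking, and a list assignment witnesses unique list colorability exactly when the count is one. The following is a minimal sketch of such a checker (our own Python code, not the program used by the authors); as an illustration it verifies that the lists of Example 3 for $`k=3`$ admit a unique coloring of $`K_{1,1,2,3}`$.

```python
from itertools import product

def count_list_colorings(edges, lists, limit=2):
    """Count proper colorings c with c(v) in lists[v]; stop early at `limit`."""
    n = len(lists)
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    count = 0
    def extend(v, col):
        nonlocal count
        if v == n:
            count += 1
            return
        for c in lists[v]:
            # only check already-colored neighbors (vertices with index < v)
            if all(col[u] != c for u in adj[v] if u < v):
                col[v] = c
                extend(v + 1, col)
                col[v] = None
                if count >= limit:
                    return
    extend(0, [None] * n)
    return count

# K_{1,1,2,3} with the lists of Example 3 (k = 3, colors A = {1,2,3,4})
parts = [[0], [1], [2, 3], [4, 5, 6]]
edges = [(u, v) for P in parts for Q in parts if P is not Q
         for u in P for v in Q if u < v]
lists = [{1, 2, 4}, {1, 2, 3}, {1, 2, 3}, {1, 2, 4}, {1, 2, 3}, {1, 2, 4}, {1, 3, 4}]
print(count_list_colorings(edges, lists))   # prints 1: the coloring is unique
```

Deciding the property $`M(3)`$ itself requires ranging over all $`3`$-list assignments, which is exactly the kind of exhaustive search whose complexity Problem 3 asks about.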
# Theory of transient spectroscopy of multiple quantum well structures ## I Introduction Transient spectroscopy of quantum-well (QW) structures allows one to study the emission processes from the QWs and thus to obtain information on QW parameters, such as the energy spectrum, photoionization cross-section, tunneling escape time, etc. This technique is based on an analysis of the transient current or capacitance relaxation upon the application of a large-signal bias across the QW structure. It complements admittance spectroscopy, which studies the ac current in the QW structure upon application of a small-signal voltage. The transient spectroscopy of QWs has many similarities with deep level transient spectroscopy (DLTS) and enables a simple theory to be derived for use in the processing of experimental data. However, there is an important difference between carrier capture by deep levels and capture by QWs. In the latter case, the presence of a continuous energy spectrum for the in-plane motion in the QW allows capture by emission of a single optical phonon rather than by the multi-phonon processes typical for deep levels. As a result, the corresponding capture times are several orders of magnitude smaller than for deep levels and often do not exceed a few picoseconds. This quantitative difference results in serious qualitative consequences. The processes of carrier transport between neighboring QWs can no longer be considered infinitely fast. These processes may play a decisive role in the relaxation kinetics, noticeably changing the formulae of the simple theory, similar to the case of structures with a very high concentration of deep levels. In this work we present a theoretical description of the transient spectroscopy of QWs and discuss its possible applications. We obtain more general analytical expressions for the parameters of the transient current than those of Ref. . The analytical model is confirmed by numerical simulation, and the procedure for extracting the QW parameters from experimental data is discussed. ## II Analytical model We consider a QW structure containing $`M`$ QWs (n-doped at sheet density $`N_D`$) of width $`L_w`$ separated by undoped barriers of width $`L_b`$ large enough to prevent inter-well tunneling (see Fig. 1). This structure is typical for quantum well infrared photodetectors (QWIPs). The QW structure is provided with a heavily doped (Ohmic) collector and a blocking emitter contact (for example, containing a Schottky barrier or p-n junction), which is often used to avoid DC current and thus to simplify the interpretation of experimental data. During the first period of the transient spectroscopy experiment, a forward bias is applied to the emitter, and all QWs are filled by electrons with the equilibrium sheet density $`N_0\simeq N_D`$. In the second period, a large reverse bias $`V`$ is applied to extract electrons from the QWs, and the transient current is recorded. The problem has some similarities with the treatment presented for the kinetics of an electron packet in a system of undoped QWs . Immediately after the application of $`V`$, the electric field in the QW structure is uniform and given by $`E_0=(V+V_{bi})/L`$, where $`V_{bi}`$ is the built-in voltage between the emitter and collector, and $`L=ML_w+(M+1)L_b\approx (M+1)L_b`$ is the structure thickness. This field causes fast removal of delocalized electrons from the structure at almost fixed $`N_0`$.
This is a very fast (on a ps time scale) component of the transient current, limited by the carrier capture and transit times, which is manifested as an instantaneous current step in the case of limited time resolution of a measurement setup. We shall be interested here in the subsequent slow current relaxation caused by the QW recharging. The initial part of this relaxation (after completion of the fast transient) can be easily calculated. In the presence of external illumination, the emission rate from the $`i`$-th QW with electron density $`N_i`$ is $`GN_i=\sigma \mathrm{\Phi }N_i+\gamma \mathrm{exp}[(\epsilon _f-\epsilon _i)/kT]`$, where $`\sigma `$ is the photoionization cross-section, $`\mathrm{\Phi }`$ is the incident photon flux, $`\epsilon _i`$ is the QW ionization energy, $`\epsilon _f(N_i)`$ is the Fermi energy of the electrons in the QW, and $`\gamma `$ is the thermoionization coefficient. In our estimates and further numerical simulations we shall restrict our analytical calculations to the case of relatively low temperatures or high light intensities, giving $`G\approx \sigma \mathrm{\Phi }`$. This restriction is not compulsory, since all analytical formulae are applicable for an arbitrary relation between the optical and thermal generation. We assume that the carriers emitted from the QWs drift with a constant velocity $`v_d`$ towards the collector. While traversing a QW, carriers are captured by the QW with a probability $`p`$ ($`0<p\le 1`$). Carriers emitted from the $`k`$-th QW give the following contribution to the carrier concentration in the $`i`$-th barrier (between the $`i`$-th and $`(i+1)`$-th QWs): $$n_{ki}=\frac{GN_k}{v_d}(1-p)^{i-k},\qquad (i\ge k).$$ (1) The total concentration in the $`i`$-th barrier is $$n_i=\sum _{k=1}^{i}n_{ki}=\frac{G}{v_d}\sum _{k=1}^{i}N_k(1-p)^{i-k}.$$ (2) Since the change of a QW charge is determined by the balance between carrier capture and emission, we can, with the help of Eq. (2), obtain the system determining the kinetics of all $`N_i`$: $$\frac{dN_i}{dt}=-GN_i+n_{i-1}v_dp=-G\left[N_i-\sum _{k=1}^{i-1}N_kp(1-p)^{i-1-k}\right]$$ (3) and, hence, of the current in the external circuit $`I(t)`$, which can be expressed in terms of $`N_i(t)`$: $$I(t)=\frac{ev_d}{M+1}\sum _{i=1}^{M}n_i=\frac{eG}{M+1}\sum _{i=1}^{M}\sum _{k=1}^{i}N_k(t)(1-p)^{i-k}.$$ (4) In principle, we could obtain an analytical (though rather cumbersome) solution of the linear system of Eq. (3) for arbitrary $`t`$. However, it would not be correct. The change in $`N_i`$ causes re-distribution of the electric field and, hence, of the drift velocity in the system. This means that $`v_d`$ is no longer constant but changes from point to point in an unknown way, so that the behaviour of $`I(t)`$ remains unknown. That is why we restrict ourselves to the initial stage of the slow relaxation, when we can still assume that in the right-hand side of Eq. (3) $`N_i=N_0`$ and $`v_d=\mathrm{const}`$. This gives $`{\displaystyle \frac{dN_i}{dt}}`$ $`=`$ $`-GN_0(1-p)^{i-1};`$ (5) $`I(0)`$ $`=`$ $`{\displaystyle \frac{I_e}{p(M+1)}}\left\{1-{\displaystyle \frac{1-p}{pM}}\left[1-(1-p)^M\right]\right\};`$ (6) $`{\displaystyle \frac{dI}{dt}}(0)`$ $`=`$ $`-{\displaystyle \frac{GI_e}{p^2M(M+1)}}\left[1-(1+Mp)(1-p)^M\right]`$ (7) where $`I_e=eGN_0M`$ is the total emission rate from all QWs. Eqs.
(6),(7) give us the relaxation time constant (inverse normalized slope of the current): $$\tau =-\left(\frac{dI/dt}{I}\right)^{-1}=\frac{pM-(1-p)[1-(1-p)^M]}{G\left[1-(1+pM)(1-p)^M\right]}.$$ (8) In the most interesting case, when $`p\ll 1`$ and $`M\gg 1`$ (which corresponds to practical QWIPs), the parameters $`I_0`$ and $`\tau `$ are expressed as: $`I_0`$ $`=`$ $`I_e\times g[1-g(1-\mathrm{exp}(-1/g))],`$ (9) $`\tau `$ $`=`$ $`{\displaystyle \frac{1}{G}}\times {\displaystyle \frac{1-g[1-\mathrm{exp}(-1/g)]}{g[1-(1+1/g)\mathrm{exp}(-1/g)]}},`$ (10) where $`g=1/(pM)`$ is a transport parameter. If we characterize the QW capture processes by the capture time $`\tau _c`$ or the capture velocity $`v_c=L_p/\tau _c`$, which is related to the capture probability as $`p=1/(1+v_d/v_c)`$, then $`g=\tau _c/\tau _{tr}+1/M\approx \tau _c/\tau _{tr}`$, where $`\tau _{tr}=L/v_d`$ is the transit time. Therefore, the parameter $`g`$ corresponds to the photocurrent gain of a QWIP. It should be noted that the amplitude of the transient current $`I_0`$ is equal to the amplitude of the fast transient (primary photocurrent) in a photoexcited QWIP. In general, the time constant of the transient current $`\tau `$ is determined not only by the emission time $`1/G`$, but also by the transport parameter $`g`$, similarly to the case of DLTS for a very high concentration of deep centers. In particular, in the case $`g\ll 1`$ we have $`\tau \approx 1/(gG)\gg 1/G`$. Hence, one cannot obtain the photoionization cross-section from the transient spectroscopy experiment while ignoring the correction factor (see Eqs. (8)–(10)) dependent on $`g`$. Only in the limiting case $`g\gg 1`$ (or $`pM\ll 1`$) does the relaxation time tend to $`1/G`$, which corresponds to the simple model of Ref. . To fulfill this condition, one has to use a QW structure with small capture probability and a small number of QWs. In this case $`I_0=0.5\times I_e`$, which corresponds to the high-frequency gain value of 0.5 for extrinsic photoconductors and QWIPs with a large value of the low-frequency gain $`g`$. Note that the capture probability $`p`$ is a function of the electric field, decreasing with field, so that the simplified approach predicting $`\tau =1/G`$ can be accurate at high fields, but inaccurate at low fields. We point out that the value of the photocurrent gain $`g`$ can be determined from the transient photocurrent in a QWIP illuminated by step-like infrared radiation, where the ratio of the amplitude of the fast transient to the steady-state photocurrent is equal to $`\{1-g[1-\mathrm{exp}(-1/g)]\}`$. ## III Numerical simulation The model presented above is justified only for the initial part of the transient, since we neglected the modulation of the electric field due to the QW recharging. To check this model and to obtain a description of the transient current over a wider time interval, we also studied the transient processes using numerical simulation. A time-dependent QWIP simulator was used with a zero-current boundary condition for the reverse-biased emitter contact. We simulated the transient spectroscopy experiment of Ref. on a GaAs/Al<sub>0.25</sub>Ga<sub>0.75</sub>As QW structure with area $`S=2\times 10^{-4}`$ cm<sup>2</sup> containing 10 donor-doped QWs with $`L_w=`$60 Å and $`N_D=5\times 10^{11}`$ cm<sup>-2</sup>, undoped barriers with $`L_b=`$350 Å, a Schottky emitter contact ($`V_{bi}=0.75`$ V), and a collector GaAs contact doped with donors at 10<sup>18</sup> cm<sup>-3</sup>. The photoexcitation conditions were similar to those used in the experiment.
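Equations (6)–(10) are easy to evaluate numerically. The following minimal Python sketch (function names are ours; the parameter values anticipate the simulation discussed in the next paragraphs) compares the exact expressions with the $`p\ll 1`$, $`M\gg 1`$ limit:

```python
import numpy as np

def transient_params(p, M):
    """Amplitude I(0)/I_e and normalized time constant G*tau, exact Eqs. (6),(8)."""
    q = 1.0 - p
    I0 = (1.0 / (p * (M + 1))) * (1.0 - q / (p * M) * (1.0 - q**M))
    Gtau = (p * M - q * (1.0 - q**M)) / (1.0 - (1.0 + p * M) * q**M)
    return I0, Gtau

def transient_params_limit(g):
    """The same quantities in the p << 1, M >> 1 limit, Eqs. (9)-(10)."""
    e = np.exp(-1.0 / g)
    I0 = g * (1.0 - g * (1.0 - e))
    Gtau = (1.0 - g * (1.0 - e)) / (g * (1.0 - (1.0 + 1.0 / g) * e))
    return I0, Gtau

print(transient_params(0.04, 10))    # p = 0.04, M = 10  ->  g = 2.5 (Fig. 2 case)
print(transient_params_limit(2.5))   # I0/Ie ~ 0.44, G*tau ~ 1.14
# With the emission time 1/G = 300 microseconds quoted below, G*tau ~ 1.14
# gives tau ~ 343 microseconds, consistent with the value ~345 microseconds.
```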
Figure 2(a) shows the transient current calculated for a reverse bias of 1 V, which for the given $`V_{bi}`$ corresponds to the applied field $`E_0\approx `$40 kV/cm. The capture probability was chosen to be $`p=0.04`$, so that $`g=2.5`$. The initial part of the transient ($`t\lesssim \tau ^{\prime }`$, where $`\tau ^{\prime }\approx 43`$ $`\mu `$s is the position of the first step in Fig. 2(a)) is very well described by an exponential function with the amplitude and time constant calculated by our analytical model without any fitting parameters. The time constant obtained is $`\tau \approx 345`$ $`\mu `$s, while the emission time is somewhat smaller, $`1/G=300`$ $`\mu `$s. Starting from the time moment $`t=\tau ^{\prime }`$, the transient current decays more rapidly, and displays a series of steps and shoulders. These features are due to the redistribution of the electric field caused by the depletion of the QWs (see Fig. 2(b,c)). The $`i`$-th step occurs when the electric field in the $`(M+1-i)`$-th barrier becomes zero. When this happens, the electron density in the $`(M+1-i)`$-th QW returns to its equilibrium value $`N_0`$, and this well no longer contributes to the emission current. The electron transport in the region between this QW and the collector is purely diffusive. Using Eq. (5) and the condition of zero electric field in the $`M`$-th barrier, $`E_0=\sum _{i=1}^{M}\frac{e\mathrm{\Delta }N_i}{\epsilon \epsilon _0}\frac{i}{M+1}`$ ($`\epsilon \epsilon _0`$ is the dielectric constant), we obtain the following estimate for the time constant $`\tau ^{\prime }`$: $$\tau ^{\prime }\approx \frac{\epsilon \epsilon _0E_0}{eGN_0M}\frac{1}{g^2[1-(1+1/g)\mathrm{exp}(-1/g)]}.$$ (11) For the case of Fig. 2 this estimate gives $`\tau ^{\prime }\approx 39`$ $`\mu `$s, which is in good agreement with the results of the numerical calculations ($`\tau ^{\prime }\approx 43`$ $`\mu `$s). Since $`\tau ^{\prime }\ll \tau `$ (unless $`g`$ is very small), only a small initial part of the transient process is described by the exponential function with time constant $`\tau `$ and amplitude $`I_0`$. Thus, the fitting of experimental data by an exponential function to extract the time constant $`\tau `$ should be done over the interval $`0\le t<\tau ^{\prime }`$. Fitting over longer time intervals can result in a significant error in estimating $`\tau `$ (the dashed line in Fig. 2(a) is an “intuitive” exponential fit with the time constant $`\tau =130`$ $`\mu `$s). It should be stressed that the measurement circuit should have an $`RC`$ time constant ($`R`$ is the load resistance and $`C`$ is the QW structure high-frequency capacitance) much smaller than the time constant $`\tau ^{\prime }`$ for a correct evaluation of $`\tau `$. To check the influence of the QW capture velocity $`v_c`$ on the time constant $`\tau `$, we simulated the transient response for different values of $`v_c`$. The values of the amplitude and time constant of the initial part of the transient current extracted from the numerical simulation are shown in Fig. 3. The good agreement between these results and formulas (6),(8) proves the validity of the analytical model. So far we have assumed that the photoionization cross-section (or emission rate) is independent of the local electric field. While this is a good approximation for bound-to-continuum transitions, the photoionization cross-section for bound-to-bound and bound-to-quasi-bound transitions depends strongly on electric field. To investigate this effect, we compared the results of simulation for the cases of field-dependent and field-independent cross-section (see Fig. 4).
We used the model of the field-dependent cross-section proposed in Ref. : $$\sigma (E)=\sigma _0\times 0.5\text{ erfc}[(\epsilon _2-eEL_w/2)/(\sqrt{2}\mathrm{\Delta }\epsilon )],$$ (12) where $`\sigma _0`$ is the optical cross-section, erfc($`x`$) is the complementary error function, $`\epsilon _2`$ is the ionization energy of the second level, and $`\mathrm{\Delta }\epsilon `$ is the variance of $`\epsilon _2`$ due to fluctuations. The values $`\epsilon _2=4`$ meV and $`\mathrm{\Delta }\epsilon =3.5`$ meV, also taken from Ref. , were used. It is seen from Fig. 4 that the field dependence of the cross-section washes out the steps on the $`I(t)`$ curve. Moreover, the initial part of the curve, $`0<t<\tau ^{\prime }`$, deviates significantly from the exponential function of the analytical model. Physically, this is caused by the decrease of the photoemission current from the near-collector QWs due to the electric field redistribution. This effect makes the procedure of extracting the time constant $`\tau `$ from the slope of the transient current more complicated. However, the transient current amplitude $`I_0`$ is not affected by the field redistribution. Since the amplitude is directly related to the photoemission current ($`I_0=0.5I_e`$) in the case $`g\gg 1`$, we propose to use the amplitude of the transient current rather than its slope to extract the photoionization cross-section from experimental data. ## IV Conclusions A theory of the transient spectroscopy of QW structures is presented. Analytical expressions for the initial stage of the relaxation current are derived. It is shown that the time constant of the transient current is a function of both the photoionization cross-section and the transport parameter $`g`$, becoming $`g`$-independent at $`g\gg 1`$. Numerical simulation is used to check the validity of the analytical model and to study the transient current in more detail. The procedure of extracting the QW emission rate from the experimental data is discussed. This work is supported in part by the NSF under Grant No. ECS-9809746. H. R. and A. S. also gratefully acknowledge the support of NSERC.
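Equation (12) is simple enough to tabulate directly. The sketch below (ours; the unit conversion and helper name are assumptions, the parameter values are those quoted above) shows how strongly $`\sigma (E)/\sigma _0`$ varies between zero field and the $`\approx `$40 kV/cm of Fig. 2:

```python
import numpy as np
from scipy.special import erfc

def sigma_ratio(E_V_per_cm, L_w_angstrom=60.0, eps2_meV=4.0, deps_meV=3.5):
    """Field-dependent cross-section sigma(E)/sigma_0 from Eq. (12)."""
    # e*E*L_w/2 expressed in meV, with E in V/cm and L_w in Angstrom
    eEL_meV = E_V_per_cm * (L_w_angstrom * 1e-8) / 2.0 * 1e3
    return 0.5 * erfc((eps2_meV - eEL_meV) / (np.sqrt(2.0) * deps_meV))

for E in [0.0, 1e4, 4e4]:       # fields in V/cm; 4e4 V/cm = 40 kV/cm
    print(E, sigma_ratio(E))    # rises from ~0.13 at E = 0 to ~0.99 at 40 kV/cm
```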
# Noise-induced breakdown of coherent collective motion in swarms \[ ## Abstract We consider swarms formed by populations of self-propelled particles with attractive long-range interactions. These swarms represent multistable dynamical systems and can be found either in coherent traveling states or in an incoherent oscillatory state where translational motion of the entire swarm is absent. Under increasing noise intensity, the coherent traveling state of the swarms is destroyed and an abrupt transition to the oscillatory state takes place. \] In the last two decades considerable interest has been focused on problems where individual interacting particles, which constitute a system, are capable of active motion and form collectively traveling populations. Self-propulsion of particles is already possible in simple physical systems (see e.g. ) and is widely found in biology, where individual animals may group themselves into swarms, fish schools, bird flocks or traveling cell populations. The role of individual self-propelled “particles” can also be played by localized patterns (spots) in reaction-diffusion systems. A bifurcation leading to the onset of translational motion of spots has been studied in an activator-inhibitor system with global feedback and in three-component reaction-diffusion systems. Interactions between individual self-propelled spots have been determined from the underlying reaction-diffusion equations and used to describe the formation of bound states of such “particles”. Mathematical modeling of collective active motion follows several different directions. One approach is based on the notion of discrete stochastic automata. Another approach is formulated in terms of continuous velocity and density fields and essentially treats a swarm as an active fluid (such hydrodynamical equations may be derived by averaging from the respective automata models). A similar hydrodynamic approach is also used in the theory of traffic flows. Alternatively, one can specify dynamical equations of motion for all individual particles that explicitly include interactions between them and/or the action of external fields. An interesting problem related to the statistical mechanics of large populations of self-propelled particles is the spontaneous development of coherent collective motion in such systems. This problem has recently been discussed in the framework of continuous hydrodynamical and discrete automata models, and the properties of the respective kinetic phase transition were numerically and analytically investigated. Both in one- and two-dimensional systems, first- and second-order transitions have been found. In the present paper we consider a population of identical self-propelled particles near a transition between disordered oscillating motion and coherent translational motion. The particles interact via an isotropic attractive binary potential and are subject to the action of noise. This globally coupled population forms a cloud (the swarm) in the considered one-dimensional space. The swarm can be found in different states. Coherent compact traveling states are characterized by a narrow distribution of velocities around a certain mean drift velocity, directed either to the left or to the right. Another possible state of this population corresponds to the absence of coherent translational motion, with noisy oscillations around a certain mean position in space, determined by the initial conditions.
The coherent traveling states exist only for sufficiently weak noise and, as the noise intensity increases, the swarm undergoes a transition to the incoherent oscillatory state. We find that the breakdown of coherent collective motion in this system is abrupt and characterized by a strong hysteresis. Thus, the globally coupled swarm represents a multistable system that may be found in different states depending on the initial conditions. This behavior, revealed by numerical simulations, is well reproduced by an approximate analytical theory and may represent a typical property of swarms with long-range interactions. To formulate the model, we note that if a system is close to the onset of active motion and this instability is soft, i.e. characterized by a supercritical bifurcation, the motion with small velocity $`V`$ can generally be described by the equation $$\dot{V}=\alpha V-\beta V^3,$$ (1) with real coefficients $`\alpha `$ and $`\beta >0`$. This equation may be viewed as a normal form of the supercritical bifurcation leading to translational motion. Such bifurcations are possible in simple physico-chemical systems. They are also known for localized spot patterns in reaction-diffusion models and correspond to the onset of their translational motion. According to Eq. (1), the velocity $`V`$ is zero below the bifurcation point (i.e. for $`\alpha <0`$). Above this point, active motion with $`V=\pm \sqrt{\alpha /\beta }`$ is asymptotically established. The direction of this motion for an individual particle remains arbitrary and is determined by initial conditions. Rescaling time and introducing the new velocity variable $`u=V\sqrt{\beta /\alpha }`$, Eq. (1) can be written as $$\dot{u}=u-u^3.$$ (2) When a population of identical self-moving particles is considered, the velocity $`u_i=\dot{x}_i`$ of each particle $`i`$ will satisfy this dynamical equation. Interactions between individuals may generally depend on both their relative positions and velocities. In this paper we assume that the interactions are pairwise and described by forces $`f(x_i-x_j)`$ that depend only on the difference of coordinates of two particles $`i`$ and $`j`$. We shall further assume that the interactions are attractive and depend linearly on the distance between the particles, i.e. $`f(x_i-x_j)\propto -(x_i-x_j)`$. These attractive forces are supposed to model the interaction within the size ranges of the dynamical states considered below, where the population forms clouds of either oscillating or translational motion. The interaction could be extended to larger distances in order to represent, for instance, vanishing forces at infinity. Additionally, the system may include noise, which will be modelled by independent random forces $`\xi _i(t)`$ acting on individual particles. Noise prevents the collapse of the population, so that short-range repulsion can here be ignored. Under these conditions, the dynamical equations for a set of $`N`$ identical self-moving particles with coordinates $`x_i(t)`$ are $$\ddot{x}_i+(\dot{x}_i^2-1)\dot{x}_i+\frac{a}{N}\sum _{j=1}^{N}\left(x_i-x_j\right)=\xi _i(t),$$ (3) for $`i=1,\dots ,N`$. The coefficient $`a`$ characterizes the intensity of interactions and can be viewed as the parameter specifying the strength of coupling in the population. Equations (3) constitute the basic model investigated in this paper. We shall assume that the $`\xi _i(t)`$ are independent white noises of intensity $`S`$, so that $`\langle \xi _i(t)\xi _j(t^{\prime })\rangle =2S\delta _{ij}\delta (t-t^{\prime })`$. Note that Eqs.
(3) are invariant with respect to an arbitrary translation in the coordinate space. The model (3) can behave as a system of globally coupled limit-cycle oscillators (cf. ). Introducing the average coordinate $`\overline{x}(t)`$ of the swarm, $$\overline{x}(t)=\frac{1}{N}\sum _{j=1}^{N}x_j(t),$$ (4) Eqs. (3) in the absence of noise read $$\ddot{x}_i+(\dot{x}_i^2-1)\dot{x}_i+a\left(x_i-\overline{x}\right)=0,\qquad (i=1,\dots ,N).$$ (5) Thus, if the swarm does not move as a whole, i.e. $`\overline{x}(t)=\text{constant}`$, the particles perform persistent oscillations. In this state the phases of individual oscillations are random. Note that the spatial location $`\overline{x}`$ of an oscillating swarm is arbitrary. In addition to the random oscillatory state, the system (5) has two coherent collapsed states where the coordinates of all particles are identical, i.e. $`x_i=\overline{x}`$ for any $`i`$. These states correspond to uniform translational motion of the entire swarm with the velocity $`u=\pm 1`$. A simple analysis shows that the oscillatory state and both coherent traveling states are linearly stable for any positive parameter $`a`$. The final state of the population is determined by the initial conditions. Our numerical simulations show that, if the average velocity $`\overline{u}=N^{-1}\sum _iu_i`$ is initially close to zero, the oscillatory standing state is asymptotically reached. If, however, this initial average velocity is large enough, one of the two coherent traveling states will be approached. Since the particles either converge to coherent motion with constant velocity or to disordered oscillations with no average drift, the ensemble can be thought of as a multistable system with qualitatively different attractors. In the following, we focus the attention on how these attractors respond to the effect of noise. With this aim, we study Eq. (3) numerically. Integration is performed by means of a standard Euler scheme with a time step $`\mathrm{\Delta }t=10^{-3}`$ to $`10^{-2}`$. Most calculations correspond to ensembles of $`100`$ particles, with the coupling intensity ranging from $`a=1`$ to $`100`$. Larger values of $`a`$ require smaller values of $`\mathrm{\Delta }t`$. Noise is introduced by generating, at each time step, a random number $`\xi `$ with uniform distribution in the interval $`(-\xi _0,\xi _0)`$. This choice corresponds to having $`S=\xi _0^2/6\mathrm{\Delta }t`$. In practice, $`\xi _0`$ is calculated for each given value of $`S`$. Initial conditions are selected at random, distributing the particles around $`x=0`$ and $`u=0`$ or $`1`$ with a dispersion of the order of $`0.5`$ in both variables. From each initial condition the system is left to evolve in the absence of noise until it reaches the state of disordered oscillations or coherent motion. Then, at $`t=30`$, noise is switched on. Typical calculations extend up to $`t\approx 1000`$. For small noise intensities, $`S\lesssim 0.1`$, the stochastic perturbations to the trajectories preserve the characteristic features of the collective dynamics observed in the absence of noise. The completely collapsed state of the noiseless case transforms into a cloud of particles which still moves coherently at a given velocity. Oscillatory orbits, meanwhile, proceed now along a noisy limit cycle. Figure 1 shows three snapshots of a system of 100 particles with $`a=10`$, subject to noise with $`S=0.1`$. They started from different initial conditions, as described above. The arrows indicate the overall motion of each swarm.
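A minimal version of this integration scheme might look as follows (a Python/NumPy sketch of ours, not the authors' code; parameter values are those quoted above, and the uniform noise amplitude is tied to $`S`$ through $`\langle \xi _i\xi _j\rangle =2S\delta _{ij}\delta (t-t^{\prime })`$, which is our reading of the discretization):

```python
import numpy as np

def simulate_swarm(N=100, a=10.0, S=0.1, dt=1e-3, t_noise=30.0, t_end=200.0,
                   u_init=1.0, seed=0):
    """Euler integration of Eq. (3); returns times and the mean velocity u_bar(t)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 0.5, N)        # initial positions spread around x = 0
    u = rng.normal(u_init, 0.5, N)     # around u = 1 (coherent) or u = 0 (oscillatory)
    xi0 = np.sqrt(6.0 * S / dt)        # uniform force in (-xi0, xi0): variance 2S/dt
    t_arr, ubar = [], []
    for n in range(int(t_end / dt)):
        t = n * dt
        force = (1.0 - u**2) * u - a * (x - x.mean())   # self-propulsion + attraction
        if t >= t_noise:                                # noise switched on at t = 30
            force = force + rng.uniform(-xi0, xi0, N)
        u = u + force * dt
        x = x + u * dt
        if n % 100 == 0:
            t_arr.append(t); ubar.append(u.mean())
    return np.array(t_arr), np.array(ubar)

t, ub = simulate_swarm()
print(ub[-1])   # stays near 0.9 for weak noise; decays toward 0 above the threshold
```

Note that the global coupling $`(a/N)\sum _j(x_i-x_j)`$ reduces to $`a(x_i-\overline{x})`$, so the force on all particles is computed in one vectorized line.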
Within coherent clouds, each particle performs an oscillatory noisy motion which is superimposed on the collective translation. The distribution of particles inside the clouds has a well defined profile, shown in Fig. 2 for several values of $`a`$ in the case of positive velocity. The normalized distribution $`\rho (y)`$ is there plotted as a function of the coordinate relative to the average position, $`y_i=x_i-\overline{x}`$. For decreasing $`a`$, the distribution becomes broader and more asymmetric, with an accumulation of particles at the front of the cloud. The coherent traveling states of the population cease to exist at sufficiently high noise intensities, and the swarm undergoes an abrupt transition to its random oscillatory state, characterized by the absence of translational motion. This breakdown of coherent swarm motion is illustrated in Fig. 3. We see that if the noise is relatively weak (Fig. 3a), switching it on at $`t=30`$ only produces a slight decrease of the velocity of the coherent cloud, so that the average velocity $`\overline{u}(t)`$ exhibits fluctuations around a constant mean value $`\overline{u}<1`$. If however the noise intensity exceeds a certain threshold, the effect of introducing noise is qualitatively different (Fig. 3b). Within a certain time interval after the introduction of noise, the swarm continues to travel at a somewhat reduced, strongly fluctuating average velocity $`\overline{u}(t)`$. Then it suddenly starts to decelerate and soon reaches a steady state where the average velocity $`\overline{u}(t)`$ fluctuates near zero. Inspection of the distribution of particles in the ensemble shows that in this state the system has been attracted to the noisy limit cycle mentioned above. We conclude that the system undergoes a noise-induced transition from a condition of multistability with two kinds of attractors to a situation where only one of them exists. The coherent clouds observed for small noise intensities are no longer possible for $`S>S_c`$, and the system is necessarily led to the state of noisy, disordered oscillations. Figure 4 displays the dependence of the mean velocity $`\overline{u}`$ of the traveling swarm on the noise intensity $`S`$ for three different values of the coupling coefficient $`a`$. We see that the mean velocity monotonously decreases with the noise intensity, until a certain critical noise intensity is reached and coherent swarm motion becomes impossible. The mean velocity at the critical point is still relatively large, $`\overline{u}\approx 0.8`$. The critical noise intensity $`S_c`$ becomes lower for smaller values of $`a`$. Note that the behavior of the swarm is characterized by a strong hysteresis. Once the breakdown of the coherent motion has occurred, subsequently decreasing the noise intensity leaves the system in the oscillatory state with zero mean velocity, down to $`S=0`$. An interesting property of the considered noise-induced transition is the divergence of the waiting time at the critical point. The waiting time $`T_0`$ is defined as the time at which the average velocity $`\overline{u}(t)`$ of the cloud first reaches zero (we measure this time starting from the moment $`t=30`$ when the noise is switched on). Figure 5 shows the waiting time $`T_0`$ as a function of $`S-S_c`$ in a log-log plot. We see that for very small values of $`S-S_c`$, this time decreases following a power law, $`T_0\propto (S-S_c)^{-\gamma }`$, with $`\gamma \approx 1.33`$. Then, at about $`S-S_c=0.03`$, the behavior changes to a power law with $`\gamma \approx 0.52`$.
Straight dashed lines with slopes $`-4/3`$ and $`-1/2`$ have been plotted for reference. The observed noise-induced transition between coherent clouds and disordered oscillations of the swarm can be explained by a simple approximate analytical approach. By summing all Eqs. (3) for different particles $`i`$ and taking into account that the noises acting on individual particles are not correlated, an evolution equation for the average swarm velocity $`\overline{u}(t)`$ is obtained: $$\dot{\overline{u}}+\frac{1}{N}\sum _{i=1}^{N}\dot{x}_i^3-\overline{u}=0.$$ (6) Let us introduce for each particle its deviation $`y_i=x_i-\overline{x}`$ from the average position of the swarm. Then we can write $$\frac{1}{N}\sum _{i=1}^{N}\dot{x}_i^3=\overline{u}^3+3\sigma \overline{u}+\frac{1}{N}\sum _{i=1}^{N}\dot{y}_i^3,$$ (7) where $`\sigma =N^{-1}\sum _i\dot{y}_i^2`$ is the average square dispersion of the swarm. The last cubic term in this equation can be neglected if the distribution of particles in the traveling cloud is symmetric. As we have seen from the numerical simulations (Fig. 2), this is indeed a good approximation for sufficiently large values of the coupling constant $`a`$. Within this approximation, Eq. (6) takes the form $$\dot{\overline{u}}+(\overline{u}^2-1)\overline{u}+3\sigma \overline{u}=0.$$ (8) On the other hand, the deviations of particles from the center of the swarm obey the stochastic differential equation $`\ddot{y}_i+(3\overline{u}^2-1)\dot{y}_i`$ $`+`$ $`ay_i+3\overline{u}\left(\dot{y}_i^2-\sigma \right)`$ (9) $`+`$ $`\left(\dot{y}_i^3-{\displaystyle \frac{1}{N}}{\displaystyle \sum _{i=1}^{N}}\dot{y}_i^3\right)=\xi _i(t).`$ (10) Assuming that the deviations of $`\dot{y}_i`$ are relatively small and linearizing this equation, we obtain $$\ddot{y}_i+(3\overline{u}^2-1)\dot{y}_i+ay_i=\xi _i(t).$$ (11) In this approximation the deviations for different particles $`i`$ represent statistically independent random processes. This allows us to replace the ensemble average in the dispersion $`\sigma `$ by the statistical average taken over independent random realizations of such processes, defined by Eq. (11). Hence, we have derived a closed set of equations (8) and (11) that approximately describe the swarm. We want to investigate steady statistical states of this system. The stationary solutions to Eq. (8) are $`\overline{u}=\pm \sqrt{1-3\sigma }`$ and $`\overline{u}=0`$. The latter solution corresponds to the resting swarm. Examining Eq. (11), we note that it describes damped oscillations only if $`3\overline{u}^2-1>0`$, i.e. only if the mean velocity of the swarm is sufficiently large. Under this condition, the stationary probability distribution for $`y_i`$ is readily found and the average square dispersion of velocities is obtained as $$\sigma =\frac{S}{3\overline{u}^2-1}.$$ (12) The algebraic equations for $`\overline{u}`$ and $`\sigma `$ can be solved, yielding the statistical dispersion of particles in the traveling swarm, $$\sigma _{1,2}=\frac{1}{9}\left(1\pm \sqrt{1-9S}\right),$$ (13) and its mean velocity $$\overline{u}_{1,2}^2=\frac{1}{3}\left(2\pm \sqrt{1-9S}\right).$$ (14) Thus, the traveling state solutions disappear when the critical noise intensity $`S_c=1/9=0.11\dots `$ is reached. At this critical point the mean swarm velocity is $`\overline{u}_c=\sqrt{2/3}=0.82\dots `$ and the mean dispersion of particles in the cloud is $`\sigma _c=1/9=0.11\dots `$.
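As a quick check of Eqs. (13)–(14), the two branches and the critical point can be evaluated directly (a few lines of Python; variable names are ours):

```python
import numpy as np

S = np.linspace(0.0, 1.0 / 9.0, 5)        # S_c = 1/9
root = np.sqrt(1.0 - 9.0 * S)
sigma_12 = (1.0 + root) / 9.0, (1.0 - root) / 9.0                 # Eq. (13)
u_12 = np.sqrt((2.0 + root) / 3.0), np.sqrt((2.0 - root) / 3.0)   # Eq. (14)
print(u_12[0])   # upper branch: 1.0 at S = 0, decreasing to 0.8165 at S = S_c
print(u_12[1])   # lower branch: 0.577 at S = 0, increasing to 0.8165 at S = S_c
```

The two branches merge at $`\overline{u}_c=\sqrt{2/3}\approx 0.82`$, which is the velocity at which the coherent state disappears in Fig. 4.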
Below the breakdown threshold (for $`S<S_c`$), the solution (14) has two branches, shown by solid and dashed lines in Fig. 4. The lower branch is apparently unstable, since it approaches the value $`\overline{u}=1/\sqrt{3}=0.58\dots `$ at $`S=0`$, i.e. in the absence of the noise. A special property of the derived solution is that it does not depend on the parameter $`a`$. Comparing the theoretical prediction with the numerically determined values of the mean swarm velocity, which are also plotted in Fig. 4, we can see that this approximation provides good estimates of the swarm velocity and the critical noise intensity when the parameter $`a`$ is relatively high ($`a=100`$ and $`a=10`$). At small values of $`a`$, the deviations from the numerical results become significant near the breakdown threshold. This can be understood if we take into account that, according to Fig. 2, the distribution of particles in a traveling swarm shows significant asymmetry for such small values of $`a`$, and therefore our approximations are not valid. For a standing swarm ($`\overline{u}=0`$), the deviations $`y_i=x_i-\overline{x}`$ obey in the limit $`N\to \infty `$ the nonlinear stochastic differential equation $$\ddot{y}_i+(\dot{y}_i^2-1)\dot{y}_i+ay_i=\xi _i(t),$$ (15) which is similar to the Van der Pol equation. In this state, therefore, the particles in the swarm perform periodic limit-cycle oscillations with a random distribution of phases. This state exists for any noise intensity $`S`$ and is approached when the noise-induced breakdown of the coherent motion takes place at $`S=S_c`$. Thus, we have found in this paper that a swarm of interacting, actively moving particles may show bistable behaviour, i.e. be found either in a coherent state, traveling at a fixed velocity, or in a rest state where translational motion is absent and the individual particles perform oscillations around the center of the swarm. The bistability persists in the presence of noise if its intensity remains relatively low. Increasing the noise intensity leads to a sudden breakdown of the coherent traveling motion, and a transition to the resting oscillatory state occurs. This behavior is different from the second-order phase transitions to coherent collective motion that were found in the previously studied models. We conjecture that the difference is related to the fact that in our model the interactions between self-propelled particles have a long range and extend over the entire swarm. It would be interesting to see how this behavior is modified when other interaction laws and systems with higher dimensionality are considered. Finally, we remark that, when formulated in terms of dynamical equations for individual interacting self-propelled particles, the problem shows significant similarities to synchronization and condensation in populations of globally coupled oscillators (see e.g. ). The significant new aspect is that collapsed synchronous states correspond here to translational motion of the entire population. The authors acknowledge financial support from Fundación Antorchas (Argentina).
# Relaxation of Inter-Landau-level excitations in the Quantised Hall Regime In the last two decades considerable interest has been focused on the collective excitations in a strongly correlated 2DEG under quantum Hall regime conditions. The study of such excitations provides a way to determine the fundamental properties of a 2DEG which eventually explain its relaxation and transport features. For integer filling $`\nu `$ the calculation of the exciton-like spectra (which are in fact of Bose type) in the limit of high magnetic fields reduces to an exactly solvable problem \[1-3\]. At the same time the direct experimental discovery of such excitons (spin-flip waves, magneto-plasmons (MPs) without and with spin flip) presents certain difficulties. The inter-Landau-level MPs were observed in the works \[4-5\] by means of inelastic light-scattering. However, the massive breakdown of wave-vector conservation implied by this detection is not yet understood. The present paper is devoted to the properties of MP relaxation (MPR), which might help indirectly to reveal the presence of MP excitations in a 2DEG; namely, the two types of MPR for the filling $`\nu =1`$ which are studied below should give rise to special features in the hot luminescence and Raman scattering signals from a 2DEG. In particular, a nonmonotonic dependence on the magnetic field has to be expected for the intensity of the hot luminescence which arises due to electron relaxation from the first and the second Landau levels (LLs). We are concerned only with the MPs without spin flip, i.e. with the energies $$\epsilon _{ab}(q)=\hbar \omega _c(n_b-n_a)+\mathcal{E}_{ab}(q),\qquad (n_b>n_a),$$ (1) where $`\omega _c`$ is the cyclotron frequency, and $`n_a`$ and $`n_b`$ are the numbers of the initially (in the ground state) occupied and unoccupied LLs, respectively. Having a Coulomb origin, the energy $`\mathcal{E}_{ab}`$ is of the order of or smaller than $`E_C=e^2/\kappa _0l_B`$, which is the characteristic energy of electron-electron interaction in a 2DEG ($`l_B`$ is the magnetic length, $`\kappa _0`$ is the dielectric constant). We should especially pay attention to such portions of the MP spectrum where the density of states becomes infinite (i.e. where $`d\epsilon _{ab}/dq=0`$) and/or the wave-vector $`q`$ is equal to zero. When so doing we take into account the decisive role of an experimental test. Indeed, the features in the light-scattering spectra \[4-5\] attributed to one-level excitations (where $`n_b-n_a=1`$) are only detected in the vicinity of $`q=0`$ (where $`\mathcal{E}_{ab}=0`$ but $`d\mathcal{E}_{ab}/dq\ne 0`$) or near their roton minimum. In the latter case the interaction energy is $$\mathcal{E}_{01}\approx \epsilon _0+(q-q_0)^2/2M,\qquad |q-q_0|\ll q_0$$ (2) (the index $`ab`$ is specified by replacing it with $`n_an_b`$, since the spin state does not change in our consideration). We consider the case where $`n_a=0`$ and $`n_b=1`$ with filling $`\nu =1`$; then in this equation $`q_0\approx 1.92/l_B`$, $`M^{-1}\approx 0.28E_Cl_B^2`$ and $`\epsilon _0\approx 0.15E_C`$ in the strict 2D limit (namely, if the 2DEG thickness $`d`$ satisfies the condition $`d\ll l_B`$). Actually the MP spectra depend on $`d`$, but their shape does not change qualitatively. A. First we consider the magnetophonon resonance conditions, when the energy $`\epsilon _{01}`$ defined by Eq. (1) is equal to the LO-phonon energy $`\hbar \omega _{LO}=35`$ meV. The resonance with $`q=0`$ when $`\omega _c=\omega _{LO}`$ is just a consequence of the Kohn theorem and does not demonstrate the presence of MP excitations in the system.
Therefore, we consider the case of magnetoroton relaxation, when the MP energy is transferred to the emitted optic phonon. Then in the vicinity of $`q\approx q_0`$ we should expect the resonance if $`\hbar (\omega _{LO}-\omega _c)=\mathcal{E}_{01}(q_0)=\epsilon _0`$. In the strict 2D limit this condition leads to the resonant magnetic field $`B=19`$ T instead of the $`21`$ T corresponding to the case $`\omega _c=\omega _{LO}`$. Note that in this case the MPR has to be accelerated. As a result, if the hot luminescence from the 1-st LL is measured, a fall in its intensity at the resonant magnetic field should be detected. We now calculate the rate of this relaxation. Describing the states of the system in terms of the Excitonic Representation, which means the transition from electron annihilation and creation operators to the excitonic ones $`Q_{ab𝐪}`$ and $`Q_{ab𝐪}^+`$ (see Refs. \[7-10\]), we should therefore study the transition between the electron states $`|i\rangle =Q_{ab𝐪}^+|0\rangle `$ and $`\langle f|=\langle 0|`$ ($`\langle 0|`$ is the ground state). The calculation of the relevant matrix element $$\mathcal{M}_{ab𝐪}=\langle 0|\mathcal{H}_{e,ph}Q_{ab𝐪}^+|0\rangle $$ (3) assumes the Excitonic Representation for the Hamiltonian of the electron-phonon interaction, $$\mathcal{H}_{e,ph}=\frac{1}{L}\left(\frac{\hbar }{L_z}\right)^{1/2}\underset{𝐤}{\sum }\stackrel{~}{U}_{opt}^{\ast }(𝐤)H_{e,ph}(𝐪),$$ (4) where $`L\times L\times L_z`$ are the sample sizes, $`𝐤=(𝐪,k_z)`$ is the phonon wave-vector, and $`\stackrel{~}{U}_{opt}(𝐤)=\gamma (k_z)U_s(𝐤)`$ is the renormalized Fröhlich vertex ($`\gamma `$ is the size-quantised form-factor), namely: $`|U_{opt}|^2=2\pi e^2\omega _{LO}/\overline{\kappa }k^2`$ (the standard notation for the reduced dielectric constant $`\overline{\kappa }^{-1}=\kappa _{\infty }^{-1}-\kappa _0^{-1}`$ is used). The relevant representation for $`H_{e,ph}`$ operating on the electron states can be obtained similarly to the case of the exciton-acoustic-phonon interaction : $$H_{e,ph}=\frac{L}{l_B\sqrt{2\pi }}\left[h_{n_an_b}(𝐪)Q_{ab𝐪}+h_{n_an_b}^{\ast }(𝐪)Q_{ab𝐪}^+\right].$$ (5) Here $`n_b\ge n_a`$, and $$h_{n_an_b}(𝐪)=(n_a!/n_b!)^{1/2}(q_+l_B)^{n_b-n_a}e^{-q^2l_B^2/4}L_{n_a}^{n_b-n_a}(q^2l_B^2/2),$$ (6) where $`L_n^j`$ is a Laguerre polynomial and $`q_\pm =i2^{-1/2}(q_x\pm iq_y)`$. Now, exploiting the relevant commutation rules for the excitonic operators (see Refs. and ), we can find the matrix element (3) appropriate in our case, $$\mathcal{M}_{01𝐪}=(\hbar /2\pi L_z)^{1/2}U_{opt}^{\ast }(𝐤)h_{01}(𝐪)/l_B,$$ (7) and then the MPR rate $$R_{ph}=\underset{𝐪^{\prime },k_z}{\sum }\frac{2\pi }{\hbar }|\mathcal{M}_{01𝐪^{\prime }}|^2\delta (\epsilon _{01}-\hbar \omega _{LO})=\frac{e^2L^2\omega _{LO}}{4\overline{\kappa }|d\mathcal{E}_{01}/dq|}q^2e^{-q^2l_B^2/2}\overline{n}(q).$$ (8) In the last expression $`q`$ is the root of the equation $`\epsilon _{01}(q)=\hbar \omega _{LO}`$, and $`\overline{n}(𝐪)`$ is the occupation number of the 01MPs. Formally, the result (8) becomes infinite when $`q=q_0`$. However, the real magnitude of $`R_{ph}`$ in the vicinity of $`q_0`$ can be found from an analysis of the homogeneity breakdown due to the random impurity potential $`U(𝐫)`$. Assuming $`U(𝐫)`$ to be smooth (its correlation length $`\mathrm{\Lambda }\gg l_B`$), one can find that the energy correction for any $`ab`$MP is, in the dipole approximation, $`\delta \mathcal{E}=\hbar 𝐪𝐯_d`$, where $`𝐯_d=(\widehat{z}\times \nabla U(𝐫))l_B^2/\hbar `$ is the drift velocity (see Refs. and ). This energy is an inhomogeneous broadening of the MP energy and has to be added to $`\mathcal{E}_{ab}`$.
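The two resonant fields quoted above are easy to cross-check numerically. The sketch below (ours) solves $`\hbar \omega _c+\epsilon _0(B)=\hbar \omega _{LO}`$; the GaAs constants $`m^*=0.067m_e`$ and $`\kappa _0=12.9`$ are our assumptions, not values stated in the text, which is why the Kohn-theorem field comes out at $`\approx 20`$ T rather than the quoted 21 T:

```python
import numpy as np
from scipy.optimize import brentq

hbar_omega_LO = 35.0    # meV
m_star = 0.067          # GaAs effective mass in units of m_e (assumed)
kappa0 = 12.9           # GaAs static dielectric constant (assumed)

def hw_c(B):            # cyclotron energy in meV, B in Tesla
    return 0.11577 * B / m_star       # hbar*e*B/m_e = 0.11577 meV/T

def eps0(B):            # magnetoroton gap 0.15*E_C with E_C = e^2/(kappa0*l_B)
    l_B = 25.66 / np.sqrt(B)          # magnetic length in nm
    E_C = 1440.0 / (kappa0 * l_B)     # e^2/(4*pi*eps0*kappa0*l_B) in meV
    return 0.15 * E_C

B_kohn  = brentq(lambda B: hw_c(B) - hbar_omega_LO, 5.0, 40.0)            # ~20 T
B_roton = brentq(lambda B: hw_c(B) + eps0(B) - hbar_omega_LO, 5.0, 40.0)  # ~19 T
print(B_kohn, B_roton)
```

The downward shift of roughly 1.5 T between the two roots is the magnetoroton signature discussed in the text.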
The random potential correction plays no significant role if $`|d\mathcal{E}_{ab}/dq|\gg l_B^2\left|\nabla U\right|`$, which means that the electron-hole Coulomb interaction is stronger than the force to which the electron and the hole are subjected in the random potential. In other words, the derivative $`|d\mathcal{E}_{ab}/dq|`$ in Eq. (8) is limited from below by $`l_B^2\left|\nabla U\right|\sim l_B^2\mathrm{\Delta }/\mathrm{\Lambda }`$, where $`\mathrm{\Delta }`$ is the random potential amplitude. Thus we obtain the rate near $`q_0`$ per unit area: $$\left[R_{ph}/L^2\right]_{\text{max}}\sim \frac{e^2\mathrm{\Lambda }\omega _{LO}}{4\overline{\kappa }l_B^2\mathrm{\Delta }}q_0^2e^{-q_0^2l_B^2/2}\overline{n}(q_0).$$ (9) The broadening of the roton minimum due to the inhomogeneity is $`|𝐪-𝐪_0|\sim (2M\delta \mathcal{E})^{1/2}`$. Estimating the magnetoroton density as $`N\sim \overline{n}q_0(2M\delta \mathcal{E})^{1/2}`$ and setting $`dN/dt`$ equal to the decay rate (9), we find the characteristic relaxation time $`\tau _{ph}=-\overline{n}\,dt/d\overline{n}`$, which turns out to be of the order of $$\tau _{ph}\sim 4\mathrm{exp}(q_0^2l_B^2/2)\left(\frac{\mathrm{\Delta }}{\mathrm{\Lambda }}\right)^{3/2}\left(\frac{2M}{q_0}\right)^{1/2}\frac{\overline{\kappa }l_B^3}{e^2\omega _{LO}}\sim 0.1÷0.01\text{ ps}$$ (10) (we assume that $`B=10`$ T, $`\mathrm{\Delta }\sim 1`$ meV, $`\mathrm{\Lambda }\sim 50`$ nm). B. Naturally, the above results should be compared with the analogous ones in the case when the emission of LO-phonons is suppressed off the resonance conditions. Generally, the MPR mechanism seems to be determined by many-phonon emission. However, a certain additional relaxation channel exists precisely for the considered magnetorotons. A coalescence of two of them, with their conversion into a single MP of the “two-cyclotron” plasmon mode (with $`n_a=0`$, $`n_b=2`$), turns out to be energetically allowed because “by chance” the difference $`\delta \mathcal{E}=\mathcal{E}_{02}(0)-2\epsilon _0`$ is numerically very small. Namely, in the strict 2D limit $`\delta \mathcal{E}\approx 0.019E_C\approx 3÷4`$ K for $`B=10÷20`$ T. This coalescence leads to an Auger-like MPR process, because as a result the total number of excited electrons decreases, as well as the total number of MP excitations. The dependence $`\mathcal{E}_{02}(q)`$ is nonmonotonous, but in the range $`0<ql_B<2.5`$ it does not change by more than $`0.07E_C`$. Nevertheless, it would be preferable to observe the generated “two-cyclotron” MP in a state with small 2D wave-vector, because in this case the generated MP could be detected by anti-Stokes Raman scattering similar to the experiments of Refs. \[4-5\]. We calculate the decay rate due to such an Auger-like process, $$\mathcal{R}=\frac{1}{2}\underset{𝐪_1,𝐪_2}{\sum }\frac{2\pi }{\hbar }\left|\mathcal{M}(𝐪_1,𝐪_2)\right|^2\overline{n}(𝐪_1)\overline{n}(𝐪_2)\delta \left[\mathcal{E}_{01}(𝐪_1)+\mathcal{E}_{01}(𝐪_2)-\mathcal{E}_{02}(𝐪_1+𝐪_2)\right],$$ (11) where the required matrix element of the considered conversion is $$\mathcal{M}(𝐪_1,𝐪_2)=\langle 0|Q_{02\,𝐪_1+𝐪_2}H_{int}Q_{01𝐪_1}^+Q_{01𝐪_2}^+|0\rangle .$$ (12) Here the bra and ket states are orthogonal, and $`H_{int}`$ is the Coulomb interaction Hamiltonian of the 2DEG. Rewriting $`H_{int}`$ in the Excitonic Representation, we should take into account that within the framework of the exploited high-magnetic-field approximation it is sufficient to keep in $`H_{int}`$ only the terms which commute with the Hamiltonian of noninteracting electrons and therefore conserve the cyclotron part of the 2DEG energy.
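To see where the numbers in Eq. (10) come from, the expression can be evaluated directly. The sketch below (ours) again assumes the GaAs constants $`\kappa _0=12.9`$ and $`\kappa _{\infty }=10.9`$ (so $`\overline{\kappa }\approx 70`$); with $`B=10`$ T, $`\mathrm{\Delta }=1`$ meV and $`\mathrm{\Lambda }=50`$ nm it lands near the lower (0.01 ps) end of the quoted range:

```python
import numpy as np

hbar = 658.2                       # meV*fs
B, Delta, Lam = 10.0, 1.0, 50.0    # T, meV, nm (values assumed in the text)
kappa0, kappa_inf = 12.9, 10.9     # GaAs (our assumption)
kappa_bar = 1.0 / (1.0 / kappa_inf - 1.0 / kappa0)   # ~70

l_B = 25.66 / np.sqrt(B)           # nm
E_C = 1440.0 / (kappa0 * l_B)      # meV
q0 = 1.92 / l_B                    # 1/nm
M = 1.0 / (0.28 * E_C * l_B**2)    # magnetoroton mass, (1/nm^2)/meV

tau_ph = (4.0 * np.exp((q0 * l_B)**2 / 2.0)   # exp(q0^2 l_B^2 / 2) ~ 6.3
          * (Delta / Lam)**1.5                # (meV/nm)^{3/2}
          * np.sqrt(2.0 * M / q0)             # (meV*nm)^{-1/2}
          * kappa_bar * l_B**3 / 1440.0       # kappa_bar*l_B^3/e^2 in nm^2/meV
          * hbar / 35.0)                      # 1/omega_LO in fs
print(tau_ph / 1000.0, "ps")                  # ~0.006 ps, near the 0.01 ps end
```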
When so doing, we find that the only term which gives a contribution to the matrix element (12) is $$H_{int}^{\prime }=\frac{1}{2\pi l_B^2}\underset{𝐪}{\sum }V(q)h_{10}(𝐪)h_{21}^{\ast }(𝐪)Q_{12𝐪}^+Q_{01𝐪},$$ (13) where $`V(q)`$ is the 2D Fourier component of the Coulomb potential averaged with the wave function in the $`\widehat{z}`$ direction (so that in the strict 2D limit $`V(q)=2\pi l_BE_C/q`$). Substituting the operator (13) for $`H_{int}`$ into Eq. (12) and employing the special commutation rules for the excitonic operators (see Refs. and ), one can calculate the matrix element (12) for arbitrary $`𝐪_1`$ and $`𝐪_2`$. We will later need this quantity only when $`𝐪_1\approx -𝐪_2`$ and $`q_1\approx q_2\approx q_0`$. In this case $`\mathcal{M}(𝐪_0,-𝐪_0)=2(2\pi )^{1/2}\mu l_B/L`$, where in the strict 2D limit $`\mu \approx 0.062E_C`$. To calculate the depopulation rate of the 01MPs (11), one additionally has to know the $`\overline{n}(𝐪_1)`$ distribution and the appropriate phase area $`A`$ for the relevant final wave-vectors $`𝐪=𝐪_1+𝐪_2`$ of the 02MPs. When this area is sufficiently small, namely if $`\pi q^2\lesssim A\ll 4\pi ^3M\delta \mathcal{E}`$, the result is $$\mathcal{R}/L^2=\frac{\overline{n}^2(q_0)\mu ^2l_B^2q_0}{2\pi \hbar }\left(\frac{M}{\delta \mathcal{E}}\right)^{1/2}A.$$ (14) This is just the case for the rate (11) when the 02MP creation occurs in the phase area relevant for anti-Stokes inelastic backscattering. Then the role of the random potential in determining the value of $`A`$ is crucial. Indeed, if $`ql_B<1`$, one can get the estimate $`d\mathcal{E}_{02}/dq\sim E_Cq^2l_B^3`$ (see Ref. ), and the uncertainty in $`q`$ due to disorder turns out to be $`\stackrel{~}{q}\sim (\mathrm{\Delta }/E_C)^{1/2}(\mathrm{\Lambda }l_B)^{-1/2}`$ (for the adopted numerical parameters we find $`\stackrel{~}{q}\sim 10^5`$ cm<sup>-1</sup>). The quantity $`\pi \stackrel{~}{q}^2`$ should be substituted into Eq. (14) for $`A`$. If we wish to obtain the total rate of the Auger-like MP relaxation, then with the help of Eq. (11) a more complicated summation has to be fulfilled. Nevertheless, in this case Eq. (14) may also be employed for an approximate estimation if we substitute there $`A\sim \pi l_B^{-2}`$. Then, estimating the 01MP density near the roton minima as $`N\sim \overline{n}(q_0)q_0(2M\delta \mathcal{E})^{1/2}`$ and setting $`dN/dt`$ equal to the relevant total rate of the coalescing 01MPs, we find $$\tau _{Aug}=-\overline{n}\,dt/d\overline{n}\approx \frac{2\hbar }{\overline{n}\mu ^2}\left(\frac{2\delta \mathcal{E}\mathrm{\Delta }l_B}{\mathrm{\Lambda }}\right)^{1/2}\sim \frac{1}{\overline{n}}\text{ ps}.$$ (15) This time is longer by about a factor of 100 than that given by Eq. (10). On the other hand, the considered Auger-like process is certainly the dominant relaxation channel in the case of 01-magnetorotons if the magneto-phonon resonant conditions are not met. It is very important that the studied type of MP relaxation reveals an additional possibility for the indirect experimental detection of the magnetorotons. Indeed, if one somehow excites 01MPs near their roton minima, then one could simultaneously observe the 02MPs (and therefore electrons at the 2-nd LL). It seems such an observation might be performed by means of anti-Stokes Raman scattering or by means of hot luminescence from the 2-nd LL. Note also that if the 1-st LL turns out to be in the vicinity of the LO-phonon energy, one might observe a decrease of the hot luminescence signal from the 2-nd LL in the field $`B`$ corresponding to the magnetoroton-phonon resonance studied above.
This correlation between the 1-st LL excitations and the 2-nd LL hot luminescence would be evidence of the Auger-like process and therefore of the magnetoroton existence. Finally, note that a deviation of the filling from 1 should nevertheless qualitatively retain the same picture of the considered MPR, as long as this deviation does not reach the point where the 2DEG excitation spectrum is drastically renormalized (i.e. where the fractional quantum Hall effect conditions arise). The work is supported by the MINERVA Foundation and by the Russian Fund for Basic Research. References 1. Yu. A. Bychkov, S. V. Iordanskii, and G. M. Éliashberg, JETP Lett. 33, 143 (1981). 2. C. Kallin and B. I. Halperin, Phys. Rev. B 30, 5655 (1984). 3. C. Kallin and B. I. Halperin, Phys. Rev. B 31, 3635 (1985). 4. A. Pinczuk et al., Phys. Rev. Lett. 68, 3623 (1992); ibid. 70, 3983 (1993). 5. A. Pinczuk et al., Semicond. Sci. Technol. 9, 1865 (1994). 6. We restrict ourselves to the bulk optic-phonon modes in GaAs. In this case only the longitudinal phonons interact with the conduction band electrons. 7. A. B. Dzyubenko and Yu. E. Lozovik, Sov. Phys. Solid State 26, 938 (1984). 8. S. Dickmann and S. V. Iordanskii, JETP 83, 128 (1996). 9. S. Dickmann, Physica B 263-264, 202 (1999). 10. S. Dickmann and Y. Levinson, Phys. Rev. B ??, (1996).
# $`W`$-boson production at upgraded HERA ## Abstract Event characteristics of $`W`$ boson production at the HERA collider are non-trivial and sensitive to the production mechanisms. We analyse the distributions of the four-particle final state defined by the complete set of $`W`$-producing perturbative leading order diagrams in the Standard Model and its extension with the anomalous effective lagrangian in the gauge sector. It is important to understand in detail the mechanisms of $`W`$-boson production at upgraded HERA. The number of $`W`$-production events will be large enough both in the leptonic and hadronic $`W`$ decay channels, giving potentially large backgrounds to the signals of new physics, such as contact interactions, leptoquarks and $`R`$-parity conserving or violating SUSY processes. At the same time, single $`W`$ production is influenced by the $`\gamma WW`$ and $`ZWW`$ vertices, which are still not precisely measured. While the weak current couplings of gauge bosons are measured with an accuracy of 10<sup>-4</sup> , the accuracy of the anomalous coupling restrictions in the gauge boson sector from the Tevatron data is only around 0.5 (in units of the dimensionless parameters $`\kappa `$, $`\lambda `$), so the self-interaction of vector bosons is not experimentally fixed at a comparable level of precision. $`W`$ boson production in the electron-proton mode can be observed in the main channel $`e^-p\to e^-\mu ^+\nu _\mu X`$, and maybe a few events $`e^-p\to \nu _e\mu ^+\overline{\nu }_\mu X`$ can be reconstructed. The positron-proton mode $`e^+p\to e^+\mu ^+\nu _\mu X`$ has practically the same total rate and event characteristics, because the Feynman graph topology of the positron-proton mode differs only for some $`W`$ and $`Z`$ exchange weak diagrams, giving contributions that are small in comparison with the dominant $`t`$-channel photon exchange diagrams. The complete set of 10 tree-level diagrams for the main process of $`W`$-boson production, $`e^-p\to e^-\mu ^+\nu _\mu X`$, is shown in Fig. 1. In the simpler $`2\to 3`$ process approximation of the $`W`$-boson on-shell, $`M_{\mu \nu }^2=(p_\nu +p_\mu )^2=m_W^2`$, we have to keep the seven $`s`$-channel diagrams with an outgoing $`W`$ boson line and omit the ladder diagrams 1, 6 and 7. However this approximation of infinitely small $`W`$ width, $`m_W\mathrm{\Gamma }_{tot}/[(M_{\mu \nu }^2-m_W^2)^2+m_W^2\mathrm{\Gamma }_{tot}^2]=\delta (M_{\mu \nu }-m_W)`$, being rather satisfactory for the calculation of the total $`W`$ production rate, is not sufficient for the analysis of some specific features of the event topology, like particle distributions from the multiperipheral mechanisms. Taking the $`W`$ boson off-shell ($`W\to \mu \nu _\mu `$) in the $`2\to 4`$ process approximation requires the ladder graphs 1, 6, 7 in order to preserve gauge invariance . The main contribution to the cross section comes from diagram 3, containing photon and quark $`t`$-channel propagators. When the $`t`$-channel gamma and quark are close to mass shell the QCD corrections are large, potentially developing a nonperturbative regime of the ’resolved photon’ . The kinematical configurations of the resolved photon are usually separated by a cut near the $`u`$-channel quark pole $`\mathrm{\Lambda }^2=(p_q-p_W)^2`$. Resolved $`W`$-production is nothing more than quark-antiquark fusion, where one of the quarks appears as a constituent of the gamma and the other as a constituent of the proton.
Existing parametrizations of the quark distributions inside the photon (measured in relatively low-$Q^2$ $\gamma\gamma$ collisions and then extrapolated to the region $Q^2\sim m_W^2$) seem to be not as precise as the PDFs of the proton, so resolved $W$-production cannot be calculated with the same reliability as the perturbative leading-order (also called 'direct') contribution of the ten diagrams in Fig. 1. Double counting between the perturbative part of the photon structure function and the collinear configuration of the direct-process amplitude is removed by the subtraction of the LO $\gamma\to q\overline{q}$ splitting term. The resulting contribution of the resolved part is of order 10% of the $W$-production cross section. An alternative method of separating the resolved part, by a cut near the $t$-channel photon pole $Q^2=(p_e-p_{e'})^2$, uses a delicate application of the equivalent-photon approximation to the subprocess with a $t$-channel quark, $\gamma q\to q'W$. In this case, by definition, the main 'direct' (or 'photoproduction') contribution comes from a configuration with a quasireal photon and an electron at a small angle $\theta_e$ to the beam. The $2\to 3$ amplitude with $\theta_e>5$ deg (seven diagrams of Fig. 1, $W$ on-shell) corresponds to the 'DIS contribution'; the ladder diagrams are omitted. The resolved-photon contribution becomes dominant in the framework of this method. The important role of the NLO QCD corrections to the resolved part, $q'\overline{q}\to W$, which enhance it by about 40%, has been demonstrated by an explicit calculation. In the following we discuss the distributions corresponding to the direct (perturbative leading-order) part of the $W$-production amplitude in the Standard Model and beyond. The resolved photon is separated by a $\Lambda^2$ cut. The calculation of the leading-order amplitude (the ten diagrams in Fig. 1) was performed by means of the CompHEP package. We used the 'overall' prescription for the vector-boson propagators. The electron mass was kept nonzero for the regularization of the $t$-channel photon pole. The cancellation at the numerical level of the $1/Q^4$ photon-propagator pole, present in individual squared diagrams, down to the $1/Q^2$ pole of the complete amplitude, as required by gauge invariance, can be demonstrated explicitly. We show the event characteristics of the process $e^-p\to e^-\mu^+\nu_\mu X$ in Figs. 2-4 (proton structure functions MRS A). In each figure the first row of plots represents the distributions of the electron, the second row the distributions of the (anti)muon, and the third row the distributions of the final quark (all partonic subprocesses contributing to $W$ production are summed). Different cuts on the $t$-channel quark momentum $\Lambda^2=(p_q-p_W)^2$ were imposed for the calculation of the event characteristics in Figs. 2 and 3, where $\Lambda$ equals 0.2 GeV and 5 GeV, respectively. The distributions in Fig. 4 were calculated at $\Lambda=5$ GeV with a phase-space cut of 20 GeV around the $W$ pole: $M_W-20\ \mathrm{GeV}\le M(\mu\nu_\mu)\le M_W+20\ \mathrm{GeV}$. This cut is used in the EPVEC generator, so the distributions in Fig. 4 are calculated with phase-space cuts similar to those used in. Additional phase-space cuts in EPVEC, introduced to ensure the numerical stability of the muon-pole integration in the ladder diagram 1 of Fig. 1, lead to an EPVEC total rate systematically smaller than the CompHEP total rate, in which the muon-pole region is integrated exactly (see Table 1).
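Two of the numerical statements above – the adequacy of the narrow-width approximation for the total rate and the effect of the ±20 GeV mass window – are easy to check with a toy integration; the sketch below (ours) uses illustrative PDG-like values for $m_W$ and $\Gamma_{tot}$.

```python
# Sketch (ours): integrate the fixed-width Breit-Wigner factor of the text,
# (1/pi) m_W*Gamma / [(M^2 - m_W^2)^2 + m_W^2*Gamma^2], over s = M^2.
import numpy as np
from scipy.integrate import quad

M_W, GAMMA = 80.4, 2.08        # GeV; illustrative values

def bw_in_M(M):
    """Breit-Wigner density in s = M^2, rewritten as a density in M."""
    bw_s = M_W * GAMMA / np.pi / ((M**2 - M_W**2)**2 + M_W**2 * GAMMA**2)
    return 2.0 * M * bw_s      # ds = 2 M dM

total, _ = quad(bw_in_M, 1.0, 10.0 * M_W, points=[M_W])
window, _ = quad(bw_in_M, M_W - 20.0, M_W + 20.0, points=[M_W])
print(f"total normalization: {total:.4f}  (exactly 1 in the delta-function limit)")
print(f"fraction inside the +-20 GeV window: {window / total:.4f}")  # ~0.97
```

So the mass window itself keeps nearly the whole line shape; the EPVEC-versus-CompHEP rate differences of Table 1 come instead from the additional muon-pole stability cuts mentioned above.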
Soft muons in the distributions $d\sigma/dE_\mu$ and $d\sigma/dp_{T\mu}$ come from the ladder diagrams 1, 6, 7 in Fig. 1. Jets at 180 degrees to the proton beam arise from diagram 3 of Fig. 1, when the quasireal photon produces a quark-antiquark pair collinear to the initial electron. We can see that the backscattered-jet peak is sensitive to the value of the $\Lambda$ cut and gradually goes down as we increase it. This decrease of the direct contribution should, however, be compensated by the 'photon remnant' in the contribution from the resolved part. Muons at 180 degrees to the proton beam, coming from the ladder diagrams, appear at any value of $\Lambda$; only the shape of $d\sigma/dE_\mu$ is slightly affected and the normalization is changed. The kinematical cut around the $W$ pole implemented in the 'canonical' EPVEC removes the soft muons and the muons at 180 degrees to the beam completely. It is interesting to note that four-fermion final-state configurations with backscattered soft muons are observed in the simulations for LEP2, where in the channel $e^+e^-\to e^-\overline{\nu}_e\mu^+\nu_\mu$ the diagram topologies are the same (if we replace the quark line by the electron line). In the muonic channels the total cross section is approximately 150-160 fb, giving about 35 events/year at an integrated luminosity of 200 $pb^{-1}$. Additional kinematical cuts are necessary in the electron channels for the separation of misidentification backgrounds, and the number of identifiable events from $W\to e\nu_e$ is slightly smaller. In total, at the upgraded HERA collider it could be possible to observe about 60 $W$-production events/year. At present the HERA luminosity is too small to produce a number of $W$ bosons sufficient for a quantitative analysis. The presently available H1 data show one event in the electron channel (2.4 events expected) and two events in the muon channel (0.8 events expected) with a topology compatible with the $W$-production processes (three more H1 events in the muon channel have kinematic properties different from those given by the $W$-production mechanisms). The ZEUS data show three events in the electron channel, and none of the events passes the kinematical cuts in the muon channel. A detailed discussion can be found in. The anomalous U(1) gauge-invariant, C- and P-conserving effective Lagrangian in the gauge sector can be taken in the form

$$L_{eff}=g_V\left(W_{\mu\nu}^+W^\mu V^\nu-W^{\mu\nu}W_\mu^+V_\nu+\kappa W_\mu^+W_\nu V^{\mu\nu}+\frac{\lambda}{m_W^2}W_{\rho\mu}^+W_\nu^\mu V^{\nu\rho}\right)\qquad(1)$$

where $g_\gamma=e$ and $g_Z=e\cos\vartheta_W/\sin\vartheta_W$, $W_{\mu\nu}=\partial_\mu W_\nu-\partial_\nu W_\mu$, $V_{\mu\nu}=\partial_\mu V_\nu-\partial_\nu V_\mu$, and $\lambda$, $\kappa$ are dimensionless parameters. Following tradition, the value of $\lambda$ is expressed in units of $m_W$. We show one of the distributions calculated using the anomalous effective Lagrangian (1) in Fig. 5.
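The quoted event rates follow from simple arithmetic; as a quick check (ours, assuming a 155 fb cross section in the muonic channels):

```python
# Quick check (ours) of the event-rate arithmetic: 1 fb = 1e-3 pb.
sigma_fb = 155.0                          # assumed ~150-160 fb (muonic channels)
for lumi_pb in (200.0, 1000.0):
    n_events = sigma_fb * 1e-3 * lumi_pb
    print(f"L = {lumi_pb:6.0f} pb^-1  ->  ~{n_events:5.1f} events")
```

This is consistent with the quoted ~35 events/year at 200 $pb^{-1}$, before detector acceptances are folded in.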
Taking into account some typical detector cuts, necessary for the separation of misidentification backgrounds, and realistic experimental acceptances, we can estimate the following limits on $\Delta\kappa$ and $\lambda$, corresponding to an observable deviation of the total cross section from the Standard Model value at the 68% and 95% confidence levels:

$$-1.70<\lambda<1.70,\qquad -1.05<\Delta\kappa<0.48,\qquad 68\%\ CL$$
$$-2.24<\lambda<2.24,\qquad \Delta\kappa<0.89,\qquad 95\%\ CL$$

at an integrated luminosity of 200 $pb^{-1}$, and

$$-1.03<\lambda<1.03,\qquad -0.31<\Delta\kappa<0.27,\qquad 68\%\ CL$$
$$-1.75<\lambda<1.75,\qquad -0.58<\Delta\kappa<0.46,\qquad 95\%\ CL$$

at an integrated luminosity of 1000 $pb^{-1}$. The limits from the Tevatron collider available at present are close to the possible upgraded-HERA limits at the luminosity of 1000 $pb^{-1}$. M.D. is grateful to M. Spira, M. Kuze and D. Zeppenfeld for useful discussions, and thanks C. Diaconu and D. Waters for their help in the comparison of the EPVEC and CompHEP results.
# HISTORICAL SURVEY OF THE QUASI-NUCLEAR BARYONIUM

Dedicated to the memory of C.B. Dover and I.S. Shapiro. Invited talk at the Workshop on Hadron Spectroscopy, Frascati–INFN (Italy), March 8-12, 1999, to appear in the Proceedings.

## 1 Introduction

The question of possible nucleon–antinucleon ($\text{N}\overline{\text{N}}$) bound states was raised many years ago, in particular by Fermi and Yang, who remarked on the strong attraction between N and $\overline{\text{N}}$ at large and intermediate distances. In the sixties, explicit attempts were made to describe the spectrum of ordinary mesons ($\pi$, $\rho$, etc.) as $\text{N}\overline{\text{N}}$ states, an approximate realisation of the "bootstrap" ideas. It was noticed, however, that the $\text{N}\overline{\text{N}}$ picture hardly reproduces the observed patterns of the meson spectrum, in particular the property of "exchange degeneracy": for most quantum numbers, the meson with isospin $I=0$ is nearly degenerate with its $I=1$ partner, one striking example being provided by the $\omega$ and $\rho$ vector mesons. In the 70's, a new approach was pioneered by Shapiro, Dover and others: in their view, $\text{N}\overline{\text{N}}$ states were no longer associated with "ordinary" light mesons, but instead with new types of mesons with masses near the $\text{N}\overline{\text{N}}$ threshold and specific decay properties. This new approach was encouraged by evidence from many intriguing experimental investigations in the 70's, which also stimulated very interesting activity in the quark model: exotic multiquark configurations were studied, followed by glueballs and hybrid states, and more generally all "non-$\text{q}\overline{\text{q}}$" mesons, which will be extensively discussed at this conference. Closer to the idea of quasi-nuclear baryonium are the meson–meson molecules. Those were studied mostly by particle physicists, while $\text{N}\overline{\text{N}}$ states remained more the domain of interest of nuclear physicists, due to the link with nuclear forces.

## 2 The $G$-parity rule

In QED, it is well known that the amplitude of $\mu^+e^+$ scattering, for instance, is deduced from the $\mu^+e^-$ one by the rule of $C$ conjugation: the contribution from one-photon exchange ($C=-1$) flips its sign, that of two photons ($C=+1$) is unchanged, etc. In short, if the amplitude is split into two parts according to the $C$ content of the $t$-channel reaction $\mu^+\mu^-\to e^+e^-$, then

$$\mathcal{M}(\mu^+e^+)=\mathcal{M}_+ + \mathcal{M}_-,\qquad \mathcal{M}(\mu^+e^-)=\mathcal{M}_+ - \mathcal{M}_-.\qquad(1)$$

The same rule can be formulated for strong interactions and applied to relate $\overline{\text{p}}\text{p}$ to pp, as well as $\overline{\text{n}}\text{p}$ to np. However, as strong interactions are invariant under isospin rotations, it is more convenient to work with isospin eigenstates, and the rule becomes the following. If the NN amplitude of $s$-channel isospin $I$ is split into $t$-channel exchanges of $G$-parity $G=+1$ and exchanges with $G=-1$, the former contribute exactly the same to the $\text{N}\overline{\text{N}}$ amplitude of the same isospin $I$, while the latter change sign. This rule is often expressed in terms of one-pion exchange or $\omega$-exchange having an opposite sign in $\text{N}\overline{\text{N}}$ with respect to NN, while $\rho$ or $\epsilon$ exchange contribute with the same sign. It should be underlined, however, that the rule is valid well beyond the one-boson-exchange approximation.
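The bookkeeping of the rule can be made concrete in a few lines; the toy sketch below is ours, with purely schematic strengths standing in for the meson-exchange terms (a realistic potential of course carries spin, isospin and tensor structure).

```python
# Toy illustration (ours) of the G-parity rule: each one-boson-exchange
# term acquires a factor G in NbarN relative to NN.  Strengths are
# schematic placeholders; negative = attraction.
G_PARITY = {"pi": -1, "omega": -1, "rho": +1, "epsilon": +1}

def g_parity_transform(nn_terms):
    """Map NN one-boson-exchange strengths onto NbarN strengths."""
    return {meson: G_PARITY[meson] * v for meson, v in nn_terms.items()}

nn = {"pi": -1.0, "epsilon": -3.0, "omega": +2.5, "rho": +0.5}
nnbar = g_parity_transform(nn)
print("NN    :", nn, "  net:", sum(nn.values()))
print("NbarN :", nnbar, "  net:", sum(nnbar.values()))
# omega flips from repulsive to attractive: epsilon and omega now act
# coherently, making the NbarN potential deeper on average (see Sec. 4).
```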
Beyond one-boson exchange, for instance, a crossed diagram with two exchanged pions contributes with the same sign to NN and $\text{N}\overline{\text{N}}$.

## 3 Properties of the NN potential

Already in the early 70's, a fairly decent understanding of the long- and medium-range nuclear forces had been achieved. First, the tail is dominated by the celebrated Yukawa term, one-pion exchange, which is necessary to reproduce the peripheral phase-shifts at low energy as well as the quadrupole deformation of the deuteron. At intermediate distances, pion exchange, even when supplemented by its own iteration, does not provide enough attraction. It is necessary to introduce a spin-isospin-blind attraction; otherwise one hardly achieves binding of the known nuclei. This was called $\sigma$-exchange or $\epsilon$-exchange, sometimes split into two fictitious narrow mesons to mimic the large width of this meson, which results in a variety of ranges. The true nature of this meson has been extensively discussed in the session chaired by Lucien Montanet at this Workshop. Refined models of nuclear forces describe this attraction as due to two-pion exchanges, including the possibility of strong $\pi\pi$ correlation, as well as the excitation of nucleon resonances in the intermediate states. The main conceptual difficulty is to avoid double counting when superposing $s$-channel types of resonances and $t$-channel types of exchanges, a problem known as "duality". To describe the medium-range nuclear forces accurately, one also needs some spin-dependent contributions. For instance, the P-wave phase-shifts with quantum numbers $^{2S+1}\text{L}_J={}^3\text{P}_0$, $^3\text{P}_1$ and $^3\text{P}_2$, dominated at very low energy by pion exchange, exhibit different patterns as the energy increases. Their behaviour is typical of the spin-orbit forces mediated by vector mesons. This is why $\rho$-exchange and, to a lesser extent, $\omega$-exchange cannot be avoided. Another role of $\omega$-exchange is to moderate somewhat the attraction due to two-pion exchange. By no means, however, can it account for the whole repulsion which is observed at short distances, and which is responsible for the saturation properties of heavy nuclei and nuclear matter. In the 70's, the short-range NN repulsion was treated empirically, by cutting off or regularising the Yukawa-type terms due to meson exchanges and adding some ad hoc parametrization of the core, adjusted to reproduce the S-wave phase-shifts and the binding energy of the deuteron. Needless to say, dramatic progress in the description of nuclear forces has been achieved in recent years. On the theory side, we understand, at least qualitatively, that the short-range repulsion is due to the quark content of each nucleon. This is similar to the repulsion between two helium atoms: due to the Pauli principle, the electrons of the first atom tend to expel the electrons of the second atom. On the phenomenological side, accurate models such as the Argonne potential are now used for sophisticated nuclear-structure calculations.

## 4 Properties of the $\text{N}\overline{\text{N}}$ potential

What occurs if one takes one of the NN potentials available in the 70's, such as the Paris potential or one of the many variants of the one-boson-exchange models, and applies to it a $G$-parity transformation?
The resulting $\text{N}\overline{\text{N}}$ potential exhibits the following properties: 1) $\epsilon$ (or equivalent) and $\omega$ exchanges, which partially cancel each other in the NN case, now add up coherently. This means that the $\text{N}\overline{\text{N}}$ potential is, on average, deeper than the NN one. As the latter is attractive enough to bind the deuteron, a rich spectrum can be anticipated for $\text{N}\overline{\text{N}}$. 2) The channel dependence of the NN forces is dominated by a strong spin-orbit potential, especially for $I=1$, i.e., proton–proton. This is seen in the P-wave phase-shifts, as mentioned above, and also in nucleon–nucleus scattering and in detailed spectroscopic studies. The origin lies in coherent contributions from vector exchanges ($\rho$, $\omega$) and scalar exchanges (mainly $\epsilon$) to the $I=1$ spin-orbit potential. Once the $G$-parity rule has changed some of the signs, the spin-orbit potential becomes moderate in both the $I=0$ and $I=1$ cases, but one observes a very strong $I=0$ tensor potential, due to coherent contributions of pseudoscalar and vector exchanges. This property is independent of any particular tuning of the coupling constants and thus is shared by all models based on meson exchanges.

## 5 Uncertainties on the $\text{N}\overline{\text{N}}$ potential

Before discussing the bound states and resonances in the $\text{N}\overline{\text{N}}$ potential, it is worth recalling some limits of the approach. 1) There are cancellations in the NN potential. If a component of the potential is sensitive to a combination $g_1^2-g_2^2$ of the couplings, then a model with $g_1$ and $g_2$ both large can be roughly equivalent to another where they are both small. But these models can differ substantially for the $\text{N}\overline{\text{N}}$ analogue if it probes the combination $g_1^2+g_2^2$. 2) In the same spirit, the $G$-parity content of the $t$-channel is not completely guaranteed, except for the pion tail. In particular, the effective $\omega$ exchange presumably incorporates many contributions besides some resonating three-pion exchange. 3) The concept of an NN potential implicitly assumes that the 6-quark wave function is factorised into two nucleon clusters $\Psi$ and a relative wave function $\phi$, say

$$\Psi(\vec{r}_1,\vec{r}_2,\vec{r}_3)\,\Psi(\vec{r}_4,\vec{r}_5,\vec{r}_6)\,\phi(\vec{r}).\qquad(2)$$

Perhaps the potential $V$ governing $\phi(\vec{r})$ mimics the delicate dynamics that should really be expressed in a multichannel framework. One might then be afraid that in the $\text{N}\overline{\text{N}}$ case the distortion of the incoming bags $\Psi$ could be more pronounced. In this case, the $G$-parity rule should be applied for each channel and for each transition potential separately, not at the level of the effective one-channel potential $V$. 4) It would be very desirable to probe our theoretical ideas on the long- and intermediate-distance $\text{N}\overline{\text{N}}$ potential by detailed scattering experiments, with refined spin measurements to filter out the short-range contributions. Unfortunately, only a fraction of the possible scattering experiments have been carried out at LEAR, and uncertainties remain. The available results are, however, compatible with meson-exchange models supplemented by annihilation. The same conclusion holds for the detailed spectroscopy of the antiproton–proton atom.
## 6 $\text{N}\overline{\text{N}}$ spectra

The first spectral calculations based on explicit $\text{N}\overline{\text{N}}$ potentials were rather crude. Annihilation was first omitted, to get a starting point, and its effects were then discussed qualitatively. This means the real part of the potential was taken as given by the $G$-parity rule and regularised at short distances by an empirical cut-off. Once this procedure is accepted, the calculation is rather straightforward. One should simply take care to handle properly the copious mixing of the $L=J-1$ and $L=J+1$ components in natural-parity states, due to tensor forces, especially for isospin $I=0$. The resulting spectra have been discussed at length in Refs.. Of course, the number of bound states and their binding energies increase when the cut-off leaves more attraction in the core, so no detailed quantitative prediction was possible. Nevertheless, a few qualitative properties remain as the cut-off varies: the spectrum is rich, in particular in the sector with isospin $I=0$ and natural parity, corresponding to the partial waves $^3\text{P}_0$, $^3\text{S}_1-{}^3\text{D}_1$, $^3\text{P}_2-{}^3\text{F}_2$, i.e., to $J^{PC}I^G=0^{++}0^+$, $1^{--}0^-$, $2^{++}0^+$, respectively. The abundant candidates for "baryonium" in the data available at that time made this quasi-nuclear baryonium approach plausible. As already mentioned, annihilation was first neglected. Shapiro and his collaborators insisted on the short-range character of annihilation and therefore claimed that it should not distort the spectrum much. Other authors acknowledged that annihilation should be rather strong, to account for the observed cross-sections, but should affect mostly the S-wave states, whose binding relies on the short-range part of the potential, and not so much the $I=0$, natural-parity states, which experience long-range tensor forces. This was perhaps too optimistic a viewpoint. For instance, an explicit calculation using a complex optical potential fitting the observed cross-section showed that no $\text{N}\overline{\text{N}}$ bound state or resonance survives annihilation. In Ref., Myhrer and Thomas used a brute-force annihilation. It was then argued that maybe annihilation is weaker, or at least has more moderate effects on the spectrum, if one accounts for: 1) its energy dependence: it might be weaker below threshold, since the phase-space for pairs of meson resonances is more restricted. It was even argued that part of the observed annihilation (the most peripheral part) in scattering experiments comes from transitions from $\text{N}\overline{\text{N}}$ scattering states to a $\pi$ meson plus an $\text{N}\overline{\text{N}}$ baryonium, which in turn decays. Of course, this mechanism does not apply to the lowest baryonium. 2) its channel dependence: annihilation is perhaps less strong in a few partial waves. This, however, should be checked by fitting scattering and annihilation data. 3) its intricate nature. Probably a crude optical-model approach is sufficient to account for the strong suppression of the incoming antinucleon wave function in scattering experiments, but too crude for describing baryonium. Coupled-channel models have thus been developed (see, e.g., Ref. and references therein).
It turns out that in coupled-channel calculations it is less difficult to accommodate simultaneously large annihilation cross sections and relatively narrow baryonia.

## 7 Multiquark states vs. $\text{N}\overline{\text{N}}$ states

At the time when several candidates for baryonium were proposed, the quasi-nuclear approach, inspired by the deuteron described as an NN bound state, was seriously challenged by a direct quark picture. Among the first contributions there is the interesting remark by Jaffe that $\text{q}^2\overline{\text{q}}^2$ S-wave states are not that high in the spectrum, and might even challenge P-wave $\text{q}\overline{\text{q}}$ states as a description of the scalar or tensor mesons. From the discussions in other sessions of this Workshop, it is clear that the debate is still open. It was then pointed out that orbital excitations of these states, of the type $(\text{q}^2)-(\overline{\text{q}}^2)$, have preferential coupling to $\text{N}\overline{\text{N}}$. Indeed, simple rearrangement into two $\text{q}\overline{\text{q}}$ is suppressed by the orbital barrier, while the string can break into an additional $\text{q}\overline{\text{q}}$ pair, leading to $\text{q}^3$ and $\overline{\text{q}}^3$. Chan and collaborators went a little further and speculated about possible internal excitations of the colour degree of freedom. When the diquark is in a colour $\overline{3}$ state, they obtained a so-called "true" baryonium, basically similar to the orbital resonances of Jaffe. However, if the diquark carries a colour 6 state (and the antidiquark a colour $\overline{6}$), then the "mock-baryonium", which still hardly decays into mesons, is also reluctant to decay into N and $\overline{\text{N}}$, and thus is likely to be very narrow (a few MeV, perhaps). This "colour chemistry" was rather fascinating. A problem, however, is that the clustering into diquarks is postulated instead of being established by a dynamical calculation. (An analogous situation existed for orbital excitations of baryons: the equality of Regge slopes for meson and baryon trajectories is natural once one accepts that excited baryons consist of a quark and a diquark, the latter behaving as a colour $\overline{3}$ antiquark. The dynamical clustering of two of the three quarks in excited baryons was shown only in 1985.) There has meanwhile been a lot of activity on exotic hadrons, though the fashion focused more on glueballs and hybrids. The pioneering bag-model estimate of Jaffe and the cluster model of Chan et al. have been revisited within several frameworks and extended to new configurations such as "dibaryons" (six quarks) or pentaquarks (one antiquark, four quarks). The flavour degree of freedom plays a crucial role in building configurations with maximal attraction and possibly more binding than in the competing threshold. For instance, Jaffe pointed out that (uuddss) might be more stable than two separated (uds), more likely than in the strangeness $S=-1$ or $S=0$ sectors. In the four-quark sector, configurations like $(\text{Q}\text{Q}\overline{\text{q}}\overline{\text{q}})$ with a large mass ratio $m(\text{Q})/m(\overline{\text{q}})$ are expected to resist spontaneous dissociation into two separated $(\text{Q}\overline{\text{q}})$ mesons (see, e.g., and references therein). For the pentaquark, the favoured configurations $(\text{Q}\overline{\text{q}}^4)$ consist of a very heavy quark associated with light or strange antiquarks.
In the limit of strong binding, a multiquark system can be viewed as a single bag where quarks and antiquarks interact directly by exchanging gluons. For a multiquark close to its dissociation threshold, we more often have two hadrons experiencing their long-range interaction. Such a state is called a "hadronic molecule". There have been many discussions of such molecules, $\text{K}\overline{\text{K}}$, $\text{D}\overline{\text{D}}$ or $\text{B}\text{B}^*$. In particular, pion exchange, due to its long range, plays a crucial role in achieving the binding of some configurations. In this respect, it is clear that the baryonium idea has been very inspiring.

## Acknowledgment

I would like to thank the organisers for the very stimulating atmosphere of this Workshop, J. Carbonell for informative discussions and A.J. Cole for comments on the manuscript.
# Ferromagnetism and superstructure in Ca1-xLaxB6

## Abstract

We critically investigate the model of a doped excitonic insulator, which has recently been invoked to explain some experimental properties of the ferromagnetic state in Ca<sub>1-x</sub>La<sub>x</sub>B<sub>6</sub>. We demonstrate that the ground state of this model is intrinsically unstable towards the appearance of a superstructure. In addition, the model would lead to a phase-separation regime and a domain structure that may be prevented only by the Coulomb forces. Recent experiments indicate that a superstructure may indeed show up in this material.

The discovery of weak ferromagnetism in lightly-doped hexaborides Ca<sub>1-x</sub>La<sub>x</sub>B<sub>6</sub> has renewed theoretical interest in the so-called "excitonic" transition, vigorously discussed in the 60s and 70s (see for a review). Band-structure calculations indicate that hexaborides, DB<sub>6</sub>, may in fact be semimetals, owing to an accidental small band overlap at the three X-points of the Brillouin zone, the two bands at each X-point having symmetries $X_3$, $X_3^{\prime}$. It was suggested that the use of this model, together with the energy mechanism for ferromagnetism first suggested in Ref., provides at least a qualitative explanation for the surprising findings. Bearing much resemblance to the mathematics of the BCS weak-coupling theory of superconductivity, the Keldysh-Kopaev model is a convenient tool which makes it possible to treat an excitonic transition in a controllable manner. Assuming that the basic physics of the hexaborides is properly accounted for by this oversimplified weak-coupling scheme, in what follows we study in some more detail the zero-temperature phase diagram as a function of the doping level, and its stability with respect to anisotropy. The main result is that the system almost inevitably develops a superstructure on the background of the initially cubic lattice. Our explanation for the magnetic moment per doped lanthanum ion varying, and even becoming very small, differs from the one suggested in Ref.. Zhitomirsky et al. have used the model of Ref. of an excitonic phase transition in a semimetal to explain the weak ferromagnetism observed in hexaborides. Ferromagnetism is known to appear in this model, and was investigated in detail by Volkov et al.. We first briefly recall the mechanism of magnetic-moment formation in this model. It is well known that if the screened Coulomb interaction between an electron and a hole in two bands,

$$H_{int}=\sum_{\mathbf{k},\mathbf{k}',\mathbf{q}}\sum_{\alpha\beta}V(\mathbf{q})\,a_{1\alpha}^{\dagger}(\mathbf{k}+\mathbf{q})\,a_{2\beta}^{\dagger}(\mathbf{k}'-\mathbf{q})\,a_{2\beta}(\mathbf{k}')\,a_{1\alpha}(\mathbf{k}),\qquad(1)$$

is dominant, and the electron and hole Fermi surfaces coincide ("nesting"), the excitonic transition has a degeneracy between the onset of the CDW ("singlet") and SDW ("triplet") excitonic condensates. This degeneracy is lifted only by additional weak short-range Coulomb terms, which favor the triplet excitonic condensate, or by electron-phonon interactions, which favor the singlet (CDW) state. Since this splitting is usually considered to be weak, $\delta g\sim g^2$, where $g=V(0)N_0\ll 1$ is the screened Coulomb coupling constant and $N_0=mk_F/(2\pi^2)$ is the density of states per spin for a single band, the temperatures of the triplet and singlet excitonic transitions are close (on the exponential scale, $T_c\propto\exp[-g^{-1}]$): $T_{s0}\approx T_{t0}$. Then, as was first shown in Ref.
and applied to the physics of hexaborides in Ref., ferromagnetism can appear as a result of doping, due to the development of a triplet excitonic instability in the presence of a singlet order, or vice versa. Ferromagnetism is the direct result of the fact that in the presence of these two condensates both the crystalline and the time-reversal symmetries are broken. Summation of the leading logarithmically divergent terms in the presence of one condensate explicitly confirms the divergent Curie-Weiss behavior of the spin susceptibility, with some Curie temperature found in Ref.. For the analysis of ferromagnetism in the doped state at low temperatures one may neglect the small differences between the energy spectra of the SDW and CDW ground states. The energetic analysis is straightforward in the case when the triplet and singlet coupling constants are equal, since then the equations for the different spin polarizations decouple. Indeed, for a simple model with two isotropic bands, $m_e=m_h=m$, $\epsilon_e(\mathbf{k})=\frac{\mathbf{k}^2}{2m}-\mu+\frac{E_g}{2}$ and $\epsilon_h(\mathbf{k})=\frac{\mathbf{k}^2}{2m}+\mu+\frac{E_g}{2}$, the zero-temperature excitonic gap in the unpolarized state is given by

$$\Delta_\alpha^2=\Delta_0(\Delta_0-2n),\qquad(2)$$

where $\Delta_0=2\epsilon_c\exp(-g^{-1})$ is the excitonic gap at zero doping, $\epsilon_c$ is a cutoff energy around the Fermi surface, and $n=N/4N_0$ is the concentration of doped carriers in energy units (the level of the chemical potential in the bare metallic phase is $\mu=n$). The ground-state energy per spin direction, relative to the stoichiometric normal state, is given by

$$E(n)=N_0n^2-\frac{1}{2}N_0(\Delta_0-2n)^2.\qquad(3)$$

Thus the excitonic pairing disappears at $n=0.5\Delta_0$. A quick analysis shows that the energy of a polarized excitonic state, $E[n(1+p)]+E[n(1-p)]$, has its minimum at $p=1$, i.e., for complete polarization of the added carriers. Therefore, below $T_c$ there are two order parameters, $\Delta_s$ and $\mathbf{\Delta}_t$, and, correspondingly, two different gaps in the spectrum for the two spin polarizations:

$$\Delta_\uparrow=\sqrt{\Delta_0(\Delta_0-4n)},\quad 0<n<\Delta_0/4,\qquad(4)$$
$$\Delta_\downarrow=\Delta_0,\quad 0<n<\Delta_0/2.\qquad(5)$$

Hence at $T=0$ electrons and holes are paired for only one spin direction when $\Delta_0/4<n<\Delta_0/2$. (The system undergoes a first-order phase transition into the unpolarized normal metal at $n_{cr}=\Delta_0/2$.) Thus, this mechanism gives a large effective moment, equal to 1 $\mu_B$ per doped La atom, and some effort has been applied in Ref. to argue that this moment may be forced to become small if the interaction between the orientation of the magnetic moment and the direction $\mathbf{d}$ of the triplet order parameter (SDW), $\mathbf{\Delta}_t(p)=(\mathbf{d}(p)\cdot\boldsymbol{\sigma})$, is taken into account. The mechanism of Ref. seems to us too artificial, first of all because the energy difference between the CDW and SDW states was ignored.
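As a quick numerical check (ours, in units $\Delta_0=N_0=1$) that the doped carriers indeed polarize fully, one can evaluate $E[n(1+p)]+E[n(1-p)]$ from Eq. (3) as a function of the polarization $p$:

```python
# Sketch (ours): the polarization energy built from Eq. (3), with the
# condensation term kept only while Eq. (2) gives a real gap (n < 1/2).
import numpy as np

def E(n, Delta0=1.0, N0=1.0):
    n = np.asarray(n, dtype=float)
    cond = np.where(n < Delta0 / 2.0, 0.5 * N0 * (Delta0 - 2.0 * n) ** 2, 0.0)
    return N0 * n ** 2 - cond

n = 0.2                                   # doping, in units of Delta_0
for p in np.linspace(0.0, 1.0, 6):
    Etot = E(n * (1.0 + p)) + E(n * (1.0 - p))
    print(f"p = {p:.1f}:  E = {float(Etot):+.4f}")
# The energy decreases monotonically towards p = 1: complete polarization,
# i.e. 1 mu_B per doped La ion.
```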
Meanwhile, it is easy to see that an anisotropy of the electron and hole pockets would reduce the net magnetization. Indeed, the two opposite spin polarizations preferred by the system are governed by the position of the chemical potential. First, an anisotropic solution for the order parameter, $\Delta(\mathbf{p})$, would result in a variation of the gap itself along the Fermi surface. Secondly, even a small anisotropy ("antinesting") of the electron and hole spectra hinders the excitonic gap, possibly even leading to gapless pockets along the initial Fermi surface. Therefore, starting from the spin-up and spin-down spectra for the isotropic gap shown in Fig. 1a (1 $\mu_B$ per La), one would end up with the doped electrons spilling over between the two energy branches, thus obviously reducing the total magnetization, as shown in Fig. 1b. We will not pursue detailed calculations of these mechanisms, since the weak-coupling model of an excitonic transition suffers from several obvious deficiencies, each of which results in an instability towards the formation of an inhomogeneous structure. First of all, for a homogeneous excitonic state to exist, the electron and hole Fermi surfaces should be sufficiently close to nesting. Indeed, a quick calculation shows that in a model with mass anisotropy,

$$\epsilon_h(\mathbf{p})=\frac{p_x^2}{2m+\delta m}+\frac{p_y^2}{2m-\delta m}+\frac{p_z^2}{2m}+(1/2)E_g\qquad(6)$$
$$\epsilon_e(\mathbf{p})=\frac{p_x^2}{2m-\delta m}+\frac{p_y^2}{2m+\delta m}+\frac{p_z^2}{2m}+(1/2)E_g,\qquad(7)$$

a homogeneous excitonic phase disappears at

$$\frac{\delta m}{m}>\frac{2e\Delta_0}{|E_g|},\qquad(8)$$

where $\Delta_0$ is the excitonic gap in the isotropic situation ($\delta m=0$) and $e=2.71828$. Since $\Delta_0$ is exponentially small in terms of $\epsilon_c\sim E_g$, $\Delta_0=2\epsilon_c\exp(-1/g)$, the weak-coupling model allows only a modest mass anisotropy. The range of anisotropy for exciton formation may be extended beyond Eq. (8), however, since at larger anisotropies the system goes over into an inhomogeneous state with long-wavelength oscillations of the spin and (or) electron densities. A similar drawback of the Keldysh-Kopaev model becomes obvious when one considers doping, even in a model with perfect nesting. Mathematically, the problem in the latter case becomes equivalent to the problem of the coexistence of superconductivity and ferromagnetism. The exchange field, $I$, in that model is equivalent to the chemical potential $\mu$ in the excitonic state. Thus the gap equation has two solutions: 1) $\Delta=\Delta_0$; 2) $\Delta^2=2I\Delta_0-\Delta_0^2$, where $\Delta_0$ is the gap at $I=0$. The energies of the ground state are easily found: 1) $\Omega-\Omega_0=-N_0\Delta_0^2$; 2) $\Omega-\Omega_0=-N_0(4\Delta_0I-\Delta_0^2-2I^2)$. The branch (2) is not stable for the homogeneous superconductor. The physical difference between the problems of the doped excitonic insulator and the ferromagnetic superconductor is that in the case of the excitonic insulator the number of added dopants is fixed, not the chemical potential. The "stable" solution (1) then corresponds to zero doping, the case when the chemical potential $\mu$ lies below the gap edge $\Delta_0$. The dependence of the homogeneous gap solution on doping is then given by branch (2), with $\mu$ expressed in terms of the added dopants, $n=\sqrt{\mu^2-\Delta^2}$ (where $n$ is in energy units); this reproduces Eq. (2) and the effects considered by Volkov et al. and Zhitomirsky et al.. The instability of the homogeneous solution at larger $n$ is seen from the form of $T_c(n)$, explicitly calculated in Ref. (see Fig. 2):
$$T_c=\frac{\gamma\Delta_0}{\pi}\exp\left[\Psi\left(\frac{1}{2}\right)-\frac{1}{2}\Psi\left(\frac{1}{2}+i\frac{n}{2\pi T_c}\right)-\frac{1}{2}\Psi\left(\frac{1}{2}-i\frac{n}{2\pi T_c}\right)\right].\qquad(9)$$

At high enough densities, $n>\Delta_0/2$, $T_c(n)$ displays a reentrant behavior from the excitonic insulator into the metallic state at low temperatures. This behavior indicates some sort of instability or a first-order phase transition. However, similarly to the case of non-perfect nesting above, one can again check for an instability towards a transition into an excitonic state with an incommensurate wave vector $\mathbf{q}$, by searching for a maximum of $T_c(|\mathbf{q}|)$. The result of our numerical calculation is shown in Fig. 2. At low dopings $T_c$ corresponds to the homogeneous state with $\mathbf{q}=0$. The dashed line in Fig. 2 shows the temperature for the onset of an inhomogeneous phase, $T_c(|\mathbf{q}|)$, where the value of $|\mathbf{q}|$ itself depends on $n$. (At low $T_c$, $|\mathbf{q}|=2.4n/v_F$.) The excitonic regime at $T=0$ for such an inhomogeneous state appears through a first-order transition at $n^{*}=0.71\Delta_0$ (see below) and extends to $n_{c1}\approx 0.755\Delta_0$. (The inhomogeneous state was first discussed by Rice in connection with itinerant antiferromagnetism in chromium.) Assuming again that the CDW and SDW states have equal energies ($\delta g=0$), it is easy to show that $T_c$ in the presence of a magnetic field, $B$, initially increases with $B$, as follows from $T_c(n,B)$, obtainable by the mere substitution

$$T_c(n,B)=T_c(n-\mu_BB)\qquad(10)$$

in Eq. (9). The value of $|\mathbf{q}|$ in the presence of the magnetic field $B$ also follows from the same substitution. This result may also be considered as another manifestation of the system's tendency towards a ferromagnetic state. According to Eq. (10), at higher doping the magnetic field may cause reentrance into the excitonic-insulator phase. The vicinity of the upper concentration, $n_{c1}$, where the non-homogeneous solution first appears as the concentration is decreased from the side of the normal metal, may be studied in the same manner as the corresponding superconductivity problem, producing the well-known Larkin-Ovchinnikov-Fulde-Ferrell state. The solution in our case (we need to find $\mu(n)$) again selects the stripe phase as the energetically most favorable one. The density of carriers then oscillates according to

$$n(x)=n-n[\Delta_b^2/n_{c1}^2](1+1.3\cos(2qx)),\qquad(11)$$

where $qv=2.4(n_{c1}+\delta n/15.7)$, $\Delta(x)=2\Delta_b\cos(qx)$, and

$$|\Delta_b|^2=0.936\,n_{c1}(n_{c1}-n).\qquad(12)$$

Here $v$ is the Fermi velocity. An inhomogeneous distribution, Eq. (11), of a weak enough charge on the scale of a coherence length $\xi_0>\xi_{TF}$, where $\xi_{TF}$ is the Thomas-Fermi screening radius, should not present major trouble, since the lattice can adjust itself to produce a periodic modulation.
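For orientation, the homogeneous $T_c(n)$ of Eq. (9) is easy to trace numerically; the sketch below (ours, in units of $\Delta_0$) assumes the mpmath library for the digamma function of complex argument, and follows the upper solution branch by a scan-and-bisect.

```python
# Sketch (ours): upper branch of T_c(n) from Eq. (9), in units of Delta_0.
import mpmath as mp

TC0 = mp.exp(mp.euler) / mp.pi          # T_c(0) = gamma*Delta_0/pi ~ 0.567

def gap_fn(T, n):
    """Vanishes on the T_c(n) curve of Eq. (9)."""
    x = n / (2 * mp.pi * T)
    return mp.log(TC0 / T) - (mp.re(mp.digamma(mp.mpc(0.5, x))) - mp.digamma(0.5))

def tc_upper(n, steps=400):
    """Scan downward from TC0 for the first sign change, then bisect it."""
    grid = [TC0 * (1 - k / steps) + mp.mpf('1e-6') for k in range(steps)]
    for a, b in zip(grid, grid[1:]):
        if gap_fn(a, n) * gap_fn(b, n) < 0:
            return mp.findroot(lambda T: gap_fn(T, n), (b, a), solver='bisect')
    return None

for n in (0.0, 0.2, 0.4, 0.5):
    Tc = tc_upper(n)
    label = f"T_c = {float(Tc):.3f}" if Tc is not None else "no root found"
    print(f"n = {n:.1f}: {label}")
# Just above n = 0.5 the equation acquires a second, lower root: the
# reentrant behavior discussed in the text.
```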
Yet it remains unclear how the system evolves at low $T$ when one proceeds from the metallic end by further decreasing the concentration of dopants. In any event, if the Coulomb forces are completely neglected, with further decrease of the concentration the homogeneous phase is restored through a phase-separation regime. This starts to take place at the point $n^{*}=\Delta_0/\sqrt{2}$. Finding the two phases and how they coexist turns out in our case to be completely equivalent to minimizing the energy of the intermediate state of a type-I superconductor in a fixed magnetic field. The same energy considerations show that the system separates into a mixture of two phases: one with $n_e=n_h$ and the other with $n_e-n_h=n^{*}=\Delta_0/\sqrt{2}$. The relative volume fraction of each phase is given by

$$V_1/V=1-n/n^{*},\qquad V_2/V=n/n^{*}.\qquad(13)$$

The domain sizes would be determined by the surface energy. Without the bulk Coulomb forces, such phase separation is energetically even more favorable than ferromagnetism. On the other hand, the Coulomb energy would work against the spatial charge separation, and hence may stabilize a homogeneous regime in a range of small enough $n$. In view of these uncertainties we made no attempt to address the other issue, namely that for a complete description of the true ground state of DB<sub>6</sub> in the framework of the model one has to account for the three X-points of the cubic system, related to each other by the symmetry transformations. This has been done recently in Ref. for the problem of multiband superconductors, and the results of Ref. are immediately mapped onto the problem of the excitonic insulator with the spectrum of Ref.. In conclusion, we have shown that the excitonic model, even though it may capture qualitatively correct physics, necessarily leads to the appearance of a superstructure as far as the doping dependence is concerned. We are aware of only one experimental observation of such a superstructure so far, which at least does not contradict our expectations above. We emphasize again that our explanation for a possibly small magnetic moment differs from the one given in Ref.. The magnetic moment can be as large as $1\mu_B$ per doped La in the model. Effects of anisotropy and changes in the energy spectrum with doping would change this significantly, as mentioned above. Recent experiments on $BaB_6$ doped with La have produced moments of about $0.4\mu_B$ at $x=0.05$. After this work was performed, the authors discovered the preprint Ref. by Balents and Varma. We shall not discuss their results regarding the three different $X$-points. We note that our results on the doping dependence agree for the most part. We do not agree, however, with the claim that the results of Ref. contain significant errors, since the macroscopic phase separation is to be made difficult by the Coulomb terms. The authors thank Z. Fisk for everyday discussions of both the experimental results and the related physics. We are grateful to R. Caciuffo for communicating the results of Ref. prior to publication. L.P.G. also acknowledges with gratitude stimulating discussions with M. Chernikov, A. Bianchi, S. Oseroff, H.-R. Ott, and C. M. Varma. This work was supported by the National High Magnetic Field Laboratory through NSF cooperative agreement No. DMR-9527035 and by the State of Florida.
# Human-Computer Conversation

## 1 Introduction

Practical and theoretical investigations into the nature and structure of human dialogue have been a topic of research in artificial intelligence and the more human areas of linguistics for decades: there has been much interesting work but no definitive or uncontroversial findings. The best performance overall in HMC (human-machine conversation) has almost certainly been Colby's PARRY program, since its release on the (then ARPA) net around 1973. It was robust, never broke down, always had something to say and, because it was intended to model paranoid behaviour, its zanier misunderstandings could always be taken as further evidence of mental disturbance, rather than the processing failures they were. Colby actually carried out a version of the Turing test – usually taken to be the test of whether users can tell a human from a machine in an HMC environment – by getting psychiatrists to compare, blind, PARRY utterances with those of real paranoids, and they were indeed unable to distinguish them. Indistinguishability results are never statistically watertight, but it was, nonetheless, a very striking demonstration, and much longer ago than many now realise.

## 2 History

In theoretical terms, modelling conversation or dialogue comes within the division of linguistics that Morris called pragmatics – that which deals with the relationship of individual agents to symbols and the world – as opposed to syntax, which deals with the relationship of symbols to each other, and semantics, which deals with the relationship of symbols to the world. HMC is a pragmatic study, but its history has involved many attempts to see it in syntactic and semantic terms as well, since those were the more tractable areas of research and pragmatics was often thought too hard. If we look back at the HMC systems of the Seventies we see a clear division of approach that persists to this day: on the one hand there were theoretically-motivated models in the artificial intelligence tradition that emphasized reasoning and a deep understanding of language based on knowledge of the world about which the conversation took place. The most famous was Winograd's program that discussed a set of blocks on a table top, and a more realistic setting was Grosz's system that played a robot's part in a discussion of how to assemble a water pump. If you said "Attach the platform to the pump" it could reply "Where are the bolts?" although no bolts had been mentioned, because it knew from its knowledge structure that bolts were needed for assemblies. The most advanced system of its generation was the Toronto train timetable system that would reply to "The 6.15 to Vancouver?" with "Platform 6". In some sense, it knew what was wanted by a person saying the first utterance, which is just a noun phrase and not even a question. None of these systems had very good performance and their vocabularies were a few dozen words at most; they were never tested by being opened up to a public who could talk to them in an unconstrained way – their systems of syntactic and semantic analysis were far too fragile for that: if one word was differently placed they might understand nothing at all. A quite different kind of system was Colby's PARRY, mentioned above – and which was never as famous as its far, far weaker contemporary ELIZA. PARRY knew little of the world, had no syntax analysis and just worked by a large set (about 6000) of patterns with which it matched any input.
It appeared to be a paranoid patient in a Veterans' hospital and in reply to "Have you been hospitalized before?" it might say "No, this is the first time". The essence of PARRY was not in individual answers but in the fact that it could keep up conversations like these for dozens of turns and appeared to have things it wanted to say, particularly stored resentments about the Mafia and bookies at the race track. It appeared to have a personality, and many users (and it had thousands) refused to believe it was a computer at all. This dichotomy between theory-driven models and performance-driven models persists to this day. One could classify the different approaches to HMC roughly as follows.

## 3 The current state of play

One could argue that Human Machine Conversation (HMC) is now in a position like that of machine translation fifteen years ago: it is a real technology, coming into being in spite of scepticism, but with a huge gulf between busy practitioners getting on with the job, often within companies, and, on the other side, the researchers, in linguistics, artificial intelligence (AI) or whatever, whose papers fill the conferences. The rise of empirical linguistics has largely closed this gulf for machine translation and related arts, but not, as yet, for HMC. First, let us set out historical trends in HMC, for if the HMC world is really as disparate as what follows it is hardly surprising there is so little consensus on how to progress. 1. Dialogue analysis based on models of individual agents' beliefs and knowledge structures, usually presented within an AI-derived theory of plans, inference and possibly speech/dialogue acts and truth maintenance, many using a "space" metaphor to represent individuals (e.g. Allen, Traum, Kobsa, Ballim and Wilks). 2. AI-derived models of dialogue based on more linguistic notions and not primarily based on models of individuals; the representation is often in terms of partitioned semantic nets to represent domains, but uses concepts of focus, failure, repair etc. (e.g. Grosz and Sidner, Webber). 3. AI-derived models based largely on transitions in domain scripts, driving top-down inference but augmented by local inference rules representing quasi-plans etc.; no models of individuals (e.g. Schank). 4. The sociology/ethnomethodology tradition of descriptive conversational analysis, usually based on local transition-analysis types in actual dialogues, analysed non-statistically (e.g. Schegloff). 5. Local AI theories of discourse, usually without models of individuals or domains, but with a taxonomy of speech/dialogue acts and inference rules applied bottom-up to utterances (Charniak, Bunt, Carletta). 6. Transition-network models of general discourse moves – apparently global, domain-independent scripts, but largely dependent on local alternatives – a dialogue version of "text grammar" and in some ways a normalization of (4) above (e.g. Whittaker and Stanton). 7. Empirical analysis of dialogue corpora to produce statistical measures of dialogue turn transitions based on a taxonomy of dialogue acts – the first empirical pragmatics? In some ways, the provision of evidence for forms of (6) and so of (4) (e.g. Maier and Reithinger). 8. Pattern-matching approaches to bottom-up dialogue analysis, providing input to some higher representational form and rejecting the possibility of effective dialogue grammar (Colby, ATIS and CONVERSE). 9.
Full treatment of empirical dialogue analysis, derived automatically from corpora, transducing from utterances to some form rich enough to support dialogue acts, whether in terms of some conventional grammatical representation or a fusion of taggers, lexical strings and pattern-matcher outputs. The themes and approaches in this list are probably not wholly independent and may not be exhaustive. Notice that, thirty years after PARRY, no real form of (9) exists, and the corpora from which it might be done (for English at least) have only very recently come into existence. An interesting question right now is whether (9) can be done in a principled way, as an alternative to PARRY-like systems built up over long periods by hand, or to many of the other types of systems above with trivial vocabularies and virtually no functionality. This was exactly the opposition in machine translation for many years: SYSTRAN's large hand-crafted functionality contrasted with a host of theoretical, published, acclaimed but non-functional systems. In machine translation, that opposition began to collapse with the arrival of IBM's statistical MT system about 1990. The possibility of a meaningful empirical pragmatics could do the same for HMC. One additional point should be made here: we have said nothing of computer recognition (and production) of speech in dialogue systems. Speech research has pursued its own agenda, separate from written text, and all the above systems communicated via screen typing. The chief speech problem was always decoding the signals into words, rather than the content of dialogue as we have described it above – researchers tended to assume that speech could be solved separately and a dialogue model of one of the types above then just bolted on, as it were. This agenda for research has had obvious defects, especially in that speech phenomena like pauses, stress, pitch etc. convey meaning as well – but basically there has been agreement on all sides until now to separate out the speech and language issues so as to progress.

## 4 The Loebner Prize

The Loebner Prize Medal is awarded annually to the designer of the computer system that best succeeds in passing a variant of the Turing Test, in which human judges communicate with a workstation and try to decide which of the systems in the competition is a program and which a person. The winning program is the one the judges are least able to distinguish from the human interlocutors taking part. Complex competition rules control typing speeds and so on, so that the machine entries do not give themselves away by typing too fast. The competition is overseen by the ACM, the main US organization for computer professionals, and for the last two years there has been no domain restriction on what can be talked about: programs entering must in principle be prepared to talk about anything at all. CONVERSE (the 1997 winner) had strong views on the lesbian couple Bill Clinton had welcomed to the White House the night before the competition, and of course on Clinton himself. It narrowly beat out the 1998 winner, an Australian program that claimed to be a 14-year-old girl marooned on a desert island and appealing for help over the World Wide Web. Competitions have included American, Canadian and Australian programs, and 1997 was the first time the prize had been won by a non-US team.
David Levy, Director of Intelligent Research, claimed that in twenty years people will be falling in love with these programs, and they are certainly more stimulating than Tamagotchi pets, and of a far higher standard than the Chatterbots currently available on the Web. The example below simply repeats exactly the kind of limited paraphrase, masquerading as chat, that was to be found in Schank's programs in the early 1970's. More details on past and future competitions and full transcripts of the 1997 Loebner competition can be found on the Web site: http://acm.org/loebner/loebner-prize.htmlx. A sample of CONVERSE's 1997 performance is at the end of this article.

## 5 CONVERSE

The CONVERSE program was not intended to be based on any scientific research, but on hunches about how to do it, while taking advantage of some recent methodological shifts in computational linguistics towards empiricism and the use of real data. The main hunch was derived directly from PARRY's impressiveness when compared with its passive contemporaries like ELIZA (Weizenbaum, 1976). PARRY had something to say, just as people do, and did not simply react to what you said to it. It could be said to rest on the hunch that a sufficient condition for humanness in conversation may be what Searle calls intentionality: the apparent desire to act on and affect surroundings through the conversation, which is a strong version of what we are calling "having something to say", since a computer program without prostheses can only create such effects through speech acts and not real acts on physical objects. The extension of this hunch, as far as Turing-test situations are concerned – i.e. fooling people that the system is human – is that if the computer can get to say enough, to keep control of the conversation, as it were, through being interesting or demanding, so that the human plays along, then there is correspondingly less opportunity for the human interlocutor to ask questions or get responses to an unconstrained range of utterances that will show up the system for what it is. Naturally enough, this hunch must be tempered in practice, since a system that will not listen at all, and which will not be diverted from its script no matter what is said, is again inevitably shown up. The hunch is simply that, and is translatable as: be as active and controlling in the conversation as you can get away with. RACTER, PARRY's only real rival (as regards interestingness) over the last 30 years, worked on the principle of being so interesting and zany that many humans did not want to interrupt it so as to intrude new topics or demands of their own. Others were less charmed, of course, but it was one very effective strategy for operating this key hunch, and one not involving clinical madness, as PARRY did. The original features of CONVERSE are as follows: 1. top-down control of conversation by means of a range of scripts, plus an active bottom-up module seeking to answer questions etc. using a set of databases on individuals. 2. control and interaction of the features in (1) by means of a weighting system between modules that could be set so as to increase the likelihood of one or another of these modules "gaining control" at any given point in the conversation. 3.
the use of large-scale linguistic databases such as thesaurus networks giving conceptual connectivity – for dealing with synonyms – and large proper-name inventories (Collins' dictionary proper names in our case) that allowed CONVERSE to appear to know about a large range of people and things not otherwise in the scripts, the databases, or the semantic nets, though this dictionary information was formally mapped to the structures of the semantic network and the databases. 4. a commercial and very impressive text parser, based on trained corpus statistics. This, however, had been trained on prose rather than dialogue, which meant that much of its output had to be modified by us before being used. We also made use of large-scale patterns of dialogue use derived from an extensive corpus of British dialogue that has recently been made available. The last takes advantage of recent trends in natural language processing: the use of very large resources in language processing and of intermediate results obtained from such resources, like the dialogue patterns. It meant that CONVERSE was actually far larger than any previous Loebner entry, and that much of our effort had gone into making such resources rapidly available in a PC environment. So, although not based on specific research, CONVERSE was making far more use of the tools and methods of current language-processing research than most such systems. Its slogan at this level was "big data, small program", which is much more the current trend in language processing and artificial intelligence generally than the opposite slogan, one which had ruled for decades and seen all such simulations as forms of complex reasoning rather than the assembly of a vast array of cases and data. CONVERSE, although it has some of the spirit of PARRY, does in fact have databases and learns and stores facts, which PARRY never did, and this will allow us in the future to expand its explicit reasoning capacity. The weighting system would in principle allow great flexibility in the system and could be trained, as connectionist and neural-network systems are trained, to give the best values of the weightings in terms of actual performance. We will continue to investigate this, and whether weightings in fact provide a good model of conversation – as opposed to purely deterministic systems that, say, always answer a question in the same way when it is posed. In the end, as so often, this may turn out to be a question of the application desired: a computer companion might be more functionally appropriate if weighted, since we seem to like our companions, spouses and pets to be a little unpredictable, even fractious. On the other hand, a computer model functioning as a counselor or advisor in a health-care situation, advising on the risks of a certain operation or test, might well be more deterministic, always answering a question and always telling all it knew about a subject when asked.

### The CONVERSE Personality

The apparent character of CONVERSE is Catherine, a 26-year-old female editor for a magazine like Vanity Fair, who was born in the UK but currently lives in New York. The contents of Catherine's character are stored in a database of features and values, known as the Person database (PDB). The kinds of things that we store about Catherine are the details of her physical appearance, her birthday, astrological sign, some of her likes and dislikes, whether she has a boyfriend, where she works, etc. For the most part, things in the PDB are all related to facts about Catherine.
### The CONVERSE Personality

The apparent character of CONVERSE is Catherine, a 26-year-old female editor for a magazine like Vanity Fair, who was born in the UK but currently lives in New York. The contents of Catherine’s character are stored in a database of features and values, known as the Person database (PDB). The kinds of things that we store about Catherine are the details of her physical appearance, her birthday, astrological sign, some of her likes and dislikes, whether she has a boyfriend, where she works, etc. For the most part, things in the PDB are all related to facts about Catherine. We can also store information about other people in the PDB, in particular people that are related to Catherine in some way: her mother, father, friend, boss, etc. Scripts are the driving force of the program and, whenever possible, we aim to keep control of the conversation by posing a question at the end of a system utterance. The scripts cover a range of 80 topics, but this can be easily extended (within the limits of the hardware) with a graphical script-editing interface. Currently, some of the topics covered are crime, racism, religion, the Simpsons, mobile phones, abortion, travel, food and violence. These are only differences of topic, not of genre or style. Scripts are acquired in a two-stage process: first, a script writer sketches out the script on paper; secondly, the scripts are entered into the system via a semi-automatic script editor. The script editor establishes the flow of control through the script based on the user’s responses to each script utterance. Amongst the recreational applications which are foreseen for CONVERSE, foremost is the idea of using it as a virtual friend. CONVERSE’s Person Data Base can be augmented with data for different personalities, enabling the user to talk, on one day, to a virtual elderly English gentleman and on another occasion to a virtual 20-year-old punk music fan from the Bronx. Eventually the user should be able to describe the personality with whom he wishes to talk, and there would then be a module to create a suitable Person Data Base conforming to the user’s specification.

### Extending CONVERSE

CONVERSE was based on quite simple intuitions: that conversational skill is a compromise between two tendencies. First, there is the active, top-down, intentional driver with something to say (the feature that Colby’s PARRY (Colby 1971) had and Weizenbaum’s ELIZA (Weizenbaum 1967) so clearly lacked). Secondly, there is the passive, bottom-up, listener aspect, which means understanding what is said to the system and reacting appropriately, by answering questions or even changing the topic. This, as all researchers know, is much harder because it requires understanding. Humans who lack it are conspicuously bad conversationalists, but this is normally attributed to not listening rather than not understanding what is said. We could call the simple CONVERSE architecture (Figure 1) Pushmepullyou (after Dr Doolittle) to convey the tension between the two elements. What we are now engaged in is an attempt to move both the push and pull sides to a higher level. For the latter we hope to use a model of individual agents’ beliefs and intentions we have worked on (in Sheffield and New Mexico) for some years, called ViewGen. It is a well-developed method (with several iterations of Prolog programs) of creating and manipulating the beliefs, goals etc. of individual agents in spaces for inference we call environments, all controlled by an overall default process called ascription that propagates beliefs etc. from space to space with minimum effort, so the system can model the states of other agents it is communicating with; a sketch of this default propagation follows below. It is based on the general assumption that a communicating agent must model the internal states of its interlocutor agents as best it can, not just by storing their features, like age and size, but their own states. This system is much stronger than the rather elementary Person Data Base in CONVERSE, but we intend to strengthen that with aspects of ViewGen so as to increase its functionality substantially.
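As promised above, here is a minimal sketch of the default-ascription idea behind ViewGen, under our own simplifying assumption that a belief environment can be reduced to a dictionary of propositions; the propositions and names are invented for illustration.

```python
# A toy sketch of ViewGen-style default ascription (our simplification,
# not the Prolog implementation): the system copies its own beliefs into
# its model of another agent unless it holds explicit counter-evidence,
# i.e. "minimum effort" propagation from environment to environment.

system_beliefs = {"paris_is_capital_of_france": True,
                  "catherine_lives_in_new_york": True}

# Explicit counter-evidence blocks ascription of a belief to the judge.
judge_overrides = {"catherine_lives_in_new_york": False}

def ascribe(own_beliefs, overrides):
    """Build a model of the interlocutor's belief environment."""
    model = dict(own_beliefs)   # default: assume they believe what we do
    model.update(overrides)     # ...except where we know otherwise
    return model

judge_model = ascribe(system_beliefs, judge_overrides)
print(judge_model)
```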
The more immediately challenging move is to replace the push-me or parsing side of CONVERSE, which was based on a statistical parser of general English prose, with a shell of what we called microqueries that adapted it to dialogue from normal prose. We are currently working on a robust parser of English conversation, which is to say, a transducer from sentences to a set of dialogue acts plausible for any domain. Now is a perfect moment to do this, since corpora of English dialogue (like the British National Corpus or BNC) have now become available, so that this task can be seen as an extension of contemporary empirical computational linguistics into the field of pragmatics itself, the last bastion. The chief difficulty, suffered by all researchers in this tradition, is that although the BNC is available for unsupervised training, there is very little dialogue corpus (apart from Edinburgh’s MAPTASK, Rochester’s TRAINS and a corpus at VerbMobil) which is marked with a set of dialogue acts for supervised training and evaluation. There is currently an international initiative (DAMSL) in this area but as yet few signs of progress. The next key step in empirical linguistics, one not yet achieved anywhere, will be such a robust model of English dialogue structure, probably using machine learning methods, a task so far only attempted for speech interfaces in very narrow domains.

## 6 Conclusion

It is worth emphasizing again how surprising it is that only recently, and within the Loebner competition, have levels of performance in HMC got back to the PARRY levels of nearly thirty years ago, in spite of all the theoretical activity since then – it has not led to much robust conversational performance. Even now, if you check out the Chatterbots on the web they are of astonishingly low quality, far, far below that of the Loebner winners. A recent one, Peedy, chosen at random, told a user that:

Just type in a sentence like: John gave Mary a book because it was her birthday

and then you can ask the corresponding questions:

Did you give Mary a book
What did John give Mary
Who gave Mary a book

and so on. Those who look back at Schank’s inference system of 1970 will see that the example comes straight from there, which is yet further proof, if it were needed, of how little movement there has been in some theoretical and empirical areas. In spite of its brusque approach to a complex technology, only the Loebner competition has been able to refocus R&D on robust and realistic performance, avoiding both the traps of theory without performance, and performance trivialities like the Chatterbot just quoted.

## 7 Appendix: CONVERSE Output from the Loebner Competition

PROGRAM[10:18:04] Hi, my name is Catherine, what’s yours?
JUDGE04[10:18:32] My name is Elliott.
PROGRAM[10:18:34] Well, it’s nice to meet you, Elliott.
PROGRAM[10:18:48] Did you see that story on CNN last night about the lesbian
PROGRAM[10:19:04] couple who came out at a White House party on Sunday?
JUDGE04[10:19:55] Yes, I did. I think it may all be a publicity stunt for Ellen.
PROGRAM[10:20:01] Ellen Degeneres was one of them - she was kissing her lover
PROGRAM[10:20:11] in front of Bill Clinton. Do you think he was very
PROGRAM[10:20:25] embarrassed?
JUDGE04[10:21:00] I think he want the gay vote. He will do anything to get it and keep it.
JUDGE04[10:21:15] He does not have a conservative base.
PROGRAM[10:21:21]
PROGRAM[10:21:21] Are you still a Friend of Bill Clinton?
JUDGE04[10:22:20] I am not sure if he has any real friends. He has achieved alot and has
JUDGE04[10:22:31] left alot of people behind.
PROGRAM[10:22:35] You really don’t like the guy--well, at least that’s
PROGRAM[10:22:50] definite--I thought most people were just indifferent or
PROGRAM[10:23:02] numb by now.
JUDGE04[10:23:34] I think we could have a more honest and dynamic leader. One able to
JUDGE04[10:23:55] face the real dangers facing the country and address them with
JUDGE04[10:24:26] courage and intelligence. Rather then listening to public opinion polls.
PROGRAM[10:24:30] I guess that means you really think there’s serious behind
PROGRAM[10:24:56] Whitewater, the Special prosecutor and all that stuff?
JUDGE04[10:25:58] We have two dead people, Brown and guy who commited suicide, we have lost
JUDGE04[10:26:36] papers. And we have Clintons people willing to tell all. I think that
JUDGE04[10:26:55] Hillary will be found guilty of something.
PROGRAM[10:26:58] But how do you actually feel about him, now in the number
PROGRAM[10:27:09] two term?
PROGRAM[10:27:15] .
JUDGE04[10:27:44] He is too smart to get caught. He deserved to named hotel manager of the
JUDGE04[10:27:47] year.
PROGRAM[10:27:49] OK, but what happens now--does it all drift on, with
PROGRAM[10:28:01] everyone waiting to see how much his friends have said about
PROGRAM[10:28:17] him under threat of lots and lots of jail-or will something
PROGRAM[10:28:32] dramatic actually happen?
# Direct Analysis of Spectra of the Type Ic Supernova 1994I

## 1 Introduction

Supernova 1994I, in the Whirlpool Galaxy (M51 = NGC 5194), was a well observed supernova of Type Ic. By definition, the spectrum of a Type Ic lacks the strong hydrogen lines of a Type II, the strong He I lines of a Type Ib, and the deep red Si II absorption of a Type Ia (e.g., Filippenko 1997). A Type Ic (SN Ic) is thought to be the result of the core collapse of a massive star that either loses its helium layer before it explodes, or ejects some helium that remains insufficiently excited to produce conspicuous optical He I lines. In the specific case of SN 1994I, the identification of an absorption feature observed near 10,250 Å with the He I $`\lambda 10830`$ line (Filippenko et al. 1995) has been taken to indicate that helium was ejected. Recently, interest in SNe Ic has been very high, because of the peculiar SNe Ic 1998bw and 1997ef. SN 1998bw probably was associated with a gamma–ray burst (Galama et al. 1998; Kulkarni et al. 1998; Pian 1999; but see Norris, Bonnell, & Watanabe 1998). It also was exceptionally bright at radio wavelengths and relativistic ejecta seem to be required (Kulkarni et al. 1998; Li & Chevalier 1999). On the basis of optical light–curve studies, the kinetic energy of its ejected matter has been inferred to be much higher than the canonical supernova energy of 10<sup>51</sup> ergs (Iwamoto et al. 1998a; Woosley, Eastman, & Schmidt 1998), unless the matter ejection was extremely asymmetric (Wang & Wheeler 1998). Spectroscopically, SN 1997ef resembled SN 1998bw, but Iwamoto et al. (1998b) were able to fit the SN 1997ef light curve with a normal kinetic energy. However, on the basis of synthetic–spectrum studies such as that which is presented in this paper for SN 1994I, we (Deaton et al. 1998; Branch 1999; Millard et al., in preparation) find that SN 1997ef, like SN 1998bw, was hyper–energetic. The light curve of SN 1997ef can also be fit with a high kinetic energy (K. Nomoto, private communication). According to light–curve studies of SN 1994I (Nomoto et al. 1994; Iwamoto et al. 1994; Young, Baron, & Branch 1995; Woosley, Langer, & Wheeler 1995), it was not a hyper–energetic event. Thus, in addition to being of interest in its own right as the best observed “ordinary” SN Ic, SN 1994I provides a valuable basis for comparison with the apparently hyper–energetic SNe 1997ef and 1998bw. It should be noted that the spectra of SNe 1998bw and 1997ef were definitely unusual; they differed significantly from those of normal SNe Ic such as SN 1994I, but a comparative study (Branch 1999) suggests that the main difference is that SNe 1997ef and 1998bw ejected more mass at high velocity. In this paper we report the results of a study of photospheric–phase spectra of SN 1994I using the parameterized supernova spectrum–synthesis code SYNOW (Fisher et al. 1997, 1999). The observed spectra are discussed in Section 2 and our synthetic–spectrum procedure is described in Section 3. Our results are presented in Section 4 and discussed in Section 5. In a companion paper (Baron et al. 1999), detailed NLTE synthetic spectra of some hydrodynamical models are compared with SN 1994I spectra.

## 2 Observations

Figure 1 displays the spectra that we have studied with SYNOW. The spectra have been corrected for interstellar reddening using $`A_V=1.2`$ mag. It is clear that the reddening was substantial, but the amount is uncertain [e.g., Richmond et al. (1996) estimated $`A_V=1.4\pm 0.5`$ mag].
Throughout this paper, spectral epochs are in days with respect to the date of maximum brightness in the $`B`$ band, 1994 April 8 UT (Richmond et al. 1996). The six spectra of Figure 1 (a) were obtained at the Lick Observatory by Filippenko et al. (1995), who presented additional spectra that are not reproduced here; see their paper for details of the observations and reductions. Also in Figure 1 (a), a spectrum obtained by the Supernova INtensive Study (SINS) group with the Hubble Space Telescope (HST) at $`+11`$ days is combined with the Filippenko et al. (1995) optical spectrum for $`+10`$ days. The seven spectra of Figure 1 (b) were obtained by Brian Schmidt and R.P.K. at the Multiple Mirror Telescope; only two of these have been published previously, in Baron et al. (1996). Photospheric–phase optical spectra of SN 1994I also have been published by Sasaki et al. (1994; spectra obtained from $`-5`$ to $`+8`$ days) and Clocchiatti et al. (1996; from $`-2`$ to $`+56`$ days). The most detailed discussions of line identifications in SN 1994I, and SNe Ic in general, have been by Clocchiatti et al. (1996, 1997). The major contributors to some of the spectral features in Figure 1 are clear. The Ca II H and K doublet ($`\lambda \lambda 3934,3968`$) and the Ca II infrared triplet ($`\lambda \lambda 8498,8542,8662`$) are responsible for the P Cygni features at 3600–4000 Å and 8000–8800 Å, respectively. The Na I D lines ($`\lambda \lambda 5890,5892`$) produce the feature at 5500–6000 Å (and interstellar Na I in M51 produces the narrow absorption near 5900 Å). O I $`\lambda 7773`$ produces the feature at 7400–7900 Å. The structure from about 4300 to 5400 Å is recognizable as blended Fe II lines. The line identifications of the other features — including those in the HST ultraviolet and those longward of 8800 Å — are less obvious. As mentioned above, the strong absorption near 10,250 Å, which we will refer to as “the infrared absorption”, has previously been identified with He I $`\lambda 10830`$ but we will consider other possible contributors. Figure 1 shows at a glance that, as is usual for supernovae, the absorption features tend to drift redward with time, as line formation takes place in ever deeper, slower layers of the ejected matter. Anticipating the results of the analysis presented in Section 3, we show in Figure 2 the velocity at the photosphere, $`v_{phot}`$, that we adopt for our spectral fits. Our adopted values of $`v_{phot}`$ fall from 17,500 km s<sup>-1</sup> at $`-4`$ days to 7000 km s<sup>-1</sup> at $`+26`$ days.

## 3 Procedure

We use the fast, parameterized, supernova spectrum–synthesis code SYNOW to make a “direct” analysis (Fisher et al. 1997, 1999) of spectra of SN 1994I. The goal is to establish line identifications and intervals of ejection velocity within which the presence of lines of various ions is detected, without adopting any particular hydrodynamical model. The composition and velocity constraints that we obtain with SYNOW then can provide guidance to those who compute hydrodynamical explosion models and to those who carry out computationally intensive non–local–thermodynamic–equilibrium (NLTE) spectrum modeling. The SYNOW code is described briefly by Fisher et al. (1997) and in detail by Fisher (1999). In our work on SN 1994I we have made use of the paper by Hatano et al.
(1999), which presents plots of LTE Sobolev line optical depths versus temperature for six different compositions that might be expected to be encountered in supernovae, and displays SYNOW optical spectra for 45 individual ions that can be regarded as candidates for producing identifiable features in supernova spectra. (Electronic data from the Hatano et al. paper, now extended from the optical to include the infrared, can be obtained at the website www.nhn.ou.edu/~baron/papers.html.) For comparison with each observed spectrum in Figure 1 we have calculated many synthetic spectra with various values of the following fitting parameters. The parameter $`T_{bb}`$ is the temperature of the underlying blackbody continuum; the values that we use range from 18,000 K at $`-4`$ days to 6000 K at $`+26`$ days. The parameter $`T_{exc}`$ is the excitation temperature. For each ion that is introduced, the optical depth of a reference line is a fitting parameter, and the optical depths of the other lines of the ion are calculated for Boltzmann excitation at $`T_{exc}`$. Little physical significance is attached to $`T_{exc}`$; even if Boltzmann excitation held strictly, the adopted value of $`T_{exc}`$ would represent some kind of average over the line forming region. The values that we use range from 12,000 K to 5000 K. The radial dependence of the line optical depths is taken to be a power law, $`\tau \propto v^{-8}`$, and the line source function is that of resonance scattering. The most interesting fitting parameters (and therefore the only ones that we quote for each individual synthetic spectrum) are velocity parameters. The values that we use for $`v_{phot}`$, the velocity of matter at the photosphere, have been plotted in Figure 2. The outer edge of the line forming region is at $`v_{max}`$, which we have fixed at 40,000 km s<sup>-1</sup>. In addition, we can introduce restrictions on the velocity interval within which each ion is present; when the minimum velocity assigned to an ion is greater than $`v_{phot}`$, the line is said to be detached from the photosphere.
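The parameterization just described is simple enough to state in a few lines of code. The sketch below is a paraphrase of it for illustration, not the SYNOW source: a power-law optical depth profile with optional detachment, and Boltzmann scaling of line strengths at $`T_{exc}`$. All numerical values in the example are invented.

```python
import numpy as np

# Illustrative sketch of the SYNOW-style parameterization (not SYNOW
# itself): each ion gets a reference-line optical depth at the
# photosphere, scaled with ejecta velocity as tau ~ v**-n, optionally
# "detached" above a minimum velocity; other lines of the ion follow
# Boltzmann excitation at T_exc.

def tau_profile(v, tau_phot, v_phot, n=8, v_detach=None, v_max=40_000.0):
    """Sobolev optical depth versus ejecta velocity v (km/s)."""
    v = np.asarray(v, dtype=float)
    tau = tau_phot * (v / v_phot) ** (-n)
    v_min = v_phot if v_detach is None else v_detach   # detachment velocity
    tau[(v < v_min) | (v > v_max)] = 0.0
    return tau

def boltzmann_factor(E_lower_eV, T_exc):
    """Relative population of a line's lower level at temperature T_exc."""
    k_eV = 8.617e-5                      # Boltzmann constant in eV/K
    return np.exp(-E_lower_eV / (k_eV * T_exc))

v = np.linspace(7_000.0, 40_000.0, 5)
print(tau_profile(v, tau_phot=2.0, v_phot=10_000.0))                     # undetached ion
print(tau_profile(v, tau_phot=2.0, v_phot=10_000.0, v_detach=16_000.0))  # detached ion
print(boltzmann_factor(1.0, 7_000.0))
```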
## 4 Results

Although we have fit all of the spectra in Figure 1, only the fits to the Lick spectra (and the single HST spectrum) are shown here, because they have more extended wavelength coverage than the MMT spectra. Instead of discussing the spectra in chronological order, we first discuss those obtained from $`+7`$ to $`+26`$ days, and then work backward in time from $`+4`$ to $`-4`$ days. One reason for doing this is that establishing line identifications generally is more difficult at earlier times, when line formation takes place in higher velocity layers and the blending is more severe.

### 4.1 From $`+7`$ to $`+26`$ days

Figure 3 compares the $`+7`$ day observed spectrum to a synthetic spectrum that has $`v_{phot}=10,000`$ km s<sup>-1</sup>. The synthetic spectrum includes lines of C II, O I, Na I, Ca II, Ti II, and Fe II, and the fit is good. (The excessive height of the synthetic peaks in the blue part of the spectrum is not of great concern; the number of lines having significant optical depth rises very rapidly toward short wavelengths [e.g., Wagoner, Perez, & Vasu 1991] so our SYNOW synthetic spectra often are underblanketed in the blue because of missing weak lines of unused ions.) The identifications of Ca II, O I, Fe II, and Na I are definite. Lines of Ti II have been introduced to fit the absorption near 4200 Å, and we consider Ti II to be positively identified also. Lines of C II also are introduced, but detached at 16,000 km s<sup>-1</sup>, so that $`\lambda 6580`$ can account for most of the observed absorption near 6200 Å. At some epochs it is difficult to decide between detached C II $`\lambda `$6580 and the undetached Si II $`\lambda \lambda `$6347,6371 doublet (cf. Fisher et al. 1997), but at this epoch Si II would produce an absorption too far to the blue of the observed one. We consider the C II identification to be probably, but not definitely, correct. The main spectral features that remain to be explained, then, are those around 7000 and 9000 Å, and the IR absorption. The $`+7`$ day observed spectrum appears again in Figure 4, where it is compared to three more synthetic spectra. The one in the upper panel is like the one shown in Figure 3, except that (1) He I lines have been introduced in an attempt to account for the core of the IR absorption, and (2) the C II lines have been removed to allow the effects of the optical He I lines to be seen more clearly. For He I, instead of making our usual approximation that the relative populations of the lower levels of interest are given by LTE at excitation temperature $`T_{exc}`$, we assume that the populations of the lower levels of the optical He I lines (2<sup>3</sup>P, 2<sup>1</sup>P) are further reduced, relative to the population of the lower level of $`\lambda 10830`$ (2<sup>3</sup>S), by the geometrical dilution factor (which has the value 0.5 at the photosphere and decreases with radius). This is a reasonable approximation that is roughly consistent with the results of detailed calculations for He I by Lucy (1991; his Figure 3) and Mazzali & Lucy (1998; their Figures 1 and 8). Even with this reduction in the optical depths of the optical He I lines relative to that of $`\lambda 10830`$, when we try to account for the entire infrared absorption with $`\lambda 10830`$ the synthetic features produced by the optical lines, $`\lambda `$7065 (2<sup>3</sup>P), $`\lambda `$5876 (2<sup>3</sup>P), and $`\lambda `$6678 (2<sup>1</sup>P), are much too strong. As shown in the upper panel of Figure 4, even when we attempt to account only for the core of the infrared absorption with $`\lambda 10830`$, by reducing the He I optical depth and detaching the He I lines at 15,000 km s<sup>-1</sup>, the optical He I lines are still too strong. Thus we find that it is difficult to account for the IR absorption with He I $`\lambda 10830`$ alone. Lines of C I have been discussed in connection with the infrared feature by Woosley & Eastman (1997) and Baron et al. (1996). We find that like He I $`\lambda 10830`$, C I (multiplet 1, $`\lambda 10695`$) cannot account for the entire infrared absorption without compromising the fit in the optical. But, as shown in the middle panel of Figure 4, we find that undetached C I $`\lambda 10695`$, combined with He I $`\lambda 10830`$ detached at 18,000 km s<sup>-1</sup>, can do so. C I also accounts for the observed absorption near 9300 Å and it helps near 6800 Å, although it does produce an absorption that is stronger than the observed one near 8800 Å. Introducing C I allows the optical depth of He I $`\lambda 10830`$ to be reduced to the extent that the optical He I lines do no harm.
To complicate matters, however, the lower panel of Figure 4 shows that lines of Si I (mainly multiplets 4, $`\lambda `$12047; 5, $`\lambda `$10790; 6, $`\lambda `$10482; and 13, $`\lambda `$10869), detached at 14,000 km s<sup>-1</sup>, can account for the entire infrared absorption while also doing some good in the optical, especially near 7000 Å. This means that without bringing in additional information, such as convincing identifications of very weak optical He I lines (cf. Clocchiatti et al. 1996, 1997) or rigorous NLTE calculations of line optical depths for a hydrodynamical model whose spectrum matches the entire observed spectrum (cf. Baron et al. 1996, 1999), there is some ambiguity concerning the identification of the infrared absorption. It may be primarily a blend of He I and C I, but at present we cannot exclude the possibility that it is partly, or even entirely, produced by Si I. Figure 5 compares the composite $`+11`$ day HST spectrum and $`+10`$ day optical spectrum to a synthetic spectrum that has $`v_{phot}=9500`$ km s<sup>-1</sup>. Now C II is detached at 15,000 km s<sup>-1</sup>. In addition to the ions needed for the optical spectrum, we have introduced Mg II and Cr II for the UV. More ions would need to be introduced to account for all of the structure in the observed UV, but we leave detailed fitting of the HST UV for future work. Figure 6 compares the $`+26`$ day observed spectrum to synthetic spectra that have $`v_{phot}=7000`$ km s<sup>-1</sup>. In the upper panel the fit is good, except (1) the synthetic O I feature at 8900 Å is too strong, (2) the observed strong net emission in the Ca II infrared triplet cannot be produced with the resonance–scattering source function that we are using, and (3) there is something missing from the synthetic spectrum around 7000 Å. In the lower panel, lines of O II are introduced. The only ones that make a difference are the forbidden lines [O II] $`\lambda \lambda `$7320,7330, which we (Fisher et al. 1999) have discussed in connection with the peculiar Type Ia SN 1991T. In SN 1991T our only alternative to introducing the [O II] lines was to lower our continuum placement and accept the observed flux minimum near 7000 Å as merely a gap between emissions. We would not necessarily expect the same gap to appear in the SN Ia 1991T and the SN Ic 1994I, so the evidence for the presence of the [O II] lines in SN 1994I strengthens the case for their presence in SN 1991T. We regard the identification of the [O II] lines in SN 1994I to be probable, but not definite. Figure 7, for the $`+26`$ day spectrum, is analogous to Figure 4 for the $`+7`$ day spectrum. In the top panel He I, detached at 15,000 km s<sup>-1</sup>, accounts for the infrared absorption, but He I $`\lambda 5876`$ is too strong in the synthetic spectrum. In the middle panel undetached C I and He I detached at 15,000 km s<sup>-1</sup> combine to account for the IR absorption. In the lower panel, Si I detached at 14,000 km s<sup>-1</sup> accounts for the infrared absorption on its own.

### 4.2 From $`+4`$ to $`-4`$ days

Now we return to earlier epochs. Figure 8 compares the $`+4`$ day observed spectrum with a synthetic spectrum that has $`v_{phot}=11,000`$ km s<sup>-1</sup>. At this epoch we use a combination of Si II, undetached but with a maximum velocity of 12,000 km s<sup>-1</sup>, and C II detached at 15,000 km s<sup>-1</sup> to account for the absorption near 6200 Å.
Lines of Sc II and C I also have been introduced, but the fit would be much the same without them and we don’t consider their identification to be established. Figures 9 and 10 compare the $`-2`$ and $`-4`$ day observed spectra with synthetic spectra that have $`v_{phot}=16,500`$ and 17,500 km s<sup>-1</sup> and maximum Si II velocities of 18,000 km s<sup>-1</sup> and 20,000 km s<sup>-1</sup>, respectively. Now lines of Mg II also have been introduced but the identification is not definite. One or more additional ions would need to be included to fit these spectra at wavelengths longer than 9000 Å.

### 4.3 The Inferred Composition Structure

Figure 11 shows our constraints on the composition structure as obtained from the optical spectrum. The arrows show the velocity intervals within which we think the ions have been detected in the observed spectra. The only ion on which we imposed a maximum velocity was Si II, around 20,000 km s<sup>-1</sup>. The only ion that we detached was C II, around 15,000 km s<sup>-1</sup>; this may represent the minimum velocity of the ejected carbon. Because of the ambiguity of the identification of the infrared absorption, He I, C I, and Si I are not shown in Figure 11. When He I $`\lambda 10830`$ has been invoked to account for part of the IR absorption, it has been detached at 15,000 or 18,000 km s<sup>-1</sup>; thus if helium is present at all, this is our estimate for its minimum velocity. For comparison, we note that in model CO21 of Iwamoto et al. (1994) the minimum carbon velocity is at about 12,000 km s<sup>-1</sup> and that of helium is at about 14,000 km s<sup>-1</sup>. In model 7A of Woosley, Langer, & Weaver (1995), which is favored by Woosley & Eastman (1997) for SN 1994I, the minimum velocities of helium and carbon are lower, about 6000 and 8000 km s<sup>-1</sup>, respectively. A problem with attributing the infrared absorption to a blend of detached He I $`\lambda `$10830 and undetached C I $`\lambda `$10695 is that using undetached C I down to 7000 km s<sup>-1</sup> probably is inconsistent with using C II detached at 15,000 km s<sup>-1</sup> for the optical spectrum, because carbon ionization is unlikely to increase outward. On the other hand, when Si I has been invoked to account for the infrared absorption it has been detached at 14,000 km s<sup>-1</sup>. This is not necessarily inconsistent with our use of Si II between 10,000 and 20,000 km s<sup>-1</sup> for the optical spectrum if, as expected, silicon ionization decreases outward. Figure 12 shows SYNOW plots in the infrared region, for four ions that can produce strong absorption features near the observed IR absorption. This figure shows that observed infrared spectra of SNe Ic would help to resolve the true identity of the infrared absorption; e.g., if Si I is responsible for the infrared absorption then we would expect another conspicuous Si I absorption near 11,700 Å.

### 4.4 Mass and Kinetic Energy

Our adopted values of $`v_{phot}`$ can be used to make rough estimates of the mass and kinetic energy above the photosphere as a function of time.
It is easily shown that for spherical symmetry and an $`r^{-n}`$ density distribution, the mass (in $`M_{}`$) and the kinetic energy (in $`10^{51}`$ ergs) above the electron–scattering optical depth $`\tau _{es}`$ are

$$M=(1.2\times 10^{-4})v_4^2t_d^2\mu _e\frac{n-1}{n-3}\tau _{es},$$

$$E=(1.2\times 10^{-4})v_4^4t_d^2\mu _e\frac{n-1}{n-5}\tau _{es},$$

where $`v_4`$ is $`v_{phot}`$ in units of 10,000 km s<sup>-1</sup>, $`t_d`$ is the time since explosion in days, and $`\mu _e`$ is the mean molecular weight per free electron. For illustration we use $`n=8`$, the value we used for the synthetic spectrum calculations, but it should be noted that the value of $`n`$ is not well constrained by our fits. We assume that the explosion occurred on March 30, i.e., that the rise time to maximum brightness was 9 days, and that $`\mu _e=14`$ (e.g., a mixture of singly ionized carbon and oxygen). At $`-4`$ days, using $`v_4=1.75`$ and $`\tau _{es}=2/3`$ leads to $`M=0.1M_{}`$ and $`E=0.6\times 10^{51}`$ ergs. At $`+26`$ days, $`v_4=0.7`$ and $`\tau _{es}=2/3`$ give $`M=0.9M_{}`$ and $`E=0.8\times 10^{51}`$ ergs. The latter values may be overestimates of the values above the photosphere at $`+26`$ days because at this epoch the photosphere could be at $`\tau _{es}<2/3`$ owing to the contribution of lines to the pseudo–continuous opacity. Of course, there is additional mass (but not much kinetic energy) beneath the photosphere. In any case, our spectroscopic estimates of the mass and kinetic energy of SN 1994I are not inconsistent with previous estimates based on light–curve studies.

## 5 Discussion

The photospheric–phase optical spectra of SN 1994I can be fit rather well by SYNOW synthetic spectra that include only “reasonable” ions, i.e., those that can be regarded as candidate ions based on LTE optical–depth calculations (Hatano et al. 1999). In addition to the obvious Ca II, Na I, O I, and Fe II, we have invoked Ti II, C II, Si II, and [O II], and at the earliest phases, C I, Sc II, and Mg II. We regard the identification of Ti II to be definite. Although it can be difficult to distinguish Si II from detached C II, we think that both Si II and C II are needed. We regard the presence of [O II], Sc II, and Mg II to be probable. It is difficult to account for the infrared absorption with He I $`\lambda 10830`$ alone. It may be a blend of C I $`\lambda 10695`$ and He I $`\lambda `$10830, but then there is an apparent inconsistency between the minimum velocities of carbon as inferred from C I and C II. Alternatively, the entire infrared feature can be fit by lines of Si I without compromising the fit in the optical. The true identification of the infrared absorption is a matter that can only be decided by means of detailed NLTE spectrum calculations for realistic hydrodynamical models whose emergent spectra closely match the rest of the optical and infrared spectrum. The spectra of SN 1994I can be well fit with a simple model that includes the assumption of spherical symmetry. Thus, we certainly are not forced to invoke an asymmetry. However, the flux spectrum, unlike the polarization spectrum, is not very sensitive to a mild asymmetry (Jeffery & Branch 1990; Höflich et al. 1996), so the degree to which we can assert that SN 1994I was not very asymmetric (cf. Wang & Wheeler 1998) is unclear. We intend to investigate this issue in future work.
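Returning briefly to the estimates of Section 4.4: the quoted numbers are easy to check numerically. The short script below is purely illustrative; it evaluates the two formulas with $`n=8`$, $`\mu _e=14`$, and $`\tau _{es}=2/3`$, and the comments give the values quoted in the text.

```python
# Numerical check of the mass and kinetic-energy estimates of Section 4.4.
# t_d is the time since explosion (assumed March 30), i.e. 5 d at -4 days
# and 35 d at +26 days relative to B maximum on April 8.

def mass_above_photosphere(v4, t_d, mu_e=14.0, n=8, tau_es=2.0 / 3.0):
    """Mass above the photosphere in solar masses."""
    return 1.2e-4 * v4**2 * t_d**2 * mu_e * (n - 1) / (n - 3) * tau_es

def energy_above_photosphere(v4, t_d, mu_e=14.0, n=8, tau_es=2.0 / 3.0):
    """Kinetic energy above the photosphere in units of 10**51 erg."""
    return 1.2e-4 * v4**4 * t_d**2 * mu_e * (n - 1) / (n - 5) * tau_es

print(mass_above_photosphere(1.75, 5), energy_above_photosphere(1.75, 5))
# -> roughly 0.1 Msun and 0.6e51 erg at -4 days
print(mass_above_photosphere(0.70, 35), energy_above_photosphere(0.70, 35))
# -> roughly 0.9 Msun and 0.8e51 erg at +26 days
```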
Our spectroscopic estimates of the kinetic energy carried by matter above the photosphere are consistent with previous estimates based on the light curve, i.e., they are near the canonical supernova value of $`10^{51}`$ ergs. In this respect, SN 1994I is valuable for comparison with the peculiar SNe Ic 1998bw and 1997ef. We find that the same spectroscopic procedure for estimating the kinetic energy gives much higher kinetic energies for SNe 1997ef and 1998bw (Branch 1999; Millard et al. in preparation). This leads us to believe that SNe 1997ef and 1998bw were hyper–energetic. This work has been supported by NSF grants AST-9417102 to D.B., AST-9731450 to E.B., AST-9417213 to A.V.F., and NASA GO–2563.001 to the SINS group from the Space Telescope Science Institute, which is operated by AURA under NASA contract NAS 5–26555.
## 1 Introduction

It is impossible at present numerically to model all the physical processes which are involved in galaxy formation. Galaxies, as we see them, are made of baryonic material, and at the very least a gaseous component must be included in any modelling. In the hierarchical galaxy formation scenario dissipation by the baryons is also crucial for galaxies to form. However, it is not clear that simply including cooling of the gas is a sensible approach numerically, as this would appear to lead to catastrophic cooling in the early universe. The continual merging of dark halos leads to some shock reheating of cold gas, but it is not obvious that the cooling catastrophe can be averted without help from some other heating mechanism. The most likely heat source is that from star forming regions, particularly supernovae explosions. It seems a reasonable assumption that the energy injected back into the gas from supernovae can act as a negative feedback mechanism and lead to some kind of self-regulation of star formation. This self-regulation is expected to be at its most effective in the early universe, when the characteristic depth of the dark matter potential wells was much lower than today. For our simulations we rely on an effect of numerical resolution to mimic these feedback processes. The finite mass resolution in the simulation suppresses cooling in low mass halos and delays the onset of the formation of the cold dense gas phase (defined by $`T<10^4`$ K and a density 60 000 times the mean gas density) which we identify as galaxies. We do not include star formation in the simulation, but assume that all the cold dense gas in the simulations should really be in the form of stars. The resolution is determined by the gas particle mass. We selected a gas particle mass such that the amount of gas in the cold dense phase at $`z=0`$ in the SCDM model provides a reasonable match to the observed stellar mass density in the local universe. With this choice of gas particle mass, which equates to $`2\times 10^9M_{sun}`$, we are able to simulate a cubic region 100 Mpc on a side with $`128^3`$ particles each of gas and dark matter. This volume is large enough to study clustering statistics on scales up to 10 Mpc with a sample of about 2000 galaxies. The simulations are started from a redshift of 50. The models we have simulated are a $`\mathrm{\Lambda }`$CDM model with $`\mathrm{\Omega }_0=0.3`$, $`\mathrm{\Lambda }_0=0.7`$, $`\sigma _8=0.9`$, $`h=0.7`$, $`\mathrm{\Omega }_b=0.03`$ and an SCDM model with $`\mathrm{\Omega }=1`$, $`\sigma _8=0.6`$, $`h=0.5`$, $`\mathrm{\Omega }_b=0.06`$. The power spectra are the same as those used for the models of , except that we used a higher value of $`\sigma _8`$ for SCDM.

## 2 The clustering of galaxies

The galaxies in the simulation are identified using a friends-of-friends group finder. The object catalog returned by the group finder is relatively insensitive to the linking length. We chose a linking length of 1/50 of the mean physical interparticle separation at $`z=0`$.
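For readers unfamiliar with the method, the sketch below shows a minimal friends-of-friends group finder of the kind described above: particles closer than the linking length are linked, and objects are the connected components of the resulting graph. This is an illustrative toy, not the production group finder used for these simulations.

```python
import numpy as np
from scipy.spatial import cKDTree

# Toy friends-of-friends group finder: link all particle pairs closer
# than the linking length, then extract connected components via a
# simple union-find over the linked pairs.

def friends_of_friends(positions, linking_length):
    tree = cKDTree(positions)
    pairs = tree.query_pairs(linking_length)
    parent = np.arange(len(positions))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i, j in pairs:
        parent[find(i)] = find(j)
    return np.array([find(i) for i in range(len(positions))])

# Example: linking length of 1/50 of the mean interparticle separation
# in a (hypothetical, much smaller) 100 Mpc box of 10^4 particles.
rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 100.0, size=(10_000, 3))
b = (100.0 / 10_000 ** (1.0 / 3.0)) / 50.0
labels = friends_of_friends(pos, b)
print("number of groups:", len(set(labels)))
```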
Figure 1 shows the two-point correlation functions for the simulated galaxies (filled circles) and dark matter (solid line) in the $`\mathrm{\Lambda }`$CDM model. The observed galaxy correlation function is shown as dashed and dotted lines; the latter is from . On large scales the galaxies and mass have essentially the same correlation function, but on scales around $`1`$ Mpc$`/h`$ the galaxy distribution is anti-biased with respect to the mass. The match to the observed galaxy correlation function is pretty good; in particular, the simulated galaxy correlation function is close to a power law, unlike the mass correlation function. A power law galaxy correlation function is also seen in the SCDM simulation. Figure 2 shows the two-point correlation function of the galaxies in the $`\mathrm{\Lambda }`$CDM model at a range of epochs. The amplitude of the correlation function has changed remarkably little from the present back until a redshift of 3, even though the mass correlation function evolves linearly by a factor of 10. At redshift 5 the galaxy clustering strength is significantly greater than at the present. Figure 3 shows the line-of-sight pairwise velocity dispersion (explained in ) of the galaxies in the $`\mathrm{\Lambda }`$CDM model compared to the dark matter, and a determination from the LCRS redshift survey . The galaxies have a smaller dispersion than the dark matter, something which is clearly necessary for the viability of this cosmological model. We have found that if one selects the nearest dark matter particle to each galaxy, then the pairwise dispersion of these particles is essentially the same as that of the galaxies.

## 3 Summary

The clustering of the galaxies in the $`\mathrm{\Lambda }`$CDM simulation is surprisingly close to the observed galaxy correlation function. Significantly, in both the $`\mathrm{\Lambda }`$CDM and SCDM runs the galaxy correlation function is much closer to a power law than the mass correlation function. There is little evolution in the galaxy correlation function between redshift 3 and the present. For the $`\mathrm{\Lambda }`$CDM model the pairwise galaxy velocity dispersion is much lower than that of the dark matter. A similar effect is seen in the pairwise dispersions of galaxies in the SCDM model, but weaker. Acknowledgements. The simulations described here were run at the Edinburgh Parallel Computing Centre by the Virgo Consortium. CSF acknowledges a PPARC Senior Fellowship.
# Effects of temperature upon the collapse of a Bose-Einstein condensate in a gas with attractive interactions

## Abstract

We present a study of the effects of temperature upon the excitation frequencies of a Bose-Einstein condensate formed within a dilute gas with a weak attractive effective interaction between the atoms. We use the self-consistent Hartree-Fock Bogoliubov treatment within the Popov approximation and compare our results to previous zero temperature and Hartree-Fock calculations. The metastability of the condensate is monitored by means of the $`l=0`$ excitation frequency. As the number of atoms in the condensate is increased, with $`T`$ held constant, this frequency goes to zero, signalling a phase transition to a dense collapsed state. The critical number for collapse is found to decrease as a function of temperature, the rate of decrease being greater than that obtained in previous Hartree-Fock calculations.

PACS Numbers: 03.75.Fi, 05.30.Jp, 67.40.Db

Mean field theories of the Bose-Einstein condensation of trapped alkali vapours have been extremely successful, both qualitatively and quantitatively, in determining the excitation frequencies of the condensates, especially at relatively low temperatures ($`\lesssim 0.7T_c`$). These calculations have been based upon the Popov approximation to the Hartree-Fock Bogoliubov (HFB) treatment, where the anomalous average of the fluctuating field operator is neglected. In all cases the study has been of alkali vapours with positive s-wave scattering lengths (i.e., repulsive effective interactions). The case of attractive interactions (<sup>7</sup>Li for example, as used in experiments at Rice University) has not been treated in this manner. Calculations have, rather, been based on the zero temperature Gross-Pitaevskii equation (GPE) or upon a Hartree-Fock variational calculation. There are two main reasons why the HFB formalism was not used in the Hartree-Fock study referred to above. Firstly, in the case of negative scattering length, the HFB-Popov collective excitations of a homogeneous system are unstable at long wavelengths. Houbiers and Stoof therefore found it more appealing to use the Hartree-Fock method, which has stable excitations at long wavelengths. In the case of the trapped gas this does not present a problem, as one is saved from the infra-red limit by the finite zero-point energy of the trap. From an alternative viewpoint, the finite size of the condensate eliminates very long wavelength excitations. The HFB-Popov theory is hence quite applicable for trapped gases. Secondly, there is the possibility that atoms with an attractive effective interaction can undergo a BCS-like pairing transition. This possibility is in fact included in the full theory that Houbiers and Stoof develop. However, in their numerical calculations, they ignore the possibility of pairing and the results presented are based on a Hartree-Fock treatment of Bose-Einstein condensation alone. If one is going to assume that there is no BCS transition, then a better description would appear to be that of the HFB-Popov formalism. This is the treatment adopted in this letter. The purpose of the present investigation is to determine the stability of the condensate against mechanical collapse, and the effects thereon of thermal excitations. It has been shown in the homogeneous limit that the condensate is unstable at the densities required for BEC. In the trap, the additional kinetic energy can stabilise the condensate and a metastable state is possible.
This state decays on a timescale which is long compared to the lifetime of the experiment, but only exists for condensates below a certain size. At some critical condensate number the condensate becomes unstable and collapses. This instability is characterised by the monopolar collective excitation going soft (viz., the excitation frequency goes to zero). Various predictions for the critical number, $`N_c`$, have been made at zero temperature using the GPE and at finite temperatures using the Hartree-Fock treatment. Here we investigate the effects of temperature upon the collapse via the HFB-Popov approach as described briefly below. We make the usual decomposition of the Bose field operator into condensate and noncondensate parts, $`\widehat{\psi }(𝐫)=\mathrm{\Phi }(𝐫)+\stackrel{~}{\psi }(𝐫)`$. The condensate wavefunction $`\mathrm{\Phi }(𝐫)`$ is then defined within the Popov approximation by the generalised Gross-Pitaevskii equation (GPE)

$$\left[-\frac{\hbar ^2\nabla ^2}{2m}+V_{ext}(𝐫)+gn_0(𝐫)+2g\stackrel{~}{n}(𝐫)\right]\mathrm{\Phi }(𝐫)=\mu \mathrm{\Phi }(𝐫).$$ (1)

Here, $`n_0(𝐫)=|\mathrm{\Phi }(𝐫)|^2`$ and $`\stackrel{~}{n}(𝐫)=\langle \stackrel{~}{\psi }^{\dagger }(𝐫)\stackrel{~}{\psi }(𝐫)\rangle `$ are the condensate and noncondensate densities respectively. The Popov approximation consists of omitting the anomalous correlation $`\langle \stackrel{~}{\psi }(𝐫)\stackrel{~}{\psi }(𝐫)\rangle `$, but keeping $`\stackrel{~}{n}(𝐫)`$. The condensate wavefunction in Eq.(1) is normalised to $`N_0`$, the total number of particles in the condensate. $`V_{ext}(𝐫)`$ is the external confining potential and $`g=4\pi \hbar ^2a/m`$ is the interaction strength determined by the $`s`$-wave scattering length $`a`$. For <sup>7</sup>Li the value of $`a`$ used is $`-27.3`$ Bohr radii. The condensate eigenvalue is given by the chemical potential $`\mu `$. The usual Bogoliubov transformation, $`\stackrel{~}{\psi }(𝐫)=\sum _i[u_i(𝐫)\widehat{\alpha }_i-v_i^{*}(𝐫)\widehat{\alpha }_i^{\dagger }]`$, to the new Bose operators $`\widehat{\alpha }_i`$ and $`\widehat{\alpha }_i^{\dagger }`$ leads to the coupled HFB-Popov equations

$$\widehat{}u_i(𝐫)-gn_0(𝐫)v_i(𝐫)=E_iu_i(𝐫)$$ (2)

$$\widehat{}v_i(𝐫)-gn_0(𝐫)u_i(𝐫)=-E_iv_i(𝐫),$$ (3)

with $`\widehat{}\equiv -\hbar ^2\nabla ^2/2m+V_{ext}(𝐫)+2gn(𝐫)-\mu \equiv \widehat{h}_0+gn_0(𝐫)`$. These equations define the quasiparticle excitation energies $`E_i`$ and the quasiparticle amplitudes $`u_i`$ and $`v_i`$. Once these quantities have been determined, the noncondensate density is obtained from the expression

$$\stackrel{~}{n}(𝐫)=\sum _i\left\{|v_i(𝐫)|^2+\left[|u_i(𝐫)|^2+|v_i(𝐫)|^2\right]N_0(E_i)\right\}$$ (4)

$$\equiv \stackrel{~}{n}_1(𝐫)+\stackrel{~}{n}_2(𝐫),$$ (5)

where $`\stackrel{~}{n}_1(𝐫)`$ is that part of the density which reduces to the quantum depletion of the condensate as $`T\to 0`$. The component $`\stackrel{~}{n}_2(𝐫)`$ depends upon the Bose distribution, $`N_0(E)=(e^{\beta E}-1)^{-1}`$, and vanishes in the $`T\to 0`$ limit. Rather than solving the coupled equations in Eq.(3) directly, we introduce the auxiliary functions $`\psi _i^{(\pm )}(𝐫)\equiv u_i(𝐫)\pm v_i(𝐫)`$, which are solutions of a pair of uncoupled equations (a more detailed discussion of the method is presented in Hutchinson et al. of Ref.). The two functions are related to each other by $`\widehat{h}_0\psi _i^{(+)}=E_i\psi _i^{(-)}`$. We note that the collective modes of the condensate can be shown to have an associated density fluctuation given by $`\delta n_i(𝐫)\propto \mathrm{\Phi }(𝐫)\psi _i^{(-)}(𝐫)`$.
To solve these equations we introduce the normalised eigenfunction basis defined by the solutions of $`\widehat{h}_0\varphi _\alpha (𝐫)=\epsilon _\alpha \varphi _\alpha (𝐫)`$ and diagonalize the resulting matrix problem. The lowest energy solution gives the condensate wavefunction $`\mathrm{\Phi }(𝐫)=\sqrt{N_0}\varphi _0(𝐫)`$ with eigenvalue $`\epsilon _0=0`$. The calculational procedure can be summarised for an arbitrary confining potential as follows: Eq.(1) is first solved self-consistently for $`\mathrm{\Phi }(𝐫)`$, with $`\stackrel{~}{n}(𝐫)`$ set to zero. Once $`\mathrm{\Phi }(𝐫)`$ is known, the eigenfunctions of $`\widehat{h}_0`$ required in the expansion of the excited state amplitudes are generated numerically. The matrix problem is then set up to obtain the eigenvalues $`E_i`$, and the corresponding eigenvectors $`c_\alpha ^{(i)}`$ are used to evaluate the noncondensate density. This result is inserted into Eq.(1) and the process is iterated, keeping the condensate number $`N_0`$ and temperature $`T`$ fixed. The level of convergence is monitored by means of the noncondensate number $`\stackrel{~}{N}`$, and the iterations are terminated once $`\stackrel{~}{N}`$ is within one part in $`10^7`$ of its value on the previous iteration. In this way, we generate the self-consistent densities, $`n_0`$ and $`\stackrel{~}{n}`$, as a function of $`N_0`$ and $`T`$.
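The iterative procedure just summarised has a simple structure, sketched below. The three solver routines are stubs standing in for the real numerics (GPE solution, Bogoliubov diagonalization, density assembly); only the flow of the self-consistency loop and the convergence test on $`\stackrel{~}{N}`$ follow the text, and the failure of this loop to converge is the signature of collapse discussed next.

```python
import numpy as np

# Structure-only sketch of the self-consistent HFB-Popov iteration.
# The three routines below are placeholders, not real solvers.

def solve_gpe(n_tilde, N0):
    """Stub: return condensate density n0(r) given the noncondensate."""
    return np.ones(64) * N0 / 64.0

def solve_bogoliubov(n0):
    """Stub: return quasiparticle energies E_i and amplitudes (u_i, v_i)."""
    E = np.arange(1, 11, dtype=float)
    u = np.ones((10, 64))
    v = 0.1 * np.ones((10, 64))
    return E, u, v

def noncondensate_density(E, u, v, beta):
    """n_tilde(r) = sum_i { v_i^2 (1 + N0(E_i)) + u_i^2 N0(E_i) }."""
    N_bose = 1.0 / (np.exp(beta * E) - 1.0)
    return (v**2).T @ (1.0 + N_bose) + (u**2).T @ N_bose

def hfb_popov(N0, beta, tol=1e-7, max_iter=500):
    n_tilde = np.zeros(64)
    N_tilde_old = 0.0
    for _ in range(max_iter):
        n0 = solve_gpe(n_tilde, N0)
        E, u, v = solve_bogoliubov(n0)
        n_tilde = noncondensate_density(E, u, v, beta)
        N_tilde = n_tilde.sum()
        # Converged once N_tilde is stable to one part in 10^7.
        if abs(N_tilde - N_tilde_old) < tol * max(N_tilde, 1.0):
            return n0, n_tilde
        N_tilde_old = N_tilde
    raise RuntimeError("no convergence: possible collapse beyond N_c")

n0, n_tilde = hfb_popov(N0=1000, beta=1.0)
print(n_tilde.sum())
```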
We consider first the case of $`T=0`$ for an isotropic harmonic trap with a frequency equal to the geometric average of the frequencies corresponding to the Rice trap, $`\overline{\nu }=144.6`$ Hz. This is the geometry considered previously by Houbiers and Stoof, with whom we find qualitative agreement. There are several signatures of a collapse of the condensate with increasing condensate number $`N_0`$. First, we can look at the behaviour of the convergence parameter (the total number of particles in the noncondensate for a given condensate number and temperature) used to monitor the convergence of the solution to the HFB-Popov equations. In Fig. 1 we show $`\stackrel{~}{N}`$ as a function of iteration number for the three values $`N_0=1243`$, 1244, and 1245. The convergence is clear in the first two cases, whereas in the final case the algorithm diverges catastrophically and no stable solution can be found. We therefore identify the critical number, $`N_c`$, of atoms in the condensate as 1244, beyond which the condensate is no longer metastable, but unstable to the formation of a dense solid phase. This value of $`N_c`$ is slightly greater than the value of 1241 obtained by Houbiers and Stoof using the Hartree-Fock approximation. A second, more physical indicator of the collapse is the observed strong dependence of the excitation frequencies on the number of condensate atoms. In particular, we find that the $`l=0`$ mode goes soft as $`N_0`$ approaches the critical number found above. We shall focus on this criterion for the instability in the following. We next consider a trap with confining frequency 150 Hz. The excitation frequencies are again calculated as a function of the number of particles in the condensate, both at $`T=0`$ and at finite temperature. The lowest lying modes at temperatures of 0, 200, and 400 nK are shown in Fig. 2. The lowest mode is the $`l=1`$ Kohn mode, which corresponds to a rigid centre of mass motion. For a harmonic trap the excitation frequency of this mode should be identically equal to the trap frequency. However, the dynamics of the noncondensate are neglected in this treatment and the calculated excitations are those of the condensate alone, moving in the effective static potential $`V_{eff}=V_{ext}+2g\stackrel{~}{n}(𝐫)`$. Due to the presence of the noncondensate, the effective potential is not parabolic and hence the generalised Kohn theorem does not apply. The Kohn theorem is approximately obeyed for low temperatures and low particle numbers, since the noncondensate is either small, or relatively uniform over the extent of the condensate, and hence does not introduce a significant anharmonicity. It is only for higher temperatures near $`N_c`$, where the noncondensate density is both large and sharply peaked around the centre of the trap, that there is a marked deviation from the trap frequency. As mentioned above, the softening of the $`l=0`$ breathing mode is a signature of the instability from a metastable condensate to a completely collapsed state. For $`T=0`$ the critical number, $`N_c`$, is found to be 1227, slightly lower than that obtained in the previous case, since the confining potential is now slightly stiffer. This is the change in the critical number expected on the basis of the dependence $`N_c\propto 1/\sqrt{\omega _0}`$, which shows that the critical number increases as the trap confinement is relaxed. With increasing temperature, the frequency of the $`l=0`$ mode is found to go to zero at lower condensate numbers. This is because the attractive nature of the interactions with the thermal cloud creates an effective potential for the condensate which is stiffer than the applied external potential. The peak density of the condensate hence increases with temperature (for fixed $`N_0`$) and the critical condensate number is reduced. The critical number for 200 nK, as obtained from the failure to find a converged solution at larger $`N_0`$, is $`N_c=1093`$. That for $`T=400`$ nK is $`N_c=1016`$. The critical condensate number as a function of temperature is shown in the inset of Fig. 2. It should be noted that the total number of trapped atoms, $`N`$, varies for each point in the figure. Alternatively one could vary $`T`$ (and hence $`N_0`$) keeping $`N`$ fixed, which would give a critical temperature for collapse. Experimentally, evaporative cooling removes atoms. A certain total number, corresponding to the transition temperature, is reached at which condensation occurs. Further cooling (removal of atoms) then proceeds to a point where a second critical temperature (or total number) is reached, at which point the second phase transition (i.e. collapse) is observed. However, for the experiments on <sup>7</sup>Li the difference between the critical temperatures for BEC and collapse is extremely small. Cooling thus results in repeated collapse and growth, reducing $`N`$ until a stable $`N_0`$ is reached. The collapse occurs because as one increases $`N_0`$, the peak noncondensate density increases due to the interactions. This in turn creates a tighter and tighter effective potential for the condensate, which eventually results in collapse. Fig. 3 shows the noncondensate density at 100 nK for a range of $`N_0`$. The dotted curve is for $`N_0=50`$, the dashed-dot-dot curve that for $`N_0=1000`$. Note that for a large change in $`N_0`$ the peak noncondensate density has changed relatively little. The next three curves are for $`N_0=`$ 1100, 1130, and 1146, the final figure being the critical number. The peak noncondensate density increases rapidly over this range.
This is a cooperative effect; the tighter effective potential reduces the frequency of the lowest collective mode. This leads to a growth in the population of this low lying mode, which has a density localised near the centre of the condensate, increasing the peak noncondensate density. The variation of the critical number with temperature is shown in the inset to Fig. 2. The rate of decrease of $`N_c`$ with $`T`$ is significantly greater in the HFB-Popov treatment than in the Hartree-Fock treatment (see Fig. 8 of Ref. 5). This is due to the different excitation spectra calculated in the two formalisms. In our treatment we calculate (and populate) the collective excitations, which include the low lying $`l=0`$ mode. Near collapse this mode has a much lower frequency than the lowest single particle excitation of the Hartree-Fock spectrum. The population of excited states is therefore underestimated in the Hartree-Fock treatment and, as a result, the thermal population of the state is lower than it is with the HFB-Popov spectrum. The noncondensate population, and hence peak density, increases more rapidly as a function of temperature in our calculation (cf. inset to Fig. 3 and Fig. 7 of Ref. 5). This is what gives rise to the more rapid reduction in the critical number as the temperature is increased. In conclusion, we have presented the first self-consistent HFB-Popov calculations for a dilute gas of atoms with attractive effective interactions. We have studied the collective mode frequencies of such a gas and, using these frequencies, investigated the phase transition from metastable Bose-Einstein condensate to a collapsed dense phase. The results from these calculations are in general agreement with previous Hartree-Fock results, but we feel that the HFB-Popov approach is the more appropriate one to use if only a BEC transition is assumed to take place. We find a significantly greater dependence of the critical number upon temperature in the HFB-Popov treatment. If one includes the possibility of a BCS-like pairing transition then this is not the appropriate approach, as the omitted pair correlations (the so-called anomalous average) are very important. Indeed the pair correlation term, $`\langle \stackrel{~}{\psi }(𝐫)\stackrel{~}{\psi }(𝐫)\rangle `$, becomes the order parameter for the BCS-like transition. The possibility of such a transition, or the existence of mixed phases containing both BEC and BCS macroscopic quantum states, is currently under investigation. This work was supported by grants from the Natural Sciences and Engineering Research Council of Canada and from the United Kingdom Engineering and Physical Sciences Research Council. We would like to thank Henk Stoof and Keith Burnett for many helpful and enlightening discussions.
# Finite-element theory of transport in ferromagnet-normal metal systems

## Abstract

We formulate a theory of spin dependent transport in an electronic circuit involving ferromagnetic elements with non-collinear magnetizations which is based on the conservation of spin and charge current. The theory considerably simplifies the calculation of the transport properties of complicated ferromagnet-normal metal systems. We illustrate the theory by considering a novel three terminal device.

Electron transport in hybrid systems involving ferromagnetic and normal metals has been shown to exhibit new phenomena due to the interplay between spin and charge. The giant magnetoresistance (GMR) effect in metallic magnetic multilayers is a result of spin dependent scattering. The manganese oxides exhibit a colossal magnetoresistance due to a ferromagnetic phase transition. The dependence of the current on the relative angle between the magnetization directions has been reported in transport through tunnel junctions between ferromagnetic reservoirs. Transport involving ferromagnets with non-collinear magnetizations has also been studied theoretically. Johnson and Silsbee demonstrated that spin dependent effects are also important in systems with more than two terminals. Their ferromagnetic-normal-ferromagnetic (F-N-F) device manifests a transistor effect that depends on the relative orientation of the magnetization directions. Recently another three terminal spin electronics device was realized: a ferromagnetic single-electron transistor. In this case the current depends on the relative orientation of the magnetizations of the source, the island and the drain, and on the electrostatic potential of the island, tuned by a gate voltage. These examples illustrate that devices with ferromagnetic order deserve a thorough theoretical investigation. Inspired by the circuit theory of Andreev reflection, we present a finite-element theory of transport in hybrid ferromagnetic-normal metal systems based on the conservation of charge and spin current. We demonstrate that spin transport can be understood in terms of four generalized conductances for each contact between a ferromagnet and a normal metal. The relations between these conductance parameters and the microscopic details of the contacts are derived and calculated for diffuse, tunnel and ballistic contacts. Finally, we illustrate the theory by computing the current through a novel 3-terminal device. Let us first explain the basic idea of the finite-element theory of spin transport. The system can be divided into (normal or ferromagnetic) “nodes”, where each node is characterized by the appropriate generalization of the distribution function, viz. a $`2\times 2`$ distribution matrix in spin space. The nodes are connected to each other and to the reservoirs by “contacts” which limit the total conductance but are arbitrary otherwise. The charge and spin current through the contacts is related to the distribution matrices of the adjacent nodes. Provided these relations are known, we can solve for the $`2\times 2`$ distribution matrices in the nodes under the constraint of conservation of spin and charge current in each node, and thus determine the transport properties of the system. These macroscopic relations for each contact can be found in terms of the microscopic scattering matrices in the spirit of the Landauer-Büttiker formalism.
The scattering matrices can be calculated using different models like a two-spin band model or realistic band-structures and for various contacts, e.g. ballistic or diffuse wires or tunnel junctions. Phase coherent scattering as in a resonant tunneling devices and effects like the Coulomb blockade can be included in principle by calling the double barrier a ”contact” with complex scattering properties, but these complications will be disregarded in the following. The device depicted in Fig. 1 will serve to illustrate our approach. Several contacts attach a normal metal node to (ferromagnetic or normal) metallic reservoirs. We assume that the resistances of the contacts are much larger than the resistance of the node. This is fulfilled when the area of the contact is sufficiently smaller than the cross-section of the node or when the contacts are in the tunneling regime. The current through the system and the distribution matrix in the node are determined by the properties of the contacts. The reservoirs are supposed to be large and in local equilibrium with a chemical potential $`\mu _\alpha `$, where the subscript $`\alpha `$ labels the reservoirs. The energy dependent distribution matrix in the (ferromagnetic) reservoir is then diagonal in spin-space $`\widehat{f}_\alpha ^F(ϵ)=\widehat{1}f(ϵ,\mu _\alpha )`$, where hat ($`\widehat{}`$) denotes a $`2\times 2`$ matrix in spin space, $`\widehat{1}`$ is the unit matrix and $`f(ϵ,\mu _\alpha )`$ is the the Fermi-Dirac distribution function. The direction of the magnetization is denoted by the unit vector $`𝐦_\alpha `$. When the chemical potentials of the reservoirs are not identical, the normal metal node is not in equilibrium and there can be a spin-accumulation on the normal metal node. The distribution is therefore represented by a $`2\times 2`$ matrix in spin-space, $`\widehat{f}^N(ϵ)`$ which allows a spin accumulation with arbitrary direction of the spins. The normal metal node is considered to be large and chaotic either because of impurity scattering inside the node or because of scattering at irregularities of its boundary. The distribution matrix inside the node is therefore isotropic in momentum space and depends only on the energy of the particle. The current through a contact is determined by its scattering matrix, the Fermi-Dirac distribution function of the adjacent ferromagnetic reservoir and the $`2\times 2`$ non-equilibrium distribution matrix in the normal node. The current is evaluated close to the contact on the normal side. The $`2\times 2`$ current in spin-space per energy interval at energy $`ϵ`$ leaving the node is $$\frac{h}{e^2}\widehat{i}=\underset{nm}{}\left[\widehat{r}^{nm}\widehat{f}^N(\widehat{r}^{nm})^{}+\widehat{t}^{nm}\widehat{f}^F(\widehat{t}^{nm})^{}\right]M\widehat{f}^N$$ (1) where $`M`$ is the number of propagating channels, $`\widehat{r}^{nm}(ϵ)`$ is the reflection matrix for an electron coming from the normal metal in mode $`m`$ being reflected to mode $`n`$ and $`\widehat{t}^{nm}(ϵ)`$ is the transmission matrix for an electron from the ferromagnet in mode $`m`$ transmitted to the normal metal in mode $`n`$. The total current is obtained by integrating over the energies, $`\widehat{I}=𝑑ϵ\widehat{i}(ϵ)`$. The current in the contact is thus completely determined by the scattering matrix of the contact, and the distribution matrices. 
The $`2\times 2`$ non-equilibrium distribution matrix in the node in the stationary state is uniquely determined by current conservation $$\underset{\alpha }{}\widehat{i}_\alpha =\left(\frac{\widehat{f}^N}{t}\right)_{\text{rel}},$$ (2) where $`\alpha `$ labels different contacts and the term on the right hand side describes spin relaxation in the normal node. The right hand side of Eq. (2) can be set to zero when the spin current in the node is conserved, i.e. when an electron spends much less time on the node than the spin-flip relaxation time $`\tau _{\text{sf}}`$. If the size of the node in the transport direction is smaller than the spin-flip diffusion length $`l_{\text{sf}}=\sqrt{D\tau _{\text{sf}}}`$, where $`D`$ is the diffusion coefficient then the spin relaxation in the node can be introduced as $`(\widehat{f}^N/t)_{\text{rel}}=(\widehat{1}\text{Tr}(\widehat{f}^N)/2\widehat{f}^N)/\tau _{\text{sf}}`$. If the size of the node in the transport direction is larger than $`l_{\text{sf}}`$ the simplest finite-element transport theory fails and we have to use a more complicated description with a spatially dependent spin distribution function. Eq. (2) gives the $`2\times 2`$ distribution matrix of the node in terms of Fermi-Diract distribution functions of the reservoirs. These distribution functions are determined by voltages of the reservoirs. Those voltages are either set by voltage sources or determined by conventional circuit theory. We will now demonstrate that the relation (1) between the current and the distributions has a general macroscopic form. Spin-flip processes in the contacts are disregarded, so that the reflection matrix for an incoming electron from the normal metal can be written as $`\widehat{r}^{nm}=_s\widehat{u}^sr_s^{nm}`$, where $`s=,`$, $`r_s^{nm}`$ are the spin dependent reflection coefficients in the basis where the spin quantization axis is parallel to the magnetization in the ferromagnet, $`\widehat{u}^{}=(\widehat{1}+\widehat{𝝈}𝐦)/2`$, $`\widehat{u}^{}=(\widehat{1}\widehat{𝝈}𝐦)/2`$ and $`\widehat{𝝈}`$ is a vector of Pauli matrices. Similarly for the transmission matrix $`\widehat{t}^{nm}(\widehat{t}^{nm})^{}=_s\widehat{u}^s|t_s^{nm}|^2,`$ where $`t_s^{nm}`$ are the spin dependent transmission coefficients. Using the unitarity of the scattering matrix, we find that the general form of the relation (1) reads $`\widehat{i}`$ $`=`$ $`G^{}\widehat{u}^{}\left(\widehat{f}^F\widehat{f}^N\right)\widehat{u}^{}+G^{}\widehat{u}^{}\left(\widehat{f}^F\widehat{f}^N\right)\widehat{u}^{}`$ (4) $`G^{}\widehat{u}^{}\widehat{f}^N\widehat{u}^{}(G^{})^{}\widehat{u}^{}\widehat{f}^N\widehat{u}^{},`$ where we have introduced the spin dependent conductances $`G^s`$ $`G^s={\displaystyle \frac{e^2}{h}}\left[M{\displaystyle \underset{nm}{}}|r_s^{nm}|^2\right]={\displaystyle \frac{e^2}{h}}{\displaystyle \underset{nm}{}}|t_s^{nm}|^2`$ (5) and the mixing conductance $$G^{}=\frac{e^2}{h}\left[M\underset{nm}{}r_{}^{nm}(r_{}^{nm})^{}\right].$$ (6) We thus see that the relation between the current through a contact and the distribution in the ferromagnetic reservoir and the normal metal node is determined by 4 conductances, the two real spin conductances ($`G^{}`$, $`G^{}`$) and the real and imaginary parts of the mixing conductance $`G^{}`$. These contact-specific parameters can be obtained by microscopic theory or from experiments. The spin conductances $`G^{}`$ and $`G^{}`$ have been used in descriptions of spin-transport for a long time . 
The mixing conductance is a new concept which is relevant for transport between non-collinear ferromagnets. The mixing conductance rotates spins around the magnetization axis of the ferromagnet. Note that although the mixing conductance is a complex number the $`2\times 2`$ current in spin-space is hermitian and consequently the current and the spin-current in an arbitrary direction given by Eq. (4) are real numbers. Generally we can show that $`\text{Re}G^{}(G^{}+G^{})/2`$. Below we present explicit results for the conductances when the contacts are in the diffuse, tunneling and ballistic regimes. For a diffuse contact Eq. (4) can quite generally be found by the Green function technique developed in Ref. . Here we use a much simpler approach based on the diffusion equation. On the normal metal side of the contact the boundary condition to the diffusion equation is set by the distribution matrix in the node $`\widehat{f}^N`$. On the ferromagnet side of the contact the boundary condition is set by the equilibrium distribution function in the reservoir $`f^F\widehat{1}`$. In a ferromagnetic metal transport of spins non-collinear to the local magnetization leads to a relaxation of the spins since electrons with different spins are not coherent. This causes an additional resistance, which as other interface related excess resistances, is assumed to be small compared to the diffuse bulk resistance. Sufficiently far from the ferromagnetic-normal metal interface the distribution function of the electronic states in the ferromagnet can always be represented by two components. Only the spin-current parallel to the magnetization of the ferromagnet is conserved. We denote the cross-section of the contact $`A`$, the length of the ferromagnetic part of the contact $`L^F`$, the length of the normal part of the contact $`L^N`$, the (spin dependent) resistitivity in the ferromagnet $`\rho ^{Fs}`$, the resistivity in the normal metal $`\rho ^N`$, so that the (spin dependent) conductance of the ferromagnetic part of the contact is $`G^{DFs}=A/(\rho ^{Fs}L^F)`$ and the conductance of the normal part of the contact is $`G^{DN}=A/(\rho ^NL^N)`$. Solving the diffusion equation $`^2\widehat{f}=0`$ on the normal and ferromagnetic side with the boundary conditions above, we find the current through a diffuse contact: $`\widehat{i}^D`$ $`=`$ $`G^D\widehat{u}^{}\left(\widehat{f}^F\widehat{f}^N\right)\widehat{u}^{}+G^D\widehat{u}^{}\left(\widehat{f}^F\widehat{f}^N\right)\widehat{u}^{}`$ (8) $`G^{DN}\left(\widehat{u}^{}\widehat{f}^N\widehat{u}^{}+\widehat{u}^{}\widehat{f}^N\widehat{u}^{}\right),`$ where the total spin dependent conductance is $`1/G^{Ds}=1/G^{DFs}+1/G^{DN}`$. This result can be understood as a specific case of the generic Eq. (4) with $`G^{}=G^D`$, $`G^{}=G^D`$, and $`G^{}=G^{DN}`$. The mixing conductance in the diffuse limit therefore depends on the conductance of the normal part of the contact only, which is a consequence of the relaxation of spins non-collinear to the magnetizations direction in the ferromagnet. For a ballistic contact, we use a simple semiclassical model proposed in Ref. . In this model the channels are either completely reflected or transmitted, with $`N^{}`$ and $`N^{}`$ being the number of transmitted channels for different spin directions. Substituting this in (1) we find that the spin conductance $`G^B=(e^2/h)N^{}`$, $`G^B=(e^2/h)N^{}`$ and the mixed conductance is determined by the lowest number of reflected channels, $`G^B=\text{max}(G^B,G^B)`$ and is real. 
For a tunneling contact we can expand Eq. (1) in terms of the small transmission. We find that $`\text{Re}G^T=(G^T+G^T)/2`$, where $`G^T`$ and $`G^T`$ are the tunneling conductances. The imaginary part of $`G^T`$ can be shown to be of the same order of magnitude as $`G^T`$ and $`G^T`$ but it is not universal. We will now illustrate the theory by computing the current through the 3-terminal device shown in Fig. 2. A normal metal node (N) is connected to 3 ferromagnetic reservoirs (F1, F2 and F3) by arbitrary contacts parameterized by our spin-conductances. A source-drain bias voltage $`V`$ applied between reservoir 1 and 2 causes an electric current $`I`$ between the same reservoirs. The charge flow into reservoir 3 is adjusted to zero by the chemical potential $`\mu _3`$. Still, the magnetization direction $`𝐦_3`$ influences the current between reservoir 1 and 2. We assume that spin relaxation in the normal node can be disregarded so that the right hand side of (2) is set to zero. Furthermore, we assume that the voltage bias $`V`$ is sufficiently small so that the energy dependence of the transmission (reflection) coefficients can be disregarded. To further simplify the discussions the contacts 1 and 2 are taken to be identical, $`G_1^{}=G_2^{}G^{}`$, $`G_1^{}=G_2^{}G^{}`$ and $`G_1^{}=G_2^{}G^{}`$. Contact 3 is characterized by the conductances $`G_3^{}`$, $`G_3^{}`$ and $`G_3^{}`$. We find the distribution in the normal node by solving the 4 linear Eqs. (2). The current through the contact between reservoir 1 (2) and the node is obtained by inserting the resulting distribution for the normal node into Eq. (4). When the magnetizations in reservoir 1 and 2 are parallel there is no spin-accumulation since contacts 1 and 2 are symmetric and consequently ferromagnet 3 does not affect the transport properties. The current is then simply a result of two total conductances $`G=G^{}+G^{}`$ in series, $`I=GV/2`$. The influence of ferromagnet 3 is strongest when there is a significant spin accumulation in the normal metal node, and in the following the magnetizations of the source and drain reservoirs are antiparallel, $`𝐦_1𝐦_2=1`$. We denote the relative angle between the magnetization in reservoir 3 and reservoir 1 (reservoir 2) $`\theta _3`$ ($`\pi \theta _3`$). The current is an even function of $`\theta _3`$ and symmetric with respect to $`\theta _3\pi \theta _3`$ as a result of the symmetry of the device, e.g. the current when the magnetizations in reservoir 1 and 3 are parallel equals the current when the magnetizations in reservoir 1 and 3 are antiparallel. Due to the finite mixing conductance at non-collinear magnetization the third contact acts as a drain for the spin-accumulation in the node, thus allowing a larger charge current between reservoir 1 and 2. The relative increase of the current due to the reduced spin-accumulation $`\mathrm{\Delta }_3(\theta _3)=[I(\theta _3)I(\theta _3=0)]/I(\theta _3=0)`$, is plotted in Fig. 3 as a function of $`\theta _3`$. The maximum of $`\mathrm{\Delta }_3`$ is achieved at $`\theta _3=\pi /2`$ ($`\theta _3=3\pi /2`$) and equals ($`\text{Im}G_3^{}=0`$) $$\mathrm{\Delta }_3=P^2\frac{2GG_3}{2G+G_3\eta _3}\frac{\eta _31+P_3^2}{2G(1P^2)+G_3(1P_3^2)}$$ (9) introducing the total conductance of the contact $`G_i=G_i^{}+G_i^{}`$, the polarization of the contact $`P_i=(G_i^{}G_i^{})/(G_i^{}+G_i^{})`$ and the relative mixing conductance $`\eta =2G_i^{}/(G_i^{}+G_i^{})`$. 
The influence of the direction of the magnetization of the reservoir 3 increases with increasing polarization $`P`$ and increasing relative mixing conductance $`\eta _3`$ and reaches its maximum when the total conductances are of the same order $`G_3G`$. Note that the physics of this three terminal device is very different from that of Johnson’s spin transistor; the latter operates with collinear magnetizations of two ferromagnetic contacts whereas the third may be normal. In conclusion we have proposed a finite-element transport theory for spin transport in mesoscopic systems. In the presence of ferromagnetic order a contact can be described by 4 conductance parameters which we obtained explicitly for diffuse, ballistic and tunnel contacts. We have applied the theory to a novel three terminal device with arbitrary contacts. This work is part of the research program for the “Stichting voor Fundamenteel Onderzoek der Materie” (FOM), which is financially supported by the ”Nederlandse Organisatie voor Wetenschappelijk Onderzoek” (NWO). We acknowledge benefits from the TMR Research Network on “Interface Magnetism” under contract No. FMRX-CT96-0089 (DG12-MIHT) and support from the NEDO joint research program (NTDP-98). We acknowledge stimulating discussions with Wolfgang Belzig, Daniel Huertas Hernando and Junichiro Inoue.
no-problem/9906/chao-dyn9906027.html
ar5iv
text
# A Tool to Recover Scalar Time-Delay Systems from Experimental Time Series ## Figure captions * Time series of the scalar time-delay system (2) obtained from a computer experiment ($`\tau _0=40.00`$). * Length $`L`$ of the polygon line connecting all ordered points of the projected point set $`(y_\tau ^i,\dot{y}^i)`$ versus $`\tau `$. $`L`$ has been normalized so that a maximally uncorrelated point set has the value $`L=1.0`$. The inset shows a close-up of the $`\tau `$-axis around the local minimum at $`\tau =\tau _0=40.00`$. Additionally, $`L\left(\tau \right)`$-curves gained from the analysis of noisy time series are shown (no additional noise – straight line, signal-to-noise ratio of $`100`$ – open circles, and signal-to-noise ratio of $`10`$ – squares). * (a)-(c): Trajectory $`\stackrel{}{y}_\tau \left(t\right)`$ which has been projected from the infinite dimensional phase space to the $`(y_\tau ,y,\dot{y})`$-space under variation of $`\tau `$. (a) $`\tau =20.00`$. (b) $`\tau =39.60`$. (c) $`\tau =\tau _0=40.00`$. (d)-(f): Projected point set $`\stackrel{}{y}_\tau ^i=(y_\tau ^i,\dot{y}^i)`$ resulting from the intersection of the projected trajectory $`\stackrel{}{y}_\tau \left(t\right)`$ with the $`\left(y=1.1\right)`$\- plane under variation of $`\tau `$. (d) $`\tau =20.00`$. (e) $`\tau =39.60`$. (f) $`\tau =\tau _0=40.00`$. * (a) Comparison of the function $`f`$ (line) of equation (2) with its recovery from the time series (points). (b) Comparison of the function $`g`$ (line) of equation (2) with its recovery from the time series (points). * Length $`L`$ of the polygon line connecting all ordered points of the projected point set $`\stackrel{}{y}_\tau ^i=(y_\tau ^i,y^i)`$ versus $`\tau `$ for (a) the Shinriki and (b) the Mackey-Glass oscillator. $`L\left(\tau \right)`$ has been normalized so that it has the value $`L=1`$ for an uncorrelated point set. (c) Comparison of the nonlinear characteristics of the Mackey-Glass oscillator, which is the function $`f\left(y_{\tau _0}\right)`$ of an ansatz of the form $`h(y,y_{\tau _0})=f\left(y_{\tau _0}\right)+g\left(y\right)`$, measured directly on the oscillator (line) with its recovery from the time series (dots).
no-problem/9906/math9906147.html
ar5iv
text
# On 𝐴_𝑘–singularity on a plane curve of fixed degree. There is a general problem to describe singularities which can be met on algebraic hypersurfaces, in particular on plane curves, of fixed degree (see, e.g., ). Here we shall consider $`A_k`$–singularities which can be met on a plane curve of degree $`d`$. Let $`k(d)`$ be the maximal possible integer $`k`$ such that there exists a plane curve of degree $`d`$ with an $`A_k`$–singularity. Statement 1 gives an upper bound for $`k(d)`$. According to it $`\overline{lim}_d\mathrm{}k(d)/d^23/4`$. We construct a plane curve of degree $`28s+9`$ ($`sZ_0`$) which has an $`A_k`$–singularity with $`k=420s^2+269s+42`$. Therefore one has $`\underset{¯}{lim}_d\mathrm{}k(d)/d^215/28`$ (pay attention that $`15/28>1/2`$). The example is constructed basically in the same way as a curve of degree $`22`$ with an $`A_{257}`$ singularity in (the aim of that example was somewhat different). ###### Statement 1 $`k(d)(d1)^2\left[\frac{d}{2}\right]\left(\left[\frac{d}{2}\right]1\right).`$ Remark. We believe that this statement is known, however we have not found a reference for it. Proof. Without any loss of generality one can suppose that the curve is reduced. Therefore one can choose the infinite line in the complex projective plane $`CP^2`$ so that it intersects the curve at $`d`$ different points. Let the affine part of the curve $`d`$ be given by the equation $`\{f(x,y)=0\}`$ (where $`f`$ is a polynomial of degree $`d`$). The surface $`\{f(x,y)+z^2=\epsilon \}C^3`$ is nonsingular and the negative inertia index of the intersection form on its second homology group $`H_2(V;R)`$ just coincides with the right hand side of the inequality in the statement (this follows, e.g., from the result of J.Steenbrink on the intersection form for quasihomogeneous singularities (), applied to the (isolated) singularity $`f_d(x,y)+z^2`$, where $`f_d(x,y)`$ is the homogeneous part of degree $`d`$ of the polynomial $`f`$. Now the statement follows from the facts that the intersection form of the $`A_k`$–singularity of $`3`$ variables is negative defined and, if the considered curve has an $`A_k`$–singularity, then the vanishing homology group of this singularity is embedded into the homology group $`H_2(V;R)`$ (see, e.g., ). $`\mathrm{}`$ ###### Statement 2 There exists a plane curve of degree $`28s+9`$ which has an $`A_k`$–singularity with $`k=420s^2+269s+42`$ ($`sZ_0`$). Proof. Let $`\mathrm{}=3s+1`$, $`m=7s+2`$, and let an (affine plane) curve $`C`$ be given by the equation $$F(x,y)=y^22yA(x,y)+x^8\mathrm{}+4x^7\mathrm{}y^m=0,$$ (1) where $`A(x,y)=\left[x^4\mathrm{}+2x^3\mathrm{}y^m2x^2\mathrm{}y^{2m}+4x^{\mathrm{}}y^{3m}10y^{4m}\right]`$. The degree $`d`$ of the curve $`C`$ is equal to $`4m+1=7\mathrm{}+m=28s+9`$. Let $`z=yA(x,y)`$ ($`x`$ and $`z`$ are local coordinates on the plane near the origin). Then $$F(x,y)=z^2+56x^3\mathrm{}y^{5m}56x^2\mathrm{}y^{6m}+80x^{\mathrm{}}y^{7m}100y^{8m}.$$ (2) Since $`z(x,y)=yx^4\mathrm{}+`$ terms of higher degree, one has $`y(x,z)=z+x^4\mathrm{}+`$ terms of higher degree. Substituting this expression into (2) one gets $$F(x,z)=z^2+56x^{3\mathrm{}+20\mathrm{}m}+a_{ij}x^iz^j=z^2+56x^{k+1}+a_{ij}x^iz^j,$$ where $`k=420s^2+269s+42`$ and the sum contains only members with powers $`(i,j)`$ which lie over the segment $`(0,2)(k+1,0)`$, i.e., those for which $`i/(k+1)+j/2>1`$. This proves the statement. $`\mathrm{}`$ Remark. 
There exists a similar problem to determine the maximal number $`k^{}=k^{}(d)`$ such that the singularity $`A_k^{}`$ is an adjacent to an isolated homogeneous singularity of degree $`d`$ (or (what is equivalent) to a singularity from the same stratum $`\mu =const`$). One has $`k^{}(d)k(d)`$. All the reasonings above are valid for this problem as well and thus the inequalities of the statements 1 and 2 hold also for the number $`k^{}(d)`$. Moscow State University, Dept. of Mathematics and Mechanics, Moscow, 119899, Russia. E-mail: sabir@mccme.ru nekhoros@nw.math.msu.su
no-problem/9906/quant-ph9906006.html
ar5iv
text
# DAMTP-1999-76 quant-ph/9906006 Non-Contextual Hidden Variables and Physical Measurements ## I introduction The experimental evidence against local hidden variable theories is compelling and fundamental theoretical arguments weigh heavily against those non-local hidden variable theories proposed to date, but the question of hidden variables is still interesting for at least two reasons. First, we want to distinguish strongly held theoretical beliefs from established facts. Since there are reasonably sound (though clearly not universally persuasive) theoretical arguments against every interpretation of quantum theory proposed to date, we need to be particularly cautious about distinguishing belief from fact where quantum foundations are concerned. In particular, we still need to pin down precisely what types of hidden variable theories can or cannot be excluded by particular theoretical arguments. Second, recent discoveries in quantum information theory and quantum computing give new interest to some foundational questions. For example, we would like to know in precisely what senses a quantum state carries information and can be regarded as a computer, and how it differs in these respects from classical analogues. From this perspective, as Meyer has emphasized, questions about the viability of hidden variable models for a particular process translate into questions about the classical simulability of some particular aspect of quantum behaviour, and are interesting independently of the plausibility of the relevant models as physical theories. Either way, we need to distinguish arguments based on idealised measurements, which can be specified precisely, from arguments based on realistic physical measurements, which are always of finite precision. Since we cannot precisely specify measurements, it is conceivable that all the measurements we can carry out actually belong to some subset of the full class of measurements allowed by the quantum formalism. That subset must be dense, assuming that any finite degree of precision can be attained in principle, but it need not have positive measure. Any no-go theorem relying on a model of measurement thus has a potential loophole which can be closed only if the theorem still holds when measurements are restricted to a dense subset. From the point of view of quantum computation, the precision attainable in a measurement is a computational resource: specifying infinite precision requires infinite resources and prevents any useful comparison with discrete classical computation. We consider here whether the predictions of quantum theory can be replicated by a hidden variable model in which the outcomes of measurements are pre-determined by truth values associated to the relevant operators. We first review the case of infinite precision with standard von Neumann, i.e. projection valued, measurements. The question then is whether there is a consistent way of ascribing truth values $`p(P)\{0,1\}`$ to the projections in such a way as to determine a unique outcome for any projection valued measurement. That is, is it possible to find a truth function $`p`$ such that if $`\{P_i\}`$ is a projective decomposition of the identity then precisely one of the $`P_i`$ has truth value $`1`$? To put it formally, does there exist a truth function $`p`$ such that $$\underset{i}{}p(P_i)=1\mathrm{if}\underset{i}{}P_i=I\mathrm{?}$$ (1) The Kochen-Specker (KS) theorem shows that the answer is no for systems whose Hilbert space has dimension greater than two. 
The general result follows from the result for projections in three-dimensional real space, and so can be proved by exhibiting finite sets of projections in $`R^3`$ for which a truth function satisfying (1) is demonstrably impossible. Kochen and Specker gave the first example of such a set, and some simpler examples were later found by Peres. An independent proof was given by Bell, who noted that, by an argument of Gleason’s, (1) implies a minimal finite separation between projectors with truth value $`1`$ and $`0`$, which is impossible since both values must be attained. We could ask the same question about positive operator valued measurements. That is, does there exist a truth function $`p`$ on the positive operators such that $$\underset{i}{}p(A_i)=1\mathrm{if}\underset{i}{}A_i=I\mathrm{?}$$ (2) Obviously, since projection valued measurements are special cases of positive operator valued measurements, the KS theorem still applies. Returning to the case of projections, we want to know whether the restriction to finite precision could make a difference. Our hypothesis, recall, is that a finite precision measurement could correspond to a measurement of some particular projective decomposition in the precision range, a decomposition whose projections do indeed have hidden pre-assigned truth values. The relevant question then is: is there a physically sensible truth function $`p`$ defined on a dense subset $`S_1`$ of the space of all projections such that (1) holds on a dense subset $`PD_1`$ of the space of all projective decompositions? This requires in particular that the set $`S_2`$ of projections belonging to decompositions in $`PD_1`$ must satisfy $`S_2S_1`$. By physically sensible, we mean that the subsets $`S_3`$ and $`S_4`$ of projections $`P`$ in $`S_2`$ for which $`p(P)=1`$ and $`p(P)=0`$ respectively are both dense in the space of all projections, so as to avoid the possibility of contradiction by experiments of sufficiently high precision. We first consider the case treated in the proof of the KS theorem, one-dimensional projections on $`R^3`$. The possibility of hidden variable models evading the KS theorem was first considered by Pitowsky. Meyer has given a very pretty example of a truth function $`p`$ defined on the subset $`S^2Q^3`$ of projections defined by rational vectors, which satisfies (1) for all orthogonal triples. Meyer’s elegant proofs, using earlier work of Godsil and Zaks, show that all the necessary denseness conditions hold and hence that the KS proof is indeed nullified if we restrict attention to finite precision measurements. Meyer’s result shows that the KS theorem cannot be directly applied in the finite precision case. However, it does not imply that the theorem itself is false, or that no similar no-go theorem can be found. Even in the case of three dimensional systems, this requires an example of a truth function that satisfies (1) for a dense subset of the triads of projections on $`C^3`$ rather than $`R^3`$. A more complete argument requires an example of a physically sensible truth function satisfying (1) for a dense subset of the projective decompositions of the identity on $`C^n`$. More generally still, since all physical measurements are actually positive operator valued, a complete defence against KS-like arguments requires an example of a physically sensible truth function satisfying (2) for a dense subset of the positive operator decompositions of the identity on $`C^n`$. We give such examples here. First, some notation. 
Define the one-dimensional projection $`P_{r_1,\mathrm{},r_{2n}}`$ on $`C^n`$ to be the projection onto the vector $`N(r_1+ir_2,\mathrm{},r_{2n1}+ir_{2n})`$, where the $`r_i`$ are real and not all zero and the normalisation constant obeys $`N^2=_{i=1}^{2n}r_i^2`$. Call $`P_{r_1,\mathrm{},r_{2n}}`$ true if all the $`r_i`$ are rational and non-zero and if, writing $`r_i=p_i/q_i`$ we have that $`q_1`$ is divisible by $`3`$ and none of the other $`q_i`$ are. Here, and throughout, any fractions we write are taken to be in lowest terms. Call an n-tuple $`\{Q_1,\mathrm{},Q_n\}`$ of orthogonal one-dimensional projections suitable if at least one of the $`Q_i`$ is true. If $`P`$ belongs to a suitable n-tuple but is not true, call $`P`$ false. (Note that a projection need not be either true or false.) Define $`p(P)=1`$ if $`P`$ is true and $`p(P)=0`$ if $`P`$ is false. Lemma$`1`$ A suitable n-tuple contains precisely one true projection. Proof If $`P`$ and $`Q`$ are both true projections, the corresponding vectors have inner product of the form $`(a/9)+(p/q)+i(r/s)`$, where $`3`$ is not a factor of $`q`$. The real part thus cannot vanish, so $`P`$ and $`Q`$ cannot be orthogonal. Lemma$`2`$ The true projections are dense in the space of all one-dimensional projections. Proof Given any one-dimensional projection $`P_{r_1,\mathrm{},r_{2n}}`$ we can find an arbitrarily close approximation $`P_{r_1^{},\mathrm{},r_{2n}^{}}`$ with rational $`r_i^{}=p_i^{}/q_i^{}`$. If $`q_1^{}`$ is not divisible by $`3`$, we can find an arbitrarily close rational approximation $`r_1^{\prime \prime }=p_1^{\prime \prime }/q_1^{\prime \prime }`$ to $`r_1^{}`$ with $`q_1^{\prime \prime }`$ divisible by $`3`$, for example by taking $`p_1^{\prime \prime }=3Np_1^{}+1`$ and $`q_1^{\prime \prime }=3Nq_1^{}`$ for a sufficiently large integer $`N`$. Similarly, if any of the $`q_i^{}`$ for $`i>1`$ are divisible by $`3`$, we can find arbitrarily close rational approximations $`r_i^{\prime \prime }=p_i^{\prime \prime }/q_i^{\prime \prime }`$ to $`r_i^{}`$ with $`q_i^{\prime \prime }`$ not divisible by $`3`$, for example by taking $`p_1^{\prime \prime }=Np_1^{}`$ and $`q_1^{\prime \prime }=Nq_1^{}+1`$ for a sufficiently large integer $`N`$. Lemma$`3`$ The suitable n-tuples are dense in the space of all n-tuples of orthogonal projections. Proof Given any n-tuple $`\{P_1,\mathrm{},P_n\}`$, choose one of the projections, say $`P_1`$. As above, we can find an arbitrarily close approximation to $`P_1`$ by a true projection $`Q`$. Let $`U`$ be a rotation in $`SU(n)`$ which rotates $`P_1`$ to $`Q`$ such that $`|UI|=(\mathrm{Tr}((UI)(U^{}I)))^{1/2}`$ attains the minimal value for such rotations. The compactness of $`SU(n)`$ ensures that such a $`U`$ exists, though it need not be unique, and the minimal value tends to zero as $`Q`$ tends towards $`P`$. The projections $`\{UP_1,\mathrm{},UP_n\}`$ form a suitable n-tuple, and this construction gives n-tuples of this type arbitrarily close to the original. Lemma$`4`$ The false projections are dense in the space of all one-dimensional projections. Proof Given any projection $`P`$, choose an n-tuple to which it belongs, and let $`Q`$ be another projection in that n-tuple. By the construction above, we can find arbitrarily close n-tuples in which the projections approximating $`Q`$ are true. The projections approximating $`P`$ are thus false. This concludes the argument for measurements defined by n-tuples of one-dimensional projections. 
For completeness, though, we also consider degenerate von Neumann measurements, corresponding to decompositions of the identity into general orthogonal projections. The construction above generalises quite simply. Fixing the basis as before, we can write each projection as a matrix: $`P=N(a_{ij}+ib_{ij})_{i,j=1}^n`$, where the $`a_{ij}`$ and $`b_{ij}`$ are real and $`N`$ is some normalisation constant. Consistently with our earlier definitions for one-dimensional projections, we can define $`P`$ to be true if it can be written in this form with all the $`a_{ij}`$ and $`b_{ij}`$ rational and non-zero and if $`a_{11}`$ is then the only one which, when written in lowest terms, has denominator divisible by $`9`$. Clearly if $`P`$ and $`Q`$ are both true then $`\mathrm{Tr}(PQ)0`$, so they cannot be orthogonal. We can thus define suitable projective decompositions and false projections as above, and all the earlier arguments run through with trivial modifications. At this stage a comment on measurement theory is required. The KS theorem assumes the traditional von Neumann definition of measurement, in which measurement projects the quantum state onto an eigenspace of the relevant observable. In more realistic modern treatments, a measurement causes an action on the quantum state by positive operators, which may but need not be close to projections. One could, indeed, realistically base measurement theory only on positive operator valued measurements in which the positive operators are not projections, for example stipulating that all positive operators involved must be of maximal rank. If so, the original KS theorem becomes irrelevant, though it can easily be modified to deal with these cases. It seems more natural, though, to either allow any precisely specified positive operator decomposition, whether or not it includes projections, or else to consider general finite precision positive operator valued measurements. If all precisely specified positive operators are included, then of course the KS theorem applies. On the other hand, as we now show, the finite precision loophole also exists for positive operator measurements. We need new definitions for positive operators. Again fixing a basis, we can write a positive operator as a matrix: $`A=(a_{ij}+ib_{ij})_{i,j=1}^n`$, where the $`a_{ij}`$ and $`b_{ij}`$ are real, so that $`a_{ij}=a_{ji}`$ and $`b_{ij}=b_{ji}`$. We say that $`A`$ is true if $`a_{11}=r_1+r_2\sqrt{2}`$, with $`r_1`$ and $`r_2`$ both rational and $`r_2`$ positive, and that a projective decomposition $`I=_iA_i`$ of the identity into positive operators is suitable if precisely one of the $`A_i`$ is true. $`A`$ is false if it belongs to a suitable decomposition but is not true. Under this definition, every $`A`$ is either true or false. Define the truth function $`p`$ by setting $`p(A)=1`$ if $`A`$ is true and $`p(A)=0`$ if $`A`$ is false. Clearly $`p`$ satisfies (2) on suitable decompositions. Clearly, too, true and false operators are dense in the space of positive operators, and suitable decompositions are dense in the space of all positive operator decompositions. Hence the desired result holds. Note that these last definitions, restricted to projections, give another example of a physically sensible truth function satisfying (1). The two different constructions perhaps help to illustrate the large scope for examples of this sort. There is nothing particularly special about either our constructions or those of Ref. 
: the possibility of closing the finite precision loophole by any KS-type argument can be refuted in many different ways. It follows from the above examples that non-contextual hidden variable theories cannot be excluded by theoretical arguments of the KS type once the imprecision in real world experiments is taken into account. This does not, of course, imply that such theories are very plausible, or that the particular constructions we give are capable of producing a physically interesting hidden variable theory. Nor does the discussion affect the situation regarding local hidden variable theories, which can be refuted by experiment, modulo reasonable assumptions. Acknowledgments I am grateful to Philippe Eberhard for suggesting clarifications in the presentation and to the Royal Society for financial support.
no-problem/9906/astro-ph9906186.html
ar5iv
text
# 1 Introduction ## 1 Introduction Recently, the largest and low-brightness classical radio sources are of a special interest for researching a number of astrophysical problems. Especially, their statistical studies are of a great importance for: (1) investigations of the time evolution of radio sources (cf. Kaiser et al. 1997; Chyży 1997; Blundell et al. 1999), (2) testing the orientation-dependent unification scheme (e.g. Barthel 1989; Urry & Padovani 1995), and (3) their usefulness to probe the low-density intergalactic and intercluster medium (cf. Strom & Willis 1980; Subrahmanyan & Saripalli 1993; Mack et al. 1998). Active nucleus of the largest ‘giant’ radio sources is likely approaching an endstage of its activity, therefore their studies can provide important informations on the properties of old AGN. The identification of a number of quasars among these giant sources rises a problem for the unified scheme of AGN, where a quasar-appearance should characterize the AGN whose jets are oriented closer to the observer’s line of sight than those in radio galaxies. Thus, quasars should not have large projected linear sizes. Finally, the large angular size of giants allows detailed studies of their spectral evolution, rotation measure (RM), and the depolarization towards these sources. All the above can provide useful data on density of the medium at very large distances from the AGN. In former decades, giant radio sources were often undetectable in radio surveys because they lay below the survey surface-brightness limit, even though they had total flux densities exceeding the survey flux limit. The new deep radio surveys WENSS (Rengelink et al. 1997), NVSS (Condon et al. 1998) offer a possibility to select unbiased samples of giant radio sources owing to their low surface-brightness limits. Using the NVSS survey, we have selected a sample of FRII-type (Fanaroff & Riley 1974) giant candidates, which further study is in progress. The source GB2 0909+353 is one of them. In this paper we show that this source is one of the largest FRII-type sources with extremely low values of the equipartition magnetic field and energy density. The 1.4-GHz high- and low-resolution radio observations, available for this source, and its optical field are summarized in Sec. 2. Also our special, deep VLA observation to detect its radio core, is described there. The radio spectrum of the source is analysed in Sec. 3. In Sec. 4, a possible optical counterpart, the redshift estimate and projected size of the source are discussed. Finally, in Sec. 5, the equipartition magnetic field and energy density within the source are calculated, and compared with corresponding values found for other giants, as well as for much smaller 3CR sources. ## 2 The radio 1.4 GHz map and optical field The source has been mapped in three recent sky surveys: FIRST (VLA B–array 1.4 GHz; Becker, White & Helfand 1995), NVSS (VLA D–array 1.4 GHz; Condon et al. 1998), and WENSS (WSRT 325 MHz; Rengelink et al. 1997). The low-resolution VLA map at 1.4 GHz, reproduced from NVSS survey, is shown in Fig. 1 with ‘grey-scale’ optical image from the Digitized Sky Survey (hereafter DSS) overlaid. The VLA high-resolution FIRST map confirms the FRII-type (Fanaroff & Riley 1974) of the source, and gives 6.3 arc min angular separation of the brightest parts of its lobes. Unfortunately, no radio core brighter than about 1 mJy at 1.4 GHz was detected during the FIRST survey. 
Therefore, we made a follow-up, deep VLA observations to detect the core at a higher frequency. For this purpose, the sky field centred at J091252.0+351000 was mapped at 4885 MHz with the B-array. That observed frequency and the array configuration allow to map the brightness distribution within a radius of about 4 arc min. With the integration time of 35 min, the rms brightness fluctuations were about 0.025 mJy beam<sup>-1</sup>.Unfortunately, no radio core brighter than 0.5 mJy at 5 GHz has been detected in vicinity of the target position. This makes the upper limit to the core–total flux ratio of about 0.023, which is exactly the median ratio determined at 8 GHz for radio galaxies larger than 1 Mpc by Ishwara-Chandra & Saikia (1999). Thus, our observations confirm very low brightness of the investigated radio source at high frequencies. ## 3 The radio spectrum Previously the source was detected during the low-frequency sky surveys at 151 and 408 MHz (6C2: Hales, Baldwin & Warner 1988, and 7C: Waldram et al., 1996; B2.3: Colla et al., 1973), respectively, and at 1400 MHz (GB2: Machalski 1978). It is present, as well, in the GB6 catalogue (Gregory et al., 1996), but its 4.85 GHz flux density may be underestimated there. Table 1 gives the flux densities available at the frequencies from 151 MHz to 5 GHz. The radio spectrum of the total source and its lobes is shown in Fig. 2. In order to calculate its total radio luminosity, the spectrum is expressed by a functional form. The best fit of the data in column 3 of Table 1 has been achieved with a parabola $`S(x)`$\[mJy\]$`=ax^2+bx+c`$, where $`x=\mathrm{log}\nu `$\[GHz\]; $`a=0.493\pm 0.085,b=1.097\pm 0.038,c=2.327\pm 0.030`$. This fit gives the fitted 1.4-GHz total flux density of 143 mJy, and the fitted spectral index of $`1.24\pm 0.063`$ at 1.4 GHz. The low- and high-frequency spectral indices between fitted flux densities at 408 and 1400 MHz, and at 1400 and 5000 MHz, are $`0.91`$ and $`1.28`$, respectively. Such a steep slope of the spectrum at frequencies above 1 GHz suggests that the radio source is related to a distant galaxy. However the spectrum evidently flattens at low frequencies, which allows to estimate the lifetime of relativistic electrons in the source. Therefore, we have fitted the straight lines to the spectral data at low and high frequencies, estimating a break of the spectrum at frequency of about 760 MHz. ## 4 The redshift estimate and size of the source To determine linear (projected) extent of the source, a distance to the source must be known. The optical field suggests that the radio source may be associated with one of two objects, marked #1 and #2 in Fig. 1. Object #1 is classified as a galaxy with R=19.35 mag in the DSS. Object #2 is not classified there because it is not visible on the Palomar Observatory Sky Survey’s blue plates, however it is likely another red galaxy with R=19.5$`\pm `$0.4 mag. Because no radio core was detected, we can only assume that any counterpart optical galaxy is not brighter than R=19.35 mag. Taking into account the Hubble relation R(z), well established for 3CR, and other radio galaxies (e.g. Kristian, Sandage & Westphal 1978; Machalski 1988), the galaxy should be at redshift $`z_{}^>0.4`$. This estimate is supported by the $`R`$-band Hubble diagram for the giant radio galaxies, recently published by Schoenmakers et al. (1998). On the basis of available spactroscopy of giants from the Westerbork sample and 7C sample of Cotter et al. 
(1996), they found $$\mathrm{log}z=0.125R2.79$$ which, for R=19.35 mag, gives z=0.42. This redshift estimate corresponds to the source’s distance of about 2.6 Gpc (assuming $`H_o=50,\mathrm{\Omega }=1`$), and linear size of 2.43 Mpc. If so, the source GB2 0909+353 is one of about the ten FRII-type radio sources extended over 2 Mpc. The radio source, investigated here, will be still over 1 Mpc (a conventional lower limit of size for ‘giant’ sources) at a redshift as low as 0.11. ## 5 The equipartition magnetic field and energy density The source is characterized by exceptionally low minimum energy density of relativistic particles and equipartition magnetic field. Recently, Ishwara- Chandra & Saikia (1999) have published very interesting statistics of the above parameters calculated for known radio sources extended more than 1 Mpc, and compared them with corresponding statistics of smaller 3CR sources. Following Ishwara-Chandra & Saikia, we calculate the minimum energy density $`u_{min}`$, and equipartition magnetic field $`B_{eq}`$ in GB2 0909+353 with the standard method (e.g. Miley 1980), assuming a cylindrical geometry of the source, a filling factor of unity, and equal energy distribution between relativistic electrons and protons. Integrating luminosity of the source between 10 MHz and 10 GHz, we found $`B_{eq}=0.084_{0.024}^{+0.040}`$ nT and $`u_{min}=(0.65_{0.33}^{+0.55})\times 10^{13}`$ erg cm<sup>-3</sup>. These values are weakly dependent of unknown distance $`D(z)`$ to the source; $`B_{eq}D^{2/7}`$, $`u_{min}D^{4/7}`$. This means that varying redshift by 2 (100 per cent), one can expect a change of $`B_{eq}`$ by 18 per cent, and $`u_{min}`$ by 33 per cent, only. $`B_{eq}`$ and $`u_{min}`$ values, found for GB2 0909+353, are rather extremal among the corresponding values for known giant sources. Ishwara-Chandra & Saikia have showed that $`B_{eq}`$ in almost all radio sources larger than 1 Mpc is less than equivalent magnetic field of the microwave background radiation, i.e. $`B_{eq}<B_{iC}0.324(1+z)^2`$\[nT\]. They also have showed that oppositely, $`B_{eq}>B_{iC}`$ for more luminous and smaller 3CR radio sources, suggesting that the inverse-Compton losses dominate the synchrotron radiative losses in the evolution of the lobes of giant sources. Our calculation shows that the $`B_{iC}/B_{eq}`$ ratio for GB2 0909+353 may vary from 7.6$`\pm `$2.9 (if z=0.4) to 4.6$`\pm `$1.7 (if z=0.11), respectively, and $`B_{eq}^2/(B_{iC}^2+B_{eq}^2)`$, which represents the ratio of the energy losses by synchrotron radiation to total energy losses due to both the processes, may vary from 0.017$`\pm `$0.013 (if z=0.4) to 0.044$`\pm `$0.035 (if z=0.11), respectively. In Figs. 3(a) and 3(b) we plot these values on the diagrams reproduced from the paper of Ishwara-Chandra & Saikia, and showing the above ratios as a function of the linear size of radio source. Similarly, the value of $`u_{min}`$ for GB2 0909+353 vs. linear size, and corresponding values for other giants from that paper, are plotted in Fig. 3(c). The solid line marks the expected relation $`\mathrm{log}u_{min}=(4/7)\mathrm{log}D+const`$. The loci of the source GB2 0909+353 on these diagrams fully support our thesis that this source is one of the largest radio sources with extremely low energy density and equivalent magnetic field. 
The age of relativistic electrons, derived from the value of $`B_{eq}`$ and $`B_{iC}`$ at the break frequency of 760 MHz may vary from $`\mathrm{3.4\hspace{0.17em}10}^7`$ yr (if z=0.4) to $`\mathrm{9.8\hspace{0.17em}10}^7`$ yr (if z=0.11), respectively. Assuming that (1) a redshift of the source is between the above values, and (2) the main axis of the source is close to the plane of the sky, and thus any physical distance from the center should not differ significantly from the projected one (this assumption is based on the high symmetry of the source) – a distance from the central galaxy to the brightest spots in the radio lobes should be within 640 kpc and 1540 kpc, the break frequency, characterizing the total radio spectrum, can be related to any distance between the above values, and the mean hotspot separation speed (resulting from this distance and the age of relativistic electrons) should be within 0.02$`c`$ and 0.15$`c`$. The range of these values is essentially the same as those found for much smaller, double 3CR radio sources (cf. Alexander & Leahy 1987; Liu, Pooley & Riley 1992), however the lower value seems to be much more likely in view of the strong positive correlation between the separation speed and 178-MHz luminosity, found by these authors. Detailed spectral observations and the ageing analysis of a sample of giant double sources are necessary to check whether this correlation holds for the largest radio sources. Concluding, we argue that the source GB2 0909+353 is one of the largest, low-brightness, and distant classical double radio source. Aknowledgements. The VLA is operated by the National Radio Astronomy Observatory (NRAO) for Associated Universities Inc. under a licence from the National Science Foundation of the USA. We acknowledge usage of the Digitized Sky Survey which was produced at the Space Telescope Science Institute based on photografic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope. We thank the anonymous referee for the valuable comments.
no-problem/9906/hep-th9906155.html
ar5iv
text
# Regular and Irregular Boundary Conditions in the AdS/CFT Correspondence ## 1 Introduction The celebrated AdS/CFT correspondence relates field theories on anti-de Sitter (AdS) space with conformal field theories (CFTs) living on the AdS horizon. The main prediction of this duality is that CFT correlation functions of conformal operators can be calculated by evaluating the AdS action on-shell as a functional of prescribed boundary values. For example, using a scalar field theory on AdS space, CFT correlators of conformal fields of scaling dimensions $`\mathrm{\Delta }d/2`$ have been calculated \[1, 2, 3, 4, 5, 6, and references therein\]. Until recently, no prescription was known to include operators with scaling dimension $`\mathrm{\Delta }`$, $`d/21<\mathrm{\Delta }<d/2`$. Here, $`d/21`$ is the unitary bound on the conformal dimension of scalar operators. Recently, Klebanov and Witten proposed a method to do just that. They used the fact that a scalar field on AdS space can obey two types of boundary conditions . The regular one, which can always be imposed, leads to the CFT correlators with $`\mathrm{\Delta }d/2`$, whereas the irregular one would lead to $`d/21<\mathrm{\Delta }d/2`$. They realized that the respective boundary fields are conjugate to each other and proposed to use a Legendre transform of the action, expressed as a functional of the irregular boundary value, as the generating functional. They also demonstrated the correctness of this proposal for CFT two point functions. In this article, we would like to expand on their proposal and demonstrate its correctness to all orders in perturbation theory. A second result of our analysis is that a different Green’s function must be used for internal lines in second or higher order graphs. The outline of the article shall be as follows. In the remainder of this section motivating arguments about the origin of the irregular boundary conditions will be given. In section 2 we will for completeness repeat the formalism using regular boundary conditions. Then, in section 3 Klebanov and Witten’s proposal to include irregular boundary conditions shall be analyzed and shown to be correct to any order in perturbation theory. To start, consider an interacting scalar field, whose action is given by $$I=\frac{1}{2}_\mathrm{\Omega }𝐝𝐱\left(𝐃_\mu \varphi 𝐃^\mu \varphi +𝐦^\mathrm{𝟐}\varphi ^\mathrm{𝟐}\right)+𝐈_{\mathrm{𝐢𝐧𝐭}},$$ (1) where $`I_{int}`$ denotes the interaction terms and $`𝐝𝐱=𝐝^{𝐝+\mathrm{𝟏}}𝐱\sqrt{𝐠(𝐱)}`$ is the invariant volume integral measure. The equation of motion following from the action (1) is given by $$\left(D_\mu D^\mu m^2\right)\varphi (x)=B(x),$$ (2) where<sup>1</sup><sup>1</sup>1The functional variation is done covariantly, cf. . $$B(x)=\frac{\delta I_{int}}{\delta \varphi (x)}.$$ Using as AdS representation the conventional upper half space $`𝐱^𝐝`$, $`x_0>0`$ with the metric $$ds^2=(x_0)^2dx^\mu dx^\mu ,$$ (3) the solution to equation (2) can be written in the form $$\varphi (x)=d^dy\left[\frac{x_0}{(x𝐲)^\mathrm{𝟐}}\right]^{\frac{d}{2}\pm \alpha }f_{}(𝐲)+_𝛀𝐝𝐲𝐆(𝐱,𝐲)𝐁(𝐲),$$ (4) where $`\alpha =\sqrt{d^2/4+m^2}`$ and $`G(x,y)`$ is a standard Green’s function satisfying $$\left(D_\mu D^\mu m^2\right)G(x,y)=\frac{\delta (xy)}{\sqrt{g(x)}}.$$ (5) The free field solution with the lower sign exists classically for $`\alpha <d/2`$, but the unitary bound restricts it further to $`\alpha <1`$ . 
The functions $`f_{}`$ and $`f_+`$ are called regular and irregular boundary values and are conformal fields of scaling dimensions $`d/2\alpha `$ and $`d/2+\alpha `$, respectively. In the AdS/CFT correspondence the fields obeying regular boundary conditions give rise to CFT correlation functions of operators with conformal dimensions $`\mathrm{\Delta }`$ restricted by $`\mathrm{\Delta }d/2`$. Hence, the use of irregular boundary conditions enables one to obtain correlation functions for operators with scaling dimensions $`d/21<\mathrm{\Delta }<d/2`$. ## 2 Regular Boundary Conditions Let us start by rewriting the expression (4) as $$\varphi (x)=\varphi ^{(0)}(x)+_\mathrm{\Omega }𝐝𝐱𝐆(𝐱,𝐲)𝐁(𝐲),$$ (6) where the Green’s function $`G(x)`$ is given by $`G(x,y)`$ $`=(x_0y_0)^{\frac{d}{2}}{\displaystyle \frac{d^dk}{(2\pi )^d}\mathrm{e}^{i𝐤(𝐱𝐲)}\{\begin{array}{cc}I_\alpha (kx_0)K_\alpha (ky_0)\hfill & \text{for }x_0<y_0,\hfill \\ I_\alpha (ky_0)K_\alpha (kx_0)\hfill & \text{for }x_0>y_0,\hfill \end{array}}`$ (7) $`={\displaystyle \frac{c_\alpha }{2}}\xi ^{\left(\frac{d}{2}+\alpha \right)}F(d/2,d/2+\alpha ;1+\alpha ;\xi ^2),`$ (8) where $`F`$ is the hypergeometric function, $$\xi =\frac{1}{2x_0y_0}\left\{\frac{1}{2}\left[(xy)^2+(xy^{})^2\right]+\sqrt{(xy)^2(xy^{})^2}\right\}$$ ($`y^{}`$ denotes the vector $`(y_0,𝐲)`$), and $$c_\alpha =\frac{\mathrm{\Gamma }(d/2+\alpha )}{\pi ^{d/2}\mathrm{\Gamma }(1+\alpha )}.$$ (9) Moreover, the free field solution $`\varphi ^{(0)}`$ shall be written as $$\varphi ^{(0)}(x)=d^dy𝒦_\alpha (x,𝐲)\varphi _{}^{(\mathrm{𝟎})}(𝐲)=𝐝^𝐝𝐲𝒦_\alpha (𝐱,𝐲)\varphi _+^{(\mathrm{𝟎})}(𝐲).$$ (10) The bulk-boundary propagators occuring in equation (10) are given by $$𝒦_{\pm \alpha }(x,𝐲)=\pm \alpha 𝐜_{\pm \alpha }\left[\frac{𝐱_\mathrm{𝟎}}{(𝐱𝐲)^\mathrm{𝟐}}\right]^{\frac{𝐝}{\mathrm{𝟐}}\pm \alpha },$$ (11) where $`c_{\pm \alpha }`$ is given by equation (9), and their Fourier transforms read $$𝒦_{\pm \alpha }(x,𝐤)=\frac{\pm \mathrm{𝟐}\alpha }{𝚪(\mathrm{𝟏}\pm \alpha )}\mathrm{e}^{\mathrm{𝐢𝐤}𝐱}\left(\frac{𝐤}{\mathrm{𝟐}}\right)^{\pm \alpha }𝐱_\mathrm{𝟎}^{\frac{𝐝}{\mathrm{𝟐}}}𝐊_\alpha (\mathrm{𝐤𝐱}_\mathrm{𝟎}).$$ (12) Equations (12) and (10) imply that the boundary functions $`\varphi _+^{(0)}`$ and $`\varphi _{}^{(0)}`$ are related by $$\varphi _+^{(0)}(𝐤)=\frac{𝚪(\mathrm{𝟏}\alpha )}{𝚪(\mathrm{𝟏}+\alpha )}\left(\frac{𝐤}{\mathrm{𝟐}}\right)^{\mathrm{𝟐}\alpha }\varphi _{}^{(\mathrm{𝟎})}(𝐤).$$ (13) Obviously, the free field $`\varphi ^{(0)}`$ can be written as a sum of two series, whose leading powers are $`x_0^{d/2\alpha }`$ and $`x_0^{d/2+\alpha }`$, respectively. Thus, one finds by direct comparison with equations (10) and (12) that the small $`x_0`$ behaviour of $`\varphi ^{(0)}`$ is $$\varphi ^{(0)}(x)\stackrel{x_00}{}x_0^{\frac{d}{2}\alpha }\varphi _{}^{(0)}(𝐱)+𝐱_\mathrm{𝟎}^{\frac{𝐝}{\mathrm{𝟐}}+\alpha }\varphi _+^{(\mathrm{𝟎})}(𝐱),$$ (14) where subleading terms have been dropped. Moreover, the Green’s function (8) goes like $$G(x,y)\stackrel{x_00}{}\frac{1}{2\alpha }x_0^{\frac{d}{2}+\alpha }𝒦_\alpha (𝐱,𝐲).$$ (15) Hence, the interaction contributes only to the $`\varphi _+`$ part of the asymptotic boundary behaviour, i.e. 
one can write $`\varphi (x)`$ $`\stackrel{x_00}{}x_0^{\frac{d}{2}\alpha }\varphi _{}(𝐱)+𝐱_\mathrm{𝟎}^{\frac{𝐝}{\mathrm{𝟐}}+\alpha }\varphi _+(𝐱),`$ (16) where $`\varphi _{}(𝐱)`$ $`=\varphi _{}^{(0)}(𝐱),`$ (17) $`\varphi _+(𝐱)`$ $`=\varphi _+^{(0)}(𝐱){\displaystyle \frac{\mathrm{𝟏}}{\mathrm{𝟐}\alpha }}{\displaystyle _𝛀}𝐝𝐲𝒦_\alpha (𝐱,𝐲)𝐁(𝐲).`$ (18) Identical relations hold for the Fourier transformed expressions. Now consider the on-shell action, treated as a functional of the regular boundary values $`\varphi _{}`$. Integrating equation (1) by parts yields $$I=\frac{1}{2}d^dxx_0^dn^\mu \varphi _\mu \varphi \frac{1}{2}_\mathrm{\Omega }𝐝𝐱\varphi (𝐱)𝐁(𝐱)+𝐈_{\mathrm{𝐢𝐧𝐭}}.$$ The first term must be regularized, which is done by writing $`x_0^dn^\mu \varphi _\mu \varphi `$ $`=x_0^d\varphi \left[\left({\displaystyle \frac{d}{2}}\alpha \right)x_0^{\frac{d}{2}\alpha }\varphi _{}+\left({\displaystyle \frac{d}{2}}+\alpha \right)x_0^{\frac{d}{2}+\alpha }\varphi _++\mathrm{}\right]`$ $`=x_0^d\left({\displaystyle \frac{d}{2}}\alpha \right)\varphi ^22\alpha \varphi _{}\varphi _++\mathrm{},`$ where the ellipses indicate contributions from subleading terms and other terms which vanish for $`x_0=0`$. The first term in the last line is cancelled by a covariant counterterm. Hence, the renormalized on-shell action is $`I[\varphi _{}]`$ $`=\alpha {\displaystyle \frac{d^dk}{(2\pi )^d}\varphi _{}(𝐤)\varphi _+(𝐤)}{\displaystyle \frac{\mathrm{𝟏}}{\mathrm{𝟐}}}{\displaystyle _𝛀}𝐝𝐱\varphi (𝐱)𝐁(𝐱)+𝐈_{\mathrm{𝐢𝐧𝐭}}`$ $`=I^{(0)}[\varphi _{}]{\displaystyle \frac{1}{2}}{\displaystyle _\mathrm{\Omega }}𝐝𝐱𝐝𝐲𝐁(𝐱)𝐆(𝐱,𝐲)𝐁(𝐲)+𝐈_{\mathrm{𝐢𝐧𝐭}},`$ (19) where equations (18), (17), (13), (6) and (10) have been used. The term $`I^{(0)}`$ in equation (19) is given by $`I^{(0)}[\varphi _{}]`$ $`=\alpha {\displaystyle \frac{\mathrm{\Gamma }(1\alpha )}{\mathrm{\Gamma }(1+\alpha )}}{\displaystyle \frac{d^dk}{(2\pi )^d}\left(\frac{k}{2}\right)^{2\alpha }\varphi _{}(𝐤)\varphi _{}(𝐤)}`$ $`=\alpha ^2c_\alpha {\displaystyle d^dxd^dy\frac{\varphi _{}(𝐱)\varphi _{}(𝐲)}{|𝐱𝐲|^{𝐝+\mathrm{𝟐}\alpha }}}`$ (20) and thus yields the correct two point function of scalar operators of conformal dimension $`\mathrm{\Delta }=d/2+\alpha `$, if one uses the AdS/CFT correspondence formula $$\mathrm{e}^{I[\varphi _{}]}=\mathrm{exp}\left[\alpha d^dx𝒪(𝐱)\varphi _{}(𝐱)\right].$$ (21) The other two terms have to be expressed as a perturbative series in terms of $`\varphi ^{(0)}`$. However, by virtue of equations (10) and (17) this naturally yields a perturbative series in terms of the boundary function $`\varphi _{}`$. ## 3 Irregular Boundary Conditions The treatment of irregular boundary conditions follows an idea by Klebanov and Witten . 
Consider the expression $`{\displaystyle \frac{\delta I[\varphi _{}]}{\delta \varphi _{}(𝐤)}}`$ $`=2\alpha {\displaystyle \frac{\mathrm{\Gamma }(1\alpha )}{\mathrm{\Gamma }(1+\alpha )}}\left({\displaystyle \frac{k}{2}}\right)^{2\alpha }\varphi _{}(𝐤)+{\displaystyle _𝛀}𝐝𝐱𝐁(𝐱){\displaystyle \frac{\delta \varphi (𝐱)}{\delta \varphi _{}(𝐤)}}`$ $`{\displaystyle _\mathrm{\Omega }}𝐝𝐱𝐝𝐲𝐝𝐳{\displaystyle \frac{\delta ^\mathrm{𝟐}𝐈_{\mathrm{𝐢𝐧𝐭}}}{\delta \varphi (𝐱)\delta \varphi (𝐳)}}𝐆(𝐱,𝐲)𝐁(𝐲){\displaystyle \frac{\delta \varphi (𝐳)}{\delta \varphi _{}(𝐤)}}.`$ Using equation (13) and the formula $`{\displaystyle \frac{\delta \varphi (x)}{\delta \varphi _{}(𝐤)}}`$ $`=𝒦_\alpha (x,𝐤)+{\displaystyle _𝛀}𝐝𝐲𝐝𝐳𝐆(𝐱,𝐲){\displaystyle \frac{\delta ^\mathrm{𝟐}𝐈_{\mathrm{𝐢𝐧𝐭}}}{\delta \varphi (𝐲)\delta \varphi (𝐳)}}{\displaystyle \frac{\delta \varphi (𝐳)}{\delta \varphi _{}(𝐤)}},`$ one finds $$\frac{\delta I[\varphi _{}]}{\delta \varphi _{}(𝐤)}=2\alpha \varphi _+^{(0)}(𝐤)+_𝛀𝐝𝐱𝒦_\alpha (𝐱,𝐤)𝐁(𝐱)=\mathrm{𝟐}\alpha \varphi _+(𝐤),$$ or, after an inverse Fourier transformation, $$\frac{\delta I[\varphi _{}]}{\delta \varphi _{}(𝐱)}=2\alpha \varphi _+(𝐱)$$ (22) This expression holds to any order in perturbation theory. This fact was obtained in using graph arguments. Furthermore, it shows first that $`\varphi _+`$ can be regarded as the conjugate field of $`\varphi _{}`$ and secondly that the functional $$J[\varphi _{},\varphi _+]=I[\varphi _{}]+2\alpha d^dx\varphi _{}(𝐱)\varphi _+(𝐱)$$ (23) has a minimum with respect to a variation of $`\varphi _{}`$. Klebanov and Witten’s idea is to formulate the AdS/CFT correspondence by the formula $$\mathrm{e}^{J[\varphi _+]}=\mathrm{exp}\left[\alpha d^dx𝒪(𝐱)\varphi _+(𝐱)\right].$$ (24) Here, the functional $`J[\varphi _+]`$ is a Legendre transform of the action $`I`$, i.e. it is the minimum value of the expression (23), expressed in terms of $`\varphi _+`$. In the following, Klebanov and Witten’s result about the correctness of the two point function shall be confirmed and interactions included. The minimum of $`J`$ is easiest found from equations (19) and (23), giving $`J[\varphi _+]`$ $`=\alpha {\displaystyle \frac{d^dk}{(2\pi )^d}\varphi _{}(𝐤)\varphi _+(𝐤)}{\displaystyle \frac{\mathrm{𝟏}}{\mathrm{𝟐}}}{\displaystyle _𝛀}𝐝𝐱\varphi (𝐱)𝐁(𝐱)+𝐈_{\mathrm{𝐢𝐧𝐭}}`$ $`=\alpha {\displaystyle \frac{\mathrm{\Gamma }(1+\alpha )}{\mathrm{\Gamma }(1\alpha )}}{\displaystyle \frac{d^dk}{(2\pi )^d}\left(\frac{k}{2}\right)^{2\alpha }\varphi _+(𝐤)\varphi _+(𝐤)}{\displaystyle \frac{\mathrm{𝟏}}{\mathrm{𝟐}}}{\displaystyle _𝛀}𝐝𝐱\varphi (𝐱)𝐁(𝐱)+𝐈_{\mathrm{𝐢𝐧𝐭}}`$ $`+{\displaystyle \frac{1}{2}}{\displaystyle d^dx_\mathrm{\Omega }𝐝𝐲𝒦_\alpha (𝐲,𝐱)\varphi _+(𝐱)𝐁(𝐲)}`$ $`=\alpha {\displaystyle \frac{\mathrm{\Gamma }(1+\alpha )}{\mathrm{\Gamma }(1\alpha )}}{\displaystyle \frac{d^dk}{(2\pi )^d}\left(\frac{k}{2}\right)^{2\alpha }\varphi _+(𝐤)\varphi _+(𝐤)}{\displaystyle \frac{\mathrm{𝟏}}{\mathrm{𝟐}}}{\displaystyle _𝛀}𝐝𝐱𝐝𝐲𝐁(𝐱)𝐆(𝐱,𝐲)𝐁(𝐲)`$ $`+I_{int}{\displaystyle \frac{1}{4\alpha }}{\displaystyle d^dz_\mathrm{\Omega }𝐝𝐱𝐝𝐲𝒦_\alpha (𝐱,𝐳)𝒦_\alpha (𝐲,𝐳)𝐁(𝐱)𝐁(𝐲)}.`$ (25) Here equations (17), (13), (18), (12) and (6) have been used. The first term in equation (25) can be inversely Fourier transformed, which yields $$J^{(0)}=\alpha ^2c_\alpha d^dxd^dy\frac{\varphi _+(𝐱)\varphi _+(𝐲)}{|𝐱𝐲|^{𝐝\mathrm{𝟐}\alpha }}.$$ (26) According to the correspondence formula (24), this yields the correct two point function of conformal operators $`𝒪`$ of scaling dimension $`\mathrm{\Delta }=d/2\alpha `$. 
Then, the second and fourth term in equation (25) can be combined by defining the Green’s function $$\stackrel{~}{G}(x,y)=G(x,y)+\frac{1}{2\alpha }d^dz𝒦_\alpha (x,𝐳)𝒦_\alpha (𝐲,𝐳).$$ (27) This modified Green’s function $`\stackrel{~}{G}`$ also satisfies equation (5), because the second term in equation (27) does not contribute to the discontinuity. Moreover, using equations (7) and (12) one finds $`\stackrel{~}{G}(x,y)`$ $`=(x_0y_0)^{\frac{d}{2}}{\displaystyle \frac{d^dk}{(2\pi )^d}\mathrm{e}^{i𝐤(𝐱𝐲)}}`$ $`\times \left[{\displaystyle \frac{2K_\alpha (kx_0)K_\alpha (ky_0)}{\mathrm{\Gamma }(\alpha )\mathrm{\Gamma }(1\alpha )}}+\{\begin{array}{cc}K_\alpha (ky_0)I_\alpha (kx_0)\hfill & \text{for }x_0<y_0,\hfill \\ K_\alpha (kx_0)I_\alpha (ky_0)\hfill & \text{for }x_0>y_0,\hfill \end{array}\right]`$ $`=(x_0y_0)^{\frac{d}{2}}{\displaystyle \frac{d^dk}{(2\pi )^d}\mathrm{e}^{i𝐤(𝐱𝐲)}\{\begin{array}{cc}K_\alpha (ky_0)I_\alpha (kx_0)\hfill & \text{for }x_0<y_0,\hfill \\ K_\alpha (kx_0)I_\alpha (ky_0)\hfill & \text{for }x_0>y_0,\hfill \end{array}}`$ which differs from equation (7) only by interchanging $`\alpha `$ and $`\alpha `$. Hence, the result (8) can be taken over, yielding $$\stackrel{~}{G}(x,y)=\frac{c_\alpha }{2}\xi ^{\left(\frac{d}{2}\alpha \right)}F(d/2,d/2\alpha ;1\alpha ;\xi ^2).$$ (28) Thus, inserting equation (27) into equation (25) yields $$J[\varphi _+]=J^{(0)}[\varphi _+]\frac{1}{2}_\mathrm{\Omega }𝐝𝐱𝐝𝐲𝐁(𝐱)\stackrel{~}{𝐆}(𝐱,𝐲)𝐁(𝐲)+𝐈_{\mathrm{𝐢𝐧𝐭}}.$$ (29) Moreover, one can see from equation (28) that for small $`x_0`$ $`\stackrel{~}{G}`$ behaves as $$\stackrel{~}{G}(x,y)\stackrel{x_00}{}\frac{1}{2\alpha }x_0^{\frac{d}{2}\alpha }𝒦_\alpha (𝐱,𝐲).$$ (30) Hence, writing $$\varphi (x)=d^dy𝒦_\alpha (x,𝐲)\varphi _+(𝐲)+_𝛀𝐝𝐲\stackrel{~}{𝐆}(𝐱,𝐲)𝐁(𝐲),$$ (31) the interaction contributes only to $`\varphi _{}`$. This in turn means that, expressing $`I_{int}`$ and $`B`$ as a perturbative series and using equation (31), the functional $`J`$ is naturally expressed in terms of the irregular boundary value $`\varphi _+`$. Moreover, it has the expected form, in that it is obtained from equation (19) by replacing $`\alpha `$ with $`\alpha `$ and $`\varphi _{}`$ with $`\varphi _+`$. An important point is that the Green’s function $`\stackrel{~}{G}`$ must be used for the calculation of internal lines. Finally, by a calculation similar to that of the derivation of equation (22) one finds $$\frac{\delta J[\varphi _+]}{\delta \varphi _+(𝐱)}=2\alpha \varphi _{}(𝐱).$$ (32) This is a final confirmation of the fact that the fields $`\varphi _{}`$ and $`\varphi _+`$ are conjugate to each other. In conclusion, we have expanded on Klebanov and Witten’s recent idea for formulating the AdS/CFT correspondence using irregular boundary conditions, showing it to give the expected answers to any order in perturbation theory. ## Acknowledgments This work was supported in part by a grant from NSERC. W. M. is very grateful to Simon Fraser University for financial support.
no-problem/9906/math9906084.html
ar5iv
text
# Untitled Document Pants Decompositions of Surfaces Allen Hatcher In studying the geometry and topology of surfaces it often happens that one considers a collection of disjointly embedded circles in a compact orientable surface $`\mathrm{\Sigma }`$ which decompose $`\mathrm{\Sigma }`$ into pairs of pants — surfaces of genus zero with three boundary circles. If $`\mathrm{\Sigma }`$ is not itself a pair of pants, then there are infinitely many different isotopy classes of pants decompositions of $`\mathrm{\Sigma }`$. It was observed in \[HT\] that any two isotopy classes of pants decompositions can be joined by a finite sequence of “elementary moves” in which only one circle changes at a time. In the present paper we apply the techniques of \[HT\] to study the relations which hold among such sequences of elementary moves. The main result is that there are five basic types of relations from which all others follow. Namely, we construct a two-dimensional cell complex $`𝒫(\mathrm{\Sigma })`$ whose vertices are the isotopy classes of pants decompositions of $`\mathrm{\Sigma }`$, whose edges are the elementary moves, and whose 2-cells are attached via the basic relations. Then we prove that $`𝒫(\mathrm{\Sigma })`$ is simply-connected. Now let us give the precise definitions. Let $`\mathrm{\Sigma }`$ be a connected compact orientable surface. We say $`\mathrm{\Sigma }`$ has type $`(g,n)`$ if it has genus $`g`$ and $`n`$ boundary components. By a pants decomposition of $`\mathrm{\Sigma }`$ we mean a finite collection $`P`$ of disjoint smoothly embedded circles cutting $`\mathrm{\Sigma }`$ into pieces which are surfaces of type $`(0,3)`$. We also call $`P`$ a maximal cut system. The number of curves in a maximal cut system is $`3g3+n`$, and the number of complementary components is $`2g2+n=|\chi (\mathrm{\Sigma })|`$, assuming that $`\mathrm{\Sigma }`$ has at least one pants decomposition. Let $`P`$ be a pants decomposition, and suppose that one of the circles $`\beta `$ of $`P`$ is such that deleting $`\beta `$ from $`P`$ produces a complementary component of type $`(1,1)`$. This is equivalent to saying there is a circle $`\gamma `$ in $`\mathrm{\Sigma }`$ which intersects $`\beta `$ in one point transversely and is disjoint from all the other circles in $`P`$. In this case, replacing $`\beta `$ by $`\gamma `$ in $`P`$ produces a new pants decomposition $`P^{}`$. We call this replacement a simple move, or S-move. Figure 1: an S-move and an A-move In similar fashion, if $`P`$ contains a circle $`\beta `$ such that deleting $`\beta `$ from $`P`$ produces a complementary component of type $`(0,4)`$, then we obtain a new pants decomposition $`P^{}`$ by replacing $`\beta `$ by a circle $`\gamma `$ intersecting $`\beta `$ transversely in two points and disjoint from the other curves of $`P`$. The transformation $`PP^{}`$ in this case is called an associativity move or A-move. (In the surface of type $`(0,4)`$ containing $`\beta `$ and $`\gamma `$ these two curves separate the four boundary circles in two different ways, and one can view these separation patterns as analogous to inserting parentheses via associativity.) Note that the inverse of an S-move is again an S-move, and the inverse of an A-move is again an A-move. Definition. 
The pants decomposition complex $`𝒫(\mathrm{\Sigma })`$ is the two-dimensional cell complex having vertices the isotopy classes of pants decompositions of $`\mathrm{\Sigma }`$, with an edge joining two vertices whenever the corresponding maximal cut systems differ by a single S-move or A-move, and with faces added to fill in all cycles of the following five forms: (3A) Suppose that deleting one circle from a maximal cut system creates a complementary component of type $`(0,4)`$. Then in this component there are circles $`\beta _1`$, $`\beta _2`$, and $`\beta _3`$, shown in Figure 2(a), which yield a cycle of three A-moves: $`\beta _1\beta _2\beta _3\beta _1`$. (No other loops in the given pants decomposition change.) Figure 2 (5A) Suppose that deleting two circles from a maximal cut system creates a complementary component of type $`(0,5)`$. Then in this component there is a cycle of five A-moves involving the circles $`\beta _i`$ shown in Figure 2(b): $`\{\beta _1,\beta _3\}\{\beta _1,\beta _4\}\{\beta _2,\beta _4\}\{\beta _2,\beta _5\}\{\beta _3,\beta _5\}\{\beta _3,\beta _1\}`$. (3S) Suppose that deleting one circle from a maximal cut system creates a complementary component of type $`(1,1)`$. Then in this component there are circles $`\beta _1`$, $`\beta _2`$, and $`\beta _3`$, shown in Figure 2(c), which yield a cycle of three S-moves: $`\beta _1\beta _2\beta _3\beta _1`$. (6AS) Suppose that deleting two circles from a maximal cut system creates a complementary component of type $`(1,2)`$. Then in this component there is a cycle of four A-moves and two S-moves shown in Figure 3: $`\{\alpha _1,\alpha _3\}\{\alpha _1,\epsilon _3\}\{\alpha _2,\epsilon _3\}\{\alpha _2,\epsilon _2\}\{\alpha _2,\epsilon _1\}\{\alpha _3,\epsilon _1\}\{\alpha _3,\alpha _1\}`$. Figure 3 (C) If two moves which are either A-moves or S-moves are supported in disjoint subsurfaces of $`\mathrm{\Sigma }`$, then they commute, and their commutator is a cycle of four moves. Theorem. The pants decomposition complex $`𝒫(\mathrm{\Sigma })`$ is simply connected. Thus any two sequences of A-moves and S-moves joining two given pants decompositions can be obtained one from the other by a finite number of insertions or deletions of the five types of cycles, together with the trivial operation of inserting or deleted a move followed by its inverse. Example. If $`\mathrm{\Sigma }`$ has type (0,4) or (1,1), the two cases when a maximum cut system contains just one circle, then $`𝒫(\mathrm{\Sigma })`$ is the two-dimensional complex shown in Figure 4, consisting entirely of triangles since only the relations 3A or 3S are possible in these two cases. The vertices of $`𝒫(\mathrm{\Sigma })`$ are labelled by slopes, which classify the nontrivial isotopy classes of circles on $`\mathrm{\Sigma }`$. This is a familiar fact for the torus, where slopes are defined via homology. For the $`(0,4)`$ surface, slopes are defined by lifting curves to the torus via the standard two-sheeted branched covering of the sphere by the torus, branched over four points which become the four boundary circles of the $`(0,4)`$ surface. Figure 4 PROOF. This uses the same basic approach as in \[HT\], which consists of realizing cut systems as level sets of Morse functions $`f:\mathrm{\Sigma }\text{}`$. Let $`I=[0,1]`$. We consider Morse functions $`f:(\mathrm{\Sigma },\mathrm{\Sigma })(I,0)`$ whose critical points all lie in the interior of $`\mathrm{\Sigma }`$. 
To such a Morse function we associate a finite graph $`\mathrm{\Gamma }(f)`$, which is the quotient space of $`\mathrm{\Sigma }`$ obtained by collapsing all points in the same component of a level set $`f^1(a)`$ to a single point in $`\mathrm{\Gamma }(f)`$. If we assume $`f`$ is generic, so that all critical points have distinct critical values, then the vertices of $`\mathrm{\Gamma }(f)`$ all have valence 1 or 3 and arise from critical points of $`f`$ or from boundary components of $`\mathrm{\Sigma }`$. Namely, boundary components give rise to vertices of valence 1, as do local maxima and minima of $`f`$, while saddles of $`f`$ produce vertices of valence 3. See Figure 2 of \[HT\] for pictures. We can associate to such a function $`f`$ a maximal cut system $`C(f)`$, unique up to isotopy, by either of the following two equivalent procedures: (1) Choose one point in the interior of each edge of $`\mathrm{\Gamma }(f)`$, take the circles in $`\mathrm{\Sigma }`$ which these points correspond to, then delete those circles which bound disks in $`\mathrm{\Sigma }`$ or are isotopic to boundary components, and replace collections of mutually isotopic circles by a single circle. (2) Let $`\mathrm{\Gamma }_0(f)`$ be the unique smallest subgraph of $`\mathrm{\Gamma }(f)`$ which $`\mathrm{\Gamma }(f)`$ deformation retracts to and which contains all the vertices corresponding to boundary components of $`\mathrm{\Sigma }`$. If $`\mathrm{\Gamma }_0(f)`$ has vertices of valence 2, regard these not as vertices but as interior points of edges. In each edge of $`\mathrm{\Gamma }_0(f)`$ not having a valence 1 vertex as an endpoint, choose an interior point distinct from the points which were vertices of valence 2. Then let $`C(f)`$ consist of the circles in $`\mathrm{\Sigma }`$ corresponding to these chosen points of $`\mathrm{\Gamma }_0(f)`$. Every maximal cut system arises as $`C(f)`$ for some generic $`f:(\mathrm{\Sigma },\mathrm{\Sigma })(I,0)`$. To obtain such an $`f`$, one can first define it near the circles of the given cut system and the circles of $`\mathrm{\Sigma }`$ so that all these circles are noncritical level curves, then extend to a function defined on all of $`\mathrm{\Sigma }`$, then perturb this function to be a generic Morse function. After these preliminaries, we can now show that $`𝒫(\mathrm{\Sigma })`$ is connected. Given two maximal cut systems, realize them as $`C(f_0)`$ and $`C(f_1)`$. Connect the generic Morse functions $`f_0`$ and $`f_1`$ by a one-parameter family $`f_t:(\mathrm{\Sigma },\mathrm{\Sigma })(I,0)`$ with no critical points near $`\mathrm{\Sigma }`$. This is possible since the space of such functions is convex. After perturbing the family $`f_t`$ to be generic, then $`f_t`$ is a generic Morse function for each $`t`$, except for two phenomena: birth-death critical points, and crossings interchanging the heights of two consecutive nondegenerate critical points, as described on p.224 of \[HT\]. The associated maximal cut systems $`C(f_t)`$ will be independent of $`t`$ except for possible changes caused by these two phenomena. Birth-death points are local in nature and occur in the interior of an annulus in $`\mathrm{\Sigma }`$ bounded by two level curves, hence produce no change in $`C(f_t)`$. Crossings can affect $`C(f_t)`$ only when both critical points are saddles. Up to level-preserving diffeomorphism, there are five possible configurations for such a pair of saddles, shown in Figures 5 and 6 of \[HT\]. 
The three simplest configurations are shown in Figure 5 below, and one can see that the intermediate level curve dividing the subsurface into two pairs of pants changes by an A-move as the relative heights of the two saddles are switched. Figure 5 The fourth configuration, shown in the left picture of Figure 6 below, also occurs in a subsurface of type (0,4). Here the crossing produces an interchange of the level curves $`\alpha _1`$ and $`\alpha _2`$ indicated in the middle picture. These two curves intersect in four points, and can be redrawn as in the right picture. They are related by a pair of A-moves, interpolating between them the horizontal circle $`\beta `$. (In terms of Figure 4, we can connect the slope $`1`$ and $`1`$ vertices by an edgepath passing through either the slope $`0`$ or slope $`\mathrm{}`$ vertices.) Figure 6 The fifth configuration takes place in a subsurface of type (1,2), as shown in Figure 7. Here the two level curves in the left-hand figure change to the two in the right-hand figure. This is precisely the change from the pair of circles in the middle of the upper row of Figure 3 to the pair in the middle of the lower row. Thus the change is realized by an A-move, an S-move, and an A-move. This finishes the proof that $`𝒫(\mathrm{\Sigma })`$ is connected. Figure 7 Note that the edgepath in $`𝒫(\mathrm{\Sigma })`$ associated to the generic family $`f_t`$ is not quite unique. For a crossing as in the fourth configuration, shown in Figure 6, there were two associated edgepaths in $`𝒫(\mathrm{\Sigma })`$, which in Figure 4 corresponded to passing from slope $`1`$ to slope $`1`$ through either slope $`0`$ or slope $`\mathrm{}`$. These two edgepaths are homotopic in $`𝒫(\mathrm{\Sigma })`$ using two relations of type 3A. Similarly, a crossing in the fifth configuration, in Figure 7, corresponded to an edgepath of three edges, but there are precisely two choices for this edgepath, the two ways of going halfway around Figure 3, so these two choices are related by a relation of type 6AS. Thus we conclude that the edgepath associated to a generic family $`f_t`$ is unique up to homotopy in $`𝒫(\mathrm{\Sigma })`$. A preliminary step to showing $`𝒫(\mathrm{\Sigma })`$ is simply connected is: Lemma. Every edgepath in $`𝒫(\mathrm{\Sigma })`$ is homotopic in the 1-skeleton of $`𝒫(\mathrm{\Sigma })`$ to an edgepath which is the sequence of maximal cut systems $`C(f_t)`$ associated to a generic one-parameter family $`f_t`$. PROOF. First we show: ($``$) If the cut systems $`C(f_0)`$ and $`C(f_1)`$ are isotopic, then there is a generic family $`f_t`$ joining $`f_0`$ and $`f_1`$ such that for all $`t`$, $`f_t`$ has nonsingular level curves in the isotopy classes of all the circles of $`C(f_0)`$ and $`C(f_1)`$. This can be shown as follows. Composing $`f_0`$ with an ambient isotopy of $`\mathrm{\Sigma }`$ taking the curves in $`C(f_0)`$ to the curves in $`C(f_1)`$, we may assume that $`C(f_0)=C(f_1)`$. The normal directions to these curves defined by increasing values of $`f_0`$ and $`f_1`$ may not agree, but this can easily be achieved by a deformation of $`f_0`$ near $`C(f_0)`$. Then we can further deform $`f_0`$ so that it agrees with $`f_1`$ near $`C(f_0)=C(f_1)`$ and near $`\mathrm{\Sigma }`$, without changing the local behavior near critical points. Then, keeping the new $`f_0`$ fixed where we have made it agree with $`f_1`$, we can deform it to coincide with $`f_1`$ everywhere by a generic family $`f_t`$. 
To deduce the lemma from ($``$) it then suffices to realize an arbitrary A-move or S-move. For A-moves we can just use Figure 5. Similarly, Figure 7 realizes a given S-move sandwiched between two A-moves, but we can realize the inverses of these A-moves, so the result follows. $``$$``$ Now consider an arbitrary loop in $`𝒫(\mathrm{\Sigma })`$. By the lemma, together with the statement ($``$) in its proof, this loop is homotopic to a loop of the form $`C(f_t)`$ for a loop of generic functions $`f_t`$. Since the space of functions is convex, there is a 2-parameter family $`f_{tu}`$ giving a nullhomotopy of the loop $`f_t`$. We may assume $`f_{tu}`$ is a generic 2-parameter family, so that $`f_{tu_0}`$ is a generic 1-parameter family for each $`u_0`$ except for the six types of isolated phenomena listed on page 230 of \[HT\]. The proof that $`𝒫(\mathrm{\Sigma })`$ is simply connected will be achieved by showing that these phenomena change the associated loop $`C(f_{tu_0})`$ by homotopy in $`𝒫(\mathrm{\Sigma })`$. The first three of the six involve degenerate critical points and are uninteresting for our purposes. In each case the change in generic 1-parameter family is supported in subsurfaces of $`\mathrm{\Sigma }`$ of type $`(0,k)`$, $`k3`$, bounded by level curves, so there is no change in the associated path in $`𝒫(\mathrm{\Sigma })`$. The last three phenomena, numbered (4), (5), and (6) on page 230 of \[HT\], involve only nondegenerate critical points, which we may assume are saddles since otherwise the reasoning in the preceding paragraph shows that nothing interesting is happening. Number (4) is rather trivial: A crossing and its “inverse” are cancelled or introduced. We may choose the segment of the edgepath in $`𝒫(\mathrm{\Sigma })`$ associated to the crossing and its inverse so that it simply backtracks across up to three edges, hence the edgepath changes only by homotopy. Number (5) is the commutation of two crossings involving four distinct saddles. This corresponds to a homotopy of the associated edgepath across 2-cells representing the commutation relation C. Number (6) arises when three saddles have the same $`f_{tu}`$-value at an isolated point in the $`(t,u)`$-parameter space. As one circles around this value, the heights of the saddles vary through the six possible orders: 123, 132, 312, 321, 231, 213, 123. To finish the proof it remains to analyze the various possible configurations for these three saddles. The interesting cases not covered by previous arguments are when the three saddles lie in a connected subsurface bounded by level curves just above and below the three saddles. Note that we can immediately say that all relations among moves, apart from the commutation relation, are supported in subsurfaces of types (0,5) and (1,3). This is because a subsurface bounded by level curves with three saddles, hence Euler characteristic $`3`$, must have at least two boundary circles, one below the saddles and one above, so if the surface is connected it must have type (0,5) or (1,3). The analysis below will show that the (1,3) subsurfaces can be reduced to (1,2) subsurfaces. There are sixteen possible configurations of three saddles on one level, shown in Figure 8, where the saddles are regarded as 1-handles, or rectangles, attached to level curves. The sixteen configurations are grouped into eight pairs, the two configurations in each pair being related by replacing $`f_{tu}`$ by its negative. 
Figure 8 The first five pairs involve a genus zero subsurface and are somewhat easier to analyze visually than the other three pairs, which occur in a genus one subsurface. We consider each of these five pairs in turn. (a) A picture of the subsurface with $`f_{tu}`$ as the height function is shown in Figure 9. Figure 9 Viewed from above, the surface can be seen as a disk with four subdisks deleted, a (0,5) surface. In the second row of the figure we show the various configurations of level curves when the saddles are perturbed to each of the six possible orders. For example, the first diagram shows the order 123, where the saddle 1 is the highest, 2 is the middle, and 3 is the lowest. The two circles shown lie between the two adjacent pairs of saddles. The four dots represent four of the five boundary circles of the (0,5) surface, the fifth being regarded as the point at infinity in the one-point compactification of the plane. In the third row of the figure this fifth point is brought in to a finite point and the level circles are redrawn accordingly. The two adjacent orderings 132 and 312 produce the same level curves, so we have in reality a cycle of five maximal cut systems. Each is related to the next (and the first to the last) by an A-move, and the whole cycle is the relation 5A. (c) We treat this case next since it is very similar to (a). From Figure 10 it is clear that one again has the relation 5A. Figure 10 (b) Here the 3-fold rotational symmetry makes it unlikely that one would directly get the relation 5A. The second row of Figure 11 shows the cycle of six cut systems. Figure 11 It is convenient to simplify the notation at this point by representing the two circles in a pants decomposition of the (0,5) surface by two arcs joining four of the five points representing the boundary circles. The boundary of a neighborhood of each arc is then a circle separating two of the five points from the other three. The third row of the figure shows the cycle of six cut systems in this notation, with the fifth point at infinity and an arc to this point indicated by an arrow from one of the other four points. Note that we have a cycle of six A-moves. This can be reduced to two 3A and two 5A relations by adjoining the two configurations in the fourth row. Schematically, one subdivides a hexagon into two pentagons and two triangles by inserting two interior vertices, as shown. (d) Here the cycle of six cut systems contains two steps which are not A-moves but resolve into a pair of A-moves. Thus we have a cycle of eight A-moves, and this decomposes into two 5A relations, as shown in Figure 12. (e) In this case we have the configuration shown in Figure 13, with 3-fold symmetry. The cycle of six multicurves has three steps which resolve into pairs of A-moves, so we have a cycle of nine A-moves. This can be reduced to three 3A relations and four cycles of six A-moves. After a permutation of the five boundary circles of the (0,5) surface, each of these 6-cycles becomes the 6-cycle considered in case (b). Figure 12 Figure 13 This completes the analysis of the five cases of the phenomenon (5) involving genus 0 surfaces. In particular, the theorem is now proved for surfaces of type $`(0,n)`$. To finish the proof it would suffice to do a similar analysis of the three remaining configurations of three saddles in surfaces of type (1,3). 
However, the cycles of A- and S-moves arising from these configurations are somewhat more complicated than those in the genus zero configurations, so instead of carrying out this analysis, we shall make a more general argument, showing that the relation 3A and 6AS suffice to reduce the genus one case to the genus zero case. So let $`\mathrm{\Sigma }`$ have type $`(1,n)`$. We can view the boundary components of $`\mathrm{\Sigma }`$ as punctures rather than circles, so $`\mathrm{\Sigma }`$ is the complement of $`n`$ points in a torus $`\widehat{\mathrm{\Sigma }}`$. Given an edgepath loop $`\gamma `$ in $`𝒫(\mathrm{\Sigma })`$, its image $`\widehat{\gamma }`$ in $`𝒫(\widehat{\mathrm{\Sigma }})`$ is nullhomotopic since the explicit picture of $`𝒫(\widehat{\mathrm{\Sigma }})`$ shows it is contractible. Our task is to show the nullhomotopy of $`\widehat{\gamma }`$ lifts to a nullhomotopy of $`\gamma `$. The nullhomotopy of $`\widehat{\gamma }`$ gives a map $`\widehat{g}:D^2𝒫(\widehat{\mathrm{\Sigma }})`$. Making $`\widehat{g}`$ transverse to the graph dual to the 1-skeleton of $`𝒫(\widehat{\mathrm{\Sigma }})`$, the preimage of this dual graph is a graph $`G`$ in $`D^2`$, intersecting the boundary of $`D^2`$ transversely, as depicted by the solid lines in the left half of Figure 14. Figure 14 The vertices of $`G`$ in the interior of $`D^2`$ have valence three, and are the preimages of the center points of triangles of $`𝒫(\widehat{\mathrm{\Sigma }})`$. Each such vertex corresponds to three circles on $`\widehat{\mathrm{\Sigma }}`$ having distinct slopes and disjoint except for a single point where they all three intersect transversely. Each edge of $`G`$ corresonds to a pair of circles on $`\widehat{\mathrm{\Sigma }}`$ of distinct slopes, intersecting transversely in one point. The complementary regions of $`G`$ correspond to single circles. In a neighborhood $`N`$ of $`G`$ we can choose all these circles in $`\widehat{\mathrm{\Sigma }}`$ to vary continuously with the point in $`N`$. We can also assume these continuously varying circles have general position intersections with the $`n`$ puncture points, so that they are disjoint from the punctures except along arcs, shown dotted in Figure 14, abutting interior points of edges of $`G`$, where a single circle slides across a puncture. Near such a dotted arc we thus have three circles: the circle before it slides across the puncture, the circle after it slides across the puncture, and a third circle intersecting each of the two circles in one point transversely. We can perturb the first two circles to be disjoint, so they are essentially two parallel copies of the same circle with the puncture between them. A neighborhood of the three circles is then a surface of type (1,2). We can identify the three circles in this subsurface with the three simplest circles in Figure 3: the upper and lower meridian circles and the longitudinal circle. The puncture is one of the two boundary circles of the subsurface. Adjoining the other circles shown in the figure, we get various pants decompositions of the subsurface. Choosing a fixed pants decomposition of the rest of $`\mathrm{\Sigma }`$ then gives a way of lifting $`\widehat{g}`$ to $`g:D^2𝒫(\mathrm{\Sigma })`$ in a neighborhood of the dotted arc, by superimposing Figure 3 on the right half of Figure 14. 
Since the chosen circles are disjoint from punctures elsewhere along $`G`$, we can then extend the lift $`g`$ over $`G`$ by extending the given circles to pants decompositions of $`\mathrm{\Sigma }`$, using just the fact that any two pants decompositions of a genus zero surface can be connected by a sequence of A-moves. Finally, the lift $`g`$ can be extended over the complementary regions of $`G`$ since the theorem is already proved for genus zero surfaces. $``$$``$References \[HT\] A. Hatcher and W. Thurston, A presentation for the mapping class group of a closed orientable surface, Topology 19 (1980), 221-237.
no-problem/9906/cond-mat9906054.html
ar5iv
text
# Comment on “Spin Transport properties of the quantum one-dimensional non-linear sigma model” ## Abstract In a recent preprint (cond-mat/9905415), Fujimoto has used the Bethe ansatz to compute the finite temperature, zero frequency Drude weight of spin transport in the quantum $`\mathrm{O}(3)`$ non-linear sigma model in a magnetic field $`H0`$. We show here that, contrary to his claims, the results are in accord with earlier semiclassical results (Sachdev and Damle, Phys. Rev. Lett. 78, 943 (1997)). We also comment on his $`1/N`$ expansion, and show that it does not properly describe the long-time correlations. In a recent preprint , Fujimoto has considered non-zero temperature ($`T`$) transport in the one-dimensional quantum $`\mathrm{O}(3)`$ non-linear sigma model. He considers the frequency ($`\omega `$) dependent spin-conductivity, $`\sigma (\omega )`$, and tests for the possibility that it has a term of the form $$\text{Re}\sigma (\omega )=K\delta (\omega )+\mathrm{}.$$ (1) In the presence of a non-zero magnetic field, $`H0`$, he uses a Bethe ansatz computation to show in the low-temperature limit that $`K\sqrt{T}e^{(\mathrm{\Delta }H)/T}`$, where $`\mathrm{\Delta }`$ is the magnitude of the $`T=0`$ energy gap. Here we will show that, contrary to the claims of Fujimoto , this result is in precise accord with earlier semiclassical results . For a classical system, the dynamical spin conductivity is given in terms of the the time ($`t`$) autocorrelation of the total spin current $`J(t)`$ as $$\sigma (\omega )=\frac{1}{TL}_0^{\mathrm{}}J(t)J(0)e^{i\omega t}𝑑t,$$ (2) where $`L`$ is the size of the system, and, in the notation of Ref , the spin current is $$J(t)=\underset{k}{}m_k\frac{dx_k(t)}{dt},$$ (3) where $`m_k`$ are the azimuthal spins of classical particles on trajectories $`x_k(t)`$. Then the average over spins given in Eqn 3 of Ref shows that $$J(t)J(0)=A_1\underset{k,\mathrm{}}{}\frac{dx_k(t)}{dt}\frac{dx_{\mathrm{}}(0)}{dt}+A_2\underset{k}{}\frac{dx_k(t)}{dt}\frac{dx_k(0)}{dt}.$$ (4) We will now show that the first term proportional to $`A_1`$ above contributes only to $`K`$; the second term proportional to $`A_2`$ yields only regular diffusive contributions to $`\sigma (\omega )`$, and these latter terms were the focus of attention in Ref . The terms proportional to $`A_1`$ were also discussed in Ref , but Fujimoto appears to have overlooked them. In the semiclassical model, the set of velocities at time $`t`$ is simply a permutation of the velocities at $`t=0`$, and so in the first summation in (4) we can relabel the particles at time $`t`$ such that $`dx_{k=\mathrm{}}(t)/dt=dx_{\mathrm{}}(0)/dt`$. Then the average in the first term in (4) easily evaluates to an average over a single Maxwell-Boltzmann distribution, and we get $$J(t)J(0)=A_1\frac{L\rho c^2T}{\mathrm{\Delta }}+A_2(\mathrm{}),$$ (5) where $`c`$ is the velocity of ‘light’ in the sigma model, and $`\rho `$ is the total density of particles. Combining (1), (2) and (5), and using expressions in Ref , we have $$K=\sqrt{\frac{\pi Tc^2}{2\mathrm{\Delta }}}e^{(\mathrm{\Delta }H)/T}\left(\frac{12e^{2H/T}+e^{4H/T}}{1+e^{H/T}+e^{2H/T}}\right).$$ (6) This result is valid for $`H,T\mathrm{\Delta }`$, but $`H/T`$ arbitrary. In the low $`T`$ limit at fixed $`H0`$ ($`TH\mathrm{\Delta }`$), (6) agrees precisely with Fujimoto’s result for $`K`$. It is interesting that $`K`$ vanishes as $`H0`$ for fixed $`T\mathrm{\Delta }`$, and then the conductivity only has the diffusive contribution proportional to $`A_2`$ . 
Fujimoto has only quoted results in the low temperature limit for fixed $`H0`$, and it would be interesting to extend his computations to $`H=0`$ to access the complementary regime discussed in Ref . Strictly speaking, a purely semiclassical method cannot rule out the possibility that neglected quantum interference effects in special integrable systems will lead to a small non-zero $`K`$ at $`H=0`$, but we can expect that $`K`$ should at least be suppressed by factors of order (thermal de Broglie wavelength)/(spacing between particles) from its nominal $`H0`$ values. Purely diffusive transport is possible only at $`H=0`$, and more generally in models with strict particle-hole symmetry . It is interesting that a similar phenomenon has been noted in the interacting electron models by Zotos et al , who were able to prove ballistic transport only in models without particle-hole symmetry. Next, we comment on the $`1/N`$ expansion of transport properties. Any kind of bare $`1/N`$ expansion , or even the solution of a $`1/N`$-derived quantum Boltzmann equation , is doomed to failure at low $`T`$ due to non-perturbative effects special to one spatial dimension. Transport involves collisions of particles, and at low $`T`$ two-particle collisions dominate. The exact $`S`$-matrix for such collisions is known at general $`N`$ — it is $`𝒮_{m_1^{}m_2^{}}^{m_1m_2}(\theta )`$ where $`\theta `$ is a rapidity difference, and particles with spins $`m_1`$, $`m_2`$ scatter into particles with spins $`m_1^{}`$, $`m_2^{}`$. Now for large $`N`$, at fixed $`\theta `$, we have $$𝒮_{m_1^{}m_2^{}}^{m_1m_2}(\theta )=\delta _{m_1m_1^{}}\delta _{m_2m_2^{}}+𝒪(1/N)$$ (7) which corresponds to ballistic transmission of spin, along with a small amount of scattering at order $`1/N`$. However at low $`T`$, small rapidities dominate, and we should really take the limit $`\theta 0`$ at fixed $`N`$. In this case we find, for any fixed $`N`$ $$\underset{\theta 0}{lim}𝒮_{m_1^{}m_2^{}}^{m_1m_2}(\theta )=(1)\delta _{m_1m_2^{}}\delta _{m_2m_1^{}}.$$ (8) This corresponds to total reflection of spin, and was the key effect behind the diffusive behavior discovered in Ref . This effect will not be captured at any finite order in the $`1/N`$ expansion; this makes all conclusions drawn from the $`1/N`$ expansion in Ref unreliable.
no-problem/9906/hep-ph9906353.html
ar5iv
text
# X Rays from Old Neutron Stars Heated by Axion Stars ## Abstract We show that axionic boson stars collide with isolated old neutron stars with strong magnetic field ($`>10^8`$ Gauss) and causes the neutron stars to radiate X ray by heating them. Surface temperatures of such neutron stars becomes $`10^5\text{K}10^6\text{K}`$. We suggest that these are possible candidates for X ray sources observed in ROSAT Survey. We discuss a possible way of identifying such neutron stars. We also point out that the collision generates a burst of monochromatic radiations with frequency given by axion mass. preprint: Nisho-3 Axion, Neutron Star, Boson Star, Dark Matter, X Ray Axions are one of most plausible candidates of dark matter in the Universe. The axions are produced in early Universe mainly by decay of axion strings, decay of axion domain wall or coherent oscillation of axion field. These axions can form coherent axionic boson stars in the present Universe. Namely, some of the axions contract themselves gravitationally to axionic boson stars. We call them simply as axion stars. In the previous papers we have pointed out that when axion stars collide with cold white dwarfs invisible with present observational apparatus, they heat such white dwarfs and make them visible. This heating arises owing to the energy deposited by the axion stars to the white dwarfs. That is, magnetic field of the white dwarfs induces an electric field in the axion stars, which turns to generate an electric current in the white dwarfs. The energy of this electric current is dissipated, owing to finite electric conductivity of the white dwarfs. Consequently the white dwarfs gain thermal energy, that is, the energy of the axion stars is transformed to the thermal energy of the white dwarfs by the collision. As a result, old white dwarfs regain their brightness. This thermalization of the axion energy under the magnetic field is a phenomenon similar to one arising in a cavity proposed for detection of the axion by Sikivie. In this letter we point out that cold neutron stars ( NSs ) also regain brightness by collision with axion stars. As a result these neutron stars becomes detectable with X ray observation. In particular, isolated old neutron stars become to emit X ray with this mechanism just as those accreting gas of interstellar medium. Hence they are possible candidates for X ray sources detected in ROSAT ALL-Sky Survey. Since both values of strength of magnetic field and electric conductivity are extremely large in the case of NS, energy dissipation of axion star proceeds very rapidly so that the collision would be observed as an explosion generating a blast of wind. Since electric currents induced in NS by the collision are oscillating with single frequency, a burst of monochromatic radiations is also produced. We calculate luminosity of such NSs and estimate the number of them existing in the neighborhood of the sun. We show that old NSs gain so much energies with the collision as for their surface temperatures to increase up to $`10^5\text{K}10^6`$K. Furthermore, we show that there may be one or more such NSs within the distance of $`1`$Kpc around the sun, assuming dark matter being dominated by axion stars. However, the precise number depends on several unknown parameters, e.g. mass of axion stars, collision parameters between axion star and NS, etc. 
The masses of the axion stars in which we are interested are assumed such as $`M_a=10^{11}M_{}10^{13}M_{}`$, which have been favored according to arguments of the generation mechanism of axion stars by Kolb and Tkachev Let us first explain briefly axionic boson stars and then we explain how they dissipate their energies in NS. In general, boson stars are composed of coherent bosons bounded gravitationally, which are described by a solution of a corresponding boson field equation coupled with gravity. In our case axions are such bosons with mass $`m_a`$ and are represented by a real scalar field $`a`$. Axion stars are coherent bound states of the boson and are characterized by their mass $`M_a`$ or radius $`R_a`$, which are related with each other. Explicitly they are represented approximately by $$a=f_{PQ}a_0\mathrm{sin}(m_at)\mathrm{exp}(r/R_a),$$ (1) where $`t`$ ($`r`$) is the time (radial) coordinate and $`f_{PQ}`$ is the decay constant of the axion. The value of $`f_{PQ}`$ is constrained from cosmological and astrophysical considerations such as $`10^{10}`$ GeV $`<f_{PQ}<`$ $`10^{12}`$ GeV. Here dimensionless amplitude, $`a_0`$ in eq(1) is represented explicitely in terms of the radius, $`R_a`$ in the limit of small mass ( e.g. $`10^{12}M_{}`$) of the axion star, $$a_0=1.73\times 10^8\frac{(10^8\text{cm})^2}{R_a^2}\frac{10^5\text{eV}}{m_a}.$$ (2) In the same limit we have found a simple relation between the mass $`M_a`$ and the radius $`R_a`$ of the axion star, $$M_a=6.4\frac{m_{pl}^2}{m_a^2R_a},$$ (3) with Planck mass $`m_{pl}`$. Numerically, $`R_a=1.6\times 10^8M_{12}^1m_5^2\text{cm}`$ where $`M_{12}M_a/10^{12}M_{}`$ and $`m_5m_a/10^5\text{eV}`$. A similar formula holds even without the limit but with a minor modification of numerical coefficient. It turns out that the axionic boson stars are “oscillating” with the frequency of $`m_a/2\pi `$. It has been shown that there are no physically relevant, “static”, axionic boson stars. This property is specific in real scalar field. Static solutions of complex boson field representing boson stars are well known to exist. We comment that there is a critical mass $`M_c`$ of axion star beyond which stable solutions do not exit. It is approximately given by $`M_c10^5M_{}/m_5`$. This notion is a similar to the critical mass of neutron stars or white dwarfs. The mass gives a typical mass scale of these stars present in the Universe. Thus axion stars could have the mass of the order of such a critical mass, although in this paper we address axion stars with the masses mentioned above. These axion stars induce electric fields under a magnetic field; the magnetic field is supposed to be associated with neutron star in this paper. This can be easily seen by taking account of a following interaction term between axion and electromagnetic field, $$L_{a\gamma \gamma }=c\alpha a\stackrel{}{E}\stackrel{}{B}/f_{PQ}\pi $$ (4) with $`\alpha =1/137`$, where $`\stackrel{}{E}`$ and $`\stackrel{}{B}`$ are electric and magnetic fields respectively. The value of $`c`$ depends on the axion models; typically it is of order 1. It follows from this interaction that Gauss’s law is modified such as $$\stackrel{}{}\stackrel{}{E}=c\alpha \stackrel{}{}(a\stackrel{}{B})/f_{PQ}\pi +\text{“matter”}$$ (5) where the last term, “matter”, denotes contributions from ordinary matter. The first term on the right hand side represents an electric charge density formed by axion field under external magnetic field $`\stackrel{}{B}`$. 
It is interesting that the axion field can induces the electric charge, inspite of the field itself being neutral. We find that axion star induces an electric field, $`\stackrel{}{E_a}=c\alpha a\stackrel{}{B}/f_{PQ}\pi `$, under the magnetic field. This field is oscillating since the field $`a`$ is oscillating, and induces oscillating electric currents in NS. Accordingly, monochromatic radiations are emitted. Note that the radius $`R_a`$ of the axion star is much larger than the radius, $`R_n10^6`$cm, of neutron star; $`R_a=10^7\text{cm}10^9\text{cm}`$ for axion stars with mass, $`M_a=10^{11}M_{}10^{13}M_{}`$. Hence the electric field induced at any place in the axion star does not necessarily generate electric current. Electric current is only induced in electric conducting medium such as NS. Thus only a part of the axion star contacting NS generates electric current in NS. This electric currents, $`J_a=\sigma E_a`$, are very strong since the electric conductivity, $`\sigma `$, is quite high in NS ( for example, $`\sigma 10^{26}`$/s in crystallized crust of NS ). Accordingly, the rate of the energy dissipation of the current is very large. Since the energy of the current is supplied by the axion star, the energy dissipation of the axion star itself proceeds very rapidly. Actually we find that axion stars dissipate their energies so rapidly that they evapolate quite soon simply when they touch with NS. It may be observed as an explosion generating a burst of monochromatic radiations as well as a blast of wind. In this way, the axion star releases the entire energy ($`10^{42}`$ erg ) in NS. The energy heats up cold NS. Consequently such a NS becomes bright again. Now we estimate the rate of the energy dissipation with use of Ohm’s law. We consider the circumstance that axion star collides with NS, which gains thermal energy inside of the axion star. Taking account of the fact that the radius $`R_a`$ of the axion star is much larger than that of NS, we calculate Joule’s heat $`W`$ produced in NS, $`W`$ $`=`$ $`{\displaystyle _{r<R_n}}\sigma E_a^2𝑑x^3,`$ (6) $`=`$ $`\sigma \alpha ^2c^2B^2R_a^3a_0^2/8\pi ,`$ (7) $`=`$ $`4c^2\times 10^{54}\text{erg/s}{\displaystyle \frac{\sigma }{10^{26}/s}}{\displaystyle \frac{M}{10^{12}M_{}}}{\displaystyle \frac{B^2}{(10^{12}\text{G})^2}},`$ (8) with $`c1`$, where we have used eqs(1)$``$(3) and have supposed that strength of magnetic field of NS is typically given by $`10^{12}`$ Gauss. Here $`\sigma `$ is taken as an average conductivity in NS. Thus its value is smaller by $`3`$ order of magnitude than the value $`10^{26}`$/s quoted. This is because electrons exist mainly in crust of NS whose size is given approximately by $`R_n/10`$. This large rate of the energy dissipation implies rapid evapolation of the axion star when both two stars collide. Actually, since the energy dissipation only arises in a part of the axion star contacting NS and the energy stored in the part is only about $`M_a(R_n/R_a)^310^{36}\text{erg}M_{12}^4m_5^6`$, formally it takes $`10^{36}/10^{54}=10^{18}`$ second for the dissipation of the energy ( it takes $`10^6`$ second even with $`\sigma =10^{22}`$/s and $`B=10^8`$ G ). Therefore we find that the rapid evapolation of the axion energy arises even if NS possesses much weak magnetic field such as $`10^8`$G; the strength of this order of magnetic field is expected to be associated with old NSs. Probably, such rapid dissipation of the energy may be seen as an explosion of envelope of NS. 
It would generates a blast of wind, which subsequently collides with interstellar medium. Thus the medium emits radiations with various frequencies as a burst. We should mention that since electric currents induced in NS are oscillating with frequency $`m_a/2\pi `$, the currents generate radiations with the frequency. Thus we expect that a burst of photons with energy $`m_a`$ of axion mass is produced by the collision. This detection of the radiations is strong indication of the occurrence of such a collision. Furthermore, we can determine the mass of the axion by the detection of the burst. After axion star collides with NS, it seems that axion star simply passes NS only with the loss of a part of its energy through the dissipation mentioned above. But it is reasonable to suppose that it is trapped to NS, because the mass of the axion star is much smaller than that of the NS. Both the kinetic energy and the angular momentum of axion star would be dissipated through the above mechanism. When the axion star is trapped, the entire energy of the axion star is dissipated after all. Hereafter, we assume that the entire energy of the axion star is dissipated when it collides with NS. Hence, the energy gained by NS with the collision is given by the mass of the axion star, $`M_a=10^{41}\text{erg}10^{43}\text{erg}`$. This energy heats up cold NSs and makes them become bright again. In order to calculate roughly how temperature of such NSs rises up, we use heat capacity of free nucleons composing the NSs for simplicity. Assuming that the density of the NS is low enough for non-relativistic approximation to be valid, we may use the following formula of thermal energy in the NS, $$U=6\times 10^{47}\text{erg}(M/M_{})(\rho /\rho _n)^{2/3}(T/10^9\text{K})^2,$$ (9) with $`\rho _n=2.8\times 10^{14}\text{g cm}^3`$ being density of nucleon, where $`T`$, $`M`$ and $`\rho `$ denote the core temperature, mass and average density of the NS, respectively. Explicitly, we take the numerical parameters, $`M=1.5M_{}`$ and $`\rho =7\times 10^{14}\text{g cm}^3`$, which implies that the radius of the neutron star is given by $`R=10^6`$ cm. Since old NSs with their ages $`10^{10}`$ years have lost almost all thermal energies, the energy deposited by the axion star is the main thermal energy by which their temperature is determined, $$T8.6\times 10^6\text{K},2\times 10^6\text{K},\text{and}6\times 10^5\text{K}$$ (10) for $$M_a=10^{11}M_{},10^{12}M_{},\text{and}10^{13}M_{}\text{respectively}.$$ (11) This temperature is core temperature of NS. In order to obtain luminosity of NS, we need to know surface temperature. The temperature depends on not only the core temperature but also composition of atmosphere, or envelope of the NS. Here we use a model in which it is assumed that the envelope is composed only of iron. Then we can find surface temperatures, $`T_s`$, $$T_s2.8\times 10^5\text{K},1.4\times 10^5\text{K},\text{and}9\times 10^4\text{K}$$ (12) with which the luminosity of NS is obtained, $`L`$ $`=`$ $`7\times 10^{36}\text{erg/s}(T_s/10^7\text{K})^4`$ (13) $``$ $`4.3\times 10^{30}\text{erg/s},2.7\times 10^{29}\text{erg/s},\text{and}4.6\times 10^{28}\text{erg/s}`$ (14) corresponding to the masses in eq(11) of the axion stars, respectively. From these luminosities we can estimate roughly a period of NSs keeping this brightness, that is the period of NSs exhausting the energies deposited by the axion stars. It is approximately given by $`M_a/L10^5`$ years for any cases mentioned above. 
This value is a lower limit of the period. Actual time scale for NS to exhaust the energy deposited is longer than $`10^5`$ years. But we may think the value as a typical time scale during whose period NS maintains its brightness. We have obtained the above values of the surface temperatures by assuming strength of surface gravity and composition of NS’s envelope. The surface gravity is determined by the parameters we have used; $`M=1.5M_{}`$ and $`R=10^6`$ cm. On the other hand, NS’s envelope has been assumed to be composed only of iron. If the surface gravity is much stronger or there are even few contamination of H or He in the envelope, the surface temperatures become much larger than the values estimated above. Thus we may expect that real temperatures range roughly in $`10^5\text{K}10^6\text{K}`$. Therefore, these NSs may be possible candidates of X ray sources observed in ROSAT Survey. Although the luminosities obtained are sufficiently large for observation, it is hard to detect such NSs if the number of the NSs present in our neighborhood is quite few. Thus we wish to estimate how many such bright NSs are present in our galaxy. Especially we are concerned with the number of the NSs located within the distance of $`1`$ Kpc around the sun. In order to estimate the quantities, we need to know both numbers of cold NSs and of axion stars present in our galaxy. The number of the NSs has been guessed on the basis of present rate of appearance of supernova in a galaxy and abundance of heavy elements in our galaxy. The number has been estimated to be of the order of $`10^9`$ in our galaxy. On the other hand the number of axion stars is completely unknown. However, we may assume that the halo is composed mainly of the axion stars, because the axions are plausible candidates of the dark matter. Using these assumptions, we can estimate the number of the NSs which have collided with axion stars and have not yet lost their brightness. Since local density $`\rho _a`$ of halo is given approximately such that $`\rho _a=0.5\times 10^{24}`$g $`\text{cm}^3`$, we find that the number density $`n_a`$ of axion stars is, $`n_a=\rho \times (1\text{Kpc})^3/M_a6\times 10^{18}/M_{12}`$ per $`1\text{Kpc}^3`$. Under the assumptions that cross section of the collision between a NS and an axion star is naively given by the geometrical cross section of axion star, $`\pi R_a^2`$ and that velocity of axion stars in halo is given typically by $`3\times 10^7`$ cm/s, we calculate the number of the collisions per $`1\text{Kpc}^3`$ and per year, $$R_c=n_a\times n_{ns}\times \pi R_a^2v\times 1\text{year}10^8\times \frac{1}{M_{12}^3m_5^4}\text{per year and per }1\text{Kpc}^3$$ (15) where $`n_{ns}`$ denotes the number of NSs in the volume of $`1\text{Kpc}^3`$. Here we have assumed uniform distribution of $`10^9`$ NSs in the disk of our galaxy. This $`R_c`$ represents the production rate of NSs heated by the collision. On the other hand, life time for such NSs to maintain brightness eq(13) is about $`10^5`$ years. Thus, the number of the bright NSs produced for $`10^5`$ years by the collision is $`10^3/M_{12}^3m_5^4`$ in a local region with its volume $`1\text{Kpc}^3`$ around the sun. In other words there are $`1/M_{12}^3m_5^4`$ NSs in our galaxy. The result suggests that if the mass of the axion star is smaller than $`10^{13}M_{}`$, the number of NSs within the volume is larger than $`1`$ and hence such NSs may be detectable. Here we have restricted that $`m_a>10^5`$ eV. 
In the estimation of $`R_c`$ we have assumed naive geometrical cross section of the axion star as the cross section of the collision with NS. But since NS and axion star interacts gravitationally with each other, its collision cross section is larger than the naive one. Since the rate $`R_c`$ becomes larger as the cross section increase more, it is plausible that real number of the NSs is larger than the value obtained above. Therefore we conclude that if the mass of the axion star colliding NS is smaller than $`10^{13}M_{}`$, or collision cross section is much larger than the geometrical one $`10^8M_{12}^1m_5^2`$ cm of the axion star, the number of the NSs present within the distance of $`1`$Kpc around the sun is large enough for observation. Since surface temperatures of these NSs are of the order $`10^5\text{K}10^6\text{K}`$, they are observable as X ray sources. In the derivation of this conclusion, significant assumption is that halo is composed mainly of axion stars. On this point, recent gravitational microlense observation indicates that a half of the halo is composed of objects with mass $`0.1M_{}0.5M_{}`$. If this is true, in order to obtain the conclusion we need the assumption that the other half of the halo is composed of axion stars. Here we should comment that if masses of axion stars present in the Universe are of the order of the critical mass $`M_c`$, the energy released by the collision amounts to $`10^{49}\text{erg}/m_5`$. Thus if axion mass $`m_a`$ can take a much smaller value such as $`10^9`$eV, the energy reaches one observed in gamma ray burst. Actually axion originated in string theory may take such a small value as its mass without contradicting observation. Then it is reasonable to guess that such a collision between NS and axion star with $`M_aM_c`$ is a possible mechanism generating the gamma ray burst. The collision generates a burst of monochromatic radiations. Therefore, we expect to detect the burst of such radiations associated with gamma ray burst. Wave length of the radiations is of the order $`10^4\text{cm}10^5\text{cm}`$. Finally we discuss how we should identify NSs as ones heated by axion stars. Especially, we wish to distinguish them from NSs which emit X ray by accreting gas of interstellar medium. The latter is located in a region where density of the gas is relatively large, while the former is located even in a region without any gas. Furthermore, the NS accreting the gas must have very low velocity ($`10\text{km/s}`$), while the NS heated by axion star may have relatively high velocity for instance $`200\text{km/s}`$. Accordingly, if a X ray source is located in a region with few interstellar medium and has a relatively high velocity, it may be identified as a NS heated by axion star. However, such a NS may be a middle aged NS which also can emit X rays. For further distinction, we note that the collision between NS and axion star would be seen as an explosion generating a blast of wind as well as a burst of monochromatic radiations. Thus the NS having collided with axion star would have a cloud of gas surrounding it, which was carried by the blast of the wind. This kind of the cloud is not present around middle aged NSs as well as NSs accreting the gas of the interstellar medium. Hence if we detect such a cloud around a X ray source, the source is strong candidate of the NS heated by axion star. The author wish to express his thank for the hospitality in Tanashi KEK.
# Interaction of cavity solitons in degenerate optical parametric oscillators. ## Abstract Numerical studies together with asymptotic and spectral analysis establish regimes where soliton pairs in degenerate optical parametric oscillators fuse, repel, or form bound states. A novel bound state stabilized by coupled internal oscillations is predicted. Many nonlinear media can support soliton-like structures when contained in a driven optical cavity. We will refer to such structures as cavity solitons (CS). In quadratic nonlinear media CS have recently been predicted in both optical parametric oscillator (OPO) and second harmonic generation configurations. Although experimental observation of $`\chi ^{(2)}`$-CS remains a challenge, impressive bistability results demonstrate the required level of nonlinearity and thus pave the way towards this goal. The large values of effective $`\chi ^{(2)}`$ accessible in artificially phase-matched materials, in combination with their practically instantaneous response, are important advantages of using quadratic nonlinearity for implementation of CS for all-optical processing of information. They thus represent an interesting alternative to the CS which can be created in cavities with dispersive-absorptive, and resonant electron-hole, types of nonlinearities. In all such schemes a high CS density is desirable and therefore understanding of their interaction is a practically important problem which is still largely open. In this Letter we focus on the interactions of CS found in the below-threshold regime of a degenerate doubly resonant OPO, under conditions where the signal field has three coexistent plane-wave states. Assuming phase-matching, a plane-wave input field, and ignoring walk-off, the mean-field OPO equations can be presented in the following dimensionless form $`i\partial _tE_1=(\alpha _1\partial _x^2+\delta _1+i\gamma _1)E_1+(E_2+\mu )E_1^{\ast },`$ (1) $`i\partial _tE_2=(\alpha _2\partial _x^2+\delta _2+i\gamma _2)E_2+E_1^2/2,`$ (2) Here $`E_1`$ and $`(E_2+\mu )`$ are the signal and pump fields, respectively, at frequencies $`\omega `$ and $`2\omega `$ (we use $`\mu `$ as a measure of the pump strength). The slow time $`t`$ is scaled so that $`\gamma _m`$ (proportional to the cavity damping rates) and $`\delta _m`$ (to the detunings from its resonances) are of order unity. Here and below $`m=1,2`$. This system can describe either diffractive or dispersive effects. We consider $`x`$ a dimensionless transverse coordinate, and so set $`\alpha _m=1/m`$. For this case, existence of CS for $`\delta _m<0`$ was numerically demonstrated for $`\mu _L<\mu <\mu _R`$, where $`\mu _L=|\gamma _1\delta _2+\gamma _2\delta _1|/\sqrt{\delta _2^2+\gamma _2^2}`$, and $`\mu _R=\sqrt{\delta _1^2+\gamma _1^2}`$ is the OPO threshold. Within this range two different non-trivial homogeneous solutions ($`E_m\ne 0`$, $`\partial _xE_m=0`$) coexist with the trivial one ($`E_m=0`$), and the CS are sech-like localized states on the zero background. We start our analysis by applying a perturbative method to the problem of CS interaction. We seek solutions of Eqs. (1) in the form $`E_m(x,t)=A_m(x-x_A)+B_m(x-x_B)+`$ (3) $`ϵ(a_m(x-x_A,x_B,t)+b_m(x-x_B,x_A,t))+O(ϵ^2),`$ (4) where $`A_m(x-x_A)`$ and $`B_m(x-x_B)`$ are CS centred on $`x_{A,B}`$. Note that Eqs. (1) are invariant with respect to a $`\pi `$ phase flip of the signal field, so that $`A`$ and $`B`$ can be either in-phase or out-of-phase CS. We assume $`0<ϵ\ll 1`$, and that the perturbation functions $`a_m`$, $`b_m`$ are negligible except close to $`x_A`$, $`x_B`$ respectively.
We further assume that $`x_{A,B}`$ vary on the slow time scale $`\tau =ϵt`$ and that $`d=|x_A-x_B|`$ is large enough that the overlap functions $`\mathcal{E}_1=A_2B_1^{\ast }+B_2A_1^{\ast }`$ and $`\mathcal{E}_2=A_1B_1`$ are of order $`ϵ`$. Substituting ansatz (4) into Eqs. (1) and truncating $`O(ϵ^2)`$ terms we obtain two analogous systems of equations for $`a_m`$ and $`b_m`$, the former expressible in the form: $$(\widehat{\mathcal{L}}_A-\partial _t)\stackrel{}{a}=(\partial _\tau x_A)\stackrel{}{\xi }_A+\stackrel{}{\mathcal{E}}/ϵ,$$ (5) Here $`\stackrel{}{a}=(\mathrm{Re}\,a_1,\mathrm{Re}\,a_2,\mathrm{Im}\,a_1,\mathrm{Im}\,a_2)^T`$; the operator $`\widehat{\mathcal{L}}_A`$ is the linearization of Eqs. (1) around the soliton $`A_m`$; $`\stackrel{}{\xi }_A=\partial _x(\mathrm{Re}\,A_1,\mathrm{Re}\,A_2,\mathrm{Im}\,A_1,\mathrm{Im}\,A_2)^T`$ is the neutral eigenmode of $`\widehat{\mathcal{L}}_A`$ associated with translational symmetry, $`\widehat{\mathcal{L}}_A\stackrel{}{\xi }_A=0`$; and $`\stackrel{}{\mathcal{E}}=(\mathrm{Im}\,\mathcal{E}_1,\mathrm{Im}\,\mathcal{E}_2,\mathrm{Re}\,\mathcal{E}_1,\mathrm{Re}\,\mathcal{E}_2)^T`$ controls the interaction of the two CS. The solution of Eq. (5) should in general be expressed as a superposition of the eigenmodes $`\stackrel{}{\xi }_n`$ of $`\widehat{\mathcal{L}}_A`$, $`\widehat{\mathcal{L}}_A\stackrel{}{\xi }_n=\lambda _n\stackrel{}{\xi }_n`$, with time dependent coefficients, because the CS interaction will couple to them all. However, apart from the above-mentioned neutral eigenmodes, the only analytic knowledge about the eigensystems of $`\widehat{\mathcal{L}}_A`$ and $`\widehat{\mathcal{L}}_B`$ is that they have two bands of continuum modes with eigenvalues $`\lambda `$ lying on $`\mathrm{Re}\,\lambda =-\gamma _m`$, i.e. that all extended eigenmodes are damped. We have obtained their full eigensystems numerically, using finite differences, over wide ranges of all relevant parameters. We find that for sufficiently large dissipation all cavity solitons are stable throughout the entire region of their existence. A Hopf bifurcation can occur as the photon lifetime is increased, but we will not consider here any parameter regions where isolated CS are unstable. With oscillatory eigenmodes absent or well damped, only the neutral mode is easily excited by external perturbations, and so we meantime neglect all other modes. This enables us to obtain semi-analytic results on CS interactions. To exclude secularly growing solutions the right-hand side of Eq. (5) must be orthogonal to the neutral eigenmode of $`\widehat{\mathcal{L}}_A^{\dagger }`$ (which we calculated numerically). This solvability condition, together with that for the $`B`$ soliton, defines a function $`f`$ which governs the dynamical evolution of the distance $`d`$ between the soliton centers: $$\partial _td=f(d).$$ (6) We computed $`f`$ for both in-phase and out-of-phase interacting CS, for many parameter values. Typical examples are plotted in Fig. 1. Regions where $`f`$ is negative (positive) correspond to CS attraction (repulsion). Zeros of $`f(d)`$ thus mark stationary bound states of CS pairs, which are stable if $`\partial _df<0`$ where $`f=0`$. We find that this equation gives generally correct predictions of the inter-soliton forces, in particular that in-phase CS attract and out-of-phase CS repel. Both repulsion and attraction become stronger as $`\mu `$ increases, presumably because the signal component ($`E_1`$) of the CS becomes less localized as $`\mu `$ approaches the plane-wave threshold at $`\mu _R`$. A similar effect can be envisaged in other CS models. For in-phase CS the function $`f`$ can develop pairs of zeros, see Fig. 1. This predicts the birth of new pairs of CS bound states, one stable and one unstable.
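The simulation results discussed next were obtained by direct numerical integration of Eqs. (1). A minimal split-step Fourier sketch of such an integration, tracking the pair separation $`d(t)`$, is given below; the sign/phase conventions, parameter values and initial sech profiles are illustrative assumptions of the sketch, not the ones used to produce the figures.

```python
import numpy as np

# Split-step Fourier sketch for the mean-field OPO equations (1)-(2),
# written here in a damped convention,
#   dE1/dt = i*a1*d^2E1/dx^2 - (g1 + i*d1)*E1 + (E2 + mu)*conj(E1)
#   dE2/dt = i*a2*d^2E2/dx^2 - (g2 + i*d2)*E2 + 0.5*E1**2,
# assumed equivalent to Eqs. (1)-(2) up to sign/phase conventions.
N, L = 1024, 100.0
dx = L / N
x = (np.arange(N) - N // 2) * dx
k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

a1, a2 = 1.0, 0.5            # alpha_m = 1/m
g1, g2 = 1.0, 0.5            # damping rates gamma_m
d1, d2 = -1.0, -2.0          # detunings delta_m < 0
mu = 1.30                    # chosen inside the window mu_L < mu < mu_R
dt, steps = 0.005, 20000

# exact integration of the linear (diffraction + loss/detuning) part
lin1 = np.exp((-1j * a1 * k**2 - (g1 + 1j * d1)) * dt)
lin2 = np.exp((-1j * a2 * k**2 - (g2 + 1j * d2)) * dt)

# two in-phase sech seeds about three widths apart (illustrative)
E1 = (1.0 / np.cosh(x - 5.0) + 1.0 / np.cosh(x + 5.0)).astype(complex)
E2 = np.zeros(N, dtype=complex)

def separation(E1):
    """Distance between the intensity maxima left and right of x = 0."""
    I = np.abs(E1) ** 2
    return x[x >= 0][np.argmax(I[x >= 0])] - x[x < 0][np.argmax(I[x < 0])]

for n in range(steps):
    E1 = np.fft.ifft(lin1 * np.fft.fft(E1))       # linear step in Fourier space
    E2 = np.fft.ifft(lin2 * np.fft.fft(E2))
    E1, E2 = (E1 + dt * (E2 + mu) * np.conj(E1),  # explicit Euler for the
              E2 + dt * 0.5 * E1**2)              # quadratic coupling
    if n % 2000 == 0:
        print(f"t = {n * dt:7.1f}   d = {separation(E1):.3f}")
```

Flipping the sign of one seed selects the out-of-phase configuration, so attraction, repulsion, fusion or locking of the pair can be read off from $`d(t)`$ and compared with the sign and zeros of $`f(d)`$.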
In Fig. 2 we present simulation results showing different interaction scenarios for two CS initially separated by about three soliton widths. First, we consider the interaction of in-phase solitons. For small $`\mu `$ mutual attraction results in fusion of the two solitons into one (Fig. 2(a)). Gradually increasing $`\mu `$ we first observe formation of a stable oscillatory bound state (Fig. 2(b)), then of a stationary bound state (Fig. 2(c)) which is stable (the radiation visible in Fig. 2(c) decays, albeit slowly). Note that the equilibrium separation in Fig. 2(c) is predicted quite well by the appropriate zero of $`f(d)`$ in Fig. 1, even though these CS are close enough to endanger the assumptions of our perturbation method. Stationary two-hump solitary states have been found previously as solutions of an approximate equation derived from Eqs. (1), but no analysis of soliton interactions was performed. Note that we have found not only two-hump but also multi-hump solitary states. However, the latter were usually dynamically unstable. Further details on this issue will be reported elsewhere. Close to the upper boundary of CS existence the interaction of two solitons excites a global pattern, see Fig. 2(d), via generation of a switching wave from the stable trivial solution up into the modulationally unstable nontrivial homogeneous state. As predicted by Eq. (6), out-of-phase CS repel each other throughout the entire region of their existence - contrast Fig. 2(e) with Fig. 2(d), which corresponds to the same value of $`\mu `$. Now we describe numerical results on the interaction of CS where weakly-damped oscillatory modes strongly influence the soliton interactions. Oscillating solitons generally radiate energy, which can become trapped between neighbouring solitons, exerting a radiation force which may lead to formation of a bound state. An effect of this kind has been reported for solitons in models with a weak global dissipation. We investigated a quite different situation, where linear waves escaping from the soliton are strongly damped. Here the strong interaction between the solitons is due, not to radiation modes, but to proto-Hopf modes, and thus has novel aspects. The effect is strong provided that two conditions are satisfied. First, and crucially, the corresponding eigenmodes must have tails with a well pronounced and weakly decaying oscillatory structure, see Fig. 3(a). Second, as might be expected, the oscillatory mode should be weakly damped (see Fig. 3(b)), i.e. the CS is close to a Hopf instability. If both conditions hold then, even if the global damping due to the $`\gamma _m`$ is strong, a CS acts as a guide for waves weakly damped in both space and time. If a second CS is close enough, these guided waves can couple and reinforce each other. Fig. 3(c, d) illustrates the dynamics of two interacting CS having the eigenmodes shown in Fig. 3(a). Note that the separation of the interacting solitons in Fig. 3(c) is much greater than their width. Comparison with Fig. 3(b) clearly indicates that the undamped pulsations shown in Fig. 3(d) originate from coupling and mutual reinforcement of the oscillatory modes of the two solitons. A further interesting point is that we find these dynamic bound states also for out-of-phase solitons, balancing the repulsion induced by their neutral-mode interaction. Quadratic nonlinearity is also known to support solitons in a free-propagation geometry, and in particular the interaction of these solitons has recently been studied both experimentally and theoretically. The Hamiltonian nature of freely propagating solitons results in their interaction obeying the laws of Newtonian dynamics.
Another important difference is that the relative phase of the interacting solitons can take only two discrete values in a cavity, while it is a continuous free parameter in a propagation scheme. In spite of these differences, fusion of the in-phase solitons and repulsion of the out-of-phase ones are common features of both schemes. However, the existence of stationary and oscillatory bound states, coupled either via translational neutral modes or via internal oscillatory modes, is a novel and important feature arising from the presence of the external pump and cavity losses. The balance between pump and losses acts as a mechanism of soliton formation inside an optical cavity that is just as important as the balance between diffraction and nonlinearity. In summary, we have presented an analytical and numerical study of the interaction of cavity solitons in a degenerate OPO and identified distinct static and dynamic binding mechanisms. D.V.S. thanks C. Etrich, F. Lederer, D. Michaelis and U. Peschel for warm hospitality and illuminating discussions of many relevant questions during his short visit to Jena. He also acknowledges financial support from the Royal Society of Edinburgh and British Petroleum. The work is partially supported by ESPRIT project PIANOS and EPSRC grant GR/M19727.
# Erratum: Small-world networks: Evidence for a crossover picture [Phys. Rev. Lett. 82, 3180 (1999)]

Marc Barthélémy and Luís A. Nunes Amaral

We have performed new calculations using the breadth-first search algorithm. We are now able to study systems with sizes up to $`n=5500`$. As shown in Fig. 1, we now find $`\tau \simeq 1`$, in agreement with the simple argument given in our Letter but different from the originally reported numerical result ($`\tau =0.67\pm 0.10`$). The reason for the incorrect numerical result reported initially is the small system sizes we studied, which did not allow us to reach the asymptotic regime. We thank M.E.J. Newman and D.J. Watts and A. Barrat for alerting us to the possibility of an error in our estimate of $`\tau `$. We also thank M. Argollo de Menezes for directing us to the breadth-first search algorithm.

Figure 1: Log-log plot of $`n^{\ast }`$ vs $`p`$ for system sizes up to 5500 and for $`z=2`$, 4. Note that the curvature of $`n^{\ast }(p)`$ in the log-log plot, which gives us a local estimate of $`\tau `$, is increasing as $`p`$ decreases. In the inset, we show that $`\tau `$ approaches 1 as $`p\rightarrow 0`$. Our new estimate of $`\tau `$ is $`0.97\pm 0.05`$, consistent with the value 1 given by a simple scaling argument.

J. van Leeuwen, Ed., Handbook of Theoretical Computer Science. Volume A: Algorithms and Complexity (Elsevier, Amsterdam, 1990), p. 539. We found the LEDA libraries very useful and efficient [http://www.mpi-sb.mpg.de/LEDA/leda.html]. M.E.J. Newman and D.J. Watts, cond-mat/9903357. A. Barrat, cond-mat/9903323; A. Barrat and M. Weigt, cond-mat/9903411.
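For completeness, a minimal sketch of the kind of breadth-first-search measurement referred to above is given here; the network construction (a ring with $`z`$ neighbours on each side, links rewired with probability $`p`$), the sampling of 50 source nodes, and all parameter values are illustrative assumptions of the sketch rather than the exact procedure of the erratum.

```python
import random
from collections import deque

# Sketch of the BFS measurement behind the erratum: the average
# shortest-path length ell(n, p) of a ring of n sites, each linked to its
# z nearest neighbours on either side, with each link rewired at random
# with probability p.

def small_world(n, z, p, rng):
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(1, z + 1):
            a, b = i, (i + j) % n
            if rng.random() < p:                 # rewire the far end
                b = rng.randrange(n)
                while b == a or b in adj[a]:
                    b = rng.randrange(n)
            adj[a].add(b)
            adj[b].add(a)
    return adj

def mean_path_length(adj, rng, sources=50):
    n, total, count = len(adj), 0, 0
    for s in rng.sample(range(n), min(sources, n)):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:                                 # standard BFS from s
            u = q.popleft()
            for v in adj[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for d in dist if d > 0)
        count += sum(1 for d in dist if d > 0)
    return total / count

rng = random.Random(0)
for p in (0.001, 0.01, 0.1):
    for n in (500, 1000, 2000, 5500):
        ell = mean_path_length(small_world(n, 2, p, rng), rng)
        print(f"p={p:<6} n={n:<5} ell={ell:7.2f}")
# n*(p) is located where ell(n, p) departs from linear (large-world)
# growth, and tau from the scaling n* ~ p**(-tau).
```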
# Night Thoughts of a Quantum Physicist Adrian Kent Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Silver Street, Cambridge CB3 9EW, U.K. Abstract The most dramatic developments in theoretical physics in the next millennium are likely to come when we make progress on so far unresolved foundational questions. In this essay I consider two of the deepest problems confronting us, the measurement problem in quantum theory and the problem of relating consciousness to the rest of physics. I survey some recent promising ideas on possible solutions to the measurement problem and explain what a proper physical understanding of consciousness would involve and why it would need new physics. 1. Introduction As the twentieth century draws to a close, theoretical physics is in a situation that, at least in recent history, is most unusual: there is no generally accepted authority. Each research program has very widely respected leaders, but every program is controversial. After a period of extraordinary successes, broadly stretching from the 1900’s through to the early 1980’s, there have been few dramatic new experimental results in the last fifteen years, with the important exception of cosmology. All the most interesting theoretical ideas have run into serious difficulties, and it is not completely obvious that any of them is heading in the right direction. So to speak, some impressively large and well organised expeditionary parties have been formed and are faithfully heading towards imagined destinations; other smaller and less cohesive bands of physicists are heading in quite different directions. But we really are all in the dark. Possibly none of us will get anywhere much until the next fortuitous break in the clouds. I will try to sketch briefly how it is that we have reached this state, and then suggest some new directions in which progress may eventually be possible. But my first duty is to stress that what follow are simply my personal views. These lie somewhere between the heretical and the mainstream at the moment. Some of the best physicists of the twentieth century, would, I think, have been at least in partial sympathy.¹ In any case, I am greatly indebted to Schrödinger and Bell’s lucid scepticism and to Feynman’s compelling explanations of the scientific need to keep alternative ideas in mind if they are even partially successful, as expressed in, for example, Schrödinger 1954, Bell 1987, Feynman 1965. But most leading present-day physicists would emphasize different problems; some would query whether physicists can sensibly say anything at all on the topics I will discuss. I think we can, of course. It seems to me the problems are as sharply defined as those we have overcome in the past: it just happens that we have not properly tackled them yet. They would be quite untouched — would remain deep unsolved problems — even if what is usually meant by a “theory of everything” were discovered. Solving them may need further radical changes in our world view, but I suspect that in the end we will find there is no way around them. 2. Physics in 1999 The great discoveries of twentieth century physics have sunk so deeply into the general consciousness that it now takes an effort of will to stand back and try to see them afresh. But we should try, just as we should try to look at the night sky and at life on earth with childlike eyes from time to time.
In appreciating just how completely and how amazingly our understanding of the world has been transformed, we recapture a sense of awe and wonder in the universe and its beauty.² We owe this, of course, not to nature — which gives a very good impression of not caring either way — but to ourselves. Though we forget it too easily, that sense is precious to us. So recall: in 1900, the existence of atoms was a controversial hypothesis. Matter and light were, as far as we knew, qualitatively different. The known laws of nature were deterministic and relied on absolute notions of space and time which seemed not only natural and common sense but also so firmly embedded in our understanding of nature as to be beyond serious question. The propagation of life, and the functioning of the mind, remained so mysterious that it was easy to imagine their understanding might require quite new physical principles. Nothing much resembling modern cosmology existed. Einstein, of course, taught us to see space and time as different facets of a single geometry. And then, still more astonishingly and beautifully, that the geometry of spacetime is nonlinear, that matter is guided by the geometry and at the same time shapes it, so that gravity is understood as the mutual action of matter on matter through the curvature of spacetime. The first experiments confirming an important prediction of general relativity — that light is indeed deflected by the solar gravitational field — took place in 1919: still within living memory. Subsequent experimental tests have confirmed general relativity with increasingly impressive accuracy. It is consistent with our understanding of cosmology, as far as it can be — that is, as far as quantum effects are negligible. At the moment it has no remotely serious competitor: we have no other picture of the macroscopic world that makes sense and fits the data. Had theorists been more timid, particle physics experiments and astronomical observations would almost certainly eventually have given us enough clues to make the development of special and general relativity inevitable. As it happens, though, Einstein was only partially guided by experiment. The development of the theories of relativity relied on his extraordinary genius for seeing through to new conceptual frameworks underlying known physics. To Einstein and many of his contemporaries, the gain in elegance and simplicity was so great that it seemed the new theories almost had to be correct. While the development of quantum theory too relied on brilliant intuitions and syntheses, it was much more driven by experiment. Data — the black-body radiation spectrum, the photo-electric effect, crystalline diffraction, atomic spectra — more or less forced the new theory on us, first in ad hoc forms, and then, by 1926, synthesised. It seems unlikely that anyone would ever have found their way through to quantum theory unaided by the data. Certainly, no one has ever found a convincing conceptual framework which explains to us why something like quantum theory should be true. It just is. Nor has anyone, even after the event, come up with a truly satisfactory explanation of what precisely quantum theory tells us about nature. We know that all our pre-1900 intuitions, based as they are on the physics of the world we see around us every day, are quite inadequate.
We know that microscopic systems behave in a qualitatively different way, that there is apparently an intrinsic randomness in the way they interact with the devices we use to probe them. Much more impressively, for any given experiment we carry out on microscopic systems, we know how to list the possible outcomes and calculate the probabilities of each, at least to a very good approximation. What we do not fully understand is why those calculations work: we have, for example, no firmly established picture of what (if anything) is going on when we are not looking. Quantum theory as originally formulated was inconsistent with special relativity. Partly for this reason, it did not properly describe the interactions between light and matter either. Solving these problems took several further steps, and in time led to a relatively systematic — though still today incomplete — understanding of how to build relativistic quantum theories of fields, and eventually to the conclusion that the electromagnetic force and the two nuclear forces could be combined into a single field theory. As yet, though, we do not know how to do that very elegantly, and almost everyone suspects that a grander and more elegant unified theory of those three forces awaits us. Nor can we truly say that we fully understand quantum field theory, or even that the theories we use are entirely internally consistent. They resemble recipes for calculation, together with only partial, though tantalisingly suggestive, explanations as to why they work. Most theorists believe a deeper explanation requires a better theory, perhaps yet to be discovered. Superstring theory, which many physicists hope might provide a complete theory of gravity as well as the other forces— a “theory of everything” — is currently the most popular candidate. Though no one doubts its mathematical beauty, it is generally agreed that so far superstring theory has two rather serious problems. Conceptually, we do not know how to properly make sense of superstrings as a theory of matter plus spacetime. Nor can we extract any very interesting correct predictions from the theory — for example, the properties of the known forces, the masses of the known particles, or the apparent four-dimensionality of space-time — in any convincing way. Opinions differ sharply on whether those problems are likely to be resolved, and so whether superstring theory is likelier to be a theory of everything or of nothing: time will tell. Almost everyone agrees, though, that reconciling gravity and quantum theory is one of the deepest problems facing modern physics. Quantum theory and general relativity, each brilliantly successful in its own domain, rest on very different principles and give highly divergent pictures of nature. According to general relativity, the world is deterministic, the fundamental equations of nature are non-linear, and the correct picture of nature is, at bottom, geometric. According to quantum theory, there is an intrinsic randomness in nature, its fundamental equations are linear, and the correct language in which to describe nature seems to be closer to abstract algebra than geometry. Something has to give somewhere, but at the moment we do not know for sure where to begin in trying to combine these pictures: we do not know how to alter either in the direction of the other without breaking it totally. However, I would like here to try to look a bit beyond the current conventional wisdom. 
There is always a danger that attention clusters around some admittedly deep problems while neglecting others, simply through convention, or habit or sheer comfort in numbers. Like any other subject, theoretical physics is quite capable of forming intellectual taboos: topics that almost all sensible people avoid. They often have good reason, of course, but I suspect that the most strongly held taboos sometimes resemble a sort of unconscious tribute. Mental blocks can form because a question carries the potential for revolution, and addressing it thoughtfully would raise the possibility that our present understanding may, in important ways, be quite inadequate: in other words, they can be unconscious defences against too great a sense of insecurity. Just possibly, our best hope of saying something about future revolutions in physics may lie in looking into interesting questions which current theory evades. I will look at two here: the measurement problem in quantum theory and the mind-body problem. 3. Quantum Theory and the Measurement Problem As we have already seen, quantum theory was not originally inspired by some parsimonious set of principles applied to sparse data. Physicists were led to it, often without seeing a clear way ahead, in stages and by a variety of accumulating data. The founders of quantum theory were thus immediately faced with the problem of explaining precisely what the theory actually tells us about nature. On this they were never able to agree. However, an effective enough consensus, led by Bohr, was forged. Precisely what Bohr actually believed, and why, remain obscure to many commentators, but for most practical purposes it has hardly mattered. Physicists found that they could condense Bohr’s “Copenhagen interpretation” into a few working rules which explain what can usefully be calculated. Alongside these, a sort of working metaphysical picture — if that is not a contradiction in terms — also emerged. C.P. Snow captures this conventional wisdom well in his semi-autobiographical novel, “The Search” (Snow 1934): Suddenly, I heard one of the greatest mathematical physicists say, with complete simplicity: “Of course, the fundamental laws of physics and chemistry are laid down for ever. The details have got to be filled up: we don’t know anything of the nucleus; but the fundamental laws are there. In a sense, physics and chemistry are finished sciences.” The nucleus and life: those were the harder problems: in everything else, in the whole of chemistry and physics, we were in sight of the end. The framework was laid down; they had put the boundaries round the pebbles which we could pick up. It struck me how impossible it would have been to say this a few years before. Before 1926 no one could have said it, unless he were a megalomaniac or knew no science. And now two years later the most detached scientific figure of our time announced it casually in the course of conversation. It is rather difficult to put the importance of this revolution into words. […] However, it is something like this. Science starts with facts chosen from the external world. The relation between the choice, the chooser, the external world and the fact produced is a complicated one […] but one gets through in the end […] to an agreement upon “scientific facts”. You can call them “pointer-readings” as Eddington does, if you like.
They are lines on a photographic plate, marks on a screen, all the “pointer-readings” which are the end of the skill, precautions, inventions, of the laboratory. They are the end of the manual process, the beginning of the scientific. For from these “pointer-readings”, these scientific facts, the process of scientific reasoning begins: and it comes back to them to prove itself right or wrong. For the scientific process is nothing more nor less than a hiatus between “pointer-readings”: one takes some pointer readings, makes a mental construction from them in order to predict some more. The pointer readings which have been predicted are then measured: and if the prediction turns out to be right, the mental construction is, for the moment, a good one. If it is wrong, another mental construction has to be tried. That is all. And you take your choice where you put the word “reality”: you can find your total reality either in the pointer readings or in the mental construction or, if you have a taste for compromise, in a mixture of both. In other words, on this conventional view, quantum theory teaches us something deep and revolutionary about the nature of reality. It teaches us that it is a mistake to try to build a picture of the world which includes every aspect of an experiment — the preparation of the apparatus and the system being experimented on, their behaviour during the experiment, and the observation of the results — in one smooth and coherent description. All we need to do science, and all we can apparently manage, is to find a way of extrapolating predictions — which as it happens turn out generally to be probabilistic rather than deterministic — about the final results from a description of the initial preparation. To ask what went on in between is, by definition, to ask about something we did not observe: it is to ask in the abstract a question which we have not asked nature in the concrete. On the Copenhagen view, it is a profound feature of our situation in the world that we cannot separate the abstract and the concrete in this way. We are in sight of the end. Quantum theory teaches us the necessary limits of science. But are we? Does it? Need quantum theory be understood only as a mere device for extrapolating pointer-readings from pointer-readings? Can quantum theory be satisfactorily understood this way? After all, as we understand it, a pointer is no more than a collection of atoms following quantum laws. If the atoms and the quantum laws are ultimately just mental constructions, is not the pointer too? Is not everything? Landau and Lifshitz, giving a precise and apparently not intentionally critical description of the orthodox view in their classic textbook (Landau & Lifshitz, 1974) on quantum theory, still seem to hint at some disquiet here: Quantum mechanics occupies a very unusual place among physical theories: it contains classical mechanics as a limiting case, yet at the same time requires this limiting case for its own formulation. This is the difficulty. The classical world — the world of the laboratory — must be external to the theory for us to make sense of it; yet it is also supposed to be contained within the theory. And, since the same objects play this dual role, we have no clear division between the microscopic quantum and the macroscopic classical.
It follows that we cannot legitimately derive from quantum theory the predictions we believe the theory actually makes. If a pointer is only a mental construction, we cannot meaningfully ask what state it is in or where it points, and so we cannot make meaningful predictions about its behaviour at the end of an experiment. If it is a real object independent of the quantum realm, then we cannot explain it — or, presumably, the rest of the macroscopic world around us — in terms of quantum theory. Either way, if the Copenhagen interpretation is right, a crucial component in our understanding of the world cannot be theoretically justified. However, we now know that Bohr, the Copenhagen school, and most of the pioneers of quantum theory were unnecessarily dogmatic. We are not forced to adopt the Copenhagen interpretation either by the mathematics of quantum theory or by empirical evidence. Nor is it the only serious possibility available. As we now understand, it is just one of several possible views of quantum theory, each of which has advantages and difficulties. It has not yet been superseded: there is no clear consensus now as to which view is correct. But it seems unlikely it will ever again be generally accepted as the one true orthodoxy. What are the alternatives? The most interesting, I think, is a simple yet potentially revolutionary idea originally set out by Ghirardi, Rimini, and Weber (Ghirardi et al. 1986), and later developed further by GRW, Pearle, Gisin and several others. According to their model, quantum mechanics has a piece missing. We can fix all its problems by adding rules to say exactly how and when the quantum dice are rolled. This is done by taking wave function collapse to be an objective, observer-independent phenomenon, with small localisations or “mini-collapses” constantly taking place. This entails altering the dynamics by adding a correction to the Schrödinger equation. If this is done in the way GRW propose, the predictions for experiments carried out on microscopic systems are almost precisely the same, so that none of the successes of quantum theory in this realm are lost. However, large systems deviate more significantly from the predictions of quantum theory. Those deviations are still quite subtle, and very hard to detect or exclude experimentally at present, but they are unambiguously there in the equations. Experimentalists will one day be able to tell us for sure whether or not they are there in nature. By making this modification, we turn quantum theory into a theory which describes objective events continually taking place in a real external world, whether or not any experiment is taking place, whether or not anyone is watching. If this picture is right, it solves the measurement problem: we have a single set of equations which give a unified description of microscopic and macroscopic physics, and we can sensibly talk about the behaviour of unobserved systems, whether they are microscopic electrons or macroscopic pointers. The pointer of an apparatus probing a quantum system takes up a definite position, and does so very quickly, not through any ad hoc postulate, but in a way that follows directly from the fundamental equations of the theory. The GRW theory is probably completely wrong in detail. There are certainly serious difficulties in making it compatible with relativity — though there are also some grounds for optimism that this can be done (Pearle 1999, Kent 1998). But GRW’s essential idea has, I think, a fair chance of being right.
Before 1986, few people believed that any tinkering with quantum theory was possible: it seemed that any change must so completely alter the structure of the theory as to violate some already tested prediction. But we now know that it is possible to make relatively tiny changes which cause no conflict with experiment, and that by doing so we can solve the deep conceptual and interpretational problems of quantum theory. We know too that the modified theory makes new experimental predictions in an entirely unexpected physical regime. The crucial tests, if and when we can carry them out, will be made not by probing deeper into the nucleus or by building higher energy accelerators, but by keeping relatively large systems under careful enough control for quantum effects to be observable. New physics could come directly from the large scale and the complex: frontiers we thought long ago closed. 4. Physics and Consciousness Kieslowski’s remarkable film series, Dekalog, begins with the story of a computer scientist and his son who share a joy in calculating and predicting, in using the computer to give some small measure of additional control over their lives. Before going skating, the son obtains weather reports for the last three days from the meteorological bureau, and together they run a program to infer the thickness of the ice and deduce that it can easily bear his weight. But, tragically, they neglect the fire a homeless man keeps burning at the lakeside. Literally, of course, they make a simple mistake: the right calculation would have taken account of the fire, corrected the local temperature, and shown the actual thickness of the ice. Metaphorically, the story seems to say that the error is neglecting the spiritual, not only in life, but perhaps even in physical predictions. I do not myself share Kieslowski’s religious worldview, and I certainly do not mean to start a religious discussion here. But there is an underlying scientific question, which can be motivated without referring to pre-scientific systems of belief and is crucial to our understanding of the world and our place in it, and which I think is still surprisingly neglected. So, to use more scientifically respectable language, I would like to take a fresh look at the problem of consciousness in physics, where by “consciousness” I mean the perceptions, sensations, thoughts and emotions that constitute our experience. There has been a significant revival of interest in consciousness lately, but it still receives relatively little attention from physicists. Most physicists believe that, if consciousness poses any problems at all, they are problems outside their province.³ Penrose is the best-known exception: space does not permit discussion of his rather different arguments here, but see Penrose 1989, 1994. After all, the argument runs, biology is pretty much reducible to chemistry, which is reducible to known physical laws. Nothing in our current understanding suggests that there is anything physically distinctive about living beings, or brains. On the contrary, neurophysiology, experimental psychology, evolutionary and molecular biology have all advanced with great success, based firmly on the hypothesis that there is not. Of course, no one can exclude the possibility that our current understanding could turn out to be wrong — but in the absence of any reason to think so, there seems nothing useful for physicists to say. I largely agree with this view.
It is very hard to see how any novel physics associated with consciousness could fit with what we already know. Speculating about such ideas does seem fruitless in the absence of data. But I think we can say something. There is a basic point about the connection between consciousness and physics which ought to be made, yet seems never to have been clearly stated, and which suggests our present understanding almost cannot be complete. The argument for this goes in three steps. First, let us assume, as physicists quite commonly do, that any natural phenomenon can be described mathematically. Consciousness is a natural phenomenon, and at least some aspects of consciousness — for example, the number of symbols we can simultaneously keep in mind — are quantifiable. On the other hand we have no mathematical theory even of these aspects of consciousness. This would not matter if we could at least sketch a path by which statements about consciousness could be reduced to well understood phenomena. After all, no one worries that we have no mathematical theory of digestion, because we believe that we understand in principle how to rewrite any physical statement concerning the digestive process as a statement about the local densities of various chemicals in the digestive tract, and how to derive these statements from the known laws of physics. But we cannot sketch a similar path for consciousness: no one knows how to transcribe a statement of the form “I see a red giraffe” into a statement about the physical state of the speaker. To make such a transcription, we would need to attach a theory of consciousness to the laws of physics we know: it clearly cannot be derived from those laws alone. Second, we note that, despite the lack of a theory of consciousness, we cannot completely keep consciousness out of physics. All the data on which our theories are based ultimately derive from conscious impressions or conscious memories of impressions. If our ideas about physics included no hypothesis about consciousness, we would have no way of deriving any conclusion about the data, and so no logical reason for preferring any theory over any other. This difficulty has long been recognised. It is dealt with, as best we can, by invoking what is usually called the principle of psycho-physical parallelism. We demand that we should at least be able to give a plausible sketch of how an accurate representation of the contents of our conscious minds could be included in the description of the material world provided by our physical theories, assuming a detailed understanding of how consciousness is represented. Since we do not actually know how to represent consciousness, that may seem an empty requirement, but it is not. Psycho-physical parallelism requires, for example, that a theory explain how anything that we may observe can come to be correlated with something happening in our brains, and that enough is happening in our brains at any given moment to represent the full richness of our conscious experience. These are hard criteria to make precise, but asking whether they could plausibly be satisfied within a given theory is still a useful constraint. Now the principle of psycho-physical parallelism, as currently applied, commits us to seeing consciousness as an epiphenomenon supervening on the material world. 
As William James magnificently put it (James 1879): Feeling is a mere collateral product of our nervous processes, unable to react upon them any more than a shadow reacts on the steps of the traveller whom it accompanies. Inert, uninfluential, a simple passenger in the voyage of life, it is allowed to remain on board, but not to touch the helm or handle the rigging. Third, the problem with all of this is that, as James went on to point out, if our consciousness is the result of Darwinian evolution, as it surely must be, it is difficult to understand how it can be an epiphenomenon. To sharpen James’ point: if there is a simple mathematical theory of consciousness, or of any quantifiable aspect of consciousness, describing a precise version of the principle of psycho-physical parallelism and so characterising how it is epiphenomenally attached to the material world, then its apparent evolutionary value is fictitious. For all the difference it would make to our actions, we might as well be conscious only of the number of neutrons in our kneecaps or the charm count of our cerebella; we might as well find pleasures painful and vice versa. In fact, of course, our consciousness tends to supply us with a sort of executive summary of information with a direct bearing on our own chances of survival and those of our genes; we tend to find actions pleasurable or painful depending on whether they are beneficial or harmful to those chances. Though we are not always aware of vital information, and are always aware of much else, and though our preferences certainly don’t perfectly correlate with our genetic prospects, the general predisposition of consciousness towards survival is far too strong to be simply a matter of chance. Now, of course, almost no one seriously suggests that the main features of consciousness can be the way they are purely by chance. The natural hypothesis is that, since they seem to be evolutionarily advantageous, they should, like our other evolutionarily advantageous traits, have arisen through a process of natural selection. But if consciousness really is an epiphenomenon, this explanation cannot work. An executive summary of information which is presented to us, but has no subsequent influence on our behaviour, carries no evolutionary advantage. It may well be advantageous for us that our brains run some sort of higher-level processes which use the sort of data that consciousness presents to us and which are used to make high-level decisions about behaviour. But, on the epiphenomenal hypothesis, we gain nothing by being conscious of these particular processes: if they are going to run, they could equally well be run unconsciously, leaving our attention focussed on quite different brain activities or on none at all. Something, then, is wrong with our current understanding. There are really only two serious possibilities. One is that psycho-physical parallelism cannot be made precise and that consciousness is simply scientifically inexplicable. The other is that consciousness is something which interacts, if perhaps very subtly, with the rest of the material world rather than simply passively co-existing alongside that world. If that were the case, then we can think of our consciousnesses and our brains — more precisely, the components of our brains described by presently understood physics — as two coupled systems, each of which influences the other. That is a radically different picture from the one we presently have, of course. But it does have explanatory power.
If it were true, it would be easy to understand why it might be evolutionarily advantageous for our consciousness to take a particular form. If say, being conscious of a particular feature of the environment helps to speed up the brain’s analysis of that feature, or to focus more of the brain’s processing power on it, or to execute relevant decisions more quickly, or to cause a more sophisticated and detailed description to enter into memory, then evolution would certainly cause consciousness to pay attention to the relevant and neglect the irrelevant. We have to be clear about this, though: to propose this explanation is to propose that the actions of conscious beings are not properly described by the present laws of physics. This does not imply that conscious actions cannot be described by any laws. Far from it: if that were the case, we would still have an insoluble mystery, and once we are committed to accepting an insoluble mystery associated with consciousness then we have no good reason to prefer a mystery which requires amending the laws of physics over one which leaves the existing laws unchallenged. The scientifically interesting possibility — the possibility with maximal explanatory power — is that our actions and those of other conscious beings are not perfectly described by the laws we presently know, but could be by future laws which include a proper theory of consciousness. This need not be true, of course. Perhaps consciousness will forever be a mystery. But it seems hard to confidently justify any a priori division of the unsolved problems in physics into the soluble and the forever insoluble. We ought at least to consider the implications of maximal ambition. We generally assume that everything in nature except consciousness has a complete mathematical description: that is why, for example, we carry on looking for a way of unifying quantum theory and gravity, despite the apparent difficulty of the problem. We should accept that, if this assumption is right, it is at least plausible that consciousness also has such a description. And this forces us to accept the corollary — that there is a respectable case for believing that we will eventually find we need new dynamical laws — even though nothing else we know supports it. One final comment: nothing in this argument relies on the peculiar properties of quantum theory, or the problems it poses. The argument runs through equally well in Newtonian physics. Maybe the deep problems of quantum theory and consciousness are linked, but it seems to me we have no reason to think so. It follows that anyone committed to the view I have just outlined must argue that a deep problem in physics has generally been neglected for the last century and a half. So let me try to make that case. There is no stronger or more venerable scientific taboo than that against enquiry, however tentative, into consciousness. James, in 1879, quoted “a most intelligent biologist” as saying: It is high time for scientific men to protest against the recognition of any such thing as consciousness in scientific investigation. Scientific men and women certainly have protested this, loudly and often, over the last hundred and twenty years. But have those protests ever carried much intellectual force? The folk wisdom, such as it is, against the possibility of a scientific investigation of consciousness seems now to rest on a confusion hanging over from the largely deleterious effect of logical positivism on scientists earlier this century. 
Hypotheses about consciousness are widely taken to be ipso facto unscientific because consciousness is presently unmeasurable and its influences, if any, are presently undetectable. Delete the word “presently”, and the case could be properly made: as it is, it falls flat. If logical positivism is to blame, it is only the most recent recruit to the cause. The problem seems to run much deeper in scientific culture. Schrödinger described (Schrödinger 1954) the phenomenon of: […] the wall, separating the ‘two paths’, that of the heart and that of pure reason. We look back along the wall: could we not pull it down, has it always been there? As we scan its windings over hills and vales back in history we behold a land far, far, away at a space of over two thousand years back, where the wall flattens and disappears and the path was not yet split, but was only one. Some of us deem it worth while to walk back and see what can be learnt from the alluring primeval unity. Dropping the metaphor, it is my opinion that the philosophy of the ancient Greeks attracts us at this moment, because never before or since, anywhere in the world, has anything like their highly advanced and articulated system of knowledge and speculation been established without the fateful division which has hampered us for centuries and has become unendurable in our days. Clearly, the revival of interest in Greek philosophy that Schrödinger saw did not immediately produce the revolution he hoped for. But our continued fascination with consciousness is evident on the popular science and philosophy bookshelves. It looks as though breaking down the wall and building a complete worldview are going to be left as tasks for the third millennium. There could hardly be greater or more fascinating challenges. Nor can there be many more necessary for our long term well being. Science has done us far more good than harm, psychologically and materially. But the great advances we have made in understanding nature have also been used to support a worldview in which only what we can now measure matters, in which the material and the external dominate, in which we objectify and reduce ourselves and each other, in which we are in danger of coming to see our psyches and our cultures, in all their richness, as no more than the evolutionarily honed expression of an agglomeration of crude competitive urges. To put it more succinctly, there is a danger, as Vaclav Havel put it in a recent essay (Havel 1996), of man as an observer becoming completely alienated from himself as a being. Havel goes on to suggest that hopeful signs of a more humane and less schizophrenic worldview can be found in what he suggests might be called postmodern science, in the form of the Gaia hypothesis and the anthropic principle. I disagree: it is hard to pin down precise scientific content in these ideas, and insofar as we can, it seems to me, they are no help. But I think we have the answer already. The alienation is an artefact, created by the erroneous belief that all that physics currently describes is all there is. But, on everything we value in our humanity, physics is silent. As far as our understanding of human consciousness is concerned, though we have learned far more about ourselves, we have learned nothing for sure that negates or delegitimizes a humane perspective. In that sense, nothing of crucial importance has changed. 5. Postscript All this said, of course, predicting the future of science is a mug’s game.
If, as I have argued, physics is very far from over, the one thing we should be surest of is that greater surprises than anything we can imagine are in store. One prediction that seems likelier than most, though, is that the Editor will not be restricted to considering human contributors for the corresponding volume in 2999. Perhaps our future extraterrestrial or mechanical colleagues will find some amusement in our attempts. I do hope so. References Schrödinger, E. 1954 Nature and the Greeks. Cambridge: Cambridge University Press. Bell, J.S. 1987 Speakable and Unspeakable in Quantum Mechanics: Collected Papers on Quantum Philosophy. Cambridge: Cambridge University Press. Feynman, R. 1965 The Character of Physical Law. London: British Broadcasting Corporation; Reading: Addison Wesley. Snow, C.P. 1934 The Search. London: Victor Gollancz. Ghirardi, G. et al. 1986 Unified Dynamics for Microscopic and Macroscopic Systems. Physical Review D 34 470-491. Landau, L. and Lifshitz, E. 1974 Quantum Mechanics. Oxford: Pergamon Press. Pearle, P. 1999 Relativistic Collapse Model with Tachyonic Features. Physical Review A 59 80-101. Kent, A. 1998 Quantum Histories. Physica Scripta T76 78-84. Penrose, R. 1989 The Emperor’s New Mind: Concerning Computers, Minds, and the Laws of Physics. Oxford: Oxford University Press. Penrose, R. 1994 Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford: Oxford University Press. James, W. 1879 Are We Automata? Mind 13 1-22. Havel, V. 1996 In The Fontana Postmodernism Reader (ed. W. Truett Anderson). London: Fontana.
# ESO deep observations of the optical afterglow of GRB 990510 Partially based on data collected at the ESO VLT–FORS1 (Paranal), NTT–SUSI2 and ESO 3.6 m EFOSC2 (La Silla) telescopes ## 1 Introduction On 1999 May 10.36743 UT the BATSE detectors on board CGRO, and the Gamma–Ray Burst Monitor (GRBM) and the Wide Field Cameras (WFCs) on board the Italian–Dutch satellite BeppoSAX detected a gamma ray burst, GRB 990510, with a fluence of $`2.5\times 10^{-5}`$ erg $`\text{cm}^{-2}`$ above 20 keV (Kippen 1999; Amati et al. 1999; Dadina et al. 1999). The first optical follow–up observations began only $`\sim `$3.5 hours after the $`\gamma `$–ray event and revealed a relatively bright optical transient ($`R=17.54`$, Axelrod, Mould & Schmidt 1999; Vreeswijk et al. 1999a) at $`\alpha =13^\mathrm{h}38^\mathrm{m}07.11^\mathrm{s}`$, $`\delta =-80^{\circ }29^{\prime }48.2^{\prime \prime }`$ (equinox 2000; Hjorth et al. 1999b). When compared to previously studied afterglows, the OT showed initially a fairly slow flux decay ($`F_\nu \propto t^{-0.9}`$; Galama et al. 1999), that steepened after one day ($`F_\nu \propto t^{-1.3}`$; Stanek et al. 1999a) and further steepened after 4 days ($`F_\nu \propto t^{-1.8}`$; Pietrzyński & Udalski 1999; Bloom et al. 1999) and 5 days ($`F_\nu \propto t^{-2.5}`$; Marconi et al. 1999a,b). Such a progressive and smooth steepening had not been observed before. Vreeswijk et al. (1999), using the VLT, detected red–shifted absorption lines in the OT spectrum corresponding to a redshift lower limit of $`z=1.619`$. In this letter we report on deep observations of the OT performed with the ESO VLT, NTT and 3.6 m telescopes. These observations allowed us to extend the coverage of the OT light curve up to $`\sim `$10 days from the burst onset and to search for an underlying host galaxy. ## 2 Observations and results The observations were performed with the 8 m VLT–Antu telescope equipped with the Focal Reducer/Low Dispersion Spectrograph (FORS1) on May 11 (6.8′$`\times `$6.8′ field of view and 0.2″/pixel resolution), the ESO 3.5 m NTT equipped with the Superb Seeing Imager – 2 (SUSI2) between May 16–18 (5.5′$`\times `$5.5′ field of view and 0.16″/pixel resolution), and the ESO 3.6 m telescope with the ESO Faint Object Spectrograph and Camera (EFOSC2) mounted at the F/8 Cassegrain Focus on May 20 (5.5′$`\times `$5.5′ field of view and 0.32″/pixel resolution). We performed photometry in the Bessel–$`R`$ and Gunn–$`I`$ filters. The data were reduced using standard ESO–MIDAS and IRAF procedures for bias subtraction and flat–field correction. Photometry for each stellar object in the image was derived both with the DAOPHOT II and the ROMAFOT MIDAS–packages (Stetson 1987; Buonanno & Iannicola 1989). Point–like source $`R`$ magnitudes were derived by comparison with nearby stars, assuming $`R=16.5`$ for the star at $`\alpha =13^\mathrm{h}38^\mathrm{m}00.82^\mathrm{s}`$, $`\delta =-80^{\circ }29^{\prime }11.7^{\prime \prime }`$ (Bloom et al. 1999), while the $`I`$ magnitudes were calibrated observing a number of Landolt photometric standards during the observational night (Landolt 1992). In Fig. 1 the field around the position of the OT as observed by the VLT–FORS1 (May 11; left panel) and NTT–SUSI2 (May 18; right panel) telescopes is shown. Table 1 reports the results of the photometry for each of the pointings. The mediocre seeing of the observations is in part due to the low OT elevation at the Paranal and La Silla Observatories. The May 18 NTT–SUSI2 image is the deepest and the one obtained with the best seeing (1.1″).
The May 18 NTT–SUSI2 image is the deepest and the one obtained with the best seeing (1.1″). In this image the OT is sufficiently faint to allow a sensitive search for additional underlying point–like or diffuse objects. Both the DAOPHOT II and the ROMAFOT packages failed to associate a point–like Point Spread Function (PSF) with the OT. Moreover, the flag that records the sharpness of each object is consistent with diffuse emission, either from a single object with broad wings or from two nearby point–like objects. However, the presence of the OT itself raises the local background level, which in turn reduces the detection sensitivity. An underlying point–like object (either an unresolved host galaxy or a faint star) could have been detected down to $`R\sim 26.6`$, as could a diffuse object with an angular extension $`>`$ 1″. Note that the HST image of the GRB 990123 field showed a candidate host galaxy of ∼1″ extension (Fruchter et al. 1999). If a host of similar size were expected in our case (the redshifts of the two bursts are similar), it would likely remain unresolved in our images. We can also set a limit on a host galaxy that is near, but PSF–separated from, the OT by summing the three NTT–SUSI2 images taken on May 16/17/18: a 7200 s total exposure in the $`R`$ band was obtained and analysed. Stellar objects in this image were searched for with DAOPHOT II; the faintest point–like star detected has $`R`$∼27 (S/N∼5).

An alternative way of detecting a host galaxy consists in monitoring the low-flux end of the afterglow decay: an unresolved galaxy causes the light curve to flatten when the OT flux becomes comparable to that of the host. This method was successfully applied to afterglow light curves, obtained from days to months after the GRB event, in the cases of GRB 971214 (Odewahn et al. 1998), GRB 980703 (Bloom et al. 1998; Castro–Tirado et al. 1999) and GRB 970508 (Zharikov & Sokolov 1999). Typical magnitudes of the known underlying galaxies are in the $`R`$=22–27 range (Hogg & Fruchter 1999 and references therein). We applied the same technique to GRB 990510. In order to further characterise the afterglow decay we collected all the published fluxes of the OT in the $`V`$, $`R`$ and $`I`$ bands (Axelrod et al. 1999; Galama et al. 1999; Harrison et al. 1999; Kaluzny et al. 1999a,b; Stanek 1999; Stanek et al. 1999a,b; Pietrzyński & Udalski 1999a,b,c; Beuermann et al. 1999). When available we used the published uncertainties, while in the remaining cases 10% of the flux measurement was adopted as a typical error. We fitted the $`V`$, $`R`$ and $`I`$ light curves (36, 43 and 32 data points, respectively) using the empirical model for the flux evolution, $`F_\nu (t)`$, described in Marconi et al. (1999a):

$$F_\nu (t)=\frac{k_\nu t^{-\alpha _1}}{1+(t/t_{*})^{\alpha _2-\alpha _1}}.$$ (1)

Stanek et al. (1999b; hereafter S99) adopted the same model; the earlier model proposed by Bloom et al. (1999; see also Harrison et al. 1999) is similar. $`F_\nu (t)`$ is characterised by four free parameters: two power–law indices $`\alpha _1`$ and $`\alpha _2`$ (for the earlier and the later part of the decay, respectively), a folding time $`t_{*}`$ at which the two power laws match, and the normalisation $`k_\nu `$. Table 2 summarises the fit results. Note that the parameter uncertainties are 1$`\sigma `$ for a single parameter of interest; all parameters in the fit were left free to vary.
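As an illustration of how a fit of this kind can be set up, the sketch below fits Eq. 1 to a synthetic light curve with scipy.optimize.curve_fit. The data points and parameter values are invented (chosen only to resemble the decay indices quoted above); this is not a reproduction of the fits in Table 2.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_power_law(t, k, a1, a2, t_star):
    """Eq. 1: F(t) = k t**(-a1) / (1 + (t/t_star)**(a2 - a1)),
    i.e. F ~ t**-a1 for t << t_star and F ~ t**-a2 for t >> t_star."""
    return k * t ** (-a1) / (1.0 + (t / t_star) ** (a2 - a1))

# Synthetic light curve (t in days, arbitrary flux units) with 10% errors,
# mirroring the typical uncertainty adopted in the text.
rng = np.random.default_rng(1)
t = np.logspace(-1, 1, 25)
f = broken_power_law(t, 1.0, 0.85, 2.2, 1.6)
f *= 1.0 + 0.1 * rng.standard_normal(t.size)
f_err = 0.1 * f

popt, pcov = curve_fit(broken_power_law, t, f, sigma=f_err,
                       p0=[1.0, 0.9, 2.0, 1.0], absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))  # 1-sigma errors with all parameters free
for name, val, err in zip(("k", "alpha1", "alpha2", "t_star"), popt, perr):
    print(f"{name:7s} = {val:5.2f} +/- {err:.2f}")
```

The 1$`\sigma `$ errors here come from the diagonal of the covariance matrix with all parameters left free, which is the convention adopted for Table 2.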
Similar results were reported by S99. We ascribe their somewhat smaller uncertainties (and larger $`\chi ^2`$) to the fact that, in evaluating the uncertainty on each parameter, S99 held the other parameters fixed. We also note that different assumed values of the measurement uncertainties may affect the fit and account for the different $`\chi ^2`$ values quoted in our work and in S99.

Fig. 2 shows the $`V`$, $`R`$ and $`I`$ relative-flux light curves (obtained from M<sub>V,R,I</sub> = –2.5 log F<sub>V,R,I</sub>) fitted with the model of Eq. 1, keeping $`\alpha _1`$, $`\alpha _2`$ and $`t_{*}`$ fixed at the best-fit values obtained for the $`R`$ band and leaving only the normalisation free to vary (solid lines). We obtained a $`\chi ^2`$ of 95 for 105 degrees of freedom ($`dof`$), corresponding to $`\chi ^2/dof`$ = 0.9. Note that in this case, given the larger number of data points in the $`R`$ filter, the simultaneous fit in the $`V`$, $`R`$ and $`I`$ bands is biased towards the $`R`$–band light curve. The derived fit parameters are nearly filter-independent, within the statistical uncertainties, suggesting a similar temporal evolution of the emission in the $`VRI`$ bands, with only the normalisation $`k_\nu `$ varying; this was already noted by Bloom et al. (1999) and Marconi et al. (1999a).

To include the possibility of an underlying galaxy we added to the model of Eq. 1 a fifth free parameter (a constant) corresponding to the host-galaxy flux. Galaxy color indices strongly depend on the morphology and distance of the host; they were adopted from Buzzoni (1995, 1999 and references therein), which takes into account the effects of stellar evolution within the galaxy. We considered a set of color indices corresponding to $`z=1.6`$ and to three different morphologies: elliptical ($`V`$–$`R`$ = 2.4; $`R`$–$`I`$ = 2.1), spiral (Sb; $`V`$–$`R`$ = 0.32; $`R`$–$`I`$ = 1.02) and irregular ($`V`$–$`R`$ = 0.15; $`R`$–$`I`$ = 0.55). Moreover, we corrected the data for absorption in the direction of GRB 990510 ($`E_{B-V}`$ ≃ 0.20; Stanek et al. 1999b), which corresponds to $`E_{V-R}`$ ≃ $`E_{R-I}`$ ≃ 0.15. Dashed lines in Fig. 2 show the fit obtained using the colors of an elliptical host galaxy of magnitude $`R=26.6`$. We first fitted the light curves as before without the deepest points (one in the $`V`$, three in the $`R`$ and one in the $`I`$ band); we then added the deep data of this paper, keeping $`\alpha _1`$, $`\alpha _2`$ and $`t_{*}`$ fixed and leaving the normalisation and the host-galaxy flux free to vary. In all cases we obtained $`\chi ^2/dof`$ in the 0.9–1.1 range. As expected, the unchanged value of the reduced $`\chi ^2`$ indicates that the constant parameter does not significantly improve the fit. In the case of an elliptical host, the limit on the galaxy magnitude is driven mainly by the OT upper limit in the $`I`$ band: any (elliptical) object brighter than $`R=26.6`$ would produce a levelling off of the OT $`I`$–band light curve that is not observed. The spiral and irregular host-galaxy cases are consistent with the $`I`$–band data; there the limit is driven by the deep VLT observation in the $`V`$ band (Beuermann et al. 1999), since any object brighter than $`R=26.6`$ would be brighter than observed in the $`V`$ band. If the host galaxy is farther than $`z=1.6`$, fainter magnitudes are expected, while the color indices (which are differential quantities) remain nearly constant (at least up to $`z\sim 2.0`$).
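To illustrate the effect of the fifth parameter, here is a small sketch (all numbers invented for illustration; fluxes are relative, from M = –2.5 log F as in Fig. 2) showing how a putative $`R=26.6`$ host would flatten the late-time light curve:

```python
import numpy as np

def ot_plus_host(t, k, a1, a2, t_star, f_host):
    """Eq. 1 plus a constant host-galaxy flux (the fifth free parameter)."""
    return k * t ** (-a1) / (1.0 + (t / t_star) ** (a2 - a1)) + f_host

def flux(mag):
    """Relative flux from magnitude, M = -2.5 log10 F (arbitrary zero point)."""
    return 10.0 ** (-0.4 * mag)

k = flux(17.5)       # illustrative early-time OT normalisation
f_host = flux(26.6)  # constant floor from a hypothetical R = 26.6 host
for t in (1.0, 5.0, 20.0, 50.0):
    f_ot = ot_plus_host(t, k, 0.85, 2.2, 1.6, 0.0)  # OT alone
    print(f"t = {t:4.0f} d: host/OT flux ratio = {f_host / f_ot:.3f}")
# The ratio grows from ~0.000 at 1 d to ~0.7 at 50 d: a bright enough
# host would visibly level off the light curve at late times.
```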
## 3 Conclusions

We reported on deep optical observations of the GRB 990510 OT. We are able to follow the OT in the $`R`$ band down to 23.7 and to derive a limit in the $`I`$ band ($`I>23.6`$). We propose a functional form for fitting the multi-band optical light curve (see also S99; Bloom et al. 1999) and derive strong limits on the magnitude of the galaxy hosting the GRB. In particular, our photometric data exclude an elliptical host galaxy brighter than $`R`$=26.6. Further observations are needed to look for a faint $`R`$∼27 host galaxy.

###### Acknowledgements. The authors thank G. Iannicola for his kind support with ROMAFOT, and the referee, K.Z. Stanek, whose comments helped to improve this paper.
# Overall Evolution of Realistic Gamma-Ray Burst Remnant and Its Afterglow

Supported by the National Natural Science Foundation of China under Grant Nos. 19773007 and 19825109, and the National Climbing Project on Fundamental Researches.

Published in: Chinese Physics Letters 16 (1999) 775

## Abstract

The conventional dynamic model of gamma-ray burst remnants is found to be incorrect for adiabatic blastwaves during the non-relativistic phase. A new model is derived, which is shown to be correct for both radiative and adiabatic blastwaves during both the ultra-relativistic and the non-relativistic phase. Our model also takes the evolution of the radiative efficiency into account. The importance of the transition from the ultra-relativistic phase to the non-relativistic phase is stressed.

PACS: 95.30.Lz, 98.70.Rz, 97.60.Jd

The origin of gamma-ray bursts (GRBs) has remained unknown for over 30 years.<sup>1,2</sup> A major breakthrough came in early 1997, when the Italian-Dutch BeppoSAX satellite observed X-ray afterglows from GRB 970228 for the first time.<sup>3</sup> Since then, X-ray afterglows have been observed from about 15 GRBs, of which ten events were detected optically and five also at radio wavelengths. The cosmological origin of at least some GRBs is firmly established. The so-called fireball model<sup>4,5</sup> is strongly favoured, and it has proved successful at explaining the major features of the low-energy light curves.<sup>6-9</sup> In the fireball model, low-energy afterglows are generated by ultra-relativistic fireballs, which first give birth to GRBs through internal or external shock waves and then decelerate continuously through collisions with the interstellar medium (ISM). The dynamics of the expansion has been investigated extensively,<sup>6-9</sup> and both analytic solutions and numerical approaches are available. The general conception is that current models describe the gross features of the process very well and that further improvements are possible only in the details. However, we find that three serious problems affect the popular model.

First, it is usually assumed that the expansion is ultra-relativistic. For an adiabatic fireball the evolution of the bulk Lorentz factor is then derived to be:

$$\gamma \approx (200\text{–}400)E_{51}^{1/8}n_1^{-1/8}t^{-3/8},$$ (1)

where $`E_{51}=E_0/(10^{51}\mathrm{erg})`$ with $`E_0`$ the initial fireball energy, $`n_1=n/(1\,\mathrm{cm}^{-3})`$ with $`n`$ the ISM number density, and $`t`$ is the observer's time in units of s.<sup>6-9</sup> The radius of the blastwave scales as $`R\propto t^{1/4}`$. Based on Eq. (1), the flux density at frequency $`\nu `$ then declines as $`S_\nu \propto t^{3(1-p)/4}`$, where $`p`$ is the index characterizing the power-law distribution of the shocked ISM electrons, $`dn_\mathrm{e}^{}/d\gamma _\mathrm{e}\propto \gamma _\mathrm{e}^{-p}`$. These expressions are valid only when $`\gamma \gg 1`$. However, optical afterglows from GRB 970228 and GRB 970508 were detected for as long as 190 and 260 d respectively, while in Eq. (1) even $`t=30`$ d leads to $`\gamma \sim 1`$. It is clear that the overall evolution of the postburst fireball cannot be regarded as a simple one-phase process; we should pay special attention to the transition from the ultra-relativistic phase to the non-relativistic phase.<sup>10</sup> This is unfortunately ignored in the literature.
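A quick numerical check makes the problem explicit (a sketch; the coefficient 300 is a representative mid-range value of Eq. (1) for $`E_{51}=n_1=1`$):

```python
# Eq. (1) with E_51 = n_1 = 1 and a mid-range coefficient of ~300:
# gamma formally reaches order unity within about a month, far short of
# the ~200-260 d over which optical afterglows have been followed.
for t_days in (1, 10, 30):
    t = t_days * 86400.0               # observer time in seconds
    gamma = 300.0 * t ** (-3.0 / 8.0)  # Eq. (1)
    print(f"t = {t_days:2d} d  ->  gamma ~ {gamma:.1f}")
# t =  1 d  ->  gamma ~ 4.2
# t = 10 d  ->  gamma ~ 1.8
# t = 30 d  ->  gamma ~ 1.2
```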
Second, the expansion of the fireball might be either adiabatic or highly radiative. Extensive attempts have been made to find a common model applicable to both cases.<sup>11-13</sup> As a result, a differential equation has been proposed by various authors,<sup>12,13</sup>

$$\frac{d\gamma }{dm}=-\frac{\gamma ^2-1}{M},$$ (2)

where $`m`$ is the rest mass of the swept-up ISM and $`M`$ is the total mass in the co-moving frame, including the internal energy $`U`$. Since the thermal energy produced during the collisions is $`dE=c^2(\gamma -1)dm`$, one usually assumes:<sup>13</sup>

$$dM=\frac{(1-ϵ)}{c^2}dE+dm=[(1-ϵ)\gamma +ϵ]dm,$$ (3)

where $`ϵ`$ is defined as the fraction of the shock-generated thermal energy (in the co-moving frame) that is radiated. It is commonly held that Eq. (2) is correct in both the ultra-relativistic and the non-relativistic phase, for both radiative and adiabatic fireballs. In the highly radiative case, $`ϵ=1`$ and $`dM=dm`$, so Eq. (2) reduces to

$$\frac{d\gamma }{dm}=-\frac{\gamma ^2-1}{M_{\mathrm{ej}}+m},$$ (4)

where $`M_{\mathrm{ej}}`$ is the mass ejected from the GRB central engine. An analytic solution is then available,<sup>11,13</sup> which satisfies $`\gamma \propto R^{-3}`$ when $`\gamma \gg 1`$ and $`v\propto R^{-3}`$ when $`\gamma \approx 1`$, where $`v`$ is the bulk velocity of the material. These scaling laws indicate that Eq. (2) is indeed correct for highly radiative fireballs. In the adiabatic case, $`ϵ=0`$ and $`dM=\gamma dm`$, and Eq. (2) again has an analytic solution:<sup>12</sup>

$$M=[M_{\mathrm{ej}}^2+2\gamma _0M_{\mathrm{ej}}m+m^2]^{1/2},$$ (5)

$$\gamma =\frac{m+\gamma _0M_{\mathrm{ej}}}{M},$$ (6)

where $`\gamma _0`$ is the initial value of $`\gamma `$. During the ultra-relativistic phase, Eqs. (5) and (6) do reproduce the familiar power law $`\gamma \propto R^{-3/2}`$, which is often quoted for an adiabatic blastwave decelerating in a uniform medium. In the non-relativistic limit ($`\gamma \approx 1`$, $`m\gg \gamma _0M_{\mathrm{ej}}`$), Chiang and Dermer derived $`\gamma \approx 1+\gamma _0M_{\mathrm{ej}}/m`$,<sup>12</sup> and hence believed that the solution also agrees with the Sedov result, i.e., $`v\propto R^{-3/2}`$.<sup>14</sup> However, we find that their approximation is not accurate,<sup>15</sup> because they omitted some first-order infinitesimals of $`\gamma _0M_{\mathrm{ej}}/m`$. The correct approximation is obtained only by retaining all first- and second-order infinitesimals, which in fact gives $`\gamma \approx 1+(\gamma _0M_{\mathrm{ej}}/m)^2/2`$, so that $`v\propto R^{-3}`$.<sup>15</sup> This is not consistent with the Sedov solution! The problem is serious: (i) it means that the reliability of Eq. (2) is questionable, although it does correctly reproduce the major features for radiative fireballs, and even for adiabatic fireballs in the ultra-relativistic limit; (ii) in the non-relativistic phase of the expansion the fireball is more likely to be adiabatic than highly radiative, yet it is precisely in this regime that the conventional model fails. Any calculation based on Eq. (2) will therefore lead to serious deviations in the light curves during the non-relativistic phase.
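This failure is easy to verify numerically. A minimal sketch (arbitrary mass units; $`\gamma _0=300`$ is chosen purely for illustration) evaluates Eqs. (5) and (6) deep in the non-relativistic regime, where $`m\propto R^3`$ for a uniform medium:

```python
import numpy as np

gamma0, M_ej = 300.0, 1.0   # illustrative values, arbitrary mass units
for m in (1.0e4, 8.0e4):    # doubling R multiplies m by 8 (m ~ R^3)
    M = np.sqrt(M_ej**2 + 2.0 * gamma0 * M_ej * m + m**2)  # Eq. (5)
    gamma = (m + gamma0 * M_ej) / M                        # Eq. (6)
    beta = np.sqrt(1.0 - 1.0 / gamma**2)                   # v/c
    print(f"m = {m:.0e}:  beta = {beta:.2e}")
# beta drops by a factor ~7.8, close to the factor 8 expected for
# v ~ R^-3; the Sedov scaling v ~ R^-3/2 would give only a factor
# sqrt(8) ~ 2.8.
```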
Third, for simplicity, $`ϵ`$ is usually assumed to be constant during the expansion. In a realistic case this is not true: the fireball is expected to be highly radiative ($`ϵ=1`$) at first, owing to significant synchrotron radiation, and within only one or two days it evolves gradually into an adiabatic one ($`ϵ=0`$). So $`ϵ`$ should evolve with time.<sup>16</sup> Below, we construct a new model that is no longer subject to the aforementioned problems.

In the fixed frame, since the total kinetic energy of the fireball is $`E_\mathrm{K}=(\gamma -1)(M_{\mathrm{ej}}+m)c^2+(1-ϵ)\gamma U`$,<sup>17</sup> and the radiated thermal energy is $`ϵ\gamma (\gamma -1)c^2dm`$,<sup>11</sup> we have:

$$d[(\gamma -1)(M_{\mathrm{ej}}+m)c^2+(1-ϵ)\gamma U]=-ϵ\gamma (\gamma -1)c^2dm.$$ (7)

For the term $`U`$ it is usually assumed that $`dU=c^2(\gamma -1)dm`$.<sup>17</sup> Eq. (2) has been derived in just this way. However, the jump conditions<sup>11</sup> at the forward shock imply that $`U=(\gamma -1)mc^2`$, so we suggest that the correct expression should be $`dU=d[(\gamma -1)mc^2]=(\gamma -1)c^2dm+mc^2d\gamma `$. Here we simply substitute $`U=(\gamma -1)mc^2`$ into Eq. (7), and it is then easy to obtain:<sup>15</sup>

$$\frac{d\gamma }{dm}=-\frac{\gamma ^2-1}{M_{\mathrm{ej}}+ϵm+2(1-ϵ)\gamma m}.$$ (8)

In the highly radiative case ($`ϵ=1`$), Eq. (8) reduces exactly to Eq. (4). In the adiabatic case ($`ϵ=0`$), it reduces to:

$$\frac{d\gamma }{dm}=-\frac{\gamma ^2-1}{M_{\mathrm{ej}}+2\gamma m},$$ (9)

whose analytic solution is:

$$(\gamma -1)M_{\mathrm{ej}}c^2+(\gamma ^2-1)mc^2\equiv E_0.$$ (10)

Then in the ultra-relativistic limit we recover the familiar relation $`\gamma \propto R^{-3/2}`$, and in the non-relativistic limit we get $`v\propto R^{-3/2}`$, as required by the Sedov solution. From these analyses we believe that Eq. (8) is indeed correct for both radiative and adiabatic fireballs, in both the ultra-relativistic and the non-relativistic phase.

In realistic fireballs $`ϵ`$ is a variable, dependent on the ratio of the synchrotron-radiation-induced to the expansion-induced energy loss rate.<sup>16</sup> As usual, we assume that in the co-moving frame the magnetic field energy density is a fraction $`\xi _\mathrm{B}^2`$ of the total energy density $`e^{}`$, i.e., $`B^{\prime 2}/(8\pi )=\xi _\mathrm{B}^2e^{}`$, and that the electrons carry a fraction $`\xi _\mathrm{e}`$ of the energy, $`\gamma _{\mathrm{e},\mathrm{min}}=\xi _\mathrm{e}(\gamma -1)m_\mathrm{p}/m_\mathrm{e}+1`$, where $`m_\mathrm{p}`$ and $`m_\mathrm{e}`$ are the proton and electron masses, respectively. The co-moving frame expansion time is $`t_{\mathrm{ex}}^{}=R/(\gamma c)`$, and the synchrotron cooling time is $`t_{\mathrm{syn}}^{}=6\pi m_\mathrm{e}c/(\sigma _\mathrm{T}B^{\prime 2}\gamma _{\mathrm{e},\mathrm{min}})`$, where $`\sigma _\mathrm{T}`$ is the Thomson cross section. Then we have:<sup>16</sup>

$$ϵ=\xi _\mathrm{e}\frac{t_{\mathrm{syn}}^{\prime -1}}{t_{\mathrm{syn}}^{\prime -1}+t_{\mathrm{ex}}^{\prime -1}}.$$ (11)

We evaluated Eqs. (8) and (11) numerically, bearing in mind that:<sup>18</sup>

$$dm=4\pi R^2nm_\mathrm{p}dR,$$ (12)

$$dR=v\gamma (\gamma +\sqrt{\gamma ^2-1})dt.$$ (13)

We take $`E_0=10^{52}`$ erg, $`n=1`$ cm<sup>-3</sup>, $`M_{\mathrm{ej}}=2\times 10^{-5}`$ M<sub>⊙</sub>. Figs. 1 and 2 illustrate the evolution of $`\gamma `$ and $`R`$, where the full, dotted and dashed lines correspond to constant $`ϵ`$ values of 0, 0.5 and 1, respectively, and the dash-dotted lines are plotted by allowing $`ϵ`$ to vary according to Eq. (11). The figures clearly show that our new model overcomes the shortcomings of Eq. (2). For example, for highly radiative expansion the dashed lines approximately satisfy $`\gamma \propto t^{-3/7}`$, $`R\propto t^{1/7}`$, $`\gamma \propto R^{-3}`$ when $`\gamma \gg 1`$, and $`v\propto t^{-3/4}`$, $`R\propto t^{1/4}`$, $`v\propto R^{-3}`$ when $`\gamma \approx 1`$. For adiabatic expansion the full lines satisfy $`\gamma \propto t^{-3/8}`$, $`R\propto t^{1/4}`$, $`\gamma \propto R^{-3/2}`$ when $`\gamma \gg 1`$, and $`v\propto t^{-3/5}`$, $`R\propto t^{2/5}`$, $`v\propto R^{-3/2}`$ when $`\gamma \approx 1`$.
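The numerical integration just described is straightforward to reproduce. The following is a minimal sketch (a simple Euler scheme with a geometric step in radius, both our own choices; $`ϵ`$ is held at 0, the purely adiabatic case, instead of evolving via Eq. (11)) that integrates Eq. (8) together with Eqs. (12) and (13) for the parameters quoted above:

```python
import numpy as np

c, m_p, M_sun = 3.0e10, 1.67e-24, 2.0e33  # cgs units
n, E0 = 1.0, 1.0e52                       # ISM density (cm^-3), energy (erg)
M_ej = 2.0e-5 * M_sun
eps = 0.0                                 # adiabatic case, for simplicity

gamma = 1.0 + E0 / (M_ej * c**2)          # initial Lorentz factor, ~280
R, m, t = 1.0e14, 0.0, 0.0                # radius (cm), swept-up mass (g), time (s)

while t < 3.0e7:                          # follow the remnant for ~1 yr
    beta = np.sqrt(1.0 - 1.0 / gamma**2)
    fac = gamma * (gamma + np.sqrt(gamma**2 - 1.0))
    dR = 1.0e-3 * R                       # geometric step in radius
    dt = dR / (beta * c * fac)            # inverted Eq. (13), with v = beta*c
    dm = 4.0 * np.pi * R**2 * n * m_p * dR                  # Eq. (12)
    dgamma = -(gamma**2 - 1.0) * dm / (
        M_ej + eps * m + 2.0 * (1.0 - eps) * gamma * m)     # Eq. (8)
    R, m, t, gamma = R + dR, m + dm, t + dt, gamma + dgamma

print(f"t = {t:.2e} s,  R = {R:.2e} cm,  gamma = {gamma:.3f}")
```

With the radiative efficiency switched on, $`ϵ`$ would simply be recomputed at each step from $`t_{\mathrm{syn}}^{}`$ and $`t_{\mathrm{ex}}^{}`$ via Eq. (11).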
In order to compare with observations, we also calculated the synchrotron radiation from the shocked ISM. Fig. 3 illustrates the R-band afterglow from the realistic fireball. We see that after entering the non-relativistic phase the light curve steepens only slightly, consistent with the prediction made by Wijers et al.;<sup>7</sup> in contrast, Eq. (2) generally leads to a much sharper decline.<sup>18</sup> Our new model gives a generally good fit to the optical afterglow of GRB 970228.

To conclude, current research is mainly concentrated on the ultra-relativistic phase of the expansion of GRB remnants. The popular dynamic model is in fact incorrect for adiabatic fireballs during the non-relativistic phase, a point that has gone completely unnoticed in the literature. We have revised the model, and the new model has been shown to be correct in both the ultra-relativistic and the non-relativistic phase. The revision is of great importance in view of the following facts: (i) optical afterglows lasting more than 100–200 d have been observed from some GRBs, so the advent of the non-relativistic phase seems inevitable; (ii) beaming effects also lead to a steepening of the optical light curve, so non-relativistic effects should be treated carefully in order to tell whether GRB ejecta are beamed or not, which is crucial to understanding the GRB origin; (iii) HI supershells might be highly evolved GRB remnants,<sup>19,20</sup> and to address this question in detail we must deal with non-relativistic blastwaves. Additionally, we suggest that at very late stages GRB remnants might become highly radiative again, just as supernova remnants do.<sup>14</sup> This might occur when the bulk velocity has dropped to several tens of kilometers per second.