# On the possibility to supercool molecular hydrogen down to superfluid transition

## Abstract

Recent calculations by Vorobev and Malyshenko (JETP Letters, 71, 39, 2000) show that molecular hydrogen may stay liquid and superfluid in strong electric fields of the order of $`4\times 10^7V/cm`$. I demonstrate that strong local electric fields of similar magnitude exist beneath a two-dimensional layer of electrons localized in the image potential above the surface of solid hydrogen. Even stronger local fields exist around charged particles (ions or electrons) if the surface or bulk of a solid hydrogen crystal is statically charged. Measurements of the frequency shift of the $`1\rightarrow 2`$ photoresonance transition in the spectrum of a two-dimensional layer of electrons above a positively or negatively charged solid hydrogen surface, performed in the temperature range 7 - 13.8 K, support the prediction of electric-field-induced surface melting. The range of surface charge density necessary to stabilize the liquid phase of molecular hydrogen at the temperature of the superfluid transition is estimated. PACS no.: 05.70.Fh, 05.70.Ce, 05.30.Ip.

The prediction by Ginzburg and Sobyanin that sufficiently supercooled liquid molecular hydrogen may undergo a transition to a superfluid state attracts much current attention. Recent theoretical calculations show that such a transition may occur around 1.1 - 1.2 K. On the experimental side, a lot of work has been done on molecular hydrogen in porous Vycor glass and on thin hydrogen films adsorbed on different substrates. Although some supercooling of liquid hydrogen was indeed observed in these experiments, the temperatures achieved were not sufficiently low to induce the superfluid transition. Very recently a new possibility to produce a strongly supercooled molecular hydrogen state has been suggested on the basis of a calculation of the thermodynamic functions of molecular hydrogen in both the stable and metastable regions. It has been proposed to expose a two-phase system of solid and liquid hydrogen to a strong external electric field. According to this calculation, the condition of phase equilibrium in this case may be written as
$$\mu _1(p)-\left(\frac{\partial \epsilon _1}{\partial \rho _1}\right)\frac{E_1^2}{8\pi }\pm \frac{\epsilon _1E_1^2-\epsilon _2E_2^2}{8\pi \rho _1}=\mu _2(p)-\left(\frac{\partial \epsilon _2}{\partial \rho _2}\right)\frac{E_2^2}{8\pi }$$ (1)
where $`\mu _1(p)`$ and $`\mu _2(p)`$ are the chemical potentials of the solid (1) and liquid (2) hydrogen phases at pressure $`p`$, $`E`$ is the electric field intensity, $`\epsilon `$ is the dielectric constant, and $`\rho `$ is the density. The plus sign corresponds to the case when the field is generated by constant charges, and the minus sign corresponds to the field created by a constant potential. The action of the electric field on the two-phase system may be understood in terms of the creation of "different pressures" in the two phases. As a result, the liquid and solid hydrogen phases, which normally cannot coexist at any positive pressure around 1 - 2 K, may coexist in equilibrium with each other around the temperature of the superfluid transition at small positive pressures. Unfortunately, the electric field necessary for this to occur is quite large due to the small difference in $`\epsilon `$ between liquid and solid hydrogen.
The result obtained by Vorobev and Malyshenko for the case of a field generated by constant potentials with the field lines parallel to the phase boundary reads
$$E=\left(\frac{24\pi (\mu _1(0)-\mu _2(0))\rho _1\rho _2}{(\epsilon _1^2+4\epsilon _1-2-3\epsilon _2)(\rho _1-\rho _2)}\right)^{1/2}=4\times 10^7V/cm$$ (2)
where the Clausius-Mossotti relation was used to obtain the value of $`(\partial \epsilon /\partial \rho )_T`$, and the following values of the dielectric constants and densities were adopted: $`\epsilon _1=1.3`$, $`\epsilon _2=1.25`$, $`\rho _1=0.087g/cm^3`$, and $`\rho _2=0.078g/cm^3`$. About the same value of the electric field is obtained for the cases of a field created by constant charges and/or field lines perpendicular to the phase boundary. It was unclear to the authors whether such strong fields can be created in solid hydrogen, because of dielectric breakdown and other experimental difficulties. In this Letter I am going to show that strong local electric fields of similar magnitude exist in some real physical systems created in the lab, namely, beneath a two-dimensional layer of electrons localized in the image potential above the surface of solid hydrogen. Even stronger local fields exist around charged particles (ions or electrons) if the surface or bulk of a solid hydrogen crystal is statically charged. I am going to present measurements of the frequency shift of the $`1\rightarrow 2`$ photoresonance transition in the spectrum of such a two-dimensional layer of electrons above a positively or negatively charged solid hydrogen surface, performed in the temperature range 7 - 13.8 K. These previously unpublished data strongly support the prediction of electric-field-induced melting of solid hydrogen. I am also going to estimate the range of surface charge density necessary to stabilize the liquid phase of molecular hydrogen at the temperature of the superfluid transition.

Most of the experimental and theoretical work on two-dimensional surface image states above dielectric surfaces with $`(\epsilon -1)\ll 1`$ has been done for the case of liquid helium. In the simplest model, the interaction potential $`\varphi `$ for an electron near the surface of such a dielectric depends only on the electrostatic image force and on the external electric field $`E`$, which is normal to the surface and is necessary for the electron confinement near the surface:
$$\varphi (z)=-e^2(\epsilon -1)/(4z(\epsilon +1))+eEz=-Qe^2/z+eEz$$ (3)
for $`z>0`$, and $`\varphi (z)=V_0`$ for $`z<0`$, where the z axis is normal to the surface, and $`V_0`$ is the surface potential barrier. If $`V_0\rightarrow \infty `$ one obtains the electron energy spectrum
$$E_n=-Q^2me^4/(2\hbar ^2n^2)+eE<z_n>+p^2/(2m)$$ (4)
The first term in this expression gives the exact solution for $`E=0`$. The second term is the first-order correction for a non-zero confining field, where the average distance of an electron from the surface in the $`n`$th energy level is $`<z_n>=3n^2\hbar ^2/(2me^2Q)`$. This correction provides an extremely convenient way of fine-tuning the energy spectrum. The last term corresponds to the electron's free motion parallel to the surface. It is important to mention that at sufficiently high electron density the individual electrons in these surface states are no longer free. When the average electrostatic potential energy per electron exceeds about $`100k_BT`$, the electron layer undergoes a transition into the two-dimensional Wigner crystal state, with the parallel motion of individual electrons substantially restricted by the other electrons in the lattice of the Wigner crystal.
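As a quick numerical check of the spectrum (4), the following minimal Python sketch (CGS-Gaussian units; the only paper-specific input is the solid hydrogen dielectric constant $`\epsilon =1.3`$ quoted above) evaluates $`Q`$, $`<z_1>`$, and the $`1\rightarrow 2`$ transition energy in zero confining field. The idealized infinite-barrier model gives $`<z_1>\approx 24\AA `$, close to the measured $`20\AA `$ discussed below, and a transition wavelength in the far infrared:

```python
import math

# Minimal check of the image-state spectrum (4) in CGS-Gaussian units,
# assuming the idealized potential (3) with an infinite surface barrier.
HBAR = 1.0546e-27       # erg s
M_E = 9.109e-28         # g
E_CH = 4.803e-10        # esu
ERG_PER_EV = 1.602e-12
C = 2.998e10            # cm/s

eps = 1.3                                 # solid hydrogen (value from the text)
Q = (eps - 1.0) / (4.0 * (eps + 1.0))     # image-charge strength

def E_n(n):
    """Hydrogen-like level energy (erg) at zero confining field."""
    return -Q**2 * M_E * E_CH**4 / (2.0 * HBAR**2 * n**2)

def z_n(n):
    """Mean electron-surface distance <z_n> (cm)."""
    return 3.0 * n**2 * HBAR**2 / (2.0 * M_E * E_CH**2 * Q)

dE = E_n(2) - E_n(1)
lam_um = 2.0 * math.pi * HBAR * C / dE * 1e4      # transition wavelength, um

print(f"Q = {Q:.4f}")
print(f"<z_1> = {z_n(1) * 1e8:.1f} A")            # ~24 A (measured: ~20 A)
print(f"E(1->2) = {dE / ERG_PER_EV * 1e3:.1f} meV, lambda = {lam_um:.0f} um")
```

The resulting wavelength of roughly 110-120 micron is consistent with the far-infrared excitation wavelength quoted later in the text.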
Similar two-dimensional electron layers have been observed on the surface of liquid and solid hydrogen. Resonance absorption of light for the $`1\rightarrow 2`$ and $`1\rightarrow 3`$ transitions in the spectrum of electrons levitating above the surface of solid hydrogen has been reported, and the frequencies of these transitions were measured as a function of the confining electric field. While general agreement with the spectrum (4) within $`30\%`$ has been observed, quite a few important questions raised by these experiments have not yet found satisfactory answers. The most striking feature of this system is a very strong dependence of the photoresonance frequencies on the hydrogen temperature (or vapor pressure). The frequency of the $`1\rightarrow 2`$ transition in zero confining field grew by $`20\%`$ upon cooling from 13.6 K to 7 K. In the absence of any other competing theory at the time of the measurements, this effect was interpreted in terms of quantum refraction due to the presence of hydrogen vapor molecules around the levitating electrons. For this explanation to be true, the scattering length for the scattering of an electron by a hydrogen molecule has to be equal to $`L=-1.4\AA `$, contrary to the currently accepted value of $`L=+0.672\AA `$. On the other hand, if the spectral changes are completely attributed to an increase in the value of $`Q=(\epsilon -1)/(4(\epsilon +1))`$, the frequency shift may be interpreted as a gradual freezing of a supercooled liquid hydrogen film on top of the hydrogen crystal (with an increase of the dielectric constant of the thin surface layer from the liquid hydrogen value of 1.25 to the solid hydrogen value of 1.3).

This consideration raises an important question: how strong is the electric field on the surface of the solid hydrogen just below an electron in the ground surface state? Can it cause a substantial supercooling of the liquid molecular hydrogen phase on the surface? If we imagine for a moment that the electron is not moving parallel to the surface, the answer is very easy to get. We may neglect the small contribution from the electron's image and write the average value of the local field as $`<E_L>=<e/z_1^2>=9e/(2<z_1>^2)`$, where we have used the ground state wave function for the idealized potential (3) with an infinite potential barrier. Taking into account the experimentally observed value of $`<z_1>=20\AA `$, we obtain $`<E_L>=1.6\times 10^7V/cm`$. This field is of the same order of magnitude as the field that is (according to the calculation cited above) sufficient to stabilize the superfluid liquid hydrogen phase. Unfortunately, free electrons move rapidly along the surface, and every region of the solid hydrogen surface experiences this strong electric field only for very brief periods of time. This situation changes at sufficiently high electron density and low temperature, when the electron layer undergoes a transition into the Wigner crystal state. In this state the motion of each individual electron is substantially restricted to a small area around the electron's equilibrium position in the Wigner crystal lattice by the electrostatic interaction with the other electrons in the lattice. Let us estimate the size of this area for the parameters of the electron system typically observed in experiment. The density of electrons on the solid hydrogen surface is determined by the external confining field: $`n\approx E/(4\pi e)`$. Density values of the order of $`3\times 10^{10}-10^{11}cm^{-2}`$ were routinely observed.

At $`n=10^{11}cm^{-2}`$ the electrostatic potential energy per electron is of the order of $`e^2n^{1/2}\approx 540k_B\times 1K`$. Thus, at T=1K such an electron system will be in the two-dimensional Wigner crystal state. The classically allowed area of an electron's motion around its equilibrium position in the lattice may be estimated by taking into account only the nearest neighbors, located at a distance $`d`$ from each other. For the potential energy we may write approximately
$$V=e^2/(d-x)+e^2/(d+x)=2e^2/d+2e^2x^2/d^3$$ (5)
where $`x`$ is the displacement towards the nearest neighbor. We immediately obtain an estimate for the radius of the allowed area as $`x=n^{-1/2}(k_BT/(e^2n^{1/2}))^{1/2}`$. At T=1K and $`n=10^{11}cm^{-2}`$ this radius is approximately $`10\AA `$. Thus, each electron stays over essentially the same small region of the hydrogen surface. We may now conclude that at the above-mentioned parameters of the electron system, and at temperatures near the temperature of the suspected superfluid transition, a substantial portion of the solid hydrogen surface under the electrons will experience strong electric fields of the order of the field necessary to induce the superfluid transition. Taking into account that the maximum density of surface electrons may be substantially increased with respect to the values observed so far, this system looks extremely promising for the observation of the superfluid transition of supercooled molecular hydrogen.
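These order-of-magnitude estimates are easy to reproduce; a minimal sketch in Python (CGS-Gaussian units, with the $`\approx 300`$ statvolt-to-volt conversion as the only unit change):

```python
import math

# Reproduces the order-of-magnitude estimates above (CGS-Gaussian units).
E_CH = 4.803e-10   # esu
K_B = 1.381e-16    # erg/K
STATV = 299.79     # V/cm per statvolt/cm

n = 1e11           # electron density, cm^-2
T = 1.0            # K

# Electrostatic energy per electron, e^2 n^(1/2), in temperature units
coulomb = E_CH**2 * math.sqrt(n)
print(f"e^2 n^(1/2) / k_B = {coulomb / K_B:.0f} K")    # ~530 K >> 1 K

# Radius of the classically allowed area, x = n^(-1/2) (k_B T/(e^2 n^(1/2)))^(1/2)
x = math.sqrt(K_B * T / coulomb) / math.sqrt(n)
print(f"x = {x * 1e8:.0f} A")                          # ~14 A, i.e. of order 10 A

# Mean local field under a localized electron, <E_L> = 9e/(2 <z_1>^2)
z1 = 20e-8                                             # measured <z_1>, cm
print(f"<E_L> = {9.0 * E_CH / (2.0 * z1**2) * STATV:.1e} V/cm")   # ~1.6e7 V/cm
```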
Another evident way to create even stronger local electric fields inside a solid hydrogen crystal is the accumulation of static electric charge, in the form of positive or negative ions and bound electrons, on the surface or in the bulk of the hydrogen crystal. Such an accumulation of negative and positive static charge was a routine problem in our spectroscopic experiments. The detection of static charge has been discussed in detail elsewhere. The results of these experiments published so far were carefully selected to avoid any influence of static charges, which were considered to be a problem. Nevertheless, the influence of negative and positive static charge on the hydrogen surface on the frequency and the linewidth of the photoresonance $`1\rightarrow 2`$ transition was carefully measured. Because of the recent appearance of the calculation by Vorobev and Malyshenko, and considering the arguments above, I feel it necessary to make the physical community aware of these results.

The most careful measurements of the influence of negative static surface charge on the frequency and linewidth of the $`1\rightarrow 2`$ transition were performed at a fixed temperature T=13.4K (P=40 Torr), just below the triple point of hydrogen. The negative surface charge was formed by energetic electrons which were not stopped by the surface potential barrier $`V_0`$ upon the deposition of free electrons on the solid hydrogen surface. Fig.1 reproduces the data of these measurements. Both the frequency shift and the line broadening were measured at a fixed wavelength of the excitation light by tuning the confining field $`E`$ (that is why both values are expressed in units of electric field). It is easy to convert these data into units of frequency by using the spectrum (4). The frequency shift was very large and negative (corresponding to a positive change in the confining field at a fixed laser frequency) and much bigger than the line broadening. It corresponds to a more than $`20\%`$ reduction of the frequency of the $`1\rightarrow 2`$ transition in zero confining field. This shift did not find an adequate explanation at the time of the measurements.
It is consistent, though, with the picture of supercooling of liquid hydrogen on the surface of the solid hydrogen phase in the presence of static electric charges. A similar negative frequency shift was also observed in the case of positive static surface charge, although detailed measurements of the frequency shift and line broadening were not conducted in this case. It was possible to create a two-dimensional electron layer on the surface of a positively charged hydrogen crystal (in zero or even repulsive external electric field $`E`$). Both the positive ions charging the surface and the free electrons were deposited in turn on the surface of the hydrogen crystal, at P=5 Torr hydrogen vapor pressure, from a gas discharge created in the experimental chamber. The lifetime of free electrons in such a system was longer than an hour, which allowed us to perform the spectroscopic measurements shown in Fig.2. The photoresonance $`1\rightarrow 2`$ transition has been detected in such a system at the $`\lambda =118.6\mu m`$ wavelength of the excitation light, at an $`N\approx 10^{10}cm^{-2}`$ density of positive ions deposited onto the hydrogen crystal surface. This corresponds to a frequency of the $`1\rightarrow 2`$ transition substantially below the frequencies of the $`1\rightarrow 2`$ transitions in zero confining field observed in surface electron layers in the absence of static charge, over the whole temperature range studied (7-13.8 K). The fact that a negative frequency shift is observed both for positive and for negative static surface charge is a strong argument in favor of the electric-field-induced liquid hydrogen supercooling phenomenon. Although the physical location of the static charges with respect to the hydrogen surface is not as clear as in the case of free electrons (and, hence, the sample is not as well characterized), our data show that the charged solid hydrogen phase is easily accessible to experiments, and it is a very interesting physical object from the point of view of the observation of superfluid molecular hydrogen. If $`E_\lambda `$ is the field necessary for supercooled liquid and solid phase coexistence, the diameter of the area around a static charge where this condition is met is given by $`d=(4e/E_\lambda )^{1/2}`$. Using the field value quoted above, one gets $`d\approx 15\AA `$. At a static charge concentration of $`N\approx 1/(4d^2)=10^{13}cm^{-2}`$ one may expect a substantial portion of the crystal surface to be in adequate condition for liquid and solid phase coexistence at T=1 K. It must also be noted that the values of temperature and field necessary for the superfluid transition to occur are not precisely known. In reality one may see the onset of the superfluid transition at substantially lower static charge densities.

In conclusion, it was pointed out that electric fields comparable to the field which may stabilize supercooled liquid hydrogen at the temperature of the superfluid transition occur naturally in such existing physical systems as electrons levitating above the surface of solid hydrogen and statically charged hydrogen crystals. The results of photoresonance measurements performed on the system of levitating electrons, with and without static surface charge, strongly indicate the possibility of liquid hydrogen supercooling in these systems. It would be extremely interesting to study the behavior of charged solid hydrogen at temperatures around T=1 K with the goal of observing the supercooled superfluid hydrogen phase.
# The $`\alpha `$-particle in nuclear matter

## I Introduction

The modification of few-body properties, such as the binding energy and the wave function of a bound state, due to a medium of finite temperature and density is an important subject of many-particle theory. As an example we consider symmetric nuclear matter consisting of nucleons (equal numbers of protons and neutrons) at density $`\rho `$ and temperature $`T`$. The modification of single-nucleon properties can be obtained from a Dyson equation in terms of a self-energy. In a specific approximation the quasiparticle picture can be derived. A more rigorous description leads to the nucleon spectral function. Similarly we can consider the two-nucleon system, where the medium modifications are obtained from a Bethe-Goldstone equation. In addition to the self-energy shift, Pauli blocking, which is of the same order of magnitude, also has to be taken into account. It has been shown that, as a consequence, deuterons in nuclear matter become unbound if the density exceeds a certain value, the Mott density. Of course, the same mechanisms are also responsible for the modification of higher clusters embedded in nuclear matter. However, the solution of the few-body in-medium equations, where the effects of the medium are accounted for by a density- and temperature-dependent contribution to the Hamiltonian, has so far only been carried out within perturbation theory. Therefore the results achieved for the energy shifts and the Mott densities are only approximations. Recently, rigorous methods have been used to find solutions of the three-body problem in nuclear matter. The Faddeev equations are extended to include the effects of the medium, and the corresponding Alt-Grassberger-Sandhas (AGS) equations have been solved. Different properties of the three-nucleon system in the medium, such as the modification of the binding energy of the three-nucleon bound state and the medium modification of the nucleon-deuteron break-up cross section, have been calculated. In the present letter we give first results of the solution of the in-medium four-particle equation describing the modification of the binding energy of the $`\alpha `$-particle in symmetric nuclear matter. An AGS-type equation has been solved, and the results will be compared with those of perturbation theory. Note that four-particle correlations in low-density nuclear matter are very important because of the large binding energy of the $`\alpha `$-particle. They have to be accounted for not only in equilibrium, when considering the nuclear matter equation of state or the contributions of correlations to the single-nucleon spectral function, but also in nonequilibrium, such as in light cluster formation in heavy ion collisions.

## II In-medium few-body equations

The few-nucleon problem in nuclear matter can be treated using Green function approaches. Within the cluster-mean-field expansion, a self-consistent system of equations can be derived describing an $`n`$-nucleon cluster moving in a mean field produced by the equilibrium mixture of clusters with arbitrary nucleon number $`m`$. A Dyson equation approach to describe clusters at finite temperatures and densities has been given previously. However, the self-consistent determination of the composition of the medium is a very challenging task that has not been solved until now.
We will perform the approximation where the correlations in the medium are neglected, so that the embedding nuclear matter is described by the equilibrium distribution of quasiparticles (see also the corresponding treatments of the two-particle and three-particle problems). The extension of this formalism to describe $`n`$-nucleon correlations in nuclear matter will be given elsewhere. Here, we will give some of the basic relations, which are direct generalizations of the three-particle case. Let the Hamiltonian of the system be given by
$$H=\sum _1\frac{k_1^2}{2m}a_1^{\dagger }a_1+\frac{1}{2}\sum _{121^{}2^{}}V_2(12,1^{}2^{})a_1^{\dagger }a_2^{\dagger }a_{2^{}}a_{1^{}}$$ (1)
where $`a_1`$ etc. denotes the Heisenberg operator of the particle, which includes quantum numbers such as spin $`s_1`$ and momentum $`k_1`$. The free resolvent $`G_0`$ for an $`n`$-particle cluster is given in Matsubara-Fourier representation by
$$G_0(z)=(z-H_0)^{-1}N\equiv R_0(z)N,$$ (2)
where $`G_0`$, $`H_0`$, and $`N`$ are a compact notation for matrices in the space of $`n`$ particles with respect to the particle indices given below. Here $`z`$ denotes the Matsubara frequencies $`z_\lambda =\pi \lambda /(i\beta )+\mu `$ with $`\lambda =0,\pm 2,\pm 4,\dots `$ for bosons and $`\lambda =\pm 1,\pm 3,\dots `$ in the case of fermions. To simplify the notation we have further dropped the index $`n`$ on the matrices; however, we use it where explicitly needed. The effective in-medium Hamiltonian $`H_0`$ for noninteracting quasiparticles is given by
$$H_0=\sum _{i=1}^{n}\left(\frac{k_i^2}{2m}+\mathrm{\Sigma }_i\right)\equiv \sum _{i=1}^{n}\epsilon _i$$ (3)
where the energy shift $`\mathrm{\Sigma }_1`$ and the Fermi function $`f_1`$ are
$$\mathrm{\Sigma }_1=\sum _2V_2(12,\stackrel{~}{12})f_2,$$ (4)
$$f_1\equiv f(\epsilon _1)=\frac{1}{\text{e}^{(\epsilon _1-\mu )/k_BT}+1}.$$ (5)
The notation $`\stackrel{~}{12}`$ means antisymmetrization. The factor $`N`$ in Eq. (2) resembles the Pauli blocking or normalization of the Green functions. This factor is different for the different clusters considered, depending on the number of particles $`n`$. It is given by
$$N=\overline{f}_1\overline{f}_2\cdots \overline{f}_n\pm f_1f_2\cdots f_n$$ (6)
where $`\overline{f}=1-f`$. The upper sign is for an odd number of fermions (Fermi type) and the lower for an even number of fermions (Bose type). Note that $`NR_0=R_0N`$. The full resolvent after Matsubara-Fourier transformation may be written in the following way
$$G(z)=(z-H_0-V)^{-1}N\equiv R(z)N,$$ (7)
where the potential $`V`$ is a sum of two-body interactions between pairs $`\alpha `$, i.e.
$$V=\sum _\alpha V_\alpha =\sum _\alpha N_2^\alpha V_2^\alpha ,$$ (8)
and $`V_2^\alpha `$ is the two-body potential given in Eq. (1). The sum runs over all unique pairs in the cluster. Note that, as a consequence of Eq. (8), $`V^{\dagger }\ne V`$ and also $`R(z)N\ne NR(z)`$, which later on leads to right and left eigenvectors. To be more specific: if the interaction is between particles 1 and 2 (in the pair $`\alpha =(12)`$ of a cluster of $`n`$ particles) the effective potential of Eq. (8) reads
$$\langle 12|N_2^{(12)}V_2^{(12)}|1^{}2^{}\rangle =(\overline{f}_1\overline{f}_2-f_1f_2)V_2(12,1^{}2^{}).$$ (9)
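As an illustration of Eqs. (5), (6), and (9), the following short Python sketch evaluates the quasiparticle Fermi function and the pair blocking factor $`\overline{f}_1\overline{f}_2-f_1f_2=1-f_1-f_2`$. The free dispersion (with the shift $`\mathrm{\Sigma }_i`$ absorbed into $`\mu `$) and the values of $`\mu `$ and $`T`$ are illustrative inputs, not taken from the paper:

```python
import numpy as np

# Fermi function (5) and pair blocking factor of Eq. (9):
# fbar_1 fbar_2 - f_1 f_2 = 1 - f_1 - f_2.
HBARC = 197.327    # MeV fm
M_N = 938.92       # MeV (nucleon mass)

def fermi(eps, mu, T):
    """Fermi function at chemical potential mu and temperature T (all MeV)."""
    x = np.clip((eps - mu) / T, -60.0, 60.0)   # clip to avoid overflow
    return 1.0 / (np.exp(x) + 1.0)

def pair_blocking(k1, k2, mu, T):
    """Blocking factor of Eq. (9) for nucleons of momenta k1, k2 (fm^-1)."""
    e1 = (HBARC * k1) ** 2 / (2.0 * M_N)       # quasiparticle energies, with
    e2 = (HBARC * k2) ** 2 / (2.0 * M_N)       # Sigma_i absorbed into mu
    f1, f2 = fermi(e1, mu, T), fermi(e2, mu, T)
    return (1.0 - f1) * (1.0 - f2) - f1 * f2   # = 1 - f1 - f2

# At low momenta and positive mu the factor turns negative (Pauli blocked),
# the situation referred to in Sec. III for the alpha-particle.
for k in (0.1, 0.5, 1.0, 2.0):
    print(k, pair_blocking(k, k, mu=10.0, T=10.0))
```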
A useful notion is the channel resolvent $`G_\alpha (z)`$ for an $`n`$-particle cluster, where only the pair interaction in channel $`\alpha `$ is considered. This may be written as
$$G_\alpha (z)=(z-H_0-V_\alpha )^{-1}N$$ (10)
$$=(z-H_0-N_2^\alpha V_2^\alpha )^{-1}N\equiv R_\alpha (z)N.$$ (11)
Using $`R_0^{-1}(z)`$, $`R_\alpha ^{-1}(z)`$, and $`R^{-1}(z)`$ it is possible to formally derive the resolvent equations in the standard way. To keep the formal equivalence to the isolated case, the $`n`$-particle channel $`t`$-matrix $`T_\alpha `$ is defined by
$$R_\alpha (z)=R_0(z)+R_0(z)T_\alpha (z)R_0(z).$$ (12)
With the use of $`T_\alpha (z)=N_2^\alpha T_2^\alpha (z)`$, Eq. (12) leads to the well-known Bethe-Goldstone equation
$$T_2^\alpha (z)=V_2^\alpha +V_2^\alpha R_0(z)N_2^\alpha T_2^\alpha (z)=V_2^\alpha +T_2^\alpha (z)R_0(z)N_2^\alpha V_2^\alpha .$$ (13)
We remark that similar equations have been written down previously by various authors. Note that the above equations are also valid for a two-particle subsystem embedded in a larger cluster (of three, four, or more particles). As for the isolated equations, the effects of the other particles in the cluster appear only through the Matsubara frequencies $`z`$ (energies). No additional blocking factors $`N`$ related to the larger cluster arise. Also note that the changes due to the Pauli blocking are in the resolvent $`G_0`$, not in the potential $`V_2`$. However, it is possible to rewrite this equation, introduce an effective potential as seen in Eq. (13), and use unchanged resolvents instead. Making use of the more intuitive picture of a blocking in the propagation of the particles (related to the resolvents), we find from Eq. (13) the correct expression for the $`t`$-matrix that enters into the Boltzmann collision integral. The derivation of the three-body equation is straightforward and has been given elsewhere. The AGS operator $`U_{\beta \alpha }(z)`$ for the three-particle system is defined by
$$R(z)=\delta _{\beta \alpha }R_\alpha (z)+R_\beta (z)U_{\beta \alpha }(z)R_\alpha (z).$$ (14)
Inserting Eqs. (12) and (13) into the above identity, we arrive at the AGS-type equation
$$U_{\beta \alpha }(z)=\overline{\delta }_{\beta \alpha }R_0(z)^{-1}+\sum _\gamma \overline{\delta }_{\beta \gamma }N_2^\gamma T_2^\gamma (z)R_0(z)U_{\gamma \alpha }(z),$$ (15)
which now includes medium effects such as Pauli blocking and self-energy shifts. We used the notation $`\overline{\delta }_{\alpha \beta }=1-\delta _{\alpha \beta }`$. This equation determines the three-body transition operator for a three-particle cluster as well as for a three-particle cluster embedded in a larger cluster, i.e. the effect of the other particles in the cluster again enters only through the Matsubara frequency (energy) $`z`$. The definition of the transition operator given by Eq. (14) was chosen so that no additional factor $`N`$ appears in the final equation. This guarantees that the cluster equations are valid also if they are part of a larger cluster. Thus, the two-body subsystem $`t`$-matrix entering Eq. (15) is the same as the one given in Eq. (13). Therefore, it is possible to use all results of the few-body 'algebra', in particular those based on cluster decomposition. The in-medium bound state equation for an $`n`$-particle cluster follows from the homogeneous Lippmann-Schwinger equation and is given by
$$|\psi _B\rangle =R_0(E_B)V|\psi _B\rangle =R_0(E_B)\sum _\gamma N_2^\gamma V_2^\gamma |\psi _B\rangle $$ (16)
where the sum is over all unique pairs in the cluster. As shown previously
for the three-body bound state, it is convenient to introduce form factors
$$|F_\beta \rangle =\sum _{\gamma =1}^{3}\overline{\delta }_{\beta \gamma }N_2^\gamma V_2^\gamma |\psi _{B_3}\rangle $$ (17)
which leads to the homogeneous in-medium AGS-type equation
$$|F_\alpha \rangle =\sum _{\beta =1}^{3}\overline{\delta }_{\alpha \beta }N_2^\beta T_2^\beta R_0(B_3)|F_\beta \rangle .$$ (18)
We may generalize the AGS method to the in-medium four-body case
$$|\psi _\beta \rangle =R_0(B_4)N_2^\beta T_2^\beta (B_4)R_0(B_4)\sum _{\gamma =1}^{6}\overline{\delta }_{\beta \gamma }R_0^{-1}(B_4)|\psi _\gamma \rangle ,\beta =1,\dots ,6$$ (19)
where
$$|\psi _\beta \rangle =R_0(B_4)N_2^\beta V_2^\beta |\psi _{B_4}\rangle .$$ (20)
Introducing the $`3+1`$ and $`2+2`$ cluster decompositions of the four-body system, denoted by $`\tau ,\sigma ,\dots `$, the sum on the right hand side of Eq. (19) may be rearranged by introducing four-body form factors
$$|\mathcal{F}_\beta ^\sigma \rangle =\sum _\tau \overline{\delta }_{\sigma \tau }\sum _\alpha \overline{\delta }_{\beta \alpha }^\tau R_0^{-1}(B_4)|\psi _\alpha \rangle $$ (21)
with $`\beta \subset \sigma `$, $`\overline{\delta }_{\beta \alpha }^\tau =\overline{\delta }_{\beta \alpha }`$ if $`\beta ,\alpha \subset \tau `$, and $`\overline{\delta }_{\beta \alpha }^\tau =0`$ otherwise. The homogeneous in-medium AGS-type equation for the four-body form factors is then written
$$|\mathcal{F}_\beta ^\sigma \rangle =\sum _{\tau \gamma }\overline{\delta }_{\sigma \tau }U_{\beta \gamma }^\tau (B_4)R_0(B_4)N_2^\gamma T_2^\gamma (B_4)R_0(B_4)|\mathcal{F}_\gamma ^\tau \rangle ,\beta \subset \sigma ,\gamma \subset \tau .$$ (22)
The driving kernel consists of the in-medium two-body $`t`$-matrix defined by the Bethe-Goldstone equation and the in-medium AGS-type transition operator defined in Eq. (15). Note also that an additional Pauli blocking factor $`N_2^\gamma `$ occurs. The equations for the three-body scattering and bound state problems have been solved numerically before. An exploratory calculation to study a possible $`\alpha `$-like condensate (quartetting) has been carried out using a variational ansatz for the (2+2) channel and neglecting the (3+1) channel. Because of the medium dependence of the equations, the calculation time increases drastically. This is due to the fact that the positions of the deuteron pole as well as of the three-nucleon pole vary with the intrinsic momentum, and are not fixed at the usual binding energies, because of the phase space occupation by the other particles. Presently, a sufficiently fast and accurate method to solve the three- and four-body equations relies on the separability of the subamplitudes that appear in the AGS equations. To solve for the four-body bound states we utilize the energy dependent pole expansion (EDPE), which needs to be adjusted to the in-medium case because of the different right and left expansion functions due to the nonsymmetric effective potential. To be more specific, we assume the following expansions for the amplitudes of the respective subsystems embedded in the four-body equation. For the two-body subsystem we have
$$T_\gamma (z)\approx \sum _n|\stackrel{~}{\mathrm{\Gamma }}_{\gamma n}(z)\rangle t_{\gamma n}(z)\langle \mathrm{\Gamma }_{\gamma n}(z)|\approx \sum _n|\stackrel{~}{g}_{\gamma n}\rangle t_{\gamma n}(z)\langle g_{\gamma n}|=\sum _nN_2^\gamma |g_{\gamma n}\rangle t_{\gamma n}(z)\langle g_{\gamma n}|.$$ (23)
The last equality on the right hand side is used in the present calculation and reflects a simple Yamaguchi ansatz for the form factors. The Pauli blocking factor then appears explicitly.
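To make the structure of Eq. (13) with the ansatz (23) concrete, here is a minimal numerical sketch for a single rank-one Yamaguchi form factor $`g(k)=1/(k^2+\beta ^2)`$ and a pair at rest in the medium, where the bound-state pole satisfies $`1=\lambda \int d^3k/(2\pi )^3\,g(k)^2(1-2f(\epsilon _k))/(E-k^2/m)`$. The value of $`\beta `$ and the free-space calibration to the deuteron binding energy are illustrative and are not the Gibson-Lehman parameter set used in the paper:

```python
import numpy as np

# Sketch: in-medium pole of the Bethe-Goldstone equation (13) for a
# rank-one separable interaction V = lam |g><g|, g(k) = 1/(k^2 + beta^2),
# pair at rest (angle averaging is then trivial, N_2 -> 1 - 2 f(e_k)).
HBARC, M = 197.327, 938.92           # MeV fm, MeV
BETA = 1.45                          # fm^-1 (illustrative)

k = np.linspace(1e-4, 50.0, 100000)  # quadrature grid, fm^-1
dk = k[1] - k[0]
g2 = 1.0 / (k**2 + BETA**2) ** 2
ek = (HBARC * k) ** 2 / (2.0 * M)    # single-nucleon kinetic energy, MeV

def fermi(e, mu, T):
    return 1.0 / (np.exp(np.clip((e - mu) / T, -60, 60)) + 1.0)

def kernel(E, mu=None, T=None):
    """<g| R_0 N_2 |g> at pair energy E < 0 (MeV)."""
    block = 1.0 if mu is None else 1.0 - 2.0 * fermi(ek, mu, T)
    return np.sum(k**2 * g2 * block / (E - 2.0 * ek)) * dk / (2.0 * np.pi**2)

lam = 1.0 / kernel(-2.225)           # fix strength: free-space B_d = 2.225 MeV

def pair_energy(mu, T):
    """Bisect 1 - lam*kernel(E) = 0; returns None when unbound (Mott effect)."""
    f = lambda E: 1.0 - lam * kernel(E, mu, T)
    lo, hi = -60.0, -1e-6
    if f(hi) > 0.0:
        return None
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

for mu in (-40.0, -20.0, -10.0):     # MeV; increasing mu ~ increasing density
    print(mu, pair_energy(mu, T=10.0))
```

The Pauli factor weakens the attraction, so the pole moves toward threshold and eventually disappears, which is the Mott mechanism discussed below for the $`\alpha `$-particle.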
This ansatz has been used in earlier work, where a comparison to the Paris potential is also given. For the present purpose this approximation seems sufficient. For the three-body subamplitudes we use the EDPE expansion
$$\langle g_{\beta m}(z)|R_0(z)U_{\beta \gamma }^\tau (z)R_0(z)|\stackrel{~}{g}_{\gamma n}(z)\rangle \approx \sum _{t,\mu \nu }|\stackrel{~}{\mathrm{\Gamma }}_{\beta m}^{\tau t,\mu }(z)\rangle t_{\mu \nu }^{\tau t}(z)\langle \mathrm{\Gamma }_{\gamma n}^{\tau t,\nu }(z)|$$ (24)
with
$$|\stackrel{~}{\mathrm{\Gamma }}_{\beta m}^{\tau t,\mu }(z)\rangle =\langle g_{\alpha n}|R_0(z)|\stackrel{~}{g}_{\beta m}\rangle t_{\beta m}(B_3)|\stackrel{~}{\mathrm{\Gamma }}_{\beta m}^{\tau t,\mu }\rangle .$$ (25)
The Sturmian functions corresponding to the fixed energy $`B_3`$ are given by
$$\eta _{t,\mu }|\stackrel{~}{\mathrm{\Gamma }}_{\alpha n}^{\tau t,\mu }\rangle =\sum _{\beta m}\langle g_{\alpha n}|R_0(B_3)|\stackrel{~}{g}_{\beta m}\rangle t_{\beta m}(B_3)|\stackrel{~}{\mathrm{\Gamma }}_{\beta m}^{\tau t,\mu }\rangle $$ (26)
$$\eta _{t,\mu }|\mathrm{\Gamma }_{\alpha n}^{\tau t,\mu }\rangle =\sum _{\beta m}\langle \stackrel{~}{g}_{\alpha n}|R_0(B_3)|g_{\beta m}\rangle t_{\beta m}(B_3)|\mathrm{\Gamma }_{\beta m}^{\tau t,\mu }\rangle .$$ (27)
Introducing the form factors
$$|𝔽_\mu ^{\sigma s}\rangle =\sum _{\beta m}\langle \mathrm{\Gamma }_{\beta m}^{\sigma s,\mu }(B_4)|t_{\beta m}(B_4)\langle g_{\beta m}(B_4)|R_0(B_4)|\mathcal{F}_\beta ^\sigma \rangle $$ (28)
we obtain the following homogeneous system of integral equations
$$|𝔽_\mu ^{\sigma s}\rangle =\sum _{\tau t}\sum _{\nu \kappa }\sum _{\gamma n}\overline{\delta }_{\sigma \tau }\langle \mathrm{\Gamma }_{\gamma n}^{\sigma s,\nu }(B_4)|t_{\gamma n}(B_4)|\stackrel{~}{\mathrm{\Gamma }}_{\gamma n}^{\tau t,\mu }(B_4)\rangle t_{\mu \kappa }^{\tau t}(B_4)|𝔽_\kappa ^{\tau t}\rangle .$$ (29)
Formally these equations resemble the structure of the isolated four-body equations. However, the dominant features of the influence of the medium, i.e. the self-energy correction and the Pauli blocking, are systematically taken into account. The inclusion of spin-isospin degrees of freedom and symmetrization is a challenging task for the four-body problem and is done as for the isolated case. To this end we have introduced angle-averaged Pauli factors, as explained e.g. in Ref. , and fit the self-energy by use of effective masses.

## III Results and Conclusion

To solve the four-body equation numerically we use a Yamaguchi-type rank-one potential for the $`{}^{3}S_{1}`$ and $`{}^{1}S_{0}`$ channels. The parameters are taken from an early work of Gibson and Lehman. We renormalized the calculated binding energy of the $`\alpha `$-particle so that for the isolated particle it coincides with the experimental one. Presently, instead of using a more elaborate approach to the isolated four-nucleon problem, we merely study the change of the binding energy due to the density and temperature of the surrounding nuclear matter. From our recent results for the three-body systems, we argue that the change is not very sensitive to the particular form of the potential. We shall, therefore, leave the study of model dependences for a future communication. We calculated the binding energy of an $`\alpha `$-like cluster with zero center of mass momentum in symmetric nuclear matter at temperature $`T`$ = 10 MeV as a function of the nucleon density. The results are shown as a solid line in Fig. 1. The Mott transition occurs at a single-particle density of $`\rho _{\mathrm{Mott}}=0.0207`$ fm<sup>-3</sup>. For comparison we show a perturbative calculation as a dashed line.
This calculation is based on a simple Gaussian wave function for the $`\alpha `$-particle with the width fitted to the electric rms radius. Here too, the binding energy has been renormalized to the experimental value. The Mott density in this case is 0.0305 fm<sup>-3</sup>, which differs strongly from the value obtained from the solution of Eq. (29). The corresponding curves for the triton and the deuteron are shown as dotted and dash-dotted lines, respectively. Note that these binding energies are for clusters at rest in the medium. An interesting case arises when a sub-cluster embedded in the larger cluster ceases to exist as a bound state; the sub-clusters then have a dynamical binding energy that depends on the c.m. momentum. Nevertheless, the question of Borromean states and the Efimov effect needs further investigation. Unlike the triton, the $`\alpha `$-particle still exists at densities where the Pauli blocking factor, see Eq. (9), becomes negative. The usual procedure of symmetrizing the effective potential by proper square-root factors then fails. Therefore, when solving the four-body equation we have to keep track of the right and left eigenvectors in the subsystem. In conclusion, we derived and solved for the first time an effective in-medium four-particle equation of the AGS type. Applying it to symmetric nuclear matter, we found that the binding energy of the $`\alpha `$-particle decreases with increasing density due to Pauli blocking and disappears at a critical value of the density (about 1/10 of the nuclear matter density for $`T`$ = 10 MeV). The dependence of the results on temperature and center of mass momentum will be the subject of an extended work.

## Acknowledgement

This work has been supported by the Deutsche Forschungsgemeinschaft grant BE1092/7-1.

## Figure Captions

Fig. 1. Binding energy of an $`\alpha `$-like cluster with zero center of mass momentum embedded in symmetric nuclear matter at a temperature of $`T=10`$ MeV as a function of nucleon density. Solid line: Yamaguchi potential, renormalized to the experimental binding energy at zero density. Dashed line: perturbation approach. For comparison, the medium-dependent binding energies of the deuteron (dash-dotted) and triton (dotted) are also shown.
# Photometry and Photometric Redshifts of Faint Galaxies in the Hubble Deep Field South NICMOS Field

## 1 INTRODUCTION

The Hubble Deep Field South (HDF–S) images are among the deepest images of the universe ever obtained at optical and infrared wavelengths. In this paper, we present a catalog of photometry and photometric redshifts of 335 faint objects in the HDF–S NICMOS field. The analysis is based on (1) infrared images obtained with the Hubble Space Telescope (HST) using the Near Infrared Camera and Multi-Object Spectrograph (NICMOS) with the F110W, F160W, and F222M filters, (2) an optical image obtained with HST using the Space Telescope Imaging Spectrograph (STIS) with no filter, and (3) optical images obtained with the European Southern Observatory (ESO) Very Large Telescope (VLT) with U, B, V, R, and I filters. The analysis is similar to the analyses of the Hubble Deep Field (HDF) described previously by Lanzetta, Yahil, & Fernández-Soto (1996, hereafter LYF96) and Fernández-Soto, Lanzetta, & Yahil (1999, hereafter FLY99), although in detail the current analysis differs from the previous analyses in three important ways:

First, objects are detected in the NICMOS F160W and F222M images, at central wavelengths of $`\lambda \approx 16,000`$ Å and $`\lambda \approx 22,200`$ Å, respectively. The analysis is in principle sensitive to galaxies of redshift as large as $`z\approx 18`$, beyond which the Ly$`\alpha `$-forest absorption discontinuity is redshifted past the response of the NICMOS F222M filter.

Second, the optical and infrared photometry is measured using a new quasi-optimal photometry technique that fits model spatial profiles of detected objects to the space- and ground-based images. The technique is based on but extends the spatial profile fitting technique described previously by FLY99. In comparison with conventional methods, the new technique provides higher signal-to-noise ratio measurements, and in contrast with conventional methods, the new technique accounts for uncertainty correlations between nearby, overlapping neighbors.

Third, the photometric redshifts are measured using our redshift likelihood technique with a sequence of six spectrophotometric templates, including the four templates of our previous analyses (of E/S0, Sbc, Scd, and Irr galaxies) and two new templates (of star-forming galaxies). Inclusion of the two new templates eliminates the tendency of our previous analyses to systematically underestimate the redshifts of galaxies of redshift $`2<z<3`$ (by a redshift offset of roughly 0.3), in agreement with results found previously by Benítez et al. (1998). Comparison with spectroscopic redshifts of galaxies identified in the HDF and HDF–S indicates that with the sequence of six spectrophotometric templates the photometric redshifts are accurate to within an RMS relative uncertainty of $`\mathrm{\Delta }z/(1+z)\approx 7\%`$ at all redshifts $`z<6`$ that have as yet been examined.

The primary utility of the catalog of photometric redshifts is as a survey of faint galaxies detected in the NICMOS F160W and F222M images. The sensitivity of the survey varies significantly with position, reaching a limiting depth of $`AB(16,000)\approx 28.7`$ and covering 1.01 arcmin<sup>2</sup> to $`AB(16,000)=27`$ and 1.05 arcmin<sup>2</sup> to $`AB(16,000)=26.5`$. Likewise, the survey reaches a limiting depth of $`AB(22,200)\approx 24.8`$ and covers 0.79 arcmin<sup>2</sup> to $`AB(22,200)=24`$ and 1.09 arcmin<sup>2</sup> to $`AB(22,200)=23`$.
The organization of the paper is as follows: In § 2, the observations are described. In § 3, the object identification, photometry, and photometric redshift measurements are described. The results are presented in § 4, and the discussion is presented in § 5 and § 6. The summary and conclusions are given in § 7. Scientific analysis of the catalog will be presented in forthcoming papers.

## 2 OBSERVATIONS

The HDF–S NICMOS field is centered at J2000 coordinates $`\alpha =`$ 22:32:51.75 and $`\delta =`$ $`-`$60:38:48.20. The observations consist of three sets of images: (1) infrared images obtained with HST using NICMOS, (2) an optical image obtained with HST using STIS, and (3) optical images obtained with the ESO VLT using the Test Camera. Table 1 summarizes details of the observations.

The HST NICMOS images were acquired in September, 1998 using NICMOS with Camera 3 and the F110W, F160W, and F222M filters. For each band, the observations consisted of $`\approx 100`$ dithered exposures of between 512 s and 1472 s duration. The raw images were processed and reduced by the Space Telescope Science Institute (STScI) NICMOS team. The processed images were registered onto a grid of $`1100\times 1300`$ pixel<sup>2</sup> at a scale of 0.075 arcsec pixel<sup>-1</sup>, which covers an angular area of $`1\times 1`$ arcmin<sup>2</sup>.

The HST STIS image was acquired in September and October 1998 using STIS with the 50CCD in open filter mode (which is sensitive at wavelengths spanning $`\lambda \approx 2000-10000`$ Å). The observations consisted of 9 dithered exposures of 2900 s duration. The raw images were processed and reduced by the STScI STIS team. The processed image was registered onto a grid of $`3300\times 3900`$ pixel<sup>2</sup> at a scale of 0.025 arcsec pixel<sup>-1</sup>, which covers roughly the same angular area as the NICMOS images. We used the non-drizzled Version 1 release of the NICMOS and STIS images, which were made available by STScI on 23 November, 1998, and we adopted photometric zero points determined by the STScI NICMOS and STIS teams.

The ESO VLT images were acquired in August, 1998 using the Unit Telescope #1 (UT1) with the Test Camera and the U, B, V, R, and I filters as a part of the VLT science verification campaign (ESO VLT-UT1 Science Verification 1998). For each band, the observations consisted of $`\approx 20`$ dithered exposures of $`\approx 900`$ s duration. We reduced the raw images, taking extra care in constructing the flat-field images, because the Test Camera CCD suffers from a large, wavelength-dependent blemish in its center. In agreement with Fontana et al. (1999), we found that separate superflats constructed from the median of the images obtained each night work best. The processed images are sampled on a grid of $`1000\times 1000`$ pixel<sup>2</sup> at a scale of 0.091 arcsec pixel<sup>-1</sup>, which covers an angular area of $`1.5\times 1.5`$ arcmin<sup>2</sup>. We adopted photometric zero points determined by Fontana et al. (1999).

The point spread functions (PSFs) of the ground-based images vary significantly from image to image, with the best images characterized by $`\mathrm{FWHM}\approx 0.5`$ arcsec and the worst images characterized by $`\mathrm{FWHM}\approx 2.2`$ arcsec. For this reason, we worked only with the individual images, i.e. without combining the images in each band. We registered the ground-based images to the space-based images (because the space-based images were already registered to within $`0.05`$ pixel by the STScI NICMOS and STIS teams).
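The registration transform described in the next paragraph (a shift, rotation, and scale onto the space-based frame) can be fit to matched star coordinates by linear least squares. A minimal sketch, with illustrative array names that are not part of the actual pipeline:

```python
import numpy as np

# Sketch: fit a similarity transform (scale s, rotation theta, shift t)
# taking ground-based star positions (x, y) to space-based ones (u, v):
#   u = a x - b y + tx,  v = b x + a y + ty,  a = s cos(theta), b = s sin(theta),
# which is linear in (a, b, tx, ty).

def fit_similarity(xy_ground, uv_space):
    x, y = np.asarray(xy_ground, dtype=float).T
    u, v = np.asarray(uv_space, dtype=float).T
    n = len(x)
    A = np.zeros((2 * n, 4))
    A[0::2] = np.column_stack([x, -y, np.ones(n), np.zeros(n)])
    A[1::2] = np.column_stack([y, x, np.zeros(n), np.ones(n)])
    rhs = np.empty(2 * n)
    rhs[0::2], rhs[1::2] = u, v
    (a, b, tx, ty), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a), (tx, ty)   # scale, angle, shift

# With a single matched star (the U band case below) only the shift is
# constrained; rotation and scale must then come from the other bands.
```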
We registered the images by measuring coordinates of stars in the space- and ground-based images, using these measurements to derive transformations to the space-based frame, and shifting, rotating, and scaling the ground-based images to the space-based frame. Special treatment was required for the U-band images, for which only a single bright star is available. In this case, we shifted according to measurements of the star and rotated and scaled according to measurements from the other bands. Adjacent pixels of the final, registered images are correlated as a result of the reduction and registration procedures, which must be accounted for in the analysis.

## 3 ANALYSIS

### 3.1 Object Detection

We detected objects in the NICMOS F160W image following procedures similar to those described previously by LYF96 and FLY99. First, we formed a signal-to-noise image by dividing the F160W image by the square root of the F160W variance image. Next, we applied the SExtractor object detection program (Bertin & Arnouts, 1996) to detect objects in the signal-to-noise image. We set the SExtractor detection parameters by requiring that no spurious objects were detected in the "negative" signal-to-noise image, which we formed by dividing the negative of the F160W image by the square root of the F160W variance image. We detected objects according to a signal-to-noise criterion (rather than a signal criterion) because the sensitivity of the F160W image varies significantly with position. (Generally, the image is more sensitive toward the center and less sensitive toward the edges as a consequence of the way the individual exposures were dithered.) Finally, we modified the resulting SExtractor segmentation map to make three small corrections: (1) we eliminated objects near the edges of the image, (2) we deblended objects around bright stars or galaxies, and (3) we merged diffraction spikes of bright stars with the host stars. A total of 332 objects were detected, the brightest of which is of magnitude $`AB(16,000)\approx 17.2`$ and the faintest of which is of magnitude $`AB(16,000)\approx 29.2`$. We repeated the same detection procedures using the NICMOS F222M image. Three additional objects were detected and incorporated into the object catalog.

### 3.2 Photometry

We measured optical and infrared photometry using a new quasi-optimal photometry technique that fits model spatial profiles of the detected objects to the space- and ground-based images. The technique is based on but extends the spatial profile fitting technique described previously by FLY99. In particular, the current technique implements two improvements over the previous technique: First, we used an image reconstruction method to generate smooth models of the intrinsic spatial profiles of the objects, which allows the spatial profile fitting technique to be applied to the space-based images as well as the ground-based images. Second, we applied the spatial profile fitting technique to the individual ground-based images (without combining the images in each band), which is necessary in order to achieve optimal sensitivity given that the PSFs of the ground-based images vary significantly from image to image. In comparison with conventional methods, the new technique provides higher signal-to-noise ratio measurements, and in contrast with conventional methods, the new technique accounts for uncertainty correlations between nearby, overlapping neighbors. First, we determined the PSFs of the space- and ground-based images.
For the space-based images, we approximated the PSFs by the median average of the three faintest of the four brightest stars in each image. For the ground-based images, we approximated the PSFs by double Gaussian profiles
$$\mathrm{\Phi }\left(r\right)=\sum _{i=1}^{2}A_i\mathrm{exp}\left[-\frac{1}{2}\left(\frac{r}{\sigma _i}\right)^2\right]$$ (1)
(where $`r`$ is the distance from the profile center), where we estimated the parameters by fitting to the brightest stars in each image (excluding saturated stars).

Next, we modeled the intrinsic spatial profiles of the objects. We produced the models by reconstructing the F160W and F222M images using the non-negative least-squares (NNLS) image reconstruction method (see Puetter & Yahil, 1999, and references therein) and masking the reconstructed image with the SExtractor segmentation map to identify the individual intrinsic spatial profiles of individual objects. Briefly, the NNLS image reconstruction method is an indirect image reconstruction method that constrains the reconstructed image to be non-negative, which forces the reconstructed image—i.e. the model image convolved with the PSF—to be smooth on the scale of the PSF. The NNLS image reconstruction method is matched to our purpose of modeling the intrinsic spatial profiles of the objects because it produces a smooth model of an image with the effects of the PSF removed.

Next, we formed image templates of the objects by convolving the intrinsic spatial profiles of the objects with the appropriate PSFs of the images. Finally, we fitted the image templates to the images to determine optimal flux estimates. For the $`k`$th image of a given band, we calculated a $`\chi ^2`$ statistic of the form
$$\chi _k^2=\sum _{i,j}\left[\frac{I^{(k)}(i,j)-B^{(k)}(i,j)-\sum _{n=1}^{N}F_nP_n^{(k)}(i,j)}{\sigma _{\mathrm{eff}}^{(k)}(i,j)}\right]^2,$$ (2)
where $`I^{(k)}`$ is the image, $`B^{(k)}`$ is the background, $`P_n^{(k)}`$ is the image template, $`\sigma _{\mathrm{eff}}^{(k)}`$ is the effective uncertainty (described previously by FLY99), and $`F_n`$ is the optimal flux estimate of the $`n`$th object, and where the sum extends over all pixels in the image. We determined local backgrounds by median averaging within $`64\times 64`$ pixel<sup>2</sup> boxes centered on the objects, excluding pixels occupied by any objects, and we determined effective uncertainties by summing the elements of $`3\times 3`$ data covariance matrices. We formed the total $`\chi ^2`$ of a given band by summing the $`\chi ^2`$ statistic over all individual images, i.e.
$$\chi ^2=\sum _k\chi _k^2.$$ (3)
For the space-based images, only one image (i.e. the final processed image) enters into the sum, whereas for the ground-based images, $`\approx 20`$ images (i.e. the individual exposures) enter into the sum. Given $`N`$ objects detected in the image, we set $`\partial \chi ^2/\partial F_i=0`$ to yield a system of $`N`$ coupled linear equations in the $`N`$ unknowns (i.e. the $`F_i`$, with $`i=1,\mathrm{},N`$). We solved the equations by Cholesky decomposition of the Hessian matrix to determine the optimal flux estimates $`F_i`$ and the optimal flux uncertainty estimates $`\sigma _{F_i}`$. Note that the technique is applicable to a set of unadded images simply because the Hessian matrix of a given band is additive with respect to the individual images.
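In outline, the fit of Eqs. (2) and (3) reduces to accumulating a Hessian matrix and gradient vector over all exposures of a band and solving the normal equations by Cholesky decomposition. A minimal sketch, assuming diagonal effective uncertainties and precomputed PSF-convolved templates; the array names are illustrative:

```python
import numpy as np

# Sketch of the quasi-optimal fit of Eqs. (2)-(3): accumulate the Hessian
# and gradient over all exposures of a band, then solve the normal
# equations by Cholesky decomposition.

def fit_fluxes(images, backgrounds, sigmas, templates):
    """images/backgrounds/sigmas: lists of (ny, nx) arrays, one per exposure;
    templates: matching list of (N, ny, nx) arrays of PSF-convolved profiles."""
    N = templates[0].shape[0]
    H = np.zeros((N, N))          # Hessian: additive over exposures (Eq. 3)
    b = np.zeros(N)
    for img, bkg, sig, P in zip(images, backgrounds, sigmas, templates):
        w = (1.0 / sig**2).ravel()           # inverse-variance pixel weights
        Pf = P.reshape(N, -1)
        H += (Pf * w) @ Pf.T                 # H_mn = sum_pix w P_m P_n
        b += Pf @ (w * (img - bkg).ravel())  # b_m  = sum_pix w P_m (I - B)
    L = np.linalg.cholesky(H)                # H = L L^T
    F = np.linalg.solve(L.T, np.linalg.solve(L, b))      # optimal fluxes
    sigma_F = np.sqrt(np.diag(np.linalg.inv(H)))         # flux uncertainties
    return F, sigma_F
```

The off-diagonal elements of the inverse Hessian carry the uncertainty correlations between overlapping neighbors that the aperture method discussed below ignores.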
The signal-to-noise ratios obtained by the current method are in general substantially larger than the signal-to-noise ratios obtained by conventional methods, say by direct integration within isophotal apertures. This is demonstrated in Figure 1, which shows the signal-to-noise ratio obtained by the current method compared with the signal-to-noise ratio obtained by the aperture method versus object flux, for objects measured in the F160W, STIS, and VLT I-band images. For the majority of objects at all flux levels, the signal-to-noise ratio obtained by the current method is larger than the signal-to-noise ratio obtained by the aperture method, by a factor that is typically $`\approx 2`$ but that ranges up to $`\approx 10`$. The improvement is particularly substantial for the ground-based images, where the PSFs vary significantly from exposure to exposure. For a small minority of objects, the signal-to-noise ratio obtained by the current method is formally smaller than the signal-to-noise ratio obtained by the aperture method. This is explained by noting that the current method accounts for uncertainty correlations between nearby, overlapping neighbors whereas the aperture method does not. In particular, by examining individual objects with low signal-to-noise ratio detections, we find that: (1) objects with overlapping neighbors of comparable flux levels have flux errors underestimated by aperture photometry, due to significant contributions from the off-diagonal parts of the covariance matrices, and (2) objects with overlapping neighbors of much higher flux levels have flux measurements overestimated by aperture photometry, because substantial amounts of flux from the brighter objects are incorrectly assigned to the fainter objects. These two errors of aperture photometry have the effect of falsely increasing the signal-to-noise ratio of the aperture measurements at low signal-to-noise ratios. We therefore conclude that the current method is superior to conventional methods in two respects: (1) it provides substantially higher signal-to-noise ratios, and (2) it provides more realistic error estimates.

### 3.3 Photometric Redshifts

We measured photometric redshifts following procedures similar to those described previously by LYF96 and FLY99, but with a sequence of six spectrophotometric templates, including the four templates of our previous analyses (of E/S0, Sbc, Scd, and Irr galaxies) and two new templates (of star-forming galaxies). First, we formed the two new spectrophotometric templates of star-forming galaxies by adopting ultraviolet- and optical-wavelength spectrophotometry of starburst galaxies of intrinsic color excess $`E_{B-V}<0.10`$ (designated SB1) and $`0.11\le E_{B-V}\le 0.21`$ (designated SB2) of Kinney et al. (1996), extrapolating toward ultraviolet and infrared wavelengths, and incorporating the effects of intrinsic and intervening absorption according to the prescription of FLY99. (Specifically, we assumed that galaxies are optically thick at the Lyman limit and incorporated the average Lyman $`\alpha `$ and Lyman $`\beta `$ decrement parameters of Madau 1995 and Webb 1996.) Figure 2 shows the spectrophotometric templates, which span rest-frame wavelengths $`912-25,000`$ Å. Next, we integrated the spectrophotometric templates against the system throughput functions of each instrument with each filter. For the HST instruments, we adopted the system throughput functions provided by the STScI NICMOS and STIS teams.
For the VLT instruments, we modeled the system throughput functions using filter and detector response functions provided by the VLT Science Verification team, instrument response functions calculated from the measured reflectivity of the Al reflecting surfaces, and a standard atmospheric response function. Figure 3 shows the system throughput functions. Finally, we determined photometric redshifts by maximizing a likelihood estimator of the form
$$\mathcal{L}(z,T)=\prod _{i=1}^{9}\mathrm{exp}\left\{-\frac{1}{2}\left[\frac{f_i-AF_i(z,T)}{\sigma _i}\right]^2\right\},$$ (4)
where $`f_i`$ is the measured flux in band $`i`$, $`\sigma _i`$ is the measured flux uncertainty in band $`i`$, $`F_i(z,T)`$ is the modeled flux in band $`i`$ at assumed redshift $`z`$ and spectral type $`T`$, and $`A`$ is an arbitrary flux normalization, and where the product extends over all nine bands. For each object, $`\mathcal{L}(z,T)`$ was maximized with respect to $`A`$ and $`T`$ to determine the "redshift likelihood function" $`\mathcal{L}(z)`$, which was maximized with respect to $`z`$ to determine the maximum-likelihood photometric redshift.
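Because the normalization $`A`$ enters Eq. (4) linearly, it can be maximized analytically for each $`(z,T)`$, so the measurement reduces to a grid scan. A minimal sketch; the function `model_flux`, returning the nine modeled band fluxes for a given redshift and template, is an assumed precomputed ingredient:

```python
import numpy as np

# Sketch of maximizing Eq. (4).  For fixed (z, T) the best normalization is
#   A = sum_i(f_i F_i / sigma_i^2) / sum_i(F_i^2 / sigma_i^2),
# so maximizing L is a chi^2 grid scan over z and the six templates.

def photo_z(f, sigma, model_flux, z_grid, n_templates=6):
    """f, sigma: (9,) measured fluxes and uncertainties."""
    w = 1.0 / sigma**2
    best = (np.inf, None, None)
    for T in range(n_templates):
        for z in z_grid:
            F = model_flux(z, T)                       # (9,) modeled fluxes
            A = np.sum(w * f * F) / np.sum(w * F**2)   # analytic best A
            chi2 = np.sum(w * (f - A * F) ** 2)        # -2 ln L + const
            if chi2 < best[0]:
                best = (chi2, z, T)
    return best[1], best[2]                            # (z_best, T_best)

# Example grid: z_grid = np.arange(0.0, 11.0, 0.01)
```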
(The sensitivity image may not trace the normalization of the sensitivity versus position relation because of the non-zero off-diagonal elements of the covariance matrix.) First, we formed the sensitivity image by taking the square root of the variance image. Next, we scaled the sensitivity image so that it traced the faint-end envelope of the measured brightnesses of objects detected in the F160W image. Finally, we integrated the enclosed area as a function of limiting depth to determine the survey area versus depth relation. Figure 4 shows the survey area versus depth relation, which indicates that the survey reaches a limiting depth of $`AB(16,000)\approx 28.7`$ and covers 1.01 arcmin<sup>2</sup> to $`AB(16,000)=27`$ and 1.05 arcmin<sup>2</sup> to $`AB(16,000)=26.5`$. Likewise, the survey reaches a limiting depth of $`AB(22,200)\approx 24.8`$ and covers 0.79 arcmin<sup>2</sup> to $`AB(22,200)=24`$ and 1.09 arcmin<sup>2</sup> to $`AB(22,200)=23`$. The survey area versus depth relation is crucial to any statistical analysis of the catalog.
## 5 EVALUATION OF THE PHOTOMETRIC REDSHIFT TECHNIQUE
### 5.1 Accuracy and Reliability of the Photometric Redshift Measurements
Spectroscopic redshift measurements of $`\sim 120`$ faint galaxies in the HDF have been obtained using the Keck telescope (see, e.g., the list compiled by FLY99), and spectroscopic measurements of three galaxies in the HDF–S WFPC2 field have been obtained using the Anglo-Australian Telescope (Glazebrook et al., 2000, in preparation). Although more such measurements will undoubtedly be obtained (especially of faint galaxies in the HDF–S), the current measurements provide a means of assessing the accuracy and reliability of the photometric redshift measurements and of comparing results of the four- versus six-template photometric redshift measurements. We compiled spectroscopic redshift measurements from published and unpublished sources, rejecting as unreliable spectroscopic measurements with uncertain or ambiguous interpretations. [A non-negligible fraction of the spectroscopic redshift measurements have been shown to be in error and so must be excluded from consideration; see, e.g., the discussions of Lanzetta, Fernández-Soto & Yahil (1998, hereafter LFY98) and FLY99.] Figure 5 shows the comparison of 104 photometric and reliable spectroscopic redshift measurements. Specifically, Figure 5(a) shows the comparison of the four-template photometric redshift measurements with the reliable spectroscopic redshift measurements, and Figure 5(b) shows the comparison of the six-template photometric redshift measurements with the reliable spectroscopic redshift measurements. Several results are evident on the basis of Figure 5: 1. Inclusion of the two new templates eliminates the tendency of our previous analyses to systematically underestimate the redshifts of galaxies of redshift $`2<z<3`$ (by a redshift offset of roughly 0.3), in agreement with results found previously by Benítez et al. (1998). The six-template photometric redshift measurements are essentially free of systematic bias at all redshifts $`z<6`$ that have as yet been examined. 2. The RMS residual between the six-template photometric redshift measurements and the reliable spectroscopic redshift measurements is $`\mathrm{\Delta }z=0.09`$ at redshifts $`z<2`$, $`\mathrm{\Delta }z=0.29`$ at redshifts $`2<z<4`$, and $`\mathrm{\Delta }z=0.18`$ at redshifts $`z>4`$.
The median absolute residual between the six-template photometric redshift measurements and the reliable spectroscopic redshift measurements is $`\mathrm{\Delta }z=0.07`$ at redshifts $`z<2`$, $`\mathrm{\Delta }z=0.22`$ at redshifts $`2<z<4`$, and $`\mathrm{\Delta }z=0.09`$ at redshifts $`z>4`$. 3. The six-template photometric redshift measurements are accurate to within an RMS relative uncertainty of $`\mathrm{\Delta }z/(1+z)\approx 7\%`$ at all redshifts $`z<6`$ that have as yet been examined. We conclude that the photometric redshift technique is in general capable of determining reliable redshifts to within a relative uncertainty of $`\mathrm{\Delta }z/(1+z)\approx 7\%`$.
### 5.2 Photometric Redshift Measurements of Stars
The photometric redshift measurements of the probable stars listed in Table 2 are $`z=0.07`$, 0.30, 5.33, 5.63, and 5.72. Thus the spectral energy distributions of some stars resemble the spectral energy distributions of galaxies of redshift $`z=5`$–$`6`$. We believe that all such stars were identified on the basis of visual inspection of the space-based images, but we cannot exclude the possibility that a small number of faint stars were misidentified as galaxies of redshift $`z=5`$–$`6`$.
### 5.3 Effects of Photometric Error on the Photometric Redshift Measurements
Comparison of the photometric and spectroscopic redshift measurements yields a measure of the uncertainties of the photometric redshift technique, which in principle can include contributions from both photometric error and cosmic variance with respect to the spectral templates. At bright magnitudes, the effects of photometric error are expected to be negligible, while at faint magnitudes the effects of photometric error are expected to dominate. We assessed the effects of photometric error on the photometric redshift measurements by performing a series of simulations similar to those described previously by LFY98. First, we determined the expected energy fluxes through the various filters of an Irr galaxy spectrophotometric template, given an assumed galaxy magnitude $`AB(16,000)`$ [selected over the range $`21<AB(16,000)<29`$] and redshift $`z`$ (selected over the range $`0<z<11`$). Next, we added random noise to the expected energy fluxes according to the actual noise characteristics of the images. Next, we determined photometric redshift measurements of the simulated objects using the sequence of six spectrophotometric templates. Finally, we repeated these steps 1000 times as functions of $`AB(16,000)`$ and $`z`$ to determine the distribution of redshift residuals between the input and output models. Figure 6 shows the distributions of redshift residuals as functions of $`AB(16,000)`$ and $`z`$. Several results are evident on the basis of Figure 6: 1. At $`AB(16,000)<25`$, photometric errors have a negligible effect on the photometric redshift measurements. At these relatively bright magnitudes, the RMS dispersion of the residuals is $`\mathrm{\Delta }z\approx 0.02`$. 2. At $`AB(16,000)=25`$–$`26`$, photometric errors have only a modest effect on the photometric redshift measurements at redshifts $`z\lesssim 7`$, where the RMS dispersion of the residuals is $`\mathrm{\Delta }z\approx 0.25`$, but have a somewhat more significant effect on the photometric redshift measurements at $`z\gtrsim 7`$, where a secondary peak in the residual distribution occurs at large negative residual, i.e. at $`\mathrm{\Delta }z\approx -6`$. The secondary peak is caused by ambiguity between high-redshift late-type galaxies and low-redshift early-type galaxies. 3.
At $`AB(16,000)=27`$–$`28`$, photometric errors have a modest effect on the photometric redshift measurements at all redshifts, with a prominent secondary peak in the residual distribution at all redshifts $`z\gtrsim 3`$. The sense of the secondary peak is such that it is more likely for high-redshift objects to be assigned low redshifts than for low-redshift objects to be assigned high redshifts. 4. At $`AB(16,000)>28`$, photometric errors have a significant effect on the photometric redshift measurements at all redshifts. We conclude that the effects of photometric error on the photometric redshift measurements must be taken into consideration at magnitudes fainter than $`AB(16,000)\approx 25`$.
## 6 DISCUSSION
Here we briefly discuss results of the catalog of photometric redshifts, concentrating on results related to the highest-redshift galaxies identified by the analysis. Scientific analysis of the catalog will be presented in forthcoming papers.
### 6.1 Redshift Distribution of Galaxies in the HDF–S NICMOS Field
The catalog of photometric redshifts identifies 330 galaxies, of photometric redshift measurement ranging from $`z\approx 0`$ through $`z>10`$. Figure 7 shows the redshift distribution of the galaxies in the HDF–S NICMOS field. The distribution is characterized by a median redshift of $`z_{\mathrm{med}}=1.38`$ and by a tail that stretches to redshifts beyond $`z=10`$. The redshift distribution of Figure 7 does not, of course, apply for any magnitude-limited sample, because the sensitivity of the F160W image varies significantly with position.
### 6.2 Galaxies of Redshift $`z>5`$
One difference between the current analysis of the HDF–S NICMOS field and our previous analyses of the HDF is that the current analysis is in principle sensitive to galaxies of redshift larger than were the previous analyses. In this section, we discuss the galaxies of photometric redshift measurement $`z>5`$. The catalog of photometric redshifts identifies 21 galaxies (or 6% of the total) of redshift $`z>5`$. Table 3 lists the positions, magnitudes $`AB(16,000)`$, photometric redshift measurements $`z`$, and best-fit spectral types of these galaxies, and Figure 8 shows the observed and modeled spectral energy distributions and redshift likelihood functions of these galaxies. Table 4 lists surface densities of galaxies of redshift $`z>5`$ derived from the catalog of photometric redshifts accounting for the variation of the survey area versus depth relation as a function of limiting magnitude $`AB(16,000)`$. Uncertainties listed in Table 4 are derived by a bootstrap resampling technique, which explicitly accounts for sampling error, photometric error, and cosmic variance with respect to the spectrophotometric templates. First, we resampled the original catalog, allowing the possibility of duplication. Next, we added random noise to the flux measurements (according to the actual noise properties of the images) and redetermined the photometric redshift measurements. Next, we added random noise to the photometric redshift measurements (according to the actual noise properties of the photometric redshift technique, as described in § 5.1). Next, we measured galaxy surface densities from the resampled and perturbed photometric redshift catalog. Finally, we repeated these steps a thousand times and determined the 1 $`\sigma `$ deviations of the surface density measurements. We conclude that galaxies of redshift $`z>5`$ are a non-negligible fraction of the galaxy population at magnitudes $`AB(16,000)\gtrsim 27`$.
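A minimal sketch of the bootstrap loop just described is given below. The function names (`perturb_flux`, `rephotoz`, `measure`, `sigma_z`) are placeholders for the four steps listed above, not routines from the original analysis, and the catalog is assumed to be a structured array with a `"z"` field; only the structure of the resampling is meant to be illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_surface_density(catalog, measure, perturb_flux, rephotoz,
                              sigma_z, n_boot=1000):
    """Bootstrap the surface density of z > 5 galaxies (illustrative sketch)."""
    densities = []
    for _ in range(n_boot):
        # resample the catalog, allowing duplication
        sample = catalog[rng.integers(0, len(catalog), len(catalog))].copy()
        sample = perturb_flux(sample, rng)   # add photometric noise to fluxes
        sample = rephotoz(sample)            # redetermine photometric redshifts
        # add noise mimicking the photo-z technique's own uncertainty (sec. 5.1)
        sample["z"] = sample["z"] + rng.normal(0.0, sigma_z(sample["z"]))
        densities.append(measure(sample))
    densities = np.asarray(densities)
    return densities.mean(), densities.std()  # the 1-sigma deviation
```

This construction propagates sampling error, photometric error, and the residual scatter of the photometric redshift technique into a single uncertainty estimate, which is why the quoted errors are larger than simple Poisson errors.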
### 6.3 Galaxies of Redshift $`z>10`$
The catalog of photometric redshifts identifies 8 galaxies (or 2% of the total) of redshift $`z>10`$, including 3 galaxies detected on the basis of the F222M image. Table 5 lists surface densities of galaxies of redshift $`z>10`$ derived from the catalog of photometric redshifts accounting for the variation of the survey area versus depth relation as a function of limiting magnitudes $`AB(16,000)`$ and $`AB(22,200)`$. We are struck that the surface density of galaxies of redshift $`z>10`$ at $`AB(22,200)=24`$ is comparable to the surface density of galaxies of redshift $`z>10`$ at $`AB(16,000)=28`$.
### 6.4 Early-Type Galaxies of Redshift $`z>1`$
Another difference between the current analysis of the HDF–S NICMOS field and our previous analyses of the HDF is that the current analysis is in principle sensitive to early-type galaxies of redshift larger than were the previous analyses. In this section, we discuss the early-type galaxies of photometric redshift measurement $`z>1`$. The catalog of photometric redshifts identifies 11 galaxies (or 3% of the total) of best-fit spectral type E/S0, of which 5 galaxies (or 1% of the total) are of redshift $`z>1`$. Table 6 lists the positions, magnitudes $`AB(16,000)`$, and photometric redshift measurements $`z`$ of these galaxies, and Figure 9 shows the observed and modeled spectral energy distributions and redshift likelihood functions of these galaxies. (It should be noted, however, that the likelihood that SB–NI–0471–0941 is an early–type galaxy of redshift $`z=5.10`$ is statistically indistinguishable from the likelihood that this object is a later–type galaxy of redshift $`z\approx 10`$. A similar result also applies for SB–NI–0844–0698.) Table 7 lists surface densities of early-type galaxies of redshift $`z>1`$ derived from the catalog of photometric redshifts and the survey area versus depth relation as a function of limiting magnitude $`AB(16,000)`$. (Uncertainties listed in Table 7 are derived as in Table 4.) We conclude that early-type galaxies of redshift $`z>1`$ are a non-negligible fraction of the galaxy population at magnitudes $`AB(16,000)\lesssim 25`$.
### 6.5 Comparison with Other Photometric Redshift Measurements
Table 8 compares photometric redshift measurements of six galaxies in common between the current analysis and the previous analysis of Benítez et al. (1999). Four of the six pairs of measurements are concordant and two of the six pairs of measurements are discordant to within the cosmic dispersion $`\mathrm{\Delta }z\approx 0.1`$ of the photometric redshift measurement technique as described in § 5.1. Benítez et al. (1999) used similar photometric redshift techniques and template spectra, so the discrepancies most likely arise from differences in photometry. Comparison of photometry in all bands indicates that, while our flux measurements are consistent with the flux measurements of Benítez et al. (1999) in the optical and the F160W bands, our flux measurements are systematically lower in the F110W band and systematically higher in the F222M band. These differences arise because we use a spatial profile fitting technique, which takes into account differences in the PSF from band to band, whereas Benítez et al. (1999) use a fixed aperture technique (where the apertures are determined from a combined F110W and F160W image), which implicitly assumes identical PSFs in all bands. There are indeed differences in the PSFs for the three NICMOS images.
The PSF of the F222M image is slightly broader than the PSF of the F160W image, and the PSF of the F110W image is slightly sharper than the PSF of the F160W image. We therefore suspect that Benítez et al. (1999) may have overestimated fluxes in the F110W image (because the apertures were too large) and underestimated fluxes in the F222M image (because the apertures were too small).
## 7 SUMMARY AND CONCLUSIONS
Here we present a catalog of photometry and photometric redshifts of 335 faint objects in the HDF–S NICMOS field. The analysis is based on (1) infrared images obtained with HST using NICMOS with the F110W, F160W, and F222M filters, (2) an optical image obtained with HST using STIS with no filter, and (3) optical images obtained with the ESO VLT with U, B, V, R, and I filters. The primary utility of the catalog of photometric redshifts is as a survey of faint galaxies detected in the NICMOS F160W and F222M images. The sensitivity of the survey varies significantly with position, reaching a limiting depth of $`AB(16,000)\approx 28.7`$ and covering 1.01 arcmin<sup>2</sup> to $`AB(16,000)=27`$ and 1.05 arcmin<sup>2</sup> to $`AB(16,000)=26.5`$. Likewise, the survey reaches a limiting depth of $`AB(22,200)\approx 24.8`$ and covers 0.79 arcmin<sup>2</sup> to $`AB(22,200)=24`$ and 1.09 arcmin<sup>2</sup> to $`AB(22,200)=23`$. The catalog of photometric redshifts identifies 21 galaxies (or 6% of the total) of redshift $`z>5`$, 8 galaxies (or 2% of the total) of redshift $`z>10`$, and 11 galaxies (or 3% of the total) of best-fit spectral type E/S0, of which 5 galaxies (or 1% of the total) are of redshift $`z>1`$. The authors thank Bob Williams and the entire STScI HDF–S team for providing access to the HDF–S observations, the entire ESO VLT team for making observations of the HDF–S publicly available, and an anonymous referee for helpful comments. HWC, KML, SMP, and NY were supported by NASA grant NAGW–4422 and NSF grant AST–9624216. AFS was supported by a grant from the Australian Research Council. RCP was supported by NASA grant NAG–53944. AY was supported by NASA grant AR–07551.01–96A.
# Shape of the <sup>8</sup>B alpha and neutrino spectra
## ACKNOWLEDGMENTS
We thank J.J. Kolata, F. Bechetti, D. Peterson, and P. Santi for help during the early stages of this experiment, J. Napolitano for sending us the $`\beta ^+`$ spectrum, and J.F. Beacom, R.G.H. Robertson and S.J. Freedman for illuminating comments. AG thanks the National Institute for Nuclear Theory at Seattle for its hospitality during the summer of 1999.
## 1 Introduction
The $`P_{-}=N/R`$ sector of the discrete light–cone quantization of uncompactified M–theory is given by the supersymmetric quantum mechanics of $`U(N)`$ matrices. The compactification of M(atrix) theory as a model for M–theory has been studied extensively in the literature, where it has also been treated using noncommutative geometry. These investigations apply to the $`d`$–dimensional torus $`T^d`$, and have been further developed from various viewpoints. These structures are also relevant in noncommutative string and gauge theories. In this paper, following this line of development, we address the compactification of M(atrix) theory on Riemann surfaces with genus $`g>1`$. A Riemann surface $`\mathrm{\Sigma }`$ of genus $`g>1`$ is constructed as the quotient $`H/\mathrm{\Gamma }`$, where $`H`$ is the upper half–plane, and $`\mathrm{\Gamma }\subset \mathrm{PSL}_2(R)`$, $`\mathrm{\Gamma }\simeq \pi _1(\mathrm{\Sigma })`$, is a Fuchsian group acting on $`H`$ as $$\gamma =\left(\begin{array}{cc}a& b\\ c& d\end{array}\right)\in \mathrm{\Gamma },\qquad \gamma z=\frac{az+b}{cz+d}.$$ (1.1) In the absence of elliptic and parabolic generators, the $`2g`$ Fuchsian generators $`\gamma _j`$ satisfy $$\underset{j=1}{\overset{g}{\prod }}\left(\gamma _{2j-1}\gamma _{2j}\gamma _{2j-1}^{-1}\gamma _{2j}^{-1}\right)=I.$$ (1.2) Inspired by M(atrix) theory, let us promote the complex coordinate $`z=x+iy`$ to an $`N\times N`$ complex matrix $`Z=X+iY`$, with $`X=X^{\dagger }`$ and $`Y=Y^{\dagger }`$. This suggests defining fractional linear transformations of $`Z`$ through conjugation with some non–singular matrix $`𝒰`$: $$𝒰Z𝒰^{-1}=(aZ+bI)(cZ+dI)^{-1}.$$ (1.3) Accordingly, operators $`𝒰_k`$ representing the Fuchsian generators $`\gamma _k`$ can be constructed, such that $$\underset{k=1}{\overset{g}{\prod }}\left(𝒰_{2k-1}𝒰_{2k}𝒰_{2k-1}^{-1}𝒰_{2k}^{-1}\right)=e^{2\pi i\theta }I.$$ (1.4) While we will find the solution to (1.4), we will consider slightly different versions of (1.3). This construction cannot be implemented for finite $`N`$, as taking the trace of (1.3) shows. It can be interpreted as defining a sort of M(atrix) uniformization, in which the Möbius transformation of the M(atrix) coordinate $`Z`$ is defined through (1.3).
## 2 Compactification in $`g>1`$
Next we present an explicit Ansatz to compactify 11–dimensional supergravity on a Riemann surface with $`g>1`$. The Einstein equations read $$R_{MN}-\frac{1}{2}G_{MN}R=\frac{1}{3}\left(H_{ML_1L_2L_3}H_{NL_1^{\prime }L_2^{\prime }L_3^{\prime }}G^{L_1L_1^{\prime }}G^{L_2L_2^{\prime }}G^{L_3L_3^{\prime }}-\frac{1}{8}G_{MN}H_{L_1L_2L_3L_4}H_{L_1^{\prime }L_2^{\prime }L_3^{\prime }L_4^{\prime }}G^{L_1L_1^{\prime }}G^{L_2L_2^{\prime }}G^{L_3L_3^{\prime }}G^{L_4L_4^{\prime }}\right),$$ (2.1) where $`H_{MNPQ}`$ is the field strength of $`C_{MNP}`$. We try an Ansatz by diagonally decomposing $`G_{MN}`$ into 2–, 4– and 5–dimensional blocks, with $`H_{MNPQ}`$ taken along the 4–dimensional subspace: $$G_{MN}=\mathrm{diag}(g_{\alpha \beta }^{(2)},g_{mn}^{(4)},g_{ab}^{(5)}),\qquad H_{mpqr}=\epsilon _{mpqr}f.$$ (2.2) The Einstein equations then decompose as $$R_{i_kj_k}^{(k)}-\frac{1}{2}g_{i_kj_k}^{(k)}(R^{(2)}+R^{(4)}+R^{(5)})=\epsilon _k\,\mathrm{det}g^{(4)}f^2g_{i_kj_k}^{(k)},$$ (2.3) where $`k=2,4,5`$, $`(i_2,j_2)=(\alpha ,\beta )`$, $`(i_4,j_4)=(m,n)`$, $`(i_5,j_5)=(a,b)`$, and $`ϵ_2=ϵ_4=ϵ_5=1`$. Some manipulations lead to $$R^{(k)}=c_kf^2\mathrm{det}g^{(4)},$$ (2.4) with $`c_2=-4/3`$, $`c_4=16/3`$ and $`c_5=-10/3`$. We observe that $`f=0`$ would reproduce the toroidal case. A non–vanishing $`f`$ is a deformation producing $`g>1`$. It suffices that $`g^{(4)}`$ have positive signature for $`R^{(2)}`$ to be negative, as required in $`g>1`$.
Then a choice for the 4– and 5–dimensional manifolds is $`S^4`$ and $`AdS^5`$.
## 3 Differential representation of $`\mathrm{\Gamma }`$
### 3.1 The unitary gauged operators
For $`n=-1,0,1`$ and $`e_n(z)=z^{n+1}`$ we consider the $`\mathrm{sl}_2(R)`$ operators $`\mathrm{\ell }_n=e_n(z)\partial _z`$. We define $$L_n=e_n^{-1/2}\mathrm{\ell }_ne_n^{1/2}=e_n\left(\partial _z+\frac{1}{2}\frac{e_n^{\prime }}{e_n}\right).$$ (3.1) These satisfy $$[L_m,L_n]=(n-m)L_{m+n},\qquad [\overline{L}_m,L_n]=0,$$ $$[L_n,f]=z^{n+1}\partial _zf.$$ (3.2) For $`k=1,2,\mathrm{\dots },2g`$, consider the operators $$T_k=e^{\lambda _1^{(k)}(L_1+\overline{L}_1)}e^{\lambda _0^{(k)}(L_0+\overline{L}_0)}e^{\lambda _{-1}^{(k)}(L_{-1}+\overline{L}_{-1})},$$ (3.3) with the $`\lambda _n^{(k)}`$ picked such that $`T_kzT_k^{-1}=\gamma _kz=(a_kz+b_k)/(c_kz+d_k)`$, so that by (1.2) $$\underset{k=1}{\overset{g}{\prod }}\left(T_{2k-1}T_{2k}T_{2k-1}^{-1}T_{2k}^{-1}\right)=I.$$ (3.4) On $`L^2(H)`$ we have the scalar product $$\langle \varphi |\psi \rangle =\int _Hd\nu \,\overline{\varphi }\psi ,$$ (3.5) $`d\nu (z)=i\,dz\,d\overline{z}/2=dx\,dy`$. The $`T_k`$ provide a unitary representation of $`\mathrm{\Gamma }`$. Next consider the gauged $`\mathrm{sl}_2(R)`$ operators $$\mathcal{L}_n^{(F)}=F(z,\overline{z})L_nF^{-1}(z,\overline{z})=e_n\left(\partial _z+\frac{1}{2}\frac{e_n^{\prime }}{e_n}-\partial _z\mathrm{ln}F(z,\overline{z})\right),$$ (3.6) where $`F(z,\overline{z})`$ is an undetermined phase function, to be determined later on. The $`\mathcal{L}_n^{(F)}`$ also satisfy the algebra (3.2). The adjoint of $`\mathcal{L}_n^{(F)}`$ is given by $$\mathcal{L}_n^{(F)\dagger }=-F\overline{e_n^{1/2}}\partial _{\overline{z}}\overline{e_n^{1/2}}F^{-1},$$ (3.7) with $`\mathcal{L}_n^{(F)\dagger }=-\overline{\mathcal{L}}_n^{(F^{-1})}`$. Finally we define $$\mathrm{\Lambda }_n^{(F)}=\mathcal{L}_n^{(F)}-\mathcal{L}_n^{(F)\dagger }=\mathcal{L}_n^{(F)}+\overline{\mathcal{L}}_n^{(F^{-1})}.$$ (3.8) The $`\mathrm{\Lambda }_n^{(F)}`$ enjoy the fundamental property that both their chiral components are gauged in the same way by the function $`F`$, that is $$\mathrm{\Lambda }_n^{(F)}=F(L_n+\overline{L}_n)F^{-1},$$ (3.9) while also satisfying the $`\mathrm{sl}_2(R)`$ algebra: $$[\mathrm{\Lambda }_m^{(F)},\mathrm{\Lambda }_n^{(F)}]=(n-m)\mathrm{\Lambda }_{m+n}^{(F)},$$ $$[\mathrm{\Lambda }_n^{(F)},f]=(z^{n+1}\partial _z+\overline{z}^{n+1}\partial _{\overline{z}})f.$$ (3.10) It holds that $$e^{\mathrm{\Lambda }_n^{(F)}}=Fe^{L_n+\overline{L}_n}F^{-1},$$ (3.11) which is a unitary operator since $`\mathrm{\Lambda }_n^{(F)\dagger }=-\mathrm{\Lambda }_n^{(F)}`$. Let $`b`$ be a real number, and $`A`$ a Hermitean connection 1–form to be identified presently. Set $$𝒰_k=e^{ib\int _z^{\gamma _kz}A}T_k,$$ (3.12) where the integration contour is taken to be the Poincaré geodesic connecting $`z`$ and $`\gamma _kz`$.
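The commutation relations (3.2) can be checked symbolically. The following sketch (our own illustration, not part of the original derivation) verifies $`[L_m,L_n]=(n-m)L_{m+n}`$ for the explicit realization $`L_n=z^{n+1}\partial _z+\frac{n+1}{2}z^n`$ obtained from (3.1):

```python
import sympy as sp

z = sp.symbols('z')
f = sp.Function('f')(z)

def L(n, g):
    """L_n acting on g: L_n = z^(n+1) d/dz + (n+1) z^n / 2, cf. eq. (3.1)."""
    return z**(n + 1) * sp.diff(g, z) + sp.Rational(n + 1, 2) * z**n * g

# check [L_m, L_n] = (n - m) L_{m+n} on a test function, for m, n in {-1, 0, 1}
for m in (-1, 0, 1):
    for n in (-1, 0, 1):
        comm = L(m, L(n, f)) - L(n, L(m, f)) - (n - m) * L(m + n, f)
        assert sp.expand(comm) == 0
print("sl2(R) algebra verified")
```

The same check applies verbatim to the gauged operators of (3.6), since conjugation by a function preserves commutators.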
As the gauging functions introduced in (3.6) we will take the functions $`F_k(z,\overline{z})`$ that solve the equation $$F_kT_kF_k^{-1}=e^{ib\int _z^{\gamma _kz}A}T_k,$$ (3.13) that is $$F_k(\gamma _kz,\gamma _k\overline{z})=e^{-ib\int _z^{\gamma _kz}A}F_k(z,\overline{z}).$$ (3.14)
### 3.2 The gauged algebra
With the choice (3.13) for $`F_k`$, (3.9) becomes $$\mathrm{\Lambda }_{n,k}^{(F)}=F_k(L_n+\overline{L}_n)F_k^{-1}=z^{n+1}\left(\partial _z+\frac{n+1}{2z}-\partial _z\mathrm{ln}F_k\right)+\overline{z}^{n+1}\left(\partial _{\overline{z}}+\frac{n+1}{2\overline{z}}-\partial _{\overline{z}}\mathrm{ln}F_k\right).$$ (3.15) The $`\mathrm{\Lambda }_{n,k}^{(F)}`$ satisfy the algebra $$[\mathrm{\Lambda }_{m,j}^{(F)},\mathrm{\Lambda }_{n,k}^{(F)}]=(n-m)\mathrm{\Lambda }_{m+n,j}^{(F)}+\left[F_k^{-1}|e_n|\mathrm{\Lambda }_{n,k}^{(F)}|e_n|^{-1}F_k-F_j^{-1}|e_m|\mathrm{\Lambda }_{m,j}^{(F)}|e_m|^{-1}F_j\right](\mathrm{ln}F_j-\mathrm{ln}F_k),$$ $$[\mathrm{\Lambda }_{n,k}^{(F)},f]=(z^{n+1}\partial _z+\overline{z}^{n+1}\partial _{\overline{z}})f.$$ (3.16) Upon exponentiating $`\mathrm{\Lambda }_{n,k}^{(F)}`$ one finds $$𝒰_k=e^{\lambda _1^{(k)}\mathrm{\Lambda }_{1,k}^{(F)}}e^{\lambda _0^{(k)}\mathrm{\Lambda }_{0,k}^{(F)}}e^{\lambda _{-1}^{(k)}\mathrm{\Lambda }_{-1,k}^{(F)}},$$ (3.17) that is, the $`𝒰_k`$ are unitary, and $$𝒰_k^{-1}=T_k^{-1}e^{-ib\int _z^{\gamma _kz}A}=e^{-ib\int _{\gamma _k^{-1}z}^zA}T_k^{-1}.$$ (3.18)
### 3.3 Computing the phase
It is immediate to see that the $`𝒰_k`$ defined in (3.12) satisfy (1.4) for a certain value of $`\theta `$: $$\underset{k=1}{\overset{g}{\prod }}\left(𝒰_{2k-1}𝒰_{2k}𝒰_{2k-1}^{\dagger }𝒰_{2k}^{\dagger }\right)$$ $$=e^{ib\int _z^{\gamma _1z}A}T_1e^{ib\int _z^{\gamma _2z}A}T_2e^{-ib\int _{\gamma _1^{-1}z}^zA}T_1^{-1}e^{-ib\int _{\gamma _2^{-1}z}^zA}T_2^{-1}\mathrm{\cdots }$$ $$=\mathrm{exp}\left[ib\left(\int _z^{\gamma _1z}A+\int _{\gamma _1z}^{\gamma _2\gamma _1z}A+\int _{\gamma _2\gamma _1z}^{\gamma _1^{-1}\gamma _2\gamma _1z}A+\int _{\gamma _1^{-1}\gamma _2\gamma _1z}^{\gamma _2^{-1}\gamma _1^{-1}\gamma _2\gamma _1z}A+\mathrm{\cdots }\right)\right]\underset{k=1}{\overset{g}{\prod }}\left(T_{2k-1}T_{2k}T_{2k-1}^{-1}T_{2k}^{-1}\right)$$ $$=e^{ib\oint _{\partial \mathcal{F}_z}A},$$ (3.19) where $`\mathcal{F}_z=\{z,\gamma _1z,\gamma _2\gamma _1z,\gamma _1^{-1}\gamma _2\gamma _1z,\mathrm{\dots }\}`$ is a fundamental domain for $`\mathrm{\Gamma }`$. The basepoint $`z`$, plus the action of the Fuchsian generators on it, determine $`\mathcal{F}_z`$, as the vertices are joined by geodesics.
### 3.4 Uniqueness of the gauge connection
For (3.19) to provide a projective unitary representation of $`\mathrm{\Gamma }`$, $`\int _{\mathcal{F}_z}dA`$ should be $`z`$–independent. Changing $`z`$ to $`z^{\prime }`$ can be expressed as $`z\rightarrow z^{\prime }=\mu z`$ for some $`\mu \in \mathrm{PSL}_2(R)`$. Then $`\mathcal{F}_z\rightarrow \mathcal{F}_{\mu z}=\{\mu z,\gamma _1\mu z,\gamma _2\gamma _1\mu z,\gamma _1^{-1}\gamma _2\gamma _1\mu z,\mathrm{\dots }\}`$. Now consider $`\mathcal{F}_z\rightarrow \mu \mathcal{F}_z=\{\mu z,\mu \gamma _1z,\mu \gamma _2\gamma _1z,\mu \gamma _1^{-1}\gamma _2\gamma _1z,\mathrm{\dots }\}`$. The congruence $`\mu \mathcal{F}_z\simeq \mathcal{F}_{\mu z}`$ follows from two facts: that the vertices are joined by geodesics, and that $`\mathrm{PSL}_2(R)`$ maps geodesics into geodesics. Since $`\mathrm{\Gamma }`$ is defined up to conjugation, $`\mathrm{\Gamma }\rightarrow \mu \mathrm{\Gamma }\mu ^{-1}`$, if $`\mu \mathcal{F}_z`$ is a fundamental domain, so is $`\mathcal{F}_{\mu z}`$. Thus, to have $`z`$–independence we need, for all $`\mu \in \mathrm{PSL}_2(R)`$, $$\int _{\mathcal{F}_z}dA=\int _{\mathcal{F}_{\mu z}}dA=\int _{\mu \mathcal{F}_z}dA=\int _{\mathcal{F}}dA.$$ (3.20) This fixes the (1,1)–form $`dA`$ to be $`\mathrm{PSL}_2(R)`$–invariant. It is well known that the Poincaré form is the unique $`\mathrm{PSL}_2(R)`$–invariant (1,1)–form, up to an overall constant factor. This is a particular case of a more general fact.
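The invariance is easy to check numerically: under $`w=\mu z=(az+b)/(cz+d)`$ with $`ad-bc=1`$ one has $`w^{\prime }(z)=(cz+d)^{-2}`$ and $`\mathrm{Im}\,w=\mathrm{Im}\,z/|cz+d|^2`$, so $`|w^{\prime }(z)|^2/(\mathrm{Im}\,w)^2=1/(\mathrm{Im}\,z)^2`$, which is precisely the statement that $`dx\,dy/y^2`$ is invariant. A small check (our illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# random PSL(2,R) element with ad - bc = 1
a, b, c = rng.normal(size=3)
d = (1.0 + b * c) / a

z = complex(rng.normal(), abs(rng.normal()) + 0.1)   # point in the upper half-plane
w = (a * z + b) / (c * z + d)

jac = abs(1.0 / (c * z + d) ** 2) ** 2               # |w'(z)|^2, Jacobian of (x, y)
# the Jacobian exactly compensates the change of the 1/y^2 density
assert np.isclose(jac / w.imag**2, 1.0 / z.imag**2)
```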
The Poincaré metric $`ds^2=y^{-2}|dz|^2=2g_{z\overline{z}}|dz|^2=e^\phi |dz|^2`$ has curvature $`R=-g^{z\overline{z}}\partial _z\partial _{\overline{z}}\mathrm{ln}g_{z\overline{z}}=-1`$, so that $`\int _{\mathcal{F}}d\nu \,e^\phi =-2\pi \chi (\mathrm{\Sigma })`$, where $`\chi (\mathrm{\Sigma })=2-2g`$ is the Euler characteristic. As the Poincaré (1,1)–form is $`dA=e^\phi d\nu `$, this uniquely determines the gauge field to be $$A=A_zdz+A_{\overline{z}}d\overline{z}=\frac{dx}{y},$$ (3.21) up to gauge transformations. Using $`\oint _{\partial \mathcal{F}}A=\int _{\mathcal{F}}dA`$ we finally have that (3.19) becomes $$\underset{k=1}{\overset{g}{\prod }}\left(𝒰_{2k-1}𝒰_{2k}𝒰_{2k-1}^{\dagger }𝒰_{2k}^{\dagger }\right)=e^{-2\pi ib\chi (\mathrm{\Sigma })}.$$ (3.22)
### 3.5 Non–Abelian extension
Up to now we considered the case in which the connection is Abelian. However, it is easy to extend our construction to the non–Abelian case in which the gauge group $`U(1)`$ is replaced by $`U(N)`$. The operators $`𝒰_k`$ now become $$𝒰_k=Pe^{ib\int _z^{\gamma _kz}A}T_k,$$ (3.23) where the $`T_k`$ are the same as before, times the $`N\times N`$ identity matrix. Eq.(3.19) is replaced by $$\underset{k=1}{\overset{g}{\prod }}\left(𝒰_{2k-1}𝒰_{2k}𝒰_{2k-1}^{\dagger }𝒰_{2k}^{\dagger }\right)=Pe^{ib\oint _{\partial \mathcal{F}_z}A}.$$ (3.24) Given an integral along a closed contour $`\sigma _z`$ with basepoint $`z`$, the path–ordered exponentials for a connection $`A`$ and its gauge transform $`A^U=U^{-1}AU+U^{-1}dU`$ are related by $$Pe^{i\oint _{\sigma _z}A}=U(z)Pe^{i\oint _{\sigma _z}A^U}U^{-1}(z)=U(z)Pe^{i\oint _{\sigma _z}d\sigma ^\mu \int _0^1ds\,s\sigma ^\nu U^{-1}(s\sigma )F_{\nu \mu }(s\sigma )U(s\sigma )}U^{-1}(z).$$ (3.25) Applying this to (3.24), we see that the only possibility to get a coordinate–independent phase is for the curvature (1,1)–form $`F=dA+[A,A]/2`$ to be the identity matrix in the gauge indices times a (1,1)–form $`\eta `$, that is $`F=\eta I`$. It follows that $$Pe^{ib\oint _{\partial \mathcal{F}}A}=e^{ib\int _{\mathcal{F}}F}.$$ (3.26) However, the above is only a necessary condition for coordinate–independence. Nevertheless, we can apply the same reasoning as in the Abelian case to see that $`\eta `$ should be proportional to the Poincaré (1,1)–form. Denoting by $`E`$ the vector bundle on which $`A`$ is defined, we have $`k=\mathrm{deg}(E)=\frac{1}{2\pi }\mathrm{tr}\int _{\mathcal{F}}F`$. Set $`\mu (E)=k/N`$ so that $`\int _{\mathcal{F}}F=2\pi \mu (E)I`$ and $`\eta =-\frac{\mu (E)}{\chi (\mathrm{\Sigma })}e^\phi d\nu `$, i.e. $$F=2\pi \mu (E)\omega I,$$ (3.27) where $`\omega =\left(e^\phi /\int _{\mathcal{F}}d\nu \,e^\phi \right)d\nu `$. Thus, by (3.26) we have that Eq.(3.24) becomes $$\underset{k=1}{\overset{g}{\prod }}\left(𝒰_{2k-1}𝒰_{2k}𝒰_{2k-1}^{\dagger }𝒰_{2k}^{\dagger }\right)=e^{2\pi ib\mu (E)}I,$$ (3.28) which provides a projective unitary representation of $`\pi _1(\mathrm{\Sigma })`$ on $`L^2(H,C^N)`$.
### 3.6 The gauge length
A basic object is the gauge length function $$d_A(z,w)=\int _z^wA,$$ (3.29) where the contour integral is along the Poincaré geodesic connecting $`z`$ and $`w`$. In the Abelian case $$d_A(z,w)=\int _{\mathrm{Re}z}^{\mathrm{Re}w}\frac{dx}{y}=-i\,\mathrm{ln}\left(\frac{z-\overline{w}}{w-\overline{z}}\right),$$ (3.30) which is equal to the angle $`\alpha _{zw}`$ spanned by the arc of geodesic connecting $`z`$ and $`w`$. Observe that the gauge length of the geodesic connecting two punctures, i.e. two points on the real line, is $`\pi `$. This is to be compared with the usual divergence of the Poincaré distance.
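Formula (3.30) is easy to test numerically by integrating $`dx/y`$ along the semicircular geodesic; the sketch below (an illustration of ours, not from the original text) confirms that the contour integral equals the closed form:

```python
import numpy as np

def gauge_length_integral(z, w, n=200000):
    """Integrate A = dx/y along the Poincare geodesic from z to w (Re z != Re w)."""
    c = (abs(z)**2 - abs(w)**2) / (2 * (z.real - w.real))  # center on the real axis
    R = abs(z - c)                                          # radius of the semicircle
    phi = np.linspace(np.angle(z - c), np.angle(w - c), n)
    pts = c + R * np.exp(1j * phi)
    return np.trapz(1.0 / pts.imag, pts.real)               # integral of dx / y

def gauge_length_closed(z, w):
    """Closed form of eq. (3.30): -i ln[(z - conj(w)) / (w - conj(z))]."""
    return (-1j * np.log((z - np.conj(w)) / (w - np.conj(z)))).real

z, w = 0.3 + 1.1j, 2.0 + 0.4j
print(gauge_length_integral(z, w), gauge_length_closed(z, w))  # the two agree
```

The result is simply the difference of the angular coordinates of $`z`$ and $`w`$ on the semicircle, which is why the gauge length of a geodesic between two real points is $`\pi `$.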
Under a $`\mathrm{PSL}_2(R)`$–transformation $`\mu `$, we have ($`\mu _x\equiv \partial _x\mu x`$) $$d_A(\mu z,\mu w)=d_A(z,w)-\frac{i}{2}\mathrm{ln}\left(\frac{\mu _z\overline{\mu }_w}{\overline{\mu }_z\mu _w}\right).$$ (3.31) Therefore, the gauge length of an $`n`$–gon $$d_A^{(n)}(\{z_k\})=\underset{k=1}{\overset{n}{\sum }}d_A(z_k,z_{k+1})=\pi (n-2)-\underset{k=1}{\overset{n}{\sum }}\alpha _k,$$ (3.32) where $`z_{n+1}\equiv z_1`$, $`n\ge 3`$, and $`\alpha _k`$ are the internal angles, is $`\mathrm{PSL}_2(R)`$–invariant. One can check that the $`\mathrm{PSL}_2(R)`$–transformation (3.31) corresponds to a gauge transformation of $`A`$. Furthermore, as we will see, the triangle length, which by Stokes’ theorem corresponds to the Poincaré area, is proportional to the Hochschild 2–cocycle.
### 3.7 Pre–automorphic forms
A related reason for the relevance of the gauge length function is that it also appears in the definition of the $`F_k`$. The latter, which apparently never appeared in the literature before, are of particular interest. Let us recast (3.13) as $$F_k(\gamma _kz,\gamma _k\overline{z})=\left(\frac{\gamma _kz-\overline{z}}{z-\gamma _k\overline{z}}\right)^bF_k(z,\overline{z}).$$ (3.33) Since $`(\gamma _kz-\overline{z})/(z-\gamma _k\overline{z})`$ transforms as an automorphic form under $`\mathrm{\Gamma }`$, we call the $`F_k`$ pre–automorphic forms. Eq.(3.14) indicates that finding the most general solution to (3.33) is a problem in geodesic analysis. In the case of the inversion $`\gamma _kz=-1/z`$ and $`b`$ an even integer, a solution to (3.33) is $`F_k=\left(z/\overline{z}\right)^{\frac{b}{2}}`$. By (3.30) $`F_k=\left(z/\overline{z}\right)^{\frac{b}{2}}`$ is related to the $`A`$–length of the geodesic connecting $`z`$ and $`0`$: $$e^{\frac{i}{2}b\int _z^0A}=F_k(z,\overline{z})=\left(\frac{z}{\overline{z}}\right)^{\frac{b}{2}}.$$ (3.34) An interesting formal solution to (3.33) is $$F_k(z,\overline{z})=\underset{j=0}{\overset{\mathrm{\infty }}{\prod }}\left(\frac{\gamma _k^jz-\gamma _k^{j-1}\overline{z}}{\gamma _k^{j-1}z-\gamma _k^j\overline{z}}\right)^b.$$ (3.35) To construct other solutions, we consider the uniformizing map $`J_H:H\rightarrow \mathrm{\Sigma }`$, which enjoys the property $`J_H(\gamma z)=J_H(z)`$, for all $`\gamma \in \mathrm{\Gamma }`$. Then, if $`F_k`$ satisfies (3.33), this equation is invariant under $`F_k\rightarrow G(J_H,\overline{J}_H)F_k`$. Since $`|F_k|=1`$, we should require $`|G|=1`$, otherwise $`G`$ is arbitrary.
## 4 Hochschild cohomology of $`\mathrm{\Gamma }`$
The Fuchsian generators $`\gamma _k\in \mathrm{\Gamma }`$ are projectively represented by means of unitary operators $`𝒰_k`$ acting on $`L^2(H)`$. The product $`\gamma _k\gamma _j`$ is represented by $`𝒰_{jk}`$, which equals $`𝒰_j𝒰_k`$ up to a phase (note that the differential representation of $`\mathrm{PSL}_2(R)`$ acts in reverse order with respect to the one by matrices): $$𝒰_j𝒰_k=e^{2\pi i\theta (j,k)}𝒰_{jk}.$$ (4.1) Associativity implies $$\theta (j,k)+\theta (jk,l)=\theta (j,kl)+\theta (k,l).$$ (4.2) We can easily determine $`\theta (j,k)`$: $$𝒰_j𝒰_k=\mathrm{exp}\left(ib\int _z^{\gamma _jz}A+ib\int _{\gamma _jz}^{\gamma _k\gamma _jz}A-ib\int _z^{\gamma _k\gamma _jz}A\right)𝒰_{jk}=\mathrm{exp}\left(ib\oint _{\partial \tau _{jk}}A\right)𝒰_{jk},$$ (4.3) where $`\tau _{jk}`$ denotes the geodesic triangle with vertices $`z`$, $`\gamma _jz`$ and $`\gamma _k\gamma _jz`$. This identifies $`\theta (j,k)`$ as the gauge length of the perimeter of the geodesic triangle $`\tau _{jk}`$. By Stokes’ theorem this is the Poincaré area of the triangle.
A similar phase, introduced independently of any gauge connection, has been considered in the context of Berezin’s quantization of $`H`$ and Von Neumann algebras. The information on the compactification of M(atrix) theory is encoded in the action of $`\mathrm{\Gamma }`$ on $`H`$, plus a projective representation of $`\mathrm{\Gamma }`$. The latter amounts to the choice of a phase. Physically inequivalent choices of $`\theta (j,k)`$ turn out to be in one–to–one correspondence with elements in the 2nd Hochschild cohomology group $`H^2(\mathrm{\Gamma },U(1))`$ of $`\mathrm{\Gamma }`$. This cohomology group is defined as follows. A $`k`$–cochain is an angular–valued function $`f(\gamma _1,\mathrm{\dots },\gamma _k)`$ with $`k`$ arguments in $`\mathrm{\Gamma }`$. The coboundary operator $`\delta `$ maps the $`k`$–cochain $`f`$ into the $`(k+1)`$–cochain $`\delta f`$ defined as $$(\delta f)(\gamma _0,\mathrm{\dots },\gamma _k)=f(\gamma _1,\mathrm{\dots },\gamma _k)+\underset{l=1}{\overset{k}{\sum }}(-1)^lf(\gamma _0,\mathrm{\dots },\gamma _{l-1}\gamma _l,\mathrm{\dots },\gamma _k)+(-1)^{k+1}f(\gamma _0,\mathrm{\dots },\gamma _{k-1}).$$ (4.4) Clearly $`\delta ^2=0`$. A $`k`$–cochain annihilated by $`\delta `$ is called a $`k`$–cocycle. $`H^k(\mathrm{\Gamma },U(1))`$ is the group of equivalence classes of $`k`$–cocycles modulo the coboundary of $`(k-1)`$–cochains. The associativity condition (4.2) is just $`\delta \theta =0`$. Thus $`\theta `$ is a 2–cocycle of the Hochschild cohomology. Projective representations of $`\mathrm{\Gamma }`$ are classified by $`H^2(\mathrm{\Gamma },U(1))=U(1)`$. Hence $`\theta =-b\chi (\mathrm{\Sigma })`$ is the unique parameter for this compactification ($`\theta =b\mu (E)`$ in the general case).
## 5 Stable bundles and double scaling limit
We now present some facts about projective, unitary representations of $`\mathrm{\Gamma }`$ and the theory of holomorphic vector bundles. Let $`E\rightarrow \mathrm{\Sigma }`$ be a holomorphic vector bundle over $`\mathrm{\Sigma }`$ of rank $`N`$ and degree $`k`$. The bundle $`E`$ is called stable if the inequality $`\mu (E^{\prime })<\mu (E)`$ holds for every proper holomorphic subbundle $`E^{\prime }\subset E`$. We may take $`-N<k\le 0`$. We will further assume that $`\mathrm{\Gamma }`$ contains a unique primitive elliptic element $`\gamma _0`$ of order $`N`$ ($`i.e.`$, $`\gamma _0^N=I`$), with fixed point $`z_0\in H`$ that projects to $`x_0\in \mathrm{\Sigma }`$. Given the branching order $`N`$ of $`\gamma _0`$, let $`\rho :\mathrm{\Gamma }\rightarrow U(N)`$ be an irreducible unitary representation. It is said to be admissible if $`\rho (\gamma _0)=e^{2\pi ik/N}I`$. Putting the elliptic element on the right–hand side, and setting $`\rho _k\equiv \rho (\gamma _k)`$, (1.2) becomes $$\underset{j=1}{\overset{g}{\prod }}\left(\rho _{2j-1}\rho _{2j}\rho _{2j-1}^{-1}\rho _{2j}^{-1}\right)=e^{2\pi ik/N}I.$$ (5.1) On the trivial bundle $`H\times C^N\rightarrow H`$ there is an action of $`\mathrm{\Gamma }`$: $`(z,v)\rightarrow (\gamma z,\rho (\gamma )v)`$. This defines the quotient bundle $$H\times C^N/\mathrm{\Gamma }\rightarrow H/\mathrm{\Gamma }=\mathrm{\Sigma }.$$ (5.2) Any admissible representation determines a holomorphic vector bundle $`E_\rho \rightarrow \mathrm{\Sigma }`$ of rank $`N`$ and degree $`k`$. When $`k=0`$, $`E_\rho `$ is simply the quotient bundle (5.2) of $`H\times C^N\rightarrow H`$. The Narasimhan–Seshadri (NS) theorem now states that a holomorphic vector bundle $`E`$ over $`\mathrm{\Sigma }`$ of rank $`N`$ and degree $`k`$ is stable if and only if it is isomorphic to a bundle $`E_\rho `$, where $`\rho `$ is an admissible representation of $`\mathrm{\Gamma }`$.
Moreover, the bundles $`E_{\rho _1}`$ and $`E_{\rho _2}`$ are isomorphic if and only if the representations $`\rho _1`$ and $`\rho _2`$ are equivalent. The standard Hermitean metric on $`C^N`$ gives a metric on $`H\times C^N\rightarrow H`$. This metric and the corresponding connection are invariant with respect to the action $`(z,v)\rightarrow (\gamma z,\rho (\gamma )v)`$, when $`\rho `$ is admissible. Hence they determine a (degenerate) metric $`g_{NS}`$ and a connection $`A_{NS}`$ on the bundle $`E=E_\rho `$. The connection $`A_{NS}`$ is compatible with the metric $`g_{NS}`$ and with the holomorphic structure on $`E`$, but it has a singularity at the branching point $`x_0\in \mathrm{\Sigma }`$ of the covering $`H\rightarrow \mathrm{\Sigma }`$. The curvature $`F_{NS}`$ of $`A_{NS}`$ is a $`(1,1)`$–current with values in the bundle $`\mathrm{End}E`$, characterized by the property (note that our convention for $`A`$ differs from the one in the mathematical literature by a factor $`i`$) $$\int _\mathrm{\Sigma }fF_{NS}=2\pi i\mu (E)\mathrm{tr}f(x_0),$$ (5.3) for every smooth section $`f`$ of the bundle $`\mathrm{End}E`$. The connection $`A_{NS}`$ is uniquely determined by the curvature condition (5.3) and by the fact that it corresponds to the degenerate metric $`g_{NS}`$. The connection $`A_{NS}`$ on the stable bundle $`E=E_\rho `$ is called the NS connection. A differential–geometric approach to stability has been given by Donaldson. Fix a Hermitean metric on $`\mathrm{\Sigma }`$, for example the Poincaré metric, normalized so that the area of $`\mathrm{\Sigma }`$ equals 1. Let us denote by $`\omega `$ its associated (1,1)–form. A holomorphic bundle $`E`$ is stable if and only if there exists on $`E`$ a metric connection $`A_D`$ with central curvature $`F_D=2\pi i\mu (E)\omega I`$; such a connection $`A_D`$ is unique. The unitary projective representations of $`\mathrm{\Gamma }`$ we constructed above have a uniquely defined gauge field whose curvature is proportional to the volume form on $`\mathrm{\Sigma }`$. With respect to the representation considered by NS, we note that NS introduced an elliptic point to produce the phase, while in our case the latter arises from the gauge length. Our construction is directly connected with Donaldson’s approach as $`F=-iF_D`$, where $`F`$ is the curvature (3.27). However, the main difference is that our operators are unitary differential operators on $`L^2(H,C^N)`$ instead of unitary matrices on $`C^N`$. This allowed us to obtain a non–trivial phase also in the Abelian case. It is however possible to understand the formal relation between our operators and those of NS. To see this we consider the adjoint representation of $`\mathrm{\Gamma }`$ on $`\mathrm{End}C^N`$, $$\mathrm{Ad}\rho (\gamma )Z=\rho (\gamma )Z\rho ^{-1}(\gamma ),$$ (5.4) where $`Z\in \mathrm{End}C^N`$ is understood as an $`N\times N`$ matrix. Let us also consider the trivial bundle $`H\times \mathrm{End}C^N\rightarrow H`$. There is an action of $`\mathrm{\Gamma }`$: $`(z,Z)\rightarrow (\gamma z,\mathrm{Ad}\rho (\gamma )Z)`$ that defines the quotient bundle $$H\times \mathrm{End}C^N/\mathrm{\Gamma }\rightarrow H/\mathrm{\Gamma }=\mathrm{\Sigma }.$$ (5.5) Then, the idea is to consider a vector bundle $`E^{\prime }`$ in the double scaling limit $`N^{\prime }\rightarrow \mathrm{\infty }`$, $`k^{\prime }\rightarrow \mathrm{\infty }`$, with $`\mu (E^{\prime })=k^{\prime }/N^{\prime }`$ fixed, that is $$\mu (E^{\prime })=b\mu (E).$$ (5.6) In this limit, fixing a basis in $`L^2(H,C^N)`$, the matrix elements of our operators can be identified with those of $`\rho (\gamma )`$.
## 6 Noncommutative Riemann surfaces
Let us now introduce two copies of the upper half–plane, one with coordinates $`z`$ and $`\overline{z}`$, the other with coordinates $`w`$ and $`\overline{w}`$. While the coordinates $`z`$ and $`\overline{z}`$ are reserved to the operators $`𝒰_k`$ we introduced previously, we reserve $`w`$ and $`\overline{w}`$ to construct a new set of operators. We now introduce noncommutative coordinates expressed in terms of the covariant derivatives $$W=\partial _w+iA_w,\qquad \overline{W}=\partial _{\overline{w}}+iA_{\overline{w}},$$ (6.1) with $`A_w=A_{\overline{w}}=1/(2\mathrm{Im}\,w)`$, so that $$[W,\overline{W}]=iF_{w\overline{w}},$$ (6.2) where $`F_{w\overline{w}}=i/[2(\mathrm{Im}\,w)^2]`$. Let us consider the following realization of the $`\mathrm{sl}_2(R)`$ algebra: $$\widehat{L}_1=w,\qquad \widehat{L}_0=\frac{1}{2}(w\partial _w+\partial _ww),\qquad \widehat{L}_{-1}=\partial _ww\partial _w.$$ (6.3) We then define the unitary operators $$\widehat{T}_k=e^{\lambda _1^{(k)}(\widehat{L}_1+\overline{\widehat{L}}_1)}e^{\lambda _0^{(k)}(\widehat{L}_0+\overline{\widehat{L}}_0)}e^{\lambda _{-1}^{(k)}(\widehat{L}_{-1}+\overline{\widehat{L}}_{-1})},$$ (6.4) where the $`\lambda _n^{(k)}`$ are as in (3.3). Set $`𝒱_k=\widehat{T}_k𝒰_k`$. Since the $`\widehat{T}_k`$ satisfy (3.4), it follows that the $`𝒱_k`$ satisfy (3.28) and $$𝒱_k\partial _w𝒱_k^{-1}=\widehat{T}_k\partial _w\widehat{T}_k^{-1}=\frac{a_k\partial _w+b_k}{c_k\partial _w+d_k}.$$ (6.5) Setting $`W=G\partial _wG^{-1}`$, i.e. $`G=(w-\overline{w})`$, and using $`Af(B)A^{-1}=f(ABA^{-1})`$, we see that $$𝒱_kW𝒱_k^{-1}=\widehat{T}_kW\widehat{T}_k^{-1}=G(\stackrel{~}{w})\widehat{T}_k\partial _w\widehat{T}_k^{-1}G^{-1}(\stackrel{~}{w}),$$ (6.6) where $$\stackrel{~}{w}=\widehat{T}_kw\widehat{T}_k^{-1}=e^{\lambda _0^{(k)}}w+2\lambda _{-1}^{(k)}(\widehat{L}_0-\lambda _1^{(k)}w)+\lambda _{-1}^{(k)2}e^{-\lambda _0^{(k)}}(\widehat{L}_{-1}-2\lambda _1^{(k)}\widehat{L}_0+\lambda _1^{(k)2}w),$$ (6.7) and by (6.5) $$𝒱_kW𝒱_k^{-1}=\widehat{T}_kW\widehat{T}_k^{-1}=\frac{a_k\stackrel{~}{W}+b_k}{c_k\stackrel{~}{W}+d_k},$$ (6.8) where $`\stackrel{~}{W}`$ differs from $`W`$ by the connection $$\stackrel{~}{W}=\partial _w+G(\stackrel{~}{w})[\partial _wG^{-1}(\stackrel{~}{w})].$$ (6.9)
### 6.1 $`C^{\ast }`$–algebra
By a natural generalization of the $`n`$–dimensional noncommutative torus, one defines a noncommutative Riemann surface $`\mathrm{\Sigma }_\theta `$ in $`g>1`$ to be an associative algebra with involution having unitary generators $`𝒰_k`$ obeying the relation (3.22). Such an algebra is a $`C^{\ast }`$–algebra, as it admits a faithful unitary representation on $`L^2(H,C^N)`$ whose image is norm–closed. Relation (3.22) is also satisfied by the $`𝒱_k`$. However, while the $`𝒰_k`$ act on the commuting coordinates $`z,\overline{z}`$, the $`𝒱_k`$ act on the operators $`W`$ and $`\overline{W}`$ of (6.1). The latter, factorized by the action of the $`𝒱_k`$ in (6.8), can be pictorially identified with a sort of noncommutative coordinates on $`\mathrm{\Sigma }_\theta `$. Each $`\gamma \ne I`$ in $`\mathrm{\Gamma }`$ can be uniquely expressed as a positive power of a primitive element $`p\in \mathrm{\Gamma }`$, primitive meaning that $`p`$ is not a positive power of any other $`p^{\prime }\in \mathrm{\Gamma }`$. Let $`𝒱_p`$ be the representative of $`p`$. Any $`𝒱`$ in the $`C^{\ast }`$–algebra can be written as $$𝒱=\underset{p\in \{\mathrm{prim}\}}{\sum }\underset{n=1}{\overset{\mathrm{\infty }}{\sum }}c_n^{(p)}𝒱_p^n+c_0I,$$ (6.10) for certain coefficients $`c_n^{(p)}`$, $`c_0`$. A trace can be defined as $`\mathrm{tr}𝒱=c_0`$. In the case of the torus one can connect the $`C^{\ast }`$–algebras of $`U(1)`$ and $`U(N)`$.
To see this one can use ’t Hooft’s clock and shift matrices $$V_1V_2=e^{2\pi i\frac{M}{N}}V_2V_1.$$ (6.11) The $`U(N)`$ $`C^{\ast }`$–algebra is constructed in terms of the $`V_k`$ and of the unitary operators representing the $`U(1)`$ $`C^{\ast }`$–algebra. Morita equivalence is an isomorphism between the two. In higher genus, the analog of the $`V_k`$ is the $`U(N)`$ representation $`\rho (\gamma )`$ considered above. One can obtain a $`U(N)`$ projective unitary differential representation of $`\mathrm{\Gamma }`$ by taking $`𝒱_k\rho (\gamma _k)`$, with $`𝒱_k`$ Abelian. This non–Abelian representation should be compared with the one obtained by the non–Abelian $`𝒱_k`$ constructed above. In this framework it should be possible to understand a possible higher–genus analog of the Morita equivalence. The isomorphism of the $`C^{\ast }`$–algebras is a direct consequence of an underlying equivalence between the $`U(1)`$ and $`U(N)`$ connection. The $`z`$–independence of the phase requires $`F`$ to be the identity matrix in the gauge indices. This in turn is deeply related to the uniqueness of the connection we found. The latter is related to the uniqueness of the NS connection. We conclude that Morita equivalence in higher genus is intimately related to the NS theorem. Finally let us observe that, as our operators correspond to the $`N\rightarrow \mathrm{\infty }`$ limit of projective unitary representations of $`\mathrm{\Gamma }`$, they play a role in the $`N\rightarrow \mathrm{\infty }`$ limit of QCD as considered in the literature. Acknowledgments. It is a pleasure to thank D. Bellisai, D. Bigatti, M. Bochicchio, U. Bruzzo, R. Casalbuoni, G. Fiore, L. Griguolo, P.M. Ho, S. Kobayashi, I. Kra, G. Landi, K. Lechner, F. Lizzi, P.A. Marchetti, B. Maskit, F. Rădulescu, D. Sorokin, W. Taylor, M. Tonin and R. Zucchini for comments and interesting discussions. G.B. is supported in part by a D.O.E. cooperative agreement DE-FC02-94ER40818 and by an INFN “Bruno Rossi” Fellowship. J.M.I. is supported by an INFN fellowship. J.M.I., M.M. and P.P. are partially supported by the European Commission TMR program ERBFMRX-CT96-0045.
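For concreteness, the clock and shift matrices in (6.11) can be realized as a diagonal “clock” matrix and a cyclic “shift” matrix; the following check (our illustration) verifies the commutation phase:

```python
import numpy as np

def clock_shift(N, M=1):
    """'t Hooft matrices obeying V1 V2 = exp(2*pi*i*M/N) V2 V1, cf. eq. (6.11)."""
    omega = np.exp(2j * np.pi * M / N)
    V1 = np.diag(omega ** np.arange(N))      # clock: diag(1, w, w^2, ...)
    V2 = np.roll(np.eye(N), 1, axis=0)       # shift: cyclic permutation matrix
    return V1, V2

N, M = 5, 2
V1, V2 = clock_shift(N, M)
assert np.allclose(V1 @ V2, np.exp(2j * np.pi * M / N) * (V2 @ V1))
```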
## 1 Problem
A vacuum photodiode is constructed in the form of a parallel plate capacitor with plate separation $`d`$. A battery maintains constant potential $`V`$ between the plates. A short laser pulse illuminates the cathode at time $`t=0`$ with energy sufficient to liberate all of the surface charge density. This charge moves across the capacitor gap as a sheet until it is collected at the anode at time $`T`$. Then another laser pulse strikes the cathode, and the cycle repeats. Estimate the average current density $`j`$ that flows onto the anode from the battery, ignoring the recharging of the cathode as the charge sheet moves away. Then calculate the current density and its time average when this effect is included. Compare with Child’s Law for steady current flow. You may suppose that the laser photon energy is equal to the work function of the cathode, so the electrons leave the cathode with zero velocity.
## 2 Solution
The initial electric field in the capacitor is $`𝐄=V/d\,\widehat{𝐱}`$, where the $`x`$ axis points from the cathode at $`x=0`$ to the anode. The initial surface charge density on the cathode is (in Gaussian units) $$\sigma =E/4\pi =V/4\pi d.$$ (1) The laser liberates this charge density at $`t=0`$. The average current density that flows onto the anode from the battery is $$j=\frac{\sigma }{T}=\frac{V}{4\pi dT},$$ (2) where $`T`$ is the transit time of the charge across the gap $`d`$. We first estimate $`T`$ by ignoring the effect of the recharging of the cathode as the charge sheet moves away from it. In this approximation, the field on the charge sheet is always $`E=V/d`$, so the acceleration of an electron is $`a=eE/m=eV/dm`$, where $`e`$ and $`m`$ are the magnitudes of the charge and mass of the electron, respectively. The time to travel distance $`d`$ is $`T=\sqrt{2d/a}=\sqrt{2d^2m/eV}`$. Hence, $$j=\frac{V^{3/2}}{8\pi d^2}\sqrt{\frac{2e}{m}}.$$ (3) This is quite close to Child’s Law for a thermionic diode, $$j_{\mathrm{steady}}=\frac{V^{3/2}}{9\pi d^2}\sqrt{\frac{2e}{m}}.$$ (4) We now make a detailed calculation, including the effect of the recharging of the cathode, which will reduce the average current density somewhat. At some time $`t`$, the charge sheet is at distance $`x(t)`$ from the cathode, and the anode and cathode have charge densities $`\sigma _A`$ and $`\sigma _C`$, respectively. All the field lines that leave the anode terminate on either the charge sheet or on the cathode, so $$\sigma +\sigma _C=\sigma _A,$$ (5) where $`\sigma _A`$ and $`\sigma _C`$ are the charge densities on the anode and cathode, respectively. The electric field strength in the region I between the anode and the charge sheet is $$E_I=4\pi \sigma _A,$$ (6) and that in region II between the charge sheet and the cathode is $$E_{II}=4\pi \sigma _C.$$ (7) The voltage between the capacitor plates is therefore $$V=E_I(d-x)+E_{II}x=4\pi \sigma _Ad-V\frac{x}{d},$$ (8) using (1) and (5-7), and taking the cathode to be at ground potential. Thus, $$\sigma _A=\frac{V}{4\pi d}\left(1+\frac{x}{d}\right),\qquad \sigma _C=\frac{Vx}{4\pi d^2},$$ (9) and the current density flowing onto the anode is $$j(t)=\dot{\sigma }_A=\frac{V\dot{x}}{4\pi d^2}.$$ (10) This differs from the average current density (2) in that $`\dot{x}/d\ne 1/T`$ in general, since $`\dot{x}`$ varies with time.
To find the velocity $`\dot{x}`$ of the charge sheet, we consider the force on it, which is due to the field set up by charge densities on the anode and cathode, $$E_{\mathrm{on}\sigma }=2\pi (\sigma _A+\sigma _C)=\frac{V}{2d}\left(1+\frac{2x}{d}\right).$$ (11) The equation of motion of an electron in the charge sheet is $$m\ddot{x}=eE_{\mathrm{on}\sigma }=\frac{eV}{2d}\left(1+\frac{2x}{d}\right),$$ (12) or $$\ddot{x}-\frac{eV}{md^2}x=\frac{eV}{2md}.$$ (13) With the initial conditions that the electrons start from rest, $`x(0)=0=\dot{x}(0)`$, we readily find that $$x(t)=\frac{d}{2}(\mathrm{cosh}\,kt-1),$$ (14) where $$k=\sqrt{\frac{eV}{md^2}}.$$ (15) The charge sheet reaches the anode at time $$T=\frac{1}{k}\mathrm{cosh}^{-1}3.$$ (16) The average current density is, using (2) and (16), $$j=\frac{V}{4\pi dT}=\frac{V^{3/2}}{4\pi \,\mathrm{cosh}^{-1}(3)\,d^2}\sqrt{\frac{e}{m}}=\frac{V^{3/2}}{9.97\pi d^2}\sqrt{\frac{2e}{m}}.$$ (17) The electron velocity is $$\dot{x}=\frac{dk}{2}\mathrm{sinh}\,kt,$$ (18) so the time dependence of the current density (10) is $$j(t)=\frac{1}{8\pi }\frac{V^{3/2}}{d^2}\sqrt{\frac{e}{m}}\,\mathrm{sinh}\,kt\qquad (0<t<T).$$ (19) A device that incorporates a laser-driven photocathode is the laser-triggered rf gun.
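The solution can be verified by integrating the equation of motion (13) numerically; the sketch below (our own check, in units where $`e=V=m=d=1`$ so that $`k=1`$) reproduces eq. (14) and confirms that the sheet reaches the anode at $`T=(1/k)\mathrm{cosh}^{-1}3`$:

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0  # k = sqrt(eV / (m d^2)) = 1 in these units

def rhs(t, y):
    x, v = y
    return [v, 0.5 + x]   # eq. (13) with e = V = m = d = 1: xddot = 1/2 + x

T = np.arccosh(3.0) / k   # eq. (16)
sol = solve_ivp(rhs, (0.0, T), [0.0, 0.0], dense_output=True, rtol=1e-10)

x_num = sol.sol(T)[0]
x_ana = 0.5 * (np.cosh(k * T) - 1.0)   # eq. (14) with d = 1
print(x_num, x_ana)   # both equal 1: the sheet arrives at the anode, x = d
```

The same integration also gives $`\dot{x}(T)`$, from which the instantaneous current (19) at collection can be compared with the average (17).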
# A limit for the mass transfer rate in soft X-ray transients
## 1 Introduction
Transient X-ray binaries containing a black hole form two different classes of objects, the high-mass and the low-mass binaries. The high-mass systems have an O or B star companion and the observations indicate mass transfer at a high rate onto the black hole primary. The low-mass systems, known as soft X-ray transients (SXT) or X-ray novae, have a K or M dwarf companion. The Roche-lobe filling low-mass star transfers matter via an accretion disk onto the compact star. These binaries exhibit outbursts, usually detected in X-rays. The transient sources are interesting objects for studying the accretion disk. Already a decade ago Huang & Wheeler (1989) and Mineshige & Wheeler (1989) argued that the rare outbursts are caused by a disk instability as in dwarf nova systems where the primary star is a white dwarf. Recently detailed modelling of the decline of the outburst lightcurve was done; for a comprehensive study and an overview see Cannizzo (1998, 1999). Observational data for X-ray novae and system parameters deduced from the observations are summarized in reviews by Tanaka & Shibazaki (1996) and Tanaka (1999). Chen et al. (1997) carried out a statistical study of all long-term X-ray and optical lightcurves. Asai et al. (1998) investigated nine black hole X-ray transients in the quiescent state. If we compare SXTs with dwarf novae (Kuulkers 1999) we especially notice the long outburst recurrence times. The primary stars are black holes or neutron stars, and the orbital periods are longer. Only a few dwarf novae with very short orbital periods have such long recurrence times, connected with a high outburst amplitude. These systems are known as “tremendous outburst amplitude dwarf novae”, TOADs (Howell et al. 1995). For the best observed system, WZ Sagittae, with outbursts every 31 years, the amount of matter accumulated in the disk for the outburst is about 1–2 $`\times 10^{24}`$ g (Smak 1993), a factor of only about 3 smaller than that estimated for the transient source A0620-00 (McClintock et al. 1983). But there is one important difference between WZ Sge stars and SXTs: the size of the disk in WZ Sge, with an orbital period of only 81.6 minutes, is much smaller than the disks in SXTs with typical periods of several days. The viscosity parameter in the disk in WZ Sge therefore has to be a factor of 10 to 100 lower if the same amount of matter is accommodated in the much smaller area (Meyer-Hofmeister et al. 1998). We want to point out another feature, different in dwarf novae and SXTs, but essential for the state of the disk. In transient sources and dwarf novae in quiescence the outer accretion disk is cool and matter is accumulated for the outburst. In dwarf novae such a disk is cool everywhere from the outer to the inner edge. In black hole transient sources, the disk can reach inward to the vicinity of the black hole. At such a close distance a disk is hot even for very low mass flow rates. Since hot and cool disk regions cannot remain stationary side by side (Meyer 1981), transition fronts will sweep back and forth over the disk leading to a rapid sequence of hot and cool states preventing any long-term quiescent accumulation of mass in the disk. To circumvent this problem one either has to assume an extremely low viscosity so that matter cannot flow towards the inner disk (in contradiction to the amount of matter accumulated for an SXT outburst) or a hole in the inner thin disk due to evaporation.
Our computation of disk evolution consistently includes evaporation into a coronal flow and the co-existence of thin disk and corona. Since in quiescence about half of the matter flows through the corona, evaporation is an essential feature in the evolution of SXTs. The spectra of quiescent SXTs are not consistent with an accretion disk model of a thin disk reaching inward to the black hole, as pointed out by McClintock et al. (1995) and Narayan et al. (1996), but the problem can be resolved by accretion via an ADAF which is the inward continuation of the coronal evaporation flow (for a review see Narayan et al. 1999). The observed outburst recurrence time of transient sources ranges from around one year to several decades; for many systems only one outburst is recorded. We show in our investigation that the outbursts may be triggered only marginally and that the recurrence time can then vary very significantly for a small difference in mass transfer rates. For slightly lower mass overflow rates from the companion star the systems remain in a stationary state with a cool disk. The question whether such faint non-transient black hole binaries exist was also approached, in a different way, in connection with the physics of an advection-dominated accretion flow (ADAF) by Menou et al. (1999b). The answer depends on the expected mass overflow rates in these binaries. But the predictions of the rates caused by magnetic braking are so uncertain that one can better draw conclusions from the outburst behavior of SXTs on the efficiency of magnetic braking than vice versa. In Table 1 we summarize properties of transient sources. Listed are binaries for which the observations document that the compact star is a black hole. In addition to these systems there exist a number of transients which are probably also black hole binaries, but for those as well more than one outburst is known only in a few cases. In our investigation we discuss the following points. We describe the computational code for evolution of the disk in quiescence including evaporation of the inner disk in Sect. 2. In Sect. 3 we show the results: the outburst recurrence time depends strongly on the black hole mass, the amount of matter accumulated in the disk during quiescence, and the fraction of mass accumulated to mass transferred from the companion star. We determine the lower limit for the overflow rate to trigger a disk instability. In Sect. 4 we discuss the regime of faint non-transient black hole low mass X-ray binaries. Conclusions follow in Sect. 5.
## 2 Evolution of the disk in quiescence, interaction of cool disk and hot corona
The evolution of accretion disks in binaries is governed by the frictional diffusion of angular momentum. Conservation of mass and angular momentum gives the diffusion equation. The matter transferred from the companion star accretes via the disk onto the primary star. At the same time angular momentum is transported outward in the disk. We consider a geometrically thin disk with a corona above the inner part of the thin disk. The corona originates from evaporation of matter from the cool thin disk in a “siphon-flow” process (Meyer & Meyer-Hofmeister 1994). Conservation of mass and angular momentum has to be considered in the thin disk and the corona together. To compute the evolution of disk plus corona we solve the diffusion equation for the change of surface density in the cool disk with an additional term for the mass and angular momentum exchange with the corona above.
The boundary conditions at the inner and outer edge of the disk are the following. The outer disk radius cannot grow above a cut-off radius where either tidally induced shocks or the eccentric disk instability (3:1 resonance between Kepler binary period and Kepler rotation in the disk (Whitehurst 1988, Lubow 1991)) transfer any surplus of angular momentum outflow in the disk to the orbit. For low mass ratios $`M_1/M_2`$, as in the binaries with a black hole primary and a low mass star secondary, the 3:1 resonance radius lies inside the tidal truncation radius and determines the cut-off. If the disk radius lies inside this cut-off, it constitutes a free boundary whose growth or decline is determined by the balance between the mass and angular momentum flows in the impinging stream and in the disk. The inner edge of the thin disk is reached where the coronal flow via evaporation has picked up all mass flow brought in by the thin disk. Evaporation determines how the mass and angular momentum flows in the disk tend to zero at this boundary; this yields the inner boundary condition for the thin disk. Farther in only the coronal flow exists. The diffusion equation for the cool disk with a corona above and the appropriate boundary conditions were first derived for a dwarf nova accretion disk around a white dwarf (Liu et al. 1997, Meyer-Hofmeister et al. 1998), then also used for disks around black holes (Meyer-Hofmeister & Meyer 1999a).

### 2.1 Computational method

We followed the evolution of the cool accretion disk with the hot corona above. We took primary star masses of 4 to 12 $`M_{\odot }`$, binary orbital periods from 4 to 16 hours and mass overflow rates of several $`10^{-10}M_{\odot }/\mathrm{yr}`$ to simulate the situation in SXTs. To solve the diffusion equation we used the viscosity-surface density relation from Ludwig et al. (1994) and for the viscosity parameter (Shakura & Sunyaev 1973) of the cool state we took the value $`\alpha _{\mathrm{cool}}`$=0.05. It is interesting that a value usually taken for dwarf nova instability modelling also allows successful modelling of the SXT outburst intervals. As mentioned above, for the low mass ratio SXTs the disk size is limited by the 3:1 resonance radius. This radius does not depend on the secondary mass; it is determined only by the assumed primary mass and period. For the initial size of the disk we assume 90% of this 3:1 resonance radius. In our calculations, the initial disk quickly expands to the limiting size and the evolution becomes independent of the value of the mass ratio. Only in the very early phase does the specific angular momentum of the matter transferred from the secondary have some effect on the evolution; for low mass ratios it depends only weakly on the mass ratio. For convenience we took a secondary mass according to a Roche-lobe filling main-sequence star. The secondaries could be evolved (as for example indicated for A0620-00 with its K5V companion), resulting in a smaller mass (King et al. 1996). The effect on our results is very small. We take evaporation into account using a scaling law for the rate $`\dot{M}_{\mathrm{ev}}`$

$$\dot{M}_{\mathrm{ev}}=10^{15.6}\left(\frac{r}{10^9\mathrm{cm}}\right)^{1.17}\left(\frac{M_1}{M_{\odot }}\right)^{2.34}\left[\mathrm{g\,s}^{-1}\right]$$ (1)

with $`M_1`$ the primary mass and $`r`$ the distance to the primary (Liu et al. 1995). We assume that immediately after the decline from the long-lasting outburst the surface density is initially very low everywhere in the disk.
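For orientation, Eq. (1) is easy to evaluate numerically. The short sketch below is ours, for illustration only; it tabulates the scaling law for the primary masses used in our computations at a few representative (arbitrarily chosen) radii.

```python
def mdot_ev(r_cm, m1_msun):
    """Evaporation rate scaling law of Eq. (1), in g/s (Liu et al. 1995)."""
    return 10**15.6 * (r_cm / 1e9)**1.17 * m1_msun**2.34

for m1 in (4.0, 8.0, 12.0):          # primary masses considered in this paper
    for r in (1e9, 1e10, 1e11):      # illustrative distances from the primary, cm
        print(f"M1 = {m1:4.1f} Msun, r = {r:.0e} cm: Mdot_ev = {mdot_ev(r, m1):.2e} g/s")
```

The steep $`M_1^{2.34}`$ dependence is what later translates into the strong black hole mass dependence of the recurrence time.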
For short recurrence times this might not be a good approximation. But we include these latter systems only to show the trend for higher mass overflow rates.

## 3 Results of computations and comparison with observations

### 3.1 General features of disk evolution

During the early evolution after an outburst the surface density distribution in the cool disk is low and the thin disk is in the cool state. Matter flows continuously over from the companion star. A part of this matter is accumulated in the cool disk; a part flows through the thin disk inward and is evaporated in the disk region near the inner edge where evaporation is most efficient. During the formation of the coronal flow wind loss occurs, so that only about 80% of the matter evaporated out of the cool disk yields the mass supply for the advection-dominated accretion flow (ADAF) towards the black hole. The evolution of the thin outer disk proceeds until the surface density reaches the critical value beyond which no cool state of the disk is possible anymore. The limiting surface density depends on the values chosen for the disk viscosity parametrization. An example is the modelling of the SXT A0620-00 (Meyer-Hofmeister & Meyer 1999b); the accumulation of matter in the disk is shown in Fig. 1.

### 3.2 Recurrence time of outbursts

In Fig. 2 we show the outburst recurrence time found for the evolution of disks around black holes of 4, 8 and 12 $`M_{\odot }`$. The interesting result is the effect of the black hole mass. This can be understood in the following way. The higher evaporation efficiency for higher primary mass leads to the formation of a more extended inner disk hole. If the hole is more extended, matter is accumulated farther out, in geometrically larger disk regions. Reaching the critical surface density for the disk instability there then definitely requires the accumulation of more matter. This means higher mass overflow rates are necessary to trigger an outburst within the same recurrence time. This feature already became clear in the modelling of the accretion disk in A0620-00 assuming primary masses of 4 or 6 $`M_{\odot }`$ (Meyer-Hofmeister & Meyer 1999b, Fig. 5). If the outbursts are triggered only marginally, an analytical calculation of the recurrence time (Menou et al. 1999b) is no longer possible. We study the disk evolution for different orbital periods. A longer period means that the disk is larger. If the mass transfer rate is close to the value for which outbursts are triggered only marginally, most of the matter flows through the cool disk and the size of the disk does not influence the disk evolution anymore. Our computations establish a lower limit for the mass overflow rate in order that a dwarf nova type instability in the accretion disk can occur, i.e. that the system is a transient source. The question arises how many systems might have mass transfer rates below this limit.

### 3.3 Accumulated matter in the disk

Our computations of disk evolution give both the amount of matter accumulated in the disk until the next outburst is triggered and also how much matter was accreted onto the black hole during this time. Accumulation of matter in the disk means that everywhere in the disk between inner and outer edge the surface density rises continuously during quiescence (see for example Fig. 3 in Meyer-Hofmeister & Meyer 1999b). Our computations yield the following. For given black hole mass and orbital period the accumulated amount of matter $`M_d`$ depends only slightly on the mass transfer rate.
For a low mass transfer rate $`\dot{M}_T`$, i.e. an only marginally unstable disk, most matter flows through the disk (for a stable disk all matter flows through). The total amount of accumulated mass is low. A somewhat higher mass transfer rate leads to a higher value $`M_d`$. For relatively high transfer rates the outburst then occurs earlier and again less mass is accumulated. This means $`M_d`$ as a function of $`\dot{M}_T`$ has a maximum for a certain transfer rate; but for all cases considered here (recurrence time $`\gtrsim `$ 10 years) the values lie in a narrow range. In Fig. 3 we show this range of $`M_d`$. As discussed in connection with the outburst recurrence time, the amount of matter accumulated increases with the primary mass due to the more extended inner hole and the accumulation in outer, larger disk regions. In Fig. 3 we show our results for two primary masses and for different orbital periods of the binaries. The amount of matter is higher for longer orbital periods because of the larger disks. It is remarkable that for a given black hole mass and orbital period, the total amount of matter in the disk varies only within the narrow range shown in Fig. 3, even for a wide variation of the recurrence time. The theoretically determined amount of matter $`M_d`$ can be compared with observations. In Table 1 we list the values derived by Chen et al. (1997), in general accurate to about a factor of 2-3. Except for the two systems J0422+32 and J1655-40 our theoretically derived values and those from observations agree. For J0422+32 the estimated black hole mass is relatively low, which should be related to a lower value $`M_d`$ as shown in Fig. 3. As already pointed out by Menou et al. (1999b), for J1655-40 several complications appear in the determination of the value $`M_d`$. Note that the highest value was found for the particularly extended disk of GS2023+338 with the long orbital period of 155 hours. The amount of matter in the disk is connected with the viscosity parameter for the cool disk. The agreement with observations confirms that the chosen value 0.05 is adequate and documents the similarity with dwarf nova accretion disks.

### 3.4 The proportion of mass accumulated in quiescence

An interesting result is also what fraction of the matter transferred from the companion star is actually accumulated in the disk. For an orbital period of 8 hours we show in Fig. 4 the fraction $`\dot{M}_{\mathrm{acc}}`$/$`\dot{M}_T`$ as a function of the recurrence time. Here $`\dot{M}_{\mathrm{acc}}`$ is the average value over the total quiescence (recurrence time). As can be seen from Fig. 3, the amount of accumulated matter is always about the same no matter how long the recurrence time is. Therefore the fraction $`\dot{M}_{\mathrm{acc}}`$/$`\dot{M}_T`$ decreases when the outbursts occur more rarely. If the transfer rate is so low that outbursts are only marginally triggered, almost all matter flows through the disk and the fraction approaches zero. The fraction is almost the same for 4 and 8 $`M_{\odot }`$. This arises from two facts which compensate: (1) about three times more matter is accumulated for the more massive black hole (see Fig. 3), (2) the rates necessary to trigger an outburst after a given time interval are larger by about the same factor (see Fig. 2). For typical black hole masses and a recurrence time of about 50 years our computations yield values of 35 to 40%. This means 60 to 65% of the matter changes to the coronal flow towards the black hole.
About 1/5 of this flow is removed by wind loss from the corona. Menou et al. (1999b) studied the total mass flow rate in the disks of SXTs. They took the mass flow rate towards the black hole in quiescence from fits of the observed spectra based on the ADAF model, and the rate of average accumulated matter from the total outburst energy for each system. They came to the conclusion that the amount which flows through the disk is comparable to or larger than the accumulated amount, in agreement with our results of disk evolution modelling.

## 4 Stationary systems

As shown in Fig. 2, for each assumed primary mass there exists a limiting mass transfer rate for which the recurrence time approaches infinity. For rates below this limit no outburst can be triggered; the accretion disk is stationary. All matter transferred from the companion star flows through the disk, changes to a coronal flow (except for the wind loss) and forms an ADAF. Such systems are very faint, only recognizable from their radiation in X-rays from the innermost region around the black hole primary. How many systems of this kind may exist? The answer depends on the mass transfer rates in these systems and therefore on the angular momentum loss expected for these systems during their secular evolution. King and collaborators (King et al. 1996, 1997) discussed the angular momentum losses caused by magnetic braking in black hole binaries and the question of whether the expected mass transfer rates let the systems appear as transient sources. Assuming unevolved main-sequence companion stars they obtained, based on magnetic braking rates of Verbunt & Zwaan (1981), mass transfer rates so high that one would instead expect to find binaries with disks in a permanent hot state. Using the form of Mestel & Spruit (1987) leads to somewhat smaller but still high rates. That means the disk is hot everywhere out to the outer edge. For this evaluation the location of the inner edge of the disk is unimportant. High mass X-ray black hole binaries such as Cyg X-1, LMC X-1 and LMC X-3 with O or B star companions have mass flow rates high enough to avoid the disk instability. Two very bright hard X-ray sources in the Galactic center region, the black hole candidates GRS 1758-258 and 1E1740.7-2942, with spectra similar to Cyg X-1 in the low state, were detected by SIGMA during most of the observations in 1990 to 1998 (Kuznetsov et al. 1999). But optical data indicate a companion star mass of $`\lesssim `$ 4 $`M_{\odot }`$ (Chen et al. 1994). If this is the case these systems would be low-mass black hole binaries in a persistent state. For rates slightly below the upper critical rate we expect outbursts with a short repetition time, hundreds of days, as observed for example about 460 days for GX 339-4, or 600 days for 1630-472 (Tanaka & Lewin 1995, van Paradijs 1995). But this situation is not described well with our modelling; it would be necessary to follow the complete outburst cycles. Trudolyubov et al. (1998) argued that the occurrence of four successive outbursts of GX 339-4 in 1990 to 1994 might be connected with an increase of the mass transfer from the companion star due to irradiation. Menou et al. (1999b) approached the question of whether a population of faint non-transient low mass black hole binaries exists. For this investigation the location of the inner edge is important, because the instability would be triggered there.
The radius of transition $`r_{\mathrm{tr}}`$ from the thin disk to a hot flow was estimated by combining three constraints: from the stream dynamics ($`r_{\mathrm{tr}}`$ $`\lesssim `$ the impact radius of the stream in the disk), from the observed $`H_\alpha `$ emission line width (which provides an upper limit to the speed of matter in the disk, and therefore to the inner edge position), and from the maximum radius at which an ADAF is possible. The total mass flow rate in the disk was deduced from the rate of accretion in the innermost disk (from spectral fitting of the ADAF) together with a rate of average accumulation (derived from the outburst energy). The analysis was performed for the systems with the best data, which we also used. The conclusion was that for the evaluated total mass flow rate the disks in black hole SXTs truncated at $`r_{\mathrm{tr}}`$ are unstable and will undergo the thermal-viscous instability. Our detailed computations including evaporation give the location of the inner disk edge at every time of the evolution, and take the actual mass flow rate in the inner disk into account, so that the question whether an outburst is triggered can be answered immediately for different black hole masses and different mass transfer rates from the companion star. Menou et al. (1999b) argued that magnetic braking could not be the cause for the mass transfer since the values would be too high. This is true for unevolved companion stars. But if the companion star is evolved, its mass may be only about half of that of a Roche-lobe filling main-sequence star (King et al. 1996) and the rates would be lower, by a factor of about 5. The lower rates would come down to about the values required for transient behaviour.

## 5 Conclusions

Our investigation gives new insight into the evolution of the disks in black hole X-ray transients. At the same time new questions also arise.

### 5.1 The occurrence of outbursts

We follow the disk evolution including evaporation into a coronal flow. Conclusions on stability are only possible if one considers these truncated disks where at a certain radius $`r_{\mathrm{tr}}`$ the thin accretion disk ends and the accretion changes to the form of a hot coronal flow. The outbursts are caused by the thermal-viscous instability as in dwarf novae, modelled with a viscosity value suitable for dwarf nova outbursts, which confirms the similarity. We found that the dependence of the evaporation process on the black hole mass essentially determines the outburst cycles. If the black hole mass is higher, a higher mass transfer rate is needed to trigger an outburst after a certain time interval of accumulation of matter. For example, to get a recurrence time of 30 years for 4 or 8 $`M_{\odot }`$ black holes, about 2.5 or 6.5 $`\times 10^{-10}M_{\odot }/\mathrm{yr}`$ respectively are needed (compare Fig. 2). The outburst after long quiescence can be understood as marginal triggering of the disk instability. In such a case a small difference in the mass transfer rate results in a large change of the recurrence time. The location of the inner edge of the thin disk is important for the outburst cycles. In our modelling $`r_{\mathrm{tr}}`$ follows from the evaporation model. The chosen viscosity parameter $`\alpha _{\mathrm{cool}}`$ also influences the result, but this is not a free parameter because the total amount of accumulated matter is constrained by the outburst energy. The systems listed in Table 1 are the best observed sources with the black hole mass established from observations.
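As a rough consistency check of these numbers (ours, not a computation from the paper), one can multiply the quoted marginal rates by the recurrence time and by an accumulated fraction taken from Sect. 3.4 (quoted there as 35 to 40% for recurrence times of about 50 years, and applied here to 30 years only for orientation):

```python
MSUN = 1.989e33  # grams per solar mass

cases = {4.0: 2.5e-10, 8.0: 6.5e-10}   # M1 [Msun] -> marginal Mdot_T [Msun/yr]
t_rec, f_acc = 30.0, 0.375             # years; assumed accumulated fraction

for m1, mdot in cases.items():
    m_d = f_acc * mdot * t_rec * MSUN  # mass accumulated over one quiescent interval
    print(f"M1 = {m1} Msun: accumulated M_d ~ {m_d:.1e} g")
```

The resulting few $`\times 10^{24}`$ to $`10^{25}`$ g match the order of magnitude of the disk masses discussed in Sect. 3.3.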
Assuming that no outburst was missed during the 30 years of X-ray observations (for a discussion see Chen et al. 1997), the recurrence times might be very long. In our view the instability is marginally triggered in these sources. For only slightly lower rates these systems would be stationary: all matter transferred from the companion star would steadily flow towards the black hole (wind loss excepted). Since several black hole binaries are so close to the marginal state, one expects a large number of similar systems in permanent quiescence. Such sources are very faint, with a spectrum like SXTs in quiescence. Menou et al. (1999b) discussed the observational signatures of such faint persistent black hole low-mass X-ray binaries.

### 5.2 Mass transfer rates

Our computations of disk evolution to model the observations confirm that mass transfer rates of $`10^{-10}`$ to $`10^{-9}M_{\odot }/\mathrm{yr}`$ cause the transient behaviour (with a strong dependence on the black hole mass). These rates agree with the estimates for the amount of accumulated matter and with the rates derived from the spectral fits based on the ADAF model (Narayan et al. 1996, 1997; for a review see Narayan et al. 1999). For the accretion in disks around black holes we get a consistent description with the thin outer disk and the change to a coronal flow due to evaporation.

### 5.3 The matter in the disk

The fact that the average rate of matter accumulation in the disk, derived from the observations (total outburst energy), and the mass flow rate towards the black hole (derived from ADAF spectral fitting) are just about the same seems surprising, as pointed out by Tanaka (1999). Menou et al. (1999b) estimated the relative importance of both rates and came to the conclusion that they are about equal (see also Menou et al. 1999a). Our computations naturally provide a value $`\dot{M}_{\mathrm{acc}}/\dot{M}_T\approx `$ 0.35-0.55 for outburst recurrence times of 30 to 50 years, characteristic for SXTs.

### 5.4 The cause of the mass transfer

Assuming an evolved companion star, the mass transfer rates caused by magnetic braking might be low enough to lead to outbursts. These rates of mass overflow from the secondary star according to the suggestions by Verbunt & Zwaan (1981) and Mestel & Spruit (1987) depend only weakly on the primary mass. The observed outbursts strongly point to the fact that the transfer rates are only marginally sufficient to trigger an outburst. Then one would conclude that the rates have a spread such that no outbursts or only rare outbursts occur. But such an interpretation is not possible if the limiting rate is so different for different black hole masses, assuming that the observed systems do not all have the same black hole mass (for a discussion see Bailyn et al. 1998). Only if the transfer rates depend somehow on the primary mass could these rates be such that marginally triggered rare outbursts occur for different black hole masses. We do not know of any mechanism which could produce such transfer rates.

###### Acknowledgements.
We thank Yasuo Tanaka for information on black hole transients and Marat Gilfanov for valuable discussions.
# Pseudo–𝜀 expansion of six–loop renormalization group functions of an anisotropic cubic model

## I Introduction

Progress in the qualitative understanding and the quantitative description of critical phenomena was achieved to a great extent through the ideas of renormalization group (RG) theory. Only global features of a many–body system such as the range of interparticle forces, the space dimensionality $`d`$, as well as the dimension $`N`$ and the symmetry of an order parameter were suggested to be responsible for the long–distance and abrupt behaviour of matter in the critical region. As a final step, the role of the relevant parameters in microscopic Hamiltonians of various nature was represented adequately by effective Hamiltonians used in field theories. While a vector field theory with an isotropic, rotationally symmetric order parameter already allows a unified and correct description of a large spectrum of critical phenomena, extensions of the theory are of special interest since in real substances anisotropies are always present. For instance, in cubic crystals one expects the spin interaction to react to the lattice structure (crystalline anisotropy), suggesting additional terms in the Hamiltonian, invariant under the cubic group. The anisotropy breaks the rotational symmetry of the Heisenberg–like ferromagnet and makes the order parameter point either along edges or along diagonals of a cube. The corresponding field theory is defined by a Landau-Ginzburg-Wilson (LGW) Hamiltonian with two $`\varphi ^4`$ terms of $`O(N)`$ and cubic symmetry and can exhibit a second order phase transition characterized either by spherical or cubic critical exponents. Upon varying the number of components of the order parameter $`N`$, a crossover phenomenon between these two scenarios takes place at the marginal value $`N_c`$. In the framework of RG theory the critical point corresponds to the stable fixed point of the RG transformation. In a model with competing fixed points, the study of the domains of their attraction as well as of the crossover phenomenon is a fundamental problem for understanding universality. Apart from the academic interest, the determination of $`N_c`$ can lead to decisive conclusions about the order of the phase transition in a certain class of cubic crystals. For instance, $`d=3`$ cubic crystals with three easy axes should undergo either a second-order or a weak first–order phase transition depending on whether $`N_c`$ is greater or less than $`3`$. This argumentation states that the existence of the stable fixed point of a field theory is a necessary but not a sufficient condition for a model to exhibit a second order phase transition. If the parameters of a microscopic Hamiltonian are mapped in the plane of the LGW Hamiltonian couplings to a point which lies outside the domain of attraction of the stable fixed point, the Hamiltonian will flow away to infinitely large values of the couplings. Such a behaviour might serve as evidence of a weak first order phase transition, and this is confirmed in some experiments (see Ref. and references therein). If $`N_c<3`$ for a $`d=3`$ ferromagnet, the new cubic fixed point governing the critical regime appears to be inaccessible from the initial values of the couplings which correspond to the ferromagnetic ordering with three easy axes. It appears that $`N_c`$ is very close to $`N=3`$ and the critical exponents in both regimes are indistinguishable experimentally. In order to calculate the value of $`N_c`$ within field theory one has to treat a complicated model with two couplings.
This is different from a Heisenberg-like $`N`$–component ferromagnet with weak quenched disorder, where the Harris criterion answers the question about the type of critical behaviour. The description of the crossover and the precise determination of its numerical characteristics has been a challenge for many RG studies of the anisotropic cubic model. High orders of perturbation theory were obtained for this model in successive approximations either in the $`\epsilon `$–expansion with dimensional regularization in the minimal subtraction (MS) scheme or within the massive $`d=3`$ scheme. The expressions are available now in the five–loop and in the six–loop approximations respectively. However, the divergent properties of the series did not allow their straightforward analysis and called for the application of various resummation procedures. For instance, $`N_c`$ calculated in the five–loop $`\epsilon `$–expansion yielded, depending on the resummation procedure: $`N_c=2.958`$, $`N_c=2.855`$ and $`N_c=2.87(5)`$. Alternative approaches on the basis of the $`\epsilon `$–expansion led to $`N_c=2.97(6)`$ and to $`N_c=2.950`$. On the other hand, the massive $`d=3`$ scheme RG functions, extended for arbitrary $`N`$ to four loops, yielded $`N_c=2.89(2)`$ (see for a recent extended review of the theoretical determination of $`N_c`$). These results suggest that the most reliable theoretical estimate is $`N_c<3`$. However, recent MC simulations questioned the values for $`N_c`$ obtained so far. There, considering the finite size corrections of a cubic invariant perturbation term at the critical $`O(N)`$–symmetric point, the eigenvalues $`\omega _i`$ of the stability matrix were extracted. From the estimate $`\omega _2=0.0007(29)`$ a value of $`N_c=3`$ was concluded. This disagreement, as well as the crucial influence of the value of $`N_c`$ on the order of the phase transition, means that the importance of an alternative method for the calculation of $`N_c`$ can hardly be overrated. Recently the massive $`d=3`$ scheme RG functions of the cubic model were extended to five–loop order and very recently the six–loop series were obtained. The traditional analysis of these series, including information on the large-order behaviour of the RG functions, yielded $`N_c=2.89(4)`$. However, let us note that the most accurate estimates of the critical exponents of a $`d=3`$ $`O(N)`$–symmetric $`\varphi ^4`$ model in a massive field theoretical RG scheme are based on a pseudo–$`\epsilon `$ expansion technique. To our knowledge this method has never been applied to the cubic model. Therefore the main aim of the present paper is to apply the pseudo–$`\epsilon `$ expansion to the most precise massive-scheme RG functions of the cubic model available to date. The set-up of the article is as follows. After a brief consideration of the model and renormalization procedure, we present the pseudo–$`\epsilon `$ expansion for $`N_c`$ and discuss its properties. Applying Padé and Padé–Borel analysis we obtain precise estimates of $`N_c`$ and compare the result with the corresponding $`\epsilon `$–expansion. Finally we evaluate the critical exponents of a $`d=3`$ cubic system belonging to the new universality class for different values of $`N>N_c`$ and discuss the weakly diluted Ising model case $`N=0`$.
## II Pseudo-$`\epsilon `$ series and numerical results

We start from a $`d=3`$ effective LGW Hamiltonian with two couplings at terms of spherical and cubic symmetry:

$$\mathcal{H}(\phi )=\int \mathrm{d}^3R\left\{\frac{1}{2}\sum _{\alpha =1}^N\left[|\nabla \phi _\alpha |^2+m_0^2\phi _\alpha ^2\right]+\frac{u_0}{4!}\left(\sum _{\alpha =1}^N\phi _\alpha ^2\right)^2+\frac{v_0}{4!}\sum _{\alpha =1}^N\phi _\alpha ^4\right\},$$ (1)

where $`\phi _\alpha (R)`$ are components of a bare $`N`$–component vector field; $`u_0>0`$, $`v_0`$ are bare couplings, $`m_0^2`$ is a squared bare mass being a linear function of temperature. The vicinity of a critical point corresponds to a long-distance behaviour of the model (1), while ultraviolet divergences of the theory are dealt with by means of an appropriate renormalization procedure. In particular, the renormalization of the bare couplings leads to $`\beta _u(u,v)`$ and $`\beta _v(u,v)`$ – the so-called $`\beta `$–functions; a renormalization of the bare field and square–field insertion produces $`\gamma _\varphi (u,v)`$ and $`\overline{\gamma }_{\varphi ^2}(u,v)`$ – the so-called $`\gamma `$–functions. All these functions depend on the renormalized couplings $`u`$ and $`v`$. The critical behaviour of the model is determined by the infrared stable fixed point $`u^{*},v^{*}`$. It is given by the condition that both $`\beta `$–functions are zero and all the real parts of the stability matrix eigenvalues are positive. The pair correlation function critical exponent $`\eta `$ and the correlation length critical exponent $`\nu `$ are obtained via the relations $`\eta =\gamma _\varphi (u^{*},v^{*})`$, $`1/\nu =2-\overline{\gamma }_{\varphi ^2}(u^{*},v^{*})-\gamma _\varphi (u^{*},v^{*})`$. The correction-to-scaling exponent $`\omega `$ is given by the largest stability matrix eigenvalue in the stable fixed point. In the present study we reconsider the RG functions of the model (1) as they are obtained within the massive fixed $`d=3`$ scheme:

$$\frac{\beta _u(u,v)}{u}=1-u-\frac{2}{3}v+\frac{4}{27}\frac{190+41N}{(8+N)^2}u^2+\frac{400}{81}\frac{uv}{8+N}+\frac{92}{729}v^2+\beta _u^{(3LA)}+\dots +\beta _u^{(6LA)},$$ (2)

$$\frac{\beta _v(u,v)}{v}=1-\frac{12u}{8+N}-v+\frac{4}{27}\frac{370+23N}{(8+N)^2}u^2+\frac{832}{81}\frac{uv}{8+N}+\frac{308}{729}v^2+\beta _v^{(3LA)}+\dots +\beta _v^{(6LA)},$$ (3)

$$\gamma _\varphi (u,v)=\frac{8}{27}\frac{2+N}{(8+N)^2}u^2+\frac{16}{81}\frac{uv}{8+N}+\frac{8}{729}v^2+\gamma _\varphi ^{(3LA)}+\dots +\gamma _\varphi ^{(6LA)},$$ (4)

$$\overline{\gamma }_{\varphi ^2}(u,v)=\frac{(2+N)u}{8+N}+\frac{1}{3}v-\frac{2(2+N)}{(8+N)^2}u^2-\frac{4uv}{3(8+N)}-\frac{2}{27}v^2+\overline{\gamma }_{\varphi ^2}^{(3LA)}+\dots +\overline{\gamma }_{\varphi ^2}^{(6LA)},$$ (5)

where $`\beta _u^{(3LA)}\dots \overline{\gamma }_{\varphi ^2}^{(6LA)}`$ denote the three–loop contributions obtained in Ref. , the four–loop, the recent five–loop and the very recent six–loop contributions obtained in Refs. , and , respectively.
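As a consistency check on the one-loop terms of Eqs. (2) and (3), the mixed (cubic) fixed point can be obtained analytically at this order. The short symbolic sketch below is ours and only illustrates the normalization:

```python
import sympy as sp

u, v, N = sp.symbols('u v N')
# one-loop parts of Eqs. (2), (3): beta_u/u = 1 - u - 2v/3, beta_v/v = 1 - 12u/(8+N) - v
sol = sp.solve([1 - u - sp.Rational(2, 3) * v,
                1 - 12 * u / (8 + N) - v], [u, v], dict=True)[0]
print(sp.simplify(sol[u]), sp.simplify(sol[v]))   # -> (N+8)/(3N), (N-4)/N
```

The cubic coupling $`v^{*}=(N-4)/N`$ changes sign at $`N=4`$, which reproduces the zeroth-order term of the pseudo-$`\epsilon `$ series for $`N_c`$ given below.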
Furthermore, the large-order behaviour was established for the RG functions (2)–(5), which allowed the application of the refined resummation procedure based on the Borel transformation with conformal mapping. In this way a convergent sequence of approximations for $`N_c`$ as well as for the critical exponents within the cubic universality class was obtained. One possible way of analyzing the massive RG functions (2)–(5) consists in solving the system of equations for the (resummed) $`\beta `$–functions (2), (3)

$$\beta _u(u^{*},v^{*})=0,$$ (6)

$$\beta _v(u^{*},v^{*})=0$$ (7)

to get the numerical values of the stable fixed point coordinates $`u^{*},v^{*}`$. These numerical values are then substituted into the (resummed) series for the $`\gamma `$–functions (4), (5), which lead to the numerical values for the critical exponents. As a result the final errors for the critical exponents are the sum of the errors of the series for the exponents and of the errors coming from $`u^{*},v^{*}`$. To avoid such an accumulation of errors it is standard now in the analysis of field theories with one coupling to use a pseudo–$`\epsilon `$ expansion. Here, we will apply the pseudo–$`\epsilon `$ expansion to the cubic model (1). The procedure is defined in the following way. Let us introduce the functions:

$$\beta _u(u,v,\tau )=u\left(\tau -u-\frac{2}{3}v+\dots \right),$$ (8)

$$\beta _v(u,v,\tau )=v\left(\tau -\frac{12u}{8+N}-v+\dots \right)$$ (9)

where the “pseudo–$`\epsilon `$” auxiliary parameter $`\tau `$ has been introduced into the $`\beta `$–functions (2), (3) instead of the zeroth order term. Obviously, $`\beta _u(u,v)\equiv \beta _u(u,v,\tau =1)`$, $`\beta _v(u,v)\equiv \beta _v(u,v,\tau =1)`$. The fixed point coordinates are then obtained as series in $`\tau `$. The series for the stable fixed point coordinates $`u^{*}(\tau )`$, $`v^{*}(\tau )`$ are then substituted into the series (4), (5) for the $`\gamma `$–functions, leading to the pseudo–$`\epsilon `$ expansion for the critical exponents. In the resulting series the expansion parameter $`\tau `$ collects contributions from the loop integrals of the same order coming from both the series of the $`\beta `$– and $`\gamma `$–functions. Finally, one puts $`\tau =1`$. In such a way one gets a self-consistent perturbation theory and avoids the accumulation of errors originating from different steps of the calculation. With the above described method we obtain the marginal value $`N_c`$ and the critical exponents in the cubic universality class for the model (1). The series for $`N_c`$ reads:

$$N_c=4-\frac{4}{3}\tau +0.29042005\tau ^2-0.18967704\tau ^3+0.19951035\tau ^4-0.22465150\tau ^5.$$ (10)

One notes that at least up to the presented number of loops the series does not behave like an asymptotic one with factorial growth of the coefficients. This can be seen by considering the Padé table (11) for the $`N_c`$ series (10):

$$\left[\begin{array}{cccccc}4& 3& 2.9158& 2.8411& ^{2.8922}& 2.8298\\ 2.6667& 2.9051& ^{2.0643}& 2.8711& 2.8638& o\\ 2.9571& 2.8423& 2.8616& 2.8616& o& o\\ 2.7674& 2.8646& 2.8616& o& o& o\\ 2.9669& 2.8613& o& o& o& o\\ 2.7423& o& o& o& o& o\end{array}\right].$$ (11)

Here, the result of an approximant $`[M/N]`$ is represented as an element of a matrix with the usual notation.
The approximants $`[0/4]`$ and $`[1/2]`$ have poles at values of $`\tau `$ of the order 1 (at the points $`\tau _1=3.7`$ and $`\tau _2=1.1`$ respectively) and thus the estimates of $`N_c`$ on their basis are considered as unreliable (they are noted in the table by small numbers). The values in the first column of the table are merely the sums of the corresponding number of terms in the expansion (10) and do not diverge. However, the most prominent property of the table is the excellent convergence of the values along the main diagonals. In particular, the six–loop result of the $`[3/2]`$ and the $`[2/3]`$ approximants and the five–loop result of the $`[2/2]`$ approximant coincide within the 4th digit and lead to the estimate $`N_c=2.8616`$. Though the next order terms in (10) could spoil such convergence, it is worth comparing the pseudo–$`\epsilon `$ expansion (10) for $`N_c`$ with the corresponding $`\epsilon `$–expansion series, which is one loop order shorter:

$$N_c=4-2\epsilon +2.58847559\epsilon ^2-5.87431189\epsilon ^3+16.82703902\epsilon ^4.$$ (12)

The obviously worse convergence properties of the series (12) lead to a correspondingly bad convergence of the values in the Padé table (obtained in Ref. ). In particular, the mere summation of the first several terms now leads to a diverging result:

$$\left[\begin{array}{ccccc}4& 2.667& ^{3.627}& 1.952& ^{5.772}\\ 2& 3.128& 2.893& 2.972& o\\ 4.589& 2.792& 2.958& o& o\\ -1.286& 3.068& o& o& o\\ 15.540& o& o& o& o\end{array}\right].$$ (13)

Here, the approximants $`[0/2]`$, $`[0/4]`$ have poles close to $`\tau =1`$ (at $`\tau _1=2.3`$, $`\tau _2=0.9`$ respectively) and thus are unreliable. In order to take into consideration a possible factorial divergence of the pseudo–$`\epsilon `$ expansion (10), as a next step we apply to the series (10) the Padé–Borel resummation procedure. The Padé–Borel resummation of the initial sum $`N_c(\tau )`$ consists of the following steps: i) construction of the Borel–Leroy image of $`N_c(\tau )`$; ii) its extrapolation by a rational Padé approximant $`[M/N](\tau t)`$; and iii) definition of the resummed $`N_c^{res}(\tau )`$ by the integration $`\int _0^{\infty }dt\mathrm{exp}(-t)t^p[M/N](\tau t)`$, where $`p`$ is an arbitrary parameter entering the Borel–Leroy image. One possibility to fix $`p`$ is to require the fastest convergence of the resulting values, given by the diagonal approximant resummation similar to the Padé analysis (11). However, the convergence of these values appears to be almost independent of $`p`$. On the other hand, approximants possessing poles on the positive real axis are considered as unreliable, and we can equally well choose $`p`$ to provide a minimal number of such divergent approximants. For instance, for $`p\geq 4`$ the imaginary part is smaller than $`10^{-10}`$ in the “bad” $`[1/4]`$ and $`[3/2]`$ approximants and therefore can be neglected. Processing the series (10) for $`p=4`$ as described, we obtain the results presented in (14). Here, one encounters only one unreliable approximant $`[1/2]`$, which is again denoted with small numbers. This analysis yields $`N_c=2.862\pm 0.005`$, where the error bar stems from the maximal deviation between the six– and the five–loop results for arbitrary $`p`$ between $`0`$ and $`10`$.
$$\left[\begin{array}{cccccc}4& 3.0353& 2.9245& 2.8634& 2.8763& 2.8561\\ 2.6667& 2.8995& ^{2.7173}& 2.8737& 2.8685& o\\ 2.9571& 2.8461& 2.8595& 2.8631& o& o\\ 2.7674& 2.8617& 2.8645& o& o& o\\ 2.9669& 2.8641& o& o& o& o\\ 2.7423& o& o& o& o& o\end{array}\right].$$ (14)

It is obvious that other values of interest, such as the fixed point coordinates and critical exponents, can also be obtained within the pseudo–$`\epsilon `$ expansion. For instance, for different values of $`N`$ we obtain the following expressions for the critical exponents $`\gamma `$ of the susceptibility, $`\nu `$ of the correlation length, and $`\omega `$ of the correction–to–scaling:

$$\gamma _{N=3}=1+2/9\,\tau +0.10157666\tau ^2+0.03325297\tau ^3+0.02024452\tau ^4+0.00312386\tau ^5+0.00905558\tau ^6,$$ (15)

$$\gamma _{N=4}=1+1/4\,\tau +0.11188272\tau ^2+0.03494088\tau ^3+0.01575673\tau ^4-0.00023288\tau ^5+0.00322125\tau ^6,$$ (16)

$$\gamma _{N=5}=1+4/15\,\tau +0.11314861\tau ^2+0.03107333\tau ^3+0.00939269\tau ^4-0.00376555\tau ^5-0.00055733\tau ^6,$$ (17)

$$\gamma _{N=\infty }=1+1/3\,\tau +0.06675812\tau ^2+0.00726155\tau ^3-0.00746706\tau ^4-0.00082309\tau ^5-0.00713623\tau ^6,$$ (18)

$$\nu _{N=3}=1/2+1/9\,\tau +0.05383664\tau ^2+0.01993814\tau ^3+0.01227945\tau ^4+0.00300477\tau ^5+0.00535272\tau ^6,$$ (19)

$$\nu _{N=4}=1/2+1/8\,\tau +0.05902778\tau ^2+0.02074731\tau ^3+0.01004792\tau ^4+0.00133632\tau ^5+0.00248012\tau ^6,$$ (20)

$$\nu _{N=5}=1/2+2/15\,\tau +0.05964701\tau ^2+0.01877086\tau ^3+0.00688561\tau ^4-0.00041380\tau ^5+0.00060891\tau ^6,$$ (21)

$$\nu _{N=\infty }=1/2+1/6\,\tau +0.03612254\tau ^2+0.00709205\tau ^3-0.00142535\tau ^4+0.00103317\tau ^5-0.00285768\tau ^6,$$ (22)

$$\omega _{N=3}=\tau -0.39042829\tau ^2+0.29428918\tau ^3-0.25565542\tau ^4+0.31134025\tau ^5-0.43957722\tau ^6,$$ (23)

$$\omega _{N=4}=\tau -0.36419753\tau ^2+0.24511892\tau ^3-0.20419925\tau ^4+0.21874431\tau ^5-0.27962773\tau ^6,$$ (24)

$$\omega _{N=5}=\tau -0.35129140\tau ^2+0.21196053\tau ^3-0.16985912\tau ^4+0.17369321\tau ^5-0.19948859\tau ^6,$$ (25)

$$\omega _{N=\infty }=\tau -0.42249657\tau ^2+0.34513141\tau ^3-0.32006198\tau ^4+0.44947688\tau ^5-0.67842170\tau ^6.$$ (26)

One can determine the cubic model critical exponents of the new universality class on the basis of the expansions (15)–(26) in the same manner as for the expansion (10) of $`N_c`$. However, the expansions for the combinations $`1/\gamma `$ and $`1/\nu -1`$ appear to have better convergence properties, and all values are obtained on their basis. The Padé and Padé–Borel analyses lead to the critical exponent values as they are given in the last column of Table I (here, we do not show intermediate results similar to (11)-(14)). The error bars for the critical exponents given in Table I were obtained from the maximal deviation between the six- and the five-loop results among all deviations for the parameter values $`0\leq p\leq 10`$. The error bars within the pseudo–$`\epsilon `$ expansion are typically much smaller than those based on other methods. The reason is that the $`\beta `$– and $`\gamma `$–functions contribute in a self-consistent way to the pseudo–$`\epsilon `$ expansion series for the critical exponents.
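For illustration, steps i)-iii) of the Padé–Borel procedure described above take only a few lines of code. The sketch below is ours (not the program behind the quoted numbers); it applies the $`[3/2]`$ approximant with $`p=4`$ to the series (10) and should land close to the quoted $`N_c=2.862`$.

```python
import numpy as np
from math import gamma as Gamma
from scipy.integrate import quad
from scipy.interpolate import pade

# pseudo-epsilon coefficients of the series (10) for N_c
a = [4.0, -4.0 / 3.0, 0.29042005, -0.18967704, 0.19951035, -0.22465150]
p = 4.0  # Borel-Leroy shift parameter

# i) Borel-Leroy image: divide each coefficient by Gamma(k + p + 1)
b = [a_k / Gamma(k + p + 1) for k, a_k in enumerate(a)]

# ii) [3/2] Pade approximant of the image (denominator degree 2)
num, den = pade(b, 2)

# iii) inverse Borel transform evaluated at tau = 1
tau = 1.0
Nc, _ = quad(lambda t: np.exp(-t) * t**p * num(tau * t) / den(tau * t), 0.0, np.inf)
print(Nc)
```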
To compare our results, we present in Table I the values obtained from the five–loop $`\epsilon `$–expansion by means of a modified Borel summation and of the Borel transformation with conformal mapping (the corresponding citation in the table is primed). Critical exponents from the four–loop fixed massive $`d=3`$ scheme, with Padé-Borel resummation applied to the RG functions, and the results from the six–loop RG functions resummed by the Borel transformation with conformal mapping are given in the table. Recently, the modified Padé-Borel resummation has been applied to the six–loop RG functions of the cubic model. These data are also displayed in the table. The error bars for the values of the Refs. and were obtained from the condition of the result stability in successive approximation orders. However, the numerical values of the fixed point coordinates were substituted into the expansions for the $`\gamma `$–functions (4), (5). To this end the most reliable numerical values of the stable fixed point coordinates were substituted into the resummed $`\gamma `$–functions and then an optimal value of the fit parameter in the modified Padé-Borel resummation and two fit parameters in the conformal mapping procedure were chosen. The deviations between the five- and six-loop results obtained within the resummation procedure with the optimal fit parameter value(s) gave the error interval. To complete the list, we show the value of $`\omega `$ for $`N=3`$ obtained in Ref. on the basis of the Borel transformation involving knowledge of the RG functions' large–order behaviour. Let us note that for finite values of $`N`$ our data for $`\gamma `$ and $`\nu `$ interpolate between the results of the minimal subtraction and the massive scheme approaches, though the values are closer to the latter. On the other hand, our method gives smaller values for $`\omega `$ in comparison with other methods. We note as well that passing from the four–loop to the six–loop approximation within the massive $`d=3`$ approach shifts the numerical values of the critical exponents towards our data. In the limit $`N\to \infty `$ the critical properties of the cubic model reconstitute those of the annealed diluted Ising model, where Fisher renormalization for the critical exponents holds. In particular, based on the recent RG estimates for the critical exponents of the pure Ising model $`\alpha =0.109\pm 0.004`$, $`\nu =0.6304\pm 0.0013`$, $`\gamma =1.2397\pm 0.0013`$, one obtains the values $`\nu =0.708\pm 0.005`$, $`\gamma =1.391\pm 0.008`$ for the Ising model with annealed disorder. These values agree very well with our results in the last row of Table I. Moreover, they are in very good agreement with the other data of the table. It is worth noting here that the RG series for the cubic model allow one to reconstitute the functions which describe the Ising model with another type of randomness. By the substitution $`N=0`$ one reconstitutes the weakly diluted quenched Ising model (RIM). In this case, however, the pseudo–$`\epsilon `$ expansion in $`\tau `$ degenerates into a $`\sqrt{\tau }`$–expansion for the same reasons as the $`\epsilon `$–expansion for the RG functions degenerates into a $`\sqrt{\epsilon }`$–expansion. Moreover, our calculations show that an expansion in $`\sqrt{\tau }`$ is numerically useless, as was shown for the $`\sqrt{\epsilon }`$–expansion. This can be regarded as an evidence of the Borel non–summability of the RIM RG functions.
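(A quick arithmetic aside: the annealed-limit exponents quoted above follow directly from Fisher renormalization, which divides the pure-model exponents by $`1-\alpha `$. The two-line check below is ours, for illustration.)

```python
alpha, nu, gamma = 0.109, 0.6304, 1.2397        # pure Ising values cited above
print(nu / (1 - alpha), gamma / (1 - alpha))    # -> ~0.7075 and ~1.3914
```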
Since the asymptotic properties of the series are still not proven, despite their apparently divergent character, the RG functions of the RIM, as series in the renormalized couplings, are usually treated by means of Padé–Borel or Chisholm–Borel resummations (see e.g. ). The first of the stated methods was recently applied to study the five-loop RG functions of the RIM. The analysis allowed the authors to obtain the five–loop estimates for the RIM critical exponents. Extending the analysis of Ref. to the six–loop order reveals a wide gap between the five– and six–loop fixed point coordinates. This leads to an inconsistency of the six–loop values of the critical exponents compared with the five–loop results of Ref. . However, the analytical solution of a toy $`d=0`$ RIM showed its free energy to be Borel summable provided that the resummation is done asymmetrically: resumming first the series in the coupling $`u`$ and subsequently the series in $`v`$. The corresponding resummation applied to the $`d=3`$ RIM massive scheme RG functions allowed a precise determination of the critical exponents.

## III Conclusions

In the present paper we studied the critical properties of a cubic model associated with $`\varphi ^4`$–terms of spherical and cubic symmetry in the LGW Hamiltonian. In particular, we were interested in the crossover between $`O(N)`$–symmetric and cubic behaviour which occurs at a certain value $`N_c`$ of the number of order parameter components. Recently, five– and six–loop order RG functions were obtained for the cubic model within the massive $`d=3`$ scheme. We applied the pseudo–$`\epsilon `$ expansion to their analysis. This method is known as a standard one for the analysis of the $`O(N)`$–symmetric model and leads to the most accurate values of the critical exponents. Here, to our knowledge, it has been applied to the cubic model for the first time. The pseudo–$`\epsilon `$ expansion for $`N_c`$ appears to have much better convergence properties than the corresponding $`\epsilon `$–expansion (cf. the series (10) and (12) and the Padé tables (11) and (13)). This provides the very good convergence of its Padé analysis (11). The latter, together with the refined Padé–Borel analysis, yields the best estimate of the paper, $`N_c=2.862\pm 0.005`$. Our conclusion $`N_c<3`$ means in particular that all ferromagnetic cubic crystals with three easy axes should undergo a first order phase transition. We obtained the values of the cubic model critical exponents in the new universality class in pseudo–$`\epsilon `$ expansions, with the results given in Table I. In the $`N\to \infty `$ limit our data reproduce the critical behaviour of an annealed weakly diluted Ising model. The $`N\to 0`$ limit, corresponding to a quenched weakly diluted Ising model, however, does not yield reliable results in the pseudo–$`\epsilon `$ ($`\sqrt{\tau }`$) expansion. Within the traditional $`d=3`$ massive technique the resummation of the RG functions by means of the conventional Padé-Borel analysis reveals a gap between the five– and six–loop fixed point coordinates. This leads to an inconsistency of the obtained critical exponent values compared to those reported in Ref. . Let us note, however, that reliable values have recently been obtained by a resummation method which treats the couplings of the RIM asymmetrically.

## Acknowledgements

We acknowledge useful discussions with Józef Sznajd and Maciej Dudziński and thank Konstantin Varnashev for communicating his results prior to publication. This work has been supported in part by the “Österreichische Nationalbank Jubiläumsfonds” through the grant No 7694.
# An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I

## 1 Introduction

M dwarfs are among the faintest and coolest stellar objects. Their optical spectra are dominated by molecular band absorption. The major opacity sources in the optical regions are TiO and VO bands, which produce a pseudo-continuum with the atomic lines superposed. Recent surveys (e.g. DENIS, 2MASS) have turned up even cooler objects, including brown dwarfs (Delfosse et al. (1999); Kirkpatrick et al. 1999a). The optical spectra of these objects are still characterized by molecular band absorption; however, the TiO and VO bands, which constitute the defining characteristic of the M spectral class, become very weak or are absent, while the metal hydrides begin to dominate. This is presumably because of the depletion of heavy elements in their cool photospheres through the formation (and perhaps subsequent gravitational settling) of dust grains, especially solid VO and perovskite (CaTiO$`_3`$) (Sharp & Huebner (1990); Tsuji et al. 1996a; Allard (1998)). For this reason, a new spectral class, ‘L’, has been proposed for these very cool low-mass objects (Martín et al. (1997); Kirkpatrick et al. 1999a; Martín et al. 1999b). To understand these very cool, low mass stellar and substellar objects, it is necessary to assign to them an effective temperature scale. The construction of T$`_{eff}`$ sequences has been attempted in recent years by comparing the spectra of these objects to synthetic ones generated by atmospheric models. Comparisons have been made on the basis of spectral energy distributions (SEDs) (Tinney et al. (1998); Ruiz et al. (1997)) and infrared colors (Leggett et al. (1998)). Attempts have also been made to construct a temperature scale using the molecular lines in the observed spectra, since these are sensitive to photospheric temperature. Tinney et al. (1998) do this in the optical by ranking the objects in order of TiO, VO, CrH and FeH equivalent widths; Delfosse et al. (1999) pursue a similar program in the infrared by using H$`_2`$O indices. Tokunaga & Kobayashi (1999) do so with a spectral color index based on moderate dispersion spectroscopy in the K band. Kirkpatrick et al. 1999a have performed a detailed analysis of low dispersion optical spectra as a first step towards defining the subclasses of the L spectral type. It is found that the models fit the observed SEDs and colors fairly well overall, and an effective temperature sequence is indeed derivable from the fits. However, as T$`_{eff}`$ decreases from early to late spectral types, some molecular bands which comprise most of the low resolution spectral features of these cool objects first increase and then decrease in strength due to dust formation (both through depletion from the gas phase and heating effects). The broadband infrared color indices (e.g. J-K or I-K) saturate at very cool temperatures, and eventually also reverse direction (though it seems that I-Z or I-J might be monotonic with temperature). The molecular line lists in the models are not fully satisfactory, which could affect the temperature scale. It is therefore useful to have an alternative and complementary way of estimating effective temperatures. We present such a method here: fitting synthetic spectra to the profiles of strong atomic alkali lines. Their extremely low ionization potential allows the alkali metals (Li, Na, K, Cs and Rb) to remain neutral only at temperatures below 3000K.
More importantly, their atomic resonance lines occur in the red and are thus prominent in very cool photospheres, displaying deep cores and broad wings in high-resolution spectra. Finally and most importantly, the alkali metals are relatively undepleted by dust formation (Burrows et al. (1999)), and their resonance lines are apparently formed in photospheric layers located above the region affected most by dust. For these reasons, the resonance features of the alkali metals are excellent temperature indicators in cool photospheres, and grow monotonically in strength with decreasing temperature in the range of interest. We therefore attempt to assign effective temperatures to 17 late-M and L dwarfs by fitting model spectra to their Cs I and Rb I high-resolution line-profiles. In principle, the Na I and K I lines can also be used for this purpose. However, we have not considered them in this paper, for the following reasons. The K I line in some of our sample objects is very broad, extending beyond one whole echelle order. To fit models to it, neighboring orders must be patched together, a task fraught with sources of error and thus considered too risky for the fine analysis we undertake here. We also do not have observations of the Na I lines for a number of our sample objects. Therefore, we consider only the Cs I and Rb I lines in this paper. We should point out that this method has its own weak points. The treatment of dust, or of the background molecular opacities around the lines, does influence the apparent strength of the line by changing the nearby continuum. While it is unlikely that the molecular opacities are wildly off, we show that dust can have a major effect. The synthetic spectra we use are generated by the atmospheric models of Allard and Hauschildt (hereafter AH). The models come with 3 treatments: “standard (no-dust)”, “dusty” and “cleared-dust” (discussed in §3). The first does not account for dust formation at all, while the second takes dust formation into account by considering both dust grain opacities and photospheric depletion of dust-forming elements (physically, this implies a haze of dust that is mixed throughout the atmosphere). The third ignores dust opacities but accounts for photospheric depletion, simulating the state of affairs when dust forms and then gravitationally settles below the photosphere. All three treatments handle line broadening in the same fashion. We find that the “standard” and “dusty” models do a poor job of reproducing the optical observations – the alkali lines in the first are too narrow and in the second they are too weak to fit most observations. The “cleared-dust” models, however, fit the data much better.

## 2 Observations

### 2.1 Sample

Until 1997, the only L-type object known was GD165B (Becklin & Zuckerman (1988)), although it was not identified with that sobriquet. Indeed, a good spectral analysis of it has only recently appeared (Kirkpatrick et al. 1999b). All the L dwarfs studied in this paper, therefore, are recent discoveries. They come primarily from the DENIS all-sky IR southern survey (Delfosse et al. (1999)). Some are from other sources. Kelu-1 (Ruiz et al. (1997)) was announced as the first confirmed field brown dwarf; it was found in a proper motion survey of faint red objects. LP 944-20 is a late M star from the Luyten proper motion survey, which Tinney et al. (1998) discovered contains lithium, making it a brown dwarf. G196-3B is a brown dwarf companion to a nearby young M star found by Rebolo et al.
(1998); this is among the lowest mass brown dwarfs currently imaged. LHS 102B is an L dwarf companion to another nearby M star, found by a proper motion study of the EROS project's images (Goldman et al. (1999)). Recently, the 2MASS project has greatly added to the inventory of L stars (Kirkpatrick et al. 1999a). Thus, our sample is not selected in any particular way, but is a first look at objects out of the initial discovery list of cool VLM field objects. The list of objects observed is given in Table 1 (the spectral types listed in Table 1 are from the literature; see the end of §5.1 for a discussion). There are new objects being discovered monthly now by the DENIS, 2MASS, and Sloan surveys; it will soon be possible to study samples chosen in a more statistically meaningful way.

### 2.2 Data Acquisition and Reduction

Observations were made with the W. M. Keck I 10-m telescope on Mauna Kea using the HIRES echelle spectrometer (Vogt et al. (1994)). The observation dates are listed in Table 1. The instrument yielded 15 spectral orders from 640nm to 860nm (with gaps between orders), detected with a Tektronix $`2048^2`$ CCD. The CCD pixels are 24$`\mu m`$ in size and were binned 2 $`\times `$ 2; the bins are hereafter referred to as individual “pixels”. Each pixel covered 0.1A, and use of slit decker “D1” gave a slit width of 1.15 arcsec projected on the sky, corresponding to 2 pixels on the CCD and a spectral resolution of R = 31000. The slit length is 14 arcsec, allowing excellent sky subtraction. The CCD exhibited a dark count of $`\sim `$2 e-/h and a readout noise of 5 e-/pixel. The data were reduced in a standard fashion for echelle spectra, using IDL procedures. This includes flat-fielding with a quartz lamp, order definition and extraction, and sky subtraction. The stellar slit function is found in the redmost order, and used to perform a weighted extraction in all orders. The wavelength scale is established using a ThAr spectrum taken without moving the grating; the solution is a 2-D polynomial fit good to 0.3 pixels or better everywhere. Cosmic-ray hits and other large noise-spikes were removed using an upward median correction routine, wherein points more than 7$`\sigma `$ above the median (calculated in 9-pixel bins) were discarded. All points below the median were retained, to preserve the integrity of sharp absorption lines. For the purpose of comparing data to models, it is necessary to normalize both to the same continuum value. However, the continuum in the data and the models is characterized by a large number of overlapping molecular lines; moreover, the data are not flux-calibrated, and have a superimposed echelle blaze function. Thus, only a pseudo-continuum is derivable for the data and models. A straight-line fit to the pseudo-continuum was obtained for each observed spectrum, and divided out for normalization. The fit was made after manually discarding strong spectral lines (selected by eye), including the alkali absorption lines. For Cs I, the overall fit was obtained in the range 8500-8540A; for Rb I, in the range 7930-7960A.

### 2.3 Radial Velocities

Barycentric and stellar radial velocity corrections were derived for each observed spectrum. We thank G. Marcy for the IDL routine which calculates the barycentric correction. The radial velocities were derived by measuring the peak position for cross-correlation functions between each star and a radial velocity standard.
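The original reductions used IDL; the following Python fragment is only a schematic of the cross-correlation step just described. The velocity grid, step size and the use of simple linear interpolation are our choices, not those of the actual pipeline.

```python
import numpy as np

C_KMS = 2.998e5  # speed of light, km/s

def xcorr_velocity(wave, flux, flux_std, v_std_kms, vmax=200.0, dv=1.0):
    """Shift the standard over trial velocities, cross-correlate against the
    target, and return the peak velocity plus the standard's own velocity."""
    f = flux - flux.mean()                       # remove the mean before correlating
    trials = np.arange(-vmax, vmax + dv, dv)
    cc = [np.sum(f * (np.interp(wave, wave * (1.0 + v / C_KMS), flux_std)
                      - flux_std.mean())) for v in trials]
    return trials[int(np.argmax(cc))] + v_std_kms
```

In practice one would also refine the peak location, e.g. by fitting a parabola to the cross-correlation function near its maximum.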
We chose Gl 406 (M6) as our standard for the M dwarfs, and LHS 2924 (M9) and 2MASSW 1439+1929 (L1) as our standards for the L-dwarfs. Gl 406 was chosen as a standard since its velocity has been given by several authors to good accuracy. In particular, Delfosse has found it to be +19 km s<sup>-1</sup> as part of his precision radial velocity project. Measurement of the Cs I line shift on an absolute wavelength scale for our December 1997 observation of Gl 406 yielded a velocity of +19 km s<sup>-1</sup>. A previous spectrum taken by Basri in November 1993 gives +18 km s<sup>-1</sup>, and a new spectrum observed in June 1999 gives +19 km s<sup>-1</sup> (using the December 1997 spectrum as calibrator). However, measurement of the Cs I line shift on an absolute wavelength scale yields a velocity of +43 km s<sup>-1</sup> for a spectrum observed on June 2, 1997 (with an accuracy of about 1 km s<sup>-1</sup>). Cross-correlation of this spectrum with the December 1997 Gl 406 observation also yields a velocity of +43 km s<sup>-1</sup>. Another spectrum of Gl 406 taken the next night (June 3, 1997), with a different grating setting, also yields a similar anomalous velocity (+40 km s<sup>-1</sup>) upon cross-correlation with the December 1997 observation. We examined the discrepant velocity of June 1997, comparing the ThAr spectra and night sky spectra with those of December 1997 to confirm that the grating had not moved from where we thought it was. There is a clear shift of the star compared with night sky lines on the same echelle frames for the two epochs. The stellar spectra themselves are otherwise almost identical. Radial velocities of all objects earlier than L0 were obtained by independent cross-correlation with the December 1997 and June 1999 Gl 406 spectra, in the spectral interval 8630-8753Å (containing TiO, VO, CrH and FeH lines). The results are mutually consistent. We also obtained radial velocities of the objects earlier than L0 through cross-correlation with the June 1997 Gl 406 spectra in the stated interval; the velocities obtained were consistent (within $`\pm `$5 km s<sup>-1</sup>) with those obtained through cross-correlation with the December 1997 and June 1999 Gl 406 spectra. This implies that the anomalous Gl 406 radial velocity of June 1997 is real. Delfosse, however, has not seen radial velocity variations in his precision observations of Gl 406, including one only a few months away from those reported here. Either Gl 406 is a radial velocity variable, or we happened to observe a near twin quite close by (none is known). We have not discovered such variable behavior in any other of our sample objects. However, Tinney & Reid (1998) obtained an “inexplicable” discrepancy in velocity for GJ 1111 compared to other published results. Such behavior could result if these are highly eccentric single-lined spectroscopic binaries. Clearly these systems bear closer monitoring. Since the spectra of the objects L0 and later in our sample in the given interval are not very similar to that of Gl 406 (as TiO and VO disappear and CrH and FeH appear with decreasing temperature), cross-correlation for these objects was carried out using LHS 2924 (M9) and 2MASSW 1439+1929 (L1) as independent standards. Radial velocities were then calculated using the radial velocities of -37 km s<sup>-1</sup> and -29 km s<sup>-1</sup> obtained for LHS 2924 and 2MASSW 1439+1929 respectively by cross-correlation with the Gl 406 spectra. The results were mutually consistent.
However, the very coolest objects (DENIS-P J0205-1159, 2MASSW 1632+1904 and DENIS-P J0255-4700; ~L5-L6) did not yield very good cross-correlation functions with either of the two calibrators. Two factors probably contribute to this: the lower S/N of the spectra of these three objects, and the fact that, at their low T$`_{eff}`$, their spectral features begin to diverge from those of the two calibrators. The radial velocity results are given in Table 1. The stated errors in the Gl 406 December 1997 and June 1999 radial velocities are from comparing our values to those of Delfosse. All other stated errors are calculated from the differences in radial velocity yielded by using different calibrator spectra.

## 3 Model Atmospheres

As photospheric temperatures decrease, atoms combine to form molecules; upon still further cooling, atoms and molecules may coagulate to form dust grains. Dust formation can change the atmospheric spectral characteristics in various ways. For example, grains can warm the photosphere by backwarming, making the spectral distribution redder while weakening molecular lines (e.g., of H<sub>2</sub>O and TiO) (Jones & Tsuji (1997); Tsuji et al. 1996b). Dust formation decreases the photospheric gas phase abundance of the atoms that form dust (e.g., Ti, Ca, Al, Mg, Si, Fe). Furthermore, with decreasing temperature, dust grains can become larger and gravitationally settle below the photosphere (Allard (1998)). Thus, in any attempt to assign effective temperatures to cool photospheres, we must make use of models that take dust formation and behavior into account. Three basic models are considered:

Standard (no-dust) Models: In these models, various molecules are formed with decreasing T$`_{eff}`$, but no dust is allowed to form. These are the NextGen series of models described by Allard et al. (1996). This scenario is known to be problematic below a certain temperature; Tsuji et al. 1996a have argued that this temperature might be as high as 2800K. A panel of spectra from these models in the Cs I region is shown in Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I, left panel. The opacity is dominated by TiO over most of the spectrum shown. There is also a band due to VO at 8528Å; this is treated with the JOLA (Just Overlapping Line Approximation) method, as no good line lists are available for this molecule. The apparent TiO strengths remain high throughout the temperature range (2500-1600K) shown, and the Cs I line at 8521Å grows moderately stronger. It does not achieve anything like the observed strengths of Cs I in L dwarfs.

Dusty Models: Here, various molecules are formed with decreasing T$`_{eff}`$, which then coalesce into dust grains that remain in the photosphere. A range of grain sizes is assumed, and the resulting dust opacity taken into account, as well as the photospheric depletion of dust-forming atoms and molecules. These models are described by Leggett et al. (1998) and Allard (1998). The same spectral region as above is shown for this model approximation in Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I, right panel. Even at 2600K the spectrum looks rather different, as Ti is already condensing out and becoming unavailable to the molecular bands. A very different set of molecular lines is apparent at 2200K, and the Cs I line has actually disappeared!
This is not due to depletion of atomic cesium, but the growth of stronger opacity sources. At cooler temperatures, all the molecular lines are increasingly weakened by the growing dominance of the (featureless) dust opacity, leaving a rather smooth spectrum by 1600K. What is not apparent from these normalized plots is that the actual flux levels are quite different in the two panels. In the models without dust the TiO lines strongly suppress the flux, while the dusty case is brighter not only because of lower opacity but because the atmosphere is heated by absorption of radiation by the dust.

Cleared-Dust Models: In this case, as dust grains form they are assumed to gravitationally settle below the photosphere. These models are constructed using the full dust equation of state (thus accounting for photospheric depletion of dust-forming materials) but neglecting dust opacity. (We have chosen the name “cleared-dust” to denote the absence of dust opacity in these models. In the simplest picture, formation of dust coupled with the absence of dust opacity could be accomplished by the gravitational settling of dust below the photosphere. However, other scenarios may be invoked as well; for example, the dust may collect into photospheric clouds with a small covering fraction, or the dust may form very large grains with a small total blocking efficiency. Thus the absence of dust opacity in the photosphere should not necessarily be taken to guarantee the absence of photospheric dust itself.) In these spectra (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I, left panel) the strengths of both molecular and atomic lines grow at cooler temperatures. Although TiO is gone, it is replaced by other molecular lines (such as CrH), but these are not nearly as strong in absolute opacity. The VO JOLA feature at 8528Å grows stronger with cooler temperatures (not seen in the observations). The Cs I line becomes very strong at cooler temperatures, in agreement with observations. The line wings are easily able to overcome the rather weak molecular opacities, and the line is not only deep but broad. These are collisional damping wings. In Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I (right panel) we show the analogous spectra for the Rb I region, with a VO JOLA feature at 7942Å. Similar behavior is apparent, except that the molecular opacities are even weaker, and the line comes to completely dominate the spectral region shown by 1600K. This is partly due to the increased abundance of Rb compared to Cs (the K I resonance lines, not shown, are vastly stronger than even Rb I, due to a much higher abundance; these lines grow wings which cover more than 100 Å). It is the lack of TiO combined with the lack of dust opacities which allows the atomic lines to grow so strong in these models, and they resemble the observations most closely. Naively, one might have expected the standard models to hold true in our hottest objects, the dusty models in the cooler objects, and the cleared-dust ones in the coolest objects. However, our observations indicate that the dusty models are never representative of the optical observations. On the other hand, Leggett et al. (1998) and Goldman et al. (1999) find their best fit to the infrared photometry (color and absolute flux) of L and very late M dwarfs with these dusty models.
One way of reconciling this disparity is to postulate that gravitational settling depletes dust in the upper photosphere (where the visible spectrum forms), but not in the lower photosphere (where the IR spectrum forms). In reality, formation of dust clouds, dredging up of grains by convection, and various other meteorological phenomena must be modeled for an accurate portrayal of the situation. In our present early stages of modeling, we expect the three models outlined above to at least bracket the observed spectra.

## 4 Analysis

In this section, we discuss the determination of rotational broadening for each star (§4.1), the model parameters used for profile-fitting (§4.2), and the adjustments made to the models for purposes of comparison to the data (§4.3).

### 4.1 Rotational Velocities

Though the known rotation velocities of very early M-dwarfs ($`<`$M3) are usually ~5 km s<sup>-1</sup>, a significant fraction of the mid-M dwarfs (M3.5 - M6) show faster rotation (Delfosse et al. (1998)). Moreover, a trend toward increasing rotation velocity has been observed as one moves to later M spectral classes (Basri & Marcy (1995); Basri et al. (1996); Tinney & Reid (1998)). Since stellar rotation (corrected for the inclination of the rotation axis) Doppler broadens the line profile, and changes the way it merges into the surrounding pseudo-continuum, rotational effects must be considered during profile-fitting. We determine v sini from molecular lines by the cross-correlation method described below. Once the v sini of the target objects are determined, the models are convolved with the corresponding rotation profiles. For the spectral types earlier than L2 in our sample, v sini is determined by correlating their spectra in the interval ~8630-8753Å with that of Gl 406 in the same interval. This spectral interval contains molecular lines caused by VO, TiO, CrH and FeH. The v sini for Gl 406 has been determined by Delfosse et al. (1998) to be $`<`$3 km s<sup>-1</sup>; thus it can be considered, at the level of accuracy desired here, to be non-rotating (since our instrumental broadening is of the same order). We first obtain the correlation function between our target spectrum and Gl 406 in the specified spectral interval. We then artificially rotate our observed Gl 406 spectrum to various velocities by a convolution algorithm, and obtain the correlation function between each of the rotated Gl 406 spectra and the original unrotated Gl 406 spectrum, in the same spectral interval. Correlation functions are normalized to unity in the “wings”, and additively adjusted so the amplitudes of the main peaks are matched. The correlation function thus obtained that best matches that between the target object and the unrotated Gl 406 spectrum is selected, and the corresponding rotation velocity assigned to the target object. For the cooler L dwarfs (L2 and later) in our sample, Gl 406 can no longer be used as a calibrator. Their spectral features in the chosen interval (or in any other interval available to us) are not very similar to those in Gl 406. New molecular lines appear (as expected) with decreasing T$`_{eff}`$, and the correlation function of Gl 406 with the L-dwarfs is no longer similar to those of Gl 406 with rotated versions of itself. It is therefore preferable to find a cooler calibrator. An ideal calibrator would be a non-rotator, but we have not yet found any non-rotating L-dwarfs.
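The rotation-matching procedure just described can be sketched as follows (Python/NumPy). The Gray (1992) rotational profile with a linear limb-darkening coefficient is a standard choice for the “convolution algorithm” mentioned above, but the kernel details, velocity grid, and least-squares matching of correlation functions are illustrative assumptions rather than the exact procedure used:

```python
import numpy as np

def rot_kernel(vsini, dv, epsilon=0.6):
    """Rotational broadening profile (Gray 1992) sampled every dv km/s,
    with linear limb-darkening coefficient epsilon (assumed value)."""
    n = max(int(2.0 * vsini / dv) // 2 * 2 + 1, 3)   # odd point count
    x = np.linspace(-1.0, 1.0, n)                    # v / vsini
    y = 1.0 - x**2
    g = 2.0 * (1.0 - epsilon) * np.sqrt(y) + 0.5 * np.pi * epsilon * y
    return g / g.sum()

def rotate(flux, vsini, dv, epsilon=0.6):
    """Convolve a spectrum (uniform log-lambda grid, dv km/s per pixel)
    with the rotational profile."""
    return np.convolve(flux, rot_kernel(vsini, dv, epsilon), mode="same")

def ccf(a, b):
    """Cross-correlation function, crudely normalized to unit peak (the
    actual analysis normalizes in the wings and matches the peak
    amplitudes additively)."""
    a = a - a.mean()
    b = b - b.mean()
    c = np.correlate(a, b, mode="full")
    return c / c.max()

def vsini_by_ccf_matching(target, calib, dv, grid=None):
    """Assign the v sini whose rotated-calibrator CCF best matches the
    target-versus-calibrator CCF; the search grid is an assumption."""
    if grid is None:
        grid = np.arange(5.0, 65.0, 2.5)             # km/s
    ref = ccf(target, calib)
    resid = [np.sum((ccf(rotate(calib, v, dv), calib) - ref) ** 2)
             for v in grid]
    return grid[int(np.argmin(resid))]
```

Note that this sketch only works when the calibrator is effectively non-rotating (as Gl 406 is here); the modifications needed for a rotating calibrator are discussed next.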
Both G196-3B and 2MASSW 1439+1929, the slowest L-dwarf rotators in our sample, have a v sini of 10 km s<sup>-1</sup> (as determined through cross-correlation with Gl 406). We choose 2MASSW 1439+1929 as our calibrator, since it appears, from our model fits in Cesium and Rubidium, to be slightly cooler than G196-3B, and should thus be spectrally more similar to the later L-dwarfs, resulting in more robust correlation functions. Through empirical tests, we determined that a rotating calibrator can indeed be used, so long as the targets rotate sufficiently faster than the calibrator. More precisely, we find that the cooler L-dwarfs need to be rotating faster than 30 km s<sup>-1</sup> for their v sini to be accurately determined (with errors of $`\pm `$5 km s<sup>-1</sup>) by cross-correlation with a 10 km s<sup>-1</sup> calibrator. If the cooler L-dwarfs have v sini between 15 and 30 km s<sup>-1</sup>, their rotational velocities can still be determined through cross-correlation with the 10 km s<sup>-1</sup> calibrator, by applying a small, systematic correction of ~5 km s<sup>-1</sup>. If the v sini of the target is less than the v sini of the calibrator, one can determine only that fact. The v sini of all the objects in our sample of spectral type later than L1 have been determined by this method. The values of v sini for all our targets are listed in Table 2. As noted previously in §2.3, the coolest objects in our sample (DENIS-P J0205-1159, 2MASSW 1632+1904 and DENIS-P J0255-4700) do not yield very good correlation functions; the errors in their v sini determination are thus somewhat higher.

### 4.2 Model Parameters

Effective temperature, surface gravity, metallicity and projected rotational velocity are all fundamental parameters which affect the observed spectra. In our case T$`_{eff}`$ is kept a free parameter, while v sini is determined as described above. Surface gravity and metallicity also affect the observed line-broadening. Higher surface gravity increases the pressure at every atmospheric layer, causing increased collisional (Van der Waals) line-broadening, and a larger concentration of molecules at every layer. The net effect is a broadening of the line with increasing gravity (Schweitzer et al. (1996)). Metallicity enters in two opposite ways. First, higher metallicity decreases the proportion of hydrogen particles (which are the main source of collisional broadening). Second, higher metallicity also implies a decrease in pressure at a given optical depth (because of higher opacity), thereby reducing the line-broadening (this second effect probably dominates the first one, unless one goes to extremes of metallicity). The net effect is a narrowing of the line with increasing metallicity (Schweitzer et al. (1996)). Higher metallicity also implies a greater abundance of alkali nuclei, which increases the strength of the atomic alkali lines. This effect is diminished to the extent that the alkali metals end up in molecules. Notice that increasing gravity and increasing metallicity affect line-broadening in opposite senses, making the determination of gravity and metallicity a non-trivial task in cool dwarfs. Metallicities for these relatively young objects may reasonably be supposed to be solar, but can vary somewhat about this value. In this paper we assume a solar metallicity and defer dealing with metallicity variations.
Stellar evolutionary models indicate that late-M and L dwarfs should have a surface gravity given approximately by log\[$`g`$\]=4.5 - 5.5 (Baraffe et al. (1998); Burrows et al. (1997)). Ideally one should keep $`g`$ a free parameter between these limits during profile-fitting. At the time of writing, however, AH cleared-dust models were not yet available for values of log\[$`g`$\] other than 5.0. We therefore examined the no-dust AH models in the gravity range log\[$`g`$\]=4.5 - 5.5 and temperature range T$`_{eff}`$ = 2600 - 3200K (no-dust AH models with varying gravity at lower temperatures were also unavailable). They show that the Rb I line in this T$`_{eff}`$ range is saturated, while the Cs I line is not. However, both the Cs I and Rb I lines in most of our data (which is at much lower T$`_{eff}`$ than 2600K) are close to saturation. Thus, we worked under the assumption that both the Cs I and Rb I lines in our data (and in the cleared-dust models) behave similarly to the Rb I line in the no-dust AH models. With this assumption, we examined the behavior of the Rb I line in the no-dust AH models, at log\[$`g`$\]=4.5, 5.0 and 5.5. We find that, at a given T$`_{eff}`$, the log\[$`g`$\]=5.0 and log\[$`g`$\]=5.5 Rb I lines are indistinguishable from each other, while the log\[$`g`$\]=4.5 line is much narrower. Moreover, at a given gravity, the Rb I line width is strongly dependent on T$`_{eff}`$, becoming narrower with increasing T$`_{eff}`$. What this means is that, with “no-dust” models, assuming log\[$`g`$\]=5.0 when it is really 5.5 will not lead to any significant errors in T$`_{eff}`$ determination. On the other hand, if log\[$`g`$\] is really 4.5, then using 5.0 models will lead us to infer a higher T$`_{eff}`$ than is correct. For example, in the “no-dust” models, we find that the model Rb I line at T$`_{eff}`$ =2600K, log\[$`g`$\]=4.5 is very similar to the model Rb I line at T$`_{eff}`$ =2900K, log\[$`g`$\]=5.0. Thus, if 4.5 were the correct value for log\[$`g`$\], using 5.0 would lead us to a T$`_{eff}`$ 300K higher than the real value. Therefore, if indeed the Cs I and Rb I lines in the “cleared-dust” models at lower temperatures mimic the Rb I line behavior in the higher temperature “no-dust” models, then our T$`_{eff}`$ values are too high by ~300K for low-gravity objects (see, for example, the discussion of G196-3B in §5.9). On the other hand, at very low temperatures, we predict that the very high saturation in the resonance lines will lead to decreasing sensitivity to gravity in these lines. Thus, in general, it appears that gravity variations lead us to T$`_{eff}`$ errors of ~+300K in our sample. This issue must be revisited once cleared-dust AH models at different gravities become available. There is an interplay between T$`_{eff}`$ and v sini in cleared-dust AH models. Decreasing T$`_{eff}`$ increases both line width and line depth, while increasing v sini also increases line width but decreases line depth. These effects are demonstrated in Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I. We find that line width and line depth considered together provide constraints on the effective temperature and rotational velocity. We have used this fact to check the rotational velocities we derive through cross-correlation. The results from profile fitting are consistent with those from cross-correlation. However, since the cross-correlation method is more reliable and more precise, it is the method we use to determine v sini.
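In principle, this interplay can be phrased as a simple grid comparison. The sketch below is a quantitative analogue of the by-eye selection actually used (see §5.1); the helper names are hypothetical, and the model grid loader and broadening routine are left abstract:

```python
import numpy as np

def fit_teff_vsini(data, models, vsini_grid, broaden):
    """Grid search over effective temperature and rotation. `models`
    maps T_eff (K) to a continuum-normalized model flux vector on the
    same wavelength grid as `data` (hypothetical); `broaden` is a
    callable applying rotational plus instrumental broadening. Returns
    the (T_eff, v sini) pair minimizing the summed squared residuals
    over the line region."""
    best, best_score = None, np.inf
    for teff, model in models.items():
        for vsini in vsini_grid:
            m = broaden(model, vsini)
            score = np.sum((data - m) ** 2)
            if score < best_score:
                best, best_score = (teff, vsini), score
    return best
```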
We note also that, at the lowest temperatures in our data (~1700K), rotational broadening effects are largely overwhelmed by collisional broadening ones in the resonance lines (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I). Thus, the larger uncertainties in the v sini of the coolest objects do not significantly affect their T$`_{eff}`$ determination. We must also ask whether collisional broadening, due to surface gravity or metallicity variations, could affect our correlation method results by providing broadening we mistake for rotation. Models with different metallicities have not yet been produced, but no-dust AH models with different gravities are available. By rotating models with different gravities (log\[$`g`$\] = 3.5, 4.5, 5.5) to various velocities and cross-correlating between them, we find that the cross-correlation function is insensitive to gravity, even over the large range of gravities considered. Thus, the v sini we derive are likely valid even if our sample objects do not have our assumed value of log\[g\]=5.0. Insofar as metallicity effects are comparable to gravitational ones, the same should hold true for them.

### 4.3 Model Profile Adjustments

As discussed, the models must first be corrected for rotational broadening effects (see §4.2) through a convolution algorithm. This broadening occurs at the star. There is also a finite instrumental resolution in the observations. Thus, the rotationally broadened models were also broadened by Gaussians in accordance with the HIRES resolution of 31000 (this is an unimportant correction for most of the objects, given their high rotational velocities). In the data, the presence of overlapping molecular lines makes it necessary to renormalize each model with a derived pseudo-continuum (after an initial normalization with the appropriate blackbody spectrum). Derivation of pseudo-continua in the model spectra is complicated by sudden strong breaks which are not apparent in the observed spectra. These are produced by the Just Overlapping Line Approximation (JOLA) method used to treat VO, FeH, CaH, CaOH and NH<sub>3</sub> bands, for which insufficient parameters are currently available to allow a more accurate handling of the individual molecular lines. In the cleared-dust models the Cs I line occurs outside (i.e., blueward of) the JOLA break region (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I, left panel), while the Rb I line occurs within a similar JOLA transition (redward of the break, Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I, right panel). Thus, the models compute the Rb I line with a background opacity higher than the real value, and we expect the model Rb I fits to the data to be less accurate than the Cs I fits. This effect becomes smaller as the Rb I line overwhelms the molecular opacities at the coolest temperatures. We have compensated for the JOLA breaks in the following fashion. For each resonance line, the model spectrum (in the range 8500-8540Å for Cs I and 7930-7960Å for Rb I ) was divided into three sections: one blueward of the JOLA break region (section 1), one redward of it (section 2), and a narrow section over which the break actually occurs (section 3). A polynomial fit is obtained for section 1, after manually discarding strong spectral lines (selected by eye), and section 1 is normalized by this fit.
The flux in section 3 is assigned a single value – the average, after normalization, of the 5 redmost points in section 1. Section 2 is then separately normalized with a polynomial fit (again after discarding strong spectral features). The combined effect of these procedures is to artificially remove the JOLA feature and concurrently normalize the model by the pseudo-continuum. Note that these changes do not affect the actual calculation of the spectrum; they are applied post facto, primarily for aesthetic reasons. They have the effect of making the entire model spectrum resemble the observations more closely. We emphasize that the profile-fitting we do is only in the line itself, while the normalization procedure detailed above affects primarily the spectrum outside the line. It should not affect our best fit parameters.

## 5 Results

### 5.1 General Considerations for Profile Fitting

Our preliminary profile-fitting of both the Cs I and Rb I lines indicates that the cleared-dust models are the only ones that fit the entire line data set well. The dusty models do not agree with the data – dust opacity in the models makes the lines much weaker than observed. The no-dust models do not fit most of the data either, producing lines that are much narrower than those actually seen when the line depth is approximately correct, even after reasonable rotational broadening. This is because in the no-dust models, the opacity in the wings of the resonance line is dominated by molecular opacity rather than the line wing opacity itself; hence only the narrow core of the resonance line is visible. When no-dust models do fit the data, the inferred temperatures (2400-2700K) agree with those from cleared-dust model fits. Moreover, these inferred temperatures are high enough that we expect dust formation to be relatively unimportant; the no-dust and cleared-dust models should resemble each other. In short, we found that cleared-dust models always produced fits at least as good as, and mostly much better than, either of the other two model types. This is in agreement with the conclusions of Tinney (1998), who finds that cleared-dust models provide the best overall spectral shape fits in the optical regime for his sample of DENIS objects. In what follows, therefore, we only consider the results of profile fitting with cleared-dust models. We note again that in the infrared, dusty models do a better job of fitting SEDs (e.g., Leggett et al. (1998)). The left panels of the remaining Figures show our best fits for Cs I and the right panels show the best fits for Rb I . When looking at the observed molecular regions, it may appear that the expected smoothing with increasing rotation is not present. This is because the spectra are somewhat noisy, and noise spikes have the same “sharpness” regardless of rotation. The eye is fooled by this, but a cross-correlation function is not. Before going on to examine the results of profile-fitting for each of the objects in detail, certain general comments can be made. First, the selection of best fits has been done by eye; any quantitative fitting procedure would, in our opinion, require more accurate synthetic spectra (especially for the molecular lines) and less noisy observations than are currently available.
Second, the figures show both data and models after smoothing with a 5-pixel box; this was done to allow the eye to follow the fit better by reducing noise fluctuations. A 5-pixel box, applied to the models and our current data sample, does this well without significantly affecting the integrity of the resonance line. However, the actual model fits were derived by comparing unsmoothed data and models, to eliminate the possibility of overlooking real but sharp features during profile fitting. Third, there exist certain obvious general discrepancies between the models and the observed spectra, which are better noted at the outset rather than repeatedly on an object-by-object basis. The first of these are poorly modeled or unmodeled molecular lines, which appear as additional broadening in the observed Cs I and Rb I line wings, but not in the models (see for example the Cs I and Rb I fits to Kelu-1 (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I)). In general, they appear to increase in number and strength with decreasing T$`_{eff}`$. For instance, there is no hint of them in the Cs I wings of the Gl 406 data (T$`_{eff}`$ ~2800K), while they appear strong and numerous in the observed Cs I wings of DENIS-P J0255-4700 (T$`_{eff}`$ ~1700K). This is expected, as the resonance line wings become broader with decreasing T$`_{eff}`$ and include increasing numbers of the continuum molecular lines. Hence, since the models do not exactly reproduce the observed continuum molecular lines, they also do not accurately reproduce the observed resonance lines at lower temperatures. On the other hand, the rotational velocity of the objects also increases with decreasing T$`_{eff}`$, which smooths out the molecular lines more than it does the stronger resonance lines. At a given T$`_{eff}`$, therefore, this allows better model fits to the data with increasing v sini (compare the 25 km s<sup>-1</sup>, 2200K Cs I fit to DENIS-P J0909-0658 (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I) to the 10 km s<sup>-1</sup>, 2200K Cs I fit to DENIS-P J2146-2153 (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I)). Of course, some of the features within the resonance lines may simply be noise spikes and not real molecular lines, as is probably the case in our low S/N observations (e.g., the Rb I spectrum of DENIS-P J1228-1547 (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I)). However, this is not likely to be the case for those features that repeatedly appear in the spectra of a number of different objects. Furthermore, for Rb I , there is a break in the observed spectra at ~7939Å that is not reproduced in the models; the lack of agreement between models and data is most pronounced for this feature in the Rb I temperature range ~2300-2700K (2MASSW 1439+1929 to Gl 406). This feature, in fact, looks somewhat like the JOLA break that we labored to remove from the models. However, a comparison between the uncorrected models (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I, right panel) and the data in the 7930-7960Å range shows that the observed feature occurs blueward of the JOLA break in the models, and is stronger than the JOLA breaks at the same temperature.
Also, the observed opacity break is limited to only a small span of wavelengths; this is not true for the modeled JOLA breaks, which persist over large wavelength spans. Only in LP 944-20 (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I, right panel) does the observed opacity break resemble the (removed) JOLA break. Finally, the model JOLA break persists to much lower temperatures than the observed feature. Thus, even if this is the feature that the JOLA formulation tries to simulate, the simulation is not a good one. Except for this particular feature, there is reasonable agreement between the models and the data in the 7930-7960Å range after the model JOLA break has been corrected for, so we appear vindicated in making this correction. The effect on the fine analysis of the background JOLA opacity in the model Rb I is complicated. On the one hand, the extra background opacity will tend to make the line look weaker (similar to the effect of dust in the “dusty” models). This would require a cooler model to reproduce a given observed line strength, since the line grows with decreasing temperature. On the other hand, the effect of the JOLA break can be reduced by making the model hotter (which lets the model line appear stronger). The question is whether this reduction goes faster than the reduction in line strength caused by the hotter temperature. Our actual results suggest that it does, which leads us to derive generally hotter temperatures from the Rb I line than the Cs I line. This is why we base our final temperatures on Cs I . For Cs I , there is an absorption line, centered on ~8514Å, that appears in the models but is absent in the data. This feature becomes weaker with decreasing effective temperature. There is also a molecular feature at ~8505Å that becomes stronger with decreasing temperature in the models but follows the opposite trend in the data. This leads to poor agreement between models and data at this wavelength at effective temperatures below ~2000K. These features may both be due to water in the stellar atmosphere. There are other small anomalies in the Cs I region. In all but one of the observed objects with Cs I T$`_{eff}`$ between 2150 and 2800K (2MASSW 1439+1929 to Gl 406, except DENIS-P J1208+0149), there appear to be molecular bands (not strong in the models) at 8506 and 8516Å; the latter impinges on the Cs I line. Thus, when the normalized models and data are overlaid, the models appear brighter than the data over this wavelength range. To correct for this, the models (for only the objects indicated above) were also renormalized so as to fit the data between 8516 and 8524Å. We show both normalizations in the figures for these objects, and tended to choose the temperature of the model renormalized to fit near the Cs I line. It is conceivable that this molecular band is the one that the model JOLA breaks try to simulate, but if so, the simulation is an inadequate one, for much the same reasons given above for the feature at 7939Å. It is worth noting that both features occur over the same temperature range, and might thus be caused by the same molecule, VO or TiO. Finally, a note about which model fits are given, and why. For spectra for which only a single model is given, no other model was found to reasonably fit the data. Where two model fits are given, one model is a better fit to the resonance line core, while the other is a better fit to the resonance line wings.
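For reference, the model profile adjustments of §4.3 that underlie all the fits below (instrumental smoothing at the HIRES resolution, and the three-section JOLA-break correction) can be sketched in Python/NumPy as follows; the polynomial degree, the masking scheme, and the function names are illustrative assumptions:

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def instrumental_broaden(flux, dv, R=31000.0):
    """Smooth a model (uniform log-lambda grid, dv km/s per pixel) with
    a Gaussian whose FWHM matches the HIRES resolution, c/R in velocity."""
    sigma = (C_KMS / R) / 2.3548 / dv        # Gaussian sigma, in pixels
    half = int(5.0 * sigma) + 1
    x = np.arange(-half, half + 1)
    g = np.exp(-0.5 * (x / sigma) ** 2)
    return np.convolve(flux, g / g.sum(), mode="same")

def remove_jola_break(wave, flux, break_lo, break_hi, line_mask, deg=3):
    """Three-section correction for a model JOLA break (Sec. 4.3):
    sections blueward (1) and redward (2) of the break are separately
    normalized by polynomial pseudo-continuum fits with strong lines
    masked, and the narrow break region (3) is set to the average of
    the 5 redmost normalized points of section 1. `line_mask` flags
    pixels inside strong lines; wave must be sorted ascending, and the
    polynomial degree is an assumption."""
    s1 = wave < break_lo
    s2 = wave > break_hi
    s3 = ~(s1 | s2)
    out = np.empty_like(flux)
    for sec in (s1, s2):
        keep = sec & ~line_mask
        coef = np.polyfit(wave[keep], flux[keep], deg)
        out[sec] = flux[sec] / np.polyval(coef, wave[sec])
    out[s3] = out[s1][-5:].mean()
    return out
```

A rotational convolution, like the kernel sketched in §4.1, would be applied before the instrumental smoothing, following the order of operations described in §4.3.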
Using the effective temperatures which we find below to best fit our profiles, Martín et al. 1999b have assigned subclasses to the L-type objects in our sample based on low dispersion spectral indices. They propose a subclass designation similar but not identical to that proposed by Kirkpatrick et al. 1999a. While hotter objects have the same classification in both schemes, the Martín et al. 1999b scheme extends to L6 for the coolest objects, while that of Kirkpatrick et al. 1999a extends to L8 for the same objects. Since Kirkpatrick et al. 1999a assign these objects a temperature of ~1500K, while we find them to have T$`_{eff}`$ of ~1700K, we prefer to follow Martín et al. 1999b. The designation “bd” in Tables 1 and 2 indicates that lithium has been seen in the object, certifying it as substellar (given the low temperatures). Its absence does not mean the object is definitely stellar; brown dwarfs above 60 M<sub>J</sub> will eventually deplete lithium. In Table 2 we provide a summary of our results. The spectral types listed are from Martín et al. 1999b for the objects common to both our samples. In most cases, their spectral classification is consistent with our T$`_{eff}`$ determinations (using the relation between spectral subclass and temperature given by them). In the case of DENIS-P J1208+0149, however, Martín et al. 1999b find M9, while our Cs I T$`_{eff}`$ belongs to a slightly later subclass (M9.5 in their scheme). Martín et al. 1999b also assign L6 to DENIS-P J0255-4700 and 2MASSW 1632+1904, while our T$`_{eff}`$ determination suggests a slightly earlier type (~L5-5.5). 2MASSW 1146+2230, 2MASSW 1439+1929, DENIS-P J0021-4244 and DENIS-P J2146-2153 are not included in the Martín et al. 1999b sample. In the cases of 2MASSW 1146+2230 and 2MASSW 1439+1929, therefore, the listed spectral types are from Kirkpatrick et al. 1999a, since they are consistent with our T$`_{eff}`$ determination. DENIS-P J0021-4244 and DENIS-P J2146-2153 are not in the Kirkpatrick et al. 1999a sample either, so we assign a spectral type to them based on our derived T$`_{eff}`$ for the two objects and the spectral type given by Martín et al. 1999b for objects with similar T$`_{eff}`$.

### 5.2 Gl 406 (M6V)

Delfosse et al. (1998) find a v sini of $`<`$3 km s<sup>-1</sup> for Gl 406; at our level of accuracy, this is indistinguishable from no rotation at all. With v sini =0 km s<sup>-1</sup>, we find T$`_{eff}`$ =2800K from the Cs I cleared-dust model fits and 2700K from the Rb I , in agreement with the result of Jones et al. (1994). The 2800K model Cs I line is slightly shallower than what is observed (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I) but otherwise fits the line wings and core well. The molecular lines near the resonance line are reasonably well fit after renormalization, while the fit to the molecular lines further away is good before renormalization. The model Rb I line fits the observed line core and wings well, except in the partially modeled molecular feature in the right wing. The molecular lines are also fit, and there is no significant worsening of the fit as one moves away from the line over the wavelength range considered here. It is noteworthy that this is the only object in our sample for which we derive a Cs I T$`_{eff}`$ that is higher than the Rb I one; in all our other objects, the Cs I T$`_{eff}`$ is lower than the Rb I one.
### 5.3 LHS 2924 (M9V)

We derive a rotational velocity of 10 km s<sup>-1</sup> through cross-correlation with Gl 406. An effective temperature of 2400-2500K is derived from the Cs I fit, and 2600K from the Rb I fit; these values are somewhat higher than the ~2200K found by Jones et al. (1994) from a low-resolution IR spectrum. In both cases the line core and wings are fit very well (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I). In Cesium, the molecular lines near the resonance line fit well after renormalization, while the lines further away match better before renormalization. The partially modeled molecular lines are evident in the wings of the Rb I line, but the overall fit to the molecular lines is good in Rubidium. The effective temperature scale for late-M dwarfs is in fact still under discussion. These objects may be, in some ways, more sensitive to the treatment of dust than the L dwarfs (for which ignoring dust opacities altogether seems to work). We have a large sample of other late-M spectra, and will revisit this question in a subsequent paper (with updated AH models). The several objects listed as M9.5 below do not have exactly the same temperatures.

### 5.4 LP 944-20 (M9V)

This is a confirmed brown dwarf, with a mass between 0.056 and 0.064 M<sub>☉</sub>, aged between 475 and 650Myr (Tinney (1998)). Through cross-correlation with Gl 406, we derive a rotational velocity of 30 $`\pm `$ 2.5 km s<sup>-1</sup>, which agrees with Tinney & Reid (1998). The Cs I fit gives T$`_{eff}`$ =2400K and the Rb I fit gives T$`_{eff}`$ =2600K. Kirkpatrick et al. 1999a assign a spectral type of M9V to LP 944-20, similar to LHS 2924. We find it marginally cooler than LHS 2924, at least as implied by the cesium fit. In both Cs I and Rb I , the models fit the line core and wings well (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I), and the nearby molecular lines moderately well. The higher rotational velocity smooths the model molecular lines to a degree comparable to that observed, but there is poor agreement between individual lines in the models and the data.

### 5.5 DENIS-P J0021-4244 (M9.5V)

We obtain a v sini of 17.5$`\pm `$2.5 km s<sup>-1</sup> for this object, through cross-correlation with Gl 406. The Cs I fit yields an effective temperature of 2300K, and the Rb I fit an effective temperature of 2400-2500K. The resonance lines in both cases are fairly well fit (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I), with the data being slightly wider than the models in the wings, due to poorly modeled molecular lines. In Rb I , the 2400K model is a little deeper than the data and overestimates the width of the line by fitting the unmodeled molecular line on the right, while the 2500K line matches the true width of the line but is slightly shallower than the data. The nearby molecular lines are well modeled in the Rb I section (at least in shape, if not in depth) and somewhat less so in the Cs I . The lines further away from the resonance line are well reproduced in the Cs I section before renormalization. The TiO features in this object are quite similar to those of BRI 0021 (M9.5).

### 5.6 DENIS-P J1208+0149 (M9V)

We derive v sini = 10 km s<sup>-1</sup> for DENIS-P J1208+0149, through cross-correlation with Gl 406.
An effective temperature of 2200-2300K is found from the Cs I fits, and of 2500K from the Rb I fits. The Rb I fit is good, except for the unmodeled molecular lines in the wings. The fit to the molecular continuum in this spectrum is good as well. In Cs I , the line core is fit well by the 2200 and 2300K models, though the former is slightly broader and the latter a little shallower than the data; both models appear much stronger than the data at the right edge of the resonance line. Our results are slightly puzzling. Martín et al. 1999b find this object to be of spectral class M9, while our Cs I T$`_{eff}`$ puts it at ~M9.5. However, our Rb I T$`_{eff}`$ , which agrees with the Cs I T$`_{eff}`$ ordering for most of the objects in our sample, does not in this case, but implies instead that the object is somewhat hotter than 2200-2300K. Unfortunately, we do not have color data for this object, which might help resolve these discrepancies. The opacity band over 8516-8524Å, which is apparent in all other objects in the temperature range 2150-2800K, is not seen in this object. These anomalies may result from the noisiness of the data. This will have to be resolved with better spectra and improved models.

### 5.7 DENIS-P J2146-2153 (L0)

We find a rotational velocity of 10 $`\pm `$ 2.5 km s<sup>-1</sup> for DENIS-P J2146-2153, through cross-correlation with Gl 406. An effective temperature of 2200K is derived from the Cs I fit and of 2400K from the Rb I fit. In the Cs I section, the inclusion of unmodeled molecular lines makes the observed resonance line appear broader than the model in the wings. The Rb I resonance line is well modeled, though the data is slightly broader than the model in the left wing, due to a partially modeled molecular line (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I). Both models are good fits to the continuum molecular lines, the Cesium models slightly more so before renormalization than after. We also note that while we assign this object to class M9.5, the low resolution spectrum of Tinney et al. (1998) indicates M9; in particular, at low-resolution this object appears hotter than BRI 0021 (M9.5). A comparison of the TiO features in our spectra with a HIRES spectrum of that object bears this out. We need the Cs I spectrum of BRI 0021 to sort this out; our Rb I line here is stronger than that reported for BRI 0021 by Basri & Marcy (1995).

### 5.8 DENIS-P J0909-0658 (L0)

We obtain a rotational velocity of 25$`\pm `$2.5 km s<sup>-1</sup> for this object, through cross-correlation with Gl 406. An effective temperature of 2200K is derived from the Cs I fit, and 2400K from the Rb I fit. In both cases, the line core and wings are well simulated by the models (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I), except for a poorly modeled molecular line in the right Rb I wing. The Cs I line does not show any evidence of unmodeled molecular lines; this may be a result of this object’s higher v sini , which would tend to smooth out such lines, or the fact that the previously unmodeled molecular line has disappeared with decreasing T$`_{eff}`$ . The former explanation would seem to be the more correct one, since 2MASSW 1439+1929, which we find (see below) has a very similar Cs I T$`_{eff}`$ (~2150K) but is a slower rotator (10 km s<sup>-1</sup> ), does show evidence for unmodeled lines in Cs I .
Nearby molecular lines are fit well in the Cs I section, and poorly in the Rb I section, which appears noisier than the Cs I section. The low resolution spectra of this object also suggest it is a good marker of the start of the L spectral type (Martín et al. 1999b).

### 5.9 G196-3B (L1)

This sub-stellar object was recently discovered by direct imaging (Rebolo et al. (1998)), at a distance of ~300 AU around the young low-mass star G196-3 (M3Ve). Its mass has been determined to be 25 (+15/-10) M<sub>J</sub> and its age to be ~100Myr (from the activity level of the primary star). By comparing low-resolution spectra of G196-3B with those of Kelu-1 and DENIS objects, Rebolo et al. infer an effective temperature of 1800-2000K. We derive a rotational velocity of 10$`\pm `$2.5 km s<sup>-1</sup> for G196-3B through cross-correlation with Gl 406. This is relatively slow at such low mass, especially for such a young object (however, this may be simply a projection effect). With this v sini , we find an effective temperature of 2200K from the Cs I model fit and 2400K from the Rb I fit. The fit in Cs I is good in the line core and wings, except at the very edge of the red wing. The overall fit to the molecular lines in the Cesium section is good before renormalization. The model Rb I line is slightly deeper than the data; the observed core may be chopped by noise. It is also slightly narrower than the observation in the wings due to poorly modeled molecular lines. The overall fit to the molecular lines is good in the Rubidium section, though the data appears somewhat noisy. Our derived T$`_{eff}`$ is higher than Rebolo et al.’s estimate. However, the molecular band between 8516Å and 8524Å that is present in all other objects in our sample that have T$`_{eff}`$ ~2200K is present in G196-3B as well, giving credence to our higher temperature estimate. On the other hand, the low-resolution spectrum obtained by Rebolo et al. shows a clear lack of TiO bands and only a very faint VO band, indicating that this is an early L class object. Our high effective temperature might be due to effects of low surface gravity. Given the apparent youth of G196-3B, it is reasonable that the object is still contracting. As mentioned in §4.2, an analysis of “no-dust” AH models indicates that attempting to fit low-gravity scenarios with a higher gravity model might lead to spuriously high T$`_{eff}`$ values. This question will have to be examined in the future with newer models. Since the no-dust AH models also indicate that the TiO band (~7040-7140Å) is relatively insensitive to gravity but sensitive to temperature, it may also be useful in separating gravity and temperature effects in G196-3B, which shows a lack of TiO bands.

### 5.10 2MASSW 1439+1929 (L1)

This is a very high proper motion object, at a distance of ~15.1 pc (Kirkpatrick et al. 1999a). We find a rotational velocity of 10$`\pm `$2.5 km s<sup>-1</sup> for this object through cross-correlation with Gl 406. The cross-correlation function was checked against that obtained for G196-3B - since both objects are found to have v sini of 10 km s<sup>-1</sup> through cross-correlation with Gl 406, and are of similar spectral type, their correlation functions with Gl 406 should be identical - and v sini ~10 km s<sup>-1</sup> was confirmed. This object was subsequently used as a calibrator for deriving the radial and rotational velocities of all the later L dwarfs in our sample.
The Cs I model fits imply an effective temperature of 2100-2200K for 2MASSW 1439+1929, and the Rb I fits 2300K. Both 2100 and 2200K models are slightly narrower than the data in the red wing of Cs I , because of unmodeled molecular lines. The 2100K model also seems to reproduce the molecular line in the blue wing, but is slightly deeper than the data, while the 2200K model does not reproduce the observed blue wing as well, but matches the core depth better. The molecular lines near the Cs I line are reasonably matched by both models after renormalization, and the general molecular continuum well fit before renormalization. This is the coolest object in which the anomalous opacity band between 8516-8524Å is seen. In the Rubidium section, the model is slightly narrower than the observed Rb I line due to unmodeled molecular lines, and the continuum is generally well matched in shape but not strength. Our spectral classification of 2MASSW 1439+1929 as L1 agrees with the result of Kirkpatrick et al. 1999a.

### 5.11 Kelu-1 (L2)

Kelu-1 is a free-floating brown dwarf (Ruiz et al. (1997)) at a distance of ~20 pc (Ruiz, priv. comm.). A rotational velocity of 60$`\pm `$5 km s<sup>-1</sup> is derived through cross-correlation with 2MASSW 1439+1929. From the Cs I fits an effective temperature of 2000K is found, and from the Rb I fits, 2200K. In Cs I , the inclusion of poorly modeled lines makes the data appear much broader than the model. In Rb I , again, the data appears broader than the best-fit 2200K model. The cores of the observed Cs I and Rb I lines appear to be somewhat chopped by noise. We note here that the Cs I and Rb I fits, at 2000 and 2200K respectively, are substantially better if a v sini of 80 km s<sup>-1</sup> is used, instead of 60 km s<sup>-1</sup> . However, the higher velocity is not supported by our cross-correlation v sini determination. Our derived rotational velocity for Kelu-1 is very high, even given the trend towards increasing rotational velocity as one moves to the bottom of the main sequence (Basri et al. (1996)). Presumably we are near its equatorial plane. In both Cs I and Rb I , the models match the overall observed smoothness of the molecular lines, lending credence to our high derived v sini , but do not match the lines in detail. If indeed our derived T$`_{eff}`$ (which is dependent on our derived v sini ) for Kelu-1 is correct, then we find it to be cooler than 2MASSW 1439+1929 but hotter than 2MASSW 1146+2230, which is in agreement with the results of Kirkpatrick et al. 1999a. Moreover, Ruiz et al. (1997) find that the best fit to the Kelu-1 spectral energy distribution is given by a dusty AH model with T$`_{eff}`$ = 1900 $`\pm `$ 100 K and log\[g\] = 5.0 - 5.5 (with \[M/H\] fixed at 0.0). Leggett et al. (1998) find a best fit to the Kelu-1 IR colors (using AH dusty models) at T$`_{eff}`$ ~2000K, and \[M/H\] = 0.0 (holding log\[g\] fixed at 5.0). Together, the two studies suggest that log\[g\] = 5.0-5.5 and \[M/H\] ~0.0, consistent with the values we have chosen, are suitable choices for Kelu-1, and that our T$`_{eff}`$ is also in the right ballpark. Taken together, this again suggests that our v sini is also approximately correct. One implication of such a high v sini is that spectra with an exposure time of an hour or more actually sample most of the stellar surface. Searches for variability or “weather” on this object must therefore be done using short exposures (forcing low spectral resolution).
The rapid rotation may well be the explanation for the rather different equivalent widths reported by different authors (Rebolo et al. (1998)). The rotation means that the pseudo-continuum will be set somewhat differently at low, medium, and high spectral resolutions. In fact, we now have high resolution spectra of Kelu-1 from 4 separate nights spanning a year (2 pairs), and see no evidence of variability in the line profiles. We also note here that a parallax for Kelu-1 has been found (Ruiz, private communication); this parallax indicates that Kelu-1 appears somewhat brighter than would be expected from T$`_{eff}`$ ~1900K (as advocated by Ruiz et al. (1997)). This may imply that this object is a binary (although it appears single in HST images).

### 5.12 DENIS-P J1058-1548 (L2.5)

A corrected rotational velocity of 37.5$`\pm `$2.5 km s<sup>-1</sup> is obtained by correlation with 2MASSW 1439+1929. An effective temperature of 1900-2000K is derived from the Cs I fit, and of 2000K from the Rb I fit, in agreement with the value (~2000K) derived by Leggett et al. (1998) from infrared photometry for \[M/H\]=0.0. In Cs I , the 1900K model matches the wings of the line but is slightly deeper than the data, and the 2000K model matches the depth of the line but is slightly narrower than the data (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I). In Rb I , the model is both slightly deeper and narrower than the data; this may be due to a core that is chopped by noise and poorly modeled lines in the wings. The pseudo-continuum is not as smooth as the models, perhaps because of noisy data.

### 5.13 2MASSW 1146+2230 (L3)

This object is a close double (separation ~1 arcsec); an LRIS spectrum reveals the companion to be a background star of much earlier type (Kirkpatrick et al. 1999a). 2MASSW 1146+2230 shows a strong lithium line with an equivalent width of 5.1Å (Kirkpatrick et al. 1999a). We derive a v sini of 32.5$`\pm `$2.5 km s<sup>-1</sup> for this object through cross-correlation with 2MASSW 1439+1929. The Cs I model fits give a T$`_{eff}`$ of 1900-2000K, while the Rb I fits give 2100K. The observed spectrum appears noisy, and the models match the general trend but not the observed sharpness of the continuum lines in both the Cesium and Rubidium sections. The Cs I line is chopped in the very core by noise. This is one of the few objects in our sample in which the ordering implied by the T$`_{eff}`$ from Rb I fits does not agree with that implied by the T$`_{eff}`$ from Cs I . The Rb I fits imply that 2MASSW 1146+2230 is slightly hotter than DENIS-P J1058-1548, while the Cs I fits imply that it is slightly cooler. However, the two objects are very close in spectral type and v sini , so, given the noise in our spectra, this does not constitute a serious anomaly. If we follow our convention of giving the T$`_{eff}`$ derived from Cs I more weight than that from Rb I , we find 2MASSW 1146+2230 to be of spectral type L3, in agreement with the result of Kirkpatrick et al. 1999a.

### 5.14 LHS 102B (L4)

LHS 102B is an L dwarf proper motion companion to LHS 102 (GJ 1001), a field mid-M dwarf. It was very recently discovered in the course of the EROS 2 proper motion survey (Goldman et al. (1999)). The rotational velocity of this object is found to be 32.5$`\pm `$2.5 km s<sup>-1</sup>, from cross-correlation with 2MASSW 1439+1929. From Cs I model fits, we find a T$`_{eff}`$ of 1800-1900K and from Rb I 1900K.
In both Cs I and Rb I , the resonance lines are well modeled (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I), though the models are slightly narrower than the data, because of poorly modeled molecular lines. The Cs I fit is also somewhat deeper than the data. The overall fit to the molecular lines is better in Cesium than in Rubidium; in both cases (though more so in Rubidium) the model continuum is smoother than the data, probably due to noisy data. This object is similar to GD165B in general appearance. It shares with GD165B an ambiguity about whether it is really a brown dwarf. While the lithium test has not been applied to GD165B (Kirkpatrick et al. 1999b have argued persuasively that it would fail), LHS 102B definitely fails it. This does not mean that either object is stellar – only that they are older than about 200Myr and greater in mass than 60 M<sub>J</sub> . It is likely that objects at about this temperature are near the minimum main sequence temperature. When the L subclass of the minimum main sequence temperature (for a given metallicity) is precisely identified, all cooler objects can safely be certified as brown dwarfs without regard to the lithium test.

### 5.15 DENIS-P J1228-1547 (L4.5)

This was the second field brown dwarf to be confirmed by the lithium test (Martín et al. (1997)). The corrected rotational velocity for this object was found to be ~22$`\pm `$2.5 km s<sup>-1</sup> through cross-correlation with 2MASSW 1439+1929. From Cs I model fits, a T$`_{eff}`$ of 1800K is found, and from Rb I fits, 1900K. However, the Rb I data is very noisy, and thus the effective temperature derived from it is rather approximate. The Cs I line is well modeled (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I), but the model is slightly narrower than the data in the wings, due to poorly modeled molecular lines, and the data is chopped in the very core by noise. The continuum in the Cesium data also appears rather noisy, and does not match the model smoothness. However, recently Martín et al. 1999a have found that this object is actually a binary (separation 0.3 arcsec), with components of nearly equal brightness. It may be that one brown dwarf is slightly cooler than the other, in which case we are looking at a composite spectrum, and the issues of line-width and noisiness just mentioned will have to be reexamined after taking binarity into account. Leggett et al. (1998) derive an approximate effective temperature of 2000K (with \[M/H\]=0.0) for DENIS-P J1228-1547 from infrared photometry, and find it to be cooler than DENIS-P J1058-1548, in agreement with our results (and that of Tokunaga & Kobayashi (1999)).

### 5.16 DENIS-P J0205-1159 (L5)

A corrected rotational velocity of ~22$`\pm `$5 km s<sup>-1</sup> was derived for DENIS-P J0205-1159 through cross-correlation with 2MASSW 1439+1929. From the Cs I fits we get a T$`_{eff}`$ of 1700-1800K. The noisiness of the Rb I data precluded diagnostic profile-fitting to the Rb I line; however, the 1700K model generally matches the form of the data. The Cs I line is well modeled, though somewhat noisy as well - the very core of the line appears chopped by noise, and the molecular continuum in the data does not match the model smoothness (Fig. An Effective Temperature Scale for Late M and L Dwarfs, from Resonance Absorption Lines of Cs I and Rb I).
Unmodeled features in the wings of the observed Cs I line may be due to noise or poorly modeled molecular lines. Both models exclude the apparently real molecular line in the right wing, but the 1700K model fit assumes that the features in the left wing are noise, while the 1800K fit assumes that they are real and thus excludes them completely. Leggett et al. (1998) use infrared photometry to propose an effective temperature of $`\sim `$ 2000K, which is higher than our result. More discrepantly, they find it to be hotter than both DENIS-P J1228-1547 and DENIS-P J1058-1548, which is definitely at odds with our results. Tinney et al. (1998) and Delfosse et al. (1999), on the other hand, find the same ordering in temperature based on low dispersion spectra as we do. Tokunaga & Kobayashi (1999) find the same from moderate resolution K band spectra (and a new proposed K color index diagnostic ratio). These discrepancies show why it is valuable to have several independent temperature calibration methods. We suspect this object is cool enough to be classified as a brown dwarf on the basis of temperature alone. ### 5.17 2MASSW 1632+1904 (L6) 2MASSW 1632+1904 is the coolest L dwarf in Kirkpatrick et al. 1999a, and they find a possible lithium line at their detection limit (EW $`\sim `$ 9.4 Å). We do not have the lithium line in our observations of this object. We derive a v sini of 30$`\pm `$10 km s<sup>-1</sup> for this object through cross-correlation with 2MASSW 1439+1929. Our spectrum for this object is noisy. From the Cs I model fits we derive an effective temperature of $`\sim `$1700K. The model matches the general trend but not the sharpness of the lines in the noisy continuum. In the Rubidium section, the noisiness of the data precluded any accurate fits, but both the 1600 and 1700K models appear to follow the general shape of the Rb I line. Martín et al. 1999b ascribe a spectral type of L6 to this object; we have adopted this since it is consistent with our T$`_{eff}`$ determination. Kirkpatrick et al. 1999a, on the other hand, classify this object as L8. It is to be noted, however, that they correspondingly find 2MASSW 1632+1904 to have a T$`_{eff}`$ of $`\sim `$1500K. A T$`_{eff}`$ of $`\sim `$1600-1700K, as we find, corresponds to $`\sim `$L6 in their classification scheme as well. It is also possible that our derived T$`_{eff}`$ is higher than that of Kirkpatrick et al. 1999a because this is a lower-gravity object (as discussed in §4.2). “Cleared-dust” models at varying gravities are needed to settle this question. ### 5.18 DENIS-P J0255-4700 (L6) This is our object with the strongest Cs I line, so we think it is the coolest of the sample. For this object the rotational velocity was found to be 40$`\pm `$10 km s<sup>-1</sup> by cross-correlation with 2MASSW 1439+1929. From Cs I model fits, we find a T$`_{eff}`$ of 1700K, and from Rb I, 1900K. In Cs I, the inclusion of unmodeled molecular lines makes the smoothed data appear much broader than the model. The observed molecular continuum appears somewhat noisy, but seems to match the overall smoothness of the models. The Rb I data is noisy, but the resonance line is reasonably fit by the 1900K model after excluding the unmodeled molecular lines. Despite the fact that it does not show lithium, it is very likely a brown dwarf based on its Cs I temperature (with mass greater than 60 $`M_J`$). Griffith et al.
(1998) caution that lithium may fail to show up due to the formation of molecules such as LiCl, but this object is not too much cooler than DENIS-P J1228-1547 (which shows lithium). Even more reassuring is the detection of lithium in some objects this cool by Kirkpatrick et al. 1999a, although they do see a possible beginning of the expected weakening of lithium at very cool temperatures. The results of Griffith et al. (1998) are suspect, since they have not done a self-consistent treatment of the stellar model atmosphere. Our models include lithium molecular formation but predict that strong lithium lines remain at these temperatures. ## 6 Conclusions In general we find that the cleared-dust model profiles fit the observed Cs I and Rb I resonance lines reasonably well (with errors of $`\pm `$ 50K in the fitting). We further find that, in a given object, the effective temperatures derived independently from Cs I and Rb I agree to within $`\pm `$ 150K, with the Rb I fit consistently giving the higher temperature. We think this is due to the JOLA modeling of overlying molecular opacities, and the overlap of a JOLA band with the Rb I line. We find that the Cs I and Rb I analyses independently imply the same ordering of objects by effective temperature. The temperature scale derived from these lines may be a little hotter than that which seems to be coming out of consideration of low dispersion spectral energy distributions. This could be due to our approximate treatment of dust or variations in gravity and/or metallicity, or perhaps the infrared treatment needs adjustment. Our final temperatures are based on the Cs I lines. On the whole, we find good agreement between the models and the data with log\[g\]=5.0 and \[M/H\]=0.0. With these values, we estimate our errors in T$`_{eff}`$ to be $`\pm `$50K, since our model grid has a spacing of 100K. However, an analysis of “no-dust” AH models indicates that gravity variations in the log\[$`g`$\]=4.5-5.5 regime may lead us to infer a T$`_{eff}`$ greater by $`\sim `$ 300K than the real value, for low-gravity objects. Metallicity effects, which act in an opposite sense to gravity ones, may also affect our T$`_{eff}`$ values. “Cleared-dust” models with varying gravities and metallicities are needed to resolve this issue. In the cases where the models do not reproduce the resonance lines very well, various effects may be responsible: collisional broadening effects due to metallicity and/or surface gravity variations, imperfectly modeled molecular opacity overlapping the line, or a low S/N ratio in the data. We find that the molecular lines are not modeled as well as the resonance lines are, and that the fit to the molecular lines generally worsens as one moves lower in effective temperature. Both effects are expected, given the comparative paucity of well-determined parameters for many of the observed molecules. Obviously it is desirable to improve our modeling of these lines. These deficiencies affect the analysis of the objects at both high and low spectral dispersions (and in both the optical and near-IR). The treatment of dust is still problematic. Although it must be true that there is rather little dust opacity in the far red (or we would not observe such strong atomic lines), it is not clear that our assumption of $`no`$ opacity is valid. We must also reconcile the results in the far red with those in the near infrared, which apparently require more dust opacity.
We suggest that the dust has largely settled out of the portion of the atmosphere sampled by the alkali resonance lines, but is present in the lower photosphere where the near infrared is formed. Tsuji et al. (1999) have made a similar suggestion in the context of Gl 229B. It will be important to study line profiles in the near infrared to help sort this out. The suggestion is that stratified dust models are worth pursuing. At the moment, the height or extent of such dust stratification can probably be better informed by observations than by theory. This is the weak point in our analysis; the addition of some dust opacity would tend to reduce the inferred temperatures. It is not clear whether the good fits to the line shapes can be preserved as dust opacity is added (and a significant amount is needed to affect these very strong lines). The temperature scale found here led Martín et al. 1999b to propose a subclass designation scheme for the L spectral class in which L0 occurs at about 2200K, and each subclass is 100K cooler. For the coolest L dwarf in common with Kirkpatrick et al. 1999a (2MASSW 1632+1904), there is a disagreement in temperature. We predict that there is still a gap of a few hundred degrees between such objects and the T dwarfs, while Kirkpatrick et al. 1999a believe the gap is quite small. On the other hand, variations in gravity and/or metallicity may be causing us to infer an artificially high T$`_{eff}`$ for this object. Depending on the resolution of this issue, either the Martín et al. 1999b or Kirkpatrick et al. 1999a L star classification scheme should be modified in order that the L stars extend down to the appearance of methane in the K band. One clear and fairly model independent result from our analysis is that the average rotation velocity of very low mass stars gets higher and higher as one moves down through the bottom of the main sequence. It is clear that as hydrogen burning begins to turn off near the substellar boundary, the magnetic braking which affects all higher mass convective stars is also weakening. Our sample is entirely from the field, and some objects are several hundred Myr old (although there is certainly an observational bias against finding very old objects, particularly if they are brown dwarfs). It appears that there is relatively little angular momentum evolution among these objects, and that they are typically born with relatively rapid rotation. Their lack of magnetic activity is seen directly (through a lack of H$`\alpha `$ emission), as well as indirectly in their rapid rotation. The effort to understand these very low mass objects has just begun. Acknowledgments: This research is based on data collected at the W. M. Keck Observatory, which is operated jointly by the University of California and the California Institute of Technology. GB and SM acknowledge the support of NSF through grant AST96-18439. SM would like to thank GB and Don McCarthy (Steward Observatory) for invaluable mentorship. EM acknowledges support from the Fulbright-DGES program of the Spanish Ministry of Education. AH’s work is supported by the CNRS, a NASA LTSA NAG5-3435 and a NASA EPSCoR grant to Wichita State University. Some of the calculations presented in this paper were performed on the IBM SP2 of the CNUSC in Montpellier, and at the Cray T3E of the CEA in Grenoble. PHH’s work was supported in part by NSF grant AST-9720704, NASA ATP grant NAG 5-3018 and LTSA grant NAG 5-3619 to the University of Georgia.
Some of the calculations presented in this paper were performed on the IBM SP2 and the SGI Origin 2000 of the UGA UCNS and on the IBM SP of the San Diego Supercomputer Center (SDSC), with support from the National Science Foundation, and on the Cray T3E of the NERSC with support from the DoE. We thank all these institutions for a generous allocation of computer time.
# Paramagnetic Reentrance Effect in NS Proximity Cylinders ## Abstract A scenario for the unusual paramagnetic reentrance behavior at ultra-low temperatures in Nb-Ag, Nb-Au, and Nb-Cu cylinders is presented. For the diamagnetic response down to temperatures of the order 15 mK, the standard theory (quasi-classical approximation) for superconductors appears to work very well, assuming that Ag, Au, and Cu remain in the normal state except for the proximity-induced superconductivity. Here it is proposed that these noble metals may become p-wave superconductors with a transition temperature of order 10 mK. Below this temperature, p-wave triplet superconductivity emerges around the periphery of the cylinder. The diamagnetic current flowing in the periphery is compensated by a quantized paramagnetic current in the opposite direction, thus providing a simple explanation for the observed increase in the susceptibility at ultra-low temperatures. In 1990 Visani et al. reported a surprising paramagnetic reentrance phenomenon at ultra-low temperatures. When a Nb cylinder of diameter $`\sim `$ 20-100 $`\mu `$m, covered with a thin film of Ag with a thickness of a few $`\mu `$m, is cooled below the superconducting transition temperature of Nb, the system initially exhibits the expected diamagnetic response down to temperatures around 10 mK. However, when the temperature is further lowered, the uniform magnetic susceptibility starts to increase again, indicating a decreasing diamagnetic response at ultra-low temperatures as $`T\rightarrow 0`$. A very similar observation was later reported for an analogous Nb-Cu system, and more recently also for a Nb-Au proximity cylinder. The standard theory for the proximity effect is most conveniently described in terms of the quasi-classical approximation, which is a more refined version of the approach first discussed in Ref. Within this approach, one can describe the diamagnetic response of an S-N system down to 100 mK perfectly well with only one adjustable parameter, the quasi-particle mean free path in the normal state. The sudden failure of this quasi-classical approach below 100 mK, suggested by the observed reentrance behavior, implies that some crucial and new element is missing from the usual model. Therefore, Bruder and Imry have proposed a new kind of persistent current around the edge of the normal metal, circulating in the direction opposite to the diamagnetic current. Although this current is associated with an extended state in the weak localization theory, it turns out from a simple estimate that it is of the order of $`10^3`$ times smaller than the one required to accurately describe the experiments. More recently, Fauchère et al. have proposed that the pairing interaction in noble metals, such as Cu, Ag, or Au, is repulsive. This implies that the sign of $`\mathrm{\Delta }(r)`$ changes at the N-S boundary, thus generating an intrinsic $`\pi `$-junction at the boundary. As in the model by Bruder and Imry, this $`\pi `$-junction could generate a current in the direction opposite to the diamagnetic current, resulting in a paramagnetic reentrance effect at ultra-low temperatures. However, a Stoner analysis suggests that the repulsive interaction used by these authors is likely to cause a magnetic or charge-density-wave instability with a transition temperature of $`T_c^M\sim `$ 100 mK or higher. Other sets of experiments on the proximity effect appear to exclude such a large repulsive potential.
Furthermore, if the pairing potential is repulsive, we would rather expect p-wave superconductivity in these noble metals if we follow the analysis of Kohn and Luttinger. Let us therefore propose here that p-wave superconductivity is generated in the outer film below a critical temperature of $`T_c\sim `$ 10 - 100 mK. Earlier experiments have so far not excluded the possibility of anisotropic superconductivity in Cu, Ag, or Au at ultra-low temperatures. The main problem for the observation of these transitions is that anisotropic superconductors are highly sensitive to disorder. For example, assuming a p-wave transition temperature in the regime of $`T_c\sim 0.1`$ K, a sample with a quasi-particle mean free path of 10 $`\mu `$m or longer would be needed. In our proposed scenario for the NS proximity cylinders, an additional order parameter $`\mathrm{\Delta }_p(r)`$, associated with intrinsic p-wave superconductivity in the outer film, has to establish itself below $`T_c`$ against the presence of the proximity-generated s-wave superconductivity with $`\mathrm{\Delta }_s(r)`$ penetrating into the outer film. The p-wave superconducting ordering will thus generate a counter-current, reducing the kinetic energy associated with $`\mathrm{\Delta }_p(r)`$. This counter-current will be quantized, and an approximate expression can be derived by minimizing the kinetic energy, as we will show here. We assume that the London penetration depth of the thin film is larger than $`d_N`$, the thickness of the film. The kinetic energy associated with the p-wave superconductor is then approximately given by $$E_{kin}=\frac{1}{2}\rho _S^p\left(2eBr-\frac{2\pi n}{l}\right)^2.$$ (1) Here $`\rho _S^p`$ is the superfluid density of the p-wave superconductor, $`n`$ is the integer quantum number of the quantized current, and $`l`$ is the circumference of the thin film, encircling the inner s-wave superconductor. By minimizing with respect to $`n`$ we find $$n=2eB(l/(2\pi ))^2=0.7958Bl^2,$$ (2) where $`B`$ and $`l`$ are expressed in gauss and $`\mu `$m respectively. Hence it is very likely that a spontaneous counter-current with $`n=1,2,3,\dots `$ is generated, compatible with the actual experimental conditions. In deriving Eq. 2 we used $`r\simeq l/(2\pi )`$ and $`d_N\ll l`$. The spatial variation of the magnetic field $`B(r)`$ is obtained from $$B_e-B(r)=\frac{1}{\lambda _p^2}\int _r^{r_0}dr^{\prime }\left(A_\varphi (r^{\prime })-\frac{n}{\varphi _0l}(r_0-r^{\prime })\right),$$ (3) where $`A_\varphi (r)=\int _0^rdr^{\prime }B(r^{\prime })`$ is the azimuthal component of the vector potential. This leads to a simple differential equation, $$\frac{\partial ^2B(r)}{\partial r^2}=\frac{1}{\lambda _p^2}B(r),$$ (4) where $`\lambda _p^{-2}=4\pi e^2\rho _s^p/m`$ defines the magnetic penetration depth $`\lambda _p`$, and $`\rho _s^p`$ is the superfluid density of the p-wave superconductor. The solution $$B(r)=B_e\mathrm{exp}[-(r_0-r)/\lambda _p]+\frac{n}{\lambda _p^2\varphi _0l}(r-r_0)$$ (5) is valid in the outer region $`r_0-r\lesssim d_N`$. $`B(r)`$ is exponentially suppressed in the inner Nb-cylinder, as shown in Fig. 1. In recent experiments it was found that the onset temperature $`T^{*}`$ of this paramagnetic reentrance behavior appears to be inversely proportional to the length of the cylinder periphery. As shown in Ref., the experimental data can be represented by $$\chi _r(T)=A\mathrm{exp}\left(-\frac{T}{T^{*}}\right),$$ (6) where $`Al^2=const.`$ and $$T^{*}=\frac{\mathrm{\hbar }v_F}{2\pi k_Bl}.$$ (7) Let us first attempt to understand the zero-temperature limit of Eq. 6, i.e. the dependence of the prefactor $`A`$ on the system dimension $`l`$.
In the absence of a paramagnetic current, the magnetic field is almost completely pushed out of the sample except near the edge of the thin outer film (dashed line in Fig. 1). Once this layer becomes a p-wave superconductor at ultra-low temperatures, the paramagnetic counter current at the PS interface changes $`B(r)`$ as indicated by the solid line in Fig. 1. Assuming $`d_N\ll d_S`$, the magnetic field in the sample may then be approximately expressed as $$\overline{B}\simeq \frac{n\varphi _0}{\pi (l/2\pi )^2}=4\pi n\varphi _0/l^2,$$ (8) where $`n`$ is a small integer. Under these conditions we can expect that the corresponding susceptibility is given by $$\chi _r(T=0)=4\pi n\varphi _0(Bl^2)^{-1},$$ (9) in agreement with the experiments, and thus $`\chi _r`$ diverges as $`B^{-1}`$. Damping effects may smooth out this divergence as $`B^{-1}\rightarrow \frac{B}{B_0^2+B^2}`$, consistent with Fig. 3 in Ref. At finite temperatures, it appears that thermal phase fluctuations are no longer negligible. Let us first recall that the superfluid density in the Ginzburg-Landau region is given by $`\rho _S^p\propto |\mathrm{\Delta }_p|^2`$, where $`\mathrm{\Delta }_p`$ is the superconducting order parameter of the p-wave superconductor. Since we are considering a quantized flux around the cylinder, the phase coherence along the periphery becomes of crucial importance. Taking into account the possible loss of phase coherence, let us replace $`|\mathrm{\Delta }_p|^2`$ by $`|\mathrm{\Delta }_p|^2\langle \mathrm{exp}(i\varphi (l)-i\varphi (0))\rangle `$ with $$\langle \mathrm{exp}(i\varphi (l)-i\varphi (0))\rangle =\mathrm{exp}\left[-\frac{1}{2}\langle (\varphi (l)-\varphi (0))^2\rangle \right].$$ (10) The average $`\langle (\varphi (l)-\varphi (0))^2\rangle `$ may be evaluated within the one-dimensional model along the azimuthal direction of the cylinder as $$\langle (\varphi (l)-\varphi (0))^2\rangle =\frac{2T}{N(0)}\int \frac{dq}{2\pi }\frac{1-\mathrm{cos}(ql)}{\xi _0^2q^2}\simeq \frac{Tl}{N(0)\xi _0^2},$$ (11) for $`T<T_c`$ and $`\xi _0^2=\frac{7\zeta (3)v_F^2}{2(4\pi T_c)^2}`$. These azimuthal fluctuations along the periphery of the cylinder destabilize the diamagnetic response in favor of the paramagnetic counter-current at low temperatures. Taking into account the length $`L`$ of the cylinder, this result can be substituted into the expression for $`\rho _S^p`$. It is then found that the superfluid density of the p-wave superconductor reduces to $$\rho _S^p\rightarrow \rho _S^p\mathrm{exp}\left(-\frac{TlL}{N(0)\xi _0^3}\right)=\rho _S^p\mathrm{exp}\left(-\frac{T}{T^{*}}\right)$$ (12) due to the phase fluctuations. Perpendicular fluctuations along the cylinder are neglected in this context because they play a subdominant role in stabilizing the counter-current along the PS interface. Hence we can offer an explanation for the observed $`T`$- and $`l`$-dependence of the exponent, as suggested by the experiments (Eq. 6). In particular, the above expression for the superfluid density implies that $$T^{*}=\frac{N(0)\xi _0^3}{lL}=\frac{mp_F\xi _0^3}{2\pi ^2lL},$$ (13) if the density of states $`N(0)`$ for a 3D system is used. Here $`m`$ is the quasiparticle mass. Within this approach, $`T^{*}`$ exhibits the observed $`l`$-dependence (Eq. 6). However, the numerical value which is obtained for $`T^{*}`$ is still much larger than the experimental one, $`T^{*}\simeq \frac{\mathrm{\hbar }v_F}{2\pi k_Bl}`$. This fact may be remedied by considering a 2D density of states $`N(0)`$ instead, normalized by the width $`d_N`$ of the periphery: $`N(0)_{2D}=\frac{m}{2\pi d_N}`$. In addition, other possible fluctuations should be considered which may reduce $`T^{*}`$ even further.
In quasi-one-dimensional systems, phase coherence can be broken by thermal excitations of vortex pairs or phase slip centers. If the spatial extension of the phase slip centers is of the order of $`\xi `$ with $`\xi =\frac{\mathrm{\hbar }v_F}{2\pi k_BT}`$, it is perhaps plausible to have a factor $`\mathrm{exp}(-l/\xi )`$, as observed in the experiments, since the phase slip centers cannot be densely populated. In any case, a quantitatively correct interpretation of the temperature-dependence in the exponential factor appears to be difficult to find. In conclusion, we propose (1) that noble metals may become p-wave superconductors with $`T_c\sim 10-100`$ mK. (2) With this assumption, the paramagnetic reentrance behavior at ultra-low temperatures can be described in a quantitative way. (3) Therefore this behavior should not extend beyond noble metals. More experiments with Pt, Ir, and Os would be of great interest.
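For orientation, the scale of the counter-current quantum in Eq. (2) is easy to check numerically. Below is a minimal Python sketch; the applied fields and periphery lengths are illustrative assumptions, not values from the experiments cited above.

```python
# Numeric sketch of Eq. (2): the winding number n of the quantized
# counter-current, n = 0.7958 * B * l^2, with B in gauss and l in
# micrometers as stated in the text. The sample B and l are assumed.

def counter_current_quantum(b_gauss: float, l_um: float) -> int:
    """Nearest integer to 0.7958 * B * l**2 (Eq. 2)."""
    return round(0.7958 * b_gauss * l_um ** 2)

if __name__ == "__main__":
    for b in (0.001, 0.01):          # illustrative applied fields, gauss
        for l in (60.0, 300.0):      # periphery ~ pi x (20-100 um) diameters
            n = counter_current_quantum(b, l)
            print(f"B = {b:6.3f} G, l = {l:5.0f} um -> n = {n}")
```

The sketch simply makes the quadratic scaling with the periphery explicit: small integer winding numbers $`n`$ require correspondingly weak fields or short peripheries.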
# XAFS spectroscopy. II. Statistical evaluations in the fitting problems ## I Introduction In the Open Letter to the XAFS Community Young and Dent, the leaders of the UK XAFS User Group, expressed their concern over the persistent common opinion that XAFS is a “sporting technique” with which it is possible to obtain the “answer you want”. Some way out they see in a special attention to the published XAFS data (first of all, to XAFS spectra) and have formulated several recommendations for editors and referees. Undoubtedly, the quality of the spectra is of great importance for the extraction of real, not invented, information from XAFS experiments. We see here another problem as well. Not having some necessary elements of XAFS analysis (some values and the procedures for their determination), one has a quite natural desire to turn those values to advantage. Principally we mean the inability of the standard methods to find the errors of the atomic-like background $`\mu _0`$. Traditionally, the noise is taken for these errors. However, as was shown in Ref. , the noise is essentially lower than the errors of the $`\mu _0`$ construction. Below, we will show that the underestimation of the errors of XAFS-function extraction is a source of unreasonably optimistic errors of the fitting parameters. Practically all known programs for XAFS modeling in some way calculate confidence limits of fitting parameters. However, since there is no standardized technique for that, and since most published XAFS works do not contain any mention of methods for estimating the errors of fitting parameters, the accuracy of the XAFS results remains a field for trickery. In the present article we derive the expressions for the errors of fitting parameters under different assumptions on the degree of their correlation. Besides, prior information about the parameters can be taken into account in the framework of the Bayesian approach. Moreover, one can find the most probable weight of the prior information relative to the experimental information. We also discuss the grounds and usage of the statistical tests. Special attention is focused on where and how one can embellish the results and artificially make the statistical tests easier to pass. All methods and tests described in the paper are implemented in the program viper . ## II Errors in determination of fitting parameters Suppose that for the experimental curve $`𝐝`$, defined on the mesh $`x_1,\dots ,x_M`$, there exists a model $`𝐦`$ that depends on $`N`$ parameters $`𝐩`$. In XAFS fitting problems both $`\chi (k)`$ (not weighted by $`k^w`$) and $`\chi (r)`$ may serve as $`𝐝`$. The problem is to find the parameter vector $`\widehat{𝐩}`$ that gives the best agreement between the experimental and model curves. Introduce the figure of merit, the $`\chi ^2`$-statistics (not to be confused with the symbol of the XAFS function): $$\chi ^2=\sum _{i=1}^{M}\frac{(d_i-m_i)^2}{\epsilon _i^2},$$ (1) where $`\epsilon _i`$ is the error of $`d_i`$. The variate $`\chi ^2`$ obeys the $`\chi ^2`$-distribution law with $`M-N`$ degrees of freedom. Of course, for the given spectrum $`𝐝`$ and the given model $`𝐦`$ the value of $`\chi ^2`$ is fully determined; we call it a “variate” bearing in mind its possible dispersion under different possible realizations of the noise and the experimental errors of $`d_i`$ extraction. Often a preliminary processing (before fitting) is needed: smoothing, filtration etc.
Naturally, during the pre-processing some part of the experimental information is lost, and additional dependencies are imposed on the variates $`\xi _i=(d_i-m_i)/\epsilon _i`$ (before, they were bound solely by the model $`𝐦`$). It is necessary to determine the number of independent experimental points $`N_{\mathrm{ind}}`$. For the Fourier filtering technique commonly used in XAFS spectroscopy the number of independent points is given by : $$N_{\mathrm{ind}}=2\mathrm{\Delta }k\mathrm{\Delta }r/\pi +2,$$ (2) where $`\mathrm{\Delta }k=k_{\mathrm{max}}-k_{\mathrm{min}}`$ and $`\mathrm{\Delta }r=r_{\mathrm{max}}-r_{\mathrm{min}}`$ are the ranges in $`k`$\- and $`r`$-spaces used for the analysis, and $`r_{\mathrm{min}}>0`$. If $`r_{\mathrm{min}}=0`$ then $$N_{\mathrm{ind}}=2\mathrm{\Delta }k\mathrm{\Delta }r/\pi +1.$$ (3) Instead of keeping in the sum (1) only $`N_{\mathrm{ind}}`$ items which are equidistantly spaced on the grid $`x_1,\dots ,x_M`$, it is more convenient to introduce the scale factor $`N_{\mathrm{ind}}/M`$: $$\chi ^2=\frac{N_{\mathrm{ind}}}{M}\sum _{i=1}^{M}\frac{(d_i-m_i)^2}{\epsilon _i^2}.$$ (4) Now the variate $`\chi ^2`$ follows the $`\chi ^2`$-distribution with $`N_{\mathrm{ind}}-N`$ degrees of freedom. It can be easily verified that with the use of all available data ($`r_{\mathrm{min}}=0`$ and $`r_{\mathrm{max}}=\pi /(2dk)`$) the definition (4) turns into (1). Let us now derive the expression for the posterior distribution for an arbitrary fitting parameter $`p_j`$: $$P(p_j|𝐝)=\int \dots dp_{i\ne j}\dots P(𝐩|𝐝),$$ (5) where $`P(𝐩|𝐝)`$ is the joint probability density function for all values $`𝐩`$, and the integration is done over all $`p_{i\ne j}`$. According to Bayes theorem, $$P(𝐩|𝐝)=\frac{P(𝐝|𝐩)P(𝐩)}{P(𝐝)},$$ (6) $`P(𝐩)`$ being the joint prior probability for all $`p_i`$; $`P(𝐝)`$ is a normalization constant. Assuming that the $`N_{\mathrm{ind}}`$ values in $`𝐝`$ are independent and normally distributed with zero expected values and the standard deviations $`\epsilon _i`$, the probability $`P(𝐝|𝐩)`$, the so-called likelihood function, is given by $$P(𝐝|𝐩)\propto \mathrm{exp}\left(-\chi ^2/2\right),$$ (7) where $`\chi ^2`$ was defined above by (4). Its expansion in terms of $`𝐩`$ near the minimum ($`\nabla _𝐩\chi ^2=0`$), which is reached at $`𝐩=\widehat{𝐩}`$, yields: $$P(𝐝|𝐩)\propto \mathrm{exp}\left(-\frac{1}{4}(𝐩-\widehat{𝐩})^T\mathrm{H}(𝐩-\widehat{𝐩})\right)=\mathrm{exp}\left(-\frac{1}{4}\sum _{k,l=1}^{N}\frac{\partial ^2\chi ^2}{\partial p_k\partial p_l}\mathrm{\Delta }p_k\mathrm{\Delta }p_l\right),$$ (8) where $`\mathrm{\Delta }p_k=p_k-\widehat{p}_k`$, and the components of the Hessian $`\mathrm{H}`$ (the second derivatives) are calculated in the fitting program at the minimum of $`\chi ^2`$. The sufficient conditions for the minimum are $`\mathrm{H}_{kk}>0`$ and $`\mathrm{H}_{kk}\mathrm{H}_{ll}-\mathrm{H}_{kl}^2>0`$, for any $`k,l`$. Hence, the surfaces of constant level of $`P(𝐝|𝐩)`$ are ellipsoids. ### A Simplest cases If one ignores the prior then the posterior probability density function $`P(𝐩|𝐝)`$ coincides with the likelihood $`P(𝐝|𝐩)`$. Let us consider here two widely used approaches. (a) Parameters are perfectly uncorrelated. In this case the Hessian is diagonal and $$P(p_j|𝐝)\propto \mathrm{exp}\left(-\frac{1}{4}\mathrm{H}_{jj}\mathrm{\Delta }p_j^2\right).$$ (9) The standard deviation of $`p_j`$ is just $$\delta ^{(\mathrm{a})}p_j=(2/\mathrm{H}_{jj})^{1/2}.$$ (10) (b) Parameter $`p_j`$ essentially correlates solely with $`p_i`$.
In this case $$P(p_j|𝐝)\propto \int dp_i\,P(p_ip_j|𝐝)\propto \int dp_i\,\mathrm{exp}\left(-\frac{1}{4}\mathrm{H}_{jj}(\mathrm{\Delta }p_j)^2-\frac{1}{2}\mathrm{H}_{ij}\mathrm{\Delta }p_j\mathrm{\Delta }p_i-\frac{1}{4}\mathrm{H}_{ii}(\mathrm{\Delta }p_i)^2\right)$$ (11) $$\propto \mathrm{exp}\left(-\frac{1}{4}\left[\mathrm{H}_{jj}-\mathrm{H}_{ij}^2/\mathrm{H}_{ii}\right](\mathrm{\Delta }p_j)^2\right),$$ (12) from where one finds $`\overline{p}_j=\widehat{p}_j`$ and the mean-square deviation $$\delta ^{(\mathrm{b})}p_j=\left(\frac{2\mathrm{H}_{ii}}{\mathrm{H}_{jj}\mathrm{H}_{ii}-\mathrm{H}_{ij}^2}\right)^{1/2}.$$ (13) In practice, to find the strongly correlated pairs of parameters, one computes the pair-correlation coefficients: $$r_{ij}=\frac{\langle \mathrm{\Delta }p_i\mathrm{\Delta }p_j\rangle -\langle \mathrm{\Delta }p_i\rangle \langle \mathrm{\Delta }p_j\rangle }{\delta (\mathrm{\Delta }p_i)\,\delta (\mathrm{\Delta }p_j)},$$ (14) taking on values from $`-1`$ to 1. Two parameters are uncorrelated if their correlation coefficient is close to zero. It is easy to calculate the average values over the distribution (11): $`\langle \mathrm{\Delta }p_i^2\rangle =2\mathrm{H}_{jj}/\mathrm{Det}`$, $`\langle \mathrm{\Delta }p_j^2\rangle =2\mathrm{H}_{ii}/\mathrm{Det}`$, $`\langle \mathrm{\Delta }p_i\mathrm{\Delta }p_j\rangle =-2\mathrm{H}_{ij}/\mathrm{Det}`$, where $`\mathrm{Det}=\mathrm{H}_{jj}\mathrm{H}_{ii}-\mathrm{H}_{ij}^2`$. Notice, by the way, that these are the elements of the inverse matrix of $`\mathrm{H}/2`$. Now the pair-correlation coefficients are given by: $$r_{ij}=-\frac{\mathrm{H}_{ij}}{\sqrt{\mathrm{H}_{ii}\mathrm{H}_{jj}}}.$$ (15) Via the correlation coefficient the mean-square deviations found for the cases (a) and (b) are simply related: $$\delta ^{(\mathrm{a})}p_j=\delta ^{(\mathrm{b})}p_j\sqrt{1-r_{ij}^2}.$$ (16) Consider an example of the error analysis. For the $`L_3`$ Pb absorption spectrum<sup>*</sup><sup>*</sup>*The spectrum was recorded at 50 K in transmission mode at D-21 line (XAS-13) of DCI (LURE, Orsay, France) at positron beam energy 1.85 GeV and average current $`\sim `$250 mA. Energy step — 2 eV, counting time — 1 s. Energy resolution of the double-crystal Si monochromator (detuned to reject 50% of the incident signal in order to minimize harmonic contamination) with a 0.4 mm slit was about 2–3 eV at 13 keV. for the BaPbO<sub>3</sub> compound, the average error of the XAFS extraction from the measured absorption was $`\epsilon _i=0.007`$. For the XAFS filtered over the range $`1.0<r<2.1`$ Å (the signal from the octahedral oxygen environment of the lead atom; see Fig. 1), the model function was calculated as follows. For the one-dimensional Hamiltonian of the lead-oxygen atomic pair with potential $`U=a/2(r-r_0)^2`$ we found the energy levels and the corresponding wave functions.
Then, averaging over the Gibbs distribution, the pair radial distribution function (PRDF) normalized to the coordination number $`N`$ was found as: $$g(r)=N\sum _n|\mathrm{\Psi }_n(r)|^2e^{-E_n/kT}/\sum _ne^{-E_n/kT},\qquad N=\int g(r)dr,$$ (17) and the XAFS function as: $$\chi (k)=\frac{1}{k}F(k)\int _{r_{\mathrm{min}}}^{r_{\mathrm{max}}}g(r)\mathrm{sin}[2kr+\varphi (k)]/r^2dr.$$ (18) The phase shift $`\varphi (k)`$ and the scattering amplitude $`F(k)`$ were calculated using the feff6 program . By variation of the parameters $`r_0`$, $`a`$, $`N`$ (where $`N`$ includes the factor $`S_0^2`$), and $`E_0`$, the shift of the origin for the wave number $`k`$, one searches for the best agreement between the model and experimental curves. Here the viper program was used for the fitting; in particular, it calculates the Hessian of $`\chi ^2`$ (defined by (4) with $`N_{\mathrm{ind}}=11.8`$) at the minimum. The correlation coefficients are listed in Table I. We now turn our attention to the errors of the fitting parameters. If the correlations are ignored, the errors $`\delta ^{(\mathrm{a})}p`$ are rather small (see Table II). However, we know that the parameters $`r_0`$ and $`E_0`$ are highly correlated, and their real errors must be larger. In traditional XAFS analysis two-dimensional contour maps have long been used for estimates of the degree of correlation and the error bars. Notice that to do this requires, first, the definition and determination of the correct statistical function $`\chi ^2`$ (and not one merely proportional to it), and, second, a criterion to choose the critical value of $`\chi ^2`$ (depending on the chosen confidence level). For the most correlated pair, $`r_0`$ and $`E_0`$, we find the joint probability density function $`P(r_0E_0|𝐝)`$ using the Hessian elements found at the minimum of the $`\chi ^2`$: $$P(r_0E_0|𝐝)\propto \mathrm{exp}\left(-\frac{1}{4}\mathrm{H}_{r_0r_0}(\mathrm{\Delta }r_0)^2-\frac{1}{2}\mathrm{H}_{r_0E_0}\mathrm{\Delta }r_0\mathrm{\Delta }E_0-\frac{1}{4}\mathrm{H}_{E_0E_0}(\mathrm{\Delta }E_0)^2\right)$$ (19) which is depicted in Fig. 2 as a surface graph and as a contour map. The ellipses of equal probability are described by: $$\mathrm{H}_{r_0r_0}(\mathrm{\Delta }r_0)^2+2\mathrm{H}_{r_0E_0}\mathrm{\Delta }r_0\mathrm{\Delta }E_0+\mathrm{H}_{E_0E_0}(\mathrm{\Delta }E_0)^2=4\lambda .$$ (20) In Fig. 2 they limit areas such that the probability for the random vector ($`r_0`$,$`E_0`$) to find itself in them is equal to $`\ell =1-e^{-\lambda }=0.2`$, 0.6, 0.8, 0.9 and 0.95. The thick line shows the ellipse corresponding to the standard deviation: $`\lambda =1/2`$ and $`\ell =1-e^{-1/2}\approx 0.3935`$. For this ellipse the point of intersection with the line $`\mathrm{\Delta }E_0=0`$ and the point of maximum distance from the line $`\mathrm{\Delta }r_0=0`$ give the standard mean-square deviations $`\delta ^{(\mathrm{a})}r_0`$ and $`\delta ^{(\mathrm{b})}r_0`$ that coincide with the expressions (10) and (13). To find the mean-square deviation $`\delta ^{(\mathrm{b})}`$ for an arbitrary confidence level $`\ell `$, one should multiply the standard deviation by $`\sqrt{-2\mathrm{ln}(1-\ell )}`$. In Table II the errors in the column $`\delta ^{(\mathrm{b})}p`$ were found as the largest errors among all those calculated from the pair correlations.
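The arithmetic behind Eqs. (10), (13), (15) and (16) is compact enough to state as code. The following minimal Python sketch uses a made-up 2×2 Hessian (an assumption for illustration, not the actual viper output behind Tables I and II) and also evaluates the full-covariance errors through the inverse Hessian, anticipating Eq. (33) below.

```python
# Errors of fitting parameters from the Hessian of chi^2 at its minimum.
# The Hessian values are hypothetical, chosen only to illustrate
# Eqs. (10), (13), (15), (16) and, ahead of the text, Eq. (33).
import numpy as np

H = np.array([[8.0e2, 3.0e2],    # assumed d^2(chi^2)/(dp_k dp_l) at minimum
              [3.0e2, 1.5e2]])

delta_a = np.sqrt(2.0 / np.diag(H))                 # Eq. (10): no correlations
r = -H[0, 1] / np.sqrt(H[0, 0] * H[1, 1])           # Eq. (15): pair correlation
delta_b = delta_a / np.sqrt(1.0 - r ** 2)           # Eq. (16), equivalent to (13)
delta_c = np.sqrt(2.0 * np.diag(np.linalg.inv(H)))  # Eq. (33): full covariance

print(f"r = {r:.3f}")
print("delta_a:", delta_a)
print("delta_b:", delta_b)  # for two parameters this coincides with delta_c
print("delta_c:", delta_c)
```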
For the parameters $`N`$ and $`a`$ all pair correlations are weak, so their $`\delta ^{(\mathrm{a})}`$ and $`\delta ^{(\mathrm{b})}`$ hardly differ. For the parameters $`r_0`$ and $`E_0`$ these mean-square deviations differ remarkably. Finally, we ask how justified the expansion (8) for the likelihood function is. In Fig. 2, on the right, the dashed ellipses of equal probability are drawn for the exact $`\chi ^2`$, which was calculated by the viper program as well. The barely noticeable difference is mainly caused by the implementation of the fitting algorithm, or, to be more precise, by the step sizes of the fitting parameters, which determine the accuracy of the minimum itself and the accuracy of the derivatives at the minimum. Of course, this difference can be neglected. ### B General case Often, a particular fitting parameter significantly correlates not with one, but with several other parameters (in our example this is not so, but, for instance, the problem of approximating the atomic-like background by an interpolation spline drawn through varied knots is exactly such a case). Now the consideration of two-dimensional probability density functions is no longer correct; one should search for the total joint posterior probability $`P(𝐩|𝐝)`$. For that, first of all, one has to find the prior probability $`P(𝐩)`$. Suppose we approximately know in advance the size $`S_k`$ of the variation range of the parameter $`p_k`$. Then the prior probability can be expressed as: $$P(𝐩|\alpha )\propto \alpha ^{N/2}\mathrm{exp}\left(-\frac{\alpha }{2}\sum _{k=1}^{N}\frac{\mathrm{\Delta }p_k^2}{S_k^2}\right),$$ (21) where the regularizer $`\alpha `$ specifies the relative weight of the prior probability; at $`\alpha =0`$ there is no prior information, at $`\alpha \rightarrow \infty `$ the fitting procedure gives nothing and the posterior distribution coincides with the prior one. In the expression (21) $`\alpha `$ appears as a known value. Later, we apply the rules of probability theory to remove it from the problem. So, for the sought probability density functions we have: $$P(p_j|𝐝,\alpha )\propto \int \dots dp_{i\ne j}\dots \alpha ^{N/2}\mathrm{exp}\left(-\frac{1}{2}\sum _{k,l=1}^{N}g_{kl}\mathrm{\Delta }p_k\mathrm{\Delta }p_l\right),$$ (22) where $$g_{kl}=\frac{\alpha }{S_k^2}\delta _{kl}+\frac{\mathrm{H}_{kl}}{2}.$$ (23) Since there is no integral over $`p_j`$, separate it from the other integration variables: $$P(p_j|𝐝,\alpha )\propto \alpha ^{N/2}\mathrm{exp}\left(-\frac{1}{2}g_{jj}\mathrm{\Delta }p_j^2\right)\int \dots dp_{i\ne j}\dots \mathrm{exp}\left(-\frac{1}{2}\sum _{k,l=1}^{N\,(\ne j)}g_{kl}\mathrm{\Delta }p_k\mathrm{\Delta }p_l-\mathrm{\Delta }p_j\sum _{k=1}^{N\,(\ne j)}g_{kj}\mathrm{\Delta }p_k\right).$$ (24) Here, the symbol $`\ne j`$ near the summation signs denotes the absence of the $`j`$-th item.
Further, find the eigenvalues $`\lambda _i`$ and corresponding eigenvectors $`𝐞_i`$ of the matrix $`g_{kl}`$ in which the $`j`$-th row and column are deleted, and change the variables: $$b_i=\sqrt{\lambda _i}\sum _{k=1}^{N\,(\ne j)}\mathrm{\Delta }p_ke_{ik},\qquad \mathrm{\Delta }p_k=\sum _{i=1}^{N\,(\ne j)}\frac{b_ie_{ik}}{\sqrt{\lambda _i}}\quad (i,k\ne j).$$ (25) Using the properties of eigenvectors: $$\sum _{k=1}^{N\,(\ne j)}g_{lk}e_{ik}=\lambda _ie_{il},\qquad \sum _{k=1}^{N\,(\ne j)}e_{lk}e_{ik}=\delta _{li}\quad (l,i\ne j),$$ (26) one obtains: $$P(p_j|𝐝,\alpha )\propto \alpha ^{N/2}\mathrm{exp}\left(-\frac{1}{2}[g_{jj}-𝐮^2]\mathrm{\Delta }p_j^2\right)\int \dots db_{l\ne j}\dots \mathrm{exp}\left(-\frac{1}{2}\sum _{i=1}^{N\,(\ne j)}[b_i+u_i\mathrm{\Delta }p_j]^2\right)$$ (27) $$\propto \alpha ^{N/2}\mathrm{exp}\left(-\frac{1}{2}[g_{jj}-𝐮^2]\mathrm{\Delta }p_j^2\right),$$ (28) where new quantities were introduced: $$u_i=\frac{1}{\sqrt{\lambda _i}}\sum _{k=1}^{N\,(\ne j)}g_{kj}e_{ik},\qquad 𝐮^2=\sum _{i=1}^{N\,(\ne j)}u_i^2.$$ (29) Thus, we have found the explicit expression for the posterior distribution of an arbitrary fitting parameter. This is a Gaussian distribution with the mean $`\overline{p}_j=\widehat{p}_j`$ and the standard deviation $$\delta ^{(\mathrm{c})}p_j=(g_{jj}-𝐮^2)^{-1/2}.$$ (30) The formulas (27)–(30) require finding the eigenvalues and eigenvectors of a matrix of rank $`N-1`$ for each parameter. Those formulas have merely a methodological value: the explicit expressions for the posterior probabilities enable one to find the average of an arbitrary function of $`p_j`$. However, the standard deviations can be calculated significantly more easily, having found the eigenvalues and eigenvectors of the matrix of rank $`N`$ only once. $$(\delta ^{(\mathrm{c})}p_j)^2=\frac{\int \mathrm{\Delta }p_j^2P(p_j|𝐝,\alpha )dp_j}{\int P(p_j|𝐝,\alpha )dp_j}=\frac{\int \mathrm{\Delta }p_j^2\mathrm{exp}\left(-\frac{1}{2}\sum _{k,l=1}^{N}g_{kl}\mathrm{\Delta }p_k\mathrm{\Delta }p_l\right)d𝐩}{\int \mathrm{exp}\left(-\frac{1}{2}\sum _{k,l=1}^{N}g_{kl}\mathrm{\Delta }p_k\mathrm{\Delta }p_l\right)d𝐩}.$$ (31) Analogously to what was done above, performing the diagonalization of $`g_{kl}`$, one obtains: $$(\delta ^{(\mathrm{c})}p_j)^2=\frac{\int d𝐛\left(\sum _{i=1}^{N}b_ie_{ij}/\sqrt{\lambda _i}\right)^2\mathrm{exp}\left(-\frac{1}{2}\sum _{i=1}^{N}b_i^2\right)}{\int d𝐛\,\mathrm{exp}\left(-\frac{1}{2}\sum _{i=1}^{N}b_i^2\right)}=\sum _{i=1}^{N}\frac{e_{ij}^2}{\lambda _i},$$ (32) where the eigenvalues ($`\lambda _i`$) and eigenvectors ($`𝐞_i`$) correspond to the full matrix $`g_{kl}`$. One can give another interpretation of the $`\delta ^{(\mathrm{c})}p`$-finding process. It is easy to verify that $`\mathrm{H}/2`$ and the covariance matrix $`C`$ of the vector $`𝐩`$ are mutually inverse. Therefore $$(\delta ^{(\mathrm{c})}p_j)^2=C_{jj}=2(\mathrm{H}^{-1})_{jj},$$ (33) and the variate $`(𝐩-\widehat{𝐩})^TC^{-1}(𝐩-\widehat{𝐩})=\frac{1}{2}(𝐩-\widehat{𝐩})^T\mathrm{H}(𝐩-\widehat{𝐩})`$ is $`\chi ^2`$-distributed with $`N`$ degrees of freedom if $`𝐩`$ is an $`N`$-dimensional normally distributed vector (by Eq. (27) this condition is met). The ellipsoid that determines the standard deviation is: $$(𝐩-\widehat{𝐩})^T\mathrm{H}(𝐩-\widehat{𝐩})=N.$$ (34) For an arbitrary confidence level $`\ell `$, on the r.h.s.
would be $`(\chi _N^2)_{\ell }`$, the critical value of the $`\chi ^2`$-distribution with $`N`$ degrees of freedom. The error $`\delta ^{(\mathrm{c})}p_k`$ is equal to half the ellipsoid size along the $`k`$-th axis. In our example fitting, the errors found in the absence of any prior information ($`\alpha =0`$) from the formula (32) are listed in Table II in the column $`\delta ^{(\mathrm{c})}p`$. Since every parameter correlates with at most one other parameter, all $`\delta ^{(\mathrm{c})}p`$ practically coincide with $`\delta ^{(\mathrm{b})}p`$. In general, this may not be so. Finally, let us find the most probable value of $`\alpha `$. Its posterior distribution is given by: $$P(\alpha |𝐝)=\int d𝐩P(\alpha ,𝐩|𝐝)=\int d𝐩P(\alpha )P(𝐩|\alpha ,𝐝).$$ (35) Using a Jeffreys prior $`P(\alpha )=1/\alpha `$ , one obtains for the posterior distribution: $$P(\alpha |𝐝)\propto \int d𝐩\,\alpha ^{N/2-1}\mathrm{exp}\left(-\frac{1}{2}\sum _{k,l=1}^{N}g_{kl}\mathrm{\Delta }p_k\mathrm{\Delta }p_l\right)\propto (\lambda _1\cdots \lambda _N)^{-1/2}\alpha ^{N/2-1}.$$ (36) In our example we have set the variation range of the parameter $`p_k`$ equal to $`S_k=\pm \widehat{p}_k`$ (this means that $`p_k\in [0,2\widehat{p}_k]`$) for all parameters except for $`E_0`$; since it varies near zero, we have chosen $`S_{E_0}=\pm 10`$ eV. For the mentioned variation ranges, the distribution $`P(\alpha |𝐝)`$ has its mode at $`\alpha =2.64\times 10^{-3}`$ (see Fig. 3). The Bayesian errors found for this regularizer are listed in the column $`\delta ^{(\mathrm{d})}p`$ of Table II. As a result, we have obtained mean-square errors that for some parameters are significantly lower than even $`\delta ^{(\mathrm{a})}p`$. There is nothing surprising in that: any additional information narrows the posterior distribution. If we chose $`S_k`$ to be smaller, $`\delta ^{(\mathrm{d})}p_k`$ would be yet lower. For instance, XAFS is quite accurate in distance determination, and in many cases one can assume distances to be known within $`\pm 0.2`$ Å. In our case this leads to $`\delta ^{(\mathrm{d})}r_0=3.4\times 10^{-3}`$ Å. ### C Important note Having obtained the expressions (10), (13) and (32) for the errors of fitting parameters, we are now able to draw an important conclusion. If in the definition (4) one substitutes for $`\epsilon _i`$ values that are smaller by a factor of $`\beta `$ than the real ones, the $`\chi ^2`$ and its Hessian’s elements are exaggerated by a factor of $`\beta ^2`$, and from (10), (13) and (32) it follows that the errors of fitting parameters are understated by a factor of $`\beta `$! In the preceding paper it was shown that the errors of the atomic-like absorption construction are essentially larger than the experimental noise, and therefore it is the former that should determine the $`\epsilon _i`$ values. However, these values are traditionally assumed to be equal to the noise, or one uses unjustified approximations for them, also understated (like $`1/\epsilon _i^2=k^w`$ ). It is here that we see the main source of the groundlessly optimistic errors. ## III Statistical tests in fitting problems ### A $`\chi ^2`$-test Introducing the statistical function $`\chi ^2`$, we assumed that it follows the $`\chi ^2`$ distribution with $`\nu =M-N`$ degrees of freedom. However, for this to really be so, one should achieve a sufficient fitting quality.
This “sufficient quality” could be defined as such that the variate (4) obeys the $`\chi ^2`$ distribution law, that is, the variate does not fall within the tail of this distribution. Strictly speaking, the following condition must be met: $$\chi ^2<(\chi _\nu ^2)_{\ell },$$ (37) where the critical value $`(\chi _\nu ^2)_{\ell }`$ for the specified significance level $`\ell `$ may be calculated exactly (for even $`\nu `$) or approximately (for odd $`\nu `$) using the known formulas . Notice that the choice of the true $`\epsilon _i`$ here also plays a cardinal role. However, it is important here not to use overestimated values, which make it easier to meet the requirement (37). As we have shown in , one could obtain overestimated $`\epsilon _i`$ by assuming the Poisson distribution law for the detector counts when the actual association between the probability of a single count event and the radiation intensity is unknown. Thus, exaggerated $`\epsilon _i`$ values indicate a quality fit but give large errors of the fitting parameters. Understated $`\epsilon _i`$ lead to deceptively small errors but make it difficult to pass the $`\chi ^2`$-test (i.e. to meet the condition (37)). We are aware of many works whose authors do not explicitly describe the evaluation process for the errors of XAFS-function extraction and do not report their explicit values. However, by implication it is seen that $`\epsilon _i`$ were chosen (not calculated!) as low as possible to scarcely (with $`\ell =0.9-0.95`$) pass the $`\chi ^2`$-test; as a result, very impressive errors of the structural parameters were obtained. With such an approach it is no wonder that the difference of 0.01 Å between the diffraction data and the XAFS result, which was reported to within 0.002 Å, was attributed to the “suggested presence of a small systematic error” . ### B $`F`$-test Suppose there is a possibility to choose between two physical models depending on different numbers of parameters $`N_1`$ and $`N_2`$ ($`N_2>N_1`$). Which of them is statistically more significant? For instance, one wishes to decide whether a single coordination sphere is split into two. Let for the two models the functions $`\chi _1^2`$ and $`\chi _2^2`$ obey the $`\chi ^2`$-distribution law with $`\nu _1=N_{\mathrm{ind}}-N_1`$ and $`\nu _2=N_{\mathrm{ind}}-N_2`$ degrees of freedom, correspondingly. From the linear regression problem (near the minimum of $`\chi ^2`$, the likelihood function is expressed by (8) and is identical in form to that of the linear regression problem) it is known that the value $$f=\frac{(\chi _1^2-\chi _2^2)/(\nu _1-\nu _2)}{\chi _2^2/\nu _2}$$ (38) obeys Fisher’s $`F`$-distribution law with $`(\nu _1-\nu _2,\nu _2)`$ degrees of freedom if exactly $`r=\nu _1-\nu _2`$ parameters in the second model are linearly dependent, that is, if there exist an $`r\times N_2`$ matrix $`C`$ of rank $`r`$ and a vector $`𝐜`$ of dimension $`r`$ such that $`C𝐩=𝐜`$.
In order for the linear restrictions on the second model parameters to be absent, the value $`f`$ should not follow the $`F`$-distribution, that is, it should be greater than the critical value $`(F_{\nu _1-\nu _2,\nu _2})_{\ell }`$ for the specified significance level $`\ell `$: $$f>(F_{\nu _1-\nu _2,\nu _2})_{\ell }$$ (39) or $$\chi _2^2<\chi _1^2\left((F_{\nu _1-\nu _2,\nu _2})_{\ell }\frac{\nu _1-\nu _2}{\nu _2}+1\right)^{-1}.$$ (40) Notice that the expression (40) means the absence of exactly $`r`$ linear restrictions on the second model parameters. Even if (40) is realized, a smaller number of linear dependencies is possible. If, for instance, the splitting of a single coordination sphere into two does not contradict the $`F`$-test (40), some of the parameters of these two spheres may be dependent, but not all. This justifies the introduction of a new sphere into the model XAFS function. Thus, having specified the significance level $`\ell `$, one can answer the question “what decrease of $`\chi ^2`$ must be achieved to increase the number of parameters from $`N_1`$ to $`N_2`$?” or, conversely, “what is the probability that model 2 is better than model 1 at specified $`(N_1,\chi _1^2)`$ and $`(N_2,\chi _2^2)`$?” Notice that since the ratio $`\chi _1^2/\chi _2^2`$ appears in the definition of $`f`$, the actual values of $`\epsilon _i`$ are not important for the $`F`$-test (provided they are all taken equal to a single value). Consider an example of the statistical tests in the fitting problem. In Fig. 4 are shown the experimental curve with $`N_{\mathrm{ind}}=11.8`$ and two model curves with $`N_1=4`$ and $`N_2=7`$. The underlying physical models were described in Ref. ; here only the number of parameters is of importance. Let us apply the statistical tests. Through the fitting procedure for model 1 we have: $`\nu _1=11-4=7`$, $`\chi _1^2=16.8>14.1=(\chi _7^2)_{0.95}`$; for model 2: $`\nu _2=11-7=4`$, $`\chi _2^2=5.3<9.5=(\chi _4^2)_{0.95}`$. That is, the first model does not pass the $`\chi ^2`$-test. Further, $`f=2.89=(F_{3,4})_{0.84}`$, whence with a probability of 84% we can assert that model 2 is better than model 1. In XAFS analysis the $`F`$-test has long been in use . However, the arguments substantiating the test are often wrong. The authors of Refs. , for example, even claimed that the value $`f`$ (38) must follow the $`F`$-distribution, although later in Ref. the really correct inequality (40) appears. ## IV Conclusion The solution of the main task of XAFS spectroscopy, determination of the structural parameters, becomes worthless if the confidence in this solution is unknown. Here we mean not only the confidence in the obtained fitting parameters, that is, their mean-square deviations, but also the credibility of the very methods of the error analysis. It is overly optimistic error evaluations that lead to a suspicious attitude toward XAFS results. The situation could be improved by the development of reliable and well-grounded techniques that do not allow one to treat the data in an arbitrary way. First of all, this means a technique for determination of the real errors of the atomic-like absorption construction. Second, we regard it as necessary to standardize the method for correctly taking into account all pair correlations between fitting parameters.
And third (we have not raised this question here), programs for scattering phase and amplitude calculations should report the confidence limits for the calculated values, that is, report how sensitive the calculated values are to the choice of the parameters of the scattering potentials.
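As a closing worked illustration, the F-test numbers quoted in Sec. III B are easy to reproduce. A minimal Python sketch follows, using scipy for the F-distribution; the input numbers are exactly those of the two-model example of Fig. 4.

```python
# F-test of Sec. III B for the example of Fig. 4: model 1 with nu1 = 7,
# chi1^2 = 16.8 versus model 2 with nu2 = 4, chi2^2 = 5.3.
from scipy.stats import f as f_dist

nu1, chi2_1 = 7, 16.8
nu2, chi2_2 = 4, 5.3

f = ((chi2_1 - chi2_2) / (nu1 - nu2)) / (chi2_2 / nu2)  # Eq. (38)
level = f_dist.cdf(f, nu1 - nu2, nu2)                   # confidence level

print(f"f = {f:.2f}; model 2 is preferred at the {100 * level:.0f}% level")
# expected output: f = 2.89 and a level of about 84%, as quoted in the text
```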
# Star Clusters and the Duration of Starbursts ## 1. Introduction Over the past few decades starbursts, brief intense episodes of massive star formation, have been recognized as important agents of galaxy evolution. However, despite many journal pages per year of attention, there is much we do not know about the inner workings of starbursts. In particular: how long do they last? Estimates in the literature have been obtained from a variety of techniques and range from Myr to Gyr time scales (Mas-Hesse & Kunth 1999, and Coziol, Barth & Demers 1995). A priori the minimum expected duration is the crossing time $`t_{\mathrm{cross}}`$: the time it would take for a disturbance to travel from one end of the starburst to the other. Local starbursts typically have size scales of $`0.2-2`$ Kpc and velocity dispersions of $`30-300`$ km/s, and hence crossing times ranging from $`1`$ Myr to $`60`$ Myr, with 10 Myr being typical. Starburst duration is important for determining the efficiency with which metals are released into the IGM, and for determining the total number of bursts that a galaxy can undergo. Knowing the duration of starbursts will give us a better understanding of how bursts evolve. If the durations are similar to $`t_{\mathrm{cross}}`$ it would suggest that they are self-extinguishing explosions, destroyed by their energy output produced in a non-equilibrium fashion. Conversely, durations much longer than $`t_{\mathrm{cross}}`$ would indicate that starbursts are sustainable, perhaps even being in equilibrium: i.e. ISM inflow $`\approx `$ star formation rate (SFR) + outflow. ## 2. Starbursts and star clusters. In the mid-1990’s we obtained HST ultraviolet (UV) images of starburst galaxies with the Faint Object Camera imaging at $`\lambda 2300`$Å in order to examine the distribution of high mass stars which power starbursts. The results of our imaging study are given in Meurer et al. (1995; hereafter M95). The UV structure of starbursts is well exemplified by NGC 3310 and NGC 3690 as shown in Fig. 1. Immediately striking are numerous prominent and compact star clusters. However, the total UV emission is dominated by a diffuse distribution of stars. Typically, this amounts to about 80% of the UV light within starbursts (M95). The nearest starbursts in our sample start to resolve into individual high mass stars which trace the diffuse light isophotes, demonstrating that the diffuse light is not a product of scattered light from the clusters, nor is it a figment of the pre-COSTAR optics of HST. These images reveal that a starburst is not the same thing as a star cluster. Nor is it the sum of multiple clusters. The natural conclusion is that there are two modes of star formation in starbursts: a very significant 20% of the star formation is in compact star clusters while the dominant mode is diffuse star formation. The $`10`$ Myr crossing time of starbursts is set by the dimensions of the diffuse light. It is difficult to verify whether the star formation timescales are consistent with such large crossing times. Broad band photometric indexes are degenerate in terms of population age $`t`$ and duration $`dt`$ (Mas-Hesse & Kunth 1991). This is because for extended duration star formation it is the youngest stellar populations that dominate the bolometric output. Hence the light from the onset of the burst has relatively little weight. The clusters within starbursts are very compact, having effective radii typically of a few pc (M95, Whitmore et al. 1999).
They have velocity dispersions of typically $`10`$ km s<sup>-1</sup> (Ho & Filippenko, 1996; Smith & Gallagher, 2000), and hence crossing times less than a Myr. Since their photometric ages are usually at least a few Myr, they are well mixed systems. It is fair to assume that they were each formed in a single short duration burst. If so, one can determine good photometric ages; since $`dt`$ is very small the $`t-dt`$ degeneracy is broken. This is the basis of our project to measure the duration of starbursts as a whole using star clusters as chronometers. The duration of the entire starburst is the width of the cluster $`t`$ distribution. We have been granted HST time in cycle 6 to measure the starburst duration in four systems. The sample is listed in Table 1, which lists the distance ($`D`$), effective radius of the entire starburst ($`R_e`$), velocity dispersion, $`\sigma `$, and crossing time ($`t_{\mathrm{cross}}=2R_e/\sigma `$). These galaxies were imaged with WFPC2 using the F336W ($`U`$), F435W ($`B`$), and F814W ($`I`$) filters. While our own data on Tol1924-416 have not yet been obtained, Östlin et al. (1998) have obtained equivalent data for this system, known in Europe as ESO 338-IG04. Three color images of these systems can be found on the web<sup>1</sup><sup>1</sup>1http://www.pha.jhu.edu/~meurer/research/starburst\_dt.html. Most of the rest of this talk will focus on NGC 3310, for which the analysis has progressed the furthest. ## 3. NGC 3310 NGC 3310 is a nearly face-on spiral with a prominent ring of star formation encircling its mildly active nucleus (LINER or transition spectrum: Heckman & Balick 1980; Pastoriza et al. 1993). In ground based images the ring is very clumpy, with the brightest clump described as a “Jumbo” Hii region by Balick and Heckman (1981). NGC 3310’s outer structure is disturbed (van der Kruit & de Bruyn, 1976) suggesting that an interaction or merger has triggered the present starburst. The spectroscopy of Pastoriza et al. (1993) reveals both the Wolf-Rayet feature at 4686Å and the Caii triplet, which arises in red supergiants, indicating that star formation has proceeded for $`>`$ 10 Myr. The $`U`$ band Planetary Camera chip image of NGC 3310’s center is shown in Fig. 2. The structures detected from the ground are clearly revealed, including the starburst ring and the Jumbo Hii region. Numerous compact clusters are prominent in these structures. NGC 3310 is close enough, and the images deep enough, that some of the faintest sources are probably individual high-mass stars. Photometry was done on the clusters using DAOPHOT. Isolated clusters on the frame were used to construct the “point spread function”. So, it is better described as a “cluster spread function”, since the clusters are comparable in size to the pixels, noticeably broadening the PSF width (Whitmore et al. 1999; M95). The clusters were separated from the diffuse light and the photometry was refined in an iterative fashion. The resulting total and diffuse light images are shown in Fig. 2. A comparison of the two panels clearly shows the dominance of the diffuse light. The fractional flux of NGC 3310 contributed by the clusters in the different bands is 0.07 (UV; M95), 0.092 ($`U`$), 0.073 ($`B`$), and 0.065 ($`I`$). Despite the improved optical performance of WFPC2 compared to the pre-COSTAR FOC, starbursts do not resolve out into an ensemble of star clusters. The clusters span a wide range of colors, as is apparent in the color composite image presented at the meeting (see footnote 2).
While dust lanes are apparent in the image, the lack of correlation between cluster color and proximity to the dust lanes shows that dust is not causing the color spread. This can also be seen in the $`U-B`$ versus $`B-I`$ two-color diagram shown in Fig. 3, where the clusters form an S-shaped locus; they are not all strung out along a reddening vector. In Fig. 3 the cluster data are compared with Starburst99 (Leitherer et al. 1999) single burst models. A range of metallicities and IMF parameters were examined. While none exactly matches the cluster two-color diagram (better models are needed), the data match the general sense of how photometric evolution proceeds. Clusters start out at the lower left (blue in each color) and evolve more rapidly in $`B-I`$ than $`U-B`$ as the main sequence widens and red evolved stars appear. After about 30 Myr, evolution becomes more rapid in $`U-B`$ than $`B-I`$, tracing the evolution of the main-sequence turn-off. The reddest clusters in NGC 3310 are on the order of a few hundred Myr old. In contrast to the clusters, the diffuse light spans a very narrow range of colors. This agrees well with continuous star formation models. The model comparison suggests durations ranging from 5 to 300 Myr for the diffuse light, with most of the emission implying $`10`$ to 100 Myr durations. The overall picture is fairly consistent: star formation in both the clusters and the diffuse light has been ongoing for several tens up to a few hundred Myr. This is several crossing times of the starburst ring. ## 4. Discussion Other researchers have also reported extended star formation durations within starbursts. The work of Whitmore et al. (1999) on the spectacular merging “Antennae” system (NGC 4038/4039; discussed in these proceedings by Miller, 2000) reveals clusters with ages up to $`200`$ Myr within the starburst. This is largely consistent with the merger timescale of 200 Myr, and a few times the disk crossing timescale of $`50`$ Myr (Barnes 1988). Calzetti et al. (1997; 2000) report clusters in the blue compact dwarf (BCD) galaxy NGC 5253 with photometric and spectroscopic ages up to $`\sim `$ 10 Myr, a few times larger than the starburst crossing timescale of $`3`$ Myr. Walborn & Blades (1997) discuss in detail the complex age distribution of stars and star clusters in the nearest external starburst: 30 Doradus. Grebel & Chu (2000) expand on a case in point, reporting an age of 25 Myr for Hodge 301, a neighbor of R136a, which has an age of 2.5 Myr. The projected turbulent velocity dispersion crossing time between these clusters is $`<1`$ Myr. Clearly even small starbursts have an extended history. Ground based observations of BCDs and H II galaxies yield similar results. Their strong emission line spectra indicate that they must contain a substantial young ionizing population. However, the colors of their starbursts (determined after carefully subtracting the underlying populations and correcting for extinction) are clearly too red to have been produced by a young ($`<10`$ Myr) instantaneous burst. Instead they are consistent with continuous star formation over $`\sim `$ 10 to 100 Myr timescales (Telles & Terlevich 1997; Marlowe et al. 1999). This amounts to at least a few crossing times in most starbursts. 
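Because burst ages are repeatedly compared with $`t_{\mathrm{cross}}=2R_e/\sigma `$ here, a minimal sketch of that arithmetic may be useful. The (radius, dispersion) pairs below are illustrative round numbers spanning the range quoted in the introduction, not the measured entries of Table 1.

```python
# Crossing time t_cross = 2 R_e / sigma, expressed in Myr.
KM_PER_KPC = 3.086e16    # kilometers in one kiloparsec
SEC_PER_MYR = 3.156e13   # seconds in one megayear

def t_cross_myr(r_e_kpc, sigma_kms):
    """Crossing time (Myr) for effective radius R_e (kpc) and
    velocity dispersion sigma (km/s)."""
    return 2.0 * r_e_kpc * KM_PER_KPC / sigma_kms / SEC_PER_MYR

# Illustrative values only, not the measured entries of Table 1:
for r_e, sigma in [(0.1, 300.0), (0.5, 100.0), (1.0, 30.0)]:
    print(f"R_e = {r_e} kpc, sigma = {sigma} km/s -> "
          f"t_cross ~ {t_cross_myr(r_e, sigma):.1f} Myr")
# prints roughly 0.7, 10 and 65 Myr, bracketing the 1-60 Myr range above
```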
Contrary claims for relatively short burst durations have been made, particularly with regard to Wolf-Rayet galaxies (Schaerer, Contini, & Kunth, 1999) and BCDs (Mas-Hesse & Kunth 1999), which have star formation durations estimated to be $`<4`$ Myr (although Mas-Hesse & Kunth note that in several cases BCD data are consistent with $`\sim `$ 20 Myr continuous star formation). In these studies timescales are often spectroscopically constrained by the presence of WR features, which are washed out for durations $`>5`$ Myr. However, the spectroscopy can be misleading, as suggested by Fig. 4. An observer would usually center the slit on the brightest source within the galaxy, which is likely to be a cluster, and preferably a young one. Such a cluster could dominate the entire spectrum, with the majority of the starburst lying outside of the slit. This would give the impression that the whole starburst is dominated by a short duration burst. Cases in point are NGC 5253 (Walsh & Roy, 1987; Schaerer et al. 1997) and NGC 3310 (Pastoriza et al. 1993), both of which show WR spectra, but only over a fraction of the starburst. Another strong voice for short burst durations is Elmegreen (2000), who claims that star formation occurs over a timescale of only one or two crossing times, citing various results in the Galaxy and LMC. The difference here may in part be due to the velocity used to calculate the crossing time. Elmegreen uses the sound velocity, whereas I have quoted measured velocity dispersions of the ISM of the starbursts. The latter is supersonic, resulting in shorter crossing times, and is a better representation of the true speed at which a disturbance travels within the star forming medium. If starbursts do last many crossing times there are some important implications. First, starbursts are sustainable, perhaps even self-regulating. The upper limit to the effective surface brightness of starbursts reported by Meurer et al. (1997) and Lehnert & Heckman (1996) is further evidence of this regulation, although it is not clear what the regulating mechanism is. Second, burst durations may be longer than the typical ISM expansion timescales seen in galactic winds (Martin 1998). Hence, galactic winds are likely to occur in a previously fractured ISM. This should increase the efficiency of metal ejection into the intergalactic medium compared to simulations, which usually are modeled in smooth undisturbed media. Acknowledgements. I gratefully acknowledge Tim Heckman and Claus Leitherer, my collaborators on this project. ## References Balick, B., & Heckman, T. 1981, A&A, 96, 271 Barnes, J.E. 1988, ApJ, 331, 699 Calzetti, D., Meurer, G.R., Bohlin, R.C., Garnett, D.R., Kinney, A.L., Leitherer, C., & Storchi-Bergmann, T. 1997, AJ, 114, 1834 Calzetti, D., Tremonti, C.A., Heckman, T.M., & Leitherer, C. 2000, this volume (astro-ph/9912504) Coziol, R., Barth, C.S., & Demers, S. 1995, MNRAS, 276, 1245 Elmegreen, B.G. 2000, ApJ, 530, 277 Grebel, E.K., & Chu, Y.-H. 2000, AJ, 119, 79 Heckman, T.M., & Balick, B. 1980, A&A, 83, 100 Ho, L.C., & Filippenko, A.V. 1996, ApJ, 466, L83 Lehnert, M., & Heckman, T.M. 1996, ApJ, 472, 546 Leitherer, C., Schaerer, D., Goldader, J.D., Gonzalez Delgado, R.M., Robert, C., Kune, D.F., de Mello, D.F., Devost, D., & Heckman, T.M. 1999, ApJS, 123, 3 Marlowe, A.T., Meurer, G.R., & Heckman, T.M. 1999, ApJ, 522, 183 Martin, C.L. 1998, ApJ, 506, 222 Mas-Hesse, J.M., & Kunth, D. 1991, A&AS, 88, 399 Mas-Hesse, J.M., & Kunth, D. 
1999, A&A, 349, 765 Meurer, G.R., Heckman, T.M., Leitherer, C., Kinney, A., Robert, C., & Garnett, D.R. 1995, AJ, 110, 2665 (M95) Meurer, G.R., Heckman, T.M., Lehnert, M.D., Leitherer, C., & Lowenthal, J. 1997, AJ, 114, 54 Miller, B.W. 2000, this volume (astro-ph/9912453) Östlin, G., Bergvall, N., & Rönnback, J. 1998, A&A, 335, 85 Pastoriza, M.G., Dottori, H.A., Terlevich, E., Terlevich, R., & Díaz, A.I. 1993, MNRAS, 260, 177 Schaerer, D., Contini, T., Kunth, D., & Meynet, G. 1997, ApJ, 481, L75 Smith, L.J., & Gallagher, J.S. 2000, this volume (astro-ph/0001529) Telles, E., & Terlevich, R. 1997, MNRAS, 286, 183 van der Kruit, P.C., & de Bruyn, A.G. 1976, A&A, 48, 373 Walborn, N.A., & Blades, J.C. 1997, ApJS, 112, 457 Walsh, J.R., & Roy, J.-R. 1987, ApJ, 319, L57 Whitmore, B.C., Zhang, Q., Leitherer, C., Fall, S.M., Schweizer, F., & Miller, B.W. 1999, AJ, 118, 1551 Comments and discussion Grebel: The duration of a starburst appears to be determined by the angular resolution with which one looks at a starburst region. Consider for instance 30 Doradus, which can be observed with very high angular resolution: the massive central cluster R136 has an age of 1-2 Myr; Hodge 301 has an age of $`\sim `$ 25 Myr, and there are several other spatially distinct regions within 30 Dor that have ages between the above. The duration of formation in each of these clusters in this starburst region is only a few Myr. The entire 30 Dor region would appear to be composed of several smaller “starbursts” if it were seen at a distance of 40 or 100 Mpc. Meurer: Yes, this is a problem. Even with HST, our method works best on the nearest systems. Lançon: It is generally accepted that you worry about internal extinction in young clusters, not in old clusters. At what time should one stop worrying about internal extinction? Is dust produced by red supergiant / AGB stars relevant? Meurer: I expect that once a cluster ejects its natal ISM, it shouldn’t be much affected by internal dust until the starburst region shuts down. Hot ($`10^6`$–$`10^7`$ K) gas pervades the entire starburst at a high filling factor. Internally produced dust will be quickly swept away or destroyed. Burgarella: From UV (2000 Å) observations with the balloon-borne FOCA telescope, we estimated the ratio of diffuse light to point-like sources to be closer to 50/50. This may not be consistent with the ratio that you find for NGC 3310. One point to note is that, even with a resolution of $`10`$ arcsec, we probably picked up smaller star forming regions. Do you find that this may be consistent with your view? We should be careful about what is diffuse light. Meurer: I suspect that much of the difference is due to resolution. However, much work needs to be done to see how the clumpy fraction varies with properties such as metallicity, potential well depth, and star formation intensity (surface brightness). The best place to check may be the Magellanic Clouds, where UIT images directly resolve the UV continuum. Zinnecker: You mentioned that in NGC 3690 20% of the UV light is in prominent clusters while 80% is in diffuse UV light. What is the nature of this diffuse light: scattered light from the prominent clusters or the superposition of unresolved light from a more distributed OB star population? What is the spectrum or color of the diffuse light? Meurer: Calzetti et al. (2000) show that the spectrum of NGC 5253’s diffuse light has narrower C IV and Si IV lines than seen in the WR clusters. Its diffuse light is starting to resolve into individual stars with HST imaging (e.g. 
M95), so it is unlikely to be dominated by scattered light. Schaerer: Indeed it is important to clarify what kind of object/region is included in the observations when burst durations are studied. Although no exact quantification of this is usually done for WR “galaxies”, the short durations mostly found (cf. Schaerer et al. 1999; Mas-Hesse & Kunth 1999) are likely due to the fact that just one or a few clusters typically dominate the integrated light in these objects (most metal-poor BCDs).
no-problem/0003/quant-ph0003062.html
# Entanglement and Collective Quantum Operations ## Acknowledgements. We would like to thank Sandu Popescu, Noah Linden, Osamu Hirota and Masahide Sasaki for interesting discussions. Part of this work was carried out at the Japanese Ministry of Posts and Telecommunications Communications Research Laboratory, Tokyo, and we would like to thank Masayuki Izutsu for his hospitality. This work was funded by the UK Engineering and Physical Sciences Research Council, and by the British Council.
no-problem/0003/astro-ph0003100.html
# Iterative maps with hierarchical clustering for the observed scales of astrophysical and cosmological structures ## Abstract We compute the order of magnitude of the observed astrophysical and cosmological scales in the Universe, from neutron stars to superclusters of galaxies, up to, asymptotically, the observed radius of the Universe. This result is obtained by introducing a recursive scheme that yields a rapidly convergent geometric sequence, which can be described as a hierarchical clustering. The theoretical understanding of the observed scales, sizes and dimensions of aggregated structures in the Universe (stars, galaxies, clusters, etc.) is a long–standing open problem in astrophysics and cosmology. In this letter, we introduce a geometric description which accounts with accuracy for the order of magnitude, and provides a rapidly converging succession for the observed length and mass scales of the main astrophysical and cosmological structures. We base our analysis on the granularity of the Universe, in the sense that nucleons are taken as the building blocks of the observed stable cosmological aggregates. We further stipulate that only the gravitational interaction accounts for the gross features (sizes, masses, number of components) of the observed cosmological and astrophysical aggregates. It is worth noting that, if this hypothesis can be considered obvious when one looks at large scale cosmological structures, it appears rather nontrivial for astrophysical ones, e. g., stars, in which other interactions of nongravitational nature play a relevant role. However, as will be discussed and clarified in the following, our scheme allows one to identify the sizes of those astrophysical structures, like neutron stars and planetary systems, where gravity is the only overall effective interaction. The iterative scheme introduced in the present paper provides the scales of astrophysical and cosmological structures as a hierarchical sequence of “close–packed” aggregates of increasing size in a spatially flat Universe, i. e. with a density parameter $`\mathrm{\Omega }\simeq 1`$. This finding is strongly supported by the recent evidence coming from the BOOMERANG experiment, and explains the good agreement with the observational data of our theoretical scheme, which assumes a space–like sheet embedded in a space–time cosmological manifold. In fact, the order of magnitude of the limiting size in our iterative procedure, i.e. the “observed radius of the Universe”, will turn out to be $`10^{26}cm`$, which actually coincides with the observed distance that can be measured without introducing second order corrections to the Hubble law. The scheme basically consists of successive iterations of two alternating physical mechanisms. The first mechanism is suggested by the tendency towards collapse for a system of gravitationally interacting bodies, due to the long range nature of the gravitational force. This tendency to a three–dimensional (3–d) close packing is however opposed by the relativistic constraint of the maximum attainable (critical) gravitational energy for the system (essentially, the rest mass). This constraint leads to a “critical 3–d close packing” that singles out a minimal scale of aggregation. 
The critical 3–d close packing yields, for the smallest aggregate, a spatial gravitational energy density exceedingly large with respect to a minimum mean spatial gravitational energy density, which will be defined below (here and in the following, if not otherwise specified, we take the gravitational energies and energy densities always in modulus). This latter quantity will turn out to be the spatial energy density associated to the asymptotic length scale in our iterative scheme (mean energy density of the observed Universe). Therefore, in the second step of the iteration, the mass of the smallest aggregate is redistributed on a larger spatial scale in such a way as to bring the mean energy density of the new aggregate to coincide with the mean energy per unit volume of the observed Universe. This condition implies a proportionality between the total mass of the new aggregate and the square of the new spatial radius, and is equivalent to a two–dimensional (2–d) close packing. The mass distribution thus obtained is confirmed, at least on large scales, by the statistical analysis on cosmological data catalogues performed in recent years, in particular on the statistical correlation among galaxies. Observing that on scales smaller than $`10^{25}÷10^{26}cm`$ the matter distribution cannot be considered homogeneous, the authors in Ref. assume a fractal behavior that yields a statistical density–density correlation decreasing with the inverse of the length scale. This result implies that the aggregates’ masses must be proportional to the square of the aggregates’ radii. The recursive scheme is implemented by iterating the two different processes of aggregation in alternating order: critical 3–d close packing of a system made of second–step aggregates, enlargement of the new radius with fixed mass in order to attain the 2–d close packing at constant energy density, and so on. In this way we shall obtain a sequence of length and mass scales with rapid geometric convergence to an asymptotic scale at which further 3–d close packing becomes irrelevant and the iteration thus reaches a fixed point. In order to avoid possible sources of confusion, we remark that our iterative process does not imply that larger structures are generated in time from smaller ones (rather than vice versa), as will be clear from the following and as further discussed in the conclusions. We now proceed to develop the scheme explicitly. Denote by $`R_n`$ and $`M_n`$, respectively, the length extension and the total mass of the $`n`$–th aggregate, labelled in the sense of increasing size. Let us define next $`N_n`$, the number of aggregates living on the ($`n-1`$)–th scale which, in turn, form the $`n`$–th aggregate. Obviously, $`M_n`$ and $`N_n`$ are not independent objects. Let us then introduce the quantities $`m`$ and $`\lambda `$ associated to the elementary constituents (nucleons). Here $`m`$ is the mass of the nucleon (proton or neutron), in order of magnitude $`m\simeq 10^{-24}g`$, while $`\lambda `$ is, in order of magnitude, the spatial extension of a nucleon, $`\lambda \simeq 10^{-13}cm`$. We stress that, in this context, $`\lambda `$ is simply the linear dimension of the space region forbidden to penetration, due to the presence of a nucleon, as determined, e.g., by alpha particle scattering experiments. In this framework, then, $`\lambda `$ has a purely classical meaning, and plays the role of a minimum scale of length. Let us consider the initial, minimal scale: $`R_0=\lambda ,M_0=m,N_0=1`$ (step zero, the nucleon). 
In step one, the smallest aggregate $`R_1,M_1,N_1`$ is obtained by a critical 3–d close packing of nucleons. This amounts, first, to equating the mass density per unit volume of the aggregate with that of a nucleon (3–d close packing), and then to imposing the condition (criticality) that the total gravitational energy $`GM_1^2/R_1`$ attains the maximum value compatible with the relativistic constraint, that is the rest energy $`M_1c^2`$. Imposing the two conditions $$\frac{M_1}{R_1^3}=\frac{m}{\lambda ^3};\frac{GM_1^2}{R_1}=M_1c^2,$$ (1) and solving for the radius $`R_1`$ and for the mass $`M_1`$ (and for the number $`N_1`$), we have: $$R_1^2=\lambda \frac{(\lambda c)^2}{Gm};M_1=N_1m;N_1=\left(\frac{\lambda c^2}{Gm}\right)^{3/2}.$$ (2) Inserting numbers: $`R_1\simeq 10^6cm`$, $`M_1\simeq 10^{34}g`$, and $`N_1\simeq 10^{58}`$. These data coincide with the well known typical dimensions of a neutron star. We note that for a neutron star the first condition in Eq. (1), the equality of the mass density of the star with the mass density of the nucleon, is a well established fact. Let us now define $$R\equiv \frac{(\lambda c)^2}{Gm}=\lambda \gamma ^{-1};\gamma \equiv \frac{\lambda }{R}=\frac{Gm}{\lambda c^2}.$$ (3) In terms of these two universal quantities, whose numerical values are $`R\simeq 10^{26}cm`$ and $`\gamma \simeq 10^{-39}`$, the expressions in Eq. (2) are recast in the simpler form $$R_1^2=\lambda R;M_1=\gamma ^{-3/2}m;N_1=\gamma ^{-3/2}.$$ (4) The length $`R`$ and the pure number $`\gamma `$ are the basic quantities in terms of which the dimensions on all scales will be expressed. In the second step of the iteration, we impose that the mass $`M_1`$ redistributes itself on a larger radius $`R_2>>R_1`$, determined by the condition that its spatial gravitational energy density $`\rho _2`$ takes a universal constant value $`\rho _0`$: $$\rho _2\equiv \frac{GM_2^2}{R_2^4}=\frac{GM_1^2}{R_2^4}=\rho _0.$$ (5) Eq. (5) immediately yields, for some constant $`a`$, $$M_2=M_1=aR_2^2.$$ (6) The choice of the constant $`a`$ is suggested by the fact that the nucleons are still the fundamental objects, and therefore we postulate the surface mass density of the aggregate, $`M_2/R_2^2`$, to be the surface mass density of a nucleon: $$a=\frac{m}{\lambda ^2}.$$ (7) The crucial choice in Eq. (7) completely determines the recursive scheme and together with Eq. (6) defines a 2–d close packing of nucleons. Moreover, from Eq. (5), the spatial energy density of the second–step aggregate is $`\rho _0=Ga^2`$, and will coincide with the mean spatial energy density of the observed Universe. Collecting Eqs. (6) and (7), the second–step aggregate is completely specified: $$R_2^2=R_1R;M_2=M_1;N_2=N_1.$$ (8) Inserting numbers: $`R_2\simeq 10^{16}cm,M_2=M_1\simeq 10^{34}g,N_2=N_1\simeq 10^{58}`$, which correspond to the typical dimensions of the solar system or, if one wishes, of the interaction range of a typical star. It is not surprising that we get the radius of the solar system rather than the solar radius, since our scheme summarizes the effective interaction on a test particle, due to the presence of a star, through the overall gravitational attraction and thus selects the maximum external binding range of the interaction. In other words, we can state that our scheme is capable of selecting effective geometric lengths (neutron stars) and effective interaction lengths (planetary systems). Consider now the spatial energy density $`\rho _1`$ of the first aggregate. From Eqs. 
(6), (7), (8), it follows that $`\rho _1=Ga^2R^2/R_1^2=\rho _0\gamma ^{-1}`$. We thus see, from the numerical value of $`\gamma `$, that the spatial energy density $`\rho _1`$ is enormous with respect to the mean spatial energy density $`\rho _0`$ of the second aggregate. We now proceed to iterate at all orders the two alternating mechanisms. The recursion produces a sequence of odd–numbered aggregates $`R_{2k+1}`$, $`M_{2k+1}`$, $`N_{2k+1}`$ determined by critical 3–d close packing of even–numbered aggregates $`R_{2k}`$, $`M_{2k}`$, $`N_{2k}`$, these latter in turn determined by the 2–d close packing analogous to condition (6). Explicitly ($`M_0=m`$, and $`k\ge 0`$): $`\frac{M_{2k+1}}{R_{2k+1}^3}=\frac{M_{2k}}{R_{2k}^3};\frac{GM_{2k+1}^2}{R_{2k+1}}=M_{2k+1}c^2,`$ $$M_{2k+2}=M_{2k+1}=aR_{2k+2}^2;N_{2k+2}=N_{2k+1}.$$ (9) The procedure is summarized by the iterative map for the radii (linear dimensions) $$R_0=\lambda ;R_{n+1}^2=R_nR;n\ge 0,$$ (10) and by the iterative map for the numbers and, equivalently, for the masses ($`N_0=1`$, $`M_0=m`$, $`N_1`$ and $`M_1`$ as given in Eq. (4), and $`k\ge 1`$): $`N_{2k+1}=N_{2k+2}=\frac{R_{2k+1}}{R_{2k-1}}=\gamma ^{-\left(\frac{3}{4}\right)2^{-(2k-1)}};`$ $$M_{2k+1}=M_{2k+2}=N_{2k+1}M_{2k}.$$ (11) It is important to note that Eq. (10) expresses a relation of hierarchical clustering between aggregates on different scales. From the same map we see that $`R\simeq 10^{26}cm`$ is the fixed point, and thus $`R=\mathrm{lim}_{n\to \mathrm{\infty }}R_n`$. As a consequence, $`R`$ has the meaning of the maximum observable length scale. The map (10) can be reexpressed in the remarkable adimensional form: $$x_{n+1}=x_n^{1/2};n\ge 0,$$ (12) where $`x_n=R_n/R`$. The map (12) generates the relative scales from $`x_0=\lambda /R\equiv \gamma `$ to $`x_{\mathrm{\infty }}=1`$, and can be completely solved, yielding $`x_n=(x_0)^{2^{-n}}=\gamma ^{2^{-n}}`$. The fast geometric convergence to the relevant cosmological scales is made evident by rewriting the adimensional map in the form $`X_n=2^{-n}`$, where $`X_n\equiv \mathrm{ln}x_n/\mathrm{ln}x_0`$. The maps in Eq. (11) show that the aggregates on larger scales contain fewer components in terms of aggregates defined on the preceding scales, until, in the limit $`k\to \mathrm{\infty }`$, the sequence $`N_{2k+1}`$ converges to $`N=1`$ (the Universe contains only itself). The recursive relations in Eq. (11) also allow one to compute the total number $`N_{nucl}`$ of nucleons contained in the asymptotic scale aggregate (i. e. the total number of nucleons in the observed Universe). From Eqs. (4) and (11) it follows that $$N_{nucl}=N_1\underset{k\to \mathrm{\infty }}{\mathrm{lim}}\underset{s=1}{\overset{k}{\prod }}N_{2s+1}=\gamma ^{-2}.$$ (13) Inserting the numerical value of $`\gamma `$, $`N_{nucl}\simeq 10^{78}`$, in perfect agreement with the central value obtained from nucleosynthesis calculations. Finally, inserting the mass/radius relations of Eq. (9) and the recurrences of Eq. (10) into the expression $`E_{2k+1}=GM_{2k+1}^2/R_{2k+1}`$ for the total gravitational energy of the ($`2k+1`$)–th aggregate, we observe that all the odd scales (corresponding to critical 3–d close packings) share the same linear energy density $`\rho _0R^2`$. The mean spatial energy density of the ($`2k+1`$)–th aggregate is thus $`\rho _0(R/R_{2k+1})^2`$, and can be computed in terms of powers of $`\gamma `$. Consider now $`\rho _{2k}`$, $`\rho _{2k+1}`$, respectively, the spatial energy densities of the even and of the odd aggregates. 
It is easy to verify that the behavior of the spatial gravitational energy densities as functions of the different scales ($`k\ge 0`$) reads: $$\mathrm{ln}\left[\frac{\rho _{2k}}{\rho _0}\right]=0;\frac{1}{\mathrm{ln}\gamma }\mathrm{ln}\left[\frac{\rho _{2k+1}}{\rho _0}\right]=-2^{-2k}.$$ (14) These relations show that our construction is based on a distribution of the spatial gravitational energy density, as a function of the scales, that grows from the mean density $`\rho _0`$ of the even–numbered aggregates to peaks for the odd–numbered aggregates, in such a way that the peaks’ heights decrease with increasing scales. Asymptotically, these peaks disappear, the density ultimately acquires the universal value $`\rho _0`$, and further 3–d close packings become trivial. Recalling that we consider the moduli of the energy densities, the peaks correspond to relative minima of the (negative) gravitational energy density, while $`\rho _0`$ is an absolute maximum. In order of magnitude, the sequence of length scales reads: $`10^6cm`$, $`10^{16}cm`$, $`10^{21}cm`$, $`10^{23÷24}cm`$, $`10^{24÷25}cm`$, $`10^{25÷26}cm`$. After the sixth iteration $`R_6`$, the fast geometric convergence of the sequence does not allow one to single out further significant sizes besides that associated to the fixed point $`R\simeq 10^{26}cm`$. Thus, beyond the scales of the neutron stars and of the solar system, the subsequent iterations yield the sizes of galaxy bulges, giant galaxies or tight galaxy groups, up to clusters and superclusters. Some subtle conceptual issues related to the above results deserve comment, due to the apparent and puzzling occurrence of the Planck action constant in the cosmological context. We believe that the key to understanding this problem lies in the intriguing numerical coincidence of the length extension $`\lambda `$ of a nucleon, as obtained in scattering experiments, and the Compton length $`\lambda _c`$ appearing in quantum electrodynamics: $$\lambda \simeq \lambda _c\equiv \frac{\mathrm{\hbar }}{mc}.$$ (15) This is a nontrivial point. In fact, exploiting the identification (15) in Eq. (3), we obtain $$R=\frac{\mathrm{\hbar }^2}{Gm^3};\gamma =\frac{Gm^2}{\mathrm{\hbar }c}.$$ (16) The expression for the observed radius of the Universe given in Eq. (16) is nothing but the Eddington–Weinberg relation $`G^{1/2}m^{3/2}R^{1/2}\simeq \mathrm{\hbar }`$, while the form of $`\gamma `$ in Eq. (16) is the gravitational fine structure constant. Therefore it would seem that $`R`$ is directly determined by the microscopic quantum background. This implication can also be formally obtained by computing, in semiclassical quantization, the Bohr radius of a system formed by two nucleons mutually interacting only via the gravitational force. An easy calculation immediately yields $`R_{Bohr}=R`$. We note that the identification of the observed radius of the Universe through the Eddington–Weinberg formula is at the heart of a recent proposal by F. Calogero, which contains some hints that eventually led us to develop the scheme presented in this paper. The numerical coincidence $`N_{nucl}=\gamma ^{-2}`$ for the total number of nucleons, but with the expression (16) for $`\gamma `$, was instead conjectured by Dirac, with a consequent possible role of $`\mathrm{\hbar }`$ in determining global cosmological sizes. We remark that, in our procedure, we introduce the linear dimension of the nucleon as the minimal length scale, without exploring the subtle conceptual implications of the identification (15). We thus believe that it is this last coincidence which deserves a deeper understanding. 
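These numbers are easy to check explicitly. The following minimal sketch (in Python, our choice; no implementation is given in the original) evaluates $`\gamma `$ and $`R`$ from Eq. (3) with standard CGS constants, iterates the map of Eq. (10), and recovers the scale sequence and the nucleon count quoted above; agreement is at the order-of-magnitude level the scheme aims at.

```python
import math

lam = 1.0e-13    # nucleon size lambda ~ 1e-13 cm (order of magnitude)
G = 6.674e-8     # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10     # speed of light, cm s^-1
m = 1.67e-24     # nucleon mass, g
hbar = 1.055e-27 # Planck action constant, erg s

gamma = G * m / (lam * c**2)   # Eq. (3): gamma = Gm/(lambda c^2) ~ 1.2e-39
R = lam / gamma                # Eq. (3): R = lambda/gamma ~ 8e25 ~ 1e26 cm

r = lam
for n in range(7):
    print(f"n = {n}:  R_n ~ 10^{math.log10(r):5.1f} cm")
    r = math.sqrt(r * R)       # the map of Eq. (10): R_{n+1}^2 = R_n R

print(f"N_nucl ~ {gamma**-2:.1e}")         # Eq. (13): ~ 7e77, i.e. ~ 1e78
print(f"lambda_c ~ {hbar/(m*c):.1e} cm")   # Eq. (15): ~ 2e-14 cm ~ lambda
```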
A somewhat different situation could hold in the case of neutron stars. In fact, the expression for the number $`N_1`$ of nucleons in a star given in Eq. (4), with the definition (16) for $`\gamma `$, was obtained in the seminal works of Chandrasekhar and Carter, who applied a Thomas–Fermi approximation and considered the equilibrium condition between the radiation pressure and the gravitational force. In this case, the identification (15) and the related expression for $`\gamma `$ in Eq. (16) well account for the balancing of gravitational and quantum forces. In conclusion, we have obtained, in order of magnitude, the scales of astrophysical and cosmological structures through a recursive geometric scheme. We have derived a sequence of aggregates starting from the minimum scale (the nucleon) and then moving upwards (bottom–up), by iterating the alternating mechanisms of tighter 3–d critical close packings and of looser 2–d close packings. However, the sequence exhibits an evident symmetry, in the sense that it can be obtained travelling downwards (top–down). Actually, it is possible, and moreover likely, that the Universe started from a 2–d close packing at low space energy density on the maximum scale $`R`$ and, due to some fluctuative phenomena, collapsed onto critical 3–d close–packed structures on smaller scales, later reaching a less confined 2–d close–packed configuration, and so on down to the neutron stars. The characterization of the temporal sequence, at variance with the geometric one, remains an open question. Looking forward to future developments, the first aim is obviously to improve the scheme by introducing corrections that provide a more accurate description of cosmological structures. At the same time, we point out that there exist hints suggesting that mechanisms similar to the one presented here in the cosmological context may also apply to other, nongravitational systems.
no-problem/0003/astro-ph0003316.html
# A Survey for Low-mass Stars and Brown Dwarfs in the Upper-Scorpius OB Association ## 1 Introduction OB associations and gravitationally unbound clusters are likely to be the dominant birthplaces of the low-mass field star population (Preibisch & Zinnecker 1999). Furthermore, they provide an opportunity to study very young low-mass stars and brown dwarfs, since these objects should be relatively bright in very young regions. Besides their intrinsic interest, the low-mass population of OB associations can provide constraints on the shape of the Initial Mass Function (IMF) at low masses. There are indications (Bouvier et al. 1998) that the shape of this IMF may deviate from the simple Miller-Scalo form (Miller & Scalo 1979) at masses less than 0.1 $`\mathrm{M}_{\odot }`$. While the low-mass stellar content of T associations (such as Taurus-Auriga, see Briceño et al. 1998 and Kenyon & Hartmann 1995) or very young clusters (such as IC 348, see Herbig 1998) seems well known, not much is known about the low-mass stellar content of OB associations. Part of the difficulty is due to the fact that most of the low-mass pre-main sequence (PMS) stars cannot be easily distinguished from normal field stars in the huge area on the sky (several hundred square degrees) covered by nearby OB associations. Only Classical T-Tauri stars can be found easily, by their strong H$`\alpha `$ emission, using objective-prism surveys. X-ray observations have also proved to be an efficient way of distinguishing PMS stars from older field stars (see below). The Scorpius-Centaurus association is the OB association nearest to the Sun. It contains several hundred B stars arranged in three subgroups: Upper-Scorpius, Upper Centaurus-Lupus and Lower Centaurus-Crux. Upper-Sco ($`l\sim 354^o`$, $`b\sim 20^o`$) is the youngest subgroup. Its lack of dense molecular material and deeply embedded young stellar objects indicates that the process of star formation has ended. The area is free of dense gas and clouds, and the association members show only moderate extinction ($`A_V<2`$ mag). Hipparcos data have been used to identify 120 association members, including 49 B stars and 34 A stars. According to these data, Upper-Sco is 145 pc away from the Sun and has a size of 130 deg$`^2`$ (de Zeeuw et al. 1999). Age determinations based on the upper (de Zeeuw & Brand 1985; de Geus et al. 1989) and lower (Preibisch & Zinnecker 1999) mass range derive a value of 5 Myr. However, there have been age determinations that seem to suggest that many of the low-mass cluster members could be 10 Myr old (Martín 1998; Frink 1999). The first search for low-mass members of Upper-Sco was performed by Walter et al. (1994). They obtained photometry and spectroscopy for the optical counterparts of X-ray sources detected in seven Einstein fields and classified 28 objects as low-mass PMS stars. Two large-scale surveys have been performed recently. The first was conducted by Kunkel (1999), who observed optical counterparts of more than 200 ROSAT All-Sky Survey (RASS) X-ray sources in a 60 deg$`^2`$ area in Upper-Scorpius and Upper Centaurus-Lupus. The other study was a spectroscopic survey for PMS stars in a 160 deg$`^2`$ area by Preibisch et al. (1998). A number of further searches have been performed, all focused on small subregions within the association (Meyer et al. 1993; Sciortino et al. 1998; Martín 1998). 
A study of the history of star formation in Upper-Sco has been published by Preibisch & Zinnecker (1999). The authors obtained R and I photometry of association members to about $`I\sim 12.8`$ and $`R\sim 14`$. With the goal of extending the low-mass sequence of the association, we have obtained photometry for the association in the R, I, and Z filters. We have also observed selected member candidates in the J and H filters and spectroscopically. Our search starts at $`I\sim 13`$ and is therefore complementary to Preibisch & Zinnecker (1999). Section 2 describes the observations and Section 3 outlines the results. ## 2 Observations A summary of all observations is given in Table 1. We obtained photometry for eight fields of 80 by 80 arcmin in Upper-Sco using the 60cm Michigan Curtis-Schmidt telescope at CTIO. We therefore cover $`10\%`$ of the association. Figure 1 shows the location of the fields. We observed the fields in the R, I, and Z filters. Raw frames were reduced within the IRAF environment (IRAF is distributed by the National Optical Astronomy Observatories, which are operated by the Association of Universities for Research in Astronomy, Inc., under contract with the National Science Foundation), using the CCDRED package. The images were bias-subtracted and flat-fielded. The photometry was obtained using the PSF fitting routines from IRAF. As stars are undersampled (the average FWHM of a star is $`1.5`$ pixels), the errors in the photometry are dominated by centering errors. We obtained magnitudes for more than 180,000 stars. The completeness limits for each filter are: $`R\sim 19`$ magnitudes, $`I\sim 18.5`$ magnitudes, $`Z\sim 18.5`$ magnitudes. All our fields saturate at $`\sim 13`$ mag in all the filters. Figures 2ab show the color-magnitude diagrams for R-I and I-Z. Also shown are the completeness limits and saturation limits. We select as preliminary candidates those objects that lie to the right of the Leggett (1992) main sequence in both color-magnitude diagrams and below the saturation limit in the I vs. I-Z color-magnitude diagram. This assumption will most certainly increase the number of candidates that are not cluster members. The best way of doing the selection would be to include the errors due to photometry and undetected binaries in the estimation of the number of objects in a band around the 5 Myr isochrones. However, given that we do not have a priori information about reddening, it seems safer to begin by assuming that every object to the right of the Leggett (1992) sequence will be a member of the cluster. With this selection method we also take account of the saturation of the R-I color for low-mass stars. As has been shown by Bessell (1991), the R-I color saturates at R-I $`\sim 2.4`$ (due to molecular absorption in the stellar photosphere) and becomes bluer for cooler objects. Therefore, a selection based only on a band around the 5 Myr isochrones would miss the very low-mass objects. Figure 3 shows the I vs. R-I color-magnitude diagram for the resulting candidates. As can be seen from Figure 3, the saturation limit in the Z filter affects the selection of candidates brighter than $`I\sim 13.5`$, which corresponds to $`\sim `$M2. At the faint end, we will miss objects with R-I $`>2.2`$. We obtain 138 candidates, listed in Table 2 (finding charts can be obtained by contacting the authors). If all candidates belonged to the cluster, we would neatly cover the whole range of M stars. 
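Schematically, the selection just described amounts to a pair of color cuts plus a saturation cut. The sketch below shows only the logic; the Leggett main-sequence relations and the photometry arrays are placeholders of our own invention (the actual selection used the empirical Leggett (1992) colors and the measured catalog).

```python
import numpy as np

def select_candidates(I, R, Z, leggett_ri, leggett_iz, sat_I=13.0):
    """Keep objects redder than the main sequence in *both* diagrams
    and fainter than the Z-band saturation limit (I > sat_I)."""
    redder_ri = (R - I) > leggett_ri(I)   # right of sequence in I vs R-I
    redder_iz = (I - Z) > leggett_iz(I)   # right of sequence in I vs I-Z
    unsaturated = I > sat_I
    return redder_ri & redder_iz & unsaturated

# Toy example with synthetic photometry and linear stand-ins for the
# Leggett colors (hypothetical relations, for illustration only):
rng = np.random.default_rng(0)
I = rng.uniform(13.0, 18.5, 1000)
R = I + rng.uniform(0.5, 2.5, 1000)   # implied R-I colors
Z = I - rng.uniform(0.2, 1.2, 1000)   # implied I-Z colors
mask = select_candidates(I, R, Z,
                         leggett_ri=lambda i: 0.12 * i - 0.2,
                         leggett_iz=lambda i: 0.06 * i - 0.3)
print(mask.sum(), "preliminary candidates")
```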
As mentioned in the introduction, Preibisch & Zinnecker (1999) obtained R and I photometry of association members to about $`I\sim 12.8`$ and $`R\sim 14`$. Our search therefore begins where theirs ended. We have complemented the optical observations with infrared J and H observations of selected candidates. To perform these observations we used the Cerro Tololo Infrared Imager (CIRIM) at CTIO. Raw frames were reduced within the IRAF environment, following the procedure outlined by Joyce (1992). We were able to observe only 9 objects, due to bad weather and instrumental problems. Table 3 details the results. Using the red arm of the KAST spectrograph at the Lick 3m telescope (which covers the range from 5000 to 10000 Å), we observed selected bright candidates at low resolution (grating 300/7500, which gives $`\mathrm{\Delta }\lambda \sim 11`$ Å of resolution). Raw images were reduced within the IRAF environment, using standard tools to perform flat-fielding, optimal extraction, wavelength calibration and response correction of the spectra. The spectra were not corrected for telluric absorption. Because of the low declination of the cluster, observations from Lick Observatory must be made through a large airmass. This, together with the low resolution of the observations and the variable fringing of the spectrograph in the red arm, makes it very difficult to comment on gravity sensitive lines that lie in regions of telluric absorption, such as the K I resonance doublet ($`\lambda \lambda 7665,7699`$) or the subordinate lines of the Na I doublet ($`\lambda \lambda 8183,8195`$). However, the spectral resolution is good enough to identify whether or not a star has H$`\alpha `$ in emission. We have used the I3 index defined by Martín & Kun (1996) and the VO index defined by Kirkpatrick, Henry & Simons (1995) to find the spectral types of the Lick stars. These indexes have the advantage of being based on ratios of fluxes at nearby wavelengths. Therefore, they are not very sensitive to reddening. The spectral type obtained from the spectroscopic indexes can be compared with the spectral type from the colors to obtain the reddening. The results are shown in Table 3. The spectral types derived from the spectroscopy confirm the spectral types from the colors. ## 3 Discussion ### 3.1 Contamination by other sources Photometric observations as a way to select cluster candidates are susceptible to contamination by foreground and background objects. There are four kinds of objects that may appear above the main sequence in the color-magnitude diagram: background giants, background galaxies, reddened background stars, and foreground low-mass stars. From the results of Kirkpatrick et al. (1994) it is possible to show that the number of contaminating giants is negligible ($`<5\%`$ for all spectral types). Contamination by galaxies is not important for the region of interest in the color magnitude diagram (Bouwens et al. 1998ab). To estimate the contamination due to reddened background stars we use the maximum reddening towards the Upper-Sco region, which is $`A_V\sim 2`$ (Schlegel, Finkbeiner & Davis, 1998). Using the density of background stars (from the observed population of background stars under the Leggett sequence in the color-magnitude diagrams) we find that at most 25 candidates may be background stars: 7 before M4, 10 between M4 and M5, 8 after M5. Another source of contamination is foreground field M-stars in the cluster line of sight. 
Using the luminosity function derived by Kirkpatrick et al. (1994) we find that there should be 10 field stars between M4 and M5, 3 between M5 and M6, 6 between M6 and M7, and 15 between M7 and M9. Before M4 we expect a lower limit of 15 field stars. ### 3.2 Spectroscopic observations Low resolution spectroscopic observations of 22 candidates were made with the idea of determining cluster membership. As was mentioned before, the observations are not detailed enough to give information about gravity, but they can provide information about activity in the form of the H$`\alpha `$ line. As has been shown by Prosser, Stauffer & Kraft (1991), activity decreases with age, and therefore a strong H$`\alpha `$ line in emission is an indicator of youth. Prosser et al. (1991) and Liebert et al. (1992) have also shown that activity increases with spectral type, starting at about M1 for field dwarfs. The H$`\alpha `$ equivalent width reaches $`12`$ Å at about M9. We find H$`\alpha `$ in 20 of the 22 objects observed spectroscopically. The values of the equivalent width are in Table 3. In Table 3 we also compare the spectral types determined from the spectra to those determined from CIRIM observations (I-J) when possible. As the Table shows, the two determinations are consistent, which confirms the accuracy of the photometry. Figure 4 shows the traces of five representative spectra, uncorrected for telluric absorption but corrected for reddening. Figure 5 shows the comparison of the measured H$`\alpha `$ equivalent widths in Upper-Sco with those in the $`\sigma `$-Orionis (3-7 Myr, Béjar et al. 1999) and the $`\alpha `$-Persei (60 - 90 Myr, Stauffer et al. 1999, Basri & Martín 1999) clusters. Also shown is the envelope of H$`\alpha `$ equivalent widths for field stars (Prosser et al. 1991; Liebert et al. 1992). Overall, the Upper-Sco values lie above those of $`\alpha `$-Per and below those of $`\sigma `$-Ori. 9 of the Upper-Sco objects are below the envelope defined by field stars, which means that their H$`\alpha `$ strength is consistent with them not being cluster members. However, the smoothness of the envelope is misleading. The original data for field stars show that the maximum H$`\alpha `$ equivalent width for each spectral type has a fair amount of scatter. Furthermore, spectroscopic observations of the young $`\sigma `$-Orionis cluster have shown that the three objects below the field-star envelope are likely to be cluster members. These arguments show that the presence of H$`\alpha `$ alone is not enough to confirm or deny the status of a candidate as a cluster member. Figure 6 shows the color-magnitude diagram for the objects observed spectroscopically. All the colors and magnitudes have been corrected for reddening, if known from the spectroscopy. The errors in each axis are $`\pm 0.1`$ magnitudes. We have included in Figure 6 the isochrones calculated by D’Antona & Mazzitelli (1994). The translation from their luminosity–$`T_{eff}`$ calculations to I and R-I involves a color-$`T_{eff}`$ scale and bolometric corrections. We have used the transformations given by Bessell, Castelli & Plez (1998) and the colors for low-mass stars from Kirkpatrick & McCarthy (1994). As Bessell et al. (1998) give bolometric corrections only for dwarf stars and the Kirkpatrick & McCarthy (1994) colors are from field stars, the color-magnitude isochrones in Figure 6 suffer from considerable uncertainties. 
This is another reason to base the selection of candidates on the Leggett (1992) main sequence. Besides D’Antona & Mazzitelli (1994), Burrows et al. (1997) and Baraffe et al. (1998) have published low-mass isochrones. Burrows et al. (1997) provide luminosities and effective temperatures, and so the translation to observable quantities suffers from the same problems as indicated above. The predicted masses for each color are smaller by about 50%, compared to those of D’Antona & Mazzitelli (1994). Baraffe et al. (1998) provide isochrones in the color-magnitude space, but they are too blue, touching the observed Leggett main sequence already at 5 Myr. A comparison of the models of the three groups can be found in Béjar, Zapatero-Osorio & Rebolo (1999). This comparison shows that there may still be systematic errors in the R,I isochrones. Even taking into account the measurement errors, the objects are scattered around various isochrones. Assuming they all belong to the cluster, it is not clear from these observations what the correct age of the association would be, because even those objects with strong H$`\alpha `$ (e.g. objects above the H$`\alpha `$ envelope for the field) do not all fall on a single isochrone. If one believes the 5 Myr estimate for the age of the cluster (Preibisch & Zinnecker 1999), the scatter has to be explained by other means. If some of the spectroscopic objects were unresolved binaries, their magnitudes would have to be increased. This would work for the latest types of objects, but not for the earliest types. As mentioned above, errors in the theoretical isochrones are also a possibility. However, it does not seem likely that any realistic adjustment in the models will make all the data points lie on the same isochrone. All these arguments point to the conclusion that the scatter is probably caused by more than one factor. Without follow-up spectroscopy (see below) it is not possible to make a stronger statement. Of the 11 objects with a spectral type between M4 and M5 we find H$`\alpha `$ in all except two. We expect 40% contamination in this bin, and given the small-number statistics involved, our finding of two contaminating stars is consistent with this estimate, assuming that all the stars with H$`\alpha `$ in emission are indeed cluster members. We found H$`\alpha `$ in the eight stars observed with spectral types between M5 and M6. The expected contamination is about 10%. The fact that we did not find any clear non-cluster member is consistent with the estimate. In other words, assuming that the objects with H$`\alpha `$ in emission are cluster members is consistent with the contamination estimates. ### 3.3 The Initial Mass Function From the D’Antona & Mazzitelli (1994) models we find that the substellar limit for this association should be at $`I\sim 14.6`$, R-I $`\sim 2.1`$, $`\sim `$M6. This is not an accurate number, given the uncertainties in the color-$`T_{eff}`$ scale and the bolometric corrections mentioned before. However, if one accepts this estimate, we should have 10 brown dwarfs in our spectroscopic sample, assuming that all the H$`\alpha `$ emitters belong to the association. One can use the Miller-Scalo IMF (Miller & Scalo 1979) centered on 0.1 $`\mathrm{M}_{\odot }`$ to estimate the expected number of low-mass stars and brown dwarfs in our sample. Using the 34 A stars found by Hipparcos (de Zeeuw et al. 
1999) we obtain that there should be 650 stars with masses between 0.07 and 0.5 $`\mathrm{M}_{\odot }`$, and about 600 brown dwarfs with masses between 0.005 and 0.07 $`\mathrm{M}_{\odot }`$. We are only complete to 0.03 $`\mathrm{M}_{\odot }`$: there should be 30 brown dwarfs between 0.03 and 0.07 $`\mathrm{M}_{\odot }`$. Given that we are covering 10% of the cluster, we expect 65 M stars and 60 brown dwarfs, with 3 brown dwarfs having masses between 0.03 and 0.07 $`\mathrm{M}_{\odot }`$. In the whole sample, we find $`20`$ objects with masses between 0.03 and 0.07 $`\mathrm{M}_{\odot }`$ (the number of expected contaminants is $`10`$) and $`90`$ objects with masses greater than 0.03 $`\mathrm{M}_{\odot }`$ (the number of expected contaminants is $`20`$). The number of M stars (70) is similar to that predicted by the Miller-Scalo IMF. The number of brown dwarf candidates that we are finding here is a little high (3 predicted compared to 10 after taking account of the contamination). As the recent work by Paresce & De Marchi (1999) shows, most estimations of the IMF assume a log-normal distribution, generally centered on masses larger than 0.1 $`\mathrm{M}_{\odot }`$. Using such a distribution would decrease the number of predicted brown dwarfs in our sample even further. This discrepancy between the observed and predicted numbers of brown dwarfs may be due to an underestimation of the number of contaminants. The study by Kirkpatrick et al. (1994) mentioned in the section about contamination measures stellar populations towards the galactic poles. Such populations may not be representative of the line of sight towards Upper-Scorpius. The study by Guillout et al. (1998) finds a population of young late-type stars (down to M2) in nearby Lupus distributed along the so-called ’Gould Belt’. It is possible that the Belt population is contaminating our sample as it crosses near Upper-Scorpius, even though it is difficult to understand why it would affect only the brown dwarf range and not the low-mass star range. Alternatively, the discrepancy between the observed and expected number of brown dwarfs may reflect a real increase in the IMF at low masses, similar to that described by Bouvier et al. (1998). It is not possible to make a stronger statement about the IMF without follow-up spectroscopy of all the candidates. ### 3.4 Lithium Burning As models show, the transition between lithium destruction and preservation in stellar photospheres occurs at higher masses for younger clusters. At the age of Upper-Sco (5 Myr) neither very low-mass stars nor brown dwarfs have had sufficient time to reach the core temperatures necessary to start lithium burning (D’Antona & Mazzitelli 1997; Soderblom et al. 1998). We therefore plan to search for lithium in these candidates, as they should have it if they are cluster members. As the cluster is very young, the so-called ’Lithium Test’ (Magazzu, Martín & Rebolo 1993), the use of the substellar lithium boundary to date the cluster, cannot be applied in Upper-Sco. However, the amount of lithium in the photospheres of early M stars could be used to date the association, as an M0 member of a 10 Myr cluster should have about 4 times less lithium than if the cluster were 5 Myr old (Soderblom et al. 1998). Also, as always, an object with spectral type later than M7 and showing lithium in the photosphere should be a brown dwarf (Basri 1998). Therefore, even though there is no lithium boundary in the cluster, lithium is still an important age and mass diagnostic. Recently, Béjar et al. 
(1999) have suggested that the deuterium boundary could be used as another way to date the cluster. For the Upper-Sco association we expect the deuterium boundary to be located at 0.04 $`\mathrm{M}_{\odot }`$, and therefore some of our reddest candidates could still have deuterium in their atmospheres. However, detecting deuterium abundances poses considerable challenges from the observational point of view. UScoCTIO 128 is a very interesting object: it shows strong H$`\alpha `$ emission, and its position in the color magnitude diagram indicates a mass of 0.02 $`\mathrm{M}_{\odot }`$. For comparison, the models from Burrows et al. (1997) give an even smaller mass, of about 0.015 $`\mathrm{M}_{\odot }`$. We have two measurements of the H$`\alpha `$ equivalent width (separated by a month and taken with the same instrument in the same configuration), and the value is the same within the errors. This indicates that the strong equivalent width is probably not a flare. In this respect UScoCTIO 128 is very similar to SOri 45, found by Béjar et al. (1999). More spectroscopic observations are needed before we can confirm UScoCTIO 128 as the lowest mass member yet of the Upper-Sco OB association. ## 4 Conclusions We have conducted a photometric search for the low-mass members of the Upper-Scorpius OB association, using the R, I, and Z filters. Completeness limits are R $`\sim 19`$, I $`\sim 18.5`$, Z $`\sim 18.5`$. Our search covers $`10\%`$ of the association. The photometry comfortably crosses the substellar limit for the cluster, situated at $`I\sim 14.6`$, R-I $`\sim 2.1`$, $`\sim `$M6. This is the first survey to sample the low-mass region of Upper-Sco. We find 138 candidate members of the cluster. Contamination by non-cluster members (mainly foreground M stars and background reddened stars) has been estimated to be 59 objects. Follow-up observations using infrared images and low resolution spectroscopy confirm the optical photometry for a reduced sample of the candidates. Of 22 objects observed spectroscopically, 20 have H$`\alpha `$ in emission, an indicator of young age. Comparisons between the H$`\alpha `$ equivalent widths found in other clusters and those in Upper-Sco indicate that 11 of those 20 objects may be members of the association, as those 11 objects have stronger H$`\alpha `$ than expected for low-mass field stars. However, it is possible that all 20 objects are indeed association members. The objects with strong H$`\alpha `$ do not all fall on a single isochrone. This may be due to the presence of unresolved binaries, to contamination from field H$`\alpha `$ emitters, or to an intrinsic age spread in the cluster. Using the Miller-Scalo IMF centered on 0.1 $`\mathrm{M}_{\odot }`$ we estimate the number of objects with masses between 0.07 and 0.3 $`\mathrm{M}_{\odot }`$ to be 3 times less than what is observed. This may be due to an underestimation of the number of contaminants affecting this mass bin, due for example to a contribution of low-mass objects by the Gould Belt. On the other hand, similar excesses in the low-mass populations have been observed in other young clusters, like the Pleiades, and may point to a departure of the IMF from a simple log-normal form. A stronger statement about the IMF will have to wait until we have more cluster membership diagnostics. The spectroscopic observations do not have high enough resolution to observe gravity sensitive lines, such as the resonant transitions of K I or the subordinate lines of Na I. Therefore, they cannot be used to distinguish between young low-mass objects and main-sequence objects. 
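The scaling argument behind the expected numbers of Section 3.3 can be sketched as follows. The log-normal width (0.7 dex) and the A-star mass range (1.4 to 3.2 $`\mathrm{M}_{\odot }`$) are our own assumptions, not values quoted above, so the sketch reproduces the 650 stars and 600 brown dwarfs only to within a factor of about 1.5.

```python
import math

# Log-normal IMF in log10(m), centered on 0.1 Msun as in the text;
# the width SIGMA and the A-star mass range below are assumptions.
MU, SIGMA = math.log10(0.1), 0.7

def frac(m_lo, m_hi):
    """Fraction of objects with mass in [m_lo, m_hi] (solar masses)."""
    z = lambda m: (math.log10(m) - MU) / (SIGMA * math.sqrt(2.0))
    return 0.5 * (math.erf(z(m_hi)) - math.erf(z(m_lo)))

n_A = 34                          # Hipparcos A stars in the association
scale = n_A / frac(1.4, 3.2)      # normalize to the A-star count
print("0.07-0.5  Msun:", round(scale * frac(0.07, 0.5)))     # ~400-450
print("0.005-0.07 Msun:", round(scale * frac(0.005, 0.07)))  # ~350-400
```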
We find a very interesting object, UScoCTIO 128 ($`\sim `$M7), with very strong H$`\alpha `$ (equivalent width of $`130`$ Å) and an estimated mass of 0.02 $`\mathrm{M}_{\odot }`$. If this object is a member of the cluster, it would be one of the lowest-mass brown dwarfs known to date. Full confirmation of the membership of these objects will have to wait for higher resolution spectroscopy that can observe Li I, K I, and Na I. As has been suggested by other groups, these very young objects provide a unique opportunity to study the depletion of very light elements such as deuterium. Perhaps in the future studies of deuterium depletion will take the place of lithium depletion as a precise method of determining the ages of very young clusters. We would like to thank Victor Béjar for invaluable help in collecting the CTIO observations and for many stimulating discussions concerning brown dwarfs. Thanks are also due to Debi Howell-Ardila, who edited the manuscript for language. We acknowledge the support of the National Science Foundation through grant number AST96-18439. EM acknowledges support from the NASA Origins program. Figure 1: Location of the observed fields in Upper-Scorpius. Each field is 80’ by 80’. The black dots indicate Hipparcos members (de Zeeuw et al. 1999). Figure 2: (a) I vs. R-I color-magnitude diagram for all the objects observed. The results of the photometry are represented by small dots. The lower dashed line is the completeness limit. The solid line is the Leggett (1992) main sequence. (b) Same diagram for I vs. I-Z. The upper dashed line is the saturation limit. Figure 3: Color-magnitude diagram for the selected candidates. The solid line is the Leggett (1992) main sequence. The dashed line is the completeness limit. The arrow corresponds to $`A_V=1`$. Figure 4: Traces of five representative spectra with H$`\alpha `$. The spectra are corrected for reddening but not for telluric absorption. UScoCTIO 128 has an H$`\alpha `$ equivalent width of -130.5 Å. Figure 5: Absolute values of H$`\alpha `$ equivalent widths for various clusters. All the H$`\alpha `$ lines are in emission. Different symbols mark the Upper-Sco, $`\alpha `$-Persei, and $`\sigma `$-Orionis objects. For clarity, only those objects with H$`\alpha `$ EqW less than $`50`$ Å are plotted. The dashed line shows the envelope of the H$`\alpha `$ equivalent widths for field stars (Prosser et al. 1991; Liebert et al. 1992). The cross indicates the size of the error bars. The H$`\alpha `$ strength in Upper-Sco is intermediate between that of $`\sigma `$-Orionis and $`\alpha `$-Persei. Figure 6: Color-magnitude diagram of objects observed spectroscopically. All objects have been de-reddened. The dots indicate objects for which the H$`\alpha `$ equivalent width is above the field envelope (see text). The asterisks indicate objects for which the H$`\alpha `$ equivalent width is below the field envelope. The upper axis shows the spectral types calculated using the color-spectral type calibration from Kirkpatrick & McCarthy (1994). Also shown are the evolutionary models by D’Antona & Mazzitelli (1994), for masses from 0.20 to 0.02 $`\mathrm{M}_{\odot }`$, and isochrones from 1 Myr to 10 Myr. Of the objects observed spectroscopically, only two (UScoCTIO 28 and UScoCTIO 162) do not have H$`\alpha `$.
# Transition to Stochastic Synchronization in Spatially Extended Systems ## I A Survey on Stochastic Synchronization in low-dimensional systems Before entering the main topic of this paper, i.e. stochastic synchronization in spatially extended systems, it is worth presenting the concepts, tools and properties characterizing the same kind of phenomenon in low-dimensional dynamical systems. In this section we consider some simple models that exemplify the general features of this phenomenon. The basic model equation is the stochastic map $$x^{t+1}=f(x^t)+\sigma \eta ^t$$ (1) where the state variable $`x^t`$ is a real quantity depending on the discrete time index $`t`$, $`\sigma `$ is the amplitude of the time-dependent random variable $`\eta ^t`$, uniformly distributed in the interval \[-1,1\], and $`f(x)`$ is a map from a set $`𝒮⊆IR`$ into an interval $`ℐ⊆IR`$. Stochastic synchronization can be investigated by considering two different initial conditions, $`x^0`$ and $`y^0`$, of dynamics (1) with the same realization of the additive noise $`\eta ^t`$. More precisely, we assume that the corresponding trajectories, $`x^t`$ and $`y^t`$, generated by (1) may synchronize if their distance $$d(t)=|x^t-y^t|$$ (2) becomes smaller than a given threshold $`\mathrm{\Delta }`$, usually assumed much smaller than unity. This definition allows one to identify two natural quantities associated with the synchronization of trajectories: the first passage time $`\tau _1(\mathrm{\Delta })`$, i.e. the first instant of time at which $`d(t)≤\mathrm{\Delta }`$, and the synchronization time $`\tau _2(\mathrm{\Delta })`$, i.e. the interval of time during which $`d(t)`$ remains smaller than $`\mathrm{\Delta }`$ . We want to point out that, in general, both $`\tau _1(\mathrm{\Delta })`$ and $`\tau _2(\mathrm{\Delta })`$ depend on the initial conditions and on the realization of the noise. Accordingly, their averages over both ensembles have to be considered as the quantities of interest for our analysis. For the sake of simplicity we shall indicate these averaged times with the same notations $`\tau _1(\mathrm{\Delta })`$ and $`\tau _2(\mathrm{\Delta })`$. In all the numerical simulations, for sufficiently small values of $`\mathrm{\Delta }`$ (typically $`\mathrm{\Delta }<10^{-7}`$), the results do not depend on the choice of $`\mathrm{\Delta }`$. Moreover, one can assume that the trajectories have eventually reached the synchronized state if, after $`\tau _1(\mathrm{\Delta })`$, $`d(t)`$ remains smaller than $`\mathrm{\Delta }`$ for arbitrarily large integration times. This amounts to saying that $`\tau _2(\mathrm{\Delta })`$ goes to infinity with the integration time. This heuristic definition of the synchronized state can be replaced by a more quantitative criterion: according to Pikovsky, for a given value of $`\sigma `$ the synchronized state occurs if the Lyapunov exponent $`\mathrm{\Lambda }`$ of dynamics (1) is negative. This indicator is defined as follows: $$\mathrm{\Lambda }=\underset{t→∞}{lim}\frac{1}{t}\mathrm{ln}\prod _{j=1}^{t}\left|\frac{\xi ^j}{\xi ^{j-1}}\right|$$ (3) where the dynamical variable $`\xi ^t`$ obeys the ”linearized” dynamics $$\xi ^{t+1}=\frac{\partial x^{t+1}}{\partial x^t}\xi ^t$$ (4) and the derivative is computed along the trajectory given by (1). We want to remark that $`\mathrm{\Lambda }`$ is well defined for deterministic dynamics (e.g. the $`\sigma =0`$ case in (1)). 
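As a concrete illustration of how $`\mathrm{\Lambda }`$ and the first passage time can be estimated in practice, the following minimal sketch evolves two replicas of dynamics (1) under a common noise realization while accumulating the tangent-space multipliers of eq. (4). It is only an illustration of the procedure just described, not the code used for the results reported here; the map f, its derivative df and all parameter values are user-supplied placeholders.

import numpy as np

def lyapunov_and_first_passage(f, df, sigma, x0, y0, delta=1e-7, n_steps=10**6, seed=0):
    # Two replicas driven by the SAME noise realization, eq. (1);
    # the running sum of ln|f'(x)| estimates Lambda, eqs. (3)-(4).
    rng = np.random.default_rng(seed)
    x, y, lam_sum, tau1 = x0, y0, 0.0, None
    for t in range(1, n_steps + 1):
        eta = rng.uniform(-1.0, 1.0)
        lam_sum += np.log(abs(df(x)))        # tangent-space multiplier
        x = f(x) + sigma * eta
        y = f(y) + sigma * eta               # common noise couples the replicas
        if tau1 is None and abs(x - y) <= delta:
            tau1 = t                         # first passage time tau_1(Delta)
    return lam_sum / n_steps, tau1

Averaging the two outputs over seeds and initial conditions yields the ensemble averages discussed above.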
In the framework of linear stability analysis a positive (negative) $`\mathrm{\Lambda }`$ measures the average exponential expansion (contraction) rate of nearby trajectories. For $`\sigma ≠0`$ we are faced with stochastic trajectories and it is not a priori obvious whether $`\mathrm{\Lambda }`$ is still a meaningful quantity . On the other hand, eq. (4) does not depend explicitly on $`\eta ^t`$ and it is formally equivalent for both the noisy and the noise-free cases. However, the presence of noise modifies the evolution of the system w.r.t. the noise-free case and, accordingly, also the tangent space dynamics. We have verified numerically that $`\mathrm{\Lambda }`$ is a self-averaging asymptotic quantity also for dynamics (1) with $`\sigma ≠0`$. It can be interpreted as the average exponential expansion (contraction) rate of infinitesimal perturbations of the stochastic trajectories generated by the evolution rule (1). In particular, its value is found to depend on $`\sigma `$, but not on the realization of the noise. Let us remark that it is quite simple to argue why $`\mathrm{\Lambda }<0`$ implies $`\tau _2→∞`$: after some finite time $`\tau _1(\mathrm{\Delta })`$, $`d(t)`$ has decreased below a small threshold $`\mathrm{\Delta }`$ and the trajectories can be viewed as perturbations of each other. Linear stability implies that their distance will keep on decreasing exponentially with an average rate $`\mathrm{\Lambda }`$, so that, within numerical precision, they will rapidly converge onto the same trajectory. As a first example, we consider the continuous map shown in Fig. 1: $$f(x)=\{\begin{array}{cc}-c\mathrm{tanh}(b(1+x))\hfill & \mathrm{if}x<-1;\hfill \\ ax(1-|x|)\hfill & \mathrm{if}|x|≤1;\hfill \\ c\mathrm{tanh}(b(1-x))\hfill & \mathrm{if}x>1.\hfill \end{array}$$ (5) where $`𝒮=IR`$ and $`ℐ=[-1,1]`$. We choose the parameter values $`a=4`$ and $`c=0.5`$, so that (5) can be viewed as a sort of anti-symmetrized version of the logistic map at the Ulam point, taking values over the whole real axis. It can easily be shown that for $`\sigma =0`$, and independently of $`b`$, $`\mathrm{\Lambda }=\mathrm{ln}2`$, i.e. map (5) is chaotic. Notice that the noise term of amplitude $`\sigma `$ extends $`ℐ`$ to the interval $`[-1-\sigma ,1+\sigma ]`$. We have verified numerically that, for any value of $`b`$ and for $`\sigma `$ larger than a threshold value $`\sigma _\mathrm{\Lambda }`$, $`\mathrm{\Lambda }`$ becomes negative and after some finite time $`\tau _1`$ a synchronized state is eventually achieved. In particular, we find that $`\sigma _\mathrm{\Lambda }`$ is strongly dependent on $`b`$: for instance, we have obtained $`\sigma _\mathrm{\Lambda }=1.2`$ for $`b=2`$ and $`\sigma _\mathrm{\Lambda }=0.019`$ for $`b=1000`$. As recently found also by Lai and Zhou for a similar mapping, this result indicates that symmetric, i.e. zero-average, noise can yield stochastic synchronization. It is worth mentioning that some time ago Herzel and Freund conjectured that stochastic synchronization can be achieved only if the noise has a non-zero average. They were led to such a conclusion by studying stochastic synchronization for the case of a map $`f`$ of the unit interval into itself, i.e. $`𝒮=[0,1]`$. In such a case the application of the stochastic evolution rule (1) demands the adoption of some further recipe for maintaining the state variable $`x^t`$ inside the unit interval when $`\sigma ≠0`$. 
For instance, one can choose the following reinjection rule: $`x^{t+1}→x^{t+1}+1`$ if $`x^{t+1}<0`$ and $`x^{t+1}→x^{t+1}-1`$ if $`x^{t+1}>1`$. As discussed in , any recipe of this kind yields an effective state-dependent noise that does not preserve the original symmetry of the stochastic process $`\eta ^t`$, thus acquiring a non-zero average value. The above described example and the results obtained in disprove their conjecture. Nonetheless, an interesting observation is contained in : the stochastic evolution rule (1) for maps defined on a finite interval induces strong non-linear effects due to the discontinuities introduced into the dynamics by the state-dependent noise. This strong non-linear character of the dynamics is irrelevant for stochastic synchronization in low-dimensional systems; conversely, it turns out to be a crucial property for discriminating between different critical behaviours in high-dimensional systems, as we shall discuss in Section IV. It is also worth noting that even the presence of non-zero average noise does not necessarily guarantee stochastic synchronization. For instance, a counterexample is provided by considering dynamics (1) for the logistic map at the Ulam point: $$f(x)=4x(1-x)$$ (6) whose noise-free dynamics is mixing. The state-dependent noise modifies the probability measure of $`x^t`$ in such a way as to increase the weight of the contracting regions of the map. On the other hand, $`\mathrm{\Lambda }`$ remains positive and $`\tau _2`$ remains finite for any value of $`\sigma ∈[0,1]`$, although they can be made so small and so large, respectively, as to produce an apparent synchronization effect. As discussed in , misleading results can be obtained in this case due to the finite computational precision of numerical simulations. As a final example, we consider the map $$f(x)=\{\begin{array}{cc}bx\hfill & \text{if }0<x<1/b\hfill \\ a+c(x-1/b)\hfill & \text{if }1/b<x<1\hfill \end{array}$$ (7) Its dynamics converges to a stable periodic attractor: for $`b=2.7`$, $`a=0.07`$ and $`c=0.1`$ this is a period-3 orbit with negative Lyapunov exponent, $`\mathrm{\Lambda }=\mathrm{ln}(cb^2)≃-0.316`$. Numerical analysis shows that the Lyapunov exponent of the stochastic evolution rule (1) remains negative for any value of $`\sigma `$ and, according to Pikovsky’s criterion, a synchronized state is always achieved, as we have numerically checked. In summary, we have presented in this section three different examples of low-dimensional dynamical systems epitomizing the possible scenarios for stochastic synchronization: in the last example it occurs for any value of the noise amplitude $`\sigma `$, in the second one for no value of $`\sigma `$, and in the first example only above some threshold value $`\sigma _\mathrm{\Lambda }`$. ## II Stochastic synchronization in CML models The generalization of the stochastic map (1) to a CML model with additive spatio-temporal noise can be defined by the following two-step evolution rule: $`\stackrel{~}{x}_i^t=(1-\epsilon )x_i^t+{\displaystyle \frac{\epsilon }{2}}(x_{i-1}^t+x_{i+1}^t)`$ (8) $`x_i^{t+1}=f(\stackrel{~}{x}_i^t)+\sigma \eta _i^t`$ (9) The real state variable $`x_i^t`$ now depends also on the discrete space index $`i=1,2,…,L`$: assuming unit lattice spacing, $`L`$ corresponds to the lattice size. The strength of the spatial coupling between nearest-neighbour maps in the lattice is fixed by the parameter $`\epsilon `$, which can take values in the interval \[0,1\]. 
Note that this kind of coupling amounts to a discrete version of a diffusive term. The application of the map $`f`$ mimics the reaction term of reaction-diffusion PDEs, so that CMLs are commonly assumed to represent a sort of discretized version of continuous PDEs. At variance with standard CML models, dynamics (9) also contains a stochastic term given by a set of independent, identically distributed (i.i.d.) random variables $`\{\eta _i\}`$, whose amplitude is determined by the parameter $`\sigma `$. In what follows these i.i.d. random variables are assumed to be uniformly distributed in the interval \[-1,1\]. In full analogy with the low-dimensional case discussed in the previous section, stochastic synchronization can be investigated by considering two different initial conditions $`\{x_i^0\}`$ and $`\{y_i^0\}`$ for dynamics (9), coupled by the same realization of the additive spatio-temporal noise. More precisely, we assume that the corresponding trajectories, $`\{x_i^t\}`$ and $`\{y_i^t\}`$, generated by (9) may synchronize if their distance $$d(t)=\frac{1}{L}\sum _{i=1}^{L}|x_i^t-y_i^t|$$ (10) becomes smaller than a given threshold $`\mathrm{\Delta }`$, usually assumed much smaller than unity. Upon this definition one can straightforwardly extend to the CML case indicators like the first passage time $`\tau _1(\mathrm{\Delta })`$ and the synchronization time $`\tau _2(\mathrm{\Delta })`$. Also in this case we assume that the trajectories synchronize if, after the time $`\tau _1(\mathrm{\Delta })`$, $`\tau _2(\mathrm{\Delta })`$ is found to diverge with the integration time. In all the numerical simulations we have used the same value of $`\mathrm{\Delta }`$ as in the low-dimensional case. In fact, we have verified that also in the high-dimensional case the results of numerical simulations are not affected by the choice of $`\mathrm{\Delta }`$, provided it is small enough, typically $`\mathrm{\Delta }<10^{-7}`$. Again, both $`\tau _1(\mathrm{\Delta })`$ and $`\tau _2(\mathrm{\Delta })`$ are quantities averaged over initial conditions and over realizations of the noise. In analogy with low-dimensional systems, we expect that the suitable dynamical indicator for identifying stochastic synchronization in dynamics (9) is the maximum Lyapunov exponent, $`\mathrm{\Lambda }`$. In particular, the phenomenon is expected to occur only for values of the noise amplitude $`\sigma `$ such that $`\mathrm{\Lambda }`$ is negative. For what concerns the interpretation of this indicator for the stochastic CML dynamics (9), the same kind of remarks and conclusions discussed in Section I for the low-dimensional case still hold. All the numerical estimates of $`\mathrm{\Lambda }`$ have been performed by applying the standard algorithm outlined in . Another relevant indicator, strictly related to the spatial structure of the CML dynamics, is the average propagation velocity of finite amplitude perturbations $$V=\underset{t→∞}{lim}\underset{L→∞}{lim}\frac{⟨N(t)⟩}{2t}$$ (11) where $$N(t)=\sum _{i=1}^{L}h_i^t\mathrm{with}h_i^t=\{\begin{array}{cc}1\hfill & \mathrm{if}|x_i^t-y_i^t|>0;\hfill \\ 0\hfill & \mathrm{otherwise}\hfill \end{array}$$ (12) is the number of non-synchronized or “infected” sites at time $`t`$. 
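For concreteness, a bare-bones implementation of the two-replica experiment defined by eqs. (8)-(10) could look as follows. This is only an illustrative sketch assuming a vectorized map f and periodic boundary conditions, not the code actually used for the simulations reported here.

import numpy as np

def evolve_replicas(f, L, eps, sigma, n_steps, seed=0):
    # Two replicas driven by the SAME spatio-temporal noise, eq. (9)
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, L)
    y = rng.uniform(-1.0, 1.0, L)
    dist = np.empty(n_steps)
    for t in range(n_steps):
        eta = sigma * rng.uniform(-1.0, 1.0, L)
        # diffusive coupling of eq. (8); np.roll implements periodic boundaries
        x = f((1 - eps) * x + 0.5 * eps * (np.roll(x, 1) + np.roll(x, -1))) + eta
        y = f((1 - eps) * y + 0.5 * eps * (np.roll(y, 1) + np.roll(y, -1))) + eta
        dist[t] = np.abs(x - y).mean()       # distance d(t), eq. (10)
    return dist

The first passage time tau_1(Delta) is then simply the first index at which dist falls below Delta.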
Here $`\{x_i^t\}`$ and $`\{y_i^t\}`$ represent the trajectories generated by dynamics (9) starting from two initial conditions that differ by finite amounts $`\delta _i∼𝒪(1)`$ only inside a space region of size $`S`$: $$y_i^0=\{\begin{array}{cc}x_i^0+\delta _i\hfill & \mathrm{if}|L/2-i|≤S/2;\hfill \\ x_i^0\hfill & \mathrm{otherwise}\hfill \end{array}$$ (13) The average $`⟨⟩`$ in (11) is performed over different initial conditions and noise realizations. The indicator $`V`$ measures the rate of information propagation : we want to point out that $`V`$ can take finite values even for non-chaotic evolution, i.e. for $`\mathrm{\Lambda }<0`$. For instance, this scenario has been observed for CMLs made of discontinuous maps . In this case infinitesimal perturbations cannot be amplified, while finite amplitude perturbations, induced by the discontinuity, can propagate thanks to the spatial coupling. Therefore in spatially extended systems the information flow is absent only when $`V`$ vanishes. As a first example let us consider the stochastic CML dynamics (9), equipped with map (5) for $`b=2.0`$ (see Fig. 1). In this case, numerical simulations indicate that stochastic synchronization of the trajectories occurs for any value of the diffusive coupling $`\epsilon `$ and for sufficiently large values of $`\sigma `$, above which $`\mathrm{\Lambda }`$ becomes negative. For instance, with $`\epsilon =1/3`$ we find that synchronization is obtained for $`\sigma >\sigma _\mathrm{\Lambda }=2.4768`$. This exemplifies the validity of Pikovsky’s criterion also for spatially extended systems. Let us provide more details of the dynamics below and above the threshold value $`\sigma _\mathrm{\Lambda }`$. For $`\sigma <\sigma _\mathrm{\Lambda }`$ we find that $`\tau _1(\mathrm{\Delta })`$ diverges exponentially with the system size $`L`$, i.e. $`\tau _1∼\mathrm{exp}(L/\xi )`$. The length scale $`\xi `$ is found to be independent of $`L`$ and proportional to the inverse decay rate of the space correlation function of $`x_i^t`$. Accordingly, $`L/\xi `$ is an estimate of the number of effectively independent degrees of freedom. In this dynamical regime the probability $`P(\xi ,\mathrm{\Delta })`$ that two trajectories get closer than a distance $`\mathrm{\Delta }`$ is proportional to the combined probability that each one of the $`L/\xi `$ degrees of freedom gets closer than $`\mathrm{\Delta }`$, i.e. $`P(\xi ,\mathrm{\Delta })∼\mathrm{\Delta }^{L/\xi }`$. One can reasonably assume $`\tau _1^{-1}∼P(\xi ,\mathrm{\Delta })`$; this rough argument explains why $`\tau _1∼\mathrm{exp}(L/\xi )`$. Note that even if $`d(t)`$ eventually becomes smaller than $`\mathrm{\Delta }`$, synchronization is rapidly lost because of the linear instability mechanism due to the positive $`\mathrm{\Lambda }`$, and $`\tau _2`$ is always finite. Conversely, a logarithmic dependence of $`\tau _1(\mathrm{\Delta })`$ on $`L`$ is found for $`\sigma >\sigma _\mathrm{\Lambda }`$ (see Fig. 2 (b)). Numerical simulations show that in this case the number of regions made of a few synchronized sites increases as time flows. Moreover, once a synchronized region is formed, it grows linearly in time, until all regions merge and synchronization sets in over the whole lattice. One can introduce an argument accounting for the logarithmic dependence $`\tau _1∼\mathrm{ln}(L)`$. 
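The propagation velocity (11) admits an equally compact numerical estimate: seed a localized finite perturbation as in eq. (13), evolve both replicas with common noise, and monitor the number of infected sites. Again a hedged sketch, reusing the update step of the previous fragment and estimating V crudely from the late-time growth of N(t); all parameters are placeholders.

import numpy as np

def spreading_velocity(f, L, eps, sigma, S=10, n_steps=2000, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, L)
    y = x.copy()
    half = S // 2
    y[L // 2 - half : L // 2 + half] += rng.uniform(-1.0, 1.0, 2 * half)  # eq. (13)
    n_infected = np.empty(n_steps)
    for t in range(n_steps):
        eta = sigma * rng.uniform(-1.0, 1.0, L)
        x = f((1 - eps) * x + 0.5 * eps * (np.roll(x, 1) + np.roll(x, -1))) + eta
        y = f((1 - eps) * y + 0.5 * eps * (np.roll(y, 1) + np.roll(y, -1))) + eta
        n_infected[t] = np.count_nonzero(np.abs(x - y) > 0.0)  # N(t), eq. (12)
    return n_infected[-1] / (2.0 * n_steps)   # rough estimate of eq. (11)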
An effective rate equation for the number of synchronized sites, $`n(t)`$, can be constructed by assigning a probability $`p`$ for the formation of new synchronized sites and a rate $`\gamma `$ for the linear growth of synchronized regions: $$\frac{dn}{dt}=\gamma +p(L-n),$$ (14) with $`0≤n(t)≤L`$. This equation can be solved with the initial condition $`n(0)=0`$, giving $`n(t)=L+\gamma /p-(L+\gamma /p)e^{-pt}`$, so that an estimate of $`\tau _1`$ is obtained by imposing the condition $`n(\tau _1)=L`$: $$\tau _1=\frac{1}{p}\mathrm{ln}\left[\frac{pL}{\gamma }+1\right].$$ (15) Note that a logarithmic dependence on $`L`$ is consistent with the condition $`pL/\gamma ≫1`$, which can be satisfied for sufficiently large values of $`L`$. By estimating the parameters $`p`$ and $`\gamma `$ directly from the numerics we have verified that they fit reasonably well the simple phenomenological eq. (15). The indicator $`V`$ does not provide any additional information about this kind of synchronization transition. Actually, $`V`$ is found to be positive for $`\sigma <\sigma _\mathrm{\Lambda }`$, while it vanishes at $`\sigma =\sigma _\mathrm{\Lambda }`$. This implies that in this model the linear mechanism of information production associated with a positive $`\mathrm{\Lambda }`$ also rules the propagation of finite amplitude perturbations . In summary, for $`\sigma >\sigma _\mathrm{\Lambda }`$, $`\mathrm{\Lambda }`$ is negative and $`V=0`$, so that after the trajectories have first approached each other at $`\tau _1`$ it seems that no information production mechanism can be responsible for the resurgence of $`d(t)`$ above the threshold $`\mathrm{\Delta }`$. This is confirmed by numerical simulations, although in principle one cannot exclude that the occurrence of a temporarily positive exponential expansion rate at some lattice site might produce a local amplification of $`d(t)`$. In this respect, we want to remark that $`\mathrm{\Lambda }`$ is a global indicator, i.e. the exponential expansion rate between nearby orbits averaged in time and over the whole phase space. Accordingly, a negative value of $`\mathrm{\Lambda }`$ is fully compatible with the above mentioned local, instantaneous event. This point, which turns out to be important for the understanding of the dynamical mechanisms underlying the synchronization transition, will be analyzed more carefully in the following section. Different scenarios are obtained by considering dynamics (9) with $`f`$ given by the logistic map (6). For $`\epsilon =1/3`$ and large enough values of $`L`$, one recovers features very similar to the case of a single logistic map, where the synchronization transition is absent. In fact, for any value of $`\sigma `$, $`\mathrm{\Lambda }`$ and $`V`$ are positive, while $`\tau _1`$ and $`\tau _2`$ remain finite. Moreover $`\tau _1`$ is found to diverge exponentially with $`L`$ with a parameter-dependent rate. A different situation occurs for $`\epsilon =2/3`$, where $`\mathrm{\Lambda }`$ vanishes at $`\sigma _\mathrm{\Lambda }=0.27`$, while $`V`$ remains positive up to $`\sigma _V≃0.4`$ (see Fig. 3). According to Pikovsky’s criterion one expects that synchronization occurs above $`\sigma _\mathrm{\Lambda }`$. This is indeed the case, although, for $`\sigma _\mathrm{\Lambda }<\sigma <\sigma _V`$, $`\tau _1(\mathrm{\Delta })`$ is still found to increase exponentially with $`L`$, while for $`\sigma >\sigma _V`$, $`\tau _1(\mathrm{\Delta })`$ grows logarithmically with $`L`$ (see Fig. 4). 
Despite the strong analogy between this transition at $`\sigma _V`$ and the one occurring in the first example at $`\sigma _\mathrm{\Lambda }`$, we want to remark that there is a crucial difference between them: below threshold $`\tau _2`$ diverges in the former case, while it is finite in the latter. This indicates the existence of a new kind of synchronization transition at $`\sigma _V`$, essentially ruled by $`V`$. Let us point out that, at variance with the first example, for $`\sigma _\mathrm{\Lambda }<\sigma <\sigma _V`$ the non-linear mechanism of information propagation is enough for maintaining the exponential dependence of $`\tau _1`$ on $`L`$. On the other hand, after $`\tau _1`$ two trajectories get very close to each other and the negative $`\mathrm{\Lambda }`$ stabilizes them onto the same stochastic trajectory. Let us stress that the above described scenarios are not peculiar to the specific choice of the parameter values $`\epsilon =1/3`$ and $`\epsilon =2/3`$. We have checked that their main features are robust w.r.t. small but finite variations of $`\epsilon `$, although a systematic investigation of the parameter space would demand exceedingly large computational times. Upon these examples one can conclude that in a CML of very large but finite size $`L`$ stochastic synchronization of two trajectories within an accessible time span occurs when $`V`$ vanishes. Moreover, a very similar situation is obtained when considering dynamics (9) for the model of period-3 stable maps (7) introduced in . In this CML model $`\mathrm{\Lambda }`$ is found to be negative, independently of the value of the diffusive coupling parameter $`\epsilon `$ and, accordingly, the dynamics eventually approaches a periodic attractor. On the other hand, the model exhibits a transition from a frozen disordered phase with $`V=0`$ to a chaotic phase with $`V>0`$ at $`\epsilon _c≃0.6`$. The peculiar feature of this transition is that these two phases are separated by a small fuzzy region centered around $`\epsilon _c`$, where both positive and null values of $`V`$ can be observed up to the available numerical resolution. The addition of noise to this CML dynamics according to eq. (9) has interesting consequences: $`\mathrm{\Lambda }`$ remains negative independently of the noise amplitude $`\sigma `$, while a small amplitude noise destabilizes the frozen disordered phase: $`V`$ becomes positive even for $`\epsilon <\epsilon _c`$, so that the fuzzy transition disappears. This notwithstanding, by increasing $`\sigma `$ up to a critical value $`\sigma _V(\epsilon )`$, $`V`$ is found to drop again to zero not only below, but also above $`\epsilon _c`$. For instance, one has $`\sigma _V≃0.16`$ for $`\epsilon =0.58`$ and $`\sigma _V≃0.18`$ for $`\epsilon =0.62`$. In both cases we recover the same kind of mechanisms characterizing the synchronization transition discussed for coupled logistic maps with $`\epsilon =2/3`$. We want to point out that a finite value of $`V`$ when $`\mathrm{\Lambda }`$ is negative is usually reported as a typical signature of a strong non-linear effect . For instance, it has been shown that the discontinuity of map (7) yields such an effect already for the noise-free CML dynamics. Even if the discontinuity of map (7) is removed by interpolating the expanding and contracting regions with a sufficiently steep segment, the effect is maintained . The reinjection mechanism introduced by additive noise in maps of the interval into itself produces similar discontinuities in the dynamics. 
As we have shown, this non-linear effect is sufficient for giving rise to dynamical phases with $`\mathrm{\Lambda }<0`$ and $`V>0`$ in the noisy dynamics (9). All these observations suggest to investigate whether a similar scenario can be obtained for map (5) in dynamics (9) by introducing an almost-discontinuity in the map, since in this case the reinjection mechanism is not present. This is easily obtained by taking a sufficiently large value of the parameter $`b`$, e.g. $`b∼𝒪(10^3)`$. Numerical simulations show that, still for $`\epsilon =1/3`$, there exists a range of $`\sigma `$ values for which $`V>0`$ and $`\mathrm{\Lambda }<0`$. In Fig. 5 we show the dependence of $`\tau _1`$ on $`L`$ in two regions of the parameter space where $`\mathrm{\Lambda }`$ is negative, while $`V`$ is either positive or null. If $`V>0`$ an exponential increase of $`\tau _1`$ with $`L`$ is again observed, while a logarithmic dependence characterizes the dynamical phase with $`V=0`$. Accordingly, this second kind of transition is not just an artifact due to discontinuities introduced by the reinjection rule, but a genuine consequence of sufficiently strong non-linear effects, which may be produced also in the absence of any reinjection mechanism and also if the noise distribution maintains its symmetry. ## III Critical properties of the synchronization transition in CMLs In the previous Section we have described two different kinds of phase transitions for stochastic synchronization, where the noise amplitude $`\sigma `$ plays the role of the control parameter. When both $`\mathrm{\Lambda }`$ and $`V`$ vanish at $`\sigma =\sigma _\mathrm{\Lambda }`$ the transition is from a non-synchronous dynamical phase to a synchronous one: we have denoted it with PT1. In the other case, which we indicate with PT2, $`\mathrm{\Lambda }`$ passes from positive to negative values while $`V`$ remains positive up to $`\sigma =\sigma _V`$, where we have observed a transition between different synchronous dynamical phases, characterized by an exponential and by a logarithmic dependence of the first passage time $`\tau _1`$ on the system size. In both cases we have found that the correlation length $`\xi `$, defined in Sect. II, diverges at the transition point : this is an indication in favour of a continuous phase transition. Nonetheless, $`1/\xi `$ is found to vanish less than linearly when $`\sigma →\sigma _\mathrm{\Lambda }^{-}`$ in PT1, while it vanishes linearly in PT2 when $`\sigma →\sigma _V^{-}`$ (see Figs. 6 (a),(b)). Moreover, close to the critical point the time averages of the mean distance between trajectories, $`z^t=\frac{1}{L}\sum _{i=1}^L|x_i^t-y_i^t|`$, and of the topological distance, $`\rho (t)=\frac{1}{L}\sum _{i=1}^Lh_i^t`$ ($`h_i^t=1`$ if $`|x_i^t-y_i^t|>\mathrm{\Delta }`$, otherwise $`h_i^t=0`$), exhibit a continuous dependence on $`\sigma `$ (for the sake of space, in Fig. 7 we show these quantities only for the case of coupled logistic maps). We want to remark that our definition of the synchronized state implies that there exists a stable stochastic orbit that prevents the trajectories of the dynamical system from drifting apart from each other beyond some very small, but finite threshold $`\mathrm{\Delta }`$, so that $`\tau _2=∞`$. In both cases numerical simulations show that this is what happens above the critical point, where $`\mathrm{\Lambda }<0`$ and $`V=0`$. 
If at some lattice site $`i`$ a fluctuation makes the local Lyapunov multiplier positive (i.e., $`\mathrm{ln}|f^{\prime}(\stackrel{~}{x}_i^t)|>0`$, where $`f^{\prime}`$ indicates the first derivative of the map w.r.t. its argument), giving rise locally to the exponential divergence of nearby orbits, the process is rapidly reabsorbed due to the lack of any mechanism of information propagation. In this sense the synchronized state should be equivalent to the absorbing state typical of Directed Percolation (DP) processes . Below the critical points the role of fluctuations determines the difference between PT1 and PT2. The inspection of the space-time evolution of dynamics (9), using the symbolic representation of the state variables $`h_i^t`$, is quite helpful for visualizing such a difference. For what concerns PT1, when $`\sigma →\sigma _\mathrm{\Lambda }^{-}`$ one observes that non-synchronized clusters propagate as time flows (since $`V`$ is positive) and, even if some of them may eventually die, in the meanwhile new ones have started to propagate, emerging also from previously synchronized regions. Since also $`\mathrm{\Lambda }>0`$, any local fluctuation of $`d(t)`$ produced by a positive multiplier has a finite probability of being amplified and eventually propagated through the lattice. On the contrary, sufficiently close to PT2, for $`\sigma →\sigma _V^{-}`$, non-synchronized clusters never emerge from already synchronized regions and any connected non-synchronous cluster eventually dies at $`\tau _1(L,\sigma )`$ in a lattice of finite size $`L`$ (a situation very similar to what is observed in the active phase of DP as a finite size effect). Even if, in principle, the non-linear mechanism of information propagation is active, this suggests that, inside an already synchronized region, any local fluctuation of the Lyapunov multiplier towards positive values never persists long enough to activate the non-linear process of information propagation. On the basis of these numerical observations, and exploiting the analogies with other critical phenomena, we are led to conjecture not only the existence of an absorbing, i.e. synchronized, state but also that PT2, at variance with PT1, should belong to the universality class of DP. This can be confirmed only by direct measurements of the critical exponents associated with the synchronization transitions. It is worth stressing that the numerical estimates have been performed by approaching the critical points from below, i.e. for $`V→0^+`$. Here we report the analysis of PT2 in the case of coupled logistic maps with $`\epsilon =2/3`$. As usual, a reliable measurement of any critical exponent demands a very accurate estimate of the critical point, i.e. $`\sigma _V`$ in this case. For this reason we have performed careful simulations for evaluating the dependence on the system size $`L`$ of $`\tau _1`$, which corresponds to the absorption time in the DP language. At the critical point $`\sigma =\sigma _V`$, this time should diverge as $$\tau _1(L,\sigma _V)∼L^z$$ (16) where $`z=\nu _{∥}/\nu _{⊥}`$ is the dynamical exponent . The quantity $`\tau _1(L,\sigma )`$ is reported in Fig. 8 as a function of $`L`$ on a log-log scale for different values of $`\sigma `$. The best scaling behaviour is obtained for $`\sigma _V=0.4018`$, where one has $`z=1.55\pm 0.05`$. This result agrees quite well with the most accurate numerical estimates of the DP value $`z=1.5807`$ . 
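In practice, both $`z`$ and the decay exponent $`\delta `$ discussed below follow from straight-line fits in log-log coordinates. A minimal sketch of this standard procedure (the data arrays in the usage comment are hypothetical placeholders):

import numpy as np

def power_law_exponent(sizes, times):
    # slope of log tau_1 versus log L gives z at criticality, eq. (16)
    slope, _ = np.polyfit(np.log(sizes), np.log(times), 1)
    return slope

# hypothetical usage with measured absorption times at sigma = sigma_V:
# z = power_law_exponent([250, 500, 1000, 2000], tau1_measured)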
Relying upon this result, we have also measured the critical exponent associated with the temporal decay of the density $`\rho (t)`$ of active sites, i.e. those sites where $`|x_i^t-y_i^t|>\mathrm{\Delta }`$ . The DP transition exhibits at the critical point the following scaling law for the topological distance: $`\rho (t)∼t^{-\delta }`$, with $`\delta =\beta /\nu _{∥}=0.1595`$ . The density $`\rho (t)`$ is shown in Fig. 9 for three different system sizes, namely $`L=500,1500`$ and $`2000`$. As expected, the scaling region increases with $`L`$, and for $`L=2000`$ an optimal fit in the interval $`3.2≤\mathrm{log}(t)≤4.8`$ provides the estimate for the critical exponent $`\delta =0.159\pm 0.002`$. This confirms that PT2 belongs to the universality class of DP. For what concerns PT1 we have considered coupled maps of the type (5) for $`b=2.0`$ and $`\epsilon =1/3`$. The best scaling for $`\tau _1`$ as a function of $`L`$ (according to equation (16)) is observed for a noise amplitude $`\sigma _\mathrm{\Lambda }=2.5015`$. For $`\sigma =\sigma _\mathrm{\Lambda }`$ we have obtained the following estimates for the critical exponents: $`z≃1.01-1.04`$ and $`\delta ≃0.35`$. Such values correspond neither to DP nor to any known universality class for percolation or growth processes. This seems analogous to what has been pointed out by Grassberger for systems exhibiting incomplete deaths: when the asymptotic state is not a truly absorbing one, the critical properties of DP cannot be recovered. Since in this situation an absorbing state seems not to exist, the dynamical exponent $`z`$ can be better estimated by measuring the number of “infected” sites of the chain, $`N(t)`$, defined in (12). At the critical point $`\sigma =\sigma _\mathrm{\Lambda }`$ the following scaling law is expected to hold: $$N(t)∼t^{1/z}.$$ (17) The data from the numerical analysis are reported in Fig. 10. For short times ($`t<500`$) we obtain the inverse of the dynamical exponent $`1/z≃0.47\pm 0.05`$, a value consistent with the one expected for the Edwards-Wilkinson universality class $`z_{EW}=2.0`$ . For longer times we observe a crossover to a lower $`z`$-value that is consistent with the one expected for the 1d KPZ universality class (namely $`z_{KPZ}=3/2`$). We think that these results could be interpreted in terms of the conjecture reported in . In that paper it has been suggested that, for the generic synchronization transition of coupled spatio-temporal chaotic systems with continuous state variables, the appropriate universality class should be the one of the KPZ model with a non-linear growth-limiting term . This idea originates from the observation that, close to the synchronization transition, it is possible to describe the dynamics of small perturbations in terms of a reaction-diffusion model with multiplicative noise . Finally, this model can be mapped via a Hopf-Cole transformation onto an equation that corresponds to a KPZ model with a non-linear term that prevents the surface from growing indefinitely . The critical scaling laws for this kind of model have been reported in : the dynamical exponent $`z`$ is found to coincide with the KPZ one, while the other exponents are found to be different from the standard KPZ ones. Unfortunately, a full numerical comparison of PT1 with the model discussed in is prevented by two major technical problems: the extreme difficulty of estimating with sufficient precision the critical value $`\sigma _\mathrm{\Lambda }`$ and the strong finite size effects. 
## IV Concluding remarks We have studied the problem of stochastic synchronization induced by additive spatio-temporal noise in CML models. In analogy with the low-dimensional case, synchronization of trajectories is observed if the maximum Lyapunov exponent becomes negative at a critical value of the noise amplitude. We have also identified two different critical behaviours associated with the synchronization transition. One of them belongs to the universality class of Directed Percolation, while the other is not clearly identified, although indications suggest that it could belong to the universality class of the KPZ model with a non-linear growth-limiting term. In particular, the DP-like phase transition describes the crossover between an ”active” and an ”absorbing” phase. The former is characterized by an exponential dependence on the system size $`L`$ of the time needed for achieving the synchronized state, while the latter exhibits a logarithmic dependence. This scenario is reminiscent of the phenomenology associated with stable chaos in CMLs , where the dynamics approaches a periodic attractor, rather than a ”synchronized” stochastic trajectory. The two kinds of synchronization transition reported here are quite general for extended dynamical systems, since analogous behaviours have recently been observed for two coupled CMLs without any external noise . Moreover, as far as the control of chaos is concerned, when the erratic behaviours present in the extended system are due solely to non-linear mechanisms (as happens when the maximal Lyapunov exponent is negative, but $`V`$ is still positive) the control schemes based on linear analysis should fail and new “non-linear” methods have to be introduced. We believe that the appropriate indicator to employ in this context is the Finite Size Lyapunov Exponent , because it is able to capture infinitesimal as well as finite amplitude perturbation growth. We also expect that the observed phenomenology is not peculiar to CMLs and that the present study can apply to a wider class of spatially extended dynamical systems, like coupled oscillators or PDEs. ###### Acknowledgements. We want to thank all the members of the D.O.C.S research group of Firenze for stimulating discussions and interactions, and in particular F. Bagnoli and A. Politi. We express our gratitude also to V. Ahlers, M.A. Muñoz, A. Pikovsky and P. Grassberger for helpful discussions (P.G. also for providing us with the efficient random number generator that we have employed in the numerical simulations). A.T. acknowledges the contribution of T. Caterina, H. Katharina, H. Daniel and T. Sara for providing him with a realistic realization of a chaotic and noisy environment. Part of this work was performed at the Institute of Scientific Interchange in Torino, during the workshops on “Complexity and Chaos”, June 1998 and October 1999. We acknowledge CINECA in Bologna and INFM for providing us access to the parallel computer CRAY T3E under the grant “Iniziativa Calcolo Parallelo”.
# Zero temperature correlations in trapped Bose-Einstein condensates ## Abstract We introduce a family of correlated trial wave functions for the $`N`$-particle ground state of an interacting Bose gas in a harmonic trap. For large $`N`$, the correlations lead to a relative energy decrease of a fraction $`\frac{3}{5N}`$, compared to mean field Gross-Pitaevskii theory. The kinetic energy in the weakly confining direction turns out to be most sensitive to our correlations and, remarkably, is higher by as much as a few per cent for condensates with atom numbers of a few thousand. Thus, the predicted deviations from Gross-Pitaevskii theory originating from ground state correlations might be observed in momentum distribution measurements of small condensates. Initiated by the first realizations of alkali Bose-Einstein condensates in magnetic traps a few years ago, there is now an ever-growing number of experiments with atomic condensates in laboratories all over the world. As a result of these efforts, a more and more detailed understanding of such quantum $`N`$-body systems is emerging. Properties hitherto measured are successfully described by updated versions of many-body theories established many years ago; see for a recent review. Very importantly, the ‘condensate wave function’, the most relevant object well below the transition temperature, is determined by the mean field Gross-Pitaevskii (GP) equation . Among future challenges remains experimental access to the detailed nature of the $`N`$-particle state of these quantum systems well below the critical temperature, exploring new physics beyond the standard mean field description. This paper provides theoretical support for this aim, emphasizing non-mean-field effects in trapped condensates with a relatively small number of atoms. Consequences of a correlated ground state are expected to be most prominent in two-body or even higher-order correlation functions. Nevertheless, in this paper we show that correlations may already have a significant effect on experimentally directly accessible quantities like the kinetic energy of the condensate. This holds true, as we will show, whenever the particle number is not too large and the trap is anisotropic. The harmonic trap potential with frequencies $`\omega _x=\omega _y`$ and $`\omega _z`$ provides the scales of the problem. We express energies in units of $`ℏ\overline{\omega }`$, where $`\overline{\omega }=(\omega _x\omega _y\omega _z)^{\frac{1}{3}}`$; the corresponding oscillator ground state length scale is $`\overline{d}=\sqrt{\frac{ℏ}{m\overline{\omega }}}`$, and thus the momentum scale is $`(ℏ/\overline{d})`$. In these units the Hamilton operator, representing the energy of $`N`$ interacting atoms in an axially symmetric harmonic trap with anisotropy parameter $`\lambda =\omega _z/\omega _x`$, reads $$H=\sum _{i=1}^{N}\left\{\frac{𝐩_i^2}{2}+\frac{1}{2}\left(\lambda ^{-\frac{2}{3}}(x_i^2+y_i^2)+\lambda ^{\frac{4}{3}}z_i^2\right)\right\}+4\pi a\sum _{i<j}\delta (𝐫_i-𝐫_j).$$ (1) Interactions between atoms are two-body collisions with a strength determined by the $`s`$-wave scattering length $`a`$ of the atoms (measured in units of the oscillator length $`\overline{d}`$). The problem thus depends on the three parameters $`N`$, $`\lambda `$, and $`a`$. Bose-Einstein condensation occurs when the thermal de Broglie wavelength of the atoms becomes larger than the mean distance between atoms. 
Then, loosely speaking, a noticeable fraction of all atoms occupy the same one-particle state. Near zero temperature, i.e. well below the critical temperature, we therefore expect the $`N`$-particle state to be well approximated by a product state $`\mathrm{\Psi }`$ of one-particle states $`\psi `$, $$\mathrm{\Psi }(𝐫_1,…,𝐫_N)=\psi (𝐫_1)\psi (𝐫_2)\mathrm{}\psi (𝐫_N).$$ (2) In second-quantized language with atom field operator $`\widehat{\psi }(𝐫)`$, the product state (2) becomes a Fock state with particle number $`N`$, resulting in the more common expressions for the manifestation of a Bose condensate, for instance $`⟨\widehat{\psi }^{†}(𝐫)\widehat{\psi }(𝐫^{\prime})⟩=N\psi ^{*}(𝐫)\psi (𝐫^{\prime})`$. The total energy of the system, $`E=⟨\mathrm{\Psi }|H|\mathrm{\Psi }⟩`$ with Hamiltonian (1), evaluated with the product state (2), is the Gross-Pitaevskii energy functional $$E_{\mathrm{GP}}[\psi ]=N∫d^3r\,\psi ^{*}(𝐫)\left[-\frac{1}{2}\mathrm{\Delta }+\frac{1}{2}\left(\lambda ^{-\frac{2}{3}}(x^2+y^2)+\lambda ^{\frac{4}{3}}z^2\right)+2\pi a(N-1)|\psi (𝐫)|^2\right]\psi (𝐫).$$ (3) The determination of the approximate $`N`$-particle ground state is thus reduced to finding the minimum energy wave function of the energy functional (3), resulting in the famous Gross-Pitaevskii equation for the condensate wave function $`\psi (𝐫)`$ (strictly speaking, in GP theory the factor $`(N-1)`$ in front of the interaction term is replaced by $`N`$; as we are interested in effects for a fixed, finite number of particles $`N`$, we stick to $`(N-1)`$ throughout this paper). By now it is experimentally established that the one-particle state $`\psi (𝐫)`$ so obtained does in fact describe the properties of the condensate well. For these dilute bosonic gases at near-zero temperature, the assumption of a product wave function (2) for the $`N`$-particle state is thus a good approximation. There is now also a rigorous proof available showing that in the limit $`N→∞`$, keeping the product $`Na`$ fixed (dilute), the true $`N`$-particle ground state energy is indeed given by the minimum of $`E_{\mathrm{GP}}[\psi ]`$. Nevertheless, it is clear for finite $`N`$ that due to two-body forces a product wave function (2) can only be an approximation for the $`N`$-particle ground state, and it is worth thinking about possible observable deviations for not too large atom numbers $`N`$. In particular, as the center-of-mass motion of the $`N`$-body problem may be separated, we expect the true ground state to be a product wave function of the center of mass $`𝐫_{\mathrm{cm}}=\frac{1}{N}\sum _i𝐫_i`$ and relative coordinates, i.e. $`\mathrm{\Psi }(𝐫_1,…,𝐫_N)=\psi _{\mathrm{cm}}(𝐫_{\mathrm{cm}})\stackrel{~}{\psi }(\mathrm{relative}\mathrm{coordinates})`$, with the wave function $`\stackrel{~}{\psi }`$ for the relative motion symmetric in the $`𝐫_i`$. This form of the wave function is clearly different from the product state (2). In this paper, we are not going to separate the center-of-mass motion; instead we introduce new coordinates $$𝐑_i=𝐫_i+(C-\text{11})𝐫_{\mathrm{cm}},(i=1,…,N)$$ (4) that democratically incorporate the center-of-mass degree of freedom. The matrix $`C`$, without loss of generality, is chosen to be the diagonal matrix $$C=\left(\begin{array}{ccc}c_r& 0& 0\\ 0& c_r& 0\\ 0& 0& c_z\end{array}\right),$$ (5) where $`c_r`$ and $`c_z`$ are two positive parameters; $`C`$ will play the role of an additional variational variable and will help to lower the total energy. 
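The Jacobian quoted in the next paragraph can be checked directly: for each Cartesian direction $`\alpha `$, transformation (4) acts on the $`N`$ coordinates as the matrix $`\text{11}+(c_\alpha -1)P`$, where $`P_{ij}=1/N`$ is the rank-one projector onto the center of mass. Since $`P`$ has eigenvalue 1 once and eigenvalue 0 with multiplicity $`N-1`$, $$det\left[\text{11}+(c_\alpha -1)P\right]=c_\alpha ,$$ and hence $`\mathrm{d}^N𝐑=c_xc_yc_z\mathrm{d}^N𝐫=(detC)\mathrm{d}^N𝐫`$.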
For a totally anisotropic trap one should introduce two parameters $`c_x,c_y`$ instead of just $`c_r`$. The Jacobian of transformation (4) is $`\mathrm{d}^N𝐑=(detC)\mathrm{d}^N𝐫`$, and notice that the choice $`C=\text{11}`$ corresponds to the identity transformation $`𝐑_i=𝐫_i`$. Apart from simplicity, transformation (4) is motivated by the fact that a wave function $`\mathrm{\Psi }(𝐫_1,…,𝐫_N)=\sqrt{detC}\mathrm{\Phi }(𝐑_1,…,𝐑_N)`$ is a proper bosonic wave function whenever $`\mathrm{\Phi }(𝐑_1,…,𝐑_N)`$ is a bosonic wave function in $`𝐑_i`$ coordinates. Crucially, the wave function obtained from a product wave function in $`𝐑_i`$-coordinates $$\mathrm{\Psi }(𝐫_1,…,𝐫_N)=\sqrt{detC}\varphi (𝐑_1)\varphi (𝐑_2)\mathrm{}\varphi (𝐑_N)$$ (6) will be a correlated bosonic wave function in atom coordinates unless $`C=\text{11}`$, when (6) coincides with the usual product state (2). The actual values of the transformation parameters $`c_r,c_z`$ and the shape of the wave function $`\varphi `$ are fixed by the requirement that the total energy $`E=⟨\mathrm{\Psi }|H|\mathrm{\Psi }⟩`$ evaluated in the space of correlated atomic wave functions (6) should be minimal. It is straightforward to express the energy operator (1) of the $`N`$ atoms in the new coordinates $`𝐑_i`$ and the corresponding momenta. Variation with respect to $`C`$ determines the parameters $`c_r`$, $`c_z`$ of the transformation (4) with minimum energy, for a given wave function $`\varphi `$ in (6). We find $$c_r=\lambda ^{-\frac{1}{6}}\left(\frac{⟨X^2+Y^2⟩}{⟨P_X^2+P_Y^2⟩}\right)^{\frac{1}{4}},c_z=\lambda ^{\frac{1}{3}}\left(\frac{⟨Z^2⟩}{⟨P_Z^2⟩}\right)^{\frac{1}{4}},$$ (7) where $`P_X=-i\frac{\partial }{\partial X}`$, and, for instance, $`⟨Z^2⟩=⟨\varphi |Z^2|\varphi ⟩`$. In deriving (7) we simplified using the fact that in determining the ground state $`\mathrm{\Psi }`$ we may restrict ourselves to wave functions $`\varphi (𝐑)`$ with $`⟨𝐑⟩=⟨𝐏⟩=0`$. The total energy $`E=⟨\mathrm{\Psi }|H|\mathrm{\Psi }⟩`$ based on the correlated state (6) with the best matrix $`C`$ from (7) becomes $$E_{\mathrm{cor}}[\varphi ]=E_{\mathrm{GP}}[\varphi ]-\frac{1}{2}\left(\lambda ^{-\frac{1}{3}}\sqrt{⟨X^2+Y^2⟩}-\sqrt{⟨P_X^2+P_Y^2⟩}\right)^2-\frac{1}{2}\left(\lambda ^{\frac{2}{3}}\sqrt{⟨Z^2⟩}-\sqrt{⟨P_Z^2⟩}\right)^2$$ (8) which is manifestly lower than the uncorrelated Gross-Pitaevskii energy $`E_{\mathrm{GP}}[\varphi ]`$ (3) and is one of the main results of this paper. Let us first discuss result (8) for the case of a very large number $`N`$ of atoms. The difference between $`E_{\mathrm{cor}}[\varphi ]`$ and $`E_{\mathrm{GP}}[\varphi ]`$ is of the order of expectation values like $`⟨X^2+Y^2⟩`$ or $`⟨P_Z^2⟩`$, while the dominating term $`E_{\mathrm{GP}}[\varphi ]`$ is $`N`$ times larger. To leading order, therefore, we may evaluate the improved total energy with the state $`\varphi `$ obtained from the minimum of $`E_{\mathrm{GP}}[\varphi ]`$ alone, i.e. with the solution of the usual GP equation. Since, for $`N→∞`$, $`⟨𝐏^2⟩`$ (‘kinetic energy’) may be neglected with respect to $`⟨𝐑^2⟩`$ (‘potential energy’), we see from (8) that in this limit the energy of the correlated state (6) may be written as $`E_{\mathrm{cor}}[\varphi ]=E_{\mathrm{GP}}[\varphi ]-\frac{1}{N}E_{\mathrm{GP}}^{\mathrm{pot}}[\varphi ]`$, where $`\varphi `$ is the solution of the usual GP equation and $`E_{\mathrm{GP}}^{\mathrm{pot}}`$ the GP potential energy. Note that this energy decrease is possible due to the existence of the trap potential. 
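Spelled out, the large-$`N`$ statement just quoted follows by dropping the momentum expectation values in (8), whereupon the two squares reduce to exactly one single-particle potential energy of (3): $$E_{\mathrm{GP}}[\varphi ]-E_{\mathrm{cor}}[\varphi ]≈\frac{1}{2}\lambda ^{-\frac{2}{3}}⟨X^2+Y^2⟩+\frac{1}{2}\lambda ^{\frac{4}{3}}⟨Z^2⟩=\frac{1}{N}E_{\mathrm{GP}}^{\mathrm{pot}}[\varphi ].$$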
Using the asymptotic Thomas-Fermi expressions for the total GP energy $`E_{\mathrm{GP}}=\frac{5N}{14}\left(15(N-1)a\right)^{\frac{2}{5}}`$ and the GP potential energy $`E_{\mathrm{GP}}^{\mathrm{pot}}[\varphi ]=\frac{3N}{14}\left(15(N-1)a\right)^{\frac{2}{5}}`$ , we find for the energy of our correlated approximate ground state (6) $$E_{\mathrm{cor}}=\left(1-\frac{3}{5N}\right)E_{\mathrm{GP}}\text{as}N→∞,$$ (9) a small decrease for the assumed large $`N`$. Far more relevant are effects of the correlated ground state (6) on the properties of small condensates. Let us therefore concentrate on the full expression (8) for the total energy $`E_{\mathrm{cor}}[\varphi ]`$ of the correlated state. As in usual GP theory , we may proceed analytically and use Gaussian trial wave functions $`\varphi (𝐑)`$ parameterized by two parameters, $`\mathrm{\Sigma }_r`$ and $`\mathrm{\Sigma }_z`$, for the radial and axial widths (note that these are widths in $`R`$ coordinates and should not be confused with the condensate widths in physical space, the difference being of the order $`N^{-1}`$). The full energy functional (8) then becomes a function of $`\mathrm{\Sigma }_r`$ and $`\mathrm{\Sigma }_z`$ only, and it is a simple numerical task to find its minimizing values. Results based on this Gaussian approximation are shown in the following figures. In Fig. 1 we show the parameters $`c_r`$ (Fig. 1a) and $`c_z`$ (Fig. 1b) of our transformation to correlated coordinates (4) for the minimum energy correlated state (6) as a function of particle number $`N`$, and for two different anisotropies, $`\lambda =0.04`$ (cigar) and $`\lambda =3`$ (pancake). The three corresponding graphs in each figure represent different interaction strengths: $`a=0.004`$ (solid line), $`a=0.008`$ (dashed line), and $`a=0.012`$ (dotted line). Recall that $`c_r=c_z=1`$ would correspond to the usual uncorrelated GP ground state (2). We clearly see that the transformation parameters $`c_r`$ and $`c_z`$ are larger in the weakly confining direction, and become larger for larger interaction strength $`a`$. Is there any hope of measuring effects induced by the correlations of the improved ground state (6)? Among the various contributions to the total energy of the condensate, it turns out that the small kinetic energy in the weakly confining direction is most sensitive to our correlations and exhibits an increase of a few per cent compared to GP theory for small condensates. This increase is compensated for by a larger decrease of the potential energy of the correlated ground state, such that the total energy is indeed lowered. In Fig. 2 we show the kinetic energy in the weakly confining $`z`$-direction, $`E_{\mathrm{cor}}^{\mathrm{kin},\mathrm{z}}`$, of the correlated state (6) as a function of atom number $`N`$ for an anisotropy $`\lambda =0.04`$ (cigar). As in Fig. 1, results are shown for three different interaction strengths: $`a=0.004`$ (solid line), $`a=0.008`$ (dashed line), and $`a=0.012`$ (dotted line). Recall that energies are measured in units of $`ℏ\overline{\omega }`$, i.e. the kinetic energy shown in Fig. 2 is very small compared to the potential and internal energy of the condensate. In Fig. 3 we compare the predictions of the correlated (6) and uncorrelated (2) ground states for the kinetic energy $`E^{\mathrm{kin},\mathrm{z}}`$ in the weakly confining $`z`$-direction, again for a cigar-shaped condensate with $`\lambda =0.04`$ (both quantities evaluated in a Gaussian approximation). 
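As an illustration of this simple numerical task, the following sketch minimizes the correlated energy (8) over the two Gaussian widths. The explicit Gaussian moments and the interaction integral used here ($`⟨X^2⟩=\mathrm{\Sigma }_r^2`$, $`⟨P_X^2⟩=1/(4\mathrm{\Sigma }_r^2)`$, $`∫|\varphi |^4=1/(8\pi ^{3/2}\mathrm{\Sigma }_r^2\mathrm{\Sigma }_z)`$) are our assumptions for the ansatz and are not spelled out in the text above; the sketch is illustrative, not the code behind the figures.

import numpy as np
from scipy.optimize import minimize

def e_cor(widths, N, lam, a):
    # Correlated energy (8) for a Gaussian of radial/axial widths (s_r, s_z);
    # moments: <X^2> = s_r^2, <P_X^2> = 1/(4 s_r^2), etc. (assumed ansatz)
    s_r, s_z = widths
    kin = 0.5 * (2.0 / (4 * s_r**2) + 1.0 / (4 * s_z**2))
    pot = 0.5 * (lam**(-2.0 / 3) * 2 * s_r**2 + lam**(4.0 / 3) * s_z**2)
    inter = a * (N - 1) / (4 * np.sqrt(np.pi) * s_r**2 * s_z)
    e_gp = N * (kin + pot + inter)                      # eq. (3) for the Gaussian
    corr_r = 0.5 * (lam**(-1.0 / 3) * np.sqrt(2) * s_r - 1 / (np.sqrt(2) * s_r))**2
    corr_z = 0.5 * (lam**(2.0 / 3) * s_z - 1 / (2 * s_z))**2
    return e_gp - corr_r - corr_z                       # eq. (8)

# hypothetical usage: N = 2000 atoms, cigar trap lambda = 0.04, a = 0.004
best = minimize(e_cor, x0=[1.0, 3.0], args=(2000, 0.04, 0.004), method="Nelder-Mead")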
Shown is the relative energy increase $`\mathrm{\Delta }E^{\mathrm{kin},\mathrm{z}}=(E_{\mathrm{cor}}^{\mathrm{kin},\mathrm{z}}-E_{\mathrm{GP}}^{\mathrm{kin},\mathrm{z}})/E_{\mathrm{GP}}^{\mathrm{kin},\mathrm{z}}`$ compared to GP theory. We see that the difference may be a few per cent, depending on interaction strength and atom number. For larger anisotropy, the observed difference is even larger. Similar, yet less pronounced, results are obtained for the change of radial kinetic energy in pancake-type condensates. The difference being a few per cent, high precision momentum distribution measurements might be able to distinguish between the predictions of the correlated and uncorrelated ground states. In principle, the kinetic energy could be measured indirectly via the virial relation $`E^{\mathrm{kin},\mathrm{z}}=E^{\mathrm{rel}}-E^{\mathrm{pot},\mathrm{r}}`$ from the knowledge of the release energy $`E^{\mathrm{rel}}=E^{\mathrm{kin}}+E^{\mathrm{int}}`$ and the radial potential energy $`E^{\mathrm{pot},\mathrm{r}}`$. In this approach, however, the small kinetic energy is measured as the difference of two very large quantities, and it seems doubtful whether the required precision can be achieved. Far more promising are direct methods to measure the momentum distribution of the condensate, as recently established through Bragg spectroscopy . The high precision achieved in this experiment might be sufficient to confirm the predicted deviations from GP theory experimentally. Let us summarize this paper. We found a correlated $`N`$-particle wave function (6) for an interacting Bose gas in a harmonic trap with lower total energy than the mean field GP product state. The existence of the trap potential is crucial for the correlations; the energy decrease is roughly the potential energy of one atom. We have concentrated on effects on simple averaged quantities, like kinetic, potential, and release energy. Consequences of our correlations will be most significant for relatively small condensates, and a more detailed investigation of further effects is required, focusing on higher-order correlation functions. Studying the dynamics of the condensate in a correlated state $`\mathrm{\Psi }`$ will also be of interest, as excitation frequencies can be measured with fairly high precision. Based on a hydrodynamic approach to superfluids, non-mean field corrections to the frequency of elementary excitations due to a finite gas parameter have been calculated in in the large $`N`$-limit. Note that for dynamics, we have to replace the correlated energy functional (8) by the more general expression, valid for nonvanishing $`⟨𝐑⟩`$, $`⟨𝐏⟩`$. The correlations reported in this paper depend on the finite number of atoms and the existence of the trap; both conditions are met by current experiments. They are thus of a very different nature from the zero-temperature non-mean field effects described by Bogoliubov theory (quantum depletion) . In our approach, Bogoliubov theory in $`R`$-coordinates may be introduced on top of the correlated ground state (6). We conclude that there is no obvious relation between the correlations described by our wave function (6) and the usual Bogoliubov quantum depletion corrections. Last but not least, recent progress in high precision spectroscopy of the momentum distribution of the condensate might lead to an experimental confirmation of our results. I would like to thank R. Graham, M. Fliesser, and J. Reidl for numerous fruitful discussions. 
Support by the Deutsche Forschungsgemeinschaft through SFB 237 “Unordnung und große Fluktuationen” is gratefully acknowledged. Fig. 1. Parameters $`c_r`$ (Fig. 1a) and $`c_z`$ (Fig. 1b) of our transformation to correlated coordinates for the minimum energy state $`\mathrm{\Psi }`$ as a function of particle number $`N`$ for two different anisotropies, $`\lambda =0.04`$ (cigar) and $`\lambda =3`$ (pancake). The three corresponding graphs in each figure represent different interaction strengths: $`a=0.004`$ (solid line), $`a=0.008`$ (dashed line), and $`a=0.012`$ (dotted line). The uncorrelated case is $`c_r=c_z=1`$. Fig. 2. Kinetic energy $`E_{\mathrm{cor}}^{\mathrm{kin},\mathrm{z}}`$ of the correlated state in the weakly confining $`z`$-direction as a function of atom number $`N`$ for an anisotropy $`\lambda =0.04`$ (cigar). As in Fig. 1, we show results for three different interaction strengths: $`a=0.004`$ (solid line), $`a=0.008`$ (dashed line), and $`a=0.012`$ (dotted line). Fig. 3. Relative difference between the predictions for the kinetic energy $`E^{\mathrm{kin},\mathrm{z}}`$ in the weakly confining $`z`$-direction based on the correlated state and on the uncorrelated GP state, as a function of atom number $`N`$ for an anisotropy $`\lambda =0.04`$ (cigar). As in Figs. 1 and 2, we show results for three different interaction strengths: $`a=0.004`$ (solid line), $`a=0.008`$ (dashed line), and $`a=0.012`$ (dotted line).
# Lattice dynamics and electron-phonon coupling in $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> organic superconductor ## I Introduction The strength and importance of electron-lattice phonon (e–lph) coupling in the superconductivity mechanism of organic superconductors has always been rather controversial. Early numerical estimates based on simplified models gave very low values for the coupling to acoustic phonons, and much more attention was then devoted to electron-molecular vibration (e–mv) coupling. On the other hand, “librons” were also invoked in the pairing mechanism of organic superconductors. On the experimental side, most of the data have been collected for bis(ethylenedithio)tetrathiafulvalene (BEDT-TTF) salts, which are the most extensive and representative class of organic superconductors. In particular, recent Raman experiments on BEDT-TTF salts pointed out that the intensity and the frequency of some low-frequency phonon modes change at the superconducting critical temperature $`T_c`$. Oddly enough, one intra-molecular BEDT-TTF mode has also been shown to exhibit a frequency shift at $`T_c`$. Carbon isotopic substitution on the central double bond of BEDT-TTF was claimed to have dramatic effects on the $`T_c`$ of one superconducting BEDT-TTF salt, but subsequent extensive isotopic substitution studies on other superconducting BEDT-TTF salts strongly suggested that the lattice phonons are likely involved in the superconducting mechanism. Attempts to take into account both e–mv and e–lph coupling have been put forward, but the role and the relative importance of the two types of coupling in the pairing mechanism are far from being settled. Whereas extensive studies have been devoted to the characterization of the intra-molecular phonons of BEDT-TTF and to the estimate of the relevant e–mv coupling strength, very little is known about the lattice phonon structure in BEDT-TTF salts or in other organic superconductors. Obtaining a sound characterization of BEDT-TTF salt lattice phonons is not easy, since in general the unit cell contains several molecular units, and the phonon modes obviously differ for different crystalline structures. We have tackled the problem by adopting the “Quasi Harmonic Lattice Dynamics” (QHLD) method, by which we are able to analyze both the crystal and the lattice phonon structure in terms of empirical atom-atom potentials, in principle transferable among crystals containing the same atoms. We have first obtained C, S and H atom-atom potential parameters reproducing the crystal structure and lattice phonons of neutral BEDT-TTF. Then we have considered the I<sub>3</sub><sup>-</sup> salts, which have only one additional atom to parametrize, and which present several crystalline phases. After the successful application of the potential to the non-superconducting $`\alpha `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> crystal, we present in this paper the results relevant to the extensively studied superconducting $`\beta `$-phases. $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> was the first ambient-pressure BEDT-TTF based superconductor to be discovered, and its unit cell contains only one formula unit. The BEDT-TTF radicals are arranged in stacks, and the stacks form sheets parallel to the ab crystal plane. The centrosymmetric linear I<sub>3</sub><sup>-</sup> anions separate the sheets, forming an insulating layer. Several variants of the $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> phase have been reported, making a full and detailed characterization difficult. 
The electrochemically prepared $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> exhibits ambient pressure superconductivity at $`T_c`$ = 1.3 K ($`\beta _L`$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub>) or at $`T_c`$ = 8.1 K ($`\beta _H`$- or $`\beta ^{\prime }`$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub>), depending on the pressure-temperature history of the sample. This $`T_c`$ increase has been attributed to a pressure induced ordering process of the ethylene groups of the BEDT-TTF cation. In addition, thermal treatment or laser irradiation of the $`\alpha `$-phase yields an irreversible transformation to a superconducting phase ($`T_c`$ = 8.0 K), named $`\alpha _t`$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub>, which was claimed to be similar to the $`\beta `$-phase. On the other hand, $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> can also be prepared by direct chemical oxidation ($`\beta _{CO}`$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub>), with $`T_c`$ between 7.1 and 7.8 K. Recent X-ray data confirm that thermally treated $`\alpha `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> is identical to $`\beta _{CO}`$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub>, but it is still not clear whether $`\beta _{CO}`$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> is the same as $`\beta _H`$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub>: the possibility of non-stoichiometric phases has also been put forward as an alternative to the ordering process in causing a $`T_c`$ of about 8 K.

The paper is organized as follows. We first discuss in some detail the methods we have adopted to calculate the structure, the phonon dynamics and the e–lph coupling strength of (BEDT-TTF)<sub>2</sub>I<sub>3</sub> salts. The results relevant to the $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> phase are then presented and compared with the available experimental data. Finally, the possible role of electron-phonon coupling in the pairing mechanism of organic superconductors is briefly discussed.

## II Methods

### A Quasi Harmonic Lattice Dynamics

The crystal structure at thermodynamic equilibrium of (BEDT-TTF)<sub>2</sub>I<sub>3</sub> salts is computed using Quasi Harmonic Lattice Dynamics (QHLD). In QHLD the Gibbs free energy $`G(p,T)`$ of the crystal is approximated with the free energy of the harmonic phonons calculated at the average lattice structure ($`\hbar =1`$):

$$G(p,T)=\mathrm{\Phi }_{\mathrm{inter}}+pV+\sum _{𝐪i}\frac{\omega _{𝐪i}}{2}+k_BT\sum _{𝐪i}\mathrm{ln}\left[1-\mathrm{exp}\left(-\frac{\omega _{𝐪i}}{k_BT}\right)\right]$$ $`(1)`$

Here, $`\mathrm{\Phi }_{\mathrm{inter}}`$ is the total potential energy of the crystal, $`pV`$ is the pressure-volume term, $`\sum _{𝐪i}\omega _{𝐪i}/2`$ is the zero-point energy, and the last term is the entropic contribution. The sums are extended to all phonon modes of wavevector $`𝐪`$ and frequency $`\omega _{𝐪i}`$. Given an initial lattice structure, one computes $`\mathrm{\Phi }_{\mathrm{inter}}`$ and its second derivatives with respect to the displacements of the molecular coordinates. The second derivatives form the dynamical matrix, which is numerically diagonalized to obtain the phonon frequencies $`\omega _{𝐪i}`$ and the corresponding eigenvectors. The structure as a function of $`p`$ and $`T`$ is then determined self-consistently by minimizing $`G(p,T)`$ with respect to the lattice parameters, molecular positions and orientations.
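As an illustration of how eq. (1) is used in practice, the following is a minimal sketch of the free energy evaluation and minimization. The functions `phonons_of`, `phi_of` and `pV_of` are hypothetical interfaces standing in for the atom-atom potential machinery, and the choice of units (frequencies and energies in cm<sup>-1</sup>, temperature in K) is ours, not the paper's:

```python
import numpy as np
from scipy.optimize import minimize

HC_K = 1.4388  # cm K: converts (omega in cm^-1)/T into hbar*omega/(k_B T)

def gibbs_free_energy(omega, phi_inter, pV, T):
    """Quasi-harmonic Gibbs free energy of eq. (1), all energies in cm^-1.

    omega     -- array of phonon frequencies omega_qi over a BZ sample
    phi_inter -- static lattice potential energy
    pV        -- pressure-volume term
    T         -- temperature in K (T > 0)
    """
    x = HC_K * omega / T                          # hbar*omega/(k_B T)
    zero_point = 0.5 * omega.sum()                # sum_qi omega_qi/2
    entropic = (T / HC_K) * np.log(1.0 - np.exp(-x)).sum()
    return phi_inter + pV + zero_point + entropic

def equilibrium_structure(x0, phonons_of, phi_of, pV_of, T):
    """Minimize G over the structural parameters x (cell constants,
    molecular positions and orientations). phonons_of(x), phi_of(x) and
    pV_of(x) are assumed stand-ins for the QHLD machinery."""
    g = lambda x: gibbs_free_energy(phonons_of(x), phi_of(x), pV_of(x), T)
    return minimize(g, x0, method="Nelder-Mead").x
```

In an actual calculation the phonon frequencies have to be recomputed from the dynamical matrix at every trial structure, which is what makes the minimization self-consistent.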
In the case of (BEDT-TTF)<sub>2</sub>I<sub>3</sub> salts, and in particular of the $`\beta `$-phase, the choice of the initial lattice structure is somewhat problematic, due to the conformational disorder of the BEDT-TTF molecules. In fact, the X-ray structural investigations indicate that $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> at 120 K is disordered, with two alternative sites for the terminal C atoms, labeled 9a,10a (staggered form) and 9b,10b (eclipsed form). On the other hand, ab–initio calculations for neutral BEDT-TTF indicate that the “boat” geometry ($`C_2`$ symmetry) is more stable than the “planar” geometry ($`D_2`$ symmetry) by 0.65 kcal/mole. The “chair” distortion ($`C_s`$ symmetry) is slightly more stable than the planar molecule, but still less stable than the boat one. The BEDT-TTF<sup>+</sup> ion is planar, and in (BEDT-TTF)<sub>2</sub>I<sub>3</sub> crystals we have a statistical mixture of neutral and ionized molecules. On the basis of the site symmetry constraints, we observe that the neutral molecule boat and chair geometries correspond to Leung’s configurations 9a,10a and 9b,10b, respectively. Thus the conformational disorder observed in most BEDT-TTF salts is readily understood: the energetic cost of deforming the molecules is small with respect to the energy gain among different packing arrangements in the crystals. To investigate at least partially the effect of conformational disorder on the stability of (BEDT-TTF)<sub>2</sub>I<sub>3</sub> phases, we have performed several calculations starting from different initial molecular geometries, as detailed in Section III.

### B Potential Model

We have adopted a pairwise additive inter-molecular potential of the form $`\mathrm{\Phi }_{\mathrm{inter}}=\frac{1}{2}\sum _{mn}[q_mq_n/r_{mn}+A_{mn}\mathrm{exp}(-B_{mn}r_{mn})-C_{mn}/r_{mn}^6]`$, where the sum is extended to all distances $`r_{mn}`$ between pairs $`m`$,$`n`$ of atoms in different molecules. Ewald’s method is used to accelerate the convergence of the Coulombic interactions $`q_mq_n/r_{mn}`$. The atomic charges $`q_m`$ are the PDQ (PS-GVB) results of recent ab–initio Hartree-Fock calculations, and are introduced to model both the neutral and ionized forms of the BEDT-TTF molecule. The parameters $`A_{mn}`$, $`B_{mn}`$ and $`C_{mn}`$ involving C, H and S atoms are taken from our previous calculation of neutral BEDT-TTF. Since in the chosen model C–H parameters are computed from C–C and H–H parameters via “mixing rules”, the same procedure is adopted here for all the interactions between different types of atoms. The iodine parameters have been derived from 9,10-diiodoanthracene, and successfully tested on $`\alpha `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub>. The complete atom-atom model is given in Table I.

### C Specific Heat

The constant volume specific heat as a function of $`T`$ is computed directly from its statistical mechanics expression for a system of phonons:

$$C_V(T)=\sum _{𝐪i}k_B\left(\frac{\omega _{𝐪i}}{k_BT}\right)^2\mathrm{exp}\left(-\frac{\omega _{𝐪i}}{k_BT}\right)\left[1-\mathrm{exp}\left(-\frac{\omega _{𝐪i}}{k_BT}\right)\right]^{-2}$$ $`(2)`$

As usual in these cases, eq. (2) is evaluated by sampling a large number of $`𝐪`$-vectors in the first Brillouin Zone (BZ). In our first attempts to compute $`C_V`$, we sampled over regular grids in the BZ. We found that for $`T\lesssim 5`$ K the statistical noise was still noticeable even after summing over several thousands of $`𝐪`$-vectors; the results were dependent on the sample size. At large $`T`$, on the contrary, statistical convergence was quite fast. This pathology can be attributed to the fact that, due to the exponential factor in eq. (2), only the phonons with $`\omega _{𝐪i}\lesssim k_BT`$ give a non-negligible contribution to $`C_V(T)`$. For very low $`T`$, only the acoustic branches of the phonons with $`𝐪`$ close to zero have sufficiently small frequencies. With a regular grid, only a few of these vectors are sampled, and most of the computer time is wasted over regions of the BZ that are already well sampled. To obtain accurate statistics at a reasonable cost, we have used a Monte Carlo (random) integration scheme, biased to yield a larger sampling probability close to $`𝐪=0`$. For computational simplicity, we have chosen a three-dimensional Lorentzian probability distribution, $`L(𝐪)\propto (1+a𝐪^2)^{-1}`$, where $`a`$ is a width parameter. The bias is compensated by using the reciprocal of the sampling probability as the sample weight. With this scheme most of the computer effort is spent in the region $`𝐪\approx 0`$, where a denser sampling really matters. By summing over about 2000 $`𝐪`$-vectors, we have been able to reach satisfactory statistical convergence in the whole range between 0.1 and 20 K. At high $`T`$, the results coincide with those obtained by integrating over a grid. At low $`T`$, $`C_V`$ goes as $`T^3`$, as it should when the acoustic modes are properly sampled, and does not fluctuate with the sample size.
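The biased sampling of eq. (2) can be sketched as follows. For simplicity we draw each component of $`𝐪`$ from a one-dimensional Cauchy distribution folded into the zone, rather than from the exact three-dimensional Lorentzian of the text; any normalizable bias works, provided it is compensated by the weights. The interface `omega_of_q` is again an assumed stand-in for the dynamical matrix diagonalization:

```python
import numpy as np

KB_CM = 0.6950  # Boltzmann constant in cm^-1/K

def cv_biased_mc(omega_of_q, T, n_samples=2000, gamma=0.05, seed=0):
    """Specific heat per unit cell, eq. (2), in units of k_B, by
    importance-sampled integration over the first BZ."""
    rng = np.random.default_rng(seed)
    num = wsum = 0.0
    for _ in range(n_samples):
        q = gamma * rng.standard_cauchy(3)     # dense sampling near q = 0
        if np.any(np.abs(q) >= 0.5):
            continue                           # outside the zone: reject
        # proposal density (product of 1D Cauchy pdfs); its reciprocal
        # is the compensating sample weight
        pdf = np.prod(1.0 / (np.pi * gamma * (1.0 + (q / gamma) ** 2)))
        x = omega_of_q(q) / (KB_CM * T)        # hbar*omega/(k_B T), all branches
        ex = np.exp(-x)
        num += (1.0 / pdf) * np.sum(x**2 * ex / (1.0 - ex) ** 2)
        wsum += 1.0 / pdf
    return num / wsum                          # self-normalized BZ average
```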
### D Coupling with low-frequency intra–molecular degrees of freedom

In most calculations for molecular crystals all intra–molecular degrees of freedom are neglected and the molecules are treated as rigid units. This rigid molecule approximation (RMA) is reasonable for small compact molecules, like benzene, where all normal modes have frequencies much higher than those of the lattice phonons. Since for both I<sub>3</sub><sup>-</sup> and BEDT-TTF several investigations suggest that there are low-frequency intra–molecular modes, the validity of the RMA for (BEDT-TTF)<sub>2</sub>I<sub>3</sub> appears questionable. Therefore, we have decided to relax the RMA and to investigate the effects of the intra–molecular degrees of freedom. For this purpose we adopt an exciton-like model. To start with, it is convenient to use a set of molecular coordinates $`Q_i`$ describing translations, rotations and internal vibrations of the molecular units in the crystal. To each BEDT-TTF molecule of $`N=26`$ atoms we associate the following $`3N`$ coordinates: 3 mass-weighted cartesian displacements of the center of mass, 3 inertia-weighted rotations about the principal axes of inertia, and $`3N-6=72`$ internal vibrations (the normal modes of the isolated BEDT-TTF molecule). The I<sub>3</sub><sup>-</sup> ion, which is linear, has 3 translations, 2 rotations and 4 internal vibrations. In order to compute the phonon frequencies, we need all derivatives $`\partial ^2\mathrm{\Phi }/\partial Q_{ri}\partial Q_{sj}`$ of the total potential $`\mathrm{\Phi }`$ with respect to all pairs of molecular coordinates $`Q_{ri}`$ and $`Q_{sj}`$. Here $`r`$ and $`s`$ label molecules in the crystal, while $`i`$ and $`j`$ distinguish molecular coordinates. The potential $`\mathrm{\Phi }`$ is made of intra– and inter–molecular parts, $`\mathrm{\Phi }_{\mathrm{intra}}`$ and $`\mathrm{\Phi }_{\mathrm{inter}}`$. In the exciton model, the diagonal derivatives of the $`\mathrm{\Phi }_{\mathrm{intra}}`$ potential are taken to coincide with those of an isolated molecule: $`\partial ^2\mathrm{\Phi }_{\mathrm{intra}}/\partial Q_{ri}^2=\omega _{ri}^2`$. Here $`\omega _{ri}`$ is the frequency of the $`i`$-th normal mode of the $`r`$-th molecule. All off-diagonal derivatives are zero, which means no coupling among different normal modes, and no coupling between normal modes and rigid roto-translations. These assumptions are correct for the intra–molecular potential at the harmonic level (by definition). The coupling between the molecular coordinates is given by $`\mathrm{\Phi }_{\mathrm{inter}}`$. For $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub>, $`\mathrm{\Phi }_{\mathrm{inter}}`$ is described by atom-atom and charge-charge interactions, which are both functions only of the interatomic distance. Since the distance depends on the cartesian coordinates of the atoms, $`X_{ra}`$, the derivatives of $`\mathrm{\Phi }_{\mathrm{inter}}`$ can be directly computed in terms of the coordinates $`X_{ra}`$, and then converted to molecular coordinates $`Q_{ri}`$:

$$\frac{\partial ^2\mathrm{\Phi }_{\mathrm{inter}}}{\partial Q_{ri}\partial Q_{sj}}=\sum _{ab}\frac{\partial ^2\mathrm{\Phi }_{\mathrm{inter}}}{\partial X_{ra}\partial X_{sb}}\frac{\partial X_{ra}}{\partial Q_{ri}}\frac{\partial X_{sb}}{\partial Q_{sj}}$$ $`(3)`$

Here $`a`$ and $`b`$ label the cartesian coordinates of the atoms in molecules $`r`$ and $`s`$, respectively, and the matrix $`\partial X_{pa}/\partial Q_{pi}`$ describes the cartesian displacements which correspond to each molecular coordinate $`Q_{pi}`$. The displacements corresponding to rigid translations and rotations of the molecules can be derived by simple geometric arguments. The displacements associated with the intra–molecular degrees of freedom are the cartesian eigenvectors of the normal modes of the isolated molecule. The atomic displacements, together with the inter–molecular potential model, determine the coupling between intra–molecular and lattice modes. We remark that the intra–molecular degrees of freedom are taken into account only as far as their effects on the vibrational contribution to the free energy are concerned. No attempt is made to decrease the potential energy by deforming the molecules.
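In matrix form, eq. (3) is a congruence transformation of the cartesian Hessian by the displacement matrix $`\partial X/\partial Q`$. A minimal sketch for the $`𝐪=0`$ case (the input arrays would come from the atom-atom potential routines and the isolated-molecule ab–initio calculations; at general $`𝐪`$ the matrices become complex):

```python
import numpy as np

def phonons_exciton_model(hess_cart, dXdQ, omega_intra):
    """Phonon frequencies in the exciton-like model at q = 0.

    hess_cart   -- (3N_tot, 3N_tot) second derivatives of Phi_inter with
                   respect to mass-weighted cartesian displacements
    dXdQ        -- (3N_tot, M) matrix dX_pa/dQ_pi of eq. (3): one column of
                   cartesian displacements per molecular coordinate
                   (translations, rotations, isolated-molecule normal modes)
    omega_intra -- (M,) isolated-molecule frequencies; zero entries for the
                   rigid-body coordinates
    """
    # inter-molecular block, eq. (3): transform the Hessian to Q coordinates
    d = dXdQ.T @ hess_cart @ dXdQ
    # intra-molecular block: diagonal omega_ri^2, all off-diagonals zero
    d += np.diag(np.asarray(omega_intra) ** 2)
    evals = np.linalg.eigvalsh(d)
    return np.sqrt(np.clip(evals, 0.0, None))   # frequencies, same units
```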
### E e–lph coupling constants and the Eliashberg function

In molecular crystals, intra–molecular vibrations are assumed to couple with electrons through the modulation of on–site energies (e–mv coupling). Lattice phonons are instead expected to modulate mainly the inter–molecular charge transfer (CT) integral, $`t`$, the corresponding linear e–lph coupling constants being defined as:

$$g(KL;𝐪,j)=\left(\frac{\partial t_{KL}}{\partial Q_{𝐪j}}\right)$$ $`(4)`$

where $`t_{KL}`$ is the CT integral between neighboring pairs $`K`$,$`L`$ of BEDT-TTF molecules, and $`Q_{𝐪j}`$ is the dimensionless normal coordinate for the $`j`$–th phonon with wavevector $`𝐪`$. By relaxing the RMA, as explained above, the distinction between low–frequency intra–molecular modes and lattice modes is at least partially lost. On the other hand, e–mv coupling by the low-frequency molecular modes is expected to be fairly small, as suggested by the calculations available for isolated BEDT-TTF. Therefore, we have assumed that the calculated low–frequency phonons of $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub>, occurring between 0 and about 200 cm<sup>-1</sup>, are coupled to the CT electrons only through the $`t`$ modulation.

To evaluate the $`g(KL;𝐪,j)`$’s, we have followed a real space approach. Adopting the extended Hückel method, for each pair $`K,L`$ of BEDT-TTF molecules within the $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> crystal we have calculated $`t_{KL}`$ as the variation of the HOMO energy in going from the monomer to the dimer. Such an approach is known to give $`t`$ values in nice agreement with those calculated by extended basis set ab–initio methods. $`t_{KL}`$ is calculated for the dimer equilibrium geometry within the crystal, as well as for geometries displaced along the QHLD eigenvectors. The various $`g(KL;𝐪,j)`$ are then obtained by numerical differentiation. We have considered only the modulation of the four largest $`t`$’s, all along the $`ab`$ crystal plane.
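Schematically, the numerical differentiation reads as below; `t_dimer` stands for the external extended Hückel calculation of the HOMO splitting and is an assumed interface, not real code from any package. The second helper evaluates the small polaron binding energy $`ϵ_j`$ introduced just below.

```python
import numpy as np

def elph_coupling(t_dimer, xyz0, eigvec, delta=0.01):
    """g(KL;q,j) = dt_KL/dQ_qj, eq. (4), by central differences.

    t_dimer -- function: dimer geometry -> CT integral t_KL (assumed
               interface to an extended Hueckel code)
    xyz0    -- (n_atoms, 3) equilibrium dimer geometry in the crystal
    eigvec  -- (n_atoms, 3) cartesian pattern of one QHLD phonon, scaled
               to the dimensionless normal coordinate Q
    delta   -- dimensionless displacement step
    """
    return (t_dimer(xyz0 + delta * eigvec)
            - t_dimer(xyz0 - delta * eigvec)) / (2.0 * delta)

def polaron_binding(g_KL, omega_j):
    """Small polaron binding energy eps_j = sum_KL g_KL^2 / omega_j."""
    return np.sum(np.asarray(g_KL) ** 2) / omega_j
```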
In the case of e–mv coupling the overall electron-phonon coupling strength is generally expressed by the small polaron binding energy, $`E_{sp}^{mv}=\sum _ig_i^2/\omega _i`$, where both $`g_i`$, the $`i`$-th e–mv coupling constant, and $`\omega _i`$, the corresponding reference frequency, are quite naturally taken as independent of the wavevector $`𝐪`$. Also in the calculation of the e–lph coupling we have assumed the optical lattice phonons to be dispersionless, and have performed the calculations for the $`𝐪=0`$ eigenvectors only. Within this approximation, symmetry arguments show that only the totally symmetric ($`A_g`$) phonons can be coupled with the electrons. Thus, the overall e–lph coupling strength for the $`j`$-th optical lattice phonon can again be expressed by the small polaron binding energy relevant to the $`j`$-th phonon: $`ϵ_j=\sum _{KL}(g_{KL,j}^2/\omega _j)`$. The total coupling strength is then given by $`E_{sp}^{lp}=\sum _jϵ_j`$. For the three acoustic branches we must of course consider the $`𝐪`$ dependence of the $`g`$’s, the coupling constants being zero for $`𝐪=0`$. We have therefore calculated the coupling strength ($`ϵ_j^{ac}`$) at some representative BZ edges in the $`a^{\ast }b^{\ast }`$ reciprocal plane. For each branch, we have averaged the $`ϵ_j^{ac}`$ found in this way, and assumed a linear dependence on $`|𝐪|`$. The latter assumption is correct only in the small $`|𝐪|`$ limit.

The most important single parameter characterizing the strength of the electron-phonon coupling in the superconductivity mechanism is the dimensionless electron–phonon coupling constant $`\lambda `$. This parameter is in turn related to the Eliashberg coupling function $`\alpha ^2(\omega )F(\omega )`$:

$$\lambda =2\int _0^{\omega _{max}}\frac{\alpha ^2(\omega )F(\omega )}{\omega }d\omega $$ $`(5)`$

where $`F(\omega )`$ is the phonon density of states per unit cell, and $`\alpha ^2(\omega )`$ is an effective coupling function for phonons of energy $`\omega `$. The e–lph Eliashberg coupling function can be evaluated from the QHLD phonon density of states and from the electron–phonon matrix element $`g(𝐤,𝐤^{\prime };j)`$ expressed in reciprocal space:

$$\alpha ^2(\omega )F(\omega )=N(E_F)\sum _j\left\langle |g(𝐤,𝐤^{\prime };j)|^2\delta (\omega -\omega _{𝐪j})\right\rangle _{FS}$$ $`(6)`$

where $`𝐪=𝐤^{\prime }-𝐤`$, with $`𝐤`$ and $`𝐤^{\prime }`$ denoting the electronic wavevectors, and $`N(E_F)`$ is the density of states per spin per unit cell at the Fermi level. In eq. (6), $`\langle \mathrm{}\rangle _{FS}`$ indicates the average over the Fermi surface. We have calculated the $`g`$’s in real space, as detailed above. In order to introduce the dependence on the electronic wavevector $`𝐤`$, as required in eq. (6), we have to describe the electronic structure of the $`\beta `$–phase metal. To get a simple yet realistic model we resort to the rectangular tight–binding dimer model, where the BEDT-TTF dimers inside the actual unit cell are taken as a supermolecule. Actually, as in the $`\kappa `$–phase, in the $`\beta `$–phase structure BEDT-TTF dimers are clearly recognized (in the present formalism, they correspond to the $`t_{AB}`$ CT integral). In this model there is only one half–filled conduction band in the first BZ, whose dispersion relation as a function of the $`t_{KL}`$ CT integrals is easily obtained:

$$ϵ(𝐤)=t_{AB}+t_{AH}\mathrm{cos}(k_x)+t_{AE}\mathrm{cos}(k_y)+t_{AC}\mathrm{cos}(k_x+k_y)$$ $`(7)`$

The chemical potential is obtained numerically from the half–filling condition. Within our tight–binding approximation, the dependence in reciprocal space of the coupling constants associated with the inter–dimer (inter–cell) hoppings is given by:

$$g(𝐤,𝐤^{\prime };j)=2\mathrm{i}g(KL;𝐪,j)\left[\mathrm{sin}\left((𝐤+𝐪)\cdot 𝐑\right)-\mathrm{sin}(𝐤\cdot 𝐑)\right]$$ $`(8)`$

where $`𝐑`$ represents the nearest–neighbor lattice vectors ($`a`$, $`b`$, $`a+b`$), and the $`g(KL;𝐪,j)`$ are the real-space coupling constants of the three corresponding inter–cell CT integrals. The Fermi surface average of eq. (6) can now easily be performed numerically for the inter–dimer contribution. The coupling constants associated with the modulation of the intra–dimer CT integral are treated as intra-molecular coupling constants, and as such are independent of $`𝐤`$. We finally remark that the e–mv Eliashberg coupling function is simply given by $`[\alpha ^2(\omega )F(\omega )]_{emv}=(N(E_F)/N)\sum _ig_i^2\delta (\omega -\omega _i)`$, $`N`$ being the number of molecules per unit cell and $`g_i`$ the usual e–mv coupling constant, so that $`\lambda _{emv}=N(E_F)E_{sp}^{mv}`$.
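For concreteness, a sketch of the dimer-model ingredients of eqs. (7) and (8): at $`T=0`$ the half-filling condition simply makes the chemical potential the median band energy over a $`𝐤`$-grid, and the $`𝐤`$-dependent coupling follows eq. (8). The hopping values passed in are placeholders, not our computed CT integrals:

```python
import numpy as np

def band(kx, ky, t_AB, t_AH, t_AE, t_AC):
    """Dimer-model dispersion, eq. (7); kx, ky in units of a*, b*."""
    return (t_AB + t_AH * np.cos(kx) + t_AE * np.cos(ky)
            + t_AC * np.cos(kx + ky))

def chemical_potential(t, nk=400):
    """Half filling at T = 0: mu is the median of the band energies."""
    ks = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    kx, ky = np.meshgrid(ks, ks)
    return np.median(band(kx, ky, *t))

def g_of_k(g_real, k, q, R):
    """Inter-dimer coupling in reciprocal space, eq. (8).

    g_real -- real-space g(KL;q,j)
    R      -- a nearest-neighbor lattice vector (a, b or a+b), with k and
              q 2-vectors in the same reduced units
    """
    return 2j * g_real * (np.sin(np.dot(k + q, R)) - np.sin(np.dot(k, R)))
```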
## III Results

### A Crystallographic structures

The unit cell of $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> contains one I<sub>3</sub><sup>-</sup> ion at the (0 0 0) inversion site and two BEDT-TTF molecules at generic sites. At 4.5 K the two BEDT-TTF molecules have a boat geometry (with the terminal C atoms in 9a,10a positions) and are interconverted by the inversion. At 100 and 120 K the lattice is disordered and inversion symmetry is satisfied only statistically, with a mixture of boat and chair molecules. As explained in Section II, we first performed calculations with rigid molecules, and then relaxed the RMA with the addition of a subset of intra–molecular modes. The crystal structure is only marginally affected by the RMA, and in Table II we report the comparison between RMA-calculated and experimental crystal structures at several temperatures and pressures. Fig. 1 reports a more extensive and direct comparison between calculated and experimental crystal axis lengths against $`T`$ and $`p`$. The calculations have been performed by minimizing the free energy $`G`$ with the molecules kept rigid at their experimental, ordered geometry at 4.5 K. To investigate the effect of small changes in molecular geometry, the structures of $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> have been recomputed with the 120 K geometry and ordered molecules (staggered or boat form). The effect of the change in molecular geometry is negligible. At all temperatures, $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> appears to be thermodynamically more stable than $`\alpha `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub>, accounting for the irreversible interconversion of $`\alpha `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> into $`\beta `$–like phases. The effect of molecular deformations has been investigated, within the RMA, by testing several model geometries in the $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> as well as in the $`\alpha `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> phase. The potential energy has been minimized with the experimental geometries and with chair-$`\alpha `$, chair-$`\beta `$, boat-$`\alpha `$ and boat-$`\beta `$ model geometries.
The chair-$`\alpha `$ geometry is the average of the two chair molecules in $`\alpha `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub>, while chair-$`\beta `$ is the molecule of $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> with all the terminal carbons in 9b,10b positions. The boat-$`\alpha `$ and boat-$`\beta `$ geometries are those observed in the corresponding phases. For both the $`\alpha `$ and $`\beta `$ phases, the potential energy minimum is found with the experimental geometry of that phase, and the system becomes less stable if any other geometry is used. It should be noticed that for $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> the boat-$`\beta `$ geometry coincides with the experimental geometry at 4.5 K, and thus yields the lowest energy. The difference between $`\alpha `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> and $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> essentially vanishes if the chair-$`\alpha `$ and boat-$`\alpha `$ geometries are used in the $`\beta `$–phase, while the chair-$`\beta `$ and boat-$`\beta `$ geometries drastically destabilize the $`\alpha `$–phase. This behavior clearly indicates that molecular deformations play a crucial role in stabilizing the various (BEDT-TTF)<sub>2</sub>I<sub>3</sub> phases.

### B Specific heat

We next turn our attention to the phonon structure. In the $`\beta `$–phase we have only one formula unit in the triclinic unit cell, and within the RMA we expect 8 $`A_g`$ and 6 $`A_u`$ $`𝐪=0`$ lattice phonons, active in Raman and in IR, respectively. The number of phonons experimentally observed in the 10-150 cm<sup>-1</sup> spectral region is in any case smaller than the above prediction, so vibrational spectra do not offer a very stringent test of the calculations. On the other hand, there is another observable, the specific heat, which depends on the whole frequency distribution. As shown in Fig. 2, at 20 K the $`C_V`$ calculated within the RMA (dotted line) is about 50% smaller than the experimental $`C_p`$ (dots, from Ref. ). The difference between $`C_V`$ and $`C_p`$ is usually small for solids, since their thermal expansion is small. Therefore, we attribute most of the discrepancy between $`C_V`$ and $`C_p`$ to the intra–molecular modes, which are neglected in the RMA calculation. Ab–initio calculations indeed indicate the presence of several low-frequency BEDT-TTF intra–molecular (internal) normal modes. Since the calculations refer to a free molecule, a direct comparison with experimental data in the solid state is not feasible. However, they constitute a very convenient starting point for relaxing the RMA in QHLD calculations, as explained in Section II. We have included the lowest nine BEDT-TTF internal modes, which fall in the same spectral region as the lattice modes (below $`\approx `$ 220 cm<sup>-1</sup>) and therefore are likely coupled. In addition, the symmetric and antisymmetric stretchings and the two bendings of I<sub>3</sub><sup>-</sup>, expected at 114, 145, 52 and 52 cm<sup>-1</sup>, respectively, have been included in the QHLD calculations. The cartesian displacements of BEDT-TTF were obtained from the ab–initio calculations, while those of I<sub>3</sub><sup>-</sup> were determined by symmetry alone, as often happens for small molecules with high symmetry. The $`C_V`$ computed by relaxing the RMA is also shown in Fig. 2. The agreement with experiment is greatly improved with respect to the RMA calculations.
We anticipate that the same kind of result has been obtained for $`\kappa `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub>, and conclude by stating that the RMA has to be relaxed for a realistic calculation of the low-frequency phonons of BEDT-TTF crystals.

### C Phonon assignments

We now go back to the characterization of the individual low-frequency phonons. In the RMA classification, below 220 cm<sup>-1</sup> we expect Raman activity for 8 lattice modes, 9 BEDT-TTF intra–molecular modes and one stretching of I<sub>3</sub><sup>-</sup>; in IR we expect 6 lattice modes, 9 BEDT-TTF intra–molecular modes and three I<sub>3</sub><sup>-</sup> modes. The modes calculated at the minimum $`G`$ structure at 120 K are compared with the experimental ones in Tables III and IV for $`A_u`$ and $`A_g`$ modes, respectively. We have chosen the temperature of 120 K since in this way we can compare the normal state phonon frequencies and eigenvectors for the minimum $`G`$ and the experimental structure. The frequency differences between the minimum $`G`$ and the experimental structure are quite small. The comparison between calculated and experimental vibrational frequencies is satisfactory, although not very significant given the low number of observed frequencies. Since we also have all the corresponding eigenvectors, we report an approximate description of the phonons, given for both BEDT-TTF and I<sub>3</sub><sup>-</sup> as the percentage of the lattice (rigid molecule) and of the intra–molecular contributions. Since in some cases there is considerable mixing between lattice and molecular modes, a clear distinction cannot be made.

Fig. 3 reports the full dispersion curves along the C, V, X, and Y directions and the density of states of $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub>. In order to make the figure more readable, we have limited the highest frequency to 150 cm<sup>-1</sup>. Fig. 3 makes evident the complex structure of the $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> low-frequency phonons. We have a very dense grouping of modes in the 50-100 cm<sup>-1</sup> region, with several avoided crossings between the dispersion curves, and clear mixing between lattice and molecular modes. Only the acoustic phonon branches contribute to the density of states below $`\approx `$ 25 cm<sup>-1</sup>, so that the typical $`\omega ^2`$ dependence is observed. On the other hand, at energies higher than $`\approx `$ 140 cm<sup>-1</sup> the almost dispersionless intra–molecular modes dominate, and the phonon density of states appears as a sum of delta-like peaks (not shown in the figure).

### D Electron-phonon coupling

The e–lph coupling constants for the optical phonons of $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> are reported in Table IV. As explained in Section II, if one assumes that the eigenvectors are independent of $`𝐪`$, only the $`A_g`$ phonons can couple to the electrons. In Table IV for each phonon we report both the individual $`g(KL;𝐪,j)`$ (eq. 4) and the small polaron binding energy $`ϵ_j`$. The two most strongly coupled modes are those calculated at 32 and 113 cm<sup>-1</sup>. Whereas the former mode has been observed in the Raman spectrum and, together with the lower frequency mode (27 cm<sup>-1</sup>), undergoes a drastic intensity weakening at $`T_c`$, the latter has not been reported even in the normal state. The reason might be the proximity of the very intense, resonantly enhanced band at 121 cm<sup>-1</sup>, due to the symmetric stretch of the I<sub>3</sub><sup>-</sup> anion.
One band at 107 cm<sup>-1</sup> has been observed below 6 K for 488 nm laser excitation. On the other hand, a band at 109 cm<sup>-1</sup>, whose intensity varies with sample and irradiation, has been attributed to the splitting of the I<sub>3</sub><sup>-</sup> stretching mode, as a consequence of the commensurate superstructure reported in one X-ray investigation at 100 K. Certainly the 100-130 cm<sup>-1</sup> spectral region deserves further experimental scrutiny with the latest generation of Raman spectrometers. A second observation is that whereas the 113 cm<sup>-1</sup> mode involves only the BEDT-TTF units, and is mostly a lattice mode, the 32 cm<sup>-1</sup> one is a mixture of rigid I<sub>3</sub><sup>-</sup> motion and “flexible” BEDT-TTF vibrations. This finding suggests a non-marginal role of the counter-ion sheets in $`\beta `$–type BEDT-TTF salts.

As shown by Table IV, the coupling of the individual optical modes with the electrons is in general not particularly strong, but on the whole the strength of the e–lph coupling, as measured by the sum of the $`ϵ_j`$, is appreciable, around 45 meV, to which we have to add the contribution of the acoustic phonons. For the sake of comparison, we also give the $`ϵ_j^{ac}`$ for the three acoustic branches, calculated as averages over several points at the BZ edges. In order of decreasing phonon frequency (see Fig. 3), the $`ϵ^{ac}`$ are 2.3, 3.3 and 18.2 meV, respectively. The coupling strength of the lowest acoustic branch at the zone edge is comparable to that of the most strongly coupled optical phonons. Thus the overall e–lph coupling strength is of the same order of magnitude as that due to e–mv coupling, about 70 meV.

We can make a more direct connection with the superconducting properties by calculating the Eliashberg function and the dimensionless electron-phonon coupling constant $`\lambda `$. As seen in eq. (6), the absolute value of the Eliashberg function depends on the electronic density of states at the Fermi energy, $`N(E_F)`$. Experimental estimates of this critical parameter are problematic, since the measured quantities already include either the $`\lambda `$ enhancement factor, or the Coulomb enhancement factor, or both. The available theoretical estimates are all based on the extended Hückel tight binding method. The choice of $`N(E_F)`$ = 2.1 spin states/eV/unit cell, as obtained by this method, is consistent with our extended Hückel estimates of the CT integrals in real space. To our advantage, we can compare the calculated $`\alpha ^2(\omega )F(\omega )`$ with that derived from normal state current/voltage measurements at a point contact junction. This kind of experiment is rather difficult to perform on organic crystals like $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub>, since one has to be careful about pressure effects at the point contact. The use of a contact between two $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> crystals made the current-voltage characteristics rather stable; from these the $`\alpha ^2(\omega )F(\omega )`$ function reported in the upper part of Fig. 4 was obtained. We have changed the scale on the ordinate axis to maintain the same energy unit (cm<sup>-1</sup>) throughout. It is clear that the spectral resolution of the experiment is larger than $`\approx `$ 10 cm<sup>-1</sup>, and probably increases with energy. Indeed, no spectral detail is visible beyond 240 cm<sup>-1</sup>, where the contribution of the e–mv coupled modes should be detectable.
Therefore, to ease the visual comparison with the experimental data, we have smoothed the calculated $`\alpha ^2(\omega )F(\omega )`$ (Fig. 4, lower part) by a convolution with a Gaussian distribution. We have also assumed that the Gaussian width increases linearly with $`\omega `$ (from 0.1 to 20 cm<sup>-1</sup> in the 1-200 cm<sup>-1</sup> interval). Fig. 4 makes evident the very good agreement between experiment and calculation. The absolute scale of $`\alpha ^2(\omega )F(\omega )`$ turns out to be practically the same, even if both experiment and calculation are affected by considerable uncertainties, as explained above. The three main peaks observed in the experiment are well reproduced and are identified as due to the most strongly coupled phonon branches, namely, the optical phonons at 113 and 32 cm<sup>-1</sup>, and the lowest frequency acoustic branch. The calculated peak frequency due to the latter is slightly higher than the experimental one (22 vs 10 cm<sup>-1</sup>). This discrepancy might be due to the fact that the experiment refers to the $`\beta _L`$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> phase, whereas our calculation refers to a perfectly ordered phase like $`\beta ^{\prime }`$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub>. We also remark that, at variance with traditional superconductors, the Eliashberg function is remarkably different from the phonon density of states (Fig. 3). For instance, the peak around 120 cm<sup>-1</sup> in $`F(\omega )`$ is due to the dispersionless I<sub>3</sub><sup>-</sup> stretching mode, which is completely decoupled from the electron system, whereas the broad peak in $`\alpha ^2(\omega )F(\omega )`$ is due to the nearby (113 cm<sup>-1</sup>) “lattice” mode of the BEDT-TTF molecules. Due to the complex phonon structure, $`\alpha ^2(\omega )`$ is not nearly constant, but varies rapidly with frequency. The dimensionless coupling constant $`\lambda `$ obtained by integration of $`\alpha ^2(\omega )F(\omega )/\omega `$ up to 240 cm<sup>-1</sup> turns out to be around 0.4. The contribution to $`\lambda `$ from the e–mv coupled modes is instead around 0.1. Thus in the McMillan picture the overall $`\lambda `$ of $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> is $`\approx `$ 0.5, which may well account for the observed $`T_c=8.1`$ K.
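The two numerical steps just described, the integration of eq. (5) and the variable-width Gaussian smoothing used for Fig. 4, are simple enough to state explicitly. A sketch, assuming the frequency grid starts above $`\omega =0`$ (where the acoustic contribution to $`\alpha ^2F`$ vanishes):

```python
import numpy as np

def coupling_lambda(omega, a2F):
    """lambda = 2 * int a2F(w)/w dw, eq. (5), by the trapezoidal rule."""
    return 2.0 * np.trapz(a2F / omega, omega)

def smooth_variable_gaussian(omega, a2F, w_lo=0.1, w_hi=20.0,
                             om_lo=1.0, om_hi=200.0):
    """Convolve a2F with a Gaussian whose width grows linearly with omega
    (0.1 to 20 cm^-1 over the 1-200 cm^-1 interval, as in the text)."""
    width = w_lo + (w_hi - w_lo) * (omega - om_lo) / (om_hi - om_lo)
    out = np.empty_like(a2F)
    for i, w0 in enumerate(omega):
        kern = np.exp(-0.5 * ((omega - w0) / width[i]) ** 2)
        # normalized local average = Gaussian convolution on a finite grid
        out[i] = np.trapz(kern * a2F, omega) / np.trapz(kern, omega)
    return out
```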
## IV Discussion and conclusions

The computational methods we have adopted to analyze the crystal and lattice phonon structure, and the electron-phonon coupling strength, of $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> are empirical or semiempirical. The form of the QHLD atom–atom potentials has no rigorous theoretical justification, and the corresponding parameters are derived from empirical fittings. We have adopted ab–initio atomic charges to take into account the Coulomb interactions between atoms, and ab–initio vibrational eigenvectors to introduce the coupling between lattice and molecular modes. Also the extended Hückel method used to characterize the $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> electronic structure is semiempirical, albeit with an experimental basis far wider than that of QHLD. In view of the obvious limitations of empirical or semiempirical methods, the success achieved in the case of $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> is even beyond our expectations, especially considering that none of the empirical parameters has been adjusted to fit $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> experimental data. Indeed, all the available $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> experimental data have been accounted for. The crystal structure, and its variation with temperature and pressure, is correctly reproduced (Fig. 1 and Table II). Useful hints about the relative thermodynamic stability of the (BEDT-TTF)<sub>2</sub>I<sub>3</sub> $`\alpha `$ and $`\beta `$ phases have been obtained, as well as some indications of the effect of the BEDT-TTF conformation on the stability of the phases. The specific heat (Fig. 2) and the few detected Raman and infrared bands have been accounted for by including the coupling with low-frequency molecular vibrations. Finally, the point contact Eliashberg spectral function has been satisfactorily reproduced (Fig. 4).

Despite this success, it is wise to keep in mind the limitations of QHLD. First and foremost, conformational disorder in the crystal structure is not included. This is not a limitation of the QHLD method only, but it is a serious one, particularly for $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> salts, where disorder plays an important role even in the superconducting properties. It is in fact believed that a fully ordered structure is at the origin of the higher $`T_c`$ (8.1 K) displayed by the $`\beta ^{\prime }`$- or $`\beta _H`$- phases with respect to $`\beta _L`$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> (1.5 K). Furthermore, even if the QHLD method is able to follow the $`T`$ and $`p`$ dependence of the crystal structure, phase transitions implying subtle structural changes may be beyond its present capabilities, even for fully ordered structures. The relative stabilities of the phases can indeed be reproduced only at a qualitative level. As far as the electron-phonon coupling is concerned, one has to keep in mind that it depends on the phonon eigenvectors, and these are obviously more prone to inaccuracies than the energies. Finally, the extension to other BEDT-TTF salts with counter-ions different from I<sub>3</sub><sup>-</sup> is not obvious, requiring additional atom-atom parameters.

Once the above necessary words of caution about the method are spelled out, we can underline what we have in any case learned from the present QHLD calculations. So far, in the absence of any description, no matter how approximate, of the phonons modulating the CT integrals, only speculative discussions about their role in the superconductivity could be put forward, catching at best only part of the correct picture. One of the most important indications coming out of the present paper is the need to relax the RMA. This means, on the one hand, that we cannot focus on the intra–molecular vibrations of the isolated molecule as the modes presumably modulating the CT integral and, on the other, that the “librations” of the rigid molecules lack a precise meaning. In other words, there is no simple or intuitive picture of the phonons modulating the CT integrals. Our results, and the overall mode mixing, suggest that the counter-ion vibrations may also play a perhaps indirect role in the coupling. The results of the present paper definitely assess the very important role played by the low-frequency phonons in the superconducting properties of BEDT-TTF salts. Both acoustic and optical modes modulating the CT integral are involved. The overall dimensionless coupling constant is $`\approx 0.4`$, much larger than that due to the e–mv coupled phonons ($`\approx 0.1`$).
Of course, a mere numerical comparison of the two $`\lambda `$’s is not particularly significant, since one has to keep in mind the very different time scales (frequencies) of the two types of phonons. The phonons appreciably modulating the CT integrals fall in the 0-120 cm<sup>-1</sup> spectral region (Table IV), whereas those modulating the on-site energies have frequencies ranging from 400 to 1500 cm<sup>-1</sup>. The applicability of the Migdal theorem to the latter appears dubious: non-adiabatic corrections or alternative mechanisms such as polaron narrowing have been suggested. For these reasons we will not enter into a detailed discussion of the relative role of e–lph and e–mv coupling in the superconductivity mechanism. We limit ourselves to stating that phonon mediated coupling can well account for the observed critical temperature of the ordered $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub> phase, given plausible values of the other fundamental parameter, the Coulomb pseudopotential $`\mu `$. The results of the present paper suggest that a phonon mediated mechanism is responsible for the superconductivity of BEDT-TTF-based salts. The same conclusion was reached on the basis of the solution of the BCS gap equation for $`\kappa `$–phase BEDT-TTF salts. On the other hand, evidence is also accumulating in favor of non-conventional coupling mechanisms in organic superconductors, such as spin-fluctuation mediated superconductivity. Similarly contrasting experimental evidence is also found for cuprates, pointing to a superconductivity mechanism where both electron-phonon coupling and antiferromagnetic spin correlations are taken into account.

###### Acknowledgements.

We express many thanks to Rufieng Liu for providing the ab–initio cartesian displacements of BEDT-TTF, to R. Swietlick for sending us unpublished Raman spectra of $`\beta `$-(BEDT-TTF)<sub>2</sub>I<sub>3</sub>, and to A. Müller for useful correspondence. We acknowledge helpful discussions with many people, notably A. Painelli, D. Pedron, D. Schweitzer, H. H. Wang and J. Wosnitza. This work has been supported by the Italian National Research Council (CNR) within its “Progetto Finalizzato Materiali Speciali per Tecnologie Avanzate II”, and by the Ministry of University and of Scientific and Technological Research (MURST).
# Transition from the Couette-Taylor system to the plane Couette system

## Acknowledgments

This work was financially supported by the Deutsche Forschungsgemeinschaft.
# Metallic and insulating behaviour of the two-dimensional electron gas on a vicinal surface of Si MOSFETs

## Abstract

The resistance $`R`$ of the 2DEG on the vicinal Si surface shows an unusual behaviour, which is very different from that in the $`\left(100\right)`$ Si MOSFET, where an unconventional metal to insulator transition has been reported. The crossover from the insulator with $`dR/dT<0`$ to the metal with $`dR/dT>0`$ occurs at a low resistance of $`R_\square ^c\approx 0.04\times h/e^2`$. At the low-temperature transition, which we attribute to the existence of a narrow impurity band at the interface, a distinct hysteresis in the resistance is detected. At higher temperatures, another change in the sign of $`dR/dT`$ is seen and is related to the crossover from the degenerate to the non-degenerate 2DEG.

The problem of the metal-to-insulator transition (MIT) in two-dimensional systems has been attracting much attention since the observation that in the two-dimensional electron gas in (100) Si MOSFETs there is a change in the sign of the temperature dependence, from $`dR/dT>0`$ (metal) to $`dR/dT<0`$ (insulator), with varying concentration. The transition occurs at a resistance of $`R_\square \approx h/e^2`$ and was then confirmed to exist in other n-(100) Si as well as in Si/SiGe and p-GaAs structures. The metallic-like behaviour is in obvious contradiction with the 2D scaling theory of electron localization, which allows only insulating behaviour. A specific feature of the unusual metal is that it has been seen in high mobility structures and at low concentrations of carriers with a large effective mass, so that the Coulomb interaction could play a significant role. Many different explanations have been suggested, although the effect still remains an unresolved problem. Recently, several models have appeared where the unusual metallic behaviour is explained by conventional, though non-trivial, electron transport at low temperatures: i) classical conduction with scattering by an impurity band in the oxide; ii) temperature dependent screening of impurity scattering and a crossover from the degenerate to the non-degenerate state; iii) two-band conduction in p-GaAs.

In a broad temperature range, from 50 mK to 70 K, we have performed an investigation of the MIT of the 2DEG in high mobility Si MOSFETs with another orientation of the Si surface - the vicinal surface, which is cut at a small angle to the $`(100)`$ plane. Such structures have been studied previously in the context of superlattice effects, which were seen at higher electron concentrations than used in this work. We expected that the difference in the surface and impurity states at the interface would affect the manifestation of the MIT. Indeed, our results show that the MIT in the 2DEG of Si MOSFETs is not universal and has a different manifestation in the vicinal samples. We have observed $`two`$ crossovers in $`R(T)`$, at low and at high temperatures, which are explained in terms of the models mentioned above and of temperature dependent impurity scattering. The low-temperature transition is seen at a small critical resistance where one can neglect quantum corrections to the conductivity. We have observed a strong hysteresis at this transition, which clearly indicates that it originates from a narrow impurity band (IB). We also report an unusual low-temperature reentrant MIT, which does not exist in the (100) Si structures we made by the same technology for a comparative study.
The vicinal samples are high mobility n-channel MOSFETs fabricated on a surface which is tilted from the $`\left(100\right)`$ surface around the $`\left[011\right]`$ direction by an angle of $`9.5^{\circ }`$. The samples have a peak mobility of $`2\times 10^4`$ $`cm^2/Vs`$ at $`T=4.2`$ K. The ‘normal’ samples are grown on the $`\left(100\right)`$ Si surface and have a maximum mobility around $`1.5\times 10^4`$ $`cm^2/Vs`$. The oxide thickness in both types of structure is $`120`$ $`nm`$. The samples have a Hall bar geometry with length $`1200`$ $`\mu m`$ and width $`400`$ $`\mu m`$. Their resistance has been measured in the temperature range $`0.05-70`$ K by a four-terminal ac method with frequency $`10`$ Hz and current $`2\le I_{ac}\le 10`$ $`nA`$. The electron concentration has been determined by Shubnikov-de Haas and capacitance measurements, and has been varied in the range $`2\times 10^{11}-1.4\times 10^{12}`$ $`cm^{-2}`$.

Fig. 1 shows the resistance as a function of the gate voltage $`V_g`$ for the vicinal sample Si-4.1 in the temperature range below 1 K. A change in the sign of $`dR/dT`$ is clearly seen near $`R_\square \approx 1`$ kOhm $`\approx 0.04\times h/e^2`$, with metallic behaviour at larger $`V_g`$. When the gate voltage, which controls the concentration, is slowly swept (at a rate of $`2`$ $`V`$/hour) in the two opposite directions, two distinct groups of curves are detected. The hysteresis loop disappears above 4 K and seems to be most pronounced near the crossover region. To quantify this observation, we have performed an experiment where a particular $`V_g`$ is approached from opposite directions: from $`V_g^{(1)}=0.5`$ V and $`V_g^{(2)}=9`$ V. After a brief transient time when equilibrium is established, the difference between the two resistances $`\mathrm{\Delta }R=R^{(1)}-R^{(2)}`$ does not change for many hours. This value is shown in the inset of Fig. 1, with a clear peak at $`V_g\approx 2.2`$ V - exactly in between the two crossover points. The Shubnikov-de Haas measurements performed in each case have shown that, for a particular $`V_g`$, the electron concentration is independent of the direction of the sweep - that is, it is the difference in the mobility which gives rise to $`\mathrm{\Delta }R`$. Noticeably, the crossover point in the same sample does not have a universal nature: the two transition points coincide neither in resistance nor in concentration. On the other hand, the value of the mobility is practically the same at the two transition points, which indicates that it is the mobility which governs the transition. Hence we suggest that the peak in $`\mathrm{\Delta }R`$ occurs when a narrow ($`W<0.5`$ meV) impurity band at the interface comes to the Fermi level of the 2DEG and changes the character of the electron scattering. A natural suggestion for the origin of the hysteresis is a slow (at low temperatures) electron exchange between the impurity band and the 2DEG, separated by a barrier. With increasing $`V_g`$ and rising Fermi level, the IB gets charged by electrons from the 2DEG. Some of the electrons still remain in the IB when the gate voltage is decreased back to lower values, until the Fermi level is below the IB and it releases all its negative charge (this is why $`\mathrm{\Delta }R`$ is small both at high and at low $`V_g`$).

It is worth mentioning that the presence of the IB was detected earlier in (100) Si MOSFETs, although this was done in the hopping regime, where it gave rise to an increased density of localized states and was easily detected as a peak in the conductance $`G(V_g)`$. Here its effect is seen on the electron scattering in the metallic regime: as a crossover point in $`R(T)`$ and as a peak in the hysteresis. We suggest that the character of the IB is similar to that considered in : it scatters electrons when it is positively charged, and the scattering decreases when more electrons are added to it. In that model, when the Fermi level is above and close to the impurity band, the IB does not contribute to scattering at $`T=0`$ K. With increasing temperature, the IB becomes positively charged and the resistance of the 2DEG increases, $`dR/dT>0`$. To explain the transition to insulating behaviour with decreasing electron concentration, it was assumed that electron localization takes over at a large enough resistance, of the order of $`R_\square \sim 10`$ kOhm. In our case, however, the transition occurs at a much lower resistance, where one can neglect electron localization. At the same time, we think that for a narrow band the crossover in the sign of $`dR/dT`$ should occur when the Fermi level $`F`$ is close to its centre. When the Fermi level moves down to the lower part of the IB, the mobility $`\mu `$ will increase with increasing temperature, as the positive charge of the IB decreases: $`\mu ^{-1}(T)\propto N^+\propto 1/\left(1+\mathrm{exp}\frac{F-E_i}{k_BT}\right)`$, where the IB is assumed to lie at the level $`E_i`$ (Fig. 2, inset). However, in such a simple model one should not expect a decrease in the resistance by more than a factor of two.
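A minimal sketch of this scattering model, including the flat impurity band of width $`W`$ used in the fit described below; the `amplitude` parameter, which sets the scattering strength of the fully ionized band, is our own placeholder:

```python
import numpy as np

KB = 8.617e-2  # Boltzmann constant in meV/K

def n_plus(T, F, E_i, W=0.0, npts=201):
    """Fraction of empty (positively charged) impurity-band states.

    For W = 0 this reduces to 1/(1 + exp((F - E_i)/k_B T)); for a flat
    band of width W centred at E_i, the hole occupation is averaged over
    the band. Energies in meV, T in K.
    """
    if W == 0.0:
        return 1.0 / (1.0 + np.exp((F - E_i) / (KB * T)))
    E = np.linspace(E_i - W / 2, E_i + W / 2, npts)
    holes = 1.0 - 1.0 / (1.0 + np.exp((E - F) / (KB * T)))
    return np.trapz(holes, E) / W

def inverse_mobility(T, F, E_i, W, mu_inv_bg, amplitude):
    """mu^-1(T) = background plus a term proportional to the IB charge N+."""
    return mu_inv_bg + amplitude * n_plus(T, F, E_i, W)
```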
Fig. 2 shows the temperature dependence of the resistance $`R^{(2)}`$ near the transition at $`n_c\approx 4.18\times 10^{11}`$ $`cm^{-2}`$, in the range $`T=50`$ mK $`-`$ 4 K, presented as $`\mu ^{-1}`$ to illustrate this simple model of IB scattering. To calculate the IB contribution, we subtracted the background $`\mu ^{-1}`$ at $`V_g=2.4`$ V, when the IB is full. The value $`F-E_i`$ is calculated from the known electron concentration and hence the Fermi energy $`E_F`$. The IB has been taken as having a constant density of states, with the width $`W\approx 0.08`$ meV used as an adjustable parameter. It is interesting to note that in order to get satisfactory agreement, we have to shift the level $`E_i`$ gradually up when the IB gets more than half filled. Also, the width of the IB appears to be smaller (by a factor of two) for the set $`R^{(2)}(T)`$, that is, when the IB is more filled with electrons. We expect this to be a reflection of the Coulomb interaction of the states in the IB (to be discussed elsewhere).

It is important to emphasize that the ‘insulating’ behaviour presented in Fig. 2 for the resistance range $`R=0.9-1.2`$ kOhm is in fact a property of a metallic 2DEG with a well defined Fermi surface. A direct proof of this is obtained by measuring Shubnikov-de Haas oscillations at $`V_g=2.0`$ V, i.e. for a concentration below the transition.

Fig. 3 shows the temperature dependence in the whole temperature range. It is seen that the discussed low-temperature MIT at $`R\approx 1`$ kOhm exists only in a narrow range of temperatures and that, in general, $`R(T)`$ has a complicated non-monotonic character. At $`V_g<2`$ V, the 2DEG shows an insulator which cannot be explained by the simple IB model and which, because of the low sample resistance, cannot occur due to electron localization of a quantum nature. This could be the result of a percolation-type localization. In the inset of Fig. 3, the variation of the slope of the temperature dependence with decreasing $`V_g`$ is shown for this insulator, using an exponential fit of $`R(T)`$ in the range $`T=1.5-7`$ K. The activation energy $`\mathrm{\Delta }=E_c-F`$ is seen to increase linearly with decreasing Fermi level (calculated from capacitance considerations). The activation energy extrapolates to zero at the mobility edge $`E_c`$ corresponding to $`V_g=1.9`$ V - a value which is close to the lowest $`V_g`$ at which Shubnikov-de Haas oscillations were seen.
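The activation energies in the inset of Fig. 3 follow from a standard Arrhenius fit; for completeness, a one-line version, assuming $`R(T)=R_0\mathrm{exp}(\mathrm{\Delta }/k_BT)`$ over the activated range:

```python
import numpy as np

def activation_energy_meV(T, R):
    """Fit R(T) = R0 * exp(Delta/(k_B T)) over the activated range
    (here T = 1.5-7 K) and return Delta in meV."""
    slope, _ = np.polyfit(1.0 / np.asarray(T), np.log(R), 1)
    return slope * 8.617e-2   # slope is Delta/k_B in K
```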
Let us now discuss the behaviour of $`R\left(T\right)`$ in Fig. 3 with increasing temperature. Localized electrons become delocalized at $`k_BT\sim \mathrm{\Delta }`$, and after a dip in $`R(T)`$ around $`T\approx 4`$ K there is a change in the sign of the temperature dependence and a transition to metallic behaviour. At higher temperatures, another crossover is now seen, at $`R_\square ^c\approx 3`$ kOhm. Near this transition, there is a non-monotonic $`R\left(T\right)`$ with a gradual change from $`dR/dT>0`$ to $`dR/dT<0`$ with increasing $`T`$. Phonon scattering can be neglected in this regime, as it only becomes important at $`T>100`$ K. We note that in the temperature range $`T>4`$ K the system experiences a transition from the degenerate (quantum) to the nondegenerate (classical) state (the Fermi temperature $`T_F`$ varies from $`90.5`$ K for the bottom curve to $`15`$ K for the top one in Fig. 3). The variation of $`T_F`$ with concentration corresponds to the position of the hump in Fig. 3. All the main features of the model of temperature dependent ionized impurity scattering can be seen in this regime. The metallic behaviour at $`T<T_F`$ is explained in by the temperature dependence of the screening function. This produces a linear rise in the resistivity with increasing temperature, $`\rho \left(T\right)\approx \rho \left(T=0\right)+A\left(T/T_F\right)`$, which agrees with experiment, where $`\rho \left(T\right)\propto \left(T/T_F\right)^{1.1}`$ for the bottom curve in Fig. 3. When $`T>T_F`$, the temperature dependence is expected to become $`\rho \left(T\right)\approx A\left(T/T_F\right)^{-1}`$, due to ionized impurity scattering in the non-degenerate case, which is in qualitative agreement with the upper curves in Fig. 3.
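A purely schematic interpolation between these two limits, not the full screening theory, already reproduces a hump near $`T\sim T_F`$:

```python
import numpy as np

def rho_schematic(T, T_F, rho0, A):
    """Crossover resistivity: rho ~ rho0 + A*(T/T_F) for T << T_F and
    rho ~ A*(T_F/T) for T >> T_F. The interpolation t/(1 + t^2) is an
    illustrative choice that peaks at T ~ T_F, not a derived result."""
    t = np.asarray(T) / T_F
    return rho0 + A * t / (1.0 + t**2)
```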
Further support for the classical-to-quantum transition at high temperatures is obtained from measurements of the perpendicular magnetoresistance. When $`T>T_F`$, we observed a small, $`\mathrm{\Delta }R/R\approx 1-2`$ %, positive magnetoresistance. The magnetoresistance decreases with either increasing concentration or decreasing temperature, i.e. when the system is driven towards the degenerate state. This qualitatively agrees with the classical behaviour of degenerate semiconductors.

We have performed a comparative study of a normal (100) Si sample. At high temperatures it also shows a crossover in $`R(T)`$, but at a higher critical resistance, $`R_c\approx 15`$ kOhm, and a lower concentration, $`n_c\approx 2.8\times 10^{11}`$ $`cm^{-2}`$, than in the vicinal sample. The position of the resistance hump near the transition is shifted from $`T_F\approx 20`$ K in the vicinal sample to $`T_F\approx 10`$ K in the normal sample. This supports the applicability of the model for the high-temperature transition in both samples. Comparing our normal sample with other (100) Si samples, one can see a similarity in the shape of $`R(T)`$ near the transition, apart from the fact that the crossover in $`R(T)`$ is shifted from $`T\approx 2`$ K in to $`T\approx 10`$ K in our normal sample. If it is assumed that the transition in can also be explained in terms of , the difference in the concentrations ($`n_c\approx 1\times 10^{11}`$ $`cm^{-2}`$) could account for the shift of the transition on the temperature scale. At temperatures below 1 K, the normal sample shows a similar crossover behaviour to the vicinal ones, although no hysteresis has been observed, which does not allow us to link directly the low-temperature crossover around $`R\approx 1`$ kOhm in the normal sample to an IB.

In the metallic regime below $`R\approx 1`$ kOhm, we have observed a striking difference between the vicinal and normal samples: in the vicinal samples there exists another crossover point, at $`R\approx 0.33`$ kOhm and $`n\approx 8\times 10^{11}`$ $`cm^{-2}`$ (Fig. 4a). The re-appearance of the insulating state with increasing carrier concentration has been previously reported in (100) Si-MOS structures and p-GaAs/GaAlAs, where it was attributed to weak electron localization. The effect we have observed in the vicinal sample is quite different from that in . Firstly, it shows a significantly stronger (by a factor of ten) insulating $`R\left(T\right)`$, which cannot be explained by weak localization. Secondly, the new transition is accompanied by a hump in $`R(V_g)`$ at base temperature (Fig. 4b). This feature, which we have seen in several vicinal samples, is possibly a manifestation of a gap in the energy spectrum which is only detected below 1 K. It is tempting to link this effect with a superlattice minigap; however, this is usually seen at much higher concentrations, $`\sim 2.5\times 10^{12}`$ $`cm^{-2}`$. Also, the reentrant insulator cannot be explained by occupation of the second subband, as we have not been able to identify its presence in the Shubnikov-de Haas oscillations and expect it to appear at higher electron concentrations.

In conclusion, we have observed several unusual features of the metal-to-insulator transition of the 2DEG on a vicinal Si surface and have been able to explain most of them by classical electron conduction with temperature dependent impurity scattering.

We are grateful to B. L. Altshuler and D. L. Maslov for stimulating discussions, and to the EPSRC and the ORS award fund for financial support. We also thank Y. Y. Proskuryakov for participating in discussions and for helping with the experiment.
# The MONS Star Trackers (to appear in Proceedings of the Third MONS Workshop: Science Preparation and Target Selection, edited by T.C.V.S. Teixeira and T.R. Bedding (Aarhus: Aarhus Universitet))

## 1 Introduction

Like many satellites that require precise attitude control, MONS will use star trackers to sense the spacecraft attitude. A star tracker is basically a wide-field CCD camera. In acquisition mode, images of the sky are compared with a star catalogue to determine the absolute orientation of the spacecraft. Once a target is acquired, the spacecraft attitude is continuously updated in tracking mode. There is clear scientific interest in using the photometry from a star tracker to observe variable stars. Despite the small aperture, the advantages of space (long observing periods and no atmospheric scintillation) make such a camera superior to ground-based telescopes for some applications. The only example so far has been the work by Buzasi et al. (2000 and these Proceedings), who are making impressive use of the 52-mm star camera on the WIRE satellite after the failure of the main instrument. As far as we know, MONS is the first mission to be designed from the start to use star trackers for science. Of course, we must keep in mind that the primary role of the star trackers is to sense spacecraft attitude with the precision required for the main camera. We describe here the plan for the MONS Star Trackers as it stood at the time of the Workshop (January 2000). This plan differs slightly from that described in the MONS Proposal (Kjeldsen, Bedding & Christensen-Dalsgaard 1999), and will undoubtedly change again before the final design is frozen (one change is that we now use the term “Star Tracker” rather than “Star Imager”). There are several factors driving the changes, including: (i) performance of the trackers as attitude sensors for the main camera, including redundancy issues; (ii) volume, thermal and power constraints from the possible presence of Ballerina; and (iii) performance of the trackers as secondary science instruments.

## 2 Current plans for the Star Trackers

There will be two Star Trackers, pointing in opposite directions. Both will consist of a small CCD camera (24 mm aperture) with a circular field of view with a diameter of 22° (see Fig. 1). The Bright Star Tracker (BST) will point forwards, in the same direction as the main telescope, and will have a broad blue filter (380–420 nm) to allow it to observe bright stars without saturating. The Faint Star Tracker (FST) will be unfiltered, preventing observations of very bright stars but improving the signal-to-noise at the faint end. Figures 2–4 show the expected photometric performance of the two Star Trackers. As a guide, the FST will take several seconds to reach the same photometric precision for a given star as an individual Hipparcos measurement. Of course, each star was only measured by Hipparcos about 100 times over a few years, whereas MONS will observe each field almost continuously for about one month. We intend to use the Hipparcos catalogue as the input catalogue for MONS. There will be about 200 stars in a typical BST field ($V=0.5$ to $8.0$), rising to about 600 in the plane of the Milky Way. For the FST ($V=3.5$ to $12.0$) the corresponding numbers are about 700 and 2000 stars. The number of stars that will be measured and down-linked will depend both on crowding and on down-link capacity.
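As a rough guide to the photometric figures quoted above, photon statistics alone already set the scale. The sketch below estimates the photon-noise-limited precision of a small-aperture tracker; the photometric zero point, the throughput, and the assumed 0.01 mag per-measurement Hipparcos precision are illustrative assumptions, not mission numbers:

```python
import numpy as np

# Assumptions (illustrative): a V=0 star delivers ~1000 photons s^-1 cm^-2 A^-1,
# 24 mm aperture, 400 A blue band (380-420 nm), total optics+CCD throughput 0.3.
PHI0, D_CM, BAND_A, ETA = 1.0e3, 2.4, 400.0, 0.3

def photon_rate(V):
    """Detected photon rate (s^-1) for a star of magnitude V."""
    area = np.pi * (D_CM / 2.0) ** 2
    return PHI0 * area * BAND_A * ETA * 10.0 ** (-0.4 * V)

def sigma_mag(V, t):
    """Photon-noise-limited precision (mag) after integrating for t seconds."""
    return 1.0857 / np.sqrt(photon_rate(V) * t)

for V in (2.0, 5.0, 8.0):
    t_hip = (1.0857 / 0.01) ** 2 / photon_rate(V)   # time to reach 0.01 mag
    print(f"V={V}: sigma(1 s) = {sigma_mag(V, 1.0)*1e3:.2f} mmag, "
          f"time to 0.01 mag = {t_hip:.2f} s")
```

Scintillation-free space photometry makes this photon-noise estimate a reasonable first approximation; read noise and flat-fielding would add to it in practice.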
As a general rule one can assume that MONS BST will measure all Hipparcos stars down to $V=7$ in the non-Milky Way fields, except stars that are too crowded. For the FST, the equivalent value is $V=8.5$. Stars below these magnitude limits will only be observed if they are of particular scientific interest.

## 3 Possible changes to the plan

Details subject to change include the directions in which the Star Trackers point and the filters used. For example, one might consider having the two Star Trackers pointing in the same direction, one with a red filter and the other with a blue filter. This would give colour information, which is of great scientific value for many types of stars. The full range of observable magnitudes could be retained by using bracketed exposure times, with the CCD being read out in a sequence (e.g., 50 ms, 200 ms and 1000 ms) to sample both bright and faint stars. The total number of stars observed in this revised configuration would be similar to that described in Section 2, but in two colours and with the stars concentrated in half the number of fields.

### Acknowledgements

We thank the Australian Research Council for financial support, as well as the Danish National Research Foundation through its establishment of the Theoretical Astrophysics Center.
# Absorption-free discrimination between Semi-Transparent Objects

## Abstract

Absorption-free (also known as “interaction-free”) measurement aims to detect the presence of an opaque object using a test particle without that particle being absorbed by the object. Here we consider semi-transparent objects which have an amplitude $\alpha$ of transmitting a particle while leaving the state of the object unchanged and an amplitude $\beta$ of absorbing the particle. The task is to devise a protocol that can decide which of two known transmission amplitudes is present while ensuring that no particle interacts with the object. We show that the probabilities of being able to achieve this are limited by an inequality. This inequality implies that absorption-free distinction between complete transparency and any partial transparency is always possible with probabilities approaching 1, but that two partial transparencies can only be distinguished with probabilities less than 1. PACS numbers: 03.65.Bz, 03.67.-a

In “interaction-free” measurement the task is to decide, using a test particle, whether an opaque object is present or absent while ensuring that the test particle is not absorbed by the object. Many methods for achieving this have been devised. The essential idea behind all of them is that the measurement picks a set of histories in none of which an interaction between object and test particle takes place, so no absorption occurs; other histories in the protocol will involve interactions, and in this sense the term “absorption-free” may be preferred to the more commonly-used term “interaction-free”. We abbreviate absorption-free measurement to AFM henceforth. In standard AFM, the object is considered to be either completely opaque or completely transparent (absent). One can also consider semi-transparent objects, for which there is an amplitude $\alpha$ of the particle passing through the object while leaving the state of the object unchanged and an amplitude $\beta$ for the particle to interact with the object and hence be absorbed, leaving the object in an “interacted” state (in Elitzur and Vaidman's proposal this is the exploded state of the bomb). One can then ask whether one can infer the transmission amplitude of the object while ensuring that the object never reaches the “interacted” state. This is the problem we consider here, in the case where there are two known transmission amplitudes that have to be distinguished. This problem is of obvious practical interest. Indeed, there are situations where one wants to determine the nature of an object but where radiation will damage the object, for instance when imaging a biological specimen in the ultraviolet. In these cases one wants to minimize the amount of radiation absorbed by the object. Standard AFM shows that if the object has only two possible states, completely transparent or completely opaque, then it is possible to determine the state without any photon being absorbed. However, most objects will be semi-transparent. Here we address this more general case. In earlier work, a general framework for counterfactual quantum events was proposed, which includes AFM. Two variables $|p\rangle$ and $|q\rangle$ are distinguished in the total state space. The first variable $|p\rangle$ defines the state of the particle and its position within the apparatus used for AFM, and we assume there is a particular subset of values, $\mathcal{R}$, for which interactions between the particle and apparatus can occur, leading to absorption.
The second variable $|q\rangle$, which we call the interaction variable, takes the value $|1\rangle$ if absorption occurs and $|0\rangle$ if not. It may have additional values, but they play no role in the following discussion. Any protocol for AFM can be divided into a series of steps. In some of these steps an interaction can potentially occur; we call these I-steps. An I-step has two parts. The first is a unitary transformation given by

$$|p0\rangle \to \alpha|p0\rangle + \beta|p1\rangle \quad (p\in\mathcal{R}), \qquad |p0\rangle \to |p0\rangle \quad (p\notin\mathcal{R})$$ (1)

where $\alpha$, $\beta$ are complex numbers satisfying $|\alpha|^2+|\beta|^2=1$. The second part is a measurement of the interaction variable in the basis $|0\rangle$, $|1\rangle$. The unitary transformation is not fully defined by (1), but we do not need to specify its action on terms like $|p1\rangle$, since we are concerned with histories on which no interaction occurs, and a protocol can be assumed to halt when measurement of the interaction variable yields $|1\rangle$. (In the case of photons, we can also let many photons pass through the object together. The I-step then takes the form $|n0\rangle \to \alpha^n|n0\rangle + |\text{interacted states}\rangle$ (if the photons belong to $\mathcal{R}$), and after the measurement of the interaction variable, the state becomes $\alpha^n|n0\rangle$ (if no interaction occurs). This is identical, up to a unitary transformation, to the state obtained if $n$ particles pass successively through the object and none interact with the object. This remark shows that restricting to particles passing one by one, as in (1), does not make the analysis less general.) A protocol for AFM starts from a specified initial state. It is allowed to undergo a unitary transformation between successive I-steps, this transformation leaving the interaction variable unchanged. At the end of the protocol the variable $|p\rangle$ is measured. A protocol with measurements of $|p\rangle$ before the end can be converted to the form we specify by entangling the measured variables with extra variables and postponing their measurement till the end. In all the protocols, there are two measurement outcomes, $M_1$ and $M_2$ say, the first of which indicates that the object was absent, while the second indicates that the object was present and also that no absorption occurred. There will also be other outcomes, for instance that an absorption occurred. We denote the probability of $M_i$ by $P(ident|i)$, i.e. the probability of identifying, without the particle being absorbed, whether the object is present ($i=2$) or absent ($i=1$). The probabilities $P(ident|i)$ give an indication of the efficiency of the protocol. In Elitzur and Vaidman's original proposal, one has $P(ident|2)=1/4$ and $P(ident|1)=0$ (one can never be certain that the object is absent, hence $P(ident|1)=0$; the probability of learning that the bomb is present without it exploding is $1/4$). In many recent protocols, $P(ident|2)=1$ and $P(ident|1)$ tends to 1. Figure 1 shows two types of AFM protocol. The quantum Zeno type is an elaboration of Elitzur and Vaidman's original proposal. We have adapted it so that it can distinguish between no object ($\alpha_1=1$) and an object of transmission amplitude $0\le|\alpha_2|<1$. We take the first qubit of $|pq\rangle$ to correspond to polarization, and the initial state is a vertically polarized photon, denoted $|v0\rangle$.
The AFM consists of repeated passages through a polarization rotator, a Mach–Zehnder interferometer, and a second polarization rotator. After the first rotation the state becomes $\cos\theta\,|v0\rangle+\sin\theta\,|h0\rangle$. After passing through the polarization beam splitter, the horizontally polarised component $|h0\rangle$ goes along the lower path, which may contain the object, whereas $|v0\rangle$ takes the object-free upward-going path. In this case, therefore, $\mathcal{R}$ is the single value $p=h$. Applying (1), the I-step gives the un-normalised state $|\psi_i\rangle=\cos\theta\,|v0\rangle+\alpha_i\sin\theta\,|h0\rangle$. The second polarization beam splitter then recombines the two polarizations into one beam. If $\alpha_2$ is real and positive, then the state in case $i=2$ can be rewritten as $|\psi_2\rangle=\gamma\left(\cos\theta'\,|v0\rangle+\sin\theta'\,|h0\rangle\right)$, where $\cos\theta'=\cos\theta/\sqrt{\cos^2\theta+\alpha_2^2\sin^2\theta}$ (note that $\theta'\le\theta$). The final step is a rotation by the angle $-\theta'$. This brings the state to $\cos(\theta-\theta')|v0\rangle+\sin(\theta-\theta')|h0\rangle$ (no object present) or $\gamma|v0\rangle$ (object present). We then iterate this procedure $N$ times, choosing $N$ such that $N(\theta-\theta')=\pi/2$. This brings the state to $|h0\rangle$ (no object present) or $\gamma^N|v0\rangle$ (object present). Since these states are orthogonal, $P(ident|1)=1$ and $P(ident|2)=\gamma^{2N}$. For large $N$ (small $\theta$), $\gamma\approx 1-\frac{(1+\alpha_2)\pi^2}{(1-\alpha_2)4N^2}$ and $P(ident|2)\to 1$. If $\alpha_2=|\alpha_2|e^{i\varphi_2}$ has a non-zero phase (the phase is defined by the convention that $\alpha_1=1$), then after recombining the two beams the state is $|\psi_2\rangle=\gamma\left(\cos\theta'\,|v0\rangle+\sin\theta'\,e^{i\varphi_2}|h0\rangle\right)$. The final rotation is chosen so as to take this state to the state $\gamma|v0\rangle$ and to take the state if no object is present to $\cos\omega\,|v0\rangle+\sin\omega\,|h0\rangle$, where $\theta-\theta'\le\omega\le\theta+\theta'$. Iterating this procedure $N$ times, with $N\omega=\pi/2$, and then carrying out a measurement of polarisation, realizes an AFM. An alternative type of AFM protocol uses the concept of a monolithic total-internal-reflection resonator, or a Fabry–Perot (F-P) interferometer. In the case of the F-P there is a photon of momentum $k$ incoming from the left, and one measures whether the photon is reflected or transmitted. To see how the F-P fits into our framework for AFM, we make the dynamics discrete to correspond to the steps in a protocol. Define a lattice of spacing $d$ (the spacing of the mirrors, see Figure 1(b)) and let each step in the protocol correspond to a time $t=d/c$. The state $|R_n\rangle$ corresponds to a segment of right-moving plane wave over the spatial range $[nd,(n+1)d]$ for time interval $t$, and $|L_n\rangle$ to the corresponding segment of left-moving plane wave. Over time $t$, $|R_n\rangle$ evolves into $|R_{n+1}\rangle$ and $|L_n\rangle$ into $|L_{n-1}\rangle$, except in the vicinity of the mirrors, where we have $|R_0\rangle\to c|L_0\rangle+is|R_1\rangle$, $|R_{-1}\rangle\to c|L_{-1}\rangle+is|R_0\rangle$, $|L_1\rangle\to c|R_1\rangle+is|L_0\rangle$, and $|L_0\rangle\to c|R_0\rangle+is|L_{-1}\rangle$, $c$ and $is$ being reflection and transmission coefficients, respectively. We have treated the mirrors as dispersionless, which is a mathematical convenience to restrict ourselves to the Fourier component of the incoming plane wave.
We have also taken $d$ such that $e^{idk}=1$ (where $k$ is the wave number of the plane wave), so that no phase is accumulated between the mirrors. If an object of transparency $\alpha$ is inserted between the two mirrors, the discretised dynamics, conditional on no photon being absorbed, becomes $|R_0\rangle\to\alpha\left(c|L_0\rangle+is|R_1\rangle\right)$, $|R_{-1}\rangle\to c|L_{-1}\rangle+is|R_0\rangle$, $|L_1\rangle\to c|R_1\rangle+is|L_0\rangle$, and $|L_0\rangle\to\alpha\left(c|R_0\rangle+is|L_{-1}\rangle\right)$. Thus $\mathcal{R}=\{L_0,R_0\}$, since interaction can only occur within the apparatus. The initial state is $e^{ikx}=\sum_{n=-\infty}^{-1}|R_n\rangle$. After many time steps, one settles into a steady-state regime, and the state outgoing to the left, $f_L$, is the sum of the pulses reflected once by the left mirror and those reflected $2m-1$ times inside the instrument and traversing the object $2m$ times, for $m=1,2,\ldots$:

$$f_L=\sum_{n=-\infty}^{-1}|L_n\rangle\left(c-s^2\sum_{m=1}^{\infty}c^{2m-1}\alpha^{2m}\right)=\sum_{n=-\infty}^{-1}|L_n\rangle\,\frac{c(1-\alpha^2)}{1-c^2\alpha^2}.$$

As $c\to 1$, the probability $|f_L|^2=|c(1-\alpha^2)/(1-c^2\alpha^2)|^2$ of reflection to the left tends to 1, except when $\alpha=1$ (object absent), in which case the probability of transmission to the right is 1. Thus the F-P allows an absorption-free discrimination between the absence of an object ($\alpha_1=1$) and the presence of an object of transparency $\alpha_2\neq 1$. Now consider any protocol that falls within our general scheme, and suppose that one must distinguish between two semi-transparent objects with transmission amplitudes $\alpha_1$, $\alpha_2$ (which can both be different from $1$) and interaction amplitudes $\beta_1$, $\beta_2$, respectively. We shall prove the following constraint on the probability $P(ident|i)$ of identifying transparency $\alpha_i$ without any absorption occurring: Theorem: $(1-P(ident|1))(1-P(ident|2))\ge\eta^2$, where $\eta=|\beta_1\beta_2/(1-\overline{\alpha}_1\alpha_2)|$. Before giving the proof, we look at some of the consequences of this inequality. First, note that $|1-\overline{\alpha}_1\alpha_2|^2-|\beta_1\beta_2|^2=|\alpha_1-\alpha_2|^2$. Thus $\eta\le 1$, and $\eta=1$ iff $\alpha_1=\alpha_2$. This implies that $P(ident|1)=P(ident|2)=0$ when $\alpha_1=\alpha_2$, which must of course be the case, since two equal transmission amplitudes cannot be distinguished. Whenever $\alpha_1\neq\alpha_2$, however, the theorem allows non-zero values of $P(ident|1)$ and $P(ident|2)$. Another special case is when one object is completely transparent (absent), i.e. $|\alpha_1|=1$ and $\beta_1=0$. If $\alpha_2\neq\alpha_1$, then $\eta=0$, and the theorem permits $P(ident|1)=P(ident|2)=1$. That this can be achieved was shown above. The most significant aspect of this result is that when both $|\alpha_1|$ and $|\alpha_2|$ are different from $1$, that is neither object is completely transparent, then $\eta$ is strictly positive. This implies that both $P(ident|1)$ and $P(ident|2)$ must be strictly less than $1$. Thus it is impossible to identify two semi-transparent objects with vanishing probability that the test particle is absorbed by the objects. This is bad news for the applications outlined above.
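Before turning to the proof, the quantum Zeno protocol described above is straightforward to simulate. A minimal sketch (with illustrative parameter values, and taking $\alpha_2$ real and positive) iterates the rotation/I-step cycle and confirms that $P(ident|1)=1$ while $P(ident|2)=\gamma^{2N}$ approaches 1 as $N$ grows:

```python
import numpy as np
from scipy.optimize import brentq

def design_angles(alpha2, N):
    """Pick theta with N*(theta - theta') = pi/2, tan(theta') = alpha2*tan(theta)."""
    g = lambda t: t - np.arctan(alpha2 * np.tan(t)) - np.pi / (2 * N)
    theta = brentq(g, 1e-9, np.pi / 4)
    return theta, np.arctan(alpha2 * np.tan(theta))

def rot(state, phi):
    c, s = np.cos(phi), np.sin(phi)
    return np.array([c * state[0] - s * state[1], s * state[0] + c * state[1]])

def zeno_run(alpha, theta, thetap, N):
    """N cycles of (rotate +theta, I-step, rotate -theta'); un-normalised state."""
    state = np.array([1.0, 0.0])             # start in |v0>, components (v, h)
    for _ in range(N):
        state = rot(state, theta)             # first polarization rotator
        state[1] *= alpha                     # I-step: no-absorption branch only
        state = rot(state, -thetap)           # second rotator
    return state

alpha2, N = 0.5, 200
theta, thetap = design_angles(alpha2, N)
absent  = zeno_run(1.0, theta, thetap, N)     # ends along |h>: identifies "absent"
present = zeno_run(alpha2, theta, thetap, N)  # stays along |v>: identifies "present"
print("P(ident|1) =", absent[1] ** 2)         # -> 1
print("P(ident|2) =", present[0] ** 2)        # -> gamma^(2N), here ~0.96
```

This is consistent with the theorem, since $\alpha_1=1$ gives $\beta_1=0$ and hence $\eta=0$, so both probabilities may approach 1.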
Proof: The total state space can be decomposed into two orthogonal subspaces, the first spanned by components whose first variable $p$ satisfies $p\notin\mathcal{R}$, and the second by components whose first variable $p$ satisfies $p\in\mathcal{R}$. Recall that a general protocol can be written as a series of I-steps followed by unitary transformations. We can write the un-normalized state for transparency $i$ at stage $k$ of the protocol immediately before the I-step as $|u_i^k\rangle+|v_i^k\rangle$, where $|u_i^k\rangle$ lies in the first subspace and $|v_i^k\rangle$ in the second. Immediately after the I-step, (1) implies that the un-normalized state is $|\psi_i^k\rangle=|u_i^k\rangle+\alpha_i|v_i^k\rangle$. We assume that the states $|\psi_i^k\rangle$ are all un-normalized, so that $\||\psi_i^k\rangle\|^2$ is the probability of no absorption occurring up to stage $k$ of the protocol. After the I-step there is a unitary transformation that carries $|\psi_i^k\rangle$ to $|u_i^{k+1}\rangle+|v_i^{k+1}\rangle$. We define

$$f^k=\langle\psi_1^k|\psi_2^k\rangle,$$ (2)

whereupon unitarity implies $f^k=\langle u_1^{k+1}+v_1^{k+1}|u_2^{k+1}+v_2^{k+1}\rangle=\langle u_1^{k+1}|u_2^{k+1}\rangle+\langle v_1^{k+1}|v_2^{k+1}\rangle$ (since the components $u$ and $v$ lie in orthogonal subspaces), and (2) for $k+1$ implies $f^{k+1}=\langle u_1^{k+1}|u_2^{k+1}\rangle+\overline{\alpha}_1\alpha_2\langle v_1^{k+1}|v_2^{k+1}\rangle$. We therefore get $f^{k+1}=f^k-(1-\overline{\alpha}_1\alpha_2)\langle v_1^{k+1}|v_2^{k+1}\rangle$, and hence $f^N=1-(1-\overline{\alpha}_1\alpha_2)\sum_{k=0}^{N-1}\langle v_1^{k+1}|v_2^{k+1}\rangle$, where the $N$-th step is the last step of the protocol before the final measurement. This implies

$$\frac{|1-f^N|^2}{|1-\overline{\alpha}_1\alpha_2|^2}=\Big|\sum_{k=0}^{N-1}\langle v_1^{k+1}|v_2^{k+1}\rangle\Big|^2.$$

We now use the Cauchy–Schwarz inequality to obtain

$$\frac{|1-f^N|^2}{|1-\overline{\alpha}_1\alpha_2|^2}\le\sum_{k=0}^{N-1}\|v_1^{k+1}\|^2\,\sum_{k=0}^{N-1}\|v_2^{k+1}\|^2.$$ (3)

The probability that an interaction occurs during the $k$-th I-step is $|\beta_i|^2\|v_i^k\|^2$, and therefore we can rewrite (3) as

$$\frac{|1-f^N|^2\,|\beta_1|^2|\beta_2|^2}{|1-\overline{\alpha}_1\alpha_2|^2}\le P(interact|1)\,P(interact|2)$$ (4)

where $P(interact|i)=|\beta_i|^2\sum_{k=0}^{N-1}\|v_i^{k+1}\|^2$ is the total probability of interaction for transparency $i$. We now turn to the final measurement. There are three possible outcomes. The first is that the test particle is absorbed by the object. The second is that the particle is not absorbed and that the object is identified. The third is that no absorption occurs but the object is not identified. This occurs with probability $P(NOTident|i)$. Our aim is to construct a measurement setup such that $P(ident|i)$ is as large as possible. To this end we note that an optimal setup necessarily has $P(NOTident|i)=0$. Indeed, suppose that $P(NOTident|i)\neq 0$. Then we run the protocol once, and if we obtain the outcome $NOTident$ we run the protocol a second time (constructing a protocol with its measurement of $|p\rangle$ at the end by entangling the outcomes of the first protocol with extra qubits). This increases the probability of identifying the object from $P(ident|i)$ to $P(ident|i)(1+P(NOTident|i))$. This procedure can be iterated many times to ensure that the probability of not identifying the object is as small as we wish. Upon taking the limit $P(NOTident|i)\to 0$ one finds that $P(interact|i)\to 1-P(ident|i)$ and that $f^N\to 0$.
The latter limit is because, if the two states $\psi_i^N$ can be identified with certainty, their scalar product must be zero. Thus in the limit $P(NOTident|i)\to 0$, (4) tends to the inequality of the theorem. If $P(NOTident|i)\neq 0$, then $P(ident|i)$ is necessarily smaller than in the limiting case, and the inequality is also obeyed. $\square$ This result establishes some limits on AFM of semi-transparent objects. It also raises various questions. First, can the bound be attained? We showed above that this is the case if one of the objects is transparent. The following numerical procedure suggests that the bound can be approached very closely for any real $\alpha_i$. Consider a quantum Zeno protocol based on a polarisation degree of freedom as described above. We denote by $|\psi_i^k\rangle=a_i^k|v0\rangle+b_i^k|h0\rangle$ the state for transparency $i$ at stage $k$. Suppose $|\psi_i^{k+1}\rangle$ is obtained from $|\psi_i^k\rangle$ by a rotation of angle $\theta^k$ followed by an I-step. Then we have $a_i^{k+1}=\cos\theta^k\,a_i^k-\sin\theta^k\,b_i^k$ and $b_i^{k+1}=\alpha_i(\sin\theta^k\,a_i^k+\cos\theta^k\,b_i^k)$, for $i=1,2$. Pick $\lambda$ and require $b_2^k/b_1^k=\lambda$ for all $k$, this being the condition for equality in the Cauchy–Schwarz step leading to equation (3). This would imply

$$\tan\theta^k=\frac{\alpha_1 b_1^k\lambda-\alpha_2 b_2^k}{\alpha_2 a_2^k-\alpha_1 a_1^k\lambda},$$

which can be used to generate a series of angles $\theta^k$. However, there is a problem with starting the procedure, since the initial state must be the same for $i=1,2$, and so the ratio $b_2^0/b_1^0$ must be 1. Yet we wish to choose $\lambda$ freely. By taking $b_i^0=\epsilon$, $a_i^0=\sqrt{1-\epsilon^2}$, $i=1,2$, for small $\epsilon$, we ensure that the initial terms $b_i^0$ are small, and thereafter for larger terms $b_i^k$ the ratio $b_2^k/b_1^k$ is $\lambda$. This means that the condition for equality in Cauchy–Schwarz comes very close to being satisfied. Simulations show that a simple search always comes up with a value of $\lambda$ that makes the $|\psi_i^N\rangle$'s very close to orthogonal after some number of steps $N$. One can therefore make a final measurement at the $N$-th step using a POVM, in which the components yielding the AFM outcomes $M_1$ and $M_2$ are very close to the $|\psi_i^N\rangle$'s. By taking $\epsilon$ small enough, one can make the approach to equality of $(1-P(ident|1))(1-P(ident|2))$ and $\eta^2$ as near as one likes for any $\alpha$'s (e.g. see Figure 2). It would be interesting to prove analytically that this must be so, and also to extend it to complex amplitudes. A second question concerns interaction-free discrimination of more than two transparencies. What bounds apply in this case? We can also broaden the question and consider situations where the object is not destroyed when one particle interacts with it (e.g. the Elitzur–Vaidman bomb), but where one wants to minimize the amount of interaction (e.g. to reduce potential radiation damage). What bounds apply to minimal absorption measurements? We thank Richard Jozsa, Noah Linden, Sandu Popescu and Stefano Pironio for helpful discussion. Particular thanks to Sandu Popescu for raising the question of whether two grey levels can be distinguished by an AFM. S.M. is a research associate of the Belgian National Research Fund. He thanks the European Science Foundation for financial support.
## 1 Introduction

Lattice spin systems and various ferromagnetic materials with competing interactions have been attracting attention in the last decades. Spin glasses, alloys and amorphous systems, which have randomly distributed competing interactions, are also of a similar nature. In recent articles the authors formulated a new class of statistical systems in which the energy functional is proportional to the total length of the edges of the interface. These lattice spin systems have specially adjusted interactions between spins in order to simulate a given energy functional. The specific property of these systems is that they have a very high - exponential - degeneracy of the ground state. This happens simply because the surface-tension forces are tuned to vanish. This peculiar property of the system could make it useful for practical applications. In this article I suggest and examine the application of this system in memory devices and possibly to store bits in future quantum computers. In three dimensions the corresponding Hamiltonian is equal to

$$H_{gonihedric}^{3d}=-2k\sum_{\vec r,\vec\alpha}\sigma_{\vec r}\sigma_{\vec r+\vec\alpha}+\frac{k}{2}\sum_{\vec r,\vec\alpha,\vec\beta}\sigma_{\vec r}\sigma_{\vec r+\vec\alpha+\vec\beta}-\frac{1-k}{2}\sum_{\vec r,\vec\alpha,\vec\beta}\sigma_{\vec r}\sigma_{\vec r+\vec\alpha}\sigma_{\vec r+\vec\alpha+\vec\beta}\sigma_{\vec r+\vec\beta},$$ (1)

where $\vec r$ is a three-dimensional vector on the lattice $Z^3$, the components of which are integer, and $\vec\alpha$, $\vec\beta$ are unit vectors parallel to the axes. This lattice system crucially depends on the coupling constant $k$, called the self-intersection coupling constant. The form of the Hamiltonian $H^k$ and the symmetry of the system depend essentially on $k$: when $k\neq 0$ one can flip spins on arbitrary parallel layers, and thus the degeneracy of the ground state is equal to $3\times 2^N$, where $N^3$ is the size of the lattice (see Figure 1). When $k=0$ the system has even higher symmetry: all states, including the ground state, are exponentially degenerate (this is a very important property of the system, which also allows one to construct the dual system). This degeneracy is equal to $2^{3N}$. This is because now one can flip spins on arbitrary layers, even on intersecting ones (see Figure 2). The corresponding Hamiltonian contains only an exotic four-spin interaction term. This simply means that a “crystal” of size $N^3$ has $2^{3N}$ different ground states. This exponential degeneracy is much bigger than the degeneracy of the vacuum state of the Ising ferromagnet, which is simply equal to two; in this respect the system is closely related in nature to spin glasses and may describe the liquid-glass phase transition. In the usual Ising ferromagnet we have two different vacuum states, so in order to store more than one bit of information one should allow excited states, as shown in Figure 1A. With decreasing geometries of recording and reading heads and increasing magnetic media storage densities, these excitations become metastable thanks to fluctuations on the nm scale and thus cannot be protected from damage.
In about 10 years, storage of one bit of information is expected to cover an area of 100×100 nm², and the metastability of excitations on the nm scale becomes an increasingly limiting factor of performance. Opposite to that situation, a “crystal” of size $N^3$ which has a special interaction between spins can “memorize” $2^{3N}$ different states, which are well separated by potential barriers (see Figures 1 and 2). Therefore we suggest that natural or artificial materials with the corresponding structure of interactions can be used as magnetic recording systems. This article considers the question of the possible construction of an artificial material with the above property, and its possible realization and application in memory devices and possibly to store bits in future quantum computers.

## 2 System with $3\times 2^N$ ground state degeneracy

The benefit of having a system with an exponentially degenerate vacuum state in practical applications is that it can be used as a high-density magnetic recording system. Each vacuum state is realized as a particular spin configuration separated from the others by potential barriers of height $U$, which is proportional to the width of the magnetic strip $h$ (see formula (3) and Figure 1). The information can be stored as a particular vacuum state of the system. The process of recording can be visualized as a process in which the system moves from one vacuum state to another. Storing information in the form of different vacuum states separated by potential barriers will allow one to keep it safely away from fluctuations and for a longer time. We shall demonstrate that this material can be realized as a lattice of nuclear spins with specially adjusted interactions. First let us consider the system which has an exponentially degenerate vacuum state, but only at zero temperature. This “crystal”, which has a $3\times 2^N$ ground state degeneracy, was constructed in earlier work and has been studied in a number of articles analytically and numerically (see Figure 1). It corresponds to the case $k=1$ in equation (1). The Hamiltonian of the system has the form

$$H_{gonihedric}^{3d}=-J\sum_{\vec r,\vec\alpha}\sigma_{\vec r}\sigma_{\vec r+\vec\alpha}+\frac{J}{4}\sum_{\vec r,\vec\alpha,\vec\beta}\sigma_{\vec r}\sigma_{\vec r+\vec\alpha+\vec\beta}.$$ (2)

The energy of a configuration is equal to the curvature of the boundary plus the energy at the intersections:

$$E=h\Big[\sum_i(\mathrm{Right\ Angles})_i+4\kappa\sum_j(\mathrm{Intersections})_j\Big].$$ (3)

In this case the Hamiltonian includes only competing ferromagnetic and antiferromagnetic interactions. The ferromagnetic coupling constant is $J_{ferromagnetic}=J$, and the antiferromagnetic coupling constant should be four times smaller, $J_{antiferromagnetic}=J/4$; thus the ratio is equal to four:

$$J_{ferromagnetic}/J_{antiferromagnetic}=4.$$ (4)

The critical temperature $\beta_c=J/k_BT_c\approx 0.44$ was predicted analytically and confirmed by Monte-Carlo simulations and the low-temperature expansion; thus $T_c\approx J/(0.44\,k_B)$. In order to have the phase transition point at high temperatures, the coupling constant $J$ should be large enough. It is an interesting question whether there exists a material with the above interactions; a numerical check of the layer-flip degeneracy implied by (2) is sketched below.
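The degeneracy is easy to verify numerically: starting from the ferromagnetic ground state of Hamiltonian (2), flipping all spins in any lattice plane costs zero energy, and this only works for the 4:1 coupling ratio. A minimal sketch on a small periodic lattice (the sum conventions are chosen so that each nearest-neighbour and each face-diagonal pair is counted once; this is one concrete normalization of the sums in (2), assumed for illustration):

```python
import numpy as np
from itertools import product

L, J = 4, 1.0

def energy(s):
    """E = -J * sum over nn pairs + (J/4) * sum over face-diagonal pairs (periodic)."""
    E = 0.0
    for ax in range(3):                        # nearest neighbours along each axis
        E += -J * np.sum(s * np.roll(s, -1, axis=ax))
    for a, b in ((0, 1), (0, 2), (1, 2)):      # face diagonals in each coordinate plane
        for sign in (+1, -1):                  # both diagonal orientations
            E += (J / 4) * np.sum(s * np.roll(np.roll(s, -1, axis=a), -sign, axis=b))
    return E

s = np.ones((L, L, L))                         # ferromagnetic ground state
E0 = energy(s)
for ax, layer in product(range(3), range(L)):
    t = s.copy()
    idx = [slice(None)] * 3
    idx[ax] = layer
    t[tuple(idx)] *= -1                        # flip one whole lattice plane
    assert abs(energy(t) - E0) < 1e-9          # degenerate with the ferromagnet
print("all single-plane flips are degenerate; E0 =", E0)
```

Flipping any subset of parallel planes also costs nothing (the +2J per broken vertical bond is exactly cancelled by the face-diagonal terms), which is the origin of the $3\times 2^N$ count; intersecting planes are degenerate only in the $k=0$ system of the next section.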
The crystalline $EuS$ and $Eu_xSr_{1-x}S$, which are ferromagnetic insulators, have exchange energy coupling constants equal to

$$J_{ferromagnetic}=(0.221\pm 0.003)\,\mathrm{K},$$

$$J_{antiferromagnetic}=(0.1\pm 0.004)\,\mathrm{K};$$

therefore the ratio is equal to two, and one should look for other materials with appropriate coupling constants. Even if a material with these interactions is found or constructed, it will not be so useful for direct applications, because the exponential degeneracy of the vacuum state is lifted at nonzero temperatures. This is because a nonzero surface tension - the area term in the energy functional - is generated by quantum-thermal fluctuations. This effect suppresses the interface walls, and thus the degeneracy is lifted. On the other hand, a crystal of this type can be helpful for experimental verification of the string tension generation phenomenon suggested in string theory - a sort of experimental laboratory for the QCD string.

## 3 System with $2^{3N}$ ground state degeneracy

A system which has an even higher degeneracy of the ground state than the one described in the previous section has also been constructed. The advantage of this system is that the degeneracy of the vacuum state remains intact even at nonzero temperatures. In terms of Ising spin variables $\sigma_{\vec r}$, the Hamiltonian of the system with $2^{3N}$ ground state degeneracy can be written in the form

$$H_{Gonihedric}^{3D}=-J_4\sum_{\vec r,\vec\alpha,\vec\beta}\sigma_{\vec r}\sigma_{\vec r+\vec\alpha}\sigma_{\vec r+\vec\alpha+\vec\beta}\sigma_{\vec r+\vec\beta},$$ (5)

where $\vec r$ is a three-dimensional vector on the lattice $Z^3$, the components of which are integer, and $\vec\alpha$, $\vec\beta$ are unit vectors parallel to the axes. This Hamiltonian corresponds to the case $k=0$ in equation (1). We should stress that the Ising spins in (5) are on the vertices of the lattice $Z^3$, not on the links, and the coupling constant $J_4$ should be positive. The Hamiltonian contains only the exotic four-spin interaction term $\sigma\sigma\sigma\sigma$. It is hard to see how this four-spin interaction term can be simulated, even by artificial materials. In this section we propose to introduce an additional spin located at the center of each plaquette and then adjust its interaction so that the effective interaction between the four spins located at the vertices of the plaquette will be of the form $\sigma\sigma\sigma\sigma$. One can consider this transformation as a modification of the decoration transformation; it is analogous to the star-triangle transformation. Thus, to generate the four-spin interaction term we shall introduce a central spin $\sigma$ which interacts with its four neighbors (see Figure 3). We should prove that integrating out the interaction with the central spin will generate the necessary four-spin interaction term. Therefore we have to prove the existence of the following relation:

$$\frac{1}{2}\sum_{\sigma=\pm 1}e^{J\sigma(\sigma_1+\sigma_2+\sigma_3+\sigma_4)}=A\,e^{J_1(\sigma_1\sigma_2+\sigma_2\sigma_3+\sigma_3\sigma_4+\sigma_4\sigma_1)}\,e^{J_2(\sigma_1\sigma_3+\sigma_2\sigma_4)}\,e^{J_4\,\sigma_1\sigma_2\sigma_3\sigma_4}$$ (6)

with nonzero coupling constant $J_4$. Here $\sigma$ is the central spin and $\sigma_i$, $i=1,2,3,4$, are the spins on the vertices.
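The existence of a solution to relation (6) can be checked numerically before doing any algebra: taking logarithms of both sides for all 16 vertex configurations gives a linear system for $\ln A$, $J_1$, $J_2$, $J_4$, which turns out to be exactly consistent. A small sketch (the value of $J$ is illustrative; the closed forms it is compared against are derived in the following paragraphs):

```python
import numpy as np
from itertools import product

J = 1.0
rows, rhs = [], []
for s1, s2, s3, s4 in product((1, -1), repeat=4):
    p1 = s1*s2 + s2*s3 + s3*s4 + s4*s1              # direct pairs
    p2 = s1*s3 + s2*s4                               # diagonal pairs
    p4 = s1*s2*s3*s4                                 # four-spin product
    lhs = np.cosh(J * (s1 + s2 + s3 + s4))           # (1/2) sum_sigma exp(J sigma S)
    rows.append([1.0, p1, p2, p4])                   # unknowns: ln A, J1, J2, J4
    rhs.append(np.log(lhs))
sol, res, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
lnA, J1, J2, J4 = sol
print(f"A = {np.exp(lnA):.6f}  J1 = {J1:.6f}  J2 = {J2:.6f}  J4 = {J4:.6f}")
print("J2 closed form:", np.log(np.cosh(4*J)) / 8)
print("J4 closed form:", np.log(np.cosh(4*J) / np.cosh(2*J)**4) / 8)
print("residual (should be ~0):", res)               # relation (6) holds exactly
```

The fit returns $J_1=J_2$ and a negative $J_4$, anticipating the sign problem addressed below.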
If this relation holds, it means that the interaction of the central spin with its four neighbors can be effectively replaced by direct $J_1$, diagonal $J_2$ and four-spin $J_4$ interactions (see Figure 3). To express the coupling constants $J_1, J_2, J_4$ and $A$ through the coupling constant $J$, we have to solve the system of $2^4$ equations which appear when we substitute the values of the spins $\sigma_i$, $i=1,2,3,4$, into equation (6). Only four equations are independent, because of the global $Z_2$ invariance:

$$ch(4J)=A\exp(4J_1+2J_2+J_4),$$
$$1=A\exp(-2J_2+J_4),$$
$$ch(2J)=A\exp(-J_4),$$
$$1=A\exp(-4J_1+2J_2+J_4).$$ (7)

From the second and the fourth equations it follows that $J_1=J_2$, and our equations reduce to:

$$ch(4J)=A\exp(6J_2+J_4),$$
$$1=A\exp(-2J_2+J_4),$$
$$ch(2J)=A\exp(-J_4).$$ (8)

From these equations it follows that $ch(4J)=A^8/ch^4(2J)$, and thus

$$A=\big(ch^4(2J)\,ch(4J)\big)^{1/8}.$$ (9)

Using again equations (8) we can find the coupling constants $J_2$ and $J_4$:

$$J_2=\frac{1}{8}\ln\big(ch(4J)\big),\qquad J_4=\frac{1}{8}\ln\big(ch(4J)/ch^4(2J)\big).$$ (10)

The formulas (9) and (10) express the solution through the coupling $J$. It is easy to see that

$$A\ge 1,\qquad J_1=J_2\ge 0,\qquad J_4\le 0.$$ (11)

Let us rewrite our basic relation in the form

$$e^{-J_2(\sigma_1\sigma_2+\sigma_2\sigma_3+\sigma_3\sigma_4+\sigma_4\sigma_1)}\,e^{-J_2(\sigma_1\sigma_3+\sigma_2\sigma_4)}\,\frac{1}{2}\sum_{\sigma=\pm 1}e^{J\sigma(\sigma_1+\sigma_2+\sigma_3+\sigma_4)}=A\,e^{J_4\,\sigma_1\sigma_2\sigma_3\sigma_4}$$ (12)

where, as we have seen, $J_4\le 0$. The physical interpretation of the last formula is as follows: the initial direct and diagonal antiferromagnetic interactions $J_1(J)=J_2(J)$ between the spins $\sigma_i$, $i=1,2,3,4$, together with the interaction $J$ with the central spin $\sigma$, generate the effective four-spin interaction

$$e^{J_4\,\sigma_1\sigma_2\sigma_3\sigma_4},\qquad J_4\le 0.$$ (13)

An unpleasant feature of this result is that the coupling constant $J_4$ is negative, while we need it to be positive. To generate a four-spin interaction with a positive, ferromagnetic coupling constant $J_4$, we have to change the interaction of one of the spins, let us say $\sigma_1\to-\sigma_1$, in the formula (12):

$$e^{J_2(\sigma_1\sigma_2-\sigma_2\sigma_3-\sigma_3\sigma_4+\sigma_4\sigma_1)}\,e^{J_2(\sigma_1\sigma_3-\sigma_2\sigma_4)}\,\frac{1}{2}\sum_{\sigma=\pm 1}e^{J\sigma(-\sigma_1+\sigma_2+\sigma_3+\sigma_4)}=A\,e^{J_4\,\sigma_1\sigma_2\sigma_3\sigma_4}$$ (14)

where now the four-spin interaction comes with the right positive sign, $J_4\ge 0$. The interpretation of the last formula is as follows: one should introduce three types of spin-atoms $A$, $B$, $C$, as shown in Figure 4, with the corresponding interactions between them; after integration over the central spin one sees that the effective interaction is of the type (5). This structure can be periodically extended to the whole three-dimensional lattice. For that one should also use the structure similar to the one shown in Figure 4, in which the $A$ and $B$ spin-atoms have been interchanged.

## 4 Discussion

As we already discussed in the introduction, high-density magnetic recording systems will require storage of information on the nm scale, but fluctuations on the nm scale will produce damage which is difficult to prevent.
We have seen that, at least theoretically, one can construct a lattice crystal with specially tuned interactions which has an exponentially degenerate vacuum state, and it is suggested that it can store information in the form of different vacuum states. The planes of flipped spins representing different vacua can in principle be of atomic scale. Ising-type spin systems can only mimic real magnetic materials, and one should think about a similar construction involving interactions between magnetic domains; on the other hand, one can also think about materials in which the electron or nuclear spin interactions are organized in the proper way. We face a similar phenomenon with computer circuits as well. As the components of computer circuits become very small, in the extreme limit they approach the atomic scale. Their description at the atomic scale must be quantum-mechanical, and in recent years there have been intensive studies of the physical limitations of the computational process, which we review in the Appendix. In conclusion, I would like to acknowledge Professor E. Paschos for discussions and kind hospitality at Dortmund University, where part of this work was completed, and Professor D. Niarchos for pointing out to me the problems in nm-scale magnetic recording systems. This work was supported in part by the EEC Grant no. HPRN-CT-1999-00161 and the ESF Network “Geometry and Disorder: From Membranes to Quantum Gravity”.

## 5 Appendix

Quantum computers were suggested and analyzed by Benioff and Feynman. Typically they consist of $N$ interacting Ising-type two-state spin systems. The initial input state of a quantum computer is a quantum binary string. Computation is accomplished through unitary evolution. In the course of the computation the intermediate states are in general superpositions of binary strings. The theoretical importance of quantum computers comes from the realization that quantum computation can be exponentially faster than the best known classical algorithms. The most important examples are quantum algorithms for integer factorization, the discrete logarithm, and searching in an unsorted database. Although the theory is fairly well understood, the actual building of a quantum computer is extremely difficult. The measurement of the state of the quantum computer leads to obstacles in making computation reversible. The other problem is that an unknown quantum state cannot be perfectly duplicated. Nevertheless, it was demonstrated that quantum error correction is possible in order to protect quantum information against corruption. Quantum teleportation and superdense coding were also developed. The problem of maintaining coherence in the process of quantum computation has also been discussed. There is increasing interest in the practical realization of quantum computers. One of the ideas is to exploit quantum effects to build molecular-level computers, that is, to induce parallel logic in arrays of quantum dots and in molecules. A real physical implementation comes with ion traps: laser cooling and thermal isolation of the gaseous Bose-Einstein condensate. A trapped ion can be used to operate a quantum logic gate that couples the hyperfine splitting of a single trapped $Be^+$ ion to its oscillation modes in the ion trap. Optical cavities have been used in another setup: a quantum phase gate was demonstrated for a photon pair coupled by a single atom in a quantum electrodynamics cavity.
The control and target bits of the quantum phase gate are two photons of different optical frequency, passing together through a low-loss QED cavity a few microns long. But most promising is probably the bulk nuclear magnetic resonance technique: nuclear spins act as quantum bits, and are particularly suited to this role because of their natural isolation from the environment.
# Comment on “Phase Diagram of the Random Energy Model with Higher-Order Ferromagnetic Term and Error Correcting Codes due to Sourlas”

In a recent Letter, Dorlas and Wedagedera (DW) have studied the random energy model (REM) with an additional $p$-spin ferromagnetic interaction, as a guide to the properties of a $p$-spin Ising model with both random spin glass and uniform ferromagnetic exchange, itself relevant to an error-correcting code. They showed that the non-glassy ferromagnetic phase, found for $p=2$ to lie between the paramagnetic and glassy ferromagnetic phases, is squeezed out to larger ferromagnetic exchange as $p$ is increased and is eliminated in the limit $p\to\infty$. Here we note that (i) we have solved the corresponding problem of a spherical spin system with $p$-spin glass interactions and $r$-spin ferromagnetic interactions and have shown that for all $r\ge p>2$ the opposite situation applies, namely glassy ferromagnetism is suppressed and only non-glassy ferromagnetism remains, and (ii) a simple mapping yields the results of DW and generalizations. The Hamiltonians for both the Ising and spherical models consist of a disordered and a ferromagnetic term:

$$\mathcal{H}=-\sum_{i_1<i_2<\cdots<i_p}J_{i_1\ldots i_p}\varphi_{i_1}\cdots\varphi_{i_p}-\frac{J_0(r-1)!}{N^{r-1}}\sum_{i_1<i_2<\cdots<i_r}\varphi_{i_1}\cdots\varphi_{i_r},$$ (1)

where the $J_{i_1\ldots i_p}$ are independent Gaussian random couplings of zero mean and variance $p!J^2/2N^{p-1}$, and $\varphi_i^2=1$ for Ising or $\frac{1}{N}\sum_i\varphi_i^2=1$ for spherical spins. The properties of the system can be found from the free energy $f_{\mathrm{SG}}(M)$ of the system with $J_0=0$ and a constrained magnetization $M$. They are obtained by minimizing the free energy

$$f(M)=f_{\mathrm{SG}}(M)-\frac{1}{r}J_0M^r$$ (2)

with respect to $M$, which means solving

$$f'_{\mathrm{SG}}(M)\equiv\frac{df_{\mathrm{SG}}(M)}{dM}=J_0M^{r-1}.$$ (3)

Generally, $f'_{\mathrm{SG}}(M)$ is first order in small $M$, diverges as $|M|\to 1$, and is monotonically increasing in between. For $r=1$, corresponding to an applied field $h=J_0$, $f'_{\mathrm{SG}}(M)=h$, so the equilibrium magnetization increases monotonically with $h$ and tends to unity as $h\to\infty$. For $r=2$, $f'_{\mathrm{SG}}(M)=J_0M$, so there is always a solution at $M=0$, and a ferromagnetic solution appears continuously when $J_0\ge f''_{\mathrm{SG}}(0)$. For $r>2$, the transition is to a magnetization $M_{\mathrm{min}}>0$, and $M_{\mathrm{min}}$ increases with $r$. The true strength of this method is in predicting the onset of glassiness: this depends on which parts of $f_{\mathrm{SG}}(M)$ correspond to glassy solutions and so varies with model and with temperature. In the upper curve of the Figure we show $f'_{\mathrm{SG}}(M)$ for the REM above the glass transition temperature $T_\mathrm{s}^0$; below $T_\mathrm{s}^0$ the solution is glassy everywhere. At the temperature shown, the ferromagnetic transition is to a non-glassy phase for small enough $r$, while for larger $r$ $M_{\mathrm{min}}$ is already in the glassy region.
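The graphical construction behind Eq. (3) is easy to reproduce with a toy $f'_{\mathrm{SG}}(M)$ that has the generic shape described above (linear at small $M$, divergent as $|M|\to 1$); the functional form below is purely illustrative, not the actual REM or spherical-model function:

```python
import numpy as np

# Toy f'_SG(M): linear at small M, diverging as M -> 1 (illustrative shape only)
fprime = lambda M: 0.5 * M / (1.0 - M**2)

def M_min(J0, r, n=20001):
    """Smallest positive solution of f'_SG(M) = J0 * M**(r-1), if any."""
    M = np.linspace(1e-6, 1 - 1e-6, n)
    diff = J0 * M**(r - 1) - fprime(M)
    cross = np.where(np.sign(diff[:-1]) != np.sign(diff[1:]))[0]
    return M[cross[0]] if len(cross) else None

for r in (2, 3, 6, 12):
    # scan J0 upward until a ferromagnetic solution first appears
    for J0 in np.linspace(0.01, 20, 400):
        M = M_min(J0, r)
        if M is not None:
            print(f"r={r:2d}: ferromagnetic solution appears at J0~{J0:.2f}, M_min={M:.3f}")
            break
```

The onset is continuous ($M_{\mathrm{min}}\to 0$) only for $r=2$; for $r>2$ the magnetization jumps to a finite $M_{\mathrm{min}}$ that grows with $r$, which is the mechanism at work in the phase-squeezing discussed above.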
As $r\to\infty$, $J_0M^{r-1}$ approaches a function which jumps from zero to $J_0$ at $M=1$, so $M_{\mathrm{min}}\to 1$ and the transition is directly to the glassy ferromagnet. In the lower curve we show $f'_{\mathrm{SG}}(M)$ for the spherical $p$-spin model slightly above its $T_\mathrm{s}^0$; at some higher temperature the glassy region disappears; below $T_\mathrm{s}^0$ the glassy region extends down to $M=0$. At the temperature shown, for small enough $r$, $M_{\mathrm{min}}$ lies in the lower non-glassy branch, so increasing $J_0$ leads first to a non-glassy ferromagnet, then to a glassy one, then back to a non-glassy one. For some larger $r$ the first non-glassy ferromagnet disappears, and for still larger $r$ so does the glassy ferromagnet. A full calculation shows that the second critical value is $r=p$. The discussion here has been of the static spinodal transition, but it can easily be extended to the thermodynamic transition by comparing the free energies (2) of competing phases; DW concentrate on this latter case.

Peter Gillin and David Sherrington
Theoretical Physics, 1 Keble Rd, Oxford, OX1 3NP, UK.
# Molecular Dynamics Simulations of Dynamic Force Microscopy: Applications to the Si(111)-7×7 Surface

## I Introduction

Atomic force microscopy (AFM) has become an important method in fundamental and applied surface science, able to obtain local information about surface structure and interactions, especially on nonconducting samples. In this method the surface is probed by measuring the interaction with a sharp tip, or changes produced by that interaction. Atomic resolution can be obtained if the tip can approach the surface in the attractive regime without making contact. In such non-contact experiments it is difficult to measure this interaction force accurately while avoiding cantilever jump to contact. However, because the resonance frequency of the cantilever changes locally if the tip starts to be attracted to specific surface sites, it became possible to image surface atoms by maintaining a constant frequency shift $\Delta f$ via feedback control of the cantilever perpendicular displacement. This shift can be measured very accurately using frequency demodulation. Jump to contact is avoided by using a large cantilever force constant $k$ and a large tip oscillation amplitude $A$, such that the restoring force $kA$ considerably exceeds the attractive force at the distance of closest approach. Under this condition, the tip-sample interaction weakly perturbs the harmonic motion of the tip, so that $\Delta f$ can be calculated from the force-distance dependence at any particular site, as first pointed out by Giessibl. Following the pioneering experiments, many investigators have obtained atomically-resolved images of the Si(111)-7×7 surface in ultra high vacuum (UHV) using large-amplitude dynamic force microscopy (DFM) under various feedback conditions. In particular, Lüthi et al. recorded simultaneous images of the topography, of the time-averaged tunneling current, and of the cantilever excitation, i.e. the damping, at constant $A$ and $\Delta f$. All images showed characteristic features (adatoms and corner holes) of the dimer-adatom-stacking fault (DAS) model of the 7×7 reconstruction. Point defects, e.g. missing adatoms, appeared at the same locations in all three images; this convincingly proved true atomic resolution. Surprisingly, the contrast in the damping appeared inverted with respect to the other two images. Other issues are whether $\Delta f$ jumps at the onset of covalent bonding between dangling bonds at the tip apex and on adatoms of the sample, and whether optimum atomic-scale resolution is obtained at a closer separation, in a soft intermittent contact mode, or even in the range where $\Delta f$ increases with decreasing tip-sample separation. In this contribution we show how relevant information about those issues and the quantities measured in large-amplitude DFM can be effectively extracted from classical molecular dynamics simulations, despite the enormous discrepancy in time scales between the tip oscillation and atomic motions. The rest of the paper is organized as follows: the next section describes the model and computational details; results are presented in the following two sections; conclusions are summarized at the end.

## II The Model

The Si(111) sample is represented by a slab of eight layers with atoms coupled via the well-known short-range Stillinger-Weber (SW) potential which, besides a pair interaction, contains a three-body term favoring tetrahedral covalent bonding.
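For reference, the SW form is simple to write down. Below is a sketch of the two-body part and a single three-body triplet term, using the standard published parameter set for Si (energies in eV, lengths in Å); treat it as an illustration of the functional form rather than the exact code used in this work:

```python
import numpy as np

# Standard Stillinger-Weber parameters for Si (reduced units eps, sigma)
EPS, SIG = 2.1683, 2.0951          # eV, Angstrom
A, B, P, Q, CUT = 7.049556277, 0.6022245584, 4.0, 0.0, 1.80
LAM, GAM = 21.0, 1.20

def v2(r):
    """Pair term; vanishes smoothly at the cutoff a*sigma ~ 3.77 A."""
    x = r / SIG
    if x >= CUT:
        return 0.0
    return EPS * A * (B * x**-P - x**-Q) * np.exp(1.0 / (x - CUT))

def h3(rij, rik, cos_jik):
    """One three-body term for triplet j-i-k; penalizes non-tetrahedral angles."""
    xij, xik = rij / SIG, rik / SIG
    if xij >= CUT or xik >= CUT:
        return 0.0
    return (EPS * LAM * np.exp(GAM / (xij - CUT) + GAM / (xik - CUT))
            * (cos_jik + 1.0 / 3.0) ** 2)

# Sanity checks: pair minimum near the Si bond length, zero cost at 109.47 deg
r = np.linspace(2.0, 3.7, 1000)
print("pair minimum at r =", r[np.argmin([v2(ri) for ri in r])], "A")
print("tetrahedral angle cost:", h3(2.35, 2.35, -1.0 / 3.0))
```

Note that the cutoff $a\sigma\approx 3.77$ Å matches the $\approx 3.8$ Å range quoted below, and the three-body term vanishes exactly at the tetrahedral angle ($\cos\theta=-1/3$).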
Periodic boundary conditions are applied laterally to a nearly square supercell encompassing four 7×7 unit cells. Atoms in the two bottom layers are fixed in bulk-like positions. Atoms in the following three layers are allowed to move, but their average kinetic energy is controlled by a thermostat. The Si(111)-7×7 reconstruction was initiated by removing and shifting atoms in the next two layers, and placing adatoms in the top layer so as to approximately obtain the geometry of the dimer-adatom-stacking fault (DAS) model. Dimers spontaneously formed between faulted and unfaulted half-cells in the third layer, and adatoms equilibrated 2.7 Å away from their second-layer neighbors shortly after molecular dynamics was started. Subsequent simulations were initiated with this equilibrated structure. Focusing on short-range covalent interactions, which presumably give rise to atomic-scale contrast in the non-contact regime, we model the tip by a sharp pyramidal silicon cluster with 34 atoms in 6 layers. Like AFM tips etched out of Si, it has a [111] axis. Its atoms were kept at bulk-like positions in order to isolate effects due to sample deformation and dynamics. Because the SW potential vanishes beyond 3.8 Å, the longer-ranged pair part of a potential which was fit to a first-principles computation of the interaction between Si layers was added when considering the interaction between the tip and sample atoms. This potential has a decay length $\lambda=0.8$ Å, close to that found in similar recent calculations of the interaction between a Si$_{10}$ cluster tip and a Si(111) surface. Compared to Ref. 14, the force $F_z$ computed for the same Si$_{10}$ tip above an adatom using our interaction has a similar tail, but reaches a minimum (maximum attraction) at a distance $z=3.0$ Å from the unperturbed top layer which is too strong (−7 nN vs. −2.25 nN). Similar values are obtained with our model tip. In the non-contact range, our results are thus at least semiquantitatively correct. The coordinates of all moving atoms are updated according to Newton's law, using a 5th-order Gear predictor-corrector algorithm with a time step of 0.4 fs, as advocated by Stillinger and Weber. The average temperature is controlled by a first-order feedback loop. The thermalization time $\tau_T$ can be chosen such that the coupling to the thermostat is weak, but the discrete vibrational spectrum of our finite system is broadened without appreciable distortions. Because the SW potential led to appreciable surface diffusion at room temperature (which is not realistic), the setpoint temperature was fixed at 100 K to ensure only small oscillations about the 7×7 equilibrium structure. Another important concern is the discrepancy between practical simulation times and the relevant ones in DFM experiments, e.g. the tip oscillation period $1/f$ (a few ps vs. a few ms). Because $\Delta f$ and the distance-dependent contribution to the damping are induced by the tip-sample interaction, the cantilever dynamics per se can be ignored if only its stationary oscillation on resonance is considered.
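The first-order temperature feedback mentioned above can be realized by a Berendsen-style velocity rescaling; whether this matches the exact scheme of the cited reference is an assumption, but it reproduces the stated behaviour (weak coupling, relaxation of the kinetic energy toward the setpoint with time constant $\tau_T$):

```python
import numpy as np

def thermostat_rescale(vel, masses, T_set, tau_T, dt, k_B=8.617e-5):
    """One step of first-order kinetic-energy feedback (Berendsen-like).
    vel: (N,3) velocities in A/fs; masses in eV fs^2/A^2; T in K; times in fs."""
    ekin = 0.5 * np.sum(masses[:, None] * vel**2)
    ndof = 3 * len(masses)
    T_inst = 2.0 * ekin / (ndof * k_B)
    # kinetic energy relaxes toward the setpoint with time constant tau_T
    lam = np.sqrt(1.0 + (dt / tau_T) * (T_set / T_inst - 1.0))
    return lam * vel

# Toy check: a hot start relaxes to T_set = 100 K (1 amu = 103.65 eV fs^2/A^2)
rng = np.random.default_rng(0)
m = np.full(64, 28.0855 * 103.65)        # Si atomic mass
v = rng.normal(scale=0.005, size=(64, 3))
for step in range(5000):                  # 5000 * 0.4 fs = 10 tau_T
    v = thermostat_rescale(v, m, 100.0, tau_T=200.0, dt=0.4)
print("final T ~", np.sum(m[:, None] * v**2) / (3 * 64 * 8.617e-5), "K")
```

With $dt/\tau_T\ll 1$ the rescaling only weakly perturbs individual trajectories, consistent with the weak-coupling requirement stated above.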
In order to track tip-induced changes, in each simulation we recorded snapshots of atomic positions, as well as the time evolution of the following quantities: the center-of-mass of all moving atoms, their kinetic energy (temperature), the coordinates of the tip apex and of selected adatoms, all components of the forces on the tip and on those adatoms, the work done on the tip, and the heat transfer to the thermostat.

## III Quasistatic scans

According to perturbation theory, a normalized frequency shift independent of $k$ can be computed in the non-contact range from the $z$-dependence of $F_z$ (including possible reversible deformations),

$$kA\frac{\Delta f}{f_0}=f_0\int_0^{1/f_0}F_z[z(t)]\cos(\omega t)\,dt$$ (1)

where $\omega=2\pi f_0$, $z=d+A(1+\cos\omega t)$, and $d$ is the minimum tip-surface distance. In the large-amplitude limit $A\gg d$, constant $\Delta f$ approximately corresponds to constant $F_z(d)$ at the turning point $z=d$. Quasistatic force-distance characteristics were obtained by approaching the tip at a constant velocity $v_z\ll v_c=\lambda f_{max}\approx 1000$ m/s, $f_{max}=16$ THz being the highest natural frequency of our system (adatom vibration against neighboring atoms). Such approach curves are essentially $v_z$-independent and reversible down to a site-dependent separation $z_c$ (e.g. 3.75 Å above adatoms), but strongly $v_z$-dependent and hysteretic below, because irreversible symmetry-lowering jumps of the nearest adatom(s) are then induced, leading to a transverse force in the nN range. Although our interaction overemphasizes such instabilities, we expect them to develop below the inflection point in $F_z(z)$, even before the net attraction reaches a maximum. Indeed, this inflection signals the appearance of repulsive forces on the nearest adatoms which can be efficiently released by sideways displacement. Below the highest $z_c$, constant-height scans recorded at constant velocity exhibit friction and wear, adatoms being successively picked up and sometimes redeposited by the moving tip. Above that critical height such scans are reversible and, as shown in Fig. 1, exhibit a contrast with site-specific features in $F_z$ which becomes more pronounced at smaller $z$. Whereas maximum attraction above adatoms is expected, note that $F_z$ is the same above the rest atom (R) and above the hollow site (X) in the adjacent triangle between adatoms, although a recessed dangling bond exists only at the R site. This implies that attraction to the adatoms dominates the picture. Because our model takes no account of charge transfer effects, both cell halves and all adatoms appear equivalent. Force-distance characteristics recorded at closely spaced points along a particular line can be used to reconstruct scans measurable under all possible feedback operation modes. We checked that the constant-height scan at $z=4$ Å in Fig. 1 essentially coincides with the corresponding scan reconstructed in this fashion. The same procedure can be used to reconstruct scans at constant normalized frequency shift using Eq. (1), with the time variable eliminated in favor of $\tilde z=A\cos\omega t$ in order to take advantage of the dense set of data points recorded in approach simulations:

$$kA\frac{\Delta f}{f_0}=\frac{1}{\pi A}\int_{-A}^{A}\frac{F_z(d+A+\tilde z)\,\tilde z}{\sqrt{A^2-\tilde z^2}}\,d\tilde z$$ (2)

The rhs of Eq. (2) is independent of the spring constant $k$ and oscillation frequency.
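Equation (2) is convenient to evaluate numerically with the substitution $\tilde z=A\cos\varphi$, which removes the inverse-square-root singularity at the turning points: $kA\,\Delta f/f_0=(1/\pi)\int_0^\pi F_z\big(d+A(1+\cos\varphi)\big)\cos\varphi\,d\varphi$. A sketch is shown below; the exponential model force is only a stand-in for a sampled approach curve, and the sign follows the convention of Eq. (2) as written:

```python
import numpy as np

def normalized_shift(Fz, d, A, n=4096):
    """kA*df/f0 from Eq.(2), via z~ = A*cos(phi) (singularity-free form)."""
    phi = (np.arange(n) + 0.5) * np.pi / n     # midpoint rule on [0, pi]
    z = d + A * (1.0 + np.cos(phi))             # tip height along the swing
    return np.mean(Fz(z) * np.cos(phi))         # (1/pi) * integral = mean value

# Model short-range attractive force F(z) = -F0 * exp(-z / lam)
F0, lam = 7.0, 0.8                              # nN, Angstrom (illustrative)
Fz = lambda z: -F0 * np.exp(-z / lam)

for d in (4.0, 5.0, 6.0):
    print(f"d = {d} A: kA*df/f0 = {normalized_shift(Fz, d, A=100.0):+.5f} nN")
```

In a reconstruction from simulation data, `Fz` would be an interpolation (e.g. `np.interp`) over the densely sampled approach curve at each lateral position.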
From the reconstructed topographical scans along the long diagonal shown in Fig. 2 we see that the constant frequency shift scan is close to the constant force curve. The contrast of the former is slightly reduced owing to the non-uniform weight multiplying $`F_z`$ in the rhs of Eq.(2). As expected, the overall contrast is inverted with respect to the constant-height scans in Fig. 1. Furthermore, the apparent height difference between corner hole and adatom sites is approximately 1.7Å, i.e. larger than in measurements at constant $`\mathrm{\Delta }f`$, except one . This is likely due to our neglect of interactions other than covalent bonding. The contrast we observe is the maximum possible one for a given normalized frequency shift or force; the inclusion of Van der Waals and long-range electrostatic interactions would reduce the contrast. The main advantage of constant-height scans is that their contrast should be independent of such long-range interactions, because those are mainly determined by the many atoms away from the tip apex. In order to assess the resolution of native surface defects, we first created representative ones by (i) removing a center adatom and (ii) placing an additional one at one of the lowest-energy surface sites, $`H_3`$, next to a corner adatom, and then relaxed the sample. Afterwards we recorded the constant-height scans plotted in Fig. 3, together with the scan above the perfect surface already shown in Fig. 1. Compared to the native adatom sites, we observe an almost twice as strong attraction above the dimer formed between the additional adatom and the nearby corner adatom, but it is not possible to differentiate the two partners. On the other hand, the missing adatom produces a maximum almost as high and slightly broader than the corner hole. This appearance is consistent with that of the deep minima associated with a missing adatom in constant $`\mathrm{\Delta }f`$ images . ## IV Periodic tip oscillation In order to determine the distance-dependent damping attributed to the excitation of phonons , we have performed dynamical simulations in which the tip was sinusoidally driven so that it comes within a few angstroms of the sample. As expected, the time-dependent force on the tip consists of a periodic series of narrow pulses of width $`\sqrt{\lambda /A}/f`$, slightly modulated by thermal fluctuations. Fig. 4 illustrates an extreme case where this minimum distance is smaller than $`z_c`$, so that the center adatom under the tip would be destabilized in a quasistatic scan. In view of the (unrealistically) narrow pulse width, a jump to an energetically more favored position on the side of the original site occurs only after the ninth pulse. Under typical experimental conditions the pulse width would be $`\sim 1\mu s`$, i.e. it would encompass more than one million adatom vibrations, a quasistatic situation. Nevertheless, a small amount of energy can be transferred to acoustic phonons, either directly or via anharmonic decay of local vibrations excited by the force pulses. In our simulations interaction-induced damping due to both processes can be obtained if the inverse interaction time $`f\sqrt{A/\lambda }`$ lies between $`f_{max}`$ and the frequency of the lowest mechanical resonance of our slab (about 1 $`THz`$). Thus the imposed frequency cannot be much below that lower limit. To visualize energy transfer we calculate the work done by the tip as $`W=\int F_zv_z\,dt`$.
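A toy sketch of this bookkeeping (ours, with made-up numbers, not the SW-based interaction of the simulations): the cumulative work follows from the sampled force and tip velocity, and a small phase lag in the force response mimics the non-adiabatic behaviour responsible for the irreversible increase of $`W`$ discussed next.

```python
import numpy as np

# sinusoidally driven tip against a slightly lagged (non-adiabatic) force
f, A, d, lam, delta = 0.5, 10.0, 3.5, 0.8, 0.02   # illustrative values
t = np.arange(0.0, 20.0 / f, 1e-3)                # 20 oscillation cycles
z_lag = d + A * (1.0 + np.cos(2 * np.pi * f * t - delta))
v_z = -2.0 * np.pi * f * A * np.sin(2 * np.pi * f * t)
F_z = -5.0 * np.exp(-(z_lag - 3.0) / lam)         # force acting on the tip
W = np.cumsum(F_z * v_z) * (t[1] - t[0])          # W(t) = int F_z v_z dt
per_cycle = len(t) // 20
print("net (irreversible) work per cycle:",
      (W[-1] - W[-1 - 10 * per_cycle]) / 10.0)
```

For delta=0 the net work per cycle vanishes, while the quasi-reversible spikes remain.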
From Fig. 4 we see that, apart from quasi-reversible spikes accompanying the force pulses, $`W`$ exhibits an irreversible increase, which comes about because the sample atoms do not quite respond adiabatically. This results in a slight asymmetry of the force pulses. This irreversible part of the work goes into vibrational energy which is then shared between different modes, i.e. turns into heat. This transformation is somewhat assisted by our weakly coupled feedback loop, which regulates the total kinetic energy of all thermalized atoms according to the equipartition rule. The thermostat time constant $`\tau _T`$ can be chosen shorter than the period, so that equilibration is practically achieved between successive pulses. The heat transferred to the thermostat then looks like a staircase as a function of time, and the average transfer rate coincides with the mean power transferred by the tip (overall energy conservation), as demonstrated elsewhere . In those earlier simulations, we observed that for a given force at the minimum distance d the energy transfer rate was stronger at the CH (corner hole) site than at the X site, in apparent agreement with the experimentally observed inverted contrast in the damping. As expected, the damping contrast was in accordance with the topography if the tip was driven with the same amplitude but brought within the same minimum distance d=3.4 Å. The observation of a contrast in the damping at a constant force $`F_z(d)`$ is a delicate issue, because such a contrast would vanish if the substrate were treated as a continuum subject to delta-function-like pulses . In recent dynamic simulations we managed to detect a damping roughly proportional to $`F_z^2(d)`$ in the non-contact range, as expected for a linear response. We found that it is maximum above adatoms, i.e. at sites where the distribution of $`F_z`$ among nearby atoms is most localized laterally. ## V Conclusions and Acknowledgments Molecular dynamics simulations can be conducted and analyzed effectively to provide valuable information unavailable in experiments, as well as to compute measurable properties (resonance frequency shift and damping) and corresponding scan lines. This has been illustrated for the Si(111)-7$`\times `$7 surface. Bearing in mind that our model tip is extremely sharp and that long-range interactions are ignored, the agreement with experimental constant $`\mathrm{\Delta }f`$ scans is quite satisfactory. As with any simulation, some details are sensitive to the underlying model. The interactions we assumed tend to unduly favor adatom jumps along the surface and to the tip below a separation $`z_c`$ which appears somewhat large. Nevertheless, such a critical distance is expected to appear before the net force reaches a minimum (maximum attraction) upon approach. It is therefore still difficult to understand how stable DFM images at constant $`\mathrm{\Delta }f<0`$ can be obtained following a jump in $`\mathrm{\Delta }f`$, or in the range where $`\mathrm{\Delta }f`$ increases with decreasing tip-sample separation . The inverted contrast sometimes observed in the topography and in the damping also remains unexplained. New attempts to reproduce such measurements, with careful attention to possible feedback artifacts and to the relevant range of setpoints, are therefore very desirable. More work is also needed to understand the detailed dissipation pathways and to extrapolate the computed damping for comparison with measured values. 
The authors thank Prof. H.-J. Güntherodt and the Swiss National Research program NFP36 "Nanosciences" for encouragement and support, T. Bonner for initiating this simulation and for his molecular dynamics code, and M. Bammerlin, R. Bennewitz, M. Guggisberg and R. Lüthi for helpful discussions.
# Exit-Times and ϵ-Entropy for Dynamical Systems, Stochastic Processes, and Turbulence ## I Introduction Many sciences, ranging from geophysics to economics, share the crucial problem of extracting information about the underlying dynamics of a system through the analysis of data time series. In these investigations, a central role is played by the evaluation of the degree of complexity of a string of data as a way to probe the underlying dynamics. Since the pioneering works of Shannon on information theory, entropy has been proposed as the proper mathematical tool to address such a question quantitatively. Nowadays, entropy constitutes a key concept for answering questions ranging from the conceptual aim of distinguishing a purely stochastic evolution from a chaotic deterministic one to the more applied goal of quantifying the degree of predictability as the space-time resolution is varied. The latter question is evidently of primary importance, e.g., to set the proper resolution of the data accumulation rate in experimental settings, or to efficiently compress data which have to be stored or transmitted. The distinction between stochastic and deterministic chaotic evolution can be formalised by introducing the Kolmogorov-Sinai ($`KS`$) entropy, $`h_{KS}`$. Let us consider a time series $`x_t`$ (with $`t=1,\mathrm{},T`$) where, for simplicity, the time is discretised but $`x_t`$ is a continuous variable. By defining a finite partition of the phase space, where each element of the partition has diameter smaller than $`ϵ`$, and by recording for each $`t`$ the symbol (letter) identifying the cell $`x_t`$ belongs to, one can code the time series into a sequence of symbols out of a finite alphabet. Then, from the probabilities of words of length $`m`$ ($`m`$-words) one can compute the $`m`$-block entropy. Finally, one measures the information gain in going from $`m`$-words to $`(m+1)`$-words: in the limit of infinitely long words ($`m\to \mathrm{\infty }`$) and of arbitrarily fine partition ($`ϵ\to 0`$) one obtains $`h_{KS}`$, which is an entropy per unit time. The value of $`h_{KS}`$ characterises the process which has generated the time series. For example, in a continuous stochastic evolution, which reveals more and more unpredictable outcomes as the resolution is increased, the $`KS`$-entropy is infinite. On the other hand, a regular deterministic signal is characterised by a zero $`KS`$-entropy, since it is completely predictable after a finite number of observations, at any given resolution. Between these two limiting cases, a finite positive value of $`h_{KS}`$ is the signature of a deterministic chaotic dynamics. The $`KS`$-entropy measures the growth rate of the unpredictability of the evolution, which coincides with the rate of information acquisition necessary to unambiguously reconstruct the signal. However, the distinction between chaotic and stochastic dynamics can be troublesome in practical applications (see for a related discussion). Indeed, only in simple, low-dimensional dynamical systems can the evaluation of $`h_{KS}`$ be properly carried out. As soon as one has to cope with realistic systems, e.g. geophysical flows, the number of degrees of freedom is so large that it inhibits any definite statement based on the $`KS`$-entropy evaluation. Moreover, even if one were able to compute the $`KS`$-entropy of those systems, many interesting questions could not be answered by knowing $`h_{KS}`$ alone. 
As a relevant example we mention the case of turbulence, whose dynamics is characterised by a hierarchy of fluctuations with different characteristic times and spatial scales. In this respect the $`KS`$-entropy is related only to the fastest time scale present in the dynamics. Therefore, to quantify the degree of predictability as a function of the range of scales and frequencies analysed, we need a more general tool. In order to make a step towards overcoming these difficulties, we consider a scale-dependent quantity, namely the $`ϵ`$-entropy, $`h(ϵ)`$, originally introduced by Shannon and Kolmogorov to characterise continuous processes. It is remarkable that, in spite of its deep relevance for the characterisation of stochastic processes and non-trivial dynamical systems, the $`ϵ`$-entropy is not widely used in the physics community. Only recently, mainly after the review paper of Gaspard and Wang and the introduction of the Finite Size Lyapunov Exponent , have there been attempts to use the $`ϵ`$-entropy. For this reason, in Section II we give a brief pedagogical review, aimed at introducing the reader to the $`ϵ`$-entropy and the $`(ϵ,\tau )`$-entropy. In practice, the $`(ϵ,\tau )`$-entropy, $`h(ϵ,\tau )`$, is the Shannon entropy of a time series sampled at frequency $`\tau ^{-1}`$ and measured with an accuracy $`ϵ`$ in phase space. We will see that the analysis of the $`ϵ`$-dependence of $`h(ϵ)`$ is able to highlight many dynamical features of very high-dimensional systems like turbulence, as well as of stochastic processes. The determination of $`h(ϵ,\tau )`$ is usually performed, as already stated, by looking at the Shannon entropy of the coarse-grained dynamics on an $`(ϵ,\tau )`$ grid in phase space. Unfortunately, this method suffers from so many computational drawbacks that it is almost unusable in many interesting situations. In particular, it is very inefficient when one investigates phenomena arising from the complex interplay of many different spatial and temporal scales, i.e. precisely the ones we are interested in. Therefore, here we resort to a recently proposed method based on the exit-time analysis, which has been demonstrated to be both practically and conceptually advantageous with respect to the standard one. In a few words, the idea consists in looking at a sequence of data not at fixed sampling times but at fixed fluctuation, i.e. whenever the fluctuation of the signal exceeds some given threshold, $`ϵ`$. This procedure allows a noticeable improvement of the computational ability to measure the $`ϵ`$-entropy. We give an ample demonstration of the advantages of this method in a number of examples ranging from one-dimensional dynamical systems to stochastic (affine and multi-affine) processes and turbulence. As far as turbulence is concerned, we present both an application to experimental data analysis and a theoretical remark. Namely, we will see that from the computation of the $`ϵ`$-entropy of turbulent flows one gains a deep understanding of the spatial correlations induced by the sweeping of the large scales on the smaller ones. In order to understand these features we also introduce and discuss a new stochastic model of turbulent flows which takes sweeping effects into account. The paper is organised as follows. In Section II we briefly define the $`ϵ`$-entropy and discuss its properties; we use a simple example which shows the conceptual relevance of this quantity together with the difficulties of its computation. 
In Section III we introduce the exit-time approach to the calculation of the $`ϵ`$-entropy, discussing in detail its theoretical and numerical advantages. In Section IV we discuss the use of the $`ϵ`$-entropy in characterising intermittent low-dimensional dynamical systems and stochastic (affine and multi-affine) processes. In Section V we present a study of high-Reynolds-number experimental data and a theoretical analysis of the $`ϵ`$-entropy in turbulence. Some conclusions and remarks follow in Section VI. Details on the stochastic model of a turbulent field are discussed in the Appendices. ## II The $`ϵ`$-entropy Assume a given time-continuous record of one observable, $`x(t)\in \mathrm{I}\mathrm{R}`$, over a total time $`T`$ long enough to ensure good statistics. For the sake of simplicity, we start by considering $`x`$ as an observable of a 1d system. The estimate of the entropy of the time record $`x(t)`$ requires the construction of a symbolic dynamics. To this purpose, one considers, as a first step, a grid on the time axis, introducing a small time interval, $`\tau `$, so as to obtain a sequence $`\{x_i=x(t_i),i=1,\mathrm{},N\}`$, with $`N=[T/\tau ]`$ ($`[]`$ denotes the integer part). As a second operation, one performs a coarse-graining of the phase space, with a grid of mesh size $`ϵ`$, and defines a set of symbols, $`\{S\}`$ (the letters of the alphabet), that correspond one-to-one to the cells so formed. Then, one has to consider the different words of length $`n`$ out of the complete sequence of symbols: $$W_k^n(ϵ,\tau )=(S_k,S_{k+1},\mathrm{},S_{k+n-1}),$$ where $`S_j`$ labels the cell containing $`x_j`$. See Fig. 1, where the above codification is sketched. From the probability distribution $`P(W^n(ϵ,\tau ))`$, estimated from the word frequencies, one calculates the block entropies $`H_n(ϵ,\tau )`$: $$H_n(ϵ,\tau )=-\sum _{\{W^n(ϵ,\tau )\}}P(W^n(ϵ,\tau ))\mathrm{ln}P(W^n(ϵ,\tau )),$$ (1) where $`\{W^n(ϵ,\tau )\}`$ indicates the set of all possible words of length $`n`$. The $`(ϵ,\tau )`$-entropy per unit time, $`h(ϵ,\tau )`$, is finally defined as: $$h_n(ϵ,\tau )=\frac{1}{\tau }[H_{n+1}(ϵ,\tau )-H_n(ϵ,\tau )],$$ (2) $$h(ϵ,\tau )=\underset{n\to \mathrm{\infty }}{lim}h_n(ϵ,\tau )=\frac{1}{\tau }\underset{n\to \mathrm{\infty }}{lim}\frac{1}{n}H_n(ϵ,\tau ).$$ (3) For practical reasons the dependence on the details of the partition is ignored, while the rigorous definition is given in terms of the infimum over all possible partitions with elements of diameter smaller than $`ϵ`$. Note that the above-defined $`(ϵ,\tau )`$-entropy is nothing but the Shannon entropy of the sequence of symbols $`\{S_i\}`$. In the case of time-continuous evolutions, whose realizations are continuous functions of time, $`h(ϵ,\tau )`$ does not depend on $`\tau `$. When this happens, one has a finite $`ϵ`$-entropy per unit time, $`h(ϵ)`$. For genuine time-discrete systems, one can simply put $`h(ϵ)\equiv h(ϵ,\tau =1)`$. In all these cases $$h_{KS}=\underset{ϵ\to 0}{lim}h(ϵ).$$ (4) The determination of $`h_{KS}`$ involves the study of the limits $`n\to \mathrm{\infty }`$ and $`ϵ\to 0`$, which are in principle independent, but in all practical cases one has to find an optimal choice of the parameters such that the estimated entropy is close to the exact value. For a genuine chaotic system, one has $`0<h_{KS}<\mathrm{\infty }`$, i.e. the rate of information creation is finite. On the other hand, for a continuous random process $`h_{KS}=\mathrm{\infty }`$. 
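In practice, Eqs. (1)-(3) amount to a few lines of code. The following minimal sketch (helper names are ours) estimates $`h_n(ϵ,\tau )`$ from a scalar record by the standard grid coding; it is a naive estimator and suffers from exactly the undersampling problems discussed below.

```python
import numpy as np
from collections import Counter

def block_entropy(s, n):
    """H_n of Eq. (1), estimated from observed n-word frequencies."""
    c = Counter(tuple(s[i:i + n]) for i in range(len(s) - n + 1))
    p = np.array(list(c.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log(p)).sum()

def h_n(x, eps, tau, n_max=6):
    """h_n(eps,tau) of Eq. (2): code x on cells of size eps, sampled
    every tau steps, and take block-entropy differences."""
    s = np.floor(np.asarray(x)[::tau] / eps).astype(int)
    H = [block_entropy(s, n) for n in range(1, n_max + 2)]
    return [(H[n] - H[n - 1]) / tau for n in range(1, n_max + 1)]

# demo on a plain random walk
x = np.cumsum(np.random.default_rng(0).normal(size=200_000))
print(h_n(x, eps=4.0, tau=1))
```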
Therefore, in order to distinguish between a purely deterministic system and a stochastic system it is necessary to perform the limit $`ϵ\to 0`$. Unfortunately, from a physical or numerical point of view this is extremely difficult. Nevertheless, by looking at the behaviour of the $`ϵ`$-entropy of the signal as $`ϵ`$ is varied, one can gain some qualitative and quantitative insight into the chaotic or stochastic nature of the underlying process. Moreover, for some stochastic processes one can explicitly estimate the scaling behaviour of the $`ϵ`$-entropy. For instance, in the case of a stationary Gaussian process with spectrum $`S(\omega )\sim \omega ^{-2}`$, Kolmogorov rigorously derived $$h(ϵ)\sim \frac{1}{ϵ^2},$$ (5) for small $`ϵ`$. However, as we show with the following simple but non-trivial example, there are many practical difficulties in the computation of $`h(ϵ)`$. Let us consider the chaotic map: $$x_{t+1}=x_t+p\mathrm{sin}2\pi x_t,$$ (6) which for $`p>0.7326\mathrm{}`$ produces a large-scale diffusive behaviour, i.e.: $$\langle \left(x_t-x_0\right)^2\rangle \simeq 2Dt\quad \text{for }t\to \mathrm{\infty },$$ (7) where $`D`$ is the diffusion coefficient. By computing the $`ϵ`$-entropy of this system one expects $$h(ϵ)\simeq \lambda \quad \text{for }ϵ\ll 1;\qquad h(ϵ)\sim \frac{D}{ϵ^2}\quad \text{for }ϵ\gg 1,$$ (8) where $`\lambda `$ is the Lyapunov exponent. In Fig. 2 we show that the numerical computation of $`h(ϵ)`$ using the standard codification (Fig. 1) is highly non-trivial already in this simple system. Indeed, the behaviour (8) in the diffusive region is only poorly obtained by considering the envelope of $`h_n(ϵ,\tau )`$ computed for different values of $`\tau `$, while looking at any single (small) value of $`\tau `$ (one would like to put $`\tau =1`$) yields a rather inconclusive result. This is due to the fact that one has to consider very large block lengths, $`n`$, in order to obtain good convergence of $`H_{n+1}(ϵ,\tau )-H_n(ϵ,\tau )`$ in (3). In the diffusive regime, a dimensional argument shows that the characteristic time of the system at scale $`ϵ`$ is $`T_ϵ\sim ϵ^2/D`$. If we consider, for example, $`ϵ=10`$ and typical values of the diffusion coefficient $`D\sim 10^{-1}`$, the characteristic time, $`T_ϵ`$, is much larger than the elementary sampling time $`\tau =1`$. Concluding this section, we recall that for systems living in $`d>1`$ dimensions the procedure sketched above for the determination of $`h(ϵ,\tau )`$ carries over unaltered, with the set of symbols $`\{S\}`$ now identifying cells in the $`d`$-dimensional space where the state-vector $`𝐱(t)`$ evolves. ## III How to compute the $`ϵ`$-entropy with exit times The approach we propose to calculate $`h(ϵ)`$ differs from the usual one in the procedure used to construct the coding sequence of the signal at a given level of accuracy. This is an important point, because the quality of the coding greatly affects the result of the $`ϵ`$-entropy computation. An efficient procedure reduces redundancy and improves the quality of the results. The problem of encoding signals efficiently is quite old and widely discussed in the literature. The most efficient compression or codification of a symbolic sequence is linked to its Shannon entropy. 
Shannon's compression theorem states: given an alphabet with $`m`$ symbols and a sequence of these symbols, $`\{S_i,i=1,\mathrm{},N\}`$, with entropy $`h`$, it is not possible to construct another sequence $`\{S_i^{},i=1,\mathrm{},N^{}\}`$ (using the same alphabet and containing the same information) whose length $`N^{}`$ is smaller than $`(h/\mathrm{ln}m)N`$. That is to say: $`h/\mathrm{ln}m`$ is the maximum allowed compression rate. As a consequence, if one is able to map a sequence $`\{s_i,i=1,\mathrm{},N_s\}`$ of $`m`$ symbols into another sequence $`\{\sigma _i,i=1,\mathrm{},N_\sigma \}`$ with the same symbols, the ratio $`(N_\sigma /N_s)\mathrm{ln}m`$ gives an upper bound for the entropy of $`\{s_i\}`$. More generally, if $`\{\sigma _i\}`$ is a codification of $`\{s_i\}`$ without information loss, then the two sequences must have equal total entropy: $`N_sh(s)=N_\sigma h(\sigma )`$. Now we introduce the coding of the signal by the exit-time, $`t(ϵ)`$, that is, the time for the signal to undergo a fluctuation of size $`ϵ`$. To do so, we define an alternating grid of cell size $`ϵ`$ in the following way: we consider the original continuous-time record $`x(t)`$ and a reference starting time $`t=t_0`$. The first exit-time, $`t_1`$, is then defined as the first time necessary to have an absolute variation equal to $`ϵ/2`$ in $`x(t)`$, i.e., $`|x(t_0+t_1)-x(t_0)|\ge ϵ/2`$. This is the time the signal takes to exit the actual cell of size $`ϵ`$. Then we restart from $`t_1`$ to look for the next exit-time $`t_2`$, i.e., the first time such that $`|x(t_0+t_1+t_2)-x(t_0+t_1)|\ge ϵ/2`$, and so on, to obtain a sequence of exit-times: $`\{t_i(ϵ)\}`$. To distinguish the direction of the exit (up or down out of a cell), we introduce the label $`k_i=\pm 1`$, depending on whether the signal is exiting above or below. To clarify the procedure see Fig. 3, where we sketch the coding method for the signal shown in Fig. 1. From Fig. 3 one recognises the alternating structure of the grid: the starting point to find $`t_{i+1}`$ lies in the middle of the cell $`x(t_i)\pm ϵ/2`$, whereas it lies on the border of the cell $`x(t_{i-1})\pm ϵ/2`$. In this way one avoids fast exits out of a cell due to small fluctuations (compare Figs. 1 and 3). At the end of this construction, the trajectory is coded without ambiguity, with the required accuracy, by the sequence $`\{(t_i,k_i),i=1,\mathrm{},M\}`$, where $`M`$ is the total number of exit-time events observed during the total time $`T`$. A continuous signal, evolving in a continuous time, is now coded in two sequences: a discrete-valued one, $`\{k_i\}`$, and a continuous-valued one, $`\{t_i\}`$. Performing a coarse-graining of the possible values assumed by $`t(ϵ)`$ with the resolution time $`\tau _r`$, we accomplish the goal of obtaining a symbolic sequence. After that, one proceeds as usual, studying the "exit-time words" of various lengths $`n`$. These are the subsequences of couples of symbols $$\mathrm{\Omega }_i^n(ϵ,\tau _r)=((\eta _i,k_i),(\eta _{i+1},k_{i+1}),\mathrm{},(\eta _{i+n-1},k_{i+n-1})),$$ (9) where $`\eta _j`$ labels the cell (of width $`\tau _r`$) containing the exit-time $`t_j`$. 
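For a discretely sampled record, the construction just described reduces to a short loop; the minimal sketch below (ours) ignores the interpolation of the exact exit instant between samples, so exit times are returned as multiples of the sampling time.

```python
import numpy as np

def exit_time_code(x, eps, dt=1.0):
    """Code a sampled record x by exit times through fluctuations of
    size eps: after each exit, restart from the value just reached
    (alternating grid) and store the exit direction k = +/-1."""
    times, dirs = [], []
    ref, last = x[0], 0
    for i in range(1, len(x)):
        if abs(x[i] - ref) >= eps / 2.0:
            times.append((i - last) * dt)
            dirs.append(1 if x[i] > ref else -1)
            ref, last = x[i], i
    return np.array(times), np.array(dirs)

# demo on a Brownian path: <t(eps)> should grow like eps^2
x = np.cumsum(np.random.default_rng(1).normal(size=500_000))
for eps in (4.0, 8.0, 16.0):
    t, k = exit_time_code(x, eps)
    print(eps, t.mean())
```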
From the probabilities of these words one calculates the block entropies at the given time resolution, $`H_n^\mathrm{\Omega }(ϵ,\tau _r)`$, and then the exit-time $`(ϵ,\tau _r)`$-entropies: $$h^\mathrm{\Omega }(ϵ,\tau _r)=\underset{n\to \mathrm{\infty }}{lim}H_{n+1}^\mathrm{\Omega }(ϵ,\tau _r)-H_n^\mathrm{\Omega }(ϵ,\tau _r).$$ (10) The limit of infinite time resolution gives us the $`ϵ`$-entropy per exit, i.e.: $$h^\mathrm{\Omega }(ϵ)=\underset{\tau _r\to 0}{lim}h^\mathrm{\Omega }(ϵ,\tau _r).$$ (11) This result may also be obtained by arguing as follows. There is a one-to-one correspondence between the (exit-time)-histories and the $`(ϵ,\tau )`$-histories (in the limit $`\tau \to 0`$) originating from a given $`ϵ`$-cell. The Shannon-McMillan theorem assures that the number of typical $`(ϵ,\tau )`$-histories of length $`N`$, $`𝒩(ϵ,N)`$, is such that $`\mathrm{ln}𝒩(ϵ,N)\simeq h(ϵ)N\tau =h(ϵ)T`$. For the number of typical (exit-time)-histories of length $`M`$, $`ℳ(ϵ,M)`$, we have $`\mathrm{ln}ℳ(ϵ,M)\simeq h^\mathrm{\Omega }(ϵ)M`$. If we consider $`T=M\langle t(ϵ)\rangle `$ we must obtain the same number of (very long) histories. Therefore, from the relation $`M=T/\langle t(ϵ)\rangle `$, where $`\langle t(ϵ)\rangle =(1/M)\sum _{i=1}^Mt_i`$, we finally obtain for the $`ϵ`$-entropy per unit time: $$h(ϵ)=\frac{Mh^\mathrm{\Omega }(ϵ)}{T}=\frac{h^\mathrm{\Omega }(ϵ)}{\langle t(ϵ)\rangle }.$$ (12) Note that a relation similar to (12), without the dependence on $`ϵ`$, has been previously proposed in the particular case of stochastic resonance . In such a case, where $`x(t)`$ effectively takes only the two values $`\pm 1`$ and the transitions can be assumed to be instantaneous, the meaning of the equation is rather transparent. At this point we must recall that in almost all practical situations there exists a minimum time interval, $`\tau _s`$, with which a signal can be sampled. Since this minimum resolution time exists, we can at best estimate $`h^\mathrm{\Omega }(ϵ)`$ by means of $`h^\mathrm{\Omega }(ϵ)=h^\mathrm{\Omega }(ϵ,\tau _s)`$, instead of performing the limit (11); so that we may put: $$h(ϵ)\simeq \frac{h^\mathrm{\Omega }(ϵ,\tau _r)}{\langle t(ϵ)\rangle },$$ (13) for small enough $`\tau _r`$. In most cases, the leading $`ϵ`$-contribution to $`h(ϵ)`$ in (13) is given by the mean exit-time $`\langle t(ϵ)\rangle `$ and not by $`h^\mathrm{\Omega }(ϵ,\tau _r)`$. Anyhow, the computation of $`h^\mathrm{\Omega }(ϵ,\tau _r)`$ is compulsory in order to recover, e.g., a zero entropy for regular (e.g. periodic) signals. Now we discuss how one can estimate the $`ϵ`$-entropy in practice. In particular, we introduce upper and lower bounds for $`h(ϵ)`$ which are very easy to compute in the exit-time scheme . We use the following notation: for given $`ϵ`$ and $`\tau _r`$, $`h^\mathrm{\Omega }(ϵ,\tau _r)\equiv h^\mathrm{\Omega }(\{\eta _i,k_i\})`$, and we indicate with $`h^\mathrm{\Omega }(\{k_i\})`$ and $`h^\mathrm{\Omega }(\{\eta _i\})`$ the Shannon entropy of the sequence $`\{k_i\}`$ and $`\{\eta _i\}`$, respectively. By applying standard results of information theory one obtains: 1. $`h^\mathrm{\Omega }(\{k_i\})\le h^\mathrm{\Omega }(\{\eta _i,k_i\})`$, since the mean uncertainty on the composed event $`\{\eta _i,k_i\}`$ cannot be smaller than that of a partial one, $`\{k_i\}`$ (or $`\{\eta _i\}`$); 2. $`h^\mathrm{\Omega }(\{\eta _i,k_i\})\le h^\mathrm{\Omega }(\{\eta _i\})+h^\mathrm{\Omega }(\{k_i\})`$, since the uncertainty is maximal if $`\{k_i\}`$ and $`\{\eta _i\}`$ are independent (correlations can only decrease the uncertainty). 
Moreover, we observe that, for a given finite resolution $`\tau _r`$, the associated sequence $`\{\eta _i\}`$ satisfies the bound: $$h^\mathrm{\Omega }(\{\eta _i\})\le H_1^\mathrm{\Omega }(\{\eta _i\}).$$ In the above relation $`H_1^\mathrm{\Omega }(\{\eta _i\})`$ is the one-symbol entropy of $`\{\eta _i\}`$ (i.e. the entropy of the probability distribution of the exit-times measured on the scale $`\tau _r`$), which can be written as $$H_1^\mathrm{\Omega }(\{\eta _i\})=c(ϵ)+\mathrm{ln}\left(\frac{\langle t(ϵ)\rangle }{\tau _r}\right),$$ where $`c(ϵ)=-\int p(z)\mathrm{ln}p(z)\,dz`$, and $`p(z)`$ is the probability distribution function of the rescaled exit-time $`z(ϵ)=t(ϵ)/\langle t(ϵ)\rangle `$. Finally, using the previous relations, one obtains the following bounds for the $`ϵ`$-entropy: $$\frac{h^\mathrm{\Omega }(\{k_i\})}{\langle t(ϵ)\rangle }\le h(ϵ)\le \frac{h^\mathrm{\Omega }(\{k_i\})+c(ϵ)+\mathrm{ln}(\langle t(ϵ)\rangle /\tau _r)}{\langle t(ϵ)\rangle }.$$ (14) Note that such bounds are relatively easy to compute and give a good estimate of $`h(ϵ)`$. Equations (12)-(14) allow for a remarkable improvement of the computational efficiency. Especially as far as the scaling behaviour of $`h(ϵ)`$ is concerned, one can see that the leading contribution is given by $`\langle t(ϵ)\rangle `$, and that $`h^\mathrm{\Omega }(ϵ,\tau _r)`$ introduces, at worst, a sub-leading logarithmic contribution $`h^\mathrm{\Omega }(ϵ,\tau _r)\sim \mathrm{ln}(\langle t(ϵ)\rangle /\tau _r)`$ (see Eq. (14)). This fact is evident in the case of Brownian motion. In this case one has $`\langle t(ϵ)\rangle \sim ϵ^2/D`$, and 1. $`c(ϵ)`$ is $`O(1)`$ and independent of $`ϵ`$ (since Brownian motion is a self-affine process); 2. $`h^\mathrm{\Omega }(\{k_i\})\le \mathrm{ln}2`$ is small compared with $`\mathrm{ln}(\langle t(ϵ)\rangle /\tau _r)`$. So that, neglecting the logarithmic corrections, $`h(ϵ)\sim 1/\langle t(ϵ)\rangle \sim Dϵ^{-2}`$. In Fig. 4 we show the numerical evaluation of the bounds (14) for the diffusive map (6). Fig. 4 has to be compared with Fig. 2, where the usual approach was used. While in Fig. 2 the expected $`ϵ`$-entropy scaling is only poorly recovered as an envelope over many different $`\tau `$, within the exit-time method the predicted behaviour is easily recovered over the whole range $`ϵ>1`$, with a remarkable improvement in the quality of the result. We underline that the reason why the exit-time approach is more efficient than the usual one is a posteriori intuitive. Indeed, at fixed $`ϵ`$, $`\langle t(ϵ)\rangle `$ automatically gives the typical time at that scale, and, as a consequence, it is not necessary to reach very large block sizes, at least if $`ϵ`$ is not too small. Especially for large $`ϵ`$, we found that small word lengths are enough to estimate the $`ϵ`$-entropy accurately. Of course, for small $`ϵ`$ (i.e. the plateau of Fig. 4) one has to use larger block sizes: here the exit time is $`O(1)`$ and one falls back on the problems of the standard method. For small $`ϵ`$ in deterministic systems one has to distinguish two situations. 1. $`ϵ\to 0`$ for discrete-time systems. In this limit the exit-time approach coincides with the usual one. The exit-times always coincide with the minimum sampling time, i.e. $`\langle t(ϵ\to 0)\rangle =1`$, and we have to consider the possibility of jumps over more than one cell, i.e., the $`k_i`$ symbols may take the values $`\pm 1,\pm 2,\mathrm{}`$. 2. $`ϵ\to 0`$ for continuous-time systems. At very small $`ϵ`$, due to the deterministic character of the system, one has $`t(ϵ)\sim ϵ`$, and therefore one finds words composed of highly correlated symbols. So one has to treat very large blocks in computing the entropy . 
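The bounds (14) are equally simple to evaluate from the coded sequence $`\{(t_i,k_i)\}`$: one needs the mean exit time, an estimate of $`h^\mathrm{\Omega }(\{k_i\})`$, and a histogram estimate of $`c(ϵ)`$. In the sketch below (ours) the block length and the binning are illustrative choices, and the block-entropy estimate of $`h^\mathrm{\Omega }(\{k_i\})`$ is a crude finite-$`n`$ one.

```python
import numpy as np
from collections import Counter

def entropy_per_symbol(seq, n=8):
    """Crude block-entropy estimate (per symbol) of a symbol sequence."""
    c = Counter(tuple(seq[i:i + n]) for i in range(len(seq) - n + 1))
    p = np.array(list(c.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log(p)).sum() / n

def h_eps_bounds(times, dirs, tau_r):
    """Lower and upper bounds of Eq. (14) from an exit-time coding."""
    t_mean = times.mean()
    h_k = entropy_per_symbol(dirs)                   # h^Omega({k_i})
    z = times / t_mean                               # rescaled exit times
    pdf, edges = np.histogram(z, bins=50, density=True)
    dz = edges[1] - edges[0]
    c_eps = -(pdf[pdf > 0] * np.log(pdf[pdf > 0])).sum() * dz
    return h_k / t_mean, (h_k + c_eps + np.log(t_mean / tau_r)) / t_mean

# illustrative call with synthetic exit data
rng = np.random.default_rng(2)
print(h_eps_bounds(rng.exponential(50.0, 100_000),
                   rng.choice([-1, 1], 100_000), tau_r=1.0))
```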
However, as far as high-dimensional systems are concerned, these two limiting cases are in some respects not of practical interest. In such systems the analysis of the $`ϵ\to 0`$ limit is usually unattainable for several reasons, and, moreover, in many cases one is more interested in the large-$`ϵ`$ behaviour. We believe that in these cases the approach presented here is practically unavoidable. We conclude this section with two further remarks. First, up to now we have considered a scalar signal as the output of a one-dimensional system. This fact entered only through the two-valuedness of the $`k`$-variable. If we are given a vectorial signal $`𝐱(t)`$, describing the evolution of a $`d`$-dimensional system, we only have to admit $`2d`$ values for the direction-of-exit variable $`k`$. If the dynamics is discrete, one also has to consider the possibility of jumps over more than one cell (see the previous discussion). Second, one may wonder about the dependence of $`h(ϵ)`$ on the observable used. Rigorous results ensure that the Kolmogorov-Sinai entropy, i.e. the limit $`ϵ\to 0`$ of $`h(ϵ)`$, is an intrinsic quantity of the considered system: its value does not change under a smooth change of variables. In the case of the $`(ϵ,\tau )`$-entropy there could, in principle, be a dependence on the chosen observable. However, one can see that at least the scaling properties should not depend strongly on the choice of the observable. If $`A(x)`$ is a smooth function of $`x`$ such that the following property holds: $$c_1|\delta x|\le |A(x+\delta x)-A(x)|\le c_2|\delta x|,$$ (15) with $`c_1`$ and $`c_2`$ finite constants, then there exist two constants $`\alpha _1`$ and $`\alpha _2`$ such that $$h_x(ϵ/\alpha _1,\tau )\le h_A(ϵ,\tau )\le h_x(ϵ/\alpha _2,\tau ),$$ (16) where $`h_A(ϵ,\tau )`$ and $`h_x(ϵ,\tau )`$ are the $`(ϵ,\tau )`$-entropies computed using the observables $`A`$ and $`x`$, respectively. This result implies that if $`h(ϵ,\tau )`$ shows a power-law behaviour as a function of $`ϵ`$, $`h(ϵ,\tau )\sim ϵ^\beta `$, the same behaviour, with the same exponent $`\beta `$, must be seen when using another smooth observable in the determination of the $`(ϵ,\tau )`$-entropy. ## IV Application of the $`ϵ`$-entropy to deterministic and stochastic processes ### A An intermittent deterministic mapping We discuss the application of the exit-time approach to the computation of the $`ϵ`$-entropy in strongly intermittent low-dimensional systems. In the presence of intermittency, the dynamics is characterised by very long, almost quiescent (laminar) intervals separating short intervals of very intense (bursting) activity (see Fig. 5). Already at a qualitative level, one realises that coding the trajectory shown in Fig. 5 at fixed sampling times (Section II) is not very efficient compared with the exit-time method, where the information on the very long quiescent periods is typically stored using only one symbol. To be more quantitative, let us consider the following one-dimensional intermittent map : $$x_{t+1}=(x_t+ax_t^z)\text{mod}\mathrm{\hspace{0.33em}1},$$ (17) with $`z>1`$ and $`a>0`$. The invariant density is characterised by a power-law singularity near $`x=0`$, which is a marginally stable fixed point, i.e. $`\rho (x)\sim x^{1-z}`$. For $`z\ge 2`$, the density is not normalisable, and an interesting dynamical regime, the so-called sporadic chaos, appears . 
Namely, for $`z\ge 2`$ the separation between two close trajectories behaves as: $$|\delta x_n|\sim \delta x_0\mathrm{exp}\left[cn^{\nu _0}(\mathrm{ln}n)^{\nu _1}\right],$$ (18) with $`0<\nu _0<1`$, or $`\nu _0=1`$ and $`\nu _1<0`$. In the sporadic chaos regime, nearby trajectories diverge with a stretched exponential, even though the Lyapunov exponent is zero. For $`z<2`$ the system follows the usual chaotic motion, with $`\nu _0=1`$ and $`\nu _1=0`$. Sporadic chaos is intermediate between chaotic and regular motion. This can be understood by computing the Kolmogorov-Chaitin-Solomonoff complexity , or, as we show in the following, by studying the mean exit time. By neglecting the contribution of $`h^\mathrm{\Omega }(ϵ)`$ and considering only the mean exit time, we can estimate the total entropy, $`H_N`$, of a trajectory of length $`N`$ as $$H_N\sim \frac{N}{\langle t(ϵ)\rangle _N}\quad \text{for large }N,$$ (19) where the subscript $`N`$ indicates that the mean exit time is computed on a sequence of length $`N`$. Due to the power-law singularity at $`x=0`$, $`\langle t(ϵ)\rangle _N`$ depends on $`N`$. In equation (19) we have dropped from $`H_N`$ the dependence on $`ϵ`$, which is expected to be weak. Indeed, due to the singularity near the origin, the exit times at scale $`ϵ`$ are dominated by the first exit from a region of size $`ϵ`$ around the origin, so that $`\langle t(ϵ)\rangle _N`$ approximately gives the duration of the laminar period (this is exact for $`ϵ`$ large enough). In Fig. 6 the behaviour of $`\langle t(ϵ)\rangle _N`$ is shown as a function of $`N`$ and $`z`$ for two different choices of $`ϵ`$. For large enough $`N`$ the behaviour is almost independent of $`ϵ`$, and for $`z\ge 2`$ one has $$\langle t(ϵ)\rangle _N\sim N^\alpha ,\quad \text{where}\quad \alpha =\frac{z-2}{z-1}.$$ (20) The value of $`\alpha `$ is obtained by the following argument: the power-law singularity leads to $`x_t\approx 0`$ most of the time, and moreover, near the origin the map (17) can be approximated by the differential equation $`dx/dt=ax^z`$ . Therefore, denoting by $`x_0`$ the initial condition, one solves the differential equation obtaining $$(x_0+ϵ)^{1-z}-x_0^{1-z}=a(1-z)t(ϵ).$$ Now, due to the singularity, $`x_0`$ is typically much smaller than $`x_0+ϵ`$, and hence we can neglect the term $`(x_0+ϵ)^{1-z}`$, so that the exit time is $`t(ϵ)\sim x_0^{1-z}`$. From the probability density of $`x_0`$, $`\rho (x_0)\sim x_0^{1-z}`$, one obtains the probability distribution of the exit times, $`\rho (t)\sim t^{1/(1-z)-1}`$, where the factor $`t^{-1}`$ takes into account the non-uniform sampling of the exit-time statistics (see the discussion after equation (25)). Finally, the average exit time on a trajectory of length $`N`$ is given by $$\langle t(ϵ)\rangle _N\sim \int _0^Nt\rho (t)\,dt\sim N^{\frac{z-2}{z-1}}.$$ (21) The total entropy is finally given by $$H_N\sim \frac{N}{N^{\frac{z-2}{z-1}}}\sim N^{\frac{1}{z-1}};$$ note that this is exactly the same $`N`$-dependence found with the computation of the algorithmic complexity . Let us underline that the entropy per unit time goes to zero very slowly, because of the sporadicity: $$\frac{H_N}{N}\sim \frac{1}{\langle t(ϵ)\rangle _N}.$$ Let us also note that we arrive at these results without any partition of the phase space of the system. 
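The scaling (20) can be checked directly by iterating the map (17) and collecting exit times at fixed $`ϵ`$ for increasing record lengths $`N`$. A rough sketch (ours; $`z=2.5`$, so $`\alpha =1/3`$, while $`a`$, $`x_0`$ and $`ϵ`$ are arbitrary illustrative choices):

```python
import numpy as np

def sporadic_map(N, z=2.5, a=1.0, x0=0.3):
    """Iterate the intermittent map (17): x -> (x + a*x**z) mod 1."""
    x = np.empty(N)
    x[0] = x0
    for t in range(N - 1):
        x[t + 1] = (x[t] + a * x[t] ** z) % 1.0
    return x

eps = 0.5
for N in (10**4, 10**5, 10**6):
    x = sporadic_map(N)
    ref, last, T = x[0], 0, []
    for i in range(1, N):
        if abs(x[i] - ref) >= eps / 2.0:      # exit event at scale eps
            T.append(i - last)
            ref, last = x[i], i
    print(N, np.mean(T))                      # expect growth ~ N**((z-2)/(z-1))
```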
### B Affine and multi-affine stochastic processes Self-affine and multi-affine processes are fully characterised by the scaling laws of the moments of the signal increments , $`\delta _tx=x(t_0)-x(t_0+t)`$: $$\langle |\delta _tx(t_0)|^q\rangle \sim t^{\zeta (q)},$$ (22) where $`\zeta (q)`$ is a linear function of $`q`$, $`\zeta (q)=\xi q`$, for a self-affine signal ($`\xi `$ being the Hölder exponent characterising the process), and a non-linear function of $`q`$ for a multi-affine signal. The average $`\langle \mathrm{}\rangle `$ is defined as the average over the process distribution $`P(t_0,\delta _tx(t_0))`$, which gives the probability to have a fluctuation $`\delta _tx(t_0)`$ at the instant $`t_0`$. In the case of a stationary process, as will always be assumed here, the probability distribution is time invariant and the average is computed, by invoking an ergodic hypothesis, as a time average. Sometimes, with an abuse of language, a multi-affine process is also called a multi-fractal process. While a self-affine process has a globally scale-invariant probability distribution function, a multi-affine process can be constructed by requiring locally (in time) scale-invariant fluctuations . In a nutshell, one assumes a spectrum of different local scaling exponents $`\xi `$: $`\delta _tx(t_0)\sim t^{\xi (t_0)}`$, with probability $`P_t(\xi )\sim t^{1-D(\xi )}`$ to observe a given Hölder exponent $`\xi `$ at time increment $`t`$. The function $`D(\xi )`$ can be interpreted as the fractal dimension of the set where the Hölder exponent $`\xi `$ is observed . The scaling exponents $`\zeta (q)`$ are related to $`D(\xi )`$ by a Legendre transform. Indeed, one may define the average process as an average over all possible singularities, $`\xi `$, weighted by the probability to observe them: $$\langle (\delta _tx)^q\rangle \sim \int d\xi \,t^{\xi q}t^{1-D(\xi )},$$ which in the limit of small $`t`$, by a saddle-point estimate, becomes: $$\langle (\delta _tx)^q\rangle \sim t^{\zeta (q)}\quad \text{with}\quad \zeta (q)=\underset{\xi }{\mathrm{min}}(q\xi +1-D(\xi )).$$ (23) Eq. (23) can be generalised to $`\zeta (q)=\mathrm{min}_\xi (q\xi +d-D(\xi ))`$ if the considered signal is embedded in a $`d`$-dimensional space. Let us notice that, in this language, the already discussed Brownian motion corresponds to a self-affine signal with only one possible exponent, $`\xi =1/2`$, with $`D(1/2)=1`$. In Appendix A one finds how to construct arbitrary self-affine and multi-affine stochastic processes. Let us now investigate the $`ϵ`$-entropy properties of these two important classes of stochastic signals by using the exit-time approach. We will proceed by discussing the general case of multi-affine processes, noting that the self-affine case is a particular one, corresponding to a single exponent in the spectrum. The exit-time probability distribution function can be guessed by "inverting" the multifractal probability distribution function . We expect the following dimensional inversion to be correct (at least as far as leading scaling properties are concerned). We argue that the probability to observe an exit of the signal through a barrier of height $`\delta x`$ in a time $`t(\delta x)`$ is given by $`P_{\delta x}(t(\delta x))\sim (\delta x)^{(1-D(\xi ))/\xi }`$, where the height of the barrier and the exit-time are related by the inversion of the previously introduced multi-affine scaling relation, $`t(\delta x)\sim (\delta x)^{1/\xi }`$. 
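Before turning to the exit-time moments, we note that the Legendre transform (23) is conveniently evaluated by direct minimisation over a grid of Hölder exponents. The sketch below (ours) uses a log-normal-like $`D(\xi )`$ purely as an illustrative ansatz, not a fit to data, with $`\xi _0`$ tuned so that $`\zeta (3)=1`$.

```python
import numpy as np

def zeta(q, xi, D):
    """Saddle-point estimate of Eq. (23) on a grid of Holder
    exponents: zeta(q) = min_xi [ q*xi + 1 - D(xi) ]."""
    q = np.atleast_1d(q)
    return (q[:, None] * xi[None, :] + 1.0 - D[None, :]).min(axis=1)

# illustrative ansatz: D(xi) = 1 - (xi - xi0)^2 / (2*sigma2),
# for which zeta(q) = q*xi0 - q^2*sigma2/2; xi0 chosen so zeta(3) = 1
sigma2 = 0.02
xi0 = (1.0 + 4.5 * sigma2) / 3.0
xi = np.linspace(0.05, 0.75, 701)
D = 1.0 - (xi - xi0) ** 2 / (2.0 * sigma2)
print(np.round(zeta(np.arange(1.0, 9.0), xi, D), 3))   # concave, zeta(3)=1
```

The same grid minimisation serves for the inverse exponents $`\chi (q)`$ introduced below.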
In this framework we may write down the "multifractal" estimate of the exit-time moments, also called inverse structure functions : $$\mathrm{\Sigma }_q(\delta x)\equiv \langle t^q(\delta x)\rangle \sim \int d\xi \,(\delta x)^{\frac{q+1-D(\xi )}{\xi }}\sim (\delta x)^{\chi (q)},$$ (24) where $`\chi (q)`$ is obtained with a saddle-point estimate in the limit of small $`\delta x`$: $$\chi (q)=\underset{\xi }{\mathrm{min}}\left(\frac{q+1-D(\xi )}{\xi }\right).$$ (25) The averaging obtained by counting the number of exit-time events $`M`$ (as we did in the previous sections), denoted below by an overbar, and the averaging with the uniform "multifractal" distribution are connected by the following relation : $$\langle t^q(\delta x)\rangle =\underset{M\to \mathrm{\infty }}{lim}\sum _{i=1}^Mt_i^q\frac{t_i}{\sum _{j=1}^Mt_j}=\frac{\overline{t^{q+1}(\delta x)}}{\overline{t(\delta x)}},$$ where the term $`t_i/\sum _{j=1}^Mt_j`$ takes into account the non-uniformity of the exit-time statistics. From the previous relation evaluated for $`q=-1`$ we can easily deduce the estimate for the mean exit-time scaling law: $$\overline{t(\delta x)}=\langle t^{-1}(\delta x)\rangle ^{-1}\sim (\delta x)^{-\chi (-1)},$$ (26) and therefore, as in the previous sections, we may estimate the leading contribution to the $`ϵ`$-entropy of a multi-affine signal: $$h(\delta x)\sim (\delta x)^{\chi (-1)}.$$ (27) Let us notice that in the simpler case of a self-affine signal with Hölder exponent $`\xi `$ this is nothing but the dimensional estimate $`h(\delta x)\sim (\delta x)^{-1/\xi }`$, which is rigorous for Gaussian processes . In this case the above argument is also in agreement with the bounds (14): indeed, for an affine signal the function $`c(ϵ)`$ entering (14) does not depend on $`ϵ`$ (we note here that $`\delta x`$ plays the same role as $`ϵ`$). In Fig. 7a-b we show the numerical estimate of the bounds (14) on the $`ϵ`$-entropy for two different self-affine signals with Hölder exponents $`\xi =1/3`$ and $`\xi =1/4`$, respectively (for details on the generation of the processes see Appendix A). The agreement with the expected result is very good. Let us notice that with the usual approach to the calculation of the $`ϵ`$-entropy of these simple signals, the detection of the scaling behaviour is not so easy (see Figures 15, 16 and 17 of ). In Fig. 8 we show the numerically computed lower and upper bounds for the $`ϵ`$-entropy of a multi-affine signal, obtained by using the mean exit-time estimate. The multi-affine signal studied here is characterised by the same $`\zeta (q)`$ as the scaling exponents measured in turbulence (see the next section). In particular, this means that $`\zeta (3)=1`$ and, using Eqs. (23) and (25), $`\chi (-1)=-3`$, independently of the shape of $`D(\xi )`$. This is the $`ϵ`$-entropy counterpart of the Kolmogorov $`4/5`$ law . The agreement with the multifractal prediction (the straight lines in Fig. 8) is impressive. To our knowledge this is the first direct estimate of the $`ϵ`$-entropy of multi-affine signals. We stress that the non-trivial aspect of such an estimate is contained in the derivation of the inverse multifractal formulas (24)-(25). ## V $`ϵ`$-entropy and exit times in turbulence A turbulent flow is characterised by the presence of highly non-trivial chaotic fluctuations in space and time . The question we want to address here is which kind of information can be captured by studying the $`ϵ`$-entropy of this important high-dimensional dynamical system. The main physical mechanism is the energy transfer from the large scales, $`L_0`$, i.e. the scales where the forcing is active, down to the dissipation scale, $`\eta `$, where kinetic energy is converted into heat . 
The ratio between these two scales increases with the Reynolds number. Fully developed turbulence corresponds to the limit of very high Reynolds numbers. In this limit, a turbulent velocity field develops scaling laws in the range of scales intermediate between $`L_0`$ and $`\eta `$, the so-called inertial range. Kolmogorov (1941) theory assumes a perfectly self-similar behaviour for the velocity field in the inertial range. In other words, the velocity field was thought to be a continuous self-affine field with Hölder exponent $`\xi =1/3`$ as a function of its spatial coordinates: $$|v(x+R,t)-v(x,t)|\sim R^{1/3}$$ (hereafter, for simplicity, we neglect the vectorial notation). In terms of an averaged observable, this implies that the structure functions, i.e. the moments of simultaneous velocity differences at distance $`R`$, have a pure power-law dependence for $`\eta \ll R\ll L_0`$: $$S_p(R)=\langle |v(x+R,t)-v(x,t)|^p\rangle \sim R^{\zeta (p)},$$ (28) with $`\zeta (p)=p/3`$. Experiments and numerical simulations have indeed shown that there are small (but important) corrections to the Kolmogorov (1941) prediction. This problem goes under the name of intermittency, whose origin is still one of the main open problems in the theory of the Navier-Stokes equations . In the language of the previous section, an intermittent field is a multi-affine process. As far as the time-dependence of a turbulent velocity field is concerned, one can distinguish between two different kinds of time measurement. The first, standard one (actually used in most experimental investigations) consists in measuring the time evolution with a probe fixed at some spatial location, say $`x_p`$, in the flow. The time evolution obtained in this way is strongly affected by the spatial correlations induced by the large-scale sweeping. As a result, one can apply the so-called frozen-turbulence hypothesis (Taylor hypothesis) , which connects a time measurement with a spatial measurement through the relation $$v(x_p,t_0+t)-v(x_p,t_0)\simeq v(x_p-R,t_0)-v(x_p,t_0),$$ with $`R=tU_0`$, where $`U_0`$ is the mean large-scale sweeping velocity characteristic of the experiment. As a result of the Taylor hypothesis, time measurements also show power-law behaviour with the same characteristic exponents as spatial measurements, namely, within the Kolmogorov theory: $$\langle |v(x_p,t_0+t)-v(x_p,t_0)|^p\rangle \sim t^{\zeta (p)}.$$ A second interesting possibility to perform time measurements consists in the so-called Lagrangian measurements . In this case, one has to follow the trajectory of a single fluid particle and measure the time properties locally in the co-moving reference frame. The main characteristic of this method is that the sweeping is removed, so that one can probe in detail the "proper" time fluctuations induced by the non-linear terms of the Navier-Stokes equations (for recent theoretical and numerical investigations of similar issues see ). The phenomenological understanding of all these spatial and temporal properties is well summarised by the Richardson cascade. The cascade picture describes a turbulent flow in terms of a superposition of fluctuations (eddies) hierarchically organised on a set of scales ranging from the largest one, $`L_0`$, to the smallest one, $`\eta `$, say $`\ell _n=2^{-n}L_0`$, with $`n=0,\mathrm{},N_{max}`$ and $`N_{max}=\mathrm{log}_2(L_0/\eta )`$. 
Each scale has its own typical evolution time, $`\tau _n`$, given in terms of the velocity difference at that scale, $`\delta _nv=v(x+\ell _n)-v(x)`$, by the dimensional estimate $`\tau _n=\ell _n/\delta _nv\sim (\ell _n)^{2/3}`$. The most relevant dynamical interactions are supposed to happen only between eddies of similar size, while each eddy is also subject to the spatial sweeping of the eddies at larger scales. The energy is transferred down-scale from the largest eddy (the mother) to its daughters, and from the daughters to their grand-daughters and so on, in a multi-step process similar, quantitatively and qualitatively, to a stochastic multiplicative process . As a result of the previous picture, one can mimic a turbulent flow with a stochastic process hierarchically organised in space, with a suitable time-dependence able to reproduce both the overall sweeping and the hierarchy of eddy-turnover times . In Appendices A and B we briefly recall a possible choice for these stochastic processes. ### A Experimental data analysis We now present the computation of the $`ϵ`$-entropy for two sets of high-Reynolds-number experimental data, obtained from an experiment in Lyon (at $`Re_\lambda =400`$) and from another experiment in Modane (at $`Re_\lambda =2000`$). The measurement in Lyon was taken in a wind tunnel with a working section of 3.0 m and a cross section of (0.5 m)x(0.5 m). Turbulence was generated by a cylinder placed inside the wind tunnel; its diameter was 0.1 m. The hot wire was placed 2.0 m behind the cylinder. The separation between the two probes was approximately 1 mm . The measurement in Modane was taken in a wind tunnel where the integral scale was $`L\approx `$ 20 m and the dissipative scale was $`r_{diss}=`$ 0.3 mm. Let us first make an important remark. Whenever one wants to apply the multifractal formalism to turbulence, there exist some analytical and phenomenological constraints on the shape of the function $`D(\xi )`$ entering the multifractal description. In particular, the most important constraint is the exact result $`\zeta (3)=1`$. This, in turn, implies that, independently of the possible multifractal spectrum of the turbulent field, one has $`\chi (-1)=-3`$. So that, as stated in the previous section, one obtains: $$h(ϵ)\sim ϵ^{\chi (-1)}=ϵ^{-3};$$ (29) this is the $`ϵ`$-entropy equivalent of the $`\zeta (3)=1`$ result, i.e. of the $`4/5`$ law of turbulence (see equations (26) and (27)). This means that there are no intermittent corrections to the $`ϵ`$-entropy. We have tested this prediction (here presented for the first time), which has already been confirmed by the analysis of the stochastic multi-affine signal in Section IV-B, on two different experimental data sets. In Fig. 9 we show the $`ϵ`$-entropy computed for the two sets of experimental data. As one can see, the theoretical prediction $`h(ϵ)\sim ϵ^{-3}`$ is well reproduced only for large values of $`ϵ`$, while for intermediate values the entropy shows a continuous bending without any clear scaling behaviour; only when $`ϵ`$ reaches values corresponding to dissipative velocity fluctuations do we find the dissipative scaling $`\langle t(ϵ)\rangle \sim ϵ`$. The strong intermediate regime between the dissipative and the inertial scaling behaviours is not a simple, out-of-control, finite-Reynolds effect. In fact, within the multifractal model of turbulence, one can understand the large crossover between the two power laws in terms of the so-called Intermediate Dissipative Range (IDR). 
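Because the leading behaviour is $`h(ϵ)\sim 1/\langle t(ϵ)\rangle `$, the prediction (29) can be probed on any fixed-probe velocity record by measuring the mean exit time. The sketch below (ours) replaces real data with a Gaussian monofractal surrogate with Hölder exponent 1/3, built in Fourier space; it therefore displays the $`\langle t(ϵ)\rangle \sim ϵ^3`$ trend without intermittency corrections and without an IDR.

```python
import numpy as np

def mean_exit_time(v, eps):
    """Mean exit time (in sampling units) through barriers of size eps;
    the leading entropy estimate is then h(eps) ~ 1/<t(eps)>."""
    ref, last, tot, n = v[0], 0, 0, 0
    for i in range(1, len(v)):
        if abs(v[i] - ref) >= eps / 2.0:
            tot += i - last
            n += 1
            ref, last = v[i], i
    return tot / max(n, 1)

# random-phase surrogate with amplitude spectrum ~ k^{-(2H+1)/2}, H = 1/3
rng = np.random.default_rng(3)
N, H = 2**20, 1.0 / 3.0
k = np.fft.rfftfreq(N)[1:]
spec = k ** (-(2.0 * H + 1.0) / 2.0) * np.exp(2j * np.pi * rng.random(k.size))
v = np.fft.irfft(np.concatenate(([0.0], spec)), n=N)
v /= v.std()
for eps in (0.05, 0.1, 0.2, 0.4):
    print(eps, mean_exit_time(v, eps))        # expect <t(eps)> ~ eps^3
```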
The existence of an IDR was originally predicted in and further analysed in . The IDR carries the signature of the mechanism stopping the turbulent energy cascade, i.e. of how viscous mechanisms become effective in dissipating turbulent energy. In particular, it was shown that the IDR can be fully described within the multifractal description once one allows for the possibility of different viscous cut-offs depending on the local degree of velocity singularity, i.e. depending on the local realization of the scaling exponent $`\xi `$. The main idea consists in using again the multifractal superposition (24), but taking into account that, for velocity fluctuations at the edge between the inertial and the viscous range, not all possible scaling exponents contribute to the average . It turns out that in the case of exit-time moments the extension of the IDR is much more important than what was previously measured for the velocity structure functions (28). Therefore, the strong finite-range effects shown by the experimental data analysis of Fig. 9 can be qualitatively and quantitatively understood as an effect of the IDR . Let us conclude this section by comparing our results with a previous study of the $`ϵ`$-entropy in turbulence . There, the following scaling behaviour was argued for: $$h(ϵ)\sim ϵ^{-2},$$ (30) which differs from our prediction. The behaviour (30) was obtained by assuming that $`h(ϵ)`$ at scale $`ϵ`$ is proportional to the inverse of the typical eddy-turnover time at that scale. We remind the reader that here $`ϵ`$ represents a velocity fluctuation $`\delta v`$. Since the typical eddy-turnover time for velocity fluctuations of order $`\delta v\sim ϵ`$ is $`\tau (ϵ)\sim ϵ^2`$, Eq. (30) follows. Recalling the discussion of Section V about the two possible ways of measuring a turbulent time signal, it is clear that the scaling (30) holds only in a Lagrangian reference frame (see also ). This explains the difference between our prediction and (30). ### B An $`ϵ`$-entropy analysis of the Taylor hypothesis in fully developed turbulence By studying the $`ϵ`$-entropy for the velocity field of turbulent flows in $`3+1`$ dimensions, $`h^{st}(ϵ)`$ ($`st`$ indicates space and time), we argue that the usually accepted Taylor hypothesis implies a spatial correlation which can be quantitatively characterised by an "entropy" dimension $`𝒟=8/3`$. In this section, for the sake of simplicity, we neglect intermittency, i.e. we assume a purely self-affine field with Hölder exponent $`\xi =1/3`$. We now discuss how to construct a multi-affine field with the proper spatial and temporal scaling. The idea consists in defining the signal as a dyadic three-dimensional superposition of wavelet-like functions $`\phi ((𝐱-𝐱_{n,𝐤}(t))/\ell _n)`$ whose centres move according to a swept dynamics. The coefficients of the decomposition, $`a_{n,𝐤}(t)`$, are stochastic functions chosen with suitable self-affine scaling properties both in time and in space. In particular, the exact definition for a field with spatial Hölder exponent $`\xi `$ in $`d`$ dimensions is (see Appendices A and B for details): $$v(𝐱,t)=\sum _{n=1}^{M}\sum _{k=1}^{2^{d(n-1)}}a_{n,k}(t)\phi \left(\frac{𝐱-𝐱_{n,k}(t)}{\ell _n}\right),$$ (31) where $`𝐱_{n,k}`$ is the centre of the $`k^{th}`$ wavelet at level $`n`$ (for each dimension we consider one branching, i.e. two variables, in passing to the $`n+1`$ level; see Fig. 10). 
According to the Richardson-Kolmogorov cascade picture, one assumes that sweeping is present, i.e., $`𝐱_{n+1,k}=𝐱_{n,k^{}}+𝐫_{n+1,k}`$, where $`(n,k^{})`$ labels the "mother" of the $`(n+1,k)`$-eddy and $`𝐫_{n+1,k}`$ is a stochastic vector which depends on $`𝐫_{n,k^{}}`$ and evolves with characteristic time $`\tau _n\sim (\ell _n)^{1-\xi }`$. If the coefficients $`\{a_{n,k}\}`$ and $`\{𝐫_{n,k}\}`$ have characteristic times $`\tau _n\sim (\ell _n)^{1-\xi }`$, and $`\{a_{n,k}\}\sim (\ell _n)^\xi `$, it is possible to show (see Appendices A and B for details) that the field (31) has the properties $$|v(𝐱+𝐑,t_0)-v(𝐱,t_0)|\sim |𝐑|^\xi ,$$ (32) $$|v(𝐱,t_0+t)-v(𝐱,t_0)|\sim t^\xi ;$$ (33) in addition, the proper Lagrangian sweeping is satisfied. Now we are ready for the $`ϵ`$-entropy analysis of the field (31). If one wants to look at the field $`v`$ with a resolution $`ϵ`$, one has to take $`n`$ up to $`N`$, given by: $$(\ell _N)^\xi \sim ϵ;$$ (34) in this way we are sure to consider velocity fluctuations of order $`ϵ`$. Then the number of terms contributing to (31) is $$\mathrm{\#}(ϵ)\sim (2^d)^N\sim ϵ^{-d/\xi }.$$ (35) By using a result of Shannon one estimates the $`ϵ`$-entropy of the process $`a_{n,k}(t)`$ (and also of $`𝐫_{n,j}`$) as: $$h_n(ϵ)\sim \frac{1}{\tau _n}\mathrm{log}\left(\frac{1}{ϵ}\right),$$ (36) where the above relation is rigorous if the processes $`a_{n,k}(t)`$ are Gaussian and have a power spectrum different from zero over a band of frequencies $`\sim 1/\tau _n`$. The terms which give the main contribution are those with $`n\approx N`$, for which $`\tau _N\sim (\ell _N)^{1-\xi }\sim ϵ^{(1-\xi )/\xi }`$. Collecting the above results, one finds $$h^{st}(ϵ)\sim \frac{\mathrm{\#}(ϵ)}{\tau _N}\sim ϵ^{-\frac{d+1-\xi }{\xi }}.$$ (37) For the physical case $`d=3`$, $`\xi =1/3`$, one obtains $$h^{st}(ϵ)\sim ϵ^{-11}.$$ (38) The above result has already been obtained in with different considerations. Denoting by $`v_k`$ the typical velocity at the Kolmogorov scale $`\eta `$, one has that Eq. (38) holds in the inertial range, i.e. for $`ϵ\ge v_k\sim Re^{-1/4}`$, while for $`ϵ\le v_k`$, $`h^{st}(ϵ)=`$ constant $`\sim Re^{11/4}`$. Let us now discuss the physical implications of (37). Consider an alternative way to compute the $`ϵ`$-entropy of the field $`v(𝐱,t)`$: divide the $`d`$-volume into boxes of edge length $`\ell (ϵ)\sim ϵ^{1/\xi }`$ and look at the signals $`v(𝐱_\alpha ,t)`$, where the $`𝐱_\alpha `$ are the centres of the boxes. In each $`𝐱_\alpha `$ we have a time record whose $`ϵ`$-entropy is $$h^{(\alpha )}(ϵ)\sim ϵ^{-1/\xi },$$ (39) because of the scaling (33). In (39) we use the symbol $`h^{(\alpha )}`$ to denote the entropy of the temporal evolution of the velocity field measured at $`𝐱_\alpha `$. Therefore, $`h^{st}(ϵ)`$ will be obtained by summing up all the "independent" contributions (39), i.e. $$h^{st}(ϵ)\sim 𝒩(ϵ)h^{(\alpha )}(ϵ)\sim 𝒩(ϵ)ϵ^{-1/\xi },$$ (40) where $`𝒩(ϵ)`$ is the number of independent cells. It is easy to understand that the simplest assumption $`𝒩(ϵ)\sim \ell (ϵ)^{-d}\sim ϵ^{-d/\xi }`$ gives a wrong result; indeed, one obtains $$h^{st}(ϵ)\sim ϵ^{-\frac{d+1}{\xi }},$$ (41) which is not in agreement with (37). In order to obtain the correct result (38) it is necessary to assume $$𝒩(ϵ)\sim \ell (ϵ)^{-𝒟},$$ (42) with $`𝒟=d-\xi `$. In other words, one has to consider that the sweeping implies a non-trivial spatial correlation, quantitatively measured by the exponent $`𝒟`$, which can be considered as a sort of "entropy" dimension. Incidentally, we note that $`𝒟`$ has the same numerical value as the fractal dimension of the iso-surfaces $`v=const.`$ . 
From this observation, at first glance, one could conclude that the above result is somehow trivial, since it is simply related to a geometrical fact. However, a closer inspection reveals that this is not true. Indeed, one can construct a self-affine field with spatial scaling $`\xi `$, and thus with the fractal dimension of the iso-surfaces $`v=const.`$ given by $`d-\xi `$ for geometrical reasons, while $`𝒟=d`$. Such a process can be obtained simply by eliminating the sweeping, i.e.,

$$v(𝐱,t)=\sum_{n=1}^{M}\sum_{k=1}^{2^{d(n-1)}}a_{n,k}(t)\,\phi \left(\frac{𝐱-𝐱_{n,k}}{\ell_n}\right),$$ (43)

where now the $`𝐱_{n,k}`$ are fixed and no longer time-dependent, while $`a_{n,k}\sim (\ell_n)^\xi `$ but $`\tau _n\sim \ell_n`$. For a field described by (43), (32) and (33) hold but $`h^{st}(ϵ)\sim ϵ^{-\frac{d+1}{\xi }}`$ and $`𝒟=d`$, while the fractal dimension of the iso-surfaces $`v=const.`$ is $`d-\xi `$. We conclude by noting that it is possible to obtain (see ) the scaling (37) using equation (43), i.e. ignoring the sweeping, by assuming $`\tau _n\sim (\ell_n)^{1-\xi }`$ and $`a_{n,k}\sim (\ell_n)^\xi `$; this corresponds to taking separately the proper temporal and spatial spectra. However, this is not completely satisfactory, since one does not have the proper scaling at one fixed point (see Eq. (39)); the only way to obtain this is through the sweeping.

## VI Conclusion

In this paper we have discussed a method, based on the analysis of the exit-time statistics, for the computation of the $`ϵ`$-entropy. The basic idea is to look at a sequence of data not at a fixed sampling time but only when the fluctuation in the signal is larger than some fixed threshold, $`ϵ`$. This procedure remarkably improves the ability to compute the $`(ϵ,\tau )`$-entropy, which is well represented by the exact results (12) and the bounds (14). This approach is particularly suitable in all systems without a unique characteristic time. In these cases the method based on a coarse-grained dynamics on a fixed $`(ϵ,\tau )`$ grid does not work very efficiently, since words of very great length are involved. On the basis of the coding in terms of the exit-time events, we are able to give significant lower and upper bounds on the $`ϵ`$-entropy. We have applied the method to different systems: chaotic diffusive maps, intermittent maps showing sporadic chaos, self-affine and multi-affine stochastic processes, and experimental turbulent data. Applying the multifractal formalism, one predicts the scaling $`h(ϵ)\sim ϵ^{-3}`$ for time measurements of the velocity at one point in turbulent flows. This power law does not depend on the intermittent corrections and has been confirmed by the experimental data analysis. Moreover, we have shown the connection between the Taylor frozen-turbulence hypothesis and the $`ϵ`$-entropy: the sweeping implies a nontrivial spatial correlation, quantitatively measured by an “entropy” dimension $`𝒟=8/3`$.

## VII Acknowledgments

We acknowledge useful discussions with G. Boffetta and A. Celani and the encouragement by B. Marani. We are deeply indebted to Y. Gagne and to G. Ruiz-Chavarria for having provided us with their experimental data. This work has been partially supported by INFM (PRA-TURBO) and by the European Network Intermittency in Turbulent Systems (contract number FMRX-CT98-0175) and the MURST cofinanziamento 1999 “Fisica statistica e teoria della materia condensata”. M.A. is supported by the European Network Intermittency in Turbulent Systems.
## VIII Appendix A

In this Appendix we recall some recently obtained results on the generation of multi-affine stochastic signals . The goal is to have a stochastic process whose scaling properties are fully under control. The first step consists in generating a $`1`$-dimensional signal, and the second in decorating it so as to build the most general $`(d+1)`$-dimensional process, $`v(𝐱,t)`$, with given scaling properties in time and in space. For the simplest case of a $`1`$-dimensional system there are at least two different kinds of algorithms. One is based on a dyadic decomposition of the signal in a wavelet basis with a suitably assigned series of stochastic coefficients . The second is based on a multiplication of sequential Langevin processes with a hierarchy of different characteristic times . The first procedure is particularly appealing for modelling spatial turbulent fluctuations, because of the natural identification between wavelets and eddies in the physical space. The second one, on the other hand, looks more appropriate for mimicking the turbulent time evolution at a fixed point of space, because of its sequential nature. Let us first summarise the main ingredients of both, and then briefly explain how to merge them in order to have a realistic spatio-temporal multi-affine signal.

A non-sequential algorithm for a $`1`$-dimensional multi-affine signal in $`[0,1]`$, $`v(x)`$, can be defined as :

$$v(x)=\sum_{n=1}^{N}\sum_{k=1}^{2^{(n-1)}}a_{n,k}\,\phi \left(\frac{x-x_{n,k}}{\ell_n}\right)$$ (44)

where we have introduced a set of reference scales $`\ell_n=2^{-n}`$, and the function $`\phi (x)`$ is a wavelet-like function , i.e. of zero mean and rapidly decaying in both real space and Fourier space. The signal $`v(x)`$ is built in terms of a superposition of fluctuations, $`\phi ((x-x_{n,k})/\ell_n)`$, of characteristic width $`\ell_n`$ and centred in different points of $`[0,1]`$, $`x_{n,k}=(2k+1)/2^{n+1}`$. In it has been proved that, provided the coefficients $`a_{n,k}`$ are chosen by a random multiplicative process, i.e. the daughter is given in terms of the mother by a random process, $`a_{n+1,k^{\prime}}=X\,a_{n,k}`$ with $`X`$ a random number i.i.d. for any $`\{n,k\}`$, then the result of the superposition is a multi-affine function with given scaling exponents, namely:

$$\langle |v(x+R)-v(x)|^p\rangle \sim R^{\zeta (p)},$$

with $`\zeta (p)=p/2-\mathrm{log}_2\langle X^p\rangle `$ and $`\ell_N\le R\le 1`$. In this Appendix $`\langle \cdot \rangle `$ indicates the average over the probability distribution of the multiplicative process. Beyond the rigorous proof, the rationale for the previous result is simply that, due to the hierarchical organisation of the fluctuations, one may easily estimate that the term dominating the expression of a velocity fluctuation at scale $`R`$ in (44) is given by the couple of indices $`\{n,k\}`$ such that $`n\simeq -\mathrm{log}_2(R)`$ and $`x\simeq x_{n,k}`$, i.e. $`v(x+R)-v(x)\sim a_{n,k}`$. The generalisation of (44) to $`d`$-dimensional fields is given by:

$$v(𝐱)=\sum_{n=1}^{N}\sum_{k=1}^{2^{d(n-1)}}a_{n,k}\,\phi \left(\frac{𝐱-𝐱_{n,k}}{\ell_n}\right),$$

where now the coefficients $`a_{n,k}`$ are given in terms of a $`d`$-dimensional dyadic multiplicative process. This class of stochastic fields has been of great help in mimicking simultaneous spatial fluctuations of turbulent flows. On the other hand, as previously said, sequential algorithms look more suitable for mimicking temporal fluctuations.
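As an illustration of the non-sequential construction (44), the following is a minimal Python sketch; it is our own illustration, not code from the original work, and the choice of a Mexican-hat wavelet for $`\phi `$, the log-normal multiplier $`X`$, and all numerical parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(u):
    # Wavelet-like function: zero mean, rapidly decaying (Mexican hat).
    return (1.0 - u**2) * np.exp(-u**2 / 2.0)

def multiaffine_signal(N=10, points=4096, xi=1.0/3.0):
    """Build v(x) on [0,1] via the dyadic superposition of eq. (44).

    Coefficients follow the multiplicative rule a_{n+1,k'} = X a_{n,k};
    here X = 2**(-xi) * exp(0.1 * gaussian) is an assumed log-normal
    multiplier giving a_n of order (l_n)**xi on average.
    """
    x = np.linspace(0.0, 1.0, points, endpoint=False)
    v = np.zeros_like(x)
    a = [np.array([1.0])]                  # level 1 has a single coefficient
    for n in range(1, N):
        X = 2.0**(-xi) * np.exp(0.1 * rng.standard_normal(2**n))
        a.append(np.repeat(a[-1], 2) * X)  # each mother -> two daughters
    for n in range(1, N + 1):
        ln = 2.0**(-n)                     # reference scale l_n = 2^-n
        k = np.arange(2**(n - 1))
        centres = (2*k + 1) / 2.0**(n + 1) # x_{n,k} = (2k+1)/2^{n+1}
        for c, coeff in zip(centres, a[n - 1]):
            v += coeff * phi((x - c) / ln)
    return x, v

x, v = multiaffine_signal()
# Crude structure-function check: S_2(R) should grow with R roughly
# as a power law with exponent zeta(2) of the multiplicative process.
for shift in (4, 16, 64, 256):
    dv = np.abs(v[shift:] - v[:-shift])
    print(f"R = {shift/len(x):.4f}   S_2 = {np.mean(dv**2):.3e}")
```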
Let us now discuss how to construct these stochastic multi-affine fields. With the application to time fluctuations in mind, we will now denote the stochastic $`1`$-dimensional functions by $`u(t)`$. The signal $`u(t)`$ is obtained by a superposition of functions with different characteristic times, representing eddies of various sizes :

$$u(t)=\sum_{n=1}^{N}u_n(t).$$ (45)

The functions $`u_n(t)`$ are defined by the multiplicative process

$$u_n(t)=g_n(t)\,x_1(t)\,x_2(t)\mathrm{\cdots}x_n(t),$$ (46)

where the $`g_n(t)`$ are independent stationary random processes, whose correlation times are supposed to be $`\tau _n=(\ell_n)^\alpha `$, where $`\alpha =1-\xi `$ (i.e. the $`\tau _n`$ are the eddy-turnover times at scale $`\ell_n`$) in the quasi-Lagrangian reference frame, and $`\alpha =1`$ if one considers $`u(t)`$ as the time signal at a given point, and $`\langle g_n^2\rangle =(\ell_n)^{2\xi }`$, where $`\xi `$ is the Hölder exponent. For a signal mimicking a turbulent flow, ignoring intermittency, we would have $`\xi =1/3`$. Scaling will appear for all time delays larger than the UV cutoff $`\tau _N`$ and smaller than the IR cutoff $`\tau _1`$. The $`x_j(t)`$ are independent, positive-definite, identically distributed random processes whose time correlation decays with the characteristic time $`\tau _j`$. The probability distribution of $`x_j`$ determines the intermittency of the process.

The origin of (46) is fairly clear in the context of fully developed turbulence. Indeed we can identify $`u_n`$ with the velocity difference at scale $`\ell_n`$ and $`x_j`$ with $`(\epsilon _j/\epsilon _{j-1})^{1/3}`$, where $`\epsilon _j`$ is the energy dissipation at scale $`\ell_j`$. The following argument shows that the process defined according to (45)-(46) is multi-affine. Because of the fast decrease of the correlation times $`\tau _j=(\ell_j)^\alpha `$, the characteristic time of $`u_n(t)`$ is of the order of the shortest one, i.e., $`\tau _n=(\ell_n)^\alpha `$. Therefore, the leading contribution to the structure function $`\stackrel{~}{S}_q(\tau )=\langle |u(t+\tau )-u(t)|^q\rangle `$ with $`\tau \simeq \tau _n`$ stems from the $`n`$-th term in (45). This can be understood by noting that in the sum $`u(t+\tau )-u(t)=\sum_{k=1}^{N}[u_k(t+\tau )-u_k(t)]`$ the terms with $`k<n`$ are negligible, because $`u_k(t+\tau )\simeq u_k(t)`$, and the terms with $`k>n`$ are sub-leading. Thus one has:

$$\stackrel{~}{S}_q(\tau _n)\sim \langle |u_n|^q\rangle \sim \langle |g_n|^q\rangle \langle x^q\rangle ^n\sim \tau _n^{\frac{\xi q}{\alpha }-\frac{\mathrm{log}_2\langle x^q\rangle }{\alpha }}$$ (47)

and therefore for the scaling exponents:

$$\zeta _q=\frac{\xi q}{\alpha }-\frac{\mathrm{log}_2\langle x^q\rangle }{\alpha }.$$ (48)

The limit of an affine function is obtained when all the $`x_j`$ are equal to $`1`$. A proper proof of these results can be found in . Let us notice at this stage that the previous “temporal” signal with $`\alpha =1-\xi `$ is a good candidate for velocity measurements in a Lagrangian, co-moving reference frame (see the body of the article). Indeed, in such a reference frame the temporal decorrelation properties at scale $`\ell_n`$ are given by the eddy-turnover times $`\tau _n=(\ell_n)^{1-\xi }`$. On the other hand, in the laboratory reference frame the sweeping dominates the time evolution at a fixed point of space, and we must use as characteristic times of the processes $`x_n(t)`$ the sweeping times $`\tau _n^{(s)}=\ell_n`$, i.e., $`\alpha =1`$.
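A minimal Python sketch of the sequential construction (45)-(46) follows; again this is our own illustration, with the $`g_n`$ and $`\mathrm{log}\,x_n`$ realised as Ornstein-Uhlenbeck-type processes of correlation time $`\tau _n`$ (the log-normal choice for $`x_n`$ is an assumption).

```python
import numpy as np

rng = np.random.default_rng(1)

def ou_step(value, tau, dt, noise_std):
    # One Euler step of an Ornstein-Uhlenbeck process with
    # correlation time tau and stationary std noise_std.
    return value - (dt / tau) * value \
           + noise_std * np.sqrt(2.0 * dt / tau) * rng.standard_normal()

def sequential_signal(N=8, steps=50_000, dt=1e-4, xi=1.0/3.0, alpha=None,
                      sigma=0.2):
    """Generate u(t) = sum_n g_n(t) x_1(t)...x_n(t), eqs. (45)-(46).

    alpha = 1 - xi  -> quasi-Lagrangian times (eddy turnover),
    alpha = 1       -> Eulerian fixed-point times (sweeping).
    """
    if alpha is None:
        alpha = 1.0 - xi
    ln = 2.0 ** -np.arange(1, N + 1)        # reference scales l_n
    tau = ln ** alpha                       # characteristic times tau_n
    g = rng.standard_normal(N) * ln ** xi   # <g_n^2> ~ (l_n)^(2 xi)
    z = np.zeros(N)                         # x_n = exp(z_n) is positive
    u = np.empty(steps)
    for t in range(steps):
        for n in range(N):
            g[n] = ou_step(g[n], tau[n], dt, ln[n] ** xi)
            z[n] = ou_step(z[n], tau[n], dt, sigma)
        x = np.exp(z)
        # u_n(t) = g_n(t) x_1...x_n: cumulative product over levels.
        u[t] = np.sum(g * np.cumprod(x))
    return u

u = sequential_signal()
print("std of u(t):", u.std())
```

Setting `sigma = 0` makes all the $`x_j`$ equal to $`1`$, recovering the affine limit mentioned above.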
## IX Appendix B

We now have all the ingredients to merge the temporal and spatial properties of a turbulent signal, in order to define stochastic processes able to reproduce in a realistic way both spatial and temporal fluctuations in a Lagrangian reference frame. We just have to merge the two previous algorithms in a proper way. For example, for a $`d`$-dimensional multi-affine field such as, say, one of the three components of a turbulent field in a Lagrangian reference frame, we can use the following model:

$$v_L(𝐱,t)=\sum_{n=1}^{N}\sum_{k=1}^{2^{d(n-1)}}a_{n,k}(t)\,\phi \left(\frac{𝐱-𝐱_{n,k}}{\ell_n}\right),$$ (49)

where the temporal dependence of $`a_{n,k}(t)`$ is chosen following the sequential algorithm, while its spatial part is given by the dyadic structure of the non-sequential algorithm. In (49) we have used the notation $`v_L(𝐱,t)`$ in order to stress the typically Lagrangian character of such a field. We are now also able to guess a good candidate for the same field measured in the laboratory reference frame, i.e. where the time properties are dominated by the sweeping of small scales by large scales. Indeed, it is enough to physically reproduce the sweeping effects by allowing the centres of the wavelet-like functions used to mimic the eddy-like turbulent structures to move according to a swept dynamics. To do so, let us define the Eulerian model:

$$v_E(𝐱,t)=\sum_{n=1}^{N}\sum_{k=1}^{2^{d(n-1)}}a_{n,k}(t)\,\phi \left(\frac{𝐱-𝐱_{n,k}(t)}{\ell_n}\right),$$ (50)

where the difference with the previous definition lies in the temporal dependence of the centres of the wavelets, $`𝐱_{n,k}(t)`$. According to the Richardson-Kolmogorov cascade picture, one assumes that sweeping is present, i.e., $`𝐱_{n,k}=𝐱_{n-1,k^{\prime}}+𝐫_{n,k}`$, where $`(n-1,k^{\prime})`$ labels the “mother” of the $`(n,k)`$-eddy and $`𝐫_{n,k}`$ is a stochastic vector which depends on $`𝐫_{n-1,k^{\prime}}`$ and evolves with characteristic time $`\tau _n\sim (\ell_n)^{1-\xi }`$. Furthermore, its norm is $`O(\ell_n)`$: $`c_1<|𝐫_{n,k}|/\ell_n<c_2`$, where $`c_1`$ and $`c_2`$ are constants of order one.

We now see that measuring, at one fixed spatial point, a fluctuation over a time delay $`\delta t`$ is like measuring a simultaneous fluctuation at scale separation $`R=U_0\delta t`$: due to the sweeping, the main contribution to the sum is given by the terms with scale index $`n\simeq -\mathrm{log}_2(R)`$, with $`R=U_0\delta t`$, while the temporal dependence of the coefficients $`a_{n,k}(t)`$ is practically frozen on that time scale. This happens because in the presence of the sweeping the main contribution is given by the displacement of the centre at large scale, i.e. $`\delta r_0=|𝐫_{0}(t+\delta t)-𝐫_{0}(t)|\simeq U_0\delta t`$, and the eddy-turnover time at scale $`\ell_n`$ is $`O((\ell_n)^{1-\xi })`$, always larger than the sweeping time $`O(\ell_n)`$ at the same scale. In the previous discussion, for the sake of simplicity, we did not consider the incompressibility condition. However, one can take this constraint into account by projecting onto the solenoidal space. In conclusion, we have a way to build up a synthetic signal with the proper Eulerian (laboratory) properties, i.e. with sweeping, and also with the proper Lagrangian properties.
# MOO: A Methodology for Online Optimization through Mining the Offline Optimum

## 1 Introduction

In online optimization, a stream of tasks arrives at a system for service. Each task must be served, before the next arrival, at a cost that depends on the system's state, which may be changed by the task. The objective is to minimize the cost of servicing the entire task stream. The introduction of competitive analysis \[ST, KMRS\] inspired a large body of work on online optimization in the last ten years \[BoE\]. This form of analysis uses a competitive ratio to compare the online heuristic's cost to the offline optimum (obtained with the task stream known in advance). In other words, the objective of the online decision algorithm is to match the offline optimum, and this often means imitating the latter. This objective is the basis of our proposal of a new methodology for online optimization. Suppose there are patterns in the task arrivals, i.e. task generation is constrained by a distribution; these patterns and the cost structure in turn combine to induce patterns in the offline optimum solution, and the online decision algorithm can exploit these patterns to get close to the optimum. Hence, the idea is:

Step 1: Take a task stream (the training stream) that was previously generated by the distribution.
Step 2: Obtain the offline optimum solution (i.e. the sequence of decisions for servicing the tasks).
Step 3: Transform the optimum solution into a database of records.
Step 4: Apply data mining to this database to extract patterns.
Step 5: Use the patterns to formulate online decision rules for servicing a task stream (the test stream) generated by the same distribution.

We call this methodology for online optimization MOO, whose essential feature is mining the offline optimum (Step 4). This feature distinguishes MOO from the vast literature in machine learning and database mining; it is also different from applying algorithms for online learning to online optimization \[BB\], from using data collected online to make decisions \[KMMO, FM\], and from mining database access histories for buffer management \[FLTT\].

MOO's strengths are: (1) It is a methodology that is applicable to a wide range of problems in online optimization (e.g. taxi assignment \[FRR\], packet routing \[AAFPW\], web caching \[Y\]). (2) It requires minimal knowledge about the task distribution and cost structure (and the mining in Step 4 makes no effort to discover them). (3) The sort of information to be mined (classification, association, clustering, etc.) may vary to suit the context. (4) The technique for mining (item-set sampling, neural networks, etc.) can be appropriately chosen.

On the other hand, MOO's weaknesses are: (1) An optimum solution for the training stream must be available. This is an issue if no tractable algorithm is known for generating the optimum. MOO, however, only requires the availability of the optimum and does not assume its tractability; it thus treats the optimum solution like an oracle. This oracle may, in fact, be human, in which case the methodology's objective is to approximate the expert's performance (for this, MOO is milking the oracle offline). Incidentally, the oracle may yield the optimum solution without providing information about the costs. (2) The task distribution must be stationary \[KMMO\], so that the information mined with the training stream remains relevant for the test stream. (3) MOO may need a significant amount of memory to store the rules for making online decisions.
To demonstrate MOO, we apply it to the $`k`$-server problem. We chose this problem because it is the prototypical and most intensively studied online problem \[BoE\]. It is also close to a container yard management problem that the Port of Singapore Authority is interested in. The decision is cast as a classification problem, and we use Quinlan's C4.5 to mine the optimum, as well as for online classification. This software \[Q\] was written for machine learning, but suffices for our purpose since the data set is not large and both the offline mining and online classification are fast. However, we envisage that other applications of MOO (e.g. using techniques other than classification, or approximating an expert through mining historical data) may require software that is specifically equipped with data mining technology \[A+, H+\].

We present here an experimental study of how classification can be used for the $`k`$-server problem. The objectives are: to establish the viability of the methodology; to explore how MOO's effectiveness is influenced by the strength of patterns, the cost structure, the stream lengths, etc.; and to prepare a case for access to commercial data. As is implicit in that third objective, our experiments use synthetic data; this is because a systematic exploration of MOO's effectiveness requires controlled experiments in which various factors can be tuned individually, whereas real data are affected by constraints and noise (that affect optimality), and these get in the way of a feasibility study that tries to build up an understanding of the methodology. Moreover, gaining access to commercial data is difficult without first making a case with synthetic data. (As far as we know, no real data for the $`k`$-server problem is available in the research community.)

The work reported here is significant in the following ways: (1) The experiments on synthetic data show that the methodology is feasible: MOO fits into the gap between the offline optimum and other online heuristics, can come close to the optimum for strong patterns, does well for weak patterns, and is robust with respect to the cost structure. (2) It shows that optimization can be recast as classification. (3) MOO is a novel application of a concept in data engineering to a problem in algorithm theory, thus serving as a bridge between the two: this application poses challenging new problems in the analysis of online optimization (see Section 5.2); conversely, data mining (being an art; consider Steps 3 to 5) will benefit from the algorithm community's insight into what information to look for and how to do the mining. (For example, the optimum solution for buffer replacement \[MS\] suggests that association rules $`S\rightarrow P`$ between a set of pages $`S`$ and a page reference $`P`$ should be annotated by a “distance” $`d`$ between $`S`$ and $`P`$ mined from the reference stream, and $`d`$ used for buffer management \[TTL\].) By offering a database perspective on online optimization, MOO has the potential of facilitating a mutually enriching interaction among database management, machine learning and algorithm analysis.

We first describe the $`k`$-server problem in Section 2. The experimental setup is presented in Section 3 and the results examined in Section 4. Section 5 then concludes with a summary of our observations and poses some interesting and hard problems for this new application of data mining.
## 2 The $`k`$-server problem

The $`k`$-server problem is defined on a set of points with a distance function $`d`$. Conceptually, the set may be infinite but, for our experiments, it consists of $`n`$ nodes. Unlike most papers on $`k`$-servers, we do not require that $`d`$ satisfy the triangular inequality, nor that it be symmetric. We also do not assume that $`d`$ is known to the online decision algorithm. There are $`k`$ servers who are positioned at different nodes. (Some authors allow multiple servers at one node \[KP\].) A task is a request that specifies a node $`i`$, and is served at 0 cost if there is already a server at $`i`$, or by moving a server from some node $`j`$ to $`i`$, at cost $`d(j,i)`$. (Some authors allow multiple server movements per task \[CL\].) A task stream is a sequence of arriving requests $`T_1,\mathrm{\ldots},T_s`$; an online solution uses only $`T_1,\mathrm{\ldots},T_{m-1}`$ to determine how $`T_m`$ is served, while an offline solution uses $`T_1,\mathrm{\ldots},T_s`$ to determine how each request is served. A configuration is a set of $`k`$ nodes that specifies the location of the servers before the arrival of a request.

Most algorithms in the literature for the $`k`$-server problem are for special cases. For example, Fiat et al.'s marking algorithm is for paging, and Coppersmith et al.'s RWALK is for resistive metric spaces \[FKLMSY, CDRS\]. The work function algorithm \[KP\] is, in theory, applicable to any $`k`$-server problem, but it is computationally intensive and (as far as we know) implemented only for special cases. In our experiments, we compare MOO to three algorithms. If an arriving request is for node $`i`$ and there is no server at $`i`$, these algorithms respond as follows:

Greedy: Choose a server at node $`j`$ for which $`d(j,i)`$ is minimum.
Balance: Let $`b_j=c_j+d(j,i)`$, where $`c_j`$ is the cost incurred so far by the server at node $`j`$; choose a server with minimum $`b_j`$ \[MMS\].
Harmonic: Let $`h_j=1/d(j,i)`$ for each node $`j`$ with a server; choose the server at $`j`$ with probability $`h_j/\sum_r h_r`$ \[RS\].

Note that, unlike MOO, these three heuristics require knowledge of $`d`$. (A code sketch of these heuristics is given below, after Section 3.1.)

## 3 Experimental setup

### 3.1 Classification

In classification, a decision tree is built from a set of cases, where each case is a tuple of attribute values. Each attribute may be discrete (i.e. its values come from a finite set) or continuous (i.e. the possible values form the real line). Each case can be assigned a class, which may also be discrete (e.g. good, bad) or continuous (e.g. temperature). Each leaf in the decision tree is a class, and each internal node branches out based on the outcome of a test on an attribute's value. The tree is built from cases with known classification, and a test case can then be classified by traversing the tree from root to leaf, along a path determined by the test outcomes.

For the $`k`$-server problem, the request distribution and distance function induce patterns in the optimum decisions, and MOO tries to extract these patterns for use in online assignment. Specifically, we look for patterns that relate an assignment to the arriving request and the configuration it sees. Hence, the class specifies which node to move the server from, and the classification is based on $`n+1`$ attributes in a case, where one attribute specifies the arriving request and the other $`n`$ attributes specify whether a node has a server; the class and attributes are considered discrete.
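For concreteness, here is a minimal, runnable Python sketch of the three heuristics; this is our own illustration, and the set-based representation of the configuration, the random request stream, and the starting configuration are assumptions, not the paper's implementation.

```python
import random

random.seed(0)

def greedy(servers, i, d, cost):
    # Choose the server at node j minimizing d(j, i).
    return min(servers, key=lambda j: d[j][i])

def balance(servers, i, d, cost):
    # Choose the server minimizing b_j = c_j + d(j, i),
    # where cost[j] is the cost incurred so far by the server at j.
    return min(servers, key=lambda j: cost[j] + d[j][i])

def harmonic(servers, i, d, cost):
    # Choose the server at j with probability proportional to 1/d(j, i).
    nodes = list(servers)
    weights = [1.0 / d[j][i] for j in nodes]
    return random.choices(nodes, weights=weights, k=1)[0]

def serve(stream, k, d, heuristic):
    servers = set(range(k))          # start with servers on nodes 0..k-1
    cost = {j: 0.0 for j in servers}
    total = 0.0
    for i in stream:
        if i in servers:
            continue                 # request served at zero cost
        j = heuristic(servers, i, d, cost)
        servers.remove(j)
        cost[i] = cost.pop(j) + d[j][i]   # the cost travels with the server
        servers.add(i)
        total += d[j][i]
    return total

# Toy run: n = 9 nodes on a line, distance |x - x'|, k = 5 servers.
n, k = 9, 5
d = [[abs(x - y) for y in range(n)] for x in range(n)]
stream = [random.randrange(n) for _ in range(1000)]
for h in (greedy, balance, harmonic):
    print(h.__name__, serve(stream, k, d, h))
```

Note how Balance's per-server cost travels with the server when it moves; this matches the definition of $`c_j`$ above.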
(A possible alternative is to name the $`k`$ servers, have the class specify the server, and use $`k`$ attributes to specify the location of the servers. With this $`(k+1)`$-tuple formulation of a case, however, the classifier considers “server $`A`$ at node 1 and server $`B`$ at node 2” to be different from “server $`A`$ at node 2 and server $`B`$ at node 1”. This differentiation of servers is not appropriate for the $`k`$-server problem, unless the cost model is changed to, say, let servers charge different costs for movement. It is also not appropriate to declare the class and attributes as continuous, unless we are considering nodes on a line with a linear distance function.)

In our application of MOO, Step 2 uses network flow to solve for the offline optimum \[CKPV\]; in Step 3, this optimum is scanned to produce a file of cases, one for each request; Step 4 then uses C4.5 to build a decision tree with these training cases. For a test stream, this tree is used to classify each arriving request. This classification may be invalid, in that the tree may decide to move a server from a node that has no server; in this case, the server at $`j`$ with minimum $`d(j,i)`$ is chosen, i.e. we fall back on a greedy strategy. (If $`d`$ is unknown, MOO can choose a random server, say.)

### 3.2 Distance function

We choose the distance functions to test MOO's applicability for different neighborhood structures and distance properties. We start with $`1,2,\mathrm{\ldots},n`$ as nodes and $`d(x,x^{\prime})`$ given by $`|x-x^{\prime}|`$, $`(x-x^{\prime})^2`$ and $`|x-x^{\prime}|x^{\prime}`$; only $`|x-x^{\prime}|`$ satisfies the triangular inequality, and $`|x-x^{\prime}|x^{\prime}`$ is not symmetric. We also consider $`n`$ nodes on a square grid with integer coordinates, with $`d((x,y),(x^{\prime},y^{\prime}))`$ given by $`|x-x^{\prime}|+|y-y^{\prime}|`$ and $`|x-x^{\prime}|x^{\prime}+|y-y^{\prime}|y^{\prime}`$.

### 3.3 Request generation

The training and test streams are generated with transition matrices in which an entry $`p_{ij}`$ is the probability that a request is for node $`j`$ given that the previous request was for node $`i`$. The fraction of nonzero entries is 10–20% for a sparse matrix and 80–90% for a dense matrix. We use these matrices to generate a stream in two ways (a code sketch accompanies the example in the Appendix):

- A 1-matrix stream is generated with a single matrix. This is similar to Karlin et al.'s Markov paging, or a random walk on Borodin et al.'s access graph \[KPR, BIRS\].

- A 2-matrix stream is generated alternately with two matrices: $`L`$ requests are generated with one matrix, followed by $`L`$ requests from the other matrix; at the switchover, if the last request from one matrix is $`i`$, then $`p_{ij}`$ from the other matrix is used to generate the next request. This gives a nonhomogeneous Markov chain that is a random walk on two graphs, in contrast to the simultaneous walks used by Fiat et al. \[FK, FM\]. In this paper, we arbitrarily fix $`L`$ to be 10. The purpose of using a 2-matrix stream is to see how MOO reacts to a mixture of request patterns. An example of a matrix and a stream that it generates are given in the Appendix.

## 4 Experimental results

There are several variables in our experimental setup: $`k`$, $`n`$, line/grid, distance, sparse/dense, pattern mixture, starting configuration and stream length. The stream length $`s`$ is the most crucial because the offline optimum has complexity $`O(ks^2)`$; on a 167MHz UltraSPARC, it can take 7 minutes for $`s=2000`$ and 1 hour for $`s=2500`$.
The time complexity is compounded by the large memory required to store the network for finding the optimum; we have only one machine with sufficient main memory. If we chose $`s`$ large enough for the optimum and the heuristics to all reach steady state, the time commitment would be overwhelming. Instead, in most cases, we set $`s`$ just large enough that conclusions can already be drawn, despite significant statistical variations for any particular solution. (This is similar to analysis of variance in statistics, where one can separate the means of two variables if the variation of each is “smaller” than the separation.) With the bottleneck of one workstation generating the results, we have chosen a small number of experiments that cut through the myriad possible combinations of variables. We concede that the data may be insufficient to support some of our conclusions, so these should be regarded as tentative insights rather than authoritative conclusions.

### 4.1 Nodes on a line

Table 1 presents an experiment with a strong pattern in the stream of requests coming to 5 servers for 9 nodes on a line, with a $`d`$ that violates the triangular inequality. After 2000 requests, the fluctuations are small enough for us to draw some conclusions. First, the average optimum cost per request is less than 1, and this is because most requests are for a node that already has a server. Second, the competitive ratios for a fixed request distribution can be significantly smaller than the $`k`$-server bound \[MMS\]; this is similar to previous observations \[BaE, FR\]. Third, MOO can achieve the optimum: the sparse matrix induces a strong pattern in the offline optimum solution, and this pattern is captured in the decision tree used by MOO.

The starting configurations used in the three runs are the same for $`S_1`$, but different for $`S_2`$. The results for $`S_2`$ show that the configuration can have a strong effect: the heuristics' performance ordering and competitive ratios both become erratic. In contrast, the orderings for the three runs of $`S_1`$ are the same, and the ratios are reasonably stable except for Greedy, which is sensitive to the stream instance. To factor in the effect of the starting configuration, this configuration is henceforth changed from run to run, unless otherwise stated. Despite the erratic results for $`S_2`$ and the fact that MOO uses a greedy strategy whenever the classifier makes an invalid assignment, MOO has a significantly smaller ratio than Greedy, thus showing the contribution from data mining. A check shows that the trees are small but unintuitive (an example is given in the Appendix), since they imitate the offline optimum (which “sees” future requests).

In Table 1, MOO can get close to the optimum because the patterns are strong. For a dense matrix, the pattern is much weaker. Nonetheless, Table 2 shows that MOO has the smallest ratio, and the invalid assignments are surprisingly few. Further, the difference in starting configurations between the training and test streams does not have a big effect on MOO's results, in contrast to the results for a strong pattern (recall: the starting configurations in Table 1 are the same for 1.00/1.01/1.00 and different for 1.09/3.04/1.30). The number of potential cases for the classifier is $`n\left(\genfrac{}{}{0pt}{}{n}{k}\right)`$, which is 1134 and comparable to the training length (2000) for Table 2.
Even so, the performance ordering and ratios are reasonably stable, except for Greedy; when we tested the heuristics again with the runs using the same starting configuration, the fluctuation in Greedy's ratios narrowed down considerably, thus indicating that Greedy remains sensitive to the starting configuration for weak patterns. The decision trees, though bigger than the two for Table 1, remain small: the tree for $`D_1`$ is 3Kbytes and has only 27 decision nodes.

All heuristics are trivially optimum if $`k=1`$, but the gap between existing heuristics and the optimum should open up as $`k`$ increases; to prove its worth, MOO must fit into this gap. In Figure 1 (and the following graphs), each data point is the average of 6 runs. It shows that, for a 2-matrix stream and distance $`|x-x^{\prime}|`$, the gap between Greedy and optimum opens up at $`k=5`$ for $`n=9`$, and MOO does fit into the gap. At $`k=5`$ for $`|x-x^{\prime}|`$, the difference between MOO and Greedy is negligible (if we consider the average ratio over 6 runs, Greedy's ratio is smaller in some runs and MOO's smaller in others). In contrast, Tables 1 and 2 show that MOO's ratios are noticeably smaller than Greedy's at $`k=5`$ for $`(x-x^{\prime})^2`$, which penalizes large movements. The gaps among the heuristics open further at $`k=5`$ and $`n=9`$ for $`|x-x^{\prime}|x^{\prime}`$ in Figure 2. The alternation between strong and weak patterns does not affect MOO's ability to outperform the other heuristics in Figure 1, and Figure 2 shows this remains so for alternation between two weak patterns. In fact, unlike Harmonic and Balance, MOO stays close to the optimum as $`n`$ scales up, thus demonstrating again its ability to learn from the optimum solution. For an asymmetric and punitive $`|x-x^{\prime}|x^{\prime}`$, the “right” server placement is important for staying close to the optimum for small $`n`$, so Greedy's simplistic strategy does poorly there. For large $`n`$, even the optimum has its servers spread out, and the violation of the triangular inequality favors incremental server movements, thus making it possible for Greedy to get close to the optimum.

### 4.2 Nodes on a grid

Intuitively, a heuristic should incur lower costs if nodes have more neighbors, but its ratio can increase because the optimum may make better use of the neighbors in reducing its cost. Figure 3 shows the results of repeating the runs for Figure 1 (same starting configurations and request streams) on a grid instead of a line. Harmonic does perform better, but the effect on the ratios for Balance and Greedy is mixed. A check (of the detailed data) shows that, contrary to our intuition, their costs are sometimes higher for the grid. It appears that the increase in the number of neighbors also leads Balance and Greedy to make short-sighted moves that raise costs eventually. In any case, MOO remains in the gap between Greedy and optimum when $`k`$ increases. Similar results hold when $`n`$ is varied.

Comparing Figures 2 and 4, we see that the ratios for a grid are noticeably smaller for Harmonic but larger for Greedy. A check shows that costs are lower (often by an order of magnitude), so all solutions benefit from having more neighbors when $`d`$ is $`|x-x^{\prime}|x^{\prime}+|y-y^{\prime}|y^{\prime}`$. However, the spreading-out effect that allows Greedy to get close to the optimum in Figure 2 is smaller for a grid, so Greedy is further from the optimum in Figure 4. Again, we see the gap among the heuristics opening up at $`k=5`$ and $`n=9`$ when $`d`$ changes from $`|x-x^{\prime}|+|y-y^{\prime}|`$ to $`|x-x^{\prime}|x^{\prime}+|y-y^{\prime}|y^{\prime}`$.
MOO, on the other hand, stays close to the optimum, as in Figure 2. The detailed data show that there are at most 2 invalid assignments (which are resolved greedily) at $`n=9`$ and less than $`12\%`$ such assignments at $`n=25`$; hence, MOO relies mostly on the decision tree, which has successfully captured the optimum solution even though the requests are a mixture of two weak patterns.

## 5 Conclusion

### 5.1 Summary

We now summarize our observations:

- MOO fits into the gap between the offline optimum and other online heuristics (Figures 1–4). For a strong pattern, MOO can be close to optimum, but may lose to other heuristics because of sensitivity to the starting configuration (Table 1). MOO does well even if the requests have a weak pattern (Table 2) or alternate between patterns (Figures 1–4).

- MOO outperforms the other heuristics even if the distances are asymmetric (Figures 2 and 4) or violate the triangular inequality (Tables 1 and 2). Increasing the number of neighbors can increase costs, but MOO's ratios remain stable (Figures 1 and 3, 2 and 4).

- MOO stays close to the optimum as $`n`$ varies (Figures 2 and 4).

- The classifier can get an effective decision tree even for relatively short stream lengths, the trees are small, and the mining (Step 4) is fast (sub-second).

### 5.2 Challenging issues

MOO poses some challenging problems for this new application of data mining:

- How to analyze the competitive ratios produced with data mining?

- For the $`k`$-server problem, why does MOO perform well for weak patterns and short training streams? (For the buffer replacement problem, mining can produce good results even if the requests are a mixture of 100 patterns \[TTL\].)

- What sort of data mining would be appropriate for web caching, video-on-demand, etc.?

Acknowledgment: Many thanks to C.P. Teo for his help with network flow and Hongjun Lu for his comments.

### 5.3 References

\[A+\] R. Agrawal, M. Mehta, J. Shafer, R. Srikant, A. Arning and T. Bollinger, The Quest data mining system, Proc. KDD, Portland, OR (Aug. 1996), 244–249.
\[AAFPW\] J. Aspnes, Y. Azar, A. Fiat, S. Plotkin and O. Waarts, On-line load balancing with applications to machine scheduling and virtual circuit routing, Proc. STOC, San Diego, CA (May 1993), 623–630.
\[BB\] A. Blum and C. Burch, On-line learning and the metrical task system problem, Proc. COLT, Nashville, TN (July 1997), 45–53.
\[BaE\] R. Bachrach and R. El-Yaniv, Online list accessing algorithms and their applications: recent empirical evidence, Proc. SODA, New Orleans, LA (Jan. 1997), 53–62.
\[BoE\] A. Borodin and R. El-Yaniv, Online Computation and Competitive Analysis, Cambridge University Press, Cambridge, UK (1998).
\[BIRS\] A. Borodin, S. Irani, P. Raghavan and B. Schieber, Competitive paging with locality of reference, Proc. STOC, New Orleans, LA (May 1991), 249–259.
\[CDRS\] D. Coppersmith, P. Doyle, P. Raghavan and M. Snir, Random walks on weighted graphs and applications to on-line algorithms, J. ACM 40, 3 (July 1993), 421–453.
\[CKPV\] M. Chrobak, H. Karloff, T. Payne and S. Vishwanathan, New results on server problems, SIAM J. Disc. Math. 4, 2 (May 1991), 172–181.
\[CL\] M. Chrobak and L.L. Larmore, An optimal on-line algorithm for $`k`$-servers on trees, SIAM J. Computing 20, 1 (1991), 144–148.
\[FK\] A. Fiat and A.R. Karlin, Randomized and multipointer paging with locality of reference, Proc. STOC, Las Vegas, NV (May 1995), 626–634.
\[FKLMSY\] A. Fiat, R.M. Karp, M. Luby, L.A. McGeoch, D.D. Sleator and N.E.
Young, Competitive paging algorithms, J. Algorithms 12, 4 (Dec. 1991), 685–699.
\[FLTT\] L. Feng, H. Lu, Y.C. Tay and K.H. Tung, Buffer management in distributed database systems: A data mining approach, Proc. EDBT, Valencia, Spain (Apr. 1998), 246–260.
\[FM\] A. Fiat and M. Mendel, Truly online paging with locality of reference, Proc. FOCS, Miami Beach, FL (Oct. 1997), 326–335.
\[FR\] A. Fiat and Z. Rosen, Experimental studies of access graph based heuristics: beating the LRU standard?, Proc. SODA, New Orleans, LA (Jan. 1997), 63–72.
\[FRR\] A. Fiat, Y. Rabani and Y. Ravid, Competitive $`k`$-server algorithms, Proc. FOCS, St. Louis, MO (Oct. 1990), 454–463.
\[H+\] J. Han, Y. Fu, W. Wang, J. Chiang, W. Gong, K. Koperski, D. Li, Y. Lu, A. Rajan, N. Stefanovic, B. Xia and O.R. Zaiane, DBMiner: A system for mining knowledge in large relational databases, Proc. KDD, Portland, OR (Aug. 1996), 250–255.
\[KMMO\] A.R. Karlin, M.S. Manasse, L.A. McGeoch and S. Owicki, Competitive randomized algorithms for non-uniform problems, Proc. SODA, San Francisco, CA (Jan. 1990), 301–309.
\[KMRS\] A.R. Karlin, M.S. Manasse, L. Rudolph and D.D. Sleator, Competitive snoopy caching, Algorithmica 3, 1 (1988), 79–119.
\[KP\] E. Koutsoupias and C. Papadimitriou, On the $`k`$-server conjecture, Proc. STOC, Montreal, Canada (May 1994), 507–511.
\[KPR\] A.R. Karlin, S.J. Phillips and P. Raghavan, Markov paging, Proc. FOCS, Pittsburgh, PA (Oct. 1992), 208–217.
\[MMS\] M.S. Manasse, L.A. McGeoch and D.D. Sleator, Competitive algorithms for on-line problems, Proc. STOC, Chicago, IL (May 1988), 322–333.
\[MS\] L.A. McGeoch and D.D. Sleator, A strongly competitive randomized paging algorithm, Algorithmica 6, 6 (1991), 816–825.
\[Q\] J.R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann, San Mateo, CA (1993).
\[RS\] P. Raghavan and M. Snir, Memory versus randomization in on-line algorithms, Proc. ICALP, Stresa, Italy (July 1989), 687–703.
\[ST\] D.D. Sleator and R.E. Tarjan, Amortized efficiency of list update and paging rules, C. ACM 28, 2 (Feb. 1985), 202–208.
\[T\] K.H. Tung, Parking in a Marina, Honors Year Project Report, DISCS, National University of Singapore (1997).
\[TTL\] K.H. Tung, Y.C. Tay and H. Lu, BROOM: Buffer replacement using online optimization by mining, Proc. CIKM, Bethesda, MD (Nov. 1998), 185–192.
\[Y\] N. Young, On-line file caching, Proc. SODA, San Francisco, CA (Jan. 1998), 82–86.

## 6 Appendix

$$S_1=\begin{array}{c|ccccccccc} & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\ \hline 0 & 0.00& 0.45& 0.00& 0.55& 0.00& 0.00& 0.00& 0.00& 0.00\\ 1 & 0.00& 0.00& 0.00& 0.58& 0.00& 0.00& 0.00& 0.00& 0.42\\ 2 & 0.31& 0.69& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\ 3 & 0.00& 0.00& 0.00& 0.00& 0.00& 1.00& 0.00& 0.00& 0.00\\ 4 & 1.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\ 5 & 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.02& 0.98\\ 6 & 1.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00& 0.00\\ 7 & 0.00& 0.00& 0.35& 0.00& 0.62& 0.03& 0.00& 0.00& 0.00\\ 8 & 0.00& 0.47& 0.00& 0.00& 0.00& 0.00& 0.53& 0.00& 0.00\end{array}$$

Figure A.1 Sparse matrix $`S_1`$ of Table 1.

1 8 6 0 1 3 5 8 6 0 3 5 8 1 3 5 8 6 0 1 3 5 8 1 3 5 7 2 1 3 5 8 1 3 5 8 6 0 1 8 6 0 1 8 1 3 5 8 1 3 5 8 1

Figure A.2 $`S_1`$ generates a strong pattern.
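A stream like the one in Fig. A.2 can be reproduced with a few lines of Python; this is our own sketch (with $`S_1`$ transcribed from Fig. A.1), not part of the experimental code.

```python
import random

random.seed(2)

# Transition matrix S1 of Fig. A.1: S1[i][j] = P(next request = j | last = i).
S1 = [
    [0.00, 0.45, 0.00, 0.55, 0.00, 0.00, 0.00, 0.00, 0.00],
    [0.00, 0.00, 0.00, 0.58, 0.00, 0.00, 0.00, 0.00, 0.42],
    [0.31, 0.69, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00, 0.00, 0.00, 0.00],
    [1.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.02, 0.98],
    [1.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00, 0.00],
    [0.00, 0.00, 0.35, 0.00, 0.62, 0.03, 0.00, 0.00, 0.00],
    [0.00, 0.47, 0.00, 0.00, 0.00, 0.00, 0.53, 0.00, 0.00],
]

def one_matrix_stream(P, length, start):
    # 1-matrix stream: a random walk driven by a single transition matrix.
    req, stream = start, [start]
    for _ in range(length - 1):
        req = random.choices(range(len(P)), weights=P[req], k=1)[0]
        stream.append(req)
    return stream

def two_matrix_stream(P, Q, length, start, L=10):
    # 2-matrix stream: alternate between P and Q every L requests; at a
    # switchover, the row of the *other* matrix for the last request is
    # used, as described in Section 3.3.
    req, stream, use_P = start, [start], True
    while len(stream) < length:
        M = P if use_P else Q
        for _ in range(min(L, length - len(stream))):
            req = random.choices(range(len(M)), weights=M[req], k=1)[0]
            stream.append(req)
        use_P = not use_P
    return stream

print(" ".join(map(str, one_matrix_stream(S1, 53, start=1))))
```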
| Request from = 2: 3 | | | --- | --- | | Request from = 4: 5 | | | Request from = 7: 8 | | | Request from = 0: | | | $`|`$ Node 0 status = 0: 1 | // this tree has depth 1 only | | $`|`$ Node 0 status = 1: 0 | // weaker patterns induce deeper trees | | Request from = 1: | | | $`|`$ Node 0 status = 0: 1 | | | $`|`$ Node 0 status = 1: 0 | // how to read C4.5’s decision tree: | | Request from = 3: | // if the request is for node 3 | | $`|`$ Node 2 status = 0: 3 | // then (a) if no server is at 2, then use server at 3 | | $`|`$ Node 2 status = 1: 2 | // (b) else move the server from 2 | | Request from = 5: | // note: the tree is used only if no server | | $`|`$ Node 5 status = 0: 4 | // is at the requested node | | $`|`$ Node 5 status = 1: 5 | // so (a) is an invalid assignment | | Request from = 6: | // and (b) will not put two servers at 3 | | $`|`$ Node 6 status = 0: 5 | | | $`|`$ Node 6 status = 1: 6 | | | Request from = 8: | // this tree always assigns a server from a neighboring node | | $`|`$ Node 8 status = 0: 7 | // in agreement with $`d`$ in Table 1 | | $`|`$ Node 8 status = 1: 8 | // which favors incremental movements | Note that C4.5 (appropriately) selects the request to be the root. However, the rest of the tree is unintuitive, since the tree is mined from an offline optimum that “sees” future requests. Figure A.3 Decision tree from an optimum solution for a sequence generated with $`S_1`$.
# Gluon distributions in nucleons and pions at a low resolution scale

## 1 Motivation

Parton distribution functions (pdf) play an important role in particle physics, as they describe the internal structure of hadrons in the framework of Quantum Chromodynamics. Pdf are also basic ingredients to calculate cross sections in hadron-hadron and lepton-hadron interactions. Parton distribution functions have unique characteristics depending on each hadron, which reflect the internal dynamics of the bound state. The pdf, $`f_i(x,Q^2)`$, are interpreted as the probability of finding a parton $`i`$ (quark or gluon) with a fraction $`x`$ of the hadron momentum when probed by the momentum transfer $`Q^2`$. The $`Q^2`$ dependence of pdf is successfully described by the DGLAP evolution equations within perturbative QCD. However, in order to have adequate fits to Deep Inelastic Scattering (DIS) data, initial valence-like distributions at a low resolution scale must be considered even for sea quarks and gluons. This fact has been repeatedly noted by several authors (see e.g. Refs. ), but the origin of such initial distributions still remains unclear.

The origin of primordial low scale pdf should be traced back to the internal dynamics of the hadron bound state. Thus they must be related to the confining phase of QCD, which is ultimately the origin of hadrons as bound states of quarks and gluons. In this sense, the sea quark and gluon distributions at a low resolution scale can be related to the idea of the intrinsic sea of quarks and gluons proposed by Brodsky et al. at the beginning of the 80's . In their own words, “intrinsic quarks and gluons exist over a time scale independent of any probe momentum, and are associated with the bound-state hadron dynamics”. In contrast, the extrinsic sea of gluons and quarks has a purely perturbative origin, and its distributions show the characteristics of bremsstrahlung and pair production processes leading to the standard DGLAP perturbative QCD evolution.

The above considerations led us to draw a low $`Q^2`$ picture of hadrons in terms of effective quark degrees of freedom interacting among themselves through intrinsic sea quarks and gluons . The building blocks are the so-called valons, which are valence quarks appropriately dressed by their extrinsic sea . Within this picture one can represent the hadron wave function as a superposition of hadron-like Fock states, which we construct by means of a well-known recombination mechanism . For example, the proton wave function can be written as

$$|p\rangle =a_0|p_{\circ}\rangle +a_1|\widehat{p}_{\circ}\,g\rangle +\sum_i a_i|B_iM_i\rangle$$ (1)

at some low $`Q_v^2`$ scale compatible with the valon picture of the proton. Here $`|p_{\circ}\rangle `$ is a pure three-valon state, and the following terms are hadronic quantum fluctuations which emerge from the (non-perturbative) interaction among valons. The role of these hadronic fluctuations is to dynamically generate the intrinsic sea of quarks and gluons, which is the necessary binding agent in order to have the hadron state. In this way, a consistent picture of the low $`Q^2`$ (input) scale of hadrons emerges, in which hadrons are formed by constituent quarks plus intrinsic sea quarks and gluons. It is worth stressing that in modern fits to DIS data, initial non-perturbative sea quark and gluon distributions are taken as an input which is adjusted by DGLAP evolution. In contrast, in our approach they are dynamically generated through the hadronic quantum fluctuations.
Furthermore, the old-fashioned fits to DIS data, in which all of the hadron sea is perturbatively generated, are recovered by restricting the series of eq. (1) to the first term. This representation of the proton wave function led in a natural way to a $`\overline{d}/\overline{u}`$ asymmetry in the proton sea which closely describes the most recent experimental data by the E866/NuSea Collaboration . On the same footing, in Ref. an $`s\overline{s}`$ asymmetry in the nucleon sea was calculated, qualitatively agreeing with the results of the last global analysis of DIS data .

In the following section of this paper, we determine the low $`Q^2`$ non-perturbative gluon distributions in nucleons, using the model introduced in Ref. . In section 3 we explore the consequences of this picture on the non-perturbative structure of charged and neutral pions. Finally, section 4 is devoted to further discussion and conclusions.

## 2 Intrinsic gluon distributions

To start with, let us consider a baryon at some low $`Q_v^2`$ scale. At this scale the baryon ground state is formed only by three valons . Quantum fluctuations will generate the non-perturbative $`q\overline{q}`$ sea. Following Ref. , the non-perturbative sea has a two-step origin in our model. In the first step a valon emits a gluon which subsequently decays into a quark-antiquark pair. In the second, such quark and antiquark interact with the valons, giving rise to a bound $`|MB\rangle `$ state. Non-perturbative quark and antiquark distributions are then associated to the in-hadron meson and baryon valon densities.

The emission of a gluon out of a valon is a basic QCD process which can be adequately described in terms of the convolution of the valon distribution, $`v(z)`$, with the Altarelli-Parisi $`P_{gq}(z)`$ and $`P_{qg}(z)`$ splitting functions . In this way, the quark and antiquark initial distributions are given by

$$q(x)=\overline{q}(x)=N\frac{\alpha _{st}^2(Q_v^2)}{(2\pi )^2}\int_x^1\frac{dy}{y}P_{qg}\left(\frac{x}{y}\right)\int_y^1\frac{dz}{z}P_{gq}\left(\frac{y}{z}\right)v(z).$$ (2)

Notice that, as the valon distribution does not depend on $`Q^2`$, the scale dependence in eq. (2) only arises through the strong coupling constant $`\alpha _{st}`$. At this stage the scale is fixed to the valon scale, $`Q_v^2`$, which is typically about $`1`$ GeV<sup>2</sup>. Indeed, in Ref. the valon scale was estimated to be of the order of $`Q_v^2\simeq 0.64`$ GeV<sup>2</sup> for nucleons. Consequently, pair creation can be safely evaluated in a perturbative way, since $`(\alpha _{st}/2\pi )^2`$ is still sufficiently small.

The perturbative $`v\rightarrow g\rightarrow q\overline{q}`$ process is the source of both the extrinsic and intrinsic seas. The difference between them rests on the fact that an intrinsic $`q\overline{q}`$ pair interacts with the remaining valons while an extrinsic $`q\overline{q}`$ pair does not. In this sense, the extrinsic sea, which is purely perturbative, forms the structure of valons .

The second step involves the interaction of such a $`q\overline{q}`$ pair with valons, thus giving rise to the $`|MB\rangle `$ bound state. As they are in the realm of confinement, the interactions of the $`q\overline{q}`$ pair with valons must be evaluated by means of effective methods. Notice also that, for such interactions to take place, the initial (perturbative) $`q\overline{q}`$ pair must be sufficiently long-lived.
Since the characteristic lifetime of such a perturbative $`q\overline{q}`$ pair scales as $`1/m_q`$, light and strange quarks should be largely available to interact with valons, thus producing the $`|MB\rangle `$ hadronic quantum fluctuations. Then, assuming that the in-hadron meson and baryon formation arises from mechanisms similar to those at work in the production of real hadrons, we can evaluate the in-hadron meson probability density by using the Das-Hwa recombination approach .

In the recombination model, the probability density for the production of a real meson as a function of its fractional momentum is given by the convolution of a two-quark distribution with a suitable recombination function. The two-quark distribution is given in terms of the single-quark distributions of the initial hadron which will be the valence quarks in the final meson. The recombination function is chosen in such a way that it favors the recombination of quarks with similar momentum fractions. Thus, in our model, the in-hadron meson distributions are given by

$$P_{M_iB_i}(x)=\int_0^1\frac{dy}{y}\int_0^1\frac{dz}{z}F_i(y,z)\,R(x,y,z),$$ (3)

where

$$R(x,y,z)=\alpha \frac{yz}{x^2}\delta \left(1-\frac{y+z}{x}\right)$$ (4)

is the recombination function , and

$$F_i(y,z)=\beta \,y\,v(y)\,z\,\overline{q}_i(z)\,(1-y-z)^a$$ (5)

is the valon-antiquark distribution . In eqs. (3)-(5), $`x`$, $`y`$ and $`z`$ are the momentum fractions of the in-hadron meson, the valon and the antiquark respectively. The index $`i`$ runs over different quark flavors, depending on the meson being formed. Due to momentum conservation, the in-hadron meson and baryon probability densities are not independent but correlated by

$$P_{M_iB_i}(x)=P_{B_iM_i}(1-x),$$ (6)

with an additional correlation in velocity given by

$$\frac{xP_{M_iB_i}(x)}{m_{M_i}}=\frac{xP_{B_iM_i}(x)}{m_{B_i}}.$$ (7)

The above constraint, eq. (7), which is needed in order to build a $`|M_iB_i\rangle `$ bound state, fixes the exponent $`a`$ in eq. (5).

The hadronic fluctuations so far described can be interpreted as the origin of the intrinsic quark-antiquark sea. As a consequence, since the resulting $`q`$ and $`\overline{q}`$ sea distributions belong to different hadronic states in the $`|M_iB_i\rangle `$ fluctuation, intrinsic quark and antiquark probability densities in baryons are in general unequal.

At this point, a judicious analysis of which fluctuations should be included in an expansion like eq. (1) must be made. For definiteness, consider the proton wave function. Taking into account mass values and quantum numbers, the main fluctuations of the proton should be the $`|\pi ^+n\rangle `$, $`|\pi ^+\mathrm{\Delta }^0\rangle `$ and $`|\pi ^{-}\mathrm{\Delta }^{++}\rangle `$ virtual states, with probabilities $`|a_{\pi n}|^2`$ and $`|a_{\pi \mathrm{\Delta }}|^2`$ respectively. Differences between the $`|\pi ^+\mathrm{\Delta }^0\rangle `$ and $`|\pi ^{-}\mathrm{\Delta }^{++}\rangle `$ probabilities are taken into account by Clebsch-Gordan coefficients which ensure the correct global isospin of the fluctuation. Thus, we obtain $`\frac{1}{6}|a_{\pi \mathrm{\Delta }}|^2`$ and $`\frac{1}{2}|a_{\pi \mathrm{\Delta }}|^2`$ for $`|\pi ^+\mathrm{\Delta }^0\rangle `$ and $`|\pi ^{-}\mathrm{\Delta }^{++}\rangle `$ respectively. On the other hand, the probability of the $`|\pi ^+n\rangle `$ bound state is $`\frac{2}{3}|a_{\pi n}|^2`$. The coefficients $`|a_{\pi n}|^2`$ and $`|a_{\pi \mathrm{\Delta }}|^2`$ are given by $`(N\alpha \beta )_{\pi n}`$ and $`(N\alpha \beta )_{\pi \mathrm{\Delta }}`$ respectively. Their numerical values result from comparison with experimental data.
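Returning to eqs. (3)-(5): because the recombination function (4) contains a delta function, the double integral in eq. (3) can be carried out explicitly. We spell out this intermediate step (not written out in the text) since it produces the single-integral form used below in eqs. (9) and (12). Writing $`\delta (1-(y+z)/x)=x\,\delta (x-y-z)`$ and integrating over $`z`$ sets $`z=x-y`$, so that

$$P_{M_iB_i}(x)=\frac{\alpha \beta }{x}\,(1-x)^a\int_0^x dy\;y\,v(y)\,(x-y)\,\overline{q}_i(x-y),$$

where the integrand may equivalently be written with $`y`$ and $`x-y`$ interchanged, and the constants $`\alpha \beta `$ are absorbed into the probability coefficient of the corresponding fluctuation.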
Fluctuations like $`|\rho N\rangle `$, $`|\rho \mathrm{\Delta }\rangle `$, etc., which could contribute, for instance, to the $`\overline{d}-\overline{u}`$ asymmetry in the proton, are far off-shell<sup>1</sup> and can be safely neglected. (<sup>1</sup>Note that these are even more suppressed than strange, $`|KH\rangle `$, fluctuations.) Remarkably, as shown in Ref. , this scheme leads to a $`\overline{d}/\overline{u}`$ asymmetry in the proton which closely describes the experimental data by the E866 Collaboration . Including fluctuations to Kaon-Hyperon states, $`|KH\rangle `$, an $`s\overline{s}`$ asymmetry in the proton sea arises which qualitatively agrees with results from a recent global analysis of DIS data .

Fluctuations of the proton to $`|\pi ^0p\rangle `$ and $`|\pi ^0\mathrm{\Delta }^+\rangle `$ states do not contribute to the intrinsic quark and antiquark structure. The reason is that the formation of a $`\pi ^0`$ in-proton state must be inhibited due to its neutral flavor structure $`u\overline{u}-d\overline{d}`$. This happens because $`v_q\overline{q}`$ objects can annihilate rapidly into a gluon, while $`v_q\overline{q}^{\prime}`$ cannot ($`q^{\prime}`$ is a quark of flavor different from $`q`$; see Fig. 1). Notice also that an unflavored object like a $`v_q\overline{q}`$ pair has itself the quantum numbers of a gluon. Thus, a hypothetical $`|\pi ^0p\rangle `$ fluctuation does not contribute to the sum over $`|M_iB_i\rangle `$ in the RHS of eq. (1), but to the second term, $`|\widehat{p}_{\circ}\,g\rangle `$, providing a source of valence-like gluons in the proton.

The proton-like object accompanying the non-perturbative gluon in the $`|\widehat{p}_{\circ}\,g\rangle `$ fluctuation must have the same flavor structure as the $`p_{\circ}`$ in the $`|p_{\circ}\rangle `$ state. However, since the gluon is in a color octet state, the $`\widehat{p}_{\circ}`$ must be colored. It is worth noting that, on general grounds, hadrons in a $`|MB\rangle `$ fluctuation must be colored. However, they can be identified with real hadrons regarding other quantum numbers such as flavor, isospin, etc.

The time scales over which the $`|\pi ^+n\rangle `$ and the $`|\widehat{p}_{\circ}\,g\rangle `$ fluctuations exist should be approximately the same. In fact, the characteristic lifetime of a $`|MB\rangle `$ fluctuation is proportional to $`1/\mathrm{\Delta }E`$, where $`\mathrm{\Delta }E`$ is the energy difference between the $`|MB\rangle `$ and the proton states in an infinite momentum frame. Thus, for a generic $`|MB\rangle `$ state we have

$$\tau _{|MB\rangle }\propto \frac{1}{\mathrm{\Delta }E}=\frac{2P}{\left[\frac{\widehat{m}_M^2}{x_M}+\frac{\widehat{m}_B^2}{x_B}-m_p^2\right]},$$ (8)

where $`P`$ is the momentum of the proton in the infinite momentum frame, $`m_p`$ is the proton mass, and $`x_M`$ and $`x_B`$ are the momentum fractions carried by the meson and baryon in the fluctuation. $`\widehat{m}_{M,B}^2=m_{M,B}^2+k_T^2`$ are the squared transverse masses of the virtual hadrons in the fluctuation. Given the smallness of the pion mass, we can assume that in-nucleon pions and non-perturbative gluons have similar transverse masses; then the characteristic lifetimes of the $`|\pi ^+n\rangle `$ and $`|\widehat{p}_{\circ}\,g\rangle `$ fluctuations must be approximately the same.

In this approach, the shape of the non-perturbative gluon coming from the $`v_q\overline{q}`$ pairing described above can be estimated by using the recombination model.
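Before doing so, it is instructive to put illustrative numbers into eq. (8); the choice $`x_M=0.2`$, $`x_B=0.8`$ and $`k_T\simeq 0`$ is our own assumption (masses in GeV). For $`|\pi ^+n\rangle `$,

$$\frac{\widehat{m}_M^2}{x_M}+\frac{\widehat{m}_B^2}{x_B}-m_p^2\simeq \frac{(0.14)^2}{0.2}+\frac{(0.94)^2}{0.8}-(0.938)^2\simeq 0.32\ \mathrm{GeV}^2,$$

while for a $`|\rho ^+n\rangle `$ state ($`m_\rho \simeq 0.77`$ GeV) the same combination gives $`\simeq 3.19`$ GeV<sup>2</sup>, i.e. roughly ten times larger. For these illustrative kinematics, a $`\rho `$ fluctuation would thus live about an order of magnitude shorter than a pion one, which is the quantitative content of the statement above that $`|\rho N\rangle `$-type states are far off-shell.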
Actually, as the origin of the non-perturbative gluon is the recombination of a valon with an antiquark of the same flavor, the momentum distribution of intrinsic gluons in the $`|\widehat{p}_{\circ}\,g\rangle `$ fluctuation is simply given by

$$g^{NP}(x,Q_v^2)=|a_1|^2P_{pg}(x,Q_v^2)\simeq |a_1|^2\frac{(1-x)^{12.9}}{x}\int_0^x dy\;y\,\overline{q}(y)\,(x-y)\,v_{Nq}(x-y),$$ (9)

where $`\overline{q}(x)`$ is given by eq. (2) and $`v_{Nq}(x)`$ is the distribution of the $`q`$-flavored valon in the nucleon, given by

$$v_{Nq}(x)=\frac{105}{32}\sqrt{x}\,(1-x)^2.$$ (10)

The coefficients $`N`$, $`\alpha `$ and $`\beta `$ (coming from the definition of the initial perturbative $`\overline{q}`$, the recombination function and the valon-antiquark distribution in eqs. (2), (4) and (5) respectively) were included in the definition of $`|a_1|^2`$, the probability of the fluctuation. The intrinsic gluon probability distribution, $`P_{pg}(x,Q_v^2)`$, is accordingly normalized to unity.

The intrinsic gluon probability density in the $`|\mathrm{\Delta }^+g\rangle `$ fluctuation can be estimated on a similar basis. However, this fluctuation should be suppressed with respect to the $`|\widehat{p}_{\circ}\,g\rangle `$ one.

Another way to have a proton fluctuation containing a proton-like object and an unflavored neutral meson would be through the self-recombination of the $`q\overline{q}`$ pair produced by the gluon splitting of eq. (2) (see Fig. 2). In this case, the unflavored neutral meson must be a vector meson like a $`\rho ^0`$ or $`\omega `$. This is necessary to preserve the vectorial character of the initial perturbative gluon. However, this kind of fluctuation, consisting of a disconnected $`\rho ^0`$ or $`\omega `$ and a $`\widehat{p}_{\circ}`$, is strongly suppressed by the OZI rule.

In Fig. 3, the intrinsic gluon distribution at the valon scale given by eq. (9) is compared to the initial GRV-94 HO gluon distribution and the valence gluon distribution calculated in a Monte Carlo based model of the proton<sup>2</sup>. (<sup>2</sup>In , a model for hadrons is proposed in which primordial pdf corresponding to valence quarks and gluons are assumed to have Gaussian distributions with widths fixed from experimental data. These initial pdf are then complemented with contributions coming from $`|MB\rangle `$ fluctuations. In this model, intrinsic gluons are supposed to be present from the very beginning. In our model we are proposing a dynamical mechanism for their generation. This is the main difference between these two approaches.)

## 3 Non-perturbative structure of pions

Similar mechanisms should be at work in other physical hadrons, like pions. Indeed, if we expand the pion wave function as

$$|\pi ^{\pm ,0}\rangle =b_0|\pi _{\circ}^{\pm ,0}\rangle +b_1|\widehat{\pi }_{\circ}^{\pm ,0}\,g\rangle +\sum_i b_i|M_iM_i^{\prime}\rangle ,$$ (11)

we can identify the would-be $`|\widehat{\pi }_{\circ}^{\pm ,0}\,\pi ^0\rangle `$ fluctuation with the $`|\widehat{\pi }_{\circ}^{\pm ,0}\,g\rangle `$ one, as we did for nucleons. In this way, the intrinsic gluon contribution to the pion low $`Q^2`$ structure is given by an expression similar to that of eq. (9),

$$g_\pi ^{NP}(x,Q_v^2)=|b_1|^2P_{\pi g}(x,Q_v^2)=|b_1|^2\frac{(1-x)}{x}\int_0^x dy\;y\,\overline{q}_\pi (y)\,(x-y)\,v_\mathrm{\Pi }(x-y),$$ (12)

where $`\overline{q}_\pi (x)`$ is an antiquark distribution analogous to $`\overline{q}(x)`$ in eq. (2), but with a valon distribution for pions $`v_\mathrm{\Pi }(x)=1`$, as given in . Regarding the exponent related to the velocity correlation, it turns out to be simply $`a=1`$.
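As a concreteness check on eqs. (2) and (9)-(10), here is a minimal numerical sketch (our own, not code from the paper). The leading-order Altarelli-Parisi splitting functions $`P_{gq}(z)=\frac{4}{3}\frac{1+(1-z)^2}{z}`$ and $`P_{qg}(z)=\frac{1}{2}[z^2+(1-z)^2]`$ are the standard forms; the overall factor $`N\alpha _{st}^2/(2\pi )^2`$ is dropped (it is absorbed into $`|a_1|^2`$), and the grid sizes are arbitrary.

```python
import numpy as np

def trapz(y, x):
    # Simple trapezoidal rule (avoids NumPy version differences).
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def P_gq(z):
    # LO Altarelli-Parisi splitting function for q -> g emission.
    return (4.0 / 3.0) * (1.0 + (1.0 - z) ** 2) / z

def P_qg(z):
    # LO Altarelli-Parisi splitting function for g -> q qbar.
    return 0.5 * (z ** 2 + (1.0 - z) ** 2)

def v_Nq(x):
    # q-flavored valon distribution in the nucleon, eq. (10).
    return (105.0 / 32.0) * np.sqrt(x) * (1.0 - x) ** 2

def qbar(x, n=200):
    # Shape of the perturbative antiquark distribution, eq. (2),
    # without the overall N * alpha_st^2 / (2 pi)^2 factor.
    y = np.linspace(x, 1.0, n + 1)[1:]
    inner = np.empty_like(y)
    for i, yi in enumerate(y):
        z = np.linspace(yi, 1.0, n + 1)[1:]
        inner[i] = trapz(P_gq(yi / z) * v_Nq(z) / z, z)
    return trapz(P_qg(x / y) * inner / y, y)

def g_NP_shape(x, n=200):
    # Unnormalized intrinsic gluon shape of eq. (9).
    y = np.linspace(0.0, x, n + 1)[1:-1]
    qb = np.array([qbar(yi, n=60) for yi in y])
    return (1.0 - x) ** 12.9 / x * trapz(y * qb * (x - y) * v_Nq(x - y), y)

xs = np.linspace(0.02, 0.8, 20)
gs = np.array([g_NP_shape(x) for x in xs])
gs /= trapz(gs, xs)   # normalize P_pg(x) to unit probability
for x, g in zip(xs, gs):
    print(f"x = {x:.2f}   x g(x) = {x * g:.4f}")
```

The resulting momentum density $`x\,g^{NP}(x)`$ is valence-like, vanishing at both $`x\rightarrow 0`$ and $`x\rightarrow 1`$; with $`v_\mathrm{\Pi }=1`$ and exponent $`a=1`$ the same routine sketches the pion case, eq. (12).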
For the valon scale in pions we have used the same value as for nucleons, $`Q_v^2\simeq 0.64`$ GeV<sup>2</sup>. As explained above, a $`v_q\overline{q}`$ pair should recombine into a gluon instead of forming a neutral pion structure. Thus the same conclusions drawn for nucleons hold for fluctuations of the pion into a Fock state containing the initial pion plus a neutral pion-like object. In Fig. 4, the $`g_\pi ^{NP}(x,Q_v^2)`$ predicted by the model is shown and compared to the GRV-P HO distribution and the gluon distribution in pions obtained in Ref. . It is interesting to note that our model predicts intrinsic gluon distributions carrying more average momentum in pions than in nucleons. This is a consequence of the fact that the $`v_q\overline{q}`$ pair giving rise to the intrinsic gluon carries more average momentum in a $`|\widehat{\pi }_{}^{\pm ,0}g`$ fluctuation than in a $`|\widehat{p}_{}g`$ fluctuation. The fact that pions do not fluctuate into $`|\widehat{\pi }_{}^{\pm ,0}\pi ^0`$ states but into $`|\widehat{\pi }_{}^{\pm ,0}g`$ ones has additional consequences for their low $`Q^2`$ scale structure. For example, for charged pions the first contribution to the sum in the RHS of eq. (11) arises from the $`|K^+\overline{K}^0`$ ($`|K^{}K^0`$) fluctuation of the $`|\pi _{}^+`$ ($`|\pi _{}^{}`$) state. Thus, $`|\pi ^+`$ $`=`$ $`b_0|\pi _{}^++b_1|\pi _{}^+g+b_3|K^+\overline{K}^0+\mathrm{}`$ (13) $`|\pi ^{}`$ $`=`$ $`b_0|\pi _{}^{}+b_1|\pi _{}^{}g+b_3|K^{}K^0+\mathrm{}.`$ (14) Then the first contribution to the intrinsic $`q\overline{q}`$ sea arises in the strange sector and there are no $`u\overline{u}`$ and $`d\overline{d}`$ intrinsic seas in charged pions. The structure of charged pions at the low $`Q_v^2`$ scale is thus given by $`v_{q/\pi ^\pm }(x,Q_v^2)`$ $`=`$ $`|b_0|^2v_\mathrm{\Pi }(x)+|b_1|^2{\displaystyle \int _x^1}{\displaystyle \frac{dy}{y}}P_{g\pi }(y)v_\mathrm{\Pi }\left({\displaystyle \frac{x}{y}}\right)`$ $`+|b_3|^2{\displaystyle \int _x^1}{\displaystyle \frac{dy}{y}}P_{KK}(y)v_{Kq}\left({\displaystyle \frac{x}{y}}\right)`$ $`s_s(x,Q_v^2)`$ $`=`$ $`\overline{s}_s(x,Q_v^2)=|b_3|^2{\displaystyle \int _x^1}{\displaystyle \frac{dy}{y}}P_{KK}(y)v_{Ks}\left({\displaystyle \frac{x}{y}}\right),`$ (15) where $`v_{q/\pi ^\pm }`$ are the resulting pion valence quark densities, $`v_{Kq}`$ and $`v_{Ks}`$ are the light and strange valon distributions in Kaons, and $`s_s=\overline{s}_s`$ the non-perturbative strange quark distributions in charged pions. $`P_{KK}`$ is the probability density of a Kaon inside a pion and $`P_{g\pi }(x)=P_{\pi g}(1-x)`$ is the charged pion distribution in the $`|\widehat{\pi }_{}^\pm g`$ fluctuation. The hadronic distributions inside pions, $`P_{\pi g}`$ and $`P_{KK}`$, are given by formulas similar to those of eqs. (3)-(5). It is interesting to note that light quarks in a $`|KK`$ fluctuation contribute to the charged pion low $`Q^2`$ valence densities but not to their intrinsic sea distributions. This is because, although there are non-perturbative contributions to the light quark distributions in charged pions, they appear in the $`\overline{u}(u)`$ and $`d(\overline{d})`$ sectors but not in the $`u(\overline{u})`$ and $`\overline{d}(d)`$ sectors for the $`\pi ^{}(\pi ^+)`$ respectively (recall the flavor structure of the particles involved: $`\pi ^+(u\overline{d})\to K^+(u\overline{s})\overline{K}^0(s\overline{d})`$ and $`\pi ^{}(\overline{u}d)\to K^{}(\overline{u}s)K^0(\overline{s}d)`$). 
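All of the non-perturbative densities in eq. (15), and the analogous neutral-pion expressions below, are built from the same convolution $`\int _x^1(dy/y)P(y)v(x/y)`$. A minimal sketch of that operation follows; the toy probability density $`P`$ used in the demo is an assumption (normalized to unity), while $`v_\mathrm{\Pi }(x)=1`$ is the pion valon distribution quoted above.

```python
# Generic convolution appearing in eqs. (15), (17) and (18):
# f(x) = int_x^1 (dy/y) P(y) v(x/y), evaluated with a trapezoid rule.
import numpy as np

def convolve(P, v, x, n=400):
    y = np.linspace(x, 1.0, n)
    f = P(y) * v(x / y) / y
    return float(np.sum((f[1:] + f[:-1]) / 2.0 * np.diff(y)))

v_Pi = lambda z: np.ones_like(np.asarray(z, dtype=float))  # v_Pi(x) = 1
P_toy = lambda y: 6.0 * y * (1.0 - y)   # assumed shape, normalized to unity

print(convolve(P_toy, v_Pi, 0.3))  # one valence-density contribution at x = 0.3
```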
In turn, for neutral pions the hadronic Fock state expansion has the form $$|\pi ^0=b_0|\pi _{}^0+b_1|\widehat{\pi }_{}^0g+b_2|\pi ^{}\pi ^++\frac{b_3}{\sqrt{2}}\left[|K^{}K^+-|K^0\overline{K}^0\right]+\mathrm{}.$$ (16) Then, by analogy with eqs. (15), we can define $`v_{q/\pi ^0}(x,Q_v^2)`$ $`=`$ $`{\displaystyle \frac{1}{2}}|b_0|^2v_\mathrm{\Pi }(x)+{\displaystyle \frac{1}{2}}|b_1|^2{\displaystyle \int _x^1}{\displaystyle \frac{dy}{y}}P_{g\pi }(y)v_\mathrm{\Pi }\left({\displaystyle \frac{x}{y}}\right)`$ (17) $`+{\displaystyle \frac{|b_3|^2}{2}}{\displaystyle \int _x^1}{\displaystyle \frac{dy}{y}}P_{KK}(y)v_{Kq}\left({\displaystyle \frac{x}{y}}\right)`$ for the valence quark densities at the low $`Q_v^2`$ scale, and $`s_{u/\pi ^0}(x,Q_v^2)`$ $`=`$ $`s_{\overline{u}/\pi ^0}(x,Q_v^2)=s_{d/\pi ^0}(x,Q_v^2)=s_{\overline{d}/\pi ^0}(x,Q_v^2)`$ $`=`$ $`|b_2|^2{\displaystyle \int _x^1}{\displaystyle \frac{dy}{y}}P_{\pi \pi }(y)v_\mathrm{\Pi }\left({\displaystyle \frac{x}{y}}\right)`$ $`s_s(x,Q_v^2)`$ $`=`$ $`\overline{s}_s(x,Q_v^2)=|b_3|^2{\displaystyle \int _x^1}{\displaystyle \frac{dy}{y}}P_{KK}(y)v_{Ks}\left({\displaystyle \frac{x}{y}}\right),`$ (18) for the intrinsic up, down and strange seas. Gluon distributions are given by eq. (12) for both neutral and charged pions. It should be noted that, although considering the $`s_{q/\pi ^0}`$ densities as part of the intrinsic light sea or part of the valence densities in the $`\pi ^0`$ is a matter of convention, the low $`Q_v^2`$ structure of the $`\pi ^0`$ is different from the structure of charged pions. The difference is precisely given by the contribution of the $`|\pi ^{}\pi ^+`$ fluctuation, which can only occur in a $`\pi ^0`$ state. As a final result, notice that the intrinsic quark-antiquark sea of pions turns out to be symmetric as a consequence of the hadronic structure of the fluctuations. This is in contrast to the typically unequal intrinsic quark and antiquark distributions of the nucleon (see e.g. and Refs. therein). ## 4 Conclusions In this paper we have analysed some important consequences of making a hadronic Fock state expansion of the nucleon and pion low $`Q^2`$ wave-functions out of a novel mechanism for generating the cloud. We have shown that within such a scheme it is possible to generate not only non-perturbative quark-antiquark distributions but also the gluon sea needed at the low $`Q^2`$ starting scale for DGLAP evolution. These non-perturbative quarks and gluons are responsible for the bound nature of any hadron state, as they bring about the interactions between valence quarks. The non-perturbative quarks and gluons can be consistently identified with the so-called intrinsic sea, in contrast to the extrinsic sea. The extrinsic sea, on the other hand, is perturbatively generated by the probe momentum $`Q^2`$ and is part of the internal structure of the valons themselves, as discussed long ago by Hwa . In this sense our approach leads to a unification of two different pictures of the hadron structure, namely the early picture of (non-interacting) valons and the intrinsic sea idea of Brodsky et al. , which provides the binding agent for the bound hadron state. Moreover, our approach allows a full representation of the non-perturbative processes giving rise to hadronic quantum fluctuations. These fluctuations are due to the perturbative production of a $`q\overline{q}`$ pair which recombines with the remaining valons. 
Thus a connection between the physics of hadronic reactions and that of hadronic fluctuations is established through the well-known recombination mechanism. A remarkable feature of the approach is that neutral pion fluctuations are inhibited and non-perturbative gluons arise in their place. The reason is that neutral unflavored structures like the initial $`v_q\overline{q}`$ objects are more likely to recombine rapidly into gluons than to form neutral pions, in contrast to flavored structures like $`v_q\overline{q}^{}`$ which cannot do so. Thus, the hypothetical cloud of quantum fluctuations like $`|\widehat{p}_{}\pi ^0`$ does not contribute to the sum over $`|M_iB_i`$ in the RHS of eq. (1) but to the second term, $`|\widehat{p}_{}g`$, providing the source of valence-like gluons in the proton. Hence, within our scheme not only intrinsic quarks and antiquarks but also gluons are generated through quantum fluctuations of the low $`Q^2`$ hadron ground state. Concerning pions, we have calculated their quark-antiquark and gluon distributions at low $`Q^2`$. We have also shown that the non-perturbative structures of charged and neutral pions are different. The difference arises from the $`|\pi ^+\pi ^{}`$ fluctuation, which appears in the hadronic Fock state expansion of the $`\pi ^0`$ wave-function but not in the charged pion ones. Finally, we have shown that the pionic intrinsic quark and antiquark distributions are symmetric, as a result of the specific features of the pion's quantum fluctuations. This is in contrast to the structure of generic baryons, which have asymmetric intrinsic quark and antiquark distributions . However, it should be noted that the situation is different for mesons containing a light and a heavier valence quark, whose intrinsic quark-antiquark sea must be asymmetric . Summarizing, we have proposed a possible scenario for the origin of the valence-like sea quark and gluon distributions needed at the low (input) scale in order to describe the experimental DIS data for nucleons and pions. We have also discussed the low-scale structure of charged pions and shown that, besides valence quarks, the model predicts only gluon and strange intrinsic sea distributions as a suitable low $`Q^2`$ starting point for perturbative DGLAP evolution. For neutral pions, on the other hand, intrinsic light quark-antiquark distributions have to be considered as well. This signals a remarkable difference between the non-perturbative structure of neutral and charged pions. ## Acknowledgments We thank R. Vogt for useful comments. J.M. is partially supported by COLCIENCIAS, the Colombian Agency for Science and Technology, under Contract No. 242-99.
## Definitions Suppose $`M`$ is a 6-dimensional connected oriented smooth manifold and $`H`$ is a rank 4 smooth subbundle of its tangent bundle $`TM`$. Let $`Q`$ denote the quotient bundle $`TM/H`$. There is a homomorphism of vector bundles $`\mathcal{L}:H\times H\to Q`$ induced by Lie bracket:– $$\mathcal{L}(\xi ,\eta )=[\xi ,\eta ]\mathrm{mod}H\text{for }\xi ,\eta \in \mathrm{\Gamma }(H).$$ Regard $`\mathcal{L}`$ as a tensor $`\mathcal{L}\in \mathrm{\Gamma }(\mathrm{\Lambda }^2H^{}\otimes Q)`$. Then $`\mathcal{L}\wedge \mathcal{L}\in \mathrm{\Gamma }(\mathrm{\Lambda }^4H^{}\otimes \odot ^2Q)`$ may be regarded as a quadratic form on $`Q^{}`$ defined up to scale. We shall say that $`(M,H)`$ is non-degenerate if and only if $`\mathcal{L}\wedge \mathcal{L}`$ is non-degenerate as such a quadratic form. Since $`Q`$ has rank two, there are only two cases:– * $`(M,H)`$ is elliptic $`\iff `$ $`\mathcal{L}\wedge \mathcal{L}`$ is definite; * $`(M,H)`$ is hyperbolic $`\iff `$ $`\mathcal{L}\wedge \mathcal{L}`$ is indefinite. An elliptic example may be obtained by taking a 3-dimensional complex contact manifold and forgetting its complex structure. A hyperbolic example may be obtained by taking the product of two 3-dimensional real contact manifolds. These two examples will be referred to as the ‘flat’ models. The motivations for our investigation are discussed at the end of this article. ## Acknowledgements We are pleased to acknowledge several useful conversations with Gerd Schmalz, Jan Slovák, and Peter Vassiliou. ## The Elliptic Case ###### Theorem Suppose $`(M,H)`$ is elliptic. Then $`M`$ admits a unique almost complex structure $`J:TM\to TM`$ characterised by the following properties:– * $`J`$ preserves $`H`$; * the orientation on $`M`$ induced by $`J`$ is the given one; * $`\mathcal{L}:H\times H\to Q`$ is complex bilinear for the induced structures, or equivalently $`[\xi ,\eta ]+J[J\xi ,\eta ]\in \mathrm{\Gamma }(H)\text{for }\xi ,\eta \in \mathrm{\Gamma }(H)`$; * $`[\xi ,\eta ]+J[J\xi ,\eta ]-J[\xi ,J\eta ]+[J\xi ,J\eta ]\in \mathrm{\Gamma }(H)\text{for }\xi \in \mathrm{\Gamma }(TM),\eta \in \mathrm{\Gamma }(H).`$ Furthermore, the tensor $`S:Q\otimes H\to Q`$ induced by $$S(\xi ,\eta )=[\xi ,\eta ]+J[J\xi ,\eta ]\mathrm{mod}H\text{for }\xi \in \mathrm{\Gamma }(TM),\eta \in \mathrm{\Gamma }(H)$$ is the obstruction to $`J`$ being integrable. * Proof Fix $`x\in M`$. Since $`\mathcal{L}_x\wedge \mathcal{L}_x`$ is definite, there is no $`\psi \in Q_x^{}`$ for which $`(\psi \circ \mathcal{L}_x)\wedge (\psi \circ \mathcal{L}_x)`$ vanishes; as a quadratic polynomial, $`\mathcal{L}_x\wedge \mathcal{L}_x`$ has no real roots. Instead it has two complex roots, related by complex conjugation. Each of these roots gives $`\psi \in \mathbb{C}\otimes Q_x^{}`$ defined up to complex scale, so that $`(\psi \circ \mathcal{L}_x)\wedge (\psi \circ \mathcal{L}_x)`$ vanishes as an element of $`\mathrm{\Lambda }^4H^{}`$. In this case, according to the Plücker criterion, $`\psi \circ \mathcal{L}_x`$ is simple as an element of $`\mathrm{\Lambda }^2H^{}`$. The corresponding complex 2-plane in $`\mathbb{C}\otimes H^{}`$ defines a complex structure $`J:H_xH_x`$. At the same time $`\psi \in \mathbb{C}\otimes Q_x^{}`$ identifies $`Q_x`$ with $`\mathbb{C}`$ and, in particular, defines a complex structure $`J:Q_xQ_x`$. These complex structures on $`H_x`$ and $`Q_x`$ are unchanged if $`\psi `$ is multiplied by any complex number. In other words, they are determined by choosing one of the two roots of $`\mathcal{L}_x\wedge \mathcal{L}_x`$ as a quadratic polynomial. The other root replaces $`J`$ by $`-J`$ but only one of these choices induces the given orientation on $`M`$. To summarise, we now have uniquely determined almost complex structures on $`H`$ and $`Q`$ so that $$\mathcal{L}(\xi ,\eta )+J\mathcal{L}(J\xi ,\eta )=0\text{for }\xi ,\eta \in \mathrm{\Gamma }(H)$$ (1) and inducing the given orientation on $`M`$. Choose any extension of these almost complex structures to an almost complex structure $`\stackrel{~}{J}:TM\to TM`$. 
This $`\stackrel{~}{J}`$ satisfies the first three properties claimed in the statement of the theorem. Define $`\stackrel{~}{S}:TM\otimes H\to Q`$ by $$\stackrel{~}{S}(\xi ,\eta )=[\xi ,\eta ]+J[\stackrel{~}{J}\xi ,\eta ]\mathrm{mod}H\text{for }\xi \in \mathrm{\Gamma }(TM),\eta \in \mathrm{\Gamma }(H).$$ (2) This homomorphism depends on the choice of the extension $`\stackrel{~}{J}`$. For fixed $`\xi \in TM`$ consider the map $`H\to Q`$ defined by $`\eta \mapsto \frac{1}{2}(\stackrel{~}{S}(\xi ,\eta )+J\stackrel{~}{S}(\xi ,J\eta ))`$. By construction this map is complex linear, so non-degeneracy of $`\mathcal{L}`$ implies that there is a unique element $`K\xi \in H`$ such that $$\mathcal{L}(K\xi ,\eta )=\frac{\stackrel{~}{S}(\xi ,\eta )+J\stackrel{~}{S}(\xi ,J\eta )}{2}\text{for }\xi \in \mathrm{\Gamma }(TM),\eta \in \mathrm{\Gamma }(H)$$ (3) and this defines a homomorphism $`K:TM\to H`$. We claim that $`J=\stackrel{~}{J}+K`$ is the almost complex structure whose existence is asserted in the statement of the theorem. If $`\xi \in \mathrm{\Gamma }(H)`$, then (1) implies that $`\stackrel{~}{S}(\xi ,\eta )=0`$ so $`K\xi =0`$, and in particular $`K^2=0`$. Therefore, $`J`$ preserves $`H`$. Also $$(\stackrel{~}{J}+K)^2=\stackrel{~}{J}^2+\stackrel{~}{J}K+K\stackrel{~}{J}+K^2=-\text{Id}+\stackrel{~}{J}K+K\stackrel{~}{J}$$ so we must check that $`\stackrel{~}{J}K+K\stackrel{~}{J}=0`$. By the non-degeneracy of $`\mathcal{L}`$ it suffices to check that $$\mathcal{L}(\stackrel{~}{J}K\xi ,\eta )+\mathcal{L}(K\stackrel{~}{J}\xi ,\eta )=0\text{for }\xi \in \mathrm{\Gamma }(TM),\eta \in \mathrm{\Gamma }(H).$$ This is easily verified using (1), (2), and (3). Thus, $`J`$ is an almost complex structure. It satisfies the first three requirements listed in the theorem as a consequence of $`\stackrel{~}{J}`$ doing so. Moreover, the tensor $`S`$ corresponding to $`J=\stackrel{~}{J}+K`$ is visibly given by $`S(\xi ,\eta )=\stackrel{~}{S}(\xi ,\eta )+\mathcal{L}(K\xi ,\eta )`$. By construction, this is just the component of $`\stackrel{~}{S}`$ which is conjugate linear in the second variable. But the final requirement is immediately seen to be equivalent to the fact that the corresponding tensor $`S`$ (which is conjugate linear in the first variable by construction) is conjugate linear in the second variable, too. In fact, this forces (3) as the correct modification, so $`J`$ is uniquely characterised by having all four properties. It remains to show that the tensor $`S`$ is the obstruction to integrability of $`J`$. The Nijenhuis tensor of $`J`$ is $$N(\xi ,\eta )=[\xi ,\eta ]+J[J\xi ,\eta ]+J[\xi ,J\eta ]-[J\xi ,J\eta ]\text{for }\xi ,\eta \in \mathrm{\Gamma }(TM).$$ Notice that $`N`$ is skew and $`N(\xi ,J\eta )=-JN(\xi ,\eta )`$. In particular, $$N(\xi ,J\xi )=-JN(\xi ,\xi )=0\text{for }\xi \in \mathrm{\Gamma }(TM).$$ (4) Firstly, consider the case when $`\xi \in \mathrm{\Gamma }(TM),\eta \in \mathrm{\Gamma }(H)`$. The vanishing of $`S`$ means that $$[\xi ,\eta ]+J[J\xi ,\eta ]\in \mathrm{\Gamma }(H)\text{for }\xi \in \mathrm{\Gamma }(TM),\eta \in \mathrm{\Gamma }(H).$$ (5) In particular, this implies $`N(\xi ,\eta )\in \mathrm{\Gamma }(H)`$, so we may consider the tensor $`R:TM\otimes H\otimes H\to Q`$ defined by $$R(\xi ,\eta ,\mu )=\mathcal{L}(N(\xi ,\eta ),\mu )\text{for }\xi \in \mathrm{\Gamma }(TM),\eta ,\mu \in \mathrm{\Gamma }(H).$$ We claim that $`R`$ vanishes. Once this is proved, non-degeneracy of $`\mathcal{L}`$ implies that $`N(\xi ,\eta )=0`$ for $`\xi \in \mathrm{\Gamma }(TM),\eta \in \mathrm{\Gamma }(H)`$ and so $`N`$ descends to $`N:\mathrm{\Lambda }^2Q\to TM`$. Then, as $`Q`$ has complex rank one, (4) forces $`N`$ to vanish. To complete the proof, therefore, it suffices to show that $`R`$ vanishes. 
In the following calculation $`\equiv `$ denotes equality modulo $`H`$ and in passing from one line to the next we are using either the Jacobi identity, or (5), or the fact that $`S`$ is conjugate linear in both variables. $$\begin{array}{ccc}\hfill R(\xi ,\eta ,\mu )& \equiv & [[\xi ,\eta ],\mu ]+[J[J\xi ,\eta ],\mu ]+[J[\xi ,J\eta ],\mu ]-[[J\xi ,J\eta ],\mu ]\hfill \\ & \equiv & [[\xi ,\eta ],\mu ]+J[[J\xi ,\eta ],\mu ]+J[[\xi ,J\eta ],\mu ]-[[J\xi ,J\eta ],\mu ]\hfill \\ & =& [[\xi ,\mu ],\eta ]+J[[J\xi ,\mu ],\eta ]+J[[\xi ,\mu ],J\eta ]-[[J\xi ,\mu ],J\eta ]\hfill \\ & & +[[\mu ,\eta ],\xi ]+J[[\mu ,\eta ],J\xi ]+J[[\mu ,J\eta ],\xi ]-[[\mu ,J\eta ],J\xi ]\hfill \\ & \equiv & [[\xi ,\mu ],\eta ]+[J[J\xi ,\mu ],\eta ]+[J[\xi ,\mu ],J\eta ]-[[J\xi ,\mu ],J\eta ]\hfill \\ & & +[[\mu ,\eta ],\xi ]+J[[\mu ,\eta ],J\xi ]+J[[\mu ,J\eta ],\xi ]-[[\mu ,J\eta ],J\xi ]\hfill \\ & =& [[\xi ,\mu ]+J[J\xi ,\mu ],\eta ]+[J[\xi ,\mu ]-[J\xi ,\mu ],J\eta ]\hfill \\ & & +[[\mu ,\eta ],\xi ]+J[[\mu ,\eta ],J\xi ]+J[[\mu ,J\eta ],\xi ]-[[\mu ,J\eta ],J\xi ]\hfill \\ & \equiv & [[\mu ,\eta ],\xi ]+J[[\mu ,\eta ],J\xi ]+J[[\mu ,J\eta ],\xi ]-[[\mu ,J\eta ],J\xi ].\hfill \end{array}$$ Therefore, $$R(\xi ,\eta ,\mu )+R(\xi ,\mu ,\eta )\equiv J[[\mu ,J\eta ]+[\eta ,J\mu ],\xi ]-[[\mu ,J\eta ]+[\eta ,J\mu ],J\xi ]$$ and since $`[\mu ,J\eta ]+[\eta ,J\mu ]\in \mathrm{\Gamma }(H)`$, this expression vanishes by (5). We conclude that $`R:TM\otimes H\otimes H\to Q`$ is skew in its last two entries. But by definition $`R`$ is conjugate linear in the middle variable and complex linear in the last variable, which together with skew symmetry in these two variables forces $`R`$ to vanish as required. ∎ ###### Corollary The only local invariant of an elliptic $`(M,H)`$ is the tensor $`S`$. * Proof If $`S`$ vanishes, then $`(M,H)`$ is a complex contact manifold. The Darboux theorem in the holomorphic setting says that all 3-dimensional complex contact manifolds are locally isomorphic. ∎ ## The Hyperbolic Case There is an entirely parallel story for the hyperbolic case with almost complex structure replaced by almost product structure. The corresponding theorem may be stated as follows. ###### Theorem Suppose $`(M,H)`$ is hyperbolic. Then $`H`$ admits a canonical splitting $`H=H_+\oplus H_{}`$ characterised by the following properties:– * $`[\xi ,\eta ]\in \mathrm{\Gamma }(H)\text{for }\xi \in \mathrm{\Gamma }(H_+),\eta \in \mathrm{\Gamma }(H_{})`$; * the orientation on M induced by $`\xi _1\wedge \xi _2\wedge [\xi _1,\xi _2]\wedge \eta _1\wedge \eta _2\wedge [\eta _1,\eta _2]`$ for $`\xi _1,\xi _2\in \mathrm{\Gamma }(H_+),\eta _1,\eta _2\in \mathrm{\Gamma }(H_{})`$ is the given one. Let $`Q_\pm `$ be the range of $`\mathcal{L}|_{\mathrm{\Lambda }^2H_\pm }`$. Non-degeneracy of $`\mathcal{L}`$ implies that $`Q=Q_+\oplus Q_{}`$. By setting $`T_\pm M=[H_\pm ,H_\pm ]`$, we obtain a canonical splitting $`TM=T_+M\oplus T_{}M`$ such that $`Q_\pm =T_\pm M/H_\pm `$. Furthermore, the tensors $`S_+:Q_+\otimes H_+\to Q_{}`$ and $`S_{}:Q_{}\otimes H_{}\to Q_+`$ induced by $$S_\pm (\xi ,\eta )\equiv [\xi ,\eta ]\mathrm{mod}(T_\pm M\oplus H_{\mp })\text{for }\xi \in \mathrm{\Gamma }(T_\pm M),\eta \in \mathrm{\Gamma }(H_\pm )$$ are the respective obstructions to $`T_+M`$ and $`T_{}M`$ being Frobenius integrable. If $`S_\pm `$ both vanish, then locally we obtain the flat model, namely a product of two 3-dimensional real contact manifolds. The Darboux theorem, applied to each such contact manifold separately, implies that the flat model is locally unique. Again, the tensors $`S_\pm `$ provide the only local structure. ## Motivations Our motivation for this article comes from the theory of CR submanifolds of codimension 2 in $`\mathbb{C}^4`$. 
This theory was pioneered by Loboda and Ezhov-Schmalz , who found normal forms for such submanifolds paralleling the Moser normal form for CR hypersurfaces. In this context, the distribution $`H`$ is formed by the maximal complex subspaces of the tangent spaces. More generally, to make an elliptic or hyperbolic $`(M,H)`$ into a partially integrable almost CR manifold, one has to specify an almost complex structure $`\stackrel{~}{J}`$ on $`H`$ such that $`\mathcal{L}(\stackrel{~}{J}\xi ,\stackrel{~}{J}\eta )=\mathcal{L}(\xi ,\eta )`$ for all $`\xi ,\eta \in H`$. In the hyperbolic case, this implies in particular that $`H=H_+\oplus H_{}`$ is a decomposition of $`H`$ as a sum of two complex line bundles. On the other hand, in the elliptic case the second almost complex structure $`\stackrel{~}{J}`$ can also be rephrased as a decomposition $`H=H_+\oplus H_{}`$ as a sum of complex line bundles characterised by $`\stackrel{~}{J}=\pm J`$ on $`H_\pm `$. Clearly, these additional structures lead to additional obstructions to being CR-isomorphic to the flat models (which are just appropriate quadrics). For example, one has the Nijenhuis tensor corresponding to $`\stackrel{~}{J}`$, or the obvious obstructions to integrability of the subbundles $`H_\pm `$ in the elliptic case. But in fact, in the CR setting, one gets much more structure: in and it is shown that one gets a parabolic geometry parallel to the Chern-Moser-Tanaka theory for CR hypersurfaces and thus in particular canonical Cartan connections. This article may be viewed as some remnant of the parabolic theory. As pointed out to us by Peter Vassiliou, there is another context in which $`(M,H)`$ with these special dimensions arise. The general pair of smooth first order partial differential equations in two independent variables $`(x,y)`$ and two dependent variables $`(u,v)`$ may be regarded as a codimension 2 submanifold $`M`$ in the 8-dimensional jet space with coördinates $`(x,y,u,v,u_x,u_y,v_x,v_y)`$. This jet space has a natural distribution of rank 6 defined as the common kernel of the two 1-forms $$du-u_xdx-u_ydy\text{and}dv-v_xdx-v_ydy.$$ Generically, $`M`$ will meet this distribution transversally and so will itself inherit a rank 4 distribution $`H`$. The elliptic flat model is obtained from the Cauchy-Riemann equations $$u_x=v_y\text{and}u_y=-v_x.$$ The hyperbolic flat model arises from the equations $$u_y=0\text{and}v_x=0.$$ Further discussion may be found in \[2, Chapter VII, §1\], , and . ## Higher Dimensions If we start with a $`(2n+1)`$-dimensional complex contact manifold $`M`$ with contact distribution $`H`$, then $`\mathcal{L}\in \mathrm{\Gamma }(\mathrm{\Lambda }^2H^{}\otimes Q)`$ may be defined as before but now we should consider $`\mathcal{L}^{2n}\in \mathrm{\Gamma }(\mathrm{\Lambda }^{4n}H^{}\otimes \odot ^{2n}Q)`$ as a polynomial of degree $`2n`$ defined up to scale. Only when $`n=1`$ is this polynomial generic. In general it has only two roots, each complex and of multiplicity $`n`$. | Andreas Čap | | --- | | Institut für Mathematik, Universität Wien | | Strudlhofgasse 4, A-1090 Wien, Austria | | and | | Erwin Schrödinger International Institute for Mathematical Physics, | | Boltzmanngasse 9, A-1090 Wien, Austria | | E-mail: andreas.cap@esi.ac.at | | Michael Eastwood | | Department of Pure Mathematics | | University of Adelaide | | South Australia 5005 | | E-mail: meastwoo@maths.adelaide.edu.au |
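The definiteness criterion of the Definitions section can be checked in coordinates: a 2-form on the rank-4 bundle $`H`$ corresponds to a $`4\times 4`$ antisymmetric matrix, and $`\mathcal{L}\wedge \mathcal{L}`$ corresponds, up to a fixed volume factor, to a Pfaffian. The sketch below is our own illustration, not part of the paper: writing $`\mathcal{L}=\psi _1A_1+\psi _2A_2`$ in a basis of $`Q`$, it verifies that the quadratic polynomial $`\mathrm{Pf}(\psi _1A_1+\psi _2A_2)`$ is definite for the flat elliptic model and indefinite for the flat hyperbolic one.

```python
# Illustrative check of the elliptic/hyperbolic dichotomy via Pfaffians.
import sympy as sp

def pfaffian4(A):
    # Pf(A) for a 4x4 antisymmetric matrix; A ^ A = Pf(A) * volume form
    return A[0, 1]*A[2, 3] - A[0, 2]*A[1, 3] + A[0, 3]*A[1, 2]

psi1, psi2 = sp.symbols('psi1 psi2', real=True)

# Flat elliptic model: real and imaginary parts of dz1 ^ dz2 on R^4 = C^2
E1 = sp.Matrix([[0, 0, 1, 0], [0, 0, 0, -1], [-1, 0, 0, 0], [0, 1, 0, 0]])
E2 = sp.Matrix([[0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0], [-1, 0, 0, 0]])
print(sp.expand(pfaffian4(psi1*E1 + psi2*E2)))   # psi1**2 + psi2**2: definite

# Flat hyperbolic model: H_+ = span(e1, e2) maps to Q_+, H_- = span(e3, e4) to Q_-
H1 = sp.Matrix([[0, 1, 0, 0], [-1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
H2 = sp.Matrix([[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1], [0, 0, -1, 0]])
print(sp.expand(pfaffian4(psi1*H1 + psi2*H2)))   # psi1*psi2: indefinite
```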
# Lepton Pair Production at the LHC and the Gluon Density in the Proton ## Abstract The hadroproduction of lepton pairs with mass $`Q`$ and finite transverse momentum $`Q_T`$ is dominated by quark-gluon scattering in the region $`Q_T>Q/2`$. This feature provides a new independent method for constraining the gluon density with data at hadron collider energies. Predictions are provided at the energy of the LHC. ANL-HEP-CP-00-009 DESY 00-032 The production of lepton pairs in hadron collisions $`h_1h_2\to \gamma ^{*}X;\gamma ^{*}\to l\overline{l}`$ proceeds through an intermediate virtual photon via $`q\overline{q}\to \gamma ^{*}`$, and the subsequent leptonic decay of the virtual photon. Interest in this Drell-Yan process is usually focussed on lepton pairs with large mass $`Q`$, which justifies the application of perturbative QCD and allows for the extraction of the antiquark density in hadrons . Prompt photon production $`h_1h_2\to \gamma X`$ can be calculated in perturbative QCD if the transverse momentum $`Q_T`$ of the photon is sufficiently large. Because the quark-gluon Compton subprocess is dominant, $`gq\to \gamma X`$, this reaction provides essential information on the gluon density in the proton at large $`x`$ . Alternatively, the gluon density can be constrained from the production of jets with large transverse momentum at hadron colliders . In this report we exploit the fact that, as in prompt photon production, lepton pair production is dominated by quark-gluon scattering in the region $`Q_T>Q/2`$. This realization means that new independent constraints on the gluon density may be derived from Drell-Yan data in kinematical regimes that are accessible at the Large Hadron Collider (LHC) but without the theoretical and experimental uncertainties present in the prompt photon case. In leading order (LO) QCD, two partonic subprocesses contribute to the production of virtual and real photons with non-zero transverse momentum: $`q\overline{q}\to \gamma ^{(*)}g`$ and $`qg\to \gamma ^{(*)}q`$. The cross section for lepton pair production is related to the cross section for virtual photon production through the leptonic branching ratio of the virtual photon $`\alpha /(3\pi Q^2)`$. The virtual photon cross section reduces to the real photon cross section in the limit $`Q^2\to 0`$. The next-to-leading order (NLO) QCD corrections arise from virtual one-loop diagrams interfering with the LO diagrams and from real emission diagrams. At this order $`2\to 3`$ partonic processes with incident gluon pairs $`(gg)`$, quark pairs $`(qq)`$, and non-factorizable quark-antiquark $`(q\overline{q}_2)`$ processes also contribute. An important difference between virtual and real photon production arises when a quark emits a collinear photon. Whereas the collinear emission of a real photon leads to a $`1/ϵ`$ singularity that has to be factored into a fragmentation function, the collinear emission of a virtual photon yields a finite logarithmic contribution, since it is regulated naturally by the photon virtuality $`Q`$. In the limit $`Q^2\to 0`$ the NLO virtual photon cross section reduces to the real photon cross section if this logarithm is replaced by a $`1/ϵ`$ pole. A more detailed discussion can be found in Ref. . The situation is completely analogous to hard photoproduction, where the photon participates in the scattering in the initial state instead of the final state. For real photons, one encounters an initial-state singularity that is factored into a photon structure function. 
For virtual photons, this singularity is replaced by a logarithmic dependence on the photon virtuality $`Q`$ . A remark is in order concerning the interval in $`Q_T`$ in which our analysis is appropriate. In general, in two-scale situations, a series of logarithmic contributions will arise with terms of the type $`\alpha _s^n\mathrm{ln}^n(Q/Q_T)`$. Thus, if either $`Q_T\gg Q`$ or $`Q_T\ll Q`$, resummations of this series must be considered. For practical reasons, such as event rate, we do not venture into the domain $`Q_T\gg Q`$, and our fixed-order calculation should be adequate. On the other hand, the cross section is large in the region $`Q_T\ll Q`$. In previous papers , we compared our cross sections with available fixed-target and collider data on massive lepton-pair production, and we were able to establish that fixed-order perturbative calculations, without resummation, should be reliable for $`Q_T>Q/2`$. At smaller values of $`Q_T`$, non-perturbative and matching complications introduce some level of phenomenological ambiguity. For the goal we have in mind, viz., constraints on the gluon density, it would appear best to restrict attention to the region $`Q_T\ge Q/2`$, but below $`Q_T\gg Q`$. We analyze the invariant cross section $`Ed^3\sigma /dp^3`$ averaged over the rapidity interval -1.0 $`<y<`$ 1.0. We integrate the cross section over various intervals of pair-mass $`Q`$ and plot it as a function of the transverse momentum $`Q_T`$. Our predictions are based on a NLO QCD calculation and are evaluated in the $`\overline{\mathrm{MS}}`$ renormalization scheme. The renormalization and factorization scales are set to $`\mu =\mu _f=\sqrt{Q^2+Q_T^2}`$. If not stated otherwise, we use the CTEQ4M parton distributions and the corresponding value of $`\mathrm{\Lambda }`$ in the two-loop expression of $`\alpha _s`$ with four flavors (five if $`\mu >m_b`$). The Drell-Yan factor $`\alpha /(3\pi Q^2)`$ for the decay of the virtual photon into a lepton pair is included in all numerical results. In Fig. 1 we display the NLO QCD cross section for lepton pair production at the LHC at $`\sqrt{S}=14`$ TeV as a function of $`Q_T`$ for four regions of $`Q`$ chosen to avoid resonances, i.e. from threshold to $`2.5`$ GeV, between the $`J/\psi `$ and the $`\mathrm{\Upsilon }`$ resonances, above the $`\mathrm{\Upsilon }`$’s, and a high mass region. The cross section falls both with the mass of the lepton pair $`Q`$ and, more steeply, with its transverse momentum $`Q_T`$. The initial LHC luminosity is expected to be 10<sup>33</sup> cm<sup>-2</sup> s<sup>-1</sup>, or 10 fb<sup>-1</sup>/year, and to reach the design luminosity of 10<sup>34</sup> cm<sup>-2</sup> s<sup>-1</sup> after three or four years. Therefore it should be possible to analyze data for lepton pair production to at least $`Q_T\simeq 100`$ GeV, where one can probe the parton densities in the proton up to $`x_T=2Q_T/\sqrt{S}\simeq 0.014`$. The UA1 collaboration measured the transverse momentum distribution of lepton pairs at $`\sqrt{S}=630`$ GeV to $`x_T=0.13`$ , and their data agree well with our expectations . The fractional contributions from the $`qg`$ and $`q\overline{q}`$ subprocesses through NLO are shown in Fig. 2. It is evident that the $`qg`$ subprocess is the most important subprocess as long as $`Q_T>Q/2`$. The dominance of the $`qg`$ subprocess increases somewhat with $`Q`$, rising from over 80 % for the lowest values of $`Q`$ to about 90 % at its maximum for $`Q\simeq 30`$ GeV. 
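The kinematic reach quoted above follows from elementary relations; the sketch below (ours, with illustrative inputs) evaluates $`x_T=2Q_T/\sqrt{S}`$ for the LHC and UA1 cases and the Drell-Yan branching factor $`\alpha /(3\pi Q^2)`$.

```python
# Back-of-envelope kinematics for the reach quoted in the text.
import math

ALPHA = 1.0 / 137.036  # fine-structure constant

def x_T(Q_T_GeV, sqrt_S_GeV):
    return 2.0 * Q_T_GeV / sqrt_S_GeV

def drell_yan_factor(Q_GeV):
    """Leptonic branching factor alpha/(3 pi Q^2), in GeV^-2."""
    return ALPHA / (3.0 * math.pi * Q_GeV**2)

print(x_T(100.0, 14000.0))     # ~0.014 at the LHC, sqrt(S) = 14 TeV
print(x_T(40.0, 630.0))        # ~0.13, the UA1 reach quoted in the text
print(drell_yan_factor(10.0))  # factor for an illustrative 10 GeV pair
```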
Subprocesses other than those initiated by the $`q\overline{q}`$ and $`qg`$ initial channels are of negligible import. The full uncertainty in the gluon density is not known. We estimate the sensitivity of LHC experiments to the gluon density in the proton from the variation among different recent parametrizations. We choose the latest global fit by the CTEQ collaboration (5M) as our point of reference and compare results to those based on their preceding analysis (4M) and on a fit with a higher gluon density (5HJ) intended to describe the CDF and D0 jet data at large transverse momentum. We also compare to results based on global fits by MRST , who provide three different sets with a central, higher, and lower gluon density, and to GRV98 (in this set a purely perturbative generation of heavy flavors, charm and bottom, is assumed; since we are working in a massless approach, we resort to the GRV92 parametrization for the charm contribution and assume the bottom contribution to be negligible). In Fig. 3 we plot the cross section for lepton pairs in the region between $`Q_T=50`$ and 100 GeV ($`x_T=0.007\mathrm{}0.014`$). For the CTEQ parametrizations we find that the cross section increases from 4M to 5M by 5 % and does not change from 5M to 5HJ in the whole $`Q_T`$-range. The largest differences from CTEQ5M are obtained with GRV98 (minus 18 %). The theoretical uncertainty in the cross section can be estimated by varying the renormalization and factorization scale $`\mu =\mu _f`$ about the central value $`\sqrt{Q^2+Q_T^2}`$. In the region between the $`J/\psi `$ and $`\mathrm{\Upsilon }`$ resonances, the scale variation of the cross section drops from $`\pm 39\%`$ (LO) to $`\pm 16\%`$ (NLO) when $`\mu `$ is varied over the interval $`0.5<\mu /\sqrt{Q^2+Q_T^2}<2`$. The $`K`$-factor ratio (NLO/LO) is approximately 1.3 at $`\mu /\sqrt{Q^2+Q_T^2}=1`$. We conclude that the hadroproduction of low mass lepton pairs is an advantageous source of information on the parametrization and size of the gluon density. With the design luminosity of the LHC, regions of $`x_T\lesssim 0.014`$ should be accessible. The theoretical uncertainty has been estimated from the scale dependence of the cross sections and found to be small at NLO QCD.
# LINER/H II “Transition” Nuclei and the Nature of NGC 4569 ## 1 Introduction Emission-line nebulae in galactic nuclei are generally considered to fall into three major categories: star-forming or H II nuclei, Seyfert nuclei, and low-ionization nuclear emission-line regions, or LINERs. The formal divisions between these classes are somewhat arbitrary, as the observed emission-line ratios of nearby galactic nuclei fall in a continuous distribution between LINERs and Seyfert nuclei and between LINERs and H II nuclei (e.g., Ho et al., 1993). Traditionally, LINERs have been defined as those nuclei having emission-line flux ratios which satisfy the relations \[O II\] $`\lambda `$3727/\[O III\] $`\lambda `$5007 $`>1`$ and \[O I\] $`\lambda `$6300/\[O III\] $`\lambda `$5007 $`>1/3`$ (Heckman, 1980). It is possible to construct alternative but practically equivalent definitions, based on other line ratios, which can be applied to datasets that do not include the wavelengths of the \[O I\], \[O II\], or \[O III\] lines (e.g., Ho et al., 1997a). A sizeable minority of galactic nuclei has emission-line ratios which are intermediate between those of “pure” LINERs and those of typical H II regions powered by hot stars; these galaxies would be classified as LINERs except that their \[O I\] $`\lambda `$6300 line strengths are too small in comparison with other lines to meet the formal LINER criteria. Objects falling into this category have been dubbed “transition” galaxies by Ho et al. (1993), and although this nomenclature is somewhat ambiguous we adopt it here for consistency with the spectroscopic survey of Ho et al. (1997a). That survey defined the transition class in terms of the following flux ratios: \[O III\] $`\lambda `$5007/H$`\beta `$ $`<`$ 3, 0.08 $`\le `$ \[O I\] $`\lambda `$6300/H$`\alpha `$ $`<`$ 0.17, \[N II\] $`\lambda `$6583/H$`\alpha `$ $`\ge `$ 0.6, \[S II\] $`\lambda \lambda `$6716, 6731 /H$`\alpha `$ $`\ge `$ 0.4. Filippenko & Terlevich (1992) have used the term “weak-\[O I\] LINERs” to refer to galaxies having \[N II\] $`\lambda `$6583/H$`\alpha `$ $`\ge `$ 0.6 (typical of LINERs) but which have \[O I\] $`\lambda `$6300/H$`\alpha `$ $`<1/6`$. This category is essentially identical to the transition class of Ho et al. (1993, 1997a), and we will refer to these galaxies as transition objects in this paper. According to the survey results of Ho et al. (1997b), this transition class accounts for 13% of all nearby galaxies, making them about as numerous as Seyfert nuclei. The Hubble type distribution of transition galaxies is intermediate between that of LINERs, which are most common in E/S0/Sa galaxies, and that of H II nuclei, which occur most often in Hubble types later than Sb (Ho et al., 1997b). Roughly $`20\%`$ of galaxies with Hubble types ranging from S0 to Sbc belong to the transition class. There is not a consensus, however, as to whether these transition objects should be regarded as star-forming nuclei, as accretion-powered active nuclei, or as composite objects powered by an AGN and by hot stars in roughly equal proportion. There is a large body of literature on the subject of the excitation mechanism of LINERs which is relevant to the similar transition class. A variety of physical mechanisms has been proposed to explain the emission spectra of LINERs, including shocks, photoionization by a nonstellar ultraviolet (UV) and X-ray continuum, and photoionization by hot stars. (See Filippenko 1996 for a review.) 
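For concreteness, the survey criteria quoted above translate directly into a small classifier; the function below is our transcription of those cuts (the function and argument names are ours, and the inequality senses follow the Ho et al. definitions as restored above).

```python
# Transition-object criteria of Ho et al. (1997a), as quoted in the text.
def is_transition(oiii_hb, oi_ha, nii_ha, sii_ha):
    """Inputs: [O III]5007/Hbeta, [O I]6300/Halpha,
    [N II]6583/Halpha, [S II]6716+6731/Halpha (reddening-corrected)."""
    return (oiii_hb < 3.0 and
            0.08 <= oi_ha < 0.17 and
            nii_ha >= 0.6 and
            sii_ha >= 0.4)

# Example: a set of ratios in the transition range
print(is_transition(oiii_hb=1.5, oi_ha=0.10, nii_ha=0.8, sii_ha=0.5))  # True
```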
The possibility that LINERs (and Seyfert nuclei as well) might be photoionized by starlight was raised by Terlevich & Melnick (1985), who suggested that very hot ($`T_{\mathrm{eff}}\sim 10^5`$ K) Wolf-Rayet (W-R) stars in a metal-rich starburst could give rise to an ionizing continuum with a nearly power-law shape in the extreme-UV. More recent atmosphere models have indicated substantially lower temperatures for W-R stars, however, casting doubt on the Warmer hypothesis (Leitherer et al., 1992). Subsequent photoionization models have attempted to explain LINER and transition-type spectra as resulting from massive main-sequence stars. Filippenko & Terlevich (1992) found that the spectra of weak-\[O I\] LINERs could be explained in terms of photoionization by O3–O4 stars having effective temperatures of $`\sim 45,000`$ K, at ionization parameters of $`U\sim 10^{-3.7}`$ to $`10^{-3.3}`$. Shields (1992) carried this line of argument farther, proposing that genuine LINER spectra could be generated by early O stars with $`T_{\mathrm{eff}}\sim 50,000`$ K, provided that a high-density component ($`n_e\sim 10^{5.5}`$ cm<sup>-3</sup>) is present in the NLR; the high densities are needed to boost the strengths of high critical-density emission lines, most notably \[O I\] $`\lambda `$6300. Similar conclusions were reached by Schulz & Fritsch (1994), who explored the effects of absorption by ionized gas as a means to harden the effective ionizing spectrum. Recent observations, particularly in the UV and X-ray bands, have provided convincing evidence that many LINERs are in fact AGNs, particularly the “Type 1” LINERs which have a broad component to the H$`\alpha `$ emission line (for a recent review see Ho, 1999). The possibility has remained, however, that some LINERs and transition nuclei are powered entirely by bursts of star formation. An important shortcoming of the model calculations performed by Filippenko & Terlevich (1992) and Shields (1992) is that the ionizing continua used as input were those of single O-type stars; these studies did not address the question of whether a LINER or transition-type spectrum could result from the *integrated* ionizing continuum of a young stellar cluster. Compared with these single-star models, the contribution of late-O and B stars will soften the ionizing spectrum, making the emission-line ratios tend toward those of normal H II regions. W-R stars, on the other hand, will harden the ionizing spectrum during the period when these stars are present, roughly 3–6 Myr after the burst. Another drawback of the O-star models is that they require the presence of stars with effective temperatures higher than are thought to occur in H II regions of solar or above-solar metallicity, in order to produce a LINER or transition-type spectrum rather than an H II region spectrum. Their applicability to galactic nuclei is therefore somewhat unclear. Other mechanisms have been proposed for generating LINER or transition-type spectra. Shock excitation by supernova remnants in an aging starburst may give rise to some transition objects; the nucleus of NGC 253 is a likely candidate for such an object (Engelbracht et al., 1998). Also, post-AGB stars and planetary nebula nuclei will produce a diffuse ionizing radiation field which could be responsible for the very faint LINER emission (with H$`\alpha `$ equivalent widths of $`\sim 1`$ Å) observed in some ellipticals and spiral bulges (Binette et al., 1994). 
An alternate possibility is that the transition galaxies may simply be composite systems consisting of an active nucleus surrounded by star-forming regions. For a galaxy at a distance of 10 Mpc, for example, a 2″-wide spectroscopic aperture will include H II regions within 50 pc of the nucleus. Galaxies having emission lines both from a LINER nucleus and from surrounding star-forming regions, in roughly equal proportions, will appear to have a transition-type spectrum. This interpretation was advocated by Ho et al. (1993) as the most likely explanation for the majority of transition galaxies, and is consistent with the observed Hubble type distribution for the transition class. Other authors have similarly contended that transition galaxies are AGN/H II region composites, based on optical line-profile decompositions (Véron et al., 1997; Gonçalves et al., 1999) and near-infrared spectra (Hill et al., 1999). Two of the 65 transition nuclei observed in the Ho et al. (1997b) survey have a broad component to the H$`\alpha `$ emission line, indicating the likely presence of an AGN, and it is probable that many more transition nuclei contain obscured AGNs which were not detected in the optical spectra. On the other hand, radio observations do not appear to support the composite AGN/starburst interpretation. In a VLA survey of nearby galactic nuclei, Nagar et al. (1999) find compact, flat-spectrum radio cores in more than 50% of LINER nuclei, but in only 6% (1 of 18) of transition objects. This discrepancy suggests that the simple picture of an ordinary LINER surrounded by star-forming regions may not apply to the majority of transition objects. Recent results from the *Hubble Space Telescope* (*HST*) have shed new light on the question of the excitation mechanism of transition nuclei. As shown by Maoz et al. (1998), the UV spectrum of the well-known transition nucleus in NGC 4569 over 1200-1600 Å is virtually identical to that of a W-R knot in the starburst galaxy NGC 1741, indicating that O stars with ages of a few Myr dominate the UV continuum. Maoz et al. (1998) find that the nuclear star cluster in NGC 4569 is producing sufficient UV photons to ionize the surrounding narrow-line region, a key conclusion which provides fresh motivation to study stellar photoionization models. The brightness of the NGC 4569 nucleus, and the consequently high S/N observations that have been obtained, make it one of the best objects with which to study the transition phenomenon. The recent availability of the STARBURST99 model set (Leitherer et al., 1999) has prompted us to reexamine the issue of ionization by hot stars in LINERs and transition nuclei. These models give predictions for the spectrum and luminosity of a young star cluster, for a range of values of cluster age, metal abundance, and stellar initial mass function (IMF) properties. Using the photoionization code CLOUDY (Ferland et al., 1998) in combination with the STARBURST99 model continua, we have calculated the expected emission-line spectrum of an H II region illuminated by a young star cluster, to test the hypothesis that some LINERs and transition nuclei may be powered by starlight. Similar calculations have been performed by Stasińska & Leitherer (1996), but for the physically distinct case of metal-poor objects representing H II galaxies. Other examples of photoionization calculations for H II regions using evolving starburst continua are presented by García-Vargas & Díaz (1994), García-Vargas et al. (1995), and Bresolin et al. (1999). 
## 2 The Nucleus of NGC 4569 Before describing the photoionization modeling, we review the properties of NGC 4569, as it is among the best-known examples of the transition class. NGC 4569 is a Virgo cluster spiral of type Sab, with a heliocentric velocity of $`-235`$ km s<sup>-1</sup>, and we assume a distance of 16.8 Mpc for consistency with the catalog of Ho et al. (1997a). Its nucleus is remarkably bright for a non-Seyfert, and so compact in the optical that Humason (1936) suspected it to be a foreground Galactic star. It is also an unusually bright UV source, with the highest 2200 Å luminosity of the LINERs and transition objects observed by Maoz et al. (1995) and Barth et al. (1998). *HST* Faint Object Spectrograph (FOS) spectra show that the UV continuum is dominated by massive stars, with prominent P Cygni profiles of C IV $`\lambda `$1549, Si IV $`\lambda `$1400, and N V $`\lambda `$1240 (Maoz et al., 1998). The UV spectrum is nearly an exact match to the spectrum of one of the starburst knots in the W-R galaxy NGC 1741, an object with a likely age in the range 3–6 Myr (Conti et al., 1996). The optical spectrum of the NGC 4569 nucleus is dominated by the light of A-type supergiants, providing additional evidence for recent star formation (Keel, 1996). One key result of the Maoz et al. (1998) study was the conclusion that the nuclear starburst in NGC 4569 is producing sufficient numbers of ionizing photons to power the narrow-line region, assuming that the surrounding nebula is ionization-bounded, *even without correcting for the effects of internal extinction on the UV continuum flux*. In fact, there appears to be substantial extinction within NGC 4569, as demonstrated by the UV continuum slope as well as the presence of deep interstellar absorption features (Maoz et al., 1998). Ho et al. (1997a) derive an internal reddening of $`E(B-V)=0.46`$ mag from the H$`\alpha `$/H$`\beta `$ ratio, while Maoz et al. (1998) estimate a UV extinction of $`A\sim 4.8`$ mag at 1300 Å by comparison of the observed UV slope with the expected spectral shape of an unreddened starburst. Despite the fact that NGC 4569 is often referred to as a LINER, and in some cases presumed to contain an AGN on the basis of that classification, there is no single piece of evidence which conclusively demonstrates that an AGN is in fact present at all. The *HST* images and spectra are all consistent with the nucleus being a young, luminous, and compact starburst region. No broad-line component is detected on the H$`\alpha `$ emission line (Ho et al., 1997a), and no narrow or broad emission lines are visible at all in the UV spectrum other than the P Cygni features that are generated in O-star winds (Maoz et al., 1998). Only the optical emission-line ratios point to a possible AGN classification. In a thorough study of optical and *IUE* UV spectra, Keel (1996) concluded that there was at best weak evidence for the presence of an AGN in NGC 4569, and that any AGN continuum component, if present, must have an unusually steep spectrum. Furthermore, while the nucleus of NGC 4569 is certainly extremely compact, the UV and optical *HST* images show that the nucleus is not dominated by a central point source. At 2200 Å, the nucleus appears extended in WFPC2 images with FWHM sizes of 13 and 9 pc along its major and minor axes (Barth et al., 1998). Optical WFPC2 images have been discussed recently by Pogge et al. (1999), who state that the nucleus is unresolved by *HST*. We have obtained these same images from the *HST* archive. 
While the nucleus is certainly compact, we find that it is clearly extended even at the smallest radii. A 12-second, CR-SPLIT exposure in the F547M ($`V`$-band) filter is unsaturated and allows a radial profile measurement. We find a FWHM size of 14 pc by 8 pc along the major and minor axes, consistent with the size of the nuclear cluster measured at 2200 Å. Barth et al. (1998) estimated that at most 23% of the nuclear UV flux could come from a central point source. From the equivalent widths of stellar-wind features in the UV spectrum, Maoz et al. (1998) give a similar upper limit of $`20\%`$ to the possible contribution of a truly featureless continuum to the observed UV flux. X-ray observations of NGC 4569 with *ROSAT* have revealed a source coincident with the nucleus which is unresolved at the 2″ resolution of the HRI camera (Colbert & Mushotzky, 1999). This does not necessarily indicate that an AGN is present, however, as the optical/UV size of the starburst core is an order of magnitude smaller than the HRI resolution. *ASCA* observations show that the X-ray emission is extended over arcminute scales in both the hard (2–7 keV) and soft (0.5–2 keV) bands (Terashima et al., 1999). Interestingly, the compact source seen in the *ROSAT* image is detected only in the soft *ASCA* band, while there is no detectable contribution from a compact, hard X-ray source. The spectral shape of the compact soft X-ray component is consistent with an origin either in an AGN or in X-ray binaries (Tschöke & Hensler, 1999), but the lack of a compact hard X-ray source argues against the AGN interpretation. If an AGN is present, it must be highly obscured even at hard X-ray energies, with an obscuring column of $`N_H>10^{23}`$ cm<sup>-2</sup> (Terashima et al., 1999). In radio emission, VLA observations show that the NGC 4569 nucleus is an extended source with a size of 4″ and no apparent core (Neff & Hutchings, 1992), in contrast with the compact, AGN-like cores found in some LINERs (Falcke et al., 1998). Shock excitation has often been considered as a mechanism to power the narrow emission lines in LINERs. However, the lack of narrow emission features in the UV spectrum of NGC 4569 argues against shock-heating models for this object, as existing shock models generally predict strong UV line emission (e.g., Dopita & Sutherland, 1996). Shock-excited filaments in supernova remnants show strong emission in high-excitation UV lines such as C IV $`\lambda `$1549 and He II $`\lambda `$1640 (e.g., Blair et al., 1991, 1995) which are altogether absent from the NGC 4569 spectrum. Similarly, the shock-excited nuclear disk of M87 (Dopita et al., 1997) has a high-excitation UV line spectrum which bears no resemblance to the NGC 4569 spectrum. From an analysis of infrared spectra, Alonso-Herrero et al. (1999) proposed that the NGC 4569 nucleus is powered by an 8–11 Myr-old starburst, by a combination of stellar photoionization and shock heating from supernova remnants. While this hypothesis may be applicable to some LINERs and transition galaxies, the UV spectrum of NGC 4569 shown by Maoz et al. (1998) is inconsistent with a burst of such advanced age, as the P Cygni features of C IV, Si IV, and N V would have disappeared from a single-burst population after about 6 Myr. The overall picture emerging from these observations is that the NGC 4569 nucleus is a compact, luminous, and young starburst. 
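As an aside on the numbers above, the internal-reddening value $`E(B-V)=0.46`$ mag quoted from Ho et al. (1997a) follows from the standard Balmer-decrement relation. A minimal sketch is given below; the extinction-curve coefficients and the Case B intrinsic ratio of 2.86 are assumed standard Galactic-curve values, and the observed H$`\alpha `$/H$`\beta `$ ratio is chosen purely to reproduce that $`E(B-V)`$.

```python
# Sketch of the Balmer-decrement reddening estimate:
# E(B-V) = 2.5/(k_Hb - k_Ha) * log10(R_obs / R_int)
import math

K_HA, K_HB, R_INT = 2.53, 3.61, 2.86  # assumed extinction-curve values, Case B

def ebv_from_balmer(r_obs):
    return 2.5 / (K_HB - K_HA) * math.log10(r_obs / R_INT)

print(ebv_from_balmer(4.52))  # ~0.46 mag for this illustrative observed ratio
```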
The only reason to invoke the presence of an AGN at all would be to explain the higher strengths of the low-ionization forbidden lines in comparison with values observed in normal H II nuclei. If it were indeed possible for a young starburst to produce transition or LINER-type emission lines in the surrounding gas, then there would be no reason to consider AGN models for NGC 4569. ## 3 Photoionization Calculations ### 3.1 The Ionizing Continuum As discussed by Filippenko & Terlevich (1992) and Shields (1992), the key ingredient necessary for generating a LINER or transition-type emission-line spectrum is an ionizing continuum which is harder than that produced by typical clusters of OB stars. A harder continuum will produce a more extended partially-ionized zone in the surrounding H II region, boosting the strength of the low-ionization lines which are typical of LINER spectra: \[O I\] $`\lambda `$6300, \[O II\] $`\lambda `$3727, \[N II\] $`\lambda `$$`\lambda `$6548,6583, and \[S II\] $`\lambda `$$`\lambda `$6716,6731. To represent the ionizing continuum of a young starburst, we have chosen the STARBURST99 model set; we refer the reader to Leitherer et al. (1999) for the details of the methods used to construct these models. Briefly, the STARBURST99 code employs the Geneva stellar evolution models of Meynet et al. (1994), with enhanced mass-loss rates, for high-mass stars. Atmospheres are represented by the models compiled by Lejeune et al. (1997) and Schmutz et al. (1992). Figures 1–12 of Leitherer et al. (1999) display the spectral energy distributions of the STARBURST99 model clusters for a range of burst ages and for a variety of initial conditions. From the figures, some important trends are readily apparent. During the first 2 Myr after an instantaneous burst, the continuum is dominated by the hottest O stars, and there is essentially no emission below 228 Å, corresponding to the ionization energy of He<sup>+</sup>. The appearance of W-R stars during the period 3–5 Myr after the burst results in a dramatic change in the UV continuum, as these stars emit strongly in the He<sup>++</sup> continuum below 228 Å. From 6 Myr onwards, the W-R stars disappear and the UV continuum rapidly fades and softens as the burst ages. Only the models with an upper mass limit of $`M_{\mathrm{up}}`$ = 100 $`M_{\odot }`$ generate the hard, W-R-dominated UV continuum; the model sequences with $`M_{\mathrm{up}}`$ = 30 $`M_{\odot }`$ do not generate significant numbers of photons below 228 Å for any ages because the progenitors of W-R stars are not present in the initial burst. Constant star-formation rate models with $`M_{\mathrm{up}}`$ = 100 $`M_{\odot }`$ form W-R stars continuously after 3 Myr, but the overall shape of the UV continuum is softer than in the instantaneous burst models, because of the continuous formation of luminous O stars. These results provide a useful starting point for the photoionization calculations. If it is possible for the H II region surrounding a young cluster to resemble a LINER or transition object, then this is most likely to occur when the ionizing continuum is hardest, when W-R stars are present during $`t\sim 3`$–5 Myr after a burst. Very massive stars (in the range 30–100 $`M_{\odot }`$ or greater) must be present in the burst or else the requisite W-R stars will not appear. The formation of W-R stars is enhanced at high metallicity, so the ability to generate a LINER or transition-type spectrum may be a strong function of metal abundance as well as age. 
### 3.2 Model Grid To create grids of photoionization models, we fed the UV continua generated by the STARBURST99 models into the photoionization code CLOUDY (version 90.04; Ferland et al., 1998). For each time step, a grid of models was calculated by varying the nebular density and the ionization parameter, which is defined as the ratio of ionizing photon density to the gas density at the ionized face of a cloud. Real LINERs and transition nuclei are likely to contain clouds with a range of values of density and ionization parameter, and more general models incorporating density and ionization stratification can be constructed as linear combinations of these simple single-zone models. All models were run with the following range of parameters: burst age from 1 to 10 Myr at increments of 1 Myr, with log $`U`$ ranging from $`-2`$ to $`-4`$ at increments of 0.5, and a constant density ranging from log ($`n_\mathrm{H}`$/cm<sup>-3</sup>) = 2 to 6 at increments of 1. As a starting point, we computed a grid for an instantaneous burst with an IMF having a power-law slope of $`-2.35`$, $`M_{\mathrm{up}}`$ = 100 $`M_{\odot }`$, solar metallicity in stars and gas, and a single plane-parallel slab of gas with no dust; we will refer to this as model grid A. The solar abundance set was taken from Grevesse & Anders (1989) and Grevesse & Noels (1993). Other grids were computed as variations on this basic parameter set, with the following modifications made in different model runs: a constant star-formation rate; metallicity 0.2, 0.4, or 2 $`Z_{\odot }`$ in both stars and gas; and spherical geometry for the nebula. To assess the effects of the highest-mass stars, we also ran custom model grids, via the STARBURST99 web site, with $`M_{\mathrm{up}}=70`$ and 120 $`M_{\odot }`$. The depletion of heavy elements onto grains can result in marked changes to the emergent emission-line spectrum of an H II region, both by removing gas-phase coolants from the nebula and by grain absorption of ionizing photons, which will modify the effective shape of the ionizing continuum. In metal-rich H II regions, these effects will tend to boost the strengths of the low-ionization emission lines relative to the dust-free case (Shields & Kennicutt, 1995). To assess the effects of dust in transition nuclei, we calculated additional model grids which included dust grains with a Galactic ISM dust-to-gas ratio along with the corresponding gas-phase depletions. The dusty models were all calculated using the solar abundance set for the undepleted gas. Dust grains were assumed to have the optical properties of Galactic ISM grains, as described by Mathis, Rumpl, & Nordsieck (1977), Draine & Lee (1984), and Martin & Rouleau (1991). From the CLOUDY output files, we tabulated the strengths relative to H$`\beta `$ of the major emission lines which are prominent in LINERs. The calculations were performed under the assumption that the H II region is ionization bounded. For this case, the outer extension of the cloud was set to be the radius at which $`T_e`$ falls to 4000 K, beyond which essentially no emission is generated in the optical or UV lines. As a test, we ran a grid of models with the stopping temperature set to 1000 K, and we verified that the emission-line ratios were essentially identical to the default case of 4000 K stopping temperature. 
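For orientation, the ionization parameter labelling these grids can be written, in the usual convention, as $`U=Q(\mathrm{H})/(4\pi r^2n_\mathrm{H}c)`$, i.e. the ratio of ionizing-photon density to gas density at the illuminated face. The sketch below applies that definition; the photon rate and cloud distance used are arbitrary illustrative inputs, not values from our grids.

```python
# Sketch of the ionization parameter U = Q(H) / (4 pi r^2 n_H c).
import math

C_CM_S = 2.998e10    # speed of light, cm/s
PC_CM = 3.086e18     # parsec in cm

def log_U(Q_H, r_cm, n_H):
    """Q_H: ionizing photons/s; r_cm: distance to cloud face; n_H: cm^-3."""
    return math.log10(Q_H / (4.0 * math.pi * r_cm**2 * n_H * C_CM_S))

# Illustrative inputs: Q(H) = 1e52 s^-1 at r = 100 pc with n_H = 1e3 cm^-3
print(log_U(1e52, 100 * PC_CM, 1e3))  # ~ -3.5, near the grid values discussed
```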
We also verified that the important diagnostic line ratios differed by $`\lesssim 0.1`$ dex between the spherical and plane-parallel cases when all other input parameters were unmodified, and all results discussed in this paper refer to the plane-parallel models. In the calculations, the longest timescales for atomic species to reach equilibrium were of order $`10^3`$ years, much shorter than the evolution timescale of the stellar cluster, justifying the assumption that each time step of the cluster evolution could be used independently to calculate the nebular conditions. Table 1 gives a summary of the model parameters for the model grids which appear in the following discussion.

## 4 Discussion

### 4.1 Model Results

The model results are displayed in Figures 1–9. To compare the model outputs with the observed properties of a variety of galaxy types, we have used the emission-line data compiled by Ho et al. (1997a). This catalog has the advantages of a homogeneous classification system, a small measurement aperture ($`2^{\prime \prime }\times 4^{\prime \prime }`$), and careful starlight subtraction to ensure accurate emission-line data. In order to reduce confusion and to keep the sample of comparison objects to a reasonable number, we included only objects with unambiguous classifications as H II, LINER, transition, or Seyfert. Objects with borderline or ambiguous classifications, such as “LINER/Seyfert,” were excluded for clarity. The comparison sample was further reduced by excluding galaxies in which any of the emission lines H$`\alpha `$, H$`\beta `$, \[O III\] $`\lambda `$5007, \[O I\] $`\lambda `$6300, \[N II\] $`\lambda `$6583, or \[S II\] $`\lambda `$$`\lambda `$6716,6731 was undetected or was flagged as having a large uncertainty in flux (“b” or “c” quality flags). The measured line ratios are corrected for both Galactic and internal reddening. Figure 1 plots the ratio \[O III\] $`\lambda `$5007/H$`\beta `$ against \[O I\] $`\lambda `$6300/H$`\alpha `$ at a burst age of 4 Myr, for the solar-metallicity model grids A, B, C, and D. The model results are plotted for a density of $`n_\mathrm{H}=10^3`$ cm<sup>-3</sup>, as an approximate match to the density of $`n_e=600`$ cm<sup>-3</sup> measured for NGC 4569 (Ho et al., 1997a). The diagram shows that the instantaneous burst models (A and B) are a good match to the line ratios of the transition nuclei, for log $`U\approx -3.5`$. The dusty models from grid B fall more centrally within the region defined to contain transition objects, but the models without dust still closely match transition nuclei having lower \[O I\] $`\lambda `$6300/H$`\alpha `$ ratios. Figures 2 and 3 show the corresponding diagrams for \[N II\] $`\lambda `$6583/H$`\alpha `$ and for \[S II\] $`\lambda \lambda `$6716, 6731/H$`\alpha `$, respectively. In both cases we find that the single-burst models span the region occupied by transition nuclei in the diagnostic diagrams. In the constant star-formation rate models (C and D), the UV continuum remains softer than in the W-R-dominated phase of the instantaneous burst models, because of the ongoing formation of luminous O stars. As a result, the low-ionization emission lines are significantly weaker than in the instantaneous burst models at 4 Myr.
These constant star-formation rate sequences are a reasonable match to the region of H II nuclei in the diagram, and for \[N II\]/H$`\alpha `$ and \[S II\]/H$`\alpha `$ the agreement with H II nuclei is improved at lower densities of $`n_e=10^2`$ cm<sup>-3</sup>, a value more typical of H II nuclei. These constant star-formation rate models are probably appropriate for galaxies having spatially extended, ongoing star formation in their nuclei. We note that the model curves shown in the figures should not be expected to follow the locus of H II nuclei in each plot, despite the fact that the models are generated with a starburst continuum. The range of line ratios observed in H II regions is primarily a sequence in metal abundance (McCall, Rybski, & Shields, 1985), while our models are shown as sequences in $`U`$ for a given metallicity and density. Another point to note about the diagrams is that some of the transition galaxies fall outside the region nominally defined for transition objects, particularly in the \[O I\]/H$`\alpha `$ ratio (Figure 1). These galaxies were classified as transition objects by Ho et al. (1997a) on the basis of meeting the majority of the classification criteria. Similarly, some overlap can be seen in the diagrams between the regions occupied by LINERs and Seyfert nuclei; this again reflects the fact that galaxies span a continuous range in the values of these emission-line ratios. The variation of \[O I\] line strength as a function of density is shown in Figure 4 for model grid A at $`t=4`$ Myr. For the range of densities considered ($`n_\mathrm{H}=10^2`$ to $`10^6`$ cm<sup>-3</sup>), the models closely overlap the transition region in the diagram at log $`U\approx -3.5`$. Introducing ISM depletion and dust grains to the nebula primarily increases the \[O I\]/H$`\alpha `$ ratio at low density. As expected, the \[O I\]/H$`\alpha `$ ratio increases with $`n_\mathrm{H}`$ up to densities of $`\sim 10^5`$ cm<sup>-3</sup>, while at densities approaching the critical density of the $`\lambda `$6300 transition ($`1.6\times 10^6`$ cm<sup>-3</sup>) this ratio saturates and begins to turn over. Figure 5 shows the \[O I\]/H$`\alpha `$ ratio as a function of burst age (at $`n_e=10^3`$ cm<sup>-3</sup>) for model grid A, for ages of 2 to 6 Myr. This diagram highlights the dramatic changes that W-R stars generate in the surrounding nebula. From 3 to 5 Myr after the burst, when W-R stars dominate the UV continuum, the harder ionizing continuum boosts the strength of \[O I\] by an order of magnitude and the model sequences appear adjacent to the transition region, with relatively little evolution in the line ratios during this period. As the burst ages beyond 6 Myr, the \[O I\]/H$`\alpha `$ ratio continues to fall, and the emission-line strengths drop rapidly as the ionizing continuum softens and its luminosity decreases. At $`Z=2Z_{\odot }`$, the W-R-dominated phase occurs slightly later, during the time steps at 4, 5, and 6 Myr. The \[N II\]/H$`\alpha `$ and \[S II\]/H$`\alpha `$ ratios have a similar dependence on burst age, and are displayed in Figures 6 and 7, respectively; these results are quite similar to the calculations presented by Leitherer et al.
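The turnover near the critical density can be illustrated with a simple two-level estimate, independent of the full CLOUDY calculations. The suppression factor below is a textbook approximation, not an output of the model grids, and uses only the critical density quoted above.

```python
# Toy two-level suppression factor: above the critical density, collisional
# de-excitation quenches a forbidden line, so its emissivity grows as n
# rather than n^2, while H-alpha keeps growing as n^2.
N_CRIT_OI_6300 = 1.6e6  # cm^-3, value quoted in the text

def suppression(n_h, n_crit=N_CRIT_OI_6300):
    """Fractional [O I] emissivity relative to the low-density limit."""
    return 1.0 / (1.0 + n_h / n_crit)

for n_h in (1e2, 1e4, 1e5, 1e6, 1e7):
    print(f"n_H = {n_h:8.0e} cm^-3  ->  suppression = {suppression(n_h):.3f}")
# The factor is ~1 for n_H << n_crit and falls off near n_crit, which is
# why the [O I]/H-alpha ratio saturates and turns over around 10^6 cm^-3.
```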
(1992) to illustrate the effects of the W-R continuum on the \[N II\]/H$`\alpha `$ ratio. While the \[O I\]/H$`\alpha `$ ratios in the models at $`\mathrm{log}U=-3.5`$ are too low by $`0.1`$–$`0.2`$ dex to fit within the nominal transition region, they still closely match those transition nuclei having relatively low values for \[O I\]/H$`\alpha `$, and the low-ionization line strengths can be further enhanced by the inclusion of dust and depletion (as in Figure 1). One puzzling aspect of Figure 5 is that for ages outside the range 3–5 Myr, the models predict \[O I\]/H$`\alpha `$ ratios too low to match the majority of H II nuclei. Stasińska & Leitherer (1996) and Martin (1997) discuss this same problem in the context of low-metallicity starburst galaxies. They propose that shocks generated by supernovae and stellar winds provide the additional \[O I\] emission, without making a significant contribution to the \[O II\] or \[O III\] line strengths. Shocks could play a similar role in transition galaxies, as in the model of Alonso-Herrero et al. (1999), although the lack of high-excitation UV line emission in transition nuclei is problematic for the shock hypothesis. A higher upper mass cutoff alleviates this problem to some extent, at least for very young bursts. Increasing $`M_{\mathrm{up}}`$ to 120 $`M_{\odot }`$ boosts \[O I\]/H$`\alpha `$ by $`\sim 0.2`$ dex for ages of $`\lesssim 3`$ Myr. Due to the very short lifetimes of the highest-mass stars, however, the $`M_{\mathrm{up}}`$ = 120, 100, and 70 $`M_{\odot }`$ model grids result in identical emission-line spectra from $`\sim 4`$ Myr onward. Metal abundance is an additional parameter which must be considered. The models displayed up to this point were all calculated for a solar abundance set, while the nuclei of early-type spirals are likely to have enhanced heavy-element abundances. As discussed by Leitherer et al. (1999), the continuum shortwards of 228 Å is strongest in the high-metallicity models, because the increased mass-loss rates lead preferentially to the formation of W-R stars at high metal abundance. Figure 8 plots the \[O I\]/H$`\alpha `$ ratio for abundances of $`Z`$ = 0.2, 0.4, 1, and 2 $`Z_{\odot }`$, at a density of $`n_\mathrm{H}=10^3`$ cm<sup>-3</sup> and $`t=4`$ Myr. From this diagram, it is clear that solar or higher abundances are necessary to match the \[O I\] strengths of the transition nuclei; at lower abundances the line ratios are a better match to those of the high-excitation (low-metallicity) H II nuclei. Figure 9 displays the density dependence of the \[O I\]/H$`\alpha `$ ratio for the $`Z=2Z_{\odot }`$ model grid; by comparison with the solar-metallicity model grid in Figure 4, the higher abundances result in a lower-excitation spectrum with enhanced \[O I\] emission, due to the harder extreme-UV continuum. It would be advantageous to compare the model results with a wider variety of emission lines. Unfortunately, measurements of other optical emission lines are scarce for transition nuclei. The Ho et al. survey did not include the \[O II\] $`\lambda `$3727 line, and there is no other homogeneous catalog of \[O II\] measurements for transition galaxies.
To be consistent with a LINER or transition-type classification, a model calculation must result in the flux ratio \[O II\] $`\lambda `$3727 / \[O III\] $`\lambda `$5007 $`>1`$. In fact, all of the models with log $`U\le -3`$ do satisfy this criterion. Thus, any of our models which is consistent with the Ho et al. LINER or transition classification criteria is also consistent with the original Heckman (1980) criterion for the \[O II\]/\[O III\] ratio in LINERs. The relative strengths of UV lines such as C II\] $`\lambda `$2326, C III\] $`\lambda `$1909, and C IV $`\lambda `$1549 can provide further diagnostics, but none of these lines is detected in NGC 4569 (Maoz et al., 1998). The only other transition nucleus having *HST* UV spectra available is NGC 5055, and its spectrum appears to be devoid of UV emission lines as well (Maoz et al., 1998). We ran one additional model grid to test whether different model atmospheres for O stars would lead to different results. The STARBURST99 continua were calculated using the stellar atmosphere models compiled by Lejeune et al. (1997), which are based on the Kurucz (1992) model set for the massive stellar component. The recent CoStar model grid of Schaerer & de Koter (1997), which includes non-LTE effects, stellar winds, and line blanketing for O stars, makes dramatically different predictions for the ionizing spectra. As shown by Schaerer & Vacca (1998), the CoStar models yield a luminosity in the He<sup>++</sup> continuum which is four orders of magnitude greater than that predicted by the Kurucz models, for the most massive O stars which dominate the UV luminosity at burst ages of $`<3`$ Myr. In the CoStar-based models the photon output of the cluster below 228 Å is essentially constant from 0 to 5 Myr. To investigate the effects of this harder O-star continuum on the emission-line spectra, we ran a grid of models using the evolving starburst continua computed by Schaerer & Vacca (1998) with the CoStar atmospheres. Model parameters were the same as for model grid A except that an upper mass limit of 120 $`M_{\odot }`$ was used. We find that using the CoStar atmospheres has a relatively minor effect on our results. In comparison with the STARBURST99-based models having $`M_{\mathrm{up}}`$ = 120 $`M_{\odot }`$, the CoStar model grid yields an increase in the \[O I\]/H$`\alpha `$ ratio of $`0.1`$–$`0.15`$ dex for $`t<6`$ Myr, while \[N II\]/H$`\alpha `$ and \[S II\]/H$`\alpha `$ are essentially unaffected. During the period $`t<3`$ Myr, the CoStar-based models result in an H II region spectrum, demonstrating that W-R stars are still required in order to generate LINER or transition-type line ratios. The strength of the \[Ca II\] emission lines at 7291 and 7324 Å is often used as a diagnostic of dust and depletion, because in the absence of depletion these lines are predicted to be strong in photoionized gas (e.g., Kingdon et al., 1995; Villar-Martín & Binette, 1996). (The $`\lambda `$7291 line is a cleaner diagnostic since $`\lambda `$7324 is blended with \[O II\] $`\lambda `$7325.) However, for a 4 Myr-old burst with nebular conditions of $`n_e=10^3`$ cm<sup>-3</sup>, $`\mathrm{log}U=-3.5`$, and an undepleted solar abundance set, our calculations yield a maximum prediction of only 0.2 for the ratio of \[Ca II\] $`\lambda `$$`\lambda `$7291, 7324 to H$`\beta `$. Only at very low ionization parameters ($`U\lesssim 10^{-4.5}`$) does the \[Ca II\] emission become stronger than H$`\beta `$.
Since H$`\beta `$ is only barely visible in the spectra of many transition objects (prior to careful starlight subtraction, at least), typical observations may not have sufficient sensitivity to detect faint \[Ca II\] lines in these objects. High-quality spectra of LINERs do not show \[Ca II\] emission (Ho et al., 1993), indicating that Ca is likely to be depleted onto grains in these objects, but similar data are not generally available for transition nuclei. If the \[Ca II\] lines are found to indicate a high level of depletion onto dust grains in transition nuclei, this would also provide a further argument against shock-heating models, as shocks will tend to destroy grains (e.g., Morse et al., 1996).

### 4.2 The Nature of Transition Nuclei

The results shown in the preceding figures demonstrate that the starburst models are in fact able to reproduce the major diagnostic emission-line ratios of transition nuclei with reasonable accuracy, during the period $`t`$ = 3–5 Myr when W-R stars are present. For a density of $`10^3`$ cm<sup>-3</sup> and an age of 4 Myr, the solar-metallicity models with and without depletion bracket the range of values observed in real transition nuclei for the line ratios \[O I\]/H$`\alpha `$, \[N II\]/H$`\alpha `$, and \[S II\]/H$`\alpha `$. We do not attempt to fine-tune a model to produce an exact match with the spectrum of NGC 4569, but the basic solar-metallicity dust-free model at $`t=4`$ Myr with $`n_\mathrm{H}=10^3`$ cm<sup>-3</sup> and log $`U=-3.5`$ closely fits the observed \[O I\]/H$`\alpha `$ ratio, while overpredicting \[S II\]/H$`\alpha `$ by $`\sim 0.2`$ dex and underpredicting \[N II\]/H$`\alpha `$ by $`\sim 0.1`$ dex. We emphasize that the starburst models are only able to produce transition-type spectra for the case of an instantaneous burst; that is, when the burst duration is shorter than the timescale for evolution of the most massive stars. Multiple-burst populations can only yield a transition spectrum if the dominant population is 3–5 Myr old and the older or younger bursts do not contribute significantly to the ionizing photon budget. Models with a constant star-formation rate produce H II region spectra at all ages, as the softer ionizing continua do not produce sufficient \[O I\] $`\lambda `$6300 emission in the surrounding H II region to match transition-type spectra. The parameter which is most important for determining the hardness of the ionizing continuum is the number ratio of W-R stars to O stars, which exceeds $`\sim 0.15`$ during the W-R-dominated phase in the STARBURST99 models at solar metallicity, and approaches or exceeds unity at $`Z=2Z_{\odot }`$. In the constant star-formation rate models at solar metallicity, the W-R/O ratio levels off at $`\sim 0.06`$ after about 4 Myr. The compact size of the NGC 4569 nucleus is consistent with the requirement that the burst duration must be brief ($`\lesssim 1`$ Myr) in order to generate a transition-type spectrum. The FWHM size of the starburst core in NGC 4569 is only $`\sim 10`$ pc. For such a burst to occur in $`\lesssim 1`$ Myr would require a propagation speed for star formation of only $`\sim 10`$ km s<sup>-1</sup>. In fact, the typical velocities in the NGC 4569 nucleus are much greater than 10 km s<sup>-1</sup>: the \[N II\] $`\lambda `$6583 line has a velocity width of 340 km s<sup>-1</sup> (Ho et al., 1997a). Thus, the NGC 4569 nucleus could represent the result of a single, rapid burst of star formation.
Although our results suggest that transition galaxy spectra may be attributed to a starburst with a high W-R/O-star ratio, the demographics of transition nuclei and H II nuclei indicate that many transition galaxies are probably not formed by this mechanism. In the STARBURST99 models, the W-R-dominated phase in an instantaneous burst lasts for $`\sim 3`$ Myr (i.e., 3 time steps in the calculations). An H II region surrounding an instantaneous burst will be visible for $`\sim 6`$ Myr, after which the emission lines will fade rapidly (e.g., García-Vargas & Díaz, 1994). Thus, for an instantaneous burst population, the transition phase and the H II nucleus phase will have approximately equal lifetimes. If all H II nuclei consisted of instantaneous burst stellar populations with nebular conditions conducive to the formation of transition-type spectra, then H II nuclei and transition nuclei should be roughly equal in number. In reality, it is likely that a large fraction of star-forming nuclei contain multiple bursts of star formation and/or conditions of low density or low metallicity, so not all star-forming nuclei should be expected to evolve through a transition-type phase. Although it is difficult to make specific predictions, it is probably safe to conclude that, for a given Hubble type, transition nuclei generated solely by starbursts should be considerably less numerous than ordinary H II nuclei. The statistics compiled by Ho et al. (1997b) provide a basis for comparison. In early-type galaxies (E and S0), transition nuclei outnumber H II nuclei by a 3-to-1 margin. Only for Hubble types Sb and later do H II nuclei begin to outnumber transition nuclei by a factor of 2 or more. The most straightforward interpretation of this trend is that in early-type host galaxies, the majority of transition nuclei are actually AGN/H II region composites, as proposed by Ho et al. (1993) and others. At intermediate and late Hubble types, the population of transition nuclei may consist of both composite objects and “pure” starbursts evolving through the W-R-dominated phase. The presence of transition nuclei in a small fraction ($`\sim 10\%`$) of elliptical galaxies (Ho et al., 1997b) presents a particularly intriguing problem. The Ho et al. survey detected five transition nuclei in ellipticals but not a single case of an elliptical galaxy hosting an H II nucleus. Given that the models which have been considered for transition nuclei involve star formation, either alone or in combination with an AGN, this observation is rather puzzling. Perhaps faint AGNs in elliptical nuclei can produce transition-type spectra without substantial star formation activity. Four of the five transition nuclei found in ellipticals by Ho et al. (1997a) have borderline or ambiguous spectroscopic classifications, however, so “pure” transition nuclei in ellipticals are evidently quite rare. Given these results, one might expect to see transition-type emission spectra in some fraction of disk H II regions in spiral galaxies, but in fact such spectra are never found. Single-burst models for disk H II region spectra are only compatible with observed line ratios for model ages of $`t<3`$ Myr (Bresolin et al., 1999), as the harder ionizing spectrum after 3 Myr makes the models overpredict the strengths of the low-ionization lines. Bresolin et al. suggest that either current stellar evolution models are at fault, or that disk H II regions are disrupted before reaching an age of 3 Myr, in which case the W-R phase would not be observed in the nebular gas.
An alternate (and perhaps more attractive) possibility is that the majority of H II nuclei, as well as disk H II regions, are better described by the models with a constant star-formation rate, or contain multiple bursts of star formation with an age spread of a few Myr, which would result in a spectrum similar to that of the constant star-formation rate models. For understanding the physical nature of transition objects, the observational challenge is to search for any unambiguous signs of nonstellar activity. Detection of broad H$`\alpha `$ emission, or of a compact source of hard X-ray emission with a power-law spectrum, would provide evidence for an AGN component. High-resolution optical spectra (from *HST*) could provide a means to spatially resolve a central AGN-dominated narrow-line region from the surrounding starburst-dominated component. Since direct evidence for accretion-powered nuclear activity in transition nuclei is generally lacking, it should not be assumed that any given transition object actually contains an AGN unless observations specifically support that interpretation. One further effect that should be considered in starburst models in the future is photoionization by the X-rays generated by the starburst. X-ray binaries and supernova remnants will provide high-energy ionizing photons, resulting in a spatially extended source of soft X-ray emission, as observed in the nucleus of NGC 4569 (Terashima et al., 1999), for example. (The massive main-sequence stars will contribute only a negligible amount to the total X-ray luminosity of a starburst; see Helfand & Moran 1999.) Photoionization by X-rays will naturally lead to an enhancement of the low-ionization forbidden lines, and this could contribute to the excitation of some transition galaxies.

### 4.3 LINERs

The strength of \[O I\] $`\lambda `$6300 is the key distinguishing factor between LINERs and transition nuclei, and matching the observed strength of this line is the major challenge for starburst models of LINERs. Our calculations show that LINER spectra can only be generated by the STARBURST99 clusters under a very specific and limited range of circumstances. Model grids A and B, while matching the \[O I\] / H$`\alpha `$ ratio of transition nuclei quite well, do not overlap at all with the main cluster of LINERs in Figure 4, even at high densities and even when depletion and dust grains are included. Only grid G, with $`Z=2Z_{\odot }`$, is able to replicate the high \[O I\] / H$`\alpha `$ ratios of most LINERs, and only during $`t\approx 4`$–6 Myr and at densities of $`n_\mathrm{H}\gtrsim 10^5`$ cm<sup>-3</sup>. In agreement with previous models, we find that values of log $`U\approx -3.5`$ to $`-3.8`$ reproduce the observed \[O I\] / H$`\alpha `$ ratios of LINERs. However, at such high densities the models underpredict the strengths of \[S II\] and \[N II\] relative to H$`\alpha `$. Single-zone models require $`n_e\lesssim 10^5`$ cm<sup>-3</sup> to match the \[N II\]/H$`\alpha `$ ratios of LINERs and $`n_e\lesssim 10^4`$ cm<sup>-3</sup> for \[S II\]/H$`\alpha `$. Agreement with LINER spectra can be achieved with a simple two-zone model, in which high-density and low-density components are both present, similar to the scenario proposed by Shields (1992).
As an example, a two-component model constructed from grid A, containing gas at ($`n_e=10^3`$ cm<sup>-3</sup>, $`U=10^{-3.5}`$) and at ($`n_e=10^5`$ cm<sup>-3</sup>, $`U=10^{-4}`$), produces emission-line ratios which are consistent with all the LINER classification criteria of both the Heckman (1980) and Ho et al. (1997a) systems, if the two density components are scaled so as to contribute equally to the total H$`\beta `$ luminosity. As a local comparison, observations of near-infrared Fe II emission indicate the presence of clouds having $`n_e>10^5`$ cm<sup>-3</sup> in the Galactic center region (DePoy, 1992), so it is plausible that other galactic nuclei may contain ionized gas at similarly high densities even in the absence of an observable AGN. A starburst origin for some LINER 2 nuclei would provide a natural explanation for the lower values of the X-ray/H$`\alpha `$ flux ratio seen in these objects, in comparison with AGN-like LINER 1 nuclei (Terashima et al., 1999). It seems unlikely, however, that many LINERs are generated by this starburst mechanism. About 15% of LINERs are known to have a broad component of the H$`\alpha `$ emission line, indicating a probable AGN (Ho et al., 1997b). By analogy with the Seyfert population, a much larger fraction of LINERs is likely to have broad-line regions which are either obscured along our line of sight, or are simply too faint to be detected in ground-based spectra against a bright background of starlight. Many LINERs show signs of nuclear activity that cannot be explained by stellar processes: compact flat-spectrum radio sources or jets, compact X-ray sources with hard power-law spectra, or double-peaked broad Balmer-line emission, for example. As an increasing body of observational work supports the idea that many LINERs are in fact AGNs, there is less incentive to consider purely stellar models for their excitation. Demographic arguments, similar to those given above for transition nuclei, can be applied to the LINER population. Since the LINER phase only occurs for instantaneous bursts at high density and high metallicity, the starburst scenario implies that H II nuclei should be considerably more numerous than starburst-generated LINERs for a given Hubble type. While LINERs are common in early-type hosts, H II nuclei are not found in elliptical hosts and are seen in fewer than 10% of S0 galaxies (Ho et al., 1997b). This disparity is a strong argument against a starburst origin for those LINERs in early-type galaxies. In later Hubble types the situation is less clear, however. H II nuclei occur in $`\sim 80\%`$ of spirals of type Sc and later, while LINERs occur in just 5% of these galaxy types (Ho et al., 1997b). It is conceivable that some of the LINERs in intermediate- to late-type hosts could have a starburst origin, and this issue could be resolved by further UV and X-ray observations in the future. Interestingly, the Ho et al. survey did not find any examples of broad H$`\alpha `$ emission in LINERs or transition objects with hosts of type Sc or later; perhaps star formation plays a more prominent role than accretion-powered activity in these objects. While a few LINERs show spectral features of young stars in the UV (Maoz et al., 1998), the quality of the observational data is poor in comparison with the NGC 4569 UV spectrum, and it is difficult to set meaningful constraints on the age of the young stellar population.
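The bookkeeping behind such a two-component model is only a weighted average. The sketch below shows the combination rule; the line ratios used are placeholders for illustration, not values read from the actual model grids.

```python
# Combine two single-zone model components, weighting each by its
# contribution to the total H-beta luminosity.  With equal H-beta weights,
# the combined ratio is simply the mean of the component ratios.
def combine(ratios_1, ratios_2, hbeta_weight_1=0.5):
    """Combined line-to-H-beta ratios for a two-zone mixture."""
    w1, w2 = hbeta_weight_1, 1.0 - hbeta_weight_1
    return {line: w1 * ratios_1[line] + w2 * ratios_2[line] for line in ratios_1}

# Hypothetical line strengths relative to H-beta for the two zones of the
# example in the text: (n_e = 1e3, U = 10^-3.5) and (n_e = 1e5, U = 10^-4).
low_density  = {"[O I] 6300/Hb": 0.15, "[N II] 6583/Hb": 2.6, "[S II]/Hb": 3.2}
high_density = {"[O I] 6300/Hb": 0.90, "[N II] 6583/Hb": 4.3, "[S II]/Hb": 1.1}

print(combine(low_density, high_density))  # equal H-beta contributions
```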
NGC 404 is a possible candidate for a starburst-generated LINER, but in its UV spectrum the P Cygni features are weak in comparison with NGC 4569, indicating either an older burst population or dilution by a featureless AGN continuum (Maoz et al., 1998). The LINERs having UV spectral features from massive stars may also host obscured AGNs which can be detected in other wavebands. For example, the UV continuum of the LINER NGC 6500 appears to have its origin in hot stars (Barth et al., 1997; Maoz et al., 1998), but observations of a parsec-scale radio jet unambiguously demonstrate that nonstellar activity is occurring as well (Falcke et al., 1998).

### 4.4 W-R Galaxies with LINER or Transition-Type Spectra

The starburst models presented here run into two obvious problems. First, W-R galaxies are almost never known to have LINER or transition-type spectra. Second, LINERs and transition nuclei almost never show W-R features in their spectra. Is there any way to reconcile the starburst models with these facts? W-R galaxies are identified by the appearance of the 4650 Å blend in their spectra (e.g., Kunth & Sargent, 1981). Since the formation of W-R stars is enhanced at high $`Z`$, the strength of this feature relative to H$`\beta `$ increases dramatically with metallicity, from $`\sim 0.1`$ at $`Z<0.4Z_{\odot }`$ to $`\sim 0.5`$–4 at $`Z\gtrsim Z_{\odot }`$ (Schaerer & Vacca, 1998). However, in the nuclei of early-type spirals where high metallicities are expected, a nuclear starburst will be surrounded by the old stellar population of the galactic bulge, making the detection of the W-R bump extremely difficult (Mas-Hesse et al., 1999). Most of the currently known W-R galaxies are late-type spirals or irregular galaxies (Schaerer et al., 1999) in which the W-R bump is visible against the nearly featureless starburst continuum. When the W-R bump is detected in H II galaxies, its amplitude above the continuum level is generally far smaller than that of H$`\beta `$ or even H$`\gamma `$ (e.g., Kunth & Sargent, 1981). In most of the LINER and transition galaxy spectra in the catalog of Ho et al. (1995), however, H$`\beta `$ barely appears and H$`\gamma `$ is too weak to be visible at all prior to continuum subtraction. Even in a high-metallicity environment, where the total intensity of the W-R bump can be comparable to that of H$`\beta `$, the amplitude of the W-R bump above the continuum will be much lower than that of H$`\beta `$ because the flux in the W-R feature is spread over 70–100 Å. Thus, the detection of W-R emission in galactic nuclei is strongly biased toward late-type, bulgeless galaxies. In late-type or dwarf irregular galaxies where the W-R bump is visible, the W-R/O-star ratio is expected to be much smaller owing to the lower metallicity, and the resulting softer ionizing spectrum will tend to produce an H II region spectrum rather than a transition object. The gas density as a function of Hubble type may play a role as well; in a study of H II nuclei, Ho et al. (1997c) find a weak trend toward lower nebular densities in later-type host galaxies. Observational detection of W-R features in transition nuclei is perhaps the clearest test of the starburst models, if sufficiently sensitive observations can be obtained. The UV spectrum of NGC 4569 is consistent with an age of 3–6 Myr, an age at which W-R stars are expected to be present. Previous optical spectra have not revealed the 4650 Å W-R bump in NGC 4569, but further observations with high S/N and small apertures would be worthwhile.
The lack of He II $`\lambda `$1640 emission in the UV spectrum of NGC 4569 is potentially a more serious problem, since the burst population should dominate at short wavelengths. The models of Schaerer & Vacca (1998) predict an equivalent width of at least 2 Å in the W-R-generated $`\lambda `$1640 line during the period 3–6 Myr for an instantaneous burst of solar or higher metallicity, while the observed upper limit of $`f(1640)<2.0\times 10^{-15}`$ erg s<sup>-1</sup> cm<sup>-2</sup> (Maoz et al., 1998) corresponds to an equivalent width limit of $`\sim 0.3`$ Å. It should be noted, however, that the $`\lambda `$1640 line lies at the extremely noisy blue end of the FOS G190H grating setting, in a region where detection of emission or absorption features is difficult. Two W-R galaxies may provide useful points of reference for the starburst models. NGC 3367 is classified by Ho et al. (1997a) as an H II nucleus on the basis of its \[O I\]/H$`\alpha `$ and \[S II\]/H$`\alpha `$ ratios, although its \[N II\]/H$`\alpha `$ ratio of 0.83 is more consistent with a LINER or transition-type classification, and its emission lines are markedly broader than those of typical H II nuclei. Alonso-Herrero et al. (1999) describe NGC 3367 as a starburst-dominated transition object (see also Dekker et al., 1988). The 4650 Å W-R bump was noted by Ho et al. (1995), who also suggested a LINER/H II classification and a composite source of ionization. As a borderline H II nucleus/transition object with clear evidence for W-R stars, this object deserves further study, to determine whether there is indeed an AGN or whether the enhanced low-ionization emission may be the result of ionization by the W-R population. The electron density of 835 cm<sup>-3</sup> measured from the \[S II\] doublet (Ho et al., 1997a) is also noteworthy, as this is among the highest densities found for an H II nucleus in the Ho et al. survey. Another intriguing object is the nucleus of NGC 6764, which has been classified variously as a Seyfert, a LINER, and a starburst by different authors (see Gonçalves et al., 1999). This galaxy exhibits prominent emission in the 4650 Å W-R blend (Osterbrock & Cohen, 1982). A recent study by Eckart et al. (1996) demonstrates that the narrow emission lines are consistent with a LINER classification, but there are no unambiguous signs of nonstellar activity in the nucleus. Eckart et al. (1996) find that the nucleus contains $`\sim 3600`$ W-R stars, and that the overall properties of the object are consistent with ionization by the starburst alone, rather than by a starburst/AGN composite. If this conclusion is confirmed by further observations, NGC 6764 could be considered the best candidate for a LINER photoionized by a starburst during its W-R-dominated phase. Alonso-Herrero et al. (1999) derived an age of 9–10 Myr for the starburst in NGC 6764 based on near-infrared emission-line diagnostics, but this age is inconsistent with the presence of W-R stars, at least for the case of an instantaneous burst. Compared with typical LINERs and transition nuclei, conditions for the detection of W-R spectral features in these two objects are perhaps more favorable. Both host galaxies are of late Hubble type (SBc for NGC 3367 and SBbc for NGC 6764) in comparison with the majority of LINERs and transition nuclei, so the level of contamination by the surrounding old stellar population is relatively low.
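The equivalent-width arithmetic behind the $`\lambda `$1640 comparison is straightforward. In the sketch below, the continuum level is not an independent measurement; it is simply the value implied by the two quoted numbers.

```python
# Back-of-envelope equivalent-width arithmetic for the He II 1640 limit:
# EW = line flux / continuum flux density at the line wavelength.
flux_limit = 2.0e-15   # erg s^-1 cm^-2, observed upper limit (Maoz et al. 1998)
ew_limit = 0.3         # Angstrom, corresponding EW limit quoted in the text

continuum = flux_limit / ew_limit   # implied continuum, erg s^-1 cm^-2 A^-1
print(f"implied continuum ~ {continuum:.1e} erg s^-1 cm^-2 A^-1")

# A predicted EW of 2 A at this continuum level corresponds to a line flux
# about 2/0.3 ~ 7 times larger than the observed upper limit.
predicted_flux = 2.0 * continuum
print(f"predicted flux ~ {predicted_flux:.1e} erg s^-1 cm^-2")
```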
Furthermore (and partly as a result of this), the emission-line equivalent widths in these two nuclei are relatively high for LINERs or transition nuclei. If the emission-line spectra of NGC 3367 or NGC 6764 had been superposed on a luminous early-type spiral bulge, the W-R emission might never have been noticed. The detection of W-R emission in a LINER does not automatically imply a purely starburst origin for the emission lines, of course, since starbursts and AGNs are often known to coexist. Mrk 477 is a well-known example of a Seyfert 2 galaxy having a large population of W-R stars in its nucleus (Heckman et al., 1997). In the context of the starburst model, however, the crucial test is to search for additional examples of LINERs or transition nuclei which exhibit a high ratio of W-R to O stars but no signs of accretion-powered activity.

### 4.5 Caveats and Limitations

The most important limitation of these calculations comes from the accuracy of the input continua. The conclusion that the starburst models are able to reproduce transition or LINER spectra under some circumstances depends crucially on the presence of W-R stars to provide a hard and luminous ionizing continuum. Unfortunately, the continuum shape and luminosity of W-R stars in the extreme-UV band are quite uncertain, particularly in the He<sup>++</sup> continuum where stellar winds have a dramatic effect. Several up-to-date reviews of the numerous difficulties involved in modeling W-R spectra can be found in the volume edited by van der Hucht et al. (1999). The STARBURST99 model grid uses the W-R atmospheres of Schmutz et al. (1992), which are calculated for a pure helium composition, but more recent atmosphere models are beginning to include the effects of line blanketing, as well as clumping and departures from spherical symmetry. As discussed by Leitherer et al. (1999), the W-R/O-star ratio is also extremely model dependent, and may be revised in future generations of models. This would have a direct effect on the strength of the extreme-UV continuum and consequences for the nebular emission lines. Furthermore, the STARBURST99 models neglect binary evolution, although this is more likely to affect the W-R/O ratio at low metallicity. Since the results of the transition-object models are highly dependent on the most uncertain portion of the W-R spectrum, new photoionization calculations should be computed to assess the impact of different W-R evolution and atmosphere models in the future. The shape of the IMF in starburst regions is a subject of some debate, and the possible variation of the IMF with metallicity is of particular importance for galactic nuclei, which are likely to have $`Z>Z_{\odot }`$. Kahn (1974) and Shields & Tinsley (1976) suggested that $`M_{\mathrm{up}}`$ should be lower in regions of higher metal abundance, but this issue has not been settled definitively. Star-count observations demonstrate that the IMF slope and $`M_{\mathrm{up}}`$ do not appear to vary with metallicity (Massey et al., 1995), at least for $`Z\lesssim Z_{\odot }`$. While nebular diagnostics in H II galaxies are generally consistent with a Salpeter IMF with $`M_{\mathrm{up}}\sim 100`$ $`M_{\odot }`$ at subsolar metallicity (e.g., Stasińska & Leitherer, 1996), at high metallicity the observational situation is somewhat ambiguous. Bresolin et al.
(1999) find that the mean stellar temperature in H II regions decreases significantly with increasing $`Z`$, and that the He I $`\lambda `$5876 / H$`\beta `$ ratios of H II regions at $`Z\approx 2Z_{\odot }`$ are more consistent with $`M_{\mathrm{up}}`$ = 30 $`M_{\odot }`$ than with $`M_{\mathrm{up}}`$ = 100 $`M_{\odot }`$. Such a low value for $`M_{\mathrm{up}}`$ would pose serious difficulties for any starburst models of LINERs and transition nuclei, as the massive progenitors of W-R stars would not be present. Counterbalancing this trend, the strong tidal forces, turbulence, and magnetic field strengths in galactic nuclei may act to raise the Jeans mass and favor the formation of more massive stars (Morris, 1993). In the Galactic center, there are stars with initial masses of $`\sim 100`$ $`M_{\odot }`$ (Krabbe et al., 1995), and one Galactic center object (the Pistol star) may have $`M_{\mathrm{initial}}`$ as high as 200–250 $`M_{\odot }`$ (Figer et al., 1998). Thus, the proposed trend toward lower values of $`M_{\mathrm{up}}`$ at high metallicity in disk H II regions may not apply to galactic nuclei. Detailed comparison of the UV spectra of galaxies such as NGC 4569 with starburst population synthesis models can provide useful constraints on the population of high-mass stars in nuclear starbursts. Despite these uncertainties, these photoionization models have a major advantage compared with previous generations of W-R or O-star models for LINERs and transition nuclei, in that the STARBURST99 models with standard parameters are constructed to represent the actual stellar populations in starbursts, to the best of current knowledge. Previous O-star models (Filippenko & Terlevich, 1992; Shields, 1992) required the presence of hypothetical, unusually hot stars in order to explain LINER or transition spectra, and they did not address the evolution of the young stellar population at all. The starburst models presented here provide a more plausible mechanism to generate a transition-type spectrum, even if this model may apply only to a relatively small fraction of the population of transition galaxies.

## 5 Conclusions

Our primary conclusion is that for standard starburst parameters and for nebular conditions which may be typical of galactic nuclei, the starburst models are able to reproduce the important diagnostic emission-line ratios for LINER/H II transition galaxies, otherwise known as weak-\[O I\] LINERs. The key ingredient needed to generate a transition-type spectrum is a UV continuum dominated by W-R stars, a condition which occurs during $`t=`$ 3–5 Myr after an instantaneous burst. A transition-type emission spectrum may thus be a phase in the evolution of some nuclear H II regions in which the ionizing continuum is generated by a single-burst stellar population. The models are also able to produce an \[O I\] / H$`\alpha `$ ratio high enough to match LINER spectra, but only for conditions of above-solar metallicity combined with the presence of high-density ($`\sim 10^5`$ cm<sup>-3</sup>) clouds. A sensitive search for W-R spectral features in transition nuclei would provide a test of this starburst scenario. This model may apply only to a small fraction of LINERs and transition nuclei; many LINERs and some transition objects show clear signs of nonstellar activity, and the starburst models may not apply at all to objects in early-type host galaxies.
Further multiwavelength observations of transition nuclei will be of great utility for determining what fraction of them contain genuine active nuclei, and what fraction appear to be purely the result of stellar phenomena. Research by A.J.B. is supported by a postdoctoral fellowship from the Harvard-Smithsonian Center for Astrophysics. This research was also supported financially by grant AR-07988.02-96A, awarded to J.C.S. by STScI, which is operated by AURA for NASA under contract NAS5-26555. This work would not have been possible without the excellent software created and distributed by Gary Ferland and the Cloudy team, and by Claus Leitherer and the STARBURST99 team. We also thank Gary Ferland for providing a helpful referee’s report, Claus Leitherer for additional helpful comments on the manuscript, and Daniel Schaerer for supplying model starburst spectra in electronic form.

Figure Captions

Figure 1: Line-ratio diagram of \[O III\] $`\lambda `$5007 / H$`\beta `$ against \[O I\] $`\lambda `$6300 / H$`\alpha `$, for model grids A (solid line), B (long-dashed line), C (short-dashed line), and D (dot-dashed line), at a burst age of 4 Myr and $`n_e=10^3`$ cm<sup>-3</sup>. The input continuum has solar metallicity, IMF power-law slope $`-2.35`$, and $`M_{\mathrm{up}}`$ = 100 $`M_{\odot }`$. The following description applies to this and all subsequent plots: the small squares along each model line correspond to the model grid points at log $`U`$ = $`-4,-3.5,-3,-2.5`$, and $`-2`$, with $`U`$ increasing upward along the line. The points plotted represent galaxies from the Ho et al. catalog, as follows: *Small circles*: H II nuclei. *Squares:* LINER/H II transition objects. NGC 4569 is represented by an open square. *Triangles:* “pure” LINERs. *Crosses:* Seyfert nuclei. The dotted line encloses the region defined for transition nuclei according to the criteria of Ho et al. (1997a).
# Genus one 1-bridge knots and Dunwoody manifolds<sup>1</sup>

<sup>1</sup> Work performed under the auspices of G.N.S.A.G.A. of C.N.R. of Italy and supported by the University of Bologna, funds for selected research topics.

## 1 Introduction and preliminaries

The problem of determining whether a balanced presentation of a group is geometric (i.e. induced by a Heegaard diagram of a closed orientable 3-manifold) is quite important within geometric topology and has been deeply investigated by many authors (see , , , , , , ); further, the connections between branched cyclic coverings of links and cyclic presentations of groups induced by suitable Heegaard diagrams have recently been pointed out in several papers (see , , , , , , , , , , ). In order to investigate these connections, M.J. Dunwoody introduces a class of planar, 3-regular graphs endowed with a cyclic symmetry. Each graph is defined by a 6-tuple of integers; if this 6-tuple satisfies suitable conditions (admissible 6-tuple), the graph uniquely defines a Heegaard diagram such that the presentation of the fundamental group of the represented manifold is cyclic. This construction gives rise to a wide class of closed orientable 3-manifolds (Dunwoody manifolds), depending on 6-tuples of integers and admitting geometric cyclic presentations for their fundamental groups. Our main result is that each Dunwoody manifold is a cyclic covering of a lens space (possibly the 3-sphere), branched over a genus one 1-bridge knot. As a direct consequence, the Dunwoody manifolds belonging to a wide subclass are proved to be cyclic coverings of $`𝐒^\mathrm{𝟑}`$, branched over suitable knots, thus giving a positive answer to a conjecture of Dunwoody. Moreover, we show that all branched cyclic coverings of knots with classical (i.e. genus zero) bridge number two belong to this subclass; as a corollary, the fundamental group of each branched cyclic covering of a 2-bridge knot admits a geometric cyclic presentation. For the theory of Heegaard splittings of 3-manifolds, and in particular for Singer moves on Heegaard diagrams realizing the homeomorphism of the represented manifolds, we refer to and . For the theory of cyclically presented groups, we refer to . We recall that a finite balanced presentation of a group $`<x_1,\dots ,x_n|r_1,\dots ,r_n>`$ is said to be a cyclic presentation if there exists a word $`w`$ in the free group $`F_n`$ generated by $`x_1,\dots ,x_n`$ such that the relators of the presentation are $`r_k=\theta _n^{k-1}(w)`$, $`k=1,\dots ,n`$, where $`\theta _n:F_n\to F_n`$ denotes the automorphism defined by $`\theta _n(x_i)=x_{i+1}`$ (mod $`n`$), $`i=1,\dots ,n`$. Let us denote this cyclic presentation (and the related group) by the symbol $`G_n(w)`$, so that:

$$G_n(w)=<x_1,x_2,\dots ,x_n|w,\theta _n(w),\dots ,\theta _n^{n-1}(w)>.$$

A group is said to be cyclically presented if it admits a cyclic presentation. We recall that the exponent-sum of a word $`w\in F_n`$ is the integer $`\epsilon _w`$ given by the sum of the exponents of its letters; in other terms, $`\epsilon _w=\upsilon (w)`$, where $`\upsilon :F_n\to 𝐙`$ is the homomorphism defined by $`\upsilon (x_i)=1`$ for each $`1\le i\le n`$. Following , we recall the definition of the genus $`g`$ bridge number of a link, which is a generalization of the classical concept of bridge number for links in $`𝐒^\mathrm{𝟑}`$ (see ).
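The shift automorphism and the relators of $`G_n(w)`$ are easy to generate mechanically. The following sketch uses an encoding of our own (a word is a list of generator-index/exponent pairs, with indices taken mod $`n`$ starting from 0) to produce the $`n`$ relators and the exponent-sum of a word.

```python
# A word in the free group F_n is encoded as a list of (generator index,
# exponent) pairs with indices in 0..n-1; theta_n shifts every index by one.
def theta(word, n):
    """Apply the shift automorphism x_i -> x_{i+1} (mod n) to a word."""
    return [((i + 1) % n, e) for (i, e) in word]

def cyclic_relators(word, n):
    """Return the n relators w, theta(w), ..., theta^{n-1}(w) of G_n(w)."""
    relators, current = [], word
    for _ in range(n):
        relators.append(current)
        current = theta(current, n)
    return relators

def exponent_sum(word):
    """The exponent-sum eps_w, i.e. the image of w under v: F_n -> Z."""
    return sum(e for (_, e) in word)

# Example: n = 3 and w = x_0 x_1^{-1} x_0.
w = [(0, 1), (1, -1), (0, 1)]
print(cyclic_relators(w, 3))  # three relators, indices shifted cyclically
print(exponent_sum(w))        # eps_w = 1
```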
A set of mutually disjoint arcs $`\{t_1,\dots ,t_n\}`$ properly embedded in a handlebody $`U`$ is trivial if there is a set of mutually disjoint discs $`D=\{D_1,\dots ,D_n\}`$ such that $`t_i\cap D_i=t_i\cap \partial D_i=t_i`$, $`t_i\cap D_j=\emptyset `$ and $`\partial D_i-t_i\subset \partial U`$ for $`1\le i,j\le n`$ and $`i\ne j`$. Let $`U_1`$ and $`U_2`$ be the two handlebodies of a Heegaard splitting of the closed orientable 3-manifold $`M`$ and let $`T`$ be their common surface: a link $`L`$ in $`M`$ is in $`n`$-bridge position with respect to $`T`$ if $`L`$ intersects $`T`$ transversally and if the set of arcs $`L\cap U_i`$ has $`n`$ components and is trivial both in $`U_1`$ and in $`U_2`$. A link in 1-bridge position is obviously a knot. The genus $`g`$ bridge number of a link $`L`$ in $`M`$, $`b_g(L)`$, is the smallest integer $`n`$ for which $`L`$ is in $`n`$-bridge position with respect to some genus $`g`$ Heegaard surface in $`M`$. If the genus $`g`$ bridge number of a link $`L`$ is $`b`$, we say that $`L`$ is a genus $`g`$ $`b`$-bridge link or simply a ($`g,b`$)-link. Of course, the genus $`g`$ bridge number of a link in a manifold of Heegaard genus $`g^{}`$ is defined only for $`g\ge g^{}`$, and the genus 0 bridge number of a link in $`𝐒^\mathrm{𝟑}`$ is the classical bridge number. Moreover, a ($`g,1`$)-link is a knot, for each $`g\ge 0`$. In what follows, we shall deal with ($`1,1`$)-knots, i.e. knots in $`𝐒^\mathrm{𝟑}`$ or in lens spaces. This class of knots is very important in the light of some results and conjectures involving Dehn surgery on knots (see , , , , , ). Notice that the class of ($`1,1`$)-knots in $`𝐒^\mathrm{𝟑}`$ contains all torus knots (trivially) and all 2-bridge knots (i.e. $`(0,2)`$-knots).

## 2 Dunwoody manifolds

Let us now sketch the construction of Dunwoody manifolds given in . Let $`a,b,c,n`$ be integers such that $`n>0`$, $`a,b,c\ge 0`$ and $`a+b+c>0`$. Let $`\mathrm{\Gamma }=\mathrm{\Gamma }(a,b,c,n)`$ be the planar regular trivalent graph drawn in Figure 1. It contains $`n`$ upper cycles $`C_1^{},\dots ,C_n^{}`$ and $`n`$ lower cycles $`C_1^{\prime \prime },\dots ,C_n^{\prime \prime }`$, each having $`d=2a+b+c`$ vertices. For each $`i=1,\dots ,n`$, the cycle $`C_i^{}`$ (resp. $`C_i^{\prime \prime }`$) is connected to the cycle $`C_{i+1}^{}`$ (resp. $`C_{i+1}^{\prime \prime }`$) by $`a`$ parallel arcs, to the cycle $`C_i^{\prime \prime }`$ by $`c`$ parallel arcs and to the cycle $`C_{i+1}^{\prime \prime }`$ by $`b`$ parallel arcs (assume $`n+1=1`$). We set $`𝒞^{}=\{C_1^{},\dots ,C_n^{}\}`$ and $`𝒞^{\prime \prime }=\{C_1^{\prime \prime },\dots ,C_n^{\prime \prime }\}`$. Moreover, denote by $`A^{}`$ (resp. $`A^{\prime \prime }`$) the set of the arcs of $`\mathrm{\Gamma }`$ belonging to a cycle of $`𝒞^{}`$ (resp. $`𝒞^{\prime \prime }`$) and by $`A`$ the set of the other arcs of the graph. The one-point compactification of the plane leads to a 2-cell embedding of the graph $`\mathrm{\Gamma }`$ in $`𝐒^\mathrm{𝟐}`$; it is evident that the graph is invariant with respect to a rotation $`\rho _n`$ of the sphere by $`2\pi /n`$ radians along a suitable axis intersecting $`𝐒^\mathrm{𝟐}`$ in two points not belonging to the graph. Obviously, $`\rho _n`$ sends $`C_i^{}`$ to $`C_{i+1}^{}`$ and $`C_i^{\prime \prime }`$ to $`C_{i+1}^{\prime \prime }`$ (mod $`n`$), for each $`i=1,\dots ,n`$. By cutting the sphere along all $`C_i^{}`$ and $`C_i^{\prime \prime }`$ and by removing the interior of the corresponding discs, we obtain a sphere with $`2n`$ holes.
Let now $`r`$ and $`s`$ be two new integers; give a clockwise (resp. counterclockwise) orientation to the cycles of $`𝒞^{}`$ (resp. of $`𝒞^{\prime \prime }`$) and label their vertices from $`1`$ to $`d`$, in accordance with these orientations (see Figure 2), so that:

* the vertex 1 of each $`C_i^{}`$ is the endpoint of the first arc of $`A`$ connecting $`C_i^{}`$ with $`C_{i+1}^{}`$;
* the vertex $`1-r`$ (mod $`d`$) of each $`C_i^{\prime \prime }`$ is the endpoint of the first arc of $`A`$ connecting $`C_i^{\prime \prime }`$ with $`C_{i+1}^{\prime \prime }`$.

Then glue the cycle $`C_i^{}`$ with the cycle $`C_{i-s}^{\prime \prime }`$ (mod $`n`$) so that equally labelled vertices are identified together. It is evident by construction that the integers $`r`$ and $`s`$ can be taken mod $`d`$ and mod $`n`$ respectively. Denote by $`𝒮`$ the set of all the 6-tuples $`(a,b,c,n,r,s)\in 𝐙^\mathrm{𝟔}`$ such that $`n>0`$, $`a,b,c\ge 0`$ and $`a+b+c>0`$. The described gluing gives rise to an orientable surface $`T_n`$ of genus $`n`$, and the $`nd`$ arcs belonging to $`A`$ are pairwise connected through their endpoints, realizing $`m`$ cycles $`D_1,\dots ,D_m`$ on $`T_n`$. It is straightforward that the cut of $`T_n`$ along the $`n`$ cycles $`C_i=C_i^{}=C_{i-s}^{\prime \prime }`$ does not disconnect the surface. Set $`𝒞=\{C_1,\dots ,C_n\}`$ and $`𝒟=\{D_1,\dots ,D_m\}`$. If $`m=n`$ and if the cut along the cycles of $`𝒟`$ does not disconnect $`T_n`$, then the two systems of meridian curves $`𝒞`$ and $`𝒟`$ in $`T_n`$ represent a genus $`n`$ Heegaard diagram of a closed orientable 3-manifold, which is completely determined by the 6-tuple. Each manifold arising in this way is called a Dunwoody manifold. Thus, we define to be admissible the 6-tuples $`(a,b,c,n,r,s)`$ of $`𝒮`$ satisfying the following conditions:

* the set $`𝒟`$ contains exactly $`n`$ cycles;
* the surface $`T_n`$ is not disconnected by the cut along the cycles of $`𝒟`$.

The “open” Heegaard diagram $`\mathrm{\Gamma }`$ and the Dunwoody manifold associated to the admissible 6-tuple $`\sigma `$ will be denoted by $`H(\sigma )`$ and $`M(\sigma )`$ respectively. Remark 1. It is easy to see that not all the 6-tuples in $`𝒮`$ are admissible. For example, the 6-tuples $`(a,0,a,1,a,0)`$, with $`a\ge 1`$, give rise to exactly $`a`$ cycles in $`𝒟`$; thus, they are not admissible if $`a>1`$. The 6-tuples $`(1,0,c,1,2,0)`$ are not admissible if $`c`$ is even, since, in this case, we obtain exactly one cycle $`D_1`$, but the cut along it disconnects the torus $`T_1`$. Consider now a 6-tuple $`\sigma \in 𝒮`$. The graph $`\mathrm{\Gamma }`$ becomes, via the gluing quotient map, a regular 4-valent graph, denoted by $`\mathrm{\Gamma }^{}`$, embedded in $`T_n`$. Its vertices are the intersection points of the spaces $`\mathrm{\Omega }=\bigcup _{i=1}^nC_i`$ and $`\mathrm{\Lambda }=\bigcup _{j=1}^mD_j`$; hence they inherit the labelling of the corresponding glued vertices of $`\mathrm{\Gamma }`$. Since the gluing of the cycles of $`𝒞^{}`$ and $`𝒞^{\prime \prime }`$ is invariant with respect to the rotation $`\rho _n`$, the group $`𝒢_n=<\rho _n>`$ naturally induces a cyclic action of order $`n`$ on $`T_n`$ such that the quotient $`T_1=T_n/𝒢_n`$ is homeomorphic to a torus. The labelling of the vertices of $`\mathrm{\Gamma }^{}`$ is invariant under the rotation $`\rho _n`$, and $`\rho _n(C_i)=C_{i+1}`$ (mod $`n`$). We are going to show that, if the 6-tuple is admissible, this last property also holds for the cycles of $`𝒟`$.
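The index bookkeeping of the gluing can be summarized in a few lines. The sketch below, with an arbitrary 1-based indexing convention of our own, lists which lower cycle each upper cycle is glued to for a sample 6-tuple; it records only the combinatorics of the gluing rule, not the resulting surface.

```python
# Bookkeeping for the gluing that produces T_n from (a, b, c, n, r, s):
# the upper cycle C'_i is glued to the lower cycle C''_{i-s} (mod n),
# identifying equally labelled vertices; labels run from 1 to d = 2a+b+c.
def glued_lower_cycle(i, s, n):
    """Index of the lower cycle C'' glued to the upper cycle C'_i (1-based)."""
    return (i - s - 1) % n + 1

a, b, c, n, r, s = 1, 1, 1, 4, 2, 1   # sample values, for illustration only
d = 2 * a + b + c
for i in range(1, n + 1):
    j = glued_lower_cycle(i, s, n)
    print(f"C'_{i}  <->  C''_{j}   (vertices 1..{d} identified by equal labels)")
```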
###### Lemma 1

a) Let $`\sigma =(a,b,c,n,r,s)`$ be an admissible 6-tuple. Then $`\rho _n`$ induces a cyclic permutation on the curves of $`𝒟`$. Thus, if $`D`$ is a cycle of $`𝒟`$, then $`𝒟=\{\rho _n^{k-1}(D)|k=1,\dots ,n\}`$.

b) If $`(a,b,c,n,r,s)`$ is admissible, then $`(a,b,c,1,r,0)`$ is also admissible, and the Heegaard diagram $`H(a,b,c,1,r,0)`$ is the quotient of the Heegaard diagram $`H(a,b,c,n,r,s)`$ with respect to $`𝒢_n`$.

Proof. a) First of all, note that $`\rho _n(\mathrm{\Lambda })=\mathrm{\Lambda }`$; thus the group $`𝒢_n`$ also acts on the spaces $`T_n-\mathrm{\Lambda }`$ and $`\mathrm{\Lambda }`$ (and hence on the set $`𝒟`$). If the 6-tuple $`\sigma `$ is admissible, then $`T_n-\mathrm{\Lambda }`$ is connected, and hence the quotient $`(T_n-\mathrm{\Lambda })/𝒢_n=T_n/𝒢_n-\mathrm{\Lambda }/𝒢_n`$ must be connected too. This implies that $`\mathrm{\Lambda }/𝒢_n`$ has a unique connected component. Since $`\mathrm{\Lambda }`$ has exactly $`n`$ connected components, the cyclic group $`𝒢_n`$ of order $`n`$ defines a simply transitive cyclic action on the cycles of $`𝒟`$.

b) Let $`C,D\subset T_1`$ be the two curves $`C=\mathrm{\Omega }/𝒢_n`$ and $`D=\mathrm{\Lambda }/𝒢_n`$. Then the two systems of curves $`𝒞=\{C\}`$ and $`𝒟=\{D\}`$ on $`T_1`$ define a Heegaard diagram of genus one. The graph $`\mathrm{\Gamma }_1`$ corresponding to $`\sigma _1=(a,b,c,1,r,0)`$ is the quotient of the graph $`\mathrm{\Gamma }_n`$ corresponding to $`\sigma =(a,b,c,n,r,s)`$ with respect to $`𝒢_n`$. Moreover, the gluings on $`\mathrm{\Gamma }_n`$ are invariant with respect to $`\rho _n`$. Therefore, the gluings on $`\mathrm{\Gamma }_1`$ give rise to the Heegaard diagram above. This shows that the 6-tuple $`\sigma _1`$ is admissible, and obviously $`H(a,b,c,1,r,0)`$ is the quotient of $`H(a,b,c,n,r,s)`$ with respect to $`𝒢_n`$.

Remark 2. More generally, given two positive integers $`n`$ and $`n^{}`$ such that $`n^{}`$ divides $`n`$, if $`(a,b,c,n,r,s)`$ is admissible, then $`(a,b,c,n^{},r,s)`$ is admissible too. Moreover, the Heegaard diagram $`H(a,b,c,n^{},r,s)`$ is the quotient of $`H(a,b,c,n,r,s)`$ with respect to the action of a cyclic group of order $`n/n^{}`$.

It is easy to see that, for admissible 6-tuples, each cycle in $`𝒟`$ contains $`d`$ vertices with different labels and is composed of exactly $`d`$ arcs of $`\mathrm{\Gamma }`$ (in fact, $`2a`$ horizontal arcs, $`b`$ oblique arcs and $`c`$ vertical arcs). An important consequence of point a) of Lemma 1 is that, if $`\sigma `$ is an admissible 6-tuple, the presentation of the fundamental group of $`M(\sigma )`$ induced by the Heegaard diagram $`H(\sigma )`$ is cyclic. To see this, let $`v`$ be the vertex belonging to the cycle $`C_1`$ and labelled by $`a+b+1`$; denote by $`D_1`$ the curve of $`𝒟`$ containing $`v`$ and by $`v^{}`$ the vertex of $`C_1^{}`$ corresponding to $`v`$. Orient the arc $`e^{}\in A`$ of the graph $`\mathrm{\Gamma }`$ containing $`v^{}`$ so that $`v^{}`$ is its first endpoint, and orient the curve $`D_1`$ in accordance with the orientation of this arc. Now, set $`D_k=\rho _n^{k-1}(D_1)`$, for each $`k=1,\dots ,n`$; the orientation on $`D_1`$ induces, via $`\rho _n`$, an orientation also on these curves. Moreover, these orientations on the cycles of $`𝒟`$ induce an orientation on the arcs of the graph $`\mathrm{\Gamma }`$ belonging to $`A`$.
By orienting the arcs of $`A^{}`$ and $`A^{\prime \prime }`$ in accordance with the fixed orientations of the cycles $`C_i^{}`$ and $`C_i^{\prime \prime }`$, the graph $`\mathrm{\Gamma }`$ becomes an oriented graph, whose orientation is invariant under the action of the group $`𝒢_n`$. Let us define to be canonical this orientation of $`\mathrm{\Gamma }`$. Let now $`w\in F_n`$ be the word obtained by reading the oriented arcs $`e_1=e^{},e_2,\dots ,e_d`$ of $`\mathrm{\Gamma }`$ corresponding to the oriented cycle $`D_1`$, starting from the vertex $`v^{}`$. The letters of $`w`$ are in one-to-one correspondence with the oriented arcs $`e_h`$; more precisely, the letter of $`w`$ corresponding to $`e_h`$ is $`x_i`$ if $`e_h`$ comes out from the cycle $`C_i^{}`$ and is $`x_i^{-1}`$ if $`e_h`$ comes out from the cycle $`C_{i-s}^{\prime \prime }`$. Note that the word $`\theta _n^{k-1}(w)`$ in the cyclic presentation $`G_n(w)`$ is obtained by reading the cycle $`D_k`$ along the given orientation, for $`1\le k\le n`$ (roughly speaking, the automorphism $`\theta _n`$ is “geometrically” realized by $`\rho _n`$). This proves that each admissible 6-tuple $`\sigma `$ uniquely defines, via the associated Heegaard diagram $`H(\sigma )`$, a word $`w=w(\sigma )`$ and a cyclic presentation $`G_n(w)`$ for the fundamental group of the Dunwoody manifold $`M(\sigma )`$. Note that the sequence of the exponents in the word $`w(\sigma )`$, and hence its exponent-sum $`\epsilon _{w(\sigma )}`$, only depends on the integers $`a,b,c,r`$. Let us now consider the Dunwoody manifolds $`M(a,b,c,n,r,s)`$ with $`n=1`$ (and hence $`s=0`$), which arise from a genus one Heegaard diagram.

###### Proposition 2

Let $`(a,b,c,1,r,0)`$ be an admissible 6-tuple and let $`w=w(a,b,c,1,r,0)`$ be the associated word. Then the Dunwoody manifold $`M(a,b,c,1,r,0)`$ is homeomorphic to:

i) $`𝐒^\mathrm{𝟑}`$, if $`\epsilon _w=\pm 1`$;

ii) $`𝐒^\mathrm{𝟏}\times 𝐒^\mathrm{𝟐}`$, if $`\epsilon _w=0`$;

iii) a lens space $`L(\alpha ,\beta )`$ with $`\alpha =|\epsilon _w|`$, if $`|\epsilon _w|>1`$.

Proof. From $`n=1`$ we obtain $`w\in F_1\cong 𝐙=<x|\ >`$. Thus, $`\pi _1(M)\cong G_1(w)=<x|x^{\epsilon _w}>\cong 𝐙_{|\epsilon _w|}`$.

Example 1. The Dunwoody manifolds $`M(0,0,1,1,0,0)`$, $`M(1,0,0,1,1,0)`$ and $`M(0,0,c,1,r,0)`$, with $`c,r`$ coprime, are homeomorphic to $`𝐒^\mathrm{𝟑}`$, $`𝐒^\mathrm{𝟏}\times 𝐒^\mathrm{𝟐}`$ and to the lens space $`L(c,r)`$, respectively. Moreover, all lens spaces also arise with $`a\ne 0`$; in fact, for each $`a>0`$, $`M(a,0,c,1,a,0)`$ is homeomorphic to the lens space $`L(c,a)`$, if $`a`$ and $`c`$ are coprime, since it is easy to see that $`H(a,0,c,1,a,0)`$ can be transformed into the canonical genus one Heegaard diagram of $`L(c,a)`$ by Singer moves of type IB. Let us now see how the admissibility conditions for the 6-tuples of $`𝒮`$ can be given in terms of the labelling of the vertices of $`\mathrm{\Gamma }^{}`$ belonging to the curve $`D_1\in 𝒟`$. With this aim, consider the following properties for a 6-tuple $`\sigma \in 𝒮`$:

* (i’) the set of the labels of the vertices belonging to the cycle $`D_1`$ is the set of all integers from $`1`$ to $`d`$;
* (ii’) the vertices of the cycle $`D_1`$ have different labels.

It is easy to see that, if a 6-tuple $`\sigma \in 𝒮`$ is admissible, then it satisfies (i’) and (ii’). On the other hand, if a 6-tuple $`\sigma \in 𝒮`$ satisfies (i’) and (ii’), then the curves $`\rho _n^{k-1}(D_1)\in 𝒟`$, with $`k=1,\dots ,n`$, which are all different from each other, are precisely the curves of $`𝒟`$.
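Proposition 2 reduces the identification of a genus one Dunwoody manifold to the computation of the exponent-sum. A minimal sketch, using the same word encoding as above (the input words here are hypothetical, not read off an actual diagram):

```python
# Classification of M(a, b, c, 1, r, 0) from the exponent-sum of the
# associated word w (Proposition 2).  The word itself must be read off
# the Heegaard diagram; here it is passed in already encoded.
def genus_one_dunwoody_type(word):
    eps = sum(e for (_, e) in word)
    if abs(eps) == 1:
        return "S^3"
    if eps == 0:
        return "S^1 x S^2"
    return f"lens space L({abs(eps)}, beta) for some beta"

print(genus_one_dunwoody_type([(0, 1)]))                  # eps = 1  -> S^3
print(genus_one_dunwoody_type([(0, 1), (0, -1)]))         # eps = 0  -> S^1 x S^2
print(genus_one_dunwoody_type([(0, 1), (0, 1), (0, 1)]))  # eps = 3  -> L(3, beta)
```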
Thus, $`𝒟`$ has exactly $`n`$ curves and they are cyclically permuted by $`\rho _n`$. However, this does not imply that $`\sigma `$ is admissible; for example, the 6-tuple $`(1,0,2,1,2,0)`$ satisfies (i’) and (ii’), but it is not admissible (see Remark 1). Note that, for $`n=1`$, property (ii’) always holds, while condition (i’) holds if and only if $`𝒟`$ has a unique cycle. If a 6-tuple satisfies property (i’), then $`𝒢_n`$ acts transitively (not necessarily simply) on $`𝒟`$, and hence it is possible to induce an orientation (which is still said to be canonical) on the cycles of $`𝒟`$ and on the graph $`\mathrm{\Gamma }`$, by extending, via $`\rho _n`$, the orientation of $`D_1`$ to the other cycles of $`𝒟`$. Property (i’) implies that the cycles of $`𝒟`$ naturally induce a cyclic permutation on the set $`𝒩=\{1,\mathrm{},d\}`$ of the vertex labels. In fact, by walking along these canonically oriented cycles, starting from an arbitrary vertex $`\overline{v}`$ labelled $`j`$, one sequentially meets $`d`$ vertices (whose labels are different from each other), and then a new vertex $`\overline{v}^{\prime }`$ labelled $`j`$, which can be different from $`\overline{v}`$. The sequence of the labellings of these $`d`$ consecutive vertices defines the cyclic permutation on $`𝒩`$. Further, each cycle of $`𝒟`$ contains precisely $`d^{\prime }=ld`$ arcs, with $`l\ge 1`$, and $`l=1`$ if and only if the 6-tuple satisfies (ii’) too. Moreover, property (i’) is independent of the integers $`n`$ and $`s`$; hence, given two 6-tuples $`\sigma =(a,b,c,n,r,s)`$ and $`\sigma ^{\prime }=(a,b,c,n^{\prime },r,s)`$, $`\sigma `$ satisfies (i’) if and only if $`\sigma ^{\prime }`$ satisfies (i’). Let now $`\sigma `$ be a 6-tuple satisfying (i’) and suppose that $`\mathrm{\Gamma }`$ is canonically oriented. An arc of $`\mathrm{\Gamma }`$ belonging to $`A`$ is said to be of type I if it is oriented from a cycle of $`𝒞^{\prime }`$ to a cycle of $`𝒞^{\prime \prime }`$, of type II if it is oriented from a cycle of $`𝒞^{\prime \prime }`$ to a cycle of $`𝒞^{\prime }`$ and of type III otherwise (it joins cycles of $`𝒞^{\prime }`$ or cycles of $`𝒞^{\prime \prime }`$). Moreover, the arc is said to be of type I’ if it is oriented from a cycle $`C_i^{\prime }`$ (resp. $`C_i^{\prime \prime }`$) to a cycle $`C_{i+1}^{\prime }`$ (resp. $`C_{i+1}^{\prime \prime }`$), of type II’ if it is oriented from a cycle $`C_{i+1}^{\prime }`$ (resp. $`C_{i+1}^{\prime \prime }`$) to a cycle $`C_i^{\prime }`$ (resp. $`C_i^{\prime \prime }`$) and of type III’ otherwise (it joins $`C_i^{\prime }`$ with $`C_i^{\prime \prime }`$). Let $`\mathrm{\Delta }`$ be the set of the first $`d`$ arcs of $`D_1`$, following the canonical orientation, starting from the arc coming out of the vertex $`v^{\prime }`$ of $`C_1^{\prime }`$ labelled $`a+b+1`$. Obviously, the set $`\mathrm{\Delta }`$ contains all the arcs of $`D_1`$ if and only if the 6-tuple $`\sigma `$ also satisfies (ii’). Now, denote by $`p_\sigma ^{\prime }`$ (resp. $`p_\sigma ^{\prime \prime }`$) the number of arcs of type I (resp. of type II) in $`\mathrm{\Delta }`$ and set $`p_\sigma =p_\sigma ^{\prime }-p_\sigma ^{\prime \prime }`$. Similarly, denote by $`q_\sigma ^{\prime }`$ (resp. $`q_\sigma ^{\prime \prime }`$) the number of arcs of type I’ (resp. of type II’) in $`\mathrm{\Delta }`$ and set $`q_\sigma =q_\sigma ^{\prime }-q_\sigma ^{\prime \prime }`$. Note that $`p_\sigma `$ has the same parity as $`b+c`$, and $`q_\sigma `$ has the same parity as $`2a+b`$ and hence as $`b`$. It is evident that $`p_\sigma `$ and $`q_\sigma `$ depend only on the integers $`a,b,c,r`$.
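The bookkeeping of the last few paragraphs (reading the word $`w(\sigma )`$ from the oriented arcs, applying the shift automorphism $`\theta _n`$, and counting arc types to obtain $`p_\sigma `$ and $`q_\sigma `$) is mechanical and easy to script. The following Python sketch is ours, not part of the paper; words are encoded as lists of (generator index, exponent) pairs, and the arc-type lists are hypothetical inputs read off a diagram.

```python
# Illustrative sketch (not from the paper) of the combinatorics above.
# A word in F_n is a list of (generator_index, exponent) pairs,
# e.g. x_1 * x_2^(-1) is [(1, +1), (2, -1)].

def shift(word, n):
    """Shift automorphism theta_n: x_i -> x_{i+1}, indices mod n."""
    return [(i % n + 1, u) for (i, u) in word]

def relators(word, n):
    """The n relators theta_n^(k-1)(w), k = 1, ..., n, of G_n(w)."""
    rels, w = [], list(word)
    for _ in range(n):
        rels.append(w)
        w = shift(w, n)
    return rels

def exponent_sum(word):
    """epsilon_w, which for n = 1 classifies M(sigma) (Proposition 2)."""
    return sum(u for (_, u) in word)

def p_q(types, types_prime):
    """p_sigma and q_sigma from the types (I/II/III and I'/II'/III')
    of the d arcs of Delta, given as lists of strings."""
    p = types.count("I") - types.count("II")
    q = types_prime.count("I'") - types_prime.count("II'")
    return p, q
```

For $`n=1`$ the shift is the identity, so the single relator is $`x^{\epsilon _w}`$, recovering $`\pi _1\cong 𝐙_{|\epsilon _w|}`$ as in Proposition 2.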
The integers $`p_\sigma `$ and $`q_\sigma `$ give a useful tool for verifying condition (ii’). In fact, suppose we walk along the canonically oriented cycle $`D_j`$ of $`𝒟`$, starting from a vertex $`\overline{v}`$, and let $`C_i`$ be the cycle of $`𝒞`$ containing $`\overline{v}`$. If $`\overline{v}^{\prime }`$ is the first vertex with the same label as $`\overline{v}`$ and $`C_i^{\prime }`$ is the cycle of $`𝒞`$ containing $`\overline{v}^{\prime }`$, we have $`i^{\prime }=i+q_\sigma +sp_\sigma `$. Thus, the cycle $`D_j`$ contains $`d`$ arcs, i.e. the 6-tuple satisfies (ii’), if and only if $`q_\sigma +sp_\sigma \equiv 0`$ (mod $`n`$). Thus, (i’) and (ii’) are respectively, in a different language, conditions (i) and (ii) of Theorem 2 of , which gives a necessary and sufficient condition for a 6-tuple to be admissible when $`d`$ is odd. In fact, we have the following result:

###### Lemma 3

(, Theorem 2) Let $`\sigma =(a,b,c,n,r,s)`$ be a 6-tuple with $`d=2a+b+c`$ odd. Then $`\sigma `$ is admissible if and only if it satisfies (i’) and (ii’). Remark 3. This result does not hold when $`d`$ is even. In fact, the 6-tuples $`(1,0,c,1,2,0)`$, with $`c`$ even, satisfy (i’) and (ii’), but they are not admissible, as pointed out in Remark 1. An immediate consequence of Lemma 3 is the following result:

###### Corollary 4

Let $`\sigma =(a,b,c,n,r,s)`$ be a 6-tuple with $`d=2a+b+c`$ odd and $`n=1`$. Then $`\sigma `$ is admissible if and only if $`𝒟`$ has a unique cycle. Proof. If $`\sigma `$ is admissible, then it is straightforward that $`𝒟`$ has a unique cycle. Conversely, if $`𝒟`$ has a unique cycle, then (i’) holds. Since $`n=1`$ implies (ii’), the result is a direct consequence of the above lemma. The parameter $`p_\sigma `$ associated to an admissible 6-tuple $`\sigma `$ is closely related to the word $`w(\sigma )`$ associated to $`\sigma `$. In fact, we have:

###### Lemma 5

Let $`\sigma =(a,b,c,n,r,s)`$ be an admissible 6-tuple, $`w=w(\sigma )`$ the associated word and $`\epsilon _w`$ its exponent-sum. Then
$$p_\sigma =\epsilon _w.$$
Proof. Since $`\sigma `$ is admissible, the arcs of $`\mathrm{\Delta }`$ are precisely the arcs of $`D_1`$. Let $`e_1,e_2,\mathrm{},e_d`$ be the sequence of these arcs, following the canonical orientation on $`D_1`$, and let $`w=\prod _{h=1}^dx_{i_h}^{u_h}`$, with $`u_h\in \{+1,-1\}`$. We have $`\epsilon _w=\sum _{h=1}^du_h=\frac{1}{2}\sum _{h=1}^d(u_h+u_{h+1})`$, where the indices are taken cyclically, so that $`u_{d+1}=u_1`$. Since $`u_h+u_{h+1}=+2`$ if $`e_h`$ is of type I, $`u_h+u_{h+1}=-2`$ if $`e_h`$ is of type II and $`u_h+u_{h+1}=0`$ if $`e_h`$ is of type III, the result immediately follows. In , Dunwoody investigates a wide subclass of manifolds $`M(\sigma )`$ such that $`p_\sigma =\pm 1`$ and he conjectures that all the elements of this subclass are cyclic coverings of $`𝐒^\mathrm{𝟑}`$ branched over knots. In the next section this conjecture will be proved as a corollary of a more general theorem.

## 3 Main results

The following theorem is the main result of this paper and shows how the cyclic action on the Heegaard diagrams naturally extends to a cyclic action on the associated Dunwoody manifolds, which turn out to be cyclic coverings of $`𝐒^\mathrm{𝟑}`$ or of lens spaces, branched over suitable knots.

###### Theorem 6

Let $`\sigma =(a,b,c,n,r,s)`$ be an admissible 6-tuple, with $`n>1`$. Then the Dunwoody manifold $`M=M(a,b,c,n,r,s)`$ is the $`n`$-fold cyclic covering of the manifold $`M^{\prime }=M(a,b,c,1,r,0)`$, branched over a genus one 1-bridge knot $`K=K(a,b,c,r)`$ depending only on the integers $`a,b,c,r`$.
Further, $`M^{\prime }`$ is homeomorphic to: i) $`𝐒^\mathrm{𝟑}`$, if $`p_\sigma =\pm 1`$, ii) $`𝐒^\mathrm{𝟏}\times 𝐒^\mathrm{𝟐}`$, if $`p_\sigma =0`$, iii) a lens space $`L(\alpha ,\beta )`$ with $`\alpha =|p_\sigma |`$, if $`|p_\sigma |>1`$. Proof. Since the two systems of curves $`𝒞=\{C_1,\mathrm{},C_n\}`$ and $`𝒟=\{D_1,\mathrm{},D_n\}`$ on $`T_n`$ define a Heegaard diagram of $`M`$, there exist two handlebodies $`U_n`$ and $`U_n^{\prime }`$ of genus $`n`$, with $`\partial U_n=\partial U_n^{\prime }=T_n`$, such that $`M=U_n\cup U_n^{\prime }`$. Let now $`𝒢_n`$ be the cyclic group of order $`n`$ generated by the homeomorphism $`\rho _n`$ on $`T_n`$. The action of $`𝒢_n`$ on $`T_n`$ extends to both the handlebodies $`U_n`$ and $`U_n^{\prime }`$ (see ), and hence to the 3-manifold $`M`$. Let $`B_1`$ (resp. $`B_1^{\prime }`$) be a disc properly embedded in $`U_n`$ (resp. in $`U_n^{\prime }`$) such that $`\partial B_1=C_1`$ (resp. $`\partial B_1^{\prime }=D_1`$). Since $`\rho _n(C_i)=C_{i+1}`$ and $`\rho _n(D_i)=D_{i+1}`$ (mod $`n`$), the discs $`B_k=\rho _n^{k-1}(B_1)`$ (resp. $`B_k^{\prime }=\rho _n^{k-1}(B_1^{\prime })`$), for $`k=1,\mathrm{},n`$, form a system of meridian discs for the handlebody $`U_n`$ (resp. $`U_n^{\prime }`$). By arguments contained in , the quotients $`U_1=U_n/𝒢_n`$ and $`U_1^{\prime }=U_n^{\prime }/𝒢_n`$ are both handlebody orbifolds, topologically homeomorphic to a genus one handlebody with one arc trivially embedded as its singular set, with a cyclic isotropy group of order $`n`$. The intersection of these orbifolds is a 2-orbifold with two singular points of order $`n`$, which is topologically the torus $`T_1=T_n/𝒢_n`$; the curve $`C`$ (resp. $`D`$), which is the image under the quotient map of the curves $`C_i`$ (resp. of the curves $`D_i`$), is homotopically non-trivial in $`T_1`$. These curves, each of which is a fundamental system of curves in $`T_1`$, define a Heegaard diagram of $`M^{\prime }`$ (induced by $`H(a,b,c,1,r,0)`$). The union of the orbifolds $`U_1`$ and $`U_1^{\prime }`$ is a 3-orbifold topologically homeomorphic to $`M^{\prime }`$, having a genus one 1-bridge knot $`K\subset M^{\prime }`$ as singular set of order $`n`$. Thus, $`M^{\prime }`$ is homeomorphic to $`M/𝒢_n`$ and hence $`M`$ is the $`n`$-fold cyclic covering of $`M^{\prime }`$, branched over $`K`$. Since the handlebody orbifolds and their gluing depend only on $`a,b,c,r`$, the same holds for the branching set $`K`$. The homeomorphism type of $`M^{\prime }`$ follows from Proposition 2 and Lemma 5. Remark 4. More generally, given two positive integers $`n`$ and $`n^{\prime }`$ such that $`n^{\prime }`$ divides $`n`$, if $`(a,b,c,n,r,s)`$ is admissible, then the Dunwoody manifold $`M(a,b,c,n,r,s)`$ is the $`n/n^{\prime }`$-fold cyclic covering of the manifold $`M^{\prime }=M(a,b,c,n^{\prime },r,s)`$, branched over an $`(n^{\prime },1)`$-knot in $`M^{\prime }`$. Example 2. The Dunwoody manifolds $`M(0,0,1,n,0,0)`$, $`M(1,0,0,n,1,0)`$ and $`M(0,0,c,n,r,0)`$, with $`c,r`$ coprime, are $`n`$-fold cyclic coverings of the manifolds $`𝐒^\mathrm{𝟑}`$, $`𝐒^\mathrm{𝟏}\times 𝐒^\mathrm{𝟐}`$ and $`L(c,r)`$ respectively, branched over a trivial knot. In fact, these Dunwoody manifolds are the connected sum of $`n`$ copies of $`𝐒^\mathrm{𝟑}`$, $`𝐒^\mathrm{𝟏}\times 𝐒^\mathrm{𝟐}`$ and $`L(c,r)`$ respectively. Let us now consider the class of Dunwoody manifolds $`M_n=M(a,b,c,n,r,s)`$ with $`p_\sigma =\pm 1`$ (and hence $`d`$ odd) and $`s=-p_\sigma q_\sigma `$. Many examples of these manifolds appear in Table 1 of , where it was conjectured that they are $`n`$-fold cyclic coverings of $`𝐒^\mathrm{𝟑}`$, branched over suitable knots. The following corollary of Theorem 6 proves this conjecture.
###### Corollary 7

Let $`\sigma _1=(a,b,c,1,r,0)`$ be an admissible 6-tuple with $`p_{\sigma _1}=\pm 1`$ and $`s=-p_{\sigma _1}q_{\sigma _1}`$. Then the 6-tuple $`\sigma _n=(a,b,c,n,r,s)`$ is admissible for each $`n>1`$ and the Dunwoody manifold $`M_n=M(a,b,c,n,r,s)`$ is an $`n`$-fold cyclic covering of $`𝐒^\mathrm{𝟑}`$, branched over a genus one 1-bridge knot $`K\subset 𝐒^\mathrm{𝟑}`$ which is independent of $`n`$. Proof. Obviously $`(a,b,c,1,r,s)=\sigma _1`$. Since $`\sigma _1`$ is admissible, it satisfies (i’). This proves that $`\sigma _n`$ satisfies (i’), for each $`n>1`$. Since $`s=-p_{\sigma _1}q_{\sigma _1}=-p_{\sigma _n}q_{\sigma _n}`$ and $`p_{\sigma _n}=p_{\sigma _1}=\pm 1`$, we obtain $`q_{\sigma _n}+sp_{\sigma _n}=0`$, for each $`n>1`$, which implies condition (ii) of Theorem 2 of , or equivalently (ii’). Moreover, $`d`$ is odd, since $`[d]_2=[2a+b+c]_2=[b+c]_2=[p_{\sigma _n}]_2=[p_{\sigma _1}]_2=1`$. Thus, Lemma 3 proves that $`\sigma _n`$ is admissible. The final result is then a direct consequence of Theorem 6. We point out that the above result has been independently obtained by H. J. Song and S. H. Kim in . An interesting problem which naturally arises is that of characterizing the set $`𝒦`$ of branching knots in $`𝐒^\mathrm{𝟑}`$ involved in Corollary 7. The next theorem shows that it contains all 2-bridge knots. We recall that a 2-bridge knot is determined by two coprime integers $`\alpha `$ and $`\beta `$, with $`\alpha >0`$ odd. The classification of 2-bridge knots and links was obtained by Schubert in . Since the 2-bridge knot of type $`(\alpha ,\beta )`$ is equivalent to the 2-bridge knot of type $`(\alpha ,\alpha -\beta )`$, $`\beta `$ can be assumed to be even.

###### Theorem 8

The 6-tuple $`\sigma _1=(a,0,1,1,r,0)`$ with $`(2a+1,2r)=1`$ is admissible. Moreover, if $`s=-q_{\sigma _1}`$, then the 6-tuple $`\sigma _n=(a,0,1,n,r,s)`$ is admissible for each $`n>1`$ and the Dunwoody manifold $`M_n=M(a,0,1,n,r,s)`$ is the $`n`$-fold cyclic covering of $`𝐒^\mathrm{𝟑}`$, branched over the 2-bridge knot of type $`(2a+1,2r)`$. Thus, all branched cyclic coverings of 2-bridge knots are Dunwoody manifolds. Proof. From $`(2a+1,2r)=1`$ it immediately follows that $`\sigma _1`$ has a unique cycle in $`𝒟`$. Since $`d=2a+1`$ is odd, Corollary 4 proves that $`\sigma _1`$ is admissible. Since $`p_{\sigma _n}=p_{\sigma _1}=+1`$, all the assumptions of Corollary 7 hold; hence $`\sigma _n`$ is admissible for each $`n>1`$ and $`M_n`$ is an $`n`$-fold cyclic covering of $`𝐒^\mathrm{𝟑}`$, branched over a knot $`K\subset 𝐒^\mathrm{𝟑}`$ which is independent of $`n`$. In order to determine this knot, we can restrict our attention to the case $`n=2`$. Note that $`[s]_2=[q_{\sigma _1}]_2=[b]_2=0`$ and hence $`s`$ is always even. Thus, in the case $`n=2`$ we can suppose $`s=0`$. Let us now consider the genus two Heegaard diagram $`H(a,0,1,2,r,0)`$. The sequence of Singer moves on this diagram, drawn in Figures 3–10 and described in the Appendix of the paper, leads to the canonical genus one Heegaard diagram of the lens space $`L(2a+1,2r)`$ (see Figure 10). Since the representation of lens spaces (including $`𝐒^\mathrm{𝟑}`$) as 2-fold branched coverings of $`𝐒^\mathrm{𝟑}`$ is unique , the result immediately follows. Remark 5. The Dunwoody manifold $`M(a,0,1,n,r,s)`$ of Theorem 8 is homeomorphic to the Minkus manifold $`M_n(2a+1,2r)`$ and the Lins-Mandel manifold $`S(n,2a+1,2r,1)`$ .
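As a quick illustration (ours, not part of the paper), the coprimality condition of Theorem 8 is easy to enumerate; the parameter $`s=-q_{\sigma _1}`$ is left symbolic below, since computing $`q_{\sigma _1}`$ requires reading the Heegaard diagram itself, and the sampled range of $`r`$ is only a convenient window.

```python
# Illustrative sketch (not from the paper): 6-tuples (a, 0, 1, n, r, s)
# of Theorem 8, whose Dunwoody manifolds are the n-fold cyclic coverings
# of S^3 branched over the 2-bridge knot of type (2a+1, 2r).
from math import gcd

def two_bridge_tuples(a_max, n):
    """Yield ((a, 0, 1, n, r, s), (alpha, beta)) with gcd(2a+1, 2r) = 1;
    s = -q_sigma1 must be computed from the diagram, so it stays symbolic."""
    for a in range(1, a_max + 1):
        for r in range(1, a + 1):          # sample of r values with 2r < 2a+1
            if gcd(2 * a + 1, 2 * r) == 1:
                yield (a, 0, 1, n, r, "-q_sigma1"), (2 * a + 1, 2 * r)

for sigma, knot in two_bridge_tuples(3, 5):
    print(sigma, "-> 2-bridge knot of type", knot)
# e.g. a = r = 1 gives the 2-bridge knot of type (3, 2), a trefoil.
```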
An immediate consequence of Theorem 8 is:

###### Corollary 9

The fundamental group of every branched cyclic covering of a 2-bridge knot admits a cyclic presentation which is geometric. Remark 6. In it is shown that the fundamental group of every branched cyclic covering of a 2-bridge knot admits a cyclic presentation, but without pointing out that this presentation is geometric. About the set $`𝒦`$ of knots in $`𝐒^\mathrm{𝟑}`$ involved in Corollary 7, we propose the following: Conjecture. The set $`𝒦`$ contains all torus knots. If this conjecture is true, the set $`𝒦`$ contains knots with an arbitrarily high number of bridges. Moreover, the conjecture implies that every branched cyclic covering of a torus knot admits a geometric cyclic presentation. The above conjecture is supported by several cases contained in Table 1 of (see ). For example, the Dunwoody manifolds $`M(1,2,3,n,4,4)`$ (resp. $`M(1,3,4,n,5,5)`$) are the $`n`$-fold branched cyclic coverings of the 4-bridge torus knot $`K(4,5)`$ (resp. of the 5-bridge torus knot $`K(5,6)`$).

## 4 Appendix

We now show how to obtain, by means of Singer moves on the genus two Heegaard diagram $`H(a,0,1,2,r,0)`$ of Figure 3, the canonical genus one Heegaard diagram of the lens space $`L(2a+1,2r)`$ of Figure 10. The result will be achieved by a sequence of exactly $`a+4`$ Singer moves: one of type ID, $`a+2`$ of type IC and a final one of type III. Figure 3 shows the open Heegaard diagram $`H(a,0,1,2,r,0)`$. Note that, since $`s=0`$, the cycle $`C_1^{\prime }`$ (resp. $`C_2^{\prime }`$) is glued with the cycle $`C_1^{\prime \prime }`$ (resp. $`C_2^{\prime \prime }`$). Let $`D_1`$ (resp. $`D_2`$) be the cycle of the Heegaard diagram corresponding to the arc $`e^{\prime }`$ (resp. $`e^{\prime \prime }`$) coming out of the vertex $`v^{\prime }`$ of $`C_1^{\prime }`$ (resp. $`v^{\prime \prime }`$ of $`C_2^{\prime }`$) labelled $`a+1`$. Orient $`D_1`$ (resp. $`D_2`$) so that the arc $`e^{\prime }`$ (resp. $`e^{\prime \prime }`$) is oriented downwards (resp. upwards). This orientation on $`D_2`$ is opposite to the canonical one but, in this way, all the $`2a`$ arcs connecting $`C_1^{\prime }`$ with $`C_2^{\prime }`$ are oriented from $`C_1^{\prime }`$ to $`C_2^{\prime }`$ and all the $`2a`$ arcs connecting $`C_1^{\prime \prime }`$ with $`C_2^{\prime \prime }`$ are oriented from $`C_2^{\prime \prime }`$ to $`C_1^{\prime \prime }`$. The cycle $`D_1`$, besides the arc $`e^{\prime }`$, has two arcs for each $`k=0,\mathrm{},a-1`$: one joining the vertex of $`C_1^{\prime }`$ labelled $`a+1-(1+2k)r`$ with the vertex of $`C_2^{\prime }`$ labelled $`a+1+(1+2k)r`$, and the other joining the vertex of $`C_2^{\prime \prime }`$ labelled $`a+1+(1+2k)r`$ with the vertex of $`C_1^{\prime \prime }`$ labelled $`a+1-(3+2k)r`$. The cycle $`D_2`$, besides the arc $`e^{\prime \prime }`$, has two arcs for each $`k=0,\mathrm{},a-1`$: one joining the vertex of $`C_1^{\prime }`$ labelled $`a+1-(2+2k)r`$ with the vertex of $`C_2^{\prime }`$ labelled $`a+1+(2+2k)r`$, the other joining the vertex of $`C_2^{\prime \prime }`$ labelled $`a+1+2kr`$ with the vertex of $`C_1^{\prime \prime }`$ labelled $`a+1-(2+2k)r`$. The first Singer move consists of replacing the curve $`D_2`$ with the curve $`D_2^{\prime }=D_1+D_2`$ (move of type ID of ), obtained by isotopically approaching the arcs $`e^{\prime }`$ and $`e^{\prime \prime }`$ until their intersection becomes a small arc and then removing the interior of this arc. The move is completed by shifting, with a small isotopy, $`D_1`$ to $`D_1^{\prime }`$ so that it becomes disjoint from $`D_2^{\prime }`$. The resulting Heegaard diagram is drawn in Figure 4.
The new $`2a+1`$ pairs of vertices obtained on $`C_1^{\prime },C_1^{\prime \prime },C_2^{\prime },C_2^{\prime \prime }`$ are labelled by simply adding a prime to the old label, while the $`4a+2`$ pairs of fixed vertices keep their old labelling. Note that each new vertex labelled $`j^{\prime }`$ is placed, in the cycles $`C_1^{\prime },C_1^{\prime \prime },C_2^{\prime }`$ and $`C_2^{\prime \prime }`$, between the old vertices labelled $`j`$ and $`j+1`$. The cycles $`C_2^{\prime }`$ and $`C_2^{\prime \prime }`$ are no longer connected by any arc, while the cycles $`C_1^{\prime }`$ and $`C_1^{\prime \prime }`$ are connected by a unique arc (belonging to $`D_1^{\prime }`$) joining the vertex labelled $`(a+1)^{\prime }`$ of $`C_1^{\prime }`$ with the vertex labelled $`(a+1-r)^{\prime }`$ of $`C_1^{\prime \prime }`$. All the $`3a`$ arcs connecting $`C_1^{\prime }`$ and $`C_2^{\prime }`$ are oriented from $`C_1^{\prime }`$ to $`C_2^{\prime }`$ and all the $`3a`$ arcs which now connect $`C_1^{\prime \prime }`$ with $`C_2^{\prime \prime }`$ are oriented from $`C_2^{\prime \prime }`$ to $`C_1^{\prime \prime }`$. The cycle $`D_2^{\prime }`$ contains exactly $`4a+2`$ arcs; more precisely, for each $`i=1,\mathrm{},2a+1`$, it has one arc joining the vertex labelled $`i`$ of $`C_1^{\prime }`$ with the vertex labelled $`2a+2-i`$ of $`C_2^{\prime }`$ and one arc joining the vertex labelled $`i`$ of $`C_2^{\prime \prime }`$ with the vertex labelled $`2a+2-2r-i`$ of $`C_1^{\prime \prime }`$. The cycle $`D_1^{\prime }`$ is a copy of the cycle $`D_1`$ and hence it contains $`2a+1`$ arcs. One of these arcs connects $`C_1^{\prime }`$ with $`C_1^{\prime \prime }`$; moreover, for each $`k=0,\mathrm{},a-1`$, $`D_1^{\prime }`$ has one arc joining the vertex of $`C_1^{\prime }`$ labelled $`(a+1-(1+2k)r)^{\prime }`$ with the vertex of $`C_2^{\prime }`$ labelled $`(a+1+(1+2k)r)^{\prime }`$ and one arc joining the vertex of $`C_2^{\prime \prime }`$ labelled $`(a+1+(1+2k)r)^{\prime }`$ with the vertex of $`C_1^{\prime \prime }`$ labelled $`(a+1-(3+2k)r)^{\prime }`$. Now, apply to the diagram a Singer move of type IC, cutting along the cycle $`E`$ (drawn in Figure 4) containing $`C_1^{\prime \prime }`$ and $`C_2^{\prime \prime }`$ and gluing the curve $`C_2^{\prime \prime }`$ of the resulting disc with $`C_2^{\prime }`$. The new Heegaard diagram obtained in this way is shown in Figure 5. It contains the new cycles $`E^{\prime }`$ and $`E^{\prime \prime }`$, which are copies of the cutting cycle $`E`$. These cycles replace $`C_2^{\prime }`$ and $`C_2^{\prime \prime }`$ and they both have a unique vertex ($`w^{\prime }`$ and $`w^{\prime \prime }`$ respectively). The cycle $`E^{\prime }`$ (resp. $`E^{\prime \prime }`$) is connected with $`C_1^{\prime }`$ (resp. with $`C_1^{\prime \prime }`$) by an arc joining $`w^{\prime }`$ (resp. $`w^{\prime \prime }`$) with the vertex labelled $`(a+1)^{\prime }`$ (resp. $`(a+1-r)^{\prime }`$), oriented as in Figure 5. The cycles $`C_1^{\prime }`$ and $`C_1^{\prime \prime }`$ are joined by $`3a+1`$ arcs, all oriented from $`C_1^{\prime }`$ to $`C_1^{\prime \prime }`$; $`2a+1`$ of them belong to $`D_2^{\prime }`$ and the other $`a`$ belong to $`D_1^{\prime }`$. More precisely, for each $`i=1,\mathrm{},2a+1`$, there is an arc of $`D_2^{\prime }`$ joining the vertex labelled $`i`$ of $`C_1^{\prime }`$ with the vertex labelled $`i-2r`$ of $`C_1^{\prime \prime }`$; while, for each $`k=0,\mathrm{},a-1`$, there is an arc of $`D_1^{\prime }`$ joining the vertex labelled $`(a+1-(1+2k)r)^{\prime }`$ of $`C_1^{\prime }`$ with the vertex labelled $`(a+1-(3+2k)r)^{\prime }`$ of $`C_1^{\prime \prime }`$.
Apply again a Singer move of type IC, cutting along the cycle $`F_1`$ (drawn in Figure 5) containing $`C_1^{\prime \prime }`$ and $`E^{\prime \prime }`$ and gluing the curve $`C_1^{\prime \prime }`$ of the resulting disc with $`C_1^{\prime }`$. The resulting Heegaard diagram is shown in Figure 6. It contains the new cycles $`F_1^{\prime }`$ and $`F_1^{\prime \prime }`$, which are copies of the cutting cycle $`F_1`$. These cycles replace $`C_1^{\prime }`$ and $`C_1^{\prime \prime }`$ and they both have one vertex fewer. It is easy to see that the cycle $`D_2^{\prime }`$ has exactly the same $`2a+1`$ arcs connecting $`F_1^{\prime }`$ and $`F_1^{\prime \prime }`$, all oriented from $`F_1^{\prime }`$ to $`F_1^{\prime \prime }`$; if the labelling of the vertices of $`F_1^{\prime }`$ and $`F_1^{\prime \prime }`$ is induced by the labelling of $`F_1`$ shown in Figure 5, these arcs join pairs of vertices with the same labelling as in the previous step. The cycle $`D_1^{\prime }`$, instead, has one arc fewer than in the previous step. In fact, it has $`a-1`$ arcs connecting $`F_1^{\prime }`$ and $`F_1^{\prime \prime }`$, all oriented from $`F_1^{\prime }`$ to $`F_1^{\prime \prime }`$ and joining the vertex labelled $`(a+1-(1+2k)r)^{\prime }`$ of $`F_1^{\prime }`$ with the vertex labelled $`(a+1-(3+2k)r)^{\prime }`$ of $`F_1^{\prime \prime }`$, for $`k=1,\mathrm{},a-1`$. Now, apply again a Singer move of type IC, cutting along the cycle $`F_2`$ (drawn in Figure 6) containing $`F_1^{\prime \prime }`$ and $`E^{\prime \prime }`$ and gluing the curve $`F_1^{\prime \prime }`$ of the resulting disc with $`F_1^{\prime }`$. The new Heegaard diagram differs from the previous one only in that the cycle $`D_1^{\prime }`$ contains one arc fewer. By inductive application of Singer moves of type IC, cutting along the cycle $`F_h`$ (drawn in Figure 7) containing $`F_{h-1}^{\prime \prime }`$ and $`E^{\prime \prime }`$ and gluing the curve $`F_{h-1}^{\prime \prime }`$ of the resulting disc with $`F_{h-1}^{\prime }`$, we obtain, for $`h=a`$, the situation shown in Figure 8, where the cycle $`D_1^{\prime }`$ contains only two arcs, neither of which connects $`F_a^{\prime }`$ with $`F_a^{\prime \prime }`$. After the move of type IC corresponding to $`h=a+1`$, we obtain the situation of Figure 9, in which the Heegaard diagram contains a pair of complementary handles given by the pair of cycles $`E^{\prime },E^{\prime \prime }`$ and by the cycle $`D_1^{\prime }`$, composed of a single arc connecting $`E^{\prime }`$ with $`E^{\prime \prime }`$. The deletion of this pair of complementary handles (Singer move of type III) leads to the genus one Heegaard diagram drawn in Figure 10, which is the canonical Heegaard diagram of the lens space $`L(2a+1,2r)`$. LUIGI GRASSELLI, Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia, 42100 Reggio Emilia, ITALY. E-mail: grasselli.luigi@unimo.it MICHELE MULAZZANI, Department of Mathematics, University of Bologna, I-40127 Bologna, ITALY, and C.I.R.A.M., Bologna, ITALY. E-mail: mulazza@dm.unibo.it
# Superhumps in V348 Pup

## 1 Introduction

V348 Pup (1H 0709–360, Pup 1) is a novalike cataclysmic variable (CV): a system with a high mass transfer rate which maintains its accretion disc in the hot, ionized, high viscosity state reached by dwarf novae in outburst. It exhibits deep eclipses in its optical and infrared lightcurves (Tuohy et al. 1990): it is a high inclination system with orbital period $`P_{\mathrm{orb}}=2.44`$ hours (Baptista et al. 1996).

### 1.1 Superhumps

Modulations in luminosity with a period a few per cent longer than the orbital period have been observed in many short period CVs (see reviews in Molnar & Kobulnicky 1992, Warner 1995, Patterson 1998a). These modulations typically take the form of a distinct increase in luminosity, or superhump. The standard explanation of this phenomenon is that the system contains an eccentric precessing accretion disc. If the accretion disc extends out far enough, the outermost orbits of disc matter can resonate with the tidal influence of the secondary star as it orbits the system. A 3:1 resonance can occur which results in the disc becoming distorted to form an eccentric non-axisymmetric shape. The tidal forces acting on this eccentric disc will cause it to precess slowly in a prograde direction. The superhump period, $`P_{\mathrm{sh}}`$, is then the beat period between the disc precession period, $`P_{\mathrm{prec}}`$, and the orbital period, $`P_{\mathrm{orb}}`$ (Osaki 1996):
$$\frac{1}{P_{\mathrm{sh}}}=\frac{1}{P_{\mathrm{orb}}}-\frac{1}{P_{\mathrm{prec}}}.$$
$`P_{\mathrm{sh}}`$ is the period on which the relative orientation of the line of centres of the two stars and the eccentric disc repeats. Possible models for the light modulation on $`P_{\mathrm{sh}}`$ are described below. This paper considers these models in relation to our observations. In the tidal model the superhump is a result of tidal stresses acting on the precessing eccentric disc (Whitehurst 1998b). The light may be due to a perturbation of the velocity field in the outer disc, leading to azimuthal velocity gradients and crossing or converging particle trajectories. Thus extra dissipation modulated on the superhump period arises when the secondary sweeps past the eccentric disc. In addition, the superhump-modulated tidal stress would lead to a superhump-modulated angular momentum loss from the disc, which would facilitate a variation in the mass transfer rate through the disc, and hence a modulation in disc luminosity. The bright spot model arises from noting that the energy gained by material in the accretion stream will depend on how far it falls before impacting on the disc (Vogt 1981). The energy dissipated at impact will be modulated on the superhump period since the non-axisymmetric disc radius causes a stream-disc impact region at varying depths in the white dwarf potential well. Recent SPH simulations of accretion discs in AM CVn stars lead to a third, more realistic, model in which the disc shape changes from nearly circular to highly eccentric over the course of a superhump period (Simpson & Wood 1998). Superhumps arise from viscous energy production as the distorting disc is tidally stressed. Other SPH simulations (e.g. Murray 1996, 1998) also reveal a disc whose shape changes, with Murray (1996) predicting superhump modulations from both the periodic compression of the eccentric disc and the varying depth in the primary Roche potential at which the stream impacts the disc.
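The beat relation is easy to evaluate numerically. As a minimal sketch (ours, not part of the paper), the Python below inverts it for the disc precession period, using the orbital period from the ephemeris of Section 2 and the 1993 superhump period of Section 3.1.

```python
# Minimal sketch of the beat relation 1/P_sh = 1/P_orb - 1/P_prec.

def precession_period(p_orb, p_sh):
    """Disc precession period implied by the orbital and superhump periods."""
    return 1.0 / (1.0 / p_orb - 1.0 / p_sh)

p_orb = 0.101838931   # days, from the ephemeris of Section 2
p_sh = 0.10857        # days, the 1993 superhump period of Section 3.1

print(f"P_prec = {precession_period(p_orb, p_sh):.2f} d")
# ~1.64 d, consistent with the 1.6-1.9 d precession periods quoted below
```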
Dwarf novae in super-outburst exhibit two distinct positive superhump phenomena (Vogt 1983, Schoembs 1986). Normal superhumps appear early in the super-outburst and fade away towards the end of the outburst plateau, to be replaced with ‘late’ superhumps which persist into quiescence. These late superhumps are roughly anti-phased with the normal superhumps, and are more likely to be analogous to the persistent superhumps seen in novalikes (Patterson 1998b), where the system has had sufficient time to settle into a steady state. Our extensive photometry (Section 2) reveals similarities between superhumps in V348 Pup and late superhumps in dwarf novae. In Section 3.1 we present power spectra revealing the superhump period and additional signals close to orbital period harmonics. In Section 3.2 we estimate the orbital parameters, $`q`$ and $`i`$, for V348 Pup using the average orbital lightcurve and the superhump period. The waveform of the superhump modulation is discussed in Section 3.3. Section 3.4 considers average orbital lightcurves grouped according to superhump phase. In Section 3.5 we fit our lightcurves with a precessing eccentric disc model, hence deducing the location of the light centre of the disc. We consider the results of maximum entropy eclipse mapping in Section 3.6. Our results and their implications are discussed in Section 4.

## 2 The observations

The observing campaign comprises 24 nights of rapid photometry from December 1991, February 1993 and January 1995 (see Table 1). The 1991 and 1993 observations (12 and 8 nights respectively) were taken using the 40-inch telescope at CTIO with a blue copper sulphate filter. The January 1995 run consists of 4 nights of R band data. All the data have been corrected for atmospheric extinction and the 1995 data have also been calibrated to give an absolute flux. In 1995 the average out of eclipse R magnitude is 15.5 mag; at mid-eclipse $`R=16.8`$. Examples of typical data are plotted in Figure 1.

## 3 Analysis

We determined mid-eclipse timings as described in Section 3.5, from which we derived an orbital ephemeris for this dataset:
$$\mathrm{T}_{\mathrm{mid}}=\mathrm{HJD}2448591.667969(85)+0.101838931(14)\mathrm{E}.$$
This is consistent with the Baptista et al. (1996) ephemeris within the quoted error limits. We adopt our ephemeris for this analysis. The eclipse timings are given in Table 2.

### 3.1 The superhump period

Before performing any analysis, we normalized each night of data by dividing by the average out of eclipse value. To make detection of non-orbital modulations in the data easier, the average orbital lightcurves (the superhump-phase-grouped average lightcurves are shown in Figure 7) from each year’s observations were calculated and subtracted from the corresponding data. The resulting lightcurves contain no orbital variations. Lomb-Scargle periodograms were calculated for each year’s data, and are shown in Figures 2 and 3. The 1991 periodogram reveals a periodicity with period 0.10763 days and simple aliasing structure. The 1993 periodogram has higher resolution, more complicated alias structure, and the strongest periodicity at 0.10857 days. The 1995 power spectrum (having the least time coverage) is of lower resolution; however, a clear signal at 0.10760 days and its aliases is present. By comparison with the clearer 1991 and 1993 spectra we surmise that the 0.10760-day peak is the true signal.
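The paper does not state which periodogram implementation was used; as a hedged sketch (ours), the scipy routine below reproduces the basic procedure on an orbital-subtracted lightcurve. The input file name and column layout are placeholders.

```python
# Sketch of a Lomb-Scargle periodogram of an orbital-subtracted lightcurve.
# The scipy implementation and the input file layout are our assumptions.
import numpy as np
from scipy.signal import lombscargle

t, flux = np.loadtxt("v348pup_1993_normalized.txt", unpack=True)  # hypothetical

periods = np.linspace(0.09, 0.13, 4000)       # days; brackets P_orb and P_sh
omega = 2.0 * np.pi / periods                 # lombscargle expects angular freq.
power = lombscargle(t, flux - flux.mean(), omega)

print(f"strongest periodicity: {periods[np.argmax(power)]:.5f} d")
# for the 1993 data this should recover the 0.10857-d superhump period
```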
The periods detected in the three datasets are all close to 6 per cent greater than $`P_{\mathrm{orb}}`$ and correspond to the period excesses, $`ϵ`$, shown in Table 3, defined via the superhump period $`P_{\mathrm{sh}}=(1+ϵ)P_{\mathrm{orb}}`$. The inferred disc precession periods are also shown in Table 3. It is notoriously difficult to determine errors in periods measured from periodograms. To estimate the errors in the periods detected here, various fake datasets were generated. The lightcurves were smoothed and the residuals used to characterize the variance of the random noise. Various types of noise with the same variance were added to both the smoothed lightcurves and smoothed superhump modulations. The variance in the period value determined from these different datasets provides a measure of the error in the period measurement. Since the smoothed data will still contain noise artefacts, these are probably underestimates. The 1993 dataset, with its 20-day time base, should provide the most precise period. Therefore, the possibility that the real 1991 and 1995 superhump periods are in fact closer to the 1993 value than our measurements suggest should not be ruled out, as Figure 3 shows. However, variability in the detected periodicity has also been seen before for superhumps (Patterson 1998b). Superhumping systems show a clear correlation between $`ϵ`$ and $`P_{\mathrm{orb}}`$, with $`ϵ`$ increasing with $`P_{\mathrm{orb}}`$; the superhump periods we detect in V348 Pup are consistent with this trend (Figure 3, Patterson 1998b). The power spectra also reveal signals at frequencies corresponding to sidebands of harmonics of the orbital period. The strongest such detections are at periods corresponding to $`2\mathrm{\Omega }_{orb}-\mathrm{\Omega }_{prec}`$ and $`3\mathrm{\Omega }_{orb}-\mathrm{\Omega }_{prec}`$ (marked with arrows in Figure 2): the predicted and the directly measured values of these sidebands are shown in Table 4, and graphically in Figure 4. The errors in these periods were estimated using the method described earlier. The detected periods do not all agree to within the estimated errors, suggesting that the error estimates may be a little too low, as expected; the highest quality 1993 data agree best. The simplest way to produce these sidebands is by modulating the brightness or visibility of the superhump with orbital phase. If we consider the orbital lightcurve as a sum of Fourier components with frequencies $`n\mathrm{\Omega }_{orb}`$, then following the approach of Warner (1986) and Norton, Beardmore & Taylor (1996), the eclipse of the superhump light source will produce signals at frequencies $`(n+1)\mathrm{\Omega }_{orb}-\mathrm{\Omega }_{prec}`$. Signals at frequencies $`(n-1)\mathrm{\Omega }_{orb}+\mathrm{\Omega }_{prec}`$ are also predicted, but we find no evidence of these; perhaps they are nullified by other signals of the same frequency in antiphase. In Section 3.4 we present evidence for a correlation between superhump amplitude and orbital phase; this modulation of superhump amplitude with orbital phase could also lead to the observed sideband signals. The SPH models of Simpson & Wood (1998) predict the formation of double armed spiral density waves in the disc whose rotation rate, they suggest, might lead to observed signals at about three times the superhump frequency. They also suggest that viewing these structures from non-zero inclination could lead to the detection of further frequencies, although they do not make precise predictions.
Observations of the dwarf nova IP Peg in outburst have revealed evidence of such spiral structure (Steeghs, Harlaftis & Horne 1997). There is no significant signal around the period $`P_{prec}=`$ 1.6 – 1.9 days in either the normalized or un-normalized lightcurves. The disc precession period is similarly absent in other persistently superhumping systems, notably AM CVn, where $`P_{prec}`$ is clearly revealed by absorption line spectroscopy (Patterson, Halpern and Shambrook 1993).

### 3.2 The orbital parameters

The mass ratio, $`q`$, of a superhumper can be estimated given its period excess, $`ϵ`$ (Patterson 1998b):
$$ϵ=\frac{0.23q}{1+0.27q}.$$
This leads to the estimates of $`q`$ for V348 Pup shown in Table 3. We favour the mass ratio, $`q=0.31`$, estimated from the most accurate 1993 superhump period. We note that SPH simulations of eccentric discs (Murray 1998, 1999) suggest a more complicated relationship between $`ϵ`$ and $`q`$, with the disc precession rate depending on the gas pressure and viscosity of the disc in addition to the mass ratio. This does not affect our substantive results. If we assume that the secondary star is Roche lobe filling, the width, $`w`$, of the eclipse of a point source at the centre of the compact object uniquely defines the orbital inclination, $`i`$, as a function of $`q`$. We can thus compute $`i`$ as a function of $`q`$ and $`w`$. When the centre of the compact object (point *P*) is first eclipsed (orbital phase $`\varphi _1`$), about half of the disc area will be eclipsed, and therefore for a disc whose intensity distribution is symmetric about the line of centres of the two stars, the fraction of disc flux eclipsed at this phase will be $`\sim 0.5`$ (Figure 5). Similarly, at the end of the eclipse of *P* (orbital phase $`\varphi _2`$) the fraction of disc light visible is again $`\sim 0.5`$. Further assuming that the lightcurve consists purely of emission from a disc in the orbital plane, the full width of the eclipse at half intensity will be equal to $`w`$. Using the average eclipses from each of our datasets to give $`w`$, we obtained $`i`$. We checked the assumption that half of the disc area is eclipsed at $`\varphi _1`$ and $`\varphi _2`$: for the values obtained it is a good assumption. We therefore adopt orbital parameters $`q=0.31`$, $`i=81.1^{\circ }\pm 1^{\circ }`$. The conclusions drawn later are identical to those obtained using $`q=0.36`$, with corresponding $`i=80.0^{\circ }\pm 1^{\circ }`$, which resulted from an earlier estimate of $`q`$.

### 3.3 The superhump modulation

To define a zero point in superhump phase, each set of data was folded onto its detected superhump period, binned into 100 phase bins, and a sine-wave was fitted to each (see Figure 6). We assessed the contribution of flickering to these curves by using various methods of binning the data. Figure 6 shows the curves produced by simply averaging the points in each bin. Since flickering in the lightcurve consists of brief increases in luminosity, giving more weight to the lower values in each bin, or using only the lower points in a bin, should reduce the impact of flickering on these superhump phase binned lightcurves. We therefore generated curves by averaging only the lowest 25 per cent of fluxes in each bin, and by weighting the lower values more strongly than the higher values. The curves produced by these different methods are virtually identical, except for a flux offset between each. This suggests that flickering has little effect on the superhump curves.
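As a hedged illustration (ours), the $`ϵ`$–$`q`$ relation above is easily inverted, and the 1993 periods quoted in Section 3.1 reproduce the adopted mass ratio:

```python
# Sketch: inverting epsilon = 0.23 q / (1 + 0.27 q) for the mass ratio q.

def mass_ratio(epsilon):
    """q implied by the epsilon(q) relation of Patterson (1998b)."""
    return epsilon / (0.23 - 0.27 * epsilon)

p_orb, p_sh = 0.101838931, 0.10857     # days, 1993 values from Section 3.1
eps = p_sh / p_orb - 1.0               # period excess
print(f"epsilon = {eps:.4f}, q = {mass_ratio(eps):.2f}")   # q ~ 0.31
```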
Since we have extensive datasets and the timescales of the flickering and the superhumps are very different, this insensitivity to flickering is not unexpected. For a high inclination system, a modulation on the superhump period will arise because the eclipses of the precessing accretion disc change as the disc orientation changes. This effect will occur in addition to the intrinsic variations in luminosity which are observed in non-eclipsing systems. Superhump modulations calculated using only non-eclipsing phases are almost identical to those in Figure 6, which implies that in this system the form of the superhump lightcurve is not affected by the changing eclipse shape. The broad form of the modulation is consistent for all three sets of observations. The peak-to-peak fractional amplitudes are shown in Table 3 and decline steadily from year to year. The more detailed structure, particularly the region with lower flux between $`\varphi _{sh}\sim 0.8`$ and 1.0 in the 1991 modulation, appears to be genuine; we find that neither flickering nor eclipses have a significant effect. Furthermore, the abrupt changes do not correspond to changes in system brightness from one night to the next. Simpson & Wood (1998) calculate pseudo-lightcurves for superhumps. Assuming the light emitted from the disc is proportional to changes in the total internal energy of the gas in the disc, they present superhump shapes calculated for mass ratios of 0.050, 0.075 and 0.100 (their Figure 5). These curves have significant differences in morphology: the $`q=0.050`$ curve has a sharp rise and slow decline, the $`q=0.075`$ curve is reasonably symmetric, and the $`q=0.100`$ curve has a slow rise and steeper decline. The cleanest and most reliable of our superhump curves, that from 1993, also shows an asymmetric shape, with a slow rise and sharper decline, agreeing best with their highest value of $`q=0.100`$. We expect $`q\simeq 0.31`$, so this is encouraging.

### 3.4 Average lightcurves

Because of the sampling of our data (Table 1), the observed eclipses for each year appear in two rough groups, with superhump phases separated by about 0.5. We compared the average orbital lightcurves corresponding to each group. The average mid-eclipse superhump phase, $`\overline{\varphi _{\mathrm{sh}}}`$, of each group, and the range, as indicated by the variance of $`\varphi _{\mathrm{sh}}`$ for each group, are shown in Table 5; the average orbital curves are shown in Figure 7. The group A curve for 1991 (Figure 7a) displays a clear hump at orbital phase around -0.2: the mid-eclipse superhump phase of group A is 0.24, so the eclipse should occur 0.25 in orbital phase after a superhump maximum. Group B has mid-eclipse superhump phase 0.71, meaning that the eclipse should occur around 0.3 in orbital phase before the superhump. The post-eclipse flux in group B is higher than in A, though the hump is not so sharp as in A. Group A in the 1993 data displays a hump peaking just before mid-eclipse, while the curve for group B is rather flat out of eclipse. The mid-eclipse superhump phase of group A is 0.02, while group B has superhump phase 0.48. This is again consistent with the position of the superhump. The difference curve (i.e. A-B) seems to show a broad superhump which is partially eclipsed by the secondary. We expect group B to display a hump around orbital phase 0.5. This is not obvious, meaning that the superhump is more prominent when it occurs at orbital phase 0 (group A) than at phase 0.5.
The superhump light is therefore not emitted isotropically (eclipses of the superhump light will obviously be most important when the secondary is at inferior conjunction, i.e. when the superhump maximum is at orbital phase 0). Schoembs (1986) also noted a similar effect in OY Car. This will be considered further in Section 4. The superhump phasing of the 1995 eclipses is almost the same as that observed in 1993, but the smaller extent of the 1995 data and the low fractional amplitude make identifying the hump difficult without first subtracting the orbital lightcurve (compare Figs 6b and 6c). However, the flux during eclipse for group A is higher than for group B, consistent with the phasing, which suggests that a superhump should occur at mid-eclipse.

### 3.5 Eclipse parameters

The O-C mid-eclipse times and the eclipse widths are shown in Figure 8. The mid-eclipse times were determined both by fitting a parabola to the deepest half of each eclipse and also by finding the centroid. The discrepancy between the two determinations provides an indication of the uncertainty. As the eccentric disc precesses slowly in our frame, we expect to see the eclipse width and midpoint phase modulated on the apsidal precession period. These quantities will be similarly modulated in superhump phase, since the superhump phase and precession phase of an eclipse are both measures of the relative orientation of disc and secondary star at mid-eclipse. Figure 8 shows that the eclipse timings for all years exhibit a precession period modulation. The widths also show evidence of a modulation. The limited superhump phase coverage means that conclusions cannot easily be drawn from inspecting the datapoints alone. Such variations in eclipse asymmetry have been observed in other superhumping systems, e.g. OY Car (Schoembs 1986) and Z Cha (Warner and O’Donoghue 1988, Kuulkers et al. 1991). To further investigate the disc shape we produced a simple model which was then fitted to the observed lightcurves. Our simple eccentric disc prescription has a circular inner boundary with radius $`r_{min}`$, centred on the white dwarf. The outer boundary is an ellipse of semi-major axis $`a_{max}`$, eccentricity $`e`$, with one focus also centred on the white dwarf. The disc brightness at distance $`r(\alpha )`$ from the white dwarf at an angle $`\alpha `$ to the semi-major axis is
$$S(\alpha )\propto \left(\frac{r(\alpha )-r_{min}}{r_{max}(\alpha )-r_{min}}+\frac{r_{min}}{a_{max}(1-e)-r_{min}}\right)^{-n},$$
where $`r_{max}(\alpha )`$ is the distance from the white dwarf to the outer disc boundary at orientation $`\alpha `$. Brightness contours are therefore circular at the inner boundary, smoothly changing to elliptical at the outer boundary. This form for $`S(\alpha )`$ reduces to $`S\propto r^{-n}`$ if the disc is circular. Our model is sensible for a tidally distorted disc, since the tidal influence of the secondary star is unimportant at the inner disc, so we expect a more or less circular inner disc. In an inertial frame, the disc slowly precesses progradely with period $`P_{prec}`$. With respect to the corotating frame of the system, this disc then rotates retrogradely with period $`P_{\mathrm{sh}}`$. Let the relative orientation of the line of apsides of the disc with the line of centres of the binary when superhump maximum occurs be $`\varphi _{disc}`$. The structure of the disc in our model is therefore described by five parameters: $`r_{min}`$, $`a_{max}`$, $`e`$, $`n`$ and $`\varphi _{disc}`$.
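A minimal numerical sketch of this prescription (ours, not the paper's code; we assume the convention that $`\alpha =0`$ points along the semi-major axis towards the periastron of the outer ellipse, and the sign of the exponent follows the circular limit $`S\propto r^{-n}`$ quoted above):

```python
# Sketch of the eccentric-disc brightness prescription of Section 3.5.
import numpy as np

def r_outer(alpha, a_max, e):
    """Outer boundary: ellipse with one focus on the white dwarf
    (alpha measured from the semi-major axis)."""
    return a_max * (1.0 - e**2) / (1.0 + e * np.cos(alpha))

def brightness(r, alpha, r_min, a_max, e, n):
    """S(r, alpha); zero inside r_min and outside r_max(alpha)."""
    r_max = r_outer(alpha, a_max, e)
    shape = (r - r_min) / (r_max - r_min) + r_min / (a_max * (1.0 - e) - r_min)
    inside = (r >= r_min) & (r <= r_max)
    return np.where(inside, shape ** (-n), 0.0)
```

For $`e=0`$ the two terms collapse to $`r/(a_{max}-r_{min})`$, recovering the circular $`S\propto r^{-n}`$ limit.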
This model was chosen as the simplest way to model an eccentric precessing disc with as few parameters as possible. A similar model was used by Patterson, Halpern & Shambrook (1993) to model the disc in AM CVn. We generated synthetic lightcurves for eclipses of our model disc using the orbital parameters from Section 3.2. By varying the parameters of our model to minimize the reduced chi-squared, $`\chi _r^2`$, of the fit, we obtained a best fit of our model to our lightcurves. The smoothed superhump was subtracted from the lightcurves before fitting the model, in order to remove the intrinsic variation in the disc flux, enabling us to study the shape of the disc. In Section 3.4 we noted that the superhump is more visible for $`\varphi _{sh}=0`$ at $`\varphi _{orb}=0`$, so ideally we should subtract a superhump modulation which takes account of this variation in superhump prominence, but insufficient sampling of the disc precession phase by our data prevents us from doing this. The downhill simplex method for minimizing multidimensional functions was used (the AMOEBA routine from Press et al. 1996). Each orbit (centred on an eclipse) was allowed to have a different total disc flux and a different uneclipsed flux. This prevents the variation of $`\sim `$10 per cent in flux from one orbit to the next from interfering with the results, and allows for the possibility of a contribution to the lightcurve from regions never eclipsed by the secondary star. The errors in the fluxes were estimated as $`\sigma \sqrt{flux}`$, where $`\sigma `$ is the square root of the variance of the flux between orbital phases 0.2 and 0.8. This estimate therefore includes the effect of flickering. We used two methods to assess the robustness and accuracy of these fits. Monte-Carlo methods estimated the size of the region in parameter space which has a $`\sim `$75 per cent chance of containing the ‘true’ values of the parameters. To assess the uniqueness and robustness of each solution, we carried out the fitting process 20 times for each model/data combination, starting each fit with a different random initial simplex. The solution chosen is that with the lowest value of reduced $`\chi ^2`$. Extreme outlier solutions are rejected and the variance in the parameters for the remaining solutions is a measure of the accuracy with which the AMOEBA routine converges to a unique solution. The errors quoted in Table 7 for each parameter are whichever is the greater of the confidence region estimate and the variance in the parameter from the multiple fits. The parameters resulting from all the fits are shown in Table 7. $`R_L`$ is the Eggleton radius of the primary Roche lobe (Eggleton 1983). The $`\chi ^2`$ surface in parameter space is not perfectly smooth: there is a broad global minimum with smaller amplitude bumps superimposed. Close to the global minimum the gradient of $`\chi ^2`$ is low, and so small bumps can lead AMOEBA to settle into a local minimum near the real minimum. Fits using the model as described above will be referred to as fit *f*. The emission extends out to 80 – 90 per cent of the Roche lobe size, while the largest radius of the disc is $`a_{max}(1+e)=0.97R_L`$ for the 1995 data. These results suggest that the disc does indeed extend out to the tidal cut-off radius, $`r_{tide}`$; $`r_{tide}\simeq 0.9R_L`$ (Paczynski 1977). The 1995 radius is 10 per cent larger than the 1991 and 1993 radii.
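A hedged sketch of this fitting loop (ours), with scipy's Nelder-Mead simplex standing in for the AMOEBA routine and the synthetic lightcurve generator passed in as a callable, since the paper's own generator is not reproduced here:

```python
# Sketch of the multi-restart downhill-simplex fit of Section 3.5.
import numpy as np
from scipy.optimize import minimize

def chi2(params, model_fn, phase, flux, err):
    """Chi-squared of a synthetic lightcurve against the data."""
    return np.sum(((flux - model_fn(phase, *params)) / err) ** 2)

def best_fit(model_fn, phase, flux, err, x0, n_restarts=20, seed=0):
    """Nelder-Mead from 20 randomized starting simplices; keep the best."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_restarts):
        start = np.asarray(x0) * (1.0 + 0.05 * rng.standard_normal(len(x0)))
        res = minimize(chi2, start, args=(model_fn, phase, flux, err),
                       method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best
```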
The 1991 and 1993 observations are in blue CuSO<sub>4</sub> filter light, which should come from hotter inner disc regions; the 1995 observations are in R, which is expected to weight the outer disc emission more heavily, perhaps causing the inferred disc radius to be largest for the 1995 observations. A simple model black-body disc with a $`T\propto R^{-3/4}`$ temperature distribution, with $`T=10,000`$ K at the inner disc radius $`r_1=0.1R_L`$ and with an outer disc radius $`r_2=R_L`$, was used to calculate eclipse profiles in the R and B bands. Fitting model *f* to these eclipses showed no significant difference in outer disc radius between the R band and CuSO<sub>4</sub> filter fits. The inner disc boundary and the index $`n`$ in the flux distribution are poorly constrained. The eccentricity is robustly non-zero; the changing eclipse shape demands a non-axisymmetric disc. The most interesting result is that all three datasets have $`\varphi _{disc}`$ around 0.4 to 0.5. This means that in our elliptical model the secondary star sweeps past the *smallest* radius part of the disc at superhump maximum, a result unexpected if tidal stressing of the disc by the gravitational influence of the secondary star is responsible for the superhump light. The implications of this result are discussed later. We adjusted the model so that the eccentricity varied during the superhump cycle as $`e(\varphi _{sh})=e_0\mathrm{cos}^2\pi \varphi _{sh}`$. This will be referred to as fit *e*. This variation in eccentricity follows Simpson & Wood’s (1998) simulation, in which the disc varies between being highly eccentric at superhump maximum and almost circular away from the superhump. The results for $`r_{min}`$ and $`a_{max}`$ change very little, with $`a_{max}`$ again larger in the red (1995) than in the blue (1991 and 1993). $`\varphi _{disc}`$ is unchanged from fit *f* within the errors for all three years. The maximum eccentricity, $`e_0`$, is larger than when $`e`$ was constant. This is expected, since the eccentricity is demanded by the variation in O-C mid-eclipse times, and these O-C times are non-zero at times when $`e`$ is less than $`e_0`$. Next, we obtained fits in which the eccentricity was again constant, but where $`a_{max}`$ was allowed to vary from $`a_1`$ at superhump maximum to $`a_2`$ half a superhump period later; $`a_{max}(\varphi _{sh})=a_1+(a_2-a_1)\mathrm{sin}^2\pi \varphi _{sh}`$. This will be referred to as fit *a*. This was an attempt to reproduce the observed variations in eclipse width (Figure 8). $`r_{min}`$, $`e`$ and $`\varphi _{disc}`$ are essentially the same as in the first fit, while the values of $`a_1`$ and $`a_2`$ imply a variation in $`a_{max}`$ of amplitude 20 – 25 per cent in the blue and 14 per cent in the red, with the disc being smallest at superhump maximum. At its largest, the disc extends to the edge of the Roche lobe. While these implied variations in disc size are large, they are comparable to the uncertainties in $`a_1`$ and $`a_2`$, and so must be treated with caution. The final variation on our model was to allow both $`e`$ and $`a_{max}`$ to vary as described above. This will be referred to as fit *b*. The eccentricity and $`\varphi _{disc}`$ are little different from the fits with $`e`$ varying periodically and $`a_{max}`$ constant, while the values of $`a_1`$ and $`a_2`$ follow those from the previous fit ($`e`$ constant and $`a_{max}`$ varying). The treatment of $`a_{max}`$ and $`e`$ for each fit is summarized in Table 6.
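The phase-dependent prescriptions of fits *e*, *a* and *b* are one-liners; a sketch (ours) that plugs into the brightness model above:

```python
# Sketch of the superhump-phase-dependent parameters of fits e, a and b.
import numpy as np

def e_of_phase(phi_sh, e0):
    """Fit e: eccentricity peaking at superhump maximum (phi_sh = 0)."""
    return e0 * np.cos(np.pi * phi_sh) ** 2

def a_of_phase(phi_sh, a1, a2):
    """Fit a: outer semi-major axis, a1 at superhump maximum and a2 half a
    superhump period later."""
    return a1 + (a2 - a1) * np.sin(np.pi * phi_sh) ** 2
```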
In Figure 9 we show part of the fit to the 1995 dataset using model *a*. The fit (shown as the continuous line) shows the different disc flux and uneclipsed flux allowed by our model in the form of the discontinuities at phase 30.5 and 31.5. The level of flickering out of eclipse can also clearly be seen, and was taken into account in our estimate of the errors as described above. Formally, the best model is that which achieves the lowest value of reduced $`\chi ^2`$ ($`\chi _r^2`$). In Figure 10 we show the values of $`\chi _r^2`$ achieved for each model and dataset relative to the lowest. The minimum $`\chi _r^2`$ achieved was around 0.8 for all datasets and models. This figure shows that the fits *a* and *b*, i.e. those in which $`a_{max}`$ varies on the superhump cycle, produce significantly better fits to the 1993 and 1995 observations. The variation of $`e`$ during the superhump cycle has little effect on the quality of these fits. There is less significant difference between the $`\chi ^2`$ achieved by the different fits to the 1991 observations, although allowing $`a_{max}`$ or $`e`$ to vary during the superhump cycle produces a better fit than when they are both constant. The significant reduction in $`\chi _r^2`$ achieved by allowing $`a_{max}`$ to vary implies that this model best represents the behaviour of the system. It is also interesting to compare how well each model predicts the variation in the eclipse width and O-C mid-eclipse times. The predictions of each model are plotted in Figure 8. It is periodic variation in these two eclipse characteristics which requires the disc to be eccentric. The models poorly reproduce the O-C variations in the 1991 data. While the phasing of the predicted variation agrees with the observations, the amplitude is too low. The fits in which $`e`$ varies during the superhump cycle predict a larger modulation in O-C times, a result of the larger eccentricity in these fits, but the agreement for these fits is still poor. The 1991 lightcurves suffer more from flickering than the 1993 and 1995 data, with many eclipses distorted as a result. This is the most likely explanation for the poor agreement between our model and the 1991 lightcurves. The agreement between the predicted and observed O-Cs is very good for all models for the 1993 observations. The variation in eclipse width is only reasonably modelled by those fits in which $`a_{max}`$ varies. The same is true of the 1995 fits. The result of these comparisons between the different models, both the formal comparison of reduced $`\chi ^2`$ and the more subjective ‘chi-by-eye’ considerations of the O-C times and eclipse widths is that the models in which $`a_{max}`$ varies during the superhump cycle predict the observations better than those in which $`a_{max}`$ is constant. All four models agree on three important points. The values of $`a_{max}`$, $`a_1`$ and $`a_2`$ show that the disc is large, filling at least about 50 per cent of the Roche lobe area. The disc is not axisymmetric. From the consistent values of $`\varphi _{disc}`$ we see that when the superhump reaches maximum light, the light centre of the disc is on the far side of the white dwarf from the donor star. ### 3.6 Eclipse mapping In Section 3.5 we used the changing eclipse profiles to constrain the parameters of a model intensity distribution. An alternative method for investigating the distribution of emission in the orbital plane is the commonly used eclipse mapping technique developed by Horne (1985). 
This method assumes that the intensity distribution is fixed in the corotating frame of the binary, lies flat in the orbital plane and is constant in time. The surface of the secondary star is described by its Roche potential surface. Maximum entropy methods are used to obtain the intensity distribution by comparing the calculated and observed lightcurves. The $`\chi ^2`$ statistic is used to ensure consistency between the observations and the fitted distribution, while the entropy is used to select the most appropriate solution from the multitude of possibilities. The entropy is usually defined such that the final solution is the smoothest or most axisymmetric map consistent with the observations. This technique has been widely used, and O’Donoghue (1990) employed it to locate the source of the strong normal superhumps in Z Cha. In order to study the shape of the precessing disc in V348 Pup, the PRIDA eclipse mapping code of Baptista & Steiner (1991) was modified so that the intensity distribution was fixed in the precessing disc frame rather than in the corotating frame of the binary. Each year’s data was split into two groups (Section 3.4), but the lightcurves were not folded on orbital phase. This enabled us to obtain two maps for each year, corresponding to the groups given in Table 5. Since we expect the intensity distribution to change throughout the superhump cycle, grouping the eclipses as described means that the intensity distribution should be roughly the same for all eclipses in a group, an assumption of the eclipse mapping method. The superhump modulation was subtracted from each lightcurve as in Section 3.5. Normalization of the lightcurves was achieved by using the values for total disc flux and uneclipsed flux for each orbit obtained during the fitting procedure in Section 3.5. The uneclipsed flux was subtracted from each orbit and the fluxes were then rescaled to produce an effective disc-only lightcurve. Various other normalization techniques were tested, and the detail of the reconstructed maps was sensitive to these changes. We used orbital parameters $`q=0.31`$ and $`i=81^{\circ }`$, and looked for the most axisymmetric solution consistent with the data. The most consistent result revealed by these eclipse maps is that the emission at superhump phase 0.5 is less centrally concentrated than at superhump phase 0. This is illustrated in Figure 11, which shows the maps for the two 1995 groups of eclipses. Figures 12a to 12c show the azimuthally averaged brightness distribution of each map. They show the flux at radius $`r`$, $`F_r`$, multiplied by $`r`$; this quantity is proportional to the total flux in an annulus at radius $`r`$. Figures 12b and 12c show how the disc extends further out at superhump phase 0.5 than at phase 0, while the curves in Figure 12a are both nearly the same, as expected, since both of these curves show the situation roughly half way between superhump phases 0 and 0.5. This result is in agreement with the results of our fits of model *a*, in which the disc size was allowed to change. These fits showed that the size of the emission region is larger at superhump phase 0.5 than at phase 0. The maps are asymmetric but, due to the sensitivity described above, we draw no conclusions from the detailed structure.
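A short sketch (ours) of the azimuthal averaging behind Figure 12, for an eclipse map sampled on a Cartesian grid centred on the white dwarf:

```python
# Sketch of the azimuthally averaged profiles r * F_r of Figure 12.
import numpy as np

def annular_profile(intensity, x, y, nbins=30):
    """Return radii and r * <F(r)>, proportional to the flux in each annulus."""
    r = np.hypot(x, y).ravel()
    f = intensity.ravel()
    edges = np.linspace(0.0, r.max(), nbins + 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    profile = np.empty(nbins)
    for k in range(nbins):
        sel = (r >= edges[k]) & (r < edges[k + 1])
        profile[k] = f[sel].mean() if sel.any() else np.nan
    return centres, centres * profile
```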
## 4 Discussion
The phase of the superhump relative to the conjunction of the line of centres of the system and the semi-major axis of the disc should make it possible to determine whether the bright spot model or the tidal heating model better explains the source of the superhump. The simplest tidal model predicts that the superhump light should peak when (or slightly after) the largest radius part of the disc coincides with the line of centres. This is because the tidal interaction is strongly dependent on distance from the secondary, and so will be most significant in regions where the disc extends out close to the L1 point. However, if the bright spot model is to be believed, then the superhump light source will be brightest when the accreting material has the furthest to fall. In other words, the superhump should occur when the stream impacts on the disc at its smallest radius. The mid-eclipse times shown in the top row of panels in Figure 8 show the eclipses to be earliest around superhump phase 0.75 in all cases. Assuming that the centre of light of the eccentric disc is offset from the white dwarf in the direction of the largest radius, we can deduce the disc orientation during these eclipses to be as shown in Figure 13a. A quarter of a superhump period later, the orientation of the disc has barely changed, the secondary will be lined up with the smallest radius part of the disc and the superhump phase will be 0.0 (Figure 13b). Therefore superhump maximum occurs when the secondary star is lined up with the smallest radius part of the disc. The values of $`\varphi _{disc}`$ in Table 7 agree with this deduction. This phasing is consistent with the bright spot model for the superhump emission but is inconsistent with the simple tidal heating model. In Section 3.4, we noted that the superhump light appears not to be emitted isotropically: the superhump is strongest when it occurs around orbital phase 0. This is easily explained if the major contribution to superhump light is the bright spot: the bright spot is most visible when it is on the near side of the accretion disc. Schoembs (1986) observed late superhumps in the eclipsing SU UMa dwarf nova OY Car, also a high-inclination system. When a superhump was coincident with a pre-eclipse orbital hump, the combined amplitude was greater than that predicted for a linear superposition of the individual amplitudes, i.e. OY Car’s late superhumps were strongest around orbital phase 0. However, van der Woerd et al. (1988) studied the dwarf nova VW Hyi, concluding that there was no correlation between the orbital phase and amplitude of late superhumps. Since VW Hyi has an intermediate inclination, about $`60^{\circ }`$ (Schoembs & Vogt 1981), the bright spot visibility need not vary with phase, so if the bright spot is the main superhump light source we expect no variation in superhump amplitude with orbital phase. Krzeminski & Vogt (1985) studied OY Car during a super-outburst and through variations in the O-C eclipse timings deduced the presence of an eccentric disc with phasing similar to that in V348 Pup. Krzeminski & Vogt’s definition of O-C time was criticized by Naylor et al. (1987), who concluded that the O-C evidence was weaker than originally thought. Schoembs (1986) followed OY Car from early in a super-outburst almost until the return to quiescence, observing the $`180^{\circ }`$ phase change from normal superhumps around the height of the outburst to late superhumps during the decline of the super-outburst. Patterson et al.
(1995) observed the same change in superhump phase late in a super-outburst of V1159 Ori. Hessman et al. (1992) studied OY Car at the end of a super-outburst. By looking at the varying hot spot eclipse ingress times, and considering the trajectory of the accretion stream, they concluded that the disc was eccentric. The orientation of the disc at superhump maximum was very similar to that which we find in V348 Pup. The broad waveform of these late superhumps in OY Car (Hessman et al.) was also similar to the superhump modulation in V348 Pup. Such similarity between the late superhumps in OY Car and the superhumps in V348 Pup is not surprising. Late superhumps in dwarf novae appear towards the end of the superoutburst, after the disc has had time to adjust to its high state. V348 Pup is persistently in a high state. Superhumps in a novalike system might resemble those towards which the superhumps in a superoutbursting dwarf nova would tend if it remained in superoutburst for a long time. It seems likely that the mechanism responsible for late superhumps in SU UMa systems is the same mechanism responsible for superhumps in V348 Pup. However, Skillman et al. (1998) observed strong superhumps in the nova-like TT Ari throughout 1997, whose waveform is triangular like those of normal superhumps in dwarf novae. There are many other studies of the disc structure in SU UMa stars during superoutburst. Vogt (1981) and Honey et al. (1988) found evidence for an eccentric precessing disc in Z Cha from the radial velocity variations of various absorption and emission lines respectively. The very prominent normal superhump in Z Cha made it possible for Warner & O’Donoghue (1988) to study the location of the superhump light source. They found strong departures from axisymmetry in the superhump surface brightness. O’Donoghue (1990) applied a modified eclipse mapping technique to Z Cha lightcurves and found the normal superhump light coming from three bright regions of the disc rim, located near the L1 point and the leading and trailing edges of the disc, concluding that the superhumps are tidal in origin, and that a highly eccentric disc with a smooth brightness distribution is not necessary to explain superhump behaviour. One anomalous eclipse did confine the superhump light source in Z Cha to the region of the quiescent bright spot. van der Woerd et al. (1988) concluded that the late superhumps in VW Hyi come from an optically thin plasma and could be a result of tidal interaction. In the SPH simulations of Murray (1996, 1998), pseudo-lightcurves are produced by assuming the heat produced by viscous dissipation to be radiated away where it is generated. These simulations reveal an extended superhump light source in the outer disc, while Murray (1996) reveals an additional superhump modulation which arises from the impact of the accretion stream with the edge of the disc occurring at a varying depth in the primary Roche potential. This additional weaker superhump modulation is approximately $`180^{\circ }`$ out of phase with the modulation due to tidal stressing, another similarity between the late superhumps in dwarf novae, the persistent superhumps in V348 Pup and the bright spot model. If we consider the stream to impact the disc at radius $`r`$ in a $`\frac{1}{r}`$ potential, then the luminosity, $`L`$, of the bright spot should vary roughly as $`\mathrm{\Delta }(\frac{1}{r})`$. Considering the change in $`r`$ as the disc with eccentricity $`e`$ precesses, we get $`\frac{\mathrm{\Delta }L}{L}\simeq 2e`$.
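This scaling follows in one line if the stream impact radius is taken to swing between $`a(1-e)`$ and $`a(1+e)`$ as the disc precesses (a simplifying assumption; the true impact radius depends on the stream trajectory):

$$\mathrm{\Delta }\left(\frac{1}{r}\right)=\frac{1}{a(1-e)}-\frac{1}{a(1+e)}=\frac{2e}{a(1-e^2)}\simeq \frac{2e}{a},\qquad \frac{\mathrm{\Delta }L}{L}\simeq \frac{\mathrm{\Delta }(1/r)}{1/a}=2e.$$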
The eccentricities we find are in the range 0.035–0.15, predicting superhump fractional amplitudes in the range 0.07–0.3. This is consistent with the measured superhump amplitudes in V348 Pup (Table 3). While we limit the conclusions drawn from our eclipse maps in Section 3.6, there are a number of points deserving consideration. Our eclipse maps do not show evidence of a bright spot, but this does not rule out the possibility that a bright spot is the source of the superhump light, for the following reasons. First, we subtracted the superhump modulation from the lightcurves before performing the eclipse mapping, which should reduce the contribution of the bright spot in the maps if it is the primary source of the superhump light. Also, our maps are fixed in the precessing disc frame, rather than the orbital frame of the system, so the hot spot should be blurred azimuthally in our maps by about $`70^{\circ }`$, corresponding to the eclipse width of the system of about 0.2 in orbital phase. There will be additional azimuthal blurring since the eclipses contributing to each map have a spread of disc orientations at mid-eclipse, corresponding to the values of $`\sigma _\varphi `$ in Table 5. Azimuthal structure in the maps is also suppressed by looking for the maximally axisymmetric solution. The eclipse maps tell us that the azimuthally averaged radial extent of the emission is lowest at superhump maximum, as shown in Figure 12. If this change in extent of the emission region is interpreted as a result of a changing disc size, then the smaller disc radius at superhump maximum is consistent with the bright spot model for the superhump light source. In the SPH models of Simpson & Wood (1998), the symmetry axis of the disc is aligned roughly perpendicular to the line of centres of the system when the superhump reaches maximum intensity. Inspecting their plots suggests that this model would lead to eclipses being earliest and widest at superhump phase 0, contrary to our findings. Simpson & Wood stress that their pseudo-lightcurves should be treated cautiously since no radiative processes were explicitly considered. The difference in mass ratio between V348 Pup and the values considered in Simpson & Wood’s simulations may affect the phasing of the early eclipses and superhump maximum, given that the predicted superhump waveform is sensitive to $`q`$. Furthermore, the spiral density waves in their simulations complicate the structure, so that simulated maps of the intensity may in fact produce reasonable agreement with our findings. SPH simulations (Murray 1996 & 1998, Simpson & Wood 1998) show the behaviour of tidally distorted discs to be more complicated than a simple eccentric disc, and with treatment of radiative processes the predictions are likely to become even more complicated. Once such models are developed further, comparisons with observation should provide a more complete understanding of superhump phenomena.
## 5 Summary
The eclipsing novalike cataclysmic variable V348 Pup exhibits positive superhumps. The period of these superhumps is in agreement with the relation between $`ϵ`$ and $`P_{\mathrm{orb}}`$ generally observed for superhumpers. Using the relation for $`q`$ as a function of the superhump period excess, $`ϵ`$ (Patterson 1998b), we estimate $`q=0.31`$. Using the eclipse width we then estimate an orbital inclination of $`i=81\stackrel{\circ }{.}1\pm 1\stackrel{\circ }{.}0`$. Variations in the O-C mid-eclipse times and eclipse widths strongly suggest that V348 Pup harbours a precessing eccentric disc.
We quantify this conclusion by fitting an eccentric disc model to the lightcurves. The relative orientation of the disc and secondary star at superhump maximum is more easily explained by the bright spot model for the superhump light source than by the tidal heating model. A correlation between the amplitude and orbital phase of the superhumps is also more easily explained by the bright spot model. Additional signals are detected at frequencies close to harmonics of the orbital frequency. The source of these variations currently lacks a conclusive explanation, but they could result from rotationally symmetrical structure in the disc, such as spiral waves, which have been predicted in simulations (e.g. Simpson & Wood 1998) and directly observed in the dwarf nova IP Peg (Steeghs, Harlaftis & Horne 1997). They may also be a result of the correlation between superhump amplitude and orbital phase. In a high-inclination system like V348 Pup, the eclipse of the superhump light source may also explain these signals. The phasing of the superhumps in V348 Pup is like that of the late superhumps detected at the end of superoutbursts in SU UMa systems such as OY Car and VW Hyi (Schoembs 1986 and van der Woerd et al. 1988). The broad form of the superhump and the link between superhump amplitude and orbital phase are also similar to the late superhumps in OY Car. By identifying the superhump mechanism in novalikes we are likely also to understand the mechanism for late superhumps in SU UMa systems. It is possible that this mechanism is different from that which produces common superhumps, generally accepted to be a result of tidal stresses acting on an eccentric disc (O’Donoghue 1990).
## Acknowledgments
The authors acknowledge the data analysis facilities at the Open University provided by the OU research committee and at the University of Sussex provided by the Starlink Project, which is run by CCLRC on behalf of PPARC. The OU computer support provided by Chris Wigglesworth and compiling assistance from Sven Bräutigam are also much appreciated. The PRIDA eclipse mapping software (Baptista & Steiner 1991) was used courtesy of Raymundo Baptista. DJR thanks Rob Hynes for being a mine of useful advice and information. We thank Eugene Thomas and Jessica Zimmermann, who carried out parts of the observations, and the support staff at CTIO for their sterling work. CAH thanks the Nuffield Foundation and the Leverhulme Trust for support. DJR is supported by a PPARC studentship.
# Mode identification of Pulsating White Dwarfs using the HST

Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract No. NAS5-26555.
## 1 Introduction
Observations of white dwarf stars are an important probe of stellar and galactic evolution. The properties of individual white dwarfs define the endpoints for models of stellar evolution, while the white dwarf luminosity function provides an observational record of star formation in our galaxy. For example, the coolest normal-mass white dwarfs are remnants of stars formed in the earliest epoch of star formation, so their cooling times can tell us the age of the galactic disk in the solar neighborhood (Winget et al. 1987, Wood 1992), as well as probe the effects of phase separation and crystallization at extreme densities (Chabrier et al. 1992, Segretain et al. 1994, Winget et al. 1997). As the number of pulsation modes detected in the pulsating white dwarfs is insufficient for an inverse solution of the structure of the star, we must identify the pulsation modes in order to compare with theoretical models and infer the structural parameters. A crucial step in determining the structure of a white dwarf from its pulsation periods is therefore to identify the pulsation modes correctly. The pulsation modes in our models are indexed with three integers ($`k`$, $`\ell `$, $`m`$), where $`k`$ represents the number of nodes in the pulsation eigenfunction along the radial direction, $`\ell `$ is the total number of node lines on the stellar surface, and $`m`$ is the number of node lines passing through the pulsation poles. Pulsation modes with different indices generally have different pulsation periods. The usual procedure for identifying the mode indices is (1) calculate theoretical pulsation periods in models of white dwarfs; (2) compare the pattern of theoretical periods to the observed pattern of periods; (3) adjust the models to bring the theoretical and observed patterns into closer agreement. The problems with this procedure are clear: it does not work for white dwarfs with only a few excited pulsation modes, as it places too few constraints on the stellar structure; and, given the complexity and sophistication of the theoretical calculations and the large number of possible pulsation modes, there is ample opportunity to misidentify modes. Other methods of mode identification must be used to avoid these problems.
## 2 Mode Identification Using Time-Resolved UV Spectroscopy
Time-resolved ultraviolet spectroscopy provides an independent method for determining the pulsation indices of white dwarfs. The amplitudes of g-mode pulsations depend strongly on $`\ell `$ at wavelengths shorter than 3000 Å. Figure 1 shows how the amplitude depends on wavelength and $`\ell `$ for the lowest-order modes of pulsating white dwarfs. The amplitude of all modes increases towards the ultraviolet, but the amplitude increases more for $`\ell =2`$ than for $`\ell =1`$. The differences are even greater for modes with higher $`\ell `$. Note the predicted $`180^{\circ }`$ phase flip of the $`\ell =4`$ mode for the DAV models at wavelengths shorter than 1500 Å, indicated by the negative amplitudes.
The increase in amplitude from optical to ultraviolet wavelengths is caused by two effects: the increasing effect of the temperature on the flux, and the increasing effect of limb darkening in the ultraviolet. The differences among the amplitudes of modes with different $`\ell `$ are caused mainly by limb darkening. The brightness variations of non-radially pulsating white dwarfs are due entirely to variations in effective temperature; geometric variations are negligible (Robinson, Kepler, & Nather 1982). The normal modes divide the stellar surface into zones of higher and lower effective temperature that can be described by spherical harmonics; modes of higher $`\ell `$ have more zones than those of lower $`\ell `$. From a distance, we can measure only the integrated surface brightness, which includes the effects of limb darkening, so modes of high $`\ell `$ are normally washed out by the cancellation of the different zones. But at ultraviolet wavelengths the effects of limb darkening increase drastically, decreasing the contribution of zones near the limb. Consequently, modes of higher $`\ell `$ are cancelled less effectively in the UV and their amplitudes increase more steeply at short wavelengths than those of low $`\ell `$. Theoretical calculations of the amplitudes require good model atmospheres but are entirely independent of the details of pulsation theory and white dwarf structure calculations. Robinson et al. (1995) used this method to determine $`\ell `$ for the pulsating DA white dwarf G117–B15A. They measured the amplitude of its 215 s pulsation in the ultraviolet with the HST high-speed photometer and identified it as an $`\ell =1`$ mode. With the correct value of $`\ell `$, they found that the mass of the surface hydrogen layer in G117–B15A was between $`1.0\times 10^{-6}`$ and $`8\times 10^{-5}M_{\odot }`$, too thick to be consistent with models invoking thin hydrogen layers to explain the spectral evolution of white dwarfs. They also found $`T_{\mathrm{eff}}=\mathrm{12\hspace{0.17em}375}\pm 125`$ K, substantially lower than the accepted temperature at that time, but close to the presently accepted temperature (Koester et al. 1994, Bergeron et al. 1995, Koester & Allard 2000). To extend these results, we observed the pulsating DA white dwarfs G226–29 (DN Dra) and G185–32 (PY Vul), and the DBV PG1351+489 (EM UMa), with the 10 sec/exposure RAPID mode of the (now decommissioned) Faint Object Spectrograph (FOS) of the Hubble Space Telescope. We used the blue Digicon detector and the G160L grating over the spectral region 1150 Å to 2510 Å.
## 3 Observations
### 3.1 G226–29
G226–29, also called DN Dra, LP 101–148, and WD 1647+591, is the brightest known pulsating DA white dwarf (DAV or ZZ Ceti star), with $`m_v=12.22`$. At a distance of just over 12 pc, it is the closest ZZ Ceti star (optical parallax of $`82.7\pm 4.6`$ mas, Harrington & Dahn 1980; Hipparcos parallax of $`91.1\pm 2.1`$ mas, Vauclair et al. 1997). Its pulsations were discovered by McGraw & Fontaine (1980), using a photoelectric photometer attached to the MMT telescope. They found a periodicity at 109 s with a 6 mma (milli modulation amplitude) amplitude near 4200 Å. Kepler, Robinson, & Nather (1983) used time-series photometry to solve the light curve and interpreted the variations as an equally spaced triplet with periods near 109 s.
The outer peaks have similar amplitudes, near 3 mma, and are separated by a frequency $`\delta f=16.14\mu `$Hz from the central peak, which has an amplitude of 1.7 mma. These results were confirmed by Kepler et al. (1995a), using the Whole Earth Telescope, who also showed that no other pulsations were present with amplitudes larger than 0.4 mma. G226–29 has the simplest mode structure, the second smallest overall pulsation amplitude, and the shortest dominant period of any pulsating white dwarf. For G226–29, the very short (109 s) period triplet led to a seismological interpretation of the structure by Fontaine et al. (1994), who show the star should have a thick hydrogen layer ($`\mathrm{log}q=\mathrm{log}(\mathrm{M}_\mathrm{H}/\mathrm{M}_{\odot })=-4.4\pm 0.2`$) if the observed triplet is the rotationally split $`\ell =1`$, $`k=1`$ mode. Kepler et al. (1995a), assuming an $`\ell =1`$, $`k=1`$ triplet, also derived a hydrogen layer mass of about $`10^{-4}\mathrm{M}_{\odot }`$. Higher $`k`$ values would imply an unreasonably thick hydrogen layer. Several recent spectroscopic studies show G226–29 to be one of the hottest of the ZZ Ceti stars, suggesting that we may be observing it as it enters the instability strip. The absolute effective temperature of this star is not settled, because one can derive two different effective temperatures for a given gravity using optical spectra, and also because there are uncertainties about the best convective efficiency to use in model atmospheres (Bergeron et al. 1992, 1995, Koester & Vauclair 1997). Fontaine et al. (1992) derive $`T_{\mathrm{eff}}=13,630\pm 200`$ K and $`\mathrm{log}g=8.18\pm 0.05`$, corresponding to a stellar mass of $`0.70\pm 0.03M_{\odot }`$, based on high signal-to-noise optical spectra and ML2/$`\alpha =1`$ model atmospheres. This effective temperature places G226–29 near the blue edge of their ZZ Ceti instability strip. Kepler & Nelan (1993) used published IUE spectra and optical photometry to derive $`T_{\mathrm{eff}}=\mathrm{12\hspace{0.17em}120}\pm 11`$ K, assuming $`\mathrm{log}g=8.0`$; their ZZ Ceti instability strip is much cooler, $`\mathrm{12\hspace{0.17em}640}\mathrm{K}\geq T_{\mathrm{eff}}\geq \mathrm{11\hspace{0.17em}740}\mathrm{K}`$. Koester & Allard (1993) use the Lyman $`\alpha `$ line profile to derive a parallax-consistent solution of $`T_{\mathrm{eff}}=\mathrm{12\hspace{0.17em}040}`$ K and $`\mathrm{log}g=8.12`$. Bergeron et al. (1995) found $`T_{\mathrm{eff}}=\mathrm{12\hspace{0.17em}460}`$ K and $`\mathrm{log}g=8.29`$ for an $`\alpha =0.6`$ ML2 model which fits the IUE and optical spectra simultaneously; their instability strip spans $`\mathrm{12\hspace{0.17em}460}\mathrm{K}\geq T_{\mathrm{eff}}\geq \mathrm{11\hspace{0.17em}160}\mathrm{K}`$, placing G226–29 on the blue edge. Koester, Allard & Vauclair (1995) show G226–29 must have nearly the same temperature as L 19–2 and G117–B15A, at about 12 400 K, in agreement with Kepler & Nelan (1993), Koester & Allard (1993), and Bergeron et al. (1995). Kepler et al. (1995b) found $`T_{\mathrm{eff}}=\mathrm{13\hspace{0.17em}000}\pm 110`$ K and $`\mathrm{log}g=8.19\pm 0.02`$, which corresponds to a mass of $`0.73M_{\odot }`$, from the optical spectra alone, using Bergeron’s ML2 model atmosphere. Giovannini et al. (1998), using the same optical spectra as Kepler et al., but using Koester’s ML2 model atmosphere, obtained $`T_{\mathrm{eff}}=\mathrm{13\hspace{0.17em}560}\pm 170`$ K and $`\mathrm{log}g=8.09\pm 0.07`$, which corresponds to a mass of $`0.66M_{\odot }`$, for a DA evolutionary model of Wood (1995).
Koester & Allard (2000) obtained $`T_{\mathrm{eff}}=\mathrm{12\hspace{0.17em}050}\pm 160`$ K and $`\mathrm{log}g=8.19\pm 0.13`$, using IUE spectra, the V magnitude and the parallax. This general agreement on the value of $`\mathrm{log}g`$ suggests the mass is around $`0.70M_{\odot }`$. The effective temperature is most probably 12 100 K, consistent with the IUE continuum, parallax and optical line profiles simultaneously. G226–29 was observed with the HST six times, each time for 3 hours, between September 1994 and December 1995. As the star is bright and fairly hot, the time-averaged spectrum from the total of 18.6 hrs of observation has a high signal-to-noise ratio (Figure 2).
### 3.2 G185–32
The largest-amplitude pulsations of G185–32 have periods of 71 s, 141 s and 215 s (McGraw et al. 1981). We observed G185–32 with HST for a total of 7.1 hr on 31 Jul 1995. The Fourier transform of the UV and zeroth order (see Section 4) light curves (Figure 3) shows the periods we have identified for this star.
### 3.3 PG1351+489
PG1351+489 is the DBV with the simplest pulsation spectrum, and therefore the one which requires the shortest data set to measure its amplitude. Its pulsations, discovered by Winget, Nather & Hill (1987), have a dominant period at 489 s and a peak-to-peak blue amplitude near 0.16 mag. The light curve also shows the first and second harmonics of this period ($`f_0`$), plus peaks at 1.47 $`f_0`$, 2.47 $`f_0`$ and 3.47 $`f_0`$, with lower amplitudes. We observed PG1351+489 for 4 consecutive orbits of HST, for a total of 2.67 hr. The ultraviolet and zeroth order (see Section 4) Fourier spectra (Figure 4) show only the 489 s period and its harmonic at 245 s above the noise, and a possible period at 599 s.
## 4 Zeroth Order Data
Although not much advertised by the STScI, the zeroth order (undiffracted) light from an object falls onto the FOS detector when using the G160L grating, and provides simultaneous photometry of the object with an effective wavelength around 3400 Å (see Figure 5) (Eracleous & Horne 1996). (The data can be extracted from pixels 620 to 645 of the c4 files.) The simultaneous photometry from the zeroth order light was crucial to the success of this project. As the zeroth order light has a counting rate around 100 times larger than the total light collected in the first order time-resolved spectra, it can also be used to search for low amplitude pulsations. In the searched range of 800 s to 20 s, no new periodicities were found for any star observed in this project, to a limit around 8 mma. The calibration pipeline of the HST data contains a transmission curve for the zeroth order data measured on the ground prior to launch (Figure 6), but our data are inconsistent with this transmission curve. We discuss this further in Section 9.
## 5 Data Set Problems
We detected two significant problems in the FOS data sets on G226–29, the first star we observed. First, we found a modulation of the total count rate at the $`3`$% level on a time scale similar to the HST orbital period (see Figure 7). This modulation is probably caused by a combination of factors. We used a 1 arcsec entrance aperture and a triple peak-up procedure to center the star in the aperture for the first 5 observations. The triple peak-up process yields a centering accuracy of only $`\pm 0.2`$ arcsec which, when coupled with the 0.8 arcsec PSF of the image, produces light loss at the edges of the aperture of at least a few percent.
As the position of the star image in the aperture wanders during the HST orbit, the amount of light lost at the aperture varies, modulating the detected flux. The second problem became evident when we compared the observed spectrum to the spectrum of G226–29 obtained with IUE and to model atmospheres for DA white dwarfs (Koester, Allard & Vauclair 1994). We found a spurious “bump” in the FOS spectrum in a 75 Å region just to the blue of 1500 Å. The bump is not subtle: it rises 25% above the surrounding continuum (see Kepler, Robinson & Nather 1995). The excess was caused by a scratch on the cathode of the FOS blue detector, in a region used only for the G160L grating, for which the pipeline flat field did not correct properly. The scratch is at an angle with respect to the diode array, so that the wavelength of the bump in the spectrum changes as the position of the spectrum on the cathode changes. The pipeline flat field was obtained with a 0.04 arcsec centering accuracy in the 4.3 arcsec aperture and is not accurate for any other aperture or position. For our sixth and last observation, we used the upper 1.0 pair aperture to minimize the flat fielding problem. A method for re-calibrating all post-COSTAR observations with the G160L has recently been devised, including an Average Inverse Sensitivity correction. (The AIS should be used with the STSDAS task calfos, using the FLX\_CORR omit option.) After re-calibration, there was significant improvement in our data, but there is still some problem with the flux calibration redwards of 2200 Å, as well as some scattered light into the Ly$`\alpha `$ core. To identify the pulsation modes in a white dwarf we need to know only the fractional amplitudes of the pulsations as a function of wavelength. As the fractional amplitudes are immune to multiplicative errors in the calibration of the spectrograms, and as both problems we found are multiplicative, these problems do not affect our results. Data for wavelengths shorter than 1400 Å have known scattered light correction problems which, being additive, reduce the accuracy of our measured amplitudes by an uncertain amount.
## 6 Models
The model atmospheres used to fit the time-averaged spectra, and to calculate the intensities at different angles with the surface normal, were calculated with a code written by Koester (Finley, Koester, Basri 1997). The code uses the ML2/$`\alpha `$=0.6 version of the standard mixing length theory of convection and includes the latest version of the quasi-molecular Lyman $`\alpha `$ opacity after Allard et al. (1994). This choice of convective efficiency allows for a consistent temperature determination from optical and ultraviolet time-averaged spectra (Bergeron et al. 1995, Vauclair et al. 1997, Koester & Allard 2000). ML2/$`\alpha =0.6`$ is also consistent with the wavelength dependence of the amplitude observed in G117–B15A by Robinson et al. (1995), according to Fontaine et al. (1996). To calculate the amplitudes of the pulsations, we require the specific intensities emitted by the white dwarf atmospheres as a function of wavelength, emitted angle, effective temperature, gravity, and chemical composition. Robinson, Kepler & Nather (1982) calculated the luminosity variations of g-mode pulsations, and Robinson et al. (1995) expanded the results to include explicitly an arbitrary limb darkening law $`h_\lambda (\mu )`$, where $`\mu =\mathrm{cos}\theta `$.
If we call the coordinates in the frame of pulsation $`(r,\mathrm{\Theta },\mathrm{\Phi })`$, the coordinates in the observer’s frame $`(r,\theta ,\varphi )`$, and assume

$$r=R_o(1+ϵ\xi _r)$$

with

$$\xi _r=\mathrm{Real}\{Y_{\ell m}(\mathrm{\Theta },\mathrm{\Phi })e^{i\sigma t}\}$$

and assume low amplitude adiabatic pulsations

$$\frac{\delta T}{T}=\nabla _{ad}\frac{\delta P}{P},$$

then the amplitude of pulsation at wavelength $`\lambda `$, $`A(\lambda )`$, defined as

$$A(\lambda )\mathrm{cos}\sigma t=\frac{\mathrm{\Delta }F(\lambda )}{F(\lambda )}$$

is given by

$$A(\lambda )=ϵY_{\ell m}(\mathrm{\Theta }_0,0)\left(\frac{1}{I_{0\lambda }}\frac{\partial I_{0\lambda }}{\partial T}\right)\left(R_0\frac{\delta T}{\delta r}\right)\times \frac{\int h_\lambda (\mu )P_{\ell }(\mu )\mu \,d\mu }{\int h_\lambda (\mu )\mu \,d\mu }$$ (1)

where $`I_{0\lambda }`$ is the sub-observer intensity, i.e., the intensity for $`\mathrm{cos}\theta =1`$, and $`P_{\ell }(\mu )`$ are the Legendre polynomials. We have defined $`(\mathrm{\Theta }_0,0)`$ as the coordinates of the observer’s $`\theta =0`$ axis with respect to the $`(r,\mathrm{\Theta },\mathrm{\Phi })`$ coordinate system. By taking the ratio $`A(\lambda )/A(\lambda _0)`$, we can eliminate the perturbation amplitude $`ϵ`$, the effect of the inclination between the observer’s line of sight and the pulsation axis \[$`Y_{\ell m}(\mathrm{\Theta }_0,0)`$\], and the term $`R_0(\delta T/\delta r)`$, all of which cancel out. The term $`\left(\frac{1}{I_{0\lambda }}\frac{\partial I_{0\lambda }}{\partial T}\right)`$ and the limb darkening function $`h_\lambda (\mu )`$ must be calculated by the model atmosphere code, and the amplitude ratio is then calculated by numerical integration. For g-mode pulsations, the amplitude ratio is therefore a function of $`\ell `$.
## 7 Fit to the Time Averaged Spectra
For G226–29, we fitted our high S/N average spectra with Koester’s model atmospheres, constrained by the HIPPARCOS parallax (Vauclair et al. 1997), to obtain $`T_{\mathrm{eff}}=\mathrm{12\hspace{0.17em}000}\pm 125`$ K, $`\mathrm{log}g=8.23\pm 0.05`$. The pure spectral fitting does not significantly constrain $`\mathrm{log}g`$, but confines $`T_{\mathrm{eff}}`$ to a narrow range. The parallax, on the other hand, very strongly constrains the luminosity, and (via the mass-radius relation and $`T_{\mathrm{eff}}`$) also the radius, and thus $`\mathrm{log}g`$ (Figure 8). For G185–32, Koester & Allard (2000) also show that the V magnitude and the parallax can be used to constrain the gravity, and they obtained $`\mathrm{log}g=7.92\pm 0.1`$ and $`T_{\mathrm{eff}}=\mathrm{11\hspace{0.17em}820}\pm 110`$ K. Our time-averaged HST spectrum (Figure 9) gives an effective temperature of $`T_{\mathrm{eff}}=\mathrm{11\hspace{0.17em}770}\pm 30`$ K for such a surface gravity. The time-averaged HST spectra alone cannot, for any star studied here, constrain both the effective temperature and the surface gravity simultaneously, and that is the main reason the parallax is used, when available, as a further constraint. For PG1351+489, whose spectrum is shown in Figures 10 and 11, we don’t have a parallax measurement, and pure He models give fits of similar quality for $`\mathrm{log}g=7.50`$ and $`T_{\mathrm{eff}}=\mathrm{24\hspace{0.17em}090}\pm 620`$ K, or $`\mathrm{log}g=7.75`$ and $`T_{\mathrm{eff}}=\mathrm{24\hspace{0.17em}000}\pm 210`$ K, or $`\mathrm{log}g=8.00`$ and $`T_{\mathrm{eff}}=\mathrm{23\hspace{0.17em}929}\pm 610`$ K.
Our quoted values are for a pure helium atmosphere but, to complicate things further, Beauchamp et al. (1999) can fit the optical spectra with $`T_{\mathrm{eff}}=\mathrm{26\hspace{0.17em}100}\mathrm{K}`$, $`\mathrm{log}g=7.89`$ for a pure helium model, but $`T_{\mathrm{eff}}=\mathrm{22\hspace{0.17em}600}\mathrm{K}`$, $`\mathrm{log}g=7.90`$ allowing some hydrogen, undetectable in the optical spectra.
## 8 Optical Data for G226–29
Since most of the ground-based time-series observations to date on G226–29 were obtained in white light with bialkali photocathode detectors, we obtained simultaneous UBVR time-series photometry using the Stiening photometer (Robinson et al. 1995) attached to the 2.1 m Struve telescope at McDonald Observatory, from March to June 1995, for a total of 39.2 hr of 1 sec exposures on the star. We then transformed the time base to Barycentric Julian Dynamical Time (BJDD) to eliminate the phase shift introduced by the motion of the Earth relative to the barycenter of the solar system. We calculated a Fourier transform of the intensity versus time for each of the UBVR colors, and measured the amplitudes and phases of the three modes, called $`P_0`$, $`P_1`$, and $`P_2`$, of the triplet around 109 s (Table 1). Even though we use this nomenclature, we are not assuming the three modes correspond to an $`\ell =1`$ mode split by rotation, as we are studying the $`\ell `$ value for each component independently. Mode $`P_0`$ has a period of 109.27929 s, $`P_1`$ has a period of 109.08684 s, and $`P_2`$ has a period of 109.47242 s. The phases of the three modes are the same in all filters, as expected for $`g`$-mode pulsations (Robinson, Kepler & Nather 1982). As the HST data on G226–29 were spread out over 16 months, the ephemeris of our previous optical data was not accurate enough to bridge the resulting time gaps, requiring us to obtain an additional optical data set to improve the accuracy of the pulsation ephemeris. We observed the star again with the McDonald Observatory 2.1 m telescope for 1.7 hr from 8–15 May 96, 1.4 hr on 7 Feb 97, and 2.7 hr on 6 May 1997, and for 13.6 hr from 3 Jun 1997 to 11 Jun 1997 using the 85 cm telescope at Beijing Astronomical Observatory with a Texas 3-star photometer. With this data set we were able to improve the ephemeris for the three pulsations enough to cover the HST data set and, using the 1995 data set, to extend back to the 1992 Whole Earth Telescope data set. Our new ephemeris, accurate from 1992 to 1997, is:

$$P_0=\mathrm{109.279\hspace{0.17em}299\hspace{0.17em}45}\mathrm{sec}\pm 3.3\times 10^{-6}\mathrm{sec},$$

$$T_{\mathrm{max}}^0(\mathrm{BJDD})=\mathrm{2\hspace{0.17em}448\hspace{0.17em}678.789\hspace{0.17em}330\hspace{0.17em}34}\pm 3.7\mathrm{sec};$$

$$P_1=\mathrm{109.086\hspace{0.17em}874\hspace{0.17em}54}\mathrm{sec}\pm 1.8\times 10^{-6}\mathrm{sec},$$

$$T_{\mathrm{max}}^1(\mathrm{BJDD})=\mathrm{2\hspace{0.17em}448\hspace{0.17em}678.789\hspace{0.17em}951\hspace{0.17em}19}\pm 2.0\mathrm{sec};$$

$$P_2=\mathrm{109.472\hspace{0.17em}385\hspace{0.17em}02}\mathrm{sec}\pm 6.5\times 10^{-7}\mathrm{sec},$$

$$T_{\mathrm{max}}^2(\mathrm{BJDD})=\mathrm{2\hspace{0.17em}448\hspace{0.17em}678.789\hspace{0.17em}541\hspace{0.17em}96}\pm 0.5\mathrm{sec}.$$

Our data set is not extensive enough to extend the ephemeris back to the 1980–1982 discovery data.
## 9 Ultraviolet Amplitudes
To analyze the HST data for pulsation time variability, we first integrated the observed spectra into one bin, summing over all wavelengths, to obtain the highest signal-to-noise ratio.
We then transformed the time base to Barycentric Julian Dynamical Time (BJDD) and calculated the Fourier transform of the intensity versus time. For all three stars we conclude that the ultraviolet (HST) data sets show only the pulsation modes previously detected at optical wavelengths.
### 9.1 G226–29
Figure 12 shows the Fourier spectra of the light curve of G226–29, converted to amplitude, and the effects of subtracting, in succession, the three pulsations we have detected. The periods used in the subtraction are those of our new ephemeris, but the phases and amplitudes were calculated with a linear least-squares fit to the HST data by itself. The residual after this process is probably due to imperfect “pre-whitening” and does not indicate the presence of other pulsations. The complex spectral window arises from the fact that the beat period between the pulsations is around 17 hr, while the length of each HST run was about 3 hr. We can identify the two largest modes, marked $`P_1`$ and $`P_2`$ in Figure 12, at 109.08 and 109.47 s, but the central peak, which has a smaller amplitude, is largely hidden in the complex spectral window. The Fourier transform shows that we cannot totally separate the smallest-amplitude pulsation, called $`P_0`$, from the two largest-amplitude pulsations, called the $`P_1`$ and $`P_2`$ modes; its amplitude and phase have large uncertainties in the HST data set. Note that we use the periods measured in the optical to prewhiten the light curves, as they are much more accurate than the HST values. The fact that the subtraction works confirms that the optical periods are the same as the ones found in the UV. After concluding that the ultraviolet (HST) data sets present only the pulsation modes previously detected, we integrated the observed spectra in 50 Å bins. After determining that the pulsations at all wavelengths were in phase, we fitted three sinusoids simultaneously to each wavelength bin, with phases fixed to the values obtained from the co-added spectra. Figure 13 shows the measured amplitudes. We then normalized the amplitude of the pulsations by their amplitude at 5500 Å to compare with the theoretical models. We used the zeroth order data set to check the amplitude at U, which has a similar wavelength, and noticed that, unlike in any other observation of the star, the ratio of amplitudes between modes $`P_1`$ and $`P_2`$ changed significantly, making the use of the UBVR measurements unreliable, as they were not simultaneous with the HST data. We therefore renormalized the optical data using the amplitude ratios derived from the zeroth order data, assuming its effective wavelength is 3400 Å $`\pm `$ 100 Å. The published transmission curve for the zeroth order data, convolved with the models, would demand an amplitude of pulsation much larger than observed. The ultraviolet efficiency of the mirror must be much lower than measured on the ground, but the effective wavelength is consistent with our measurements, to within our uncertainty of around 100 Å. Even though the central wavelength of the mirror is uncertain, the amplitude vs. wavelength changes only by a few percent over 100 Å, so we include the uncertainty in the wavelength as an uncertainty in the normalization. We note that the observed ultraviolet amplitudes were used to test this effective wavelength and not only are they consistent with it, they exclude any effective wavelength shorter than 3100 Å, as it would produce a much higher amplitude than observed.
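Since the phases are held fixed, the per-bin fit of three sinusoids is linear in the amplitudes and reduces to ordinary linear least squares. A minimal sketch of this step (our illustration, not the actual reduction code; `times`, `flux` and the phases are hypothetical placeholders, while the periods are those of the ephemeris above):

```python
import numpy as np

def bin_amplitudes(times, flux, periods, phases):
    """Amplitudes of sinusoids with known periods and fixed phases in one bin.

    flux ~ sum_i A_i * cos(2*pi*times/P_i - phi_i) + c is linear in the A_i
    and in the constant c, so a single linear least-squares solve suffices.
    """
    cols = [np.cos(2.0 * np.pi * times / p - ph) for p, ph in zip(periods, phases)]
    design = np.column_stack(cols + [np.ones_like(times)])
    coeffs, *_ = np.linalg.lstsq(design, flux, rcond=None)
    return coeffs[:-1]  # one amplitude per mode; the last coefficient is c

periods = np.array([109.08684, 109.27929, 109.47242])  # s: P1, P0, P2
phases = np.zeros(3)  # placeholders; in practice fixed from the co-added spectra
```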
Note that to calculate $`A(\lambda )/A(\lambda _{\mathrm{ref}})`$ we only need one amplitude, $`A(\lambda _{\mathrm{ref}})`$, for normalization \[see equation (1)\]. By using the zeroth order amplitude, we do not need the UBVR amplitudes. The original HST data set consists of 764 useful pixels from 1180 Å to 2508 Å, each with a width of 1.74 Å, but we can only measure reliable amplitudes for the bins redder than 1266 Å. We convolved the theoretical amplitude spectra (Figure 1) with the measurements summed into 50 Å bins over the ultraviolet, and with the UBVR transmission curves for the optical, obtaining amplitudes directly comparable to the normalized measurements (Table 2). We then proceeded with a least-squares fit of the observed amplitude vs. wavelength curves to the theoretical ones; for G226–29, modes $`P_1`$ and $`P_2`$ fit an $`\ell =1`$ g-mode, as shown in Figure 14, and fail to fit the other $`\ell `$ values shown. The central mode, $`P_0`$, fits $`\ell =1`$ best, but the data are too noisy to exclude $`\ell =2`$. We determined $`\ell `$, $`T_{\mathrm{eff}}`$, and $`\mathrm{log}g`$ independently for each of the three modes, $`P_0`$, $`P_1`$ and $`P_2`$, by fitting $`\mathrm{Amp}(\lambda )/\mathrm{Amp}(5500\mathrm{\AA })`$ for each periodicity to the model grid, with $`\ell `$, $`T_{\mathrm{eff}}`$ and $`\mathrm{log}g`$ as free parameters, by least squares. Each mode can be fitted by $`\ell =1`$ or $`\ell =2`$ with different $`T_{\mathrm{eff}}`$ and $`\mathrm{log}g`$, but all modes fit only one model, with $`T_{\mathrm{eff}}=\mathrm{11\hspace{0.17em}750}\pm 20`$ K and $`\mathrm{log}g=8.23\pm 0.06`$, for $`\ell =1`$. Note that the quoted uncertainties are only those of the least-squares fit and do not represent the true uncertainties. There are substantial systematic errors introduced by the normalization of the flux at 3400 Å and the HST flux calibration, as well as uncertainties due to the mixing-length approximation used in the model atmospheres (Bergeron et al. 1995, Koester & Vauclair 1997, Koester & Allard 2000), which is incapable of representing the true convection in the star at different depths (Wesemael et al. 1991, Ludwig, Jordan & Steffen 1994).
### 9.2 G185–32
For G185–32, a fit of the change in pulsation amplitude with wavelength (Table 3) with all three parameters $`T_{\mathrm{eff}}`$, $`\mathrm{log}g`$ and $`\ell `$ free resulted in $`\ell `$ being consistent with either 1 or 2, but the temperature and gravity required for $`\ell =2`$, $`T_{\mathrm{eff}}=\mathrm{13\hspace{0.17em}250}`$ K and $`\mathrm{log}g=8.75`$ (Figure 15), were inconsistent with those derived from the time-averaged spectrum itself, $`T_{\mathrm{eff}}=\mathrm{11\hspace{0.17em}750}`$ K and $`\mathrm{log}g=8.0`$. We therefore fixed the temperature and gravity to those derived using the time-averaged spectra, V magnitude and parallax (Koester & Allard 2000) and fitted the amplitude variation for $`\ell `$. $`\ell =1`$ is the best fit for all the modes, except the 141 s mode of G185–32, which does not fit any pulsation index because its amplitude does not change significantly in the ultraviolet (Figure 16), in contradiction to what is expected from the theoretical models for a DAV. As for G226–29 and PG1351+489, our normalization uses the amplitude of the zeroth order data because it has the same Fourier spectral window as the ultraviolet data.
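The $`\ell `$-discrimination in these fits ultimately comes from the angular factor in equation (1). A toy evaluation with a linear limb-darkening law $`h(\mu )=1-u(1-\mu )`$ (the $`u`$ values are illustrative only; the real fits use $`h_\lambda (\mu )`$ from the model atmospheres) shows how stronger limb darkening, as in the ultraviolet, weakens the cancellation of the higher-$`\ell `$ modes:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

mu = np.linspace(0.0, 1.0, 2001)
dmu = mu[1] - mu[0]
for u in (0.3, 1.0):                 # weak (~optical) vs strong (~UV) darkening
    h = 1.0 - u * (1.0 - mu)         # toy linear limb-darkening law
    norm = np.sum(h * mu) * dmu      # denominator of the angular factor
    for l in (1, 2, 3, 4):
        p_l = Legendre.basis(l)(mu)  # Legendre polynomial P_l(mu)
        factor = np.sum(h * p_l * mu) * dmu / norm
        print(f"u = {u:.1f}  l = {l}  disc factor = {factor:+.4f}")
```

With the stronger darkening the $`\ell =2`$ and $`\ell =3`$ factors grow relative to $`\ell =1`$, while the $`\ell =4`$ factor, negative for weak darkening, is driven towards zero, in line with the sign change behind the predicted $`\ell =4`$ phase flip in Figure 1.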
### 9.3 PG1351+489
For PG1351+489, a fit of the normalized ultraviolet amplitudes to the theoretical ones, for the main periodicity at 489 s and its harmonic at 245 s, yields an $`\ell =1`$ mode with $`T_{\mathrm{eff}}=\mathrm{22\hspace{0.17em}500}\pm 250`$ K and $`\mathrm{log}g=8.0\pm 0.10`$, or an $`\ell =2`$ mode with $`T_{\mathrm{eff}}=\mathrm{23\hspace{0.17em}100}\pm 250`$ K and $`\mathrm{log}g=7.5\pm 0.10`$. Figure 17 shows that the amplitude ratios are dependent on $`\mathrm{log}g`$, but again the temperature and $`\mathrm{log}g`$ determination cannot be untangled. As the optical spectra of Beauchamp et al. (1999) indicate $`\mathrm{log}g=7.9`$ for PG1351+489, and $`\mathrm{log}g8.0`$ for the whole DBV class, we conclude that the best solution is $`\ell =1`$ and $`\mathrm{log}g8.0`$. It is important to note that all pulsations have the same phase at all wavelengths, to within the measurement error of a few seconds; no phase shift with wavelength is detected, assuring that all geometric and some non-adiabatic effects are negligible. The main non-adiabatic effect is a phase shift between the motions (velocities) and the flux variation, not measurable in our data.
## 10 Discussion
For a DAV, an $`\ell =1`$ mode with a 109 s period requires a $`k=1`$ radial index from pulsation calculations, and therefore the model for G226–29 has to have a thick hydrogen surface layer, around $`10^{-4}\mathrm{M}_{\odot }`$ (Bradley 1998); the star itself may therefore have a thick hydrogen layer as well. The effective temperature derived from the pulsation amplitudes, $`T_{\mathrm{eff}}=\mathrm{11\hspace{0.17em}750}`$ K, and surface gravity, $`\mathrm{log}g=8.23`$, indicate a mass of $`(0.75\pm 0.04)M_{\odot }`$, according to the evolutionary models of Wood (1992). As all three modes fit $`\ell =1`$, they must be a triplet from a rotationally split $`\ell =1`$ mode. Even though the Bergeron et al. (1995) ML2/$`\alpha =0.6`$ instability strip runs $`\mathrm{12\hspace{0.17em}460}\mathrm{K}\geq T_{\mathrm{eff}}\geq \mathrm{11\hspace{0.17em}160}\mathrm{K}`$, we know by comparison of its spectrum with those of other ZZ Cetis that G226–29 is at the blue edge. As it is at the blue edge, such a low temperature indicates the instability strip is at a much lower temperature than previously quoted. Also, the observed low amplitude of the pulsations, their short period and the small number of pulsations all indicate it is at the blue edge for its mass, and the higher-than-average mass suggests a higher-temperature instability strip. According to Bradley & Winget (1994), the ML3 instability strip for a 0.75 $`M_{\odot }`$ white dwarf is at 13 100 K, 330 K hotter than for a 0.6 $`M_{\odot }`$ star. Giovannini et al. (1998) show that the observed instability strip does depend on mass, as Bradley & Winget (1994) predicted. One of the problems with the determination of an effective temperature for a star is that it may vary with the wavelength used in the determination; even though the effective temperature is a bolometric parameter, none of our observations are bolometric. The problem is dramatic for stars with surface convection layers, because of the effects of turbulent pressure on the photosphere. None of the model atmospheres calculated with mixing length theory can reproduce the physical non-local processes involved (Canuto & Dubovikov 1998), requiring different parameterizations at different depths (Ludwig, Jordan & Steffen 1994) and a fine tuning of the convection mixing-length coefficient for the wavelength region of interest (Bergeron et al. 1995, Koester & Vauclair 1997, Koester & Allard 2000).
The model atmospheres used assume the atmosphere is in hydrostatic and thermodynamic equilibrium, an assumption that must be examined because the timescale for convection is of the same order as the timescale for pulsation. The periodicity at 141 s for G185–32 does not change its amplitude significantly with wavelength and therefore does not fit any theoretical model. As its period is twice that of the 70.92 s periodicity, one must consider whether it is only a pulse-shape effect on the 70.92 s periodicity, but normally a pulse-shape effect occurs as a harmonic, not a sub-harmonic, because it normally affects the rise and fall of the pulse. We have no plausible explanation for this periodicity, showing that these stars still have much to teach us. The periodicity at 560 s rises slowly towards the ultraviolet, but as the HST data sets are short, its amplitude has a large uncertainty due to aliasing, as shown by the phase change from the ultraviolet ($`1.5\pm 8.2`$ s) to the zeroth order data ($`44.8\pm 9.6`$ s). Another surprise for G185–32 is the identification of the periodicities at 70.93 s and 72.56 s as $`\ell =1`$ modes. The pulsation models are consistent with such a short-period $`\ell =1`$, $`k=1`$ mode only for a total mass around $`1M_{\odot }`$. Higher $`k`$ values require even larger mass. But the mass determination from our time-averaged spectra, as well as the mass determinations by Koester & Allard (2000) using the IUE spectra plus V magnitude and parallax, or from the optical spectra of Bergeron et al. (1995), all give a normal mass, around 0.56 $`M_{\odot }`$. One possibility to resolve this difference is that the observed modes are a rotationally split $`k=0`$ mode, but that would require G185–32 to be a binary star, to account for the center-of-mass changes during pulsation. As G185–32 is not a known binary star, such an explanation requires the discovery of a companion, possibly with a high signal-to-noise red spectrum. The models, while useful, clearly lack some of the pulsation physics present in the star. The splitting of the 70.93 s and 72.56 s periodicities, assuming they are the m-splitting of a mode with the same $`k`$ and $`\ell `$, implies a rotation period around 26 min, but that of the 299.95 s and 301.46 s periodicities would imply a rotation period of 9.7 hr. In this estimate we used

$$P_{\mathrm{rot}}=\frac{1-C_{kl}^I}{\mathrm{\Delta }f}$$

where $`\mathrm{\Delta }f`$ is the frequency splitting, and used $`C_{kl}^I0.47`$ to 0.48 for $`k=1`$, $`\ell =1`$. The asymptotic value of $`C_{kl}^I`$ is 0.5, for $`k1`$. Either value for the rotation period is much shorter than for normal ZZ Ceti stars (around 1 day), so we must also consider the possibility that the 141 s periodicity, which does not follow the g-mode theoretical prediction and is harmonically related to the 70.93 s periodicity, arises from some other cause. The 141 s periodicity of G185–32 is a periodic brightness change that is not accompanied by a change in color, suggesting some kind of geometric effect. We are thankful to Bob Williams, the former Director of the STScI, for granting us director’s discretionary time for the project, to Jeffrey Hayes, our project scientist at STScI, for continuous help with the HST data reduction, and to Bill Welsh, from the University of Texas, for bringing the zeroth order data to our attention. Support for this work was provided by NASA through grant numbers GO-5581, GO-6011 and GO-6442 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555. Support to S.O.K.
was also provided by CNPq-Brazil. D.K. acknowledges financial support for work on HST observations from the DLR through grant No. 50 OR 96173. Jiang acknowledges financial support from the Chinese Natural Science Foundation, grant No. 19673008.
# Limits on the low energy antinucleon-nucleus annihilations from the Heisenberg principle.
## 1 Introduction
Recent experimental data have shown that at projectile momenta below 200 MeV/c the behavior of antinucleon-nucleus annihilations is quite different from what could be naively expected. For $`k`$ (incident momentum of the antinucleon in the laboratory) below 70 MeV/c there are no evident signs of an increase of the $`\overline{p}`$-nucleus total annihilation cross section with increasing mass number $`A`$ of the target. At 30-50 MeV/c the $`\overline{p}p`$ total annihilation rate is larger than the corresponding rates for $`\overline{p}`$D and $`\overline{p}^4`$He. The width and shift of the ground level of the antiprotonic Hydrogen atom are larger than the corresponding observables in antiprotonic Deuterium. For the $`\overline{p}p`$ scattering length $`\alpha \equiv \alpha _R+i\alpha _I`$ we have $`\alpha _R\simeq -\alpha _I\simeq `$ 0.7$`÷`$0.8 fm, and the $`\rho `$-parameter (i.e. the ratio between the real and the imaginary part of the forward scattering amplitude) is $`\simeq -1`$ at zero energy. These values mean that at small momenta the elastic interaction is $`repulsive`$ (i.e. negative phase shifts: the outgoing scattered wave is in advance with respect to the free-motion wave) and as important as the annihilation. Elastic and annihilation data for $`\overline{p}p`$ at laboratory momenta $`k`$ below 600 MeV/c, scattering length data and $`\rho `$-parameter data from 0 to 600 MeV/c can be well fitted by energy-independent optical potentials. These potentials present some $`very`$ curious features: (i) an increase of the strength of the imaginary part leads to a $`decrease`$ of the consequent reaction rate, and an increase in the radius of the imaginary part does not lead to a consistent increase of the reaction rate; (ii) a repulsive elastic amplitude is produced despite the real part of the potential being attractive; (iii) the annihilation rate is much more sensitive to the diffuseness parameter than to the strength or the radius. All this happens for $`k<200`$ MeV/c. We could suspect that these strange phenomena start from $`k\simeq 200`$ MeV/c, although they become experimentally evident at smaller momenta. The synthesis of the previous facts can be: stronger and attractive in principle, but weaker and repulsive in effect. We and other authors have presented explanations of these phenomena, which for simplicity we regroup under the name “inversion”. In particular, it has been shown that, within a multiple-scattering framework, double-interaction terms interfere destructively with single-interaction terms in $`\overline{p}`$D interaction. It has also been shown that in the simplified optical model potential $`V(r)=-iW`$ for $`r<R`$ (the “black sphere model”) the zero-energy reaction cross section is an increasing function of $`W`$ for small $`W`$ only, and decreases to zero for $`W\to \mathrm{\infty }`$. In a previous work we generalized the black-sphere analysis, showing that the inversion is associated with the formation of a sharp “hole” (i.e. a vacuum region with sharp boundaries, due to the annihilation) in the projectile wavefunction at small momenta. The underlying argument was not related to any specific model for the annihilation. It was stressed that this phenomenon is related to the transition from a semiclassical to a pure quantum, S-wave dominated, regime.
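The black-sphere behaviour quoted above is easy to check numerically. For an S-wave in a purely absorptive square well, $`V=-iW`$ for $`r<R`$ and zero outside, the zero-energy scattering length is $`\alpha =R-\mathrm{tan}(KR)/K`$ with $`K^2=2\mu (E-V)=2\mu iW`$ at $`E=0`$, and the low-energy annihilation cross section scales as $`-Im(\alpha )`$. A minimal sketch in units where $`\mathrm{\hbar }=2\mu =R=1`$ (our illustration, not taken from the references):

```python
import numpy as np

# Zero-energy S-wave scattering length of the absorptive ("black sphere") well
# V = -i*W for r < R:  alpha = R - tan(K*R)/K,  K = sqrt(2*mu*i*W).
# In units hbar = 2*mu = R = 1 this is K = sqrt(1j*W).  The annihilation cross
# section at small k scales as -Im(alpha), so we track -Im(alpha) versus W.
for W in (0.1, 1.0, 4.0, 16.0, 64.0, 256.0):
    K = np.sqrt(1j * W)
    alpha = 1.0 - np.tan(K) / K
    print(f"W = {W:6.1f}   Re(alpha) = {alpha.real:+.3f}   -Im(alpha) = {-alpha.imag:.3f}")
```

$`-Im(\alpha )`$ first grows with $`W`$ and then falls back towards zero, while $`Re(\alpha )`$ tends to the hard-sphere value $`R`$: at low energy a strong enough absorber behaves like a weakly absorbing repulsive core, which is precisely the "stronger in principle, weaker and repulsive in effect" synthesis above.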
We have also examined more specific explanations for the inversion; in the following, however, we would like to develop that general argument further, relating it to the Heisenberg principle $`\delta k\delta x>1`$ (in natural units). Both in Hydrogen and in heavier nuclear targets, the great bulk of the annihilations is supposed to take place within a region of thickness $`\mathrm{\Delta }\simeq 1`$ fm placed just outside the nuclear surface. Since a realized annihilation implies the statement “$`\overline{N}`$ and nucleus at relative distance $`r\simeq R_{nucleus}`$, defined within uncertainty $`\mathrm{\Delta }`$”, we expect strong deviations from semiclassical intuition at $`k<1/\mathrm{\Delta }\simeq 200`$ MeV/c.
## 2 The breaking of the saturation of the unitarity limit.
We begin by spending a few words on the so-called “black disk model”, which assumes complete flux removal from the lowest partial waves and gives the unitarity limit for the total reaction probability. At very low energies this model is a nonsense, because of a well-known limiting property of the phase shifts at zero energy. Indeed, the black disk model assumes $`|exp(2i\delta _o)|=0`$ for the S-wave phase shift $`\delta _o`$. But in the limit $`k\to 0`$ any requirement of the kind $`|exp(2i\delta _o)|<B`$, where $`B`$ is a constant smaller than 1, means $`Im(\delta _o)=-kIm(\alpha )\geq \mathrm{const}>0`$, i.e. $`Im(\alpha )\leq -\mathrm{const}/k\to -\mathrm{\infty }`$. This shows that the idea of complete flux removal, and with it the black disk model, are ill-defined concepts at low energies. Of course, one can artificially put $`|exp(2i\delta _o)|=0`$, but at small $`k`$ one will never be able to obtain this condition starting from a model with $`confined`$ interactions. So, in the presence of a $`very`$ effective reaction mechanism, as annihilation is, we expect that a scale $`k_b`$ exists for the projectile momentum $`k`$ such that: (i) for $`k>>k_b`$ the reaction cross section assumes values which are close to the unitarity limit; (ii) for $`k<<k_b`$ we witness a breaking of the saturation of the unitarity limit, i.e. the reaction cross section is much smaller than its unitarity-limit value. Assuming that the main distortions in the entrance-channel wavefunction are caused by the absorption, the uncertainty principle suggests $`k_b\simeq 1/\delta r`$, where $`\delta r`$ is the characteristic projectile path in nuclear matter. The consequent physics is very different depending on whether this path is peculiar to a nucleus-projectile or to a nucleon-projectile underlying process. In this respect neutron-induced nuclear reactions, and reactions like $`\overline{N}`$-nucleus annihilation or $`K^{-}`$-nucleus absorption, are exact opposites. In the former case the underlying projectile-nucleon interactions are elastic, although their effect is destructive on the full nuclear structure. The reaction process contains the piece of information “the projectile and the nuclear center of mass are at relative distance $`<R_{nucleus}`$”, i.e. $`\delta r=R_{nucleus}`$. In the latter case the nucleon-projectile interaction is so inelastic that the path of the projectile in nuclear matter is $`\mathrm{\Delta }\simeq R_{nucleon}`$, and the reaction process contains the piece of information “relative distance $`=R_{nucleus}\pm \mathrm{\Delta }`$”, i.e. $`\delta r\simeq \mathrm{\Delta }`$.
In both cases the information implicitely contained in the fact that the reaction has happened is uncompatible with the statement “the momentum of their relative motion was smaller than $`1/\delta r`$”. So, either the reaction can’t happen or we must pay a price, in terms of large-momentum distortions of the projectile wavefunction. These distortions produce a large flux reflection, as we show below, that is the reason for the departure from the saturation of the unitarity limit. ## 3 The general mechanism. We assume that the antinucleon-nucleus annihilation reaction is such a violent and effective process to make it necessary for the $`\overline{N}`$ wavefunction to be zero in all places where the value of the density of the nucleons is close to the nuclear matter value. In other words, as soon as the overlap between the distributions of probability for the antinucleon and for the target nucleons overcomes a certain threshold $`<<`$ 1 the annihilation is supposed to take place, with the practical consequence that any consistent overlap of the projectile and target wavefunctions is forbidden. Most models or phenomenological optical potential analyses agree on this property. This produces a thin spherical shell of thickness $`\mathrm{\Delta }`$ $``$ 1 fm (the exact size depends on the specific model) where the largest part of the annihilations is supposed to take place. We name it “annihilation shell”. The internal surface of the annihilation shell roughly coincides with the surface of the target nucleus or proton, in agreement with the idea that $`\overline{N}`$ and nuclear matter densities can’t overlap consistently. Depending on the model, the position of the external surface of the annihilation shell is related either with a minimum amount of overlap between antinucleon and nucleon densities required for annihilations, or with the range of a meson/baryon exchange between the annihilating particles. The target independence of $`\mathrm{\Delta }`$, together with the Heisenberg principle, produces a target-independent annihilation cross section. To undestand how it realizes, we start with some easy 1-dimensional examples. We consider a $`\overline{N}`$ plane wave with momentum $`\stackrel{}{k}`$ $`=`$ $`(0,0,k)`$ parallel to the $`z`$-axis. There is no interaction for $`z`$ $`>`$ 0, while for $`z`$ $`<`$ 0 absorption of the $`\overline{N}`$ flux is possible, according to some unknown mechanism. We don’t know how it happens, but we know that most of the flux that enters the absorbtion region disappears within a range $`\mathrm{\Delta }`$: $`|\mathrm{\Psi }(\mathrm{\Delta })|`$ $`<<`$ $`|\mathrm{\Psi }(0)|`$. The uncertainty principle implies that in this region the wavefunction has relevant components associated to a single particle momentum $`k_z`$ $``$ $`1/\mathrm{\Delta }`$. A consequence of this is the obvious geometrical fact that for the absolute value of the logarithmic derivative of $`\mathrm{\Psi }`$ we have $`|\mathrm{\Psi }^{}/\mathrm{\Psi }|`$ $``$ $`1/\mathrm{\Delta }`$ in the damping range $`\mathrm{\Delta }`$ $`<`$ $`z`$ $`<`$ 0, and consequently also in $`z`$ $`=`$ $`0ϵ`$. For matching this value with the value of the logarithmic derivative on the positive $`z`$ side, we need both an incoming and a reflected wave. The general form of $`\mathrm{\Psi }(z)`$ for $`z`$ $`>`$ 0 is $`\mathrm{\Psi }`$ $`=`$ $`\mathrm{\Psi }_osin[k(r\alpha )]`$ $``$ $`\mathrm{\Psi }_{in}+\mathrm{\Psi }_{out}`$, with $`\alpha `$ complex to give account of the reactions. 
In general $`\alpha `$ is a function of $`k`$, however we can identify it with the $`k`$-independent scattering length since we are interested in the region of small $`k`$, and we assume that no resonances are present in the $`k`$-range that we consider. Below we report standard calculations, but it is easy to understand the relevant points in advance. For $`z`$ $`>`$ 0, $`|\mathrm{\Psi }_{in}|^2`$ $``$ $`|\mathrm{\Psi }(z_p)|^2/4`$, where $`z_p`$ is the lowest positive $`z`$ value where the periodical $`\mathrm{\Psi }`$ attains an oscillation peak. $`|\mathrm{\Psi }(0)|^2`$ $`<<`$ $`|\mathrm{\Psi }(z_p)|^2`$ if $`|\mathrm{\Psi }^{}(0)/\mathrm{\Psi }(0)|`$ $`>>`$ $`k`$. As a consequence, for $`|\mathrm{\Psi }^{}(0)/\mathrm{\Psi }(0)|`$ $`>>`$ $`k`$ we have also $`|\mathrm{\Psi }(0)|^2`$ $`<<`$ $`|\mathrm{\Psi }_{in}|^2`$. In magnitude, $`|\mathrm{\Psi }(0)|^2/|\mathrm{\Psi }_{in}|^2`$ $``$ $`k^2|\mathrm{\Psi }^{}/\mathrm{\Psi }|^2`$ $``$ $`(k\mathrm{\Delta })^2`$ at small $`k`$. The ratio between the value of $`|\mathrm{\Psi }(0)|^2`$ and $`|\mathrm{\Psi }_{in}|^2`$ roughly coincides with the ratio between the absorbed and the incoming flux, or at least it represents an upper limit for this ratio. Indeed, only for $`z`$ $`<`$ 0 we may have flux absorption. The ratio of the absorbed to the incoming flux will be a number of magnitude $``$ 1 only in the case where the condition $`k`$ $`>>`$ $`1/\mathrm{\Delta }`$ is realized, because in this case the position $`z_p`$ will be close enough to the origin to have $`|\mathrm{\Psi }(0)|^2`$ $``$ $`|\mathrm{\Psi }(z_p)|^2`$. Then we are close to the saturation of the unitarity limit for the reaction: full flux absorption, possibly accompanied by elastically scattered diffractive flux (which originates in the interference between absorbed and incident waves). At $`k`$ $``$ $`1/\mathrm{\Delta }`$ we start diparting from the saturation of the unitarity limit, and for $`k`$ $`<<`$ $`1/\mathrm{\Delta }`$ we will be far from it. In the latter case the matching conditions associate a large $`|\mathrm{\Psi }^{}/\mathrm{\Psi }|_0`$ to a small flux absorption. As a by-product, elastic cross sections can be large, but they are refractive, not diffractive. If one wants to check the previous estimates with some calculations, one can normalize $`\mathrm{\Psi }`$ for $`z`$ $`>`$ 0 so to have $`\mathrm{\Psi }_{in}`$ $`=`$ $`e^{ikz}`$. Then $`\mathrm{\Psi }_o`$ $`=`$ $`e^{2ik\alpha }`$, and $`\mathrm{\Psi }_{out}`$ $`=`$ $`e^{ik(z2\alpha )}`$. Since the flux cannot be created, $`Im(\alpha )`$ $`<`$ 0. Then for $`z`$ $`>`$ 0 $$|\mathrm{\Psi }|^2=1+e^{4kIm(\alpha )}2e^{2kIm(\alpha )}cos\{2k[zRe(\alpha )]\}.$$ (1) In particular, when $`k|Im(\alpha )|`$ $`<<`$ 1 $`|\mathrm{\Psi }|^2`$ becomes $`22cos\{2k[zRe(\alpha )]\}`$, so that also in presence of absorption $`|\mathrm{\Psi }(z_p)|^2/|\mathrm{\Psi }_{in}|^2`$ $``$ 4 for $`k`$ small enough. When both $`k|Im(\alpha )|`$ $`<<`$ 1 and $`k|Re(\alpha )|`$ $`<<`$ 1 are satisfied we have $`|\mathrm{\Psi }(0)|^2`$ $``$ $`(4k)^2|\alpha |^2`$ $`<<`$ 1, thus confirming that for $`k`$ small enough $`|\mathrm{\Psi }(0)|^2`$ $``$ $`(k|\alpha |)^2`$. The logarithmic derivative of $`\mathrm{\Psi }`$ in $`z`$ $`=`$ 0 is $`kcotg(k\alpha )`$ $``$ $`1/\alpha `$ at small $`k`$, so that “$`k`$ small enough” means $`k`$ $`<<`$ $`|\mathrm{\Psi }^{}(0)/\mathrm{\Psi }(0)|`$. The conclusions of the examined example may change if we consider a reaction region which is limited to $`z_o`$ $`<`$ $`z`$ $`<`$ 0, i.e. 
for $`z`$ $`<`$ $`z_o`$ no particle absorption is possible. We remark that $`z_o`$ represents the size of the region where reactions are possible, while $`\mathrm{\Delta }`$ is the range needed for the projectile wavefunction to pass from $`\mathrm{\Psi }`$ $``$ $`\mathrm{\Psi }_osin[k(z\alpha )]`$ to $`\mathrm{\Psi }`$ $``$ 0. We must distinguish the two cases where $`z_o`$ is smaller or larger than $`\mathrm{\Delta }`$. In the former case the absorption is proportional to the thickness $`z_o`$ of the reaction region. But a saturation condition is reached when $`z_o`$ becomes larger than $`\mathrm{\Delta }`$, and for any $`z_o`$ $`>>`$ $`\mathrm{\Delta }`$ the conclusions will be the same as in the case $`z_o`$ $`=`$ $`\mathrm{}`$. It is now useful to notice that for $`z_o`$ $`>>`$ $`\mathrm{\Delta }`$ nothing would be changed by the introduction of the additional boundary condition $`\mathrm{\Psi }(z_o)`$ $`=`$ 0. This constraint obliges one to take into account the reflected wavefunction inside the reaction region, i.e. that component of $`\mathrm{\Psi }`$ whose absolute value increases at increasing negative $`z`$ inside the reaction region. But for $`z_o`$ $`>>`$ $`\mathrm{\Delta }`$ this component is very small and can be neglected. The latter situation with the additional “reflection” condition in $`z_o`$ corresponds to the 1-dimensional reduction of the 3-dimensional problem of $`\overline{N}N`$ and $`\overline{N}`$nucleus annihilation, because the damping of the projectile wavefunction takes place on a space scale which is short enough to prevent antinucleons from reaching the origin with any target. From a mathematical point of view the situation is identical in the two cases, after substituting $`\mathrm{\Psi }(z+z_o)`$ with $`r\mathrm{\Psi }(r)`$. In treating the problem, initially we neglect the role of a real $`strong`$ attracting potential. The modifications that it introduces will be considered in a further section. We define $`R_m`$ and $`\mathrm{\Delta }`$ such that practically all of the annihilations are supposed to take place at $`r`$ values comprised between $`r`$ $`=`$ $`R_m`$ and $`r`$ $`=`$ $`R_m\mathrm{\Delta }`$. We assume $`R_m`$ as a reasonable matching radius, satisfying the two conditions: (i) for $`r`$ $`>`$ $`R_m`$ the oscillations of $`\chi (r)`$ $``$ $`r\mathrm{\Psi }(r)`$ are mainly controlled by the sum of kinetic and Coulomb potential energy, and the distortions of $`\chi `$ due to the absorption are negligible; (ii) at smaller radii the situation becomes the opposite within a range $`<<`$ $`1/k`$. The interactions $`directly`$ responsible for the annihilation have range $`R`$ and decay exponentially for $`r`$ $`>`$ $`R`$ according to some $`exp(r/r_o)`$ law (e.g., for a Woods-Saxon potential $`R`$ is the radius and $`r_o`$ the diffuseness). Depending on the model, $`R_m`$ is normally 0.5-1 fm larger than $`R`$, suggesting that the relevant processes take place in the exponential tail of the annihilating forces. Clearly $`1/k`$ defines the “scale of space resolution” in the problem, and the following considerations can be applied for $`k`$ $`<<`$ $`1/r_o`$ only. Summarizing, in our problem we assume both the range $`r_o`$ characterizing the exponential damping of the inelastic interaction and the thickness $`\mathrm{\Delta }`$ of the annihilation shell to be much smaller than $`1/k`$, and assume the reasonable matching radius $`R_m`$ to be larger than $`\mathrm{\Delta }`$. 
With the previous definitions and assumptions, all the things that we have written about the “$`z`$problem with reflection condition” can be repeated word by word after substituting $`z+z_o`$ with $`r`$, $`\mathrm{\Psi }(z+z_o)`$ with $`\chi (r)`$ $``$ $`r\mathrm{\Psi }(r)`$, while $`r`$ $`=`$ $`R_m`$ corresponds to $`z`$ $`=`$ 0 and $`r`$ $`=`$ 0 to $`z`$ $`=`$ $`z_o`$. More properly however, $`k`$ is the wavenumber produced at $`r`$ $`=`$ $`R_m`$ by both the kinetic and the Coulomb potential energy. The saturation condition is expressed by $`R_m`$ $`>`$ $`\mathrm{\Delta }`$ $``$ 1 fm, and seems to be realized, as above discussed, in antinucleon annihilation on all possible targets, from proton to heavy nuclei. It implies that the reflected flux is negligible $`inside`$ the proton/nucleus target. The uncertainty principle assures that the dominating momentum components inside the reaction range are $``$ $`1/\mathrm{\Delta }`$. When this is transferred to the $`\overline{N}`$ wave it means $`|\chi ^{}/\chi |_{R_m}`$ $``$ $`1/\mathrm{\Delta }`$, with large flux reflection for $`k\mathrm{\Delta }`$ $`<<`$ 1. In the S-wave 1-dimensional reduction of the 3-dimensional scattering problem the reflected wave is a composition of both the scattered and of the untouched initial wave. The disappeared flux corresponds to inelastic reactions, and the ratio of this flux to the incoming one is $``$ $`(k\mathrm{\Delta })^2`$ for $`k\mathrm{\Delta }`$ $`<<`$ 1, in agreement with the previous example. A part of the reflected flux will correspond to elastic reactions, which are not diffractive because we are very far from the unitarity limit. The fact that the above ratio of the absorbed to the incoming flux tends to zero for $`k`$ $``$ 0 is not in contraddiction with a finite reaction rate, but target details are lost once $`k`$ $`<`$ $`1/\mathrm{\Delta }`$. ## 4 Predictions. It is easy to estimate upper limits for the complex scattering length $`\alpha `$ with the condition $$|\chi ^{}/\chi |_{R_mϵ}1/\mathrm{\Delta }.$$ (2) Using $`|\chi ^{}/\chi |_{R_m+ϵ}`$ $`=`$ $`kcotg[k(R_m\alpha )]`$ one finds, in the limit $`k`$ $``$ 0, $$|\mathrm{\Delta }|^2[R_mRe(\alpha )]^2+[Im(\alpha )]^2,$$ (3) that implies: $`|Im(\alpha )|`$ $``$ $`\mathrm{\Delta }`$ or smaller, $`Re(\alpha )`$ is positive and comprised in the range $`R_m\pm \mathrm{\Delta }`$. The consequence of this are: 1) $`\mathrm{\Delta }`$ (rather than $`R_{nucleus}`$) is the relevant parameter for the low energy reaction probability, which is proportional to $`|Im(\alpha )|`$. For $`k`$ $`<<`$ 100 MeV/c and for $`\overline{n}`$ projectiles the reaction probability should be roughly the same for any target nucleus radius, as far as the reaction is S-wave dominated (so, for $`k`$ $`<<`$ $`100/A^{1/3}`$ MeV/c), with magnitude $`\pi \mathrm{\Delta }/k_{cm}`$ $``$ $`6000\mathrm{\Delta }/k_{cm}`$ mb (with $`\mathrm{\Delta }`$ in fm and $`k`$ in MeV/c). For $`\overline{p}`$ projectiles the differences will be mostly due to the Coulomb effects, which have been estimated elsewhere. Both with $`\overline{n}`$ and with $`\overline{p}`$, $`Im(\alpha )`$ $``$ 1 fm (or smaller) for all nuclear targets. 2) $`Re(\alpha )`$ $``$ $`+R_m`$ $``$ $`+R_{nucleus}`$ means an $`\overline{n}`$nucleus total elastic cross section $``$ $`4\pi R_{nucleus}^2`$, and its positive sign is characteristic of a $`repulsive`$ interaction. Accordingly, the zero energy $`\rho `$parameter $`=`$ $`Re(\alpha )/Im(\alpha )`$ is negative. 
We can estimate $`Re(\alpha )`$ $``$ 1 fm and $`\rho `$ $``$ $`1`$ for light nuclei, $`Re(\alpha )`$ $``$ 1.3 $`A^{1/3}`$ fm and $`\rho `$ $``$ $`A^{1/3}`$ for heavy nuclei. Again, Coulomb effects enhance the total elastic cross section in the $`\overline{p}`$ case. 3) If one can identify subsets of $`\overline{N}N`$ annihilation events which are supposed to be characterized by different $`\mathrm{\Delta }`$parameters, the consequent low-energy cross sections should scale accordingly. E.g., $`\overline{p}p`$ $``$ $`2\pi `$ and $`\overline{p}p`$ $``$ $`2K`$ have been demonstrated to be characterized by different space scales, because of the different mass of the final states. If the characteristic annihilation distances, measured at $`k`$ $`>>`$ 200 MeV/c or estimated by some model, are $`\mathrm{\Delta }_1`$ and $`\mathrm{\Delta }_2`$, the ratio between the corresponding annihilation rates should be of magnitude $`(\mathrm{\Delta }_1/\mathrm{\Delta }_2)^2`$ at very small momenta. For all those reactions (e.g. $`K^{}`$ absorption on nuclear targets), where the absorption range inside nuclear matter is $``$ 1 fm, the same considerations apply. Relevant deviations from the previous predictions should be attributed to peculiarities of the external tail of the nuclear density (e.g. a longer tail in deuteron or <sup>3</sup>He, or a different proton/neutron composition at the surface). In the special case of neutron-halo nuclei the presence of a very long range tail in the nuclear matter distribution removes the basic assumptions of this work. ## 5 The role of elastic attracting potentials In presence of a real attracting potential surrounding the annihilation shell the actual zero-energy momentum at $`R_m`$ is determined by the potential energy. We must consider two very different cases, i.e. strong or Coulomb interactions. A strong elastic potential has nuclear characteristic range, so it does not escape the previous general considerations. Now the external surface of the annihilation shell should be displaced to include the region where the distortions of the projectile wavefunction of elastic origin are relevant. This may increase $`\mathrm{\Delta }`$ up to 2 fm. However, the convergence of the $`\overline{p}p`$, $`\overline{p}D`$, $`\overline{p}^4`$He and $`\overline{p}^20`$Ne annihilation cross sections to similar values at small momenta, all corresponding to scattering lengths $`<`$ 1 fm (after subtracting Coulomb effects) suggests that $`\mathrm{\Delta }`$ is smaller than 1 fm. The Coulomb potential has atomic range and so escapes all the previous considerations. In $`\overline{p}p`$ annihilations, Coulomb forces fix a minimum $`\overline{p}`$ kinetic energy of magnitude 1 MeV at the proton surface, corresponding to a momentum 40 MeV/c, that represents a scale for the true zero energy momentum we have to consider. At much smaller momenta all the modifications that we observe are due to electromagnetic or atomic effects. With nuclear targets, the Coulomb energy at the nuclear surface increases proportionally to $`Z/Z^{1/3}`$ $`=`$ $`Z^{2/3}`$, so the corresponding zero-energy momentum increases proportionally to $`Z^{1/3}`$. With very heavy nuclei the Coulomb momentum starts becoming comparable in magnitude to the Fermi momentum, introducing a completely different physics. Apart from this, Coulomb forces produce a large enhancement of the reaction and elastic cross sections by focusing the $`\overline{p}`$ wavefunction on the nucleus. This effect is widely discussed in other works. 
## 6 Conclusions We have shown that, within those models where the annihilation probability is large enough to prevent a consistent overlap between the projectile and the target wavefunctions, the antinucleon-nucleus annihilation cross section is largely target-independent, apart for Coulomb effects. The cause of this behavior is the quantum uncertainty principle, together with the fact that on most of the nuclear targets the process is characterized by the same value of the parameter $`\mathrm{\Delta }`$ $``$ 1 fm. $`\mathrm{\Delta }`$ is the thickness of the spherical shell surrounding the nucleus where the bulk of the annihilations are supposed to take place. For the scattering length $`\alpha `$ we have estimated $`Im(\alpha )`$ $``$ $`\mathrm{\Delta }`$, while $`Re(\alpha )`$ is positive and roughly coincides with the larger between the nuclear radius and $`\mathrm{\Delta }`$. We have also suggested that the ratio between the low energy annihilation rates relative to selected final states with different characteristical annihilation distances $`\mathrm{\Delta }_1`$ and $`\mathrm{\Delta }_2`$ should be $`\mathrm{\Delta }_1/\mathrm{\Delta }_2`$.
no-problem/0003/cond-mat0003351.html
ar5iv
text
# Stretched Exponential Relaxation on the Hypercube and the Glass Transition ## Figure captions
no-problem/0003/cond-mat0003083.html
ar5iv
text
# Nature of Ground State Incongruence in Two-Dimensional Spin Glasses ## Abstract We rigorously rule out the appearance of multiple domain walls between ground states in $`2D`$ Edwards-Anderson Ising spin glasses (with periodic boundary conditions and, e.g., Gaussian couplings). This supports the conjecture that there is only a single pair of ground states in these models. A fundamental problem in spin glass physics is the multiplicity of infinite-volume ground states in finite-dimensional short-ranged systems, such as the Edwards-Anderson (EA) Ising spin glass. In $`1D`$, there is no frustration and only a single pair of (spin-reversed) ground states. In the mean-field Sherrington-Kirkpatrick (SK) model , there are presumed to be (in some suitably defined sense) infinitely many ground state pairs (GSP’s) . One conjecture, in analogy with the SK model, is that finite $`D`$ realistic models with frustration have infinitely many GSP’s; for a review, see . A different conjecture, based on droplet-scaling theories , is that there is only a single GSP in all finite $`D`$. In $`2D`$ and $`3D`$, the latter scenario has received support from recent simulations, some based on “chaotic size dependence” and some using other techniques. In this paper, we provide a significant analytic step towards a resolution of this problem in $`2D`$, by ruling out the presence of multiple domain walls between ground states. We anticipate that the ideas and techniques introduced here will ultimately yield a solution to the problem of ground state multiplicity in two dimensions, and that at least some of them may prove to be useful in higher dimensions as well. Though our result is more general, we confine our attention to the nearest-neighbor EA Ising spin glass, with Hamiltonian $$_𝒥(\sigma )=\underset{x,y}{}J_{xy}\sigma _x\sigma _y,$$ (1) where $`𝒥`$ denotes a specific realization of the couplings $`J_{xy}`$, the spins $`\sigma _x=\pm 1`$ and the sum is over nearest-neighbor pairs $`x,y`$ only, with the sites $`x,y`$ on the square lattice $`𝐙^2`$. The $`J_{xy}`$’s are independently chosen from a mean zero Gaussian (or any other symmetric, continuous distribution with unbounded support) and the overall disorder measure is denoted $`\nu (𝒥)`$. A ground state is an infinite-volume spin configuration whose energy (governed by Eq. (1)) cannot be lowered by flipping any finite subset of spins. That is, all ground state spin configurations must satisfy the constraint $$\underset{x,y𝒞}{}J_{xy}\sigma _x\sigma _y0$$ (2) along any closed loop $`𝒞`$ in the dual lattice. In any $`L\times L`$ square $`S_L`$ (centered at the origin) with, e.g., periodic b.c.’s, there is (with probability one) only a single finite-volume GSP (the spin configurations of lowest energy subject to the b.c.). An infinite-volume ground state can be understood as a limit of finite-volume ones: consider the ground state $`\sigma ^{(L_0,L)}`$ inside any given $`S_{L_0}`$, but with b.c.’s imposed on $`S_L`$ and $`LL_0`$. An infinite-volume ground state (satisfying Eq. (2)) is generated whenever, for each (fixed) $`L_0`$, $`\sigma ^{(L_0,L)}`$ converges to a limit as $`L\mathrm{}`$ (for some sequence of b.c.’s, which may depend on the coupling realization). If many infinite-volume GSP’s exist, then a sequence as $`L\mathrm{}`$ of finite-volume GSP’s with coupling-independent b.c.’s will generally not converge to a single limit (i.e., $`\sigma ^{(L_0,L)}`$ continually changes as $`L\mathrm{}`$), a phenomenon we call chaotic size dependence . 
So a numerical signal of the existence of many ground states is that the GSP in $`S_L`$ with periodic b.c.’s varies chaotically as $`L`$ changes . It is important to distinguish between two types of multiplicity. The symmetric difference $`\alpha \mathrm{\Delta }\beta `$ between two GSP’s $`\alpha `$ and $`\beta `$ is the set of all couplings that are satisfied in one and not the other. A domain wall (always defined relative to two GSP’s) is a cluster (in the dual lattice) of the couplings satisfied in one but not the other state. So $`\alpha \mathrm{\Delta }\beta `$ is the union of all of their domain walls, and may consist of a single one or many. Two distinct GSP’s are incongruent if $`\alpha \mathrm{\Delta }\beta `$ has nonvanishing density in the set of all bonds; otherwise the two are regionally congruent. Incongruent GSP’s can in principle have one or more positive density domain walls, or instead infinitely many of zero density. If there are multiple GSP’s, the interesting, and physically relevant, situation is the existence of incongruent states. Regional congruence is of mathematical interest, but to see it would require a choice of b.c.’s carefully conditioned on the coupling realization $`𝒥`$. It is not currently known how to choose such b.c.’s. Numerical treatments that look for multiple GSP’s implicitly search for incongruent ground states, and it is the question of their existence and nature in $`2D`$ that we treat here. To state our result precisely, we introduce the concept of a metastate. For spin glasses, this was proposed in the context of low temperature states for large finite volumes (and shown to be equivalent to an earlier construct of Aizenman and Wehr ), and its properties were further analyzed in . In the current context, a (periodic b.c.) metastate is a measure on GSP’s constructed via an infinite sequence of squares $`S_L`$, with both the $`L`$’s and the (periodic) b.c.’s coupling-independent. Roughly speaking, the metastate here provides the probability (as $`L\mathrm{}`$) of various GSP’s appearing inside any fixed $`S_{L_0}`$. It is believed (but not proved) that different sequences of $`L`$’s yield the same (periodic b.c.) metastate. If there are infinitely many (incongruent) GSP’s, a metastate should be dispersed over them, giving their relative likelihood of appearance in typical large volumes. If there is no incongruence, the metastate would be unique and supported on a single GSP, and that GSP will appear in most (i.e., a fraction one) of the $`S_L`$’s . We now state the main result of this paper. It shows that if more than a single GSP is present in the periodic b.c. metastates, then two distinct GSP’s cannot differ by more than a single domain wall. After we present the proof of this statement, we will discuss why this result supports the existence of only a single GSP in $`2D`$. Theorem. In the $`2D`$ EA Ising spin glass with Hamiltonian (1) and couplings as specified earlier, two infinite-volume GSP’s chosen from the periodic b.c. metastates are either the same or else differ by a single, non-self-intersecting domain wall, which has positive density. We sketch the proof of this theorem in several steps; a full presentation will be given elsewhere . First, some elementary properties of (zero-temperature) domain walls: Lemma 1. A $`2D`$ domain wall is infinite and contains no loops or dangling ends. Proof. A domain wall between two spin configurations is a boundary separating regions of agreement from disagreement and thus cannot have dangling ends. 
To rule out loops, note that the sum $`_{<xy>}J_{xy}\sigma _x\sigma _y`$ along any such loop must have opposite signs in the two GSP’s, violating Eq. (2), unless the sum vanishes. But this occurs with probability zero because the couplings are chosen independently from a continuous distribution. We now construct a periodic b.c. metastate $`\kappa _𝒥`$, which will provide a measure on the domain walls between GSP’s (that appear in $`\kappa _𝒥`$). As in construction II of (but at zero temperature), consider for each square $`S_L`$, two sets of variables, the couplings $`𝒥^{(L)}`$ (chosen from, e.g., the Gaussian distribution) and the bond variables $`\sigma _x^{(L)}\sigma _y^{(L)}`$ for the GSP $`\pm \sigma ^{(L)}`$. Consider fixed sets of both random variables as $`L\mathrm{}`$; by compactness, there exists a subset of $`L`$’s along which the joint distribution converges to a translation-invariant infinite-volume (joint) measure. This limit distribution is supported on $`𝒥`$’s that arise from $`\nu `$, the usual independent (e.g., Gaussian) distribution on the couplings, and the conditional (on $`𝒥`$) distribution $`\kappa _𝒥`$ is supported on (infinite-volume) GSP’s for that $`𝒥`$. A metastate $`\kappa _𝒥`$ yields a measure $`𝒟_𝒥`$ on domain walls. This is done by taking two (replica) GSP’s from $`\kappa _𝒥`$ to obtain a configuration of (unions of) domain walls (i.e., the set of domain walls one would see from two GSP’s chosen randomly from $`\kappa _𝒥`$). If one then integrates out the couplings, one is left with a translation-invariant measure $`𝒟`$ on the domain wall configurations themselves. This leads to important percolation-theoretic features of domain walls between GSP’s in $`\kappa _𝒥`$. Some of these are stated in the following: Lemma 2. Distinct $`2D`$ GSP’s $`\alpha `$ and $`\beta `$ from $`\kappa _𝒥`$ must (with probability one) be incongruent and the domain walls of their symmetric difference $`\alpha \mathrm{\Delta }\beta `$ must be non-intersecting, non-branching paths, that together divide $`𝐙^2`$ into infinite strips and/or half-spaces. Proof. This lemma, from , uses a technique introduced in . First we note that by the translation-invariance of $`𝒟`$, any “geometrically defined event”, e.g., that a bond belongs to a domain wall, either occurs nowhere or else occurs with strictly positive density. This immediately yields incongruence. Suppose now that an intersection/branching occurs at some site $`z`$ (in the dual lattice). Then there are at least three (actually four) infinite paths in $`\alpha \mathrm{\Delta }\beta `$ that start from $`z`$, and they cannot intersect in another place, because that would form a loop, violating Lemma 1. But then translation-invariance implies a positive density of such $`z`$’s. The tree-like structure of $`\alpha \mathrm{\Delta }\beta `$ implies that in a square with $`p`$ such $`z`$’s, the number of distinct such paths crossing its boundary is at least proportional to $`p`$. Since $`p`$ scales like $`L^2`$, there is a contradiction as $`L\mathrm{}`$, because the number of distinct paths cannot be larger than the perimeter, which scales like $`L`$. Similar arguments complete the proof. The picture we now have for $`\alpha \mathrm{\Delta }\beta `$ is a union of one or more infinite domain walls (each of which divides the plane into two infinite disjoint parts) that neither branch, intersect, nor form loops, and that mostly remain within $`O(1)`$ distance from one another. 
We now begin a lengthy argument to show that there in fact cannot be more than a single domain wall. The first step is to introduce the notion of a “rung” between adjacent domain walls. A rung $``$ in $`\alpha \mathrm{\Delta }\beta `$ is a path of bonds in the dual lattice connecting two distinct domain walls, and with only the first and last sites in $``$ on any domain wall. So each of the couplings in $``$ is satisfied in both $`\alpha `$ and $`\beta `$ or unsatisfied in both. The energy $`E_{}`$ of $``$ is defined to be $$E_{}=\underset{<xy>}{}J_{xy}\sigma _x\sigma _y,$$ (3) with $`\sigma _x\sigma _y`$ taken from $`\alpha `$ (or equivalently, $`\beta `$). It must be that $`E_{}>0`$ (with probability one) for the following reason. Suppose that a rung could be found with negative energy; by translation-invariance (and arguments somewhat like those used for Lemma 2), there would then be an infinite set of rungs with negative energy connecting some two domain walls. Consider the “rectangle” that is bounded by two such rungs and the connecting domain wall pieces. The sum of $`J_{xy}\sigma _x\sigma _y`$ along the couplings in the two domain wall pieces would be positive in one of $`\alpha ,\beta `$ and negative in the other; hence, the loop formed by the boundary of this rectangle would violate Eq. (2) in $`\alpha `$ or $`\beta `$, leading to a contradiction. However, we can impose a more serious constraint on $`E_{}`$; namely that it must be bounded away from zero for all $``$ between two fixed domain walls. To explain this, we first consider a single arbitrary bond $`b`$, an $`S_L`$ large enough to contain $`b`$, a coupling realization $`𝒥^{(L)}`$ and the corresponding GSP $`\alpha ^{(L)}`$. Now let $`J_b`$ vary with all other couplings fixed. It is easy to see that there will be a transition value $`K_b^{(L)}`$ (which is a function of all the couplings in $`𝒥^{(L)}`$ except $`J_b`$) beyond which $`\alpha ^{(L)}`$ ceases to have minimum energy and is replaced by some $`\alpha ^{b,(L)}`$, related to $`\alpha ^{(L)}`$ by a droplet flip. The symmetric difference $`\alpha ^{(L)}\mathrm{\Delta }\alpha ^{b,(L)}`$ consists of a domain wall (the boundary of the droplet) passing through $`b`$ with exactly zero total energy when $`J_b=K_b^{(L)}`$. The droplet boundary may or may not reach the boundary of $`S_L`$. In other words, as $`J_b`$ varies from $`\mathrm{}`$ to $`+\mathrm{}`$, there are exactly two GSP’s ($`\alpha ^{(L)}`$ and $`\alpha ^{b,(L)}`$) that appear, one when $`J_b`$ is below $`K_b^{(L)}`$ and one when it is above. What happens when $`L\mathrm{}`$? As in the construction of metastates, we obtain a translation-invariant infinite-volume joint probability distribution on $`𝒥`$ (the couplings $`J_b`$), $`\alpha `$ (a GSP for $`𝒥`$), $`𝒦`$ (transition values $`K_b`$ for $`𝒥,\alpha `$) and $`\alpha ^{}`$ ($`\alpha ^b`$’s for $`𝒥,\alpha ,𝒦`$). In this limit: $`𝒥`$ is chosen from the usual disorder distribution $`\nu `$, then $`\alpha `$ from the metastate $`\kappa _𝒥`$ and finally $`𝒦`$ and $`\alpha ^{}`$ from some measure $`\kappa _{𝒥,\alpha }`$. The symmetric difference $`\alpha \mathrm{\Delta }\alpha ^b`$ may consist of a single finite loop or else of one or more infinite disconnected paths, but in all cases some part must pass through $`b`$. The lack of dependence of $`K_b^{(L)}`$ on $`J_b`$ implies that even after $`L\mathrm{}`$, $`K_b`$ and $`J_b`$ are independent random variables; this independence leads to the next two lemmas. Lemma 3. 
With probability one, no coupling $`J_b`$ is exactly at its transition value $`K_b`$. Proof. From the independence of $`J_b`$ and $`K_b`$, and the continuity of the distribution of $`J_b`$, it follows that there is probability zero that $`J_bK_b=0`$, much like in the proof of Lemma 1. Lemma 4. The rung energies $`E_{^{}}`$ between two fixed (adjacent) domain walls cannot be arbitrarily small; i.e., there is zero probability that $`E^{}`$, the infimum of all such $`E_{^{}}`$’s, will be zero. Proof. Were this not so, there would be (by translation-invariance arguments) an infinite set of rungs $`^{}`$ with $`E_{^{}}<ϵ`$, for any $`ϵ>0`$. That implies (by the “rectangular” construction below Eq. (3)) that each $`J_b`$ along the two domain walls would be at the transition value $`K_b`$, either for $`\alpha `$ or for $`\beta `$, violating Lemma 3. The next lemma relates the location of the droplet boundary, $`\alpha \mathrm{\Delta }\alpha ^a`$, when $`\alpha ^a`$ replaces $`\alpha `$, to the “flexibility” of $`a`$. The flexibility $`F_a`$ of a bond $`a`$ (in a $`(𝒥,\alpha ,𝒦,\alpha ^{})`$ configuration) is defined as $`|J_aK_a|`$; the larger the flexibility, the more stable is $`\alpha `$ under changes of $`J_a`$. Lemma 5. If $`F_b>F_a`$, then there is zero probability that $`\alpha \mathrm{\Delta }\alpha ^a`$ passes through $`b`$. Proof. For finite $`L`$, this is an elementary consequence of the fact that for $`e=a`$ or $`b`$, $`F_e^{(L)}|J_eK_e^{(L)}|`$ is the minimum, over all droplets whose boundary passes through $`e`$, of the droplet flip energy cost. After $`L\mathrm{}`$, such a characterization of $`F_e`$ may not survive, but what does survive is that $`\alpha \mathrm{\Delta }\alpha ^a`$ does not go through $`b`$. The next lemma completes our proof that for GSP’s $`\alpha `$ and $`\beta `$ chosen from $`\kappa _𝒥`$, $`\alpha \mathrm{\Delta }\beta `$ cannot consist of more than a single domain wall, since otherwise there would be an immediate contradiction with Lemma 4. For the proof, we need the notion of “super-satisfied”. It is easy to see that a coupling $`J_{xy}`$ is satisfied in every ground state if $`|J_{xy}|>`$min$`\{M_x,M_y\}`$, where $`M_x`$ is the sum of the three other coupling magnitudes $`|J_{xz}|`$ touching $`x`$, and $`M_y`$ is defined similarly. Such a coupling $`J_{xy}`$, called super-satisfied, clearly cannot be part of any domain wall. Lemma 6. There is zero probability that $`E^{}>0`$. Proof. Suppose $`E^{}>0`$ (with positive probability); we show this leads to a contradiction. First we find, as in Fig. 1, a rung $``$ with $`E_{}E^{}=\delta `$ strictly less than the flexibility values (for both $`\alpha `$ and $`\beta `$) of two couplings $`b_1,b_2`$ along the “left” of the two domain walls, $`b_1`$ “above” and $`b_2`$ “below” the rung. Such an $``$, $`b_1`$ and $`b_2`$ must exist by Lemma 3 (and translation-invariance arguments). But we also want a situation, as in Fig. 1, where all the (dual lattice) non-domain-wall couplings that touch the left domain wall between $`b_1`$ and $`b_2`$ (other than the first coupling $`J_a`$ in $``$) are super-satisfied, and remain so regardless of changes of $`J_a`$. How do we know that such a situation will occur (with non-zero probability)? If necessary, one can first adjust the signs and then increase the magnitudes (in an appropriate order) of these (ten) couplings, so that they first become satisfied and then super-satisfied. 
This can be done in an “allowed” way because of our assumption that the distribution of individual couplings has unbounded support. Also, this can be done without causing a replacement of either $`\alpha `$ or $`\beta `$, without changing $`E_{}`$, without decreasing any other $`E_{^{}}`$ and without decreasing the flexibilities of $`b_1`$ or $`b_2`$. Starting from a positive probability event, such an (allowed) change of finitely many couplings in $`𝒥`$ yields an event which still has non-zero probability. Next, suppose we move $`J_a`$ toward its transition value $`K_a`$ by an amount slightly greater than $`\delta `$. The geometry (of Fig. 1) and Lemma 5 forbid the replacement of either $`\alpha `$ or $`\beta `$, because it is impossible, under the conditions given, for $`\alpha \mathrm{\Delta }\alpha ^a`$ or $`\beta \mathrm{\Delta }\beta ^a`$ to connect to the left end of bond $`a`$. But this move reduces $`E_{}`$ below $`E_{^{}}`$ for any $`^{}`$ not containing $`a`$, contradicting translation-invariance. This completes the proof of the theorem: if distinct $`\alpha ,\beta `$ occur, they differ by at most a single domain wall. Although this does not yet rule out many ground states in the $`2D`$ periodic b.c. metastate, it greatly simplifies the problem by ruling out all but one possibility about how GSP’s may differ. We expect, though, that these single domain walls do not exist. There are reasonable arguments and conjectures indicating that this is so, and that even if they do exist, it remains unlikely that there exists an infinite multiplicity of states. We will discuss these in turn. First, we note that although, for technical reasons, we have not extended our proof to rule out single domain walls, our previous results indicate that it is natural to expect that the “pseudo-rungs” that connect sections of the domain wall that are close in Euclidean distance, but greatly separated in distance along the domain wall, can have arbitrarily low (positive) energies. If these “pseudo-rungs” also connect arbitrarily large pieces of the domain wall containing some fixed bond (and we emphasize that these properties are not yet rigorously proved), then single domain walls would be ruled out in a similar manner as above. The consequence would be that the periodic b.c. metastate in the $`2D`$ EA Ising spin glass with Gaussian couplings is supported on a single GSP. In the unlikely event that single positive-density domain walls do appear, our theorem could still rule out an infinite multiplicity of GSP’s in $`2D`$. This would be a consequence of the following conjecture (which presents an interesting problem in the topology of random curves): Conjecture: There exists no translation-invariant measure on infinite sequences $`(a_1,a_2,\mathrm{})`$ of distinct bond configurations on $`𝐙^2`$ such that each $`a_i`$ and each $`a_i\mathrm{\Delta }a_j`$ is a single, doubly-infinite, self-avoiding path. The above conjecture, if true, would rule out the presence of infinitely many distinct GSP’s $`\alpha _0,\alpha _1,\mathrm{}`$ (in one or more metastates for a given $`𝒥`$) since taking $`a_i=\alpha _0\mathrm{\Delta }\alpha _i`$ would contradict the conjecture. These considerations, taken together, make it appear unlikely that an infinite multiplicity of GSP’s, constructed from periodic (or antiperiodic ) boundary conditions, can exist for the $`2D`$ EA Ising spin glass with Gaussian (or similar) couplings.
no-problem/0003/hep-ph0003135.html
ar5iv
text
# Power-Suppressed Thermal Effects from Heavy Particles ## I Introduction One might naively expect the effects of heavy particles with mass $`M`$ much greater than the temperature $`T`$ to be suppressed by the Boltzmann factor $`e^{M/T}`$. However, in a quantum field theory, there are additional effects that are suppressed only by powers of $`T/M`$. In a recent series of papers , Matsumoto and Yoshimura have argued that there are power-suppressed terms in the number density of heavy particles. If there were such terms, they could greatly exceed the contribution from the conventional Boltzmann-suppressed terms when $`TM`$. This would have important implications for cosmology, because it would imply that the relic abundance of weakly-interacting massive particles is much larger than the conventional predictions based on the Boltzmann equation. Present bounds on the energy density of the universe would then imply significantly tighter constraints on the properties of the heavy particles that may constitute the cold dark matter. Matsumoto and Yoshimura have studied the power-suppressed effects in a simple model with two species of scalar particles, one heavy and one light. The only interaction of the heavy particle is one that allows pair-annihilation into light particles. In the first two papers in the series , Matsumoto and Yoshimura used the influence functional method and a Hartree approximation to derive a quantum kinetic equation for the momentum distribution of heavy particles. Their equation includes off-shell effects associated with the thermal width of the heavy particles. They found that the leading power-suppressed contribution to the heavy-particle number density was proportional to $`T^{7/2}/M^{1/2}`$. In their third paper , Matsumoto and Yoshimura studied the equilibrium number density of heavy particles and found that the leading power-suppressed term was actually proportional to $`T^6/M^3`$. In their fourth paper , they derived a new quantum kinetic equation that reproduces the equilibrium result of Ref. . Singh and Srednicki have criticized Matsumoto and Yoshimura’s conclusion that there are power-suppressed contributions to the number density of heavy particles. They argued that the quantum kinetic equation of Matsumoto and Yoshimura does not properly account for the interaction energy between the heavy particles and the thermal bath of light particles. They also noted that the power-suppressed terms found by Matsumoto and Yoshimura are actually contributions to the number density of virtual heavy particles. They argued that it is the number density of on-shell heavy particles that is relevant to the relic abundance, and this will have the usual Boltzmann suppression. Srednicki recently argued that Matsumoto and Yoshimura’s conclusion is the result of an inappropriate definition of the number density . Srednicki considered a similar model in which the real-valued heavy field is replaced by a complex-valued field, so that the heavy bosons have a conserved charge. He showed that there was a definition of the heavy-particle number density that had Boltzmann suppression to all orders in perturbation theory. Srednicki also suggested that the power-suppressed contributions to the heavy-particle energy density should have a simple interpretation in the effective field theory for the light particles obtained by integrating out the heavy particles. 
They are contributions to the energy density of light particles coming from nonrenormalizable effective interactions between the light particles. The approach followed by Matsumoto and Yoshimura has been to integrate out the light field to get an quantum kinetic equation for the heavy particles. The power-suppressed terms are then understood as arising from the thermal width acquired by the heavy particle when the light field is integrated out. The heavy particle no longer has a sharp energy-momentum relation, but is a resonance. The power-suppressed terms come from the tail of the spectral function of the resonance, where the energy and momentum of the heavy particle are both small compared to $`M`$. The philosophy of effective field theories suggests using the diametrically opposite strategy. Physics involving energies and momenta small compared to $`M`$ can be understood most simply by integrating out the heavy field. A heavy particle whose energy and momentum is small compared to $`M`$ is off its mass-shell by an amount of order $`M`$. By the uncertainty principle, it can remain in this highly virtual state only for a time of order $`1/M`$. Light fields can propagate only over distances of order $`1/M`$ in this short time. Thus the effects of the highly virtual heavy particle on light fields with momenta much smaller than $`M`$ can be taken into account through local interactions among the light fields. In other words, the light fields can be described by a local effective field theory. The effective field theory approach was used by Kong and Ravndal to compute the leading power-suppressed terms in the energy density for QED at temperature $`Tm_e`$, where $`m_e`$ is the electron mass. They first integrated out the electron field to get a low-energy effective Lagrangian for photons that includes the Euler-Heisenberg term. They then computed the energy density for this effective field theory at temperature $`T`$ and found that the leading term is proportional to $`\alpha ^2T^8/m_e^4`$. This term can be identified as a contribution to the photon energy density coming from the Euler-Heisenberg term in the effective Hamiltonian for photons. In this paper, we use similar effective field theory methods to study the power-suppressed thermal effects in the model considered by Matsumoto and Yoshimura. We show that the power-suppressed terms in the energy density can indeed be interpreted as contributions to the energy density of light particles from nonrenormalizable effective interactions. We also show that the term in the energy density from which Matsumoto and Yoshimura extracted the heavy-particle number density can be eliminated by a field redefinition and therefore can not have any physical significance. This paper is organized as follows. We introduce the pair-annihilation model of Matsumoto and Yoshimura in section II and summarize their results on contributions to the energy density that are suppressed by powers of $`T/M`$. In section III, we construct an effective Lagrangian for the light field by integrating out the heavy field. We compute the leading power-suppressed terms in the energy density by differentiating the free energy density for the effective theory in equilibrium at temperature $`T`$. In section IV, we use a field redefinition to construct an effective Hamiltonian for the light field. We show that the thermal average of the effective Hamiltonian density reproduces the leading power-suppressed terms in the energy density. We summarize our results in section V. 
## II Bosonic Model with pair annihilation The model studied by Matsumoto and Yoshimura contains two species of spin-zero particles: a heavy particle of mass $`M`$ described by the field $`\phi `$ and a massless particle described by the field $`\chi `$. The Lagrangian density is $``$ $`=`$ $`_\chi +{\displaystyle \frac{1}{2}}_\mu \phi ^\mu \phi {\displaystyle \frac{1}{2}}M^2\phi ^2{\displaystyle \frac{1}{4}}\lambda \phi ^2\chi ^2,`$ (1) where $`_\chi `$ is the Lagrangian for the light field: $`_\chi `$ $`=`$ $`{\displaystyle \frac{1}{2}}_\mu \chi ^\mu \chi {\displaystyle \frac{1}{24}}\lambda _\chi \chi ^4.`$ (2) We have suppressed the counterterms needed to remove ultraviolet divergences. The symmetry $`\phi \phi `$ guarantees that the heavy particle is stable at zero temperature. The heavy particles can be created or annihilated in pairs via the $`\phi ^2\chi ^2`$ interaction. Renormalizability also requires a $`\phi ^4`$ self-interaction, but we assume that its coefficient is much smaller than $`\lambda `$, so it can be neglected. This is a great simplification, because the Lagrangian is then quadratic in the heavy field $`\phi `$. The $`\chi ^4`$ term in (1) is necessary for thermalization of the light field, but since we are primarily interested in the effects of the heavy field, we will carry out explicit calculations only to zeroth order in $`\lambda _\chi `$. The energy density is the ensemble average of the Hamiltonion density: $`\rho =`$. Matsumoto and Yoshimura divide the Hamiltonian density into three terms: $`=_\chi +_\phi +_{\mathrm{int}}`$, where $`_\chi `$ $`=`$ $`{\displaystyle \frac{1}{2}}\dot{\chi }^2+{\displaystyle \frac{1}{2}}(\chi )^2+{\displaystyle \frac{1}{24}}\lambda _\chi \chi ^4,`$ (3) $`_\phi `$ $`=`$ $`{\displaystyle \frac{1}{2}}\dot{\phi }^2+{\displaystyle \frac{1}{2}}(\phi )^2+{\displaystyle \frac{1}{2}}M^2\phi ^2,`$ (4) $`_{\mathrm{int}}`$ $`=`$ $`{\displaystyle \frac{1}{4}}\lambda \phi ^2\chi ^2.`$ (5) We have suppressed the counterterms required to renormalize the composite operators so that their expectation values vanish at zero temperature. Matsumoto and Yoshimura interpreted the corresponding three terms in $`\rho =\rho _\chi +\rho _\phi +\rho _{\mathrm{int}}`$ as the energy density of the “thermal environment”, the energy density of the “system” consisting of heavy particles, and the interaction energy density, respectively. At zeroth order in $`\lambda `$, the energy density of the heavy particles is that of an ideal nonrelativistic gas, $$\rho _\phi =M(MT/2\pi )^{3/2}e^{M/T},$$ (6) which exhibits the usual Boltzman suppression. At second order in $`\lambda `$, there are terms that are suppressed only by powers of $`T/M`$. Matsumoto and Yoshimura calculated the leading power-suppressed contributions for each of the 3 terms in the energy density : $`\delta \rho _\chi `$ $`=`$ $`{\displaystyle \frac{1}{69120}}\lambda ^2{\displaystyle \frac{T^6}{M^2}},`$ (7) $`\delta \rho _\phi `$ $`=`$ $`{\displaystyle \frac{1}{69120}}\lambda ^2{\displaystyle \frac{T^6}{M^2}},`$ (8) $`\delta \rho _{\mathrm{int}}`$ $`=`$ $`{\displaystyle \frac{\pi ^2}{64800}}\lambda ^2{\displaystyle \frac{T^8}{M^4}}.`$ (9) The terms proportional to $`\lambda ^2T^6`$ terms cancel between $`\delta \rho _\chi `$ and $`\delta \rho _\phi `$ and, so the leading power-suppressed term of order $`\lambda ^2`$ in the total energy density is proportional to $`\lambda ^2T^8`$. 
Matsumoto and Yoshimura noted this cancellation, but nevertheless interpreted $`\delta \rho _\phi `$ in (8) as a contribution to the energy density of heavy particles. Taking the heavy particles to be nonrelativistic with energy equal to $`M`$, they identified $`\delta \rho _\phi /M`$ as a contribution to the heavy-particle number density proportional to $`T^6/M^3`$. Singh and Srednicki have argued that the separation of the energy density into three terms corresponding to the system, the environment, and interactions is reasonable if and only if $`|\rho _{\mathrm{int}}|\rho _\phi `$. If this condition is not satisfied, the coupling between the system and the environment is effectively strong and they cannot be clearly separated. Note that this condition is satisfied by the power-suppressed terms (8) and (9) if $`TM`$. ## III Effective Lagrangian In this section, we construct a low-energy effective Lagrangian for the light field $`\chi `$. This effective Lagrangian reproduces the zero-temperature Green functions at momentum scales much less than $`M`$. The effects of the heavy fields are reproduced by nonrenormalizable interactions with coefficients that are suppressed by powers of $`1/M^2`$. An effective Lagrangian for the light particles can be constructed by using functional methods to integrate out the heavy field. This method is particularly convenient for the model of Matsumoto and Yoshimura, because the Lagrangian is quadratic in the field $`\phi `$. The effective action for the light field can be defined by a functional integral over the heavy field: $$\mathrm{exp}(iS_{\mathrm{eff}}[\chi ])𝒟\phi \mathrm{exp}(id^4x).$$ (10) This effective action shouldn’t be confused with the 1PI effective action that generates one-particle-irreducible Green functions. Since $``$ is quadratic in $`\phi `$, the functional integral can be evaluated explicitly: $$S_{\mathrm{eff}}[\chi ]=d^4x_\chi +\frac{i}{2}\mathrm{ln}det\left(^2M^2\frac{\lambda }{2}\chi ^2+iϵ\right).$$ (11) The effective action can be expanded in powers of the coupling constant $`\lambda `$: $$S_{\mathrm{eff}}[\chi ]=d^4x_\chi +\frac{i}{2}\mathrm{ln}det(^2M^2+iϵ)+\underset{n=1}{\overset{\mathrm{}}{}}S_{\mathrm{eff}}^{(n)}[\chi ].$$ (12) The term of $`n`$’th order in $`\lambda `$ is $$S_{\mathrm{eff}}^{(n)}[\chi ]=\frac{i\lambda ^n}{2^{n+1}n}\mathrm{tr}\left[(^2M^2+iϵ)^1\chi ^2\right]^n.$$ (13) The $`\mathrm{ln}det(^2M^2)`$ term in (12) can be discarded, because it is just a $`\chi `$-independent constant. The only contribution from $`S_{\mathrm{eff}}^{(1)}`$ is the local functional $`d^4x\chi ^2`$ with a divergent coefficient. It can be absorbed into the mass counterterm for the light field. The terms $`S_{\mathrm{eff}}^{(n)}`$ for $`n2`$ are nonlocal functionals of the $`\chi `$ field. Since we are interested in light fields with characteristic momenta much smaller than $`M`$, we can use the derivative expansion to express $`S_{\mathrm{eff}}^{(n)}`$ as an infinite series of local functionals. The derivative expansion is illustrated in Appendix A by expanding $`S_{\mathrm{eff}}^{(2)}`$ to all orders. The lowest derivative term from $`S_{\mathrm{eff}}^{(2)}`$ is a $`\chi ^4`$ term. It has a divergent coefficient and can be absorbed into the counterterm for the $`\chi ^4`$ term in the Lagrangian. The remaining terms have finite coefficients suppressed by powers of $`1/M^2`$. They represent nonrenormalizable interactions among the light fields induced by virtual heavy particles. 
The effective action can now be expressed as the integral of an effective Lagrangian: $`S_{\mathrm{eff}}[\chi ]=d^4x_{\mathrm{eff}}`$, where $`_{\mathrm{eff}}`$ $`=`$ $`_\chi {\displaystyle \frac{\lambda ^2}{96(4\pi )^2M^2}}\chi ^2^2\chi ^2+{\displaystyle \frac{\lambda ^2}{960(4\pi )^2M^4}}\chi ^2(^2)^2\chi ^2{\displaystyle \frac{\lambda ^3}{96(4\pi )^2M^2}}\chi ^6+\mathrm{}.`$ (14) We have suppressed the counterterms required to remove ultraviolet divergences, and we have kept all terms proportional to $`\lambda ^m(1/M^2)^n`$ with $`m+n4`$. If we consider an observable involving a single momentum scale $`pM`$, terms in $`_{\mathrm{eff}}`$ proportional to $`\lambda ^m(1/M^2)^n`$ will give effects suppressed by $`\lambda ^m(p/M)^{2n}`$. The $`\chi ^6`$ term in (14) will therefore be comparable in importance to the $`\chi ^2(^2)^2\chi ^2`$ term if $`\lambda (p/M)^2`$. A similar strategy can be used at nonzero temperature $`T`$ to compute power-suppressed contributions to equilibrium observables if $`TM`$. Such observables can be expressed as Euclidean functional integrals over the fields $`\phi `$ and $`\chi `$ with periodic boundary conditions in the Euclidean time direction. To calculate the power suppressed terms, we would first integrate over $`\phi `$, then expand in powers of $`\lambda `$, then carry out the derivative expansion in powers of $`1/M^2`$, and finally integrate over $`\chi `$. The first three steps reduce the problem to a calculation in an effective theory for $`\chi `$, and the final step of integrating over $`\chi `$ corresponds to computing the thermal average in that effective theory at temperature $`T`$. This strategy will reproduce all the terms that are suppressed by powers of $`T/M`$, but it will not give any Boltzmann-suppressed terms, because the expansion in powers of $`1/M^2`$ eliminates terms with an essential singularity at $`1/M=0`$. The simplest way to compute the power-suppressed terms in the energy density $`\rho `$ is to calculate the corresponding terms in the free energy density $``$ and then differentiate. The power-suppressed terms in $``$ can be obtained simply by computing the free energy density at temperature $`T`$ for the theory defined by the effective Lagrangian (14). The leading terms are given by vacuum diagrams whose only vertex is one of the power-suppressed interactions in (14), and they can be written $`\delta `$ $`=`$ $`{\displaystyle \frac{\lambda ^2}{96(4\pi )^2M^2}}\chi ^2^2\chi ^2_{\mathrm{free}}{\displaystyle \frac{\lambda ^2}{960(4\pi )^2M^4}}\chi ^2(^2)^2\chi ^2_{\mathrm{free}}`$ (16) $`+{\displaystyle \frac{\lambda ^3}{96(4\pi )^2M^2}}\chi ^6_{\mathrm{free}}+\mathrm{},`$ The angular brackets $`\mathrm{}_{\mathrm{free}}`$ denote the thermal average in the free field theory. These thermal averages are expressed as Matsubara sum-integrals in Appendix B. The first term on the right hand side of (16) is zero. The remaining two terms give $`\delta `$ $`=`$ $`{\displaystyle \frac{1}{1024}}\left({\displaystyle \frac{16}{225}}\lambda ^2{\displaystyle \frac{T^4}{M^4}}{\displaystyle \frac{25}{48\pi ^4}}\lambda ^3{\displaystyle \frac{T^2}{M^2}}+\mathrm{}\right)_{\mathrm{free}}.`$ (17) where $`_{\mathrm{free}}=(\pi ^2/90)T^4`$ is the free energy of a gas of free massless bosons. By dimensional analysis, the $`\chi ^2^2\chi ^2`$ term in (16) would have given a term proportional to $`\lambda ^2T^6/M^2`$. 
The absence of such a term is related to the cancellation of the $`\lambda ^2T^6/M^2`$ terms in the energy density noted by Matsumoto and Yoshimura. Once the free energy density is known, we can derive the energy density by differentiation: $`\rho =-T^2\frac{\partial }{\partial T}(\mathcal{F}/T)`$. The leading power-suppressed terms in the energy density are $$\delta \rho =\frac{1}{1024}\left(\frac{112}{675}\lambda ^2\frac{T^4}{M^4}-\frac{125}{144\pi ^4}\lambda ^3\frac{T^2}{M^2}+\mathrm{}\right)\rho _{\mathrm{free}},$$ (18) where $`\rho _{\mathrm{free}}=(\pi ^2/30)T^4`$ is the energy density of a free gas of massless bosons. Srednicki calculated the $`\lambda ^2(T/M)^4`$ term for a similar model with a complex-valued heavy field by direct calculation in the full theory . It is worth noting that the $`\lambda ^3(T/M)^2`$ term is equally important if $`\lambda \sim (T/M)^2`$. At very low temperature, the power-suppressed terms in (18) dominate over the leading Boltzmann-suppressed term (6) in the energy density of the heavy particles, but they represent small corrections to the energy density of the light particles. ## IV Effective Hamiltonian Srednicki argued that the power-suppressed terms in the energy density should be interpreted as contributions to the energy density of the light field from nonrenormalizable effective interactions. In order to verify this explicitly, we construct a low-energy effective Hamiltonian density for the light field and compute its thermal average in the effective theory at temperature $`T`$. If the Lagrangian density depends only on the field $`\chi `$ and its first derivatives, the standard Noether prescription for constructing the Hamiltonian density is $`\mathcal{H}=\dot{\chi }(\partial \mathcal{L}/\partial \dot{\chi })-\mathcal{L}`$. This prescription cannot be applied to the effective Lagrangian (14) because, even after using integration by parts to reduce the number of derivatives acting on any single field, it still depends on the second derivatives of $`\chi `$. While the Noether prescription for the Hamiltonian density can be generalized to higher-derivative Lagrangians, it is rather cumbersome. A simpler approach is to first construct a different effective Lagrangian $`\mathcal{L}_{\mathrm{eff}}^{\prime }`$ that depends only on $`\chi `$ and its first derivatives, and then apply the Noether prescription to it. The effective Lagrangian $`\mathcal{L}_{\mathrm{eff}}`$ in (14) is the unique effective Lagrangian that reproduces the off-shell Green functions of the full theory at low momenta, but there are infinitely many effective Lagrangians that reproduce all the physical observables at low momenta. They include all effective Lagrangians that can be obtained from (14) by a field redefinition. In quantum field theory, we always have the freedom to redefine the field, because physical quantities, such as S-matrix elements, are invariant under field redefinitions. In renormalizable field theories, nontrivial field redefinitions are usually not considered, because they make the theory superficially nonrenormalizable. However, effective theories are already nonrenormalizable, so nontrivial field redefinitions do not introduce any additional complications. In fact, they can be used to simplify the effective Lagrangian by removing terms that do not contribute to physical quantities. For example, by introducing the field redefinition $`\chi \to \chi +G(\chi )`$ into the kinetic term $`\partial _\mu \chi \partial ^\mu \chi `$, we generate additional terms that can be used to cancel any terms of the form $`G(\chi )\partial ^2\chi `$.
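To make this cancellation mechanism concrete, here is a small sympy sketch (our illustration; the one-dimensional profile $`\chi =e^{-x^2}`$ is an arbitrary test function, not anything from the text):

```python
# (i) chi -> chi + a chi^3 shifts the kinetic term (1/2)(d chi)^2 by
#     3a chi^2 (d chi)^2 at first order in a.
# (ii) Integration by parts: int chi^2 d^2(chi^2) = -4 int chi^2 (d chi)^2,
#     checked numerically for chi = exp(-x^2).
import sympy as sp

x, a = sp.symbols('x a')
chi = sp.Function('chi')(x)
kin = sp.expand(sp.diff(chi + a*chi**3, x)**2 / 2)
shift = kin.coeff(a, 1)
assert sp.simplify(shift - 3*chi**2*sp.diff(chi, x)**2) == 0

prof = sp.exp(-x**2)
lhs = sp.integrate(prof**2 * sp.diff(prof**2, x, 2), (x, -sp.oo, sp.oo))
rhs = -4 * sp.integrate(prof**2 * sp.diff(prof, x)**2, (x, -sp.oo, sp.oo))
assert sp.simplify(lhs - rhs) == 0   # both equal -sqrt(pi)
```

Combining (i) and (ii) is what fixes the $`\chi ^3`$ coefficient in the redefinition quoted next.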
We use the following field redefinition to simplify the effective Lagrangian (14): $$\chi \to \chi -\frac{\lambda ^2}{72(4\pi )^2M^2}\chi ^3+\frac{\lambda ^2}{720(4\pi )^2M^4}\partial ^2\chi ^3+\mathrm{}.$$ (19) Expanding out the derivatives and rearranging them by using integration by parts, our effective Lagrangian reduces to $`\mathcal{L}_{\mathrm{eff}}^{\prime }`$ $`=`$ $`\mathcal{L}_\chi +{\displaystyle \frac{\lambda ^2}{240(4\pi )^2M^4}}(\partial _\mu \chi \partial ^\mu \chi )^2-{\displaystyle \frac{\lambda ^3}{96(4\pi )^2M^2}}\chi ^6+\mathrm{}.`$ (20) Again we have kept only those terms with a total of up to 4 powers of $`\lambda `$ and $`1/M^2`$. This new effective Lagrangian will not reproduce the low-momentum Green functions of the original theory, but it will reproduce all physical observables involving low momenta. An alternative way to derive the effective Lagrangian (20) starting from the original Lagrangian (1) is by matching physical quantities computed in both theories. We would begin by writing down the most general effective Lagrangian consistent with the symmetry $`\chi \to -\chi `$: $`\mathcal{L}_{\mathrm{eff}}^{\prime }`$ $`=`$ $`\mathcal{L}_\chi +A\chi ^2\partial _\mu \chi \partial ^\mu \chi +B\chi ^6+C(\partial _\mu \chi \partial ^\mu \chi )^2+D\chi ^2\partial ^2\chi \partial ^2\chi +E\chi \partial ^2\chi \partial _\mu \chi \partial ^\mu \chi +\mathrm{}.`$ (21) We would then write down the most general field redefinition consistent with the symmetry: $$\chi \to \chi +a\partial ^2\chi +b\chi ^3+c\chi \partial _\mu \chi \partial ^\mu \chi +d\chi ^2\partial ^2\chi +\mathrm{}.$$ (22) Inserting this field redefinition into (21) and expanding it out, we would find that the coefficients $`b`$, $`c`$, and $`d`$ could be used to set $`A=D=E=0`$. To determine the remaining coefficients, such as $`B`$ and $`C`$, we would exploit the fact that physical quantities are invariant under field redefinitions. We would compute $`T`$-matrix elements involving light particles with momenta $`p\ll M`$ in the full theory using the original Lagrangian (1) and in the effective theory using the effective Lagrangian (21). By matching these $`T`$-matrix elements, we would deduce that $`B`$ and $`C`$ have the values given in (20). Having constructed the new effective Lagrangian $`\mathcal{L}_{\mathrm{eff}}^{\prime }`$ in (20) that depends only upon $`\chi `$ and its first derivatives, we can use the standard Noether prescription to deduce the effective Hamiltonian. The effective Hamiltonian density reads $`\mathcal{H}_{\mathrm{eff}}=\mathcal{H}_\chi +\mathcal{H}_{\mathrm{pow}}`$, where $`\mathcal{H}_\chi `$ is given in (3) and $`\mathcal{H}_{\mathrm{pow}}`$ includes all the higher dimension operators: $`\mathcal{H}_{\mathrm{pow}}`$ $`=`$ $`{\displaystyle \frac{\lambda ^2}{240(4\pi )^2M^4}}(\partial _\mu \chi \partial ^\mu \chi )\left(3\dot{\chi }^2+(\mathbf{\nabla }\chi )^2\right)+{\displaystyle \frac{\lambda ^3}{96(4\pi )^2M^2}}\chi ^6+\mathrm{}.`$ (23) We can now calculate the power-suppressed terms in the energy density by taking the thermal average $`\langle \mathcal{H}_{\mathrm{eff}}\rangle `$ at temperature $`T`$ for the effective theory defined by (20). The energy density can be written as $`\rho =\langle \mathcal{H}_\chi \rangle +\langle \mathcal{H}_{\mathrm{pow}}\rangle `$. In $`\langle \mathcal{H}_{\mathrm{pow}}\rangle `$, the leading power-suppressed terms $`\delta \rho _{\mathrm{pow}}`$ are simply the thermal averages in a free field theory of the operators in (23). In $`\langle \mathcal{H}_\chi \rangle `$, the leading power-suppressed terms $`\delta \rho _\chi `$ come from treating the interactions in (20) as first-order perturbations.
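The Noether step leading to (23) can be verified symbolically. In the sketch below (ours), $`u`$ stands for $`\dot{\chi }`$ and $`v^2`$ for $`(\mathbf{\nabla }\chi )^2`$, so that $`\partial _\mu \chi \partial ^\mu \chi =u^2-v^2`$:

```python
# For the interaction C (d_mu chi d^mu chi)^2, the prescription
# H = u dL/du - L gives C (u^2 - v^2)(3u^2 + v^2), i.e. the
# (d_mu chi d^mu chi)(3 chi_dot^2 + (grad chi)^2) structure of Eq. (23).
import sympy as sp

u, v, C = sp.symbols('u v C')
L_int = C * (u**2 - v**2)**2
H_int = u * sp.diff(L_int, u) - L_int
assert sp.simplify(H_int - C*(u**2 - v**2)*(3*u**2 + v**2)) == 0
```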
The calculations are described in more detail in Appendix B, and the results are $`\delta \rho _\chi `$ $`=`$ $`{\displaystyle \frac{1}{1024}}\left({\displaystyle \frac{16}{135}}\lambda ^2{\displaystyle \frac{T^4}{M^4}}-{\displaystyle \frac{25}{24\pi ^4}}\lambda ^3{\displaystyle \frac{T^2}{M^2}}\right)\rho _{\mathrm{free}},`$ (24) $`\delta \rho _{\mathrm{pow}}`$ $`=`$ $`{\displaystyle \frac{1}{1024}}\left({\displaystyle \frac{32}{675}}\lambda ^2{\displaystyle \frac{T^4}{M^4}}+{\displaystyle \frac{25}{144\pi ^4}}\lambda ^3{\displaystyle \frac{T^2}{M^2}}\right)\rho _{\mathrm{free}},`$ (25) where $`\rho _{\mathrm{free}}=(\pi ^2/30)T^4`$. The sum of (24) and (25) reproduces our previous result (18). Note that $`\delta \rho _\chi `$ in (24) differs from the power-suppressed terms in $`\langle \mathcal{H}_\chi \rangle `$ in the full theory, which include the term (7) suppressed by $`\lambda ^2T^2/M^2`$. If we had omitted the $`\lambda ^2\chi ^3`$ term in the field redefinition (19), the term proportional to $`\chi ^2\partial ^2\chi ^2`$ in the effective Lagrangian (14) would not have been eliminated. There would then have been an additional term in the effective Hamiltonian proportional to $`\chi ^2(\dot{\chi }^2+(\mathbf{\nabla }\chi )^2)`$. Its thermal average reproduces the term $`\delta \rho _\phi `$ in (8) calculated by Matsumoto and Yoshimura. However, the $`\chi ^2\partial ^2\chi ^2`$ interaction term in the effective Lagrangian gives an additional term in $`\langle \mathcal{H}_\chi \rangle `$ that reproduces $`\delta \rho _\chi `$ in (7). Since the cancelling contributions (7) and (8) to the energy density can be eliminated by a field redefinition, neither can have any physical significance. In particular, $`\delta \rho _\chi /M`$ cannot be interpreted as a contribution to the number density of heavy particles. A field redefinition was also used by Kong and Ravndal in their calculation of the energy density for QED at $`T\ll m_e`$. The effective Lagrangian obtained by integrating out the electron field is $`\mathcal{L}_{\mathrm{eff}}`$ $`=`$ $`-{\displaystyle \frac{1}{4}}F_{\mu \nu }F^{\mu \nu }+{\displaystyle \frac{\alpha }{60\pi m_e^2}}F_{\mu \nu }\partial ^2F^{\mu \nu }+{\displaystyle \frac{\alpha ^2}{90m_e^4}}\left[(F_{\mu \nu }F^{\mu \nu })^2+{\displaystyle \frac{7}{4}}(F_{\mu \nu }\stackrel{~}{F}^{\mu \nu })^2\right]+\mathrm{}.`$ (26) The two power-suppressed terms are called the Uehling term and the Euler-Heisenberg term, respectively. The Uehling term can be eliminated by a field redefinition: $`A_\mu \to A_\mu +{\displaystyle \frac{\alpha }{30\pi m_e^2}}\partial ^2A_\mu +\mathrm{}.`$ (27) It therefore cannot contribute to physical quantities. The leading power-suppressed term in the energy density comes from the Euler-Heisenberg interactions. ## V Conclusion The effective-field-theory approach provides a simple way of understanding the contributions to equilibrium observables that are suppressed by powers of $`T/M`$. They arise from effective interactions among light particles that are induced by integrating out virtual heavy particles. The most economical way to compute the power-suppressed terms is to first construct a low-energy effective Lagrangian that describes the light particles at $`T=0`$ and then consider this effective theory in equilibrium at temperature $`T`$. For the pair annihilation model introduced in Refs. , we demonstrated explicitly that the power-suppressed terms in the energy density can be interpreted as contributions from the light particles.
We used the field redefinition (19) to construct an effective Hamiltonian density $`\mathcal{H}_{\mathrm{eff}}`$ for the light field $`\chi `$, and then verified that its thermal average reproduces the power-suppressed terms. The field redefinition eliminated terms suppressed by $`\lambda ^2(T/M)^2`$ from individual terms in $`\langle \mathcal{H}_{\mathrm{eff}}\rangle `$, which otherwise would have canceled only after all such terms had been added together. The fact that all the $`\lambda ^2(T/M)^2`$ terms can be eliminated by a field redefinition indicates that individual terms of this form cannot have any physical significance. The incorrect conclusions concerning the heavy-particle number density in Ref. stem from the authors having interpreted $`\mathcal{H}_\phi `$ in (4) literally as an operator that creates only heavy particles and whose thermal average therefore probes the number density of those particles. However, the operator $`\mathcal{H}_\phi `$ also creates light particles through loop diagrams that involve virtual heavy particles. Provided the momenta of the light particles are small compared to $`M`$, the loop diagram can be expressed as the product of a short-distance coefficient that depends on $`M`$ and a local effective operator that creates light particles. In the definition of the composite operator (4), the terms with effective operators $`\chi ^2`$, $`\dot{\chi }^2`$, $`(\mathbf{\nabla }\chi )^2`$ and $`\chi ^4`$ are implicitly subtracted, so $`\mathcal{H}_\phi `$ creates light particles through higher dimension effective operators. Thus $`\langle \mathcal{H}_\phi \rangle `$ receives contributions not only from heavy particles, but also from light particles created by these effective operators. It is these latter contributions that are responsible for the power-suppressed terms in the energy density. Those terms cannot be related to the number density of heavy particles that can participate in kinetic processes, since they involve only virtual heavy particles with lifetimes of order $`1/M`$. The strategy of effective field theory is to integrate out heavy fields to get an effective theory for light fields. The construction of a quantum kinetic equation for heavy particles requires exactly the opposite strategy. Light fields must be integrated out to create an effective description of the heavy particles. Effective field theory demonstrates convincingly that the quantum kinetic equations derived in Ref. do not describe correctly the evolution of the number density of heavy particles. Perhaps the insights from effective field theory can be used as guidance for deriving the correct quantum kinetic equations. ## Acknowledgments We thank A. Heckler and G. Steigman for bringing this problem to our attention. We thank J.O. Andersen for useful discussions. This work was supported in part by the U. S. Department of Energy Division of High Energy Physics (grant DE-FG02-91-ER40690). ## A The Derivative Expansion The derivative expansion can be used to express each of the terms $`S_{\mathrm{eff}}^{(n)}`$ in the effective action (12) as an infinite series of local functionals. In this appendix, we illustrate the derivative expansion by applying it to the term $`S_{\mathrm{eff}}^{(2)}[\chi ]`$.
The operator $`(-\partial ^2-M^2+iϵ)^{-1}`$ in the definition (13) of $`S_{\mathrm{eff}}^{(n)}`$ corresponds to the free spin-zero propagator: $`G(x,y)`$ $`=`$ $`{\displaystyle \int \frac{d^4q}{(2\pi )^4}e^{-iq(x-y)}\frac{1}{q^2-M^2+iϵ}}.`$ (A.1) The definition for $`S_{\mathrm{eff}}^{(2)}`$ can be written $`S_{\mathrm{eff}}^{(2)}[\chi ]`$ $`=`$ $`-{\displaystyle \frac{i\lambda ^2}{16}}{\displaystyle \int d^4x\int d^4yG(x,y)\chi ^2(y)G(y,x)\chi ^2(x)}.`$ (A.2) Inserting the integral expression (A.1) for the propagators, this becomes $`S_{\mathrm{eff}}^{(2)}[\chi ]`$ $`=`$ $`-{\displaystyle \frac{i\lambda ^2}{16}}{\displaystyle \int d^4x\int d^4y\chi ^2(x)\chi ^2(y)\int \frac{d^4p}{(2\pi )^4}e^{-ip(x-y)}I(p^2)},`$ (A.3) where the function $`I(p^2)`$ is given by $`I(p^2)`$ $`=`$ $`{\displaystyle \int \frac{d^4q}{(2\pi )^4}\frac{1}{q^2-M^2+iϵ}\frac{1}{(q+p)^2-M^2+iϵ}}.`$ (A.4) This integral has a logarithmic ultraviolet divergence that can be isolated by adding and subtracting $`I(0)`$. The difference between the two integrals is convergent and can be evaluated using the Feynman parameter method: $`I(p^2)`$ $`=`$ $`I(0)-{\displaystyle \frac{i}{(4\pi )^2}}{\displaystyle \int _0^1}dx\mathrm{ln}\left[1-x(1-x)p^2/M^2\right].`$ (A.5) Assuming the integral in (A.3) is dominated by $`p^2\ll M^2`$, we can expand the logarithm into a power series in $`p^2`$ and then evaluate the Feynman parameter integral to get $`I(p^2)`$ $`=`$ $`I(0)+{\displaystyle \frac{i}{(4\pi )^2}}{\displaystyle \underset{n=1}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \frac{n!(n-1)!}{(2n+1)!}}\left({\displaystyle \frac{p^2}{M^2}}\right)^n.`$ (A.6) The function $`I(p^2)`$ inside the integral over $`p`$ in (A.3) can be replaced by $`I(-\partial _x^2)`$ outside the integral. The integral over $`p`$ then reduces to $`\delta ^4(x-y)`$, which collapses the expression to an integral over a single coordinate $`x`$. Our final result for the derivative expansion is $`S_{\mathrm{eff}}^{(2)}[\chi ]`$ $`=`$ $`-{\displaystyle \frac{i\lambda ^2}{16}}I(0){\displaystyle \int d^4x\chi ^4}+{\displaystyle \frac{\lambda ^2}{16(4\pi )^2}}{\displaystyle \underset{n=1}{\overset{\mathrm{\infty }}{\sum }}}{\displaystyle \frac{(-1)^nn!(n-1)!}{(2n+1)!M^{2n}}}{\displaystyle \int d^4x\chi ^2(\partial ^2)^n\chi ^2}.`$ (A.7) The $`\chi ^4`$ term has a divergent coefficient, but it can be absorbed into one of the counterterms in the original Lagrangian. All the higher derivative terms have finite coefficients. ## B Thermal sum-integrals The calculations of thermodynamic quantities in this paper can be reduced to computing thermal averages in a free field theory. In the imaginary-time formalism, these thermal averages are expressed as sums over Euclidean energies and integrals over spatial momentum. We use the following notation for these sum-integrals: $$⨋_P=T\sum _{p_4}\int \frac{d^3p}{(2\pi )^3}.$$ (B.1) The Euclidean 4-momentum is $`P=(𝐩,p_4=2\pi nT)`$, where $`n`$ is any integer. We also use the notation $`P^2=𝐩^2+p_4^2`$.
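Before turning to the thermal averages, a numerical spot-check of the expansion (A.6) — our own illustration — compares the Feynman-parameter integral in (A.5) against the truncated series:

```python
# For u = p^2/M^2:  -int_0^1 dx ln[1 - x(1-x) u] = sum_{n>=1} n!(n-1)!/(2n+1)! u^n,
# which converges for u < 4.
import math
from scipy.integrate import quad

def lhs(u):
    return quad(lambda x: -math.log(1.0 - x*(1.0 - x)*u), 0.0, 1.0)[0]

def rhs(u, nmax=15):
    return sum(math.factorial(n)*math.factorial(n - 1)/math.factorial(2*n + 1)*u**n
               for n in range(1, nmax + 1))

for u in (0.1, 0.5, 1.0):
    print(u, lhs(u), rhs(u))   # agreement to ~1e-12
```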
Many of the sum-integrals required in this paper are thermal averages of local operators in a free field theory: $`\langle \chi ^2\partial ^2\chi ^2\rangle _{\mathrm{free}}`$ $`=`$ $`-4⨋_P⨋_Q{\displaystyle \frac{1}{P^2}},`$ (B.2) $`\langle \chi ^2(\partial ^2)^2\chi ^2\rangle _{\mathrm{free}}`$ $`=`$ $`4⨋_P⨋_Q{\displaystyle \frac{(P^2)^2+P^2Q^2+2(P\cdot Q)^2}{P^2Q^2}},`$ (B.3) $`\langle \chi ^6\rangle _{\mathrm{free}}`$ $`=`$ $`15\left(⨋_P{\displaystyle \frac{1}{P^2}}\right)^3,`$ (B.4) $`\langle (\partial _\mu \chi \partial ^\mu \chi )^2\rangle _{\mathrm{free}}`$ $`=`$ $`⨋_P⨋_Q{\displaystyle \frac{P^2Q^2+2(P\cdot Q)^2}{P^2Q^2}},`$ (B.5) $`\langle (\partial _\mu \chi \partial ^\mu \chi )(\mathbf{\nabla }\chi )^2\rangle _{\mathrm{free}}`$ $`=`$ $`⨋_P⨋_Q{\displaystyle \frac{P^2𝐪^2+2(P\cdot Q)\,𝐩\cdot 𝐪}{P^2Q^2}},`$ (B.6) $`\langle \chi ^2(\mathbf{\nabla }\chi )^2\rangle _{\mathrm{free}}`$ $`=`$ $`⨋_P⨋_Q{\displaystyle \frac{𝐪^2}{P^2Q^2}}.`$ (B.7) We also need several sum-integrals that come from computing the thermal average of the free-field Hamiltonian density to first order in the effective interactions: $`\langle \mathcal{H}_\chi \,i{\displaystyle \int d^4x(\partial _\mu \chi \partial ^\mu \chi )^2}\rangle _{\mathrm{free}}`$ $`=`$ $`-2⨋_P⨋_Q{\displaystyle \frac{[P^2Q^2+2(P\cdot Q)^2](P^2-2𝐩^2)}{(P^2)^2Q^2}},`$ (B.8) $`\langle \mathcal{H}_\chi \,i{\displaystyle \int d^4x\chi ^6}\rangle _{\mathrm{free}}`$ $`=`$ $`-45\left(⨋_Q{\displaystyle \frac{1}{Q^2}}\right)^2⨋_P{\displaystyle \frac{P^2-2𝐩^2}{(P^2)^2}},`$ (B.9) $`\langle \mathcal{H}_\chi \,i{\displaystyle \int d^4x\chi ^2\partial _\mu \chi \partial ^\mu \chi }\rangle _{\mathrm{free}}`$ $`=`$ $`-⨋_P⨋_Q{\displaystyle \frac{(P^2+Q^2)(P^2-2𝐩^2)}{(P^2)^2Q^2}}.`$ (B.10) We can simplify the sum-integrals by averaging over angles using $`\langle p^i\rangle =0`$ and $`\langle p^ip^j\rangle =𝐩^2\delta ^{ij}/3`$. This reduces the double sum-integrals to products of single sum-integrals. The single sum-integrals that are needed are $`⨋_PP^2`$ $`=`$ $`0,`$ (B.11) $`⨋_P1`$ $`=`$ $`0,`$ (B.12) $`⨋_P{\displaystyle \frac{1}{P^2}}`$ $`=`$ $`{\displaystyle \frac{1}{12}}T^2,`$ (B.13) $`⨋_P{\displaystyle \frac{𝐩^2}{P^2}}`$ $`=`$ $`{\displaystyle \frac{\pi ^2}{30}}T^4,`$ (B.14) $`⨋_P{\displaystyle \frac{𝐩^2}{(P^2)^2}}`$ $`=`$ $`{\displaystyle \frac{1}{8}}T^2,`$ (B.15) $`⨋_P{\displaystyle \frac{(𝐩^2)^2}{(P^2)^2}}`$ $`=`$ $`{\displaystyle \frac{\pi ^2}{12}}T^4.`$ (B.16) We have given only the temperature-dependent terms in the sum-integrals. The temperature-independent terms are ultraviolet divergent and depend on the choice of ultraviolet cutoff. The most convenient cutoff is dimensional regularization of the integrals over the spatial momenta. With this cutoff, the temperature-independent terms vanish.
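The temperature-dependent parts of (B.13) and (B.14) can be verified numerically. A short sketch (ours): doing the Matsubara sum first, $`T\sum _n1/(\omega _n^2+p^2)=(1+2n_B(p))/(2p)`$, the $`T`$-dependent pieces reduce to Bose-Einstein integrals (units with $`T=1`$):

```python
import math
from scipy.integrate import quad

n_B = lambda p: 1.0 / (math.exp(p) - 1.0)

val1 = quad(lambda p: p * n_B(p) / (2 * math.pi**2), 0, 60)[0]
print(val1, 1 / 12)                 # sum-int 1/P^2   -> T^2/12

val2 = quad(lambda p: p**3 * n_B(p) / (2 * math.pi**2), 0, 60)[0]
print(val2, math.pi**2 / 30)        # sum-int p^2/P^2 -> pi^2 T^4/30
```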
# Search for correlation effects in linear chains of trapped ions ## Abstract We report a precise search for correlation effects in linear chains of 2 and 3 trapped Ca<sup>+</sup> ions. Unexplained correlations in photon emission times within a linear chain of trapped ions have been reported, which, if genuine, cast doubt on the potential of an ion trap to realize quantum information processing. We observe quantum jumps from the metastable 3d<sup>2</sup>D<sub>5/2</sub> level for several hours, searching for correlations between the decay times of the different ions. We find no evidence for correlations: the number of quantum jumps with separations of less than 10 ms is consistent with statistics to within errors of $`0.05\%`$; the lifetime of the metastable level derived from the data is consistent with that derived from independent single-ion data at the level of the experimental errors ($`1\%`$); and no rank correlations between the decay times were found with sensitivity to rank correlation coefficients at the level of $`|R|=0.024`$. The drive to realise the potential of quantum information processing has led to the investigation of various experimental systems; among these is the ion trap, which has several advantages including the capability to generate entanglement actively with existing technology . Following the proposal of an ion-trap quantum processor by Cirac and Zoller , several groups have carried out pioneering experiments . In a recent review , the view was expressed that “the ion trap proposal for realizing a practical quantum computer offers the best chance of long term success.” One of the attractive features of the trap is that the various interactions and processes which govern its behaviour have been exhaustively studied and are in principle well-understood. However, 14 years ago unexplained collective behaviour when several ions were present was reported . This prompted tests in another laboratory which gave null results , but recently a further account of such effects has appeared . There is thus an apparent conflict of evidence from different laboratories. The effects manifest themselves as an enhanced rate of coincident quantum jumps. Sauter $`et`$ $`al.`$ measured two- and three-fold coincident quantum jumps in a system of three trapped Ba<sup>+</sup> ions to occur two orders of magnitude more frequently than expected on the basis of statistics. This observation led to proposals that the ions were undergoing a collective interaction with the light field . Itano et al. subsequently made a search for such effects in groups of two and three Hg<sup>+</sup> ions in their laboratory. Their results were consistent with no correlations. In a test on two ions, when over 5649 consecutive jumps were observed, the number of apparent double jumps was 11, which was approximately the number that would be expected due to random coincidences within the finite time resolution of the experiment. Further tests based on photon statistics were also consistent with no correlations. More recently, Block $`et`$ $`al.`$ have observed an enhanced rate of two- and three-fold coincidences in a linear chain of ten Ca<sup>+</sup> ions, where the coincidences were not confined to adjacent ions. This led them to suggest an unexplained long range interaction between ions in the linear crystal. 
They also found that measurements of the lifetime $`\tau `$ of the 3D<sub>5/2</sub> level (shelved state) from the 10-ion string produced discrepancies of as much as $`6\sigma `$ between runs under nominally identical conditions, where $`\sigma `$ is the standard deviation for each run. Since only the electromagnetic interaction is involved, it is extremely unlikely that these observations indicate new physics; nevertheless, they raise serious doubt about the suitability of the ion trap as a quantum information processing device. The coupling between a quantum system and its environment plays a crucial role in quantum information processing. An unexplained contribution to this coupling is especially significant, because any method to suppress the decoherence, such as quantum error correction (QEC) , relies on accurate knowledge of the process in question. It is furthermore particularly important to understand collective decoherence processes and place a reliable upper bound on their size, because the simultaneous combination of uncorrelated and correlated errors in a quantum computer poses the most severe constraints on QEC . Thus, experimental reports of a decoherence process which is both unexplained and collective merit serious attention. We have therefore undertaken a search for the reported effects in linear chains of 2 and 3 trapped Ca<sup>+</sup> ions. Our data were taken under conditions such that correlation effects would be expected on the basis of the results of and , and are significantly more precise than either. We find no evidence at all for correlations. Our work is complementary to that of in that we are operating in a different system (Ca<sup>+</sup> instead of Hg<sup>+</sup>) with a significantly different time-scale (mean rate for observed double quantum jumps of order 0.2 per minute instead of 2 per minute), and we perform several new statistical tests on 2 and 3 ions. Our upper bound for unexpected double jumps is $`1.4`$ per hour, or $`0.05`$% of the single jump rate. The corresponding upper bounds for the Hg<sup>+</sup> ion trap in are 30 per hour and $`0.06`$%. The experimental method is very similar to that reported in our measurement of the lifetime of the 3d<sup>2</sup>D<sub>5/2</sub> level , which was originally adopted by Block $`et`$ $`al.`$ . Linear crystals of a small number, $`N`$, of <sup>40</sup>Ca<sup>+</sup> ions separated by about 15 $`\mu `$m are obtained by trapping in a linear Paul trap $`in`$ $`vacuo`$ ($`2\times 10^{-11}`$ Torr), and laser-cooling the ions to a few mK. The transitions of interest are shown in figure 1. Laser beams at 397 nm and 866 nm continuously illuminate the ions, and the fluorescence at 397 nm is detected by a photomultiplier. The photon count signal is accumulated for bins of duration $`t_b=10.01`$ ms (of which the last 2.002 ms is dead time), and logged. A laser at 850 nm drives the $`3\text{D}_{3/2}\to 4\text{P}_{3/2}`$ transition. The most probable decay route from $`4\text{P}_{3/2}`$ is to the $`4\text{S}_{1/2}`$ ground state; alternatively, an ion can return to $`3\text{D}_{3/2}`$. However, about 1 decay in 18 occurs to $`3\text{D}_{5/2}`$, the metastable “shelving” level. At this point the fluorescence from the ion that has been shelved disappears. A shutter on the 850 nm laser beam remains open for 100 ms before it is closed, which gives ample time for shelving of all $`N`$ ions. Between 5 and 10 ms after the shutter is closed we start to record the photomultiplier count signal in the 10 ms bins.
We keep observing the photon count until it abruptly increases to a level above a threshold. This is set between the levels observed when 1 and 0 ions remain shelved. The signature for all $`N`$ ions having decayed is taken to be ten consecutive bins above this threshold. After this we re-open the shutter on the 850 nm laser. This process is repeated for several hours, which constitutes one run. The data from a given run were analysed as follows. The raw data consist of counts indicating the average fluorescence level in each bin of duration $`t_b`$ (see figure 2). $`N`$ thresholds $`\lambda _m`$ are set, the $`m`$th threshold being set between the levels observed when $`m`$ and $`(m-1)`$ ions remain shelved. The number of bins observed below $`\lambda _N`$ gives the decay time, $`t_N`$, of the first of $`N`$ shelved ions to decay. The number of bins observed between $`\lambda _{m+1}`$ and $`\lambda _m`$ being exceeded gives the decay time, $`t_m`$, of the next ion to decay leaving $`(m-1)`$ ions shelved. The large number of $`t_m`$ obtained are then gathered into separate histograms and the expected exponential distribution $`A\mathrm{exp}(-\gamma _mt)`$ is fitted to each, in order to derive the decay rate $`\gamma _m`$ of the next ion to decay leaving $`(m-1)`$ ions shelved (see figure 3). It is appropriate to use a Poissonian fitting method (described in ), rather than least-squares, because of the small numbers involved in part of the distribution (at large $`t`$). If the $`N`$ ions are acting independently, each one will have a decay rate $`\gamma =1/\tau `$, where $`\tau `$ is the lifetime of the 3D<sub>5/2</sub> state. Since we do not distinguish between the fluorescence signals from the different ions, with $`m`$ ions remaining shelved the next decay is characterised by the increased rate $`\gamma _m=m/\tau `$. Figure 3 shows the histogram of the decay times, $`t_1`$, of the second ion of two to decay obtained from a 3.2 hour run. The expected exponential decay fits the data very well. Events in the first bin of the histogram correspond to both ions being detected as decaying in the same bin, $`t_1=0`$. These quantum jumps, coincident within our time resolution, certainly do not occur two orders of magnitude more frequently than expected by random coincidence as was observed by Sauter $`et`$ $`al.`$ . In fact, they are observed to occur less frequently than predicted by the fitted exponential to the histogram data. However, this is an artefact of our finite time resolution. The fitted exponential to the histogram data has value $`f_1`$ in the first bin, which gives the number of second-ion decays that are expected to occur within $`t_b`$ of the first ion decaying by random coincidence. However, for both ions to decay within a single bin, the second ion has an average time of less than $`t_b`$ in which to decay. The exact details depend upon the analysis thresholds, $`\lambda _m`$, and the detector dead time. In the 2-ion case, one can show that, to first order in $`t_b/\tau `$, the first bin width is modified to $`Ft_b`$ where: $`F=0.98-0.8\lambda _1^{\prime }+0.16\lambda _1^{\prime 2}+0.16\lambda _2^{\prime 2}+1.44\lambda _2^{\prime }-0.64\lambda _1^{\prime }\lambda _2^{\prime }`$ with normalized thresholds: $`\lambda _m^{\prime }={\displaystyle \frac{\lambda _m-S_N}{S_{N-1}-S_N}}`$ where $`S_m`$ is the mean photon count with $`m`$ ions shelved (so $`S_N`$ is the mean background count level). This expression was verified using real and simulated data.
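For orientation, the statistics of this analysis are easy to simulate. The sketch below (ours, not the analysis code of the experiment; it ignores thresholds, dead time, and background) draws independent exponential decay times for two shelved ions, bins them as in the experiment, and also evaluates the first-bin correction $`F`$ for the threshold values quoted in the next paragraph:

```python
import numpy as np

rng = np.random.default_rng(1)
tau, t_b, n_events = 1.168, 0.01001, 16132          # s, s, events (illustrative)
decays = rng.exponential(tau, size=(n_events, 2))   # two independent ions per event
bins = np.floor(decays / t_b)
t1_bins = np.abs(bins[:, 0] - bins[:, 1])           # bins between the two jumps
print("same-bin jumps:", int(np.sum(t1_bins == 0)))
print("mean t_1:", t1_bins[t1_bins > 0].mean() * t_b, "s  (expect ~tau)")

def F(l1, l2):                                      # first-bin width correction
    return (0.98 - 0.8*l1 + 0.16*l1**2 + 0.16*l2**2
            + 1.44*l2 - 0.64*l1*l2)
print("F(1.4, 0.40) =", round(F(1.4, 0.40), 3))     # 0.417, the F = 0.42 of the text
```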
The expected number of coincidences is therefore $`Ff_1`$. For the histogram shown, the 2-ion data was analyzed with the thresholds $`\lambda _1^{\prime }=1.4`$ and $`\lambda _2^{\prime }=0.40`$ (these are chosen to optimize the discrimination of the fluorescence levels $`S_m`$), which gives $`F=0.42`$. The expected number of coincidences is $`Ff_1=24\pm 5`$, assuming $`\sqrt{n}`$ errors, which agrees with the observed number of coincidences, 26. The second bin of the histogram is the only other bin expected to have a modified width, which is by a negligible amount. Note that, to ensure the number of coincidences is properly normalized, it is important that only events where at least $`(m+1)`$ ions were shelved at the start of an observation are included in the $`t_m`$ histogram (for $`m<N`$). Table I shows that the observed numbers of 2-fold coincidences in the 2- and 3-ion data agree with the expected values within $`\sqrt{n}`$ errors. The total expected number of 2-fold coincidences in all the data was 66.3 out of 16132 quantum jumps observed to start with at least 2 ions shelved. We are therefore sensitive to changes in the proportion of 2-fold coincidences at the level of $`\sqrt{66}/16132=0.05\%`$, or about 1.4 events per hour. The expected number of 3-fold coincidences depends on the threshold settings in a more complex way than in the 2-fold case, and here we simply use simulated 3-ion data to provide the predicted number of 3-fold coincidences shown in table I. The total number of expected 3-fold coincidences is 0.05 in both 3-ion data runs, which have a combined duration of 2.8 hrs. In fact, this predicted value is significantly lower than effects in our trap which can perturb the system sufficiently to cause de-shelving (such as collisions with residual background gas), as discussed in . We observe at most one event, depending on the exact choice of threshold settings, and this does not constitute evidence for correlation. The decay rates obtained from the 2- and 3-ion data are shown in figure 4, where the horizontal lines are the expected rates $`\gamma _m=m/\tau `$ assuming the ions to act independently. Combining all the $`\gamma _m`$ derived from the 2- and 3-ion data as estimates of $`m/\tau `$ yields a value $`\tau =1177\pm 10`$ ms, where we include a 2 ms allowance for systematic error . This is consistent with the value derived from single-ion data, $`\tau =1168\pm 7`$ ms . We are therefore sensitive to changes in the apparent value of $`\tau `$ due to multiple-ion effects at the level of 1%. Superfluorescence and subfluorescence as observed in a two-ion crystal are calculated to be negligible with the large interionic distance of about 15 $`\mu `$m in the chain. In order to look for more general forms of correlation between the decay times of each ion, rank correlation tests were performed. Table II gives the results; they show no significant correlations. The 2-ion data is the most sensitive, allowing underlying rank-correlation coefficients to be ruled out at the level of $`|R_{12}|=0.024`$. In summary, we have presented results that are consistent with no correlations of spontaneous decay within linear chains of 2 and 3 trapped Ca<sup>+</sup> ions, contrary to previous studies. First, the numbers of coincident quantum jumps were found to be consistent with those expected from random coincidence at the level of $`0.05\%`$. Second, the exponential decay expected assuming the ions to act independently fitted the histograms of decay times $`t_m`$ obtained from the 2- and 3-ion data well.
Third, the decay rates from these fits were combined to estimate the lifetime of the shelved state, giving a result consistent with our previous precise measurement performed on a single ion . Fourth, rank correlation tests were performed on the decay times obtained from the 2- and 3-ion data; no evidence for rank correlation was found. We suggest therefore that the correlations which have been reported are likely to be due not to interactions between the ions themselves, but to external time-dependent perturbations. In our own trap, we have investigated and reduced such perturbations to a negligible level , and the present work demonstrates that when this is done there is no evidence that an ion trap is subject to unexplained effects which would make it unsuitable for quantum information processing. We are grateful to G.R.K. Quelch for technical assistance, and to S. Siller for useful discussions. This work was supported by EPSRC (GR/L95373), the Royal Society, Oxford University (B(RE) 9772) and Christ Church, Oxford.
# DESY 00–044 ISSN 0418–9833 MZ-TH/00–09 hep-ph/0003148 March 2000 Gluon Fragmentation to Gluonium ## Abstract The fragmentation of gluons to gluonium states is analyzed qualitatively in the non-perturbative region. The convolution of this mechanism with perturbative gluon radiation leaves us with a hard component in the fragmentation of gluons to gluonium. Theoretical analyses of gluonic matter particles were initiated soon after gluon fields were introduced as the basic force fields of the strong interactions. Mass spectra and quantum numbers of such states have been studied in several approaches to non-perturbative QCD . Since gluon-rich environments of collision processes are the preferential source for the production of gluonia, several mechanisms of this kind have been analyzed in great detail, in particular heavy quarkonium decays and Pomeron processes . Strong candidates have been observed in various experimental analyses , though final conclusions could not be drawn yet. Recently it has been suggested to search for gluonium states in $`Z`$ decays . A significant fraction of these decays involves gluon jets so that the LEP/SLC $`Z`$ events provide a natural ensemble to search for such states. The energy of these gluon jets is large enough to justify a parton picture for the production of gluonia. This process can be described by the convolution of the hard cross section for producing a gluon with a fragmentation function accounting for the transition of gluons to gluonia. The resulting expressions are correct up to terms which decrease as inverse powers of the gluon-jet energy. In this note we describe an attempt to predict the momentum spectrum of the gluonium particles within the fragmented gluon jets. The solution of this problem is of experimental relevance in optimizing search strategies for gluonia. In particular, if a major fraction of the gluon energy is transferred to the gluonium states in the fragmentation process, the experimental identification is facilitated vis-a-vis the soft fusion of gluons to gluonia in the plateau region, for which the gluonium decay products submerge into the low-energy hadron sea. Non-perturbative Fragmentation: The basic idea for the primordial non-perturbative fragmentation of gluons to generic gluonium states<sup>1</sup><sup>1</sup>1To analyze the gross features of the fragmentation function, it is sufficient to consider $`gg`$ gluonium states as example. $`G`$ of mass $`M_G\approx 1.5`$ GeV follows the path of heavy-quark fragmentation, the gross structure of which can be described by the Peterson et al. fragmentation function . A simple form of the fragmentation function can be derived by adopting the quantum-mechanical rules of old-fashioned perturbation theory to estimate transition probabilities in the parton model. The qualitative features of the amplitude for a high-energy gluon $`g`$ to fragment into a gluonium state $`G`$ with a fraction $`z`$ of the gluon momentum are determined by the energy transfer $`\mathrm{\Delta }E=E_G+E_{g^{\prime }}-E_g`$ across the vertex in the fragmentation process (see Fig. 
1) which conserves three-momentum: $$\mathrm{Amplitude}\left[g\to G+g^{\prime }\right]\propto \mathrm{\Delta }E^{-1}.$$ (1) Expanding the energies for large gluon momenta $`P`$ about the gluonium mass $`M_G`$ and the transverse momentum, which can be assumed to be of the order of the strong interaction scale $`\mathrm{\Lambda }`$, $$\begin{array}{ccc}\hfill \mathrm{\Delta }E& =& \sqrt{M_G^2+z^2P^2}+\sqrt{\mathrm{\Lambda }^2+(1-z)^2P^2}-P\hfill \\ & \approx & \frac{M_G^2}{2P}\left(\frac{1}{z}+\frac{ϵ_G}{1-z}\right),\hfill \end{array}$$ (2) and taking into account the standard factor $`z^{-1}`$ for the longitudinal phase space which generates the non-perturbative rapidity plateau, we suggest the following ansatz for the non-perturbative $`g\to G`$ fragmentation function: $$d_g^G(z)=\frac{N}{z\left(\frac{1}{z}+\frac{ϵ_G}{1-z}\right)^2}.$$ (3) The shape parameter $`ϵ_G`$ is defined as $$ϵ_G=\mathrm{\Lambda }^2/M_G^2$$ (4) according to the expansion Eq. (2). The coefficient $`N`$ denotes the normalization which after integration over the momentum spectrum defines the (unknown) rate at which gluons fragment non-perturbatively into gluonium states. Choosing for illustration $`M_G=1.5`$ GeV and $`\mathrm{\Lambda }=0.5`$ GeV, the small coefficient $`ϵ_G\sim O(1/10)`$ makes $`d_g^G(z)`$ peak strongly near $`z=1`$ (dashed curve in Fig. 2). The maximum of the momentum distribution is predicted at $`z_{\mathrm{max}}\approx 1-\sqrt{2ϵ_G}`$ for small $`ϵ_G`$. This form is a straightforward consequence of quantum mechanics which enhances transitions with minimum energy transfer. For large momenta, i.e. large $`z`$, the impact of the heavy gluonium mass on the energy transfer is less effective than for small momenta. This fragmentation picture applies only for large energies of the fragmenting gluon for which $`\mathrm{\Delta }E\propto P^{-1}`$ approaches zero. A lower bound of the required gluon energy may be estimated by demanding the energy transfer $`\mathrm{\Delta }E`$ to be less than a fraction of the typical strong interaction scale $`\mathrm{\Lambda }`$. It follows from the inequality $`\mathrm{\Delta }E\stackrel{<}{\sim }\mathrm{\Lambda }/2`$ that the gluon energy must exceed $$E\stackrel{>}{\sim }E_0\approx M_G^2/\mathrm{\Lambda }$$ (5) in the laboratory frame, which amounts to about 5 GeV for the parameters introduced above. Perturbative Gluon Radiation: If the gluon energy exceeds the minimum value (5), the parent gluon is attenuated by secondary gluon bremsstrahlung at early times before the non-perturbative fragmentation becomes operative. These perturbative QCD effects can be described in analogy to the Altarelli-Parisi evolution . Neglecting the higher-order effect of quark feedback to gluons within the gluon jet, the attenuation is described in moment space<sup>2</sup><sup>2</sup>2The moments of the function $`f(z)`$ are defined by $`f(m)=\int _0^1dz\,z^{m-2}f(z)`$. by the coefficient $$g(m,E^2)=\left[\frac{\alpha _s(E^2)}{\alpha _s(E_0^2)}\right]^{-2\gamma _m/\beta _0}$$ (6) for the evolution from the energy $`E_0`$ to $`E`$. $`\gamma _m`$ is the anomalous dimension related to the gluon splittings $`g\to gg,q\overline{q}`$: $$\gamma _m=\frac{3}{2}\left(\frac{1}{3}-\frac{2N_F}{9}+\frac{4}{m(m-1)}+\frac{4}{(m+1)(m+2)}-4\underset{j=2}{\overset{m}{\sum }}\frac{1}{j}\right)$$ (7) with $`\beta _0=11-2N_F/3`$, where we take $`N_F=4`$ active light flavours. 
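For the quoted parameters, the shape of the primordial distribution (3) is easily examined numerically (our sketch; note that at $`ϵ_G\approx 0.11`$ the exact peak lies noticeably above the small-$`ϵ_G`$ estimate $`1-\sqrt{2ϵ_G}`$, which is only asymptotic):

```python
import numpy as np

eps = (0.5 / 1.5)**2                              # eps_G = Lambda^2/M_G^2 ~ 0.11
z = np.linspace(1e-4, 1 - 1e-4, 200001)
d = 1.0 / (z * (1.0/z + eps/(1.0 - z))**2)        # Eq. (3) with N = 1 (normalization open)
print("numerical peak:   ", z[np.argmax(d)])      # ~0.67
print("1 - sqrt(2 eps_G):", 1 - np.sqrt(2*eps))   # ~0.53, rough at this eps_G
```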
Summary: The final gluon-to-gluonium fragmentation function $`D_g^G`$ at energy $`E`$ which incorporates the perturbative and non-perturbative effects is found by convoluting the perturbative splitting function with the non-perturbative fragmentation function; in moment space: $$D_g^G(m,E^2)=d_g^G(m)\left[\frac{\alpha _s(E^2)}{\alpha _s(E_0^2)}\right]^{-2\gamma _m/\beta _0}$$ (8) with $`d_g^G`$ describing the non-perturbative fragmentation process at the energy $`E_0\approx M_G^2/\mathrm{\Lambda }`$ as argued before. Transformed back to momentum space, the fragmentation function $`D_g^G(z,E^2)`$ is illustrated for the gluon energy $`E=30`$ GeV by the full curves in Fig. 2. The fragmentation function is shown for two values of the initial energy $`E_0`$ for the definition of the non-perturbative function $`d_g^G(z)`$. The variation illustrates the inherent uncertainties due to the qualitative estimate of $`E_0`$. However, despite these quantitative uncertainties the qualitative picture emerges quite clearly: a hard component is expected to be present when gluonia are formed in the fragmentation of gluon jets. Gluonium particles will also be generated by gluon-gluon fusion mechanisms at low energies in the plateau region of the gluon jets. These additional mechanisms will increase the overall multiplicity of gluonia in the jet, yet they will not reduce the particle yield generated by the hard component in gluon fragmentation to gluonium at large $`z`$. The hard component may experimentally be quite helpful for the search for these novel particles since the reconstruction is easier in a phase space region of low hadronic population. Acknowledgement: We thank P. Roy for an inspiring discussion on the search for gluonia in gluon fragmentation. Note: After writing this letter, we received the paper of Ref. in which elements of the production rate for gluonia in gluon fragmentation have been discussed.
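As a numerical addendum (ours): evaluating the attenuation factor in (8) with a one-loop running coupling requires a value of $`\mathrm{\Lambda }_{\mathrm{QCD}}`$, which the text does not specify; assuming $`\mathrm{\Lambda }_{\mathrm{QCD}}=0.2`$ GeV and $`N_F=4`$:

```python
import math

NF = 4
beta0 = 11 - 2*NF/3
Lam2 = 0.2**2                                   # GeV^2, assumed Lambda_QCD = 0.2 GeV

def alpha_s(Q2):                                # one-loop running coupling
    return 4*math.pi / (beta0 * math.log(Q2/Lam2))

def gamma_m(m):                                 # anomalous dimension, Eq. (7)
    return 1.5*(1/3 - 2*NF/9 + 4/(m*(m-1)) + 4/((m+1)*(m+2))
                - 4*sum(1/j for j in range(2, m+1)))

E0, E = 5.0, 30.0                               # GeV, the values used in the text
r = alpha_s(E**2) / alpha_s(E0**2)
for m in (2, 4, 8):
    print(m, r**(-2*gamma_m(m)/beta0))          # ~0.97, ~0.50, ~0.31: higher moments
                                                # (harder spectrum) are more attenuated
```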
# GRAIN SURVIVAL IN SUPERNOVA REMNANTS AND HERBIG-HARO OBJECTS ## 1 INTRODUCTION For interstellar dust grains, the predominant destruction process is shocks driven by supernova explosions. In the postshock cooling gas, a charged grain is accelerated around the magnetic field line (betatron acceleration), collides with other grains and gas particles, and thereby loses its mass. Generally, it is believed that almost all the grains are destroyed in a single shock. The references are summarized in Savage & Sembach (1996) and Jones (2000). However, we would like to argue that the actual efficiency of grain destruction is as low as $`\sim 20`$% by mass in representative shock-heated nebulae, i.e., supernova remnants (SNRs) and Herbig-Haro (HH) objects. The relative intensity of the emission lines \[Fe II\] $`\lambda `$8617 and \[O I\] $`\lambda `$6300 is used to estimate the gas-phase Fe/O abundance ratio. In the usual interstellar medium, iron is depleted into grains by a factor of $`\sim `$ 100 as a major dust constituent, while oxygen is largely undepleted (Savage & Sembach 1996). Thus the gas-phase Fe/O ratio is proportional to the mass fraction of destroyed grains. The \[Fe II\]/\[O I\] flux ratio is sensitive to the gas-phase Fe/O ratio, but is insensitive to the ionization state, temperature, and density of the gas. This is because the same physical conditions are required to generate the \[Fe II\] and \[O I\] emissions. They are excited by electron collisions. Since the ionization potentials of Fe<sup>+</sup> and O<sup>0</sup> are only 16.2 and 13.6 eV, both the \[Fe II\] and \[O I\] emissions are generated in a partially ionized zone. The excitation energies of the \[Fe II\] and \[O I\] lines are 19,000 and 23,000 K. Their critical densities for collisional de-excitation at $`10^4`$ K are 3.5 $`\times `$ $`10^5`$ and 1.8 $`\times `$ $`10^6`$ cm<sup>-3</sup>, which are well above the typical electron density in shocks. Moreover, the \[Fe II\] and \[O I\] lines are prominent in shocks. The grain destruction is expected to have been completed in their emission region, which is far downstream from the shock front. We calculate the \[Fe II\]/\[O I\] flux ratio in shocks, and compare the results with the observational data of SNRs and HH objects. The analysis and subsequent discussion employ the same values for atomic constants and interstellar abundances, the references of which are given below. ## 2 OBSERVATIONAL DATA Figure 1 shows the number distribution of the flux ratio \[Fe II\] $`\lambda `$8617/\[O I\] $`\lambda `$6300 in SNRs (filled areas) and HH objects (open areas) in our Galaxy and Magellanic Clouds. We do not include young SNRs, where supernova ejecta dominate the line emitting gas. The total (gas $`+`$ dust) abundances of metals in our sample are hence equal to those of the usual interstellar medium. The possible scatter of the total abundances among objects would be too small to affect the present analysis. For reference, we also show the flux ratios in H II regions M8 and M42 (arrows). The data were taken from the literature and are described in the figure caption. The distributions of SNRs and HH objects seem to be the same. Their \[Fe II\]/\[O I\] flux ratios are higher than those of H II regions by factors of 2–3. Since the gas-phase fraction of iron in H II regions is 5–10% (Baldwin et al. 1996; Esteban et al. 1999), the grain destruction efficiency in SNRs and HH objects seems to be 20–30%. This result is confirmed by the following numerical calculation. 
## 3 NUMERICAL CALCULATION Our numerical calculation was based on the code MAPPINGS III, version 1.0.0g (Dopita & Sutherland 1996). To study \[Fe II\] and \[O I\] emissions in detail, we included the charge exchange reaction Fe<sup>2+</sup> $`+`$ H<sup>0</sup> $`\to `$ Fe<sup>+</sup> $`+`$ H<sup>+</sup> (Neufeld & Dalgarno 1987), updated the collision strengths of Fe<sup>+</sup> and O<sup>0</sup> with the values of Pradhan & Zhang (1993) and Berrington & Burke (1981), and updated the radiative transition probabilities of O<sup>0</sup> with the values in Osterbrock (1989). Careful analytic fits were made to the temperature dependence of those collision strengths. The most important parameter for our calculation is the gas-phase elemental abundances. We included 11 elements: H, He, C, N, O, Ne, Mg, Si, S, Ar, and Fe. Formerly, the solar values had been used for the total (gas $`+`$ dust) abundances of metals in the interstellar medium. Recently, however, studies of elemental compositions in nearby stars revealed that the Sun is enhanced anomalously in metallicity by a factor of $`\sim 1.5`$ (Snow & Witt 1996). Since no reliable data of the interstellar abundances are currently available, we used the solar values of Anders & Grevesse (1989) with the abundances of metals being lowered by 0.20 dex (Savage & Sembach 1996). The gas-phase fraction of iron $`\delta _{\mathrm{Fe}}`$ = $`n_{\mathrm{Fe}}`$(gas)/$`n_{\mathrm{Fe}}`$(gas$`+`$dust) was set to be 0.1, 0.2, 0.3, or 1.0. We accordingly changed the gas-phase fractions of C, O, Mg, and Si, by assuming that the grain composition is always equal to that observed toward a reddened star $`\zeta `$ Oph (Savage & Sembach 1996). When $`\delta _{\mathrm{Fe}}`$ = 0.2, for example, 68% of C, 64% of O, 22% of Mg, and 26% of Si are in the gas phase. The other parameters are the shock velocity $`v_s`$, the preshock hydrogen nucleon density $`n_{\mathrm{H},0}`$, and the preshock magnetic field $`B_0`$. We set $`v_s`$ = 50–150 km s<sup>-1</sup>, $`n_{\mathrm{H},0}`$ = 10 or 100 cm<sup>-3</sup>, and $`B_0`$ = 3 $`\mu `$G. These parameter values are typical of radiative shocks in SNRs and HH objects: $`v_s`$ $`\sim `$ 100 km s<sup>-1</sup> in SNRs and $`v_s`$ $`\stackrel{<}{\sim }`$ 100 km s<sup>-1</sup> in HH objects (Russell & Dopita 1990). We assumed a plane-parallel geometry and a steady flow. The preshock ionization state was determined in an iterative manner. Since the ionized zone in the preshock gas is practically absent at $`v_s`$ $`\stackrel{<}{\sim }`$ 150 km s<sup>-1</sup> (Dopita & Sutherland 1996), we ignored the contribution of the preshock gas to the emergent spectrum. The calculation was terminated when the ionized hydrogen fraction $`n_{\mathrm{H}^+}/n_\mathrm{H}`$ fell below $`0.01`$. Beyond this point, the gas becomes too cool and neutral to excite the \[Fe II\] and \[O I\] emissions. We also ignored grain opacity, heating, and cooling. Their natures in shock-heated gas are quite uncertain. They should be nonetheless unimportant to the \[Fe II\] and \[O I\] excitations (see Shields & Kennicutt 1995). Figure 2 shows the cloud structure for $`\delta _{\mathrm{Fe}}`$ = 0.2, $`v_s`$ = 100 km s<sup>-1</sup>, and $`n_{\mathrm{H},0}`$ = 10 cm<sup>-3</sup> as a function of the hydrogen nucleon column density from the shock front: ($`a`$) temperature and densities, ($`b`$) ionization fractions, and ($`c`$) line emissivities per hydrogen nucleon, which are normalized by their peak values. These quantities vary markedly across the postshock gas. 
However, Fe<sup>+</sup> ions coexist with O<sup>0</sup> atoms, and the \[Fe II\] $`\lambda `$8617 line exhibits the same emissivity profile as the \[O I\] $`\lambda `$6300 line. The emissivity profile is different in other forbidden lines such as \[O II\] ($`\lambda `$3726 $`+`$ $`\lambda `$3729) and \[O III\] $`\lambda `$5007. We thereby confirm our expectation that the \[Fe II\] and \[O I\] lines are generated under the same physical conditions. Figure 3 shows the flux ratio \[Fe II\] $`\lambda `$8617/\[O I\] $`\lambda `$6300 for $`n_{\mathrm{H},0}`$ = 10 cm<sup>-3</sup> (filled circles) and 100 cm<sup>-3</sup> (open circles) as a function of the shock velocity. The \[Fe II\]/\[O I\] ratio at $`v_s`$ $`\stackrel{>}{\sim }`$ 70 km s<sup>-1</sup> depends only on the gas-phase iron fraction $`\delta _{\mathrm{Fe}}`$. Though this is not the case at $`v_s`$ $`<`$ 70 km s<sup>-1</sup>, such slow shocks are unimportant. They do not generate the \[O III\] $`\lambda `$5007 emission, which is observed in all of our SNRs and HH objects. We also show the median and maximum values of the observed \[Fe II\]/\[O I\] ratios (arrows). These values are reproduced by the models for $`v_s`$ $`\stackrel{>}{\sim }`$ 70 km s<sup>-1</sup> with $`\delta _{\mathrm{Fe}}`$ = 0.2 and 0.3, respectively. The preshock gas has $`\delta _{\mathrm{Fe}}`$ $`\simeq `$ 0. Hence, as suggested from the comparison with H II regions (Fig. 1), the grain destruction efficiency is 20% on average and 30% at most in radiative shocks associated with SNRs and HH objects.<sup>1</sup><sup>1</sup>1 Even if the individual objects have ranges of shock parameters and the observed \[Fe II\] and \[O I\] emissions originate preferentially in slow shocks driven into dense gas, our conclusion is qualitatively correct. The median value of the observed \[Fe II\]/\[O I\] ratios is reproduced by the models for $`v_s`$ = 50 km s<sup>-1</sup> with $`\delta _{\mathrm{Fe}}`$ = 0.3. ## 4 DISCUSSION Though shocks destroy grains in SNRs and HH objects, the destruction is far from complete. Typically, 80% of iron is still locked into grains. However, many observational studies of shock-heated nebulae conclude that the grain destruction is almost complete, as reviewed by Savage & Sembach (1996) and Jones (2000; see also references for the data used in Fig. 1). In the usual interstellar gas, heavy metals such as Fe and Ca are depleted by factors of $`10^2`$–$`10^4`$ (Savage & Sembach 1996). If only a small fraction of the grains is destroyed, emission and absorption lines of those metals are greatly enhanced (Fesen & Kirshner 1980). The observer is easily misled to consider that a large fraction of the grains is destroyed. Moreover, owing to wide variations of physical quantities across the gas (Fig. 2), it is generally difficult to determine elemental abundances in shocks. Nevertheless, conclusions similar to ours were obtained in some of the past observations of SNRs. Phillips & Gondhalekar (1983) and Jenkins et al. (1998) observed ultraviolet absorption lines of stars behind S147 and Vela SNR, estimated column densities of gas-phase ions across these SNRs, and found depletion of Al. Raymond et al. (1988, 1997) observed ultraviolet and optical emission lines of Cygnus Loop and Vela SNR, compared their relative strengths with predictions of shock models, and found depletions of Fe and of C and Si. Oliva, Moorwood, & Danziger (1989) observed near-infrared emission lines of RCW 103, compared their strengths with model predictions, and found depletion of Fe. 
Reach & Rho (1996) detected continuum emission from grains in the far-infrared spectrum of W44. It should be noted that our result is more reliable than these previous ones. The Fe/O abundance ratio estimated from the \[Fe II\] and \[O I\] fluxes is robust with respect to the shock velocity and preshock density (Fig. 3). Since we studied the Fe/O abundance ratio alone, the grain survival probability estimated here is applicable, strictly speaking, only to Fe-bearing grains. There could exist several types of grains and subgrains which have different survival abilities. Of importance is a careful analysis of emission and absorption lines of the other elements. The present conclusion is nonetheless general. Observations of various Galactic interstellar clouds indicate that Mn, Cr, Ni, and Ti always have the same dust-phase fraction as Fe (Savage & Sembach 1996; Jones 2000). Though the major dust constituents Mg and Si appear to be more easily liberated to the gas phase than Fe, large fractions of Mg- and Si-bearing grains survive in shocks. The above observations indicate that, when 80% of Fe is locked into grains, $`\sim 50`$% of Mg and Si are in the dust phase.<sup>2</sup><sup>2</sup>2 These dust-phase fractions were adapted from Savage & Sembach (1996). Their gas-phase Mg abundance was scaled by a factor of 2, in order to allow for the revised Mg<sup>+</sup> oscillator strengths (Fitzpatrick 1997). Since Savage & Sembach (1996) used the Zn abundance to normalize the Fe, Mg, and Si abundances, we made no correction for the difference in the assumed total (gas $`+`$ dust) abundances. Noticeably, in Galactic clouds, the observed $`\delta _{\mathrm{Fe}}`$ value is always less than 0.3. This fact supports our conclusion for SNRs and HH objects. The present conclusion is, at least quantitatively, consistent with theoretical models. Jones, Tielens, & Hollenbach (1996) predicted that 60% (by mass) of silicate grains survive in a shock with $`v_s`$ = 100 km s<sup>-1</sup>, $`n_{\mathrm{H},0}`$ = 25 cm<sup>-3</sup>, and $`B_0`$ = 3 $`\mu `$G. This predicted survival probability is somewhat lower than our estimate, but it could be increased. The above model assumes that grains are solid and homogeneous. If the grains are porous, e.g., consisting of several types and sizes of subgrains, they undergo less destruction (Jones et al. 1994). This is because their effective cross section is large. The resultant large gas drag prevents efficient betatron acceleration. Such porous grains are the natural result of coagulation of small grains into larger ones, and are found as interplanetary dust particles. The presence of porous grains is also suggested by the recent finding that the Sun is overabundant in heavy elements (Snow & Witt 1996). The amount of metals available for grains in the interstellar space is much less than had been estimated from the solar abundances. However, the observed interstellar extinction per unit length puts a lower limit to the volume fraction of space occupied by grains. This situation calls for porous grains which have high volume-to-mass ratios (see also Jones et al. 1996; Mathis 1990, 1998). The destruction efficiency in shocks determines the lifetime of interstellar grains. We estimate the grain lifetime in our Galaxy (Tielens 1998). The gas mass shocked by a supernova to a velocity equal to or greater than $`v_s`$ is 2500 ($`v_s`$/100 km s<sup>-1</sup>)<sup>-9/7</sup> $`M_{\odot }`$. 
Since the effective supernova rate is 8 $`\times `$ 10<sup>-3</sup> yr<sup>-1</sup> and the mass of diffuse gas is 5 $`\times `$ $`10^8`$ $`M_{\odot }`$, the time interval for a grain to experience a supernova-driven shock with $`v_s`$ $`\ge `$ 100 km s<sup>-1</sup> is 3 $`\times `$ 10<sup>7</sup> yr. If each of the shocks destroys 20% of the grains, their mean lifetime is 2 $`\times `$ 10<sup>8</sup> yr. The lifetime of the gas parcel itself is much longer, i.e., 2 $`\times `$ 10<sup>9</sup> yr, which is estimated from the total gas mass 8 $`\times `$ 10<sup>9</sup> $`M_{\odot }`$ and the star formation rate 5 $`M_{\odot }`$ yr<sup>-1</sup>. Since interstellar grains are actually present, there has to exist some growth process, e.g., accretion of gas particles onto grains in dense gas (Jones et al. 1994). Tielens (1998) obtained a similar grain lifetime, from the metal depletions observed in diffuse and dense gases and the timescale for cycling the material between them. Finally, we underline that dust depletion is crucial to understanding spectra of shock-heated nebulae. Their gas-phase abundances are often assumed to represent the total (gas $`+`$ dust) abundances of the preshock gas. This assumption could be wrong. For example, near-infrared \[Fe II\] emission lines at 1.257 and 1.644 $`\mu `$m are more prominent by factors of $`\sim 500`$ in SNRs than in H II regions (Graham, Wright, & Longmore 1987). This fact is often explained by shock destruction of Fe-bearing grains. However, Mouri, Kawara, & Taniguchi (2000) found with the code MAPPINGS III that the flux ratio \[Fe II\] 1.257 $`\mu `$m/Pa$`\beta `$ observed in SNRs is reproduced only when the gas-phase iron abundance is as low as in the H II region M42. The flux ratio predicted for the solar abundance is too high. This finding motivated the present work. We adopted the flux ratio \[Fe II\] $`\lambda `$8617/\[O I\] $`\lambda `$6300 as a more reliable diagnostic, conducted numerical calculations in more detail for gas-phase metallicity, and thereby determined more precisely the gas-phase iron abundance. The authors are grateful to R. S. Sutherland for making the excellent code MAPPINGS III available to the public, and also to K. Kawara for interesting discussion.
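For reference, the bookkeeping behind the lifetime estimates above can be reproduced in a few lines (our recomputation; the quoted values are one-significant-figure roundings):

```python
M_shocked = 2500 * (100/100)**(-9/7)    # M_sun shocked to v_s >= 100 km/s per supernova
rate, M_diffuse = 8e-3, 5e8             # supernovae per yr; M_sun of diffuse gas
t_shock = M_diffuse / (rate * M_shocked)
print(t_shock)                          # 2.5e7 yr, quoted as 3 x 10^7 yr
print(t_shock / 0.20)                   # 1.3e8 yr, quoted as 2 x 10^8 yr (20% per shock)
print(8e9 / 5)                          # 1.6e9 yr, quoted as 2 x 10^9 yr (gas lifetime)
```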
# Raman scattering study of anomalous spin-, charge-, and lattice-dynamics in the charge-ordered phase of $`\mathrm{Bi}_{1-x}\mathrm{Ca}_x\mathrm{MnO}_3`$ $`(x>0.5)`$

## Abstract

We report an inelastic light scattering study of the effects of charge-ordering on the spin-, charge-, and lattice-dynamics in $`\mathrm{Bi}_{1-x}\mathrm{Ca}_x\mathrm{MnO}_3`$ $`(x>0.5)`$. We find that charge-ordering results in anomalous phonon behavior, such as the appearance of ‘activated’ modes. More significantly, however, the transition to the CO phase results in the appearance of a quasielastic scattering response with the symmetry of the spin chirality operator ($`T_{1g}`$); this scattering response is thus indicative of magnetic or chiral spin fluctuations in the AFM charge-ordered phase.

Among the most interesting and rich phenomena exhibited by complex transition metal oxides such as the nickelates, cuprates, and manganites is charge- and orbital-ordering, i.e., the organization of charges and orbital configurations in periodic arrays on the lattice. The considerable recent effort devoted to understanding this behavior has revealed a variety of interesting properties, including novel states of matter such as coexisting magnetic phases and possible ‘quantum liquid crystal’ states. Yet, a number of important issues remain unsolved, including the effects of orbital and charge ordering (CO) on the lattice and charge dynamics, and the nature of carrier motion in the complex spin background of the Néel state. A clarification of these issues demands experimental methods capable of probing the strong interplay among the spin-, charge-, lattice-, and orbital-degrees-of-freedom in strongly-correlated systems. In this Letter, we discuss an inelastic light (Raman) scattering study of the unconventional lattice-, spin-, and charge-dynamics in the CO phase of the $`\mathrm{Bi}_{1-x}\mathrm{Ca}_x\mathrm{MnO}_3`$ $`(x>0.5)`$ system. Raman scattering offers several unique features in the investigation of charge-ordered systems. For example, by providing energy, symmetry, and lifetime information concerning lattice-, spin-, as well as charge-excitations, Raman scattering affords unique insight into the interplay among these coupled excitations in various phases. Also, as a technique that can sensitively probe unconventional charge- and spin-dynamics, such as exotic “chiral” spin and charge currents, Raman scattering offers a unique means of probing the unconventional spin- and charge-dynamics that arise when charge-carriers are placed in the complex spin environment of CO systems. These benefits are clearly evident in the present study, which uncovers several interesting features of CO behavior in $`\mathrm{Bi}_{1-x}\mathrm{Ca}_x\mathrm{MnO}_3`$ ($`x>0.5`$). First, polarized Raman measurements show that charge-ordering results in the appearance of activated phonon modes, due to the lowering of symmetry by charge-stripe formation. Most interesting, however, is the observation that a quasielastic Raman scattering response, with the symmetry of the spin chirality operator ($`T_{1g}`$), develops in the CO phase. This distinctive scattering response indicates the presence of chiral fluctuations at finite temperatures in the CO/AFM phase, possibly arising from a chiral spin-liquid state associated with the Mn core spins, or from closed-loop charge motion caused by the constraining environment of the complex orbital and Néel spin textures.
The samples used in our study were flux-grown single crystalline $`\mathrm{Bi}_{0.19}\mathrm{Ca}_{0.81}\mathrm{MnO}_3`$ ($`T_{\mathrm{co}}=165`$ K, $`T_\mathrm{N}=120`$ K) and $`\mathrm{Bi}_{0.18}\mathrm{Ca}_{0.82}\mathrm{MnO}_3`$ ($`T_{\mathrm{co}}=210`$ K, $`T_\mathrm{N}=160`$ K). The typical dimensions of these samples are 2 × 2 × 1 mm<sup>3</sup>. Raman spectra were measured in a backscattering geometry using continuous helium flow and cold-finger optical cryostats, and a modified subtractive-triple-grating spectrometer equipped with a nitrogen-cooled CCD array detector. The spectra were corrected for the spectral response of the spectrometer and the detector. The samples were excited with 4 mW of the 4762-$`\mathrm{\AA }`$ line of the Kr<sup>+</sup> laser, focused to a 50 $`\mu `$m diameter spot within a single CO domain of the crystals. Temperatures listed for the Raman spectra include estimates of laser heating effects. To identify excitation symmetries, the spectra were obtained with the incident ($`𝐄_𝐢`$) and scattered ($`𝐄_𝐬`$) light polarized in various configurations, including ($`𝐄_𝐢,𝐄_𝐬`$) = ($`𝐱`$,$`𝐱`$) and ($`𝐲`$,$`𝐲`$): $`A_{1g}+E_g`$, and ($`𝐄_𝐢,𝐄_𝐬`$) = ($`𝐋`$,$`𝐋`$): $`A_{1g}+\frac{1}{4}E_g+T_{1g}`$, where x and y are the [100] and [010] crystal directions, respectively, L is left circular polarization, and where $`A_{1g}`$, $`E_g`$ and $`T_{1g}`$ are respectively the singly-, doubly-, and triply-degenerate irreducible representations of the $`O_h`$ point group of the crystals, which have a pseudocubic structure. Figure 1 (a) shows polarized microscope images of the (100) $`\mathrm{Bi}_{0.19}\mathrm{Ca}_{0.81}\mathrm{MnO}_3`$ sample surface taken at 290 K, 175 K, 80 K, and 6 K, respectively. One can clearly see the growth of “light” and “dark” regions below the charge-ordering temperature, $`T_{\mathrm{co}}=165`$ K, corresponding to the development of domains having perpendicular orientations of the charge-stripes. The evolution of CO behavior and domain formation below $`T_{\mathrm{co}}`$ is more quantitatively illustrated in Fig. 1 (b), which presents the temperature dependence of the dielectric anisotropy, $`\mathrm{\Delta }^{ϵ_1}(\omega )=\frac{|ϵ_1^{ac}-ϵ_1^{bc}|}{\sqrt{(ϵ_1^{ac})^2+(ϵ_1^{bc})^2}}`$, where $`ϵ_1^{ac}`$ and $`ϵ_1^{bc}`$ are the dielectric responses for $`\omega =1`$ eV light polarized in the ac and bc planes respectively. Notably, the temperature-dependence of $`\mathrm{\Delta }^{ϵ_1}`$ is similar to that of the order-parameter in a second-order phase transition, consistent with our expectation that the increasing size of this quantity below $`T_{\mathrm{co}}`$ reflects the increasing organization of charges below $`T_{\mathrm{co}}`$. However, at low temperatures, $`\mathrm{\Delta }^{ϵ_1}`$ saturates below the maximum value of 1 due to the fact that the optical spot in the ellipsometry measurements is not isolated to a single domain. One of the distinct advantages of polarized Raman scattering techniques for measuring optical anisotropy in the CO phase is that this technique allows the study of single ($`\sim 100`$ $`\mu `$m) domains with uniformly aligned charge stripes. Raman spectra of single domain regions in $`\mathrm{Bi}_{0.19}\mathrm{Ca}_{0.81}\mathrm{MnO}_3`$ are illustrated, for various temperatures and scattering geometries, in Fig. 2. Note first that in the high temperature “isotropic” phase ($`T=305`$ K), all three scattering geometries (xx, yy, and LL) overlap, with the exception of some intensity differences associated with the phonons.
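As a brief aside on the anisotropy measure $`\mathrm{\Delta }^{ϵ_1}`$ defined above, the sketch below (ours, with purely illustrative inputs rather than measured values) shows that it ranges from 0 for an isotropic response to 1 when one in-plane response dominates completely:

```python
import math

def dielectric_anisotropy(eps_ac: float, eps_bc: float) -> float:
    """Delta^eps1 = |eps_ac - eps_bc| / sqrt(eps_ac**2 + eps_bc**2)."""
    return abs(eps_ac - eps_bc) / math.hypot(eps_ac, eps_bc)

print(dielectric_anisotropy(4.0, 4.0))   # 0.0   isotropic phase (T > T_co)
print(dielectric_anisotropy(4.5, 3.5))   # ~0.18 partially charge-ordered
print(dielectric_anisotropy(4.0, 0.0))   # 1.0   fully anisotropic limit
```

We now return to the temperature evolution of the spectra.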
However, with decreasing temperature into the CO phase, two significant features are evident: First, several changes in the phonon spectra evolve with decreasing temperature, including the appearance of new modes and the evolution of differences in the phonon spectra associated with the xx and yy scattering geometries. Second, while the low frequency backgrounds associated with the xx and yy scattering geometries decrease with decreasing temperature into the CO phase, there is a dramatic growth of the low frequency scattering background in the LL geometry, betraying the development of a distinctive $`T_{1g}`$-symmetry quasielastic scattering response in the CO/AFM phase. We focus first on the effects of charge-ordering on the phonons in $`\mathrm{Bi}_{0.19}\mathrm{Ca}_{0.81}\mathrm{MnO}_3`$: such information is important, as the optical phonons function as ‘local probes’ of changes in the local symmetry and bond-strengths caused by charge-ordering. Consider first the $`\sim 160`$ cm<sup>-1</sup> $`A_{1g}`$ phonon mode in Fig. 3 (circles and triangles), which is associated with in-phase Mn vibrations. This mode exhibits an abrupt hardening across the charge-ordering transition, indicative of the effects of charge-ordering on the lattice force constants via changes in the Coulomb energies. Also, Figs. 2 and 3 illustrate the appearance of a second mode at $`\sim 185`$ cm<sup>-1</sup>. In the isotropic high temperature phase ($`T>T_{\mathrm{co}}`$), this mode is present in the xy scattering configuration, but develops also in the xx scattering geometry below $`T_{\mathrm{co}}`$ due to the breakdown of symmetry selection rules in the CO phase. The breaking of 4-fold in-plane symmetry due to long-range charge-ordering is also reflected in differences in the phonon spectra observed in the xx and yy scattering geometries. In particular, the yy spectrum at 25 K (Fig. 2) shows the development of ‘new’ phonon modes near $`\sim 145`$ cm<sup>-1</sup> (filled squares in Fig. 3), $`\sim 300`$ cm<sup>-1</sup>, and $`\sim 420`$ cm<sup>-1</sup>. The appearance of new modes can reflect Brillouin-zone-folding of zone boundary modes to the zone center, caused by the additional periodicity associated with charge-stripe formation. It is expected, however, that zone-folded modes should have much weaker intensities than ‘regular’ modes, which is not the case here. A more plausible interpretation therefore is that the new modes we observe are “activated” modes due to charge-ordering: For example, the out-of-phase Mn vibrational mode is not Raman-active in the PM ‘isotropic’ phase because the net charge fluctuation associated with out-of-phase motion of the (Mn<sup>3.5+</sup>–O–Mn<sup>3.5+</sup>) complex is zero, and hence this mode cannot modulate the polarizability. However, in the CO phase, the different charges on the Mn<sup>3+</sup> and Mn<sup>4+</sup> sites cause out-of-phase Mn vibrations of the (Mn<sup>3+</sup>–O–Mn<sup>4+</sup>) complex to have a non-zero net charge fluctuation that couples to the polarizability, resulting in the ‘new’ Raman-active mode at $`\sim 145`$ cm<sup>-1</sup>. An even more interesting feature apparent in Figs. 2 and 4 (a) is the development in the CO state of a quasielastic Raman response in the LL scattering geometry, $$\mathrm{Im}\chi (\omega )\propto \frac{A\omega \mathrm{\Gamma }}{\omega ^2+\mathrm{\Gamma }^2}$$ (1) where A is the quasielastic scattering amplitude and $`\mathrm{\Gamma }`$ is the fluctuation rate.
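To make the shape of this response concrete, Eq. (1) can be evaluated directly. The following minimal sketch (ours; all parameter values are illustrative only) shows that the response peaks at $`\omega =\mathrm{\Gamma }`$, so a shrinking fluctuation rate pushes the maximum toward zero frequency:

```python
import numpy as np

def quasielastic(omega, amp, gamma):
    """Im chi(omega) ~ amp * omega * gamma / (omega**2 + gamma**2), Eq. (1)."""
    return amp * omega * gamma / (omega**2 + gamma**2)

omega = np.linspace(0.0, 100.0, 501)          # Raman shift, arbitrary units
for gamma in (40.0, 20.0, 5.0):               # decreasing fluctuation rate
    resp = quasielastic(omega, amp=1.0, gamma=gamma)
    print(gamma, omega[np.argmax(resp)])      # peak sits at omega = gamma
```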
The absence of a similar scattering response in either xx or yy scattering geometries definitively identifies this quasielastic response as having $`T_{1g}`$ symmetry. This distinctive scattering symmetry transforms like the spin-chirality operator ($`\stackrel{}{S_1}\cdot \stackrel{}{S_2}\times \stackrel{}{S_3}`$), and is typical of scattering from magnetic fluctuations and from chiral spin fluctuations. Thus, while the ground state of the CO/AFM phase is not generally expected to have a net magnetization or spin chirality, the development of this $`T_{1g}`$ quasielastic response in Figs. 2 and 4 (a) betrays the presence of strong fluctuations associated with such a broken time-reversal symmetry state at finite temperatures in the CO/AFM phase. Interestingly, the fluctuation rate $`\mathrm{\Gamma }`$ associated with this unusual response (inset Fig. 4 (b)) tends to zero with decreasing temperature, perhaps indicating a tendency toward static long-range order as $`T\to 0`$. When considering the origin of this anomalous scattering response, we can first rule out a “precursor” fluctuational response associated with either charge-stripe- or orbital-ordering: while such responses should develop above, become maximum near, and diminish below the ordering transition temperature $`T_{\mathrm{co}}`$, Figs. 4 (a) and (b) clearly illustrate that the $`T_{1g}`$ quasielastic response we observe evolves at $`T_{\mathrm{co}}`$, and grows with decreasing temperature below $`T_{\mathrm{co}}`$, coincident with the charge-order-parameter $`\mathrm{\Delta }^{ϵ_1}`$ in Fig. 1 (b). Several intriguing possibilities are consistent with both the distinctive $`T_{1g}`$ symmetry and temperature dependence of the quasielastic light scattering response in Figs. 4 (a) and (b). First, although neutron scattering studies show no evidence for ferromagnetic spin fluctuations below $`T_\mathrm{N}`$ in this system, it is possible that the $`T_{1g}`$ scattering we observe reflects spin fluctuations associated with a canted AFM phase. Similarly, the properties of the quasielastic scattering response in Fig. 4 are also consistent with the presence of chiral spin fluctuations associated with the core spins, for example similar to those observed in ferromagnetic pyrochlores such as $`\mathrm{Sm}_2\mathrm{Mo}_2\mathrm{O}_7`$. Such fluctuations of canted AFM or spin chirality could arise in the CO manganites due to geometrical frustration of the Mn core spins, and/or to an appreciable Dzyaloshinskii-Moriya interaction ($`\stackrel{}{S_1}\times \stackrel{}{S_2}`$), in the AFM/CO phase. Finally, another interesting possibility is that the fluctuational response in Fig. 4 is associated with chiral charge currents, i.e., charges constrained to hop in closed-loop paths. The possibility of such charge motion in the Néel spin environment of the CO/AFM phase is suggested by first noting that long-range translational charge motion is constrained in this phase by the complex orbital and Néel spin structure, by the constraints of the double-exchange hopping mechanism, and by disorder (e.g., by doping away from commensurate fillings), which strongly limits conduction along the 1D Mn<sup>3+</sup> chains. However, Fig. 4 (c) illustrates one possible closed-loop path in which the hopping of holes is not constrained by either the spin or orbital environments in the CO/AFM phase.
Interestingly, Nagaosa and Lee predicted that such “closed-loop” charge hopping should be present in doped AFM insulators, and should be manifest in the appearance of a quasielastic Raman response similar to that observed in the CO/AFM phase of $`\mathrm{Bi}_{1-x}\mathrm{Ca}_x\mathrm{MnO}_3`$ ($`x>0.5`$) (Fig. 4 (a)). Such quasielastic light scattering arises in this case from fluctuations in an induced effective magnetic field generated by the chiral charge currents, $`\langle \chi (m)\chi ^{*}(m^{\prime })\rangle \propto \langle mm^{\prime }\rangle `$, where $`\chi (m)`$ is the ‘field’-dependent electric susceptibility. In conclusion, our Raman scattering studies of $`\mathrm{Bi}_{1-x}\mathrm{Ca}_x\mathrm{MnO}_3`$ have allowed us to explore the influence of charge- and orbital-ordering on the lattice-, charge-, and spin-dynamics. Most significantly, in the CO/AFM phase these studies reveal the development of a fluctuational (quasielastic) response with the distinctive symmetry of the spin-chirality operator – this remarkable response is consistent with the presence of a fluctuating chiral state at finite temperatures in the CO/AFM state. Importantly, these studies also clearly demonstrate that Raman scattering is uniquely suited to probing exotic “chiral” phases in other correlated systems, such as the CMR-phase manganites, geometrically-frustrated metallic ferromagnets, the high $`T_c`$ cuprates, and quantum Hall systems. We thank M. V. Klein, Y. Lyanda-Geller, and P. Goldbart for useful discussions. We acknowledge financial support of the DOE via DEFG02-96ER45439 (S.Y., M.R., S.L.C.), the DFG via Ru 773/1-1, the NSF through the STCS via DMR91-20000 (M.R.), and NSF-DMR-9802513 (K.H.K., S-W.C.).
# The Same Superconducting Criticality for Underdoped and Overdoped La2-xSrxCuO4 Single Crystals

H. H. Wen<sup>1</sup>, X. H. Chen<sup>2</sup>, W. L. Yang<sup>1</sup>, and Z. X. Zhao<sup>1</sup>

<sup>1</sup>National Laboratory for Superconductivity, Institute of Physics and Center for Condensed Matter Physics, Chinese Academy of Science, P.O. Box 603, Beijing 100080, P. R. China

<sup>2</sup>Physics Department, University of Science and Technology of China, Hefei 230026, P. R. China

By measuring the superconducting diamagnetic moments of an underdoped and an overdoped La2-xSrxCuO4 single crystal of equal quality and roughly equal transition temperature, it is found that the underdoped sample has only one transition, which corresponds to Hc2, but the overdoped sample has two transitions, with the higher one at Hc2. Further investigation reveals the same upper critical field Hc2 for both samples although the overall charge densities are very different, indicating the possibility of a very direct and detailed equivalence of the superconducting condensation process in the two doping limits. The second transition for the overdoped sample can be understood as the bulk coupling between the superconducting clusters produced by macroscopic phase separation.

PACS numbers: 74.25.Bt, 74.20.Mn, 74.40.+k

## ACKNOWLEDGMENTS

This work is supported by the Chinese NSFC within the project 19825111. WHH gratefully acknowledges the continuing financial support from the Alexander von Humboldt foundation, Germany.
# Models of Dynamical Supersymmetry Breaking with Gauged $`U(1)_R`$ Symmetry

## I Introduction

Supersymmetry is motivated to solve the gauge hierarchy problem. However, since the superparticles have not been observed yet, supersymmetry should be broken at low energies. Spontaneous supersymmetry breaking at the tree-level does not explain why the discrepancy between the supersymmetry breaking scale and the Planck scale is so large. On the other hand, in models of dynamical supersymmetry breaking the supersymmetry breaking scale is related to the Planck scale via dimensional transmutation . In the light of this fact, many people have constructed models of dynamical supersymmetry breaking and discussed the phenomenology so far . As for the mediation mechanism of supersymmetry breaking to the visible sector, there are mainly two ways. One is gravity mediation (including anomaly mediation ), the other is gauge mediation (including anomalous U(1) mediation ). Many authors have extensively studied both scenarios and proposed interesting models. In our previous letter , we proposed a simple mechanism of dynamical supersymmetry breaking with gauged $`U(1)_R`$ symmetry. Although attempts to make use of the gauged $`U(1)_R`$ symmetry in supergravity theories can be found in Refs. , the mechanism of supersymmetry breaking was not the main topic in these works. Supersymmetry can be dynamically broken by the interplay between the Fayet-Iliopoulos term of $`U(1)_R`$ and the dynamically generated superpotential due to the non-perturbative dynamics of the gauge theory, with vanishing cosmological constant. However, we did not discuss in detail the anomaly cancellation for $`U(1)_R`$ taking into account the full particle contents, since we focused on the dynamics of supersymmetry breaking in the hidden sector. The main purpose of this paper is to pursue this point. We discuss two models as the visible sector: one is the minimal supersymmetric standard model (MSSM), the other is the supersymmetric SU(5) grand unified theory (GUT). In the MSSM model, we use the Green-Schwarz mechanism for anomaly cancellation. We find quite simple solutions of R-charge assignments in both cases. The spectrum of the supersymmetry breaking masses is also discussed. The scalar fields receive soft supersymmetry breaking masses of the same order as the gravitino mass from the tree-level interactions of supergravity. Moreover, the scalar fields with non-zero R-charges obtain additional soft supersymmetry breaking masses through the $`U(1)_R`$ D-term, which are also of the same order as the gravitino mass. The masses of the gauginos in the MSSM depend on the form of the gauge kinetic functions. If the gauge kinetic function includes a higher dimensional term, the gaugino masses are generated at the same order as the gravitino mass or less. If the gauge kinetic function is trivial, the gaugino masses are generated through anomaly mediation, a few orders of magnitude smaller than the gravitino mass. We see that our scenario is phenomenologically viable with a gravitino mass of order 1 TeV or 10 TeV. Comments on the generation of the Higgs potential from higher dimensional interactions are given. This paper is organized as follows. In section 2, we review our mechanism of dynamical supersymmetry breaking with gauged $`U(1)_R`$ symmetry. This mechanism is then applied to the MSSM as the visible sector in section 3, and to the supersymmetric SU(5) GUT as the visible sector in section 4.
The last section is devoted to the summary and discussion.

## II Dynamical Supersymmetry Breaking with Gauged $`U(1)_R`$ Symmetry

In this section, we review our previous letter, in which we proposed a simple mechanism of dynamical supersymmetry breaking with gauged $`U(1)_R`$ symmetry in the context of minimal supergravity. The model is based on the gauge group $`SU(2)_H\times U(1)_R`$ with the following matter contents. (Footnote: In our previous letter , we did not discuss the cancellation of the gauge anomaly of $`[U(1)_R]^3`$ and the mixed gravitational anomaly of $`U(1)_R`$. We simply assumed there that these anomalies are cancelled out, if all particle contents are considered with an appropriate $`U(1)_R`$ charge assignment. In this paper, we will discuss this issue in detail.)

| | $`SU(2)_H`$ | $`U(1)_R`$ |
| --- | --- | --- |
| $`Q_1`$ | 2 | $`-1`$ |
| $`Q_2`$ | 2 | $`-1`$ |
| $`S`$ | 1 | $`+4`$ |

The general renormalizable superpotential at the tree-level is $`W=\lambda S\left[Q_1Q_2\right],`$ (1) where the square brackets denote the contraction of $`SU(2)`$ indices by the $`ϵ`$-tensor, and $`\lambda `$ is a dimensionless coupling constant. We assume that $`\lambda `$ is real and positive. It is known that a superpotential is generated dynamically by the non-perturbative (instanton) effect of the $`SU(2)`$ gauge dynamics . The total effective superpotential is found to be $`W_{eff}=\lambda S\left[Q_1Q_2\right]+{\displaystyle \frac{\mathrm{\Lambda }^5}{\left[Q_1Q_2\right]}},`$ (2) where the second term is the dynamically generated superpotential and $`\mathrm{\Lambda }`$ is the dynamical scale of the $`SU(2)_H`$ gauge interaction. Note that the supersymmetric vacuum lies at $`S\to \mathrm{\infty }`$ and $`Q_1,Q_2\to 0`$, if only the F-term potential is considered. Next, let us consider the D-term potential. The gauged $`U(1)_R`$ symmetry is impossible in the globally supersymmetric theory, since the generators of the $`U(1)_R`$ symmetry and supersymmetry do not commute with each other. On the other hand, in the supergravity theory the $`U(1)_R`$ symmetry can be gauged as if it were a usual global symmetry . However, it should be noticed that the Fayet-Iliopoulos term of the gauged $`U(1)_R`$ symmetry appears due to the symmetry of supergravity. This fact is easily understood by the standard formula for supergravity theories . Using the generalized Kähler potential $`G=K+\mathrm{ln}|W|^2`$, we have $`D=\sum _iq_i(\partial G/\partial z_i)z_i`$, where $`q_i`$ is the $`U(1)_R`$ charge of the field $`z_i`$. Note that the contribution from the superpotential leads to the constant term, since the superpotential is holomorphic and has $`U(1)_R`$ charge 2. With the above particle contents, the D-term potential is found to be $`V_D={\displaystyle \frac{g_R^2}{2}}\left(4S^{\dagger }S-Q_1^{\dagger }Q_1-Q_2^{\dagger }Q_2+2M_P^2\right)^2,`$ (3) where $`M_P=M_{pl}/\sqrt{8\pi }`$ is the reduced Planck mass, $`g_R`$ is the $`U(1)_R`$ gauge coupling, and the minimal Kähler potential, $`K=S^{\dagger }S+Q_1^{\dagger }Q_1+Q_2^{\dagger }Q_2`$, is assumed. This assumption is justified by our result with $`\mathrm{\Lambda }\ll M_P`$, which means that the $`SU(2)_H`$ gauge interaction is weak at the Planck scale. Note that the supersymmetric vacuum conditions required by the D-term potential and by the effective superpotential of Eq. (2) are incompatible. Therefore, supersymmetry is broken. This consequence remains correct if there is no other superfield with negative $`U(1)_R`$ charge, or if the other negatively charged superfields (if they exist) have no vacuum expectation value.
Let us analyze the total potential in our model. Here, note that the cosmological constant should vanish. This requirement comes not only from the observations of the present universe but also from the consistency of our discussion. Since it is not clear whether the superpotential discussed above can be dynamically generated even in curved space, the space-time should be flat for our discussion to be correct. Note that we cannot take the usual strategy, namely, adding a constant term to the superpotential, since such a term is forbidden by the $`U(1)_R`$ gauge symmetry. Therefore, it is a non-trivial problem whether we can obtain the vanishing cosmological constant in our model. Assuming that the potential minimum lies on the D-flat direction of the $`SU(2)_H`$ gauge interaction, we take the vacuum expectation values such that $`S=s`$ and $`Q_i^\alpha =v\delta _i^\alpha `$, where $`i`$ and $`\alpha `$ denote the flavor and $`SU(2)_H`$ indices, respectively. We can always make $`s`$ and $`v`$ real and positive by symmetry transformations. The total potential is given by $`V(v,s)=e^K\left[\left(\lambda v^2+sW\right)^2+2v^2\left(\lambda s-{\displaystyle \frac{\mathrm{\Lambda }^5}{v^4}}+W\right)^2-3W^2\right]+{\displaystyle \frac{g_R^2}{2}}\left(4s^2-2v^2+2\right)^2,`$ (4) where $`K`$ and $`W`$ are the Kähler potential and superpotential, respectively, which are given by $`K=s^2+2v^2,`$ (6) $`W=\lambda sv^2+{\displaystyle \frac{\mathrm{\Lambda }^5}{v^2}}.`$ (7) Here, all dimensionful parameters are taken to be dimensionless with the normalization $`M_P=1`$. The first part of Eq. (4) comes from the F-term potential (the $`3W^2`$ piece being the supergravity contribution), and the remainder is the D-term potential. Since the potential is very complicated, it is convenient to make some assumptions for the values of the parameters. First, assume that $`g_R\gg \lambda ,\mathrm{\Lambda }^5`$. Since the D-term potential is proportional to $`g_R^2`$ and positive definite, the potential minimum is expected where $`V_D`$ is as small as possible. If we assume $`s\ll 1`$ and $`v\simeq 1`$, the potential can be rewritten as $`V\simeq e^2\left(\lambda ^2-3\mathrm{\Lambda }^{10}\right).`$ (8) It is found that $`\lambda \simeq \sqrt{3}\mathrm{\Lambda }^5`$ is required in order to obtain the vanishing cosmological constant. Let us consider the stationary conditions of the potential. Using the assumptions $`s\ll 1`$ and $`v=1+y`$ ($`|y|\ll 1`$), the stationary conditions can be expanded with respect to $`s`$ and $`y`$. Considering the relations $`g_R\gg \lambda \sim \mathrm{\Lambda }^5`$, we can expand the condition $`\partial V/\partial y=0`$ and obtain $`y\simeq s^2-{\displaystyle \frac{e^2\lambda ^2}{2g_R^2}}.`$ (9) Using this result, we can also expand the condition $`\partial V/\partial s=0`$ and obtain $`s\simeq {\displaystyle \frac{\lambda \mathrm{\Lambda }^5}{8\lambda ^2-\mathrm{\Lambda }^{10}}}.`$ (10) By the numerical analysis, the above rough estimation is found to be a good approximation. The result of numerical calculations is the following. $`y\simeq 4.7\times 10^{-3},`$ (11) $`s\simeq 6.8\times 10^{-2}.`$ (12) Here, we used the values $`\mathrm{\Lambda }=10^{-3}`$, $`\lambda \simeq 1.8\mathrm{\Lambda }^5`$ and $`g_R=10^{-11}`$. For these values of the parameters, we can obtain the vanishing cosmological constant. Note that the numerical values of Eqs. (11) and (12) are almost independent of the actual value of $`\mathrm{\Lambda }`$, if the condition $`g_R\gg \mathrm{\Lambda }^5`$ is satisfied and the ratio $`\lambda /\mathrm{\Lambda }^5`$ is fixed.
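The quoted minimum can be reproduced numerically. Below is a minimal sketch (our own script, not from the paper): since $`g_R\gg \lambda `$, the minimum tracks the $`U(1)_R`$ D-flat valley $`4s^2-2v^2+2=0`$, so it suffices to scan the potential along $`v=\sqrt{1+2s^2}`$:

```python
import numpy as np

Lam, lam, gR = 1e-3, 1.8e-15, 1e-11   # Lambda, lambda ~ 1.8*Lambda^5, g_R (M_P = 1)

def V(v, s):
    W  = lam*s*v**2 + Lam**5/v**2                        # Eq. (7)
    K  = s**2 + 2.0*v**2                                 # Eq. (6)
    VF = np.exp(K)*((lam*v**2 + s*W)**2
                    + 2.0*v**2*(lam*s - Lam**5/v**4 + W)**2
                    - 3.0*W**2)                          # F-term part
    VD = 0.5*gR**2*(4.0*s**2 - 2.0*v**2 + 2.0)**2        # D-term part
    return VF + VD

s = np.linspace(1e-4, 0.2, 200001)
v = np.sqrt(1.0 + 2.0*s**2)       # D-flat valley; exact up to O(lambda^2/g_R^2)
i = np.argmin(V(v, s))
print(f"s = {s[i]:.3f}, y = v - 1 = {v[i] - 1.0:.1e}")   # cf. Eqs. (11)-(12)
```

Rescaling $`\mathrm{\Lambda }`$ at fixed $`\lambda /\mathrm{\Lambda }^5`$ indeed leaves $`s`$ and $`y`$ essentially unchanged in such a scan, as claimed.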
This can be seen in the approximate formulae of Eqs. (9) and (10). We can choose the value of $`\mathrm{\Lambda }`$ in order to obtain a phenomenologically acceptable mass spectrum. As a result of the above analysis, we can estimate the gravitino mass as $`m_{3/2}=e^{K/2}W\simeq 3.0\times {\displaystyle \frac{\mathrm{\Lambda }^5}{M_P^4}}.`$ (13) This non-zero gravitino mass means that supersymmetry is really broken in the framework of supergravity. The gravitino mass contributes to the masses of the scalar partners via the tree-level interactions of supergravity. Note that there is another contribution, if the scalar partners have non-zero $`U(1)_R`$ charges. In this case, they also acquire a mass from the vacuum expectation value of the D-term, and it is estimated as $`m_{Dterm}^2=g_R^2\langle D\rangle q\simeq \left(7.3\times {\displaystyle \frac{\mathrm{\Lambda }^5}{M_P^4}}\right)^2q,`$ (14) where $`q`$ is the $`U(1)_R`$ charge. This mass squared is always positive for the scalar fields with positive $`U(1)_R`$ charges. The mass is of the same order of magnitude as the gravitino mass. This is because $`g_R`$ is canceled out in the above estimation (see Eq. (9)). Note that $`m_{3/2}`$ and $`m_{Dterm}^2`$ are controlled by the strong coupling scale of the $`SU(2)_H`$ gauge theory, $`\mathrm{\Lambda }`$. We give some comments. Our model has the same structure as the supersymmetry breaking model with the anomalous $`U(1)`$ gauge symmetry . In that model, the Fayet-Iliopoulos term originates from the anomaly of the $`U(1)`$ gauge symmetry . On the other hand, in our model the origin of the term is the symmetry of supergravity with the gauged $`U(1)_R`$ symmetry. The Fayet-Iliopoulos term appears even if the $`U(1)_R`$ gauge interaction is anomaly free. The mediation of supersymmetry breaking to the visible sector is discussed in the succeeding sections. We address the highly non-trivial problem of whether the anomaly cancellation of $`U(1)_R`$ can be achieved when we include the visible sector superfields with positive semi-definite $`U(1)_R`$ charges . We discuss two explicit models. One is the model in which the $`U(1)_R`$ anomalies are canceled out by the Green-Schwarz mechanism. We refer to this model as the anomalous $`U(1)_R`$ model. The other is the model in which the $`U(1)_R`$ symmetry is anomaly free. We refer to this model as the anomaly free $`U(1)_R`$ model.

## III Anomalous $`U(1)_R`$ model

Before discussing the model in detail, it is instructive to review the Green-Schwarz mechanism in four dimensions . Suppose that we have a gauge symmetry $`U(1)_X`$ with mixed gauge anomalies of $`U(1)_XG_i^2`$, where $`G_i`$ denote other gauge groups. The Lagrangian is not invariant under the $`U(1)_X`$ gauge transformation $`A_\mu \to A_\mu +\partial _\mu \alpha (x)`$: $$\delta \mathcal{L}_{gauge}=\alpha (x)\frac{C_i}{8\pi ^2}\mathrm{tr}(F_i\stackrel{~}{F}_i),$$ (15) where $`C_i`$ are the anomaly coefficients and $`F_i(\stackrel{~}{F_i})`$ are the field strength tensors (their duals) of $`G_i`$. On the other hand, there could be another anomaly at the Planck scale through the dilaton superfield $`S`$ with the $`U(1)_X`$ transformation $`S\to S+\frac{i}{2}\delta _{\mathrm{GS}}\alpha (x)`$: $$\delta \mathcal{L}_{\mathrm{GS}}=-\alpha (x)\frac{\delta _{\mathrm{GS}}}{2}k_i\frac{1}{2}\mathrm{tr}(F_i\stackrel{~}{F_i}),$$ (16) where $`\delta _{\mathrm{GS}}`$ is the Green-Schwarz coefficient, which is calculated in string theory as $$\delta _{\mathrm{GS}}=\frac{1}{192\pi ^2}\mathrm{tr}q_i.$$ (17) Here, $`q_i`$ are the R-charges of the fermionic components of the superfields.
$`k_i`$ are the Kac-Moody levels of $`G_i`$. Note that the Kac-Moody levels are integers for non-Abelian gauge groups but are not necessarily integers for Abelian groups. Therefore, we can choose the values of the Kac-Moody levels for Abelian groups appropriately. All the Kac-Moody levels must have the same sign, since all the gauge couplings are generated by $$\frac{1}{g_i^2}=k_i\langle S\rangle ,$$ (18) where $`\langle S\rangle `$ is the vacuum expectation value of the dilaton superfield. From Eqs. (15) and (16), the anomaly cancellation conditions lead to the Green-Schwarz relations: $$\frac{C_i}{k_i}=2\pi ^2\delta _{\mathrm{GS}}.$$ (19) This relation should be satisfied for any $`i`$. It follows from this relation that all the $`C_i`$ have the same sign, since all the $`k_i`$ have the same sign. Now, we discuss the model in detail. We consider the MSSM with all the Yukawa couplings but without the $`\mu `$-term. The relevant anomaly coefficients are listed below. $`C_H={\displaystyle \frac{1}{2}}(q_1+q_2)+2,`$ (20) $`C_3={\displaystyle \frac{3}{2}}(2q+u+d)+3,`$ (21) $`C_2={\displaystyle \frac{3}{2}}(3q+l)+{\displaystyle \frac{1}{2}}(h+\overline{h})+2,`$ (22) $`C_Y=3({\displaystyle \frac{1}{6}}q+{\displaystyle \frac{4}{3}}u+{\displaystyle \frac{1}{3}}d+{\displaystyle \frac{1}{2}}l+e)+{\displaystyle \frac{1}{2}}(h+\overline{h}),`$ (23) $`C_{YY}=3(q^2-2u^2+d^2-l^2+e^2)+h^2-\overline{h}^2=0,`$ (24) $`C_g=2(q_1+q_2)+s+3(6q+3u+3d+2l+n+e)+2(h+\overline{h})-6,`$ (25) $`C_R=2(q_1^3+q_2^3)+s^3+3(6q^3+3u^3+3d^3+2l^3+n^3+e^3)+2(h^3+\overline{h}^3)+18,`$ (26) where $`q_1,q_2`$ and $`s`$ are the R-charges of the fermionic components of $`Q_1,Q_2`$ and $`S`$ in the hidden sector, and $`q,u,d,l,n,e,h`$, and $`\overline{h}`$ are those of the chiral superfields $`Q(\mathrm{𝟑},\mathrm{𝟐},\frac{1}{6})`$, $`\overline{U}(\overline{\mathrm{𝟑}},\mathrm{𝟏},-\frac{2}{3})`$, $`\overline{D}(\overline{\mathrm{𝟑}},\mathrm{𝟏},\frac{1}{3})`$, $`L(\mathrm{𝟏},\mathrm{𝟐},-\frac{1}{2})`$, $`N(\mathrm{𝟏},\mathrm{𝟏},0)`$, $`E(\mathrm{𝟏},\mathrm{𝟏},1)`$, $`H(\mathrm{𝟏},\mathrm{𝟐},\frac{1}{2})`$, and $`\overline{H}(\mathrm{𝟏},\mathrm{𝟐},-\frac{1}{2})`$, respectively. The representations and charges in the parentheses are those under $`SU(3)_C\times SU(2)_L\times U(1)_Y`$. The coefficients $`C_H,C_3,C_2,C_Y,C_{YY},C_g`$ and $`C_R`$ represent the anomaly coefficients for $`U(1)_R(SU(2)_H)^2`$, $`U(1)_R(SU(3)_C)^2`$, $`U(1)_R(SU(2)_L)^2`$, $`U(1)_R(U(1)_Y)^2`$, $`(U(1)_R)^2U(1)_Y`$, $`U(1)_R`$ and $`(U(1)_R)^3`$, respectively. Here we simply assume a family independent charge assignment. Note that the gravitino contribution to the mixed gravitational anomaly is $`-21`$ times that of a gaugino, while the gravitino contribution to the $`U(1)_R`$ gauge anomaly is three times that of a gaugino , and the dilatino contributes ($`-1`$) to both $`C_g`$ and $`C_R`$. Note also that the anomaly coefficient $`C_{YY}`$ has to vanish identically, since it cannot be cancelled by the Green-Schwarz mechanism. As mentioned earlier, all the Yukawa couplings are included: $$W=y_uQ\overline{U}H+y_dQ\overline{D}\overline{H}+y_eLE\overline{H}+y_nLNH.$$ (27) The resulting conditions for the R-charges are $`q+u+h=-1,`$ (28) $`q+d+\overline{h}=-1,`$ (29) $`l+e+\overline{h}=-1,`$ (30) $`l+n+h=-1.`$ (31) Furthermore, we need the Yukawa coupling in the hidden sector $$W=\lambda S[Q_1Q_2],$$ (32) which leads to the condition $$s+q_1+q_2=-1.$$ (33) One can immediately see that $`C_3=C_2=C_Y=0`$ and Eqs.
(28), (29) and (30) are incompatible, since $`C_Y+C_2-2C_3=-6`$. Therefore we have to use the Green-Schwarz mechanism to cancel the anomalies. Taking into account the simple assumption $`k_2=k_3`$, we have only two solutions. One solution consists of all integer charges: $`Q_1=Q_2=-2,S=6,`$ (34) $`Q=\overline{U}=\overline{D}=L=N=E=0,H=\overline{H}=2,`$ (35) where these are the charges of the superfields. The corresponding anomaly coefficients are $`C_3=C_2=-3,C_Y=-9,C_{YY}=0,`$ (36) $`C_H=-1,C_R=-9,C_g=-57.`$ (37) Note that all the non-trivial anomaly coefficients have negative signs. Therefore, we assume that all the Kac-Moody levels are positive with a negative vacuum expectation value of the dilaton superfield. We can freely choose the positive values of $`k_R`$ and $`k_g`$ to satisfy the Green-Schwarz relation . For the other Kac-Moody levels, the following discussion is required. The Green-Schwarz relation $$\frac{C_Y}{k_Y}=\frac{C_2}{k_2}=\frac{C_3}{k_3}=\frac{C_H}{k_H}=2\pi ^2\delta _{GS}=\frac{2\pi ^2C_g}{192\pi ^2}=\frac{C_g}{96}$$ (38) is satisfied, if we introduce 39 gauge singlets of vanishing R-charge with $`k_Y=9,k_2=k_3=3`$ and $`k_H=1`$. These values of the Kac-Moody levels tell us the gauge coupling relations at the Planck scale, namely, $$\alpha _3=\alpha _2=3\alpha _Y,\alpha _H=3\alpha _3.$$ (39) This coupling unification can be easily accomplished by introducing extra massive particles which change the running of the gauge coupling constants and do not affect the anomaly cancellation conditions. In fact, the gauge coupling unification is realized, if we introduce extra massive particles so that the $`SU(3)_C`$ gauge coupling almost does not run and the $`SU(2)_L`$ gauge coupling is asymptotically non-free. Let us perform the potential analysis. We assume the $`SU(2)_H`$ D-flat condition. The scalar potential in the present case is the same as the one in our previous letter except for the D-term part. (Footnote: Although the Fayet-Iliopoulos term due to the anomaly of $`U(1)_R`$ should be included, its magnitude is suppressed compared to that of the gauged $`U(1)_R`$ symmetry. Hence, we simply neglect it.) $`V(v,s)=e^K\left[\left(\lambda v^2+sW\right)^2+2v^2\left(\lambda s-{\displaystyle \frac{\mathrm{\Lambda }^5}{v^4}}+W\right)^2-3W^2\right]+{\displaystyle \frac{g_R^2}{2}}\left(6s^2-4v^2+2\right)^2,`$ (41) where $`K`$ and $`W`$ are the Kähler potential and the superpotential, respectively, which are given by Eqs. (6) and (7). We assume $`\lambda ,\mathrm{\Lambda }^5\ll g_R`$, $`v=\frac{1}{\sqrt{2}}+y`$ with $`y\ll 1`$ and $`s\ll 1`$, and expand the potential with respect to $`y`$ and $`s`$. The approximate formulae of the stationary conditions $`\partial V/\partial y=0`$ and $`\partial V/\partial s=0`$ are $`y\simeq {\displaystyle \frac{3}{2\sqrt{2}}}s^2-{\displaystyle \frac{e(3\lambda ^2-16\mathrm{\Lambda }^{10})}{32\sqrt{2}g_R^2}},`$ (42) $`s\simeq {\displaystyle \frac{10\lambda \mathrm{\Lambda }^5}{9\lambda ^2-32\mathrm{\Lambda }^{10}}}.`$ (43) These formulae are correct within the order of magnitude, since the convergence of the expansion is not so good. The numerical calculation gives $`v\simeq {\displaystyle \frac{1}{\sqrt{2}}}+0.019,`$ (44) $`s\simeq 0.13,`$ (45) where we used the parameters $`\mathrm{\Lambda }=10^{-3}`$, $`\lambda =7.3\mathrm{\Lambda }^5`$ and $`g_R^2=10^{-11}`$. We have checked that the cosmological constant can be fine-tuned to zero by adjusting $`\lambda `$ for fixed $`\mathrm{\Lambda }`$. Next, we estimate the spectrum of supersymmetry breaking masses.
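Before doing so, the integer solution above can be cross-checked arithmetically. The following sketch (ours) evaluates Eqs. (20)–(26) with fermionic charges, i.e., superfield charges minus one:

```python
q1 = q2 = -2 - 1                      # Q_1, Q_2
s  = 6 - 1                            # S
q = u = d = l = n = e = 0 - 1         # MSSM matter
h = hb = 2 - 1                        # H, H-bar

C_H = 0.5*(q1 + q2) + 2
C_3 = 1.5*(2*q + u + d) + 3
C_2 = 1.5*(3*q + l) + 0.5*(h + hb) + 2
C_Y = 3*(q/6 + 4*u/3 + d/3 + l/2 + e) + 0.5*(h + hb)
C_g = 2*(q1 + q2) + s + 3*(6*q + 3*u + 3*d + 2*l + n + e) + 2*(h + hb) - 6
C_R = (2*(q1**3 + q2**3) + s**3
       + 3*(6*q**3 + 3*u**3 + 3*d**3 + 2*l**3 + n**3 + e**3)
       + 2*(h**3 + hb**3) + 18)

print(C_H, C_3, C_2, C_Y, C_g, C_R)    # -1 -3 -3 -9 -57 -9
# 39 singlets of vanishing R-charge (fermion charge -1) shift C_g to -96,
# and the Green-Schwarz relation C_i/k_i = C_g/96 then holds:
print((C_g - 39)/96, C_H/1, C_3/3, C_2/3, C_Y/9)   # all equal -1.0
```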
The gravitino mass is given by $$m_{3/2}=e^{K/2}W\simeq 4.1\times \frac{\mathrm{\Lambda }^5}{M_P^4}.$$ (46) This gravitino mass contributes to the masses of the scalar partners via the tree-level interactions of supergravity. Since the R-charges for $`Q,\overline{U},\overline{D},L,N,E`$ vanish, $$m_i^2\simeq m_{3/2}^2(i=Q,\overline{U},\overline{D},L,N,E).$$ (47) On the other hand, since $`H`$ and $`\overline{H}`$ have R-charge 2, $`m_{H,\overline{H}}^2\simeq m_{3/2}^2+2g_R^2\langle D\rangle `$ (48) $`\simeq m_{3/2}^2+2\left(8.2{\displaystyle \frac{\mathrm{\Lambda }^5}{M_P^4}}\right)^2\sim 𝒪(m_{3/2}^2).`$ (49) The contribution from $`\langle D\rangle \ne 0`$ is of the same order as the gravitino mass. The gaugino mass can be generated in two different ways. In the case that the gauge kinetic function is trivial, the anomaly mediation contribution dominates as $$m_{\lambda _i}=\frac{\beta (g_i^2)}{2g_i^2}m_{3/2},$$ (50) where $`\beta (g_i^2)`$ is the beta function for the standard model gauge groups. (Footnote: More precisely, there is a correction of the same order of magnitude, see Ref. .) In the case that the gauge kinetic function is non-trivial and includes the term $`S^2[Q_1Q_2]^3/M_P^8`$, for example, the gaugino masses are $$m_{\lambda _i}\sim \frac{1}{2^3}m_{3/2}.$$ (51) In both cases, there could be a further contribution from the vacuum expectation value of the F-component of the dilaton . The experimental bound on gaugino masses in the MSSM determines the order of the gravitino mass as 10 TeV in both cases. Here, we comment on the $`\mu `$-term. We can have the higher dimensional interaction $$W=\kappa \frac{S[Q_1Q_2]^2}{M_P^4}H\overline{H},$$ (52) which respects all symmetries, where $`\kappa `$ is a dimensionless constant. Plugging the vacuum expectation values of $`S`$, $`Q_1`$ and $`Q_2`$ into Eq. (52), we obtain $`\mu \sim \kappa \langle S\rangle `$. By adjusting $`\kappa \ll 1`$, the electroweak scale is generated.

## IV Anomaly free $`U(1)_R`$ model

In the last section, we discussed the scenario in which the supersymmetry breaking effect is mediated to the MSSM as the visible sector. The Green-Schwarz mechanism is used to cancel the gauge anomalies of $`U(1)_R`$, and a simple solution was found. However, the Green-Schwarz mechanism requires the dilaton field, and this leads to new difficult problems such as the dilaton stabilization and so on . Therefore, it is desirable to consider the case that $`U(1)_R`$ is anomaly free. As mentioned earlier, it is impossible to cancel all the gauge anomalies for $`U(1)_R`$ in the MSSM. In this section, we consider the supersymmetric SU(5) GUT instead of the MSSM as the visible sector. We have found that all the gauge anomalies can be cancelled by introducing $`SU(5)\times SU(2)_H`$ gauge singlets with non-trivial R-charges. Suppose that we have $`N`$ $`SU(5)\times SU(2)_H`$ gauge singlets with R-charge 2.
The anomaly cancellation conditions are $`U(1)_R\left[SU(5)\right]^2:3\left({\displaystyle \frac{1}{2}}\overline{f}+{\displaystyle \frac{3}{2}}a\right)+{\displaystyle \frac{1}{2}}(h+\overline{h})+5\sigma +5=0,`$ (53) $`\left[U(1)_R\right]:2(q_1+q_2)+s+3(5\overline{f}+10a+n)+5(h+\overline{h})+24\sigma +N+7=0,`$ (54) $`\left[U(1)_R\right]^3:2(q_1^3+q_2^3)+s^3+3(5\overline{f}^3+10a^3+n^3)+5(h^3+\overline{h}^3)+24\sigma ^3+N+31=0,`$ (55) where $`\overline{f},a,n,\sigma ,h`$ and $`\overline{h}`$ are the R-charges of the fermionic components of the superfields $`\overline{F}(\overline{\mathrm{𝟓}})`$, $`A(\mathrm{𝟏𝟎})`$, $`N(\mathrm{𝟏})`$, $`\mathrm{\Sigma }(\mathrm{𝟐𝟒})`$, $`H(\mathrm{𝟓})`$ and $`\overline{H}(\overline{\mathrm{𝟓}})`$, respectively. The representations in the parentheses are those of SU(5). The superpotential of Eq. (32) and the Yukawa couplings $$W\supset AAH+A\overline{F}\overline{H}+N\overline{F}H$$ (56) give the conditions of Eq. (33) and $`2a+h=-1,`$ (57) $`\overline{f}+a+\overline{h}=-1,`$ (58) $`n+\overline{f}+h=-1.`$ (59) We have a simple solution, in the case of $`N=36`$: $`\overline{F}=A=N=0,H=\overline{H}=2,\mathrm{\Sigma }=1,`$ (60) $`Q_1=0,Q_2=-2,S=4,`$ (61) where these are the charges of the superfields. Note that $`Q_1`$ and $`Q_2`$ have different charges now. This modification makes it possible to have the special charge assignment with all positive charges except for the fields which couple to the $`SU(2)_H`$ gauge bosons. The potential analysis is the same as in Ref. , as long as we assume that the potential is along the $`SU(2)_H`$ D-flat direction. The gravitino mass is estimated as $`m_{3/2}=e^{K/2}W\sim {\displaystyle \frac{\mathrm{\Lambda }^5}{M_P^4}}.`$ (62) The gravitino mass contributes to the masses of the scalar partners via the tree-level interactions of supergravity. Note that there is another contribution, if the scalar partners have non-zero $`U(1)_R`$ charges. In this case, they also acquire a mass from the vacuum expectation value of the D-term, and it is estimated as $`m_{Dterm}^2=qg_R^2\langle D\rangle \sim \left({\displaystyle \frac{\mathrm{\Lambda }^5}{M_P^4}}\right)^2q,`$ (63) where $`q`$ is the $`U(1)_R`$ charge. This mass squared is always positive for the scalar partners with positive $`U(1)_R`$ charges. The mass is of the same order of magnitude as the gravitino mass. The gaugino mass can be generated in two different ways. In the case that the gauge kinetic function is trivial, the anomaly mediation contribution dominates as $$m_{\lambda _i}=\frac{\beta (g_i^2)}{2g_i^2}m_{3/2}.$$ (64) On the other hand, in the case that the gauge kinetic function is non-trivial and includes the higher dimensional term $`S([Q_1Q_2])^2/M_P^5`$ (Footnote: The higher dimensional term $`S([Q_1Q_2])^2/M_P^5`$ in the gauge kinetic function can be forbidden to all orders by a discrete symmetry.), for example, the gaugino masses are $$m_{\lambda _i}\sim m_{3/2}.$$ (65) Considering the experimental bound on gaugino masses in the MSSM , the gravitino mass is taken to be of the order of 10 TeV or 1 TeV in the former case or the latter case, respectively. This phenomenological constraint requires the dynamical scale of the $`SU(2)_H`$ gauge interaction to be of the order of $`10^{15}`$ GeV in both cases. This also means that $`\lambda `$ is extremely small, $`\lambda \sim 10^{-15}`$. Note that the requirement of this fine-tuned value of $`\lambda `$ results from the fine-tuning for the vanishing cosmological constant.
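As with the anomalous model, the charge assignment above can be verified directly. This sketch (ours) evaluates the left-hand sides of Eqs. (53)–(55) with fermionic charges (superfield charges minus one) and $`N=36`$:

```python
fb = a = n = 0 - 1          # 5bar, 10, singlet N
h = hb = 2 - 1              # Higgs 5 and 5bar
sg = 1 - 1                  # Sigma (24)
q1, q2, s = 0 - 1, -2 - 1, 4 - 1
N = 36                      # SU(5) x SU(2)_H singlets of R-charge 2

A_55 = 3*(0.5*fb + 1.5*a) + 0.5*(h + hb) + 5*sg + 5
A_R  = 2*(q1 + q2) + s + 3*(5*fb + 10*a + n) + 5*(h + hb) + 24*sg + N + 7
A_R3 = (2*(q1**3 + q2**3) + s**3 + 3*(5*fb**3 + 10*a**3 + n**3)
        + 5*(h**3 + hb**3) + 24*sg**3 + N + 31)

print(A_55, A_R, A_R3)      # 0.0 0 0 -> U(1)_R is anomaly free
```

We now return to the size of the coupling $`\lambda `$.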
Furthermore, this small Yukawa coupling is consistent with the above discussion in the following sense. Since $`S`$ has a vacuum expectation value, a mass for $`Q_i`$ is generated through the Yukawa coupling in Eq. (2). The relation $`\lambda \langle S\rangle \ll \mathrm{\Lambda }`$ is needed in order not to change our result from the $`SU(2)_H`$ gauge dynamics. Here, we comment on the Higgs potential. We would like to discuss whether it is possible to construct the higher dimensional operators that realize the gauge symmetry breaking $`SU(5)\to SU(3)_C\times SU(2)_L\times U(1)_Y\to SU(3)_C\times U(1)_{em}`$ and also realize the doublet-triplet splitting. We can explicitly write down the following interactions respecting all symmetries. (Footnote: More precisely, there are other terms contracting the gauge indices in different ways, but we simply write the representative ones.) $`W\supset {\displaystyle \frac{\lambda _1}{M_P}}\left[Q_1Q_2\right]\overline{H}H+{\displaystyle \frac{\lambda _2}{M_P^5}}\left[Q_1Q_2\right]^2\overline{H}\mathrm{\Sigma }^2H+{\displaystyle \frac{\lambda _3}{2M_P^4}}S\left[Q_1Q_2\right]^2\mathrm{tr}(\mathrm{\Sigma }^2)+{\displaystyle \frac{\lambda _4}{4M_P^3}}\left[Q_1Q_2\right]\mathrm{tr}(\mathrm{\Sigma }^4),`$ (67) where $`\lambda _i`$ ($`i=1`$–4) are constants. By choosing $`\lambda _i`$ appropriately, the first two terms can realize the doublet-triplet splitting, while the last two terms can realize the gauge symmetry breaking. Unfortunately, there exist unwanted operators, for example, $`{\displaystyle \frac{1}{M_P^3}}\left[Q_1Q_2\right](H\overline{F})^2\to {\displaystyle \frac{1}{M_P}}(H\overline{F})^2,`$ (68) $`{\displaystyle \frac{1}{M_P^5}}\left[Q_1Q_2\right]^3S^2\to M_PS^2,`$ (69) $`{\displaystyle \frac{1}{M_P^{10}}}\left[Q_1Q_2\right]^5S^3\to S^3.`$ (70) Although the first term seems to be harmless, the other terms are dangerous since supersymmetry is restored if these operators are present. However, this problem is not specific to our model, but inevitable and generic in supergravity models.

## V Summary and Discussion

In this paper, we have shown that it is possible to construct simple and phenomenologically viable models of dynamical supersymmetry breaking with gauged $`U(1)_R`$ symmetry. Supersymmetry breaking occurs by the interplay between the dynamically generated superpotential and the Fayet-Iliopoulos term which appears due to the symmetry of supergravity. The cosmological constant can be fine-tuned to vanish at the minimum of the potential. What is most non-trivial in this class of models is the anomaly cancellation for $`U(1)_R`$. We have presented two explicit models of the visible sector with anomaly cancellation. One is the MSSM with all the Yukawa couplings and without the $`\mu `$-term. Anomalies are cancelled by the Green-Schwarz mechanism. We have found a quite simple solution of R-charge assignments for the matter superfields in our model compared to the models in Ref. . Our solution consists of all integer charges. We have discussed the gauge coupling unification which follows from the Green-Schwarz relations. The gauge coupling unification is easily accomplished by introducing extra massive particles. The spectrum of supersymmetry breaking masses has been estimated and turned out to be phenomenologically viable as follows. The gravitino mass is of order 10 TeV. The scalar masses are of the same order as the gravitino mass. The gaugino masses are a few orders of magnitude smaller than the gravitino mass. We have also shown that it is possible to have the $`\mu `$-term.
Unfortunately, we have to introduce the dilaton superfield to cancel the anomalies in this model. This leads to new difficult problems such as the dilaton stabilization and so on . The other is the supersymmetric $`SU(5)`$ GUT. In this model, the $`U(1)_R`$ anomalies are cancelled by introducing $`SU(5)`$ gauge singlet superfields. We have also found a quite simple solution of the R-charge assignment. Unlike the anomalous $`U(1)_R`$ model in this paper and the anomalous $`U(1)`$ model in Ref. , the dilaton is not necessary since the gauged $`U(1)_R`$ is anomaly free. Therefore, there is no dilaton stabilization problem. We have estimated the mass spectrum of supersymmetry breaking and found that these models are phenomenologically viable. The gravitino mass is of the order of 1 TeV or 10 TeV depending on the form of the gauge kinetic function. The scalar masses are of the same order as the gravitino mass. The gaugino masses are a few orders of magnitude smaller than the gravitino mass in the case that the gauge kinetic function is trivial, while of the same order in the case that the gauge kinetic function has higher dimensional terms. We have shown that it is possible to have the Higgs potential for the doublet-triplet splitting and the GUT gauge symmetry breaking. It is interesting to apply our mechanism to the phenomenology of the anomaly mediation scenarios. In these scenarios, if the visible sector consists of the MSSM, the slepton mass squared becomes negative because the beta function coefficients of $`SU(2)_L\times U(1)_Y`$ are negative. This problem is easily avoided by using our mechanism (Footnote: For related works, see Refs. .), since the contribution to the mass squared from the vacuum expectation value of the $`U(1)_R`$ D-term is always positive for positively R-charged fields. Unfortunately, this situation is not realized in our models since the R-charges of the matter fields are zero. However, it would be possible to construct models in which the lepton superfields have positive charges. Finally, note that our model is effectively reduced to the well known Polonyi model, which is the simplest supersymmetry breaking model at the tree-level. One can understand this fact by freezing the $`Q_i`$ at their vacuum expectation values in the superpotential of Eq. (2). Here, we would like to emphasize that the Polonyi model is derived as the effective theory of our model. The Polonyi model does not specify the dynamics of supersymmetry breaking. Also, unlike the Polonyi model, there are no dimensionful parameters other than the Planck scale in our model. They are induced dynamically from the Planck scale.

###### Acknowledgements.

This work was supported in part by the Grant-in-aid for Science and Culture Research from the Ministry of Education, Science and Culture of Japan (#11740156, #3400, #2997). N.M. and N.O. are supported by the Japan Society for the Promotion of Science for Young Scientists.
# Light Pulse Squeezed State Formation In Medium With The Relaxation Kerr Nonlinearity

(To appear in: Proceedings of the 6th International Conference on Squeezed States and Uncertainty Relations (ICSSUR’99), 24-29 May 1999, Napoli, Italy, NASA Conference Publication Series.)

## I Introduction

There are two main approaches in the quantum theory of self-action (or self-phase modulation) of ultrashort light pulses (USPs). In the first approach (see -) the calculations of the nonclassical light formation under the self-action of pulses assume that the nonlinear response of the medium is instantaneous and that the relative fluctuations are small. The latter assumption is valid for the intense USPs ordinarily used in experiments. However, a finite relaxation time of the nonlinearity is of principal importance. The relaxation time of the nonlinearity determines the region of the spectrum of the quantum fluctuations that plays a large role in the formation of squeezed light. In the alternative approach the inertia of the nonlinearity is taken into account. The methods that have been developed in and differ in the interaction Hamiltonian. In Ref. , the authors considered an interaction Hamiltonian for which one has to introduce thermal fluctuations in order to satisfy the commutation relations for time-dependent Bose-operators. For the case of the normally ordered interaction Hamiltonian it is not necessary to take thermal fluctuations into consideration. The results of the quantum theory of USP self-action in a medium with the relaxation Kerr nonlinearity, based on the normally ordered interaction Hamiltonian, are presented below. Variances of the quadrature components and the spectral distribution of the pulsed quadrature-squeezed light are calculated. Besides, the propagation of such a pulse in a dispersive linear medium is analyzed. It is shown that in this case a pulse with sub-Poissonian photon statistics can be formed.

## II Quantum theory of the self-action of the light pulse

We describe the process under consideration by the following interaction Hamiltonian $$\widehat{H}_{int}(z)=\hbar \beta \int _{-\infty }^{\infty }dt\int _{-\infty }^tH(t-t_1)𝒩[\widehat{n}(t,z)\widehat{n}(t_1,z)]dt_1,$$ (1) where the coefficient $`\beta `$ is determined by the nonlinearity of the medium, $`H(t)`$ is the nonlinear response function of the Kerr medium ($`H(t)\ge 0`$ for $`t\ge 0`$ and $`H(t)=0`$ for $`t<0`$); $`𝒩`$ is the normal ordering operator, $`\widehat{n}(t,z)=\widehat{A}^+(t,z)\widehat{A}(t,z)`$ is the photon number density operator, and $`\widehat{A}^+(t,z)`$ and $`\widehat{A}(t,z)`$ are the photon’s creation and annihilation Bose-operators in a given cross section $`z`$. The operator $`\widehat{n}(t,z)`$ commutes with the Hamiltonian (1) and therefore $`\widehat{n}(t,z)=\widehat{n}(t,z=0)=\widehat{n}_0(t)`$, where $`z=0`$ corresponds to the input of the nonlinear medium. According to Eq. (1) the spatial evolution of the operator $`\widehat{A}(t,z)`$ is given by the equation $$\frac{\partial \widehat{A}(t,z)}{\partial z}-i\beta q[\widehat{n}_0(t)]\widehat{A}(t,z)=0,$$ (2) in the moving coordinate system $`z=z^{\prime }`$ and $`t=t^{\prime }-z^{\prime }/u`$ ($`u`$ is the velocity of the pulse), and $$q[\widehat{n}_0(t)]=\int _{-\infty }^{\infty }h(t_1)\widehat{n}_0(t-t_1)dt_1,(h(t)=H(|t|)).$$ (3) The solution of Eq. (2) is $$\widehat{A}(t,l)=\mathrm{exp}\{i\gamma q[\widehat{n}_0(t)]\}\widehat{A}_0(t).$$ (4) Here $`\widehat{A}_0(t)=\widehat{A}(t,0)`$, $`\gamma =\beta l`$, and $`l`$ is the length of the nonlinear medium.
For $`h(t)=2\delta (t)`$ and $`\widehat{A}_0(t)=\widehat{a}_0`$ expressions (3)-(4) have a form corresponding to single-mode radiation. To verify the commutation relation $`[\widehat{A}(t_1,l),\widehat{A}^+(t_2,l)]=\delta (t_1-t_2)`$ and to calculate the quantum characteristics of the pulse it is necessary to develop an algebra of time-dependent Bose-operators . In agreement with Eq. (1) the photon number operator remains unchanged in the nonlinear medium. This fact has already been used in Eq. (2). Therefore, in the case of self-action it is of greatest interest to study the fluctuations of the quadrature components. Here we restrict our consideration to the $`X`$-quadrature $`\widehat{X}(t,z)=[\widehat{A}^+(t,z)+\widehat{A}(t,z)]/2`$. The correlation function of the $`X`$-quadrature is given by the formula $$R(t,t+\tau )=\frac{1}{4}\left\{\delta (\tau )-\psi (t)h(\tau )\mathrm{sin}2\mathrm{\Phi }(t)+\psi ^2(t)g(\tau )\mathrm{sin}^2\mathrm{\Phi }(t)\right\},$$ (5) where $`\psi (t)=2\gamma |\alpha _0(t)|^2`$ is the nonlinear phase addition, $`\alpha _0(t)`$ is the eigenvalue of the operator $`\widehat{A}_0(t)`$ for the pulse in the initial coherent state, and $`\mathrm{\Phi }(t)=\psi (t)+\varphi (t)`$ ($`\varphi (t)`$ is the initial phase of the pulse). If we take the nonlinear response as $`h(\tau )=\tau _r^{-1}\mathrm{exp}(-|\tau |/\tau _r)`$ then $`g(\tau )=\tau _r^{-1}(1+|\tau |/\tau _r)\mathrm{exp}(-|\tau |/\tau _r)`$ ($`\tau _r`$ is the nonlinearity relaxation time). We took into consideration that the parameter $`\gamma \ll 1`$ and the pulse duration $`\tau _p\gg \tau _r`$. According to Eq. (5), the spectral density of the quadrature fluctuations is $$S(\omega ,t)=\int _{-\infty }^{\infty }R(t,t+\tau )e^{i\omega \tau }d\tau =\frac{1}{4}[1-2\psi (t)L(\omega )\mathrm{sin}2\mathrm{\Phi }(t)+4\psi ^2(t)L^2(\omega )\mathrm{sin}^2\mathrm{\Phi }(t)],$$ (6) where $`L(\omega )=1/[1+(\omega \tau _r)^2]`$. It follows from Eq. (6) that the level of the quadrature fluctuations, depending on the value of the phase $`\mathrm{\Phi }(t)`$, can be bigger or smaller than the shot-noise corresponding to $`S^{(coh)}(\omega )=1/4`$. If the phase of the pulse is chosen optimal for a frequency $`\omega _0`$, $`\varphi _0(t)=0.5\mathrm{arctan}\{[\psi (t)L(\omega _0)]^{-1}\}-\psi (t)`$, then the spectral density at this frequency is minimal. The calculated spectra at $`t=0`$ for the case of $`\omega _0=\tau _r^{-1}`$ are presented in Fig. 1. It is obvious from Fig. 1 that the frequency band in which the spectral density of the quadrature fluctuations is lower than the shot-noise level depends on the nonlinear phase addition $`\psi (0)`$.

## III Squeezed light pulse in dispersive linear medium

We now analyse the propagation of the quadrature-squeezed pulse through a dispersive linear medium, in which the following operator transformation takes place: $$\widehat{B}(t,z)=\int _{-\infty }^{\infty }G(t-t_1,z)\widehat{A}(t_1,l)dt_1.$$ (7) Here $`G(t,z)`$ is the Green function for the medium, $`z`$ is the distance and $`\widehat{A}(t,l)`$ is the input value of the operator (at $`z=0`$) defined by Eq. (4).
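Before turning to photon statistics, the squeezing spectrum (6) at the optimal phase can be evaluated directly. The sketch below (ours, with purely illustrative parameter values) confirms that at the chosen frequency the spectral density drops well below the shot-noise level of 1/4:

```python
import numpy as np

def S(omega_tau_r, psi, Phi):
    """Quadrature spectral density, Eq. (6); shot-noise level is 0.25."""
    L = 1.0/(1.0 + omega_tau_r**2)
    return 0.25*(1.0 - 2.0*psi*L*np.sin(2.0*Phi)
                 + 4.0*(psi*L)**2*np.sin(Phi)**2)

psi, w0 = 5.0, 1.0                        # nonlinear phase; omega_0 = 1/tau_r
L0   = 1.0/(1.0 + w0**2)
Phi0 = 0.5*np.arctan(1.0/(psi*L0))        # optimal total phase at omega_0
print(S(w0, psi, Phi0))                   # ~0.009 << 0.25: strong squeezing
print(S(np.array([0.0, 3.0]), psi, Phi0)) # ~0.22, ~0.17: weaker squeezing
```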
Let us introduce the photon number operator over the measurement time $`T`$ and the Mandel parameter $`Q(t,z)`$:

$$\widehat{N}_T(t,z)=\int _{t-T/2}^{t+T/2}\widehat{B}^+(t_1,z)\widehat{B}(t_1,z)dt_1,\qquad Q(t,z)=\frac{\epsilon (t,z)}{\langle \widehat{N}_T(t,z)\rangle },$$ (8)

$$\epsilon (t,z)=\langle \widehat{N}_T^2(t,z)\rangle -\langle \widehat{N}_T(t,z)\rangle ^2-\langle \widehat{N}_T(t,z)\rangle .$$ (9)

Let us assume that the initial light pulse has the Gaussian form $`\overline{n}_0(t)=\overline{n}_0\mathrm{exp}\{-t^2/\tau _p^2\}`$ and that the Green function equals $`G(t,z)=(i2\pi k_2z)^{-1/2}\mathrm{exp}\left\{it^2/2k_2z\right\}`$. The coefficient $`k_2`$ characterizes the dispersion of the group velocity: in the case of normal dispersion $`k_2>0`$, and for anomalous dispersion $`k_2<0`$. When a phase self-modulated pulse passes through the dispersive linear medium, compression or decompression of the pulse takes place. This effect can change the photon statistics of the pulse. In the so-called paraxial approximation we get

$$\langle \widehat{N}_T(t,z)\rangle =\overline{n}_0TV^{-1}(z)\mathrm{exp}[-t^2/V^2(z)\tau _p^2],\qquad V^2(z)=w^2(z)+\phi ^2(z),$$ (10)

$$Q(0,z)=\left[\frac{T\psi _0}{\sqrt{\pi }\tau _p}\right]\frac{\mathrm{sin}\left[\mathrm{arctan}(\phi (z)/w(z))+0.5\mathrm{arctan}[2\phi (z)w(z)/(2\phi (z)\phi _d(z)-w^2(z))]\right]}{[w^4(z)-2\phi ^2(z)w^2(z)+4\phi ^4(z)]^{1/4}},$$ (11)

$$w^2(z)=1-s\psi _0\phi (z),\qquad \phi (z)=z/D,\qquad \phi _d(z)=z/d,\qquad D=\tau _p^2/|k_2|,\qquad d=\tau _r^2/|k_2|.$$ (12)

$`D`$ and $`d`$ are the characteristic dispersion lengths, $`s=1`$ for $`k_2<0`$, and $`s=-1`$ for $`k_2>0`$. It follows from Eq.(11) that a pulse with sub-Poissonian photon statistics ($`Q(t,z)<0`$) can be obtained. Of particular interest is the compression of the phase self-modulated pulse ($`s=1`$, $`k_2<0`$). The dependence of the Mandel parameter on distance in this case is presented in Fig.2. One can see that the suppression of the quantum fluctuations of the photon number becomes noticeable for nonlinear phase $`\psi _0>1`$.

## IV Conclusions

The main results of the developed systematic theory are as follows. The spectral region with a level of the quadrature fluctuations less than the shot noise depends on the relaxation time of the nonlinearity and on the nonlinear phase addition. The choice of the initial phase of the pulse gives us the possibility to control the frequency at which the coefficient of squeezing is maximal. The propagation of the quadrature-squeezed light pulse through a dispersive linear medium (an optical fiber or optical compressor) can lead to the formation of a pulse with sub-Poissonian photon statistics.

## Acknowledgements

One of the authors (A.C.) would like to thank the Organizing Committee of ICSSUR’99 for financial support for participation in the Conference.
# Characterizations of ℙ^𝑛 in arbitrary characteristic

There are several results which characterize $`\mathbb{P}^n`$ among projective varieties. The first substantial theorem in this direction is due to \[Hirzebruch-Kodaira57\], using the topological properties of $`\mathbb{P}^n`$. Later characterizations approached $`\mathbb{P}^n`$ as the projective variety with the most negative canonical bundle. The first such theorem is due to \[Kobayashi-Ochiai72\] and later it was generalized in the papers \[Ionescu86, Fujita87\]. A characterization using the tangent bundle was given in \[Mori79, Siu-Yau80\]. Other characterizations using vector bundles are given in the papers \[Ye-Zhang90, Sato99, Kachi99\]. The results of \[Kobayashi-Ochiai72, Ionescu86, Fujita87\] ultimately relied on the Kodaira vanishing theorem, thus they were restricted to characteristic zero. The aim of this note is to provide an argument which does not use higher cohomologies of line bundles. This makes the proofs work in arbitrary characteristic. Thus (1) generalizes \[Ionescu86, Fujita87\] and (2) is a characteristic free version of \[Kobayashi-Ochiai72\]. The proofs of \[Ionescu86, Fujita87\] work if we know enough about extremal contractions of smooth varieties. Thus the results have been known in any characteristic for surfaces and they can be read off from the classification of contractions of smooth 3–folds in positive characteristic developed in the papers \[Kollár91, Shepherd-Barron97, Megyesi98\]. For the rest of the paper all varieties are over an algebraically closed field of arbitrary characteristic.

###### Theorem 1.

Let $`X`$ be a smooth projective variety of dimension $`n`$ and $`H`$ an ample divisor on $`X`$. Then the pair $`(X,H)`$ satisfies one of the following:

1. $`(n-1)H+K_X`$ is nef.
2. $`X\cong \mathbb{P}^n`$ and $`H`$ is a hyperplane section.
3. $`X`$ is a quadric in $`\mathbb{P}^{n+1}`$ and $`H`$ is a hyperplane section.
4. $`X`$ is the projectivization of a rank $`n`$ vector bundle over a smooth curve $`A`$ and $`H=𝒪(1)`$.

With a little bit of work on the scrolls in (1.4) this implies the following characterization of projective spaces and hyperquadrics.

###### Corollary 2.

Let $`X`$ be a smooth projective variety of dimension $`n`$.

1. $`X`$ is isomorphic to $`\mathbb{P}^n`$ iff there is an ample divisor $`H`$ such that $`K_X`$ is numerically equivalent to $`-(n+1)H`$.
2. $`X`$ is isomorphic to a hyperquadric $`\mathbb{Q}^n\subset \mathbb{P}^{n+1}`$ iff there is an ample divisor $`H`$ such that $`K_X`$ is numerically equivalent to $`-nH`$.∎

Our proof of (1) also yields the following:

###### Corollary 3.

Let $`X`$ be a smooth projective variety of dimension $`n`$. Assume that $`K_X`$ has negative intersection number with some curve. Then $`X`$ is isomorphic to $`\mathbb{P}^n`$ iff

1. $`(-K_X\cdot C)\ge n+1`$ for every rational curve $`C\subset X`$, and
2. $`((-K_X)^n)\ge (n+1)^n`$.

It is conjectured that (3.1) alone characterizes $`\mathbb{P}^n`$. A positive solution in characteristic zero is announced in \[Cho-Miyaoka98\]. It has also been conjectured that (3.2) also characterizes $`\mathbb{P}^n`$ among Fano varieties. This, however, turned out to be false for $`n\ge 4`$ \[Batyrev82\]. The first step of the proof of (1) is to identify the family of lines on $`X`$. The results of \[Mori79\] show that there is a family of rational curves $`\{C_t\}`$ on $`X`$ such that $`(H\cdot C_t)=1`$ and this family has the expected dimension. Focusing on these rational curves leads to other characterizations of $`\mathbb{P}^n`$ and of $`\mathbb{Q}^n`$, which strengthen (2) in characteristic zero.

###### Theorem 4.
Let $`X`$ be a smooth projective variety of dimension $`n`$ over $`\mathbb{C}`$ and $`L`$ an ample line bundle on $`X`$.

1. \[Andreatta-Ballico-Wiśniewski93\] Assume that for every $`x,y\in X`$ there is a rational curve $`C_{xy}`$ through $`x,y`$ with $`(L\cdot C_{xy})=1`$. Then $`X\cong \mathbb{P}^n`$ and $`L\cong 𝒪(1)`$.
2. \[Kachi-Sato99\] Assume that for every $`x,y,z\in X`$ there is a rational curve $`C_{xyz}`$ through $`x,y,z`$ with $`(L\cdot C_{xyz})=2`$. Then $`X\cong \mathbb{Q}^n`$ or $`X\cong \mathbb{P}^n`$, and $`L\cong 𝒪(1)`$.

The positive characteristic version of (4) is still open. Concentrating on higher degree curves leads to the following characterization of Veronese varieties, which gives another generalization of (2.1) and also leads to a weaker version of (1).

###### Theorem 5.

\[Kachi99\] Let $`X`$ be a smooth projective variety of dimension $`n`$ (over an algebraically closed field of arbitrary characteristic) and $`L`$ an ample divisor on $`X`$. Assume that:

1. For general $`x,y\in X`$ there is a connected (possibly reducible) curve $`C_{xy}`$ through $`x,y`$ with $`(L\cdot C_{xy})\le d`$.
2. For every irreducible subvariety $`Z\subset X`$ of dimension at least 2 we have $`(L^{\dim Z}\cdot Z)\ge d^{\dim Z}`$.
3. Fujita’s sectional genus of $`\frac{1}{d}L`$ is less than 1, that is,

$$(L^{n-1}\cdot (\frac{n-1}{d}L+K_X))<0.$$

Then $`X\cong \mathbb{P}^n`$ and $`L\cong 𝒪(d)`$.

###### 6 Proof of (1).

As we noted, we may assume that $`n\ge 3`$. We are done if $`K_X+(n-1)H`$ is nef. If $`K_X+(n-1)H`$ is not nef, then by the cone theorem of Mori (cf. \[Kollár95, III.1\]) there is a $`c>0`$ and a rational curve $`C\subset X`$ such that

1. $`K_X+(n-1+c)H`$ is nef,
2. $`(C\cdot (K_X+(n-1+c)H))=0`$, and
3. $`(-K_X\cdot C)\le n+1`$.

Thus we see that

$$(C\cdot H)=\frac{(-K_X\cdot C)}{n-1+c}\le \frac{n+1}{n-1+c}<2.$$

Hence $`(C\cdot H)=1`$ and $`(-K_X\cdot C)=n`$ or $`n+1`$. Let $`f:\mathbb{P}^1\to X`$ be the normalization of $`C`$, $`0\in \mathbb{P}^1`$ a point and $`x=f(0)`$. Then by \[Kollár95, II.1\]

$$\dim _{[f]}\mathrm{Hom}(\mathbb{P}^1,X)\ge (-K_X\cdot C)+n,\quad \text{and}\quad \dim _{[f]}\mathrm{Hom}(\mathbb{P}^1,X,0\mapsto x)\ge (-K_X\cdot C).$$

We consider 3 separate cases corresponding to the three outcomes (1.2-4).

###### 7 Case 1: $`(-K_X\cdot C)=n+1`$.

In order to go from homomorphisms to curves in $`X`$, we have to quotient out by the automorphism group. Thus $`\mathrm{Hom}(\mathbb{P}^1,X,0\mapsto x)`$ gives an $`(n-1)`$-dimensional family of rational curves through $`x\in X`$ since $`\mathrm{Aut}(\mathbb{P}^1,0)`$ is 2–dimensional. This implies that the Picard number of $`X`$ is 1 (cf. \[Kollár95, IV.3.13.3\]). Thus $`K_X`$ is numerically equivalent to $`-(n+1)H`$. The following lemma shows that $`X\cong \mathbb{P}^n`$. We formulate it for singular varieties, the setting needed for the third case.

###### Lemma 8.

Let $`X`$ be a normal, projective variety and $`x\in X`$ a smooth point. Let $`H`$ be an ample Cartier divisor on $`X`$ and let $`\{C_t:t\in T\}`$ be an $`(n-1)`$-dimensional family of curves through $`x`$. Assume that $`(C_t\cdot H)=1`$ and $`(-K_X\cdot H^{n-1})>(n-1)(H^n)`$. Then $`X\cong \mathbb{P}^n`$, $`H=𝒪(1)`$ and the $`C_t`$ are lines through $`x`$.

Proof. By Riemann–Roch,

$$h^0(X,𝒪_X(mH))=m^n\frac{(H^n)}{n!}+m^{n-1}\frac{(-K_X\cdot H^{n-1})}{2(n-1)!}+O(m^{n-2}).$$

For a section of a line bundle it is

$$\binom{m+n-1}{n}=m^n\frac{1}{n!}+m^{n-1}\frac{n-1}{2(n-1)!}+O(m^{n-2})$$

conditions to vanish at a smooth point $`x\in X`$ to order $`m`$. Comparing the two estimates we see that for large $`m`$

$$h^0(X,𝒪_X(mH)\cdot I_x^{m+1})\ge (\text{const})m^n\text{ if }(H^n)>1,\quad \text{and}\quad h^0(X,𝒪_X(mH)\cdot I_x^m)\ge (\text{const})m^{n-1}\text{ if }(H^n)=1.$$

Pick any member $`D\in |𝒪_X(mH)\cdot I_x^{m+1}|`$ (resp.
$`D\in |𝒪_X(mH)\cdot I_x^m|`$) and take $`C_t\not\subset D`$. Then $`(C_t\cdot D)=m`$. On the other hand, $`(C_t\cdot D)\ge \sum _{y\in C_t\cap D}\mathrm{mult}_yD\ge m+1`$ (resp. $`\ge m`$). This is a contradiction if $`(H^n)>1`$. If $`(H^n)=1`$ then $`(C_t\cap D)=\{x\}`$ and the linear system $`|𝒪_X(mH)\cdot I_x^m|`$ is constant along $`C_t`$. By varying $`x\in C_t`$, we also see that the line bundle $`𝒪_{C_t}(mH)`$ has a section with an $`m`$-fold zero at a general point of $`C_t`$. Since $`\mathrm{deg}𝒪_{C_t}(mH)=m`$, this implies that a general $`C_t`$ is a smooth rational curve.

Set $`Y=B_xX`$ with exceptional divisor $`E`$ and projection $`\pi :Y\to X`$. Set $`M=\pi ^{*}H-E`$. Then $`H^0(𝒪_Y(mM))\cong H^0(X,𝒪_X(mH)\cdot I_x^m)`$, hence $`h^0(𝒪_Y(mM))\ge (\text{const})m^{n-1}`$. For large $`m`$, let $`p:Y\dashrightarrow Z`$ denote the map induced by $`|mM|`$. Up to birational equivalence, $`p`$ does not depend on $`m`$ and it has connected 1–dimensional general fibers. Moreover, for any subset $`V\subset Y`$ of dimension at most $`n-2`$ there is a member $`D\in |mM|`$ containing $`V`$.

Every curve $`C_t`$ lifts to a curve $`C_t^{\prime}`$ on $`Y`$ and there is an open subset $`T^0\subset T`$ such that the curves $`\{C_t^{\prime}:t\in T^0\}`$ form a single algebraic family. (In our case these correspond to the curves $`C_t`$ which are smooth at $`x`$.) By shrinking $`T`$ we pretend that $`T=T^0`$. Note that $`(C_t^{\prime}\cdot E)=1`$ and $`(C_t^{\prime}\cdot M)=0`$.

We may assume that $`Z`$ is proper. Let $`V\subset Y`$ be the set where $`p`$ is not defined and choose $`V\subset D\in |mM|`$. If a curve $`C_t^{\prime}`$ passes through $`V`$ then $`C_t^{\prime}\subset D`$ since $`(M\cdot C_t^{\prime})=0`$. The curves $`C_t^{\prime}`$ cover an open subset of $`Y`$, hence a general $`C_t^{\prime}`$ is disjoint from $`D`$. Thus $`p`$ is defined everywhere along a general $`C_t^{\prime}`$ and $`C_t^{\prime}`$ is a fiber of $`p`$ (at least set theoretically). Hence there are open subsets $`Y^0\subset Y`$ and $`Z^0\subset Z`$ such that $`p^0:Y^0\to Z^0`$ is proper and flat. $`(E\cdot C_t^{\prime})=1`$, thus $`E`$ is a rational section of $`p`$. In particular, a general fiber of $`p`$ is reduced. Thus, possibly after shrinking $`Y^0`$, we may assume that $`p^0`$ is a $`\mathbb{P}^1`$-bundle. Set $`E^0:=E\cap Y^0`$.

$`Z^0`$ parametrizes curves in $`Y`$ and by looking at the image of these curves in $`X`$ we obtain a morphism $`h:Z^0\to \mathrm{Chow}(X)`$. (See \[Kollár95, I.3–4\] for the definition of $`\mathrm{Chow}`$ and its basic properties.) Let $`Z^{\prime}`$ be the normalization of the closure of the image of $`Z^0`$ in $`\mathrm{Chow}(X)`$, $`p^{\prime}:U^{\prime}\to Z^{\prime}`$ the universal family and $`u^{\prime}:U^{\prime}\to X`$ the cycle map. We want to prove that $`U^{\prime}=Y`$. $`h`$ induces a morphism between the universal families $`h^{\prime}:Y^0\to U^{\prime}`$ which sits in a diagram

$$Y^0\stackrel{h^{\prime}}{\to }U^{\prime}\stackrel{u^{\prime}}{\to }X\stackrel{\pi ^{-1}}{\dashrightarrow }Y.$$

The composite is an open immersion hence birational. Thus the intermediate maps are also birational. $`h^{\prime}`$ is thus an isomorphism near a general fiber of $`p^0`$, and so $`p^{\prime}:U^{\prime}\to Z^{\prime}`$ is a $`\mathbb{P}^1`$-bundle over an open set. $`Z^{\prime}`$ parametrizes curves which have intersection number 1 with an ample divisor. Thus $`Z^{\prime}`$ parametrizes irreducible curves with multiplicity 1. Let $`E^{\prime}\subset U^{\prime}`$ be the closure of $`h^{\prime}(E^0)`$. $`E^{\prime}`$ is a rational section of $`p^{\prime}`$ and $`u^{\prime}(E^{\prime})=\{x\}`$. If $`E^{\prime}`$ contains a whole fiber of $`p^{\prime}`$ then $`u^{\prime}(E^{\prime})=\{x\}`$ implies that every fiber is mapped to a point, which is impossible. So, $`E^{\prime}`$ is a section of $`p^{\prime}`$. Let $`B\subset U^{\prime}\setminus E^{\prime}`$ be a curve such that $`u^{\prime}(B)\subset X`$ is a point. Then $`S:=(p^{\prime})^{-1}(p^{\prime}(B))`$ is a ruled surface and $`u^{\prime}`$ contracts the curves $`B`$ and $`E^{\prime}\cap S`$ to points. By bend–and–break (cf.
\[Kollár95, II.5.5.2\]) this leads to a curve $`C^{\prime}\subset X`$ such that $`(C^{\prime}\cdot H)<1`$, but this is impossible. Thus $`u^{\prime}`$ is quasifinite on $`U^{\prime}\setminus E^{\prime}`$. $`u^{\prime}`$ is also birational, hence

$$u^{\prime}:U^{\prime}\setminus E^{\prime}\to X\text{ is an open embedding.}$$

Its image contains $`X\setminus \{x\}`$ and it is not projective, hence $`U^{\prime}\setminus E^{\prime}\cong X\setminus \{x\}`$. Let us now look at the birational map $`\varphi :=u^{\prime -1}\circ \pi :Y\dashrightarrow U^{\prime}`$. Both $`U^{\prime}`$ and $`Y`$ contain $`Y^0`$ and $`X\setminus \{x\}`$ as open sets, and $`\varphi `$ is the identity on $`Y^0`$ and $`X\setminus \{x\}`$. Thus $`\varphi `$ is an isomorphism outside the codimension 2 sets $`E\setminus E^0`$ and $`E^{\prime}\setminus E^0`$. Since $`Y/X`$ has relative Picard number 1, this implies that $`\varphi `$ is a morphism by (9). Thus $`\varphi `$ is an isomorphism since $`\varphi `$ can not contract a subset of $`E\cong \mathbb{P}^{n-1}`$ without contracting $`E`$. In particular $`E^{\prime}\cong \mathbb{P}^{n-1}`$. $`p^{\prime}:U^{\prime}\to E^{\prime}`$ is flat with reduced fibers by \[Hartshorne77, Ex.III.10.9\], hence it is a $`\mathbb{P}^1`$-bundle. Thus $`X\cong \mathbb{P}^n`$ by an easy argument (see, for instance, \[Kollár95, V.3.7.8\]).∎

The following lemma is essentially in \[Matsusaka-Mumford64\].

###### Lemma 9.

Let $`Z_i\to S`$ be projective morphisms with $`Z_1`$ smooth. Let $`\varphi :Z_1\dashrightarrow Z_2`$ be a birational map of $`S`$-schemes which is an isomorphism outside the codimension 2 subsets $`E_i\subset Z_i`$. Assume that the relative Picard number $`\rho (Z_1/S)`$ is 1 and $`Z_1\setminus E_1\to S`$ is not quasifinite. Then $`\varphi `$ is a morphism.

Proof. We may assume that $`S`$ is affine. Let $`H_2`$ be a relatively ample divisor on $`Z_2`$ and $`H_1`$ its birational transform. $`H_1`$ is also relatively ample because $`\rho (Z_1/S)=1`$ and $`-H_1`$ can not be relatively nef (since $`H_1`$ is effective when restricted to a positive dimensional fiber of $`Z_1\setminus E_1\to S`$). Thus $`|mH_2|`$ and $`|mH_1|`$ are both base point free for $`m\gg 1`$ and these are the birational transforms of each other by $`\varphi `$. Let $`\mathrm{\Gamma }\subset Z_1\times _SZ_2`$ be the closure of the graph of $`\varphi `$ with projections $`\pi _i:\mathrm{\Gamma }\to Z_i`$. If $`\pi _1^{-1}(z)`$ is positive dimensional then every member of $`|mH_2|`$ intersects $`\pi _2(\pi _1^{-1}(z))`$, thus $`|mH_1|=\varphi ^{-1}|mH_2|`$ has $`z`$ as its base point, a contradiction. Thus $`\pi _1`$ is an isomorphism and $`\varphi `$ is a morphism.∎

###### 10 Proof of (3).

By \[Mori79\] there is a rational curve $`C\subset X`$ such that $`0<(-K_X\cdot C)\le n+1`$. Fix an ample divisor $`L`$ and pick a rational curve $`C\subset X`$ such that $`0<(-K_X\cdot C)`$ and $`(L\cdot C)`$ is minimal. Then $`(-K_X\cdot C)=n+1`$ by (3.1) and $`C`$ is not numerically equivalent to a reducible curve whose irreducible components are rational. Thus, as in (7), we obtain that the Picard number of $`X`$ is 1 and $`-K_X`$ is ample. Set $`H:=-\frac{1}{n+1}K_X`$. $`H`$ is a $`\mathbb{Q}`$-divisor such that $`(H^n)\ge 1`$ and $`(H\cdot C)\ge 1`$ for every rational curve $`C\subset X`$. It is easy to see that the proof of (8) works for $`\mathbb{Q}`$-divisors satisfying these assumptions. Thus we obtain (3).∎

Returning to the proof of (1), we are left with the cases when $`(-K_X\cdot C)=n`$. Then we have an at least $`(n-2)`$-dimensional family of curves through every $`x\in X`$. If there is an $`x\in X`$ such that all curves of degree 1 through $`x`$ cover $`X`$, then we obtain that $`X\cong \mathbb{P}^n`$ and this leads to a contradiction. Thus for every $`x\in X`$ the curves $`\{C_t:x\in C_t\}`$ sweep out a divisor $`B_x`$. The divisors form an algebraic family for $`x`$ in a suitable open set $`X^0\subset X`$.

###### 11 Case 2: $`(-K_X\cdot C)=n`$ and $`B_{x_1}\cap B_{x_2}\ne \varnothing `$ for $`x_1,x_2\in X^0`$.
By assumption, any two points of $`X`$ are connected by a chain of length 2 of the form $`C_{t_1}\cup C_{t_2}`$, thus $`X`$ has Picard number 1 by \[Kollár95, IV.3.13.3\]. Thus we see that $`K_X\equiv -nH`$. The computation of the genus of a general complete intersection curve of members of $`mH`$ for odd $`m\gg 1`$ shows that $`(H^n)`$ is even. Thus $`(H^n)\ge 2`$. Pick general $`x_1,x_2`$ such that there is no degree 1 curve through $`x_1`$ and $`x_2`$ and pick curves $`C_i\ni x_i`$ such that $`C_1\cap C_2\ne \varnothing `$. We can view $`C_1\cup C_2`$ as the image of a map $`f:\mathbb{P}^1\cup \mathbb{P}^1\to X`$ where $`\mathbb{P}^1\cup \mathbb{P}^1`$ denotes the union of 2 lines in $`\mathbb{P}^2`$. Let $`y_i\in \mathbb{P}^1\cup \mathbb{P}^1`$ be a preimage of $`x_i`$. By the usual estimates (cf. \[Kollár95, II.1.7.2\]) we obtain that

$$\dim _{[f]}\mathrm{Hom}(\mathbb{P}^1\cup \mathbb{P}^1,X,y_i\mapsto x_i)\ge 2n+n-2n=n.$$

On the other hand, these maps correspond to pairs of curves $`C_1^{\prime}\cup C_2^{\prime}`$ where $`C_i^{\prime}`$ passes through $`x_i`$ and $`C_1^{\prime}\cap C_2^{\prime}\ne \varnothing `$. By our assumption these form an $`(n-2)`$-dimensional family. The automorphism group $`\mathrm{Aut}(\mathbb{P}^1\cup \mathbb{P}^1,y_1,y_2)`$ accounts for the missing 2 dimensions. Viewing $`\mathbb{P}^1\cup \mathbb{P}^1`$ as a reducible plane conic, \[Kollár95, II.1.7.3\] implies that there is an at least $`(n-1)`$-dimensional family of degree 2 rational curves $`\{A_s\subset X:s\in S\}`$ which pass through both of $`x_1,x_2`$. As in Case 1, we obtain that

$$h^0(X,𝒪_X(mH)\cdot I_{x_1}^{m+1}\cdot I_{x_2}^{m+1})\ge (\text{const})m^n\text{ if }(H^n)>2,\quad \text{and}\quad h^0(X,𝒪_X(mH)\cdot I_{x_1}^m\cdot I_{x_2}^m)\ge (\text{const})m^{n-1}\text{ if }(H^n)=2.$$

Pick any member $`D\in |𝒪_X(mH)\cdot I_{x_1}^{m+1}\cdot I_{x_2}^{m+1}|`$ (resp. $`D\in |𝒪_X(mH)\cdot I_{x_1}^m\cdot I_{x_2}^m|`$) and take $`A_s\not\subset D`$. Then $`(A_s\cdot D)=2m`$. On the other hand, $`(A_s\cdot D)\ge \sum _{y\in A_s\cap D}\mathrm{mult}_yD\ge 2m+2`$ (resp. $`\ge 2m`$). Thus $`(H^n)=2`$, $`(A_s\cap D)=\{x_1,x_2\}`$ and the linear system $`|𝒪_X(mH)\cdot I_{x_1}^m\cdot I_{x_2}^m|`$ is constant along $`A_s`$ for general $`s`$. We also see that for general $`A_s`$, the line bundle $`𝒪_{A_s}(mH)`$ has a section with an $`m`$-fold zero at two general points of $`A_s`$. Since $`\mathrm{deg}𝒪_{A_s}(mH)=2m`$, this implies that a general $`A_s`$ is a smooth rational curve.

Set $`Y=B_{x_1x_2}X`$ with exceptional divisors $`E_1,E_2`$ and projection $`\pi :Y\to X`$. Set $`M=\pi ^{*}H-E_1-E_2`$. Then $`|mM|=|𝒪_X(mH)\cdot I_{x_1}^m\cdot I_{x_2}^m|`$. As in Case 1, we obtain an open set $`Y^0\subset Y`$ and a $`\mathbb{P}^1`$-bundle $`Y^0\to Z^0`$ with two sections $`E_i\cap Y^0`$. Construct $`p^{\prime}:U^{\prime}\to Z^{\prime}`$ and $`u:U^{\prime}\to X`$ using $`\mathrm{Chow}(X)`$ as before. Let $`z\in Z^{\prime}`$ be any point and $`A_z\subset X`$ the 1–cycle corresponding to $`z`$. $`A_z`$ has degree 2 and it passes through the points $`x_1,x_2`$. We have assumed that there is no degree 1 curve passing through $`x_1,x_2`$, thus either $`A_z`$ is irreducible and $`(H\cdot A_z)=2`$ or $`A_z`$ has 2 different irreducible components $`A_z^1\ne A_z^2`$ and $`(H\cdot A_z^i)=1`$. The key point is to establish that the cycle map $`u`$ is quasifinite on $`U^{\prime}\setminus (E_1^{\prime}\cup E_2^{\prime})`$. Assuming the contrary, we have a normal surface $`S`$ and morphisms $`p^{\prime}:S\to Z^{\prime\prime}\subset Z^{\prime}`$ and $`u:S\to X`$ such that every fiber of $`p^{\prime}:S\to Z^{\prime\prime}`$ is either $`\mathbb{P}^1`$ or the union of 2 copies of $`\mathbb{P}^1`$ and $`u`$ contracts $`E_i\cap S`$ and another curve $`B`$ to points. A version of bend–and–break (12) establishes that in this case $`u(p^{\prime -1}(z))`$ is independent of $`z\in Z^{\prime\prime}`$. This is, however, impossible since $`Z^{\prime}\to \mathrm{Chow}(X)`$ is finite. Again using (9) we conclude that $`U^{\prime}\cong Y`$ and $`Z^{\prime}\cong E_i^{\prime}\cong \mathbb{P}^{n-1}`$.
Therefore

$$𝒪_Y(\pi ^{*}H-E_1-E_2)\cong p^{\prime *}𝒪_{E_i^{\prime}}(1)\cong p^{\prime *}𝒪_{\mathbb{P}^{n-1}}(1)$$

is generated by global sections. Pushing these sections down to $`X`$ and varying the points $`x_i`$ we obtain that $`𝒪_X(H)`$ is generated by $`n+2`$ global sections. Thus $`|H|`$ gives a finite morphism $`X\to \mathbb{P}^{n+1}`$ whose image is a quadric since $`(H^n)=2`$. Thus $`X`$ is a smooth quadric.

###### Lemma 12 (3 point bend–and–break).

Let $`S`$ be a normal and proper surface which is a conic bundle (that is, there is a morphism $`p:S\to A`$ such that every (scheme theoretic) fiber is isomorphic to a plane conic). Let $`E_1,E_2\subset S`$ be disjoint sections of $`p`$ and $`B\subset S`$ a multisection. Assume that every singular fiber $`F_a`$ of $`p`$ has 2 components $`F_a=F_a^1\cup F_a^2`$, and $`(F_a^i\cdot E_i)=0`$ for $`i=1,2`$. Let $`L`$ be a nef divisor on $`S`$ such that $`(L\cdot F_a^i)=1`$ for every $`F_a`$ and $`i`$ and $`(L\cdot E_1)=(L\cdot E_2)=(L\cdot B)=0`$. Then $`S\cong A\times \mathbb{P}^1`$, the $`E_i`$ are flat sections and $`B`$ is a union of flat sections.

Proof. The Picard group of $`S`$ is generated by the classes $`F_a^i,E_1`$ with rational coefficients (cf. \[Kollár95, IV.3.13.3\]). From the conditions $`(L\cdot F_a^i)=1`$ we conclude that $`L=E_1+E_2+dF`$ for some $`d`$ (and $`(E_1^2)=(E_2^2)`$). $`(L^2)\ge 0`$ and $`(L\cdot E_i)=0`$, so by the Hodge index theorem $`(E_i^2)\le 0`$. Hence we conclude that $`d=-(E_i^2)\ge 0`$. This implies that

$$(L\cdot B)=(E_1\cdot B)+(E_2\cdot B)-\mathrm{deg}(B/A)(E_1^2).$$

All terms on the right hand side are nonnegative. Thus $`(E_1^2)=(E_2^2)=0`$. Since $`(E_1\cdot E_2)=0`$, this implies that $`E_1`$ and $`E_2`$ are algebraically equivalent. Thus there are no singular fibers. ∎

###### 13 Case 3: $`(-K_X\cdot C)=n`$ and $`B_{x_1}\cap B_{x_2}=\varnothing `$ for general $`x_1,x_2`$.

Take a general $`x_1\in X`$ and a general $`x_3\in B_{x_1}`$. If $`\dim (B_{x_1}\cap B_{x_3})=n-2`$ then $`\dim (B_{x_1}\cap B_{x_2})=n-2`$ for a general $`x_2`$, a contradiction. Thus $`B_{x_1}`$ and $`B_{x_2}`$ are either disjoint or they coincide for $`(x_1,x_2)`$ in an open subset of $`X\times X`$. The algebraic family of divisors $`B_x`$ thus determines a morphism $`p:X\to A`$ to a smooth curve $`A`$. A general fiber of $`p`$ is $`B_x`$. Let $`Y`$ be a general fiber of $`p`$ and $`x\in Y`$ a general smooth point. We have an $`(n-2)`$-dimensional family of curves $`C_t`$ through $`x`$ and all of these are contained in $`Y`$. $`(C_t\cdot H)=1`$ and the Picard number of $`Y`$ is 1 as before. Thus $`K_Y\equiv K_X|_Y\equiv -nH|_Y`$. Let $`\pi :\overline{Y}\to Y`$ denote the normalization of $`Y`$. The curves $`C_t`$ lift to $`\overline{Y}`$, and

$$K_{\overline{Y}}=\pi ^{*}K_Y-(\text{conductor of }\pi ),$$

where the conductor of $`\pi `$ is an effective divisor which is zero iff $`\pi `$ is an isomorphism (this is a special case of duality for finite morphisms, cf. \[Hartshorne77, Ex.III.7.2\]). Thus

$$(-K_{\overline{Y}}\cdot \pi ^{*}H^{n-2})\ge (-K_Y\cdot H^{n-2})=n(H|_Y)^{n-1}=n(\pi ^{*}H)^{n-1}.$$

So $`\overline{Y}\cong \mathbb{P}^{n-1}`$ by (8). Furthermore, we also obtain that $`-K_{\overline{Y}}=n\pi ^{*}H+(\text{conductor})`$, which implies that the conductor of $`\pi `$ is zero. Thus $`Y\cong \mathbb{P}^{n-1}`$ and $`p:X\to A`$ is generically a $`\mathbb{P}^{n-1}`$-bundle. This implies that $`p`$ has a section (cf. \[Serre79, X.6-7\]), hence every fiber of $`p`$ has a reduced irreducible component. Let $`W\subset X`$ be a reduced irreducible component of a fiber of $`p`$ and $`x\in W`$ a general smooth point. The above argument applies also to $`W`$, and we obtain that $`W\cong \mathbb{P}^{n-1}`$ and $`W`$ is a connected component of its fiber. This shows that $`X`$ is a $`\mathbb{P}^{n-1}`$-bundle over $`A`$. ∎

###### Acknowledgments.
Partial financial support was provided by the NSF under grant numbers DMS-9800807 and DMS-9622394. Johns Hopkins University, Baltimore MD 21218 ``` kachi@math.jhu.edu ``` Princeton University, Princeton NJ 08544-1000 ``` kollar@math.princeton.edu ```
# Structure in the local Galactic ISM on scales down to 1 pc, from multi-band radio polarization observations

## 1 Introduction

Wieringa et al. (1993) were the first to note structure on arcminute scales in the linearly polarized component of the galactic radio background at 325 MHz, observed with the WSRT. The small-scale structure in the maps of polarized intensity $`P`$ (with polarized brightness temperatures $`T_{\mathrm{b},\mathrm{pol}}`$ of up to 10 K) does NOT have a counterpart in total intensity, or Stokes $`I`$, down to very low limits. Because the total Stokes $`I`$ of the galactic radio background has an estimated $`T_\mathrm{b}`$ of the order of 30 – 50 K at 325 MHz, which must be very smooth and therefore filtered out completely in the WSRT measurements, the apparent polarization percentage of the small-scale features can become very much larger than 100%. The absence of corresponding small-scale structure in Stokes $`I`$ led Wieringa et al. (ibid.) to propose that the small-scale structure in polarized intensity $`P`$ is due to Faraday rotation modulation. In this picture, synchrotron radiation generated in the Galactic halo reaches us through a magneto-ionic screen, viz. the warm, relatively nearby ISM. Structure in the electron density and/or magnetic field in the ISM causes spatial variations in the Rotation Measure (RM) of the screen. Hence, the angle of linear polarization of the synchrotron emission from the halo is rotated by different amounts along different lines of sight. Even if the polarized emission in the halo were totally smooth, in intensity as well as angle, the screen would produce structure in Stokes $`Q`$ and $`U`$.

Small-scale structure in the polarized galactic radio background has recently been observed also at other frequencies. At 1420 MHz, Gray et al. (1998, 1999) used the DRAO synthesis telescope to study the phenomenon at 1′ resolution. Uyaniker et al. (1999) used the Effelsberg telescope at 1.4 GHz to map the polarized emission at 9′ resolution over about 1100 square degrees. Duncan et al. (1998) discuss radio polarization data at 1.4, 2.4 and 4.8 GHz with the Parkes radio telescope and the VLA, at resp. 5′, 10′ and 15′ resolution. All these observations support the interpretation in terms of modulation of emission originating at larger distances, by a relatively nearby Faraday screen.

The distributions of polarized intensity and angle may therefore be used to study the structure of the Faraday screen. In particular, polarization observations give information about the electron density, $`n_\mathrm{e}`$, and the component of the magnetic field parallel to the line of sight, $`B_{\parallel}`$, in the ISM on scales down to less than ∼0.5 pc ($`<`$4′ at an assumed distance of ∼500 pc). The diffuse nature of the polarized radio background allows (almost) complete spatial mapping of RMs over large areas, provided one has observations at several frequencies. This gives a large advantage over RM determinations through individual objects, like pulsars or extra-galactic radio sources.

## 2 Distribution of polarized intensity

In Fig. 1 we show a gray-scale representation of the polarized intensity in a 5 MHz wide frequency band centered at 349 MHz. The map shows a region of 6.4°×9° centered at $`\alpha =6^h10^m`$, $`\delta =53^{\circ}`$ ($`\ell =161^{\circ}`$, $`b=16^{\circ}`$) at an angular resolution of about 4′. It is one of 8 frequency bands observed simultaneously. Three of those have strong interference, but we obtained good data at 341, 349, 355, 360 and 375 MHz.
All 5 maps were made combining mosaics of 7×5 pointing centres. This yields constant sensitivity over a large area (see e.g. Rengelink et al. 1997). The observations were made with the WSRT in January and February 1996, largely at night, and ionospheric Faraday rotation was therefore well-behaved. No corrections were applied. The region in Fig. 1 is rather special because $`T_{\mathrm{b},\mathrm{pol}}`$ goes up to 10 K, and because it contains large, almost linear structures in $`P`$. Our attention was drawn to this field by the panoramic view of galactic polarization produced in the WENSS survey (de Bruyn & Katgert 2000). However, this field is not unique, and there are other regions with similarly high $`T_{\mathrm{b},\mathrm{pol}}`$. Over a very large fraction of the map the $`P`$-signal is quite significant, with a noise $`\sigma _{\mathrm{T}_\mathrm{b}}\approx 0.5`$ K. With S/N-ratios of generally more than 3 and going up to 30, polarization angles are well-defined. Note that in this region, the upper limit to structure in Stokes $`I`$ (total intensity) on small scales (≲30′) is about 1 K, or less than 2% of the total $`I`$.

There appear to be at least two distinct components in the polarized intensity distribution. The first one is a fairly smooth, ‘cloudy’ component, pervading the entire map, with intensity variations on typical scales of (several) tens of arcminutes. In addition, there are conspicuous, very narrow and often quite long and wiggly structures, which we will refer to as ‘canals’, in which the polarized intensity is considerably lower than in the immediate surroundings. In this Letter we focus on the nature and implications of the narrow ‘canals’; we will discuss the ‘cloudy’ component in more detail in another paper (Haverkorn et al. 2000).

## 3 The nature of the ‘canals’ in polarized intensity

The strong and abrupt decrease of polarized intensity in the ‘canals’ suggests that depolarization is responsible. There are several mechanisms that can produce depolarization, but the only plausible type in this case is beam depolarization. This occurs when the polarization angle varies significantly within a beam. Complete depolarization requires that for each line of sight there is a ‘companion’ line of sight within the same beam that has the same polarized intensity but for which the polarization angle differs by 90°. Below we will show that our observations indicate that the polarization angle indeed changes by large amounts across low polarized intensity ‘canals’, and close to 90° across the ‘canals’ of lowest $`P`$.

Depolarization can also be caused by ‘differential Faraday rotation’. This happens when along a line of sight emitting and (Faraday) rotating plasmas coexist (e.g. Burn 1966; Sokoloff et al. 1998). However, the absence of correlated structure in Stokes $`I`$ and the high degree of polarization suggest that this is not a dominating effect. Significant bandwidth depolarization, which occurs when the polarization angle is rotated by greatly different amounts in different parts of a frequency band, could only play a rôle (given our 5 MHz bandwidth) if the RM were of order 80 rad m⁻², which is not the case in this region near the galactic anti-centre (see below).

In Fig. 2 we show the polarization vectors around a few of the deepest ‘canals’, superimposed on gray-scale plots of $`P`$, in two frequency bands. The area shown is indicated in Fig. 1.
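For reference, the 90° criterion follows directly from the fact that the complex linear polarization $`Q+iU`$ rotates by twice the polarization angle; for two lines of sight of equal polarized amplitude $`p`$ within an idealized beam, one finds

$$P_{\mathrm{beam}}\propto p\,e^{2i\varphi _1}+p\,e^{2i(\varphi _1+90^{\circ})}=p\,e^{2i\varphi _1}(1+e^{i\pi })=0,$$

so the beam-averaged polarized intensity vanishes exactly when the two angles differ by 90°.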
The polarization vectors on either side of the ‘canals’ are quite close to perpendicular, demonstrating that the ‘canals’ are produced by beam depolarization. This perpendicularity applies to all ‘canals’, irrespective of frequency band, and is very convincing, especially because everywhere else the polarization vectors vary quite smoothly (if significantly!). Beam depolarization creates ‘canals’ that are one beam wide, which is exactly what we observe. This implies that the 90° ‘jump’ must occur on angular scales smaller than the beamwidth. At 2′ resolution (about twice that in Fig. 2), the ‘canals’ indeed seem unresolved, but the decrease in S/N-ratio precludes conclusions on even smaller scales (the original data have 0.8′ resolution).

Additional evidence that the ‘canals’ are due to beam depolarization is statistical. We defined ‘canal-like’ points from the observed values of $`P`$, as follows. For each point in the mosaic we compared the observed value of $`P`$ with the $`P`$-values in pairs of two diametrically opposed neighbouring (adjacent) points. If the value of $`P`$ in the central point was less than a certain small fraction of the values in both comparison points, the point was defined ‘canal-like’. This definition mimics the visual detection ‘algorithm’. In the top panel of Fig. 3 we show the distribution of the difference between the $`\varphi _{\mathrm{pol}}`$’s in the two adjacent points that define the ‘canal-like’ points, for a $`P`$-threshold of 30%. The $`\mathrm{\Delta }\varphi _{\mathrm{pol}}`$-distribution peaks at 90°, fully consistent with the beam depolarization hypothesis. This conclusion is reinforced by a comparison with the distribution of $`\mathrm{\Delta }\varphi _{\mathrm{pol}}`$ (again for diametrically opposed adjacent neighbours) of all points for which $`P`$ is between 1.0 and 2.0 times larger than both $`P`$-values in the two diametrically opposed neighbouring points, which is shown in the bottom panel of the same figure. Similar ‘canals’ were noted by Uyaniker et al. (1999) and Duncan et al. (1998), who also invoked beam depolarization. Yet, Fig. 3 is the first quantitative proof for this explanation.

## 4 The cause of the ‘jumps’ in polarization angle

Two processes can cause jumps in polarization angle $`\varphi _{\mathrm{pol}}`$ across the ‘canals’: a sudden change in RM across the ‘canals’, and a jump in intrinsic $`\varphi _{\mathrm{pol}}`$ of the emission incident on the Faraday screen. A large change in intrinsic $`\varphi _{\mathrm{pol}}`$ implies a change in magnetic field direction and is therefore quite difficult to understand in view of the absence of structure in total intensity $`I`$ at the more than 2% level (see Sect. 2). On the other hand, variations in the RM of the Faraday screen would seem to be quite natural, if not unavoidable. Discontinuities in RM must play an important rôle in producing the ‘canals’, because the ‘canals’, although similar in adjacent frequency bands, generally do not occur in all bands, and certainly are not identical in the different bands (see Fig. 2). This indicates that the jumps in $`\varphi _{\mathrm{pol}}`$ are mainly due to changes in RM. However, the question is if the jumps in $`\varphi _{\mathrm{pol}}`$ are indeed accompanied by jumps in RM of the right magnitude, so that $`\mathrm{\Delta }\varphi _{\mathrm{pol}}=90^{\circ}`$ is produced at the frequency where the ‘canal’ is best visible.
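To make the procedure explicit before discussing its complications: a per-beam RM estimate amounts to a weighted linear fit of the five measured angles against $`\lambda ^2`$. A minimal sketch of such a fit, with hypothetical angles and errors (real data additionally require unwrapping the $`n\times 180^{\circ}`$ ambiguity), could look as follows:

```python
# Sketch of a per-beam RM estimate: a weighted least-squares fit of
# polarization angle versus lambda^2 over the five bands, assuming pure
# Faraday rotation, phi = phi_0 + RM * lambda^2. Angles and errors are
# hypothetical placeholders, not measured values.
import numpy as np

c = 2.998e8                                                  # m/s
freq = np.array([341.0, 349.0, 355.0, 360.0, 375.0]) * 1e6   # Hz
lam2 = (c / freq) ** 2                                       # m^2

phi = np.deg2rad(np.array([35.0, 30.0, 28.0, 25.0, 18.0]))   # hypothetical
sig = np.deg2rad(np.full(5, 3.0))                            # hypothetical 1-sigma

W = 1.0 / sig ** 2
A = np.vstack([np.ones_like(lam2), lam2]).T                  # [phi_0, RM] design
cov = np.linalg.inv(A.T @ (W[:, None] * A))
phi0, rm = cov @ (A.T @ (W * phi))                           # weighted LSQ solution
print(f"RM = {rm:+.1f} +/- {np.sqrt(cov[1, 1]):.1f} rad m^-2")  # a few rad/m^2
```

Applied on either side of a ‘canal’, two such fits give the $`\mathrm{\Delta }`$RM that is to be compared with the observed angle jump.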
In principle, the determination of RM only involves a simple linear fit of the polarization angles in the five frequency bands (at 341, 349, 355, 360 and 375 MHz) vs. $`\lambda ^2`$, but in practice several complications may arise. First, the observed values of $`\varphi _{\mathrm{pol}}`$ may be biased due to imaging effects (like off-sets) in the Stokes $`Q`$- and $`U`$-maps from which $`\varphi _{\mathrm{pol}}`$ is derived (cf. Wieringa et al. 1993). Our data indicate that, in the maps of this region of sky, such off-sets are quite small, so that the bias in the $`\varphi _{\mathrm{pol}}`$ values is small. Second, it is not obvious that the assumption of pure Faraday rotation ($`\varphi (\lambda )\propto \lambda ^2`$) is supported by the data (see Haverkorn et al. 2000). In Fig. 4 we show an array of plots of $`\varphi _{\mathrm{pol}}(\lambda )`$ vs. $`\lambda ^2`$ for independent beams in the small region (indicated in Fig. 2) that contains two clear ‘canals’. As can be seen, a direct determination of $`\mathrm{\Delta }`$RM across the ‘canals’ is not at all trivial. Without knowing the position of the ‘canals’, one would probably have some trouble finding the ‘canals’ from discontinuities in the RM distribution alone, due to the uncertainties in the RM-estimates, which sometimes are considerable. On the other hand, if one knows where the canals are, one can identify some related ‘jumps’ in RM. From the present data, it seems quite likely that the ‘canals’ are primarily due to quite abrupt and relatively large changes of RM, with $`\mathrm{\Delta }`$RM/RM ranging from ∼0.3 to more than 1 (at least in this region of sky). Note that in this region the RMs are in the range from –10 to +10 rad m⁻² (also confirmed by several polarized extragalactic radio sources in these same observations). However, a more robust conclusion about the relation between $`\mathrm{\Delta }\varphi _{\mathrm{pol}}`$ and $`\mathrm{\Delta }`$RM requires a detailed analysis of more, and more sensitive, data, and a careful error analysis.

## 5 Implications for the structure of the local ISM

Because we have not yet reached a quantitative conclusion about the suspected correlation between $`\mathrm{\Delta }\varphi _{\mathrm{pol}}`$ and $`\mathrm{\Delta }`$RM, it is not possible to give a full discussion of the implications that these polarization data have for the small-scale structure of the warm ISM. However, the data discussed here show the great promise that high-resolution, multi-band polarization data hold for the study of the ISM, especially on small scales where pulsars and extragalactic radio sources cannot give much information. Fortunately, more and more sensitive radio polarization data (in different regions of sky) are forthcoming. In addition, information must be obtained about the electron density in the warm ISM on the relevant scales (e.g. through H$`\alpha `$ measurements), as well as on the other components in the ISM (like e.g. the HI). While we fully realize the preliminary nature of the conclusions presented, we feel justified to speculate somewhat on the possible implications of the ‘canals’. Structure in RM reflects structure in $`B_{\parallel}`$ and/or $`n_\mathrm{e}`$ in the ISM. However, as the RM is an integral over the entire line of sight, the large $`\mathrm{\Delta }`$RM/RM values that are implied by our observations may give a very specific message. In particular, we consider it unlikely that the large $`\mathrm{\Delta }`$RM/RM values are produced mainly by variations in electron density.
Instead, they may be indicating a turbulent ISM with varying (reversing) magnetic field structures, as modeled in recent MHD simulations (see e.g. Mac Low & Ossenkopf 2000; Vázquez-Semadeni & Passot 1999).

###### Acknowledgements.

The Westerbork Synthesis Radio Telescope is operated by the Netherlands Foundation for Research in Astronomy (NFRA) with financial support from the Netherlands Organization for Scientific Research (NWO). This work is supported by NWO grant 614-21-006.
# The longest thermonuclear X-ray burst ever observed?

## 1 Introduction

Of the ∼150 low-mass X-ray binaries known in our galaxy, about 40% show occasional bursts of X-rays, in which a rapid rise, lasting from less than a second to ∼10 s, is followed by a slower decay, lasting from ∼10 s to minutes. During the decay the characteristic temperature of the X-ray spectrum decreases. An X-ray burst is explained as energy release by rapid nuclear fusion of material on the surface of a neutron star, and thus an X-ray burst is thought to identify the compact object emitting it unambiguously as a neutron star. If the burst is very luminous, reaching the Eddington limit $`L_{\mathrm{Edd}}`$, the energy release may temporarily lift the neutron star atmosphere to radii of order 100 km. Reviews of observations of X-ray bursts are given by Lewin et al. (1993, 1995). The properties of a burst depend, according to theory, on the mass and radius of the neutron star, on the rate with which material is accreted onto the neutron star, and on the composition of the accreted material. It is hoped that a detailed study of X-ray bursts can be used to determine the mass and radius of the neutron star, via the relation between luminosity, effective temperature and flux, and via the changes in the general relativistic correction to this relation when the atmosphere expands from the neutron star surface to a larger radius. However, the physics of the X-ray burst is complex. There is evidence that the emitting area does not cover the whole neutron star and changes with the accretion rate. Reviews of the theory of X-ray bursts are given by Bildsten (1998, 2000).

In this paper we describe a long flux enhancement that we observed with the Wide Field Cameras of BeppoSAX in the X-ray burst source 4U 1735−44, and argue that this event is the longest type I X-ray burst ever observed. In Sect. 2 we describe the observations and data extraction, in Sect. 3 the properties of the flux enhancement. A discussion and comparison with earlier long bursts is given in Sect. 4. In the remaining part of this section we briefly describe earlier observations of 4U 1735−44.

4U 1735−44 is a relatively bright low-mass X-ray binary. Smale et al. (1986) fit EXOSAT data in the 1.4-11 keV range with a power law of photon index 1.8 with an exponential cutoff above 7 keV, absorbed by an interstellar column $`N_\mathrm{H}\approx 5\times 10^{20}\,\mathrm{cm}^{-2}`$. The flux in the 1.4-11 keV range is $`4\times 10^{-9}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$. Van Paradijs et al. (1988) show that a sum of thermal bremsstrahlung of 10 keV and black body radiation of 2 keV, absorbed by an interstellar column $`N_\mathrm{H}<8\times 10^{20}\,\mathrm{cm}^{-2}`$, adequately describes EXOSAT data in the same energy range and at a similar flux level, obtained one year later. A similar spectrum, with a higher absorption column $`N_\mathrm{H}\approx 3.4\times 10^{21}\,\mathrm{cm}^{-2}`$, fits the Einstein solid-state spectrometer and monitor proportional counter data (Christian & Swank 1997). During GINGA observations, the source was somewhat brighter, at $`9\times 10^{-9}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$ in the 1-37 keV range (Seon et al. 1997).

Bursts were detected at irregular time intervals during each of the five occasions in 1977 and 1978 that SAS-3 observed 4U 1735−44, leading to a total of 53 detected bursts (Lewin et al. 1980). EXOSAT detected one burst in 1984 (Smale et al.
1986) and five bursts during a continuous 80 hr observation in 1985 (Van Paradijs et al. 1988); one rather bright burst was detected with GINGA in 1991 (Seon et al. 1997), and five X-ray bursts with RXTE in 1998 (Ford et al. 1998). Burst intervals range from about 30 minutes to more than 50 hrs. Three of the bursts observed with EXOSAT and the single burst observed with GINGA were radius expansion bursts (Damen et al. 1990, Seon et al. 1997), and have been used to determine the distance to 4U 1735−44 as about 9.2 kpc (Van Paradijs and White 1995). 4U 1735−44 was the first X-ray burster for which an optical counterpart was found: V926 Sco (McClintock et al. 1977). From optical photometry an orbital period of 4.65 hrs was derived (Corbet et al. 1986).

## 2 Observations and data extraction

The Wide Field Camera experiment (Jager et al. 1997) is located on the BeppoSAX platform, which was launched early in 1996 (Boella et al. 1997). It comprises two identically designed coded-aperture multi-wire Xenon proportional counter detectors. The field of view of each camera is 40×40 degrees full width to zero response, which makes it the largest of any flown X-ray imaging device with good angular resolution. The angular resolution is 5′ full width at half maximum, and the accuracy of the source location is upward of 0.7′, depending mainly on the signal-to-noise ratio. The photon energy range is 2-28 keV, and the time resolution is 0.5 ms. Due to the coded mask aperture the detector data consist of a superposition of the background and shadowgrams of multiple sources. To reconstruct the sky image an algorithm is employed which is based on cross correlation of the detector image with the coded mask (Jager et al. 1997). Since the fall of 1996, the Wide Field Cameras observe the field around the Galactic Center on a regular basis during each fall and spring. The first campaign was a nine-day near-continuous observation from August 21 until August 30, 1996. About 30% of the time, viz. ∼35 minutes per orbit, is lost due to earth occultation and due to passage through the South Atlantic Anomaly.

## 3 A long X-ray flux enhancement of 4U 1735−44

In Fig. 1 we show the lightcurve of 4U 1735−44 as observed with the WFC between 21 and 30 August 1996. The persistent countrate varies between 0.2 and 0.3 counts cm⁻² s⁻¹. Immediately after the earth occultation on MJD 50318.1 a strong enhancement (factor ∼3) in the X-ray intensity was seen, which subsequently decayed exponentially. An expanded lightcurve of this event is also shown in Fig. 1. The position derived for this event is 3.2′ ± 3.4′ from the position of 4U 1735−44 as derived from its persistent emission. (Both positions share the same systematic error, and thus their relative position is much more accurate than their absolute positions, which have errors of ∼1′.) We conclude that the event is from 4U 1735−44. To the persistent flux we fit the two models discussed in the introduction, i.e. a power law with high energy cutoff, and a sum of bremsstrahlung and black body spectra, in the 2-24 keV range. The spectra before and after the event are for the intervals MJD 50317.8-50318.1 and MJD 50319.7-50320.8, respectively. The results are given in Table 1. We note that the values of the fit parameters are similar to those for earlier observations.
Notwithstanding the different flux levels before and after the flux enhancement, the hardness of the spectrum (also shown in Fig. 1) is similar. The persistent flux corresponds to an X-ray luminosity at 9.2 kpc of $`4.4\times 10^{37}\,\mathrm{erg}\,\mathrm{s}^{-1}`$ in the 2-28 keV band. In our fits we set the interstellar absorption at a fixed value of $`N_\mathrm{H}=3.4\times 10^{21}\,\mathrm{cm}^{-2}`$; the hard energy range of the WFC is not much affected by absorption, and fits for different assumed absorption values give results similar to those listed in Table 1. To describe the flux decline we first fit an exponential $`C=C(0)e^{-t/\tau }`$ to the observed countrate in the 2-28 keV range. The fit is acceptable (at $`\chi _\nu ^2=1.6`$ for 33 d.o.f.) and $`\tau =86\pm 5`$ min. Fits to the counts in the 2-5 keV and 5-20 keV ranges give decay times of $`129\pm 15`$ and $`67\pm 5`$ min, respectively, in accordance with the observed softening of the flux during decline (see Fig. 1).

We fit the spectrum during the flux enhancement as follows. First we add all the counts obtained between MJD 50318.10 and 50318.25. We then fit the total spectrum with the sum of a black body and either a cutoff power law spectrum or a thermal bremsstrahlung spectrum. In these fits, the parameters of the power law and bremsstrahlung component are fixed at the values obtained for the fit to the persistent spectrum after the event. The resulting parameters for the black body are also listed in Table 1. At the observed maximum the bolometric flux was $`(1.5\pm 0.1)\times 10^{-8}\,\mathrm{erg}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}`$, which for a source at 9.2 kpc corresponds to a luminosity of $`1.5\times 10^{38}\,\mathrm{erg}\,\mathrm{s}^{-1}`$. The start of the flux enhancement is not observed, but if we assume that its maximum is at the Eddington limit of $`1.8\times 10^{38}\,\mathrm{erg}\,\mathrm{s}^{-1}`$ (for a neutron star mass of $`1.4M_{\odot}`$) and that the decay time is constant, then maximum was reached 23.6 min before the source emerged from earth occultation, leaving at most 12.4 min for the rise to maximum (since the start of the data gap). The decay from maximum was therefore much longer, by a factor $`>`$8, than the rise. The fluence in the observed part of the burst is $`5.1\times 10^{-5}\,\mathrm{erg}\,\mathrm{cm}^{-2}`$, corresponding to $`5.2\times 10^{41}`$ erg; this is a lower limit to the energy released during the full event.

We have also made fits to the first and second half of the event separately, and find temperatures for the blackbody component of 2.1-2.2 keV and 1.3-1.4 keV for the first and second half respectively, confirming the softening. For the blackbody radius we find 5.7-6.5 km and 8.5-8.8 km, for the first and second half, respectively. This apparent increase in radius is probably due to the difference between the observed colour temperature and the actual effective temperature of the black body; when we apply corrections to the colour temperature as given by van Paradijs et al. (1986) the value for the radius in the first part of the burst increases to 14 km, whereas that for the second half is unchanged.

## 4 Discussion

In addition to the thermonuclear X-ray bursts, also called type I bursts, low-mass X-ray binaries show other sudden enhancements in X-ray flux. Type II bursts are different from type I bursts in that type II bursts do not show cooling of the characteristic temperature of the X-ray spectrum during the decline. X-ray flares have an irregular flux evolution.
Type II bursts are thought to be accretion events; the nature of flares is unknown. The flux enhancement of 4U 1735−44 shows a smooth exponential decay of the countrate and of the characteristic temperature. Its rise must have been shorter than the decline. A black body gives a good fit to the observed spectrum, for a radius as expected from a neutron star, similar to earlier, ordinary bursts of 4U 1735−44. All these properties indicate a type I burst. The only special property of the new burst is its duration: expressed as the ratio of fluence $`E_\mathrm{b}`$ and peak flux $`F_{\mathrm{max}}`$, $`E_\mathrm{b}/F_{\mathrm{max}}>3400`$ s, it is more than 300 times longer than the longest burst observed previously from this source (see Lewin et al. 1980). This duration also translates into a fluence which is several orders of magnitude larger than that of the previous record holder for 4U 1735−44, because the peak flux is similar to those of normal type I bursts.

The fluence of a type I burst which burns all matter deposited onto a neutron star since the previous burst must be ∼1% of the accretion energy released by deposition of this matter. We do not have a measurement of the time to the previous burst, but in the seven days following the burst no other burst was observed. Multiplying this time by the persistent luminosity we obtain $`2.7\times 10^{43}`$ erg, or about 50 times the energy of the burst, well in the range of previously observed ratios for type I bursts. The presence of clear cooling argues against a type II burst; this and the smooth decay argue against a flare. If the flux enhancement were due to an accretion event, the amount of extra matter dropped onto the neutron star (assuming a mass of $`1.4M_{\odot}`$ and a radius of 10 km) must have been $`>3\times 10^{21}`$ g, which may be compared to the average accretion rate of $`2.3\times 10^{17}`$ g s⁻¹ derived for the persistent flux. If the inner part of the accretion disk had depleted itself onto the neutron star during the flux enhancement, one would expect the accretion rate immediately after to be lower than before. The observations suggest the opposite. We conclude that a type I X-ray burst is the best explanation for the enhanced flux event. We consider it significant that the occurrence of this burst is accompanied by the absence of any ordinary – i.e. short – burst throughout our 9-day observation, whereas all previous observations of 4U 1735−44 did detect ordinary bursts (see Introduction).

Searching the literature for long bursts we find that the longest type I burst published previously is a radius expansion burst observed with SAS-3, probably in 4U 1708−23 (Hoffman et al. 1978; see also Lewin et al. 1995). The ratio of fluence and peak flux for that burst was ∼500 s, so that the BeppoSAX WFC burst of 4U 1735−44 lasted at least six times longer. Other events published as long bursts from Aql X-1 (Czerny et al. 1987) and from X 1905+000 (Chevalier and Ilovaisky 1990) are in fact relatively short bursts followed by an enhanced constant flux level which persisted for several hours: in both cases the flux declined to 1/e of the peak level within 20 s. These events are clearly different from the long exponential bursts seen in 4U 1708−23 and 4U 1735−44. From the theoretical point of view, a long interval between bursts would allow hydrogen to burn completely before the onset of the burst, so that the energetics of the burst is dominated by pure helium burning.
If matter accreted at a rate of $`2.3\times 10^{17}`$ g s⁻¹ during one week, the energy released by helium burning is compatible with the energy of the observed burst. The problem with this model is that theory predicts for this accretion rate that the burst initiates well before hydrogen burning is completed, i.e. that bursts are more frequent and less energetic, in accordance with those previously observed from 4U 1735−44. Indeed, Fujimoto et al. (1987) find that a burst of $`10^4`$ s duration occurs only for accretion rates $`\dot{M}<0.01\dot{M}_{\mathrm{Edd}}`$. The persistent flux during the BeppoSAX observation is a factor ∼20 higher than this limit; observations previous to ours have consistently found 4U 1735−44 at a similar luminosity. An alternative model for bursts with a duration of $`10^4`$ s is accretion of pure helium at an accretion rate in excess of the Eddington limit ($`\dot{M}>5\times \dot{M}_{\mathrm{Edd}}`$, Brown & Bildsten 1998). The orbital period and optical spectrum indicate a main-sequence, i.e. hydrogen-rich, donor star (Augusteijn et al. 1998). Perhaps the main challenge for any theoretical explanation is that the properties of the persistent flux during our nine-day-long observation, during which a single very long X-ray burst was observed, are not different from those during earlier observations with EXOSAT, when more frequent ordinary bursts were found.
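As a closing consistency check, the energy bookkeeping used in the Discussion can be reproduced directly from the numbers quoted in the text; a minimal sketch, assuming the canonical $`1.4M_{\odot}`$, 10 km neutron star:

```python
# Back-of-the-envelope check of the burst energetics quoted above.
# Inputs are the values given in the text; mass and radius are the
# assumed canonical neutron star parameters.
G, Msun = 6.674e-8, 1.989e33          # cgs units
M, R = 1.4 * Msun, 1.0e6              # g, cm
L_pers = 4.4e37                       # erg/s, persistent 2-28 keV luminosity
E_burst = 5.2e41                      # erg, lower limit on the burst energy

mdot = L_pers / (G * M / R)           # accretion rate from L = G*M*mdot/R
E_week = L_pers * 7 * 86400.0         # energy radiated in the 7 burst-free days

print(f"mdot        = {mdot:.1e} g/s")          # ~2.4e17, cf. 2.3e17 in the text
print(f"E(7 d)      = {E_week:.1e} erg")        # ~2.7e43, as quoted
print(f"E(7 d)/E_b  = {E_week / E_burst:.0f}")  # ~50, as quoted
```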
# Pixel and Micro-lensing with NGST

## 1 Introduction

Searching for microlensing events towards neighbouring galaxies makes it possible to probe the distribution of compact objects (also called MAssive Compact Halo Objects or Machos) along their lines of sight (Paczynski 1986, Griest 1991). The interpretation of the first results is still very preliminary and relies on a small number of events (Alcock et al. 1997a,b,c, Renault et al. 1997, Palanque-Delabrouille et al. 1998, Alard et al. 1997, Udalski et al. 1994, Derue et al. 1998). Enlarging these sparse statistics and exploring new lines of sight will be key points in the near future for drawing strong constraints on the dark haloes. The detection of microlensing events on unresolved stars, initiated by Crotts (1992) and Baillon et al. (1993), offers some new perspectives. Two independent methods, based on this principle, have been developed. On the one hand, monitoring the fluxes of all the pixels present on the images makes it possible to achieve a good sensitivity to the possible variations of unresolved stars (Ansari et al. 1997; Melchior et al. 1998, 1999). A systematic study of all the information present in the frames is thus possible, with in particular an estimation of the detection efficiencies. On the other hand, the image subtraction technique also allows the detection of variable objects (Crotts & Tomaney 1996), but has mainly been used so far to improve the photometry of microlensing events detected towards the Galactic Bulge and LMC (e.g. Alard 1999; Alcock et al. 1999). Thus, when accounting for unresolved stars, the number of effectively monitored stars is significantly enlarged, lines of sight towards more distant target galaxies can be explored, and hence the number of potential target galaxies is increased by an order of magnitude. The efficacy of these approaches in detecting luminosity variations at a high rate has been demonstrated. The flux of the unresolved star is by definition unknown: this prevents the determination of the Einstein ring crossing time (the intrinsic duration of the events) and the estimation of the lens mass. In this context, the NGST will open new opportunities. Here, we discuss a few of them with respect to the possible instrumentation of this telescope.

## 2 Towards M 31

The survey of M 31 undertaken at the Isaac Newton Telescope (see http://www.ast.cam.ac.uk/~mike/casu/WFCsur/M31.html and http://www-star.qmw.ac.uk/AGAPE/) will detect a few hundred microlensing events if the dark haloes of M 31 and the Milky Way are filled with compact objects (this typically assumes a 5-year monitoring of M 31). In the following, we discuss how the NGST could improve our knowledge of the lenses and the dark component of the haloes.

### 2.1 Identification of unresolved stars for microlensing (Integral Field Spectrometer with R∼1000)

Whereas spectroscopic identifications of unlensed (resolved) stars in the Galactic Bulge (e.g. Benetti et al. (1995)) are possible from the ground, the NGST will offer the possibility to perform similar work in M 31 with unresolved stars. With a small field of view (10″×10″) in the optical and the near-infrared, spectroscopy (R∼1000) with high spatial resolution (0.1″) would make it possible to study further the microlensing candidates detected with ground-based telescopes, and in particular those affecting unresolved stars in M 31. The identification of those unresolved sources is very challenging with existing technology.
A case in point is the microlensing candidate detected by the AGAPE group (Ansari et al. 1999), at 41” from the centre of M 31: it affects a star whose magnitude at rest (i.e. unlensed) is fainter than 22 in R and which lies on a stellar background of surface brightness 16 mag arcsec<sup>-2</sup>. In this case, 3D imaging from space will be possible, allowing an identification and detailed study of the (unlensed) star. The unresolved stars corresponding to the microlensing events detected during the on-going ground-based surveys towards M 31 (e.g. INT survey) could be resolved and further studied with 3D imaging. This would provide complementary information and place some constraints on the lens mass. The typical expected events will occur in an area with a surface brightness of 22 mag arcsec<sup>-2</sup>, and the NGST spectroscopy would easily detect stars down to magnitude 26. Moreover, spectroscopy in the near-infrared will be possible for the study of red giants. ### 2.2 Identification of unresolved stars for Low Mass X-ray Binaries (Integral Field Spectrometer with R$`\sim `$200) These microlensing surveys will also achieve a good sensitivity to cataclysmic variables. XMM (see http://astro.estec.esa.nl/XMM/xmm_top.html, http://xmmssc-www.star.le.ac.uk/ and http://www-star.qmw.ac.uk/AGAPE/xpage1.html), due to be launched on 15 December 1999, will observe M 31. The cross-identification with the microlensing surveys will provide the first optical counterparts of LMXBs. The NGST will be complementary to these two observing modes, and will in particular be able to identify the optical counterparts of LMXBs in quiescence (mag $`\sim `$ 28-30). In fact, the optical to near-infrared spectrum could be measured, allowing an unprecedented study of these systems in M 31. ### 2.3 Real-time follow-up of ground-based observations (Integral Field Spectrometer with R$`\sim `$200-2000) Whereas spectroscopy of on-going microlensing events towards the Galactic bulge has been performed (e.g. see the remarkable work of Lennon et al. (1996)), similar work for extragalactic stars will be possible with the NGST. If ground-based microlensing surveys towards galaxies like M 31 are still ongoing in 2007, high S/N follow-up observations based on an alert system would be possible with 3D imaging. Possible deviations from the point source/point lens approximations could be studied in external galaxies like M 31: (1) chromatic and spectroscopic signatures as suggested by Valls-Gabaud (1998) (and references therein) for the study of stellar structure; (2) planets orbiting around a star in M 31 (e.g. di Stefano 1999); (3) binary lenses (e.g. Gaudi & Gould 1999). ### 2.4 New survey of M 31 in the near-infrared (Wide Field Imager – multi-band filter mode) Depending on the results of the current ground-based surveys, monitoring a large field of view (5’$`\times `$5’) in M 31 with a resolution better than 0.03” would allow a complementary study. A sensitivity to different (dimmer) stars could be achieved, and in addition to the optical, the near-infrared wavelength range (1-2$`\mu `$m) could also be investigated. The constant seeing that would characterise such data would also be a major advantage. A multi-band follow-up will allow an unprecedented analysis of microlensing events, including in particular a determination of the lens mass, completely independent of the ground-based optical observations.
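To make the mass determination concrete, the standard Einstein-radius relation links the measured event duration to the lens mass once distances and transverse velocity are constrained; a hedged sketch with illustrative numbers (the distances, lens position and velocity below are assumptions, not survey values):

```python
# How the event duration constrains the lens mass via the Einstein radius
#   R_E = sqrt(4 G M / c^2 * D_l (D_s - D_l) / D_s),   t_E = R_E / v_perp.
import numpy as np

G, c = 6.674e-8, 2.998e10          # cgs
M_SUN, KPC = 1.989e33, 3.086e21

def einstein_time(M_lens, D_l, D_s, v_perp):
    """Einstein-radius crossing time in seconds."""
    r_E = np.sqrt(4 * G * M_lens / c**2 * D_l * (D_s - D_l) / D_s)
    return r_E / v_perp

D_s = 770.0 * KPC                  # M 31 distance (assumed)
D_l = 0.5 * D_s                    # lens placed halfway (assumed)
v = 2.0e7                          # 200 km/s transverse velocity (assumed)

for M in (0.01, 0.1, 0.5):         # candidate Macho masses in solar units
    tE_days = einstein_time(M * M_SUN, D_l, D_s, v) / 86400.0
    print(f"M = {M:4.2f} Msun -> t_E ~ {tE_days:6.1f} days")
```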
Moreover, a close insight into extragalactic dim stars will be possible for the first time. ## 3 Towards other galaxies A near-infrared survey with high spatial resolution (0.03”), combined with a large field of view (5’$`\times `$5’), will make it possible to probe more distant galaxies, so that a larger number of lines of sight can be explored. In particular, M 87 (at $`\sim `$ 17 Mpc) will be seen from space (with a constant seeing $`\sim 0.03`$”) under the same observing conditions as M 31 (at 0.7 Mpc) with ground-based observations (with a mean seeing $`\sim 1.5`$”). A space-based monitoring of galaxies at the distance of M 87 with an 8-meter telescope will require 5-min exposures. To first approximation, if M 31 were at the same distance as M 87, we could typically expect of order 50 microlensing events over a 6-month observing period. As studied by Gould (1995), additional events could be expected if intra-cluster MACHOs constitute a significant fraction of the dark matter of the Virgo cluster. An ambitious programme monitoring 10-20 galaxies would provide an unprecedented cartography of dark haloes. Real-time follow-up of such microlensing events with 10 mas resolution would achieve a high signal-to-noise ratio and help constrain the lens mass. ## 4 Conclusions The NGST, equipped with an Integral Field Spectrometer (R$`\sim `$200-2000) and with a multi-band filter mode on a Wide Field Imager characterised by a good spatial sampling, would allow for the first time the study of dim extragalactic stars, thanks to the magnification detected with ground-based observations. With a wide spectral range (from optical to near-infrared), the NGST would be uniquely able to break the degeneracy of the parameters of each lens-source system, as it could resolve the stars. Hence, the mass function of the lenses detected in M 31 could be determined, and a better understanding of the dark matter content of dark haloes achieved. Exotic events in M 31 could be studied with an unprecedented sensitivity and would further help to constrain the lens population. In addition, this would offer the unique opportunity to detect extragalactic planets (see Rhie et al. (1999) for the possible detection of the first planet in the Galactic Bulge). ###### Acknowledgements. During this work, A.-L. Melchior has been supported by a European contract ERBFMBICT972375 at QMW.
no-problem/0003/math0003112.html
ar5iv
text
# Untitled Document Matrix exponentials Let $`A`$ be a complex square matrix, put $$Q(z)=(z-a_1)^{\alpha _1+1}\cdots (z-a_k)^{\alpha _k+1},$$ with the $`a_p`$ distinct and the $`\alpha _p`$ nonnegative integers, assume $`Q(A)=0`$, set $$Q_p(z):=\frac{Q(z)}{(z-a_p)^{\alpha _p+1}},$$ let $`b_{p,n}`$ be the $`n`$-th Taylor coefficient of $`1/Q_p(z)`$ at $`z=a_p`$, let $`f`$ be an entire function, and let $`P(f(z),z)\in \mathbb{C}[z]`$ be $`Q(z)`$ times the singular part of $`f(z)/Q(z)`$. Theorem. We have * $`b_{p,n}=(-1)^n\sum _{|\beta |=n,\beta _p=0}\prod _{j\ne p}\binom{\alpha _j+\beta _j}{\alpha _j}\frac{1}{(a_p-a_j)^{\alpha _j+1+\beta _j}}`$ where $`\beta `$ runs over $`\mathbb{N}^k`$ and $`|\beta |:=\beta _1+\cdots +\beta _k`$, * $`P(f(z),z)=\sum _{p=1}^k\sum _{q=0}^{\alpha _p}\sum _{j=0}^q\frac{f^{(j)}(a_p)}{j!}b_{p,q-j}(z-a_p)^qQ_p(z),`$ * $`f(A)=P(f(z),A).`$ Let $`Q`$ be as above and $`d`$ its degree, and define $`g_j:\mathbb{R}\to \mathbb{C}`$ for $`0\le j<d`$ by $$P(e^{tz},z)=g_{d-1}(t)z^{d-1}+\cdots +g_1(t)z+g_0(t).$$ If $`u:\mathbb{R}\to \mathbb{C}`$ is smooth and satisfies $`Q(\frac{d}{dt})u=h`$ where $`h:\mathbb{R}\to \mathbb{C}`$ is continuous, then $$u(t)=\sum _{j=0}^{d-1}u^{(j)}(0)g_j(t)+\int _0^tg_{d-1}(t-y)h(y)\,dy.$$ Assume that each $`a_p`$ is an eigenvalue, let $`A=S+N`$ ($`S`$ semisimple, $`N`$ nilpotent) be the Jordan decomposition of $`A`$, and $`A=\sum _pA_p=\sum _p(a_pE_p+N_p)`$ be its spectral decomposition. Then $$E_p=\sum _{q=0}^{\alpha _p}b_{p,q}(A-a_p)^qQ_p(A),\qquad N_p=\sum _{q=0}^{\alpha _p-1}b_{p,q}(A-a_p)^{q+1}Q_p(A).$$ Pierre-Yves Gaillard, Institut Élie Cartan, Université Nancy I, BP 239, 54506 Vandœuvre, France, gaillard@iecn.u-nancy.fr
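A numerical check of part (c) of the theorem, in the simplest case where all $`\alpha _p=0`$ (distinct roots of $`Q`$), where $`P(f(z),z)`$ reduces to the Lagrange interpolation polynomial through the points $`(a_p,f(a_p))`$; a minimal sketch assuming numpy/scipy, with an arbitrary illustrative matrix:

```python
# Check f(A) = P(f(z), A) for distinct eigenvalues (all alpha_p = 0):
#   f(A) = sum_p f(a_p) * prod_{j != p} (A - a_j I) / (a_p - a_j).
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, 3.0]])          # eigenvalues 1, 2, so Q(z) = (z-1)(z-2)
a = np.linalg.eigvals(A)

def f_of_A(f, A, eigs):
    n = A.shape[0]
    out = np.zeros_like(A, dtype=complex)
    for p, ap in enumerate(eigs):
        term = np.eye(n, dtype=complex) * f(ap)
        for j, aj in enumerate(eigs):
            if j != p:
                term = term @ (A - aj * np.eye(n)) / (ap - aj)
        out += term
    return out

t = 0.7
lhs = f_of_A(lambda z: np.exp(t * z), A, a)   # P(e^{tz}, A)
rhs = expm(t * A)                             # reference matrix exponential
print(np.allclose(lhs, rhs))                  # True
```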
no-problem/0003/physics0003064.html
ar5iv
text
# 1 Problem ## 1 Problem Show that a bounded source cannot produce a unipolar electromagnetic pulse. Equivalently, show that there are no three-dimensional electromagnetic solitons in vacuum. ## 2 Solution This problem was first discussed by Stokes over 150 years ago , who noted that three-dimensional sound waves from a bounded source cannot be unipolar. While his argument applies to electromagnetic waves as well, this is little recognized in the literature. One-dimensional electromagnetic (and sound) waves can be unipolar. A plane wave can have any pulseform, but, strictly speaking, a plane wave can only be generated by an unbounded source. A well-known pedagogic example given in sec. II-20 of is based on this case. Hence, this old problem still has a new flavor. Here, we offer two new solutions, followed by discussion. ### 2.1 Via Conservation of Energy and Fourier Analysis If the source is bounded, it appears pointlike when viewed from a great enough distance. Then, energy conservation requires that the pulse energy density fall off as $`1/r^2`$, for distance $`r`$ measured from some characteristic point within the source. Since the energy density is proportional to the square of the electromagnetic fields, we have the well-known result that the radiation fields from a bounded source fall off as $`1/r`$ far from the source. This is in contrast to the static fields, which must fall off at least as quickly as $`1/r^2`$ far from a bounded source. Now, consider the possibility of a unipolar pulse, i.e., one for which the electric field components $`E_i(𝐫,t)`$ have only one sign. At a fixed point $`𝐫`$, the time integral of at least one component of such a pulse would be nonzero: $$\int E_i(𝐫,t)\,dt\ne 0.$$ (1) Then a Fourier analysis of this component, $$E_i(𝐫,\omega )=\int E_i(𝐫,t)e^{i\omega t}\,dt,$$ (2) would have a nonzero value at zero frequency, $`E_i(𝐫,\omega =0)\ne 0`$. However, the quantity $`E_i(𝐫,\omega =0)`$ would then be a static solution to Maxwell's equations, and so must fall off like $`1/r^2`$. This contradicts the hypothesis that $`E_i(𝐫,t)`$ represents an electromagnetic pulse from a bounded source, which must fall off as $`1/r`$. Thus, a bounded source cannot emit a unipolar electromagnetic pulse. ### 2.2 Via the Fields of an Accelerated Charge The electric field vector radiated by a charge is opposite to the transverse component of the acceleration. In the case of a bounded source, the accelerations of the charges cannot impart a nonzero average velocity to any charge; otherwise the source would not remain bounded. Hence, any accelerations must include both positive and negative components such that their time integral vanishes. Therefore, the radiated electric field must also include both positive and negative components. Again, a bounded source cannot emit a unipolar electromagnetic pulse. A variant of the above argument using the Liénard-Wiechert fields has been given by Bessonov , who also considers the case of a bipolar pulse consisting of a pair of well separated, opposite-sign unipolar pulses. ### 2.3 Three-Dimensional Unipolar Radiation from an Unbounded Source In the case of two nonrelativistic, unbound charged particles that interact only via the Coulomb force $`q_1q_2/r`$, the component of the acceleration of one of the charges along the axis of its hyperbolic trajectory always has the same sign. Hence, the radiated electric field component along that axis is unipolar, and a Fourier analysis of the field has a zero-frequency component (see sec. 70 of ).
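The zero-frequency criterion of sec. 2.1 can be illustrated numerically: a one-signed pulse has a nonzero time integral, while a field proportional to the second time derivative of bounded motion integrates to zero. A toy sketch; the Gaussian profiles are arbitrary illustrative choices, not the fields of any specific source:

```python
# omega = 0 Fourier component equals the time integral of the field.
import numpy as np

t = np.linspace(-50.0, 50.0, 20001)
dt = t[1] - t[0]

unipolar = np.exp(-t**2)                        # one-signed pulse candidate
g = np.exp(-t**2)                               # bounded source motion
radiated = np.gradient(np.gradient(g, dt), dt)  # ~ d^2 g / dt^2

print(f"unipolar:  E(omega=0) = {np.trapz(unipolar, t):.3f}")   # sqrt(pi), nonzero
print(f"radiated:  E(omega=0) = {np.trapz(radiated, t):.2e}")   # ~ 0
```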
Thus, as the Coulomb-scattering example shows, a three-dimensional unipolar pulse can be emitted by a system whose motion is unbounded. ### 2.4 Unipolarlike Pulses It is possible for a bounded system to produce an electromagnetic pulse that consists almost entirely of a single central pulse of one sign. But according to the argument above, this pulse must include long tails of the opposite sign so that the time integral of the fields vanishes at any point far from the source. This behavior has been observed in several recent reports on subcycle electromagnetic pulses . ## 3 Can the Far-Zone Radiation From a Bounded Source Transfer Energy to a Charged Particle? The present considerations can be extended to comment on this topical question. If a unipolar electromagnetic pulse existed far from its bounded source, the corresponding vector potential A would have different values at asymptotically early and late times. Then, as argued by Lai , an interaction with a charge $`e`$ and mechanical momentum P that conserves the canonical momentum $`𝐏+e𝐀/c`$ could produce a net change in the magnitude of the mechanical momentum, i.e., provide a transfer of energy between the particle and the pulse. Such far-zone unipolar particle acceleration is desirable, but is not consistent with Maxwell's equations. Near-zone particle acceleration by time-dependent electromagnetic fields can be accomplished by passing a particle through a bounded field region, such as an rf cavity, during a short interval when the fields have only one sign. While the electromagnetic fields are not unipolar in this case, their interaction with the charged particle is effectively so. A variant on such considerations is the fact that a bounded electrostatic field cannot exchange net energy with a charged particle that begins and ends its history at large distances from the source. So-called electrostatic particle accelerators all contain a nonelectrostatic component that can move the charge to a region of nonzero electric potential and leave it there with effectively zero electrical and mechanical energy. Then, the charge can extract energy from the field as it is expelled to large distances. Returning to the case of far-zone radiation, we give a brief argument based on the well-known relation for the time rate of change of energy $`U`$ of a particle of charge $`e`$ and velocity v in an electromagnetic field E (see, for example, eq. (17.7) of ): $$\frac{dU}{dt}=e\,𝐄\cdot 𝐯$$ (3) This expression holds for particles of any velocity less than the speed of light. Of course, the magnetic field cannot change the particle's energy. In the approximation that the particle's velocity is essentially unchanged by its interaction with the electromagnetic field, we have $$\mathrm{\Delta }U=e\int 𝐯\cdot 𝐄\,dt.$$ (4) To perform the integral, we can use Feynman's expression for the far-zone radiated electric field of an accelerated charge (sec. I-34 of ): $$𝐄_{\mathrm{rad}}=-\frac{e}{c^2}\frac{d^2\widehat{𝐧}}{dt^2},$$ (5) in Gaussian units, where $`\widehat{𝐧}`$ is a unit vector from the retarded position of the source charge to the observer. Then, $`\int 𝐄_{\mathrm{rad}}\,dt`$ is the difference between $`d\widehat{𝐧}/dt`$ at early and late times. Since $`d\widehat{𝐧}/dt`$ is the angular velocity of the relative motion between the source and charge, this vanishes at both early and late times as the moving charge is then arbitrarily far from the bounded source. Hence, in the constant-velocity approximation, the far-zone fields from a bounded source cannot transfer energy to a particle.
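The vanishing of the time-integrated radiated field can also be checked numerically: for a charge passing a bounded source on a straight line, the angular velocity of the line of sight falls off rapidly at early and late times. The geometry below (impact parameter, speed) is an arbitrary illustration:

```python
# |d n_hat / dt| for a charge on a straight line past a source at the origin;
# it decays like 1/t^2, so its early/late difference -> 0 and so does
# the time integral of E_rad in the constant-velocity approximation.
import numpy as np

b, v = 1.0, 0.5                         # impact parameter, speed (arbitrary units)

def ndot(t, eps=1e-6):
    """Numerical time derivative of the unit vector from source to charge."""
    def n_hat(s):
        r = np.array([v * s, b])
        return r / np.linalg.norm(r)
    return (n_hat(t + eps) - n_hat(t - eps)) / (2 * eps)

for t in (0.0, 10.0, 100.0, 1000.0):
    print(f"t = {t:7.1f}   |dn/dt| = {np.linalg.norm(ndot(t)):.3e}")
```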
While the constant-velocity approximation is not necessarily good for a particle whose initial velocity is nonrelativistic, it is an excellent approximation for a relativistic particle. It is possible for the far-zone fields of a bounded source (in vacuum and far from any other material) to transfer energy to a nonrelativistic charge, as recently observed , but this energy transfer becomes increasingly inefficient as the particle’s velocity approaches that of light. The above considerations have ignored energy transfers due to scattering, which can be significant if the energy of a photon of the electromagnetic wave is large compared to the total energy of the charged particle in the center of mass frame. In this case, a quantum description is more appropriate. In the regime in which the energy transfer from a single scattering process is small, the classical idea of a “radiation pressure” associated with the Poynting vector $`𝐒=(c/4\pi )𝐄\times 𝐇`$ can be formalized by including the radiation reaction in the equation of motion of the charged particle. See, for example, eq. (75.10) of . However, this effect is always very small in the classical regime.
no-problem/0003/physics0003042.html
ar5iv
text
# Electron correlation in C4N+2 carbon rings: aromatic vs. dimerized structures ## Abstract The electronic structure of C<sub>4N+2</sub> carbon rings exhibits competing many-body effects of Hückel aromaticity, second-order Jahn-Teller and Peierls instability at large sizes. This leads to possible ground state structures with aromatic, bond angle or bond length alternated geometry. Highly accurate quantum Monte Carlo results indicate the existence of a crossover between C<sub>10</sub> and C<sub>14</sub> from bond angle to bond length alternation. The aromatic isomer is always a transition state. The driving mechanism is the second-order Jahn-Teller effect which keeps the gap open at all sizes. The discovery of carbon fullerenes and nanotubes has opened a new materials research area with a vast potential for practical applications. Unfortunately, our understanding of the rich variety of structural and electronic properties of carbon nanostructures is far from satisfactory. For example, experiments indicate that quasi-one-dimensional structures such as chains and rings are among the primary precursors in the formation process of fullerenes and nanotubes. However, our insights into their properties and behavior are incomplete due to the complicated many-body effects involved. In particular, recent studies have demonstrated a profound impact of the electron correlation on stability and other properties of such all-carbon structures. An important example of such nanostructures is the system of planar monocyclic carbon rings C<sub>n</sub> with $`n=4N+2`$, where $`N`$ is a natural number. These closed-shell molecules manifest an intriguing competition between conjugated aromaticity, second-order Jahn-Teller and, at large sizes, Peierls instability effects. Consequently, this leads to different stabilization mechanisms that tend to favor one of the following structures: a cumulenic ring (A), with full D<sub>nh</sub> symmetry, with all bond angles and bond lengths equal; or either of two distorted ring structures, of lower D$`_{\frac{n}{2}h}`$ symmetry, with alternating bond angles (B) or bond lengths (C). Further structural details are given in Fig. 1. Accurate studies for the smallest sizes (C<sub>6</sub> and C<sub>10</sub>) find isomer B to be the most stable. However, for larger sizes the results from commonly used methods are contradictory and available experiments are unable to clearly specify the lowest energy structures. In order to identify the most stable isomers and to elucidate the impact of many-body effects, we carried out an extensive study of electronic structure and geometries of C<sub>4N+2</sub> rings of intermediate sizes up to 22 atoms (with some methods up to 90 atoms). We employed a number of electronic structure methods including the highly accurate quantum Monte Carlo (QMC) method which has been proven very effective in investigations of C<sub>20</sub> and larger carbon clusters , as confirmed also by an independent study by Murphy and Friesner . Our QMC results reveal that the C<sub>4N+2</sub> ground state structures have alternated geometries at all sizes while cumulenic isomer A is a structural transition state. The results also provide valuable insights into the shortcomings of the density functional approaches such as inaccurate balance between exchange and correlation in commonly used functionals. In addition, the letter presents a first evaluation of interatomic forces in large systems within the QMC framework. 
According to the Hückel rule, the $`4N+2`$ stoichiometry implies the existence of a double conjugated $`\pi `$-electron system (in- and out-of-plane). Combined with the ring planarity, this suggests a strong aromatic stabilization in favor of isomer A. Although the highest occupied and the lowest unoccupied molecular orbitals (HOMO and LUMO) are separated by a gap of several eV, a double degeneracy in the HOMO and LUMO states opens the possibility for a second-order Jahn-Teller distortion , resulting in either the cumulenic B or the acetylenic C structure. Such a distortion lowers the symmetry and splits the degeneracy by a fraction of an eV, with an overall energy gain. Moreover, as $`N\to \infty `$, the picture is complicated further by the fact that the system becomes a semimetallic polymer with two half-filled $`\pi `$ bands. As first pointed out by Peierls , such a one-dimensional system is intrinsically unstable and undergoes a spontaneous distortion which lowers the symmetry. The symmetry breaking enables the formation of a gap, in analogy to the elusive case of trans-polyacetylene . It is very instructive to see how the commonly used computational methods deal with such many-body effects. Density functional theory (DFT) methods tend to favor a “homogenized” electronic structure with delocalized electrons. In fact, for sizes larger than C<sub>10</sub>, there is no indication of any stable alternation up to the largest sizes we have investigated (C<sub>90</sub>). Calculations performed within the local density approximation (LDA) and generalized gradient approximations (GGA, with BPW91 functional) consistently converge to the aromatic structure A, in agreement with other studies . Only by extrapolation to the infinite-chain limit do Bylaska, Weare et al. claim to observe a very small, yet stable, bond alternation within LDA. A very different picture arises from the Hartree-Fock (HF) method, which shows a pronounced dimerization for C<sub>10</sub> and larger. This agrees with the HF tendency to render structures less homogeneous in order to increase the impact of exchange effects. We also verified that using GGA functionals with an admixture of the exact HF exchange (B3PW91) recovers qualitatively the HF results for large sizes ($`>`$C<sub>46</sub>), as already observed by others . Obviously, this problem calls for much more accurate treatments. High-level post-HF methods, such as multi-configuration self-consistent field (MCSCF) and coupled cluster (CC), indeed provide answers for the smallest ring sizes (C<sub>6</sub> and C<sub>10</sub> ). In particular, Martin and Taylor have carried out a detailed CC study demonstrating that both C<sub>6</sub> and C<sub>10</sub> have angle alternated ground state structures, although for C<sub>10</sub> the energy of the aromatic isomer A is found to be extremely close ($`\sim 1`$ kcal/mol). In addition, we have performed limited CCSD calculations of C<sub>14</sub> and have found the dimerized isomer to be stable by $`\sim 6`$ kcal/mol. Unfortunately, these methods are impractical for larger cases or more extensive basis sets . The quantum Monte Carlo (QMC) method was used to overcome these limitations. This method possesses the unique ability to describe the electron correlation explicitly and its favorable scaling in the number of particles enables us to apply it to larger systems .
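The gap physics described above (a Hückel gap at $`4N+2`$ filling that closes as the ring grows, reopened by bond alternation) can be illustrated with a toy one-orbital tight-binding ring. This is a sketch only, not the interaction used in our calculations, which involve the full double $`\pi `$ system and explicit correlation:

```python
# Hueckel ring with alternating hoppings t(1 +/- delta): the gap at the
# Fermi level shrinks with ring size for delta = 0 and is reopened by
# bond alternation (delta > 0).  Illustrative one-orbital model.
import numpy as np

def huckel_gap(n_sites, delta, t=1.0):
    """HOMO-LUMO gap of a half-filled ring with alternating bonds."""
    H = np.zeros((n_sites, n_sites))
    for i in range(n_sites):
        hop = t * (1.0 + delta * (-1) ** i)     # alternating bond strengths
        H[i, (i + 1) % n_sites] = H[(i + 1) % n_sites, i] = -hop
    e = np.sort(np.linalg.eigvalsh(H))
    n_occ = n_sites // 2                        # one pi electron per site
    return e[n_occ] - e[n_occ - 1]

for n in (6, 10, 14, 18):
    print(f"C{n}:  gap(delta=0) = {huckel_gap(n, 0.0):.3f}   "
          f"gap(delta=0.07) = {huckel_gap(n, 0.07):.3f}")
```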
In the variational Monte Carlo (VMC) method we construct an optimized correlated many-body trial wavefunction $`\mathrm{\Psi }_T`$, given by the product of a linear combination of Slater determinants and a correlation factor $$\mathrm{\Psi }_T=\sum _nd_nD_n^{\mathrm{\uparrow }}\{\phi _\alpha \}D_n^{\mathrm{\downarrow }}\{\phi _\beta \}\mathrm{exp}\left[\sum _{I,i<j}u(r_{iI},r_{jI},r_{ij})\right]$$ (1) where $`\phi `$ are one-particle orbitals, $`i,j`$ denote the electrons, $`I`$ the ions and $`r_{iI},r_{jI},r_{ij}`$ are the corresponding distances. The correlation part, $`u`$, includes two-body (electron-electron) and three-body (electron-electron-ion) correlation terms and its expansion coefficients are optimized variationally. Most of the variational bias is subsequently removed by the diffusion Monte Carlo (DMC) method, based on the action of the projection operator $`\mathrm{exp}(-\tau H)`$; in the limit $`\tau \to \infty `$, this projector recovers the lowest eigenstate from an arbitrary trial function of the same symmetry and nonzero overlap. The fermion antisymmetry (or sign) problem is circumvented by the fixed-node approximation. More details about the method are given elsewhere . DFT, HF and MCSCF calculations have been carried out using standard quantum chemistry packages . All calculations employed an accurate basis set, consisting of $`10s11p2d`$ Gaussians contracted to $`3s3p2d`$, and smooth effective core potentials to replace the chemically inert core electrons. The geometries of smaller rings with 6 and 10 atoms have already been established from previous calculations . We have verified that the most reliable published structural parameters agree very well (within $`\sim 0.002`$ Å and $`\sim 1^{\circ }`$) with our own GGA values. However, since the dimerized isomer C is unstable within DFT, we followed a different strategy. We began from HF geometries, which show that the degree of bond length alternation saturates at $`𝒜_r\simeq 14\%`$ (Fig. 2). In order to correct for the HF bias favoring acetylenic structures, we performed limited MCSCF calculations (see below) for C<sub>10</sub>, C<sub>14</sub>, and C<sub>18</sub>. The electron correlation has a profound effect on the geometry, to the extent of causing the dimerized isomer to be unstable for C<sub>10</sub>, while for C<sub>14</sub> it decreases the dimerization to $`𝒜_r\simeq 10\%`$. Clearly the limited MCSCF for C<sub>14</sub> and C<sub>18</sub> provides rather poor geometry improvement, although one expects a larger correction as more correlation energy is recovered. In order to verify this and to estimate the correct degree of dimerization for C<sub>14</sub>, we carried out the evaluation of the Born-Oppenheimer forces by a finite-difference scheme using correlated sampling, in the VMC method . The computation of interatomic forces is a new development in QMC methodology and, to our knowledge, this is the first application in this range of system sizes. We probed the tangential C-C stretching/shortening which leads to a change in the degree of dimerization, $`𝒜_r`$. For $`𝒜_r=7\%`$, our force estimate is $`F=dE/d\theta =0.010(2)`$ a.u. (and a second derivative of $`H=0.30(1)`$ a.u.), suggesting proximity to the minimum. Moreover, at $`𝒜_r=10.5\%`$ we find a force of opposite sign: $`F=-0.013(3)`$ a.u. ($`H=0.33(1)`$ a.u.). For C<sub>18</sub>, we have instead performed two QMC single point calculations at $`𝒜_r=7\%,14\%`$ and found the first energy to be lower by $`\mathrm{\Delta }E\simeq 12`$ kcal/mol.
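As background for readers unfamiliar with the method, the VMC principle (Metropolis sampling of $`|\mathrm{\Psi }_T|^2`$ and averaging of the local energy, the basis of both the energies and the correlated-sampling forces above) can be shown on a toy problem. A minimal sketch for one particle in a harmonic well with a Gaussian trial function; nothing here is specific to the carbon-ring calculations:

```python
# Toy VMC: sample |psi_T|^2 = exp(-2 alpha x^2) by Metropolis and average
# the local energy E_L = -psi''/(2 psi) + x^2/2 = alpha + x^2 (1/2 - 2 alpha^2).
import numpy as np

rng = np.random.default_rng(0)

def vmc_energy(alpha, n_steps=200_000, step=1.0):
    x, e_sum, n_acc = 0.0, 0.0, 0
    for _ in range(n_steps):
        x_new = x + step * rng.uniform(-1, 1)
        if rng.uniform() < np.exp(-2 * alpha * (x_new**2 - x**2)):
            x, n_acc = x_new, n_acc + 1          # Metropolis acceptance
        e_sum += alpha + x**2 * (0.5 - 2 * alpha**2)
    return e_sum / n_steps, n_acc / n_steps

for alpha in (0.3, 0.5, 0.7):
    e, acc = vmc_energy(alpha)
    print(f"alpha = {alpha:.1f}:  <E> = {e:.4f}  (acceptance {acc:.2f})")
# The variational minimum <E> = 0.5 occurs at alpha = 0.5 (exact ground state).
```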
Finally, we assumed $`𝒜_r=7\%`$ and $`\overline{r}=1.286`$ Å as our best estimate for calculations of the acetylenic isomer with $`n>10`$. The crucial ingredient for very accurate QMC calculations is a trial function with a small fixed-node error. The quality of the one-particle orbitals is of prime importance for decreasing this error. While HF or DFT orbitals are commonly used for construction of Slater determinants, our recent projects have demonstrated that natural orbitals from limited correlated calculations (e.g., MCSCF) lead to more consistent results. Inclusion of the electron correlation in the method used to generate the orbitals is indeed very relevant for obtaining improved fermion nodes, especially for systems which exhibit strong non-dynamical correlation effects . Extensive tests confirmed that orbitals from MCSCF (with an active space consisting of 4 occupied and 4 virtual orbitals) yield the lowest energies and so they were used in all of our calculations. In addition, the inclusion of the most important excited configurations in $`\mathrm{\Psi }_T`$ (about 20–30 determinants) provided further significant improvement of the total energies. In particular, the weights of single excitations were surprisingly large for the alternated geometries and comparable to the largest weights of configurations with double excitations. A more detailed analysis of the multi-reference nature of the wavefunction in these systems will be given elsewhere. Equipped with such high quality trial functions we have carried out QMC calculations from C<sub>6</sub> to C<sub>18</sub>. A plot of the energy differences, with comparison to other methods, is shown in Fig. 3. For the very compact C<sub>6</sub> ring, where the overlap between $`\pi `$ in-plane orbitals is large, as observed by Raghavachari et al. , the angle alternated isomer B is the most stable. The aromatic structure A is instead a transition state leading to angle alternation (B<sub>1u</sub> mode), while the dimerized isomer C is unstable in all methods. C<sub>10</sub> is the case which was studied extensively in the past. Our DMC results agree with the calculations of Martin and Taylor . We conclude that the angle alternated isomer is still the lowest energy structure, albeit extremely close to the cumulenic A geometry, with an energy difference of $`\sim 1`$ kcal/mol. Indeed, the stabilization by aromaticity is almost as strong as the effect of the second-order Jahn-Teller distortion which is responsible for the alternation pattern. The aromatic isomer remains a transition state, as it is for C<sub>6</sub>, although in this case the energy surface is extremely flat. The C<sub>10</sub> acetylenic isomer appears unstable in DMC, which implies that our older calculations and also a more recent all-electron study , based on a single determinant $`\mathrm{\Psi }_T`$ with HF orbitals, were not accurate enough. Perhaps our most interesting results come from the C<sub>14</sub> and C<sub>18</sub> isomers. The angle alternated structures become unstable since the in-plane orbital overlap is smaller due to the increased ring radius. The trends from HF, MCSCF and QMC (Fig. 3) are clearly in favor of dimerized geometries, although there is an indication that the HF bias for the Jahn-Teller distortion is much reduced as we recover more of the correlation energy. Nevertheless, since the fixed-node DMC method typically obtains $`\sim 95\%`$ of the correlation energy, we argue that the margins for possible errors are very small.
This is in sharp contrast with the density functional results, which indicate only the aromatic isomer A to be stable. It seems that DFT methods “overshoot” the effect of correlation at the expense of exchange, resulting in a qualitative disagreement with the QMC results. The data from HF and QMC calculations enable us to analyze the HF and correlation energy components separately as a function of ring size. For $`n\ge 10`$, the HF energy difference between the aromatic (A) and dimerized (C, $`𝒜_r=7\%`$) isomers can be approximated by $$E_{HF}^𝐀-E_{HF}^𝐂\approx 5.14\,n-45.4\text{ [kcal/mol]}.$$ (2) For the correlation energy such an extrapolation is less certain, as we lack data for very large sizes. The following formula reproduced our C<sub>10</sub>–C<sub>22</sub> data within the error bars obtained (the value for C<sub>22</sub> was based on an adjusted single determinant DMC calculation) $$E_{corr}^𝐀-E_{corr}^𝐂\approx -3.57\,n+27.4\text{ [kcal/mol]}$$ (3) while for larger sizes it is probably somewhat less accurate. Adding the two contributions gives $`E^𝐀-E^𝐂\approx 1.57\,n-18.0`$ kcal/mol, which changes sign near $`n\approx 11`$, consistent with the crossover observed between C<sub>10</sub> and C<sub>14</sub>. Nevertheless, the significant difference between the coefficients enables us to recognize the dominance of the Coulomb and exchange contributions for large $`n`$. This means that the competition between the stabilization mechanisms, which determine the geometry, gap, and key electronic structure features, is effectively decided at intermediate ring sizes. On the basis of these results we suggest that the ground state isomers are alternated at all sizes. In conclusion, we present a study of C<sub>4N+2</sub> carbon rings, for which highly accurate fixed-node diffusion Monte Carlo results show that distorted isomers are always preferred, with a stability crossover from bond angle to bond length alternation between C<sub>10</sub> and C<sub>14</sub>. The fully-symmetric aromatic isomer is instead a structural transition state connecting equivalent alternated geometries shifted by one site along the ring, although for C<sub>10</sub> the angle alternated isomer is below the aromatic one by only $`\sim 1`$ kcal/mol. The intermediate size rings (above C<sub>10</sub>) show a clear trend of dimerization, indicating that the driving stabilization mechanism is the second-order Jahn-Teller effect. The corresponding HOMO-LUMO gap persists in large rings and thus overshadows the onset of the Peierls regime. The calculations also provide an interesting insight into the deficiencies of density functional approaches, such as the imbalance in approximating the exchange and correlation energy contributions. The competition of complicated many-body effects, from aromatic conjugation to symmetry-breaking, provides an extremely sensitive probe of the treatment of electron correlation in different theoretical approaches. Our results demonstrate the potential of quantum Monte Carlo not only for the evaluation of accurate energy differences, but also for other properties, such as equilibrium structures, which can be obtained by new developments in the QMC methodology. We are grateful to J. Weare and E. Bylaska for suggesting this problem and for discussions. L.M. would like to thank M. Head-Gordon and R.J. Bartlett for stimulating discussions. This research was supported by the DOE ASCI initiative at UIUC, by the State of Illinois and by NCSA. The calculations were carried out on the SPP-2000 Exemplar, Origin2000, NT Supercluster at NCSA, and the T3E at PSC. We also would like to acknowledge R. Reddy at PSC for technical help.
no-problem/0003/math0003034.html
ar5iv
text
# Invertible Knot Concordances and Prime Knots ## 1. Introduction Kirby and Lickorish showed that every knot in $`S^3`$ is concordant to a prime knot; equivalently, every concordance class contains a prime knot. Generalizations appear in . Sumners introduced the notion of invertible concordance. We prove here that Kirby and Lickorish's result can be strengthened: ###### Theorem 1.1. Every knot in $`S^3`$ is invertibly concordant to a prime knot. Corresponding to invertible concordance there is a group, the double concordance group, studied in . A consequence of our work is that every double concordance class contains a prime knot. ## 2. Definitions and basic results In all that follows manifolds and maps will be smooth and orientable. Let $`I`$ denote the interval $`[0,1]`$. A *link* of $`n`$ components, $`L`$, is a smooth pair $`(S^3,l)`$ where $`l`$ is a smooth oriented submanifold of $`S^3`$ diffeomorphic to $`n`$ disjoint copies of $`S^1`$. A *knot* $`K`$ is a link of one component. Two links, $`L_1`$ and $`L_2`$, each of $`n`$ components, are called *concordant* if there exists a proper smooth oriented submanifold $`w`$ of $`S^3\times I`$, with $`\partial w=(l_1\times 0)\cup (-l_2\times 1)`$ and $`w`$ diffeomorphic to $`n`$ disjoint copies of $`S^1\times I`$. Let $`(W;L_1,L_2)`$ denote $`(S^3\times I,w)`$, the concordance between $`L_1`$ and $`L_2`$. If $`(W_1;L_1,L_2)`$ and $`(W_2;L_2,L_3)`$ are two concordances with a common boundary component (oriented oppositely) we can then paste $`W_2`$ to $`W_1`$ along $`L_2`$ to get $`(W_1\cup W_2;L_1,L_3)`$. A concordance $`(W;L_1,L_2)`$ is said to be *invertible at* $`L_2`$ if there is a concordance $`(W^{\prime };L_2,L_1)`$ such that $`(W\cup W^{\prime };L_1,L_1)`$ is diffeomorphic to $`(L_1\times I;L_1,L_1)`$, the product concordance of $`L_1`$. Given the above situation, we say that $`L_1`$ *is invertibly concordant to* $`L_2`$, and $`L_2`$ *splits* $`L_1\times I`$. In the same manner, concordance and invertible concordance can be defined for knots and links in the solid torus $`S^1\times D^2`$. A submanifold $`N`$ with boundary is said to be *proper* in a manifold $`M`$ if $`\partial N=N\cap \partial M`$. Let $`B^3`$ denote the standard closed 3-ball $`\{x\in \mathbb{R}^3:|x|\le 1\}`$. An *$`n`$-tangle* $`T`$ is a smooth pair $`(B^3,\lambda )`$ where $`\lambda `$ is a proper embedding of $`n`$ disjoint copies of the interval $`I`$ into $`B^3`$. Throughout this paper, an embedding means either the map or the image. Let $`U_n`$ denote a trivial $`n`$-tangle, *i.e.*, $`U_n`$ consists of $`n`$ unlinked unknotted arcs. For example, $`U_1`$ is the unknotted standard ball pair $`(B^3,I)`$. For $`n=2`$, see Figure 1. Concordances and invertible concordances between tangles can be defined in a similar way as for links. However, the boundary of the 3-ball $`B^3`$ is required to be fixed at each stage of the concordance. More precisely, let $`I_1,\mathrm{},I_n`$ denote $`n`$ disjoint copies of the interval $`I`$. Two $`n`$-tangles, $`T_0=(B^3,\lambda _0)`$ and $`T_1=(B^3,\lambda _1)`$, are *concordant* if there is a proper smooth embedding $`\tau `$ of $`(\bigcup _{i=1}^nI_i)\times I`$ into $`B^3\times I`$, with $`\tau (\bigcup _{i=1}^nI_i\times ϵ)=\lambda _ϵ`$ $`(ϵ=0,1)`$ and $`\tau (ϵ_i\times I)=\tau (ϵ_i\times 0)\times I`$ for each $`i=1,\mathrm{},n`$, and $`ϵ_i=0,1`$ in $`I_i`$. Let $`(V;T_1,T_2)`$ denote $`(B^3\times I,\tau )`$, the concordance between $`T_1`$ and $`T_2`$. If $`(V;T_1,T_2)`$ and $`(V^{\prime };T_2,T_3)`$ are two concordances, we can then paste $`V^{\prime }`$ to $`V`$ along $`T_2`$ to get a concordance $`(V\cup V^{\prime };T_1,T_3)`$.
A concordance $`(V;T_1,T_2)`$ is *invertible at* $`T_2`$ if there is a concordance $`(V^{\prime };T_2,T_1)`$ such that $`(V\cup V^{\prime };T_1,T_1)`$ is diffeomorphic to $`(T_1\times I;T_1,T_1)`$ by a diffeomorphism $`\phi `$ with $`\phi (\tau )=\lambda _1\times I`$, where $`\tau `$ is the embedding of $`n`$ disjoint copies of $`I\times I`$ into $`B^3\times I`$ defining the concordance $`(V\cup V^{\prime };T_1,T_1)`$ and $`\lambda _1`$ is the embedding of $`n`$ disjoint copies of $`I`$ into $`B^3`$ defining the tangle $`T_1`$. A knot is called *doubly null concordant* if it is the slice of some unknotted 2-sphere in $`S^4`$. Two knots $`K_1`$ and $`K_2`$ are said to be *doubly concordant* if $`K_1\mathrm{\#}J_1`$ is isotopic to $`K_2\mathrm{\#}J_2`$ for some doubly null concordant knots $`J_1`$ and $`J_2`$. The following theorem is due to Zeeman. ###### Theorem 2.1. Every 1-twist-spun knot is unknotted. Let $`-K`$ denote the knot obtained by taking the image of $`K`$, with reversed orientation, under a reflection of $`S^3`$. The following fact was first proved by Stallings and now follows readily from 2.1. (One cross-section of the 1-twist-spin of $`K`$ yields $`K\mathrm{\#}(-K)`$. For details, see .) ###### Corollary 2.2. $`K\mathrm{\#}(-K)`$ is doubly null concordant for every knot $`K`$. ###### Corollary 2.3. If $`K_1\mathrm{\#}(-K_2)`$ is doubly null concordant then $`K_1`$ and $`K_2`$ are doubly concordant. ###### Proof. Take $`J_1=K_2\mathrm{\#}(-K_2)`$ and $`J_2=K_1\mathrm{\#}(-K_2)`$ in the definition of double concordance. ∎ ###### Remark 2.4. An easy exercise shows that knots $`K_1`$ and $`K_2`$ are concordant if and only if $`K_1\mathrm{\#}(-K_2)`$ is *slice*, *i.e.*, concordant to the unknot. This defines an equivalence relation. However, a definition of double concordance more along the lines of concordance is as of yet inaccessible. The difficulty is that it is unknown whether the following is true: If knots $`K`$ and $`K\mathrm{\#}J`$ are doubly null concordant, then $`J`$ is doubly null concordant. There is a relation between invertible concordance and double concordance. ###### Proposition 2.5. If $`K_1`$ is invertibly concordant to $`K_2`$ then $`K_1\mathrm{\#}(-K_2)`$ is doubly null concordant. ###### Proof. There is a copy of $`S^3\times I`$ in $`S^4`$ intersecting the 1-twist-spin of $`K_1`$ in $`K_1\mathrm{\#}(-K_1)\times I`$. Since $`K_2`$ splits $`K_1\times I`$, there is an invertible concordance from $`K_1\mathrm{\#}(-K_1)`$ to $`K_1\mathrm{\#}(-K_2)`$. Hence $`K_1\mathrm{\#}(-K_1)\times I`$ is split by $`K_1\mathrm{\#}(-K_2)`$ and the result follows. ∎ ## 3. Invertible concordances and prime knots Kirby and Lickorish proved that any knot in $`S^3`$ is concordant to a prime knot. Livingston gave a different proof of this result using satellite knots. In this section, we modify Livingston's approach to prove Theorem 1.1. Before proving this, we will set up some notation. By a *splitting*-$`S^2`$, $`S`$, for a knot $`K`$ (in $`S^3`$ or $`S^1\times D^2`$) we denote an embedded 2-sphere, $`S`$, intersecting $`K`$ in exactly 2 points. A knot in either $`S^3`$ or $`S^1\times D^2`$ is *prime* if for every splitting-$`S^2`$, $`S`$, $`S`$ bounds some 3-ball, $`B`$, with $`(B,B\cap K)`$ a trivial pair. The *winding number* of a knot $`K`$ in $`S^1\times D^2`$ is that element $`z`$ of $`H_1(S^1\times D^2;\mathbb{Z})`$ with $`z\ge 0`$ and $`K`$ representing $`z`$. The *wrapping number* of $`K`$ is the minimum number of intersections of $`K`$ with a disk $`D`$ in $`S^1\times D^2`$ with $`\partial D=`$ meridian.
If $`K_1`$ is a knot in $`S^1\times D^2`$ and $`K_2`$ is a knot in $`S^3`$, the $`K_1`$ *satellite of* $`K_2`$ is the knot in $`S^3`$ formed by mapping $`S^1\times D^2`$ into the regular neighborhood of $`K_2`$, $`N(K_2)`$, and considering the image of $`K_1`$ under this map. The only restriction on the map of $`S^1\times D^2`$ into $`N(K_2)`$ is that it maps a meridian to a meridian. In what follows we will consider $`S^1\times D^2`$ embedded in $`S^3`$ in a standard way. Hence any knot $`K`$ in $`S^1\times D^2`$ gives rise to a knot $`K^{\prime }`$ in $`S^3`$. The following theorem is due to Livingston. ###### Theorem 3.1. Let $`K_1`$ be a knot in $`S^1\times D^2`$ such that $`K_1^{\prime }`$ is the unknot in $`S^3`$. Then $`K_1`$ is prime in $`S^1\times D^2`$. Moreover, if $`K_1`$ has wrapping number $`>1`$ and $`K_2`$ is any nontrivial knot in $`S^3`$, then the $`K_1`$ satellite of $`K_2`$ is prime in $`S^3`$. This theorem suggests that, to prove our main theorem 1.1, we only need to find a knot $`K_1`$ in $`S^1\times D^2`$ with $`K_1^{\prime }`$ the unknot in $`S^3`$ and an invertible concordance between the core $`C`$ and the knot $`K_1`$ in $`S^1\times D^2`$. To do this, we observe that there is an invertible concordance between the tangles $`U_2`$ and $`T`$ in Figure 1. We remark here that Ruberman has used the tangle $`T`$ to prove that any closed orientable $`3`$-manifold is invertibly homology cobordant to a hyperbolic $`3`$-manifold. ###### Lemma 3.2. The 2-tangle $`T`$ in Figure 1(b) splits $`U_2\times I`$. ###### Proof. Let $`I_1`$ be a copy of the non-straight arc of $`T`$ in the 3-ball $`B^3`$ and let $`J_1`$ be a copy of the non-straight arc of $`U_2`$ in $`B^3`$ as shown in Figure 1(c). The closed curve $`J_1\cup I_1`$ bounds an obvious punctured torus $`F`$, which is the shaded region in Figure 1(c). Consider $`F`$ as the plumbing of two $`S^1\times I`$. Let $`c_i,i=1,2,`$ be the cores of the two $`S^1\times I`$ of $`F`$ and let $`\overline{c}_i,i=1,2,`$ be disjoint proper line segments in $`F`$ intersecting $`c_i`$ exactly once, respectively. See Figure 1(c). To construct an invertible concordance, we will construct two concordances and then paste them together. First, note that pinching $`I_1`$ along $`\overline{c}_1`$ transforms $`T`$ into the tangle $`U_2`$ with an unlinked unknotted circle inside, which is isotopic to the circle $`c_2`$. Now capping off this circle we have a concordance $`(V_1^{\prime };T,U_2)`$. The tangle $`B^3\times \frac{1}{4}`$ in Figure 2 represents a slice of this concordance before capping off the circle. In a similar way, pinching $`I_1`$ along $`\overline{c}_2`$ and capping off the unknot gives us another concordance $`(V_2;T,U_2)`$. Let $`(V_1;U_2,T)`$ denote the concordance $`(V_1^{\prime };T,U_2)`$ with reversed orientation. We can then paste $`V_1`$ to $`V_2`$ along $`T`$ to get a concordance $`(V_1\cup V_2;U_2,U_2)`$, which will be proved to be isotopic to the product concordance $`U_2\times I`$. A few cross-sections of the concordance $`V_1\cup V_2`$ are drawn in Figure 2. Let $`\tau `$ denote the embedding of two disjoint copies of $`I\times I`$ into $`V_1\cup V_2`$ as in the definition of concordance in Section 2. It is obvious from Figure 2 that there is a 3-manifold $`M`$ (the union of shaded regions) in $`V_1\cup V_2`$ bounded by $`\tau `$ and $`J_1\times I`$, whose intersection with $`U_2`$ at each end of the concordance is the arc $`J_1`$ and whose cross-section in the middle is the punctured torus $`F`$.
This 3-manifold $`M`$ can be considered as the union of three submanifolds: the product $`F\times I`$ and two 3-dimensional 2-handles $`D^2\times I`$. One $`D^2\times I`$ is glued to $`F\times I`$ along a regular neighborhood of $`c_2`$, which corresponds to capping off the circle isotopic to $`c_2`$ as we constructed the concordance $`V_1^{\prime }`$. The other $`D^2\times I`$ is glued along a regular neighborhood of $`c_1`$, which corresponds to capping off the circle isotopic to $`c_1`$ as we constructed the concordance $`V_2`$. Since $`F\times I`$ is a 3-dimensional handlebody with 2 handles with cores $`c_1`$ and $`c_2`$, $`M`$ is the manifold that results from adding two 2-handles to a genus $`2`$ solid handlebody along the cores of the 1-handles, in this case yielding $`B^3`$. Moreover, $`M`$ does not intersect the other straight arc of $`T`$ at any stage. Using this 3-ball $`M`$, we can isotop $`\tau `$ to $`J_1\times I`$ in a regular neighborhood of $`M`$, disturbing neither the other arc nor $`\partial B^3`$. This completes the proof. ∎ ###### Proposition 3.3. The knot $`K_1`$ in Figure 3(b) splits $`C\times I`$, where $`C`$ is the core in $`S^1\times D^2`$. ###### Proof. Consider $`S^1\times D^2`$ as the complement of the unknot $`m`$ in $`S^3`$. The knot $`K_1`$ in Figure 3(b) is isotopic to $`K_1`$ in Figure 3(a). It is obvious from Figure 3(a) that $`K_1\cup m`$ is the link in $`S^3`$ formed by replacing a trivial 2-tangle in the Hopf link with $`T`$ (dotted circle in Figure 3(a)). The proposition now follows from Lemma 3.2. ∎ Now we are ready to prove our main theorem 1.1. ###### Proof of Theorem 1.1. Let $`K`$ be a knot in $`S^3`$. If $`K`$ is trivial it is prime itself. Suppose now that $`K`$ is nontrivial. Let $`K^{\prime }`$ be the $`K_1`$ satellite of $`K`$, where $`K_1`$ is the knot in $`S^1\times D^2`$ in Figure 3(b). By Proposition 3.3, $`K^{\prime }`$ splits $`K\times I`$. We now only need to show that $`K^{\prime }`$ is prime. Since $`K_1^{\prime }`$ is the unknot in $`S^3`$, $`K_1`$ is prime by Theorem 3.1, and to complete the proof it remains to show that its wrapping number is $`>1`$. Its winding number is 1, hence its wrapping number is at least one. It is easy to see that the only prime knot in $`S^1\times D^2`$ with wrapping number 1 is the core. So, if $`K_1`$ had wrapping number 1, it would be isotopic to the core of $`S^1\times D^2`$. The $`-1`$ surgery on the meridian curve $`m`$ in $`S^3`$ would then leave $`K_1^{\prime }`$ unchanged, *i.e.*, unknotted. However, the knot in Figure 3(e), the result of $`K_1^{\prime }`$ after $`-1`$ surgery along $`m`$, is $`9_{46}`$ and hence knotted. Therefore the wrapping number is $`>1`$. ∎ ###### Corollary 3.4. Any knot is doubly concordant to a prime knot. ###### Remark 3.5. The $`K_1`$ satellite of $`K`$ has the same Alexander polynomial as that of $`K`$. Seifert proved that the Alexander polynomial of the $`K_1`$ satellite of $`K`$ is $`\mathrm{\Delta }_{K_1^{\prime }}(t)\mathrm{\Delta }_K(t^w)`$ if $`w`$ is the winding number of $`K_1`$ in $`S^1\times D^2`$. In our case, $`w`$ is $`1`$ and $`K_1^{\prime }`$ is the unknot. In , Livingston also proved that every 3-manifold is homology cobordant to an irreducible 3-manifold. Two 3-manifolds, $`M_1`$ and $`M_2`$, are *homology cobordant* if there is a 4-manifold $`W`$, with $`\partial W=M_1\cup (-M_2)`$ and the map $`H_{\ast }(M_i;\mathbb{Z})\to H_{\ast }(W;\mathbb{Z})`$ an isomorphism. Invertible homology cobordisms can be defined in the same way as in the knot concordance case. A 3-manifold $`M`$ is *irreducible* if every embedded $`S^2`$ in $`M`$ bounds an embedded $`B^3`$. ###### Remark 3.6.
In the spirit of , we have a simple proof that every 3-manifold is invertibly homology cobordant to an irreducible 3-manifold. To prove this, we only need to slightly modify the proof of Theorem 3.2 in by using $`K_1`$ in Figure 3(b). The $`-1`$ surgery on $`K_1`$ makes the meridian $`m`$ the knot $`9_{46}`$. This remark is also a corollary of Ruberman's Theorem 2.6 in , which reads: for every closed orientable 3-manifold $`N`$, there is a hyperbolic 3-manifold $`M`$ and an invertible homology cobordism from $`M`$ to $`N`$. The remark follows since a hyperbolic 3-manifold is irreducible.
no-problem/0003/hep-ph0003160.html
ar5iv
text
# New limits on effective Majorana neutrino masses from rare kaon decays ## 1 Introduction Over the last years, growing evidence has arisen for a non-vanishing neutrino rest mass. The observations of missing solar neutrinos, a deficit in upward-going atmospheric $`\nu _\mu `$ and the LSND accelerator experiment results can all be explained within the context of neutrino oscillations. For recent reviews see . On the other hand, oscillation experiments are not absolute mass measurements, depending only on $`\mathrm{\Delta }m^2`$ = $`m_2^2-m_1^2`$, where $`m_{1,2}`$ are the two mass eigenvalues. Therefore several neutrino mass models exist to describe the observed effects. Besides that, the fundamental character of the neutrino, whether it is a Dirac or a Majorana particle, is still unknown. The most promising channel to probe this property for $`\nu _e`$ is neutrinoless double beta decay. The measured quantity $`m_{ee}`$ is called the effective Majorana neutrino mass and is given by $$m_{ee}=|\sum _iU_{ei}^2m_i\eta _i^{\mathrm{CP}}|$$ (1) where $`m_i`$ are the mass eigenvalues, $`\eta _i^{\mathrm{CP}}=\pm 1`$ are the relative CP-phases and $`U_{ei}`$ are the mixing matrix elements connecting weak eigenstates with mass eigenstates. The current experimental upper bound on $`m_{ee}`$ is around 0.2 eV . But this quantity is only one element of a general $`3\times 3`$ matrix of effective Majorana neutrino masses given in the form $$\begin{array}{c}m_{\alpha \beta }=\left|\sum _iU_{\alpha i}U_{\beta i}m_i\eta _i^{\mathrm{CP}}\right|\text{ with }\alpha ,\beta =e,\mu ,\tau .\end{array}$$ (2) The limits for the other matrix elements are rather poor compared with double beta decay. Limits on $`m_{e\mu }`$ arise from muon-positron conversion on titanium, coming from the experimental bound of $$\frac{\mathrm{\Gamma }(\mathrm{Ti}+\mu ^{-}\to \mathrm{Ca}^{GS}+e^+)}{\mathrm{\Gamma }(\mathrm{Ti}+\mu ^{-}\to \mathrm{Sc}+\nu _\mu )}<1.7\times 10^{-12}(90\%\text{CL})$$ (3) which can be converted into a new limit of $`m_{e\mu }`$$`<`$ 17 (82) MeV, depending on whether the proton pairs in the final state are in a spin singlet or triplet state and allowing correction factors of order one for the difference in Ti and S as given in . Recently an improved limit on the element $`m_{\mu \mu }`$$`\stackrel{<}{\sim }`$ $`10^4`$ GeV was given by investigating trimuon production in neutrino-nucleon scattering . The first full matrix of limits on $`m_{\alpha \beta }`$, including for the first time matrix elements containing the $`\tau `$-sector of Eq. 2, is given in , using HERA charged current results with associated dimuon production. All limits obtained for the matrix elements are of the order $`10^4`$ GeV or slightly below. Indirect bounds coming from FCNC processes like $`\mu \to e\gamma ,\tau \to \mu \gamma `$ could also account for severe limits. On the other hand, note that these processes depend on $`m_{e\mu }=\sqrt{\sum _iU_{ei}U_{\mu i}m_i^2}`$ (in the case of $`\mu \to e\gamma `$) while the ones mentioned before typically depend on $`m_{\alpha \beta }^2`$. Therefore, without specifying a mixing and mass scheme, the quantities are rather difficult to compare. A discussion of such models within this context and using oscillation data can be found in . The same argument holds for combining $`m_{ee}`$ and $`m_{e\mu }`$ to determine $`m_{\mu \mu }`$. Therefore any experimentally obtained limit is very useful.
A further interesting topic within this context is the production of neutral heavy leptons and the direct production of Majorana neutrinos heavier than 100 GeV; the latter has been studied for various collider types . The current limits on neutral heavy leptons come from LEP and are 39.5 GeV (stable) and 76.0 GeV (for an unstable Majorana neutrino coupling to muons) . Also, the mixing of such heavy particles with the light neutrinos will be restricted by the limits given for $`m_{\alpha \beta }`$. The process of a lepton-number violating rare kaon decay discussed in this paper will allow us to obtain a new limit on $`m_{\mu \mu }`$. ## 2 The decay $`K^+\to \pi ^{-}\mu ^+\mu ^+`$ A further possibility to probe $`m_{\mu \mu }`$ is the rare kaon decay $$K^+\to \pi ^{-}\mu ^+\mu ^+.$$ (4) This process violates lepton number by two units. The measured quantity $`m_{\mu \mu }`$ is given by $$m_{\mu \mu }=|\sum _iU_{\mu i}^2m_i\eta _i^{\mathrm{CP}}|.$$ (5) The lowest order Feynman diagrams are shown in Fig. 1. The amplitude $`A_1`$ is given for the tree diagram by $$A_1=2G_F^2f_Kf_\pi (V_{ud}V_{us})^{\ast }\sum _i(U_{li}U_{l^{\prime }i})^{\ast }p_{K,\alpha }p_{\pi ,\beta }[L_i^{\alpha \beta }(p_l,p_{l^{\prime }})-\delta _{ll^{\prime }}L_i^{\alpha \beta }(p_{l^{\prime }},p_l)]$$ (6) where $$L_i^{\alpha \beta }(p_l,p_{l^{\prime }})=m_{\nu _i}(q^2-m_{\nu _i}^2)^{-1}\overline{v}(p_l)\gamma ^\alpha \gamma ^\beta P_Rv^c(p_{l^{\prime }})$$ (7) and q corresponds to the four-momentum of the virtual $`\nu _i`$ and $`P_R=(1+\gamma _5)/2`$. The box diagram cannot be calculated easily because the hadronic matrix element $$\int d^4x\,d^4y\,e^{i(p_d-p_u)y}e^{i(p_{\overline{s}}-p_{\overline{u}})x}\langle \pi ^{-}|[\overline{d_L}(y)\gamma _\beta u_L(y)][\overline{s_L}(x)\gamma _\alpha u_L(x)]|K^+\rangle $$ (8) is not directly related to measured quantities like $`\langle 0|\overline{s_L}\gamma _\alpha u_L|K^+\rangle `$ and $`\langle \pi ^{-}|\overline{d_L}\gamma _\beta u_L|0\rangle `$ as in the tree graph. The tree diagram dominates the total decay rate and the $`m_\nu `$ dependence comes basically from $`L^{\alpha \beta }`$. Detailed calculations can be found in . Because we are far away from the expected rate, the uncertainties in the matrix element can be neglected. A first extraction of a branching ratio for this process was done in , reexamining the data from . They obtained a branching ratio of $$\frac{\mathrm{\Gamma }(K^+\to \pi ^{-}\mu ^+\mu ^+)}{\mathrm{\Gamma }(K^+\to \mathrm{all})}<1.5\times 10^{-4}(90\%\text{CL})$$ (9) Using this value together with the theoretical calculations of , a limit of $`m_{\mu \mu }`$$`<1.1\times 10^5`$ GeV could be deduced . The processes discussed in were able to improve that number down to less than $`4\times 10^3`$ GeV. In the meantime new sensitive kaon experiments are online, and using the E865 experiment at BNL a new upper limit on the branching ratio of $$\frac{\mathrm{\Gamma }(K^+\to \pi ^{-}\mu ^+\mu ^+)}{\mathrm{\Gamma }(K^+\to \mathrm{all})}<3\times 10^{-9}(90\%\text{CL})$$ (10) could be obtained , an improvement by a factor of 50000. Because the branching ratio is proportional to $`m_{\mu \mu }^2`$, this can be converted into a limit on $`m_{\mu \mu }`$$`\stackrel{<}{\sim }`$ 500 GeV, a factor of eight better than the existing limits and three orders of magnitude better in this particular decay channel. ## 3 Results and Discussion The obtained upper limit on $`m_{\mu \mu }`$ of 500 GeV restricts the region of heavy Majorana neutrinos having an admixture U<sub>μH</sub> with $`\nu _\mu `$. No direct bound exists for neutrino masses heavier than 90 GeV. This is illustrated in Fig. 2. Further improvements to this bound could come from even more sensitive searches for this rare kaon decay.
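The conversion from branching ratio to mass bound is simple scaling, since the rate grows as $`m_{\mu \mu }^2`$; a one-line sketch using the numbers quoted above:

```python
# BR ~ m_mumu^2, so an improved BR limit tightens the mass bound by the
# square root of the improvement factor (numbers from Eqs. (9) and (10)).
import math

br_old, m_old = 1.5e-4, 1.1e5     # old BR limit and the mass bound it implied [GeV]
br_new = 3.0e-9                   # E865 limit

m_new = m_old * math.sqrt(br_new / br_old)
print(f"improvement in BR : {br_old / br_new:.0f}")   # 50000
print(f"new m_mumu bound  : {m_new:.0f} GeV")         # ~ 500 GeV
```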
An improvement by a factor of about 10 on $`m_{\mu \mu }`$, implying an improvement of the branching ratio limit by two orders of magnitude, would bring the number into overlap with LEP searches. New experiments like E949 at BNL and CKM (E905) at Fermilab or a muon collider could improve on that significantly. Furthermore, the decay of charmed mesons could be considered as well. Among the Cabibbo angle favoured modes are $`D^+\to K^{-}\mu ^+\mu ^+,D_S^+\to K^{-}\mu ^+\mu ^+`$ or $`D_S^+\to \pi ^{-}\mu ^+\mu ^+`$. The existing limits on the branching ratios for these processes are $`3.2\times 10^{-4},5.9\times 10^{-4}`$ and $`4.3\times 10^{-4}`$ respectively . While competitive with the old bound for the kaon decay discussed, the new kaon branching ratio limit is now five orders of magnitude better. Therefore, to obtain new information on $`m_{\mu \mu }`$ from D decays, analyses of new data sets have to be done. To improve significantly towards lighter neutrino masses ($`m_{\mu \mu }`$$`\stackrel{<}{\sim }`$ 1 GeV) one might consider other processes. The close analogue of double beta decay, and therefore a measurement of $`m_{\mu \mu }`$ at nuclear scales, would be muon capture by nuclei with a $`\mu ^+`$ in the final state, as discussed in . No such experiment has been performed yet. The ratio with respect to ordinary muon capture can be given for <sup>44</sup>Ti and light neutrino exchange as $$\mathrm{\Gamma }=\frac{\mathrm{\Gamma }(\mu ^{-}+\mathrm{Ti}\to \mu ^++\mathrm{Ca})}{\mathrm{\Gamma }(\mu ^{-}+\mathrm{Ti}\to \nu _\mu +\mathrm{Sc})}\simeq 5\times 10^{-24}\left(\frac{m_{\mu \mu }}{250\,\mathrm{keV}}\right)^2.$$ (11) Assuming that a branching ratio of the order of muon-positron conversion (Eq. 3) can be obtained, a bound of $`m_{\mu \mu }`$$`\stackrel{<}{\sim }`$ 150 GeV results. Unfortunately this is only a slight improvement on the bound obtained here. Furthermore, assuming heavy neutrino exchange for the $`\mu ^+`$ capture would result in a rate another four orders of magnitude lower than for light neutrino exchange. Improvements on the $`\tau `$-sector of matrix elements of Eq. (2), especially $`m_{\tau \tau }`$, could be achieved by a search for rare B decays. Limits on the branching ratios for the decays $`B^+\to K^{-}\mu ^+\mu ^+,B^+\to \pi ^{-}\mu ^+\mu ^+`$ of less than $`9.1\times 10^{-3}`$ exist ; however, nobody has looked into the decays $`B^+\to K^{-}\tau ^+\tau ^+`$ or $`B^+\to \pi ^{-}\tau ^+\tau ^+`$. With the new B-factories such a search might be possible at a level producing limits on $`m_{\tau \tau }`$ competitive with the ones given in . ## 4 Conclusion Whether neutrinos are Dirac or Majorana particles is still an open question. For several flavours an effective Majorana mass matrix can be assumed, whose best explored element is $`m_{ee}`$, due to neutrinoless double beta decay searches. An improved limit on $`m_{e\mu }`$ is given here. An investigation focusing on the matrix element $`m_{\mu \mu }`$ was performed. Using new bounds on the branching ratio of $`K^+\to \pi ^{-}\mu ^+\mu ^+`$ obtained by E865, a new upper limit on $`m_{\mu \mu }`$ of less than 500 GeV was obtained, improving existing bounds by roughly one order of magnitude. Information on the admixture of heavy Majorana neutrinos with muons is obtained. Suggestions for further improvements are given, as well as the proposal to consider a search for rare B decays like $`B^+\to \pi ^{-}\tau ^+\tau ^+`$ to improve on the $`\tau `$-sector of $`m_{\alpha \beta }`$. ## 5 Acknowledgements I would like to thank H. Ma for providing me with the new E865 limit and W. Rodejohann for useful discussions.
# Level Densities by Particle-Number Reprojection Monte Carlo Methods ## Introduction The nucleosynthesis of heavy elements takes place by radiative capture of neutrons ($`s`$ and $`r`$ process) and protons ($`rp`$ process) in competition with beta decay. In the statistical regime, neutron and proton capture rates are proportional to the level density of the corresponding compound nucleus . Most theoretical models of level densities are based on the Fermi gas model, e.g., the Bethe formula , which describes the exponential increase of the many-particle level density with both excitation energy and mass number. In the backshifted Bethe formula (BBF) , shell corrections and two-body correlations are taken into account empirically by introducing a backshift $`\mathrm{\Delta }`$ of the ground state energy. The BBF offers a good description of the experimentally determined level densities when both the level density parameter $`a`$ and the backshift $`\mathrm{\Delta }`$ are fitted for each nucleus . The overall systematics of $`a`$ and $`\mathrm{\Delta }`$ have been studied empirically, but it is difficult to accurately predict these parameters for a particular nucleus. The interacting shell model takes into account both shell effects and residual interactions and constitutes an attractive framework for calculating accurate level densities. However, conventional diagonalization methods are limited by the size of the model space. Full major shell calculations are presently restricted to nuclei with $`A\sim 50`$ (in the $`pf`$-shell) . The development of quantum shell model Monte Carlo (SMMC) methods allows the calculation of finite and zero-temperature observables in model spaces orders of magnitude larger than those that can be treated by conventional diagonalization techniques. Recently the SMMC method was successfully adapted to the microscopic calculation of nuclear level densities . The applications of fermionic Monte Carlo methods are often limited by the so-called sign problem, which causes a breakdown of the method at low temperatures. A practical solution was developed in the nuclear case , but the resulting extrapolation errors were found to be too large for accurate calculations of level densities. Instead we have constructed good-sign interactions that include the dominating collective components of effective nuclear interactions and were proven to be realistic for the calculation of level densities. The SMMC method is based on a representation of the many-body imaginary-time propagator as a functional integral over one-body propagators in fluctuating auxiliary fields, known as the Hubbard-Stratonovich transformation . The many-dimensional integration is then performed by Monte Carlo. The SMMC method is computationally intensive. In particular, level density calculations require computation of the thermal energy at all temperatures. If this procedure is to be repeated for a series of nuclei, the calculations quickly become very time-consuming. Recently we introduced a novel particle-number reprojection method with which we can calculate nuclear observables for a series of nuclei using the Monte Carlo sampling for a single nucleus. The weight function used in the sampling is proportional to the partition function of a fixed even-even or $`N=Z`$ nucleus. Thermal observables for neighboring nuclei are then calculated by reprojection on different particle numbers (both even and odd).
This technique offers an economical way of calculating level densities for a large number of nuclei. ## The Shell Model Monte Carlo Methods A general many-body Hamiltonian containing up to two-body interactions can be written in the following quadratic form: $$H=\sum _\alpha ϵ_\alpha \widehat{\rho }_\alpha +\frac{1}{2}\sum _\alpha v_\alpha \widehat{\rho }_\alpha ^2,$$ (1) where $`\widehat{\rho }_\alpha `$ are one-body densities. Using the Hubbard-Stratonovich transformation, the imaginary-time many-body propagator $`e^{-\beta H}`$ can be represented as $$e^{-\beta H}=\int D[\sigma ]G(\sigma )U_\sigma ,$$ (2) where $`G(\sigma )`$ is a Gaussian weight and $`U_\sigma `$ is a one-body propagator of non-interacting nucleons moving in fluctuating time-dependent auxiliary fields $`\sigma (\tau )`$. The canonical expectation value of an observable $`\widehat{O}`$ at inverse temperature $`\beta `$ is calculated from $$\langle \widehat{O}\rangle _𝒜=\frac{\int D[\sigma ]G(\sigma )\mathrm{Tr}_𝒜(\widehat{O}U_\sigma )}{\int D[\sigma ]G(\sigma )\mathrm{Tr}_𝒜U_\sigma },$$ (3) where $`\mathrm{Tr}_𝒜`$ is a canonical trace in the subspace of fixed particle number $`𝒜`$. In practice we project on both neutron and proton number, $`N`$ and $`Z`$, respectively, so $`𝒜`$ denotes the pair $`(N,Z)`$. Introducing the notation $`\langle X_\sigma \rangle _W\equiv \int D[\sigma ]W(\sigma )X_\sigma /\int D[\sigma ]W(\sigma )`$, where $`W(\sigma )\equiv G(\sigma )\mathrm{Tr}_𝒜U_\sigma `$, Eq. (3) can be written as $$\langle O\rangle _𝒜=\left\langle \mathrm{Tr}_𝒜(\widehat{O}U_\sigma )/\mathrm{Tr}_𝒜U_\sigma \right\rangle _W.$$ (4) For a good-sign interaction and for even-even or $`N=Z`$ nuclei, the weight function $`W(\sigma )`$ is positive-definite. The $`\sigma `$-fields are sampled according to $`W(\sigma )`$ and thermal observables are calculated from (4). ## The particle-number reprojection method We assume that the Monte Carlo sampling is done for a nucleus with particle number $`𝒜`$. The ratio $`Z_{𝒜^{\prime }}/Z_𝒜`$ between the partition function of another nucleus with particle number $`𝒜^{\prime }`$ and that of the original nucleus $`𝒜`$ is written as $$\frac{Z_{𝒜^{\prime }}(\beta )}{Z_𝒜(\beta )}\equiv \frac{\mathrm{Tr}_{𝒜^{\prime }}e^{-\beta H}}{\mathrm{Tr}_𝒜e^{-\beta H}}=\left\langle \frac{\mathrm{Tr}_{𝒜^{\prime }}U_\sigma }{\mathrm{Tr}_𝒜U_\sigma }\right\rangle _W.$$ (5) The expectation value of an observable $`\widehat{O}`$ for a nucleus with $`𝒜^{\prime }`$ particles is calculated from $$\langle \widehat{O}\rangle _{𝒜^{\prime }}=\frac{\left\langle \left(\frac{\mathrm{Tr}_{𝒜^{\prime }}\widehat{O}U_\sigma }{\mathrm{Tr}_{𝒜^{\prime }}U_\sigma }\right)\left(\frac{\mathrm{Tr}_{𝒜^{\prime }}U_\sigma }{\mathrm{Tr}_𝒜U_\sigma }\right)\right\rangle _W}{\left\langle \frac{\mathrm{Tr}_{𝒜^{\prime }}U_\sigma }{\mathrm{Tr}_𝒜U_\sigma }\right\rangle _W}.$$ (6) The Monte Carlo sampling is carried out using the weight function $`W(\sigma )`$, which is proportional to the partition function of nucleus $`𝒜`$, and Eq. (6) is used to calculate the same observable for nuclei with $`𝒜^{\prime }\ne 𝒜`$.
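The canonical traces entering Eqs. (3)-(6) can be evaluated exactly for each field sample, because $`U_\sigma `$ is a one-body propagator: projecting on particle number reduces to a discrete Fourier sum of determinants in the single-particle space. A schematic sketch of this standard projection, with a random matrix standing in for one Monte Carlo sample (this is an illustration, not the production SMMC code):

```python
import numpy as np

def canonical_trace(U, n):
    """Particle-number projected trace of a one-body propagator.

    Tr_N U = (1/Ns) sum_m exp(-i*phi_m*N) det(1 + exp(i*phi_m) U),
    phi_m = 2*pi*m/Ns, over the Ns single-particle states (exact here,
    since the particle numbers used lie strictly between 0 and Ns).
    """
    ns = U.shape[0]
    total = 0.0 + 0.0j
    for m in range(ns):
        phi = 2.0 * np.pi * m / ns
        total += np.exp(-1j * phi * n) * np.linalg.det(
            np.eye(ns) + np.exp(1j * phi) * U)
    return total / ns

# Toy stand-in for U_sigma: U = exp(-beta*h) for a random Hermitian h.
rng = np.random.default_rng(0)
h = rng.normal(size=(12, 12))
h = 0.5 * (h + h.T)
w, v = np.linalg.eigh(h)
U = (v * np.exp(-0.5 * w)) @ v.T        # beta = 0.5

# One-sample reprojection ratio Tr_{A'} U / Tr_A U entering Eq. (5).
print((canonical_trace(U, 7) / canonical_trace(U, 6)).real)
```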
In the calculations of level densities we used the Hamiltonian $$H=\sum _aϵ_a\widehat{n}_a+g_0\,P^{(0,1)\dagger }\cdot \stackrel{~}{P}^{(0,1)}-\chi \sum _{\lambda =2}^4k_\lambda \,O^{(\lambda ,0)}\cdot O^{(\lambda ,0)},$$ (7) where $$\begin{array}{ccc}P^{(\lambda ,T)\dagger }& =& \frac{\sqrt{4\pi }}{2(2\lambda +1)}\sum _{ab}\langle j_a\Vert Y_\lambda \Vert j_b\rangle \,[a_{j_a}^{\dagger }\times a_{j_b}^{\dagger }]^{(\lambda ,T)},\hfill \\ O^{(\lambda ,T)}& =& \frac{1}{\sqrt{2\lambda +1}}\sum _{ab}\langle j_a\Vert \frac{dV}{dr}Y_\lambda \Vert j_b\rangle \,[a_{j_a}^{\dagger }\times \stackrel{~}{a}_{j_b}]^{(\lambda ,T)}.\hfill \end{array}$$ (8) The modified annihilation operator is defined by $`\stackrel{~}{a}_{j,m,m_t}=(-1)^{j-m+\frac{1}{2}-m_t}a_{j,-m,-m_t}`$, and a similar definition is used for $`\stackrel{~}{P}^{(\lambda ,T)}`$. The single-particle energies $`ϵ_a`$ are calculated in a central Woods-Saxon potential $`V(r)`$ plus spin-orbit interaction. $`g_0`$ is a monopole pairing interaction strength determined from experimental odd-even mass differences. The quadrupole, octupole and hexadecupole interaction terms in (7) are obtained by expanding a separable surface-peaked interaction $`v(r,r^{\prime })=-\chi (dV/dr)(dV/dr^{\prime })\delta (r-r^{\prime })`$ whose strength $`\chi `$ is determined self-consistently. The parameters $`k_2=2`$, $`k_3=1.5`$ and $`k_4=1`$ are renormalization constants that take into account core polarization effects. Both the pairing and the surface-peaked interactions are attractive and therefore have a good sign . In the particle-number reprojection method described above we have assumed that the Hamiltonian $`H`$ is independent of $`𝒜`$. Suitable corrections should be made if some of the Hamiltonian parameters vary with $`𝒜`$. In the iron region we find that $`\chi `$ depends only weakly on the mass number $`A`$, and the pairing strength $`g_0`$ is constant through the shell. The largest variation in this mass region is that of the single-particle energies. The thermal energy of a nucleus with $`𝒜^{\prime }=(N^{\prime },Z^{\prime })`$ can then be estimated from $`E_{𝒜^{\prime }}(\beta )\simeq \sum _a[ϵ_a(𝒜^{\prime })-ϵ_a(𝒜)]\langle n_a\rangle _{𝒜^{\prime }}+\langle H\rangle _{𝒜^{\prime }}`$, where $`H`$ is the Hamiltonian for a nucleus with $`𝒜=(N,Z)`$. This estimate for $`E_{𝒜^{\prime }}`$ is an approximation since we are still using the propagator $`e^{-\beta H}`$ with the Hamiltonian $`H`$ for nucleus $`𝒜`$ (instead of $`𝒜^{\prime }`$). This is a good approximation if we reproject on nuclei with $`N^{\prime }-Z^{\prime }`$ values close to $`N-Z`$ (the Woods-Saxon potential depends on $`N-Z`$). In the applications below this leads to negligible errors in the level densities. ## Applications and Results In this section we present applications of the particle-number reprojection method to nuclei in the iron region. Since we are interested in level densities around the neutron and proton resonance energies, we use the complete $`(pf+g_{9/2})`$-shell. This model space contains both positive and negative parity states. We perform the direct Monte Carlo sampling for the even-even nucleus ⁵⁶Fe and the $`N=Z`$ nucleus ⁵⁴Co (both have a good sign for the interaction (7)). The thermal energies of ⁵³⁻⁵⁶Mn, ⁵⁴⁻⁵⁸Fe and ⁵⁴⁻⁶⁰Co were reprojected from ⁵⁶Fe, while those of ⁵⁰⁻⁵²Mn and ⁵²,⁵³Fe were reprojected from ⁵⁴Co. The calculations were done for $`\beta `$ values up to 2.5 MeV⁻¹. At small $`\beta `$ ($`\beta <1`$ MeV⁻¹) the calculations were done with a smaller step of $`\mathrm{\Delta }\beta =1/16`$ MeV⁻¹.
At larger values of $`\beta `$, the Monte Carlo calculations become more time-consuming, and we doubled our step size to $`\mathrm{\Delta }\beta =1/8`$ MeV⁻¹. For each $`\beta `$ point we took about 4000 independent samples. The reprojected energies usually have larger statistical errors at large values of $`\beta `$. To calculate reliable ground state energies we performed direct Monte Carlo runs for some of the reprojected nuclei at several values of $`\beta `$ between 1.75 and 2.5 MeV⁻¹. For $`\beta >2.5`$ MeV⁻¹ the statistical error in the thermal energy of an odd-even nucleus becomes too large to be useful. Since the thermal energy of an odd-even nucleus is already close to its asymptotic value at these large $`\beta `$ values, we could extract the ground state energy to within an accuracy of about 0.3 MeV. Fig. 1 shows the SMMC thermal energies versus $`\beta `$ for the manganese isotopes. The staggering observed in the spacings of the thermal energies at large $`\beta `$ is a pairing effect. The inset of Fig. 1 shows the SMMC thermal energies of ⁵⁴Mn (diamonds with error bars) at large values of $`\beta `$. It demonstrates the procedure we used to extract the ground state energy. The level density is related to the partition function by an inverse Laplace transform $$\rho _{𝒜^{\prime }}(E)=\int _{-i\mathrm{\infty }}^{i\mathrm{\infty }}\frac{d\beta }{2\pi i}e^{\beta E}Z_{𝒜^{\prime }}(\beta ).$$ (9) The partition function $`Z_{𝒜^{\prime }}(\beta )`$ is computed from the SMMC thermal energies by integrating the thermodynamic relation $`\partial \mathrm{ln}Z_{𝒜^{\prime }}/\partial \beta =-E_{𝒜^{\prime }}(\beta )`$. The average level density is then calculated by evaluating (9) in the saddle point approximation, $$\rho _{𝒜^{\prime }}=(2\pi \beta ^{-2}C_{𝒜^{\prime }})^{-1/2}e^{S_{𝒜^{\prime }}},$$ (10) where $`S_{𝒜^{\prime }}=\beta E_{𝒜^{\prime }}+\mathrm{ln}Z_{𝒜^{\prime }}`$ and $`C_{𝒜^{\prime }}=-\beta ^2dE_{𝒜^{\prime }}/d\beta `$ are the canonical entropy and heat capacity, respectively. Fig. 2 shows the level densities for the manganese isotopes of Fig. 1 as a function of excitation energy. These densities are fitted to a modified version of the BBF , $$\rho (E_x)\simeq \frac{\sqrt{\pi }}{12}a^{-1/4}(E_x-\mathrm{\Delta }+t)^{-5/4}e^{2\sqrt{a(E_x-\mathrm{\Delta })}}.$$ (11) Here $`t`$ is a thermodynamic temperature determined by the relation $`E_x-\mathrm{\Delta }=at^2-t`$. Eq. (11) differs from the usual BBF in the term $`t`$ which appears in the pre-exponential factor, and gives a better fit to the SMMC level densities at lower excitation energies. The solid lines in Fig. 2 are the fitted BBF level densities of Eq. (11). The fitting is done in the energy range $`E_x<20`$ MeV and is usually good down to about 1 MeV for even-even nuclei (for which $`\mathrm{\Delta }`$ is positive), or even below 1 MeV for odd-$`A`$ nuclei. The reduced pairing correlations in odd-odd nuclei are clearly observed in the level densities of Fig. 2. The backshift parameter $`\mathrm{\Delta }`$ for the odd-odd nucleus ⁵⁴Co is lower than $`\mathrm{\Delta }`$ for the odd-even nucleus ⁵⁵Co, leading to a higher level density for ⁵⁴Co despite its smaller mass. The level density parameters $`a`$ and $`\mathrm{\Delta }`$ were extracted by fitting Eq. (11) to the microscopic SMMC level densities, and are shown in Fig. 3 versus mass number $`A`$. The SMMC results (solid squares) are compared with the experimental data ($`\times `$'s) quoted in the literature . The solid lines describe commonly used empirical formulae .
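The numerical chain from Eqs. (9)-(11), thermodynamic integration of the thermal energies followed by the saddle-point evaluation and the modified BBF form, is compact enough to sketch directly. The toy $`E(\beta )`$ curve and the additive constant in $`\mathrm{ln}Z`$ below are purely illustrative stand-ins for real SMMC output:

```python
import numpy as np

def level_density(beta, energy):
    """Saddle-point level density, Eqs. (9)-(10), from E(beta) on a grid.

    ln Z is obtained by integrating d(ln Z)/d(beta) = -E(beta) with the
    trapezoid rule; the additive constant at the first grid point is set
    to 0 for illustration (in practice it is fixed separately).  Then
        S   = beta*E + ln Z
        C   = -beta**2 * dE/dbeta
        rho = (2*pi*beta**-2*C)**(-1/2) * exp(S)
    """
    lnz = np.concatenate(([0.0], -np.cumsum(
        0.5 * (energy[1:] + energy[:-1]) * np.diff(beta))))
    entropy = beta * energy + lnz
    heat_cap = -beta**2 * np.gradient(energy, beta)
    return np.exp(entropy) / np.sqrt(2.0 * np.pi * heat_cap / beta**2)

def bbf(ex, a, delta):
    """Modified backshifted Bethe formula, Eq. (11); t is the positive
    root of Ex - Delta = a*t**2 - t."""
    u = ex - delta
    t = (1.0 + np.sqrt(1.0 + 4.0 * a * u)) / (2.0 * a)
    return (np.sqrt(np.pi) / 12.0 * a**-0.25 * (u + t)**-1.25
            * np.exp(2.0 * np.sqrt(a * u)))

# Purely illustrative input, not real SMMC data:
beta = np.linspace(1.0 / 16.0, 2.5, 40)      # MeV^-1
energy = -30.0 + 25.0 / (1.0 + beta)         # MeV, toy curve
print(level_density(beta, energy)[:3])
print(bbf(np.array([5.0, 10.0, 15.0]), a=6.0, delta=1.0))
```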
The SMMC values of $`a`$ depend smoothly on the mass $`A`$, unlike the values predicted by the empirical formulae. The pairing effects are clearly reflected in the staggering behavior of $`\mathrm{\Delta }`$ versus $`A`$, as seen in the right column of Fig. 3. In the empirical formulae, $`\mathrm{\Delta }`$ is close to zero for odd-even nuclei, positive for even-even nuclei and negative for odd-odd nuclei. However, we see that both the experimental and SMMC values of $`\mathrm{\Delta }`$ can differ significantly from zero for the odd-even nuclei. The SMMC values of $`a`$ and particularly $`\mathrm{\Delta }`$ are generally in better agreement with the experimental results than the empirical values. For some of the odd-odd manganese isotopes we observe discrepancies between the SMMC values of $`a`$ and the experimental data. However, the lower values of $`a`$ for these manganese isotopes are compensated by correspondingly lower values of $`\mathrm{\Delta }`$. Consequently, the discrepancies in the level densities themselves are less significant for $`E_x\stackrel{<}{\sim }10`$ MeV. To demonstrate the $`T_z`$-dependence of level densities we show in Fig. 4 the level densities of two odd-odd $`A=54`$ nuclei (Mn and Co) and three odd-even $`A=55`$ nuclei (Mn, Fe and Co). The empirical formulae predict similar level densities for the two odd-odd nuclei as well as for the three odd-$`A`$ nuclei: the values of $`a`$ are similar if the mass $`A`$ is the same; $`\mathrm{\Delta }\approx 0`$ for odd-$`A`$ nuclei; and $`\mathrm{\Delta }`$ is approximately the same for odd-odd nuclei. However, the SMMC level densities of these nuclei (symbols) are seen to be quite different from each other. We also see that the experimental level densities (dashed lines) are in good agreement with the SMMC densities. In conclusion, we have described a particle-number reprojection method in the shell model Monte Carlo approach. With this reprojection technique we can calculate the thermal properties for a series of nuclei using Monte Carlo sampling for a single nucleus. Level densities of odd-$`A`$ and odd-odd nuclei are calculated despite a new sign problem introduced by the projection on an odd number of particles. This work was supported in part by the Department of Energy grant No. DE-FG-0291-ER-40608, and by the Ministry of Education, Science, Sports and Culture of Japan (grants 08044056 and 11740137). Computational cycles were provided by the Cornell Theory Center, by the San Diego Supercomputer Center (using NPACI resources), and by the NERSC high performance computing facility at LBL.
# Direct observation of twist mode in electroconvection in I52 ## I Introduction When a spatially extended system is driven far from equilibrium, a series of transitions occurs as a function of the external driving force, or control parameter. The initial transition is typically from a spatially uniform state to a state with periodic spatial variations, called a pattern . One can distinguish two general classes of pattern forming systems: isotropic and anisotropic. In isotropic systems, because there is no intrinsic direction in the system, the initial wavevector of the pattern can have any orientation. For anisotropic systems, the uniform state of the system has a special axis and there are at most two degenerate initial wavevectors for the pattern. Electroconvection in nematic liquid crystals has become a paradigm for the study of pattern formation in anisotropic systems . Nematic liquid crystals are fluids in which the molecules possess orientational order . The axis along which the molecules are aligned on average is referred to as the director. For electroconvection, a nematic liquid crystal is placed between two glass plates. The plates are treated so that there is a uniform alignment of the director parallel to the plates, i.e. planar alignment. The plates are also coated with a transparent conductor, and the liquid crystal is doped with an ionic impurity. An ac voltage is applied perpendicular to the director. Above a critical value $`V_c`$ of the voltage, a pattern develops that consists of a periodic variation of the director and charge density with a corresponding convective flow of the fluid. Many of the interesting patterns in electroconvection are the result of oblique rolls. Oblique rolls refer to patterns where the wavevector forms a nonzero angle $`\theta `$ with respect to the initial alignment of the director. Because the director only defines an axis, for each value of $`\theta `$ and wavenumber $`q`$, there are two degenerate states corresponding to wavevectors at the angles $`\theta `$ and $`-\theta `$. These states are referred to as zig and zag, respectively. Electroconvection has been extensively studied experimentally . However, despite a relatively early identification of the basic instability mechanism , a detailed, quantitative description of the rich array of patterns has only recently emerged. The first step in this development was the elucidation of the standard model of electroconvection . The linear stability analysis and weakly nonlinear analysis presented in Ref. accurately describe electroconvection at relatively high electrical conductivities and in thick samples. However, the standard model fails to account for traveling patterns, i.e. a Hopf bifurcation, that are observed in thin samples and at low sample conductivity . Also, the original weakly nonlinear analysis of the standard model does not explain the experimentally observed “abnormal” rolls. Recently, these two phenomena have been explained by independent theoretical extensions of the standard model that are described below. First, the weak-electrolyte model (WEM) is an extension of the standard model that treats the charge density as a dynamically active field and is able to explain the Hopf bifurcation. Second, within the framework of the standard model, secondary and further bifurcations have been assessed with a fully nonlinear Galerkin calculation . In particular, this work helped elucidate the decisive role of a homogeneous in-plane twist of the director in the bifurcation to abnormal rolls .
The general features of this fully nonlinear calculation can be reproduced by an extended weakly nonlinear analysis. This analysis extends previous treatments of the standard model by including the homogeneous twist as a dynamically active mode , and I will refer to it as the “twist-mode model”. For certain cases, the twist-mode model even provides a semiquantitative or quantitative description of the dynamics. This analysis applies to both electroconvection and thermal convection in nematics . Currently, a weakly nonlinear theory that includes both the WEM effects and the twist mode remains undeveloped. The appropriate merging of the twist-mode model and the WEM is essential for systems with traveling oblique rolls, where both effects can be important. For example, even though electroconvection in I52 provided the first quantitative confirmation of the WEM at the linear level , the patterns in I52 are dominated by oblique rolls . Therefore, the homogeneous in-plane twist is present and may be important at the weakly nonlinear level. The main question is: does this mode need to be included as an additional active mode in an extended weakly nonlinear analysis, or is the WEM sufficient? In particular, this question is crucial for two patterns that are observed in I52 where the dynamics depends on the interaction between traveling oblique rolls: spatiotemporal chaos at onset and localized states known as “worms” . Currently, qualitative features of these patterns have been reproduced in the context of a weakly nonlinear analysis based on the WEM . However, if the twist mode is an active mode, it must be included for any quantitative comparison to work. The results reported here are a first step in determining the relative importance of the in-plane homogeneous twist of the director in electroconvection in I52. For these initial experiments, the contribution of WEM effects was minimized by focusing on relatively high conductivities. This work includes a more detailed and quantitative study of the SO2 state that was first reported in Ref. . The SO2 state consists of the superposition of rolls with wavevector q and rolls with wavevector k. The angle between the wavevectors ranged from 72° to 90°, depending on the parameter values. In this paper, I will refer to the initial oblique roll wavevector as q and any subsequent wavevector that grows as a result of a secondary instability as the dual wavevector k. Also, wavevectors with a positive angle relative to the undistorted director will be referred to as zig-type, and wavevectors with a negative angle with respect to the undistorted director will be referred to as zag-type. In general, because of the two-fold degeneracy, the initial wavevector q can be either zig-type or zag-type. In Ref. , it has been proposed that the SO2 state is an example of the bimodal varicose state. I will show that this association is correct. Also, I report on direct measurements of a twist mode of the director field in I52 for the oblique roll states and the SO2 states. These measurements confirm that the director-wavevector frustration mechanism proposed in Ref. is the source of the bimodal instability that results in the SO2 state. It has been predicted in Ref. that the bimodal state can experience a Hopf bifurcation to a time periodic state. In this state, the amplitudes of the q, k, and homogeneous twist modes all oscillate in time. This state is referred to as the oscillating bimodal varicose .
This state has been observed indirectly in thermal convection , where only the oscillations of the q and k modes were observed. I report on observations of this Hopf bifurcation in electroconvection. In particular, I have directly measured the oscillations of the homogeneous twist mode in addition to the oscillations of the q and k modes. The sequence of bifurcations reported here is in perfect agreement with the twist-mode model. Also, the measured angle between the q and k modes agrees with predictions of the twist-mode model. However, the location of the bifurcation to the bimodal varicose and the oscillating bimodal varicose is not quantitatively described by the twist-mode model. This is easily attributed to two facts. First, the material parameters are not completely known. Second, the WEM effects are still present at some level in the experimental system and have not yet been incorporated into the model. The rest of the paper is organized as follows. Section II describes the experimental details. Section III presents the experimental results, and Sec. IV discusses the comparison between the results and the twist-mode model. ## II Experimental Details I used the nematic liquid crystal I52 doped with 5% by weight molecular iodine. Commercial cells were obtained from EHC, Ltd in Japan . The cells consisted of two pieces of glass coated with transparent electrodes of indium-tin oxide (ITO). The surfaces were treated with rubbed polymers to obtain uniform planar alignment. The initial alignment direction will be referred to as the x-axis, and the z-axis is taken perpendicular to the glass plates. The cell spacing was 23 $`\mu `$m and the electrodes were 1 cm × 1 cm. The cells were filled by capillary action and sealed with five minute epoxy. The cells were placed in a temperature control block. The temperature was maintained constant to $`\pm 2\mathrm{mK}`$. In order to study a range of parameters, four different operating temperatures were used: 35 °C, 42 °C, 45 °C, and 55 °C. The system used to image the patterns is shown in Fig. 1. It is a standard shadowgraph setup that is modified to allow for the direct observation of in-plane twist modes of the director using a $`\lambda /4`$ plate and analyzer. Two main orientations of the polarizer, $`\lambda /4`$ plate, and analyzer were used. In Fig. 1a, the polarizer and analyzer are shown parallel to each other and to the initial alignment of the director. With this geometry, only the transmission of extraordinary light is observed. Minus the $`\lambda /4`$ plate and analyzer, this is the standard shadowgraph setup for electroconvection. The light is focused by the director variation in the x-z plane, and an image of the pattern is obtained . In the arrangement shown in Fig. 1b, the polarizer is oriented perpendicular to the initial alignment of the director. In this case, only ordinary incident light is present, and the contrast due to the x-z periodic variation is avoided. In both cases, the $`\lambda /4`$ plate at $`45^{\circ }`$ to the undistorted director orientation is used to discriminate domains of positive and negative x-y director twist. Figure 1c is a top view of the cell and illustrates the definition of positive and negative rotation. The optical setup used here is analyzed in detail in Ref. for a thicker cell and a different liquid crystal. However, the qualitative features hold true in this case. For the orientations shown in Fig. 1b, regions of negative twist have an enhanced transmission of light relative to regions of positive twist; a minimal Jones-matrix sketch of this contrast mechanism is given below.
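To make this sign sensitivity plausible, the cell can be modeled in Jones calculus as a stack of thin retarders whose optic axis follows the twisted director, followed by the $`\lambda /4`$ plate and the analyzer. The sketch below is qualitative only: the total retardance, the twist angle, and the analyzer orientation (taken along x) are assumed, illustrative values, not fitted parameters of the experiment; the quantitative treatment is the detailed calculation referred to above.

```python
import numpy as np

def retarder(gamma, theta):
    """Jones matrix of a linear retarder with retardance gamma and
    fast axis at angle theta to the x-axis."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])
    return rot @ np.diag([np.exp(-0.5j * gamma), np.exp(0.5j * gamma)]) @ rot.T

def transmission(twist, gamma_total=8.0, slices=200):
    """Intensity behind a lambda/4 plate at +45 deg and an analyzer along x,
    for ordinary incident light (polarizer along y, as in Fig. 1b) crossing
    a slab whose local optic axis twists linearly from 0 to `twist` (rad).
    gamma_total, the total slab retardance, is an assumed toy value."""
    field = np.array([0.0, 1.0], dtype=complex)         # ordinary wave (y)
    for theta in np.linspace(0.0, twist, slices):
        field = retarder(gamma_total / slices, theta) @ field
    field = retarder(np.pi / 2.0, np.pi / 4.0) @ field  # quarter-wave plate
    return abs(field[0]) ** 2                           # analyzer along x

delta = np.radians(2.0)   # a small homogeneous twist
print(transmission(+delta), transmission(-delta))
# The two intensities differ: the slab converts a small in-plane twist into
# ellipticity whose handedness tracks the twist sign, and the quarter-wave
# plate plus analyzer converts that handedness into a brightness change.
```

In this toy model, domains of opposite twist therefore appear with different brightness, which is the type of contrast exploited in the images discussed below.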
Rotating the $`\lambda /4`$ plate to $`-45^{\circ }`$ yields equivalent information, but with complementary intensities. With the polarizer aligned parallel to the undistorted director, the dominant feature of the image is the focusing due to the periodic x-z variation of the director. However, because we also used the $`\lambda /4`$ plate in this case, there is a modification of the overall intensity when a homogeneous twist is present. In this orientation, because the polarizer and analyzer are both aligned with the undistorted director, a positive twist of the director has an enhanced transmission of light relative to a negative twist of the director when the $`\lambda /4`$ plate is at $`+45^{\circ }`$. Therefore, in both setups, the sign of the twist amplitude can be determined. In the future, a detailed calculation along the lines presented in Ref. is required for quantitative measurements of the twist amplitude. The images were taken using a standard CCD camera and digitized with an 8-bit framegrabber. All of the images have both a background subtraction and a background division performed. The background subtraction was done to remove the effective mean from the image that is due to the fact that the camera digitizes images on a 0 to 255 scale. This was necessary to enable detection of the uniform twist mode by Fourier techniques. The presence of a uniform twist shifts the mean intensity of the image and shows up as changes in the amplitude of the zero wavevector. Without the subtraction step, the zero wavevector peak is dominated by the mean caused by the digitizing process. The electrical conductivity of the sample ranged between $`1\times 10^{-8}\mathrm{\Omega }^{-1}\mathrm{m}^{-1}`$ and $`1\times 10^{-9}\mathrm{\Omega }^{-1}\mathrm{m}^{-1}`$. The variation in conductivity was due to two effects: the conductivity is temperature dependent, and the conductivity decreased slowly in time. At a fixed temperature, the main effect of the conductivity drift is to shift the critical voltage, $`V_c`$, for the onset of convection. The different temperatures were used to study the effect of varying the material parameters and to offset the long term shifts in $`V_c`$. By changing the temperature, $`V_c`$ was kept at approximately $`11\mathrm{V}_{\mathrm{rms}}`$. For all of the experiments reported here, the drive frequency was 25 Hz. Because of the shift in $`V_c`$, the following protocols were followed for all of the experimental runs. All transitions are reported in terms of $`ϵ=(V/V_c)^2-1`$. To determine the various transition points, the voltage was increased in steps of $`\mathrm{\Delta }ϵ`$ ranging from $`0.005`$ to $`0.01`$, depending on the precision of interest for a given run. For measurements of the onset of the bimodal varicose instability ($`ϵ_{BV}`$), a single image was taken after the system was allowed to equilibrate for 10 minutes. The relevant time scale is the director relaxation time, which for this system is on the order of 0.2 s. As $`ϵ`$ is increased above $`ϵ_{BV}`$, there is a Hopf bifurcation to the oscillating bimodal varicose state at $`ϵ=ϵ_H`$. The value of $`ϵ_H`$ was determined by quasi-statically stepping $`ϵ`$ from the bimodal varicose state. At each step in $`ϵ`$, a time series of images was taken. The power spectrum of the time series was computed, and the signature of the Hopf bifurcation was the development of a nonzero frequency component. For each run, $`V_c`$ was measured before and after the experiment to determine the drift in $`V_c`$.
The drift was a relatively constant 0.018 volts per hour, which corresponds to a shift in $`ϵ`$ of 0.003 per hour. This level of drift in conductivity did not adversely affect our ability to make comparisons with theory and was accounted for in all values of $`ϵ`$ that are reported here. ## III Experimental Results Figure 2 shows four images of a typical pattern in the oblique roll regime, and Fig. 3 shows four images of a typical pattern above the bimodal varicose transition. For each figure, the images are all of the same pattern, and the following protocols were used to take the images. Image (a) has the polarizer and analyzer parallel to each other and to the undistorted director (setup shown in Fig. 1a). The $`\lambda /4`$ plate is oriented at $`+45^{\circ }`$ with respect to the undistorted director. In this case, one observes the usual shadowgraph image. There is also an overall modulation of the intensity due to the homogeneous twist mode. For images (b), (c), and (d), the polarizer has been rotated $`90^{\circ }`$ with respect to the undistorted director (setup shown in Fig. 1b). Image (c) is a Fourier filtered version of image (b). A low-pass Fourier filter was used to highlight the intensity variation due to the homogeneous twist of the director. This long-wavelength variation is difficult to detect in the raw image because of the residual focusing effects from the x-z distortion of the director. For image (d), the optical axis of the $`\lambda /4`$ plate is oriented at $`-45^{\circ }`$ relative to the polarizer, and the image is again Fourier filtered with a low-pass filter. Therefore, image (d) should be the complement of image (c). Both sets of images were obtained by jumping the voltage from below $`V_c`$ to a value in the middle of the range for each state. A jump was used to create a pattern with both the zig and zag orientations to illustrate the different orientations of the in-plane twist in a single image. When the voltage is stepped slowly, a single orientation of the rolls exists over large regions of the cell. With the orientation used for images (b) and (c), the brighter regions correspond to regions of negative twist. Therefore, Fig. 2b and c confirm that the twist orientation is opposite to the pattern wavevector, as expected . For example, the region of zig rolls (q at $`+\theta `$) in the lower right corner appears brighter than the region of zag rolls (q at $`-\theta `$) diagonally across the middle. Also, Fig. 2d is clearly the complement of Fig. 2c, providing additional evidence that the source of the variations is an in-plane twist of the director. In Fig. 3, the bimodal varicose pattern is shown. This state corresponds to the state SO2 in Ref. . In general, the bimodal varicose pattern is described by the superposition of two modes with different amplitudes and wavevectors, where one of the wavevectors is of the zig-type and the other is of the zag-type. This can be written as $`A\mathrm{cos}(𝐪\cdot 𝐱)+B\mathrm{cos}(𝐤\cdot 𝐱)`$, where $`A`$ and $`B`$ are the two amplitudes of the modes. In general, the two wavevectors are such that the modes are not a degenerate zig and zag pair. In this case, there are two degenerate bimodal varicose states: one formed when the initial wavevector q is a zig state, and one formed when q is a zag state. The two degenerate states are shown in Fig. 3. The left half of the image consists of a pattern where the zig-type rolls were the initial state and zag-type rolls grew as a result of the instability. The right half of the image is the reverse case.
In this case, the angle between $`𝐪`$ and k is $`80^{\circ }`$. It is this superposition of a zig-type and a zag-type roll that identifies the SO2 state as the bimodal varicose state. Furthermore, Figs. 3c and 3d confirm that the direction of the homogeneous twist is still determined by the wavevector with the maximum amplitude. This provides strong evidence for the mechanisms described in Ref. as the source of the bimodal instability. Further work is needed to make quantitative measurements of the change in the amplitude of the twist mode as a function of the growth of the k mode. Because the SO2 state corresponds to the bimodal varicose state, measurements of the transition point ($`ϵ_{BV}`$) exist for some parameter values . These values for $`ϵ_{BV}`$ are in very rough agreement with calculations of the twist-mode model ; however, detailed measurements of the transition points have not been made. For example, in this system, it has not been determined whether the transition to the bimodal varicose is forward or backward. In order to elucidate the nature of the transition, Fig. 4 shows the amplitude of the pattern at the wavevectors q and k as a function of $`ϵ`$ for the sample at T = 45 °C. The amplitude of the pattern at a given wavevector is calculated from the power spectrum of each image. For each wavevector, the power is computed by summing the power in a 3 × 3 square centered on the wavevector. The amplitude is the square root of the power (a numerical sketch of this extraction is given below). Figure 4 provides strong evidence that the bimodal varicose instability is forward, within the resolution of our measurements, because the amplitude of the k mode grows continuously from zero. In Fig. 4, it appears that the amplitude of the q mode decreases at large $`ϵ`$. Though a leveling off of the amplitude is expected, the decrease is most likely an artifact of optical nonlinearities that arise at high $`ϵ`$. This is demonstrated in Fig. 5, which shows a typical image of the bimodal varicose state at $`ϵ=0.15`$. Also shown in Fig. 5 is the corresponding power spectrum. One clearly observes the large number of peaks corresponding to nonlinearities in either the optics or the pattern itself. In principle, one can compute the amplitude of the director variation $`A`$ directly from the images. In electroconvection, the shadowgraph images contain contributions proportional to both $`A`$ and $`A^2`$. This adds some complication for calculating amplitudes of superimposed oblique rolls because the $`A^2`$ terms result in sums and differences of the two wavevectors. However, the real problem in this case is the additional nonlinearities that result in the plethora of diffraction peaks in Fig. 5b. One such additional problem is the existence of caustics at these large amplitudes. An additional feature of electroconvection in I52 is that the initial bifurcation is actually a continuous, Hopf bifurcation to a state of superimposed zig and zag rolls. The large jump in amplitude to the pure zig state at $`ϵ=0.01`$ corresponds to the transition from the initially traveling rolls to the stationary state. For all of the conductivities and temperatures reported on here, there was an initial Hopf bifurcation. As $`ϵ`$ is increased further, the bimodal varicose state experiences a Hopf bifurcation to the oscillating bimodal varicose state at $`ϵ=ϵ_H`$. Figure 6 shows the power spectrum as a function of frequency for the wavevectors q, k, and for the zero wavevector of a typical time series above $`ϵ_H`$.
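For reference, the amplitude extraction just described amounts to a few lines of numerical code. A schematic version, assuming a background-corrected image array and integer pixel coordinates for the wavevector (the names are illustrative):

```python
import numpy as np

def mode_amplitude(image, qi, qj):
    """Amplitude of the pattern at the wavevector with pixel indices (qi, qj).

    The power is summed over a 3 x 3 block of the two-dimensional power
    spectrum centered on the wavevector; the amplitude is its square root.
    `image` should already be background-subtracted and -divided.
    """
    power = np.abs(np.fft.fft2(image)) ** 2
    return np.sqrt(power[qi - 1:qi + 2, qj - 1:qj + 2].sum())
```

Tracking this quantity for q and k while stepping $`ϵ`$ produces amplitude curves like those in Fig. 4.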
The presence of a peak at finite $`\omega `$ is the signature of the oscillating bimodal state. These power spectra were computed by taking a time series of 64 images 0.5 s apart. The images covered a region containing approximately 5 wavelengths of the pattern. Each image was Fourier transformed, and a time series of the Fourier transforms at each wavevector of interest was constructed. Then, the power spectra of each of these time series were calculated. For these images, the arrangement of polarizer, $`\lambda /4`$ plate, and analyzer described in Fig. 1a was used. As discussed above, the use of image subtraction implies that the power in the zero wavevector corresponds to the amplitude of the twist mode, as it represents a long wavelength variation of the intensity. Also, for this state, the initial wavevector was of the zag-type and the dual wavevector was of the zig-type. Therefore, the twist rotation is positive, and with the optical setup of Fig. 1a, this produces an overall increase of the image intensity. Figure 6 illustrates two main points. First, the Hopf bifurcation corresponds to an oscillation of the roll amplitudes and the twist amplitude about their corresponding mean values (the large peak at zero frequency). Second, the oscillation amplitude is significantly less than the mean value. The oscillating bimodal varicose, as defined in Ref. , is a pattern of the form $`A(t)\mathrm{cos}(𝐪\cdot 𝐱)+B(t)\mathrm{cos}(𝐤\cdot 𝐱)`$, where $`A(t)`$ and $`B(t)`$ oscillate roughly out of phase with each other and around different mean values. For the oscillating state observed here, Fig. 6 demonstrates that the mean values are different. Figure 7 illustrates the behavior of the oscillating bimodal varicose in real space and illustrates the out-of-phase nature of the oscillations. The image in Fig. 7 is one of the individual frames used to compute the power spectra shown in Fig. 6. Because of the optical nonlinearities, the easiest way to directly observe out-of-phase oscillations is to plot the local behavior of the pattern after Fourier filtering. This is shown in the plot in Fig. 7. The plot is constructed as follows. For each image in the time series, the Fourier transform is computed. Then, a Hanning window is applied to the region around the wavevector of interest (and its complex conjugate). The inverse Fourier transform of the result produces a real space image that corresponds to the mode of interest. In this real space image, the pixels in a $`2\times 2`$ region are averaged. The region is shown by the dot in the image in Fig. 7. A time series of these average values is then constructed. The plot in Fig. 7 is a subset of the longer time series used to compute the power spectra of Fig. 6. Figure 7 clearly illustrates the oscillations of the twist mode (squares), zig-type mode (triangles), and zag-type mode (circles) that comprise the oscillating bimodal varicose. The optical arrangement of Fig. 1a was used while taking this time series. Therefore, the positive amplitude of the twist mode represents a positive rotation angle. This is consistent with the dominant mode being of the zag-type. Also, an increase in the zero wavevector intensity represents an increase in the twist-mode amplitude, which is correlated with the increase in the zag mode amplitude. A mechanism for the oscillating bimodal varicose is discussed in Ref. , and the evidence provided by Figs. 6 and 7 for this mechanism is discussed in Sec. IV. ## IV Discussion The results presented in this paper highlight three main points.
First, the twist mode is present in the oblique roll states in I52, as it should be. Because it is a weakly damped mode , its amplitude is actually relatively large. Therefore, it must be included in any extended weakly nonlinear description of the system. Second, the observed qualitative features of the oscillating bimodal state are consistent with the mechanism proposed in Ref. . In particular, the results presented in Fig. 7 provide strong evidence that the twist mode is responsible for the dynamics of this state. The plot in Fig. 7 should be compared to the analogous plot in Fig. 8 of Ref. . In both cases, the amplitudes of the zig- and zag-type rolls are out of phase with each other. The twist mode is found to oscillate with a phase intermediate between the two modes. This is strong evidence for the proposed director-wavevector frustration mechanism, where the oscillations of the twist mode mediate the oscillations in the q and k modes . Finally, the WEM effects may remain important in the nonlinear states and have an effect on the location of the bimodal varicose transition and the following Hopf bifurcation. The fact that the WEM effects are present is clearly shown by the initial Hopf bifurcation that is present at all the parameters used here. The calculations in Ref. that include the twist amplitudes are based on the standard model of electroconvection, which does not allow for the observed primary Hopf bifurcation. However, the fact that the traveling waves were always replaced with a stationary pattern at $`ϵ\approx 0.01`$ suggests that the WEM effects are “weak” in some sense. Therefore, it is not unreasonable to qualitatively compare the results presented here with calculations based on the twist-mode model. However, as is shown in Table I, there is some quantitative disagreement between the twist-mode model and the experiment. Table I shows a comparison between calculations and experiment for two temperatures for the following measured quantities: the initial angle of the rolls with respect to the undeformed director ($`\theta `$); the angle $`\alpha `$ between q and k at the bimodal varicose instability; the transition point to the bimodal varicose state ($`ϵ_{BV}`$); and the subsequent Hopf bifurcation to the oscillating bimodal varicose ($`ϵ_H`$). There is good agreement between theory and experiment for $`\theta `$ and $`\alpha `$. The agreement for $`\theta `$ is not surprising because the WEM predicts only a small shift in $`\theta `$ from the standard model value. Likewise, the agreement for $`\alpha `$ is not surprising because the WEM does not appear to shift angles significantly. The asterisk next to the calculated value of $`ϵ_{BV}`$ for the T = 42 °C case indicates that there is a maximum in the growth rate of k at this point, but the growth rate is still negative. In fact, for this temperature, the growth rate of k does not become positive within the twist-mode model. Also, for the T = 55 °C case, the calculation predicts a value for $`ϵ_{BV}`$ that is too large. For all the parameter values used in the experiment, a transition to the oscillating bimodal varicose from the bimodal varicose state was observed. Over this same range of parameters, the calculations predict a restabilization of a single roll state that supersedes the oscillating bimodal varicose. In addition to possible WEM effects, there are a number of additional sources for the above outlined quantitative disagreements between experiment and the twist-mode model.
First, the locations of $`ϵ_{BV}`$ and $`ϵ_H`$ have not yet been calculated in a fully nonlinear calculation and can only be estimated within the context of the extended weakly nonlinear analysis. Second, there are issues of pattern selection that are not addressed in the weakly nonlinear analysis. For example, even though the weakly nonlinear analysis predicts the restabilization of the oblique rolls, the pattern may still select the oscillating bimodal varicose state. Finally, there remain important uncertainties in the material parameters of I52 that make quantitative comparison between theory and experiment difficult. The confirmation of the existence of the twist mode in electroconvection in I52 has important consequences for both the states of spatiotemporal chaos and the localized worm states that have been observed in this system. Because both of these states involve the superposition of oblique rolls, the twist mode must automatically be present, as observed here. In principle, if the states were a superposition of equal amplitudes of zig and zag, the twist mode would have zero amplitude, as the two sets of rolls produce opposite twists. However, for the state of spatiotemporal chaos, the amplitudes vary irregularly. Therefore, the twist mode may play an active role in the dynamics. Likewise, in the worm state, the zig and zag rolls have different amplitudes at the edges of the worm. Therefore, the twist mode may play a role in the localization mechanism of the worms as an additional slow field. ###### Acknowledgements. I thank Emmanuel Plaut and Werner Pesch for useful discussions. Also, I thank Emmanuel Plaut for providing me with unpublished theoretical calculations. This work was supported by NSF grant DMR-9975479.
## I Introduction For many years the topological three-dimensional Chern-Simons theory has been the source of continuous and renewed interest, with many applications ranging from pure field theory to condensed matter physics. The Chern-Simons gauge model has been the first example of a topological field theory of the Schwarz type, allowing for the computation of several topological invariants in knot theory . It is a remarkable fact that these computations can be performed within standard perturbation theory . Moreover, the Chern-Simons theory provides an example of a fully ultraviolet finite field theory, with vanishing $`\beta `$-function and field anomalous dimensions . This feature relies on the existence of an additional global invariance of the Chern-Simons action which shows up only after the introduction of the gauge fixing and of the corresponding Faddeev-Popov ghost term. This further symmetry is known as vector supersymmetry, since its generators carry a Lorentz index and, together with the BRST symmetry, give rise to a supersymmetric algebra of the Wess-Zumino type. It is worth mentioning that the nonzero temperature version of the Chern-Simons action is also available and turns out to play an important role in the applications of three-dimensional gauge theories to finite temperature effects. Therefore, it seems natural to ask whether the vector supersymmetry is still present in the case of a nonzero temperature. This is the aim of the present letter. In particular, we shall be able to show that this question can be answered in the affirmative. In this sense, the fully quantized Chern-Simons action can be considered as an example of a supersymmetric field theory at finite temperature. The paper is organized as follows. In Sect. 2 we present the finite temperature Chern-Simons action and we analyse the existence of the aforementioned supersymmetry. Sect. 3 will be devoted to the study of some consequences and to the conclusion. ## II Finite temperature Chern-Simons action In order to analyse the properties of the Chern-Simons action at finite temperature let us first recall the supersymmetric structure of the zero temperature case. Adopting the Landau gauge, for the fully quantized Chern-Simons action we have $$S=\int _{\mathbb{R}^3}d^3x\left(\frac{1}{4}\epsilon ^{\mu \nu \sigma }(A_\mu ^aF_{\nu \sigma }^a-\frac{1}{3}f_{abc}A_\mu ^aA_\nu ^bA_\sigma ^c)+b^a\partial ^\mu A_\mu ^a-\overline{c}^a\partial _\nu (D^\nu c)^a\right).$$ (II.1) Expression (II.1) is left invariant by the following nilpotent BRST transformations $$\begin{array}{cc}sA_\mu ^a=(D_\mu c)^a,\hfill & sc^a=-\frac{1}{2}f^{abc}c^bc^c\hfill \\ s\overline{c}^a=b^a,\hfill & sb^a=0.\hfill \end{array}$$ (II.2) In addition, the action (II.1) is known to possess a further rigid invariance whose generators $`\delta _\mu `$ carry a vector index, i.e. $$\begin{array}{cc}\delta _\mu c=A_\mu ,\hfill & \delta _\mu \overline{c}=0\hfill \\ \delta _\mu b=\partial _\mu \overline{c},\hfill & \delta _\mu A_\nu =\epsilon _{\mu \nu \sigma }\partial ^\sigma \overline{c},\hfill \end{array}$$ (II.3) and, together with the BRST transformations, they obey the following relations $$\begin{array}{cc}s^2=0,\{\delta _\mu ,\delta _\nu \}=0,\hfill & \\ \{\delta _\mu ,s\}=\partial _\mu +\mathrm{eqs}.\mathrm{of}\mathrm{motion},\hfill & \end{array}$$ (II.4) which, closing on-shell on the space-time translations, give rise to a supersymmetric algebra of the Wess-Zumino type.
Concerning now the nonzero temperature case, for the quantized Chern-Simons action in the imaginary time formalism , we obtain $$S_T=\int _0^\beta d\tau \int d^2x\left(\frac{1}{4}\epsilon ^{\mu \nu \sigma }(A_\mu ^aF_{\nu \sigma }^a-\frac{1}{3}f_{abc}A_\mu ^aA_\nu ^bA_\sigma ^c)+b^a\partial ^\mu A_\mu ^a-\overline{c}^a\partial _\nu (D^\nu c)^a\right),$$ (II.5) where $`\beta `$ stands for the inverse of the temperature $`T`$. As is well known, all fields $`\eta =(A,c,\overline{c},b)`$ are required to obey periodic boundary conditions along the compactified direction $`\tau `$ , namely $$\begin{array}{cc}\eta (x,\tau )=\sum _{n=-\mathrm{\infty }}^{\mathrm{\infty }}\eta ^n(x)e^{i\omega _n\tau },\hfill & \\ \eta ^n(x)=\frac{1}{\beta }\int _0^\beta d\tau \,\eta (x,\tau )e^{-i\omega _n\tau },\hfill & \end{array}$$ (II.6) where the $`\omega _n`$ are the so-called Matsubara frequencies $$\omega _n=\frac{2\pi n}{\beta }.$$ (II.7) We emphasize here that the ghost fields $`c,\overline{c}`$, although being anticommuting variables, have to be periodic in $`\tau `$. As we shall see in the following, this property will be crucial for the existence of a supersymmetric structure at nonzero temperature. In order to write down the finite temperature Chern-Simons action in terms of the Matsubara modes $`\eta ^n`$, we identify the $`\tau `$-direction with the $`x^3`$ variable and we introduce the following useful two-dimensional notation $$\begin{array}{cc}A_\mu ^n=(A_\alpha ^n,\varphi ^n),\alpha ,\beta ,\gamma =1,2,\hfill & \\ \epsilon ^{3\alpha \beta }=\epsilon ^{\alpha \beta },\epsilon ^{\alpha \beta }\epsilon _{\alpha \gamma }=\delta _\gamma ^\beta .\hfill & \end{array}$$ (II.8) Thus, for the action we obtain $$S_T=S_{inv}+S_{gf},$$ (II.9) where $$\begin{array}{cc}S_{inv}=\beta \sum _n\int d^2x\left(\frac{1}{2}\epsilon ^{\alpha \beta }\varphi ^{an}F_{\alpha \beta }^{a,-n}-\frac{i}{2}\omega _n\epsilon ^{\alpha \beta }A_\alpha ^{an}A_\beta ^{a,-n}+i\omega _nb^{an}\varphi ^{a,-n}\right)\hfill & \\ S_{gf}=\beta \sum _n\int d^2x\left(b^{an}\partial ^\alpha A_\alpha ^{a,-n}+\partial _\alpha \overline{c}^{an}\left(\partial ^\alpha c^{a,-n}+f^{abc}\sum _lA_\alpha ^{bl}c^{c,-(n+l)}\right)\right)\hfill & \\ +\beta \sum _n\int d^2x\left(\overline{c}^{an}\omega _n^2c^{a,-n}+i\omega _n\overline{c}^{an}\sum _lf^{abc}\varphi ^{bl}c^{c,-(n+l)}\right)\hfill & \\ & \\ F_{\alpha \beta }^{an}=\partial _\alpha A_\beta ^{an}-\partial _\beta A_\alpha ^{an}+f^{abc}\sum _lA_\alpha ^{bl}A_\beta ^{c(n-l)}.\hfill & \end{array}$$ (II.10) In terms of the Matsubara modes, the BRST transformations (II.2) read $$\begin{array}{cc}sA_\alpha ^{an}=\partial _\alpha c^{an}+f^{abc}\sum _lA_\alpha ^{bl}c^{c(n-l)},\hfill & \\ s\varphi ^{an}=i\omega _nc^{an}+f^{abc}\sum _l\varphi ^{bl}c^{c(n-l)},\hfill & \\ sc^{an}=-\frac{1}{2}f^{abc}\sum _lc^{bl}c^{c(n-l)},\hfill & \\ s\overline{c}^{an}=b^{an},\hfill & \\ sb^{an}=0.\hfill & \end{array}$$ (II.11) Moreover, it can be checked that the nonzero temperature action (II.9) is left invariant by the following further rigid transformations $`\delta _\alpha ,\delta `$, namely $$\begin{array}{cc}\delta _\alpha A_\beta ^{an}=i\omega _n\epsilon _{\alpha \beta }\overline{c}^{an},\hfill & \\ \delta _\alpha \varphi ^{an}=\epsilon _{\alpha \beta }\partial ^\beta \overline{c}^{an},\hfill & \\ \delta _\alpha c^{an}=A_\alpha ^{an},\hfill & \\ \delta _\alpha b^{an}=\partial _\alpha \overline{c}^{an},\hfill & \\ \delta _\alpha \overline{c}^{an}=0,\hfill & \end{array}$$ (II.12) and $$\begin{array}{cc}\delta A_\alpha ^{an}=\epsilon _{\alpha \beta }\partial ^\beta \overline{c}^{an},\hfill & \\ \delta c^{an}=\varphi ^{an},\hfill & \\ \delta \varphi ^{an}=0,\hfill & \\ \delta b^{an}=i\omega _n\overline{c}^{an},\hfill & \\ \delta \overline{c}^{an}=0.\hfill & \end{array}$$ (II.13)
The generators $`\delta _\alpha ,\delta `$ give rise, together with the BRST operator $`s`$, to the following algebraic relations $$\begin{array}{cc}\{\delta ,s\}\eta ^n=i\omega _n\eta ^n+\mathrm{eqs}.\mathrm{of}\mathrm{motion},\hfill & \\ \{\delta _\alpha ,s\}\eta ^n=\partial _\alpha \eta ^n+\mathrm{eqs}.\mathrm{of}\mathrm{motion},\hfill & \\ \{\delta _\alpha ,\delta _\beta \}=0,\hfill & \\ \delta ^2=0.\hfill & \end{array}$$ (II.14) We see therefore that the supersymmetric structure (II.4) of the zero temperature Chern-Simons theory persists also in the case of a nonvanishing temperature. In particular, it is easily recognized that the operator $`\delta `$ of eqs. (II.13) corresponds to the generator $`\delta _\tau `$ of eqs. (II.3) along the compactified direction $`\tau =x^3`$. It is also worth underlining here that the existence of a nonzero temperature supersymmetric algebra relies on the periodic boundary conditions required for the Faddeev-Popov ghosts $`c,\overline{c}`$. As is well known, this property follows from the gauge invariance of the nonzero temperature action $`S_T`$. Moreover, the supersymmetry turns out to be crucial in order to ensure that no physical excitations show up in the nonzero temperature case, as will be discussed in the next section. In other words, the nonzero temperature Chern-Simons action remains a topological theory, with no local physical degrees of freedom. ## III Conclusion It has already been underlined that in the zero temperature case the existence of the vector supersymmetry is deeply related to the topological nature of the Chern-Simons term. We recall in fact that the supersymmetry shows up only after the introduction of the ghost fields. As a consequence, it follows that the contributions coming from the propagating components of the gauge field are exactly compensated by those corresponding to the ghosts, resulting in the well known ultraviolet finiteness of the theory. This means that there are no local physical degrees of freedom, i.e. that the theory is topological. The existence of a supersymmetric structure in the case of nonzero temperature suggests a similar behaviour for the finite temperature version of the Chern-Simons theory. This fact can be easily confirmed in the abelian case by showing that the partition function turns out to be independent of the temperature, implying the vanishing of all relevant thermodynamic quantities. Let us compute in fact the partition function for the abelian Maxwell-Chern-Simons action $$𝒵=e^{-\beta \mathcal{F}}=\int 𝒟A𝒟c𝒟b𝒟\overline{c}\,e^{-S_{MCS}},$$ (III.15) with $$S_{MCS}=\int _0^\beta d\tau \int d^2x\left\{\frac{g}{4}F_{\mu \nu }F^{\mu \nu }+i\frac{1}{2}\epsilon ^{\mu \sigma \nu }A_\mu \partial _\sigma A_\nu +b\partial ^\mu A_\mu -\overline{c}\partial ^\mu \partial _\mu c\right\}.$$ (III.16) We have introduced a constant $`g`$ in order to take into account the Maxwell term. Of course, the pure Chern-Simons contribution is recovered in the limit $`g\to 0`$. For the free energy $`\mathcal{F}`$ we obtain the following result $$\mathcal{F}=gL^2\int \frac{d^2p}{4\pi ^2}\left\{\sqrt{\stackrel{}{p}^2+1/g^2}+\frac{2}{\beta }\mathrm{ln}\left(1-e^{-\beta \sqrt{\stackrel{}{p}^2+1/g^2}}\right)\right\},$$ (III.17) where $`L^2`$ stands for the two-dimensional area. Obviously, expression (III.17) does not depend on $`\beta `$ in the limit $`g\to 0`$. Again, there is a complete compensation between the ghost and the gauge sectors, as expected from the existence of the supersymmetry.
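The $`\beta `$-independence in the $`g\to 0`$ limit can also be checked numerically: the topological mass $`1/g`$ diverges and the Bose factor in (III.17) is exponentially suppressed. A quick sketch of this check (grid sizes and parameter values are arbitrary choices, and the overall factor $`gL^2`$ is omitted):

```python
import numpy as np

def thermal_term(beta, g, pmax=50.0, n=4000):
    """beta-dependent part of Eq. (III.17) without the gL^2 prefactor:
    int d^2p/(4 pi^2) (2/beta) ln(1 - exp(-beta*sqrt(p^2 + 1/g^2))),
    with d^2p = 2 pi p dp for the angular integral (trapezoid rule)."""
    p = np.linspace(1e-6, pmax, n)
    omega = np.sqrt(p**2 + 1.0 / g**2)
    integrand = p * (2.0 / beta) * np.log1p(-np.exp(-beta * omega))
    return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(p)) / (2.0 * np.pi)

for g in (1.0, 0.3, 0.1, 0.03):
    print(g, thermal_term(beta=1.0, g=g))
# The thermal contribution vanishes rapidly as g -> 0: the pure
# Chern-Simons partition function carries no temperature dependence,
# in line with the supersymmetry argument above.
```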
The analysis of the ultraviolet finiteness of the nonabelian finite temperature case, as well as the computation of the vacuum expectation value of Polyakov loops, are under investigation. Acknowledgements The Conselho Nacional de Desenvolvimento Científico e Tecnológico CNPq-Brazil, the Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (Faperj) and the SR2-UERJ are acknowledged for the financial support.
# Electric field induced memory and aging effects in pure solid N₂ \[ ## Abstract We report combined high sensitivity dielectric constant and heat capacity measurements of pure solid N₂ in the presence of a small external ac electric field in the audio frequency range. We have observed strong field induced aging and memory effects which show that field cooled samples may be prepared in a variety of metastable states, leading to a free energy landscape with experimentally “tunable” barriers; tunneling between these states may occur within laboratory time scales. \] The interplay of geometric frustration and strong ferromagnetic as well as antiferromagnetic interactions in magnetic systems such as Gd₃Ga₅O₁₂, Y₂Mo₂O₇, Ho₂Ti₂O₇ and SrCr₉ₚGa₁₂₋₉ₚO₁₉ with $`p\approx 0.9`$ has been the subject of intense interest . In these systems, frustration is caused by competition between nearest neighbor exchange interactions, which leads to macroscopic degeneracies and can give rise to a variety of new phenomena at low temperatures, including spin glasses, spin liquids and magnetic analogs of ice. However, a good understanding of these systems is hindered by the fact that they are highly complicated, and the interactions are often complex. In the present work we show that there exists a simple electrical analog of such systems, namely pure solid N₂, where the $`hcp`$ lattice geometry at temperatures above $`T_{\alpha \beta }`$ is incompatible with the symmetry of the interactions, and the resulting geometrical frustration destroys the long range orientational order favored by the electric quadrupolar interactions between molecules. There are several unique features that make solid N₂ an ideal system to study the interplay of geometric frustration and interactions: (i) It is a simple system with well understood molecular interactions . (ii) Solid N₂ undergoes a structural transition from $`hcp`$ to $`fcc`$ at $`T_{\alpha \beta }`$ which significantly lowers the geometric frustration; this gives us the unique opportunity to study the effects of geometric frustration by comparing the system in the two geometries. (iii) As we will show in the present work, but unsuspected previously, the system can be manipulated to become trapped in a variety of metastable states by cooling it in a small ac electric field in the audio frequency range. (iv) The time scale involved for the system to tunnel between these trapped macroscopic states can be several hours (short compared to silicate glasses), which allows for the possibility of studying the nonequilibrium behavior, including aging effects, in the laboratory. (v) It is easy to increase the disorder by small amounts at a time by replacing some of the quadrupolar N₂ molecules by spherical Ar atoms . This allows one to explore the interplay of disorder and frustration along with interactions; these studies will be reported elsewhere. It is generally believed that the orientational ordering in N₂ is nucleated at a temperature above $`T_{\alpha \beta }\approx 36`$ K, although no experiment has indicated the temperature at which this ordering begins . The high temperature $`hcp`$ or $`\beta `$-phase does not support long range orientational order, but there can still be some short range local ordering.
In this phase, N<sub>2</sub> is known to show hindered rotation due to the incompatibility between the lattice and the molecular sizes resulting in a complex free energy landscape. Recently we showed that the dielectric constant $`\epsilon (T)`$ of pure solid N<sub>2</sub> exhibited unexpected hysteresis in the audio frequency range whenever the sample is heated above an onset temperature $`T_h\approx 42`$ K in the presence of the bias electric field . However, the significance of the small external ac field was not obvious. In this paper we present systematic dielectric as well as high sensitivity heat capacity data for samples that are cooled either in the presence or absence of an external ac electric field. These results show that when field-cooled in a small ac field in the audio frequency range, solid N<sub>2</sub> shows remarkable glass-like memory and aging effects which are readily observable within the laboratory time scales. In contrast, the zero-field cooled samples do not show these effects. The dielectric measurements were carried out using a three terminal ac capacitance bridge having a sensitivity of two parts per billion . The high sensitivity heat capacity measurements were carried out using an advanced dual-slope method . Figure 1 shows the relative change in $`\epsilon (T)`$ in the 4.2-55 K range (the melting temperature is $`\approx 63`$ K). Curve (1), taken from ref. REFERENCES, shows the zero-field cooled result. The (lower) family of curves (2-4) shows $`\epsilon (T)`$ when the sample is cooled in the presence of a bias field of 5 kV/m at 1 kHz, after warming up to $`T_h<T_{max}<48`$ K. The difference between these curves is mainly due to the final temperature ($`T_{max}`$) in the above range from which the sample is cooled. As reported previously , once cooled down along any one of these curves, thermal cycles are completely reversible along the same curve as long as the highest temperature remains below $`T_h`$. The onset of strong hysteresis upon warming above $`T_h`$ was noted in ref. REFERENCES, but the significance of the field-cooling was not appreciated. In the (upper) family of curves (5-9), we show the results when the sample temperature is raised to $`53`$ K, annealed at $`50<T<53`$ K (for 10 to 12 h.), and then field-cooled in the presence of the above bias field. The $`\epsilon (T)`$ increases sharply, reaching a maximum at $`51`$ K, and then decreases slowly. When cooled from this temperature, the warm-up and cool-down curves are no longer similar even in the $`\alpha `$-phase and are very different from the zero-field cooled curve (1) or the other, lower family of reversible curves (2-4). For $`4.2<T<30`$ K, $`\epsilon (T)`$ decreases linearly with $`T`$ during the warm-up cycle, while it increases linearly with $`T`$ during the cool-down cycle. Also, we have observed spontaneous changes in $`\epsilon `$ at various temperatures, indicated by the symbols in Fig. 1. We note that the spontaneous changes in $`\epsilon (T)`$ occur at apparently random temperatures and these are present in both the $`\alpha `$ as well as the $`\beta `$-phases but only during the cool-down cycle. Thus the field cooled sample becomes trapped in metastable configurations and the free energy barriers vary from very large for the lower family of curves to sufficiently small for the upper family of curves to allow random tunneling to lower states.
Remarkably, the value of $`T_{\alpha \beta }`$ is the same for all curves to within 0.1 K although the change in $`\epsilon (T)`$ at $`T_{\alpha \beta }`$ can be different. If the system is left isolated with the external field turned off and annealed above $`T_h`$ for several hours and then cooled, $`\epsilon (T)`$ retraces the lowest curve in Fig. 1, independent of the sample history. This shows that the sample can be brought back to its original state by appropriate annealing, and that the memory and aging effects are induced by cooling in the small ac field, and not due to lattice dislocations or other defects. When the sample temperature is further raised to $`\approx 57`$ K, we observed a massive, spontaneous change in $`\epsilon (T)`$ ($`2.5\times 10^{-2}`$) at 56.5 K shown in Fig. 2. This temperature is highly reproducible to within 0.1 K for various samples, though a small hysteresis ($`\approx 1`$ K) in temperature is present between warm-up and cool-down cycles (not shown). No anomaly near this temperature has been observed in previous experiments, either in dielectric measurements at microwave frequencies or in heat capacity measurements . The fact that such a significant transition has not been observed before prompted us to suspect that the transition may occur only in the presence of the audio frequency electric field. However, because the dielectric measurements always involve an external ac field, and the bias field itself affects the measurement, it is not possible to carry out a true zero-field cooled dielectric measurement at these temperatures. We therefore carried out very sensitive heat capacity measurements at constant volume for both field cooled and zero-field cooled samples. Note that, unlike the previous heat capacity data obtained using adiabatic methods, our high sensitivity nonadiabatic method revealed, in the absence of any external field (curve (1) in Fig. 3), a new critical behavior at $`\approx 51`$ K in the heat capacity. This critical behavior is in striking agreement with our dielectric data (inset of Fig. 2). In particular, in the absence of the external electric field, $`C_v(T)`$ retraces curve (1) in Fig. 3 upon thermal recycling and the general behavior of $`C_v(T)`$ near 56.5 K agrees very well with the previously reported values . When a uniform 10 kV/m, 1 kHz field is applied across the sample from 10 K, the difference in $`C_v(T)`$ is negligible up to the continuous transition near 51 K, but significant deviation from the $`C_v(T)`$ of zero-field cooled samples can be observed for $`T>51`$ K (see curve (2), Fig. 3). When the above sample, exposed to the external field from 10 K, is cooled down from 60 K and its heat capacity is obtained once again during the warm-up cycle (while leaving the field on throughout), the observed $`C_v(T)`$ (curve (3), Fig. 3) is larger than that of the zero-field cooled sample in the $`\alpha `$ as well as $`\beta `$-phases, well above the sensitivity of our apparatus. It should be noted that the structural transition as well as the new continuous transition at $`\approx 51`$ K remain the same independent of the field. The data clearly show that in the field cooled samples there exists a sharp time-dependent rise in $`C_v(T)`$ near 56.5 K (indicated by the symbols) which finally gives rise to a peak at this temperature. Different curves near 56.5 K in Fig. 3 (curves (2) and (3)), obtained at 3 h. intervals, indicate the slow thermal evolution of $`C_v`$ with time. (In contrast, our dielectric data (Fig.
2) shows rapid change in $`\epsilon `$ at this temperature.) The absence of a peak in C<sub>v</sub> for the zero-field cooled sample (curve (1), Fig. 3) near 56.5 K indicates that the observed anomaly at 56.5 K is due entirely to the external electric field. Comparison of curves (1) and (3) clearly demonstrates the strong effect of the external ac field on the thermodynamic behavior of solid N<sub>2</sub>. We have carried out several dielectric measurements to determine the range of excitation field strength and frequency for which solid N<sub>2</sub> shows this remarkable field-induced nonequilibrium behavior. Several samples were initially zero-field cooled down to 4.2 K, carefully annealed at $`41.5<T<43`$ K for 6 to 8 h., and then annealed once again at $`50<T<53`$ K for 10 to 12 h. at various external field strengths and frequencies. After cooling the samples to 4.2 K, a standard 100 V/m bias field at 1 kHz was applied to obtain the warm-up $`\epsilon (T)`$ data in the $`\alpha `$-phase for excitation field strengths $`<`$ 100 V/m and frequencies $`>`$ 20 kHz. For other excitation field strengths and frequencies, the bias field and the excitation field are the same. This particular sequence is followed because the ac capacitance bridge has the required sensitivity only for bias fields larger than 100 V/m, and in the audio frequency range . Also, as pointed out previously , once the sample temperature is below $`T_h`$, the bias field has no observable effect on the nonequilibrium behavior (and hysteresis) of the samples. From these experiments we observed strong field-induced nonequilibrium behavior for external electric fields stronger than $`\approx 20`$ V/m (peak-to-peak). The $`\epsilon (T)`$ curves of the samples field cooled in fields stronger than 20 V/m are very similar to those shown in Fig. 1. For dc 1 kV/m as well as ac 2 V/m field cooled samples, the $`\epsilon (T)`$ is only slightly different from the zero-field cooled case (not shown). In particular we do not observe the spontaneous and random jumps in $`\epsilon (T)`$ and the linear temperature dependence at these temperatures, which characterize the nonequilibrium nature of the field cooled samples. To find out the upper limit of the frequency of the electric field for which we observe the nonequilibrium behavior, we field cooled the sample in a 5.2 kV/m, 90 MHz external uniform field. We observed no spontaneous and random jumps in $`\epsilon (T)`$ for this sample, but the small hysteresis is still present and the $`\epsilon (T)`$ curve is close to that of the zero-field cooled sample. This indicates that the glass-like states can be excited only at low frequencies extending up to perhaps hundreds of kHz and for field strengths greater than $`\approx 20`$ V/m. This could be one of the reasons for the failure of the previous microwave measurements to observe the remarkable effect of the external electric field on the thermodynamics of solid N<sub>2</sub>. We would like to point out that while for $`T<56.5`$ K the field cooled $`\beta `$-phase N<sub>2</sub> shows remarkable aging behavior, for $`56.5<T<T_M`$, where lattice defects may be dominant due to the proximity to melting, $`\epsilon (T)`$ is linear as well as reversible with thermal recycling (see Fig. 2).
Here we would like to emphasize that the hysteresis values shown in the various plots are accurate to within 1%, and that for zero-field cooled samples we observed no hysteresis or shift in the absolute value of $`\epsilon `$ at 4.2 K even after annealing the sample for a long time (a few hours) at 50 to 55 K (without any external field). This shows that the lattice defects present near the melting temperature cannot be the driving mechanism for the aging and memory effects. The external electric field is thus indeed responsible for the memory effects observed in solid N<sub>2</sub>, although their origin is not understood. The strong effect of cooling in the presence of a small ac field in the audio frequency range at such high temperatures is puzzling. The audio frequency field corresponds to a temperature scale of a few $`\mu `$K, and the strength of the electric field coupling to the molecular polarizability is negligible compared to the dominant electric quadrupole-quadrupole (EQQ) interactions. Clearly the process of orientational ordering is disrupted by the presence of the field, although the field itself is not strong enough to make a difference if the system is already ordered (a uniform field has no effect on EQQ). The presence of short range ordering in the $`\beta `$-phase along with the strong effect of field-cooling therefore implies that the formation or growth of these clusters is inhibited by the presence of the field. The geometrical frustration generated by the symmetry incompatibility of local and extended degrees of freedom results in a thermodynamically large number of accessible ground states. This macroscopic ground state degeneracy presents a new paradigm with which one views condensed matter systems that form glass-like states. The characteristic glass dynamics and aging effects result from the relaxation among a large number of nearly degenerate ground states. In the present case, where in the absence of substitutional disorder the frustration is due entirely to the symmetry properties of the interaction, the application of an electric field, although small, can perturb the energy landscape of the interacting molecules and result in the field-cooled memory effects. Thermal cycling to temperatures where the relaxation rate becomes sufficiently rapid is required to erase the memory effect. In the case of pure N<sub>2</sub>, typical energy spacings are of the order of 1 $`\mu `$K (the $`ortho`$-$`para`$ spacing for local ordered clusters) and ac fields at frequencies of the order of 10 kHz are therefore most effective in inducing transitions within the energy landscape. As a result of these transitions, a cluster of orientationally ordered molecules will find the molecular orientations slightly changed from the locked ordering directions, leading to a partial destruction of the ordering, provided the clusters are small enough. As observed, the effect is absent for rf as well as pure dc electric fields. Upon field-cooling from $`T_{max}>51`$ K where large ordered clusters are not present, the ac field modifies the landscape of metastable states and can generate spontaneous tunneling to lower energy states (e.g., curve (8) in Fig. 1). However, when we begin the field-cooling from $`T_{max}<51`$ K where a broad distribution of ordered cluster sizes already exists, the small changes in the orientation of the molecules can only affect the small clusters (giving rise to hysteresis only).
Thus the system can be prepared in a variety of trapped metastable states (with small differences in the ground state energies but high barriers), leading to a free energy landscape with experimentally tunable (large or small) barriers, by cooling from different temperatures in the presence of an external electric field. In conclusion, solid N<sub>2</sub> provides a unique opportunity to quantitatively address a variety of questions in the rapidly evolving area of aging and nonequilibrium phenomena in glass-like materials. This work is supported by a grant from the National Science Foundation No. DMR-962356.
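As a numerical footnote to the energy scales invoked above (our own back-of-the-envelope check, not part of the paper): converting the drive frequency to a temperature via $`k_BT=h\nu `$ confirms that audio-frequency fields probe $`\mu `$K scales.

```python
h, k_B = 6.626e-34, 1.381e-23        # Planck constant (J s), Boltzmann constant (J/K)

# Temperature equivalent T = h*nu/k_B of an ac drive at audio frequencies.
for nu_hz in (1e3, 1e4, 1e5):        # 1 kHz, 10 kHz, 100 kHz
    print(f"{nu_hz:>9.0f} Hz  ->  {h*nu_hz/k_B*1e6:.3f} microkelvin")
```

A 10 kHz drive corresponds to about 0.5 $`\mu `$K and a few hundred kHz to a few $`\mu `$K, consistent with the $`ortho`$-$`para`$ cluster spacing quoted above.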
no-problem/0003/astro-ph0003121.html
ar5iv
text
# Deeply penetrating banded zonal flows in the solar convection zone ## 1. Introduction The intensely turbulent state of the solar convection zone is revealed by the patterns of granulation, mesogranulation and supergranulation evident in its surface layers (e.g., Brummell, Cattaneo & Toomre 1995). Yet accompanying such turbulent and seemingly chaotic small-scale dynamics are also signs of ordered large-scale behavior. Most notably the solar differential rotation involves a relatively smooth decrease in angular velocity from equator to pole, both in the surface layers (e.g., Snodgrass 1984) and within the convection zone as inferred from helioseismic measurements (e.g., Thompson et al. 1996; Schou et al. 1998a). On the largest scales, the magnetic activity similarly exhibits well-defined rules as the 22-year cycle progresses. An enticing link between the latitudes of field emergence and small variations in the rotation rate of the surface layers is provided by bands of slightly faster and slower than average zonal flows, called torsional oscillations, that were observed from direct Doppler measurements to migrate towards the equator in a manner similar to the zones of solar activity (e.g., Howard & LaBonte 1980; Snodgrass, Howard & Webster 1985; Ulrich 1998). Helioseismic analysis of data from the Michelson Doppler Imager (MDI) instrument (e.g., Scherrer et al. 1995) on the Solar and Heliospheric Observatory (SOHO) spacecraft has confirmed the presence of such bands of weak zonal flow, and their drift towards the equator, for the present solar cycle (Kosovichev & Schou 1997; Schou et al. 1998a,b; Schou 1999). Although the causal relation between these banded flows and the zones of magnetic activity is still unclear, it is important to understand whether the flows are confined to the layer of rotational shear just below the solar surface. In this letter, we address such questions using two extensive helioseismic data sets, covering slightly over four years, obtained with MDI and with the ground-based Global Oscillation Network Group (GONG) project (e.g., Harvey et al. 1996). We establish the consistency of the independent determinations of the flow from the two data sets, and infer that the zonal banding signature extends to depths of about 60 Mm (or about 8% in radius) below the solar surface. Thus these are not superficial features, and provide evidence of ordered rotational responses as the magnetic cycle is progressing. More extensive accounts of such analyses of zonal flows are provided for GONG data by Howe, Komm & Hill (2000) and for MDI data by Toomre et al. (2000). ## 2. Observations and data analyses The rotation rate of the solar convection zone has been inferred through inversion of observed rotational splittings of solar f and p modes. Two sets of observations have been used. One was obtained by the GONG network over the period 1995 May 7 to 1999 June 26. This set consists of 40 overlapping series of 108 days each, with starting dates 36 days apart. The second set was obtained by MDI and consists of 11 contiguous sets from 1996 May 1 to 1998 June 24 (before control of the spacecraft was lost), one set from 1998 October 23 to 1998 December 21 (after control was reasserted), and four sets from 1999 February 3 to 1999 November 17. While each of the sets nominally covers 72 days, the ones surrounding the loss of contact are missing a few days. 
The dependence of the frequencies on the azimuthal order was represented in terms of an expansion on orthogonal polynomials (Ritzwoller & Lavely 1991), expressed in terms of the so-called $`a`$ coefficients $`a_k(n,l)`$, depending on the radial order $`n`$ and the degree $`l`$ of the mode, as well as on the order $`k`$ of the coefficient. The odd coefficients are related to the angular velocity $`\mathrm{\Omega }(r,\theta )`$ (as a function of the distance $`r`$ to the solar center and the co-latitude $`\theta `$) by $$2\pi a_{2s+1}(n,l)=\int _0^R\int _0^\pi K_{nls}^{(a)}(r,\theta )\mathrm{\Omega }(r,\theta )r\,dr\,d\theta ,$$ (1) where $`R`$ is the solar radius and the kernels $`K_{nls}^{(a)}`$ are assumed known from a solar model. The GONG data comprised around 10,000 coefficients, up to $`a_{15}`$, for a total of typically 1,200 p-mode multiplets $`(n,l)`$ for $`l\le 150`$, whereas the MDI sets contained approximately 30,000 coefficients, up to $`a_{35}`$, for roughly 1,800 multiplets with $`l\le 300`$. The inversions of the relations (1) were carried out by means of two methods, described by Schou et al. (1998a): two-dimensional regularized least-squares fitting (RLS) and two-dimensional subtractive optimally localized averages (OLA). ## 3 Results and discussion The overall features of the inferred rotation profile are very similar to those obtained by Schou et al. (1998a) based on 144 days of MDI data. Here we concentrate on the time-dependent aspects of the dynamics of the upper portions of the convection zone. These are most readily studied by considering departures of the reconstructed rotation rate from its temporal average $`\overline{\mathrm{\Omega }}(r,\theta )`$. Figure 1 shows the evolution of these residuals as a function of latitude at four target depths, using OLA inversion of the MDI data. The inversion is only sensitive to the component of rotation symmetric around the equator; even so, to show more clearly the evolution of the features, we have included both hemispheres in the plots. The residuals show alternating bands of positive and negative zonal velocity, relative to the temporal mean, converging towards the equator with increasing time. The amplitude of these flows is around 1.5 nHz, corresponding to velocities of up to around $`6\mathrm{m}\mathrm{s}^{-1}`$. The flows are visible, at roughly the same amplitude, in the inversion targeted at $`0.92R`$, and faint traces are visible in the inversion targeted at $`0.88R`$; we return to the significance of this below. Although the signal shown in Figure 1 seems strong and coherent, some doubt about its reality may remain. Thus access to the independent GONG dataset is essential, with the additional advantage that it starts almost a year before the MDI data. As a further test, we also apply two different analysis techniques to the set, as shown in Figure 2. The GONG and MDI data, where they overlap in time, are essentially consistent at $`r=0.99R`$. At $`r=0.95R`$ the GONG RLS reconstructions are somewhat noisier so that the subtle signature of the migrating zonal bands is less obvious. The RLS and OLA inversion results for the MDI data agree very well at both depths illustrated. To provide a more quantitative comparison, Figure 3 shows the GONG and MDI solutions at selected radii and latitudes as a function of time. It is evident that the three sets of results largely agree within their error bars.
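As an illustrative aside (ours, not the paper's): the RLS step behind such inversions amounts to a Tikhonov-regularized least-squares fit of a discretized $`\mathrm{\Omega }`$ to the observed $`a`$ coefficients. The sketch below uses made-up Gaussian kernels in place of the real mode kernels $`K_{nls}^{(a)}`$; the parameter `lam` embodies the trade-off between noise magnification and spatial resolution.

```python
import numpy as np

rng = np.random.default_rng(1)
r = np.linspace(0.7, 1.0, 60)                  # fractional radius grid
dr = r[1] - r[0]

# One hypothetical averaging kernel per "mode"; row i integrates Omega against K_i.
centers = rng.uniform(0.75, 0.99, 150)
K = np.exp(-(r - centers[:, None])**2 / (2 * 0.02**2)) * dr

omega_true = 450.0 + 20.0 * np.sin(40.0 * r)            # test rotation profile (nHz)
a = K @ omega_true + rng.normal(0.0, 0.05, K.shape[0])  # noisy "a coefficients"

# RLS: minimize |K w - a|^2 + lam |D2 w|^2, with D2 a second-difference
# smoothing operator that penalizes unresolved wiggles.
D2 = np.diff(np.eye(r.size), n=2, axis=0)
lam = 1e-3
w = np.linalg.solve(K.T @ K + lam * D2.T @ D2, K.T @ a)

print(np.abs(w - omega_true)[5:-5].max())      # error in the well-constrained interior
```

Raising `lam` suppresses the noise at the cost of smearing sharp features, which is why the resolution quoted for such inversions is always finite.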
Also, the variations are highly statistically significant: at the equator the angular velocity decreases uniformly, whereas at latitude $`30^{\circ }`$, for example, the variation reflects mainly the passage of the band of more rapid rotation. The inversions reveal a considerably different behavior at the two depths at latitude $`60^{\circ }`$, much as could also be inferred from Figure 2. The interpretation of any inversion results must take into account their finite resolution, as well as the error properties of the inferences. In particular, the solution obtained at a given location contains contributions from $`\mathrm{\Omega }`$ at other points, while error correlation between the solutions at different locations may give the impression of coherent structure where none exists. Quantitative measures of these effects can be obtained from detailed analyses of the inversion (e.g. Schou et al. 1998a; Howe & Thompson 1996). As an alternative, we have considered artificial data for a number of prescribed rotation laws, with error properties corresponding to those of the solar data. The mode set and errors used correspond to those of a typical MDI set. Superimposed on a smoothly varying flow in the latitudinal direction, the artificial rotation profiles possess a single pair of flows, $`10^{\circ }`$ wide, rotating 3 nHz faster than the background level and moving equatorwards at $`5^{\circ }`$ per sample. There are nine samples in all in each ‘time’ sequence. Three cases are illustrated in Figure 4 using OLA inversions: in the first, the flows extend from the surface to a depth of 5% of the solar radius, in the second to a depth of 8% and in the third to a depth of 20%. For all cases, the branches of the flow are strongly visible at 0.99$`R`$, and the flows are evident to about the depth to which they are imposed, while disappearing below that depth. The 0.92$`R`$ case most resembles the solar observations illustrated in Figure 1, whereas the disappearance of the flows in the 0.95$`R`$ case at 0.93$`R`$ and the visibility of the 0.80$`R`$ flow at 0.84$`R`$ are both inconsistent with the solar observations. This evidence, taken together with many other cases which we have tested, strongly suggests that the solar flow structure extends to a depth of at least 0.08$`R`$ with substantial amplitude, but does not extend much further than 0.10$`R`$. The latitudinal width of the Sun’s banded flows (Fig. 1) is similar to the $`10^{\circ }`$ width assumed in the artificial data: this is consistent also with the Doppler measurements at the surface (Ulrich, 1998). Comparison of Figures 1 and 4 suggests that the solar flows may be somewhat weaker than the $`3\mathrm{nHz}`$ chosen for the artificial case. At the highest latitudes, Figures 1 and 3 show more dynamical variations than at lower latitudes. This is likely related to the lesser moment of inertia associated with the polar regions. There may be evidence for the formation of a new band of rapid rotation tending towards lower latitudes; the further evolution of this feature should be followed over the coming years. ## 4 Conclusions Analysis of extended series of GONG and MDI data has revealed coherent banded flow structures in the solar convection zone. These correspond to the torsional oscillations detected in direct Doppler observations of the solar surface.
We have demonstrated that the flows are likely to extend to a depth of at least 60 Mm, a substantial fraction of the total 200 Mm depth of the convection zone, and considerably more than the depth (about 35 Mm) at which the rotation rate attains its maximum in the subsurface radial shear layer at low latitudes (cf. Schou et al. 1998a). In addition, there appear to be other systematic variations with time of the residual rotation rate, with different signatures at low and high latitudes (cf. Fig. 3). Inversions of global oscillation frequency splittings sample the component of rotation symmetric around the equator. The actual flows will exhibit some level of asymmetry which will depend on the time scale used in the analysis. Indeed, local analyses by means of the time-distance and ring-diagram techniques (Giles, Duvall, & Scherrer 1998; Haber et al. 2000) have shown features similar to those found here, but with clear differences between the two hemispheres. The link between the evolving latitudinal positioning of the faster zonal bands and the sites of sunspot emergence suggests that the dynamics are related, yet how this is accomplished is uncertain. The strong magnetic fields most likely originate from deep within the Sun, probably formed by dynamo action near the base of the convection zone (e.g., Spiegel & Zahn 1992; Parker 1993; Weiss 1994; Charbonneau & MacGregor 1997). Field bundles ascending from this region through the convection zone, before erupting into the atmosphere as large-scale magnetic loops, could well lead to significant perturbations in velocity and thermal fields there. This is likely to be accompanied by some redistribution of angular momentum, given that the magnetic structures will attempt to conserve their original angular momentum (e.g., Brummell, Cattaneo & Toomre 1995). The coupling of a highly turbulent medium with ascending magnetic structures, and their mutual feedbacks, have not yet been assessed in recent flux-tube models. Global simulations of turbulent convection in rotating spherical shells (e.g., Elliott et al. 2000; Miesch et al. 2000; Miesch 2000) to study the resulting differential rotation have revealed intrinsic variability in zonal flows over intervals of several rotation periods, some of which may be inertial oscillations (e.g., Gunther & Gilman 1985), but such modelling has not included large-scale magnetic fields. Obtaining propagating bands and time scales of variation of the order of the solar cycle seems problematic unless there is some selective coupling to magnetic processes. Adding to the puzzle is that the evolving zonal bands are present at the higher latitudes even before the prominent large-scale magnetic eruptions begin (e.g. Ulrich 1998), as is the case within this cycle. Continued helioseismic observations as this magnetic cycle is proceeding may help to provide clues about such aspects of solar internal dynamics, for we now have the ability to probe hitherto unseen flows well below the solar surface. ## Acknowledgements This work utilizes data obtained by the GONG project, managed by the National Solar Observatory, a Division of the National Optical Astronomy Observatories, which is operated by AURA, Inc. under a cooperative agreement with NSF. The data were acquired by instruments operated by the Big Bear Solar Observatory, High Altitude Observatory, Learmonth Solar Observatory, Udaipur Solar Observatory, Instituto de Astrofísica de Canarias, and Cerro Tololo Interamerican Observatory.
The Solar Oscillations Investigation (SOI) involving MDI is supported by NASA grant NAG 5-3077 to Stanford University. SOHO is a mission of international cooperation between ESA and NASA. RWK, and RH in part, were supported by NASA contract S-92698-F. JC-D was supported by the Danish National Research Foundation through the establishment of the Theoretical Astrophysics Center. MJT was supported in part by the UK Particle Physics & Astronomy Research Council. JT was supported in part by NASA through grants NAG 5-7996 and NAG 5-8133, and by NSF through grant ATM-9731676.
no-problem/0003/quant-ph0003148.html
ar5iv
text
# Comment on “Quantum mechanics of an electron in a homogeneous magnetic field and a singular magnetic flux tube” ## Abstract Recently Thienel \[Ann. Phys. (N.Y.) 280 (2000), 140\] investigated the Pauli equation for an electron moving in a plane under the influence of a perpendicular magnetic field which is the sum of a uniform field and a singular flux tube. Here we criticise his claim that one cannot properly solve this equation by treating the singular flux tube as the limiting case of a flux tube of finite size. The Pauli Hamiltonian for an electron (of mass $`M`$, charge $`-|e|`$ and $`g`$-factor 2) moving in the $`(x,y)`$-plane under the influence of a magnetic field pointing in the $`z`$-direction is given by $$H=\frac{1}{2M}\left(𝐩+\frac{|e|}{c}𝐀\right)^2+\frac{|e|\hbar }{Mc}B_zS_z.$$ (1) Thienel has recently investigated the eigenvalue problem for this Hamiltonian in the case of a magnetic field which is the sum of a uniform field and a singular flux tube, $$B_z(𝐫)=B+\alpha \mathrm{\Phi }\delta ^2(𝐫)\left(B>0;\mathrm{\Phi }\equiv 2\pi \hbar c/|e|\right).$$ (2) He claims that standard approaches to this problem fail. The purpose of this Comment is to show not only that they do work, but also that they are simpler than his alternative method. Like Thienel, we choose the vector potential in the symmetric gauge, $$𝐀(r)=\left(\frac{Br}{2}+\frac{\alpha \mathrm{\Phi }}{2\pi r}\right)𝐞_\phi ,$$ and use magnetic units (where the unit of length is $`\lambda =(\mathrm{\Phi }/\pi B)^{1/2}`$ and the unit of energy is $`\hbar \omega `$, with $`\omega =|e|B/Mc`$ the Larmor frequency). Then we can rewrite (1) as $$H=-\frac{1}{4r}\frac{\partial }{\partial r}\left(r\frac{\partial }{\partial r}\right)-\frac{1}{4r^2}\left(\frac{\partial }{\partial \phi }+i\alpha \right)^2-\frac{i}{2}\left(\frac{\partial }{\partial \phi }+i\alpha \right)+\frac{1}{4}r^2+\left[1+\frac{\alpha }{2r}\delta (r)\right]S_z$$ (3) (our $`r`$ corresponds to his $`\stackrel{~}{r}`$). Since $`H`$, $`L_z`$ and $`S_z`$ commute with each other they can be diagonalized simultaneously, so we can write the eigenfunctions of $`H`$ as $`\mathrm{\Psi }_{E,m,\sigma }(r,\phi )=\psi _{E,m,\sigma }(r)e^{im\phi }|\sigma \rangle `$, with $`m`$ an integer and $`S_z|\sigma \rangle =\sigma |\sigma \rangle `$, $`\sigma =\pm 1/2`$. Solving the resulting differential equation for $`\psi _{E,m,\sigma }(r)`$ and demanding that $`\psi _{E,m,\sigma }(r)\to 0`$ as $`r\to \infty `$ we finally obtain $$\mathrm{\Psi }_{E,m,\sigma }(r,\phi )=𝒩r^{|m+\alpha |}e^{-r^2/2}U(\xi ,|m+\alpha |+1,r^2)e^{im\phi }|\sigma \rangle ,$$ (4) where $`U(a,b,z)`$ is one of Kummer's functions , $`𝒩`$ is a normalization constant and $$\xi \equiv \frac{1}{2}(|m+\alpha |+m+\alpha +1+2\sigma )-E.$$ (5) In order to determine the possible values of $`E`$ we need to know the correct boundary condition at the origin. This problem was examined by Hagen and Górnicki in the case of a pure Aharonov-Bohm potential (i.e., with $`B=0`$). By treating the singular flux tube as the limiting case of a flux tube of finite size, they obtained the following result: the eigenfunctions corresponding to the spin component which “sees” a repulsive delta-function potential at the origin (i.e., $`\sigma =+1/2`$ if $`\alpha >0`$, $`\sigma =-1/2`$ if $`\alpha <0`$) must be regular there. (By contrast, Thienel requires, without justification, that they vanish at the origin. This is the reason why a vacancy line occurs in his $`(E,m+\sigma )`$-plane — see FIG. 1 of Ref. — for integer $`\alpha \ne 0`$, whereas in our solution no such vacancy line occurs.)
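Before imposing this boundary condition, it is worth recording a property of $`U`$ that is used below: for $`\xi =-n`$ with non-negative integer $`n`$, Kummer's function reduces to a generalized Laguerre polynomial, $`U(-n,a+1,z)=(-1)^nn!L_n^a(z)`$. A quick numerical check (our own sketch, not part of the Comment; should a particular SciPy build return NaN for a negative first argument, `mpmath.hyperu` gives the same comparison):

```python
from math import factorial
from scipy.special import hyperu, eval_genlaguerre

a, z = 1.3, 0.7            # a plays the role of |m + alpha| in Eq. (4)
for n in range(4):
    lhs = hyperu(-n, a + 1.0, z)                              # U(-n, a+1, z)
    rhs = (-1)**n * factorial(n) * eval_genlaguerre(n, a, z)  # (-1)^n n! L_n^a(z)
    print(n, lhs, rhs)     # the two columns coincide
```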
The supersymmetry of the Pauli Hamiltonian determines the eigenfunctions corresponding to the other spin component. A few remarks are in order here: (i) although such a boundary condition was derived for the Dirac equation, one can easily show that it also holds for the Pauli equation (with $`g=2`$) ; (ii) the presence of a smooth background magnetic field does not alter the boundary condition at the origin, as it does not add any singular term to the Hamiltonian. Let us first consider the case $`\alpha >0`$. Then $`\mathrm{\Psi }_{E,m,1/2}`$ must be regular at the origin, which occurs only if $`\xi =-n`$ $`(n=0,1,2,\mathrm{\dots })`$, for then $$U(-n,|m+\alpha |+1,r^2)=(-1)^nn!L_n^{|m+\alpha |}(r^2),$$ (6) where $`L_n^a(z)`$ is the associated Laguerre polynomial . Combining this with (4) and (5) and normalizing $`\mathrm{\Psi }_{E,m,1/2}`$ to unity we thus obtain $`\mathrm{\Psi }_{n,m,1/2}(r,\phi )=\sqrt{{\displaystyle \frac{\mathrm{\Gamma }(n+1)}{\pi \mathrm{\Gamma }(|m+\alpha |+n+1)}}}r^{|m+\alpha |}e^{-r^2/2}L_n^{|m+\alpha |}(r^2)e^{im\phi }|+\rangle ,`$ (7) $`E_{n,m,1/2}=n+1+{\displaystyle \frac{1}{2}}(|m+\alpha |+m+\alpha )(n=0,1,2,\mathrm{\dots };m=0,\pm 1,\pm 2,\mathrm{\dots }).`$ (8) For each of these states there is a superpartner with the same energy and opposite spin, obtained by applying the supercharge $`Q^{\dagger }`$ (Eq. (10) of Ref. ) to (7): $`\mathrm{\Psi }_{n,m+1,-1/2}(r,\phi )`$ $`=`$ $`E_{n,m,1/2}^{-1/2}Q^{\dagger }\mathrm{\Psi }_{n,m,1/2}(r,\phi )`$ (9) $`=`$ $`{\displaystyle \frac{1}{2}}\sqrt{{\displaystyle \frac{\mathrm{\Gamma }(n+1)}{\pi E_{n,m,1/2}\mathrm{\Gamma }(|m+\alpha |+n+1)}}}`$ (11) $`\times \left(-\frac{\partial }{\partial r}+\frac{m+\alpha }{r}+r\right)r^{|m+\alpha |}e^{-r^2/2}L_n^{|m+\alpha |}(r^2)e^{i(m+1)\phi }|-\rangle .`$ The factor $`E_{n,m,1/2}^{-1/2}`$ ensures proper normalization: $`\langle \mathrm{\Psi }_{n,m+1,-1/2}|\mathrm{\Psi }_{n,m+1,-1/2}\rangle `$ $`=`$ $`E_{n,m,1/2}^{-1}\langle \mathrm{\Psi }_{n,m,1/2}|QQ^{\dagger }|\mathrm{\Psi }_{n,m,1/2}\rangle `$ $`=`$ $`E_{n,m,1/2}^{-1}\langle \mathrm{\Psi }_{n,m,1/2}|(H-Q^{\dagger }Q)|\mathrm{\Psi }_{n,m,1/2}\rangle =1.`$ The eigenstates with zero energy are annihilated by both supercharges, $$Q|E=0\rangle =Q^{\dagger }|E=0\rangle =0.$$ They are given by $$\mathrm{\Psi }_{E=0,m,-1/2}(r,\phi )=\frac{r^{-(m+\alpha )}e^{-r^2/2}e^{im\phi }|-\rangle }{\sqrt{\pi \mathrm{\Gamma }(-m-\alpha +1)}};$$ (12) square integrability requires $`m+\alpha <1`$. If $`\alpha <0`$ it is $`\mathrm{\Psi }_{E,m,-1/2}`$ which must be regular at $`r=0`$. Thus $`\mathrm{\Psi }_{n,m,-1/2}(r,\phi )=\sqrt{{\displaystyle \frac{\mathrm{\Gamma }(n+1)}{\pi \mathrm{\Gamma }(|m+\alpha |+n+1)}}}r^{|m+\alpha |}e^{-r^2/2}L_n^{|m+\alpha |}(r^2)e^{im\phi }|-\rangle ,`$ (13) $`E_{n,m,-1/2}=n+{\displaystyle \frac{1}{2}}(|m+\alpha |+m+\alpha )(n=0,1,2,\mathrm{\dots };m=0,\pm 1,\pm 2,\mathrm{\dots }).`$ (14) The zero modes are already included among these states $`(n=0,m+\alpha \le 0)`$. The spin-up states are obtained by applying the supercharge $`Q`$ (Eq. (9) of Ref.
) to the spin-down states with nonzero energy (the factor $`E_{n,m,-1/2}^{-1/2}`$ ensures proper normalization): $`\mathrm{\Psi }_{n,m-1,1/2}(r,\phi )`$ $`=`$ $`E_{n,m,-1/2}^{-1/2}Q\mathrm{\Psi }_{n,m,-1/2}(r,\phi )`$ (15) $`=`$ $`{\displaystyle \frac{1}{2}}\sqrt{{\displaystyle \frac{\mathrm{\Gamma }(n+1)}{\pi E_{n,m,-1/2}\mathrm{\Gamma }(|m+\alpha |+n+1)}}}`$ (17) $`\times \left(\frac{\partial }{\partial r}+\frac{m+\alpha }{r}+r\right)r^{|m+\alpha |}e^{-r^2/2}L_n^{|m+\alpha |}(r^2)e^{i(m-1)\phi }|+\rangle .`$ The same results can be obtained by treating the singular flux tube as the limiting case of a flux tube of finite size in the presence of a background homogeneous magnetic field. The calculations, however, are more complicated and not illuminating. Here I shall only point out the origin of Thienel's wrong conclusion that such an approach fails. First of all, we note that the correct form of $`\mathrm{\Psi }`$ outside the flux tube is given by (4). Then, by demanding continuity of $`\partial \mathrm{\Psi }/\partial r`$ at the border of the tube, one is led to what is essentially his Eq. (17) multiplied by $`R^{|m+\alpha |-1}e^{-R^2/2}U(\xi ,|m+\alpha |+1,R^2)`$. As a function of $`\xi `$, $`U`$ has an infinite number of zeros, which were completely overlooked by Thienel. \[One can show, in particular, that the zeros $`\xi _n`$ of $`U`$ satisfy $`lim_{R\to 0}\xi _n=-n`$ ($`n=0,1,2,\mathrm{\dots }`$). This follows from the asymptotic behavior of $`U`$ for small $`R`$ , $$U(\xi ,|m+\alpha |+1,R^2)\stackrel{R\to 0}{\longrightarrow }\frac{\mathrm{\Gamma }(|m+\alpha |)}{\mathrm{\Gamma }(\xi )}R^{-2|m+\alpha |}$$ (valid for $`\xi \ne -n`$ and $`m+\alpha \ne 0`$), the fact that $`\mathrm{\Gamma }(-n+ϵ)`$ and $`\mathrm{\Gamma }(-n-ϵ)`$ have opposite signs for $`n=0,1,2,\mathrm{\dots }`$ and $`0<ϵ<1`$, and the continuity of $`U`$ as a function of $`\xi `$.\] ###### Acknowledgements. I thank Adilson José da Silva and Marcelo Gomes for a critical reading of this paper. This work was supported by FAPESP.
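As a numerical coda to the bracketed argument above (our addition, not the Comment's): the sign alternation of $`\mathrm{\Gamma }`$ around the negative integers, which forces $`1/\mathrm{\Gamma }(\xi )`$ (and hence $`U`$) to pass through zero near every $`\xi =-n`$, is easy to confirm directly.

```python
from scipy.special import gamma

# Gamma(-n + eps) and Gamma(-n - eps) have opposite signs for 0 < eps < 1,
# so 1/Gamma changes sign (through zero) at each negative integer.
eps = 0.05
for n in range(4):
    left, right = gamma(-n + eps), gamma(-n - eps)
    print(n, left, right)   # opposite signs on the two sides
```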
no-problem/0003/hep-ph0003324.html
ar5iv
text
# 1 Results for $`x\mathrm{\Delta }\overline{u}(x)`$, $`x\mathrm{\Delta }\overline{d}(x)`$ and $`x\mathrm{\Delta }\overline{s}(x)`$ at a low normalization point obtained in chiral quark soliton model RUB-TP2-17/99 Polarized antiquark distributions from chiral quark-soliton model: summary of the results K. Goeke<sup>a</sup>, P.V. Pobylitsa<sup>a,b</sup>, M.V. Polyakov<sup>a,b</sup> and D. Urbano<sup>a,c</sup> <sup>a</sup> Institute for Theoretical Physics II, Ruhr University Bochum, Germany <sup>b</sup> Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg 188350, Russia <sup>c</sup> Faculdade de Engenharia da Universidade do Porto, 4000 Porto, Portugal ## Abstract In these short notes we present a parametrization of the results obtained in the chiral quark-soliton model for polarized antiquark distributions $`\mathrm{\Delta }\overline{u}`$, $`\mathrm{\Delta }\overline{d}`$ and $`\mathrm{\Delta }\overline{s}`$ at a low normalization point around $`\mu =0.6`$ GeV. The aim of these short notes is to summarize the results for the polarized antiquark distributions $`\mathrm{\Delta }\overline{u}`$, $`\mathrm{\Delta }\overline{d}`$ and $`\mathrm{\Delta }\overline{s}`$ obtained in refs. in the framework of the chiral quark-soliton model. The chiral quark-soliton model is a low-energy field theoretical model of the nucleon structure which allows consistent calculations of leading twist quark and antiquark distributions . Due to its field theoretical nature the quark and antiquark distributions obtained in this model satisfy all general QCD requirements: positivity, sum rules, inequalities, etc. A remarkable prediction of the chiral quark soliton model, noted first in ref. , is the strong flavour asymmetry of polarized antiquarks, a feature which is missing in other models like, for instance, pion cloud models (for discussion of this point see Ref. ). The fits below are based on the calculations of Refs. , generalized to the case of three flavours. The results of these calculations are fitted by a form inspired by quark counting rules discussed in Ref. : $`\mathrm{\Delta }\overline{q}(x)`$ $`=`$ $`{\displaystyle \frac{1}{x^{\alpha _q}}}\left[A_q(1-x)^5+B_q(1-x)^6\right],`$ (1) which leads to $`\alpha _u`$ $`=`$ $`0.0542,\alpha _d=0.0343,\alpha _s=0.0169`$ $`A_u`$ $`=`$ $`0.319,A_d=0.185,A_s=0.0366`$ $`B_u`$ $`=`$ $`0.589,B_d=0.672,B_s=0.316.`$ (2) In Fig. 1 we plot the resulting distribution functions. We note that these functions, obtained in the framework of the chiral quark soliton model, refer to the normalization point of about $`\mu =0.6`$ GeV. A few comments are in order here: * The model calculations are not justified at $`x`$ close to zero and one. Therefore the small $`x`$ and $`x\to 1`$ behaviours obtained in the fit above should be considered as an educated guess only, not as a model prediction. * We estimate that the theoretical errors related to the approximations ($`1/N_c`$ corrections, $`m_s`$ corrections, etc.) made in the model calculations are at the level of 20%-30% for $`\mathrm{\Delta }\overline{u}`$ and $`\mathrm{\Delta }\overline{d}`$, and around 50% for $`\mathrm{\Delta }\overline{s}`$. The value of the normalization point is not known exactly; the most favoured value is $`\mu =0.6`$ GeV. The measurements of flavour asymmetry of polarized antiquarks, say, in semi-inclusive DIS or in Drell-Yan reactions with polarized protons would allow one to discriminate between different pictures of the nucleon.
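For convenience, the fit is trivial to evaluate; the sketch below (our addition) simply implements Eqs. (1)-(2) with the coefficients exactly as printed above. Note that the signs of the $`\overline{d}`$ and $`\overline{s}`$ coefficients should be checked against the original preprint before quantitative use, since the model's hallmark is a strong flavour asymmetry between the polarized antiquark distributions.

```python
import numpy as np

# alpha_q, A_q, B_q from Eq. (2), as printed in the text above
coef = {
    "ubar": (0.0542, 0.319, 0.589),
    "dbar": (0.0343, 0.185, 0.672),
    "sbar": (0.0169, 0.0366, 0.316),
}

def delta_qbar(x, q):
    """Eq. (1): Delta qbar(x) = [A (1-x)^5 + B (1-x)^6] / x^alpha."""
    alpha, A, B = coef[q]
    return (A * (1.0 - x)**5 + B * (1.0 - x)**6) / x**alpha

x = np.linspace(0.05, 0.95, 5)
for q in coef:
    print(q, np.round(x * delta_qbar(x, q), 4))   # x * Delta qbar(x), as in Fig. 1
```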
no-problem/0003/quant-ph0003067.html
ar5iv
text
# Correlations of measurement information and noise in quantum measurements with finite resolution ## 1 Introduction The interpretation of quantum mechanics has been controversial from the very beginning, and even one hundred years after Planck’s quantum hypothesis, the physical reality behind the formalism is still being debated . At the heart of this confusion is the issue of quantum measurements . The original definitions of physical quantities were based on the expectation that the quantities observed in measurements are effectively identical to the fundamental mathematical elements of the theory. The implicit assumption behind this expectation is that knowledge and fact are independent and separable. An ideal measurement simply provides information about facts without changing the facts. Reality would then be defined by an infinitely precise set of observable quantities. However, this assumption had to be abandoned in order to describe atomic and optical phenomena. Starting with Planck’s quantum hypothesis, quantum theory completely abandoned the constraints of classical physics. In order to justify this radical departure from these highly successful concepts, Bohr and Heisenberg argued that no reality needs to be attributed to properties which are not being observed. In the famous Bohr-Einstein dialogue, the decisive argument was provided by the uncertainty principle which places a fundamental restriction on our ability to know observable properties. At the same time, a highly precise mathematical description of quantum mechanics emerged. This mathematical description introduces the concept of probability amplitudes, which combine the epistemological nature of probabilities with the deterministic concept of interference. It is this Hilbert space formalism which has firmly established quantum mechanics in 20th century physics. Nevertheless interpretational problems remain because the physical meaning of the Hilbert space formalism is purely statistical. The quantum state of a single system cannot be measured. This situation gives rise to a dualism between the mathematical description and the physical meaning in quantum mechanics which is illustrated in figure 1. For a long time, this dualism was not recognized as experiments mostly revealed statistical averages of only a few observable properties. As modern technology allows a more detailed investigation of quantum phenomena, however, the interpretational problems associated with a theory that is not based upon directly observable facts have reemerged. It may therefore be necessary to investigate the role of measurement in quantum mechanics with respect to the possible manipulations of single quantum objects in order to clarify the physical meaning of quantum states. In the light of the present discussion on the usefulness of quantum effects for computation and data transmission, a clarification of the process by which information is extracted from a quantum system may also provide a better understanding of the technical requirements for quantum information processing. In the following, the relationship between the measurement information obtained at finite resolution and the noise introduced by the measurement interaction is investigated. It is shown that the noise is strongly correlated with the measurement result, even if the eigenvalues of the observed property would not permit such correlations.
These correlations represent operator ordering dependent properties which emerge only in the regime where the measurement resolution is high enough to reveal quantization, yet low enough to permit coherence. Thus, finite resolution measurements reveal more about the physical properties of quantum systems than precise measurements. Hopefully, the investigation of these nonclassical correlations will help to bridge the gap between the mathematical principles and the physical principles of quantum mechanics by providing a physical interpretation of quantum coherence and operator ordering. ## 2 Uncertainty in quantum measurements The connection between the mathematical formalism and the physical observables is established by the measurement. The squared probability amplitudes of the Hilbert space formalism can then be interpreted as real probabilities. The probabilities can be summarized in the density matrix $`\widehat{\rho }`$ which may be expressed in terms of the eigenstates of $`\widehat{n}`$ as $$\widehat{\rho }=\underset{n,m}{\sum }\rho _{nm}|n\rangle \langle m|.$$ (1) While the diagonal elements $`\rho _{nn}`$ clearly define the probability of obtaining a measurement result of $`n`$ in a precise measurement of $`\widehat{n}`$, the relationship between $`\widehat{n}`$ and the off-diagonal elements $`\rho _{nm}`$ with $`n\ne m`$ is less clear. The physical meaning of the off-diagonal elements only emerges in a transformation to a different set of eigenstates, where $`\rho _{nm}`$ modifies the new probabilities as an interference term. However, a precise measurement of $`\widehat{n}`$ destroys this interference information, increasing the uncertainty in the probability distributions associated with eigenstates of observables other than $`\widehat{n}`$ in accordance with the uncertainty principle. Uncertainty requires the disappearance of $`\rho _{nm}`$ before $`n`$ can be distinguished from $`m`$. The information obtained about $`\widehat{n}`$ therefore requires that the measurement interaction introduces decoherence in the off-diagonal elements $`\rho _{nm}`$. If the noise does not reduce $`\rho _{nm}`$ to zero, then the resolution $`\delta n`$ is not sufficient to completely distinguish $`n`$ from $`m`$. This situation is typical for optical quantum nondemolition measurements of photon number . In such experiments, the information obtained about the photon number $`\widehat{n}`$ is given by a measurement result $`n_m`$ and a resolution $`\delta n`$ which corresponds to the Gaussian uncertainty of the measurement. However, this reduced measurement resolution is not just a limitation but helps to preserve the coherence of the quantum state . Instead of selecting a well-defined photon number component $`n`$, the measurement adjusts the statistical weight of each component $`n`$ by a factor dependent on the difference between $`n_m`$ and $`n`$.
This effect of the measurement can be represented by a generalized measurement operator $`\widehat{P}_{\delta n}(n_m)`$ given by $$\widehat{P}_{\delta n}(n_m)=(2\pi \delta n^2)^{-1/4}\mathrm{exp}\left(-\frac{(n_m-\widehat{n})^2}{4\delta n^2}\right).$$ (2) For a given initial state $`\widehat{\rho }_i`$, the probability distribution $`P(n_m)`$ over measurement results $`n_m`$ and the density matrix $`\widehat{\rho }_f(n_m)`$ after the measurement read $`P(n_m)`$ $`=`$ $`Tr\left\{\widehat{P}_{\delta n}(n_m)\widehat{\rho }(\text{in})\widehat{P}_{\delta n}(n_m)\right\}`$ (3) $`\widehat{\rho }_f(n_m)`$ $`=`$ $`{\displaystyle \frac{1}{P(n_m)}}\widehat{P}_{\delta n}(n_m)\widehat{\rho }(\text{in})\widehat{P}_{\delta n}(n_m).`$ (4) By describing the effects of a finite measurement resolution $`\delta n`$ on all physical properties of the system the generalized measurement postulate defined by the operator $`\widehat{P}_{\delta n}(n_m)`$ provides an expression of the role of uncertainty in quantum measurements. It is then possible to quantify the changes in coherence associated with a photon number measurement result $`n_m`$ in detail. ## 3 Photon number measurement statistics The interference properties of the light field are described by the coherent amplitude operator $`\widehat{a}`$. This operator is also referred to as the annihilation operator because its effect on photon number states is given by $$\widehat{a}|n\rangle =\sqrt{n}|n-1\rangle .$$ (5) However, the physical meaning of this photon number property is simply that the interference properties of the light field given by the expectation value $`\langle \widehat{a}\rangle `$ depend on the density matrix elements $`\rho _{nm}`$ with $`m=n-1`$. Photon number information therefore destroys the interference properties by reducing $`\langle \widehat{a}\rangle `$. For a coherent light field such as that emitted by a typical laser, the initial density matrix is given by $$\widehat{\rho }_i=\underset{n,m}{\sum }\mathrm{exp}(-|\alpha |^2)\frac{1}{\sqrt{n!m!}}(\alpha ^{\ast })^m(\alpha )^n|n\rangle \langle m|.$$ (6) This coherent state has an average amplitude of $`\langle \widehat{a}\rangle =\alpha `$ and an average photon number of $`\langle \widehat{n}\rangle =|\alpha |^2`$. The optical coherence of this state is maximal and any reduction in the photon number fluctuations leads to a reduction of the coherence given by $`\langle \widehat{a}\rangle =\alpha `$. By applying the general measurement postulate according to equation (3), one obtains the probability distribution for photon number measurement results $`n_m`$ at a finite resolution of $`\delta n`$ as $$P(n_m)=\frac{\mathrm{exp}(-|\alpha |^2)}{\sqrt{2\pi \delta n^2}}\underset{n}{\sum }\frac{|\alpha |^{2n}}{n!}\mathrm{exp}\left(-\frac{(n-n_m)^2}{2\delta n^2}\right).$$ (7) This probability distribution is simply a sum of Gaussians centered around the respective integer photon numbers. At low resolution (high $`\delta n`$), the Gaussians merge to form a continuous probability distribution. At high resolution (low $`\delta n`$), the Gaussians are completely separate. However, coherence is only possible in the regions where neighbouring Gaussians overlap. This becomes clear if the coherence $`\langle \widehat{a}\rangle _f(n_m)`$ after a measurement of $`n_m`$ is determined.
It reads $$\langle \widehat{a}\rangle _f(n_m)=\alpha \mathrm{exp}\left(-\frac{1}{8\delta n^2}\right)\frac{\underset{n}{\sum }\frac{|\alpha |^{2n}}{n!}\mathrm{exp}\left(-\frac{(n+\frac{1}{2}-n_m)^2}{2\delta n^2}\right)}{\underset{n}{\sum }\frac{|\alpha |^{2n}}{n!}\mathrm{exp}\left(-\frac{(n-n_m)^2}{2\delta n^2}\right)}.$$ (8) This fraction of two sums of Gaussians has its maxima at half integer photon numbers, where the denominator representing the measurement probability is minimal. Figure 2 shows the probability distribution and the coherence after the measurement for $`\alpha =3`$ and $`\delta n=0.3`$. A direct comparison of the probability and the coherence near $`n_m=9`$ is shown in figure 3. It is obvious that integer measurement results $`n_m`$ are correlated with strong decoherence and half integer results with weak decoherence. ## 4 Nonclassical correlation If the quantization $`Q`$ of a measurement result $`n_m`$ is defined as $$Q=\mathrm{cos}\left(2\pi n_m\right),$$ (9) then the correlation between quantization and coherence observed in the finite resolution measurement of photon number may be expressed as $$C(Q,\langle \widehat{a}\rangle _f)=\int Q(n_m)\langle \widehat{a}\rangle _f(n_m)P(n_m)𝑑n_m-\left(\int Q(n_m)P(n_m)𝑑n_m\right)\left(\int \langle \widehat{a}\rangle _f(n_m)P(n_m)𝑑n_m\right).$$ (10) In the case of a coherent state, the averages of quantization $`Q`$, of coherence $`\langle \widehat{a}\rangle _f`$, and of their product are given by $`\int Q(n_m)P(n_m)𝑑n_m`$ $`=`$ $`\mathrm{exp}\left(-2\pi ^2\delta n^2\right)`$ (11) $`\int \langle \widehat{a}\rangle _f(n_m)P(n_m)𝑑n_m`$ $`=`$ $`\mathrm{exp}\left(-{\displaystyle \frac{1}{8\delta n^2}}\right)\alpha `$ (12) $`\int Q(n_m)\langle \widehat{a}\rangle _f(n_m)P(n_m)𝑑n_m`$ $`=`$ $`-\mathrm{exp}\left(-2\pi ^2\delta n^2\right)\mathrm{exp}\left(-{\displaystyle \frac{1}{8\delta n^2}}\right)\alpha .`$ (13) Quantization and coherence are therefore exactly anti-correlated, with $`C(Q,\langle \widehat{a}\rangle _f)`$ being equal to two times the negative product of average quantization and average coherence, $`C(Q,\langle \widehat{a}\rangle _f)`$ $`=`$ $`-2\left(\int Q(n_m)P(n_m)𝑑n_m\right)\left(\int \langle \widehat{a}\rangle _f(n_m)P(n_m)𝑑n_m\right)`$ (14) $`=`$ $`-2\mathrm{exp}\left(-2\pi ^2\delta n^2\right)\mathrm{exp}\left(-{\displaystyle \frac{1}{8\delta n^2}}\right)\alpha .`$ Figure 4 shows this anti-correlation as a function of measurement resolution $`\delta n`$. Note that the anti-correlation is maximal near $`\delta n=0.3`$. For lower $`\delta n`$, decoherence reduces the average value of $`\langle \widehat{a}\rangle _f(n_m)`$ to zero. For higher $`\delta n`$, quantization is not resolved in the measurement and the average value of $`Q`$ is zero. The anti-correlation of quantization and coherence is a direct consequence of quantum coherence. It originates from the fact that the operator $`\widehat{a}`$ connects photon number states $`n`$ with photon number states $`n-1`$. This property means that $`\widehat{a}`$ does not commute with functions of $`\widehat{n}`$. If the quantization operator $`\widehat{Q}`$ is defined as $`\widehat{Q}`$ $`=`$ $`\left(\mathrm{cos}(\pi \widehat{n})\right)^2-\left(\mathrm{sin}(\pi \widehat{n})\right)^2`$ (15) $`=`$ $`(-1)^{2\widehat{n}},`$ then the anti-correlation between quantization and coherence can be obtained by “splitting” $`\widehat{Q}`$ into two operators and inserting $`\widehat{a}`$ in the middle: $`C(\widehat{Q},\widehat{a})`$ $`=`$ $`\langle (-1)^{\widehat{n}}\widehat{a}(-1)^{\widehat{n}}\rangle -\langle (-1)^{2\widehat{n}}\rangle \langle \widehat{a}\rangle `$ (16) $`=`$ $`-2\langle \widehat{a}\rangle .`$ Of course, this operator ordering is only justified because the actual measurement first obtained $`n_m`$ and then $`\langle \widehat{a}\rangle `$.
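These relations are straightforward to verify numerically. The sketch below (our addition, assuming the forms of Eqs. (7)-(13) as given above) evaluates the probability distribution, the conditional coherence and the correlation for the parameters of the figures, $`\alpha =3`$ and $`\delta n=0.3`$; each printed pair should agree to the accuracy of the grid.

```python
import numpy as np
from math import factorial

alpha, dn = 3.0, 0.3
n = np.arange(40)                                   # Fock cutoff; <n> = 9
p_n = np.exp(-alpha**2) * alpha**(2*n) / np.array([factorial(k) for k in n])

nm = np.linspace(-5.0, 25.0, 20001)                 # measurement results n_m
dnm = nm[1] - nm[0]
gauss = np.exp(-(n[:, None] - nm)**2 / (2*dn**2))

P = (p_n[:, None] * gauss).sum(0) / np.sqrt(2*np.pi*dn**2)                 # Eq. (7)
num = (p_n[:, None] * np.exp(-(n[:, None] + 0.5 - nm)**2 / (2*dn**2))).sum(0)
a_f = alpha * np.exp(-1/(8*dn**2)) * num / (p_n[:, None] * gauss).sum(0)   # Eq. (8)

Q = np.cos(2*np.pi*nm)                                                     # Eq. (9)
avg = lambda f: np.sum(f * P) * dnm                 # P(n_m)-weighted average
C = avg(Q * a_f) - avg(Q) * avg(a_f)                # Eq. (10)

print(C, -2 * avg(Q) * avg(a_f))                    # Eq. (14): exact anti-correlation
print(avg(Q), np.exp(-2*np.pi**2*dn**2))            # Eq. (11)
```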
The position of $`\widehat{a}`$ between the parity operators $`(-1)^{\widehat{n}}`$ is a consequence of this measurement context. ## 5 Context dependence of information In the example above, the measurement information obtained is not information about the quantum state before the measurement, since this state is already known to be a coherent state. Nevertheless, the quantum state does not provide sufficient information to describe the physical properties of the light field in every possible context. Therefore, relevant new information about a physical property may be extracted even from the pure state. There seems to be no classical analogy to this process of extracting new information from a well defined state. Possibly, the generalization of classical information concepts to quantum mechanics is not as straightforward as the concepts of entropy and qubits seem to suggest . In particular, the measurement interaction is responsible for selecting the relevant information. If information has been encoded in a different variable, then this information becomes physically irrelevant and is lost. However, this loss of information corresponds to a smooth and continuous transition from one type of context to the other, leaving some room for correlations such as the one between quantization and coherence. In fact, it seems to be more natural to consider situations in which the context is not given by a well defined orthogonal set of states, but by a combination of non-commuting properties which could be described by a sequence of finite resolution measurements. The problem of avoiding decoherence in quantum computation probably arises because it is difficult to limit the measurement context to precise measurements at well defined times. If the wrong type of information leaks out, the artificial concepts of quantum information break down. On the other hand, the full wealth of possibilities naturally inherent in the quantum properties of physical systems may only be explored by embracing the correlations between non-commuting variables. Specifically, the nonclassical correlations expressed by context dependent operator ordering may provide a clearer understanding of the physical properties behind such applications of quantum effects as quantum computation, quantum teleportation, and quantum cryptography. ## 6 Conclusions The uncertainty relations severely restrict the physical information available about a quantum system. Nevertheless, new information may even be extracted from a pure state by measuring a previously unknown property. This extraction of new information requires the loss of information about the known system properties and therefore constitutes an exchange of one type of physical information for another. Yet this exchange process is more precise than the notion of uncertainty would seem to suggest. In the case discussed above, the coherent field information is gradually exchanged for photon number information. However, an accidental observation of an “incorrect” half integer photon number does not result in a loss of information - instead the original phase information is retained. On the other hand, even the accidental observation of the “correct” integer photon number does not provide more information on the system - instead, there is an increased loss of phase coherence to compensate for the precision of the photon number measurement result.
Thus the nonclassical correlation of quantization and coherence preserves the total information available about the system while changing the physical context. It seems to be the major difference between classical physics and quantum physics that quantum physics establishes a fundamental relationship between the information available about a system and the physical properties of the system. In particular, bit-like quantized information is only obtained if the property of coherence has been lost. The anti-correlation of quantization and coherence demonstrates that quantization itself is a property which depends on the measurement context and thus on the information presently available about the light field. This ambiguity in the physical properties appears to be the fundamental reason for the dualism between the physical and the mathematical principles of quantum mechanics. ## Acknowledgements One of us (HFH) would like to acknowledge support from the Japan Society for the Promotion of Science, JSPS.
# On the Disappearance of Kilohertz Quasi-Periodic Oscillations at a High Mass Accretion Rate in Low-Mass X-ray Binaries
## 1 Introduction
The detection of kilohertz quasi-periodic oscillations (kHz QPOs) in low-mass X-ray binaries is arguably the greatest discovery that the Rossi X-ray Timing Explorer (RXTE) has made to date. Such signals almost certainly originate in the immediate vicinity of central neutron stars, given that the fastest oscillations are observed to occur on dynamical timescales near such objects (review by van der Klis 2000). The prospect of using kHz QPOs to probe the effects of strong gravity near neutron stars is therefore very exciting. Various models (e.g., Klein et al. 1996; Miller et al. 1998; Stella & Vietri 1999; Osherovich & Titarchuk 1999) have been proposed to explain kHz QPOs that, in nearly all cases, come in pairs (note that we choose not to discuss the QPOs observed during thermonuclear bursts). Except for the “photon bubble model” (Klein et al. 1996), all other models invariably associate one of the pair with the Keplerian motion of clumps of matter or “hot spots” at the inner edge of a geometrically thin accretion disk (although the “sonic point model” differs in detail, e.g., Miller et al. 1998; however, see Lai 1998). The inner edge of the disk is determined by the pressure balance between the accreted matter and the magnetic field of the neutron star. As the mass accretion rate increases, the ram pressure of the accreted matter increases, which squeezes the magnetosphere and lets the accretion disk extend farther toward the neutron star. Therefore, we expect the frequency of the “Keplerian QPO” to increase with the accretion rate, which agrees with the observations (van der Klis 2000). Since no Keplerian flow can exist inside the last stable orbit, we expect that there is an upper limit to the frequency of this QPO — any further increase in the accretion rate cannot result in an increase in the frequency of the QPO. Observing this limit would provide strong observational evidence for the presence of the last stable orbit around neutron stars, a natural consequence of strong gravity. However, due to the lack of detailed knowledge of how the signals are produced in the first place, it is still not clear how kHz QPOs would behave when the accretion disk reaches the last stable orbit. Without considering any physical mechanisms responsible for modulating the X-ray emission, one might take it for granted that the frequency of the Keplerian QPO would saturate at a sufficiently high accretion rate (e.g., Zhang et al. 1998b; Lai 1998). On the other hand, Cui et al. (1998) emphasized the importance of the disk–magnetosphere interaction in producing the QPOs, based on a detailed study of the evolution of a kHz QPO observed in Aquila X-1, a transient atoll source, throughout the rising phase of an X-ray outburst. In this Letter, we generalize the ideas proposed by Cui et al. to all kHz QPO sources.
## 2 Saturation of kHz QPO Frequency at High X-ray Flux?
Using data from the RXTE observations of 4U 1820-30, which spanned almost one year, Zhang et al. (1998b) found that the kHz QPOs detected in the source evolved strongly in frequency at low X-ray fluxes but reached a plateau at high fluxes. They attributed such a saturation in the QPO frequency to the presence of the last stable orbit. In other words, the plateau frequency of one of the kHz QPOs would represent the Keplerian frequency at the last stable orbit, if the interpretation is correct.
However, for atoll or Z sources, the X-ray flux is not always a reliable indicator of the mass accretion rate (e.g., Méndez et al. 1999; Méndez 1999; Ford et al. 2000), which fundamentally determines the magnetospheric radius of the neutron star and thus the location of the inner edge of the accretion disk. Moreover, it is now well established that the frequency of kHz QPOs is not well correlated with X-ray flux over a long period of time, although short-term linear correlations seem to be present (Zhang et al. 1998a; Méndez 1999 and references therein). Therefore, for 4U 1820-30 the observed evolution of the kHz QPOs with X-ray flux may not be a good representation of that with accretion rate. The saturation of the QPO frequencies could simply be an artifact caused by poor sampling of the data (Méndez 1999, private communication). Unfortunately, there is no reliable way of determining (or even quantifying) the accretion rate for atoll or Z sources at present. One empirical approach that is often taken is to use the position along the distinctive tracks in the color-color or color-intensity diagrams (which define atoll or Z sources) as a qualitative measure of the accretion rate (van der Klis 1995). The approach seems effective since the frequency of kHz QPOs is shown to be uniquely correlated with such “color track position” for a number of sources (Méndez et al. 1999; Méndez 1999; van Straaten et al. 2000). Kaaret et al. (1999) re-analyzed the data of Zhang et al. (1998b) and added data from more recent observations. They seemed to confirm that the QPO frequencies leveled off at high X-ray fluxes, although this result was not reproduced in other studies (van der Klis 2000). When they plotted the frequencies against the inferred accretion rate (as indicated by the color track position in the color-intensity diagram that they defined), however, the frequencies varied monotonically over the entire range. The authors noted an apparent deviation in the correlation at high accretion rates, with very limited dynamical range, from that extrapolated from low accretion rates. They then interpreted the deviation as evidence for the last stable orbit. We would like to note, however, that the color track position of the source is only an empirical indicator of the accretion rate. It is not clear at all how the two are quantitatively related; there is certainly no compelling reason to expect that they are necessarily linearly correlated. Consequently, the correlation between the QPO frequency and the color track position is expected to be complex (but is not known). A more serious question regarding the interpretation is why no other sources show a similar behavior (i.e., the claimed saturation of kHz QPO frequencies at high accretion rate), if the phenomenon were to originate in something so fundamental. It seems unlikely that the uniqueness of 4U 1820-30 can be the answer to this question.
## 3 Clues from Transient Atoll Sources
Transient atoll sources are nearly ideal systems for a detailed study of the evolution of kHz QPOs with mass accretion rate, because of the large dynamical range that they provide during an X-ray outburst. Unfortunately, they are very few in number, and even fewer undergo outbursts on time scales short enough that they can be observed. Nevertheless, high-quality data are available for some of these sources. Aql X-1 is known for experiencing frequent outbursts (van Paradijs & McClintock 1995).
During an outburst in 1998, the source was intensively monitored throughout the rising phase (Cui et al. 1998). Although the origin of X-ray outbursts is generally unknown, there appears to be some consensus now that thermal instability causes a sudden surge in the mass accretion rate through the disk and thus initiates an X-ray outburst (review by King 1995). Therefore, we know, at least qualitatively, how the mass accretion rate evolves during an outburst. Cui et al. found that the rising phase seems to be quite simple (at least for Aql X-1): the X-ray flux correlates fairly well with the color track position, in contrast to the decaying phase, where no correlation exists between the two quantities on long time scales (Zhang et al. 1998a; Méndez 1999). Perhaps, during the rising phase of an outburst, the X-ray flux is simply proportional to the mass accretion rate, as is usually expected. The evolution of a kHz QPO (not a pair) detected in Aql X-1 was carefully followed during the rising phase of the outburst (Cui et al. 1998). The QPO was not detected at the beginning of the outburst, when the accretion rate was presumably low; it then appeared and persisted through the intermediate range of the accretion rate; and it vanished again when the accretion rate exceeded a certain threshold ($`\dot{M}_h`$) near the peak of the outburst. To account for such an evolution of the kHz QPO, Cui et al. proposed the following physical scenario, as illustrated in Figure 1. In the quiescent state, the mass accretion takes place in the form of an advection-dominated accretion flow (ADAF) close to the central neutron star, and in the form of a standard Keplerian disk farther away (e.g., Narayan & Yi 1994, 1995; see also Menou et al. 1999 for discussions of neutron star systems). Therefore, there is no direct interaction between the Keplerian disk and the magnetosphere of the neutron star in this state. Such an interaction was argued to be essential for producing the X-ray modulation associated with kHz QPOs; consequently, there would be no kHz QPOs in the quiescent state. At the onset of an X-ray outburst, the mass accretion rate begins to increase rapidly, so the ADAF region shrinks and the Keplerian disk moves inward (Narayan 1997). As soon as the disk starts to interact with the magnetosphere, kHz QPOs are produced. The accretion rate continues to increase as the outburst proceeds, so the magnetosphere is pushed farther toward the neutron star and the disk extends farther inward. After the disk has reached the last stable orbit, any increase in the mass accretion rate disengages the disk from the magnetosphere, causing the QPOs to disappear. For Aql X-1, assuming the X-ray flux is proportional to the accretion rate, the inferred lower and upper limits on the magnetic field strength are certainly consistent with our expectation for an atoll source (Cui et al. 1998). Another transient atoll source, 4U 1608-52, showed a remarkably similar pattern, based on the mass accretion rate inferred from the color-color diagram, in the evolution of its kHz QPO during the decaying phase of an outburst (Méndez et al. 1999). Therefore, we propose that it is the disappearance of kHz QPOs at high mass accretion rate that provides evidence for the presence of the last stable orbit.
## 4 Application to All kHz Sources
Without exception, the kHz QPOs vanish at high mass accretion rate for all sources (van der Klis 2000). For different sources, however, this seems to occur at a different accretion rate.
For instance, $`\dot{M}_h`$ is certainly higher for Z sources than for atoll sources, perhaps suggesting a critical role that the magnetic field plays (according to the atoll/Z paradigm; van der Klis 1995). This would be naturally explained by our model, since the model requires that the stronger the magnetic field the higher the threshold (see § 5 for a quantitative treatment). Moreover, extending the model to all kHz sources would solve a long-standing observational puzzle as to why the kHz QPOs disappear at a higher accretion rate for a brighter source (van der Klis 2000). It is interesting to note that no kHz QPOs have ever been detected in a group of bright atoll sources (GX 3+1, GX 9+1, GX 13+1, and GX 9+9; Wijnands et al. 1998; Strohmayer 1998; Homan et al. 1998). Perhaps, for these sources, the accretion rate always remains above $`\dot{M}_h`$. At low mass accretion rates, with the exception of 4U 1728-34, kHz QPOs become undetectable when the accretion rate is below a certain threshold ($`\dot{M}_l`$) for all atoll sources; for Z sources, on the other hand, the QPOs are detected down to the lowest inferred mass accretion rate (van der Klis 2000). Perhaps, the ADAF scenario can also be applied to persistent atoll sources or even Z sources. If so, $`\dot{M}_l`$ would likely depend sensitively on the magnetic field strength, as we will demonstrate in the next section.
## 5 Disk–Magnetosphere Interaction
One of the critical issues that the kHz QPO models do not explicitly address is the physical mechanism that modulates the X-ray emission. The models are presently still at the level of simply associating the natural frequencies in a low-mass neutron star binary with the kHz QPOs observed. However, a mechanism is clearly needed for the natural frequencies to manifest themselves observationally. An obvious candidate is the interaction between the Keplerian disk and the magnetosphere of the neutron star, as was proposed in the original formulation of the beat-frequency model (Alpar & Shaham 1985). Such an interaction can create inhomogeneity or warping in the accretion flow (Vietri & Stella 1998; Lai 1999), which may cause X-ray emission to be modulated at the orbital frequency. Assuming a dipole field, the radius of the magnetosphere, in units of the Schwarzschild radius ($`2GM/c^2`$), is approximately given by (Frank et al. 1992): $$r_m\simeq 2.19\dot{m}^{-2/7}m^{-10/7}B_8^{4/7}r_6^{12/7},$$ (1) where $`m`$ is the mass of the neutron star in solar units, $`\dot{m}`$ is the mass accretion rate in Eddington units ($`1.39\times 10^{18}m\text{ }\mathrm{g}\text{ }\mathrm{s}^{-1}`$), $`B_8`$ is the dipole field of the neutron star in units of $`10^8`$ G, and $`r_6`$ is the radius of the neutron star in units of $`10^6`$ cm. To derive $`\dot{M}_h`$, therefore, we set $`r_m=3`$ (ignoring the effects of slow rotation of the neutron star). We have $$\dot{M}_h=0.33m^{-5}B_8^2r_6^6.$$ (2) For neutron stars with $`m=2`$ and $`r_6=1`$, $`\dot{M}_h\simeq 0.01`$ for $`B=10^8`$ G or $`\simeq 1`$ for $`B=10^9`$ G, which are roughly what we expect of atoll or Z sources, respectively (van der Klis 1995). At low mass accretion rates, if the ADAF is responsible for truncating the Keplerian disk, we can derive $`\dot{M}_l`$ by setting $`r_m=r_{tr}`$, where $`r_{tr}`$ is the radius at which the Keplerian disk makes a transition to the quasi-spherical ADAF. However, the physical origin of the transition is still poorly understood.
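As a quick numerical cross-check of Eqs. (1) and (2), the following Python sketch (our code, assuming the negative exponents as reconstructed above) reproduces the quoted thresholds:

```python
# Numerical check of Eqs. (1)-(2); assumes the exponents as reconstructed above.
def r_m(mdot, m=2.0, B8=1.0, r6=1.0):
    """Magnetospheric radius in units of the Schwarzschild radius, Eq. (1)."""
    return 2.19 * mdot**(-2.0/7) * m**(-10.0/7) * B8**(4.0/7) * r6**(12.0/7)

def mdot_h(m=2.0, B8=1.0, r6=1.0):
    """Critical accretion rate from setting r_m = 3, Eq. (2), in Eddington units."""
    return 0.33 * m**(-5.0) * B8**2 * r6**6

for B8 in (1.0, 10.0):                     # B = 10^8 G and 10^9 G
    md = mdot_h(B8=B8)
    print(f"B8 = {B8:4.1f}: Mdot_h = {md:.3f}, r_m(Mdot_h) = {r_m(md, B8=B8):.2f}")
# -> Mdot_h ~ 0.01 (atoll-like) and ~ 1 (Z-like), with r_m = 3 in both cases
```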
Observationally, $`r_{tr}`$ is clearly a function of $`\dot{m}`$ — the higher the mass accretion rate the smaller the transition radius (Narayan 1997). Assuming $`r_{tr}\propto \dot{m}^{-\alpha }`$ (where $`\alpha >0`$), equating $`r_{tr}`$ to $`r_m`$ yields $$\dot{M}_l^{\alpha -2/7}\propto m^{10/7}B_8^{-4/7}r_6^{-12/7}.$$ (3) If $`\alpha >2/7`$, the stronger the magnetic field the lower $`\dot{M}_l`$ will be; the converse is true for $`\alpha <2/7`$. The former appears to be supported by observations, given that the kHz QPOs persist down to the lowest mass accretion rate for Z sources while they seem to disappear at some point for atoll sources. Of course, the ADAF scenario might break down entirely for Z sources; i.e., the mass accretion process always takes the form of a Keplerian disk, so the kHz QPOs are always detectable at low mass accretion rates. Meyer & Meyer-Hofmeister (1994) suggested that a cool Keplerian disk could undergo a phase transition to a hot, quasi-spherical corona when the gas density in the transition region becomes too low to effectively radiate away the energy released during the mass accretion process. If such a phase transition is relevant for the ADAF, then $`r_{tr}=18.3m^{0.17/1.17}\dot{m}^{-1/1.17}`$ (i.e., $`\alpha =0.84>2/7`$; Liu et al. 1999). We would have $$\dot{M}_l=41.7m^{2.76}B_8^{-1}r_6^{-3},$$ (4) which would likely be super-Eddington for atoll or Z sources. Therefore, it is not clear if the model can be applied to neutron star systems.
## 6 Summary
The observations of the kHz QPO phenomenon seem to suggest that the interaction between the Keplerian accretion disk and the magnetosphere of the neutron star is directly responsible for modulating the X-ray emission. For a given source, the presence (or absence) of such an interaction dictates the appearance (or disappearance) of the QPOs. At high mass accretion rate, we argue that the presence of the last stable orbit manifests itself in the disappearance of the QPOs, as opposed to the saturation in the QPO frequency. The difference may only seem semantic since, quantitatively, both interpretations require that the neutron star is inside the last stable orbit. However, we feel that it is imperative for the models to begin to address such critical issues as modulation mechanisms for kHz QPOs, in light of the ever improving quality of the data. One critical question is whether the neutron star magnetosphere can be disengaged from the Keplerian disk at the last stable orbit. Studies have shown that the evolution of the magnetic field configuration is very complicated as the disk approaches the last stable orbit (e.g., Lai 1998), but the exact solution is not known at present. Intuitively, as the accretion process proceeds from the inner edge of the Keplerian disk onto the surface of the neutron star, the ram pressure of the accreted matter continues to push the magnetosphere inward. In this case, the magnetosphere only interacts with the non-Keplerian accretion flow in the “gap” between the last stable orbit and the neutron star surface, but not directly with the Keplerian flow in the disk. The importance of such gap accretion has been studied extensively (Kluźniak & Wagoner 1985; Kluźniak & Wilson 1991). The situation is less certain at low mass accretion rates. In fact, observationally it can still be argued whether the QPOs actually disappear, given that in nearly all cases the upper limits derived are comparable to the fractional rms amplitudes of the QPOs measured at high accretion rates (Méndez 2000, private communication).
In the case of 4U 0614+09, however, the 95% upper limit is only about half of the measured amplitude when the source is bright (Méndez et al. 1997). Therefore, we have at least one source in which the QPOs, if present at all, are definitely much weaker at low accretion rates (i.e., below $`\dot{M}_l`$). Moreover, we note that the kHz QPOs often strengthen (relative to the average source intensity) as the accretion rate decreases (Wijnands et al. 1997; Wijnands & van der Klis 1997; Smale, Zhang, & White 1997). If the QPOs do disappear at low accretion rate, some physical process, like the ADAF, could be present to truncate the Keplerian disk at a large distance from the neutron star under such circumstances. This would destroy the disk-magnetosphere interaction and thus the kHz QPOs for atoll sources. The process may also operate in Z sources: the persistence of the kHz QPOs in such cases can be attributed to a lower accretion rate threshold that is due to a stronger magnetic field of the neutron star. Alternatively, the process bears no relevance to Z sources: the disk-magnetosphere interaction is always present at low accretion rates, and so are the QPOs. I thank Dong Lai for a stimulating discussion on the subject. Part of this work was completed when I was attending the workshop on “X-ray Probes of Relativistic Astrophysics” at the Aspen Center for Physics (ACP) in the summer of 1999. I acknowledge helpful discussions with the participants of the workshop, in particular, with Michiel van der Klis and Mariano Méndez on several important observational issues. I also wish to acknowledge financial assistance from the ACP. This work was supported in part by NASA grants NAG5-7990 and NAG5-7484.
# Hydrodynamic Coupling of Two Brownian Spheres to a Planar Surface
## Abstract
We describe direct imaging measurements of the collective and relative diffusion of two colloidal spheres near a flat plate. The bounding surface modifies the spheres' dynamics, even at separations of tens of radii. This behavior is captured by a stokeslet analysis of fluid flow driven by the spheres' and wall's no-slip boundary conditions. In particular, this analysis reveals surprising asymmetry in the normal modes for pair diffusion near a flat surface.
Despite considerable progress over the past two centuries, the hydrodynamic properties of all but the simplest colloidal systems remain controversial or unexplained. For example, velocity fluctuations in sedimenting colloidal suspensions are predicted to diverge with system size. Experimental observations indicate, on the other hand, that long-wavelength fluctuations are suppressed by an as-yet undiscovered mechanism. One possible explanation is that hydrodynamic coupling to bounding surfaces may influence particles' motions to a greater extent and over a longer range than previously suspected. Such considerations invite a renewed examination of how hydrodynamic coupling to bounding surfaces influences colloidal particles' dynamics. This Letter describes an experimental and theoretical investigation of two colloidal spheres' diffusion near a flat plate. Related studies have addressed the dynamics of two spheres far from bounding walls, and of a single sphere in the presence of one or two walls. Confinement by two walls poses particular difficulties since available theoretical predictions apply only for highly symmetric arrangements, or else contradict each other. The geometry we have chosen avoids some of this complexity while still highlighting the range of non-additive hydrodynamic coupling in a many-surface system. We combined optical tweezer manipulation and digital video microscopy to measure four components of the pair diffusion tensor for two colloidal spheres as a function of their center-to-center separation $`r`$ and of their height $`h`$ above a planar glass surface. Measurements were performed on silica spheres of radius $`0.495\pm 0.025`$ $`\mu `$m (Duke Scientific lot 21024) dispersed in a layer of water $`140\pm 2`$ $`\mu `$m thick. The suspension was sandwiched between a microscope slide and a #1 coverslip whose surfaces were stringently cleaned before assembly and whose edges were hermetically sealed with a UV-cured epoxy (Norland type 88) to prevent evaporation and suppress bulk fluid flow. A transparent thin film heater bonded to the microscope slide and driven by a Lakeshore LM-330 temperature controller maintained the sample volume's temperature at $`T=29.00\pm 0.01^{\circ }`$C, as measured by a platinum resistance thermometer. The addition of 2 mM of NaCl to the solution minimized electrostatic interactions among the weakly charged spheres and glass surfaces by reducing the Debye screening length to 2 nm. Under these conditions, the individual spheres' free self-diffusion coefficients are expected to be $`D_0=k_BT/(6\pi \eta a)=0.550\pm 0.028\mu \mathrm{m}^2/\mathrm{sec}`$, where $`\eta =0.817`$ cP is the electrolyte's viscosity. The spheres' motions were tracked with an Olympus IMT-2 optical microscope using a 100$`\times `$ NA 1.4 oil immersion objective. Images acquired with an NEC TI-324A CCD camera were recorded on a JVC-822DXU SVHS video deck before being digitized with a Mutech MV-1350 frame grabber at 1/60 sec intervals.
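As a quick sanity check on the quoted value of $`D_0`$, one can evaluate the Stokes-Einstein formula directly; the following sketch (ours, not part of the paper) uses the stated temperature, viscosity, and sphere radius:

```python
# Check D_0 = k_B T / (6 pi eta a) for a = 0.495 um, eta = 0.817 cP, T = 29 C.
import math

k_B = 1.380649e-23          # J/K
T   = 302.15                # K  (29.00 C)
eta = 0.817e-3              # Pa s
a   = 0.495e-6              # m

D0 = k_B * T / (6 * math.pi * eta * a)    # m^2/s
print(f"D0 = {D0 * 1e12:.3f} um^2/s")     # -> ~0.55 um^2/s, as quoted
```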
Field-accurate digitization was assured by interpreting the vertical interlace time code recorded onto each video field. The spheres' locations $`\vec{r}_1(t)`$ and $`\vec{r}_2(t)`$ in the image acquired at time $`t`$ then were measured to within 20 nm using a computerized centroid tracking algorithm. A pair of spheres was placed reproducibly in a plane parallel to the glass surfaces using optical tweezers. These optical traps were created with a solid state laser (Coherent Verdi) whose beam was brought to a focus within the sample volume by the microscope's objective. The resulting optical gradient forces suffice to localize a silica sphere at the focal point despite random thermal forces. Two optical traps were created by alternating the focused laser spot between two positions in the focal plane at 200 Hz using a galvanometer-driven mirror. Diverting the trapping laser onto a beam block every few cycles freed the spheres to diffuse away from this well defined initial condition. Resuming the trap's oscillation between the two trapping points resets the spheres' positions. Alternately trapping and releasing the spheres allowed us to sample their dynamics efficiently in a particular geometry. Allowing the spheres only $`\tau =83`$ msec (5 video fields) of freedom before retrapping them for 16 msec (less than 1 video field) ensured that their out-of-plane motions, $`\mathrm{\Delta }z<\sqrt{2D_0\tau }=0.4`$ $`\mu `$m, cause negligible tracking errors. Because optical tweezers form in the microscope's focal plane, their height $`h`$ relative to the coverslip's surface can be adjusted from 1 to 30 $`\mu `$m with 0.5 $`\mu `$m accuracy by adjusting the microscope's focus. For a given height, we continuously varied the spheres' initial separation between 2 $`\mu `$m and 10 $`\mu `$m at 0.025 Hz for a total of 20 minutes. This procedure yielded 60,000 samples of the spheres' dynamics in 1/60 sec intervals, divided into sequences 5/60 sec long for each value of $`h`$. These trajectory data were decomposed into cooperative motions $`\vec{\rho }=\vec{r}_1+\vec{r}_2`$ and relative motions $`\vec{r}=\vec{r}_1-\vec{r}_2`$ either perpendicular or parallel to the initial separation vector, and binned according to the initial separation, $`r`$. The diffusion coefficients $`D_\psi (r,h)`$ associated with each mode of motion $`\psi (r,h,\tau )`$ at each height and initial separation were then obtained from the Stokes-Einstein formula $$\langle \mathrm{\Delta }\psi ^2(\tau )\rangle =2D_\psi (r,h)\tau ,$$ (1) where the angle brackets indicate an ensemble average. Fig. 1 shows typical data for one mode of motion at one height and starting separation. Diffusion coefficients $`D_\psi (r,h)`$ extracted from least squares fits to Eq. (1) appear in Fig. 2 as functions of $`r`$ for the smallest and largest accessible values of $`h`$. The horizontal dashed lines in Fig. 2 indicate the spheres' asymptotic self-diffusion coefficients. Measurements' deviations from these limiting values reveal the influence of the spheres' hydrodynamic interactions. Particles moving through a fluid at low Reynolds number excite large-scale flows through the no-slip boundary condition at their surfaces. These flows couple distant particles' motions, so that each particle's dynamics depends on the particular configuration of the entire collection.
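The mode decomposition and the fit to Eq. (1) can be summarized in a short sketch; the array layout and function name below are our own assumptions, not the authors' analysis code:

```python
# Minimal sketch: decompose two-particle trajectories into collective and
# relative modes and estimate D_psi from <Delta psi^2> = 2 D_psi tau, Eq. (1).
import numpy as np

def mode_diffusion(r1, r2, dt=1.0/60.0):
    """r1, r2: (N, 2) arrays of in-plane sphere positions sampled every dt seconds.
    Returns diffusion coefficients of the four in-plane modes."""
    rho, r = r1 + r2, r1 - r2                       # collective and relative coords
    sep = r[:-1]                                    # separation at the start of each step
    e_par = sep / np.linalg.norm(sep, axis=1)[:, None]
    e_perp = np.stack([-e_par[:, 1], e_par[:, 0]], axis=1)
    out = {}
    for name, coord in (("C", rho), ("R", r)):
        step = np.diff(coord, axis=0)               # displacement over one interval
        for dname, e in (("par", e_par), ("perp", e_perp)):
            dpsi = np.einsum("ij,ij->i", step, e)   # project onto the mode direction
            out[name + "_" + dname] = np.mean(dpsi**2) / (2 * dt)
    return out
```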
This dependence is readily calculated using Batchelor's generalization of Einstein's classic argument: The probability to find $`N`$ particles at equilibrium in a particular configuration $`\{\vec{r}_1,\mathrm{\ldots },\vec{r}_N\}`$ depends on their interaction $`\mathrm{\Phi }(\vec{r}_1,\mathrm{\ldots },\vec{r}_N)`$ through Boltzmann's distribution, $`P(\vec{r}_1,\mathrm{\ldots },\vec{r}_N)=\mathrm{exp}\left[-\mathrm{\Phi }/(k_BT)\right]`$. The corresponding force $`-\mathrm{\nabla }\mathrm{\Phi }=k_BT\mathrm{\nabla }P/P`$ drives a probability flux $`k_BT𝐛\mathrm{\nabla }P`$, where $`𝐛(\vec{r}_1,\mathrm{\ldots },\vec{r}_N)`$ is the particles' mobility tensor. The system reaches equilibrium when this interaction-driven flux is balanced by a diffusive flux $`𝐃\mathrm{\nabla }P`$. It follows that the $`N`$-particle diffusivity is $`𝐃=k_BT𝐛`$. Elements of $`𝐃`$ lead to generalized Stokes-Einstein relations $$\langle \mathrm{\Delta }r_{i\alpha }(\tau )\mathrm{\Delta }r_{j\beta }(\tau )\rangle =2D_{i\alpha ,j\beta }\tau .$$ (2) describing how particle $`i`$'s motion in the $`\alpha `$ direction couples to particle $`j`$'s in the $`\beta `$ direction. The mobility tensor for spheres of radius $`a`$ has the form $$b_{i\alpha ,j\beta }=\frac{\delta _{i\alpha ,j\beta }}{6\pi \eta a}+b_{i\alpha ,j\beta }^e.$$ (3) $`b_{i\alpha ,j\beta }^e`$ is the Green's function for the flow at $`\vec{r}_i`$ in the $`\alpha `$ direction due to an external force at $`\vec{r}_j`$ in the $`\beta `$ direction. In the present discussion, it accounts for no-slip boundary conditions at all other surfaces in the system. If the spheres are well separated, we may approximate the flow field around a given sphere by a stokeslet, the flow due to a point force at the sphere's location. This approximation is valid to leading order in the spheres' radius. The Green's function for the flow at $`\vec{x}`$ in the $`\alpha `$ direction due to a stokeslet at $`\vec{r}_j`$ in the $`\beta `$ direction is $$G_{\alpha \beta }^S(\vec{x}-\vec{r}_j)=\frac{1}{8\pi \eta }\left[\frac{\delta _{\alpha \beta }}{|\vec{x}-\vec{r}_j|}+\frac{(\vec{x}-\vec{r}_j)_\alpha (\vec{x}-\vec{r}_j)_\beta }{|\vec{x}-\vec{r}_j|^3}\right]$$ (4) so that $`b_{i\alpha ,j\beta }^e=G_{\alpha \beta }^S(\vec{r}_i-\vec{r}_j)`$. In the particular case of two identical spheres, diagonalizing the resulting diffusivity tensor $`𝐃`$ yields the diffusion coefficients for two collective (C) modes and two relative (R) modes along directions perpendicular ($`\perp `$) and parallel ($`\parallel `$) to the initial separation $$\frac{D_{\perp }^{C,R}(r)}{D_0}=1\pm \frac{3}{4}\frac{a}{r}+𝒪\left(\frac{a^3}{r^3}\right)$$ (5) $$\frac{D_{\parallel }^{C,R}(r)}{D_0}=1\pm \frac{3}{2}\frac{a}{r}+𝒪\left(\frac{a^3}{r^3}\right),$$ (6) where the positive corrections apply to collective modes and the negative to relative. The collective diffusion coefficients $`D_{\perp }^C`$ and $`D_{\parallel }^C`$ are enhanced by hydrodynamic coupling because fluid displaced by one sphere entrains the other. Relative diffusion coefficients $`D_{\perp }^R`$ and $`D_{\parallel }^R`$ are suppressed, on the other hand, by the need to transport fluid into and out of the space between the spheres. Introducing a planar boundary into this system adds considerable complexity.
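As an illustration of Eqs. (4)-(6), the leading-order pair diffusivities follow directly from the Oseen tensor; a minimal sketch (our naming, not the authors' code):

```python
# Sketch: leading-order pair diffusivities from the stokeslet (Oseen) tensor.
import numpy as np

def oseen(x, eta=1.0):
    """Stokeslet Green's function G^S_{ab}(x) of Eq. (4)."""
    r = np.linalg.norm(x)
    return (np.eye(3) / r + np.outer(x, x) / r**3) / (8 * np.pi * eta)

def pair_modes(r_sep, a, eta=1.0, kT=1.0):
    """Collective (+) and relative (-) diffusivities, in units of D_0,
    for two spheres separated by r_sep along x; cf. Eqs. (5)-(6)."""
    D0 = kT / (6 * np.pi * eta * a)
    Dc = kT * oseen(np.array([r_sep, 0.0, 0.0]), eta)   # cross-diffusion tensor
    D_par = 1.0 + np.array([+1.0, -1.0]) * Dc[0, 0] / D0
    D_perp = 1.0 + np.array([+1.0, -1.0]) * Dc[1, 1] / D0
    return D_par, D_perp

print(pair_modes(r_sep=10.0, a=1.0))
# parallel modes: 1 +/- 3a/(2r) = 1 +/- 0.15; perpendicular: 1 +/- 3a/(4r) = 1 +/- 0.075
```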
The flow field around a small sphere located a height $`h`$ above a horizontal wall is most easily calculated by the method of images, in which the wall's no-slip boundary condition is satisfied by placing a stokeslet (S), a source doublet (D), and a stokeslet doublet (SD) a distance $`h`$ below the plane of the wall. The flow due to this image system is described by the Green's function $$\begin{array}{c}G_{\alpha \beta }^W(\vec{x}-\vec{R}_j)=-G_{\alpha \beta }^S(\vec{x}-\vec{R}_j)\hfill \\ \hfill +2h^2G_{\alpha \beta }^D(\vec{x}-\vec{R}_j)-2hG_{\alpha \beta }^{SD}(\vec{x}-\vec{R}_j)\end{array}$$ (7) where $`\vec{R}_j=\vec{r}_j-2h\widehat{z}`$ is the position of sphere $`j`$'s image, and $$G_{\alpha \beta }^D(\vec{y})=(1-2\delta _{\alpha z})\frac{\partial }{\partial y_\beta }\left(\frac{y_\alpha }{y^3}\right)$$ (8) $$G_{\alpha \beta }^{SD}(\vec{y})=(1-2\delta _{\alpha z})\frac{\partial }{\partial y_\beta }G_{\alpha z}^S(\vec{y})$$ (9) are Green's functions for a source dipole and a stokeslet doublet, respectively. The flow field set up by the image system (and thus by the wall's no-slip boundary condition) entrains the sphere through $`b_{i\alpha ,i\beta }^e=G_{\alpha \beta }^W(\vec{r}_i-\vec{R}_i)`$ and decreases its mobility. Two independent modes emerge from this analysis, one ($`z`$) normal to the wall and the other ($`xy`$) parallel, with diffusivities $$\frac{D_z(h)}{D_0}=1-\frac{9}{8}\frac{a}{h}+𝒪\left(\frac{a^3}{h^3}\right)$$ (10) $$\frac{D_{xy}(h)}{D_0}=1-\frac{9}{16}\frac{a}{h}+𝒪\left(\frac{a^3}{h^3}\right).$$ (11) Eqs. (5) and (6) should suffice for two spheres far from bounding surfaces. Similarly, the spheres' motions should decouple when the influence of a nearby wall dominates; Eqs. (10) and (11) should then apply. At intermediate separations, however, neither set of formulas is accurate. Naively adding the drag coefficients due to sphere-sphere and sphere-wall interactions yields $`D_\psi ^{-1}(r,h)=D_\psi ^{-1}(r)+D_{xy}^{-1}(h)-D_0^{-1}`$. Results of this linear superposition approximation appear as dashed curves in Fig. 2. While adequate for spheres more than 50 radii from the wall \[(c) and (d)\], linear superposition underestimates the wall's influence for smaller separations \[(a) and (b)\]. A more complete treatment not only resolves these quantitative discrepancies but also reveals an additional surprising influence of the bounding surface on the spheres' dynamics: the highly symmetric and experimentally accessible modes parallel to the wall are no longer independent. A neighboring sphere and the two image systems together contribute $`b_{i\alpha ,j\beta }^e=G_{\alpha \beta }^S(\vec{r}_i-\vec{r}_j)+G_{\alpha \beta }^W(\vec{r}_i-\vec{R}_i)+G_{\alpha \beta }^W(\vec{r}_i-\vec{R}_j)`$ to the mobility of sphere $`i`$ in the $`\alpha `$ direction. Eigenvectors of the corresponding diffusivity tensor appear in Fig. 3. The independent modes of motion are rotated with respect to the bounding wall by an amount which depends strongly on both $`r`$ and $`h`$.
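Stepping back to the dashed curves of Fig. 2, the linear superposition approximation is easy to reproduce from the leading-order formulas above; a sketch (our code and naming):

```python
# Sketch of the linear superposition approximation: add the sphere-sphere and
# sphere-wall drag coefficients, D^-1(r,h) = D^-1(r) + D_xy^-1(h) - D_0^-1.
def D_wall_xy(h, a):
    """In-plane single-sphere diffusivity near one wall, Eq. (11), units of D_0."""
    return 1.0 - 9.0 * a / (16.0 * h)

def D_pair(r, a, collective=True, parallel=True):
    """Isolated-pair diffusivity, Eqs. (5)-(6), units of D_0."""
    sign = 1.0 if collective else -1.0
    coeff = 1.5 if parallel else 0.75
    return 1.0 + sign * coeff * a / r

def D_superposition(r, h, a, collective=True, parallel=True):
    return 1.0 / (1.0 / D_pair(r, a, collective, parallel)
                  + 1.0 / D_wall_xy(h, a) - 1.0)

print(D_superposition(r=4.0, h=2.0, a=0.5))   # e.g. collective parallel mode
```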
The experimentally measured in-plane motions clearly are not independent; they nonetheless satisfy Stokes-Einstein relations with pair-diffusion coefficients $`D_\alpha ^{C,R}(r,h)=D_{1\alpha ,1\alpha }(r,h)\pm D_{1\alpha ,2\alpha }(r,h)`$, where the positive sign corresponds to collective motion, the negative to relative motion, and $`\alpha `$ indicates directions either perpendicular or parallel to the line connecting the spheres' centers. Explicitly, we obtain $$\frac{D_{\perp }^{C,R}(r,h)}{D_0}=1-\frac{9}{16}\frac{a}{h}\pm \frac{3}{4}\frac{a}{r}\left[1-\frac{1+\frac{3}{2}\xi }{(1+\xi )^{3/2}}\right]$$ (12) $$\frac{D_{\parallel }^{C,R}(r,h)}{D_0}=1-\frac{9}{16}\frac{a}{h}\pm \frac{3}{2}\frac{a}{r}\left[1-\frac{1+\xi +\frac{3}{2}\xi ^2}{(1+\xi )^{5/2}}\right]$$ (13) up to $`𝒪(a^3/r^3)`$ and $`𝒪(a^3/h^3)`$, where $`\xi =4h^2/r^2`$. These results appear as solid curves in Fig. 2. To gauge the success of this procedure and to quantify the range over which the presence of a wall measurably influences colloidal dynamics, we computed the error-weighted mean-squared deviation of the predicted diffusivities from the measured values, $`\chi _\psi ^2(h)=\int \left[\left(D_\psi ^{\mathrm{expt}}(r,h)-D_\psi (r,h)\right)/\delta D_\psi ^{\mathrm{expt}}(r,h)\right]^2dr`$. Typical results appear in Fig. 4. The lowest-order stokeslet analysis presented here agrees well with measurement over the entire experimentally accessible range. Deviations from the linear superposition approximation's predictions, on the other hand, are evident out to $`h=15`$ $`\mu `$m, or 30 radii. The present study demonstrates that a confining surface can influence colloidal dynamics over a large range of separations. This influence is inherently a many-body effect, as demonstrated by the failure of the linear superposition approximation. Quantitative agreement between our measurements and a leading-order stokeslet analysis offers hope for future progress in understanding confinement's effects on colloidal dynamics. David Altman developed the transparent thin film heater with support from the MRSEC REU program at the University of Chicago. Work at the University of Chicago was supported by the National Science Foundation through grant DMR-9730189, through the MRSEC program of the NSF through grant DMR-9888595, and by the David and Lucile Packard Foundation. Theoretical work was supported by the A.P. Sloan Foundation, the Mathematical Science Division of the National Science Foundation, and a NDSEG Fellowship to TS.
# Brane Cosmologies without Orbifolds
hep-ph/0003173, UTPT-00-05
Hael Collins hael@physics.utoronto.ca and Bob Holdom bob.holdom@utoronto.ca
Department of Physics, University of Toronto, Toronto, Ontario M5S 1A7, Canada
Abstract: We study the dynamics of branes in configurations where 1) the brane is the edge of a single anti-de Sitter (AdS) space and 2) the brane is the surface of a vacuum bubble expanding into a Schwarzschild or AdS-Schwarzschild bulk. In both cases we find solutions that resemble the standard Robertson-Walker cosmologies, although in the latter the evolution can be controlled by a mass parameter in the bulk metric. We also include a term in the brane action for the scalar curvature. This term adds a contribution to the low energy theory of gravity which does not need to affect the cosmology but which is necessary for the surface of the vacuum bubble to recover four dimensional gravity.
March, 2000
1. Introduction.
A remarkable feature in certain theories with more than the observed $`3+1`$ dimensions is that while these extra dimensions can extend infinitely, the geometry of the bulk space-time nevertheless is able to confine gravity to a three dimensional surface within the larger space. Randall and Sundrum (RS) first showed that by attaching two semi-infinite slices of $`4+1`$ dimensional anti-de Sitter space ($`\mathrm{AdS}_5`$) along a three dimensional hypersurface, or ‘$`3`$-brane’, with orbifold conditions about this $`3`$-brane, gravity behaves as though it is confined to its vicinity. This $`3`$-brane is identified with our universe. In addition to reproducing ordinary Newtonian gravity, any successful model should also be able to produce a realistic cosmological evolution for the $`3`$-brane. The dynamical evolution of the brane is determined by Einstein's equations for the combined bulk and brane system, but these equations might not produce the familiar Robertson-Walker cosmology along the brane. Viewed locally, near the brane the surrounding bulk introduces a new element into the field equations for gravity on the brane through a term for the change in the extrinsic curvature across the brane, as originally derived by Israel. While generalizations of the original RS orbifold $`[1]`$ have been shown to admit the usual open, flat and closed Robertson-Walker cosmologies, we shall examine more asymmetric geometries for which the AdS curvature lengths on opposite sides of the brane are not necessarily equal. We shall treat in detail the case of a finite and spherical region of AdS space, including the case of a vacuum bubble that expands in an asymptotically flat $`4+1`$ dimensional space. Most previous studies $`[3]`$ of the dynamics of a brane have included only a surface tension term and a Lagrangian for the matter fields, which generally includes all the Standard Model fields, in the brane action. Yet without a more fundamental description of the physics that produces the brane, these terms should represent only the leading pieces of an effective action that could include higher order terms in a derivative expansion, such as a term for the scalar curvature on the brane, $`\mathcal{R}`$, and higher powers of curvature tensors on the brane, such as $`\mathcal{R}^2`$ and $`\mathcal{R}_{ab}\mathcal{R}^{ab}`$. Such terms generically are suppressed by extra powers of the AdS curvature length scale, $`\ell `$, so at distances much larger than $`\ell `$ we expect that these higher order terms in the brane action can be neglected.
However, at least one fine-tuning is typically made to obtain a vanishing cosmological constant on the brane by cancelling the brane tension against a contribution from the bulk. After this fine-tuning is made, a scalar curvature term on the brane can naturally be of the same order as the terms that remain in the field equations for gravity on the brane. The importance of such a term increases when we consider universes very different from the original Randall-Sundrum scenario. For a vacuum bubble in an asymptotically flat bulk, this term is the sole source for four dimensional gravity. A brane action that contains powers of the brane curvature tensors has also been used in the context of the AdS/CFT correspondence to regularize the action of a bulk $`\mathrm{AdS}_{n+1}`$ space, which diverges when the radius of the $`\mathrm{AdS}_{n+1}`$ space becomes infinite. Unlike the effective field theory description of the brane action, the requirement that the total action of the theory—the sum of the brane action and the bulk action—be finite in this limit precisely fixes the coefficients of the terms in the brane action. The coefficient of the brane tension gives the usual cancellation of the cosmological constant on the brane; however, we find that for the specific coefficient of the scalar curvature term on the brane, the brane curvature term cancels the leading order effects coming from the bulk gravity. In the light of the AdS/CFT correspondence, this result might be anticipated since the bulk gravitational theory is conjectured to be equivalent to a conformal quantum field theory—without gravity—on the surface. In the next section, we derive the equations of motion for a bulk $`\mathrm{AdS}_{n+1}`$ space in which a hypersurface is embedded. In section three, these equations are used to study a dynamic brane on which the induced metric takes the standard Robertson-Walker form. The general equations for a non-orbifolded geometry, including the effects of a scalar curvature term in the brane action, are found. Since these equations are difficult to solve exactly, in section four we neglect the scalar curvature term and focus on the expansion of a vacuum bubble in an asymptotically flat $`4+1`$ dimensional space-time. In section five we include the scalar curvature in the brane action and study its effects when the brane is the edge of a single AdS space. Finally we consider its effect on the vacuum bubble, before concluding in section six.
2. The Action for $`\mathrm{AdS}_{n+1}`$ with a Boundary.
We would like to derive the form of Einstein's equations on an $`n`$-dimensional hypersurface embedded in an $`n+1`$-dimensional bulk space-time. Later, we restrict to the interesting case where $`n=4`$. To be general, we shall treat the bulk space-time as two regions, $`\mathcal{M}_1`$ and $`\mathcal{M}_2`$, separated by the hypersurface, $`\partial \mathcal{M}`$. Note that these bulk regions do not need to have the same metric on either side of the brane but only need to satisfy the Israel conditions derived below. Since the boundary corresponds to the observed universe, we include an action on the brane containing, in addition to a surface tension term, a term for the scalar curvature on the brane plus the contributions from matter and gauge fields confined to the brane. At each point on the brane, we define a space-like unit normal, $`N_a=N_a(x)`$, to the surface that satisfies $`g^{ab}N_aN_b=1`$. $`g_{ab}`$ is the bulk metric and the indices $`a,b`$ run over all the bulk coordinates.
The bulk metric induces a metric on the brane, $$h_{ab}=g_{ab}-N_aN_b;$$ (2.1) while the bulk metric can be discontinuous across the brane, the induced metric on the brane should be the same whether calculated with the bulk metric for either region. Combining all of these ingredients, the total action is the sum of the actions for the two bulk regions (our convention for the sign of the Riemann tensor is $`R_{bcd}^a\equiv \partial _d\mathrm{\Gamma }_{bc}^a-\partial _c\mathrm{\Gamma }_{bd}^a+\mathrm{\Gamma }_{ed}^a\mathrm{\Gamma }_{bc}^e-\mathrm{\Gamma }_{ec}^a\mathrm{\Gamma }_{bd}^e`$), $$\begin{array}{cc}\hfill S_1&=\frac{1}{16\pi G}\int _{\mathcal{M}_1}d^{n+1}x\sqrt{-g}\left[R+\frac{n(n-1)}{\ell _1^2}\right]-\frac{1}{8\pi G}\int _{\partial \mathcal{M}}d^nx\sqrt{-h}K^{(1)},\hfill \\ \hfill S_2&=\frac{1}{16\pi G}\int _{\mathcal{M}_2}d^{n+1}x\sqrt{-g}\left[R+\frac{n(n-1)}{\ell _2^2}\right]-\frac{1}{8\pi G}\int _{\partial \mathcal{M}}d^nx\sqrt{-h}K^{(2)}\hfill \end{array}$$ (2.2) and that of the boundary, $$S_{\mathrm{surf}}=\frac{1}{16\pi G}\int _{\partial \mathcal{M}}d^nx\sqrt{-h}\left[-\frac{2(n-1)}{\ell }\frac{\sigma }{\sigma _c}+b\frac{\ell }{n-2}\mathcal{R}+16\pi G\mathcal{L}_{\mathrm{fields}}+\mathrm{\cdots }\right].$$ (2.3) Here $`G`$ is the bulk Newton's constant and $`K`$ is the trace of the extrinsic curvature $`K_{ab}`$, defined by $$K_{ab}=h_a^c\mathrm{\nabla }_cN_b.$$ (2.4) $`\sigma `$, $`\mathcal{R}`$ and $`\mathcal{L}_{\mathrm{fields}}`$ represent the brane tension, the scalar curvature of the induced metric and the Lagrangian of fields confined to the brane. We normalize the brane tension with respect to a critical tension, $`\sigma _c=3/(8\pi G\ell )`$, as will be useful later (since we are considering more general geometries than the orbifolds of $`[3]`$, it is convenient to define the critical tension to be half that of the RS universe), and we allow the two bulk regions to have potentially different curvature lengths $`\ell _1`$ or $`\ell _2`$. This action is a generalization of that which appears in $`[9]`$ and $`[10]`$. From the vantage of writing an effective theory on the brane $`[6]`$, we simply include the $`\sqrt{-h}\mathcal{R}`$ term as the next to leading term in the brane action in powers of derivatives. The coefficient of this term, $`b\ell /(n-2)`$, is determined by some underlying theory, so we leave it unspecified. Varying the total action yields the usual Einstein equations in the bulk, $$R_{ab}-\frac{1}{2}Rg_{ab}=\frac{n(n-1)}{\ell _{1,2}^2}g_{ab},$$ (2.5) where the appropriate AdS length is chosen for each region, plus the following equation for the surface, $$\mathrm{\Delta }K_{ab}-h_{ab}\mathrm{\Delta }K=\frac{n-1}{\ell }\frac{\sigma }{\sigma _c}h_{ab}-b\frac{\ell }{n-2}\left[\mathcal{R}_{ab}-\frac{1}{2}\mathcal{R}h_{ab}\right]+8\pi GT_{ab}+\mathrm{\cdots }$$ (2.6) where $`\mathrm{\Delta }K_{ab}\equiv K_{ab}^{(2)}-K_{ab}^{(1)}`$, $`T_{ab}`$ is the energy-momentum tensor for the fields confined to the brane, $$T_{ab}\equiv h_{ab}\mathcal{L}_{\mathrm{fields}}+2\frac{\delta \mathcal{L}_{\mathrm{fields}}}{\delta h^{ab}},$$ (2.7) and $`\mathcal{R}_{ab}`$ is the Ricci tensor for the induced metric. Contracting both sides of $`(2.6)`$ with $`h^{ab}`$ and solving for $`\mathrm{\Delta }K=h^{ab}\mathrm{\Delta }K_{ab}`$ gives the Israel condition $$\mathrm{\Delta }K_{ab}=-\frac{1}{\ell }\frac{\sigma }{\sigma _c}h_{ab}-b\frac{\ell }{n-2}\left[\mathcal{R}_{ab}-\frac{1}{2(n-1)}\mathcal{R}h_{ab}\right]+8\pi G\left[T_{ab}-\frac{1}{n-1}T_c^ch_{ab}\right]+\mathrm{\cdots }$$ (2.8) which describes the effect of the presence of the bulk space-time on the brane Einstein equations through the appearance of the extrinsic curvature term.
A similar equation, although without the term arising from varying the scalar curvature in the brane action, has appeared in earlier studies of domain walls in four and five $`[3]`$ $`[4]`$ dimensions. This term might seem unimportant at distances much larger than $`\ell `$, since it contains two more powers of $`\ell `$ compared to the term with the brane tension. However, the contributions from $`\sqrt{-h}\mathcal{R}`$ can be of the same order as the difference between the brane tension and $`\mathrm{\Delta }K_{ab}`$, once the brane tension has been finely tuned. For comparison, in the original RS orbifold we only include the first term on the right side of $`(2.8)`$ and the extrinsic curvatures from the two sides are equal and opposite, $`K_{ab}\equiv K_{ab}^{(1)}=-K_{ab}^{(2)}`$. The bulk $`\mathrm{AdS}_5`$ space gives $`K_{ab}=h_{ab}/\ell `$ which yields the usual fine-tuning condition $`[1]`$ for the brane tension, $`\sigma =2\sigma _c`$.
3. Cosmology on the Boundary.
We shall now set $`n=4`$ and examine some specific solutions of the field equations for gravity on a $`3`$-brane between two $`4+1`$ dimensional regions with negative cosmological constants. The metrics for the interior $`r<R(\tau )`$ and exterior $`r>R(\tau )`$ regions with respect to the brane can be written in the $`\mathrm{AdS}_5`$-Schwarzschild form. $$\begin{array}{cc}\hfill ds^2|_{\mathrm{int}}&=-u(r)dt^2+(u(r))^{-1}dr^2+r^2d\mathrm{\Omega }_3^2\hfill \\ \hfill u(r)&=\frac{r^2}{\ell _1^2}+k-\frac{m_1}{r^2}\hfill \\ \hfill ds^2|_{\mathrm{ext}}&=-v(r)dt^2+(v(r))^{-1}dr^2+r^2d\mathrm{\Omega }_3^2\hfill \\ \hfill v(r)&=\frac{r^2}{\ell _2^2}+k-\frac{m_2}{r^2}.\hfill \end{array}$$ (3.1) An $`\mathrm{AdS}_5`$ bulk corresponds to setting $`m_1=m_2=0`$. We have included the $`m_{1,2}/r^2`$ terms in the metric since they can have an important effect on the brane cosmology. Their presence leads to black-hole horizons at some distance into the bulk whose masses are determined by $`m_1`$ and $`m_2`$ $`[12]`$. We shall frequently refer to the $`k=1`$ case, for which the brane is a $`3`$-sphere and a closed Robertson-Walker cosmology results, but we shall leave $`k`$ in the expressions with the understanding that flat or open cosmologies can be obtained by setting $`k=0`$ or $`-1`$ respectively: $$d\mathrm{\Omega }_3^2\equiv \{\begin{array}{cc}d\chi ^2+\mathrm{sin}^2\chi (d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2)\hfill & k=1\hfill \\ \ell ^{-2}(dx^2+dy^2+dz^2)\hfill & k=0\hfill \\ d\chi ^2+\mathrm{sinh}^2\chi (d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2)\hfill & k=-1\hfill \end{array}$$ (3.2) We are looking for dynamical solutions, so we let the position of the brane be given by $$(t,r,\chi ,\theta ,\varphi )=(T(\tau ),R(\tau ),\chi ,\theta ,\varphi )$$ (3.3) where $`\tau `$ is the proper time for an observer at rest with respect to the brane. The normal to the brane is then $$N_a=(-\dot{R},\dot{T},0,0,0)$$ (3.4) with a dot denoting differentiation with respect to $`\tau `$. Since the normal has unit length, $`g^{ab}N_aN_b=1`$, we can express $`\dot{T}`$ in terms of $`\dot{R}`$, $$\dot{T}=\frac{(\dot{R}^2+u(r))^{1/2}}{u(r)}$$ (3.5) in the interior, and with $`u(r)`$ replaced by $`v(r)`$ in the exterior bulk.
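A one-line symbolic check of Eq. (3.5) from the unit-norm condition; a sketch in Python's sympy, with symbol names of our own choosing:

```python
# Verify Eq. (3.5): -Rdot^2/u + u*Tdot^2 = 1 implies Tdot = sqrt(Rdot^2 + u)/u.
import sympy as sp

Rdot, Tdot, u = sp.symbols("Rdot Tdot u", positive=True)
sols = sp.solve(sp.Eq(-Rdot**2 / u + u * Tdot**2, 1), Tdot)
print(sols)   # the positive root is sqrt(Rdot**2 + u)/u
```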
With the normal in this form, we find that the induced metric on the brane is already in the standard Robertson-Walker form; the metric induced from the interior bulk metric is $$\begin{array}{cc}\hfill ds^2&=-u(r)dt^2+(u(r))^{-1}dr^2+R^2d\mathrm{\Omega }_3^2\hfill \\ &=-u(r)\left(\dot{T}^2-(u(r))^{-2}\dot{R}^2\right)d\tau ^2+R^2d\mathrm{\Omega }_3^2\hfill \\ &=-d\tau ^2+R^2(\tau )d\mathrm{\Omega }_3^2\hfill \\ &\equiv h_{\mu \nu }dx^\mu dx^\nu ,\hfill \end{array}$$ (3.6) where $`\mu `$, $`\nu `$ run over the coordinates on the brane. The exterior region produces exactly the same induced metric. In terms of the coordinate system defined by $`(3.6)`$, the interior contribution to the extrinsic curvature is $$K_{\mu \nu }^{(1)}dx^\mu dx^\nu =-\frac{1}{u(R(\tau ))\dot{T}}\left[\ddot{R}+\frac{1}{2}\frac{\partial u}{\partial R}\right]d\tau ^2+u(R(\tau ))\dot{T}Rd\mathrm{\Omega }_3^2$$ (3.7) with the exterior region contributing an analogous expression with $`u(R(\tau ))`$ replaced by $`v(R(\tau ))`$. Let us consider the matter on the brane to be distributed as an isotropic perfect fluid, of density $`\rho `$ and pressure $`p`$, for which the energy-momentum tensor is $$T_\mu ^\nu =\text{diag}(-\rho ,p,p,p).$$ (3.8) In this case, the spatial components of $`(2.8)`$ together with $`(3.5)`$ yield $$\sqrt{\dot{R}^2+u(R)}\pm \sqrt{\dot{R}^2+v(R)}=\frac{R}{\ell }\frac{\sigma }{\sigma _c}-\frac{b\ell }{2R}\left[\dot{R}^2+k\right]+\frac{8\pi G}{3}R\rho +\mathrm{\cdots }.$$ (3.9) The temporal component of $`(2.8)`$ does not give an independent equation once we have imposed the conservation of energy on the brane $`[3]`$, which demands that $$\frac{d}{d\tau }\left(\rho R^3\right)=-p\frac{d}{d\tau }R^3.$$ (3.10) The choice of the relative sign between the extrinsic curvature terms in $`(3.9)`$ depends on the geometry of the bulk $`\mathrm{AdS}_5`$ space that surrounds the brane. In the original RS universe, the orbifold is made of two slices of $`\mathrm{AdS}_5`$ attached so that the warp factor—the $`r^2/\ell ^2`$ in the AdS metric $`(3.1)`$—decreases as we move further from the brane in either direction. Thus for the orbifold geometry, the plus sign is chosen. When the warp factor behaves differently on opposite sides of the brane, as for a brane simply embedded in a single bulk $`\mathrm{AdS}_5`$ space, the minus sign is used. For no scalar curvature term, $`b=0`$, the Israel condition $`(3.9)`$ can be rewritten so that the evolution of $`R(\tau )`$ is determined by a potential, $$\frac{1}{2}\dot{R}^2+V(R)=-\frac{1}{2}k,$$ (3.11) where $$\begin{array}{cc}\hfill V(R)&=-\frac{1}{8}\frac{R^2}{\ell ^2}\left\{\frac{(\sigma +\rho )^2}{\sigma _c^2}-2\left(\frac{\ell ^2}{\ell _1^2}+\frac{\ell ^2}{\ell _2^2}\right)+\frac{\sigma _c^2}{(\sigma +\rho )^2}\left(\frac{\ell ^2}{\ell _1^2}-\frac{\ell ^2}{\ell _2^2}\right)^2\right\}\hfill \\ &-\frac{1}{4}\frac{1}{R^2}\left\{m_1+m_2-\frac{\sigma _c^2}{(\sigma +\rho )^2}(m_1-m_2)\left(\frac{\ell ^2}{\ell _1^2}-\frac{\ell ^2}{\ell _2^2}\right)\right\}\hfill \\ &-\frac{1}{8}\frac{\sigma _c^2}{(\sigma +\rho )^2}\frac{\ell ^2(m_1-m_2)^2}{R^6}.\hfill \end{array}$$ (3.12) A similar potential is implicit in $`[3]`$. For $`R\gg \ell `$ and for a generic tension, this potential does not produce a standard Robertson-Walker cosmology.
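To make the potential description concrete, here is a toy numerical integration of $`(3.11)`$ with the potential $`(3.12)`$; the parameter choices are purely illustrative, and the signs are those of the reconstruction above:

```python
# Toy integration of (1/2) Rdot^2 + V(R) = -k/2 with the potential (3.12).
import math

k = 0.0                                   # flat spatial sections
ell = ell1 = ell2 = 1.0                   # equal AdS lengths (orbifold-like)
m1 = m2 = 0.0                             # no bulk mass terms
sigma = 2.0                               # tuned tension, Eq. (3.13), in units of sigma_c

def V(R, rho):
    s = sigma + rho                       # (sigma + rho)/sigma_c
    A = (ell / ell1)**2 + (ell / ell2)**2
    B = (ell / ell1)**2 - (ell / ell2)**2
    return (-(R**2 / (8 * ell**2)) * (s**2 - 2 * A + (B / s)**2)
            - (m1 + m2 - (m1 - m2) * B / s**2) / (4 * R**2)
            - ell**2 * (m1 - m2)**2 / (8 * s**2 * R**6))

R, dt, rho0, R0 = 5.0, 1.0e-3, 1.0e-2, 5.0
for _ in range(2000):                     # forward Euler on the expanding branch
    rho = rho0 * (R0 / R)**3              # dust, from the conservation law (3.10)
    R += math.sqrt(max(-k - 2.0 * V(R, rho), 0.0)) * dt
print(R)                                  # R grows like tau^(2/3), a matter era
```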
However, when the brane tension is tuned to $$\sigma =\pm \sigma _c\left|\frac{\ell }{\ell _1}\pm \frac{\ell }{\ell _2}\right|$$ (3.13) the leading $`R^2/\ell ^2`$, $`\rho `$-independent term drops out of the potential. The appropriate signs in $`(3.13)`$ depend on the behavior of the AdS space on either side of the brane. The simplest example of a system that produces a realistic cosmology is an $`\mathrm{AdS}_5`$ space that terminates on an ‘edge of the universe’ $`3`$-brane, in the spirit of , with only a tension term in the brane action. This closely resembles the usual orbifold geometry $`[3]`$ except that here the critical tension, $`\sigma =\sigma _c`$, is half of that needed for the orbifold. For $`\ell _1=\ell `$, $`\ell _2\to \mathrm{\infty }`$ and $`m_1=m`$, $`m_2=0`$, the potential $`(3.12)`$ becomes $$V(R)=-\frac{1}{2}\frac{R^2}{\ell ^2}\frac{1}{\sigma _c^2}\left[(\sigma +\rho )^2-\sigma _c^2\right]-\frac{m}{2R^2}.$$ (3.14) Making the fine-tuning of the brane tension to its critical value, $$\dot{R}^2+k=\frac{R^2}{\ell ^2}\frac{2\rho }{\sigma _c}+\frac{R^2}{\ell ^2}\frac{\rho ^2}{\sigma _c^2}+\frac{m}{R^2}=\frac{16\pi G}{3\ell }\rho R^2+\frac{m}{R^2}+\mathrm{\cdots }.$$ (3.15) Here we have inserted the definition of $`\sigma _c`$ and assumed $`\rho /\sigma _c\ll 1`$. For the standard Robertson-Walker universe, the dynamical equation that determines $`R(\tau )`$ is $$\dot{R}^2+k=\frac{8\pi G_4}{3}\rho R^2$$ (3.16) where $`G_4`$ is the $`3+1`$ dimensional Newton's constant. Thus, identifying $`G_4=2G/\ell `$, we recover the familiar cosmologies on the brane driven by the energy density on the brane, as long as $`m`$ is not too large. A similar result was found in this edge of the universe picture in .
4. A Vacuum Bubble.
When a bubble nucleates in a region having a vacuum energy higher than that in the bubble's interior, the bubble will expand or contract depending upon the surface tension of the bubble and the difference in the bulk vacuum energies. A simple example of this behavior occurs when a bubble of $`\mathrm{AdS}_5`$ is surrounded by an asymptotically flat region. The $`3`$-brane here is the surface of this bubble. The purpose of this section is to introduce this bubble as an example of an acceptable brane cosmology that is driven by one of the mass parameters in the bulk metric, in a relatively simple setting. One obvious difficulty—that the model does not produce a $`4d`$ Newton's law—can be removed by adding a scalar curvature term to the brane action. Yet we shall first study the cosmology without this term, since in this limit we can solve the behavior exactly and shall find that it is maintained when the brane curvature term is included. For a bubble in a flat vacuum, the metrics for the interior and exterior regions are then respectively given by $$u(r)=\frac{r^2}{\ell ^2}+k-\frac{m_1}{r^2}\qquad v(r)=k-\frac{m_2}{r^2}.$$ (4.1) Since $`\ell _2\to \mathrm{\infty }`$, we have set $`\ell _1=\ell `$ without loss of generality.
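The algebra leading to Eq. (3.15) above can be checked symbolically; a sketch (ours), squaring the $`b=0`$, single-root version of $`(3.9)`$ at the critical tension:

```python
# Check Eq. (3.15): square sqrt(Rdot^2 + u) = (R/ell)(1 + rho), with sigma = sigma_c
# and rho measured in units of sigma_c, then solve for Rdot^2 + k.
import sympy as sp

R, ell, rho, m, k, Rdot2 = sp.symbols("R ell rho m k Rdot2", positive=True)
u = R**2 / ell**2 + k - m / R**2
sol = sp.solve(sp.Eq(Rdot2 + u, (R / ell)**2 * (1 + rho)**2), Rdot2)[0]
print(sp.expand(sol + k))   # -> 2*rho*R**2/ell**2 + rho**2*R**2/ell**2 + m/R**2
```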
Then the cosmological evolution is determined by the function $$\begin{array}{cc}\hfill V(R)&=\frac{1}{8}\frac{R^2}{\ell ^2}\left\{2-\frac{(\sigma +\rho )^2}{\sigma _c^2}-\frac{\sigma _c^2}{(\sigma +\rho )^2}\right\}\hfill \\ &-\frac{1}{4}\frac{1}{R^2}\left\{m_1+m_2-\frac{\sigma _c^2(m_1-m_2)}{(\sigma +\rho )^2}\right\}-\frac{1}{8}\frac{\sigma _c^2\ell ^2(m_1-m_2)^2}{(\sigma +\rho )^2R^6}.\hfill \end{array}$$ (4.2) Again, this potential does not lead to a standard Robertson-Walker cosmology on the brane unless we set $`\sigma =\sigma _c`$. Expanding in the limit where the matter density is small compared to this critical tension, we have $$\begin{array}{cc}\hfill V(R)&=-\frac{1}{2}\frac{m_2}{R^2}-\frac{1}{8}\frac{\ell ^2(m_2-m_1)^2}{R^6}+\mathrm{\cdots }\hfill \\ &+\frac{1}{2}\frac{\rho }{\sigma _c}\left(\frac{m_2-m_1}{R^2}+\frac{1}{2}\frac{\ell ^2(m_2-m_1)^2}{R^6}+\mathrm{\cdots }\right)\hfill \\ &-\frac{1}{2}\frac{\rho ^2}{\sigma _c^2}\left(\frac{R^2}{\ell ^2}+\frac{3}{2}\frac{m_2-m_1}{R^2}+\frac{3}{4}\frac{\ell ^2(m_2-m_1)^2}{R^6}+\mathrm{\cdots }\right)+\mathrm{\cdots }.\hfill \end{array}$$ (4.3) The potential for the vacuum bubble $`(4.3)`$ does not contain a $`\rho R^2`$ term, since the same fine-tuning that removes the cosmological constant from the brane also eliminates such a term. However, in the limit in which $`R(\tau )\gg \ell `$ and $`\rho \ll \sigma _c`$, the leading term that determines the cosmology on the surface of the expanding bubble is $$\dot{R}^2+k=\frac{m_2}{R^2}+\mathrm{\cdots }.$$ (4.4) Although this equation seems quite different from $`(3.16)`$, the time dependence of its solution is exactly the same as for a radiation-dominated universe, in which $`\rho \propto R^{-4}`$. Notice that if we do not want the $`\rho ^2R^2`$ term to dominate, we should only consider sufficiently late times in the evolution, when $$\frac{\rho }{\sigma _c}\ll \frac{\ell }{R}.$$ (4.5)
5. Effects of the Scalar Curvature in the Brane Action.
5.1. The Edge of the Universe.
We now study the effects of including a scalar curvature term for the induced metric in the brane action, $`b\ne 0`$. The Israel equation $`(3.9)`$ in this case contains only one extrinsic curvature term, $$\left[\dot{R}^2+u(R)\right]^{1/2}=\frac{R}{\ell }\frac{\sigma +\rho }{\sigma _c}-b\frac{\ell }{2R}\left[\dot{R}^2+k\right].$$ (5.1) Solving for $`\dot{R}^2+k`$, we obtain the potential $$V(R)=-\frac{R^2}{b^2\ell ^2}\left[1+b\frac{\sigma +\rho }{\sigma _c}\pm \sqrt{1+b^2+2b\frac{\sigma +\rho }{\sigma _c}-b^2\frac{m\ell ^2}{R^4}}\right].$$ (5.2) At the critical brane tension, for the lower sign, and assuming that we can expand the square root, we find that $$V(R)=-\frac{1}{1+b}\frac{R^2}{\ell ^2}\frac{\rho }{\sigma _c}-\frac{1}{2(1+b)}\frac{m}{R^2}-\frac{1}{2(1+b)^3}\frac{R^2}{\ell ^2}\frac{\rho ^2}{\sigma _c^2}+\mathrm{\cdots }.$$ (5.3) This time, instead of $`(3.15)`$ we have $$\dot{R}^2+k=\frac{1}{(1+b)}\left(\frac{16\pi G}{3\ell }\rho R^2+\frac{m}{R^2}\right)+\mathrm{\cdots }$$ (5.4) We obtain the standard Robertson-Walker evolution on the brane if we identify the four dimensional Newton's constant with $$G_4=\frac{1}{1+b}\frac{2}{\ell }G.$$ (5.5) We obtain the same result if we calculate $`G_4`$ by considering variations about the background metric and then integrating over the extra dimension, as described in and $`[1]`$.
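The expansion leading to $`(5.3)`$ can likewise be verified symbolically; a sketch (ours), using the lower sign of $`(5.2)`$ with $`\sigma =\sigma _c`$ and $`m=0`$:

```python
# Verify the rho-expansion of (5.2) -> (5.3) for m = 0 and the lower sign.
import sympy as sp

R, ell, b, eps = sp.symbols("R ell b epsilon", positive=True)  # eps = rho/sigma_c
x = 1 + eps
V = -R**2 / (b**2 * ell**2) * (1 + b*x - sp.sqrt(1 + b**2 + 2*b*x))
series = sp.series(V, eps, 0, 3).removeO()
target = -R**2/ell**2 * (eps/(1 + b) + eps**2/(2*(1 + b)**3))
print(sp.simplify(series - target))   # -> 0
```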
Since a non-zero $`m`$ is not needed to produce the standard cosmology $`(5.3)`$, we set it to zero while determining $`G_4`$. It is also convenient, rather than working with the coordinates of $`(3.1)`$, to define a new radial coordinate through $$e^{2\rho /\mathrm{}}=\frac{r^2}{\mathrm{}^2}+1.$$ $`(5.6)`$ The AdS<sub>5</sub> metric then becomes $$ds^2=e^{2\rho /\mathrm{}}dt^2+\frac{e^{2\rho /\mathrm{}}}{e^{2\rho /\mathrm{}}1}d\rho ^2+\left(e^{2\rho /\mathrm{}}1\right)\mathrm{}^2d\mathrm{\Omega }_3^2.$$ $`(5.7)`$ In the limit, $`\rho \mathrm{}`$, this metric reduces to the simpler form $$ds^2e^{2\rho /\mathrm{}}\left(dt^2+\mathrm{}^2d\mathrm{\Omega }_3^2\right)+d\rho ^2.$$ $`(5.8)`$ If we replace the metric on the brane $`dt^2+\mathrm{}^2d\mathrm{\Omega }_3^2`$ with a metric $`\overline{g}_{\mu \nu }(x^\lambda )dx^\mu dx^\nu `$ that only depends on the coordinates on the brane, then we find that the $`5d`$ scalar curvature is related to the $`4d`$ scalar curvature by $$R_5=\frac{20}{\mathrm{}^2}+e^{2\rho /\mathrm{}}\overline{R}_4+\mathrm{}.$$ $`(5.9)`$ Integrating over the AdS<sub>5</sub> region gives then the following term in the effective action, $$\frac{1}{16\pi G_4}_{}d^4x\sqrt{h}\overline{R}_4\frac{1}{16\pi G}_{}d^4x\sqrt{h}\frac{\mathrm{}}{2}\overline{R}_4.$$ $`(5.10)`$ Combining this effective brane curvature induced by the bulk zero-mode with that included in the brane action gives $$S_{\mathrm{eff}}=\frac{1}{16\pi G}_{}d^4x\sqrt{h}\frac{\mathrm{}}{2}\left(\overline{R}_4+b\right)+\mathrm{}$$ $`(5.11)`$ so that the effective four dimensional Newton constant gets renormalized by the factor $`1/(1+b)`$. From the vantage of an effective field theory on the brane, for which $`b`$ is determined by some unknown higher energy theory, this result shows that we recover Newtonian gravity on the brane and at the same time the ability to generate a standard cosmological behavior on the brane, as long as $`b>1`$. For comparison, in the standard orbifold picture, with $`\sigma =2\sigma _c`$, the effective Newton constant on the brane is $$G_4=\frac{1}{2+b}\frac{2}{\mathrm{}}G.$$ $`(5.12)`$ For the special choice $`b=1`$ when $`\sigma =\sigma _c`$, our effective action on the brane corresponds to the first two terms in the brane counterterm action of $`[9]`$ and $`[10]`$ which regularizes the bulk AdS action. The AdS/CFT conjecture $`[7]`$ suggests that for this action, the theory of gravity in the AdS bulk is equivalent to a conformal field theory on the boundary, without gravity. Indeed we find that for physical values for the matter density $`(\rho 0)`$ and for a positive mass parameter in the AdS-Schwarzschild metric $`(m0)`$, we do not recover a realistic cosmological evolution on the brane. For $`b=1`$ and $`\sigma =\sigma _c`$, the Israel equation $`(3.9)`$ yields a complex potential for $`\dot{R}^2+k`$, $$V(R)=\frac{R^2}{\mathrm{}^2}\left[\frac{\rho }{\sigma _c}\pm \sqrt{2\frac{\rho }{\sigma _c}\frac{m\mathrm{}^2}{R^4}}\right],$$ $`(5.13)`$ so we no longer obtain the ordinary cosmological solutions. 5.2. The Vacuum Bubble. When a brane is embedded between arbitrary bulk AdS-Schwarzschild spaces and a scalar curvature term is included in the brane action, the Israel equation $`(3.9)`$ is a quartic polynomial in $`(\dot{R}^2+k)`$ which becomes tractable only for special space-time geometries or in the $`R\mathrm{}`$ limit. 
Returning to the case of a vacuum bubble expanding into an asymptotically flat region, with $`u(r)`$ and $`v(r)`$ as in $`(4.1)`$, we find the following leading behavior in the $`\mathrm{}/R1`$ limit: $$\begin{array}{cc}\hfill V(R)& \frac{1}{2}\frac{m}{R^2}\frac{1}{8}\frac{\mathrm{}^2}{R^2}\frac{m^2}{R^4}(b+1)^2+\frac{1}{2}\frac{m}{R^2}\frac{\rho }{\sigma _c}(b+1)+\mathrm{}\hfill \\ & \frac{1}{2}\frac{\rho ^2}{\sigma _c^2}\left(\frac{R^2}{\mathrm{}^2}+\frac{3}{2}\frac{m}{R^2}(b+1)^2+\mathrm{}\right)+\mathrm{}\hfill \end{array}$$ $`(5.14)`$ Here, for simplicity, we have set $`m_2=m`$ and $`m_1=0`$. Notice that the presence of a brane curvature term has not generated a $`\rho R^2`$ term. Therefore we still require that the cosmology is driven by the $`m/R^2`$ term in order to obtain the same time evolution as in a radiation dominated universe. For the new $`b`$-dependent terms not to overwhelm the $`m/R^2`$ term we must impose $`b\rho /\sigma _c1`$. Comparing with the condition already imposed by $`(4.5)`$—that the $`m/R^2`$ term and not the $`\rho ^2R^2`$ term should drive the cosmology—we see that we can accommodate a $`b\mathrm{}`$ up to cosmological scales without imposing any new constraint. A curvature term in the brane action plays a crucial role in the vacuum bubble scenario since it produces a $`4d`$ Newton’s law for distances along the brane smaller than $`b\mathrm{}`$ . A similar result is also found in for a brane embedded with a flat bulk on both sides. As we just have seen, $`b\mathrm{}`$ can be large without affecting the cosmology. One unpleasant feature of this example is that while the correct Newton’s law is obtained, the effective $`4d`$ Einstein equation contains a term for a scalar graviton $`[17]`$. As a more realistic variation, consider a vacuum bubble that expands into another AdS<sub>5</sub>-Schwarzschild region, rather than a flat bulk. Unlike the standard Randall-Sundrum picture we shall let the second AdS length, $`\mathrm{}_2`$, have a large macroscopic size but which is yet much smaller than the length associated with the brane curvature: $`\mathrm{}_1,\mathrm{}\mathrm{}_2b\mathrm{}`$. The leading behavior of the cosmology $`(3.9)`$ for this universe is then governed by $$\begin{array}{cc}\hfill V(R)& =\frac{1}{2}\frac{1}{b}\frac{\mathrm{}_2}{\mathrm{}_1}\frac{m_2}{R^2}\frac{1}{2}\frac{1}{b}\frac{m_1}{R^2}+\mathrm{}\frac{\rho }{\sigma _c}\left(\frac{1}{b}\frac{R^2}{\mathrm{}^2}\frac{1}{b^2}\frac{\mathrm{}_2}{\mathrm{}}\frac{R^2}{\mathrm{}^2}+\mathrm{}\right)\hfill \\ & +\frac{\rho ^2}{\sigma _c^2}\left(\frac{3}{2}\frac{1}{b}\frac{\mathrm{}_1}{\mathrm{}}\frac{R^2}{\mathrm{}^2}+\mathrm{}\right)+\mathrm{}.\hfill \end{array}$$ $`(5.15)`$ Unlike $`(5.14)`$, the $`\rho R^2`$ term is again present: $$\dot{R}^2+k=\frac{8\pi G}{3}\frac{2}{b\mathrm{}}\rho R^2+\frac{1}{b}\frac{\mathrm{}_2}{\mathrm{}_1}\frac{m_2}{R^2}+\mathrm{}.$$ $`(5.16)`$ Provided $`m_2`$ is not too large, we recover a standard Robertson-Walker cosmology with an effective $`4d`$ Newton’s constant, $`G_4=2G/b\mathrm{}`$. What has happened for this bubble is that above the AdS lengths we expect that the bulk space produces an effectively $`4d`$ theory of gravity $`[16]`$. Since we have assumed that $`\mathrm{}_1,\mathrm{}_2b\mathrm{}`$, when we probe distances below $`\mathrm{}_1,\mathrm{}_2`$ we do not observe the extra dimensions of the bulk space since we are in the regime in which the effect of the brane curvature term dominates. 
This argument is borne out in $`[17]`$ where it is shown that the effective theory of gravity on the surface is governed by a $`4d`$ Einstein equation at all scales when $`\mathrm{}_1,\mathrm{}_2b\mathrm{}`$. 6. Conclusions. We have found that, in general, the inclusion of a scalar curvature term in the brane action still allows us to find the standard Robertson-Walker cosmologies for the evolution of the brane. This standard behavior emerges once the size of the universe has grown large in comparison to the AdS length of the bulk space and provided that the usual fine-tuning of the effective cosmological constant on the brane to zero has been made. When the AdS lengths are small, the presence of this brane scalar curvature term simply acts to renormalize the effective Newton constant on the brane. In the case an ‘end of the universe’ brane, the brane curvature does not affect the cosmology, except when $`b=1`$. We have explored physically intuitive brane universes in which the bulk does not have an orbifold symmetry. In the case of a vacuum bubble expanding into an asymptotically flat space, we encountered an intriguing example of a system in which the existence of the bulk is crucial for the correct cosmological evolution since the $`\rho R^2`$ term that usually produces a Robertson-Walker cosmology is absent. Instead, the cosmology, which has the same time-dependence as a radiation dominated universe, is driven by a mass term in the bulk Schwarzschild metric. A scalar curvature in the brane action plays a more important role here since it provides the only possible source for $`4d`$ gravity up to a scalar graviton. We also examined a variation in which the bubble lies between two regions with potentially very different cosmological constants. For such a bubble, it is possible to recover a completely standard Robertson-Walker cosmology without constraining the bulk AdS lengths to be below a millimeter scale, provided that the brane curvature term is sufficiently strong. These examples should encourage the search for novel extra-dimensional models in which the bulk effects are not small corrections to the standard cosmology but rather drive its evolution. References relax L. Randall and R. Sundrum, “An alternative to compactification,” Phys. Rev. Lett. 83, 4690 (1999) \[hep-th/9906064\]. relax W. Israel, “Singular Hypersurfaces And Thin Shells In General Relativity,” Nuovo Cim. B44, 1 (1966). relax P. Kraus, “Dynamics of anti-de Sitter domain walls,” JHEP 9912, 011 (1999) \[hep-th/9910149\]. relax J. Garriga and M. Sasaki, “Brane-world creation and black holes,” hep-th/9912118. relax C. Csaki, M. Graesser, C. Kolda and J. Terning, Phys. Lett. B462, 34 (1999) \[hep-ph/9906513\]; C. Csaki, M. Graesser, L. Randall and J. Terning, hep-ph/9911406; J. M. Cline, C. Grojean and G. Servant, Phys. Rev. Lett. 83, 4245 (1999) \[hep-ph/9906523\]; E. E. Flanagan, S. H. Tye and I. Wasserman, hep-ph/9910498; P. Binetruy, C. Deffayet and D. Langlois, hep-th/9905012; and T. Shiromizu, K. Maeda and M. Sasaki, gr-qc/9910076. relax R. Sundrum, “Effective field theory for a three-brane universe,” Phys. Rev. D59, 085009 (1999) \[hep-ph/9805471\]. relax For a review, see O. Aharony, S. S. Gubser, J. Maldacena, H. Ooguri and Y. Oz, “Large N field theories, string theory and gravity,” Phys. Rept. 323, 183 (2000) \[hep-th/9905111\]. relax M. Henningson and K. Skenderis, “The holographic Weyl anomaly,” JHEP 9807, 023 (1998) \[hep-th/9806087\]. relax V. Balasubramanian and P. 
Kraus, “A stress tensor for anti-de Sitter gravity,” Commun. Math. Phys. 208, 413 (1999) \[hep-th/9902121\]. relax R. Emparan, C. V. Johnson and R. C. Myers, “Surface terms as counterterms in the AdS/CFT correspondence,” Phys. Rev. D60, 104001 (1999) \[hep-th/9903238\]. relax J. Ipser and P. Sikivie, “The Gravitationally Repulsive Domain Wall,” Phys. Rev. D30, 712 (1984). relax D. Birmingham, “Topological black holes in anti-de Sitter space,” Class. Quant. Grav. 16, 1197 (1999) \[hep-th/9808032\]. relax S. W. Hawking and D. N. Page, “Thermodynamics Of Black Holes In Anti-De Sitter Space,” Commun. Math. Phys. 87, 577 (1983). relax P. Horava and E. Witten, “Eleven-Dimensional Supergravity on a Manifold with Boundary,” Nucl. Phys. B475, 94 (1996) \[hep-th/9603142\]. relax S. S. Gubser, “AdS/CFT and gravity,” hep-th/9912001. relax L. Randall and R. Sundrum, “A large mass hierarchy from a small extra dimension,” Phys. Rev. Lett. 83, 3370 (1999) \[hep-ph/9905221\]. relax H. Collins and B. Holdom, “Linearized gravity about a brane in an asymmetric bulk,” hep-th/0006158. relax G. Dvali, G. Gabadadze and M. Porrati, “4D gravity on a brane in 5D Minkowski space,” hep-th/0005016.
no-problem/0003/astro-ph0003437.html
ar5iv
text
# What’s wrong with AGN models for the X-ray background ? ## 1. Introduction It has been recognized, already a few years ago, that a self–consistent AGN model for the XRB requires the combined fit of several observational constraints in addition to the XRB spectral intensity such as the number counts, the redshift and absorption distribution in different energy ranges, the average spectra and so on (Setti & Woltjer 1989). First attempts towards a “best fit” solution relied on simplified assumptions for the AGN spectral properties and for the evolution of their luminosity function (Madau, Ghisellini & Fabian 1994 (MGF94), Comastri et al. 1995 (CSZH95), Celotti et al. 1995 (CFGM95)). A three step approach has been followed to build the so–called baseline model: the first step is to assume a single average spectrum for the type 1 objects which is usually parameterized as a power law plus a reflection component from a face–on disk and a high–energy cut–off at a few hundreds of keV. A distribution of absorbing column densities for type 2 objects is then added in the second step. Finally the template spectra are folded with an evolving XLF which, in the framework of unified models, does not depend on the amount of intrinsic obscuration. The number density and absorption distribution of obscured sources are then varied until a good fit is obtained. The baseline model led to a successful description of most of the observational data available before 1995 and to testable predictions for the average properties of the sources responsible for the bulk of the XRB. The increasing amount of data from soft and hard X–ray surveys combined with the study of nearby bright sources has been used to obtain a more detailed description of the AGN X–ray spectra and absorption distribution. In addition, the optical identification of sizeable samples of faint AGNs discovered in the ROSAT, ASCA and BeppoSAX surveys has shed new light on the evolution of the AGN luminosity function opening the possibility to test in more detail the AGN synthesis model predictions. As a consequence, the modelling of the XRB has attracted renewed attention and several variations/improvements with respect to the baseline model have been proposed. However, despite the increasing efforts, a coherent self–consistent picture of “the” XRB model has yet to be reached, as most of its ingredients have to be extrapolated well beyond the present limits. Besides the interest in a best–fit model it is by now clear that the problem of the origin of the XRB is closely related to the evolution of accretion and obscuration in AGN. As a consequence, the XRB spectrum should be considered as a useful tool towards a better understanding of the history of black hole formation and evolution in the Universe (Fabian & Iwasawa 1999) and the interplay between AGN activity and star–formation (Franceschini et al. 1999; Fabian this volume). ## 2. Recent Observational constraints ### 2.1. The XRB spectrum The low energy (below 10 keV) XRB spectrum has been measured with the imaging detectors onboard ROSAT, ASCA, and BeppoSAX and a summary of the results is given in Figure 1 together with a compilation of recent re–analysis of the HEAO1 A2 and A4 experiments data. The comparison between the different datasets in the overlapping $``$ 1–8 keV energy range points to a systematic difference in the normalization of the XRB flux while the average spectrum is similar among all the observations. 
The largest deviation is of the order of $``$ 40 % between the HEAO1 A2 and BeppoSAX data (see Vecchi et al. 1999 for a more detailed discussion). Such a discrepancy could be due to residual, not fully understood, cross–calibration errors among the different detectors and/or to field–to–field fluctuations. These findings cast shadows on the intensity and the location of the XRB peak as measured by HEAO1 A2 ($``$ 43 keV cm<sup>-2</sup> s<sup>-1</sup> sr<sup>-1</sup> at 30 keV; Gruber et al. 1999). Indeed a simple extrapolation of the BeppoSAX and HEAO1 A4 MED best fit spectra imply that the XRB spectrum peaks at $``$ 23 keV with a much higher intensity introducing an extra–degree of freedom in AGN models parameter space. A new measurement of the 10–100 keV spectrum would be extremely important. Unfortunately such observations are not foreseen in the near future. ### 2.2. The AGN spectrum As far as the model of the XRB is concerned, the most important parameters to deal with are a good estimate of the average continuum slope and of the absorption column density. The broad–band energy range exploited by BeppoSAX turned out to be extremely useful to probe column densities as high as 10<sup>24-25</sup> cm<sup>-2</sup>, to assess the strength of the reflection component which peaks around 20–30 keV, and the shape of the low–energy soft–excess emission below $``$ 1 keV. In addition ASCA observations of sizeable samples of relatively faint AGNs have allowed to probe the spectral properties of high–luminosity high–redshift objects. The most important new results emerging from these observations can be summarized as follows: $``$ The fraction of heavily obscured (24 $`<`$ log $`N_H`$ $`<`$ 25) and Compton thick (log $`N_H>`$ 25) sources in the local Universe is much higher than previously thought (Risaliti, Maiolino & Salvati 1999) and a fraction as high as 50% of the Seyfert 2 in the local Universe could be obscured by these large column densities. $``$ Soft excess emission is uncommon among bright Seyfert 1 galaxies (Matt this volume) and nearby quasars (George et al. 2000) and estimated to be present in less than $``$ 30 % of AGN. $``$ First observations of high redshift quasars suggest a flattening of the power law slope which cannot be ascribed to the reflection component (Vignali et al. 1999). $``$ Despite intensive searches for high luminosity highly absorbed objects (the so–called type 2 quasars) these sources appear to be elusive and only a few bona–fide examples have been reported in the literature (i.e. Barcons et al. 1998; Georgantopoulos et al. 1999). ### 2.3. The evolution of the AGN X-ray luminosity function The evolution of the AGN XLF has been extensively studied mainly in the soft X–rays and usually parametrized with a pure luminosity evolution (PLE) model (i.e. Boyle et al 1994). A major step forward in the determination of the soft XLF has been recently achieved by Miyaji et al. (2000). Combining the results of several ROSAT surveys it has been possible to explore the low-luminosity high-redshift tail of the XLF in much greater detail than before. The results favour a luminosity dependent density evolution (LDDE) as the best description of the available data. In agreement with previous studies, X-ray selected AGN undergo strong evolution up to a redshift $`z_c`$ = 1.5–2.0 and a levelling–off or a weak negative evolution up to $`z_{max}`$ 4–5. 
Two parametric descriptions (LDDE1 and LDDE2) encompassing the statistically acceptable fits to the soft XLF have been worked out by Miyaji and collaborators. The integration of the LDDE1 and LDDE2 XLF up to $`z`$ 5 accounts for about 60 % and 90 % of the soft XRB respectively. ## 3. The AGN models parameter space ### 3.1. Warnings Before discussing and comparing the various models, it is important to stress the strong coupling between the input spectral parameters and those describing the XLF evolution, which instead are often uncorrectly considered to be independent in the models. Indeed the X–ray luminosities are usually computed converting count rates into fluxes assuming a single valued (relatively steep) slope. This procedure might easily lead to a wrong estimate of the intrinsic luminosity for a very hard absorbed spectrum or if the soft X–ray flux is due to a component not directly related to the obscured nucleus (as in the case of a thermal spectrum from a starburst or scattered emission). According with the XRB baseline model, absorbed AGN become progressively more important towards faint fluxes and thus an additional spurious density evolution term can be introduced in the derivation of the XLF. It turns out that not only the evolution and the space density of obscured AGN are highly uncertain, but also the common practice to consider the soft XLF as representative of the properties of type 1 objects is likely to contain major uncertainties especially when extrapolated to higher energies. Unfortunately our present knowledge of the AGN spectral and evolutive properties does not allow to disentangle the spectral and evolutionary parameters, leaving this ambiguity in all the XRB synthesis models. ### 3.2. An incomplete tour of the parameter space The baseline model (cfr $`\mathrm{\S }`$ 1) has been recently extended, taking into account some of the new observational findings described in $`\mathrm{\S }`$2, by several authors: Gilli, Risaliti & Salvati 1999 (GRS99); Miyaji, Hasinger & Schmidt 1999 (MHS99); Wilman & Fabian 1999 (WF99); Pompilio, La Franca & Matt 2000 (PLM00). A good agreement among the various models has been reached on the high energy cut–off in the input spectrum (300–500 keV), which is basically fixed by the XRB shape above 40 keV (Comastri 1999), and on the $`z_c`$ and $`z_{max}`$ values. GRS99 and MHS99 adopted the LDDE model for the evolution of the XLF and also introduced a cut–off in the luminosity distribution of absorbed AGN for $`L>`$ 10<sup>44</sup> erg s<sup>-1</sup> to cope with the lack of type 2 QSO. The absorption distribution has been fixed according to the recent BeppoSAX results only in the GRS99 model. PLM00 and WF99 both stressed that a proper treatement of the high energy spectrum of heavily obscured (24 $`<`$ log$`N_H`$ $`<`$ 25) objects has important consequences for the modelling. In particular the evolution of the obscured to unobscured ratio as a function of redshift (PLM00) or the need of super–solar abundances to better fit the XRB peak at 30 keV (WF99; but see $`\mathrm{\S }`$ 2.1) have been invoked. A comparison between the various models (all of them providing a fairly good description of the present data) is made difficult by the large dispersion in the starting assumptions among the different authors (see Table 1) and also by the relatively large uncertainty in the XRB spectrum normalization (see $`\mathrm{\S }`$ 2.1). 
The most up-to-date treatment of the XLF evolution has been adopted only by GRS99 and MHS99 who also made an attempt to correct for the biases described in $`\mathrm{\S }`$ 3.1. In both cases the model predictions fall short the hard X–ray (2–10 keV and especially 5–10 keV) counts at relatively bright 10<sup>-13</sup>–10<sup>-12</sup> cgs fluxes. This effect, which is less severe for MHS99 given the very hard input spectrum ($`\alpha `$ = 0.7 plus reflection), can be explained by the relatively low average luminosity of absorbed sources which show up only at fainter fluxes. The hard X–ray counts are better accounted for in PLE models (Fig. 2), which however are based on a less appropriate description of the XLF and include high luminosity highly absorbed sources. It is worth noting that source counts at fluxes $`>`$ 10<sup>-13</sup> cgs, both in the soft and hard bands, should not be entirely accounted for by AGN models as a non negligble fraction of these relatively bright sources are not AGN. The 2–10 keV and 5–10 keV counts are best fitted by those models without soft excess emission in type 1 objects. However in this case the predicted average spectrum of faint sources in the ROSAT band ($`\alpha _E`$ 0.5–0.6) is much harder than the observed value ($`\alpha _E`$ 1.0, Hasinger et al. 1993). Another inconsistency of most of these models concerns the relatively small expected percentage of type 1 unobscured AGN at the 2–10 keV fluxes currently sampled. Indeed optical identifications of medium–deep ASCA surveys (Boyle et al. 1998; Akiyama et al. 2000) suggest that the fraction of unabsorbed broad line AGN is of the order of 60–70 % while only one third of the sources should be type 1 AGN on the basis of the models predictions (but see $`\mathrm{\S }`$ 4). The fraction of type 1 AGN can be increased assuming an LDDE2 model for the evolution of the XLF and a flat $`\alpha _E`$ = 0.7 spectrum for high luminosity objects (Vignali et al. 1999). With these parameters a good fit to the hard XRB spectrum can be obtained even without including heavily obscured ($`N_H>`$ 10<sup>24</sup>) sources. As a result the relative ratio between absorbed and unabsorbed objects at relatively bright fluxes (Fig. 3) decreases significantly. However also within this model the hard X–ray counts are seriously underestimated (being consistent with the dotted line in Fig. 2) owing to the decreased emissivity of hard absorbed sources. As an example of the link between the various parameters I have computed three different models which provide a good fit to the overall XRB spectrum but differ in the choice of the input spectra and XLF evolution (Fig. 4). Assuming a high fraction of type 1 objects as in the LDDE2 scenario a soft excess component cannot be accomodated as the 1/4 keV background would be overpredicted. On the other hand the class of PLE models without soft excess (which better reproduce the hard X–ray counts, but with the caveats discussed above) suggest a possible contribution from other, steep spectrum, sources to the 0.25 keV background. ## 4. Is there a way out ? The main message emerging from what discussed above is that a self–consistent description of all the observational constraints is still lacking. The major problem is the discrepancy between the predictions of those models computed assuming the most up–to–date results, and the high energy ($`>`$ 2 keV) source counts. One obvious possibility is a substantial contribution from non–AGN, flat spectrum sources. 
Extremely hard ($`\alpha _E`$ 0.2) power–law tails above a few keV, possibly originating in advection dominated accretion flows, have been recently discovered in a small sample of nearby elliptical galaxies (Allen, Di Matteo & Fabian 2000). It has been proposed (Di Matteo & Allen 1999) that these objects constitute the missing population needed to fill the gap between the hard counts and the AGN model predictions. However in this case elliptical galaxies should be a non–negligible fraction of the already identified X–ray sources in ASCA and BeppoSAX surveys at variance with the present breakdown of optical identifications. Another interesting possibility, which would allow to include high luminosity highly absorbed AGN in the models and at the same time reproduce most of the observational constraints, is that the optical properties of X–ray obscured AGN are different from what expected (i.e. narrow lined AGN). In this respect the identification of the first High Energy LLarge Area Survey (HELLAS) carried out with BeppoSAX in the 5–10 keV band (Fiore et al. 1999, and this volume) is providing new and unexpected results. In particular, X–ray absorbed AGN are identified with objects which show a large variety of optical classification, such as intermediate type 1.5–1.9 objects, red quasars (Vignali et al. 2000) and even broad line “blue” quasars. A similar behaviour has been also reported for a sample of ROSAT AGN (Mittaz, Carrera, Page this volume). It is also interesting to note that large columns of cold gas have been detected in Broad Absorption Line quasars (Brandt et al. this volume) and in several Broad Line Radio galaxies and radio quasars observed by ASCA (Sambruna, Eracleous & Mushotzky 1999). Although the statistics is not yet good enough to reach firm conclusions, it is quite possible that the correlation between X–ray absorption and optical appearance of AGN change with redshift and/or luminosity (Fig. 5). A decreasing value of the dust–to–gas ratio as a function of the X–ray luminosity would provide a possible explanation of this effect. ## 5. Conclusions In order to achieve a major improvement in the exploration of XRB models parameter space, the resolved fraction of its energy density should be of the order of 50–60 % or higher. The expected contribution of AGN to the 2–10 keV XRB is reported in Table 2 as a function of flux. The model parameters are such to account for an intensity of $``$ 7 $`\times `$ 10<sup>-8</sup> erg cm<sup>-2</sup> s<sup>-1</sup> sr<sup>-1</sup> (in between the ASCA and BeppoSAX measurements) at $``$ 10<sup>-17</sup> cgs. The predictions are model dependent and should be considered as indicative. Nevertheless it is clear that at the fluxes sampled by the foreseen Chandra and XMM medium–deep surveys most of the XRB will be resolved allowing to unveil the nature of the sources making the bulk of its energy density. The most important challenge for XRB models will be the study of X–ray absorption and luminosity distribution for 2–10 keV fluxes $`<`$ 10<sup>-13</sup> cgs, the search for heavily obscured AGN which according to the predictions are expected to show up in a substantial fraction below $`<`$ 10<sup>-14</sup> cgs (cfr. Fig. 3), and the optical–infrared follow–up of X–ray obscured sources. ## ACKNOWLEDGEMENTS Partial support from ASI contract ARS-98-119 and MURST grant Cofin98-02-32 is acknowledeged. I thank G. Zamorani and R. Gilli for useful discussions. 
## REFERENCES Allen S.W., Di Matteo T., Fabian A.C., 2000, MNRAS 311, 493 Akiyama M., et al., 2000, ApJ in press (astro-ph/0001289) Barcons X., et al., 1998, MNRAS 301, L25 Boyle B.J., et al., 1994, MNRAS 271, 639 Boyle B.J., et al., 1998, MNRAS 296, 1 Cagnoni I., Della Ceca R., Maccacaro T., 1998, ApJ 493, 54 Celotti A., Fabian A.C., Ghisellini G., Madau P., 1995, MNRAS 277, 1169 Comastri A., Setti G., Zamorani G., Hasinger G., 1995, A&A 296, 1 Comastri A., 1999, Astr. Lett. & Comm. 39, 181 Di Matteo T., Allen S.W., 1999, ApJ 527, L21 Fabian A.C., Iwasawa K., 1999, MNRAS 303, L34 Fiore F., et al., 1999, MNRAS 306, L55 Franceschini A., et al., 1999, MNRAS 310, L5 Gendreau K.C., et al., 1995, PASJ 47, 5 Gendreau K.C., Barcons X., Fabian A.C., 1998, MNRAS 297, 41 Georgantopoulos I., et al., 1996, MNRAS 280, 276 Georgantopoulos I., et al., 1999, MNRAS 305, 125 George I.M., et al., 2000, ApJ 531, 52 Gilli R., Risaliti G., Salvati M., 1999, A&A 347, 424 Gruber D.E., et al., 1999, ApJ 520, 124 Hasinger G., et al., 1993, A&A 275, 1 Kinzer R.L., et al., 1997, ApJ 475, 361 Madau P., Ghisellini G., Fabian A.C., 1994, MNRAS 270, L17 Miyaji T., et al. 1998, A&A 334, L13 Miyaji T., Hasinger G., Schmidt M., 1999, Adv. Space Res. in press Miyaji T., Hasinger G., Schmidt M., 2000, A&A 353, 25 Ogasaka Y., et al., 1998, AN 319, 43 Piccinotti G., et al., 1982, ApJ 253, 485 Pompilio F., La Franca F., Matt G., 2000, A&A 353, 440 Risaliti G., Maiolino R., Salvati M., 1999, ApJ 522, 157 Roberts T.P., Warwick R.S., 1998, AN 319, 34 Sambruna R.M., Eracleous M., Mushotzky R.F., 1999, ApJ 526, 60 Setti G., Woltjer L., 1989, A&A 224, L21 Ueda Y., et al., 1999, ApJ 524, L11 Vecchi A., Molendi S., Guainazzi M., et al., 1999, A&A 349, L73 Vignali C., Comastri A., Cappi M., et al., 1999, ApJ 516, 590 Vignali C., Mignoli M., Comastri A., et al., 2000, MNRAS in press (astro-ph/0002279) Wilman R.J., Fabian A.C., 1999, MNRAS 309, 862
no-problem/0003/hep-th0003257.html
ar5iv
text
# Nucleation at Finite Temperature Beyond the Superminispace Model ## I Introduction The problem of the decay rate of a metastable state and coherence of degenerate states via quantum tunneling has profound physical implications in many fundamental phenomena in various branches of physics as e.g. in condensed matter and particle physics and cosmology. The instanton techniques initiated long ago by Langer and Coleman are a major tool of investigation, and provide us with a formalism capable of producing accurate values for the tunneling rate. The instanton method has been widely used to study tunneling at zero temperature. The generalization to finite temperature tunneling has been a long–standing problem in which a new type of solution satisfying a periodic boundary condition, and therefore called the periodic instanton, was gradually realized to be relevant. The exact analytic form of the periodic instanton is known only in one–dimensional quantum mechanics. In field theory models, it can be found either approximately at low energies or numerically. Thus quantum tunneling at finite temperature $`T`$ is, under certain conditions, dominated by periodic instantons with finite energy $`E`$, and in the semi–classical approximation the euclidean action is expected to be saturated by a single periodic instanton. Thus only periodic instantons with the period equal to the inverse temperature can dominate the thermal rate. With exponential accuracy the tunneling probability $`P(E)`$ at a given energy $`E`$ can be written as $$P(E)e^{W(E)}=e^{S(\beta )+E\beta }$$ (1) The period $`\beta `$ of the periodic instanton is related to the energy $`E`$ in the standard way $`E=\frac{S}{\beta }`$ and $`S(\beta )`$ is the action of the periodic instanton per period. With increasing temperature thermal hopping becomes more and more important and beyond some critical or crossover temperature $`T_c`$ becomes the decisive mechanism. The barrier penetration process is then governed by a static solution of the euclidean field equation, i.e. the sphaleron. The study of the crossover from quantum tunneling to thermal hopping is an interesting subject with a long history. Under certain assumptions for the shape of the potential barrier, it was found that the transition between quantum tunneling and thermally assisted hopping occurs at the temperature $`T_c`$ and was recognized as a smooth second order transition in the quantum mechanical models of Affleck and the cosmological models of Linde. It was demonstrated that the periodic instantons which govern the tunneling in the intermediate temperature regime interpolate smoothly between the zero temperature or vacuum instanton and the sphaleron. In analogy with the terminology of statistical mechanics this phenomenon can be referred to as a second–order transition characterized by the plot of euclidean action $`S(\beta )`$ versus instanton period $`\beta `$, the latter being the inverse temperature in the finite temperature field theory. However, it was shown that the smooth transition is not generic. Using a simple quantum mechanical model it was demonstrated that the time derivative of the euclidean action would be discontinuous if the period of the instanton is not a monotonic function of energy. Assuming that there exists a minimum of $`\beta (E)`$, i. e. 
that $`\frac{d\beta }{dE}=0`$ at some value of $`E`$, the second time derivative of the action $$\frac{d^2S(\beta )}{d\beta ^2}=\frac{1}{\frac{d\beta }{dE}}$$ (2) would not be defined at the minimum, or, in other words, the first time derivative is discontinuous. The sharp first order transition occurs as a bifurcation in the plot of the action $`S(\beta )`$ versus the period $`\beta `$. In the context of field theory the crossover behaviour and the bifurcation of periodic instantons have also been explained in a more transparent manner. The idea to determine the order of a transition from the plot of euclidean action versus the period of the instantons was subsequently extended, and a sufficient condition for the existence of a first–order transition was derived using only small fluctuations around the sphaleron. If the period $`\beta (EU_0)`$ of the periodic instanton close to the barrier peak can be found, a sufficient condition to have a first–order transition is seen to be that $`\beta (EU_0)\beta _s<0`$ or $`\omega ^2\omega _s^2>0`$, where $`U_0`$ denotes the barrier height and $`\beta _s`$ is the period of small oscillation around the sphaleron; $`\omega `$ and $`\omega _s`$ are the corresponding frequencies. This observation triggered active research on the transition behaviour, as e.g. in connection with spin tunneling in condensed matter physics and with tunneling in various field theory models. It is also interesting to investigate the crossover from quantum tunneling to thermal hopping in the context of cosmology. After the pioneering attempt of Linde little work has been done along this direction in view of the lack of solvable models. Motivated by a similar study of bubble nucleation in field theory, the nucleation rate in a superminispace model has been extended to finite temperature where the matter field is frozen out and leaves only the constant vacuum energy density. In this context a periodic instanton solution was obtained analytically, and the transition from tunneling to thermal hopping was found to be of the first order. The superminispace model may be too simple to imply a realistic result. In the present paper we therefore extend the model to one including a spacially homogeneous matter field. Small fluctuations around the sphaleron solutions are then studied with a perturbation method. Criteria for a first–order transition and related phase diagrams are obtained for both scalar and vector fields. This investigation may shed light on our understanding of the time evolution of the early Universe. In Sec. 2 the effective Lagrangian and the equations of motion are obtained for the closed Robertson–Walker (RW) metric interacting with spacially homogeneous scalar and vector fields. The physical meaning of tunneling is explained. The oscillation frequencies around the sphaleron are obtained with a perturbation expansion in Sec. 3. We derive the criterion for the first–order transition of the nucleation and the related phase diagram for interaction with a scalar field. In Sec. 4 we apply a similar approach to the model with a vector field. ## II Sphalerons and the Thermal Rate of Nucleation We begin with the model of the Universe defined by the action, $$S=d^4x\sqrt{g}\left[\frac{}{16\pi G}+_m\right]$$ (3) where $``$ is the Ricci scalar. The Lagrangian density of the scalar matter field $`\varphi `$ is of the general form, $$_m=\frac{1}{G_\varphi }\left[\frac{1}{2}_\mu \varphi ^\mu \varphi V(\varphi )\right]$$ (4) where $`G_\varphi `$ is a dimensional parameter. 
For a vector field we consider that of the $`O(3)`$ nonlinear $`\sigma `$–model with a symmetry breaking term such as, $$_m=\frac{1}{2}m\underset{a}{}_\mu n_a^\mu n_a\frac{1}{\lambda ^2}(1+n_3),\underset{a=1}{\overset{3}{}}n_a^2=1$$ (5) where m and $`\lambda `$ are suitable parameters. Contemporary cosmological models are based on the idea that the Universe is pretty much the same everywhere. More mathematically precise properties of the manifold may be isotropy and homogeneity. The spacetime to be considered here is $`𝐑\times \mathrm{\Sigma }`$ where $`𝐑`$ represents the time direction and $`\mathrm{\Sigma }`$ is a homogeneous and isotropic 3–manifold. The Universe is also assumed to be closed. We therefore obtain the Robertson–Walker (RW) metric of the closed case, $$ds^2=dt^2R^2(t)d\mathrm{\Omega }_3^2$$ (6) The manifold $`\mathrm{\Sigma }`$ in our case is a three–sphere $`S^3`$ and the lapse function is simply equal to 1. R(t) is known as the scale factor which tells us “how big” the spacetime slice $`\mathrm{\Sigma }`$ is at time t. $`d\mathrm{\Omega }_3^2`$ is the metric on a unit 3-sphere. The Ricci scalar is found to be $$=6\left[\frac{\ddot{R}}{R}+\frac{\dot{R}^2}{R^2}+\frac{1}{R^2}\right]$$ (7) where a dot denotes the time derivative. For spacially homogeneous matter fields $`\varphi =\varphi (t)`$ and $`𝐧=𝐧(t)`$ the angle integrals can be carried out and we have, $$S=L𝑑t$$ (8) The effective Lagrangians are obtained as $$L=2\pi ^2\left\{\frac{3R(\dot{R}^21)}{8\pi G}+\frac{1}{G_\varphi }\left[\frac{1}{2}R^3\dot{\varphi }^2R^3V(\varphi )\right]\right\}$$ (9) for the scalar field and $$L=2\pi ^2\left[\frac{3R(\dot{R}^21)}{8\pi G}+\frac{mR^3}{2}\underset{a}{}\dot{n}_a^2\frac{R^3}{\lambda ^2}(1+n_3)\right]$$ (10) for the vector field. The canonical momenta are defined by $$P_R=\frac{L}{\dot{R}}=\frac{3\pi R\dot{R}}{2G},P_\varphi =\frac{L}{\dot{\varphi }}=\frac{2\pi ^2}{G_\varphi }R^3\dot{\varphi },P_a=2\pi ^2mR^3\dot{n}_a$$ (11) The corresponding Hamiltonians $$H=\frac{G_\varphi }{4\pi ^2R^3}P_\varphi ^2\frac{G}{3\pi R}P_R^2\frac{3\pi }{4G}R+\frac{2\pi ^2}{G_\varphi }R^3V(\varphi ),$$ (12) $$H=\frac{1}{4\pi ^2mR^3}P_a^2\frac{G}{3\pi R}P_R^2\frac{3\pi }{4G}R+\frac{2\pi ^2}{\lambda ^2}R^3(1+n_3)$$ (13) are conserved quantities. For our purposes of the study of nucleation we make use of the Wick rotation $`\tau =it`$ and obtain the euclidean Lagrangians, $$L_e=2\pi ^2\left\{\frac{3R(\dot{R}^2+1)}{8\pi G}+\frac{1}{G_\varphi }\left[\frac{1}{2}R^3\dot{\varphi }^2+R^3V(\varphi )\right]\right\},$$ (14) $$L_e=2\pi ^2\left[\frac{3R(\dot{R}^2+1)}{8\pi G}+\frac{m}{2}R^3\underset{a}{}\dot{n}_a^2+\frac{R^3}{\lambda ^2}(1+n_3)\right]$$ (15) From now on the dot denotes imaginary time derivatives, e. g. $`\dot{R}=\frac{dR}{d\tau }`$. The euclidean equations of motion for the scalar field are seen to be $$\frac{d}{d\tau }(R\dot{R})\frac{\dot{R}^2+1}{2}+2\pi \stackrel{~}{G}R^3\dot{\varphi }^2+4\pi \stackrel{~}{G}R^2V(\varphi )=0$$ (16) where $`\stackrel{~}{G}=\frac{G}{G_\varphi }`$, and $$\frac{d}{d\tau }(R^3\dot{\varphi })=R^3\frac{V}{\varphi }$$ (17) The sphalerons $`\varphi _0`$ and $`R_0`$ are static solutions of eqs. (16) and (17) with $`\dot{\varphi }=\ddot{\varphi }=\dot{R}=\ddot{R}=0`$. From eq.(16) we have $$R_0=\left[\frac{1}{8\pi \stackrel{~}{G}V(\varphi _0)}\right]^{\frac{1}{2}}$$ (18) $`\varphi _0`$ is the position of the peak of the potential barrier such that $`\frac{V}{\varphi }|_{\varphi =\varphi _0}=0`$. 
With the sphaleron $`\varphi _0`$ the effective potential of the dynamical variable $`R`$ is seen to be from eq.(9), $$U(R)=\frac{R^3V(\varphi _0)}{G_\varphi }+\frac{3R}{8\pi G}$$ (19) $`R_0`$ is just the position of the above potential barrier peak and indeed the sphaleron. The thermal rate of nucleation at temperature T is given by the Arrhenius law, $$P(T)e^{\frac{U(R_0)}{T}}$$ (20) Our superminispace model here is simply the dynamical model described by the equation of motion (16) with the scalar field variable $`\varphi `$ in $`V(\varphi )`$ replaced by the sphaleron $`\varphi _0`$. The nucleation process in the superminispace model has been extended to the finite temperature case with the periodic instanton formalism in our previous work. In the present paper the scalar field is not frozen out and we instead investigate the fluctuation of the fields around the sphalerons. The crossover behaviour from tunneling to thermal hopping can be obtained with perturbation expansions. ## III Nucleation at finite temperature in presence of a scalar field As we demonstrated above, the crossover behaviour of the nucleation of our model Universe from quantum tunneling to thermal activation can be obtained from the deviation of the period of the periodic instanton from that of the sphaleron. To this end we expand the field variables about the sphaleron configurations $`\varphi _0`$ and $`R_0`$, i. e. we set $$\varphi =\varphi _0+\xi ,R=R_0+\eta $$ (21) where $`\xi `$ and $`\eta `$ are small fluctuations. Substitution of eq.(21) into the equations of motion (16) and (17) yields the following power series equations of the fluctuation fields $`\xi `$ and $`\eta `$, $$\widehat{l}\left(\begin{array}{c}\xi \\ \eta \end{array}\right)=\widehat{h}\left(\begin{array}{c}\xi \\ \eta \end{array}\right)+\left(\begin{array}{c}G_2^\xi (\xi ,\eta )\\ G_2^\eta (\xi ,\eta )\end{array}\right)+\left(\begin{array}{c}G_3^\xi (\xi ,\eta )\\ G_3^\eta (\xi ,\eta )\end{array}\right)+\left(\begin{array}{c}G_4^\xi (\xi ,\eta )\\ G_4^\eta (\xi ,\eta )\end{array}\right)+\mathrm{}$$ (22) where the operators $`\widehat{l},\widehat{h}`$ are defined as $$\widehat{l}=\left(\begin{array}{cc}\frac{d^2}{d\tau ^2}& 0\\ o& \frac{d^2}{d\tau ^2}\end{array}\right),\widehat{h}=\left(\begin{array}{cc}V^{(2)}(\varphi _0)& 0\\ o& 8\pi \stackrel{~}{G}V(\varphi _0)\end{array}\right)$$ (23) and $`G_2,G_3,\mathrm{}`$ denote terms which contain quadratic, cubic and higher powers of the small fluctuations respectively: $`G_2^\xi `$ $`=`$ $`{\displaystyle \frac{3}{R_0}}\left[\dot{\eta }\dot{\xi }+\eta \ddot{\xi }\right]+{\displaystyle \frac{1}{2}}V^{(3)}(\varphi _0)\xi ^2+{\displaystyle \frac{3}{R_0}}V^{(2)}(\varphi _0)\xi \eta ,`$ $`G_3^\xi `$ $`=`$ $`{\displaystyle \frac{6}{R_0^2}}\eta \dot{\eta }\dot{\xi }{\displaystyle \frac{3}{R_0^2}}\eta ^2\ddot{\xi }+{\displaystyle \frac{1}{3!}}V^{(4)}(\varphi _0)\xi ^3+{\displaystyle \frac{3}{2R_0}}V^{(3)}(\varphi _0)\eta \xi ^2+{\displaystyle \frac{3}{R_0^2}}V^{(2)}(\varphi _0)\xi \eta ^2,`$ $`G_4^\xi `$ $`=`$ $`{\displaystyle \frac{3}{R_0^3}}\eta ^2\dot{\eta }\dot{\xi }{\displaystyle \frac{1}{R_0^3}}\eta ^3\ddot{\xi }+{\displaystyle \frac{1}{4!}}V^{(5)}(\varphi _0)\xi ^4+{\displaystyle \frac{1}{2R_0}}V^{(4)}(\varphi _0)\xi ^3\eta +{\displaystyle \frac{3}{2R_0^2}}V^{(3)}(\varphi _0)\eta ^2\xi ^2+{\displaystyle \frac{1}{R_0^3}}V^{(2)}(\varphi _0)\xi \eta ^3,`$ $`G_2^\eta `$ $`=`$ $`{\displaystyle \frac{1}{2R_0}}\dot{\eta }^2{\displaystyle \frac{1}{R_0}}\eta \ddot{\eta }2\pi \stackrel{~}{G}R_0\dot{\xi }^22\pi 
\stackrel{~}{G}R_0V^{(2)}(\varphi _0)\xi ^2{\displaystyle \frac{4\pi \stackrel{~}{G}}{R_0}}V(\varphi _))\eta ^2,`$ $`G_3^\eta `$ $`=`$ $`4\pi \stackrel{~}{G}\eta \dot{\xi }^24\pi \stackrel{~}{G}V^{(2)}(\varphi _0)\eta \xi ^2{\displaystyle \frac{4\pi \stackrel{~}{G}}{3!}}R_0V^{(3)}(\varphi _0)\xi ^3,`$ $`G_4^\eta `$ $`=`$ $`{\displaystyle \frac{2\pi \stackrel{~}{G}}{R_0}}\eta ^2\dot{\xi }^2{\displaystyle \frac{8\pi \stackrel{~}{G}}{3!}}V^{(3)}(\varphi _0)\eta \xi ^3{\displaystyle \frac{2\pi \stackrel{~}{G}}{R_0}}V^{(2)}(\varphi _0)\xi ^2\eta ^2{\displaystyle \frac{\pi \stackrel{~}{G}R_0}{3!}}V^{(4)}(\varphi _0)\xi ^4`$ where $`V^{(n)}(\varphi _0)=\frac{d^nV(\varphi )}{d\varphi ^n}|_{\varphi =\varphi _0}`$. The first–order solution of the fluctuation fields is obvious from eq.(22). We have $$\xi \mathrm{cos}\omega _0\tau ,\eta \mathrm{cos}\omega _0\tau ,\omega _0^2=\frac{1}{R_0^2}=V^{(2)}(\varphi _0)$$ (24) where $`\omega _0`$ is the frequency of the sphalerons which is simply the frequency of small oscillations in the bottoms of the inverted potential wells of $`U(R)`$ and $`V(\varphi )`$. Substituting the first-order solution into eq.(22) we obtain the second–order solution; the higher–order results can be obtained by successive substitutions. After the second substitution we have, $$\xi =\rho \mathrm{cos}\omega \tau +\rho ^2[g_{1,\xi }+g_{2,\xi }\mathrm{cos}2\omega \tau ]+\xi _3,$$ (25) $$\eta =\rho \mathrm{cos}\omega \tau +\rho ^2g_{2,\eta }\mathrm{cos}2\omega \tau +\eta _3,$$ (26) where $`g_{1,\xi }`$ $`=`$ $`{\displaystyle \frac{1}{2\omega _0^2}}\left[{\displaystyle \frac{1}{2}}V^{(3)}(\varphi _0)3\omega _0^3\right],`$ $`g_{2,\xi }`$ $`=`$ $`{\displaystyle \frac{1}{6\omega _0^2}}\left[3\omega _0^3+{\displaystyle \frac{1}{2}}V^{(3)}(\varphi _0)\right],`$ $`g_{2,\eta }`$ $`=`$ $`{\displaystyle \frac{1}{3\omega _0}}\left[{\displaystyle \frac{3}{4}}\omega _0^2+2\pi \stackrel{~}{G}(1V(\varphi _0))\right].`$ In our case $`g_{1,\eta }=0`$. Here $`\rho `$ serves as a perturbation parameter. The third–order corrections $`\xi _3,\eta _3`$ are proportional to $`\rho ^3`$. Substitution of eqs.(25), (26) into the equation of motion (22) yields an equation to determine $`\xi _3`$ , $`\eta _3`$ and the corresponding frequency $`\omega `$. After some tedious algebra we obtain the deviation of the frequency from $`\omega _0`$ up to order of $`\rho ^4`$, i.e. $$\omega ^2\omega _0^2=\rho ^2\frac{4\pi \stackrel{~}{G}}{3\omega _0^2}V^{(3)}(\varphi _0)g_{1,\xi }\rho ^42\pi \stackrel{~}{G}\left[\frac{V^{(4)}(\varphi _0)}{3\omega _0^2}+2\omega _0^2\right]g_{1,\xi }^2$$ (27) The $`\rho ^4`$ term applies in case the $`\rho ^2`$ term vanishes. The sufficient condition for a transition of the first order to occur is $`\omega ^2\omega _0^2>0`$. In Fig. 2 we show the phase diagram taking into account terms up to the order of $`\rho ^2`$. We now analyse some field models in terms of our criterion eq.(27). For the well studied $`\varphi ^4`$ model, $$V(\varphi )=(\varphi ^2\alpha ^2)^2$$ (28) we have $`\varphi _0=0`$, $`\omega _0^2=V(\varphi _0)=4\alpha ^2`$, $`V^{(3)}(\varphi _0)=0`$ and $`V^{(4)}(\varphi _0)=24`$. Eq.(27) leads to $$\omega ^2\omega _0^2<0$$ (29) The transition is of second order, in agreement with previous observations in the literature. 
In recent investigations it was pointed out that the transition can be first order with a steeper well of the potential, $$V(\varphi )=\frac{4+\alpha }{12}\frac{1}{2}\varphi ^2\frac{\alpha }{4}\varphi ^4+\frac{1+\alpha }{6}\varphi ^6$$ (30) which is a double–well type for $`\alpha >0`$ (see Fig. 3). The sphaleron is $`\varphi _0=0`$ with frequency $`\omega _0=1`$. Since $`V^{(3)}(\varphi _0)=0`$ the criterion for the first–order transition is determined by the $`\rho ^4`$ term such that $$\omega ^2\omega _0^2=18\pi \rho ^4[1\alpha ]\stackrel{~}{G}$$ (31) We thus have either first or second order transitions depending on the parameter $`\alpha `$. When $`0<\alpha <1`$ the transition is of second order, while for $`\alpha >1`$ it is of the first order. ## IV Vector matter field The winding number transition at finite temperature, i. e. the transition between degenerate vacua with the vector field model of eq.(5), has been investigated recently using a similar method, but in a flat space-time. It was found that in that context the transition is always of the first order. We now consider the nucleation of the model Universe in the presence of the same vector field. We reexpress the vector field with unit norm in terms of angular variables, i.e. $$𝐧=(\mathrm{sin}\theta \mathrm{cos}\phi ,\mathrm{sin}\theta \mathrm{sin}\phi ,\mathrm{cos}\theta )$$ (32) The euclidean equations of motion are found from the Lagrangian (15) to be $$\frac{d}{d\tau }(R^3\dot{\theta })R^3\mathrm{sin}\theta \mathrm{cos}\theta \dot{\phi }+\frac{R^3}{m\lambda ^2}\mathrm{sin}\theta =0,$$ (33) $$\frac{d}{d\tau }(R\dot{R})\frac{1+\dot{R}^2}{2}+2\pi GR^2\left[m\underset{a}{}\dot{n}_a^2+\frac{2}{\lambda ^2}(1+\mathrm{cos}\theta )\right]=0,$$ (34) $$\frac{d}{d\tau }(R^3\dot{\phi }\mathrm{sin}^2\theta )=0$$ (35) where $`\dot{n}_1`$ $`=`$ $`\dot{\theta }\mathrm{cos}\theta \mathrm{cos}\phi \dot{\phi }\mathrm{sin}\theta \mathrm{sin}\phi ,`$ $`\dot{n}_2`$ $`=`$ $`\dot{\theta }\mathrm{cos}\theta \mathrm{sin}\phi +\dot{\phi }\mathrm{sin}\theta \mathrm{cos}\phi ,`$ $`\dot{n}_3`$ $`=`$ $`\dot{\theta }\mathrm{sin}\theta .`$ The sphaleron solution which is obtained from $`\dot{\theta }=\ddot{\theta }=\dot{\phi }=\ddot{\phi }=\dot{R}=\ddot{R}=0`$ is seen to be $$𝐧_0=(0,0,1),R_0=\frac{\lambda }{4\sqrt{\pi G}}$$ (36) with $`\theta _0=0`$ and $`\phi _0`$ an arbitrary constant. 
We again consider the perturbation expansion around the sphaleron configurations and set $$\theta =\theta _0+\gamma ,\phi =\phi _0+\delta ,R=R_0+\zeta $$ (37) A self consistent solution is determined by $$\widehat{l}\left(\begin{array}{c}\gamma \\ \zeta \end{array}\right)=\widehat{h}_v\left(\begin{array}{c}\gamma \\ \zeta \end{array}\right)+\left(\begin{array}{c}G_2^\gamma (\gamma ,\zeta )\\ G_2^\zeta (\gamma ,\zeta )\end{array}\right)+\left(\begin{array}{c}G_3^\gamma (\gamma ,\zeta )\\ G_3^\zeta (\gamma ,\zeta )\end{array}\right)+\left(\begin{array}{c}G_4^\gamma (\gamma ,\zeta )\\ G_4^\zeta (\gamma ,\zeta )\end{array}\right)+\mathrm{}$$ (38) with $`\delta =const.`$ where $$\widehat{h}_v=\left(\begin{array}{cc}\frac{1}{m\lambda ^2}& 0\\ o& \frac{16\pi G}{\lambda ^2}\end{array}\right)$$ (39) and $`G_2^\gamma `$ $`=`$ $`{\displaystyle \frac{3}{R_0}}\left[\dot{\zeta }\dot{\gamma }+\zeta \ddot{\gamma }+{\displaystyle \frac{1}{m\lambda ^2}}\zeta \gamma \right],`$ $`G_3^\gamma `$ $`=`$ $`{\displaystyle \frac{6}{R_0^2}}\zeta \dot{\zeta }\dot{\gamma }{\displaystyle \frac{3}{R_0^2}}\zeta ^2\ddot{\gamma }+{\displaystyle \frac{1}{3!m\lambda ^2}}\gamma ^3+\gamma \dot{\delta }^2{\displaystyle \frac{3}{R_0^2m\lambda ^2}}\gamma \zeta ^2,`$ $`G_4^\gamma `$ $`=`$ $`{\displaystyle \frac{3}{R_0^3}}\zeta ^2\dot{\zeta }\dot{\gamma }{\displaystyle \frac{1}{R_0^3}}\zeta ^3\ddot{\gamma }+{\displaystyle \frac{3}{R_0}}\zeta \dot{\delta }^2\gamma +{\displaystyle \frac{1}{2m\lambda ^2R_0}}\zeta \gamma ^3{\displaystyle \frac{1}{m\lambda ^2R_0^3}}\zeta ^3\gamma ,`$ $`G_2^\zeta `$ $`=`$ $`2\pi GmR_0\dot{\gamma }^2{\displaystyle \frac{1}{R_0}}\zeta \ddot{\zeta }+{\displaystyle \frac{2\pi G}{\lambda ^2}}R_0\gamma ^2{\displaystyle \frac{8\pi G}{\lambda ^2R_0}}\zeta ^2{\displaystyle \frac{1}{2R_0}}\dot{\zeta }^2,`$ $`G_3^\zeta `$ $`=`$ $`4\pi Gm\zeta \dot{\gamma }^2+{\displaystyle \frac{4\pi G}{\lambda ^2}}\zeta \gamma ^2,`$ $`G_4^\zeta `$ $`=`$ $`2\pi mGR_0\gamma ^2\dot{\delta }^2{\displaystyle \frac{2\pi Gm}{R_0}}\zeta ^2\dot{\gamma }^2{\displaystyle \frac{\pi G}{3!\lambda ^2}}R_0\gamma ^4+{\displaystyle \frac{2\pi G}{\lambda ^2R_0}}\zeta ^2\gamma ^2.`$ The solution for the fluctuation in first–order approximation is $$\gamma \mathrm{cos}\omega _0\tau ,\zeta \mathrm{cos}\omega _0\tau $$ (40) where the frequency of the sphaleron is found to be $$\omega _0=\frac{4\sqrt{\pi G}}{\lambda }=\frac{1}{R_0},m=\frac{1}{16\pi G}$$ (41) The solution for fluctuations up to the third–order approximation is $$\gamma =\rho \mathrm{cos}\omega \tau \frac{\rho ^2}{2}[3\omega _0+\omega _0\mathrm{cos}2\omega \tau ]+\gamma _3,$$ (42) $$\zeta =\rho \mathrm{cos}\omega \tau \frac{\rho ^2}{6\omega _0}(\frac{1}{4}+\omega _0^2)\mathrm{cos}2\omega \tau +\zeta _3,$$ (43) where $`\gamma _3`$, $`\zeta _3`$ and $`\omega `$ are determined by substitution of eqs.(42), (43) into the equation of motion (38). By doing so the deviation of the frequency which we are interested in is obtained as, $$\omega ^2\omega _0^2=\rho ^2\frac{3\omega _0^2}{2}(1+3\omega _0^2)>0$$ (44) which is always positive. The transition is therefore of first order, and is, remarkably, the same as that of the winding number transition of the vector field model in a flat space–time. ## V Conclusion We believe that the present study is the first attempt to investigate the nucleation of a RW closed Universe at finite temperature with time–dependent matter fields. 
Although we consider only the crossover behaviour of the nucleation from quantum tunneling to thermal activation, this investigation may shed light on the understanding of the time–evolution of the early Universe. Unlike the superminispace model in which only the first order transition exists, we find that both first and second order transitions are possible here, depending on the shape of the potential of the matter fields. From another point of view the system considered here can be regarded as the barrier penetration of the field models in the closed RW metric. A remarkable observation is that the crossover behaviours, i.e. (1)the second order transition for the ordinary $`\varphi ^4`$ double–well potential of the scalar field, (2) both the first and second for a steeper double–well potential, but (3) first order only for the $`O(3)`$ nonlinear $`\sigma `$ model, maintain the same relations as those of transitions of these field models in a flat space-time. Acknowledgements J.–Q.L. and D.K.P. acknowledge support by the Deutsche Forschungsgemeinschaft. J.–Q.L. also acknowledges partial support by the National Natural Science Foundation of China under Grant No. 19775033. Figure Captions Fig. 1: Barrier of nucleation and the sphaleron $`R_0`$ Fig. 2: Phase diagram with scalar field. I. first order region. II. second order region. Fig. 3: Steeper double–well potentials with $`\alpha =0.1`$ and $`4`$
no-problem/0003/cond-mat0003396.html
ar5iv
text
# Comparison of flux creep and nonlinear 𝐸-𝑗 approach for analysis of vortex motion in superconductors ## I Introduction The term flux creep is used to describe a thermally-activated motion of flux lines in superconductors. This motion is characterized by a velocity strongly dependent on the local current density. In high-temperature superconductors (HTSCs), the flux creep can be specially pronounced because of small flux pinning energies and high temperatures. An account of the flux creep is therefore crucially important for understanding the time-dependent magnetic behavior of HTSCs. In the literature one finds numerous papers making use of flux creep analysis to describe the evolution of flux and current density distributions, current-voltage curves, magnetization and magnetic susceptibilities for superconductors of various shapes. Interestingly, there exist today two commonly accepted approaches for the analysis of thermally-activated flux motion. The first one, the so-called *flux creep* approach, assumes a particular microscopic pinning mechanism, which defines the pinning energy $`U`$ and its dependence on the local values of current density $`j`$ and flux density $`B`$. The velocity of the thermally-activated flux motion, $`v`$, then determines the local electric field according to $`E=vB`$. The second approach, on the other hand, employs a phenomenological nonlinear current-voltage relation, $`E(j)`$. For brevity, we will call this the $`Ej`$ approach. The present paper is devoted to a detailed comparison of these two approaches. We carry out numerical simulations for the most conventional choice of $`E(j)`$ and $`U(j,B)`$ and set focus on the clear differences in the resulting behavior. The numerical findings are then compared to current density distributions measured in YBaCuO films using magneto-optical imaging of flux density profiles. Distinct features in the observed current distributions allow us to conclude which approach gives the more realistic description. ## II The two approaches To compare the two approaches we consider a one-dimensional flux creep problem where the flux moves along the $`𝐱`$ direction, the magnetic induction $`𝐁`$ is directed along the $`𝐳`$-axis, while the electric field $`𝐄`$ is parallel to the $`𝐲`$-axis. The Maxwell equation has then the form $$\frac{B}{t}=\frac{E}{x}.$$ (1) In the flux creep approach, since it represents an activation process, the velocity of the vortex motion is given by $$v=v_c\mathrm{e}^{U(j,B)/kT},$$ (2) where $`v_c`$ is the velocity when $`U=0`$. In the case that the pinning energy has a logarithmic dependence on the current, $`U(j,B)=U_c\mathrm{ln}(j_{cU}/j)`$, it follows that the electric field equals $$E=v_cB\left(j/j_{cU}\right)^{U_c/kT}.$$ (3) In the $`Ej`$ approach, the phenomenological $`E(j)`$ relation is usually chosen in the power law form, $$E=E_c\left(j/j_{cE}\right)^n,$$ (4) with $`n1`$, and where $`j_{cE}`$ and $`E_c`$ are constants with dimension of current density and electric field, respectively. Comparing Eqs. (4) and (3) one can see that the exponent $`n`$ in the $`Ej`$ approach plays the same role as the ratio $`U_c/kT`$ in the flux creep model. However, even if $`n=U_c/kT`$, there still remains an important difference. In Eq. (3) one has $`EB`$, i.e., the electric field induced by the vortex motion is proportional to the number of moving vortices. In the $`Ej`$ approach, Eq. (4), this proportionality is absent. 
As a result, the two approaches become different if all parameters, $`j_{cU}`$, $`j_{cE}`$, and $`E_c`$, are independent of $`B`$, which is the conventional assumption. In the $`E`$-$`j`$ approach, as $`n\to \infty `$, the electric field tends to zero for $`j<j_{cE}`$, while it becomes infinitely large for $`j>j_{cE}`$. This situation is equivalent to the critical-state model characterized by the critical current density $`j_{cE}`$. Similarly, for $`U_c/kT\to \infty `$ the flux creep approach reproduces the critical-state model with critical current density $`j_{cU}`$. Therefore, in the limit $`U_c/kT,n\to \infty `$ and $`j_{cU}=j_{cE}`$ both approaches become equivalent. Accordingly, their difference is expected to grow as $`n`$ and $`U_c/kT`$ become smaller. To complete the set of equations one also needs a relation between the flux and current density. Let us assume that the superconductor has infinite extension along the $`𝐲`$-axis, the direction of current, and occupies the region $`-w\le x\le w`$. In the $`𝐳`$-direction it can be either infinite (a slab), or very thin (a strip) with thickness $`d\ll w`$. With the magnetic field $`𝐁_𝐚`$ applied along $`𝐳`$, the flux and current density can in both cases be considered uniform in this direction. Making the common assumption that $`B=\mu _0H`$, one has for a slab that $$\mu _0j=-\frac{\partial B}{\partial x}.$$ (5) For a thin strip the Biot-Savart law yields $$B(x)=B_a+\frac{d\mu _0}{2\pi }\int _{-w}^{w}\frac{j(u)}{u-x}\,du.$$ (6) It is convenient to invert the latter equation, which gives $$j(x)=\frac{2}{\pi d\mu _0}\int _{-w}^{w}\frac{B(x^{\prime })-B_a}{x-x^{\prime }}\sqrt{\frac{w^2-x^{\prime 2}}{w^2-x^2}}\,dx^{\prime }+\frac{I_T}{\pi d\sqrt{w^2-x^2}},$$ (8) where $`I_T`$ is the transport current. In the numerical simulations we solve the set of equations (1), (3) for the flux-creep approach, and (1), (4) for the $`E`$-$`j`$ approach, respectively. The relation between $`B`$ and $`j`$ is taken from Eq. (5) for the case of a slab, and from Eq. (8) in the thin strip geometry. The critical current densities, $`j_{cU}`$ and $`j_{cE}`$, are assumed to be $`B`$-independent. ## III Numerical results ### A Comparison with exact solution As a check of the quality of our numerical simulation scheme we compared numerical results for the $`E`$-$`j`$ approach with an exact analytical solution, which can be obtained in the slab case. Assume that at $`t=0`$ a finite external magnetic field is suddenly turned on. The flux density distribution can then be expressed as a function of a single scaling parameter as long as the flux fronts penetrating from opposite sides do not overlap. With $`E\propto j^n`$, the scaling law has the form $$B(x,t)=af(\xi ),\qquad \xi =(w-x)\,a^{-\frac{n-1}{n+1}}\,t^{-\frac{1}{n+1}}.$$ (9) Here $$f(\xi )=\frac{\mathrm{\Phi }\left(\xi /\xi _0\right)}{\mathrm{\Phi }(0)},\qquad \mathrm{\Phi }(z)=\int _z^1dx\,(1-x^2)^{\frac{1}{n-1}},$$ (10) where $`\xi _0`$ is given by the equation $$\xi _0^{n+1}\left[\mathrm{\Phi }(0)\right]^{n-1}=2n(n+1)/(n-1).$$ Note that $`f(\xi _0)=0`$, and hence $`\xi =\xi _0(n)`$ describes the advance of the flux front. Shown in Fig. 1 are simulated profiles of the flux density $`B(\xi )`$ in a slab using $`n=3`$ and $`n=11`$. Both graphs contain five curves corresponding to different points in time $`t`$ between $`5\tau `$ and $`10^4\tau `$, where $`\tau =B_a^2/(\mu _0j_{cE}E_c)`$. Notice the Bean-model-like linearity in the profiles for $`n=11`$, and the clear non-linearity for $`n=3`$.
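Before turning to the comparison with the analytic solution, it may help to sketch the kind of solver involved. The fragment below is our own minimal reconstruction, with assumed parameters, not the authors’ production code: it integrates Eqs. (1), (4) and (5) for a slab after a suddenly applied field, using an explicit scheme with an adaptive time step set by the local nonlinear diffusivity.

```python
import numpy as np

# Minimal explicit solver for the slab: Eqs. (1), (4), (5).
# Here x = 0 is the sample edge, so the depth x plays the role of w - x.
mu0 = 4e-7 * np.pi
w, N = 1.0, 200                        # half-width [m], grid points
x = np.linspace(0.0, w, N)
dx = x[1] - x[0]
jc, Ec, n = 1.0e6, 1.0e-4, 3           # assumed values
Ba = 0.3 * mu0 * jc * w                # field applied suddenly at t = 0

B, t = np.zeros(N), 0.0
for step in range(30000):
    B[0] = Ba                                      # edge boundary value
    j = -np.gradient(B, dx) / mu0                  # Eq. (5)
    E = Ec * np.sign(j) * (np.abs(j) / jc) ** n    # Eq. (4)
    D = (n * Ec / (mu0 * jc)) * (np.abs(j) / jc) ** (n - 1)
    dt = 0.2 * dx**2 / max(D.max(), 1e-30)         # stability limit
    B[1:-1] -= dt * (E[2:] - E[:-2]) / (2 * dx)    # Eq. (1)
    t += dt
# Plotting B against xi = x * Ba**(-(n-1)/(n+1)) * t**(-1/(n+1)) for
# several stored times should show the collapse onto f(xi) of Eq. (10).
```

Profiles stored at several times and rescaled this way should collapse in the manner described for Fig. 1.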
Shown together with these curves is also the analytic solution $`f(\xi )`$, given by Eq. (10). The collapse within each family of curves demonstrates an excellent agreement, and gives confidence in the numerical procedures. ### B Slab with a transport current In the following we present simulation results assuming that a transport current, linearly increasing in time, is passed through an initially zero-field-cooled superconductor. The choice of parameters is $`dJ_T/dt=10^{-3}j_cv_c`$, where $`J_T(t)=\int _{-w}^{w}j(x,t)\,dx`$ is the transport current per unit height, $`j_c\equiv j_{cU}=j_{cE}`$, and $`U_c/kT=n=5`$. Moreover, we let $`E_c=0.2v_c\mu _0j_cw`$, which gives approximately the same average electric field in the superconductor for both the flux creep and $`E`$-$`j`$ approach. Figures 2 and 3 present the time development of the current and flux density distributions in the case of a slab. In the flux-penetrated region, which gradually expands from the edges, both approaches lead to $`B`$-profiles that are essentially linear, and a fairly constant current density. The slab has a central region where both $`B`$ and $`j`$ vanish. As $`J_T(t)`$ increases, the flux penetrates deeper, and the current becomes distributed more uniformly. Although the overall behavior resembles that of the Bean model, one can also see clear deviations, in particular in the current distributions. The results also reveal distinct differences between the two approaches. The most prominent one is seen in the $`j`$-distribution, where in the flux creep approach a peak develops in the center as the slab becomes fully penetrated. Another difference is visible near the edges, where the slopes in $`j(x)`$ are significantly larger in the $`E`$-$`j`$ model. Both of these features are also reflected in the $`B`$-distributions, although there only as different curvatures of the profiles. Subsequent increase in the transport current (not shown in figures) does not change the shape of the distributions significantly. In the $`E`$-$`j`$ approach, the current density tends to become completely uniform. In the flux creep approach, the peak in the center remains, and both $`j(x)`$ and $`B(x)`$ increase monotonically as the transport current $`J_T`$ grows. ### C Strip with a transport current The simulated behavior of a thin strip experiencing a linearly increasing transport current $`I_T`$ is shown in Figures 4 and 5. The choice of parameters is the same as for the slab, with $`dI_T/dt=10^{-3}j_cv_cd`$, except that now $`E_c=0.2v_c\mu _0j_cd/\pi `$ is required in order to give approximately the same average electric field for both the flux creep and the $`E`$-$`j`$ approach. Comparing the results with the previous slab case, one immediately sees differences in the shape of the profiles. The $`j(x)`$ in a strip is finite everywhere even at small currents, where the flux penetration is only partial. Furthermore, $`B(x)`$ is strongly nonlinear and has peaks at the edges. Both these features are well known also in the Bean model behavior for a thin strip. As in the slab case, we observe also here a significant difference between the distributions obtained from the flux creep and the $`E`$-$`j`$ approach. Firstly, the two approaches give opposite signs for the slope of $`j(x)`$ in the penetrated regions near the edges. Secondly, only the creep approach leads to a central peak in $`j(x)`$ at large currents. Hence, while in the $`E`$-$`j`$ approach the $`j(x)`$ remains concave throughout, the creep approach predicts a gradual change from a concave to a convex profile.
Contrary to the case of a slab, differences are also clearly seen in the flux distributions. In particular, the creep approach predicts a much steeper slope near the flux front. Interestingly, we found that although the two approaches lead to quite different spatial distributions, the integral characteristics of the strip, such as current-voltage curves, are only weakly sensitive to the differences. This is demonstrated in Fig. 6, which shows the integral current-voltage curves obtained using both approaches. The curves for $`n=5`$ correspond to the $`j`$-distributions shown in the previous figures. The electric field was determined as $`P/I_T`$, where the dissipated power $`P`$ per unit length of the strip was calculated by integrating the product $`j(x)E(x)`$ over the strip cross-section. In the log-log plot the current-voltage curves display a clear crossover at $`E\approx 0.002E_c`$. At large currents the integral $`E(j)`$ curve shows a power law behavior $`E\propto j^{n^{\prime }}`$, where $`n^{\prime }`$ is an “integral” exponent. We find that $`n^{\prime }`$ slightly exceeds $`U_c/kT`$ in the flux creep approach. In the $`E`$-$`j`$ approach $`n^{\prime }`$ is equal to the “local” exponent $`n`$, and the integral $`E(j)`$ curves merge with the local $`E(j)`$’s shown by straight lines. At small currents the integral $`E(j)`$ curve also shows a power law behavior, although with a much smaller $`n^{\prime }`$ which seemingly does not depend on $`n`$ or $`U_c/kT`$. ### D Discussion Both approaches describe the same physical situation: in response to the transport current the flux lines enter the sample from the edge and then move some distance before getting pinned or annihilated in the sample’s center. Thus, near the edges the flux motion is always more pronounced. Consequently, $`E`$ has a maximum there. In the $`E`$-$`j`$ approach, the local current density is an explicit function of the local electric field. Therefore, $`j(x)`$ follows $`E(x)`$ and monotonically decreases from the edges towards the center. On the other hand, in the flux creep approach, $`j(x)`$ is related to $`v(x)=E(x)/B(x)`$ by Eq. (2), and hence depends also on the flux distribution. In particular, $`j(x)`$ is relatively small at the strip edges where $`|B|`$ is maximal, see Fig. 4(a). ## IV Experimental results and discussion A YBa₂Cu₃O₇₋δ (YBCO) film of 200 nm thickness was prepared by dc magnetron sputtering on a LaAlO₃ substrate. Using photo-lithography a strip of dimensions 500$`\times `$100 $`\mu `$m² was formed and equipped with Ag contact pads for injection of a transport current. The current was applied in pulses of 40 ms duration while the temperature was kept at 20 K in an optical cryostat. Magneto-optical images were recorded with 33 ms exposure time during the current pulse. From the images we determine the $`z`$-component of $`𝐁`$ in the plane of the ferrite garnet magneto-optical indicator, which we estimate to be located 10 $`\mu `$m above the YBCO film. Shown in Fig. 7(a) are the measured $`B`$-distributions. Because of the finite distance between the indicator and the superconductor, these profiles are not easily compared to the results of the simulations. However, since the $`j`$-profiles showed more distinct differences between the creep and $`E`$-$`j`$ approach, the measured $`B`$-profiles were converted to sheet current distributions, $`J(x)=dj(x)`$, in the strip. An inversion scheme described in Ref. and further developed elsewhere was employed.
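The essence of such an inversion can be conveyed in a short sketch (ours, with an assumed grid and regularisation, not the scheme of the works cited): discretising Eq. (6) on cells of piecewise-constant sheet current yields a matrix whose cell integrals are analytic, and which can then be inverted in the least-squares sense.

```python
import numpy as np

# Matrix inversion of the Biot-Savart law, Eq. (6): recover the sheet
# current J(x) = d*j(x) from a profile of B(x) - B_a across the strip.
mu0 = 4e-7 * np.pi
w, N = 50e-6, 200                        # strip half-width [m], cells
edges = np.linspace(-w, w, N + 1)
xc = 0.5 * (edges[:-1] + edges[1:])      # cell centres

# G[i, k]: field at xc[i] from unit sheet current in cell k; the cell
# integral of 1/(u - x) is a logarithm, which tames the singularity.
G = (mu0 / (2 * np.pi)) * np.log(
    np.abs((edges[None, 1:] - xc[:, None])
           / (edges[None, :-1] - xc[:, None])))

# Self-test on a synthetic dome-shaped current profile:
J_true = np.sqrt(np.maximum(w**2 - xc**2, 0.0)) / w    # peak value 1
B_syn = G @ J_true                                     # forward model
J_rec = np.linalg.lstsq(G, B_syn, rcond=1e-3)[0]       # regularised fit
print("max error relative to peak:", np.max(np.abs(J_rec - J_true)))
```

In practice the measured field is noisy and taken some distance above the film, so a real inversion needs additional regularisation and an account of the indicator height; the sketch only shows the core linear-algebra step.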
Profiles of the sheet current are shown in Fig. 7(b) for a range of transport currents up to 4.5 A. Evidently, they fit quite well to the simulated results of the creep approach shown in Fig. 4. In particular, one easily recognizes the characteristic change from a concave to a convex current distribution as $`I_T`$ increases. The $`E`$-$`j`$ approach, on the other hand, appears unable to give an adequate description of the flux dynamics in the present experiment. Although it is the better model, the flux creep approach still shows considerable discrepancies between the experimental curves and the simulations. One example is the peak of $`j(x)`$ in the strip center at large currents, a feature the experiments could not reproduce. Unfortunately, a current of 4.8 A caused fatal damage to the sample, and we were not able to measure distributions under the conditions where a central peak might become apparent. As an alternative way to create a peak in $`J`$ where $`B`$ changes sign, we carried out a different experiment. Here the strip (a new YBCO sample prepared by the same method) was initially in the remanent state after first being exposed to a very high magnetic field. After that the strip was subjected to a transport current of 2 A. The resulting flux and current distributions are shown in Fig. 8(a). One sees that in the left half of the strip there is a wide region with a large and nearly constant current density. Within this region one finds that $`J(x)`$ indeed has a peak located close to the point where $`B=0`$. A new set of simulations was made for this special state with combined magnetization currents and $`I_T`$. The simulations aimed to reproduce the exact experimental steps: First, the strip was exposed to a perpendicular magnetic field applied as a pulse of 70 mT amplitude with 1 s of linear increase and 1 s of linear decay. Then, after one more second, the strip was subjected to a transport current pulse with 1 ms rise time. During the current pulse, 20 ms after turn-on, the magneto-optical image was taken. For this particular time, Figs. 8(b) and (c) present the numerically obtained distributions for the flux creep approach and $`E`$-$`j`$ approach, respectively. The following model parameters were used: $`n=U_c/kT=5`$, $`I_c=25`$ A, and $`v_c=10`$ m/s. Again, only the flux creep approach gives a peak in the current profile, and now we find an excellent agreement with the experimental results. A remaining discrepancy between the flux-creep approach simulations and the experimental curves is that the gradient of $`J(x)`$ near the strip edge is larger in the experiments. This holds true also for simulations made with other values of the power $`U_c/kT`$. We believe that our photo-lithographic processing degrades the film quality only within 1-2 $`\mu `$m of the edge. This is also consistent with the magneto-optical images, which show that the current flow along the strip is highly uniform on scales larger than 5 $`\mu `$m. Therefore, the discrepancy between experiment and simulations indicates that the vortex behavior is more complicated than assumed in the present flux creep model. The experimentally observed suppression of $`J`$ near the edge where $`|B|`$ is maximal can be interpreted as a $`|B|`$-induced reduction of the critical current density $`j_{cU}`$, or of the pinning energy $`U_c`$. This interpretation, however, fails to account for a similar suppression of $`J`$ observed previously in the remanent state after a current pulse.
An alternative explanation able to cope with both observations is heating due to vortex motion, which is always most intense near the strip edges. ## V Conclusions Numerical simulations have been carried out in order to compare two commonly accepted approaches for analysis of flux motion in superconductors: (i) the flux creep approach, and (ii) the approach based on a non-linear $`E(j)`$ curve. We have shown that if the critical current density is field-independent, these approaches predict similar but also distinctly different current and flux distributions. The difference is most pronounced in the regions where the local flux density $`B`$ is small. The simulation results were compared with the real current distributions in a YBCO strip carrying a transport current. The experimental data were obtained by using magneto-optical imaging. The comparison shows clearly that the flux creep approach provides the better description of the flux motion in the strip. ###### Acknowledgements. The financial support from the Research Council of Norway (NFR), and from NATO via NFR, is gratefully acknowledged. We are grateful to Bjørn Berling for many-sided help and to E. H. Brandt for a helpful discussion.
# The Predicted Signature of Neutrino Emission in Observations of Pulsating Pre-White Dwarf Stars ## 1 Introduction In general, stars are too remote—and observables too few—to make them practical experimental physics test-beds: our data are spent in simply describing the dimensions of the objects under study. In many cases we must extrapolate experimental data over many orders of magnitude, or resort to untested calculations from first principles, to reach the regions of phase space that apply to stellar interiors. If we hope to overcome these problems and pursue “experimental” astrophysics, we can either attempt to increase the number of observables or find simpler stars. As first realized by Mestel (1952), the evolution of white dwarfs and pre-white dwarfs (PWDs) is primarily a simple cooling problem. In general, our growing understanding of white dwarf interiors and evolution has paralleled advances in the theory of dense plasmas, with the recognition of important influences like electron degeneracy (Chandrasekhar 1939), Coulomb interactions (Salpeter 1961), crystallization (Kirzhnitz 1960; Abrikosov 1960; Salpeter 1961; Stevenson 1980) and neutrino cooling effects (Chin, Chiu, & Stothers 1966; Winget, Hansen, & Van Horn 1983; Kawaler, Hansen, & Winget 1985). Iben & Tutukov (1984) summarize the various mechanisms which dominate white dwarf evolution from the planetary nebula nucleus (PNN) stage to the coolest white dwarfs. On the observational side, the discovery of white dwarf pulsation in the 1960s, and pre-white dwarf pulsation in the 1970s, greatly increased the observable parameters available for comparison with theoretical models. These are short-period, multiperiodic, $`g`$-mode variables, showing anywhere from a few to over a hundred separate periodicities on timescales of 100-3000 s. The pulsating PWD stars are divided into two classes: the planetary nebula nucleus variables (PNNV stars), and the slightly more evolved GW Virginis stars (or simply GW Vir stars), which lack observed nebulae. With high surface gravities (log $`g\sim 6`$–$`7.5`$) and effective temperatures between 80,000 K and 170,000 K, they occupy a region of the H-R diagram between the high-$`T_{\mathrm{eff}}`$ end of the PNN branch and the top of the white dwarf cooling track. There are eight known PNNV stars, and four GW Vir stars (Ciardullo & Bond 1996). The evolutionary timescale of PWD stars is of order $`10^6`$ years. During this short transition from PNN star to hot white dwarf, stellar radius and photon luminosity decrease by one and three orders of magnitude, respectively. High core density and temperature allow electron scattering processes to produce a large neutrino flux which remains roughly constant during this time. As photon luminosity plummets, neutrinos contribute an increasing fraction of the total energy losses. Neutrino emission eventually comes to dominate the overall evolution of the star. Unlike photon energy, which must diffuse relatively slowly through the entire star before emerging into space, neutrinos created near the center of the PWD escape directly. This neutrino luminosity cools the center of the star, maintaining a temperature inversion similar to that within stars at the tip of the red giant branch. Calculations of the relevant reaction rates were performed initially by Beaudet, Petrosian, & Salpeter (1967) based on the theory of weak interactions proposed by Feynman & Gell-Mann (1958). Later, Dicus (1972) and Dicus et al.
(1976) recalculated these rates in the unified electroweak theory of Weinberg and Salam (Weinberg 1967, Salam 1968). All of these calculations are theoretical, however. We have no direct experimental or observational confirmation of neutrino production rates under conditions appropriate to PWD interiors. The cooling of a GW Vir interior tends to increase the periods of each given pulsation mode. Their high luminosity (log $`L\sim 0`$–$`3`$) means they cool much more rapidly than cooler white dwarf variables. GW Vir period changes are therefore expected to be more rapid also. Winget, Hansen & Van Horn (1983) show that the $`e`$-folding time for period changes in GW Vir stars should be of the same order as the evolutionary timescale—$`10^6`$ years; such rapid changes are measurable in 1–3 years’ time. This is an exciting prospect: to measure directly, on human timescales, the rate of evolution of a star, and specifically to place strict constraints on the mechanisms which regulate the evolution of a stellar interior. Over 30 years ago, Chin, Chiu, and Stothers (1966) predicted that at some point in PWD evolution neutrino losses should dominate all other cooling processes. Asteroseismological analysis can tell us which stars these are, and then measurement of period changes can tell us if our neutrino physics is right. Such a test has implications far beyond the study of PWD evolution. For instance, one of the fundamental questions of stellar astrophysics is the length of time stars spend on the main sequence. Answering this question requires precise knowledge of the p-p and CNO nuclear reaction rates. Currently, the best laboratory for measuring these rates is our own Sun, since terrestrial labs cannot in general reproduce the conditions of the stellar interior. However, models which successfully reproduce the known structure of the Sun predict a neutrino flux two to three times that measured by earthly detectors (Bahcall & Pinsonneault 1996, and references therein). For a long time, it was thought the problem might reside in our incomplete knowledge of conditions in the solar interior. Recently, helioseismology projects such as the Global Oscillation Network Group (GONG) have resulted in the measurement of millions of solar pulsation frequencies (Harvey et al. 1996). With so many parameters to constrain model properties, the possibility that the solar neutrino problem can be solved through variations in the thermodynamics or mechanics seems to be excluded (Bahcall & Pinsonneault 1996). The problem, then, almost certainly lies with the way we handle the nuclear physics. Under the most intense scrutiny is the standard theory of lepton interactions. Our calculations of neutrino emission from PWDs are based on this same theory. In PWDs, however, the energy loss rate due to neutrinos is thousands of times greater than in the Sun. Measurement of the effects of neutrino interactions in PWDs would afford a critical independent test not only of the standard lepton theory but also of non-standard theories brought forward to solve the solar neutrino problem. To explore this possibility, we calculated PWD evolutionary tracks using different neutrino production rates. In the next section we describe the calculation of those rates and summarize the basic interactions that lead to neutrino emission in PWD interiors. Section 3 describes PWD sequences with varied neutrino production rates and examines effects on measurable quantities such as $`T_{\mathrm{eff}}`$, surface gravity, and rate of period change.
Finally, in § 4 we discuss prospects for placing observational constraints on neutrino physics, and we identify appropriate targets for future observation. ## 2 Neutrino Cooling in Pre-White Dwarf Interiors Unlike the solar neutrino flux, neutrino emission in PWDs is not a by-product of nuclear fusion. Instead, the density and temperature in their cores are high enough (log $`\rho _\mathrm{c}\sim `$ 6–7, log $`T_\mathrm{c}\sim `$ 7–8) to produce neutrinos directly through several different scattering processes. The two most important processes are neutrino bremsstrahlung and plasmon excitation. Neutrino bremsstrahlung is much like the normal bremsstrahlung process familiar to astrophysicists, in which high-energy electrons scatter off nuclei, emitting X-rays. At the high density and temperature of PWD interiors, however, neutrinos can be produced instead. These same conditions support the existence of thermally excited photons within the plasma, analogous to phonons propagating within a metal lattice. These “plasmons” have a finite lifetime and decay to form a neutrino and antineutrino. (Strictly, there are two types of plasmons: the process described here is that of the transverse plasmon, while the longitudinal plasmon corresponds to an oscillation in the electron gas similar to a sound wave, and is usually less important as a neutrino source in hot white dwarfs; Itoh et al. 1992.) The possible relevance of the plasma process to stellar astrophysics was first pointed out by Adams, Ruderman, & Woo (1963), who subsequently calculated rates based on the theory of Feynman & Gell-Mann (1958). Beaudet, Petrosian & Salpeter (1967) were the first to incorporate them into stellar evolution calculations. Later, Dicus (1972) recalculated the rates of various neutrino processes in the unified electro-weak theory of Weinberg and Salam (Weinberg 1967, Salam 1968). The rates used in our stellar evolution code, ISUEVO, derive from updated calculations by Itoh et al. (1996), and include the plasmon, bremsstrahlung, and several less important neutrino production processes. The evolution code ISUEVO (Dehner 1996; see also Dehner & Kawaler 1995) is optimized for the construction of PWD and white dwarf models. The models used in this investigation are based on the evolution of a $`3M_{\odot }`$ model from the Zero Age Main Sequence through the thermally pulsing AGB phase. After reaching a stable thermally pulsing stage (about 15 thermal pulses), mass loss was invoked until the model evolved to high temperatures. This model (representing a PNN) had a final mass of $`0.6M_{\odot }`$, and a helium-rich outer layer. Additional details concerning the construction of this evolution sequence (and others of different mass, discussed in § 3, below) can be found in O’Brien (1998). To study the direct effects of neutrino losses on PWD evolution, we introduced artificially altered rates well before the evolving models reached the PWD track. If we simply changed the rates beginning at the hot end of the PWD sequence, the thermal structure of each model would take several thermal timescales to relax to a new equilibrium configuration based on the new rates. Unfortunately, this relaxation time is of the same order as the PWD cooling time, and so only the cool end of the sequence would see the full effects of the new rates on their evolutionary timescales. Therefore, the enhanced and diminished rates described in the next section were introduced into evolutionary calculations beginning at the base of the AGB.
The resulting thermal structure of the initial PWD “seed” models was then already consistent with the neutrino rates used during the prior evolution that produced them. ## 3 Pre-White Dwarf Sequences with Different Neutrino Rates Starting with the PWD seed models above, we evolved the models from high $`L`$ and $`T_{\mathrm{eff}}`$ down toward the white dwarf cooling track. Three sequences were calculated. The base sequence used the normal neutrino production rates. Another sequence used rates diminished by a factor of three (at any given $`\rho `$ and $`T`$ in the stellar interior) relative to the normal rates, while the third sequence used rates enhanced by a factor of three. This trio spans nearly one order of magnitude in neutrino production. The resulting $`0.6M_{\odot }`$ evolutionary sequences are shown in Figure 1, from $`T_{\mathrm{eff}}\approx 170,000`$ K—equivalent to the hottest PWDs known—down to about 35,000 K. Luminosity decreases by almost four orders of magnitude in approximately five million years. The GW Vir instability strip occupies the left half of the figure, above $`T_{\mathrm{eff}}\approx 80,000`$ K (log $`T_{\mathrm{eff}}=4.9`$), a temperature reached by the PWD models in only 500,000 years. The most striking aspect of Figure 1 is the similarity of the tracks: changing the neutrino rates seems to have little effect on the luminosity at a given $`T_{\mathrm{eff}}`$ at any point in PWD evolution, despite the importance of neutrino losses as a cooling mechanism over much of this range. In Figure 2, we find that, for all three sequences, neutrino losses are the primary cooling mechanism over the approximate range $`100,000`$ K $`>T_{\mathrm{eff}}>30,000`$ K. Plasmon reactions dominate over the bremsstrahlung process for 0.6 $`M_{\odot }`$ models at all stages of PWD evolution, as shown in Figure 3. The ratio $`L_\nu /L_\gamma `$ also increases with stellar mass. In the $`T_{\mathrm{eff}}`$ range 80,000–100,000 K, $`L_\nu /L_\gamma `$ for a 0.66 $`M_{\odot }`$ model sequence is nearly 30% higher than for a 0.60 $`M_{\odot }`$ sequence. Figures 1 and 2 show that the differences in $`L`$ and $`T_{\mathrm{eff}}`$ are smallest when the neutrinos are important. This is because the primary structural effect of changing the neutrino rates is on the radius of the models (Figure 4), causing the tracks to assume a position in the $`L`$–$`T_{\mathrm{eff}}`$ plane normally occupied by models of slightly higher mass (for enhanced rates) or lower mass (for diminished rates). However, at lower temperatures electron degeneracy becomes increasingly important as a mechanical support against gravity (and thus in determining the final stellar radius); neutrino cooling only affects the thermal processes participating in the mechanical structure. Even at high luminosity, however, different neutrino rates result in only small changes in measurable quantities such as surface gravity. Current observational techniques could not hope to resolve such small differences. Figure 5 shows a more tangible effect of changing the rates. Even though models with different rates look much the same at a given $`T_{\mathrm{eff}}`$, they get there at widely differing times, since the rate of evolution along a track is directly dependent on the importance of neutrino emission as a source of cooling. For example, the model with enhanced neutrino rates cools from 100,000 K down to 65,000 K in 600,000 years, while the model with diminished neutrino rates takes 1.3 million years, more than twice as long, to cool by the same amount.
The maximum difference in the slope of the different curves in Figure 5 occurs at $`T_{\mathrm{eff}}\approx 80,000`$ K. Thus the epoch where the rate of evolution is most sensitive to the assumed neutrino physics corresponds to the position in the H-R diagram occupied by the coolest pulsators in the PWD instability strip. On the other hand, for stars in the strip hotter than 100,000 K, Figure 5 shows that evolutionary rates do not depend on neutrino rates. Our expectations are borne out in Figure 6, which shows the rate of change in period, $`\dot{\mathrm{\Pi }}/\mathrm{\Pi }`$ ($`d(\mathrm{ln}\mathrm{\Pi })/dt`$), as a function of period, $`\mathrm{\Pi }`$, for PWD models at 140,000 K (lower panel) and 80,000 K (upper panel), given normal, enhanced, and diminished neutrino production rates. The rate of period change $`\dot{\mathrm{\Pi }}/\mathrm{\Pi }`$ in the cooler models changes by a factor of four between the enhanced and diminished rates. Changing the neutrino rates has little effect on $`\dot{\mathrm{\Pi }}/\mathrm{\Pi }`$ in the hotter model, consistent with the results from Figure 5. We now turn to the exciting implications of these results, and explore the possibility and practicality of measuring $`\dot{\mathrm{\Pi }}/\mathrm{\Pi }`$ in cool pulsating PWD stars. We can then identify likely targets for future observational campaigns. ## 4 Prospects for Measuring Neutrino Cooling Effects ### 4.1 Determination of d$`\mathrm{\Pi }`$/dt Unfortunately, the period changes expected to occur in PWD stars are too small to detect from simple comparison of the period from one year to that of the next. To determine d$`\mathrm{\Pi }`$/dt, a better technique is to measure the cumulative phase change in a mode with a slowly changing period. This is accomplished by comparing the observed times of maxima ($`O`$) in the light curve to the times of maxima ($`C`$) calculated from an assumption of constant period. The resulting plot of ($`O-C`$) shows the phase drift associated with a changing period. A constant rate of period change, d$`\mathrm{\Pi }`$/dt, enters as a quadratic term in time: $$(O-C)\approx \frac{1}{2}\,\frac{1}{\mathrm{\Pi }_{t_o}}\,\frac{d\mathrm{\Pi }}{dt}\,(t-t_o)^2\;[\mathrm{sec}]$$ (1) where $`\mathrm{\Pi }_{t_o}`$ is the period at time $`t_o`$ (see for example Winget et al. 1985, 1991 and Kepler et al. 1995). To measure d$`\mathrm{\Pi }`$/dt with confidence, the star must of course have stable and fully resolved pulsation periods, with reliable phase measurements from season to season. Kawaler, Hansen, & Winget (1985) and Kawaler & Bradley (1994) present predicted values of d$`\mathrm{\Pi }`$/dt for models relevant to GW Vir and PNNV stars; the only observed value of d$`\mathrm{\Pi }`$/dt, that for PG 1159 itself, is consistent with these models. However, as Kawaler & Bradley (1994) demonstrated, for a star as hot as PG 1159, d$`\mathrm{\Pi }`$/dt is strongly affected by mode trapping. This is an effect whereby some modes become excluded from regions below subsurface composition discontinuities. Kawaler & Bradley (1994) show that, in general, d$`\mathrm{\Pi }`$/dt should be positive; this reflects the overall cooling of the model (Winget, Hansen, & Van Horn 1983). Trapped modes, however, are concentrated in the outer layers, within which contraction dominates cooling; therefore trapped modes can show periods which decrease with time. Thus, mode trapping can complicate the interpretation of measured period changes in hot PWDs.
As GW Vir stars cool, the surface contraction rate decreases relative to the cooling rate of the interior. So, while mode trapping can still influence the pulsation period distribution itself, the rates of period change become more similar from mode to mode in cooler GW Vir stars. Kawaler & Bradley (1994) found that the sign of d$`\mathrm{\Pi }`$/dt could be different for different modes in hot GW Vir models; by the time those models evolve to the cool end of the strip, the period change rates are all positive. ### 4.2 Prospective Targets Measurements of secular period change, $`\dot{\mathrm{\Pi }}/\mathrm{\Pi }`$, in white dwarfs have been attempted in a number of investigations, with either measurements made or tight upper limits set for the GW Vir star PG 1159 (Winget et al. 1985, 1991, Costa & Kepler 1998) and G117-B15A (Kepler et al. 1995). Unfortunately, neutrino cooling is not expected to be an important effect for either of these stars. On the other hand PG 0122, with a $`T_{\mathrm{eff}}`$ of 80,000 K, occupies the stage in GW Vir evolution most highly dominated by neutrino emission. O’Brien et al. (1998) show that PG 0122 is in addition the most massive GW Vir star, which should enhance neutrino effects as well. In order to measure $`\dot{\mathrm{\Pi }}/\mathrm{\Pi }`$ with confidence, a star must have a stable and fully resolved pulsation, with stable phase measurements from season to season. PG 0122 is a very stable pulsator: over the past decade, it has shown a consistent pulsation spectrum, with the large-amplitude modes present at the same frequencies during each of three intensive observing seasons in 1986, 1990, and 1996. The amplitudes of each of the dominant modes remained approximately constant as well (O’Brien et al. 1998). Therefore, PG 0122 is an excellent candidate for measurement of the rate of secular period change caused by the evolutionary cooling of its interior. In addition to the physics governing neutrino production, PG 0122 is an ideal target for measuring neutrino emission rates because of the minimal influence of any mode trapping on interpretation of its $`\dot{\mathrm{\Pi }}/\mathrm{\Pi }`$. As mentioned above, for stars below 100,000 K trapping no longer significantly affects $`\dot{\mathrm{\Pi }}/\mathrm{\Pi }`$ from mode to mode. From Figure 6, we estimate the value of $`\dot{\mathrm{\Pi }}/\mathrm{\Pi }`$ for the dominant pulsation mode ($`\mathrm{\Pi }=400`$ s) in PG 0122 to be about $`6\times 10^{-15}\;\mathrm{s}^{-1}`$. With this rate of period change, the period should increase by about 0.001 s in 10 years; this is smaller than the period uncertainty for a run length of several months (assuming a frequency precision of $`\frac{1}{10\times \mathrm{run}\;\mathrm{length}}`$). However, the accumulated phase advance over a ten year period should be nearly two full cycles. Using the periods alone from the 1986, 1990, and 1996 data, O’Brien et al. (1998) attempted to calculate $`\dot{\mathrm{\Pi }}/\mathrm{\Pi }`$ directly. From the best least-squares periods from 1996 and 1986, they calculate a period change of $`0.10\pm 0.02`$ s, implying $`\dot{\mathrm{\Pi }}/\mathrm{\Pi }=-7\times 10^{-13}\;\mathrm{s}^{-1}`$, about 100 times larger in magnitude—and different in sign—than theory expects. However, as O’Brien et al. (1998) point out, this calculation is based on the formal errors from a least-squares fit to the observed periods, and the formal least-squares error generally underestimates the true error by an order of magnitude.
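A short numerical check of these numbers, using Eq. (1) with the quoted period and predicted rate (values repeated here purely for illustration), is sketched below:

```python
# Eq. (1): (O - C) ~ (1/2)(1/Pi)(dPi/dt)(t - t0)^2, for PG 0122-like
# numbers as quoted in the text (illustrative only).
Pi = 400.0                      # dominant pulsation period [s]
Pidot_over_Pi = 6e-15           # predicted dPi/dt / Pi [1/s]
dPi_dt = Pidot_over_Pi * Pi     # [s per s]
year = 3.156e7                  # seconds per year

for years in (1, 3, 10):
    t = years * year
    dPi = dPi_dt * t                        # total period change [s]
    OC = 0.5 * (dPi_dt / Pi) * t**2         # accumulated drift [s]
    print(f"{years:2d} yr: dPi = {dPi:.1e} s, (O-C) = {OC:7.1f} s")
```

On these numbers the period itself shifts by only $`\sim 10^{-3}`$ s in a decade, far below the single-season period uncertainty, while the accumulated ($`O-C`$) drift grows quadratically to a substantial fraction of the 400 s cycle; this is why the phase-drift method is the practical one.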
In practice, the data currently available allow an upper limit to be set on the absolute magnitude of $`\dot{\mathrm{\Pi }}/\mathrm{\Pi }`$ for PG 0122 of $`1.5\times 10^{-12}\;\mathrm{s}^{-1}`$. In view of the importance of measuring $`\dot{\mathrm{\Pi }}/\mathrm{\Pi }`$ for this star—as well as for the other cool GW Vir stars—we must continue analysis of archival data and mount observing campaigns in the near future to monitor all the known GW Vir stars with $`T_{\mathrm{eff}}`$ $`<100,000`$ K. With frequent observation, an accumulated phase advance of half a cycle, combined with the techniques described above, could be used to determine $`\dot{\mathrm{\Pi }}/\mathrm{\Pi }`$ for the GW Vir stars PG 0122, PG 2131, and PG 1707 in two to three years. In the case of PG 0122, the data presented in O’Brien et al. (1998) provide a key anchor for this investigation. ## 5 Summary and Conclusions We have shown that the predicted rates of period change in GW Vir stars near the cool end of the instability strip are sensitive to the neutrino production rates used in stellar models. The persistence of the solar neutrino problem has made the standard model of neutrino interactions one of the most intensely scrutinized theories in all of physics. Determination of $`\dot{\mathrm{\Pi }}`$ in the GW Vir stars PG 0122 and PG 2131 will provide an important test of the standard model and of any new theories put forward to replace it. The authors express their appreciation to Chris Clemens for valuable editorial comments. We also thank the anonymous referee who, in particular, helped clarify our understanding and explanation of lepton scattering theory as it applies to white dwarf interiors. MSO’B was supported during much of this research by a GAANN fellowship through grant P200A10522 from the Department of Education to Iowa State University. Support also came from the National Science Foundation under the NSF Young Investigator Program (Grant AST-9257049) to SDK at Iowa State University. Finally, some support for this work came to SDK from the NASA Astrophysics Theory Program through award NAG5-4060 to Iowa State University.
# The Dark Matter Problem in Disk Galaxies ## 1 Introduction High resolution simulations of galaxy formation, incorporating realistic CDM initial conditions of dark halo formation, generally confirm the existence of a universal density (NFW) profile in the outer regions of galaxies \[Navarro, Frenk & White, 1997\]. Moreover, some groups are now reporting significant central dark matter density cusps that are as steep as $`r^{-\beta }`$ with $`\beta \approx 1.5`$. The existence of even a more modest cusp ($`\beta \approx 1`$, as in the original NFW result) implies that at the current epoch $`L_{\ast }`$ galaxies have two to three times too much dark matter within 2 to 2.5 disk scale lengths \[Navarro & Steinmetz, 2000\]. This conclusion applies both to the Milky Way, where the mass of the disk can be dynamically estimated from the motions of stars near the Sun, and to an ensemble of nearby spirals, for which the Tully–Fisher relation effectively measures an $`M/L`$ ratio that can be compared with values predicted by stellar-synthesis models. The Tully-Fisher slope and dispersion are accounted for by the high resolution simulations, but the normalization is discrepant, by about a factor of 3 in $`M/L`$ at given surface brightness, rotation velocity and luminosity \[Navarro & Steinmetz, 2000\]. Two further problems encountered with the cold dark matter hypothesis are (i) that the scale-lengths of disks are predicted to be too small by a factor $`\sim 5`$ \[Steinmetz & Navarro, 1999\], and (ii) that an order of magnitude more satellites are predicted than are observed \[Moore et al, 1999\]. Both of these problems are closely related to the persistence of substructure in high-resolution N-body simulations of hierarchical models of dark halo formation. There are two possible avenues for resolution of these problems. One approach is to tinker with the particle physics. One may abandon the idea that CDM is weakly interacting. There are CDM particle candidates for which annihilation rates are of order the weak rate but for which scattering cross-sections are of order those of the strong interaction \[Carlson et al, 1992, Machacek & Hall, 1992\]. Such dissipative CDM may erase both the CDM cusps and clumpiness \[Spergel & Steinhardt, 1999\], but at the price of introducing an unacceptably spherical inner core in massive clusters \[Miralda-Escude, 2000\]. One may suppress the small-scale power on subgalactic scales, either by invoking broken scale-invariance \[Kamionkowski & Liddle, 1999\] or warm dark matter \[Sommer-Larsen & Dolgov, 1999\], in the hope that the structure of massive dark halos will be modified. Here we adopt the less radical approach of exploring astrophysical alternatives. We accept the fundamental correctness of the CDM picture, and ask (i) could excess dark matter be ejected from the optical galaxy? and (ii) why do baryons in galaxies currently have more specific angular momentum than predicted by the simple CDM picture? We argue that these questions are connected, and that both may be resolved if galaxies have first absorbed and then ejected a mass of baryons that is comparable to their current baryonic masses. An earlier paper argued that baryonic winds can imprint cores within dwarf galaxy dark halos \[Navarro, Eke & Frenk, 1996\]. Here we propose that energy and angular momentum surrendered by the ejected baryons have profoundly modified the dark halo within the current optical massive galaxy.
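For orientation, the bookkeeping behind this statement can be sketched with the NFW enclosed-mass formula, $`M(<r)=4\pi \rho _sr_s^3[\mathrm{ln}(1+r/r_s)-(r/r_s)/(1+r/r_s)]`$. Every number in the fragment below (scale radius, characteristic density, disk scale length) is an assumption chosen only to illustrate the comparison, not a value from the works cited.

```python
import numpy as np

# Enclosed mass of an NFW halo within a few disk scale lengths.
def m_nfw(r_kpc, rho_s, r_s):
    c = r_kpc / r_s
    return 4.0 * np.pi * rho_s * r_s**3 * (np.log1p(c) - c / (1.0 + c))

r_s = 20.0      # kpc, assumed halo scale radius for an L* galaxy
rho_s = 6.0e6   # Msun/kpc^3, assumed characteristic density
R_d = 3.0       # kpc, assumed disk scale length

for k in (2.0, 2.5):
    print(f"M_dm(<{k} R_d) = {m_nfw(k * R_d, rho_s, r_s):.2e} Msun")
```

Comparing such an enclosed dark mass with a stellar-synthesis estimate of the disk mass inside the same radius is the kind of test that yields the factor of two to three quoted above.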
In this picture most protogalactic material remained gaseous until the period of mass ejection was substantially complete – this conjecture is tenable because we have no reliable knowledge of either the rate at which, or the efficiency with which, stars form in a protogalactic environment. In Section 2 we argue for massive galactic outflows. In Section 3 we ask how the dark halo was modified as a result of processing the material prior to ejection. Section 4 is concerned with the implications for the star-formation rate within a gaseous bar. Section 5 sums up our arguments. ## 2 Inflow and outflow A primary problem with the conventional picture of galaxy formation is that in all simulations, the baryons lose much of their angular momentum as they fall into dark-matter haloes \[Katz & Gunn, 1992, Weil, Eke & Efstathiou, 1998\] rather than conserving it as semi-analytic models of galaxy formation typically assume \[Kauffmann, White & Guiderdoni, 1993, Granato et al, 2000\]. Consequently, whereas in semi-analytic galaxy-formation models the baryons are marginally short of angular momentum to account for the observed disk sizes, in reality they will fall short by an order of magnitude. Current estimates of the acquisition of angular momentum by perturbations in the expanding universe seem robust, as is the prediction that collapsing baryons will surrender much of their angular momentum. Hence, we should take seriously the expectation that protogalaxies early on will contain a considerable mass of low-angular-momentum baryons. What becomes of this material? Some of it will have been converted into the galaxy’s bulge and central black hole. However, the mass of low-angular-momentum material to be accounted for is comparable to the mass of the current disk, on account of the substantial factor by which infalling baryons will have been short of angular momentum. The bulge and central black hole of the Milky Way, by contrast, between them contain less mass than the disk by a factor $`\sim 5`$. In galaxies of later Hubble type, such as M33, the factor can be substantially greater. Star formation is always associated with conspicuous outflows, which are thought to be generically associated with accretion disks. Hence, it is likely that a significant fraction of a protogalaxy’s low-angular-momentum baryons are ejected in a wind that is powered by star formation, magnetic torques and black hole accretion. Observations of star-burst galaxies such as M82 lend direct support to this conjecture \[Axon & Taylor, 1978, Dahlem, Weaver & Heckman, 1998\]. We conclude that the observed disks of galaxies plausibly formed from the higher angular momentum tail of the conventional distribution. In terms of a spherically-symmetric infall model, we imagine that the baryons that started out close to the centre of the protogalaxy were mostly ejected. Galactic disks are formed from the baryons that were initially confined to the periphery of the volume from which the final galaxy’s dark matter was drawn, or even came from outside this volume – the theory of primordial nucleosynthesis assures us that $`\sim 90\%`$ of all baryons lay outside the spheres conventionally containing galactic dark matter, so there is no shortage of material to work with. On account of its large galactocentric radius, the material to which we are appealing will initially have had more angular momentum than the disk into which it was destined to settle.
X-ray observations of early-type galaxies and clusters of galaxies provide strong support for the conjecture that forming galaxies blow powerful winds. In clusters of galaxies, the metal-enriched ejecta are directly observed because they have been trapped by the cluster potential. The wide spread in the X-ray luminosities of the hot atmospheres of giant elliptical galaxies has been used to argue persuasively that early winds can escape the potentials of many galaxies, but not those of the most massive systems, with the consequence that the ejected gas sometimes falls back into the visible galaxy and gives rise to a ‘cooling flow’ \[D’Ercole et al, 1989\]. Independent arguments point to massive outflows early in galactic evolution. The narrow dispersion in the colour-magnitude diagrams of cluster ellipticals, both now and at redshifts $`z\sim 1`$ \[Jorgensen et al, 1999\], implies that the galaxies’ colours are not heavily contaminated by metal-poor stars. Early outflows would prevent such contamination \[Kauffmann & Charlot, 1998\], \[Ferreras & Silk, 2000\]. Moreover, bulges and the nuclei of elliptical galaxies are enhanced in $`\alpha `$-elements (C, O, Mg) relative to Fe \[Kuntschner, 2000\]. This observation seems to require suppression of star formation from material that has been enriched in Fe by type Ia supernovae. It is often assumed that this suppression is achieved by converting all the protogalactic gas to stars before many type Ia supernovae have exploded, but it could also be achieved by a supernova-driven wind. Models of the chemical evolution of disks \[Prantzos & Silk, 1998\] similarly yield an acceptably small number of metal-poor stars in the old disk if a supernova-driven wind carries metal-enriched gas out of the galaxy. Finally, the detection of old halo white dwarfs with a frequency and mass range similar to that inferred for MACHOs from the LMC microlensing experiments \[Ibata et al, 1999\] will, if spectroscopically confirmed \[Ibata et al, 2000\], require a substantial protogalactic outflow phase to eliminate from the protogalaxy heavy elements that would otherwise pollute stars that formed later and are observed to have low metallicities. Galactic outflows will have delivered heavy elements to the intergalactic medium \[Lehnert, Heckman & Weaver, 2000\]. This process not only accounts for the observed metallicities of intracluster gas \[Renzini, 1997\], but may also be responsible for the metallicities of the low-density gas that is primarily detected through Ly$`\alpha `$ absorption in quasar spectra. It is possible that at low $`z`$ significant enrichment of the IGM and ICM might come from dwarf galaxies, although the low metallicities of the dwarfs argue against this unless the luminosity function is exceptionally steep. At the redshifts $`z\gtrsim 2`$ at which the enriched IGM material is observed, most of the stars in the nearby dwarf galaxies will not have formed, so the more luminous galaxies would necessarily have had to dominate unless a new population of early-forming dwarfs is invoked. However, semi-analytic theory predicts that at $`z\gtrsim 2`$ most star formation is confined to locations at which luminous galaxies now reside \[Baugh et al., 1998\], \[Benson et al., 2000\]. These locations are far removed from the low-density gas that is observed to contain heavy elements. Galactic winds could be responsible for transporting the heavy elements from the location of the bulk of star formation to where they are observed.
Moreover, extended metal-enriched absorption systems might arise from expanding shells that form in galactic winds in the same way that shells form around planetary nebulae. Thus, many lines of argument suggest that outflows from both spheroids and disks were common, and therefore that significantly more baryons were involved in the formation of a given galaxy than it now contains. ## 3 Modification of the halo As we have seen, the infalling baryons will have lost much of their angular momentum. The lost angular momentum is taken up by the halo. In principle, acquisition of this angular momentum modifies the halo at all radii, but the modifications are small where dark matter dominates over baryons, and are profound only interior to the radius at which $`M_{\mathrm{disk}}\sim M_{\mathrm{halo}}`$. Observationally, we know that the baryons are dominant inside the solar radius, so we expect the halo profile to be substantially modified there, precisely as the CDM model seemingly requires \[Navarro & Steinmetz, 2000\]. There are three obvious mechanisms by which gas can lose angular momentum to the halo. Early on, the halo is expected to be triaxial and its principal axes will rotate slowly if at all. Gas flowing in such a potential rapidly loses angular momentum, even if its mass is small compared to the mass of the local halo \[Katz & Gunn, 1992\]. If gas ever accumulates to the degree that it contributes a non-negligible fraction of the mass interior to some radius $`r`$, two other mechanisms for angular-momentum loss become effective: massive blobs of gas will lose angular momentum through dynamical friction \[Stark et al, 1991, Navarro & Steinmetz, 1997\], and a tumbling gaseous bar will lose angular momentum through resonant coupling \[Hernquist & Weinberg, 1995\]. These last two processes operate even if the halo becomes axisymmetric, as it may do where gas contributes significantly to the overall mass budget. During the earliest stages of galaxy formation, gas will be far from centrifugal equilibrium and will flow rapidly inwards. We assume that it loses energy faster than angular momentum, with the consequence that gas that started out at a given galactocentric radius will eventually settle to a (possibly elliptical) ring. If this ring is not substantially self-gravitating, it will evolve little if at all. Low-surface-brightness galaxies would seem to be made up of such inert rings of gas. If the ring is significantly self-gravitating, it will continue to lose angular momentum to the local halo by a combination of dynamical friction and bar-driven resonant coupling. The dynamics of a tumbling gaseous bar embedded in a dark halo of comparable mass has yet to be carefully studied, but both analytic calculations and simulations show that, in the case of a stellar bar, resonant coupling is a rapid process: the time-scale of angular-momentum loss exceeds the bar’s dynamical time by a factor of only a few \[Weinberg & Tremaine, 1984, Debattista & Sellwood, 1998\]. Consequently, a bar embedded in a dynamically significant halo will shrink. This shrinkage will rapidly enhance the mass fraction of baryons because concentration of the baryons will be accompanied by expansion of the local halo as it takes up energy and angular momentum shed by the bar.
These considerations suggest that, if the baryons ever become dynamically significant, they will go on losing angular momentum to the halo until they are dominant, and that dominance is achieved by a combination of the baryons moving in and the dark matter moving out. Moreover, chemical evolution models of the Milky Way disk require about half of the disk to have formed via late infall \[Prantzos & Silk, 1998\], which implies an extended phase of baryonic infall. The source of the baryons is likely to be stripped satellites that are merging with the Milky Way and become dynamically disrupted. Late infall may double the mass of the disk, with the consequence that the final disk is close to maximal, and the role of dark matter within the solar circle is negligible. In phase space, orbits at energies around the bar’s corotation energy will be highly chaotic, and the strong orbital shear that is characteristic of chaos will tend to erase substructure within the halo near the corotation energy. The coupling between baryons and dark matter is a fairly local process, essentially confined to a factor of 2 either side of the baryons’ corotation radius. The processes we have described for one corotation radius presumably occurred in sequence at a series of radii that increased from very small values out to scales characteristic of present-day disk galaxies. If the arguments of the preceding section are correct, the dark matter at any given radius $`r`$ will interact locally with many different parcels of baryons during the formation process, as these parcels move through radius $`r`$ on their way to the galactic centre and probable ejection from the galaxy. ## 4 Bars and star formation Since the stars of the current disk are now on nearly circular orbits, they cannot have formed until after any tumbling gaseous bar had dissolved. Is it reasonable to have a bar without significant star formation? The dwarf galaxy NGC 2915 \[Bureau et al., 1999\] is an example of a dark-matter-dominated galaxy with a very extended HI disk revealing a central bar and spiral structure extending well beyond the optical component. Evidently the Toomre $`Q`$ of this system satisfies $$Q_{\mathrm{global}}>Q>Q_{\mathrm{local}},$$ where $`Q_{\mathrm{global}}`$ and $`Q_{\mathrm{local}}`$ are the critical values of the disk instability parameter for global non-axisymmetric and local axisymmetric instabilities, respectively. One can readily imagine that as the disk forms, the gas surface density increases and the gas velocity dispersion drops, so that $`Q`$ decreases, and the local $`Q`$ criterion is subsequently satisfied. In the solar neighbourhood the disk satisfies $$Q_{\mathrm{local}}\approx \left(\frac{\sigma _g}{10\;\mathrm{km}\,\mathrm{s}^{-1}}\right)\left(\frac{15\,M_{\odot }\,\mathrm{pc}^{-2}}{\mu _g}\right)$$ and is marginally unstable. The gas disk of the Milky Way presently contains about $`6\times 10^9M_{\odot }`$. In the transient bar phase, the effective $`Q`$ is increased by the ratio of bar streaming velocity to gas velocity dispersion $`\sigma _g`$, which amounts to a factor of $`\sim 10`$. Hence a gas mass of up to $`10^{11}M_{\odot }`$ can be stabilized against star formation during the transient bar phase. It is clear that high resolution numerical simulations are required to model the coupling between the non-axisymmetric protodisk and the dark halo. These simulations need to include the effects of baryonic dissipation and star formation.
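A one-line check of the stability numbers used above, treating the solar-neighbourhood values as assumed round figures:

```python
# Local stability parameter as quoted in the text:
#   Q_local ~ (sigma_g / 10 km/s) * (15 Msun pc^-2 / mu_g).
def q_local(sigma_g_kms, mu_g_msun_pc2):
    return (sigma_g_kms / 10.0) * (15.0 / mu_g_msun_pc2)

q_now = q_local(10.0, 15.0)    # present-day solar neighbourhood: ~1
print(q_now, 10.0 * q_now)     # transient bar phase: effective Q ~10x
```

With the effective $`Q`$ boosted by an order of magnitude during the bar phase, a correspondingly larger gas surface density, and hence total gas mass, remains stable against fragmentation, which is the basis of the $`10^{11}M_{\odot }`$ estimate above.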
There may be stellar relics of an early massive bar, recognizable as a disk component of old stars with significant orbital eccentricity. ## 5 Conclusions Two serious problems currently plague the CDM theory of galaxy formation: an excess of dark matter within the optical bodies of galaxies, and disks that are too small. The second problem reflects the low angular momentum of infalling matter, and is made worse when one accepts that infalling baryons will surrender much of their angular momentum to the dark halo. In consequence, galaxies start with more low-angular-momentum baryons than they currently hold in their bulges and central black holes. We have argued that the surplus material was early on ejected as a massive wind. Many direct and indirect observational arguments point to the existence of such winds. Although the angular momentum of the first baryons to fall in was inadequate for the formation of the disk, it was not entirely negligible, and caused the inner halo to expand when the latter absorbed it. Similarly, the angular momentum of the baryons that are now in the disk was originally larger than it now is, and the surplus angular momentum further expanded the inner halo. In short, through relieving perhaps twice the baryonic mass of the current galaxy of angular momentum and energy, the dark halo has become substantially less centrally concentrated than it was originally, and it now contributes only a small fraction of the mass within the visible galaxy. During this refashioning of its inner parts, substructure is likely to have been erased, leaving the final inner halo smoother both locally and globally. This picture requires the baryonic mass to remain gaseous until the dark halo has been reduced to a minor contributor to the central mass, and a disk has formed in which most material is on circular orbits. This conjecture is plausible for two reasons: (i) the dark halo will be unresponsive to the collective modes of a gaseous disk, so the disk will not have growing modes until it dominates the gravitational potential in which it sits, and (ii) the enhanced orbital shear that is characteristic of closed orbits in a barred potential cannot be conducive to star formation. In any case we have little understanding of what controls the rate of star formation in a protogalaxy, and we know from the fragility of disks \[Toth & Ostriker, 1992\] that disks formed at the end of the formation process, after merging had all but ceased and the largest substructure had been erased from bulge and inner halo. Existing numerical simulations of the interactions of baryons and dark matter during galaxy formation (e.g., Navarro & Steinmetz, 2000; Benson et al., 2000) lack both the mass resolution and some of the physics that is required to realise the essential ideas employed here. For example, in the simulations of Benson et al. the gravitational softening length is $`10h^{-1}\,\mathrm{kpc}`$, the basic baryonic resolution element has mass $`4\times 10^{10}\,\mathrm{M}_{\odot }`$, and spurious discreteness effects will be present on mass scales several times larger. Such simulations neglect magnetic fields (which are believed to drive winds off accretion disks) and energy input by both supernovae and the central massive object. In summary, a considerable mass of low-angular-momentum baryons must have been ejected.
This prediction is a priori plausible, given observations of winds from star-burst galaxies and outflows from Lyman-break galaxies, and given the prevalence of outflows in star-formation regions. The heavy element abundances of hot gas in clusters of galaxies and in cool, low-density gas observed at redshifts $`z\sim 2`$ through quasar absorption lines are likely to arise through the mixing of metal-rich ejecta with primordial gas. The low-angular-momentum material having been ejected, the current disks formed from the higher-angular-momentum baryons that fell in later. Since the ejection stage commences only once $`M_{\mathrm{baryon}}\sim M_{\mathrm{dm}}`$ (self-gravity must drive the gas flows and the ensuing winds, which cause the dark-matter distribution to expand and the baryons to contract further), the visible galaxy is inevitably baryon-dominated, yet its circular speed is forced, by the baryonic mass loss and the protogalactic dynamical coupling, to approximately match that of the embedding halo. Thus the so-called ‘disk–halo conspiracy’ \[Bahcall & Casertano, 1998\] is not really a coincidence but a consequence of dynamical evolution.
no-problem/0003/astro-ph0003114.html
ar5iv
text
# Power law burst and inter-burst interval distributions in the solar wind: Turbulence or dissipative SOC?

## Abstract

We calculate for the first time the probability density functions (PDFs) $`P`$ of burst energy $`e`$, duration $`T`$ and inter-burst interval $`\tau `$ for a known turbulent system in nature. Bursts in the earth-sun component of the Poynting flux at 1 AU in the solar wind were measured using the MFI and SWE experiments on the NASA WIND spacecraft. We find $`P(e)`$ and $`P(T)`$ to be power laws, consistent with self-organised criticality (SOC). We find also a power law form for $`P(\tau )`$ that distinguishes this turbulent cascade from the exponential $`P(\tau )`$ of ideal SOC, but not from some other SOC-like sandpile models. We discuss the implications for the relation between SOC and turbulence.

In their seminal papers , Bak et al. (BTW) demonstrated that a discrete cellular automaton model of an artificial sandpile had a spatial response to slow fuelling that was characterised by a scale-free distribution of energy release events or “avalanches” (see also ). Scale invariance was shown by a power law probability density function (PDF) $`P`$ of avalanche area $`A`$, $`P(A)=CA^{-\alpha }`$. This scale-invariant spatial structure led BTW to propose the sandpile as a toy model of turbulence because, in Kolmogorov turbulence , long-wavelength, injection-range perturbations cause a scale-free forward cascade of energy transport until the dissipation scale is reached and therefore one might expect the PDFs of burst quantities in turbulent systems to be power laws too. These have recently been shown in burst area $`A`$ for a generic inverse cascade model , in burst energy $`e`$ and duration $`T`$ for both a shell model and reduced 2D MHD turbulence simulations , and in peak burst power for 1D MHD turbulence . Boffetta et al. (hereafter B99) have also shown that the PDF $`P`$ of inter-burst intervals $`\tau `$ in a shell model of turbulence is a power law too, but that this is not so for the BTW sandpile, in which $`P(\tau )`$ is exponential. B99 postulated that the power law $`P(\tau )`$ found for solar flares was consistent with a shell model of turbulence rather than the BTW sandpile. Here we demonstrate for the first time that the predicted avalanche phenomenology (power laws in $`P(e),P(T)`$ and $`P(\tau )`$) of a shell model of turbulence is observed within a natural system - the solar wind - for which there is direct independent evidence of turbulence . The solar wind is a near-radial supersonic plasma outflow from the solar corona which carries with it solar magnetic flux into interplanetary space by virtue of the plasma’s very high electrical conductivity. In this ideal magnetohydrodynamic (MHD) approximation, the electric field $`𝐄^{\prime }`$ in the rest frame of the moving plasma is given by $`𝐄^{\prime }=𝐄+𝐯\times 𝐁=\mathrm{𝟎}`$ from Ohm’s law. The electromagnetic energy (Poynting) flux $`𝐄\times 𝐇`$ along the sun-earth line $`x`$ can be approximated by $`v\left(B_y^2+B_z^2\right)/\mu _0`$ assuming a radial solar wind. This quantity was calculated from “key parameter” measurements of $`𝐁`$ and $`𝐯`$ from the MFI and SWE experiments, respectively, on the WIND spacecraft between January 1995 and December 1998 inclusive. The typically 80-100 s averaged measurements of $`𝐯`$ were interpolated onto the 46 s time samples of $`𝐁`$. In the resulting time series, bursts were identified, by the method used in , as intervals when the Poynting flux exceeded a given fixed threshold.
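The burst-extraction step just described is simple to state precisely. The following sketch (Python/numpy; the function and variable names are ours, and the synthetic series stands in for the measured one) shows one way to implement it, following the description given in this paper:

```python
import numpy as np

def burst_statistics(flux, dt, threshold):
    """Burst energies e, lifetimes T and inter-burst intervals tau, defined by
    excursions of a regularly sampled series above a fixed threshold."""
    above = flux > threshold
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1        # rising edges
    ends = np.where(edges == -1)[0] + 1         # falling edges
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    # sum of the flux samples over each burst (times dt to give an energy)
    e = np.array([flux[i:j].sum() * dt for i, j in zip(starts, ends)])
    T = (ends - starts) * dt                    # burst lifetimes
    tau = (starts[1:] - ends[:-1]) * dt         # quiet times between bursts
    return e, T, tau

rng = np.random.default_rng(0)
flux = rng.lognormal(size=100000)               # stand-in for the measured series
dt = 46.0                                       # sample spacing [s]
for p in range(10, 100, 10):                    # 10th ... 90th percentile thresholds
    e, T, tau = burst_statistics(flux, dt, np.percentile(flux, p))
```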
Thresholds were set at the 10,20,…90 percentiles of the cumulative probability distribution of the Poynting flux. For each threshold, the PDF of the burst energy $`e`$, burst lifetime $`T`$, and inter-burst interval $`\tau `$ was calculated, where the burst energy is the sum of the Poynting flux samples over the burst lifetime $`T`$. The PDFs are shown in Figure 1. The burst energy PDF (top panel) can be seen to have a power law region over about 4 orders of magnitude between about $`10^{-5}`$ and $`10^{-1}`$ $`\mathrm{J}\mathrm{m}^{-2}`$. The burst lifetime PDF (second panel) also exhibits a power law region and can be fitted by a power law with exponential cut-off similar to that found previously for the solar wind $`\epsilon `$ function . In these respects, the solar wind Poynting flux has the avalanche phenomenology common to both the BTW sandpile and turbulence. The inter-burst interval PDF has been plotted on both a log-log scale (third panel) and a log-linear scale (bottom panel). It is readily seen that this PDF is a power law rather than an exponential. A power law with an exponent of 1.67 is shown by the thick dashed curve in the third panel. This power law form distinguishes the solar wind from a system having the properties of the BTW sandpile and instead shows it to be consistent with the shell model of turbulence used by B99. This is the same result they found for solar flares, for which there was not the direct independent evidence of turbulence that there is for the solar wind. It is possible that the solar wind avalanche phenomenology is simply dominated by the advection of an already turbulent fluid from the sun rather than by an energy cascade within the solar wind itself (S. C. Chapman, personal communication, 1999). We can expect the solar wind outflow from the sun to be strongly influenced by energy dissipation events in the solar corona such as nanoflares because these events can change the thermal pressure gradient that drives the solar wind and/or allow reconfigurations of the solar magnetic field that aid or inhibit plasma outflow from the sun. These observations are also topical in magnetospheric physics because we have previously shown a similarity between the avalanche phenomenology present in geomagnetic perturbations (which measure dissipative currents in the Earth’s ionosphere) and that in the energy delivered by the solar wind to the Earth’s magnetic and plasma environment. Independently, an analysis of the $`R/S`$ Hurst exponents of solar wind variables and magnetospheric indices has drawn similar conclusions. So what does the observation of avalanche phenomenology in a natural system tell us about its physics? BTW postulated that the appearance of “avalanche phenomenology” (power law burst PDFs) in Nature was due to an underlying fixed point in the dynamics (“criticality”) which was attractive (“self-organised”) - Self-Organised Criticality (SOC). Renormalisation group studies have demonstrated that the Abelian BTW model indeed exhibits an attractive fixed point. However, although BTW argued that SOC implied avalanche phenomenology, the converse is not true, and in particular the observation of avalanche phenomenology in natural systems does not by itself prove that such systems are SOC. There are many examples of systems that are either not self-organised or not critical, or both, that nevertheless present avalanche phenomenology.
Avalanche phenomenology has been seen in the forest fire model controlled by a repulsive rather than an attractive fixed point; it thus has to be tuned to exhibit scaling . Some other models exhibit power-law distributions without finite size scaling and so are not bona fide critical. Avalanche phenomenology can also be produced by coherent noise driving or by “sweeping of an instability” . In addition, the fixed-threshold method of estimating burst sizes that was used in and the present work may generally result in scale-free PDFs if applied to certain types of time series. The action of slicing through a fractional Brownian motion (fBm) time series at a fixed level generates a set of crossing times known as an isoset, for which the PDF of the time interval between two subsequent crossings has a power law form . Hence the burst duration and inter-burst interval statistics drawn from such an fBm time series by the fixed threshold method would also be expected to be power laws. Clearly it is not sensible to apply the SOC label generally to systems exhibiting avalanche phenomenology . Instead we should follow B99 in using a restricted definition of SOC, implicit in BTW’s choice of name, as being the mechanism of self-organisation to a critical state. From this point of view, in order to show the presence of SOC, one has to demonstrate those properties of self-organisation and criticality that are unique to the process of SOC rather than simply observing the avalanche phenomena that SOC was designed to account for. In consequence, the important question remains as to the generality of B99’s identification of an exponential $`P(\tau )`$ with the SOC mechanism. Exponential $`P(\tau )`$ implies that energy release episodes are uncorrelated in time because of the standard result that Poisson-distributed random numbers have an exponential distribution of waiting times. This will give rise to a $`1/f^2`$ power spectrum for frequencies higher than those corresponding to the longest correlation time. In the BTW model, this is the time for the longest avalanche and is set by the system length. Jensen et al. found that the BTW system had a $`1/f^2`$ high frequency power spectrum in energy flow down the sandpile, rather than the $`1/f`$ spectrum indicative of long-time correlation. Whilst exponential $`P(\tau )`$ certainly holds for the BTW sandpile , this is not true for some other sandpile models. For example, let us consider the nearest neighbour OFC model . The conservative form of this model has been shown to be critical and to evolve to a steady state . In this case, $`P(\tau )`$ is found to be exponential . However, there is also a non-conservative form of the nearest neighbour OFC model in which dissipation is introduced. This was recently shown to cease to be critical and, in this dissipative case, $`P(\tau )`$ is found to differ from an exponential . This supports the identification of exponential $`P(\tau )`$ with SOC. Three classes of sandpile model, all of which modify aspects of BTW SOC, exhibit time correlation between bursts - variously reported as a non-exponential $`P(\tau )`$ in the dissipative OFC model and as a “1/f” power spectrum in both running and continuous (e.g.) sandpiles. However, it has yet to be shown that any of these systems are still SOC in the sense of both possessing an attractive fixed point and showing finite size scaling.
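The uncorrelated, exponentially distributed waiting times of the ideal BTW pile are easy to probe numerically. Below is a minimal, unoptimized sketch (Python/numpy; the lattice size, drive length and burst threshold are illustrative assumptions, not values from any of the cited studies):

```python
import numpy as np

def btw_avalanche_sizes(L=24, n_grains=50000, seed=1):
    """Slowly driven 2D BTW sandpile with open boundaries; returns the
    avalanche size (number of topplings) triggered by each added grain."""
    rng = np.random.default_rng(seed)
    z = rng.integers(0, 4, size=(L, L))        # random stable initial state
    sizes = np.empty(n_grains, dtype=int)
    for g in range(n_grains):
        i, j = rng.integers(0, L, size=2)
        z[i, j] += 1                           # add one grain at a random site
        s = 0
        while True:
            unstable = np.argwhere(z >= 4)
            if unstable.size == 0:
                break
            for a, b in unstable:
                z[a, b] -= 4                   # topple: one grain to each
                s += 1                         # neighbour; grains falling off
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    x, y = a + da, b + db      # the edge leave the system
                    if 0 <= x < L and 0 <= y < L:
                        z[x, y] += 1
        sizes[g] = s
    return sizes

sizes = btw_avalanche_sizes()[25000:]          # discard the transient
events = np.where(sizes > 50)[0]               # "bursts": large avalanches
tau = np.diff(events)                          # waiting times, in added grains
# For the BTW pile, tau should be close to exponentially distributed, in
# contrast with the power-law P(tau) reported here for the solar wind.
```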
If B99 are correct in identifying time correlation of bursts as a diagnostic for the absence of SOC, then there should be no instance of a model that has an attractive fixed point and finite size scaling (self-organized and critical) and which also has time correlated bursts of energy flow (specifically a $`1/f`$ spectrum or nonexponential $`P(\tau )`$). That is, the time correlation in dissipative, running and continuous sandpiles is actually the signature of the breakdown of self-organized criticality. The apparent paradox of the observation of scale-free burst PDFs in such models is resolved when one recognises that scaling may survive away from the fixed point, and can thus co-exist with time correlation . Scaling in both space and time can thus be a robust “generic” property of such “near-SOC” systems even if exact criticality is not. Boffetta et al.’s test can therefore detect the presence of SOC, but cannot distinguish any of the modified sandpiles from turbulence models, and hence such “near-SOC” models remain possible descriptions of turbulence. We are grateful to R. P. Lepping and K. W. Ogilvie for solar wind data from the NASA WIND spacecraft. We acknowledge valuable discussions with Sandra Chapman, Iain Coleman, Tim Horbury, Sean Oughton, Carmen do Prado, Dave Tetreault and Tom Chang. We appreciate the provision of preprints by Giuseppe Consolini, Channon Price, Jouni Takalo and Donald Turcotte. NWW acknowledges the generous hospitality of MIT.
no-problem/0003/physics0003057.html
ar5iv
text
## 1 Problem

A linearly polarized plane electromagnetic wave of frequency $`\omega `$ is normally incident on an opaque screen with a square aperture of edge $`a`$. Show that the wave has a longitudinal magnetic field once it has passed through the aperture by an application of Faraday’s Law to a loop parallel to the screen, on the side away from the source. Deduce the ratio of longitudinal to transverse magnetic field, which is a measure of the diffraction angle.

## 2 Solution

Consider a linearly polarized wave with electric field $`𝐄_xe^{i(kz-\omega t)}`$ incident on a perfectly absorbing screen in the plane $`z=0`$ with a square aperture of edge $`a`$ centered on the origin. We apply the integral form of Faraday’s Law to a semicircular loop with its straight edge bisecting the aperture and parallel to the transverse electric field $`𝐄_x`$, as shown in the figure. The electric field is essentially zero close to the screen on the side away from the source. Then, at time $`t=0`$, $$\oint 𝐄\cdot d𝐥\approx E_xa\ne 0.$$ (1) If the loop were on the source side of the screen, the integral would vanish. Faraday’s Law tells us immediately that the time derivative of the magnetic flux through the loop is nonzero. Hence, there must be a nonzero longitudinal component, $`B_z`$, to the magnetic field, once the wave has passed through the aperture. In Gaussian units, $$B_ya=E_xa\approx \oint 𝐄\cdot d𝐥=-\frac{1}{c}\frac{d}{dt}\int 𝐁\cdot d𝐒\approx -\frac{1}{c}\frac{dB_z}{dt}\frac{a^2}{2},$$ (2) where $`B_z`$ is a characteristic value of the longitudinal component of the magnetic field over that half of the aperture enclosed by the loop. The longitudinal magnetic field certainly has time dependence of the form $`e^{-i\omega t}`$, so $`dB_z/dt=-i\omega B_z=-2\pi icB_z/\lambda `$, and eq. (2) leads to $$\frac{B_z}{B_y}\approx -\frac{i\lambda }{\pi a}.$$ (3) By a similar argument for a loop that enclosed the other half of the aperture, $`B_z/B_y\approx i\lambda /\pi a`$ in that region; $`B_z=0`$ in the plane $`y=0`$. We see that the wave is no longer a plane wave after passing through the aperture, and we can say that it has been diffracted as a consequence of Faraday’s Law. This argument emphasizes the fields near the aperture. A detailed understanding of the fields far from the aperture requires more than just Faraday’s Law. A simplified analysis is that the magnitude of the ratio (3) is a measure of the spread of angles of the magnetic field vector caused by the diffraction, and so in the far zone the wave occupies a cone of characteristic angle $`\lambda /\pi a`$.

## 3 Comments

Using the fourth Maxwell equation including the displacement current, we can make an argument for diffraction of the electric field similar to that given above for the magnetic field. After the wave has passed through the aperture of size $`a`$, it is very much like a wave that has been brought to a focus of size $`a`$. Hence, we learn that near the focus $`(x,y,z)=(0,0,0)`$ of a linearly polarized electromagnetic wave with $`𝐄=E\widehat{𝐱}`$ and propagating in the $`z`$ direction, there are both longitudinal electric and magnetic fields, and that $`E_z`$ and $`B_z`$ are antisymmetric about the planes $`x=0`$ and $`y=0`$, respectively. Also, eq. (3) indicates that near the focus the longitudinal and transverse fields are $`90^{\circ }`$ out of phase. Yet, far from the focus, the transverse and longitudinal fields become in phase, resulting in spherical wavefronts that extend over a cone of characteristic angle $`\lambda /\pi a`$.
For this to hold, the longitudinal and the transverse fields must experience phase shifts that differ by $`90^{\circ }`$ between the focal point and the far zone. It is only a slight leap from the present argument to conclude that the transverse fields undergo the extra phase shift. This was first deduced (or noticed) by Gouy in 1890 via the Huygens-Kirchhoff integral . The latter tells us that the secondary wavelet $`\psi `$ at a large distance $`r`$ from a small region of area $`A`$ where the wave amplitude is $`\psi _0e^{-i\omega t}`$ is $$\psi =\frac{k\psi _0A}{2\pi i}\frac{e^{i(kr-\omega t)}}{r}=\frac{k\psi _0A}{2\pi }\frac{e^{i(kr-\omega t-\pi /2)}}{r}.$$ (4) The possibly mysterious factor of $`i`$ in the denominator of the Huygens-Kirchhoff integral implies a $`90^{\circ }`$ phase shift between a focus and the far field of a beam of light. Here, we have seen that this phase shift can also be considered as a consequence of Faraday’s Law.
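As a rough numerical illustration of eq. (3) (the wavelength and aperture size below are arbitrary assumptions, not values from the problem):

```python
import numpy as np

wavelength = 500e-9   # assumed: 500 nm visible light
a = 50e-6             # assumed: 50 micron square aperture

ratio = wavelength / (np.pi * a)   # |B_z/B_y| from eq. (3)
print(f"|B_z/B_y| ~ {ratio:.1e}, i.e. a diffraction cone of "
      f"~{np.degrees(ratio):.2f} degrees")
```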
no-problem/0003/math0003126.html
ar5iv
text
# Composition sum identities related to the distribution of coordinate values in a discrete simplex.

## 1. Introduction

The present paper is a discussion of composition sum identities that may be obtained by utilizing spectral residues of parameterized, recursively defined sequences. Here we are using the term “composition sum” to refer to a sum whose index runs over all ordered lists of positive integers $`p_1,p_2,\ldots ,p_l`$ such that, for a fixed $`n`$, $$p_1+\cdots +p_l=n.$$ Composition sums are a useful device, and composition sum identities are frequently encountered in combinatorics. For example the Stirling numbers (of both kinds) have a natural representation by means of such sums \[4, §51, §60\]: $$s_n^l=\frac{n!}{l!}\sum _{p_1+\cdots +p_l=n}\frac{1}{p_1p_2\cdots p_l};\qquad 𝔖_n^l=\frac{n!}{l!}\sum _{p_1+\cdots +p_l=n}\frac{1}{p_1!p_2!\cdots p_l!}.$$ There are numerous other examples. In general, it is natural to use a composition sum to represent the value of quantities $`f_n`$ that depend in a linearly recursive manner on quantities $`f_1,f_2,\ldots ,f_{n-1}`$. By way of illustration, let us mention that this point of view leads immediately to the interpretation of the $`n^{\text{th}}`$ Fibonacci number as the cardinality of the set of compositions of $`n`$ by $`\{1,2\}`$ \[1, 2.2.23\]. To date, there are few systematic investigations of composition sum identities. The references known to the present author are ; all of these papers obtain their results through the use of generating functions. In this article we propose a new technique based on spectral residues, and apply this method to derive some results of an enumerative nature. Let us begin by describing one of these results, and then pass to a discussion of spectral residues. Let $`S^3(n)`$ denote the discrete simplex of bounded, ordered triples of natural numbers: $$S^3(n)=\{(x,y,z)\in \mathbb{N}^3:0\le x<y<z\le n\}.$$ In regard to this simplex, we may inquire as to what is more probable: a selection of points with distinct $`y`$ coordinates, or a selection of points with distinct $`x`$ coordinates. The answer is given by the following.

###### Theorem 1.1. For every cardinality $`l`$ between $`2`$ and $`n-1`$, there are more $`l`$-element subsets of $`S^3(n)`$ with distinct $`y`$ coordinates, than there are $`l`$-element subsets with distinct $`x`$ coordinates.

Let us consider this result from the point of view of generating functions. The number of points with $`y=j`$ is $`j(n-j)`$. Hence the generating function for subsets with distinct $`y`$-values is $$Y(t)=\prod _{j=1}^{n-1}(1+j(n-j)t),$$ where $`t`$ counts the selected points. The number of points with $`x=n-j`$ is $`j(j-1)/2`$. Hence, the generating function for subsets with distinct $`x`$-values is $$X(t)=\prod _{j=2}^{n}\left(1+\frac{j(j-1)}{2}t\right).$$ The above theorem is equivalent to the assertion that the coefficients of $`Y(t)`$ are greater than the coefficients of $`X(t)`$. The challenge is to find a way to compare these coefficients. We will see below this can be accomplished by re-expressing the coefficients in question as composition sums, and then employing a certain composition sum identity to make the comparison. We therefore begin by introducing a method for systematically generating such identities.
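Before developing that method, the coefficientwise claim itself can be checked by brute force for small $`n`$ (a short sketch in Python; the tested range of $`n`$ is an arbitrary choice):

```python
def polymul(p, q):
    """Multiply two polynomials given as coefficient lists."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def Y(n):  # prod_{j=1}^{n-1} (1 + j(n-j) t): subsets with distinct y-values
    poly = [1]
    for j in range(1, n):
        poly = polymul(poly, [1, j * (n - j)])
    return poly

def X(n):  # prod_{j=2}^{n} (1 + j(j-1)/2 t): subsets with distinct x-values
    poly = [1]
    for j in range(2, n + 1):
        poly = polymul(poly, [1, j * (j - 1) // 2])
    return poly

for n in range(3, 16):
    y, x = Y(n), X(n)
    # Theorem 1.1: strictly more subsets with distinct y's, for 2 <= l <= n-1
    assert all(y[l] > x[l] for l in range(2, n))
```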
## 2. The method of spectral residues

Let us consider a sequence of quantities $`f_n`$, recursively defined by $$f_0=1,\qquad (\nu -n)f_n=\sum _{j=0}^{n-1}a_{jn}f_j,\qquad n=1,2,\ldots $$ (2.1) where $`a_{jk}`$, $`0\le j<k`$, is a given array of constants, and $`\nu `$ is a parameter. The presence of the parameter has some interesting consequences. For instance, it is evident that if $`\nu `$ is a natural number, then there is a possibility that the relations (2.1) will not admit a solution. To deal with this complication we introduce the quantities $$\rho _n=\mathrm{Res}(f_n(\nu ),\nu =n),$$ and henceforth refer to them as spectral residues. The list $`\rho _1,\rho _2,\ldots `$ will be called the spectral residue sequence.

###### Proposition 2.1. If $`\nu =n`$ then the relations (2.1) do not admit a solution if $`\rho _n\ne 0`$, and admit multiple solutions if $`\rho _n=0`$.

###### Proof. If $`\nu =n`$, the relations in question admit a solution if and only if $$\sum _{j=0}^{n-1}a_{jn}f_j\Big|_{\nu =n}=0.$$ The left-hand side of the above equation is precisely $`\rho _n`$, the $`n^{\text{th}}`$ spectral residue. It follows that if $`\rho _n=0`$, then the value of $`f_n`$ can be freely chosen, and that the solutions are uniquely determined by this value. ∎

The above proposition is meant to indicate how spectral residues arise naturally in the context of parameterized, recursively defined sequences. However, our interest in spectral residues is motivated by the fact that they can be expressed as composition sums. To that end, let $`𝐩=(p_1,\ldots ,p_l)`$ be an ordered list of natural numbers. We let $$s_j=p_1+\cdots +p_j,\qquad j=1,\ldots ,l$$ denote the $`j^{\text{th}}`$ left partial sum and set $$|𝐩|=s_l=p_1+\cdots +p_l.$$ Let us also define the following abbreviations: $$s_𝐩=\prod _{j=1}^{l-1}s_j,\qquad a_𝐩=\prod _{j=0}^{l-1}a_{s_js_{j+1}},$$ where we adopt the convention $`s_0=0`$.

###### Proposition 2.2. $$\rho _n=\sum _{|𝐩|=n}a_𝐩/s_𝐩.$$

Composition sum identities arise in this setting because spectral residue sequences enjoy a certain invariance property. Let $`𝐟=(f_1,f_2,\ldots )`$ and $`𝐠=(g_1,g_2,\ldots )`$ be sequences defined, respectively, by relation (2.1) and by $$g_0=1,\qquad (\nu -n)g_n=\sum _{j=0}^{n-1}b_{jn}g_j,\qquad n=1,2,\ldots $$

###### Definition 2.3. We will say that $`𝐟`$ and $`𝐠`$ are unipotently equivalent if $`g_n=f_n`$ plus a $`\nu `$-independent linear combination of $`f_1,\ldots ,f_{n-1}`$.

The motivation for this terminology is as follows. It is natural to represent the coefficients $`a_{ij}`$ and $`b_{ij}`$ by infinite, lower nilpotent matrices, call them $`A`$ and $`B`$. Let $`D_\nu `$ denote the diagonal matrix with entry $`\nu -n`$ in position $`n+1`$. The sequences $`𝐟`$ and $`𝐠`$ are then nothing but generators of the kernels of $`D_\nu -A`$ and $`D_\nu -B`$, respectively. The condition that $`𝐟`$ and $`𝐠`$ are unipotently equivalent amounts to the condition that $`D_\nu -A`$ and $`D_\nu -B`$ are related by a unipotent matrix factor. Unipotent equivalence is, evidently, an equivalence relation on the set of sequences of type (2.1).

###### Proposition 2.4. The spectral residue sequence is an invariant of the corresponding equivalence classes.

###### Proof. The recursive nature of the $`f_k`$ ensures that $`\mathrm{Res}(f_k;\nu =n)`$ vanishes for all $`k<n`$. The proposition now follows by inspection of Definition 2.3. ∎

The application of this result to composition identities is immediate.

###### Corollary 2.5. If $`a_{ij}`$ and $`b_{ij}`$ are nilpotent arrays of constants such that the corresponding $`𝐟`$ and $`𝐠`$ are unipotently equivalent, then necessarily $$\sum _{|𝐩|=n}a_𝐩/s_𝐩=\sum _{|𝐩|=n}b_𝐩/s_𝐩.$$
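Proposition 2.2 lends itself to direct machine verification. The following sketch (Python with sympy; it specializes to the coefficients $`a_{jn}=U_{n-j}`$ of the first-order case treated in the next section, and the cutoff $`n\le 4`$ is an arbitrary choice) computes each spectral residue both from the recursion (2.1) and as the composition sum:

```python
import sympy as sp

nu = sp.symbols('nu')
n_max = 4
U = sp.symbols(f'U1:{n_max + 1}')          # generic coefficients U_1 .. U_4

# f_n from recursion (2.1) with a_{jn} = U_{n-j}; each f_n is rational in nu.
f = [sp.Integer(1)]
for m in range(1, n_max + 1):
    f.append(sp.cancel(sum(U[m - j - 1] * f[j] for j in range(m)) / (nu - m)))

def compositions(n):
    """All ordered lists of positive integers summing to n."""
    if n == 0:
        yield ()
    else:
        for k in range(1, n + 1):
            for rest in compositions(n - k):
                yield (k,) + rest

for n in range(1, n_max + 1):
    rho = sp.cancel((nu - n) * f[n]).subs(nu, n)   # Res(f_n, nu = n)
    comp_sum = sp.Integer(0)
    for p in compositions(n):
        a_p = sp.Integer(1)                        # a_p = prod_i U_{p_i}
        for part in p:
            a_p *= U[part - 1]
        s_p, run = sp.Integer(1), 0                # s_p = s_1 s_2 ... s_{l-1}
        for part in p[:-1]:
            run += part
            s_p *= run
        comp_sum += a_p / s_p
    assert sp.simplify(rho - comp_sum) == 0        # Proposition 2.2
```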
Due to its general nature, the above result does not, by itself, lead to interesting composition sum identities. In the search for useful applications we will limit our attention to recursively defined sequences arising from series solutions of linear differential equations. Consideration of both first and second order equations in one independent variable will prove fruitful. Indeed, in the next section we will show that the first-order case naturally leads to the exponential formula of labelled counting \[7, §3\]. The second-order case will be considered after that; it leads naturally to the type of result discussed in the introduction.

## 3. Spectral residues of first-order equations.

Let $`U=U_1z+U_2z^2+\cdots `$ be a formal power series with zero constant term, and let $`\varphi (z)`$ be the series solution of the following parameterized, first-order, differential equation: $$z\varphi ^{\prime }(z)+[U(z)-\nu ]\varphi (z)+\nu =0,\qquad \varphi (0)=1.$$ Equivalently, the coefficients of $`\varphi (z)`$ must satisfy $$\varphi _0=1,\qquad (\nu -n)\varphi _n=\sum _{j=0}^{n-1}U_{n-j}\varphi _j.$$ In order to obtain a composition sum identity we seek a related equation whose solution will be unipotently related to $`\varphi (z)`$. It is well known that a linear, first-order differential equation can be integrated by means of a gauge transformation. Indeed, setting $`\sigma (z)=\sum _{k=1}^{\mathrm{\infty }}U_k\frac{z^k}{k}`$, $`\psi (z)=\mathrm{exp}(\sigma (z))\varphi (z)`$, our differential equation is transformed into $$z\psi ^{\prime }(z)-\nu \psi (z)+\nu \mathrm{exp}(\sigma (z))=0.$$ Evidently, the coefficients of $`\varphi `$ and $`\psi `$ are unipotently related, and hence we obtain the following composition sum identity.

###### Proposition 3.1. Setting $`U_𝐩=\prod _iU_{p_i}`$ for $`𝐩=(p_1,\ldots ,p_l)`$ we have $$\sum _n\sum _{|𝐩|=n}\frac{U_𝐩}{s_𝐩}\frac{z^n}{n}=\mathrm{exp}\left(\sum _kU_k\frac{z^k}{k}\right).$$ (3.2)

The above identity has an interesting interpretation in the context of labelled counting, e.g. the enumeration of labelled graphs. In our discussion we will adopt the terminology introduced in H. Wilf’s book . For each natural number $`k\ge 1`$ let $`𝒟_k`$ be a set — we will call it a deck — whose elements we will refer to as pictures of weight $`k`$. A card of weight $`k`$ is a pair consisting of a picture of weight $`k`$ and a $`k`$-element subset of $`\mathbb{N}`$ that we will call the label set of the card. A hand of weight $`n`$ and size $`l`$ is a set of $`l`$ cards whose weights add up to $`n`$ and whose label sets form a partition of $`\{1,2,\ldots ,n\}`$ into $`l`$ disjoint groups. The goal of labelled counting is to establish a relation between the cardinality of the sets of hands and the cardinality of the decks. For example, when dealing with labelled graphs, $`𝒟_k`$ is the set of all connected $`k`$-graphs whose vertices are labelled by $`1,2,\ldots ,k`$. A card of weight $`k`$ is a connected $`k`$-graph labelled by any $`k`$ natural numbers. Equivalently, a card can be specified as a picture and a set of natural number labels. To construct the card we label vertex $`1`$ in the picture by the smallest label, vertex $`2`$ by the next smallest label, etc.
Finally, a hand of weight $`n`$ is an $`n`$-graph (not necessarily connected) whose vertices are labelled by $`1,2,\ldots ,n`$. Let $`d_k`$ denote the cardinality of $`𝒟_k`$ and set $$d(z)=\sum _kd_k\frac{z^k}{k!}.$$ Similarly let $`h_{nl}`$ denote the cardinality of the set of hands of weight $`n`$ and size $`l`$, and set $$h(y,z)=\sum _{n,l}h_{nl}y^l\frac{z^n}{n!}.$$ The exponential formula of labelled counting is an identity that relates the above generating functions. Here it is: $$h(y,z)=\mathrm{exp}(yd(z)).$$ (3.3) To establish the equivalence of (3.2) and (3.3) we need to introduce some extra terminology. Consider a list of $`l`$ cards with weights $`p_1,\ldots ,p_l`$ and label sets $`S_1,\ldots ,S_l`$. We will say that such a list forms an ordered hand if $$\mathrm{min}(S_i)<\mathrm{min}(S_{i+1}),\text{ for all }i=1,\ldots ,l-1.$$ Evidently, each hand (a set of cards) corresponds to a unique ordered hand (an ordered list of the same cards), and hence we seek a way to enumerate the set of all ordered hands of weight $`n`$ and size $`l`$. Let us fix a composition $`𝐩=(p_1,\ldots ,p_l)`$ of a natural number $`n`$, and consider a permutation $`\pi =(\pi _1,\ldots ,\pi _n)`$ of $`\{1,\ldots ,n\}`$. Let us sort $`\pi `$ according to the following scheme. Exchange $`\pi _1`$ and $`1`$ and then sort $`\pi _2,\ldots ,\pi _{p_1}`$ into ascending order. Next exchange $`\pi _{p_1+1}`$ and the minimum of $`\pi _{p_1+1},\ldots ,\pi _n`$ and then sort $`\pi _{p_1+2},\ldots ,\pi _{p_1+p_2}`$ into ascending order. Continue in an analogous fashion $`l-2`$ more times. The resulting permutation will describe a division of $`\{1,\ldots ,n\}`$ into $`l`$ ordered blocks, with the blocks themselves being ordered according to their smallest elements. Call such a permutation $`𝐩`$-ordered. Evidently, each $`𝐩`$-ordered permutation can be obtained by sorting $$s_𝐩\times n\times \prod _i(p_i-1)!$$ different permutations. Next, let us note that an ordered hand can be specified in terms of the following ingredients: a composition $`𝐩`$ of $`n`$, one of $`\prod _id_{p_i}`$ choices of pictures of weights $`p_1,\ldots ,p_l`$, and a $`𝐩`$-ordered permutation. It follows that $$h_{nl}=\underset{\stackrel{𝐩=(p_1,\ldots ,p_l)}{\left|𝐩\right|=n}}{\sum }\frac{n!}{s_𝐩\times n\times \prod _i(p_i-1)!}\prod _id_{p_i}.$$ Finally, we can establish the equivalence of (3.2) and (3.3) by setting $$U_k=\frac{d_k}{(k-1)!}y.$$

## 4. Spectral residues of second-order equations.

Let $`U=U_1z+U_2z^2+\cdots `$ be a formal power series with zero constant term, and let $`\varphi (z)`$ be the series solution of the following second-order, linear differential equation: $$z^2\varphi ^{\prime \prime }(z)+(1-\nu )z\varphi ^{\prime }(z)+U(z)\varphi (z)=0,\qquad \varphi (0)=1.$$ (4.4) Equivalently, the coefficients of $`\varphi (z)`$ are determined by $$\varphi _0=1,\qquad n(\nu -n)\varphi _n=\sum _{j=0}^{n-1}U_{n-j}\varphi _j.$$ Two remarks are in order at this point. First, the class of equations described by (4.4) is closely related to the class of self-adjoint second-order equations. Indeed, conjugation by a gauge factor $`z^{\nu /2}`$ transforms (4.4) into self-adjoint form with potential $`U(z)`$ and energy $`\nu ^2/4`$. The solutions of the self-adjoint form are formal series multiplied by $`z^{\nu /2}`$, so nothing is lost by working with the “nearly” self-adjoint form (4.4). Second, there is no loss of generality in restricting our focus to the self-adjoint equations.
Every second-order linear equation can be gauge-transformed into self-adjoint form, and as we saw above, spectral residue sequences are invariant with respect to gauge transformations. Indeed, as we shall demonstrate shortly, the potential $`U(z)`$ is uniquely determined by its corresponding residue sequence.

###### Proposition 4.1. The spectral residues corresponding to (4.4) are $$\rho _n=\frac{1}{n}\sum _{|𝐩|=n}\frac{U_𝐩}{s_𝐩s_{𝐩^{\prime }}},$$ where as before, for $`𝐩=(p_1,\ldots ,p_l)`$, we write $`U_𝐩`$ for $`\prod _iU_{p_i}`$, and write $`𝐩^{\prime }`$ for the reversed composition $`(p_l,p_{l-1},\ldots ,p_1)`$.

Since $`\rho _n=U_n/n`$ plus a polynomial of $`U_1,\ldots ,U_{n-1}`$, it is evident that the spectral residue sequence completely determines the potential $`U(z)`$. An explicit formula for the inverse relation is given in . Interesting composition sum identities will appear in the present context when we consider exactly-solvable differential equations. We present three such examples below, and discuss the enumerative interpretations in the next section. In each case the exact solvability comes about because the equation is gauge-equivalent to either the hypergeometric, or the confluent hypergeometric equation. Let us also remark — see for the details — that these equations occupy an important place within the canon of classical quantum mechanics, where they correspond to various well-known exactly solvable one-dimensional models.

###### Proposition 4.2. $$\underset{\stackrel{𝐩=(p_1,\ldots ,p_l)}{|𝐩|=n}}{\sum }\frac{(n-1)!}{s_𝐩}\frac{(n-1)!}{s_{𝐩^{\prime }}}\left(\prod _ip_i\right)t^l=\prod _{j=1}^{n}\left\{t+j(j-1)\right\}$$

###### Proof. By Proposition 4.1, the left hand side of the above identity is $`n!(n-1)!`$ times the $`n^{\text{th}}`$ spectral residue corresponding to the potential $$U(z)=\frac{tz}{(z-1)^2}=t\sum _kkz^k.$$ Setting $$t=\alpha (1-\alpha )$$ and making a change of gauge $$\varphi (z)=(1-z)^\alpha \psi (z)$$ transforms (4.4) into $$z^2\psi ^{\prime \prime }(z)+(1-\nu )z\psi ^{\prime }(z)-\frac{z}{1-z}\left\{2\alpha z\psi ^{\prime }(z)+\alpha (\alpha -\nu )\psi (z)\right\}=0.$$ Multiplying through by $`(1-z)/z`$ and setting $$\gamma =1-\nu ,\qquad \beta =\alpha -\nu ,$$ we recover the usual hypergeometric equation $$z(1-z)\psi ^{\prime \prime }(z)+\left\{\gamma -(1+\alpha +\beta )z\right\}\psi ^{\prime }(z)-\alpha \beta \psi (z)=0.$$ It follows that $$\psi _n=\frac{(\alpha )_n(\alpha -\nu )_n}{n!(1-\nu )_n},$$ and hence the $`n^{\text{th}}`$ spectral residue is given by $$\rho _n=(-1)^n\frac{\prod _{j=1}^n(\alpha -j)(\alpha +j-1)}{n!(n-1)!},$$ or equivalently by $$\rho _n=\frac{\prod _{j=1}^n(t+j(j-1))}{n!(n-1)!}.$$ The asserted identity now follows from the fundamental invariance property of spectral residues. ∎
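Identities of this kind are easy to check by machine. The following sketch (Python, exact rational arithmetic; the tested range $`n\le 7`$ is an arbitrary choice) compares the coefficients of both sides of Proposition 4.2:

```python
from fractions import Fraction
from math import factorial

def compositions(n):
    if n == 0:
        yield ()
    else:
        for k in range(1, n + 1):
            for rest in compositions(n - k):
                yield (k,) + rest

def s_p(p):
    """Product of the left partial sums s_1 ... s_{l-1}."""
    out, run = 1, 0
    for part in p[:-1]:
        run += part
        out *= run
    return out

def lhs(n):
    """Coefficients (in powers of t) of the composition sum in Prop. 4.2."""
    coeffs = [Fraction(0)] * (n + 1)
    for p in compositions(n):
        term = Fraction(factorial(n - 1), s_p(p)) \
             * Fraction(factorial(n - 1), s_p(p[::-1]))
        for part in p:
            term *= part
        coeffs[len(p)] += term
    return coeffs

def rhs(n):
    """Coefficients of prod_{j=1}^{n} (t + j(j-1))."""
    poly = [Fraction(1)]
    for j in range(1, n + 1):
        c = j * (j - 1)
        out = [Fraction(0)] * (len(poly) + 1)
        for k, a in enumerate(poly):
            out[k] += c * a      # constant part of (t + c)
            out[k + 1] += a      # t part
        poly = out
    return poly

for n in range(1, 8):
    assert lhs(n) == rhs(n)
print("Proposition 4.2 verified for n = 1..7")
```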
###### Proposition 4.3. $$\underset{\stackrel{𝐩=(p_1,\ldots ,p_l)}{\stackrel{p_i\in \{1,2\}}{|𝐩|=n}}}{\sum }\frac{(n-1)!}{s_𝐩}\frac{(n-1)!}{s_{𝐩^{\prime }}}t^{n-l}=\prod _k(1+k^2t),$$ where the right hand index $`k`$ varies over all positive integers $`n-1,n-3,n-5,\ldots `$

###### Proof. As in the preceding proof, Proposition 4.1 shows that the left hand side of the present identity is $`n!(n-1)!`$ times the $`n^{\text{th}}`$ spectral residue corresponding to the potential $$U(z)=z+tz^2.$$ Setting $$t=-\omega ^2,$$ and making a change of gauge $$\varphi (z)=\mathrm{exp}(\omega z)\psi (z)$$ transforms (4.4) into $$z^2\psi ^{\prime \prime }(z)+(1-\nu )z\psi ^{\prime }(z)+2\omega z^2\psi ^{\prime }(z)+z(\omega (1-\nu )+1)\psi (z)=0.$$ Dividing through by $`z`$ and setting $$\gamma =1-\nu ,\qquad 1=\omega (2\alpha +\nu -1),$$ we obtain the following scaled variation of the confluent hypergeometric equation: $$z\psi ^{\prime \prime }(z)+(\gamma +2\omega z)\psi ^{\prime }(z)+2\omega \alpha \psi (z)=0.$$ It follows that $$\psi _n=\frac{(-2\omega )^n(\alpha )_n}{n!(\gamma )_n},$$ and hence that $$\rho _n=\frac{\prod _{k=0}^{n-1}(1+\omega (2k+1-n))}{n!(n-1)!}=\frac{\prod _{k=0}^{\lfloor (n-1)/2\rfloor }(1+t(n-1-2k)^2)}{n!(n-1)!}.$$ The asserted identity now follows from the fundamental invariance property of spectral residues. ∎

###### Proposition 4.4. $$\underset{\stackrel{𝐩=(p_1,\ldots ,p_l)}{\stackrel{p_i\text{ odd}}{|𝐩|=n}}}{\sum }\frac{(n-1)!}{s_𝐩}\frac{(n-1)!}{s_{𝐩^{\prime }}}\left(\prod _ip_i\right)t^{\frac{n-l}{2}}=\prod _k\left\{1+(k^4-k^2)t\right\},$$ where the right hand index $`k`$ ranges over all positive integers $`n-1,n-3,n-5,\ldots `$

###### Proof. By Proposition 4.1, the left hand side of the present identity is $`n!(n-1)!t^{n/2}`$ times the $`n^{\text{th}}`$ spectral residue corresponding to the potential $$U(z)=\frac{1}{2\sqrt{t}}\left(\frac{z}{(1-z)^2}+\frac{z}{(1+z)^2}\right)=\frac{1}{\sqrt{t}}\underset{k\text{ odd}}{\sum }kz^k.$$ The rest of the proof is similar to, but somewhat more involved than, the proofs of the preceding two Propositions. Suffice it to say that with the above potential, equation (4.4) can be integrated by means of a hypergeometric function. This fact, in turn, serves to establish the identity in question. The details of this argument are to be found in . ∎

## 5. Distribution of coordinate values in a discrete simplex

In this section we consider enumerative interpretations of the composition sum identities derived in Propositions 4.2, 4.3, 4.4. Let us begin with some general remarks about compositions. There is a natural bijective correspondence between the set of compositions of $`n`$ and the powerset of $`\{1,\ldots ,n-1\}`$. The correspondence works by mapping a composition $`𝐩=(p_1,\ldots ,p_l)`$ to the set of left partial sums $`\{s_1,\ldots ,s_{l-1}\}`$, henceforth to be denoted by $`L_𝐩`$. It may be useful to visualize this correspondence in terms of a “walk” from $`0`$ to $`n`$: the composition specifies a sequence of displacements, and $`L_𝐩`$ is the set of points visited along the way. One final item of terminology: we will call two compositions $`𝐩`$, $`𝐪`$ of $`n`$ complementary whenever $`L_𝐩`$ and $`L_𝐪`$ disjointly partition $`\{1,\ldots ,n-1\}`$. Now let us turn to the proof of Theorem 1.1.
As was mentioned in the introduction, this Theorem is equivalent to the assertion that the coefficients of $$Y(t)=\prod _{j=1}^{n-1}(1+j(n-j)t)$$ are greater than the corresponding coefficients of $$X(t)=\prod _{j=2}^{n}\left(1+\frac{j(j-1)}{2}t\right).$$ Rewriting the former function as a composition sum we have $$Y(t)=\underset{\stackrel{𝐩=(p_1,\ldots ,p_l)}{|𝐩|=n}}{\sum }s_𝐩s_{𝐩^{\prime }}t^{l-1},$$ or equivalently $$Y(t)=\underset{\stackrel{𝐩=(p_1,\ldots ,p_l)}{|𝐩|=n}}{\sum }\frac{(n-1)!}{s_𝐩}\frac{(n-1)!}{s_{𝐩^{\prime }}}t^{n-l}.$$ On the other hand, Proposition 4.2 allows us to write $$X(t)=\underset{\stackrel{𝐩=(p_1,\ldots ,p_l)}{|𝐩|=n}}{\sum }\frac{(n-1)!}{s_𝐩}\frac{(n-1)!}{s_{𝐩^{\prime }}}\left(\prod _i\frac{p_i}{2^{p_i-1}}\right)t^{n-l}.$$ It now becomes a straightforward matter to compare the coefficients of $`Y(t)`$ to those of $`X(t)`$. Indeed the desired conclusion follows from the rather obvious inequality: $$k\le 2^{k-1},\qquad k=1,2,3,\ldots ,$$ the inequality being strict for $`k\ge 3`$. Let us now turn to an enumerative interpretation of the composition sum identity featured in Proposition 4.3. In order to state the upcoming result we need to define two notions of sparseness for subsets of $`S^3(n)`$. Let us call a multiset $`M`$ of integers sparse if $`M`$ does not contain duplicates, and if $$|a-b|\ge 2$$ for all distinct $`a,b\in M`$. Let us also say that a multiset $`M`$ is $`2`$-sparse if $`M`$ does not contain duplicates, and if there do not exist distinct $`a,b\in M`$ such that $$\lfloor a/2\rfloor =\lfloor b/2\rfloor .$$ It isn’t hard to see that sparseness is a more restrictive notion than $`2`$-sparseness, i.e. if $`M`$ is sparse, then it is necessarily $`2`$-sparse, but not the other way around. For example, the set $$\{1,3,4,7\}$$ is not sparse, but it is $`2`$-sparse. We require one other item of notation. For $`A\subseteq S^3(n)`$ we let $`\pi _x(A)`$ denote the multiset of $`x`$-coordinates of points in $`A`$, and let $`\pi _y(A)`$ denote the multiset of $`y`$-coordinates. We are now ready to state

###### Theorem 5.1. For every cardinality $`l`$ between $`2`$ and $`n-1`$, there are more $`l`$-element subsets $`A`$ of $`S^3(n)`$ such that $`\pi _y(A)`$ is sparse, than there are $`l`$-element subsets $`A`$ such that $`\pi _x(A)`$ is sparse. Indeed, the number of $`l`$-element subsets $`A`$ of $`S^3(n)`$ such that $`\pi _y(A)`$ is sparse is equal to the number of $`l`$-element subsets $`A`$ of $`S^3(n)`$ such that $`\pi _x(A)`$ is merely $`2`$-sparse.

###### Proof. Let $`𝐩`$ be a composition of $`n`$. Let us begin by noting that the corresponding $`L_𝐩`$ is sparse if and only if the complementary composition consists of $`1`$’s and $`2`$’s only. It therefore follows that the enumerating function for $`A\subseteq S^3(n)`$ such that $`\pi _y(A)`$ is sparse is $$\underset{\stackrel{𝐩=(p_1,\ldots ,p_l)}{\stackrel{p_i\in \{1,2\}}{|𝐩|=n}}}{\sum }\frac{(n-1)!}{s_𝐩}\frac{(n-1)!}{s_{𝐩^{\prime }}}t^{n-l}.$$ On the other hand the number of $`(x,y,z)\in S^3(n)`$ such that $`x\in \{2k,2k+1\}`$ for any given $`k`$ is precisely $$\left(\genfrac{}{}{0pt}{}{n-2k}{2}\right)+\left(\genfrac{}{}{0pt}{}{n-2k-1}{2}\right)=(n-2k-1)^2.$$ Hence the enumerating function for $`A\subseteq S^3(n)`$ such that $`\pi _x(A)`$ is $`2`$-sparse is $$\prod _{k=0}^{\lfloor (n-1)/2\rfloor }\left(1+(n-2k-1)^2t\right).$$ The two enumerating functions are equal by Proposition 4.3. ∎

Finally, let us consider an enumerative interpretation of the composition sum identity featured in Proposition 4.4. The setting for this result will be $`S^5(n)`$, the discrete simplex of bounded, ordered $`5`$-tuples $`(x_1,x_2,x_3,x_4,x_5)`$.
For $`A\subseteq S^5(n)`$ we will use $`\pi _i(A),i=1,\ldots ,5`$ to denote the corresponding multiset of $`x_i`$ coordinate values.

###### Theorem 5.2. For every cardinality $`l`$ between $`2`$ and $`n-3`$, there are more $`l`$-element subsets $`A`$ of $`S^5(n)`$ such that $`\pi _3(A)`$ is sparse, than there are $`l`$-element subsets $`A`$ such that $`\pi _1(A)`$ is $`2`$-sparse.

###### Proof. Let us note that the number of points in $`S^5(n)`$ such that $`x_3=j+1`$ is given by $$\left(\genfrac{}{}{0pt}{}{j+1}{2}\right)\left(\genfrac{}{}{0pt}{}{n-j-1}{2}\right).$$ Hence, the enumerating function for the first class of subsets is given by $$X_3(t)=\underset{\stackrel{𝐩=(p_1,\ldots ,p_l)}{\stackrel{p_i\in \{1,2\}}{|𝐩|=n-2}}}{\sum }\left\{\underset{j\notin L_𝐩}{\prod }\frac{j(j+1)(n-j-1)(n-j-2)}{4}\right\}t^{n-2-l}.$$ Now there is a natural bijection between the set of compositions of $`n-2`$ by $`\{1,2\}`$ and the set of compositions of $`n-1`$ by odd numbers. The bijection works by prepending a $`1`$ to a composition of the former type, and then performing substitutions of the form $$(\ldots ,k,2,\ldots )\to (\ldots ,k+2,\ldots ),\qquad k\text{ odd}.$$ Consequently, we can write $$X_3(t)=\underset{\stackrel{𝐩=(p_1,\ldots ,p_l)}{\stackrel{p_i\text{ odd}}{|𝐩|=n-1}}}{\sum }\frac{n!}{s_𝐩}\frac{n!}{s_{𝐩^{\prime }}}\left(\frac{t}{4}\right)^{(n-1-l)/2}.$$ (5.5) Turning to the other class of subsets, the number of points $`(x_1,\ldots ,x_5)`$ that satisfy $$x_1\in \{2j,2j+1\}$$ is given by $$\left(\genfrac{}{}{0pt}{}{n-2j}{4}\right)+\left(\genfrac{}{}{0pt}{}{n-2j-1}{4}\right)=\frac{(n-2j-2)^4-(n-2j-2)^2}{12}.$$ Consequently the enumerating function for subsets $`A`$ such that $`\pi _1(A)`$ is $`2`$-sparse is given by $$X_1(t)=\prod _k\left(1+(k^4-k^2)\frac{t}{12}\right),$$ where $`k`$ ranges over all positive integers $`n-2,n-4,\ldots `$ Next, using the identity in Proposition 4.4 we have $$X_1(t)=\underset{\stackrel{𝐩=(p_1,\ldots ,p_l)}{\stackrel{p_i\text{ odd}}{|𝐩|=n-1}}}{\sum }\frac{(n-1)!}{s_𝐩}\frac{(n-1)!}{s_{𝐩^{\prime }}}\left(\prod _i\frac{p_i}{3^{(p_i-1)/2}}\right)\left(\frac{t}{4}\right)^{(n-1-l)/2}.$$ Using (5.5) it now becomes a straightforward matter to compare $`X_1(t)`$ to $`X_3(t)`$. Indeed, the desired conclusion follows from the following evident inequality: $$k\le 3^{(k-1)/2},\qquad k=1,3,5,\ldots ,$$ the inequality being strict for $`k\ge 5`$. ∎

## 6. Conclusion

The above discussion centers around two major themes: spectral residues, and the distribution of coordinate values in a simplex of bounded, ordered integer tuples. In the first case, we have demonstrated that the method of spectral residues leads to composition sum identities with interesting interpretations. We have considered here parameterized recursive relations corresponding to first and second-order linear differential equations in one independent variable. The next step in this line of inquiry would be to consider other classes of parameterized recursive relations — perhaps non-linear, perhaps corresponding to partial differential equations — in the hope that new and useful composition sum identities would follow. In the second case, we have uncovered an interesting geometrical property of the order simplex. Theorems 1.1, 5.1, 5.2 support the conclusion that the middle dimensions of an order simplex are more “ample” than the outer dimensions. However the 3 results we have been able to establish all depend on very specific identities, and do not provide a general tool for the investigation of this phenomenon. To put it another way, our results suggest the following

###### Conjecture 6.1.
Let $`N`$ be a natural number greater than $`2`$ and $`d`$ a natural number strictly less than $`N/2-1`$. Let $`n\ge N`$ be another natural number. For every sufficiently small cardinality $`l`$, there are more $`l`$-element subsets of $`S^N(n)`$ with distinct $`x_{d+1}`$ coordinates, than there are $`l`$-element subsets with distinct $`x_d`$ coordinates. It would also be interesting to see whether this conjecture holds if we consider subsets of points with sparse, rather than distinct, sets of coordinate values. Finally, Theorem 5.1 deserves closer scrutiny, because it describes a bijection of sets, rather than a mere comparison. It is tempting to conjecture that this bijection has an enumerative explanation based on some combinatorial algorithm.
no-problem/0003/cond-mat0003287.html
ar5iv
text
# On the Energy Minima of the SK Model

## 1 Introduction.

The local energy minima properties are analyzed in several works on glass-forming liquids, and they allow a better understanding of the behavior of these systems. The importance of the potential energy landscape in the physics of super-cooled liquids was already pointed out by Goldstein . More recently Stillinger and Weber formalized the idea that the multidimensional energy surface can be partitioned into a large number of local minima, so-called Inherent Structures (IS), each one surrounded by its attraction basin. It is now clear that the low-temperature dynamics (i.e. for temperatures below the Mode Coupling critical temperature $`T_{MCT}`$ ) can be subdivided into intra-basin motion and crossing of energy barriers by activated processes, taking place on a significantly longer time-scale. The system at equilibrium below $`T_{MCT}`$ is ‘almost always’ trapped in one of the basins accessible at this temperature. The huge number $`𝒩\sim \mathrm{exp}(N\mathrm{\Sigma })`$ of these ‘valleys’, exponentially diverging with the system size $`N`$, suggested the scenario of an underlying thermodynamic transition due to an ‘entropy crisis’ at the Kauzmann temperature $`T_K<T_{MCT}`$ where the configurational entropy $`\mathrm{\Sigma }`$ goes to zero , which was supported by recent analytical work \- . By looking at IS one can evaluate $`\mathrm{\Sigma }`$ numerically -. Moreover, the IS energy turns out to be an interesting quantity for studying both the static and the dynamical behavior -. Differences between fragile and strong glasses were recently proposed to be explainable within an energy landscape description, too . The outlined picture of glass-forming liquids is reminiscent of that characterizing generalized mean field spin glass models like those involving $`p`$-spin interactions , which display a dynamical ergodicity breaking at the temperature $`T_D\simeq T_{MCT}`$ (in this case the barriers between basins are infinite in the thermodynamic limit also for $`T_K<T<T_{MCT}`$ because of the mean field approximation), and a thermodynamic entropy driven transition at a lower temperature $`T_K`$, corresponding to a one step replica symmetry breaking (1RSB) scenario. For $`T<T_K`$ one finds a non trivial probability distribution of the overlap between states $`P(q)=m\delta (q)+(1-m)\delta (q-q_{EA})`$. Here $`q_{EA}`$ is the self-overlap of a state with itself, whereas different states are orthogonal and have mutual overlap zero. Several years ago, Kirkpatrick, Thirumalai and Wolynes suggested that 1RSB spin glass models could be a paradigm of vitreous systems. The numerical study of out-of-equilibrium dynamics in glass-forming liquids gives intriguing results -. The measurement of $`P(q)`$ among ‘glassy states’ is a subtle task , since one faces both the problem of thermalizing the system down to very low temperatures and that of avoiding possible crystalline minima whose basin of attraction could be non-negligible for small systems. As recently proposed by Bhattacharya, Broderix, Kree and Zippelius , to look at the Inherent Structures is helpful also from this point of view, since it allows a more precise definition of the overlap and makes it easier to distinguish between glassy minima and crystalline or quasi-crystalline configurations.
On the other hand, it is not completely clear *a priori* which kind of behavior one should expect for the $`P_{quen}(q)`$ measured among energy minima obtained by quenching equilibrium configurations instead of among equilibrium configurations themselves. To our knowledge, such a quantity was never previously studied in a spin glass model (which is not surprising, since in this case equilibration is still feasible without difficulties for moderate system sizes). More generally, little is known about the properties of ‘Inherent Structures’ in spin glasses, which (apart from analogies with glass-forming liquids) have their own interest even in the well understood Sherrington Kirkpatrick (SK) mean field model. In a previous work one of the authors studied the energy minima of the SK model obtained starting from random initial configurations (i.e. by quenching from infinite initial temperature), whereas recently Crisanti and Ritort have performed, both for a 1RSB spin glass and for the SK model, a numerical analysis of the static and dynamical properties within the energy landscape description similar to the one proposed in for a glass-forming liquid, by looking in particular at the IS energy and at the configurational entropy. Their results confirm the close similarities between 1RSB spin glass and structural glass energy landscapes and the usefulness of this kind of approach, further suggesting a systematic analysis. In the present work, we perform an extensive numerical study of energy minima properties in the SK model, considering initial equilibrium configurations both in the high-temperature (paramagnetic) region and deep in the glassy phase. We look first of all at the behavior of the appropriately defined $`P_{quen}(q)`$, which is compared with the corresponding (usual) equilibrium one. Then we extend the analysis to the overlap of the IS with the configurations from which they are obtained, which measures the (quite strong) correlations between them, and we study systematically finite size effects on the behavior of the IS energy.

## 2 Model, Observables and Simulations.

The Sherrington Kirkpatrick spin glass model is described by the Hamiltonian $$H_J=-\underset{i<j=1}{\overset{N}{\sum }}J_{ij}\sigma _i\sigma _j,$$ (1) where $`\sigma _i=\pm 1`$ are Ising spins, the sum runs over all pairs of spins and $`J_{ij}`$ are random independent variables with mean value $`\overline{J_{ij}}=0`$ and variance $`1/N`$. We take $`J_{ij}=\pm N^{-1/2}`$. This model is exactly solvable (since interactions have infinite range) and has a glassy phase with full replica symmetry breaking (FRSB). In case of zero magnetic field (the one we consider here), taking into account the symmetry under inversion of the spins, the $`P(q)`$ changes at the critical temperature $`T_C=1`$ from a $`\delta `$-function at $`q=0`$ (characteristic of the paramagnetic phase) to the FRSB two $`\delta `$-functions in $`q=\pm q_{EA}`$ with a non-zero *plateau* joining them. The transition is continuous also in the order parameter (at variance with 1RSB models), i.e. $`\lim _{T\to T_C^{-}}q_{EA}(T)=0`$, and there is no distinction between the dynamical and the static transition , i.e. $`T_D=T_C`$. The SK model is particularly suitable for the kind of study we are interested in, since its behavior is well understood and, on the other hand, by using optimized Monte Carlo methods one is able to thermalize large system sizes down to low temperatures, which allows one to study finite size effects systematically.
We simulated $`N=64`$, 128, 256, 512 and 1024, averaging over 2048, 1024, 512, 384 and 192 different disorder realizations respectively. The program was multi-spin coded on different sites of the system (we store 64 spins in the same word) and we used Parallel Tempering (PT) , running simultaneously two independent sets of copies (replica) of the system for each sample. Up to 50 (for the two largest sizes) different temperatures between $`T_{min}=0.65`$ and $`T_{max}=3`$ were used, and we performed from 100,000 PT steps for the smallest value of $`N=64`$ to 300,000 for the largest value of $`N=1024`$. The PT acceptance for the exchange of nearest neighbor temperatures was never smaller than 0.6. In the second half of the run we computed the specific heat both as the derivative of the energy density with respect to temperature, $`c\equiv de/dT`$, and from fluctuations, $`c\equiv N(\langle e^2\rangle -\langle e\rangle ^2)/T^2`$, looking for compatibility of results. This means comparing one-time and two-time quantities respectively, which is an effective way of checking thermalization, particularly when using PT (note that in this case fluctuations involve different replicas evolving at the same temperature at different times during the run). Nevertheless we also divided the second half of the run into (four) equal intervals, checking that there were no evident differences in the values of the considered observables, $`P(q)`$ and $`P_{quen}(q)`$ in particular. A further confirmation that the system is well thermalized comes from the perfectly symmetric (with respect to the exchange $`q\to -q`$) probability distributions that we obtained. For each disorder realization and for each temperature, in the second half of the run, we saved 1500+1500 equilibrium configurations from the two independent sets of replica, which were subsequently quenched by a zero-temperature dynamics. The observables were computed from these configurations and the corresponding energy minima, errors being evaluated from sample-to-sample fluctuations. One should note that both $`P(q)`$ and $`P_{quen}(q)`$ are strongly not self-averaging in the glassy phase, wherefore it was necessary to average over a large number of samples even for large system sizes. The whole set of simulations would have taken about two years of CPU time on a usual alpha-station, i.e. a few days when using 128 processors simultaneously on a Cray T3E (the code is easily parallelized with efficiency close to 1 by running a different disorder realization on each processor). A subtle point concerns the quenching procedure. We are considering even values of $`N`$, which means that the local field acting on each spin because of the other ones can never be zero. Moreover, it is known , from the analysis of properties of quenched configurations obtained starting from infinite temperature (i.e. random initial configuration), that different zero temperature dynamics give qualitatively identical results. The ‘Greedy algorithm’, where the spin corresponding to the largest energy decrease is flipped at each step, seems to stop more frequently in local high energy minima than the ‘Reluctant algorithm’, where one flips the spin which gives the smallest energy decrease (i.e., the contrary of the previous case). We chose to use an ‘intermediate’ zero-temperature dynamics, which is easy to implement too. At each step, a randomly chosen spin is suggested to flip, and at least $`20N`$ steps are performed after the last successful one before stopping.
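The quench just described is straightforward to implement. Below is a sketch (Python/numpy; function and variable names are ours, and the stopping rule follows the $`20N`$ criterion given above):

```python
import numpy as np

def sk_couplings(N, rng):
    """Symmetric couplings J_ij = +/- N^{-1/2} with zero diagonal, eq. (1)."""
    J = np.triu(rng.choice([-1.0, 1.0], size=(N, N)), 1) / np.sqrt(N)
    return J + J.T

def quench(sigma, J, rng, patience_factor=20):
    """Single-spin-flip T=0 dynamics: a randomly chosen spin is flipped
    whenever the flip lowers the energy; stop after patience_factor*N
    consecutive unsuccessful suggestions."""
    N = len(sigma)
    h = J @ sigma                          # local fields h_i = sum_j J_ij s_j
    since_last_flip = 0
    while since_last_flip < patience_factor * N:
        i = rng.integers(N)
        if 2.0 * sigma[i] * h[i] < 0.0:    # energy change dE = 2 s_i h_i
            sigma[i] = -sigma[i]
            h += 2.0 * sigma[i] * J[:, i]  # update the local fields
            since_last_flip = 0
        else:
            since_last_flip += 1
    return sigma

rng = np.random.default_rng(0)
N = 128
J = sk_couplings(N, rng)
sigma = rng.choice([-1.0, 1.0], size=N)    # e.g. a T = infinity start
minimum = quench(sigma.copy(), J, rng)
e_IS = -0.5 * minimum @ J @ minimum / N    # energy per spin of the IS
```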
The probability that the final configuration is not a local energy minimum (under single spin flip) is therefore $`e^{-20}`$, practically negligible. As a last remark, it should be stressed that such a quenching procedure could possibly give energy minima which are not the ‘nearest’ IS to the starting equilibrium configurations, i.e. which are *less* correlated, but not *more* correlated, than in the analogous glass-forming liquid case, where one generates ISs by following the path of steepest descent. Labeling by $`\{\sigma _i\}`$, $`\{\tau _i\}`$ the spins belonging to two configurations, the overlap is defined as $$𝒬=\frac{1}{N}\underset{i=1}{\overset{N}{\sum }}\sigma _i\tau _i.$$ (2) The spin glass order parameter, i.e. the equilibrium probability distribution of overlap among states $`P(q,T)`$, is usually evaluated numerically as the histogram of the instantaneous overlap $`𝒬`$ between two replica (with the same disorder configurations) which evolve simultaneously and independently at temperature $`T`$: $$P(q,T)=\overline{P_J(q)}=\overline{\langle \delta (q-𝒬)\rangle },$$ (3) where the thermal average $`\langle \cdots \rangle `$ corresponds to average over time in the simulation, whereas $`\overline{(\cdots )}`$ stands for the average over $`J_{ij}`$ realizations. $`P_J(q,T)`$ can be equivalently measured from two given sets of equilibrium configurations belonging to the two replica, by considering the overlap of each configuration of one set with all the configurations of the other. In this work we evaluated $`P_J(q,T)`$ both during the simulation and from the saved configurations, obtaining perfectly compatible results, which confirms that these configurations sample accurately enough the phase space. We define the quenched probability distribution of the overlap as $$P_{quen}(q,T)=\overline{\frac{1}{𝒩_{IS}^2}\underset{i_a,i_b=1}{\overset{𝒩_{IS}}{\sum }}\delta (q-𝒬_{IS})},$$ (4) where the sum runs over the $`𝒩_{IS}=1500`$ energy minima obtained starting from the equilibrium configurations at temperature $`T`$ for each of the two replica sets. This definition, analogous to the one introduced in for a Lennard-Jones glass-forming binary mixture, implies that we are weighting each IS with the Boltzmann factor of the corresponding basin at temperature $`T`$, as usual in numerical studies on super-cooled liquids.

## 3 Results.

### 3.1 The behavior of $`P_{quen}(q)`$.

We present in \[Fig. 1\] data for the equilibrium overlap distribution $`P(q)`$ (on the left) and for the quenched overlap distribution $`P_{quen}(q)`$ (on the right) at a temperature very close to the critical temperature, $`T=1.05=1.05T_C`$, but still in the paramagnetic phase, for different system sizes. It was shown in that the $`P_{quen}(q)`$ obtained from infinite temperature configurations becomes more and more concentrated in $`q=0`$ for increasing $`N`$ and it goes to a $`\delta `$-function in the thermodynamic limit. We find the same qualitative behavior in the whole high-temperature region $`T>T_C=1`$. It is crystal clear from \[Fig. 1\] that there is no evidence for replica symmetry breaking in the probability distribution of overlap between IS reachable from equilibrium configurations at $`T\stackrel{>}{\sim }T_C`$ and weighted with the Boltzmann factor of the corresponding basin at this temperature. In other words, we know that for $`T<T_C`$ there is RSB, but we cannot detect it by looking at the probability distribution of the overlap obtained with a fast quench starting from $`T\stackrel{>}{\sim }T_C`$.
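For completeness, here is a compact sketch of how eqs. (2)-(4) translate into code for a single disorder sample (Python/numpy; the names and the binning are our own choices). The disorder average of eq. (4) is then obtained by averaging these single-sample histograms over coupling realizations:

```python
import numpy as np

def quenched_overlap_pdf(A, B, bins=201):
    """Single-sample P_quen(q): A and B are (n_IS, N) arrays of +/-1 quenched
    configurations from the two independent replica sets."""
    N = A.shape[1]
    q = (A @ B.T).ravel() / N          # all mutual overlaps, eq. (2)
    pdf, edges = np.histogram(q, bins=bins, range=(-1.0, 1.0), density=True)
    return pdf, 0.5 * (edges[1:] + edges[:-1])   # PDF and bin centres
```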
This result implies that also in the case of a glass-forming liquid the analogously defined $`P_{quen}(q,T)`$ will be trivial when quenching from the paramagnetic (liquid) phase, even when quenching down to a $`T`$ value where RSB occurs (if it does). This is in agreement with the behavior reported in for a Lennard-Jones binary mixture starting from $`T\stackrel{>}{\sim }T_{MCT}`$. In glass-forming liquids the overlap among ISs is easier to define than the equilibrium overlap: its use frees one from the careful consideration of possible crystalline or quasi-crystalline configurations. Nevertheless, the quantity $`P_{quen}(q,T)`$ does not provide any evidence for the existence or absence of RSB if one avoids the hard task of thermalizing the super-cooled liquid down to low temperatures. One should also note that in 1RSB models both $`P(q,T)`$ and $`P_{quen}(q,T)`$ are expected to be trivial also in the region $`T_{MCT}>T>T_K`$ (apart from finite size effects), just because of the very large number $`e^{N\mathrm{\Sigma }}`$ of ‘valleys’ (and corresponding ISs), which are almost all orthogonal , with zero overlap in the thermodynamic limit. After clarifying this point, we note that a more careful analysis of the data shown in \[Fig. 1 (right)\] suggests the presence of a weak RSB, as already observed in $`P_{quen}(q,T=\infty )`$ . In the whole paramagnetic phase, the ‘quenched’ probability distribution of the overlap approaches its delta-function limit $`\delta (q)`$ for $`N\to \infty `$ much more slowly than the equilibrium one. To quantify the $`N`$-dependence, we introduce the function
$$f(q)\equiv -\underset{N\to \infty }{lim}\frac{1}{N}\mathrm{ln}\left[P_{N,quen}(q)\right].$$ (5)
The replica symmetry is broken in a weak sense if $`f(q)`$ is zero in an extended region $`I`$ even though $`P_{quen}(q)`$ is a $`\delta `$-function in the thermodynamic limit. This implies that $`P_{N,quen}(q)`$ goes to zero more slowly than exponentially in this region and that therefore, by adding to the Hamiltonian a quantity of order $`N`$ (for instance by using appropriate boundary conditions), one could obtain any given value of the ‘quenched’ overlap $`q\in I`$. Our best numerical evidence for such a behavior comes from the study of
$$r_N(q,T)\equiv -\frac{1}{N^\nu }\mathrm{ln}\left(\int _q^1dq^{\prime }P_{N,quen}(q^{\prime },T)\right).$$ (6)
We get a practically $`T`$-independent estimate of the exponent for $`T>T_C`$: $`\nu =0.68\pm 0.05`$. This value is compatible with the $`\nu \simeq 2/3`$ obtained from $`T=\infty `$ data in . Though one observes some deviations at large $`q`$, the weak $`N`$-dependence of $`r_N`$ shown in \[Fig. 2\] and the fact that $`\nu `$ is significantly smaller than 1 strongly suggest that $`f(q)`$ is zero on a finite interval $`I`$, possibly $`I=[0,1]`$, i.e. $`P_{quen}(q)`$ displays a weak breaking of replica symmetry. On the other hand, at temperatures lower than $`T_C`$, $`P_{quen}(q,T)`$ clearly shows the characteristic behavior corresponding to a full replica symmetry breaking. This is what one would expect, since RSB was already evident in the $`P(q)`$ of the configurations we were starting from. The qualitative similarities between $`P(q)`$ and $`P_{quen}(q)`$ are remarkable (see \[Fig. 3\]). They are present for each disorder realization: for instance, the number of peaks found in a one-sample equilibrium $`P_J(q,T)`$ at a given $`T`$ is preserved in the corresponding $`P_{J,quen}(q,T)`$ too.
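A possible numerical implementation of Eq. (6) — again only a sketch with illustrative names — obtains the tail integral from the normalized histogram via a reversed cumulative sum:

```python
import numpy as np

def r_N(hist, edges, N, nu=0.68):
    """r_N(q) = -ln( int_q^1 P_{N,quen}(q') dq' ) / N**nu, cf. Eq. (6);
    hist, edges form a density-normalized histogram of the quenched
    overlaps.  Values are returned on the left bin edges."""
    widths = np.diff(edges)
    tail = np.cumsum((hist * widths)[::-1])[::-1]  # integral from edges[k] to 1
    with np.errstate(divide="ignore"):
        return edges[:-1], -np.log(tail) / N**nu
```

A near-collapse of these curves for different $`N`$ (with $`\nu `$ well below 1) is the signature of $`f(q)=0`$ discussed above.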
FRSB features are even more evident when looking at $`P_{quen}(q)`$: in particular, the presence of a continuous plateau between the two self-overlap peaks is clearer in \[Fig. 3\] on the right than in \[Fig. 3\] on the left, since the ‘quenched’ self-overlap takes larger values, though it goes to one only for $`T\to 0`$. Finite size effects play a similar role in the equilibrium and quenched cases: for instance, it is clear from \[Fig. 3\] that the ‘quenched’ zero-overlap probability $`P_{quen}(0,T)`$ also does not depend on $`N`$ in the glassy phase, this being a well known test when looking for FRSB . To further investigate this point, we plot in \[Fig. 4\] the ratio of cumulants,
$$B(T)=\frac{1}{2}\left(3-\frac{\overline{\langle q^4\rangle }}{\overline{\langle q^2\rangle }^2}\right),$$ (7)
as a function of $`T`$. We recall that this is the usual quantity one calculates in order to locate the critical temperature, since finite size scaling predicts that the curves for different sizes intersect at $`T_C`$: this is the behavior observed in \[Fig. 4 (left)\] (apart from corrections to scaling for the smaller system sizes). Surprisingly enough, we find that the same kind of finite size analysis can be performed on the cumulants obtained from the probability distribution of overlaps among ISs \[Fig. 4 (right)\]. Though corrections to scaling are slightly more important in this case, we see from \[Fig. 5\], where the intersection points $`T_C(N_1,N_2)`$ for the different pairs $`N_1`$, $`N_2`$ of considered system sizes are plotted as a function of $`N_1+N_2`$, that one gets the correct $`T_C=1`$ for $`N_1,N_2\to \infty `$. We conclude that $`P_{quen}(q)`$ is an interesting quantity to study. The results we have shown could be particularly relevant when looking at glass-forming liquids, but the observed behavior suggests that this quantity could also help in further clarifying the glassy phase properties of finite-dimensional realistic spin glasses (this is a very long-standing subject, see for instance ).
### 3.2 Correlations between equilibrium configurations and IS
After noting the similarities between $`P(q)`$ and $`P_{quen}(q)`$, one expects the presence of strong correlations between equilibrium configurations and the corresponding ISs, particularly in the low-temperature phase. To quantify them, we measure the probability distribution $`P_{qt}(q,T)`$ of the overlaps $`q_{qt}`$ between each energy minimum and the configuration from which it is obtained. As shown in \[Fig. 6\], we find a Gaussian-shaped distribution which goes toward a $`\delta `$-function in the thermodynamic limit, both in the paramagnetic and in the glassy phase. We observe that below $`T_C`$, at variance with $`P(q)`$ and $`P_{quen}(q)`$, this probability distribution is a self-averaging quantity, which is easily understandable since we are essentially looking at overlaps between configurations related to the same state. It is intriguing to note that there is no clear evidence for the underlying phase transition when looking at this quantity. The mean value $`q_{qt}(T)`$ (see \[Fig. 7\]) increases with decreasing temperature, and $`lim_{T\to 0}q_{qt}(T)=1`$. The $`N`$-dependence appears more pronounced in the high temperature region, where the data are well fitted by the power law
$$q_{N,qt}=q_{\mathrm{\infty },qt}+\frac{C}{N^\alpha }.$$ (8)
We show in \[Fig. 8\]
our estimates for $`q_{\mathrm{\infty },qt}(T)`$ (left), which give in particular $`q_{\mathrm{\infty },qt}(T_C)\simeq 0.4`$, and our best fit using the data for the overlap between random initial conditions and the corresponding minima (right), i.e. $`lim_{T\to \infty }q_{qt}(T)`$. Also in this case we get a non-zero value ($`\simeq 0.1`$) in the thermodynamic limit. Nevertheless, it should be stressed that we are confined to a relatively small range of system sizes, which makes a reliable estimation of the error on $`q_{\mathrm{\infty },qt}`$ very hard. It would be interesting to understand how much these correlations vary when looking at different models. In agreement with our results, in , for the considered volumes, the value $`q_{qt}\simeq 0.4`$ was quoted for the SK model slightly above $`T_C`$, to be compared with the higher value $`q_{qt}\simeq 0.94`$ found for the 1RSB model (ROM) at a temperature higher than the Mode Coupling one, $`T_{MCT}`$; this suggests the presence of stronger correlations between equilibrium configurations and the corresponding ISs in the glass-forming liquid case, and therefore a $`P_{quen}(q,T)`$ with a behavior even closer to the equilibrium one.
### 3.3 The IS energy
In \[Fig. 9\] we show the equilibrium energy $`e(T)`$ as a function of the temperature $`T`$ (it depends only weakly on the system size). In \[Fig. 10 (left)\] we present data on the IS energy $`e_{quen}(T)`$, i.e. the mean energy of the minima accessible from equilibrium configurations at a given temperature, weighted with the Boltzmann factor of the corresponding basin. The behavior of this quantity changes abruptly from a nearly $`T`$-independent value in the high-$`T`$ regime to a decreasing function of $`T`$ in the low-temperature region, where the IS energy goes continuously towards the ground-state value (which is analytically known, $`e_0=-0.7633`$). Correspondingly, the derivative $`de_{quen}(T)/dT`$, which is plotted in \[Fig. 10 (right)\], displays a maximum and takes very small high-$`T`$ values (note the logarithmic scale). Data on the position $`T_{max}(N)`$ of the maximum of $`de_{quen}(T)/dT`$ as a function of $`N`$ are shown in the inset and give evidence for $`lim_{N\to \infty }T_{max}(N)=1=T_C`$. Our finite size analysis therefore confirms that in the $`N\to \infty `$ limit $`e_{quen}(T)`$ takes the constant threshold value $`e_{th}`$ for $`T>T_C`$. The behavior in this region agrees well with the power law
$$e_N=e_{th}+C/N^\alpha ,$$ (9)
giving a constant $`e_{th}=-0.7145\pm 0.004`$ — our best numerical estimate for the threshold energy — down to $`T\simeq 1.1`$, near the critical temperature (and still compatible with this value also at $`T_C`$). We note that this estimate is in perfect agreement with the $`e_{th}\simeq -0.715`$ quoted in , which was obtained by fitting data on ISs reached from random initial conditions by a sequential quenching procedure (this value could depend on the considered zero-temperature dynamics). The exponent $`\alpha `$ increases slightly when going to lower temperatures and varies between $`\alpha =0.34\pm 0.04`$ at $`T=3`$ and $`\alpha =0.43\pm 0.06`$ at $`T=T_C`$. It is interesting to stress that finite size corrections to the asymptotic behavior look very important, as shown by the small $`\alpha `$ value we have found. Correspondingly, for all the considered sizes (up to the quite large volume $`N=1024`$) the IS energy becomes roughly constant only at temperatures definitely higher than $`T_C`$.
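Fits of the form of Eqs. (8) and (9) can be done with any standard least-squares routine; the following sketch uses illustrative names only, with the measured data assumed to be loaded from the simulation output:

```python
import numpy as np
from scipy.optimize import curve_fit

def fss_law(N, a_inf, C, alpha):
    """Common finite-size form of Eqs. (8) and (9): a_inf + C / N**alpha."""
    return a_inf + C / N**alpha

# sizes = np.array([64, 128, 256, 512, 1024])
# values = ...  # e.g. e_quen at fixed T, or q_qt, measured at each size
# popt, pcov = curve_fit(fss_law, sizes, values, p0=(-0.715, 0.5, 0.4))
# a_inf, C, alpha = popt  # thermodynamic-limit value and correction terms
```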
A similar behavior, with strong finite size corrections, is found in for the considered 1RSB model at $`T>T_{MCT}`$ too.
## Conclusions
We have presented numerical results on the properties of energy minima in the SK model. The probability distribution $`P_{quen}(q,T)`$ of the overlap between ISs weighted with the Boltzmann factor of the corresponding basin at temperature $`T`$ turns out to be qualitatively similar to the equilibrium overlap distribution $`P(q,T)`$ at the same temperature. We found a trivially shaped $`P_{quen}(q,T)`$ in the whole paramagnetic phase ($`T>T_C=1`$), whereas the FRSB behavior characteristic of the glassy phase is evident from data on $`P_{quen}(q,T)`$ only when looking at energy minima obtained from equilibrium configurations at temperatures definitely lower than $`T_C`$. A finite size analysis of the Binder parameter for $`P_{quen}(q,T)`$ gives the same estimate of the critical temperature as the usual (equilibrium) one. These results can be particularly relevant for glass-forming liquids, where the overlap between ISs is more precisely definable , but they also imply that numerical evidence for replica symmetry breaking cannot be extracted from data on $`P_{quen}(q,T)`$ obtained from configurations equilibrated at temperatures above the possible glass transition temperature (in that case $`T_K<T_{MCT}`$). The analysis of finite size corrections to the IS energy confirms that it approaches the expected thermodynamic-limit behavior $`e_{IS}=const=e_{th}`$ for $`T>T_C`$, whereas it appears to decrease continuously for $`T<T_C`$.
## Acknowledgments
B.C. acknowledges Uwe Müssel and Holger Wahlen for useful suggestions, and would like to thank the John-von-Neumann Institut für Computing (Forschungszentrum Jülich), where this work was partially developed. Simulations were run on the Forschungszentrum Jülich Cray T3E.
# Relativistic Approach to Isoscalar Giant Resonances in 208Pb
## Abstract
We calculate the longitudinal response of $`{}^{208}`$Pb using a relativistic random-phase approximation to three different parameterizations of the Walecka model with scalar self-interactions. From a nonspectral calculation of the response—which automatically includes the mixing between positive- and negative-energy states—we extract the distribution of strength for the isoscalar monopole, dipole, and high-energy octupole resonances. We employ a consistent formalism that uses the same interaction in the calculation of the ground state as in the calculation of the response. As a result, the conservation of the vector current is strictly maintained throughout the calculation. Further, at small momentum transfers the spurious dipole strength—associated with the uniform translation of the center-of-mass—gets shifted to zero excitation energy and is cleanly separated from the sole remaining physical fragment, located at an excitation energy of about 24 MeV; no additional dipole strength is observed. The best description of the collective modes is obtained using a “soft” parameterization having a compression modulus of $`K=224`$ MeV. preprint: Submitted to Physical Review C
Almost forty years ago Thouless wrote a seminal paper on vibrational states in nuclei in the random-phase approximation . There he showed how spurious states—such as those associated with a uniform translation of the center-of-mass—separate out cleanly from the physical modes by having their strength shifted to zero excitation energy. Thirty years later Dawson and Furnstahl generalized Thouless’ result to the relativistic domain, placing particular emphasis on the role of consistency . They showed how a fully self-consistent approach guarantees the conservation of the vector current as well as the decoupling of the spurious component of the isoscalar dipole ($`J^\pi =1^{-};T=0`$) mode from the physical spectrum. These results emerged after a careful treatment of the negative-energy states. Indeed, neglecting their contribution resulted in a violation of vector-current conservation as well as in the appearance of substantial spurious strength in the response. These fundamental results emphasize that the Dirac single-particle basis is complete only when positive- and negative-energy states are included. Relativistic models of nuclear structure have evolved considerably since they were first introduced by Walecka and later extended by Serot . Although the qualitative success of these models relies almost exclusively on the dynamics generated by the scalar ($`\sigma `$) and vector ($`\omega `$) mesons, several improvements have been introduced in order to enhance their quantitative standing . Chief among them is the incorporation of scalar self-interactions, which introduce important non-linearities into the equations of motion. Perhaps the greatest impact of these non-linear terms has been seen in the compression modulus of nuclear matter. In a linear model the compression modulus is predicted to be unreasonably large, at $`K=547`$ MeV. Yet this value can be reduced to $`K=224`$ MeV by the mere inclusion of non-linear terms. We will show here how this “soft” parameterization yields excitation energies for various compressional modes in fair agreement with experiment.
While scalar self-interactions are now routinely incorporated into most relativistic calculations of the nuclear ground state, their role in the dynamics of the excited states is just being unraveled; most calculations of the response of the mean-field ground state still use the linear model. Applying non-linear models becomes technically more difficult because the scalar-meson propagator no longer has a simple Yukawa form. At present the only calculations of the response that have incorporated non-linear terms are those by Ma and collaborators . One of the main conclusions of their work is that “a large discrepancy remains between theory and experiment in the case of the dipole compression mode”. We now show that if one includes the full momentum dependence of the longitudinal response, a unique physical fragment emerges at low momentum transfer. This fragment—located at an excitation energy of $`E\simeq 24`$ MeV—is identified as the isoscalar giant dipole resonance (ISGDR). We start from a Lagrangian having an isodoublet nucleon field ($`\psi `$) interacting via the exchange of isoscalar sigma ($`\varphi `$) and omega ($`V^\mu `$) mesons, an isovector rho ($`b^\mu `$) meson, and the photon ($`A^\mu `$). That is, the interacting Lagrangian density becomes $`\mathcal{L}_{\mathrm{int}}`$ $`=`$ $`g_\mathrm{s}\overline{\psi }\psi \varphi -g_\mathrm{v}\overline{\psi }\gamma ^\mu \psi V_\mu -{\displaystyle \frac{1}{2}}g_\rho \overline{\psi }\gamma ^\mu \tau _a\psi b_\mu ^a`$ (1) $`-`$ $`{\displaystyle \frac{1}{2}}(1+\tau _3)e\overline{\psi }\gamma ^\mu \psi A_\mu -U(\varphi ).`$ (2) In addition to the meson-nucleon interactions, the Lagrangian density includes scalar self-interactions of the form
$$U(\varphi )=\frac{1}{3!}\kappa \varphi ^3+\frac{1}{4!}\lambda \varphi ^4.$$ (3)
Our theoretical program in the linear model has been described in great detail in several references . Here we merely highlight the main features of the approach. The longitudinal response of the mean-field ground state is defined by $`S_\mathrm{L}(𝐪,\omega )`$ $`=`$ $`{\displaystyle \sum _n}\left|\langle \mathrm{\Psi }_n|\widehat{\rho }(𝐪)|\mathrm{\Psi }_0\rangle \right|^2\delta (\omega -\omega _n)`$ (4) $`=`$ $`-{\displaystyle \frac{1}{\pi }}\mathrm{Im}\,\mathrm{\Pi }^{00}(𝐪,𝐪;\omega ),`$ (5) where $`\widehat{\rho }(𝐪)`$ is the Fourier transform of the isoscalar vector density, $`\mathrm{\Psi }_0`$ is the exact nuclear ground state, and $`\mathrm{\Psi }_n`$ is an excited state with excitation energy $`\omega _n`$. Note that the response is directly related to the timelike polarization insertion $`\mathrm{\Pi }^{00}`$. To compute the linear response of the ground state of a spherical nucleus—such as $`{}^{208}`$Pb—one starts by calculating ground-state properties in a mean-field approximation. In this mean-field theory (MFT) nucleons interact with the self-consistent field generated by all positive-energy nucleons; vacuum loops are neglected in this approximation. Such a calculation yields single-particle energies and wave functions for the occupied states, as well as the mean-field potential $`\mathrm{\Sigma }_{\mathrm{MF}}(x)`$. It is precisely this mean-field potential that one uses to compute the single-nucleon propagator nonspectrally:
$$\left(\omega \gamma ^0+i𝜸\cdot \mathbf{\nabla }-M-\mathrm{\Sigma }_{\mathrm{MF}}(x)\right)G_\mathrm{F}(𝐱,𝐲;\omega )=\delta (𝐱-𝐲).$$ (6)
There are several advantages in using a nonspectral representation for the nucleon propagator . First, one avoids the artificial cutoffs and truncations that plague the spectral approach .
Second, both the positive- and negative-energy continua are treated exactly. As a result, the contributions from the negative-energy states to the response are included automatically. Finally, a nonspectral evaluation of $`G_\mathrm{F}`$ poses no more challenges, nor requires much more computational effort, than the corresponding calculation of an individual single-particle state. Having determined the occupied bound-state orbitals and the nucleon propagator, the evaluation of the uncorrelated—or single-particle—polarization becomes relatively straightforward . To go beyond the simple single-particle response one must invoke the relativistic random-phase approximation (RPA). The RPA builds long-range coherence among the many particle-hole excitations with the same quantum numbers by iterating the uncorrelated polarization to infinite order . Yet before going any further in the description of the RPA we must stress two issues of paramount importance. The first is consistency, which demands that the residual particle-hole interaction used in the RPA be identical to the interaction used to generate the mean-field ground state. Second, the consistent relativistic response of the mean-field ground state involves, in addition to the familiar particle-hole excitations, the mixing of positive- and negative-energy states. These new configurations are essential for the conservation of the vector current and for the removal of spurious dipole strength from the physical spectrum. Although in the MFT it is consistent to neglect vacuum polarization , the mixing between positive- and negative-energy states remains of utmost importance. The one new ingredient that we wish to add to our formalism is scalar self-interactions. The added complication arises from the fact that the scalar-mediated interaction no longer has a simple Yukawa form. Rather, the scalar propagator now satisfies a Klein-Gordon equation:
$$\left(\omega ^2+\mathbf{\nabla }^2-m_\mathrm{s}^2-U^{\prime \prime }(\varphi )\right)\mathrm{\Delta }(𝐱,𝐲;\omega )=\delta (𝐱-𝐲).$$ (7)
In infinite nuclear matter the scalar self-interactions introduce a trivial modification: the scalar meson now propagates with an effective mass $`m_\mathrm{s}^{*2}=m_\mathrm{s}^2+U^{\prime \prime }(\varphi _0)`$, rather than with its free-space value. In the finite system solving for the scalar propagator becomes technically more difficult, but not more so than solving for the nucleon propagator of Eq. (6). We have computed the scalar propagator in momentum space and have expanded it in terms of spherical harmonics, so that the angular integrals appearing in the RPA equations may be done analytically. A publication containing a more detailed description of our techniques will be forthcoming. The benchmark by which every theoretical calculation of the nuclear response should be measured is the isoscalar giant dipole resonance. This is because the conservation of the vector current and the shift of spurious strength to zero excitation energy can only happen in a consistent calculation of the response. In Fig. 1 we display the distribution of isoscalar dipole strength in $`{}^{208}`$Pb at the small momentum transfer of $`q=46`$ MeV (or $`q=0.23\mathrm{fm}^{-1}`$), using parameter set NLC from Table I. Note that the longitudinal response has been computed with an “artificial” width of 1 MeV. The uncorrelated Hartree response displays a large amount of dipole strength around 8 MeV of excitation energy.
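The folding with such an artificial width can be done as in the following sketch (illustrative only; the paper does not state its folding convention, so taking the 1 MeV as a Lorentzian full width is an assumption here — in a nonspectral calculation the same effect is typically obtained by evaluating the polarization at complex energy $`\omega +i\eta `$):

```python
import numpy as np

def fold_strength(omega_n, strength_n, omega_grid, full_width=1.0):
    """Fold discrete RPA strength into a smooth distribution S_L(omega)
    using Lorentzians of the given full width (in MeV); each Lorentzian
    integrates to its strength."""
    gamma = full_width / 2.0
    s = np.zeros_like(omega_grid)
    for w, b in zip(omega_n, strength_n):
        s += b * (gamma / np.pi) / ((omega_grid - w) ** 2 + gamma**2)
    return s
```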
This strength is concentrated in the “$`1\hslash \omega `$” region, where many particle-hole excitations can be made. Yet most of the strength is spurious, as revealed by the large amount being shifted to zero excitation energy in the RPA response. What remains is an almost imperceptible fragment located at $`E=24.4`$ MeV—and nothing else. The small fragment is displayed more clearly, along with the experimental value (shown as a filled circle), in the inset of the figure. This result is a testimony to the power of consistency. By demanding that the residual particle-hole interaction be identical to the interaction in the ground state, and by properly including the mixing between positive- and negative-energy states, all spurious strength gets cleanly separated from the physical response. A comparison between three different relativistic models—all of them constrained to reproduce bulk properties of nuclear matter at saturation, as well as the root-mean-square charge radius of $`{}^{40}`$Ca—is displayed in Fig. 2. Note that the three models employed here have been defined in Ref. as L2 ($`K=547`$ MeV), NLB ($`K=421`$ MeV), and NLC ($`K=224`$ MeV) \[see also Table I\]. As expected, the energy of the dipole resonance scales with the compressibility of the model. Clearly, models with a large compression modulus—such as L2 and NLB—produce isoscalar dipole strength at values that are too large to be consistent with experiment . These results have also been tabulated in Table II. Because of the heroic efforts by experimentalists in separating the isoscalar dipole mode from the high-energy octupole resonance (HEOR), we also include a comparison between our results and their experimental findings in Table II. Although not necessarily a compressional mode, our results for the HEOR follow trends similar to those observed for the giant dipole resonance. We conclude the presentation of our results by displaying the distribution of strength for the quintessential compressional mode: the isoscalar giant monopole resonance (GMR). First discovered in $`\alpha `$-scattering experiments on $`{}^{208}`$Pb , and recently measured with higher accuracy at an excitation energy of $`E=14.2\pm 0.1`$ MeV , the GMR places important constraints on theoretical models of nuclear matter. Indeed, the first measurement of the GMR—in conjunction with a simple analysis based on the liquid-drop model—suggested a compression modulus of about $`K=200`$ MeV, a value considerably lower than the predictions of density-dependent Skyrme models at the time. Our calculations for the monopole strength in $`{}^{208}`$Pb are displayed along with the experimental value in Fig. 3. We find good agreement with empirical formulas that suggest that the position of the GMR should scale as the square root of the compressibility. Indeed, we compute GMR energies in the ratio 1:1.38:1.53, while the square roots of the nuclear-matter compressibilities are in the ratio 1:1.37:1.56. Moreover, these results help to reinforce our earlier claim that relativistic models of nuclear structure having compression moduli well above $`K\simeq 200`$ MeV will be in conflict with experiment. In summary, we have computed the distribution of strength for the isoscalar monopole, dipole, and high-energy octupole resonances in $`{}^{208}`$Pb using a relativistic random-phase approximation to three different parameterizations of the Walecka model with scalar self-interactions. We placed particular emphasis on the role of consistency.
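The quoted square-root scaling is easy to verify from the tabulated compression moduli (a trivial arithmetic check of ours, not part of the original analysis):

```python
import math

K = {"NLC": 224.0, "NLB": 421.0, "L2": 547.0}  # MeV, Table I
ratios = [math.sqrt(K[m] / K["NLC"]) for m in ("NLC", "NLB", "L2")]
print([f"{r:.2f}" for r in ratios])
# -> ['1.00', '1.37', '1.56'], to be compared with the computed
#    GMR energy ratios 1 : 1.38 : 1.53
```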
That is, we demanded that the residual particle-hole interaction used in the RPA be identical to the interaction used to generate the mean-field ground state. Moreover, we have used a nonspectral approach—one that automatically includes the mixing between positive- and negative-energy states—to compute the longitudinal response. Enforcing these constraints—and little else—was sufficient for separating out the spurious $`J^\pi =1^{-};T=0`$ state. In contrast to recent relativistic calculations —as well as nonrelativistic ones —we see no need for imposing additional constraints to “partially” remove the spurious contamination. These approaches attempt to remove all spurious strength by defining an effective dipole operator of the form $`M_{10}(𝐫)=(r^3-\eta r)Y_{10}(\widehat{𝐫})`$; here $`\eta `$ plays the role of a Lagrange multiplier and is determined to be $`\eta =5\langle r^2\rangle /3`$ from translational invariance. More significantly, such a transition operator neglects the all-important momentum dependence of the excitation. Indeed, it was only at small momentum transfers—just as in the experiment —that we observed a single physical fragment concentrated around 24 MeV of excitation. As the momentum transfer increased, we uncovered additional dipole strength around 8 MeV. Yet this trend, namely a sizable fraction of dipole strength at low energies plus a giant resonance peak, is all that was reported in those recent publications . We do not regard this behavior as “a large discrepancy between theory and experiment”. Rather, we attribute this trend to the simplified—momentum-independent—choice of dipole operator adopted in those calculations. We have also computed the distribution of strength for the giant monopole resonance. As in the case of the ISGDR, we have used the same exact operator—the isoscalar vector density—to compute the monopole component of the longitudinal response. Indeed, monopole, dipole, and octupole strength were all obtained by simply isolating the relevant $`J^\pi `$-channel of the longitudinal response. For the GMR we have found good agreement with a recent relativistic calculation . Good agreement has also been obtained with semi-empirical formulas that suggest that the position of the GMR should scale as the square root of the compressibility. Depending on the relativistic parameterization adopted, monopole strength was found between 13 and 20 MeV of excitation. Lastly, we venture into the neutron-star domain. The non-linear parameterization NLC gives a rather satisfactory description of the various compressional modes. Although by no means perfect, this agreement suggests that the compression modulus of nuclear matter cannot differ too much from the value predicted by this model ($`K=224`$ MeV). When this parameter set is used to compute the equation of state for neutron matter—and is then combined with the Tolman-Oppenheimer-Volkoff equation—one obtains an upper limit for the mass of a neutron star of $`M=2.8M_{\odot }`$. Given the recent compilation of 21 neutron-star masses by Thorsett and Chakrabarty , where they show that the measurements are consistent with a remarkably narrow mass distribution, $`M=(1.35\pm 0.04)M_{\odot }`$, the fascinating possibility that neutron stars harbor novel and exotic states of matter becomes almost a reality. This work was supported in part by the DOE under Contract No. DE-FG05-92ER40750 and by the Florida State University School of Computational Science and Information Technology.
NYU-TH/00/03/05
# A Comment on Brane Bending and Ghosts in Theories with Infinite Extra Dimensions
Gia Dvali, Gregory Gabadadze, and Massimo Porrati Department of Physics, New York University, New York, NY 10003 Abstract Theories with infinite volume extra dimensions open exciting opportunities for particle physics. We argued recently that, along with attractive features, there are phenomenological difficulties in this class of models. In fact, there is no graviton zero-mode in this case, and 4D gravity is obtained by means of continuum bulk modes. These modes have additional degrees of freedom which do not decouple at low energies and lead to inconsistent predictions for light bending and the precession of Mercury’s perihelion. In recent papers, \[hep-th/0003020\] and \[hep-th/0003045\], the authors made use of brane bending in order to cancel the unwanted physical polarization of gravitons. In this note we point out that this mechanism does not solve the problem, since it uses a ghost to cancel the extra degrees of freedom. In order to have a consistent model the ghost should be eliminated. As soon as this is done, 4D gravity becomes unconventional and contradicts General Relativity. New mechanisms are needed to cure these models. We also comment on the possible decoupling of the ghost at large distances due to an apparently flat 5D nature of space-time, and on the link between the presence of ghosts and the violation of positive-energy conditions. Theories with infinite volume extra dimensions open exciting opportunities for particle physics. The following 5D warped metric may serve as a good example of this class of models: $`ds^2=A(y)\eta _{\mu \nu }dx^\mu dx^\nu -dy^2,`$ (1) where the warp factor $`A(y)`$ tends to a nonzero constant at $`\pm \infty `$. A brane setup which realizes this was recently proposed in Ref. . It was argued in Refs. and that these models are very attractive, since they could give new insights into bulk supersymmetry and the cosmological constant problem. Regretfully, as they stand right now, these theories face two serious challenges: to reproduce the correct four-dimensional Einstein limit without invoking ghost states , and to satisfy a weak-energy positivity condition . The aim of the present note is to respond to the criticism of and regarding the first issue. Let us first recall the arguments of Ref. . The work was based on the following assumptions: I) The theory is self-consistent, in the sense that it has no unconventional or unphysical states, such as ghosts; II) 5D gravity couples universally to the energy-momentum tensor $`T_{\mu \nu }`$. We argue that in and the condition (I) is relaxed. In models with infinite extra dimensions, differently from the Randall-Sundrum (RS) model , there is no localized 4D spin-2 or spin-0 zero-mode. The only relevant physical degrees of freedom are 4D massive spin-2 gravitons. As a result, 4D gravity is obtained by exchanging a metastable graviton . This is equivalent to the exchange of a continuum of massive spin-2 bulk states. Each of the continuum states, from the 4D point of view, has strictly 5 physical degrees of freedom. They can be conveniently decomposed as: 2 from the 4D massless graviton, 2 from the “graviphoton” and 1 from a “graviscalar”. Two of these, coming from the “graviphotons”, are not relevant for matter localized on the brane. Graviscalars, however, do contribute to physical processes .
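The mismatch can be made concrete with a toy numerical check (ours, not from the original papers): contract the massive and massless tensor structures — displayed in Eq. (2) just below — with two conserved static sources, $`T^{\mu \nu }=\mathrm{diag}(M,0,0,0)`$, and compare the resulting amplitudes.

```python
import itertools
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])  # 4D Minkowski metric, (+,-,-,-)

# Static point sources: only the 00 component is nonzero.
T1 = np.zeros((4, 4)); T1[0, 0] = 1.0
T2 = np.zeros((4, 4)); T2[0, 0] = 1.0

def exchange(c):
    """T1^{mu nu} P_{mu nu, alpha beta} T2^{alpha beta} for the structure
    (1/2)(eta*eta + eta*eta) - c*eta*eta; c = 1/3 massive, 1/2 massless."""
    total = 0.0
    for mu, nu, al, be in itertools.product(range(4), repeat=4):
        P = (0.5 * (eta[mu, al] * eta[nu, be] + eta[mu, be] * eta[nu, al])
             - c * eta[mu, nu] * eta[al, be])
        total += T1[mu, nu] * P * T2[al, be]
    return total

print(exchange(1.0 / 3.0), exchange(1.0 / 2.0))   # 2/3 vs 1/2
print(exchange(1.0 / 3.0) / exchange(1.0 / 2.0))  # 4/3: extra attraction
```

The $`4/3`$ mismatch for static sources (while coupling to the traceless stress tensor of light is unaffected) is precisely the non-decoupling of the extra scalar polarization referred to above.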
These extra scalar degrees of freedom lead to deviations from the standard predictions of Einstein’s theory, since the tensor structures of the massive and massless graviton propagators are different: $`\left({\displaystyle \frac{1}{2}}(\eta ^{\mu \alpha }\eta ^{\nu \beta }+\eta ^{\mu \beta }\eta ^{\nu \alpha })-{\displaystyle \frac{1}{3}}\eta ^{\mu \nu }\eta ^{\alpha \beta }+𝒪(p)\right)`$ $`\mathrm{massive};`$ $`\left({\displaystyle \frac{1}{2}}(\eta ^{\mu \alpha }\eta ^{\nu \beta }+\eta ^{\mu \beta }\eta ^{\nu \alpha })-{\displaystyle \frac{1}{2}}\eta ^{\mu \nu }\eta ^{\alpha \beta }+𝒪(p)\right)`$ $`\mathrm{massless}.`$ (2) Under assumptions (I) and (II), the 4D gravitational interactions are completely determined by the exchange of bulk gravitons. As we emphasized above, from the 4D point of view these are just massive spin-2 states, with 5 degrees of freedom each. The effective 4D gravity in is obtained by summing up these states. Therefore, it is clear that the degrees of freedom do not match those of 4D General Relativity, and this leads to unacceptable predictions . In other words, there is an additional scalar degree of freedom in the 4D world obtained in . Note that our arguments are very general and are based only on the assumption of unbroken 4D general covariance. The way to evade this result is to compensate the extra scalar with a ghost state. Clearly, if one introduces unconventional states, such as ghosts , the results of are modified, but then it is hard to make sense of the theory (see the discussion below). It was suggested in Ref. that the unwanted polarizations are canceled if brane bending is taken into account. The question is how one can reconcile this claim with the 4D arguments presented above. The only way is by relaxing assumption (I) of Ref. and allowing for a ghost state in the theory. To see that this is indeed the case in , let us recall that the brane bending studied in and is just a gauge choice which is needed to maintain the linearized approximation. A detailed formalism was developed in Refs. , and was reiterated in Refs. and for the particular case at hand, so we will not repeat it here. We just point out that the brane bending reveals a ghost field which is used in to cancel the unwanted graviton polarizations. The simplest way to see this is as follows. Suppose that a brane with no matter is located at $`y=0`$. After a matter source is introduced on the brane, its location is shifted to $`y=\zeta (x)`$, where $`\zeta (x)`$ is some response function determined by the source. Thus, matter couples to 4D fluctuations through the warp factor $`A(y-\zeta (x))`$. Expanding $`A(y-\zeta (x))`$ in powers of $`\zeta `$, one finds an additional coupling of $`T^{\mu \nu }`$ to $`\zeta A^{\prime }`$. This is the coupling which effectively introduces a ghost. Indeed, let us introduce a source with energy-momentum tensor
$$T_{\mu \nu }=S_{\mu \nu }\delta (\overline{y}).$$ (3)
When the brane is bent by the matter source, one may choose new coordinates $`\overline{x},\overline{y}`$ via a gauge transformation (see for details ).
The induced metric on the brane takes the following form in these coordinates:
$$\overline{h}_{\mu \nu }(x,0)\propto {\displaystyle \int d^4z\left(D_5(x,0;z,0)\left(S_{\mu \nu }(z)-\frac{1}{3}\eta _{\mu \nu }S_\alpha ^\alpha (z)\right)-H\eta _{\mu \nu }D_4(x,z)S_\alpha ^\alpha (z)\right)},$$ (4)
where $`D_5`$ denotes the scalar part of the 5D graviton propagator, $`D_4`$ that of a four-dimensional scalar, and $`H`$ is a positive constant proportional to the square root of the bulk cosmological constant. The last term in this expression is equivalent to the contribution of a scalar ghost field. We would like to point out that the brane-bending term does not cause any problem in the RS scenario. Moreover, it is needed for the self-consistency of the RS model. Recall that in the RS framework there is a massless graviton zero-mode with 2 physical degrees of freedom; in addition there is an unphysical, gauge-dependent “graviscalar”, plus the massive spin-2 gravitons. The ghost in the RS framework is explicitly canceled by the unphysical “graviscalar”. Therefore, one is left with the 2 physical polarizations of the 4D massless graviton zero-mode. One might think of this as canceling a longitudinal photon by the $`A_0`$ “ghost” in the Gupta-Bleuler quantization of QED. This cancellation of unphysical states does not take place if there is no localized zero-mode, which is precisely what happens in . As we discussed above, the states which mediate 4D gravity in this case are just massive spin-2 states. They have 5 degrees of freedom, which are all physical. The two degrees of freedom corresponding to the “graviphotons” decouple at low energies, as they couple derivatively to a conserved energy-momentum tensor. However, the third, scalar, degree of freedom does not decouple. The aim of the ghost present in is to compensate for this scalar. Thus, one is left with a theory which has a manifest ghost in the physical spectrum. This ghost was used in to remove the problem of extra degrees of freedom from the 4D theory to large distances, where gravity, in this case, becomes scalar antigravity due to the ghost. However, the presence of a ghost indicates sickness of the theory at all scales. In particular, the ghost energy is unbounded from below. Any theory which looks remotely like gravity is then completely unstable when coupled to such a state. This instability is most probably due to the fact that the background in violates positive-energy conditions . Since there are no known ways to remedy theories with physical ghosts, we are inclined to take a conservative point of view and require that, for a sensible model, the ghost contributions be canceled. In this case the model can be made free of ghosts; however, 4D gravity then becomes a tensor-scalar gravity, and one goes back to the problems pointed out in Ref. . One may wonder whether the ghost can persist at large distances. This is a bit confusing, since it naively seems that the model of Ref. should become flat five-dimensional at large distances, in which case the second term in (4) would clearly be absent. However, the theory at hand is never truly flat five-dimensional. Rather, it is a flat five-dimensional model with a peculiar brane. This brane is a combination of positive- and negative-tension slices and from large distances looks like a zero-tension object. Nevertheless, regardless of the fact that this is a zero-tension object, it maximally breaks translation invariance in the extra dimension. As a result, there is no continuous limit in which the theory becomes flat five-dimensional.
Another possibility is to make the ghost metastable, so that it decays at large distances. Still, even in this case, it should admit a Källén-Lehmann representation in terms of massive ghost states. Again, metastability does not suffice to “exorcise” the ghost. Indeed, it may at most cure problems in single-ghost exchange amplitudes, but not in amplitudes involving two or more ghosts. The instability associated with the fact that the ghost energy is unbounded from below is just one example. The ghost formulation of the problems raised in makes us think that they might be related to the lack of energy positivity in this scenario . Probably, any solution of the ghost problem must also cure the energy-positivity problem. In any event, this framework deserves further investigation, and perhaps there are some unconventional solutions to the problems discussed above. Acknowledgments We would like to thank C. Csáki, J. Erlich, and T.J. Hollowood for useful communications regarding the results of Ref. . The work of G.D. is supported in part by a David and Lucile Packard Foundation Fellowship for Science and Engineering. G.G. is supported by NSF grant PHY-94-23002. M.P. is supported in part by NSF grant PHY-9722083.
# Discussion: Are There Material Objects in Bohm’s Theory?
## 1 The Issue
Bedard (1999) argues that “Bohm’s interpretation is not as classical as it initially appears” (p. 223) and that, in particular, the common view that the pilot-wave theory (Louis de Broglie (1924) formulated essentially the same theory that Bohm (1952) did; an examination of his dissertation reveals that in fact his theory was much closer to Bohm’s than is usually assumed, and I have chosen, therefore, to use the ‘neutral’ term ‘pilot-wave theory’) can get by with an ontology of just particles “do\[es\] not make sense” (ibid.). Indeed, “sets of Bohmian particles do not have all the intrinsic properties necessary to constitute a material object” (ibid.). This last remark is the heart of Bedard’s impressive paper, the point being that the ‘minimalist’ interpretation of the pilot-wave theory (according to which the only entities are the particles, and their only fundamental property position) has no account of material objects. In this note, I shall suggest that in fact the minimalist’s interpretation is fully adequate. Bedard shows that the minimalist’s physical world (consisting of just particles with position) is too sparse to explain why, for example, a collection of particles is “a brain instead of something merely shaped \[as\] a brain” (ibid.). Instead, the wavefunction itself must be invoked to get any account of the forces that bind particles together. The heart of my response is just that the minimalist is not obligated to have such an account. Much of the rest of this note is aimed at making this claim plausible. I hasten to add that it is no part of my argument that the pilot-wave theory is ‘classical’, whatever that adjective may mean.
## 2 Review of the Arguments
Bedard presents three arguments against the minimalist interpretation of the pilot-wave theory, according to which the only ‘real’ objects are particles, and the only ‘real’ properties are positions, while the wavefunction does nothing more than encode the dynamics obeyed by those particles. I shall review each argument, reserving most comments for later. Her first argument is based on the idea that “in order for a system to constitute a composite object such as a cat, table, or hammer, the right types of particles must be bonded together in an appropriate way” (p. 227). Consider two sets of particles, one ‘merely’ shaped as a hammer, the other shaped as a hammer and bonded together so as to maintain its shape under the stress of, for example, hitting a nail. The latter is a hammer, the former not. Bedard then points out that the pilot-wave theory may account for such ‘bonds’ by appeal to features of the wavefunction, but not by appeal to the intrinsic properties of the particles, of which, on the minimalist account, there is just one—position. Hence the minimalist interpretation is inadequate, for it must model the distinction between bonded and unbonded particles and yet cannot: “quantum mechanics would not be celebrated for successfully modeling such a distinction if the distinction were insignificant” (p. 228). (We shall see that the minimalist is not much inclined to join the celebration.) Bedard’s second argument is aimed at the idea that the configuration of particles could ‘cause’ our perceptions.
In particular, she considers a quantum-mechanical evolution such as:
$$|v\rangle |\text{“}v\text{”}\rangle |\text{ready}\rangle \to |v\rangle |\text{“}v\text{”}\rangle |\text{sees “}v\text{”}\rangle $$ (1)
where $`|v\rangle `$ is the state of some measured system, $`|\text{“}v\text{”}\rangle `$ is the state of an apparatus indicating the result “$`v`$”, $`|\text{ready}\rangle `$ is the state of an observer not yet having observed the apparatus, and $`|\text{sees “}v\text{”}\rangle `$ is the state of an observer having observed the apparatus. Bedard then notes that the evolution of the particles in the observer is not functionally dependent on the positions of the apparatus’ particles. We are supposed to conclude that the apparatus’ particles do not, therefore, ‘cause’ the perception “sees ‘v’ ”. To bolster this conclusion, Bedard further claims that on the counterfactual analysis of causation, the positions of the apparatus’ particles do not affect the observer’s particles, because “according to the counterfactual analysis, ‘the pointer particles cause the perception’ means that if the pointer configuration were different, then the perception would have been different” (p. 231). But we should be somewhat more careful here. Surely, for example, the counterfactual analysis says more. Consider a baseball breaking a window. Is it true that ‘had the ball been elsewhere, the window would not have broken’? No. One requires that the difference be ‘enough’. (E.g., $`10^{-10}`$ cm is probably not enough.) True, the configurations that might make a difference happen to have probability zero if in fact the wavefunction is as Bedard describes in (1), but several problems arise here. First, is probability $`0`$ low enough? If not, how do we evaluate the relevant counterfactual? If so, are we allowed to alter the wavefunction (in order to make these configurations have probability greater than $`0`$)? Such questions make it clear that appeal to the counterfactual analysis is at best problematic, and requires considerably more careful discussion. (See Dickson (1996) for some discussion of the difficulties of applying the counterfactual analysis to the pilot-wave theory.) In fact, Bedard does briefly raise the possibility that the evolution is, instead,
$$\left(|v\rangle |\text{“}v\text{”}\rangle +|w\rangle |\text{“}w\text{”}\rangle \right)|\text{ready}\rangle \to |v\rangle |\text{“}v\text{”}\rangle |\text{sees “}v\text{”}\rangle +|w\rangle |\text{“}w\text{”}\rangle |\text{sees “}w\text{”}\rangle .$$ (2)
In this case, one or the other of the wavepackets for the apparatus is ‘empty’ (i.e., the configuration is not located there)—let it be $`|\text{“}w\text{”}\rangle `$. Then we might be tempted to say that if the configuration of the apparatus had been there, the observer would have seen “$`w`$” rather than “$`v`$”. But then does the counterfactual analysis not entail that the configuration is causally relevant to the state of the observer? Bedard answers ‘no’, for two reasons. First, “the viability of particularity \[i.e., the minimal interpretation\] … should not hinge on the complexity of the universe” (p. 232), and second, “we could construct more realistic and complex examples in which empty wavepackets exist without having the pointer particles determine which wavepacket is active or affect the brain particles’ trajectories” (ibid.). The first point is crucial—I shall suggest below that in fact the only reasonable version of the minimal interpretation should allow that answers to ‘causal’ questions depend on the contingent details of this universe. The second I do not understand, if Bedard has in mind the situation that I described schematically in (2).
There, the quantum-mechanical perfect correlations between the apparatus and the observer guarantee that the observer will (with probability $`1`$) see the result that the apparatus indicates. However, this latter point is central neither to her argument nor to mine. I shall address the general argument about ‘particulate epistemology’ below. Bedard’s third argument is that mere positions are insufficient to explain the correlation between our perceptions and the world. For example, colors may not depend on configuration in the appropriate way, and yet we can perceive color. In general, it is possible to perceive properties that are not in any obvious way dependent on configurations. But then configurations cannot explain our perceptions.
## 3 The Minimal Interpretation
Bedard quotes several authors who seem to adopt something like a ‘minimalist’ interpretation of the pilot-wave theory. We will do well, nonetheless, to make it clear what that interpretation says. Bohm’s formulation of the pilot-wave theory in 1952 invoked a ‘quantum potential’ in addition to the classical potential, and together they are responsible for the motions of particles—they are the potentials that appear in Hamilton’s equations. Bohm seems never to have let go of the idea of the quantum potential (Bohm and Hiley, 1993). But why is it necessary? Apparently it is required to explain the ‘deviation’ of particles from Newtonian trajectories. For example, in the two-slit experiment, where there is no classical force acting on the particle between the slits and the screen, one ‘must’ invoke a ‘quantum force’ to explain why the particle does not follow the Newtonian trajectory. (Bohm and Hiley (1993) provide some nice pictures of these ‘curved’ trajectories.) The minimalist interpretation that I would advocate (were I an advocate of the pilot-wave theory in the first place) begins by asking why we must appeal to Newton to establish ‘what is expected’. Instead, why not simply continue to allow that the particle experiences no force between the slits, and yet its trajectory is not the classically expected trajectory? This idea suggests that we consider a space-time in which these non-classical yet free motions are geodesics, so that, in much the same way that we no longer invoke the ‘force of gravity’ to explain deviations from Euclidean geodesics, we would likewise not invoke the ‘quantum potential’ to explain deviations from the Newtonian trajectories. This idea has been carried out with enough rigor to render it at least a plausible foundation for an interpretation of the pilot-wave theory (Pitowsky, 1991). The sole role for the wavefunction, then, is to determine the structure of space-time. It does not describe any other ‘real features’ of the world, and there are no ‘forces’, ‘potentials’, or anything of the sort accounting for the non-classicality of the motions of particles. What sort of metaphysics goes with this view? It is reductionistic, in the traditional sense. A crucial part of this attitude of reductionism is that our accounts of the world—in particular, the categories that we use to describe the world and relations amongst objects in it—might not be ‘fundamental’, but might instead be imposed by us on the world. A familiar example is afforded by heat. Suppose that statistical mechanics is a successful reduction of thermodynamics, that heat is nothing more than the motion of molecules, and similarly for the other central concepts of thermodynamics.
Then, where thermodynamics might lead us to refer to such things as ‘the quantity of heat’, ‘the flow of heat’, and so forth, we would, in light of this reduction, understand that these phrases do not straightforwardly refer to any real entities or properties of entities in the world. An example familiar to philosophers is afforded by Hume’s account of causation. Causes are not in the world, according to this account; they are ‘constant conjunction with an inference by the mind’; i.e., we impose causal relations on the world, where in fact there is nothing other than constant conjunction. In general, the minimalist interpretation is open to the idea that the categories that you and I use to describe the world are not the real categories into which the things in the world actually fall—not even close. The minimalist is open to a radical revision of our ordinary discourse about the world in light of our interpretations of scientific theory. ## 4 Replies to Bedard’s Arguments This feature of the minimal interpretation—and reductionism in general—makes it clear where Bedard’s arguments miss the mark. She assumes the legitimacy of certain distinctions made by us, and then requires that physical theory provide an explanation of, or account of, those distinctions in purely physical terms. The minimalist, however, is open to the idea that our mode of description may be largely responsible for the putative distinctions. While Bedard does implicitly acknowledge that her arguments do not apply to this reductionistic form of minimalism (I shall quote two cases below), apparently she does not sufficiently appreciate that reductionism was the only plausible form of minimalism in the first place. Consider again the case of the real and false hammers. Bedard notes that there are no resources internal to the minimalist’s description of the world to distinguish between real and false hammers. But the minimalist need not agree that such distinctions are reflected in fundamental physical facts about the world. As far as the world is concerned, ‘merely’ hammer-shaped sets of particles and ‘true hammers’ are no different—in much the same way that there is no physically fundamental difference between a beautiful painting and an ugly one. The sets of particles that happen, by virtue of their trajectories, to remain shaped as a hammer even under ‘stress’ (another concept imposed by us!) are picked out by us as special—presumably because they are useful for pounding nails. It is crucial, now, to note that the pilot-wave theory does predict that, under the right conditions (that is, in a universe with the right sort of spatio-temporal structure and initial configuration—and we may have just gotten ‘lucky’ in this respect), there will be sets of particles that are hammer-shaped, and that will remain so under ‘stress’. It is crucial, in other words, to realize that Bedard has not shown that there could be no hammers, according to the minimalist view. Her argument, rather, establishes that the minimalist view has no explanation of their ‘hammerhood’, rather than their ‘hammer-shapedness’. In general, it has no explanation, in terms of fundamental physical facts, of the difference between ‘bonded’ and ‘unbonded’ particles. But the minimalist interpretation need provide no such explanation. The virtue of Bedard’s first argument is to highlight this fact in a particularly sharp way, and to make it clear that the minimalist must be a radical reductionist of roughly the Humean sort. 
But the minimalist was already committed to this view from the start—one can hardly claim that the only truly existent objects are point particles with positions and fail to notice that such a claim involves a particularly radical form of reductionism. Some may be unsatisfied by the view, but Bedard has not shown that it “do\[es\] not make sense” (p. 223). To put it in her terms, Bedard has demonstrated “the incompatibility of particularity with theories in which certain material objects have essential properties that are causal” (p. 229), but the minimalist need acknowledge no such objects. What I have said to this point should make it clear how to respond to Bedard’s second argument as well. The point there (and let us grant it in spite of the problematic invocation of the counterfactual analysis) was that configurations do not, in general, cause observers’ perceptions. Again, the reductionist is not committed to the view that configurations do cause observers’ perceptions. Indeed, the very notion of a causal connection is dispensable on this view. The minimalist interpretation does predict the requisite correlations. Non-reductionists understand them causally. Bedard’s third argument contains the germ of a potential problem for the minimalist. However, outside the context of some concrete theory of perception—not to mention our consciousness of perceptions—there is very little that can (or ought to be) said about the relation between the physical world and our perceptions of it. Nonetheless, the potential problem is that if this theory, ‘we know not what’, entails that consciousness has nothing to do with configurations, then the minimalist will have some explaining to do. One might be tempted to add that Bedard has shown that however our brains physically encode information about the world, the configurations of particles in the world cannot be the causes of those encodings. Agreed, but we have already seen that the minimalist need not provide a description of such putative causal connections in physically fundamental terms (i.e., purely in terms of the positions of point particles).
## 5 Objection and Conclusion
One might object that the position I am describing here is just the position already examined by Bedard, namely, that the hammers are distinguished from the non-hammers not by their instantaneous properties, but by their entire histories (trajectories). She rejects this view, saying that “if an object’s essential properties include causal properties, these causal properties should not be smuggled in through some of their effects (such as particle trajectories)” (p. 229). The reductionists as I have described them will, of course, reject the antecedent. There are two ways to do so. First, the reductionist might indeed be able to reduce causal properties to properties of a trajectory, in which case causal properties are not in fact ‘essential’, in the same way that heat is not an essential property of ensembles of particles (let us assume). Having reduced heat to the motion of molecules, it is no good saying “if heat is an essential property, then do not smuggle it in through its effects (the motions of molecules)”, for heat is not essential. Second, the reductionist might simply refuse to admit that causal properties are even reducible to fundamental physical properties. Causal properties simply have no place at all in a physical theory. So said Hume: the property of being a hammer has two parts, ‘constant hammer-shape’ and ‘inference by the mind’.
So what do we learn from Bedard’s paper? We learn that the minimalist must be a reductionist. That lesson is valuable, though I have suggested that in any case there was little doubt, even prior to Bedard’s paper, that the minimalist should be a reductionist. Nonetheless, Bedard has provided us a very fine illustration, in detail, of just why the minimalist must be a reductionist, and just how radical that reductionism might have to be.
## 1 Introduction Until a few years ago cosmology with scalar fields was almost synonymous with cosmological inflation. Recently there has been an enormous upsurge in interest in the possibility that scalar fields can play an important role in the dynamics of the Universe at recent epochs, mainly due to the observations of the apparent magnitudes of distant supernovae which may be explained by the presence of such a component . In this context it is certainly interesting to consider what the role of such fields can be at other epochs, and in particular how their behaviour between the end of inflation and their reappearance today might influence cosmology in the intervening period. This question is also related to the ‘fine-tuning’ problem associated with such scenarios: how is it that such a field can give a significant contribution to the energy density today starting from a natural set of initial conditions after inflation? This apparent problem is in fact resolved in a wide class of potentials which generically have the property that in some part of the potential they may support modes which are dominated by the kinetic energy of the scalar field, so that their energy density scales away faster than that in the radiation, i.e. $`\rho _\varphi \propto a^{-n}`$, with $`4<n\le 6`$, where $`a`$ is the scale factor. In principle there is no reason why such modes cannot initially dominate over the radiation component, and in certain specific models this is realized. The main important observational constraint is that such domination must terminate by the nucleosynthesis epoch, when the expansion law must be that given by radiation domination with the standard model degrees of freedom. There may be an additional contribution which, conservatively, must be less than about $`20\%`$ of the total . More generally we can consider the question of the cosmology of the Universe between the end of an inflationary epoch and the entry into radiation domination before nucleosynthesis. For a transition from a scalar field dominated cosmology to occur the energy in the scalar field must either decay (directly or indirectly) into standard model particles - as in standard reheating scenarios - or it must red-shift away more rapidly than the radiation. Or some combination of the two can occur. In the former case any scaling more rapid than that during inflation ($`n>2`$, or equivalently an equation of state $`p_\varphi =w_\varphi \rho _\varphi `$, with $`w_\varphi >-1/3`$) can be envisaged, with the case $`n=3`$ corresponding to the most standard reheating during the oscillation of the inflaton about the minimum of a quadratic potential. There is a continual release of entropy until the radiation dominated epoch, leading to a dilution of most relevant physical quantities sourced during the scalar dominated phase. In the latter case, which corresponds to domination by the kinetic energy of a homogeneous scalar field (or equivalently to an equation of state $`p_\varphi =w_\varphi \rho _\varphi `$, with $`w_\varphi >1/3`$) the scalar field simply redshifts away until it becomes the sub-dominant component. There is no entropy release, and correspondingly a coherent energy remains in the scalar field which, given an appropriate potential (the ‘self-tuning’ potentials of , or the ‘tracking’ potentials of ) can become relevant again at late times. 
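The competition between the two components is easy to sketch numerically. The snippet below is a minimal illustration (my own, not from the paper): it assumes a constant scaling index $`n`$ and an illustrative initial ratio $`r_0=\rho _{\mathrm{rad}}/\rho _\varphi `$, and finds the scale factor at which the kinetic mode drops below the radiation.

```python
import numpy as np

# Illustrative sketch (not from the paper): a kinetic mode rho_phi ∝ a^-n
# versus radiation rho_rad ∝ a^-4, with an assumed initial ratio
# r0 = rho_rad/rho_phi at a = 1.

def equality_scale_factor(r0, n):
    """Scale factor at which rho_phi drops to rho_rad (needs n > 4)."""
    if n <= 4:
        raise ValueError("the field only redshifts away for n > 4")
    # rho_phi/rho_rad = (1/r0) * a^(4-n) equals 1 at:
    return (1.0 / r0) ** (1.0 / (n - 4.0))

r0, n = 1e-23, 6   # r0 ~ (H_I/M_P)^2 in the gravitational-creation case below
for a in np.logspace(0, 12, 5):
    print(f"a = {a:8.1e}   rho_phi/rho_rad = {(1.0 / r0) * a**(4 - n):8.1e}")
print("equality at a ≈", f"{equality_scale_factor(r0, n):.1e}")
```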
In we have considered in a generic way the effect of a change in the expansion rate prior to nucleosynthesis on models of electroweak baryogenesis (footnote: The effects on dark matter freeze-out can be inferred from the work of , who studied mainly modifications associated with anisotropy in the expansion.), in particular on the effect on the sphaleron bound and the ‘no-go’ theorem for electroweak baryogenesis in the case of a second order phase transition. As concrete realizations of such cosmologies we considered models which go through an epoch after inflation - which, following we termed ‘kination’ - of domination by a kinetic mode of a scalar field. This occurs most naturally in a model in which the universe ‘reheats’ not by the decay of the inflaton, but by gravitational particle creation at the end of the inflationary epoch . In a recent paper it has been observed that, for low (sub-electroweak) reheat temperatures in more traditional models of reheating - in which the inflaton decays while oscillating in a mode with matter scaling after inflation - the effects discussed in on electroweak cosmology also result. There is in this case an even larger relative boost to the expansion rate (see below), but a very large entropy release which tends to undo any of the enhancing effects of the greater expansion rate. In one of us (TP) has considered the general case of a decaying inflaton evolving in a mode scaling as $`1/a^n`$, and shown that, while the same larger boost to the expansion rate occurs as in the $`n=3`$ case of , the entropy release problem is greatly reduced as the kinetic mode $`n=6`$ limit is attained. Here we concentrate on another aspect of such alternative cosmologies, which is a simple consequence of the observation which has been made in : Because of the enhanced expansion rate, the right-handed electrons of the standard model may remain out of equilibrium until a temperature below the electroweak phase transition. It is well known that asymmetry in right-handed electrons - because of their late equilibration time - may be important from at least two points of view: • Since right-handed electrons couple to other particles in the standard model with only an extremely small Yukawa coupling, they remain out of equilibrium in an expanding Universe until relatively late - in the standard radiation dominated cosmology until $`T\gtrsim 20`$ TeV . A pre-existing baryon asymmetry can survive the effect of standard model anomalous processes - which violate $`B+L`$ and are unsuppressed until the electroweak phase transition - only if there are non-zero CP-odd conserved global charges when they are operative. In the absence of such charges the equilibrium attained will be CP invariant with zero baryon number. As noted in , above (footnote: The scale quoted in is $`10`$ TeV; the increase by a factor 2 is due to a tighter bound on the Higgs mass.) $`20`$ TeV right electron number $`e_R`$ is in fact such an effective charge, and as a result other global charges like $`B-L`$ can be violated until close to this scale. This leads to a very significant reduction in the bounds on $`B-L`$ violating interactions in grand unified theories with the structure appropriate for them to generate baryon asymmetry. Here the consequences are much simpler and more dramatic: If the $`e_R`$ remain out of equilibrium all the way until the electroweak scale, a baryon number will result from this due to the $`B+L`$ violating processes. 
When the electroweak scale is reached this baryon number will simply be frozen when the $`B+L`$ violating processes abruptly switch off. This will be the case irrespective of whether there is primordial $`B`$ or $`L`$ (or $`B-L`$), and irrespective of whether these charges are violated or conserved. Just like in the case of electroweak baryogenesis all the non-trivial physics required is in principle present in the standard model. The problem of baryogenesis then becomes posed as what we will refer to as ‘electrogenesis’, the generation of the source right handed electrons prior to the time at which the $`B+L`$ violating processes become suppressed. It is this process which we discuss below. • The effective conservation of $`e_R`$ in the early Universe due to the fact that its perturbative decay channel is out of equilibrium is not exact, because the $`e_R`$ charge has an axial anomaly under the U(1) of hypercharge. There are no degenerate vacua as in the non-abelian case, but there are finite energy modes of the U(1) field with Chern-Simons number which can ‘eat’ the charge. In fact, as discussed in this leads to an instability at finite density to the formation of long wavelength modes of hypermagnetic field. When these modes come inside the horizon they can evolve during the time in which the right electron number is without its perturbative decay channel. Here this scenario will be modified as a result of the change in the expansion rate, since the perturbative channel does not come into play until the electroweak scale, at which time a first order phase transition may produce the turbulence needed to amplify the produced seed magnetic fields. ## 2 Scalar fields and the expansion rate after inflation The inflationary solutions for scalar fields represent only one part of a much wider range of possible behaviours of the energy density in the zero modes of scalar fields. The full range can be characterized by the equation of state for a (real) scalar field which is determined by the relative weight of the kinetic and potential energy (see for a discussion): $$p_\varphi =w_\varphi \rho _\varphi ,\qquad w_\varphi =\frac{\frac{1}{2}\dot{\varphi }^2-V(\varphi )}{\frac{1}{2}\dot{\varphi }^2+V(\varphi )},\qquad \rho _\varphi \propto a^{-3(w_\varphi +1)}.$$ (1) The limit of potential energy domination gives inflation, with $`\rho _\varphi \approx const`$, while the opposite limit of complete kinetic energy domination gives the most rapid possible red-shifting of the energy to be $`\rho _\varphi \propto 1/a^6`$. While inflation is associated with flat potentials (satisfying ‘slow-roll’ conditions), the latter limit is associated with steep potentials (footnote: The exception is a flat direction with no associated potential energy e.g. a Goldstone direction associated with a broken exact global symmetry, which only has pure kinetic modes.). A particularly useful ‘yard-stick’ of flatness/steepness is the simple exponential potential $$V_{\mathrm{exp}}(\varphi )=M_P^4e^{-\lambda \varphi /M_P}$$ (2) where $`M_P=1/\sqrt{8\pi G}\approx 2.4\times 10^{18}`$ GeV is the reduced Planck mass (and the origin of $`\varphi `$ has been chosen to give the simple normalization). This potential in fact has an attractor solution for any $`\lambda ^2<6`$ in which the energy density scales as $`1/a^{\lambda ^2}`$, and as $`1/a^6`$ for $`\lambda ^2>6`$. A potential with a varying slope, e.g. 
the inverse power-law potential of $`V\propto M_P^{4+\alpha }/\varphi ^\alpha `$ then supports a kinetic mode at small $`\varphi `$, but an inflationary type (or ‘quintessence’) mode at large values of the field. Alternatively an oscillating mode about the minimum of a potential $`V\propto \lambda _\alpha \varphi ^\alpha `$ gives a broad range of scalings with $`\rho _\varphi \propto a^{-6\alpha /(\alpha +2)}`$ , producing thus the familiar matter scaling when $`\alpha =2`$ and radiation scaling when $`\alpha =4`$. In we discussed several ways in which a period of kinetic mode domination (which, following , we termed ‘kination’) could come about after inflation (footnote: Such kinetic modes have also recently been used to propose a solution to the cosmological moduli problem .). We considered only the case in which the relevant field (the ‘kinaton’) did not decay itself, and discussed two possible sources for the radiation in the Universe: The entropy associated with particle creation during the de Sitter phase (see below), or a more conventional source in the decay of a distinct inflaton field. In the latter case specific conditions need to be satisfied by the ‘kinaton’ field to allow it to dominate over the energy produced by the inflaton, whereas in the former the kinaton and the inflaton are one field and the domination by the kinetic mode for a period is a built-in and necessary feature. Our concern in this paper is not the inflationary model building aspect of the problem, but rather the problem of ‘electrogenesis’ in this kind of cosmology, as well as in the more conventional reheating models discussed in . For the sake of clarity and simplicity we limit ourselves here to two definite and simple models with scalar field dominance continuing until temperatures just above the nucleosynthesis scale, exemplifying these two types of different cases: • Model (A): The inflaton rolls after inflation into a steep potential in which the field rolls in a kinetic mode, so that the energy density scales as $`1/a^n`$ where $`n>4`$ (see Figure 1). The field is assumed to be very weakly coupled and the only radiation present is the very sub-dominant component due to particle creation at the end of the preceding de Sitter phase. The latter has a characteristic energy density $`\sim H_I^4`$, where $`H_I`$ is the expansion rate at the end of inflation, so that initially $`\rho _{\mathrm{rad}}/\rho _\varphi \sim (H_I/M_P)^2`$. Provided the inflaton scales faster than radiation it will become subdominant at a subsequent time and the transition to radiation domination is achieved without any decay of the field . Requiring that this transition occurs before nucleosynthesis gives an absolute lower bound on $`H_I`$, which for a pure (or almost) kinetic mode scaling as $`1/a^6`$ results in $`H_I\gtrsim 10^7`$ GeV (see ). As noted by Spokoiny , for an appropriate potential the field can again dominate in a slowly scaling mode at late times. This kind of model has been dubbed ‘quintessential inflation’ and studied in more detail in (see also ). Taking the reheat temperature $`T_{\mathrm{reh}}`$ to be defined (footnote: Note that in these models the Universe is strictly speaking not ‘reheated’ at all - the entropy is left behind at the end of the de Sitter phase and the important process is the red-shifting away of the dominant energy in the inflaton. Here we adopt the standard definition of ‘reheat temperature’ as used in standard reheating models. 
In we used ‘reheat temperature’ to mean the temperature of the radiation when it first thermalizes, which is far higher ($`\sim 0.1H_I`$) than the ‘reheat temperature’ as defined here. What we now call $`T_{\mathrm{reh}}`$ is denoted $`T_{k,end}`$ (‘end of kination’) in .) as that when $`\rho _\varphi \approx \rho _{\mathrm{rad}}`$, it is easy to infer (footnote: For simplicity we neglect here and elsewhere the small reheating factors associated with particle decouplings.) that above this temperature we have $$H=H_{\mathrm{rad}}\left(\frac{T}{T_{\mathrm{reh}}}\right)^{\frac{n-4}{2}},$$ (3) where $`H_{\mathrm{rad}}\approx 1.4\times 10^{-16}(T^2/100\mathrm{GeV})`$ is the standard radiation dominated evolution of the expansion rate, and $`n`$ gives the scaling of the energy density in the dominant scalar mode $`\rho _\varphi \propto 1/a^n`$, with clearly the largest enhancement of the expansion rate for the limit $`n=6`$. The constraint that the energy density in the scalar field be less than about $`20\%`$ at nucleosynthesis requires that $`T_{\mathrm{reh}}\gtrsim 5^{1/(n-4)}T_{\mathrm{ns}}`$ (and $`T_{\mathrm{ns}}=1`$ MeV). Here we are interested in the case when right electrons are out of equilibrium at the electroweak scale, which corresponds therefore to the upper bound $$T_{\mathrm{reh}}<T_{\mathrm{ew}}\left(\frac{H_{\mathrm{ew}}}{\mathrm{\Gamma }_{e_R}}\right)^{\frac{2}{n-4}},$$ (4) which for the optimum case ($`n=6`$) becomes $$T_{\mathrm{reh}}<T_{\mathrm{ew}}\frac{H_{\mathrm{ew}}}{\mathrm{\Gamma }_{e_R}}\approx \frac{T_{\mathrm{ew}}}{200}\approx 0.5\mathrm{GeV},$$ (5) where we have made use of the fact that the interaction rate for right electrons through their Yukawa coupling is $`\mathrm{\Gamma }_{e_R}\approx 10^{-13}x_{e_R}^2T`$ , and we took $`x_{e_R}=m_H/2T_{\mathrm{ew}}\approx 0.5`$ corresponding to the current lower bound on the Higgs mass $`m_H`$. In terms of the expansion rate the bound (5) corresponds to the requirement of a boost by about $`200`$ times in the expansion rate at the electroweak scale relative to that in the standard radiation dominated cosmology. • Model (B): The inflaton evolves after inflation into a potential, in which it rolls or oscillates, scaling as $`1/a^n`$ with $`6\ge n\ge 3`$. The dominant source of entropy comes from the decay of the inflaton, which is however sufficiently weakly coupled that reheating occurs between the electroweak scale and the nucleosynthesis scale. The energy density-temperature dependence for this case is illustrated in Figure 1. The phase we are discussing corresponds to the ‘preheating’ phase of inflationary models with the usual mechanism of reheating from inflaton decay, either in an oscillatory mode (with $`n=3`$ for a $`\varphi ^2`$ potential) or a rolling mode. We assume here for simplicity perturbative reheating, but note that the nonperturbative decay channels of narrow resonance may also be considered. A realization of the latter with a rolling mode is given by the ‘NO’ models of . It is quite easy to show that in Case (B) the expansion rate as a function of the temperature is independent of the equation of state (1), i.e. 
the following universality in scaling in the expansion rate holds $$H=\frac{5-3w_\varphi }{6}\frac{\rho _r}{\mathrm{\Gamma }_\varphi M_P^2}=H_{\mathrm{rad}}\left(\frac{T}{T_{\mathrm{reh}}}\right)^2,$$ (6) which implies that, for a reheat temperature below the electroweak transition, the expansion rate is enhanced by $`(T/T_{\mathrm{reh}})^2`$ with respect to the standard rate $`H_{\mathrm{ew}}\equiv H_{\mathrm{rad}}(T_{\mathrm{ew}})`$. The condition that the right electrons remain out of equilibrium until the electroweak scale is in this case $$T_{\mathrm{reh}}<T_{\mathrm{ew}}\left(\frac{H_{\mathrm{ew}}}{\mathrm{\Gamma }_{e_R}}\right)^{\frac{1}{2}}\approx \frac{T_{\mathrm{ew}}}{15}\approx 5\mathrm{GeV}$$ (7) which again corresponds to the same minimal boost in the expansion rate by a factor of about $`200`$. The extra increase in the expansion rate as a function of temperature compared to the first case is due to the ‘leaking’ of the scalar field energy into the radiation. Note that the lower bound on $`T_{\mathrm{reh}}`$ is in this case $`T_{\mathrm{reh}}\gtrsim 2`$ MeV. Our interest here finally is in the ratio of baryon number to entropy, and so we will need to include the dilution effect of this entropy production subsequent to the scale $`T_{\mathrm{dec}}`$ at which the baryon number, or in fact the source for it, right electron number, is produced. As discussed in the entropy per comoving volume $`S_{\mathrm{com}}`$ scales as $`a^3T^3\propto T^{-3(8-n)/n}`$ since $`a\propto t^{2/n}\propto T^{-8/n}`$. Thus the dilution factor $`f_{\mathrm{dil}}`$ due to entropy production between the two scales is $$f_{\mathrm{dil}}\approx \left(\frac{T_{\mathrm{dec}}}{T_{\mathrm{reh}}}\right)^{\frac{3(8-n)}{n}}.$$ (8) Thus there is a very significant difference between the case of the matter scaling (considered in ) giving $`f_{\mathrm{dil}}\sim (T_{\mathrm{dec}}/T_{\mathrm{reh}})^5`$ and that of the kinetic mode limit with $`f_{\mathrm{dil}}\sim T_{\mathrm{dec}}/T_{\mathrm{reh}}`$. The origin of this difference can be easily understood: a scalar kinetic mode gets rid of most of its energy by the rapid red-shifting. We now turn to the effect of these modifications to the pre-nucleosynthesis expansion rate on the generation of a baryon asymmetry from an $`e_R`$ asymmetry. ## 3 From $`e_R`$ to a baryon asymmetry Before discussing the generation of right electron asymmetry in these cosmologies, we discuss the conversion of such an asymmetry to a baryon asymmetry when $`B+L`$ violating processes are active. ‘Conversion’ is in fact a little misleading as these processes of course only act on the left-handed fermions: As will become more explicit now the physics of the creation of the baryon asymmetry is that the right electrons carry the gauge charge hypercharge, which is globally zero and exactly conserved. When there is net hypercharge in the $`e_R`$ sector, there must be also a compensating hypercharge in the rest of the particles. When this is non-zero the $`B+L`$ violating processes minimize the free energy with a non-zero baryon number. We follow a standard procedure and consider the equilibrium abundance of baryon number subject to the constraints imposed by the charges conserved by the fast interactions (which are in equilibrium at that time). Because baryon number violation freezes out at the electroweak scale, when the sphaleron processes become suppressed, this is the scale at which we need to calculate baryon number. 
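Before turning to that calculation, the scalings above are easy to tabulate. The sketch below uses my own round numbers ($`T_{\mathrm{ew}}=100`$ GeV, $`x_{e_R}=0.5`$, the quoted rate prefactors) to evaluate the boost in $`H`$ at the electroweak scale for the two models, the standard-cosmology $`e_R`$ equilibration temperature, and the dilution factor of Eq. (8):

```python
# Sketch of Eqs. (3)-(8) with assumed round numbers (not from the paper).
T_ew = 100.0                                   # GeV

def H_rad(T):                                  # GeV, rate quoted below Eq. (3)
    return 1.4e-16 * T**2 / 100.0

def gamma_eR(T, x=0.5):                        # GeV, right-electron Yukawa rate
    return 1e-13 * x**2 * T

# In the standard cosmology Gamma/H ∝ 1/T: e_R equilibrates below ~20 TeV.
print(f"T_eq ≈ {1e-13 * 0.25 * 100 / 1.4e-16:.1e} GeV")

# Boost of H at T_ew: (T/T_reh)^((n-4)/2) for Model A, (T/T_reh)^2 for Model B.
print(f"Model A boost: {(T_ew / 0.5) ** 1:.0f}")      # n = 6, T_reh = 0.5 GeV
print(f"Model B boost: {(T_ew / 5.0) ** 2:.0f}")      # T_reh = 5 GeV
print(f"needed boost:  {gamma_eR(T_ew) / H_rad(T_ew):.0f}")   # ~200

# Entropy dilution between T_dec and T_reh, Eq. (8):
f_dil = lambda T_dec, T_reh, n: (T_dec / T_reh) ** (3.0 * (8 - n) / n)
print(f_dil(20.0, 1.0, 3), f_dil(20.0, 1.0, 6))   # ~3e6 (matter) vs 20 (kinetic)
```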
Above the electroweak scale the rate of the B-violating processes is mediated by the symmetric phase sphaleron transitions, which are unsuppressed: $`\mathrm{\Gamma }_{\mathrm{sph}}\approx 25\alpha _w^5T\approx 10^{-6}T`$ , so that they will have time to equilibrate above the electroweak scale in the models we are discussing (cf. Eqs. (3) and (6)). In fact we shall assume for simplicity that the expansion rate is such that the $`e_R`$ are the only standard model degrees of freedom out of equilibrium. Within the scenarios we are discussing this is not necessarily the case, as the rate could be enhanced in principle enough to take also other heavier particles out of equilibrium. For example the $`\mu _R`$ has a Yukawa coupling larger by about $`10^2`$ and therefore a decay rate faster by a factor of $`(y_\mu /y_e)^2\sim 10^4`$, while with a reheat temperature sufficiently close to the nucleosynthesis scale the expansion rate may be boosted in Model A by almost as much as $`T_{\mathrm{ew}}/T_{\mathrm{ns}}\sim 10^5`$, and in Model B by $`(T_{\mathrm{ew}}/T_{\mathrm{ns}})^2\sim 10^{10}`$ times (with of course the correspondingly large entropy release factor). With the following hierarchy of couplings $$\mathrm{\Gamma }_{e_R}\ll H\ll \mathrm{\Gamma }_{\mathrm{sph}},\mathrm{\Gamma }_{\mu _R},..$$ (9) the appropriate equilibrium calculation of baryon number is particularly simple. In the standard model the only conserved charges are the gauge charges, $`e_R`$ and $`\frac{1}{3}B-L_i`$, where the latter is the baryon minus lepton number in each generation. We will make the slight simplification of assuming only total $`B-L`$ as conserved (which would be appropriate at this scale in certain models including neutrino mass matrices), which leads to minor numerical changes to the results quoted here. (We refer the reader to where the full set of constraint equations can be found.) To arrive at the set of constraint equations one expresses the charge densities in terms of particle densities $`n_\alpha `$ using $`n_\alpha -\overline{n}_\alpha =(T^2/6)k_\alpha \mu _\alpha `$, where (footnote: We use here the massless approximation, to which there will be small corrections due to thermal masses. Note that we also assume the right electron distributions can be described by a chemical potential, which is justified given their relatively fast elastic scattering rate through weak hypercharge processes $`\sim 10^{-2}T`$ .) $`k_\alpha =1(2)`$ for fermions (bosons) and $`\mu _\alpha `$ is the chemical potential for a species $`\alpha `$. Further $`\mu _\alpha `$ can be re-expressed in terms of the chemical potentials for charges $`Q_A`$ as follows: $`\mu _\alpha =\sum _Aq_\alpha ^A\mu _A`$, where $`q_\alpha ^A`$ is the $`A`$-charge of the $`\alpha `$ species. With this we obtain (cf. ) the following constraint equations while the baryon number is out of equilibrium: $$Y=\frac{T^2}{6}\left[(10+n)\mu _Y+8\mu _{B-L}-\mu _{e_R}\right]$$ $$B-L=\frac{T^2}{6}\left[8\mu _Y+13\mu _{B-L}-\mu _{e_R}\right]$$ $$e_R=\frac{T^2}{6}\left[-\mu _Y-\mu _{B-L}+\mu _{e_R}\right].$$ (10) Here we used the hypercharge assignments such that $`Q=Y+T^3`$, where $`Q`$ denotes the electric charge and $`T^3`$ the isospin, and $`n`$ denotes the number of Higgs doublets. We have not written the second linearly independent gauge charge explicitly, as choosing it as $`T^3`$ it is simply proportional to its own chemical potential, and so trivially drops out of the equations when we impose $`T^3=0`$. 
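As a cross-check, the linear system (10) can be solved mechanically. The sketch below assumes the sign conventions as reconstructed above (with $`n`$ the number of Higgs doublets); setting $`Y=0`$ and solving for the chemical potentials reproduces the closed-form coefficients of Eq. (12) below, using $`B=2\mu _Y+4\mu _{B-L}`$ from Eq. (11).

```python
import numpy as np
from fractions import Fraction

def baryon_number(e_R, B_minus_L, n):
    """Solve Eq. (10) with Y = 0 (units with T^2/6 = 1), then evaluate
    B = 2*mu_Y + 4*mu_BL (Eq. 11)."""
    M = np.array([[10 + n,  8, -1],      # Y
                  [ 8,     13, -1],      # B - L
                  [-1,     -1,  1]],     # e_R
                 dtype=float)
    mu = np.linalg.solve(M, np.array([0.0, B_minus_L, e_R]))
    return 2 * mu[0] + 4 * mu[1]

n = 1
print(baryon_number(1.0, 0.0, n))                 # numeric e_R coefficient
print(Fraction(2 * (9 + 2 * n), 59 + 12 * n))     # 2(9+2n)/(59+12n) = 22/71
print(baryon_number(0.0, 1.0, n))                 # numeric B-L coefficient
print(Fraction(2 * (11 + 2 * n), 59 + 12 * n))    # 2(11+2n)/(59+12n) = 26/71
```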
The baryon number $`B`$ can itself be expressed in terms of the relevant chemical potentials as $$B=\frac{T^2}{6}\left[2\mu _Y+4\mu _{B-L}\right].$$ (11) The gauge charge $`Y`$ must be zero, and then given any value of the global conserved charges Eqs. (10) can be solved to give the baryon number (11). When $`B-L`$ is conserved we thus have $$B=\frac{2(9+2n)}{59+12n}e_R+\frac{2(11+2n)}{59+12n}(B-L)$$ (12) and see that $`e_R`$ is an almost equally strong source for baryon number as is $`B-L`$. Indeed, as $`n`$ changes from $`n=1`$ to $`n=\mathrm{\infty }`$, the coefficient of $`e_R`$ changes from $`0.31`$ to $`1/3`$, while that of $`B-L`$ from $`0.32`$ to $`1/3`$. Hence Eq. (12) may be quite well approximated by $$B\approx \frac{1}{3}\left[e_R+(B-L)\right].$$ (13) Thus, if $`B-L`$ is conserved by all interactions after inflation, it is zero and remains zero, but the final baryon number, in contrast to the usual radiation dominated universe, is now non-zero and simply proportional to the original $`e_R`$ asymmetry. Indeed therefore we see explicitly that no $`B`$ violation other than that of the anomalous processes of the standard model is required to produce it. While the latter is the case which will interest us, it is interesting to note that the result that one obtains a non-zero baryon number from $`e_R`$ is very robust, and is relatively insensitive to whether the other charges are violated. Indeed it is easy to see that if $`e_R`$ is the only conserved charge – $`B-L`$ may for example be violated by some interactions all the way down to the electroweak scale – the net baryon number is still non-zero. Indeed, solving the reduced set of constraint equations for $`Y`$ and $`e_R`$ only, with $`\mu _{B-L}`$ set to zero, we find $$B=-\frac{2}{11+n}e_R$$ (14) which is slightly smaller in magnitude than, and of the opposite sign to, the result in Eq. (12). We conclude that, irrespective of constraints on the value of $`B-L`$ and assumption on whether $`B-L`$ is conserved, a right-handed electron asymmetry is reprocessed into a baryon asymmetry of the same order. ## 4 Electrogenesis We now consider explicitly models for electrogenesis – production of a right-handed electron asymmetry – prior to the electroweak scale. In the standard radiation dominated cosmology right electrons have been understood to be of interest because of their capacity to protect a baryon asymmetry from erasure . Thus their generation has been considered in the context of theories which also produce such a primordial baryon or lepton asymmetry, and thus typically the scale characterizing their generation is very high, around the GUT scale or in the case of leptogenesis as low as $`10^{10}`$ GeV . In the present context right electrons are in their own right adequate sources for baryogenesis by reprocessing with standard model $`B+L`$ violation. Given that the physics required to generate them is CP-violating only, and thus potentially can be associated in simple ways with much lower energy scales, it is certainly of interest to consider mechanisms which can produce them quite independently of $`B`$ or $`L`$ violation beyond the standard model. In fact in the cosmologies being considered one is forced to seek such different mechanisms of $`e_R`$ generation for another very simple reason which we have not drawn attention to so far: The maximum temperature $`T_{\mathrm{max}}`$ attainable after inflation in these cosmologies is in fact much lower than in the standard radiation dominated cosmology. 
Given the requirements of $`T_{\mathrm{reh}}`$ in (4) and (7), we can bound the temperature above by extrapolating the expansion rate to the point $`H\approx 0.1T`$. For model A this corresponds to $`T_{\mathrm{max}}\lesssim 10^8`$ GeV, while for model B it gives $`T_{\mathrm{max}}\lesssim 10^6`$ GeV. Above this point thermodynamic temperature can have no meaning as the age of the Universe is shorter than the equilibration time of any process. Thus any mechanism which in the ordinary radiation dominated scenario relies on temperatures being reached higher than this is not applicable, and we must seek mechanisms which operate at a lower temperature. Here our aim is not to be exhaustive about possible mechanisms, but rather to study an explicit model which produces an $`e_R`$ asymmetry sufficiently large to source the observed baryon asymmetry in these cosmologies. Given that in principle all the elements are present in the standard model itself, it is natural to ask – just as one does in the context of electroweak baryogenesis – whether it alone might suffice. While in the standard radiation dominated cosmology the standard model has apparently insurmountable problems on two fronts – the sphaleron bound and the inadequacy of standard model CP violation – here the former does not provide a significant constraint. All we require here is that the right electron number come into equilibrium after the $`B+L`$ violation goes out of equilibrium. This is in contrast with baryogenesis scenarios at a first order electroweak phase transition in which the sphaleron rate at the transition is required to drop below the expansion rate. So could the standard model with its CP violation produce the $`e_R`$ asymmetry? Given that its production can only come about through the same Yukawa coupling channel, the answer would seem to be definitively in the negative. In general however the question in these cosmologies can be framed more generally given that the expansion rate can change enormously: Is it possible to generate some CP-odd charge (not necessarily $`e_R`$) which is conserved on a time scale longer than that associated with the $`B+L`$ violating processes in the unbroken phase? We will return briefly to this question in the conclusion. Here, just as one does in the context of baryogenesis models, we add some extra CP-violating physics in the scalar sector. We study a simple out-of-equilibrium decay of scalar particles with CP-violating decays. Interestingly we find that, again because of the modified expansion rate prior to nucleosynthesis, the mass of these scalars need not be so far above the electroweak scale for the mechanism to work. This suggests that the kind of mechanism for ‘electrogenesis’ we discuss may be implemented successfully in other theories with additional scalar particles at scales not far above the electroweak scale, with signatures testable at accelerators. We will return to this point in our conclusions. ### 4.1 The Model The additional particle content we assume over the standard model (and the inflaton) is a set of Higgs-like scalar doublets $`\mathrm{\Phi }^a`$ coupled to the standard model leptons through a Yukawa type interaction, i.e. with interaction Lagrangian $$\mathcal{L}_{add}=h_{ij}^a\mathrm{\Phi }^a\overline{\psi }_{iL}\psi _{jR}+h.c.,$$ (15) where the couplings $`h_{ij}^a`$ are CP-violating, i.e. $`𝐡^{a*}\ne 𝐡^a`$, where $`𝐡^a`$ is the matrix of couplings. 
While in principle CP violation does not mandate a matrix of couplings, but only a coupling to the right electron itself with a complex phase unremovable by phase transformations on the whole Lagrangian, we will require the flavour mixing structure and the existence of at least two such scalars in order to implement the generation of a CP-violating asymmetry. The strongest constraints on the masses and the couplings of such scalars come from the fact that they are flavour changing. For leptons the strongest constraint of this type comes from the bounds on the decay $`\mu \to e\gamma `$ . For couplings $`h`$ of order one this requires masses $`M_\mathrm{\Phi }\gtrsim 100`$ TeV, with the branching ratio for this process going parametrically as $`h_{\mu \tau }^2h_{e\tau }^2(M_W/M_\mathrm{\Phi })^4`$ so that much smaller masses can be permitted if the couplings have a hierarchy like that in the standard model Yukawa couplings . ### 4.2 The Out-of-Equilibrium Conditions We consider here a simple out-of-equilibrium decay scenario for these particles, very analogous to that which occurs in standard GUT scale baryogenesis scenarios . It is possible that nonperturbative decay mechanisms may be operative and work just as well, but we limit our treatment here to the simpler perturbative case. The perturbative decay rate for $`\mathrm{\Phi }`$ can be well approximated by $$\mathrm{\Gamma }_{\varphi ,\mathrm{pert}}=\frac{|𝐡|^2}{8\pi }E_\varphi ,$$ (16) where $`E_\varphi `$ is the energy of $`\mathrm{\Phi }`$, $`|𝐡|^2=\mathrm{Tr}(𝐡𝐡^{\dagger })`$ and we have assumed the energy of the $`\mathrm{\Phi }`$ is much greater than that of the produced fermions (e.g. in the case $`m_i=m_j=m`$ there is a simple suppression $`E_\varphi \to [E_\varphi ^2-4m_\psi ^2]^{1/2}`$). Before considering the production of a CP asymmetry we first discuss the out-of-equilibrium condition. When the particles decay, with rate given by (16), the reverse process (or any other one) creating them must be suppressed. This is fulfilled if the temperature of the plasma at the time of decay is well below the mass scale of the scalars, i.e. $$M_\mathrm{\Phi }>T,\mathrm{when}\mathrm{\Gamma }_\varphi \approx H.$$ (17) Equations (3) and (6) give the boost to the expansion rate with respect to the radiation dominated case as $`(T/T_{\mathrm{reh}})^p`$, where $`p=1`$ for kinetic mode domination ($`n=6`$), and $`p=2`$ for a decaying dominant component. Making use of this and Eq. (16), we infer that the constraint (17) can be re-expressed as $$M_\mathrm{\Phi }>T_{\mathrm{dec}}>(70g_{*})^{-\frac{1}{2(1+p)}}\left[|𝐡|^2M_PT_{\mathrm{reh}}^p\right]^{\frac{1}{1+p}}\hspace{1em}(0\le p\le 2),$$ (18) where $`T_{\mathrm{dec}}`$ is the temperature at which $`\mathrm{\Phi }`$ decays, and we have used $`H_{\mathrm{rad}}=(\pi ^2g_{*}/90)^{\frac{1}{2}}T^2/M_P`$ (where $`M_P\approx 2.4\times 10^{18}`$ GeV). For the case of radiation domination ($`p=0`$) this gives $`M_\mathrm{\Phi }>T_{\mathrm{dec}}>10^{16}|𝐡|^2`$ GeV, where we took $`g_{*}\approx 10^3`$. Given that in these scenarios the asymmetry is generated by, at the very least, the interference between a tree-level and one-loop diagram, it is always suppressed by some small numbers times at least a square of the couplings $`𝐡`$, and often by higher powers of the couplings. Hence to produce a significant asymmetry one cannot have the coupling too small, and conversely one needs the scalar field to have a mass not so far below the GUT scale. For the cosmologies we are primarily considering these bounds change very considerably. 
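Eq. (18) is easy to evaluate numerically; the cases worked out in the next paragraphs correspond to $`p=1`$ and $`p=2`$. A small sketch (my own, with $`g_{*}=10^3`$ as in the text):

```python
# Evaluate the decay temperature of Eq. (18) for the three cases p = 0, 1, 2.
# Assumed numbers: reduced Planck mass M_P = 2.4e18 GeV, g* = 1e3, |h| = 1.
M_P, g_star = 2.4e18, 1e3

def T_dec(h2, T_reh, p):
    """T_dec ≈ (70 g*)^(-1/(2(1+p))) * [h^2 M_P T_reh^p]^(1/(1+p)), in GeV."""
    return (70 * g_star) ** (-0.5 / (1 + p)) * (h2 * M_P * T_reh**p) ** (1.0 / (1 + p))

print(f"p=0 (radiation):           {T_dec(1.0, 1.0, 0):.1e} GeV")   # ~1e16
print(f"p=1 (kination, T_reh=MeV): {T_dec(1.0, 1e-3, 1):.1e} GeV")  # ~3e6
print(f"p=2 (decaying, T_reh=MeV): {T_dec(1.0, 1e-3, 2):.1e} GeV")  # ~2e3
```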
In Model A in which the Universe is dominated by a kinetic mode ($`p=1`$, or equivalently $`n=6`$) the constraint (18) relaxes to $$M_\mathrm{\Phi }>T_{\mathrm{dec}}>3\times 10^6\mathrm{GeV}|𝐡|\left(\frac{T_{\mathrm{reh}}}{T_{\mathrm{ns}}}\right)^{\frac{1}{2}}>5|𝐡|\times 10^6\mathrm{GeV}.$$ (19) where we took $`T_{\mathrm{ns}}=1`$ MeV and $`g_{*}\approx 10^3`$. This should be compared with the energy scale $`H_I`$ which characterizes this model at the beginning of the post-inflationary epoch. For a reheat temperature $`T_{\mathrm{ns}}`$ and a pure $`n=6`$ scaling after inflation one finds $$H_I\approx 10^7\mathrm{GeV}\left(\frac{T_{\mathrm{reh}}}{T_{\mathrm{ns}}}\right)^{\frac{1}{2}}.$$ (20) Thus the $`M_\mathrm{\Phi }`$ can be sufficiently light that they are produced by gravitational coupling in this mechanism along with all the other lighter ($`m<H`$) degrees of freedom. A little later, at a temperature $`T\approx 0.1H_I`$ the strongly interacting degrees of freedom begin to equilibrate (and define a real thermodynamic temperature), while the $`\mathrm{\Phi }`$ can decay without ever coming into equilibrium. For smaller values of the coupling ($`h\lesssim 10^{-2}`$) there may be some time for weak force mediated annihilation processes (with rate $`\sim \alpha _w^2T`$) to act, and in this case the initial $`e_R`$ number density at the time of decay will be reduced somewhat relative to their initial value. In the case of Model B, when the dominant component decays, we have $`p=2`$ so that Eq. (18) gives an even milder bound on the mass of $`\mathrm{\Phi }`$: $$M_\mathrm{\Phi }>T_{\mathrm{dec}}>2|𝐡|^{\frac{2}{3}}\left(\frac{T_{\mathrm{reh}}}{T_{\mathrm{ns}}}\right)^{\frac{2}{3}}\mathrm{TeV}>3|𝐡|^{\frac{2}{3}}\mathrm{TeV}.$$ (21) In this case therefore the out-of-equilibrium condition may in some cases (for sufficiently low $`T_{\mathrm{reh}}`$) provide an even weaker constraint on their masses than accelerator constraints from the flavour changing processes they can mediate. More generally, it is certainly interesting to note that the mass scale is sufficiently low that models may be viable in which the scalars are the supersymmetric scalar partners of the standard model particles. We will return to this point in our conclusions. Therefore in models of type B we can envisage the following scenario. The universe attains a temperature $`T\sim M_\mathrm{\Phi }`$ and the $`M_\mathrm{\Phi }`$ are created by the fastest processes in similar quantities to the other degrees of freedom; as the temperature falls they drop out of equilibrium and, when the temperature $`T_{\mathrm{dec}}`$ is reached, they decay. As in Model A one would need to consider carefully the different cases (depending on $`|𝐡|`$) in which the weak interactions can or cannot play a role in reducing the particle anti-particle asymmetry in $`\mathrm{\Phi }`$ before this decay occurs. One feature of (21) should immediately be noted, however, and we will return to it below: The entropy release of these models which is of relevance in the present case is that which occurs between the time of production of the $`e_R`$ asymmetry, $`T_{\mathrm{dec}}`$, and $`T_{\mathrm{reh}}`$. From (21) it follows that $$\frac{T_{\mathrm{dec}}}{T_{\mathrm{reh}}}>2|𝐡|^{\frac{2}{3}}\left(\frac{\mathrm{TeV}}{T_{\mathrm{ns}}^{2/3}T_{\mathrm{reh}}^{1/3}}\right)\gtrsim 10^5|𝐡|^{\frac{2}{3}},$$ (22) where the latter inequality follows from (7). 
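The size of the dilution this ratio implies (via Eq. (8)) is what drives the conclusion of the next paragraph. A quick check with assumed round numbers:

```python
# Check of Eq. (22): minimum T_dec/T_reh in Model B (|h| = 1), and the
# entropy dilution this implies via Eq. (8) for different scalings n.
T_ns, T_reh = 1e-3, 5.0                  # GeV; T_reh at its Eq. (7) maximum

ratio = 2e3 / (T_ns ** (2.0 / 3.0) * T_reh ** (1.0 / 3.0))   # 2 TeV prefactor
print(f"T_dec/T_reh ≳ {ratio:.1e}")      # ~1e5, as in Eq. (22)

for n in (3, 4, 6):
    print(n, f"f_dil ≈ {ratio ** (3.0 * (8 - n) / n):.1e}")
    # n=3: ~1e25 (hopeless), n=6: ~1e5 (marginal but workable, see Eq. 26)
```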
When it comes to producing a final baryon asymmetry this constraint will make it very difficult for models with any scaling much slower than the kinetic mode ($`\rho _\varphi \propto a^{-6}`$) to produce a reasonable final baryon to entropy ratio. We will return to this below. ### 4.3 Generation of the Asymmetry We now turn to the production of the asymmetry through the out-of-equilibrium decay of these scalar fields. As in any CP-violating out-of-equilibrium decay scenario one must go beyond the tree-level decay and consider the interference between tree-level and higher order diagrams to produce any net CP-violating effect. Further, since the CPT theorem implies the equality between the total decay rate for particles and anti-particles, we need at least two channels containing different $`e_R`$ number in order to be able to produce the asymmetry. It is to this end that we have taken the $`\mathrm{\Phi }^a`$ scalars to couple to more than one generation. Further, to produce a CP-violating effect at one loop level we need at least two scalars, just as one requires two heavy bosons in simple GUT scenarios (cf. Ref. ). Here we do not try to be exhaustive in our consideration of the diagrams which give dominant contributions in different parts of parameter space. In Figure 2 the two diagrams we consider here are shown, for the decay channel $`\mathrm{\Phi }\to \overline{e}_R\mathrm{\Psi }_R^i\mathrm{\Phi }^{\prime }`$ where the $`\mathrm{\Phi }^{\prime }`$ is assumed to be the lighter of the (at least) two scalars. Provided the second fermion $`\mathrm{\Psi }_R^i`$ is of one of the two heavier lepton flavours, the process violates right electron number. The rate of the corresponding anti-particle decay does not cancel if the CP-violating interference terms between the two diagrams have a pure complex part. Summing over the internal fermions we have (see for somewhat similar cases) that the net CP-violating effect creating net $`e_R`$ number compared to the tree-level decay is $$\frac{\mathrm{\Gamma }_{\mathrm{\Phi }\to \overline{e}_R\mathrm{\Psi }_R^i\mathrm{\Phi }^{\prime }}-\mathrm{\Gamma }_{\overline{\mathrm{\Phi }}\to e_R\overline{\mathrm{\Psi }}_R^i\overline{\mathrm{\Phi }}^{\prime }}}{\mathrm{\Gamma }_\mathrm{\Phi }^{tot}+\overline{\mathrm{\Gamma }}_{\overline{\mathrm{\Phi }}}^{tot}}=ϵ_p\frac{\mathrm{Im}\left[(𝐡𝐡^{\prime \dagger })_{ei}(𝐡^{\prime }𝐡^{\prime \dagger }𝐡^{\prime }𝐡^{\dagger })_{ie}\right]}{\mathrm{Tr}\left[𝐡𝐡^{\dagger }\right]},$$ (23) where $`ϵ_p\sim 10^{-2}`$ is the phase space factor (and $`i`$ is summed over the non-electron indices). There is also another pair of diagrams which differ only in that the $`\mathrm{\Phi }^{\prime }`$ emission is on the external $`e_R`$ leg, which gives (23) with the indices interchanged. From the result we see clearly that to obtain an effect at this order we indeed need two scalars since, when $`𝐡=𝐡^{\prime }`$, the result in Eq. (23) vanishes. We note that a diagram in which the $`\mathrm{\Phi }^{\prime }`$ on the external leg is the standard model Higgs could dominate if the $`𝐡^{\prime }`$ couplings are all smaller than the Yukawa coupling of the $`\tau `$ lepton ($`y_\tau \approx 10^{-2}`$). We thus write the resultant $`e_R`$ asymmetry as $$\frac{e_R}{s}(T_{\mathrm{dec}})\approx \frac{10^{-2}}{g_{*}}|𝐡|^4\delta _{CP}$$ (24) where $`\delta _{CP}`$ is proportional to the imaginary part in (23). If all the couplings $`h`$ are of the same order this can be of order one, while if there is a hierarchy similar to that in the standard model, it will be correspondingly smaller. 
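The structure of Eq. (23) can be probed numerically. Note that the index placement in the snippet below follows our reconstruction of the garbled original and is an assumption; what it illustrates is robust, namely that the interference term is generically non-zero for two independent complex coupling matrices and vanishes identically when $`𝐡^{\prime }=𝐡`$:

```python
import numpy as np

rng = np.random.default_rng(0)

def cp_numerator(h, hp):
    """Im[(h hp†)_{ei} (hp hp† hp h†)_{ie}], summed over i != e (e = index 0).
    Index structure is a reconstruction of Eq. (23), not verbatim."""
    A = h @ hp.conj().T
    B = hp @ hp.conj().T @ hp @ h.conj().T
    e = 0
    return sum(np.imag(A[e, i] * B[i, e]) for i in range(h.shape[0]) if i != e)

h  = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
hp = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

print(cp_numerator(h, hp))   # generically non-zero
print(cp_numerator(h, h))    # vanishes when h' = h (up to rounding)
```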
### 4.4 The Baryon Asymmetry To arrive at the final baryon asymmetry the cases of Model A and Model B are quite different. In the former case there is no entropy dilution and the final baryon to entropy ratio is quite simply given by (13), i.e. one third of the right electron to entropy ratio (24). We thus have $$\frac{n_B}{s}\approx \frac{10^{-2}}{g_{*}}|𝐡|^4\delta _{CP},\hspace{1em}(\mathrm{Model\ A}),$$ (25) which for roughly equal on- and off-diagonal couplings in $`𝐡`$ and $`𝐡^{\prime }`$ gives a baryon asymmetry of the required size $`n_B/s\sim 10^{-10}`$ for couplings of the order $`10^{-1}-10^{-2}`$. Note that this corresponds (from (19)) to scalar masses as low as about $`M_\varphi \sim 100`$ TeV. In Model B there is the important entropy dilution factor, so that for the final baryon asymmetry we find $$\frac{n_B}{s}\approx \frac{10^{-2}}{g_{*}}|𝐡|^4\delta _{CP}\left(\frac{T_{\mathrm{reh}}}{T_{\mathrm{dec}}}\right)^{\frac{3(8-n)}{n}},\hspace{1em}(\mathrm{Model\ B}).$$ (26) If we take the constraint given in (22) for couplings and $`\delta _{CP}`$ of order one, we can have a baryon asymmetry compatible with that required for nucleosynthesis only for a case $`n\approx 6`$, i.e. when the inflaton rolls in a kinetic energy dominated mode while it decays. The out-of-equilibrium decay condition for the $`\mathrm{\Phi }`$ field (21) is in this case satisfied for $`M_\varphi `$ as small as a few TeV. However the constraints on the flavour changing neutral currents need to be carefully considered, but can still be satisfied (e.g. if one of the couplings $`h_{e\mu }`$ or $`h_{e\tau }`$ is much smaller than the others). This provides in principle an interesting probe at accelerators of these models in a parameter range which is of interest. For the standard matter scaling ($`n=3`$) , or indeed radiation scaling ($`n=4`$) during reheating, the entropy dilution factor in (26) is much too large to allow the generation of the required baryon asymmetry, and the mechanism we have discussed is not a viable one for baryogenesis in these cases. In a different model it may be possible to relax the condition (22) and reduce the dilution factor. This would be appropriate for example if the $`e_R`$ continues to be created all the way down to the electroweak scale, for example in a scenario in which the $`\mathrm{\Phi }`$ particles are themselves directly produced out of equilibrium by the inflaton decay all the way to that scale. Tuning the value of $`T_{\mathrm{reh}}`$ to be just enough to keep the $`e_R`$ out of equilibrium until that time, i.e. to satisfy (7), the case $`n=3`$ gives a dilution by a factor $`\sim 10^6`$, and the case $`n=4`$ by $`\sim 10^4`$. A fairly copious initial $`e_R`$ asymmetry must therefore be produced very close to the electroweak scale in order to give the required baryon asymmetry. The model we have presented here is completely perturbative. It is likely that there are models of non-perturbative decay of a condensate of the $`\mathrm{\Phi }`$ field in which the constraints inferred on the Yukawa coupling $`𝐡`$ may be relaxed. One simple possibility would be a variant of the well-known Affleck-Dine mechanism , with a scalar field $`\mathrm{\Phi }`$ charged under right-handed electron number, which oscillates and decays. This may occur for example in supersymmetric extensions of the Standard Model when an $`A\mathrm{\Phi }^3+h.c.`$ term is present in the potential for a weakly coupled scalar field. When the field decays it creates a net right-handed electron number which is not suppressed by any coupling constant. 
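Putting the pieces together, Eqs. (25)-(26) can be evaluated for sample parameters (a sketch with assumed values; $`\delta _{CP}`$ and the couplings are free parameters of the model):

```python
# Final baryon-to-entropy ratio, Eqs. (25)-(26), for illustrative parameters.
g_star = 1e3

def nB_over_s_model_A(h, delta_CP=1.0):
    return 1e-2 / g_star * h**4 * delta_CP             # Eq. (25), no dilution

def nB_over_s_model_B(h, T_reh, T_dec, n, delta_CP=1.0):
    dilution = (T_reh / T_dec) ** (3.0 * (8 - n) / n)  # inverse of Eq. (8)
    return 1e-2 / g_star * h**4 * delta_CP * dilution  # Eq. (26)

print(f"Model A, h = 0.06: {nB_over_s_model_A(0.06):.1e}")              # ~1e-10
# Model B at the kinetic limit n = 6, with T_dec/T_reh ~ 1e5 (Eq. 22):
print(f"Model B, h = 1:    {nB_over_s_model_B(1.0, 1.0, 1e5, 6):.1e}")  # ~1e-10
```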
The suppression may also potentially be evaded without the scalar field being required to carry a right electron charge. This would occur if there were a resonant decay into the electrons, which may in fact occur through precisely the same Yukawa coupling discussed here. Because they do not have the Yukawa coupling suppression, these mechanisms for producing $`e_R`$ would leave more space for entropy release in models like our model B. However, as we have pointed out, this will only work in the special case that the $`e_R`$ is created very close to the electroweak scale, and the temperature at which radiation domination begins $`T_{\mathrm{reh}}`$ lies just far enough below the electroweak scale to keep the $`e_R`$ out of equilibrium at the electroweak scale. ## 5 Anomaly and Magnetic Fields So far we have neglected the effects related to the abelian anomaly discussed in the introduction. As described in the effect of this term on a finite chemical potential $`\mu _R`$ for right electrons is to cause an instability in modes with $`k<\mu `$. Naively this instability can begin to grow when the corresponding mode enters the horizon (i.e. $`\mu \sim H`$), but when the slowing effect of conductivity $`\sigma `$ in the plasma is taken into account the criterion becomes $`\mu ^2/\sigma \gtrsim H`$. In the standard case, in which the mode must be able to start to evolve before the perturbative $`e_R`$ decay comes back into play, this corresponds to the effect being important if $`\mu /T\gtrsim 10^{-6}`$. Here since the expansion rate is such that the $`e_R`$ come back into equilibrium below the electroweak scale, the lower bound (which depends now simply on the expansion rate at the electroweak scale) will in fact be the same or larger. Moreover, while in the standard case the effect of the perturbative channel is important, and leads to the requirement that a significantly larger asymmetry than the critical one be present initially in order for the source hypermagnetic field to survive to the electroweak scale, here the instability will simply develop all the way to that scale and there will be no damping of the fields due to the appearance of the perturbative channel. The conclusions we can draw are as follows: • Model A: If the initial $`e_R`$ asymmetry is less than the critical value for hypermagnetic field generation, all our analysis given above holds. A baryon asymmetry is produced of the same order, and thus for compatibility with the observed asymmetry one must have the corresponding initial value $`\mu _R\sim 10^{-10}T`$. For initial $`\mu _R`$ larger than the critical value, hypermagnetic field will be generated. The baryon asymmetry generated will depend on the attained value of the chemical potential $`\mu _R`$ (which reduces as the Chern-Simons number grows in the condensate). However, the latter will always be bounded below by the same critical value, and thus the baryon asymmetry also (see also for a discussion of the full evolution of the dynamical equations). Thus we conclude that magnetic field generation from $`\mu _R`$ cannot be attained in this model with an altered expansion rate, since it will always be associated with a baryon asymmetry which is too large. On the other hand, the baryogenesis mechanism from right electrons works perfectly well, and is unaffected by the anomalous effects as the chemical potential is so small. 
• Model B: The effects of the interplay of the baryon generation and the instability causing the growth of magnetic field are much more difficult to evaluate in this case, and are in general model dependent. The fact that the entropy dilution effect mandates a larger initial electron asymmetry, which would then be subject to the instability, suggests that it may be possible to find a model in which both magnetic field and the observed baryon asymmetry could be produced from the right electrons. As was discussed above, the production of the baryon asymmetry would require that a large $`\mu _R`$ chemical potential be produced very close to the electroweak scale. The corollary is that, if it is produced close to the electroweak scale, there will be little time for the instability to evolve and create significant seed fields. One way of getting around this would be in a model with a continuous sourcing of the $`e_R`$ asymmetry, so that the contribution at earlier time may grow into a magnetic field. The driving chemical potential is, however, itself being constantly diluted by entropy production, and the energy in the resultant field also relative to the background energy density. To see whether seed fields of significant magnitude can survive to the electroweak scale would require detailed study in a model of $`e_R`$ generation quite different to that we have discussed. ## 6 Conclusions We have considered here one aspect of cosmologies in which a scalar field dominates the expansion rate prior to nucleosynthesis. Right-handed electrons may remain out of equilibrium until the electroweak scale, so that if they are generated the $`B+L`$ processes of the standard model will lead to a non-zero equilibrium density of baryons of the order of that in the $`e_R`$. We have discussed two kinds of post-inflationary cosmologies in which such a period of scalar field domination can occur: in the first the inflaton rolls away after inflation into a kinetic mode in a steep potential, and the Universe is ‘reheated’ by the gravitational particle production at the end of the inflationary epoch, while in the second the inflaton rolls into a mode which can have a range of scalings and reheats the Universe itself by decaying sufficiently slowly to give a very low reheat temperature. We studied a specific model for the generation of the right handed electron asymmetry in which there are a set of scalars with CP-violating (and flavour changing) couplings to the leptons. We showed that in both scenarios such scalars can decay out of equilibrium at quite low temperatures and produce the desired asymmetry. While our models strongly favoured the case of kinetic mode domination, which has little or no entropy release, we note that in certain very special circumstances which may be satisfied in other models the generation of the observed baryon asymmetry may still be possible in the standard reheating scenario (with matter scaling during the reheating epoch). Finally we considered briefly the effect of the abelian anomaly which destabilizes such charges, and concluded that in the models with kinetic mode domination this effect is unimportant for the baryon number generation, while in the case with large entropy dilution it may be important and might allow the generation of magnetic field as in the case of standard radiation domination. Finally we return to the question of how this kind of mechanism might be implemented in other particle physics models, in particular in more popular (e.g. 
supersymmetric) extensions of the standard model. In general one need not consider necessarily the generation of right-handed electron number, but the generation of any CP odd charge which is effectively conserved after its creation on a timescale which is longer than the expansion time of the Universe at the electroweak scale (when the $`B+L`$ violating processes freeze-out). Given that the expansion rate at the electroweak scale can be enhanced in these models by many orders of magnitude – up to a rate $`\sim 10^{-11}T_{\mathrm{ew}}`$ in models of type A, and $`\sim 10^{-6}T_{\mathrm{ew}}`$ in models of type B – scenarios can be considered in which many of the lighter degrees of freedom will drop out of equilibrium (for example the lighter right-handed quarks). While in the standard model itself there would seem to be the obstacle of prohibitively small CP violation, in extensions there is generically new CP-violating structure in the added sectors (e.g. in the chargino and squark mass matrices of the minimal supersymmetric standard model). The problem of baryogenesis then becomes the problem of the generation prior to the electroweak scale of CP-odd approximately conserved charge using this structure. Given our observation that for very modest masses (as low as a TeV for a particle with a coupling of order one) the decay of these heavier particles occurs out of equilibrium in these cosmologies, there is clearly the interesting possibility of sourcing CP-odd charges in this way, thus creating a baryon asymmetry. We will treat these issues in detail in forthcoming work . ## Acknowledgements We would like to thank Kimmo Kainulainen and Misha Shaposhnikov for useful discussions. MJ thanks Sacha Davidson and Steve Abel for discussion while this work was being completed.
# A High-Eccentricity Low-Mass Companion to HD 89744 ## 1 INTRODUCTION We report on the detection of a massive ($`m_2\mathrm{sin}i=7.2`$ $`M_{\mathrm{JUP}}`$) planet in a highly elliptical ($`e=0.7`$), 256 day orbit about the star HD 89744 (HR 4067, HIP 50786), from radial velocity variations which reveal Keplerian motions of the star. Observations were carried out from 1996 through 1999 using the Advanced Fiber Optic Echelle (AFOE) spectrograph (Brown et al., 1994; Nisenson et al., 1998), a bench-top spectrograph located at the Whipple Observatory 1.5m telescope, and also with the Hamilton spectrograph at the Lick Observatory CAT and Shane telescopes, in November and December of 1999. The AFOE spectrograph is designed primarily for precise radial velocity studies of the seismology of bright stars, and of reflex motions of stars due to planetary companions. Long term stability of the velocity reference is provided by use of an iodine ($`I_2`$) cell (Butler et al., 1996). The AFOE determines radial velocity variations induced by planetary companions with a precision and long-term accuracy of approximately 10 m/s. On the order of 100 relatively bright stars ($`m_v\lesssim 7`$) have been monitored for this purpose since 1995. Since 1995 when the planetary candidate orbiting the star 51 Pegasi was detected (Mayor & Queloz, 1995), some 29 additional candidates have been detected by several groups, all from Doppler shifts measured using precise radial velocity techniques (Marcy & Butler, 1998; Mayor et al., 1998; Noyes et al., 1997; Cochran et al., 1997). HD 89744 (F7 V) was added to the AFOE observing list in early 1996, based on its relatively low chromospheric emission as measured with the Mt. Wilson “HK” chromospheric activity monitoring program (Baliunas et al., 1995). AFOE observations have been obtained regularly since then, and indicated the presence of a planet with a highly eccentric orbit. However data near the companion’s periastron, critical to an accurate determination of the orbital parameters, were not obtained until late 1999. Between October and December 1999, while the companion was near periastron, observations were made at Lick Observatory as well as with the AFOE, to ensure good phase coverage. The data points taken with the Lick CAT and Shane telescopes agree extremely well with the AFOE data, and thus provide a confirmation of the detection along with a precise determination of the ellipticity of the planet’s orbit. ## 2 PROPERTIES OF THE HOST STAR, HD 89744 HD 89744 is an F7V star at a Hipparcos-determined distance of 39.0 parsec. It is listed as a constant star in the Hipparcos catalog . The star has absolute magnitude $`M_v=2.78`$ and color $`B-V=0.531`$ (Perryman, 1997). Comparing its position in the color-magnitude diagram with predictions of stellar evolution calculations, Prieto & Lambert (2000) determine it to have mass $`M=(1.34\pm 0.09)M_{\odot }`$, radius $`R=(2.14\pm 0.1)R_{\odot }`$, and effective temperature $`T_{\mathrm{eff}}=(6166\pm 145)`$ K. Independently, Ng & Bertelli (1998) determine its mass to be $`M=(1.47\pm 0.01)M_{\odot }`$. For the purposes of this paper we adopt the average of the two masses listed and an uncertainty given by their spread: $`M=(1.4\pm 0.09)M_{\odot }`$. The metallicity of HD 89744 has been determined to be \[Fe/H\]$`=0.18`$ by Edvardsson et al. (1993). Its age is determined by Ng & Bertelli (1998) to be $`2.04\pm 0.10`$ Gy. 
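As a quick consistency check on the quoted distance and absolute magnitude (my own arithmetic, not from the paper), the distance modulus gives the star's apparent magnitude:

```python
import math

# Distance modulus check: m_v = M_v + 5*log10(d / 10 pc).
d_pc, M_v = 39.0, 2.78
m_v = M_v + 5 * math.log10(d_pc / 10.0)
print(f"m_v ≈ {m_v:.2f}")   # ≈ 5.7, a naked-eye star, consistent with its
                            # bright-star catalog (HR 4067) designation
```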
From rotational modulation of the Ca II flux, Baliunas, Sokoloff, & Soon (1996) determined that the rotation period of the star is $`P_{\mathrm{rot}}=9`$ days. This, combined with the above-mentioned radius of the star, implies an equatorial velocity $`v`$ = 12 km/sec. Rotational broadening of the spectrum implies $`v\mathrm{sin}i`$$`=8`$ km/s (Bernacca & Perinotto, 1970; Uesugi & Fukuda, 1970). This further implies that the star’s rotational equator is inclined by about $`40^{}`$ to the plane of the sky. Speckle observations (McAlister et al., 1989) have not revealed any indication of a stellar companion to HD 89744. Table 1 summarizes the relevant parameters of the star HD 89744. ## 3 OBSERVATIONS ### 3.1 Instrumentation and Data Reduction The AFOE is a bench-mounted, fiber-fed, high-resolution cross-dispersed echelle spectrograph designed for high radial velocity precision and stability both on short time scales (better than 1 m/s over hours, for asteroseismology) and long term (approximately 10 m/s, for radial velocity exo-planet searches, through use of an iodine absorption cell). The AFOE is located at the 1.5m telescope of the Whipple Observatory on Mt. Hopkins, Arizona. A more complete description of the AFOE is available in Brown et al. (1994) and Nisenson et al. (1998). The AFOE exo-planet survey program monitors the radial velocity of about 100 stars brighter than $`m_v=7.5`$, with an accuracy of 10 – 15 m/s for integrations with a signal-to-noise ratio of 100 to 150. Most observations consist of three consecutive exposures, primarily to limit cosmic ray contamination by keeping the exposure times short. Our data reduction methodology is conceptually similar to that described by Butler et al. (1996), but differs in details. Echelle images are dark subtracted and one dimensional spectra extracted and corrected for scattered light, then flat-fielded using the spectrum of a tungsten lamp. For each of the six spectral orders that contain strong iodine lines a model is adjusted to match the observed star-plus-iodine spectrum in the least-squares sense. This model is computed using a Doppler-shifted high SNR spectrum of the star alone, plus a very high resolution high SNR spectrum of the iodine cell. The model incorporates the sought-for Doppler-shift of the star as well as mechanical drifts within the spectrograph, the instrumental wavelength solution, the instrumental resolution profile, and a residual scattered light correction. The resulting radial velocities, after correction for the motion of the telescope relative to the solar system barycenter, are averaged for all six orders and the three consecutive observations. The scatter around the mean provides an estimate of the uncertainty. The root-mean-square (RMS) velocity of several radial velocity standard stars is commensurate with this estimate of uncertainty. The Doppler analysis for the Lick observations is described in detail in Butler et al. (1996). ### 3.2 Observations and Orbital Fit AFOE observations of HD 89744 were obtained on 74 separate nights between December 1996 and December 1999. Additional observations on 14 nights during November and December 1999 were also taken at the Lick Observatory CAT and Shane telescopes, to ensure complete phase coverage near the November 1999 periastron passage. The data with their uncertainties are plotted in Figure 1. 
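For readers who wish to reproduce the orbital fit discussed below, here is a minimal sketch (Python) of a Keplerian radial-velocity model together with the reduced $`\chi `$ statistic defined in eq. (1) below. The element values at the bottom are placeholders for illustration only; in particular the velocity semi-amplitude $`K`$ is not quoted in the surrounding text, so the number used here is an assumption, not the fitted value of Table 2.

```python
import numpy as np

def kepler_E(M, e, tol=1e-10):
    """Solve Kepler's equation E - e*sin(E) = M by Newton iteration."""
    E = np.array(M, dtype=float).copy()
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.max(np.abs(dE)) < tol:
            break
    return E

def rv_model(t, P, T0, e, omega, K, gamma):
    """Stellar radial velocity: v(t) = K[cos(nu + omega) + e*cos(omega)] + gamma."""
    M = 2 * np.pi * (((t - T0) / P) % 1.0)          # mean anomaly
    E = kepler_E(M, e)                               # eccentric anomaly
    nu = 2 * np.arctan2(np.sqrt(1 + e) * np.sin(E / 2),
                        np.sqrt(1 - e) * np.cos(E / 2))  # true anomaly
    return K * (np.cos(nu + omega) + e * np.cos(omega)) + gamma

def reduced_chi(obs, model, sigma, n_params):
    """Reduced chi of eq. (1): sqrt( sum[((O-M)/sigma)^2] / (N_o - N_m) )."""
    return np.sqrt(np.sum(((obs - model) / sigma) ** 2) / (len(obs) - n_params))

# Placeholder elements (P and e from the text; T0, omega, K, gamma assumed):
t = np.linspace(0.0, 512.0, 200)                     # days
v = rv_model(t, P=256.0, T0=0.0, e=0.70, omega=0.5, K=250.0, gamma=0.0)
```

With the actual fitted elements in place of these placeholders, applying `reduced_chi` to the residuals of such a model is exactly the computation behind the reduced $`\chi `$ values quoted next.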
The RMS of the zero-averaged velocities for the AFOE data alone is 129.1 m/s, while the averaged uncertainty on these observations is 15.2 m/s; the corresponding reduced $`\chi `$ is 10. The reduced $`\chi `$ is given by $$\chi =\left[\frac{1}{N_o-N_m}\sum _{i=1}^{N_o}\left(\frac{O_i-M_i}{\sigma _i}\right)^2\right]^{\frac{1}{2}}$$ (1) where $`O_i`$ are the observations, $`\sigma _i`$ the observational uncertainties, $`M_i`$ the corresponding values of the model, $`N_o`$ the number of observations and $`N_m`$ the number of adjustable parameters in the model. After fitting a Keplerian orbit in the least-squares sense, the RMS of the residuals is 26.2 m/s, corresponding to a reduced $`\chi `$ of 1.7. If we include the Lick observations, by adding one additional free parameter to account for the arbitrary offset between the two data sets, the RMS of the residuals is 20.5 m/s, or a reduced $`\chi `$ of 1.6. (One velocity point was rejected based on a 3-pass 3-$`\sigma `$ rejection algorithm.) The resulting orbital parameters are given in Table 2. These are almost identical to the parameters obtained when using only the AFOE observations, except that the combined data set leads to smaller uncertainties, primarily on the amplitude but also on the eccentricity. The orbital fit phase plot and the residuals to the fit are shown in Figures 2 and 3. The periodic variation of the radial velocity, and the highly eccentric character of the orbit, are evident. For a stellar mass of 1.4 $`M_{\odot }`$, the orbital elements imply a minimum mass for the companion of $`m_2\mathrm{sin}i`$$`=7.2`$ $`M_{\mathrm{JUP}}`$, and a semi-major axis of 0.88 AU. ## 4 DISCUSSION The radial velocity data shown in Figure 2 are unambiguous in revealing a periodic radial velocity variation of HD 89744, which can be fit well by a Keplerian orbit. Since observations obtained with two different radial velocity instruments at two different telescopes yield exactly the same radial velocity variations within their respective instrumental errors, the evidence is compelling that the measured velocities are real variations of the star. There is no known way a stellar signal in a late F-type main sequence star could mimic a Keplerian orbital signature with such a long period and large amplitude, and with such a large eccentricity. Hence we are driven to the interpretation that the star is orbited by a low-mass companion, HD 89744 b ($`m_2\mathrm{sin}i`$$`=7.2`$ $`M_{\mathrm{JUP}}`$), in an orbit with semi-major axis 0.88 AU and eccentricity 0.70. The residuals to the orbital fit are larger than would be expected from internal errors in the data; this is true both for the AFOE data and the Hamilton echelle data. However, Saar et al. (1998) conclude that for a typical F star with this rotation period, the velocity jitter induced by stellar magnetic activity and inhomogeneous convection is approximately 10 m/s. Adding this jitter in quadrature to the uncertainties lowers the reduced $`\chi `$ to 1.2. The residuals display a long term trend ($`\sim 15`$ m/s/year) that is marginally significant. While it might be caused by a residual instrumental drift in the AFOE data, we cannot rule out that it might be due to a distant companion; further observations over a longer baseline are required. The orbital eccentricity of the companion to HD 89744 is among the highest planetary eccentricities known.
Only two other planets, 16 Cyg B b ($`e=0.68`$, $`a_2=1.70`$ AU; Cochran et al., 1997) and HD 222582 b ($`e=0.71`$, $`a_2=1.35`$ AU; Vogt et al., 1999) have comparable eccentricities. HD 89744 b, with $`a_2=0.88`$ AU, has the smallest semi-major axis of the three. At periastron it dips to within 0.26 AU, still well outside a periastron distance which could lead to tidal circularization within the stellar lifetime. The discovery of the first highly eccentric planet, 16 Cyg B b, led to the suggestion (Mazeh et al., 1997; Holman et al., 1997) that its eccentricity may have been “pumped up” by the influence of a nearby companion star, 16 Cyg A. However, neither HD 89744 nor HD 222582 is orbited by a stellar companion. Thus a different explanation is required for their high eccentricities, an explanation that might also apply to 16 Cyg B b. The high orbital eccentricity of the HD 89744 system continues the trend that planetary-mass companions whose semi-major axes are greater than about 0.15 AU tend to have a broad range of eccentricities, with no apparent trends of eccentricity with mass or semi-major axis. This circumstance must be explained by any successful planetary formation and migration scenario. As noted by others (e.g., Vogt et al., 1999; Marcy et al., 1999), this causes difficulties with a number of proposed mechanisms for planetary formation and migration. The values of $`v`$sin$`i`$, $`P_{\mathrm{rot}}`$, and $`R`$ for HD 89744 given in Table 1 imply an inclination of the stellar rotational equator of $`i=42^{\circ }`$ (i.e., $`\mathrm{sin}i`$= 0.66). Moreover, if we assume that the orbit is coplanar with the star’s equatorial plane, we infer that $`m_2`$ = 10.8 $`M_{\mathrm{JUP}}`$. The astrometric orbital amplitude would be 0.17 mas, too small for Hipparcos detection but within the range of next-generation astrometric missions. The mass of 10 $`M_{\mathrm{JUP}}`$ suggested by the stellar rotational data is near the upper limit of masses associated with extra-solar giant planets (e.g., Marcy et al., 2000). If such large masses hold up to further investigations, then theoretical understanding of the origin and evolution of extra-solar giant planets must be able to accommodate a mass range spanning at least values between 0.5 $`M_{\mathrm{JUP}}`$ and 10 $`M_{\mathrm{JUP}}`$. The metallicity of HD 89744, \[Fe/H\]= 0.18, is substantially higher than the mean for nearby sun-like stars (Favata et al., 1997; Gonzalez, 1998). HD 89744 was placed on the AFOE observing list without reference to its metallicity; therefore the association of its high metallicity with the presence of a planet is not a selection effect. This association continues the trend, already noted elsewhere (e.g., Gonzalez & Law, 1999, and references therein), that the metallicity of stars with planets tends to be higher than that of stars without planets. We are grateful to the Mt. Hopkins observing and support staff, especially Ed Bennet, Perry Berlind, Mike Calkins, Ted Groener, Robert Hutchins, Jim Peters and Wayne Peters; we also acknowledge observing with the AFOE by Scott Horner and Ted Kennelly. We are very grateful to Adam Contos for his dedicated efforts in observing and reducing a significant fraction of data taken with the AFOE. We also thank Ari Behar for his help in reducing some of the AFOE data. DAF acknowledges the dedication and pioneering efforts of G. Marcy and P. Butler who started the Lick planet search project.
She would also like to thank Sabine Frink, David Nidever and Amy Reines for obtaining some of the Lick observations. The AFOE group also acknowledges support from NASA grant NAG5–75005 and from the Smithsonian Institution Scholarly Studies program. DAF acknowledges support from NASA grant NAG5–8861. In preparation of this paper, we made use of the Simbad database operated at CDS, Strasbourg, France and the NASA Astrophysics Data System.
no-problem/0003/hep-ph0003009.html
ar5iv
text
# Muon colliders and the non-perturbative dynamics of the Higgs boson (invited talk presented by A. Ghinculov at the 5th International Conference on Physics Potential and Development of $`\mu ^+\mu ^-`$ Colliders, December 15-17, 1999, Fairmont Hotel, San Francisco, CA, USA) ## Abstract A muon collider operating in the TeV energy range can be an ideal $`s`$-channel Higgs boson factory. This is especially true for a very heavy Higgs boson. The non-perturbative dynamical aspects of such a Higgs boson were recently investigated with large $`N`$ expansion methods at next-to-leading order, and reveal the existence of a mass saturation effect. Even at strong coupling, the Higgs resonance remains always below 1 TeV. However, if the coupling is strong enough, the resonance becomes impossible to detect. A central question in today’s particle physics is how electroweak symmetry breaking is realized in nature. Further experimental input is needed for distinguishing between various theoretical possibilities, and this will be the main goal of the LHC. The simplest of these possibilities is the minimal scalar sector of the standard model, which predicts the existence of one single Higgs particle. The sensitivity of low energy quantum corrections to the mass of the Higgs boson is small because of Veltman’s screening theorem. Therefore the indirect Higgs mass determination from radiative corrections is rather imprecise, in spite of the impressive accuracy of LEP, SLC, and Tevatron measurements. Current electroweak data fits based on the minimal standard model favor a lighter Higgs boson, with a central value around 110 GeV, which is close to the region excluded by direct production bounds. So far no significant deviations from the standard model radiative corrections were measured which would hint towards the existence of additional degrees of freedom at higher energy. However, their existence is strongly supported by well-known open questions of the standard model on the theoretical side. Such degrees of freedom have the potential to induce additional radiative corrections and thus shift the prediction for the Higgs boson mass. It is conceivable that, once built, the LHC will discover a Higgs resonance considerably heavier than the central values suggested by electroweak data fits at present. An interesting feature of a possible muon collider is that it can be used as an $`s`$-channel Higgs factory. Here we would like to discuss the implications of the non-perturbative dynamics of the scalar sector for $`\mu ^+\mu ^-`$ Higgs factories. We will argue that due to the non-perturbative dynamics of the scalar sector, a possible muon collider will not need an energy much higher than 1 TeV to study even a strongly coupled standard Higgs boson. However, it may need a high luminosity. A heavy Higgs boson implies a strongly self-interacting scalar sector. This complicates the theoretical analysis because at some point perturbation theory becomes unreliable. A few radiative corrections induced by heavy Higgs bosons are available in higher order. Their convergence properties were studied by several authors, and revealed rather large theoretical uncertainties. In order to avoid the problems of perturbation theory at strong coupling, such as large renormalization scheme uncertainties and the blow-up of radiative corrections in higher loop order, a non-perturbative approach is necessary.
We performed a study of the Higgs sector at strong coupling by using non-perturbative $`1/N`$ expansion techniques at higher order. This study revealed the existence of an interesting mass saturation effect. When the coupling constant of the scalar sector is increased, the mass of the Higgs boson remains bounded under a saturation value just under 1 TeV, while its width continues to increase. Along the lines of ’t Hooft’s work on planar QCD , the large $`N`$ expansion has attracted a lot of attention by holding the promise to solve nonabelian gauge theories non-perturbatively. It was also used in the study of critical phenomena. Its connections to matrix models, two-dimensional gravity, and string theory were also explored. Given that the standard model’s Higgs sector is a gauged $`SU(2)`$ sigma model, the $`1/N`$ expansion suggests itself naturally for studying it at strong coupling. At leading order in $`1/N`$, this was initiated in ref. . Unfortunately, the leading order solution proves to be quite a poor approximation, which in the weak coupling limit deviates substantially from perturbation theory. Because of this it cannot be used in realistic phenomenological studies. In ref. we extended this study to next-to-leading order. It turns out that the next-to-leading order solution is impressively accurate. In the weak coupling limit it competes with the best perturbative results available at two-loop precision. The starting point of the $`1/N`$ analysis is the Lagrangean of the standard model’s scalar sector promoted to an $`O(N)`$-symmetric sigma model. The well-known equivalence theorem provides a relation between the physics of the purely scalar sector and the physics of electroweak vector bosons. The standard model case is recovered in the $`N=4`$ limit: $$\mathcal{L}_1=\frac{1}{2}\partial _\nu \mathrm{\Phi }_0\partial ^\nu \mathrm{\Phi }_0-\frac{\mu _0^2}{2}\mathrm{\Phi }_0^2-\frac{\lambda _0}{4!N}\mathrm{\Phi }_0^4,\qquad \mathrm{\Phi }_0\equiv (\varphi _0^1,\varphi _0^2,\mathrm{\ldots },\varphi _0^N)$$ (1) The next step is to introduce an additional unphysical field $`\chi `$ in this Lagrangian: $`\mathcal{L}_2`$ $`=`$ $`\mathcal{L}_1+{\displaystyle \frac{3N}{2\lambda _0}}\left(\chi _0-{\displaystyle \frac{\lambda _0}{6N}}\mathrm{\Phi }_0^2-\mu _0^2\right)^2`$ (2) $`=`$ $`{\displaystyle \frac{1}{2}}\partial _\nu \mathrm{\Phi }_0\partial ^\nu \mathrm{\Phi }_0-{\displaystyle \frac{1}{2}}\chi _0\mathrm{\Phi }_0^2+{\displaystyle \frac{3N}{2\lambda _0}}\chi _0^2-{\displaystyle \frac{3\mu _0^2N}{\lambda _0}}\chi _0+const.`$ The auxiliary field $`\chi `$ does not correspond to a dynamical degree of freedom. Its equation of motion is simply a constraint and can be used for eliminating $`\chi `$. While the introduction of the auxiliary field does not change the dynamics, it does alter the Feynman rules by eliminating the scalar quartic couplings. This proves to be extremely helpful for calculations beyond leading order in $`1/N`$. Denoting the Higgs boson by $`\sigma `$ and the Goldstone bosons by $`\pi `$, the Feynman rules derived from the Lagrangean $`\mathcal{L}_2`$ have only trilinear vertices of the type $`\chi \sigma \sigma `$ and $`\chi \pi \pi `$. One can easily count the powers of $`N`$ of a Feynman graph by noticing that closed Goldstone loops give rise to a factor $`N`$, $`\chi \chi `$ propagators have a factor $`1/N`$, and mixed $`\chi \sigma `$ propagators have a factor $`1/\sqrt{N}`$. In figure 1 we show the Feynman diagrams which we need for calculating Higgs production and decay processes at muon colliders at next-to-leading order in the $`1/N`$ expansion.
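As a quick consistency check on the auxiliary-field construction (implicit in the text above, spelled out here for clarity), the $`\chi _0`$ equation of motion following from $`\mathcal{L}_2`$ is purely algebraic, $$\frac{\partial \mathcal{L}_2}{\partial \chi _0}=\frac{3N}{\lambda _0}\left(\chi _0-\frac{\lambda _0}{6N}\mathrm{\Phi }_0^2-\mu _0^2\right)=0\qquad \Longrightarrow \qquad \chi _0=\mu _0^2+\frac{\lambda _0}{6N}\mathrm{\Phi }_0^2,$$ and substituting this solution back into $`\mathcal{L}_2`$ makes the added quadratic term vanish identically, recovering $`\mathcal{L}_1`$ up to an irrelevant constant. The two Lagrangeans therefore describe the same dynamics, which is why the trilinear-vertex Feynman rules may be used in place of the quartic ones.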
These are all one-, two-, and three-point functions of the sigma model. We note that the summation of leading order renormalon chains on internal propagators of these diagrams leads to an additional Euclidean pole in the propagators. The origin and physical content of the tachyon pole is discussed in ref. . When effectively calculating the diagrams in figure 1, we use the minimal tachyonic subtraction discussed in ref. for treating it. We calculated the diagrams shown in figure 1 numerically, along the lines of ref. . Once they are available numerically, they can be used for deriving amplitudes of physical processes. Two Higgs processes are of interest at $`\mu ^+\mu ^-`$ $`s`$-channel Higgs factories: $`\mu ^+\mu ^-\to H\to t\overline{t}`$ and $`\mu ^+\mu ^-\to H\to ZZ,W^+W^-`$. Their amplitudes are given in the $`1/N`$ expansion by the following expression at next-to-leading order: $`\mathcal{M}_{f\overline{f}}`$ $`=`$ $`{\displaystyle \frac{1}{s-m^2(s)\left[1-\frac{1}{N}f_1(s)\right]}}`$ $`\mathcal{M}_{WW}`$ $`=`$ $`{\displaystyle \frac{m^2(s)}{\sqrt{N}v}}{\displaystyle \frac{1-\frac{1}{N}f_2}{s-m^2(s)\left[1-\frac{1}{N}f_1(s)\right]}}`$ (3) Here, the correction functions $`f_1`$ and $`f_2`$ are given by a combination of the two- and three-point functions defined in figure 1: $`f_1(s)`$ $`=`$ $`{\displaystyle \frac{m^2(s)}{v^2}}\widehat{\alpha }(s)+2\widehat{\gamma }(s)+{\displaystyle \frac{v^2}{m^2(s)}}\left[\widehat{\beta }(s)-2{\displaystyle \frac{s-m^2(s)}{v^2}}\left(\delta Z_\sigma -\delta Z_\pi \right)\right]`$ $`f_2(s)`$ $`=`$ $`{\displaystyle \frac{m^2(s)}{v^2}}\widehat{\alpha }(s)+\widehat{\gamma }(s)-\widehat{\varphi }(s)-{\displaystyle \frac{v^2}{m^2(s)}}\widehat{\eta }(s)`$ (4) The wave function renormalizations $`\delta Z_\sigma `$, $`\delta Z_\pi `$ can be extracted from $`\widehat{\beta }`$, $`\widehat{\gamma }`$. The hat in the expressions above means that the multi-loop diagrams are subtracted recursively in the ultraviolet, according to the Bogoliubov-Parasiuk-Hepp-Zimmermann procedure . We note that by performing these ultraviolet subtractions we introduce a renormalization scale. However, in the final physical correction functions $`f_1`$ and $`f_2`$ this renormalization scheme dependence cancels out. The final result is manifestly independent of the choice of the renormalization scheme. In figure 2 we show numerical results for the $`\mu ^+\mu ^-\to H\to t\overline{t}`$ and $`\mu ^+\mu ^-\to H\to ZZ,W^+W^-`$ processes of eqs. (3). In both processes the Higgs mass saturation effect shows up. When the strength of the coupling increases, the peak of the resonance shifts towards higher energy, up to a saturation value just under 1 TeV, and then starts to shift back towards lower energy. At the same time, the width continues to increase and the resonance becomes flat and difficult to detect experimentally. To conclude, we performed a non-perturbative study of the two main Higgs processes of interest at a future muon collider. Due to the non-perturbative dynamics of the Higgs sector, a standard Higgs particle is bound to result in a resonance with a peak below 1 TeV. Therefore, a muon collider will not need energies much larger than 1 TeV to cover the whole range where a standard Higgs may exist. However, due to non-perturbative dynamics, at strong coupling the experimental detection becomes difficult. To measure a flat Higgs resonance will require precise knowledge of the backgrounds. Detection will be a matter of luminosity and not of center of mass energy.
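To see the luminosity point qualitatively, the following toy sketch (Python) compares the lineshape of a fixed-mass resonance as its width grows. This is a naive constant-width Breit-Wigner stand-in chosen purely for illustration, not the large-$`N`$ amplitude of eqs. (3); the mass and width values are arbitrary assumptions.

```python
import numpy as np

def breit_wigner(sqrt_s, m, gamma):
    """|amplitude|^2 for a toy fixed-width relativistic Breit-Wigner resonance."""
    s = sqrt_s**2
    return 1.0 / ((s - m**2)**2 + (m * gamma)**2)

sqrt_s = np.linspace(0.5, 1.5, 1000)   # center-of-mass energy scan, TeV
m = 0.9                                 # toy "saturated" mass, TeV
for gamma in (0.05, 0.2, 0.5):          # increasing width, TeV
    line = breit_wigner(sqrt_s, m, gamma)
    contrast = line.max() / line.min()  # peak-to-tail contrast across the scan
    print(f"Gamma = {gamma:4.2f} TeV  ->  peak/tail contrast ~ {contrast:10.1f}")
```

The contrast collapses rapidly as the width grows, which is the sense in which finding a broad resonance above backgrounds becomes a question of integrated luminosity rather than of center of mass energy.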
Finally, if the coupling becomes strong enough, the Higgs boson will still remain under 1 TeV but will become impossible to detect with a given luminosity. Acknowledgements The work of A.G. was supported by the US Department of Energy. The work of T.B. was supported by the EU Fourth Training Programme ”Training and Mobility of Researchers”, Network ”Quantum Chromodynamics and the Deep Structure of Elementary Particles”, contract FMRX-CT98-0194 (DG 12 - MIHT).
no-problem/0003/quant-ph0003122.html
ar5iv
text
# Quantum Computation with hot and cold ions: An assessment of proposed schemes. ## I Introduction Of all the proposed technologies for quantum information processing devices, arguably one of the most promising and certainly one of the most popular is trapped ions. This scheme, discovered by Ignacio Cirac and Peter Zoller , and demonstrated experimentally shortly afterwards by Monroe et al. , is currently being pursued by about a half-dozen groups world-wide (for an overview of this work, see, for example refs. ). A vital ingredient of trapped ion quantum computing is the ability to cool trapped ions down to their quantum ground state by sideband cooling. Using controlled laser pulses, the quantum state of the ions’ collective oscillation modes (i.e. the ions’ external degrees of freedom) can then be altered conditionally on the internal quantum state of the ions’ valence electrons, and vice-versa. This allows quantum logic gates to be performed. The current state-of-the-art (as of spring, 2000) is that two groups have succeeded in cooling strings of a few ions to the quantum ground state , and that entanglement of up to four ions has been experimentally reported . The fidelity of the quantum logic gates performed in trapped-ion quantum computers relies crucially on the quantum state of the ions’ collective oscillatory degrees of freedom. In the original Cirac-Zoller scheme the ions must be in their quantum ground state of these degrees of freedom (the quanta of which are widely referred to as phonons). If the purity of this quantum state were to be degraded by the action of external perturbations (which, given the fact that ions couple to any externally applied electric field, seems quite likely) then the fidelity of quantum operations naturally will suffer. The maintenance of the cold ions in their oscillatory quantum ground state seems at the moment to be the biggest single problem standing in the way of advancing this field. The solution is being tackled in two ways: firstly the understanding and nullification of the experimental causes of the “heating” of the trapped ions, and secondly the investigation of alternative schemes for performing quantum logic operations which relax the strict condition of being in the quantum ground state of the phonon modes. This paper is a brief review and assessment of these schemes. ## II Heating of ions The influence of random electromagnetic fields on trapped ions has been analyzed by various authors ; because this theory impacts on our later discussions, we will give a brief reprise of it here. Consider $`N`$ ions confined in a trap. The trap is assumed to be sufficiently anisotropic, and the ions sufficiently cold, that they lie crystallized along an axis of the trap in which the effective trapping potential is weakest, which we shall denote as the x-axis. Because the ions are interacting via the Coulomb force, their motion will be strongly coupled. Their small amplitude fluctuations are best described in terms of normal modes, each of which can be treated as an independent harmonic oscillator . There will be a total of $`N`$ such modes along the weak axis (we will neglect motion along the directions of strong confinement). We shall number these modes in order of increasing resonance frequency, the lowest ($`p=1`$) mode being the center of mass mode, in which the ions oscillate as if rigidly clamped together.
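To make the normal-mode picture concrete, here is a minimal numerical sketch (Python). It follows the standard quadratic expansion of the trap-plus-Coulomb potential cited above, in dimensionless units where the trap frequency is 1; the code itself is illustrative and not taken from the papers under review. The lowest eigenfrequency always comes out at exactly the trap frequency (the center of mass mode), and the next at $`\sqrt{3}`$ times it (the stretch mode), independent of $`N`$.

```python
import numpy as np
from scipy.optimize import minimize

def coulomb_trap_energy(u):
    """Dimensionless potential: harmonic trap plus mutual Coulomb repulsion."""
    pair = 0.0
    for i in range(len(u)):
        for j in range(i + 1, len(u)):
            pair += 1.0 / abs(u[i] - u[j])
    return 0.5 * np.sum(u**2) + pair

def axial_mode_frequencies(n_ions):
    """Normal-mode frequencies along the weak axis, in units of omega_x."""
    guess = np.linspace(-1.0, 1.0, n_ions) * n_ions**0.6   # rough starting chain
    res = minimize(coulomb_trap_energy, guess, method="Nelder-Mead",
                   options={"xatol": 1e-10, "fatol": 1e-14, "maxiter": 50000})
    u = np.sort(res.x)
    # Hessian of the potential at equilibrium (the ion coupling matrix):
    A = np.zeros((n_ions, n_ions))
    for i in range(n_ions):
        for j in range(n_ions):
            if i != j:
                A[i, j] = -2.0 / abs(u[i] - u[j])**3
    np.fill_diagonal(A, 1.0 - A.sum(axis=1))
    return np.sqrt(np.linalg.eigvalsh(A))

print(axial_mode_frequencies(2))  # -> [1.0, 1.732...]: CM mode, stretch mode
print(axial_mode_frequencies(3))  # -> [1.0, 1.732..., 2.408...]
```

The normalized eigenvectors of the coupling matrix `A` are the $`b_n^{(p)}`$ coefficients that appear in the quantum treatment which follows.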
In the quantum mechanical description, each mode is characterized by creation and annihilation operators $`\widehat{a}_p^{\dagger }`$ and $`\widehat{a}_p`$ (where $`p=1,\mathrm{\ldots },N`$). The ions are interacting with an external electric field $`𝐄(𝐫,t)`$. The Hamiltonian in this case is given by the expression $$\widehat{H}=i\mathrm{\hbar }\sum _{p=1}^{N}\left[u_p(t)\widehat{a}_p^{\dagger }-u_p^{*}(t)\widehat{a}_p\right],$$ (1) where $$u_p(t)=\frac{ie}{\sqrt{2M\mathrm{\hbar }\omega _p}}\sum _{n=1}^{N}E_x(𝐫_n,t)b_n^{(p)}\mathrm{exp}\left(i\omega _pt\right).$$ (2) In eq.(2), $`b_n^{(p)}`$ is the $`n`$-th element of the $`p`$-th normalized eigenvector of the ion coupling matrix , $`\omega _p`$ being its resonance frequency, and $`E_x`$ is the component of the electric field along the weak axis of the trap. In what follows, the center of mass phonon mode ($`p=1`$), whose frequency is equal to the frequency $`\omega _x`$ of the harmonic trapping potential, will have special importance. The frequencies at which the externally applied fields are resonant with the ions’ motion are at most a few megahertz; the wavelengths of such radiation will therefore not be less than 100 meters or so. The separation of the ions is of the order of 10 $`\mu \text{m}`$, or $`10^{-7}`$ wavelengths. Thus spatial frequencies in the applied field on the spatial scale of the ions’ separation will be very evanescent, and to a very good approximation one can assume $`E_x(𝐫_n,t)\approx E_x(t)`$, i.e. the field is constant over the extent of the ion string. Using the fact that $`\sum _{n=1}^{N}b_n^{(p)}=\sqrt{N}\delta _{p,1}`$, the interaction Hamiltonian becomes $$\widehat{H}=i\mathrm{\hbar }u_1(t)\widehat{a}_1^{\dagger }+h.a.,$$ (3) where $$u_1(t)=ie\sqrt{\frac{N}{2M\mathrm{\hbar }\omega _x}}E_x(t)\mathrm{exp}\left(i\omega _xt\right),$$ (4) where $`\omega _x\equiv \omega _1`$ is the trapping frequency along the x-axis. In other words, spatially uniform fields will only interact with the center-of-mass mode of the ions, which is physically intuitive since some form of differential force must be applied to excite modes in which ions move relative to one another. The dynamics governed by this Hamiltonian can be solved exactly. The “heating time”, i.e. the time taken for the occupation number of the center of mass mode to increase by one, is given by the formula $$\tau _N=\frac{M\mathrm{\hbar }\omega _x}{Ne^2E_{RMS}^2T},$$ (5) where $`E_{RMS}`$ is the root mean square value of $`E_x(t)`$ and $`T`$ is its coherence time (we have assumed that $`T\ll 2\pi /\omega _x`$). ## III Quantum Computing using “higher” phonon modes The analysis of the heating of ions presented in the previous section is directly linked to the first, and conceptually simplest, method for quantum computing with trapped ions in a manner which avoids the heating problem . Quite simply the “higher” ($`p>1`$) modes of the ions’ collective oscillations can be utilized in place of the center of mass ($`p=1`$) mode originally considered by Cirac and Zoller. The pulse sequence required is exactly that proposed by those authors, with the slight added complication that different laser frequencies (i.e. the sideband corresponding to the stretch mode in question) must be employed, and that the laser-ion coupling varies between different ions for the higher modes , requiring different pulse durations for different ions. However, as has been pointed out by Saito et al.
(in the context of high-temperature NMR experiments) the overall complexity of a computer algorithm involving classical control problems of this kind can nullify any speed-up that can be achieved via quantum parallelism. Experimentally the “higher” modes of the two-ion system are observed to have heating times in excess of 5 $`\mu `$sec, as opposed to heating times of less than 0.1 $`\mu `$sec for the center of mass modes, confirming that they are indeed well isolated from the influence of external heating fields, and can be used as a reliable quantum information bus. The heating of the center of mass mode has an important indirect influence. As this mode becomes more and more excited, the wavefunction of the ions becomes more spatially smeared-out, causing a random phase shift of the ions. This effect is analogous to the Debye-Waller effect in X-ray crystallography. One possible solution for this problem has been proposed, namely the use of sympathetic cooling by a separate species of ion, allowing the excitation of the center of mass mode to be reduced and kept constant. This scheme however poses the problem of devising a method of loading a trap with an ion of a distinct species and providing a second set of lasers to cool it. ## IV Quantum computation with macroscopically resolved quantum states: the scheme of Poyatos, Cirac and Zoller The essential principle of the scheme proposed by Poyatos, Cirac and Zoller for “hot” ion quantum computation is to create coherent states of the ions’ collective oscillations, rather than Fock states. A laser pulse, appropriately tuned, flips the internal state of the ion and simultaneously provides a momentum “kick” to the wavepacket of the trapped ion in a direction which is dependent on the internal state of the ion. Thus if the ion/qubit is in state $`|0\rangle `$ it will start to move off in one direction; if it is in state $`|1\rangle `$ it will start to move in the opposite direction. If it is in a superposition state, then a macroscopic entangled state (or “cat” state) will be created. Because of the strong ion-ion coupling due to the Coulomb interaction, a second ion will also evolve into two spatially dependent wavepackets dependent on the state of the first ion (see Fig.2). If the momentum kick imparted by the initial laser pulse is sufficiently strong then the wavepacket associated with the $`|0\rangle `$ will, after a short time, be spatially distinct from that associated with the $`|1\rangle `$ state. A laser may then be directed on that distinct wavepacket of the second ion, allowing its state to be changed dependent on the state of the first ion (Fig.3). Once this is done, the motion of the wavepackets in the traps restores them to their original positions (Fig.4) and a third pulse, reversing the effect of the first pulse and nullifying the momentum kick, is applied to the first ion, completing the gate operation (Fig.5). The traveling wave laser pulses which provide the momentum kicks are described by the interaction Hamiltonian $$\widehat{H}_I=\frac{\mathrm{\hbar }\mathrm{\Omega }}{2}\widehat{\sigma }^{(+)}\mathrm{exp}\left[i\sum _{p=1}^{N}\eta _p\left(\widehat{a}_p+\widehat{a}_p^{\dagger }\right)\right]+h.a.$$ (6) In this equation $`\mathrm{\Omega }`$ is the Rabi frequency, which is proportional to the electric field strength of the laser (see ref. for details) and the operators $`\widehat{\sigma }^{(+)}\equiv |0\rangle \langle 1|`$ and $`\widehat{\sigma }^{(-)}\equiv |1\rangle \langle 0|`$ are respectively the lowering and raising operators for the internal states of the ion (treated as a two level system).
In this paper we will be considering the dynamics of one or two ions only, and the context should make it clear to which of the two ions the operators refer; in some cases subscripts are appended. The constant $`\eta _p`$ is the Lamb-Dicke parameter, which characterizes the strength of the coupling between the laser and the oscillatory mode. It varies between different modes and, in general, from ion to ion. If this Hamiltonian acts for a time $`t_{las}=\pi /\mathrm{\Omega }\ll 2\pi /\omega _x`$ then the resultant transformation of the state of ion 1 is $$|\varphi ^{\prime }\rangle =\left[\widehat{\sigma }^{(+)}\prod _{p}^{N}\widehat{D}_p\left(i\eta _p\right)+\widehat{\sigma }^{(-)}\prod _{p}^{N}\widehat{D}_p\left(-i\eta _p\right)\right]|\varphi \rangle ,$$ (7) where $`\widehat{D}_p\left(v\right)`$ is the displacement operator for the $`p`$-th phonon mode in question. The fact that all of the phonon modes are excited by this operation leads to somewhat complicated dynamics of the excited wavepackets. This is alleviated somewhat by the use of slightly different trapping potentials (see ref. for details), which may be realized in small scale traps, in which the electrodes are close to the ions (which could have a detrimental effect on the heating of the ions). (Footnote: Another possible method of modifying the ions’ collective dynamics is to insert one or more ions of a different mass into the ion chain, as has been investigated in the context of sympathetic cooling by Kielpinski et al. .) As the number of ions increases this dynamics becomes more and more complicated, so that one has to wait longer and longer times for the wavepackets to re-combine (as in Fig.4) prior to completion of the gate. This phenomenon unfortunately limits this scheme to no more than two or three ions. In practice the ion wavepackets do not have to be completely separated spatially so that a laser can be focused on one but not the other (as shown in Fig.3); so long as they are separated somewhat, a laser beam could be applied in such a fashion that one of the wavepackets had its internal states flipped while the other received a pulse of the same duration, but different intensity, contrived to leave the internal states effectively unaltered (e.g. a “$`4\pi `$” pulse). Care must however be exercised that laser fields are constant over the spatial extent of each wavepacket, otherwise spatial information will become imprinted on the internal degrees of freedom. Finite temperature enters simply by increasing the size of the ions’ wavepackets. The scheme derives its immunity from heating from the use of macroscopic effects (i.e. the separation of the ions’ wavepackets) which are affected only slightly by the heating. A possible source of decoherence would be differential heating, when the two spatially separated wavepackets of a single ion are excited by different random fields, so that a mixed state of the internal degrees of freedom is created when the wavepackets are recombined. ## V Quantum computation with virtual phonons: the scheme of Mølmer and Sørensen Mølmer and Sørensen have proposed related techniques for creating both multi-ion entangled states and for quantum computation with ions in thermal motion . The scheme is in fact valid for any mixed state of the ions’ collective oscillation modes, and is not confined to thermal equilibrium states.
It relies on the virtual excitation of phonon states, in a manner analogous to the virtual excitation of some excited state of an atom or molecule in Raman processes. Laser fields with two spectral components detuned equally to the red and to the blue of the atomic resonance frequency are applied to a pair of ions in the trap. The interaction is described by the following Hamiltonian $`\widehat{H}_I`$ $`=`$ $`\mathrm{\hbar }\mathrm{\Omega }\widehat{J}^{(+)}\left\{1+i\eta \left(\widehat{a}e^{-i\omega _xt}+\widehat{a}^{\dagger }e^{i\omega _xt}\right)\right\}\mathrm{cos}(\delta t)+h.a.`$ (8) $`=`$ $`\mathrm{\hbar }\mathrm{\Omega }e^{i\delta t}\widehat{J}_x-\mathrm{\hbar }\mathrm{\Omega }\eta e^{i(\delta +\omega _x)t}\widehat{a}^{\dagger }\widehat{J}_y-\mathrm{\hbar }\mathrm{\Omega }\eta e^{i(\delta -\omega _x)t}\widehat{J}_y\widehat{a}+h.a.`$ (9) In this equation $`\delta `$ is the detuning of the laser beam from the resonance frequency of the two level system. For large values of $`\delta `$ it is convenient to consider this interaction in terms of an effective Hamiltonian (see appendix), which neglects the effects of very rapidly varying terms. In this case, the effective Hamiltonian is $`\widehat{H}_{eff}`$ $`=`$ $`{\displaystyle \frac{\mathrm{\hbar }\mathrm{\Omega }^2\eta ^2}{(\delta +\omega _x)}}[\widehat{J}_y\widehat{a},\widehat{a}^{\dagger }\widehat{J}_y]+{\displaystyle \frac{\mathrm{\hbar }\mathrm{\Omega }^2\eta ^2}{(\delta -\omega _x)}}[\widehat{a}^{\dagger }\widehat{J}_y,\widehat{J}_y\widehat{a}]`$ (10) $`=`$ $`-{\displaystyle \frac{\mathrm{\hbar }\mathrm{\Omega }^2\eta ^2}{(\delta -\omega _x)}}\left({\displaystyle \frac{2\omega _x}{\delta +\omega _x}}\right)\widehat{J}_y^2.`$ (11) This interaction is equivalent to a conditional quantum logic gate performed between the two ions, and can be used to create multiparticle entangled states. This scheme is very attractive because, while it has the possibility of being scalable to many ions, its operation is independent of the occupation number of the phonon modes, and so its fidelity is not degraded by excitation during the gate operations themselves. Its chief drawback seems to be the time taken to perform gate operations. In ref. an example is given of population oscillations associated with the above entangling operations in the presence of noise. The Rabi frequency of these oscillations was approximately $`\omega _x/4500`$ (c.f. Fig.4 of ref., with appropriate change of notation). Given that trap frequencies must be of the order of $`\omega _x\approx (2\pi )\times 500`$ kHz in order for the ions to be individually resolvable by focused lasers (it is not necessary to resolve ions individually for this scheme to be used to create entanglement; however, some form of differential laser addressing will be necessary in order to perform quantum computations involving more than two qubits), this implies a gate time of the order of 50 milliseconds. As explained in ref., it is possible to decrease this time by reducing the detuning $`\delta `$ of the laser, at the cost of increasing the susceptibility of this scheme to heating during gate operations. Nevertheless Mølmer and Sørensen’s scheme is a very compelling idea, and has been used experimentally to create entangled states of multiple ions . ## VI Quantum computation via adiabatic passages: the scheme of Schneider, James and Milburn The scheme proposed by Schneider et al. relies on two operations: first the phonon-number dependent a.c.
Stark shift introduced by D’Helon and Milburn , and second the use of stimulated Raman adiabatic passage to carry out certain kinds of transitions independently of the occupation number of the phonon mode used as a quantum information bus. First let us consider the origin of the D’Helon-Milburn shift. The Hamiltonian for a single two-level ion at the node of a detuned classical standing wave is given by the following formula: $`\widehat{H}_I`$ $`=`$ $`{\displaystyle \frac{\mathrm{\hbar }\mathrm{\Omega }\eta }{2}}\widehat{\sigma }^{(+)}\left(\widehat{a}e^{-i\omega _xt}+\widehat{a}^{\dagger }e^{i\omega _xt}\right)e^{i\mathrm{\Delta }t}+h.a.`$ (12) $`=`$ $`{\displaystyle \frac{\mathrm{\hbar }\mathrm{\Omega }\eta }{2}}\left(\widehat{\sigma }^{(+)}\widehat{a}e^{i(\mathrm{\Delta }-\omega _x)t}+\widehat{\sigma }^{(+)}\widehat{a}^{\dagger }e^{i(\mathrm{\Delta }+\omega _x)t}\right)+h.a.,`$ (13) where $`\mathrm{\Delta }`$ is the laser detuning. In the limit of large detuning ($`\mathrm{\Delta }\gg \omega _x`$) the effective Hamiltonian is (using the result derived in the appendix): $`\widehat{H}_{eff}`$ $`=`$ $`{\displaystyle \frac{\mathrm{\hbar }\mathrm{\Omega }^2\eta ^2}{2(\mathrm{\Delta }-\omega _x)}}[\widehat{\sigma }^{(-)}\widehat{a}^{\dagger },\widehat{\sigma }^{(+)}\widehat{a}]+{\displaystyle \frac{\mathrm{\hbar }\mathrm{\Omega }^2\eta ^2}{2(\mathrm{\Delta }+\omega _x)}}[\widehat{\sigma }^{(-)}\widehat{a},\widehat{\sigma }^{(+)}\widehat{a}^{\dagger }]`$ (15) $`\approx `$ $`{\displaystyle \frac{\mathrm{\hbar }\mathrm{\Omega }^2\eta ^2}{2\mathrm{\Delta }}}\left(2\widehat{n}+1\right)\widehat{\sigma }_z={\displaystyle \frac{\mathrm{\hbar }\mathrm{\Omega }^2\eta ^2}{\mathrm{\Delta }}}\widehat{n}\left(\widehat{\sigma }_z+1/2\right)+{\displaystyle \frac{\mathrm{\hbar }\mathrm{\Omega }^2\eta ^2}{2\mathrm{\Delta }}}\left(\widehat{\sigma }_z-\widehat{n}\right).`$ (16) The second term on the right hand side of the final equation represents a level shift, which can be compensated for by detuning the laser. If we choose the duration $`\tau `$ of this interaction to be $`\tau =\pi \mathrm{\Delta }/\mathrm{\Omega }^2\eta ^2`$, the time evolution is represented by the operator $$\widehat{𝒮}_t=\mathrm{exp}[i\widehat{a}^{\dagger }\widehat{a}(\widehat{\sigma }_z+1/2)\pi ].$$ (17) This time evolution flips the phase of the ion when the CM mode is in an odd state and the ion is in its excited state, thus providing us with a conditional phase shift for an ion and the CM mode. This operation will be performed only on one of the ions (the target qubit) involved in the quantum gate (which we denote by the subscript $`t`$). Operations acting on the second ion involved in the gate (the control qubit) will be denoted by the subscript $`c`$. The adiabatic passage required for the gate operation can be realized using two lasers, traditionally called the pump and the Stokes (see fig.6). The pump laser is polarized to couple the control qubit state $`|1\rangle _c`$ to some second auxiliary state $`|3\rangle _c`$ and is detuned by an amount $`\mathrm{\Delta }`$. The Stokes laser couples to the red side band transition $`|2\rangle _c|n+1\rangle \to |3\rangle _c|n\rangle `$, with the same detuning $`\mathrm{\Delta }`$. If the population we want to transfer adiabatically is initially in the state $`|1\rangle _c|n\rangle `$, we turn on the Stokes field (i.e. the sideband laser) and then slowly turn on the pump field (i.e. the carrier laser) until both lasers are turned on fully. Then we slowly turn off the Stokes laser: this is the famous “counter-intuitive” pulse sequence used in adiabatic passage techniques. The adiabatic passage must be performed very slowly.
The condition in our scheme is that $`T\gg 1/\mathrm{\Omega }_{p,n},1/\mathrm{\Omega }_{S,n}`$, where $`T`$ is the duration of the adiabatic passage and $`\mathrm{\Omega }_{p,n}`$ ($`\mathrm{\Omega }_{S,n}`$) are the effective Rabi frequencies for the pump and the Stokes transition, respectively . Using the adiabatic passage we can transfer the population from $`|1\rangle _c|n\rangle `$ to $`|2\rangle _c|n+1\rangle `$. To invert the adiabatic passage, we just have to interchange the roles of the pump and the Stokes field. We will denote the adiabatic passage by operators $`𝒜_1^+`$ and $`𝒜_1^-`$ defined as follows: $`𝒜_j^+`$ $`:`$ $`|1\rangle _j|n\rangle \to |2\rangle _j|n+1\rangle `$ (18) $`𝒜_j^-`$ $`:`$ $`|2\rangle _j|n+1\rangle \to |1\rangle _j|n\rangle .`$ (19) The utility of this adiabatic passage scheme is that, despite the fact that the laser transition rates $`\mathrm{\Omega }_{p,n}`$ and $`\mathrm{\Omega }_{S,n}`$ are dependent on the phonon occupation number $`n`$, the adiabatic passage using the counter-intuitive pulse sequence is independent of $`n`$. These two operations are combined in the sequence shown in fig.7 in order to perform quantum gate operations. A detailed breakdown of the operation, including the intermediate states at every stage, is given in ref. . The principal drawbacks of this scheme are two-fold. (Footnote: In order that the following remarks be viewed in their correct context, the reader should be aware that the author of the present article was one of the authors of the scheme by Schneider et al.) Because of the adiabatic passage involved, it will of necessity be slow, gates requiring times of the order of a millisecond (although it has this in common with the other schemes described in this paper). Secondly this scheme (unlike those of Poyatos et al. and Mølmer and Sørensen) is vulnerable to heating during the gate operation. A further complication is the presence of multiple phonon modes in real experiments; to take into account their influence, eq.(16) needs to be rewritten as follows: $$\widehat{H}_{eff}=\frac{\mathrm{\hbar }\mathrm{\Omega }^2}{2\mathrm{\Delta }}\sum _{p=1}^{N}\eta _p^2\left(2\widehat{n}_p+1\right)\widehat{\sigma }_z.$$ (20) The $`\widehat{𝒮}_t`$ gate will only function as designed when all of the modes except the one to be used as the quantum information bus have zero population. Thus this scheme at best is a means of avoiding the necessity of reducing the population of every mode to its quantum ground state; one mode can be left in a mixed state. ## VII Assessment The various schemes for quantum computation with trapped ions in principle meet many of the criteria for scalable quantum computation technology. Here we discuss the various criteria one by one. ### A Initialization The quantum information register (the ions) and the quantum information bus (the phonon modes) can be initialized reliably using laser cooling and optical pumping. Important aspects of these techniques have already been demonstrated experimentally. The “hot” ion schemes discussed here, if they can be realized experimentally, ease the stringent requirements on preparation of the initial state of the ions’ collective oscillation modes. ### B Gate Operations Quantum logic can be performed using the various schemes outlined above. The common ingredient is laser control of the quantum states of the ions’ internal and external degrees of freedom, requiring pulses of known duration and strength focused accurately on individual ions.
Methods for alleviating the laser focusing problem by altering the ions’ resonance frequency by various means such as non-uniform electric or magnetic fields have been proposed . The ability to address individual ions with laser beams and control their quantum states has been demonstrated experimentally by two groups using various means . ### C Isolation from the Environment The internal degrees of freedom of the ions, in which the quantum information is stored, have very long decoherence times (especially when Raman transitions form the basis of the single-qubit operations). The principal form of environmental disruption suffered by ion traps is disturbance of the motional degrees of freedom, the proposed methods of avoiding this problem being the subject of this article. The “higher modes” scheme is well isolated from the environment, except for the indirect influence of the Debye-Waller effect. Both the Poyatos et al. scheme and the Mølmer-Sørensen scheme are not intrinsically isolated from the environment, but avoid its influence in various ingenious ways. The Schneider et al. scheme will suffer from environmental influences during gate operations unless they can be nullified, for example by using “higher modes”. ### D Error correction There is nothing intrinsic that will rule out implementation of fault tolerant quantum computation in ion traps when sufficient numbers of ions become available. Ancilla ions can be prepared in their quantum ground state independently of other ions in the register. The use of multiple stretch modes (there are N-1 such modes in the weak trapping direction) allows quantum gates to be performed in parallel. Read out can be performed at intermediate stages during calculations without destroying the qubit being read, or disturbing other ions in the register unduly (there will be recoil during the read out that has the possibility of excitation of the oscillatory modes). ### E Read Out The read-out of the quantum state of ions using a cycling transition has been demonstrated experimentally with high efficiency and reliability . Indeed these experiments are the only ones in which high efficiency strong measurement of a single quantum system (as opposed to an ensemble of systems) has been performed. ### F Scalability The ultimate number of ions that can be stored in a string in an ion trap and used for quantum computation is limited by a number of factors. Probably the most important is the growing complexity of the sideband spectrum as the number of ions grows. Even in the case of highly anisotropic traps (in which transverse oscillations can be neglected) the number of oscillation modes is equal to the number of ions, and each mode has a distinct frequency, with an infinite ladder of excitation resonances. In addition one has to take into account multi-phonon resonances; the whole leading to a very complicated structure in frequency space. The extent to which this “spectrum of death” can be understood and exploited, by systematic identification of resonances, careful bookkeeping and tailoring of Lamb-Dicke coefficients, remains to be seen. Another effect which places an upper bound on the number of ions in a single register is the fact that the spatial separation of the ions decreases as $`N^{-0.56}`$, making their spatial resolution by a focused laser beam more and more difficult.
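To put that spacing scaling in perspective, here is a small numerical sketch (Python). It combines the $`N^{-0.56}`$ scaling quoted above with the natural length scale $`\mathrm{}=(e^2/4\pi ϵ_0M\omega _x^2)^{1/3}`$ of a linear ion chain; the prefactor of 2, the ion species and the trap frequency are illustrative assumptions taken from the ion-chain literature, not values stated in this paper.

```python
import numpy as np

E_CHARGE = 1.602176634e-19      # C
EPS0     = 8.8541878128e-12     # F/m
AMU      = 1.66053906660e-27    # kg

def min_ion_spacing(n_ions, mass_amu, trap_freq_hz, prefactor=2.0, power=-0.56):
    """Approximate minimum ion spacing in a linear chain (assumed empirical fit).

    Uses s_min ~ prefactor * l * N**power with the chain length scale
    l = (e^2 / (4 pi eps0 M omega^2))**(1/3).
    """
    omega = 2 * np.pi * trap_freq_hz
    M = mass_amu * AMU
    l = (E_CHARGE**2 / (4 * np.pi * EPS0 * M * omega**2)) ** (1.0 / 3.0)
    return prefactor * l * n_ions**power

# Illustrative example: singly ionized calcium in a 500 kHz axial well
for n in (2, 10, 50, 170):
    print(f"N = {n:4d}:  s_min ~ {min_ion_spacing(n, 40.0, 5.0e5) * 1e6:.1f} um")
```

For chains approaching a hundred ions or more the spacing shrinks toward the micron scale of a diffraction-limited laser spot, which is the addressing difficulty referred to above.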
Another, more definite upper bound on the number of ions that can be stored in a linear configuration is the onset of a phase transition to a more complex configuration such as a zig-zag; for traps optimized for quantum computation with singly ionized calcium this occurs at about 170 ions. It is however arguable whether or not the onset of instabilities makes quantum computation impossible. If only small numbers of ions can be reliably used for quantum computation in a single ion trap, multiple traps will be needed for large scale devices. DeVoe has proposed fabricating multiple elliptical traps, each suitable for a few dozen ions, on a substrate with a density of 100 traps/cm². Some form of reliable, high efficiency quantum communication channel to link the multiple traps would need to be implemented . An alternative scheme has been proposed by Wineland et al. in which two traps are used. One trap is used to store a large number of ions in a readily accessible manner (e.g. in an easily rotated ring configuration); each of these ions forms a qubit of the register of the quantum computer. When a gate operation is to be performed, the two involved ions are extracted from the storage trap by applying static electric fields in an appropriately controlled manner, and transferred to a separate logic trap where they can be cooled and quantum logic operations can be performed on them. The cooling could be done sympathetically by a third ion of a separate species stored in the logic trap (thereby preserving the quantum information stored in the two logic ions which otherwise would be lost during cooling); in these circumstances either the original Cirac-Zoller scheme or any of the “hot gates” schemes described here can be used as the mechanism for performing the logic; in particular the Poyatos et al. scheme, whose principal drawback seems to be its lack of scalability beyond two or three ions, would no longer be at a disadvantage, and given that it is considerably faster than both the Mølmer-Sørensen and Schneider et al. schemes, might be attractive. In conclusion, the variety and richness of the quantum computing schemes that have been devised for ion traps illustrates the great flexibility of this technology. Uniquely amongst the proposals for quantum computing technology, the question for ion traps is not “does it work?” but rather “how far can it be developed?” ## Acknowledgements The author wishes to thank Gerard Milburn, Sara Schneider, Andrew White, Michael Holzscheiter and Dave Wineland for useful conversations and correspondence. This work was performed in part while the author was a guest at the Department of Physics, University of Queensland, Brisbane, Australia. He would like to thank the faculty, staff and students for their warm hospitality. This work was funded in part by the U.S. National Security Agency. ## Appendix: Effective Hamiltonians for Detuned Interactions We start with the Schrödinger equation in the interaction picture, i.e.
$$i\mathrm{\hbar }\frac{\partial }{\partial t}|\psi (t)\rangle =\widehat{H}_I(t)|\psi (t)\rangle $$ (21) The formal solution of this first order partial differential equation is $$|\psi (t)\rangle =|\psi (0)\rangle +\frac{1}{i\mathrm{\hbar }}\int _0^t\widehat{H}_I(t^{\prime })|\psi (t^{\prime })\rangle dt^{\prime }.$$ (22) Substituting this result back into eq.(21), we obtain $$i\mathrm{\hbar }\frac{\partial }{\partial t}|\psi (t)\rangle =\widehat{H}_I(t)|\psi (0)\rangle +\frac{1}{i\mathrm{\hbar }}\int _0^t\widehat{H}_I(t)\widehat{H}_I(t^{\prime })|\psi (t^{\prime })\rangle dt^{\prime }$$ (23) If we assume that the interaction Hamiltonian is strongly detuned, in the sense that $`\widehat{H}_I(t)`$ consists of a number of highly oscillatory terms, then to a good approximation the first term on the right hand side of eq.(23) can be neglected, and we can adopt a Markovian approximation for the second term, so that the evolution of $`|\psi (t)\rangle `$ is approximately governed by the following equation $$i\mathrm{\hbar }\frac{\partial }{\partial t}|\psi (t)\rangle \approx \widehat{H}_{eff}(t)|\psi (t)\rangle ,$$ (24) where $$\widehat{H}_{eff}(t)=\frac{1}{i\mathrm{\hbar }}\widehat{H}_I(t)\int \widehat{H}_I(t^{\prime })dt^{\prime },$$ (25) where the indefinite integral is evaluated at time $`t`$ without a constant of integration. These arguments can be placed on more rigorous footing by considering the evolution of a time-averaged wavefunction. We will now assume that the interaction Hamiltonian consists explicitly of a combination of harmonically time varying components, i.e. $$\widehat{H}_I(t)=\sum _m\widehat{h}_m\mathrm{exp}(i\omega _mt)+h.a.,$$ (26) where $`h.a.`$ stands for the Hermitian adjoint of the preceding term, and the frequencies $`\omega _m`$ are all distinct (i.e. $`m\ne n\Rightarrow \omega _m\ne \omega _n`$). In this case the effective Hamiltonian $`\widehat{H}_{eff}(t)`$ reduces to a simple form useful in the analysis of laser-ion interactions: $`\widehat{H}_{eff}(t)`$ $`=`$ $`{\displaystyle \sum _{m,n}}{\displaystyle \frac{1}{i\mathrm{\hbar }}}\left(\widehat{h}_me^{i\omega _mt}+\widehat{h}_m^{\dagger }e^{-i\omega _mt}\right)\left(\widehat{h}_n{\displaystyle \frac{e^{i\omega _nt}}{i\omega _n}}-\widehat{h}_n^{\dagger }{\displaystyle \frac{e^{-i\omega _nt}}{i\omega _n}}\right)`$ (27) $`=`$ $`{\displaystyle \sum _{m,n}}{\displaystyle \frac{1}{\mathrm{\hbar }\omega _n}}\left(-\widehat{h}_m\widehat{h}_ne^{i(\omega _m+\omega _n)t}+\widehat{h}_m\widehat{h}_n^{\dagger }e^{i(\omega _m-\omega _n)t}-\widehat{h}_m^{\dagger }\widehat{h}_ne^{-i(\omega _m-\omega _n)t}+\widehat{h}_m^{\dagger }\widehat{h}_n^{\dagger }e^{-i(\omega _m+\omega _n)t}\right)`$ (28) $`=`$ $`{\displaystyle \sum _m}{\displaystyle \frac{1}{\mathrm{\hbar }\omega _m}}[\widehat{h}_m,\widehat{h}_m^{\dagger }]+\text{oscillating terms}.`$ (29) If we confine our interest to dynamics which are time-averaged over a period much longer than the period of any of the oscillations present in the effective Hamiltonian (i.e. averaged over a time $`T\gg 2\pi /\mathrm{min}\{|\omega _m-\omega _n|\}`$) then the oscillating terms may be neglected, and we are left with the following simple formula for the effective Hamiltonian: $$\widehat{H}_{eff}(t)=\sum _m\frac{1}{\mathrm{\hbar }\omega _m}[\widehat{h}_m,\widehat{h}_m^{\dagger }].$$ (30)
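As a quick numerical illustration of eq. (30) (this check and its parameter values are ours, not an example from the text), take a single oscillating term with $`\widehat{h}=(\mathrm{\hbar }\mathrm{\Omega }/2)\widehat{\sigma }^{(+)}`$. Eq. (30) then gives the familiar light shift $`(\mathrm{\hbar }\mathrm{\Omega }^2/4\omega )[\widehat{\sigma }^{(+)},\widehat{\sigma }^{(-)}]`$, i.e. a shift of the $`|0\rangle `$ level by $`+\mathrm{\Omega }^2/4\omega `$ in these units, which a brute-force integration of the Schrödinger equation reproduces:

```python
import numpy as np

hbar = 1.0
Omega, omega = 0.05, 1.0                    # weak coupling, fast oscillation (arbitrary units)
sp = np.array([[0, 1], [0, 0]], complex)    # sigma^(+) = |0><1|, basis ordered (|0>, |1>)
h = 0.5 * hbar * Omega * sp                 # the single term h_m of eq. (26)

def H_I(t):
    return h * np.exp(1j * omega * t) + h.conj().T * np.exp(-1j * omega * t)

def evolve(psi, t_final, dt=0.005):
    """RK4 integration of i*hbar d|psi>/dt = H_I(t)|psi>."""
    f = lambda t, y: (-1j / hbar) * (H_I(t) @ y)
    t = 0.0
    while t < t_final - 1e-12:
        k1 = f(t, psi)
        k2 = f(t + dt / 2, psi + dt / 2 * k1)
        k3 = f(t + dt / 2, psi + dt / 2 * k2)
        k4 = f(t + dt, psi + dt * k3)
        psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
    return psi

T = 500.0
psi_T = evolve(np.array([1.0, 0.0], complex), T)    # start in |0>
# The accumulated phase of the |0> amplitude gives its effective energy:
print("numerical level shift :", -np.angle(psi_T[0]) / T)
print("eq. (30) prediction   :", -Omega**2 / (4 * omega) * (-1.0))  # = +Omega^2/(4 omega)
```

The two printed numbers agree to the accuracy expected of the leading-order effective Hamiltonian, i.e. up to corrections of relative order $`(\mathrm{\Omega }/\omega )^2`$.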
no-problem/0003/physics0003049.html
ar5iv
text
# Ultrastable CO₂ Laser Trapping of Lithium Fermions ## Abstract We demonstrate an ultrastable $`\mathrm{CO}_2`$ laser trap that provides tight confinement of neutral atoms with negligible optical scattering and minimal laser noise induced heating. Using this method, fermionic $`{}_{}{}^{6}\mathrm{Li}`$ atoms are stored in a 0.4 mK deep well with a 1/e trap lifetime of 300 seconds, consistent with a background pressure of $`10^{-11}`$ Torr. To our knowledge, this is the longest storage time ever achieved with an all-optical trap, comparable to the best reported magnetic traps. PACS numbers: 32.80.Pj Copyright 1999 by the American Physical Society Off-resonance optical traps have been explored for many years as an attractive means of tightly confining neutral atoms . Far off resonance optical traps (FORTs) employ large detunings from resonance to achieve low optical heating rates and high density, as well as to enable trapping of multiple atomic spin states in nearly identical potentials . For $`\mathrm{CO}_2`$ laser traps , the extremely large detuning from resonance and the very low optical frequency lead to optical scattering rates that are measured in photons per atom per hour. Hence, optical heating is negligible. Such traps are potentially important for development of new standards and sensors based on spectroscopic methods, for precision measurements such as determination of electric dipole moments in atoms , and for fundamental studies of cold, weakly interacting atomic or molecular vapors. However, all-optical atom traps have suffered from unexplained heating mechanisms that limit the minimum attainable temperatures and the maximum storage times in an ultrahigh vacuum . Recently, we have shown that to achieve long storage times in all-optical traps that are not limited by optical heating rates, heating arising from laser intensity noise and beam pointing noise must be stringently controlled . Properly designed $`\mathrm{CO}_2`$ lasers are powerful and extremely stable in both frequency and intensity , resulting in laser-noise-induced heating times that are measured in hours. Hence, in an ultrahigh vacuum (UHV) environment, where loss and heating arising from background gas collisions are minimized , extremely long storage times should be obtainable using ultrastable $`\mathrm{CO}_2`$ laser traps. In this Letter, we report storage of $`{}_{}{}^{6}\mathrm{Li}`$ fermions in an ultrastable $`\mathrm{CO}_2`$ laser trap. Trap 1/e lifetimes of 300 seconds are obtained, consistent with a background pressure of $`10^{-11}`$ Torr. This constitutes the first experimental proof of principle that extremely long storage times can be achieved in all-optical traps. Since arbitrary hyperfine states can be trapped, this system will enable exploration of s-wave scattering in a weakly interacting Fermi gas. The well-depth for a focused $`\mathrm{CO}_2`$ laser trap is determined by the induced dipole potential $`U=-\alpha _g\overline{\mathcal{E}^2}/2`$, where $`\alpha _g`$ is, to a good approximation, the ground state static polarizability , and $`\overline{\mathcal{E}^2}`$ is the time average of the square of the laser field. In terms of the maximum laser intensity $`I`$ for the gaussian $`\mathrm{CO}_2`$ laser beam, the ground state well-depth $`U_0`$ in Hz is $$\frac{U_0}{h}(\mathrm{Hz})=\frac{2\pi }{hc}\alpha _gI.$$ (1) In our experiments, a laser power of P=40 W typically is obtained in the trap region.
A lens is used to focus the trap beam to a field 1/e radius of $`a_f=50\mu `$m, yielding an intensity of $`I=2P/(\pi a_f^2)\simeq 1.0\mathrm{MW}/\mathrm{cm}^2`$. For the I-P(20) line with $`\lambda _{\mathrm{CO}_2}\simeq 10.6\mu `$m, the Rayleigh length is $`z_0=\pi a_f^2/\lambda _{\mathrm{CO}_2}=0.74`$ mm. Using the Li ground state polarizability of $`\alpha _g=24.3\times 10^{-24}\mathrm{cm}^3`$ yields a well depth of $`U_0/h=8`$ MHz, which is approximately 0.4 mK. For this tight trap, the $`{}_{}{}^{6}\mathrm{Li}`$ radial oscillation frequency is 4.7 kHz and the axial frequency is 0.22 kHz. For $`{}_{}{}^{6}\mathrm{Li}`$ in a $`\mathrm{CO}_2`$ laser trap, both the excited and the ground states are attracted to the well. The excited state static polarizability is $`\alpha _p=18.9\times 10^{-24}\mathrm{cm}^3`$, only 20% less than that of the ground state. With a ground state well depth of 8 MHz, the frequency of the first resonance transition in the trap is shifted by only 1.6 MHz at the center of the trap and thus does not significantly alter the operation of the magneto-optical trap (MOT) from which the trap is loaded. The optical scattering rate $`R_s`$ in the $`\mathrm{CO}_2`$ laser trap arises from Larmor scattering and can be written as $`R_s=\sigma _SI/(\hbar ck)`$, where the Larmor scattering cross section $`\sigma _S`$ is $$\sigma _S=\frac{8\pi }{3}\alpha _g^2k^4.$$ (2) Here, $`k=2\pi /\lambda _{\mathrm{CO}_2}`$. Using $`\alpha _g=24.3\times 10^{-24}\mathrm{cm}^3`$ yields $`\sigma _S=5.9\times 10^{-30}\mathrm{cm}^2`$. At 1.0 MW/$`\mathrm{cm}^2`$, the scattering rate for lithium is then $`2.9\times 10^{-4}`$/sec, corresponding to a scattering time of $`3400`$ sec for one photon per atom. As a result, the recoil heating rate is negligible. Heating can arise from laser intensity noise and beam pointing fluctuations. For simplicity, we estimate the noise-induced heating rates for our trap using a harmonic oscillator approximation which is valid for atoms near the bottom of the well. This provides only a rough estimate of the expected heating rates in the gaussian well, since the trap oscillation frequency decreases as the energy approaches the top of the well. A detailed discussion of noise-induced heating in gaussian potential wells will be given in a future publication. In the harmonic oscillator approximation, intensity noise causes parametric heating and an exponential increase in the average energy for each direction of oscillation, $`\dot{E}=\mathrm{\Gamma }E`$, where the rate constant in $`\mathrm{sec}^{-1}`$ is $$\mathrm{\Gamma }=\pi ^2\nu ^2S_I(2\nu ).$$ (3) Here $`\nu `$ is a trap oscillation frequency and $`S_I(2\nu )`$ is the power spectrum of the fractional intensity noise in $`\mathrm{fraction}^2/\mathrm{Hz}`$. For our $`\mathrm{CO}_2`$ laser, $`S_I(9.4\mathrm{kHz})\simeq 1.0\times 10^{-13}`$/Hz, where it is comparable to the detector noise. This is three orders of magnitude lower than that measured for an argon ion laser. The corresponding heating time for radial oscillation in our trap at $`\nu =4.7`$ kHz is $`\mathrm{\Gamma }^{-1}\simeq 4.6\times 10^4`$ sec. For the axial oscillation, $`\nu =220`$ Hz, $`S_I(440\mathrm{Hz})\simeq 1.1\times 10^{-11}`$/Hz and $`\mathrm{\Gamma }^{-1}\simeq 2\times 10^5`$ sec. Fluctuations in the position of the trapping laser beam cause a constant heating rate $`\dot{E}=\dot{Q}`$, where $$\dot{Q}=4\pi ^4M\nu ^4S_x(\nu ).$$ (4) Here $`M`$ is the atom mass and $`S_x`$ is the position noise power spectrum in $`\mathrm{cm}^2`$/Hz at the trap focus.
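As a brief aside, the scales quoted above are easy to verify. The back-of-the-envelope script below (CGS units; an illustrative check, not part of the original Letter) reproduces the well depth of Eq. (1), the Larmor scattering rate of Eq. (2) and the intensity-noise heating time of Eq. (3) to within rounding of the inputs:

```python
# Quick check of Eqs. (1)-(3) in CGS units (illustrative, not lab code).
import math

h, hbar, c, kB = 6.626e-27, 1.055e-27, 2.998e10, 1.381e-16   # erg s, erg s, cm/s, erg/K
alpha_g = 24.3e-24          # Li ground-state polarizability, cm^3
I = 1.0e13                  # 1.0 MW/cm^2 in erg s^-1 cm^-2
lam = 10.6e-4               # CO2 wavelength, cm
k = 2 * math.pi / lam

U0_Hz = 2 * math.pi * alpha_g * I / (h * c)          # Eq. (1)
print("U0/h  = %.1f MHz" % (U0_Hz / 1e6))            # ~8 MHz
print("U0/kB = %.2f mK" % (1e3 * h * U0_Hz / kB))    # ~0.4 mK

sigma_S = (8 * math.pi / 3) * alpha_g**2 * k**4      # Eq. (2)
R_s = sigma_S * I / (hbar * c * k)                   # Larmor scattering rate
print("sigma_S = %.1e cm^2" % sigma_S)               # ~6e-30 cm^2
print("R_s = %.1e /s  (%.0f s per photon)" % (R_s, 1 / R_s))

# Eq. (3): parametric heating from intensity noise at twice the trap frequency
nu, S_I = 4.7e3, 1.0e-13                             # Hz, fraction^2/Hz
Gamma = math.pi**2 * nu**2 * S_I
print("1/Gamma = %.1e s" % (1 / Gamma))              # ~4.6e4 s
```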
For $`{}_{}{}^{6}\mathrm{Li}`$, one obtains $`\dot{Q}(\mathrm{nK}/\mathrm{s})=2.8\times 10^{-4}\nu ^4(\mathrm{Hz})S_x(\mu \mathrm{m}^2/\mathrm{Hz})`$. Position noise only couples directly to the radial motion where $`\nu \simeq 4.7`$ kHz. For our laser, $`S_x(4.7\mathrm{kHz})\lesssim 3.4\times 10^{-10}\mu \mathrm{m}^2`$/Hz, where the upper bound is determined by the noise floor for our detection method. This yields $`\dot{Q}\lesssim 46`$ nK/s. Hence, we expect the trap lifetime to be limited by the background pressure of our UHV system. The expected number of trapped atoms $`N_T`$ can be estimated as follows. We take the trapping potential to be approximately gaussian in three dimensions: $$U(\stackrel{}{x})=-U_0\mathrm{exp}(-x^2/a^2-y^2/b^2-z^2/z_o^2),$$ (5) where $`a=b=a_f/\sqrt{2}`$ is the intensity 1/e radius and $`z_o`$ is the Rayleigh length. Here, the lorentzian dependence of the trap beam intensity on the axial position $`z`$ is approximated by a gaussian dependence on $`z`$. We assume that after a sufficient loading time, atoms in the $`\mathrm{CO}_2`$ laser trap will come into thermal and diffusive equilibrium with the MOT atoms that serve as a reservoir. The density of states in the gaussian trap and the occupation number then determine the number of trapped atoms, which takes the form $$N_T=nV_{FORT}F[U_0/(k_BT)].$$ (6) Here the volume of the $`\mathrm{CO}_2`$ laser trap is defined as $`V_{FORT}=a^2z_o\pi ^{3/2}`$. Hence, $`nV_{FORT}`$ is the total number of atoms contained in the volume of the FORT at the MOT density $`n`$. $`F(q)`$ determines the number of trapped atoms compared to the total number contained in the FORT volume at the MOT density. It is a function only of the ratio of the well depth to the MOT temperature, $`q\equiv U_0/(k_BT)`$: $$F(q)=\frac{q^{3/2}}{2}\int _0^1𝑑xx^2g(x)\mathrm{exp}[q(1-x)].$$ (7) Here $`g(x)`$ is the ratio of the density of states for a gaussian well to that of a three dimensional harmonic well: $$g(x)=\frac{\beta ^{3/2}(1-x)^{1/2}}{x^2}\frac{16}{\pi }\int _0^1𝑑uu^2\sqrt{\mathrm{exp}[\beta (1-u^2)]-1},$$ (8) where $`\beta \equiv -\mathrm{ln}(1-x)`$. The variable $`x=(E+U_0)/U_0`$ is the energy of the atom relative to the bottom of the well in units of the well depth, where $`-U_0\le E\le 0`$, and $`g(0)=1`$. For our MOT, the typical temperature is 1 mK, $`n\simeq 10^{11}/\mathrm{cm}^3`$, and $`nV_{FORT}=5\times 10^5`$ atoms. Using the well depth of $`U_0=0.4`$ mK in Eq. 6 shows that $`N_T`$ is of the order of $`6\times 10^4`$ atoms. Much higher numbers are obtainable for a deeper well at lower temperature. The experiments employ a custom-built, stable $`\mathrm{CO}_2`$ laser. High-voltage power supplies, rated at $`10^{-6}`$ fractional stability at full voltage, proper electrode design, and negligible plasma noise enable highly stable current. Heavy mechanical construction, along with thermally and acoustically shielded invar rods, reduces vibration. The laser produces 56 W in an excellent $`\mathrm{TEM}_{00}`$ mode. The $`\mathrm{CO}_2`$ laser beam is expanded using a ZnSe telescope. It is focused through a double-sealed, differentially-pumped, 5 cm diameter ZnSe window into a UHV system. The vacuum is maintained at $`10^{-11}`$ Torr by a titanium sublimation pump. The trap is at the focus of a 19 cm focal length ZnSe lens. The trap is continuously loaded from a $`{}_{}{}^{6}\mathrm{Li}`$ MOT employing a standard $`\sigma _\pm `$ configuration with three orthogonal pairs of counterpropagating, oppositely-polarized 671 nm laser beams, each 2.5 cm in diameter and 8 mW.
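Eqs. (6)–(8) above are also straightforward to evaluate numerically. The following sketch (not the authors' code) computes the trapped fraction $`F(q)`$ for the quoted conditions; it should give $`N_T`$ of order $`10^4`$–$`10^5`$, consistent with the $`6\times 10^4`$ estimate above:

```python
# Numerical evaluation of Eqs. (6)-(8): trapped fraction F(q) for a gaussian
# well of depth U0 = q * kB * T (an illustrative sketch).
import numpy as np
from scipy.integrate import quad

def g_ratio(x):
    """Eq. (8): density-of-states ratio, gaussian vs. 3-D harmonic well."""
    if x <= 0.0:
        return 1.0
    beta = -np.log(1.0 - x)
    inner, _ = quad(lambda u: u**2 * np.sqrt(np.expm1(beta * (1.0 - u**2))),
                    0.0, 1.0)
    return beta**1.5 * np.sqrt(1.0 - x) / x**2 * (16.0 / np.pi) * inner

def F(q):
    """Eq. (7): trapped atoms relative to n * V_FORT."""
    val, _ = quad(lambda x: x**2 * g_ratio(x) * np.exp(q * (1.0 - x)), 0.0, 1.0)
    return 0.5 * q**1.5 * val

q = 0.4 / 1.0        # U0 = 0.4 mK well, T = 1 mK MOT temperature
nV = 5e5             # atoms in the FORT volume at the MOT density
print("F(%.1f) = %.3f -> N_T = %.1e atoms" % (q, F(q), nV * F(q)))
```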
Power is supplied by a Coherent 699 dye laser that generates 700 mW. The MOT magnetic field gradient is 15 G/cm (7.5 G/cm) along the radial (axial) directions of the trap. The MOT is loaded from a multicoil Zeeman slower system that employs a differentially pumped recirculating oven source. Using a calibrated photomultiplier, the MOT is estimated to trap approximately $`10^8`$ $`{}_{}{}^{6}\mathrm{Li}`$ atoms. The MOT volume is found to be $`\simeq 1\mathrm{mm}^3`$. This yields a density of $`10^{11}/\mathrm{cm}^3`$, consistent with that obtained for lithium in other experiments. Using time-of-flight methods, we find typical MOT temperatures of 1 mK. We initially align the $`\mathrm{CO}_2`$ laser trap with the MOT by using split-image detection of the fluorescence at 671 nm to position the focusing ZnSe lens in the axial direction. The focal point for the trapping beam is positioned in the center of the MOT, taking into account the difference in the index of refraction of the optics at 671 nm and 10.6 $`\mu `$m. A 671 nm laser beam is aligned on top of the $`\mathrm{CO}_2`$ laser beam to align the transverse position of the focal point in the MOT. Since the Rayleigh length is short and the focus is tight, this method does not reliably locate the actual focus of the $`\mathrm{CO}_2`$ beam. Hence, a spectroscopic diagnostic based on the light shift induced by the $`\mathrm{CO}_2`$ laser is employed for final alignment of the trapping beam. While the near equality of the Li excited and ground state polarizabilities is ideal for continuous loading from the MOT, it makes locating the $`\mathrm{CO}_2`$ laser focus in the MOT by light shift methods quite difficult. To circumvent this problem, a dye laser at 610 nm is used to excite the 2p-3d transition for diagnostics. At the 10.6 $`\mu `$m $`\mathrm{CO}_2`$ laser wavelength, we estimate that the 3d state has a scalar polarizability of approximately $`700\times 10^{-24}\mathrm{cm}^3`$, nearly 30 times that of the 2s or 2p state. In the focus of the $`\mathrm{CO}_2`$ laser, the corresponding light shift is $`\simeq 300`$ MHz. Chopping the $`\mathrm{CO}_2`$ laser beam at 2 kHz and using lock-in detection of fluorescence at 610 nm yields a two-peaked light shift spectrum. This two-peaked structure arises because the lock-in yields the difference between signals with the $`\mathrm{CO}_2`$ laser blocked and unblocked. At the ideal focusing lens position, the amplitude and the frequency separation of these peaks are maximized. Optical alignment remains unchanged for months after this procedure. Measurement of the trapped atom number versus time is accomplished by monitoring the fluorescence at 671 nm induced by a pulsed, retroreflected probe/repumper beam (1 mW, 2 mm diameter). The probe is double-blinded by acousto-optic (A/O) modulators to minimize trap loss arising from probe light leakage. The loading sequence is as follows: First, the $`\mathrm{CO}_2`$ laser trap is continuously loaded from the MOT for 10 seconds. This provides adequate time for the MOT to load from the Zeeman slower. Then the MOT repumping beam is turned off, so that atoms in the upper $`F=3/2`$ hyperfine state are optically pumped into the lower $`F=1/2,M=\pm 1/2`$ states. After 25 $`\mu `$sec, the optical MOT beams are turned off using A/O modulators, and a mechanical shutter in front of the dye laser is closed within 1 ms to eliminate all MOT light at 671 nm. The MOT gradient magnets are turned off within 0.2 ms.
After a predetermined time interval between 0 and 600 sec, the probe beam is pulsed to yield a fluorescence signal proportional to the number of trapped atoms. The detection system is calibrated and the solid angle is estimated to determine the atom number. Typical trapped atom numbers measured in our initial experiments are $`\simeq 2.3\times 10^4`$. This corresponds to the predictions of Eq. 6 for a well depth of 0.25 mK. Since we expect the potential of the MOT gradient magnet to lower the effective well depth from 0.4 mK by $`\simeq 0.15`$ mK during loading, the measured trap number is consistent with our predictions. Fig. 1 shows the decay of the trapped atom number on a time scale of 0-600 seconds. Each data point is the mean obtained from four separate measurement sequences through the complete decay curve. The error bars are the standard deviation from the mean. Atoms in the $`F=1/2`$ state exhibit a single exponential decay with a time constant of 297 sec, clearly demonstrating the potential of this system for measurements on a long time scale. We have observed that an initial 10-15% decrease in the signal can occur during the first second. This may arise from inelastic collisions between atoms in the $`F=1/2`$ state with atoms that are not optically pumped out of the upper $`F=3/2`$ state. During optical pumping, fluorescence from the F=3/2 state decays in $`\simeq 5\mu `$sec to a $`\simeq 5`$% level which persists for a few milliseconds, consistent with a residual $`F=3/2`$ population. The lifetime of atoms in the $`F=1/2`$ state can be limited by processes that cause heating or direct loss. If we attribute the trap lifetime entirely to residual heating, the heating rate from all sources would be at most $`400\mu \mathrm{K}/300\mathrm{sec}\simeq 1\mu `$K/sec, which is quite small. However, if the loss were due to heating, one would expect a multimodal decay curve, analogous to that predicted in Ref. ; instead, we observe a single exponential decay as expected for direct loss mechanisms, such as collisions with background gas atoms or optical pumping by background light at 671 nm (into the unstable $`F=3/2`$ state). If we assume that the lifetime is background gas limited and that Li is the dominant constituent, the measured lifetime of 297 sec is consistent with a pressure of $`10^{-11}`$ Torr. The long lifetime of the $`F=1/2`$ state is expected, based on the prediction of a negligible s-wave elastic scattering length ($`\ll 1`$ Bohr) at zero magnetic field. Hence, spontaneous evaporation should not occur. We have made a preliminary measurement of trap loss arising from inelastic collisions when the $`F=3/2`$ state is occupied. This is accomplished by omitting the optical pumping step in the loading sequence described above. The trap is found to decay with a 1/e time $`<1`$ sec when $`2.3\times 10^4`$ atoms are loaded (density $`\simeq 10^9/\mathrm{cm}^3`$). A detailed study of elastic and inelastic collisions at low magnetic field is in progress. In conclusion, we have demonstrated a 300 sec 1/e lifetime for lithium fermions in an ultrastable $`\mathrm{CO}_2`$ laser trap with a well depth of 0.4 mK. By using an improved aspherical lens system, an increase in trap depth to 1 mK is achievable. Further, Eq. 6 shows that, if the MOT temperature is reduced to 0.25 mK, more than $`10^6`$ atoms can be trapped in a 1 mK deep well. Since the ground and excited state trapping potentials are nearly identical, exploration of optical cooling schemes may be particularly fruitful in this system.
Currently, we are exploring $`{}_{}{}^{6}\mathrm{Li}`$ as a fundamental example of a cold, weakly-interacting fermi gas. By trapping multiple hyperfine states, it will be possible to study both elastic and inelastic collisions between fermions. The combination of long storage times and tight confinement obtainable with the $`\mathrm{CO}_2`$ laser trap, as well as the anomalously large scattering lengths for $`{}_{}{}^{6}\mathrm{Li}`$, make this system an excellent candidate for evaporative cooling and potential observation of a Bardeen-Cooper-Schrieffer transition. Further, this system is well suited for exploring novel wave optics of atoms and molecules, such as coherent changes of statistics by transitions between free fermionic atoms and bosonic molecules, analogous to free-to-bound transitions for bosonic atoms. We thank Dr. R. Hulet for stimulating conversations regarding this work. We are indebted to Dr. C. Primmerman and Dr. R. Heinrichs of MIT Lincoln Laboratory for the loan of two stable high voltage power supplies and to Dr. K. Evenson of NIST, Boulder for suggestions regarding the laser design. This research has been supported by the Army Research Office and the National Science Foundation. Permanent Address, Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA 02139.
# Reconstructions of diamond (100) and (111) surfaces: Accuracy of the Brenner potential ## 1 Introduction Carbon-based structures are of great current interest. The challenge of these systems from the fundamental point of view is related to the interplay between different types of atom bonding, leading to a uniquely large variety of structural phases formed by a single element, such as diamond and graphite, single- and multi-shell fullerenes and nanotubes and other structures with many peculiar properties. To study the elastic properties and fracture of these structures and their mixtures, the transformation paths between them, etc., it is important to develop predictive schemes based on simplified empirical potentials, which allow large-scale simulations of complex structures with mixed atomic bonding that are often beyond the possibilities of *ab-initio* calculations. Effective many-body empirical potentials have proven to be useful and predictive for a number of materials. The potentials developed by Tersoff for group IV elements are very accurate for Si and Ge, also as far as interface properties are concerned, but less reliable for C. Carbon is particularly difficult for an empirical scheme due to the large variety of different types of C–C bonding with very different energetics and bond lengths $`d_{\mathrm{CC}}`$. For example, for a single C–C bond in diamond $`d_{\mathrm{CC}}=1.54`$ Å, for a conjugated bond in graphite $`d_{\mathrm{CC}}=1.42`$ Å and for a double bond in H<sub>2</sub>C$`=`$CH<sub>2</sub> $`d_{\mathrm{CC}}=1.34`$ Å. The Tersoff potential, which has been fit to the bulk properties of both diamond and graphite, does not, however, distinguish the chemical character of the bond. At diamond surfaces, different types of bonding are present at the same time, leading to poor results of the Tersoff potential for the surface reconstructions, as we show in detail in this work. Brenner has re-parametrised the Tersoff potential and added nonlocal terms to properly account for the bond modifications induced by a change of bonding of neighbouring atoms. As in the Tersoff scheme, the potential energy of the system is written as a sum of effective pair terms for each bond, the energetics of which depends on the local environment (bond order of Tersoff) and, in addition, on the chemical character of the bond (single, double, triple or conjugated) derived by evaluating the number of neighbours for the atoms forming the bond and all their nearest neighbours. Diamond surfaces are an example of a rather simple system, where the interplay between different types of carbon bonding becomes important. Numerous calculations exploiting various *ab-initio* approaches and extensive experimental data are available, making the diamond surfaces an important check point to verify the accuracy and predictive power of empirical schemes. The Tersoff potential for C yields the unreconstructed (111) $`(1\times 1)`$ surface as the most stable, contrary to the experimental evidence of the $`(2\times 1)`$ Pandey chain reconstruction analogous to that of Si(111). For the (001) face it strongly favours an asymmetric re-arrangement of carbon atoms beneath the row of unbuckled dimers. In the present work we compare in detail the predictions of the Brenner and Tersoff potentials with results of *ab-initio* calculations for known reconstructions of the diamond(100) and (111) surfaces. Our results reveal the high quantitative accuracy of the Brenner potential.
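To make the bond-order idea concrete, the sketch below implements a generic Tersoff/Brenner-type energy expression. It is purely schematic: the functional form follows the description above, but the cutoff shape, the coordination dependence $`b_{ij}`$ and all numerical parameters are illustrative placeholders, not the published parametrisation for carbon.

```python
# Schematic bond-order potential: attraction weakened by local coordination.
import numpy as np

A, B = 1393.6, 346.7      # repulsive/attractive prefactors (eV) -- placeholders
lam1, lam2 = 3.49, 2.21   # decay constants (1/Angstrom)        -- placeholders
R1, R2 = 1.8, 2.1         # short-range cutoff shell (Angstrom)
delta = 0.5               # strength of coordination dependence  -- placeholder

def f_cut(r):
    """Smooth cutoff limiting interactions to near neighbours (< ~2 A)."""
    if r < R1: return 1.0
    if r > R2: return 0.0
    return 0.5 * (1.0 + np.cos(np.pi * (r - R1) / (R2 - R1)))

def energy(pos):
    """E = 1/2 sum_ij fc(r_ij) [A e^{-lam1 r} - b_ij B e^{-lam2 r}], where the
    bond order b_ij weakens the attraction as coordination z_i grows."""
    n = len(pos)
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    fc = np.vectorize(f_cut)(r) * (1.0 - np.eye(n))
    z = fc.sum(axis=1)                       # effective coordination numbers
    E = 0.0
    for i in range(n):
        for j in range(n):
            if i != j and fc[i, j] > 0.0:
                b_ij = (1.0 + delta * max(z[i] - 1.0, 0.0)) ** -0.5
                E += 0.5 * fc[i, j] * (A * np.exp(-lam1 * r[i, j])
                                       - b_ij * B * np.exp(-lam2 * r[i, j]))
    return E

# A dimer vs. a linear trimer: the bond order weakens each bond in the trimer.
dimer = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0]])
trimer = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0], [-1.4, 0.0, 0.0]])
print("E(dimer) =", energy(dimer), " E(trimer) =", energy(trimer))
```

Running this shows that the trimer energy is less than twice the dimer energy per bond: exactly the environment dependence that a plain pair potential cannot capture.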
Since the parameters are fit to the *bulk* properties of diamond and graphite and to properties of various hydrocarbon molecules, the high accuracy at the *surface* suggests a high predictive power of the potential at short distances. With further modifications to include long-range interactions as well (beyond 2 Å, the cut-off of the potential), which are now under development, the Brenner potential promises to become a powerful tool to investigate carbon-based structures on a large scale. ## 2 (100) surface Our previous study of the diamond(100) surface with the Tersoff potential has suggested a new reconstruction with a strongly asymmetric rearrangement of atoms in deeper layers. These predictions have now been verified using the off-lattice Monte Carlo (MC) technique with the Brenner potential. We have confirmed that both the symmetric $`(2\times 1)`$ and asymmetric $`(2\times 1)`$a (where ‘a’ stands for ‘asymmetric’) structures shown in Fig. 1 correspond to local energy minima. However, the energies for the two reconstructions given by the Brenner potential are found in a reversed order compared to the prediction of the Tersoff potential. Relative to the energy of the relaxed $`(1\times 1)`$ ideal diamond (100) surface, the energy gain is found with the Brenner potential to be $`5.40`$ and $`4.19`$ eV per surface dimer for the $`(2\times 1)`$ and $`(2\times 1)`$a structures, respectively. For comparison, the Tersoff predictions were $`0.26`$ and $`1.55`$ eV, respectively. The lengths of the bonds between atoms in the top four layers are given for the two structures in Table 1. For comparison, we also give the results obtained with the Tersoff potential and those found in *ab-initio* calculations. Note that, from the chemical point of view, each surface atom at the bulk-terminated surface has two unpaired electrons (two dangling bonds). Therefore, the dimer bond (bond 11 in Table 1) in the symmetric $`(2\times 1)`$ surface structure has the character of a double bond. The Brenner potential correctly reproduces the length of the bond 11, which is much shorter than both the single C–C bond in diamond and the conjugated bond in graphite, but rather close to the length of a double bond. It quantitatively agrees with the dimer bond length of 1.37 Å for the $`(2\times 1)`$ diamond(100) surface found in *ab-initio* calculations. Conversely, the Tersoff potential, which does not include the nonlocal terms, predicts a very different length for the dimer bond; it also gives a much smaller reconstruction energy for both structures, since the chemical character of stronger double and conjugated bonds is not accounted for. With the Brenner potential, in the asymmetric $`(2\times 1)`$a structure, the bond 11 is elongated up to 1.437 Å (close to the graphite value) since it becomes a member of a conjugated ($`\pi `$-bonded) system. Note that the bond 12$`b`$, which also connects three-fold coordinated atoms, has a very similar length. Examination of the first two columns in Table 1 reveals the high accuracy of the predictions of the Brenner potential for the bond lengths of the symmetric $`(2\times 1)`$ diamond(100) reconstruction. Except for a slightly larger difference in the lengths of the bonds 23$`a`$ and 23$`b`$, our results agree with the *ab-initio* results within 0.01 Å accuracy. Note that the *ab-initio* approach of Ref. underestimates the bulk bond length by 0.01 Å with respect to the experimentally-determined value.
In contrast to the surprisingly good quantitative agreement of the structural data, the reconstruction energies differ. The value of $`5.40`$ eV/dimer as given by the Brenner potential falls between the *ab-initio* values (3.02, 3.36 and 3.52 eV/dimer) and the reconstruction energy given by the semi-empirical SLAB-MINDO scheme (7.86 eV/dimer). ## 3 (111) surface We do not consider the Tersoff potential here since it gives the unreconstructed $`(1\times 1)`$ surface as the minimum energy structure. Fig. 2 shows the relaxed $`(1\times 1)`$ and the $`(2\times 1)`$ Pandey chain reconstructions of the diamond(111) surface as given by the Brenner potential. In brief, the most important changes of the bond lengths as compared to the bulk value are as follows. For the relaxed $`(1\times 1)`$ structure: (i) the contraction of the bond within the first bilayer by $`-3.5`$% agrees with the *ab-initio* values of $`-3.1`$, $`-4.0`$ and $`-4.2`$%; (ii) the elongation of the bond between the first and second bilayer by $`+2.0`$% agrees with $`+2.1`$% of Ref. but seems underestimated compared to $`+8.7`$% and $`+9`$% of Refs. In the $`(2\times 1)`$ Pandey reconstruction: (iii) the $`\pi `$-bonded upper chain bond length of 1.437 Å ($`-6.7`$%) compares well with the *ab-initio* values of 1.47 Å ($`-4.4`$%), 1.44 Å and 1.43 Å ($`-6.5`$%); (iv) the lower chain elongation by $`+1.4`$% is close to $`+0.7`$% and $`+0.9`$%; (v) the stretch of the bonds between the first and second bilayers by $`+3.9`$% and $`+4.0`$% seems to be underestimated with respect to the *ab-initio* values of $`+8.1`$%, $`+8`$%, $`+4.5`$% and $`+6.6`$%. Further comparison with the results of Refs. shows that all other bond shifts agree with the *ab-initio* calculations within $`\sim `$1%. Therefore, except for a tendency to underestimate the elongation of the bonds between the top and second bilayer, our structural results for diamond(111) agree remarkably well with the *ab-initio* predictions. Relative to the energy of the bulk-terminated diamond(111), we find energy gains per $`1\times 1`$ unit cell of 0.244 eV for the $`(1\times 1)`$ structure (cf. 0.37 eV and 0.57 eV) and 1.102 eV for the Pandey reconstruction (cf. 0.47 eV and 1.40 eV). We note that there is a long-standing debate on the structural and electronic properties of the diamond(111) surface. An important issue is whether this surface is metallic or semiconducting. In most calculations the band of surface states is metallic, whereas experimentally the highest occupied state is at least 0.5 eV below the Fermi level. Dimerisation along the $`\pi `$-bonded chain could open the surface gap, but only one total-energy calculation obtains slightly dimerised chains, yielding a 0.3 eV gap in the surface band. Experimentally, recent X-ray data do not show any dimerisation but favour the $`(2\times 1)`$ reconstruction accompanied by a strong tilt of the $`\pi `$-bonded chains, similar to the $`(2\times 1)`$ reconstruction of Si(111) and Ge(111). The tilt is, however, not confirmed by theoretical studies. Neither dimerisation nor buckling of the $`\pi `$ chain is found in our results for the Pandey reconstruction, in agreement with most *ab-initio* results. However, our recent MC study of the structure of diamond(111) based on the Brenner potential has shown that, in addition to the stable $`(2\times 1)`$ Pandey chain reconstruction, there exist additional meta-stable states, specific for carbon, with all surface atoms in three-fold graphite-like bonding.
Since the energy of these metastable states is very close to that of the Pandey $`(2\times 1)`$, these structures can coexist with the Pandey structure at a real surface. Moreover, due to symmetry breaking induced by a strong dimerisation of the lower (4-fold coordinated) atomic chain in the first bilayer, the meta-stable reconstructions are likely to exhibit semiconducting behaviour. Although the new structures and their surface electronic properties ought to be checked in *ab-initio* studies, the high accuracy of the Brenner potential demonstrated in this work strongly supports this prediction. ## 4 Conclusion We have performed an off-lattice Monte Carlo study of the (100) and (111) diamond surfaces with the empirical many-body Brenner potential and compared the results in detail with those obtained with the Tersoff potential and with *ab-initio* approaches. We find that the Brenner potential is extremely accurate in describing the structural properties at surfaces, supporting the recent predictions of new meta-stable reconstructions of diamond(111). On the other hand, the Tersoff potential, which does not distinguish the chemical character of the bond, turns out to give a poor description of surface properties. The Brenner potential, however, cannot describe weaker long-range interactions, such as the interplanar interactions in graphite, due to the cut-off at 2 Å. This is the most serious limitation to be overcome. Further modifications of the Brenner potential to include long-range interactions beyond 2 Å have recently been proposed. Given the high accuracy of the short-range part, the modified Brenner potential promises to become a powerful tool to investigate carbon-based structures on a large scale. ### *Acknowledgements* We would like to thank Daniele Passerone, Furio Ercolessi and Erio Tosatti for their help in implementing the off-lattice grand canonical Monte Carlo code. Valuable discussions with Elias Vlieg, Frank van Bouwelen, Rob de Groot, Hans ter Meulen, Willem van Enckevort and John Schermer are acknowledged.
# Studies of multiple stellar systems – III. Modulation of orbital elements in the triple-lined system HD 109648 ## 1 Introduction The number of triple systems with well-determined orbital elements is still small (Fekel 1981; Tokovinin 1997, 1999). In particular, the number of spectroscopic triples in which the wide orbit is also known from radial-velocity observations is very small. Part of the problem is that the velocity amplitude of the outer binary is usually small compared to the amplitude of the inner binary. Moreover, after a binary orbit has been solved, the natural reaction is to discontinue observing it, and checks for longer-term variations are rarely made. This series of papers is aimed at increasing our knowledge of triples by investigating systems where the inner and outer orbits can both be determined from spectroscopic observations. The first paper of the series (Mazeh, Krymolowski and Latham 1993, hereafter Paper I) examined the halo triple G38-13, while the second paper (Krymolowski and Mazeh 1998, hereafter Paper II) derived an analytic technique which allows for fast simulation of orbital modulations of a binary induced by a third star. In the present paper we analyse the triple-lined spectroscopic triple system HD 109648 (HIP 61497, $`\alpha =12^\mathrm{h}35^\mathrm{m}59\stackrel{s}{.}8`$, $`\delta =+36\mathrm{°}15^{\prime }30^{\prime \prime }`$ (J2000); $`V=8.8`$). HD 109648 was identified as one (star 6) of a handful of stars belonging to the remnant of a nearby old open cluster, Upgren 1, but subsequent studies have weakened the interpretation that all of the stars originally identified are physically associated (Upgren, Philip & Beavers 1982; Gatewood et al. 1992; Stefanik et al. 1997; Baumgardt 1998). The triple-lined nature of HD 109648 was noticed soon after we began observing it, because the one-dimensional correlations of some of the spectra clearly showed three peaks. A periodicity analysis revealed periods at $`5.5`$ and $`120`$ days. Triple systems tend to be hierarchical, usually with a close binary and a more distant third star, as other configurations are generally unstable and are unlikely to persist and be detected. To first order, a hierarchical triple system can be separated into an inner orbit (comprising the two close stars) and an outer orbit (comprising the third star and the centre-of-mass of the inner pair). This approximation is most accurate when the distance to the third star far exceeds the separation between the inner two stars. One of the goals of this study is to investigate the interaction of these three stars (through the variation of the inner and outer orbits) to higher order. A preliminary version of this work was presented at a conference entitled ‘Thirty Years of Astronomy at the Van Vleck Observatory: A Meeting in Honor of Arthur R. Upgren’. This paper updates the orbital solutions presented there and adds a significantly more detailed analysis of the system, partly through the use of numerical simulations. In Section 2 we summarise the analysis of the observations, including the derivation of the radial velocities, orbital solutions, and additional parameters such as the mass ratios and constraints on the orbital inclinations. We discuss the theoretically expected modulations of orbital elements in Section 3. In Section 4 we describe our efforts to search for such variations and present our results. Further constraints on the system, derived via numerical simulation, are presented in Section 5.
Finally, in Section 6 we discuss our results and relate them to previous and future work. ## 2 Radial velocities and orbital solutions HD 109648 has been monitored since 1990 with the Center for Astrophysics (CfA) Digital Speedometer on the 1.5-m Wyeth Reflector at the Oak Ridge Observatory, located in the town of Harvard, Massachusetts. The echelle spectra cover 45 Å centered at 5187 Å, with a spectral resolution of $`\lambda /\mathrm{\Delta }\lambda 35,000`$. As of 1998, we have secured 290 spectra of HD109648. Radial velocities were derived for each of the three stars in the system using the three-dimensional version of the two-dimensional correlation technique TODCOR . TODCOR assumes that the spectrum for each individual star in the system is known, and that an observed spectrum is composed of the individual component spectra added together, each shifted by its own radial velocity. Thus, to use TODCOR successfully one must have suitable template spectra for each of the components. We chose our templates from a grid of synthetic spectra calculated by Jon Morse using the 1992 Kurucz model atmospheres (e.g. Nordström et al. 1994). Our first guess for the template parameters was based on a visual inspection of the spectra. Application of TODCOR yielded preliminary velocities for each of the three components, from which we determined the makeup of the triple: the inner binary consists of the primary and the tertiary, while the outer star is second in brightness. With this information, we were able to refine our templates to obtain the final velocities. For the primary we adopted an effective temperature, $`T_{\mathrm{eff}}=6750`$ K; solar metallicity, \[m/H\] = 0; and main-sequence surface gravity, log $`g=4.5`$ (cgs units). The period of the inner binary is quite short, and we assumed this has led to spin-orbit synchronization for the inner stars (see for example Mazeh & Shaham 1979). Therefore we adopted a rotational velocity of $`v\mathrm{sin}i=10`$ $`\mathrm{km}\mathrm{s}^1`$ for both of the inner binary stars. For the secondary and the tertiary, we have used a slightly cooler temperature, $`T_{\mathrm{eff}}=6500`$ K, and for the outer star we assumed that the rotation was negligible, $`v\mathrm{sin}i=0`$ $`\mathrm{km}\mathrm{s}^1`$. Small changes in the choice of template parameters did not have much effect on the radial velocities or orbital solutions. The final template parameters are listed in Table 1, and the individual radial velocities are reported in the Appendix. For a proper solution of the orbital elements, one must solve for the inner and outer motions simultaneously. For this purpose we have used orb20, a code developed at Tel Aviv University \[\], and a new code developed at the CfA. These two independent codes yielded the same results. Throughout this paper we employ the following notation for the elements of this hierarchical triple system. We label the inner stars Aa and Ab (with Aa being the brighter), while the centre-of-mass of the inner stars is denoted as A and the outer star is denoted B. When we are discussing orbits rather than the stars themselves, we designate the inner orbit as A and the outer orbit as AB. The orbital solution using all 290 of our observations is displayed in Figure 1. The top panel shows the motion of the two inner stars, after their centre-of-mass motion has been removed. The bottom panel shows the centre-of-mass motion of the inner binary, as well as the motion of the third star. 
The derived average orbital elements of the inner and outer motions, as well as the overall radial velocity of the system, $`\gamma `$, are listed in Table 2. The triple-lined nature of the spectra yields much more information about the system than can be gathered when only one component is visible (as was the case for G38-13 in Paper I). We can, for instance, determine the mass ratios between the three stars. The mass ratio of the inner pair is easily determined from the inner orbital elements, $`m_{Aa}K_{Aa}=m_{Ab}K_{Ab}`$. This results in $$\frac{m_{Ab}}{m_{Aa}}=0.8991\pm 0.0027.$$ (1) The same relation holds for the outer orbit, $`m_AK_A=m_BK_B`$, but in this triple system we know that $`m_A=m_{Aa}+m_{Ab}`$. Thus we derive $$\frac{m_B}{m_{Aa}}=\frac{K_A}{K_B}\left(1+\frac{K_{Aa}}{K_{Ab}}\right)=0.9356\pm 0.0072.$$ (2) From the orbital elements and Kepler’s Third Law we can also derive the quantities $`m_{Aa}\mathrm{sin}^3i_A`$ and $`m_B\mathrm{sin}^3i_{AB}`$ (e.g., Batten 1973) listed in Table 2, which with the mass ratio lead to a ratio involving the inclination angles, $$\frac{\mathrm{sin}i_{AB}}{\mathrm{sin}i_A}=0.9478\pm 0.0045.$$ (3) The individual inclination angles remain unknown, and consequently, so do the exact masses of the stars, but we can estimate the masses by assuming the stars are still on the main sequence. Upgren & Rubin (1965) report an MK spectral type of F6V for HD 109648. The light of the primary dominates the spectrum of the system, so this spectral type corresponds to a primary mass of $$m_{Aa}=1.3\pm 0.1M_{\mathrm{\odot }},$$ (4) where we have assumed solar metallicity and have adopted an uncertainty that corresponds to the spectral-type range F3V to F8V. Using the known mass ratios, we can then calculate the masses of the other two stars. From the masses we can nearly determine the inclination angles. Since radial velocity measurements do not reveal the direction of orbital motion, we are left with an ambiguity between the following supplementary possibilities for each angle, $$i_A=53.4\pm 2.0\mathrm{°}\text{ or }126.6\pm 2.0\mathrm{°}\text{ and}$$ (5) $$i_{AB}=49.5\pm 1.8\mathrm{°}\text{ or }130.5\pm 1.8\mathrm{°}.$$ (6) However, even if we were to resolve the ambiguities in the inclinations, we would still not know the complete geometric orientation of the system. Spectroscopic observations (as opposed to visual ones) do not provide the elements $`\mathrm{\Omega }_A`$ and $`\mathrm{\Omega }_{AB}`$, the position angles of the inner and outer lines of nodes. Without these angles, we cannot determine an important quantity in the interaction of the two binaries—the relative inclination $`\varphi `$. The relative inclination is defined as the angle between the two orbital planes, or identically, the angle between the inner and outer angular momentum vectors. It is related to the individual inclination angles, and the angles of the lines of nodes, by (Batten 1973; Fekel 1981) $$\mathrm{cos}\varphi =\mathrm{cos}i_A\mathrm{cos}i_{AB}+\mathrm{sin}i_A\mathrm{sin}i_{AB}\mathrm{cos}(\mathrm{\Omega }_A-\mathrm{\Omega }_{AB}).$$ (7) Since the quantity $`\mathrm{\Omega }_A-\mathrm{\Omega }_{AB}`$ is unknown, we can only limit its cosine between $`+1`$ and $`-1`$, resulting in a geometrical constraint on $`\varphi `$, $$|i_A-i_{AB}|\le \varphi \le i_A+i_{AB}.$$ (8) Nevertheless, from this result we can derive an important parameter, namely the minimum relative inclination angle, to see if the system can be coplanar.
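Equation (8) can be checked directly by scanning the unobservable node difference in equation (7); the short sketch below (illustrative, not from the paper) reproduces the minimum relative inclination derived next:

```python
# Scan Omega_A - Omega_AB in Eq. (7) to get the allowed range of phi, Eq. (8).
import numpy as np

iA, iAB = np.radians(53.4), np.radians(49.5)   # one of the supplementary pairs
dOmega = np.linspace(0.0, np.pi, 2001)         # unknown Omega_A - Omega_AB
cosphi = (np.cos(iA) * np.cos(iAB)
          + np.sin(iA) * np.sin(iAB) * np.cos(dOmega))
phi = np.degrees(np.arccos(np.clip(cosphi, -1.0, 1.0)))
print("phi in [%.1f, %.1f] deg" % (phi.min(), phi.max()))   # ~[3.9, 102.9]
```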
We determine that the minimum angle between the orbital planes is $`\varphi _{\mathrm{min}}=3.9\pm 0.3\mathrm{°}`$. Thus the two orbits could be very close to coplanarity, but cannot be exactly coplanar. In Section 4 we strengthen this lower limit slightly. ## 3 Expected effects of the three-body interaction As discussed in Papers I and II, the separation of the motions of a hierarchical triple system into inner and outer orbits is only a first-order approximation. The gravitational attraction of the outer body exerted on each of the two inner bodies is different from the gravitational attraction exerted on an imaginary body at the centre-of-mass of the inner binary system. The difference induces long-term modulations of some of the orbital elements of the system. The timescale for such modulations is on the order of (Mazeh & Shaham 1979; Paper II) $$T_{\mathrm{mod}}=P_{AB}\left(\frac{P_{AB}}{P_A}\right)\left(\frac{m_{Aa}+m_{Ab}}{m_B}\right).$$ (9) Thus, for such modulations to be observationally detectable in a relatively short time, one requires a system with a short outer period, as well as a small outer:inner period ratio. HD 109648 satisfies both these requirements, with an outer period about 120.5 days and a period ratio near 22:1. This results in $`T_{\mathrm{mod}}\simeq 15`$ years, one of the shortest modulation timescales known for a late-type triple system. Our observations of HD 109648 span more than eight years, giving us some hope that we may be able to detect changes in some of the orbital parameters. ### 3.1 Modulation of the inner eccentricity and the longitudes of periastron One effect expected from the three-body interaction is a modulation of the inner binary eccentricity. The presence of the third star causes a quasi-periodic variation in the inner eccentricity, $`e_A`$, around an average value. The amplitude of the eccentricity modulation strongly depends on the eccentricity of the outer orbit and on the relative inclination between the orbital planes (Mazeh, Krymolowski & Rosenfeld 1997; Paper II), with coplanar situations producing the least effect. The inner eccentricity modulation goes together with the motions of the lines of apsides of the two orbits (Mazeh, Krymolowski & Rosenfeld 1997; Paper II; Holman, Touma & Tremaine 1997), which manifest themselves through the variation of the longitudes of periastron. Both modulations, that of the inner binary eccentricity and that of the longitudes of periastron, can be observed as changes in the elements derived for the two orbital motions. Mazeh and Shaham (1979) have shown that the inner eccentricity modulation takes place even when the binary orbit starts as a circular one. This aspect of the eccentricity modulation is applicable here, because we expect a binary with a period of about 5.5 days to be completely circularized (Zahn 1975; Mathieu and Mazeh 1984), if it were not for the effect of the third star. ### 3.2 Precession of the nodes Another expected modulation results from an effect known as the precession of the nodes. In the general case of a non-coplanar triple, the inner and outer angular momentum vectors ($`𝑮_𝑨`$ and $`𝑮_{𝑨𝑩}`$) precess around their sum, the total angular momentum ($`𝑮`$), which remains fixed. As a result, the angle between $`𝑮_𝑨`$ (or $`𝑮_{𝑨𝑩}`$) and any fixed direction in space (other than that coincident with $`𝑮`$), varies periodically with time. The observer’s line of sight is one such fixed direction.
The angle between the line of sight and $`𝑮_𝑨`$ is precisely the inner inclination angle $`i_A`$, since $`𝑮_𝑨`$ is perpendicular to the (instantaneous) inner orbital plane. Similarly, the angle between the line of sight and $`𝑮_{𝑨𝑩}`$ is the outer inclination angle $`i_{AB}`$. As a result of the precession of the orbital planes we expect to see a periodic modulation of the inner and outer inclination angles. In the case of a fixed relative inclination between the two orbits, the time variations of the inclination angles are given by $$\mathrm{cos}i_A=\mathrm{cos}\alpha \mathrm{cos}\beta _A+\mathrm{sin}\alpha \mathrm{sin}\beta _A\mathrm{cos}[\omega _p(t-t_0)]\text{ and}$$ (10) $$\mathrm{cos}i_{AB}=\mathrm{cos}\alpha \mathrm{cos}\beta _{AB}-\mathrm{sin}\alpha \mathrm{sin}\beta _{AB}\mathrm{cos}[\omega _p(t-t_0)],$$ (11) where $`\alpha `$ is the angle between the line of sight and $`𝑮`$, $`\beta _A`$ is the angle between $`𝑮_𝑨`$ and $`𝑮`$, $`\beta _{AB}`$ is the angle between $`𝑮_{𝑨𝑩}`$ and $`𝑮`$, $`\omega _p`$ is the angular precession frequency and $`t_0`$ is a fiducial time determining the phase. An approximate expression for the precession frequency $`\omega _p`$ is given by Mazeh & Shaham; in general it corresponds to the typical modulation timescale given in equation 9. The amplitudes of the modulations in the inclination angles are set by $`\alpha `$, $`\beta _A`$ and $`\beta _{AB}`$, and the variations of $`i_A`$ and $`i_{AB}`$ are exactly out of phase. The modulation of the inclination angles has an immediately observable effect, because the observed amplitudes of the radial velocity variations $`K`$ in a binary system are directly proportional to $`\mathrm{sin}i`$ (Mazeh & Shaham 1976; Mazeh and Mayor 1983). Thus, for HD 109648, periodic modulations in the inner inclination angle, $`i_A`$, lead to periodic modulations in $`K_{Aa}`$ and $`K_{Ab}`$. Correspondingly, the modulations of $`i_{AB}`$ would be evidenced by variations in $`K_A`$ and $`K_B`$. ## 4 Search for Modulations Induced by the Third Star To search for evidence of modulation of the orbital elements, we have divided our data set and performed orbital solutions on each subset. To obtain a robust orbital solution, we would like as many points in each subset as possible; however, to resolve changes in the elements with time, we would like as many subsets as possible. The ultimate constraint comes from the time history of our observations, shown in Figure 2, which has forced us to use only five subsets. Data from the first few years of observation (before we appreciated the importance of getting good coverage of this system) were combined into one subset, while the observations from each subsequent season form their own subset. We have tried further divisions of the data (e.g., separating the first subset into two), but these provided orbital solutions which were too uncertain to be useful. As expected (Paper II), the inner and outer periods did not vary over the timespan of the observations. Because of this and the fact that our later subsets cover less than two outer periods, in what follows we have fixed the outer period at the value determined by using all the observations. ### 4.1 Modulation of the inner eccentricity and the longitudes of periastron The inner eccentricity and the longitudes of periastron are presented as a function of time in Figure 3 and Figure 4.
In these and subsequent figures, the horizontal “error bars” actually indicate the time span of each subset, while the plotted points are at the mean date within the subset (each observation was given equal weight). There is no obvious modulation of the inner eccentricity, and this fact will enable us to further constrain the geometry of the system in Section 5. However, the fact that the inner eccentricity is not zero (as would be expected due to tidal circularization of such a close binary) is evidence for interaction with the third star. More convincingly, the inner and outer longitudes are clearly varying. The roughly linear trend indicates a secular advance of the line of apsides, a direct indication of the effect of three-body interaction. ### 4.2 Precession of the nodes The radial velocity amplitudes, $`K_{Aa}`$, $`K_{Ab}`$, $`K_A`$ and $`K_B`$, from the five subsets are shown in Figure 5. The amplitudes of the inner binary, $`K_{Aa}`$ and $`K_{Ab}`$, both show very clear variation, with the same trend. This is well understood if we assume the variation is caused by the precession of the nodes. To show that this is the case we note that $`K_{Aa}=V_{Aa}\mathrm{sin}i_A`$, where $`V`$ is used to denote the true orbital velocity amplitude rather than the projected radial velocity amplitude, and $`K_{Ab}=V_{Ab}\mathrm{sin}i_A`$. Variation of the inclination angle would thus induce the same trend for $`K_{Aa}`$ and $`K_{Ab}`$. Furthermore, if the variations of the observed amplitudes are caused only by the modulation of the inclination angle, the ratio $`K_{Aa}/K_{Ab}`$ should remain constant. To check this point we have plotted in Figure 6 the results from the five subsets in the ($`K_{Aa}`$, $`K_{Ab}`$) parameter space. If the ratio between the two amplitudes is constant, we expect these five points to fall on a straight line that goes through the origin. The figure shows a beautiful confirmation of this prediction. The variation of the amplitudes, combined with the results for the inner inclination given in Section 2, implies an approximately $`4\mathrm{°}`$ decrease in $`i_A`$ over the span of the observations. Were this a real effect due to precession of the nodes, we would expect an increase in the outer inclination angle, and correspondingly an increase in $`K_A`$ and $`K_B`$. We can calculate the amplitude of such an effect by linearly approximating the decrease in the inner radial velocity amplitudes. As above, we know that $$\frac{\mathrm{\Delta }K_{Aa}}{V_{Aa}}=\frac{\mathrm{\Delta }K_{Ab}}{V_{Ab}}\text{ and }\frac{\mathrm{\Delta }K_A}{V_A}=\frac{\mathrm{\Delta }K_B}{V_B}$$ (12) The lack of significant eccentricity modulation implies that the relative inclination $`\varphi `$ has remained nearly constant (because of conservation of the total angular momentum), and validates equations 10 and 11. Using these and differentiating $`K=V\mathrm{sin}i`$ with respect to time yields $$\frac{1}{V_{Aa}}\frac{dK_{Aa}}{dt}=\frac{1}{V_{Ab}}\frac{dK_{Ab}}{dt}=\omega _p\mathrm{cot}i_A\mathrm{sin}\alpha \mathrm{sin}\beta _A\mathrm{sin}[\omega _p(t-t_0)]\text{ and}$$ (13) $$\frac{1}{V_A}\frac{dK_A}{dt}=\frac{1}{V_B}\frac{dK_B}{dt}=-\omega _p\mathrm{cot}i_{AB}\mathrm{sin}\alpha \mathrm{sin}\beta _{AB}\mathrm{sin}[\omega _p(t-t_0)].$$ (14) Assuming that this first-derivative (i.e.
linear) expansion is sufficient to cover the span of our observations, we can take the ratio of these two equations and determine that $$\frac{\frac{\mathrm{\Delta }K_{Ab}}{V_{Ab}}}{\frac{\mathrm{\Delta }K_B}{V_B}}=-\frac{\mathrm{cot}i_A}{\mathrm{cot}i_{AB}}\frac{\mathrm{sin}\beta _A}{\mathrm{sin}\beta _{AB}}.$$ (15) Because $`𝑮=𝑮_𝑨+𝑮_{𝑨𝑩}`$, the law of sines can be applied. For a binary orbit, the amplitude of the angular momentum about the centre-of-mass is $$L=\mu [GMa(1-e^2)]^{1/2}\propto \mu M^{2/3}P^{1/3}(1-e^2)^{1/2},$$ (16) where $`\mu `$ is the reduced mass and $`M`$ is the total mass. Thus we have $$\frac{\mathrm{sin}\beta _A}{\mathrm{sin}\beta _{AB}}=\frac{\left|𝑮_{𝑨𝑩}\right|}{\left|𝑮_𝑨\right|}=\left(\frac{\mu _{AB}}{\mu _A}\right)\left(\frac{M_{AB}}{M_A}\right)^{2/3}\left(\frac{P_{AB}}{P_A}\right)^{1/3}\left(\frac{1-e_{AB}^2}{1-e_A^2}\right)^{1/2}$$ (17) Figure 5 shows that $`\mathrm{\Delta }K_{Ab}\simeq -2.9\mathrm{km}\mathrm{s}^{-1}`$. Combining equations 15 and 17 and inserting the parameters for HD 109648 from Section 2, we expect $`\mathrm{\Delta }K_B\simeq 0.4\mathrm{km}\mathrm{s}^{-1}`$. Our error bars are too large to claim a detection of this, but the expectation is consistent with what we observe. Continued observations may make the precession clearer. The already observed precession also enables us to strengthen our lower limit on the relative inclination, $`\varphi _{\mathrm{min}}`$, calculated in Section 2. Because $`i_A`$ has been decreasing significantly, while $`i_{AB}`$ has likely increased slightly, the difference $`i_A-i_{AB}`$ (which is a strict lower limit on the relative inclination) has also been decreasing. Thus our strongest constraint on $`\varphi _{\mathrm{min}}`$ can come from analyzing our earliest subset of data only, where $`i_A-i_{AB}`$ was the greatest, rather than its average value over the whole time span. This yields a refined minimum relative inclination, $`\varphi _{\mathrm{min}}=5.4\pm 0.4\mathrm{°}`$. ## 5 Simulation The amplitude of the inner eccentricity modulation is especially sensitive to the relative inclination, $`\varphi `$, between the two orbital planes (Mazeh & Shaham 1979; Bailyn 1987; Paper II). The modulation amplitude increases with the relative inclination. For relative inclinations greater than a critical relative inclination, $`\varphi >\varphi _{\mathrm{crit}}\simeq 40\mathrm{°}`$, the modulation amplitude increases dramatically, with the possibility of the inner eccentricity approaching unity. Our observations, spanning more than eight years, have yielded an average inner eccentricity of $`e_A=0.0119\pm 0.0014`$. Using numerical simulations, we can estimate the likelihood of this result for different values of the relative inclination angle. Our simulations are similar to those described in Paper I, integrating Newton’s equations for three mass points. The starting point for the integrations was determined from the elements over all the observations, given in Table 2. We have used the three-body regularization program of Aarseth, as well as a code written by Bailyn, to perform the integrations. We have also developed and used a code written specifically for this system. All three routines yielded identical results. Typical results of the simulations are shown in Figure 7. As discussed in Paper II, the eccentricity modulation depends not only on the relative inclination, but also on the arguments of periastron, $`g_A`$ and $`g_{AB}`$, measured with respect to the unknown intersection of the orbital planes (Söderhjelm 1984; Paper II).
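As an aside, a stripped-down version of such an integration can be written in a few dozen lines. The sketch below (fixed-step RK4, Newtonian point masses only, circular coplanar starting conditions instead of the full fitted elements of Table 2) is purely illustrative and nothing like the regularised production codes used for the actual runs; the masses and periods follow Section 2:

```python
# Minimal direct three-body integration for the HD 109648 configuration
# (illustrative sketch; not the codes used in the paper).
import numpy as np

G = 2.959122e-4                       # AU^3 / (M_sun day^2)
m = np.array([1.30, 1.169, 1.216])    # m_Aa, m_Ab, m_B in M_sun (Section 2)

def accel(r):
    a = np.zeros_like(r)
    for i in range(3):
        for j in range(3):
            if i != j:
                d = r[j] - r[i]
                a[i] += G * m[j] * d / np.linalg.norm(d) ** 3
    return a

def rk4_step(r, v, dt):
    k1r, k1v = v, accel(r)
    k2r, k2v = v + dt/2*k1v, accel(r + dt/2*k1r)
    k3r, k3v = v + dt/2*k2v, accel(r + dt/2*k2r)
    k4r, k4v = v + dt*k3v, accel(r + dt*k3r)
    return (r + dt/6*(k1r + 2*k2r + 2*k3r + k4r),
            v + dt/6*(k1v + 2*k2v + 2*k3v + k4v))

def inner_eccentricity(r, v):
    """Osculating e of the Aa-Ab pair via the Laplace-Runge-Lenz vector."""
    mu = G * (m[0] + m[1])
    d, w = r[1] - r[0], v[1] - v[0]
    evec = np.cross(w, np.cross(d, w)) / mu - d / np.linalg.norm(d)
    return np.linalg.norm(evec)

# semi-major axes from Kepler's law with P_A = 5.5 d, P_AB = 120.5 d
a_in, a_out = 0.0824, 0.737           # AU
r = np.array([[0, 0, 0], [a_in, 0, 0], [a_out, 0, 0]], float)
v = np.zeros((3, 3))
v[1, 1] = np.sqrt(G * (m[0] + m[1]) / a_in)
v[2, 1] = np.sqrt(G * m.sum() / a_out)
r -= np.average(r, axis=0, weights=m)  # centre-of-mass frame
v -= np.average(v, axis=0, weights=m)

t, dt = 0.0, 0.05                      # days
while t < 3 * 120.5:                   # a few outer periods
    r, v = rk4_step(r, v, dt)
    t += dt
# e_A stays small for this coplanar circular start; tilting the outer orbit
# (higher relative inclination) drives the much larger modulation discussed.
print("e_A after %.0f d: %.4f" % (t, inner_eccentricity(r, v)))
```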
These arguments are especially important at the lower relative inclinations. We have explored a range of values, so that our simulations are typical cases. As the figure shows, for small relative inclinations an eight-year window could easily produce an eccentricity modulation consistent with what we have observed. At higher relative inclinations, however, the probability of obtaining a small average inner eccentricity over eight years decreases. By running many different simulations, we can quantify this probability, and estimate an upper limit to the relative inclination, $`\varphi _{\mathrm{max}}\simeq 54\mathrm{°}`$, above which a low inner eccentricity cannot be maintained for eight years. However, our simulations were limited, considering only Newtonian gravity with three point masses. Other effects, including quadrupole perturbations in the inner binary, tidal friction and general relativistic effects, may be significant factors in the eccentricity modulation. In particular, such effects may dampen the eccentricity modulation amplitude at high relative inclinations, and thus our estimate of the upper limit on the relative inclination is not very firm. On the other hand, we have a strong lower limit on the relative inclination calculated in Section 2 from the geometry of the system and strengthened in Section 4 via the precession of the nodes. Using this technique, thus, we can significantly constrain the relative inclination. It turns out that the ambiguity in the inclination angles corresponds to an outer orbit which either co-rotates or counter-rotates with respect to the inner orbit. Choosing the inclination angles in the same quadrant corresponds to the co-rotational case, with limits on the relative inclination, $$5.4\mathrm{°}\le \varphi \le 54\mathrm{°}\text{ (co-rotation).}$$ (18) The counter-rotational case leads to alternative limits for the relative inclination, $$126\mathrm{°}\le \varphi \le 174.6\mathrm{°}\text{ (counter-rotation).}$$ (19) The relative inclinations closer to $`\varphi _{\mathrm{max}}`$ (i.e. the upper limit in the co-rotational case or the lower limit in the counter-rotational case) are less probable than those closer to $`\varphi _{\mathrm{min}}`$. ## 6 Discussion We have shown that HD 109648 is a hierarchical triple system, with an outer period and outer:inner period ratio conducive to modulations of orbital elements on timescales of about a decade. Indeed, our observations clearly indicate an advance of the inner longitude of periastron corresponding roughly to this timescale. We also found strong evidence for variations of the radial velocity amplitudes of the inner orbit, most naturally accounted for by the precession of the nodes. Furthermore, the inner eccentricity is small but significant, presumably due to the interaction with the outer star. Such effects have been predicted theoretically for hierarchical triples for a number of years. However, there have been few observational confirmations. Mayor and Mazeh have looked for evidence of the precession of the nodes in a number of close binaries, and have reported several significant changes, based on observations made at two widely-spaced epochs. Mazeh and Shaham also suggested a few systems where the effect may have had a role, but none of these were confirmed triples. The inner eccentricity modulation has been less conclusively observed. Mazeh and Shaham (1977; 1979) have postulated it to be the cause of long-period phenomena, such as episodic accretion, in some close binaries.
More convincing evidence has been offered for the interaction of a third star with the tidal circularization of the inner binary. Mazeh has looked for eccentric orbits in samples of short-period binaries that should have been circularized, as a fingerprint for a third star in the system. Three such examples were found, and the hypothesis of a triple system was confirmed in each case. In addition, one system (HD 144515) showed evidence for a variation in the inner eccentricity, again based on observations from two epochs. Ford, Kozinsky, and Rasio (1999) also provide instances of some other triple systems where these interactions may have played a role. HD 109648 provides the best observational evidence so far of these predicted modulations. This system is a confirmed hierarchical triple, a direct result of analysing the triple-lined spectra. Furthermore, the evidence for variations in the elements comes from observations in a homogeneous set of data, rather than relying on two-epoch observations. Finally, we see evidence both for the precession of the nodes and for the apsidal advance in the same system. More data are needed to strengthen this case. With better information on the variation of the elements with time, we should be able to derive better constraints on the orientation of the system. For example, if we are able to fit the variation of the inner and outer inclination angles, we can determine the various angles between the total angular momentum and its inner and outer binary components, as well as the angle between the total angular momentum and the line of sight. If the inner eccentricity modulation becomes clearer with additional data as well, it should provide stronger constraints on the relative inclination. In addition to continued spectroscopic observations, there may be some hope that interferometric observations of HD 109648 will be able to help clarify the orientation of the system. The Hipparcos parallax for HD 109648 of 4 mas may potentially be in error due to the outer orbit, particularly because the outer period is nearly commensurate with one year. Nevertheless, the separation of the third star from the inner binary is on the order of a few mas, allowing for the possibility of a visual orbit in the future. We can also hope that additional triple systems will be discovered, perhaps ones that are even better than HD 109648 for this type of study. Indeed, Saar, Nördstrom & Andersen have noted a promising system with a modulation timescale perhaps even shorter than that of HD 109648. Determining the geometry and orientation of such systems will be a great advance in our understanding of them. ## Acknowledgments We would like to thank J. Caruso and J. Zajac for making many of the observations presented here, as well as R. Davis for help with the data reduction. We are grateful for the efforts of S. Chanmugam, T. Lynn, M. Morgan and E. Scanapieco for their initial work in determining periods for the two orbits. Thanks also go to S. Aarseth and C. Bailyn for use of their three-body codes. We thank the anonymous referee for providing useful suggestions. This work was supported by US-Israel Binational Science Foundation grants 94-00284 and 97-00460 and an NSF Graduate Research Fellowship. SJ thanks the Harvard University Department of Astronomy for its support of this work. SJ also expresses his sincere gratitude to Y. Krymolowski for many helpful discussions and his wonderful generosity.
## Appendix A Radial velocities In the following table we list our observations of the heliocentric radial velocities for the three visible components of HD109648. The date given is HJD - 2400000, and the velocities are in $`\mathrm{km}\mathrm{s}^1`$.
no-problem/0003/astro-ph0003090.html
ar5iv
text
# Galactic Contamination in the QMAP Experiment ## 1. INTRODUCTION Quantifying Galactic emission in a cosmic microwave background (CMB) map is interesting for two different reasons. On one hand, the CMB is known to be a gold mine of information about cosmological parameters. Taking full advantage of this requires accurate modeling and subtraction of Galactic foreground contamination. On the other hand, the high fidelity maps being produced as part of the current CMB gold rush offer a unique opportunity for secondary non-CMB science. This includes a greatly improved understanding of Galactic emission processes between 10 and $`10^3`$ GHz. This paper is motivated by both of these reasons. The QMAP experiment (Devlin et al. 1998; Herbig et al. 1998; de Oliveira-Costa et al. 1998b, hereafter dOC98b) is one of the CMB experiments that has produced a sky map with accurately modeled noise properties, lending itself to a cross-correlation analysis with a variety of foreground templates. We present such an analysis in §2 and §3, then compute the corresponding correction to the published QMAP power spectrum measurements in §4 and finish by discussing the implications for Galactic foreground modeling in §5. ## 2. METHOD The multi-component fitting method that we use was presented in detail in de Oliveira-Costa et al. 1999 (hereafter dOC99), so we review it only briefly here. The joint QMAP map from both flights consists of $`N=3164`$ (Ka-band, 26 to 36 GHz) and $`4875`$ (Q-band, 36-46 GHz) measured sky temperatures (pixels) $`y_i`$. We model this map as a sum of CMB fluctuations $`x_i`$, detector noise $`n_i`$ and $`M`$ Galactic components whose spatial distributions are traced in part by external foreground templates. Writing these contributions as $`N`$-dimensional vectors, we obtain $$𝐲=\mathrm{𝐗𝐚}+𝐱+𝐧,$$ (1) where $`𝐗`$ is an $`N\times M`$ matrix whose rows contain the various foreground templates convolved with the QMAP beam (i.e., $`𝐗_{ij}`$ would be the $`i^{th}`$ observation if the sky had looked like the $`j^{th}`$ foreground template), and $`𝐚`$ is a vector of size $`M`$ that gives the levels at which these foreground templates are present in the QMAP data. We treat $`𝐧`$ and $`𝐱`$ as uncorrelated random vectors with zero mean and the $`𝐗`$ matrix as constant, so the data covariance matrix is given by $$𝐂\mathrm{𝐲𝐲}^T𝐲𝐲^T=\mathrm{𝐱𝐱}^T+\mathrm{𝐧𝐧}^T,$$ (2) where $$\mathrm{𝐱𝐱}^T_{ij}\underset{\mathrm{}=2}{\overset{\mathrm{}}{}}\frac{2\mathrm{}+1}{4\pi }P_{\mathrm{}}(\widehat{𝐫}_i\widehat{𝐫}_j)W_{\mathrm{}}^2C_{\mathrm{}}$$ (3) is the CMB covariance matrix and $`\mathrm{𝐧𝐧}^T`$ is the QMAP noise covariance matrix (a detailed description of the calculation of $`\mathrm{𝐧𝐧}^T`$ is presented in dOC98b). We use a flat power spectrum $`C_{\mathrm{}}1/\mathrm{}(\mathrm{}+1)`$ normalized to a $`Q_{flat}=(5C_2/4\pi )^{1/2}=30`$ $`\mu `$K (dOC98b). We model the QMAP beam as a Fisher function with FWHM$`=\sqrt{8\mathrm{ln}2}\sigma =0.9^{}`$ for Ka-band and $`0.6^{}`$ for Q-band, which gives a $`W_{\mathrm{}}e^{\sigma ^2\mathrm{}(\mathrm{}+1)/2}`$. Since our goal is to measure $`𝐚`$, both $`𝐱`$ and $`𝐧`$ act as unwanted noise in equation (1). Minimizing $`\chi ^2(𝐲\mathrm{𝐗𝐚})^T𝐂^1(𝐲\mathrm{𝐗𝐚})`$ yields the minimum-variance estimate of $`𝐚`$, $$\widehat{𝐚}=\left[𝐗^T𝐂^1𝐗\right]^1𝐗^T𝐂^1𝐲$$ (4) with covariance matrix $$\mathrm{SS}\widehat{𝐚}^2\widehat{𝐚}^2=\left[𝐗^T𝐂^1𝐗\right]^1.$$ (5) The error bars on individual correlations are therefore $`\mathrm{\Delta }\widehat{a}_i=\mathrm{SS}_{ii}^{1/2}`$. 
This includes the effect of chance alignments between the CMB and the various template maps, since the CMB anisotropy term is incorporated in $`\mathrm{𝐱𝐱}^T`$. ## 3. BASIC RESULTS We cross-correlate the QMAP data with two different synchrotron templates: the 408 MHz survey (Haslam et al. 1982) and the 1420 MHz survey (Reich 1982; Reich and Reich 1986), hereafter Has and R&R, respectively. To study dust and/or free-free emission, we cross-correlate the QMAP data with three Diffuse Infrared Background Experiment (DIRBE) sky maps at wavelengths 100, 140 and 240$`\mu \mathrm{m}`$ (Boggess et al. 1992) and with the Wisconsin H-Alpha Mapper (WHAM)<sup>6</sup><sup>6</sup>6 Details also at http://www.astro.wisc.edu/wham/. survey (Haffner et al. 1999). For definiteness, we use the DIRBE 100$`\mu \mathrm{m}`$ channel when placing limits below since it is the least noisy of the three DIRBE channels. Three of our templates are shown together with the Ka-band QMAP map in Figure 1. Most of our interesting results come from the QMAP Ka-band, since the Q-band was substantially noisier (the opposite was true for the Saskatoon experiment – see de Oliveira-Costa et al. 1997, hereafter dOC97). Before calculating the correlations, we convolve the template maps with the QMAP beam function. We also remove the monopole and dipole from both the templates and the QMAP maps. As a consequence, our results depend predominantly on the small scale intensity variations in the templates and are insensitive to the zero levels of the QMAP data and the templates. Table 1 shows the coefficients $`\widehat{𝐚}`$ and the corresponding fluctuations in antenna temperature in the QMAP data ($`\mathrm{\Delta }T=\widehat{𝐚}\sigma _{Gal}`$, where $`\sigma _{Gal}`$ is the standard deviation of the template map). Statistically significant ($`>2\sigma `$) correlations are listed in boldface. Note that the fits are done jointly for $`M=3`$ templates. The DIRBE, Haslam and $`H_\alpha `$ correlations listed in Table 1 correspond to joint 100$`\mu \mathrm{m}`$$``$Has$``$$`H_\alpha `$ fits, whereas the R&R numbers correspond to a joint 100$`\mu \mathrm{m}`$$``$R&R$``$$`H_\alpha `$ fit. Only the two synchrotron templates are found to be correlated with the Ka-band, while no correlations are found for the Q-band. Repeating the analysis done for two different Galactic cuts (20 and 30) indicates that the bulk of this contamination is at latitudes lower than 30. As in dOC97, de Oliveira-Costa et al. 1998a (hereafter dOC98a) and dOC99, the cross-correlation software was tested by analyzing constrained realizations of CMB and QMAP instrument noise. From 1000 realizations, we recovered unbiased estimates $`\widehat{𝐚}`$ with a variance in excellent agreement with equation (5). As an additional test, we computed $`\chi ^2(𝐲\mathrm{𝐗𝐚})^T𝐂^1(𝐲\mathrm{𝐗𝐚})`$ and obtained $`\chi ^2/N1`$ in all cases. Including a synchrotron template lowered $`\chi ^2`$ by a significant amount (18 for R&R and 9 for Has), whereas adding the other templates resulted in insignificant reductions $`\mathrm{\Delta }\chi ^21`$. ## 4. IMPLICATIONS FOR CMB The lower right map in Figure 1 shows $`\mathrm{𝐗𝐚}^T`$, i.e., our best fit estimate of the foreground contribution to the QMAP Ka-band. To quantify the foreground contribution to the published QMAP power spectrum measurements, we repeat the exact same analysis described in dOC98b after subtracting out this map, i.e., with the map $`𝐲`$ replaced by $`𝐲\mathrm{𝐗𝐚}^T`$. 
The dOC98b band powers were computed by expanding the map in signal-to-noise ($`S/N`$) eigenmodes (Bond 1995; Bunn and Sugiyama 1995), weight vectors $`𝐛_i`$ that solve the generalized eigenvalue equation $$\mathrm{𝐱𝐱}^T𝐛_i=\lambda _i\mathrm{𝐧𝐧}^T𝐛_i.$$ (6) When the $`𝐛`$’s are sorted by decreasing eigenvalue $`\lambda `$, they tend to probe from larger to smaller angular scales. The $`S/N`$ expansion coefficients are shown in Figure 2, and are seen to be substantially smaller for the foregrounds than for the CMB. As described in dOC98b, we obtain a statistically independent power estimate from the square of each mode and then average these individual estimates with inverse-variance weighting to obtain the band power estimates listed in Table 2 and shown in Figure 3. The Ka-band (30 GHz) band powers are seen to drop by less than a few percent when the foreground signal is subtracted, whereas the Q-band (40 GHz) contamination is too small to quantify. The Ka-band correction is slightly smaller on small angular scales: 1.3% instead of 2.3%. This is expected, since diffuse Galactic foregrounds are expected to have a redder power spectrum than the CMB. As a side benefit, our statistically significant detection of foregrounds allowed an independent confirmation of the QMAP pointing solution. We reanalyzed the QMAP data set with the pointing solution offset by $`\pm 0.5^{}`$ in azimuth, and found the highest correlations for the original pointing. ## 5. IMPLICATIONS FOR FOREGROUND MODELING Figure 4 shows a compilation of measured correlations between foreground templates and CMB maps at various frequencies. Comparisons are complicated by the fact that the foreground level $`\mathrm{\Delta }T=\widehat{𝐚}\sigma _{Gal}`$ depends strongly on galactic latitude via $`\sigma _{Gal}`$. We therefore plot $`\widehat{𝐚}`$ instead, i.e., the factor giving the frequency dependence of emission per unit foreground. Such measurements were used to normalize recent foreground models such as those of Bouchet & Gispert (1999) and Tegmark et al. (1999). Below we discuss how our QMAP results affect such models. ### 5.1. Synchrotron Writing the frequency dependence as $`a\nu ^\beta `$ and recalling that the correlation coefficients are, by definition, $`a=1`$K$`/\mu `$K$`=10^6`$ for Has at 408 MHz and $`a=1`$mK$`/\mu `$K$`=10^3`$ for R&R at 1420 MHz, we obtain the spectral index limits $`2.7\mathrm{}<\beta \mathrm{}<3.3`$ for the Ka–Has correlation and $`2.6\mathrm{}<\beta \mathrm{}<2.8`$ for the Ka–R&R correlation. These values are slightly steeper than the canonical sub-GHz slope of $`2.7\mathrm{}<\beta \mathrm{}<2.9`$ (Davies et al. 1998; Platania et al. 1998), but consistent with a steepening of the spectrum of cosmic ray electrons at higher energies (Rybicki and Lightman 1979). The relatively high QMAP synchrotron signal seen in Figure 4 could be interpreted as slight spatial variability of the frequency dependence (Tegmark 1999), but may also have other explanations. For instance, the worst striping problems in the Haslam map are right around the North Celestial Pole, which may have caused Saskatoon to underestimate the true synchrotron level there (dOC97). ### 5.2. Spinning dust & free-free emission An important question is whether the DIRBE-correlated signal seen by so many experiments (see Figure 4, top) is due to dust-correlated free-free emission (Kogut et al. 1996) or spinning dust grains (Draine & Lazarian 1998). 
The turndown at low frequencies suggests a spinning dust interpretation (dOC99), but an analysis using improved Tenerife data (Mukherjee et al. 2000) may have re-opened this question<sup>7</sup><sup>7</sup>7 As seen in Figure 4, the Mukherjee et al. results for $`b>20^{}`$ also shown a turndown between 15 and 10 GHz. Figure 7 in their paper looks different because it is an average including data down to $`b=0`$. This may still be consistent with spinning dust being the worst foreground for the (Galaxy cut) MAP data, since free-free emission is believed to be more concentrated in the Galactic plane than dust. . Our present results cannot settle the issue, but offer a few additional clues. If free-free emission is responsible for the correlation, then substantial $`H_\alpha `$ emission would be expected as well. Figure 4 (bottom) shows the expected correlation $`a`$ for the case of $`8,000`$K gas (Bennett et al. 1992). It is seen that the $`H_\alpha `$-correlation, although marginal at best, is consistent with the theoretical curve. Moreover, Table 1 shows that the limit on the dust-correlated emission is low enough to be compatible with a free-free origin (i.e., with the $`15\mu `$K $`H\alpha `$-correlated component). To clarify this issue, we computed the correlation between the dust and $`H_\alpha `$ maps. As described in dOC99, eq. (5) shows that we can interpret $`\mathrm{SS}^1`$ as the covariance between the various templates with dimensionless correlation coefficients $`r_{ij}\mathrm{SS}_{ij}^1(\mathrm{SS}_{ii}^1\mathrm{SS}_{jj}^1)^{0.5}`$. Like in dOC99, the DIRBE maps were found to be almost perfectly correlated, and essentially uncorrelated ($`r^2\mathrm{}<3\%`$) with the radio maps. The Has and R&R maps are correlated with $`r83\%`$ for $`b>20^{}`$. As a new result, we obtain a marginal correlation $`r0.2`$ between the DIRBE maps and the $`H_\alpha `$ template. Since the statistical properties of these maps are not accurately known, we computed error bars by repeating the analysis with one of the templates replaced by $`2\times 2\times 72=288`$ transformed maps, rotated around the Galactic axis by multiples of 5 and/or flipped vertically and/or horizontally. The actual correlation was found to be larger than 85% of these, showing that the correlation is not significant at the $`2\sigma `$ level: $`\widehat{𝐚}=(0.25\pm 0.19)`$R/MJy sr<sup>-1</sup> $`(1\sigma )`$. This result is significantly smaller than that recently found by Lagache et al. (2000) for their DIRBE–WHAM correlation done in a different region of the sky, but compatible with other marginal dust–$`H_\alpha `$ correlations (McCullough 1997; Kogut 1997). This poor correlation is a challenge for the pure free-free hypothesis, which maintains that microwave emission traces dust because dust traces free-free emission. A cross-correlation analysis with large frequency and sky coverage will hopefully be able to unambiguously determine the relative levels of free-free and dust emission in the near future. We would like to thank Matias Zaldarriaga for encouraging us to complete it. Support for this work was provided by the Packard Foundation, NSF grants PHY 92-22952 & PHY 96-00015, and NASA grant NAG5-6034. We acknowledge the NASA office of Space Sciences, the COBE flight team, and all those who helped process and analyze the DIRBE data. REFERENCES Bennett, C.L., Smoot, G.F., Hinshaw, et al. 1992, ApJ, 396, L7 Boggess, N.W., et al. 1992, ApJ, 397, 420 Bond, J.R. 
1995, Phys.Rev.Lett., 74, 4369 Bunn, E.F., Sugiyama, N., 1995, ApJ, 446, L49-52 Fran ois R. Bouchet, F.R., and Gispert, R. 1999 astro-ph/9903176 Davies R.D., Watson, R.A., and Gutierrez, C.M. 1998, MNRAS, 278, 925 de Oliveira-Costa, A., et al. 1997, ApJ, 482, L17 (dOC97) de Oliveira-Costa, A., et al. 1998a, ApJ, 509, L9 (dOC98a) de Oliveira-Costa, A., et al. 1998b, ApJ, 509, L77 (dOC98b) de Oliveira-Costa, A., et al. 1999, ApJ, 527, L9 (dOC99) Devlin, M. et al. 1998, ApJL, 509, L69 Draine, B.T., and Lazarian, A. 1998, ApJ, 494, L19 Haffner, L.M. et al. 1999, ApJ, 523, 223 Haslam, C.G.T., et al. 1982, A&AS 47, 1 Herbig, T. et al. 1998, ApJL, 509, L73 Kogut, A. 1997, AJ, 114, 1127 Kogut, A., et al. 1996, ApJ, 464, L5 Lagache, G. et al. 2000, astro-ph/9911355 McCullough, P.R. 1997, AJ, 113, 2186 Mukherjee, P. et al. 2000, astro-ph/0002305 Platania, P., et al. 1998, ApJ, 505, 473 Reich, W. 1982, A&AS, 48, 219 Reich, P., and Reich, W. 1986, A&AS, 63, 205 Rybicki, G. B. and Lightman, A. P. 1979, Radiative Processes in Astrophysics, Wiley & Sons, p.174 Tegmark, M. et al. 1999, astro-ph/9905257
no-problem/0003/hep-ph0003329.html
ar5iv
text
# Frame-Independence of Exclusive Amplitudes in the Light-Front Quantization ## I Introduction The hadron phenomenology based on the equal-$`\tau `$ quantization takes great advantage of the Drell-Yan-West frame. In the $`q^+=0`$ Drell-Yan-West frame, one can derive a first-principle formulation for the exclusive amplitudes by judiciously choosing the good component of the light-front current. Due to the rational dispersion relation in the equal-$`\tau `$ formulation, one doesn’t need to suffer from the complicated vacuum fluctuation. The zero-mode contribution may also be avoided in the Drell-Yan-West frame by using the plus component of the current . However, caution is needed in applying the established Drell-Yan-West formalism to other frames because in general the current components do mix under transformation of the reference-frame. Furthermore, the Poincare algebra in the ordinary equal-$`t`$ quantization is drastically changed in the light-front equal-$`\tau `$ quantization. The upshot of the difference may be summarized as the change of the dynamical operators in the two quantizations. For example, the transverse rotation operator in the light-front quantization does not commute with the Hamiltonian possessing the dynamics. Likewise, the light-front helicities are not in general independent from the reference frame. Therefore, the adopted light-front formulation used in one particular reference-frame to compute physical quantities is not gauranteed to work if another reference-frame is used. Consequently, it is important to carefully check if the same formulation can indeed be used in another frame for the correct calculation insuring the frame-independence of the physical quantities. In this work, we present criteria which insure an identical formulation in different frames of the light-front quantization. As an explicit example, we consider the well-known convolution formalism for the exclusive processes established in the Drell-Yan-West($`q^+=0`$) frame and investigate the criteria to insure the same formalism in different reference-frames. In particular, we analyze the transformation of the electromagnetic current elements and the light-front helicities between the Drell-Yan-West frame and the Breit($`q^+=0`$) frame which is another popular reference-frame used frequently by Frederico and collaborators for the calculation of exclusive amplitudes. We found that four operations are needed to go from the Drell-Yan-West frame to the Breit frame. Combining the four operations, we find the equivalent single operation that can go back and forth between two frames: $`U=\mathrm{exp}[i(\alpha 𝒥^++\beta 𝒦^3)]`$, where the coefficients $`\alpha `$ and $`\beta `$ of the transverse rotation $`𝒥^+`$ and the boost $`𝒦^3`$ are functions of $`q^2`$. It turns out that the single operation is dynamical (not kinematical) because transverse rotation is involved in the single transformation. This means that the two reference-frames are not only kinematically different but also dynamically inequivalent even though both frames have $`q^+=0`$. Thus, some new dynamics can come in by moving from the Drell-Yan-West frame to the Breit frame. This in principle raises a flag on various form factor calculations performed in the Breit frame because the simple convolution formulas that worked out quite well in the Drell-Yan-West frame may not apply in the Breit frame. We thus investigate in detail both pseudoscalar and vector form factors in these two typical light-front frames. 
We find that the helicities in the Drell-Yan-West frame are not changed in the Breit frame and thus the form factors obtained in the two frames must be indeed identical<sup>*</sup><sup>*</sup>* However, this doesn’t mean that any model calculations of form factors would give the same results in the two frames because the phenomenological models may not be covariant. For example, the angular condition for the vector meson form factors is violated if the model calculation is not fully covariant. Thus, our finding here provides an additional condition that the covariant model must satisfy besides the angular condition.. We find that the plus component of the Drell-Yan-West frame is proportional to the plus component of the Breit frame so that the same convolution formula can be used in both frames. This is a highly non-trivial result because such coincidence cannot be in general anticipated. We have also expanded the operation in a perturbative way. Closed exact single operation results $`U=\mathrm{exp}[i(\alpha 𝒥^++\beta 𝒦^3)]`$ were compared with the perturbative results. In general, however, the light-front helicities are changed and the plus component of the current is mixed with other components under the change of reference frame. For example, the $`V`$-transformation was obtained as $`V=\mathrm{exp}[i(\alpha 𝒦^{}\beta 𝒦^3)]`$ where $`𝒦^{}`$ is the light-front transverse boost so that $`V`$ commutes with the Hamiltonian. This reveals that the obtained $`V`$-transformation is dynamically equivalent even though it is kinematically different. In this case, however, the plus component of the current in Drell-Yan-West frame is not proportional to that in the $`V`$-frame. Thus, the same convolution formula obtained in the Drell-Yan-West frame cannot be used in the $`V`$-frame. In the following section (Section II), we present the four operations needed to go from the Drell-Yan-West frame to the Breit frame and obtain the explicit relations for the matrix elements of the current in the two frames, confirming the usual unitary tranformation rule of the vector operator. It is also assured that the pseudoscalar form factor must be identical in the two frames because the plus component is preserved. In Section III, we apply the transformation to the helicities and show that the vector form factors must be also identical in the two frames because the helcities are not changed in the two frames. In Section IV, the $`V`$-transformation is presented to show that the usual convolution formula for the form factor calculation in the Drell-Yan-West frame cannot be used even though the transformation commutes with the hamiltonian involving dynamics. Conclusions and discussions follow in Section V. In the Appendix, the perturbative expansion of the transformation between the frames using the Campbell-Baker-Hausdorff relation is compared with the exact closed form of transformation. ## II Transformation between Drell-Yan-West and Breit frames We begin by considering the absorption of a photon by a meson of mass $`M`$. The Drell-Yan-West reference frame is defined such that the initial four-momentum of the particle, $`p`$, and the four-momentum of the photon, $`q`$, are given as follows: $`p`$ $`=`$ $`(p_0,0,0,|𝐩|)`$ (1) $`q`$ $`=`$ $`({\displaystyle \frac{q^2}{2p^+}},\sqrt{q^2},0,{\displaystyle \frac{q^2}{2p^+}}).`$ (2) We found the four transformations required to move from the Drell-Yan-West frame to the Breit frame. These are represented schematically in Fig. 1. 
The first transformation to the Breit frame is the boost in the z-direction which eliminates the three-momentum of the meson. The necessary boost parameter $`\omega `$ must satisfy $`\mathrm{sinh}\omega =\frac{|𝐩|}{M}`$, generating the Lorentz transformation $`\mathrm{\Lambda }_1=\left(\begin{array}{cccc}\frac{E}{M}& 0& 0& \frac{|𝐩|}{M}\\ 0& 1& 0& 0\\ 0& 0& 1& 0\\ \frac{|𝐩|}{M}& 0& 0& \frac{E}{M}\end{array}\right).`$ (7) The momenta in the resulting frame are $`p=(M,0,0,0)`$ and $`q=(\frac{q^2}{2M},\sqrt{q^2},0,\frac{q^2}{2M})`$. The second transformation is a rotation about the y-axis so that both the final momentum of the massive particle and the momentum of the photon lie along the x-axis. This requires that $`\mathrm{tan}\theta =\frac{\sqrt{q^2}}{2M}`$. Throughout this paper we will refer to the quantity $`\frac{\sqrt{q^2}}{2M}`$ as $`\kappa `$. Therefore, $`\mathrm{\Lambda }_2=\left(\begin{array}{cccc}1& 0& 0& 0\\ 0& \frac{1}{\sqrt{1+\kappa ^2}}& 0& \frac{\kappa }{\sqrt{1+\kappa ^2}}\\ 0& 0& 1& 0\\ 0& \frac{\kappa }{\sqrt{1+\kappa ^2}}& 0& \frac{1}{\sqrt{1+\kappa ^2}}\end{array}\right).`$ (12) The initial momentum is unchanged, but the photon momentum is now $`q=(2M\kappa ^2,2M\kappa \sqrt{1+\kappa ^2},0,0)`$ and the final momentum is $`p^{}=(M+2M\kappa ^2,2M\kappa \sqrt{1+\kappa ^2},0,0)`$. The next transformation is a boost by parameter $`\omega _2`$ in the x-direction into a frame in which the incoming and outgoing momenta of the particle are equal and opposite,i.e., $`𝐩=𝐩^{}`$. Now $`\mathrm{sinh}\omega _2=\kappa `$, so that $`\mathrm{\Lambda }_3=\left(\begin{array}{cccc}\sqrt{1+\kappa ^2}& \kappa & 0& 0\\ \kappa & \sqrt{1+\kappa ^2}& 0& 0\\ 0& 0& 1& 0\\ 0& 0& 0& 1\end{array}\right).`$ (17) Now the momenta are $`p=(M\sqrt{1+\kappa ^2},M\kappa ,0,0)`$, $`q=(0,2M\kappa ,0,0)`$, and $`p^{^{}}=(M\sqrt{1+\kappa ^2},M\kappa ,0,0)`$. The final transformation is a boost opposite the initial one by the parameter $`\omega `$ generating $`\mathrm{\Lambda }_4=\left(\begin{array}{cccc}\frac{E}{M}& 0& 0& \frac{|𝐩|}{M}\\ 0& 1& 0& 0\\ 0& 0& 1& 0\\ \frac{|𝐩|}{M}& 0& 0& \frac{E}{M}\end{array}\right).`$ (22) The resulting reference frame is known as the Breit frame, with new momenta: $`p=(p^0\sqrt{1+\kappa ^2},M\kappa ,0,|𝐩|\sqrt{1+\kappa ^2})`$ and $`q=(0,2M\kappa ,0,0)`$. If we define $`Q=\sqrt{q^2}`$, the momenta in the light-front formalism $`k=(k^+,k^{},𝐤)`$ become $`p`$ $`=`$ $`(p_i^+\sqrt{1+\kappa ^2},p_i^{}\sqrt{1+\kappa ^2},{\displaystyle \frac{Q}{2}},0)`$ (23) $`q`$ $`=`$ $`(0,0,Q,0)`$ (24) $`p^{^{}}`$ $`=`$ $`(p_i^+\sqrt{1+\kappa ^2},p_i^{}\sqrt{1+\kappa ^2},{\displaystyle \frac{Q}{2}},0)`$ (25) where $`p_i`$ is the initial momentum of the massive particle in the Drell-Yan-West frame. The product of these four unitary transformations is a single unitary transformation, $`U`$, which characterizes the relationship between the two frames. To fully understand the invariance of electromagnetic form factors under $`U`$, we must know how the matrix elements of the current transform. We explicitly verified that $`U^{}I^\mu U=\mathrm{\Lambda }_\nu ^\mu I^\nu `$, where $`\mathrm{\Lambda }`$ is the full Lorentz transformation matrix. 
Applying this matrix to an arbitrary current four-vector $`I^\mu `$ allows us to find the current-operator relations: $`U^{}I^+U`$ $`=`$ $`\sqrt{1+\kappa ^2}I^+`$ (26) $`U^{}I^{}U`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{1+\kappa ^2}}}I^{}+{\displaystyle \frac{p_i^{}\kappa }{M\sqrt{1+\kappa ^2}}}\left({\displaystyle \frac{p_i^{}\kappa }{M}}I^+2I^1\right)`$ (27) $`U^{}I^1U`$ $`=`$ $`I^1{\displaystyle \frac{p_i^{}\kappa }{M}}I^+`$ (28) $`U^{}I^2U`$ $`=`$ $`I^2`$ (29) Notice that the plus component of the current in Drell-Yan-West frame is proportional to the plus component of the current in the Breit frame. This correspondence is necessary if the convolution formalism is to be valid in both frames. We can use these relations to find a closed form for the generator of the transformation. First, note that the operator must have the form $`U=\mathrm{exp}i\left(\alpha 𝒥^++\beta 𝒦^3+\gamma 𝒦^{}\right)`$, where we have defined $`𝒥^+=𝒥^2+𝒦^1`$ and $`𝒦^{}=𝒦^1𝒥^2`$ to be components of angular momentum and boost on the light-front, respectively. The coefficient $`\gamma `$ must be zero according to arguments presented in the Appendix. In Fig. 2, the first (dotted) and second (dashed-dotted) order perturbative expansions for $`\gamma `$ are plotted versus the parameter $`\kappa `$ to demonstrate that the expansion converges to $`\gamma =0`$. Next, by using the Campbell-Baker-Hausdorff relation with the appropriate Poincare algebra we find $`U^{}I^+U`$ $`=`$ $`I^+e^\beta `$ (30) $`U^{}I^1U`$ $`=`$ $`I^1+I^+{\displaystyle \frac{\alpha }{\beta }}\left(e^\beta 1\right).`$ (31) Thus the full transformation from the Drell-Yan-West frame to the Breit frame can be represented by the unitary transformation $$U=\mathrm{exp}i\left(\alpha 𝒥^++\beta 𝒦^3\right)$$ (32) where $`\alpha `$ $`=`$ $`{\displaystyle \frac{\kappa }{1\sqrt{1+\kappa ^2}}}{\displaystyle \frac{p_i^{}}{M}}\mathrm{ln}\sqrt{1+\kappa ^2}`$ (33) $`\beta `$ $`=`$ $`\mathrm{ln}\sqrt{1+\kappa ^2}.`$ (34) This result was checked explicitly by both construction of the corresponding Lorentz transformation matrix and comparison with the perturbative expansion of $`e^{i\omega 𝒦^3}e^{i\omega _2𝒦^1}e^{i\theta 𝒥^2}e^{i\omega 𝒦^3}`$ as presented in the Appendix. Plots of the coefficients $`\alpha `$ and $`\beta `$ are included for comparison with the perturbative expansion with $`\omega =0`$ in Figs. 3 and 4. Solid lines correspond to exact solutions; dotted lines are first-order expansions; and dashed-dotted lines are up to third-order expansions. Note that the second-order terms are absent as shown in the Appendix if $`\omega =0`$ (See Eq.(A.4)). The dashed-dotted line is indistinguishable from the solid line in Fig. 4. To verify the invariance of the pseudoscalar form factor under this transformation, consider the relation $`<p^{}|I^\mu |p>=(p+p^{})^\mu F`$, where the matrix element and the momenta are defined in the Drell-Yan-West frame. In the Breit frame this becomes $`<p_D^{}|U^{}I^\mu U|p_D>=\mathrm{\Lambda }_\nu ^\mu <p_D^{}|I^\nu |p_D>=\mathrm{\Lambda }_\nu ^\mu (p+p^{})_D^\nu F=(p+p^{})_B^\mu F`$. Since the relation must hold in any reference frame, we see that the form factor $`F`$ is invariant under Lorentz transformations. This can be easily confirmed for the $`U`$ transformation by using the current-operator relations given by Eq.(2.7). More importantly, the Drell-Yan-West particle-number-conserving convolution formalism will be valid in the Breit frame. 
This is true since the plus component of any four-vector (i.e., current) is unmixed with other components under this change of reference frame indicating $`q^+=0q^+=0`$. Thus, the non-valence (pair-creation) diagrams remain excluded. ## III Transformation of Helicities and the Equivalence of Vector Form Factors The invariance of vector or spin-one form factors is nontrivial due to the frame-dependence of light-front helicities. In this section we find that light-front helicity eigenvalues are unchanged under transformation from the Drell-Yan-West to the Breit reference frame. The polarization vectors for light-front helicity eigenstates in the Drell-Yan-West frame are given by $`ϵ(0)`$ $`=`$ $`{\displaystyle \frac{1}{M}}(p_i^+,p_i^{},0,0)`$ (35) $`ϵ(+1)`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(0,0,1,i)`$ (36) $`ϵ(1)`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(0,0,1,i)`$ (37) $`ϵ^{^{}}(0)`$ $`=`$ $`{\displaystyle \frac{1}{M}}(p_i^+,p_i^{}+{\displaystyle \frac{Q^2}{p_i^+}},Q,0)`$ (38) $`ϵ^{^{}}(+1)`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(0,{\displaystyle \frac{2Q}{p_i^+}},1,i)`$ (39) $`ϵ^{^{}}(1)`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(0,{\displaystyle \frac{2Q}{p_i^+}},1,i)`$ (40) while the corresponding vectors in the Breit frame are: $`ϵ(0)`$ $`=`$ $`{\displaystyle \frac{1}{M}}(p_i^+\sqrt{1+\kappa ^2},p_i^{}{\displaystyle \frac{\kappa ^21}{\sqrt{1+\kappa ^2}}},{\displaystyle \frac{Q}{2}},0)`$ (41) $`ϵ(+1)`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(0,{\displaystyle \frac{Q}{p_i^+\sqrt{1+\kappa ^2}}},1,i)`$ (42) $`ϵ(1)`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(0,{\displaystyle \frac{Q}{p_i^+\sqrt{1+\kappa ^2}}},1,i)`$ (43) $`ϵ^{^{}}(0)`$ $`=`$ $`{\displaystyle \frac{1}{M}}(p_i^+\sqrt{1+\kappa ^2},p_i^{}{\displaystyle \frac{\kappa ^21}{\sqrt{1+\kappa ^2}}},{\displaystyle \frac{Q}{2}},0)`$ (44) $`ϵ^{^{}}(+1)`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(0,{\displaystyle \frac{Q}{p_i^+\sqrt{1+\kappa ^2}}},1,i)`$ (45) $`ϵ^{^{}}(1)`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(0,{\displaystyle \frac{Q}{p_i^+\sqrt{1+\kappa ^2}}},1,i).`$ (46) In each frame, the polarization vector representing a particular helicity eigenstate is determined by applying the three constraints $`ϵ(p,\lambda )p`$ $`=`$ $`0`$ (47) $`ϵ^{}(p,\lambda )ϵ(p,\lambda )`$ $`=`$ $`\delta _{\lambda \lambda ^{}}`$ (48) $`{\displaystyle \underset{\lambda }{}}ϵ^\mu (p,\lambda )ϵ^\nu (p,\lambda )`$ $`=`$ $`g^{\mu \nu }+{\displaystyle \frac{p^\nu p^\mu }{M^2}}`$ (49) and choosing $`ϵ(p,0)`$ so that its plus and perpendicular components are proportional to the plus and perpendicular components of the momentum $`p`$. In general, any polarization vector in one frame transforms to a superposition of polarization vectors in a second reference frame. Thus, light-front helicities are not Lorentz invariant. Applying the transformations described in the previous section to the six polarization vectors in the Drell-Yan-West frame, however, we obtain exactly the corresponding six vectors in the Breit frame. This implies that a state of helicity $`\lambda `$ in the Drell-Yan-West frame transforms to a state of the same helicity $`\lambda `$ in the Breit frame. One consequence of this special relationship between the Drell-Yan-West and Breit reference frames is that the form of the angular condition, which is used to check the accuracy of model vector meson wavefunctions, is identical in both reference frames. 
The angular condition in the Drell-Yan-West frame is given by $$\mathrm{\Delta }(Q^2)=(1+2\kappa ^2)F_{++}^++F_+^+\kappa \sqrt{8}F_{+0}^+F_{00}^+=0,$$ (50) where $`F_{\lambda \lambda ^{}}^+=<p^{},\lambda ^{}|I^+|p,\lambda >`$. To see how Eq.(3.4) holds also in the Breit frame, first note that $`<p_B^{},\lambda _B^{}|I^\mu |p_B,\lambda _B>=<p_D^{},\lambda _D^{}|U^{}I^\mu U|p_D,\lambda _D>`$. Thus, each matrix element transforms according to the current-operator relations (see Eq.(2.7)) presented in the last section. Since the helicity is invariant under this particular transformation, the matrix elements in the Breit frame are given by $`<p_B^{},\lambda _B^{}|I^\mu |p_B,\lambda _B>`$ $`=`$ $`{\displaystyle \underset{m,n}{}}<p^{},\lambda ^{}(n)|U^{}I^\mu U|p,\lambda (m)>c_mc_n`$ (51) $`=`$ $`<p_D^{},\lambda _D^{}|U^{}I^\mu U|p_D,\lambda _D>=\mathrm{\Lambda }_\nu ^\mu <p_D^{},\lambda _D^{}|I^\nu |p_D,\lambda _D>.`$ (52) This shows that $`F_{\lambda \lambda ^{}}^+\sqrt{1+\kappa ^2}F_{\lambda \lambda ^{}}^+`$ under the transformation from the Drell-Yan-West frame to the Breit frame. Therefore, Eq.(3.4) must also hold in the Breit frame. Similarly, one can show not only that $`q^+=0q^+=0`$ but also that the relations between the vector form factors $`\{F_1,F_2,F_3\}`$ and $`F_{\lambda \lambda ^{}}^+`$ remain the same in both frames. This is a remarkable feature of the two frames which results in the invariance of the convolution formalism. Furthermore, the matrix elements are related to the vector form factors as follows $`<p^{},\lambda ^{}|I^\mu |p,\lambda >=ϵ_\alpha ^{^{}}ϵ_\beta [g^{\alpha \beta }(p+p^{})^\mu F_1+(g^{\mu \beta }q^\alpha g^{\mu \alpha }q^\beta )F_2+{\displaystyle \frac{q^\alpha q^\beta (p+p^{})^\mu F_3}{2M^2}}].`$ (53) Under Lorentz transformations all scalar products on the right remain invariant, so that each term on the right-hand side transforms as the $`\mu `$ component of a momentum or polarization four-vector times a form factor. This means that in the Briet frame the matrix elements can be written as $`<p_B^{},\lambda _B^{}|I^\mu |p_B,\lambda _B>={\displaystyle \underset{i}{}}\rho _{B}^{\mu }{}_{i}{}^{}F_i^B,`$ (54) where $`\rho _{B}^{\mu }{}_{i}{}^{}(i=1,2,3)`$ is the coefficient of $`F_i`$ (see Eq.(3.6)) in the Breit frame. According to Eq.(3.5), the left-hand side of Eq.(3.7) is also equal to $`\mathrm{\Lambda }_\nu ^\mu <p_D^{},\lambda _D^{}|I^\nu |p_D,\lambda _D>`$ $`=`$ $`\mathrm{\Lambda }_\nu ^\mu {\displaystyle \underset{i}{}}\rho _{D}^{\mu }{}_{i}{}^{}F_i^D`$ (55) $`=`$ $`{\displaystyle \underset{i}{}}\rho _{B}^{\mu }{}_{i}{}^{}F_i^D.`$ (56) Therefore, $`_i\rho _{B}^{\mu }{}_{i}{}^{}F_i^B=_i\rho _{B}^{\mu }{}_{i}{}^{}F_i^D`$. This can only be true if all three form factors $`F_1,F_2,F_3`$ are identical in both frames. ## IV The $`V`$-Transformation It is well-known that helicities in the traditional equal-time quantization are frame dependent. We expect a similar phenomenon on the light-front. In order to demonstrate that light-front helicities are not frame-invariant in general, consider the following example. Define a transformation $`V`$ which is identical to the previous transformation, $`U`$, except that each boost is reversed; i.e., $`\omega \omega `$ and $`\omega 2\omega 2`$. 
Under this transformation we find the current-operator relations $`V^{}I^{}V`$ $`=`$ $`\sqrt{1+\kappa ^2}I^{}`$ (57) $`V^{}I^+V`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{1+\kappa ^2}}}I^++{\displaystyle \frac{p_i^{}\kappa }{M\sqrt{1+\kappa ^2}}}\left({\displaystyle \frac{p_i^{}\kappa }{M}}I^{}+2I^1\right)`$ (58) $`V^{}I^1V`$ $`=`$ $`I^1+{\displaystyle \frac{p_i^{}\kappa }{M}}I^{}`$ (59) $`V^{}I^2V`$ $`=`$ $`I^2.`$ (60) Note that the plus component of the current is not proportional to the plus component of the current in the Drell-Yan-West frame. Now applying the same constraints (3.3) as before, we obtain the polarization vectors for the initial meson: $`ϵ(0)`$ $`=`$ $`{\displaystyle \frac{1}{M}}(k^+,{\displaystyle \frac{k_{}^{}{}_{}{}^{2}M^2}{k^+}},k_{},0)`$ (61) $`ϵ(+1)`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(0,{\displaystyle \frac{2k_{}}{k^+}},1,i)`$ (62) $`ϵ(1)`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(0,{\displaystyle \frac{2k_{}}{k^+}},1,i),`$ (63) where the parameters $`k^+,k_{}`$ are components of the fermion’s light-front momentum in the new $`V`$-frame which is given by $`k=({\displaystyle \frac{p^{}}{M^2\sqrt{1+\kappa ^2}}}\left[(p^+)^2+\kappa ^2(p^{})^2\right],p^{}\sqrt{1+\kappa ^2},{\displaystyle \frac{\kappa (p^{})^2}{M}},0).`$ (64) Similarly, the components of the photon’s light-front momentum $`k_\gamma `$ in the $`V`$-frame are $$k_\gamma =(\frac{1}{p^+M^2\sqrt{1+\kappa ^2}}\left[M^2+\kappa ^2(p^{})^2\right],\frac{Q^2}{p^+}\sqrt{1+\kappa ^2},\frac{Q}{p^+}(2x^2p^{}+p^+),0).$$ (65) The transformation $`V`$ can be represented as the unitary operator $`V=\mathrm{exp}i[\alpha 𝒦^{}\beta 𝒦^3]`$, where $`\alpha `$ and $`\beta `$ were presented in section II. Transforming the Drell-Yan-West polarization vectors according to $`V`$ yields a set of polarization vectors in this new $`V`$-frame. None of these polarization vectors, however, represents a helicity eigenstate in this frame. We obtain, for example, that $`ϵ(0)^+={\displaystyle \frac{p^{}}{M^3\sqrt{1+\kappa ^2}}}\left[(p^+)^2\kappa ^2(p^{})^2\right].`$ (66) The correct plus component of the zero-helicity polarization vector is $`\frac{k^+}{M}`$ from Eq.(4.2) or $`ϵ(0)^+={\displaystyle \frac{p^{}}{M^3\sqrt{1+\kappa ^2}}}\left[(p^+)^2+\kappa ^2(p^{})^2\right],`$ (67) according to Eq.(4.3). The subtle sign difference is a consequence of changing the sign of each boost in the transformation. For a stronger example, consider the transverse polarization vectors. We obtain by transforming the Drell-Yan-West vectors $`ϵ(+1)^+`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}({\displaystyle \frac{2\kappa p^{}}{M\sqrt{1+\kappa ^2}}})`$ (68) $`ϵ(1)^+`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}({\displaystyle \frac{2\kappa p^{}}{M\sqrt{1+\kappa ^2}}}).`$ (69) These terms must be zero, as presented in (4.2), to satisfy the necessary constraints in the $`V`$-frame. Thus, light-front helicity is not invariant under this $`V`$-transformation. This implies that the convolution formulation used in the Drell-Yan-West frame cannot be used in the $`V`$-frame. As a result, we expect a different form for the angular condition in this frame. ## V Conclusion We applied the four operations in the calculation of the pseudoscalar form factor and found that the form factor can be identically obtained in the Drell-Yan-West and Breit reference frames using the simple convolution formalism as long as the plus component of the current is used. 
We also applied the four operations in the calculation of spin-1 (vector) form factors, $`F_1,F_2,F_3(G_E,G_M,G_Q)`$. We found that the light-front helicities become identical in the two frames even though the light-front helicities are in general frame dependent. Thus, all three form factors obtained by the particle-number-conserving convolution formalism must be identical in the two frames confirming the correctness of previous applications to the vector meson form factors in the Breit frame. We also find that the angular condition is identical in the two frames. This is a remarkable result because it works only in very limited special frames. Thus, the Drell-Yan-West and Breit frames can be regarded as such special frames. However, such coincidence does not generally hold for other reference frames. One needs thus to investigate the frame-independence of form factors with care before relying on a particular reference frame. In summary, the two typical frames (Drell-Yan-West and Breit) are special in the light-front computation since the plus components of the current in the two frames are proportional to each other and the light-front helicities are equivalent in these two frames. Thus, the angular condition is also identical and the same convolution formula obtained in the Drell-Yan-West frame can equivalently be used in the Breit frame. However, in other frames, caution is needed in using the Drell-Yan-West convolution formalism. ###### Acknowledgements. This work was supported in part by a grant from the US Department of Energy. ## A Comparison with Perturbative Expansion Using CBH Relations According to the Campbell-Baker-Hausdorff Theorem, if $`e^Ae^B=e^C`$ then $`C`$ can be expressed as an expansion in terms of commutators. The first few terms are $`C=A+B+{\displaystyle \frac{1}{2}}[A,B]+{\displaystyle \frac{1}{12}}([A,[A,B]][[A,B],A])+\mathrm{}.`$ (A1) Applying this theorem three times allows us to combine the unitary transformation $`e^{i\omega 𝒦^3}e^{i\omega _2𝒦^1}e^{i\theta 𝒥^2}e^{i\omega 𝒦^3}`$ into a single exponential, providing us with a perturbative expansion for the generator of the $`U`$-transformation. $`C`$ $`=`$ $`{\displaystyle \frac{1}{2}}𝒥^+(a+b)(1{\displaystyle \frac{1}{2}}\omega +{\displaystyle \frac{1}{12}}\omega ^2+{\displaystyle \frac{1}{12}}\omega c)+{\displaystyle \frac{1}{2}}𝒦^{}(ab)(1{\displaystyle \frac{1}{2}}\omega +{\displaystyle \frac{1}{12}}\omega ^2+{\displaystyle \frac{1}{12}}\omega c)`$ (A2) $`+`$ $`𝒦^3(c\omega {\displaystyle \frac{1}{12}}\omega (a^2b^2))+\mathrm{}.`$ (A3) Where $`a,b,`$ and $`c`$ are given up to fourth order as $`a`$ $`=`$ $`\omega _2+{\displaystyle \frac{1}{2}}\omega \theta +{\displaystyle \frac{1}{12}}\omega _2(\omega ^2+\theta ^2)+{\displaystyle \frac{1}{12}}(\omega _{2}^{}{}_{}{}^{2}\omega \theta )`$ (A4) $`b`$ $`=`$ $`\theta +{\displaystyle \frac{1}{2}}\omega \omega _2{\displaystyle \frac{1}{12}}\theta (\omega ^2+\omega _{2}^{}{}_{}{}^{2})`$ (A5) $`c`$ $`=`$ $`\omega +{\displaystyle \frac{1}{2}}\theta \omega _2+{\displaystyle \frac{1}{12}}\omega (\theta ^2+\omega _{2}^{}{}_{}{}^{2}).`$ (A6) Suppose that $`\omega =0`$. 
Then, keeping up to third order, we have $`C={\displaystyle \frac{1}{2}}𝒥^+(\omega _2\theta +{\displaystyle \frac{1}{12}}\theta ^2\omega _2{\displaystyle \frac{1}{12}}\omega _{2}^{}{}_{}{}^{2}\theta )+{\displaystyle \frac{1}{2}}𝒦^{}(\omega _2+\theta +{\displaystyle \frac{1}{12}}\theta ^2\omega _2+{\displaystyle \frac{1}{12}}\omega _{2}^{}{}_{}{}^{2}\theta )+{\displaystyle \frac{1}{2}}𝒦^3\omega _2\theta .`$ (A7) Using the facts that $`\mathrm{tan}\theta =\kappa `$ and $`\mathrm{sinh}\omega _2=\kappa `$, we can examine the coefficient of each operator versus the parameter $`\kappa `$. In Figs.2-4, each coefficient corresponding to $`\alpha ,\beta `$, and $`\gamma `$ in Eq.(A.4) are compared with the closed form presented earlier (See Eq.(2.10)). For small values of $`\kappa `$ the closed form and the expansion agree well, and their values diverge slowly with increasing $`\kappa `$ as expected. The coefficient of the $`𝒦^{}`$ operator approaches zero for a given $`\kappa `$ as the order of approximation increases. One can indeed verify $`\gamma =0`$ because we have already shown that the plus component of current in the Drell-Yan-West frame must be proportional to the plus component of current in the Breit frame (See Eq.(2.7)). Since the commutator $`[I^+,𝒦^{}]=2iI^1`$, including a $`𝒦^{}`$ term in the transformation would contradict Eq.(2.7). Thus, the coefficient $`\gamma `$ must be zero.
no-problem/0003/hep-ph0003227.html
ar5iv
text
# 1 Introduction ## 1 Introduction It has long been established that the B-meson system (both charged and neutral) may be the ideal place to look for indirect effects of physics, both CP-conserving and CP-violating, beyond the Standard Model (BSM) . The reasons, in brief, are: * The $`B\overline{B}`$ mixing is dominated by the short-distance box diagram with the top quark running inside the loop. Thus, CP-violation is large and the major part of it, fortunately, can be calculated to a good precision. The long-distance part is anyway known to be negligible in B-decays (this may be compared with D decays where it is the dominant contribution). * The soft QCD effects are less pronounced for B than for D — this is due to the $`m_c/m_b`$ suppression. Thus, $`1/m_b`$ corrections are at least controllable. * Due to the CKM suppression, the lifetime of the B meson is sufficiently large to be accurately measured. This has important implications in the asymmetric B-factories. * Due to the abovementioned reasons, we will have a number of dedicated B-factories in the near future, apart from the running ones like CLEO. We will need the hadronic machines to measure CP-violation in modes with branching ratios (BR) less than $`10^5`$. However, before one proceeds, one must remember that the theoretical uncertainties are still significant, and will probably remain so in near future . These uncertainties are dominated by our ignorance of soft-QCD physics. For example, approximations like factorization and quark-hadron duality (both local and global) are not at all beyond doubt; the numerical inputs like the strange quark mass, the number of effective colour $`N_c`$ and the regularization scale $`\mu `$ are not yet certain and should be treated as more or less free parameters. With all these handicaps, it is extremely difficult to find the signature of BSM physics if that is more than one order of magnitude smaller than the SM contribution. Fortunately, there are cases when the BSM signal may be equally (or more) large as the SM one, and can be easily distinguished. There are two major ways to proceed. First, one can look for CP-asymmetries, both direct and mixing-induced, and see whether they tally with the SM predictions. Thus, if the SM amplitude for a particular process be $`A_{SM}\mathrm{exp}(i\theta _{SM})`$ and the BSM amplitude for the same process be $`A_{BSM}\mathrm{exp}(i\theta _{BSM})`$, the total amplitude is given by $$A_{tot}e^{i\theta _{tot}}=A_{SM}e^{i\theta _{SM}}(1+he^{i\varphi })$$ (1) where $`hA_{BSM}/A_{SM}`$ and $`\varphi =\theta _{BSM}\theta _{SM}`$. The change in CP-asymmetry is essentially governed by $`h`$, which should be $`𝒪(1)`$ for the BSM physics to be visible. Such investigations involve the measurement of the angles as well as the sides of the unitarity triangle (UT). Here, one may face a number of different situations, some of which are: (i) The three angles of the UT do not sum up to $`\pi `$. This is a definite signal of new physics, but, considering the errors in determining the angles, may not be easily obtained even in the B-factories. Even if one measures the functions $`\mathrm{sin}2\alpha `$, $`\mathrm{sin}2\beta `$ and $`\mathrm{sin}^2\gamma `$ from CP-asymmetries, one needs to resolve the discrete ambiguity to get the actual values of the angles . (ii) The angles do sum up to $`\pi `$, but the sides are not in the proper ratio. One needs to determine the sides too for this type of signal. 
(iii) CP-asymmetries measured from different modes, which should yield the same angle in SM, give different results. For example, $`J/\psi K_S`$ and $`\varphi K_S`$ modes may produce different CP asymmetries (both should give the same angle $`\beta `$ in SM), and one may find nonzero CP-asymmetries in $`bc`$ decay modes of $`B_s`$ (which, in SM, should not give any significant CP-asymmetry). (iv) One can observe sizable asymmetries in leptonic, semileptonic and radiative B-decays too. Secondly, one can concentrate on CP-conserving observables. A good place is the branching ratios (BR) of rare modes. CLEO already has some interesting signals which are listed in Table 1; there may be more in near future. Another excellent channel is to look for forbidden modes in the SM (like $`B^+K^+K^+\pi ^{}`$ ) where even a single event may signal BSM physics. OPAL has looked for such signals and placed limits on BSM couplings . Anyway, we should realize that quantification of BSM physics is something we must approach with caution; qualitative signals (e.g., $`h1`$) are what we can hope to observe quickly. Of course, if BSM physics is indicated from other experiments, then the B-system can be used to complement and quantify that. ## 2 Basic Formalism The unitarity triangle probed in B-decays is given by the orthogonality condition $$V_{ub}V_{ud}^{}+V_{cb}V_{cd}^{}+V_{tb}V_{td}^{}=0.$$ (2) The triangle, alongwith the angles $`\alpha `$, $`\beta `$ and $`\gamma `$, is shown in Fig. 1. The only two complex entries in the CKM matrix in Wolfenstein parametrization (WP) are $$V_{td}=|V_{td}|exp(i\beta ),V_{ub}=|V_{ub}|exp(i\gamma ).$$ (3) With WP, the tip of the UT has coordinates $`(\rho ,\eta )`$. The central values are $$\rho (1\lambda ^2/2)=0.240_{0.047}^{+0.057},\eta (1\lambda ^2/2)=0.335\pm 0.042.$$ (4) In the SM, the quark-level subprocesses that are important to determine the angles of the UT are shown in Table 2, which is mainly taken from . It is helpful to remember that $`B^0\overline{B^0}`$ mixing measures $`2\beta `$, $`bu`$ measures $`2\gamma `$, presence of both simultameously measures $`2\alpha `$ (assuming the UT closes), and $`B_s\overline{B_s}`$ mixing and $`bc`$ decay are CP-conserving to a very good extent. Some of such CP-conserving modes are also shown; a nonzero CP-asymmetry in them (say, in $`B_sJ/\psi \varphi `$) would be an encouraging signal for BSM physics. ## 3 Possible New Physics In this section, we first briefly review a couple of non-SUSY extensions of the SM, and the results are taken mainly from . After that we discuss two versions of SUSY. ### 3.1 Four Generations With four quark generations, the CKM matrix is $`4\times 4`$, with three independent phases. This makes UT a quadrangle, and the asymmetries measured by different processes will be different from their SM predictions: for example, the asymmetry measured in $`B^0\overline{B^0}`$ mixing is not only $`2\beta `$ but some $`2(\beta +\theta _d)`$ due to the $`t^{}`$ mediated box. The smoking gun signals may be the simultaneous measurements of $`\alpha `$, $`\beta `$ and $`\gamma `$ which will not sum up to $`\pi `$. Also, $`bd\gamma `$ and $`bd\mathrm{}^+\mathrm{}^{}`$ may be enhanced compared to their SM values depending on the magnitudes of $`V_{td}`$ and $`V_{t^{}d}`$ . CP asymmetry in $`BJ/\psi K_S`$ is negative for almost half of the parameter space, and almost 40% of the parameter space predicts the magnitude of CP asymmetry in $`B_sJ/\psi \varphi `$ to be more than $`0.2`$ (the SM asymmetry is almost zero) . 
### 3.2 Multi-Higgs Doublet with no FCNC In such models, the CP-asymmetries are almost identical to that of the SM, since the CKM matrix still has the same structure, and $`H^+\overline{u_i}d_j`$ couplings have the same phase as that of the SM. There may be a significant change in the total amplitude of $`B^0\overline{B^0}`$ mixing due to the $`H^+`$ box diagrams, which will in turn affect the value of $`V_{td}`$. An interesting signal in this model may be the $`B^0\mathrm{}^+\mathrm{}^{}`$ rates, which, for some particular choice of the parameter space, can be much higher than the SM ones. We do not discuss the spontaneous CP-violation scenario, since only spontaneous CP-violation would mean a real CKM matrix, which is ruled out from the $`K_LK_S`$ mass difference and the CDF measurement of $`\mathrm{sin}2\beta `$. ### 3.3 Supersymmetry with R-parity Conservation The minimal SUSY and its R-parity conserving variants are interesting mainly for the CP-violating observables; any CP-conserving observable like the BRs must have two SUSY particles in the loop and is thereby suppressed in general. As is well-known, there can be two independent phases in the minimal SUSY which can lead to interesting CP-violating effect. They can be written as $$\varphi _A=arg(A^{}m_{1/2}),\varphi _B=arg(m_{1/2}\mu (m_{12}^2)^{}),$$ (5) where the symbols have their usual meanings. For both these phases $`𝒪(1)`$, the dipole moment of neutron, for example, is larger than the experimental limit by two to three orders of magnitude for 100 GeV squarks. This is known as the Supersymmetric CP problem. Another problem arises in the measurement of $`ϵ_K`$ which turns out to be seven orders of magnitude larger than the experimental number unless there is some fine-tuning among the parameters. To solve the SUSY flavour problems regarding $`ϵ_K`$ and dipole moment of neutron, a number of different models were proposed. Among them are: (1) Heavy squarks at the TeV scale; (2) Universality among right and left squark masses for different generations; (3) Alignment of quark and squark mixing matrices, and (4) Approximate CP-symmetry of the Lagrangian. There are a number of specific flavour models in the literature which incorporates one or more of the above features . As has been pointed out, the effects on measured observables crucially depend on the exact structure of the model, and not all models in a given category have same CP-violating predictions. For example, alignment-type models predict $`𝒪(1)`$ contributions to $`B^0\overline{B^0}`$ mixing phases but very small contribution to the neutron dipole moment; heavy squark models (for the first two generations) may have a larger $`d_n`$. In Table 3, which is taken from , we summarise the predictions of various type of models. For a detailed discussion, see . Another interesting observable is the forward-backward lepton asymmetry (as well as the absolute BRs) in $`BX_s\mathrm{}^+\mathrm{}^{}`$ where $`\mathrm{}=e`$ or $`\mu `$ . For both the leptons, the SM predictions for $`A_{FB}`$ is $`0.23`$ but it can vary from $`0.33`$ to $`0.18`$ in SUSY models. The negative $`A_{FB}`$ constitutes an interesting signal. The BRs can be enhanced by a factor of four or can be suppressed by a factor of two, which should also be measured in the B-factories. 
### 3.4 Supersymmetry without R-parity R-parity violating (RPV) SUSY has one great advantage over the non-RPV SUSY models: the new physics contributions appear in the tree-level, and hence can greatly enhance or suppress the SM contributions. Here we will discuss the popular approach, i.e., we will consider all RPV couplings to be free parameters, constrained only by various experimental data, and study its consequences on B-decays. The physics of RPV has been discussed in detail in the literature, and here we will only quote the main results. For $`BM_1M_2`$ decays ($`M`$ is any meson in general) the relevant pair of couplings is either $`\lambda ^{}\lambda ^{}`$ or $`\lambda ^{\prime \prime }\lambda ^{\prime \prime }`$ type (but not both simultaneously). For $`BM\mathrm{}^+\mathrm{}_{}^{}{}_{}{}^{}`$ decays, it is $`\lambda \lambda ^{}`$ and $`\lambda ^{}\lambda ^{}`$ together. For example, we can have sneutrino/squark mediated $`bd_i\overline{d_j}d_k`$ decays and selectron/squark mediated $`bd_i\overline{u_j}u_k`$ decays ($`i,j,k`$ are generation indices). All B-decay modes are affected by suitable pair of RPV couplings; more specifically, all UT angles can change from their SM predictions. In SM, the decays $`BJ/\psi K_S`$ and $`B\varphi K_S`$ measure the same angle $`\beta `$; with RPV, the measured CP-asymmetries may be different, which will definitely signal new physics . One can see forbidden modes like $`B^+K^+K^+\pi ^{}`$ originating from the SM forbidden $`bss\overline{d}`$ decay . CP-asymmetries $`100\%`$ in the measurement of the UT angle $`\gamma `$ can be obtained even from $`B^+`$ decays ; thus, study of $`B^+`$s, alongwith $`B^0`$s and $`B_s`$s, are of paramount importance. The leptonic forward-backward asymmetries are modified too: for a pure $`\lambda \lambda ^{}`$ type coupling, there is no FB asymmetry, whereas for a $`\lambda ^{}\lambda ^{}`$ type coupling, it is in the opposite direction from SM . Another important feature is that RPV couplings can enhance or suppress the BRs significantly. As we have seen, the CLEO data on $`\eta ^{}K`$ and $`\eta K^{}`$ are quite far away from the SM prediction. It has been shown that a moderate value of the product coupling $`d_{222}^R\lambda _{i23}^{}\lambda _{i22}^{}`$ (each $`\lambda ^{}=0.05`$-$`0.09`$, say), perfectly compatible with the experimental bounds, can enhance the BRs to their experimental value. This is demonstrated in Fig. 2 where we have plotted the BR against $`\xi =1/N_c^{eff}`$, using $`m_s(1GeV)=165`$ MeV. At the same time, this product suppresses decays like $`B^0\varphi K`$; in SM, this decay is allowed only for $`\xi <0.23`$ but with RPV, larger ranges of $`\xi `$ are allowed. However, the SM range is in conflict with other PV modes such as $`B^\pm \omega K^\pm `$ and $`B^\pm \omega \pi ^\pm `$. The former requires either $`\xi <0.05`$ or $`0.65<\xi <0.85`$ while the latter requires $`0.45<\xi <0.85`$ . Interestingly, the $`d_{222}^R`$ operator affects $`B^\pm \varphi K^\pm `$ while the other two decay modes are blind to it. This additional contribution interferes destructively with the SM amplitude and $`BR(B^\pm \varphi K^\pm )`$ is suppressed leading to a wider allowed range for $`\xi `$. For example, with each $`\lambda ^{}=0.09`$, $`\xi `$ can be as large as $`0.8`$, thus allowing for a common fit to all the three ($`PV`$) modes under discussion<sup>2</sup><sup>2</sup>2Note that the favoured value of $`\xi `$ for the $`PP`$ and $`PV`$ modes still continue to be different. 
While this is not a discrepancy, a common $`\xi `$ for both these sets can be accommodated for values of $`\lambda ^{}`$ slightly larger than that we have considered.. (Note that $`\lambda ^{}=0.09`$ is just ruled out from $`BX_s\nu \overline{\nu }`$ data if we assume a sneutrino-squark degeneracy; but there is every reason to believe that squarks are heavier than sneutrinos, in which case our analysis still stands.) $`d_{222}^R`$ also affects $`VV`$ decay modes such as ($`B\varphi K^{}`$). As this calculation involves a few more model dependent parameters, we do not discuss it here. ## 4 Conclusions The study of B-decays, both in the CP-conserving and CP-violating fronts, is quite interesting to study indirect effects of new physics, more so in view of the upcoming B-factories. CLEO has already given some food for thought. Among various new physics models, non-SUSY extensions of the SM mainly affect the $`B^0\overline{B^0}`$ amplitude, and, maybe, phase. The determination of $`V_{td}`$ may be affected too. Different SUSY flavour models will have different signatures regarding the neutron dipole moment and asymmetries in $`J/\psi K_S`$ and $`\pi \nu \overline{\nu }`$ modes. RPV SUSY models can contribute to almost all B-decays, and can even induce some SM forbidden decyas. One of the important achievements is to explain the CLEO result on $`B\eta ^{}K`$ with RPV. However, this is only about the observation of BSM physics, and to have a qualitative measurement, one needs to minimize the theoretical errors, which will be the biggest challenge to the theoreticians in the next few years. In short, we await some really exciting years on both theoretical and experimental fronts! ## 5 Acknowledgements I thank Rahul Basu and other organisers of WHEPP-6 for providing a most stimulating atmosphere. Some of the material presented here is based on the work done with Debajyoti Choudhury and Bhaskar Dutta.
no-problem/0003/hep-th0003072.html
ar5iv
text
# 1 Introduction ## 1 Introduction Solitonic solutions of supergravity have played a crucial role in achieving our current understanding of nonperturbative string theory. The identification of a subset of these solutions with the strong coupling limit of D-branes was central to the resolution of longstanding puzzles in black hole physics and large N gauge theories. Although in general the properties of this solutions are well known (see reviews ), it is fair to say that some solutions have been explored less than others. Such is the case of 7-branes and instantons in type IIB. Seven branes play a central role in different recent developments in nonperturbative string theory. Two examples are the instrumental role in the formulation of F-theory and in constructions of supersymmetric gauge theories . The role of the type IIB instanton has been widely discussed in the context of the AdS/CFT correspondence. In this paper we attempt to remedy this situation by widening the class of seven-brane solutions <sup>1</sup><sup>1</sup>1Some papers considering different aspects of seven branes include . and showing that the D-instanton of Gibbons, Green and Perry admits a very natural generalization. We also hope to clarify some misconceptions in the literature by showing how some solutions that have been regarded as independent of each other are actually particular cases of a one-parametric family of supersymmetric seven-brane solutions. In section 2 we study various seven-brane constructions. After reviewing the most general solution of the seven-brane ansatz and showing how it reduces to previously known solutions we study seven branes from a holomorphic point of view, this is a distinguishing feature of seven-branes that have been widely exploited in nonperturbative constructions. We show that holomorphic seven branes are very similar to the nonlinear $`O(3)`$ model used to describe isotropic ferromagnets. In particular we show explicitly that by imposing very reasonable asymptotic conditions on the solution, one is led to a homotopical classification for the charge of the seven-brane, which is not possible for D-branes in general. In the two last subsections of section 2 we construct new seven-brane solutions, we consider both electric and magnetic seven-branes. The gauge field of the electrically charged seven-brane displays a kink-like behavior. In section 3 we discuss the supersymmetry of seven branes and show that these equations admit the one-parametric family of solutions constructed at the beginning of section 2. In section 4 we construct generalizations of the D-instanton of . ## 2 On seven-branes ### 2.1 The various D7-branes In this subsection we review the known seven-brane solutions and clarify a common misconception about the D7-brane. We argue that three seven-brane constructions found in the literature are basically the same object. The seven-brane constructed implicitly in along the lines of the traditional p-brane solution, the GGP seven brane and a circular seven-brane constructed in are all, to an extent to be specified shortly, particular cases of a unique construction. Let us start by reviewing the general p-brane construction in 10-dimensions. 
One starts with the following Einstein-frame action $$S_{II,p}=\frac{1}{16\pi G}d^{10}x\sqrt{g}\left(R\frac{1}{2}(\varphi )^2\frac{1}{2(p+2)!}e^{\frac{3p}{2}\varphi }F_{p+2}^2\right).$$ (1) The equations of motion (EOM) for $`p=1`$ are: $`R_{MN}`$ $`=`$ $`{\displaystyle \frac{1}{2}}(_M\varphi _N\varphi +e^{2\varphi }F_MF_N),`$ $`{\displaystyle \frac{1}{\sqrt{g}}}_M(\sqrt{g}g^{MN}_N\varphi )`$ $`=`$ $`e^{2\varphi }g^{MN}F_MF_N,`$ $`_M(\sqrt{g}g^{MN}e^{2\varphi }F_N)`$ $`=`$ $`0.`$ (2) Here, $`F_M=_M\mathrm{a},`$ where $`\mathrm{a}`$ is the zero-form, R-R potential. Unlike the quantum theory, the classical theory is invariant under a rescaling of the action, and, as the Lagrangian has no mass parameter, this feature is critical for understanding some of the properties of classical solitons, such as D-branes, in the absence of other scales. One expects the D7-brane to be a singular, magnetic solution of $`S_{II,1}`$ and the corresponding ansatz for the metric and field strength is $`ds^2`$ $`=`$ $`e^{2A(r)}\eta _{\mu \nu }dx^\mu dx^\nu +e^{2B(r)}\delta _{mn}dy^mdy^n,`$ $`F_m`$ $`=`$ $`\lambda ϵ_{mn}{\displaystyle \frac{y^n}{r^2}}\mathrm{or}a=\lambda \theta ,`$ (3) with $`\mu ,\nu =0,\mathrm{},7`$ , $`m,n=1,2`$; $`ϵ_{12}=1`$; $`r=\sqrt{y_1^2+y_2^2}`$; $`\theta `$ is the angular coordinate in the transverse space. This ansatz for the metric is dictated by preserving $`P_8\times SO(2)`$, that is, Poincare invariance in the longitudinal directions and $`SO(2)`$ in the perpendicular ones. One very useful relation, which is true for all Dp-branes and is required by supersymmetry is $`Ad+B\stackrel{~}{d}=0`$, where $`d=p+1`$ and $`\stackrel{~}{d}=10p3`$ (see appendix for the general definition). Applied to seven-branes $`(\stackrel{~}{d}=1082=0),`$ one has $`A=0`$. The Ricci tensor in this case becomes $`R_{mn}=\delta _{mn}^2B`$ (see appendix). Einstein’s equations then imply $`\varphi ^{}`$ $`=`$ $`{\displaystyle \frac{\lambda e^\varphi }{r}},`$ $`B^{\prime \prime }+{\displaystyle \frac{B^{}}{r}}`$ $`=`$ $`{\displaystyle \frac{1}{2}}(\varphi ^{})^2.`$ (4) The first of these equations is characteristic of magnetic p-brane solutions and says that $`\mathrm{exp}\varphi `$ is a harmonic function in the transverse coordinates. The general solution to the second equation can be written as $$B=B_0+B_1\rho +\frac{1}{2}\mathrm{ln}(1+\lambda \rho ),$$ (5) where we have defined $`\rho \mathrm{ln}(r/r_0)`$, a notation that we shall find useful throughout the rest of this paper. By a rescaling of $`y^m`$, we may choose $`B_0=0`$, whence the entire background takes the form $`ds^2`$ $`=`$ $`\eta _{\mu \nu }dx^\mu dx^\nu +e^{2(B_1+1)\rho }(1+\lambda \rho )(d\rho ^2+d\theta ^2),`$ $`e^\varphi `$ $`=`$ $`1+\lambda \rho ,`$ $`F_m`$ $`=`$ $`\lambda ϵ_{mn}{\displaystyle \frac{y^n}{r^2}}.`$ (6) For sufficiently small $`r,`$ viz., $`r<r_1r_0\mathrm{exp}(1/\lambda )`$, the quantity $`1+\lambda \rho `$ can become negative, and the solution breaks down, since of course $`\mathrm{exp}(\varphi )`$ must be nonnegative. This also shows up as a singularity in the metric and, more importantly, as a naked singularity in the curvature. 
$$R=\frac{\lambda ^2}{r^{2(B_1+1)}(1+\lambda \rho )^3}$$ (7) In this respect, the 7-brane differs from lower p-branes, whose singularities do not occur at finite $`r.`$ This disturbing feature of the solution can and should be interpreted as the appearance of new physics in a natural way: The SUGRA Lagrangian, being nonrenormalizable, must be understood as the first term of an effective field theory involving an expansion in powers of the curvature so, as the curvature becomes large, the terms ignored become as important as the leading term. The value of $`\rho _1`$ is arbitrary because the Lagrangian is scale-invariant. However, by reference to other forms of matter, it is natural to expect such a breakdown on a scale $`rG^{1/8}.`$ Similarly, from the superstring point of view, one expects corrections at $`r\sqrt{\alpha ^{}}.`$ Moreover, since $`\mathrm{exp}(\varphi )`$ is the (local) string coupling constant, this becomes large as $`r`$ approaches $`r_1,`$ so that one cannot trust the classical approximation anyway. Therefore, even though in principle $`r_0`$ is simply an integration constant, we assume that it is sufficiently small so that our solution makes sense physically at distances $`r>>r_1`$. For $`B_1=0,`$ one has the typical harmonic function representation for p-branes . For $`B_1=1`$, one recovers the circular seven-brane of .<sup>2</sup><sup>2</sup>2To obtain precisely eq. (6.14) in it is necessary to make the following change: $`\mathrm{ln}rr`$ and to identify $`\lambda =\stackrel{~}{m}`$. It is not possible to establish a one-to-one relation with the solution of . One can however show that the asymptotic behavior of the solution of can be obtained from eq. (6)<sup>3</sup><sup>3</sup>3The limit we consider here has been previously discussed in the literature (see, for example, ). This limit is known as the weak coupling limit of the GGP solution. It is very natural to concentrate on the large $`r`$ asymptotic behavior because most of the constraints that string theory imposes through the boundary state formalism on supergravity solutions depend only on the asymptotic behavior. The notational correspondence between the asymptotic behavior of the dilaton and the axion field given in and the fields of eq. (6) are: $$\lambda =\frac{1}{2\pi },b=\frac{1}{r_0}.$$ (8) The asymptotic value of the dilaton $`b`$ appears in eq. (28) of . To establish the relation between the metrics we now turn to eq. (27) in (see next subsection) $$\overline{}\mathrm{ln}\mathrm{\Omega }=\frac{1}{2}\overline{}\mathrm{ln}\tau _2.$$ (9) where, in our notation, $`\mathrm{ln}\mathrm{\Omega }=B`$ and $`\tau _2=\mathrm{exp}(\varphi ).`$ The general solution to this equation is $`\mathrm{e}^{2B}=\tau _2\mathrm{exp}(F(z)+\overline{F}(\overline{z}))`$. where $`F`$ is an arbitrary holomorphic function. Rotational invariance in the transverse dimensions leads to the choice (6). Further specification within this class amounts to different choices of the parameter $`B_1`$, as shown before. The conditions imposed in were that the metric must be real, modular invariant and nondegenerate (for an extensive discussion see ). This brings us to $`\mathrm{exp}(2B)=(1+\lambda \mathrm{ln}(\frac{r}{r_0}))\eta ^2\overline{\eta }^2`$, where $`\eta `$ is the Dedekind eta function, which is perfectly compatible with (6) for asymptotically large $`r`$. To guarantee nondegeneracy of the metric in the presence of multiple seven-branes, other powers of $`r`$ must be included but this is still compatible with (6). 
### 2.2 Comments on holomorphic seven-branes In the case $`p=1`$ the action (1) can be rewritten as $$S_{II,1}=\frac{1}{16\pi G}d^{10}x\sqrt{g}\left(R\frac{1}{2\tau _2^2}_\mu \tau ^\mu \overline{\tau }\right),$$ (10) where $`\tau \tau _1+i\tau _2=a+ie^\varphi `$. One interesting property of the equation of motion following from this action is that, for the type of metric we are considering here ($`A=0`$,) the dilaton and axion equations are independent of the metric. This decoupling from the metric signals a complex structure that has been exploited in various contexts starting from . The equation of motion for $`\tau `$ is $$\overline{}\tau +\frac{2\tau \overline{}\tau }{\overline{\tau }\tau }=0,$$ (11) where $`zy^1+iy^2`$ and $`_z.`$ One can easily see that any holomorphic or antiholomorphic function $`\tau `$ is a solution to this equation. This particular class of solutions is required for supersymmetric solutions (see below). Einstein’s equations take the form $$\overline{}B=\frac{\tau \overline{}\overline{\tau }+\overline{}\tau \overline{\tau }}{2(\tau \overline{\tau })^2}=\frac{1}{2}\overline{}\mathrm{ln}\tau _2,$$ (12) where the last equation is true only for holomorphic or antiholomorphic $`\tau `$. At this stage one can conclude that the seven-brane-like solutions to the EOM obtained from (10) are characterized by an arbitrary holomorphic function $`F(z)`$. $$e^{(2B)}=\tau _2e^{2(F+\overline{F})}.$$ (13) This situation is familiar, it is typical of systems with two spatial dimensions, such is the case of the nonlinear $`O(3)`$ model that describes isotropic ferromagnets. In our problem, however, we have to go a step further because not any holomorphic function is acceptable as opposed to the case of the $`O(3)`$ model. The domain in which $`\tau `$ is defined presumably carries some information about the symmetries of the theory. The idea follows from the fact that in supergravity $`\tau `$ belongs to the upper half plane $`𝐇`$ and the symmetry of the theory is $`SL(2,R)`$. This symmetry is expected to be broken at the string theory level due to charge quantization to a discrete subgroup, $`SL(2,Z),`$ which is presumed to be a discrete gauge symmetry. In that case, $`\tau `$ must be restricted to the fundamental domain; this was the path taken in which is asymptotically compatible with eq. (6) and raises the natural question of how much supergravity solutions know about the full string theory. We will explore the restrictions that must be applied to $`\tau `$ such that a well-defined magnetic charge exists at the end of this subsection. If one insists on a metric preserving $`P_8\times SO(2)`$, which is the usual ansatz for the p-brane metric, then using Einstein eq. (13), one has that $`\tau _2=\tau _2(r^2)`$. The condition that the sum $`F+\overline{F}`$ be a function of $`r^2=z\overline{z}`$ poses a functional equation stating that the sum of $`F`$ and $`\overline{F}`$ must be expressed as a product of their arguments. This equation is solved for $`F(z)=B_1\mathrm{ln}z`$. Using holomorphicity of $`\tau :\overline{}\tau =0`$ we get: $`i\dot{a}/\overline{z}+a^{}z+i\tau _2^{}z=0`$, where the dot means derivative with respect to $`\theta `$ and the prime with respect to $`r^2`$. For purely magnetic solutions, $`(a^{}=0)`$, one has that $`\tau _2^{}=\dot{a}/(z\overline{z})`$. 
Since $`\tau _2^{}`$ is a function of only $`r^2=z\overline{z}`$ and $`\dot{a}`$ is a function of only $`\theta =(i/2)\mathrm{ln}z/\overline{z}`$, the only solution is $`\dot{a}=\lambda `$ which implies $`\tau _2=1+\lambda \mathrm{ln}r`$. This is essentially the solution presented in eq. (6). The converse is however not true. For any magnetically charged holomorphic seven-brane: $`i\dot{a}/(2\overline{z})+i\tau _2^{}z\dot{\tau }_2/(2\overline{z})=0`$. Since $`a`$ and $`\tau _2`$ are real, the only solution to this equation is $`\dot{\tau }_2=0`$ (i.e., $`\tau _2`$ is independent of the angle.) For the same reasons, $`\dot{a}=const=\lambda `$ and $`\tau _2=\mathrm{exp}(\varphi )=1+\lambda \rho .`$ This, however, is not enough to fix the form of $`F,`$ and the solution will in general depend on an arbitrary holomorphic function. The analogy with the $`O(3)`$ model can be pursued a step further to establish that the charge is equal to the degree of the map $`\tau z`$. For $`\mathrm{exp}(2\pi i\tau )=bz^n+c`$, one has that for asymptotically large $`r`$, $`a=n\theta /(2\pi )`$ which implies $`q=n`$, the same relation one has for the $`O(3)`$ model. This means that the charge equals the homotopy class of the map $`\tau z`$ and provides the homotopic sector classification known for solitons and instantons in gauge theories. This is a distinctive feature of seven-branes, the gauge fields in D-branes are $`U(1)`$ gauge fields and therefore the only chance to get some nontrivial homotopy is for two-dimensional transverse space due to $`\pi _k(U(1))=0`$ for $`k>1`$. It is interesting to note that, as in solitons and instantons in gauge theory, the long distance behavior determines the topological class of the solution on very general grounds and that one need not know the details of the solution for all values of the radius. ### 2.3 Electric kink-like seven brane In this section we construct an electric<sup>4</sup><sup>4</sup>4Here the term electric is used exclusively in the sense that $`a=a(r)`$ and, unlike the previous case, the conserved charge $`_{S^1}e^{2\varphi }(_ra)`$ is nonzero. seven-brane with a metric somewhat resembling the one of the circular seven-brane constructed in . We know from general considerations that the charge dual to the D7-brane charge (called “magnetic” here) is the D(-1)-brane charge, i.e., the Type IIB-instanton charge that couples locally to the zero-form $`a`$ field. The instanton, as a classical solution of the EOM, only exists in Euclidean space-time, although of course it represents a quantum tunneling amplitude in a Hilbert space built up in a space-time with Lorentz signature. The charge that the solution we present here carries plays a prominent role in the D-instanton solution of . One interpretation of this charge will be evident at the end of this section. We will look for solutions of the truncated IIB supergravity Lagrangian discussed in the previous sections. The main difference will be that, in the ansatz for the metric, we will not assume $`A=0`$ as we did previously, so, as mentioned earlier, such a solution will not be supersymmetric and not BPS.<sup>5</sup><sup>5</sup>5Hence, for strong coupling, we can expect quantum corrections to be large. The reason for starting with this general ansatz is that for $`A=0,`$ there are no electric solutions. From the Einstein equations eq. 
(2) and the general form of the Ricci tensor (see appendix) one has, in the transverse directions, $`R_{mn}=\delta _{mn}^2B`$, but the right hand side of Einstein’s equations is $`y_my_n/r^2((\varphi ^{})^2+e^{2\varphi }(a^{})^2)`$ which has a different tensor structure and is signaling that the equation can not be nontrivially satisfied. We will, therefore, consider the following ansatz $$ds^2=e^{2A(r)}\eta _{\mu \nu }dx^\mu dx^\nu +e^{2B(r)}\delta _{mn}dy^mdy^n.$$ (14) The EOM are given by eq. (2). The Einstein equation for $`g^{\mu \nu }`$ implies that $`^2e^{8A}`$ $`=`$ $`0,\mathrm{or}\mathrm{more}\mathrm{explicitly},`$ $`A^{\prime \prime }+{\displaystyle \frac{1}{r}}A^{}+8(A^{})^2`$ $`=`$ $`0.`$ (15) The solution of this harmonic equation is $$AA_0=\frac{1}{8}\mathrm{ln}\rho ,$$ (16) where, as before, $`\rho =\mathrm{ln}(r/r_0).`$ The Einstein equation in the direction transverse to the brane has two different tensor structures: $`\delta _{mn}`$ and $`y^my^n/r^2`$. The $`\delta _{mn}`$ equation gives $$B^{\prime \prime }+\frac{1}{r}B^{}+8A^{}B^{}+\frac{8}{r}A^{}=0.$$ (17) Solving this equation is straightforward, giving $$BB_0=\rho +B_1\mathrm{ln}\rho ,$$ (18) here $`B_1`$ is an arbitrary constant. Using the axion equation, one can rewrite the remaining Einstein equation (for $`y^my^n/r^2`$) in terms of the dilaton only $$\varphi ^{\prime \prime }+\frac{1+\rho ^1}{r}\varphi ^{}+(\varphi ^{})^2=(4B_1+\frac{7}{4})\frac{\rho ^2}{r^2}.$$ (19) This equation may be linearized by replacing $`\varphi `$ by $`\mathrm{exp}(\varphi )`$ and further simplified by defining $`t\mathrm{ln}\rho `$, yielding $$\frac{d^2}{dt^2}e^\varphi =\omega ^2e^\varphi ,$$ (20) Remarkably, this is invariant under translation of $`\mathrm{ln}(r/r_0)`$ (or rescaling of $`\rho `$,) but this is clearly not a symmetry of the other equations. The solution of this equation may be expressed in a variety of ways, e.g., $$e^\varphi =\mathrm{cosh}(\omega (tt_0))=\frac{1}{2}\left[\left(\frac{\rho }{\rho _0}\right)^\omega +\left(\frac{\rho _0}{\rho }\right)^\omega \right]$$ (21) where $`\omega =\sqrt{4B_1+7/4}`$, $`\rho _0`$ is a constant and $`t=\mathrm{ln}\rho `$. Up to this point, we actually have not found it necessary to have $`\omega ^2>0`$ but, from this solution, we see that this is required in order to have $`\mathrm{exp}(\varphi )0`$ for all $`t.`$<sup>6</sup><sup>6</sup>6This corresponds to the restriction $`B_17/16.`$ The general solution for the metric and the axion is, up to a global $`SL(2,R)`$ transformation, $`ds^2`$ $`=`$ $`(\rho )^{1/4}\eta _{\mu \nu }dx^\mu dx^\nu +r_0^2\rho ^{2B_1}(d\rho ^2+d\theta ^2),`$ $`a`$ $`=`$ $`\mathrm{tanh}\left(\omega \mathrm{ln}({\displaystyle \frac{\rho }{\rho _0}})\right),`$ (22) where $`\rho _0`$ is a constant. This has a form very similar to the circular seven-brane of . It is easy to see that $`|\tau |=1,`$ so that the solution corresponds to a classical solution in the $`\tau `$-plane that starts at $`\tau =1`$ and traverses a semicircle until arriving at $`\tau =+1.`$ We would like to stress at this point, although it will be completely clear in what follows, that the solution presented here, which we called electric seven brane because the charge that it carries is a Noether charge, i.e. it is conserved as a consequence of the equations of motion, is not a supersymmetric solution. In the absence of supersymmetry or a topologically conserved charge, it is not at all clear that this solution is stable. 
One of the properties of the charge of the electric seven brane presented here is that it is invariant under the $`SL(2,R)`$ transformation acting on the dilaton and action fields, viz., $`\tau (a\tau +b)/(c\tau +d)`$. This means that the electric charge can be viewed as an $`SL(2,R)`$ charge. A similar situation takes place for the instanton soltuion of . In this context the electric seven brane and the D-Instanton are two $`SL(2,R)`$-charged objects. ### 2.4 Magnetic seven-branes In this subsection we will construct general magnetic seven-branes. The ansatz for the metric and the field strength is exactly as at the beginning of this section, eq. (3). The Einstein equation of motion in the longitudinal directions to the brane is as in the previous subsection and therefore the solution for $`A`$ is as for the electric 7-brane eq. (16), $`AA_0=(1/8)\mathrm{ln}\mathrm{ln}r/r_0`$. The “off-diagonal” Einstein equation $$(\varphi ^{})^2\frac{e^{2\varphi }\lambda ^2}{r^2}=16(A^{\prime \prime }\frac{A^{}}{r})16(A^{})^2+32A^{}B^{}.$$ (23) The two “diagonal” equations are identical $$\frac{1}{2}(\varphi ^{})^2+8A^{\prime \prime }+B^{\prime \prime }+\frac{B^{}}{r}+8(A^{})^28A^{}B^{}=0.$$ (24) The axion equation, as is always the case for magnetic solutions, is identically satisfied. The dilaton equation becomes $$\varphi ^{\prime \prime }+\frac{\varphi ^{}}{r}+8A^{}\varphi ^{}=\frac{\lambda ^2e^{2\varphi }}{r^2}.$$ (25) A convenient way to solve these equations is, once again, to change variables to $`\rho =\mathrm{ln}r/r_0`$. In this new coordinate the dilaton equation becomes $$\varphi ^{\prime \prime }+\frac{\varphi ^{}}{\rho }=\lambda ^2e^{2\varphi },$$ (26) where the prime now denotes the derivative with respect to $`\rho `$ . Using eq. 23, equation 24 takes the following simple form in terms of $`\rho `$ $$B^{\prime \prime }+\frac{B^{}}{\rho }+\frac{1}{\rho }+\frac{\lambda ^2e^{2\varphi }}{2}=0.$$ (27) The previous two equations can be combined using the identity $`(\varphi ^{}\rho )^{}/\rho =\varphi ^{\prime \prime }+\varphi ^{}/\rho `$ into $$B^{}+\frac{\varphi ^{}}{2}=\frac{t_0}{\rho }1,$$ (28) where $`t_0`$ is an integration constant. These results can be plugged into eq. 23 giving $$\varphi ^{\prime \prime }\frac{\varphi ^{}}{\rho }(\varphi ^{})^2=\frac{(16t_0+7)}{4\rho ^2}.$$ (29) A linearization procedure, similar to the one discussed in the electric case, can be applied to this equation with the following modification $$\frac{d^2}{dt^2}e^{\varphi t}=\mathrm{\Lambda }^2e^{\varphi t},$$ (30) The complete solution takes the following form $`ds^2`$ $`=`$ $`\rho ^{1/4}\eta _{\mu \nu }dx^\mu dx^\nu +r_0^2\rho ^{2t_0+1}\mathrm{sinh}\left(\mathrm{\Lambda }\mathrm{ln}({\displaystyle \frac{\rho }{\rho _0}})\right)\left(d\rho ^2+d\theta ^2\right),`$ $`e^\varphi `$ $`=`$ $`e^{\varphi _o}\rho \mathrm{sinh}\left(\mathrm{\Lambda }\mathrm{ln}({\displaystyle \frac{\rho }{\rho _0}})\right),`$ $`a`$ $`=`$ $`\lambda \theta ,`$ (31) with the following algebraic constraint, $`\lambda e^{\varphi _0}=\mathrm{\Lambda },\mathrm{\Lambda }^2=4t_0+11/4`$. We could just set $`\varphi _0=0`$ so that $`\mathrm{\Lambda }`$ becomes $`\lambda `$. We shall now show that there is a limit in which this solution becomes the solution presented in eq. (6). In order to do so let us consider the following substitution $`8\mathrm{ln}(r_0)=1/C_O`$. 
Using this substitution and renaming some of the integration constants we can rewrite $`AA_0`$ $`=`$ $`{\displaystyle \frac{1}{8}}\mathrm{ln}(1+C_0\mathrm{ln}r),`$ $`BB_0`$ $`=`$ $`\mathrm{ln}r+(t_0+1/2)\mathrm{ln}(1+C_0\mathrm{ln}r){\displaystyle \frac{1}{2}}\mathrm{ln}\mathrm{sinh}(x_0+\mathrm{\Lambda }\mathrm{ln}(1+C_0\mathrm{ln}r)),`$ $`\varphi \varphi _0`$ $`=`$ $`\mathrm{ln}(1+C_o\mathrm{ln}r)\mathrm{ln}\mathrm{sinh}(x_0+\mathrm{\Lambda }\mathrm{ln}(1+C_0\mathrm{ln}r)).`$ (32) The limit to the “extremal” case eq. (6) is: $`C_0,x_00`$ and $`\mathrm{\Lambda },e^{2B_0},e^{\varphi _0}\mathrm{}`$ in such a way that $`C_0\mathrm{\Lambda }^2,C_0\mathrm{\Lambda }/x_0,e^{2B_0}x_0`$ and $`\mathrm{exp}\varphi _0x_0`$ are constant. This can be easily arranged by sending $`ϵ`$ to zero in the following relations $$C_0=ϵ^2,\mathrm{\Lambda }^2=\frac{B_1+1}{ϵ^2},x_0=\sqrt{B_1+1}\frac{\alpha }{\mu }ϵ,e^{2B_0}=e^{\varphi _0}=\frac{\mu }{\sqrt{B_1+1}}\frac{1}{ϵ}.$$ (33) This way we recover precisely the solution of eq. (6). ## 3 Comments on supersymmetric seven-branes In this section we will analyze the condition under which a seven-brane solution admits a supersymmetric generalization. The supersymmetry transformations of the corresponding fermionic fields in type IIB supergravity can be presented in an $`SU(1,1)`$ (see also the recent review ) or $`SL(2,R)`$ invariant formulation . The dilaton and axion fields of type IIB theory parametrize the upper half of the complex plane using the traditional parametrization $`\tau =a+ie^\varphi `$. Following , one can introduce the zweibein $`V_\pm ^\alpha `$, $$V=\left(\begin{array}{cc}V_{}^1& V_+^1\\ V_{}^2& V_+^2\end{array}\right),$$ (34) and define the local $`U(1)`$ action as $$VV\left(\begin{array}{cc}e^{i\mathrm{\Sigma }}& 0\\ 0& e^{i\mathrm{\Sigma }}\end{array}\right),$$ (35) where $`\mathrm{\Sigma }`$ is a $`U(1)`$ phase. It is convenient to parametrize the matrix $`V`$ following $$V=\frac{1}{\sqrt{2i\tau _2}}\left(\begin{array}{cc}\overline{\tau }e^{i\gamma }& \tau e^{i\gamma }\\ e^{i\gamma }& e^{i\gamma }\end{array}\right).$$ (36) One can fix the local $`U(1)`$ gauge symmetry by setting the scalar field $`\gamma `$ to be a function of $`\tau `$: $`\gamma =\gamma (\tau )`$ To write the type IIB supergravity EOM, it is convenient to introduce two $`SL(2,R)`$ singlet currents , $`P_M`$ $`=`$ $`ϵ_{\alpha \beta }V_+^\alpha _MV_+^\beta ={\displaystyle \frac{i}{2}}{\displaystyle \frac{_M\tau }{\tau _2}}e^{2i\gamma },`$ $`Q_M`$ $`=`$ $`iϵ_{\alpha \beta }V_{}^\alpha _MV_+^\beta =_M\gamma {\displaystyle \frac{1}{2}}{\displaystyle \frac{_M\tau _1}{\tau _2}}.`$ (37) Under the $`U(1)`$ gauge symmetry (35), they transform as $`P_M`$ $``$ $`P_Me^{2i\mathrm{\Sigma }},`$ $`Q_M`$ $``$ $`Q_M+_M\mathrm{\Sigma }.`$ (38) These transformations show that $`Q_M`$ is a composite gauge potential and that the $`U(1)`$ charge of $`P_M`$ equals 2. The equations that result from sending all the fermions to zero as well as all bosonic fields except the relevant ones are $`R_{MN}`$ $`=`$ $`P_MP_N^{}+P_M^{}P_N,`$ $`D^MP_M`$ $`=`$ $`(_M2iQ_M)P_M=0.`$ (39) Substituting (37) into these, the EOM can be expressed in terms of $`\varphi `$ and $`a`$ as $`R_{MN}={\displaystyle \frac{1}{2}}(_M\varphi _N\varphi +e^{2\varphi }_Ma_Na),`$ $`\mathrm{\Delta }a+2^M\varphi _Ma=0,`$ $`\mathrm{\Delta }\varphi e^{2\varphi }(a)^2=0.`$ (40) Note that $`\gamma `$ does not appear in the EOM. In fact it appears as a multiplier in the dilaton and the axion equations but this does not affect the EOM. 
It should be taken as a bonus that these equations can be derived from a Lagrangian because the original definition of IIB is solely based on supersymmetry and the EOM. The Lagrangian in question is precisely the one we have been studying given by eq. (1). The supersymmetry transformations of the dilatino $`\lambda `$ and the gravitino $`\psi _M`$ are given by $`\delta \lambda `$ $`=`$ $`iP_M\gamma ^Mϵ^{},`$ $`\delta \psi _M`$ $`=`$ $`\left(_M{\displaystyle \frac{i}{2}}Q_M\right)ϵ.`$ (41) From the supersymmetric variation of the dilatino it is easy to see that a supersymmetric solution must have holomorphic or antiholomorphic $`\tau `$. Namely, $$\delta \lambda =\frac{e^{2i\gamma B}}{2\tau _2}(\mathrm{\Gamma }^1_1\tau +\mathrm{\Gamma }^2_2\tau )ϵ^{}=\frac{e^{2i\gamma B}}{2\tau _2}\mathrm{\Gamma }^1(_1\tau +\mathrm{\Gamma }^1\mathrm{\Gamma }^2_2\tau )ϵ^{}.$$ (42) Taking $`\mathrm{\Gamma }^1\mathrm{\Gamma }^2ϵ^{}=iϵ^{}`$ one obtains that in order for the variation of the dilatino to be zero, $`(_1+i_2)\tau =0`$, which means that $`\tau `$ is a holomorphic function. The same analysis could be carried out for antiholomorphic functions. For the gravitino one has that the variation in the longitudinal directions is $$\delta \psi _\mu =\eta _{\mu \underset{¯}{\mu }}e^{AB}\mathrm{\Gamma }^{\underset{¯}{\mu }}\mathrm{\Gamma }^m_mAϵ,$$ (43) here we have assumed that all fields depend only on the transverse directions. One could apply the same type of manipulation we applied to the dilatino transformation and conclude that a solution to this equation is provided by any antiholomorphic function $`A`$, but $`A`$ is a metric entry and must, therefore, be real. The only choice we have is $`A=constant`$ which, by means of a trivial coordinate redefinition, is equivalent to $`A=0`$. So far, everything is compatible with the type of seven-branes considered in subsection 2.1. The gravitino transformation in the transverse directions is $$\delta \psi _m=\left(_m+\frac{1}{2}_nB\mathrm{\Gamma }^m\mathrm{\Gamma }^n\frac{1}{2}_mB\frac{i}{2}_m\gamma +\frac{i}{4\tau _2}_m\tau _1\right)ϵ.$$ (44) Substituting $`B=B_0+B_1\mathrm{ln}r+\frac{1}{2}\mathrm{ln}\tau _2`$, taking into account that $`\tau `$ is a holomorphic function and considering a more general spinor $`ϵ=e^{if/2}ϵ_0`$ where $`\mathrm{\Gamma }^1\mathrm{\Gamma }^2ϵ_0^{}=iϵ_0^{}`$, we obtain that $`f\gamma +iB_1\mathrm{ln}r`$ must be a holomorphic function or in other words, $`\gamma f=B_1\theta `$ which can always be satisfied for any given $`\gamma =\gamma (\tau )`$. In the case of the D7-brane of one has $`B_1=0`$ which is a solution for $`\gamma =0`$ (the preferred gauge fixing condition) and $`f=0`$. The solutions of and also enter in this scheme. To make contact with we need to drop the condition that $`B=B(r)`$ and assume only that $`B`$ must be real. In this case the solution is specified up to a holomorphic function eq. (13). The variation of the gravitino in the directions perpendicular to the seven-brane demand that $`f\gamma +i(F+\overline{F})`$ be a holomorphic function. In $`\gamma (\tau )=\text{ Im}\mathrm{ln}(\tau +i)`$ up to an additive constant, this combines with the fact that $`F=F(\tau )`$ is dictated by modular invariance, to provide a supersymmetric solution for $`f=0`$. 
## 4 Instantonic solutions to IIB In this section we will consider the following Euclidean action $$S=\frac{1}{16\pi G}d^{10}x\sqrt{g}\left(R\frac{1}{2}(\varphi )^2+\frac{1}{2}e^{2\varphi }(a)^2\right).$$ (45) and its EOM $`R_{MN}`$ $`=`$ $`{\displaystyle \frac{1}{2}}(_M\varphi _N\varphi e^{2\varphi }F_MF_N),`$ $`{\displaystyle \frac{1}{\sqrt{g}}}_M(\sqrt{g}g^{MN}_N\varphi )`$ $`=`$ $`e^{2\varphi }g^{MN}F_MF_N,`$ $`_M(\sqrt{g}g^{MN}e^{2\varphi }F_M)`$ $`=`$ $`0.`$ (46) One way to generalize the solution of is to consider the following ansatz for the metric $$ds^2=e^{2B(r)}\delta _{mn}dy^mdy^n.$$ (47) As familiar by now, the Einstein equations have two tensor structures. The $`\delta _{mn}`$ part is $$B^{\prime \prime }+\frac{17}{r}B^{}+8(B^{})^2=0,$$ (48) which becomes a linear differential equation after substituting $`B^{}=1/u`$. The general solution is $$BB_0=\frac{1}{8}\mathrm{ln}(1+\frac{Q_0}{r^{16}}).$$ (49) Note that for $`Q_0=0`$ and fixing the constant $`B_0=0`$ we recover the flat metric of the solution obtained in . The dilaton and axion equation take the following form $`\varphi ^{\prime \prime }+{\displaystyle \frac{9}{r}}\varphi ^{}+e^{2\varphi }(a^{})^2+8B^{}\varphi ^{}=0,`$ $`a^{\prime \prime }+{\displaystyle \frac{9}{r}}a^{}+2a^{}\varphi ^{}+8B^{}a^{}=0.`$ (50) Using the axion equation one obtains that $`y^my^n/r^2`$ Einstein equation can be rewritten as $$\varphi ^{\prime \prime }+(\varphi )^2+\varphi ^{}(\frac{9}{r}+8B^{})=\frac{288}{Q_0}r^{16}(B^{})^2.$$ (51) In terms of a new function $`D`$ such that $`\varphi ^{}=(D4B^{}r^9)/r^9`$ and also changing to coordinate $`t:\frac{dt}{dr}=1/r^9`$ the dilaton equation becomes $$\dot{D}+D^2=\frac{512Q_0}{(1+64Q_0t^2)^2}.$$ (52) The solutions of this equation take different forms depending on the sign of $`Q_0`$. For negative $`Q_0`$, substituting $`Q_0Q_0^2`$ in all formulas we obtain the following solution: $`ds^2`$ $`=`$ $`(1{\displaystyle \frac{Q_0^2}{r^{16}}})(dr^2+r^2d\mathrm{\Omega }_9^2),`$ $`e^\varphi `$ $`=`$ $`e^{\overline{\varphi }_0}\mathrm{sinh}\left(x_0+{\displaystyle \frac{3}{2}}\mathrm{ln}{\displaystyle \frac{1+Q_0/r^8}{1Q_0/r^8}}\right),`$ $`aa_0`$ $`=`$ $`e^{\overline{\varphi }_0}\mathrm{coth}\left(x_0+{\displaystyle \frac{3}{2}}\mathrm{ln}{\displaystyle \frac{1+Q_0/r^8}{1Q_0/r^8}}\right).`$ (53) The limit in which this solution coincides with is the following: $`Q_0,x_00`$ and $`\mathrm{exp}\varphi _0\mathrm{}`$ with the following relations going to constants: $`Q_0/x_0=C/3,e^{\overline{\varphi }_0}x_0=e^{\varphi _0}`$. (Here $`C`$ and $`\varphi _0`$ are the parameters of the solution in .) This solution has a singularity at $`r_0^8=Q_0`$. It is a nontrivial task to classify this singularity being a singularity in Euclidean space. Let us simply note that it is a curvature singularity since for this metric the scalar curvature equals $$R=\frac{288Q_0^2}{r^{18}}(1\frac{Q_0^2}{r^{16}})^{9/4}.$$ There is also a solution for positive values of the parameter $`Q_0`$, for this case we will consider $`Q_0Q_0^2`$. 
The solution we obtained is $`ds^2`$ $`=`$ $`(1+{\displaystyle \frac{Q_0^2}{r^{16}}})(dr^2+r^2d\mathrm{\Omega }_9^2),`$ $`e^\varphi `$ $`=`$ $`e^{\varphi _0}(1+{\displaystyle \frac{C}{r^8}}{\displaystyle \frac{3Q_0^2}{r^{16}}}{\displaystyle \frac{CQ_0^2}{3r^{24}}})(1+{\displaystyle \frac{Q_0^2}{r^{16}}})^{3/2},`$ $`a`$ $`=`$ $`e^{\varphi _0}(1{\displaystyle \frac{\sqrt{C^2+9Q_0^2}C}{3r^8}}{\displaystyle \frac{3r^{16}Q_0^2}{r^{16}3Q_0^2}})/(1+{\displaystyle \frac{C}{3r^8}}{\displaystyle \frac{3r^{16}Q_0^2}{r^{16}3Q_0^2}}).`$ (54) To recover the instanton of from this solution one needs simply send $`Q_00`$. The Einstein equations of $`S_{II,1}`$ (both for Minkowskian and Euclidean versions) imply that the action is zero. This naive approach would render an instanton with zero action. First of all one would turn to the surface term. The surface term originates from allowing variations of $`\delta g^{MN}`$ which vanish at the surface but whose normal derivatives do not. The boundary term is $`(KK^0)\sqrt{h}d^9y`$. For the dilaton metric we consider here one obtains a contribution proportional to $`O(1/r^8)`$ which therefore vanishes for asymptotically large distances. A way out of this situation which was successfully used in consists in realizing that the same solution can be considered as a solution of the supergravity but instead of considering an axion field one needs to consider the $`C_8`$ formulation with field strength $`F_9`$ dual to the field strength of the axion $`da`$. In this picture Einstein equation no longer implies that the solution must have zero action. There is a more rigorous argument as to why the Euclidean version for the axion field has to include the surface term, it is based on an analysis of the dual formulation . A subtlety here arises because of the need to perform Poincare duality to objects that live in spaces of different signatures. Modulo these subtleties our result for the value of the action is exactly that of . We checked that our solution becomes that of in certain limit specified by the charges. It is straightforward to notice that this limit corresponds precisely to the large $`r`$ limit of the solution in the positive $`Q_0`$ case and for negative $`Q_0`$ we need to assume that $`x_0=0`$. ## 5 Conclusions In this paper we have constructed new electric and magnetic seven-branes. We have shown that the EOM for the seven-brane ansatz admit a one-parametric family of solutions some particular cases of which were known in the literature as separate objects. We have also shown explicitly that the supersymmetry equations are loose enough to admit a solution for such one-parametric family. The electric and magnetic seven-branes constructed here have a kink-like behavior similar to some solitons in gauge theories. We have constructed instantonic solutions for IIB that generalize in a very clear way the previously known D-instanton solution of . We have not, however, analyzed the supersymmetric structure of the instanton. Acknowledgments We wish to thank M. Duff for pointing out a solution we overlooked at the early stage. We would also like to thank J.T. Liu and J.X. Lu for very interesting discussions and several clarifying remarks. We would also like to acknowledge useful correspondence from Y. Lozano and M. Gutperle. One of us (L. PZ) would like to thank the Office of the Provost at the University of Michigan for support. Both of us wish to acknowledge the support of the High Energy Physics Division of the Department of Energy. 
Appendix The calculations for the Ricci tensor for the p-brane metric have been presented in a number of papers . We present such results here for the sake of completeness. Let us consider a metric in $`D`$ spacetime dimensions, given by the following ansatz $$ds^2=e^{2A}dx^\mu dx^\nu \eta _{\mu \nu }+e^{2B}dy^mdy^n\delta _{mn},$$ (55) where $`x^\mu `$ $`(\mu =0,\mathrm{},d1)`$ are the coordinates of the $`(d1)`$-brane world volume, and $`y^m`$ are the coordinates of the $`(Dd)`$-dimensional transverse space. Generally the functions $`A`$ and $`B`$ depend only on $`r=\sqrt{y^my^m}`$. Here we will consider an arbitrary dependence in the transverse dimensions. The Ricci tensor obtained from this metric is $`R_{\mu \nu }`$ $`=`$ $`\eta _{\mu \nu }e^{2(AB)}\delta ^{mn}\left(_m_nA+d_mA_nA+\stackrel{~}{d}_mA_nB\right),`$ $`R_{mn}`$ $`=`$ $`\delta _{mn}\left(\delta ^{pq}_p_qB+d\delta ^{pq}_pA_qB+\stackrel{~}{d}\delta ^{pq}_pB_qB\right)`$ (56) $`\stackrel{~}{d}_m_nBd_m_nAd_mA_nA+\stackrel{~}{d}_mB_nB`$ $`+d(_mA_nB+_nA_mB),`$ where $`\stackrel{~}{d}=Dd2`$. A convenient choice of vielbein basis for the metric is $`e^{\underset{¯}{\mu }}=e^Adx^\mu `$ and $`e^{\underset{¯}{m}}=e^Bdy^m`$, where underlined indices denote tangent space components. The corresponding spin connection, which is needed for the supersymmetry transformation of the gravitino, is $`\omega ^{\underset{¯}{\mu }\underset{¯}{n}}`$ $`=`$ $`e^B_nAe^{\underset{¯}{\mu }},\omega ^{\underset{¯}{\mu }\underset{¯}{\nu }}=0,`$ $`\omega ^{\underset{¯}{m}\underset{¯}{n}}`$ $`=`$ $`e^B_nBe^{\underset{¯}{m}}e^B_mBe^{\underset{¯}{n}}.`$ (57)
no-problem/0003/astro-ph0003131.html
ar5iv
text
# Is the [CO] index an age indicator for star forming galaxies? ## 1 Introduction An ideal instantaneous burst of star formation generates a so–called Simple Stellar Population (SSP), that is a stellar system which is coeval and initially chemically homogeneous (see Renzini & Buzzoni buzzoni86 (1986)). The integrated near IR luminosity of a SSP is dominated by red stars since its very early stage of evolution ($``$10 Myr), when massive stars ($`<`$40 M) evolve as red supergiants. When the stellar system gets older ($``$100 Myr) intermediate mass giants evolving along the AGB and, after a few Gyr, low mass giants near the tip of the Red Giant Branch (RGB) dominate the integrated IR and bolometric luminosities (e.g. Renzini & Buzzoni buzzoni86 (1986), Chiosi et al. chiosi86 (1986)). The time evolution of the observable parameters related to a SSP, such as e.g. photometric colours and spectral indices, provides the basic ingredient for constructing evolutionary models of star forming galaxies. Among the photometric and spectroscopic indices used to study the red stars of a SSP, the CO index has attracted quite some attention as a potential tool to trace red supergiants, i.e. young stellar systems. This idea primarily derives from the fact that field stars of similar spectral types show different CO indices depending on their spectral class, the strongest features being found in supergiants (see e.g. Fig. 4 of Kleinmann & Hall 1986, hereafter KH (86)). Several attempts of predicting the evolution of the CO index of a SSP appeared in the literature and were applied to the interpretation of IR spectral observations of starburst galaxies (e.g. Doyon et al. doyon94 (1994), Shier et al. shier96 (1996), Goldader et al. goldader97 (1997), Mayya mayya97 (1997), Leitherer et al. leitherer99 (1999)). Most of the models are restricted to solar metallicities and predict a pronounced maximum at $``$10 Myr followed by a quite rapid and steady decline. The CO index drops by almost a factor of 3 at $``$100 Myr and, noticeably, reaches values much lower than those observed in old ($``$10 Gyr) Galactic globular clusters and spheroidal galaxies of quasi–solar metallicities. The few models at sub–solar metallicities predict a similar time evolution with shallower CO features at all epochs. Taken at face value, these models would imply that star forming galaxies with prominent CO absorption features must be dominated by a young ($`<`$100 Myr) star formation event, while more mature, but still relatively young systems of a few $`\times `$100 Myr should be characterized by quite weak CO absorption features. In other words, the CO index could provide a powerful tool to constrain the age of the major star formation event of galaxies. This paper is a critical re–analysis of the time evolution of the CO index in simple SSPs and star forming galaxies. In Sect. 2 we describe the, sometimes confusing, definition of CO index and discuss its relationship with stellar parameters. In Sect. 3 we present theoretical curves based on different stellar evolutionary models and briefly discuss the possible reasons for their very different behaviours. In Sect. 4 we compare the predicted evolution of \[CO\] with measured parameters of template stellar clusters in the Magellanic Clouds, old globular clusters in the Galaxy, normal and starburst galaxies. In Sect. 5 we draw our conclusions. ## 2 The CO index ### 2.1 Spectroscopic and photometric definitions. 
The CO index was originally defined as the magnitude difference between a relatively narrow filter ($`\mathrm{\Delta }\lambda 0.1`$ $`\mu `$m) centered at 2.3 $`\mu `$m, which includes the first four band–heads of $`\mathrm{\Delta }v`$=2 CO roto–vibrational transitions, and a similarly narrow filter centered at 2.2 $`\mu `$m (Baldwin et al. baldwin73 (1973)). The central wavelength of the CO filter was then increased to 2.36 $`\mu `$m and slightly different filter parameters were adopted by different groups. A comprehensive database of CO photometric measurements was produced in the 70–80’s. These include measurements of field stars (e.g. McWilliam & Lambert lambert84 (1984)), Galactic globular clusters (Frogel et al. frogel83 (1983)), young stellar clusters in Magellanic Clouds (Persson et al. persson83 (1983)) and old spheroidal galaxies (Frogel et al. frogel78 (1978)). These data are still considered a fundamental benchmark for verifying the predictions of stellar evolutionary models. The spectroscopic CO index was defined by KH (86) who measured the (2,0) band–head at 2.29 $`\mu `$m from medium resolution ($`\lambda /\mathrm{\Delta }\lambda 2500`$) spectra of a sample of field stars. The strength of this band is unequivocally defined as the ratio between the fluxes integrated over narrow wavelength ranges centered on the line and nearby continuum, i.e. 2.2924–2.2977 and 2.2867–2.2919 $`\mu `$m, and expressed in terms of magnitudes. The same quantity is sometimes given in terms of equivalent width (e.g. Origlia et al. origlia93 (1993)) and the numbers are simply related by $$[\mathrm{CO}]_{\mathrm{spec}}=2.5\mathrm{log}\left(1\frac{W_\lambda (2.29)}{53\mathrm{\AA }}\right)\mathrm{mag}$$ $`(1)`$ where $`[\mathrm{CO}]_{\mathrm{spec}}`$ is the spectroscopic index and $`W_\lambda `$(2.29) is the equivalent width of the (2,0) band–head. The relationship between spectroscopic and photometric indices is not obvious because they are based on measurements of different quantities which have different behaviours on the stellar physical parameters, e.g. $`[\mathrm{CO}]_{\mathrm{phot}}`$ also depends on the <sup>12</sup>C/<sup>13</sup>C ratio (see McWilliam & Lambert lambert84 (1984)). Nevertheless, an empirical correlation between the two indices is generally adopted following from observations of giant stars in the field. This yields (see Fig. 3 of KH (86)) $$[\mathrm{CO}]_{\mathrm{phot}}0.57[\mathrm{CO}]_{\mathrm{spec}}0.01\mathrm{mag}$$ $`(2)`$ To complicate the scenario further, other definitions of the spectroscopic index exist in the literature. These are based on spectroscopic measurements of equivalent widths over wavelength ranges much broader than those used by KH (86) and similar to that adopted in the photometric definition. The most popular of these intermediate indices is that proposed by Doyon et al. (doyon94 (1994)) which measures the equivalent width over the 2.31–2.40 $`\mu `$m range relative to a continuum which is extrapolated from shorter wavelengths. The specific advantage of this definition is that it allows measurements of \[CO\] even from spectra of relatively poor quality, while proper measurement of the spectroscopic index requires high s/n spectra with resolution $`R1000`$. These “broad spectral indices” are calibrated using measurements of field stars and can be therefore converted to photometric indices using empirical relationships similar to Eq. (2). 
To avoid confusion we will hereafter express measured and predicted quantities in terms of $`[\mathrm{CO}]_{\mathrm{phot}}`$ with additional comments on how it was obtained or scaled from spectroscopic measurements. ### 2.2 CO index and stellar parameters In general, the strength of the CO features depends on the following stellar parameters (for a more detailed discussion see Sect. 4.1 of Origlia et al. origlia93 (1993)) Effective temperature, $`T_{\mathrm{eff}}`$, which sets the CO/C relative abundance. Surface gravity, $`g`$, which influences the H<sup>-</sup>/H equilibrium and hence modifies the total gas column density of the photosphere. In cool stars with CO/C$``$1 the column density of CO scales as $`g^{1/3}`$ and, therefore, the lines become more opaque when the gravity decreases. Microturbulent velocity, $`\xi `$, which determines the Gaussian width of the lines and, therefore, the equivalent width of saturated lines (the CO transitions are semi-forbidden and have very weak Lorentzian wings). Metallicity and carbon relative abundance, which define the value of C/H. For a given set of $`T_{\mathrm{eff}}`$ and $`g`$, all CO line opacities scale linearly with the carbon abundance C/H. The well known correlation between CO index and spectral type of giant stars is due to a complex combination of the effects of the first three parameters. The variation of $`T_{\mathrm{eff}}`$ is important only up to early K stars where most of the carbon is already in the form of CO. Therefore, in later spectral types the variation of $`T_{\mathrm{eff}}`$ has virtually no direct effect on \[CO\], i.e. the CO index is not a thermometer for cool stars. The steady deepening of the CO features along the K4 III–M7 III sequence is driven by the decrease of surface gravity and increase of microturbulent velocity. The variation of surface gravity, $`\mathrm{\Delta }\mathrm{log}g0.8`$ from early K to late M giants (McWilliam & Lambert lambert84 (1984)), follows from the fact that field red giants are stars with similar mass and quasi–solar metallicities taken at different phases of their evolution on the RGB. Thus M III stars, being cooler and more luminous, have a lower surface gravity than K giants. The value of $`\xi `$ cannot be directly related to the stellar parameters but, based on detailed spectroscopic observations, is found to increase when the bolometric luminosity increases. To a first approximation, $`\xi `$ scales linearly with $`\mathrm{log}(L_{\mathrm{bol}})`$ and, therefore, late M giants have microturbulent velocities about 0.8 km/s higher than early K III stars (see e.g. Tsuji tsuji86 (1986), tsuji91 (1991)). The effect of microturbulent velocity becomes particularly prominent in red supergiants and accounts for the fact that class I stars have much stronger \[CO\] than giants of similar temperatures and gravities (see e.g. Tsuji et al. tsuji94 (1994), McWilliam & Lambert lambert84 (1984)). The derived values of $`\xi `$ scale $``$ linearly with $`\mathrm{log}(L_{\mathrm{bol}})`$, i.e. a behaviour similar to that found in giant stars. Therefore, for practical purposes, the variation of \[CO\] in red giants and supergiants can be reproduced adopting an empirical, linear relationship between $`\xi `$ and bolometric luminosity, namely: $$\xi 2.00.4M_{\mathrm{bol}}\mathrm{km}/\mathrm{s}$$ $`(3)`$ An indirect test (and confirmation) of this equation comes from integrated spectral analysis of old and metallic stellar systems (Origlia et al. 
origlia97 (1997)) and young clusters in Magellanic Clouds (Oliva & Origlia oliva98 (1998)). The derived values of average microturbulent velocities, namely $`\overline{\xi }2`$ and $`\overline{\xi }45`$ km/s for old and young systems, respectively, are in good agreement with those derived from spectral synthesis models adopting Eq. (3). The effect of carbon abundance is relatively unimportant in field stars, which span a relatively narrow range of metallicities, but becomes evident in stellar clusters of low metallicity which are characterized by very weak CO features (see the left panel of Fig. 3). However, it should be kept in mind that the nice correlation between \[CO\] and metallicity of Fig. 3 also reflects the metallicity dependence of the temperature of giant stars, i.e. lower metallicity clusters have weaker \[CO\] not only because their giant stars have less carbon, but also because they are warmer than those in more metallic stellar systems (e.g. Origlia et al. origlia97 (1997)). ## 3 Modelling the evolution of \[CO\] in a SSP The integrated CO index of a SSP with a given metallicity and age can be most conveniently determined from the integrated synthetic spectrum using the same methods adopted for measuring the index from the observational data. The integrated synthetic spectrum can be expressed as $$F_\lambda =\varphi (M)L_\mathrm{K}(M)f_\lambda (T,g,\xi ,\text{[C/Fe]})𝑑M$$ $`(4)`$ where $`T`$=$`T(M)`$, $`g`$=$`g(M)`$ and $`L_\mathrm{K}(M)`$ are the stellar temperature, gravity and monochromatic luminosity <sup>1</sup><sup>1</sup>1 $`L_\mathrm{K}`$ (erg s<sup>-1</sup> $`\mu `$m<sup>-1</sup>) is the luminosity per $`\lambda `$-unit at 2.2 $`\mu `$m along the isochrone. These parameters are explicitly tabulated by the Padua’s models (Bertelli et al. bertelli94 (1994)) while can be derived from the Geneva’s evolutionary tracks as outlined in Leitherer et al. (leitherer99 (1999)). The shape of the initial mass function, $`\varphi (M)`$, has only a minor effect on the derived indices and, for all practical purposes, can be approximated by a standard Salpeter’s IMF ($`\varphi (M)M^{2.35}`$). The function $`f_\lambda `$ is the spectrum (normalized to unity at 2.2 $`\mu `$m) appropriate for a star of the given metallicity, temperature and gravity. This function is often evaluated using observations of field stars whose spectral types are converted into surface temperatures and luminosities using empirical calibrations. However, this method is limited to stars with quasi–solar metallicity and is particularly uncertain for supergiants and AGB stars. A more objective estimate of $`f_\lambda `$ can be obtained from model stellar atmospheres which yield a synthetic spectrum with the desired resolution for a given set of metallicity ($`Z`$), effective temperature ($`T`$), surface gravity ($`g`$), microturbulent velocity ($`\xi `$) and carbon relative abundance (\[C/Fe\]). The last two quantities should be treated as free parameters because they cannot be unequivocally related to other physical parameters of the star. A detailed description of the procedure used to construct the synthetic spectra can be found in Origlia et al. (origlia93 (1993)). The time evolution of the CO index simply follows from Eq. (3) using theoretical isochrones at different ages. The results are plotted in Fig. 1 where we also evaluate the effect of using different stellar evolutionary models. These agree in predicting strong \[CO\] at early times, when the near IR emission is dominated by red supergiants (i.e. 
$`t100`$ Myr). Indeed, the comparison with observations is not yet satisfactory because models predict too warm red supergiants, especially at sub–solar metallicities (e.g. Oliva & Origlia oliva98 (1998), Origlia et al. origlia99 (1999)). The most striking result in Fig. 1 is the very different behaviour in the AGB phase (i.e. $`t100`$ Myr) where the predicted CO index varies by large ($`>`$3) factors depending on the adopted stellar evolutionary tracks. In particular, the steady decay of \[CO\] in the curves based on the Geneva’s models contrasts with the increase at the onset of the AGB predicted by the Padua’s tracks. Interestingly, the latter predict strong CO at low metallicities where the other models give \[CO\]$``$0. To investigate the reason(s) for the very different behaviours in the AGB phase, it is instructive to compare the model isochrones which are displayed in Fig. 2 for two representative ages at solar metallicity. The Geneva’s curves stop at relatively low luminosities and high temperatures ($`>`$3600 K), this implies that the stars dominating the IR emission are warm enough to dissociate CO, have quite large surface gravities and relatively low luminosity. Thus the CO bands are weak because the CO/C relative abundance, the column density of the photosphere and the microturbulent velocity are all relatively small (see Sect. 2.2). The AGB in the Padua’s models, on the contrary, extends to very high luminosities and low temperatures. This implies low surface gravities and large microturbulent velocities, i.e. deep CO features. The different extent of the AGB in Fig. 2 primarily follows from assumptions made by the models. The Geneva’s tracks arbitrarily stop at the onset of thermal pulses, while the Padua’s computations follow this phase up to the very end of the double shell burning using semi–analytical approximations. However, both approaches are unrealistic because the evolution along the AGB, which for sure extends well into the thermal pulses phase, is regulated and shortened by the strong mass–loss experienced by AGB stars, a parameter not included in the Bertelli et al. (bertelli94 (1994)) tracks. Moreover, one should keep in mind that the predicted stellar temperatures are also quite uncertain and, probably, too warm (Chieffi et al. chieffi95 (1995), Oliva & Origlia oliva98 (1998)). This effect is visible in Fig. 2 where one can notice that, within the part of the AGB covered by both models, the Padua’s tracks are systematically warmer than those of Geneva. Indeed, recent developments of the theory of convective energy transfer indicate that all the temperatures of red stars in the Padua’s tracks should be decreased by 200–300 K (Bressan, private communication). Therefore, the “true” AGB is likely to be less extended but redder than predicted by the Padua’s tracks and the two effects on the \[CO\] should compensate each others. In practice, the “true” CO index during the AGB phase is probably similar to that plotted in the right hand panels of Fig. 1. For the moment being, assuming \[CO\]$``$constant with time is probably a fair approximation which, at least, does not lead to far reaching conclusions on the age of stellar systems. ## 4 Available data and SSP templates Stellar clusters in the Large Magellanic Cloud provide a convenient and, in practice, unique set of templates for young and intermediate age stellar populations spanning a wide range of metallicities (between 1/100 and $``$ solar, e.g. Sagar & Pandey sagar89 (1989)). 
The available \[CO\] data are summarized in Fig. 3. The large scatter of the points at a given age reflects variations in metallicity, statistical effects related to the intrinsically small number of luminous red stars in the clusters and, possibly, field contamination (see e.g. Chiosi et al. 1986, Santos et al. 1997). A further complication occurs beyond $`\sim `$600 Myr, when carbon stars appear. These often display a very red spectrum with strong continuum emission from the envelope, which dilutes the CO bands and produces an anti–correlation between J–K colours and the \[CO\] index (Persson et al. 1983). Note in particular that the mild anti–correlation between CO index and age visible in Fig. 3 also reflects the fact that the older clusters are, on average, less metal-rich than the youngest ones. For the purposes of this paper, the most important fact is that several of the LMC clusters with ages $`\gtrsim `$100 Myr display \[CO\] indices much larger than those predicted by the models based on the Geneva tracks. In other words, a very metal-rich stellar population of 100–1000 Myr could have a \[CO\] similar to that of a 10–100 Myr cluster of the same metallicity, as indeed predicted by models including the whole AGB evolution (see Sect. 3). Therefore, finding a galaxy with a very deep CO index does not necessarily imply that its stellar population must be younger than 100 Myr, as sometimes assumed in the literature. Fig. 3 also shows results from a wider set of data, including old stellar systems (Galactic globular clusters and ellipticals) and starburst galaxies. In general, the only clear observational result is that objects with \[CO\] significantly larger than 0.18 cannot be old stellar systems of (sub)solar metallicity, but require younger stellar populations or, alternatively, old stellar populations much more metal-rich than ellipticals. Encouragingly, several starburst galaxies do indeed display CO indices significantly larger than the above threshold but, in most cases, within the range covered by LMC clusters of ages $`\lesssim `$10<sup>9</sup> yr (see Fig. 3). On the other hand, other well studied starbursters have values of \[CO\] lower than 0.18, more similar to ellipticals and bulges. This probably reflects metallicity variations, i.e. weaker CO features can be associated with young stellar systems of lower metallicity.

## 5 Conclusions

Very large values of the CO index around 2.3 $`\mu `$m (i.e. \[CO\]$`>`$0.18, larger than those observed in the oldest, metal-rich stellar systems) indicate that the near IR luminosity is dominated by a relatively young SP of RSG/AGB stars. Any further attempt to quantify better the age of this young SP in the range 10 Myr – 1 Gyr is unreliable, since the theoretical evolutionary tracks are still too uncertain to properly describe the red supergiant and AGB evolution, especially at sub–solar metallicities.

###### Acknowledgements.
We would like to thank A. Bressan for helpful discussions and comments. We are grateful to the referee, Y.D. Mayya, for comments and criticism which helped us to improve the quality of the paper. This work was partly supported by the Italian Ministry for University and Research (MURST) under grant Cofin98-02-32.
# Forecasting confined spatiotemporal chaos with genetic algorithms

## Abstract

A technique to forecast spatiotemporal time series is presented. It uses a Proper Orthogonal or Karhunen-Loève Decomposition to encode large spatiotemporal data sets in a few time series, and Genetic Algorithms to efficiently extract dynamical rules from the data. The method works very well for confined systems displaying spatiotemporal chaos, as exemplified here by forecasting the evolution of the one-dimensional complex Ginzburg-Landau equation in a finite domain. Nonlinear time-series analysis provides tools to identify dynamical systems from measured data . The approach has been greatly developed in recent years as a powerful alternative to linear stochastic methods in the modeling of irregular time series and provides, under the assumption of deterministic behavior, useful recipes for system control, noise reduction, and forecasting. The application of these techniques to situations of spatiotemporal chaos, however, is still in its early stages . There are two main reasons for this: a) the large attractor dimensions of spatiotemporally chaotic systems, increasing with system size, pose serious difficulties for the standard methods of delay embedding and attractor reconstruction; b) the right choice of variables is far from obvious: whereas the time evolution of an observable at a particular point in space could be enough in some particular situations, decaying space correlations and propagation phenomena make this a poorly performing choice in most cases. A very efficient method for time-series prediction using Genetic Algorithms (GA) has been recently proposed in for nonextended systems. Comparatively small data sets are enough to use this technique, which makes it competitive in facing difficulty a), i.e., prediction in the presence of attractors of large dimension. In some cases, even non-trivial functional forms of the dynamical systems generating the data can be unveiled . In this Letter we extend the GA approach to the forecasting of confined spatiotemporal chaos. By this we mean the situation in which chaotic dynamics in an extended system is strongly affected by the presence of boundaries. Our interest in this situation, intermediate between low-dimensional chaos and homogeneous extensive chaos, arises from its relevance to real experimental situations, and from recent work leading to theoretical understanding: the boundaries break translational symmetry, and the resulting phase rigidity restricts the shape of the chaotic fluctuations allowed. This manifests itself, for example, in the appearance of nontrivial average patterns and in inhomogeneities in other statistical characteristics . Under these circumstances the Empirical Orthogonal Functions (EOFs) obtained from a Proper Orthogonal Decomposition (POD, also known as Karhunen-Loève decomposition) provide an excellent basis for describing the system dynamics. They are different from simple Fourier modes and contain information (optimal in a precise sense) on the broken translational symmetry. The amplitudes of the most important EOFs will be the variables chosen in response to difficulty b). We now describe our method for spatiotemporal forecasting in more detail; the POD is used to encode the large spatiotemporal data set in a few time series, and the GA approach is used to obtain the corresponding forecasts.
Given a time series of spatial patterns $`U(\mathbf{x},n)`$, where $`n=1,\mathrm{\ldots },N`$ labels the temporal sequence and $`\mathbf{x}`$ the $`M`$ spatial points in a $`d`$-dimensional mesh, the POD decomposes the fluctuations around the temporal mean, $`u(\mathbf{x},n)\equiv U(\mathbf{x},n)-\langle U(\mathbf{x},n)\rangle _n`$, into modes ranked by their temporal variance. As a result, a set of spatial EOFs and associated temporal amplitude functions is obtained. The EOFs $`\varphi _i(\mathbf{x})`$ ($`i=1,\mathrm{\ldots },M`$) are the (orthogonal) eigenfunctions of the covariance matrix of the data, $`C(\mathbf{x},\mathbf{x}^{})=\langle u(\mathbf{x},n)u(\mathbf{x}^{},n)\rangle _n`$, and are the spatial structures statistically most representative of the fluctuations in the data set. The temporal amplitude functions $`a_i(n)`$, describing the dynamics of the system, are obtained from the modal decomposition $`u(\mathbf{x},n)=\sum _{i=1}^Ma_i(n)\varphi _i(\mathbf{x})`$. If only $`K<M`$ of the EOFs (the ones containing the highest temporal variance, as measured by the corresponding eigenvalues) are used in the reconstruction process, the set of reconstructed patterns $$u^K(\mathbf{x},n)=\sum _{i=1}^Ka_i(n)\varphi _i(\mathbf{x})$$ (1) is still the best approximation one can obtain by linearly combining $`K`$ arbitrary spatial patterns multiplied by $`K`$ arbitrary amplitude functions. Even more, it has been shown for several chaotic and even turbulent confined systems that taking a few dominant modes, $`K<<M`$, provides a good approximation to the complete data set. Forecasting of the amplitude functions is performed with a Genetic Algorithm. In general, GAs are computational methods for solving optimization problems in which the optimal solution is searched for iteratively, with steps inspired by the Darwinian processes of natural selection and survival of the fittest . Here the optimization problem to be solved is finding the empirical model that best describes the data, that is, finding the optimum function $`F_i`$ that minimizes the difference $`E_i^2\equiv \sum _{n=1}^N\left(a_i(n)-\stackrel{~}{a}_i(n)\right)^2`$ between the values $`a_i(n)`$ of each time series and the corresponding estimator given by $$\stackrel{~}{a}_i(n)=F_i[a_i(n-1),a_i(n-2),\mathrm{\ldots },a_i(n-D)],$$ (2) with $`D+1\le n\le N`$. Finding the $`F_i`$, $`i=1,\mathrm{\ldots },M`$, amounts to identifying the dynamical system behind the data set. Once found, Eq. (2) can be used to predict the future evolution of the system. If $`D`$ is large enough, the existence of the exact $`F_i`$'s is guaranteed by Takens' theorem and its extensions , but a smaller $`D`$ can already give approximate dynamics $`F_i`$ with a reasonably low error $`E_i`$. In addition, we are not looking for all $`M`$ estimators but only for the $`K`$ associated with the dominant EOFs. In our approach, the time series associated with each EOF are modeled independently. More general multivariate estimators, with each $`\stackrel{~}{a}_i`$ possibly dependent on different $`a_j`$'s, may in principle be used, but we restrict ourselves to the choice (2) for algorithmic simplicity. The power of the GA resides in the huge functional space that is explored in order to find an optimal $`F_i`$. Each possible $`F_i`$ is a formula consisting of a combination of numerical constants, variables, and arithmetic operators. This combination is stored in the computer as a symbolic string. The only limitation on the allowed functional forms (besides the restriction to arithmetic operations) is the maximum allowed length of the symbolic string. The search procedure begins by randomly generating an initial population of potential estimators $`F_i`$ that will be subjected to the evolutionary process.
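As an illustration of the POD step, the EOFs and amplitude functions can be computed with a singular value decomposition of the snapshot matrix, which is equivalent to diagonalizing the covariance matrix $`C(\mathbf{x},\mathbf{x}^{})`$ but numerically more robust. This sketch uses our own variable names and is not the authors' code.

```python
import numpy as np

def pod(U):
    """U: (N, M) array of N snapshots at M spatial points."""
    mean = U.mean(axis=0)
    u = U - mean                        # fluctuations about the temporal mean
    W, s, Vt = np.linalg.svd(u, full_matrices=False)
    eofs = Vt                           # rows are the spatial EOFs phi_i(x)
    amps = W * s                        # columns are the amplitudes a_i(n)
    var_frac = s**2 / np.sum(s**2)      # fraction of variance per mode
    return mean, eofs, amps, var_frac

def reconstruct(mean, eofs, amps, K):
    """Truncated reconstruction u^K of Eq. (1), with the mean added back."""
    return mean + amps[:, :K] @ eofs[:K]
```

The number of 'relevant' EOFs used later in the text is then the smallest $`K`$ for which `np.cumsum(var_frac)[K - 1] >= 0.99`.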
The evolution is carried out by selecting from the initial population the strongest individuals, i.e. the functions that best fit the data, giving a smaller $`E_i`$. In practice, only a temporal part of the data set is used in this step (the training set), whereas the rest of the data are used later for validating the efficiency of the prediction method (the validation set). The strongest strings choose a mate for reproduction, while the weaker strings disappear. 'Reproduction' consists of interchanging parts of the symbolic strings (the 'genetic material') between the two mating individuals. As a result, a new generation of individuals (which includes the original 'parent' strings) is generated. The new population is then subjected to mutation processes that change, with low probability, small parts of the symbolic strings. The evolutionary steps are repeated with the new generation, and the process is iterated until an optimal individual is finally found, or after a fixed number of generations. Further details about the implementation of the algorithm can be consulted in . The formulae $`F_i`$ are only optimized for predicting the value of $`a_i(n)`$ in terms of the $`D`$ amplitudes immediately before in time. We call this 'one-step-ahead forecasting'. One can in principle iterate the formulae to obtain successive predictions for $`a_i(n+1)`$, $`a_i(n+2)`$, etc. But this will normally lead to results rapidly diverging from the correct values because of error accumulation and amplification . However, GAs can be designed specifically to forecast values of the time series not necessarily in the immediate future. For example, finding the function $`F_i^T`$ minimizing the error between the actual series and the estimator $$\stackrel{~}{a}_i^T(n)=F_i^T[a_i(n-T),a_i(n-T-1),\mathrm{\ldots },a_i(n-D)],$$ (3) with $`D+1\le n\le N`$, allows direct prediction of $`a_i(N+T)`$, that is, prediction $`T`$ steps ahead, without iteration. Numerical results. To illustrate the forecasting method we generate a data set from numerical simulation of a well-studied model equation displaying spatiotemporal chaos, the one-dimensional complex Ginzburg-Landau equation (CGLE) , supplemented with Dirichlet boundary conditions at the ends of a finite interval. It is convenient for our purposes to write it as $$\partial _tA(x,t)=q^2(1+i\alpha )\partial _x^2A+A-(1+i\beta )A|A|^2,$$ (4) where $`q`$, $`\alpha `$, and $`\beta `$ are real and positive and $`A(x,t)`$ is a complex-valued field. We solve it in the interval $`[0,\pi ]`$, so that the boundary conditions read $`A(0)=A(\pi )=0`$. By a simple scaling of the spatial coordinate one sees that this is equivalent to rewriting the equation with $`q=1`$, but solving it in a domain of size $`L=\pi /q`$. Thus the parameter $`q`$ is equivalent to an inverse system size, and decreasing it is equivalent to increasing the system size. Following we fix $`\alpha =4`$ and $`\beta =4`$ . For $`q<0.2`$ the system displays spatiotemporal chaos for most initial conditions. Decreasing $`q`$, one encounters the regime of confined spatiotemporal chaos we are interested in, before approaching homogeneous extensive chaos at large system sizes ($`q\to 0`$) . According to , the correlation dimension of the dynamical attractor for $`q=0.14`$ is $`9.08`$.
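The spirit of the evolutionary search can be conveyed by a minimal genetic-programming sketch like the one below. It evolves formula trees rather than the paper's bounded-length symbolic strings, uses simple truncation selection, and omits the 20-symbol length cap, so it is a schematic stand-in for the actual algorithm rather than a reproduction of it.

```python
import math
import random

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
       '*': lambda a, b: a * b,
       '/': lambda a, b: a / b if abs(b) > 1e-6 else 1.0}  # protected division

def random_tree(depth, D):
    """Random formula over the D delayed amplitudes."""
    if depth == 0 or random.random() < 0.3:
        if random.random() < 0.5:
            return ('var', random.randrange(D))      # leaf refers to a(n-1-k)
        return ('const', random.uniform(-2.0, 2.0))
    op = random.choice(list(OPS))
    return (op, random_tree(depth - 1, D), random_tree(depth - 1, D))

def evaluate(tree, lags):
    tag = tree[0]
    if tag == 'var':
        return lags[tree[1]]
    if tag == 'const':
        return tree[1]
    return OPS[tag](evaluate(tree[1], lags), evaluate(tree[2], lags))

def fitness(tree, a, D):
    """E_i^2: summed squared one-step-ahead error on the training series a."""
    err = 0.0
    for n in range(D, len(a)):
        lags = [a[n - 1 - k] for k in range(D)]      # a(n-1), ..., a(n-D)
        err += (a[n] - evaluate(tree, lags)) ** 2
    return err if math.isfinite(err) else float('inf')

def subtree_paths(tree, path=()):
    yield path
    if tree[0] in OPS:
        yield from subtree_paths(tree[1], path + (1,))
        yield from subtree_paths(tree[2], path + (2,))

def get_subtree(tree, path):
    for i in path:
        tree = tree[i]
    return tree

def graft(tree, path, new):
    if not path:
        return new
    node = list(tree)
    node[path[0]] = graft(tree[path[0]], path[1:], new)
    return tuple(node)

def evolve(a, D=18, pop=120, gens=200, p_mut=0.1):
    trees = [random_tree(4, D) for _ in range(pop)]
    for _ in range(gens):
        trees.sort(key=lambda t: fitness(t, a, D))
        children = list(trees[:pop // 4])            # elitist survivors
        while len(children) < pop:
            pa, pb = random.sample(trees[:pop // 2], 2)
            # 'reproduction': swap a random subtree of pb into pa
            child = graft(pa, random.choice(list(subtree_paths(pa))),
                          get_subtree(pb, random.choice(list(subtree_paths(pb)))))
            if random.random() < p_mut:              # rare mutation of a subtree
                child = graft(child, random.choice(list(subtree_paths(child))),
                              random_tree(2, D))
            children.append(child)
        trees = children
    return min(trees, key=lambda t: fitness(t, a, D))
```

A $`T`$-steps-ahead estimator of the form (3) is obtained in the same way, simply by making the leaves refer to $`a(n-T),\mathrm{\ldots },a(n-D)`$ instead. Note that the paper quotes 2000 generations; the smaller default here keeps the pure-Python sketch fast.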
We sample our simulation every $`\tau =0.1`$ time units and at spatial locations separated by $`\mathrm{\Delta }=\pi /100`$ space units, and follow it for $`80`$ time units ($`800`$ samples) after discarding the initial transient, starting from random initial conditions (this sampling leads to $`N=800`$ and $`M=100`$). This will be our 'training set', to be fed into the GA. The simulation is then continued for a few more time units to provide the 'validation set', which is hidden from the GA. It is used later to check the accuracy of the predictions. We choose as the basic field to be forecasted the modulus $`U(x,n)=\left|A\left(x,t=n\tau \right)\right|`$ of the complex field. The algorithm seems to perform slightly better in forecasting the real or imaginary parts of $`A`$, but we use $`U`$ to show that the algorithm works well with nonlinear combinations of the basic dynamical quantities. In Fig. (1) we show parts of typical spatiotemporal evolutions for $`q=0.12`$ and $`q=0.16`$. Clearly, reducing $`q`$ decreases the spatial scales, corresponding to an effectively larger system size, but the complexity of the evolution is also increased. In both cases it is clear that the motion of the dynamical structures is constrained by the presence of the walls, as expected for confined spatiotemporal chaos. We solve Eq. (4) for $`q=0.18,0.16,0.14,0.12`$ and perform the POD on the fluctuations $`u(x,n)`$ of the modulus around its temporal mean value in the resulting data sets. The numbers of relevant EOFs (which we define to be those accounting for at least $`99\%`$ of the data variance) are respectively $`9`$, $`11`$, $`13`$, and $`15`$. We note that this confirms the expected approximately linear scaling of the number of EOFs with increasing system size $`L`$ ($`\propto q^{-1}`$). It is somewhat surprising that this extensive scaling appears even when chaos is not homogeneous, but still influenced by the boundaries. This fact has been observed in other systems before . For illustrative purposes, we show in Fig. (2) the two most relevant EOFs from our training set at $`q=0.16`$, and the corresponding temporal amplitude functions. The chaotic character of these series is evident. We next apply the GA to each of the amplitude functions of the relevant EOFs. We use the following parameters for all values of $`q`$: number of generations in the evolutionary process $`2000`$, number of individuals in each generation $`120`$, maximum number of symbols allowed for each symbolic string $`20`$, number of delays in (2) or (3) $`D=18`$. Tuning these parameters for each particular value of $`q`$ would improve the forecasting, but would make comparisons more difficult. Predictions for the field $`u(x,t)`$ are then built up by reconstruction according to (1), with $`K`$ the number of relevant EOFs defined above. In Fig. (3) we show the one-step-ahead forecasted fields, more concretely the prediction for the first step beyond the training set, $`n=801`$. It is compared with the actual numerical pattern in the validation set, for $`q=0.12,0.14`$ and $`q=0.16`$, displaying excellent performance. We quantify the quality of the prediction in terms of the mean square error $`ϵ_q(n)`$: $$ϵ_q^2(n)\equiv \frac{1}{M}\sum _{j=1}^M\left(\stackrel{~}{u}^K(x=j\mathrm{\Delta },n)-u(x=j\mathrm{\Delta },n)\right)^2,$$ (5) where $`\stackrel{~}{u}^K(x,n)`$ is the predicted pattern reconstructed from Eq. (1) and $`u(x,n)`$ is the actual pattern from the validation set.
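A data set of this kind can be generated with a straightforward finite-difference integration of Eq. (4); the explicit Euler stepping, the time step, the transient length and the exact sampling convention below are our own choices, made for transparency rather than efficiency.

```python
import numpy as np

q, alpha, beta = 0.16, 4.0, 4.0           # parameters quoted in the text
Mx = 101                                  # grid on [0, pi] with spacing pi/100
dx = np.pi / (Mx - 1)
dt = 2e-4                                 # satisfies the explicit stability limit
per_sample = int(round(0.1 / dt))         # tau = 0.1 between stored snapshots

rng = np.random.default_rng(1)
A = 0.01 * (rng.standard_normal(Mx) + 1j * rng.standard_normal(Mx))
A[0] = A[-1] = 0.0                        # Dirichlet conditions A(0) = A(pi) = 0

def rhs(A):
    lap = np.zeros_like(A)
    lap[1:-1] = (A[2:] - 2.0 * A[1:-1] + A[:-2]) / dx**2
    return q**2 * (1 + 1j * alpha) * lap + A - (1 + 1j * beta) * A * np.abs(A)**2

n_transient, n_keep = 500, 900            # discard 50 time units; keep 800 + 100
snapshots = []
for n in range((n_transient + n_keep) * per_sample):
    A += dt * rhs(A)
    A[0] = A[-1] = 0.0                    # re-impose the boundary conditions
    if (n + 1) % per_sample == 0:
        snapshots.append(np.abs(A[1:]).copy())   # U = |A| at M = 100 points

U = np.array(snapshots[n_transient:])     # U[:800] trains the GA; the rest validates

def eps_q(u_pred, u_true):
    """Square root of the mean square error of Eq. (5) for one snapshot."""
    return np.sqrt(np.mean((u_pred - u_true) ** 2))
```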
As stated before, GA’s can be used to predict future values some time steps ahead, without the need of iterating the one-step-ahead predictor (which early becomes useless because of the expected exponential growth of errors). Figure (4) shows $`ϵ_q(n)`$ as a function of $`n`$ for $`q=0.16`$ calculated from: a) the one-step-ahead prediction formulae obtained from the training set, but applied to obtain the pattern at step $`n`$ from the previous $`D`$ values in the validation set; b) iteration of the one-step-ahead formulae starting from the last $`D`$ data in the training set; c) five-steps-ahead prediction from a formula of the type (3) with $`T=5`$, obtained by the GA in the training set, and used into the validation set. We see that the improvement in accuracy is notorious when iteration is avoided. We note that the errors in methods a) and c) remain bounded even when $`n`$ is far from the values from which the prediction formulas were estimated (i.e. the training set $`n<800`$). This confirms that the method is not simply fitting data, but rather it has really found approximate dynamical rules within the deterministic spatiotemporal series. Figure (5) displays the average error $`<ϵ_q>`$, which is the temporal average of $`ϵ_q(n)`$ with $`n`$ in the validation range displayed in Fig. (4), as a function of $`q`$ (for one-step-ahead prediction). Despite we are including more EOFs in the reconstruction for decreasing $`q`$, the prediction error shows a tendency to increase. This is a consequence of the increase in complexity (and in attractor dimension) of the dynamics by the effective increase in system size ($`q^1`$). Since we keep the number of delays $`D`$ fixed, the embedding of the data set becomes more incomplete at smaller $`q`$ and the prediction deteriorates. In addition, for smaller $`q`$ the confined or boundary influenced character of the spatiotemporal chaos in the system is lost and a description in terms of local structures will be certainly more efficient . In summary, we have presented a method to forecast the evolution of spatially extended systems based in the combination of POD and GA’s. The method performs very well in situations of confined spatiotemporal chaos as exemplified by the CGLE in a finite interval. We mention here that we are exploring the possibilities of the method for prediction from noisy natural data sets. Results obtained in forecasting Sea Surface Temperature patterns in an area of the Mediterranean Sea are encouraging. We acknowledge financial support from CICYT (MAR98-0840) and DGICYT (PB94-1167).
# Structured outflow from a dynamo active accretion disc ## 1 Introduction The importance of magnetic fields for disc accretion is widely recognized (e.g., Mestel 1999), and the turbulent dynamo is believed to be a major source of magnetic fields in accretion discs (Pudritz 1981a, 1981b; Stepinski & Levy 1988; Brandenburg et al. 1995; Hawley et al. 1996). A magnetic field anchored in the disc is further considered to be a major factor in launching and collimating a wind in young stellar objects and active galactic nuclei (Blandford & Payne 1982; Pelletier & Pudritz 1992); see Königl & Pudritz (2000) for a recent review of stellar outflows. Yet, most models of the formation and collimation of jets rely on an externally imposed poloidal magnetic field and disregard any field produced in the disc. Our aim here is to study outflows in connection with dynamo-generated magnetic fields. We discuss parameters of young stellar objects in our estimates, but the model also applies to systems containing a compact central object after rescaling and possibly other modifications such as the appropriate choice of the gravitational potential. Extensive numerical studies of collimated disc winds have been performed using several types of model. Uchida & Shibata (1985), Matsumoto et al. (1996) and Kudoh et al. (1998) consider an ideal MHD model of an accretion disc embedded in a non-rotating corona, permeated by an external magnetic field initially aligned with the disc rotation axis. Intense accretion develops in the disc (Stone & Norman 1994), accompanied by a centrifugally driven outflow. The wind is eventually collimated by toroidal magnetic field produced in the corona by winding up the poloidal field. Bell (1994), Bell & Lucek (1995) and Lucek & Bell (1996, 1997) use two- and three-dimensional ideal MHD models, with a polytropic equation of state, to study the formation and stability of pressure driven jets collimated by poloidal magnetic field, which has a minimum inside the jet. The general structure of thermally driven disc winds was studied analytically by, e.g., Fukue (1989) and Takahara et al. (1989). Another type of model was developed by Ustyugova et al. (1995), Romanova et al. (1997, 1998), Ouyed et al. (1997) and Ouyed & Pudritz (1997a, 1997b, 1999) who consider ideal MHD in a polytropic corona permeated by an external poloidal magnetic field and subsume the physics of the accretion disc into boundary conditions at the base of the corona. The disc is assumed to be in Keplerian rotation (with any accretion neglected). The corona is non-rotating initially, and the system is driven by the injection of matter through the boundary that represents the disc surface. These models develop a steady (or at least statistically steady) state consistent with the analytical models by Blandford & Payne (1982), showing a magneto-centrifugal wind collimated by toroidal magnetic field, which again is produced in the corona by the vertical shear in the angular velocity. Three-dimensional simulations suggest that the resulting collimated outflow does not break due to the kink instability (Ouyed et al. 2003; Thomsen & Nordlund 2003). It is not quite clear how strong the external magnetic field in accretion discs can be. Dragging of an external field from large radii in a viscous disc requires that magnetic diffusivity is much smaller than the kinetic one (Lubow et al. 1994), i.e. 
that the magnetic Prandtl number is significantly larger than unity, which would be difficult to explain in a turbulent disc (Heyvaerts et al. 1996). This argument neglects, however, the effect of magnetic torques, which could produce significant field line dragging even when the magnetic Prandtl number is of order unity (Shalybkov & Rüdiger 2000). On the other hand, the efficiency of trapping an external magnetic field at the initial stages of disc formation is questionable, because only a small fraction of the external flux can be retained if the density contrast between the disc and the surrounding medium is large. In that case the magnetic field will be strongly bent and reconnection will remove most of the flux from the disc. Furthermore, the ionization fraction is probably too small in the inner disc to ensure sufficient coupling between the gas and the magnetic field (Fromang et al. 2002). For these reasons it seems appropriate to explore whether or not magnetic fields generated by a dynamo within the disc can produce an outflow with realistic properties. The models of Ustyugova et al. (1995) and Ouyed & Pudritz (1997a, 1997b, 1999) have a distributed mass source and an imposed poloidal velocity at the (unresolved) disc surface. As a result, their systems develop a persistent outflow which is not just a transient. On the other hand, the models of Matsumoto et al. (1996) resolve the disc, but have no mass replenishment to compensate for the losses via the outflow and accretion, and so the disc disappears with time. Our model is an attempt to combine the advantages of both of these approaches and also to add dynamo action in the disc. Instead of a rigidly prescribed mass injection, we allow for self-regulatory replenishment of matter within a resolved disc. Instead of prescribing the poloidal velocity at the disc surface, we resolve the disc and prescribe an entropy contrast between the disc and the corona, leaving more freedom for the system. Such an increase of entropy with height is only to be expected for a disc surrounded by a hot corona, and we parameterize the coronal heating by a (fixed) entropy contrast. We further add a self-sustained, intrinsic magnetic field to our system, as opposed to the external field used in the other models. Since our model goes beyond ideal MHD, the magnetic field must be maintained against decay. A simple form of mean-field dynamo action is included for this purpose. As in the model of Ouyed & Pudritz (1997a, 1997b, 1999), the hot, pressure supported corona does not rotate initially. The disc is cool and is therefore centrifugally supported, so its rotation is nearly Keplerian. The corresponding steady-state solution is used as our initial condition. This solution is, however, unstable because of the vertical shear in the angular velocity between the disc and the corona (Urpin & Brandenburg 1998). Angular momentum transfer by viscous and magnetic stresses also leads to a departure from the initial state. As a result, a meridional flow develops, which exchanges matter between the corona and the disc surface layers. Mass losses through the disc surface and to the accreting central object are then replenished in the disc, where we allow local mass production whenever and wherever the density decreases below a reference value. Matter is heated as it moves into the corona; this leads to an increase in pressure which drives the wind. Another efficient driver of the outflow in our model is magneto-centrifugal acceleration.
A strong toroidal magnetic field produced by the dynamo in the disc is advected into the corona and contributes to confining the wind. Altogether, our model contains many features included in a range of earlier models. For example, the disc is essentially resistive, to admit dynamo action, whereas magnetic diffusion turns out to be relatively unimportant in the corona, similarly to the models of, e.g., Wardle & Königl (1993), Ferreira & Pelletier (1995) and Casse & Ferreira (2000a, 2000b). The composite nature of the outflow, driven both by pressure forces and magneto-centrifugally, is characteristic of the self-similar solutions of Ferreira & Pelletier (1995) and Casse & Ferreira (2000b). The latter authors also stress the rôle of the vertical entropy gradient in enhancing the mass flux in the outflow, and prescribe a vertical profile of the entropy production rate. Structured outflows outwardly similar to our results have been discussed by Krasnopolsky et al. (1999) and Goodson et al. (1997). However, in the former paper the structure in the solutions is due to the boundary conditions imposed on the disc surface, and in the latter model the structure is due to the interaction between the stellar magnetic field and the disc in the presence of differential rotation between star and disc. By contrast, the structure in our model results from the dominance of different driving mechanisms in the inner and outer parts of the disc. While we had initially expected to find collimated outflows similar to those reported by Ouyed & Pudritz (1997a, 1997b), the outflow patterns we obtained turned out to be of a quite different nature. A significant difference in our model, however, is the resolved disc, which is necessary to model dynamo action in the disc, and the relatively small extension into the corona, where collimation is not yet expected. Figure 1 shows a sketch of the overall, multi-component structure of the outflows in the presence of magnetic fields and a mass sink, which should be compared with the individual figures in the rest of the paper. The plan of the paper is as follows. We introduce our model in Sect. 2, and then consider a range of parameters in Sect. 3 to clarify and illustrate the main physical effects. In Sect. 4 we explore the parameter space, and our conclusions are presented in Sect. 5.

## 2 The model

### 2.1 Basic equations

Our intention is to make our model as simple as possible and to avoid detailed modelling of the mass supply to the accretion disc, which occurs differently in different accreting systems. For example, matter enters the accretion disc at large radii, in a restricted range of azimuthal angles, in binary systems with Roche lobe overflow. On the other hand, the matter supply can be more uniform in both azimuth and radius in active galactic nuclei. Being interested in other aspects of accretion physics, we prefer to avoid detailed modelling of these processes. Instead, similarly to the model of Ouyed & Pudritz (1997a, 1997b), we inject matter into the system, but an important difference is that we introduce a self-regulating mass source in the disc. We also allow for a mass sink at the centre to model accretion onto the central object (star). With these two effects included, the continuity equation becomes $$\frac{\partial \varrho }{\partial t}+\mathbf{\nabla }\cdot (\varrho u)=q_\varrho ^{\mathrm{disc}}+q_\varrho ^{\mathrm{star}}\equiv \dot{\varrho },$$ (1) where $`u`$ is the velocity field and $`\varrho `$ is the gas density.
The mass source, $`q_\varrho ^{\mathrm{disc}}`$, is localized in the disc (excluding the star) and turns on once the local density drops below a reference value $`\varrho _0`$ (chosen to be equal to the initial density, see Sect. 2.4), i.e. $$q_\varrho ^{\mathrm{disc}}=\frac{\xi _{\mathrm{disc}}(r)-\xi _{\mathrm{star}}(r)}{\tau _{\mathrm{disc}}}(\varrho _0-\varrho )_+,$$ (2) where the subscript '$`+`$' indicates that only positive values are used, i.e. $`x_+=\{\begin{array}{cc}x\hfill & \text{if }x>0\text{,}\hfill \\ 0\hfill & \text{otherwise}.\hfill \end{array}`$ (5) This means that matter is injected only if $`\varrho <\varrho _0`$, and that the strength of the mass source is proportional to the gas density deficit. Throughout this work we use cylindrical polar coordinates $`(\varpi ,\phi ,z)`$, assuming axisymmetry. The shapes of the disc and the central object are specified by the profiles $`\xi _{\mathrm{disc}}`$ and $`\xi _{\mathrm{star}}`$ via $`\xi _{\mathrm{disc}}(\varpi ,z)`$ $`=`$ $`\mathrm{\Theta }(\varpi _0-\varpi ,d)\,\mathrm{\Theta }(z_0-|z|,d),`$ (6) $`\xi _{\mathrm{star}}(\varpi ,z)`$ $`=`$ $`\mathrm{\Theta }(r_0-r,d).`$ (7) Here, $`\mathrm{\Theta }(x,d)`$ is a smoothed Heaviside step function with a smoothing half-width $`d`$, set to 8 grid zones in $`\xi _{\mathrm{disc}}`$ and to $`d=r_0`$ in $`\xi _{\mathrm{star}}`$; $`r=\sqrt{\varpi ^2+z^2}`$ is the spherical radius and $`r_0`$ is the stellar radius introduced in Eq. (25); $`\varpi _0`$ and $`z_0`$ are the disc outer radius and disc half-thickness, respectively. So, $`\xi _{\mathrm{disc}}`$ is equal to unity at the disc midplane and vanishes in the corona, whereas $`\xi _{\mathrm{star}}`$ is unity at the centre of the central object and vanishes outside the star. Note that $`\xi _{\mathrm{disc}}-\xi _{\mathrm{star}}\ge 0`$ everywhere, since $`\xi _{\mathrm{disc}}`$ is close to unity (and thus $`\xi _{\mathrm{disc}}\ge \xi _{\mathrm{star}}`$) all the way to the origin, $`z=\varpi =0`$. In Eq. (2), $`\tau _{\mathrm{disc}}`$ is a response time, which is chosen to be significantly shorter than the time scale of the depletion processes, equivalent to the time scale of mass replenishment in the disc, $`M_{\mathrm{disc}}/\dot{M}_{\mathrm{source}}`$ (cf. Sect. 3.4), in order to avoid unphysical influences of the mass source. We do not fix the distribution and magnitude of $`q_\varrho ^{\mathrm{disc}}`$ beforehand; rather, the system adjusts itself so as to prevent the disc from disappearing. The self-regulating mass sink at the position of the central star is modelled in a similar manner, $$q_\varrho ^{\mathrm{star}}=-\frac{\xi _{\mathrm{star}}(r)}{\tau _{\mathrm{star}}}(\varrho -\varrho _0)_+,$$ (8) where $`\xi _{\mathrm{star}}`$ is defined in Eq. (7) and $`\tau _{\mathrm{star}}`$ is a central accretion time scale that controls the efficiency of the sink. We discuss physically meaningful values of $`\tau _{\mathrm{star}}`$ in Sect. 2.5. Apart from the continuity equation (1), the mass source also appears in the Navier–Stokes equation, unless matter is always injected with the ambient velocity of the gas. In that case, however, a runaway instability can occur: if matter is already slower than Keplerian, it falls inward, and so does the newly injected matter. This enhances the need for even more mass injection. A similar argument applies if matter is rotating faster than Keplerian. This is why we inject matter at the Keplerian speed. This leads to an extra term in the Navier–Stokes equation, $`(u_\mathrm{K}-u)q_\varrho ^{\mathrm{disc}}`$, which would only be absent if the gas were rotating at the Keplerian speed.
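A minimal sketch of the source and sink terms, Eqs (2) and (8), on a grid of cylindrical coordinates might look as follows. The tanh smoothing of $`\mathrm{\Theta }`$ is our assumption (the text does not specify the exact smoothing function), and the default parameter values are those quoted in Sect. 2.5.

```python
import numpy as np

def theta(x, d):
    """Smoothed Heaviside step of half-width d (tanh profile assumed)."""
    return 0.5 * (1.0 + np.tanh(x / d))

def profiles(varpi, z, varpi0=1.5, z0=0.15, r0=0.05, d=0.08):
    """xi_disc and xi_star of Eqs (6)-(7); d = 8 zones of delta x = 0.01."""
    r = np.sqrt(varpi**2 + z**2)
    xi_disc = theta(varpi0 - varpi, d) * theta(z0 - np.abs(z), d)
    xi_star = theta(r0 - r, r0)
    return xi_disc, xi_star

def q_disc(rho, varpi, z, rho0, tau_disc=0.1):
    """Self-regulating mass source of Eq. (2)."""
    xi_disc, xi_star = profiles(varpi, z)
    return (xi_disc - xi_star) / tau_disc * np.maximum(rho0 - rho, 0.0)

def q_star(rho, varpi, z, rho0, r0=0.05, tau_star=0.01):
    """Mass sink of Eq. (8): removes excess density inside the star."""
    r = np.sqrt(varpi**2 + z**2)
    return -theta(r0 - r, r0) / tau_star * np.maximum(rho - rho0, 0.0)
```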
Thus, the Navier–Stokes equation takes the form $$\frac{\mathrm{D}u}{\mathrm{D}t}=-\frac{1}{\varrho }\mathbf{\nabla }p-\mathbf{\nabla }\mathrm{\Phi }+\frac{1}{\varrho }\left[F+(u_\mathrm{K}-u)q_\varrho ^{\mathrm{disc}}\right],$$ (9) where $`\mathrm{D}/\mathrm{D}t=\partial /\partial t+u\cdot \mathbf{\nabla }`$ is the advective derivative, $`p`$ is the gas pressure, $`\mathrm{\Phi }`$ is the gravitational potential, $`F=J\times B+\mathbf{\nabla }\cdot \tau `$ is the sum of the Lorentz and viscous forces, $`J=\mathbf{\nabla }\times B/\mu _0`$ is the current density due to the (mean) magnetic field $`B`$, $`\mu _0`$ is the magnetic permeability, and $`\tau `$ is the viscous stress tensor, $$\tau _{ij}=\varrho \nu (\partial u_i/\partial x_j+\partial u_j/\partial x_i).$$ (10) Here, $`\nu `$ is the kinematic viscosity, which has been subdivided into three contributions, $$\nu =\nu _\mathrm{t}+\nu _{\mathrm{adv}}+\nu _{\mathrm{shock}}.$$ (11) The first term is a turbulent (Shakura–Sunyaev) viscosity in the disc, $$\nu _\mathrm{t}=\alpha _{\mathrm{SS}}c_\mathrm{s}z_0\xi _{\mathrm{disc}}(r),$$ (12) where $`c_\mathrm{s}=(\gamma p/\varrho )^{1/2}`$ is the sound speed and $`\gamma `$ is the ratio of specific heats. The second term is an artificial advection viscosity, $$\nu _{\mathrm{adv}}=c_\nu ^{\mathrm{adv}}\delta x(u_{\mathrm{pol}}^2+c_\mathrm{s}^2+v_{\mathrm{A},\mathrm{pol}}^2)^{1/2},$$ (13) which is required to stabilize rapidly moving patterns. Here, $`\delta x=\mathrm{min}(\delta \varpi ,\delta z)`$ is the mesh size, and $`c_\nu ^{\mathrm{adv}}`$ is a constant specified in Sect. 2.5. In Eq. (13) we have only used the poloidal velocity, $`u_{\mathrm{pol}}`$, and the Alfvén speed due to the poloidal magnetic field, $`v_{\mathrm{A},\mathrm{pol}}=(B_{\mathrm{pol}}^2/\varrho \mu _0)^{1/2}`$, because in an axisymmetric calculation advection in the $`\phi `$-direction is unimportant. Finally, $$\nu _{\mathrm{shock}}=c_\nu ^{\mathrm{shock}}\delta x^2(-\mathbf{\nabla }\cdot u)_+$$ (14) is an artificial shock viscosity, with $`c_\nu ^{\mathrm{shock}}`$ a constant specified in Sect. 2.5. Note that $`\nu _{\mathrm{adv}}`$ and $`\nu _{\mathrm{shock}}`$ are needed for numerical reasons; they tend to zero with increasing resolution, $`\delta x\to 0`$. We assume that the magnetic field in the disc is generated by a standard $`\alpha ^2\mathrm{\Omega }`$-dynamo (e.g., Krause & Rädler 1980), which implies an extra electromotive force, $`\alpha B`$, in the induction equation for the mean magnetic field, $`B`$. To ensure that $`B`$ remains solenoidal, we solve the induction equation in terms of the vector potential $`A`$, $$\frac{\partial A}{\partial t}=u\times B+\alpha B-\eta \mu _0J,$$ (15) where $`B=\mathbf{\nabla }\times A`$ and $`\eta `$ is the magnetic diffusivity. Since $`\alpha (r)`$ has to be antisymmetric about the midplane and to vanish outside the disc, we adopt the form $$\alpha =\alpha _0\frac{z}{z_0}\frac{\xi _{\mathrm{disc}}(r)}{1+v_\mathrm{A}^2/v_0^2},$$ (16) where $`v_\mathrm{A}`$ is the Alfvén speed based on the total magnetic field, and $`\alpha _0`$ and $`v_0`$ are parameters that control the intensity of dynamo action and the field strength in the disc, respectively. The $`\alpha `$-effect has been truncated near the axis, so that $`\alpha =0`$ for $`\varpi \le 0.2`$. For the magnetic diffusivity we assume $$\eta =\eta _0+\eta _\mathrm{t},$$ (17) where $`\eta _0`$ is a uniform background diffusivity and the turbulent part, $`\eta _\mathrm{t}=\eta _{\mathrm{t0}}\xi _{\mathrm{disc}}(r)`$, vanishes outside the disc. Thus, the magnetic diffusivity in the corona is smaller than in the disc.
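Collecting Eqs (11)–(14), the total viscosity at each grid point can be assembled as in the following sketch; the field arrays and the choice $`\alpha _{\mathrm{SS}}=0.005`$ (within the quoted range of 0.003–0.007) are our own placeholders.

```python
import numpy as np

def total_viscosity(rho, cs, u_pol2, vA_pol2, div_u, xi_disc, dx_min,
                    alpha_SS=0.005, z0=0.15, c_adv=0.02, c_shock=1.2):
    """nu = nu_t + nu_adv + nu_shock of Eq. (11).

    cs, u_pol2, vA_pol2, div_u, xi_disc : arrays on the (varpi, z) grid
    dx_min : min(delta varpi, delta z)
    """
    nu_t = alpha_SS * cs * z0 * xi_disc                          # Eq. (12)
    nu_adv = c_adv * dx_min * np.sqrt(u_pol2 + cs**2 + vA_pol2)  # Eq. (13)
    nu_shock = c_shock * dx_min**2 * np.maximum(-div_u, 0.0)     # Eq. (14)
    return nu_t + nu_adv + nu_shock
```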
Depending on the sign of $`\alpha `$ and the vertical distribution of $`\eta `$, the dynamo can generate magnetic fields of either dipolar or quadrupolar symmetry. We shall discuss both types of geometry.

### 2.2 Implementation of a cool disc embedded in a hot corona

Protostellar systems are known to be strong X-ray sources (see, e.g., Glassgold et al. 2000; Feigelson & Montmerle 1999; Grosso et al. 2000). The X-ray emission is generally attributed to coronae of disc–star systems, plausibly heated by small scale magnetic reconnection events (Galeev et al. 1979), for example in the form of nanoflares caused by slow footpoint motions (Parker 1994). Heating of disc coronae by fluctuating magnetic fields is indeed quite natural if one accepts that the disc turbulence is caused by the magneto-rotational instability. Estimates of the coronal temperatures of YSOs range from $`10^6\mathrm{K}`$ to $`10^8\mathrm{K}`$ (see, e.g., Feigelson & Montmerle 1999). For the base of the disc corona, temperatures of at least $`8\times 10^3\mathrm{K}`$ are to be expected in order to explain the observed mass loss rates (Kwan & Tademaru 1995; Kwan 1997). The discs, on the other hand, have typical temperatures of a few $`10^3\mathrm{K}`$ (e.g., Papaloizou & Terquem 1999). A simple way to implement a dense, relatively cool disc embedded in a rarefied, hot corona, without modelling the detailed physics of coronal heating, is to prescribe the distribution of specific entropy, $`s(r)`$, such that $`s`$ is smaller within the disc and larger in the corona. For a perfect gas this implies $`p=K\varrho ^\gamma `$ (in dimensionless form), where $`K=e^{s/c_v}`$ is a function of position (here $`p`$ and $`\varrho `$ are gas pressure and density, $`\gamma =c_p/c_v`$, and $`c_p`$ and $`c_v`$ are the specific heats at constant pressure and constant volume, respectively). We prescribe the polytrope parameter $`K`$ to be unity in the corona and smaller in the disc, so we put $$[K(r)]^{1/\gamma }=e^{s/c_p}=1-(1-\beta )\xi _{\mathrm{disc}}(r),$$ (18) where $`0<\beta <1`$ is a free parameter, with $`\beta =e^{-\mathrm{\Delta }s/c_p}`$, that controls the entropy contrast, $`\mathrm{\Delta }s>0`$, between corona and disc. We consider values of $`\beta `$ between 0.1 and 0.005, which yields an entropy contrast, $`\mathrm{\Delta }s/c_p`$, between 2.3 and 5.3. The temperature ratio between disc and corona is roughly $`\beta `$; see Eq. (28). Assuming pressure equilibrium between disc and corona, and $`p\propto \varrho T`$ for a perfect gas, the corresponding density ratio is $`\beta ^{-1}`$.

### 2.3 Formulation in terms of potential enthalpy

In the present case it is advantageous to use the potential enthalpy, $$H=h+\mathrm{\Phi },$$ (19) as a variable. Here, $`h`$ is the specific enthalpy, $`h=c_vT+p/\varrho =c_pT`$ for a perfect gas (with constant specific heats), and $`T`$ is the temperature. The specific enthalpy $`h`$ is therefore related to $`p`$ and $`\varrho `$ via $`h=\gamma (\gamma -1)^{-1}p/\varrho `$. The specific entropy $`s`$ is related to $`p`$ and $`\varrho `$ (up to an additive constant) through $`s=c_v\mathrm{ln}p-c_p\mathrm{ln}\varrho `$ for a perfect gas. We have therefore $$\frac{\mathrm{D}\mathrm{ln}h}{\mathrm{D}t}=\frac{\mathrm{D}\mathrm{ln}p}{\mathrm{D}t}-\frac{\mathrm{D}\mathrm{ln}\varrho }{\mathrm{D}t}=(\gamma -1)\frac{\mathrm{D}\mathrm{ln}\varrho }{\mathrm{D}t}+\frac{\gamma }{c_p}\frac{\mathrm{D}s}{\mathrm{D}t}.$$ (20) Since $`s`$ is independent of time, $`\mathrm{D}s/\mathrm{D}t=u\cdot \mathbf{\nabla }s`$.
Together with Eq. (1), this yields an evolution equation for $`h`$, which can be written in terms of $`H`$ as $$\frac{\mathrm{D}H}{\mathrm{D}t}=u\cdot \mathbf{\nabla }\mathrm{\Phi }+(\gamma -1)h\left(\frac{\dot{\varrho }}{\varrho }-\mathbf{\nabla }\cdot u\right)+\frac{\gamma h}{c_p}u\cdot \mathbf{\nabla }s.$$ (21) In the following we solve Eq. (21) instead of Eq. (1). In terms of $`h`$, the density and sound speed are given by $$\varrho ^{\gamma -1}=\frac{\gamma -1}{\gamma }\frac{h}{K},c_\mathrm{s}^2=(\gamma -1)h=\gamma \frac{p}{\varrho }.$$ (22) It proved necessary to include an artificial diffusion term in Eq. (21), with a diffusion coefficient proportional to $`\nu `$. The first law of thermodynamics allows us to express the pressure gradient in terms of $`h`$ and $`s`$, $$\frac{1}{\varrho }\mathbf{\nabla }p=\mathbf{\nabla }h-T\mathbf{\nabla }s,$$ (23) and with $`T=h/c_p`$ we obtain $$\frac{\mathrm{D}u}{\mathrm{D}t}=-\mathbf{\nabla }H+h\mathbf{\nabla }s/c_p+\frac{1}{\varrho }\left[F+(u_\mathrm{K}-u)q_\varrho ^{\mathrm{disc}}\right],$$ (24) which now replaces Eq. (9). We use a softened, spherically symmetric gravitational potential of the form $$\mathrm{\Phi }=-GM_{\ast }\left(r^n+r_0^n\right)^{-1/n},$$ (25) where $`G`$ is the gravitational constant, $`M_{\ast }`$ is the mass of the central object, $`r`$ is the spherical radius, $`r_0`$ is the softening radius, and $`n=5`$; tentatively, $`r_0`$ can be identified with the stellar radius.

### 2.4 The initial state

Our initial state is the hydrostatic equilibrium obtained by solving, for $`h`$, the vertical balance equation obtained from Eq. (24), $$\frac{\partial }{\partial z}(h+\mathrm{\Phi })-h\frac{\partial (s/c_p)}{\partial z}=0,$$ (26) from large $`z`$ (where $`h+\mathrm{\Phi }=0`$) down to $`z=0`$. The initial density distribution $`\varrho _0`$ is then obtained using Eq. (22); it decreases monotonically with both $`z`$ and $`\varpi `$ in the equilibrium state. The initial rotation velocity, $`u_{\phi 0}`$, follows from the radial balance equation, $$\frac{u_{\phi 0}^2}{\varpi }=\frac{\partial }{\partial \varpi }(h+\mathrm{\Phi })-h\frac{\partial (s/c_p)}{\partial \varpi }.$$ (27) In the disc, $`h=c_pT`$ is small, so $`u_{\phi 0}`$ is close to the Keplerian velocity, while the corona does not rotate initially, and so is supported by the pressure gradient. As a rough estimate, the value of $`h`$ in the midplane of the disc is $$h_{\mathrm{disc}}\approx \beta h_{\mathrm{corona}}\approx \beta GM_{\ast }/\varpi ,$$ (28) as can be seen by integrating Eq. (26), ignoring the $`\partial \mathrm{\Phi }/\partial z`$ term. Thus, the initial toroidal velocity in the midplane can be obtained from Eq. (27), using Eq. (28) and recalling that $`\partial s/\partial \varpi =0`$ in the midplane, which gives $$u_{\phi 0}\approx \sqrt{(1-\beta )GM_{\ast }/\varpi }=\sqrt{1-\beta }u_\mathrm{K}.$$ (29) For $`\beta =0.1`$, for example, the toroidal velocity is within 5% of the Keplerian velocity.
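To illustrate Sect. 2.4, the vertical integration of Eq. (26) at fixed $`\varpi `$ can be sketched as below (a simple Euler step from large $`z`$ down to the midplane; the tanh form of $`\xi _{\mathrm{disc}}`$ and the grid parameters are the same assumptions as in the earlier sketches):

```python
import numpy as np

GAMMA = 5.0 / 3.0

def xi_disc(varpi, z, varpi0=1.5, z0=0.15, d=0.08):
    th = lambda x: 0.5 * (1.0 + np.tanh(x / d))
    return th(varpi0 - varpi) * th(z0 - np.abs(z))

def phi_grav(varpi, z, GM=1.0, r0=0.05, n=5):
    r = np.sqrt(varpi**2 + z**2)
    return -GM * (r**n + r0**n) ** (-1.0 / n)            # Eq. (25)

def initial_column(varpi, beta=0.1, z_top=2.0, nz=4001):
    """h(z) and rho(z) from Eqs (26), (18) and (22) at fixed varpi."""
    z = np.linspace(z_top, 0.0, nz)                      # integrate downwards
    sig = np.log(1.0 - (1.0 - beta) * xi_disc(varpi, z))  # s/c_p from Eq. (18)
    phi = phi_grav(varpi, z)
    h = np.empty(nz)
    h[0] = -phi[0]                                       # h + Phi = 0 at large z
    for k in range(nz - 1):
        # Eq. (26): d(h + Phi) = h d(s/c_p)  =>  dh = -dPhi + h dsigma
        h[k + 1] = h[k] - (phi[k + 1] - phi[k]) + h[k] * (sig[k + 1] - sig[k])
    K = np.exp(GAMMA * sig)                              # since K^(1/gamma) = e^(s/c_p)
    rho = ((GAMMA - 1.0) / GAMMA * h / K) ** (1.0 / (GAMMA - 1.0))  # Eq. (22)
    return z, h, rho
```

The radial balance (27) then fixes $`u_{\phi 0}`$ from radial derivatives of the resulting $`h+\mathrm{\Phi }`$ field.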
### 2.5 Dimensionless variables and choice of parameters

Our model is scale invariant and can therefore be applied to various astrophysical objects. We consider here the range of parameters typical of protostellar discs, for which a typical surface density is $`\mathrm{\Sigma }_0\approx 1\mathrm{g}\mathrm{cm}^{-2}`$. A typical coronal sound speed is $`c_{\mathrm{s0}}\approx 10^2\mathrm{km}\mathrm{s}^{-1}`$, which corresponds to a temperature of $`T\approx 4\times 10^5\mathrm{K}`$. This allows us to fix the relevant units as follows: $$[\mathrm{\Sigma }]=\mathrm{\Sigma }_0,[u]=c_{\mathrm{s0}},[x]=GM_{\ast }c_{\mathrm{s0}}^{-2},[s]=c_p.$$ (30) With $`M_{\ast }=1M_{\odot }`$ we have $`[x]\approx 0.1\mathrm{AU}`$, and $`[t]=[x]/[u]\approx 1.5\mathrm{days}`$. The unit of density is $`[\varrho ]=[\mathrm{\Sigma }]/[x]\approx 7\times 10^{-13}\mathrm{g}\mathrm{cm}^{-3}`$, the unit of the mass accretion rate is $`[\dot{M}]=[\mathrm{\Sigma }][u][x]\approx 2\times 10^{-7}M_{\odot }\mathrm{yr}^{-1}`$, and the unit of the magnetic field is $`[B]=[u](\mu _0[\varrho ])^{1/2}\approx 30\mathrm{G}`$. Since $`[h]=[u]^2`$, the dimensionless value $`h=1`$ corresponds to $`10^{14}\mathrm{cm}^2\mathrm{s}^{-2}`$. With a mean specific weight $`\mu =0.6`$, the universal gas constant $`{\cal R}=8.3\times 10^7\mathrm{cm}^2\mathrm{s}^{-2}\mathrm{K}^{-1}`$ and $`\gamma =5/3`$, we have $$c_p=\frac{\gamma }{\gamma -1}\frac{{\cal R}}{\mu }\approx 3.5\times 10^8\mathrm{cm}^2\mathrm{s}^{-2}\mathrm{K}^{-1}.$$ (31) Therefore, $`h=1`$ corresponds to $`T=[h]/c_p\approx 3\times 10^5\mathrm{K}`$. Using $`h=-\mathrm{\Phi }`$ in the corona, this corresponds to a temperature of $`3\times 10^5\mathrm{K}`$ at $`r=[x]\approx 0.1\mathrm{AU}`$. We choose $`\beta `$ between 0.1 and 0.005, corresponding to a typical disc temperature (in the model) of $`3\times 10^4\mathrm{K}`$ to $`1.5\times 10^3\mathrm{K}`$; see Eq. (28); real protostellar discs have typical temperatures of a few thousand Kelvin (see Sect. 2.2). In our models we use $`\varpi _0=1.5`$ for the disc outer radius, $`z_0=0.15`$ for the disc semi-thickness and $`r_0=0.05`$ for the softening (stellar) radius. The disc aspect ratio is $`z_0/\varpi _0=0.1`$. Note that $`r_0=0.05`$ corresponds to $`7\times 10^{10}\mathrm{cm}`$, i.e. one solar radius. Therefore, we shall not reduce it much below this physically meaningful value. Note, however, that smaller values of $`r_0`$ would result in faster outflows (Ouyed & Pudritz 1999). Furthermore, $`c_\nu ^{\mathrm{adv}}=0.02`$ and $`c_\nu ^{\mathrm{shock}}=1.2`$. We vary the value of $`\alpha _{\mathrm{SS}}`$ between 0.003 and 0.007. The mean-field dynamo is characterized by the parameters $`|\alpha _0|=0.3`$, $`v_0=c_\mathrm{s}`$, $`\eta _{\mathrm{t0}}=10^{-3}`$ and $`\eta _0=5\times 10^{-4}`$. The total magnetic diffusivity in the disc is therefore $`\eta _{\mathrm{T0}}=\eta _{\mathrm{t0}}+\eta _0=0.0015`$. In terms of the usual Shakura–Sunyaev viscosity parameter, this corresponds to $$\alpha _{\mathrm{SS}}^{(\eta )}\equiv \frac{\eta _{\mathrm{T0}}}{c_{\mathrm{s},\mathrm{disc}}z_0}\approx 0.01\left(\frac{\varpi }{\beta }\right)^{1/2}\left(\frac{\eta _{\mathrm{T0}}}{0.0015}\right)\left(\frac{z_0}{0.15}\right)^{-1},$$ (32) where we have used $`c_{\mathrm{s},\mathrm{disc}}^2\approx \beta c_{\mathrm{s},\mathrm{corona}}^2`$ \[cf. Eqs (22) and (28)\] and $`c_{\mathrm{s},\mathrm{corona}}\propto \varpi ^{-1/2}`$. In terms of the usual nondimensional dynamo parameters we have $$|C_\alpha |=|\alpha _0|z_0/\eta _{\mathrm{T0}}=30$$ (33) and, for Keplerian rotation, $$C_\omega =(\varpi \partial \mathrm{\Omega }/\partial \varpi )z_0^2/\eta _{\mathrm{T0}}=-22.5\varpi ^{-3/2},$$ (34) so that the dynamo number is given by $$|{\cal D}|=|C_\alpha C_\omega |=675\varpi ^{-3/2}.$$ (35) Note that the value of the dynamo number expected for accretion discs is given by $$|{\cal D}|=\frac{|\alpha _0\varpi \partial \mathrm{\Omega }/\partial \varpi |z_0^3}{\eta _{\mathrm{T0}}^2}\approx \frac{3}{2}\left(\alpha _{\mathrm{SS}}^{(\eta )}\right)^{-2}$$ (36) for $`\eta _{\mathrm{T0}}=\alpha _{\mathrm{SS}}^{(\eta )}c_{\mathrm{s},\mathrm{disc}}z_0`$, $`|\alpha _0|\approx c_{\mathrm{s},\mathrm{disc}}`$ and $`c_{\mathrm{s},\mathrm{disc}}=\mathrm{\Omega }z_0`$.
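The dimensionless dynamo parameters of Eqs (32)–(35) are easy to check numerically; the following small script is our own, taking $`c_{\mathrm{s},\mathrm{disc}}\approx (\beta /\varpi )^{1/2}`$ as in the text.

```python
import numpy as np

alpha0, z0, eta_T0, beta = 0.3, 0.15, 1.5e-3, 0.1   # parameters quoted above
varpi = np.array([0.2, 0.5, 1.0, 1.5])

C_alpha = alpha0 * z0 / eta_T0                       # Eq. (33): gives 30
C_omega = -1.5 * varpi**-1.5 * z0**2 / eta_T0        # Eq. (34), Omega ~ varpi^(-3/2)
D = np.abs(C_alpha * C_omega)                        # Eq. (35): 675 varpi^(-3/2)
alpha_SS_eta = eta_T0 / (np.sqrt(beta / varpi) * z0) # Eq. (32): 0.01 (varpi/beta)^(1/2)
print(C_alpha, D, alpha_SS_eta)
```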
As follows from Eqs (32)–(35), for Keplerian rotation, $$\frac{2}{3}|{\cal D}|\left(\alpha _{\mathrm{SS}}^{(\eta )}\right)^2=|\alpha _0|z_0\beta ^{-1}\varpi ^{-1/2}.$$ (37) \[Note that this expression is independent of $`\eta _{\mathrm{T0}}`$.\] For our choice of parameters, $`|\alpha _0|=0.3`$ and $`z_0=0.15`$, expression (37) is $`\gtrsim 1`$ if $`\varpi \lesssim 0.002\beta ^{-2}=0.2`$ for $`\beta =0.1`$, which corresponds to the truncation radius of the $`\alpha `$-effect. Therefore, our choice of parameters is consistent with the constraint (36). We take $`\gamma =5/3`$, $`\tau _{\mathrm{disc}}=0.1`$, and consider two values of $`\tau _{\mathrm{star}}`$. For $`\tau _{\mathrm{star}}\to \mathrm{}`$, the mass sink at the central object is suppressed, whereas $`\tau _{\mathrm{star}}\to 0`$ implies instantaneous accretion of any extra matter (relative to the hydrostatic equilibrium) by the central object. A realistic lower limit for $`\tau _{\mathrm{star}}`$ can be estimated as $`\tau _{\mathrm{star}}\approx r_0/v_{\mathrm{ff}}\approx 0.008`$, where $`v_{\mathrm{ff}}`$ is the free-fall velocity (given by $`\frac{1}{2}v_{\mathrm{ff}}^2=GM_{\ast }/r_0`$ in dimensional quantities). In most cases we used $`\tau _{\mathrm{star}}=0.01`$, but we also tried an even smaller value of $`\tau _{\mathrm{star}}=0.005`$ and obtained very similar results. A finite value of $`\tau _{\mathrm{star}}`$ implies that matter is not instantaneously absorbed by the sink. Therefore, some matter can leave the sink if it moves so fast that its residence time in the sink is shorter than $`\tau _{\mathrm{star}}`$. As can be seen below, a small (negligible) fraction of mass does indeed escape from the sink. Computations have been carried out in domains ranging from $`(\varpi ,z)\in [0,2]\times [-1,1]`$ to $`[0,8]\times [-2,30]`$, but the results are hardly different in the overlapping parts of the domains. In our standard computational box, $`\delta \varpi =\delta z=0.01`$, and in the case of the larger computational domain $`\delta \varpi =\delta z=0.02`$.

### 2.6 Numerical method and boundary conditions

We use third-order Runge–Kutta time stepping and a sixth-order finite-difference scheme in space. Details and test calculations are discussed by Brandenburg (2001). On the outer boundaries, the induction equation is evolved using one-sided derivatives (open boundary conditions). The normal velocity component has zero derivative normal to the boundary, but the velocity is required to be always directed outwards. The tangential velocity components and the potential enthalpy on the boundaries are similarly obtained from the next two interior points away from the boundary. Tests show that the presence of the boundaries does not affect the flow inside the computational domain. Regularity conditions are adopted on the axis, where the $`\varpi `$ and $`\phi `$ components of vectorial quantities vanish, whereas scalar variables and the $`z`$ components of vectorial quantities have vanishing radial derivative.

## 3 Results

In this section we discuss a range of models of increasing degree of complexity. We first consider, in Sect. 3.1, the simplest model, which contains neither magnetic field nor a mass sink at the centre, to show how the pressure gradient resulting from the (fixed) entropy distribution drives a disc outflow. It is further shown that the outflow is significantly reduced if accretion onto the central object is allowed, but restored again if the disc can generate a large-scale magnetic field (Sect. 3.2).
Having thus demonstrated the importance of the large-scale magnetic field in our model, we discuss in Sect. 3.3 its structure, largely controlled by the dynamo action but affected by the outflow. Model parameters used in these sections are not necessarily realistic, as we aim to illustrate the general physical nature of our solutions. We present a physically motivated model in Sect. 4, where the set of model parameters is close to that of a standard accretion disc around a protostellar object. The physical nature of our solutions is discussed in Sects. 3.4, 3.5 and 3.6.

### 3.1 Nonmagnetic outflows

We illustrate in Fig. 2 results obtained for a model without any magnetic fields and without a mass sink. A strong outflow develops even in this case, which is driven mostly by the vertical pressure gradient in the transition layer between the disc and the corona, in particular by the term $`T\mathbf{\nabla }s`$ in Eq. (23). The gain in velocity is controlled by the total specific entropy difference between the disc and the corona, but not by the thickness $`d`$ of the transition layer in the disc profile (6). The flow is fastest along the rotation axis and within a cone of polar angle of about $`30^{\circ }`$, where the terminal velocity $`u_z\approx 3`$ is reached. The conical shape of the outflow is partly due to obstruction by the dense disc, making it easier for matter to leave the disc near the axis. Both temperature and density in nonmagnetic runs without mass sink are reduced very close to the axis, where the flow speed is highest. The general flow pattern is sensitive to whether or not matter can accrete onto the central object. We show in Fig. 3 results for the same model as in Fig. 2, but with a mass sink given by Eq. (8) with $`\tau _{\mathrm{star}}=0.01`$. This can be compared with earlier work on thermally driven winds by Fukue (1989), who also considered polytropic outflows, but treated the disc as a boundary condition. In Fukue (1989), outflows are driven when the injected energy is above a critical value. The origin of this energy injection may be a hot corona. The critical surface in his model (see the lower panel of his Fig. 2) is quite similar to that found in our simulation (our Fig. 3), although our opening angle was found to be larger than in Fukue's (1989) model. \[Below we show, albeit with magnetic fields, that smaller values of $`\beta `$ do result in smaller opening angles, see Fig. 9, which would then be compatible with the result of Fukue (1989).\] As could be expected, the mass sink hampers the outflow in the cone (but not at $`\varpi \gtrsim 0.5`$). The flow remains very similar to that of Fig. 3 when $`\tau _{\mathrm{star}}`$ is reduced to $`0.005`$. Thus, nonmagnetic outflows are very sensitive to the presence of the central sink. As we show now, magnetized outflows are different in this respect.

### 3.2 Magnetized outflows

In this section we discuss results obtained with magnetic fields, first without a mass sink at the centre and then including a sink. We show that the effects of the sink are significantly weaker than in the nonmagnetic case. A magnetized outflow without the central mass sink, shown in Fig. 4, is similar to that in Fig. 2, but is denser and hotter near the axis, and the high-speed cone has a somewhat larger opening angle. In addition, the outflow becomes more structured, with a well pronounced conical shell where temperature and density are smaller than elsewhere (the conical shell reaches $`z=\pm 1`$ at $`\varpi \approx 1.2`$ in Fig. 4).
Here and in some of the following figures we also show the Alfvén surface with respect to the poloidal field. In Sect. 3.5 we show that this surface is close to the fast magnetosonic surface. As shown in Fig. 5, the outflow becomes faster inside the cone ($`u_z\approx 5`$ on the axis). As expected, we find that deeper potential wells, i.e. smaller values of $`r_0`$ in Eqs (7) and (25), result in even faster flows and in larger opening angles. Our results are insensitive to the size and symmetry of the computational domain: we illustrate this in Fig. 6 with a larger domain of size $`[0,8]\times [-2,30]`$. The disc midplane in this run is located asymmetrically in $`z`$, in order to verify that the (approximate) symmetry of the solutions is not imposed by the symmetry of the computational domain. Figure 6 confirms that our results are not affected by what happens near the computational boundaries. Unlike the nonmagnetized system, the magnetized outflow changes comparatively little when the mass sink is introduced at the centre. We show in Fig. 7 the results with a sink ($`\tau _{\mathrm{star}}=0.01`$) and otherwise the same parameters as in Fig. 4. As could be expected, the sink leads to a reduction in the outflow speed near the axis; the flow in the high-speed cone becomes slower. But apart from that, the most important effects of the sink are the enhancement of the conical structure of the outflow and the smaller opening angle of the conical shell. A decrease in $`\tau _{\mathrm{star}}`$ by a factor of 2, to $`0.005`$, has very little effect, as illustrated in Figs. 8 and 10. Increasing the entropy contrast (while keeping the specific entropy unchanged in the corona) reduces the opening angle of the conical shell. Pressure driving is obviously more important in this case, compared to magneto-centrifugal driving (see Sect. 3.5). A model with $`\beta =0.02`$ (corresponding to a density and inverse temperature contrast of about 50:1 between the disc and the corona) is shown in Fig. 9. At $`\vartheta =60^{\circ }`$, the radial velocity $`u_r`$ is slightly enhanced relative to the case $`\beta =0.1`$ (contrast 10:1); see Fig. 10. At $`\vartheta =30^{\circ }`$, on the other hand, the flow with the larger entropy contrast reaches the Alfvén point close to the disc (at $`r\approx 0.27`$, as opposed to $`r\approx 1`$ in the other case), which leads to a smaller terminal velocity. We conclude that the general structure of the magnetized flow and its typical parameters remain largely unaffected by the sink, provided its efficiency $`\tau _{\mathrm{star}}^{-1}`$ does not exceed a certain threshold. It is plausibly the build-up of magnetic pressure at the centre that shields the central object and makes central accretion inefficient. This shielding would be even stronger if we included a magnetosphere of the central object. We discuss in Sect. 4.1 the dependence of our solution on the geometrical size of the sink and show that the general structure of the outflow persists as long as the size of the sink does not exceed the disc thickness.

### 3.3 Magnetic field structure

The dynamo in most of our models has $`\alpha _0<0`$, consistent with results from simulations of disc turbulence driven by the magneto-rotational instability (Brandenburg et al. 1995; Ziegler & Rüdiger 2000). The resulting field symmetry is roughly dipolar, which seems to be typical of $`\alpha \mathrm{\Omega }`$ disc dynamos with $`\alpha _0<0`$ in a conducting corona (e.g., Brandenburg et al. 1990).
We note that the dominant parity of the magnetic field is sensitive to the magnetic diffusivity in the corona: a quadrupolar oscillatory magnetic field dominates for $`\alpha _0<0`$ if the disc is surrounded by vacuum (Stepinski & Levy 1988). For $`\alpha _0<0`$, the critical value of $`|\alpha _0|`$ for dynamo action is about 0.2, which is a factor of about 50 larger than without outflows. Our dynamo is then only less than twice supercritical. A survey of the dynamo regimes for similar models is given by Bardou et al. (2001). The initial magnetic field (poloidal, mixed parity) is weak \[$`p_{\mathrm{mag}}\equiv B^2/(2\mu _0)\approx 10^{-5}`$; cf. Fig. 11 for comparison with the gas pressure\], but the dynamo soon amplifies the field in the disc to $`p_{\mathrm{mag},\mathrm{tor}}\approx 10`$, and then supplies it to the corona. As a result, the corona is filled with a predominantly azimuthal field with $`p_{\mathrm{mag},\mathrm{tor}}/p\approx 100`$ at larger radii; see Fig. 11. We note, however, that the flow in the corona varies significantly in both space and time (see the movie at http://www.nordita.dk/~brandenb/movies/outflow). Magnetic pressure due to the toroidal field $`B_\phi `$, $`p_{\mathrm{mag},\mathrm{tor}}`$, exceeds the gas pressure in the corona outside the inner cone and confines the outflow to the conical shell. The main mechanisms producing $`B_\phi `$ in the corona are advection by the wind and magnetic buoyancy (cf. Moss et al. 1999). Magnetic diffusion and stretching of the poloidal field by vertical shear play a relatively unimportant rôle. The field in the inner parts of the disc is dominated by the toroidal component, $`|B_\phi /B_z|\approx 3`$ at $`\varpi \approx 0.5`$; this ratio is larger in the corona at all $`\varpi `$. However, as shown in Fig. 11, this ratio is closer to unity at larger radii in the disc. As expected, $`\alpha _0>0`$ results in mostly quadrupolar fields (e.g., Ruzmaikin et al. 1988). As shown in Fig. 12, the magnetic field in the corona is now mainly restricted to a narrow conical shell that crosses $`z=\pm 1`$ at $`\varpi \approx 0.6`$. Comparing this figure with the results obtained with dipolar magnetic fields (Fig. 4), one sees that the quadrupolar field has a weaker effect on the outflow than the dipolar field; the conical shell is less pronounced. However, the structures within the inner cone are qualitatively similar to each other. The magnitude and distribution of $`\alpha `$ in Eq. (16) only weakly affect the magnetic field properties as long as the dynamo is saturated. For a saturated dynamo, the field distribution in the dynamo region ($`0.2<\varpi <1.5`$, $`|z|<0.15`$) roughly follows the equipartition field, $`B\approx (\varrho \mu _0v_0^2)^{1/2}`$ with $`v_0=c_\mathrm{s}`$. In other words, nonlinear states of disc dynamos are almost insensitive to the detailed properties of $`\alpha `$ (e.g., Beck et al. 1996; Ruzmaikin et al. 1988). A discussion of disc dynamos with outflows, motivated by the present model, can be found in Bardou et al. (2001). It is shown there that the value of the magnetic diffusivity in the corona does not affect the dynamo solutions strongly. Moreover, the outflow is fast enough to make the magnetic Reynolds number in the corona larger than unity, which implies that ideal integrals of motion are very nearly constant along field lines; see Sect. 3.6. The most important property is the sign of $`\alpha `$, as it controls the global symmetry of the magnetic field.
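To make the saturation estimate concrete, the following minimal sketch (our own illustration, not code from the simulations) evaluates the equipartition field in the dimensionless code units used throughout; the sample values of $`\varrho `$ and $`c_\mathrm{s}`$ and the choice $`\mu _0=1`$ are assumptions.

```python
import numpy as np

def equipartition_field(rho, cs, mu0=1.0):
    """Saturation estimate B ~ (rho * mu0 * v0^2)^(1/2) with v0 = c_s.

    rho and cs are scalars or arrays of density and sound speed in the
    dimensionless code units of the model; mu0 = 1 in such units.
    """
    return np.sqrt(rho * mu0 * cs**2)

# Illustrative disc-like values in code units (assumed, not from the paper):
print(equipartition_field(1.0, 0.1))  # ~0.1 in code units
```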
### 3.4 Mass and energy loss
The mass injection and loss rates due to the source, sink and wind are defined as $$\dot{M}_{\mathrm{source}}=\int q_\varrho ^{\mathrm{disc}}\,dV,\qquad \dot{M}_{\mathrm{sink}}=\int q_\varrho ^{\mathrm{star}}\,dV,$$ (38) and $$\dot{M}_{\mathrm{wind}}=\oint \varrho u\cdot dS,$$ (39) respectively, where the integrals are taken over the full computational domain or its boundary. About $`1/3`$ of the mass released goes into the wind and the rest is accreted by the sink in the model with $`\tau _{\mathrm{star}}=0.01`$ and $`\beta =0.1`$ of Fig. 7. Reducing $`\tau _{\mathrm{star}}`$ by a factor of $`2`$ (as in the model of Fig. 8) changes the global accretion parameters only by a negligible amount ($`\lesssim 10\%`$). The mass loss rate in the wind fluctuates on a time scale of 5 time units, but remains constant on average at about $`\dot{M}_{\mathrm{wind}}\approx 3`$, corresponding to $`6\times 10^{-7}M_{\odot }\,\mathrm{yr}^{-1}`$, in the models of Figs. 7 and 8. The mass in the disc, $`M_{\mathrm{disc}}`$, remains roughly constant. The rate at which mass needs to be replenished in the disc, $`\dot{M}_{\mathrm{source}}/M_{\mathrm{disc}}`$, is about 0.4. This rate is not controlled by the imposed response rate of the mass source, $`\tau _{\mathrm{disc}}^{-1}`$, which is 25 times larger. So the mass source adjusts itself to the disc evolution and does not directly control the outflow. We show in Fig. 13 trajectories that start in and around the mass injection region. The spatial distribution of the mass replenishment rate $`q_\varrho ^{\mathrm{disc}}`$ shown in Fig. 13 indicates that the mass is mainly injected close to the mass sink, and $`q_\varrho ^{\mathrm{disc}}`$ remains moderate in the outer parts of the disc. (Note that the reduced effect of the mass sink in the magnetized flow is due to magnetic shielding rather than to mass replenishment near the sink; see Sect. 3.2.) The angular structure of the outflow can be characterized by the following quantities, calculated at a particular spherical radius, $`r=8`$, for the model of Fig. 6: the azimuthally integrated normalized radial mass flux density, $`\dot{M}(\vartheta )/M_{\mathrm{disc}}`$, where $$\dot{M}(\vartheta )=2\pi r^2\varrho u_r\mathrm{sin}\vartheta ,\qquad M_{\mathrm{disc}}=\int _{\mathrm{disc}}\varrho \,dV,$$ (40) the azimuthally integrated normalized radial angular momentum flux density, $`\dot{J}(\vartheta )/J_{\mathrm{disc}}`$, where $$\dot{J}(\vartheta )=2\pi r^2\varrho \varpi u_\phi u_r\mathrm{sin}\vartheta ,\qquad J_{\mathrm{disc}}=\int _{\mathrm{disc}}\varrho \varpi u_\phi \,dV,$$ (41) the azimuthally integrated normalized radial magnetic energy flux (Poynting flux) density, $`\dot{E}_\mathrm{M}(\vartheta )/E_\mathrm{M}`$, where $$\dot{E}_\mathrm{M}(\vartheta )=2\pi r^2\frac{(E\times B)_r}{\mu _0}\mathrm{sin}\vartheta ,\qquad E_\mathrm{M}=\int _{\mathrm{disc}}\frac{B^2}{2\mu _0}\,dV,$$ (42) and the azimuthally integrated normalized radial kinetic energy flux density, $`\dot{E}_\mathrm{K}(\vartheta )/E_\mathrm{K}`$, where $$\dot{E}_\mathrm{K}(\vartheta )=2\pi r^2\left(\frac{1}{2}\varrho u^2u_r\right)\mathrm{sin}\vartheta ,\qquad E_\mathrm{K}=\int _{\mathrm{disc}}\frac{1}{2}\varrho u^2\,dV.$$ (43) Here, $`M_{\mathrm{disc}}`$, $`J_{\mathrm{disc}}`$, $`E_\mathrm{M}`$ and $`E_\mathrm{K}`$ are the mass, angular momentum, and magnetic and kinetic energies in the disc. A polar diagram showing these distributions is presented in Fig. 14 for the model of Fig. 6. Note that this is a model without central mass sink, where the flow in the hot, dense cone around the axis is fast.
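A minimal sketch of how the azimuthally integrated flux densities, Eqs (40)-(43), and the luminosities discussed below can be evaluated on a spherical shell extracted from an axisymmetric snapshot. The array names, the shell interpolation and the $`\mu _0=1`$ code units are assumptions of this illustration, not the notation of the actual simulation code.

```python
import numpy as np

def angular_flux_densities(r, theta, rho, u_r, u_phi, u2, ExB_r, mu0=1.0):
    """Azimuthally integrated radial flux densities, Eqs (40)-(43),
    on a spherical shell of radius r.

    theta:  1-D array of polar angles on the shell
    rho, u_r, u_phi, u2, ExB_r:  1-D arrays of density, radial and
        azimuthal velocity, u^2, and the radial component of E x B,
        interpolated onto the shell (assumed layout).
    """
    geom  = 2.0 * np.pi * r**2 * np.sin(theta)
    varpi = r * np.sin(theta)                    # cylindrical radius
    Mdot  = geom * rho * u_r                     # Eq. (40)
    Jdot  = geom * rho * varpi * u_phi * u_r     # Eq. (41)
    EdotM = geom * ExB_r / mu0                   # Eq. (42)
    EdotK = geom * 0.5 * rho * u2 * u_r          # Eq. (43)
    return Mdot, Jdot, EdotM, EdotK

def luminosities(theta, EdotM, EdotK):
    """Surface-integrated losses L_M and L_K as integrals over theta."""
    return np.trapz(EdotM, theta), np.trapz(EdotK, theta)
```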
The fast flow in the hot, dense cone carries most of the mass and kinetic energy. A significant part of the angular momentum is carried away in the disc plane, whilst the magnetic field is ejected at intermediate angles, where the conical shell is located. The radial kinetic and magnetic energy flux densities, integrated over the whole sphere, are $`L_\mathrm{K}\equiv \int _0^\pi \dot{E}_\mathrm{K}(\vartheta )\,d\vartheta \approx 0.54\dot{M}_{\mathrm{wind}}c_{*}^2`$ and $`L_\mathrm{M}\equiv \int _0^\pi \dot{E}_\mathrm{M}(\vartheta )\,d\vartheta \approx 0.03\dot{M}_{\mathrm{wind}}c_{*}^2`$, respectively, where $`c_{*}\approx 2.8`$ is the fast magnetosonic speed (with respect to the poloidal field) at the critical surface (where $`u_z=c_{*}`$) on the axis. Thus, $`\dot{M}_{\mathrm{wind}}c_{*}^2`$ can be taken as a good indicator of the kinetic energy loss, and the magnetic energy loss into the exterior is about 3% of this value. These surface-integrated flux densities (or luminosities) are, as expected, roughly independent of the distance from the central object.
### 3.5 Mechanisms of wind acceleration
The magnetized outflows in our models with central mass sink have a well-pronounced structure, with a fast, cool and low-density flow in a conical shell, and a slower, hotter and denser flow near the axis and in the outer parts of the domain. Without central mass sink, there is a high speed, hot and dense cone around the axis. The magnetic field geometry (e.g., Fig. 7) is such that, for $`\varpi >0.1`$, the angle between the disc surface and the field lines is less than $`60^{\circ}`$, reaching $`20^{\circ}`$ at $`\varpi \approx 1`$–$`1.5`$, which is favourable for magneto-centrifugal acceleration (Blandford & Payne 1982; Campbell 1999, 2000, 2001). However, the Alfvén surface is so close to the disc surface in the outer parts of the disc that acceleration there is mainly due to the pressure gradient. The situation is, however, different in the conical shell containing the fast wind. As can be seen from Fig. 15, the Alfvén surface is far away from the disc in that region and, on a given field line, the Alfvén radius is at least a few times larger than the radius of the footpoint in the disc. This is also seen in simulations of the magneto-centrifugally driven jets of Krasnopolsky et al. (1999); see their Fig. 4. A lever arm of about $`3`$ is sufficient for magneto-centrifugal driving to dominate. As can also be seen from Fig. 10, the flow at the polar angle $`\vartheta =60^{\circ}`$ is mainly accelerated by the pressure gradient near the disc surface (where the Alfvén surface is close to the disc surface). However, acceleration remains efficient out to at least $`r=1`$ within the conical shell at $`\vartheta \approx 30^{\circ}`$. This can be seen in the upper and middle panels of Fig. 10 (note that the conical shell is thinner and at a smaller $`\vartheta `$ in the model with larger entropy contrast, and so it cannot be seen in this figure, cf. Fig. 9). These facts strongly indicate that magneto-centrifugal acceleration dominates within the conical shell. Another indicator of magneto-centrifugal acceleration in the conical shell is the distribution of angular momentum (see Fig. 15), which is significantly larger in the outer parts of the conical shell than in the disc, suggesting that the magnetic field plays an important rôle in the flow dynamics. We show in Fig. 16 the ratio of the ‘magneto-centrifugal force’ to the pressure gradient, $`|F_{\mathrm{pol}}^{(\mathrm{mc})}|/|F_{\mathrm{pol}}^{(\mathrm{p})}|`$, where the subscript ‘pol’ denotes the poloidal components.
Here, the ‘magneto-centrifugal force’ includes all terms in the poloidal equation of motion except for the pressure gradient (we also ignore the viscous term and the mass production term, the latter being restricted to the disc), $$F^{(\mathrm{mc})}=\varrho \left(\mathrm{\Omega }^2\varpi -\nabla \mathrm{\Phi }\right)+J\times B,$$ (44) and $`F^{(\mathrm{p})}=-\nabla p`$. The large values of the ratio in the conical shell confirm that magneto-centrifugal driving is dominant there. On the other hand, the pressure gradient is strong enough in the outer parts of the disc to shift the Alfvén surface close to the disc surface, leading to pressure driving. This is also discussed by Casse & Ferreira (2000b), who point out that, although the criterion of Blandford & Payne (1982) is fulfilled, thermal effects can be strong enough to lead to pressure driving. According to Ferreira (1997), a decrease of the total poloidal current $`I_{\mathrm{pol}}=2\pi \varpi B_\phi /\mu _0`$ along a field line is another indicator of magneto-centrifugal acceleration. We have compared the poloidal current $`I_{\mathrm{pol}}^{(\mathrm{top})}`$ at the Alfvén point, or at the top of our box if the Alfvén point is outside the box, with the poloidal current $`I_{\mathrm{pol}}^{(\mathrm{surf})}`$ at the disc surface, and find that outside the conical shell this ratio is typically $`\approx 0.8`$, while along the field line that leaves the box at $`(\varpi ,z)=(0.6,1)`$ we get $`I_{\mathrm{pol}}^{(\mathrm{top})}/I_{\mathrm{pol}}^{(\mathrm{surf})}\approx 0.18`$, i.e. a strong reduction, which confirms that magneto-centrifugal acceleration occurs inside the conical shell. We note, however, that the changing sign of $`B_\phi `$, and therefore of $`I_{\mathrm{pol}}`$, makes this analysis inapplicable in places, and the distribution of angular momentum (Fig. 15) gives a much clearer picture. As further evidence of a significant contribution from magneto-centrifugal driving in the conical shell, we show in Fig. 17 that the magnetic field is close to a force-free configuration in regions where the angular momentum is enhanced, i.e. in the conical shell and in the corona surrounding the outer parts of the disc. These are the regions where the Lorentz force contributes significantly to the flow dynamics, so that the magnetic field performs work and therefore relaxes to a force-free configuration. The radial variation in the sign of the current helicity $`J\cdot B`$ is due to a variation in the sign of the azimuthal magnetic field and of the current density. Such changes originate in the disc, where they imprint a corresponding variation in the sign of the angular momentum constant, see Eq. (48). These variations are then carried along magnetic lines into the corona. The locations where the azimuthal field reverses are still relatively close to the axis, and there the azimuthal field is weak relative to the poloidal field, compared to regions further away from the axis. Pressure driving is more important if the entropy contrast between the disc and the corona is larger (i.e. $`\beta `$ is smaller): the white conical shell in Fig. 16, indicative of stronger magneto-centrifugal driving, shifts to larger heights for $`\beta =0.02`$, as shown in Fig. 18. We note, however, that there are periods when magneto-centrifugal driving is dominant even in this model with higher entropy contrast, but pressure driving dominates in the time averaged picture (at least within our computational domain). A simple numerical version of the current-reduction diagnostic is sketched below.
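As an illustration of the poloidal-current criterion just described, the following sketch (with an assumed array layout and $`\mu _0=1`$ code units) evaluates $`I_{\mathrm{pol}}`$ along a traced field line; it is not the diagnostic code used for the figures.

```python
import numpy as np

MU0 = 1.0  # vacuum permeability in dimensionless code units (assumption)

def poloidal_current(varpi, B_phi):
    """Total poloidal current I_pol = 2*pi*varpi*B_phi/mu0."""
    return 2.0 * np.pi * varpi * B_phi / MU0

def current_reduction(varpi_line, Bphi_line):
    """Ratio I_pol(top)/I_pol(surface) along one traced field line.

    varpi_line, Bphi_line: samples of cylindrical radius and toroidal
    field along the line, ordered from the disc surface to the top of
    the box (or to the Alfven point).  A ratio well below 1 indicates
    magneto-centrifugal acceleration; a changing sign of B_phi makes
    the diagnostic unreliable, as noted in the text.
    """
    I_surf = poloidal_current(varpi_line[0], Bphi_line[0])
    I_top = poloidal_current(varpi_line[-1], Bphi_line[-1])
    return I_top / I_surf
```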
We show in Fig. 19 the variation of several quantities along a magnetic field line that has its footpoint at the disc surface at $`(\varpi ,z)=(0.17,0.15)`$ and lies around the conical shell. Since this is where magneto-centrifugal driving is still dominant, it is useful to compare Fig. 19 with Fig. 3 of Ouyed et al. (1997), where a well-collimated magneto-centrifugal jet is studied. Since our outflow is collimated only weakly within our computational domain, the quantities are plotted against height $`z`$, rather than against $`z/\varpi `$ as in Ouyed et al. (1997) ($`z/\varpi `$ is nearly constant along a field line for weakly collimated flows, whereas $`z/\varpi \propto z`$ along a magnetic line for well-collimated flows). The results are qualitatively similar, with the main difference that the fast magnetosonic surface in our model almost coincides with the Alfvén surface in the region around the conical shell where the outflow is highly supersonic. Since we include finite diffusivity, the curves in Fig. 19 are smoother than in Ouyed et al. (1997), who consider ideal MHD. A peculiar feature of the conical shell is that the flow at $`z\approx 1`$ is sub-Alfvénic but strongly supersonic. The fast magnetosonic surface is where the poloidal velocity $`v_{\mathrm{pol}}`$ equals the fast magnetosonic speed for the direction parallel to the field lines, $$v_{\mathrm{pol}}^2=\frac{1}{2}\left(c_\mathrm{s}^2+v_\mathrm{A}^2+\sqrt{(c_\mathrm{s}^2+v_\mathrm{A}^2)^2-4c_\mathrm{s}^2v_{\mathrm{A},\mathrm{pol}}^2}\right),$$ (45) with $`v_{\mathrm{A},\mathrm{pol}}`$ the Alfvén speed from the poloidal magnetic field. This surface has the same overall shape as the fast magnetosonic surface with respect to the poloidal field, albeit in some cases it has a somewhat larger opening angle around the conical shell and is located further away from the disc in regions where the toroidal Alfvén speed is enhanced.
### 3.6 Lagrangian invariants
Axisymmetric ideal magnetized outflows are governed by five Lagrangian invariants: the flux ratio, $$k=\varrho u_z/B_z=\varrho u_\varpi /B_\varpi ,$$ (46) the angular velocity of magnetic field lines, $$\stackrel{~}{\mathrm{\Omega }}=\varpi ^{-1}(u_\phi -kB_\phi /\varrho ),$$ (47) the angular momentum constant, $$\mathrm{\ell }=\varpi u_\phi -\varpi B_\phi /(\mu _0k),$$ (48) the Bernoulli constant, $$e=\frac{1}{2}u^2+h+\mathrm{\Phi }-\varpi \stackrel{~}{\mathrm{\Omega }}B_\phi /(\mu _0k),$$ (49) and the specific entropy $`s`$ (which is a prescribed function of position in our model). In the steady state, these five quantities are conserved along field lines, but vary from one magnetic field line to another (e.g., Pelletier & Pudritz 1992; Mestel 1999), i.e. they depend on the magnetic flux within a magnetic flux surface, $`2\pi a`$, where $`a=\varpi A_\phi `$ is the flux function whose contours represent poloidal field lines. We show in Fig. 20 scatter plots of $`k(a)`$, $`\stackrel{~}{\mathrm{\Omega }}(a)`$, $`\mathrm{\ell }(a)`$, and $`e(a)`$ for the model of Fig. 6. Points from the region $`0.2\leq z\leq 8`$ collapse onto a single line, confirming that the flow in the corona is nearly ideal. (If we restrict ourselves to $`0.2\leq z\leq 8`$, about $`90\%`$ of the points for $`\stackrel{~}{\mathrm{\Omega }}(a)`$ are within $`\pm 10\%`$ of the line representing $`z=4`$; for the other three invariants, this percentage is at least $`97\%`$. In a larger domain, $`0.2\leq z\leq 30`$, these percentages drop to about $`80\%`$ for $`k(a)`$ and $`60\%`$ for $`\stackrel{~}{\mathrm{\Omega }}(a)`$; for $`\mathrm{\ell }(a)`$ and $`e(a)`$, however, they remain greater than $`90\%`$.) This is not surprising, since the magnetic Reynolds number is much greater than unity in the corona for the parameters adopted here. For $`8\leq z\leq 30`$, there are departures from perfect MHD; in particular, the angular velocity of magnetic field lines, $`\stackrel{~}{\mathrm{\Omega }}`$, is somewhat decreased in the upper parts of the domain (indicated by the vertical scatter in the data points). This is plausibly due to the finite magnetic diffusivity, which still allows matter to slightly lag behind the magnetic field. As this lag accumulates along a stream line, the departures increase with height $`z`$. Since this is a ‘secular’ effect only, and accumulates with height, we locally still have little variation of $`k`$ and $`\stackrel{~}{\mathrm{\Omega }}`$, which explains why magneto-centrifugal acceleration can operate quite efficiently. The corona in our model has (turbulent) magnetic diffusivity comparable to that in the disc. This is consistent with, e.g., Ouyed & Pudritz (1999), who argue that turbulence should be significant in the coronae of accretion discs. Nevertheless, it turns out that ideal MHD is a reasonable approximation for the corona (see Fig. 20), but not for the disc. Therefore, magnetic diffusivity is physically significant in the disc and insignificant in the corona (due to the different velocity and length scales involved), as in most models of disc outflows (see Ferreira & Pelletier 1995 for a discussion). Thus, our model confirms this widely adopted idealization. A sketch of how such invariants can be extracted from a snapshot is given below.
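The following sketch shows how the invariants (46)-(49) can be evaluated on an axisymmetric snapshot; the array names, the enthalpy and potential inputs, and the $`\mu _0=1`$ code units are assumptions of this illustration. Plotting each returned field against the flux function $`a=\varpi A_\phi `$ should collapse the coronal points onto single curves, as in Fig. 20.

```python
MU0 = 1.0  # vacuum permeability in code units (assumption)

def lagrangian_invariants(varpi, u_varpi, u_z, u_phi, B_z, B_phi,
                          rho, h, Phi):
    """Evaluate Eqs (46)-(49) on 2-D arrays over the (varpi, z) grid.

    h is the specific enthalpy and Phi the gravitational potential;
    all input names are assumptions of this sketch.
    """
    k = rho * u_z / B_z                 # Eq. (46); = rho*u_varpi/B_varpi
    Omega_t = (u_phi - k * B_phi / rho) / varpi            # Eq. (47)
    ell = varpi * u_phi - varpi * B_phi / (MU0 * k)        # Eq. (48)
    u2 = u_varpi**2 + u_z**2 + u_phi**2
    e = 0.5 * u2 + h + Phi - varpi * Omega_t * B_phi / (MU0 * k)  # Eq. (49)
    return k, Omega_t, ell, e
```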
## 4 Toward more realistic models
The models presented so far have some clear deficiencies when compared with the characteristic features of protostellar discs. The first deficiency concerns the relative amount of matter accreted onto the central object compared with what goes into the wind. A typical figure in the models presented above was that as much as 30% of all the matter released in the disc goes into the wind, and only about 70% is accreted by the central object. Earlier estimates (e.g., Pelletier & Pudritz 1992) indicate that only about 10% of the matter joins the wind. Another possible deficiency is the fact that in the models presented so far the low-temperature region extends all the way to the stellar surface, whilst in reality the cool disc breaks up close to the star because of the stellar magnetic field. Finally, the overall temperature of the disc is generally too high compared with real protostellar discs, which are known to have typical temperatures of a few thousand Kelvin. The aim of this section is to assess the significance of various improvements in the model related to the above mentioned characteristics. We consider the effect of each of them separately, improving the model step by step.
### 4.1 A larger sink
As discussed in Sects. 3.1 and 3.2, the properties of the outflow, especially of the nonmagnetic ones, are sensitive to the parameters of the central sink. It is clear that a very efficient sink would completely inhibit the outflow near the axis. On the other hand, a magnetosphere of the central object can affect the sink efficiency by channelling the flow along the stellar magnetic field (Shu et al. 1994; see also Fendt & Elstner 1999, 2000). Simulating a magnetosphere turned out to be a difficult task, and some preliminary attempts proved unsatisfactory.
Instead, we have considered a model with a geometrically larger sink, illustrated in Fig. 21. Here, $`r_0=0.25`$ in Eqs (7) and (25), which is five times larger than the sink used in our reference model of Fig. 7. The relaxation time of the sink in Eq. (8), $`\tau _{\mathrm{star}}`$, was rescaled in proportion to the free-fall time at $`r_0`$ as $`\tau _{\mathrm{star}}\propto r_0/v_{\mathrm{ff}}\propto r_0^{3/2}`$, which yields $`\tau _{\mathrm{star}}=0.112`$ (a quick numerical check of this rescaling is sketched below). The resulting mass loss rate into the wind is $`\dot{M}_{\mathrm{wind}}\approx 0.8`$, which corresponds to $`1.6\times 10^{-7}M_{\odot }\,\mathrm{yr}^{-1}`$ in dimensional units. Although this is indeed smaller than the value for the reference model ($`6\times 10^{-7}M_{\odot }\,\mathrm{yr}^{-1}`$), the overall mass released from the disc is also smaller, resulting in a larger fraction, about 40%, of matter that goes into the wind; only about 60% is accreted by the central object. As we show below, however, larger accretion fractions can more easily be achieved by making the disc cooler. We conclude that even a sink almost twice as large as the disc half-thickness does not destroy the outflow outside the inner cone. However, the outflow along the axis is nearly completely suppressed.
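A minimal arithmetic check of the free-fall rescaling quoted above, assuming the reference values $`r_0=0.05`$ (one fifth of 0.25, as stated) and $`\tau _{\mathrm{star}}=0.01`$:

```python
def tau_star(r0, r0_ref=0.05, tau_ref=0.01):
    """Rescale the sink relaxation time as tau ~ r0/v_ff ~ r0^(3/2)."""
    return tau_ref * (r0 / r0_ref) ** 1.5

print(round(tau_star(0.25), 3))  # 0.112, the larger sink of this section
print(round(tau_star(0.15), 3))  # 0.052, the sink used in Sect. 4.2 below
```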
### 4.2 Introducing a gap between the disc and the sink
In real accretion discs around protostars, the disc terminates at some distance away from the star. It would therefore be unrealistic to let the disc extend all the way to the centre. Dynamo action in all our models is restricted to $`\varpi \geq 0.2`$, and now we introduce an inner disc boundary for the region of lowered entropy as well. In Fig. 22 we present such a model, where the $`\xi _{\mathrm{disc}}(r)`$ profile is terminated at $`\varpi =0.25`$. This inner disc radius then affects not only the region of lowered entropy, but also the distributions of $`\alpha `$, $`\eta _\mathrm{t}`$, $`\nu _\mathrm{t}`$ and $`q_\varrho ^{\mathrm{disc}}`$. The radius of the mass sink was chosen to be $`r_0=0.15`$, i.e. equal to the disc semi-thickness. The correspondingly adjusted value of $`\tau _{\mathrm{star}}`$ is 0.052. The entropy in the sink is kept as low as that in the disc. As can be seen from Fig. 22, the gap in the disc translates directly into a corresponding gap in the resulting outflow pattern. At the same time, however, only about 20% of the released disc material is accreted by the sink and the rest is ejected into the wind. A small fraction of the mass that accretes toward the sink reaches the region near the axis at some distance away from the origin and can then still be ejected, as in the models without the gap. Note that our figures (e.g., Fig. 22) show the velocity rather than the azimuthally integrated mass flux density (cf. Fig. 13); the relative magnitude of the latter is much smaller, and so the mass flux from the sink region is not significant.
### 4.3 A cooler disc
We now turn to the discussion of the case with a cooler disc. Numerical constraints prevent us from making the entropy gradient between disc and corona too steep. Nevertheless, we were able to reduce the value of $`\beta `$ down to 0.005, which is 20 times smaller than the value used for the reference model in Fig. 7. The $`\beta `$ value for the sink was reduced to 0.02; smaller values proved numerically difficult. The value of $`\beta =0.005`$ results in a disc temperature of about $`3\times 10^3`$ K in the outer parts and $`1.5\times 10^4`$ K in the inner parts. As in Sect. 4.2, the disc terminates at an inner radius of $`\varpi =0.25`$. At $`\varpi \approx 0.5`$, the density in dimensional units is about $`10^{-9}\,\mathrm{g}\,\mathrm{cm}^{-3}`$, which is also the order of magnitude found for protostellar discs. The resulting magnetic field and outflow geometries are shown in Figs. 23 and 24. A characteristic feature of models with a cooler disc is a more vigorous temporal behaviour, with prolonged episodes of reduced overall magnetic activity in the disc, during which the Alfvénic surface is closer to the disc surface, and phases of enhanced magnetic activity, where the Alfvénic surface has moved further away. Figures 23 and 24 are representative of these two states. It is notable that the structured outflow of the type seen in the reference model occurs in states with strong magnetic field and disappears during periods with weak magnetic field. Another interesting property of the cooler discs is that now a smaller fraction of matter goes into the wind (10–20%), and 80–90% is accreted by the central object, in better agreement with the estimates of Pelletier & Pudritz (1992). We note in passing that channel flow solutions typical of two-dimensional simulations of the magneto-rotational instability (Hawley & Balbus 1991) are generally absent in the simulations presented here. This is because the magnetic field saturates at a level close to equipartition between magnetic and thermal energies. The vertical wavelength of the instability can then exceed the half-thickness of the disc. In some of our simulations, indications of channel flow behaviour can still be seen. An example is Fig. 23, where the magnetic energy is weak enough that the magneto-rotational instability is not yet suppressed. According to the Shakura–Sunyaev prescription, turbulent viscosity and magnetic diffusivity are reduced in a cooler disc because of the smaller sound speed (cf. Eq. (12)). Since in the corona the dominant contributions to the artificial advection viscosity $`\nu _{\mathrm{adv}}`$ come from the poloidal velocity and poloidal Alfvén speed (and not from the sound speed), $`c_\nu ^{\mathrm{adv}}`$ has to be reduced. Here we choose $`\alpha _{\mathrm{SS}}=0.004`$ and reduce $`c_\nu ^{\mathrm{adv}}`$ by a factor of $`10`$ to $`c_\nu ^{\mathrm{adv}}=0.002`$. Since we do not explicitly parameterize the turbulent magnetic diffusivity $`\eta _\mathrm{t}`$ with the sound speed (cf. Eq. (17)), $`\eta _{\mathrm{t0}}`$ also needs to be decreased, together with the background diffusivity $`\eta _0`$. We choose here values that are $`25`$ times smaller than in the previous runs, i.e. $`\eta _{\mathrm{t0}}=4\times 10^{-5}`$ and $`\eta _0=2\times 10^{-5}`$, which corresponds to $`\alpha _{\mathrm{SS}}^{(\eta )}`$ ranging between $`0.001`$ and $`0.007`$. With this choice of coefficients, the total viscosity and magnetic diffusivity in the disc have comparable values of a few times $`10^{-5}`$. The effect of reduced viscosity and magnetic diffusivity is shown in Fig. 25. Features characteristic of the channel flow solution are now present, because the vertical wavelength of the magneto-rotational instability is here less than the half-thickness of the disc.
## 5 Discussion
If the disc dynamo is sufficiently strong, our model develops a clearly structured outflow which is fast, cool and rarefied within a conical shell near the rotation axis, where most of the angular momentum and magnetic energy is carried, and is slower, hotter and denser in the region around the axis as well as in the outer parts of the domain.
The slower outflow is driven mostly by the entropy contrast between the disc and the corona, but the faster wind within the conical shell is driven mostly magneto-centrifugally. Without a central mass sink, the flow near the axis is faster, but otherwise the flow structure is similar to that with the sink. The half-opening angle of the cone with hot, dense gas around the axis is about $`20^{\circ}`$–$`30^{\circ}`$; this quantity changes somewhat with the model parameters but remains close to that range. The outflow in our models does not show any signs of collimation. It should be noted, however, that not all outflows from protostellar discs are actually collimated, especially not at such small distances from the source. An example is the Becklin–Neugebauer/Kleinmann–Low (BN/KL) region in the Orion Nebula (Greenhill et al. 1998), which has a conical outflow with a half-opening angle of $`30^{\circ}`$ out to a distance of 25–60 AU from its origin. Therefore, collimation within a few AU (the size of our computational domain) is expected to be only weak. The region around the fast, cool and rarefied conical shell seen in Fig. 7 is similar to the flow structure reported by Krasnopolsky et al. (1999); see their Fig. 1. In their model, however, the thin axial jet was caused by an explicit injection of matter from the inner parts of the disc, which was treated as a boundary. In our reference model the fast outflow is sub-Alfvénic because of the presence of a relatively strong poloidal field, whereas in Krasnopolsky et al. (1999) the outflow becomes super-Alfvénic at smaller heights. Outside the conical shell the outflow is mainly pressure driven, even though the criterion of Blandford & Payne (1982) is fulfilled. However, as Casse & Ferreira (2000b) pointed out, pressure driven outflows might dominate over centrifugally driven outflows if thermal effects are strong enough. In our model, matter is replenished in the resolved disc in a self-regulatory manner, where and when needed. We believe that this is an improvement in comparison to the models of Ouyed & Pudritz (1997a, 1997b, 1999) and Ustyugova et al. (1995), where mass inflow is prescribed as a boundary condition at the base of the corona. If we put $`q_\varrho ^{\mathrm{disc}}=0`$ in Eqs (1) and (9), the disc mass soon drops to low values and the outflow ceases. This is qualitatively the same behaviour as in the models of, e.g., Kudoh et al. (1998). We should stress the importance of finite magnetic diffusivity in the disc: although the poloidal velocity and poloidal magnetic field are well aligned in most of the corona, dynamo action in the disc is only possible in the presence of finite magnetic diffusivity, and the flow can enter the corona only by crossing magnetic field lines in the disc. An outflow occurs in the presence of both dipolar and quadrupolar type magnetic fields, even though fields with dipolar symmetry seem to be more efficient in magneto-centrifugal driving (cf. von Rekowski et al. 2000). The effect of the magnetic parity on the outflow structure deserves further analysis. The dynamo active accretion disc drives a significant outward Poynting flux in our model. Assuming that this applies equally to protostellar and AGN discs, this result could be important for understanding the origin of seed magnetic fields in galaxies and galaxy clusters; see Jafelice & Opher (1992) and Brandenburg (2000) for a discussion.
We note, however, that the pressure of the intracluster gas may prevent the magnetized plasma from active galactic nuclei from spreading over a significant volume (Goldshmidt & Rephaeli 1994). Our model can be improved in several respects. In many systems, both dynamo-generated and external magnetic fields may be present, so a more comprehensive model should include both. We used an $`\alpha ^2\mathrm{\Omega }`$ dynamo to parameterize magnetic field generation in the disc because we have restricted ourselves to axisymmetric models. As argued by Brandenburg (1998), dynamo action of turbulence which is driven by the magneto-rotational instability can be roughly described as an $`\alpha ^2\mathrm{\Omega }`$ dynamo. But this parameterization can be relaxed in three-dimensional simulations, where one may expect that turbulence will be generated to drive dynamo action. Such simulations will be discussed elsewhere (von Rekowski et al. 2002). Since our model includes angular momentum transport by both viscous and magnetic stresses, it is natural that the accreted matter is eventually diverted into an outflow near the axis; this is further facilitated by our prescribed entropy gradient at the disc surface. We believe that this picture is physically well motivated (Bell & Lucek 1995), with the only reservation that we do not incorporate the (more complicated) physics of coronal heating and disc cooling, but rather parameterize it with a fixed entropy contrast. We include a mass sink at the centre, which could have prevented the outflow, and indeed the sink strongly affects nonmagnetized outflows. We have shown, however, that the magnetic field can efficiently shield the sink and thereby support a vigorous disc wind. The assumption of a prescribed entropy distribution is a useful tool to control the size of the disc and to parameterize the heating of the disc corona. However, it should be relaxed as soon as the disc physics can be described more fully. The energy equation, possibly with radiative transfer, should be included. This would lead to a self-consistent entropy distribution and would admit the deposition of viscous and Ohmic heat in the outflow. In the simulations by von Rekowski et al. (2002), entropy is evolved. We believe that a mass source is a necessary feature of any model of this kind if one wishes to obtain a steady state. In the present paper the mass source is distributed throughout the whole disc, to represent replenishment of matter from the midplane of the disc. Alternatively, a mass source could be located near to or on the domain boundary.
###### Acknowledgements.
We are grateful to C.G. Campbell, R. Henriksen, J. Heyvaerts, R. Ouyed and R.E. Pudritz for fruitful discussions. We acknowledge many useful comments of the anonymous referee. This work was partially supported by PPARC (Grants PPA/G/S/1997/00284 and 2000/00528) and the Leverhulme Trust (Grant F/125/AL). Use of the PPARC supported supercomputers in St Andrews and Leicester is acknowledged.
## 1 Introduction
Around ten years ago, Korotkii and Obukhov presented a class of rotating and expanding, Gödel type cosmological metrics, showing that they respect the observed isotropy of the cosmic background radiation and do not lead to parallax effects. Furthermore, for some values of the metric parameters there are no closed time-like curves, and then these metrics do not suffer from the causal problems characteristic of the original Gödel metric. In this paper we will show that, due to conservation of angular momentum, the metric of Korotkii and Obukhov leads, in the limit of large times, to an anisotropic metric that reduces to the open metric of Friedmann in the nearby approximation. For small times, we present an approximate solution valid in the limit of small rotation, which presents an isotropic distribution of pressures and the same evolution law as in the corresponding isotropic case. Due to the rotation, the expressions for the energy density and pressure are affected only by higher order corrections relative to the standard, isotropic expressions, which guarantees that the anisotropy does not affect, except by higher order corrections, the processes occurring during early times. For the epoch dominated by dust matter, the corresponding expanding solution of Einstein equations is similar to the isotropic open solution, except for an anisotropic distribution of pressures that, as we shall see, can be related to a material content formed by an imperfect fluid. Nevertheless, the anisotropy gives rise to an important difference in the later stages of the Universe evolution. For our solutions to satisfy the dominant energy conditions, namely, positivity and causal flux of energy, the epoch dominated by dust matter should naturally be followed by an era of coasting evolution, in which the energy density $`ϵ`$ falls off as $`a^{-2}`$, where $`a`$ is the radius of the Universe. This corresponds to a material content that satisfies the equation of state $`p=-ϵ/3`$. Such a content can be interpreted as a decaying positive cosmological term, and it is very significant that arguments from quantum cosmology also predict the conservation law $`ϵa^2=`$ constant for a time dependent cosmological term. Moreover, during this phase the relative energy density (the energy density relative to the critical one) is a constant, and the energy conditions impose a lower bound on its value which is close to the present value. This constitutes a possible explanation for the observed quasi-flatness of the Universe. The general conclusion we will try to establish is that the introduction of a global rotation into the description of the Universe, in addition to agreeing with the observations that have been sustaining the standard model, can shed light on subjects like the origin of the rotation of galaxies, as pointed out by Li, or the quasi-flatness problem.
## 2 Gödel type metrics
The Gödel type metric that we will consider is given by $$ds^2=a^2(\eta )[(d\eta +le^xdy)^2-(dx^2+e^{2x}dy^2+dz^2)],$$ (1) where $`a`$ is a scale factor, $`l`$ is a positive parameter, $`\eta `$ is the conformal time and $`x`$, $`y`$, $`z`$ are spatial coordinates. Korotkii and Obukhov have shown that this metric respects the observed isotropy of the cosmic background radiation and does not lead to parallax effects, contrary to what would be expected from an anisotropic, rotating metric.
Moreover, they have shown that, for $`l<1`$, there are no causal problems, because the closed time-like curves characteristic of Gödel’s metric can appear only for $`l>1`$ (the Gödel metric corresponds to $`l=\sqrt{2}`$, with $`a`$ constant; for an exhaustive study of the stationary case of metric (1), see the pioneering work of Rebouças and Tiomno). Metric (1) describes an expanding and rotating universe, with an angular velocity given, in comoving coordinates, by $`\omega =l/(2a)`$. Although this result was derived by Korotkii and Obukhov for a constant value of $`l`$, it is easy to verify that it remains valid when $`l`$ is a function of time. Using conservation of angular momentum, it is possible to see that in the radiation dominated epoch the parameter $`l`$ is a constant, as originally considered by Korotkii and Obukhov, while in the matter dominated one it falls as $`a^{-1}`$. Indeed, from the conservation of angular momentum we have $`ϵ\omega a^5=`$ constant, where $`ϵ`$ is the energy density of the matter content. In the radiation epoch, $`ϵ`$ falls off as $`a^{-4}`$, and so $`\omega `$ falls as $`a^{-1}`$, leading to a constant $`l`$. On the other hand, in the matter epoch $`ϵ`$ falls off as $`a^{-3}`$, so $`\omega `$ falls as $`a^{-2}`$, and $`l`$ should fall as $`a^{-1}`$. As we shall see, in a rotating and expanding universe described by metric (1), the matter dominated epoch should be followed by an era in which the energy density falls off as $`a^{-2}`$, if the energy conditions are to be satisfied. So, during this last epoch $`\omega `$ falls as $`a^{-3}`$, and $`l`$ falls as $`a^{-2}`$. Therefore, for large times the terms in $`l`$ can be dismissed in Einstein’s equations, which means considering, instead of metric (1), the anisotropic metric $$ds^2=a^2(\eta )[d\eta ^2-(dx^2+e^{2x}dy^2+dz^2)].$$ (2) The cosmological solutions we will present in this paper, approximate solutions for metric (1), are exact solutions for the diagonal metric (2), corresponding to the particular case $`l=0`$. With the help of the coordinate transformation $`e^x=\mathrm{cosh}\xi +\mathrm{cos}\varphi \mathrm{sinh}\xi ,`$ $`ye^x=\mathrm{sin}\varphi \mathrm{sinh}\xi ,`$ (3) metric (2) can also be written as $$ds^2=a^2(\eta )(d\eta ^2-d\xi ^2-\mathrm{sinh}^2\xi d\varphi ^2-dz^2).$$ (4) The coordinate transformation (3) is a particular case, for $`l=0`$, of a more general transformation with the help of which metric (1) can be expressed in cylindrical coordinates. It is easy to show that, in the limit of nearby distances, that is, up to subdominant terms in $`\mathrm{sinh}\xi `$, metric (4) reduces to the open FLRW metric. Indeed, using the transformations $`\mathrm{sinh}\xi =\mathrm{sinh}\chi \mathrm{sin}\theta ,`$ $`z=\mathrm{sinh}\chi \mathrm{cos}\theta ,`$ (5) relating cylindrical and spherical coordinates, we obtain by differentiation $`\mathrm{cosh}\xi d\xi =\mathrm{sinh}\chi \mathrm{cos}\theta d\theta +\mathrm{cosh}\chi \mathrm{sin}\theta d\chi ,`$ $`dz=-\mathrm{sinh}\chi \mathrm{sin}\theta d\theta +\mathrm{cosh}\chi \mathrm{cos}\theta d\chi .`$ (6) So, by using $$\frac{1}{\mathrm{cosh}^2\xi }=\frac{1}{1+\mathrm{sinh}^2\xi }\approx 1-\mathrm{sinh}^2\xi =1-\mathrm{sinh}^2\chi \mathrm{sin}^2\theta ,$$ (7) we have $$dz^2+d\xi ^2\approx \mathrm{sinh}^2\chi d\theta ^2+d\chi ^2.$$ (8) Finally, substituting (8) and the first of equations (5) into (4) leads to $$ds^2\approx a^2(\eta )[d\eta ^2-d\chi ^2-\mathrm{sinh}^2\chi (d\theta ^2+\mathrm{sin}^2\theta d\varphi ^2)],$$ (9) which is precisely the open FLRW metric in spherical coordinates.
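The reduction (7)-(9) can be checked symbolically. The following sketch (our own verification, using sympy) forms the exact differentials (6), eliminates $`\mathrm{cosh}\xi `$ through the first of equations (5), and expands the residual of Eq. (8) for small $`\chi `$; the residual starts only at the orders neglected in the nearby approximation.

```python
import sympy as sp

chi, theta, dchi, dtheta = sp.symbols('chi theta dchi dtheta', real=True)

# Exact differentials from Eqs (5)-(6), with cosh(xi) eliminated via
# sinh(xi) = sinh(chi)*sin(theta).
cosh_xi = sp.sqrt(1 + sp.sinh(chi)**2 * sp.sin(theta)**2)
dxi = (sp.sinh(chi)*sp.cos(theta)*dtheta
       + sp.cosh(chi)*sp.sin(theta)*dchi) / cosh_xi
dz = -sp.sinh(chi)*sp.sin(theta)*dtheta + sp.cosh(chi)*sp.cos(theta)*dchi

residual = sp.expand(dz**2 + dxi**2 - dchi**2 - sp.sinh(chi)**2*dtheta**2)
# The series has no chi^0 or chi^1 terms: the mismatch appears only at
# order chi^2 (in the dchi^2 coefficient), i.e. among the subdominant
# terms dropped in Eqs (8)-(9).
print(sp.series(residual, chi, 0, 3))
```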
The requirement of absence of closed time-like curves can be used to understand why the present observed upper limit for the global angular velocity is so small. As said before, the rotation parameter $`l`$ is given by $`l=2\omega a`$. Then, the causality condition $`l<1`$ applied to the radiation dominated era implies that $`\omega _d<1/(2a_d)\approx 2.5\times 10^{-15}\,s^{-1}`$, where $`\omega _d`$ is the global angular velocity at the time of decoupling between matter and radiation, and $`a_d`$ is the radius of the Universe at that time, $`a_d\approx 6\times 10^{22}`$ m (for a present radius of the Universe given by $`a\approx 1/H\approx 10^{26}`$ m, where $`H`$ is the Hubble parameter). As we have seen, during the matter dominated era $`\omega a^2=`$ constant, which leads to an upper limit for the present angular velocity of matter given by $`\omega =\omega _da_d^2/a^2\approx 10^{-21}\,s^{-1}`$, while for radiation we obtain, from $`\omega _{rad}a=`$ constant, the upper limit $`\omega _{rad}=\omega _da_d/a\approx 1.5\times 10^{-18}\,s^{-1}`$. In this way, the absence of closed time-like curves appears as a natural explanation for the smallness of the present rotation.
## 3 The radiation dominated era
From metric (1), and considering $`l`$ as a function of time, we obtain the Einstein equations $$ϵa^4=-\left(1-\frac{3l^2}{4}\right)a^2+3(1-l^2)\dot{a}^2-2l\dot{l}a\dot{a},$$ (10) $$p_1a^4=\left(\frac{l^2}{4}+\dot{l}^2+l\ddot{l}\right)a^2+(1-l^2)\dot{a}^2-2(1-l^2)a\ddot{a}+4l\dot{l}a\dot{a},$$ (11) $$p_2a^4=\frac{l^2}{4}a^2+(1-l^2)\dot{a}^2-2(1-l^2)a\ddot{a}+2l\dot{l}a\dot{a},$$ (12) $$p_3a^4=\left(1-\frac{l^2}{4}+\dot{l}^2+l\ddot{l}\right)a^2+(1-l^2)\dot{a}^2-2(1-l^2)a\ddot{a}+4l\dot{l}a\dot{a}.$$ (13) Here, $`ϵ`$ is the energy density and $`p_i`$, $`i=1,2,3`$, are the principal pressures. The dot means derivation with respect to the conformal time. The other non-null components of the energy-momentum tensor are proportional to $`l`$, which from now on will be considered a small parameter, i.e., $`l<<1`$. In this approximation, these non-diagonal components can be neglected compared to the diagonal ones. As suggested in Section 2, let us adopt for the radiation dominated era the ansatz $`l=`$ constant. The above equations turn out to be $$ϵa^4=-\left(1-\frac{3l^2}{4}\right)a^2+3(1-l^2)\dot{a}^2,$$ (14) $$p_1a^4=p_2a^4=\frac{l^2}{4}a^2+(1-l^2)\dot{a}^2-2(1-l^2)a\ddot{a},$$ (15) $$p_3a^2=p_1a^2+1-\frac{l^2}{2}.$$ (16) Substituting in (14) the conservation law for radiation, $`ϵa^4=a_0^2=`$ constant, and considering the limit $`a\rightarrow 0`$, we obtain the solution $$a=b\eta =\sqrt{2bt},$$ (17) where $`t`$ is the cosmological time, defined by $`dt=ad\eta `$, and $$b=\frac{a_0}{[3(1-l^2)]^{1/2}}.$$ (18) This is the same evolution law as obtained in the isotropic model in the limit of small times. For the energy density we then have $$ϵ=\frac{a_0^2}{a^4}=\frac{3}{4t^2}\left(1-l^2\right),$$ (19) while it follows from equations (15) and (16), in the same limit $`a\rightarrow 0`$, that $$p_i=p=\frac{ϵ}{3},$$ (20) $`i=1,2,3`$, i.e., the equation of state for radiation, as expected. For $`l<1`$, equations (19) and (20) give $`ϵ>0`$ and $`|p_i|<ϵ`$, that is, the energy conditions are satisfied. For $`l^2<<1`$, those equations give the same predictions as the isotropic model. Another remarkable point is that, although the distribution of pressures is in general anisotropic, in the limit of small times we obtain an isotropic pressure.
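Both the causality bounds of Section 2 and the small-time behaviour of the solution (17)-(18) can be checked directly; the sketch below is our own verification (with the speed of light restored in SI units for the bounds), not part of the original derivation.

```python
import sympy as sp

# --- Radiation era: a = b*eta solves Eq. (14) in the limit a -> 0 ---
eta, a0, l = sp.symbols('eta a_0 l', positive=True)
b = a0 / sp.sqrt(3 * (1 - l**2))          # Eq. (18)
a = b * eta                               # Eq. (17)
lhs = a0**2                               # epsilon*a^4 for radiation
rhs = -(1 - 3*l**2/4) * a**2 + 3*(1 - l**2) * sp.diff(a, eta)**2
print(sp.simplify(lhs - rhs))             # residual is proportional to a^2,
                                          # hence vanishes as a -> 0

# --- Causality bounds quoted in Section 2 (rough values) ---
c = 3e8                  # m/s
a_d, a_now = 6e22, 1e26  # radii at decoupling and today, in metres
w_d = c / (2 * a_d)      # omega_d < 1/(2 a_d) in units with c = 1
print(w_d)                       # ~2.5e-15 1/s
print(w_d * (a_d / a_now)**2)    # matter today:    ~1e-21 1/s
print(w_d * (a_d / a_now))       # radiation today: ~1.5e-18 1/s
```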
In addition, the Hubble parameter at this epoch has the same time dependence as in the standard model, namely $`H=1/(2t)`$, which leads to the same ratio between the interaction rate and the expansion one. The conclusion is that the thermal history of this universe is the same as predicted by the standard model. In what concerns processes occurring during the initial stages of the Universe evolution, the anisotropy (and the rotation) may be manifest only as higher order corrections.
## 4 Cosmological solutions for large times
As pressure and energy density decrease, the radiation dominated era evolves until matter and radiation decouple from each other and one enters a matter dominated epoch, characterized by the conservation law $$ϵa^3=2a_1,$$ (21) where $`a_1`$ is a constant. Adopting the ansatz $`la=`$ constant, taking the limit of large $`a`$ and keeping only the dominant terms, the Einstein equations (10)-(13) reduce to $$ϵa^3=-a+\frac{3\dot{a}^2}{a},$$ (22) $$p_1a^3=p_2a^3=\frac{\dot{a}^2}{a}-2\ddot{a},$$ (23) $$p_3a^2=p_1a^2+1.$$ (24) Substituting (21) into (22), we have the solution $$a(\eta )=a_1\left[\mathrm{cosh}\left(\frac{\eta }{\sqrt{3}}\right)-1\right],$$ (25) where we have absorbed an integration constant by a suitable shift in the origin of the conformal time $`\eta `$. With this solution, the spatial Einstein equations (23)-(24) lead to the pressures $`p_1a^2=p_2a^2=-\frac{1}{3},`$ $`p_3a^2=\frac{2}{3},`$ (26) whose average yields the equation of state for dust matter, $`p=0`$, as expected. Hence, the evolution of this universe, from the initial radiation dominated epoch until the matter dominated one, is similar to that predicted by the open isotropic model, except for an anisotropy in the pressure distribution, an anisotropy that is negligible at early times and that, for large times, is as small as the pressures themselves. However, there is an important difference with respect to the isotropic case. In the matter dominated era the energy density falls off as $`a^{-3}`$, while the pressures decrease as $`a^{-2}`$. So, for large times, the magnitude of the pressures would become larger than the energy density and, consequently, the dominant energy conditions $`ϵ\geq |p_i|`$ would not be fulfilled. It is possible to prove that, for the energy conditions to be satisfied at present, the relative energy density should be larger than or equal to $`0.4`$, but, even so, these conditions would be violated sooner or later in the future. Therefore, in this anisotropic scenario the dust era should be followed by an epoch in which the energy density falls off at least as slowly as $`a^{-2}`$, that is, according to the conservation law $$ϵa^2=3b^2-1=\mathrm{constant},$$ (27) where $`b`$ is a positive constant introduced for mathematical convenience (the possibility of a negative $`b`$ would correspond to a contracting universe and will not be studied here). Substituting this conservation law into Eq. (10) and dismissing the terms in $`l`$ gives $`\dot{a}/a=b`$, which leads to the solution $$a=e^{b\eta }=bt.$$ (28) The Hubble parameter is now given by $`H=b/a`$ and, for the relative energy density, we obtain the constant value $$\mathrm{\Omega }\equiv \frac{ϵ}{3H^2}=\frac{3b^2-1}{3b^2}.$$ (29) The spatial Einstein equations give $`p_1a^2=p_2a^2=-b^2,`$ $`p_3a^2=1-b^2.`$ (30) For the average pressure we then get $`p=-ϵ/3`$, an equation of state corresponding to a (decaying) positive cosmological term.
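A short symbolic check (our own, using sympy) that the dust-era solution (25) indeed satisfies Eq. (22) with the conservation law (21), and reproduces the pressures (26):

```python
import sympy as sp

eta, a1 = sp.symbols('eta a_1', positive=True)
a = a1 * (sp.cosh(eta / sp.sqrt(3)) - 1)          # Eq. (25)
ad = sp.diff(a, eta)
add = sp.diff(a, eta, 2)

# Eq. (22) with the dust conservation law eps*a^3 = 2*a_1 (Eq. 21):
print(sp.simplify(-a + 3 * ad**2 / a - 2 * a1))   # -> 0

# Pressures from Eq. (23): p1*a^2 = adot^2/a^2 - 2*addot/a
print(sp.simplify(ad**2 / a**2 - 2 * add / a))    # -> -1/3
# Eq. (24) then gives p3*a^2 = p1*a^2 + 1 = 2/3, and the average of
# (p1, p2, p3) vanishes: the dust equation of state p = 0.
```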
In this sense, it is interesting to note that, for a cosmological term varying with time, the conservation law $`ϵa^2=`$ constant has also been suggested on the basis of quantum cosmology considerations. It is easy to see that the dominant energy conditions $`ϵ\geq |p_i|`$ are now fulfilled provided that $`b^2\geq 1/2`$. From (29), one can see that this corresponds to the condition $`\mathrm{\Omega }\geq 1/3\approx 0.3`$. On the other hand, it is possible to check that, during the radiation and matter dominated epochs, the relative energy density decreases monotonically, which means that the bound obtained above is a lower limit for $`\mathrm{\Omega }`$ at all times. So, the energy conditions impose a lower bound on the relative energy density, maintaining the Universe in a quasi-flat configuration. The conservation law $`ϵa^2=`$ constant is also compatible with the Einstein equations in the isotropic case. For the open FLRW metric, instead of Eq. (10) we have the Friedmann equation $$ϵ=\frac{3}{a^4}(\dot{a}^2-a^2).$$ (31) Substituting $`ϵa^2=3(b^2-1)`$, it is easy to arrive once more at the evolution law (28). In addition, the spatial Einstein equations give $`p=-ϵ/3`$, corresponding again to a (decaying) positive cosmological term. In this case, however, although the energy conditions are satisfied only if $`b\geq 1`$, no positive lower bound is imposed on the relative energy density, contrary to what happens in the anisotropic case. More generally, it is possible to prove that the conservation law demanding that $`ϵa^2`$ be a constant leads to Eq. (28) for all (open, flat and closed) metrics, in both the isotropic and anisotropic cases (actually, in the flat case the anisotropic metric reduces to the isotropic one). Remarkably, however, it is only in the open anisotropic case that the energy conditions impose a quasi-flat configuration. Let us also note that the anisotropic model presented in this paper does not exclude the possibility of an inflationary phase in the cosmic evolution. Indeed, if we add a dominant, positive cosmological constant to the left hand side of Eq. (10), we obtain an exponential evolution law for $`a(t)`$. Actually, the introduction of a typical cosmological constant (or, alternatively, of an energy density falling more slowly than $`a^{-2}`$) in the Einstein equations would be needed if recent claims about the observation of a positive cosmic acceleration were confirmed. As we have shown, a cosmological term decaying as $`a^{-2}`$ leads to the solution (28), for which the deceleration parameter is exactly zero.
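The bound just stated can be verified mechanically; the sketch below scans $`b^2`$ against the dominant energy conditions written with Eqs (27) and (30):

```python
import numpy as np

def energy_conditions_ok(b2):
    """Dominant energy conditions eps >= |p_i| in the coasting era,
    using eps*a^2 = 3*b^2 - 1 (Eq. 27) and the pressures of Eq. (30)."""
    eps = 3*b2 - 1
    p1, p3 = -b2, 1 - b2
    return eps >= 0 and eps >= abs(p1) and eps >= abs(p3)

b2_grid = np.linspace(0.0, 3.0, 30001)
ok = [b2 for b2 in b2_grid if energy_conditions_ok(b2)]
print(min(ok))                           # ~0.5, i.e. b^2 >= 1/2
Omega = lambda b2: (3*b2 - 1) / (3*b2)   # Eq. (29)
print(Omega(0.5))                        # 1/3 ~ 0.33, the quoted bound
```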
Therefore, it is necessary to find a suitable matter source compatible with such an anisotropy in order to put this non-stationary, rotating cosmology on an acceptable physical basis. Without closing the door to other possibilities , a natural candidate for the material content is an homogeneous imperfect fluid with viscosity. Actually, in general situations this choice is more realistic than a perfect fluid. The use of a perfect fluid in standard cosmology is possible due to the isotropy of the fluid motion, which avoids the appearance of friction. But in anisotropic contexts like the one considered here the role of viscosity cannot, in general, be neglected. In order to simplify our analysis, let us consider the fluid motion in a locally comoving Lorentz frame, that is, a freely falling frame whose origin is at rest with respect to a given point of the fluid at a given time $`t`$ (owing to the presence of friction, we cannot introduce a globally comoving frame). The energy-momentum tensor of the fluid is given by $$T^{\mu \nu }=pg^{\mu \nu }+(p+ϵ)U^\mu U^\nu +\mathrm{\Delta }T^{\mu \nu },$$ (33) where $`p`$ is the average pressure and, in our frame, $`\mathrm{\Delta }T^{00}=0`$ and $`\mathrm{\Delta }T^{i0}=\chi _0(^iT+T\dot{U}^i),`$ $`\mathrm{\Delta }T^{ij}=\eta _0(^jU^i+^iU^j{\displaystyle \frac{2}{3}}_kU^k\delta ^{ij}).`$ (34) Here, $`U^\mu `$ is the fluid velocity, $`T`$ its temperature, the overdot means derivation with respect to $`t`$ and $`^i=\delta ^{ij}_j`$. $`\chi _0`$ and $`\eta _0`$ are the fluid heat conductivity and shear viscosity, respectively. In our model with cylindrical symmetry, the second of equations (5) leads to $`\mathrm{\Delta }T^{11}=\mathrm{\Delta }T^{22}={\displaystyle \frac{2}{3}}\eta _0h,`$ $`\mathrm{\Delta }T^{33}={\displaystyle \frac{4}{3}}\eta _0h,`$ (35) where we have defined $`h^1U^1^3U^3=^2U^2^3U^3`$. So, we have $$\mathrm{\Delta }T^{11}=\mathrm{\Delta }T^{22}=\mathrm{\Delta }T^{33}2\eta _0h.$$ (36) Comparing this equation with (32), we see that viscosity gives the expected contribution to the pressures provided that $$2\eta _0ha^2=1.$$ (37) Since the anisotropic metric (2) is diagonal, the energy-momentum tensor will also be diagonal, that is, $`T^{i0}=\mathrm{\Delta }T^{i0}=0`$. This can be achieved if we consider a null heat conductivity, or an approximately null fluid temperature, or yet if the condition $`^iT=T\dot{U}^i`$ follows. In this last case, since for the coasting solution considered in the last section we have $`\dot{U}^i=0`$, we get $`_iT=0`$, which means isotropy and homogeneity of fluid temperature. On the other hand, we have $`\mathrm{\Delta }T^{ij}=0`$ for $`ij`$, which leads to $`^jU^i=^iU^j`$ for $`ij`$. This condition can be satisfied, in particular, by the Hubble type law $`U_i=H_ix_i`$, $`i=1,2,3`$. In this case, equation (37) shows that $`H_3<H_1=H_2`$, and it is important to verify how much this inequality can affect the observed Hubble law. The relation between the Hubble parameter $`H`$ and the parameters $`H_i`$ can be established with the help of the equation $`\dot{V}/V=_iU^i`$, relating the temporal variation of a volume $`V`$ of the fluid and the divergence of its velocity field. Considering $`V`$ as the volume of the observed universe, proportional to $`a^3`$, this leads to $`_iU^i=3\dot{a}/a=3H`$. So, from $`U_i=H_ix_i`$, we obtain $`H=(H_1+H_2+H_3)/3`$, that is, the Hubble parameter is equal to the average of the paramaters $`H_i`$. 
During the coasting phase of the Universe evolution, the Hubble parameter is given by $`H=b/a`$ and, therefore, its relative variation with direction is given by $`h/H=(2\eta _0ba)^{-1}`$. If we consider a constant viscosity $`\eta _0`$ (or at least a viscosity varying very slowly with $`a`$), the relative variation of $`H`$ falls as $`a^{-1}`$ and could be unobservable for large times, even for a small value of $`\eta _0`$. Therefore, the above analysis shows that the solutions found for the spacetime metrics (1) and (2) may correspond to a physical matter content formed by radiation, viscous matter and a cosmological term. The appearance of anisotropic pressures, far from discarding the model and rendering it unphysical, can be related to the presence of friction owing to the anisotropic motion of matter.
## 6 A singularity-free alternative scenario
In a recent paper, we have investigated an alternative, singularity free scenario for the Universe’s evolution, in which the present expanding universe originates from a primordial Gödel universe, via a phase transition during which the negative cosmological term characteristic of the Gödel phase crosses a positive maximum and rolls down to zero. This scenario could also explain the origin of the rotation of galaxies, but it was not clear how the global angular momentum of the Universe could be transferred to the galaxies. This difficulty is intimately connected to the discontinuous transition considered in that paper, where the Gödel metric (1) (with $`l=\sqrt{2}`$) is directly matched to the expanding (but non-rotating) anisotropic metric (2). The analysis we have made in the present paper can help us solve these difficulties. Initially we have a Gödel universe, which corresponds to $`l=\sqrt{2}`$ and $`ϵa^2=1`$. After the phase transition, we have a rotating and expanding universe, in which $`l`$ (and then the non-diagonal term in metric (1)) falls down, leading to the diagonal metric (2). During the phase transition, half of the initial energy is used to compensate the negative cosmological constant present in the Gödel model, given by $`\lambda a^2=-1/2`$. After the phase transition we then have $`ϵa^2=1/2`$, a relation which allows us to match the original Gödel universe with our last solution (27)-(28) for $`b^2=1/2`$, that is, for $`\mathrm{\Omega }=1/3\approx 0.3`$. In this way, the scale factor $`a`$ changes continuously during the whole evolution, and the dominant energy conditions are satisfied. Moreover, the decaying, positive cosmological term characteristic of the expanding phase can be shown to arise naturally from the scalar field transition described in that scenario, in which a self-interaction potential, initially at a negative minimum (corresponding to the negative cosmological constant present in the Gödel solution), crosses a positive maximum and rolls down to zero. Now, we can use the mechanism proposed by Li to transfer angular momentum from the Universe to galaxies in a rotating and expanding context. In the above match, the value of the radius of the primordial universe is not fixed by the present values of the energy density and Hubble parameter, contrary to what occurs in the match considered in that earlier paper (there, the Gödel phase is matched with the dust solution given by (21) and (25)). In the Gödel phase we have the angular velocity $`\omega _G=\sqrt{2}/(2a_G)`$, where $`a_G`$ is the radius of the primordial Gödel universe. In the expanding phase, the angular velocity of matter at the present time is given by $`\omega a^2=\omega _Ga_G^2`$.
Substituting this last equation into the former, we obtain $`a_G=\sqrt{2}\omega a^2`$, which determines $`a_G`$ for a given value of the present angular velocity of matter. As already discussed in , in this context there is no dense phase, which constitutes the major drawback of this alternative scenario if we take into consideration phenomena like nucleosynthesis or the cosmic background radiation. Although such phenomena could be related to the very process of the phase transition , this possibility needs to be further investigated. ## 7 Concluding remarks In this paper we have tried to show that the inclusion of rotation into the standard model of the Universe can enrich it in several aspects. On one hand, rotation does not contradict the current observations of isotropy nor gives rise to parallax effects ; the anisotropic metric (2), which we have just shown to originate from the rotating metric (1) by conservation of angular momentum, reduces to the open metric of Friedmann in the limit of nearby distances; in the limit of small times the distribution of pressures is isotropic, and the smallness of the rotation parameter $`l`$ guarantees that physical processes taking place at early times are not affected by rotation, as well as the absence of closed time-like curves. On the other hand, the global rotation can be used to explain the origin of the rotation of galaxies and the observed relation between their angular momenta and masses ; in the anisotropic context described by metrics (1) and (2), the energy conditions lead naturally to a last epoch dominated by a positive cosmological term decaying as $`a^{-2}`$, a decay law also expected on the basis of quantum cosmology reasonings ; finally, in such a context, the energy conditions also impose a constant lower bound on the relative energy density, close to the present observed value, providing in this way a possible explanation for the observed quasi-flatness. By the way, let us note that these two last results originate from the anisotropy of metric (2), regardless of its relation to the rotating metric (1). As recently pointed out , the coasting evolution law $`a=bt`$, characteristic of the last phase of the present model, can solve other cosmological problems as well. For example, it leads to $`t=1/H`$, an age for the Universe compatible with the observational bounds. In addition, the conservation law $`ϵa^2=`$ constant is precisely what we need to solve the cosmological constant problem, leading to a cosmological term in agreement with observation . It has also been shown that the decaying cosmological term proposed by Chen and Wu is not the only feasible possibility, the equation of state $`p=-ϵ/3`$ being also compatible with a bicomponent content, formed by ordinary matter and a cosmological constant. Actually, this equation of state can correspond as well to textures or strings. However, as commented in Ref. , it would be unrealistic to consider that the present universe is dominated by such topological defects. Finally, a curious remark is in order. With the upper limit for the matter angular velocity $`\omega \sim 10^{-21}s^{-1}`$, derived in section 2, the radius of the Universe $`a\sim 10^{26}`$m, and the matter density $`\rho \sim 10^{-27}`$Kg/m<sup>3</sup>, we obtain an angular momentum of order $`L\sim 10^{82}`$J.s. With this value, it follows that $$\frac{L}{\hbar }\sim 10^{116}\approx (10^{39})^3,$$ (38) where $`\hbar `$ is Planck's constant.
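This closing estimate is easy to reproduce; the sketch below assumes a crude rigid-body angular momentum $`L\sim \rho a^5\omega `$ (the moment-of-inertia prefactor, of order unity, is ignored):

```python
# Order-of-magnitude check of Eq. (38): omega ~ 1e-21 s^-1, a ~ 1e26 m and
# rho ~ 1e-27 kg/m^3 give L ~ rho*a^5*omega ~ 1e82 J*s and L/hbar ~ 1e116.
omega = 1e-21      # s^-1, upper limit quoted in section 2
a = 1e26           # m, radius of the observable Universe
rho = 1e-27        # kg/m^3
hbar = 1.05e-34    # J*s

L = rho * a**5 * omega                  # ~ M a^2 omega with M ~ rho a^3
print(f"L ~ {L:.1e} J*s, L/hbar ~ {L/hbar:.1e}")   # ~1e82 J*s, ~1e116
print(f"(1e39)^3 = {1e39**3:.1e}")
```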
This relation between the angular momentum of the Universe and typical angular momenta of particles is also expected on the basis of the large-number coincidences . Its physical meaning and cosmological implications remain an exciting and challenging open problem. ## Acknowledgments I am greatly thankful to Guillermo A. Mena Marugán, without whose criticisms, suggestions and encouragement this work would not have been possible. I am also grateful to Luis J. Garay for his careful reading of a first version of the manuscript, to Mariano Moles for a useful discussion on galaxy formation, to Hugo Reis and Henrique Fleming for discussions on the Hubble law in anisotropic models, to Pedro F. González-Díaz for interesting discussions and warm hospitality at CSIC, and to Enric Verdaguer and Alberto Saa for their hospitality at the University of Barcelona during the Summer of 2000. My thanks also to Mariam Bouhmadi for her kind help with GRTensor and Mathematica. This work has been partially supported by CNPq.
## 1 Introduction In recent papers, Chakrabarti , and Chakrabarti and Chakrabarti (, hereafter referred to as Paper I) explored the possibility of the formation of biomolecules in star-forming regions using gas-phase chemistry. Their conclusion was that even in the frigid conditions of interstellar matter some of the simplest amino acids, such as glycine and alanine, could be produced even before the formation of stars and planets. Paper I also showed that with a choice of reaction rate constant $`10^{-10}`$, significant adenine may also be produced. Some preliminary results for amino acids are in . Adenine is a simply produced vital component of the DNA molecule and its significant production may point to an important clue to the problem of the origin of life on planets like ours. Because of this, it is essential to carry out a careful analysis of the reaction rates during adenine formation. In Paper I, we modeled the formation of adenine by successive addition of HCN using an ‘average’ rate. In normal circumstances the gas-phase reaction rate of $`HCN+HCN\rightarrow H_2C_2N_2`$ would be small, since the reactants must combine by radiative association, i.e., they must radiate a photon when combining. This is a slow process and the probability of photon emission could be $`1`$ in a few thousand to a few million (T. Millar, private communication). However, as the size of the molecule gets bigger, the process becomes faster. Thus, it is likely that for a large enough molecule the radiative association may take place at every collision, and at this stage the collisional rate may be used. One possibility is therefore to assume that after every addition of HCN the reaction rate goes up by a factor of $`f`$ ($`f`$ may be anywhere from $`1`$ to $`100`$ or more). Hence one may imagine that at the early stages $`HCN+HCN\rightarrow H_2C_2N_2`$ proceeds with a reaction rate of $`10^{-16}`$, but for $`HCN+H_2C_2N_2`$ the rate becomes $`f\times 10^{-16}`$, for $`HCN+H_3C_3N_3`$ the rate becomes $`f^2\times 10^{-16}`$ and so on. It would therefore be of interest to learn whether significant adenine is formed, and whether it is detectable, when the radiative association process is taken into account. In the present paper, we do just that. It is possible that more favorable reactions take place on ice but, in view of the poorly known reaction rates of ice chemistry, we believe that the best we can do is to study the formation of these important molecules as a function of two parameters, namely $`\alpha _{Ad}`$ and $`f`$. It is quite possible that a suitable $`f`$ parameter as suggested above would take care of the ice-chemistry reaction rates as well. It is also possible that such an $`f`$ may actually be determined by the actual detection of molecules in space. Similarly, the constancy of $`f`$ is an assumption of our model; in reality it could vary with the size of the molecules. So far, there has been controversy over whether glycine has been observed in interstellar matter. Miao et al. tentatively detected glycine in the massive star-forming region Sgr B2(N), though this was later challenged by Combes et al. , who suggested that, with the sensitivity of the detector taken into account, the lines were really at the confusion limit and positive identification would require more sensitive instruments. It is not known if any attempts were made to detect adenine lines; however, there may have been detection of adenine in meteoritic samples (M. Bernstein, private communication).
## 2 Reaction Network and Hydrodynamic Model We choose the same reaction network as in Paper I, which we again present here for the sake of completeness. We take the UMIST database (Millar, Farquhar & Willacy ) as our basis of chemical reactants and reactions, but added several new reactions, such as the synthesis of amino acids (alanine and glycine), hydroxy-acids (glycolic and lactic acids), the DNA base adenine (), urea synthesis, etc. These new reactions bring the total number of species to $`422`$. The rate coefficients of these additional reactions are difficult to find, especially in the environs of a molecular cloud. Following the UMIST database, the rate constant for a two-body reaction is written as , $$k=\alpha (T/300)^\beta \mathrm{exp}(-\gamma /T)\mathrm{cm}^3s^{-1}$$ $`(1)`$ where $`\alpha `$, $`\beta `$ and $`\gamma `$ are constants and $`T`$ is the temperature. The amino acid synthesis rate was estimated from Fig. 8 of Schulte & Shock . The urea synthesis rate is kept comparable to the rates given in the UMIST table. The rate constants were taken to be $`\alpha =10^{-10}`$, $`\beta =\gamma =0`$ for each two-body reaction. In Paper I, the rate constants for adenine synthesis were chosen to be similar to those of other two-body reactions \[$`\alpha _{Ad}=10^{-10}`$, $`\beta =\gamma =0`$ for each HCN addition in the chain $`HCN\stackrel{\alpha _1}{\rightarrow }CH(NH)CN\stackrel{\alpha _2}{\rightarrow }NH_2CH(CN)_2\stackrel{\alpha _3}{\rightarrow }NH_2(CN)C=C(CN)NH_2\stackrel{\alpha _4}{\rightarrow }H_5C_5N_5`$ (adenine)\]. In the present paper we run the same simulation with $`\alpha _i|_{i=1,2,3,4}=\alpha _{Ad}=10^{-12},10^{-14},10^{-16}`$ as well and, in addition, consider the possibility that $`\alpha _i=f^{i-1}\alpha _{Ad}`$ as discussed in the introduction. In this notation, Paper I represents the case with $`f=1`$. The initial composition of the cloud before the simulation begins is kept the same as in , and the formation of $`H_2`$ is included using the grain-surface reaction with rates as in . The initial mass fractions are taken to be the same as in (after appropriate conversion), i.e., H:He:C:N:O:Na:Mg:Si:P:S:Cl:Fe = $`0.64`$:$`0.35897`$:$`5.6\times 10^{-4}`$:$`1.9\times 10^{-4}`$:$`1.81\times 10^{-3}`$:$`2.96\times 10^{-8}`$:$`4.63\times 10^{-8}`$:$`5.4\times 10^{-8}`$:$`5.79\times 10^{-8}`$:$`4.12\times 10^{-7}`$:$`9\times 10^{-8}`$:$`1.08\times 10^{-8}`$. The hydrodynamic model is kept the same as that in Paper I. We choose the initial size of the molecular cloud to be $`r_0=3\times 10^{18}`$ cm, the average temperature of the cloud $`T=10`$ K, and the angular velocity of the cloud $`\mathrm{\Omega }=10^{-16}`$ rad s<sup>-1</sup>. In this case the speed of sound is $`a_s\approx 19200`$ cm s<sup>-1</sup>, the corresponding initial density is $`\rho =10^{-22}`$ g cm<sup>-3</sup>, and the accretion rate is $`\dot{M}=1.06\times 10^{20}`$ g s<sup>-1</sup>. In the isothermal phase of the cloud collapse, the density goes as $`\rho \propto r^{-2}`$ and the velocity is constant. When the opacity becomes high enough to trap radiation (say, at $`r=r_{tr}`$), the cloud collapses adiabatically with $`\rho \propto r^{-3/2}`$. In the presence of rotation, a centrifugal barrier forms at $`r=r_c`$, where the centrifugal force balances gravity. The density falls off as $`\rho \propto r^{-1/2}`$ in this region . The initial constant velocity of infall becomes $`8900`$ cm s<sup>-1</sup>, and below $`r=r_c`$ a velocity $`\propto r^{-1/2}`$ was chosen to preserve the accretion rate in a disk-like structure of constant height. Since for the parameters chosen (generic as they are) $`r_c>r_{tr}`$, we chose $`T\propto 1/r`$ inside the centrifugal barrier ($`r<r_c`$) as in an adiabatic flow. We follow the collapse till a radius of $`10^{12}`$ cm is reached.
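As a numerical sketch of these inputs (the singular-isothermal-sphere relation $`\rho \sim a_s^2/(2\pi Gr_0^2)`$ and the estimate $`\dot{M}\sim a_s^3/G`$ are standard assumptions here, not taken from the paper; they do reproduce the quoted values):

```python
# UMIST-style rate coefficient of Eq. (1) plus a check of the collapse inputs.
import math

def rate(alpha, beta, gamma, T):
    """Two-body rate k = alpha*(T/300)^beta*exp(-gamma/T), in cm^3/s."""
    return alpha * (T / 300.0)**beta * math.exp(-gamma / T)

print(rate(1e-10, 0.0, 0.0, 10.0))       # 1e-10 cm^3/s for beta = gamma = 0

G = 6.67e-8                               # cgs
a_s = 1.92e4                              # cm/s, sound speed quoted above
r0 = 3e18                                 # cm, initial cloud radius
rho = a_s**2 / (2 * math.pi * G * r0**2)  # ~1e-22 g/cm^3
Mdot = a_s**3 / G                         # ~1.06e20 g/s
print(f"rho ~ {rho:.1e} g/cm^3, Mdot ~ {Mdot:.2e} g/s")
```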
## 3 Models and Results We use the following models, parameterized by $`\alpha _{Ad}`$ and $`f`$. Model A: $`\alpha _{Ad}=1.e-16`$ and $`f=1`$. Model B: $`\alpha _{Ad}=1.e-14`$ and $`f=1`$. Model C: $`\alpha _{Ad}=1.e-12`$ and $`f=1`$. Model D: $`\alpha _{Ad}=1.e-10`$ and $`f=1`$ (same as in Paper I). Model E: $`\alpha _{Ad}=1.e-16`$ and $`f=100`$. Model F: $`\alpha _{Ad}=1.e-14`$ and $`f=10`$. Model G: $`\alpha _{Ad}=1.e-12`$ and $`f=5`$. Fig. 1 shows the evolution of the adenine abundance $`X_{Ad}`$ with time (upper axis, in seconds) and with logarithmic radial distance (in cm). We note that, generally speaking, at $`r=10^{16}`$ cm the abundance $`X_{Ad}`$ has already reached almost its saturated value. This is because, as the upper axis indicates, most of the time during the collapse is spent in this region. The final abundances in the different models are: (A) $`X_{Ad}=1.36\times 10^{-34}`$, (B) $`1.36\times 10^{-26}`$, (C) $`1.36\times 10^{-18}`$, (D) $`6.35\times 10^{-11}`$, (E) $`1.2\times 10^{-22}`$, (F) $`1.34\times 10^{-20}`$, and (G) $`1.8\times 10^{-14}`$, respectively. Note that when $`X_{Ad}`$ is really small, it is proportional to $`\alpha _{Ad}^4`$ (for fixed $`f`$), as expected for a four-step reaction sequence (see, e.g., Models A, B and C). But when its abundance is significant, HCN otherwise participating in other reactions also contributes significantly to adenine formation. Similarly, at low $`X_{Ad}`$ and for $`f\ne 1`$, the final abundance is proportional to $`f^6`$ for the same value of $`\alpha _{Ad}`$ (see the short numerical check below). If the present detectability limit of the abundance is around $`10^{-11}`$ , then it is clear that adenine processed in our method should not be detectable (except for Model D), even though it may be enough to contaminate some planets and flourish there, as we suggest. With a molecular weight of $`135`$ for adenine, one could imagine that an abundance of $`10^{-21}`$ or less should really be considered insignificant as far as the contamination theory goes. In that case Models A, B and E must be rejected. This would correspond to a lower limit on $`<\alpha _{Ad}>`$ of about $`10^{-13}`$ (where we use $`<>`$ to indicate an average over the whole chain of reactions leading to adenine formation from $`HCN`$). On the other hand, even when $`\alpha _1`$ is smaller than $`<\alpha _{Ad}>`$, $`f`$ could be large enough to yield eventual significant production (Model F). Thus, our $`\alpha `$-$`f`$ model implies that one needs to study the reaction rate not only of $`HCN+HCN\rightarrow H_2C_2N_2`$, but also of every stage of HCN addition in order to come to a definitive conclusion in this regard. ## 4 Conclusion In the presence of radiative association, the adenine abundance $`X_{Ad}`$ in an interstellar cloud seems to be roughly proportional to $`\alpha _{Ad}^4f^6`$ for small $`X_{Ad}`$. This means that the measurements of both $`\alpha _{Ad}`$ and $`f`$ must be made very accurately. We studied the $`\alpha `$-$`f`$ parameter space and found that while some regions could produce a significant abundance, a smaller region produces a detectable (with present-day technology) amount, while the rest produces abundances too insignificant to support the contamination theory. One must wait for technological advancements to improve laboratory experiments in extreme conditions and to improve the detectability limit in order to come to a firm conclusion. Acknowledgments SC acknowledges the use of the facilities of the Centre for Space Physics for writing this article. Figure 1.: Evolution of the adenine abundance with radial distance and time (upper axis) in the $`\alpha `$-$`f`$ model.
Various models are marked on the curve. See text for details.
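The $`\alpha _{Ad}^4f^6`$ scaling noted above can be checked directly; a minimal sketch, normalized to Model A and valid only in the unsaturated (small-$`X_{Ad}`$) regime:

```python
# Reproduce the quoted final abundances from the scaling X_Ad ~ alpha^4 * f^6,
# normalized to Model A (alpha_Ad = 1e-16, f = 1, X_Ad = 1.36e-34).
X_A = 1.36e-34

def X_ad(alpha, f):
    """Scaled adenine abundance; breaks down once saturation sets in."""
    return X_A * (alpha / 1e-16)**4 * f**6

for name, alpha, f in [("B", 1e-14, 1), ("C", 1e-12, 1),
                       ("E", 1e-16, 100), ("F", 1e-14, 10), ("G", 1e-12, 5)]:
    print(f"Model {name}: X_Ad ~ {X_ad(alpha, f):.2e}")
```

The scaled values match Models B and C exactly and Models E, F and G to within the saturation effects mentioned in the text.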
# Fluctuation Probes of Quark Deconfinement ## Abstract The size of the average fluctuations of net baryon number and electric charge in a finite volume of hadronic matter differs widely between the confined and deconfined phases. These differences may be exploited as indicators of the formation of a quark-gluon plasma in relativistic heavy-ion collisions, because fluctuations created in the initial state survive until freeze-out due to the rapid expansion of the hot fireball. preprint: hep-ph/0003169, CERN-TH/2000-077, DPNU-00-12, DUKE-TH-00-201 Fluctuations in the multiplicities and momentum distributions of particles emitted in relativistic heavy-ion collisions have been widely considered as probes of thermalization and the statistical nature of particle production in such reactions . The characteristic behavior of temperature and pion multiplicity fluctuations in the final state has been proposed as a tool for the measurement of the specific heat and, specifically, for the detection of a critical point in the nuclear matter phase diagram . Although the hot and dense matter created in heavy-ion collisions is not directly observed at the critical point (if one exists) but rather at the point of thermal freeze-out where particles decouple from the system, certain features of the critical fluctuations were shown to survive due to the finite cooling rate of the fireball . We here draw attention to a different type of fluctuations which are sensitive to the microscopic structure of the dense matter. If the expansion is too fast for local fluctuations to follow the mean thermodynamic evolution of the system, it makes sense to consider fluctuations of locally conserved quantities that show a distinctly different behavior in a hadron gas (HG) and a quark-gluon plasma (QGP). Characteristic features of the plasma phase may then survive in the finally observed fluctuations. This is most likely if subvolumes are considered which recede rapidly from each other due to a strong differential collective flow pattern as it is known to exist in the final stages of a relativistic heavy-ion reaction. Three observables satisfy these constraints and are, in principle, measurable: the net baryon number, the net electric charge, and the net strangeness. Here we will focus on the first two as probes of the transition from hadronic matter to a deconfined QGP. Because they are sensitive to the microscopic structure of the matter, their unusual behavior would provide specific information about the structural change occurring as quarks are liberated and chiral symmetry is restored at high temperature. Our proposal differs from recent suggestions involving fluctuations in the abundance ratios of charged particles and in the baryon number multiplicity in that we only consider locally conserved quantities. We also disregard dynamical fluctuations of the baryon density caused by supercooling and bubble formation . We consider matter which is meson-dominated, i.e. whose baryonic chemical potential $`\mu `$ and temperature $`T`$ satisfy $`\mu \lesssim T`$. Our arguments will thus apply to heavy-ion collisions at CERN SPS energies and above. In the following, we first explain qualitatively how hadronic and quark matter differ with respect to net baryon number and electric charge fluctuations. We then present analytical calculations supporting the argument. Finally, we estimate the rate at which initial state fluctuations are washed out during the expansion of the hot matter in the final, hadronic stage before thermal freeze-out.
In a hadron gas nearly two thirds of the hadrons (for $`\mu \lesssim T`$ mostly pions) carry electric charge $`\pm 1`$. In the deconfined QGP phase the charged quarks and antiquarks make up only about half the degrees of freedom, with charges of only $`\pm \frac{1}{3}`$ or $`\pm \frac{2}{3}`$. Consequently, the fluctuation of one charged particle in or out of the considered subvolume produces a larger mean square fluctuation of the net electric charge if the system is in the HG phase. For baryon number fluctuations the situation is less obvious because in the HG baryon charge is now only carried by the heavy and less abundant baryons and antibaryons. Still, all of them carry unit baryon charge $`\pm 1`$ while the quarks and antiquarks in the QGP only have baryon number $`\pm \frac{1}{3}`$. It turns out that, as $`\mu /T\rightarrow 0`$, the fluctuations are again larger in the HG, albeit by a smaller margin than for charge fluctuations. At SPS energies and below the difference between the two phases increases since the stopped net baryons from the incoming nuclei contribute to the fluctuations, and more so in the HG than in the QGP phase. Generally, if $`𝒪`$ is conserved and $`\mu `$ is the associated chemical potential, in thermal equilibrium the mean square deviation of $`𝒪`$ is given by $$(\mathrm{\Delta }𝒪)^2\equiv \langle 𝒪^2\rangle -\langle 𝒪\rangle ^2=T\frac{\partial \langle 𝒪\rangle }{\partial \mu },$$ (1) where $`\langle 𝒪\rangle =\mathrm{Tr}𝒪e^{(\mu 𝒪-H)/T}/\mathrm{Tr}e^{(\mu 𝒪-H)/T}`$, with $`H`$ the Hamiltonian. For $`𝒪=N_b`$ the r.h.s. of (1) is $`T`$ times the baryon number susceptibility which was discussed earlier in the context of possible signatures for chiral symmetry restoration in the hadron-quark transition . In general, the relative fluctuation of any extensive variable vanishes in the thermodynamic limit $`V\rightarrow \mathrm{\infty }`$ because the expectation value $`\langle 𝒪\rangle `$ increases linearly with the volume $`V`$ while the fluctuation $`\mathrm{\Delta }𝒪`$ grows only like $`\sqrt{V}`$. In reality, the value of a conserved quantum number of an isolated system does not fluctuate at all. However, if we consider a small part of the system, which is large enough to neglect quantum fluctuations, but small enough that the entire system can be treated as a heat bath, Eq. (1) can be used to calculate the statistical uncertainty of the value of the observable in the subsystem. This is the scenario considered here. We first discuss the fluctuations of the net baryon number. Since baryons are heavy, in the dilute HG phase we can apply the Boltzmann approximation : $$N_b^\pm (T,\mu )=N_b^\pm (T,0)\mathrm{exp}(\pm \mu /T).$$ (2) Here $`N_b^\pm `$ denotes the number of baryons $`(+)`$ and antibaryons $`(-)`$, respectively. The net baryon number is $`N_b=N_b^+-N_b^{-}`$. Then the net baryon number fluctuations in the hadronic gas are given by $$(\mathrm{\Delta }N_b)_{\mathrm{HG}}^2=N_b^++N_b^{-}=2N_b^\pm (T,0)\mathrm{cosh}(\mu /T).$$ (3) This result makes sense, because the fluctuation of either a baryon or an antibaryon into or out of the subvolume changes the net baryon number contained in it. To estimate $`(\mathrm{\Delta }N_b)^2`$ in the QGP phase, we use the exact result for the baryon number density in an ideal gas of massless quarks and gluons (for two massless flavors): $$\frac{1}{V}(\mathrm{\Delta }N_b)_{\mathrm{QGP}}^2=\frac{2}{9}T^3\left(1+\frac{1}{3}\left(\frac{\mu }{\pi T}\right)^2\right),$$ (4) where $`V`$ denotes the volume of the considered subsystem.
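Equation (4) can be verified symbolically by applying Eq. (1) to the ideal-gas baryon density of two massless quark flavors, $`n_B=\frac{2}{9}\mu T^2+\frac{2}{81}\mu ^3/\pi ^2`$ (the standard free-gas expression, assumed here):

```python
# Check Eq. (4): T*d(n_B)/d(mu) for the two-flavor massless quark gas.
import sympy as sp

mu, T = sp.symbols('mu T', positive=True)
n_B = sp.Rational(2, 9)*mu*T**2 + sp.Rational(2, 81)*mu**3/sp.pi**2
chi = T * sp.diff(n_B, mu)                          # Eq. (1) applied to N_b/V
target = sp.Rational(2, 9)*T**3*(1 + sp.Rational(1, 3)*(mu/(sp.pi*T))**2)
assert sp.simplify(chi - target) == 0
print("Eq. (4) verified")
```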
It is convenient to normalize this by the entropy density (again for two quark flavors plus gluons): $$\frac{1}{V}S_{\mathrm{QGP}}=\frac{74\pi ^2}{45}T^3\left(1+\frac{5}{37}\left(\frac{\mu }{\pi T}\right)^2\right).$$ (5) The later expansion being nearly isentropic, the ratio $$\frac{(\mathrm{\Delta }N_b)^2}{S}|_{\mathrm{QGP}}=\frac{5}{37\pi ^2}\left(1+\frac{22}{111}\left(\frac{\mu }{\pi T}\right)^2+\mathrm{}\right),$$ (6) provides a useful measure for the fluctuations predicted for a transient quark phase. The entropy can be estimated from the final hadron multiplicity . For high collision energies ($`\mu /T\rightarrow 0`$), the ratio (6) approaches a constant; even for SPS energies, the $`\mu `$-dependent correction is at most 5%. The many resonance contributions make it difficult to write down an analytic expression like (5) for the entropy density in a hadron gas, but it is clear that the stronger $`\mu `$-dependence of (3) compared to (4) induces a stronger $`\mu `$-dependence of the corresponding ratio (6) in the HG phase. This translates into a stronger beam-energy dependence of the ratio (6) near midrapidity in the HG than in the QGP phase. Before providing numerical illustrations, let us compare these results with those for net charge fluctuations. All stable charged hadrons have unit electric charge; again using the Boltzmann approximation, which only for pions introduces a small error of at most 10%, we find $$(\mathrm{\Delta }Q)_{\mathrm{HG}}^2=N_{\mathrm{ch}},$$ (7) where $`N_{\mathrm{ch}}`$ is the total number of charged particles emitted from the subvolume. To find the expression for a noninteracting QGP, we introduce the electrochemical potential $`\varphi `$ which couples to the electric charges $`q_u=\frac{2}{3}`$ and $`q_d=-\frac{1}{3}`$ of the up- and down-quarks: $$\frac{Q(\varphi )}{V}=\sum _{f=u,d}q_f\left((\frac{1}{3}\mu +q_f\varphi )T^2+\frac{1}{\pi ^2}(\frac{1}{3}\mu +q_f\varphi )^3\right).$$ (8) We differentiate with respect to $`\varphi `$ at $`\varphi =0`$ and normalize to the entropy density: $$\frac{(\mathrm{\Delta }Q)^2}{S}|_{\mathrm{QGP}}=\frac{25}{74\pi ^2}\left(1+\frac{22}{111}\left(\frac{\mu }{\pi T}\right)^2+\mathrm{}\right).$$ (9) This is a factor $`\frac{5}{2}`$ larger than the corresponding ratio (6) for baryon number fluctuations, due to the larger electric charge of the up-quarks, but shows the same weak $`\mu `$-dependence. The main difference to baryon number fluctuations arises in the HG phase: Since at SPS and higher energies the r.h.s. of (7) is dominated by pions and meson resonances, its $`\mu `$-dependence is now also weak. In contrast to baryon number fluctuations, charge fluctuations thus show a weak beam-energy dependence in either phase, and only their absolute values differ . We now give some numerical values for the fluctuation/entropy ratios at SPS and RHIC/LHC. At the SPS, the net baryon number per unit of rapidity is measured: $`dN_b/dy\approx 92`$ . The antibaryon/baryon ratio is $`0.085`$ , corresponding to $`dN_b^{-}/dy\approx 8.5`$. Combined with a specific entropy of $`S/N_b\approx 36`$ , Eq. (3) thus gives $`(\mathrm{\Delta }N_b)^2/S\approx 0.033`$ if the fluctuations reflect an equilibrium HG. If they have a QGP origin, Eq. (6) gives $`(\mathrm{\Delta }N_b)^2/S\approx 0.014`$ , i.e. about a factor 2.4 less. The charge fluctuations in a HG can be evaluated from the measured charged multiplicity density at midrapidity, $`dN_{\mathrm{ch}}/dy\approx 400`$ , after correcting for resonance decays .
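The rational prefactors in Eqs. (5), (6) and (9) can be cross-checked in a few lines:

```python
# (2/9)/(74/45) = 5/37 and (5/9)/(74/45) = 25/74 (both divided by pi^2 in the
# text); the mu^2 correction is 1/3 - 5/37 = 22/111; the charge/baryon ratio
# is the factor 5/2 quoted below Eq. (9).
from fractions import Fraction

chi_b = Fraction(2, 9)                   # baryon susceptibility prefactor, Eq. (4)
chi_q = Fraction(5, 9)                   # charge susceptibility: q_u^2 + q_d^2 = 5/9
s = Fraction(74, 45)                     # entropy prefactor, Eq. (5)
print(chi_b / s, chi_q / s)              # 5/37 and 25/74
print(Fraction(1, 3) - Fraction(5, 37))  # 22/111
print((chi_q / s) / (chi_b / s))         # 5/2
```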
Assuming hadrochemical freeze-out at $`T\approx 170`$ MeV , 60% of the observed pions stem from such decays . One finds $`(\mathrm{\Delta }Q)^2/S\approx 0.06`$. If the charge fluctuations arise from a QGP, Eq. (9) gives $`(\mathrm{\Delta }Q)^2/S\approx 0.036`$, i.e. 60% of the HG value. It is instructive to extrapolate these results to RHIC/LHC energies (i.e. $`\mu /T\rightarrow 0`$). We again assume hadrochemical freeze-out at $`T\approx 170`$ MeV and use the particle multiplicities predicted by hadrochemical models . One obtains $`(\mathrm{\Delta }N_b)^2/S\approx 0.020`$ in the HG, compared to 0.0137 in the QGP, and $`(\mathrm{\Delta }Q)^2/S\approx 0.067`$ in the HG phase, compared to 0.034 in the QGP. Only the first of these four numbers, corresponding to the hadronic baryon number fluctuations, changes by more than 10% as one proceeds from SPS to RHIC (see Fig. 1). These estimates, including our corrections for resonance decays, refer to ideal gases in equilibrium. Future work should address interaction effects on the thermal fluctuations in HG and QGP and treat resonance decays kinetically. We also point out potentially important non-equilibrium aspects: The fluctuation/entropy ratios in the QGP will be even lower (facilitating the discrimination against the HG) if initially the QGP is strongly gluon-dominated and hadronizes before the concentrations of the (baryon) charge carriers $`q,\overline{q}`$ saturate , or if hadronization itself generates additional entropy. We now discuss whether the difference between the two phases (typically a factor 2) is really observable. Even if a QGP is temporarily created in a heavy-ion collision, all hadrons are emitted after re-hadronization. Thus, it is natural to ask whether the fluctuations will not always reflect the hadronic nature of the emitting environment. We must show that the time scale for the dissipation of an initial-state fluctuation is larger than the duration from hadronization to final particle freeze-out. It is essential to our argument that fluctuations of conserved quantum numbers can only be changed by particle transport and thus are likely to be frozen in at an early stage, similar to the abundances of strange hadrons, which are frozen early in the reaction and may even reflect the chemical composition of a deconfined plasma . For our estimate we assume for simplicity that the fireball expands mostly longitudinally, with a boost-invariant (Bjorken) flow profile. Longitudinal position and rapidity are then directly related. Strong longitudinal flow exists in collisions at the SPS , and the Bjorken picture is widely expected to hold for collisions at RHIC and LHC. Consider a slice of matter spanning a rapidity interval $`\mathrm{\Delta }\eta `$ at the initial time $`\tau _i`$. ($`\tau `$ is the proper time and $`\eta =\mathrm{tanh}^{-1}(z/t)`$.) Its proper volume is $`V_i=A\tau _i\mathrm{\Delta }\eta `$ where $`A`$ is the transverse area of the fireball. We denote the initial total baryon density by $`\rho _i=\rho _{b+\overline{b}}(\tau _i)`$. We assume $`T_i=170`$ MeV and $`T_f=120`$ MeV for the initial and final temperature , corresponding to $`\tau _i\approx 2.5`$ fm/$`c`$ and $`\tau _f\approx 7`$ fm/$`c`$ at the SPS, and $`\tau _i\approx 5`$ fm/$`c`$ and $`\tau _f\approx 14`$ fm/$`c`$ at RHIC.
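A back-of-the-envelope version of the SPS baryon-number estimate quoted above, together with the two QGP ratios at $`\mu /T\rightarrow 0`$:

```python
# dN_b/dy ~ 92, antibaryon/baryon ~ 0.085 and S/N_b ~ 36 give the hadron-gas
# value of Eq. (3); compare with the QGP values from Eqs. (6) and (9).
import math

net, r, s_per_b = 92.0, 0.085, 36.0
baryons = net / (1.0 - r)                       # ~100.5 per unit rapidity
antibaryons = baryons * r                       # ~8.5, as quoted
hg_baryon = (baryons + antibaryons) / (s_per_b * net)
qgp_baryon = 5 / (37 * math.pi**2)              # ~0.0137
qgp_charge = 25 / (74 * math.pi**2)             # ~0.034
print(f"HG: {hg_baryon:.3f}  QGP: {qgp_baryon:.4f}  "
      f"factor {hg_baryon/qgp_baryon:.1f}  QGP charge: {qgp_charge:.3f}")
```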
Let us first give a qualitative argument for the survival of a baryon number fluctuation within a rapidity interval $`\mathrm{\Delta }\eta \approx 1`$. Between $`\tau _i`$ and $`\tau _f`$, this interval expands from a length of 5 fm to 14 fm (we use the RHIC numbers here). Baryons have an average thermal longitudinal velocity component $`\overline{v}_z=\frac{1}{2}\overline{v}`$ where $`\overline{v}\equiv \langle |𝒗|\rangle =\sqrt{8T/\pi M}`$ is the mean thermal velocity ($`\overline{v}=0.65`$ for baryons with $`M=1`$ GeV at $`T=170`$ MeV). Without rescattering, between $`\tau _i`$ and $`\tau _f`$ a baryon which is initially at the center of this interval can travel on average only about 3 fm in the beam direction; hence it will not reach the edge of the interval before freeze-out. Because of rescattering in the hot hadronic matter, the baryon number actually diffuses more slowly, and a fluctuation will survive even in a smaller rapidity interval. For a quantitative argument, we need to estimate the flux of baryons in and out of the considered rapidity interval. Two effects need to be evaluated in this context. First, the difference in the baryon densities inside and outside the subvolume causes a difference in the values of the mean flux of baryons into and out of the volume. Denoting by $`\overline{v}(\tau )`$ the average thermal velocity of baryons, one finds that the initial fluctuation decays exponentially: $$\mathrm{\Delta }N_b(\tau )=\mathrm{\Delta }N_b^{(i)}\mathrm{exp}\left(-\frac{1}{2\mathrm{\Delta }\eta }\int _{\tau _i}^\tau \frac{d\tau }{\tau }\overline{v}(\tau )\right).$$ (10) In the Bjorken scenario, the temperature $`T`$ falls as $`\tau ^{-1/3}`$ and one finds for the remaining fluctuation at freeze-out $$\mathrm{\Delta }N_b(\tau _f)=\mathrm{\Delta }N_b^{(i)}\mathrm{exp}\left(-\frac{3\overline{v}_i}{\mathrm{\Delta }\eta }[1-(T_f/T_i)^{1/2}]\right).$$ (11) For the numbers considered here, the exponent is very close to $`-\overline{v}_i/(2\mathrm{\Delta }\eta )`$, implying that the fluctuation survives if $`\mathrm{\Delta }\eta `$ is larger than $`\overline{v}_i/2\approx 0.33`$. The second effect that can wash out the initial fluctuation is fluctuations in the baryon fluxes exchanged with the neighboring subvolumes. These could eventually replace the initial fluctuation with a thermal fluctuation that is characteristic of the conditions at freeze-out. The total number of baryons entering $`N_b^{(\mathrm{en})}`$ or leaving $`N_b^{(\mathrm{lv})}`$ the subvolume between $`\tau _i`$ and $`\tau _f`$ is given by $$N_b^{(\mathrm{en})}=N_b^{(\mathrm{lv})}=\frac{A}{2}\int _{\tau _i}^{\tau _f}\rho _b(\tau )\overline{v}(\tau )𝑑\tau .$$ (12) A similar calculation yields $`N_b^{(\mathrm{en})}=N_b^{(\mathrm{lv})}\approx N_b^{(i)}\overline{v}_i/2\mathrm{\Delta }\eta `$. $`N_b^{(\mathrm{en})}`$ and $`N_b^{(\mathrm{lv})}`$ fluctuate independently; one therefore expects that the ratio of the mean square fluctuation of the number of exchanged baryons $`N_b^{(\mathrm{ex})}`$ to the average initial fluctuation is: $$\frac{(\mathrm{\Delta }N_b^{(\mathrm{ex})})^2}{(\mathrm{\Delta }N_b^{(i)})^2}\approx \frac{\overline{v}_i}{\mathrm{\Delta }\eta },$$ (13) which is smaller than unity for $`\mathrm{\Delta }\eta >\overline{v}_i\approx 0.65`$. We conclude that the short time between hadronization and final freeze-out precludes the readjustment of net baryon number fluctuations in rapidity bins $`\mathrm{\Delta }\eta \lesssim 1`$.
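The exponent of Eq. (11) can be evaluated directly (the $`\mathrm{\Delta }\eta `$ values below are illustrative choices):

```python
# Survival estimate of Eqs. (10)-(11): T_i = 170 MeV, T_f = 120 MeV and
# vbar = sqrt(8T/(pi*M)) with M = 1 GeV give an exponent close to
# vbar_i/(2*deta), as stated in the text.
import math

T_i, T_f, M = 0.170, 0.120, 1.0          # GeV
vbar_i = math.sqrt(8*T_i/(math.pi*M))    # ~0.65 (in units of c)
for deta in (0.5, 1.0):
    expo = 3*vbar_i*(1 - math.sqrt(T_f/T_i)) / deta
    print(f"deta = {deta}: exponent = {expo:.2f} "
          f"(vbar_i/(2*deta) = {vbar_i/(2*deta):.2f}), "
          f"surviving fraction = {math.exp(-expo):.2f}")
```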
A similar calculation applies to net charge fluctuations. Several refinements of our estimate are possible but are expected to partially cancel each other: Additional transverse expansion lets the temperature drop faster than in the Bjorken scenario. During hadronization, cooling is impeded by the large change in the entropy density between QGP and HG. And finally, the short mean free path of baryons in hot hadronic matter will significantly reduce our above estimates of the dissipation of an initial-state fluctuation. In conclusion, we have argued that the difference in magnitude of local fluctuations of the net baryon number and net electric charge between confined and deconfined hadronic matter is partially frozen at an early stage in relativistic heavy-ion collisions. These fluctuations may thus be useful probes of the temporary formation of a deconfined state in such collisions. The event-by-event fluctuations of the two suggested observables, for collisions with a fixed value of the transverse energy $`dE_\mathrm{T}/dy`$ or of the energy measured in a zero-degree calorimeter, would be appropriate quantities to test our predictions. Further discrimination can be achieved by measuring the beam energy dependence of the fluctuations: In the QGP the ratio $`(\mathrm{\Delta }Q/\mathrm{\Delta }N_b)^2=\frac{5}{2}`$ of charge to baryon number fluctuations is a beam-energy independent constant; in the HG phase it shows a significant beam energy dependence between SPS and RHIC/LHC energies. This work was supported by the U.S. Department of Energy, the Japanese Ministry of Education, Science, and Culture, and the Japanese Society for the Promotion of Science. B.M. thanks H. Minakata (Tokyo Metropolitan University) for his hospitality. We are indebted to K. Rajagopal for valuable discussions. Note added: After finishing this work we received a paper by Jeon and Koch who discuss similar issues. At the SPS they get $`(\mathrm{\Delta }Q)^2/S|_{\mathrm{HG}}\approx 0.13`$, which is more than twice our value, due to a smaller resonance decay correction to $`(\mathrm{\Delta }Q)^2`$ (30% instead of our 50%) and their omission of a 35% extra contribution to $`S`$ from heavy particles (mostly the net baryons and strange hadrons).
# Left-right asymmetries and exotic vector–boson discovery in lepton-lepton colliders ## I Introduction Any extension of the electroweak standard model (ESM) necessarily implies the existence of new particles. We can have a rich scalar-boson sector if there are several Higgs-boson multiplets, or have more vector and scalar fields in models with a larger gauge symmetry, as in the left–right symmetric and in 3-3-1 models , or we can also have at the same time more scalar, fermion, and vector particles, as in the supersymmetric extensions of the ESM . If in a given model all the new particles contribute to all observables, it will be very difficult to identify their contribution in the usual and exotic processes. In some models the contributions of the scalar bosons are not suppressed by the fermion mass and they can have the same strength as the fermion–vector-boson coupling. Hence, we can ask ourselves if there exist observables and/or processes which allow us to distinguish between the contributions of charged and neutral scalar bosons and those of the vector bosons. In Ref. it was noted that the left-right (L-R) asymmetries in diagonal lepton–lepton scattering are insensitive to the contribution of doubly-charged scalar fields but are quite sensitive to doubly-charged vector field contributions. On the other hand, in non-diagonal scattering (as $`\mu ^{-}e^{-}`$) those asymmetries are sensitive to the existence of an extra neutral vector boson $`Z^{\prime }`$ . Here we will extend our previous analysis by considering a detailed study of the L-R asymmetries in order to analyse their capabilities in detecting new physics. The outline of the paper is the following: In Sec. II we define the asymmetries; in Sec. III we show the interaction lagrangians of the models we are considering here. The results and experimental considerations are given in Sec. IV, and our conclusions appear in the last section. ## II The L-R asymmetries The left-right asymmetry for the process $`l^{-}l^{\prime -}\rightarrow l^{-}l^{\prime -}`$ with one of the particles being unpolarized is defined as $$A_{RL}(ll^{\prime }\rightarrow ll^{\prime })\equiv A_{RL}(ll^{\prime })=\frac{d\sigma _R-d\sigma _L}{d\sigma _R+d\sigma _L},$$ (1) where $`d\sigma _{R(L)}`$ is the differential cross section for one right (left)-handed lepton $`l`$ scattering on an unpolarized lepton $`l^{\prime }`$ and where $`l,l^{\prime }=e,\mu `$. That is $$A_{RL}(ll^{\prime })=\frac{(d\sigma _{RR}+d\sigma _{RL})-(d\sigma _{LL}+d\sigma _{LR})}{(d\sigma _{RR}+d\sigma _{RL})+(d\sigma _{LL}+d\sigma _{LR})},$$ (2) where $`d\sigma _{ij}`$ denotes the cross section for incoming leptons with helicity $`i`$ and $`j`$, respectively, and they are given by $$d\sigma _{ij}\propto \sum _{kl}|M_{ij;kl}|^2,i,j;k,l=L,R.$$ (3) Notice that when the scattering is diagonal, $`l=l^{\prime }=e,\mu `$, $`d\sigma _{RL}=d\sigma _{LR}`$, so the asymmetry in Eq. (2) is equal to the asymmetry defined as $`(d\sigma _{RR}-d\sigma _{LL})/(d\sigma _{RR}+d\sigma _{LL})`$. For practical purposes, for the non-diagonal ($`e\mu \rightarrow e\mu `$) case of the $`A_{RL}`$ asymmetry, we will focus on the scattering of polarized muons by unpolarized electrons. Another interesting possibility is the case when both leptons are polarized.
We can define an asymmetry $`A_{R;RL}`$ in which one beam is always in the same polarization state, say right-handed, and the other is either right- or left-handed polarized (similarly we can define $`A_{L;LR}`$): $$A_{R;RL}=\frac{d\sigma _{RR}-d\sigma _{RL}}{d\sigma _{RR}+d\sigma _{RL}},A_{L;RL}=\frac{d\sigma _{LR}-d\sigma _{LL}}{d\sigma _{LL}+d\sigma _{LR}}.$$ (4) In this case, when the non-diagonal scattering is considered, we will assume that the muon beam always has the same polarization and the electron one can have both the left and the right polarizations. We can integrate over the scattering angle and define the asymmetry $`A_{RL}`$ as $$A_{RL}=\frac{(\int d\sigma _{RR}+\int d\sigma _{RL})-(\int d\sigma _{LL}+\int d\sigma _{LR})}{(\int d\sigma _{RR}+\int d\sigma _{RL})+(\int d\sigma _{LL}+\int d\sigma _{LR})},$$ (5) where $`\int d\sigma _{ij}\equiv \int _{5^o}^{175^o}d\sigma _{ij}`$. A similar expression can be written for $`A_{R;RL}`$. ## III The models We study here the asymmetries defined above in the context of two models: the electroweak standard model (ESM) and a model having a doubly-charged bilepton vector field ($`U_\mu ^{--}`$) and an extra neutral vector boson $`Z^{\prime }`$ . The latter model also has two doubly-charged scalar bileptons but, since their contributions cancel out in the numerator of the asymmetries, we do not consider them in this study. We identify the case under study by using the $`\{\mathrm{ESM}\}`$, $`\{\mathrm{ESM}+\mathrm{U}\}`$, and $`\{\mathrm{ESM}+\mathrm{Z}^{\prime }\}`$ labels in cross sections and asymmetries. In the context of the electroweak standard model, at the tree level, the relevant part of the ESM lagrangian is $$\mathcal{L}_F=-\sum _i\frac{gm_i}{2M_W}\overline{\psi }_i\psi _iH^0-e\sum _iq_i\overline{\psi }_i\gamma ^\mu \psi _iA_\mu -\frac{g}{2\mathrm{cos}\theta _W}\sum _i\overline{\psi }_i\gamma ^\mu (g_V^i-g_A^i\gamma ^5)\psi _iZ_\mu ,$$ (6) $`\theta _W\equiv \mathrm{tan}^{-1}(g^{\prime }/g)`$ is the weak mixing angle, $`e=g\mathrm{sin}\theta _W`$ is the positron electric charge, with $`g`$ such that $$g^2=\frac{8G_FM_W^2}{\sqrt{2}};\mathrm{or}g^2/\alpha =4\pi /\mathrm{sin}^2\theta _W,$$ (7) with $`\alpha \approx 1/128`$; and the vector and axial neutral couplings are $$g_V^i\equiv t_{3L}(i)-2q_i\mathrm{sin}^2\theta _W,g_A^i\equiv t_{3L}(i),$$ (8) where $`t_{3L}(i)`$ is the weak isospin of the fermion $`i`$ and $`q_i`$ is the charge of $`\psi _i`$ in units of $`e`$. The charged current interactions in a model having a doubly-charged vector boson, in terms of the physical basis, are given by $$\frac{g}{\sqrt{2}}\left[\overline{\nu }_L\gamma ^\mu E_L^{\nu \dagger }E_L^ll_LW_\mu ^++\overline{l}_L^c\gamma ^\mu E_R^{lT}E_L^\nu \nu _LV_\mu ^+-\overline{l}_L^c\gamma ^\mu E_R^{lT}E_L^ll_LU_\mu ^{++}\right]+H.c.,$$ (9) with $`l_L^{\prime }=E_L^ll_L,l_R^{\prime }=E_R^ll_R,\nu _L^{\prime }=E_L^\nu \nu _L`$, the primed (unprimed) fields denoting symmetry (mass) eigenstates. We see from Eq. (9) that for massless neutrinos we have no mixing in the charged current coupled to $`W_\mu ^+`$, but we still have mixing in the charged currents coupled to $`V_\mu ^+`$ and $`U_\mu ^{++}`$. That is, if neutrinos are massless we can always choose $`E_L^{\nu \dagger }E_L^l=1`$. However, the charged currents coupled to $`V_\mu ^+`$ and $`U_\mu ^{++}`$ are not diagonal in flavor space and the mixing matrix $`K=E_R^{lT}E_L^\nu `$ has three angles and three phases. (An arbitrary $`3\times 3`$ unitary matrix has three angles and six phases. In the present case, however, the matrix $`𝒦`$ is determined entirely by the charged lepton sector, so we can rotate away only three phases ).
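As a quick consistency check of the two forms of Eq. (7) as reconstructed above (the $`G_F`$ and $`M_W`$ values below are standard inputs, not taken from this paper):

```python
# g^2 = 8*G_F*M_W^2/sqrt(2) together with e = g*sin(theta_W) and
# e^2 = 4*pi*alpha implies sin^2(theta_W) = 4*pi*alpha/g^2, which lands at the
# measured ~0.23, so the two expressions cohere.
import math

G_F, M_W, alpha = 1.166e-5, 80.4, 1.0/128.0   # GeV^-2, GeV, alpha at M_W scale
g2 = 8 * G_F * M_W**2 / math.sqrt(2)          # ~0.43
sin2w = 4 * math.pi * alpha / g2
print(f"g^2 = {g2:.3f}, sin^2(theta_W) = {sin2w:.3f}")
```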
The total width of the $`U`$-boson ($`\mathrm{\Gamma }_U^{\text{total}}`$) is a calculable quantity in the model once we know all the $`U`$-boson couplings, which are derived from the 3-3-1 gauge-invariant lagrangian. However, a complete computation of $`\mathrm{\Gamma }_U`$ is out of the scope of this paper, because in this case some realistic hypotheses concerning the masses of the exotic scalars and quarks would have to be made. Thus, we will only consider the partial width due to the $`U^{--}\rightarrow l^{-}l^{-}`$ decay. In the limit where all the lepton masses are negligible we have: $$\mathrm{\Gamma }_U^{\text{total}}\approx \mathrm{\Gamma }(U^{--}\rightarrow \text{leptons})=\sum _{i,j}\frac{G_F}{6\sqrt{2}\pi }M_W^2M_U|K_{ij}|^2$$ (10) where $`i,j`$ run over the $`e,\mu `$, and $`\tau `$ leptons and $`K_{ij}`$ is a mixing matrix in flavor space. From the expression above we can write $`\mathrm{\Gamma }_U^{\text{total}}=\sum _i\mathrm{\Gamma }_{ii}+\frac{1}{2}\sum _{i\ne j}\mathrm{\Gamma }_{ij}`$ and, assuming that the matrix $`K`$ is almost diagonal, we can neglect $`\mathrm{\Gamma }_{ij}`$ for $`i\ne j`$ and consider for practical purposes that $`\mathrm{\Gamma }_U^{\text{total}}=3\times \mathrm{\Gamma }_{ii}`$. In our numerical applications $`\mathrm{\Gamma }_U^{\text{total}}`$ is a varying function of the $`U`$-boson mass. For instance, for $`M_U=300`$ GeV we have $`\mathrm{\Gamma }_U^{\text{total}}\approx 2.5`$ GeV. In the model there is also a $`Z^{\prime }`$ neutral vector boson which couples to the leptons as follows: $$\mathcal{L}_{NC}^{Z^{\prime }}=\frac{g}{2c_W}\left[\overline{l}_{aL}\gamma ^\mu L_ll_{aL}+\overline{l}_{aR}\gamma ^\mu R_ll_{aR}+\overline{\nu }_{aL}\gamma ^\mu L_\nu \nu _{aL}\right]Z_\mu ^{\prime },$$ (11) with $`L_l=L_\nu =(1-4s_W^2)^{1/2}/\sqrt{3}`$ and $`R_l=2L_l`$. Notice the leptophobic character of $`Z^{\prime }`$. In this case we have no concerns about the $`Z^{\prime }`$ width, because this neutral boson is only exchanged in the $`t`$-channel. We will consider the process $$l^{-}(p_1,\lambda )+l^{\prime -}(q_1,\mathrm{\Lambda })\rightarrow l^{-}(p_2,\lambda ^{\prime })+l^{\prime -}(q_2,\mathrm{\Lambda }^{\prime }),$$ (12) where $`q=p_2-p_1=q_2-q_1`$ is the transferred momentum. As we said before, we will neglect the electron mass but not the muon mass, i.e., $`E=|\stackrel{}{p}_e|`$ for the electron and $`K^2-|\stackrel{}{q}_\mu |^2=m_\mu ^2`$ for the muon. In the non-diagonal elastic scattering in the standard model we have only the $`t`$-channel contribution. The relevant amplitudes for the ESM, $`\{\mathrm{ESM}+\mathrm{U}\}`$ and $`\{\mathrm{ESM}+\mathrm{Z}^{\prime }\}`$ models are given in the appendices of Ref. (Ref. ) for the diagonal (non-diagonal) case. ## IV Results ### A The $`U`$ boson We start this section by considering the contributions of the doubly-charged vector boson $`U`$ to the asymmetries; it contributes uniquely via the $`s`$-channel for a doubly-charged initial state. In Fig. 1 we see that the angular dependence of the $`A_{RL}`$ asymmetry, taking into account the $`U`$ contribution, presents a relatively different behavior with respect to the ESM for a wide range of $`U`$-masses. Notice that the lines are considerably separated even for those values of the $`U`$-mass that are not close to $`\sqrt{s}`$. Notice also that for a $`U`$-mass lower than $`\sqrt{s}`$ we basically reproduce the ESM result for $`\theta \rightarrow 0`$ and $`\pi /2`$; the largest difference with the ESM occurs for $`\theta `$ in the interval 0.5–1. The behavior of $`A_{RL}`$ for the ESM+U as a function of $`M_U`$ is shown in Fig. 2 for a fixed scattering angle and for several values of $`\sqrt{s}`$.
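Equation (10) is easy to evaluate; a minimal sketch (assuming $`|K_{ii}|\approx 1`$ and three diagonal channels, as in the text) reproduces the quoted width:

```python
# Gamma_U(total) ~ 3 * G_F*M_W^2*M_U/(6*sqrt(2)*pi) for M_U = 300 GeV.
import math

G_F, M_W = 1.166e-5, 80.4                      # GeV^-2, GeV (standard inputs)

def gamma_channel(M_U, K2=1.0):
    """Partial width U -> l l for one channel with |K_ij|^2 = K2, Eq. (10)."""
    return G_F * M_W**2 * M_U * K2 / (6 * math.sqrt(2) * math.pi)

M_U = 300.0
print(f"Gamma_U(total) ~ {3*gamma_channel(M_U):.2f} GeV")   # ~2.5 GeV
```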
We see that this asymmetry is essentially negative and that its maximum value is zero and occurs at the resonance point $`M_U=\sqrt{s}`$. This is due to the fact that at the resonance the numerator of $`A_{RL}`$, as defined in Eq. (1), cancels out no matter the value of $`M_U`$. On the other hand, the value of $`M_U`$ governs the width of the curves around the resonance point. This particular feature is better seen in Fig. 3. In this figure we show the $`A_{RL}`$ asymmetry as a function of the center-of-mass energy $`E_{CM}=\sqrt{s}`$ for some values of the $`U`$-mass, and it is clearly seen that not only at the peak but also for a considerably large range of masses around the peak, the curves representing the respective $`U`$ contribution are significantly separated from the ESM one. It means that this asymmetry is very sensitive to the $`U`$-boson even in the case where the $`U`$-mass is larger than $`\sqrt{s}`$; when the $`U`$-mass is lower than $`\sqrt{s}`$ we reproduce the ESM results. In Fig. 4 and Fig. 5 we show the effects of the $`U`$-boson on the $`A_{R;RL}`$ asymmetry, defined in Eq. (4), and as it behaves qualitatively like $`A_{RL}`$ we come to the same conclusions as for $`A_{RL}`$. We must note that near the $`U`$-resonance the $`A_{R;RL}`$ asymmetry is negative. However, in this case, polarization of both beams must be available. The integrated asymmetry $`A_{RL}`$ defined in Eq. (5) is shown in Fig. 6. There we can see that while the ESM curve keeps an almost constant value (0.025-0.031) for $`0.5<\sqrt{s}<2`$ TeV, the ESM+U curves go from zero, for $`M_U`$ far from $`\sqrt{s}`$, to a very pronounced peak ($`0.25`$) at the resonance points. In Fig. 7, for the sake of detectability, we show the quantity $`\delta \%`$ defined by: $$\delta \%=\frac{A_{RL}^{ESM+U}-A_{RL}^{ESM}}{A_{RL}^{ESM}}\times 100,$$ (13) which in this case stands for the percent deviation of $`A_{RL}^{ESM+U}`$ from $`A_{RL}^{ESM}`$. There we can see that a wide range of $`U`$-masses can be probed at $`e^{-}e^{-}`$ colliders. Next we study the effect of non-negligible initial- and final-state fermion masses by considering the $`\mu ^{-}\mu ^{-}\rightarrow \mu ^{-}\mu ^{-}`$ process in a muon collider. The results are given in Fig. 8. There we can see that, for $`M_U=500`$ GeV, below 300 GeV the muon-mass effect is in evidence, differing appreciably from the electron-electron case independently of the $`U`$-contribution; for higher energies the lepton mass has no effect at all in either model. Between 300 and 400 GeV all the curves are coincident and we cannot distinguish among models or leptons. Above 400 GeV it is the $`U`$-effect which dominates, and it is the same for electrons and muons. The effect of the $`U`$-resonance is well evident, and even above the resonance there is an almost constant difference between the ESM+U and ESM asymmetries, showing that the $`A_{RL}`$ asymmetry is still a sensitive parameter for the $`U`$ discovery for $`\sqrt{s}>M_U`$ in lepton-lepton colliders. ### B The $`Z^{\prime }`$ boson In order to search for new physics in the neutral vector-boson sector it is worth considering the non-diagonal process $`\mu ^{-}e^{-}\rightarrow \mu ^{-}e^{-}`$.
In this case, assuming that the couplings of the $`U`$-boson with leptons are almost diagonal ($`K_{ii}\approx 1`$, as assumed for $`\mathrm{\Gamma }_U`$), the $`s`$-channel $`U`$-boson exchange will be negligible and, provided that the $`Z^{\prime }`$ couples to leptons diagonally, the only contributions to this process in the ESM+Z′ will be the $`t`$-channel ones, i.e., the contributions of $`\gamma ,Z`$, and $`Z^{\prime }`$. The angular dependence of $`A_{RL}`$ is shown in Fig. 9, where we can see that a $`Z^{\prime }`$ contribution is clearly distinguished from the ESM one for a wide range of $`Z^{\prime }`$-masses around a given $`E_\mu =\sqrt{s}/2`$. (In the figures concerning the $`Z^{\prime }`$-boson, since there is no $`s`$-channel, we specify the energy by the muon-beam energy $`E_\mu `$.) As expected, the $`A_{RL}`$ asymmetry is more sensitive to a relatively light $`Z^{\prime }`$ boson. We come to the same conclusion from Fig. 10, in which we show the $`A_{RL}`$ asymmetry as a function of $`E_\mu `$. The sensitivity of this asymmetry to the $`Z^{\prime }`$-mass is shown in Fig. 11. Contrary to the case of the search for the $`U`$-boson, the asymmetry $`A_{R;RL}`$ is not sensitive to the extra neutral vector boson. In this case, the potential capabilities of the asymmetries in discovering new neutral vector bosons are better explored by considering the integrated asymmetry $`A_{RL}`$. The integration over the scattering angle of the $`t`$-channel $`Z^{\prime }`$ exchange contribution produces curves that are clearly separated, depending on the $`Z^{\prime }`$-mass, and which are also clearly distinguishable from the ESM curve for a wide range of masses. See Fig. 12. We have also computed the asymmetries taking into account a $`Z^{\prime }`$ which couples to leptons with the same couplings as the standard Z boson but with a different mass. Although these ESM couplings are stronger than those of the 3-3-1 model, they have no substantial effect on the asymmetries: the results are very similar to the ones shown in Fig. 11. For the $`e\mu `$ scattering there are also contributions coming from the neutral scalar sector of the model. However, as in all the scalar contributions, the pure scalar terms cancel out in the numerator of the asymmetry and the interference terms are numerically negligible . ### C Observability Based on the figures we have shown throughout the text, we have claimed that the values of the asymmetries, when there is an extra contribution of a new vector boson, are different enough from the ESM ones to allow for the discovery of the referred bosons. However, we must be sure that there is enough statistics to measure these asymmetries. In order to provide some statistical analysis we assume a conservative value for the luminosity, $`\mathcal{L}=1\text{fb}^{-1}\text{yr}^{-1}=10^{32}\text{cm}^{-2}\text{s}^{-1}`$, for the $`e^{-}e^{-}`$, $`\mu ^{-}\mu ^{-}`$ and $`\mu ^{-}e^{-}`$ colliders, and compute the expected number of events ($`N`$) based on the unpolarized integrated cross section for each process. For the $`e^{-}e^{-}\rightarrow e^{-}e^{-}`$ or $`\mu ^{-}\mu ^{-}\rightarrow \mu ^{-}\mu ^{-}`$ processes, the ESM cross section is relatively small: it goes from $`0.05`$ nb at $`\sqrt{s}=0.5`$ TeV to $`1.5\times 10^{-3}`$ nb at $`\sqrt{s}=2`$ TeV. These values correspond to $`5\times 10^4`$ and $`1.5\times 10^3`$ events/yr, respectively. Then, computing $`\sqrt{N}/N`$ we get $`4\times 10^{-3}`$ for the first case and $`2\times 10^{-2}`$ for the second one. This is an indication that the ESM asymmetries can be measured.
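The event counts and precisions quoted here and in the next paragraph follow from $`N=\sigma \mathcal{L}`$; a short script with the quoted cross sections reproduces them:

```python
# N = sigma * L with L = 1 fb^-1/yr; statistical precision estimated as
# sqrt(N)/N. Cross sections are the ESM values quoted in the text.
import math

L_fb = 1.0                                     # fb^-1 per year
cases = {"e-e-,  sqrt(s) = 0.5 TeV": 0.05,     # nb
         "e-e-,  sqrt(s) = 2 TeV":   1.5e-3,
         "mu-e-, E_mu = 0.5 TeV":    3.0,
         "mu-e-, E_mu = 2.0 TeV":    0.2}
for name, sigma_nb in cases.items():
    N = sigma_nb * 1e6 * L_fb                  # 1 nb = 10^6 fb
    print(f"{name}: N ~ {N:.1e}/yr, sqrt(N)/N ~ {1/math.sqrt(N):.1e}")
```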
Note that the asymmetries we have computed here are relatively large: $`A_{RL}^{ESM}\approx A_{R;RL}^{ESM}\sim 𝒪(10^{-1})`$ for a fixed scattering angle, and of the order $`𝒪(10^{-2})`$ for the integrated ones, as shown in Figs. 1-5 and Fig. 6, respectively. On the other hand, this reaction is so sensitive to the $`U`$-boson contribution that there is an enormous enlargement of the cross section and consequently of the statistics. In Tab. I we show the relevant parameters depending on the $`U`$-boson mass and, for brevity, for only two values of $`\sqrt{s}`$. There we can see that there is enough precision to measure the asymmetries for the range of masses and energies we have considered. For the $`U`$-boson discovery we can study the cross section directly, provided that it is considerably different from that of the ESM . However, the study of the asymmetry gives us more qualitative information since, contrary to the cross section, it filters the vector-nature contribution of the $`U`$-boson: it was previously shown that the scalar-boson contributions, which are present in the 3-3-1 model, cancel out in the asymmetry numerator. For the $`\mu ^{-}e^{-}\rightarrow \mu ^{-}e^{-}`$ process, the integrated cross sections for the ESM are larger than for the electron diagonal process. We find $`3`$ nb for $`E_\mu =0.5`$ TeV and $`0.2`$ nb for $`E_\mu =2.0`$ TeV. The corresponding numbers of events are $`3\times 10^6`$ and $`2\times 10^5`$, respectively, for the same luminosity used before. In this case the ratio $`\sqrt{N}/N`$ gives $`5.7\times 10^{-4}`$ and $`2.2\times 10^{-3}`$, respectively, which provides enough precision to measure $`A_{RL}^{ESM}`$, since it is of the order $`𝒪(10^{-2})`$, as can be seen from Figs. 9-12. For the ESM+Z′ case, although the $`Z^{\prime }`$ contribution can affect significantly the values of the asymmetry $`A_{RL}`$, as has been shown, it only slightly modifies the cross section values. It means that the numbers of events for the ESM+Z′ are similar to those of the ESM and hence we have enough precision to measure $`A_{RL}^{ESM+Z^{\prime }}`$ too. Once again we note that the asymmetries are more sensitive than the cross section itself in looking for the $`Z^{\prime }`$ discovery. ## V Conclusions Here we have generalized the analysis of Ref. and have shown that the L-R asymmetries in the diagonal ($`e^{-}e^{-},\mu ^{-}\mu ^{-}`$) lepton scattering can be the appropriate observable to discover doubly-charged vector bosons $`U`$ even for values of $`M_U`$ and $`\sqrt{s}`$ far away from the resonance condition. Although the cross sections may have important contributions from the scalar fields (doubly-charged Higgs bosons), these contributions cancel out in the numerator of the L-R asymmetries. On the other hand, the contribution of the extra neutral $`Z^{\prime }`$, leptophobic or with the same couplings as the standard model, gives small contributions to the diagonal $`e^{-}e^{-},\mu ^{-}\mu ^{-}`$ scattering, but it gives an important contribution to the non-diagonal $`\mu ^{-}e^{-}`$ case. Hence, both $`U^{--}`$ and $`Z^{\prime }`$ vector bosons can potentially be discovered in this sort of process by measuring the L-R asymmetries. Since the couplings of both particles with matter are known in a given model, once their masses were known, other processes like exotic decays could be used to study the respective contributions of the scalar fields present in the model. ###### Acknowledgements.
This work was supported by Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP), Conselho Nacional de Ciência e Tecnologia (CNPq) and by Programa de Apoio a Núcleos de Excelência (PRONEX).
# Fragment Kinetic Energies and Modes of Fragment Formation ## Abstract Kinetic energies of light fragments ($`A\le 10`$) from the decay of target spectators in <sup>197</sup>Au + <sup>197</sup>Au collisions at 1000 MeV per nucleon have been measured with high-resolution telescopes at backward angles. Except for protons, and apart from the observed evaporation components, the kinetic-energy spectra exhibit slope temperatures of about 17 MeV, independent of the particle species, but not corresponding to the thermal or chemical degrees of freedom at breakup. It is suggested that these slope temperatures may reflect the intrinsic Fermi motion and thus the bulk density of the spectator system at the instant of becoming unstable. PACS numbers: 25.70.Pq, 21.65.+f, 25.70.Mn Statistical multifragmentation models have been found to be remarkably successful in describing the partition space populated in nuclear reactions leading to multi-fragment emission . This is particularly true for spectator decays following heavy-ion collisions at relativistic bombarding energies. Fragment yields and fragment correlations have been reproduced with high accuracy , including their dispersions around the mean behavior . The model temperatures are in good agreement with measured isotope temperatures, and the isotopic yields from which these temperatures are derived are well reproduced . The mean kinetic energies of the produced particles and fragments, on the other hand, are not universally accounted for if the same set of model parameters is used . Measured kinetic energies or, equivalently, the slope temperatures obtained from Maxwell-Boltzmann fits to the kinetic-energy spectra have been found to considerably exceed the predicted values. This holds even though the effects of Coulomb repulsion and sequential decay cause the calculated slope temperatures to be higher than the equilibrium temperatures in the model description. For the <sup>197</sup>Au + <sup>197</sup>Au reaction studied in this paper, the measured isotope temperatures and the calculated equilibrium temperatures are 6 to 8 MeV for most of the impact-parameter range; the calculated slope temperatures for light particles are about 10 MeV while the measured slope temperatures exceed 15 MeV . The qualitative difference between the chemical or thermal equilibrium temperatures and the kinetic temperatures has been observed for a variety of reactions, and numerous reasons for it have been presented . Preequilibrium or pre-breakup emissions are likely explanations for high-energy components in light-particle spectra . In some cases, the excess kinetic energies have been ascribed to collective flow even though a characteristic proportionality to the fragment mass is not evident in the data . This situation has caused other authors to question more generally the applicability of statistical descriptions of spectator fragmentation . An explanation in terms of the Goldhaber model, which is favored here, was suggested long ago but, so far, has not been generally adopted. In this Letter, we analyze kinetic-energy spectra for fragment and light-charged-particle emission from target spectators following collisions of <sup>197</sup>Au + <sup>197</sup>Au at 1000 MeV per nucleon. We will show that the deduced slope temperatures and the much lower chemical breakup temperatures can be consistently understood if the intrinsic constituent motion of the decaying fermionic system is taken into account.
With this interpretation, the slope temperatures reflect the bulk density of the system prior to its disintegration, and thus provide information on the mechanism of fragment formation.

Beams of <sup>197</sup>Au with incident energy 1000 MeV per nucleon from the heavy-ion synchrotron SIS were used to bombard <sup>197</sup>Au targets of 25-mg/cm<sup>2</sup> areal thickness. As part of a larger experimental setup, three high-resolution telescopes were placed at backward angles for detecting the products of the target-spectator decay. Each telescope consisted of three Si detectors with thicknesses 50, 300, and 1000 $`\mu `$m and of a 4-cm long CsI(Tl) scintillator with photodiode readout, and subtended a solid angle of 7.0 msr. The products of the projectile decay were measured with the time-of-flight wall of the ALADIN spectrometer, and $`Z_{bound}`$ was determined event-by-event. The sorting variable $`Z_{bound}`$ is defined as the sum of the atomic numbers $`Z_i`$ of all projectile fragments with $`Z_i\ge 2`$. $`Z_{bound}`$ reflects the variation of the charge of the primary spectator system and is therefore correlated with the impact parameter of the reaction. Because of the symmetry of the collision system, the mean values of $`Z_{bound}`$ for the target and the projectile spectators within the same event class have been assumed to be identical. First results of these measurements have been presented previously, including some of the kinetic-energy data that will be discussed in the following.

Energy spectra of protons, measured at $`\theta _{lab}=150^{\circ}`$ and sorted into eight bins of $`Z_{bound}`$, are shown in Fig. 1 (left panel). They are characterized by a hard component and an additional soft component that is most clearly identified at large $`Z_{bound}`$. Fits using two Maxwellians yield temperature parameters of very different magnitude which are both nearly independent of $`Z_{bound}`$ (Fig. 1, right panels). The slope temperature $`T`$ of about 5 to 7 MeV of the soft component is of the same order as the measured breakup temperatures. Its intensity, at the same time, is strongly correlated with that of heavier fragmentation products, which suggests a, perhaps partial, interpretation as evaporation from highly excited residual nuclei. For the <sup>197</sup>Au + <sup>12</sup>C reaction at the same energy, an equilibrium proton component with nearly identical properties was reported by Hauger et al.

Two components have also been identified in the spectra of $`\alpha `$ particles (Fig. 2) and in the neutron data measured with LAND, with low-temperature components typical for evaporation. The high-temperature components of protons and neutrons seem to have their origin not only in the breakup stage but also in the earlier cascading stages of the collision; their slope parameters are rather large and, as established for the neutron case, vary with the bombarding energy. The high-temperature component of $`\alpha `$ particles decreases from $`T\approx 17`$ MeV for small $`Z_{bound}`$ to $`T\approx 13`$ MeV for large $`Z_{bound}`$ (see below). Its mean value of 15 MeV is close to the slope temperatures observed for other species with $`A\ge 2`$, which exhibit Maxwellian spectra to a good approximation (Fig. 2). Their slope temperatures were obtained from single-component fits with three-parameter functions that included a Coulomb potential $`V_c`$. The results for the interval $`20\le Z_{bound}\le 60`$ are shown in Fig. 3.
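Schematically, the two-component decomposition applied to these spectra looks as follows. This is only an illustration with synthetic data and a simplified fit function (a linear-prefactor Maxwellian without the Coulomb shift used in the actual analysis); the slope values are chosen near those quoted in the text, not taken from the data.

```python
import numpy as np
from scipy.optimize import curve_fit

def maxwellian(E, A, T):
    # Schematic Maxwell-Boltzmann shape dN/dE ~ E * exp(-E/T); the exact
    # prefactor and the Coulomb shift of the real analysis are omitted.
    return A * E * np.exp(-E / T)

def two_component(E, A1, T1, A2, T2):
    # Soft (evaporation-like) plus hard component.
    return maxwellian(E, A1, T1) + maxwellian(E, A2, T2)

# Synthetic proton spectrum with Poisson fluctuations (illustrative only).
E = np.linspace(5.0, 150.0, 60)                 # kinetic energy in MeV
truth = two_component(E, 1.0e4, 6.0, 2.0e3, 40.0)
rng = np.random.default_rng(1)
counts = rng.poisson(truth).astype(float)

popt, _ = curve_fit(two_component, E, counts,
                    p0=(1e4, 5.0, 1e3, 30.0),
                    sigma=np.sqrt(counts + 1.0))
A1, T1, A2, T2 = popt
print(f"soft: T = {T1:.1f} MeV   hard: T = {T2:.1f} MeV")
```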
Apart from the protons with $`T\approx 26`$ MeV, all spectra exhibit temperatures that are narrowly dispersed around a mean value of 17 MeV. A mass-invariant temperature, at first sight, seems to provide direct evidence for equilibration of the kinetic degrees of freedom. On the other hand, Coulomb effects should contribute in proportion to the fragment charge, recoil effects may be important for heavier fragments, and the small but finite motion of the target spectator should introduce a collective velocity component.

In order to estimate the magnitude of these effects, a model study was performed which started from initial configurations generated by randomly placing fragments and light particles in a spherical volume, with random momenta corresponding to a given temperature. Experimental charge distributions and an average density $`\rho /\rho _0=0.3`$ were used, where $`\rho _0`$ is the normal nuclear density. N-body Coulomb trajectory calculations were then performed, and asymptotic slope temperatures were determined by fitting, following the same procedure as with the experimental data. The calculations showed that the considered effects are relatively small, perhaps of the order of $`\mathrm{\Delta }T/\mathrm{\Delta }A\approx 0.2`$ MeV, and that they cancel each other to a good approximation at the backward angles chosen for the measurement. The measured slope parameters of $`T\approx 17`$ MeV (Fig. 3) are thus equal to the temperatures to be used in a thermal interpretation of the data. The model study also showed that the Coulomb repulsion contributes considerably to the final kinetic energies. The resulting mean kinetic energies in the center-of-mass frame of the decaying system add up to about 35 to 40 MeV, quite consistent with values obtained in earlier experiments where kinetic energies were derived from the widths of momentum distributions of projectile fragments. In those experiments, the mean kinetic energies were found to be approximately constant over the range of fragment atomic numbers $`Z\le 20`$.

It was suggested many years ago by Goldhaber that the product momenta in fast fragmentation processes may have their origin in the nucleonic Fermi motion within the colliding nuclei. He also pointed out that the resulting behavior is indistinguishable from that of a thermalized system with a rather high temperature. For the Fermi momentum $`p_F\approx 265`$ MeV/c of heavier nuclei the corresponding temperature is $`T=15`$ MeV. This is intriguingly close to the measured slope temperatures $`T=17`$ MeV.

Goldhaber’s idea, initially formulated for cold nuclei, has been extended to the case of expanded fermionic systems at finite temperature by Bauer. We have used the numerical solution reported there, assuming that the temperature of the system is given by the measured breakup temperatures $`T_{\mathrm{HeLi}}`$, derived from the yields of <sup>3,4</sup>He and <sup>6,7</sup>Li isotopes. For the breakup density, two alternative values $`\rho /\rho _0`$ = 1.0 and 0.3 were chosen, which correspond to two significantly different scenarios for the process of fragment production. The higher value is expected for a fast abrasion stage that will induce fragmentation of the residual spectators but not instantly affect the nucleon motion within them. The lower value is in the range usually assumed for multi-fragment breakups of expanded systems.
Here the development of instabilities, as the system enters the spinodal region, will equally cause a rapid fragmentation, so that the nucleonic Fermi motion will contribute to the fragment kinetic energies. For a homogeneous system at a lower than normal density the Fermi motion is reduced, and the effect will be smaller.

The slope temperatures obtained in this way are represented by the lines shown in Fig. 4. The comparison with the data is here restricted to the $`A\le 4`$ isotopes, for which the collected statistics are sufficient for studying the $`Z_{bound}`$ dependence. The recoil factor $`(A_sA)/(A_s1)`$ appearing in the Goldhaber formula can be safely ignored, as the mass number $`A_s`$ of the spectator system is 100 to 150 on average and still about 50 in the bin of smallest $`Z_{bound}`$. Qualitatively, the predicted values are close to those observed. In particular, the rise with decreasing $`Z_{bound}`$ follows as a consequence of the rising breakup temperatures $`T_{\mathrm{HeLi}}`$. Apparently, with assumptions as made in the Goldhaber model, the two different types of temperatures are mutually consistent. The role of the slope temperatures, consequently, is restricted to describing the fragment motion, while $`T_{\mathrm{HeLi}}`$ is the more suitable observable for representing the temperature of the nuclear environment at the breakup stage. Recent calculations with transport models, which incorporate Fermi motion, support this interpretation. The energies of spectator fragments, as measured in the present reaction, are well reproduced, and the coexistence of qualitatively different internal (or local) temperatures and fragment slope temperatures has been demonstrated in these QMD and BUU studies. On the experimental side, the observed similarity of fragment kinetic-energy spectra, measured with different projectiles at various incident energies, is expected from the Goldhaber model. The fluctuations corresponding to the higher slope temperatures may also explain the observed large widths of the intrinsic kinetic-energy distributions in <sup>197</sup>Au fragmentation at 600 MeV per nucleon that are not easily reproduced with statistical multifragmentation models.

A more quantitative comparison with the data shown in Figs. 3 and 4 will favor the prediction for $`\rho /\rho _0`$ = 1.0, corresponding to a fast breakup, over that for $`\rho /\rho _0`$ = 0.3, which is reached after a homogeneous expansion. This would not be unexpected in the present case of spectator fragmentation following collisions of heavy ions and might indicate a limited resilience of nuclear matter to shape deformations. The cited QMD and BUU calculations consistently suggest that the fragments are preformed at an early stage of the collision ($`\approx 50`$ fm/c), before the system has expanded to typical breakup densities. To draw firm conclusions at this time would certainly be premature. The role of secondary decays and possibly other effects would have to be considered, and a moderate collective velocity component as a result of thermal expansion cannot be completely excluded, even though it is not explicitly indicated for the present reaction (Fig. 3).
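Two quick numerical checks of statements made above (our own arithmetic, not taken from the original analysis). Identifying the Goldhaber variance $`p_F^2/5`$ per momentum component with the thermal variance $`m_NT`$ of a Maxwell-Boltzmann distribution gives

$$T=\frac{(p_Fc)^2}{5m_Nc^2}=\frac{(265\ \mathrm{MeV})^2}{5\times 938\ \mathrm{MeV}}\approx 15\ \mathrm{MeV},$$

the value quoted above. And the recoil factor is indeed negligible,

$$\frac{A_sA}{A_s1}=\frac{1004}{1001}\approx 0.97\ \mathrm{and}\ \frac{504}{501}\approx 0.94,$$

i.e., at most a reduction of the variance by about 6%, even in the bin of smallest $`Z_{bound}`$.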
The main purpose of this work is to show that, within the Goldhaber picture of a random superposition of the nucleon momenta, the larger slope temperatures are fully consistent with a breakup at equilibrium temperatures of 6 to 8 MeV, as they are measured and also obtained with statistical multifragmentation models. Because of the sensitivity to the mode of fragment formation, via the density at which the Fermi motion has been established, this interpretation seems rather attractive and deserves further attention.

The authors would like to thank J. Aichelin, C. Fuchs, T. Gaitanos and H.H. Wolter for fruitful discussions. M.B., J.P., and C.S. acknowledge the financial support of the Deutsche Forschungsgemeinschaft under the Contracts No. Be1634/1-1, Po256/2-1, and Schw510/2-1, respectively. This work was supported by the European Community under contract ERBFMGECT950083.
# Parton densities for heavy quarks

## Abstract

We compare parton densities for heavy quarks.

Reactions with incoming heavy (c, b) quarks are often calculated with heavy-quark densities, just like those with incoming light-mass (u, d, s) quarks are calculated with light-quark densities. The heavy-quark densities are derived within the framework of the so-called zero-mass variable flavor number scheme (ZM-VFNS). In this scheme these quarks are described by massless densities which are zero below a specific mass scale $`\mu `$. The latter depends on $`m_c`$ or $`m_b`$. Let us call this scale the matching point. Below it there are $`n_f`$ massless quarks described by $`n_f`$ massless densities. Above it there are $`n_f+1`$ massless quarks described by $`n_f+1`$ massless densities. The latter densities are used to calculate processes with a hard scale $`M\gg m_c,m_b`$. For example, in the production of single top quarks via the weak process $`q_i+b\to q_j+t`$, where $`q_i`$, $`q_j`$ are light-mass quarks in the proton/antiproton, one can argue that $`M=m_t`$ should be chosen as the large scale and that $`m_b`$ can be neglected. Hence the incoming bottom quark can be described by a massless bottom-quark density.

The generation of these densities starts from the solution of the evolution equations for $`n_f`$ massless quarks below the matching point. At and above this point one solves the evolution equations for $`n_f+1`$ massless quarks. However, in contrast to the parameterization of the $`x`$-dependences of the light quarks and the gluon at the initial starting scale, the $`x`$-dependence of the heavy-quark density at the matching point is fixed. In perturbative QCD it is defined by convolutions of the densities for the $`n_f`$ quarks and the gluon with specific operator matrix elements (OME’s), which are now known up to $`O(\alpha _s^2)`$. These matching conditions determine both the ZM-VFNS density and the other light-mass quark and gluon densities at the matching points. Then the evolution equations determine the new densities at larger scales. The momentum sum rule is satisfied for the $`n_f+1`$ quark densities together with the corresponding gluon density.

Parton density sets contain densities for charm and bottom quarks, which generally follow this approach directly or some modification of it. The latest CTEQ densities use $`O(\alpha _s)`$ matching conditions; the $`x`$-dependences of the heavy c- and b-quark densities are zero at the matching points. The MRST densities have more complicated matching conditions, designed so that the derivatives of the deep-inelastic structure functions $`F_2`$ and $`F_L`$ with respect to $`Q^2`$ are continuous at the matching points.

Recently we have provided another set of ZM-VFNS densities, which are based on extending the GRV98 three-flavor densities to four- and five-flavor sets. GRV give the formulae for their LO and NLO three-flavor densities at very small scales. They never produced a c-quark density but advocated that charm quarks should only exist in the final state of production reactions, which should be calculated from NLO QCD with massive quarks. We have evolved their LO and NLO densities across the matching point $`\mu =m_c`$ with $`O(\alpha _s^2)`$ matching conditions to provide LO and NLO four-flavor densities containing massless c-quark densities. Then these LO and NLO densities were evolved between $`\mu =m_c`$ and $`\mu =m_b`$ with four-flavor LO and NLO splitting functions.
At this new matching point the LO and NLO four-flavor densities were then convoluted with the $`O(\alpha _s^2)`$ OME’s to form five-flavor sets containing massless b-quarks. These LO and NLO densities were then evolved to higher scales with five-flavor LO and NLO splitting functions. Note that the $`O(\alpha _s^2)`$ matching conditions should really be used with NNLO splitting functions to produce NNLO density sets. However, the latter splitting functions are not yet available, so we make the approximation of replacing the NNLO splitting functions with NLO ones.

In this short report we would like to compare the charm- and bottom-quark densities in the CS, MRST and CTEQ sets. We concentrate on the five-flavor densities, which are more important for Tevatron physics. In the CS set they start at $`\mu ^2=m_b^2=20.25`$ $`\mathrm{GeV}^2`$. At this scale the charm densities in the CS, MRST98 (set 1) and CTEQ5HQ sets are shown in Figs. 1, 2 and 3, respectively. Since the CS charm density starts off negative at small $`x`$ at $`\mu ^2=m_c^2=1.96`$ $`\mathrm{GeV}^2`$, it evolves less than the corresponding CTEQ5HQ density. At larger $`\mu ^2`$ all the CS curves in Fig. 1 are below those for CTEQ5HQ in Fig. 3, although the differences are small. In general the CS c-quark densities are closer to those of MRST98 (set 1) in Fig. 2.

At the matching point $`\mu ^2=20.25`$ $`\mathrm{GeV}^2`$ the b-quark density also starts off negative at small $`x`$, as can be seen in Fig. 4, which is a consequence of the explicit form of the OME’s. At $`O(\alpha _s^2)`$ the OME’s have nonlogarithmic terms which do not vanish at the matching point and yield a finite function of $`x`$, which is the boundary value for the evolution of the b-quark density. This negative start slows down the evolution of the b-quark density at small $`x`$ as the scale $`\mu ^2`$ increases. Hence the CS densities at small $`x`$ in Fig. 4 are smaller than the MRST98 (set 1) densities in Fig. 5 and the CTEQ5HQ densities in Fig. 6 at the same values of $`\mu ^2`$. The differences between the sets are still small, of the order of five percent at small $`x`$ and large $`\mu ^2`$. Hence it should not really matter which set is used to calculate cross sections for processes involving incoming b-quarks at the Tevatron.

We suspect that the differences between these results for the heavy c- and b-quark densities are primarily due to the different gluon densities in the three sets, rather than to the effects of the different boundary conditions. This could be checked theoretically if both LO and NLO three-flavor sets were provided by MRST and CTEQ at small scales. Then we could rerun our programs to generate sets with $`O(\alpha _s^2)`$ boundary conditions. However, these inputs are not available. We note that CS uses the GRV98 LO and NLO gluon densities, which are rather steep in $`x`$ and generally larger than those of the other sets at the same values of $`\mu ^2`$. Since the discontinuous boundary conditions suppress the charm and bottom densities at small $`x`$, they enhance the gluon densities in this same region (in order that the momentum sum rules are satisfied). Hence the GRV98 three-flavor gluon densities and the CS four- and five-flavor gluon densities are generally significantly larger than those in MRST98 (set 1) and CTEQ5HQ. Unfortunately, experimental data are not yet precise enough to decide which set is the best one.

We end by noting that all these densities are given in the $`\overline{\mathrm{MS}}`$ scheme.
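As an aside, the flavor-number bookkeeping that underlies these densities, with $`n_f`$ massless flavors below a matching point and $`n_f+1`$ above it, can be illustrated with the one-loop running coupling, matched continuously at $`\mu =m_c`$ and $`\mu =m_b`$. This is only a toy sketch in the same spirit; the values $`m_c=1.4`$ GeV, $`m_b=4.5`$ GeV and $`\alpha _s(M_Z)=0.118`$ are illustrative, and the actual density sets are of course generated by solving the NLO evolution equations for the densities themselves.

```python
import math

def alpha_s(mu, alpha_mz=0.118, mz=91.19, mc=1.4, mb=4.5):
    """One-loop running coupling with continuity at the flavor
    thresholds mu = mb and mu = mc (toy illustration only)."""
    def b0(nf):
        return (33.0 - 2.0 * nf) / (12.0 * math.pi)

    def run(a, mu_from, mu_to, nf):
        # One-loop solution of the renormalization-group equation.
        return a / (1.0 + a * b0(nf) * math.log(mu_to**2 / mu_from**2))

    a = alpha_mz                    # start at M_Z with nf = 5
    if mu >= mb:
        return run(a, mz, mu, 5)
    a = run(a, mz, mb, 5)           # match (continuously) at mu = mb
    if mu >= mc:
        return run(a, mb, mu, 4)    # nf = 4 between mc and mb
    a = run(a, mb, mc, 4)           # match at mu = mc
    return run(a, mc, mu, 3)        # nf = 3 below mc

for mu in (1.0, 2.0, 4.5, 10.0, 91.19):
    print(f"mu = {mu:6.2f} GeV   alpha_s = {alpha_s(mu):.4f}")
```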
# Is there a universality of the helix-coil transition in protein models?

Josh P. Kemp†, Ulrich H. E. Hansmann‡, Zheng Yu Chen†

†Dept. of Physics, University of Waterloo, Waterloo, Ontario, N2L 3G1, Canada
‡Dept. of Physics, Michigan Technological University, Houghton, MI 49931-1291, USA

## Abstract

The similarity in the thermodynamic properties of two completely different theoretical models for the helix-coil transition is examined critically. The first model is an all-atomic representation for a poly-alanine chain, while the second model is a minimal helix-forming model that contains no system specifics. Key characteristics of the helix-coil transition, in particular the effective critical exponents of these two models, agree with each other within a finite-size scaling analysis.

PACS: 87.15.He, 87.15-v, 64.70Cn, 02.50.Ng

The importance of understanding the statistical physics of the protein-folding problem has been stressed recently. For instance, it is now often assumed that the energy landscape of a protein resembles a partially rough funnel. Folding occurs by a multi-pathway kinetics, and the particulars of the folding funnel determine the transitions between the different thermodynamic states. This “new view” of folding was derived from studies of minimal protein models which capture only a few, but probably dominant, parameters (chain connectivity, excluded volume, etc.) of real proteins. An implicit yet fundamentally crucial assumption is that the basic mechanism of structural transitions in biological molecules depends solely on gross features of the energy function, not on their details, and that a law of corresponding states can be used to explain dynamics and structural properties of real proteins from studies of related minimal models. This assumption needs to be proven.

An even stronger notion in statistical physics is the universality hypothesis for critical phenomena. The critical exponents are identical for different theoretical models and realistic systems belonging to the same universality class. Many theoretical concepts in protein folding, such as coil-helix or coil-globular transitions, involve phase transitions or phase-transition-like behavior. Thus, one wonders whether physical measurements on two model systems for the same transition would have any “universal” properties. The purpose of this article is to examine these questions for the helix-coil transition in homopolymers of amino acids.

Traditionally, the coil-helix transition is described by theories such as the Zimm-Bragg model, in which the homopolymers are regarded as one-dimensional systems with only local interactions; as such, a true thermodynamic phase transition is impossible. However, recently there have been indications that the coil-helix transition near the transition temperature displays phase-transition-like behavior. We use here finite-size scaling analysis, a common tool in statistical physics, to examine the question of universality of the helix-coil transition in two completely different, illuminating models. On one hand, we have a detailed, all-atomic representation of a homo poly-alanine chain. On the other hand, we have a simple coarse-grained model describing the general features of helix-forming polymers. In this article, our interest lies in finding out how far the similarity of the two models goes.
If the two models yield the same key physical characteristics, then we at least have one concrete example of the validity of the corresponding-states principle or universality hypothesis in biopolymer structures.

Poly-alanine is well known to have high helix propensities in proteins, as demonstrated both experimentally and theoretically. It has been well tested and is generally believed that approximate force fields, such as ECEPP/2 as implemented in the KONF90 program, give protein-structure predictions with a surprising degree of faithfulness. As our first model, we have “synthesized” poly-alanine with $`N`$ residues, in which the peptide-bond dihedral angles were fixed at the value $`180^{\circ}`$ for simplicity. Since one can avoid the complications of electrostatic and hydrogen-bond interactions of side chains with the solvent for alanine (a non-polar amino acid), we follow earlier work and neglect explicit solvent molecules in the current study.

Our second model is a minimalistic view of a helix-forming polymer without atomic-level specifics. A wormlike chain is used to model the backbone of the molecule, while a general directionalized interaction, in terms of a simple square-well form, is used to capture the essence of hydrogen-like bonding. The interaction energy between the residues labeled $`i`$ and $`j`$ is modeled by

$$V_{ij}(𝐫)=\left\{\begin{array}{cc}\infty & r<D\\ v& D\le r<\sigma \\ 0& \sigma \le r\end{array}\right.$$ (1)

where $`v=ϵ[\widehat{𝐮}_i\cdot \widehat{𝐫}_{ij}]^6+ϵ[\widehat{𝐮}_j\cdot \widehat{𝐫}_{ij}]^6`$, $`\widehat{𝐮}_i=(\widehat{𝐫}_{i+1,i})\times (\widehat{𝐫}_{i,i1})`$, $`\widehat{𝐫}_{ij}`$ is the unit vector between monomers $`i`$ and $`j`$, $`D=3/2a`$ is the diameter of a monomer, $`\sigma =\sqrt{45/8}a`$ is the bonding diameter, and $`a`$ is the bond length, while the bond angle is fixed at $`60^{\circ}`$.
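For concreteness, Eq. (1) can be transcribed almost directly into code. The sketch below makes two assumptions that the text leaves implicit: the bonding well is taken as attractive (returning $`v`$ as the well depth), and the local axis vectors $`\widehat{𝐮}_k`$ are normalized after the cross product.

```python
import numpy as np

def pair_energy(R, i, j, eps=1.0, a=1.0):
    """Square-well energy of Eq. (1) between residues i and j of a chain
    with coordinates R (N x 3); i and j must be interior monomers so that
    both neighboring bonds exist."""
    D = 1.5 * a                        # hard-core diameter D = 3/2 a
    sigma = np.sqrt(45.0 / 8.0) * a    # bonding diameter

    def unit(v):
        return v / np.linalg.norm(v)

    def u(k):
        # Local axis from the two bonds adjacent to monomer k,
        # normalized (our convention).
        w = np.cross(unit(R[k + 1] - R[k]), unit(R[k] - R[k - 1]))
        return w / np.linalg.norm(w)

    rij = R[j] - R[i]
    r = np.linalg.norm(rij)
    if r < D:
        return np.inf                  # hard-core overlap
    if r >= sigma:
        return 0.0
    rh = rij / r
    v = eps * np.dot(u(i), rh) ** 6 + eps * np.dot(u(j), rh) ** 6
    return -v                          # attractive well (sign is our choice)
```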
To obtain the thermodynamic properties, we have conducted multicanonical Monte Carlo simulations of both models. In the low-temperature region, where most of the structural changes occur, a typical thermal energy of the order of $`k_BT`$ is much less than a typical energy barrier that the polymer has to overcome. Hence, simple canonical Monte Carlo or molecular dynamics simulations cannot sample statistically independent configurations separated by energy barriers within a finite amount of available CPU time, and usually give rise to biased statistics. One way to overcome this problem is the application of generalized-ensemble techniques, such as the multicanonical algorithm used here, to the protein-folding problem, as has recently been utilized and reported.

In a multicanonical algorithm, conformations with energy $`E`$ are assigned a weight $`w_{mu}(E)\propto 1/n(E)`$, $`n(E)`$ being the density of states. A simulation with this weight generates a random walk in energy space; since a large range of energies is sampled, one can use re-weighting techniques to calculate thermodynamic quantities over a wide range of temperatures by

$$<𝒜>_T=\frac{{\displaystyle \int dx\,𝒜(x)\,w_{mu}^{1}(E(x))\,e^{\beta E(x)}}}{{\displaystyle \int dx\,w_{mu}^{1}(E(x))\,e^{\beta E(x)}}},$$ (2)

where $`x`$ stands for configurations and $`\beta `$ is the inverse temperature.

In the case of poly-alanine chains, up to $`N=30`$ alanine residues were considered. The multicanonical weight factors were determined by the usual iterative procedure, and we needed between $`4\times 10^5`$ sweeps (for $`N=10`$) and $`5\times 10^5`$ sweeps (for $`N=30`$) to estimate the weight factors. All thermodynamic quantities were then measured in a subsequent production run of $`M`$ Monte Carlo sweeps, where $`M=4\times 10^5`$, $`5\times 10^5`$, $`1\times 10^6`$, and $`3\times 10^6`$ for $`N=10`$, 15, 20, and 30, respectively. In the minimal model, chain lengths up to 39 monomers were considered. In this model a single sweep involves a rotation of a group of monomers via the pivot algorithm. A similar number of iterations was used to fix the weight factors, and $`1\times 10^8`$ sweeps were used for the production run in all cases.

We obtain the temperature dependence of the specific heat, $`C(T)`$, by calculating

$$C(T)=\beta ^2\frac{<E_{\mathrm{tot}}^2><E_{\mathrm{tot}}>^2}{N},$$ (3)

where $`E_{\mathrm{tot}}`$ is the total energy of the system. We also analyze the order parameter $`q`$, which measures the helical content of a polymer conformation, and the associated susceptibility

$$\chi (T)=\frac{1}{N2}\left(<q^2><q>^2\right).$$ (4)

For poly-alanine, $`q`$ is defined as

$$q=\stackrel{~}{n}_H$$ (5)

where $`\stackrel{~}{n}_H`$ is the number of residues (other than the terminal ones) for which the dihedral angles $`(\varphi ,\psi )`$ fall in the range $`(70^{\circ}\pm 20^{\circ},37^{\circ}\pm 20^{\circ})`$. For our wormlike-chain model the order parameter $`q`$ is defined as

$$q=\underset{i=2}{\overset{N1}{\sum }}𝐮_i\cdot 𝐮_{i+1}$$ (6)

In both cases the first and last residues, which can move more freely, are not counted in the procedure.
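The reweighting of Eq. (2), together with the estimators of Eqs. (3) and (4), amounts to a few lines of bookkeeping. The sketch below is ours; it assumes the samples and the log-weights have been stored during the production run, and uses a log-sum-exp shift for numerical stability.

```python
import numpy as np

def reweight(E, q, log_w, beta, N):
    """Estimate <E>, C(T) and chi(T) at inverse temperature beta from a
    multicanonical run: E[k], q[k] are sampled values and log_w[k] is
    log w_mu(E[k]). Implements Eqs. (2)-(4)."""
    logp = -beta * E - log_w      # log of the canonical reweighting factor
    logp -= logp.max()            # shift for numerical stability
    p = np.exp(logp)
    p /= p.sum()                  # normalized weights of the samples

    E_mean = np.sum(p * E)
    E2_mean = np.sum(p * E * E)
    q_mean = np.sum(p * q)
    q2_mean = np.sum(p * q * q)

    C = beta**2 * (E2_mean - E_mean**2) / N        # Eq. (3)
    chi = (q2_mean - q_mean**2) / (N - 2)          # Eq. (4)
    return E_mean, C, chi
```

Scanning `beta` over a grid then yields the full curves $`C(T)`$ and $`\chi (T)`$ from a single multicanonical run, which is the practical advantage exploited here.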
From a finite-size scaling analysis of the heights and widths of the specific heat and the susceptibility we can extract a set of effective critical exponents which characterize the helix-coil transition in these two models. For instance, with $`C_{\mathrm{MAX}}`$ defined to be the maximum peak of the specific heat, we have

$$C_{\mathrm{MAX}}\propto N^{\alpha /d\nu }.$$ (7)

In a similar way, we find for the scaling of the maximum of the susceptibility

$$\chi _{\mathrm{MAX}}\propto N^{\gamma /d\nu }.$$ (8)

For both quantities we can also define a temperature gap $`\mathrm{\Gamma }=T_2T_1`$ (where $`T_1<T_{\mathrm{MAX}}<T_2`$), chosen such that $`C(T_1)=bC_{\mathrm{MAX}}=C(T_2)`$ and $`\chi (T_1)=b\chi _{\mathrm{MAX}}=\chi (T_2)`$, where $`b`$ is a fixed fraction. The temperature gap obeys

$$\mathrm{\Gamma }=T_2T_1\propto N^{1/d\nu },$$ (9)

as has been suggested previously. The analysis should be insensitive to the actual fraction $`b`$ of $`C_{\mathrm{MAX}}`$ ($`\chi _{\mathrm{MAX}}`$) used for defining $`T_1`$ and $`T_2`$, which we verified with our numerical data for poly-alanine chains. The scaling exponents $`\alpha `$, $`\nu `$, and $`\gamma `$ have their usual meanings in critical phenomena; however, the above scaling relations also hold formally for the case of a first-order transition, with effective scaling exponents $`d\nu =\alpha =\gamma =1`$. Note that $`d`$ is the dimensionality of the system, and it always appears in the combination $`d\nu `$. Without knowing the effective dimensionality of our systems, we use the combination $`d\nu `$ as a single parameter in the fits. It then becomes straightforward to use the above equations and the values given in Table 1 to estimate the critical exponents.

For poly-alanine we obtain from the scaling of the width of the specific heat $`1/d\nu =1.02(11)`$, with a goodness of fit $`Q=0.9`$ (see Ref. for the definition of $`Q`$), for chains of length $`N=15`$ to $`N=30`$. Inclusion of $`N=10`$ leads to $`1/d\nu =0.84(7)`$, but with a less acceptable fit ($`Q=0.1`$). Similarly, we find from the scaling of the width of the susceptibility $`1/d\nu =0.98(11)`$ ($`Q=0.5`$) for chains of length $`N=15`$ to $`N=30`$, and $`1/d\nu =0.81(7)`$ ($`Q=0.2`$) when the shortest chain $`N=10`$ is included in the fit. Hence, we present as our final estimate for the correlation exponent of poly-alanine $`d\nu =1.00(9)`$. This value is in good agreement with the estimate $`d\nu =0.93(5)`$ obtained from a partition-function zero analysis. The results for the exponent $`\alpha `$ give $`\alpha =0.89(12)`$ ($`Q=0.9`$) when all chains are considered, and $`\alpha =0.86(10)`$ ($`Q=0.9`$) when the shortest chain is excluded from the fit. Analyzing the peak of the susceptibility, we find $`\gamma =1.06(14)`$ ($`Q=0.5`$) for chain lengths $`N=15`$ to 30 and $`\gamma =1.04(11)`$ ($`Q=0.5`$) for chain lengths $`N=10`$ to 30. We summarize our final estimates for the critical exponents in Table 2. The scaling plot for the susceptibility is shown in Fig. 1: the curves for all lengths of poly-alanine chains collapse onto each other, indicating the validity of finite-size scaling for our poly-alanine data.

The same procedure can be applied to the data from the minimal model. All calculations have been done with the omission of the shortest chain. Using the widths of the specific heat at $`b=80\%`$ of the peak height, we obtain $`1/d\nu =1.03(7)`$ ($`Q=0.2`$). The width of the peak at half maximum is less reliable in this case, as the coil-helix transition is complicated by an additional collapse transition to a globular state in the vicinity of the coil-helix transition. This exponent agrees with the one calculated from the susceptibility widths, $`1/d\nu =0.89(9)`$ ($`Q=0.3`$). Hence, our final estimate for this critical exponent in our second model is $`d\nu =0.96(8)`$. These values are in good agreement with those of the poly-alanine model. From the $`C_{\mathrm{MAX}}`$ data in Table 1 and the above value for the exponent $`d\nu `$ we find $`\alpha =0.70(16)`$ ($`Q=0.3`$), which is somewhat smaller than that of the poly-alanine model. The susceptibility exponent calculated from the data in Table 1 yields $`\gamma =1.3(2)`$ ($`Q=0.5`$), which agrees with the previous estimate within the error bar. The scaling plot for the susceptibility is shown in Fig. 2. While the curves corresponding to large polymer sizes collapse onto the same curve, the $`N=13`$ case shows a small disagreement, indicating that finite-size scaling is valid only for longer chain lengths in the minimal model.
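The exponent estimates quoted above come from straight-line fits in log-log coordinates. Schematically (with placeholder numbers, not our measured peak values):

```python
import numpy as np

# Peak susceptibilities chi_max versus chain length N (illustrative data).
N = np.array([15.0, 20.0, 30.0])
chi_max = np.array([0.9, 1.2, 1.8])

# Eq. (8): log chi_max = (gamma / d nu) * log N + const.
slope, intercept = np.polyfit(np.log(N), np.log(chi_max), 1)
print(f"gamma/(d nu) = {slope:.2f}")

# The same one-liner applied to the gaps Gamma(N) gives -1/(d nu),
# and applied to C_max gives alpha/(d nu), cf. Eqs. (7) and (9).
```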
Comparing the critical exponents of our two models as summarized in Table 2, we see that the estimates for the correlation exponent $`d\nu `$ agree well for the two models. Within the error bars, the estimates for the susceptibility exponent $`\gamma `$ also agree. The estimates for the specific-heat exponent $`\alpha `$ seem to disagree within the error ranges. However, in view of the fact that both analyses are based on small system sizes, the true error ranges could actually be larger than the ones quoted here. Using these rather crude results, we have already demonstrated a striking similarity in the finite-size scaling of the two models. Therefore, we can convincingly make the conjecture that minimal models can be used to represent the structural behavior of real helix-forming proteins. Our analysis should also tell us whether the helix-coil transition in our models is of first or second order.

In the former case we would expect $`d\nu =\alpha =\gamma =1`$, which seems barely supported by our data due to the rather large error bars associated with the estimates of the exponents. We have further explored the nature of the transition from another perspective, by considering the change in energy across a small temperature gap (taken to be within 90% of $`C_{\mathrm{MAX}}`$) from the original data,

$$\mathrm{\Delta }E=(E_{\mathrm{tot}}(T_2)E_{\mathrm{tot}}(T_1))/N$$ (10)

This value should approach either a finite value or zero as $`N^{1}`$ goes to zero. A finite value would indicate a first-order transition, while a zero value would indicate a second-order transition. In the case of a first-order transition the intercept would indicate the latent heat. Now, the assumption is that this energy change scales linearly as $`N^{1}`$ goes to zero. Figure 3 shows a plot of the data from both the atomic-level and the minimal model, where nonzero intercepts can be extrapolated at $`N^{1}=0`$. Hence, our results seem to indicate a finite latent heat and a first-order-like helix-coil transition. However, we cannot exclude the possibility that the true asymptotic limit of $`|\mathrm{\Delta }E|`$ is zero, and some earlier results for the case of poly-alanine point rather towards a second-order transition. Further simulations of longer chains seem to be necessary to determine the order of the helix-coil transition without further doubt.

In summary, we conclude that, in view of the similarity of the two models examined here, a corresponding-states principle can be established for the coil-helix transition. The finite-size scaling analysis allows us to calculate estimators for the critical exponents in the two models which indicate “universality” of helix-coil transitions.

Acknowledgments: Financial support from the Natural Science and Engineering Research Council of Canada and the National Science Foundation (CHE-9981874) is gratefully acknowledged.

Figure Captions

1. Scaling plot for the susceptibility $`\chi (T)`$ as a function of temperature $`T`$, for poly-alanine molecules of chain lengths $`N=10,15,20`$ and $`30`$.

2. Scaling plot of $`\chi (T)`$ as a function of temperature $`T`$, for the minimal model with chain lengths $`N=13,19,26,33`$ and $`39`$.

3. Scaling of the energy gap and transition width at 80% and 90% of $`C_{MAX}`$ ($`\mathrm{\Delta }E_{80\%}`$ and $`\mathrm{\Delta }E_{90\%}`$), shown with separate symbols for the all-atom and the minimal model.
# High-𝑝_⟂ charged-pion production in Pb-Au Collisions at 158 AGeV/c

## Abstract

The CERES/NA45 experiment at the CERN SPS measured transverse-momentum spectra of charged pions in the range $`1<p_{\perp}<4\ \mathrm{GeV}/\mathrm{c}`$ near mid-rapidity ($`2.1<y<2.6`$) in $`158\ \mathrm{AGeV}/\mathrm{c}`$ Pb-Au collisions. The invariant transverse-momentum spectra are exponential over the entire observed range. The average inverse slope is $`245\pm 5\ \mathrm{MeV}/\mathrm{c}`$; it shows a 2.4% increase with the centrality of the collision over the 35% most central fraction of the cross section. The $`\pi ^{}/\pi ^+`$ ratio is constant at 1.028 $`\pm `$ 0.005 over the $`p_{\perp}`$ interval measured.

Transverse-momentum distributions of hadrons which emerge from particle collisions are closely related to the collision dynamics. In general, below 1 GeV/c the $`p_{\perp}`$ spectra exhibit a nearly exponential slope and reflect the properties of the collision system at break-up, when the secondary hadrons cease to interact. For example, $`p_{\perp}`$ spectra of various species measured with the lead beam at the SPS have been used to establish collective flow of hadrons and freeze-out temperatures in heavy-ion collisions. In pp collisions the spectra develop a power-law-like tail at much higher $`p_{\perp}`$ which can be described quantitatively (above about 4 GeV/c) by hard scattering of partons in the initial phase of the reaction. In collisions of nuclei this hard-scattering contribution may be strongly modified by partonic energy loss in a deconfined medium.

The $`p_{\perp}`$ region from 1 to 4 GeV/c, often referred to as the semi-hard region, is less easy to interpret. Hard processes begin to contribute in this region and compete with low-$`p_{\perp}`$ phenomena. Additionally, in reactions involving nuclei, the spectra are modified in the nuclear environment by partons or hadrons scattering more than once. This was realized by Cronin et al. at FNAL in the late 1970s when, in proton-induced reactions with nuclear targets, an enhanced production of high-$`p_{\perp}`$ particles was found compared to a naive extrapolation from pp collisions. Later a similar increase was found in interactions of $`\alpha `$-particles at the ISR, and also in heavy-ion collisions with oxygen and sulfur beams at the CERN SPS. If the hypothesis of a high level of rescattering, a prerequisite for thermalization, is correct, the slope of the $`p_{\perp}`$ distributions in the semi-hard region could reflect the temperature and expansion dynamics of the system.

In 1995 the CERES (ChErenkov Ring Electron Spectrometer) experiment took a data sample of $`8\times 10^6`$ Pb-Au collisions at the CERN SPS at a beam energy of $`158\ \mathrm{AGeV}/\mathrm{c}`$. The sample covers the most central 35% of the 6230 mb geometrical cross section. In this Letter, we present charged-pion transverse-momentum spectra derived from $`2\times 10^6`$ pions identified in the range from 1 to 4 $`\mathrm{GeV}/\mathrm{c}`$ and near mid-rapidity ($`2.1<y<2.6`$). These data and a preliminary report on hadron spectra have been presented at QM97. Data on charged-hadron spectra at lower transverse momenta will be published separately.

A detailed description of the spectrometer has been given elsewhere. Here we restrict the discussion of the experimental setup to aspects essential for the present analysis. CERES has been designed to measure electron pairs in ultra-relativistic heavy-ion collisions.
Two RICH detectors with full azimuthal coverage, located before and after a superconducting double solenoid, constitute the heart of the experiment. The magnet system deflects each track in the azimuthal direction but preserves its polar angle. Precise tracking is provided by two silicon drift detectors (SDD), positioned on average 10 cm and 11.5 cm downstream of the target in front of the first RICH, and a pad chamber (PC) downstream of the second RICH. The two SDDs also determine the position of the event vertex and measure the charged-particle density $`\mathrm{dN}_{\mathrm{ch}}/\mathrm{d}\eta `$. By choosing CH<sub>4</sub> as the radiator gas for the RICH detectors, the Cherenkov threshold is high enough ($`\gamma _{\mathrm{th}}\approx 32`$) to suppress signals from the bulk of the hadrons. However, charged pions above 5 GeV/c momentum radiate enough Cherenkov light to be observed in the RICH detectors. The clean environment necessary to detect electrons among hundreds of hadrons is ideally suited for a high-statistics study of those pions which exceed the Cherenkov threshold. Pion identification and momentum measurement are provided via the ring radius measured in the RICH detectors. The azimuthal deflection in the magnetic field gives the charge information and a redundant momentum determination according to $`\mathrm{\Delta }\varphi =144\ \mathrm{mrad}/p(\mathrm{GeV}/\mathrm{c})`$.
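For orientation, the two momentum determinations can be written out explicitly. Only the 144 mrad constant and $`\gamma _{\mathrm{th}}\approx 32`$ are taken from the text; the explicit ring-radius formula is our reconstruction from standard RICH kinematics (small Cherenkov angles, saturated radius $`r_{\mathrm{\infty }}`$), not a formula quoted in the paper.

```python
import math

M_PI = 0.13957       # charged-pion mass in GeV/c^2
GAMMA_TH = 32.0      # Cherenkov threshold Lorentz factor in CH4

def p_from_deflection(dphi_mrad):
    """Momentum from the azimuthal deflection, Delta phi = 144 mrad / p."""
    return 144.0 / dphi_mrad                  # GeV/c

def p_from_ring(r_over_rinf):
    """Momentum from the ring radius, assuming the standard saturation
    r/r_inf = sqrt(1 - gamma_th^2/gamma^2) of the Cherenkov angle."""
    gamma = GAMMA_TH / math.sqrt(1.0 - r_over_rinf**2)
    return M_PI * math.sqrt(gamma**2 - 1.0)   # GeV/c

print(p_from_deflection(20.0))   # 7.2 GeV/c from a 20 mrad deflection
print(p_from_ring(0.95))         # pion momentum for a 95%-saturated ring
```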
The analysis proceeds in several steps. In the first step pion candidates are reconstructed. Each charged-particle track originating from the vertex and traversing the two SDDs is extrapolated downstream to the pad chamber. If a pad-chamber signal is found in a window of $`6`$ mrad in polar and $`50`$ mrad in azimuthal direction (corresponding to a lower momentum cut of $`2`$ GeV/c), ring images are searched for in both RICH detectors at the corresponding locations. Several constraints are imposed to identify a pion track: (i) the two ring images match in radius within 10%, (ii) the radii correspond to the azimuthal deflection in the field within two sigma of the resolution, and (iii) the polar angles coincide within two sigma of the measured resolution in all detectors. Tight cuts using these constraints remove most false tracks from the sample.

Fig. 1 depicts the correlation of ring radius and azimuthal deflection for all reconstructed pion candidates after applying the cuts (i) and (iii). Two well-defined regions, corresponding to pions of negative and positive charge, cluster around the expected correlation (dotted line) and clearly stand out above a low level of remaining background tracks. Nearly all remaining background tracks in the candidate sample can be associated with electron tracks. Two classes of background tracks must be distinguished: (i) physical background of high-momentum electrons and (ii) unphysical background originating from uncorrelated electron rings accidentally matching the track in both RICH detectors. High-momentum electrons are easily identified by the mismatch between the ring radius, which is always the asymptotic one, and the deflection in the magnetic field. The cut used to remove those tracks is depicted in the figure (solid line). Note that this background was scaled up by a factor of 20 to make it visible in the figure. The electron background in the remaining sample was determined using a Monte-Carlo simulation and was found to be negligible at all momenta. The unphysical combinatorial background was measured using an event-mixing technique and subtracted from the data.

The low level of this background may be judged from the number of tracks in Fig. 1 with uncorrelated radius and deflection. The signal-to-background ratio is 100 at a $`p_{\perp}`$ of 1 GeV/c and decreases to 10 above 4 GeV/c. At the highest $`p_{\perp}`$ the background subtraction introduces a systematic error of less than 5%. Kaons are only reconstructed if their momentum is above 20 GeV/c; thus the kaon background is suppressed by four orders of magnitude at all $`p_{\perp}`$.

The momentum can be calculated from the azimuthal deflection and the ring radius; both scales are shown in Fig. 1. In our analysis we use the momentum based on the ring radius, which has a resolution of $`\mathrm{\Delta }p/p\approx 0.0008\,p^2/(\mathrm{GeV}/\mathrm{c})^2`$, significantly better in the range from 4.5 to 30 $`\mathrm{GeV}/\mathrm{c}`$ than the resolution $`\mathrm{\Delta }p/p\approx 0.035\,p/(\mathrm{GeV}/\mathrm{c})`$ obtained from the deflection in the magnetic field. The comparison of both momentum measurements fixes the absolute momentum scale to better than 0.5%.

In the last step of the analysis the measured transverse-momentum spectrum is corrected for the spectrometer characteristics, the limitations of the reconstruction algorithm, and all cuts on the data. All corrections are calculated simultaneously in a Monte-Carlo simulation. Pions, generated with realistic kinematical distributions, were traced through a detailed GEANT implementation of the CERES spectrometer. The pion tracks, subjected to the simulated response of the detector, were embedded in real events and passed through the full analysis chain. The ratio $`R`$ of Monte-Carlo input to output $`p_{\perp}`$ distributions is shown in Fig. 2. The fit to this ratio (as shown in Fig. 2) is used to correct the data.

The correction function can be split into three regions. Below a $`p_{\perp}`$ of 1.3 GeV/c the correction increases rapidly as pions fall below the Cherenkov threshold. Expressed in $`p_{\perp}`$, the threshold depends on the polar angle, and hence the cut-off in Fig. 2 is gradual. The plateau region up to $`3`$ GeV/c reflects the reconstruction efficiency of about 20%, including all losses due to analysis cuts. Above the Cherenkov threshold the number of photons and the ring radius steadily increase towards their asymptotic values. Both yield an increase of the reconstruction efficiency and the ring-center resolution with increasing momentum, which results in a gradual reduction of the correction towards higher momenta. Above $`3`$ GeV/c the correction factor decreases more rapidly due to the deterioration of the momentum resolution, which artificially increases the apparent yield at high momenta. The systematic uncertainty in the region from 1.5 to 3 $`\mathrm{GeV}/\mathrm{c}`$ is less than 10%; below 1.5 $`\mathrm{GeV}/\mathrm{c}`$ the error increases slightly, up to 15% at 1.2 GeV/c. The correction for momentum smearing generates uncertainties of up to 40% at 4 $`\mathrm{GeV}/\mathrm{c}`$. The absolute normalization introduces an additional overall systematic error of 20%, independent of $`p_{\perp}`$.

Fig. 2 shows the correction function for the full data set, corresponding to the upper 35% of the geometrical cross section. To study the centrality dependence, the data were split into 7 exclusive multiplicity bins, covering the range in $`dN_{ch}/d\eta `$ from 100 to about 400.
For each bin the correction was calculated separately, to take into account the reduction of the reconstruction efficiency and the deterioration of the momentum resolution with event multiplicity. The absolute value of the correction function increases by a factor of 2 from the lowest to the highest multiplicity events. No change of the $`p_{\perp}`$ dependence of the correction was observed.

The final $`p_{\perp}`$ spectrum is shown in Fig. 3. Since the $`\pi ^+`$ and $`\pi ^{}`$ spectra are very similar (see the discussion below), we have averaged positively and negatively charged pions in order to cover the largest possible $`p_{\perp}`$ range. The open symbols represent the full data sample with an average charged-particle density of 220, corresponding to the upper 35% of the geometrical cross section. The full symbols give the result for a more central event selection, with $`\mathrm{dN}_{\mathrm{ch}}/\mathrm{d}\eta =310`$, or about 8% of the geometrical cross section. Our result is in good agreement with the recently measured $`\pi ^{}`$ spectrum in Pb-Pb collisions at the same energy and similar rapidity. Except for the increase in the yield proportional to the charged-particle density, the two spectra shown in Fig. 3 are indistinguishable.

The spectra are exponential over the full range observed. Fitting an exponential function $`Ae^{p_{\perp}c/T}`$ to the full data sample in the range 1.5 to 3.5 $`\mathrm{GeV}/\mathrm{c}`$ gives an inverse slope parameter $`T=245\pm 5\ \mathrm{MeV}/\mathrm{c}`$. The fit of the transverse-mass ($`m_{\perp}`$) distribution gives the same inverse slope parameter within 1 MeV. The inverse slope varies locally by less than 10 MeV over the full $`p_{\perp}`$ range. Note that the slope parameter is significantly larger than the values of about $`180`$ MeV which have been observed at lower $`p_{\perp}`$.

We have split the data sample into 7 exclusive multiplicity bins. The inverse slope parameter $`T`$ extracted from each of these bins is plotted in Fig. 4 versus the average $`\mathrm{dN}_{\mathrm{ch}}/\mathrm{d}\eta `$ of the bin. Over the measured centrality range the inverse slope increases by 7 MeV, which is about 2.4% of its absolute value. The statistical errors are smaller than 1 MeV. The systematic error is 5 MeV (indicated by the brackets in Fig. 4), nearly independent of centrality; thus it is an error on the absolute value but not on the trend of the slope with centrality.
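The inverse-slope extraction is simple enough to sketch. The version below linearizes the exponential by taking logarithms and fits a straight line to a toy spectrum; the published value of course comes from a proper fit to the measured invariant spectrum with its errors.

```python
import numpy as np

def inverse_slope(pt, dn, fit_range=(1.5, 3.5)):
    """Inverse slope T from an exponential A*exp(-pt/T), obtained from a
    straight-line fit to log(dN) versus pt inside fit_range (GeV/c)."""
    sel = (pt >= fit_range[0]) & (pt <= fit_range[1])
    slope, _ = np.polyfit(pt[sel], np.log(dn[sel]), 1)
    return -1.0 / slope            # in GeV if pt is given in GeV/c

# Toy spectrum generated with T = 0.245 GeV:
pt = np.linspace(1.0, 4.0, 30)
dn = 1.0e5 * np.exp(-pt / 0.245)
print(f"T = {1000.0 * inverse_slope(pt, dn):.0f} MeV")
```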
From Fig. 1 it is evident that $`\pi ^+`$ and $`\pi ^{}`$ can be measured separately. We have determined the $`\pi ^{}/\pi ^+`$ ratio for 15 exclusive ring-radius bands. The average momentum corresponding to each radius bin is determined using the CERES Monte-Carlo simulation. In particular at large $`p_{\perp}`$, this momentum is smaller than that calculated from the average ring radius, because low-momentum pions shift to apparently larger momenta due to the limited resolution. As discussed previously, the inclusive $`p_{\perp}`$ spectra were corrected for the effect of the momentum resolution. While the systematic errors introduced by this correction are acceptable for the inclusive spectra up to 4 GeV/c, they exclude a meaningful comparison of $`\pi ^{}`$ to $`\pi ^+`$ above 2.2 GeV/c. The transverse-momentum dependence of the $`\pi ^{}/\pi ^+`$ ratio is shown in Fig. 5. The ratio is constant within errors over the entire observed range at a value of 1.028 $`\pm `$ 0.005, assuming that the systematic errors cancel in the ratio.

The $`p_{\perp}`$ spectra of charged pions are nearly exponential over more than four orders of magnitude, from 1 to 4 GeV/c in $`p_{\perp}`$. This suggests a statistical interpretation of the data. Of course, the inverse slope of about 245 MeV, well above the Hagedorn temperature of 160 MeV, cannot be interpreted as the temperature of a dense system of hadronic resonances. This could point towards early thermalization in a partonic phase. On the other hand, collective transverse expansion, well established by the observation of a linear increase of the inverse slope with particle mass at lower transverse momenta, might sufficiently increase the inverse slope even at the large $`p_{\perp}`$ observed in this experiment. Data on $`\pi ^{}`$ production at similar $`p_{\perp}`$ have been interpreted accordingly.

Models with initial-state scattering on the hadron or parton level can explain the momentum spectra for central collisions. However, such “random walk” models lead to a much larger centrality dependence of the slope and can be excluded. Wang has shown that a perturbative QCD calculation modeling the Cronin effect by $`p_{\perp}`$ broadening can explain $`\pi ^{}`$ data down to 1 GeV/c, and one wonders why the data do not seem to reflect any parton energy loss in the hot and dense medium. However, the results depend sensitively on the model of the Cronin effect. One might speculate that the number of consecutive parton scattering processes will increase as the impact parameter decreases.

The $`\pi ^{}/\pi ^+`$ ratio, which reflects the isospin asymmetry in the final state, is different for statistical and perturbative-QCD-based models. Perturbative QCD predicts a $`\pi ^{}/\pi ^+`$ ratio increasing with $`p_{\perp}`$ and saturating at 1.14, which corresponds to the initial isospin imbalance of the valence quarks. The constant ratio at a much lower level observed in the experiment indicates that below 2.2 GeV/c hard scattering does not dominate. A statistical model which evenly distributes the initial isospin imbalance over all particles in the final state gives a ratio of about 1.06. A more sophisticated analysis, including the effects of hadron decays in the final state, results in a ratio of 1.05. It increases below 400 MeV/c in $`p_{\perp}`$ but remains fairly constant at larger $`p_{\perp}`$.

It remains ambiguous whether a statistical or a perturbative QCD interpretation of the data is more appropriate. Additional information might arise from the observation of angular correlations between high-momentum pions.

We are grateful for the financial support by the MINERVA Foundation and the Benoziyo High Energy Research Center.
# TIGHT-BINDING LINEAR SCALING METHOD APPLICATIONS TO SILICON SURFACES

## 1 Introduction: *Ab Initio* vs. Semiempirical, $`O(N^3)`$ vs. $`O(N)`$ Methods

In the past two decades, the percentage of theoretical investigations of materials based on atomistic computer simulations has steadily increased. Those using either an *ab-initio* or a tight-binding independent-electron description of interactions are able to account for chemical bond formation and breaking, and are particularly worthy of attention, as reflected in the 1998 Nobel prize for chemistry. One shortcoming of the usual implementation of such methods is that the number of operations scales as the third power of the number of atoms N, i.e., their computational cost is $`O(N^3)`$. Currently this limits the number of atoms in the system that can be treated, even using very powerful computers.

*Ab-initio* methods certainly give more precise results than semiempirical tight-binding methods, but besides being based on more complicated Hamiltonians, they require much more extensive basis sets to expand the wave functions of the electrons. Empirical tight-binding methods provide a useful compromise between classical empirical-potential approaches and *ab-initio* methods, because they retain a quantum mechanical description of the electrons, ultimately responsible for chemical bonding, but the Hamiltonian is parametrized and the wavefunctions are expanded in a minimal set of atomic orbitals. As a consequence, the number of atoms which can be handled with even the simplest *ab-initio* method, the Local Density Approximation commonly used in materials science, is one to two orders of magnitude less than with tight-binding methods within the same computational constraints.

Whereas chemists are interested in studying large, complex molecules, materials scientists are concerned with the properties of clusters, solids with specific defects or disorder, surfaces, interfaces, artificial structures and their interactions. Reliable computations of properties require simulations of large enough finite systems, e.g. enclosed in a “box” with periodic boundary conditions applied. In order to increase the number of atoms in the system and to study dynamical processes or finite-temperature properties obtained from time averages, efforts are constantly made to reduce the computational cost. For this reason, many kinds of linear scaling methods have been introduced, tested and compared. The interested reader is referred to recent reviews.

Linear scaling or *O(N)* means that the computation time is proportional to the number N of atoms in the system, just like in classical simulations with finite-range interaction potentials. In this contribution we concentrate on a particular orbital-based linear scaling method which, in conjunction with a tight-binding Hamiltonian, has been successfully applied to *C* and *Si* systems.

This contribution is organized in the following way: first we review the approximations and the method, then we present results on $`Si(111)5\times 5`$ and $`Si(001)c(4\times 2)`$ reconstructed surfaces which serve to validate the method for future applications. At the end we compare our results with previous LDA and tight-binding computations and summarize our conclusions.
## 2 The Tight-Binding Linear Scaling Method

### 2.1 Semiempirical Tight-Binding Approximation

The empirical tight-binding (TB) approximation allows the quantum mechanical nature of covalent bonding to enter the interaction Hamiltonian in a natural way, rather than through additional *ad hoc* angular terms in a classical potential. In TB models the total energy of the system is expressed as

$$E=E_{BS}+\underset{L<L^{}}{\sum }\varphi \left(\left|𝐑_L𝐑_L^{}\right|\right),$$ (1)

where $`\varphi `$ is a repulsive two-body potential which includes the ion-ion repulsion and the electron-electron interactions that are double counted in the electronic “band-structure” term $`E_{BS}`$. This term describes chemical bonding; it can be written as

$$E_{BS}=\underset{i}{\sum }f_iϵ_i=\underset{i}{\sum }f_i<\psi _i\left|\widehat{H}_{TB}\right|\psi _i>,$$ (2)

where $`\widehat{H}_{TB}`$ is the TB Hamiltonian, $`ϵ_i`$ and $`\{\psi _i\}`$ are its eigenvalues and eigenstates, and $`f_i`$ is their occupancy. The total number of valence electrons is from now on denoted as N, and the total number of single-particle states is then N/2 if we assume double occupancy, i.e., spin degeneracy for each state ($`f_i=2`$). The occupied eigenstates can in principle be determined by diagonalization, but in our work diagonalization is only used at the very end of each computation to check its accuracy.

The off-diagonal elements of $`\widehat{H}_{TB}`$ are described by invariant two-center matrix elements, $`V_{ss\sigma },V_{sp\sigma },V_{pp\sigma }`$ and $`V_{pp\pi }`$, between the set of $`sp^3`$ atomic orbitals (assumed orthonormal). By adjusting their values at the interatomic distance $`r_0`$ = 2.35 Å in the equilibrium diamond-like structure, as well as the diagonal elements $`E_s`$, $`E_p`$, a good fit to the position and dispersion of the occupied valence bands of Si can be obtained.

In order to study properties of covalently bonded systems with defects or free surfaces, a tight-binding model must be transferable to different physically relevant environments. An important advance was due to Goodwin, Skinner and Pettifor (GSP), who showed that it is possible to set up a TB model which accurately describes the energy-versus-volume behaviour of Si in crystalline phases with different atomic coordination, and reproduces the structure of small Si clusters. We therefore adopt the functional form suggested by GSP for the distance dependence of the two-center matrix elements and of the two-body potential:

$$V_\alpha (r)=V_\alpha (r_0)\left(\frac{r_0}{r}\right)^n\mathrm{exp}\left\{n\left[\left(\frac{r}{r_c}\right)^{n_c}+\left(\frac{r_0}{r_c}\right)^{n_c}\right]\right\}$$ (3)

$$(\alpha :ss\sigma ,sp\sigma ,pp\sigma ,pp\pi )$$

$$\varphi (r)=\varphi _0\left(\frac{r_0}{r}\right)^m\mathrm{exp}\left\{m\left[\left(\frac{r}{d_c}\right)^{m_c}+\left(\frac{r_0}{d_c}\right)^{m_c}\right]\right\}$$ (4)

where $`r`$ denotes the interatomic separation, and $`n<<n_c`$, $`m<<m_c`$, $`r_c\approx d_c>r_0`$ and $`\varphi _0`$ are parameters which are determined by fitting judiciously chosen properties. The resulting high values of $`n_c`$, $`m_c`$ ensure a rapid decay of $`V_\alpha (r)`$, $`\varphi (r)`$ beyond $`r_c`$, $`d_c`$. In molecular dynamics (MD) simulations, where a finite range of $`r`$ is explored, the quantities in Eqs. (3-4) are further multiplied by a smoothed step function which switches from 1 to 0 in a narrow interval about a cutoff radius $`r_m>r_c,d_c`$.
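The shape of Eq. (3) is easy to explore numerically. In the sketch below the values of $`n`$, $`n_c`$ and $`r_c`$ are placeholders of a plausible magnitude, not the published fits of either parametrization discussed in the next subsection; the code only illustrates that the scaling function reduces to $`V_\alpha (r_0)`$ at the equilibrium bond length and decays rapidly beyond $`r_c`$ because $`n_c>>n`$.

```python
import numpy as np

def gsp_scaling(r, V0, r0=2.35, n=2.0, nc=6.5, rc=3.67):
    """Distance dependence of a two-center matrix element, Eq. (3).
    Parameter values are illustrative placeholders (Angstrom units)."""
    return (V0 * (r0 / r) ** n
            * np.exp(n * (-(r / rc) ** nc + (r0 / rc) ** nc)))

# At r = r0 both factors equal 1, so the function returns V0 exactly;
# beyond rc the double-exponential term cuts it off sharply.
for r in (2.35, 2.7, 3.2, 3.8):
    print(f"r = {r:4.2f} A   V = {gsp_scaling(r, V0=-1.82):+.4f}")
```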
Because hydrogen has a single occupied s-state, only $`ss\sigma`$ and $`sp\sigma`$ matrix elements must be parametrized; $`H-H`$ couplings are negligible at the separations considered here.

### 2.2 Tight-Binding Parameters

Within the above framework, improved sets of parameters were subsequently developed for Si-Si and Si-H interactions. In this contribution, we used the two different sets of parameters described in detail by Kwon *et al* and Bowler *et al*. The Si-Si parametrization of Kwon *et al* is rather complicated: the $`r_c`$'s and $`n_c`$'s depend on $`\alpha`$, and the repulsive contribution is represented by a nonlinear function of the second term in Eq. (1). The parameters are fitted to many properties of *Si* in the diamond structure and to computed (LDA) cohesive energies vs. density of four different structures. The resulting properties of liquid Si and of small Si clusters are in remarkable agreement with experiments and with *ab-initio* computations. On the other hand, this parametrization uses a cut-off which extends beyond second nearest neighbours; this implies significantly larger computing times. The revised GSP parametrization of Bowler *et al* is less complicated than the previous one; the *Si-Si* parameters were fitted to fewer properties in the diamond and $`\beta`$-tin structures, the *Si-H* parameters to properties of $`SiH_4`$. Furthermore, the cut-off radius $`r_c`$ can be chosen between nearest and next-nearest neighbours; thus this parametrization is very well suited for linear scaling computations. It has in fact been successfully applied to defects and hydrogen diffusion on Si(001).

### 2.3 Orbital Based Linear Scaling Energy Minimization without Orthogonalization Constraints

Traditional electronic structure methods solve the Schrödinger equation by expanding one-electron wavefunctions in a fixed basis set (plane waves, atomic orbitals or combinations thereof) and by diagonalizing the resulting secular equation for the expansion coefficients. In spite of significant progress achieved by applying efficient diagonalization algorithms, the required computing time is proportional to $`NP^2`$, *P* being the number of basis functions. Because $`P\propto N`$, the computational cost is $`O(N^3)`$; the prefactor depends on the method and is small in the case of empirical tight-binding. Nevertheless, the diagonalization of $`\widehat{H}`$ for each atomic geometry or at each step of an MD simulation limits the number of atoms that can currently be studied in conventional TBMD computations to about 100 using a workstation and around 1000 using a supercomputer.

The recently developed linear scaling methods compute the total energy by minimizing a functional expressed in terms of localized orbitals in real space. Although typical eigenstates in a condensed system extend throughout most of it, a unitary transformation yields linear combinations of the former which are localized about particular sites. On the basis of exact model calculations, these so-called Wannier orbitals are believed to decay exponentially in systems with a finite energy gap between occupied and empty states. This applies in particular to the finite systems on which computations are carried out, although the effective gap can be small and the corresponding decay slow if the corresponding real system is metallic. The key feature of $`O(N)`$ methods is that the total energy and the forces acting on individual atoms are evaluated *without computing the eigenvalues and eigenstates of $`\widehat{H}`$*.
This is accomplished by dividing the full system into finite subsystems and by defining localized orbitals {$`\varphi`$} which are forced to vanish outside each subsystem. These *localized regions* (LR) are the electronic equivalents of the *linked cells* which ensure *O(N)* scaling in classical simulations. Intuition and experience suggest that the minimum size of each localization region depends on physical and chemical properties of the constituents, and not on the size of the system as a whole. The size of the localized regions (which must exceed the range of $`\widehat{H}`$) is always the factor which limits the accuracy of an $`O(N)`$ calculation.

Another key ingredient to achieve $`O(N)`$ scaling is the definition of an appropriate energy functional whose minimization requires neither explicit orthogonalization of the auxiliary electronic orbitals, nor the inversion of their overlap matrix $`\mathbf{S}`$. This functional is in general different from that defined in Eq. (1), but must have the same global minimum in the limit of infinite localization regions. For finite localization regions it yields an upper bound which, in practice, must be close to the minimum energy even for relatively small LRs. This can in fact be achieved. Various *O(N)* methods are based on different functionals which, however, share this remarkable property. One convenient energy functional which satisfies these requirements is:
$$E_{GBS}[\left\{\varphi\right\},\mu,M]=2\sum_{i,j=1}^{M}(2\delta_{ij}-S_{ij})<\varphi_j\left|\widehat{H}-\mu\right|\varphi_i>+\mu N$$ (5)
The matrix $`(2I-S)`$ is the first-order truncated series expansion of the inverse overlap matrix $`S^{-1}`$, where $`S_{ij}=<\varphi_i|\varphi_j>`$. The functional defined in Eq. (5) depends on the number M of localized orbitals (LOs), and on a global variable $`\mu`$ which determines the highest filled state and hence the total electronic charge in the system at the minimum. Taking $`M>N/2`$ helps to avoid unphysical solutions which, depending on the initial choice of the orbitals, can otherwise be obtained.

In order to find the electronic ground state energy for a given spatial configuration of the atoms, the functional is minimized with respect to the LOs. Normally each LR is centered at an atomic site *I* and encompasses all neighbours connected by n bonds; it is then denoted by nLR. The LO $`\varphi_i`$ centered at atomic site *I* can then be expressed as
$$\varphi_i=\sum_{J\in\{LR_I\}}\sum_{l}C_{Jl}^i\alpha_{Jl}$$ (6)
where the $`\alpha_{Jl}`$'s are the atomic basis functions of atom *J*, the index *l* runs over the orbital components (e.g. $`s,p_x,p_y,p_z`$ for carbon or silicon and s for H), and $`\{LR_I\}`$ indicates the set of atoms within the localization region centered at site *I*; the corresponding number of basis functions is $`n_b`$.

The functional is efficiently minimized using a conjugate gradient (CG) algorithm. The required derivatives
$$\frac{\partial E_{GBS}}{\partial <\varphi_i|}=4\sum_{j}^{M}\left[(2\delta_{ij}-S_{ij})(\widehat{H}-\mu)|\varphi_j>-|\varphi_j><\varphi_j\left|(\widehat{H}-\mu)\right|\varphi_i>\right]$$ (7)
are evaluated at each iteration step. Using Eq. (6) the matrix elements can be expressed in terms of the Slater-Koster energy integrals $`<\alpha_{Jl}|\widehat{H}_{TB}|\alpha_{J^{\prime}l^{\prime}}>`$; those energy integrals can in turn be expressed in terms of direction cosines and of the invariant two-center matrix elements defined by Eq. (3) (see Appendix B). A toy numerical illustration of the functional (5) and its gradient (7) follows.
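The following minimal sketch (an illustrative toy, not the code used in this work) minimizes the functional of Eq. (5) by plain steepest descent, standing in for the CG minimizer, for a dimerized tight-binding ring with one orbital per site; the hoppings, localization windows and step size are invented for the example. With $`\mu`$ placed inside the gap, the converged $`E_{GBS}`$ should approach the doubly occupied band-structure energy up to a small localization error:

```python
import numpy as np

rng = np.random.default_rng(0)
nat, M = 16, 8                        # one orbital per site; N = 2M = 16 electrons
H = np.zeros((nat, nat))
for i in range(nat):                  # dimerized ring: alternating hoppings open a gap at E = 0
    t = -2.0 if i % 2 == 0 else -1.0
    H[i, (i + 1) % nat] = H[(i + 1) % nat, i] = t
mu, N = 0.0, 16                       # chemical potential chosen inside the gap
A = H - mu * np.eye(nat)              # (H - mu), the operator entering Eqs. (5) and (7)

mask = np.zeros((M, nat))             # localization regions: a 7-site window per orbital
for m in range(M):
    for d in range(-3, 4):
        mask[m, (2 * m + d) % nat] = 1.0

C = 0.1 * rng.standard_normal((M, nat)) * mask   # row i holds the coefficients of |phi_i>

def e_gbs(C):                         # Eq. (5)
    S, Hm = C @ C.T, C @ A @ C.T
    return 2.0 * np.sum((2.0 * np.eye(M) - S) * Hm) + mu * N

def grad(C):                          # Eq. (7), projected so each orbital stays in its LR
    S, Hm = C @ C.T, C @ A @ C.T
    return 4.0 * ((2.0 * np.eye(M) - S) @ (C @ A) - Hm @ C) * mask

for _ in range(4000):                 # steepest descent stands in for the CG minimizer
    C -= 0.02 * grad(C)

print("E_GBS   =", e_gbs(C))
print("E_exact =", 2.0 * np.linalg.eigvalsh(H)[:M].sum())   # Eq. (2) with f_i = 2
```

Note that no orthogonalization is ever performed: the $`(2\delta_{ij}-S_{ij})`$ factor drives the overlap matrix towards the identity at the minimum, which is precisely the point of the functional.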
Because each derivative needs to be evaluated only in the localization region $`\{LR\}_I`$, the required number of operations scales linearly with the number of atoms in the system. The global variable $`\mu`$ is initially chosen well above the estimated Fermi energy of the system, then iteratively adjusted until the total charge of the system equals the value required by global neutrality. When convergence is achieved, i.e., when the ground state energy corresponding to the assumed LRs is attained, $`\mu`$ is the chemical potential (Fermi energy) of the electrons.

### 2.4 Total Energy Minimization with Respect to Atomic Positions and Electronic Readjustment

In this work we consider only known metastable structures of silicon surfaces in order to test the performance and accuracy of the method. To determine such structures, atoms in several surface layers are allowed to move under the influence of the forces
$$F_I=-\frac{\partial E}{\partial R_I}$$ (8)
until all their components become smaller than a preset tolerance (0.01 *eV/Å*) and the energy *E* reaches a minimum. Instead of using a standard minimization procedure, this is achieved by introducing a damping term in the standard Verlet algorithm; a schematic sketch is given at the end of this subsection. This term is adjusted so that the resulting motion is almost critically damped, which yields the fastest possible relaxation.

Eq. (8) is physically meaningful only if the electrons are in their ground state (Born-Oppenheimer approximation). Therefore, molecular dynamics can be started only after electronic convergence with respect to the initial atomic configuration has been achieved as described in section 2.3. This rather tedious procedure is necessary at the start of a calculation. The required computing time depends on the initial choice of the LOs and $`\mu`$. To be safe, we start with random coefficients in Eq. (6) and a high $`\mu`$.

To ensure stable MD integration, the time step must be small compared to a typical optical vibration period. The corresponding atomic displacements are then small and typically do not strongly perturb the LO coefficients, except when atoms move out of or into certain LRs. Therefore, after each atomic move, enough electronic iterations must be performed in order to reach the slightly modified ground state (within a relative tolerance $`<10^{-4}`$). Fortunately, only small adjustments of $`\mu`$ are necessary once global charge neutrality has been established; they can be performed automatically. The number of electronic steps depends on the system: the narrower its energy gap at the Fermi level (a gap always exists in a finite system), the more electronic steps are needed. Useful *O(N)* performance is achieved if this number of electronic iterations is independent of N and if the necessary computing time is less than that required for diagonalization.
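A schematic sketch of such a damped Verlet relaxation (illustrative only; the force routine, time step and damping coefficient are placeholders, and unit masses are assumed) could look as follows:

```python
import numpy as np

def damped_verlet_relax(R, force, dt=0.01, gamma=1.0, ftol=0.01, max_steps=20000):
    """Relax positions R (n_atoms x 3) by velocity Verlet with a friction term.

    force(R) must return F_I = -dE/dR_I for the *current electronic ground
    state*, i.e. the electronic minimization of section 2.3 is assumed to be
    reconverged inside force() after every move (Born-Oppenheimer condition).
    Unit masses are assumed; gamma near critical damping relaxes fastest.
    """
    V = np.zeros_like(R)
    F = force(R)
    for step in range(max_steps):
        V += 0.5 * dt * (F - gamma * V)       # half kick, including friction -gamma*V
        R = R + dt * V                        # drift
        F = force(R)                          # forces at the new positions
        V += 0.5 * dt * (F - gamma * V)       # second half kick
        if np.abs(F).max() < ftol:            # every force component below tolerance
            return R, step
    raise RuntimeError("relaxation did not converge within max_steps")

# toy usage: a particle in a harmonic well, force = -k R with k = 5
R_min, nsteps = damped_verlet_relax(np.array([[1.0, 0.0, 0.0]]), lambda R: -5.0 * R)
print(R_min, nsteps)
```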
### 2.5 Local Charge Neutrality

If significant atomic rearrangements occur in the course of a molecular dynamics simulation, it is sometimes difficult to avoid unphysical charge transfer between neighboring atoms or layers. To reduce such effects, which slow down convergence and sometimes lead to unphysical solutions, local charge neutrality can be approximately imposed by adding an extra term $`H_U=\frac{1}{2}U\sum_I(q_I-q_I^0)^2`$, where $`q_I^0=4.0`$ for *Si* and 1.0 for H atoms, and the charge around site *I* is expressed as
$$q_I=2\sum_{ij}(2\delta_{ij}-S_{ij})<\varphi_i|\mathbf{R}_I><\mathbf{R}_I|\varphi_j>$$ (9)
where $`<\varphi_i|\mathbf{R}_I>`$ indicates the projection of the localized orbital $`\varphi_i`$ onto the localization region around site *I*. Such a term is obtained by making a local mean-field approximation to the Hubbard Hamiltonian and assuming no spin-polarized solutions. The strength *U* of the Hubbard-like term has been estimated and tabulated by Harrison, who also discussed its reduction by dielectric polarization. In covalent systems such Hubbard-like contributions must essentially vanish, once convergence is achieved, for atoms which have a bulk-like environment.

## 3 Applications

In this section we present our recent test calculations with the method described above, which has hitherto been applied mainly to carbon systems. Our ultimate goal is to simulate atomic force microscopy (AFM) and manipulation with a Si tip. We present and discuss our results on *Si(111)* and *Si(001)* surface reconstructions which have previously been studied by tight-binding and *ab-initio* methods, and which exhibit characteristic features due to rebonding of surface atoms. Such reconstructions reduce the density of energetically unfavourable "dangling bonds" on surface atoms, but induce distortions from the tetrahedral bonding pattern of the bulk. Optimum surface structures represent a delicate balance between different effects and cannot usually be guessed from chemical and physical intuition. These reconstructed surfaces are therefore well suited for validating the method and are also interesting candidates for controlled atom manipulation experiments.

Surface properties are extracted from computations on slabs with a finite number of layers. This number must be large enough to suppress artificial coupling between the free slab surfaces, which can arise owing to the overlap of surface states and/or to strain fields induced by atomic rearrangements in the surface layers. In order to approximate the effect of the crystalline substrate, atoms in the two central layers are held in their bulk positions. Alternatively, the bottom two layers are fixed, and all exposed dangling bonds are passivated with hydrogen atoms so as to preserve tetrahedral coordination. Compared to the former, symmetric slab, similar accuracy is then expected with half the number of free layers.

Having this in mind, we have investigated the influence of factors which, if chosen properly, should have little effect on physically meaningful results. These include the number of free and fixed (bulk-like) layers in the slabs used, the influence of H-passivation at the bottom, the shape and lateral dimensions of the computational supercell, the size of the localization regions, and the tight-binding parametrization, always starting from configurations which had the periodicity and some of the rebonding characteristics of the desired reconstructions. All this required many computations, which were performed on moderately sized systems in order to economize computing time. Thus the emphasis here is on validation rather than on achieving *O(N)* performance, although the latter was demonstrated in the larger systems containing a few hundred atoms.
### 3.1 Si(111)-5x5 Reconstruction

The metastable 5$`\times`$5 reconstruction plays an important role in the conversion from the 2$`\times`$1 reconstruction, obtained upon cleavage, to the stable 7$`\times`$7 reconstruction of the Si(111) surface. It exhibits the characteristic features of the DAS model first proposed for Si(111)-7x7, which are indicated in Fig. 1. One important difference is that the 5$`\times`$5 reconstruction has the same average density of atoms in the surface layers as the bulk-terminated Si(111) surface, whereas additional Si adatoms must be supplied to form the 7$`\times`$7 reconstruction.

#### 3.1.1 Computational Details

We start with a ten-layer-thick inversion-symmetric slab which has two bulk-truncated (111) surfaces at the top and bottom, encompassing two adjacent 5$`\times`$5 surface unit cells. Periodic boundary conditions are maintained in the lateral directions. We have also repeated some of the computations with a six-layer slab passivated by H atoms at the bottom, similar to that used by Adams and Sankey. Following these authors, the initial configuration on the free surface(s) is set up by laterally shifting ten atoms in the surface layer on the left side of each 5$`\times`$5 cell so that they are placed above fourth-layer atoms. Then five atoms at one edge of the other, "unfaulted" triangular half and the second-layer atom above the "corner hole" are placed at the positions of the adatoms so that their bond lengths to the nearest first-layer atoms are equal to the bulk interatomic distance $`r_0`$.

During the combined relaxation of electrons and ions, dimers spontaneously formed along the boundaries between "faulted" and "unfaulted" halves shortly after we started the damped molecular dynamics calculation. At the end, we obtained a relaxed structure consistent with the DAS model illustrated in Fig. 1. Of the 25 dangling bonds per 5$`\times`$5 cell on the truncated (111) surface only 9 are left: one on the corner hole atom (CH), two on the rest atoms (R) and six on the adatoms (AD). The former three are doubly occupied, whereas surface states derived from those on the adatoms are partially occupied by the remaining three electrons. Thus one expects the $`Si(111)5\times 5`$ surface to have metallic properties, just like $`Si(111)7\times 7`$. As a matter of fact, a posteriori diagonalization revealed a tiny "energy gap" ($`<0.01eV`$) in our relaxed structure.

One might also expect problems due to the assumed double occupancy. In fact, relaxation in a slab encompassing a single $`5\times 5`$ surface unit cell produced a distorted DAS structure with twisted dimers. On the other hand, no such distortions have been found in previous $`O(N^3)`$ computations of $`Si(111)7\times 7`$ with similar slabs which relied on occupied eigenstates at the $`\mathrm{\Gamma}`$-point (zero wave-vector) obtained by diagonalization or by total energy minimization with orthonormality constraints. Deviations might also arise owing to the small size of the localization region (n=3) in our O(N) computations. Fortunately this is not the case, as discussed in the following section, provided that slabs encompassing an even number of dangling bonds on each surface are used.

#### 3.1.2 Relaxed Geometry

Our relaxed configuration exhibits the common characteristic features found in previous computations of the $`5\times 5`$ and of the $`7\times 7`$ reconstructions.
In what follows we compare results obtained with Bowler's parametrization, localization regions encompassing third neighbours, and a $`5\times 10`$ computational cell containing four free layers (not counting the adatoms) and two fixed Si layers, with the bottom one saturated by H atoms. In order to check the influence of the number of free and fixed (bulk-like) layers in the slab, of the passivation by H at the bottom, of the size of the localization regions, and of the tight-binding parametrization, we have performed many calculations of the relaxed geometry of the $`5\times 5`$ reconstruction. From the results summarized in Appendix A one can conclude that the influence of these factors on the final configuration is negligibly small. This is a very encouraging result.

Furthermore, the bond lengths listed in Table 1 are remarkably close (except for dimers) to those found in recent TB calculations for the $`7\times 7`$ reconstruction based on a symmetric slab with one free layer less on each side. Small deviations occur, however, compared to bond lengths extracted from an LDA computation for the same system. Similar deviations also occur compared to an earlier TB computation for a H-passivated slab with two fewer free layers. In that case, the deviations might be due to that restriction or to the somewhat different parametrization.

More significant deviations appear when the height differences between adatoms and rest atoms in the two halves of the surface unit cells are compared (see Table 2). Our computed differences are larger than those found in previous computations for the $`7\times 7`$ reconstruction with a smaller number of free layers. On the other hand, the height difference between adatoms is only half of that found in a pioneering computation for $`Si(111)5\times 5`$ based on a H-passivated slab like ours. However, in contrast to that work, we found that the adatoms in each half are equivalent, just like corner hole and central adatoms are on the $`Si(111)7\times 7`$ surface. These two discrepancies are probably due to the non-self-consistent LDA-based approximation used in Ref. . TB computations, including ours, are only rudimentarily self-consistent if the Hubbard term is included, but since the parameters are fitted to experimental data and/or self-consistent LDA computations, they can yield better results.

#### 3.1.3 Convergence, Accuracy and Determination of Surface Energies

The accuracy, efficiency and performance of our O(N) TB computations can be judged on the basis of the results reported in Table 3 for the system described in the preceding section. Computations were performed on a single processor of a DEC-Alpha 8400 machine; nLR refers to unconstrained minimizations with localization regions encompassing neighbours connected by n bonds. Diagonalization was performed at the end of the 3LR minimization. The first three rows refer to the unbiased but costly minimizations starting with orthonormalized LOs with random coefficients for the initial configuration. It is gratifying that the subsequent combined optimization of LO coefficients and atomic positions takes about the same time. More importantly, the ratio of CG to MD steps implies that only three CG steps per MD step are required on average to reconverge the coefficients. A comparison of the last two columns suggests that the 3LR minimization yields total energies with a relative accuracy of about $`10^{-4}`$.
The specified tolerance on the relative energy difference between successive CG iterations (our convergence criterion) was, of course, much smaller. Note that diagonalization yields eigenstates corresponding to the $`\mathrm{\Gamma}`$-point of the computational supercell. More accurate total energies could be obtained by including occupied eigenstates with nonzero parallel wave-vectors in Eq. (2) or by increasing the lateral dimensions of the supercell (the only alternative in the case of O(N) computations).

The surface energy differences $`\mathrm{\Delta}E_s`$ which determine the relative stability of possible reconstructions are usually approximated by differences between total energies per projected $`1\times 1`$ surface unit cell, computed in the same computational slab with the desired reconstruction on one face and the same reference structure on the other. This approximation is reasonable if the system is sufficiently large, in particular large enough to effectively decouple the two faces. On the other hand, an upper bound on the error in $`\mathrm{\Delta}E_s`$ is given by the product of the number of layers, the energy per unit cell of the substrate, and the relative accuracy. According to Table 3 this amounts to about $`0.05eV`$.

Computed values of $`\mathrm{\Delta}E_s`$ at the 3LR level with respect to the truncated Si(111) surface are only -0.19 eV and -0.13 eV for the symmetric and H-passivated slabs described in section 3.1.1. The difference between these two values is disturbingly close to our estimated error bound. Furthermore, previous estimates from self-consistent LDA computations of the surface energy differences between the $`5\times 5`$ and the $`2\times 1`$ and $`7\times 7`$ reconstructions, which are relevant for understanding their growth, amount to -0.06 eV and 0.02 eV, respectively. This implies that O(N) computations beyond the 3LR level of accuracy will be required to distinguish them reliably. On the positive side, note that $`\mathrm{\Delta}E_s`$ = -0.15 eV is found upon diagonalization for both above-mentioned slabs. Finally, the rather different values of $`\mathrm{\Delta}E_s`$ obtained in Refs. and , namely -0.395 eV and 0.56 eV, suggest that TB parametrization and non-self-consistency are delicate issues which should be addressed.

### 3.2 Si(001)-c(4x2) Reconstruction

A truncated 1$`\times`$1 Si(001) surface contains many unsaturated bonds, and the system tends to minimize its energy by reconstructing its surface. Theoretical and experimental evidence shows that this reconstruction causes dimers to appear on the surface, i.e. surface atoms move toward each other to form pairs. Furthermore, these dimers are tilted, i.e. asymmetric with respect to the bulk-terminated surface. The dimers can arrange themselves in various patterns on the surface, and thus many reconstructions of the surface are possible. We have made a less extensive set of O(N) computations for the Si(001) surface. Following a period of controversy, several LDA studies, in particular the exhaustive one of Ramstad et al, have confirmed that the $`Si(001)c(4\times 2)`$ reconstruction schematically illustrated in Fig. 2(b), first predicted in a pioneering TB computation, has in fact the lowest energy. The $`p(2\times 2)`$ structure, characterized by identical rows of alternately tilted Si dimers, is only marginally metastable, while the 2$`\times`$1 structure with untilted, symmetric dimers is unstable at low temperatures and 0.1 eV higher in energy per projected (1$`\times`$1) surface unit cell.
We report O(N) computations performed with Bowler et al's parametrization and n=3 LRs in the 4$`\times 2`$ surface unit cell indicated by shading in Fig. 2(b). Our computational slab consisted of six free Si layers and two fixed layers, with the pairs of dangling bonds at the bottom passivated by H atoms in order to approximate a connection to the bulk crystalline silicon substrate. Starting from a configuration with slightly preformed untilted dimers, our computations converged towards a $`c(4\times 2)`$ reconstruction, although the $`p(2\times 2)`$ one was obtained in some cases. As can be seen from Table 4, the resulting pattern of atomic displacements is reproduced correctly, the relaxed coordinates being within the spread of values from previous computations.

Relevant deviations are more evident in Table 5. The dimer tilt, which costs little energy, is quite sensitive to the level of approximation. The deviation of the dimer bond length from the bulk Si-Si distance (2.35 Å), in the opposite direction to the LDA prediction, is more serious and disappointing. Indeed, Bowler et al claimed that their parametrization would cure this discrepancy. More computations are needed to check the extent to which the above-mentioned deviations are affected by computational approximations, and to extract reliable surface energy differences. A posteriori diagonalization gives a 0.9 *eV* band gap, a value which is reasonable, but should not be taken too seriously because the TB parameters are fitted to occupied valence bands and to ground-state properties. The existence of a band gap is consistent with the known semiconducting nature of the $`c(4\times 2)`$ reconstruction.

## 4 Conclusions

Summarizing the results described in section 3 and Appendix A, we conclude that the local-orbital-based linear scaling TB scheme proposed by Kim et al reproduces the correct configurations of representative silicon surface reconstructions. Except for a few discrepancies which can be traced to inadequacies of the tight-binding description itself, satisfactory geometries are obtained with the simpler parametrization of Bowler et al and with local orbitals constrained to vanish beyond second nearest neighbours. This even applies to the metallic $`Si(111)5\times 5`$ surface, provided that the computational surface unit cell encompasses an even number of dangling bonds. These encouraging results open the door to applications to larger systems exploiting the linear-scaling capability of this new computational scheme, e.g. involving interactions on and between silicon clusters and surfaces exposing different faces, with or without passivating hydrogen atoms. It is, however, important to keep in mind that a higher level of accuracy appears to be required to quantitatively describe surface energy differences between alternative (meta)stable structures. This aspect is currently under study.

## 5 Acknowledgements

This work was supported by the Swiss National Foundation for Scientific Research under the programme NFP36 "Nanosciences". The first two authors are grateful for the facilities and services provided by the computing center and the Institute of Physics of the University of Basel. They wish to thank Prof. H.-J. Güntherodt for his encouragement, and D. Bowler and R. Härle for discussions.

## Appendix A Influence of Various Factors

As explained in section 2.3, the use of local orbitals assumed to vanish outside finite localization regions is the main approximation leading to linear scaling.
O(N) TB computations are efficient if the range of the TB interactions and the size of the LRs (the number of neighbours connected to the central atom by n bonds) can be chosen as small as possible without unduly sacrificing accuracy. For this reason we have performed test calculations with n=2 and n=3 LRs for the two TB parametrizations described in section 3.1 and for different slabs. Representative bond lengths obtained for the $`Si(111)5\times 5`$ reconstruction are shown in Table 6. Columns 1 and 2 show the results obtained with the complex and the simpler parametrizations using a symmetric slab with 8 layers (not counting adatoms) and n=3 LRs. Column 3 shows the results obtained with the same LRs and TB parameters as in column 2, but for a slab with 6 layers passivated by hydrogen atoms at the bottom. Column 4 shows results obtained for the same TB parameters and slab as in column 3, but using n=2 LRs. From these results one can see that all these factors have little influence on the final relaxed geometry.

## Appendix B The matrix element $`<\alpha_{Jl}|\widehat{H}_{TB}|\alpha_{J^{\prime}l^{\prime}}>`$

When $`J=J^{\prime}`$, the matrix element is given by
$$<\alpha_{Jl}|\widehat{H}_{TB}|\alpha_{J^{\prime}l^{\prime}}>=\begin{cases}E_s&\text{if }l=l^{\prime}=s\\ E_p&\text{if }l=l^{\prime}=p_x,p_y,p_z\\ 0&\text{if }l\neq l^{\prime}\end{cases}$$

When $`J\neq J^{\prime}`$, the various matrix elements can be written as
$`E_{Js,J^{\prime}s}=V_{ss\sigma}`$
$`E_{Js,J^{\prime}p_x}=-E_{Jp_x,J^{\prime}s}=lV_{sp\sigma}`$
$`E_{Js,J^{\prime}p_y}=-E_{Jp_y,J^{\prime}s}=mV_{sp\sigma}`$
$`E_{Js,J^{\prime}p_z}=-E_{Jp_z,J^{\prime}s}=nV_{sp\sigma}`$
$`E_{Jp_x,J^{\prime}p_y}=E_{Jp_y,J^{\prime}p_x}=lm(V_{pp\sigma}-V_{pp\pi})`$
$`E_{Jp_x,J^{\prime}p_z}=E_{Jp_z,J^{\prime}p_x}=ln(V_{pp\sigma}-V_{pp\pi})`$
$`E_{Jp_y,J^{\prime}p_z}=E_{Jp_z,J^{\prime}p_y}=mn(V_{pp\sigma}-V_{pp\pi})`$
$`E_{Jp_x,J^{\prime}p_x}=l^2V_{pp\sigma}+(1-l^2)V_{pp\pi}`$
$`E_{Jp_y,J^{\prime}p_y}=m^2V_{pp\sigma}+(1-m^2)V_{pp\pi}`$
$`E_{Jp_z,J^{\prime}p_z}=n^2V_{pp\sigma}+(1-n^2)V_{pp\pi}`$
where $`l,m,n`$ are the direction cosines of the vector $`\mathbf{R}_{J^{\prime}}-\mathbf{R}_J`$:
$$l=\frac{R_{J^{\prime}x}-R_{Jx}}{|\mathbf{R}_{J^{\prime}}-\mathbf{R}_J|},\qquad m=\frac{R_{J^{\prime}y}-R_{Jy}}{|\mathbf{R}_{J^{\prime}}-\mathbf{R}_J|},\qquad n=\frac{R_{J^{\prime}z}-R_{Jz}}{|\mathbf{R}_{J^{\prime}}-\mathbf{R}_J|}$$
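A minimal sketch of the sp$`^3`$ Slater-Koster block of Appendix B (illustrative only; the two-center values are placeholders rather than a fitted parametrization, and in a real computation they would be evaluated at $`|\mathbf{R}_{J^{\prime}}-\mathbf{R}_J|`$ with the scaling function of Eq. (3)):

```python
import numpy as np

def sk_block(d, V):
    """4x4 block <alpha_Jl|H|alpha_J'l'> for orbitals ordered (s, px, py, pz).

    d is the vector R_J' - R_J; V maps 'ss_sigma', 'sp_sigma', 'pp_sigma',
    'pp_pi' to the two-center matrix elements at the separation |d|.
    """
    l, m, n = d / np.linalg.norm(d)           # direction cosines of R_J' - R_J
    E = np.empty((4, 4))
    E[0, 0] = V['ss_sigma']
    for a, c in zip((1, 2, 3), (l, m, n)):
        E[0, a] = c * V['sp_sigma']           # <s_J|H|p_J'> =  c * V_sp_sigma
        E[a, 0] = -c * V['sp_sigma']          # <p_J|H|s_J'> = -c * V_sp_sigma
    for a, ca in zip((1, 2, 3), (l, m, n)):
        for b, cb in zip((1, 2, 3), (l, m, n)):
            if a == b:
                E[a, a] = ca**2 * V['pp_sigma'] + (1 - ca**2) * V['pp_pi']
            else:
                E[a, b] = ca * cb * (V['pp_sigma'] - V['pp_pi'])
    return E

V = {'ss_sigma': -2.0, 'sp_sigma': 1.7, 'pp_sigma': 2.8, 'pp_pi': -1.1}  # eV, illustrative
print(sk_block(np.array([2.35, 0.0, 0.0]), V))  # bond along x: only l = 1 survives
```

For a bond along x only the $`\sigma`$ couplings of the s and $`p_x`$ orbitals survive, while $`p_y`$ and $`p_z`$ interact through $`V_{pp\pi}`$ alone, which is a quick consistency check on the table above.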