arXiv: quant-ph/9908016 (ar5iv extraction)
## 1 Introduction

It is well known that potentials having the form of a sombrero are convenient for explaining the spontaneous breaking of a continuous symmetry. On the other hand, until now there has been no quantitative analysis of the motion of a quantum particle in this potential. The reason seems to be that the sombrero, as a smooth potential, is a polynomial of the fourth power, and for that potential the Schrödinger equation is not exactly solvable. However, there is no contradiction in abandoning the smoothness condition and constructing a sombrero by means of the quadratic potential $`W(\rho ,\rho _0)=\mu \omega ^2(\rho -\rho _0)^2/2`$, where $`\rho =\sqrt{x^2+y^2}`$ and $`\rho _0\in [0,\infty )`$ is a parameter. The one-dimensional version of the potential $`W(\rho ,\rho _0)`$ is known as a double oscillator . The smoothness of $`W(\rho ,\rho _0)`$ is broken at the point $`\rho =0`$. Nevertheless, the potential $`W(\rho ,\rho _0)`$ is still complicated, as the corresponding radial equation includes, along with the terms $`\rho ^2`$ and $`\rho ^{-2}`$, the linear term $`\rho `$. This type of equation is not exactly solvable. One can go further and break the smoothness condition not at one but at an infinite number of points. The simplest possibility is realized by the potential $`V(\rho ,\rho _0)=\mu \omega ^2|\rho ^2-\rho _0^2|/2`$, which we call the parabolic sombrero. Here, the smoothness is broken along the circle of radius $`\rho _0`$. For $`y=0`$ (or $`x=0`$) the parabolic sombrero transforms into a two-center oscillator considered in paper . The aim of the present article is to calculate the energy spectrum and wave functions of a quantum particle placed in the parabolic sombrero potential.

## 2 Spectral Equation

The Hamiltonian of the parabolic sombrero has the form
$$H(\rho ,\rho _0)=-\frac{\hbar ^2}{2\mu }\left(\frac{\partial ^2}{\partial \rho ^2}+\frac{1}{\rho }\frac{\partial }{\partial \rho }+\frac{1}{\rho ^2}\frac{\partial ^2}{\partial \varphi ^2}\right)+\frac{\mu \omega ^2}{2}|\rho ^2-\rho _0^2|,$$
where $`\rho `$ and $`\varphi `$ are polar coordinates of the particle: $`0\le \rho <\infty `$, $`0\le \varphi <2\pi `$. To this Hamiltonian there correspond two radial equations
$$\frac{d^2R_{in}}{dr^2}+\frac{1}{r}\frac{dR_{in}}{dr}+\left(\frac{r^2}{4}-\frac{m^2}{r^2}-\xi _{in}\right)R_{in}=0,$$
$$\frac{d^2R_{out}}{dr^2}+\frac{1}{r}\frac{dR_{out}}{dr}-\left(\frac{r^2}{4}+\frac{m^2}{r^2}+\xi _{out}\right)R_{out}=0,$$
where $`r=(2\mu \omega /\hbar )^{1/2}\rho `$, $`r_0=(2\mu \omega /\hbar )^{1/2}\rho _0`$, $`\epsilon =E/(\hbar \omega )`$, and $`m\in 𝐙`$ is the eigenvalue of the angular momentum operator $`\widehat{L}=-i\partial /\partial \varphi `$,
$$\xi _{in}=r_0^2/4-\epsilon ,\qquad \xi _{out}=-r_0^2/4-\epsilon .$$
The first equation holds in the inner region $`[0,r_0)`$ of the sombrero, while the second holds in its outer region $`(r_0,\infty )`$. We are interested in the solutions finite at $`r=0`$ and vanishing as $`r\to \infty `$.
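As a cross-check on the closed-form treatment that follows, the eigenvalues can also be located by direct shooting: integrate the inner equation out from $`r\approx 0`$ (where $`R\sim r^{|m|}`$), integrate the outer equation inward from large $`r`$ along its decaying branch, and scan $`\epsilon `$ for a zero of the logarithmic-derivative mismatch at $`r=r_0`$. A minimal sketch follows (Python with NumPy/SciPy assumed; the starting radius, the cutoff `r_inf` and the tolerances are illustrative choices, not values from the paper).

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def mismatch(eps, m, r0, r_inf=12.0):
    """Difference of logarithmic derivatives R'/R at r = r0 (zero at an eigenvalue).

    Inner: R'' + R'/r + (r^2/4 - m^2/r^2 - xi_in) R = 0,  xi_in  =  r0^2/4 - eps
    Outer: R'' + R'/r - (r^2/4 + m^2/r^2 + xi_out) R = 0, xi_out = -r0^2/4 - eps
    """
    xi_in, xi_out, am = r0**2/4 - eps, -r0**2/4 - eps, abs(m)

    inner = lambda r, y: [y[1], -y[1]/r - (r**2/4 - am**2/r**2 - xi_in)*y[0]]
    outer = lambda r, y: [y[1], -y[1]/r + (r**2/4 + am**2/r**2 + xi_out)*y[0]]

    r_s = 1e-3                          # start near the origin with R ~ r^|m|
    yi = solve_ivp(inner, (r_s, r0), [r_s**am, am*r_s**max(am - 1, 0)],
                   rtol=1e-10, atol=1e-12).y[:, -1]
    yo = solve_ivp(outer, (r_inf, r0), [1.0, -r_inf/2],  # decaying branch at large r
                   rtol=1e-10, atol=1e-12).y[:, -1]
    return yi[1]/yi[0] - yo[1]/yo[0]

# lowest m = 0 level at r0 = 1 (expected somewhat below the oscillator value 1):
# eps0 = brentq(lambda e: mismatch(e, 0, 1.0), 0.5, 1.0)
```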
These conditions select the functions
$$R_{in}(r)=C_{in}r^{|m|}e^{-ir^2/4}F(\alpha ,\gamma ;ir^2/2),$$
$$R_{out}(r)=C_{out}r^{|m|}e^{-r^2/4}\mathrm{\Psi }(a,\gamma ;r^2/2).$$
Here $`F`$ and $`\mathrm{\Psi }`$ are two independent solutions of the confluent hypergeometric equation:
$$F(\alpha ,\gamma ;z)=1+\frac{\alpha }{\gamma }\frac{z}{1!}+\frac{\alpha (\alpha +1)}{\gamma (\gamma +1)}\frac{z^2}{2!}+\mathrm{\dots },$$
$$\mathrm{\Gamma }(a)\mathrm{\Psi }(a,b;z)=\int _0^{\infty }e^{-zt}t^{a-1}(1+t)^{b-a-1}\,dt,$$
$$\alpha =\frac{|m|+1-i\xi _{in}}{2},\qquad a=\frac{|m|+1+\xi _{out}}{2},\qquad \gamma =|m|+1,$$
and $`C_{in}`$ and $`C_{out}`$ are normalization constants. The formulae from the theory of the confluent hypergeometric equation used below are taken from the monograph . Let us require the equality of the logarithmic derivatives of the functions $`R_{in}`$ and $`R_{out}`$ at the point $`r=r_0`$. This condition works for $`0<r_0<\infty `$. For $`r_0=0`$ the smoothness at $`r=r_0`$ holds just for $`R_{in}`$ and $`R_{out}`$, but not for their derivatives (see for example ). After using the formulae
$$F^{\prime }(\alpha ,\gamma ;z)=\frac{\alpha }{\gamma }F(\alpha +1,\gamma +1;z),$$
$$\mathrm{\Psi }^{\prime }(a,\gamma ;z)=-a\mathrm{\Psi }(a+1,\gamma +1;z),$$
we come to the spectral equation
$$\frac{i\alpha }{\gamma }\frac{F(\alpha +1,\gamma +1;iz_0)}{F(\alpha ,\gamma ;iz_0)}-\frac{i}{2}=-a\frac{\mathrm{\Psi }(a+1,\gamma +1;z_0)}{\mathrm{\Psi }(a,\gamma ;z_0)}-\frac{1}{2},$$
where $`z_0=r_0^2/2`$. Note that from the Kummer transformation
$$F(\alpha ,\gamma ;z)=e^zF(\gamma -\alpha ,\gamma ;-z),$$
the relation $`\gamma -\alpha ^{*}=\alpha `$ and the recurrence formula
$$(\alpha -\gamma )F(\alpha ,\gamma +1;ir_0^2/2)=-\gamma F(\alpha ,\gamma ;ir_0^2/2)+\alpha F(\alpha +1,\gamma +1;ir_0^2/2)$$
it follows that the left-hand side of the spectral equation is real. From the Kummer transformation it also follows that the function $`R_{in}`$ is real. In the next section we discuss the results obtained from the spectral equation by numerical calculations.

## 3 Energy Levels

For $`r_0=0`$ the parabolic sombrero transforms into the circular oscillator, for which the state is determined by the quantum numbers $`(n_r,m)`$, and the $`n`$-th energy level is given by the formula $`\epsilon =n+1`$ and has multiplicity $`g_n=n+1`$ ($`n_r`$ is the number of zeroes of the radial wave function in the region $`(0,\infty )`$, $`n=2n_r+|m|`$). The inclusion of the parameter $`r_0`$ splits the energy levels and transforms them into an infinite set of intersecting lines in the plane $`(\epsilon ,r_0)`$, composing a complicated picture (see Fig. 1). Let us separate the energy lines of the parabolic sombrero into clusters of three types: with fixed $`n`$ ($`n`$-cluster), fixed $`|m|`$ ($`|m|`$-cluster) and fixed $`n_r`$ ($`n_r`$-cluster).

a. The $`n`$-cluster possesses $`(n/2+1)`$ or $`(n+1)/2`$ lines for even and odd $`n`$, respectively (see Fig. 2). For large values of the parameter $`r_0`$, the lines of the $`n`$-cluster are widely separated from each other. With decreasing parameter $`r_0`$ the distance between the lines of the $`n`$-cluster decreases. Beginning at some $`r_0`$, particular to every line, the lines jump above the top of the parabolic sombrero, and with the further decrease of $`r_0`$ the lines combine into one $`(n+1)`$-fold degenerate energy level of a circular oscillator.

b. Every $`|m|`$-cluster includes an infinite number of lines (see Fig. 3).
With growth of the parameter $`r_0`$, each line of the $`|m|`$-cluster located above the top of the sombrero first slightly lowers, then grows and, starting from some parameter $`r_0`$ particular to each line, is captured by the circular wall. Under further growth of $`r_0`$ the lines of the $`|m|`$-cluster grow with different velocities: the larger $`n_r`$, the faster the growth of the line.

c. Let us compare the lines of the $`|m|`$-cluster with $`m=0`$ and the energy levels of the two-center quantum oscillator . The equation describing the two-center oscillator cannot be obtained from the radial equation of the parabolic sombrero: if we substitute $`m=0`$ and $`R=f/r^{1/2}`$ into the radial equation, we eliminate the centrifugal potential $`m^2/r^2`$ and delete the term with the first derivative, but then an additional centrifugal potential arises. This new centrifugal potential influences the energy spectrum, as a result of which the spectroscopy of the parabolic sombrero with $`m=0`$ and the spectroscopy of the two-center oscillator are not identical. As we can see in Fig. 4, with the increase of the parameter $`r_0`$ the energy levels of the two-center quantum oscillator merge in pairs, and between them lies the line corresponding to the parabolic sombrero.

d. Quite interesting is the behavior of the lines of the $`n_r`$-clusters (see Fig. 5). Every $`|m|`$-cluster, as well as every $`n_r`$-cluster, includes an infinite number of lines. With the growth of the parameter $`r_0`$, the lines of the $`n_r`$-cluster gradually come together, are then captured by the circular wall, and, continuing to approach, merge into one line. Thus, for $`r_0\to \infty `$ we have an infinite number of levels, each being an infinitely degenerate $`n_r`$-cluster.

e. Using the known formula
$$\frac{\partial E_{n_r,m}}{\partial \rho _0}=\left(\frac{\partial \widehat{H}}{\partial \rho _0}\right)_{n_r,m;n_r,m},$$
we obtain
$$\frac{\partial E_{n_r,m}}{\partial \rho _0}=\mu \omega ^2\rho _0\left[2\int _0^{\rho _0}(R_{n_r,m}^{in})^2\rho \,d\rho -1\right].$$
The integral in the brackets is a monotonically increasing function of $`\rho _0`$ with the range of values $`[0,1]`$. Expanding it in powers of $`\rho _0`$ (for small $`\rho _0`$) and $`1/\rho _0`$ (for large $`\rho _0`$), we obtain
$$\epsilon _{n_r,m}(r_0)\approx 2n_r+|m|+1-\frac{r_0^2}{4},\qquad \text{for }r_0\ll 1,$$
$$\epsilon _{n_r,m}(r_0)\approx \frac{r_0^2}{4}-A_{n_r,m}r_0,\qquad \text{for }r_0\gg 1,$$
where $`A_{n_r,m}`$ is a matrix of positive elements independent of $`r_0`$. As will be shown below (see Conclusion), for extremely large values of the parameter $`r_0`$ the quantities $`A_{n_r,m}`$ cease to depend on the quantum number $`m`$. We come to the following conclusions: firstly, the corrections to the spectrum of the circular oscillator $`(r_0\ll 1)`$ are not linear but quadratic in $`r_0`$; secondly, after the capture by the wall the energy levels accumulate near the top of the sombrero, forming a ring whose thickness is $`r_0`$ times less than the height of the top of the sombrero. The last conclusion is confirmed by Fig. 5.
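The lines in Figs. 1–5 can be retraced from the spectral equation of Sec. 2 by elementary root finding. A minimal sketch (Python with mpmath assumed; mpmath's `hyp1f1` and `hyperu` play the roles of $`F`$ and $`\mathrm{\Psi }`$, and the starting guess is an illustrative choice):

```python
import mpmath as mp

def spectral_residual(eps, m, r0):
    """Real residual of the spectral equation at z0 = r0^2/2; zero at an eigenvalue."""
    z0 = r0**2/2
    gam = abs(m) + 1                        # gamma = |m| + 1
    alpha = (gam - 1j*(r0**2/4 - eps))/2    # alpha = (|m| + 1 - i xi_in)/2
    a = (gam + (-r0**2/4 - eps))/2          # a = (|m| + 1 + xi_out)/2

    lhs = (1j*alpha/gam)*mp.hyp1f1(alpha + 1, gam + 1, 1j*z0) \
          / mp.hyp1f1(alpha, gam, 1j*z0) - 0.5j
    rhs = -a*mp.hyperu(a + 1, gam + 1, z0)/mp.hyperu(a, gam, z0) - 0.5
    return mp.re(lhs - rhs)   # lhs is real up to rounding, by the Kummer identity

# one point of the lowest m = 0 line, e.g. at r0 = 1:
# eps0 = mp.findroot(lambda e: spectral_residual(e, 0, 1.0), 0.8)
```

Tracking a root while stepping $`r_0`$, with the previous root as the next starting guess, traces out one energy line of a cluster.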
## 4 Probability Distribution

Let us introduce two functions
$$D_{in}(r)=r^{|m|}e^{-ir^2/4}F(\alpha ,\gamma ;ir^2/2),$$
$$D_{out}(r)=r^{|m|}e^{-r^2/4}\mathrm{\Psi }(a,\gamma ;r^2/2),$$
and rewrite the functions $`R_{in}`$ and $`R_{out}`$ in the form
$$R_{in}(r)=C_{in}D_{in}(r),\qquad R_{out}(r)=C_{out}D_{out}(r).$$
To find the normalization constants, we just sew together $`R_{in}`$ and $`R_{out}`$ at the point $`r=r_0`$:
$$C_{in}D_{in}(r_0)=C_{out}D_{out}(r_0),$$
and demand
$$C_{in}^2\int _0^{r_0}D_{in}^2\,r\,dr+C_{out}^2\int _{r_0}^{\infty }D_{out}^2\,r\,dr=1.$$
It is easy to conclude from these two equations that
$$C_{in}=D_{out}(r_0)/Q(r_0),\qquad C_{out}=D_{in}(r_0)/Q(r_0),$$
where
$$Q(r_0)=\left[D_{out}^2(r_0)\int _0^{r_0}D_{in}^2(r)\,r\,dr+D_{in}^2(r_0)\int _{r_0}^{\infty }D_{out}^2(r)\,r\,dr\right]^{\frac{1}{2}}.$$
The obtained formulae allow one to look at the picture of the motion of the energy levels from the point of view of the probability distribution. Diagrams of numerical calculations of the functions $`rR^2`$ for $`n_r=3`$, $`m=0`$ and different values of the parameter $`r_0`$ are presented in Fig. 6. Considering these diagrams in ascending order of the parameter $`r_0`$, we see that they confirm the general scenario of the motion of the levels described in the previous section. Fig. 6a corresponds to the circular oscillator. In Fig. 6b the moment of capture of the level by the wall ($`r_c`$ is the value of the parameter $`r_0`$ at which the capture takes place) is drawn. Figs. 6c and 6d demonstrate the shift of the particle together with the wall away from the origin of coordinates with further increase of the parameter $`r_0`$.

## 5 Conclusion

In conclusion, let us discuss the limit $`r_0\to \infty `$ more thoroughly. As $`r_0\to \infty `$ the height of the barrier ($`r_0^2/4`$) increases, and that is why the wave functions corresponding to the energy levels in the wall differ from zero only in the region of large $`r`$, where the centrifugal term $`m^2/r^2`$ can be neglected. In this approximation the functions $`R_{in}`$ and $`R_{out}`$, as well as the energy levels, cease to depend on the quantum number $`m`$, which in turn means that (in the limit of large $`r_0`$) the spectroscopy of the captured levels depends practically on the quantum number $`n_r`$ only. Such a conclusion is in agreement with the tendency presented in Fig. 5, and with the general philosophy of spontaneous breaking of continuous global symmetry .

## Acknowledgment

The authors would like to thank G. Pogosyan and A. Sissakian for fruitful discussions. The work of Ye. Hakobyan was partially supported by the Russian Foundation for Basic Research (RFBR) under the project # 98-01-00330.
---

arXiv: gr-qc/9908076 (ar5iv extraction)
# 4D Wormhole with Signature Change in the Presence of Extra Dimensions

## I Introduction

The nice idea of Hawking about a change of the signature of the spacetime metric has a problem in the classical regime: usually, a singularity appears at the point where this change takes place. The simplest explanation for this is the following: the determinant of the metric tensor $`g=det(g_{ik})`$ changes its sign under a change of the metric signature. Therefore at this point $`g=0`$ and/or one of the scalars $`R`$, $`R_{ik}R^{ik}`$ or $`R_{iklm}R^{iklm}`$ is equal to $`\pm \infty `$. A detailed explanation of this fact can be found in and the bibliography on this subject. It can also be shown that the gravitational field requires additional degrees of freedom for the change of metric signature. This is easy to see if we write the metric in the vier-bein formalism:
$$ds^2=\eta _{ab}\omega ^a\omega ^b,$$ (1)
here $`\omega ^a=e_\mu ^adx^\mu `$; $`\eta _{ab}=(+1,-1,-1,-1)`$ is the Minkowski metric; $`e_\mu ^a`$ is the vier-bein. ($`e_\mu ^a`$ are the degrees of freedom of the gravitational field; this means that the gravitational equations are deduced by varying with respect to the vier-bein $`e_\mu ^a`$.) The signature of the metric is defined by $`\eta _{ab}`$ and is not varied. It is possible that the change of the metric signature can occur as a quantum process on the spacetime-foam level, when $`\eta _{ab}`$ is changed. But below we will show that in 5D Kaluza-Klein gravity there is a trick with an interchange of sign between some 5D metric components which, for the 4D observer, is similar to a change of the signature of the 4D metric.

## II Signature change in the 4D wormhole

In , the following wormhole-like (WH) solution in vacuum 5D Kaluza-Klein gravity was found:
$$ds_{(5)}^2=-\frac{r_0^2}{\mathrm{\Delta }(r)}(d\chi -\omega (r)dt)^2+\mathrm{\Delta }(r)dt^2-dr^2-a(r)d\mathrm{\Omega }^2,$$ (2)
here $`\chi `$ is the fifth, extra coordinate; $`r,\theta ,\varphi `$ are the $`3D`$ polar coordinates; $`t`$ is the time; $`d\mathrm{\Omega }^2=d\theta ^2+\mathrm{sin}^2\theta \,d\varphi ^2`$ is the metric on the $`S^2`$ sphere; the subscript (5) denotes that the appropriate quantity is 5-dimensional. The equations for $`\mathrm{\Delta }(r)`$ are:
$$\frac{\mathrm{\Delta }^{\prime \prime }}{\mathrm{\Delta }}-\frac{\mathrm{\Delta }^{\prime 2}}{\mathrm{\Delta }^2}+\frac{a^{\prime }\mathrm{\Delta }^{\prime }}{a\mathrm{\Delta }}-\frac{r_0^2}{\mathrm{\Delta }^2}\omega ^{\prime 2}=0,$$ (3)
$$\omega ^{\prime \prime }-2\omega ^{\prime }\frac{\mathrm{\Delta }^{\prime }}{\mathrm{\Delta }}+\omega ^{\prime }\frac{a^{\prime }}{a}=0,$$ (4)
$$\frac{\mathrm{\Delta }^{\prime 2}}{\mathrm{\Delta }^2}+\frac{4}{a}-\frac{a^{\prime 2}}{a^2}-\frac{r_0^2}{\mathrm{\Delta }^2}\omega ^{\prime 2}=0,$$ (5)
$$a^{\prime \prime }-2=0,$$ (6)
and we see that if $`\mathrm{\Delta }`$ is a solution then $`-\mathrm{\Delta }`$ is also. (In contrast to this example, for the Schwarzschild black hole this is not the case. For the metric $`ds^2=\mathrm{\Delta }dt^2-dr^2/\mathrm{\Delta }-r^2d\mathrm{\Omega }^2`$ we get the equation $`\mathrm{\Delta }^{\prime }+\mathrm{\Delta }/r-1/r=0`$, which is not invariant under the $`\mathrm{\Delta }\to -\mathrm{\Delta }`$ transformation.)
The solution of these $`5D`$ Einstein equations is
$$a=r_0^2+r^2,$$ (7)
$$\mathrm{\Delta }=\pm \frac{2r_0}{q}\frac{r^2+r_0^2}{r^2-r_0^2},$$ (8)
$$\omega =\pm \frac{2r_0^2}{q}\frac{a^{\prime }/a}{1-\frac{2r_0^2}{a}}$$ (9)
(i.e., $`\omega =2rr_0\mathrm{\Delta }/a`$); here $`r_0>0`$ and $`q`$ are constants. In this paper the 5D spacetime is the total space of a principal bundle with the $`U(1)`$ group as the structural group; the base of this bundle is the ordinary 4D spacetime . This means that we consider the following part of metric (2):
$$ds_{(4)}^2=\mathrm{\Delta }(r)dt^2-dr^2-a(r)d\mathrm{\Omega }^2.$$
For the metric on the total space $`E`$ of the principal bundle the most natural choice of the coordinate system is the following: the $`y^a`$ ($`a=5,6,\mathrm{\dots }`$) coordinates (in our case we restrict ourselves to a 1-dimensional fibre, i.e. $`y^5=\chi `$ and the index $`a=5`$) are chosen on the fibre (gauge group) and $`x^\mu `$ ($`\mu =0,1,2,3`$) along the base (4D spacetime). In this case $`y^a`$ are the coordinates which cover the gauge group $`G`$ (the fibre of the bundle) and $`x^\mu `$ are the coordinates which cover the factor space $`E/G`$ (the base of the bundle, i.e. the 4D spacetime). In classical and quantum field theory (without gravitation) the strong, weak and electromagnetic interactions are characterized as a connection on the appropriate bundle with some structural group. These fields are real, therefore the corresponding total space is also real in Nature. But we cannot choose a new coordinate system in which we would mix the coordinates on the fibre (gauge group) and on the base (4D spacetime). This is evident because the points on the fibre are elements of the group but the points on the base are not. In the context of this paper the metric (2) is the metric on the total space. Once again we emphasize that the Kaluza-Klein theory in the context of this paper is gravity on such a principal bundle, therefore we can use only the following coordinate transformations:
$$y^{\prime a}=y^{\prime a}(y^a)+f^a\left(x^\mu \right),$$ (10)
$$x^{\prime \mu }=x^{\prime \mu }\left(x^\mu \right).$$ (11)
The first term in (10) means that the choice of the coordinate system on the fibre is arbitrary. The second term indicates that, in addition, we can move the origin of the coordinate system on each fibre by the value $`f^a(x^\mu )`$. It is well known that such a transformation law (10) leads to a local gauge transformation for the appropriate nonabelian field (see for an overview ). That is, the coordinate transformations (10) and (11) are the most natural transformations for multidimensional gravitation on a principal bundle. Of course, we could use much more general coordinate transformations:
$$y^{\prime a}=y^{\prime a}(y^a,x^\mu ),$$ (12)
$$x^{\prime \mu }=x^{\prime \mu }(y^a,x^\mu ).$$ (13)
But in this case we destroy the initial topological structure of the multidimensional spacetime and, what is even worse, we mix the points of the fibre (which are the elements of some group) with the points of the base, which are ordinary spacetime points. We do not do this in ordinary classical/quantum field theory (without gravitation); hence this would be a bad coordinate choice for multidimensional gravity on a principal bundle.
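Returning to the explicit solution (7)-(9), its two structurally simplest consequences are easy to verify symbolically: eq. (6), and the fact that eq. (4) is equivalent to the conservation law $`(a\omega ^{\prime }/\mathrm{\Delta }^2)^{\prime }=0`$ used in Sec. II A below. A sketch (Python with sympy assumed; only the (+) branch of (8) and the closed form $`\omega =2rr_0\mathrm{\Delta }/a`$ quoted above are used):

```python
import sympy as sp

r, r0, q = sp.symbols('r r_0 q', positive=True)

a = r0**2 + r**2                                 # eq. (7)
Delta = (2*r0/q)*(r**2 + r0**2)/(r**2 - r0**2)   # the (+) branch of eq. (8)
omega = 2*r*r0*Delta/a                           # closed form quoted after eq. (9)

print(sp.diff(a, r, 2) - 2)                      # eq. (6): prints 0

# eq. (4) in conservation-law form: a*omega'/Delta^2 must be an r-independent flux
flux = sp.simplify(a*sp.diff(omega, r)/Delta**2)
print(flux, sp.diff(flux, r))                    # prints the constant (here -q) and 0
```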
The above-mentioned item is a literary description of the following exact theorem : Let $`G`$ be the group fibre of the principal bundle. Then there is a one-to-one correspondence between the $`G`$-invariant metrics on the total space $`𝒳`$,
$$ds^2=G_{AB}dx^Adx^B=g_{\mu \nu }dx^\mu dx^\nu +h(x^\mu )\left(\omega ^a+A_\mu ^adx^\mu \right)^2,$$ (14)
and the triples $`(g_{\mu \nu },A_\mu ^a,h)`$. Here $`G_{AB}`$ is the multidimensional metric on the total space ($`A,B=0,1,2,3,5,6,\mathrm{\dots }`$); $`g_{\mu \nu }`$ is Einstein's pseudo-Riemannian metric on the base; $`A_\mu ^a`$ is the gauge field of the group $`G`$ (the nondiagonal components of the multidimensional metric); $`h\gamma _{ab}`$ is the symmetric metric on the fibre; $`\omega _a=\gamma _{ab}\omega ^b`$; $`\omega ^a`$ are the one-forms on the group $`G`$.

In Ref. the solution (7)-(9) was applied to the discussion of a composite Lorentzian WH. For this goal the WH-like solution (2) (with $`|r|\le r_0`$ and the sign (-) in (8), (9)) is inserted between two Reissner-Nordström black holes. Below we examine the two possibilities for the signs (+) and (-) in eqs. (8), (9).

### A Lorentzian wormhole with the Euclidean throat, the case of (+)

Here we examine the solution (2) with (7), (8), (9) in the whole region $`-\infty <r<+\infty `$. In this case, for the 4D spacetime (the base of the principal bundle) the following takes place:

1. For $`|r|>r_0`$ we have the ordinary 4D asymptotically flat spacetime with Lorentzian signature (from the viewpoint of the 4D observer), as $`g_{tt}=\mathrm{\Delta }>0`$.

2. For $`|r|<r_0`$ we have a Euclidean 4D spacetime bounded between the two $`ds_{(5)}^2=0`$ hypersurfaces (located at $`r=\pm r_0`$), as $`g_{tt}=\mathrm{\Delta }<0`$.

3. From the viewpoint of the 4D observer, the change of the 4D metric signature takes place on the $`r=\pm r_0`$ hypersurfaces. This is the result of a simple interchange of the signs of the metric components $`G_{tt}`$ and $`G_{55}`$ on the $`ds_{(5)}^2=0`$ hypersurfaces and nothing more.

Thus, the solution (2) describes a WH connecting two Lorentzian asymptotically flat regions by means of a Euclidean throat. Of course, we face the question: what happens at the hypersurfaces $`r=\pm r_0`$? We now describe the properties of such a hypersurface, on which the interchange of the metric signature happens:

1. On this surface $`ds_{(5)}^2=0`$ ($`\chi `$, $`\theta `$, $`\varphi =const`$ and $`r=\pm r_0`$).

2. The 5D scalar invariants $`R_{(5)}=R_{(5)AB}R_{(5)}^{AB}=0`$ as a consequence of the 5D Einstein equations $`R_{(5)AB}=0`$; $`R_{(5)ABCD}R_{(5)}^{ABCD}\propto r_0^{-4}`$ ($`A=0,1,2,3,5`$), i.e. we see that these two $`ds_{(5)}^2=0`$ hypersurfaces probably do not have a singularity.

3. The 4D metric on the 4D base of the principal bundle, i.e. on the 4D spacetime, is (for $`a`$ see eq. (7)):
$$ds_{(4)}^2=\frac{2r_0}{q}\frac{1}{1-\frac{2r_0^2}{a}}dt^2-dr^2-\left(r^2+r_0^2\right)d\mathrm{\Omega }^2.$$ (15)
As $`r\to \pm \infty `$ we have two asymptotically flat Lorentzian spaces with the metric
$$ds_{(4)}^2\approx \frac{2r_0}{q}dt^2-dr^2-r^2d\mathrm{\Omega }^2.$$ (16)

4. The 4D curvature scalar $`R_{(4)}`$ is
$$R_{(4)}=\frac{6r_0^2}{\left(r^2-r_0^2\right)^2},$$ (17)
and it has a singularity. Thus, from the point of view of the 4D observer there is a singularity. But we emphasize once again that this singularity is not a real singularity, as a consequence of the second item. This situation is similar to what happens for the Schwarzschild metric
$$ds_{(4)}^2=\left(1-\frac{r_g}{r}\right)dt^2-\frac{dr^2}{1-\frac{r_g}{r}}-r^2d\mathrm{\Omega }^2$$ (18)
on the event horizon.
5. For the 4D observer in the Lorentzian part of this WH these two singularities look like two $`(\pm )`$ electric charges spread over the $`r=\pm r_0`$ surfaces, with the outgoing and incoming force lines of the electric field.

6. In this 5D case we cannot introduce the notion of electric charge. To see this more directly, let us look at eq. (4); it can be rewritten in the following form:
$$\left(4\pi a\frac{\omega ^{\prime }}{\mathrm{\Delta }^2}\right)^{\prime }=0.$$ (19)
This means that the product of the electric field $`F_{01}=\omega ^{\prime }`$ with the area $`4\pi a`$ of the sphere $`S^2`$ is not a conserved electric charge $`q`$. But it is interesting that we can correct this field: as we see from (19), the product of the quantity $`\omega ^{\prime }/\mathrm{\Delta }^2`$ with the area $`4\pi a`$ of the sphere $`S^2`$ is the conserved flux of the corrected electric field, which is proportional to the electric charge.

It is interesting to compare the metric (2) with the Reissner-Nordström solution. For this purpose we introduce the new radial coordinate $`\rho =\sqrt{r^2+r_0^2}`$. Then our metric takes the following form:
$$ds^2=-\frac{r_0q}{2}\left(1-\frac{2r_0^2}{\rho ^2}\right)\left(d\chi -\omega dt\right)^2+\frac{2r_0}{q}\frac{dt^2}{1-\frac{2r_0^2}{\rho ^2}}-\frac{d\rho ^2}{1-\frac{r_0^2}{\rho ^2}}-\rho ^2d\mathrm{\Omega }^2,$$ (20)
here the area of the sphere $`S^2`$ is $`4\pi \rho ^2`$, as for the 4D Reissner-Nordström solution, but $`g_{tt}=\frac{2r_0}{q}\left(1-\frac{2r_0^2}{\rho ^2}\right)^{-1}`$ and $`g_{\rho \rho }=\left(1-\frac{r_0^2}{\rho ^2}\right)^{-1}`$ differ from the corresponding metric components of the Reissner-Nordström solution. Also, the Maxwell tensor for the metric (2) is $`F_{01}=\omega ^{\prime }=\frac{\mathrm{\Delta }^2}{r_0}\frac{q}{\rho ^2}`$, whereas for the Reissner-Nordström metric we have $`F_{01}=E=\frac{q}{r^2}`$. Hence we can say that the metric (2) cannot be considered as a model of a 4D “charge without charge”. This is a simple example of a possible signature change in the presence of extra dimensions.

### B Euclidean wormhole with the Lorentzian throat, the case (-)

Here we can precisely repeat our reasoning of subsection II A with the interchange $`Euclidean\leftrightarrow Lorentzian`$. Thereby we obtain a WH with a Lorentzian throat connecting two Euclidean asymptotically flat regions.

## III Conclusion

The basic idea of the Kaluza-Klein paradigm is that the extra dimensions are very small and therefore unobservable. If so, then a wormhole with metric (2) can be a simple example of a Euclidean bridge between two Lorentzian regions (or a Lorentzian bridge between two Euclidean regions). We remark that this 5D construction is regular everywhere, i.e. there is no singularity in this solution. It is remarkable that this solution is a regular vacuum solution of 5D Kaluza-Klein gravity, in the spirit of Einstein's idea that the right-hand side of the gravitational field equations should be zero. Finally, we can say that under the assumption of hidden extra dimensions in Kaluza-Klein theory there is a possibility for a signature change of the 4D metric.
---

arXiv: astro-ph/9908138 (ar5iv extraction)
# Neutrino Event Rates from Gamma Ray Bursts

## 1 High Energy Neutrinos from Relativistic Fireballs

The evidence has been steadily accumulating that GRB emission is the result of a relativistically expanding fireball energized by a process involving neutron stars or black holes (Piran (1999)). In the early stage, the fireball cannot emit efficiently because the radiation is trapped due to the very large optical depth. The fireball's energy is dissipated into kinetic energy until it becomes optically thin and produces the observed display. This scenario can accommodate the observed energy and time scales provided the bulk Lorentz factor $`\gamma `$ of the expanding shock is $`\sim 300`$. The production of high energy neutrinos is anticipated: protons, accelerated in the kinetic phase of the shock, interact with photons producing charged pions which are the parents of neutrinos (Waxman & Bahcall (1997)). Standard particle physics and fireball phenomenology are sufficient to compute the neutrino flux (Waxman & Bahcall (1997); Halzen (1998)) as well as the observed rates in high energy neutrino telescopes. The observation of GRB neutrinos over a cosmological baseline has scientific potential beyond testing the “best-buy” fireball model: the observations can test special relativity and the equivalence principle with unmatched precision, and study oscillating neutrino flavors over the ultimate baseline of $`z\sim 1`$. The anticipated neutrino flux traces the broken power-law spectrum observed for the photons, which provide the target material for neutrino production:
$$\varphi =\frac{A}{E_BE}\text{ for }E<E_B,$$ (1)
$$\varphi =\frac{A}{E^2}\text{ for }E>E_B,$$ (2)
where $`A`$ is a normalization constant determined from energy considerations and $`E_B\simeq 700`$ TeV (Halzen (1998)). The total energy in GRB neutrinos is given by
$$F_{\mathrm{tot}}=\frac{c}{4\pi }\frac{1}{2}f_\pi t_{\mathrm{hubble}}\dot{\rho }_E,$$ (3)
where $`f_\pi `$ is the fraction of proton energy that goes into pion production, $`t_{\mathrm{hubble}}`$ is 10 Gyr, and $`\dot{\rho }_E`$ is the injection rate into protons accelerated to high energies in the kinetic fireball. This is a critical parameter in the calculations; it can be fixed, for instance, by assuming that GRB are the source of the highest energy cosmic rays beyond the ankle (Waxman 1995; Milgrom & Usov 1995; Vietri (1995)) in the spectrum, or $`\dot{\rho }_E\simeq 4\times 10^{44}\,\mathrm{erg}\,\mathrm{Mpc}^{-3}\,\mathrm{yr}^{-1}`$ (Halzen (1998)). The factor $`f_\pi `$ represents the fraction of total energy going into pion production. It is calculated from known particle physics and is of order 15% (Waxman & Bahcall (1997); Halzen (1998)). We remind the reader that these assumptions reproduce the observed average photon energy per burst if equal energy goes into the hadronic and electromagnetic components of the fireball (Waxman & Bahcall (1997)). Now, normalizing the flux to this total energy:
$$F_{\mathrm{tot}}=\frac{A}{E_B}\int _{E_{\mathrm{min}}}^{E_B}dE+A\int _{E_B}^{E_{\mathrm{max}}}\frac{dE}{E}.$$ (4)
Approximating $`E_B-E_{\mathrm{min}}\simeq E_B`$, the integration constant is found to be $`1.20\times 10^{-12}\,\mathrm{TeV}\,\mathrm{cm}^{-2}\,\mathrm{s}^{-1}\,\mathrm{sr}^{-1}`$. The quantities $`f_\pi `$ and $`E_B`$ are calculated using Eqs. (4) and (5) in Waxman and Bahcall (1997). The GRB parameters entering these equations are chosen following Halzen (1998) and assumed to be independent of $`\gamma `$. We assumed $`E_{\mathrm{max}}=10^7`$ TeV.
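The normalization can be retraced numerically. A minimal sketch (Python assumed; the rounded constants, and taking $`f_\pi =0.15`$ with $`t_{\mathrm{hubble}}=10^{10}`$ yr, are our choices) evaluates eq. (3) and inverts eq. (4) for $`A`$:

```python
import numpy as np

# rounded cgs constants (our choices)
c, Mpc, yr = 3.0e10, 3.086e24, 3.156e7     # cm/s, cm, s
erg_per_TeV = 1.602
f_pi, t_hubble = 0.15, 1.0e10*yr
rho_E = 4e44/(Mpc**3*yr)                   # erg cm^-3 s^-1
E_B, E_max = 700.0, 1.0e7                  # TeV

# eq. (3), converted to TeV cm^-2 s^-1 sr^-1
F_tot = (c/(4*np.pi))*0.5*f_pi*t_hubble*rho_E/erg_per_TeV
# eq. (4) with E_B - E_min ~ E_B
A = F_tot/(1.0 + np.log(E_max/E_B))
print(A)   # ~1.4e-12, within a few tens of percent of the quoted 1.20e-12
```

The residual difference from the quoted $`A`$ reflects the rounded $`f_\pi `$ and cosmological inputs, not the structure of the calculation.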
Given the neutrino flux $`\varphi `$, the event rates in a neutrino detector are obtained by a straightforward method (Halzen (1998)):
$$N=\int \varphi (E)P_{\nu \mu }\,dE,$$ (5)
where $`P_{\nu \mu }`$ is the detection probability for neutrinos of muon flavor. It is determined, as a function of energy, by the neutrino cross sections and the range of the secondary muon (Gaisser, Halzen & Stanev 1995; Gandhi et al. (1996)). We obtain an observed neutrino flux of order $`10\,\mathrm{yr}^{-1}\,\mathrm{km}^{-2}`$, assuming $`10^3`$ bursts per year. This result is somewhat lower than, but not inconsistent with, those obtained previously. One should keep in mind that neutrinos can also be produced, possibly with higher fluxes, in other stages of the fireball, e.g. when it expands through the interstellar medium (Katz 1994; Halzen & Jaczko 1996; Vietri 1998; Boettcher & Dermer (1998)).

## 2 Burst-to-Burst Fluctuations

We here want to draw attention to the fact that this result should not be used without consideration of the burst-to-burst fluctuations, which are very large indeed; see also Dermer, Boettcher & Chiang (1999). We will, in fact, conclude that from an observational point of view the relevant rates are determined by the fluctuations, and not by the average burst. Our calculations are performed by fluctuating individual bursts according to the model described above: i) with the square of the distance, which is assumed to follow a Euclidean or cosmological distribution; ii) with the energy, assuming that the neutrino rate depends linearly on energy and follows a simple step function, with ten percent of GRB producing more energy than average by a factor of ten, and one percent by a factor of 100; and, most importantly, iii) with fluctuations in the $`\gamma `$ factor around its average value of 300. The fluctuations in $`\gamma `$ affect the value of $`f_\pi `$, which varies approximately as $`\gamma ^{-4}`$ (Waxman & Bahcall (1997)), as well as the position of the break energy, which varies as $`\gamma ^2`$. For a detailed discussion see Halzen (1998). Both effects are taken into account. Clearly a factor of ten variation of $`\gamma `$ leads to a change in flux by roughly 4 orders of magnitude. The origin of the large fluctuations with $`\gamma `$ should be a general property of boosted accelerators. With high luminosities emitted over short times, the large photon density renders the GRB opaque unless $`\gamma `$ is very large. Only transparent sources with large boost factors emit photons. They are, however, relatively weak neutrino sources because the actual photon target density in the fireball is diluted by the large Lorentz factor. This raises the unanswered question whether there are bursts with lower $`\gamma `$-factors. Because of absorption, such sources would emit fewer photons of lower energy and could have been missed by selection effects; they would be spectacular neutrino sources. Some have argued (Stern (1999)) for $`\gamma \sim 30`$ on the basis that the unusual fluctuations in the morphology of GRB can only be produced in a relatively dense, turbulent medium.

## 3 Monte Carlo Simulation Results

We will illustrate the effect of fluctuations by a Monte Carlo simulation of GRB. For a Euclidean distribution the calculation can be performed analytically; in other cases the Monte Carlo method evaluates the integrals. The overwhelming effect of fluctuations is demonstrated in the sample results shown in Figs. 1–3 and Table 1a–c.
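A toy version of such a simulation is sketched below (Python with NumPy assumed). It keeps only the three fluctuation channels just listed, with the rate scaling taken as $`f_\pi \propto \gamma ^{-4}`$ alone; the $`\gamma ^2`$ shift of the break energy and the absorption at very low $`\gamma `$ are ignored, so it illustrates the mechanism rather than reconstructing the full calculation.

```python
import numpy as np

rng = np.random.default_rng(1)

def burst_counts(n=1000, base=9.0/1000, gamma0=300.0, sig_lo=70.0, sig_hi=300.0):
    """Toy Monte Carlo: detected neutrinos per burst for n bursts in one year.

    base = mean events/burst with no fluctuations (9 per year over 10^3 bursts).
    Channels: stepped energy (1% of bursts x100, next 9% x10), Euclidean
    distance, and a two-sided Gaussian in gamma with rate ~ gamma^-4.
    """
    u = rng.random(n)
    energy = np.where(u < 0.01, 100.0, np.where(u < 0.10, 10.0, 1.0))

    g = np.where(rng.random(n) < 0.5,                  # two-sided Gaussian in gamma
                 gamma0 - np.abs(rng.normal(0.0, sig_lo, n)),
                 gamma0 + np.abs(rng.normal(0.0, sig_hi, n)))
    g = np.clip(g, 30.0, None)                         # keep gamma physical

    inv_d2 = rng.random(n)**(-2.0/3.0)/3.0   # 1/d^2 for a uniform sphere, mean 1
    return rng.poisson(base*energy*inv_d2*(gamma0/g)**4)
```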
Not knowing the distribution of $`\gamma `$-factors around the average, we have parametrized it as a Gaussian with widths $`\sigma ,\sigma ^{\prime }`$ below and above the average value of 300. We chose $`\sigma ^{\prime }`$ to be either 0 or 300, to illustrate the effect of allowing GRB with Lorentz factors up to $`10^3`$ with a Gaussian probability. The critical, and unknown, parameter is $`\sigma `$. It may, in fact, be more important than any other parameter entering the calculation. As far as we know, neither theory nor experiment provides compelling information at this point. Note that we require $`\sigma >70`$ in order to allow a significant part of the bursts to have $`\gamma `$-factors less than $`10^2`$, or one third of the average value; see Table 1. The value of $`\sigma ^{\prime }`$ is less critical. A value of 300 allows for Lorentz factors as large as $`10^3`$. The dominant effect of this is a renormalization of the neutrino rates, because a fraction of the bursts now have a large $`\gamma `$-factor and, as a consequence, a low neutrino flux. We discuss some quantitative examples next. Firstly, even in the absence of the dominant fluctuations in $`\gamma `$, the rate of 9 detected neutrinos per year over $`10^3`$ bursts becomes 30–90 in the presence of fluctuations in energy and distance. The range covers a variety of assumptions for the GRB distributions, which range from Euclidean to cosmological; the issue is not important here because other factors dominate the fluctuations. Every 2 years there will be an event with 7 neutrinos in a single burst. For $`\sigma =70`$, the rate is $`\sim 600`$ per year in a kilometer square detector, with 23 individual bursts yielding more than 4 neutrinos and 7 yielding more than 10 in a single year! The results for other values of $`\sigma `$ are tabulated in Table 1. The number of events per year for a range of values of $`\sigma `$ is shown in Fig. 1. The frequency of GRB producing seven or more neutrinos is shown in Fig. 2. The neutrino multiplicity of bursts for $`\sigma =30,60`$ and 75 is shown in Fig. 3. Note that absorption in the source eventually reduces the overall rates when $`\gamma `$ is much smaller than average, i.e. when $`\sigma `$ is larger than 60. This factor is included in our calculations (Halzen (1998)). To this point we have assumed that all quantities fluctuate independently. Interestingly, the value $`\sigma =70`$ is obtained in the scenario where, instead, fluctuations in $`\gamma `$ are a consequence of fluctuations in energy. In order not to double count, fluctuations in energy itself, which only contribute a factor of 3 to the neutrino rate anyway, should then be omitted. Even though bursts with somewhat lower $`\gamma `$ produce neutrinos with energy reduced relative to 700 TeV, over a time scale longer than 1 second, the signatures of these models are spectacular. Existing detectors, with effective areas at the 1–10% level of a kilometer squared, should produce results relevant to the open question of the distribution of bulk Lorentz factors in the fireball model. Independent of the numerics, the fact that a single GRB with high energy, close proximity and a relatively low Lorentz factor can reasonably produce more detectable neutrino events than all other GRB over several years' time renders the result of the straightforward diffuse-flux calculation observationally misleading. Our calculations suggest that it is far more likely that neutrino detectors detect one GRB with favorable characteristics than hundreds with average values.
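With the toy model above, such tail statistics can at least be bracketed; the totals are dominated by rare low-$`\gamma `$ draws and therefore vary strongly from run to run, so only order-of-magnitude agreement with Table 1 should be expected.

```python
# one year of 10^3 bursts, sigma = 70 case: total events and dominant bursts
counts = burst_counts(n=1000, sig_lo=70.0)
print(counts.sum(), (counts >= 4).sum(), (counts >= 10).sum())
# compare, in order of magnitude only, with the quoted ~600 / 23 / 7
```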
Clearly, our observations are relevant for other GRB models, as well as for blazars and any other boosted sources. They are also applicable to photons, and may represent the underlying mechanism for the fact that the TeV extragalactic sky consists of a few bursting sources. We expect no direct correlation between neutrino and high energy gamma sources, because cosmic events with abundant neutrino production are almost certainly opaque to high energy photons.

## 4 Conclusions

Although the average event rates predicted with typical GRB parameters appear somewhat discouraging for present and future Cherenkov neutrino detector experiments, the fluctuations in these calculations are more significant and affect the prospects for detection. We speculated on the distribution of the parameters entering the fireball model calculation and used a Monte Carlo simulation to estimate the actual event rates. The results of these simulations show that a kilometer-scale detector could be expected to observe tens or hundreds of events per year. To improve the reliability of this estimate, a well-defined distribution for the Lorentz factor must be determined. Conversely, not observing neutrino GRB after years of observation will result in a strong limit on the number of accelerated protons in the kinetic phase of the burst, or in fine-tuned, high values of the Lorentz boost factor.

## Acknowledgements

This research was supported in part by the U.S. Department of Energy under Grant No. DE-FG02-95ER40896 and in part by the University of Wisconsin Research Committee with funds granted by the Wisconsin Alumni Research Foundation. We thank G. Barouch, T. Piran and E. Waxman for discussions.
---

arXiv: cond-mat/9908249 (ar5iv extraction)
# Collective motion and solid-liquid-type transitions in vibrated granular layers

## I Introduction

In a driven granular system, such as a vibrated layer composed of macroscopic particles, energy is dissipated by inelastic grain collisions and by friction. Also, in realistic situations, the typical energy of a grain is many orders of magnitude greater than $`k_BT`$, so temperature does not play an important role in the dynamics of these materials. However, it is commonly observed that granular matter behaves like a solid, a liquid or even a gas depending on the energy injection and energy dissipation rates . More precisely, the competition between these quantities determines the residual velocity fluctuations, which in turn play the role of thermal fluctuations in these materials. From this viewpoint, surface waves excited by vertical vibrations provide one of the most striking examples in which granular materials act like a fluid. For a granular layer, it is well established that parametric waves can be observed whenever the dimensionless acceleration $`\mathrm{\Gamma }=A(2\pi f)^2/g`$ exceeds a critical value (here $`g`$ is the acceleration of gravity, $`A`$ is the amplitude of the vibrating surface, and $`f`$ is the frequency of the driving force). The primary instability leads from a flat layer to a pattern of squares or stripes, depending on both $`f`$ and the particle diameter $`d`$. As an illustration we present two typical snapshots of these kinds of waves in Figs. 1a and b. It was found that the crossover frequency $`f_d`$ at which the square-to-stripe transition occurs is proportional to $`d^{-1/2}`$. In addition, this scaling was qualitatively understood in terms of the ratio of the kinetic energy injected into the layer to the potential energy required to raise a particle a fraction of its diameter. At low $`f`$, the horizontal mobility should be high since the layer dilation is large, whereas at large $`f`$, the mobility should be low since the layer dilation is small. We notice, however, that these waves are the corresponding hydrodynamic surface modes of the layer, since they require a transfer of mass to be sustained. In this paper, we report on measurements of the pressure due to the layer-container collision and of the surface and bulk dilation of the layer. We show that there exists a critical $`\mathrm{\Gamma }`$ at which the flat layer undergoes a phase transition. This critical $`\mathrm{\Gamma }`$ is smaller than the critical one at which waves appear. At low $`f`$ this transition is of solid-liquid type; at intermediate $`f`$ only a heating up of the surface layer is observed, whereas at high $`f`$ a compaction transition is detected. Thus, our results show that granular surface waves are naturally linked to the fluid-like behavior, since they are observed only when the energy input per particle is enough to induce a minimal dilation of the flat layer. At high $`f`$, and for the same critical value of $`\mathrm{\Gamma }`$ at which hydrodynamic waves appear, bending waves are detected instead. In this regime, our measurements show that the mobility of grains is almost completely suppressed in both the bulk and the free surface of the layer. In this case, a set of experiments conducted with large photoelastic particles allows us to observe these waves as an alternation of bright and dark zones oscillating in time at half the forcing frequency.
In addition, we show for the low frequency regime that when $`\mathrm{\Gamma }`$ is increased to a value close to $`4.6`$, an inverse transition from a fluid to a compact layer is observed. This transition is responsible for the flat-with-kinks state reported in previous works (see Figs. 1c and d). A brief summary of our results has been given in Ref. . This article is organized as follows: section II is devoted to the description of our experimental setup. In section III, which is the main part of this article, we present our pressure measurements and establish the basis for obtaining density profiles and the bulk dilation from the experimental data. In section IV we present surface dilation measurements and discuss how they link to the bulk dilation measurements. The energy dissipation rate in the vibrated layer is also estimated as a function of the excitation frequency. Section V is concerned with the study of the very low amplitude regime of surface waves and its connection with the compact state of the layer. Finally, a brief discussion is given.

## II Experimental Setup

In this article we present two sets of experiments. In the first set, we measure simultaneously the pressure resulting from the collisions of the layer with the vibrating plate and the surface particle dilation. From the pressure measurements, we obtain information on the bulk dilation and density (see section III). The surface dilation is determined through the normalized reflected intensity of the surface layer (see the discussion below). In Fig. 2 we present a schematic drawing of the experimental setup. In this experiment, a thin layer of $`0.106-0.125`$ mm diameter bronze particles, 15 particles deep, is placed at the bottom of a $`40`$ mm diameter, $`25`$ mm high cylindrical container. The container's wall is Lucite while the base is aluminum, to reduce electrostatic effects. The container is mounted on a high frequency response pressure sensor (PCB, Model 208A11), which is driven by an electromechanical vibration exciter. The resulting acceleration is measured to a resolution of $`0.01g`$. A second Lucite cylinder is used as a lid for the whole system, allowing evacuation of the container to less than $`0.1`$ Torr; at this value volumetric effects of the gas are negligible . The surface of the layer is illuminated at low angle ($`20^{\circ }`$ with respect to the horizontal) by an array of $`18`$ LEDs organized in a $`10`$ cm diameter ring. The light reflected from the surface layer is focused by a lens of $`28`$ mm focal length onto a flat photodiode of $`25`$ mm² area. The whole system is run automatically by a Power PC computer equipped with A/D and GPIB boards. With this setup, the measured light is proportional to both the incident light and the reflectivity coefficient of the bronze particles $`R`$ ($`R\approx 0.6`$). We notice that only a small fraction $`s`$, about $`5`$ percent of the surface of a single particle, reflects light into the solid angle of the camera (in the experiment, the lens aperture angle is about $`12^{\circ }`$). The intensity $`I`$ measured by the photodiode can then be taken proportional to the surface density, or more precisely to the number of particles within the first layer. Furthermore, $`I`$ relates to the surface dilation $`\delta _s`$ as $`I/I_0\approx d^2/(d+\delta _s)^2`$, where $`I_0`$ is a reference intensity for which we take $`\delta _s=0`$ .
We neglect multiple reflections, since their dominant contribution is proportional to both $`R^2`$ and $`s^2`$, where $`s^2`$ represents the probability of having a secondary reflection within the solid angle of the camera. Thus, by keeping the incident light at a small enough angle, we ensure that, to a first approximation, only the first layer of particles contributes to $`I`$. The second set of experiments consists of vibrating a two-dimensional layer of photoelastic particles and taking images of the layer motion with a high speed CCD camera. The goal is to observe the very low amplitude waves found in the high frequency regime. A cell made of two glass plates, $`400`$ mm wide by $`100`$ mm high, was mounted on the moving platform of our vibrator system. The gap between the plates is controlled by spacers of varying thickness to a resolution of $`0.05`$ mm. Images with a resolution of $`256\times 256`$ pixels are captured at rates of $`1200`$ frames per second by a Hisis $`2002`$ CCD camera. Images are obtained by transmission using parallel light with incidence perpendicular to the cell. Transmitted light is filtered through a polarizer whose axis is perpendicular to the direction of polarization of the incident light. More details are given in section V.

## III Pressure measurements and density profiles

### A Experimental procedures and collision model

In Fig. 3a, we present a typical pressure signal $`P(t)`$ as a function of time for $`f=40`$ Hz and $`\mathrm{\Gamma }=2.2`$. This signal is composed of a sinusoidal component, corresponding to the force required to accelerate the cell, and of a peak sequence, due to the layer-plate collisions. We are interested in the shape of the peak during the collision, so it is necessary to subtract the sinusoidal component. By this procedure we obtain a number of pressure peaks, typically about 15. As an illustration, we show in Fig. 3b the average peak obtained from the peaks presented in Fig. 3a. To characterize the peaks of a sequence like the one presented, we take the maximum value $`P`$ and the width $`T_c`$, which is a measure of the collision time. More precisely, these quantities are obtained as the averages of the sequence $`\{P^{(k)},T_c^{(k)}\}`$, where $`k`$ is an integer that indicates the collision number, i.e. $`P=\langle P^{(k)}\rangle `$ and $`T_c=\langle T_c^{(k)}\rangle `$, where $`\langle \cdot \rangle `$ denotes the sequence average. For the data in Fig. 3, $`P=2.45\pm 0.03`$ kPa and $`T_c=0.52\pm 0.01`$ ms (in this case $`T_c`$ is taken as the width at a pressure equal to a quarter of $`P`$). Now, due to the impulsive nature of the layer-plate collision, we write the following scaling relation
$$P\sim \frac{M}{A_p}\frac{V_c}{T_c},$$ (1)
where $`M`$ is the total mass of the layer, $`A_p`$ is the active surface of the plate and $`V_c`$ is the layer-plate relative velocity at the collision. If the collision takes place at time $`t=\overline{t}`$ and is completely inelastic, then $`V_c=V_p(\overline{t})-V_b(\overline{t})`$, where $`V_p`$ and $`V_b`$ are the plate and layer velocities, respectively. Thus, in practice, we approximate the collision velocity by the one predicted by the completely inelastic ball model. We note that this model assumes that the layer immediately loses all its energy and takes the velocity of the moving plate. This approximation is valid since previous experiments have shown, by measuring the flight time of the layer, that the motion of the center of mass of the layer follows the motion of an inelastic ball .
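Since $`V_c`$ from the completely inelastic ball model is used throughout the analysis, a short sketch of that computation may be useful (Python with NumPy/SciPy assumed; single-flight approximation, valid for $`\mathrm{\Gamma }>1`$, with an illustrative scan window covering flights up to four driving periods).

```python
import numpy as np
from scipy.optimize import brentq

def collision_velocity(Gamma, f, g=9.81):
    """Layer-plate relative velocity V_c from the completely inelastic ball model.

    The layer leaves the plate when the plate acceleration reaches -g, flies
    ballistically, and collides where the two trajectories next intersect.
    """
    w = 2*np.pi*f
    A = Gamma*g/w**2
    t0 = np.arcsin(1.0/Gamma)/w                   # take-off time (requires Gamma > 1)
    z0, v0 = A*np.sin(w*t0), A*w*np.cos(w*t0)

    gap = lambda t: z0 + v0*(t - t0) - 0.5*g*(t - t0)**2 - A*np.sin(w*t)
    ts = t0 + np.linspace(1e-3, 4.0, 4000)/f      # scan for the first landing
    i = np.argmax(gap(ts) < 0)
    tc = brentq(gap, ts[i - 1], ts[i])            # refine the landing time

    return A*w*np.cos(w*tc) - (v0 - g*(tc - t0))  # plate minus layer velocity

# e.g. the conditions of Fig. 3: print(collision_velocity(2.2, 40.0))
```

Dimensionally $`V_c=gF(\mathrm{\Gamma })/f`$, as used in Sec. III B below, so a single scan over $`\mathrm{\Gamma }`$ at one frequency determines the nonanalytic function $`F(\mathrm{\Gamma })`$.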
We remark that in our experiments, even close to $`\mathrm{\Gamma }\approx 1`$, the vertical average bulk dilation of the layer, $`\delta _b`$, is never strictly zero. We conclude this because the collision time is not determined by Hertz theory. Indeed, experiments performed by E. Falcon et al. have shown that, in the case of a column of $`N`$ particles in contact ($`\delta _b=0`$) colliding vertically with a fixed plate, the collision time is $`T_c=(N-1)T_q+\tau _1`$, where $`N`$ is the number of particles, $`T_q`$ is the duration of the momentum transfer from one particle to another, and $`\tau _1`$ is the collision time between two particles predicted by Hertz theory . Now, for the particles used in our experiments the orders of magnitude of these times are $`T_q\sim \tau _1\sim 10^{-6}`$ s . Hence, the predicted collision time for our three-dimensional layer should be of order $`T_c\sim 10^{-5}`$ s. This order of magnitude is never observed in our experiments, where for $`\mathrm{\Gamma }`$ close to 1 and for a wide range of $`f`$ the minimum collision time is of order $`T_c\sim 5\times 10^{-4}`$ s. The difference between what is expected from the Hertz prediction for $`\delta _b=0`$ and the experimental values of $`T_c`$ can be explained if the layer is slightly dilated at the collision; a local dilation of 1 percent ($`\delta _b/d\approx 0.01`$) drastically changes the collision regime from Hertz contacts to ballistic collisions. Although this effect has interesting consequences for sound propagation, it does not affect the center of mass motion of the layer. Thus, for $`\mathrm{\Gamma }>1`$ we write the total dilation of the layer as $`\mathrm{\Delta }=V_cT_c`$. If in addition we assume that the dilation is homogeneous, i.e. $`\mathrm{\Delta }=N\delta _b`$, where $`N`$ is the number of layers, we can write equation (1) as
$$P\sim \frac{M}{A_p}\frac{V_c^2}{N\delta _b}.$$ (2)
Thus, our procedure to measure the local bulk dilation $`\delta _b`$ is the following: we fit $`P`$ versus $`V_c/T_c`$ to obtain the numerical factor of equation (1), and we then use equation (2) to obtain the value of $`\delta _b`$. In the following, we will relate the layer density to the pressure signal $`P(t)`$. Fig. 4 presents a schematic view of a collision. We consider that the plate collides at velocity $`V_c`$, at time $`t=0`$, with the granular layer of density $`\rho (z)`$. The layer is initially fixed in space, and the plane $`(x,y,z=0)`$ of the reference frame $`(x,y,z)`$ coincides with the lowest layer just before the collision (see Fig. 4a). We also notice that the mass of the plate $`M_p`$ is much larger than the mass $`M`$ of the layer ($`M/M_p\approx 0.01`$), so we neglect any velocity change of the plate due to momentum transfer. We also neglect the variation of the relative velocity due to the action of gravity, since experimentally we observe that $`gT_c\ll V_c`$. Thus, the force exerted on the plate due to the momentum transfer is
$$F(t)\approx \frac{dm(t)}{dt}V_c.$$ (3)
Here $`m(t)`$ is the mass that has collided with the plate at time $`t`$. Next we need to link $`dm(t)/dt`$ to $`\rho (z)`$, measured in the fixed frame $`(x,y,z)`$. To do this we define the compression front $`z_c(t)`$ as the height (relative to the fixed frame) of the mass that has been deposited on the plate at time $`t`$ (see Fig. 4b).
Thus, the mass $`m(t)`$ accumulated on the plate becomes
$$m(t)=A_p\int _0^{z_c(t)}\rho (z)\,dz,$$ (4)
whose derivative with respect to time is
$$\frac{dm(t)}{dt}=A_p\rho (z_c(t))\frac{dz_c(t)}{dt}.$$ (5)
Now, at time $`t`$ the plate has moved a distance $`V_ct`$ from its initial position, and the height of the compacted layer with respect to the plate can be written as
$$h(t)\equiv \frac{m(t)}{A_p\rho _o}=\frac{1}{\rho _o}\int _0^{z_c(t)}\rho (z)\,dz,$$ (6)
where $`\rho _o`$ is the density of the layer in the compact state. We then obtain
$$z_c(t)=V_ct+\frac{1}{\rho _o}\int _0^{z_c(t)}\rho (z)\,dz.$$ (7)
Differentiating this equation with respect to time, we obtain the density evaluated at the location of the compression front as a function of its velocity $`\dot{z}_c(t)`$:
$$\rho (z_c(t))=\rho _o\left(1-\frac{V_c}{\dot{z}_c(t)}\right).$$ (8)
Using this result in equation (5), the relation (3) for the force becomes
$$\frac{F(t)}{A_p\rho _oV_c}=\dot{z}_c(t)-V_c.$$ (9)
Integrating this equation between $`0`$ and $`t`$ gives an expression for the compression front as a function of time:
$$z_c(t)=V_ct+\frac{1}{A_p\rho _oV_c}\int _0^tF(\overline{t})\,d\overline{t}.$$ (10)
In summary, from the pressure signal $`P(t)=F(t)/A_p`$ we deduce the time evolution of the compression front $`z_c(t)`$. We then use equation (8), which gives the time evolution of the density at the compression front. Since the time $`t`$ enters as a simple parameter, we can obtain the density as a function of $`z_c`$ right before the collision. Finally, we notice that both equations (8) and (9) are laws of conservation of mass and momentum across the compression front. It is possible to deduce from them the velocity of the compression front and the pressure in the compressed part of the layer as implicit functions of $`\rho (z_c(t))`$:
$$\dot{z}_c(t)=\frac{\rho _oV_c}{\rho _o-\rho (z_c(t))},$$
$$\frac{F(t)}{A_p}=\frac{\rho _o\rho (z_c(t))V_c^2}{\rho _o-\rho (z_c(t))}.$$
These expressions are generalizations of those obtained by Goldshtein et al. for the propagation of a shock wave through a homogeneous layer of inelastic particles. Before presenting our experimental results, we show that with equation (10) we can check the momentum conservation of the collision. Evaluating it at time $`t=T_c`$ (here $`T_c`$ is the total collision time), we obtain
$$z_c(T_c)=V_cT_c+\frac{1}{A_p\rho _oV_c}\int _0^{T_c}F(\overline{t})\,d\overline{t}.$$ (11)
Since by definition $`z_c(T_c)=H+V_cT_c`$, where $`H`$ is the layer thickness in the compact state and $`V_cT_c`$ is the total displacement of the plate during the collision, and $`\rho _o=M/(HA_p)`$, we find
$$\int _0^{T_c}F(\overline{t})\,d\overline{t}=MV_c.$$ (12)

### B Experimental results

We begin this section by presenting results concerning the momentum conservation relation (12). Experimentally, we have checked it for a wide range of parameters, namely $`1<\mathrm{\Gamma }<\mathrm{\Gamma }_w\approx 2.8`$ and $`35`$ Hz $`<f<350`$ Hz. The main point is that, in our experiment, the velocity $`V_c`$ calculated from the completely inelastic ball model is a very good approximation. As the internal degrees of freedom of the layer are excited, one expects this approximation to become less accurate. However, this only occurs in the wave regime, where the take-off velocity is reduced and then both the collision velocity and the flight time are smaller than predicted .
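In practice, the front reconstruction of Sec. III A and the momentum check just described reduce to a few lines of numerics on one averaged peak. A sketch (Python with NumPy assumed; the plate area of the 40 mm cell and $`A_p\rho _o\approx 4.2`$ kg/m, estimated in the next subsection, are taken as inputs, and the pressure array is assumed baseline-subtracted with $`t=0`$ at contact):

```python
import numpy as np

def density_profile(t, P, V_c, Ap_rho0=4.2, A_p=np.pi*0.02**2):
    """rho(z)/rho_o just before the collision, from one pressure peak P(t) [Pa].

    Implements eq. (10) (running impulse integral -> z_c(t)) and
    eq. (8) (rho(z_c) = rho_o (1 - V_c/z_c')). Ap_rho0 = A_p*rho_o = M/H [kg/m].
    """
    F = P*A_p                                            # force on the plate [N]
    imp = np.concatenate(([0.0],
          np.cumsum(0.5*(F[1:] + F[:-1])*np.diff(t))))   # running integral of F dt
    z_c = V_c*t + imp/(Ap_rho0*V_c)                      # eq. (10)
    rho_rel = 1.0 - V_c/np.gradient(z_c, t)              # eq. (8)
    return z_c, rho_rel
```

The momentum check of eq. (12) is then the final value of the impulse integral compared with $`MV_c`$.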
There are other experimental effects that should be considered. For instance, friction on the cell walls can transfer momentum to the layer. Another possibility, much less probable, is the transfer of momentum to the walls by the formation of dynamical arches. Therefore, we expect the correct form of the momentum conservation, taking into account all possible sources of error, to be
$$\int _0^{T_c}F(\overline{t})\,d\overline{t}=M_{eff}V_c,$$ (13)
where $`M_{eff}`$ is an effective mass, and $`V_c`$ is the velocity of collision calculated from the completely inelastic ball model. From this model, we know that $`V_c=gF(\mathrm{\Gamma })/f`$, where $`F(\mathrm{\Gamma })`$ is a nonanalytic function of $`\mathrm{\Gamma }`$ which is found numerically . From our data we find that the scaling $`\int _0^{T_c}F(\overline{t})\,d\overline{t}\propto V_c`$ is very well verified as a function of both $`\mathrm{\Gamma }`$ and $`f`$. As expected, we find that $`M_{eff}`$ is independent of both $`\mathrm{\Gamma }`$ and $`f`$ and is slightly lower than $`M`$. In order to complete our description, it is necessary to estimate the granular density in the compact state, $`\rho _o`$. This quantity is in principle a dynamical variable, in the sense that it depends on how we compact the layer. However, we notice that the important parameter in (10) is $`A_p\rho _o=M/H`$. Taking into account the previous discussion about the effective mass of the layer and that $`H\approx 1.7`$ mm, we obtain $`A_p\rho _o\approx 4.2`$ kg/m. Fig. 5 presents averaged pressure peaks for several values of $`\mathrm{\Gamma }`$ at $`f=40`$ Hz. Each curve is obtained by averaging 15 collisions, in the same way as the one presented in Fig. 3b. We notice that our analysis is valid for $`\mathrm{\Gamma }<\mathrm{\Gamma }_w`$, where $`\mathrm{\Gamma }_w\approx 2.8`$ is the onset of parametric waves. Thus, the last pressure curve, for $`\mathrm{\Gamma }=2.98`$, is shown to display the difference of the parametric wave state; at this value of $`\mathrm{\Gamma }`$ the collision is quite spread out in time and the maximum pressure achieved is also much smaller. We will discuss this point further in the next section. For the other pressure curves presented, the intensity of the collision is an increasing function of $`\mathrm{\Gamma }`$, i.e. the maximum pressure $`P`$ increases. We understand this as the simple fact that the relative velocity of the collision $`V_c`$ increases with $`\mathrm{\Gamma }`$, as it does in this region of $`\mathrm{\Gamma }`$ in the completely inelastic ball model. What is not possible to explain with this model is the observation that the collision time $`T_c`$ also increases with $`\mathrm{\Gamma }`$. As discussed before, this is due to the excitation of the internal degrees of freedom of the granular layer, i.e., the layer is dilated. In Fig. 6 we present the layer density as a function of height for each of the curves introduced in Fig. 5. In general, the density $`\rho (z)`$ is approximately constant in a central region and decreases toward both ends of the layer. A penetration length for the dilation of the layer can be identified. This length increases with $`\mathrm{\Gamma }`$; for instance, it is of the order of $`d`$ and $`3d`$ for $`\mathrm{\Gamma }\approx 1.4`$ and $`\mathrm{\Gamma }\approx 2.4`$, respectively.
Another signature of the dilatancy of the layer is that its thickness increases with $`\mathrm{\Gamma }`$; it is $`14d`$ for $`\mathrm{\Gamma }\approx 1.4`$ and $`17d`$ for $`\mathrm{\Gamma }\approx 2.8`$. In the limit $`\mathrm{\Gamma }\rightarrow 1`$ we find that the density in the central part of the layer is close to $`\rho _o`$. Another interesting piece of information is the bulk dilation within the layer, $`\delta `$, as a function of $`z`$. This quantity is linked directly to the density as $`\rho (z)=\overline{\nu }m_o/(d+\delta )^3`$, where $`\overline{\nu }`$ is an average coordination number and $`m_o`$ the particle mass . Defining $`\delta =0`$ for $`\rho (z)=\rho _o`$ we obtain
$$\frac{\delta (z)}{d}=\left(\frac{\rho (z)}{\rho _o}\right)^{-1/3}-1$$ (14)
It is then possible to find $`\delta (z)`$ for all the data presented previously, which is what is shown in Fig. 7 (see also the numerical sketch below). From this figure, we can qualitatively examine the dependence of $`\delta (z)`$ on $`\mathrm{\Gamma }`$, as we did with $`\rho (z)`$. For instance, as we increase $`\mathrm{\Gamma }`$, the top and the bottom of the layer continuously dilate and the extension of the dilated part increases. The central part also dilates as $`\mathrm{\Gamma }`$ increases; for $`\mathrm{\Gamma }\approx 1.4`$ and $`\mathrm{\Gamma }\approx 2.4`$ it is of the order of $`0.004d`$ and $`0.02d`$ respectively. We also notice that for $`\mathrm{\Gamma }\approx 2.4`$, the value $`\delta \approx 0.1d`$ is reached for $`z\approx 2d`$ and $`z\approx 15d`$, while the total height of the layer is approximately $`17d`$. At this point, it is useful to define the vertical average of $`\delta (z)`$, since such a quantity can be linked to $`\delta _b`$. We therefore define the average dilation as
$$\delta =\frac{1}{H^{}}\int _0^{H^{}}\delta (z)\,dz$$ (15)
where $`H^{}`$ is the total height of the layer. Notice that $`H^{}`$ depends on the excitation intensity (i.e., it depends on $`\mathrm{\Gamma }`$ and $`f`$). Fig. 8 presents $`\delta `$ versus $`\mathrm{\Gamma }`$ for several excitation frequencies. We observe in the low frequency regime that $`\delta `$ presents a transition at $`\mathrm{\Gamma }\approx 2`$. In contrast, at high $`f`$ this transition tends to disappear and $`\delta `$ is roughly a linear function of $`\mathrm{\Gamma }`$. Complementing the previous data, in the inset of Fig. 8 we present $`\delta `$ versus $`f`$ for two values of $`\mathrm{\Gamma }`$, one below and one above $`\mathrm{\Gamma }=2`$. The average dilation $`\delta `$ is a decreasing function of $`f`$. We also present with continuous lines the fits $`\delta \propto f^{-b}`$; we find $`b=1.38\pm 0.06`$ and $`b=1.58\pm 0.03`$ for $`\mathrm{\Gamma }=1.5`$ and $`\mathrm{\Gamma }=2.4`$ respectively. Similar behaviors are obtained for $`\delta _b`$, as a function of both $`\mathrm{\Gamma }`$ and $`f`$ (see section IV). To conclude, in this section we have introduced the pressure peak shape, from which we have obtained density profiles. We have discussed these results qualitatively and found that the dilation is a function of the vertical coordinate. These results, which show that dilation is approximately constant in the bulk but increases sharply close to the free surface of the layer, are similar to those reported in previous works in one and two dimensions . We have also shown that at a low frequency of excitation, a critical value $`\mathrm{\Gamma }\approx 2`$ exists where an abrupt change in the layer dilation takes place. This transition will be discussed in detail in the next section.
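Before turning to that discussion, we note that equations (14) and (15) are straightforward to evaluate numerically; in the sketch below the density profile and all parameter values are hypothetical stand-ins for the data of Figs. 6-8.

```python
# Sketch of Eqs. (14)-(15): local dilation from a density profile and its
# vertical average. The profile here is a hypothetical stand-in for Fig. 6.
import numpy as np

d, rho_o = 0.12e-3, 2.5e3            # grain diameter [m], compact density
H = 17 * d                           # layer height (assumed)
z = np.linspace(0.0, H, 500)

# synthetic profile: compact in the bulk, dilated near both ends of the layer
rho = rho_o * (1.0 - 0.3 * np.exp(-z / d) - 0.3 * np.exp(-(H - z) / d))

delta = d * ((rho / rho_o) ** (-1.0 / 3.0) - 1.0)                 # Eq. (14)
delta_avg = np.sum(0.5 * (delta[1:] + delta[:-1]) * np.diff(z)) / H  # Eq. (15)
print(delta_avg / d)
```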
## IV Pressure and reflectivity measurements
Both the time evolution of the pressure and the reflected intensity are represented in Fig. 9 as functions of $`\mathrm{\Gamma }`$ when $`f`$ is in the low frequency regime. For $`\mathrm{\Gamma }>1`$, the layer-plate collision is always visible in the pressure signal as large peaks. However, no trace of this collision is observed in the reflected light up to $`\mathrm{\Gamma }\approx 2`$. Thus, for $`1<\mathrm{\Gamma }<2`$ the layer is compact, indicating that the energy injected during the layer-plate collision is completely dissipated by the multiple collisions between the grains or by friction. In contrast, for $`\mathrm{\Gamma }>2`$ the time mean value of the reflected light (DC component) exhibits a strong decrease, which shows that the layer undergoes a transition from a compact to a dilated state. A modulation in time (AC component) oscillating at the forcing frequency is also observed in the reflected light for $`\mathrm{\Gamma }>2`$. This modulation is in phase with the pressure peak. Indeed, immediately after the pressure peak occurs, the reflected light increases, indicating that the layer was dilated during the free flight and that a small compression occurs due to the collision. Furthermore, when the layer takes off, the surface dilation starts to increase (a decrease in reflected light). We first emphasize that this increase in surface dilation results from the amplification of small differences in initial conditions for the free flight of the grains located at the layer surface . Such differences arise as a consequence of the random character of kinetic-energy injection; due to the random packing of grains, a layer-plate collision naturally induces velocity fluctuations in the layer. As shown in Fig. 9b for $`\mathrm{\Gamma }>2`$, the layer never reaches a compact state, which would correspond to a higher value of the reflected light (see Fig. 9a). This result indicates that the kinetic energy injected into the internal degrees of freedom of the layer has not been completely dissipated within the cycle. To estimate the amount of energy not dissipated, we consider small changes of $`I`$ in time for the early stages of the layer expansion. We can then write $`I(t)/I_{ref}\approx 1-2\mathrm{\Delta }\delta _s(t)/(d+\delta _s(0))`$. Here $`\mathrm{\Delta }\delta _s(t)=\delta _s(t)-\delta _s(0)`$ and the reference intensity $`I_{ref}`$ is taken when the layer starts to expand, corresponding to a finite dilation $`\delta _s(0)`$ at $`t=0`$. From the slope of intensity versus time for the early stages of the layer expansion we obtain a characteristic time $`\tau `$. Dimensionally, $`\tau ^{-1}\approx \mathrm{\Delta }V_{to}/(d+\delta _s(0))`$, where $`\mathrm{\Delta }V_{to}`$ can be associated with the velocity fluctuations, at the take-off time, of the particles located at the free surface of the layer. Experimentally, $`\tau `$ is close to $`1/27`$ s and is almost independent of frequency. Thus, our estimate shows that $`\mathrm{\Delta }V_{to}/V_c`$ increases linearly with frequency like $`2\times 10^{-4}f`$, varying from 0.01 for low $`f`$ ($`35`$ Hz) to 0.04 for intermediate $`f`$ ($`200`$ Hz). Here $`V_c`$ is the layer-plate relative velocity at the collision calculated from the completely inelastic ball model.
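A rough numerical transcription of this estimate reads as follows; the surface dilation at take-off and the $`V_c`$ values are assumed numbers, so only the orders of magnitude matter.

```python
# Rough sketch of the velocity-fluctuation estimate; delta_s(0) and the V_c
# values are assumptions, tau = 1/27 s is the measured characteristic time.
d = 0.12e-3                 # grain diameter [m]
delta_s0 = 0.1 * d          # surface dilation at take-off (assumption)
tau = 1.0 / 27.0            # slope-derived characteristic time [s]

dV_to = (d + delta_s0) / tau               # ~3.6e-3 m/s fluctuation scale
for f, V_c in ((35.0, 0.36), (200.0, 0.09)):   # assumed collision speeds
    print(f, dV_to / V_c)                  # ~0.01 at 35 Hz, ~0.04 at 200 Hz
```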
Therefore, our results indicate that for $`2<\mathrm{\Gamma }<2.8`$, the ratio of residual energy to injected energy, $`(\mathrm{\Delta }V_{to}/V_c)^2`$, increases with $`f`$, implying that energy dissipation decreases with $`f`$. At this stage, the importance of typical velocity fluctuations, or “temperature”, at the free surface of the layer emerges: we notice that although most of the energy is dissipated, a small amount of residual “thermal energy” is enough to sustain surface dilation. We focus now on the transition from a compact to a dilated state experienced by the layer at $`\mathrm{\Gamma }\approx 2`$. Fig. 10 shows the pressure, $`P`$, the collision time, $`T_c`$, and the reflected intensity versus $`\mathrm{\Gamma }`$ for $`f=40`$ Hz. $`P`$ corresponds to the maximum pressure exerted on the plate during the collision, and $`T_c`$ here is defined as the width of the pressure peak at a quarter of its height. We also present in Fig. 10a the numerical fit of $`P`$ with $`V_c/T_c`$; it is evident that $`P\propto V_c/T_c`$, which is a natural consequence of the impulsive nature of the periodic forcing. At $`\mathrm{\Gamma }\approx 2`$, the DC component of the intensity exhibits a strong decrease while its AC component abruptly increases. At the same value of $`\mathrm{\Gamma }`$ there is a small decrease in the pressure peak, and a small increase in $`T_c`$, due to an increase in the bulk dilation of the layer. This decrease in pressure was already observed by P. Umbanhowar , and it is stronger for particles with higher restitution coefficients; however, no correlation with reflectivity measurements was made to investigate the state of the layer. We emphasize that this transition occurs for $`\mathrm{\Gamma }<\mathrm{\Gamma }_w`$. At the onset of surface waves ($`\mathrm{\Gamma }=\mathrm{\Gamma }_w`$), the pressure presents a strong decrease associated with the fact that the layer-plate collision is spread out in time (Fig. 10a, b). Using the considerations introduced in the previous section, we calculate $`\delta _b`$ and $`\delta _s`$ as functions of $`\mathrm{\Gamma }`$. This is presented in Fig. 11 for $`f=40`$ Hz. To complete the data, we also present the average dilation $`\delta `$, and we observe that the agreement with $`\delta _b`$ is fairly good. Similar to what we found for $`\delta `$ (Fig. 8), we observe that both $`\delta _b`$ and $`\delta _s`$ exhibit transitions at $`\mathrm{\Gamma }\approx 2`$. Curiously, for $`\mathrm{\Gamma }<2`$ the bulk dilation is higher than the surface dilation, and $`\delta _s`$ takes negative values, which simply means that the layer surface reaches a state more compact than the initial one. This is related to the fact that the initial state, consistent with our experimental method , is not the most compact accessible state of the layer. We therefore say that for $`\mathrm{\Gamma }<2`$ the state of the layer is solid-like: the energy injected during the collision is completely dissipated. However, in this regime, the available energy is enough to produce rearrangements of surface grains. In all the cases tested, the maximum compaction was never larger than $`2\%`$ of the initial density. Consequently, we associate the increase in $`\delta _s`$ and $`\delta _b`$ observed at low $`f`$ for $`\mathrm{\Gamma }>2`$ with a solid-liquid-type transition. In the liquid phase, the average dilation is large enough to allow particles to move with respect to each other.
For low $`f`$, at the critical value $`\mathrm{\Gamma }\approx 2`$, the injected energy rate becomes larger than the dissipation rate, and the energy excess sustains the dilation of the granular layer. In Fig. 12 we show the frequency dependence of the same quantities presented in Fig. 10. As opposed to the case of low $`f`$, at high $`f`$ and for $`\mathrm{\Gamma }>2`$ the DC component of $`I`$ increases slightly. This indicates that the layer surface has reached a state more compact than the initial one. It is very important to notice that the decrease in pressure associated with the wave instability is clearly observed over the entire range of frequencies (Fig. 12a). Complementing Fig. 12, we present in Figs. 13a and 13c the maximum pressure and the DC component of intensity versus $`f`$ for two values of $`\mathrm{\Gamma }`$, both smaller than the critical one for waves, with one right below and the other right above the fluidization transition. Both the maximum pressure and the jump in reflectivity decrease as $`f`$ increases. We also present the bulk dilation $`\delta _b`$ (see Fig. 13b) calculated directly from the pressure through equation (2). We find that $`\delta _b`$ scales as $`f^{-b}`$ with $`b=1.42\pm 0.07`$ and $`b=1.54\pm 0.03`$ for $`\mathrm{\Gamma }=1.5`$ and $`2.3`$ respectively. These values are in very good agreement with those obtained for $`\delta `$ (see Fig. 8), and they provide a consistency test for both kinds of measurements. These results indicate that the relevant quantity is not only the ratio of the injected energy per particle to the potential energy required to raise a particle by a fraction of its diameter, in which case $`\delta _b`$ would vary as $`1/f^2`$, but also the dissipated energy. We notice that our results contrast with those obtained by Luding et al. in numerical simulations of a column of particles in the completely fluidized regime, where the average dilation scales as $`\delta \propto (Af)^2`$, which for constant $`\mathrm{\Gamma }`$ becomes $`\delta \propto (\mathrm{\Gamma }/f)^2`$ . However, similar simulations done by the same group for a two-dimensional layer indicate that the layer expansion scales as $`h_{cm}-h_{cmo}\propto (Af)^{3/2}\propto (\mathrm{\Gamma }/f)^{3/2}`$. Even though these results were also obtained in a completely fluidized regime (typically $`\mathrm{\Gamma }>10`$), we observe quite good agreement for the scaling with $`f`$ of the expansion of the layer. On the other hand, information about the surface dilation, $`\delta _s`$, versus $`f`$ is obtained using the intensity data as $`I/I_0\approx d^2/(d+\delta _s)^2`$ (see Fig. 13d). For $`f>225`$ Hz, $`\delta _s`$ takes negative values, which indicates that the surface layer is more compact than the initial one . However, in this regime, as shown in the previous section, the bulk dilation increases with $`\mathrm{\Gamma }`$ but remains very small, $`\delta /d\approx 10^{-3}`$. Finally, let us mention another interesting transition, linked to the flat-with-kinks instability reported in previous works . If we increase $`\mathrm{\Gamma }`$ further we find that period doubling occurs at $`\mathrm{\Gamma }\approx 3.6`$, as reported before . Next, for $`\mathrm{\Gamma }\approx 4.6`$, an inverse transition of the liquid-solid type is detected. Fig. 14 shows a typical measurement of both the reflected intensity and the maximum pressure for a wider range of $`\mathrm{\Gamma }`$. We detect that at $`\mathrm{\Gamma }\approx 4.6`$, $`P`$ strongly decreases and the mean value of the intensity strongly increases.
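Stepping back briefly to the frequency scalings just quoted: the exponents $`b`$ follow from a linear least-squares fit in log-log coordinates, as in this sketch (the data points are hypothetical stand-ins for the measurements of Fig. 13b):

```python
# Sketch of the power-law fit delta_b ~ f^(-b), done as a linear fit in
# log-log coordinates; the data are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
f = np.array([35., 60., 100., 160., 250., 350.])              # Hz, assumed
delta_b = 5e-3 * (f / 35.0) ** -1.5 * (1 + 0.05 * rng.standard_normal(f.size))

slope, intercept = np.polyfit(np.log(f), np.log(delta_b), 1)
print(-slope)   # ~1.5, cf. b = 1.42 +/- 0.07 and 1.54 +/- 0.03 in the text
```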
In light of the results discussed previously, both changes observed at $`\mathrm{\Gamma }\approx 4.6`$ (the strong decrease in $`P`$ and the strong increase in the mean intensity) reflect a strong decrease of grain mobility. We notice that the critical value $`\mathrm{\Gamma }\approx 4.6`$ corresponds to $`V_c=0`$ in the completely inelastic ball model, so in fact no energy is injected into the internal degrees of freedom. We also conclude that no surface waves can be sustained in this regime, except at the kink itself. Indeed, the shear induced at the kink by the flat parts oscillating out of phase is large enough to induce dilation. At low frequencies, this dilation is enough to allow hydrodynamic waves, like those shown in figure 1c. As $`\mathrm{\Gamma }`$ is increased further, the pressure increases and the reflectivity decreases. Thus, the energy injection again becomes sufficient to sustain surface waves in the layer. This fact is consistent with the existence of the $`f/4`$ waves reported previously . Notice that for $`\mathrm{\Gamma }=4.6`$ the maximum velocity of the plate is $`A\omega \approx 12`$ cm/s; this confirms that the relevant scale of velocity fluctuations is given by $`V_c`$ and not $`A\omega `$. The experimental results presented above can be summarized as follows. Depending on the excitation frequency we observe different kinds of states and waves. At low frequency, the bulk and surface dilations present strong increases which are associated with a fluidization transition. Surface waves observed in this regime involve large relative motion between particles and are therefore considered as the hydrodynamic modes of the layer. At intermediate $`f`$, although the injected energy is small, we still observe a decrease in reflectivity as a function of $`\mathrm{\Gamma }`$. In this regime, as shown in Fig. 13b, $`\delta _b`$ is too small ($`\delta _b/d<0.1`$) to allow motion between the particles . Therefore, at the critical $`\mathrm{\Gamma }`$, the decrease of reflectivity is the signature of particles fluctuating around their positions at the free surface of the layer. We associate this decrease with a heating up of the solid phase. For higher frequencies, the layer undergoes a compaction transition which is detected by the increase in surface density. Below and above this transition the local bulk dilation $`\delta _b/d`$ is even smaller, of order $`0.005`$, implying that the mobility both in the bulk and at the free surface is completely suppressed. Thus, the very low amplitude surface waves detected in the compaction regime, through the strong decrease in the maximum pressure, must correspond to excitations in which the layer is slightly modulated in time and space. We will see in the following section that these waves are bending waves, associated with the ability of the compact layer to deform. Finally, Fig. 15 presents the phase diagram for the granular layer: the phase boundaries separating the various states and surface waves have been obtained from the data in Fig. 12. The layer state and surface wave transitions occur at approximately constant values of $`\mathrm{\Gamma }`$, independent of $`f`$.
## V Low amplitude waves: bending waves
To check the existence of these waves we have performed a set of experiments on a two-dimensional granular layer. The advantage of using a two-dimensional system is that it allows us to obtain side views of the waves. This is of particular importance since bending waves are difficult to visualize, as they have amplitudes of the order of fifty percent of a particle diameter. In order to observe such small amplitudes we have used photoelastic cylinders of $`6`$ mm in diameter and $`6.35`$ mm in length.
Typical excitation frequencies are about $`40`$ Hz, which for this system is in the high frequency regime. We recall that this regime occurs for frequencies much larger than the crossover frequency, $`f^{}\sim \sqrt{g/d}`$. When varying the depth of the layer ($`N`$ particles deep), this scaling becomes $`f^{}\sim \sqrt{g/Nd}`$ (see the numerical check at the end of this section). In the case of bronze particles of $`d\approx 0.12`$ mm, bending waves are detected for $`f>225`$ Hz. This tells us that these waves should be observed at $`f\approx 40`$ Hz for $`d=6`$ mm and a layer about ten particle diameters thick. With these parameters, for $`\mathrm{\Gamma }\approx 3.5`$, the amplitude of the waves should be of the order of $`1`$ mm. Typical snapshots of two stages of bending waves at $`\mathrm{\Gamma }=3.5`$, $`f=40`$ Hz and $`N=10`$ are presented in figures 16a and 16b. We observe that the layer bends slightly with respect to the horizontal. Similar to what occurs for low frequency waves, this modulation alternates in time at half the forcing frequency (see below). However, in this case, the wavelength of the modulation is about one layer thickness and is also nearly independent of $`f`$. Some particles are marked with black spots, which allows us to distinguish them and follow their trajectories (see Figs. 16a, b). As expected, the mobility of the particles is very low. Only at the layer surface do some particles move with respect to each other, over distances of the order of the driving amplitude. In the bulk this motion is completely suppressed. Additional support for these low amplitude waves is provided in Fig. 16c, in which a compaction front that moves laterally is observed as a bright zone. This image results from the difference of two consecutive snapshots (the acquisition period is about 0.8 ms). At a compression zone, the vertical stress in the layer is high, so the light is transmitted. Thus, the difference of images mainly shows the zones under stress. In the case presented, the compression front moves from right to left and the collision occurs very near the right boundary of the image. In the next cycle the collision occurs near the left boundary and the compression front travels to the right. With a wider view of the layer we observe that in fact two compression fronts are created at each collision point, one front traveling in each direction. This kind of visualization allows us to say with confidence that these parametric waves are also subharmonic, i.e. their frequency is $`f/2`$. Finally, from the images we estimate the velocity of the compression front to be of the order of a few meters per second. This value is very high compared to the estimated velocity at the collision, $`V_c\approx 0.25`$ m/s; this indicates the existence of contact arcs (see the contact lines in Fig. 16c), so the layer is almost not dilated.
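As a consistency check of the regime assignment made at the beginning of this section, the crossover scaling can be evaluated for both systems. The prefactor is set to one below, which is an assumption, so only the relative comparisons are meaningful.

```python
# Sketch of the crossover-frequency scaling f* ~ sqrt(g/(N d)); the unit
# prefactor is an assumption.
import math

def f_star(d, N, g=9.81):
    return math.sqrt(g / (N * d))

# bronze spheres, d ~ 0.12 mm, layer ~ 10 particles deep
print(f_star(0.12e-3, 10))   # ~90 Hz: bending waves indeed seen at f > 225 Hz
# photoelastic cylinders, d = 6 mm, N = 10
print(f_star(6.0e-3, 10))    # ~13 Hz: f = 40 Hz is again high-frequency
```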
## VI Conclusions
In conclusion, depending on the excitation frequency, we observe different kinds of states and waves. Our experimental results reveal the existence of a solid-liquid transition that precedes the subharmonic wave instability. Hydrodynamic surface waves can then be considered as the natural excitations existing in a fluidized granular layer. In contrast, the very low amplitude surface waves detected in the compaction regime correspond to excitations in which the layer bends slightly, alternately in time and space. We have seen in the previous section that these waves are associated with the compact character of the layer. The layer-state and surface-wave transitions observed here occur at approximately constant values of $`\mathrm{\Gamma }`$, independent of $`f`$. Additional experimental evidence of the fluidization transition experienced by the layer at $`\mathrm{\Gamma }\approx 2`$ can be found in reference (see, for instance, Fig. 6 in ref. ). Although rather large particles at small forcing frequencies were used there, we find, after estimating the crossover frequency, that those experiments are actually in the low frequency regime. Independent experimental evidence for the compaction transition is also found in previous works. For instance, a strong increase in granular density was found, close to $`\mathrm{\Gamma }\approx 1.9`$, in several experiments on large columns of grains submitted to “taps” of a single cycle of vibration . In this case, applying the crossover-frequency scaling, we find consistently that these experiments correspond to the high frequency limit. Therefore, we conclude that both the fluidization and the compaction transitions are well established experimentally. However, it is still unclear which mechanisms dominate these transitions and why they arise at a constant value of $`\mathrm{\Gamma }`$. We can only say with some confidence that at low energy injection rates, or equivalently at plate amplitudes small with respect to the thickness of the layer, the transition will be of the compaction type. In the opposite case of high energy injection rates, this transition will be of the fluidization type. Also, we have clearly shown that the relevant scale of velocity fluctuations is given by $`V_c`$ and not $`A\omega `$. Then, via the intensity measurements we deduced that the dissipation decreases as $`f`$ is increased; this is equivalent to stating that dissipative effects increase with velocity fluctuations. A possible cause of this is the velocity dependence of the restitution coefficient: it has been recently shown that $`1-ϵ\propto v^\alpha `$, where $`v`$ is the relative normal velocity at the collision and $`\alpha `$ a positive number . Nevertheless, as the dilation is reduced we expect the friction between grains to become an important dissipative mechanism. Finally, the importance of the bulk and surface dilation measurements is that they provide a way, complementary to the granular temperature, to explore the excitation of the internal degrees of freedom of a vibrated layer. For the dilation in the bulk we have found that, at constant $`\mathrm{\Gamma }`$, it is a decreasing function of $`f`$ of the form $`1/f^b`$, with $`b\approx 3/2`$. This agrees with previous simulations of a two-dimensional layer . It is clear that the deviation from the expected value $`b=2`$ is due to dissipative effects, but the exact numerical value seems to depend on $`\mathrm{\Gamma }`$. It is a pleasure to thank Paul Umbanhowar and Enrique Tirapegui for many enlightening discussions, and Satish Kumar for useful comments on the manuscript. This work was supported by Fondecyt Grant No. 1970682, Catedra Presidencial en Ciencias and Dicyt USACH.
# Bogoliubov dispersion relation for a “photon fluid”: Is this a superfluid?

*It is a great pleasure to dedicate this paper to my lifelong friend, Marlan Scully. Marlan and I first met when we were graduate students (of Willis Lamb and Charles Townes, respectively). Over many years we have enjoyed discussing and learning physics together, as well as sharing and growing in our like precious faith. Happy birthday, Marlan!*

## I Introduction
Inspired by the recent experimental discoveries of Bose-Einstein condensation of laser-cooled atoms , we would like to consider here the inverse question: Can one observe Bose-Einstein condensation of photons? Closely related is a second question: Is the resulting Bose condensate a superfluid? We know that photons are bosons, so it would seem that they could in principle undergo this kind of condensation. The difficulty is that in the usual Planck blackbody configuration, which consists of an empty 3D cavity, the photon is massless and its chemical potential is zero, so that Bose-Einstein condensation of photons under these circumstances would seem to be impossible. However, we consider here an atom-filled 2D Fabry-Perot cavity configuration instead of the usual empty 3D Planck cavity configuration. We find that if one excites only one of the longitudinal modes of the Fabry-Perot cavity by means of narrow-linewidth laser radiation, so that the dynamics of the light inside the cavity becomes effectively two-dimensional , and if this radiation is well detuned to the red side of the resonance of the atoms in their ground state, so that an effective repulsive photon-photon interaction mediated by the atoms results, then the effective mass and chemical potential of a photon inside the cavity no longer vanish. Thus Bose-Einstein condensation of photons inside the Fabry-Perot cavity can occur. We shall explore the circumstances under which this may happen, and shall connect this problem with an earlier problem solved by Bogoliubov for the weakly-interacting Bose gas. In this way, we shall see that the Bogoliubov dispersion relation should hold for the “photon fluid” that forms as a result of multiple photon-photon collisions occurring inside the cavity. In particular, we shall see that sound waves, or “phonons” in the photon fluid, are the lowest-lying excitations of the system. According to an argument due to Landau, this implies that the photon fluid could become a superfluid. Historically speaking, in the study of the interaction of light with matter, most of the recent emphasis has been on exploring new states of matter, such as the recently observed atomic Bose-Einstein condensates. Not as much attention has been focused on exploring new states of light. Of course, the invention of the laser led to the discovery of a new state of light, namely the coherent state, which is a very robust one. Two decades ago, squeezed states were discovered, but these states are not as robust as the coherent state, since they are easily degraded by scattering and absorption. In contrast to the laser, which involves a population-inverted atomic system far from thermal equilibrium, we shall explore here states very close to the ground state of a photonic system, and hence very near absolute zero in temperature; such states should therefore be robust. The interacting many-photon system (the “photon fluid”) is an example of a quantum many-body problem.
In an extension of the interacting Bose gas problem, we shall derive the Bogoliubov dispersion relation for the weakly-interacting photon gas with repulsive photon-photon interactions, starting from the microscopic (i.e., the second-quantized) level. Thereby we shall find an expression for the effective chemical potential of a photon in the photon fluid, and shall relate the velocity of sound in the photon fluid to this nonvanishing chemical potential. In this way, we shall lay the theoretical foundations for an experiment to measure the sound wave part of the dispersion relation for the photon fluid. We shall also propose another experiment to measure the critical velocity of this fluid, and thus to test for the possibility of the superfluidity of the resulting state of the light. Although the interaction Hamiltonian used in this paper is equivalent to that used earlier in four-wave squeezing, we emphasize here the collective aspects of the problem which result from multiple photon-photon collisions. This leads to the idea of the “photon fluid.” It turns out that microscopic and macroscopic analyses yield exactly the same Bogoliubov dispersion relation for the excitations of this fluid . Hence it may be argued that there is nothing fundamentally new in the microscopic analysis given below which is not already contained in the macroscopic, classical nonlinear optical analysis. However, it is the microscopic analysis which leads to the viewpoint of the interacting photon system as a “photon fluid,” a conception which could give rise to new ways of understanding and discovering nonlinear optical phenomena. Furthermore, the interesting question of the quantum optical state of the light inside the cavity resulting from multiple collisions between the photons (i.e., whether it results in a coherent, squeezed, Fock, or some other quantum state) cannot be answered by the macroscopic, classical nonlinear optical analysis, and this necessitates the microscopic treatment given below.
## II The Bogoliubov problem
Here we re-examine one particular many-body problem, the one first solved by Bogoliubov . Suppose that one has a zero-temperature system of bosons which are interacting with each other repulsively, for example, a dilute system of small, bosonic hard spheres. Such a model was intended to describe superfluid helium, but in fact it did not work well there, since the interactions between atoms in superfluid helium are too strong for the theory to be valid. In order to make the problem tractable theoretically, let us assume that these interactions are weak. In the case of light, the interactions between the photons are in fact always weak, so that this assumption is a good one. However, these interactions are nonvanishing, as demonstrated by the fact that photon-photon collisions mediated by atoms excited near, but off, resonance have been experimentally observed .
We start with the Bogoliubov Hamiltonian
$$H=H_{free}+H_{int}$$ (1)
$$H_{free}=\sum _pϵ(p)\,a_p^{\dagger }a_p$$ (2)
$$H_{int}=\frac{1}{2}\sum _{\kappa pq}V(\kappa )\,a_{p+\kappa }^{\dagger }a_{q-\kappa }^{\dagger }a_pa_q,$$ (3)
where the operators $`a_p^{\dagger }`$ and $`a_p`$ are creation and annihilation operators, respectively, for bosons with momentum $`p`$, which satisfy the Bose commutation relations
$$[a_p,a_q^{\dagger }]=\delta _{pq}\text{ and }[a_p,a_q]=[a_p^{\dagger },a_q^{\dagger }]=0.$$ (4)
The first term $`H_{free}`$ in the Hamiltonian represents the energy of the free boson system, and the second term $`H_{int}`$ represents the energy of the interactions between the bosons arising from the potential energy $`V(\kappa )`$, which is the Fourier transform of the potential energy $`V(r_2-r_1)`$ in configuration space of a pair of bosons located at $`r_2`$ and $`r_1`$. The interaction term is equivalent to the one responsible for producing squeezed states of light via four-wave mixing . It represents the annihilation of two particles, here photons, of momenta $`p`$ and $`q`$, along with the creation of two particles with momenta $`p+\kappa `$ and $`q-\kappa `$, in other words, a scattering process with a momentum transfer $`\kappa `$ between a pair of particles with initial momenta $`p`$ and $`q`$, along with the assignment of an energy $`V(\kappa )`$ to this scattering process. Here the assumption that the interactions are weak means that the second term in the Hamiltonian is much smaller than the first, i.e., $`|V(\kappa )|\ll |ϵ(\kappa )|`$.
## III The free-photon dispersion relation inside a Fabry-Perot resonator
Photons with momenta $`p`$ and $`q`$ also obey the above commutation relations, so that the Bogoliubov theory should in principle also apply to the weakly-interacting photon gas. The factor $`ϵ(p)`$ represents the energy as a function of the momentum (the dispersion relation) for the free, i.e., noninteracting, bosons. In the case of photons in a Fabry-Perot resonator, the boundary conditions at the mirrors cause the $`ϵ(p)`$ of a photon trapped inside the resonator to become an energy-momentum relation identical to that of a nonrelativistic particle with an effective mass of $`m=\hbar \omega /c^2`$. This can be understood starting from Fig. 1. For high-reflectivity mirrors, the vanishing of the electric field at the reflecting surfaces of the mirrors imposes a quantization condition on the allowed values of the $`z`$-component of the photon wave vector, $`k_z=n\pi /L`$, where $`n`$ is an integer, and $`L`$ is the distance between the mirrors. Thus the usual frequency-wavevector relation
$$\omega (k)=c[k_x^2+k_y^2+k_z^2]^{1/2},$$ (5)
upon multiplication by $`\hbar `$, becomes the energy-momentum relation for the photon
$$E(p)=c[p_x^2+p_y^2+p_z^2]^{1/2}=c[p_x^2+p_y^2+\hbar ^2n^2\pi ^2/L^2]^{1/2}=c[p_x^2+p_y^2+m^2c^2]^{1/2},$$ (6)
where $`m=\hbar n\pi /Lc`$ is the effective mass of the photon. In the limit of small-angle (or paraxial) propagation, where the small transverse momentum of the photon satisfies the inequality
$$p_{\perp }=[p_x^2+p_y^2]^{1/2}\ll p_z=\hbar k_z=\hbar n\pi /L,$$ (7)
we obtain from a Taylor expansion of the relativistic relation a nonrelativistic energy-momentum relation for the 2D noninteracting photons inside the Fabry-Perot resonator,
$$E(p_{\perp })\approx mc^2+p_{\perp }^2/2m,$$ (8)
where $`m=\hbar n\pi /Lc\approx \hbar \omega /c^2`$ is the effective mass of the confined photons.
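As a quick numerical illustration (a sketch we add here; $`L=2`$ cm and $`\lambda \approx 780`$ nm are the cavity parameters adopted later in the paper):

```python
# Numerical sketch of the effective photon mass m = hbar*n*pi/(L*c) for a
# photon confined in a Fabry-Perot cavity.
import math

hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
L = 2e-2                 # cavity length [m]
lam = 780e-9             # optical wavelength [m]

n = round(2 * L / lam)                 # longitudinal mode number, k_z = n*pi/L
m_eff = hbar * n * math.pi / (L * c)   # effective mass
print(n, m_eff)                        # n ~ 5e4, m_eff ~ 2.8e-36 kg
                                       # (about 3e-6 electron masses)
```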
It is convenient to redefine the zero of energy, so that only the effective kinetic energy,
$$ϵ(p_{\perp })\equiv p_{\perp }^2/2m,$$ (9)
remains. To establish the connection with the Bogoliubov Hamiltonian, we identify the two-dimensional momentum $`p_{\perp }`$ as the momentum $`p`$ that appears in this Hamiltonian, and the above $`ϵ(p_{\perp })`$ as the $`ϵ(p)`$ that appears in Eq. (3).
## IV The Bogoliubov dispersion relation for the photon fluid
Now we know that in an ideal Bose gas at absolute zero temperature, there exists a Bose condensate consisting of a macroscopic number $`N_0`$ of particles occupying the zero-momentum state. This feature should survive in the case of the weakly-interacting Bose gas, since as the interaction vanishes, one should recover the Bose condensate state. Hence following Bogoliubov, we shall assume here that even in the presence of interactions, $`N_0`$ will remain a macroscopic number in the photon fluid. This macroscopic number is determined by the intensity of the incident laser beam which excites the Fabry-Perot cavity system, and turns out to be a very large number compared to unity (see below). For the ground state wave function $`\mathrm{\Psi }_0(N_0)`$ with $`N_0`$ particles in the Bose condensate in the $`p=0`$ state, the zero-momentum operators $`a_0`$ and $`a_0^{\dagger }`$ operating on the ground state obey the relations
$$a_0|\mathrm{\Psi }_0(N_0)\rangle =\sqrt{N_0}\,|\mathrm{\Psi }_0(N_0-1)\rangle $$ (10)
$$a_0^{\dagger }|\mathrm{\Psi }_0(N_0)\rangle =\sqrt{N_0+1}\,|\mathrm{\Psi }_0(N_0+1)\rangle .$$ (11)
Since $`N_0\gg 1`$, we shall neglect the difference between the factors $`\sqrt{N_0+1}`$ and $`\sqrt{N_0}`$. Thus one can replace all occurrences of $`a_0`$ and $`a_0^{\dagger }`$ by the $`c`$-number $`\sqrt{N_0}`$, so that to a good approximation $`[a_0,a_0^{\dagger }]\approx 0`$. However, the number of particles in the system is then no longer exactly conserved, as can be seen by examination of the term in the Hamiltonian
$$\sum _\kappa V(\kappa )a_\kappa ^{\dagger }a_{-\kappa }^{\dagger }a_0a_0\approx N_0\sum _\kappa V(\kappa )a_\kappa ^{\dagger }a_{-\kappa }^{\dagger },$$ (12)
which represents the creation of a pair of particles, i.e., photons, with transverse momenta $`\kappa `$ and $`-\kappa `$ out of nothing. However, whenever the system is an open one, i.e., whenever it is connected to an external reservoir of particles which allows the total particle number inside the system (i.e., the cavity) to fluctuate around some constant average value, then the total number of particles need only be conserved on the average. Formally, one standard way to compensate for the lack of exact particle number conservation is to use the Lagrange multiplier method and subtract a chemical potential term $`\mu N_{op}`$ from the Hamiltonian (just as in statistical mechanics when one goes from the canonical ensemble to the grand canonical ensemble),
$$H\rightarrow H^{}=H-\mu N_{op},$$ (13)
where $`N_{op}=\sum _pa_p^{\dagger }a_p`$ is the total number operator, and $`\mu `$ represents the chemical potential, i.e., the average energy for adding a particle to the open system described by $`H`$. In the present context, we are considering the case of a Fabry-Perot cavity with low, but finite, transmittivity mirrors which allow photons to enter and leave the cavity, due to an input light beam coming in from the left and an output beam leaving from the right (see Fig. 3). This permits a realistic physical implementation of the external reservoir, since the Fabry-Perot cavity allows the total particle number inside the cavity to fluctuate due to particle exchange with the beams outside the cavity.
However, the photons remain trapped inside the cavity long enough so that a condition of thermal equilibrium is achieved after multiple photon-photon interactions (i.e., after very many collisions, which is indeed the case for the experimental numbers to be discussed below). This leads to the formation of a photon fluid inside the cavity . It will be useful to separate out the zero-momentum components of the interaction Hamiltonian, since it will turn out that there is a macroscopic occupation of the zero-momentum state due to Bose condensation. The prime on the sums $`\sum _p^{\prime }`$, $`\sum _{p\kappa }^{\prime }`$, and $`\sum _{\kappa pq}^{\prime }`$ in the following equation denotes sums over momenta explicitly excluding the zero-momentum state, i.e., all the running indices $`p`$, $`\kappa `$, $`q`$, $`p+\kappa `$, $`q-\kappa `$ which are not explicitly set equal to zero are nonzero:
$$H_{int}=\frac{1}{2}V(0)a_0^{\dagger }a_0^{\dagger }a_0a_0+V(0)\sum _p^{\prime }a_p^{\dagger }a_pa_0^{\dagger }a_0+\sum _p^{\prime }\left(V(p)a_p^{\dagger }a_0^{\dagger }a_pa_0+\frac{1}{2}\left[V(p)a_p^{\dagger }a_{-p}^{\dagger }a_0a_0+V(p)a_0^{\dagger }a_0^{\dagger }a_pa_{-p}\right]\right)+\sum _{p\kappa }^{\prime }V(\kappa )\left(a_{p+\kappa }^{\dagger }a_0^{\dagger }a_pa_\kappa +a_{p+\kappa }^{\dagger }a_{-\kappa }^{\dagger }a_pa_0\right)+\frac{1}{2}\sum _{\kappa pq}^{\prime }V(\kappa )\,a_{p+\kappa }^{\dagger }a_{q-\kappa }^{\dagger }a_pa_q.$$ (16)
Here we have also assumed that $`V(-p)=V(p)`$. By thus separating out the zero-momentum state from the sums in the Hamiltonian, and replacing all occurrences of $`a_0`$ and $`a_0^{\dagger }`$ by $`\sqrt{N_0}`$, we find that the Hamiltonian $`H^{}`$ in Eq. (13) decomposes into three parts,
$$H^{}=H_0+H_1+H_2,$$ (17)
where, in decreasing powers of $`\sqrt{N_0}`$,
$$H_0=\frac{1}{2}V(0)a_0^{\dagger }a_0^{\dagger }a_0a_0\approx \frac{1}{2}V(0)N_0^2,$$ (18)
$$H_1\approx \sum _p^{\prime }ϵ^{}(p)a_p^{\dagger }a_p+\frac{1}{2}N_0\sum _p^{\prime }V(p)\left(a_p^{\dagger }a_{-p}^{\dagger }+a_pa_{-p}\right),$$ (19)
$$H_2\approx \sqrt{N_0}\sum _{p\kappa }^{\prime }V(\kappa )\left(a_{p+\kappa }^{\dagger }a_pa_\kappa +a_{p+\kappa }^{\dagger }a_{-\kappa }^{\dagger }a_p\right)+\frac{1}{2}\sum _{\kappa pq}^{\prime }V(\kappa )\,a_{p+\kappa }^{\dagger }a_{q-\kappa }^{\dagger }a_pa_q.$$ (20)
Here
$$ϵ^{}(p)=ϵ(p)+N_0V(0)+N_0V(p)-\mu $$ (21)
is a modified photon energy, where $`N_0`$ is given by
$$N_0+\langle \mathrm{\Psi }_0|\sum _p^{\prime }a_p^{\dagger }a_p|\mathrm{\Psi }_0\rangle =N$$ (22)
(the term $`\langle \mathrm{\Psi }_0|\sum _p^{\prime }a_p^{\dagger }a_p|\mathrm{\Psi }_0\rangle `$ represents the number of photons in the “depletion,” i.e., those photons which have been scattered out of the condensate at $`T=0`$ due to photon-photon collisions), and where $`\mu `$ is given by
$$\mu =\frac{\partial E_0}{\partial N}.$$ (23)
Here $`E_0=\langle \mathrm{\Psi }_0|H|\mathrm{\Psi }_0\rangle `$ is the ground state energy of $`H`$. In the approximation that there is little depletion of the Bose condensate due to the interactions (i.e., $`N\approx N_0\gg 1`$), the first term of Eq. (16) (i.e., $`H_0`$ in Eq. (18)) dominates, so that
$$E_0\approx \frac{1}{2}N_0^2V(0)\approx \frac{1}{2}N^2V(0),$$ (24)
and therefore
$$\mu \approx NV(0)\approx N_0V(0).$$ (25)
This implies that the effective chemical potential of a photon, i.e., the energy for adding a photon to the photon fluid, is given by the number of photons in the Bose condensate times the repulsive pairwise interaction energy between photons with zero relative momentum. It should be remarked that the fact that the chemical potential is nonvanishing here makes the thermodynamics of this two-dimensional photon system quite different from that of the usual three-dimensional, Planck blackbody photon system.
It should also be remarked that the conventional wisdom which tells us that Bose-Einstein condensation and superfluidity are impossible in 2D bosonic systems does not apply here. To the contrary, we believe that superfluidity of the topological, 2D Kosterlitz-Thouless kind (with algebraic decay of long range order) is possible for the photon fluid . In the same approximation $`N\approx N_0\gg 1`$, Eq. (21) becomes, upon use of the fact that $`\mu \approx N_0V(0)`$,
$$ϵ^{}(p)\approx ϵ(p)+N_0V(p).$$ (26)
This is the single-particle photon energy in the Hartree approximation. Here it is again assumed that $`|H_1|\gg |H_2|`$, i.e., that the interactions between the bosons are sufficiently weak, so as not to deplete the Bose condensate significantly. In the case of the weakly-interacting photon gas inside the Fabry-Perot resonator, since the interactions between the photons are indeed weak, this assumption is a good one. Following Bogoliubov, we now introduce the following canonical transformation in order to diagonalize the quadratic-form Hamiltonian $`H_1`$ in Eq. (19):
$$\alpha _\kappa =u_\kappa a_\kappa +v_\kappa a_{-\kappa }^{\dagger }$$ (27)
$$\alpha _\kappa ^{\dagger }=u_\kappa a_\kappa ^{\dagger }+v_\kappa a_{-\kappa }.$$ (28)
Here $`u_\kappa `$ and $`v_\kappa `$ are two real $`c`$-numbers which must satisfy the condition
$$u_\kappa ^2-v_\kappa ^2=1,$$ (29)
in order to ensure that the Bose commutation relations are preserved for the new creation and annihilation operators $`\alpha _\kappa ^{\dagger }`$ and $`\alpha _\kappa `$ for certain new quasi-particles, i.e., that
$$[\alpha _\kappa ,\alpha _{\kappa ^{\prime }}^{\dagger }]=\delta _{\kappa \kappa ^{\prime }}\text{ and }[\alpha _\kappa ,\alpha _{\kappa ^{\prime }}]=[\alpha _\kappa ^{\dagger },\alpha _{\kappa ^{\prime }}^{\dagger }]=0.$$ (30)
We seek a diagonal form of $`H_1`$ given by
$$H_1=\sum _\kappa ^{\prime }\left[\stackrel{~}{\omega }(\kappa )\left(\alpha _\kappa ^{\dagger }\alpha _\kappa +\frac{1}{2}\right)+\text{constant}\right],$$ (31)
where $`\stackrel{~}{\omega }(\kappa )`$ represents the energy of a quasi-particle of momentum $`\kappa `$. Substituting the new creation and annihilation operators $`\alpha _\kappa ^{\dagger }`$ and $`\alpha _\kappa `$ given by Eq. (28) into Eq. (31), and comparing with the original form of the Hamiltonian $`H_1`$ in Eq. (19), we arrive at the following necessary conditions for diagonalization:
$$\stackrel{~}{\omega }(\kappa )u_\kappa v_\kappa =-\frac{1}{2}N_0V(\kappa )$$ (32)
$$u_\kappa ^2=\frac{1}{2}\left[1+ϵ^{}(\kappa )/\stackrel{~}{\omega }(\kappa )\right]$$ (33)
$$v_\kappa ^2=\frac{1}{2}\left[-1+ϵ^{}(\kappa )/\stackrel{~}{\omega }(\kappa )\right].$$ (34)
Squaring Eq. (32) and substituting from Eqs. (33) and (34), we obtain
$$\stackrel{~}{\omega }(\kappa )^2=ϵ^{}(\kappa )^2-N_0^2V(\kappa )^2\approx ϵ(\kappa )^2+2ϵ(\kappa )N_0V(\kappa ),$$ (35)
where in the last step we have used Eq. (26). Thus the final result is that the Hamiltonian $`H_1`$ in Eq. (31) describes a collection of noninteracting simple harmonic oscillators, i.e., quasi-particles, or elementary excitations of the photon fluid from its ground state. The energy-momentum relation of these quasi-particles is obtained from Eq. (35) upon substitution of $`ϵ(\kappa )=\kappa ^2/2m`$ from Eq. (9):
$$\stackrel{~}{\omega }(\kappa )\approx \left[\frac{\kappa ^2N_0V(\kappa )}{m}+\frac{\kappa ^4}{4m^2}\right]^{1/2},$$ (36)
which we shall call the “Bogoliubov dispersion relation.” This dispersion relation is plotted in Fig. 2, in the special case that $`V(\kappa )\approx V(0)=`$ constant.
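A short numerical sketch of Eq. (36), in dimensionless units (our illustration; all values are arbitrary, not experimental parameters):

```python
# Sketch of the Bogoliubov dispersion relation, Eq. (36), for
# V(kappa) ~ V(0) = const, in units where hbar = m = N0*V(0) = 1.
import numpy as np

def bogoliubov(kappa, m=1.0, N0V=1.0):
    return np.sqrt(kappa**2 * N0V / m + kappa**4 / (4 * m**2))

kappa = np.linspace(1e-3, 5.0, 500)
omega = bogoliubov(kappa)

v_s = np.sqrt(1.0)          # sound speed (N0*V(0)/m)**0.5, cf. Eq. (37)
kappa_c = 2.0 * np.sqrt(1.0)  # transition momentum, cf. Eq. (38)
# phonon-like (omega ~ v_s*kappa) below kappa_c,
# free-particle-like (omega ~ kappa**2/2m) above it:
print(omega[0] / kappa[0], v_s, kappa_c)
```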
(Note that Landau’s roton minimum can also be incorporated into this theory by a suitable choice of the functional form of $`V(\kappa )`$.) For small values of $`\kappa `$ this dispersion relation is linear in $`\kappa `$, indicating that the nature of the elementary excitations here is that of phonons, which in the classical limit of large phonon number leads to sound waves propagating inside the photon fluid at the sound speed
$$v_s=\underset{\kappa \rightarrow 0}{lim}\frac{\stackrel{~}{\omega }(\kappa )}{\kappa }=\left(\frac{N_0V(0)}{m}\right)^{1/2}=\left(\frac{\mu }{m}\right)^{1/2}.$$ (37)
At a transition momentum $`\kappa _c`$ given by
$$\kappa _c=2\left(mN_0V(\kappa _c)\right)^{1/2}$$ (38)
(i.e., when the two terms of Eq. (36) are equal), the linear relation between energy and momentum turns into a quadratic one, indicating that the quasi-particles at large momenta behave essentially like nonrelativistic free particles with an energy of $`\kappa ^2/2m`$. The reciprocal of $`\kappa _c`$ defines a characteristic length scale
$$\lambda _c\equiv 2\pi \hbar /\kappa _c=\pi \hbar /mv_s,$$ (39)
which characterizes the distance scale over which collective effects arising from the pairwise interaction between the photons become important. Thus in the above analysis, we have shown that all the approximations involved in the Bogoliubov theory should be valid ones for the case of the 2D photon fluid inside a nonlinear Fabry-Perot cavity. Hence the Bogoliubov dispersion relation should indeed apply to this fluid; in particular, there should exist sound wave modes of propagation in the photon fluid. As additional evidence for the existence of these modes, we have recently found that the same Bogoliubov dispersion relation emerges from a classical nonlinear optical analysis of this problem , which we shall not reproduce here. The velocity of sound found by the macroscopic, classical nonlinear optical analysis is identical to the one found in Eq. (37) for the velocity of phonons in the photon fluid in the above microscopic analysis, provided that one identifies the energy density of the light inside the cavity with the number of photons in the Bose condensate as follows:
$$E_0^2=8\pi N_0\hbar \omega /V_{cav},$$ (40)
where $`V_{cav}`$, the cavity volume, is also the quantization volume for the electromagnetic field, and provided that one makes use of the known proportionality between the Kerr coefficient $`n_2`$ and the photon-photon interaction potential $`V(0)`$:
$$V(0)=8\pi (\hbar \omega )^2n_2/V_{cav}.$$ (41)
In fact, the entire dispersion relation found by the classical, macroscopic analysis for sound waves associated with fluctuations in the light intensity inside a resonator filled with a self-defocusing Kerr medium turns out to be formally identical to the above Bogoliubov dispersion relation obtained quantum mechanically for the elementary excitations of the photon fluid, in the approximation $`V(\kappa )\approx V(0)=`$ constant. This is a valid approximation, since the pairwise interaction potential between two photons is given by a transverse 2D spatial Dirac delta function, whose strength is proportional to $`n_2`$, provided that the photons propagate paraxially and the nonlinearity is fast. It should be kept in mind that the phenomena of self-focusing and self-defocusing in nonlinear optics can be viewed as arising from pairwise interactions between photons when the light propagation is paraxial and the Kerr nonlinearity is fast .
Since in a quantum description the light inside the resonator is composed of photons, and since these photons as the constituent particles are weakly interacting repulsively with each other through the self-defocusing Kerr nonlinearity to form a photon fluid, this formal identification between the microscopic and macroscopic results for the Bogoliubov relation is a natural one . One possible experiment to see these sound waves is sketched in Fig. 4. The sound wave mode is most simply observed by applying two incident optical fields to the nonlinear cavity: a broad plane wave resonant with the cavity to form the nonlinear background fluid on top of which the sound waves can propagate, and a weaker amplitude-modulated beam which is modulated at the sound frequency in the radio range by an electro-optic modulator, and injected by means of an optical fiber tip at a single point on the entrance face of the Fabry-Perot. The resulting weak time-varying perturbations in the background light induce transversely propagating density waves in the photon fluid, which propagate away from the point of injection like ripples on a pond. This sound wave can be phase-sensitively detected by another fiber tip placed at the exit face of the Fabry-Perot some transverse distance away from the injection point, and its sound wavelength can be measured by scanning this fiber tip transversely across the exit face. The experiment could employ a cavity length $`L`$ of $`2`$ cm and mirrors with reflectivities of $`R=0.997`$ for a cavity finesse $`=1050`$. The optical nonlinearity could be provided by rubidium vapor at $`80^\mathrm{o}`$ C, corresponding to a number density of $`10^{12}`$ rubidium atoms per cubic centimeter. Incident on the cavity could be a circularly-polarized CW laser beam, detuned by around $`600`$ MHz to the red side of a closed two-level transition, for example, the $`|F=2,m_F=+2|F^{}=3,m_F^{}=+3`$ transition of the $`{}_{}{}^{87}\mathrm{Rb}`$ $`D_2`$ line. Thus the Kerr nonlinear coefficient could be that of a pure two-level atomic system virtually excited well off resonance (i.e., with a detuning much larger than the absorption linewidth), which was calculated by Grischkowsky : $$n_2=\pi N_{atom}\mu ^4/\mathrm{}^3\mathrm{\Delta }^3\mathrm{\hspace{0.17em}6}\times 10^6\mathrm{cm}^3/\mathrm{erg}\mathrm{\hspace{0.17em}5}\times 10^8\mathrm{cm}^2/\mathrm{Watt},$$ (42) where $`N_{atom}`$ is the atomic number density of the atomic vapor, $`\mu `$ is the matrix element of the two-level atomic system, and $`\mathrm{\Delta }`$ is the detuning of the laser frequency from the atomic resonance frequency. Thus the $`\mathrm{\Delta }`$ 600 MHz detuning of the laser from the atomic resonance used in the above example would be considerably larger than the Doppler width of 340 MHz of the rubidium vapor, and the residual absorption arising from the tails of the nearby resonance line would give rise to a loss which would be less than or comparable to the loss arising from the mirror transmissions. This extra absorption loss would contribute to a slightly larger effective cavity loss coefficient, but would not otherwise alter the qualitative behavior of the Bogoliubov dispersion relation. The conditions of validity for the microscopic Bogoliubov theory should be well satisfied by these experimental parameters. 
An intracavity intensity of $`40\,\mathrm{W}/\mathrm{cm}^2`$ would result in $`\mathrm{\Delta }n=|n_2|E_0^2\approx 2\times 10^{-6}`$, for a sound speed $`v_s\approx 4\times 10^7\,\mathrm{cm}/\mathrm{s}`$. For this intensity, $`N_0\approx 8\times 10^{11}`$, so that the condition $`N_0\gg 1`$ for the validity of the Bogoliubov theory should be well satisfied. The cavity ring-down time $`\tau _{cav}=2L\mathcal{F}/c\approx 0.14\,\mu \mathrm{s}`$ would be much longer than the mean photon-photon collision time $`\tau _{coll}=(12\omega n_2|E_0|^2)^{-1}\approx 17\,\mathrm{ps}`$, so that a photon fluid should indeed form inside the cavity: there would be approximately 8000 photon-photon collisions within a cavity ring-down time, so that the assumption of thermal equilibrium should be a valid one. It should be noted that the above Bogoliubov theory is not limited to the two-level atomic Kerr nonlinearity, which was chosen only for purposes of illustration. One could replace this two-level nonlinearity with other recent, more promising kinds of nonlinearities, such as that in a four-level system, where absorption could be eliminated by the use of quantum interference while the Kerr nonlinearity is simultaneously enhanced , or such as that due to photon exchange, where the nonlinearity is proportional to $`N_{atom}^2`$ rather than to $`N_{atom}`$ .
## V Discussion
We suggest here that the Bogoliubov form of the dispersion relation, Eq. (36), implies that the photon fluid formed by the repulsive photon-photon interactions in the nonlinear cavity is actually a photon superfluid. This means that a superfluid state of light might actually exist. Although the exact definition of superfluidity is presently still under discussion, especially in light of the question whether the recently discovered atomic Bose-Einstein condensates are superfluids or not , one indication of the existence of a photon superfluid would be the existence of a critical transition from a dissipationless state of superflow, i.e., a laminar flow of the photon fluid past an obstacle below a certain critical velocity, into a turbulent state of flow above this critical velocity, accompanied by energy dissipation associated with the shedding of a von-Karman street of *quantized* vortices past this obstacle. (It is the generation of quantized vortices above this critical velocity which distinguishes the onset of superfluid turbulence from the onset of normal hydrodynamic turbulence.) The physical meaning of the Bogoliubov dispersion relation is that the lowest energy excitations of the system consist of quantized sound waves, or phonon excitations, in a superfluid, whose maximum critical velocity is then given by the sound wave velocity. By inspection of this dispersion relation, a single quantum of any elementary excitation cannot exist with a velocity below that of the sound wave. Hence no excitation of the superfluid is possible at all for any object moving with a velocity slower than the sound wave velocity, according to an argument by Landau . Hence the flow of the superfluid must be dissipationless below this critical velocity. Above a certain critical velocity, dissipation due to vortex shedding is expected from computer simulations based on the Gross-Pitaevskii (or Ginzburg-Landau or nonlinear Schrödinger) equation, which should give an accurate description of this system at the macroscopic level .
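For illustration, a minimal split-step integrator of this equation for flow past a circular obstacle can be sketched as follows (our toy version, not the cited simulations; all parameter values are arbitrary and we set $`\hbar =m=1`$, so the sound speed is $`\sqrt{gn_0}=1`$):

```python
# Minimal split-step sketch of 2D Gross-Pitaevskii / nonlinear-Schrodinger
# superflow past a circular obstacle; vortex shedding is expected once the
# flow speed exceeds a fraction of the sound speed.
import numpy as np

N, Lbox, dt, steps = 256, 100.0, 0.02, 2000
g, n0 = 1.0, 1.0
v = 2 * np.pi * 14 / Lbox            # flow speed ~0.88, quantized so the
                                     # plane wave fits the periodic box
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x)
k = 2 * np.pi * np.fft.fftfreq(N, d=Lbox / N)
KX, KY = np.meshgrid(k, k)

V_obs = 10.0 * ((X**2 + Y**2) < 2.0**2)   # circular obstacle of radius 2
psi = np.sqrt(n0) * np.exp(1j * v * X)    # uniform superflow past it

kin = np.exp(-0.5j * dt * (KX**2 + KY**2))   # kinetic step exp(-i dt k^2/2)
for _ in range(steps):
    psi = np.fft.ifft2(kin * np.fft.fft2(psi))
    psi *= np.exp(-1j * dt * (V_obs + g * np.abs(psi)**2))
# quantized vortices appear as zeros of |psi| with 2*pi phase winding
```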
We propose a follow-up experiment to demonstrate that the sound wave velocity, typically a few thousandths of the vacuum speed of light, is indeed the maximum critical velocity of the fluid, i.e., that this photon fluid exhibits persistent currents in accordance with the Landau argument based on the Bogoliubov dispersion relation. Suppose we shine light at some nonvanishing incidence angle on a Fabry-Perot resonator (i.e., exciting it on some off-axis mode). This light produces a uniform flow field of the photon fluid, which flows inside the resonator in some transverse direction and at a speed determined by the incidence angle. A cylindrical obstacle placed inside the resonator will induce a laminar flow of the superfluid around the cylinder, as long as the flow velocity remains below a certain critical velocity. However, above this critical velocity a turbulent flow will be induced, with the formation of a von-Karman vortex street associated with quantized vortices shed from the boundary of the cylinder . The typical vortex core size is given by the light wavelength divided by the square root of the nonlinear index change; it should therefore be around a few hundred microns, so that this nonlinear optical phenomenon should be readily observable. A possible application is suggested by an analogy with the Meissner effect in superconductors, or the Hess-Fairbank effect in superfluid helium: vortices in an incident light beam would be expelled from the interior of the photon superfluid. This would lead to a useful beam-cleanup effect, in which speckles in a dirty incident laser beam are expelled upon transmission through the nonlinear Fabry-Perot resonator, so that a clean, speckle-free beam emerges.
## Acknowledgments
I thank Jack Boyce for performing the classical calculation and for making the first attempt at the sound wave experiment, and L.M.A. Bettencourt, D.A.R. Dalvit, I.H. Deutsch, J.C. Garrison, D.H. Kobe, D.H. Lee, M.W. Mitchell, J. Perez-Torres, D.S. Rokhsar, D.J. Thouless, E.M. Wright, and W.H. Zurek for helpful discussions. This work was supported by the ONR and by the NSF.
# Digital Communication Using Chaotic Pulse Generators

## I Introduction

Noise-like signals generated by deterministic systems with chaotic dynamics have a high potential for many applications, including communication. The very rich, complex, and flexible structure of such chaotic signals is the result of local instability of post-transient motions in a generator of chaos. This is achieved through specific features of the nonlinear vector field in the phase space of the generator, and not by increasing the design complexity. Even a simple nonlinear circuit with very few off-the-shelf electronic components is capable of generating a very complex set of chaotic signals. The simplicity of chaos generators and the rich structure of chaotic signals are the most attractive features of deterministic chaos and have caused significant interest in the possible utilization of chaos in communication.

Chaotic signals exhibit a broad continuous spectrum and have been studied in connection with spread-spectrum applications . Due to their irregular nature, they can be used to efficiently encode information in a number of ways. Thanks to the deterministic origin of chaotic signals, two coupled chaotic systems can be synchronized to produce identical chaotic oscillations. This provides the key to the recovery of information that is modulated onto a chaotic carrier. A number of chaos-based covert communication schemes have been suggested, but many of these are very sensitive to distortions, filtering, and noise. The negative effect of filtering is primarily due to the extreme sensitivity of nonlinear systems to phase distortions. This limits the use of filtering for noise reduction in chaos-based communications.

One way to avoid this difficulty is to use chaotically timed pulse sequences rather than continuous chaotic waveforms. Each pulse has an identical shape, but the time delay between pulses varies chaotically. Since the information about the state of the chaotic system is contained entirely in the timing between pulses, distortions that affect the pulse shape will not significantly influence the ability of the chaotic pulse generators to synchronize and thus be utilized in communications. The proposed system is similar to other ultra-wide bandwidth impulse radios, which offer a very promising communication platform, especially in severe multi-path environments or where they are required to co-exist with a large number of other radio systems. Chaotically varying the spacing between narrow pulses enhances the spectral characteristics of the system by removing any periodicity from the transmitted signal. Because of the absence of characteristic frequencies, chaotically positioned pulses are difficult for an unauthorized user to observe and detect. Thus one expects that transmission based on chaotic pulse sequences can be designed to have a very low probability of intercept.

In this paper we discuss the design of a self-synchronizing chaos-based impulse communication system, and present the results of a performance analysis in a demonstration setup operating through a model channel with noise, filtering, and attenuation. We consider the case where the encoding of the information signal is based upon the alteration of the time position of pulses in the chaotic train. This encoding method is called Chaotic Pulse Position Modulation.

## II Chaotic Pulse Position Modulation

In this section we describe the method of Chaotic Pulse Position Modulation (CPPM) and basic elements of its hardware implementation.
Consider a chaotic pulse generator which produces the chaotic pulse signal

$$U(t)=\sum_{j=0}^{\infty}w(t-t_j),$$ (1)

where $`w(t-t_j)`$ represents the waveform of a pulse generated at time $`t_j=t_0+\sum_{n=0}^{j}T_n`$, and $`T_n`$ is the time interval between the $`n`$-th and $`(n-1)`$-th pulses. We assume that the sequence of time intervals, $`T_i`$, represents iterations of a chaotic process. For simplicity we will consider the case where chaos is produced by a one-dimensional map $`T_n=F(T_{n-1})`$, where $`F()`$ is a nonlinear function. Some studies on such chaotic pulse generators can be found in .

Using the Chaotic Pulse Position Modulation method, the information is encoded within the chaotic pulse signal by means of additional delays in the generated interpulse intervals, $`T_n`$. As a result, the generated pulse sequence is given by a new map of the form

$$T_n=F(T_{n-1})+d+mS_n,$$ (2)

where $`S_n`$ is the information-bearing signal. Here we will consider only the case of binary information, and therefore $`S_n`$ equals zero or one. The parameter $`m`$ characterizes the amplitude of the modulation. The parameter $`d`$ is a constant time delay which is needed for the practical implementation of our modulation and demodulation method; the role of this parameter will be specified later. In the design of the chaotic pulse generator, the nonlinear function $`F()`$ and the parameters $`d`$ and $`m`$ are selected to guarantee chaotic behavior of the map. The modulated chaotic pulse signal $`U(t)=\sum_{j=0}^{\infty}w(t-t_0-\sum_{n=0}^{j}T_n)`$, where $`T_n`$ is generated by Eq. (2), is the transmitted signal. The duration of each pulse $`w(t)`$ in the pulse train is assumed to be much shorter than the minimal value of the interpulse intervals, $`T_n`$.

To detect the information at the receiver end, the decoder is triggered by the received pulses, $`U(t)`$. The consecutive time intervals $`T_{n-1}`$ and $`T_n`$ are measured, and the information signal is recovered from the chaotic iterations $`T_n`$ with the formula

$$S_n=(T_n-F(T_{n-1})-d)/m.$$ (3)

If the nonlinear function, $`F()`$, and the parameters $`d`$ and $`m`$ in the authorized receiver are the same as in the transmitter, then the encoded information, $`S_n`$, can be easily recovered. When the nonlinear functions are not matched with sufficient precision, a large decoding error results. In other words, since an unauthorized receiver has no information on the spacing between the pulses in the transmitted signal, it cannot determine whether a particular received pulse was delayed, and thus whether $`S_n`$ was “0” or “1”.

Since the chaotic map of the decoder in the authorized receiver is matched to the map of the encoder in the corresponding transmitter, the time of the next arriving pulse can be predicted. In this case the input of the synchronized receiver can be blocked up to the moment of time when the next pulse is expected. The time intervals when the input to a particular receiver is blocked can be utilized by other users, thus providing a multiplexing strategy. Such selectivity helps to improve the performance of the system by reducing the probability of false triggering of the decoder by channel noise.

### A Transmitter

The implementation of the chaotic pulse modulator used in our experiments is illustrated in Fig. 1. The Integrator produces a linearly increasing voltage, $`V(t)=\beta^{-1}(t-t_n)`$, at its output. At the Comparator this voltage is compared with the threshold voltage produced at the output of the nonlinear converter $`F(x)`$.
The threshold level $`F(V_n)`$ is formed by a nonlinear conversion of the voltage $`V_n=V(t_n)`$, which was acquired and saved from the previous iteration using sample-and-hold (S&H) circuits. When the voltage $`V(t)`$ reaches this threshold level, the comparator triggers Pulse Generator I. This happens at the moment of time $`t_{n+1}^{*}=t_n+\beta F(V_n)`$. The generated pulse (Chaotic Clock Signal) causes the Data Generator to update the transmitted information bit. Depending on the information bit $`S_{n+1}`$ being transmitted, the Delay Modulator delays the pulse produced by the Pulse Generator by the time $`d+mS_{n+1}`$. Therefore the delayed pulse is generated at the moment of time $`t_{n+1}=t_n+\beta F(V_n)+d+mS_{n+1}`$. Through the sample-and-hold circuit (S&H) this pulse first resets the threshold to the new iteration value of the chaotic map, $`V(t_{n+1})\to F(V(t_{n+1}))`$, and then resets the integrator output to zero, $`V(t)=0`$. The dynamics of the threshold is determined by the shape of the nonlinear function $`F()`$. The spacing between the $`n`$-th and $`(n+1)`$-th pulses is proportional to the threshold value $`V_n`$, which is generated according to the map

$$T_{n+1}=\beta F(\beta^{-1}T_n)+d+mS_{n+1},$$ (4)

where $`T_n=t_n-t_{n-1}`$ and $`S_n`$ is the binary information signal. In the experimental setup the nonlinear function was built to have the following form:

$$F(x)\equiv \alpha f(x)=\begin{cases}\alpha x & \text{if } x<5\,\mathrm{V},\\ \alpha (10\,\mathrm{V}-x) & \text{if } x\ge 5\,\mathrm{V}.\end{cases}$$ (5)

The selection of the nonlinearity in the form of a piecewise-linear function helps to ensure robust regimes of chaos generation over rather broad ranges of the parameters of the chaotic pulse position modulator. The position-modulated pulses $`w(t-t_j)`$ are shaped in Pulse Generator II. These pulses form the output signal $`U(t)=\sum_{j=0}^{\infty}w(t-t_j)`$, which is transmitted to the receiver.

### B Receiver

When the demodulator is synchronized to the pulse position modulator, then in order to decode a single bit of transmitted information we must determine whether a pulse from the transmitter was or was not delayed relative to its anticipated position. If ideal synchronization is established but the signal is corrupted by noise, the optimal detection scheme operates as follows: integrate the signal over the pulse duration inside the windows where pulses corresponding to “1” and “0” are expected to occur, and decide whether “1” or “0” was received based upon whether the integral over the “1”-window is larger or smaller than that over the “0”-window. In the ideal case of perfect synchronization this detection scheme is the ideal Pulse Position Modulation (PPM) scheme, whose performance is known to be 3 dB worse than that of the BPSK system. Although in the case of perfect synchronization this detection scheme is optimal, according to our numerical simulations its performance quickly degrades when synchronization errors due to channel noise are taken into account. For this reason, and for the sake of design simplicity, we use a different approach to detection.

The demodulator scheme is illustrated in Fig. 2. In the receiver, the Integrator, the S&H circuits, and the nonlinear function block generating the threshold values are reset or triggered by the pulse received from the transmitter rather than by the pulse from the internal feedback loop.
To be more precise, they are triggered when the input signal, $`U(t)`$, from the channel exceeds a certain input threshold. The time difference between the anticipated location of the pulse without modulation, $`t_{n+1}^{*}=t_n+\beta F(V_n)`$, and the actual arrival time $`t_{n+1}`$ translates into the difference between the threshold value $`F(V_n)`$ generated by the nonlinear function and the voltage $`V(t_{n+1})`$ at the Integrator at the moment when the input signal $`U(t)`$ exceeds the input threshold. For each received pulse the difference $`V(t_{n+1})-F(V_n)`$ is computed and used to decide whether or not the pulse was delayed. If this difference is less than the reference value $`\beta^{-1}(d+m/2)`$, the detected data bit $`S_{n+1}`$ is “0”; otherwise it is “1”.

Another important detail of the receiver is the Window Selection block. Once the receiver correctly observes two consecutive pulses, it can predict the earliest moment of time at which it can expect to receive the next pulse. This means that we can block the input to the demodulator circuit until shortly before such a moment. This is done by the Window Select block. In the experiment this circuit opens the receiver input at the time $`t_{n+1}^{*}=t_n+\beta F(V_n)`$ by means of Window Control pulses. The input stays open until the decoder is triggered by the first pulse received. Using such windowing greatly reduces the chance of the receiver being triggered by noise, interference, or pulses belonging to other users.

### C Parameter mismatch limitations

Because synchronization-based chaotic communication schemes rely on the identity of synchronous chaotic oscillations, they are known to be susceptible to the negative effects of parameter mismatches. Here we evaluate how precisely the parameters of our modulator and demodulator have to be tuned in order to ensure errorless communication over a distortion-free channel. Since the information detection in our case is based on measurements of time delays, it is important that the modulator and the demodulator maintain synchronous time reference points. The reference point in the modulator is the front edge of the Chaotic Clock pulse; the reference point in the demodulator is the front edge of the Window Control pulse. Ideally, if the parameters of the modulator and the demodulator were exactly the same and the systems were synchronized, then both reference points would always be at the times $`t_{n+1}^{*}=t_n+\beta F(V_n)`$, and the received pulse would be delayed by the time $`d`$ for $`S_{n+1}=0`$ and by $`d+m`$ for $`S_{n+1}=1`$. In this case, setting the bit separator at the delay $`d+m/2`$ would guarantee errorless detection in a noise-free environment.

In an analog implementation of a chaotic pulse position modulator/demodulator system, the parameters of the circuits are never exactly the same. Therefore the time positions $`t_n^{(M)}`$ and $`t_n^{(D)}`$ of the reference points in the modulator and the demodulator chaotically fluctuate with respect to each other. Due to these fluctuations the position of the received pulse, $`t_n=t_n^{(M)}+d+mS_n`$, is shifted from the arrival time predicted in the demodulator, $`t_n^{(D)}+d+mS_n`$. The errors are caused by the following two factors. First, when the amplitude of the fluctuations of the position shift is larger than $`m/2`$, some delays for “0”s and “1”s overlap and cannot be separated.
Second, when the fluctuations are such that a pulse arrives before the demodulator opens the receiver input ($`t_n<t_n^{(D)}`$), the demodulator skips the pulse, loses synchronization, and cannot recover the information until it re-synchronizes.

In our experimental setup the parameters $`\beta _{M,D}`$ were tuned to be as close as possible, and the nonlinear converters were built using $`1\%`$ components. The fluctuations of the positions of the received pulses with respect to the Window Control pulse were studied experimentally by measuring time delay histograms. Figure 3 presents typical histograms measured for the case of a noise-free channel and for a channel with noise when $`E_b/N_0\approx 18`$ dB. Assuming that the systems were synchronized up to the $`(n-1)`$-st pulse in the train, the fluctuation of the separation between the reference time positions equals

$$\mathrm{\Delta }_n\equiv t_n^{(D)}-t_n^{(M)}=\beta _DF_D(\beta _D^{-1}T_{n-1})-\beta _MF_M(\beta _M^{-1}T_{n-1}),$$ (6)

where the indices $`D`$ and $`M`$ stand for demodulator and modulator, respectively. As discussed above, in order to achieve errorless detection, two conditions should be satisfied for all time intervals in the chaotic pulse train produced by the modulator: the synchronization condition, $`\{\mathrm{\Delta }_n\}_{max}<d`$, and the detection condition, $`\{|\mathrm{\Delta }_n|\}_{max}<m/2`$.

As an example we consider the simplest case where all parameters of the systems are the same except for a mismatch of the parameter $`\alpha `$ in the nonlinear function converter, see Eq. (5). Using Eq. (5) and Eq. (6), the expression for the separation time can be rewritten in the form

$$\mathrm{\Delta }_n=(\alpha _D-\alpha _M)\,\beta f(\beta^{-1}T_{n-1}).$$ (8)

It is easy to show that the largest possible value of the nonlinearity output $`f()`$ which can appear in the chaotic iterations of the map equals 5 V. Note that in the chaotic regime only positive values of $`f()`$ are realized. Therefore, if the conditions

$$\beta (\alpha _D-\alpha _M)<d/5\mathrm{V}\text{ and }2\beta |\alpha _D-\alpha _M|<m/5\mathrm{V}$$ (9)

are satisfied and there is no noise in the channel, then the information can be recovered from the chaotic pulse train without errors.

## III Experimental Setup and Results

In our experiment (Fig. 4) we used a computer with a data acquisition board as the data source, triggered by the chaotic clock from the transmitter. We also used the computer to record the pulse displacement from the demodulator subtractor for every received pulse. This value was used to decode the information for the bit error rate analysis. The model channel circuit consisted of a white Gaussian noise (WGN) generator and a bandpass filter with the pass band 1 kHz–500 kHz. The pulse duration was 500 ns. The distance between the pulses varied chaotically between 12 $`\mu `$s and 25 $`\mu `$s. This chaotic pulse train carried the information flow with an average bit rate of approximately 60 kb/sec. The amplitude of the pulse position modulation, $`m`$, was 2 $`\mu `$s. The spectra of the transmitter output, the noise, and the signal at the receiver are shown in Fig. 5.

We characterize the performance of our system by studying the dependence of the bit error rate on the ratio of the energy per transmitted bit to the spectral density of the noise, $`E_b/N_0`$. This dependence is shown in Fig. 6, where it is compared to the performance of more traditional communication schemes: BPSK, PPM, and non-coherent FSK. We were also able to analytically estimate the performance of our system assuming perfect synchronization.
The corresponding curve is also shown in Fig. 6. At high noise levels the seemingly better performance of the experimental device compared with the analytical estimate is partly due to the crudeness of the analytical model, and partly due to the fact that at high noise levels the noise distribution deviates from Gaussian. In the region of low noise the deviation of the experimental performance from the analytical estimate is probably due to the slight parameter mismatch between the transmitter and the receiver.

Discussing chaos-based communication systems, one may notice a potential disadvantage common to all such schemes. Most traditional schemes are based on periodic signals and systems, where the carrier is generated by a stable system. All such systems are characterized by zero Kolmogorov-Sinai entropy $`h_{KS}`$: in these systems, without any input, the average rate of non-redundant information generation is zero. Chaotic systems have positive $`h_{KS}`$ and continuously generate information. Even in an ideal environment, in order to synchronize two chaotic systems one must transmit an amount of information per unit time that is equal to or larger than $`h_{KS}`$. Although our detection method allows some tolerance in the synchronization precision, the need to transmit extra information to maintain synchronization results in an additional shift of the actual CPPM performance curve relative to the case where ideal synchronization is assumed. Since the numerical and experimental curves in Fig. 6 pass quite near the analytical estimate that assumes synchronization, the degradation caused by the non-zero Kolmogorov-Sinai entropy does not seem to be significant.

Although CPPM performs worse than BPSK, non-coherent FSK, and ideal PPM, we should emphasize that (i) this wide-band system provides a low probability of intercept and a low probability of detection; (ii) it improves privacy while adding little circuit complexity; (iii) to our knowledge, this system performs exceptionally well compared to other chaos-based covert communication schemes; (iv) there exists a multiplexing strategy that can be used with CPPM; and (v) compared to other impulse systems, CPPM does not rely on a periodic clock, and thus can eliminate any trace of periodicity from the spectrum of the transmitted signal. All this makes CPPM attractive for the development of chaos-based cloaked communications.

This research was sponsored in part by the ARO, grant No. DAAG55-98-1-0269, and in part by the U.S. Department of Energy, Office of Basic Energy Sciences, under grant DE-FG03-95ER14516.
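A compact way to see the encode/decode loop of Eqs. (2)–(5) and the mismatch tolerance of Eq. (9) at work is to iterate the interval map directly. The following sketch is illustrative only: the map slope $`\alpha `$, time scale $`\beta `$, delay $`d`$, and modulation depth $`m`$ are assumed values, not the parameters of the experimental circuit.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # Tent nonlinearity of Eq. (5), with the slope alpha factored out
    return x if x < 5.0 else 10.0 - x

alpha, beta = 1.6, 2.0e-6   # map slope, integrator scale [s/V]; illustrative
d, m = 1.0e-6, 2.0e-6       # fixed delay and modulation depth [s]

def encode(bits, T0=10.0e-6):
    """Interpulse intervals via Eq. (4): T_n = beta*alpha*f(T_{n-1}/beta) + d + m*S_n."""
    T, out = T0, []
    for s in bits:
        T = beta * alpha * f(T / beta) + d + m * s
        out.append(T)
    return np.array(out)

def decode(intervals, T0=10.0e-6, alpha_rx=alpha):
    """Recover bits via Eq. (3): S_n = (T_n - beta*alpha_rx*f(T_{n-1}/beta) - d)/m."""
    prev, bits = T0, []
    for Tn in intervals:
        s_est = (Tn - beta * alpha_rx * f(prev / beta) - d) / m
        bits.append(int(s_est > 0.5))
        prev = Tn               # the receiver is clocked by the received pulses
    return np.array(bits)

bits = rng.integers(0, 2, 2000)
T = encode(bits)
# Detection condition of Eq. (9): 2*beta*|alpha_rx - alpha| < m/(5 V),
# i.e. |alpha_rx - alpha| < 0.1 for the values chosen here.
for a_rx in (1.6, 1.55, 1.40):
    ber = np.mean(decode(T, alpha_rx=a_rx) != bits)
    print(f"alpha_rx = {a_rx:.2f}  ->  BER = {ber:.3f}")
```

Consistent with Eq. (9), the simulated bit error rate stays at zero for the matched receiver and for a slope mismatch within the tolerance (0.05), and becomes nonzero once the mismatch (0.20) exceeds it.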
# Redshift Surveys and the Value of Ω

## 1 Introduction

There are multiple methods, both observational and numerical, that combine to constrain cosmological parameters. No single method is able to determine by itself more than one of the main parameters with good accuracy in a model-independent way. We introduce an approach which bypasses the need to measure peculiar velocities or the underlying mass distribution (bias) to probe $`\mathrm{\Omega }_m`$ (the total matter density) by directly examining displacements in redshift space. The method takes advantage of the highly successful Zel'dovich approximation, which relates displacements to peculiar velocities in the weakly nonlinear regime as a function of $`\mathrm{\Omega }`$, together with the fact that peculiar velocities look like displacements in redshift space. This leads to a bias-insensitive method to probe $`\mathrm{\Omega }`$.

We explore a radically new way of estimating the mass density of the Universe from redshift surveys. Unlike POTENT and power-spectral redshift distortion methods, this method is insensitive to bias. Furthermore, it does not depend on the expensive and time-consuming measurement of peculiar velocities. It measures $`\mathrm{\Omega }_m`$ from redshift surveys directly, in contrast to supernova projects, which measure $`q_0`$, or CMB perturbations, which measure complex combinations of model-dependent parameters. This method is model independent and is insensitive to the source of cosmic perturbations, unlike the CMB power-spectral methods, which only work when the primordial power spectrum is known.

The formation of the largest structures in the universe (i.e. galaxies, groups of galaxies, clusters, superclusters and voids of galaxies) is a fascinating problem. Many current questions, ranging from speculations on the physical nature of dark matter to the measurement of angular anisotropies of the microwave background radiation and the determination of the epoch of galaxy formation, join together here. These structures hold information about the very early stages of the evolution of the universe. This expectation is based on the fact that the larger the object, the longer the characteristic time of its evolution. Thus, in terms of characteristic evolution time, the larger the structure the younger it is. Superclusters are dynamically unrelaxed systems, and in studying them one can learn about primordial fluctuations in the universe.

We focus here on the weakly nonlinear or quasi-linear regime, in terms of both dynamics and statistics. The very largest scales are in the linear regime. They are observationally difficult to investigate, but the dynamical questions are simple. On the other hand, the deeply nonlinear regime is difficult to connect with initial conditions. A new generation of redshift surveys (SDSS, 2dF) will open up the possibility of observational study of scales in the quasi-linear regime ($`\sim 100h^{-1}`$ Mpc, where $`h`$ is the Hubble constant in units of $`100`$ km/s/Mpc). They will also allow a much more detailed statistical analysis of structures on 30–100 $`h^{-1}`$ Mpc scales. Studies of the geometry and topology of the largest structures, which traditionally suffered from small databases, will play an important role in discriminating cosmological scenarios. One can characterize the weakly nonlinear regime as probing dense concentrations ($`\delta \rho /\rho \sim 1`$) which are still within reach of the Zel'dovich and other nonlinear approximations.
Very little if any phase mixing or shell crossing has happened on these scales, corresponding roughly to superclusters. We think these scales deserve more attention for two reasons. Observationally, new ground-based redshift surveys are greatly increasing the quantity and quality of data. In terms of theory and analysis, new techniques show that it is possible to make a direct link between this scale and initial conditions. Nearly all structure formation work in cosmology has either focused on very large scales using linear theory or else on galaxy/cluster formation using hydrodynamics. Superclusters provide information not easily accessible to either approach.

## 2 Method

According to the Hubble law, $`v\equiv cz=H_0r`$, the recession velocity $`v`$ of a galaxy, inferred from its redshift, is proportional to its proper distance from the observer, $`r`$; $`H_0`$ is the Hubble constant today. Irregularities gravitationally generate peculiar velocities ($`v_p`$), so that the true relationship is

$$v=H_0r+v_p$$ (1)

where $`v_p`$ is the line-of-sight component of the peculiar motion. Maps of galaxy positions constructed by assuming that velocities are exactly proportional to distance (redshift space) have two principal distortions. The first is generated in dense collapsed structures, where there are very many galaxies at essentially the same distance from the observer, each with a random peculiar motion. This results in a radial stretching of the structure known as a “Finger of God”. The second effect acts on much larger scales (e.g. Kaiser 1987). A large overdensity generates coherent bulk motions in the galaxy distribution as it collapses. Material generally flows towards the center of the structure, i.e. towards the observer for material on the far side of the structure and away from the observer on the near side, so the structure will appear compressed along the line of sight. These effects are large for critical $`\mathrm{\Omega }_m`$ and negligible for small $`\mathrm{\Omega }_m`$. (For an illustration of this effect see http://kusmos.phsx.ukans.edu/~feldman/redshift-distortions.html.)

While redshift-space distortions are a nuisance when one wants to construct accurate maps of the (true) spatial distribution of galaxies, they may lead to a robust determination of $`\mathrm{\Omega }_m`$, the contribution of clustered matter (baryonic or not) to the mass density of the Universe. It has been noticed that superclusters appear to “surround” us in a preferentially concentric pattern. Although the statistics are poor (few superclusters), it is interesting to ask whether this effect could appear in a homogeneous, isotropic Universe. The effect of peculiar velocities breaks the isotropy in redshift-space diagrams, interacting with inhomogeneities differently depending on how their long axis is oriented. Our new method is based on the suggestion in Melott et al. (1998); see also Praton et al. (1997).

The essence of our method can be explained based on the images in Figure 1, which are slices of 3D simulations. The left side is real space, the right side redshift space. The upper row is evolved in an $`\mathrm{\Omega }_m=1`$ cosmology ($`\lambda =0`$), the second row in a critical CDM cosmology with high bias, and the lower row in an $`\mathrm{\Omega }_m=0.1`$ cosmology. All have similar large-scale linear power amplitudes and phases at the moment shown. It is clear that the visual concentric effect is much stronger in the high-$`\mathrm{\Omega }`$ models in redshift space.
The models we use to illustrate the method have the same initial power spectrum (an $`\mathrm{\Omega }_m=1`$ CDM model, normalized to a circle radius of 230 h<sup>-1</sup> Mpc for $`h=0.67`$, $`\sigma _8=1`$). We use the same spectrum also for the low-$`\mathrm{\Omega }_m`$ case to make it clear that the effect is rooted in $`\mathrm{\Omega }_m`$, not the spectrum. All the slices have very nearly the same number of particles. The effect of peculiar motions is to increase the spacing of large-scale structures in the radial direction as compared with real space, as we show quantitatively below. The enhancement is $`\mathrm{\Omega }_m`$-dependent. It is on the basis of this visually striking difference between the redshift-space behavior of low- and high-density models that we propose a statistic that reproduces the eye's sensitivity to differences in pattern.

The essence of large-scale redshift-space effects is a compression and/or expansion effect along the line of sight. It can best be explained (following Melott et al. 1998) using the Zel'dovich approximation (Zel'dovich 1970). This approximation follows the development of structure by relating the final (Eulerian) position of a particle $`𝐫`$ at some time $`t`$ to its initial (Lagrangian) position $`𝐪`$ defined at the primordial epoch when particles were smoothly distributed:

$$𝐫=a(t)𝐱(𝐪,t)=a(t)[𝐪-D_+\nabla _𝐪\mathrm{\Phi }(𝐪)].$$ (2)

In this simple, separable mapping, the displacement field is given by the gradient of the primordial gravitational potential $`\mathrm{\Phi }`$ with respect to the initial coordinates; $`a(t)`$ is the cosmic scale factor. Differentiating this expression leads to

$$𝐕=\frac{d𝐫}{dt}=H𝐫-a(t)\dot{D}_+\nabla _𝐪\mathrm{\Phi }(𝐪)$$ (3)

for the velocity of a fluid element $`𝐕`$, where $`D_+`$ is the linear growth of perturbations as a function of time, usually parameterized by $`f=d\mathrm{log}D_+/d\mathrm{log}a`$. This is now known to reproduce weakly non-linear (i.e. large-scale) features in the distribution of matter very accurately indeed, if implemented in an optimized form known as the Truncated Zel'dovich Approximation (Coles et al. 1993, Melott 1994).

The mapping (3) provides a straightforward explanation of the changed characteristic scale of structures in the redshift direction. Calculating the redshift coordinate exactly and translating it into an effective distance $`d_z`$ gives

$$d_z=\frac{V}{H}=r_3-fa(t)D_+(t)\nabla _3\mathrm{\Phi }(𝐪)=aq_3-(1+f)a(t)D_+(t)\nabla _3\mathrm{\Phi }(𝐪),$$ (4)

in which the 3-axis is in the redshift direction. Thus the displacement term in (4) is multiplied by a factor $`(1+f)`$ compared to (2). The effect of the displacement field in redshift space is to give the observer a “preview” (albeit in only one direction) of a later stage of the clustering hierarchy. (Note that $`\delta `$, the density contrast, does not enter here.)

We construct density contours for the smoothed field, take lines of sight through the smoothed density field, and calculate the rms distance between successive same-direction contour (up)crossings of a high density level; this is denoted $`S_{\parallel}`$.
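The $`(1+f)`$ stretch and the up-crossing spacing measurement can be illustrated with a one-dimensional toy version of this construction; since peculiar velocities leave transverse separations unchanged, the real-space spacing plays the role of the transverse spacing here. The power spectrum, displacement amplitude, and smoothing length below are illustrative assumptions, not the values used in the simulations described in the text.

```python
import numpy as np

def mean_upcross_spacing(rho, thresh, dx):
    """Mean distance between successive upcrossings of a density threshold."""
    up = np.where((rho[:-1] < thresh) & (rho[1:] >= thresh))[0]
    return np.diff(up).mean() * dx

N, L, R_s = 4096, 1000.0, 5.0     # grid points, box size [h^-1 Mpc], smoothing
dx = L / N
q = np.arange(N) * dx             # Lagrangian coordinates
k = 2.0 * np.pi * np.fft.rfftfreq(N, d=dx)

rng = np.random.default_rng(1)
# Gaussian random displacement field psi = -D+ dPhi/dq with a toy spectrum
amp = np.zeros_like(k)
amp[1:] = k[1:] ** -1.0
psi = np.fft.irfft(amp * (rng.standard_normal(k.size)
                          + 1j * rng.standard_normal(k.size)), n=N)
psi *= 5.0 / psi.std()            # ~5 h^-1 Mpc rms displacement, assumed

def smoothed_density(x):
    rho, _ = np.histogram(x % L, bins=N, range=(0.0, L))
    return np.fft.irfft(np.fft.rfft(rho) * np.exp(-0.5 * (k * R_s) ** 2), n=N)

rho_real = smoothed_density(q + psi)            # Eq. (2): real space
thr = np.quantile(rho_real, 1.0 - 1.0 / 8.0)    # filling factor 1/8
S_perp = mean_upcross_spacing(rho_real, thr, dx)

for Om in (0.1, 1.0):
    fgrow = Om ** 0.6                           # f = Omega^0.6
    rho_z = smoothed_density(q + (1.0 + fgrow) * psi)   # Eq. (4)
    thr_z = np.quantile(rho_z, 1.0 - 1.0 / 8.0)
    S_par = mean_upcross_spacing(rho_z, thr_z, dx)
    print(f"Omega_m = {Om:.1f}:  mu = {S_par / S_perp:.2f}")
```

In this toy setting the spacing ratio should exceed unity and grow with $`\mathrm{\Omega }_m`$, mirroring the behavior of the full two-dimensional statistic defined below.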
We also perform a similar calculation for lines in the direction orthogonal to the observer's line of sight, denoted $`S_{\perp}`$. After much experimentation with a large ensemble of simulations, a simple statistic turned out to be nearly optimal: the ratio of the rms spacing in the redshift direction to that in the orthogonal direction, which we call $`\mu `$:

$$\mu =\frac{S_{\parallel}}{S_{\perp}}$$ (5)

In order to examine a density field (the galaxy density in redshift surveys), it is necessary to specify a scale on which the density field will be smoothed. In our case, we want the smoothing scale to include the large-scale dynamics while filtering out small, fully nonlinear dynamics. Then we must choose one contour level to use for the upcrossing interval measurement. Although in general contours corresponding to a given filling factor reduce the bias dependence, a particular level must be chosen. We choose the level corresponding to a filled fraction of 1/8. This fraction has the motivation that dissipationless collapse of a uniform medium will virialize at about this volume fraction. Since with our choice of smoothing we are looking only at just-collapsed structure, this is an appropriate estimate. The use of a small fraction emphasizes the interval between objects, not the size of the objects themselves. We have checked that our measure also has a broad maximum around our choice of filling factor, so that it is not especially sensitive to this choice.

The fundamental object used in this method is the isodensity contour level. Our steps are: (1) make a 2D array consisting of projections of a slice of a 3-D distribution; (2) construct a smoothed density field; (3) make isodensity contour levels corresponding to a set filling factor, i.e. fraction of the available area; (4) measure the distance between upcrossings of this contour in the redshift and transverse (real) directions.

To summarize, we show that the typical origin of bias dependence is absent; we argue that our filling-factor approach eliminates another possible source of bias; and we show results of a simulation which behaves in this way (bias insensitive). As can be seen in Figure 2, the bias dependence is negligible, since the contours change little in the biased model.

In Figure 3 we show the results of the simulations. We plot $`\mu `$ (see Eq. (5)) vs. filling factor ($`ff`$). We added error bars to only one line, but they are of similar magnitude for the others. We see that the critical-$`\mathrm{\Omega }`$ models behave similarly and are significantly different from the low ($`\mathrm{\Omega }=0.3`$) model.

## 3 Conclusions

In the next few years, astronomers will map an appreciable fraction of the Universe with redshift surveys, where recession speed is assumed to be proportional to distance. Gravity induces peculiar motions of galaxies as part of the ongoing process of structure formation. We showed that such motions tend to enhance redshift structures concentric about the observer, and argued that the strength of this effect may be a powerful new probe of the mass density of the Universe.

The principal limitations of this method are the following:

* The smoothing length is specified by the autocorrelation function. Biasing may affect this somewhat (small effect).
* The spacing ratio $`\mu \ne 1`$ for low-$`\mathrm{\Omega }`$ models. This is due to the fact that the “fingers of God” effect introduces noise (small effect).
* The method requires deep, dense, 3-D redshift surveys where the correlation length is much smaller than the survey effective radius.
These surveys are coming (SDSS, 2dF).
* The method measures $`\mathrm{\Omega }_m`$, not $`\mathrm{\Lambda }`$.

The advantages of this method are:

* No need for distance measurements; redshifts are enough.
* No comparison between the density field and the velocity field. Thus we measure $`\mathrm{\Omega }`$ directly, not $`\beta =\mathrm{\Omega }^{0.6}/b`$; that is, there is virtually no bias dependence.
* Bias affects this statistic only through excess smoothing, which is a weak effect and can be controlled easily.

###### Acknowledgements.

I would like to thank the conference organizers for a fascinating and well-run meeting. This work was supported in part by the NSF-EPSCoR program and the GRF at the University of Kansas.
# Low-$`Q^2`$ low-$`x`$ Structure Function Analysis of CCFR data for $`F_2`$

## Abstract

Analyses of structure functions (SFs) from neutrino and muon deep inelastic scattering (DIS) data have shown discrepancies in $`F_2`$ for $`x<0.1`$. A new SF analysis of the CCFR collaboration data, examining regions in $`x`$ down to $`x=0.0015`$ and $`0.4<Q^2<1.0`$ GeV<sup>2</sup>, is presented. Comparisons to corrected charged lepton scattering results for $`F_2`$ from the NMC and E665 experiments are made. Differences between $`\mu `$ and $`\nu `$ scattering allow that the behavior of $`F_2^\mu `$ could be different from $`F_2^\nu `$ as $`Q^2`$ approaches zero. Comparisons between $`F_2^\mu `$ and $`F_2^\nu `$ are made in this limit.

High-energy neutrinos are a unique probe for understanding the parton properties of nucleon structure. Combinations of $`\nu `$ and $`\overline{\nu }`$ DIS data are used to determine the $`F_2`$ and $`xF_3`$ SFs, which determine the valence, sea, and gluon parton distributions in the nucleon . The universality of parton distributions can also be studied by comparing neutrino and charged lepton scattering data. Past measurements have indicated that $`F_2^\nu `$ differs from $`F_2^{e/\mu }`$ by 10-15% in the low-$`x`$ region . These differences are larger than the quoted combined statistical and systematic errors of the measurements and may indicate the need for modifications of the theoretical modeling to include higher-order or new physics contributions.

We present a new analysis of the CCFR collaboration $`\nu `$-$`N`$ DIS data in a previously unexplored kinematic region. In this low-$`x`$ and low-$`Q^2`$ region, the discrepancy between $`F_2^\nu `$ and $`F_2^\mu `$ persists. However, in this kinematic region some differences in $`F_2`$ from neutrino and charged lepton data may result from differences in the properties of weak and electromagnetic interactions. Within the PCAC nature of $`\nu `$-$`N`$ DIS, $`F_2^\nu `$ should approach a constant as $`Q^2`$ approaches zero, while $`F_2^{e/\mu }`$ for charged lepton DIS should approach zero. A determination of this constant is presented.

The $`\nu `$ DIS data were taken in two high-energy, high-statistics runs, FNAL E744 and E770, in the Fermilab Tevatron fixed-target quadrupole triplet beam (QTB) line by the CCFR collaboration. The detector, described in Refs. , consists of a target calorimeter instrumented with both scintillators and drift chambers for measuring the energy of the hadron shower $`E_{HAD}`$ and the $`\mu `$ angle $`\theta _\mu `$, followed by a toroid spectrometer for measuring the $`\mu `$ momentum $`p_\mu `$. There are 950,000 $`\nu _\mu `$ events and 170,000 $`\overline{\nu }_\mu `$ events in the data sample after fiducial-volume cuts, geometric cuts, and kinematic cuts of $`p_\mu >15`$ GeV, $`\theta _\mu <150`$ mr, $`E_{HAD}>10`$ GeV, and $`30<E_\nu <360`$ GeV, to select regions of high efficiency and small systematic errors in reconstruction. In order to extract the SFs from the number of observed $`\nu _\mu `$ and $`\overline{\nu }_\mu `$ events, a determination of the flux was necessary . The cross-sections, multiplied by the flux, are compared to the observed number of $`\nu `$-$`N`$ and $`\overline{\nu }`$-$`N`$ events in each $`x`$ and $`Q^2`$ bin to extract $`F_2(x,Q^2)`$ and $`xF_3(x,Q^2)`$. Muon and hadron energy calibrations from the previous CCFR analysis were used in the present analysis; these calibrations were determined from test beam data collected during the course of the experiment .
Changes in the SF extraction to extend the analysis into the low-$`Q^2`$, low-$`x`$ region include the incorporation of an appropriate model below $`Q^2`$ of 1.35 GeV<sup>2</sup>; in this case we chose the GRV model of PDFs. The data have been corrected using the leading-order Buras-Gaemers model for slow rescaling , with a charm mass of 1.3 GeV, and for the difference $`xF_3^\nu -xF_3^{\overline{\nu }}`$. In addition, corrections for radiative effects , non-isoscalarity of the Fe target, and the mass of the $`W`$-boson propagator were applied. Due to the systematic uncertainty in the model, the radiative correction error dominates in the lowest $`x`$ bins. Other significant systematics across the entire kinematic region include the value of $`R`$, which comes from a global fit to the world's measurements .

The SF $`F_2`$ from $`\nu `$ DIS on iron can be compared to $`F_2`$ from charged lepton DIS on isoscalar targets. To make this comparison, two corrections must be made to the charged lepton data. First, for deuterium data, a heavy nuclear target correction must be made to convert $`F_2^{\mathrm{}D}`$ to $`F_2^{\mathrm{}Fe}`$ . Second, a correction was made to account for the different quark charges involved in the charged lepton DIS interactions . The errors on the nuclear and charge corrections are small compared to the statistical and systematic errors on both the CCFR and NMC data.

The corrected SF $`F_2`$ from the $`\mu `$ DIS experiments NMC and E665, along with the CCFR results for the lowest $`x`$-bins, is shown in Fig. 1. The new analysis allows comparison to the E665 data, which lie in the low-$`x`$, low-$`Q^2`$ region. Error bars for the CCFR and E665 data are large in the lowest $`x`$-bin, $`x=0.0015`$. However, in the next $`x`$-bin, $`x=0.0045`$, there is clearly as much as a 20% discrepancy between the NMC $`F_2^\mu `$ and the CCFR $`F_2^\nu `$, and an approximately 10% discrepancy between CCFR and E665. As the value of $`x`$ increases, the discrepancy decreases; there is agreement between CCFR and the charged lepton experiments above $`x=0.1`$. The discrepancy between CCFR and NMC at low $`x`$ is outside the experimental systematic errors quoted by the groups.

Several suggestions for an explanation have been put forward. One suggestion , that the discrepancy can be entirely explained by a large strange sea, is excluded by the CCFR dimuon analysis, which directly measures the strange sea . Another is that the strange sea may not be the same as the anti-strange sea distribution; data from both NMC and CCFR do not support this possibility . Another possibility is that the heavy nuclear target correction may be different between neutrinos and charged leptons. The heavy target corrections used in this paper are determined by NMC for charged lepton-nucleon DIS data and applied to NMC and E665 only; no charged lepton correction is applied to the $`\nu `$ data. Another proposed possibility would have a large symmetry violation in the sea quark distributions , but recently this model has been ruled out by the CDF $`W`$ charge asymmetry measurements . Finally, in the low-$`x`$ and low-$`Q^2`$ region, some of the discrepancy may be accounted for by the differences in the behavior of $`F_2`$ as $`Q^2`$ approaches zero, although this can only address the $`x<0.0175`$ region.

In charged lepton DIS, the SF $`F_2`$ is constrained by gauge invariance to vanish linearly with $`Q^2`$ at $`Q^2=0`$. Donnachie and Landshoff predict that in the low-$`Q^2`$ region, $`F_2^\mu `$ will follow the form $`C\left(\frac{Q^2}{Q^2+A^2}\right)`$.
However, in the case of neutrino DIS, the PCAC nature of the weak interaction contributes a nonzero component to $`F_2`$ as $`Q^2`$ approaches zero. Donnachie and Landshoff predict that $`F_2^\nu `$ should follow a form with a non-zero contribution at $`Q^2=0`$: $`\frac{C}{2}\left(\frac{Q^2}{Q^2+A^2}+\frac{Q^2+D}{Q^2+B^2}\right)`$. Using NMC data, we fit the form predicted for $`e/\mu `$ DIS, extracting the parameter A. Inserting this value of A into the form predicted for $`\nu `$ DIS, we fit the CCFR data to extract the parameters B, C, and D, and determine the value of $`F_2`$ at $`Q^2=0`$. Only data below $`Q^2=1.35`$ GeV<sup>2</sup> are used in the fits. The CCFR $`x`$-bins having enough data for a good fit in this $`Q^2`$ region are $`x=0.0045`$, $`x=0.0080`$, $`x=0.0125`$, and $`x=0.0175`$. Table 1 shows the results of the fits. The values of $`F_2`$ at $`Q^2=0`$ in the three highest $`x`$-bins are statistically significant and in agreement with each other; the lowest $`x`$-bin is consistent with the other results.

In summary, a comparison of $`F_2`$ from $`\nu `$ DIS to that from $`\mu `$ DIS continues to show good agreement above $`x=0.1`$, but a difference at smaller $`x`$ that grows to 20% at $`x=0.0045`$. Experimental systematic differences between the two experiments and improved theoretical analyses of massive charm production in both neutrino and muon scattering are presently being investigated as possible sources of this discrepancy. Some of this low-$`x`$ discrepancy may be explained by the different behavior of $`F_2`$ from $`\nu `$ DIS and from $`e/\mu `$ DIS at $`Q^2=0`$: the CCFR $`F_2^\nu `$ data appear to approach a non-zero constant at $`Q^2=0`$.
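The two-step fit just described is straightforward to prototype. The sketch below runs on synthetic stand-in points (the published NMC and CCFR values, with their errors, would replace them in practice); the parameter values and the error model are illustrative assumptions, and SciPy's `curve_fit` is used for the least-squares fits.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def f2_mu(Q2, C, A):
    # Donnachie-Landshoff form for charged leptons: vanishes at Q2 = 0
    return C * Q2 / (Q2 + A**2)

def f2_nu(Q2, B, C, D, A):
    # Neutrino form: nonzero at Q2 = 0 because of the PCAC term
    return 0.5 * C * (Q2 / (Q2 + A**2) + (Q2 + D) / (Q2 + B**2))

# Synthetic stand-in data for one x-bin, restricted to Q2 < 1.35 GeV^2
Q2 = np.linspace(0.2, 1.3, 12)
C0, A0, B0, D0 = 0.30, 0.80, 0.60, 0.15          # assumed "true" values
y_mu = f2_mu(Q2, C0, A0) * (1 + 0.03 * rng.standard_normal(Q2.size))
y_nu = f2_nu(Q2, B0, C0, D0, A0) * (1 + 0.05 * rng.standard_normal(Q2.size))

# Step 1: fit the charged-lepton form to the "NMC" points to extract A
(C_mu, A_fit), _ = curve_fit(f2_mu, Q2, y_mu, p0=[0.3, 0.7])

# Step 2: fix A and fit B, C, D to the "CCFR" points
(B, C, D), _ = curve_fit(lambda q, B, C, D: f2_nu(q, B, C, D, A_fit),
                         Q2, y_nu, p0=[0.5, 0.3, 0.1])

print(f"F2_nu(Q2=0) = C*D/(2*B^2) = {C * D / (2.0 * B**2):.3f}")
```

The printed value is the PCAC constant of interest, since setting $`Q^2=0`$ in the neutrino form leaves $`F_2^\nu (0)=CD/2B^2`$.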
# Correlations derived from Modern Nucleon-Nucleon Potentials

## I Introduction

The microscopic theory of nuclear structure based on realistic nucleon-nucleon (NN) interactions is a very demanding subject because it requires the description of a strongly correlated many-fermion system. Attempts to determine e.g. the energy of nuclei from a realistic NN interaction by using the mean-field or Hartree-Fock approximation fail badly: such attempts typically yield unbound nuclei. The strong short-range and tensor components of a realistic NN interaction induce correlations into the many-body wavefunction of nuclear systems. Many attempts have been made to measure these correlations in detail. As an example of such measurements we mention the exclusive $`(e,e^{\prime}NN)`$ reactions, which have been made possible by modern electron accelerators. The hope is that the detailed analysis of such experiments yields information about the correlated wavefunction of the nucleon pair absorbing the virtual photon. This could be a very valuable test for the model of the NN interaction producing these correlations.

In recent years, a new generation of realistic NN potentials has been developed which produces very accurate fits to the proton-proton and proton-neutron (pn) scattering phase shifts. Since these fits are based on the same phase shift analysis by the Nijmegen group and yield a value for the $`\chi ^2`$/datum very close to one, these various potentials could be called phase-shift equivalent NN interactions. This means that the on-shell matrix elements of the transition matrix $`T`$ are essentially identical. This, however, does not imply that the underlying potentials or the effective interaction between off-shell nucleons moving inside a nucleus are the same. Indeed, it has been demonstrated that these phase-shift equivalent potentials yield different results even for the deuteron. Of course all of them reproduce the same empirical binding energy and other observables, because these are part of the observables to which the interaction has been fitted. However, the various contributions to the total energy, the kinetic energy and the potential energy in the $`{}^{3}S_{1}`$ and $`{}^{3}D_{1}`$ partial waves, are quite different, indicating that the two-body wavefunctions must also be different.

It is one of the aims of this study to explore whether similar differences can be observed in calculating the energy of nuclear matter. These energies are calculated using the Brueckner-Hartree-Fock (BHF) approximation. The BHF approach for nuclear matter assumes a model wavefunction of a free Fermi gas, occupying plane wave states up to the Fermi momentum. The effects of correlations are taken into account in terms of the Brueckner $`G`$-matrix. The BHF approximation does not give direct access to quantities like the kinetic or the potential energy. However, in the next section we will illustrate how the Hellmann-Feynman theorem can be used to calculate these quantities and also the expectation value of the $`\pi `$ exchange evaluated for the correlated wave function. The results for various contributions to the binding energy, as well as the wave functions of correlated NN pairs in nuclear matter, will be presented in section 3. The last section contains a summary and conclusions.
## II Correlations in Brueckner-Hartree-Fock

The central equation of the BHF approximation is the Bethe-Goldstone equation, which defines an effective interaction $`G`$ for two nucleons in nuclear matter occupying the plane wave states $`i`$ and $`j`$ by

$$G|ij>=V|ij>+V\frac{Q}{ϵ_i+ϵ_j-H_0}G|ij>.$$ (1)

Here and in the following, $`V`$ stands for the bare NN interaction, $`Q`$ denotes the Pauli operator, which prevents scattering of the interacting nucleons into intermediate states with momenta below the Fermi momentum $`k_F`$, $`H_0`$ defines the spectrum of intermediate two-particle states, and the BHF single-particle energies are defined by

$$ϵ_i=\frac{\hbar^2k_i^2}{2m}+\int_0^{k_F}d^3k_j<ij|G|ij>,$$ (2)

as the sum of the kinetic energy of a free nucleon with mass $`m`$ and momentum $`k_i`$ and the potential energy. The single-particle potential corresponds to the Hartree-Fock approximation, but calculated in terms of the effective interaction $`G`$ rather than the bare interaction $`V`$. The total energy of the system is calculated in a similar way, containing the kinetic energy per nucleon of a free Fermi gas,

$$\frac{T_{FG}}{A}=\frac{3}{5}\frac{\hbar^2k_F^2}{2m},$$ (3)

and the potential energy calculated in the Hartree-Fock approximation, replacing $`V`$ by the effective interaction $`G`$ (for a more detailed description see e.g. ). This means that the BHF approach considers a model wave function which is just the uncorrelated wave function of a free Fermi gas, and all information about correlations is hidden in the effective interaction $`G`$. Since this effective interaction is constructed such that $`G`$ applied to the uncorrelated two-body wave function yields the same result as the bare interaction $`V`$ acting on the correlated wave function,

$$G|ij>=V|ij>_{\text{corr.}},$$ (4)

the comparison of this equation with (1) allows the definition of the correlated two-nucleon wave function as

$$|ij>_{\text{corr.}}=|ij>+\frac{Q}{ϵ_i+ϵ_j-H_0}G|ij>.$$ (5)

This representation demonstrates that the correlated wave function contains the uncorrelated one plus the so-called defect function, which in this approach should drop to zero for relative distances between the two nucleons larger than the healing distance.

The BHF approach yields the total energy of the system, including the effects of correlations. Since, however, it does not provide the correlated many-body wave function, one does not obtain any information about e.g. the expectation value of the kinetic energy in this correlated many-body state. To obtain such information one can use the Hellmann-Feynman theorem, which may be formulated as follows. Assume that one splits the total Hamiltonian into

$$H=H_0+\mathrm{\Delta }V$$ (6)

and defines a Hamiltonian depending on a parameter $`\lambda `$ by

$$H(\lambda )=H_0+\lambda \mathrm{\Delta }V.$$ (7)

If $`E_\lambda `$ denotes the eigenvalue of

$$H(\lambda )|\mathrm{\Psi }_\lambda >=E_\lambda |\mathrm{\Psi }_\lambda >,$$ (8)

the expectation value of $`\mathrm{\Delta }V`$ calculated for the eigenstates of the original Hamiltonian $`H=H(1)`$ is given by

$$<\mathrm{\Psi }|\mathrm{\Delta }V|\mathrm{\Psi }>=\frac{\partial E_\lambda }{\partial \lambda }\Big|_{\lambda =1}.$$ (9)

The BHF approximation can be used to evaluate the energies $`E_\lambda `$, which then also yields the expectation value $`<\mathrm{\Psi }|\mathrm{\Delta }V|\mathrm{\Psi }>`$ via eq. (9).
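As a small self-contained illustration of eq. (9), entirely apart from the BHF machinery itself, the sketch below checks the theorem on a random-matrix Hamiltonian: the finite-difference derivative of the ground-state energy with respect to $`\lambda `$ reproduces the direct expectation value of $`\mathrm{\Delta }V`$ in the exact ground state.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_symmetric(n):
    a = rng.standard_normal((n, n))
    return 0.5 * (a + a.T)

H0, dV = random_symmetric(20), random_symmetric(20)

def ground_energy(lam):
    # Lowest eigenvalue of H(lambda) = H0 + lambda * dV, cf. eqs. (7)-(8)
    return np.linalg.eigvalsh(H0 + lam * dV)[0]

# Hellmann-Feynman estimate of <Psi|dV|Psi> via eq. (9)
h = 1.0e-5
hf = (ground_energy(1.0 + h) - ground_energy(1.0 - h)) / (2.0 * h)

# Direct expectation value with the ground state of the full H = H(1)
w, v = np.linalg.eigh(H0 + dV)
direct = v[:, 0] @ dV @ v[:, 0]
print(f"Hellmann-Feynman: {hf:.6f}   direct: {direct:.6f}")
```

In the nuclear-matter application, the two eigenvalue calls are replaced by BHF energy calculations with the corresponding term of the Hamiltonian scaled by $`\lambda `$; this is how the expectation values discussed in the next section are obtained.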
In the present work we apply the Hellmann-Feynman theorem to determine the expectation values of the kinetic energy and of the one-pion-exchange term $`\mathrm{\Delta }V=V_\pi `$ contained in the different interactions.

## III Results and discussion

The main aim of the work presented here is to investigate differences in nuclear structure calculations originating from four different realistic NN interactions which are phase-shift equivalent. These four interactions are the so-called charge-dependent Bonn potential (CDBonn), the Argonne V18 (ArV18), and versions I (Nijm1) and II (Nijm2) of the Nijmegen interaction. All these models for the NN interaction include a one-pion-exchange (OPE) term, using essentially the same $`\pi NN`$ coupling constant, and account for the difference between the masses of the charged ($`\pi _\pm `$) and neutral ($`\pi _0`$) pions. However, even this long-range part of the NN interaction, which is believed to be well understood, is treated quite differently in these models. The Nijmegen and Argonne V18 potentials use the local approximation, while the pion contribution to the CDBonn potential is derived in a relativistic framework assuming pseudoscalar coupling. It has recently been shown that the non-localities included in the relativistic description of the CDBonn potential tend to lead to smaller D-state probabilities in the deuteron.

The description of the short-range part is also different in these models. The NN potential Nijm2 is a purely local potential in the sense that it uses the local form of the OPE potential for the long-range part and parameterizes the contributions of medium- and short-range distances in terms of local functions (depending only on the relative displacement between the two interacting nucleons) multiplied by a set of spin-isospin operators. The same is true for the Argonne $`V_{18}`$ potential . The NN potential denoted by Nijm1 also uses the local form of OPE, but it includes a $`𝐩^\mathrm{𝟐}`$ term in the medium- and short-range central force (see Eq. (13) of Ref. ), which may be interpreted as a non-local contribution to the central force. The CDBonn potential is derived in the framework of relativistic meson field theory. It is calculated in momentum space and contains non-local terms in the short-range as well as the long-range part, including the pion-exchange contribution.

First differences in the predictions of nuclear properties obtained from these interactions are displayed in table I, which contains various expectation values calculated for nuclear matter at the empirical saturation density, corresponding to a Fermi momentum $`k_F`$ of 1.36 fm<sup>-1</sup>. The most striking indication of the importance of nuclear correlations beyond the mean-field approximation may be obtained from the comparison of the energies per nucleon calculated in the mean-field or Hartree-Fock (HF) approximation: all of them are positive and therefore far from the empirical value of -16 MeV. Only after the inclusion of NN correlations in the BHF approximation are results obtained which are close to experiment. While the HF energies range from 4.6 MeV in the case of CDBonn to 36.9 MeV for Nijm2, rather similar results are obtained in the BHF approximation. This demonstrates that the effect of correlations is quite different for the different interactions considered. However, it is worth noting that all these modern interactions are much “softer” than, e.g.,
the old Reid soft-core potential, in the sense that the HF result obtained for the Reid potential (176 MeV) is much more repulsive. Another measure of the correlations is the enhancement of the kinetic energy calculated for the correlated wave function as compared to the mean-field result, which is identical to $`T_{FG}`$, the energy per particle of the free Fermi gas. At the empirical density this value of $`T_{FG}`$ is 23 MeV per nucleon. One finds that correlations enhance this value by a factor which ranges from 1.57 in the case of CDBonn to 2.09 for Nijm2. It is remarkable that the effects of correlations, measured in terms of the enhancement of the kinetic energy or of the difference between the HF and BHF energies, are significantly smaller for the interactions CDBonn and Nijm1, which contain non-local terms.

Table I also lists the expectation value of the pion-exchange contribution $`V_\pi `$ to the two-body interaction. Here one should note that the expectation value of $`V_\pi `$ calculated in the HF approximation is about 15 MeV, almost independent of the interaction considered. It is thus repulsive and completely due to the Fock exchange term. If, however, the expectation value of $`V_\pi `$ is evaluated for the correlated wave function, one obtains rather attractive contributions, ranging from -22.30 MeV per nucleon (CDBonn) to -40.35 MeV (ArV18). This expectation value is correlated with the strength of the tensor force, or the D-state probability $`P_D`$ calculated for the deuteron (see table I as well). Interactions with a larger $`P_D`$, like ArV18, yield more attractive values of $`<V_\pi >`$. For further support of this argument we also give the results for three different versions of the charge-independent Bonn potentials A, B and C, defined in . All this demonstrates that pionic and tensor correlations are very important for describing the binding properties of nuclei. In fact, the gain in binding energy due to correlations from $`V_\pi `$ alone is almost sufficient to explain the difference between the HF and BHF energies.

Until now we have discussed results for nuclear matter at one density only. The values of the kinetic energy, $`<T>`$, and of $`<V_\pi >`$ are displayed for various densities in Fig. 1. One finds that the ratio of the kinetic energy calculated for the correlated wave function, $`<T>`$, to the energy of the free Fermi gas, $`<T_{FG}>`$, decreases as a function of density. This plot furthermore shows that the results for the different interactions can be separated into two groups: the local interactions, ArV18 and Nijm2, yield larger kinetic energies than CDBonn and Nijm1, which contain nonlocal terms. The lower part of Fig. 1 shows that the pionic contribution to the total energy is quite different for the various interactions. It is strongest for ArV18, getting more attractive for larger densities. The pionic contribution obtained from the other potentials is weaker and does not exhibit this increase at high densities. This may indicate that the enhancement of pionic correlations, which has been discussed in the literature as an indication of pion condensation, is a feature which may not be reproduced by realistic interactions other than the Argonne potentials.

A different point of view on nuclear correlations may be obtained from inspecting the relative wave functions for a correlated pair $`|ij>_{\text{corr.}}`$ defined in (5).
Results for such correlated wave functions for a pair of nucleons in nuclear matter at empirical saturation density are displayed in Figs. 2 and 3. As an example we consider wave functions which “heal” at larger relative distances to an uncorrelated two-nucleon wave function with momentum $`q`$ = 0.96 fm<sup>-1</sup>, calculated at a corresponding average value for the starting energy. Fig. 2 shows relative wave functions for the partial wave $`{}_{}{}^{1}S_{0}^{}`$. One observes the typical features: a reduction of the amplitude as compared to the uncorrelated wave function for relative distances smaller than 0.5 fm, reflecting the repulsive core of the NN interaction; an enhancement for distances between about 0.7 fm and 1.7 fm, which is due to the attractive components at medium range; and the healing to the uncorrelated wave function at large $`r`$. One finds that the reduction at short distances is much weaker for the interactions CDBonn and Nijm1 than for the other two. This is in agreement with the discussion of the kinetic energies (see Fig. 1) and the difference between HF and BHF energies (see table I). The nonlocal interactions CDBonn and Nijm1 are able to fit the NN scattering phase shifts with a softer central core than the local interactions. Very similar features are also observed in the $`{}_{}{}^{3}S_{1}^{}`$ partial wave, displayed in the left half of Fig. 3. For the $`{}_{}{}^{3}D_{1}^{}`$ partial wave, shown in the right part of Fig. 3, one observes a different behavior: all NN interactions yield an enhancement of the correlated wave function around $`r=1`$ fm. This enhancement is due to the tensor correlations, which couple the partial waves $`{}_{}{}^{3}S_{1}^{}`$ and $`{}_{}{}^{3}D_{1}^{}`$. The enhancement is stronger for the interactions ArV18, Nijm1 and Nijm2 than for the CDBonn potential. Note that the former potentials contain a purely nonrelativistic, local one-pion-exchange term, while the CDBonn contains a relativistic, nonlocal pion-exchange contribution. This behavior in the coupled $`{}_{}{}^{3}S_{1}^{}`$ and $`{}_{}{}^{3}D_{1}^{}`$ waves can also be observed in the corresponding wave functions for the deuteron, plotted in Fig. 4.

## IV Conclusions

Four modern NN interactions, the charge-dependent Bonn potential (CDBonn), the Argonne V18 (ArV18) and two versions of the Nijmegen potential (Nijm1 and Nijm2), which all give an excellent fit to NN scattering phase shifts, exhibit significant differences in calculated NN correlation functions and other observables in nuclear matter. Two of these interactions, CDBonn and Nijm1, contain nonlocal terms. These two interactions are considerably softer than the other interactions. This conclusion can be derived from three different observations: the Hartree-Fock energies are less repulsive, the kinetic energies calculated with the correlated wave functions are smaller, and the correlated wave functions in relative $`S`$ states are less suppressed at small relative distances. The interactions also differ quite significantly in the pionic or tensor correlations they induce. This is indicated to some extent by the deuteron wave function, in particular by the D-state probability. These differences, however, are further enhanced in the nuclear wave functions, leading to drastic differences in the pionic contribution to the nuclear binding energy. The Argonne potential in particular yields a large pionic contribution, which increases with density. This importance of the pionic correlations is not observed for the other interactions. 
It would be of great interest to study whether the differences between the correlations predicted by these interactions can be observed in experiments like the exclusive $`(e,e^{\prime }NN)`$ reactions, in order to discriminate between the various models for the NN interaction. This work was supported in part by the SFB 382 of the Deutsche Forschungsgemeinschaft, the DGICYT (Spain) Grant PB95-1249 and the Program SGR98-11 from Generalitat de Catalunya.
# $`K`$-matrices for non-abelian quantum Hall states

Eddy Ardonne<sup>1</sup>, Peter Bouwknegt<sup>2</sup>, Sathya Guruswamy<sup>1</sup> and Kareljan Schoutens<sup>1</sup> <sup>1</sup> Institute for Theoretical Physics University of Amsterdam Valckenierstraat 65 1018 XE Amsterdam, THE NETHERLANDS <sup>2</sup> Department of Physics and Mathematical Physics University of Adelaide Adelaide, SA 5005, AUSTRALIA

Abstract Two fundamental aspects of so-called non-abelian quantum Hall states (the $`q`$-pfaffian states and more general ones) are a (generalized) pairing of the participating electrons and the non-abelian statistics of the quasi-hole excitations. In this paper, we show that these two aspects are linked by a duality relation, which can be made manifest by considering the $`K`$-matrices that describe the exclusion statistics of the fundamental excitations in these systems.

ITFA-99-18 ADP-99-30/M85 cond-mat/9908285 August 1999

1. Introduction

In the description of low-energy excitations over abelian fractional quantum Hall (fqH) states, an important role is played by the so-called $`K`$-matrices, which characterize the topological order of the fqH state (see for a review). These $`K`$-matrices act as parameters in Landau-Ginzburg-Chern-Simons (LGCS) field theories for bulk excitations and in the chiral Conformal Field Theories (CFT) that describe excitations at the edge. At the same time, the $`K`$-matrices provide the parameters for the fractional exclusion statistics (in the sense of Haldane ) of the edge excitations over the fqH states. For a simple example of this, consider the Laughlin fqH state at filling fraction $`\nu =\frac{1}{m}`$, with $`K`$-matrix equal to the number $`m`$. Following the analysis in , one finds that the parameters $`g_e`$ and $`g_\varphi `$ that characterize the exclusion statistics of the edge electrons (of charge $`Q=-e`$) and edge quasi-holes ($`Q=+\frac{e}{m}`$), respectively, are given by $$g_e=𝐊=m,g_\varphi =𝐊^{-1}=\frac{1}{m}.$$ (1.1) In this paper, we shall denote these and similar parameters by $`𝐊_e`$ and $`𝐊_\varphi `$, respectively. The $`(e,\varphi )`$ basis for edge excitations is natural in view of the following statements about duality and completeness. The particle-hole duality between the edge electron and quasi-hole excitations is expressed through the relation $`𝐊_\varphi =𝐊_e^{-1}`$ and through the absence of mutual exclusion statistics between the two. It leads to the following relation between the 1-particle distribution functions $`n_e(ϵ)`$ and $`n_\varphi (ϵ)`$ $$mn_e(ϵ)=1-\frac{1}{m}n_\varphi (-\frac{ϵ}{m}).$$ (1.2) In physical terms, the duality implies that the absence of edge electrons with energies $`ϵ<0`$ is equivalent to the presence of edge quasi-holes with positive energies, and vice versa. By the completeness of the $`(e,\varphi )`$ basis we mean that the collection of all multi-$`e`$, multi-$`\varphi `$ states spans a basis of the chiral Hilbert space of all edge excitations. In mathematical terms, this is expressed by a formula that expresses the partition sum for the edge excitations as a so-called Universal Chiral Partition Function (UCPF) (see, e.g., and references therein) based on the matrix $$𝐊_e\oplus 𝐊_\varphi =\left(\begin{array}{cc}m& 0\\ 0& \frac{1}{m}\end{array}\right).$$ (1.3) We refer to for an extensive discussion of these results, and to for an extension to (abelian) fqH states with more general $`n\times n`$ $`K`$-matrices. 
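As an illustration, the duality relation (1.2), with the minus signs as reconstructed above, can be checked numerically. The sketch below assumes Wu's closed form of the single-species distribution function for exclusion statistics $`g`$ (an input taken from the exclusion-statistics literature, not from the text above): $`n(ϵ)=1/(w+g)`$ with $`w^g(1+w)^{1-g}=e^{\beta (ϵ-\mu )}`$, together with $`\mu _\varphi =-\mu _e/m`$.

```python
# Sketch: numerical check of m*n_e(eps) = 1 - (1/m)*n_phi(-eps/m) for the
# Laughlin case g_e = m, g_phi = 1/m, using Wu's distribution (assumed form).
import numpy as np
from scipy.optimize import brentq

def n_g(eps, g, beta=1.0, mu=0.0):
    """Single-species distribution for exclusion statistics g (Wu's form)."""
    log_rhs = beta * (eps - mu)
    f = lambda w: g * np.log(w) + (1 - g) * np.log(1 + w) - log_rhs
    w = brentq(f, 1e-12, 1e12)   # f is -inf at w->0+ and +inf at w->inf
    return 1.0 / (w + g)

m = 3.0
for eps in [-2.0, -0.5, 0.0, 0.7, 2.5]:
    lhs = m * n_g(eps, m)                      # electrons, with mu_e = 0
    rhs = 1.0 - n_g(-eps / m, 1.0 / m) / m     # holes, mu_phi = -mu_e/m = 0
    print(f"eps={eps:+.2f}:  {lhs:.6f}  vs  {rhs:.6f}")
```

The two columns agree to machine precision, which is precisely the statement that the absence of electrons below $`ϵ=0`$ is equivalent to the presence of quasi-holes at positive energies.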
Turning our attention to non-abelian quantum Hall states, we observe that the chiral CFTs for the edge excitations are not free boson theories. This implies that for these states the notion of a $`K`$-matrix needs to be generalized. In this paper, we shall show that the exclusion statistics of the fundamental edge ‘quasi-hole’ and ‘electron’ excitations over non-abelian quantum Hall states give rise to matrices that are closely analogous to the $`K`$-matrices in the abelian case, and we therefore refer to these matrices as the $`K`$-matrices for non-abelian quantum Hall states. We present results for the $`q`$-pfaffian state , for the so-called parafermionic quantum Hall states and for the non-abelian spin-singlet states recently proposed in . A more detailed account, including explicit derivations of the results presented here, will be given elsewhere .

2. Pfaffian quantum Hall states for spinless electrons

The so-called $`q`$-pfaffian quantum Hall states, at filling fraction $`\nu =\frac{1}{q}`$ were proposed in 1991 by Moore and Read and have been studied in considerable detail . The charged spectrum contains fundamental quasi-holes of charge $`\frac{e}{2q}`$ and electron-type excitations with charge $`-e`$ and fermionic braid statistics. The edge CFT can be written in terms of a single chiral boson and a real (Majorana) fermion, leading to a central charge $`c=\frac{3}{2}`$. The exclusion statistics of the edge quasi-holes were studied in . In the second reference, it was found that the thermodynamics of the edge quasi-hole can be described by the following equations $$\frac{\lambda _0-1}{\lambda _0}\lambda _0\lambda ^{-\frac{1}{2}}=1,\frac{\lambda -1}{\lambda }\lambda _0^{-\frac{1}{2}}\lambda ^{\frac{q+1}{4q}}=x,$$ (2.1) with $`x=e^{\beta (\mu _\varphi -ϵ)}`$. Comparing this with the general form of the Isakov-Ouvry-Wu (IOW) equations for particles with exclusion statistics matrix $`𝐊`$ , $$\left(\frac{\lambda _a-1}{\lambda _a}\right)\prod _b\lambda _b^{K_{ab}}=x_a,$$ (2.2) one identifies $$𝐊_\varphi =\left(\begin{array}{cc}1& -{\scriptscriptstyle \frac{1}{2}}\\ -{\scriptscriptstyle \frac{1}{2}}& \frac{q+1}{4q}\end{array}\right)$$ (2.3) as the statistics matrix for particles $`(\varphi _0,\varphi )`$, where $`\varphi `$ is the edge quasi-hole of charge $`\frac{e}{2q}`$. The other particle $`\varphi _0`$ does not carry any charge or energy and is called a pseudo-particle. The presence of this particle accounts for the non-abelian statistics of the physical particle $`\varphi `$ . Eliminating $`\lambda _0`$ from equations (2.1) gives $$(\lambda -1)(\lambda ^{\frac{1}{2}}-1)=x^2\lambda ^{\frac{3q-1}{2q}},$$ (2.4) in agreement with . The duality between the $`\varphi `$ and $`e`$ excitations over the $`q`$-pfaffian state was first discussed in , where it was also shown how the correct spectrum of the edge CFT is reconstructed using the $`e`$ and $`\varphi `$ quanta. Here we present a discussion at the level of $`K`$-matrices where, as in eqn. (1.1), the dual sector is reached by inverting the matrix $`𝐊_\varphi `$ of eqn. (2.3). Starting from the IOW equations (2.2), and denoting by $`\lambda _a^{}`$ and $`x_a^{}`$ the quantities corresponding to $`𝐊^{}=𝐊^{-1}`$, we have the correspondence $$\lambda _a^{}=\frac{\lambda _a}{\lambda _a-1},x_a^{}=\prod _bx_b^{-K_{ab}^{-1}},$$ (2.5) which leads, among other things, directly to the (appropriate generalization of) eqn. (1.2). 
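A minimal numerical sketch, assuming only the reconstructed equations above: solve the coupled IOW equations (2.1) for $`(\lambda _0,\lambda )`$ at given $`x`$ and $`q`$, and confirm that the solution satisfies the eliminated single-variable form (2.4). The parameter values are arbitrary test values.

```python
# Sketch: solve the IOW equations (2.1) for the q-pfaffian quasi-holes and
# check the eliminated form (2.4); q and x below are illustrative choices.
import numpy as np
from scipy.optimize import brentq

def solve_lambdas(x, q):
    # First equation: (lam0 - 1) * lam^{-1/2} = 1, i.e. lam0 = 1 + sqrt(lam).
    def second_eq(lam):
        lam0 = 1.0 + np.sqrt(lam)
        return ((lam - 1) / lam) * lam0 ** (-0.5) * lam ** ((q + 1) / (4 * q)) - x
    lam = brentq(second_eq, 1.0 + 1e-12, 1e8)
    return 1.0 + np.sqrt(lam), lam

q, x = 2.0, 0.3
lam0, lam = solve_lambdas(x, q)
lhs = (lam - 1) * (np.sqrt(lam) - 1)
rhs = x ** 2 * lam ** ((3 * q - 1) / (2 * q))
print(lam0, lam, lhs, rhs)   # lhs == rhs reproduces eq. (2.4)
```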
For the $`q`$-pfaffian state, we define the $`K`$-matrix for the electron sector ($`e`$-sector) to be the inverse of $`𝐊_\varphi `$ $$𝐊_e=𝐊_\varphi ^{-1}=\left(\begin{array}{cc}q+1& 2q\\ 2q& 4q\end{array}\right).$$ (2.6) Inspecting the right hand sides of the duality-transformed IOW equations, we find $`x_1^{}=y`$ and $`x_2^{}=y^2`$, with $`y=e^{\beta (\mu _e-ϵ^{})}`$, where $`ϵ^{}=-2qϵ`$ and $`\mu _e=-2q\mu _\varphi `$ (i.e., $`y=x^{-2q}`$), indicating that the two particles in the $`e`$-sector carry charge $`Q=-e`$ and $`Q=-2e`$, respectively. We shall denote these particles by $`\mathrm{\Psi }_1`$ and $`\mathrm{\Psi }_2`$. The first particle is quickly identified with the edge electron, with self-exclusion parameter equal to $`q+1`$. The presence of a ‘composite’ particle $`\mathrm{\Psi }_2`$ of charge $`-2e`$ has its origin in the fundamental electron pairing that is implied by the form of the pfaffian wave function. As before, there is no mutual exclusion statistics between the $`e`$\- and the $`\varphi `$-sectors. In ref. , we show that the conformal characters of the edge CFT can be cast in the UCPF form with matrix $`𝐊_e\oplus 𝐊_\varphi `$. The value $`c=\frac{3}{2}`$ of the central charge follows as a direct consequence . We remark that the usual relations between the charge vector $`𝐐_e^T=(-e,-2e)`$, $`𝐐_\varphi ^T=(0,\frac{e}{2q})`$, the matrices $`𝐊_e`$, $`𝐊_\varphi `$ and the filling fraction $`\nu `$ are satisfied in this non-abelian context $$𝐊_e=𝐊_\varphi ^{-1},𝐐_e=-𝐊_\varphi ^{-1}𝐐_\varphi ,$$ $$\nu e^2=𝐐_\varphi ^T𝐊_\varphi ^{-1}𝐐_\varphi =𝐐_e^T𝐊_e^{-1}𝐐_e.$$ (2.7) These relations, which hold in all examples discussed in this paper, together with the UCPF form of the conformal characters, motivate our claim that $`𝐊_e`$ and $`𝐊_\varphi `$ are the appropriate generalizations of the $`K`$-matrices in this non-abelian setting. We shall now proceed and link the composite $`Q=-2e`$ particle in the electron sector to the supercurrent that is familiar in the context of the BCS theory for superconductivity. For this argument, it is useful to follow the $`q`$-pfaffian state as a function of $`q`$ with $`0\le q\le 2`$. This procedure can be interpreted in terms of a flux-attachment transformation . For $`q=2`$ we have the fermionic pfaffian state at $`\nu =\frac{1}{2}`$ and $`q=1`$ gives a bosonic pfaffian state at $`\nu =1`$. In the non-magnetic limit, $`q\to 0`$, we recognize the pfaffian wave function as the coordinate space representation of a specific superconducting BCS state with complex $`p`$-wave pairing . In the limit $`q\to 0`$, the particle $`\mathrm{\Psi }_2`$ has exclusion parameter $`g=0`$ and can, as we shall argue, be associated with the supercurrent of the superconducting state. \[In fact, the mutual exclusion statistics between $`\mathrm{\Psi }_2`$ and $`\mathrm{\Psi }_1`$ vanishes as well in the limit $`q\to 0`$.\] The exclusion statistics parameter for the $`\varphi `$-particle diverges for $`q\to 0`$, meaning that in this limit the $`\varphi `$-sector no longer contributes to the physical edge spectrum. In a quantum state that has all electrons paired, the fundamental flux quantum is $`\frac{h}{2e}`$. Piercing the sample with precisely this amount of flux leads to a quasi-hole excitation of charge $`\frac{e}{2q}`$. This excitation also contains a factor which acts as the spin-field with respect to the neutral fermion in the electron-sector; the latter factor is at the origin of the non-abelian statistics . 
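Since several signs in (2.3), (2.6) and (2.7) had to be reconstructed here, a symbolic check is worthwhile. The following sketch (illustrative, using sympy) verifies that the matrices are mutually inverse and that the charge and filling relations hold with $`𝐐_e=(-e,-2e)`$.

```python
# Sketch: symbolic verification of (2.6) and (2.7) for the q-pfaffian.
import sympy as sp

q, e = sp.symbols('q e', positive=True)
K_phi = sp.Matrix([[1, -sp.Rational(1, 2)],
                   [-sp.Rational(1, 2), (q + 1) / (4 * q)]])
K_e = sp.Matrix([[q + 1, 2 * q],
                 [2 * q, 4 * q]])
Q_phi = sp.Matrix([0, e / (2 * q)])
Q_e = sp.Matrix([-e, -2 * e])

# K_e = K_phi^{-1}  and  Q_e = -K_phi^{-1} Q_phi
assert (K_phi * K_e).applyfunc(sp.simplify) == sp.eye(2)
assert (Q_e + K_e * Q_phi).applyfunc(sp.simplify) == sp.zeros(2, 1)
# filling fraction nu from both sectors
nu1 = sp.simplify((Q_phi.T * K_e * Q_phi)[0] / e**2)
nu2 = sp.simplify((Q_e.T * K_phi * Q_e)[0] / e**2)
print(nu1, nu2)   # both 1/q
```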
For $`q=1,2,\mathrm{\dots }`$ the quasi-hole charge $`\frac{e}{2q}`$ is the lowest charge that we consider: the excitations of charge $`-e`$ and $`-2e`$ correspond to flux insertions that are a (negative) integer multiple of the flux quantum. However, for $`q\ll 1`$, the quasi-hole charge is larger than $`e`$, $`2e`$, and we conclude that the fundamental excitations in the $`e`$-sector correspond to an insertion of a fraction of the flux-quantum, in other words, to a situation where the boundary conditions for the original ‘electrons’ have been twisted. For definiteness, let us put $`q=\frac{1}{N}`$ with $`N`$ a large integer. A $`\mathrm{\Psi }_2`$ quantum state of charge $`-2e`$ then corresponds to a flux insertion of $`\frac{2qh}{e}=\frac{2h}{Ne}`$. In the absence of any $`\mathrm{\Psi }_1`$ quanta, this quantum state for the $`\mathrm{\Psi }_2`$ particle can be filled up to a maximum of $`n^{\mathrm{max}}=\frac{1}{4q}=\frac{N}{4}`$ times. \[This follows from the self-exclusion parameter $`g=4q`$ and the result that in Haldane’s statistics, $`n^{\mathrm{max}}=\frac{1}{g}`$.\] The amount of flux that corresponds to this maximal occupation equals $`\frac{2h}{Ne}\frac{N}{4}=\frac{h}{2e}`$, which is precisely the flux quantum. Summarizing, we see that the insertion of a single flux quantum $`\frac{h}{2e}`$ gives rise to a quasi-hole ($`\varphi `$) excitation, while (negative) fractions of the flux quantum (between $`-\frac{h}{2e}`$ and 0) give rise to multiple occupation of the $`\mathrm{\Psi }_2`$ modes. In the description of a BCS superconductor, the excitation that is induced by the insertion of a fraction of the flux quantum $`\frac{h}{2e}`$, is precisely the supercurrent that screens the imposed flux. Comparing the two pictures, we see that in the limit $`q\to 0`$, the $`\mathrm{\Psi }_2`$ excitations in the $`q`$-pfaffian state reduce to supercurrent excitations in the limiting BCS superconductor. In an earlier study , the neutral fermionic excitation over the $`q`$-pfaffian (which is not elementary in our $`(e,\varphi )`$ description), in the limit $`q\to 0`$, has been identified with the pair breaking excitation of the superconducting state. A similar reasoning applies to the Laughlin state at $`\nu =\frac{1}{m}`$, where it links the charge $`-e`$ excitations at finite $`m`$ to the supercurrent of the limiting superfluid boson state at $`m=0`$. In a somewhat different physical picture, the edge particle $`\mathrm{\Psi }_2`$ is the one that is excited in the process of Andreev reflection off the edge of a sample in the $`q`$-pfaffian state. It will be interesting to explore in some detail such processes for the $`q=2`$ pfaffian state.

3. Pseudo-spin triplet pfaffian quantum Hall states

As a generalization of the results in the previous section, we consider a $`q`$-pfaffian state for particles with an internal (double-layer or pseudo-spin) degree of freedom. The wave function of this state is the Halperin two-layer state with labels $`(q+1,q+1,q-1)`$. This state has $`K`$-matrices $$𝐊_e=\left(\begin{array}{cc}q+1& q-1\\ q-1& q+1\end{array}\right),𝐊_\varphi =\frac{1}{4q}\left(\begin{array}{cc}q+1& -q+1\\ -q+1& q+1\end{array}\right),$$ (3.1) describing excitations $`(\mathrm{\Psi }_{\uparrow },\mathrm{\Psi }_{\downarrow })`$ of charge $`-e`$ and $`(\varphi _{\uparrow },\varphi _{\downarrow })`$ of charge $`\frac{e}{2q}`$, respectively. There are no pseudo-particles and these states are abelian quantum Hall states. 
We shall now argue that, based on the analogy with the $`q`$-pfaffian states for spinless electrons, for these states we can identify supercurrent-type excitations in the $`e`$-sector and, by duality, reformulate the $`\varphi `$-sector in terms of one physical quasi-hole and two pseudo-particles. We stress that this reformulation does not change the physical interpretation; in particular, although the new formulation employs two pseudo-particles, it still refers to an abelian quantum Hall state. In the limit $`q\to 0`$, the $`(q+1,q+1,q-1)`$ paired wave function reduces to a form that can be interpreted as a (complex) $`p`$-wave pseudo-spin triplet state . Taking $`q=\frac{1}{N}`$ as before, we can look for excitations in the $`e`$-sector that describe the response to the insertion of fractional flux, and that will smoothly connect to the supercurrent at $`q=0`$. Inspecting the matrix $`𝐊_e`$, we see that the two quanta $`\mathrm{\Psi }_{\uparrow }`$ and $`\mathrm{\Psi }_{\downarrow }`$ each have self-exclusion parameter approaching $`g=1`$, and can by themselves not screen more than an amount of flux equal to $`\frac{h}{Ne}`$. However, due to the strong negative mutual exclusion statistics, an excitation that is effectively a pair of one $`\mathrm{\Psi }_{\uparrow }`$ and one $`\mathrm{\Psi }_{\downarrow }`$ particle can screen a much larger amount of flux. To formalize this consideration, we introduce a particle $`\mathrm{\Psi }_3`$, defined as a pair $`(\mathrm{\Psi }_{\uparrow }\mathrm{\Psi }_{\downarrow })`$. Following a general construction presented in , we derive a new $`K`$-matrix for the extended system $`(\mathrm{\Psi }_{\uparrow },\mathrm{\Psi }_{\downarrow },\mathrm{\Psi }_3)`$ $$\stackrel{~}{𝐊}_e=\left(\begin{array}{ccc}q+1& q& 2q\\ q& q+1& 2q\\ 2q& 2q& 4q\end{array}\right).$$ (3.2) \[This choice of $`K`$-matrix guarantees an equivalence between the $`(\mathrm{\Psi }_{\uparrow },\mathrm{\Psi }_{\downarrow })`$ and the $`(\mathrm{\Psi }_{\uparrow },\mathrm{\Psi }_{\downarrow },\mathrm{\Psi }_3)`$ formulations.\] On the basis of the extended matrix $`\stackrel{~}{𝐊}_e`$, we identify the supercurrent excitations as before: a single $`\mathrm{\Psi }_3`$ quantum requires flux $`\frac{2h}{Ne}`$, and with a maximal filling of $`n^{\mathrm{max}}=\frac{1}{4q}=\frac{N}{4}`$, we see that the $`\mathrm{\Psi }_3`$ quanta can ‘absorb’ an amount of flux equal to $`\frac{h}{2e}`$. In the limit $`q\to 0`$, the $`\mathrm{\Psi }_3`$ quanta are identified with the supercurrent quanta, which have the ability to screen a full quantum $`\frac{h}{2e}`$ of applied flux. Inverting $`\stackrel{~}{𝐊}_e`$, we obtain $$\stackrel{~}{𝐊}_\varphi =\left(\begin{array}{ccc}1& 0& -{\scriptscriptstyle \frac{1}{2}}\\ 0& 1& -{\scriptscriptstyle \frac{1}{2}}\\ -{\scriptscriptstyle \frac{1}{2}}& -{\scriptscriptstyle \frac{1}{2}}& \frac{2q+1}{4q}\end{array}\right),$$ (3.3) with associated parameters $$x_1=\left(\frac{y_{\downarrow }}{y_{\uparrow }}\right)^{\frac{1}{2}},x_2=\left(\frac{y_{\uparrow }}{y_{\downarrow }}\right)^{\frac{1}{2}},x_3=(y_{\uparrow }y_{\downarrow })^{-\frac{1}{4q}},$$ (3.4) where $`y_{\uparrow ,\downarrow }=\mathrm{exp}(\beta (\mu _{\uparrow ,\downarrow }-ϵ))`$. The fact that $`x_{1,2}`$ do not depend on the energy parameter $`ϵ`$ makes clear that these are pseudo-particles. As such they account for degeneracies in states that contain more than one $`\varphi _3`$-quantum. Despite this appearance, by construction it is clear that the braid statistics of these degenerate excitations are abelian. At the level of the edge CFT, the two different formulations of the $`(q+1,q+1,q-1)`$ theory are easily understood. 
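Since the minus signs in (3.3) had to be reconstructed here, a symbolic check that (3.2) and (3.3) are mutually inverse is worthwhile; the sketch below assumes nothing beyond the matrices as printed above.

```python
# Sketch: verify that the extended matrices (3.2) and (3.3) are inverses.
import sympy as sp

q = sp.symbols('q', positive=True)
half = sp.Rational(1, 2)
K_e = sp.Matrix([[q + 1, q,     2 * q],
                 [q,     q + 1, 2 * q],
                 [2 * q, 2 * q, 4 * q]])
K_phi = sp.Matrix([[1,     0,     -half],
                   [0,     1,     -half],
                   [-half, -half, (2 * q + 1) / (4 * q)]])
print((K_e * K_phi).applyfunc(sp.simplify))   # 3x3 identity matrix
```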
In the usual abelian formulation, the edge CFT is written in terms of a (charge) boson $`\phi _c`$ plus a Dirac fermion, whose (dimension-$`\frac{1}{8}`$) spin field has abelian statistics. The alternative formulation employs $`\phi _c`$ plus two real fermions (called $`\psi _e`$ and $`\psi _o`$ in ). The two pseudo-particles that we obtained describe the non-abelian statistics of the spin-fields of the real (Ising) fermions $`\psi _e`$ and $`\psi _o`$ separately. The actual chiral Hilbert space of the edge CFT is however a subspace of the Hilbert space of the (Ising)<sup>2</sup> CFT, and the braid statistics in this subspace are all abelian. The various phase transitions described in are easily traced in the statistics matrices. The transition of the $`o`$ spins into the strong-pairing phase decouples one row of the matrices (3.2) and (3.3), turning them into the matrices (2.6), (2.3) corresponding to the $`q`$-pfaffian state. A subsequent transition of the $`e`$ spins into the strong-pairing phase leaves only the edge excitations $`\mathrm{\Psi }_3`$ and further reduces the matrix to $`𝐊_e=4q`$, appropriate for a Laughlin state of charge $`-2e`$ particles at filling $`\nu =\frac{1}{q}`$.

4. Parafermionic quantum Hall states: generalized pairing

In , Read and Rezayi proposed a series of non-abelian quantum Hall states based on order-$`k`$ clustering of spinless electrons. The wave functions for these states are constructed with help of the well-known $`Z_k`$ parafermions. The general state of , labeled as $`(k,M)`$, has filling fraction $`\nu =\frac{k}{kM+2}=\frac{1}{q}`$ with $`q=M+\frac{2}{k}`$. Fermionic quantum Hall states are obtained for $`M`$ an odd integer. \[For $`M=0`$ we have a bosonic state with $`SU(2)_k`$ symmetry.\] For $`k=1,2`$ these new states reduce to the Laughlin ($`m=M+2`$) and $`q`$-pfaffian ($`q=M+1`$) states, respectively. In , the matrices $`𝐊_\varphi `$ were identified for general $`(k,M)`$. Here we illustrate the $`K`$-matrix structure for $`k=3`$, where we have $$𝐊_\varphi =\left(\begin{array}{ccc}1& -{\scriptscriptstyle \frac{1}{2}}& 0\\ -{\scriptscriptstyle \frac{1}{2}}& 1& -{\scriptscriptstyle \frac{1}{2}}\\ 0& -{\scriptscriptstyle \frac{1}{2}}& \frac{3q+1}{9q}\end{array}\right),𝐐_\varphi ^T=(0,0,\frac{e}{3q}),$$ (4.1) for two pseudo-particles and a physical quasi-hole of charge $`\frac{e}{3q}`$. Inverting this matrix gives $$𝐊_e=\left(\begin{array}{ccc}q+{\scriptscriptstyle \frac{4}{3}}& 2q+{\scriptscriptstyle \frac{2}{3}}& 3q\\ 2q+{\scriptscriptstyle \frac{2}{3}}& 4q+{\scriptscriptstyle \frac{4}{3}}& 6q\\ 3q& 6q& 9q\end{array}\right),𝐐_e^T=(-e,-2e,-3e).$$ (4.2) Again putting $`q=\frac{1}{N}`$, with $`N`$ large, we can repeat the previous arguments. Clearly, the $`\mathrm{\Psi }_3`$ quanta of charge $`-3e`$ act as the ‘supercurrent’ for the 3-electron clustering. One such quantum requires a flux of $`\frac{3h}{eN}`$, and with $`n^{\mathrm{max}}=\frac{N}{9}`$, the total flux that can be absorbed equals $`\frac{h}{3e}`$ as expected. We remark that, for $`q\to 0`$, the excitations $`\mathrm{\Psi }_{1,2}`$ have fractional exclusion statistics parameters, in agreement with the fact that the $`k=3`$ state at $`q=0`$ has $`M=-\frac{2}{3}`$ and is thus not fermionic. What one has instead is an ‘anyonic’ superconductor with Cooper clusters of charge $`-3e`$ and cluster-breaking excitations with fractional exclusion statistics.

5. Non-abelian spin-singlet quantum Hall states

In , two of the present authors introduced a series of non-abelian spin singlet (NASS) states. 
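Before turning to the spin-singlet states, the same kind of check for the $`k=3`$ matrices (4.1) and (4.2) as reconstructed above; in particular the entry $`q+\frac{4}{3}`$ of $`𝐊_e`$ is fixed by requiring $`𝐊_e=𝐊_\varphi ^{-1}`$ (it is also the expected electron self-statistics $`M+2`$).

```python
# Sketch: verify the k=3 parafermion matrices (4.1)-(4.2) and the filling.
import sympy as sp

q, e = sp.symbols('q e', positive=True)
half, third = sp.Rational(1, 2), sp.Rational(1, 3)
K_phi = sp.Matrix([[1, -half, 0],
                   [-half, 1, -half],
                   [0, -half, (3 * q + 1) / (9 * q)]])
K_e = sp.Matrix([[q + 4 * third, 2 * q + 2 * third, 3 * q],
                 [2 * q + 2 * third, 4 * q + 4 * third, 6 * q],
                 [3 * q, 6 * q, 9 * q]])
Q_phi = sp.Matrix([0, 0, e / (3 * q)])

assert (K_phi * K_e).applyfunc(sp.simplify) == sp.eye(3)
print(sp.simplify((Q_phi.T * K_e * Q_phi)[0] / e**2))   # 1/q
```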
The states are labeled as $`(k,M)`$ and have filling fraction $`\nu =\frac{2k}{2kM+3}=\frac{1}{q}`$ with $`q=M+\frac{3}{2k}`$. The wave functions, which are constructed as conformal blocks of higher rank (Gepner) parafermions, have a BCS type factorized form, where the factors describe a $`k`$-fold spin-polarized clustering of electrons of given spin and the formation of a spin-singlet with $`2k`$ participating electrons. \[See for an example and for general and explicit expressions for the wave functions.\] For $`k=1`$ the spin-singlet states are abelian with $`K`$-matrices given by $$𝐊_e=\left(\begin{array}{cc}q+{\scriptscriptstyle \frac{1}{2}}& q-{\scriptscriptstyle \frac{1}{2}}\\ q-{\scriptscriptstyle \frac{1}{2}}& q+{\scriptscriptstyle \frac{1}{2}}\end{array}\right),𝐊_\varphi =\frac{1}{2q}\left(\begin{array}{cc}q+{\scriptscriptstyle \frac{1}{2}}& -q+{\scriptscriptstyle \frac{1}{2}}\\ -q+{\scriptscriptstyle \frac{1}{2}}& q+{\scriptscriptstyle \frac{1}{2}}\end{array}\right).$$ (5.1) We remark that, as for the Laughlin series, there is a self-duality in the sense that $`𝐊_e(M)=𝐊_\varphi (M^{})`$ with $$M=-\frac{3M^{}+4}{2M^{}+3},\left(q=\frac{1}{4q^{}}\right)$$ (5.2) One of the self-dual points is $`M=M^{}=-1`$ ($`q=q^{}=\frac{1}{2}`$), corresponding to two decoupled $`\nu =1`$ systems for spin up and down. In a forthcoming paper we present a detailed derivation of the $`K`$-matrix structure for the general NASS states, where we obtain ‘minimal’ $`K`$-matrices of size $`2k\times 2k`$. The matrix $`𝐊_e`$ describes fully polarized composites (of both spins) of $`1,2,\mathrm{\dots },k`$ quasi-electrons, while the matrix $`𝐊_\varphi `$ describes a spin-doublet of physical, fractionally charged quasi-holes ($`Q=\frac{e}{4q}`$) and a collection of $`2(k-1)`$ pseudo-particles that take care of the non-abelian statistics. The simplest non-trivial example is the result for $`k=2`$, ($`M=q-\frac{3}{4}`$) $`𝐊_\varphi =\left(\begin{array}{cccc}{\scriptscriptstyle \frac{4}{3}}& {\scriptscriptstyle \frac{2}{3}}& -{\scriptscriptstyle \frac{2}{3}}& -{\scriptscriptstyle \frac{1}{3}}\\ {\scriptscriptstyle \frac{2}{3}}& {\scriptscriptstyle \frac{4}{3}}& -{\scriptscriptstyle \frac{1}{3}}& -{\scriptscriptstyle \frac{2}{3}}\\ -{\scriptscriptstyle \frac{2}{3}}& -{\scriptscriptstyle \frac{1}{3}}& \frac{28q+3}{48q}& \frac{3-4q}{48q}\\ -{\scriptscriptstyle \frac{1}{3}}& -{\scriptscriptstyle \frac{2}{3}}& \frac{3-4q}{48q}& \frac{28q+3}{48q}\end{array}\right),𝐐_\varphi ^T=(0,0,\frac{e}{4q},\frac{e}{4q}),`$ (5.7) $`𝐊_e=\left(\begin{array}{cccc}q+{\scriptscriptstyle \frac{5}{4}}& q-{\scriptscriptstyle \frac{3}{4}}& 2q+{\scriptscriptstyle \frac{1}{2}}& 2q-{\scriptscriptstyle \frac{1}{2}}\\ q-{\scriptscriptstyle \frac{3}{4}}& q+{\scriptscriptstyle \frac{5}{4}}& 2q-{\scriptscriptstyle \frac{1}{2}}& 2q+{\scriptscriptstyle \frac{1}{2}}\\ 2q+{\scriptscriptstyle \frac{1}{2}}& 2q-{\scriptscriptstyle \frac{1}{2}}& 4q+1& 4q-1\\ 2q-{\scriptscriptstyle \frac{1}{2}}& 2q+{\scriptscriptstyle \frac{1}{2}}& 4q-1& 4q+1\end{array}\right),𝐐_e^T=(-e,-e,-2e,-2e).`$ (5.12) In analogy with the reasoning for the pseudo-spin triplet pfaffian state, one may now consider the composite of the $`k`$-spin-up and the $`k`$-spin-down components, and determine an extended $`K_e`$-matrix (cf. (3.2)). For $`q\to 0`$ one finds that all statistical couplings of this composite vanish, and we identify it with the supercurrent corresponding to the $`2k`$-electron spin-singlet clustering. 
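A sketch verifying the $`k=1`$ self-duality (5.2) as reconstructed above, i.e. with the minus sign in the map between $`M`$ and $`M^{}`$ (here written as Mp):

```python
# Sketch: check K_e(M) = K_phi(M') under M = -(3M'+4)/(2M'+3), q = M + 3/2.
import sympy as sp

Mp = sp.symbols('Mp', real=True)
M = -(3 * Mp + 4) / (2 * Mp + 3)
q, qp = M + sp.Rational(3, 2), Mp + sp.Rational(3, 2)
half = sp.Rational(1, 2)
K_e = sp.Matrix([[q + half, q - half], [q - half, q + half]])
K_phi_p = (1 / (2 * qp)) * sp.Matrix([[qp + half, -qp + half],
                                      [-qp + half, qp + half]])
print((K_e - K_phi_p).applyfunc(sp.simplify))   # zero matrix
print(sp.simplify(q - 1 / (4 * qp)))            # 0, i.e. q = 1/(4q')
```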
The extended $`K_e`$-matrix is invertible and gives a redefined $`\varphi `$-sector with a single spinless $`\varphi `$-quantum and $`(2k-1)`$ pseudo-particles. To obtain a formulation with manifest $`SU(2)`$-spin symmetry, a further extension of $`𝐊_e`$ can be considered .

6. Conclusions

In this paper, we have studied the exclusion statistics of edge excitations over non-abelian quantum Hall states. From the results we have extracted matrices $`𝐊_e`$ and $`𝐊_\varphi `$ which generalize the well-known $`K`$-matrices for abelian quantum Hall states to the non-abelian case. Note, however, that the torus degeneracy for the non-abelian case is not given by the abelian result $`|\mathrm{det}(𝐊_e)|`$, but that a further reduction is necessary due to the presence of the pseudo-particles. \[Compare with where such a reduction was discussed in the context of the parton construction of non-abelian quantum Hall states.\] We expect that these new $`K`$-matrices can be used to formulate effective (edge and bulk) theories for these non-abelian quantum Hall states. Until now, effective field theories for bulk excitations (of the LGCS type) have only been obtained for some very special cases , and it will be most interesting to find more systematic constructions. Acknowledgements We would like to thank Nick Read for illuminating discussions. This research was supported in part by the Australian Research Council and the foundation FOM of the Netherlands.
# Cosmological Consequences of Slow-Moving Bubbles in First-Order Phase Transitions

## I Introduction

According to standard cosmology, the early universe is expected to have undergone a series of symmetry-breaking phase transitions as it expanded and cooled, at which topological defects may have formed . Phase transitions are labelled first- or second-order, according to whether the position of the vacuum state in field space changes discontinuously or continuously as the critical temperature is crossed. A first-order phase transition proceeds by bubble nucleation and expansion. When at least $`(4-n)`$ of these bubbles collide (for $`n=0,1`$ or $`2`$), an $`n`$-dimensional topological defect may form in the region between them. In recent years there has been considerable interest in the formation of defects in first-order phase transitions, in particular the validity of the so-called geodesic rule. The geodesic rule, first stated by Kibble , predicts that after a two-bubble collision the phase of the scalar field interpolates continuously between the values in each bubble, along the shortest path in field space. Early analysis confirmed the geodesic rule for defect formation in both global and local theories, albeit using a planar approximation and neglecting the effect of the surrounding plasma. In later work, the finite conductivity of the plasma was considered for local theories, as was the effect of slow-moving (i.e. speeds less than the speed of light) bubble walls in theories with a global symmetry . These analyses confirm defect formation in first-order phase transitions, but make conflicting claims about the number of defects actually formed. In this paper we investigate this issue. Unlike previous work, we include the effect of slow-moving bubble walls in both the global and local cases. We use our results to make qualitative comparisons between defect densities formed in global and local theories, and by slow-moving and fast-moving bubble walls. As well as the consequences for defect formation, we also consider the implications of slow-moving walls for the formation of primordial magnetic fields at a first-order phase transition. We take as our model the simplest spontaneously-broken gauge symmetry: the Abelian Higgs model, which has a local $`U(1)`$ symmetry, with Lagrangian $$\mathcal{L}=(D_\mu \mathrm{\Phi })^{\ast }(D^\mu \mathrm{\Phi })-\frac{1}{4}F_{\mu \nu }F^{\mu \nu }-V(\mathrm{\Phi }^{\ast }\mathrm{\Phi }),$$ (1) where $`D_\mu \mathrm{\Phi }=\partial _\mu \mathrm{\Phi }-ieA_\mu \mathrm{\Phi }`$ and $`F_{\mu \nu }=\partial _\mu A_\nu -\partial _\nu A_\mu `$. The detailed form of the effective potential $`V(|\mathrm{\Phi }|)`$ will depend upon the particular particle-physics model being considered, but in order to be able to study the generic features of a first-order phase transition we shall take $`V`$, following Ferrera and Melfo , to be $$V(\mathrm{\Phi })=\lambda \left[\frac{|\mathrm{\Phi }|^2}{2}(|\mathrm{\Phi }|-\eta )^2-\frac{\epsilon }{3}\eta |\mathrm{\Phi }|^3\right],$$ (2) where in a realistic model, $`\epsilon =\epsilon (T)\propto (T_c-T)`$. $`V`$ has a local minimum false-vacuum state at $`\mathrm{\Phi }=0`$ which is invariant under the $`U(1)`$ symmetry, and global minima true-vacuum states on the circle $`|\mathrm{\Phi }|=\rho _{tv}\equiv \left(\eta /4\right)(3+\epsilon +\sqrt{1+6\epsilon +\epsilon ^2})`$ which possess no symmetry. 
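A quick symbolic check (an illustration, not part of the paper) that the reconstructed potential (2) indeed has its non-trivial minima on the circle $`|\mathrm{\Phi }|=\rho _{tv}`$:

```python
# Sketch: verify dV/drho vanishes at rho_tv for the potential (2).
import sympy as sp

rho, eta, eps, lam = sp.symbols('rho eta epsilon lambda', positive=True)
V = lam * (rho**2 / 2 * (rho - eta)**2 - eps / 3 * eta * rho**3)
dV = sp.factor(sp.diff(V, rho))
print(dV)   # expected: lam*rho*(2*rho**2 - (3+eps)*eta*rho + eta**2)
rho_tv = eta / 4 * (3 + eps + sp.sqrt(1 + 6 * eps + eps**2))
print(sp.simplify(dV.subs(rho, rho_tv)))   # 0
```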
The dimensionless parameter $`\epsilon `$ is responsible for lifting the degeneracy between the two sets of minima – the greater $`\epsilon `$, the greater the potential difference between the false- and true-vacuum states, and hence the faster the bubbles will accelerate. By making the field and coordinate transformations $$\mathrm{\Phi }\to \varphi =\mathrm{\Phi }/\eta $$ (3) $$𝐱\to 𝐱^{}=\sqrt{\lambda }\eta 𝐱$$ (4) $$t\to t^{}=\sqrt{\lambda }\eta t$$ (5) it is possible to set $`\lambda `$ and $`\eta `$ to unity, so that the potential is parametrized only by $`\epsilon `$, and hereafter we shall use these transformed variables. The bubble nucleation rate per unit time per unit volume is given by the ‘bounce’ solution of the Euclidean field theory . Ignoring quantum fluctuations, the phase $`\theta `$ is constant within each bubble, and uncorrelated between spatially-separated bubbles. Any non-zero gauge fields in the nucleation configuration will make a contribution to the action and hence the nucleation of bubbles with non-zero gauge fields is exponentially suppressed. When three or more bubbles collide, a phase-winding of $`2\pi n`$ can occur around a point, which by continuity must then be at $`\mathrm{\Phi }=0`$. In three spatial dimensions, this topologically-stable region of high-energy false vacuum is string-like – a cosmic string. The formation and evolution of cosmic strings have been studied in great detail. Cosmic strings have been invoked as, amongst other things, possible seeds for cosmic structure formation, sources of cosmic rays, gravitational radiation and baryogenesis (see, e.g. ). In order to be able to assess the significance of cosmic strings in the evolution of the early universe, it is important to be able to estimate the initial defect density accurately. This depends on how the phases between two or more bubbles interpolate after collision. In particular, although strings are in general formed when three or more bubbles collide, a simultaneous three-bubble collision is unlikely – one would expect in general two-bubble collisions, with a third, or fourth bubble colliding some finite time later. If the phase inside a two-bubble collision is able to equilibrate quickly, and before a third bubble arrives, there may be a strong suppression of the initial string density. The effect of phase equilibration on the initial defect density was first investigated by Melfo and Perivolaropoulos . They found a decrease of less than $`10\%`$, in models which possess a global symmetry and with bubbles moving at the speed of light. The above description of defect formation, however, ignores any effect that the hot-plasma background may have on the evolution of the Higgs field, which may be significant in the early universe. Real-time simulations and analytic calculations for the (Standard Model) electroweak phase transition predicted that the bubble wall would reach a terminal velocity $`v_{ter}\sim 0.1c`$. The reason for this is simple: outside the bubble, where the ($`SU(2)\times U(1)`$) symmetry remains unbroken, all fields coupled to the Higgs are massless, acquiring their mass from the vacuum expectation value of the Higgs in the spontaneously-broken symmetry phase inside the bubble. Particles outside the bubble without enough energy to become massive inside bounce off of the bubble wall, retarding its progress through the plasma. 
The faster the bubble is moving, the greater the momentum transfer in each collision, and hence the stronger the retarding force. Thus a force proportional to the bubble-wall velocity appears in the effective equations of motion. Ferrera and Melfo studied bubble collisions in such an environment, for theories which possessed a global symmetry, and found that decaying phase oscillations occur inside a two-bubble collision, leading to a suppression of the defect formation rate . Kibble and Vilenkin studied phase dynamics in collisions of undamped bubbles in models with a local symmetry, and found, analytically, a different kind of decaying phase oscillation. When the finite conductivity of the plasma was included, these oscillations were found not to occur. However, Kibble and Vilenkin did not consider the behaviour of the phase after collisions of bubbles moving at speeds slower than the speed of light. Moreover, because of the symmetry assumptions made in their calculations, their results cannot be simply extrapolated to the slower-moving case. The behaviour of the phase inside bubble collisions in local theories where the bubbles move at the speed of light, and in global theories with slow-moving bubbles has been considered. However, the most realistic scenario cosmologically – a gauge-theory phase transition where the bubble walls are slowed significantly by the plasma (as might be expected at the electroweak- or GUT-scales) – has not been studied. This paper presents the results of our investigations into what happens in theories with a local symmetry, where the bubbles are moving at terminal velocities less than the speed of light. Our $`3+1`$-dimensional simulations indicate that, for slow-moving bubbles, phase oscillations of either of the types described in or do not occur, before the effect of the plasma conductivity is even considered. We therefore expect that (a) fewer defects would form in a phase transition where the ‘Higgs’ field is coupled to a gauge field than in a global-symmetry phase transition, and (b) in local theories, fewer defects would form in slow-moving bubbles than fast-moving ones. We should note in passing that we have ignored the effect of the expansion of the universe in our work. This is a good approximation for phase transitions which take place at late times, like the electroweak phase transition. At phase transitions which occur earlier, however, the Hubble expansion may have a significant effect on bubble and phase dynamics. This topic deserves consideration on its own, and work is currently in progress . In the following section, we describe the effects of a slow-moving bubble wall on phase dynamics inside bubble collisions in theories which possess a global symmetry. In section III, we discuss phase equilibration in theories with a local symmetry and present our new results in the case of slowly-moving local-symmetry bubbles. Our conclusions are supported by examples of defect formation suppressed in slow-moving bubbles. We show that the ‘extra defects’ found in do not occur in heavily-damped environments. In section IV we discuss the formation of a primordial magnetic field. We show that the presence of the plasma conductivity results in a larger magnetic field for fast-moving bubbles. For slow-moving bubbles, the plasma conductivity stops the field from dispersing. A larger magnetic field could also result in this case. A discussion of our results and our conclusions are presented in section V. 
## II Global Symmetry

If the gauge coupling $`e`$ is set to zero, we have a theory with a global $`U(1)`$ symmetry $$\mathcal{L}=(\partial _\mu \mathrm{\Phi })^{\ast }(\partial ^\mu \mathrm{\Phi })-V(\mathrm{\Phi }^{\ast }\mathrm{\Phi }).$$ (6) By writing $`\mathrm{\Phi }=\rho e^{i\theta }`$, the equations of motion for the modulus $`\rho `$ and phase $`\theta `$ of the Higgs field are $$\ddot{\rho }-\rho ^{\prime \prime }-(\partial _\mu \theta )^2\rho =-\frac{\partial V}{\partial \rho }$$ (7) $$\partial ^\mu \left[\rho ^2\partial _\mu \theta \right]=0.$$ (8) If the potential difference between the true- and false-vacuum states is much smaller than the height of the barrier separating them, the field equations may be solved using the ‘thin-wall’ approximation , by setting $`\epsilon =0`$. For our potential (2), this yields $$|\mathrm{\Phi }|=\frac{\eta }{2}\left[1+\mathrm{tanh}\left(\frac{\sqrt{\lambda }\eta }{2}\left(s-R_0\right)\right)\right],$$ (9) where $`s^2=𝐱^2-t^2`$ and $`R_0`$ is the bubble radius on nucleation. Note that $`\theta =\mathrm{constant}`$ trivially satisfies the phase equation (8), and so if the phase is initially constant within each bubble, as we shall assume, there are no phase dynamics until the bubbles collide. As described in the introduction however, we would like to investigate the behaviour of the phase in collisions of slow-moving bubbles. For a given theory, by considering the Boltzmann equations for scattering off of the Higgs field, it is possible to calculate the terminal velocity of the bubble wall . Since we are not concerned here with the parameters of a specific particle-physics model, we choose instead to use a single damping parameter $`\mathrm{\Gamma }`$ to model the interaction of the Higgs with the plasma. In the introduction we claimed that the plasma would introduce a term proportional to the bubble-wall velocity into the equations of motion. Since the phase $`\theta `$ of the Higgs field is not affected by the effects described, we assume that the plasma couples only to the modulus $`\rho `$. We then have effective equations of motion $$\ddot{\rho }-\rho ^{\prime \prime }+\mathrm{\Gamma }\dot{\rho }-(\partial _\mu \theta )^2\rho =-\frac{\partial V}{\partial \rho }$$ (10) $$\partial ^\mu \left[\rho ^2\partial _\mu \theta \right]=0.$$ (11) A damping term of this form has been used by several authors , , , and has also been derived from the stress-energy of the Higgs, assuming a coupling to the plasma . Heckler estimates $`\mathrm{\Gamma }\sim g_W^2T_c`$ for the electroweak phase transition, by comparing the energy generated by the frictional damping with the pressure on the wall due to the damping. The effect of this damping term is that instead of accelerating up to the speed of light, the bubble walls reach a terminal velocity $`v_{ter}<c`$. By making the ansatz $`\rho =\rho \left[x-x_0(t)\right]`$, the terminal velocity can be calculated by integrating the equation of motion for $`\rho `$ $$v_{ter}=\frac{\mathrm{\Delta }V}{\mathrm{\Gamma }\int \rho ^{\prime \,2}𝑑x},$$ (12) where $`\mathrm{\Delta }V`$ is the difference in potential energy between the true- and false-vacuum states. Assuming that the wall has a Lorentz-contracted, moving profile of the form (9) $$\rho =\frac{\rho _{tv}}{2}\left[1+\mathrm{tanh}\left(\frac{\sqrt{\lambda }\rho _{tv}\gamma }{2}\left(r-v_{ter}t-R_0\right)\right)\right],$$ (13) the integral in the denominator of (12) can be evaluated. Expanding $`\gamma =(1-v_{ter}^2)^{-1/2}`$, we obtain $$v_{ter}=\frac{A}{\sqrt{A^2+\mathrm{\Gamma }^2}},$$ (14) where $`A=6\mathrm{\Delta }V/(\sqrt{\lambda }\rho _{tv}^3)`$. 
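Equation (14) is easily tabulated; the sketch below (parameter values are illustrative only) shows how $`v_{ter}`$ falls off as the damping $`\mathrm{\Gamma }`$ grows:

```python
# Sketch of eq. (14): v_ter = A/sqrt(A^2 + Gamma^2), A = 6*DeltaV/(sqrt(lam)*rho_tv^3).
import numpy as np

def v_terminal(gamma, delta_v, rho_tv, lam=1.0):
    a = 6.0 * delta_v / (np.sqrt(lam) * rho_tv ** 3)
    return a / np.sqrt(a ** 2 + gamma ** 2)

# In the rescaled units of the text (lam = eta = 1); delta_v is illustrative.
for gam in [0.0, 0.5, 1.0, 2.0]:
    print(gam, v_terminal(gam, delta_v=0.05, rho_tv=1.0))
# Gamma = 0 gives v_ter = 1 (the wall reaches the speed of light);
# for large Gamma, v_ter -> A/Gamma, i.e. a heavily damped wall.
```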
We have simulated the evolution of bubbles in such a dissipative environment in $`1+1`$-dimensions. Taking a static profile of the form (9) as the initial conditions, the terminal velocity of the bubble was calculated for a range of values of the friction parameter $`\mathrm{\Gamma }`$. The accuracy of formula (14), compared with terminal velocities calculated directly from simulations, can be seen in Figure 1. This is a very useful result, as from it we can dial the input value of $`\mathrm{\Gamma }`$ to produce the value of $`v_{ter}`$ corresponding to the particular particle-physics model we are interested in. Heckler , and Ferrera and Melfo obtained a result like (12), and Haas found the best-fit equation $`v_{ter}=A+\left(1-A\right)/\left(1+B\mathrm{\Gamma }^{1.62}\right)`$ from Langevin-equation simulations. However, equation (14), we believe, holds for all of the cases above, provided that $`\epsilon `$ is small enough for the ‘thin wall’ approximation to hold, and is more useful when performing simulations. For example, it could be applied to the electroweak phase transition in the supersymmetric case, if the terminal velocity of the bubble wall were calculated. Ferrera and Melfo described how, in the context of a theory with a global symmetry, slow-moving bubble walls lead to phase oscillations. When two bubbles collide, the walls merge. Across the plane (in 3 spatial dimensions) of intersection, there exists a phase gradient to drive equation (8), and a phase wave propagates into each bubble from the centre – see Figures 2 (b) and 3 (b). As the Goldstone boson is massless and undamped, this wave travels at the speed of light. If the phase difference between the bubbles is $`\mathrm{\Delta }\theta `$, the phase wave will carry a phase difference $`+\mathrm{\Delta }\theta /2`$ into one of the bubbles, and $`-\mathrm{\Delta }\theta /2`$ into the other, equilibrating the phase. If the bubble wall is moving at a terminal velocity $`v_{ter}<c`$, the wave will catch up with the bubble wall, and rebound – the returning wave will now ‘flip’ the original phase profile. Thus phase oscillations occur inside the merged bubbles. Given three or more spatially-separated bubbles whose distribution of phases one would expect to generate a vortex on collision, a vortex, an anti-vortex, or none at all may form, depending on the profile of the phase inside the two bubbles at the moment of collision of the third – an example of how phase dynamics can affect the defect-formation process. The oscillations are damped, because the bubble walls continue to expand, increasing the volume over which the finite-energy wave must sweep, thus diluting the phase difference carried by the wave. Thus the converse of the above statement is not true – an initial distribution of phases which one would not expect to form a defect, will not produce one as a result of phase oscillations. Statistical simulations in two dimensions have shown that this leads to a suppression in the defect-formation rate – the slower the bubble walls, the fewer defects are formed per nucleated bubble. 
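The kind of $`1+1`$-dimensional damped evolution described above can be sketched in a few lines. The following is a minimal illustration in the rescaled units ($`\lambda =\eta =1`$): the grid sizes, the values of $`ϵ`$ and $`\mathrm{\Gamma }`$, and the simple leapfrog-style update are our own choices, not the integrator used for the figures in the text.

```python
# Sketch: damped 1+1-d evolution rho_tt = rho_xx - Gamma*rho_t - dV/drho,
# from which the terminal wall velocity can be read off.
import numpy as np

eps, Gamma = 0.1, 1.0
N, dx, dt = 2000, 0.1, 0.05
x = np.arange(N) * dx

def dV(rho):
    # V = (rho^2/2)(rho-1)^2 - (eps/3)*rho^3 in rescaled units
    return rho * (rho - 1) ** 2 + rho ** 2 * (rho - 1) - eps * rho ** 2

rho_tv = 0.25 * (3 + eps + np.sqrt(1 + 6 * eps + eps ** 2))
# kink-like initial wall at x = 20, true vacuum on the left
rho = 0.5 * rho_tv * (1 - np.tanh(0.5 * rho_tv * (x - 20.0)))
vel = np.zeros(N)

def laplacian(f):
    out = np.zeros_like(f)
    out[1:-1] = (f[2:] - 2 * f[1:-1] + f[:-2]) / dx ** 2
    return out

positions = []
for step in range(8000):
    acc = laplacian(rho) - Gamma * vel - dV(rho)
    vel += dt * acc
    rho += dt * vel
    if step % 400 == 0:
        positions.append((step * dt, x[np.argmin(np.abs(rho - 0.5 * rho_tv))]))
print(positions[-4:])   # slope of the late-time wall positions ~ v_ter
```

Reading off the slope of the last few recorded wall positions and comparing with (14) reproduces the kind of agreement shown in Figure 1.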
## III Local Symmetry

### A Phase dynamics inside two-bubble collisions

Including the gauge fields in our model, the field equations become $$\ddot{\rho }-\rho ^{\prime \prime }-(\partial _\mu \theta -eA_\mu )^2\rho =-\frac{\partial V}{\partial \rho }$$ (15) $$\partial ^\mu \left[\rho ^2(\partial _\mu \theta -eA_\mu )\right]=0$$ (16) $$\ddot{A_\nu }-A_\nu ^{\prime \prime }-\partial _\nu (\partial _\mu A^\mu )=2e\rho ^2(\partial _\nu \theta -eA_\nu ).$$ (17) Since we now have a local $`U(1)`$ symmetry, the phase $`\theta `$ can be arbitrarily re-defined at any point in time by a gauge transformation, and so we need a gauge-invariant notion of phase. We define, following Kibble and Vilenkin , the gauge-invariant phase difference between two points $`A`$ and $`B`$ $$\mathrm{\Delta }\theta =\int _A^B𝑑x^i\left(\partial _i\theta -eA_i\right),$$ (18) where $`i=1,2,3`$ and the integral is taken, for simplicity, along the straight line joining $`A`$ and $`B`$. For bubbles which move at approximately the speed of light, it is possible to greatly simplify the field equations. If we consider a two-bubble collision, in a frame where the bubbles are nucleated simultaneously, by assuming that the bubbles instantly propagate at the speed of light, it is possible to impose $`SO(1,2)`$ Lorentz symmetry on the field equations. Thus the fields are functions of $`z`$ and $`\tau ^2=t^2-x^2-y^2`$ only. With this assumption, and a step-function ansatz for the phase $`\theta `$ at the time of collision, Kibble and Vilenkin solved the field equations for $`\mathrm{\Delta }\theta `$ $$\mathrm{\Delta }\theta =\frac{2R}{t}\theta _0\left(\mathrm{cos}e\eta \left(t-R\right)+\frac{1}{e\eta R}\mathrm{sin}e\eta \left(t-R\right)\right),$$ (19) where $`2\theta _0`$ is the initial phase difference between the spatially-separated bubbles and $`R`$ is their radius on collision at $`t=0`$. Equation (19) describes decaying phase oscillations, the time scale of equilibration determined by the initial phase difference and radius of the colliding bubbles, the frequency of oscillation by the gauge-boson mass. These oscillations, and the accuracy of this formula for small initial phase differences, have recently been confirmed in simulations . However, the assumption that the bubbles move at, or close to the speed of light, does not appear to be realistic . In this case, the symmetry assumptions made in are no longer valid. Moreover, it is not possible to replace the coordinate $`\tau ^2=t^2-x^2-y^2`$ by the obvious choice $`\tau ^2=\left(v_{ter}t\right)^2-x^2-y^2`$, since only the modulus of the Higgs field, the bubble wall, is constrained in this way – the phase and gauge fields are still free to propagate causally. In order to investigate whether phase oscillations – which occur in the global theory with slow-moving bubbles, and in the local theory with fast-moving bubbles – still occur in the local theory when the bubbles expand slower than the speed of light, we include the dissipation term $`\mathrm{\Gamma }\dot{\rho }`$ in the equation for the modulus of the Higgs field $`\rho `$, without coupling it to the phase or the gauge fields. This is motivated in the same manner as described in the global case. A term proportional to $`\dot{\rho }`$ is, of course, $`U(1)`$ gauge-invariant. Since there is no longer an obvious simplification of the equations of motion which might lead to an analytic solution, we turn to computer simulations. 
The equations of motion were discretized in the gauge-invariant way described in , choosing the temporal gauge $`A_0\equiv 0`$ in order to make the time evolution trivial. We used a lattice of size $`200^3`$ and a lattice spacing $`a=0.5`$ – tests were performed on lattices with spacing down to $`a=0.1`$ giving no qualitatively-different results. The time evolution was performed using a fourth-order Runge-Kutta algorithm. We took as initial conditions a static profile of the form (9) for $`\rho =|\mathrm{\Phi }|`$, for two bubbles of radius $`R=5`$, with phases $`\theta =0`$ and $`\theta =2\pi /3`$, centred at $`(\pm 8,0,0)`$. We choose to ignore any primordial magnetic field and, since the nucleation process is not expected to generate non-zero gauge fields (see Introduction), set all the gauge fields to zero initially. The results of the simulations are displayed in Figures 4, 6 and 5. For the sake of clarity and to aid comparison between the different cases, we have chosen to present our results in terms of the evolution with time of the gauge-invariant phase difference $`\mathrm{\Delta }\theta `$. We evaluated $`\mathrm{\Delta }\theta `$ between the centres of the two bubbles, though the qualitative behaviour was found not to change when it was calculated between different points. Figure 4 (a) shows the behaviour of the gauge-invariant phase difference for bubbles moving at the speed of light – the decaying oscillations calculated by Kibble and Vilenkin in the local case. In the global case, $`e=0`$, we find that the phase does equilibrate, but on a much longer time-scale. Thus we would expect that for fast-moving bubbles, fewer defects are formed in local theories than global ones, since in order to form a defect a phase difference inside the two merged bubbles must be present when a third bubble collides. In Figure 4 (b) we plot $`\mathrm{\Delta }\theta `$ for slower-moving bubbles. For $`e=0`$, we confirm in $`3+1`$-dimensions the decaying phase oscillations described by Ferrera and Melfo and observed by them in $`2+1`$-dimensions. These oscillations are killed by adding in gauge fields – for a fixed bubble-wall velocity, the stronger the gauge coupling, the shorter the time for which the gauge-invariant phase difference is non-zero, and hence the less likely it is that a third collision will occur in time for a defect to form. Thus we would expect a lower defect-formation rate in local theories with slower-moving bubble walls. Figure 5 illustrates our findings – it shows a cross-section through a non-simultaneous three-bubble collision, after all three bubbles have merged. In each case, the bubbles of initial radius $`R=5`$, centred at $`(\pm 8,0,10)`$ and $`(0,0,-10)`$, were given phases $`\theta =-\pi /2,0`$ and $`2\pi /3`$. For identical initial conditions, we see that in the fast-moving case a vortex is formed, but when the bubbles are slowed down, the phase difference between the two bubbles has equilibrated by the time the third bubble collides, and no defect is formed. In any cosmological phase transition where the bubble wall is significantly slowed down, we may also expect the plasma to have non-zero conductivity, which will affect the evolution of the fields and so needs to be considered in any attempt at a realistic model. 
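For concreteness, here is a minimal sketch of how a gauge-invariant phase difference of the form (18) can be evaluated along a line of lattice sites, together with an explicit check of its gauge invariance. The discretization below is an illustration only, not necessarily the precise scheme used in the simulations described above.

```python
# Sketch: lattice version of eq. (18), Delta_theta = sum_i (theta_{i+1} - theta_i - e*a*A_i),
# with theta on sites and A on the links between them.
import numpy as np

def gauge_invariant_dtheta(theta, A, e, a):
    return np.sum(np.diff(theta) - e * a * A)

rng = np.random.default_rng(0)
theta = rng.uniform(-np.pi, np.pi, 11)   # site phases along the line A -> B
A = rng.normal(size=10)                  # link gauge field
chi = rng.normal(size=11)                # arbitrary gauge function on sites
e, a = 0.5, 1.0
d1 = gauge_invariant_dtheta(theta, A, e, a)
# gauge transformation: theta -> theta + e*chi, A -> A + grad(chi)
d2 = gauge_invariant_dtheta(theta + e * chi, A + np.diff(chi) / a, e, a)
print(d1, d2)   # equal up to floating-point rounding
```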
We have simulated the effects of the finite conductivity of the plasma, by adding on to the right-hand side of the gauge field equations (17) a conduction current $`j_c^\mu `$, whose spatial part is given by $$𝐣_𝐜=\sigma 𝐄.$$ (20) The corresponding charge density $`\rho _c`$ is fixed by the continuity relation $`\partial _\mu j_c^\mu =0`$. For large values of the conductivity, it has been shown that the oscillations in the gauge-invariant phase difference, which took place in fast-moving bubbles with $`\sigma =0`$, are exponentially damped. Figure 6 (a) shows the evolution of the gauge-invariant phase difference in this case, where the walls are moving at the speed of light, for three different values of the conductivity $`\sigma `$. We confirm that, as $`\sigma `$ increases, the phase oscillations are more heavily suppressed, with practically no oscillations occurring for $`\sigma \gtrsim 0.5`$. In Figure 6 (b), we present the results of our simulations for slow-moving bubbles. For $`\sigma =0`$, we have the case considered above – heavily suppressed oscillations. Increasing $`\sigma `$ merely serves to increase the suppression of phase oscillations: no new effect is observed. Whereas it is already known that for bubbles moving at the speed of light, phase oscillations can be killed by a high conductivity , it is clear from our work that in slower-moving bubbles, the same effect can be obtained by a much lower value of $`\sigma `$.

### B Extra defect production

An interesting consequence of slower-moving bubbles concerns the issue of ‘extra’ defect production at a two-bubble collision. Hawking, Moss and Stewart first described (by energy considerations) how two true-vacuum bubbles travelling at nearly the speed of light would ‘pass through each other’, leading to the temporary restoration of the spontaneously-broken symmetry in a region between the two bubbles. This is illustrated in Figure 2 – the two bubbles collide, and bounce off of (or pass through) each other, producing a region of $`\mathrm{\Phi }=0`$ false vacuum inside the merged bubbles, which decays via oscillations of the bubble walls into the true-vacuum state. The size of the symmetry-restored region, and the time taken to decay completely to the true vacuum, depend on the initial phase difference between the two bubbles, and the asymmetry parameter $`\epsilon `$. Copeland and Saffin showed how this could lead to the formation of ‘extra’ – in the sense that a defect would not be expected from the initial distribution of phases – flux-tube vortices in a gauge theory, around these regions of temporarily restored symmetry, and hence to an increase in the initial defect density after a phase transition. Our simulations show – see Figure 3 – that dissipation prevents this bouncing, or passing-through, of the bubbles. The excess energy, which would cause the symmetry restoration, is dissipated away by the plasma, and the bubbles simply merge. Thus there is no symmetry-restored region around which a non-zero winding of the phase can occur, and so no ‘extra’ defects would be formed.

## IV Magnetic Fields

Another consequence of first-order phase transitions which may be cosmologically significant is the generation of primordial magnetic fields. Galaxies are observed to have magnetic fields $`B_{\mathrm{gal}}\sim 10^{-6}\mathrm{G}`$, coherent over large scales. 
Given a small initial seed field, a dynamo mechanism, powered by the differential rotation of the galaxy in combination with the small-scale turbulent motion of the ionized gas, could generate the observed galactic fields. Many mechanisms for producing such a seed field have been proposed, one being bubble collisions at a first-order phase transition (see e.g. and references therein). It had been believed that it was not possible to generate a seed field of sufficient magnitude at the electroweak phase transition for the dynamo mechanism to explain galactic magnetic fields as large as $`10^{-6}\mathrm{G}`$. However a recent paper by Davis, Lilley and Törnkvist showed how, in a universe with a low matter density and in particular a positive cosmological constant, a dynamo mechanism may be able to generate observed galactic magnetic fields from a much smaller seed field. As a consequence, electroweak-scale magnetic fields may be viable primordial seed fields, and it is of interest to consider the effect of slow-moving bubble walls and finite plasma conductivity on the generation of magnetic fields. If the gauge fields are set to zero initially, it can be seen from the equations of motion (15), (16) and (17) that non-zero gauge fields can only be generated where there exist spatial phase gradients, that is after the collision of two or more bubbles. After the collision of two bubbles, a loop of magnetic flux is generated around the circle of intersection . The amount of flux generated is given by the integral of the gauge field $`A`$ around any loop which passes outside the bubbles. This is the same for all bubbles, regardless of size or speed $$\oint A_i𝑑x^i=\frac{2\theta _0}{e},$$ (21) and our simulations confirm this. When a third bubble collides, the fluxes combine, and if there is a phase winding of $`2\pi `$ around the centre, one flux quantum $`2\pi /e`$ will be trapped. If there is no plasma conductivity, the magnetic flux generated is free to propagate at the speed of light away from the bubble collision. If the bubbles are expanding at the speed of light, then the fields can disperse no further outwards than the intersection of the bubbles. Copeland, Saffin and Törnkvist , demonstrated how in this case two tubes of flux are produced – a ‘primary flux tube’ at the intersection of the collided bubbles and a smaller, ‘secondary’ peak of opposite direction following behind it – see Figure 7 (a). Our simulations show that when non-zero conductivity is included, this secondary peak does not occur, and all the magnetic flux is concentrated into the primary peak, which is consequently larger – Figure 7 (b). We might expect that in this case, a larger magnetic field would form as all the flux generated by the two-bubble collision is aligned. If, however, the bubbles are moving at speeds less than the speed of light, the flux is able to disperse into the plasma. Unless the bubble nucleation rate is extremely high (when a third bubble might be expected to collide quickly after the initial collision), no magnetic field will be able to form. For slow-moving bubbles, large-enough conductivity prevents the flux from dispersing, freezing it into the plasma. Figure 7 (c) shows the magnetic field strength formed after a collision of two slow-moving bubbles, for $`\sigma =5`$ (in reference an estimate of $`\sigma =T/e^2`$ is given; the temperature at a phase transition is typically $`T\sim \eta `$, and so $`\sigma =5`$ may be realistic). 
It can be seen that for this value of $`\sigma `$, a magnetic field does form, but it is spread through the inside of the bubble, rather than being concentrated in one or two narrow peaks at the wall, as seen in the fast-moving case. The height of the peak is lower by an order of magnitude – this is finite, rather than infinite, conductivity, and so some flux still escapes. We note in passing here that, provided the plasma dynamics which slow down the bubble walls do not affect the bubble nucleation rate, the average number of bubbles nucleated per unit volume by the time the phase transition is completed will increase as the bubble wall velocity decreases. That is, the average bubble radius on collision – the correlation length of the Higgs field $`\xi `$ – decreases as the wall velocity decreases. Since the amount of flux generated at each collision is independent of the bubble radius, slow-moving bubbles will generate more flux, and hence a larger magnetic field when coarse-grained over many bubble radii.
## V Conclusion
In this paper, we have examined the behaviour of colliding true-vacuum bubbles at a first-order cosmological phase transition. In the Abelian Higgs model, strings may form at the collision of three or more bubbles, but since a simultaneous three-bubble collision is very unlikely, the dynamics of the phase inside two-bubble collisions is crucial – if the phase difference between two bubbles can be equilibrated quickly, before the arrival of a third bubble, a topological defect will not form. The most relevant phase transitions to cosmology involve gauge fields coupled to the symmetry-breaking field. In such phase transitions, the speed of the bubble walls will be considerably less than the speed of light, and yet the phase dynamics of slow-moving bubbles in a gauge theory had not been considered previously. We have thus paid particular attention to the evolution of the phase inside collisions of bubbles moving at speeds much lower than the speed of light, in a $`U(1)`$ gauge theory. In the simplest model, with no gauge fields and where the bubble walls accelerate up to the speed of light, the phase difference between two points is found to equilibrate. In models with a global symmetry where the bubble walls move slowly, and in models with a local symmetry where the walls move at the speed of light, decaying phase oscillations have been observed . We find that in a $`U(1)`$ gauge theory with slow-moving bubble walls these oscillations are suppressed. On collision of two bubbles, instantaneous phase equilibration is observed. This would lead to a decrease in the expected initial defect density compared to the other cases. We have illustrated our claims by demonstrating an example of the suppression of defect formation in a local theory, due to nontrivial phase dynamics. When two bubbles collide and merge, there will exist phase gradients across the intersection – in effect, a potential difference. In local theories it is necessary to define a gauge-invariant notion of the phase difference, which involves the gauge fields. Thus in local theories the phase difference may equilibrate through the generation of gauge fields – there is in effect an ‘extra channel’ for the decay of the potential difference created on collision. This explains why we would expect fewer defects in local theories than in global ones. In a local theory, the phase difference between two bubbles is observed to equilibrate more quickly in slower-moving bubbles than in bubbles moving at the speed of light.
This is due to the fact that in slow-moving bubbles the rate of generation of phase gradients is lower, yet the gauge fields are not restricted to propagate at the speed of the bubble wall and are thus able to equilibrate the phase difference more rapidly. If phase equilibration, and hence the suppression of defect formation, is aided by the coupling of gauge fields to the Higgs, it is interesting to ask whether another scalar field $`\chi `$ could have the same effect. In order to be able to dissipate the potential energy in the phase gradient such a field would need to couple to the phase of the Higgs field, but also preserve the U(1) symmetry of the Lagrangian. This can only be achieved (with terms at most quadratic in $`\mathrm{\Phi }`$ and its derivatives) by Higgs couplings proportional to $`\partial ^\mu \chi [\mathrm{\Phi }^{*}\partial _\mu \mathrm{\Phi }-(\partial _\mu \mathrm{\Phi }^{*})\mathrm{\Phi }]=\partial ^\mu \chi [2i\rho ^2\partial _\mu \theta ]`$, but in this case $`\chi `$ is effectively a gauge field (17). It is possible that fermion couplings to the Higgs field would aid phase equilibration, but unfortunately this cannot be simulated easily. We predict that fewer defects will form in gauge theories than in global-symmetry theories, since the phase difference is non-zero for less time after collision in gauge theories. In fact, if the bubble nucleation rate is low enough, it might be possible to effectively rule out the formation of defects solely on phase-dynamical grounds, though percolation could presumably still be achieved. This is a very interesting prospect, which could have significant implications for cosmology – it may be possible, for example, to circumvent the monopole problem without needing inflation if defect formation is dynamically suppressed in this way. It has also been seen how it is unlikely that ‘extra defects’, caused by bubbles bouncing off each other on collision, will be formed in cosmological phase transitions, since the bubbles are retarded sufficiently by the plasma for no such bouncing to occur. First-order phase transitions can also generate a primordial magnetic field, which may seed the galactic dynamo and hence be responsible for the galactic magnetic fields observed today. A simple qualitative analysis suggests that for fast-moving bubble walls, high conductivity (as would be expected in the early universe) would lead to the generation of a larger magnetic field. Where the bubble walls move slower, we have demonstrated that a magnetic field can form if the plasma has non-zero conductivity. In this case, the smaller average bubble radius on collision may cause more flux to be generated, producing a larger magnetic field. This may be significant in helping to beat the lower bound required by the dynamo model in order to produce the observed fields. We have shown qualitatively how we expect the defect-formation probability to be decreased by phase equilibration in two-bubble collisions. It would be interesting to perform a statistical simulation of the type done in , but for gauge-theory phase transitions, to see quantitatively how the defect density is affected by the terminal velocity of the walls or the introduction of gauge fields. We note that the argument given in the Introduction for nucleating bubbles with zero gauge fields does not apply if there already exists a primordial magnetic field before bubble nucleation.
In this case, it is not at all clear what the preferred nucleation field configuration would be, and we believe that a study of bubble nucleation in the presence of a magnetic field would be worthwhile. It is also of some interest to consider the effect on phase dynamics and defect formation of the Hubble expansion, since this acts as a dissipation term on the phase as well as on the bubble walls. We conclude, though, with a summary of our findings. In gauge theories, more defects are formed by fast-moving bubble walls than by slower ones. In global theories, the same is true. For fast-moving bubble walls, more defects are formed in global theories than in local ones. For slow-moving bubble walls, the same is true.
## VI Acknowledgements
We would like to thank T. Kibble, P. Saffin, D. Steer and especially O. Törnkvist for helpful comments and conversations. Computer facilities were provided by the UK National Cosmology Supercomputing Centre in cooperation with Silicon Graphics/Cray Research, supported by HEFCE and PPARC. This work was supported in part by PPARC and an ESF network grant. Support for M.L. was provided by a PPARC studentship and Fitzwilliam College, Cambridge.
# Interpretation of the nonextensivity parameter 𝑞 in some applications of Tsallis statistics and Lévy distributions
## Abstract
The nonextensivity parameter $`q`$ occurring in some of the applications of Tsallis statistics (known also as the index of the corresponding Lévy distribution) is shown to be given, in the $`q>1`$ case, entirely by the fluctuations of the parameters of the usual exponential distribution. PACS numbers: 05.40.Fb 24.60.-k 05.10.Gg Keywords: Nonextensive statistics, Lévy distributions, Thermal models
There is an enormous variety of physical phenomena described most economically (by introducing only one new parameter $`q`$) and adequately by the so-called nonextensive statistics introduced some time ago by Tsallis . They include all situations characterized by long-range interactions, long-range microscopic memories and space-time (and phase-space as well) (multi)fractal structure of the process (cf. for details). The high energy physics applications of nonextensive statistics are quite recent, but already numerous and still growing, cf. Refs. . All examples mentioned above have one thing in common: the central formula employed is the following power-like distribution: $$G_q(x)=C_q\left[1-(1-q)\frac{x}{\lambda }\right]^{\frac{1}{1-q}},$$ (1) which is just a one-parameter generalization of the Boltzmann-Gibbs exponential formula to which it converges for $`q\to 1`$: $$G_{q=1}=g(x)=c\mathrm{exp}\left[-\frac{x}{\lambda }\right]$$ (2) When $`G_q(x)`$ is used as a probability distribution (Lévy distribution) of the variable $`x\in (0,\infty )`$ (which will be the case we are interested in here), the parameter $`q`$ is limited to $`1\le q<2`$. For $`q<1`$, the distribution $`G_q(x)`$ is defined only for $`x\in [0,\lambda /(1-q)]`$. For $`q>1`$ the upper limit comes from the normalization condition (to unity) for $`G_q(x)`$ and from the requirement of positivity of the resulting normalisation constant $`C_q`$. However, if one demands in addition that the mean value of $`G_q(x)`$ be well defined, i.e., that $`\langle x\rangle =\lambda /(3-2q)<\infty `$ for $`x\in (0,\infty )`$, then $`q`$ is further limited to $`1\le q<1.5`$ only. In spite of numerous applications of the Lévy distribution $`G_q(x)`$, the interpretation of the parameter $`q`$ is still an open issue. In this work we shall demonstrate, on the basis of our previous application of the Lévy distribution to cosmic rays , that this Lévy distribution $`G_q(x)`$ (1) emerges in a natural way from the fluctuations of the parameter $`1/\lambda `$ of the original exponential distribution (2) and that the parameters of its distribution $`f(1/\lambda )`$ define the parameter $`q`$ in a unique way. Let us first briefly summarise the result of . Analysing experimental distributions $`dN(x)/dx`$ of the depths $`x`$ of interactions of hadrons from cosmic ray cascades in the emulsion chambers, we have shown that the so-called long flying component (manifesting itself in an apparently unexpected non-exponential behaviour of $`dN(x)/dx`$) is just a manifestation of the Lévy distribution $`G_q(x)`$ with $`q=1.3`$. This result must be confronted with our earlier analysis of the same phenomenon . We have demonstrated there that the distributions $`dN(x)/dx`$ can also be described by fluctuations of the corresponding cross-section $`\sigma =Am_N\frac{1}{\lambda }`$ (where $`A`$ denotes the mass number of the target, $`m_N`$ is the mass of the nucleon and $`\lambda `$ is the corresponding mean free path).
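As a quick numerical illustration of eqs. (1) and (2) (this sketch is ours and not part of the original analysis), the code below evaluates $`G_q(x)`$ for several values of $`q>1`$ and checks its convergence to the exponential formula as $`q\to 1`$. The normalization $`C_q=(2-q)/\lambda `$ assumed here follows from requiring $`\int _0^{\infty }G_q(x)dx=1`$ for $`1<q<2`$.

```python
import numpy as np

def G_q(x, q, lam=1.0):
    """Power-like (Tsallis/Levy) distribution of eq. (1), normalized on (0, inf) for 1 < q < 2."""
    C_q = (2.0 - q) / lam  # from the normalization condition (our derivation)
    return C_q * (1.0 - (1.0 - q) * x / lam) ** (1.0 / (1.0 - q))

def g(x, lam=1.0):
    """Boltzmann-Gibbs limit of eq. (2), with c = 1/lambda for unit normalization."""
    return np.exp(-x / lam) / lam

x = np.linspace(0.0, 10.0, 201)
for q in (1.3, 1.1, 1.01, 1.001):
    print(f"q = {q:5.3f}   max |G_q - g| = {np.max(np.abs(G_q(x, q) - g(x))):.4f}")
# The maximal deviation shrinks as q -> 1, i.e. eq. (1) reduces to eq. (2).
```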
The fluctuations of this cross-section (i.e., in effect, fluctuations of the quantity $`1/\lambda `$) with relative variance $$\omega =\frac{\langle \sigma ^2\rangle -\langle \sigma \rangle ^2}{\langle \sigma \rangle ^2}\simeq 0.2$$ (3) allow us to describe the non-exponentiality of the experimental data as well as the distribution $`G_{q=1.3}(x)`$ mentioned above does. We therefore argue that these two numerical examples show that fluctuations of the parameter $`1/\lambda `$ in $`g(x;\lambda )`$ result in the Lévy distributions $`G_q(x;\lambda )`$. Actually the above quoted example from cosmic ray physics is not the only one known at present in the field of high energy collisions. It turns out that distributions of transverse momenta $`dN(p_T)/dp_T`$ are best described by a slightly non-exponential distribution $`G_q(p_T)`$ of the Lévy type with $`q=1.01÷1.2`$ depending on the situation considered . The usual exponential distribution $`dN(p_T)/dp_T=g(p_T)\propto \mathrm{exp}\left(-\sqrt{m^2+p_T^2}/kT\right)`$ contains as its main parameter the inverse temperature $`\beta =1/kT`$, and the above mentioned numerical results leading to $`G_{q=1.01÷1.2}(p_T)`$ can again be understood as a result of fluctuations of the inverse temperature $`\beta `$ in the usual exponential formula $`g(p_T)`$. This point is of special interest because of recent discussions on the dynamical possibility of temperature fluctuations in some collisions, cf. Ref. . Later on we shall use it to illustrate our results concerning $`q`$. To recapitulate: we claim that (for $`q>1`$) the parameter $`q`$ is nothing but a measure of the fluctuations present in the Lévy distributions $`G_q(x)`$ describing the particular processes under consideration. To make our statement more quantitative, let us analyse the influence of fluctuations of the parameter $`1/\lambda `$, which are present in the exponential formula $`g(x)\propto \mathrm{exp}(-x/\lambda )`$, on the final result. Our aim will be the deduction of the form of the function $`f(1/\lambda )`$ which leads from the exponential distribution $`g(x)`$ to the power-like Lévy distribution $`G_q(x)`$ and which describes fluctuations about the mean value $`1/\lambda _0`$, i.e., such that $$G_q(x;\lambda _0)=C_q\left(1+\frac{x}{\lambda _0}\frac{1}{\alpha }\right)^{-\alpha }=C_q\int _0^{\infty }\mathrm{exp}\left(-\frac{x}{\lambda }\right)f\left(\frac{1}{\lambda }\right)d\left(\frac{1}{\lambda }\right)$$ (4) where for simplicity we have introduced the abbreviation $`\alpha =\frac{1}{q-1}`$. From the representation of the Euler gamma function we have $$\left(1+\frac{x}{\lambda _0}\frac{1}{\alpha }\right)^{-\alpha }=\frac{1}{\mathrm{\Gamma }(\alpha )}\int _0^{\infty }d\xi \xi ^{\alpha -1}\mathrm{exp}\left[-\xi \left(1+\frac{x}{\lambda _0}\frac{1}{\alpha }\right)\right].$$ (5) Changing variables under the integral in such a way that $`\frac{\xi }{\lambda _0}\frac{1}{\alpha }=\frac{1}{\lambda }`$ one immediately obtains eq.
(4) with $`f(1/\lambda )`$ given by the following gamma distribution $$f\left(\frac{1}{\lambda }\right)=f_\alpha \left(\frac{1}{\lambda },\frac{1}{\lambda _0}\right)=\frac{1}{\mathrm{\Gamma }(\alpha )}(\alpha \lambda _0)\left(\frac{\alpha \lambda _0}{\lambda }\right)^{\alpha -1}\mathrm{exp}\left(-\frac{\alpha \lambda _0}{\lambda }\right)$$ (6) with mean value $$\left\langle \frac{1}{\lambda }\right\rangle =\frac{1}{\lambda _0}$$ (7) and variance $$\left\langle \left(\frac{1}{\lambda }\right)^2\right\rangle -\left\langle \frac{1}{\lambda }\right\rangle ^2=\frac{1}{\alpha \lambda _0^2}.$$ (8) Notice that, with increasing $`\alpha `$, the variance (8) decreases and asymptotically (for $`\alpha \to \infty `$, i.e., for $`q\to 1`$) the gamma distribution (6) becomes a delta function $`\delta (\lambda -\lambda _0)`$. The relative variance (cf. eq. (3)) for this distribution is given by $$\omega =\frac{\left\langle \left(\frac{1}{\lambda }\right)^2\right\rangle -\left\langle \frac{1}{\lambda }\right\rangle ^2}{\left\langle \frac{1}{\lambda }\right\rangle ^2}=\frac{1}{\alpha }=q-1.$$ (9) We see therefore that, indeed, the parameter $`q`$ in the Lévy distribution $`G_q(x)`$ describes the relative variance of the parameter $`1/\lambda `$ present in the exponential distribution $`g(x;\lambda )`$. Some remarks on the numerical results quoted before are in order here. Notice that the value of $`q=1.3`$ for the cosmic ray distribution $`dN(x)/dx`$ obtained in leads to the relative variance of the cross section $`\omega =0.3`$, whereas in we have reported the value $`\omega ^{}=0.2`$. This discrepancy has its origin in the fact that in the numerical calculations in we have used a symmetric Gaussian distribution to describe the fluctuations of the cross section, whereas the relation (9) has been obtained for fluctuations described by the gamma distribution. In the Gaussian approximation we expect that $$\frac{q-1}{q^2}<\omega ^{}<q-1,$$ (10) where the lower and upper limits are obtained by normalizing the variance of the $`f(1/\lambda )`$ distribution to the modal (equal to $`(2-q)/\lambda _0`$) and mean (equal to $`1/\lambda _0`$) values, respectively. Therefore for $`q=1.3`$ one should expect that $`0.18<\omega ^{}<0.3`$, which is exactly the case. Let us now proceed to the above mentioned analysis of transverse momentum distributions in heavy ion collisions, $`dN(p_T)/dp_T`$ . It is interesting to notice that the relatively small value $`q\simeq 1.015`$ of the nonextensivity parameter obtained there, if interpreted in the same spirit as above, indicates that rather large relative fluctuations of temperature, of the order of $`\mathrm{\Delta }T/T\simeq 0.12`$, exist in nuclear collisions. It could mean therefore that we are dealing here with some fluctuations existing in small parts of the system with respect to the whole system (according to the interpretation of ) rather than with fluctuations of the event-by-event type in which, for large multiplicity $`N`$, fluctuations $`\mathrm{\Delta }T/T=0.06/\sqrt{N}`$ should be negligibly small . We shall now propose a general explanation of the meaning of the function $`f(\chi )`$ describing fluctuations of some variable $`\chi `$. In particular, we shall be interested in the question of why, and under what circumstances, it is the gamma distribution that describes the fluctuations.
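The chain of eqs. (4)–(9) can be verified directly by Monte Carlo: drawing $`1/\lambda `$ from the gamma distribution (6) and averaging $`\mathrm{exp}(-x/\lambda )`$ over the samples must reproduce the power-like left-hand side of eq. (4), with relative variance $`\omega =q-1`$. The following sketch (ours; the parameter values are arbitrary) does exactly this.

```python
import numpy as np

rng = np.random.default_rng(0)

q, lam0 = 1.3, 1.0
alpha = 1.0 / (q - 1.0)

# Sample y = 1/lambda from the gamma distribution of eq. (6):
# shape alpha and mean 1/lam0, i.e. scale 1/(alpha*lam0).
y = rng.gamma(shape=alpha, scale=1.0 / (alpha * lam0), size=2_000_000)

print(f"relative variance omega = {y.var() / y.mean()**2:.4f}  (eq. (9): q - 1 = {q - 1})")
for x in (0.5, 1.0, 2.0, 5.0):
    mc = np.mean(np.exp(-x * y))                    # r.h.s. of eq. (4), Monte Carlo
    exact = (1.0 + x / (alpha * lam0)) ** (-alpha)  # l.h.s. of eq. (4)
    print(f"x = {x:4.1f}   Monte Carlo = {mc:.5f}   closed form = {exact:.5f}")
```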
To this end let us start with the well known equation for the variable $`\chi `$, which in the Langevin formulation has the following form $$\frac{d\chi }{dt}+\left[\frac{1}{\tau }+\xi (t)\right]\chi =\varphi =\mathrm{const}>0.$$ (11) Let us concentrate for our purposes on the stochastic process defined by the white Gaussian noise $`\xi (t)`$ with ensemble mean $$\langle \xi (t)\rangle =0$$ (12) and correlator $`\langle \xi (t)\xi (t+\mathrm{\Delta }t)\rangle `$, which for sufficiently fast changes is equal to $$\langle \xi (t)\xi (t+\mathrm{\Delta }t)\rangle =2D\delta (\mathrm{\Delta }t).$$ (13) The constants $`\tau `$ and $`D`$ define, respectively, the mean time for changes and their variance by means of the following conditions: $$\langle \chi (t)\rangle =\chi _0\mathrm{exp}\left(-\frac{t}{\tau }\right)\mathrm{and}\langle \chi ^2(t=\infty )\rangle =\frac{1}{2}D\tau .$$ (14) Thermodynamical equilibrium is assumed here (i.e., $`t\gg \tau `$, in which case the influence of the initial condition $`\chi _0`$ vanishes and the mean square of $`\chi `$ has the value corresponding to the state of equilibrium). Making use of the Fokker-Planck equation $$\frac{df(\chi )}{dt}=-\frac{\partial }{\partial \chi }K_1f(\chi )+\frac{1}{2}\frac{\partial ^2}{\partial \chi ^2}K_2f(\chi )$$ (15) we get for the distribution function the following expression $$f(\chi )=\frac{c}{K_2(\chi )}\mathrm{exp}\left[2\int _0^\chi d\chi ^{}\frac{K_1(\chi ^{})}{K_2(\chi ^{})}\right]$$ (16) where the constant $`c`$ is defined by the normalisation condition for $`f(\chi )`$: $`\int _0^{\infty }d\chi f(\chi )=1`$. $`K_1`$ and $`K_2`$ are the intensity coefficients which for the process defined by eq. (11) are equal to (cf., for example, ): $`K_1(\chi )`$ $`=`$ $`\varphi -2{\displaystyle \frac{\chi }{\tau }}+D\chi ,`$ $`K_2(\chi )`$ $`=`$ $`2D\chi ^2.`$ (17) It means therefore that as a result we have the following distribution function $$f(\chi )=\frac{1}{\mathrm{\Gamma }(\alpha )}\mu \left(\frac{\mu }{\chi }\right)^{\alpha -1}\mathrm{exp}\left(-\frac{\mu }{\chi }\right),$$ (18) which is nothing but a gamma distribution of the variable $`1/\chi `$ depending on two parameters: $$\mu =\frac{\varphi }{D}\mathrm{and}\alpha =\frac{1}{\tau D}.$$ (19) Returning to the $`q`$-notation (cf. eq. (4)) we have therefore $$q=1+\tau D,$$ (20) i.e., the parameter of nonextensivity is given by the parameter $`D`$ describing the white noise and by the damping constant $`\tau `$. This means then that the relative variance $`\omega (1/\chi )`$ of the distribution (18) is (as in eq. (9)) given by $`\tau D`$. As an illustration of the genesis of eq. (11) used to derive eq. (20), we turn once more to the fluctuations of temperature discussed before (i.e., to the situation when $`\chi =T`$). Suppose that we have a thermodynamic system, in a small (mentally separated) part of which the temperature fluctuates with $`\mathrm{\Delta }T\ll T`$. Let $`\xi (t)`$ describe the stochastic changes of temperature in time. If the mean temperature of the system is $`\langle T\rangle =T_0`$ then, as a result of fluctuations in some small selected region, the actual temperature $`T^{}`$ equals $$T^{}=T_0-b\xi (t)T,$$ (21) where the constant $`b`$ is defined by the actual definition of the stochastic process under consideration, i.e., by $`\xi (t)`$, which is assumed to satisfy the conditions given by eqs. (12) and (13). The inevitable exchange of heat between this selected region and the rest of the system leads to the equilibration of the temperature.
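Before completing the temperature illustration, eqs. (11), (19) and (20) can be cross-checked numerically. The sketch below (ours) integrates eq. (11) with the Heun scheme, treating the multiplicative noise in the Stratonovich sense (this interpretation is our assumption), and compares the stationary relative variance of $`1/\chi `$ with the prediction $`q-1=\tau D`$.

```python
import numpy as np

rng = np.random.default_rng(1)

tau, D, phi = 1.0, 0.05, 1.0         # illustrative values (our choice)
dt, n_steps, n_paths = 1e-3, 200_000, 400

a = lambda c: phi - c / tau           # deterministic drift of eq. (11)
b = lambda c: -np.sqrt(2.0 * D) * c   # noise amplitude, from <xi(t) xi(t')> = 2 D delta(t-t')

chi = np.full(n_paths, phi * tau)     # start at the deterministic fixed point
samples = []
for step in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    pred = chi + a(chi) * dt + b(chi) * dW                                      # Euler predictor
    chi = chi + 0.5 * (a(chi) + a(pred)) * dt + 0.5 * (b(chi) + b(pred)) * dW   # Heun corrector
    if step > n_steps // 2 and step % 100 == 0:   # discard transient, thin the chain
        samples.append(1.0 / chi)

y = np.concatenate(samples)
print(f"relative variance of 1/chi : {y.var() / y.mean()**2:.4f}")
print(f"tau*D (= q - 1 by eq. (20)): {tau * D:.4f}")
```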
The corresponding process of heat conductance is described by the following equation $$c_p\rho \frac{\partial T}{\partial t}-a(T^{}-T)=0,$$ (22) where $`c_p,\rho `$ and $`a`$ are, respectively, the specific heat, density and the coefficient of the external conductance. Using $`T^{}`$ as defined in (21) we finally get the linear differential equation (11) for the temperature $`T`$ with the coefficients $`\tau =\frac{c_p\rho }{a}`$, $`\varphi =\frac{a}{c_p\rho }T_0=T_0/\tau `$ and $`b=\tau `$: $$\frac{\partial T}{\partial t}+\left[\frac{a}{c_p\rho }+\frac{a}{c_p\rho }b\xi (t)\right]T=\frac{a}{c_p\rho }T_0.$$ (23) This result demonstrates clearly that one can think of a deep physical interpretation of the parameter $`q`$ of the corresponding Lévy distribution describing the distributions of the transverse momenta mentioned before. In this respect our work differs from works in which $`G_q(x)`$ is shown to be connected with $`G_{q=1}(x)=g(x)`$ by the so-called Hilhorst integral formula (the trace of which is our eq. (5)) but without discussing the physical context of the problem. Our original motivation was to understand the apparent success of Tsallis statistics (i.e., the situations in which $`q>1`$) in the realm of high energy collisions. To summarise: if fluctuations of the variable $`\chi `$ can be described in terms of the Langevin formulation, their distribution function $`f(1/\chi )`$ satisfies the Fokker-Planck equation and is therefore given by the gamma distribution in the variable $`1/\chi `$. Such fluctuations of the parameter $`1/\chi `$ in the exponential formula of physical interest, $`g(x/\chi )`$, lead immediately to a Lévy distribution $`G_{q>1}(x/\chi )`$ with the $`q`$ parameter given by the relative variance of the fluctuations described by $`f(1/\chi )`$. It should be stressed that in this way we address the interpretation of only a very limited class of applications of Tsallis statistics. They belong to the category in which the power laws physically appear as a consequence of some continuous spectra within appropriate integrals. It does not touch, however, the really hard case of the applicability of Tsallis statistics, namely when zero Lyapunov exponents are involved . Nevertheless, this allows us to interpret some nuclear collision data in terms of fluctuations of the inverse temperature, providing thus an important hint to the origin of some systematics in the data, understanding of which is crucial in the search for the new state of matter: the Quark Gluon Plasma . Acknowledgement: We are grateful to Prof. St. Mrówczyński for fruitful discussions and comments.
# On spaces Baire isomorphic to the powers of the real line
## 1. Introduction
It is a classical result that any two uncountable Polish spaces are Borel isomorphic. In particular, the countable powers $`\text{ℝ}^\omega `$, $`\text{𝕀}^\omega `$, $`\text{ℕ}^\omega `$ and $`\text{𝔻}^\omega `$ of the real line ℝ, of a closed interval 𝕀, of the discrete space ℕ of natural numbers and of the two-point discrete space 𝔻 are all Borel isomorphic. Topologically the above mentioned spaces of course differ. Characterizations of $`\text{ℕ}^\omega `$ (the space of irrational numbers, ) and $`\text{𝔻}^\omega `$ (the Cantor cube, ) have been obtained during the early stages of the development of general topology, whereas the characterizations of the spaces $`\text{ℝ}^\omega `$ (the separable Hilbert space, ) and $`\text{𝕀}^\omega `$ (the Hilbert cube, ) have been obtained relatively recently by means of powerful methods of modern infinite-dimensional topology. Later these results have been extended in order to obtain topological characterizations of the uncountable powers $`\text{ℝ}^\tau `$, $`\text{𝕀}^\tau `$, $`\text{ℕ}^\tau `$ and $`\text{𝔻}^\tau `$. It turned out that the characterizing properties of the above spaces, which at first glance do not have anything in common, are in fact of the same nature and all of them can be described in a unified way in terms of certain universality properties (or, equivalently, in terms of far-going generalizations of the concept of “general position”). As an illustration we recall the corresponding results.
###### Theorem (\[3, Theorem 7.3.3\]). Let $`\tau >\omega `$. The following conditions are equivalent for any $`AR`$-space $`X`$ of weight $`\tau `$: 1. $`X`$ is homeomorphic to $`\text{ℝ}^\tau `$. 2. For each space $`Y`$ of ℝ-weight $`\le \tau `$, the set of $`C`$-embeddings is dense in the space $`C_\tau (Y,X)`$ (a precise definition of the space $`C_\tau (Y,X)`$ is given in Section 3). 3. For each space $`Y`$ of ℝ-weight $`<\tau `$, the set of $`C`$-embeddings is dense in the space $`C_\tau (Y,X)`$. 4. For each space $`Y`$ of ℝ-weight $`<\tau `$ the subset $$\{f\in C_\tau (Y\times \text{ℕ},X):\text{the collection}\{f(Y\times \{n\}):n\in \text{ℕ}\}\text{is discrete in}X\}$$ is dense in the space $`C_\tau (Y\times \text{ℕ},X)`$.
###### Theorem (Ščepin, \[3, Theorem 7.2.8\]). Let $`\tau >\omega `$. The following conditions are equivalent for any compact $`AR`$-space $`X`$ of weight $`\tau `$: 1. $`X`$ is homeomorphic to $`\text{𝕀}^\tau `$. 2. For each compact space $`Y`$ of weight $`\le \tau `$, the set of embeddings is dense in the space $`C_\tau (Y,X)`$. 3. For each compact space $`Y`$ of weight $`<\tau `$, the set of embeddings is dense in the space $`C_\tau (Y,X)`$. 4. For each compact space $`Y`$ of weight $`<\tau `$ the subset $$\{f\in C_\tau (Y\times \text{𝔻},X):f(Y\times \{0\})\cap f(Y\times \{1\})=\varnothing \}$$ is dense in the space $`C_\tau (Y\times \text{𝔻},X)`$. 5. $`X`$ is homogeneous with respect to the character, i.e. $`\chi (x,X)=\tau `$ for each $`x\in X`$.
###### Theorem (\[3, Theorem 8.1.4\]). Let $`\tau >\omega `$. The following conditions are equivalent for any zero-dimensional (in the sense of $`dim`$) $`AE(0)`$-space $`X`$ of weight $`\tau `$: 1. $`X`$ is homeomorphic to $`\text{ℕ}^\tau `$. 2. For each zero-dimensional space $`Y`$ of ℝ-weight $`\le \tau `$, the set of $`C`$-embeddings is dense in the space $`C_\tau (Y,X)`$. 3. For each zero-dimensional space $`Y`$ of ℝ-weight $`<\tau `$, the set of $`C`$-embeddings is dense in the space $`C_\tau (Y,X)`$. 4.
For each zero-dimensional space $`Y`$ of ℝ-weight $`<\tau `$ the subset $$\{f\in C_\tau (Y\times \text{ℕ},X):\text{the collection}\{f(Y\times \{n\}):n\in \text{ℕ}\}\text{is discrete in}X\}$$ is dense in the space $`C_\tau (Y\times \text{ℕ},X)`$.
###### Theorem (Ščepin, \[3, Theorem 8.1.6\]). Let $`\tau >\omega `$. The following conditions are equivalent for any zero-dimensional compact $`AE(0)`$-space $`X`$ of weight $`\tau `$: 1. $`X`$ is homeomorphic to $`\text{𝔻}^\tau `$. 2. For each zero-dimensional compact space $`Y`$ of weight $`\le \tau `$, the set of embeddings is dense in the space $`C_\tau (Y,X)`$. 3. For each zero-dimensional compact space $`Y`$ of weight $`<\tau `$, the set of embeddings is dense in the space $`C_\tau (Y,X)`$. 4. For each zero-dimensional compact space $`Y`$ of weight $`<\tau `$ the subset $$\{f\in C_\tau (Y\times \text{𝔻},X):f(Y\times \{0\})\cap f(Y\times \{1\})=\varnothing \}$$ is dense in the space $`C_\tau (Y\times \text{𝔻},X)`$. 5. $`X`$ is homogeneous with respect to the character, i.e. $`\chi (x,X)=\tau `$ for each $`x\in X`$.
Conditions 1–3 in the above theorems remain equivalent in the metrizable case as well and are just reformulations of the above cited results , , and , respectively (but as stated the first two results are false in the case $`\tau =\omega `$). Conditions 4 are variations of the “general position” properties mentioned above. It should also be emphasized that all four results are obtained by using spectral technique and the general theory of $`AE(n)`$-spaces and $`n`$-soft maps. In the light of this discussion one would expect that the very first result cited in this article (every uncountable Polish space is Borel isomorphic to $`\text{ℝ}^\omega `$) also admits a reasonable extension to the non-metrizable case, and one might ask: what are the spaces Baire isomorphic to $`\text{ℝ}^\tau `$ for uncountable $`\tau `$? (Partial results in this direction are contained in , where the answer to the above question has been obtained for spaces representable as the limits of well-ordered spectra with perfect projections. The class of such spaces contains no uncountable power of a noncompact Polish space, let alone $`\text{ℝ}^\tau `$ itself.) Below we present a complete solution of this problem for $`AE(0)`$-spaces. Interestingly enough, the main characterizing property (see conditions 2 and 3 of Theorem 3.1) is of the same nature as conditions 4 in the above cited results.
## 2. Auxiliary results
In this section we present statements which will be used later in the proof of Theorem 3.1. All spaces are assumed to be Tychonov. Separable and completely metrizable spaces are referred to as Polish spaces. Baire sets are generated by zero-sets in the same way as Borel sets are generated by closed ones. For metrizable spaces every Borel set is a Baire set and the converse is true always. We assume familiarity with the standard spectral techniques based on Ščepin’s Spectral Theorem. We use as the main source for references, where all the necessary technical details and a variety of related results can be found.
###### Lemma 2.1. Let $`p:X\to Y`$ be an open surjection of Polish spaces. Then there exists a Borel map $`q:Y\to X`$ such that $`pq=\mathrm{id}_Y`$. If, in addition, $`|p^{-1}(y)|\ge 2`$ for each $`y\in Y`$, then there exist Borel maps $`q_1,q_2:Y\to X`$ such that $`pq_i=\mathrm{id}_Y`$, $`i=1,2`$, $`q_1(Y)\cap q_2(Y)=\varnothing `$ and $`q_i(Y)`$ is a Borel subset of $`X`$, $`i=1,2`$. ###### Proof.
Let $`f:\stackrel{~}{Y}\to Y`$ be a continuous one-to-one Borel isomorphism, where $`\stackrel{~}{Y}`$ is a zero-dimensional Polish space . Next consider the pullback square $$\begin{array}{ccc}\stackrel{~}{X}& \stackrel{\stackrel{~}{f}}{\to }& X\\ {\scriptstyle \stackrel{~}{p}}\downarrow & & \downarrow {\scriptstyle p}\\ \stackrel{~}{Y}& \stackrel{f}{\to }& Y\end{array}$$ where $`\stackrel{~}{X}=\{(\stackrel{~}{y},x)\in \stackrel{~}{Y}\times X:f(\stackrel{~}{y})=p(x)\}`$ and the maps $`\stackrel{~}{p}:\stackrel{~}{X}\to \stackrel{~}{Y}`$ and $`\stackrel{~}{f}:\stackrel{~}{X}\to X`$ are the restrictions of the natural projections $`\pi _1:\stackrel{~}{Y}\times X\to \stackrel{~}{Y}`$ and $`\pi _2:\stackrel{~}{Y}\times X\to X`$ onto $`\stackrel{~}{X}`$ respectively. Since the above diagram is a pullback, it follows that $`\stackrel{~}{p}`$ is an open surjection and $`\stackrel{~}{f}`$ is a continuous one-to-one Borel isomorphism. Since $`dim\stackrel{~}{Y}=0`$ and since $`\stackrel{~}{p}`$ is open, there exists a continuous map $`g_1:\stackrel{~}{Y}\to \stackrel{~}{X}`$ such that $`\stackrel{~}{p}g_1=\mathrm{id}_{\stackrel{~}{Y}}`$. Clearly the Borel map $`q=q_1=\stackrel{~}{f}g_1f^{-1}:Y\to X`$ satisfies the equality $`pq=\mathrm{id}_Y`$. This proves the first part of the Lemma. Obviously, $`g_1(\stackrel{~}{Y})`$ is a closed subset of $`\stackrel{~}{X}`$. Now consider the Polish space $`\stackrel{~}{X}\setminus g_1(\stackrel{~}{Y})`$. Clearly the map $`\stackrel{~}{p}|(\stackrel{~}{X}\setminus g_1(\stackrel{~}{Y})):\stackrel{~}{X}\setminus g_1(\stackrel{~}{Y})\to \stackrel{~}{Y}`$ is open and surjective (since $`|\stackrel{~}{p}^{-1}(\stackrel{~}{y})|\ge 2`$ for each $`\stackrel{~}{y}\in \stackrel{~}{Y}`$). As above, there exists a continuous map $`g_2:\stackrel{~}{Y}\to \stackrel{~}{X}\setminus g_1(\stackrel{~}{Y})`$ such that $`\stackrel{~}{p}g_2=\mathrm{id}_{\stackrel{~}{Y}}`$. Note that $`g_2(\stackrel{~}{Y})`$ is also closed in $`\stackrel{~}{X}`$ and $`g_1(\stackrel{~}{Y})\cap g_2(\stackrel{~}{Y})=\varnothing `$. Finally let $`q_i=\stackrel{~}{f}g_if^{-1}:Y\to X`$, $`i=1,2`$. ∎
###### Lemma 2.2. Let $`p:X\to Y`$ be a surjective continuous map of Polish spaces. Suppose that there exists a Borel subset $`M`$ of $`X`$ such that for each $`y\in Y`$ the subspace $`M\cap p^{-1}(y)`$ is a topological copy of the Cantor cube. Then there exists a Borel isomorphism $`h:Y\times \text{ℝ}^\omega \to X`$ such that $`ph=\pi _1`$, where $`\pi _1:Y\times \text{ℝ}^\omega \to Y`$ denotes the projection onto the first coordinate. ###### Proof. Identify $`X`$ with the graph of the map $`p`$, i.e. $`X=\{(x,y)\in X\times Y:p(x)=y\}`$. Clearly $`X`$ is a closed subset of the product $`X\times Y`$ and the map $`p`$ coincides with the restriction of the projection $`\pi _1:X\times Y\to Y`$ onto $`X`$. By our assumption, there exists a Borel subset $`M`$ of $`X`$ such that $`\pi _1^{-1}(y)\cap M`$ is a copy of the Cantor set. By the main result of , $`X`$ has a Borel parametrization. Since any two uncountable Polish spaces are Borel isomorphic, the latter means that there exists a Borel isomorphism $`h:Y\times \text{ℝ}^\omega \to X`$ such that $`\pi _1h=\pi _1`$. Since $`p=\pi _1|X`$, it follows that $`ph=\pi _1`$ as required. ∎
###### Lemma 2.3. Let $`𝒮=\{X_n,p_n^{n+1},\omega \}`$ be an inverse sequence consisting of Polish spaces $`X_n`$ and open surjective projections $`p_n^{n+1}:X_{n+1}\to X_n`$.
If $`\left|\left(p_n^{n+1}\right)^{-1}(x)\right|\ge 2`$ for each $`n\in \omega `$ and each $`x\in X_n`$, then there exists a Borel isomorphism $`h:X_0\times \text{ℝ}^\omega \to lim𝒮`$ such that $`\pi _1=p_0h`$, where $`p_0:lim𝒮\to X_0`$ is the limit projection of the spectrum $`𝒮`$ and $`\pi _1:X_0\times \text{ℝ}^\omega \to X_0`$ is the projection onto the first coordinate. ###### Proof. According to Lemma 2.1, for each $`n\in \omega `$ there exist Borel maps $`q_1^n,q_2^n:X_n\to X_{n+1}`$ such that $`p_n^{n+1}q_i^n=\mathrm{id}_{X_n}`$, $`i=1,2`$, $`q_1^n(X_n)\cap q_2^n(X_n)=\varnothing `$ and $`q_i^n(X_n)`$ is a Borel subset of $`X_{n+1}`$, $`i=1,2`$. Let $`M_0=X_0`$ and suppose that for each $`m\le n`$ we have already constructed a subset $`M_m\subseteq X_m`$ so that the following conditions are satisfied: * (a) $`M_m`$ is a Borel subset of $`X_m`$ whenever $`0\le m\le n`$. * (b) $`p_m^{m+1}(M_{m+1})=M_m`$ whenever $`0\le m\le n-1`$. * (c) $`\left|M_{m+1}\cap \left(p_m^{m+1}\right)^{-1}(x_m)\right|=2`$ whenever $`x_m\in M_m`$ and $`0\le m\le n-1`$. We let $`M_{n+1}=q_1^n(M_n)\cup q_2^n(M_n)`$. Since $`q_1^n`$ and $`q_2^n`$ are Borel maps, it follows that $`M_{n+1}`$ is a Borel subset of $`X_{n+1}`$. Conditions (b)ₙ₊₁ and (c)ₙ₊₁ are also satisfied by the construction. This completes the inductive step and consequently we may assume that the Borel sets $`M_n\subseteq X_n`$, satisfying conditions (a)ₙ–(c)ₙ, have been constructed for each $`n\in \omega `$. Next consider the subset $`M=\bigcap \{p_n^{-1}(M_n):n\in \omega \}`$. Conditions (a)ₙ, $`n\in \omega `$, imply that $`M`$ is a Borel subset of $`lim𝒮`$. Conditions (b)ₙ and (c)ₙ, $`n\in \omega `$, ensure that $`M\cap p_0^{-1}(x_0)`$ is a topological copy of the Cantor cube for each $`x_0\in X_0`$. Finally, by Lemma 2.2 (with $`X=lim𝒮`$, $`Y=X_0`$ and $`p=p_0`$), we conclude the existence of the required Borel isomorphism $`h:X_0\times \text{ℝ}^\omega \to lim𝒮`$. ∎
###### Proposition 2.4. Let $`𝒮=\{X_n,p_n^{n+1},\omega \}`$ be an inverse sequence consisting of $`AE(0)`$-spaces $`X_n`$ and $`0`$-soft projections $`p_n^{n+1}:X_{n+1}\to X_n`$ which have Polish kernels. If $`\left|\left(p_n^{n+1}\right)^{-1}(x)\right|\ge 2`$ for each $`n\in \omega `$ and each $`x\in X_n`$, then there exists a Baire isomorphism $`h:X_0\times \text{ℝ}^\omega \to lim𝒮`$ such that $`\pi _1=p_0h`$, where $`p_0:lim𝒮\to X_0`$ is the limit projection of the spectrum $`𝒮`$ and $`\pi _1:X_0\times \text{ℝ}^\omega \to X_0`$ is the projection onto the first coordinate. ###### Proof. If $`w(X_0)=\omega `$, then all spaces $`X_n`$, $`n\in \omega `$, are Polish and the statement follows from Lemma 2.3. Thus we may assume that $`w(X_0)=\tau >\omega `$. Since each short projection of the spectrum $`𝒮`$ has a Polish kernel, we conclude that $`w(X_n)=w(X_0)=\tau `$ for each $`n\in \omega `$. Represent the space $`X_n`$, $`n\in \omega `$, as the limit space of a factorizing $`\omega `$-spectrum $`𝒮_n=\{X_\alpha ^n,q_\alpha ^{\beta ,n},A\}`$, consisting of Polish spaces and $`0`$-soft limit projections \[3, Theorem 6.3.2\]. Observe, in the meantime, that the indexing sets of all these spectra coincide with $`A`$ (which has cardinality $`\tau `$). Since all short projections $`p_n^{n+1}`$ of the spectrum $`𝒮`$ are $`0`$-soft and have Polish kernels, we see, by \[3, Theorem 6.3.2(iv)\], that $`p_n^{n+1}`$, $`n\in \omega `$, is the limit of some Cartesian morphism $$M_n^{n+1}=\{p_n^{n+1,\alpha }:X_\alpha ^{n+1}\to X_\alpha ^n,A_n\}:𝒮_{n+1}|A_n\to 𝒮_n|A_n,$$ consisting of open maps between Polish spaces, where $`A_n`$ is a cofinal and $`\omega `$-closed subset of the indexing set $`A`$.
Let $`B=\bigcap \{A_n:n\in \omega \}`$ and note that $`B`$ is still a cofinal and $`\omega `$-closed subset of $`A`$ \[3, Proposition 1.1.27\]. In particular, $`B\ne \varnothing `$. For each $`\alpha \in B`$ consider the inverse sequence $`𝒮_\alpha =\{X_\alpha ^n,p_n^{n+1,\alpha },\omega \}`$ and let $`X_\alpha =lim𝒮_\alpha `$. If $`\beta \ge \alpha `$, $`\alpha ,\beta \in B`$, then there is a Cartesian morphism $$M_\alpha ^\beta =\{q_\alpha ^{\beta ,n}:X_\beta ^n\to X_\alpha ^n,\omega \}:𝒮_\beta \to 𝒮_\alpha ,$$ consisting of open surjective maps $`q_\alpha ^{\beta ,n}`$, $`n\in \omega `$. Denote by $`q_\alpha ^\beta `$ the limit map of the morphism $`M_\alpha ^\beta `$. Thus, the following infinite commutative diagram arises. Straightforward verification shows that the limit space of the spectrum $`𝒮^{}=\{X_\alpha ,q_\alpha ^\beta ,A\}`$ coincides with the space $`X`$, and that all newly arising square diagrams are also Cartesian squares. Now take an index $`\alpha \in B`$. Since the above mentioned diagrams are Cartesian (pullback) squares, we conclude that $`\left|\left(p_n^{n+1,\alpha }\right)^{-1}(x)\right|\ge 2`$ for each $`x\in X_\alpha ^n`$ and $`n\in \omega `$. By Lemma 2.3, the limit projection $`p_0^\alpha :X_\alpha \to X_\alpha ^0`$ of the spectrum $`𝒮_\alpha `$ is Baire isomorphic to the projection $`\pi _1^\alpha :X_\alpha ^0\times \text{ℝ}^\omega \to X_\alpha ^0`$, i.e. there exists a Baire isomorphism $`h_\alpha :X_\alpha ^0\times \text{ℝ}^\omega \to X_\alpha `$ such that $`p_0^\alpha h_\alpha =\pi _1^\alpha `$. The required Baire isomorphism $`h:X_0\times \text{ℝ}^\omega \to lim𝒮`$ can now be defined by letting $$h=q_\alpha ^0\pi _1\mathrm{\Delta }h_\alpha (q_\alpha ^0\times \mathrm{id}_{\text{ℝ}^\omega })$$
###### Proposition 2.5. Let $`p:X\to Y`$ be a $`0`$-soft map of $`AE(0)`$-spaces. Then there exists a Baire map $`q:Y\to X`$ such that $`pq=\mathrm{id}_Y`$. ###### Proof. First let us assume that $`p`$ has a Polish kernel. Then, by \[3, Theorem 6.3.2(iv)\], there exists a pullback diagram $$\begin{array}{ccc}X& \stackrel{p}{\to }& Y\\ {\scriptstyle f}\downarrow & & \downarrow {\scriptstyle g}\\ X_0& \stackrel{p_0}{\to }& Y_0,\end{array}$$ where $`X_0`$ and $`Y_0`$ are Polish spaces and the map $`p_0`$ is an open surjection. By Lemma 2.1, there exists a Borel map $`q_0:Y_0\to X_0`$ such that $`p_0q_0=\mathrm{id}_{Y_0}`$. Then the diagonal product $$q=q_0g\mathrm{\Delta }\mathrm{id}_Y:Y\to X$$ is a Baire map satisfying the equality $`pq=\mathrm{id}_Y`$. Now consider the general case. It follows from considerations in \[3, Section 6.3\] that there exists a well ordered continuous inverse spectrum $`𝒮=\{X_\alpha ,p_\alpha ^{\alpha +1},\tau \}`$ consisting of $`AE(0)`$-spaces and $`0`$-soft short projections with Polish kernels such that $`lim𝒮=X`$, $`X_0=Y`$ and the first limit projection $`p_0:lim𝒮\to X_0`$ of the spectrum $`𝒮`$ coincides with the given map $`p`$. Let $`q_0=\mathrm{id}_{X_0}`$ and suppose that for each ordinal number $`\alpha <\gamma `$, where $`\gamma <\tau `$, we have already constructed a Baire map $`q_\alpha :X_0\to X_\alpha `$ so that the following conditions are satisfied: * $`p_\alpha q_\alpha =\mathrm{id}_{X_0}`$ whenever $`\alpha <\gamma `$. * $`p_\alpha ^\beta q_\beta =q_\alpha `$ whenever $`\alpha \le \beta <\gamma `$. * $`q_\beta =\mathrm{\Delta }\{q_\alpha :\alpha <\beta \}`$ whenever $`\beta <\gamma `$ is a limit ordinal number. Let us construct a Baire map $`q_\gamma :X_0\to X_\gamma `$. If $`\gamma `$ is a limit ordinal number, then let $`q_\gamma =\mathrm{\Delta }\{q_\alpha :\alpha <\gamma \}`$. Since the spectrum $`𝒮`$ is continuous, $`q_\gamma `$ is a well defined Baire map.
If $`\gamma =\alpha +1`$, then according to the first part of the proof there exists a Baire map $`i:X_\alpha \to X_{\alpha +1}`$ (recall that the short projection $`p_\alpha ^{\alpha +1}:X_{\alpha +1}\to X_\alpha `$ has a Polish kernel) such that $`p_\alpha ^{\alpha +1}i=\mathrm{id}_{X_\alpha }`$. Then the Baire map $`q_{\alpha +1}=iq_\alpha `$ satisfies all the required properties. This completes the inductive step and hence we may assume that Baire maps $`q_\alpha :X_0\to X_\alpha `$ with the above properties are constructed for each $`\alpha <\tau `$. Let finally $`q=\mathrm{\Delta }\{q_\alpha :\alpha <\tau \}`$. Then $`q:X_0\to lim𝒮`$ is a Baire map and $`p_0q=\mathrm{id}_{X_0}`$. ∎
## 3. Main result
Let $`X`$ and $`Y`$ be arbitrary Tychonov spaces and let $`\tau `$ be an arbitrary infinite cardinal number. Recall (see \[3, Section 6.5.1\]) the definition of a topology, depending on $`\tau `$, on the set $`X^Y`$ of all maps from $`Y`$ into $`X`$. Let $`\mathrm{cov}(X)`$ denote the collection of all countable functionally open covers of the space $`X`$. For each map $`f:Y\to X`$ the sets of the form $$B(f,\{𝒰_t:t\in T\})=\{g\in X^Y:g\text{is}𝒰_t\text{-close to}f\text{for each}t\in T\},$$ where $`|T|<\tau `$ and $`𝒰_t\in \mathrm{cov}(X)`$ for each $`t\in T`$, are declared to be open basic neighborhoods of the point $`f`$ in $`X_\tau ^Y`$. The maps contained in the neighborhood $`B(f,\{𝒰_t:t\in T\})`$ are called $`\{𝒰_t:t\in T\}`$-close to $`f`$. The space $`C_\tau (Y,X)`$ mentioned in the Introduction is the subspace of the space $`X_\tau ^Y`$ consisting of continuous maps. By $`ℬ_\tau (Y,X)`$ we denote the subspace of $`X_\tau ^Y`$ consisting of all Baire maps of $`Y`$ into $`X`$. Finally we say that two subsets $`A`$ and $`B`$ of a space $`X`$ are Baire separated if there exists a Baire subset $`M`$ of $`X`$ such that $`A\subseteq M`$ and $`B\cap M=\varnothing `$.
###### Theorem 3.1. Let $`X`$ be an $`AE(0)`$-space of weight $`\tau >\omega `$. Then the following conditions are equivalent: 1. $`X`$ is Baire isomorphic to $`\text{ℝ}^\tau `$. 2. For each space $`Y`$ of ℝ-weight $`\le \tau `$ the set $$\{f\in ℬ_\tau (Y\times \text{𝔻},X):f(Y\times \{0\})\text{and}f(Y\times \{1\})\text{are Baire separated}\}$$ is dense in the space $`ℬ_\tau (Y\times \text{𝔻},X)`$. 3. For each space $`Y`$ of ℝ-weight $`<\tau `$ the set $$\{f\in ℬ_\tau (Y\times \text{𝔻},X):f(Y\times \{0\})\text{and}f(Y\times \{1\})\text{are Baire separated}\}$$ is dense in the space $`ℬ_\tau (Y\times \text{𝔻},X)`$. ###### Proof. (1) $`\Rightarrow `$ (2). Let $`h:X\to \text{ℝ}^\tau `$ be a Baire isomorphism and consider a Baire map $`f:Y\times \text{𝔻}\to X`$. Take a neighbourhood $`U`$ of $`f`$ in the space $`ℬ_\tau (Y\times \text{𝔻},X)`$. By the definition of the topology of $`ℬ_\tau (Y\times \text{𝔻},X)`$ there exists a collection $`\{𝒰_t:t\in T\}`$ such that $`𝒰_t\in \mathrm{cov}(X)`$ for each $`t\in T`$, $`|T|<\tau `$ and $$f\in B(f,\{𝒰_t:t\in T\})=\{g\in ℬ_\tau (Y\times \text{𝔻},X):g\text{is}𝒰_t\text{-close to}f\text{for each}t\in T\}\subseteq U.$$ Let $`\kappa =\mathrm{max}\{\omega ,|T|\}`$ and note that $`\kappa <\tau `$. Since $`X`$ is an $`AE(0)`$-space it can be represented as the limit space of a (factorizing) $`\kappa `$-spectrum $`𝒮=\{X_\alpha ,p_\alpha ^\beta ,\mathrm{exp}_\kappa \tau \}`$ consisting of $`AE(0)`$-spaces of weight $`\kappa `$ and $`0`$-soft limit projections (see \[3, Proposition 6.3.3\]).
Let also $`𝒮^{}=\{\left(\text{ℝ}^\omega \right)^\alpha ,\pi _\alpha ^\beta ,\mathrm{exp}_\kappa \tau \}`$ be the standard $`\kappa `$-spectrum, consisting of the $`\kappa `$-subproducts of $`\text{ℝ}^\tau `$ and of the natural projections, whose limit coincides with $`\text{ℝ}^\tau `$. By the Spectral Theorem for Baire maps (see \[3, Theorem 8.8.1\]), we may assume without loss of generality that $`h`$ is the limit of a morphism $$h=lim\{h_\alpha :X_\alpha \to \left(\text{ℝ}^\omega \right)^\alpha \}$$ consisting of Baire isomorphisms (i.e. $`\pi _\alpha h=h_\alpha p_\alpha `$ for each $`\alpha \in \mathrm{exp}_\kappa \tau `$). Note that if $`M`$ is a Baire subset of $`X`$, then there exists an index $`\alpha _M\in \mathrm{exp}_\kappa \tau `$ such that $`M=p_{\alpha _M}^{-1}(p_{\alpha _M}(M))`$ (recall that $`𝒮`$ is a factorizing $`\kappa `$-spectrum) – we say in such a case that $`M`$ is $`\alpha _M`$-cylindrical. This implies that for each $`t\in T`$ there exists an index $`\alpha _t\in \mathrm{exp}_\kappa \tau `$ such that every element of $`𝒰_t`$ is $`\alpha _t`$-cylindrical (recall that the collection $`𝒰_t`$ is countable). Finally, since $`|T|\le \kappa `$ and since $`\mathrm{exp}_\kappa \tau `$ is a $`\kappa `$-complete set, it follows that there exists an index $`\alpha \in \mathrm{exp}_\kappa \tau `$ such that for each $`t\in T`$ every element of $`𝒰_t`$ is $`\alpha `$-cylindrical. In this situation we have $$V=\{g\in ℬ_\tau (Y\times \text{𝔻},X):p_\alpha g=p_\alpha f\}\subseteq B(f,\{𝒰_t:t\in T\})\subseteq U.$$ The important observation here is that the set $`V`$ is a neighbourhood of the point $`f`$ in $`ℬ_\tau (Y\times \text{𝔻},X)`$ (see \[3, Lemma 6.5.1\]). Now consider the Baire map $`hf:Y\times \text{𝔻}\to \text{ℝ}^\tau `$ and represent it as the diagonal product $$hf=\pi _\alpha hf\mathrm{\Delta }\pi _{\tau \setminus \alpha }hf.$$ Since the ℝ-weight of the product $`Y\times \text{𝔻}`$ does not exceed $`\tau `$ and since $`|\tau \setminus \alpha |=\tau `$, there exists a $`C`$-embedding $`j:Y\times \text{𝔻}\to \left(\text{ℝ}^\omega \right)^{\tau \setminus \alpha }`$. Then the diagonal product $$\stackrel{~}{g}=\pi _\alpha hf\mathrm{\Delta }j:Y\times \text{𝔻}\to \text{ℝ}^\tau $$ is a Baire map such that * $`\pi _\alpha \stackrel{~}{g}=\pi _\alpha hf`$. * The sets $`\stackrel{~}{g}(Y\times \{0\})`$ and $`\stackrel{~}{g}(Y\times \{1\})`$ are Baire separated in $`\text{ℝ}^\tau `$. Let now $`g=h^{-1}\stackrel{~}{g}:Y\times \text{𝔻}\to X`$. It follows that the sets $`g(Y\times \{0\})`$ and $`g(Y\times \{1\})`$ are Baire separated in $`X`$. Finally observe that $$p_\alpha g=p_\alpha h^{-1}\stackrel{~}{g}=h_\alpha ^{-1}\pi _\alpha \stackrel{~}{g}=h_\alpha ^{-1}\pi _\alpha hf=p_\alpha h^{-1}hf=p_\alpha f$$ which shows that $`g\in V`$. (2) $`\Rightarrow `$ (3). Trivial. (3) $`\Rightarrow `$ (1). Embed $`X`$ as a $`C`$-embedded subspace into $`\text{ℝ}^A`$, where $`|A|=\tau `$. By repeating the argument presented in the proof of \[3, Theorem 6.3.1\] we construct a collection $`𝒜`$ of countable subsets of the indexing set $`A`$ satisfying the following properties: * (a) The collection $`𝒜`$ is cofinal and $`\omega `$-closed in $`\mathrm{exp}_\omega A`$. * (b) If $`B`$ is an (arbitrary) union of elements of $`𝒜`$, then $`\pi _B(X)`$ is a closed and $`C`$-embedded $`AE(0)`$-subspace of the product $`\text{ℝ}^B`$. * (c) If $`B`$ and $`C`$ are (arbitrary) unions of elements of $`𝒜`$ and $`C\subseteq B`$, then the restrictions $$\pi _C^B|\pi _B(X):\pi _B(X)\to \pi _C(X)\text{and}\pi _B|X:X\to \pi _B(X)$$ are $`0`$-soft.
By using the above collection we construct a well ordered continuous inverse spectrum $`𝒮=\{X_\alpha ,p_\alpha ^{\alpha +1},\tau \}`$ consisting of $`AE(0)`$-spaces $`X_\alpha `$ and $`0`$-soft short projections $`p_\alpha ^{\alpha +1}:X_{\alpha +1}\to X_\alpha `$ with Polish kernels so that $`lim𝒮=X`$. Since $`|A|=\tau `$, we can write $`A=\{a_\alpha :\alpha <\tau \}`$. By property (a), there exists an element $`A_0\in 𝒜`$ such that $`a_0\in A_0`$. Let $`X_0=\pi _{A_0}(X)`$. Without loss of generality we may assume that $`|X_0|>\omega `$. By properties (b) and (c), $`X_0`$ is a closed subspace of $`\text{ℝ}^{A_0}`$ and the map $`p_0=\pi _{A_0}|X:X\to X_0`$ is $`0`$-soft. Suppose that for each ordinal $`\alpha <\gamma `$, where $`\gamma <\tau `$, we have already constructed a subset $`A_\alpha \subseteq A`$ as a union of fewer than $`\tau `$ elements of the collection $`𝒜`$ so that $`\{a_\beta :\beta <\alpha \}\subseteq A_\alpha `$ whenever $`\alpha >0`$ and all point inverses of the map $$p_\alpha ^{\alpha +1}=\pi _{A_\alpha }^{A_{\alpha +1}}|X_{\alpha +1}:X_{\alpha +1}\to X_\alpha $$ contain at least two points. Here $`X_\alpha =\pi _{A_\alpha }(X)`$ and $`X_{\alpha +1}=\pi _{A_{\alpha +1}}(X)`$. Suppose also that the construction has been carried out in such a way that $`|A_{\alpha +1}\setminus A_\alpha |\le \omega `$ for each $`\alpha `$ with $`\alpha +1<\gamma `$. This ensures that the $`0`$-soft map $`p_\alpha ^{\alpha +1}`$ has a Polish kernel. If $`\gamma `$ is a limit ordinal number we let $`A_\gamma =\bigcup \{A_\alpha :\alpha <\gamma \}`$. If $`\gamma =\alpha +1`$, then consider the $`0`$-soft map $`p_\alpha :X\to X_\alpha `$. By Proposition 2.5, there exists a Baire map $`q:X_\alpha \to X`$ such that $`p_\alpha q=\mathrm{id}_{X_\alpha }`$. Next consider the composition $$X_\alpha \times \text{𝔻}\stackrel{\pi }{\to }X_\alpha \stackrel{q}{\to }X,$$ where $`\pi :X_\alpha \times \text{𝔻}\to X_\alpha `$ denotes the natural projection onto the first coordinate. According to \[3, Lemma 6.5.1\], the collection of maps $`f:X_\alpha \times \text{𝔻}\to X`$ satisfying the equality $`p_\alpha f=p_\alpha q\pi =\pi `$ is a neighbourhood of the composition $`q\pi `$ in $`ℬ_\tau (X_\alpha \times \text{𝔻},X)`$. Consequently, by condition (3), for at least one map $`f:X_\alpha \times \text{𝔻}\to X`$ with $`p_\alpha f=\pi `$ the sets $`f(X_\alpha \times \{0\})`$ and $`f(X_\alpha \times \{1\})`$ are Baire separated. Since $`X`$ is $`C`$-embedded in $`\text{ℝ}^A`$, it follows that there exists a Baire subset $`M`$ of $`\text{ℝ}^A`$ such that $$f(X_\alpha \times \{0\})\subseteq M\text{and}f(X_\alpha \times \{1\})\cap M=\varnothing .$$ Choose a countable subset $`B\subseteq A`$ such that $`M=\pi _B^{-1}(\pi _B(M))`$. This obviously implies that $$\pi _B(f(X_\alpha \times \{0\}))\cap \pi _B(f(X_\alpha \times \{1\}))=\varnothing .$$ Since $`p_\alpha f=\pi `$, it follows that $`B\nsubseteq A_\alpha `$. The cofinality of the collection $`𝒜`$ in $`\mathrm{exp}_\omega A`$ allows us to find an element $`\stackrel{~}{B}\in 𝒜`$ such that $`B\cup \{a_\alpha \}\subseteq \stackrel{~}{B}`$. Clearly $$\pi _{\stackrel{~}{B}}(f(X_\alpha \times \{0\}))\cap \pi _{\stackrel{~}{B}}(f(X_\alpha \times \{1\}))=\varnothing .$$ Finally let $`A_{\alpha +1}=A_\alpha \cup \stackrel{~}{B}`$. It then follows that the $`0`$-soft map $`p_{\alpha +1}:X_{\alpha +1}\to X_\alpha `$ has a Polish kernel and all its fibers contain at least two points. Thus the required well ordered continuous inverse spectrum $`𝒮=\{X_\alpha ,p_\alpha ^{\alpha +1},\tau \}`$ has been constructed so that all point inverses of all short projections $`p_\alpha ^{\alpha +1}`$ (which have Polish kernels) contain at least two points.
A straightforward transfinite induction, based on Proposition 2.4, shows that $`X`$ is Baire isomorphic to the product $`X_0\times \left(\text{ℝ}^\omega \right)^\tau `$. Since $`X_0`$ is an uncountable Polish space, it is Borel isomorphic to $`\text{ℝ}^\omega `$. Thus $`X`$ is Baire isomorphic to $`\text{ℝ}^\omega \times \left(\text{ℝ}^\omega \right)^\tau \approx \text{ℝ}^\tau `$. ∎
###### Corollary 3.2. Let $`X`$ be an $`AE(0)`$-space of weight $`\omega _1`$. Then the following conditions are equivalent: 1. $`X`$ is Baire isomorphic to $`\text{ℝ}^{\omega _1}`$. 2. For each space $`Y`$ of ℝ-weight $`\le \omega _1`$ the set $$\{f\in ℬ_\tau (Y\times \text{𝔻},X):f(Y\times \{0\})\text{and}f(Y\times \{1\})\text{are Baire separated}\}$$ is dense in the space $`ℬ_\tau (Y\times \text{𝔻},X)`$. 3. For each Polish space $`Y`$ the set $$\{f\in ℬ_\tau (Y\times \text{𝔻},X):f(Y\times \{0\})\text{and}f(Y\times \{1\})\text{are Baire separated}\}$$ is dense in the space $`ℬ_\tau (Y\times \text{𝔻},X)`$. 4. $`X`$ contains no $`G_\delta `$-points.
# 1 Introduction
Two statistical systems can be characterised by the same internal symmetries and still differ in their microscopic realisation. This difference will be observable as long as the correlation length is not much larger than the microscopic length scale. Near a second order phase transition point, however, the microscopic details become irrelevant and the two systems appear as representatives of the same universality class. In the characterisation of universal behaviour, which is one of the basic tasks of statistical mechanics, one can distinguish different steps. The first one is the determination of the universal features of the critical point (first of all the critical exponents). In two dimensions this goal was achieved with the solution of conformal field theories . The second natural step is the study of the scaling region surrounding the fixed point. The leading behaviour of a physical quantity in this region is specified in terms of a critical amplitude multiplying the suitable power of the temperature. The critical amplitudes depend on metric factors, but they can be used to construct universal combinations which characterise the scaling region . It is clear that the computation of the universal amplitude combinations requires a solution of the theory away from criticality. This has become possible over the last years for a large class of two-dimensional quantum field theories characterised by the presence of an infinite number of integrals of motion (integrable field theories) . They describe the scaling limit of the isotropic statistical models which are solved on the lattice, but also of many others whose lattice solution is not available. In particular, the universal amplitude ratios for such a basic model as the Ising model in a magnetic field have been computed exactly in this framework . More generally, integrable field theory provides accurate approximations for the amplitude ratios . For the purpose of comparison with the results provided by integrable field theory, it is clearly desirable to obtain independent estimates for the universal quantities. In view of the difficulties of other traditional approaches (accurate series expansions are available only for a few models, and $`d=2`$ is normally too far from the upper critical dimension to obtain reliable estimates through the $`ϵ`$-expansion), numerical simulations appear as a most valuable source of data. This paper deals with universal amplitude ratios for the two-dimensional $`q`$-state Potts model and the related problem of isotropic percolation. These have been the subject of a general study in the framework of integrable field theory in Ref. . Here we mainly focus on the susceptibility amplitude ratios, presenting new theoretical predictions for the ratio (not considered in ) of the transverse and longitudinal susceptibilities below $`T_c`$, and a Monte Carlo study for $`q=3`$ and 4.
The $`q`$-state Potts model is defined by the lattice Hamiltonian $$H=-J\underset{(x,y)}{\sum }\delta _{s(x),s(y)},$$ (1.1) where the sum is over nearest neighbours and the site variable $`s(x)`$ can assume $`q`$ possible values (colours). The model is clearly invariant under the group of permutations of the colours. In the ferromagnetic case $`J>0`$ we are interested in, the states in which all the sites have the same colour minimise the energy and the system exhibits spontaneous magnetisation at sufficiently low temperatures. There exists a critical temperature $`T_c`$ above which the thermal fluctuations become dominant and the system is in a disordered phase. We will consider the Potts model in two dimensions in the range of the parameter $`q`$ for which the phase transition at $`T=T_c`$ is continuous, namely $`q\le 4`$ . Let us introduce the spin variables $$\sigma _i(x)=\delta _{s(x),i}-\frac{1}{q},i=1,2,\mathrm{},q$$ (1.2) satisfying the condition $$\underset{i=1}{\overset{q}{\sum }}\sigma _i(x)=0.$$ (1.3) When $`T>T_c`$ all values of the site variable occur with equal probability $`1/q`$. At low temperature, however, one of the $`q`$ degenerate ground states is selected out by spontaneous symmetry breaking. This might be done either by imposing a symmetry-breaking field which is allowed to tend to zero after taking the thermodynamic limit, or by imposing symmetry-breaking boundary conditions before taking the limit. Without loss of generality we choose the colour of the selected ground state at $`T<T_c`$ to correspond to $`i=1`$. Then, for any temperature, we can write $$\langle \sigma _i\rangle =\frac{q\delta _{i1}-1}{q-1}M,$$ (1.4) where $`M`$ denotes the ‘longitudinal’ spontaneous magnetisation $`\langle \sigma _1\rangle `$ and vanishes at $`T>T_c`$. The connected spin–spin correlation functions are given by $$G_{ij}(x)=\langle \sigma _i(x)\sigma _j(0)\rangle -\langle \sigma _i\rangle \langle \sigma _j\rangle .$$ (1.5) If $`\nu _i`$ denotes the fraction of sites with colour $`i`$, the magnetic susceptibilities per site can be written as $$\chi _i=\underset{x}{\sum }G_{ii}(x)=\langle \nu _i^2\rangle -\langle \nu _i\rangle ^2.$$ (1.6) Of course, $`\chi _i=\chi `$ at $`T>T_c`$, while in the low-temperature phase we have to distinguish between the longitudinal susceptibility $`\chi _L=\chi _1`$ and the transverse susceptibility $`\chi _T=\chi _{i\ne 1}`$. In the vicinity of the critical point, for $`q<4`$, the susceptibilities behave as $$\chi _i\simeq \mathrm{\Gamma }_it^{-\gamma },$$ (1.7) where $`t=|T-T_c|/T_c`$. Denoting by $`\mathrm{\Gamma }`$, $`\mathrm{\Gamma }_L`$ and $`\mathrm{\Gamma }_T`$ the critical amplitudes associated respectively to $`\chi `$, $`\chi _L`$ and $`\chi _T`$, we have the two universal amplitude ratios $$\mathrm{\Gamma }/\mathrm{\Gamma }_L,\mathrm{\Gamma }_T/\mathrm{\Gamma }_L.$$ (1.8) For $`q=4`$ it is well known that quantities like the susceptibility asymptotically exhibit multiplicative logarithmic correction factors of the form $`|\mathrm{ln}|t||^{\overline{\gamma }}`$ . These are due to a marginally irrelevant operator. The analytic calculation of Ref. was performed in the continuum massive field theory corresponding to a point on the outflowing renormalisation group trajectory. From this point of view, the logarithmic factors arise only when the parameters of the continuum theory are expressed in terms of those of the bare theory. We therefore expect the predictions for such universal quantities as the amplitude ratios above to remain valid when applied to ratios in which the leading logarithmic factors cancel.
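The estimators implied by eq. (1.6) are straightforward to code. The following sketch (ours, not the production code of the Monte Carlo study of section 4) runs a small $`q=3`$ Potts model with single-site Metropolis updates above $`T_c`$ and measures the susceptibilities from the colour fractions $`\nu _i`$; the volume factor $`N`$ multiplying the variance is the standard finite-lattice normalization of a per-site susceptibility and is our convention.

```python
import numpy as np

rng = np.random.default_rng(2)

q, L, T = 3, 16, 1.2        # illustrative; with J = 1, T_c(q=3) = 1/ln(1+sqrt(3)) ~ 0.995
N = L * L
s = rng.integers(0, q, size=(L, L))

def bond_energy(s, x, y):
    """Energy -J * (number of equal-colour neighbours) of site (x, y), with J = 1."""
    c = s[x, y]
    return -sum(int(c == n) for n in (s[(x+1) % L, y], s[(x-1) % L, y],
                                      s[x, (y+1) % L], s[x, (y-1) % L]))

def sweep(s):
    for _ in range(N):
        x, y = rng.integers(0, L, 2)
        old = s[x, y]
        e_old = bond_energy(s, x, y)
        s[x, y] = rng.integers(0, q)
        dE = bond_energy(s, x, y) - e_old
        if dE > 0 and rng.random() >= np.exp(-dE / T):
            s[x, y] = old                     # Metropolis rejection

nu_samples = []
for it in range(3000):
    sweep(s)
    if it >= 500:                             # thermalization cut
        nu_samples.append(np.bincount(s.ravel(), minlength=q) / N)

nu = np.array(nu_samples)
chi = N * nu.var(axis=0)   # chi_i = N (<nu_i^2> - <nu_i>^2); all chi_i agree for T > T_c
print("chi_i estimates:", chi)
```

Below $`T_c`$ one would in addition select the $`i=1`$ ground state by a small field or by boundary conditions, as described above, before reading off $`\chi _L`$ and $`\chi _T`$ separately.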
In the next section we recall the link between the isotropic percolation problem and the $`q\to 1`$ limit of the Potts model, and show how in this limit the ratios (1.8) provide some universal information about percolation clusters. In section 3 we recall the origin of the theoretical predictions and present the new analytic results for the ratio $`\mathrm{\Gamma }_T/\mathrm{\Gamma }_L`$ and its percolation analogue. Section 4 is devoted to a Monte Carlo study of the ratios (1.8) in the Potts model for $`q=3`$ and 4, before we discuss the theoretical and numerical results in the final section.

## 2 Connection with percolation

Percolation is the geometrical problem in which bonds are randomly distributed on a lattice with occupation probability $`p`$ . A set of bonds forming a connected path on the lattice is called a cluster. There exists a critical value $`p_c`$ of the occupation probability above which an infinite cluster appears in the system; $`p_c`$ is called the percolation threshold. If $`N`$ is the total number of bonds in the lattice, the probability of a configuration with $`N_b`$ occupied bonds is $`p^{N_b}(1-p)^{N-N_b}`$. Hence, the average of a quantity $`X`$ over all configurations $`𝒢`$ is $$\langle X\rangle =\sum _𝒢Xp^{N_b}(1-p)^{N-N_b}.$$ (2.1)

It is well known that the percolation problem can be mapped onto the limit $`q\to 1`$ of the $`q`$-state Potts model . In fact, if we define $`z=e^{J/T}-1`$, the partition function of the Potts model can be written in the form $$Z=\text{Tr}_s\prod _{(x,y)}(1+z\delta _{s(x),s(y)}).$$ (2.2) A graph $`𝒢`$ on the lattice can be associated with each Potts configuration by drawing a bond between nearest-neighbour sites with the same colour. In the above expression, a power of $`z`$ is associated with each bond in the graph. Taking into account the summation over colours, one arrives at the expansion $$Z=\sum _𝒢q^{N_c}z^{N_b},$$ (2.3) where $`N_b`$ is the total number of bonds in the graph $`𝒢`$ and $`N_c`$ is the number of clusters in $`𝒢`$ (each isolated site is also counted as a cluster). In terms of the partition function (2.3) the $`q`$-state Potts model is well defined even for noninteger values of $`q`$. The average of a quantity $`X`$ can be written as $$\langle X\rangle _q=Z^{-1}\sum _𝒢Xq^{N_c}z^{N_b}.$$ (2.4) Hence, it is sufficient to make the formal identification $`z=p/(1-p)`$ to see that $`\langle X\rangle _1`$ coincides with the percolation average (2.1). For $`q\ne 1`$ the Potts model describes a generalised percolation problem in which each cluster can assume $`q`$ different colours.

The presence of a spontaneous magnetisation $`M`$ at $`T<T_c`$ reflects the appearance of an infinite cluster at $`p>p_c`$. Let $`P`$ denote the probability that a site belongs to the infinite cluster ($`P=0`$ for $`p<p_c`$). Then, for any value of $`p`$, the probability that the site $`x`$ has colour $`k`$ is $$\langle \delta _{s(x),k}\rangle =P\delta _{k1}+\frac{1}{q}(1-P).$$ (2.5) Recalling Eqs. (1.2) and (1.4), we obtain $$P=\frac{q}{q-1}M.$$ (2.6) Consider now two sites located at $`x`$ and $`y`$, and call $`P_i`$ the probability that they are both in the infinite cluster, $`P_f`$ the probability that they are in the same finite cluster, $`P_{ff}`$ the probability that they are in different finite clusters, and $`P_{if}`$ the probability that $`x`$ is in the infinite cluster while $`y`$ is in a finite one.
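Averages of the form (2.1) lend themselves to direct sampling: each bond is occupied independently with probability $`p`$ and cluster observables are accumulated. The sketch below (an illustration; lattice geometry, boundary conditions and estimator conventions are our choices) measures the mean cluster size per site, $`\sum _cs_c^2/N`$, on an $`L\times L`$ square lattice with free boundaries using a union–find structure. As written it is meaningful below $`p_c`$, where no infinite cluster has to be excluded, and it anticipates the quantity $`S`$ introduced just below.

```python
import numpy as np

rng = np.random.default_rng(0)

def find(parent, i):
    """Root of site i, with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def mean_cluster_size(L, p):
    """Sample one bond configuration with occupation probability p on an
    L x L square lattice (free boundaries) and return sum_c s_c^2 / N,
    the mean size of the cluster containing a randomly chosen site."""
    N = L * L
    parent = np.arange(N)

    def union(a, b):
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[ra] = rb

    for x in range(L):
        for y in range(L):
            i = x * L + y
            if x + 1 < L and rng.random() < p:  # bond to (x+1, y)
                union(i, i + L)
            if y + 1 < L and rng.random() < p:  # bond to (x, y+1)
                union(i, i + 1)

    sizes = np.bincount([find(parent, i) for i in range(N)])
    return (sizes[sizes > 0] ** 2).sum() / N

# e.g. an estimate below threshold, averaged over configurations:
# S = np.mean([mean_cluster_size(128, 0.45) for _ in range(100)])
```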
The probability that $`x`$ has colour $`k`$ and $`y`$ has colour $`j`$ can be expressed as $$\langle \delta _{s(x),k}\delta _{s(y),j}\rangle =P_i\delta _{k1}\delta _{j1}+P_f\frac{1}{q}\delta _{kj}+P_{ff}\frac{1}{q^2}+P_{if}\frac{1}{q}(\delta _{k1}+\delta _{j1}).$$ (2.7) Since $`P_i+P_f+P_{ff}+2P_{if}=1`$ and $`P_i+P_{if}=P`$, the two-point correlations only depend on two independent functions of $`x-y`$, say $`P_i`$ and $`P_f`$. For the connected spin–spin correlators (1.5) one finds $$G_{kj}(x)=\left(\delta _{k1}\delta _{j1}-\frac{1}{q}(\delta _{k1}+\delta _{j1})+\frac{1}{q^2}\right)P_i(x)+\left(\frac{1}{q}\delta _{kj}-\frac{1}{q^2}\right)P_f(x)-\left(\delta _{k1}-\frac{1}{q}\right)\left(\delta _{j1}-\frac{1}{q}\right)P^2.$$ (2.8)

We restrict our attention from now on to the case of ordinary percolation, so that the limit $`q\to 1`$ is understood in all the subsequent equations. From the previous equation we obtain $$G_{11}(x)=(q-1)P_f(x),$$ (2.9) $$G_{kk}(x)=P_i(x)-P^2,\qquad k\ne 1.$$ (2.10) The average size of finite clusters is given by $$S=\sum _xP_f(x)=\frac{1}{q-1}\sum _xG_{11}(x).$$ (2.11) Near the percolation threshold this quantity behaves as $$S\simeq \sigma _\pm |p_c-p|^{-\gamma },$$ (2.12) where the subscripts $`+`$ and $`-`$ refer to $`p<p_c`$ and $`p>p_c`$, respectively, and $`\gamma `$ is the Potts critical exponent evaluated at $`q=1`$. Equation (2.11) implies $$\frac{\sigma _+}{\sigma _{-}}=\frac{\mathrm{\Gamma }}{\mathrm{\Gamma }_L}.$$ (2.13) The quantity $$S^{\prime }=\sum _x(P_i(x)-P^2)=\sum _xG_{kk}(x),\qquad k\ne 1$$ (2.14) is a measure of the short range correlations inside the infinite cluster and behaves near criticality as $$S^{\prime }\simeq \sigma ^{\prime }|p_c-p|^{-\gamma }.$$ (2.15) One can then introduce a second universal ratio $`\sigma ^{\prime }/\sigma _{-}`$ whose relation with the Potts susceptibility amplitudes is $$\frac{\sigma ^{\prime }}{\sigma _{-}}=(q-1)\frac{\mathrm{\Gamma }_T}{\mathrm{\Gamma }_L}.$$ (2.16)

## 3 Analytic results

The scaling limit of the $`q`$-state Potts model is an integrable field theory , and this fact allows the evaluation of the correlation functions through the form factor approach, which is of general applicability within integrable field theory. This programme was carried out for the $`q`$-state Potts model in Ref. . We just recall here the basic steps of the procedure, referring the reader to that paper for all the details. The starting point is the exact scattering description of the low-temperature phase of the model determined by Chim and Zamolodchikov . Since at $`T<T_c`$ the model exhibits $`q`$ degenerate vacua, the elementary excitations entering this scattering description are kinks interpolating among the different vacua. The knowledge of the $`S`$-matrix allows the computation of the matrix elements (form factors) $`\langle 0|\mathrm{\Phi }(0)|n\rangle `$ entering the spectral decomposition of the correlation functions: $$\langle \mathrm{\Phi }_1(x)\mathrm{\Phi }_2(0)\rangle =\sum _{n=0}^{\mathrm{\infty }}\langle 0|\mathrm{\Phi }_1(0)|n\rangle \langle n|\mathrm{\Phi }_2(0)|0\rangle e^{-|x|E_n},$$ (3.1) where $`E_n`$ denotes the total energy of the $`n`$-particle state $`|n\rangle `$. It is known that in integrable models the spectral series (3.1) exhibit remarkable convergence properties, and that, in particular, very accurate estimates of integrated correlators can be obtained retaining only the terms of the series containing no more than two particles (two-particle approximation). In Ref.
, the one- and two-particle form factors of the energy, spin and disorder operators were computed in both phases of the model, the information about the high-temperature phase being obtained by duality. The two-particle approximation for the correlators was then used to evaluate a series of universal amplitude ratios, including $`\mathrm{\Gamma }/\mathrm{\Gamma }_L`$. Ref. also contains all the necessary information for the computation (within the same approximation) of the ratio $`\mathrm{\Gamma }_T/\mathrm{\Gamma }_L`$, which however was not discussed in that paper. We give in Table 1 the results corresponding to $`q=2,3,4`$.

Due to technical difficulties, the form factor equations for the spin operator could be solved only for $`q=2,3,4`$ in Ref. , a limitation which prevents the analytic continuation to the percolation point $`q=1`$ for those amplitude ratios which are related to correlation functions of the spin operator. An estimate of these percolation ratios ($`\sigma _+/\sigma _{-}`$, in particular) was however proposed in terms of a simple (quadratic) extrapolation to $`q=1`$ of the results obtained for $`q=2,3,4`$. Here we do the same for the low-temperature ratio $`\sigma ^{\prime }/\sigma _{-}`$. Using the values of $`\mathrm{\Gamma }_T/\mathrm{\Gamma }_L`$ given in Table 1 for the extrapolation of Eq. (2.16), and explicitly incorporating the factor of $`(q-1)`$, we find $$\frac{\sigma ^{\prime }}{\sigma _{-}}\simeq 1.49.$$ (3.2) (This is the result of the extrapolation performed in the variable $`\lambda `$, related to $`q`$ by $`\sqrt{q}=2\mathrm{sin}(\pi \lambda /3)`$, in which all the results originating from the scattering theory are analytic; extrapolating in $`q`$ instead gives $`\sigma ^{\prime }/\sigma _{-}\simeq 1.43`$.) According to the considerations developed in , we estimate an accuracy of order 1% for our predictions of $`\mathrm{\Gamma }_T/\mathrm{\Gamma }_L`$, while we allow for a 10% error on the value of $`\sigma ^{\prime }/\sigma _{-}`$ to take into account the uncertainty coming from the extrapolation procedure.

We conclude this section with a reminder of the theoretical predictions for the correlation length amplitude ratio which will also be considered in the next section. The “true” correlation length $`\xi `$ is defined through the large distance decay of the spin–spin correlation function $$\langle \sigma _i(x)\sigma _i(0)\rangle \sim e^{-|x|/\xi },$$ (3.3) and is determined as the inverse of the total mass of the lightest state entering the spectral series (3.1). For the spin operator at $`q\le 3`$, this lightest state is the one-kink state for $`T>T_c`$, and the two-kink state for $`T<T_c`$. For $`q>3`$ one has to take into account that the two-kink state gives rise to a bound state whose mass equals $`\sqrt{3}m`$ at $`q=4`$, $`m`$ being the mass of the kink. Hence, denoting by $`\xi _\pm `$ the critical amplitudes of the “true” correlation length in the two different phases, the exact results for the ratio $`\xi _+/\xi _{-}`$ for integer $`q`$ are those given in Table 1. We also include our results for $`\mathrm{\Gamma }/\mathrm{\Gamma }_L`$ taken from Ref. .

## 4 Computer simulations

Computer simulations are performed on the three- and four-state Potts models on an $`L\times L`$ square lattice with periodic (helical) boundary conditions. The Wolff single-cluster algorithm was used, implemented as described in Ref. .
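For concreteness, one Wolff update for the $`q`$-state Potts model can be sketched as follows (an illustration with ordinary periodic boundaries, not the helical-boundary implementation used for the production runs): a cluster of sites sharing the colour of a random seed is grown across bonds activated with probability $`p_{\mathrm{add}}=1-e^{-J/T}`$ and then recoloured as a whole.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)

def wolff_update(spin, q, p_add):
    """One single-cluster move for the q-state Potts model on an L x L
    periodic lattice.  Each bond from a cluster site to an equal-colour
    neighbour is activated with probability p_add = 1 - exp(-J/T); the
    grown cluster is recoloured to a uniformly chosen different colour.
    Returns the cluster size."""
    L = spin.shape[0]
    x0, y0 = rng.integers(L), rng.integers(L)
    old = spin[x0, y0]
    new = (old + rng.integers(1, q)) % q    # any colour except `old`
    spin[x0, y0] = new                      # recolouring marks membership
    stack = deque([(x0, y0)])
    size = 1
    while stack:
        x, y = stack.pop()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            u, v = (x + dx) % L, (y + dy) % L
            if spin[u, v] == old and rng.random() < p_add:
                spin[u, v] = new
                stack.append((u, v))
                size += 1
    return size

# usage sketch: spins = rng.integers(q, size=(L, L)), then repeated
# calls to wolff_update(spins, q, 1 - np.exp(-J / T))
```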
A random configuration is generated, and thermalised by applying cluster moves until each spin has been flipped on average 500 times for the three-state Potts model, 1000 times for the four-state Potts model with $`L\le 400`$, or 2000 times for the four-state Potts model with larger lattices. After the thermalisation, a sequence of 1000 (or for the smallest lattice size 2000) configurations is generated, in which consecutive configurations are separated by a sequence of cluster moves in which each spin is visited on average 10 times. In each configuration, the fractions $`\nu _i`$ of spins in each state $`i`$ are determined. From these numbers, the magnetic susceptibility above the critical temperature is calculated using Eq. (1.6) and averaging over $`i`$.

Below the critical temperature, we expect to find spontaneous symmetry breaking, but of course this cannot occur in a finite system. One way to implement it would be to choose open boundaries and to fix the spins on the boundary to a preferred state. However, this brings in boundary effects and would necessitate using much larger systems. Instead, we simply observe that in any given configuration one colour of spin dominates, and in the thermodynamic limit this will be the preferred orientation of the ground state. Thus we estimate the longitudinal susceptibility from the fluctuations in the fraction of spins which are in the majority state, averaged over all configurations, and the transverse susceptibility from the fluctuations of the fraction of spins in each minority state, averaged over the $`(q-1)`$ minority states in each configuration, and then over all configurations. We expect the difference between the result of this method of estimation and that using fixed boundary conditions to be of order $`e^{-L/\xi }`$, and thus exponentially suppressed in the region we study. (It should be noted that this is no longer the case as $`T\to T_c`$ at fixed $`L`$, and indeed we found that our susceptibility ratios as estimated this way did not converge to unity as the true ones must in this limit.)

We are interested in the ratios $`\mathrm{\Gamma }_T/\mathrm{\Gamma }_L`$ and $`\mathrm{\Gamma }/\mathrm{\Gamma }_L`$, as discussed in the previous sections. The measurements should be performed at temperatures where the correlation length $`\xi `$ is large compared to the lattice spacing, but small compared to the system size. To determine approximately the middle of the appropriate temperature regime, we first measured the spin–spin correlation function above and below the critical temperature, in simulations of the $`200\times 200`$ three- and four-state Potts models. In Figure 1 we plot the correlation length as a function of reduced temperature $`t=(T-T_c)/T_c`$ above the critical temperature, and on top of that the correlation length as a function of the scaled reduced temperature $`t^{\prime }=c(T-T_c)/T_c`$ below the critical temperature. For the three-state Potts model, the curves collapse for $`c_{Q=3}=2.7\pm 0.5`$, for the four-state Potts model for $`c_{Q=4}=2.2\pm 0.3`$; both values are in agreement with the theoretical expectations $`c_{Q=3}=2^{1/\nu }=2.297`$ and $`c_{Q=4}=(\sqrt{3})^{1/\nu }=2.280`$. For the three- and four-state Potts models, we measured the susceptibility $`\chi `$ for lattice sizes $`L=200`$ to 1200, at temperatures $`t_+`$ above $`T_c`$. We found surprising difficulty in identifying a window where both finite-size effects and corrections to scaling may simultaneously be ignored.
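Before turning to these systematic effects, the estimators just described can be summarised in code; the pooled averaging over minority states below is one natural reading of the procedure, and volume factors again cancel in the ratio formulas quoted below (Eqs. (4.1) and (4.2)).

```python
import numpy as np

def susceptibilities_low_T(nu):
    """chi_L and chi_T below T_c from colour fractions of shape
    (n_configs, q).  The majority colour of each configuration serves as
    a finite-volume proxy for the broken-symmetry direction: chi_L from
    the fluctuations of the majority fraction, chi_T from the pooled
    fluctuations of the q-1 minority fractions."""
    nu_sorted = np.sort(np.asarray(nu, dtype=float), axis=1)
    major = nu_sorted[:, -1]      # majority fraction, per configuration
    minor = nu_sorted[:, :-1]     # the q-1 minority fractions
    return major.var(), minor.var()

def amplitude_ratios(chi_plus, chi_L, chi_T, t_plus, t_minus, gamma):
    """Eqs. (4.1) and (4.2) below, with t_minus = t_plus / c chosen so
    that the correlation lengths at the two temperatures match."""
    return chi_T / chi_L, (chi_plus / chi_L) * (t_plus / t_minus) ** gamma
```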
Finite-size effects are negligible in the region of sufficiently large $`t_+`$ where the data for different, sufficiently large, values of $`L`$ all collapse. Corrections to scaling are negligible if we observe a plateau in the collapsed data when it is multiplied by $`(t_+)^\gamma `$. An example of such a plateau is shown in Fig. 2 for $`q=3`$. We also found it more difficult to identify the scaling window for $`T>T_c`$ than below the critical temperature. This may be for two reasons: first, the correlation length amplitudes are larger above $`T_c`$, so that one needs to go to larger values of $`|t|`$ to get rid of finite-size effects; and second, periodic boundary conditions move the peak in the finite-size susceptibility to higher temperatures, pushing away the plateau in $`\chi (t_+)^\gamma `$. After these initial difficulties we decided to repeat the exercise for the $`q=2`$ Ising model and found precisely the same effect.

Examination of all three cases led us to the following prescription: we measured the susceptibility above $`T_c`$ around the temperature where the correlation length is approximately $`\xi =\sqrt{L/2}`$; we next measured the longitudinal and transverse susceptibilities $`\chi _L`$ and $`\chi _T`$ at the corresponding temperatures $`t_{-}=t_+/c`$ below $`T_c`$ where the correlation length is the same; the exact temperatures used in the simulations are listed in Table 2. From $`\chi (t_+)`$, $`\chi _L(t_{-})`$ and $`\chi _T(t_{-})`$ we obtain the required ratios using $$\frac{\mathrm{\Gamma }_T}{\mathrm{\Gamma }_L}=\frac{\chi _T(t_{-})}{\chi _L(t_{-})},$$ (4.1) $$\frac{\mathrm{\Gamma }}{\mathrm{\Gamma }_L}=\frac{\chi (t_+)}{\chi _L(t_{-})}\left(\frac{t_+}{t_{-}}\right)^\gamma ,$$ (4.2) where $`\gamma _{Q=3}=13/9`$ and $`\gamma _{Q=4}=7/6`$. The results are presented in Table 2. The quoted statistical errors are two standard deviations; they are obtained by repeating the same procedure five or ten times, with different random number generator seeds.

## 5 Analysis and comparison with other work

We first discuss the comparison of our numerical results in Table 2 with the analytic predictions presented in Table 1. The most stable results are those for the ratio $`\mathrm{\Gamma }_T/\mathrm{\Gamma }_L`$ for $`q=3`$. The analytic prediction of 0.327 is just outside the statistical error bars, but we note that for a fixed temperature there is a consistent trend towards this value with increasing $`L`$. We conclude that the data support the analytic prediction in this case, particularly given the small but unknown errors of the two-particle truncation, which might be expected to lie in the third decimal place. However, for the ratio $`\mathrm{\Gamma }/\mathrm{\Gamma }_L`$, while the data are less stable, there is a clear trend towards a value below 10.0, with error bars which, although large, appear to exclude the analytic prediction of 13.8. The situation for $`q=4`$ is more complex. The data for $`\mathrm{\Gamma }_T/\mathrm{\Gamma }_L`$ appear fairly stable, yet even in the most favourable case lie at least three standard deviations above the analytic prediction. The situation for $`\mathrm{\Gamma }/\mathrm{\Gamma }_L`$ is even worse, as the results do not appear to be stable.
As remarked earlier, in the amplitude ratios the leading multiplicative logarithmic prefactors should cancel, and even some of the non-leading terms , but there is no reason to suppose this is true for the $`O(1/\mathrm{ln}|t|)`$ corrections and beyond. For $`\mathrm{\Gamma }_T/\mathrm{\Gamma }_L`$ it is conceivable that these corrections are responsible for the discrepancy with the analytic prediction, but we have not attempted to perform a fit including the $`O(1/\mathrm{ln}|t|)`$ corrections, since at the reduced temperatures at which we are working the neglect of the further corrections cannot be justified. The instability of the results for $`\mathrm{\Gamma }/\mathrm{\Gamma }_L`$ may be understood on plotting the scaled susceptibilities $`\chi (t_+)^\gamma `$ and $`\chi _L(t_{-})^\gamma `$, which should, asymptotically, reveal the logarithmic prefactors $`(\mathrm{ln}|t|)^{\overline{\gamma }}`$, with $`\overline{\gamma }=\frac{3}{4}`$. In fact (see Fig. 3) these have opposite slopes, indicating that, in this region, the effective exponents $`\overline{\gamma }`$ have different signs. Once again, this is probably explained by the importance of non-leading and non-universal $`1/\mathrm{ln}|t|`$ corrections. We deduce that our numerical results are inconclusive for $`q=4`$.

We now compare our results with those of some other recent studies. Salas and Sokal made a detailed study of logarithmic corrections in the $`q=4`$ model, including the susceptibility for $`T>T_c`$. Our raw data appear to be consistent with theirs, in the ranges of $`t_+`$ and $`L`$ for which they overlap. However, their main goal was to extract the exponent $`\overline{\gamma }`$ of the leading multiplicative logarithmic prefactor. They found that there was no region in which they could isolate such a prefactor and eliminate finite-size corrections. In fact, in order to determine $`\overline{\gamma }`$ they had to take such corrections systematically into account using a modified form of finite-size scaling. It is therefore no surprise that it should be impossible to determine the asymptotic amplitude of such a term from data taken over similar ranges.

Caselle et al. have also performed Monte Carlo simulations of the $`q=4`$ model with a view to extracting the susceptibility amplitude ratios and also the one involving the magnetisation. Before taking into account any logarithmic corrections, these authors’ estimates for the ratio $`\mathrm{\Gamma }/\mathrm{\Gamma }_L`$ disagree with the predictions of Ref. by a factor of about 2.5. By performing a linear extrapolation versus $`1/\mathrm{ln}|t|`$ they then find a corrected result in reasonably good agreement. However, one might argue that it is difficult to justify such a linear extrapolation, ignoring $`O(1/(\mathrm{ln}|t|)^2)`$ terms, when the resultant correction is so large. In any case, the analysis of Salas and Sokal indicates that finite-size effects cannot be excluded in the region where the non-leading logarithmic corrections are small. The agreement with the analytic predictions of Ref. may therefore be fortuitous, particularly since, as we argue here, the latter may well be wrong.

Ziff and co-workers , following the appearance of Ref. , have reanalysed the percolation data, which correspond to the limit $`q\to 1`$. Previous quoted results for the ratio of mean cluster size below and above $`p_c`$ ($`\sigma _+/\sigma _{-}`$, which is the $`q\to 1`$ limit of $`\mathrm{\Gamma }/\mathrm{\Gamma }_L`$, see Sec. 2) had ranged from 14 to 220. The prediction of Ref.
, based on simple extrapolation of the results for $`q=2,3,4`$ (which appears to work to 10% accuracy for other amplitude ratios), gives a value around 74. However, Ziff et al.’s value is $`163\pm 2`$, in complete disagreement. (Note that in percolation there are no finite-size effects and corrections to scaling are generally small.) Ziff et al. have also measured the ratio $`\sigma ^{\prime }/\sigma _{-}`$ of integrated correlations within the infinite cluster to those in the finite clusters, and find a value $`1.5\pm 0.2`$, in perfect agreement with our extrapolated value of 1.49 in Eq. (3.2). (There appears to have been some confusion over this point in the literature in the past. For example, Aharony and Stauffer on p. 60 (2nd ed.) state that the mean cluster size for $`p>p_c`$ is found by summing the connectedness function $`g(r)`$ over $`r`$ and subtracting $`P^2`$, where $`P`$ is the probability of a given site belonging to the infinite cluster. However, this would give $`\sigma _{-}+\sigma ^{\prime }`$.)

What conclusion is to be drawn? There appears to be firm confirmation of our new analytic results for the low-temperature ratio $`\mathrm{\Gamma }_T/\mathrm{\Gamma }_L`$ both from $`q=3`$ and from percolation, while there is strong evidence that the results of Ref. for the ratio $`\mathrm{\Gamma }/\mathrm{\Gamma }_L`$ are incorrect. The most likely source of error lies in the computation of the correlation function for $`T>T_c`$, since the low-temperature calculations are verified by the other ratio. We recall that in fact all calculations in Ref. were performed in the low-temperature phase, and that the order parameter form factors for $`T>T_c`$ were inferred from those of the disorder operator for $`T<T_c`$ by duality. In order to fix the ratio $`\mathrm{\Gamma }/\mathrm{\Gamma }_L`$ it is therefore crucial to be able to fix the relative normalisation of the order and disorder operator form factors. In Ref. this was done assuming an extension (to the case of theories with internal symmetries) of the factorisation result of . While this extension gives the correct result for $`q=2`$, the analysis of this paper suggests that this may not be the case for $`q=3,4`$. (This would affect the predictions of Ref. for the ratios $`\mathrm{\Gamma }/\mathrm{\Gamma }_L`$ and $`R_C`$, which are the only ones to be sensitive to the relative normalisation of order and disorder operators.)

## 6 Acknowledgements

We thank Robert Ziff for discussions and for communicating his results to us before publication. We have also benefited from correspondence with M. Caselle and A. Sokal. GTB gratefully acknowledges the High-performance computing group of Utrecht University for computer time. JC’s work was supported in part by EPSRC Grant GR/J78327.
## 1 Causes of extinction

There are two primary schools of thought about the causes of extinction. The traditional view, still held by most palaeontologists as well as many in other disciplines, is that extinction is the result of external stresses imposed on the ecosystem by the environment (Benton 1991, Hoffmann and Parsons 1991, Parsons 1993). There are indeed excellent arguments in favour of this viewpoint, since we have good evidence for particular exogenous causes for a number of major extinction events in the Earth’s history, such as marine regression (sea-level drop) for the late-Permian event (Jablonski 1985, Hallam 1989), and bolide impact for the end-Cretaceous (Alvarez et al. 1980, Alvarez 1983, 1987). These explanations are by no means universally accepted (Glen 1994), but almost all of the alternatives are also exogenous in nature, ranging from the mundane (climate change (Stanley 1984, 1988), ocean anoxia (Wilde and Berry 1984)) to the exotic (volcanism (Duncan and Pyle 1988, Courtillot et al. 1988), tidal waves (Bourgeois et al. 1988), magnetic field reversal (Raup 1985, Loper et al. 1988), supernovae (Ellis and Schramm 1995)). There seems to be little disagreement that, whatever the causes of these mass extinction events, they are the result of some change in the environment.

However, the mass extinction events account for only about 35% of the total extinction evident in the fossil record at the family level, and for the remaining 65% we have no firm evidence favouring one cause over another. Many believe, nonetheless, that all extinction can be accounted for by environmental stress on the ecosystem. The extreme point of view has been put forward (though not entirely seriously) by Raup (1992), who used statistical analyses of fossil extinction and of the effects of asteroid impact to show that, within the accuracy of our present data, it is conceivable that all terrestrial extinction has been caused by meteors and comets. This however is more a demonstration of the uncertainty in our present knowledge of the frequency of impacts and their biotic effects than a realistic theory.

At the other end of the scale, an increasing number of biologists and ecologists are supporting the idea that extinction has biotic causes—that extinction is a natural part of the dynamics of ecosystems and would take place regardless of any stresses arising from the environment. There is evidence in favour of this viewpoint also, although it is to a large extent anecdotal. Maynard Smith (1989) has given a variety of different examples of modern-day extinctions caused entirely by species interactions, such as the effects of overzealous predators, or the introduction of new competitors into formerly stable systems. The problem is that extinction events of this nature usually involve no more than a handful of species at most, and are therefore too small to be picked out over the “background” level of extinction in the fossil data, making it difficult to say with any certainty whether they constitute an important part of this background extinction. (The distinction between mass and background extinction events is discussed in more detail in Section 2.2.1.) The recent modelling work which is the primary focus of this review attempts to address this question by looking instead at statistical trends in the extinction record, such as the relative frequencies of large and small extinction events.
Using models which make predictions about these trends and comparing the results against fossil and other data, we can judge whether the assumptions which go into the models are plausible. Some of the models which we discuss are based on purely biotic extinction mechanisms, others on abiotic ones, and still others on some mixture of the two. Whilst the results of this work are by no means conclusive yet—there are a number of models based on different extinction mechanisms which agree moderately well with the data—there has been some encouraging progress, and it seems a promising line of research.

## 2 The data

In this section we review the palaeontological data on extinction. We also discuss a number of other types of data which may have bearing on the models we will be discussing.

### 2.1 Fossil data

The discovery and cataloguing of fossils is a painstaking business, and the identification of a single new species is frequently the sole subject of a published article in the literature. The models with which we are here concerned, however, predict statistical trends in species extinction, origination, diversification and so on. In order to study such statistical trends, a number of authors have therefore compiled databases of the origination and extinction times of species described in the literature. The two most widely used such databases are those of Sepkoski (1992) and of Benton (1993). Sepkoski’s data are labelled by both genus and family, although the genus-level data are, at the time of writing, unpublished. The database contains entries for approximately forty thousand marine genera, primarily invertebrates, from about five thousand families. Marine invertebrates account for the largest part of the known fossil record, and if one is to focus one’s attention on any single area, this is the obvious area to choose. Benton’s database by contrast covers both marine and terrestrial biotas, though it does so only at the family level, containing data on some seven thousand families.

The choice of taxonomic level in a compilation such as this is inevitably a compromise. Certainly we would like data at the finest level possible, and a few studies have even been attempted at the species level (e.g., Patterson and Fowler 1996). However, the accuracy with which we can determine the origination and extinction dates of a particular taxon depends on the number of fossil representatives of that taxon. In a taxon for which we have very few specimens, the chances of one of those specimens lying close to the taxon’s extinction date are slim, so that our estimate of this date will tend to be early. This bias is known as the Signor–Lipps effect (Signor and Lipps 1982). The reverse phenomenon, sometimes humorously referred to as the “Lipps–Signor” effect, is seen in the origination times of taxa, which in general err on the late side in poorly represented taxa. By grouping fossil species into higher taxa, we can work with denser data sets which give more accurate estimates of origination and extinction dates, at the expense of throwing out any information which is specific to the lower taxonomic levels (Raup and Boyajian 1988). (Higher taxa do, however, suffer from a greater tendency to paraphyly—see the discussion of pseudoextinction in Section 2.2.5.)

#### 2.1.1 Biases in the fossil data

The times of origination and extinction of species are usually recorded to the nearest geological stage.
Stages are intervals of geological time determined by stratigraphic methods, or in some cases by examination of the fossil species present. Whilst this is a convenient and widely accepted method of dating, it presents a number of problems. First, the dates of the standard geological stages are not known accurately. They are determined mostly by interpolation between a few widely-spaced calibration points, and even the timings of the major boundaries are still contested. In the widely-used timescale of Harland et al. (1990), for example, the Vendian–Cambrian boundary, which approximately marks the beginning of the explosion of multi-cellular life, is set at around 625 million years ago (Ma). However, more recent results indicate that its date may be nearer 545 Ma, a fairly significant correction (Bowring et al. 1993).

Another problem, which is particularly annoying where studies of extinction are concerned, is that the stages are not of even lengths. There are 77 stages in the Phanerozoic (the interval from the start of the Cambrian till the present, from which virtually all the data are drawn) with a mean length of 7.3 My, but they range in length from about 1 My to 20 My. If one is interested in calculating extinction rates, i.e., the number of species becoming extinct per unit time, then clearly one should divide the number dying out in each stage by the length of the stage. However, if, as many suppose, extinction occurs not in a gradual fashion but in intense bursts, this can give erroneous results. A single large burst of extinction which happens to fall in a short stage would give an anomalously high extinction rate, regardless of whether the average extinction rate was actually any higher than in surrounding times. Benton (1995) for example has calculated familial extinction rates in this way and finds that the largest apparent mass extinction event in the Earth’s history was the late Triassic event, which is measured to be 20 times the size of the end-Cretaceous one. This result is entirely an artifact of the short duration (1 to 2 My) of the Rhaetian stage at the end of the Triassic. In actual fact the late Triassic event killed only about half as many families as the end-Cretaceous.

In order to minimize effects such as these, it has become common in studies of extinction to examine not only extinction rates (taxa becoming extinct per unit time) but also total extinction (taxa becoming extinct in each stage). While the total extinction does not suffer from large fluctuations in short stages as described above, it obviously gives a higher extinction figure in longer stages in a way which rate measures do not. However, some features of the extinction record are found to be independent of the measure used, and in this case it is probably safe to assume that they are real effects rather than artifacts of the variation in stage lengths.

The use of the stages as a time scale has other problems associated with it as well. For example, it appears to be quite common to assign a different name to specimens of the same species found before and after a major stage boundary (Raup and Boyajian 1988), with the result that stage boundaries “generate” extinctions—even species which did not become extinct during a mass extinction event may appear to do so, by virtue of being assigned a new name after the event.

There are many other shortcomings in the fossil record. Good discussions have been given by Raup (1979a), Raup and Boyajian (1988) and Sepkoski (1996).
Here we just mention briefly a few of the most glaring problems. The “pull of the recent” is a name which refers to the fact that species diversity appears to increase towards recent times because recent fossils tend to be better preserved and easier to dig up. Whether this in fact accounts for all of the observed increase in diversity is an open question, one which we discuss further in Section 2.2.3. A related phenomenon affecting recent species (or higher taxa) is that some of them are still alive today. Since our sampling of living species is much more complete than our sampling of fossil ones, this biases the recent record heavily in favour of living species. This bias can be corrected for by removing living species from our fossil data. The “monograph” effect is a source of significant bias in studies of taxon origination. The name refers to the apparent burst of speciation seen as the result of the work of one particularly zealous researcher or group of researchers investigating a particular period; the record will show a peak of speciation over a short period of geological time, but this is only because that period has been so extensively researched. A closely related phenomenon is the so-called “Lagerstätten” effect, which refers to the burst of speciation seen when the fruits of a particularly fossil-rich site are added to the database. These and other fluctuations in the number of taxa—the standing diversity—over geologic time can be partly corrected for by measuring extinction as a fraction of diversity. Such “per taxon” measures of extinction may however miss real effects such as the slow increase in overall diversity over time discussed in Section 2.2.3. For this reason it is common in fact to calculate both per taxon and actual extinction when looking for trends in fossil data. Along with the two ways of treating time described above, this gives us four different extinction “metrics”: total number of taxa becoming extinct per stage, percentage of taxa becoming extinct per stage, number per unit time, and percentage per unit time. A source of bias in measures of the sizes of mass extinction events is poor preservation of fossils after a large event because of environmental disturbance. It is believed that many large extinction events are caused by environmental changes, and that these same changes may upset the depositional regime under which organisms are fossilized. In some cases this results in the poor representation of species which actually survived the extinction event perfectly well, thereby exaggerating the measured size of the event. There are a number of examples of so-called Lazarus taxa (Flessa and Jablonski 1983) which appear to become extinct for exactly this reason, only to reappear a few stages later. On the other hand, the Signor–Lipps effect discussed above tends to bias results in the opposite direction. Since it is unlikely that the last representative of a poorly-represented taxon will be found very close to the actual date of a mass-extinction event, it sometimes appears that species are dying out for a number of stages before the event itself, even if this is not in fact the case. Thus extinction events tend to get “smeared” backwards in time. In fact, the existence of Lazarus taxa can help us to estimate the magnitude of this problem, since the Signor–Lipps effect should apply to these taxa also, even though we know that they existed right up until the extinction event (and indeed beyond). 
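To keep the bookkeeping straight, the four extinction metrics defined above amount to the following (a schematic sketch; per-stage arrays of extinction counts, standing diversity and stage lengths in My are assumed as inputs).

```python
import numpy as np

def extinction_metrics(n_ext, diversity, stage_length):
    """The four per-stage extinction metrics discussed above: total taxa
    becoming extinct, percentage of standing diversity lost, taxa lost
    per My, and percentage lost per My."""
    n_ext = np.asarray(n_ext, dtype=float)
    diversity = np.asarray(diversity, dtype=float)
    stage_length = np.asarray(stage_length, dtype=float)
    percent = 100.0 * n_ext / diversity
    return n_ext, percent, n_ext / stage_length, percent / stage_length
```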
With all these biases present in the fossil data, one may well wonder whether it is possible to extract any information at all from the fossil record about the kinds of statistical trends with which our models are concerned. However, many studies have been performed which attempt to eliminate one or more of these biases, and some results are common to all studies. This has been taken as an indication that at least some of the trends visible in the fossil record transcend the rather large error bars on their measurement. In the next section we discuss some of these trends, particularly those which have been used as the basis for models of extinction, or cited as data in favour of such models.

### 2.2 Trends in the fossil data

There are a number of general trends visible in the fossil data. Good discussions have been given by Raup (1986) and by Benton (1995). Here we discuss some of the most important points, as they relate to the models with which this review is concerned.

#### 2.2.1 Extinction rates

In Figure 2.1.1 we show a plot of the number of families of marine organisms becoming extinct in each geological stage since the start of the Phanerozoic. The data are taken from an updated version of the compilation by Sepkoski (1992). It is clear from this plot that, even allowing for the irregular sizes of the stages discussed above, there is more variation in the extinction rate than could be accounted for by simple Poissonian fluctuations. In particular, a number of mass extinction events can be seen in the data, in which a significant fraction of the known families were wiped out simultaneously. Palaeontology traditionally recognizes five large extinction events in terrestrial history, along with quite a number of smaller ones (Raup and Sepkoski 1982). The “big five” are led by the late Permian event (indicated by the letter P in the figure) which may have wiped out more than 90% of the species on the planet (Raup 1979b). The others are the events which ended the Ordovician (O), the Devonian (D), the Triassic (Tr) and the Cretaceous (K). A sixth extinction peak at about 525 Ma is also visible in the figure (the leftmost large peak), but it is still a matter of debate whether this peak represents a genuine historical event or just a sampling error.

As discussed in Section 1, the cause of mass extinction events is a topic of much debate. However, it seems to be widely accepted that those causes, whatever they are, are abiotic, which lends strength to the view, held by many palaeontologists, that all extinction may have been caused by abiotic effects. The opposing view is that large extinction events may be abiotic in origin, but that smaller events, perhaps even at the level of single species, have biotic causes. Raup and Boyajian (1988) have investigated this question by comparing the extinction profiles of the nine major invertebrate groups throughout the Phanerozoic. While the similarities between these profiles are not as strong as between the extinction profiles of different subsets of the same group, they nonetheless find strong correlations between groups in the timing of extinction events. This may be taken as evidence that there is comparatively little taxonomic selectivity in the processes giving rise to mass extinction, which in turn favours abiotic rather than biotic causes.
In Figure 2.2, for example, reproduced from data given in their paper, we show the percentage extinction of bivalve families against percentage extinction of all other families, for each stage of the Phanerozoic. The positive correlation ($`r^2=0.78`$) of these data suggests a common cause for the extinction of bivalves and other species.

The shortcoming of these studies is that they can still only yield conclusions about correlations between extinction events large enough to be visible above the noise level in the data. It is perfectly reasonable to adopt the position that the large extinction events have exogenous causes, but that there is a certain level of “background” events which are endogenous in origin. In order to address this issue a number of researchers have constructed plots of the distribution of the sizes of extinction events; non-uniformity in such a distribution might offer support for distinct mass and background extinction mechanisms (Raup 1986, Kauffman 1993, Solé and Bascompte 1996). One such distribution is shown in Figure 2.2.1, which is a histogram of the number of families dying out per stage. This is not strictly the same thing as the sizes of extinction events, since several distinct events may contribute to the total in a given stage. However, since most extinction dates are only accurate to the nearest stage it is the best we can do. If many independent extinction events were to occur in each stage, then one would expect, from Poisson statistics (see, for instance, Grimmett and Stirzaker 1992), that the histogram would be approximately normally distributed. In actual fact, as the figure makes clear, the distribution is highly skewed and very far from a normal distribution (Raup 1996). This may indicate that extinction at different times is correlated, with a characteristic correlation time of the same order of magnitude as or larger than the typical stage length, so that the extinctions within a single stage are not independent events (Newman and Eble 1999a).

The histogram in Figure 2.2.1 shows no visible discontinuities, within the sampling errors, and therefore gives no evidence for any distinction between mass and background extinction events. An equivalent result has been derived by Raup (1991b) who calculated a “kill curve” for marine extinctions in the Phanerozoic by comparing Monte Carlo calculations of genus survivorship with survivorship curves drawn from the fossil data. The kill curve is a cumulative frequency distribution of extinctions which measures the frequency with which one can expect extinction events of a certain magnitude. Clearly this curve contains the same information as the distribution of extinction sizes, and it can be shown that the conversion from one to the other involves only a simple integral transform (Newman 1996). On the basis of Raup’s calculations, there is again no evidence for a separation between mass and background extinction events in the fossil record.

This result is not necessarily a strike against extinction models which are based on biotic causes. First, it has been suggested (Jablonski 1986, 1991) that although there may be no quantitative distinction between mass and background events, there could be a qualitative one; it appears that the traits which confer survival advantages during periods of background extinction may be different from those which allow species to survive a mass extinction, so that the selection of species becoming extinct under the two regimes is different.
Second, there are a number of models which predict a smooth distribution of the sizes of extinction events all the way from the single species level up to the size of the entire ecosystem simply as a result of biotic interactions. In fact, the distribution of extinction sizes is one of the fundamental predictions of most of the models discussed in this review. Although the details vary, one of the most striking features which these models have in common is their prediction that the extinction distribution should follow a power law, at least for large extinction events. In other words, the probability $`p(s)`$ that a certain fraction $`s`$ of the extant species/genera/families will become extinct in a certain time interval (or stage) should go like $$p(s)\sim s^{-\tau },$$ (1) for large $`s`$, where $`\tau `$ is an exponent whose value is determined by the details of the model. This is a conjecture which we can test against the fossil record.

In Figure 2.2.1 we have replotted the data from Figure 2.2.1 using logarithmic scales, on which a power-law form should appear as a straight line with slope $`-\tau `$. As pointed out by Solé and Bascompte (1996), and as we can see from the figure, the data are indeed compatible with the power-law form (in this case we have excluded the first point on the graph from our fit, which is justifiable since the power law is only expected for large values of $`s`$), but the error bars are large enough that they are compatible with other forms as well, including the exponential shown in the figure.

In cases such as this, where the quality of the data makes it difficult to distinguish between competing forms for the distribution, a useful tool is the rank/frequency plot. A rank/frequency plot for extinction is constructed by taking the stratigraphic stages and numbering them in decreasing order of number of taxa becoming extinct. Thus the stage in which the largest number of taxa become extinct is given rank 1, the stage with the second largest number is given rank 2, and so forth. Then we plot the number of taxa becoming extinct as a function of rank. It is straightforward to show (Zipf 1949) that distributions which appear as power laws or exponentials in a histogram such as Figure 2.2.1 will appear as power laws and exponentials on a rank/frequency plot also. However, the rank/frequency plot has the significant advantage that the data points need not be grouped into bins as in the histogram. Binning the data reduces the number of independent points on the plot and throws away much of the information contained in our already sparse data set. Thus the rank/frequency plot often gives a better guide to the real form of a distribution.

In Figure 2.2.1 we show a rank/frequency plot of extinctions of marine families in each stage of the Phanerozoic on logarithmic scales. As we can see, this plot does indeed provide a clearer picture of the behaviour of the data, although ultimately the conclusions are rather similar. The points follow a power law quite well over the initial portion of the plot, up to extinctions on the order of 40 families or so, but deviate markedly from power law beyond this point. The inset shows the same data on semi-logarithmic scales, and it appears that they may fall on quite a good straight line, although there are deviations in this case as well. Thus it appears again that the fossil data could indicate either a power-law or an exponential form (and possibly other forms as well).
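The rank/frequency construction itself is elementary; a minimal sketch (array input assumed):

```python
import numpy as np

def rank_frequency(per_stage_extinctions):
    """Sort the per-stage extinction counts in decreasing order and pair
    each count with its rank 1, 2, 3, ...  On doubly logarithmic axes a
    power-law distribution traces a straight line; on semi-logarithmic
    axes an exponential does."""
    counts = np.sort(np.asarray(per_stage_extinctions))[::-1]
    return np.arange(1, len(counts) + 1), counts
```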
More sophisticated analysis (Newman 1996) has not settled this question, although it does indicate that the Monte Carlo results of Raup (1991b) are in favour of the power-law form, rather than the exponential one, and also allows for a reasonably accurate measurement of the exponent of the power law, giving $`\tau =2.0\pm 0.2`$. This value can be compared against the predictions of the models.

#### 2.2.2 Extinction periodicity

In an intriguing paper published in 1984, Raup and Sepkoski have suggested that the mass extinction events seen in the most recent 250 My or so of the fossil record occur in a periodic fashion, with a period of about 26 My (Raup and Sepkoski 1984, 1986, 1988, Sepkoski 1989, 1990). Figure 2.2.1 shows the curve of extinction intensity for marine invertebrate genera from the middle Permian to the Recent, taken from Sepkoski’s data, with the postulated periodicity indicated by the vertical lines. A number of theories have been put forward, mostly based on astronomical causes, to explain how such a periodicity might arise (Davis et al. 1984, Rampino and Stothers 1984, Whitmire and Jackson 1984, Hut et al. 1987). More recently however, it has been suggested that the periodicity has more mundane origins. Patterson and Smith (1987, 1989), for instance, have theorized that it may be an artifact of noise introduced into the data by poor taxonomic classification (Sepkoski and Kendrick (1993) argue otherwise), while Stanley (1990) has suggested that it may be a result of delayed recovery following large extinction events.

A quantitative test for periodicity of extinction is to calculate the power spectrum of extinction over the appropriate period and look for a peak at the frequency corresponding to 26 My. We have done this in Figure 2.2.1 using data for marine families from the Sepkoski compilation. As the figure shows, there is a small peak in the spectrum around the relevant frequency (marked with an arrow), but it is not significant given the level of noise in the data. On the other hand, similar analyses by Raup and Sepkoski (1984) and by Fox (1987) using smaller databases do appear to produce a significant peak. The debate on this question is still in progress.

The power spectrum of fossil extinction is interesting for other reasons. Solé et al. (1997) have suggested on the basis of calculations using fossil data from the compilation by Benton (1993) that the spectrum has a $`1/f`$ form, i.e., it follows a power law with exponent $`-1`$. This result would be intriguing if true, since it would indicate that extinction at different times in the fossil record was correlated on arbitrarily long time-scales. However, it now appears likely that the form found by Solé et al. is an artifact of the method of analysis, rather than a real result (Kirchner and Weil 1998). Spectra calculated using other methods do not show the $`1/f`$ form and can be explained without assuming any long-time correlations: they are consistent with an exponential form at low frequencies crossing over to a $`1/f^2`$ behaviour at high frequencies (Newman and Eble 1999a).

#### 2.2.3 Origination and diversity

The issue of origination rates of species in the fossil record is in a sense complementary to that of extinction rates, but has been investigated in somewhat less depth. Interesting studies have been carried out by, for example, Gilinsky and Bambach (1987), Jablonski and Bottjer (1990a, 1990b, 1990c), Jablonski (1993), Sepkoski (1998) and Eble (1998, 1999).
One clear trend is that peaks in the origination rate appear in the immediate aftermath of large extinction events. In Figure 2.2.2 we show the number of families of marine organisms appearing per stage. Comparison with Figure 2.1.1 shows that there are peaks of origination corresponding to all of the prominent extinction peaks, although the correspondence between the two curves is by no means exact. The usual explanation for these bursts of origination is that new species find it easier to get a toe-hold in the taxonomically under-populated world which exists after a large extinction event. As the available niches in the ecosystem fill up, this is no longer the case, and origination slows. Many researchers have interpreted this to mean that there is a saturation level above which a given ecosystem can support no more new species, so that, apart from fluctuations in the immediate vicinity of the large extinction events, the number of species is approximately constant. This principle has been incorporated into most of the models considered in this review; the models assume a constant number of species and replace any which become extinct by an equal number of newly-appearing ones. (The “reset” model considered in Section 8 is an important exception.)

However, the hypothesis of constant species number is not universally accepted. In the short term, it appears to be approximately correct to say that a certain ecosystem can support a certain number of species. Modern-day ecological data on island biogeography support this view (see for example Rosenzweig (1995)). However, on longer timescales, the diversity of species on the planet appears to have been increasing, as organisms discover for the first time ways to exploit new habitats or resources. In Figure 2.2.3 we show the total number of known fossil families as a function of geological time. The vertical axis is logarithmic, and the approximately straight-line form indicates that the increase in diversity is roughly exponential, although logistic and linear growth forms have been suggested as well (Sepkoski 1991, Newman and Sibani 1999). As discussed in Section 2.1.1, one must be careful about the conclusions one draws from such figures, because of the apparent diversity increase caused by the “pull of the recent”. However, current thinking mostly reflects the view that there is a genuine diversity increase towards recent times associated with the expansion of life into new domains. As Benton (1995) has put it: “There is no evidence in the fossil record of a limit to the ultimate diversity of life on Earth”.

#### 2.2.4 Taxon lifetimes

Another quantity which has been compared with the predictions of a variety of extinction models is the distribution of the lifetimes of taxa. In Figure 2.2.3 we show a histogram of the lifetimes of marine genera in the Sepkoski database. The axes of the figure are logarithmic and the solid and dotted lines represent respectively power-law and exponential fits to the data. At first glance it appears from this figure that the lifetime distribution is better fitted by the exponential form. This exponential has a time constant of $`40.1`$ My, which is of the same order of magnitude as the mean genus lifetime of $`30.1`$ My. An exponential distribution of this type is precisely what one would expect to see if taxa are becoming extinct at random with a constant average rate (a Poisson process). A number of authors have however argued in favour of the power-law fit (Sneppen et al. 1995, Bak 1996).
The power-law fit in the figure is a fit only to the data between 10 and 100 My. In this interval it actually matches the data quite well, but for longer or shorter lifetimes the agreement is poor. Why then should we take this suggestion seriously? The answer is that both very long and very short lifetimes are probably under-represented in the database because of systematic biases. First, since the appearance and disappearance of genera are recorded only to the nearest stage, lifetimes of less than the length of the corresponding stage are registered as being zero and do not appear on the histogram. This means that lifetimes shorter than the average stage length of about 7 My are under-represented. Second, as mentioned briefly in Section 2.1.1, a taxon is sometimes given a different name before and after a major stage boundary, even though little or nothing about that taxon may have changed. This means that the number of species with lifetimes longer than the typical separation of these major boundaries is also underestimated in our histogram. This affects species with lifetimes greater than about 100 My. Thus there are plausible reasons for performing a fit only in the central region of Figure 2.2.3 and in this case the power-law form is quite a sensible conjecture.

The exponent of the power law for the central region of the figure is measured to be $`\alpha =1.6\pm 0.1`$. This value is questionable however, since it depends on which data we choose to exclude at long and short times. In fact, a case can be made for any value between about $`\alpha =1.2`$ and $`2.2`$. In this review we take a working figure of $`\alpha =1.7\pm 0.3`$ for comparison with theoretical models. Several of these models provide explanations for a power-law distribution of taxon lifetimes, with figures for $`\alpha `$ in reasonable agreement with this value.

We should point out that there is a very simple possible explanation for a power-law distribution of taxon lifetimes which does not rely on any detailed assumptions about the nature of evolution. If the addition and removal of species from a genus (or any sub-taxa from a taxon) are stochastically constant and take place at roughly the same rate, then the number of species in the genus will perform an ordinary random walk. When this random walk reaches zero—the so-called first return time—the genus becomes extinct. Thus the distribution of the lifetimes of genera is also the distribution of first return times of a one-dimensional random walk. As is easily demonstrated (see Grimmett and Stirzaker (1992), for example), the distribution of first return times follows a power law with exponent $`\frac{3}{2}`$, in reasonable agreement with the figure extracted from the fossil record above. An alternative theory is that speciation and extinction should be multiplicative, i.e., proportional to the number of species in the genus. In this case the logarithm of the size of the genus performs a random walk, but the end result is the same: the distribution of lifetimes is a power law with exponent $`\frac{3}{2}`$.

#### 2.2.5 Pseudoextinction and paraphyly

One possible source of discrepancy between the models considered in this paper and the fossil data is the way in which an extinction is defined. In the palaeontological literature a distinction is usually drawn between “true extinction” and “pseudoextinction”. The term pseudoextinction refers to the evolution of a species into a new form, with the resultant disappearance of the ancestral form.
#### 2.2.5 Pseudoextinction and paraphyly

One possible source of discrepancy between the models considered in this paper and the fossil data is the way in which an extinction is defined. In the palaeontological literature a distinction is usually drawn between “true extinction” and “pseudoextinction”. The term pseudoextinction refers to the evolution of a species into a new form, with the resultant disappearance of the ancestral form. The classic example is that of the dinosaurs. If, as is supposed by some, modern birds are the descendants of the dinosaurs (Gauthier 1986, Chiappe 1995), then the dinosaurs did not truly become extinct, but only pseudoextinct. Pseudoextinction is of course a normal part of the evolutionary process; Darwin’s explanation of the origin of species is precisely the replacement of strains by their own fitter mutant offspring. And certainly this is a form of extinction, in that the ancestral strain will no longer appear in the fossil record. However, palaeontology makes a distinction between this process and true extinction—the disappearance of an entire branch of the phylogenetic tree without issue—presumably because the causes of the two are expected to be different. Pseudoextinction is undoubtedly a biotic process (although the evolution of a species and subsequent extinction of the parent strain may well be brought on by exogenous pressures—see Roy (1996), for example). On the other hand, many believe that we must look to environmental effects to find the causes of true extinction (Benton 1991).

Some of the models discussed in this review are models of true extinction, with species becoming extinct and being replaced by speciation from other, unrelated species. Others however deal primarily with pseudoextinction, predicting varying rates of evolution over the course of time, with mass extinction arising as the result of periods of intense evolutionary activity in which many species evolve to new forms, causing the pseudoextinction of their parent forms. It may not be strictly fair to compare models such as these to the fossil data on extinction presented above. To be sure, the data on extinction dates from which the statistics are drawn do not distinguish between true extinction and pseudoextinction; all that is recorded is the last date at which a specimen of a certain species is found. However, the grouping of the data into higher taxa, as discussed in Section 2.1, does introduce such a distinction. When a species evolves to a new form, causing the pseudoextinction of the ancestral form, the new species is normally assigned to the same higher taxa—genus and family—as the ancestor. Thus a compilation of data at the genus or family level will not register the pseudoextinction of a species at this point. The extinction of a genus or family can normally only occur when its very last constituent species becomes (truly) extinct, and therefore the data on the extinction times of higher taxa reflect primarily true extinctions.

However, the situation is not entirely straightforward. The assignment of species to genera and genera to families is, to a large extent, an arbitrary process, with the result that whilst the argument above may apply to a large portion of the data, there are many anomalies of taxonomy which give rise to exceptions. Strictly, the correct way to construct a taxonomic tree is to use cladistic principles. A clade is a group of species which all claim descent from one ancestral species. In theory one can construct a tree in which each taxon is monophyletic, i.e., is composed only of members of one clade. Such a tree is not unique; there is still a degree of arbitrariness introduced by differences of opinion over when a species should be considered the founding member of a new taxon.
However, to the extent that such species are a small fraction of the total, the arguments given above for the absence of pseudoextinction from the fossil statistics, at the genus level and above, are valid. In practice, however, cladistic principles are hard to apply to fossil species, whose taxonomic classification is based on morphology rather than on a direct knowledge of their lines of descent. In addition, a large part of our present classification scheme has been handed down to us by a tradition which predates the introduction of cladism. The distinction between dinosaurs and birds, for example, constitutes exactly such a traditional division. As a result, many—indeed most—taxonomic groups, particularly higher ones, tend to be paraphyletic: the members of the taxa are descended from more than one distinct ancestral species, whose own common ancestor belonged to another taxon. Not only does this failing upset our arguments concerning pseudoextinction above, but it also, by virtue of the resulting unpredictable nature of the taxonomic hierarchy, introduces errors into our statistical measures of extinction which are hard to quantify (Sepkoski and Kendrick 1993). As Raup and Boyajian (1988) put it: “If all paraphyletic groups were eliminated from taxonomy, extinction patterns would certainly change”.

### 2.3 Other forms of data

There are a few other forms of data which are of interest in connection with the models we will be discussing. Chief amongst these are taxonomic data on modern species, and simulation data from so-called artificial life experiments.

#### 2.3.1 Taxonomic data

As long ago as 1922, it was noted that if one takes the taxonomic hierarchy of current organisms, counts the number of species $`n_s`$ in each genus, and makes a histogram of the number of genera $`n_g`$ for each value of $`n_s`$, then the resulting graph has a form which closely follows a power law (Willis 1922, Williams 1944):

$$n_g \sim n_s^{-\beta }.$$ (2)

In Figure 2.3.1, for example, we reproduce the results of Willis for the number of species per genus of flowering plants. The measured exponent in this case is $`\beta =1.5\pm 0.1`$. Recently, Burlando (1990, 1993) has extended these results to higher taxa, showing that the number of genera per family, families per order, and so forth, also follow power laws, suggesting that the taxonomic tree has a fractal structure, a result of some interest to those working on “critical” models of extinction (see Section 5.5).

In certain cases, for example if one makes the assumption that speciation and extinction rates are stochastically constant, it can be shown that the average number of species in a genus bears a power-law relation to the lifetime of the genus, in which case Willis’s data are merely another consequence of the genus lifetime distribution discussed in Section 2.2.4. Even if this is true, however, these data are nonetheless important, since they are derived from a source entirely different from the ones we have so far considered, namely from living species rather than fossil ones.

Note that we need to be careful about the way these distributions are calculated. A histogram of genus sizes constructed using fossil data drawn from a long period of geologic time is not the same thing as one constructed from a snapshot of genera at a single point in time. A snapshot tends to favour longer-lived genera, which also tend to be larger, and this produces a histogram with a lower exponent than if the data are drawn from a long time period.
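As an illustration of how such a histogram is analysed, the following sketch (ours; the Zipf-distributed genus sizes are a stand-in for real taxonomic data, and the input exponent of 2.5 is arbitrary) builds the species-per-genus histogram and extracts the exponent $`\beta `$ by a straight-line fit on logarithmic scales.

```python
# Sketch: estimating the Willis exponent beta from a list of genus sizes.
# The sizes here are synthetic (Zipf-distributed with exponent 2.5); with
# real data one would substitute the observed species counts per genus.
import numpy as np

rng = np.random.default_rng(2)
sizes = rng.zipf(a=2.5, size=50000)        # number of species in each genus

# n_g for each value of n_s, i.e., the histogram described in the text.
values, counts = np.unique(sizes, return_counts=True)
mask = counts > 5                          # drop poorly sampled size classes
slope, _ = np.polyfit(np.log(values[mask]), np.log(counts[mask]), 1)
print("measured beta ~", -slope)           # recovers the input value of 2.5
```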
Most of the models discussed in this review deal with long periods of geologic time, and therefore mimic data of the former kind better than those of the latter. Willis’s data, which are taken from living species, are inherently of the “snapshot” variety, and hence may have a lower value of $`\beta `$ than that seen in fossil data and in models of extinction.

#### 2.3.2 Artificial life

Artificial life (Langton 1995) is the name given to a class of evolutionary simulations which attempt to mimic the processes of natural selection, without imposing a particular selection regime from outside. (By contrast, most other computation techniques employing ideas drawn from evolutionary biology call upon the programmer to impose fitness functions or reproductive selection on the evolving population. Genetic algorithms (Mitchell 1996) are a good example of such techniques.)

Probably the work of most relevance to the evolutionary biologist is that of Ray and collaborators (Ray 1994a, 1994b), who created a simulation environment known as Tierra, in which computer programs reproduce and compete for the computational resources of CPU time and memory. The basic idea behind Tierra is to create an initial “ancestor” program which makes copies of itself. The sole function of the program is to copy the instructions which comprise it into a new area of the computer’s memory, so that, after some time has gone by, there will be a large number of copies of the same program running at once. However, the trick is that the system is set up so that the copies are made in an unreliable fashion. Sometimes a perfect copy is made, but sometimes a mistake occurs, so that the copy differs from the ancestor. Usually such mistakes result in a program which is not able to reproduce itself any further. However, occasionally they result in a program which reproduces more efficiently than its ancestor, and hence dominates over the ancestor after a number of generations. In systems such as this, many of the features of evolving biological systems have been observed, such as programs which cooperate in order to aid one another’s efficient reproduction and parasitic programs which steal resources from others in order to reproduce more efficiently.

In the context of the kinds of models we will be studying here, the recent work of Adami (1995) using the Tierra system has attracted attention. In his work, Adami performed a number of lengthy runs of the Tierra simulation and observed the lifetimes of the species appearing throughout the course of the simulations. In Figure 2.3.2 we show some of his results. The distribution of lifetimes appears again to follow a power law, except for a fall-off at long lifetimes, which may be accounted for by the finite length of the simulations. (Although the integrated distribution in Figure 2.3.2 does not appear to follow a straight line very closely, Adami (1995) shows that in fact it has precisely the form expected if the lifetimes are cut off exponentially.) This result appears to agree with the fossil evidence discussed in Section 2.2.4, where the lifetimes of taxa were also found to follow a distribution approximately power-law in form. Possible explanations of this result have been discussed by Newman et al. (1997).
## 3 Early models of extinction

Most discussion of extinction has taken place at the species level, which is natural since extinction is intrinsically a species-level effect—by extinction we mean precisely the disappearance of a species, although the concept is frequently extended to cover higher taxa as well. Our discussion will also take place mostly at the species and higher taxonomic levels, but we should bear in mind that the processes underlying extinction occur at the level of the individual. McLaren (1988), for instance, has argued that it would be better to use the term “mass killing”, rather than “mass extinction”, since it is the death of individuals rather than species which is the fundamental process taking place.

Although many fossils of extinct species were unearthed during the eighteenth and early nineteenth centuries, it was not until the theory of evolution gained currency in the latter half of the nineteenth century that extinction became an accepted feature of the history of life on Earth. One of the earliest serious attempts to model extinction was that of Lyell (1832), whose ideas, in some respects, still stand up even today. He proposed that when species first appear (he did not tackle the then vexed question of exactly how they appear) they possess varying fitnesses, and that those with the lowest fitness ultimately become extinct as a result of selection pressure from other species, and are then replaced by new species. While this model does not explain many of the most interesting features of the fossil record, it does already take a stand on many of the crucial issues in today’s extinction debates: it is an equilibrium model with (perhaps) a roughly constant number of species and it has an explicit mechanism for extinction (species competition) which is still seriously considered as one of the causes of extinction. It also hints at a way of quantifying the model by using a numerical fitness measure.

A few years after Lyell put forward his ideas about extinction, Darwin extended them by emphasizing the appearance of new species through speciation from existing ones. In his view, extinction arose as a result of competition between species and their descendants, and was therefore dominated by the process which we referred to as “pseudoextinction” in Section 2.2.5. The Darwin–Lyell viewpoint is essentially a gradualist one. Species change gradually, and become extinct one by one as they are superseded by new fitter variants. As Darwin wrote in the Origin of Species (Darwin 1859): “Species and groups of species gradually disappear, one after another, first from one spot, then from another, and finally from the world.” The obvious problem with this theory is the regular occurrence of mass extinctions in the fossil record. Although the existence of mass extinctions was well-known in Darwin’s time, Darwin and Lyell both argued strongly that they were probably a sampling artifact generated by the inaccuracy of dating techniques rather than a real effect. Today we know this not to be the case, and a purely gradualist picture no longer offers an adequate explanation of the facts. Any serious model of extinction must take mass extinction into account.

With the advent of reasonably comprehensive databases of fossil species, as well as computers to aid in their analysis, a number of simple models designed to help interpret and understand extinction data were put forward in the 1970s and 1980s.
In 1973, van Valen proposed what he called the “Red Queen hypothesis”: the hypothesis that the probability per unit time of a particular species becoming extinct is independent of time. This “stochastically constant” extinction is equivalent to saying that the probability of a species surviving for a certain length of time $`t`$ decays exponentially with $`t`$. This is easy to see, since if $`p`$ is the constant probability per unit time of the species becoming extinct, then $`1-p`$ is the probability that it does not become extinct in any unit time interval, and

$$P(t)=(1-p)^t=\mathrm{e}^{-t/\tau }$$ (3)

is the probability that it survives $`t`$ consecutive time intervals, where

$$\tau =-\frac{1}{\mathrm{log}(1-p)}\simeq \frac{1}{p},$$ (4)

where the second relation applies for small $`p`$.

Van Valen used this argument to validate his hypothesis, by plotting “survivorship curves” for many different groups of species (van Valen 1973). A survivorship curve is a plot of the number of species surviving out of an initial group as a function of time starting from some arbitrary origin. In other words, one takes a group of species and counts how many of them are still present in the fossil record after time $`t`$. It appears that the time constant $`\tau `$ is different for the different groups of organisms examined by van Valen but roughly constant within groups, and in this case the survivorship curves should fall off exponentially. In Figure 3 we reproduce van Valen’s results for extinct genera of mammals. The approximately straight-line form of the survivorship curve on semi-logarithmic scales indicates that the curve is indeed exponential, a result now known as “van Valen’s law”. Van Valen constructed similar plots for many other groups of genera and families and found similar stochastically constant extinction there as well.

Van Valen’s result, that extinction is uniform in time, has been used as the basis for a number of other simple extinction models, some of which are discussed in this paper. However, for a number of reasons, it must certainly be incorrect. First, it is not mathematically possible for van Valen’s law to be obeyed at more than one taxonomic level. As Raup (1991b) has demonstrated, if species become extinct at a stochastically constant rate $`p`$, the survivorship curve $`S`$ for genera will not in general be exponential, because it depends not only on the extinction rate but also on the speciation rate. The general form for the genus survivorship curve is

$$S=1-\frac{p[\mathrm{e}^{(q-p)t}-1]}{q\mathrm{e}^{(q-p)t}-p},$$ (5)

where $`q`$ is the average rate of speciation within the genus. A similar form applies for higher taxa as well. Second, van Valen’s law clearly cannot tell the whole story since, just like the theories of Lyell and Darwin, it is a gradualist model and takes no account of known mass extinction events in the fossil record. Raup (1991b, 1996) gives the appropriate generalization of van Valen’s work to the case in which extinction is not stochastically constant. In this case, the mean survivorship curve follows van Valen’s law (or Equation (5) for higher taxa), but individual curves show a dispersion around this mean whose width is a measure of the distribution of the sizes of extinction events. It was in this way that Raup extracted the kill curve discussed in Section 2.2.1 for Phanerozoic marine invertebrates.
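Equation (5) is straightforward to check by direct simulation. The sketch below (ours; the rates $`p`$ and $`q`$ and all other parameters are arbitrary) follows genera founded by a single species, with constant per-species extinction and speciation rates, and compares the simulated genus survivorship with Equation (5).

```python
# Sketch: genus survivorship when species within the genus go extinct at
# stochastically constant rate p and speciate at rate q, compared with the
# genus survivorship curve of Equation (5). Rates and times are arbitrary.
import numpy as np

rng = np.random.default_rng(3)
p, q, t_max, n_genera = 0.1, 0.12, 60.0, 5000

def genus_extinction_time(p, q, t_max):
    """Gillespie simulation of one genus, founded by a single species."""
    n, t = 1, 0.0
    while n > 0 and t < t_max:
        rate = n * (p + q)
        t += rng.exponential(1.0 / rate)     # waiting time to the next event
        if rng.random() < q / (p + q):
            n += 1                           # speciation within the genus
        else:
            n -= 1                           # extinction of one species
    return t if n == 0 else np.inf           # inf: genus alive at t_max

deaths = np.array([genus_extinction_time(p, q, t_max)
                   for _ in range(n_genera)])

ts = np.linspace(1.0, t_max - 1.0, 12)
for t in ts:
    measured = (deaths > t).mean()           # fraction of genera surviving
    theory = 1 - p * (np.exp((q - p) * t) - 1) / (q * np.exp((q - p) * t) - p)
    print(f"t = {t:5.1f}   simulated S = {measured:.3f}   Equation (5): {theory:.3f}")
```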
These models, however, are all fundamentally just different ways of looking at empirical data. None of them offer actual explanations of the observed distributions of extinction events, or explain the various forms discussed in Section 2. In the remainder of this review we discuss a variety of quantitative models which have been proposed in the last ten years to address these questions.

## 4 Fitness landscape models

Kauffman (1993, 1995, Kauffman and Levin 1987, Kauffman and Johnsen 1991) has proposed and studied in depth a class of models referred to as NK models, which are models of random fitness landscapes on which one can implement a variety of types of evolutionary dynamics and study the development and interaction of species. (The letters $`N`$ and $`K`$ do not stand for anything; they are the names of parameters in the model.) Based on the results of extensive simulations of NK models, Kauffman and co-workers have suggested a number of possible connections between the dynamics of evolution and the extinction rate. To a large extent it is this work which has sparked recent interest in biotic mechanisms for mass extinction. In this section we review Kauffman’s work in detail.

### 4.1 The NK model

An NK model is a model of a single rugged landscape, which is similar in construction to the spin-glass models of statistical physics (Fischer and Hertz 1991), particularly $`p`$-spin models (Derrida 1980) and random energy models (Derrida 1981). Used as a model of species fitness (NK models have been used as models of a number of other things as well—see, for instance, Kauffman and Weinberger (1989) and Kauffman and Perelson (1990)), the NK model maps the states of a model genome onto a scalar fitness $`W`$. This is a simplification of what happens in real life, where the genotype is first mapped onto phenotype and only then onto fitness. However, it is a useful simplification which makes simulation of the model for large systems tractable. As long as we bear in mind that this simplification has been made, the model can still teach us many useful things.

The NK model is a model of a genome with $`N`$ genes. Each gene has $`A`$ alleles. In most of Kauffman’s studies of the model he used $`A=2`$, a binary genetic code, but his results are not limited to this case. The model also includes epistatic interactions between genes—interactions whereby the state of one gene affects the contribution of another to the overall fitness of the species. In fact, it is these epistatic interactions which are responsible for the ruggedness of the fitness landscape. Without any interaction between genes it is possible (as we will see) to optimize individually the fitness contribution of each single gene, and hence to demonstrate that the landscape has the so-called Fujiyama form, with only a single global fitness peak.

In the simplest form of the NK model, each gene interacts epistatically with $`K`$ others, which are chosen at random. The fitness contribution $`w_j`$ of gene $`j`$ is a function of the state of the gene itself and each of the $`K`$ others with which it interacts. For each of the $`A^{K+1}`$ possible states of these $`K+1`$ genes, a value for $`w_j`$ is chosen randomly from a uniform distribution between zero and one. The total fitness is then the average over all genes of their individual fitness contributions:

$$W=\frac{1}{N}\sum _{j=1}^{N}w_j.$$ (6)

This procedure is illustrated in Figure 4.1 for a simple three-gene genome with $`A=2`$ and $`K=1`$. Some points to notice about the NK model are:
1. The choices of the random numbers $`w_j`$ are “quenched”, which is to say that once they have been chosen they do not change again. The choices of the $`K`$ other genes with which a certain gene interacts are also quenched. Thus the fitness attributed to a particular genotype is the same every time we look at it.

2. There is no correlation between the contribution $`w_j`$ of gene $`j`$ to the total fitness for different alleles of the gene, or for different alleles of any of the genes with which it interacts. If any single one of these $`K+1`$ genes is changed to a different state, the new value of $`w_j`$ is completely unrelated to its value before the change. This is an extreme case. In reality, epistatic interactions may have only a small effect on the fitness contribution of a gene. Again, however, this is a simplifying assumption which makes the model tractable.

3. In order to think of the NK model as generating a fitness “landscape” with peaks and valleys, we have to say which genotypes are close together and which far apart. In biological evolution, where the most common mutations are mutations of single genes, it makes sense to define the distance between two genotypes to be the number of genes by which they differ. This definition of distance, or “metric”, is used in all the studies discussed here. A (local) peak is then a genotype that has higher fitness than all $`N(A-1)`$ of its nearest neighbours, those at distance 1 away.

4. The fact of taking an average over the fitness contributions of all the genes in Equation (6) is crucial to the behaviour of the model. Taking the average has the effect that the typical height of fitness peaks diminishes with increasing $`N`$. In fact, one can imagine defining the model in a number of other ways. One could simply take the total fitness to be the sum of the contributions from all the genes—organisms with many genes therefore tending to be fitter than ones with fewer. In this case one would expect to see the reverse of the effect described above, with the average height of adaptive peaks increasing with increasing $`N`$. One might also note that since $`W`$ is the sum of a number of independent random variables, its values should, by the central limit theorem, be approximately normally distributed with a standard deviation increasing as $`\sqrt{N}`$ with the number of genes. Therefore, it might make sense to normalize the sum with a factor of $`N^{-1/2}`$, so that the standard deviation remains constant as $`N`$ is varied. Either of these choices would change some of the details of the model’s behaviour. For the moment, however, we stick with the model as defined above.
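The definition above translates directly into code. The following sketch (ours, and deliberately unoptimized; all parameter values are illustrative) implements the NK fitness function of Equation (6) for binary alleles: each gene carries a quenched random lookup table indexed by its own state and the states of its $`K`$ epistatic partners.

```python
# Sketch: the NK fitness function as described above. Each gene j has a
# lookup table of random fitness contributions, indexed by its own allele
# and the alleles of the K genes with which it interacts epistatically.
import numpy as np

rng = np.random.default_rng(4)

class NKLandscape:
    def __init__(self, N, K, A=2):
        self.N, self.K, self.A = N, K, A
        # K interaction partners per gene, chosen at random and quenched
        # (fixed once chosen), excluding the gene itself.
        self.partners = np.array(
            [rng.choice([i for i in range(N) if i != j], size=K, replace=False)
             for j in range(N)])
        # Quenched random contribution w_j for each of the A**(K+1) states
        # of gene j together with its partners.
        self.table = rng.random((N, A ** (K + 1)))

    def fitness(self, genome):
        """Average of the per-gene contributions, Equation (6)."""
        W = 0.0
        for j in range(self.N):
            state = genome[j]
            for g in self.partners[j]:           # encode the K+1 alleles as
                state = state * self.A + genome[g]  # an integer table index
            W += self.table[j, state]
        return W / self.N

land = NKLandscape(N=20, K=4)
genome = rng.integers(0, 2, size=20)
print("fitness of a random genome:", land.fitness(genome))
```

With $`N=20`$ and $`K=4`$ each gene’s table has $`2^5=32`$ entries; the same base-$`A`$ encoding works for any number of alleles.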
What kind of landscapes does the NK model generate? Let us begin by considering two extreme cases. First, consider the case $`K=0`$, in which all of the genes are entirely non-interacting. In this case, each gene contributes to the total fitness an amount $`w_j`$, which may take any of $`A`$ values depending on the allele of the gene. The maximum fitness in this case is achieved by simply maximizing the contribution of each gene in turn, since their contributions are independent. Even if we assume an evolutionary dynamics of the most restrictive kind, in which we can only change the state of one gene at a time, we can reach the state of maximum fitness of the $`K=0`$ model starting from any point on the landscape and only making changes which increase the fitness. Landscapes of this type are known as Fujiyama landscapes, after Japan’s Mount Fuji: they are smooth and have a single global optimum.

Now consider the other extreme, in which $`K`$ takes the largest possible value, $`K=N-1`$. In this case each gene’s contribution to the overall fitness $`W`$ depends on itself and all $`N-1`$ other genes in the genome. Thus if any single gene changes allele, the fitness contribution of every gene changes to a new random number, uncorrelated with its previous value. Thus the total fitness $`W`$ is entirely uncorrelated between different states of the genome. This gives us the most rugged possible fitness landscape with many fitness peaks and valleys. The $`K=N-1`$ model is identical to the random energy spin-glass model of Derrida (1981) and has been studied in some detail (Kauffman and Levin 1987, Macken and Perelson 1989). The fitness $`W`$ in this case is the average of $`N`$ independent uniform random variables between zero and one, which means that for large $`N`$ it will be normally distributed about $`W=\frac{1}{2}`$ with standard deviation $`1/\sqrt{12N}`$. This means that the typical height of the fitness peaks on the landscape decreases as $`N^{-1/2}`$ with increasing size of the genome. It also decreases with increasing $`K`$, since for larger $`K`$ it is not possible to achieve the optimum fitness contribution of every gene, so that the average over all genes has a lower value than in the $`K=0`$ case, even at the global optimum.

For values of $`K`$ intermediate between the two extremes considered here, the landscapes generated by the NK model possess intermediate degrees of ruggedness. Small values of $`K`$ produce highly correlated, smooth landscapes with a small number of high fitness peaks. High values of $`K`$ produce more rugged landscapes with a larger number of lower peaks and less correlation between the fitnesses of similar genotypes.

### 4.2 Evolution on NK landscapes

In order to study the evolution of species using his NK landscapes, Kauffman made a number of simplifying assumptions. First, he assumed that evolution takes place entirely by the mutation of single genes, or small numbers of genes in an individual. That is, he neglected recombination. (This is a reasonable first approximation since, as we mentioned above, single gene mutations are the most common in biological evolution.) He also assumed that mutations of different genes are a priori uncorrelated, that the rate at which genes mutate is the same for all genes, and that this rate is low compared to the time-scale on which selection acts on the population. This last assumption means that the population can be approximated by a single genotype, and population dynamical effects can be ignored. (This may be valid for some populations, but is certainly not true in general.)

In addition to these assumptions it is also necessary to state how the selection process takes place, and Kauffman examined three specific possibilities, which he called the “random”, “fitter” and “greedy” dynamics. If, as discussed above, evolution proceeds by the mutation of single genes, these three possibilities are as follows. In the random dynamics, single-gene mutations occur at random and, if the mutant genotype possesses a higher value of $`W`$ than its ancestral strain, the mutant replaces the ancestor and the species “moves” on the landscape to the new genotype.
A slight variation on this scheme is the fitter dynamics, in which a species examines all the genotypes which differ from the current genotype by the mutation of a single gene, its “neighbours”, and then chooses a new genotype from these, either in proportion to fitness, or randomly amongst those which have higher fitness than the current genotype. (This last variation differs from the previous scheme only in a matter of time-scale.) In the greedy dynamics, a species examines each of its neighbours in turn and chooses the one with the highest fitness $`W`$. Notice that whilst the random and fitter schemes are stochastic processes, the greedy one is deterministic; this gives rise to qualitative differences in the behaviour of the model.

The generic behaviour of the NK model of a single species is for the fitness of the species to increase until it reaches a local fitness peak—a genotype with higher fitness than all of the neighbouring genotypes on the landscape—at which point it stops evolving. For the $`K=0`$ case considered above (the Fujiyama landscape), it will typically take on the order of $`N`$ mutations to find the single fitness peak (or $`N\mathrm{log}N`$ for the random dynamics). For instance, in the $`A=2`$ case, half of the alleles in a random initial genotype will on average be favourable and half unfavourable. Thus if evolution proceeds by the mutation of single genes, $`\frac{1}{2}N`$ mutations are necessary to reach the fitness maximum. In the other extreme, when $`K=N-1`$, one can show that, starting from a random initial genotype, the number of directions which lead to higher fitness decreases by a constant factor at each step, so that the number of steps needed to reach one of the local maxima of fitness goes as $`\mathrm{log}N`$. For landscapes possessing intermediate values of $`K`$, the number of mutations needed to reach a local maximum lies somewhere between these limits. In other words, as $`N`$ becomes large, the length of an adaptive walk to a fitness peak decreases sharply with increasing $`K`$. In fact, it appears to go approximately as $`1/K`$. This point will be important in our consideration of the many-species case. Recall also that the height of the typical fitness peak goes down with increasing $`K`$. Thus when $`K`$ is high, a species does not have to evolve far to find a local fitness optimum, but in general that optimum is not very good.
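The dependence of walk length on $`K`$ quoted above can be observed directly. The sketch below (ours; parameter values are illustrative, and the landscape construction is repeated inline so the example is self-contained) performs greedy adaptive walks on NK landscapes of varying $`K`$ and reports the mean number of steps taken to reach a local optimum.

```python
# Sketch: greedy adaptive walks on NK landscapes, measuring how the number
# of steps needed to reach a local optimum falls as K increases.
import numpy as np

rng = np.random.default_rng(5)

def make_landscape(N, K):
    partners = np.array(
        [rng.choice([i for i in range(N) if i != j], size=K, replace=False)
         for j in range(N)])
    table = rng.random((N, 2 ** (K + 1)))
    def fitness(genome):
        W = 0.0
        for j in range(N):
            state = genome[j]
            for g in partners[j]:
                state = state * 2 + genome[g]
            W += table[j, state]
        return W / N
    return fitness

def greedy_walk_length(N, K):
    """Steps taken before no single-gene flip improves fitness."""
    fit = make_landscape(N, K)
    genome = rng.integers(0, 2, size=N)
    steps = 0
    while True:
        current = fit(genome)
        best_gain, best_j = 0.0, None
        for j in range(N):              # examine all one-mutant neighbours
            genome[j] ^= 1
            gain = fit(genome) - current
            genome[j] ^= 1
            if gain > best_gain:
                best_gain, best_j = gain, j
        if best_j is None:              # local fitness peak reached
            return steps
        genome[best_j] ^= 1
        steps += 1

N = 16
for K in (2, 4, 8, 15):
    walks = [greedy_walk_length(N, K) for _ in range(20)]
    print(f"K = {K:2d}   mean walk length = {np.mean(walks):.1f}")
```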
### 4.3 Coevolving fitness landscapes

The real interest in NK landscapes arises when we consider the behaviour of a number of coevolving species. Coevolution arises as a result of interactions between different species. The most common such interactions are predation, parasitism, competition for resources, and symbiosis. As a result of interactions such as these, the evolutionary adaptation of one species can prompt the adaptation of another (Vermeij 1987). Many examples are familiar to us, especially ones involving predatory or parasitic interactions. Plotnick and McKinney (1993) have given a number of examples of coevolution in fossil species, including predator-prey interactions between echinoids and gastropods (McNamara 1990) and mutualistic interactions between algae and foraminifera (Hallock 1985).

How is coevolution introduced into the NK model? Consider $`S`$ species, each evolving on a different NK landscape. For the moment, let us take the simplest case in which each species has the same values of $`N`$ and $`K`$, but the random fitnesses $`w_j`$ defining the landscapes are different. Interaction between species is achieved by coupling their landscapes so that the genotype of one species affects the fitness of another. Following Kauffman and Johnsen (1991), we introduce two new quantities: $`S_i`$, which is the number of neighbouring species with which species $`i`$ interacts, and $`C`$, which is the number of genes in each of those neighbouring species which affect the fitness contribution of each gene in species $`i`$. (Although the former quantity is denoted $`S_i`$, it is in fact a constant over all species in most of Kauffman’s studies; the subscript $`i`$ serves only to distinguish it from $`S`$, which is the total number of species. Of course, there is no reason why one cannot study a generalized model in which $`S_i`$, or indeed any of the other variables in the model, such as $`N`$ or $`K`$, is varied from species to species, and Kauffman and Johnsen (1991) give some discussion and results for models of this type, although this is not their main focus.) On account of these two variables this variation of the model is sometimes referred to as the NKCS model.

Each gene in species $`i`$ is “coupled” to $`C`$ randomly chosen genes in each of the $`S_i`$ neighbouring species, so that, for example, if $`C=1`$ and $`S_i=4`$, each of $`i`$’s genes is coupled to four other genes, one randomly chosen from each of four neighbouring species. The coupling works in exactly the same way as the epistatic interactions of the last section—the fitness contribution $`w_j`$ which a particular gene $`j`$ makes to the total fitness of its host is now a function of the allele of that gene, of each of the $`K`$ genes to which it is coupled and of the alleles of the $`CS_i`$ genes in other species with which it interacts. As before, the values $`w_j`$ are chosen randomly for each of the possible states of these genes. The result is that when a species evolves so as to improve its own fitness, it may in the process change the allele of one of its genes which affects the fitness contribution of a gene in another species. As a result, the fitness of the other species will change. Clearly the further a species must evolve to find a fitness peak, the more alleles it changes, and the more likely it is to affect the fitness of its neighbours. Since the distance to a fitness peak depends on the value of $`K`$, so also does the chance of one species affecting another, and this is the root cause of the novel behaviour seen in Kauffman’s coevolution models.

The $`S_i`$ neighbouring species of species $`i`$ can be chosen in a variety of different ways. The most common are either to choose them at random (but in a “quenched” fashion—once chosen, they remain fixed) or to place the species on a regular lattice, such as a square lattice in two dimensions, and then make the nearest neighbours of a species on the lattice its neighbours in the evolutionary sense.

In their original work on coevolving NK systems, Kauffman and Johnsen (1991) examined a number of different variations on the basic model outlined above. Here we consider the simplest case of relevance to extinction, the case of uniform $`K`$ and $`S_i`$.
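To make the coupling explicit, here is a two-species sketch (ours; all parameters are illustrative). Each gene’s lookup table is now indexed by its own allele, its $`K`$ epistatic partners within the genome, and $`C`$ genes of the other species, so a mutation in one species can change its neighbour’s fitness without the neighbour’s own genome changing at all.

```python
# Sketch: NKCS-style coupling for two species. A gene's fitness
# contribution depends on K genes of its own genome and C genes of the
# neighbouring species, all through a quenched random lookup table.
import numpy as np

rng = np.random.default_rng(6)
N, K, C = 10, 2, 1            # genes per species, epistasis, coupling

def make_species():
    epi = np.array([rng.choice([i for i in range(N) if i != j],
                               size=K, replace=False) for j in range(N)])
    coupled = rng.integers(0, N, size=(N, C))  # partner genes, other species
    table = rng.random((N, 2 ** (K + 1 + C)))  # quenched contributions w_j
    return epi, coupled, table

def fitness(own, other, species):
    epi, coupled, table = species
    W = 0.0
    for j in range(N):
        state = own[j]
        for g in epi[j]:                 # K epistatic partners, as before
            state = state * 2 + own[g]
        for g in coupled[j]:             # C genes of the neighbouring species
            state = state * 2 + other[g]
        W += table[j, state]
    return W / N

species = [make_species(), make_species()]
genomes = [rng.integers(0, 2, size=N) for _ in range(2)]

before = fitness(genomes[0], genomes[1], species[0])
# Flip a gene of species 1 on which gene 0 of species 0 depends: species
# 0's fitness changes even though its own genome has not.
genomes[1][species[0][1][0, 0]] ^= 1
after = fitness(genomes[0], genomes[1], species[0])
print(f"fitness of species 0 before/after: {before:.4f} / {after:.4f}")
```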
### 4.4 Coevolutionary avalanches

Consider the case of binary genes ($`A=2`$), with single-gene mutations. Starting from an initial random state, species take turns in strict rotation, and attempt by mutation to increase their own fitness irrespective of anyone else’s. It is clear that if at any time all species in the system simultaneously find themselves at local fitness optima then all evolution will stop, since there will be no further mutations of any species which can increase fitness. This state is known as a Nash equilibrium, a name taken from game theoretic models in which similar situations arise. (A related concept is that of the “evolutionarily stable strategy” (Maynard Smith and Price 1973), which is similar to a Nash equilibrium but also implies non-invadability at the individual level. The simulations of Kauffman and Johnsen considered here take place entirely at the species level, so “Nash equilibrium” is the appropriate nomenclature in this case.)

The fundamental question is whether such an equilibrium is ever reached. This, it turns out, depends on the value of $`K`$. For large values of $`K`$, individual species landscapes are very rugged, and the distance that a species needs to go to reach a local fitness maximum is short. This means that the chance of it affecting its neighbours’ fitness is rather small, and hence the chance of all species simultaneously finding a fitness maximum is quite good. On the other hand, if $`K`$ is small, species must change many genes to reach a fitness maximum, and so the chances are high that they will affect the fitnesses of their neighbours. This in turn will force those neighbours to evolve, by moving the position of the maxima in their landscapes. They in turn may have to evolve a long way to find a new maximum, and this will affect still other species, resulting in an avalanche of coevolution which for small enough $`K`$ never stops. Thus as $`K`$ is decreased from large values to small, the typical size of the coevolutionary avalanche resulting from a random initial state increases until at some critical value $`K_c`$ it becomes infinite.

What is this critical value? The product $`CS_i`$ is the number of genes in other species on which the fitness contribution of a particular gene in species $`i`$ depends. A rough estimate of the chance that at least one of these genes mutates during an avalanche is $`CS_iL`$, where $`L`$ is the typical length of an adaptive walk of an isolated species (i.e., the number of genes which change in the process of evolving to a fitness peak). Assuming, as discussed in Section 4.2, that $`L`$ varies inversely with $`K`$, the critical value $`K_c`$ at which the avalanche size diverges should vary as $`K_c\sim CS_i`$. This seems to be supported by numerical evidence: Kauffman and Johnsen found that $`K_c\simeq CS_i`$ in the particular case where every species is connected to every other ($`S_i=S`$).

The transition from the high-$`K`$ “frozen” regime in which avalanches are finite to the low-$`K`$ “chaotic” regime in which they run forever appears to be a continuous phase transition of the kind much studied in statistical physics (Binney et al. 1992). Bak et al. (1992) have analysed this transition in some detail, showing that it does indeed possess genuine critical properties. Precisely at $`K_c`$, the distribution of the sizes $`s`$ of the avalanches appears to be scale free and takes the form of a power law, Equation (1), which is typical of the “critical behaviour” associated with such a phase transition. Kauffman and Johnsen also pointed out that there are large fluctuations in the fitness of individual species near $`K_c`$, another characteristic of continuous phase transitions.
Figure 4.4 shows the average fitness of the coevolving species as a function of $`K`$ for one particular case investigated by Kauffman and Johnsen. For ecosystems in the frozen $`K>K_c`$ regime the average fitness of the coevolving species increases from the initial random state until a Nash equilibrium is reached, at which point the fitness stops changing. As we pointed out earlier, the typical fitness of local optima increases with decreasing $`K`$, and this is reflected in the average fitness at Nash equilibria in the frozen phase: the average fitness of species at equilibrium increases as $`K`$ approaches $`K_c`$ from above. In the chaotic $`K<K_c`$ regime a Nash equilibrium is never reached, but Kauffman and Johnsen measured the “mean sustained fitness”, which is the average fitness of species over time, after an initial transient period in which the system settles down from its starting state. They found that this fitness measure decreased with decreasing $`K`$ in the chaotic regime, presumably because species spend less and less time close to local fitness optima. Thus, there should be a maximum of the average fitness at the point $`K=K_c`$. This behaviour is visible in Figure 4.4, which shows a clear maximum around $`K=10`$. The boundary between frozen and chaotic regimes was separately observed to occur at around $`K_c=10`$ for this system.

On the basis of these observations, Kauffman and Johnsen then argued as follows. If the level of epistatic interactions in the genome is an evolvable property, just as the functions of individual genes are, and our species are able to “tune” the value of their own $`K`$ parameter to achieve maximum fitness, then Figure 4.4 suggests that they will tune it to the point $`K=K_c`$, which is precisely the critical point at which we expect to see a power-law distribution of coevolutionary avalanches. As we suggested in Section 2.2.5, mass extinction could be caused by pseudoextinction processes in which a large number of species evolve to new forms nearly simultaneously. The coevolutionary avalanches of the NKCS model would presumably give rise to just such large-scale pseudoextinction. Another possibility, also noted by Kauffman and Johnsen, is that the large fluctuations in species fitness in the vicinity of $`K_c`$ might be a cause of true extinction, low fitness species being more susceptible to extinction than high fitness ones. These ideas are intriguing, since they suggest that by tuning itself to the point at which average fitness is maximized, the ecosystem also tunes itself precisely to the point at which species turnover is maximized, and indeed this species turnover is a large part of the reason why $`K=K_c`$ is a fit place to be in the first place. Although extinction and pseudoextinction can certainly be caused by exogenous effects as well, this argument suggests that even without such effects we should still see mass extinction.

Some efforts have been made to determine from the fossil evidence whether real evolution has a dynamics similar to the one proposed by Kauffman and co-workers. For example, Patterson and Fowler (1996) analysed fossil data for planktic foraminifera using a variety of time-series techniques and concluded that the results were at least compatible with critical theories such as Kauffman’s, and Solé et al. (1997) argued that the form of the extinction power spectrum may indicate an underlying critical macroevolutionary dynamics, although this latter suggestion has been questioned (Kirchner and Weil 1998, Newman and Eble 1999a).
### 4.5 Competitive replacement

There is however a problem with the picture presented above. Numerically, it appears to be true that the average fitness of species in the model ecosystem is maximized when they all have $`K`$ close to the critical value $`K_c`$. However, it would be a mistake to conclude that the system therefore must evolve to the critical point under the influence of selection pressure. Natural selection does not directly act to maximize the average fitness of species in the ecosystem, but rather it acts to increase individual fitnesses in a selfish fashion. Kauffman and Johnsen in fact performed simulations in which only two species coevolved, and they found that the fitness of both species was greater if the two had different values of $`K`$ than if both had the value of $`K`$ which maximized mean fitness. Thus, in a system in which many species could freely vary their own $`K`$ under the influence of selection pressure, we would expect to find a range of $`K`$ values, rather than all $`K`$ taking the value $`K_c`$.

There are also some other problems with the original NKCS model. For instance, the values of $`K`$ in the model were not actually allowed to vary during the simulations, but one would like to include this possibility. In addition, the mechanism by which extinction arises is rather vague; the model really only mimics evolution and the idea of extinction is tacked on somewhat as an afterthought. To tackle all of these problems Kauffman and Neumann (unpublished) proposed a refinement of the NKCS model in which $`K`$ can change and an explicit extinction mechanism is included, that of competitive replacement. (An account of this work can be found in Kauffman (1995).)

In this variation of the model, a number $`S`$ of species coevolve on NK fitness landscapes just as before. Now, however, at each turn in the simulation, each species may change the state of one of its genes, change the value of its $`K`$ by $`\pm 1`$, be invaded by another species (see below), or do nothing. In their calculations, Kauffman and Neumann used the “greedy” dynamics described above and chose the change which most improves the fitness, but “fitter” and “random” variants are also possible. Allowing $`K`$ to vary gives species the ability to evolve the ruggedness of their own landscapes in order to optimize their fitness.

Extinction takes place in the model when a species invades the niche occupied by another. If the invading species is better at exploiting the particular set of resources in the niche, it drives the niche’s original occupant to extinction. In this model, a species’ niche is determined by its neighbouring species—there is no environmental component to the niche, such as climate, terrain, or food supply. Extinction by competitive replacement is actually not a very well-documented mode of extinction (Benton 1987). Maynard Smith (1989) has discussed the question at some length, but concludes that it is far more common for a species to adapt to the invasion of a new competitor than for it to become extinct. Nonetheless, there are examples of extinction by competitive replacement, and to the extent that it occurs, Kauffman and Neumann’s work provides a model of the process. In the model, they add an extra “move” which can take place when a species’ turn comes to evolve: it can be invaded by another species.
A randomly chosen species can create a copy of itself (i.e., of its genome) which is then placed in the same niche as the first species, and its fitness is calculated with respect to the genotypes of the neighbours in that niche. If this fitness exceeds the fitness of the original species in that niche, the invader supersedes the original occupant, which becomes extinct. In this way, fit species spread through the ecosystem, making the average fitness over all species higher, but at the same time making the species more uniform, since over time the ecosystem will come to contain many copies of a small number of fit species, rather than a wide diversity of less fit ones.

In numerical simulations this model shows a number of interesting features. First, regardless of their initial values, the $`K`$s of the individual species appear to converge on an intermediate figure which puts all species close to the phase boundary discussed in the last section. This lends support to the ideas of Kauffman and Johnsen that fitness is optimized at this point (even though other arguments indicated that this might not be the best choice for selfishly evolving species—see above). Interestingly, the model also shows a power-law distribution of the sizes of extinction events taking place; if we count up the number of species becoming extinct at each time-step in the simulation and make a histogram of these figures over the course of a long simulation, the result is of the form shown in Figure 4.5. The power law has a measured exponent of $`\tau \approx 1`$, which is not in good agreement with the figure of $`\tau \approx 2`$ found in the fossil data (see Section 2.2.1), but the mere existence of the power-law distribution is quite intriguing. Kauffman and Neumann explain its appearance as the result of avalanches of extinction which arise because the invasion of a niche by a new species (with the resulting extinction of the niche’s previous occupier) disturbs the neighbouring species, perhaps making them susceptible to invasion by further species. Another possible mechanism arises from the uniformity of genotypes which the invasion mechanism gives rise to. As noted above, the invasion of many niches by one particularly fit species tends to produce an ecosystem with many similar species in it. If a new species arises which is able to compete successfully with these many similar species, then they may all become extinct over a short period of time, resulting in an extinction avalanche.

Why avalanches such as these should possess a power-law distribution is not clear. Kauffman and Neumann connect the phenomenon with the apparent adaptation of the ecosystem to the phase boundary between the ordered and chaotic regimes—the “edge of chaos” as Kauffman has called it. A more general explanation may come from the study of “self-organized critical” systems, which is the topic of the next section.

Kauffman and Neumann did not take the intermediate step of simulating a system in which species are permitted to vary their values of $`K`$, but in which there is no invasion mechanism. Such a study would be useful for clarifying the relative importance of the $`K`$-evolution and invasion mechanisms. Bak and Kauffman (unpublished, but discussed by Bak (1996)) have carried out some simulations along these lines, but apparently found no evidence for the evolution of the system to the critical point. Bak et al.
(1992) have argued on theoretical grounds that such evolution should not occur in the maximally rugged case $`K=N-1`$, but the argument does not extend to smaller values of $`K`$. In the general case the question has not been settled and deserves further study.

## 5 The Bak–Sneppen model and variations

The models discussed in the last section are intriguing, but present a number of problems. In particular, most of the results about them come from computer simulations, and little is known analytically about their properties. Results such as the power-law distribution of extinction sizes and the evolution of the system to the “edge of chaos” are only as accurate as the simulations in which they are observed. Moreover, it is not even clear what the mechanisms responsible for these results are, beyond the rather general arguments we have already given. In order to address these shortcomings, Bak and Sneppen (1993, Sneppen et al. 1995, Sneppen 1995, Bak 1996) have taken Kauffman’s ideas, with some modification, and used them to create a considerably simpler model of large-scale coevolution which also shows a power-law distribution of avalanche sizes and which is simple enough that its properties can, to some extent, be understood analytically. Although the model does not directly address the question of extinction, a number of authors have interpreted it, using arguments similar to those of Section 2.2.5, as a possible model for extinction by biotic causes.

The Bak–Sneppen model is one of a class of models that show “self-organized criticality”, which means that regardless of the state in which they start, they always tune themselves to a critical point of the type discussed in Section 4.4, where power-law behaviour is seen. We describe self-organized criticality in more detail in Section 5.2. First however, we describe the Bak–Sneppen model itself.

### 5.1 The Bak–Sneppen model

In the model of Bak and Sneppen there are no explicit fitness landscapes, as there are in NK models. Instead the model attempts to mimic the effects of landscapes in terms of “fitness barriers”. Consider Figure 5.1, which is a toy representation of a fitness landscape in which there is only one dimension in the genotype (or phenotype) space. If the mutation rate is low compared with the time-scale on which selection takes place (as Kauffman assumed), then a population will spend most of its time localized around a peak in the landscape (labelled P in the figure). In order to evolve to another, adjacent peak (Q), we must pass through an intervening “valley” of lower fitness. This valley presents a barrier to evolution because individuals with genotypes which fall in this region are selected against in favour of fitter individuals closer to P. In their model, Bak and Sneppen assumed that the average time $`t`$ taken to mutate across a fitness barrier of this type goes exponentially with the height $`B`$ of the barrier:

$$t=t_0\mathrm{e}^{B/T},$$ (7)

where $`t_0`$ and $`T`$ are constants. The value of $`t_0`$ merely sets the time scale, and is not important. The parameter $`T`$ on the other hand depends on the mutation rate in the population, and the assumption that the mutation rate is low implies that $`T`$ is small compared with the typical barrier heights $`B`$ in the landscape.
Equation (7) was proposed by analogy with the so-called Arrhenius law of statistical physics rather than by appealing to any biological principles, and in the case of evolution on a rugged fitness landscape it may well not be correct (see Section 5.3). Nonetheless, as we will argue later, Equation (7) may still be a reasonable approximation to make. Based on Equation (7), Bak and Sneppen then made a further assumption. If the mutation rate (and hence $`T`$) is small, then the time-scales $`t`$ for crossing slightly different barriers may be widely separated. In this case a species’ behaviour is to a good approximation determined by the lowest barrier which it has to cross to get to another adaptive peak. If we have many species, then each species $`i`$ will have some lowest barrier to mutation $`B_i`$, and the first to mutate to a new peak will be the one with the lowest value of $`B_i`$ (the “lowest of the low”, if you like). The Bak–Sneppen model assumes this to be the case and ignores all other barrier heights.

The dynamics of the model, which we now describe, have been justified in different ways, some of them more reasonable than others. Probably the most consistent is that given by Bak (private communication), which is as follows. In the model there are a fixed number $`N`$ of species. Initially each species $`i`$ is allotted a random number $`0\le B_i<1`$ to represent the lowest barrier to mutation for that species. The model then consists of the repetition of two steps:

1. We assume that the species with the lowest barrier to mutation $`B_i`$ is the first to mutate. In doing so it crosses a fitness barrier and finds its way to a new adaptive peak. From this new peak it will have some new lowest barrier for mutation. We represent this process in the model by finding the species with the lowest barrier and assigning it a new value $`0\le B_i<1`$ at random.

2. We assume that each species is coupled to a number of neighbours. Bak and Sneppen called this number $`K`$. (The nomenclature is rather confusing; the variables $`N`$ and $`K`$ in the Bak–Sneppen model correspond to the variables $`S`$ and $`S_i`$ in the NK model.) When a species evolves, it will affect the fitness landscapes of its neighbours, presumably altering their barriers to mutation. We represent this by also assigning new random values $`0\le B_i<1`$ for the $`K`$ neighbours.

And that is all there is to the model. The neighbours of a species can be chosen in a variety of different ways, but the simplest is, as Kauffman and Johnsen (1991) also did, to put the species on a lattice and make the nearest neighbours on the lattice neighbours in the ecological sense. For example, on a one-dimensional lattice—a line—each species has two neighbours and $`K=2`$.
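The two steps above take only a few lines to implement. The following sketch (ours; system size and run length are arbitrary) simulates the model on a one-dimensional lattice with periodic boundary conditions and tracks the gap, the running maximum of the selected lowest barriers, which should approach the value of roughly $`\frac{2}{3}`$ discussed below.

```python
# Sketch: the Bak-Sneppen model as defined by steps 1 and 2 above, on a
# one-dimensional lattice with periodic boundary conditions.
import numpy as np

rng = np.random.default_rng(7)
N, steps = 200, 200000
B = rng.random(N)                  # lowest mutation barrier of each species

gap = 0.0
for step in range(steps):
    i = np.argmin(B)               # species with the lowest barrier mutates
    gap = max(gap, B[i])           # the gap can only grow over time
    # The mutating species and its two lattice neighbours get new barriers.
    for j in (i - 1, i, (i + 1) % N):
        B[j] = rng.random()
    if step % 40000 == 0:
        print(f"step {step:6d}   gap = {gap:.3f}")
print("final gap:", round(gap, 3), "(expect about 0.667 for large N)")
```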
So what is special about this model? Well, let us consider what happens as we repeat the steps above many times. Initially the barrier variables are uniformly distributed over the interval between zero and one. If $`N`$ is large, the lowest barrier will be close to zero. Suppose this lowest barrier $`B_i`$ belongs to species $`i`$. We replace it with a new random value which is very likely to be higher than the old value. We also replace the barriers of the $`K`$ neighbours of $`i`$ with new random values. Suppose we are working on a one-dimensional lattice, so that these neighbours are species $`i-1`$ and $`i+1`$. The new barriers we choose for these two species are also very likely to be higher than $`B_i`$, although not necessarily higher than the old values of $`B_{i-1}`$ and $`B_{i+1}`$. Thus, steps 1 and 2 will on average raise the value of the lowest barrier in the system, and will continue to do so as we repeat them again and again. This cannot continue forever however, since as the value of the lowest barrier in the system increases, it becomes less and less likely that it will be replaced with a new value which is higher. Figure 5.1 shows what happens in practice. The initial distribution of barriers gets eaten away from the bottom at first, resulting in a “gap” between zero and the height of the lowest barrier. After a time however, the distribution comes to equilibrium with a value of about $`\frac{2}{3}`$ for the lowest barrier. (The actual figure is measured to be slightly over $`\frac{2}{3}`$; the best available value at the time of writing is $`0.66702\pm 0.00003`$ (Paczuski, Maslov and Bak 1996).)

Now consider what happens when we make a move starting from a state which has a gap like this at the bottom end of the barrier height distribution. The species with the lowest barrier to mutation is right on the edge of the gap. We find this species and assign it and its $`K`$ neighbours new random barrier values. There is a chance that at least one of these new values will lie in the gap, which necessarily makes it the lowest barrier in the system. Thus on the next step of the model, this species will be the one to evolve. We begin to see how avalanches appear in this model: there is a heightened chance that the next species to evolve will be one of the neighbours of the previous one. In biological terms the evolution of one species to a new adaptive peak changes the shape of the fitness landscapes of neighbouring species, making them more likely to evolve too. The process continues until, by chance, all new barrier values fall above the gap. In this case the next species to evolve will not, in general, be a neighbour of one of the other species taking part in the avalanche, and for this reason we declare it to be the first species in a new avalanche, the old avalanche being finished. As the size of the gap increases, the typical length of an avalanche also increases, because the chances of a randomly chosen barrier falling in the gap in the distribution become larger. As we approach the equilibrium value $`B_c=0.667`$ the mean avalanche size diverges, a typical sign of a self-organized critical system.

### 5.2 Self-organized criticality

So what exactly is self-organized criticality? The phenomenon was first studied by Bak, Tang and Wiesenfeld (1987), who proposed what has now become the standard example of a self-organized critical (SOC) model, the self-organizing sand-pile. Imagine a pile of sand which grows slowly as individual grains of sand are added to it one by one at random positions. As more sand is added, the height of the pile increases, and with it the steepness of the pile’s sides. Avalanches started by single grains increase in size with steepness until at some point the pile is so steep that the avalanches become formally infinite in size, which is to say there is bulk transport of sand down the pile. This bulk transport in turn reduces the steepness of the pile so that subsequent avalanches are smaller. The net result is that the pile “self-organizes” precisely to the point at which the infinite avalanche takes place, but never becomes any steeper than this.
A similar phenomenon takes place in the evolution model of Bak and Sneppen, and indeed the name “coevolutionary avalanche” is derived from the analogy between the two systems. The size of the gap in the Bak–Sneppen model plays the role of the steepness in the sandpile model. Initially, the gap increases as described above, and as it increases the avalanches become larger and larger on average, until we reach the critical point at which an infinite avalanche can occur. At this point the rates at which barriers are added and removed from the region below the gap exactly balance, and the gap stops growing, holding the system at the critical point thereafter.

It is interesting to compare the Bak–Sneppen model with the NKCS model discussed in Section 4.3. Like the Bak–Sneppen model, the NKCS model also has a critical state in which power-law distributions of avalanches occur, but it does not self-organize to that state. It can be critical, but not self-organized critical. However the essence of both models is that the evolution of one species distorts the shape of the fitness landscape of another (represented by the barrier variables in the Bak–Sneppen case), thus sometimes causing it to evolve too. So what is the difference between the two? The crucial point seems to be that in the Bak–Sneppen case the species which evolves is the one with the smallest barrier to mutation. This choice ensures that the system is always driven towards criticality.

At first sight, one apparent problem with the Bak–Sneppen model is that the delineation of an “avalanche” seems somewhat arbitrary. However the avalanches are actually quite well separated in time because of the exponential dependence of the mutation timescale on barrier height given by Equation (7). As defined above, an avalanche is over when no species remain with a barrier $`B_i`$ in the gap at the bottom of the barrier height distribution, and the time until the next avalanche then depends on the first barrier $`B_i`$ above the gap. If the “temperature” parameter $`T`$ is small, then the exponential in Equation (7) makes this inter-avalanche time much longer than the typical duration of a single avalanche. If we make a plot of the activity of the Bak–Sneppen model as a function of “real” time (i.e., time measured in the increments specified by Equation (7)), the result looks like Figure 5.2. In this figure the avalanches in the system are clearly visible and are well separated in time.

One consequence of the divergence of the average avalanche size as the Bak–Sneppen model reaches the critical point is that the distribution of the sizes of coevolutionary avalanches becomes scale-free—the size scale which normally describes it diverges and we are left with a distribution which has no scale parameter. The only (continuous) scale-free distribution is the power law, Equation (1), and, as Figure 5.2 shows, the measured distribution is indeed a power law. Although the model makes no specific predictions about extinction, its authors argued, as we have done in Section 2.2.5, that large avalanches presumably give rise to large-scale pseudoextinction, and may also cause true extinction via ecological interactions between species. They suggested that a power-law distribution of coevolutionary avalanches might give rise in turn to a power-law distribution of extinction events. The exponent $`\tau `$ of the power law generated by the Bak–Sneppen model lies strictly within the range $`1\le \tau \le \frac{3}{2}`$ (Bak and Sneppen 1993, Flyvbjerg et al.
1993), and if the same exponent describes the corresponding extinction distribution this makes the model incompatible with the fossil data presented in Section 2, which give $`\tau \approx 2`$. However, since the connection between the coevolutionary avalanches and the extinction profile has not been made explicit, it is possible that the extinction distribution could be governed by a different, but related, exponent which is closer to the measured value.

One of the elegant properties of SOC models, and critical systems in general, is that exponents such as $`\tau `$ above are universal. This means that the value of the exponent is independent of the details of the dynamics of the model, a point which has been emphasized by Bak (1996). Thus, although the Bak–Sneppen model is undoubtedly an extremely simplified model of evolutionary processes, it may still be able to make quantitative predictions about real ecosystems, because the model and the real system share some universal properties.

### 5.3 Time-scales for crossing barriers

Bak and Sneppen certainly make no claims that their model is intended to be a realistic model of coevolution, and therefore it may seem unfair to level detailed criticism at it. Nonetheless, a number of authors have pointed out shortcomings in the model, some of which have since been remedied by extending the model in various ways. Probably the biggest criticism which can be levelled at the model is that the crucial Equation (7) is not a good approximation to the dynamics of species evolving on rugged landscapes. Weisbuch (1991) has studied this question in detail. He considers, as the models of Kauffman and of Bak and Sneppen both also do, species evolving under the influence of selection and mutation on a rugged landscape in the limit where the rate of mutation is low compared with the timescale on which selection acts on populations. In this regime he demonstrates that the timescale $`t`$ for mutation from one fitness peak across a barrier to another peak is given by

$$t=\frac{1}{qP_0}\prod_i\frac{F_0-F_i}{q},$$ (8)

where $`q`$ is the rate of mutation per gene, $`P_0`$ is the size of the population at the initial fitness peak, and $`F_i`$ are the fitnesses of the mutant species at each genotype $`i=0,1,2,\ldots `$ along the path in genotype space taken by the evolving species. The product over $`i`$ is taken along this same path. Clearly this expression does not vary exponentially with the height of the fitness barrier separating the two fitness peaks. In fact, it goes approximately as a power law, with the exponent depending on the number of steps in the evolutionary path taken by the species. If this is the case then the approximation implicit in Equation (7) breaks down and the dynamics of the Bak–Sneppen model is incorrect.

This certainly appears to be a worrying problem, but there may be a solution. Bak (1996) has suggested that the crucial point is that Equation (8) varies exponentially in the number of steps along the path from one species to another, i.e., the number of genes which must change to get us to a new genotype; in terms of the lengths of the evolutionary paths taken through genotype space, the timescales for mutation are exponentially distributed. The assumption that the “temperature” parameter $`T`$ appearing in Equation (7) is small then corresponds to evolution which is dominated by short paths. In other words, mutations occur mostly between fitness peaks which are separated by changes in only a small number of genes.
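This point is easy to check numerically. The sketch below evaluates Equation (8) for a hypothetical path on which every intermediate genotype lies a fixed distance below the starting peak. The fitness values and rates are invented for illustration, and we read the product as running over the intermediate genotypes (including the starting peak itself would contribute a zero factor). Each additional step multiplies the crossing time by a constant factor, so the time is indeed exponential in the path length.

```python
def crossing_time(q, P0, fitnesses):
    """Equation (8): time to cross from the peak at fitnesses[0] to a new peak.

    q is the mutation rate per gene, P0 the population size at the starting
    peak, and fitnesses the values F_0, F_1, ... along the path in genotype
    space.  The product is taken over the intermediate genotypes.
    """
    t = 1.0 / (q * P0)
    for Fi in fitnesses[1:]:
        t *= (fitnesses[0] - Fi) / q
    return t

# A made-up path: every intermediate genotype sits 0.1 below the starting peak,
# so each extra gene change multiplies the crossing time by 0.1/q = 100.
for steps in (1, 2, 3, 4):
    path = [1.0] + [0.9] * steps
    print(steps, crossing_time(q=1e-3, P0=1000, fitnesses=path))
```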
Whether this is in fact the case historically is unclear, though it is certainly well known that mutational mechanisms such as recombination which involve the simultaneous alteration of large numbers of genes are also an important factor in biological evolution.

### 5.4 The exactly solvable multi-trait <br>model

The intriguing contrast between the simplicity of the rules defining the Bak–Sneppen model and the complexity of its behaviour has led an extraordinary number of authors to publish analyses of its workings. (See Maslov et al. (1994), de Boer et al. (1995), Pang (1997) and references therein for a subset of these publications.) In this review we will not delve into these mathematical developments in any depth, since our primary concern is extinction. However, there are several extensions of the model which are of interest to us. The first one is the “multi-trait” model of Boettcher and Paczuski (1996a, 1996b). This model is a generalization of the Bak–Sneppen model in which a record is kept of several barrier heights for each species—barriers for mutation to different fitness peaks.

In the model of Boettcher and Paczuski, each of the $`N`$ species has $`M`$ independent barrier heights. These heights are initially chosen at random in the interval $`0\le B<1`$. On each step of the model we search through all $`MN`$ barriers to find the one which is lowest. We replace this one with a new value, and we also change the value of one randomly chosen barrier for each of the $`K`$ neighbouring species. Notice that the other $`M-1`$ barrier variables for each species are left untouched. This seems a little strange; presumably if a species is mutating to a new fitness peak, all its barrier variables should change at once. However, the primary aim of Boettcher and Paczuski’s model is not to mimic evolution more faithfully. The point is that their model is exactly solvable when $`M=\infty `$, which allows us to demonstrate certain properties of the model rigorously.

The exact solution is possible because when $`M=\infty `$ the dynamics of the model separates into two distinct processes. As long as there are barrier variables whose values lie in the gap at the bottom of the barrier distribution, then the procedure of finding the lowest barrier will always choose a barrier in the gap. However, the second step of choosing at random one of the $`M`$ barriers belonging to each of $`K`$ neighbours will never choose a barrier in the gap, since there are an infinite number of barriers for each species, and only ever a finite number in the gap. This separation of the processes taking place allowed Boettcher and Paczuski to write exact equations governing the dynamics of the system and to show that the model does indeed possess true critical behaviour with a power-law distribution of avalanches.

The Bak–Sneppen model is the $`M=1`$ limit of the multi-trait generalization, and it would be very satisfying if it should turn out that the analytic results of Boettcher and Paczuski could be extended to this case, or indeed to any case of finite $`M`$. Unfortunately, no such extension has yet been found.

### 5.5 Models incorporating speciation

One of the other criticisms levelled at the Bak–Sneppen model is that it fails to incorporate speciation. When a biological population gives rise to a mutant individual which becomes the founder of a new species, the original population does not always die out.
Fossil evidence indicates that it is common for both species to coexist for some time after such a speciation event. This process is absent from the Bak–Sneppen model, and in order to address this shortcoming Vandewalle and Ausloos (1995, Kramer et al. 1996) suggested an extension of the model in which species coexist on a phylogenetic tree structure, rather than on a lattice. The dynamics of their model is as follows.

Initially there is just a small number of species, perhaps only one, each possessing a barrier to mutation $`B_i`$ whose value is chosen randomly in the range between zero and one. The species with the lowest barrier mutates first, but now both the original species and the mutant are assumed to survive, so that there is a branching of the tree leading to a pair of coexisting species (Figure 5.5). One might imagine that the original species should retain its barrier value, since this species is assumed not to have changed. However, if this were the case the model would never develop a “gap” as the Bak–Sneppen model does and so never self-organize to a critical point. To avoid this, Vandewalle and Ausloos specified that both species, the parent and the offspring, should be assigned new randomly-chosen barrier values after the speciation event. We might justify this by saying for example that the environment of the parent species is altered by the presence of a closely-related (and possibly competing) offspring species, thereby changing the shape of the parent’s fitness landscape. Whatever the justification, the model gives rise to a branching phylogenetic tree which contains a continuously increasing number of species, by contrast with the other models we have examined so far, in which the number was fixed. As we pointed out in Section 2.2.3, the number of species in the fossil record does in fact increase slowly over time, which may be regarded as partial justification for the present approach.

In addition to the speciation process, there is also a second process taking place, similar to that of the Bak–Sneppen model: after finding the species with the lowest barrier to mutation, the barrier variables $`B_i`$ of all species within a distance $`k`$ of that species are also given new, randomly-chosen values between zero and one. Distances on the tree structure are measured as the number of straight-line segments which one must traverse in order to get from one species to another (see Figure 5.5 again). Notice that this means that the evolution of one species to a new form is more likely to affect the fitness landscape of other species which are closely related to it phylogenetically. There is some justification for this, since closely related species tend to exploit similar resources and are therefore more likely to be in competition with one another. On the other hand predator-prey and parasitic interactions are also very important in evolutionary dynamics, and these interactions tend not to occur between closely related species.

Many of the basic predictions of the model of Vandewalle and Ausloos are similar to those of the Bak–Sneppen model, indicating perhaps that Bak and Sneppen were correct to ignore speciation events to begin with. It is found again that avalanches of coevolution take place, and that the system organizes itself to a critical state in which the distribution of the sizes of these avalanches follows a power law.
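A minimal sketch of this tree version is given below. The representation of the tree, and details such as the fact that distances are measured through extinct (internal) nodes while only the living species at the tips have their barriers redrawn, are our reading of the rules rather than the authors' published implementation.

```python
import random
from collections import deque

k = 2                                # interaction range on the tree
adj = {0: []}                        # tree: node -> list of connected nodes
B = {0: random.random()}             # barrier values; living species are the tips
living = {0}

def within(start, dist):
    """All nodes at most `dist` segments from `start`, measured along the tree."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, d = queue.popleft()
        if d < dist:
            for nxt in adj[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
    return seen

for _ in range(1000):
    parent = min(living, key=B.__getitem__)   # lowest barrier speciates first
    for _ in range(2):                        # branch into two coexisting species,
        child = len(adj)                      # both with new random barriers
        adj[child] = [parent]
        adj[parent].append(child)
        B[child] = random.random()
        living.add(child)
    living.discard(parent)                    # the parent becomes an internal node
    for node in within(parent, k):            # disturb nearby living species
        if node in living:
            B[node] = random.random()

print(len(living), "coexisting species")
```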
The measured exponent of this avalanche size distribution is $`\tau =1.49\pm 0.01`$ (Vandewalle and Ausloos 1997), which is very close to the upper bound of $`\frac{3}{2}`$ calculated by Flyvbjerg et al. (1993) for the Bak–Sneppen model. However, there are also some interesting features which are new to this model. In particular, it is found that the phylogenetic trees produced by the model are self-similar. In Section 2.3.1 we discussed the work of Burlando (1990), which appears to indicate that the taxonomic trees of living species are also self-similar. Burlando made estimates of the fractal (or Hausdorff) dimension $`D_H`$ of taxonomic trees for 44 previously-published catalogues of species taken from a wide range of taxa and geographic areas, and found values ranging from $`1.1`$ to $`2.1`$ with a mean of $`1.6`$. (In fact, $`D_H`$ is numerically equal to the exponent $`\beta `$ for a plot such as that shown in Figure 2.3.1 for the appropriate group of species. The typical confidence interval for values of $`D_H`$ was on the order of $`\pm 0.2`$.) These figures are in reasonable agreement with the value of $`D_H=1.89\pm 0.03`$ measured by Vandewalle and Ausloos (1997) for their model, suggesting that a mechanism of the kind they describe could be responsible for the observed structure of taxonomic trees.

The model as described does not explicitly include extinction, and furthermore, since species are not replaced by their descendants as they are in the Bak–Sneppen model, there is also no pseudoextinction. However, Vandewalle and Ausloos also discuss a variation on the model in which extinction is explicitly introduced. In this variation, they find the species with the lowest barrier to mutation $`B_i`$ and then they randomly choose either to have this species speciate with probability $`1-\mathrm{exp}(-B_i/r)`$ or to have it become extinct with probability $`\mathrm{exp}(-B_i/r)`$, where $`r`$ is a parameter which they choose. Thus the probability of extinction decreases with increasing height of the barrier. It is not at first clear how we are to understand this choice. Indeed, it seems likely from reading the papers of Vandewalle et al. that there is some confusion between the barrier heights and the concept of fitness; the authors argue that the species with higher fitness should be less likely to become extinct, but then equate fitness with the barrier variables $`B_i`$. One way out of this problem may be to note that on rugged landscapes with bounded fitness there is a positive correlation between the heights of barriers and the fitness of species: the higher the fitness the more likely it is that the lowest barrier to mutation will also be high.

When $`r=0`$, this extinction model is equivalent to the first model described, in which no extinction took place. When $`r`$ is above some threshold value $`r_c`$, which is measured to be approximately $`0.48\pm 0.01`$ for $`k=2`$ (the only case the authors investigated in detail), the extinction rate exceeds the speciation rate and the tree ceases to grow after a short time. In the intervening range $`0<r<r_c`$ evolution and extinction processes compete and the model shows interesting behaviour. Again there is a power-law distribution of coevolutionary avalanches, and a fractal tree structure reminiscent of that seen in nature. In addition there is now a power-law distribution of extinction events, with the same exponent as the coevolutionary avalanches, i.e., close to $`\frac{3}{2}`$.
As with the Bak–Sneppen model this is in disagreement with the figure of $`2.0\pm 0.2`$ extracted from the fossil data.

Another variation of the Bak–Sneppen model which incorporates speciation has been suggested by Head and Rodgers (1997). In this variation, they keep track of the two lowest barriers to mutation for each species, rather than just the single lowest. The mutation of a species proceeds in the same fashion as in the normal Bak–Sneppen model when one of these two barriers is significantly lower than the other. However, if the two barriers are close together in value, then the species may split and evolve in two different directions on the fitness landscape, resulting in speciation. How similar the barriers have to be in order for this to happen is controlled by a parameter $`\delta s`$, such that speciation takes place when

$$|B_1-B_2|<\delta s,$$ (9)

where $`B_1`$ and $`B_2`$ are the two barrier heights.

The model also incorporates an extinction mechanism, which, strangely, is based on the opposite assumption to the one made by Vandewalle and Ausloos. In the model of Head and Rodgers, extinction takes place when species have particularly high barriers to mutation. To be precise, a species becomes extinct if its neighbour mutates (which would normally change its fitness landscape and therefore its barrier variables) but both its barriers are above some predetermined threshold value. This extinction criterion seems a little surprising at first: if, as we suggested above, high barriers are positively correlated with high fitness, why should species with high barriers become extinct? The argument put forward by Head and Rodgers is that species with high barriers to mutation find it difficult to adapt to changes in their environment. To quote from their paper, “A species with only very large barriers against mutation has become so inflexible that it is no longer able to adapt and dies out”. It seems odd, however, that this extinction process should take place precisely in the species which are adjacent to others which are mutating. In the Bak–Sneppen model, these species have their barriers changed to new random values as a result of the change in their fitness landscapes brought about by the mutation of their neighbour. Thus, even if they did indeed have high barriers to mutation initially, their barriers would be changed when their neighbour mutated, curing this problem, and so one would expect that these species would not become extinct. (A later paper on the model by Head and Rodgers, unpublished, has addressed this criticism to some extent.)

The model has other problems as well. One issue is that, because of the way the model is defined, it does not allow for the rescaling of time according to Equation (7). This means that evolution in the model proceeds at a uniform rate, rather than in avalanches as in the Bak–Sneppen model. As a direct result of this, the distribution of the sizes of extinction events in the model follows a Poisson distribution, rather than the approximate power law seen in the fossil data (Figure 2.2.1). The model does have the nice feature that the number of species in the model tends to a natural equilibrium; there is a balance between speciation and extinction events which causes the number of species to stabilize.
This contrasts with the Bak–Sneppen model (and indeed almost all the other models we discuss) in which the number of species is artificially held constant, and also with the model of Vandewalle and Ausloos, in which the number of species either shrinks to zero, or grows indefinitely, depending on the value of the parameter $`r`$. Head and Rodgers gave an approximate analytic explanation for their results using a “mean field” technique similar to that employed by Flyvbjerg et al. (1993) for the Bak–Sneppen model. However, the question of whether the number of species predicted by their model agrees with the known taxon carrying capacity of real ecosystems has not been investigated.

### 5.6 Models incorporating external <br>stress

Another criticism of the approach taken in Bak and Sneppen’s work (and indeed in the work of Kauffman discussed in Section 4) is that real ecosystems are not closed dynamical systems, but are in reality affected by many external factors, such as climate and geography. Indeed, as we discussed in Section 2.2.1, a number of the larger extinction events visible in the fossil record have been tied quite convincingly to particular exogenous events, so that any model ignoring these effects is necessarily incomplete.

Newman and Roberts (1995, Roberts and Newman 1996) have proposed a variation on the Bak–Sneppen model which attempts to combine the ideas of extinction via environmental stress and large-scale coevolution. The basic idea behind this model is that a large coevolutionary avalanche will cause many species to move to new fitness peaks, some of which may possess lower fitness than the peaks they previously occupied. Thus a large avalanche produces a number of new species which have low fitness and therefore may be more susceptible to extinction as a result of environmental stress. This is in fact not a new idea. Kauffman for example has made this point clearly in his book The Origins of Order (Kauffman 1993): “During coevolutionary avalanches, species fall to lower fitness and hence are more likely to become extinct. Thus the distribution of avalanche sizes may bear on the distribution of extinction events in the fossil record.”

Newman and Roberts incorporated this idea into their model as follows. A fixed number $`N`$ of species each possess a barrier $`B_i`$ to mutation, along with another variable $`F_i`$ which measures their fitness at the current adaptive peak. On each step of the simulation the species with the lowest barrier $`B_i`$ for mutation, and its $`K`$ neighbours, are selected, just as in the Bak–Sneppen model. The $`B_i`$ and $`F_i`$ variables of these $`K+1`$ species are all given new independent random values between zero and one, representing the evolution of one species and the changed landscapes of its neighbours. Then, a positive random number $`\eta `$ is chosen which represents the level of environmental stress at the current time, and all species with $`F_i<\eta `$ are wiped out and replaced by new species with randomly chosen $`F_i`$ and $`B_i`$. The net result is that species with low fitness are rapidly removed from the system. However, when a large coevolutionary avalanche takes place, many species receive new, randomly-chosen fitness values, some of which will be low, and this process provides a “source” of low-fitness species for extinction events.
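A minimal sketch of the Newman–Roberts dynamics might look like the following. The exponential distribution used for the stress level $`\eta `$ is our own choice for illustration; the qualitative results are not supposed to depend strongly on this choice.

```python
import random

N = 100                                   # number of species, on a ring as before
B = [random.random() for _ in range(N)]   # barriers to mutation
F = [random.random() for _ in range(N)]   # fitness at the current adaptive peak

def step(B, F, stress_scale=0.05):
    """One time-step; returns the size of the extinction event."""
    # Coevolutionary part, exactly as in the Bak-Sneppen model (K = 2):
    i = min(range(N), key=B.__getitem__)
    for j in (i, (i - 1) % N, (i + 1) % N):
        B[j] = random.random()
        F[j] = random.random()
    # Environmental part: draw a positive stress level eta and wipe out
    # every species whose fitness lies below it.
    eta = random.expovariate(1.0 / stress_scale)
    killed = [j for j in range(N) if F[j] < eta]
    for j in killed:
        B[j] = random.random()
        F[j] = random.random()
    return len(killed)
```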
Interestingly, the distribution of extinction events in this model follows a power law, apparently regardless of the distribution from which the stress levels $`\eta `$ are chosen (Figure 5.6). Roberts and Newman (1996) offered an analytical explanation of this result within a “mean field” framework similar to the one used by Flyvbjerg et al. (1993) for the original Bak–Sneppen model. However, what is particularly intriguing is that, even though the distribution of avalanche sizes in the model still possesses an exponent in the region of $`\frac{3}{2}`$ or less, the extinction distribution is steeper, with a measured exponent of $`\tau =2.02\pm 0.03`$, in excellent agreement with the results derived from the fossil data.

The model however has some disadvantages. First, the source of the power law in the extinction distribution is almost certainly not a critical process, even though the Bak–Sneppen model, from which this model is derived, is critical. In fact, the model of Newman and Roberts is just a special case of the extinction model proposed later by Newman (see Section 7.1), which does not contain any coevolutionary avalanches at all. In other words, the interesting behaviour of the extinction distribution in this model is entirely independent of the coevolutionary behaviour inherited from the Bak–Sneppen model.

A more serious problem with the model is the way in which the environmental stress is imposed. As we pointed out in Section 5.1, the time-steps in the Bak–Sneppen model correspond to different durations of geological time. This means that there should be a greater chance of a large stress hitting during time-steps which correspond to longer periods. In the model of Newman and Roberts however, this is not the case; the probability of generating a given level of stress is the same in every time-step. In the model of stress-driven extinction discussed in Section 7.1 this shortcoming is rectified.

Another, very similar extension of the Bak–Sneppen model was introduced by Schmoltzi and Schuster (1995). Their motivation was somewhat different from that of Newman and Roberts—they were interested in introducing a “real time scale” into the model. As they put it: “The \[Bak–Sneppen\] model does not describe evolution on a physical time scale, because an update step always corresponds to a mutation of the species with the smallest fitness and its neighbours. This implies that we would observe constant extinction intensity in morphological data and that there will never be periods in which the system does not change.” This is in fact only true if one ignores the rescaling of time implied by Equation (7). As Figure 5.2 shows, there are very clear periods in which the system does not change if one calculates the time in the way Bak and Sneppen did.

The model of Schmoltzi and Schuster also incorporates an external stress term, but in their case it is a local stress $`\eta _i`$, varying from species to species. Other than that however, their approach is very similar to that of Newman and Roberts; species with fitness below $`\eta _i`$ are removed from the system and replaced with new species, and all the variables $`\{\eta _i\}`$ are chosen anew at each time step. Their results also are rather similar to those of Newman and Roberts, although their main interest was to model neuronal dynamics in the brain, rather than evolution, so that they concentrated on somewhat different measurements. There is no mention of extinction, or of avalanche sizes, in their paper.
## 6 Inter-species connection <br>models

In the Bak–Sneppen model, there is no explicit notion of an interaction strength between two different species. It is true that if two species are closer together on the lattice then there is a higher chance of their participating in the same avalanche. But beyond this there is no variation in the magnitude of the influence of one species on another. Real ecosystems on the other hand have a wide range of possible interactions between species, and as a result the extinction of one species can have a wide variety of effects on other species. These effects may be helpful or harmful, as well as strong or weak, and there is in general no symmetry between the effect of $`A`$ on $`B`$ and $`B`$ on $`A`$. For example, if species $`A`$ is prey for species $`B`$, then $`A`$’s demise would make $`B`$ less able to survive, perhaps driving it also to extinction, whereas $`B`$’s demise would aid $`A`$’s survival. On the other hand, if $`A`$ and $`B`$ compete for a common resource, then either’s extinction would help the other. Or if $`A`$ and $`B`$ are in a mutually supportive or symbiotic relationship, then each would be hurt by the other’s removal.

A number of authors have constructed models involving specific species–species interactions, or “connections”. If species $`i`$ depends on species $`j`$, then the extinction of $`j`$ may also lead to the extinction of $`i`$, and possibly give rise to cascading avalanches of extinction. Most of these connection models neither introduce nor have need of a fitness measure, barrier, viability or tolerance for the survival of individual species; the extinction pressure on one species comes from the extinction of other species. Such a system still needs some underlying driving force to keep its dynamics from stagnating, but this can be introduced by making changes to the connections in the model, without requiring the introduction of any extra parameters.

Since the interactions in these models are ecological in nature (taking place at the individual level) rather than evolutionary (taking place at the species level or the level of the fitness landscape), the characteristic time-scale of the dynamics is quite short. Extinctions produced by ecological effects such as predation and invasion can take only a single season, whereas those produced by evolutionary pressures are assumed to take much longer, maybe thousands of years or more.

The models described in this section vary principally in their connection topology, and in their mechanisms for replacing extinct species. Solé and co-workers have studied models with no organized topology, each species interacting with all others, or with a more-or-less random subset of them (Solé and Manrubia 1996, Solé, Bascompte and Manrubia 1996, Solé 1996). By contrast, the models of Amaral and Meyer (1998) and Abramson (1997) involve very specific food-chain topologies. The models of Solé et al. keep a fixed total number of species, refilling empty niches by invasion of surviving species. Abramson’s model also keeps the total fixed, but fills empty niches with random new species, while Amaral and Meyer use an invasion mechanism, but do not attempt to keep the total number of species fixed.

### 6.1 The Solé–Manrubia model

Solé and Manrubia (1996, Solé, Bascompte and Manrubia 1996, Solé 1996) have constructed a model that focuses on species–species interactions through a “connection matrix” $`𝐉`$ whose elements give the strength of coupling between each pair of species.
Specifically, the matrix element $`J_{ij}`$ measures the influence of species $`i`$ on species $`j`$, and $`J_{ji}`$ that of $`j`$ on $`i`$. A positive value of $`J_{ij}`$ implies that $`i`$’s continued existence helps $`j`$’s survival, whereas a negative value implies that $`j`$ would be happy to see $`i`$ disappear. The $`J_{ij}`$ values range between $`-1`$ and $`1`$, chosen initially at random. In most of their work, Solé and Manrubia let every species interact with every other species, so all $`J_{ij}`$s are non-zero, though some may be quite small. Alternatively it is possible to define models in which the connections are more restricted, for instance by placing all the species on a square lattice and permitting each to interact only with its four neighbours (Solé 1996).

A species $`i`$ becomes extinct if its net support $`\sum _jJ_{ji}`$ from others drops below a certain threshold $`\theta `$. The sum over $`j`$ here is of course only over those species that (a) are not extinct themselves, and (b) interact with $`i`$ (in the case of restricted connections). Solé and Manrubia introduce a variable $`S_i(t)`$ to represent whether species $`i`$ is alive ($`S_i=1`$) or extinct ($`S_i=0`$) at time $`t`$, so the extinction dynamics may be written

$$S_i(t+1)=\mathrm{\Theta }\left[\sum _jJ_{ji}S_j(t)-\theta \right],$$ (10)

where $`\mathrm{\Theta }(x)`$ is the Heaviside step function, which is 1 for $`x>0`$ and zero otherwise. As this equation implies, time progresses in discrete steps, with all updates occurring simultaneously at each step. When avalanches of causally connected extinctions occur, they are necessarily spread over a sequence of successive time steps.

To complete the model, Solé and Manrubia introduce two further features, one to drive the system and one to replace extinct species. The driving force is simply a slow random mutation of the coupling strengths in the connection matrix $`𝐉`$. At each time step, for each species $`i`$, one of the incoming connections $`J_{ji}`$ is chosen at random and given a new random value in the interval between $`-1`$ and $`1`$. This may cause one or more species to become extinct through loss of positive support from other species or through increase in the negative influences on it. It is not essential to think of these mutations as strictly biotic; external environmental changes could also cause changes in the coupling between species (and hence in species’ viability).

The replacement of extinct species is another distinguishing feature of Solé and Manrubia’s model. All the niches that are left empty by extinction are immediately refilled with copies of one of the surviving species, chosen at random. This is similar to the speciation processes studied by Kauffman and Neumann in the variation of the NKCS model described in Section 4.5, and in fact Solé and Manrubia refer to it as “speciation”. However, because the Solé–Manrubia model is a model of ecological rather than evolutionary processes, it is probably better to think of the repopulation processes as being an invasion of empty niches by survivor species, rather than a speciation event. Speciation is inherently an evolutionary process, and, as discussed above, takes place on longer time-scales than the ecological effects which are the primary concern of this model. Invading species are copied to the empty slots along with all their incoming and outgoing connections, except that a little noise is added to these connections to introduce diversity.
Specifically, if species $`k`$ is copied to fill a number of open niches $`i`$, then

$$J_{ij}=J_{kj}+\eta _{ij},\qquad J_{ji}=J_{jk}+\eta _{ji},$$ (11)

where $`j`$ ranges over the species with which each $`i`$ interacts, and the $`\eta `$s are all chosen independently from a uniform random distribution in the interval $`(-ϵ,ϵ)`$.

Because empty niches are immediately refilled, the $`S_i(t)`$ variables introduced on the right hand side of Equation (10) are actually always $`1`$, and are therefore superfluous. They do however make the form of the dynamics formally very similar to that of spin-glasses in physics (Fischer and Hertz 1991), and to that of Hopfield artificial neural networks (Hertz et al. 1991), and it is possible that these similarities will lead to useful cross-fertilization between these areas of study.

Solé and Manrubia studied their model by simulation, generally using $`N=100`$ to 150 species, $`\theta =0`$, and $`ϵ=0.01`$. Starting from an initial random state, they waited about 10 000 time steps for transients to die down before taking data. Extinction events in the model were found to range widely in size $`s`$, including occasional large “mass extinction” events that wiped out over 90% of the population. Such large events were often followed by a long period with very little activity. The distribution $`p(s)`$ of extinction sizes was found to follow a power law, as in Equation (1), with $`\tau =2.3\pm 0.1`$ (see Figure 6.1). Later work by Solé et al. (1996) using $`ϵ=0.05`$ gave $`\tau =2.05\pm 0.06`$, consistent with the value $`\tau =2.0\pm 0.2`$ from the fossil data (Section 2.2.1).

The diversified descendants of a parent species may be thought of as a single genus, all sharing a common ancestor. Since the number of offspring of a parent species is proportional to the number of niches which need to be filled following an extinction event, the distribution of genus sizes is exactly the same as that of extinction sizes. Thus Solé and Manrubia find an exponent in the vicinity of 2 for the taxonomic distribution as well (see Equation (2)), to be compared to $`1.5\pm 0.1`$ for Willis’s data (Figure 2.3.1) and to values between $`1.1`$ and $`2.1`$ for Burlando’s analysis (Section 5.5).

The waiting time between two successive extinction events in the Solé–Manrubia model is also found to have a power-law distribution, with exponent $`3.0\pm 0.1`$. Thus events are correlated in time—a random (Poisson) process would have an exponential distribution of waiting times. The distribution of both species and genus lifetimes can in theory also be measured in these simulations, although Solé and Manrubia did not publish any results for these quantities. Further studies would be helpful here.

Solé and Manrubia claim on the basis of their observed power laws that their model is self-organized critical. However, it turns out that this is not the case (Solé, private communication). In fact, the model is an example of an ordinary critical system which is tuned to criticality by varying the parameter $`\theta `$, which is the threshold at which species become extinct. It is just coincidence that the value $`\theta =0`$ which Solé and Manrubia used in all of their simulations is precisely the critical value of the model at which power laws are generated. Away from this value the distributions of the sizes of extinction events and of waiting times are cut off exponentially at some finite scale, and therefore do not follow a power law.
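For concreteness, here is a minimal sketch of the basic dynamics, using the parameter values quoted above ($`\theta =0`$, $`ϵ=0.01`$). Bookkeeping details, such as the treatment of self-connections and the order in which empty niches are refilled, are glossed over, and we assume that at least one species always survives.

```python
import numpy as np

rng = np.random.default_rng()
N, theta, eps = 100, 0.0, 0.01               # values used by Sole and Manrubia
J = rng.uniform(-1, 1, size=(N, N))          # J[i, j]: influence of species i on j

def step(J):
    """One time-step; returns the size of the extinction event."""
    # Driving: for every species, one randomly chosen incoming connection
    # is given a new random value in (-1, 1).
    J[rng.integers(N, size=N), np.arange(N)] = rng.uniform(-1, 1, size=N)
    # Extinction rule, Equation (10): net support must exceed the threshold.
    dead = np.flatnonzero(J.sum(axis=0) <= theta)
    survivors = np.setdiff1d(np.arange(N), dead)
    # Repopulation rule, Equation (11): each empty niche receives a noisy
    # copy of a randomly chosen survivor.
    for i in dead:
        k = rng.choice(survivors)
        J[i, :] = J[k, :] + rng.uniform(-eps, eps, size=N)
        J[:, i] = J[:, k] + rng.uniform(-eps, eps, size=N)
    return dead.size
```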
This coincidence raises the question of whether there is any reason why, in a real ecosystem, this threshold parameter should take precisely the value which produces the power-law distribution, rather than any other value. At present, no persuasive case has been made in favour of $`\theta =0`$, and so the question remains open.

### 6.2 Variations on the Solé–Manrubia <br>model

A number of variations on the basic model of Solé and Manrubia are mentioned briefly in the original paper (Solé and Manrubia 1996). The authors tried relaxing the assumptions of total connectivity (letting some pairs of species have no influence on each other), of $`\theta =0`$, and of diversification (letting $`ϵ=0`$). They also tried letting each $`J_{ij}`$ take only the values $`+1`$ or $`-1`$. In all these cases they report that they found the same behaviour with the same power-law exponents (although as mentioned above, later results showed that in fact the power-law behaviour is destroyed by making $`\theta \ne 0`$). This robustness to changing assumptions is to be expected for critical phenomena, where typically there occur large “universality classes” of similar behaviour with identical exponents (see Section 5.2).

Solé (1996) presents a more significant extension of the model which does change some of the exponents: he proposes a dynamical rule for the connectivity itself. At any time some pairs of sites $`i,j`$ are not connected, so that in effect $`J_{ij}=J_{ji}=0`$. (Solé introduces a new connection variable to represent this, but that is not strictly necessary.) Initially the number of connections per site is chosen randomly between 1 and $`N-1`$. During the population of an empty niche $`i`$ by a species $`k`$, all but one of $`k`$’s non-zero connections are reproduced with noise, as in Equation (11), but the last is discarded and replaced entirely by a new random link from $`i`$ to a site to which $`k`$ is not connected. Solé also replaces the mutation of $`J_{ij}`$, which provides the fundamental random driving force in the Solé–Manrubia model, by a rule that removes one of the existing species at random at any step when no extinction takes place. Without this driving force the system would in general become frozen. The emptied niche is refilled by invasion as always, but these “random” extinction events are not counted in the statistical analysis of extinction. (The waiting time would always be 1 if they were counted.) It is not clear whether this difference between the models has a significant effect on the results.

The observed behaviour of this model is similar to that of the Solé–Manrubia model as far as extinction sizes are concerned; Solé reports an exponent $`\tau =2.02\pm 0.03`$ for the extinction size distribution. However the waiting-time distribution falls much more slowly (so there are comparably more long waits), with an exponent $`1.35\pm 0.07`$ compared to $`3.0\pm 0.1`$ for the Solé–Manrubia model. The smaller exponent seems more reasonable, though of course experimental waiting time data is not available for comparison. The number of connections itself varies randomly through time, and a Fourier analysis shows a power spectrum of the form $`1/f^\nu `$ with $`\nu =0.99\pm 0.08`$. Power spectra of this type are another common feature of critical systems (Solé et al. 1997).

### 6.3 Amaral and Meyer’s food chain <br>model

Whereas the Solé–Manrubia model and its variants have a more or less arbitrary connection topology between species, real ecosystems have very specific sets of interdependencies.
An important part of the natural case can be expressed in terms of food chains, specifying who eats whom. Of course food chains are not the only type of inter-species interaction, but it is nevertheless of interest to consider models of extinction based on food-chain dynamics. Amaral and Meyer (1998) and Abramson (1997) have both constructed and studied such models.

Amaral and Meyer (1998) have proposed a model in which species are arranged in $`L`$ trophic levels labelled $`l=0,1,\ldots ,L-1`$. Each level has $`N`$ niches, each of which may be occupied or unoccupied by a species. A species in level $`l`$ (except $`l=0`$) feeds on up to $`k`$ species in level $`l-1`$; these are its prey. If all of a species’ prey become extinct, then it too becomes extinct, so avalanches of extinction can occur. This process is driven by randomly selecting one species at level 0 at each time-step and making it extinct with probability $`p`$. There is no sense of fitness or of competition between species governing extinction in this model.

To replace extinct species, Amaral and Meyer use a speciation mechanism. At a rate $`\mu `$, each existing species tries to engender an offspring species by picking a niche at random in its own level or in the level above or below. If that randomly selected niche is unoccupied, then the new species is created and assigned $`k`$ prey at random from the existing species on the level below. The parameter $`\mu `$ needs to be large enough that the average origination rate exceeds the extinction rate, or all species will become extinct. Note that, as pointed out earlier, speciation is inherently an evolutionary process and typically takes place on longer time-scales than extinction through ecological interactions, so there is some question about whether it is appropriate in a model such as this. As with the Solé–Manrubia model, it might be preferable to view the repopulation of niches as an invasion process, rather than a speciation one.

The model is initialized by populating the first level $`l=0`$ with some number $`N_0`$ of species at random. Assuming a large enough origination rate, the population will then grow approximately exponentially until limited by the number of available niches. Amaral and Meyer presented results for a simulation of their model with parameters $`L=6`$, $`k=3`$, $`N=1000`$, $`N_0=50`$, $`p=0.01`$ and $`\mu =0.02`$. The statistics of extinction events are similar to those seen in many other models. The time series is highly intermittent, with occasional large extinction events almost up to the maximum possible size $`NL`$. The distribution of extinction sizes $`s`$ fits a power law, Equation (1), with exponent $`\tau =1.97\pm 0.05`$. Origination rates are also highly intermittent, and strongly correlated with extinction events. (The authors report that they obtained similar results, with the same exponents, for larger values of $`k`$ too (Amaral, private communication).)

An advantage of this model is that the number of species is not fixed, and its fluctuations can be studied and compared with empirical data. Amaral and Meyer compute a power spectrum for the number of species and find that it fits a power law $`p(f)\sim 1/f^\nu `$ with $`\nu =1.95\pm 0.05`$.
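The following sketch shows one way to implement these rules, with the parameter values quoted above. The order of the updates within a time-step, and the handling of offspring directed at level 0 or at a level with no prey below, are our own guesses where the published description leaves room for interpretation.

```python
import random

L, N, k, p, mu = 6, 1000, 3, 0.01, 0.02          # parameter values quoted above
occupied = [set() for _ in range(L)]             # occupied niches on each trophic level
prey = {}                                        # (level, niche) -> set of prey niches below
occupied[0] = set(random.sample(range(N), 50))   # N_0 = 50 founding species at level 0

def extinguish(level, niche):
    """Remove a species; predators left with no prey become extinct in turn."""
    occupied[level].discard(niche)
    prey.pop((level, niche), None)
    if level + 1 < L:
        for m in list(occupied[level + 1]):
            s = prey[(level + 1, m)]
            s.discard(niche)
            if not s:                            # all of its prey are gone
                extinguish(level + 1, m)

def step():
    # Driving: one random level-0 species becomes extinct with probability p.
    if occupied[0] and random.random() < p:
        extinguish(0, random.choice(list(occupied[0])))
    # Speciation: at rate mu each species tries to found an offspring in a
    # random niche on its own level or an adjacent one.
    for level in range(L):
        for niche in list(occupied[level]):
            if random.random() >= mu:
                continue
            l2 = level + random.choice((-1, 0, 1))
            n2 = random.randrange(N)
            if not 0 <= l2 < L or n2 in occupied[l2]:
                continue                         # no such level, or niche occupied
            if l2 > 0 and not occupied[l2 - 1]:
                continue                         # nothing to feed on below
            occupied[l2].add(n2)
            if l2 > 0:
                pool = list(occupied[l2 - 1])
                prey[(l2, n2)] = set(random.sample(pool, min(k, len(pool))))
```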
The authors argue that this power spectrum reveals a “fractal structure” in the data, but it is worth noting that a power-spectrum exponent of $`\nu =2`$ occurs for many non-fractal processes, such as simple random walks, and a self-similar structure only needs to be invoked if $`\nu <2`$. Amaral and Meyer also compute a power spectrum for the extinction rate, for comparison with the fossil data analysis of Solé et al. (1997). They find a power law with $`\nu \approx 1`$ for short sequences, but then see a crossover to $`\nu \approx 0`$ at longer times, suggesting that there is no long-time correlation.

Drossel (1999) has analysed the Amaral–Meyer model in some detail. The $`k=1`$ case is most amenable to analysis, because then the food chains are simple independent trees, each rooted in a single species at level 0. The extinction size distribution is therefore equal to the tree size distribution, which can be computed by master equation methods, leading to $`p(s)\sim s^{-2}`$ (i.e., $`\tau =2`$) exactly in the limits $`N\to \infty `$, $`L\to \infty `$. Finite size effects (when $`N`$ or $`L`$ are not infinite) can also be evaluated, leading to a cutoff in the power law at $`s_{\mathrm{max}}\sim N\mathrm{log}N`$ if $`L\gg \mathrm{log}N`$ or $`s_{\mathrm{max}}\sim \mathrm{e}^L`$ if $`L\ll \mathrm{log}N`$. These analytical results agree well with the simulation studies.

The analysis for $`k>1`$ is harder, but can be reduced in the case of large enough $`L`$ and $`N`$ (with $`L\gg \mathrm{ln}N`$) to a recursion relation connecting the lifetime distribution of species on successive levels. This leads to the conclusion that the lifetime distribution becomes invariant after the first few levels, which in turn allows for a solution. The result is again a power-law extinction size distribution with $`\tau =2`$ and cutoff $`s_{\mathrm{max}}\sim \mathrm{e}^L`$. Drossel also considers a variant of the Amaral–Meyer model in which a species becomes extinct if any (instead of all) of its prey disappear. She shows that this too leads to a power law with $`\tau =2`$, although very large system sizes would be needed to make this observable in simulation. She also points out that other variations of the model (such as making the speciation rate depend on the density of species in a layer) do not give power laws at all, so one must be careful about attributing too much universality to the “critical” nature of this model.

### 6.4 Abramson’s food chain model

Abramson (1997) has proposed a different food chain model in which each species is explicitly represented as a population of individuals. In this way Abramson’s model connects extinction to microevolution, rather than being a purely macroevolutionary model. There is not yet a consensus on whether a theory of macroevolution can be built solely on microevolutionary principles; see Stenseth (1985) for a review.

Abramson considers only linear food chains, in which a series of species at levels $`i=1,2,\ldots ,N`$ each feed on the one below (except $`i=1`$) and are fed on by the one above (except $`i=N`$). If the population density at level $`i`$ at time $`t`$ is designated by $`n_i(t)`$, then the changes in one time step are given by

$$n_i(t+1)-n_i(t)=k_in_{i-1}(t)n_i(t)[1-n_i(t)/c_i]-g_in_{i+1}(t)n_i(t).$$ (12)

Here $`k_i`$ and $`g_i`$ represent the predation and prey rates, and $`c_i`$ is the carrying capacity of level $`i`$. These equations are typical of population ecology.
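A sketch of one iteration of Equation (12) is given below, anticipating the simplifications Abramson adopts, which are described in the next paragraph ($`c_i=1`$, $`g_i=k_{i+1}`$, and fictitious boundary species with fixed unit population). The extinction threshold is an illustrative value of ours, and the random replacement with probability $`p`$ is omitted for brevity.

```python
import numpy as np

N = 50                                     # length of the food chain
k = np.random.uniform(0, 1, size=N + 2)    # rates k_1 .. k_N; k[N+1] doubles as g_N
n = np.random.uniform(0, 1, size=N + 2)    # populations n_1 .. n_N, plus padding
n[0] = n[N + 1] = 1.0                      # fictitious boundary species 0 and N+1
threshold = 1e-4                           # extinction threshold (illustrative)

def step(n, k):
    """One iteration of Equation (12) with c_i = 1 and g_i = k_{i+1}."""
    new = n.copy()
    for i in range(1, N + 1):
        new[i] = n[i] + k[i] * n[i - 1] * n[i] * (1.0 - n[i]) \
                      - k[i + 1] * n[i + 1] * n[i]
    # Species whose population has collapsed are replaced by random new ones.
    for i in range(1, N + 1):
        if new[i] < threshold:
            new[i] = np.random.uniform(0, 1)
            k[i] = np.random.uniform(0, 1)
    return new
```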
At the endpoints of the chain, boundary conditions may be imposed by adjoining two fictitious species, $`0`$ and $`N+1`$, with $`n_0=n_{N+1}=1`$. For simplicity Abramson takes $`c_i=1`$ for all $`i`$, and sets $`g_i=k_{i+1}`$. The species are then parameterized simply by their $`k_i`$ and by their population size $`n_i(t)`$. These are initially chosen randomly in the interval $`(0,1)`$.

The population dynamics typically leads to some $`n_i(t)`$’s dropping asymptotically to 0. When they drop below a small threshold, Abramson regards that species as extinct and replaces it with a new species with randomly chosen $`n_i`$ and $`k_i`$, drawn from uniform distributions in the interval between zero and one. But an additional driving force is still needed to prevent the dynamics from stagnating. So with probability $`p`$ at each time-step, Abramson also replaces one randomly chosen species, as if it had become extinct.

The replacement of an extinct species by a new one with a larger population size has in general a negative impact on the species below it in the food chain. Thus one extinction event can lead to an avalanche propagating down the food chain. Note that this is the precise opposite of the avalanches in the Amaral–Meyer model, which propagate upwards due to loss of food source.

Abramson studies the statistics of extinction events in simulations of his model for values of $`N`$ from 50 to 1000. (Solé, private communication, has made the point that these values are unrealistically large for real food chains, which typically have fewer than ten trophic levels.) He finds punctuated equilibrium in the extinction event sizes, but the size distribution $`p(s)`$ does not fit a power law. It does show some scaling behaviour with $`N`$, namely $`p(s)=N^{-\beta }f(sN^{-\nu })`$, where $`\beta `$ and $`\nu `$ are parameters and $`f(x)`$ is a particular “scaling function”. Abramson attributes this form to the system being in a “critical state”. The waiting time between successive extinctions fits a power law over several decades of time, but the exponent seems to vary with the system size. Overall, this model does not have strong claims for criticality and does not agree very well with the extinction data.

## 7 Environmental stress models

In Sections 4 to 6 we discussed several models of extinction which make use of ideas drawn from the study of critical phenomena. The primary impetus for this approach was the observation of apparent power-law distributions in a variety of statistics drawn from the fossil record, as discussed in Section 2; in other branches of science such power laws are often indicators of critical processes. However, there are also a number of other mechanisms by which power laws can arise, including random multiplicative processes (Montroll and Shlesinger 1982, Sornette and Cont 1997), extremal random processes (Sibani and Littlewood 1993) and random barrier-crossing dynamics (Sneppen 1995). Thus the existence of power-law distributions in the fossil data is not on its own sufficient to demonstrate the presence of critical phenomena in extinction processes.

Critical models also assume that extinction is caused primarily by biotic effects such as competition and predation, an assumption which is in disagreement with the fossil record. As discussed in Section 2.2.1, all the plausible causes for specific prehistoric extinctions are abiotic in nature.
Therefore an obvious question to ask is whether it is possible to construct models in which extinction is caused by abiotic environmental factors, rather than by critical fluctuations arising out of biotic interactions, but which still give power-law distributions of the relevant quantities. Such models have been suggested by Newman (1996, 1997) and by Manrubia and Paczuski (1998). Interestingly, both of these models are the result of attempts at simplifying models based on critical phenomena. Newman’s model is a simplification of the model of Newman and Roberts (see Section 5.6), which included both biotic and abiotic effects; the simplification arises from the realization that the biotic part can be omitted without losing the power-law distributions. Manrubia and Paczuski’s model was a simplification of the connection model of Solé and Manrubia (see Section 6.1), but in fact all direct species–species interactions were dropped, leaving a model which one can regard as driven only by abiotic effects. We discuss these models in turn.

### 7.1 Newman’s model

The model proposed by Newman (1996, 1997) has a fixed number $`N`$ of species which in the simplest case are non-interacting. Real species do interact of course, but as we will see the predictions of the model are not greatly changed if one introduces interactions, and the non-interacting version makes a good starting point because of its extreme simplicity. The absence of interactions between species also means that critical fluctuations cannot arise, so any power laws produced by the model are definitely of non-critical origin.

As in the model of Newman and Roberts (1995), the level of the environmental stress is represented by a single number $`\eta `$, which is chosen independently at random from some distribution $`p_{\mathrm{stress}}(\eta )`$ at each time-step. Each species $`i=1,\ldots ,N`$ possesses some threshold tolerance for stress denoted $`x_i`$, which is high in species which are well able to withstand stress and low in those which are not. (See Jablonski (1989) for a discussion of the selectivity of extinction events in the fossil record.) Extinction takes place via a simple rule: if at any time-step the numerical value of the stress level exceeds a species’ tolerance for stress, $`\eta >x_i`$, then that species becomes extinct at that time-step. Thus large stresses (sea-level change, bolide impact) can give rise to large mass extinction events, whilst lower levels of stress produce less dramatic background extinctions. Note that simultaneous extinction of many species occurs in this model because the same large stress affects all species, and not because of any avalanche or domino effects in the ecosystem.

In order to maintain a constant number of species, the system is repopulated after every time-step with as many new species as have just become extinct. The extinction thresholds $`x_i`$ for the new species can either be inherited from surviving species, or can be chosen at random from some distribution $`p_{\mathrm{thresh}}(x)`$. To a large extent it appears that the predictions of the model do not depend on which choice is made; here we focus on the uniform case with $`p_{\mathrm{thresh}}(x)`$ a constant independent of $`x`$ over some allowed range of $`x`$, usually $`0\le x<1`$.
In addition, it is safe to assume that the initial values of the variables $`x_i`$ are also chosen according to $`p_{\mathrm{thresh}}(x)`$, since in any case the effects of the initial choices only persist as long as it takes to turn over all the species in the ecosystem, which happens many times during a run of the model (and indeed many times during the known fossil record).

There is one further element which needs to be added to the model in order to make it work. As described, the species in the system start off with randomly chosen tolerances $`x_i`$ and, through the extinction mechanism described above, those with the lowest tolerance are systematically removed from the population and replaced by new species. Thus, the number of species with low thresholds for extinction decreases over time, in effect creating a gap in the distribution, as in the Bak–Sneppen model. As a result the size of the extinction events taking place dwindles and ultimately extinction ceases almost entirely, a behaviour which we know not to be representative of a real ecosystem. Newman suggests that the solution to this problem comes from evolution. In the intervals between large stress events, species will evolve under other selection pressures, and this will change the values of the variables $`x_i`$ in unpredictable ways. Adapting to any particular selection pressure might raise, lower, or leave unchanged a species’ tolerance to environmental stresses. Mathematically this is represented by making random changes to the $`x_i`$, either by changing them all slightly at each time-step, or by changing a small fraction $`f`$ of them to totally new values drawn from $`p_{\mathrm{thresh}}(x)`$, and leaving the rest unchanged. These two approaches can be thought of as corresponding to gradualist and punctuationalist views of evolution respectively, but it appears in practice that the model’s predictions are largely independent of which is chosen. In his work Newman focused on the punctuationalist approach, replacing a fraction $`f`$ of the species by random new values.

This description fully defines Newman’s model except for the specification of $`p_{\mathrm{stress}}(\eta )`$ and $`p_{\mathrm{thresh}}(x)`$. However it turns out that we can, without loss of generality, choose $`p_{\mathrm{thresh}}(x)`$ to have the simple form of a uniform distribution in the interval from 0 to 1, since any other choice can be mapped onto this with the transformation

$$x\to x^{\prime }=\int _{-\infty }^xp_{\mathrm{thresh}}(y)\mathrm{d}y.$$ (13)

The stress level must of course be transformed in the same way, $`\eta \to \eta ^{\prime }`$, so that the condition $`\eta ^{\prime }>x_i^{\prime }`$ corresponds precisely to $`\eta >x_i`$. This in turn requires a transformation

$$p_{\mathrm{stress}}(\eta ^{\prime })=p_{\mathrm{stress}}(\eta )\frac{\mathrm{d}\eta }{\mathrm{d}\eta ^{\prime }}=\frac{p_{\mathrm{stress}}(\eta )}{p_{\mathrm{thresh}}(\eta )}$$ (14)

for the stress distribution.

The choice of $`p_{\mathrm{stress}}(\eta )`$ remains a problem, since it is not known what the appropriate distribution of stresses is in the real world. For some particular sources of stress, such as meteor impacts, there are reasonably good experimental results for the distribution (Morrison 1992, Grieve and Shoemaker 1994), but overall we have very little knowledge about stresses occurring either today or in the geologic past.
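In code, one time-step of the model can be sketched as follows. Here the stress is taken to be the magnitude of a Gaussian variate, one of the choices examined below, and the parameter values are illustrative rather than Newman's own.

```python
import numpy as np

rng = np.random.default_rng()
N, f, sigma = 10000, 1e-4, 0.2            # illustrative values, not Newman's own
x = rng.uniform(0, 1, size=N)             # stress tolerances; p_thresh is uniform

def step(x):
    """One time-step; returns the fraction of species becoming extinct."""
    eta = abs(rng.normal(0.0, sigma))     # stress level for this time-step
    dead = x < eta                        # every species with x_i < eta dies
    x[dead] = rng.uniform(0, 1, size=int(dead.sum()))      # repopulate the niches
    drift = rng.random(N) < f             # a fraction f of all species evolve,
    x[drift] = rng.uniform(0, 1, size=int(drift.sum()))    # getting new tolerances
    return dead.sum() / N

sizes = [step(x) for _ in range(100000)]
# A histogram of the nonzero entries of sizes approximates a power law in s.
```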
Newman therefore tested the model with a wide variety of stress distributions and found that, in a fashion reminiscent of the self-organized critical models, many of its predictions are robust against variations in the form of $`p_{\mathrm{stress}}(\eta )`$, within certain limits. In Figure 7.1 we show simulation results for the distribution $`p(s)`$ of the sizes $`s`$ of extinction events in the model for one particular choice of stress distribution, the Gaussian distribution: $$p_{\mathrm{stress}}(\eta )\propto \mathrm{exp}\left[-\frac{\eta ^2}{2\sigma ^2}\right].$$ (15) This is probably the commonest noise distribution occurring in natural phenomena. It arises as a result of the central limit theorem whenever a number of different independent random effects combine additively to give one overall stress level. As the figure shows, the resulting distribution of the sizes of extinction events in Newman’s model follows a power law closely over many decades. The exponent of the power law is measured to be $`\tau =2.02\pm 0.02`$, which is in good agreement with the value of $`2.0\pm 0.2`$ found in the fossil data. The only deviation from the power-law form is for very small sizes $`s`$, in this case below about one species in $`10^8`$, where the distribution flattens off and becomes independent of $`s`$. The point at which this happens is controlled primarily by the value of the parameter $`f`$, which governs the rate of evolution of species (Newman and Sneppen 1996). No flat region is visible in the fossil extinction distribution, Figure 2.1.1, which implies that the value of $`f`$ must be small—smaller than the smallest fractional extinction which can be observed reliably in fossil data. However, this is not a very stringent condition, since it is not possible to measure extinctions smaller than a few per cent with any certainty. In Figure 7.1 we show results for the extinction size distribution for a wide variety of other distributions $`p_{\mathrm{stress}}(\eta )`$ of the applied stress, including various different Gaussian forms, exponential and Poissonian noise, power laws and stretched exponentials. As the figure shows, the distribution takes a power-law form in each case. The exponent of the power law varies slightly from one curve to another, but in all cases it is fairly close to the value of $`\tau \approx 2`$ found in the fossil record. In fact, Sneppen and Newman (1997) have shown analytically that for all stress distributions $`p_{\mathrm{stress}}(\eta )`$ satisfying $$\int _\eta ^\infty p_{\mathrm{stress}}(x)dx\propto p_{\mathrm{stress}}(\eta )^\alpha $$ (16) for large $`\eta `$ and some exponent $`\alpha `$, the distribution of extinction sizes will take a power law form for large $`s`$. This condition is exactly true for exponential and power-law distributions of stress, and approximately true for Gaussian and Poissonian distributions. Since this list covers almost all noise distributions which occur commonly in natural systems, the predictions of the model should be reasonably robust, regardless of the ultimate source of the stresses. It is also straightforward to measure the lifetimes of species in simulations of this model. Figure 7.1 shows the distribution of lifetimes measured in one particular run. The distribution is power-law in form as it is in the fossil data, with a measured exponent of $`1.03\pm 0.05`$. Newman (1997) has given a number of other predictions of his model.
In particular, he has suggested how taxonomy can be incorporated into the model to allow one to study the birth and death of genera and higher taxa, in addition to species. With this extension the model predicts a distribution of genus lifetimes similar to that of species, with a power-law form and exponent in the vicinity of one. Note that although the power-law form is seen also in the fossil data, an exponent of one is not in agreement with the value of $`1.7\pm 0.3`$ measured in the fossil lifetime distribution (see Section 2.2.4). The model does however correctly predict Willis’s power-law distribution of the number of species per genus (see Section 2.3.1) with an exponent close to the measured value of $`\beta =\frac{3}{2}`$. Another interesting prediction of the model is that of “aftershock extinctions”—strings of smaller extinctions arising in the aftermath of a large mass extinction event (Sneppen and Newman 1997, Wilke et al. 1998). The mechanism behind these aftershock extinctions is that the repopulation of ecospace after a large event tends to introduce an unusually high number of species with low tolerance for stress. (At other times such species are rarely present because they are removed by the frequent small stresses applied to the system.) The rapid extinction of these unfit species produces a high turnover of species for a short period after a mass extinction, which we see as a series of smaller “aftershocks”. The model makes the particular prediction that the intervals between these aftershock extinctions should fall off with time as $`t^{-1}`$ following the initial large event. This behaviour is quite different from that of the critical models of earlier sections, and therefore it could provide a way of distinguishing in the fossil record between the two processes represented by these models. So far, however, no serious effort has been made to look for aftershock extinctions in the fossil data, and indeed it is not even clear that the available data are adequate for the task. In addition, later work by Wilke and Martinetz (1997) calls into question whether one can expect aftershocks to occur in real ecosystems. (This point is discussed further in Section 7.4.) ### 7.2 Shortcomings of the model Although Newman’s model is simple and makes predictions which are in many cases in good agreement with the fossil data, there are a number of problems associated with it. First, one could criticise the assumptions which go into the model. For example, the model assumes that species are entirely non-interacting, which is clearly false. In the version we have described here it also assumes a “punctuated” view of evolution in which species remain constant for long periods and then change abruptly. In addition, the way in which new species are added to the model is questionable: new species are given a tolerance $`x_i`$ for stress which is chosen purely at random, whereas in reality new species are presumably descended from other earlier species and therefore one might expect some correlation between the values of $`x_i`$ for a species and its ancestors. These criticisms lead to a number of generalizations of the model which have been examined by Newman (1997). To investigate the effect of species interactions, Newman looked at a variation of the model in which the extinction of a species could give rise to the extinction of a neighbouring species, in a way reminiscent of the avalanches of Kauffman’s NK model.
He placed the model on a lattice and added a step to the dynamics in which the extinction of a species as a result of external stress caused the knock-on extinction (and subsequent replacement) of all the species on adjacent lattice sites. In simulations of this version of the model, Newman found, inevitably, spatial correlations between the species becoming extinct which are not present in the original version. Other than this however, it appears that the model’s predictions are largely unchanged. The distributions of extinction event sizes and taxon lifetimes for example are still power-law in form and still possess approximately the same exponents. Similarly it is possible to construct a version of the model in which evolution proceeds in a “gradualist” fashion, with the values of the variables $`x_i`$ performing a slow random walk rather than making punctuated jumps to unrelated values. And one can also create a version in which the values of $`x_i`$ assumed by newly appearing species are inherited from survivors, rather than chosen completely at random. Again it appears that these changes have little effect on the major predictions of the model, although these results come primarily from simulations of the model; the analytic results for the simplest version do not extend to the more sophisticated models discussed here. ### 7.3 The multi-trait version of the model A more serious criticism of Newman’s model is that it models different types of stress using only a single parameter $`\eta `$. Within this model one can only say whether the stress level is high or low at a particular time. In the real world there are many different kinds of stress, such as climatic stress, ecological stresses like competition and predation, disease, bolide impact, changes in ocean chemistry and many more. And there is no guarantee that a period when one type of stress is high will necessarily correspond to high stress of another type. This clearly has an impact on extinction profiles, since some species will be more susceptible to stresses of a certain kind than others. To give an example, it is thought that large body mass was a contributing factor to extinction at the Cretaceous–Tertiary boundary (Clemens 1986). Thus the particular stress which caused the K–T extinction, thought to be the result of a meteor impact, should correspond to tolerance variables $`x_i`$ in our model which are lower for large-bodied animals. Another type of stress—sea-level change, say—may have little or no correlation with body size. To address this problem, Newman (1997) has also looked at a variation of his model in which there are a number $`M`$ of different kinds of stress. In this case each species also has a separate tolerance variable $`x_i^{(k)}`$ for each type of stress $`k`$ and becomes extinct if any one of the stress levels exceeds the corresponding threshold. As with the other variations on the model, it appears that this “multi-trait” version reproduces the important features of the simpler versions, including the power-law distributions of the sizes of extinction events and of species lifetimes. Sneppen and Newman (1997) have explained this result with the following argument. To a first approximation, one can treat the probability of a species becoming extinct in the multi-trait model as the probability that the stress level exceeds the lowest of the thresholds for stress which that species possesses.
In this case, the multi-trait model is identical to the single-trait version but with a different choice for the distribution $`p_{\mathrm{thresh}}(x)`$ from which the thresholds are drawn (one which reflects the probability distribution of the lowest of $`M`$ random numbers; for thresholds drawn uniformly from $`[0,1]`$ this lowest value is distributed as $`M(1-x)^{M-1}`$). However, as we argued earlier, the behaviour of the model is independent of $`p_{\mathrm{thresh}}(x)`$ since we can map any distribution onto the uniform one by a simple integral transformation of $`x`$ (see Equation (13)). ### 7.4 The finite-growth version of the model Another shortcoming of the model proposed by Newman is that the species which become extinct are replaced instantly by an equal number of new species. In reality, fossil data indicate that the process of replacement of species takes a significant amount of time, sometimes as much as a few million years (Stanley 1990, Erwin 1996). Wilke and Martinetz (1997) have proposed a generalization of the model which takes this into account. In this version, species which become extinct are replaced slowly according to the logistic growth law $$\frac{\mathrm{d}N}{\mathrm{d}t}=gN(1-N/N_{\mathrm{max}}),$$ (17) where $`N`$ is the number of species as before, and $`g`$ and $`N_{\mathrm{max}}`$ are constants. Logistic growth appears to be a reasonable model for recovery after large extinction events (Sepkoski 1991, Courtillot and Gaudemer 1996). When the growth parameter $`g`$ is infinite, we recover the model proposed by Newman. Wilke and Martinetz find, as one might expect, that there is a transition in the behaviour of the system at a critical value $`g=g_c`$ where the rate of repopulation of the system equals the average rate of extinction. They give an analytic treatment of the model which shows how $`g_c`$ varies with the other parameters in the problem. For values of $`g`$ below $`g_c`$ life eventually dies out in the model, and it is probably reasonable to assume that the Earth is not, for the moment at least, in this regime. For values of $`g`$ above $`g_c`$ it is found that the power-law behaviour seen in the simplest versions of the model is retained. The value of the extinction size exponent $`\tau `$ appears to decrease slightly with increasing $`g`$, but is still in the vicinity of the value $`\tau \approx 2`$ extracted from the fossil data. Interestingly they also find that the aftershock extinctions discussed in Section 7.1 become less well-defined for finite values of $`g`$, calling into question Newman’s contention that the existence of aftershocks in the fossil record could be used as evidence in favour of his model. This point is discussed further by Wilke et al. (1998). ### 7.5 The model of Manrubia and Paczuski Another variation on the ideas contained in Newman’s model has been proposed by Manrubia and Paczuski (1998). Interestingly, although this model is mathematically similar to the other models discussed in this section, its inspiration is completely different. In fact, it was originally intended as a simplification of the connection model of Solé and Manrubia discussed in Section 6.1. In Newman’s model, there are a large number of species with essentially constant fitness or tolerance to external stress, and those which fall below some time-varying threshold level become extinct. In the model of Manrubia and Paczuski by contrast, the threshold at which species become extinct is fixed and their fitness is varied over time. In detail, the model is as follows.
The model contains a fixed number $`N`$ of species, each with a fitness $`x_i`$, or “viability” as Manrubia and Paczuski have called it. This viability measures how far a species is from becoming extinct, and might be thought of as a measure of reproductive success. All species are subject to random coherent stresses, or “shocks”, which additively increase or decrease the viability of all species by the same amount $`\eta `$. If at any point the viability of a species falls below a certain threshold $`x_0`$, that species becomes extinct and is replaced by speciation from one of the surviving species. In Newman’s model there was also an “evolution” process which caused species with high viability to drift to lower values over the course of time, preventing the system from stagnating when all species with low viability had been removed. The model of Manrubia and Paczuski contains an equivalent mechanism, whereby the viabilities of all species drift, in a stochastic fashion, toward lower values over the course of time. This also prevents stagnation of the dynamics. Although no one has shown whether the model of Manrubia and Paczuski can be mapped exactly onto Newman’s model, it is clear that the dynamics of the two are closely similar, and therefore it is not surprising to learn that the behaviour of the two models is also similar. Figure 7.5 shows the distribution of the sizes $`s`$ of extinction events in a simulation of the model with $`N=3200`$ species. The distribution is close to power-law in form with an exponent of $`\tau =1.9`$ similar to that of Newman’s model, and in agreement with the result $`\tau \approx 2`$ seen in the fossil data. The model also generates a power-law distribution in the lifetimes of species and, as in Newman’s model, a simple definition of genus can be introduced and it can be shown that the distribution of number of species per genus follows a power law as well. The exponent of the lifetime distribution turns out to be approximately 2, which is not far from the value of $`1.7\pm 0.3`$ found in the fossil data (see Section 2.2.4). (The exponent for the distribution of genus sizes is also 2, which is perhaps a shortcoming of this model; recall that Willis’s value for flowering plants was $`1.5`$ (Figure 2.3.1), and the comprehensive studies by Burlando (1990, 1993) gave an average value of $`1.6`$.) What is interesting about this model, however, is that its dynamics is derived using a completely different argument from the one employed by Newman. The basic justification of the model goes like this. We assume first of all that it is possible to define a viability $`x_i`$ for species $`i`$, which measures in some fashion how far a species is from the point of extinction. The point of extinction itself is represented by the threshold value $`x_0`$. The gradual downward drift of species’ viability can then be accounted for as the result of mutation; the majority of mutations lower the viability of the host. Manrubia and Paczuski justify the coherent stresses in the system by analogy with the model of Solé and Manrubia (1996) in which species feel the ecological “shock” of the extinction of other nearby species. In the current model, the origin of the shocks is similarly taken to be the extinction of other species in the system. In other words it is the result of biotic interaction, rather than exogenous environmental influences.
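In the same illustrative spirit as the sketch of Newman’s model above, these dynamics might be implemented as follows; the Gaussian shocks, the uniform downward drift and all parameter values here are our own assumptions for demonstration, not the precise choices of Manrubia and Paczuski.

```python
import numpy as np

def manrubia_paczuski(N=3200, steps=100_000, x0=0.0, sigma=0.05,
                      drift=0.01, seed=0):
    """Coherent-shock model: fixed threshold x0 and drifting viabilities."""
    rng = np.random.default_rng(seed)
    x = x0 + 1.0 + rng.random(N)            # initial viabilities above x0
    sizes = []
    for _ in range(steps):
        x += rng.normal(0.0, sigma)         # one shock, common to all species
        x -= drift * rng.random(N)          # stochastic drift to lower values
        dead = x < x0                       # fixed extinction threshold
        s = int(dead.sum())
        if 0 < s < N:
            sizes.append(s / N)
            parents = rng.choice(np.flatnonzero(~dead), size=s)
            x[dead] = x[parents]            # speciation: copy a survivor
        elif s == N:                        # total wipe-out (rare): re-seed
            sizes.append(1.0)
            x = x0 + 1.0 + rng.random(N)
    return np.array(sizes)
```

Because replacements are copies of surviving species, the viabilities rapidly organize themselves into tight clusters, a point we return to below.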
However, by representing these shocks as coherent effects which influence all species simultaneously to the same degree, Manrubia and Paczuski have removed from the dynamics the direct interaction between species which was present in the original connection model. Amongst other things, this allows them to give an approximate analytic treatment of their model using a time-averaged approximation similar to the one employed by Sneppen and Newman (1997) for Newman’s model. One further nice feature of the Manrubia–Paczuski model is that it is particularly easy in this case to see how large extinction events arise. Because species are replaced by speciation from others, the values of their viabilities tend to cluster together: most species are copies, or near copies, of other species in the system. Such clusters of species tend all to become extinct around the same time because they all feel the same coherent shocks and are all driven below the extinction threshold together. (A similar behaviour is seen in the Solé–Manrubia model of Section 6.1.) This clustering and avalanche behaviour in the model is reminiscent of the so-called “phase-coherent” models which have been proposed as a mechanism for the synchronization of the flashing of fireflies (Strogatz and Stewart 1993). Although no one has yet made a direct connection between these two classes of models, it is possible that mathematical techniques similar to those employed with phase-coherent models may prove profitable with models of the type proposed by Manrubia and Paczuski. ## 8 Sibani’s reset model Sibani and co-workers have proposed a model of the extinction process which they call the “reset model” (Sibani et al. 1995, 1998), and which differs from those discussed in the preceding sections in a fundamental way; it allows for, and indeed relies upon, non-stationarity in the extinction process. That is, it acknowledges that the extinction record is not uniform in time, as it is assumed to be (except for stochastic variation) in the other models we have considered. In fact, extinction intensity has declined on average over time from the beginning of the Phanerozoic until the Recent. Within the model of Sibani et al., the distributions of Section 2 are all the result of this decline, and the challenge is then to explain the decline, rather than the distributions themselves. ### 8.1 Extinction rate decline In Figure 2.2.3 we showed the number of known families as a function of time over the last 600 My. On the logarithmic scale of the figure, this number appears to increase fairly steadily and although, as we pointed out, some of this increase can be accounted for by the bias known as the “pull of the recent”, there is probably a real trend present as well. It is less clear that there is a similar trend in extinction intensity. The extinctions represented by the points in Figure 2.1.1 certainly vary in intensity, but on average they appear fairly constant. Recall, however, that Figure 2.1.1 shows the number of families becoming extinct in each stage, and that the lengths of the stages are not uniform. In Figure 8.1 we show the extinction intensity normalized by the lengths of the stages—the extinction rate in families per million years—and on this figure it is much clearer that there is an overall decline in extinction towards the Recent. In order to quantify the decline in extinction rate, we consider the cumulative extinction intensity $`c(t)`$ as a function of time.
The cumulative extinction at time $`t`$ is defined to be the number of taxa which have become extinct up to that time. In other words, if we denote the extinction intensity at time $`t`$ by $`x(t)`$ then the cumulative extinction intensity is $$c(t)=\int _0^tx(t^{\prime })dt^{\prime }.$$ (18) Figure 8.1 shows this quantity for the marine families in Sepkoski’s database. Clearly the plot has to be monotonically increasing. Sibani et al. suggested that it in fact has a power-law form, with an exponent in the vicinity of $`0.6`$. Newman and Eble (1999b) however have pointed out that it more closely follows a logarithmic increase law—a straight line on the linear–log scales of Figure 8.1. (For comparison we show the same data on log–log scales in the inset. The power-law form proposed by Sibani et al. would appear as a straight line on these scales.) This implies that $`c(t)`$ can be written in the form $$c(t)=A+B\mathrm{log}(t-t_0),$$ (19) where $`A`$ and $`B`$ are constants and $`t_0`$ is the point of intercept of the line in Figure 8.1 with the horizontal axis. (Note that $`t_0`$ lies before the beginning of the Cambrian. If time is measured from $`t=0`$ at the start of the data set, which coincides roughly with the beginning of the Cambrian, then the best fit of the form (19) has $`t_0\simeq -260`$ My.) Combining Equations (18) and (19) and differentiating with respect to $`t`$ we get an expression for the extinction per unit time: $$x(t)=\frac{B}{t-t_0}.$$ (20) In other words the average extinction rate is falling off over time as a power law with exponent $`1`$. Sibani et al. have pointed out that a power-law decline in itself could be enough to explain the distribution of the sizes of extinction events seen in Figure 2.2.1. For an extinction profile of the form of Equation (20) the number of time intervals in which we expect to see extinction events of a certain size $`s`$ is given by $$p(s)=\left|\frac{\mathrm{d}t}{\mathrm{d}x}\right|_{x=s}=\frac{B}{s^2}.$$ (21) In other words, the distribution of event sizes has precisely the power-law form seen in Figure 2.2.1, with an exponent $`\tau =2`$ which is in good agreement with the fossil data. (If we use the power-law fit to the cumulative extinction intensity suggested by Sibani et al., the exponent works out at about $`\tau =2.5`$, which is outside the standard error on the value measured in the fossil record—another reason for preferring the logarithmic fit.) There are problems with this argument. The analysis assumes that the extinction rate takes the idealized form of Equation (20), whereas in fact this equation represents only the average behaviour of the real data. In reality, there is a great deal of fluctuation about this form. For example, Equation (20) implies that all the large extinction events happened in the earliest part of the fossil record, whereas in fact this is not true. The two largest events of all time (the late-Permian and end-Cretaceous events) happened in the second half of the Phanerozoic. Clearly then this analysis cannot tell the entire story. A more serious problem is that this theory is really just “passing the buck”. It doesn’t tell us how, in biological terms, the observed extinction size distribution comes about. All it does is tell us that one distribution arises because of another. The extinction size distribution may be a result of the fall-off in the average extinction rate, but where does the fall-off come from? The origin of the decline in the extinction rate has been a topic of debate for many years.
It has been suggested that the decline may be a sampling bias in the data, arising perhaps from variation in the quality of the fossil record through geologic time (Pease 1992) or from changes in taxonomic structure (Flessa and Jablonski 1985). As with the increase in diversity discussed in Section 2.2.3, however, many believe that these biases are not enough to account entirely for the observed extinction decline. Raup and Sepkoski (1982) have suggested instead that the decline could be the result of a slow evolutionary increase in the mean fitness of species, fitter species becoming extinct less easily than their less fit ancestors. This appears to be a plausible suggestion, but it has a number of problems. With respect to what are we measuring fitness in this case? Do we mean fitness relative to other species? Surely not, since if all species are increasing in fitness at roughly the same rate, then their fitness relative to one another will remain approximately constant. (This is another aspect of van Valen’s “Red Queen hypothesis”, which we mentioned in Section 3.) Do we then mean fitness with respect to the environment, and if so, how is such a fitness defined? The reset model attempts to address these questions and quantify the theory of increasing species fitness. ### 8.2 The reset model The basic idea of the reset model is that species are evolving on high-dimensional rugged fitness landscapes of the kind considered previously in Section 4. Suppose a species is evolving on such a landscape by mutations which take it from one local peak to another at approximately regular intervals of time. (This contrasts with the picture proposed by Bak and Sneppen (1993)—see Section 5.1—in which the time between evolutionary jumps is not constant, but depends on a barrier variable which measures how difficult a certain jump is.) If the species moves to a new peak where the fitness is higher than the fitness at the previous peak, then the new strain will replace the old one. If the dimensionality of the landscape is sufficiently high then the chance of a species retracing its steps and encountering the same peak twice is small and can be neglected. In this case, the process of sampling the fitness at successive peaks is equivalent to drawing a series of independent random fitness values from some fixed distribution, and keeping a record of the highest one encountered so far. Each time the current highest value is replaced by a new one, an evolutionary event has taken place in the model and such events correspond to pseudoextinction of the ancestral species. Sibani et al. refer to this process as a “resetting” of the fitness of the species (hence the name “reset model”), and to the entire dynamics of the model as a “record dynamics”. The record dynamics is simple enough to permit the calculation of distributions of a number of quantities of interest. First of all, Sibani et al. showed that the total number of evolution/extinction events happening between an initial time $`t_0`$ and a later time $`t`$ goes as $`\mathrm{log}(t-t_0)`$ on average, regardless of the distribution from which the random numbers are drawn. This of course is precisely the form seen in the fossil data, Equation (19), and immediately implies that the number of events per unit time falls off as $`1/(t-t_0)`$. Then the arguments leading up to Equation (21) tell us that we should expect a distribution of sizes of extinction events with an exponent $`\tau =2`$, as in the fossil data.
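The record dynamics itself is simple enough to simulate directly, which gives a quick numerical check of both of these results. The sketch below is ours; the exponential form chosen for the random fitnesses is immaterial since, as noted above, the record statistics do not depend on the distribution.

```python
import numpy as np

def record_dynamics(T=10**6, seed=0):
    """Steps at which a running record of iid peak fitnesses is broken."""
    rng = np.random.default_rng(seed)
    peaks = rng.exponential(size=T)         # fitnesses of successive peaks
    prev_best = np.maximum.accumulate(np.concatenate(([-np.inf], peaks[:-1])))
    return np.flatnonzero(peaks > prev_best) + 1   # event times, 1-indexed

times = record_dynamics()
# The number of events up to step t grows logarithmically (about ln T + 0.577
# records in total), as in Equation (19); the event rate therefore falls off
# as 1/t, and binning the events into stages of equal duration gives a
# stage-size distribution close to the p(s) ~ 1/s^2 of Equation (21).
```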
We can also calculate the distribution of the lifetimes of species. Assuming that the lifetime of a species is the interval between the evolutionary event which creates it and the next event, in which it disappears, it turns out that the reset model implies a distribution of lifetimes which is power-law in form with an exponent $`\alpha =1`$, again independent of the distribution of the random numbers used. This is some way from the value $`\alpha =1.7\pm 0.3`$ observed in the fossil data (Section 2.2.4), but no more so than for most of the other models discussed previously. ### 8.3 Extinction mechanisms The model described so far contains only a pseudoextinction mechanism; there is no true extinction taking place, a situation which we know not to be representative of the fossil record. Sibani et al. suggested an extension of their model to incorporate a true extinction mechanism based on competition between species. In this version of the model each species interacts with a number of neighbouring species. Sibani et al. placed the species on a lattice and allowed each one to interact with its nearest neighbours on the lattice. (Other choices would also be possible, such as the random neighbours of the NK and Solé–Manrubia models, for instance.) If a species increases its fitness to some new value through an evolutionary event, then any neighbouring species with fitness lower than this new value becomes extinct. The justification for this extinction mechanism is that neighbouring species are in direct competition with one another and therefore the fitter species tends to wipe out the less fit one by competitive exclusion. As in most of the other models we have considered, the number of species in the model is maintained at a constant level by repopulating empty niches with new species whose fitnesses are, in this case, chosen at random. Curiously, Sibani et al. did not calculate the distribution of the sizes of extinction events in this version of the model, although they did show that the new version has a steeper species lifetime distribution; it is still a power law but has an exponent of $`\alpha =2`$, a value somewhat closer to the $`\alpha =1.7\pm 0.3`$ seen in the fossil data. ## 9 Conclusions In this paper we have reviewed a large number of recent quantitative models aimed at explaining a variety of large-scale trends seen in the fossil record. These trends include the occurrence of mass extinctions, the distribution of the sizes of extinction events, the distribution of the lifetimes of taxa, the distribution of the numbers of species per genus, and the apparent decline in the average extinction rate. None of the models presented match all the fossil data perfectly, but all of them offer some suggestion of possible mechanisms which may be important to the processes of extinction and origination. In this section we conclude our review by briefly running over the properties and predictions of each of the models once more. Much of the interest in these models has focussed on their ability (or lack of ability) to predict the observed values of exponents governing distributions of a number of quantities. In Table 1 we summarize the values of these exponents for each of the models. Most of the models we have described attempt to provide possible explanations for a few specific observations. (1) The fossil record appears to have a power-law (i.e., scale-free) distribution of the sizes of extinction events, with an exponent close to $`2`$ (Section 2.2.1). 
(2) The distribution of the lifetimes of genera also appears to follow a power law, with exponent about $`1.7`$ (Section 2.2.4). (3) The number of species per genus appears to follow a power law with exponent about $`1.5`$ (Section 2.3.1). One of the first models to attempt an explanation of these observations was the NK model of Kauffman and co-workers. In this model extinction is driven by coevolutionary avalanches. When tuned to the critical point between chaotic and frozen regimes, the model displays a power-law distribution of avalanche sizes with an exponent of about $`1`$. It has been suggested that this could in turn lead to a power-law distribution of the sizes of extinction events, although the value of $`1`$ for the exponent is not in agreement with the value $`2`$ measured in the fossil extinction record. It is not clear by what mechanism the extinction would be produced in this model. Building on Kauffman’s ideas, Bak and Sneppen proposed a simpler model which not only produces coevolutionary avalanches, but also self-organizes to its own critical point, thereby automatically producing a power-law distribution of avalanche sizes, regardless of other parameters in the system. Again the exponent of the distribution is in the vicinity of one, which is not in agreement with the fossil record. Many extensions of the Bak–Sneppen model have been proposed. We have described the multi-trait model of Boettcher and Paczuski which is less realistic but has the advantage of being exactly solvable, the model of Vandewalle and Ausloos which incorporates speciation effects and phylogenetic trees, the model of Head and Rodgers which also proposes a speciation mechanism, and the model of Newman and Roberts which introduces true extinction via environmental stress. A different, but still biotic, extinction mechanism has been investigated by Solé and Manrubia, who proposed a “connection” model based on ideas of ecological competition. It is not clear whether ecological effects have made an important contribution to the extinction we see in the fossil record, although the current consensus appears to be that they have not. The Solé–Manrubia model, like Kauffman’s NK model, is a true critical model, which only produces power-law distributions when tuned to its critical point. Unlike Kauffman’s model however, the model of Solé and Manrubia produces the correct value for the extinction size distribution when tuned to this point. We have also described two other models of extinction through ecological interaction: the food chain models of Amaral and Meyer and of Abramson. A third distinct extinction mechanism is extinction through environmental stress, which has been investigated in modelling work by Newman. In Newman’s model, species with low tolerance for stress become extinct during periods of high stress, and no species interactions are included at all. The model gives a value of $`2`$ for the extinction size distribution, the same as that seen in the fossil record. Wilke and Martinetz have proposed a more realistic version of the same model in which recovery after mass extinctions takes place gradually, rather than instantaneously. Another related model is that of Manrubia and Paczuski in which extinction is also caused by coherent “shocks” to the ecosystem, although the biological justification for these shocks is different from that given by Newman. Their model also generates a power-law distribution of extinction sizes with exponent 2. 
Finally, we have looked at the “reset model” of Sibani et al., which proposes that the distribution of sizes of extinction events is a result of declining extinction intensity during the Phanerozoic. The decline is in turn explained as a result of increasing average fitness of species as they evolve. Clearly there are a large number of competing models here, and simply studying quantities such as the distribution of the sizes of extinction events is not going to allow us to distinguish between them. In particular, the question of whether the dominant mechanisms of extinction are biotic or abiotic is interesting and thus far undecided. However, the models we have described give us a good feeling for what mechanisms might be important for generating these distributions. A sensible next step would be to look for signatures, in the fossil record or elsewhere, which might allow us to distinguish between these different mechanisms. ## Acknowledgements The authors would like to thank Per Bak, Stefan Boettcher, Gunther Eble, Doug Erwin, Wim Hordijk, Stuart Kauffman, Tim Keitt, Erik van Nimwegen, Andreas Pedersen, David Raup, Jack Sepkoski, Paolo Sibani, Kim Sneppen and Ricard Solé for useful discussions. Special thanks are due also to Chris Adami, Gunther Eble, Doug Erwin and Jack Sepkoski for providing data used in a number of the figures. This work was supported by the Santa Fe Institute and DARPA under grant number ONR N00014–95–1–0975. ## References * Abramson, G. 1997. Ecological model of extinctions. Phys. Rev. E 55, 785–788. * Adami, C. 1995. Self-organized criticality in living systems. Phys. Lett. A 203, 29–32. * Alvarez, L. W. 1983. Experimental evidence that an asteroid impact led to the extinction of many species 65 million years ago. Proc. Natl. Acad. Sci. 80, 627–642. * Alvarez, L. W. 1987. Mass extinctions caused by large bolide impacts. Physics Today 40, 24–33. * Alvarez, L. W., Alvarez, W., Asaro, F. & Michel, H. V. 1980. Extraterrestrial cause for the Cretaceous–Tertiary extinction. Science 208, 1095–1108. * Amaral, L. A. N. & Meyer, M. 1999. Environmental changes, coextinction, and patterns in the fossil record. Phys. Rev. Lett. 82, 652–655. * Bak, P. 1996. How Nature Works: The Science of Self-Organized Criticality. Copernicus (New York). * Bak, P., Flyvbjerg, H. & Lautrup, B. 1992. Coevolution in a rugged fitness landscape. Phys. Rev. A 46, 6724–6730. * Bak, P. & Sneppen, K. 1993. Punctuated equilibrium and criticality in a simple model of evolution. Phys. Rev. Lett. 71, 4083–4086. * Bak, P., Tang, C. & Wiesenfeld, K. 1987. Self-organized criticality: An explanation of $`1/f`$ noise. Phys. Rev. Lett. 59, 381–384. * Benton, M. J. 1987. Progress and competition in macroevolution. Biol. Rev. 62, 305–338. * Benton, M. J. 1991. Extinction, biotic replacements and clade interactions. In The Unity of Evolutionary Biology, Dudley, E. C. (ed.), Dioscorides (Portland). * Benton, M. J. 1993. The Fossil Record 2. Chapman and Hall (London). * Benton, M. J. 1995. Diversification and extinction in the history of life. Science 268, 52–58. * Binney, J. J., Dowrick, N. J., Fisher, A. J. & Newman, M. E. J. 1992. The Theory of Critical Phenomena. Oxford University Press (Oxford). * Boettcher, S. & Paczuski, M. 1996. Exact results for spatiotemporal correlation in a self-organized critical model of punctuated equilibrium. Phys. Rev. Lett. 76, 348–351. * Bourgeois, T., Clemens, W. A., Spicer, R. A., Ager, T. A., Carter, L. D. & Sliter, W. V. 1988.
A tsunami deposit at the Cretaceous–Tertiary boundary in Texas. Science 241, 567–571. * Bowring, S. A., Grotzinger, J. P., Isachsen, C. E., Knoll, A. H., Pelechaty, S. M. & Kolosov, P. 1993. Calibrating rates of early Cambrian evolution. Science 261, 1293–1298. * Burlando, B. 1990. The fractal dimension of taxonomic systems. J. Theor. Biol. 146, 99–114. * Burlando, B. 1993. The fractal geometry of evolution. J. Theor. Biol. 163, 161–172. * Chiappe, L. M. 1995. The first 85 million years of avian evolution. Nature 378, 349–355. * Clemens, W. A. 1986. Evolution of the vertebrate fauna during the Cretaceous–Tertiary transition. In Dynamics of Extinction, Elliott, D. K. (ed.), Wiley (New York). * Courtillot, V., Féraud, G., Maluski, H., Vandamme, D., Moreau, M. G. & Besse, J. 1988. Deccan flood basalts and the Cretaceous/Tertiary boundary. Nature 333, 843–846. * Courtillot, V. & Gaudemer, Y. 1996. Effects of mass extinctions on biodiversity. Nature 381, 146–148. * Davis, M., Hut, P. & Muller, R. A. 1984. Extinction of species by periodic comet showers. Nature 308, 715–717. * de Boer, J., Jackson, A. D. & Wettig, T. 1995. Criticality in simple models of evolution. Phys. Rev. E 51, 1059–1073. * Derrida, B. 1980. Random energy model: The limit of a family of disordered models. Phys. Rev. Lett. 45, 79–82. * Derrida, B. 1981. Random energy model: An exactly solvable model of disordered systems. Phys. Rev. B 24, 2613–2626. * Drossel, B. 1999. Extinction events and species lifetimes in a simple ecological model. Phys. Rev. Lett. 81, 5011–5014. * Duncan, R. A. & Pyle, D. G. 1988. Rapid eruption of the Deccan basalts at the Cretaceous/Tertiary boundary. Nature 333, 841–843. * Eble, G. J. 1998. The role of development in evolutionary radiations. In Biodiversity Dynamics: Turnover of Populations, Taxa and Communities, M. L. McKinney (ed.), Columbia University Press (New York). * Eble, G. J. 1999. Originations: Land and sea compared. Geobios 32, 223–234. * Ellis, J. & Schramm, D. M. 1995. Could a nearby supernova explosion have caused a mass extinction? Proc. Natl. Acad. Sci. 92, 235–238. * Erwin, D. H. 1996. Understanding biotic recoveries. In Evolutionary Paleobiology, Jablonski, D., Erwin, D. & Lipps, I. (eds.), University of Chicago Press (Chicago). * Flessa, K. W. & Jablonski, D. 1983. Extinction is here to stay. Paleobiology 9, 315–321. * Flessa, K. W. & Jablonski, D. 1985. Declining Phanerozoic background extinction rates: Effect of taxonomic structure? Nature 313, 216–218. * Flyvbjerg, H., Sneppen, K. & Bak, P. 1993. Mean field theory for a simple model of evolution. Phys. Rev. Lett. 71, 4087–4090. * Fox, W. T. 1987. Harmonic analysis of periodic extinctions. Paleobiology 13, 257–271. * Gauthier, J. A. 1986. Saurischian monophyly and the origin of birds. Mem. Calif. Acad. Sci. 8, 1–47. * Gilinsky, N. L. & Bambach, R. K. 1987. Asymmetrical patterns of origination and extinction in higher taxa. Paleobiology 13, 427–445. * Glen, W. 1994. The Mass Extinction Debates. Stanford University Press (Stanford). * Grieve, R. A. F. & Shoemaker, E. M. 1994. The record of past impacts on Earth. In Hazards Due to Comets and Asteroids, Gehrels, T. (ed.), University of Arizona Press (Tucson). * Grimmett, G. R. & Stirzaker, D. R. 1992. Probability and Random Processes, 2nd Edition. Oxford University Press (Oxford). * Hallam, A. 1989. The case for sea-level change as a dominant causal factor in mass extinction of marine invertebrates. Phil. Trans. R. Soc. B 325, 437–455. * Hallock, P. 1986.
Why are large foraminifera large? Paleobiology 11, 195–208. * Harland, W. B., Armstrong, R., Cox, V. A., Craig, L. E., Smith, A. G. & Smith, D. G. 1990. A Geologic Time Scale 1989. Cambridge University Press (Cambridge). * Head, D. A. & Rodgers, G. J. 1997. Speciation and extinction in a simple model of evolution. Phys. Rev. E 55, 3312–3319. * Hertz, J. A., Krogh, A. S. & Palmer, R. G. 1991. Introduction to the Theory of Neural Computation. Addison-Wesley (Reading). * Hoffman, A. A. & Parsons, P. A. 1991. Evolutionary Genetics and Environmental Stress. Oxford University Press (Oxford). * Hut, P., Alvarez, W., Elder, W. P., Hansen, T., Kauffman, E. G., Keller, G., Shoemaker, E. M. & Weissman, P. R. 1987. Comet showers as a cause of mass extinctions. Nature 329, 118–125. * Jablonski, D. 1985. Marine regressions and mass extinctions: a test using the modern biota. In Phanerozoic diversity patterns, Valentine, J. W. (ed.), Princeton University Press (Princeton). * Jablonski, D. 1986. Background and mass extinctions: The alternation of macroevolutionary regimes. Science 231, 129–133. * Jablonski, D. 1989. The biology of mass extinction: A palaeontological view. Phil. Trans. R. Soc. B 325, 357–368. * Jablonski, D. 1991. Extinctions: A paleontological perspective. Science 253, 754–757. * Jablonski, D. 1993. The tropics as a source of evolutionary novelty through geological time. Nature 364, 142–144. * Jablonski, D. & Bottjer, D. J. 1990a. The ecology of evolutionary innovation: the fossil record. In Evolutionary Innovations, M. Nitecki, (ed.), University of Chicago Press (Chicago). * Jablonski, D. & Bottjer, D. J. 1990b. The origin and diversification of major groups: Environmental patterns and macroevolutionary lags. In Major Evolutionary Radiations, P. D. Taylor & G. P. Larwood, (eds.), Oxford University Press (Oxford). * Jablonski, D. & Bottjer, D. J. 1990c. Onshore-offshore trends in marine invertebrate evolution. In Causes of Evolution: A Paleontological Perspective, R. M. Ross & W. D. Allmon, (eds.), University of Chicago Press (Chicago). * Kauffman, S. A. 1993. Origins of Order: Self-Organization and Selection in Evolution. Oxford University Press (Oxford). * Kauffman, S. A. 1995. At Home in the Universe. Oxford University Press (Oxford). * Kauffman, S. A. & Johnsen, S. 1991. Coevolution to the edge of chaos: Coupled fitness landscapes, poised states, and coevolutionary avalanches. J. Theor. Biol. 149, 467–505. * Kauffman, S. A. & Levin, S. 1987. Towards a general theory of adaptive walks on rugged landscapes. J. Theor. Biol. 128, 11–45. * Kauffman, S. A. & Perelson, A. S. 1990. Molecular Evolution on Rugged Landscapes: Proteins, RNA, and the Immune Response. Addison–Wesley (Reading). * Kauffman, S. A. & Weinberger, E. W. 1989. The NK model of rugged fitness landscapes and its application to maturation of the immune response. J. Theor. Biol. 141, 211–245. * Kramer, M., Vandewalle, N. & Ausloos, M. 1996. Speciations and extinction in a self-organizing critical model of tree-like evolution. J. Phys. I France 6, 599–606. * Langton, C. G. 1995. Artificial Life: An Overview. MIT Press (Cambridge). * Loper, D. E., McCartney, K. & Buzyna, G. 1988. A model of correlated periodicity in magnetic-field reversals, climate and mass extinctions. J. Geol. 96, 1–15. * Lyell, C. 1832. Principles of Geology, Vol. 2. Murray (London). * Macken, C. A. & Perelson, A. S. 1989. Protein evolution on rugged landscapes. Proc. Natl. Acad. Sci. 86, 6191–6195. * Manrubia, S. C. & Paczuski, M. 1998.
A simple model of large scale organization in evolution. Int. J. Mod. Phys. C 9, 1025–1032. * Maslov, S., Paczuski, M. & Bak, P. 1994. Avalanches and $`1/f`$ noise in evolution and growth models. Phys. Rev. Lett. 73, 2162–2165. * May, R. M. 1990. How many species? Phil. Trans. R. Soc. B 330, 293–304. * Maynard Smith, J. 1989. The causes of extinction. Phil. Trans. R. Soc. B 325, 241–252. * Maynard Smith, J. and Price, G. R. 1973. The logic of animal conflict. Nature 246, 15–18. * McLaren, D. J. 1988. Detection and significance of mass killings. Historical Biology 2, 5–15. * McNamara, K. J. 1990. Echinoids. In Evolutionary Trends, McNamara, K. J. (ed.), Belhaven Press (London). * Mitchell, M. 1996. An Introduction to Genetic Algorithms. MIT Press (Cambridge). * Montroll, E. W. & Shlesinger, M. F. 1982. On $`1/f`$ noise and other distributions with long tails. Proc. Natl. Acad. Sci. 79, 3380–3383. * Morrison, D. 1992. The Spaceguard Survey: Report of the NASA International Near-Earth Object Detection Workshop. Jet Propulsion Laboratory (Pasadena). * Newman, M. E. J. 1996. Self-organized criticality, evolution and the fossil extinction record. Proc. R. Soc. London B 263, 1605–1610. * Newman, M. E. J. 1997. A model of mass extinction. J. Theor. Biol. 189, 235–252. * Newman, M. E. J. & Eble, G. J. 1999a. Power spectra of extinction in the fossil record. Proc. R. Soc. London B 266, 1267–1270. * Newman, M. E. J. & Eble, G. J. 1999b. Decline in extinction rates and scale invariance in the fossil record. Paleobiology, in press. * Newman, M. E. J., Fraser, S. M., Sneppen, K. & Tozier, W. A. 1997. Comment on “Self-organized criticality in living systems”. Phys. Lett. A 228, 201–203. * Newman, M. E. J. & Roberts, B. W. 1995. Mass extinction: Evolution and the effects of external influences on unfit species. Proc. R. Soc. London B 260, 31–37. * Newman, M. E. J. & Sibani, P. 1999. Extinction, diversity and survivorship of taxa in the fossil record. Proc. R. Soc. London B 266, 1593–1600. * Newman, M. E. J. & Sneppen, K. 1996. Avalanches, scaling and coherent noise. Phys. Rev. E 54, 6226–6231. * Paczuski, M., Maslov, S. & Bak, P. 1996. Avalanche dynamics in evolution, growth, and depinning models. Phys. Rev. E 53, 414–443. * Pang, N. N. 1997. The Bak–Sneppen model: A self-organized critical model of biological evolution. Int. J. Mod. Phys. B 11, 1411–1444. * Parsons, P. A. 1993. Stress, extinctions and evolutionary change: From living organisms to fossils. Biol. Rev. 68, 313–333. * Patterson, C. & Smith, A. B. 1987. Is the periodicity of extinctions a taxonomic artifact? Nature 330, 248–251. * Patterson, C. & Smith, A. B. 1989. Periodicity in extinction: the role of the systematics. Ecology 70, 802–811. * Patterson, R. T. & Fowler, A. D. 1996. Evidence of self organization in planktic foraminiferal evolution: Implications for interconnectedness of paleoecosystems. Geology 24, 215–218. * Pease, C. M. 1992. On the declining extinction and origination rates of fossil taxa. Paleobiology 18, 89–92. * Plotnick, R. E. & McKinney, M. L. 1993. Ecosystem organization and extinction dynamics. Palaios 8, 202–212. * Rampino, M. R. & Stothers, R. B. 1984. Terrestrial mass extinctions, cometary impacts and the sun’s motion perpendicular to the galactic plane. Nature 308, 709–712. * Raup, D. M. 1979a. Biases in the fossil record of species and genera. Bulletin of the Carnegie Museum of Natural History 13, 85–91. * Raup, D. M. 1979b. Size of the Permo-Triassic bottleneck and its evolutionary implications. 
Science 206, 217–218. * Raup, D. M. 1985. Magnetic reversals and mass extinctions. Nature 314, 341–343. * Raup, D. M. 1986. Biological extinction in Earth history. Science 231, 1528–1533. * Raup, D. M. 1991a. Extinction: Bad Genes or Bad Luck? Norton (New York). * Raup, D. M. 1991b. A kill curve for Phanerozoic marine species. Paleobiology 17, 37–48. * Raup, D. M. 1992. Large-body impact and extinction in the Phanerozoic. Paleobiology 18, 80–88. * Raup, D. M. 1996. Extinction models. In Evolutionary Paleobiology, Jablonski, D., Erwin, D. H. & Lipps, J. H., (eds.), University of Chicago Press (Chicago). * Raup, D. M. & Boyajian, G. E. 1988. Patterns of generic extinction in the fossil record. Paleobiology 14, 109–125. * Raup, D. M. & Sepkoski, J. J., Jr. 1982. Mass extinctions in the marine fossil record. Science 215, 1501–1503. * Raup, D. M. & Sepkoski, J. J., Jr. 1984. Periodicity of extinctions in the geologic past. Proc. Natl. Acad. Sci. 81, 801–805. * Raup, D. M. & Sepkoski, J. J., Jr. 1986. Periodic extinctions of families and genera. Science 231, 833–836. * Raup, D. M. & Sepkoski, J. J., Jr. 1988. Testing for periodicity of extinction. Science 241, 94–96. * Ray, T. S. 1994a. An evolutionary approach to synthetic biology. Artificial Life 1, 179–209. * Ray, T. S. 1994b. Evolution, complexity, entropy and artificial reality. Physica D 75, 239–263. * Roberts, B. W. & Newman, M. E. J. 1996. A model for evolution and extinction. J. Theor. Biol. 180, 39–54. * Rosenzweig, M. L. 1995. Species Diversity in Space and Time. Cambridge University Press (Cambridge). * Roy, K. 1996. The roles of mass extinction and biotic interaction in large-scale replacements. Paleobiology 22, 436–452. * Schmoltzi, K. & Schuster, H. G. 1995. Introducing a real time scale into the Bak–Sneppen model. Phys. Rev. E 52, 5273–5280. * Sepkoski, J. J., Jr. 1988. Periodicity of extinction and the problem of catastrophism in the history of life. J. Geol. Soc. London 146, 7–19. * Sepkoski, J. J., Jr. 1990. The taxonomic structure of periodic extinction. In Global Catastrophes in Earth History, Sharpton, V. L. & Ward, P. D. (eds.), Geological Society of America Special Paper 247, 33–44. * Sepkoski, J. J., Jr. 1991. Diversity in the Phanerozoic oceans: A partisan review. In The Unity of Evolutionary Biology, Dudley, E. C. (ed.), Dioscorides (Portland). * Sepkoski, J. J., Jr. 1993. A compendium of fossil marine animal families, 2nd edition. Milwaukee Public Museum Contributions in Biology and Geology 83. * Sepkoski, J. J., Jr. 1996. Patterns of Phanerozoic extinction: A perspective from global databases. In Global events and event stratigraphy, O. H. Walliser, (ed.), Springer-Verlag (Berlin). * Sepkoski, J. J., Jr. 1998. Rates of speciation in the fossil record. Phil. Trans. R. Soc. B 353, 315–326. * Sepkoski, J. J., Jr. & Kendrick, D. C. 1993. Numerical experiments with model monophyletic and paraphyletic taxa. Paleobiology 19, 168–184. * Sibani, P. & Littlewood, P. 1993. Slow dynamics from noise adaptation. Phys. Rev. Lett. 71, 1482–1485. * Sibani, P., Schmidt, M. R. and Alstrøm, P. 1995. Fitness optimization and decay of extinction rate through biological evolution. Phys. Rev. Lett. 75, 2055–2058. * Sibani, P., Schmidt, M. R. and Alstrøm, P. 1998. Evolution and extinction dynamics in rugged fitness landscapes. Int. J. Mod. Phys. B 12, 361–391. * Signor, P. W. & Lipps, J. H. 1982. Sampling bias, gradual extinction patterns, and catastrophes in the fossil record.
In Geological Implications of Impacts of Large Asteroids and Comets on the Earth, Silver, L. T. & Schultz, P. H. (eds.), Geological Society of America Special Paper 190, 291–296. * Simpson, G. G. 1952. How many species? Evolution 6, 342. * Sneppen, K. 1995. Extremal dynamics and punctuated co-evolution. Physica A 221, 168–179. * Sneppen, K., Bak, P., Flyvbjerg, H. & Jensen, M. H. 1995. Evolution as a self-organized critical phenomenon. Proc. Natl. Acad. Sci. 92, 5209–5213. * Sneppen, K. & Newman, M. E. J. 1997. Coherent noise, scale invariance and intermittency in large systems. Physica D 110, 209–222. * Solé, R. V. 1996. On macroevolution, extinctions and critical phenomena. Complexity 1, 40–44. * Solé, R. V. & Bascompte, J. 1996. Are critical phenomena relevant to large-scale evolution? Proc. R. Soc. London B 263, 161–168. * Solé, R. V., Bascompte, J. & Manrubia, S. C. 1996. Extinction: Bad genes or weak chaos? Proc. R. Soc. London B 263, 1407–1413. * Solé, R. V. & Manrubia, S. C. 1996. Extinction and self-organized criticality in a model of large-scale evolution. Phys. Rev. E 54, R42–R45. * Solé, R. V., Manrubia, S. C., Benton, M. & Bak, P. 1997. Self-similarity of extinction statistics in the fossil record. Nature 388, 764–767. * Sornette, D. & Cont, R. 1997. Convergent multiplicative processes repelled from zero: power laws and truncated power laws. J. Phys. I France 7, 431–444. * Stanley, S. M. 1984. Marine mass extinction: A dominant role for temperature. In Extinctions, Nitecki, M. H. (ed.), University of Chicago Press (Chicago). * Stanley, S. M. 1988. Paleozoic mass extinctions: Shared patterns suggest global cooling as a common cause. Am. J. Sci. 288, 334–352. * Stanley, S. M. 1990. Delayed recovery and the spacing of major extinctions. Paleobiology 16, 401–414. * Stenseth, N. C. 1985. Darwinian evolution in ecosystems: The red queen view. In Evolution, Cambridge University Press (Cambridge). * Strogatz, S. H. & Stewart, I. 1993. Coupled oscillators and biological synchronization. Scientific American 269, 102–109. * Van Valen, L. 1973. A new evolutionary law. Evol. Theory 1, 1–30. * Vandewalle, N. & Ausloos, M. 1995. The robustness of self-organized criticality against extinctions in a tree-like model of evolution. Europhys. Lett. 32, 613–618. * Vandewalle, N. & Ausloos, M. 1997. Different universality classes for self-organized critical models driven by extremal dynamics. Europhys. Lett. 37, 1–6. * Vermeij, G. J. 1987. Evolution as Escalation. Princeton University Press (Princeton). * Weisbuch, G. 1991. Complex Systems Dynamics. Addison-Wesley (Reading). * Whitmire, D. P. & Jackson, A. A. 1984. Are periodic mass extinctions driven by a distant solar companion? Nature 308, 713–715. * Wilde, P. & Berry, W. B. N. 1984. Destabilization of the oceanic density structure and its significance to marine extinction events. Palaeogeog. Palaeoclimatol. Palaeoecol. 48, 142–162. * Wilke, C., Altmeyer, S. & Martinetz, T. 1998. Aftershocks in coherent-noise models. Physica D 120, 401–417. * Wilke, C. & Martinetz, T. 1997. Simple model of evolution with variable system size. Phys. Rev. E 56, 7128–7131. * Williams, C. B. 1944. Some applications of the logarithmic series and the index of diversity to ecological problems. J. Ecol. 32, 1–44. * Willis, J. C. 1922. Age and Area. Cambridge University Press (Cambridge). * Zipf, G. K. 1949. Human Behavior and the Principle of Least Effort. Addison–Wesley (Reading).
# Performing Quantum Measurement in Suitably Entangled States Originates the Quantum Computation Speed Up ## I Introduction Why quantum computation can be more efficient than its classical counterpart is an open problem attracting increasing attention, . The reason is naturally sought in the special features of quantum mechanics exploited in quantum computation, like state superposition, entanglement and quantum interference. Quantum measurement, instead, is generally considered necessary only to “read” the computation output. In the justification we shall provide, measurement does more than “reading” an output, it contributes in creating that output in a computationally efficient way. We will show that the logical constraint that there is a single measurement outcome, acquires a striking function in existing quantum algorithms. It becomes a set of logical-mathematical constraints representing the problem to be solved, or the hard part thereof, whereas the measurement outcome, by satisfying these constraints, yields the solution. In all these algorithms, the state before measurement is entangled with respect to a couple of observables<sup>*</sup><sup>*</sup>*As we will see, also in Deutsch’s and Grover’s algorithms, provided that both the problem and the solution algorithm are represented in a physical way.. It is a basic axiom of quantum measurement theory that the time required to measure an observable is independent of this possible entanglement: entanglement is interaction-free. The computational complexity of satisfying the above logical-mathematical constraints originates from entanglement and is transparent to measurement time. On the basis of these arguments, we will justify the speed-up in all known quantum algorithms. ## II Overview For unity of exposition, we shall provide an overview of our justification of the speed-up based on a simplified version of Simon’s algorithm. All details are deferred to the subsequent Sections. The problem is as follows. Given $`B=\{0,1\}`$, we consider a function $`f\left(x\right)`$ from $`B^n`$ to $`B^n`$. The argument $`x`$ ranges over $`0,1,`$ $`\mathrm{},`$ $`N1`$, where $`N=2^n`$; $`n`$ is said to be the size of the problem. We assume that $`f\left(x\right)`$ has the following properties: * it is a 2-to-1 function, namely for any $`xB^n`$ there is one and only one second argument $`x^{^{}}B^n`$ such that $`xx^{^{}}`$ and $`f\left(x\right)=f\left(x^{^{}}\right)`$; * such $`x`$ and $`x^{^{}}`$ are evenly spaced by a constant value $`r`$, namely: $`\left|xx^{^{}}\right|=r`$; * given a value $`x`$ of the argument, computing the corresponding value of $`f\left(x\right)`$ requires a time polynomial in $`n`$ \[poly$`\left(n\right)`$\]; whereas, given a value $`f`$ of the function, finding an $`x`$ such that $`f\left(x\right)=f`$, requires a time exponential in $`n`$ \[exp$`\left(n\right)`$\]; the function is “hard to reverse”. Besides knowing the above properties, we can use a quantum computer that, given any input $`x`$, produces the output $`f\left(x\right)`$ in poly$`\left(n\right)`$ time. The problem is to find $`r`$ in an efficient way, which turns out to be in poly$`\left(n\right)`$ rather than exp$`\left(n\right)`$ time. The computer operates on two registers $`a`$ and $`v`$, each of $`n`$ qubits; $`a`$ contains the argument $`x`$ and $`v`$ – initially set at zero – will contain the result of computing $`f\left(x\right)`$. 
We denote by $`\mathcal{H}_{av}\equiv span\{|x_a,|y_v\}`$, with $`(x,y)`$ running over $`B^n\times B^n`$, the Hilbert space of the two registers. By using the quantum computer and standard operations like the Hadamard transform (see IV for details), we obtain in poly$`\left(n\right)`$ time, at time $`t_2`$, the following state of the two registers (indexes are as in IV):

$$|\phi ,t_2_{av}=\frac{1}{\sqrt{N}}\sum _x|x_a|f\left(x\right)_v,$$ (1)

with $`x`$ running over $`0,1,\mathrm{\dots },N-1`$. We designate by $`\left[a\right]`$ (an observable) the number stored in register $`a`$. Similarly $`\left[v\right]`$ is the number stored in $`v`$. We measure $`\left[v\right]`$ in state (1); this intermediate measurement can be skipped, but we will see that performing or skipping it is mathematically equivalent. Given the character of $`f\left(x\right)`$, the measurement outcome has the form:

$$|\phi ,t_3_{av}=\frac{1}{\sqrt{2}}\left(|\overline{x}_a+|\overline{x}+r_a\right)|\overline{f}_v,$$ (2)

where $`\overline{f}`$ is the value of the measured observable, and $`f\left(\overline{x}\right)=f\left(\overline{x}+r\right)=\overline{f}.`$

We will see that, under a reasonable criterion, the quantum speed-up has already been achieved by reaching state (2) – see also Section IV. Since the speed-up is referred to an efficient classical computation that yields the same result, the quantum character of state (2) constitutes a difficulty. This difficulty can be avoided by resorting to the notion of the computational cost of classically producing the description of state (2). This criterion yields a more universal way of comparing quantum and classical efficiency, and coincides with the usual one when the quantum algorithm has produced the “classical reading”. It will be instrumental in achieving an a-posteriori self-evident result.

Thus, we should assess the cost of classically producing description (2). Of course, we must think that $`\overline{x}`$, $`\overline{x}+r,`$ and $`\overline{f}`$ are appropriate numerical values. Finding them requires solving the following system of numerical algebraic equations:

$$f\left(x_1\right)=f\left(x_2\right),$$ (3)
$$x_1\ne x_2.$$ (4)

It is convenient to resort to the network representation of equations (3) – fig. 1. The gate $`c(x_1,x_2)`$ imposes that, if $`x_1\ne x_2`$, then the output is 1, if $`x_1=x_2`$, then the output is 0, and vice-versa. To impose $`x_1\ne x_2`$, the output must be set at 1. Note that the network represents a system of algebraic equations: time is not involved and gates are just logical constraints. Each of the two gates $`f\left(x\right)`$ imposes that, if the input is $`x`$, then the output is $`f\left(x\right)`$ or, conversely, if the output is $`f`$, then the input is an $`x`$ such that $`f\left(x\right)=f.`$

This network is hard to satisfy by classical means. Because of the looped network topology, finding a valuation of $`x_1,x_2`$ and $`f`$ satisfying the network requires reversing $`f\left(x\right)`$ at least once, which takes, by assumption, exp$`\left(n\right)`$ time. Instead, the time to produce state (2) with Simon’s algorithm is the sum of the poly$`\left(n\right)`$ time required to produce state (1), and the time required to measure the observable $`\left[v\right]`$ in state (1). This latter is independent of the entanglement between registers $`v`$ and $`a`$ and is simply linear in the number of qubits of register $`v`$, namely in $`n`$. The overall time is poly$`\left(n\right)`$.
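For concreteness, the whole overview can be mimicked numerically on a toy instance. The following sketch is our own illustration (the function, the parameters and the seed are hypothetical, and numpy arrays stand in for the quantum registers): it prepares state (1), measures $`\left[v\right]`$ to obtain state (2), and also exhibits the interference step used in Section IV to extract $`r`$.

```python
import numpy as np

# Toy instance: n = 2, N = 4, f = (0, 1, 0, 1), so that |x - x'| = r = 2
# whenever f(x) = f(x').  All values here are illustrative.
rng = np.random.default_rng(0)
n, N, r = 2, 4, 2
f = np.array([0, 1, 0, 1])

# State (1): amplitudes psi[x, y] of |x>_a |y>_v.
psi = np.zeros((N, N))
psi[np.arange(N), f] = 1.0 / np.sqrt(N)

# Measure [v]: draw fbar with Born weights, keep that column, renormalize.
weights = (psi ** 2).sum(axis=0)
fbar = rng.choice(N, p=weights)
a_state = psi[:, fbar] / np.linalg.norm(psi[:, fbar])
print("fbar =", fbar, " register a:", a_state)   # (|xbar> + |xbar+r>)/sqrt(2)

# Hadamard on register a (see Section IV), then measure [a]: only outcomes z
# with r.z = 0 (mod 2) occur, the constraint used to reconstruct r.
dot2 = lambda u, v: bin(u & v).count("1") % 2
H = np.array([[(-1) ** dot2(z, x) for x in range(N)]
              for z in range(N)]) / np.sqrt(N)
out = H @ a_state
for z in range(N):
    print(f"z = {z:02b}: prob = {abs(out[z])**2:.2f}, r.z = {dot2(r, z)}")
```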
Under the above criterion, the speed-up has already been achieved. We shall provide two ways of seeing the active role played by the action of measuring $`\left[v\right]`$. In Section III, we will show that measuring $`\left[v\right]`$ introduces and satisfies, in linear$`\left(n\right)`$ time, a system of algebraic equations in Hilbert space (the one-outcome constraint and consequent ones) equivalent, under the above criterion, to the system of algebraic equations (3). Here, this active role will be discussed at a conceptual level.

The previous criterion needs to be extended. The computational cost of producing a quantum state starting from another quantum state will be benchmarked with the cost of classically producing the description of the former starting from the description of the latter. We shall instrumentally use the following way of thinking (opposite to our view):

> quantum computation can produce a number of parallel outputs exponential in register size, at the cost of producing one output, but this “exponential wealth” is easily spoiled by the fact that quantum measurement reads only one output.

Let us examine the cost of classically deriving description (2) from description (1). The latter can be visualized as the print-out of the sum of $`2^n`$ tensor products. Loosely speaking, two values of $`x`$ such that $`f\left(x_1\right)=f\left(x_2\right)`$ must be exp$`\left(n\right)`$ spaced. Otherwise such a pair of values could be found in poly$`\left(n\right)`$ time by classical “trial and error”. The point is that the print-out would create a Babel Library effect (from the story “The Library of Babel” by J. L. Borges). Even for a small $`n`$, it would fill the entire known universe with, say, $`\mathrm{\dots }`$ $`|x_1_a|f\left(x_1\right)_v`$ $`\mathrm{\dots }`$ here, and $`\mathrm{\dots }`$ $`|x_2_a|f\left(x_2\right)_v`$ $`\mathrm{\dots }`$ \[such that $`f\left(x_1\right)=f\left(x_2\right)`$\] in Alpha Centauri. Finding such a pair of print-outs would still require exp$`\left(n\right)`$ time. The capability of directly accessing that “exponential wealth” would be nullified by its “exponential dilution”. Quantum measurement, instead, distills the desired pair of arguments in a time linear in $`n`$. In fact, it does more than randomly selecting one measurement outcome; by selecting one outcome, it performs a logical operation (selecting the two values of $`x`$ associated with the value of that outcome) crucial for solving the problem. The active role played by quantum measurement, complementary to the production of the parallel computation outputs, is self-evident. In Section III, this role will be pinpointed in a rigorous way.

## III Quantum algebraic computation

It is easy to show that quantum measurement introduces and satisfies a system of algebraic equations equivalent to (3). By going through elementary notions, we will highlight the pattern of a new form of computation. We shall first apply von Neumann’s model to the quantum measurement of $`\left[v\right]`$ in state (1). This model comprises two steps. The first is a unitary evolution $`U`$, leading from the state before measurement to a “provisional description” of the state after measurement:

$$|\psi ,t_2_{avp}=|\phi ,t_2_{av}|0_p\stackrel{U}{\to }$$ (5)

$$|\psi ,t_3_{avp}=\frac{1}{\sqrt{N}}\sum _i\left(|x_i_a+|x_i+r_a\right)|f_i_v|f_i_p,$$ (6)

where $`f_i=f\left(x_i\right)=f\left(x_i+r\right)`$. Here $`p`$ denotes a third register of $`n`$ qubits used to represent the state of the “classical pointer” in Hilbert space.
The pointer state is sharp in state (5), before the measurement interaction. In the state after measurement (6), $`f_i`$ runs over all the values of $`f\left(x\right).`$ As stated before, the elapsed time $`t_3-t_2`$ is linear in $`n`$ (the number of qubits in register $`v`$). As is well known, description (6) represents the appropriate entanglement between measured observable and classical pointer, but it must be reconciled with the empirical evidence that the pointer is in a sharp state. The second step of von Neumann’s model amounts to a reinterpretation of description (6). The tensor products appearing in (6) become mutually exclusive measurement outcomes (still at the same time $`t_3`$) with probabilities given by the square moduli of the respective probability amplitudes, as is well known (it is the same in decoherence theory, where the elements of a mixture become mutually exclusive measurement outcomes). This yields a measurement outcome of the form:

$`|\phi ,t_3_{av}|\overline{f}_p=\frac{1}{\sqrt{2}}\left(|\overline{x}_a+|\overline{x}+r_a\right)|\overline{f}_v|\overline{f}_p.`$

We can disregard the factor $`|\overline{f}_p`$, and focus on the quantum part of the measurement outcome, $`|\phi ,t_3_{av}`$, resulting from the reinterpretational step. We should note that this reinterpretation, as it is, does not involve the notion of time and is transparent to dynamics. Interestingly, the speed-up stems out of the reinterpretation, i.e. from the constraint that there is only one measurement outcome. In fact, we will show that $`|\phi ,t_3_{av}`$ is the solution of a system of algebraic equations equivalent to (3). These equations represent the following usual conditions introduced by quantum measurement: (i) the outcome of measuring $`\left[v\right]`$ must be a single eigenstate $`|f_v`$, any one element of the set of all eigenstates $`\left\{|f_v\right\}`$; (ii) this eigenstate must “drag” all the tensor products appearing in $`|\phi ,t_2_{av}`$ that contain it; (iii) it must be a specific eigenstate $`|\overline{f}`$, selected according to probability amplitudes.

Let $`|\phi _{av}=\sum _{x,y}\alpha _{x,y}|x_a|y_v`$ be an “unknown” vector of $`\mathcal{H}_{av}`$; $`(x,y)`$ runs over $`B^n\times B^n`$, and $`\alpha _{x,y}`$ are complex variables independent of each other up to normalization: $`\sum _{x,y}\left|\alpha _{x,y}\right|^2=1.`$ The above conditions originate a system of three algebraic equations to be simultaneously satisfied by $`|\phi _{av}`$:

$$P_v^f|\phi _{av}=|\phi _{av},$$ (7)

where $`P_v^f=|f_vf|_v`$ is the projector on the Hilbert subspace $`\mathcal{H}_{av}^f=span\{|x_a,|f_v\}`$ with $`x`$ running over $`B^n`$ and $`|f_v\in \left\{|f_v\right\}`$ being fixed; a $`|\phi _{av}`$ satisfying eq. (7) is a free linear combination of all the tensor products of $`\mathcal{H}_{av}`$ containing $`|f_v`$; this is condition (i);

$$\left|\phi |_{av}|\phi ,t_2_{av}\right|\text{ must be maximum;}$$ (8)

$`|\phi _{av}`$, satisfying (7) and (8), becomes the projection of $`|\phi ,t_2_{av}`$ on $`\mathcal{H}_{av}^f:|\phi _{av}=\sqrt{\frac{N}{2}}|f_vf|_v|\phi ,t_2_{av};`$ this means that $`|f_v`$ has “dragged” all the tensor products of $`|\phi ,t_2_{av}`$ containing it; this is condition (ii);

$$|f=|\overline{f}$$ (9)

with $`|\overline{f}`$ randomly selected as stated before. The solution of equations (7–9) is $`|\phi _{av}=\sqrt{\frac{N}{2}}|\overline{f}_v\overline{f}|_v|\phi ,t_2_{av}=|\phi ,t_3_{av}`$, indeed the quantum state after measurement.
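Conditions (i)–(iii) can also be checked with elementary linear algebra. The sketch below is our own illustration, on the same toy function as above (all names and values are ours): it verifies that the renormalized projection onto $`\mathcal{H}_{av}^{\overline{f}}`$ is indeed the overlap maximizer required by (8), i.e. the solution of (7–9).

```python
import numpy as np

# Toy check of conditions (i)-(iii): within the subspace spanned by
# {|x>_a |fbar>_v}, the renormalized projection of |phi, t2> maximizes the
# overlap (Eq. 8).  Toy instance: N = 4, f = (0, 1, 0, 1), fbar = 1.
rng = np.random.default_rng(1)
N, f, fbar = 4, np.array([0, 1, 0, 1]), 1

phi_t2 = np.zeros((N, N))
phi_t2[np.arange(N), f] = 0.5            # amplitudes of |phi, t2> on (a, v)

# Condition (i): restrict to the column v = fbar.  Condition (ii): the
# renormalized projection maximizes the overlap within that subspace.
col = phi_t2[:, fbar]
proj = col / np.linalg.norm(col)
best = abs(proj @ col)
for _ in range(1000):                     # random competitors in H_av^fbar
    c = rng.normal(size=N)
    c /= np.linalg.norm(c)
    assert abs(c @ col) <= best + 1e-12   # Cauchy-Schwarz: proj wins

print("maximal overlap:", best, " |phi, t3> over x:", proj)
```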
To sum up, satisfying equations (7–9) is equivalent to performing the reinterpretational step of von Neumann’s model. This is transparent to measurement dynamics, namely to the first step of the model. Thus, performing the first step gives “for free” (without incurring any further dynamical cost) the solution of (7–9); in any event, the process of satisfying equations (7–9) must be comprised in the time interval $`[t_2,t_3]`$, which is linear in $`n`$ (Ref. provides a reformulation of von Neumann’s model that better fits the current approach). This is equivalent to solving equations (3), namely the classically hard part of the problem. This justifies the quantum speed-up.

The capability of directly solving a system of algebraic equations, without having to execute an algorithm, comes from a peculiar feature. The determination of the measurement outcome (i.e. of the solution) is dually influenced by both the initial actions, required to prepare the state before measurement, and the logical-mathematical constraints introduced by the final measurement action. These constraints are in fact independent of the initial actions, since they hold unaltered for all initial actions. In Simon’s algorithm, dual influence is what distills a proper pair of values of $`x`$ among an exponential number of such values, thus yielding the speed-up. Conversely, the speed-up is the observable consequence of dual influence.

Dual influence can be seen as a special instance of time-symmetrized quantum measurement (this notion has been developed by Aharonov et al., still outside the context of entanglement and problem solving; see, e.g., refs. , ). Whether this notion is purely interpretational or can have observable consequences is a controversial issue – as is well known. Unexpectedly, we have found a certainly observable consequence (the speed-up) in the context of quantum computation. It is worth noting that this consequence becomes observable after the action of quantum measurement and dual influence.

Summarizing, quantum computation turns out to belong to an entirely new paradigm where there is identity between the implicit or algebraic definition of a solution and its physical determination. It is worth noting that this paradigm blurs a long-standing distinction (of mathematical logic) between the notions of “implicit definition” and “computation”. An implicit definition does not prescribe how to construct its object (say, a string in some formal language). It only says that, demonstrably, there exists such an object. For example, the numerical problems we are dealing with implicitly or algebraically define their solutions (if the problem admits no solution, we should consider the meta-problem whether the problem admits a solution). Let us consider factorization: given the known product $`c`$ of two unknown prime numbers $`x`$ and $`y`$, the numerical algebraic equation $`xy=c`$ implicitly defines the values of $`x`$ and $`y`$ that satisfy it. Equations (3) constitute a similar example. In order to find the object of an implicit definition, the latter must be changed into an equivalent constructive definition, namely into an algorithm (if possible, but it is always possible with the problems we are dealing with). An algorithm is an abstraction of the way things can be constructed in reality – inevitably in a model thereof – and prescribes a computation process that builds the object of the definition.
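The distinction can be made concrete on the factorization example just mentioned: checking the implicit definition is immediate, while a constructive definition must actually build its object. A minimal sketch (toy numbers, our own illustration):

```python
# Implicit vs constructive definition for x * y = c (toy instance c = 15).
c = 3 * 5

def satisfies(x, y):
    # The implicit (algebraic) definition: it only *checks* a candidate.
    return x * y == c and x > 1 and y > 1

def construct():
    # A constructive definition (an algorithm): trial division builds (x, y).
    for x in range(2, c):
        if c % x == 0:
            return x, c // x

print(satisfies(3, 5), construct())   # True (3, 5)
```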
The current notion of algorithm still reflects the way things can be constructed in the traditional classical reality – namely through a sequential process. Turing machine computation and the Boolean network representation of computation are examples of sequential computation. An algorithm specifies a one-way propagation of logical implication from a completely defined input to a completely defined output which contains the solution. It is thus meant to be executable through a dynamical process, namely through a one-way causality propagation (classical analog computation is not considered here to be fundamentally different, being still performed through a one-way causality propagation). We can see that the essence of quantum computation, dual influence, is extraneous to the sequential notions of both algorithm and dynamics. In particular, quantum computation is not “quantum Turing machine” computation.

## IV Four types of quantum algorithms

### A Modified Simon’s algorithm

In order to make our interpretation of the quantum speed-up more visible, we will follow the simplified version of Simon’s algorithm. With respect to the original version, we must confine ourselves to the case that the oracle gives us a 2-to-1 function $`f:B^n\to B^n`$ such that $`\mathrm{\forall }x\ne x^{\prime }:f\left(x\right)=f(x^{\prime })\iff x=x^{\prime }\oplus r,`$ where $`\oplus `$ denotes bitwise exclusive or. The problem is to find $`r`$ in poly(n) time. With a further simplification, as anticipated in Section II, we replace the above condition with the condition $`\left|x-x^{\prime }\right|=r`$. For the sake of clarity, the following table gives a trivial example.

| $`x`$ | $`0`$ | $`1`$ | $`2`$ | $`3`$ |
| --- | --- | --- | --- | --- |
| $`f\left(x\right)`$ | $`0`$ | $`1`$ | $`0`$ | $`1`$ |

Table I

The modified algorithm is given in Fig. 2 – we should disregard the label $`/F`$ for the time being. Registers $`a`$ and $`v`$ undergo successive unitary transformations, either jointly or separately:

* The $`f\left(x\right)`$ transform (a reversible Boolean gate in the time-diagram of computation – Fig. 3) leaves the content of register $`a`$ unaltered, so that an input $`x`$ is repeated in the corresponding output, and computes $`f\left(x\right)`$, adding it to the former content of register $`v`$ (which was set to zero). If the state is not sharp but is a quantum superposition, the same transformation applies to any tensor product appearing in it.
* $`H`$ is the Hadamard transform. On a single qubit $`i`$, it operates as follows: $`|0_i\stackrel{H}{\to }\frac{1}{\sqrt{2}}\left(|0_i+|1_i\right),`$ $`|1_i\stackrel{H}{\to }\frac{1}{\sqrt{2}}\left(|0_i-|1_i\right)`$. In the general case of a register of $`n`$ qubits containing the number $`\overline{x}`$, it yields $`|\overline{x}_a\stackrel{H}{\to }\frac{1}{\sqrt{N}}\sum _x\left(-1\right)^{\overline{x}\cdot x}|x_a,`$ where $`N=2^n`$, $`x`$ ranges over $`0,1,\mathrm{\dots },N-1`$, and $`\overline{x}\cdot x`$ denotes the modulo 2 inner product of the two numbers in binary notation (they should be seen as row matrices).
* $`M`$ represents the action of measuring the numerical content of a register.

The algorithm proceeds through the following steps (also applied to the table I example):

(a) prepare: $`|\phi ,t_0_{av}=|0_a|0_v;`$

(b) perform the Hadamard transform on register $`a`$; this yields: $`|\phi ,t_1_{av}=\frac{1}{\sqrt{N}}\sum _x|x_a|0_v=\frac{1}{2}\left(|0_a|0_v+|1_a|0_v+|2_a|0_v+|3_a|0_v\right);`$

(c)
compute $`f\left(x\right)`$ and add the result to the former content $`\left(0\right)`$ of register $`v`$, which yields: $`|\phi ,t_2_{av}=\frac{1}{\sqrt{N}}\sum _x|x_a|f\left(x\right)_v=\frac{1}{2}\left(|0_a|0_v+|1_a|1_v+|2_a|0_v+|3_a|1_v\right);`$ this is the state before measurement;

(d) measure $`\left[v\right]`$ obtaining, say, $`\overline{f}=1`$; the state after measurement is thus: $`|\phi ,t_3_{av}=\frac{1}{\sqrt{2}}\left(|\overline{x}_a+|\overline{x}+r_a\right)|\overline{f}_v=\frac{1}{\sqrt{2}}\left(|1_a+|3_a\right)|1_v.`$

We should note that, at this stage of the algorithm, it is equivalent to either perform or skip $`\left[v\right]`$ measurement (see further below). It will be easier to understand the algorithm and the reason for the speed-up if we assume that this measurement has been performed. The measurement outcome, $`|\phi ,t_3_{av}=\sqrt{\frac{N}{2}}|\overline{f}_v\overline{f}|_v|\phi ,t_2_{av}`$, is naturally dually influenced (Section III). Ekert and Jozsa have shown that quantum entanglement between qubits is essential for providing a computational speed-up, in terms of time or resources, in the class of quantum algorithms we are dealing with (which yield an exponential speed-up). After measuring $`f\left(x\right)`$, the state of the two registers becomes factorizable, and all entanglement is destroyed. The remaining actions, performed on register $`a`$, use interference (which generates no entanglement) to “extract” $`r`$ out of the superposition $`\frac{1}{\sqrt{2}}\left(|\overline{x}_a+|\overline{x}+r_a\right)`$. Under the criterion introduced in Section II, we must conclude from another standpoint that the speed-up has been achieved by preparing $`|\phi ,t_3_{av}`$.

(e) perform $`H`$ on register $`a`$; this yields: $`|\phi ,t_4_{av}=\frac{1}{\sqrt{2N}}\sum _z\left(-1\right)^{\overline{x}\cdot z}\left[1+\left(-1\right)^{r\cdot z}\right]|z_a|\overline{f}_v`$;

(f) measure $`\left[a\right]`$ in $`|\phi ,t_4_{av}`$; we designate the result by $`z`$; $`r\cdot z`$ must be 0 – see the form of $`|\phi ,t_4_{av}`$. This holds unaltered if step (d) measurement is omitted, as is well known;

(g) by repeating the overall computation process a sufficient number of times, poly($`n`$) on average, a number of constraints $`r\cdot z=0`$ sufficient to identify $`r`$ is gathered.

How the speed-up is achieved in $`[t_0,t_3]`$ has been anticipated in Sections II and III. Summarizing, measuring $`\left[v\right]`$ in state $`|\phi ,t_2_{av}`$ creates the system of algebraic equations (7–9) \[equivalent to (3)\] and yields the superposition of a pair of values of $`x_1`$ and $`x_2`$ which satisfy this system ($`r`$ is “easily” extracted from the superposition). Solving equations (3) by classical computation would require exp($`n`$) time.

Finally, let us show that performing or skipping step (d) (i.e. $`\left[v\right]`$ measurement in $`|\phi ,t_2_{av}`$) is equivalent. Let us skip step (d) and measure $`\left[a\right]`$ first, at time $`t_4`$. In Fig. 2, $`M`$ on $`v`$ should be shifted at least after $`t_5`$. Whether $`\left[v\right]`$ is measured after $`t_5`$ is indifferent, or mathematically equivalent. Let us think of measuring it.
This induces a “wave function collapse” of the state of register $`v`$ on some $`|\overline{f}_v`$ (the notion of “collapse” is not needed in any essential way; it is a mathematically legitimate notion that comes handy here for the sake of explanation; the result of collapse can be backdated to any time during the unobserved evolution of the quantum system from $`t_0`$ to $`t_3`$, provided that this result undergoes back in time – in an inverted way – the same transformations undergone by the time-forward evolution, the usual one). Since $`|\overline{f}_v`$ is disentangled from the state of register $`a`$, and no operation has been performed on register $`v`$ since time $`t_2`$ (see fig. 2, keeping in mind that $`M`$ on $`v`$ has been shifted after $`t_5`$), backdating collapse to time $`t_2`$ means backdating the result of collapse, namely $`|\overline{f}_v`$, as it is. This is equivalent to having performed step (d). Another way of seeing this is that, because of the entanglement between registers $`a`$ and $`v`$, measuring $`\left[a\right]`$ first, at time $`t_4`$, is equivalent to simultaneously measuring $`\left[v\right]`$; the result of this virtual measurement can be backdated, and we can go on with a reasoning similar to the above one.

### B Shor’s algorithm

The problem of factoring an integer $`L`$ – the product of two unknown primes – is transformed into the problem of finding the period of the function $`f\left(x\right)=a^x\mathrm{mod}L`$, where $`a`$ is an integer between $`0`$ and $`L-1`$, and is coprime with $`L`$. Figure 2 can also represent Shor’s algorithm, provided that $`f\left(x\right)`$ is defined as above and that the second Hadamard transform is substituted by the discrete Fourier transform $`F`$. The state before measurement has the form $`|\phi ,t_2_{av}=\frac{1}{\sqrt{L}}\sum _x|x_a|f\left(x\right)_v`$. Measuring or not measuring $`f\left(x\right)`$ in $`|\phi ,t_2_{av}`$ is still equivalent. By measuring it, the above quantum state changes into the superposition

$$\overline{k}\left(|\overline{x}_a+|\overline{x}+r_a+|\overline{x}+2r_a+\mathrm{\dots }\right)|\overline{f}_v,$$ (10)

where $`f\left(\overline{x}\right)=f\left(\overline{x}+r\right)=\mathrm{\dots }=\overline{f}`$, and $`\overline{k}`$ is a normalization factor. The second part of the algorithm generates no entanglement and serves to “extract” $`r`$ in polynomial time, by using Fourier-transform interference and auxiliary, off-line, mathematical considerations. Under the current assumptions, the quantum speed-up has been achieved by preparing state (10): the discussion is completely similar to that of the previous algorithm.

### C Deutsch’s 1985 algorithm

The seminal 1985 Deutsch’s algorithm has been the first demonstration of a quantum speed-up. In its current form, this algorithm yields a deterministic output, apparently ruling out the dual influence explanation. A thorough examination of both the problem and the solution algorithm will show that this is not the case. Until now, the problem has been to efficiently reverse a hard-to-reverse function $`f\left(x\right)`$. In the language of game theory, this is a game against (mathematical) nature. Deutsch’s algorithm, and more in general quantum oracle computing, is better seen as a competition between two players. One produces the problem, the other should produce the solution. Sticking to Greek tradition, we shall call the former player Sphinx, the latter Oedipus. The game is formalized as follows.
Both players know everything of a set of software programs $`\left\{f_k\right\}`$ (where $`k`$ labels the elements of the set), whereas each program $`f_k`$ computes some function $`f_k:B^n\to B^n`$. The Sphinx chooses $`k`$ at random, loads program $`f_k`$ on a computer (i.e., sets the oracle in its $`k`$-th mode) and passes it on to Oedipus. Oedipus knows nothing of the Sphinx’ choice and must efficiently find $`k`$ by testing the computer (oracle) input-output behaviour. If the computer is quantum, then we speak of “quantum oracle computing”. Deutsch’s 1985 algorithm, as modified in , is as follows. Let $`\left\{f_k\right\}`$ be the set of all possible functions $`f_k:B\to B`$, namely:

| $`x`$ | $`f_{00}\left(x\right)`$ | | | $`x`$ | $`f_{01}\left(x\right)`$ | | | $`x`$ | $`f_{10}\left(x\right)`$ | | | $`x`$ | $`f_{11}\left(x\right)`$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| $`0`$ | $`0`$ | | | $`0`$ | $`0`$ | | | $`0`$ | $`1`$ | | | $`0`$ | $`1`$ |
| $`1`$ | $`0`$ | | | $`1`$ | $`1`$ | | | $`1`$ | $`0`$ | | | $`1`$ | $`1`$ |

$`\left\{f_k\right\}`$ is divided into a couple of subsets: the balanced functions, characterized by an even number of zero and one values, thus labeled by $`k=01,10`$, and the unbalanced ones, labeled by $`k=00,11`$. Once set in its $`k`$-th mode, the oracle computes $`f_k\left(x\right)`$. Oedipus must find, with a minimum number of oracle runs, whether the oracle (whose mode has been randomly set by the Sphinx) computes a balanced or an unbalanced function. In other words, he must compute the functional $`\mathcal{B}\left(f_k\right)`$, which is, say, 1 (0) when the function is balanced (unbalanced). The algorithm is illustrated in Fig. 4(a). The computation of $`f_k\left(x\right)`$ is represented as a reversible Boolean gate like in the previous algorithms, but for the fact that the result of the computation is now added modulo 2 to the former content of register $`v`$.

Given the Sphinx’ choice $`k`$, the algorithm proceeds as follows; each point gives the action and the corresponding result.

(a) prepare: $`|\phi _k,t_0_{av}=\frac{1}{\sqrt{2}}|0_a\left(|0_v-|1_v\right),`$

(b) perform Hadamard on $`a`$: $`|\phi _k,t_1_{av}=\frac{1}{2}\left(|0_a+|1_a\right)\left(|0_v-|1_v\right),`$

(c) we shall consolidate the next two steps – Fig. 4(a): compute $`f_k\left(x\right)`$, adding it modulo 2 to the former content of $`v`$, and perform Hadamard on $`a`$; the result depends on the Sphinx’ choice:

$`|\phi _{00},t_3_{av}=\frac{1}{\sqrt{2}}|0_a\left(|0_v-|1_v\right)`$

$`|\phi _{01},t_3_{av}=\frac{1}{\sqrt{2}}|1_a\left(|0_v-|1_v\right)`$

$`|\phi _{10},t_3_{av}=-\frac{1}{\sqrt{2}}|1_a\left(|0_v-|1_v\right)`$

$`|\phi _{11},t_3_{av}=-\frac{1}{\sqrt{2}}|0_a\left(|0_v-|1_v\right)`$

(d) measure $`\left[a\right]`$: it can be seen that the content of register $`a`$ yields the functional $`\mathcal{B}\left(f_k\right)`$, namely Oedipus’ answer.

This algorithm is more efficient than any classical algorithm, where two runs of the oracle are required to compute $`\mathcal{B}\left(f_k\right)`$. However, the result is apparently reached in a deterministic way, without any active role of quantum measurement. This must be ascribed to an incomplete physical representation of the problem. In Section III, we had a problem that implicitly defined its solution, whereas this mathematical fact was physically represented by the quantum measurement of an entangled state.
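As a quick numerical check of steps (a)–(d) above, before completing the physical representation (our own illustration; only the amplitudes of register $`a`$ are tracked, register $`v`$ entering through the phase kickback $`(-1)^{f_k(x)}`$):

```python
import numpy as np

# One oracle run of Deutsch's algorithm for each mode k, tracking register a.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
fs = {"00": [0, 0], "01": [0, 1], "10": [1, 0], "11": [1, 1]}

for k, f in fs.items():
    a = H @ np.array([1.0, 0.0])                         # (|0> + |1>)/sqrt(2)
    a = np.array([(-1) ** f[x] for x in (0, 1)]) * a     # phase kickback
    a = H @ a                                            # final Hadamard
    outcome = int(np.abs(a[1]) ** 2 > 0.5)               # deterministic [a]
    print(f"k = {k}: a-amplitudes = {a}, measured [a] = {outcome}")
# [a] = 0 for the unbalanced modes k = 00, 11 and [a] = 1 for the balanced
# modes k = 01, 10, with a single oracle run.
```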
This obviously requires that the problem is physically represented (in Sections IV.A and IV.B, all knowledge of the function and all ignorance about $`r`$ were physically represented in a superposition of the form (1)), whereas presently an essential part of it, the Sphinx choosing the oracle mode, is not. First, we shall follow a most simple way of completing the physical representation. The Sphinx’ random selection of the oracle mode will be performed through a suitable quantum measurement, after having run the algorithm. We introduce the extended gate $`F(k,x)`$ which computes the function $`F(k,x)=f_k\left(x\right)`$ for all $`k`$ and $`x`$. This gate has an ancillary input register $`m`$ ($`m`$ for mode) which contains $`k`$, namely the oracle mode \[Figure 4(b) gives the extended algorithm\]. This input is identically repeated in a corresponding output – to keep gate reversibility. Of course, Oedipus is forbidden to access register $`m`$. The preparation becomes

$`|\phi ,t_0_{mav}=\frac{1}{\sqrt{2}}|00_m|0_a\left(|0_v-|1_v\right).`$

After performing Hadamard on $`m`$ and $`a`$ we obtain:

$$|\phi ,t_1_{mav}=\frac{1}{4}\left(|00_m+|01_m+|10_m+|11_m\right)\left(|0_a+|1_a\right)\left(|0_v-|1_v\right).$$ (11)

Performing Hadamard on $`|00_m`$ is a way of preparing the Sphinx’ random selection of an oracle mode (as will become clear). Let us go directly to the state before the first measurement – see Fig. 4(b):

$$|\phi ,t_3_{mav}=\frac{1}{2\sqrt{2}}\left[\left(|00_m-|11_m\right)|0_a+\left(|01_m-|10_m\right)|1_a\right]\left(|0_v-|1_v\right).$$ (12)

It can be seen that the entangled state (12) represents the mutual definition between the Sphinx’ choice $`k`$ and Oedipus’ answer $`\mathcal{B}\left(f_k\right)`$. The former implicit definition of the problem solution appears here in the form of the mutual definition of the moves of the two players. Reaching state (12) with quantum parallel computation still requires one oracle run. The action of measuring $`\left[m\right]`$ in state (12), equivalent to the Sphinx’ choice of the oracle mode, by bringing in and satisfying equations (7–9) (with the “state before measurement” $`|\phi ,t_2_{av}`$ of Section III replaced by $`|\phi ,t_3_{mav}`$), transforms mutual definition into correlation between individual outputs (like in an EPR situation). In other words, the Sphinx’ choice of $`k`$ simultaneously determines Oedipus’ answer $`\mathcal{B}\left(f_k\right)`$ – retrievable by measuring $`\left[a\right]`$. In the classical framework instead, the Sphinx’ choice should necessarily be propagated to Oedipus’ answer by means of an algorithm, in fact through the computation of $`\mathcal{B}\left(f_k\right)`$ (requiring two oracle runs). Achieving the speed-up still involves the interplay between the reversible preparation of an entangled state before measurement and a final measurement action, namely dual influence – here of an EPR kind.

It should be noted that the above “complete physical representation” is not the original Deutsch’s algorithm. This can readily be fixed. To this end, the Sphinx must randomly select the mode before giving the oracle – i.e. the quantum gate $`F(k,x)`$ – to Oedipus. This means that Oedipus receives the oracle in an input state randomly selected among four possible quantum states, corresponding to the modes $`k=00,01,10,11.`$ This is indistinguishable from a mixture.
Therefore, the preparation at time $`t_1`$ becomes:

$`|\phi ,t_1_{mav}=\frac{1}{4}\left(|00_m+e^{i\delta _1}|01_m+e^{i\delta _2}|10_m+e^{i\delta _3}|11_m\right)\left(|0_a+|1_a\right)\left(|0_v-|1_v\right),`$

where $`\delta _1`$, $`\delta _2`$ and $`\delta _3`$ are independent random phases – this is the random phase representation of a mixture. After $`t_1`$, the algorithm goes on as before, yielding

$`|\phi ,t_3_{mav}=\frac{1}{2\sqrt{2}}\left[\left(|00_m-e^{i\delta _3}|11_m\right)|0_a+\left(e^{i\delta _1}|01_m-e^{i\delta _2}|10_m\right)|1_a\right]\left(|0_v-|1_v\right).`$

Clearly, the roles of entanglement, quantum measurement and dual influence remain unaltered.

### D An instance of Grover’s algorithm

The rules of the game are the same as before. This time we have the set of the $`2^n`$ functions $`f_k:B^n\to B`$ such that $`f_k\left(x\right)=\delta _{k,x}`$, where $`\delta `$ is the Kronecker symbol. We shall consider the simplest instance $`n=2`$. This yields four functions $`f_k\left(x\right)`$, labeled by $`k=0,1,2,3`$. Figure 5(a) gives Grover’s algorithm (in the standard version provided in Ref. ) for $`n=2`$. Let us assume the Sphinx has chosen $`k=2`$. The preparation is $`\frac{1}{\sqrt{2}}|0_a\left(|0_v-|1_v\right)`$. Without entering into detail, the state before measurement is: $`\frac{1}{\sqrt{2}}|2_a\left(|0_v-|1_v\right)`$. Measuring $`\left[a\right]`$ deterministically yields Oedipus’ answer. This is more efficient than classical computation, where three oracle runs are required to find the solution with certainty, whereas in Grover’s algorithm two runs are enough – Fig. 5(a).

The extended algorithm is given in Fig. 5(b). The preparation becomes $`\frac{1}{\sqrt{2}}|0_m|0_a\left(|0_v-|1_v\right)`$; the state before measurement becomes: $`\frac{1}{2\sqrt{2}}\left(|0_m|0_a+|1_m|1_a+|2_m|2_a+|3_m|3_a\right)\left(|0_v-|1_v\right)`$. Again, we have the mutual definition of the Sphinx’ choice and Oedipus’ answer. Measuring $`\left[m\right]`$ selects the Sphinx’ choice and Oedipus’ answer at the same time, as in the previous oracle problem.

## V Conclusions

Quantum computation is concerned with the efficient solution of numerical algebraic problems. We shall first summarize the main results of this work. We have shown that the action of measuring an observable in a suitably entangled state introduces and satisfies a system of algebraic equations. In all existing quantum algorithms, this system represents the problem that algebraically defines its solution. Moreover, measurement time is independent of entanglement. This justifies the quantum speed-up in all types of quantum algorithms found so far. Quantum computation turns out to be an entirely new paradigm (extraneous to the notion of sequential computation) where there is identity between the algebraic definition of a solution and its physical determination. The capability of directly solving a system of algebraic equations – without having to execute an algorithm, which would be necessary in the classical framework – is related to the feature that the determination of the measurement outcome is dually influenced by both the reversible initial actions, leading to the state before measurement, and the logical-mathematical constraints introduced by the final measurement action. Dual influence is extraneous to the notion of sequential process, namely of dynamical, one-way propagation.
Although our explanation of the speed-up appears a-posteriori to be simple and evident, it is likely to displace rather common views. In the first place, it is reasonable to assume that quantum algorithms are commonly thought to be, in fact, algorithms, namely the quantum transposition of sequential Turing machine computation. In the light of the results of this work, this way of thinking would be a classical vestige, ruling out the active role of quantum measurement and dual influence. In the second place, there is a widespread belief that quantum theory can do without the measurement problem. In other words, the fact that the mutual exclusivity of the possible measurement outcomes comes from an ad-hoc reinterpretation of a state superposition (of a mixture, in decoherence theory) would be a price paid once and for all. There would be no further consequences on quantum theory. In contrast with this, we have highlighted a striking consequence in the context of quantum computation. Here the “reinterpretation” implies dual influence, which yields a completely observable speed-up.

These appear to be important clarifications provided by this work. On the one hand, the notion of dual influence, with its striking consequence, might lend itself to further development at a fundamental level. On the other hand, having ascertained that quantum algorithms are more than sequential computation might open the way to unforeseen prospects in the quest for new forms of computation. For example, quantum measurement of an observable in an entangled state is a projection on a Hilbert subspace subject to certain constraints whose satisfaction amounts to efficiently solving a problem. In some respects, this feature is similar to the projections due to particle statistics symmetrizations. Therefore, investigating the possibility of exploiting such symmetrizations in problem solving could be an interesting prospect. Refs. , provide still abstract attempts in this direction. More generally, this work highlights the essential role played by non-dynamical effects in quantum computation. Let us mention in passing that a form of quantum computation which is of geometric rather than dynamical origin has recently been provided . This concretely shows that there are ways of getting out of the usual quantum computation paradigm.

Thanks are due to T. Beth, A. Ekert, D. Finkelstein and V. Vedral for stimulating discussions and valuable comments.
# Electron delocalization and multifractal scaling in electrified random chains

## Abstract

The electron localization properties of a random chain under the influence of a constant electric field have been studied. We have adopted the multifractal scaling formalism to explore the possible localization behavior in the system. We observe that the change in localization behavior with increasing electric field is not systematic and shows strong instabilities associated with the local probability variation over the length of the chain. The multifractal scaling study captures these localization aspects, along with a strong instability when the electric field is changed by infinitesimal steps, for a reasonably large system size.

KEYWORDS: localization, multifractal scaling, random chain

Electronic states are exponentially localized in one-dimensional (hereafter $`1D`$) random chains, and the envelope of the wave function behaves as $`\varphi (x)\sim \mathrm{exp}(-\alpha x)`$ for $`x\to \mathrm{\infty }`$, where $`\alpha `$ is the inverse of the localization length. This localized nature of the electronic states can be changed through the application of a constant electric field, and as a result the electronic states exhibit a rather different localized character over the sample. The problem of electronic states and localization in a random chain in the presence of a constant electric field is still a matter of controversy and is yet to be fully understood. In the past, the possibility of the existence of non-exponential localization or of a localization/delocalization transition in an electrified chain has been addressed. A deviation from exponential localization has also been claimed through numerical studies of the electronic transmittance. However, the localization mechanism can be understood more rigorously within a multifractal scaling analysis of the electronic wave functions, without any a priori assumption of an exponential localization nature or of the existence of a localization length. The multifractal scaling formalism has been invoked in the recent past to analyze the nature of electronic states in the vicinity of the mobility edge, and also for the characterization of the critical nature of electronic states in $`1D`$ Fibonacci quasiperiodic systems. Generally speaking, in all of the above examples, the wave functions exhibit a rather involved oscillatory behavior displaying strong fluctuations. As a consequence, the notion of an envelope wave function or Lyapunov exponent, which has been successful in studying both extended and exponentially localized states, is no longer suitable for the states in the examples above. On the other hand, the multifractal scaling formalism has been found to be very useful for characterizing the spatial fluctuating pattern of wave functions which are neither Bloch-like homogeneously extended states nor exponentially decaying ones, and the same scaling analysis extrapolates successfully to these extreme limits as well. So, it naturally appears that the possible delocalization behavior that we are going to address in this Letter can also be understood through the same scaling analysis. The aim of the present article is to report our investigation of how the localization nature is influenced by switching on a constant electric field, through a numerical study of the electronic wave functions for a reasonably large array of $`\delta `$-function random potentials having a bi-modal distribution.
The choice of this type of potential has been justified in the past both from its experimental relevance and from pure academic interest. We start with the Schrödinger equation for the electrons in a random chain in the presence of a constant electric field:

$$\left[-\frac{d^2}{dx^2}+\underset{n=1}{\overset{N}{\sum }}V_n\delta (x-na)-Fx\right]\mathrm{\Psi }(x)=E\mathrm{\Psi }(x)$$ (1)

where the units are such that $`(\mathrm{\hbar }^2=2m_e=1)`$, $`m_e`$ is the effective mass of the electron. The electric-field-induced force $`F`$ is expressed in units of $`\left(\frac{\mathrm{\hbar }^2}{2m_ea^3}\right)`$, $`a`$ is the lattice spacing and $`V_n`$ is the strength of the $`n`$-th potential barrier (taking the value $`V_A`$ or $`V_B`$ randomly); $`F`$ is the product of the electric field and the electronic charge. The lattice constant $`a`$ is taken as unity throughout this calculation. One can map the above Eq. (1) to a finite difference equation by approximating the potential $`Fx`$ by a step function in between the $`\delta `$-functions. Within this approximation, the solutions in between the $`\delta `$-function potentials are now plane waves instead of Airy functions. The corresponding Poincaré map is:

$$\mathrm{\Psi }_{n+1}=A_n\mathrm{\Psi }_n+B_n\mathrm{\Psi }_{n-1}$$ (2)

The coefficients $`A_n`$ and $`B_n`$ are given by

$$A_n=\left[\mathrm{cos}k_{n+1}+\frac{k_n}{k_{n+1}}\frac{\mathrm{sin}k_{n+1}}{\mathrm{sin}k_n}\mathrm{cos}k_n+V_n\frac{\mathrm{sin}k_{n+1}}{k_{n+1}}\right]$$ (3)

$$B_n=-\frac{k_n}{k_{n+1}}\left(\frac{\mathrm{sin}k_{n+1}}{\mathrm{sin}k_n}\right)$$ (4)

with $`k_n=(E+nF)^{1/2}`$ and $`\mathrm{\Psi }_n=\mathrm{\Psi }(x=n)`$. Now, in order to solve the equation iteratively for a reasonably large system size, one can consider the initial values $`\mathrm{\Psi }_1=\mathrm{exp}(ıE^{1/2}a)`$ and $`\mathrm{\Psi }_2=\mathrm{exp}(2ıE^{1/2}a)`$, $`E`$ being the incident electron energy before it reaches the region where the electric field is applied. The transmittance corresponding to the array of random $`\delta `$-function potentials is given by

$$T=\left(\frac{k}{k_1}\right)\left(\frac{|\mathrm{exp}(2ık_1)-1|^2}{|\mathrm{\Psi }_{N+2}-\mathrm{\Psi }_{N+3}\mathrm{exp}(ık_1)|^2}\right)$$ (5)

with

$$k=E^{1/2}\text{and}k_1=(E+FL)^{1/2}$$

and $`L=Na`$. We now analyze the pattern of the local probability density $`|\mathrm{\Psi }_n|^2`$ along the chain through the multifractal scaling relation between $`\alpha `$ and $`f(\alpha )`$, where $`\alpha `$ stands for the scaling exponent and $`f(\alpha )`$ for the corresponding distribution function. We have used the mathematical prescription suggested by Chhabra and Jensen for its simplicity and its success in correctly evaluating the quantities $`\alpha `$ and $`f(\alpha )`$ directly through the normalized measure, without any numerical instability. Let us define the required normalized measure in our study by

$$P_i=\frac{|\mathrm{\Psi }_i|^2}{\sum _{i=1}^N|\mathrm{\Psi }_i|^2}$$

where the scaling behavior is $`P_i\sim N^{-\alpha _i}`$ for $`N\to \mathrm{\infty }`$. According to Chhabra and Jensen, if we define the $`q`$-th moment of the probability measure $`P_i`$ by $`\mu _i(q,N)`$, where

$$\mu _i(q,N)=\frac{P_i^q}{\sum _{i=1}^NP_i^q}$$

then a complete characterization of the fractal singularities can be made in terms of $`\mu _i(q,N)`$.
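The iteration of the Poincaré map (2)–(4) and the evaluation of the transmittance (5) are straightforward to code. The following sketch is only illustrative – the values of $`V_A`$, $`V_B`$, $`E`$, $`F`$, the system size and the random seed are ours, not those used for the figures:

```python
import numpy as np

# Iterate the Poincare map, Eqs. (2)-(4), for a bimodal random chain with a
# constant field, then evaluate the transmittance, Eq. (5).  a = 1 units.
rng = np.random.default_rng(1)
Nsites, E, F = 2000, 1.5, 1.25e-5
V = np.zeros(Nsites + 4)
V[1:Nsites + 1] = rng.choice([0.5, 1.5], size=Nsites)   # V_A or V_B at each site

def k_at(n):
    return np.sqrt(E + n * F)

psi = np.zeros(Nsites + 4, dtype=complex)
psi[1] = np.exp(1j * np.sqrt(E))          # initial conditions of the text
psi[2] = np.exp(2j * np.sqrt(E))
for n in range(2, Nsites + 3):
    kn, kn1 = k_at(n), k_at(n + 1)
    A = (np.cos(kn1) + (kn / kn1) * np.sin(kn1) / np.sin(kn) * np.cos(kn)
         + V[n] * np.sin(kn1) / kn1)
    B = -(kn / kn1) * np.sin(kn1) / np.sin(kn)
    psi[n + 1] = A * psi[n] + B * psi[n - 1]

k, k1 = np.sqrt(E), np.sqrt(E + F * Nsites)
T = ((k / k1) * np.abs(np.exp(2j * k1) - 1) ** 2
     / np.abs(psi[Nsites + 2] - psi[Nsites + 3] * np.exp(1j * k1)) ** 2)
print("transmittance T =", T)
```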
The expression for the distribution function of the scaling exponent $`\alpha `$ can be written as

$$f(\alpha )=-\underset{N\to \mathrm{\infty }}{lim}\frac{1}{\mathrm{log}N}\underset{i=1}{\overset{N}{\sum }}\mu _i(q,N)\mathrm{log}\mu _i(q,N)$$ (6)

and the corresponding singularity strength of the measure is obtained by

$$\alpha =-\underset{N\to \mathrm{\infty }}{lim}\frac{1}{\mathrm{log}N}\underset{i=1}{\overset{N}{\sum }}\mu _i\mathrm{log}P_i.$$ (7)

One can infer the nature of the electronic states for large $`N`$ based on the following observations:

1. Extended nature: $`\alpha _{min}\to 1`$, $`f(\alpha _{min})\to 1`$, $`\alpha _{max}\to 1`$, $`f(\alpha _{max})\to 1`$.

2. Localized nature: $`\alpha _{min}\to 0`$, $`f(\alpha _{min})\to 0`$, $`\alpha _{max}\to \mathrm{\infty }`$, $`f(\alpha _{max})\to 1`$. This property is usually manifested by a rectangular two-hump form of the $`f(\alpha )`$ curve with a sparse distribution of points in between.

3. Critical nature: the $`\alpha `$ vs $`f(\alpha )`$ curves closely overlap one another with increasing system size.

4. Power-law nature: the right portion of the $`\alpha `$–$`f(\alpha )`$ curves deviates slowly from one another with increasing system size, in contrast to the strong deviation seen for exponential decay.

We now define, in a simple way, the degree of localization. We consider $`\mathrm{\Delta }_\alpha =(\alpha _{max}-\alpha _{min})`$ as a measure of the degree of localization. This can clearly distinguish an extended state from a localized state for a sufficiently large system size $`N`$. Also, one can investigate the change in the nature of the states brought about by the change of some external parameter, e.g., the electric field.

In figure (1) we have shown the spatial pattern of the local probability variation along the chain for a localized state, for both zero and finite electric field. The upper curve exhibits the delocalized pattern in the presence of an electric field $`F=1.25\times 10^{-5}`$ unit. The corresponding localized and delocalized behaviors of the electronic transmittance data are presented, for both zero and finite electric field, in figure (2). In figure (3), we have shown $`(\alpha ,f(\alpha ))`$ plots for both zero field and the finite field value $`2\times 10^{-5}`$. In the zero-field case the plot shows a two-hump form of $`f(\alpha )`$ corresponding to exponential localization. This is due to the fact that the $`(\alpha ,f(\alpha ))`$ spectrum is densely populated on the extreme left and on the right, while the region in between is very sparse. On the other hand, for finite electric field, the $`(\alpha ,f(\alpha ))`$ data gather in a relatively narrow region on a curve having a convex shape. This indicates that the change in localization behavior shows an overall delocalization trend, which is exhibited through the greater spatial extension of the state over the sample and hence results in the contraction of the $`\alpha `$–$`f(\alpha )`$ spectrum. Next we investigate further whether this delocalization pattern changes systematically as we vary the electric field through small steps of the order of $`10^{-8}`$ unit for a sufficiently large system size. In figure (4) we have shown the plot of the degree of localization $`\mathrm{\Delta }_\alpha `$ against the electric field for two large system sizes, $`N=10^5`$ (upper curve) and $`N=2\times 10^5`$ (lower curve). In both plots we see that $`\mathrm{\Delta }_\alpha `$ exhibits strong instability throughout the whole regime of electric field, from very low values up to as high as $`10^{-5}`$ unit.
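The $`\mathrm{\Delta }_\alpha `$ diagnostic used in these figures is easy to reproduce in outline. Below is a self-contained sketch of the Chhabra–Jensen evaluation of Eqs. (6) and (7), applied to synthetic probability profiles standing in for $`|\mathrm{\Psi }_n|^2`$ (our own illustration; it does not reproduce the parameters of the figures):

```python
import numpy as np

def chhabra_jensen(P, qs):
    # Direct evaluation of Eqs. (6)-(7): mu_i(q) = P_i^q / sum_j P_j^q,
    # f = -(1/log N) sum mu log mu,  alpha = -(1/log N) sum mu log P.
    logN = np.log(len(P))
    spec = []
    for q in qs:
        mu = P ** q
        mu /= mu.sum()
        f = -np.sum(mu * np.log(mu)) / logN
        a = -np.sum(mu * np.log(P)) / logN
        spec.append((a, f))
    return np.array(spec)

N = 4096
i = np.arange(1, N + 1)
for label, amp2 in [("extended ", np.ones(N)),
                    ("localized", np.exp(-0.02 * np.abs(i - N // 2)))]:
    P = amp2 / amp2.sum()
    spec = chhabra_jensen(P, np.linspace(-5, 5, 21))
    print(label, "Delta_alpha =", spec[:, 0].max() - spec[:, 0].min())
# The extended profile gives Delta_alpha ~ 0 (alpha ~ 1); the exponentially
# localized one gives a broad spread, mirroring criteria 1-2 of the text.
```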
We have observed that, for both system sizes, the deviation in $`\mathrm{\Delta }_\alpha `$ from its value at the previous step is of the order of unity, whereas the value of $`\mathrm{\Delta }_\alpha `$ itself is of the same order. Also, the fluctuating pattern of $`\mathrm{\Delta }_\alpha `$ with $`F`$ is almost the same for the two system sizes. In figure (5) we have shown again the variation of the degree of localization $`\mathrm{\Delta }_\alpha `$ with the electric field, for a different set (cf. figure 4) of potential parameters. Here the order of the fluctuations in $`\mathrm{\Delta }_\alpha `$ for two large systems, i.e., $`75\times 10^3`$ and $`1.7\times 10^4`$ atoms, is presented. The fluctuations appear to be quite significant, being nearly unity in both cases over a wide region of the electric field, as shown in the figure by the fluctuating zone enclosed between the horizontal lines. We think this kind of instability has its intrinsic origin in the restructuring of the different states in a complicated manner, each of them being highly sensitive even to an infinitesimal change in the applied electric field. This can also be understood as due to the competing nature of the potential $`Fn`$ and the disordered $`\delta `$-potentials at large system size. However, if one neglects this fluctuation through some brute-force averaging, one sees only an apparently simple delocalization effect with increasing electric field.

In conclusion, we have shown that the change of localization behavior due to the increase of a uniform electric field is not simply an overall delocalization; rather, the localization property is very sensitive to an infinitesimal change of the electric field, giving rise to a strong instability in the degree of localization. This instability in the localization/delocalization behavior is present for all reasonably large lengths, for an appropriate set of parameters, and is due to the combined effects of the disorder potential and the electric-field-induced linear potential. At sufficiently large length scales the states change depending upon the restructuring of the spectrum from its previous form, and hence the localization property of a particular state in a given field will change drastically, giving rise to a state of modified localized nature.

P. Biswas would like to thank the Council of Scientific and Industrial Research (CSIR) for financial assistance in the form of a senior research fellowship.
# Gravitational production of gravitinos

## I Introduction

It was realized early on that the gravitino could pose a serious cosmological problem in the context of a hot Big-Bang, if it were once in thermal equilibrium. An unstable gravitino, for instance, would decay in the post-Big-Bang nucleosynthesis era if its mass $`m_{3/2}\lesssim 10^4`$GeV, and the entropy produced would ruin the successes of Big-Bang nucleosynthesis. Similarly, if the gravitino were stable, as e.g., in gauge mediated supersymmetry breaking, its energy density would eventually overclose the Universe if its mass $`m_{3/2}\gtrsim 2`$keV. A solution to this problem was brought forward in Ref. : if inflation took place, the gravitinos present at the time of Big-Bang nucleosynthesis were created during reheating, in an abundance possibly much smaller than that corresponding to thermal equilibrium. Cosmological constraints on their abundance could then be turned into useful upper limits on the reheating temperature $`T_\mathrm{R}`$, typically $`T_\mathrm{R}\lesssim 10^8`$–$`10^{10}`$GeV for an unstable gravitino with $`m_{3/2}\lesssim 3\times 10^3`$GeV, or $`T_\mathrm{R}\lesssim 10^2`$–$`10^{10}`$GeV for a stable gravitino with $`1\mathrm{keV}\lesssim m_{3/2}\lesssim 1\mathrm{GeV}`$.

These studies assume that the gravitino abundance has been exponentially suppressed during inflation, and that gravitinos were only created in particle interactions during reheating. However, particles can be produced out of the vacuum in a non-static gravitational background if their coupling to the gravitational field is not conformal. A well-known case is the production of gravitational waves or scalar density perturbations during inflation. As we argue below, a massive gravitino is not conformally invariant in a Friedmann–Robertson–Walker (FRW) background, and our main objective is thus to quantify the number density of gravitinos that can be produced gravitationally during inflation. We will consider a generic inflation scenario within $`N=1`$ supergravity, and briefly discuss the more particular case of pre-Big Bang cosmology.

The cosmological consequences of gravitino production during inflation were briefly discussed in Ref. . These authors did not actually study the gravitational production of gravitinos, and rather focused on the spin 0 and spin 1/2 cases, as they were interested in the problem of moduli and modulini fields. With regard to the gravitino, they assumed that one particle would be produced per quantum state for modes with comoving wavenumber $`k\lesssim H_\mathrm{I}`$, where $`H_\mathrm{I}`$ denotes the Hubble scale of inflation. This estimate showed that the gravitational production of gravitinos could pose a cosmological problem if the energy scale $`V^{1/4}`$ at which inflation takes place saturates its observational upper bound, i.e. $`V^{1/4}\simeq 10^{16}`$GeV, and in this respect it further justifies the present work. Our study is more specialized than that of Ref. , as we concentrate exclusively on the gravitino. However, it is also more systematic and more detailed, as we derive and solve the gravitino field equation to provide quantitative estimates of the number density of gravitinos produced. We also examine different cases for the magnitude and dynamics of the gravitino effective mass term during and after inflation. Finally, we also study the effect of a finite duration of the transition between inflation and reheating, using numerical integration of the field equation.
This effect is important, as this timescale defines the “degree of adiabaticity” of the transition, and indeed the number density of gravitinos produced is found to be inversely proportional to it. The study of the conformal behavior of the gravitino also bears interest of its own, apart from any application to cosmology; to our knowledge, the quantization of quantum fields in curved space-time has been examined for spins 0, 1/2, 1 and 2, but not $`3/2`$ (although the case of a massless gravitino in a perfect fluid cosmology was studied in Ref. ). In the present work, we focus on the helicity 3/2 modes of the gravitino. The field equations and the quantization of the helicity 1/2 modes are indeed more delicate, due to the presence of constraints. These constraints vanish identically if supersymmetry is unbroken; in broken supersymmetry, these constraints do not vanish, but do not induce any inconsistency. As shown below, these constraints apply to the modes of helicity 1/2, not to those of helicity 3/2, and for this reason, we leave the problem of the helicity 1/2 modes open for a further study; nonetheless, we present these field equations and their constraints. The number density of helicity 3/2 gravitinos produced during inflation that we derive in this paper should thus be interpreted as a lower limit. Quite probably, however, the number density of helicity 1/2 gravitinos should be of the same order as that of helicity 3/2, and the results correct within a factor of order 2.

This paper is organized as follows. In Section II, we derive the gravitino field equation for the $`\pm 3/2`$ helicity modes, and in Section III, we calculate the number density of gravitinos produced in a generic inflation scenario. We summarize our conclusions and briefly discuss the case of pre-Big-Bang string cosmology in Section IV. All throughout this paper, we use natural units $`\mathrm{\hbar }=c=m_{\mathrm{Pl}}=1`$, where $`m_{\mathrm{Pl}}\equiv (8\pi G)^{-1/2}`$ is the reduced Planck mass. We note $`M_{\mathrm{Pl}}\equiv \sqrt{8\pi }m_{\mathrm{Pl}}`$ the Planck mass. Furthermore, we restrict ourselves to a FRW background, whose metric is written as: $`\mathrm{d}s^2=g_{\mu \nu }\mathrm{d}x^\mu \mathrm{d}x^\nu =a^2(\eta )(-\mathrm{d}\eta ^2+\mathrm{d}x^2+\mathrm{d}y^2+\mathrm{d}z^2)`$, where $`a(\eta )`$ is the scale factor, and $`\eta `$ denotes conformal time; the Minkowski metric is written $`\eta _{ab}`$. We also use standard conventions on the derivatives of the Kähler potential $`G(z_i,z_i^{*})`$ with respect to the scalar components $`z_i`$ of chiral superfields: $`G^i\equiv \partial G/\partial z_i`$, $`G_i^{*}\equiv \partial G/\partial z^i`$. Other notations, relative to the Dirac matrices, are given in the Appendix.

## II Field equation

We consider the gravitino in a background of a classical FRW spacetime, in the context of $`N=1`$ supergravity, and adopt the following lagrangian density:

$$\mathcal{L}=-\frac{e}{2}R-\frac{i}{2}ϵ^{\mu \nu \rho \sigma }\overline{\mathrm{\Psi }}_\mu \gamma _5\gamma _\nu D_\rho \mathrm{\Psi }_\sigma +\frac{e}{2}e^{G/2}\overline{\mathrm{\Psi }}_\mu \sigma ^{\mu \nu }\mathrm{\Psi }_\nu +\mathcal{L}_\mathrm{m}.$$ (1)

In this equation, $`e`$ represents the determinant of the vierbein $`e_\mu ^a`$, $`R`$ denotes the Ricci scalar, $`\mathrm{\Psi }_\mu `$ the gravitino field, and $`\mathcal{L}_\mathrm{m}`$ represents external matter, more specifically the scalar fields whose dynamics drive the evolution of the background metric: we neglect the matter gauge and fermion fields.
The gravitino covariant derivative $`D_\rho `$ is defined as:

$$D_\rho =\partial _\rho +\frac{1}{4}\omega _\rho ^{ab}\sigma _{ab}-\frac{1}{4}\gamma _5\lambda _\rho ,$$ (2)

where $`\omega _\rho ^{ab}(e)`$ is the spin connection, in which we do not include $`\mathrm{\Psi }`$ torsion, since we neglect the backreaction of the gravitino on the metric. We included in this covariant derivative the Kähler connection $`\lambda _\rho =K^i\partial _\rho z_i-K_i^{*}\partial _\rho z_i^{*}`$, where $`K`$ denotes the Kähler function. The gravitino is also coupled to matter through the Kähler potential $`G(z,z^{*})=K(z,z^{*})+\mathrm{ln}\left(|W(z)|^2\right)`$, with $`W`$ the superpotential. This term gives rise to an effective mass for the gravitino, which we write as $`m`$: $`m\equiv e^{G/2}`$. The gravitino field equation can be written in the compact notation :

$$R^\mu =ϵ^{\mu \nu \rho \sigma }\gamma _5\gamma _\nu 𝒟_\rho \mathrm{\Psi }_\sigma =0,$$ (3)

where $`𝒟_\rho \equiv D_\rho +\frac{1}{2}m\gamma _\rho `$. As is well known, a consistency condition can be obtained by taking the divergence of Eq. (3), $`𝒟_\mu R^\mu =0`$, which leads, after some manipulations, to:

$$\left[3m^2\gamma ^\nu -G_\mu ^\nu \gamma ^\mu +2\partial _\mu m\sigma ^{\mu \nu }+2m\gamma _5\lambda _\mu \sigma ^{\mu \nu }\right]\mathrm{\Psi }_\nu =0,$$ (4)

where $`G_{\mu \nu }`$ is the Einstein tensor, symmetric in the absence of torsion. In this equation, we did not include a term of the form $`ϵ^{\mu \nu \mathrm{\dots }}\partial _\mu \lambda _\nu \mathrm{\dots }`$, since it vanishes in a homogeneous and isotropic background. We now define: $`\mathcal{R}_\mu =R_\mu -\frac{1}{2}\gamma _\mu \gamma ^\nu R_\nu `$, and rewrite the field equation Eq. (3), in an equivalent way, as $`\mathcal{R}_\mu =0`$:

$$\mathcal{R}_0=\left(\gamma ^\nu \partial _\nu +m+\frac{3}{2}\mathcal{H}\gamma ^0+\frac{1}{2}\gamma _5\gamma ^0\lambda _0\right)\mathrm{\Psi }_0-\left(\partial _0-\frac{1}{2}m\gamma _0+\mathcal{H}+\frac{1}{2}\gamma _5\lambda _0\right)\gamma ^\nu \mathrm{\Psi }_\nu =0,$$ (6)

$$\mathcal{R}_i=\left(\gamma ^\nu \partial _\nu +m+\frac{\mathcal{H}}{2}\gamma ^0+\frac{1}{2}\gamma _5\gamma ^0\lambda _0\right)\mathrm{\Psi }_i-\left(\partial _i-\frac{1}{2}m\gamma _i+\frac{\mathcal{H}}{2}\gamma _i\gamma ^0\right)\gamma ^\nu \mathrm{\Psi }_\nu +\mathcal{H}\gamma _i\mathrm{\Psi }^0=0,$$ (7)

where $`\mathcal{H}\equiv a^{\prime }/a`$. Note that in a FRW background, $`\lambda _i=0`$, and $`\partial _j\lambda _\mu =0`$. Another integrability condition can be obtained from the difference $`\gamma ^0\mathcal{R}_0-\gamma ^i\mathcal{R}_i`$:

$$g^{ij}\partial _i\mathrm{\Psi }_j=\left(\gamma ^i\partial _i-m+\mathcal{H}\gamma ^0\right)\gamma ^j\mathrm{\Psi }_j.$$ (8)

Equations (6) and (7) and the constraints Eqs. (4) and (8) form the system of field equations for the gravitino. We now perform a standard decomposition of the gravitino field operator. We rescale the gravitino field, and write: $`\mathrm{\Psi }_\mu (x)=a(\eta )^{-3/2}e_\mu ^c\widehat{\mathrm{\Psi }}_c(\eta ,𝒌)e^{i𝒌\cdot 𝒙}`$; we recall that we reserve latin indices $`a,b,c\mathrm{\dots }`$, or a hat, if confusion could arise, for Lorentz indices, and $`e_\mu ^c=a(\eta )\delta _\mu ^c`$. Note also that $`\mathrm{\Psi }_\mu (x)`$ transforms with a conformal weight $`-1/2`$, and $`\widehat{\mathrm{\Psi }}_a(x)`$ transforms with a conformal weight $`-3/2`$.
Then, we decompose the spatial part of $`\widehat{\mathrm{\Psi }}_c(\eta ,𝒌)`$ into helicity eigenstates :

$$\widehat{\mathrm{\Psi }}_c(\eta ,𝒌)=\underset{m=\mathrm{L},+,-}{\sum }\underset{s=\pm }{\sum }C_{1,1/2}(m+\frac{s}{2};m,\frac{s}{2})ϵ_c^m(𝒌)\psi _{ms}(\eta ,𝒌),$$ (9)

where $`C_{1,1/2}(m+\frac{s}{2};m,\frac{s}{2})`$ is a Clebsch-Gordan coefficient, and $`\mathit{ϵ}^\mathrm{L}`$, $`\mathit{ϵ}^+`$, and $`\mathit{ϵ}^{-}`$ are polarization vectors. They satisfy $`\mathit{ϵ}^s\cdot \mathit{ϵ}^{s^{}*}=\delta _{ss^{}}`$, with $`s=\mathrm{L},+,-`$; in particular, $`\mathit{ϵ}^\mathrm{L}`$ is parallel to $`𝒌`$, $`\mathit{ϵ}^+`$ and $`\mathit{ϵ}^{-}`$ are transverse to $`𝒌`$, and $`\mathit{ϵ}^{+*}=-\mathit{ϵ}^{-}`$. Similarly, the spinors $`\psi _{ms}`$ are eigenstates of the helicity operator $`\mathrm{diag}(\mathit{ϵ}^\mathrm{L}\cdot 𝝈,\mathit{ϵ}^\mathrm{L}\cdot 𝝈)`$. More specifically, each spinor $`\psi _{ms}(\eta ,𝒌)`$ is written in terms of a Weyl spinor $`\chi _s(𝒌)`$ of helicity $`s/2`$, i.e. such that $`\mathit{ϵ}^\mathrm{L}\cdot 𝝈\chi _s=s\chi _s`$, and mode functions $`h_{ms}(\eta ,k)`$ and $`g_{ms}(\eta ,k)`$, where $`k\equiv |𝒌|`$, following the notations of the Appendix. In this decomposition, the vector-spinors $`\mathit{ϵ}^\mathrm{L}\psi _{\mathrm{L}+}`$, $`\mathit{ϵ}^\mathrm{L}\psi _{\mathrm{L}-}`$, $`\mathit{ϵ}^+\psi _{+-}`$, $`\mathit{ϵ}^{-}\psi _{-+}`$, and $`\mathrm{\Psi }_0`$ form the helicity $`\pm 1/2`$ components of $`\widehat{\mathrm{\Psi }}_c`$, while $`\mathit{ϵ}^+\psi _{++}`$ and $`\mathit{ϵ}^{-}\psi _{--}`$ are the helicity $`\pm 3/2`$ components.

The field equation for the helicity $`\pm 3/2`$ modes of the gravitino can now be extracted from Eqs. (6)–(7). To start with, one notes that the helicity $`\pm 3/2`$ components do not appear in the product $`𝜸\cdot 𝚿`$, because $`\mathit{ϵ}^\pm \cdot 𝜸`$ project out the modes with spinor helicity $`\pm `$: $`\mathit{ϵ}^\pm \cdot 𝝈\chi _\pm =0`$, and $`\mathit{ϵ}^\pm \cdot 𝝈\chi _{\mp }=\sqrt{2}\chi _\pm `$. Therefore, Eqs. (4), (6) and (8) only concern the helicity $`1/2`$ components, not the helicity $`3/2`$. The field equation for the $`\pm 3/2`$ helicity modes is then obtained by contracting Eq. (7) with $`(\mathit{ϵ}^{\mp }\cdot \widehat{𝜸})\,ϵ_b^{\mp }\eta ^{bc}`$. This contraction projects out all terms of helicity $`\pm 1/2`$, because these are either parallel to $`𝒌`$, or of the form $`\gamma _c\mathrm{}`$, and $`(\mathit{ϵ}^{\mp }\cdot \widehat{𝜸})(\mathit{ϵ}^{\mp }\cdot \widehat{𝜸})=\mathit{ϵ}^{\mp }\cdot \mathit{ϵ}^{\mp }=0`$. Thus the helicity $`\pm 3/2`$ components do not mix with the $`\pm 1/2`$ helicity modes, and their field equation reads:

$$\left(\widehat{\gamma }^0\partial _0+i\widehat{𝜸}\cdot 𝒌+am+\frac{1}{2}\gamma _5\widehat{\gamma }^0\lambda _0\right)\psi _{ss}=0,\qquad s=\pm .$$ (10)

Finally, this equation can be rewritten in the usual way as two systems of two linear and coupled differential equations in the mode functions $`h_{++}(\eta )`$, $`g_{++}(\eta )`$, and $`h_{--}(\eta )`$, $`g_{--}(\eta )`$. For $`m=0`$ and zero Kähler connection, it is easy to see that Eq. (10) is identical to the field equation for a massless gravitino in Minkowski spacetime; in other words, a massless and uncoupled helicity $`3/2`$ gravitino is conformally invariant. The gravitino field can now be quantized, following the methods developed for spin $`1/2`$ fermions in curved spacetime , or, similarly, for electrons in an external electromagnetic field .
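The helicity decomposition above is easy to check numerically. The following is a minimal sketch (not taken from the paper; the phase conventions for the polarization vectors and eigenspinors are one arbitrary but consistent choice) verifying that $`\mathit{ϵ}^\pm \cdot 𝝈`$ annihilates $`\chi _\pm `$ and maps $`\chi _{\mp }`$ onto $`\chi _\pm `$ with amplitude $`\sqrt{2}`$, as used in the projection argument:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def helicity_basis(k):
    """Longitudinal and helicity +/-1 polarization vectors for wavevector k."""
    eL = k / np.linalg.norm(k)
    trial = np.array([0., 0., 1.]) if abs(eL[2]) < 0.9 else np.array([1., 0., 0.])
    e1 = np.cross(eL, trial); e1 /= np.linalg.norm(e1)
    e2 = np.cross(eL, e1)                      # (e1, e2, eL) right-handed
    ep = -(e1 + 1j * e2) / np.sqrt(2)          # helicity +1 (spherical basis)
    em = (e1 - 1j * e2) / np.sqrt(2)           # helicity -1
    return eL, ep, em

def helicity_spinors(eL):
    """Eigenspinors chi_+ and chi_- of eL.sigma, with eigenvalues +1 and -1."""
    w, v = np.linalg.eigh(eL[0] * sx + eL[1] * sy + eL[2] * sz)
    return v[:, np.argmax(w)], v[:, np.argmin(w)]

k = np.array([0.3, -1.2, 0.7])
eL, ep, em = helicity_basis(k)
chi_p, chi_m = helicity_spinors(eL)
sig = lambda e: e[0] * sx + e[1] * sy + e[2] * sz

print(np.allclose(sig(ep) @ chi_p, 0))       # eps+ . sigma chi_+ = 0
print(np.linalg.norm(sig(ep) @ chi_m))       # = sqrt(2), up to a phase
print(np.isclose(abs(np.vdot(ep, em)), 0),   # orthonormality of the basis
      np.isclose(abs(np.vdot(ep, ep)), 1))
```

The zero and norm checks are insensitive to the arbitrary eigenvector phases returned by the diagonalization, which is why the sketch tests norms rather than components.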
Introducing the shorthand notation $`\widehat{\mathrm{\Psi }}_{c\pm 3/2}\equiv ϵ_c^\pm \psi _{\pm \pm }`$, it can be checked that the inner product $`\widehat{\mathrm{\Psi }}_{as}^{\dagger }\eta ^{ab}\widehat{\mathrm{\Psi }}_{bs}`$, where $`s=\pm 3/2`$, is conserved by virtue of the field equations. The solutions of Eq. (10) are normalized according to $`\psi _{ss}^{\dagger }\psi _{s^{}s^{}}=\delta _{ss^{}}`$, $`s,s^{}=\pm `$, and one obtains at all times:

$$\widehat{\mathrm{\Psi }}_{as}^{\dagger }\eta ^{ab}\widehat{\mathrm{\Psi }}_{bs^{}}=\delta _{ss^{}},$$ (12)

$$\widehat{\mathrm{\Psi }}_{as}^{\dagger }(\eta ,𝒌)\eta ^{ab}\widehat{\mathrm{\Psi }}_{bs^{}}^\mathrm{C}(\eta ,𝒌)=0,\qquad s,s^{}=\pm 3/2,$$ (13)

where the superscript $`\mathrm{C}`$ denotes charge conjugation. The helicity-$`3/2`$ gravitino field operator is written as:

$$\widehat{\mathrm{\Psi }}_a^{(3/2)}(x)=\int \frac{\mathrm{d}𝒌}{(2\pi )^{3/2}}\underset{s=\pm 3/2}{\sum }\left[b(𝒌)\widehat{\mathrm{\Psi }}_{as}(\eta ,𝒌)e^{i𝒌𝒙}+b^{\dagger }(𝒌)\widehat{\mathrm{\Psi }}_{as}^\mathrm{C}(\eta ,𝒌)e^{-i𝒌𝒙}\right],$$ (14)

where the $`b`$, $`b^{\dagger }`$ are annihilation and creation operators, respectively. They are related by hermitian conjugation, as the gravitino is a Majorana fermion. Finally, one can relate field operators $`\widehat{\mathrm{\Psi }}_{as}^{\mathrm{in}}(\eta ,𝒌)`$ and $`\widehat{\mathrm{\Psi }}_{as}^{\mathrm{out}}(\eta ,𝒌)`$, that are solutions of the field equation, and whose boundary conditions are respectively defined at conformal times $`\eta _{\mathrm{in}}`$ and $`\eta _{\mathrm{out}}`$, by means of a Bogoliubov transform :

$$\widehat{\mathrm{\Psi }}_{as}^{\mathrm{out}}(\eta ,𝒌)=\alpha _{ks}\widehat{\mathrm{\Psi }}_{as}^{\mathrm{in}}(\eta ,𝒌)+\beta _{ks}\widehat{\mathrm{\Psi }}_{as}^{\mathrm{in}\,\mathrm{C}}(\eta ,𝒌).$$ (15)

The Bogoliubov coefficients $`\alpha _{ks}`$ and $`\beta _{ks}`$ satisfy at all times $`|\alpha _{ks}|^2+|\beta _{ks}|^2=1`$, as required for a half-integer spin field. The occupation number operator for the in quantum state with momentum $`k`$ and helicity $`s`$, $`s=\pm 3/2`$, in the out vacuum, is then $`|\beta _{ks}|^2`$, and:

$$|\beta _{ks}(\eta )|^2=\left|h_{ss}^{\mathrm{in}}(\eta )g_{ss}^{\mathrm{out}}(\eta )-g_{ss}^{\mathrm{in}}(\eta )h_{ss}^{\mathrm{out}}(\eta )\right|^2.$$ (16)

In the following, we solve the field equations for the helicity $`\pm 3/2`$, and use Eq. (16) to calculate the number density of gravitinos produced.

## III Gravitational production of spin-$`3/2`$

We now assume that the background undergoes an era of inflation, followed by radiation or matter domination. The magnitude and the evolution of the gravitino mass term in both epochs are model-dependent, since the scalar potential $`V(z,z^{*})`$ is tied to the Kähler potential in a non-trivial way:

$$V(z,z^{*})=e^G\left[G_i^{*}(G^{-1})^{*i}{}_j{}G^j-3\right]+D\text{-terms}.$$ (17)

Nevertheless, it is well-known that scalar fields generically receive a contribution to their mass of order of the Hubble constant , and we adopt this as an ansatz for the gravitino mass term, i.e. $`m=\mu _1H`$ during inflation, and $`m=\mu _2H`$ during radiation/matter domination, where $`\mu _1`$ and $`\mu _2`$ are constant parameters and $`H`$ is the Hubble constant. During inflation, $`H=H_\mathrm{I}`$ is also assumed constant. Note that this ansatz may be realised rather generically in inflationary scenarios.
For instance , the superpotential $`\sqrt{\lambda }\varphi ^3`$ gives a potential $`\lambda \varphi ^4`$ (albeit in a global supersymmetry approximation), a gravitino mass $`\sqrt{\lambda }\varphi ^3/m_{\mathrm{Pl}}^2`$ (also neglecting Kähler terms), and a Hubble constant $`\sqrt{\lambda }\varphi ^2/m_{\mathrm{Pl}}`$. In this model of chaotic inflation, $`\varphi \sim M_{\mathrm{Pl}}`$ towards the end of slow-roll, and therefore $`m\sim H`$. Similarly, for new inflation types of models, the superpotential $`M^2(m_{\mathrm{Pl}}-\varphi )^2/m_{\mathrm{Pl}}`$ gives a scalar potential $`M^4`$ when $`\varphi \ll m_{\mathrm{Pl}}`$, a gravitino mass $`M^2/m_{\mathrm{Pl}}`$, and a Hubble constant $`M^2/m_{\mathrm{Pl}}`$. The quantities $`\mu _1`$ and $`\mu _2`$ above can take any value, and, presumably, $`\mu _1\lesssim 1`$ and $`\mu _2\lesssim 1`$ . Note that, strictly speaking, this ansatz is justified as long as $`\mu _{1,2}H>m_{3/2}`$, where $`m_{3/2}`$ denotes the mass of the gravitino in the true vacuum of broken supersymmetry; provided inflation takes place at an energy scale $`V^{1/4}>10^{11}\,\mathrm{GeV}\,(m_{3/2}/10^3\,\mathrm{GeV})^{1/2}`$, this relation should be satisfied for reasonable values of $`\mu _1`$ and $`\mu _2`$. If, however, $`V^{1/4}\lesssim 10^{11}\,\mathrm{GeV}\,(m_{3/2}/10^3\,\mathrm{GeV})^{1/2}`$, then according to the adiabatic theorem , the production of gravitinos will be exponentially suppressed. Nevertheless, for the sake of completeness, we also present results for the case where $`m`$ is constant during both inflation and radiation/matter domination.

For reasons similar to the above, one cannot write a generic Kähler connection $`\lambda _\rho `$ for a generic inflation scenario. It has actually been argued that if inflation is to proceed via the $`F`$-terms, the Kähler function $`K`$ should not have a minimal form . For simplicity, we thus assume that this term is zero. This is realized, for instance, in scenarios in which the dynamical scalar field is real. Moreover, as we argue in Section IV in the case of string cosmology, a non-zero Kähler connection in a homogeneous and isotropic background does not induce particle creation by itself. With these assumptions, the differential equations satisfied by the mode functions read:

$`g_{ss}^{\prime \prime }`$ $`-`$ $`\mathcal{H}g_{ss}^{}+\left(k^2+a^2m^2-is\mathcal{H}k\right)g_{ss}=0,`$ (19)

$`h_{ss}`$ $`=`$ $`-{\displaystyle \frac{is}{am}}\left(g_{ss}^{}+iskg_{ss}\right),\qquad s=\pm ,`$ (20)

where $`\mathcal{H}`$ has been redefined as $`(am)^{}/(am)`$. These equations can be solved in terms of Whittaker functions $`W_{\lambda ,i\mu _j\alpha _j}(z_j)`$ and $`W_{-\lambda ,i\mu _j\alpha _j}(-z_j)`$, $`j=1,2`$, where $`z_j\equiv 2ik\alpha _j|\eta _\mathrm{I}|\left(1+(1+\eta /|\eta _\mathrm{I}|)/\alpha _j\right)`$, $`\lambda =\pm 1/2`$, and $`\eta _\mathrm{I}`$ denotes the conformal time of exit of inflation: $`\eta _\mathrm{I}=-H_\mathrm{I}^{-1}`$, as we set $`a(\eta =\eta _\mathrm{I})\equiv 1`$. The subscript $`j=1,2`$ corresponds to the two eras, $`j=1`$ for inflation and $`j=2`$ for radiation/matter domination; $`\alpha _j`$ is defined by $`a(\eta )=\left(1+(1+\eta /|\eta _\mathrm{I}|)/\alpha _j\right)^{\alpha _j}`$, i.e. $`\alpha _1=-1`$ corresponding to de Sitter, and $`\alpha _2=1,2`$ corresponding respectively to radiation or matter domination. The in solution is defined as that which reduces to positive energy plane waves as $`\eta \to -\mathrm{}`$, and the out solution as that which reduces to positive energy plane waves as $`\eta \to +\mathrm{}`$.
Using the large-argument limit of Whittaker functions, one obtains :

$`g_{++}^{(j)}(\eta )`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{z_j}}}W_{1/2,i\mu _j\alpha _j}(z_j),`$ (22)

$`h_{++}^{(j)}(\eta )`$ $`=`$ $`{\displaystyle \frac{i\mu _j\alpha _j}{\sqrt{z_j}}}W_{-1/2,i\mu _j\alpha _j}(z_j),\qquad j=1,2.`$ (23)

In this equation, $`j=1`$ corresponds to the in solution for $`\eta <\eta _\mathrm{I}`$, and $`j=2`$ corresponds to the out solution. In the radiation or matter domination region, the in solution reads $`g_{++}=c_1W_{1/2,i\mu _2\alpha _2}(z_2)+c_2W_{-1/2,i\mu _2\alpha _2}(-z_2)`$, and the coefficients $`c_1`$ and $`c_2`$ can be obtained by matching $`g_{++}`$ and $`h_{++}`$ continuously with $`g_{++}^{(1)}(\eta <\eta _\mathrm{I})`$ and $`h_{++}^{(1)}(\eta <\eta _\mathrm{I})`$ of Eqs. (22)–(23) at $`\eta =\eta _\mathrm{I}`$. Finally, the solutions of helicity $`-3/2`$ are expressed in terms of the solutions of helicity $`+3/2`$: $`h_{--}=g_{++}`$ and $`g_{--}=h_{++}`$. Using Eq. (16), the asymptotic number of particles produced per quantum state, $`|\beta _{ks}(\eta \to +\mathrm{})|^2`$, then reads:

$$|\beta _{ks}(+\mathrm{})|^2=\frac{1}{|z_1||z_2|}\left|\mu _2\alpha _2W_{+1/2,i\mu _1}(z_1)W_{-1/2,i\mu _2\alpha _2}(z_2)+\mu _1W_{-1/2,i\mu _1}(z_1)W_{+1/2,i\mu _2\alpha _2}(z_2)\right|^2,$$ (24)

where $`z_1=-2ik|\eta _\mathrm{I}|`$ and $`z_2=2ik\alpha _2|\eta _\mathrm{I}|`$ are evaluated at the transition, and $`\alpha _2=1,2`$ depending on whether inflation is followed by radiation or matter domination. The limits $`\mu _1\to 0`$ or $`\mu _2\to 0`$ are non-singular, and reduce to the solutions one would obtain in either of these limits, even though Eqs. (19)–(20) read differently in these limits (the system decouples into two first-order uncoupled differential equations). The quantity of direct interest to us is $`Y_{3/2}`$, defined as the ratio of the number density of helicity-$`3/2`$ gravitinos to the entropy density at the time of reheating $`\eta _\mathrm{R}`$:

$$Y_{3/2}(\eta _\mathrm{R})=\frac{45}{4\pi ^4g_s}\left(\frac{a_\mathrm{I}H_\mathrm{I}}{a_\mathrm{R}T_\mathrm{R}}\right)^3\underset{s=\pm }{\sum }\int \mathrm{d}\stackrel{~}{k}\,\stackrel{~}{k}^2|\beta _{ks}|^2,$$ (25)

where $`g_s`$ is the effective number of degrees of freedom, $`g_s\simeq 229`$ for the minimal supersymmetric standard model, $`T_\mathrm{R}`$ is the reheating temperature, $`a_\mathrm{R}\equiv a(\eta _\mathrm{R})`$, $`a_\mathrm{I}\equiv a(\eta _\mathrm{I})`$, and $`\stackrel{~}{k}\equiv k/H_\mathrm{I}=k|\eta _\mathrm{I}|`$. Since $`|\beta _{ks}|^2\le 1`$ is imposed by Pauli blocking, the integral in Eq. (25) is dominated by the high wavenumber modes. As a matter of fact, this integral diverges linearly, since $`|\beta _{ks}|^2\propto k^{-2}`$ for $`k|\eta _\mathrm{I}|\gg 1`$, according to Eq. (24). This divergence is unphysical, and results from the sudden transition approximation, as is well-known, for instance, in the case of gravitational wave production during inflation . The adiabatic theorem indeed implies that $`|\beta _{ks}|^2`$ falls off exponentially with $`k`$ beyond some cut-off $`k_\mathrm{c}`$ (see also Ref. for a recent study). In effect, a numerical integration of the field equation shows that if the transition between inflation and radiation/matter domination is sufficiently smooth, an exponential cut-off appears, as shown in Fig. (1). Since the integral in Eq. (25) is proportional to $`k_\mathrm{c}`$ (provided $`k_\mathrm{c}|\eta _\mathrm{I}|>1`$), we will use numerical integration of the field equation for quantitative estimates, and the analytical solution to understand the behavior in the various regimes of $`k`$, $`\mu _1`$, and $`\mu _2`$.

In Fig. (1), we compare the analytical (dashed line) and numerical solutions for $`|\beta _{ks}|^2`$ in the case where $`\mu _1=0`$ and $`\mu _2=1`$, and for a transition into matter domination. The exact value of the cut-off wavenumber $`k_\mathrm{c}`$ depends on the duration $`\mathrm{\Delta }\eta `$ of the transition between inflation and matter domination ; indeed, the “degree of non-adiabaticity” of the transition is inversely proportional to $`\mathrm{\Delta }\eta `$. Figure (1) shows that the analytical solution is an excellent approximation to the numerical solution for $`k\lesssim k_\mathrm{c}`$, even for $`k_\mathrm{c}|\eta _\mathrm{I}|>1`$. Furthermore, as expected, $`k_\mathrm{c}\sim 1/\mathrm{\Delta }\eta `$, and therefore the number density in Eq. (25) also scales approximately as $`1/\mathrm{\Delta }\eta `$. Indeed, as shown in Fig. (1), the analytical solution for the sudden transition corresponds to the numerical solution with the cut-off $`k_\mathrm{c}`$ pushed to infinity.

In Fig. (2), we show a (base 10) logarithmic contour plot of $`Y_{3/2}(g_s/200)(T_\mathrm{R}/H_\mathrm{I})^3(a_\mathrm{R}/a_\mathrm{I})^3`$ in the plane $`(\mu _1,\mu _2)`$, assuming a transition with $`\mathrm{\Delta }\eta /|\eta _\mathrm{I}|=1`$ into matter domination. A transition into radiation domination gives similar results. As $`\mu _1=\mu _2\to +\mathrm{}`$, the production is exponentially suppressed \[see also Fig. (3)\], in agreement with the adiabatic theorem. This can be seen in Eq. (24), at least in the limit where $`k|\eta _\mathrm{I}|\gg 1`$, for which $`|\beta _{ks}|^2\propto \mathrm{exp}(-2\pi \mu _1)`$. In the limit $`\mu _1\to 0`$ with $`\mu _2`$ fixed, one has $`Y_{3/2}\propto \mu _1^2`$, and similarly for $`\mu _2\to 0`$ with $`\mu _1`$ fixed. Notably, $`Y_{3/2}\to 0`$ as $`\mu _1\to 0`$ and $`\mu _2\to 0`$, since a massless gravitino is conformally invariant. Finally, for a fixed $`\mu _1`$, the number density is not exponentially suppressed as $`\mu _2\to +\mathrm{}`$; indeed, Eq. (24) gives $`|\beta _{ks}|^2\to 1/2`$ when $`\mu _1\to 0`$ and $`\mu _2\to +\mathrm{}`$, for $`k|\eta _\mathrm{I}|<1`$. Since $`k_\mathrm{c}`$ is roughly proportional to max$`(\mu _1,\mu _2)`$, the number density increases linearly with $`\mu _2`$ in this limit. Note that this does not contradict the adiabatic theorem: as $`\mu _1\to 0`$ and $`\mu _2\to +\mathrm{}`$, for a fixed $`\mathrm{\Delta }\eta `$, the transition becomes increasingly non-adiabatic, and particle production is not suppressed.

Finally, in Fig. (3), we show cuts of the previous contour plot, for $`\mu _1=\mu _2`$ \[diagonal of Fig. (2)\], for $`\mu _1=0`$ \[$`y`$-axis of Fig. (2)\], and for $`\mu _2=0`$ \[$`x`$-axis of Fig. (2)\], in each case for a transition into matter domination with $`\mathrm{\Delta }\eta /|\eta _\mathrm{I}|=1`$. For the sake of completeness, we also include (dash-dotted line) the result of a numerical integration for the case where the mass of the gravitino is constant and has the same value in both the inflationary and matter dominated epochs. Just like the case $`\mu _1=\mu _2`$, it shows exponential suppression as $`m\gtrsim H_\mathrm{I}`$.
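As an illustration of the numerical procedure described above, the following Python fragment integrates the mode equations through a smoothed transition and extracts $`|\beta _{ks}|^2`$. It is a minimal sketch, not the code used for the figures: the first-order system for $`(h,g)`$ is one consistent reduction of Eq. (10) with the Dirac matrices of the Appendix, the background $`am(\eta )`$ is a tanh-smoothed caricature of the de Sitter $`\to `$ matter-domination history of Section III (units $`H_\mathrm{I}=1`$, $`\eta _\mathrm{I}=-1`$), and the out projection uses the instantaneous energy eigenvectors, which is equivalent to Eq. (16) in the adiabatic out region.

```python
import numpy as np
from scipy.integrate import solve_ivp

def am(eta, mu1=0.0, mu2=1.0, d=1.0):
    """Smooth caricature of a(eta)*m(eta): m = mu1*H during inflation
    (a = -1/eta), m = mu2*H during matter domination (a = x^2 with
    x = 1 + (eta+1)/2), blended with a tanh of width d around eta_I = -1."""
    w = 0.5 * (1.0 + np.tanh((eta + 1.0) / d))
    am_inf = -mu1 / np.minimum(eta, -1.0)           # de Sitter branch
    x = np.where(eta > -1.0, 1.0 + 0.5 * (eta + 1.0), 1.0)
    am_mat = mu2 / x                                # matter branch: a*H = 1/x
    return (1.0 - w) * am_inf + w * am_mat

def beta2(k, s=+1, **kw):
    """|beta_k|^2 for helicity s = +1 (helicity -3/2 follows analogously):
    evolve the in mode (h = 0, g = exp(-i s k eta)) with
    h' = i s (k h + am g),  g' = i s (am h - k g),
    then project on the instantaneous 'particle' eigenvector of
    M = [[k, am], [am, -k]]."""
    def rhs(eta, u):
        A = am(eta, **kw)
        return [1j * s * (k * u[0] + A * u[1]),
                1j * s * (A * u[0] - k * u[1])]
    eta0, eta1 = -50.0 / k - 20.0, 400.0
    u0 = [0.0 + 0.0j, np.exp(-1j * s * k * eta0)]
    sol = solve_ivp(rhs, (eta0, eta1), u0, method="DOP853",
                    rtol=1e-8, atol=1e-10)
    h, g = sol.y[:, -1]
    A = am(eta1, **kw)
    wE = np.hypot(k, A)
    up = np.array([k + wE, A]) / np.sqrt((k + wE) ** 2 + A ** 2)
    return abs(np.vdot(up, [h, g])) ** 2

for k in (0.2, 0.5, 1.0, 2.0, 4.0):
    print(f"k|eta_I| = {k:4.1f}   |beta|^2 = {beta2(k):.3e}")
```

In this setup the sudden-transition limit corresponds to $`d\to 0`$, while widening $`d`$ moves the exponential cut-off $`k_\mathrm{c}`$ to lower wavenumbers, in line with the $`k_\mathrm{c}\sim 1/\mathrm{\Delta }\eta `$ scaling discussed above.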
The number density of gravitationally produced gravitinos present at the time of reheating thus depends on several parameters, notably the effective mass terms during and after inflation, the duration of the transition from inflation to reheating, and the number of e-folds of reheating. The yield of gravitinos scales as the inverse of the transition timescale, which defines the degree of “non-adiabaticity”, and decreases as $`\mathrm{exp}(-3N_\mathrm{R})`$, where $`N_\mathrm{R}\equiv \mathrm{ln}(a_\mathrm{R}/a_\mathrm{I})`$ is the number of e-folds of reheating, during which the gravitinos are diluted. These dependences make a direct comparison with the number density of gravitinos produced in particle interactions during reheating slightly delicate. Let us first isolate the dependence on the mass terms and transition timescale in the fiducial quantity $`\widehat{Y}_{3/2}`$, which is defined through $`Y_{3/2}=\widehat{Y}_{3/2}(g_s/200)^{-1}(H_\mathrm{I}/T_\mathrm{R})^3\mathrm{exp}(-3N_\mathrm{R})`$; the quantity plotted in Figs. (2) and (3) is $`\widehat{Y}_{3/2}`$ (for $`\mathrm{\Delta }\eta =H_\mathrm{I}^{-1}`$). The number of e-folds of reheating, and therefore $`Y_{3/2}`$, depend on the detailed mechanism of reheating, which is unfortunately not well known at present. In the most standard model of reheating , in which the inflaton slowly decays through its coherent oscillations and the Universe is matter dominated, one obtains: $`3N_\mathrm{R}\simeq 58.5+\mathrm{ln}\left[(H_\mathrm{I}/10^{13}\,\mathrm{GeV})^2(T_\mathrm{R}/10^9\,\mathrm{GeV})^{-4}(g_{*}/200)^{-1}\right]`$. The dilution is therefore quite strong, and $`Y_{3/2}\simeq 3\times 10^{-14}\,\widehat{Y}_{3/2}(H_\mathrm{I}/10^{13}\,\mathrm{GeV})(T_\mathrm{R}/10^9\,\mathrm{GeV})`$. More generally, if reheating takes place in an era dominated by an equation of state of the form $`p=w\rho `$, the above number of e-folds is reduced by a factor $`1/(1+w)`$; for $`w=1/3`$, for instance, which corresponds to a relativistic fluid, one finds: $`Y_{3/2}\simeq 9\times 10^{-8}\,\widehat{Y}_{3/2}(H_\mathrm{I}/10^{13}\,\mathrm{GeV})^{3/2}(g_s/200)^{-1/4}`$, and the number density produced becomes independent of the reheating temperature. It was pointed out in Ref. that in a general case, oscillations of an inflaton in a potential $`\lambda \varphi ^{2n}`$ would yield an equation of state with $`w\simeq (n-1)/(n+1)`$ after averaging over an oscillation period. Thus, in particular, for chaotic inflation with a potential $`\lambda \varphi ^4`$, the Universe is indeed dominated by a relativistic fluid during reheating ($`w=1/3`$). The ratio $`Y_{3/2}^\mathrm{R}`$ of the number density of gravitinos produced in particle interactions during reheating to the entropy density is, up to logarithmic corrections : $`Y_{3/2}^\mathrm{R}\simeq 3.7\times 10^{-13}\,(T_\mathrm{R}/10^9\,\mathrm{GeV})(g_s/200)^{-3/2}`$. Therefore, the ratio $`Y_{3/2}/Y_{3/2}^\mathrm{R}`$ of these two yields, assuming that reheating takes place in a matter dominated era, is:

$$\frac{Y_{3/2}}{Y_{3/2}^\mathrm{R}}\simeq 0.1\,\widehat{Y}_{3/2}\left(\frac{H_\mathrm{I}}{10^{13}\,\mathrm{GeV}}\right)\left(\frac{g_s}{200}\right)^{3/2},$$ (26)

and, according to Fig. (3), $`\widehat{Y}_{3/2}\simeq 10^{-3}\mu _{1,2}^2\,(\mathrm{\Delta }\eta /|\eta _\mathrm{I}|)^{-1}`$ if $`\mu _1=0`$ and $`\mu _2=1`$, or the reverse. This constitutes our main result.
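The bookkeeping that leads from $`\widehat{Y}_{3/2}`$ to Eq. (26) can be reproduced in a few lines. The sketch below is our own arithmetic check (the fiducial value $`\widehat{Y}_{3/2}=10^{-3}`$ and the reduced Planck mass $`m_{\mathrm{Pl}}=2.4\times 10^{18}`$ GeV are illustrative inputs), evaluating the e-folds of matter-dominated reheating and the resulting dilution:

```python
import math

m_Pl = 2.4e18  # reduced Planck mass, GeV

def efolds3_matter(H_I, T_R, g=200.0):
    """3*N_R for matter-dominated reheating: exp(3 N_R) = (H_I/H_R)^2,
    with H_R^2 = (pi^2 g / 90) * T_R^4 / m_Pl^2."""
    H_R = math.sqrt(math.pi ** 2 * g / 90.0) * T_R ** 2 / m_Pl
    return 2.0 * math.log(H_I / H_R)

H_I, T_R, g = 1e13, 1e9, 200.0
N3 = efolds3_matter(H_I, T_R, g)
print(f"3 N_R = {N3:.1f}")                        # ~ 58.5

Yhat = 1e-3                                        # fiducial value, cf. Fig. (3)
Y = Yhat * (g / 200.0) ** -1 * (H_I / T_R) ** 3 * math.exp(-N3)
Y_R = 3.7e-13 * (T_R / 1e9) * (g / 200.0) ** -1.5  # reheating production
print(f"Y_3/2 = {Y:.2e},  Y_3/2^R = {Y_R:.2e},  ratio = {Y / Y_R:.2e}")
# ratio ~ 0.1 * Yhat, as in Eq. (26)
```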
If, throughout reheating, the Universe is dominated by a relativistic equation of state, it becomes:

$$\frac{Y_{3/2}}{Y_{3/2}^\mathrm{R}}\simeq 2\times 10^5\,\widehat{Y}_{3/2}\left(\frac{H_\mathrm{I}}{10^{13}\,\mathrm{GeV}}\right)^{3/2}\left(\frac{T_\mathrm{R}}{10^9\,\mathrm{GeV}}\right)^{-1}\left(\frac{g_s}{200}\right)^{5/4}.$$ (27)

Whereas the ratio of the two production yields is independent of the reheating temperature when the Universe is matter dominated during reheating, it becomes inversely proportional to $`T_\mathrm{R}`$ when $`w=1/3`$ (relativistic fluid). Therefore, if gravitational production is efficient, i.e., if $`m\sim H`$ during or after inflation, a low reheating temperature does not exclude a strong gravitino problem. Let us now discuss the magnitude of $`\widehat{Y}_{3/2}`$. As seen in Fig. (3), one probably has $`\widehat{Y}_{3/2}\lesssim 10^{-3}`$ if $`\mathrm{\Delta }\eta =H_\mathrm{I}^{-1}`$, where the upper limit corresponds to $`\mu _1=0`$ and $`\mu _2=1`$, or $`\mu _1=1`$ and $`\mu _2=0`$. Therefore, for reheating in a matter dominated era, one finds that the production of gravitinos out of the vacuum is less efficient than that during reheating, provided $`\mathrm{\Delta }\eta \gtrsim 10^{-4}H_\mathrm{I}^{-1}`$. In the other limit, where the Universe is dominated by a relativistic equation of state during reheating, one finds that gravitational production can be much more efficient than reheating production of gravitinos, by a factor $`\sim 10^2\,\mu _{1,2}^2\,(\mathrm{\Delta }\eta /|\eta _\mathrm{I}|)^{-1}(T_\mathrm{R}/10^9\,\mathrm{GeV})^{-1}`$. Finally, an order of magnitude estimate for $`\mathrm{\Delta }\eta `$ is $`\varphi /\varphi ^{}`$ ($`\varphi `$ is the inflaton field) taken at the point at which the slow-roll approximation breaks down, i.e. where $`\dot{\varphi }^2/2\simeq V(\varphi )`$ (a dot denotes differentiation with respect to cosmic time). This gives $`\mathrm{\Delta }\eta \simeq 2(\varphi /M_{\mathrm{Pl}})|\eta _\mathrm{I}|`$, with $`M_{\mathrm{Pl}}\equiv (8\pi )^{1/2}m_{\mathrm{Pl}}`$ the Planck mass. Therefore, $`\mathrm{\Delta }\eta \sim |\eta _\mathrm{I}|`$ for scenarios of the chaotic type, and $`\mathrm{\Delta }\eta \ll |\eta _\mathrm{I}|`$ for scenarios of the new inflation type with small field values; quite possibly, in this latter case, $`\mathrm{\Delta }\eta <10^{-4}|\eta _\mathrm{I}|`$ . In new inflation, therefore, the gravitational production cannot be neglected if $`\mu _1\simeq 0`$ and $`\mu _2\simeq 1`$ (or the reverse), $`\varphi \lesssim 10^{-4}M_{\mathrm{Pl}}`$ at the end of slow-roll, and $`H_\mathrm{I}\simeq 10^{13}\,`$GeV. Moreover, if reheating proceeds faster than in the “standard” model (matter domination), such as in $`\lambda \varphi ^4`$ chaotic inflation, gravitational production of gravitinos can become more efficient than reheating production. If gravitational production dominates, cosmological bounds on the gravitino abundance at the time of Big Bang nucleosynthesis should be turned into upper limits on the effective mass terms of the gravitino during and after inflation, as gravitational production is suppressed as $`(m/H_\mathrm{I})^2`$ if $`m<H_\mathrm{I}`$.

## IV Discussion

We discussed the conformal behavior of the gravitino in a spatially flat FRW background spacetime. We obtained the linearized field equation for the helicity $`\pm 3/2`$ components, which reduces to a Dirac-like equation in curved spacetime. A massive gravitino is not conformally invariant, and cosmological particle production ensues, through the amplification of the vacuum fluctuations by the non-static background metric.
We assumed that the gravitino effective mass is proportional to the Hubble constant, and used the technique of Bogoliubov transforms to calculate the ratio $`Y_{3/2}`$ of the number density of gravitinos to the entropy density at the time of reheating. This quantity depends on the effective mass of the gravitino during and after inflation, on the Hubble constant at the exit of inflation ($`H_\mathrm{I}`$), on the duration of the transition between inflation and radiation/matter domination ($`\mathrm{\Delta }\eta `$), and on the number of e-folds of reheating. Notably, $`Y_{3/2}`$ scales as the inverse of the transition timescale, which defines the degree of “non-adiabaticity” of the transition during which the gravitinos are produced. The comparison of the gravitational production of gravitinos to production in reheating depends on the details of the mechanism of reheating, during which the gravitationally produced gravitinos are strongly diluted. If we assume that the gravitino mass is of order of the Hubble constant during or after inflation, that $`H_\mathrm{I}\simeq 10^{13}\,`$GeV, and that the Universe is matter dominated throughout reheating, gravitational production is generically less efficient than production in reheating interactions, provided the transition is not too abrupt, i.e. $`\mathrm{\Delta }\eta \gtrsim 10^{-4}H_\mathrm{I}^{-1}`$. However, in scenarios of new inflation, one can find $`\mathrm{\Delta }\eta \lesssim 10^{-4}H_\mathrm{I}^{-1}`$, in which case gravitational production would turn out to produce as many gravitinos as, or more than, reheating interactions. Similarly, if reheating proceeds faster, for instance if the Universe is dominated by a relativistic fluid during reheating, as happens e.g. in chaotic inflation with a potential $`\lambda \varphi ^4`$, the number density of gravitinos produced out of the vacuum exceeds, possibly by a large factor, the density of gravitinos produced in particle interactions during reheating. It must be stressed that in the above, we assumed the effective mass $`m`$ of the gravitino to be of order the Hubble constant, either during or after inflation. If not, gravitational production is suppressed as $`(m/H_\mathrm{I})^2`$.

To conclude, let us briefly address the particular case of pre-Big-Bang string cosmology , in which particle production out of the vacuum has been studied extensively , but not for spin-$`3/2`$. In this scenario, one expects that, to leading order, the gravitino mass term vanishes during inflation if the only dynamical field is the axion-dilaton field, as indeed the tree-level superpotential of string-inspired supergravity does not receive contributions from the dilaton. Similarly, $`m=0`$ also in the post-inflationary phase (if only the axion-dilaton field is considered), at least until a non-perturbative superpotential for the dilaton sets in, or until supersymmetry breaking takes place. If we assume that the gravitino is also massless during the so-called stringy phase, it then couples to the axion-dilaton field only through the Kähler connection. The field equation Eq. (10) then decouples into two first-order differential equations for the mode functions $`h_{ss}`$ and $`g_{ss}`$, whose solutions are written as in flat space up to a time-dependent phase which depends on the Kähler connection. If the initial state corresponds to the conformal vacuum as $`\eta \to -\mathrm{}`$, then $`h_{++}^{\mathrm{in}}=g_{--}^{\mathrm{in}}=0`$.
Since we assume that the gravitino remains massless after inflation, conformal triviality also holds as $`\eta \to +\mathrm{}`$, and $`h_{++}^{\mathrm{out}}=g_{--}^{\mathrm{out}}=0`$. From Eq. (16), it is then obvious that $`\beta _{ks}=0`$, i.e. no particle production takes place. At the next level of approximation, one should consider moduli fields, take into account higher order corrections to the effective action in the stringy phase, and/or introduce a non-perturbative superpotential to stabilize the dilaton in the FRW phase. This would, quite presumably, lead to the appearance of an effective mass for the gravitino. Unfortunately, these effects are difficult to implement, because the underlying dynamics or physics remain poorly known. To give an example of what could be obtained, let us assume that the gravitino is massless during the pre-Big-Bang and stringy phases, and that it acquires a mass after the exit into the FRW era. Then the methods and results of the previous section can easily be transposed to this scenario, since the gravitino, being massless in the pre-FRW eras, is insensitive to the background dynamics. It is easy to verify that if the exit into the FRW phase takes place at a scale $`H_\mathrm{I}\sim 10^{17}\,`$GeV, as has been advocated recently , gravitational and reheating production of gravitinos become of the same order, even if the Universe is matter dominated throughout reheating, provided $`m\sim H_\mathrm{I}`$. However, reheating in pre-Big-Bang cosmology is not expected to proceed through coherent oscillations of the “inflaton”, and the above estimate could turn out to be naive. A detailed study of the mechanism of reheating in pre-Big-Bang cosmology thus appears mandatory. Depending on how fast reheating proceeds, and what temperature is achieved, this could lead to a strong gravitino problem (which one would naively expect if $`H_\mathrm{I}\sim 10^{17}\,`$GeV), which would thus require $`m_{3/2}\gtrsim 10^4\,`$GeV for an unstable gravitino, or $`m_{3/2}\lesssim 2\,`$keV for a stable gravitino. A more detailed study of this problem is left for further work.

Note added

Upon completion of this paper, we became aware of a related work by A. L. Maroto and A. Mazumdar (“Production of spin 3/2 particles from vacuum fluctuations”, hep-ph/9904206). These authors obtained the field equation for helicity 3/2 gravitinos, assuming $`\gamma ^\mu \mathrm{\Psi }_\mu =0`$ (which projects out the helicity 1/2 modes), and calculated the amplification of vacuum fluctuations using the technique of Bogoliubov transforms. They applied their technique to the production of gravitinos in preheating. In this respect, their work and ours are complementary: gravitational production during inflation generically produces particles with wavenumber $`k\lesssim H_\mathrm{I}`$, while in preheating the production takes place for modes with $`k\gg H_\mathrm{I}`$. After the present paper was submitted, two other related studies appeared: R. Kallosh, L. Kofman, A. Linde and A. Van Proeyen (“Gravitino production after inflation”, hep-ph/9907124) studied the problem of gravitino production during inflation and during preheating, for both helicity 1/2 and helicity 3/2 modes. Their important work shows that the helicity 1/2 modes are not conformally invariant even if they are massless, and that their production in preheating can be very large. The paper by G. F. Giudice, A. Riotto and I. Tkachev (“Non-thermal production of dangerous relics in the early Universe”, hep-ph/9907510) reaches similar conclusions.

###### Acknowledgements.
It is a pleasure to thank A. Buonanno and J. Martin for many valuable comments and discussions, and P. Binétruy, R. Brustein, B. Carter, R. Kallosh, A. Linde, J. Madore, K. Olive, A. Riotto and G. Veneziano for discussions.

## Notations

We write $`\gamma _\mu `$ a general relativistic Dirac matrix, and $`\gamma _a=e_a^\mu \gamma _\mu `$, or, if confusion could arise, $`\widehat{\gamma }_a`$, a (constant) flat-space Dirac matrix. We define $`\sigma _{ab}=\frac{1}{2}[\gamma _a,\gamma _b]`$. The Dirac matrices are written in the Weyl representation:

$$\gamma ^a=i\left(\begin{array}{cc}0& \sigma _a\\ \overline{\sigma }_a& 0\end{array}\right),$$ (28)

with $`\sigma _a=(1,𝝈)`$ and $`\overline{\sigma }_a=(1,-𝝈)`$, where the $`𝝈`$ are flat-space Pauli matrices. We also define $`\gamma _5=i\widehat{\gamma }_0\widehat{\gamma }_1\widehat{\gamma }_2\widehat{\gamma }_3`$. Our choice of vierbein for the FRW background is $`e_\mu ^a=a(\eta )\delta _\mu ^a`$, $`e_a^\mu =a(\eta )^{-1}\delta _a^\mu `$. The spin connection, without $`\mathrm{\Psi }`$ torsion, is then:

$$\frac{1}{4}\omega _0^{ab}\sigma _{ab}=0,\qquad \frac{1}{4}\omega _i^{ab}\sigma _{ab}=\frac{1}{2}\mathcal{H}\gamma _i\gamma ^0,$$ (29)

where we defined $`\mathcal{H}\equiv a^{}/a`$. We define the helicity operator $`\mathit{ϵ}^\mathrm{L}\cdot 𝝈`$ for a Weyl spinor of momentum $`𝒌`$, where $`\mathit{ϵ}^\mathrm{L}`$ is the unit vector along $`𝒌`$. We then define $`\chi _+(𝒌)`$ and $`\chi _{-}(𝒌)`$ as the eigenspinors of $`\mathit{ϵ}^\mathrm{L}\cdot 𝝈`$ with respective helicity $`+1/2`$ and $`-1/2`$: $`\mathit{ϵ}^\mathrm{L}\cdot 𝝈\chi _\pm (𝒌)=\pm \chi _\pm (𝒌)`$. We decompose a four-component spinor $`\psi (\eta ,𝒌)`$ in eigenstates of helicity , $`\psi (\eta ,𝒌)=\psi _+(\eta ,𝒌)+\psi _{-}(\eta ,𝒌)`$, with:

$$\psi _s(\eta ,𝒌)=\left(\begin{array}{c}h_s(\eta )\chi _s(𝒌)\\ sg_s(\eta )\chi _s(𝒌)\end{array}\right),\qquad s=\pm ,$$ (30)

where $`h_s(\eta )`$ and $`g_s(\eta )`$ are (scalar) functions of conformal time. The eigenspinors $`\chi _+`$ and $`\chi _{-}`$ verify, in particular, $`\chi _s^{\dagger }\chi _{s^{}}=\delta _{ss^{}}`$. In Section II, we perform a similar decomposition for the spinor-vector in terms of the mode functions $`h_{ms}(\eta )`$, where $`m=\mathrm{L},+,-`$ denotes the helicity of the polarization vector, and $`s=\pm `$ denotes the spinor helicity. Finally, we define the charge conjugation operator $`C=i\widehat{\gamma }^2\widehat{\gamma }^0`$, and the conjugate $`\psi ^\mathrm{C}`$ of a spinor $`\psi `$: $`\psi ^\mathrm{C}=iC\widehat{\gamma }^0\psi ^{*}`$. One can show that $`i\sigma _2\chi _s^{*}=s\chi _{-s}`$, where $`s=\pm `$. It is then easy to show that the conjugate of a spinor $`\psi _s(\eta ,𝒌)`$, with helicity $`s`$ and momentum $`𝒌`$, is:

$$\psi _s^\mathrm{C}(\eta ,𝒌)=i\left(\begin{array}{c}g_s^{*}\chi _{-s}\\ sh_s^{*}\chi _{-s}\end{array}\right).$$ (31)

This identity is useful in deriving the normalization identities of the gravitino operator in Section II.
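The conventions of this Appendix can be verified mechanically. The following sketch is our own check, not part of the paper: it confirms that Eq. (28) satisfies the Clifford algebra $`\{\gamma ^a,\gamma ^b\}=2\eta ^{ab}`$ with $`\eta _{ab}=\mathrm{diag}(-1,1,1,1)`$, and that the FRW vierbein $`e_\mu ^a=a\,\delta _\mu ^a`$ reproduces the spin connection of Eq. (29), assuming the standard formula $`\omega _\mu {}^{ab}=e^a{}_\nu (\partial _\mu e^{\nu b}+\mathrm{\Gamma }^\nu {}_{\mu \sigma }e^{\sigma b})`$.

```python
import sympy as sp

# ---- Dirac matrices in the Weyl representation of Eq. (28) ----
I2 = sp.eye(2)
sx = sp.Matrix([[0, 1], [1, 0]])
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]])
sz = sp.Matrix([[1, 0], [0, -1]])
sig  = [I2,  sx,  sy,  sz]          # sigma_a    = (1,  sigma)
sigb = [I2, -sx, -sy, -sz]          # sigmabar_a = (1, -sigma)
Z2 = sp.zeros(2)

def gam(a):   # gamma^a = i [[0, sigma_a], [sigmabar_a, 0]]
    return sp.I * sp.Matrix(sp.BlockMatrix([[Z2, sig[a]], [sigb[a], Z2]]))

eta_ab = sp.diag(-1, 1, 1, 1)
clifford_ok = all(sp.simplify(gam(a) * gam(b) + gam(b) * gam(a)
                              - 2 * eta_ab[a, b] * sp.eye(4)).is_zero_matrix
                  for a in range(4) for b in range(4))
print("Clifford algebra:", clifford_ok)

# ---- spin connection for the vierbein e^a_mu = a(eta) delta^a_mu ----
eta_c = sp.symbols('eta')
coords = (eta_c,) + sp.symbols('x y z')
a = sp.Function('a', positive=True)(eta_c)
H = sp.diff(a, eta_c) / a           # conformal Hubble rate a'/a
g = a**2 * eta_ab
ginv = g.inv()

def Gamma(l, m, n):                 # Christoffel symbol Gamma^l_{mn}
    return sp.Rational(1, 2) * sum(
        ginv[l, r] * (sp.diff(g[r, m], coords[n]) +
                      sp.diff(g[r, n], coords[m]) -
                      sp.diff(g[m, n], coords[r])) for r in range(4))

def omega(mu, A, B):                # omega_mu^{AB}, with e^{nu B} = eta^{nu B}/a
    return sp.simplify(sum(
        a * sp.KroneckerDelta(A, nu) *
        (sp.diff(eta_ab[nu, B] / a, coords[mu]) +
         sum(Gamma(nu, mu, s) * eta_ab[s, B] / a for s in range(4)))
        for nu in range(4)))

print("omega_0^{ab} = 0:",
      all(omega(0, A, B) == 0 for A in range(4) for B in range(4)))

# (1/4) omega_i^{ab} sigma_ab  versus  (1/2) H gamma_i gamma^0, for i = 1:
gam_low = [sum(eta_ab[a_, b_] * gam(b_) for b_ in range(4)) for a_ in range(4)]
sigma = lambda A, B: (gam_low[A] * gam_low[B] - gam_low[B] * gam_low[A]) / 2
lhs = sum(omega(1, A, B) * sigma(A, B) for A in range(4) for B in range(4)) / 4
rhs = H / 2 * gam_low[1] * gam(0)
print("Eq. (29):", sp.simplify(lhs - rhs).is_zero_matrix)
```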
accepted for publication in Phys. Rev. B, August 25, 1999

# Influence of the cooperative Jahn-Teller effect on the transport- and magnetic properties of La<sub>7/8</sub>Sr<sub>1/8</sub>MnO<sub>3</sub> single crystals

P. Wagner, I. Gordon, S. Mangin, V. V. Moshchalkov, and Y. Bruynseraede

Laboratorium voor Vaste-Stoffysica en Magnetisme, Katholieke Universiteit Leuven, Celestijnenlaan 200 D, 3001 Leuven, Belgium

L. Pinsard and A. Revcolevschi

Laboratoire de Chimie des Solides, Université Paris-Sud, 91405 Orsay Cédex, France

Abstract

The low-doped magnetic perovskite La<sub>7/8</sub>Sr<sub>1/8</sub>MnO<sub>3</sub> undergoes, within the paramagnetic-semiconducting phase, a first-order structural transition due to antiferrodistorsive ordering of Jahn-Teller deformed MnO<sub>6</sub> octahedra. This allows one to study not only the influence of the spin configuration on the magneto-transport properties (CMR effect) but also the role of orbital order and disorder. The orbital ordering transition (at 269 K in zero magnetic field) causes a doubling of the resistivity (regardless of the CMR effect in applied magnetic fields) and a drop of the paramagnetic susceptibility. The latter might be interpreted in terms of a shrinking of spin polarons. External magnetic fields shift the ordering transition to lower temperatures, according to the field-induced decrease of the carrier localization. The magnetic field - temperature phase boundary line was investigated by means of magnetoresistance (up to 12 T) and pulsed-fields magnetization measurements up to 50 T. The pronounced magnetization anomalies associated with the phase transition vanish for fields exceeding 20 T. This behaviour has been attributed to a field-induced crossover from antiferrodistorsive order to a nondistorsive/ferromagnetic orbital configuration.

| PACS: | 61.50.Ks | Crystallographic aspects of phase transformations |
| --- | --- | --- |
| | 71.30.+h | Metal-insulator transitions and other electronic transitions |
| | 71.70.Ej | Spin-orbit coupling, Zeeman and Stark splitting, Jahn-Teller effect |
| | 75.30.Vn | Colossal magnetoresistance |

| Corr. author: | Dr. Patrick Wagner | |
| --- | --- | --- |
| | Laboratorium voor Vaste-Stoffysica en Magnetisme | |
| | Katholieke Universiteit Leuven | Tel. 0032 - 16 - 32 76 46 |
| | Celestijnenlaan 200 D | Fax. 0032 - 16 - 32 79 83 |
| | B-3001 Leuven / Belgium | Patrick.Wagner@fys.kuleuven.ac.be |

$`{}^{*}`$: present address: Laboratoire de Physique des Matériaux, Université H. Poincaré, B. P. 239, 54506 Vandoeuvre-les-Nancy Cédex, France

## 1. Introduction

Rare-earth (RE) manganites with partial divalent (D) substitution on the rare-earth site, i.e. RE<sub>1-x</sub>D<sub>x</sub>MnO<sub>3</sub>, show a transition from a paramagnetic-semiconducting phase to a ferromagnetic-quasimetallic state. This insulator-metal transition can be tuned by an external magnetic field, resulting in a colossal negative magnetoresistance effect (CMR). An overview on the physics of manganite materials in general and the relevant electronic transitions can be found in the review articles by Ramirez and by Imada et al. . The relationship between ferromagnetic spin alignment and enhanced charge carrier mobility between neighbouring Mn<sup>3+</sup> and Mn<sup>4+</sup> ions has been described in terms of the double-exchange model and its recent extensions, taking into account the role of electron-phonon coupling and the existence of a Berry phase .
Also a Mott-type hopping model, in which the effective barrier depends on the mutual spin orientation at the hopping sites, gives a correct description of the CMR effect in the para- and in the ferromagnetic state . The charge carriers are quite close to the localized state, and the shielded Coulomb repulsion between them results, for special commensurate substitution ratios, e.g. $`x=1/8,1/4,1/2`$, in an additional phase transition to a charge-ordered, poorly conducting state, commonly described as a ’Wigner-’ or ’charge-crystal’ . Prototypes of charge-ordering compounds are single crystals of Nd<sub>0.5</sub> (Pr<sub>0.5</sub>)Sr<sub>0.5</sub>MnO<sub>3</sub> and La<sub>7/8</sub>Sr<sub>1/8</sub>MnO<sub>3</sub> . The superstructure of ordered charges (Mn ions of different valency) was in both cases confirmed by neutron- and hard-x-ray diffraction. Fundamentally different, however, is the behaviour of these Wigner phases under the influence of external magnetic fields: the 50%-doped compounds are antiferromagnetic insulators at low temperature, and applying magnetic fields destroys the antiferromagnetic alignment as well as delocalizes the charge carriers, thus resulting in a negative magnetoresistance of several orders of magnitude . At low temperature the La<sub>7/8</sub> crystals are ferromagnetic insulators, and the transition to the ordered state shifts to higher temperatures under external fields, giving rise to a slightly positive magnetoresistance .

Another important feature of manganites, besides magnetism and charge ordering, is the Jahn-Teller (JT) distortion of the Mn<sup>3+</sup>O<sup>2-</sup><sub>6</sub> octahedra . A review on the Jahn-Teller effect in general can be found in Ref. . Its most common appearance is a volume-conserving elongation of the octahedra along one axis and a compression along the other two axes. This lifts the energetic degeneracy of the 3d-e<sub>g</sub> level and stabilizes the 3d$`_{3z^2-r^2}`$ orbital with respect to the 3d$`_{x^2-y^2}`$ state. In the following we will refer to these orbitals simply as ’$`3z^2-r^2`$’ and ’$`x^2-y^2`$’. At high temperatures the distortion is not reflected in the variation of the lattice constants, due to a mixed occupation of these orbitals, which can moreover be randomly oriented along the main crystalline axes. There is also a breathing-type oscillation between the elongated type and a compressed version of the JT distortion, which favours the $`x^2-y^2`$ orbital rather than $`3z^2-r^2`$. This disordered and fluctuating state can be described as a ’dynamic JT state’, abbreviated ’DJT’. The JT-ordering temperature T<sub>JT</sub> is characterized by a preferential occupation of the $`3z^2-r^2`$ orbitals, and the distortion becomes static with a coherent orientation of the elongated axes throughout the sample. This state is in the following described as the ’cooperative JT effect’, abbreviated ’CJT’. Pure LaMnO<sub>3</sub> shows hereby the so-called antiferrodistorsive orbital order , which minimizes the increase of elastic energy on a macroscopic length scale. This peculiar type of order is preserved in doped manganites with a maximum Mn<sup>4+</sup> content of $`x=0.15`$ , at the expense of a lowering of T<sub>JT</sub>, e.g. from 790 K for the undoped system to 269 K for $`x=1/8`$ . The main objective of this article is to investigate the relationship between the resistive/magnetic properties of La<sub>7/8</sub>Sr<sub>1/8</sub>MnO<sub>3</sub> single crystals and the orbital configuration of the $`e_g`$ electrons.
These electrons mediate the transport- as well as the magnetic properties, and the difference between a random configuration and a static arrangement of electron orbitals should have a significant impact on charge transfer and magnetic interactions on a macroscopic scale. More specifically, we found that the freezing of the orbital configuration into a regular pattern enhances the resistivity by roughly a factor of two, accompanied by a simultaneous drop of the paramagnetic susceptibility. Both observations will be analyzed and explained on grounds of an orbital-ordering model, which is based on the magnetic- and charge-transfer interactions between the diluted Mn<sup>4+</sup> ions and the surrounding nearest neighbours of Mn<sup>3+</sup>.

## 2. Experimental

Single crystals of La<sub>7/8</sub>Sr<sub>1/8</sub>MnO<sub>3</sub> were prepared from sintered polycrystalline rods by the floating zone method, with the growth direction approximately along the $`b`$-axis . Due to the relative smallness of the orthorhombic deviation from a perfectly cubic structure we cannot, however, exclude the possibility of some microtwinning in the crystals. The sample used for our measurements was cut to the dimensions $`1\times 2\times 3`$ mm<sup>3</sup>. Four gold contacts were evaporated onto the $`2\times 3`$ mm<sup>2</sup> top side and annealed in air for 60 min at 600<sup>o</sup>C. The measuring current was flowing in the Lorentz-force-free configuration, parallel to the applied magnetic field. The magnetoresistivity measurements were performed in a temperature range between 1.5 K and 300 K in a cryostat equipped with a superconducting magnet coil generating fields up to 12 T, and magnetization was measured in a SQUID magnetometer in fields up to 5 T. For magnetization studies at higher magnetic fields we employed the pulsed-fields setup described in Ref. , allowing to measure the magnetization induced by field pulses up to 50 T on a timescale of 10 - 20 ms. The detected signal is hereby the voltage induced in highly sensitive pick-up coils, being proportional to the time derivative of the magnetization $`\partial M/\partial t`$, which is electronically integrated to $`M(B)`$. The filling factor and temperature-dependent sensitivity were calibrated according to the absolute magnetization values obtained by the SQUID. The nominal temperatures of these pulsed-field measurements in the considered temperature range (between 200 K and 300 K) are accurate within $`\pm `$ 4 K.

## 3. Results and Discussion

### 3.1 Correlations between the structure and resistive/magnetic properties

The temperature dependence of the lattice constants in zero external field is given in Fig. 1a), together with the temperature-dependent resistivity (Fig. 1b) and the sample magnetization in Fig. 1c). The structural data are adopted from the x-ray- and synchrotron-radiation studies by Niemöller et al. and agree with neutron diffraction results obtained on the same compound by Pinsard et al. and by Kawano et al. . The structure at room temperature is pseudocubic with a slight orthorhombic distortion, which becomes notably stronger in the antiferrodistorsive state. The $`a`$\- and $`b`$-axis are hereby expanded and the $`c`$-axis becomes compressed. The expansion of $`a`$ is much less pronounced than the $`b`$-expansion, which is somewhat uncommon.
This might be related to an interplay of the JT distortion with the rotation and tilt of MnO<sub>6</sub> octahedra with respect to the La<sub>7/8</sub>Sr<sub>1/8</sub> lattice, or to additional lattice distortions induced by the relatively small Mn<sup>4+</sup> ions. At low temperatures there is a reentrant structural transition to the very same lattice parameters found at room temperature, and a possible orbital configuration consistent with these structural data will be suggested below. The two structural transitions, together with the magnetic transition, allow to distinguish four different temperature regimes (see Fig. 1):

(i) At room temperature the system is paramagnetic/semiconducting, and the conductivity is usually ascribed to thermally activated polaron hopping. The JT distortion is not seen in the ’macroscopic’ structural data, and is therefore of the ’disordered’ or ’dynamic’ type. This is schematically shown in Fig. 2, where the 3d-$`e_g`$ electrons of Mn<sup>3+</sup> occupy $`x^2-y^2`$\- as well as $`3z^2-r^2`$ orbitals, oriented along random axes. Moreover, there is a breathing-type phonon mode, which allows for an oscillation between these two orbital types for a given Mn<sup>3+</sup> site. The axes in Fig. 2 (and in Figs. 3, 4) are denoted as ’$`x^{},y^{},z^{}`$’, to discriminate them from those of the pseudocubic unit cell, ’$`a,b,c`$’, and from the ’x,y,z’ coordinates employed for the description of the shape of individual orbitals.

(ii) At T = 269 K, in the following denoted as the Jahn-Teller temperature T<sub>JT</sub>, a structural transition arises, which is attributed to an antiferrodistorsive ordering of JT-elongated MnO<sub>6</sub> octahedra , i.e. the $`e_g`$ electrons occupy predominantly $`3z^2-r^2`$ orbitals. A graphic representation of this peculiar type of orbital order is given in Fig. 3. The expanded pseudocubic axes $`a`$ and $`b`$ correspond to the diagonals between $`x^{}`$ and $`y^{}`$, while the compressed $`c`$-axis is oriented along the $`z^{}`$ direction. This structure of orbitals is equivalent to the ’resonant x-ray scattering’ results by Murakami et al. on undoped LaMnO<sub>3</sub> . It is noteworthy that the mutual orbital orientation at neighbouring Mn sites causes ferromagnetic correlations within the $`x^{}y^{}`$-plane, and antiferromagnetic superexchange in the perpendicular direction . In LaMnO<sub>3</sub> these interactions are weak and lead to A-type antiferromagnetism below T<sub>N</sub> = 140 K, indicating already that the low-temperature orbital structure of La<sub>7/8</sub>Sr<sub>1/8</sub>MnO<sub>3</sub> deviates from the antiferrodistorsive type. The orbital ordering at 269 K is a first-order phase transition (compare the specific heat results ) and is accompanied by a sudden increase of the resistivity (roughly by a factor of two) and an instantaneous drop of the paramagnetic susceptibility by 20 % . These two interrelated effects will be discussed on grounds of the orbital structure in Sections 3.2 and 3.3. Effects of similar appearance (enhancement of resistivity and drop of magnetization due to a structural transition) were also observed within the paramagnetic phase of La<sub>0.83</sub>Sr<sub>0.17</sub>MnO<sub>3</sub> and in the ferromagnetic phase of La<sub>0.825</sub>Sr<sub>0.175</sub>Mn<sub>0.94</sub>Mg<sub>0.06</sub>O<sub>3</sub> . The origin of the structural transition might there be different from antiferrodistorsive order, due to the relatively high Mn<sup>4+</sup> content (exceeding the threshold value of 15 %).
(iii) From the extrapolation of the inverse paramagnetic susceptibility to zero one finds a ferromagnetic Curie temperature T<sub>C</sub> = 188 K. At T<sub>C</sub> the colossal negative magnetoresistance effect is maximum, and the resistivity decreases below this temperature in a quasimetallic manner, meaning that $`d\rho /dT>0`$ while the absolute $`\rho `$ values are unusually high compared to common metals. The antiferrodistorsive orthorhombicity starts to decrease gradually at T<sub>C</sub> and disappears around 150 K. This indicates that there is a competition between ferromagnetism (induced by the small Mn<sup>4+</sup> content) and the antiferrodistorsive structure, favouring an A-type AFM. We note that the magnetization deviates from that of an ideal ferromagnet, because the absolute value stabilizes at 3/4 of the low-temperature value. This points to a possible phase separation into ferromagnetic and para-/or antiferromagnetic areas - a phenomenon which is observed in a wide variety of manganite compounds .

(iv) At 147 K the resistivity increases spontaneously by roughly a factor of two, which is associated with a charge-ordering transition to a Wigner-crystal state. The charge order was demonstrated by the observation of superlattice reflections in x-ray diffraction . The charge-ordering transition is also accompanied by a sudden jump in magnetization to the actual low-temperature value and by the vanishing of the macroscopically observed JT distortion. The Wigner state is not perfectly insulating, and the resistivity increase between 146 K (right below the Wigner transition) and 75 K can best be described by Shklovskii-Efros (SE) hopping with $`\rho (T)\propto \mathrm{exp}\{(T_0/T)^{1/2}\}`$ . This hopping mechanism corresponds to variable-range hopping with a soft Coulomb gap in the density of states. Other hopping processes, like thermally activated (polaron) hopping, Mott’s variable-range hopping, or cascade hopping, do not apply. From the SE description of the resistivity in the temperature regime between 75 K and 146 K (see fit function in Fig. 1b) we could extract $`T_0`$ = 1.26 $`\times `$ 10<sup>4</sup> K, where $`T_0`$ is given by $`k_BT_0=2.8e^2/(4\pi ϵ_0ϵ_LL)`$, with $`ϵ_L`$ being the dielectric constant of the lattice and $`L`$ the carrier localization length . The resulting $`L`$ = 470 Å / $`ϵ_L`$ seems of the correct magnitude, since the dielectric constant of the ionic perovskites easily achieves values in the range of $`10^2`$, e.g. up to 300 for the isostructural SrTiO<sub>3</sub>. More precise data on $`ϵ_L`$ of manganites are to our knowledge not yet available. The resistivity increase below 75 K is weaker than described by any of the afore-mentioned hopping processes, including SE hopping. The finite conductivity for $`T\to 0`$ means that the Wigner crystal exhibits imperfections in the form of non-localized charge carriers, which might result from slight deviations from the exact 1/8 doping ratio. Interestingly enough, this remanent conductivity is also found in the charge-ordered Pr- and Nd-manganites with 50 % strontium doping . The resistivity increase below 40 K (Fig. 1b) scales with $`\mathrm{log}T`$, pointing either to a Kondo-type problem or to a resistivity contribution caused by electron-electron interactions . In the low-temperature limit we found a magnetic moment per Mn ion of 3.75 $`\mu _B`$ (at 10 K, 5 T).
This is very close to the spin-only value for a mixture of Mn<sup>3+</sup><sub>7/8</sub> ($`J=s=2`$) and Mn<sup>4+</sup><sub>1/8</sub> ($`J=s=3/2`$) with a gyromagnetic ratio of $`g=2`$, resulting in 3.88 $`\mu _B`$. This is, within the precision of the measurement, compatible with ferromagnetic spin alignment, although neutron diffraction gave possible evidence for a small spin canting . Both observations - ferromagnetism and the recovery of the DJT lattice parameters - suggest that the orbital structure at low temperatures is different from the antiferrodistorsive configuration. The possible guess that lowering the temperature transfers the static order back to a dynamic/disordered JT state, as at high temperatures, is clearly counter-intuitive. Furthermore, XAFS studies on La<sub>2/3</sub>Ca<sub>1/3</sub>MnO<sub>3</sub> have shown that there are two peaks in the distribution function of Mn-O bond lengths in the paramagnetic state (attributed to the JT distortion), which merge to a single peak for the ferromagnetic quasimetal . This is interpreted in the sense that the $`e_g`$ electrons become delocalized and therefore need not profit from the formation of JT distortions on the scale of individual unit cells. In the case of La<sub>7/8</sub>Sr<sub>1/8</sub>MnO<sub>3</sub> at low temperatures, the electrons are, despite the ferromagnetism, well localized, and the JT distortion is nevertheless not seen in the XRD data of Fig. 1a. This may be interpreted in terms of the orbital structure sketched in Fig. 4, which agrees with a ’G-type orbital order’ in the scheme of Maezono et al. . Firstly, this orbital configuration results in a ferromagnetic nearest-neighbour coupling between Mn<sup>3+</sup> ions for all directions , while the coupling between Mn<sup>3+</sup> and the low-concentrated Mn<sup>4+</sup> is per se mainly ferromagnetic. Secondly, this structure provides an exact compensation of the elongated and compressed axes of the JT-distorted octahedra on a macroscopic scale: the $`3z^2-r^2`$ orbitals (50 % occupation) are surrounded by octahedra which are elongated by a value $`+\delta `$ along the orbital’s axis. Since the JT effect is in first approximation volume-conserving, the contraction along the two perpendicular axes has to be $`-\delta /2`$. The corresponding figures for the $`x^2-y^2`$ orbitals (50 % occupation) are an expansion by $`+\delta /2`$ within the plane of these orbitals, and a compression by $`-\delta `$ in the perpendicular direction. These distortions compensate precisely, in the structure of Fig. 4, along the three spatial directions, provided that we can ignore local distortions caused by the low-concentrated Mn<sup>4+</sup>.

### 3.2 Influence of the CJT transition on the resistivity

We performed $`\rho (B)`$ measurements at constant temperatures around T<sub>JT</sub>, see Fig. 5. Besides the usual CMR behaviour, there is a sharp negative magnetoresistance effect upon crossing the boundary of the antiferrodistorsive phase, which is stable in low fields, and entering the disordered DJT phase in higher magnetic fields. The strong hysteresis effect (being especially pronounced at 268 K) gives further confirmation of the first-order nature of the CJT transition. Upon lowering the temperature the transition shifts to higher fields, with a shrinking of the hysteresis width. The shape of the phase-boundary line will be discussed in Sect. 3.4.
Almost independent of the temperature is, however, the relative height of the resistive jump, amounting to 2.05 $`\pm `$ 0.15 (shown as solid dots in Fig. 5). This result was corroborated by $`\rho (T)`$ measurements at constant fields, and the resulting jump heights are given in Fig. 5 by open circles. At present it seems difficult to give an unambiguous explanation for the doubling of $`\rho `$ upon the CJT transition, since several possibilities seem to apply equally well:

(i) Change of the double-exchange overlap integral. It was argued that the JT elongation of MnO<sub>6</sub> octahedra and the exclusive occupation of the $`3z^2-r^2`$ orbitals might lower the double-exchange overlap integral along the elongated axis, while the overlap integrals with neighbours along the perpendicular axes should become negligible. This is somewhat doubtful, because JT distortions, albeit fluctuating and in random directions, are present already above T<sub>JT</sub>. Furthermore, the change in orthorhombicity at T<sub>JT</sub> might bring about a spontaneous modification of the bond angle between Mn<sup>3+</sup>, O<sup>2-</sup>, and Mn<sup>4+</sup>. The importance of this angle for the absolute resistivity values (again via the overlap integral) was pointed out in Ref. . Neutron diffraction on undoped LaMnO<sub>3</sub> has however proven that the tilt angle of MnO<sub>6</sub> octahedra (responsible for the buckling of the Mn-O-Mn bonds) indeed increases with decreasing temperature - but there is no discontinuous change around T<sub>JT</sub> .

(ii) Decrease of the carrier localization volume. Alternatively, we can consider a transport mechanism on the basis of Mott’s hopping concept, which is suggested by the low carrier mobility resulting from a strong carrier localization . The resistivity in this model (and in related hopping concepts) depends exponentially on the ratio between the average hopping distance $`R`$ (at high temperatures corresponding to the nearest-neighbour distance) and half of the carrier localization length $`L`$. According to the orbital structure sketched in Fig. 2, we might assume a localization of the hole-type carrier within a volume $`V_L`$ extending from the Mn<sup>4+</sup> ion to the 6 nearest neighbours (via hybridization of orbitals) for the DJT state. The CJT ordering (Fig. 3) will restrict $`V_L`$ to the Mn<sup>4+</sup> site and only 2 out of the 6 nearest neighbours. From this we can calculate for both phases a localization length averaged over the three main crystallographic directions, resulting in a resistive jump ratio of $`1.70`$. We point out that this approach has to average over actually anisotropic localization lengths, a situation for which the Mott-hopping mechanism is conceptually not yet worked out.

(iii) Frustration of charge transport by orbital ordering. The doubling of the resistivity may be understood by considering the change in the allowed hopping paths in Figs. 2 and 3. We postulate that charge transport between Mn<sup>3+</sup> and Mn<sup>4+</sup> can only occur in configurations with orbital overlap. This means that an $`e_g`$ electron in an $`x^2-y^2`$ orbital can only be transferred to nearest-neighbour Mn<sup>4+</sup> ions which are located in the plane of this orbital, but not in the perpendicular direction. An $`e_g`$ electron in a $`3z^2-r^2`$ orbital can be transferred to nearest-neighbour Mn<sup>4+</sup> ions located on the axis of this orbital, but not in the equatorial directions.
We attribute to both processes an equal, dimensionless conductivity $`\sigma =1`$; a possible magnetic impediment of the charge transfer by spin misalignment will preliminarily be ignored. In the disordered state (Fig. 2) the hole associated with Mn<sup>4+</sup> can move to all nearest Mn<sup>3+</sup> neighbours (due to the low doping we consider only Mn<sup>3+</sup>) which provide orbital overlap, and is in principle mobile in three dimensions. The probability for orbital overlap is, regardless of the breathing mode and depending only on the disordered nature of the orbital arrangement, given by the factor 1/2: 50 % of the $`e_g`$ electrons occupy $`3z^2-r^2`$ orbitals, which point with a 1/3 probability into the direction of a given nearest-neighbour site. The other 50 % of $`e_g`$ electrons occupy $`x^2-y^2`$ orbitals with a respective statistical weight of 2/3. The antiferrodistorsive order (Fig. 3) confines the charge carrier within the $`x^{\prime }y^{\prime }`$ plane (movement perpendicular to this plane is forbidden by the lack of suitable orbitals), and direct charge transfer within this plane can occur only to 2 out of the 4 nearest neighbours.

Furthermore, we need to take into account that the sample exhibits microtwinning in the sense that the current path probes equal portions of differently oriented domains. Therefore we calculate the relative conductivity averaged along arbitrary directions of a three-dimensional cubic network, assuming that domains with various orientations contribute to the total conductivity like resistors connected in parallel. The average of conductivities can be approximated by the average of the three types of directions shown in Figure 6: (i) the principal axes of the crystal ($`x^{\prime }`$, $`y^{\prime }`$ and $`z^{\prime }`$ are used in the sense of Fig. 2); (ii) the square diagonals ($`x^{\prime }y^{\prime }`$, $`x^{\prime }z^{\prime }`$, and $`y^{\prime }z^{\prime }`$); (iii) the cube diagonals.

In the three-dimensional DJT state the conductivity of a Mn<sup>4+</sup> hole along the principal axes is $`\sigma _p^{3d}=1/2`$, i.e. the elemental conductivity $`\sigma =1`$ (1 unit cell distance is spanned by 1 hopping event) is weighted with the probability for a suitable orbital geometry. The elemental conductivity along square diagonals is $`1/\sqrt{2}`$ (2 hops are required for a distance of $`\sqrt{2}`$ unit cells), and we take into account that there are 2 possible hopping paths with a respective probability for orbital overlap of $`(1/2)^2`$. The square-diagonal conductivity is therefore $`\sigma _{sd}^{3d}=1/\sqrt{2}\times 2\times (1/2)^2\approx 0.354`$. Correspondingly, we obtain for cube diagonals $`\sigma =1/\sqrt{3}`$ (3 hops for a distance of $`\sqrt{3}`$ unit cells), and there are now 6 possible paths with a probability factor of $`(1/2)^3`$. The cube-diagonal conductivity is then $`\sigma _{cd}^{3d}=1/\sqrt{3}\times 6\times (1/2)^3\approx 0.433`$. The arithmetic average of $`\sigma _p^{3d}`$, $`\sigma _{sd}^{3d}`$, and $`\sigma _{cd}^{3d}`$ approximates the averaged conductivity along all possible directions and has a numerical value of $`\overline{\sigma ^{3d}}\approx 0.429`$.

The major difference in the more two-dimensional situation with antiferrodistorsive order is the suppression of conductivity along the $`z^{\prime }`$ direction. Considering the conductivity along the two remaining principal axes, we note that on average two steps are required to span a distance of one unit cell in the desired direction, i.e. the conductivity is reduced by a factor of two. This is indicated by thin solid lines along the black-shaded orbitals in Fig. 3.
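The directional bookkeeping above is compact enough to check numerically. The following minimal script is our own sketch (the function and variable names are ours, not part of the original analysis); it reproduces the three-dimensional averages quoted here, together with the antiferrodistorsive values derived in the following paragraph, and the resulting jump ratio:

```python
from math import sqrt

def sigma(elemental, n_paths, p_overlap):
    """Directional conductivity: elemental value x number of
    equivalent hopping paths x probability of suitable orbital overlap."""
    return elemental * n_paths * p_overlap

# Disordered (DJT) state: overlap probability 1/2 per hop.
s_p_3d  = sigma(1.0,       1, 1/2)          # principal axes: 0.5
s_sd_3d = sigma(1/sqrt(2), 2, (1/2)**2)     # square diagonals: ~0.354
s_cd_3d = sigma(1/sqrt(3), 6, (1/2)**3)     # cube diagonals: ~0.433
avg_3d  = (s_p_3d + s_sd_3d + s_cd_3d) / 3  # ~0.429

# Antiferrodistorsive (CJT) state (values derived in the text below):
# z' is blocked; x' and y' need two hops per unit-cell distance.
s_p_2d  = (0 + 1/2 + 1/2) / 3               # principal axes: 1/3
s_sd_2d = (sqrt(2)/2 + 0 + 0) / 3           # only the x'y' diagonal conducts
s_cd_2d = 0.0                               # cube diagonals are blocked
avg_2d  = (s_p_2d + s_sd_2d + s_cd_2d) / 3  # ~0.190

print(f"avg 3d = {avg_3d:.3f}, avg 2d = {avg_2d:.3f}, "
      f"jump ratio = {avg_3d / avg_2d:.2f}")  # -> 0.429, 0.190, 2.26
```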
For the antiferrodistorsive state the average conductivity along the principal axes is thus $`\sigma _p^{2d}=(0+1/2+1/2)/3=1/3`$. Due to the static orbital arrangement we need not consider the probability factors for overlap geometries. The number of steps for charge transport along the diagonal direction $`x^{\prime }y^{\prime }`$ is not altered by the orbital ordering (see the diagonal lines along the grey-shaded orbitals in Fig. 3); however, the conductivity along the other two square-diagonal directions becomes zero. The average square-diagonal conductivity is therefore $`\sigma _{sd}^{2d}=(\sqrt{2}/2+0+0)/3\approx 0.236`$. It is evident that the cube-diagonal conductivity $`\sigma _{cd}^{2d}`$ is 0, and the total average over all directions results in $`\overline{\sigma ^{2d}}\approx 0.190`$. The relative resistive jump height at the CJT transition is then given by the ratio $`\overline{\sigma ^{3d}}/\overline{\sigma ^{2d}}\approx 2.26`$, in very close agreement with the experimentally found jump ratios between 1.9 and 2.2 (slightly dependent on the applied magnetic field). We note that magnetoresistive measurements in untwinned single crystals, with the current path along well-defined crystallographic axes, should bring about jump ratios different from 2.

### 3.3 Influence of the CJT transition on the paramagnetic susceptibility

Associated with the doubling of the resistivity, we also found a sudden drop in the paramagnetic magnetization, which is shown in the inset of Fig. 1c. In field-dependent magnetization measurements at constant temperatures (up to 5 T with a SQUID magnetometer, and up to 50 T in pulsed magnetic fields, see Figs. 7 and 8) this corresponds to a sudden, hysteretic upturn of the magnetization upon leaving the CJT state, which becomes unstable in sufficiently high fields. For a better comparison, Fig. 7 also includes a magnetization curve at 272 K, i.e. above the JT ordering temperature, and a magnetization curve at 265 K, i.e. entirely inside the ordered phase. In addition to the presented $`M(B)`$ data we studied magnetization curves at various temperatures between 220 K and 290 K. The signature of the CJT transition is especially pronounced for the pulsed-fields magnetization technique (Fig. 8). As a general tendency we note that at lower temperatures (or, respectively, at higher magnetic fields) the hysteresis width shrinks rapidly and the relative upturn of the magnetization decreases. Below 235 K and a corresponding field of 18 T we found no indication of a further distinction between a JT-ordered and a disordered state.

To understand the influence of the CJT transition on the magnetic properties, we should analyze the paramagnetic magnetization and susceptibility data not only in the framework of a simple model with independent magnetic ions, but also take into account the existence of spin polarons. ’Spin polaron’ hereby denotes an ensemble of neighbouring unit cells with parallel (on a local scale ferromagnetic) spin alignment, which leads to a ’superparamagnetic behaviour’. The existence of these clusters in CMR manganites was postulated in ref. , and found experimental support in the articles . In compounds with negligible orbital order, i.e. with higher doping concentration, the charge transport is governed by the relative spin misorientation between neighbouring spin polarons, while the carriers are delocalized within their respective spin cluster . For the size determination of spin polarons from the field-dependent magnetization data (SQUID results in Fig. 7)
we first employed a fit based on the Brillouin function $`\mathcal{B}`$, which is justified since the respective temperatures around T<sub>JT</sub>(B = 0) are sufficiently above the ferromagnetic transition. The paramagnetic magnetization along the field axis (per unit cell volume) is usually given by $`M(B,T)=g\mu _BJ\mathcal{B}\{g\mu _BJB/k_BT\}`$, with the gyromagnetic ratio $`g=2`$, $`\mu _B`$ the Bohr magneton, and $`J`$ the spin moment of the magnetic ions . For the mixture of Mn<sup>3+</sup> and Mn<sup>4+</sup> we replace $`J`$ by the average $`\overline{J}=1.94`$. In the case of superparamagnetic clusters composed of $`n`$ ions, the $`J`$ factor in the argument of the Brillouin function changes to $`J^{\prime }=n\overline{J}`$ (causing a modification of the curvature) while the $`\overline{J}`$ in the prefactor remains unchanged. This is due to the compensation of the $`n`$-fold increase in magnetic moment per (super-)paramagnetic entity by the $`1/n`$-fold decrease of their density per unit volume. Finally we have to replace the temperature $`T`$ by the effective temperature scale $`(T-T_C)^\gamma `$, with T<sub>C</sub> = 188 K. Since we are not too close to T<sub>C</sub>, it is reasonable to choose $`\gamma =1`$, according to the Curie-Weiss law .

The result of the fitting is given in Fig. 9 and suggests a CJT-induced shrinking of the superparamagnetic clusters from $`J^{\prime }=8`$ (about 4 Mn ions involved) to $`J^{\prime }=6`$ (corresponding to roughly 3 Mn spins). The three magnetization curves measured across the CJT phase boundary hereby gave six data points. Besides this shrinking there is a slight tendency towards an increasing cluster size with decreasing temperature. Describing these cluster sizes on grounds of the orbital structures is difficult, since a random orientation of $`e_g`$ electrons results, in principle, in ferromagnetic nearest-neighbour correlations , while the antiferrodistorsive structure promotes A-type antiferromagnetism . These interactions are, however, weak, because already in the case of pure LaMnO<sub>3</sub>, where CJT order is established around 800 K, the A-type AFM spin order is only observed far below room temperature. We will therefore restrict the discussion to the strongest ferromagnetic bonds, i.e. the coupling of the diluted Mn<sup>4+</sup> ions to the neighbouring Mn<sup>3+</sup> sites. In the case of the ordered structure (see Fig. 3) the central Mn<sup>4+</sup> ($`J=3/2`$) can undergo spin alignment in the sense of double exchange with two out of the 6 Mn<sup>3+</sup> neighbours ($`J=2`$). The total $`J^{\prime }`$ moment of this entity corresponds to 5.5. For the DJT state (Fig. 2) 3 out of the 6 Mn<sup>3+</sup> neighbours are on average in the $`x^2-y^2`$ configuration, and the probability that an orbital lobe points towards the central Mn<sup>4+</sup> is 2/3. The other 3 orbitals are of the $`3z^2-r^2`$ shape, and the overlap probability is 1/3 (according to the three possible spatial orientations of these orbitals). The average total moment of this entity is therefore 7.5. It might be accidental, but these figures (5.5 and 7.5) agree closely with the experimentally found $`J^{\prime }`$ values in the CJT and DJT phases.

As an alternative method for the determination of cluster sizes we also analyzed the low-field magnetization data from Fig. 1c by means of the modified (superparamagnetic) susceptibility formula $`\chi =(g\mu _B)^2\overline{J}(J^{\prime }+1)/(3k_B(T-T_C))`$ . The resulting $`J^{\prime }`$ values (solid line in Fig. 9)
are similar to the aforementioned data, suggesting that the size of these clusters is quite insensitive to the influence of external fields. From the findings of this subsection we conclude that each hole-type charge carrier, associated with a Mn<sup>4+</sup> ion, is embedded in a locally ferromagnetic environment extending to the nearest-neighbour sites. The charge transfer therefore indeed depends on the orbital configuration between Mn<sup>4+</sup> and its nearest neighbours, while a possible impediment of the charge movement by magnetic disorder might, in a first approximation, be ignored.

### 3.4 Competition between paramagnetic spin alignment and orbital ordering

It can be seen from Fig. 10 that increasing magnetic fields shift the cooperative JT transition, identified by the resistive jump and the susceptibility drop, to lower temperatures. This shift is quadratic in moderate fields (see the inset of Fig. 10) with a slope of $`-0.11\mathrm{K}/\mathrm{T}^2`$, in agreement with the behaviour published in ref. . It was argued in earlier work on double exchange that ferromagnetic spin alignment enhances the mobility of charge carriers and thereby lowers their kinetic energy in the sense of the uncertainty principle . The possible energy decrease due to itinerant behaviour can, certainly in a ferromagnetic/quasimetallic state, exceed the total energy decrease associated with a localized state forming a JT-deformed environment . Here we are dealing with a similar problem. In the higher-conducting DJT state the carriers are relatively mobile and charge transfer is impeded by spin disorder. This spin disorder persists in the CJT state; however, the static orbital structure results in an additional carrier localization, as discussed in Sect. 3.2. The enhanced kinetic energy of the carrier system is hereby overcompensated by the elastic energy of the lattice, which is responsible for the transition to the antiferrodistorsive structure. While external magnetic fields lower the degree of spin-disorder localization in both the CJT and the DJT state, the gain in free energy is more pronounced in the latter case. The frustration of carrier mobility in the CJT state would, due to the static orbital arrangement, persist even in the absence of spin disorder. In conclusion, magnetic fields lower the kinetic energy of the carrier system more effectively in the DJT than in the CJT state, and the DJT state becomes stable in a wider temperature range, meaning that the orbital ordering transition shifts to lower temperatures.

Possible measures of the field-induced decrease of free energy in the DJT state are the enhancement of carrier mobility and, equivalently, the decrease of resistivity. The CMR effect scales, in the paramagnetic state, with the square of the Brillouin function, depending on the total moment of superparamagnetic entities , and the field-dependent transition temperature T<sub>JT</sub>(B) should scale according to the implicit equation:

$$T_{JT}(B)=T_{JT}(B=0)-\alpha \mathcal{B}^2\left(\frac{g\mu _BJ^{\prime }B}{k_B(T_{JT}(B)-T_C)}\right)$$

The $`J^{\prime }`$ moment was set to 8 (the experimental value right above the phase transition in Fig. 9), and $`\alpha `$, the only free parameter, connects the decrease of free energy (via delocalization) to the CMR-induced lowering of the resistivity. The equation was solved numerically for T<sub>JT</sub>(B), and the best agreement with the data (see the solid line in Fig. 10) was achieved for $`\alpha `$ = 47 K, with an uncertainty of $`\pm `$ 2 K.
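The implicit equation is readily solved by fixed-point iteration. The following sketch (a minimal reimplementation of ours, not the original fitting code) uses the quoted parameters T<sub>JT</sub>(B = 0) = 269 K, T<sub>C</sub> = 188 K, $`J^{\prime }=8`$ and $`\alpha `$ = 47 K, and reproduces the quadratic low-field behaviour with a slope close to the quoted $`-0.11\mathrm{K}/\mathrm{T}^2`$:

```python
from math import tanh

MU_B = 9.274e-24  # Bohr magneton [J/T]
K_B  = 1.381e-23  # Boltzmann constant [J/K]
G, J_P = 2.0, 8.0                        # gyromagnetic ratio, cluster moment J'
T_JT0, T_C, ALPHA = 269.0, 188.0, 47.0   # [K]

def brillouin(J, x):
    """Brillouin function B_J(x) for x > 0."""
    a, b = (2 * J + 1) / (2 * J), 1 / (2 * J)
    return a / tanh(a * x) - b / tanh(b * x)

def t_jt(B, T=T_JT0, n_iter=200):
    """Fixed-point iteration of T = T_JT(0) - alpha * B_J'(...)^2, for B > 0."""
    for _ in range(n_iter):
        x = G * MU_B * J_P * B / (K_B * (T - T_C))
        T = T_JT0 - ALPHA * brillouin(J_P, x) ** 2
    return T

for B in (1.0, 5.0, 10.0):  # field values in tesla
    print(f"B = {B:4.1f} T -> T_JT = {t_jt(B):.1f} K")
# At 1 T the shift is ~0.12 K, i.e. a low-field slope of about 0.11-0.12 K/T^2.
```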
The shape of the phase-boundary line is correctly reproduced, and the quadratic low-field behaviour emerges directly from the properties of the Brillouin function. There are two alternative mechanisms which can also explain a decrease of T<sub>JT</sub> with $`B^2`$, but these effects are probably too small to account for the observed shift. Firstly, the intrinsic antiferromagnetism of the antiferrodistorsive structure is caused by superexchange due to the overlap of $`t_{2g}`$ orbitals along the $`z^{\prime }`$ axis (Fig. 3). External magnetic fields, or an internal Weiss field in the ferromagnetic state, should therefore result in a decompression of the $`z^{\prime }`$ axis in order to minimize the superexchange interaction, which in turn destabilizes the antiferrodistorsive order. The correlations from superexchange are, however, weak and only notable below T<sub>N</sub> $`\sim `$ 0.5 T<sub>JT</sub>. Secondly, it is known that magnetic fields cause a shrinking of the exponentially falling tails of electron wave functions . This shrinking reduces the overlap between the $`3z^2-r^2`$ orbital and the two opposite O<sup>2-</sup> orbitals and therefore allows for smaller JT distortions, with less pronounced minima in the free energy. The compression of the Bohr radius $`a`$ is given by $`a\to a(1-a^4/(24\lambda ^4+3a^4))`$, with $`\lambda `$ being the ’Larmor length’ $`(\mathrm{}/eB)^{1/2}`$ . This expression is valid for $`a\ll \lambda `$ and predicts essentially a shrinking proportional to the square of the local magnetic field. Comparing the Bohr radius of the $`e_g`$ electrons ($`\sim `$ 2 Å) with the Larmor length at 50 T (36 Å) corresponds to a compression effect of the order of 10<sup>-7</sup>, which is insufficient to affect the JT distortion itself.

It is noteworthy that the CJT transition vanishes for temperatures below 235 K (see also Fig. 8), meaning that it might become second order. The charge-ordering transition, characterized by the lifting of the antiferrodistorsive structure and by ferromagnetic spin alignment, shifts in non-zero fields to higher temperatures, because the internal Weiss field is reinforced by the external contribution (see Fig. 10). This means that sufficiently high fields can stabilize the nondistorsive orbital structure (Fig. 4) already at temperatures above T<sub>CO</sub> = 147 K. We speculate that at temperatures where the CJT transition apparently vanishes (see e.g. the pulsed-fields measurement at 225 K in Fig. 8) the increasing external field can transform the antiferrodistorsive order gradually into the nondistorsive type. If there is a phase mixture between these two states, the typical signatures of the CJT transition in magnetization and resistivity will be smeared out and finally vanish, because transport and magnetic properties should be largely similar in the nondistorsive and in the disordered JT state.

## 4 Conclusions and Summary

We investigated the resistive and magnetic behaviour of a rare-earth manganite in the low-doping regime, La<sub>7/8</sub>Sr<sub>1/8</sub>MnO<sub>3</sub>, where the high Mn<sup>3+</sup> content results in a cooperative Jahn-Teller effect. The phase transition from the orbitally disordered state to a phase with antiferrodistorsive orbital order has a substantial influence on the resistive as well as on the magnetic properties. The orbital ordering impedes charge transfer along certain directions, resulting in a doubling of the resistivity irrespective of external magnetic fields and the corresponding spin-alignment and CMR effects.
The magnitude of this resistive jump was calculated and found to be quantitatively correct on the basis of a simple model; crossing the phase boundary from the orbitally ordered to the disordered state corresponds to a strong, negative magnetoresistance effect which is independent of the CMR effect as such. The lower resistivity of the disordered state goes along with an increase of the paramagnetic susceptibility, which is especially remarkable since it can only be explained on grounds of superparamagnetic behaviour controlled by ferromagnetic spin clusters or spin polarons. The typical cluster moment was evaluated by two independent fitting procedures and corresponds to the total moments of the Mn<sup>4+</sup> ions and the surrounding Mn<sup>3+</sup> sites to which the Mn<sup>4+</sup> has an orbital overlap. The shift of the orbital ordering transition to lower temperatures under the influence of magnetic fields was explained by a stabilization of the disordered phase through an enhancement of the carrier mobility. Furthermore, we pointed out that the low-temperature properties of La<sub>7/8</sub>Sr<sub>1/8</sub>MnO<sub>3</sub>, which are incompatible with antiferrodistorsive order, might be explained on grounds of a static, but nondistorsive, orbital structure.

Acknowledgements: This work was supported by the Flemish Concerted Action (GOA), the Fund for Scientific Research - Flanders (FWO), and the Belgian Interuniversity Attraction Poles programs (IUAP). The authors thank L. Trappeniers for technical support with the pulsed-fields magnetization measurements and V. Bruyndoncx for help with the numerical solution of the phase-boundary line. Constructive advice by R. Gross, S. Uhlenbruck, and B. Büchner from the University of Cologne is gratefully acknowledged.

## List of Figure Captions

Figure 1: Temperature dependence of (a) the lattice constants of a La<sub>7/8</sub>Sr<sub>1/8</sub>MnO<sub>3</sub> single crystal (according to ref. ), (b) the resistivity, and (c) the low-field magnetization. The cooperative JT transition at T<sub>JT</sub> = 269 K results in a doubling of the resistivity and a drop of the paramagnetic susceptibility (see inset in c). The gradual lifting of the cooperative JT distortion below the Curie temperature (188 K) is associated with the step-like development of a ferromagnetic/quasimetallic, and finally ferromagnetic/charge-ordered state below T<sub>CO</sub> = 147 K. The dotted line in (b) is a fit to the resistivity increase in the charge-ordered state by Shklovskii-Efros hopping.

Figure 2: The dynamic JT state is characterized by a random occupation of $`x^2-y^2`$ and $`3z^2-r^2`$ orbitals and a vibronic mode (indicated by arrows at the left $`3z^2-r^2`$ orbital), which favours an oscillation between the two orbital types. The hole-type charge carrier associated with a Mn<sup>4+</sup> ion is mobile in three dimensions. The orbitals in this and in the subsequent Figures 3 and 4 are not drawn to scale with respect to the lattice constant.

Figure 3: The antiferrodistorsive order lowers the conductivity along the $`x^{\prime }`$- and $`y^{\prime }`$-axes by a factor of 2 (possible paths are indicated by dark-shaded orbitals in part a); the conductivity along the diagonal axes remains unaffected (grey-shaded orbitals).
The two-dimensional character prohibits charge transport along the $`z^{\prime }`$ axis (part b), and the compression of the crystalline structure along this axis causes A-type antiferromagnetism for undoped LaMnO<sub>3</sub> via superexchange between the $`t_{2g}`$ orbitals.

Figure 4: Tentative orbital structure of slightly doped LaMnO<sub>3</sub> at low temperatures. The $`x^2-y^2`$ orbitals (50 % occupation) are oriented within the $`x^{\prime }z^{\prime }`$ plane, the $`3z^2-r^2`$ orbitals along the $`y^{\prime }`$ axis. The occupation of orbitals in subsequent layers along $`z^{\prime }`$ is reversed. The length $`d`$ corresponds to the mean diameter of Mn<sup>3+</sup>O<sup>2-</sup><sub>6</sub> octahedra without JT distortion. This nondistorsive pattern is able to account for the reentrant structural properties together with ferromagnetic spin alignment, in agreement with the nearest-neighbour coupling rules .

Figure 5: Normalized magnetoresistance (left axis) at the transition from the cooperative JT state in low fields to the dynamic JT state in higher fields. Upon lowering the temperature the width of the hysteresis loops decreases rapidly, indicating a weakening of the first-order character of the CJT transition in the presence of external magnetic fields. The relative height of the resistive jump from the ordered to the disordered state (right axis) is almost field- and temperature-independent, with an absolute value around 2.1. Solid dots refer to the magnetoresistance measurements at constant temperature, open dots to temperature sweeps at fixed magnetic fields. The dotted line is a guide to the eye.

Figure 6: Illustration of nearest-neighbour hopping on a cubic network with different conductivities for principal axes, square diagonals, and cube diagonals. The elemental conductivities ($`\sigma =1,1/\sqrt{2},1/\sqrt{3}`$) have to be weighted by the number of possible paths and by the probability for a suitable orbital-overlap configuration.

Figure 7: SQUID magnetization measurements around the CJT transition: the orbitally ordered state becomes unstable at higher fields, resulting in a hysteretic upturn of the paramagnetic magnetization. For clarity, the absolute values of the magnetization curves at different temperatures are shifted by the indicated offset values.

Figure 8: The pulsed-fields magnetization measurement at 245 K is qualitatively equivalent to the curves in Fig. 7. The signature of the phase transition vanishes abruptly for temperatures lower than 235 K (here shown for 225 K). The orbital-ordering transition results in pronounced spikes in the non-integrated $`dM/dB`$ signal (see inset).

Figure 9: The temperature-dependent size of preformed ferromagnetic spin clusters in the paramagnetic phase around the CJT transition. The data points were determined by Brillouin fits to the magnetization curves (Fig. 7) and suggest a shrinking from 4 to 3 Mn ions involved in the formation of an individual spin cluster. Measurements crossing the phase-boundary line gave two corresponding data points (solid squares and circles). The solid line was calculated on the basis of the low-field magnetization data from Fig. 1c.

Figure 10: Increasing magnetic fields shift the CJT transition to lower temperatures (quadratically in low fields, compare the inset) and the charge-ordering transition to higher temperatures (the dotted line is a guide to the eye). The orbital-ordering line is fitted on grounds of the field-induced mobility contribution to the free energy of the DJT state.
The distinction between the orbitally ordered and the disordered state ceases above 20 T. The squares correspond to resistive data from $`\rho (T)`$ measurements at constant fields, the open circles to pulsed-field magnetization.
# The method of expansion of Feynman integrals

S.A. Larin

Institute for Nuclear Research of the Russian Academy of Sciences, 60th October Anniversary Prospect 7a, Moscow 117312, Russia

## Abstract

The method of expansion of integrals in external parameters is suggested. It is quite universal and works for Feynman integrals both in Euclidean and Minkowski regions of momenta.

During the last two decades different techniques were developed for asymptotic expansions of Feynman integrals in quantum field theory. These techniques allow one to perform practical calculations when exact integrations are not possible. In the present paper we suggest a new, effective method of expansion of integrals in external parameters, ’the method of cancelling factors’. It is quite universal and works for Feynman integrals both in Euclidean and Minkowski regions of momenta.

To demonstrate the essence of the method of cancelling factors we begin with the following simple integral

$$\int _0^1\frac{dx}{(x+t)(1+x)}=\frac{\mathrm{ln}(1+t)-\mathrm{ln}(t)-\mathrm{ln}(2)}{1-t}.$$ (1)

We want to expand this integral in the small external parameter $`t`$ before the integration. The naive expansion of the integrand in the Taylor series in $`t`$ does not work, since it produces non-integrable singularities at $`x=0`$. To get the correct expansion let us distinguish two factors in the integrand: the expanding factor $`\frac{1}{x+t}`$ (the expansion of this factor initiates the expansion of the whole integral) and the cancelling factor $`\frac{1}{1+x}`$ (this factor is used to cancel singularities arising in the expansion of the expanding factor). We subtract from and add to the cancelling factor its Taylor series in $`x`$ up to some power $`n`$:

$$\int _0^1\frac{dx}{(x+t)(1+x)}=\int _0^1dx\frac{1}{x+t}\left[\frac{1}{1+x}-\sum _{j=0}^{n}(-x)^j\right]+\int _0^1dx\frac{1}{x+t}\sum _{j=0}^{n}(-x)^j.$$ (2)

In the first integral on the right-hand side of eq.(2) we can now safely perform the Taylor expansion of the expanding factor $`\frac{1}{x+t}`$ in $`t`$ up to and including the order $`t^n`$. This will no longer generate non-integrable singularities at $`x=0`$, since the factor in the square brackets (the subtracted cancelling factor) has the behavior $`O(x^{n+1})`$ and thus suppresses the singularities arising in the expansion of $`\frac{1}{x+t}`$. Finally we get the desired expansion

$$\int _0^1\frac{dx}{(x+t)(1+x)}=\int _0^1dx\sum _{k=0}^{n}\frac{(-t)^k}{x^{k+1}}\left[\frac{1}{1+x}-\sum _{j=0}^{n}(-x)^j\right]+\int _0^1dx\frac{1}{x+t}\sum _{j=0}^{n}(-x)^j+O(t^{n+1}\mathrm{ln}t).$$ (3)

In each term of this expression some factor is expanded, and the integrations reproduce the expansion in $`t`$ of the exact result on the right-hand side of eq.(1).

Let us generalize the above considerations. For a given integral the method of cancelling factors distinguishes the expanding factor (which will be expanded in the external parameters of the integral) and several (one or more) cancelling factors (which will be used to cancel the singularities arising in the expansion of the expanding factor; the number of cancelling factors is determined by the necessity to suppress all arising singularities). For each cancelling factor one adds and subtracts its expansion in the integration variable, up to the necessary order, at the singular point (the point where singularities appear in the expansion of the expanding factor).
Finally, in the term containing the product of the subtracted cancelling factors, one expands the expanding factor in the external parameters up to the necessary order, without generating non-integrable singularities.

Let us now apply the method of cancelling factors to Feynman integrals in quantum field theory. To regularize divergent integrals we will use dimensional regularization for convenience (but the method is regularization independent). The dimension of the momentum space is defined as $`D=4-2ϵ`$, where $`ϵ`$ is the parameter defining the deviation of the dimension from its physical value 4. We consider first the expansion at large external momentum squared $`q^2`$ of the following one-loop Feynman integral of the propagator type

$$\int \frac{d^Dk}{(k^2-m_1^2+i0)[(k+q)^2-m_2^2+i0]},$$ (4)

where $`k`$ is the integration momentum and $`m_1`$ and $`m_2`$ are the masses of the propagators. Below in the paper we will omit the ’causal’ $`i0`$ for brevity. The expansion in large $`q^2`$ means, alternatively, the expansion in small $`m_1`$ and $`m_2`$. We cannot just naively expand the integrand in the Taylor series in $`m_1^2`$ and $`m_2^2`$. Such an expansion produces an incorrect result since it generates infrared singularities at $`k=0`$ and $`k+q=0`$. (At first glance the singularities are generated at $`k^2=0`$ and $`(k+q)^2=0`$, but it is known that Feynman integrals with one external momentum can always be treated in the Euclidean region of momenta, where the conditions $`k^2=0`$ and $`k=0`$ are equivalent.) To get the correct expansion let us distinguish two factors in the integrand: the expanding factor $`\frac{1}{k^2-m_1^2}`$ and the cancelling factor $`\frac{1}{(k+q)^2-m_2^2}`$. We subtract from and add to the cancelling factor its Taylor series in $`k`$:

$$\int \frac{d^Dk}{(k^2-m_1^2)[(k+q)^2-m_2^2]}=\int d^Dk\frac{1}{k^2-m_1^2}\left[\frac{1}{(k+q)^2-m_2^2}-T_k^{2n_1}\frac{1}{(k+q)^2-m_2^2}\right]+\int d^Dk\frac{1}{k^2-m_1^2}T_k^{2n_1}\frac{1}{(k+q)^2-m_2^2},$$ (5)

where

$$T_k^{2n_1}\frac{1}{(k+q)^2-m_2^2}=\sum _{j=0}^{2n_1}\frac{\partial ^j}{\partial k^{\mu _1}\cdots \partial k^{\mu _j}}\frac{1}{(k+q)^2-m_2^2}\bigg|_{k=0}\frac{k^{\mu _1}\cdots k^{\mu _j}}{j!}$$

is the Taylor expansion of the cancelling factor in $`k`$ up to some order $`2n_1`$. In the first integral on the right-hand side of eq.(5) we can now perform the Taylor expansion in $`m_1^2`$ of the expanding factor $`\frac{1}{k^2-m_1^2}`$ up to and including the order $`(m_1^2)^{n_1+1}`$. This expansion no longer generates infrared singularities, since the factor in the square brackets (the subtracted cancelling factor) behaves as $`O(k^{2n_1+1})`$ and thus suppresses the infrared singularities at $`k=0`$ arising in the Taylor expansion

$$T_{m_1^2}^{n_1+1}\frac{1}{k^2-m_1^2}=\sum _{j=0}^{n_1+1}\frac{(m_1^2)^j}{(k^2)^{j+1}}.$$

Thus we get

$$\int \frac{d^Dk}{(k^2-m_1^2)[(k+q)^2-m_2^2]}=\int d^Dk\,T_{m_1^2}^{n_1+1}\frac{1}{k^2-m_1^2}\frac{1}{(k+q)^2-m_2^2}+\int d^Dk\frac{1}{k^2-m_1^2}T_k^{2n_1}\frac{1}{(k+q)^2-m_2^2}+O\left((m_1^2)^{n_1+2}\right),$$ (6)

where we took into account that the term containing both Taylor expansions $`T_{m_1^2}^{n_1+1}`$ and $`T_k^{2n_1}`$ is zero, due to the known property of dimensional regularization to nullify integrals without external parameters (massless tadpoles). The approximation here, $`O\left((m_1^2)^{n_1+2}\right)`$, and the approximations below in the paper are written up to logarithms. This is already a kind of expansion, but we can continue further with the expansion of the first term on the right-hand side of eq.(6).
For this purpose it is convenient to make in this term the shift of the integration momentum $`k\to k-q`$, so we get

$$\int \frac{d^Dk}{(k^2-m_1^2)[(k+q)^2-m_2^2]}=\int d^Dk\frac{1}{k^2-m_2^2}T_{m_1^2}^{n_1+1}\frac{1}{(k-q)^2-m_1^2}+\int d^Dk\frac{1}{k^2-m_1^2}T_k^{2n_1}\frac{1}{(k+q)^2-m_2^2}+O\left((m_1^2)^{n_1+2}\right).$$ (7)

Then in the first term the factor $`\frac{1}{k^2-m_2^2}`$ is considered as the expanding factor and the factor $`T_{m_1^2}^{n_1+1}\frac{1}{(k-q)^2-m_1^2}`$ as the cancelling factor. Again we subtract from and add to the cancelling factor its Taylor expansion $`T_k^{2n_2}`$ in $`k`$. Then in the term containing the subtracted cancelling factor we can make the Taylor expansion $`T_{m_2^2}^{n_2+1}`$ of the expanding factor $`\frac{1}{k^2-m_2^2}`$ (in the same way as the expansion in $`m_1^2`$ during the derivation of eq.(6)). Finally we come to the expansion (after the nullification of massless tadpoles)

$$\int \frac{d^Dk}{(k^2-m_1^2)[(k+q)^2-m_2^2]}=\int d^Dk\,T_{m_2^2}^{n_2+1}\frac{1}{k^2-m_2^2}T_{m_1^2}^{n_1+1}\frac{1}{(k-q)^2-m_1^2}+\int d^Dk\frac{1}{k^2-m_2^2}T_k^{2n_2}T_{m_1^2}^{n_1+1}\frac{1}{(k-q)^2-m_1^2}+\int d^Dk\frac{1}{k^2-m_1^2}T_{m_2^2}^{n_2+1}T_k^{2n_1}\frac{1}{(k+q)^2-m_2^2}+O((m_1^2)^{n_1+2},(m_2^2)^{n_2+2}),$$ (8)

where in the last term (which is nothing but the last term in eq.(7)) we applied the Taylor expansion in $`m_2`$, which does not affect the integrations. This result agrees with the recipe explicitly formulated in for the large $`q^2`$ expansion of propagator integrals. The new point here is the simple derivation of the expansion.

As the next application of the method we shall consider the Sudakov form factor, which is a typically Minkowskian case not reducible to the Euclidean region of momenta. The corresponding one-loop Feynman integral is

$$\int \frac{d^Dk}{(k^2-m^2)(k^2-2p_1k)(k^2-2p_2k)},$$ (9)

where the external momenta are on the mass shell: $`p_1^2=p_2^2=0`$. The integral will be expanded in the small mass $`m^2`$, which means the expansion in terms of the ratio $`\frac{m^2}{q^2}`$, where $`q=p_1-p_2`$. This expansion was obtained in . Here we give the simple derivation of the expansion with the method of cancelling factors.

The expansion of the integrand in $`m^2`$ generates infrared singularities at $`k^2=0`$. We distinguish here the expanding factor $`\frac{1}{k^2-m^2}`$ and two cancelling factors, $`\frac{1}{k^2-2p_1k}`$ and $`\frac{1}{k^2-2p_2k}`$. For each cancelling factor we subtract and add its expansion in $`k^2`$,

$$T_{k^2}^n\frac{1}{k^2-2p_ik}=-\sum _{j=0}^{n}\frac{(k^2)^j}{(2p_ik)^{j+1}},i=1,2.$$

In this way we get

$$\int \frac{d^Dk}{(k^2-m^2)(k^2-2p_1k)(k^2-2p_2k)}=\int d^Dk\frac{1}{k^2-m^2}\left[\left(1-T_{k^2}^n+T_{k^2}^n\right)\frac{1}{k^2-2p_1k}\right]\left[\left(1-T_{k^2}^n+T_{k^2}^n\right)\frac{1}{k^2-2p_2k}\right]=\int d^Dk\frac{1}{k^2-m^2}\left[(1-T_{k^2}^n)\frac{1}{k^2-2p_1k}\right]\left[(1-T_{k^2}^n)\frac{1}{k^2-2p_2k}\right]+\int d^Dk\frac{1}{k^2-m^2}\frac{1}{k^2-2p_2k}T_{k^2}^n\frac{1}{k^2-2p_1k}+\int d^Dk\frac{1}{k^2-m^2}\frac{1}{k^2-2p_1k}T_{k^2}^n\frac{1}{k^2-2p_2k},$$ (10)

where in the last equation we took into account that the terms containing two factors with $`T_{k^2}^n`$ are zero. (Here there is a technical subtlety. Dimensional regularization does not regularize the individual terms in the last equation, although it regularizes the original integral (9). Strictly speaking, we should introduce the analytic regularization $`\frac{1}{k^2-2p_ik}\to \frac{1}{(k^2-2p_ik)^{1+\lambda _i}},i=1,2`$, in eq.(9) in addition to dimensional regularization, where $`\lambda _i`$ are the arbitrary parameters of the analytic regularization.
But this technical subtlety does not change the derivation of the expansion and the final result.) Then in the first term of the last equation we can expand the expanding factor as

$$T_{m^2}^n\frac{1}{k^2-m^2}=\sum _{j=0}^{n}\frac{(m^2)^j}{(k^2)^{j+1}}$$

without generating infrared singularities at $`k^2=0`$. This is because the first square bracket (the first subtracted cancelling factor) behaves as $`O\left(\frac{(k^2)^{n+1}}{(2p_1k)^{n+2}}\right)`$ and the second square bracket (the second subtracted cancelling factor) behaves as $`O\left(\frac{(k^2)^{n+1}}{(2p_2k)^{n+2}}\right)`$ at small $`k^2`$. The scalar product $`2p_1k`$ can be small simultaneously with $`k^2`$, and then the first square bracket does not suppress the infrared singularities at small $`k^2`$. In this case the second square bracket ensures the suppression of the infrared singularities at $`k^2=0`$, and vice versa. (The scalar products $`2p_1k`$ and $`2p_2k`$ are not simultaneously small at small $`k^2`$, since the momenta $`p_1`$ and $`p_2`$ are different.) That is why we need two cancelling factors here. Finally we get the following expansion (after taking into account that the terms containing two factors with Taylor expansions are zero):

$$\int \frac{d^Dk}{(k^2-m^2)(k^2-2p_1k)(k^2-2p_2k)}=\int d^Dk\,T_{m^2}^n\frac{1}{k^2-m^2}\frac{1}{k^2-2p_1k}\frac{1}{k^2-2p_2k}+\int d^Dk\frac{1}{k^2-m^2}\frac{1}{k^2-2p_2k}T_{k^2}^n\frac{1}{k^2-2p_1k}+\int d^Dk\frac{1}{k^2-m^2}\frac{1}{k^2-2p_1k}T_{k^2}^n\frac{1}{k^2-2p_2k}+O\left((m^2)^{n+1}\right).$$ (11)

Here the second and third integrals are not individually regularized by dimensional regularization, as was already mentioned above, but their sum is regularized.

To conclude, in the present paper we described the method of cancelling factors for the expansion of integrals in external parameters, giving three examples of its applications.

The author gratefully acknowledges the support by the Russian Fund for Basic Research under contract 97-02-17065 and by the Volkswagen Foundation under contract No. I/73611.
# CO(4–3) and dust emission in two powerful high-z radio galaxies, and CO lines at high redshifts

## 1 Introduction

The successful detection of CO emission in many local ($`\mathrm{z}\lesssim 0.3`$) IRAS galaxies revealed large reservoirs of molecular gas mass that fuel nuclear starbursts and active nuclei (e.g. Tinney et al. 1988; Sanders, Scoville & Soifer 1991; Solomon 1997). In the most IR-luminous galaxies ($`\mathrm{L}_{\mathrm{FIR}}\gtrsim 10^{11}\mathrm{L}_{\odot }`$) gas-rich mergers and interactions are thought to play a crucial role in causing the rapid accumulation of molecular gas in the center and thus initiating the onset of spectacular starbursts. Subsequent high-resolution imaging of the CO emission in Ultra Luminous IR Galaxies (hereafter ULIRGs) (e.g. Scoville, Yun & Bryant 1997; Downes & Solomon 1998) confirmed this picture by demonstrating the presence of molecular gas with high mass surface densities ($`\mathrm{\Sigma }(\mathrm{H}_2)\gtrsim 5\times 10^3\mathrm{M}_{\odot }\mathrm{pc}^{-2}`$) and large velocity widths ($`\sim 500`$ km s<sup>-1</sup>) confined within a few hundred parsec in the nuclear regions.

Besides being responsible for some of the most spectacular starbursts in the local Universe, the merging process is thought to play an important role in galaxy formation at high redshift, especially for spheroidal systems. The favorite formation model for these systems is that they grow from the hierarchical clustering of gas-rich “fragments” where the oldest stars ($`\sim 10`$ Gyr) have already formed (e.g. Baron & White 1987; White 1996). The subsequent intense star formation rapidly consumes the gas mass and finally gives rise to a present-day giant elliptical galaxy with its red colors, evolved stellar population and large mass, but devoid of substantial amounts of gas and dust. Powerful high-z radio galaxies (hereafter HzRGs) may be the progenitors of the massive present-day ellipticals hosting a radio-loud AGN. Since such galaxies seemed to have already settled into ellipticals (Best, Longair, & Röttgering 1997, 1998 and 1999) by $`\mathrm{z}\sim 1`$, it is natural to assume that HzRGs at $`\mathrm{z}>2`$, with their frequently irregular morphologies (e.g. Pentericci et al. 1999), are where merger-induced large-scale starbursts may have occurred.

The first detection of CO in IRAS 10214+4724 at z=2.286 (Brown & Vanden Bout 1991, 1992) and the subsequent detection of its mm/sub-mm continuum from dust (see Downes et al. 1992 and references therein) initiated ongoing efforts to detect CO and mm/sub-mm emission from the copious amounts of gas and dust expected in the high-z counterparts of ULIRGs and in galaxies undergoing their formative starbursts. The large negative K-corrections expected for the thermal dust spectrum and the high-J CO lines (e.g. Hughes 1996; van der Werf & Israel 1996) of high-z galaxies, as well as gravitational lensing, help to make such objects detectable with the current mm/sub-mm instruments. Lensing amplifies the expected emission but usually complicates the gas and dust mass estimates. The galaxy IRAS 10214+4724 and many high-z CO/sub-mm luminous objects like the QSO H1413+117 (Barvainis et al. 1994), the recent sub-mm selected galaxies (Frayer et al. 1998; Frayer et al. 1999) and the BAL quasar APM 08279+5255 (Downes et al. 1999) are lensed. Recently there have been detections of large amounts of dust in HzRGs (e.g. Hughes 1996; Röttgering et al. 1998; Hughes & Dunlop 1998; Best et al. 1998),
and a systematic survey is now being conducted to detect sub-mm emission from dust for all $`\mathrm{z}>3`$ radio galaxies (Röttgering et al. 1999) in order to understand the large range of FIR luminosities of these objects. The most notable examples are the extremely FIR-luminous radio galaxies 8C 1435+635 (Ivison et al. 1998) and 4C 41.17 (Dunlop et al. 1994; Chini & Krügel 1994), whose properties do not appear to be influenced by gravitational lensing. In both cases the inferred gas masses are large enough ($`\sim 10^{11}\mathrm{M}_{\odot }`$) to suggest that the large FIR luminosities of these objects ($`\sim 10^{13}\mathrm{L}_{\odot }`$) are due to the formative starbursts. However, despite systematic efforts (Evans et al. 1996; van Ojik 1997), no CO emission has been detected from these two objects or any other powerful HzRGs.

In this paper we present the detection of dust continuum and CO J=4–3 line emission in two powerful high-z radio galaxies and discuss the latter in the context of the earlier unsuccessful attempts to detect CO in this class of objects. Throughout this work we assume $`\mathrm{H}_0=75\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$ and $`\mathrm{q}_0=0.5`$.

## 2 Observations and data reduction

### 2.1 The SCUBA observations

We used the Sub-mm Common User Bolometer Array (SCUBA) at the 15-m James Clerk Maxwell Telescope (JCMT; operated by the Joint Astronomy Center in Hilo, Hawaii on behalf of the parent organizations PPARC in the United Kingdom, the National Research Council of Canada and The Netherlands Organization for Scientific Research) in the photometry mode to observe 4C 60.07 on 1998 April 15 and 6C 1909+722 on 1997 October 4 and 5, as part of an ongoing program to observe all HzRGs at $`\mathrm{z}>3`$ (Röttgering et al. 1999). SCUBA is a dual-camera system cooled to $`\sim 0.1`$ K, allowing sensitive observations with two arrays simultaneously. The short-wavelength array at 450 $`\mu `$m contains 91 pixels and the long-wavelength array at 850 $`\mu `$m 37 pixels. The resolution is diffraction-limited, with $`\mathrm{HPBW}\approx 15^{\prime \prime }`$ (850 $`\mu `$m) and $`\mathrm{HPBW}\approx 8^{\prime \prime }`$ (450 $`\mu `$m). For a full description of the instrument see Holland et al. (1998).

We employed the recommended rapid beam switching at a frequency of 8 Hz and a beam throw of $`60^{\prime \prime }`$ in azimuth. The pointing and focus of the telescope were monitored frequently using CRL 618 and QSO 0836+710, and the typical rms pointing error was $`\sim 3^{\prime \prime }`$. Frequent skydips were used to deduce the atmospheric extinction. Typical opacities at 850 $`\mu `$m were $`\tau \approx 0.13`$ for our 1998 run and $`\tau \approx 0.20`$ for our 1997 run. Beam maps of Mars, CRL 618 and Uranus were used to derive the gain $`\mathrm{C}_{850}\approx 280\mathrm{mJy}\mathrm{beam}^{-1}\mathrm{mV}^{-1}`$. The data were flatfielded and corrected for extinction, and sky noise was removed before they were co-added, following standard procedures outlined in Stevens et al. (1997). The estimated rms of our measurements is consistent with what is expected from the total integration times and the NEFDs at the sky conditions of our runs. The systematic uncertainty in the flux density scale at 850 $`\mu `$m is of the order of $`10\%`$ (Holland, private communication).

### 2.2 The IRAM interferometric observations

We used the IRAM Plateau de Bure Interferometer (PdBI; IRAM is supported by INSU/CNRS, France, MPG, Germany, and IGN, Spain)
between 1998 April 20 and 1998 May 15 in the D configuration to observe CO J=4–3 ($`\nu _{\mathrm{rest}}=461.040`$ GHz) in 6C 1909+722 at z=3.534 (van Breugel, private communication) in four tracks ranging from 4 to 14 hours each. The same configuration was also used on 1998 November 8 and 29 to obtain two 8-hr tracks for 4C 60.07 at z=3.788 (Chambers et al. 1996). The correlator setup used for the 3 mm receivers involved 4 × 160 MHz modules covering a total bandwidth of 560 MHz, tuned in single sideband. The band center was positioned at 96.290 GHz for 4C 60.07 and 101.770 GHz for 6C 1909+722. The 1 mm receivers were used in double-sideband mode with the two remaining correlator modules as backends, to simultaneously observe the continuum at 240 GHz. Typical SSB system temperatures were $`\sim `$120 K (96.290 GHz) and $`\sim `$150 K (101.770 GHz), and at 240 GHz they were $`400-700`$ K (DSB). Bandpass calibration was obtained using 3C 454.3, and amplitude and phase calibration were obtained using IAP-0444+634 and NRAO 150. For the 3 mm receivers the residual phase noise was $`\sim 20^{\circ }`$ and the amplitude noise $`\sim 5\%`$. For the 1 mm receivers these figures are $`\sim 30^{\circ }`$ and $`\sim 20\%`$. The flux density scale is accurate to within $`\sim 10\%`$.

After calibration and editing, the data were processed with the standard NRAO AIPS software package. At 3 mm no continuum emission was detected in the line-free channels and no continuum subtraction was performed. The maps were produced using the task MX, and in cases of low S/N CLEAN was not applied. The rms noise of the maps is consistent with the thermal noise expected for the total observing time and average $`\mathrm{T}_{\mathrm{sys}}`$ of the runs.

## 3 Results

Both radio galaxies are detected at 850 $`\mu `$m and in CO J=4–3, and they are the brightest sub-mm objects found so far in the ongoing SCUBA survey of HzRGs at $`\mathrm{z}>3`$, this being the main reason for selecting them for the follow-up sensitive CO observations. In 4C 60.07 the CO(4–3) emission is clearly resolved (Figure 1). A remarkable characteristic of the CO emission in this galaxy is its large velocity range ($`\sim 1000\mathrm{km}\mathrm{s}^{-1}`$) and its two distinct components with line centers separated by $`\sim 700`$ km s<sup>-1</sup>. One component has $`\mathrm{\Delta }\mathrm{V}_{\mathrm{FWHM}}\approx 550\mathrm{km}\mathrm{s}^{-1}`$ (extending beyond the observed band), while the other is narrower, with $`\mathrm{\Delta }\mathrm{V}_{\mathrm{FWHM}}\approx 150\mathrm{km}\mathrm{s}^{-1}`$. The feature with the narrow linewidth coincides with the position of the suspected radio core and thus the AGN, but a significant part of the broad-linewidth component is clearly offset from it. This aspect of the CO J=4–3 emission is reminiscent of the CO J=5–4 emission detected in the radio-quiet quasar BR 1202-0725 at z=4.69 (Omont et al. 1996), where a broad and a narrow linewidth component are also detected. In that object the narrow-linewidth component is centered on the AGN position while the broad one is offset by $`\sim 4^{\prime \prime }`$. The velocity-integrated map in Figure 2 shows extended CO emission with two peaks that are $`\sim 7^{\prime \prime }`$ apart ($`\sim 30`$ kpc at $`\mathrm{z}=3.791`$). Towards the same region 1.25 mm emission from dust is also detected, as shown in Figure 3. Both the CO and 1.25 mm continuum maps are overlaid on a map of the non-thermal radio emission at 6 cm (Carilli et al. 1997).
4C 60.07 is a powerful radio galaxy ($`\mathrm{P}_{4\mathrm{c}\mathrm{m}}\approx 1.6\times 10^{27}`$ W Hz<sup>-1</sup>) with a Fanaroff-Riley II (FR II) edge-brightened double-lobe morphology (Fanaroff & Riley 1974); hence there is the possibility of a significant contribution of the non-thermal emission to the observed 1.25 mm and 850 $`\mu `$m continuum. Extrapolating the non-thermal flux density of $`\mathrm{S}_{6\mathrm{c}\mathrm{m}}=19`$ mJy of its brightest component with its associated spectral index of $`\alpha _{6\mathrm{c}\mathrm{m}}^{3.6\mathrm{cm}}=-1.4`$ (spectral index defined as $`\mathrm{S}_\nu \propto \nu ^\alpha `$; De Breuck, private communication) yields a negligible contribution ($`\sim 1\%`$) in both the 1.25 mm and 850 $`\mu `$m bands. Further confirmation of the thermal origin of the 1.25 mm continuum is offered by the high-resolution map in Figure 3, which shows that most of this emission is not associated with the observed non-thermal radio continuum, and the peak 1.25 mm brightness is $`\sim 4^{\prime \prime }`$ (17.6 kpc) offset from the radio core where the AGN probably resides (De Breuck, private communication). Deep K-band images obtained with Keck (van Breugel et al. 1998) reveal faint emission towards the weak eastern radio component but none from the regions with the brightest 1.25 mm continuum. Since at $`\mathrm{z}\sim 3.8`$ the K-band is the rest-frame B-band, this is consistent with the brightest 1.25 mm emission marking the place of a massive gas/dust reservoir with large amounts of extinction.

In order to obtain the highest S/N maps for the two distinct CO J=4–3 features we averaged all the appropriate channels, and the two resulting maps are shown in Figure 4, overlaid with the 1.25 mm continuum. From these maps it becomes obvious that the component with the largest dynamical and molecular gas mass is closely associated with the peak of the dust emission, as expected. In the case of the radio galaxy 6C 1909+722 a map of the averaged CO J=4–3 emission and its spectrum are shown in Figure 5. There it can be seen that its linewidth is also rather large. Extrapolation of the non-thermal flux $`\mathrm{S}_{6\mathrm{c}\mathrm{m}}=59`$ mJy with the observed power law $`\alpha _{20\mathrm{c}\mathrm{m}}^{6\mathrm{c}\mathrm{m}}=-1.3`$ (De Breuck, private communication) yields a non-thermal contribution of $`\sim 2\%`$ at 850 $`\mu `$m. All the observed parameters for the two HzRGs are tabulated in Table 1.

## 4 Discussion

The two galaxies have extended radio emission and are unlikely candidates for gravitationally lensed objects. In the case of 4C 60.07, high-resolution radio images show a classical FR II source. Sensitive 6 cm (Carilli et al. 1997), K-band (Chambers et al. 1996) and deep K′-band images (van Breugel et al. 1998) do not reveal any obviously lensed features. For 6C 1909+722, HST images at 7000 Å do not reveal any lensed features either (Pentericci, private communication). Henceforth we assume that the two sources are unlensed and organize the discussion as follows:

1. Estimate the molecular gas mass implied by the CO J=4–3 luminosity, assuming global gas excitation conditions similar to the ones found in local starburst galaxies.

2. Briefly explore the influence of various galactic environments on the molecular gas excitation conditions over large scales. Special emphasis is given to the effects of the various environments on the high-J CO lines and their detectability at high redshift.
3. Use the mm/sub-mm data to find the dust masses and FIR luminosities and estimate the $`\mathrm{M}(\mathrm{H}_2)/\mathrm{M}_{\mathrm{dust}}`$ ratios, which we then compare to the values in the local Universe.

4. Discuss the evolutionary status of these objects in terms of their gas mass relative to their dynamical mass, and their star formation rates and efficiencies. We focus particularly on 4C 60.07, where the two distinct molecular gas components strongly suggest a large-scale starburst event, unlike the usually nuclear starbursts observed in local ULIRGs.

### 4.1 Molecular gas content

The estimate of the molecular gas mass from the CO J=1–0 luminosity involves the so-called standard conversion factor $`\mathrm{X}_{\mathrm{CO}}`$ (e.g. Young & Scoville 1982; Bloemen 1985; Dickman et al. 1986; Young & Scoville 1991), which is “calibrated” using molecular clouds in the Milky Way. From the CO(4–3) luminosity $`\mathrm{M}(\mathrm{H}_2)`$ is then given by

$$\mathrm{M}(\mathrm{H}_2)=\left(\frac{\mathrm{X}_{\mathrm{CO}}}{\mathrm{r}_{43}}\right)\frac{\mathrm{c}^2}{2\mathrm{k}\nu _{43}^2}\left[\frac{\mathrm{D}_\mathrm{L}^2}{1+\mathrm{z}}\right]\int _{\mathrm{\Delta }\mathrm{v}}\mathrm{S}_{\nu _{\mathrm{obs}}}\mathrm{dv},$$ (1)

where $`\mathrm{D}_\mathrm{L}=2\mathrm{c}\mathrm{H}_0^{-1}(1+\mathrm{z}-\sqrt{1+\mathrm{z}})`$ is the luminosity distance for $`\mathrm{q}_0=0.5`$, $`\nu _{43}`$ is the rest-frame frequency of the CO J=4–3 transition, $`\mathrm{r}_{43}`$ is the (4–3)/(1–0) line ratio of the area/velocity-integrated brightness temperatures, and $`\mathrm{S}_{\nu _{\mathrm{obs}}}`$ is the observed flux density. Substituting astrophysical units yields

$$\mathrm{M}(\mathrm{H}_2)=9.77\times 10^9\left(\frac{\mathrm{X}_{\mathrm{CO}}}{\mathrm{r}_{43}}\right)\frac{\left(1+\mathrm{z}-\sqrt{1+\mathrm{z}}\right)^2}{1+\mathrm{z}}\left[\frac{\int _{\mathrm{\Delta }\mathrm{v}}\mathrm{S}_{\nu _{\mathrm{obs}}}\mathrm{dv}}{\mathrm{Jy}\mathrm{km}\mathrm{s}^{-1}}\right]\mathrm{M}_{\odot }.$$ (2)

The dependence of $`\mathrm{X}_{\mathrm{CO}}`$ on the ambient conditions of the $`\mathrm{H}_2`$ gas has been extensively explored (e.g. Bryant & Scoville 1996; Sakamoto 1996). An important recent result is that in the intense starburst environments of ULIRGs the warm, diffuse gas phase dominating the <sup>12</sup>CO emission does not consist of virialized clouds, hence leading to an overestimate of $`\mathrm{M}(\mathrm{H}_2)`$ when the standard $`\mathrm{X}_{\mathrm{CO}}\approx 5\mathrm{M}_{\odot }(\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^2)^{-1}`$ is used (Solomon et al. 1997; Downes & Solomon 1998). A more appropriate value for such ISM environments, and hence their high-z counterparts, is $`\mathrm{X}_{\mathrm{CO}}\approx 1\mathrm{M}_{\odot }(\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^2)^{-1}`$ (Downes & Solomon 1998).

A plausible value for $`\mathrm{r}_{43}`$ can be found by assuming an ISM environment similar to that of a local “average” starburst. The average line ratios measured towards such systems are $`\mathrm{r}_{21}\approx 0.9`$ (e.g. Braine & Combes 1992; Aalto et al. 1995) and $`\mathrm{r}_{32}\approx 0.64`$ (Devereux et al. 1994), while the <sup>12</sup>CO/<sup>13</sup>CO J=1–0, 2–1 ratios are $`\mathrm{R}_{10}\approx \mathrm{R}_{21}\approx 13`$ (Aalto et al. 1995). A Large Velocity Gradient (LVG) code with the aforementioned ratios as inputs was employed to model the physical conditions of the gas (e.g. Richardson 1985).
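To make eq.(2) concrete, the short script below evaluates it for the parameters adopted in this section. The velocity-integrated flux used is only an illustrative placeholder of roughly the right magnitude (Table 1 is not reproduced in this excerpt), and $`\mathrm{r}_{43}=0.45`$ anticipates the LVG fit described next:

```python
from math import sqrt

def m_h2(z, sdv_jy_kms, x_co=1.0, r43=0.45):
    """Eq. (2): M(H2) in solar masses, for H0 = 75 km/s/Mpc and q0 = 0.5."""
    geometry = (1 + z - sqrt(1 + z)) ** 2 / (1 + z)  # from D_L^2 / (1 + z)
    return 9.77e9 * (x_co / r43) * geometry * sdv_jy_kms

# Illustrative integrated flux of 2.5 Jy km/s (a placeholder, not the
# measured Table 1 value) at the redshift of 4C 60.07:
print(f"M(H2) ~ {m_h2(3.79, 2.5):.2e} Msun")  # -> ~7.7e+10, of order 8e10
```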
The conditions corresponding to the best fit are $`\mathrm{T}_{\mathrm{kin}}=50`$ K, $`\mathrm{n}(\mathrm{H}_2)\sim 10^3`$ cm<sup>-3</sup> and $`[\mathrm{CO}/\mathrm{H}_2]/(\mathrm{dV}/\mathrm{dr})=3\times 10^{-5}(\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^{-1})^{-1}`$, similar to the ones deduced for the high-z starburst IRAS 10214+4724 (Solomon, Downes & Radford 1992). For these conditions $`\mathrm{r}_{43}=0.40`$, which for $`\mathrm{T}_{\mathrm{CMB}}=(1+\mathrm{z})\times 2.75\mathrm{K}\approx 13\mathrm{K}`$ is slightly enhanced to $`\mathrm{r}_{43}=0.45`$. This ratio and $`\mathrm{X}_{\mathrm{CO}}=1\mathrm{M}_{\odot }(\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^2)^{-1}`$ yield $`\mathrm{M}(\mathrm{H}_2)\approx 8\times 10^{10}\mathrm{M}_{\odot }`$ for 4C 60.07 and $`\mathrm{M}(\mathrm{H}_2)\approx 4.5\times 10^{10}\mathrm{M}_{\odot }`$ for 6C 1909+722. These are most likely lower limits, since even in ULIRGs the ratio $`\mathrm{r}_{43}`$ can be significantly smaller but not much larger. Indeed, the assumption of an “average” starburst excitation environment, while convenient, hides the fact that there is a wide range of physical conditions for the gas in IR-luminous galaxies, and in much of that range CO(J+1→J), $`\mathrm{J}+1>3`$ can be significantly weaker than CO(1–0).

As we shall see, the potential faintness of the high-J CO transitions may partly explain why attempts to detect high-z CO have been less successful than the ones trying to detect mm/sub-mm continuum from dust. Especially for HzRGs, two large systematic searches (Evans et al. 1996; van Ojik 1997) gave null results. On the other hand, observations of the sub-mm continuum fare better, with 7 such objects detected at 850 $`\mu `$m (see van der Werf 1999 and references therein). A similar situation applies also in other types of high-z objects. While the uncertainties of the gas-to-dust ratio and a rising $`\mathrm{L}_{\mathrm{FIR}}/\mathrm{L}_{\mathrm{CO}}`$ ratio with FIR luminosity (van der Werf 1999) can still adequately account for this, the wide range of gas excitation observed in IR-luminous galaxies cannot but have an effect on the luminosity of the CO lines, especially at high J levels.

### 4.2 The high-J CO transitions in ULIRGs at high z

In the attempts to detect CO in high-z objects various transitions are observed, depending on the particular redshift and the receiver used. For $`\mathrm{z}\gtrsim 2`$ mostly CO(J+1→J) with $`\mathrm{J}+1\ge 3`$ is observed, and in the systematic searches in HzRGs transitions with J+1=4–9 were routinely observed (Evans et al. 1996; van Ojik 1997). It has been argued that observing higher J transitions can offset the dimming due to the distance in high-z objects, in a manner similar to the negative K-corrections of the thermal spectrum from dust (van der Werf & Israel 1996). In this picture the warm and dense gas in an “average” starburst environment thermalizes the CO transitions up to J+1=6. However, the luminosity of the high-J CO transitions may be much smaller, for two basic reasons:

1) The presence of a warm ($`\mathrm{T}_{\mathrm{kin}}=50-100`$ K) but diffuse ($`\mathrm{n}(\mathrm{H}_2)\sim 10^2-10^3`$ cm<sup>-3</sup>) and subthermally excited (for $`\mathrm{J}+1>2`$) gas phase that dominates the <sup>12</sup>CO emission in ULIRGs (Aalto et al. 1995; Downes & Solomon 1998). The most conspicuous such galaxy is Arp 220, frequently used as the standard ULIRG for comparison with high-z FIR-luminous sources. In this source a low $`\mathrm{r}_{21}=0.53`$ and high $`\mathrm{R}_{10}>20`$, $`\mathrm{R}_{21}=18`$ ratios are observed (Aalto et al. 1995; Papadopoulos & Seaquist 1998), in contrast to the “average” starburst ratios mentioned previously.
1995; Papadopoulos & Seaquist 1998), in contrast to the “average” starburst ratios mentioned previously. Another important characteristic of this gas phase is the moderate optical depths ($`\tau \approx 1-2`$) of the <sup>12</sup>CO (1–0) transition (Aalto et al. 1995). 2) A large reservoir of cold and/or subthermally excited gas extending beyond the warm starbursting nuclear region of an ULIRG. Such excitation gradients are observed when large beams that sample the extended emission are used (Papadopoulos & Seaquist 1998). The associated cool dust in the ULIRG VV 114 was imaged recently with SCUBA (Frayer et al. 1999) and extremely cold gas ($`\mathrm{T}_{\mathrm{kin}}=7-10`$ K) is inferred over large scales for starbursts like IC 5135 and NGC 7469 (Papadopoulos & Seaquist 1998). This gas phase, if present, can easily dominate the global CO excitation, especially in high-z systems, where a beam of $`\approx 5^{\prime \prime }`$ at $`\mathrm{z}\approx 2`$ corresponds to $`\mathrm{L}\approx 30`$ kpc.

Here we must stress that the effectiveness of a cold gas component in suppressing the observed global gas excitation is not altered by the higher CMB temperature at high z. Indeed, while the higher CMB temperature enhances the populations of the high J levels of CO, it also corresponds to a higher background against which the respective lines must be detected. Moreover, the effective temperature of cold gas at high z is not simply the sum of the temperature of this gas phase at z=0 and $`(1+\mathrm{z})\times 2.75`$ K, as one might naively assume. This can be demonstrated with some simple arguments. Obviously the excitation temperature $`\mathrm{T}_{\mathrm{exc}}`$ of any collisionally excited line at any redshift is bounded as

$$(1+\mathrm{z})\times \mathrm{T}_{\mathrm{cmb}}\le \mathrm{T}_{\mathrm{exc}}\le \mathrm{T}_{\mathrm{kin}},$$ (3)

where $`\mathrm{T}_{\mathrm{cmb}}=2.75`$ K is the present epoch CMB temperature and $`\mathrm{T}_{\mathrm{kin}}`$ is the gas kinetic temperature. If we further assume that on large scales in the ISM of a galaxy thermodynamic equilibrium exists between gas and dust, i.e. $`\mathrm{T}_{\mathrm{dust}}=\mathrm{T}_{\mathrm{kin}}`$, then for a dust emissivity of $`\alpha =2`$ the energy balance within a typical giant molecular cloud is described by

$$\mathrm{U}_{\mathrm{ISRF}}+\mathrm{U}_{\mathrm{mech}}=\mathrm{cooling}\mathrm{rate}\propto \left[\mathrm{T}_{\mathrm{dust}}(\mathrm{z})^6-2.75^6\times (1+\mathrm{z})^6\right],$$ (4)

where $`\mathrm{U}_{\mathrm{ISRF}}`$ is the energy density of the interstellar radiation field (O, B, A stars) that heats the grains directly and $`\mathrm{U}_{\mathrm{mech}}`$ is the “mechanical” heating energy density deposited to the molecular clouds (e.g. supernovae shocks, turbulent cascade, cloud-cloud collisions) that first heats the gas and then the dust. These are the two major heating mechanisms of an average molecular cloud and since, a) more cooling processes are at play than only dust radiation (e.g. C<sup>+</sup>, CI, CO lines), b) usually $`\mathrm{T}_{\mathrm{gas}}\le \mathrm{T}_{\mathrm{dust}}`$ (except in the surfaces of UV-illuminated clouds; Hollenbach & Tielens 1997), it follows that the dust temperature yielded by the last equation will be an upper limit to the gas temperature. The aforementioned physical processes responsible for the heating of the ISM do not depend on the particular value of the CMB temperature at a given z.
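As an illustration of this point, the balance of Equation 4 fixes the dust temperature of a given ISM component at any redshift from its z=0 value; a minimal sketch (assuming the $`\alpha =2`$, i.e. $`\mathrm{T}^6`$, cooling law above), which reproduces the temperatures quoted in the next paragraph:

```python
# Sketch: the heating term of Equation 4 is redshift-invariant; solve for
# the equivalent dust temperature at redshift z (alpha = 2 assumed).
T_CMB0 = 2.75  # K, present-epoch CMB temperature

def t_dust_at_z(t_dust_z0, z):
    heating = t_dust_z0**6 - T_CMB0**6          # fixed by the ISM, z-invariant
    return (heating + (T_CMB0 * (1.0 + z))**6)**(1.0 / 6.0)

print(t_dust_at_z(50.0, 3.5))  # warm component: ~50.0 K, essentially unchanged
print(t_dust_at_z(10.0, 3.5))  # cold component: ~12.89 K, vs T_cmb = 12.375 K
```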
Thus the last equation, being redshift-invariant, allows us to find the high-z equivalent temperature of any given ISM component from its temperature in the local Universe (see also Combes, Maoli & Omont 1999). It is easy to see that at high z the low-temperature component of the local ISM has $`\mathrm{T}_{\mathrm{dust}}\approx (1+\mathrm{z})\times \mathrm{T}_{\mathrm{cmb}}`$, which also means (Equation 3) that lines become thermalized at the expense of becoming invisible against the enhanced CMB radiation field. Choosing a nominal redshift of $`\mathrm{z}=3.5`$ as representative of the redshifts of the two HzRGs, we find that for a warm ISM component of $`\mathrm{T}_{\mathrm{dust}}(\mathrm{z}=0)=50`$ K the $`\mathrm{T}_{\mathrm{dust}}(\mathrm{z}=3.5)`$ is essentially identical. However, for a cold component of $`\mathrm{T}_{\mathrm{dust}}=10`$ K it is $`\mathrm{T}_{\mathrm{dust}}(\mathrm{z}=3.5)=12.892`$ K, i.e. just $`\approx 0.5`$ K above the $`(1+\mathrm{z})\times 2.75\mathrm{K}=12.375`$ K temperature of the CMB at that redshift. In Table 2 we display the expected line ratios at $`\mathrm{z}=3.5`$ for three sets of conditions, namely: a) an “average” starburst, b) diffuse and warm gas of moderate <sup>12</sup>CO J=1–0 optical depth, and c) cold gas, representative of the phase that may exist in ULIRGs beyond the central starburst region.

This table shows that a wide range is expected for the CO(J+1$`\rightarrow `$J), J$`>`$2 line luminosities relative to CO J=1–0. Several published $`\mathrm{r}_{32}=(3-2)/(1-0)`$ line ratios (Lisenfeld et al. 1996) that sample the global CO emission in ULIRGs, and a recent survey of this ratio in many nearby galaxies (Mauersberger et al. 1999), reveal a large range of values, namely $`\mathrm{r}_{32}\approx 0.1-1`$; thus a similar or larger range is to be expected for the higher J transitions. Moreover, since the flux density ratio $`\mathrm{S}(\mathrm{J}+1\rightarrow \mathrm{J})/\mathrm{S}(1-0)=(\mathrm{J}+1)^2\mathrm{r}_{\mathrm{J}+1\mathrm{J}}`$ determines whether a high-J transition is easier to detect than J=1–0 (assuming equal sensitivities), it can be seen (Table 2) that CO J=3–2 is the highest transition for which this ratio is $`\ge 1`$ for all the expected conditions. Hence many non-detections of CO(J+1$`\rightarrow `$J), J$`>`$2 in high-z systems can be due to the globally sub-thermal excitation of these lines rather than a true molecular gas mass deficiency. At the same time this makes the upper limits on $`\mathrm{M}(\mathrm{H}_2)`$ from such non-detections much less stringent. For example, if the CO J=5–4 line were observed, and for $`\mathrm{X}_{\mathrm{CO}}=1\mathrm{M}_{\odot }(\mathrm{km}\mathrm{s}^{-1}\mathrm{pc}^2)^{-1}`$, the upper limit on $`\mathrm{M}(\mathrm{H}_2)`$ can be $`1/0.025\times 1/5=8`$ times larger than what is usually reported for this line (e.g. van Ojik 1998).

Finally, for gravitationally lensed objects with underlying steep gas excitation gradients, differential amplification can render line ratios useless for deducing the average gas excitation unless an accurate description of the lensing potential is available. Such gradients are expected in nuclear starbursts, where gas warm and dense enough to excite high-J CO lines is more centrally confined in the immediate area of the starburst. Recent mapping of CO J=4–3 in M 51 and NGC 6946 (Nieten et al. 1999) showed that even in more quiescent galaxies the highly excited gas is strongly concentrated in the center.
Under such circumstances differential amplification can enlarge the effective size of the high-J CO emitting region with respect to the low-J one and thus alter the observed global line ratios towards the ones expected for warm and dense gas. The frustrating aspect of this effect is that it can become progressively more severe for lines that are widely separated in J, i.e. exactly the ones whose ratios are most sensitive to the gas excitation conditions under normal circumstances. Clearly more observations of high-J CO lines in local ULIRGs are needed in order to reveal the range of their global luminosities and relative brightness distributions, and thus allow a better understanding of similar systems at high z.

### 4.3 Dust mass, the gas-to-dust ratio and $`\mathrm{L}_{\mathrm{FIR}}`$

There are many uncertainties associated with the estimate of the dust mass from mm/sub-mm measurements (e.g. Gordon 1995), with the uncertainty of the dust temperature being one of the most important ones (Hughes 1996). In the case of 4C 60.07 the detection of the dust continuum both at 1.25 mm and 850 $`\mu `$m allows us to place some broad constraints on the dust temperature as long as the exponent of the emissivity law is $`\alpha >1`$. Indeed, for an optically thin, isothermal reservoir of dust the flux ratio is

$$\mathrm{R}(\alpha ,\mathrm{T}_\mathrm{d})=\frac{\mathrm{S}_{850\mu \mathrm{m}}}{\mathrm{S}_{1.25\mathrm{mm}}}=1.47^{\alpha +3}\frac{\left(\mathrm{e}^{81/\mathrm{T}_\mathrm{d}}-1\right)^{-1}-0.0022}{\left(\mathrm{e}^{55/\mathrm{T}_\mathrm{d}}-1\right)^{-1}-0.0162},$$ (5)

where $`\mathrm{T}_\mathrm{d}`$ is the dust temperature and $`\alpha =1-2`$ is the exponent of the emissivity law; here 81 K and 55 K are the values of $`\mathrm{h}\nu _{\mathrm{em}}/\mathrm{k}`$ at the two observed bands for the redshift of 4C 60.07, and the constants 0.0022 and 0.0162 are the corresponding CMB terms. For 4C 60.07 we find $`\mathrm{R}(\alpha ,\mathrm{T}_\mathrm{d})=2.44\pm 0.73`$. We adopt an emissivity law of $`\alpha =2`$, which was deduced for other high-z objects, e.g. IRAS 10214+4724 (Downes et al. 1992) and 8C 1435+635 (Ivison et al. 1998), over a similar rest-frame spectral range. For this emissivity law the $`\pm 1\sigma `$ range of the observed $`\mathrm{R}(\alpha ,\mathrm{T}_\mathrm{d})`$ yields a range of $`\mathrm{T}_\mathrm{d}\approx 20-50`$ K. Given the starburst nature of these two objects and their luminous CO (4–3) line (the J=4 level is $`\approx 55`$ K above the ground state) we assume a dust temperature of $`\mathrm{T}_\mathrm{d}=50`$ K. Then the dust mass can be estimated from

$$\mathrm{M}_{\mathrm{dust}}=\frac{\mathrm{D}_\mathrm{L}^2\mathrm{S}_{\nu _{\mathrm{obs}}}}{\left(1+\mathrm{z}\right)\mathrm{k}_\mathrm{d}(\nu _{\mathrm{em}})}\left[\mathrm{B}(\nu _{\mathrm{em}},\mathrm{T}_\mathrm{d})-\mathrm{B}(\nu _{\mathrm{em}},\mathrm{T}_{\mathrm{cmb}}(\mathrm{z}))\right]^{-1},$$ (6)

where $`\nu _{\mathrm{em}}=(1+z)\nu _{\mathrm{obs}}`$ is the emitted frequency, $`\mathrm{B}(\nu ,\mathrm{T})`$ is the Planck function, $`\mathrm{T}_{\mathrm{cmb}}(\mathrm{z})`$ is the CMB temperature at redshift z, and $`\mathrm{k}_\mathrm{d}(\nu _{\mathrm{em}})=0.04(\nu _{\mathrm{em}}/250\mathrm{G}\mathrm{H}\mathrm{z})^2`$ m<sup>2</sup> kg<sup>-1</sup> is the adopted dust emissivity (e.g. Krügel, Steppe, & Chini 1990). For the assumed $`\mathrm{T}_\mathrm{d}`$ the CMB term can be omitted with $`\lesssim 3\%`$ error. Here it is worth noting that for cold dust this term can be significant. For example $`\mathrm{T}_{\mathrm{cmb}}(\mathrm{z}=4)=13.75`$ K, hence a dust component with $`\mathrm{T}_\mathrm{d}=15`$ K at that redshift is a very cold component, just $`\approx 1`$ K above $`\mathrm{T}_{\mathrm{cmb}}(\mathrm{z}=4)`$.
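The size of the CMB term in Equation 6 is easy to quantify; a minimal sketch comparing the two Planck terms (only the occupancy factors matter, since the $`\nu ^3`$ prefactors cancel):

```python
# Sketch: fraction of the dust Planck term removed by the CMB term in Eq. 6.
import math

def cmb_fraction(t_d, z, hnu_em_over_k):
    occ = lambda t: 1.0 / (math.exp(hnu_em_over_k / t) - 1.0)
    return occ(2.75 * (1.0 + z)) / occ(t_d)

# Cold dust (T_d = 15 K) at z = 4, observed at 1.25 mm (h nu_em/k ~ 57.4 K):
print(cmb_fraction(15.0, 4.0, 57.4))    # ~0.70
# Warm dust (T_d = 50 K) at the observed bands of 4C 60.07 (z = 3.791):
print(cmb_fraction(50.0, 3.791, 81.0))  # ~0.009 (850 um)
print(cmb_fraction(50.0, 3.791, 55.0))  # ~0.03  (1.25 mm)
```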
An observed wavelength of 1.25 mm corresponds to $`\mathrm{h}\nu _{\mathrm{em}}/\mathrm{k}\approx 57.4`$ K at $`\mathrm{z}=4`$, and for this wavelength the CMB term is $`\approx 70\%`$ of the dust term.

Assuming similar parameters for the dust emission in 6C 1909+722, the 850 $`\mu `$m flux densities yield $`\mathrm{M}_{\mathrm{dust}}=1.5\times 10^8\mathrm{M}_{\odot }`$ for both objects. This gives warm gas-to-dust ratios of $`\mathrm{M}(\mathrm{H}_2)/\mathrm{M}_{\mathrm{dust}}\approx 530`$ (4C 60.07) and $`\mathrm{M}(\mathrm{H}_2)/\mathrm{M}_{\mathrm{dust}}\approx 300`$ (6C 1909+722), i.e. within the range of the values found for local IRAS galaxies (e.g. Young et al. 1986; Stark et al. 1986; Young et al. 1989) and ULIRGs (Sanders et al. 1991). This suggests that in a Universe at $`\approx 10\%`$ of its current age these two HzRGs already have heavy-element abundances comparable to those of galaxies in the contemporary Universe. It is also worth noting that, while many uncertain factors enter into the estimate of the gas-to-dust ratio, the values chosen for them here are the ones considered plausible from the study of the ISM in local ULIRGs.

The FIR luminosities of the two HzRGs were estimated by assuming the underlying spectrum of an optically thin, isothermal reservoir of dust, hence

$$\mathrm{L}_{\mathrm{FIR}}=\int _0^{\mathrm{\infty }}\mathrm{L}_{\nu _{\mathrm{em}}}d\nu _{\mathrm{em}}=4\pi \mathrm{M}_{\mathrm{dust}}\int _0^{\mathrm{\infty }}\mathrm{k}_\mathrm{d}(\nu _{\mathrm{em}})\mathrm{B}(\nu _{\mathrm{em}},\mathrm{T}_\mathrm{d})d\nu _{\mathrm{em}}.$$ (7)

After using Equation 6 to substitute for $`\mathrm{M}_{\mathrm{dust}}`$, this finally yields

$$\mathrm{L}_{\mathrm{FIR}}=4\pi \lambda (\alpha )\mathrm{D}_\mathrm{L}^2\mathrm{x}^{-(\alpha +4)}\left(\mathrm{e}^\mathrm{x}-1\right)\mathrm{S}_{\nu _{\mathrm{obs}}}\nu _{\mathrm{obs}},$$ (8)

where $`\mathrm{x}=\mathrm{h}\nu _{\mathrm{em}}/\mathrm{kT}_\mathrm{d}`$ and $`\lambda (\alpha )`$ is a numerical constant that depends on the emissivity law. In astrophysical units this becomes

$$\mathrm{L}_{\mathrm{FIR}}=2\times 10^7\left(1+\mathrm{z}-\sqrt{1+\mathrm{z}}\right)^2\lambda (\alpha )\mathrm{x}^{-(\alpha +4)}\left(\mathrm{e}^\mathrm{x}-1\right)\left(\frac{\mathrm{S}_{\nu _{\mathrm{obs}}}}{\mathrm{mJy}}\right)\left(\frac{\nu _{\mathrm{obs}}}{\mathrm{GHz}}\right)\mathrm{L}_{\odot }.$$ (9)

For the assumed $`\mathrm{T}_{\mathrm{dust}}=50`$ K the last expression will yield a lower limit to the true $`\mathrm{L}_{\mathrm{FIR}}`$, since in some ULIRGs the optical depth may become significant even at FIR frequencies (e.g. Solomon et al. 1997), and in a starburst environment dust can be warmer still. We find $`\mathrm{L}_{\mathrm{FIR}}\approx 1.5\times 10^{13}\mathrm{L}_{\odot }`$ for both HzRGs, a luminosity comparable to those of 8C 1435+635 and 4C 41.17 (Ivison et al. 1998 and references therein), the other two high-z radio galaxies detected in the sub-mm (rest-frame FIR) spectral range whose properties do not appear to be influenced by gravitational lensing.

### 4.4 The evolutionary status of 4C 60.07 and 6C 1909+722

The usual conclusion drawn when such large FIR luminosities are deduced for a galaxy is that we are witnessing it during a starburst phase. However, many of the IR-luminous galaxies in the local Universe, and certainly the two particular HzRGs, also harbor an AGN; thus part of the FIR luminosity may be due to AGN-heated dust. Nevertheless, recent studies of ULIRGs (Genzel et al. 1998; Downes & Solomon 1998) reveal that most of them are powered by recently formed massive stars.
Also, even when an AGN does make a significant contribution to $`\mathrm{L}_{\mathrm{FIR}}`$, it is usually $`\lesssim 30\%`$ (Genzel et al. 1998; Downes & Solomon 1998). Therefore the qualitative arguments behind converting FIR luminosities to star formation rates are not likely to change even if an AGN is present. For 4C 60.07 the case for the starburst origin of its FIR luminosity is made stronger by the fact that most of its 1.25 mm dust emission does not emanate from the vicinity of the AGN. The FIR luminosity provides a measure of the current star formation rate (SFR) according to

$$\mathrm{SFR}=10^{-10}\mathrm{\Psi }\mathrm{L}_{\mathrm{FIR}}\mathrm{M}_{\odot }\mathrm{yr}^{-1},$$ (10)

where $`\mathrm{\Psi }\approx 1-6`$, depending on the assumed IMF (Telesco 1988 and references therein). Adopting the most conservative value $`\mathrm{\Psi }\approx 1`$, the FIR luminosity of the two HzRGs implies $`\mathrm{SFR}\approx 1500\mathrm{M}_{\odot }\mathrm{yr}^{-1}`$. Such a high rate of star formation can, if sustained, produce $`10^{11}-10^{12}\mathrm{M}_{\odot }`$ of stars in $`0.06-0.6`$ Gyr, and it is comparable to the SFRs found for 8C 1435+635 and 4C 41.17 (Ivison et al. 1998).

The efficiency with which such a burst converts molecular gas into stars is given by $`\mathrm{SFR}/\mathrm{M}(\mathrm{H}_2)`$ or, equivalently, by $`\mathrm{L}_{\mathrm{FIR}}/\mathrm{M}(\mathrm{H}_2)`$ (assuming the same $`\mathrm{\Psi }`$ for all the galaxies). On many occasions the quantity $`\mathrm{L}_{\mathrm{FIR}}/\mathrm{L}_{\mathrm{CO}}`$ is used instead, but since the $`\mathrm{X}_{\mathrm{CO}}`$ conversion factor is $`\approx 4-5`$ times smaller in ULIRG systems, the latter ratio can be misleading when a single value of $`\mathrm{X}_{\mathrm{CO}}`$ is used for galaxies spanning a FIR range from moderately IR-luminous galaxies ($`\mathrm{L}_{\mathrm{FIR}}\approx 10^{10}\mathrm{L}_{\odot }`$) to ULIRGs ($`\mathrm{L}_{\mathrm{FIR}}\ge 10^{12}\mathrm{L}_{\odot }`$). The star formation efficiencies estimated are $`\approx 190\mathrm{L}_{\odot }\mathrm{M}_{\odot }^{-1}`$ (4C 60.07) and $`\approx 330\mathrm{L}_{\odot }\mathrm{M}_{\odot }^{-1}`$ (6C 1909+722), comparable to the ones found for ULIRGs (Solomon et al. 1997) once $`\mathrm{M}(\mathrm{H}_2)`$ has been estimated using the same $`\mathrm{X}_{\mathrm{CO}}`$ factor. The implied star formation rates and efficiencies of these galaxies point towards spectacular starbursts occurring at high redshift. In addition, the large linewidths observed, which in the case of 4C 60.07 exceed 1000 km s<sup>-1</sup>, are routinely seen towards ULIRGs (e.g. Solomon et al. 1997) and are a kinematic signature of the ongoing mergers/interactions that trigger their enormous starbursts (Sanders, Scoville, & Soifer 1991).

The tantalizing question raised when high-z starbursts are found is to what extent the observed star formation episode is forming the bulk of their eventual stellar mass. The main difficulty in answering this important question stems from the uncertainties involved in deducing the total gas mass from CO measurements. If we assume the atomic-to-molecular gas ratio of $`\mathrm{M}(\mathrm{HI})/\mathrm{M}(\mathrm{H}_2)\approx 2`$ found for IRAS galaxies (Andreani, Casoli, & Gerin 1995) and for a large sample of spirals (Casoli et al. 1998), we obtain total gas contents of $`\mathrm{M}_{\mathrm{gas}}=1.0\times 10^{11}(\mathrm{X}_{\mathrm{CO}}/\mathrm{r}_{43})\mathrm{M}_{\odot }`$ (4C 60.07) and $`\mathrm{M}_{\mathrm{gas}}=6.0\times 10^{10}(\mathrm{X}_{\mathrm{CO}}/\mathrm{r}_{43})\mathrm{M}_{\odot }`$ (6C 1909+722).
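Collecting these numbers, the star formation rates, efficiencies and total gas masses quoted in this section follow from a few lines of arithmetic; a sketch ($`\mathrm{\Psi }=1`$ and $`\mathrm{M}(\mathrm{HI})/\mathrm{M}(\mathrm{H}_2)=2`$ as adopted in the text; the H<sub>2</sub> masses are the rounded values of Section 4.1):

```python
# Sketch: SFR (Eq. 10), star formation efficiency, and total gas mass.
L_FIR = 1.5e13                                      # Lsun, both HzRGs
M_H2 = {"4C 60.07": 8.0e10, "6C 1909+722": 4.5e10}  # Msun, Section 4.1

print(1e-10 * 1.0 * L_FIR)        # SFR with Psi = 1 -> ~1500 Msun/yr
for name, m in M_H2.items():
    print(name, round(L_FIR / m), f"{3.0 * m:.1e}")
# -> efficiencies ~190 and ~330 Lsun/Msun; M_gas ~ 2.4e11 and ~1.4e11 Msun
#    (the slightly lower text values follow from the unrounded H2 masses)
```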
For the “average” starburst value $`\mathrm{r}_{43}=0.45`$ this gives gas masses of the order of $`(1.3-2.2)\times 10^{11}\mathrm{M}_{\odot }`$. These are an order of magnitude higher than the typical gas masses of local ULIRGs (e.g. Downes & Solomon 1998) and constitute $`\approx (15-20)\%`$ of the total stellar mass of a typical elliptical associated with the 3CR radio galaxies at $`\mathrm{z}\approx 1`$ (Best, Longair & Röttgering 1998). Thus it is clear that these enormous starbursts still have a vast reservoir of gas to be eventually turned into stars.

The best evidence yet that, at least in the case of 4C 60.07, we are witnessing an extraordinary starburst and not a scaled-up version of an ULIRG comes from the fact that in this galaxy the CO J=4–3 emission from gas is distributed over scales of $`\approx 30`$ kpc. This is in contrast with the local ULIRGs, where most of the molecular gas and starburst activity is confined within the central $`1-5`$ kpc. Since starburst activity is usually co-extensive with the molecular gas reservoir, and this will be particularly true for the high excitation CO J=4–3 line, it seems that the starburst in 4C 60.07 occurs on galaxy-wide scales. Equally intriguing is the fact that the CO emission consists of two distinct components widely separated in velocity (Figures 1, 4). Their velocity-integrated CO flux densities of $`1.65\pm 0.35`$ Jy km s<sup>-1</sup> (wide-linewidth component) and $`0.85\pm 0.20`$ Jy km s<sup>-1</sup> (narrow-linewidth component) yield masses of $`\mathrm{M}(\mathrm{H}_2)\approx 5\times 10^{10}\mathrm{M}_{\odot }`$ and $`\mathrm{M}(\mathrm{H}_2)\approx 2.6\times 10^{10}\mathrm{M}_{\odot }`$ respectively (Equation 2, $`\mathrm{r}_{43}=0.45`$).

We derived size estimates for these two components by fitting both the image and the visibility plane of the CO emission shown in Figure 4 with an underlying gaussian brightness distribution. The narrow-linewidth component appears unresolved, with an upper limit of $`\approx 4^{\prime \prime }`$, while the wide-linewidth one is marginally resolved, with the largest size being $`\approx 5^{\prime \prime }`$. In the absence of adequate spatial/kinematic information allowing a distinction between the possible geometrical arrangements of the CO-emitting gas, we assume that the largest estimated size $`\mathrm{L}`$ corresponds to the diameter of a disk, hence its mass is given by

$$\mathrm{M}_{\mathrm{dyn}}\simeq \frac{\mathrm{\Delta }\mathrm{V}_{\mathrm{FWHM}}^2\mathrm{L}}{2\mathrm{a}_\mathrm{d}\mathrm{G}\mathrm{sin}^2\mathrm{i}}=1.16\times 10^9\left(\frac{\mathrm{\Delta }\mathrm{V}_{\mathrm{FWHM}}}{100\mathrm{km}\mathrm{s}^{-1}}\right)^2\left(\frac{\mathrm{L}}{\mathrm{kpc}}\right)\left(\mathrm{sin}^2\mathrm{i}\right)^{-1}\mathrm{M}_{\odot },$$ (11)

where i is the inclination of the disk and $`\mathrm{a}_\mathrm{d}\approx 1`$ (Bryant & Scoville 1996). For the wide-linewidth component the largest estimated size corresponds to $`\mathrm{L}\approx 22`$ kpc, which gives $`\mathrm{M}_{\mathrm{dyn}}\approx 7.7\times 10^{11}(\mathrm{sin}^2\mathrm{i})^{-1}\mathrm{M}_{\odot }`$, comparable to the mass of a present-day giant elliptical. The ratio of the inferred molecular gas to dynamic mass then is $`\mathrm{M}(\mathrm{H}_2)/\mathrm{M}_{\mathrm{dyn}}\approx 0.06\mathrm{sin}^2\mathrm{i}`$. For the narrow-velocity component the upper limit corresponds to $`\mathrm{L}\approx 17.5`$ kpc, yielding a dynamic mass of $`\mathrm{M}_{\mathrm{dyn}}\approx 4.6\times 10^{10}(\mathrm{sin}^2\mathrm{i})^{-1}\mathrm{M}_{\odot }`$.
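Equation 11 is simple enough to wrap as a function; as a sketch, we check its normalization against the 6C 1909+722 numbers quoted below (linewidth from Table 1, size limit from this section):

```python
# Sketch: dynamical mass of a disk of diameter L (Equation 11, a_d = 1).
def m_dyn(dv_fwhm_kms, l_kpc, sin2_i=1.0):
    return 1.16e9 * (dv_fwhm_kms / 100.0)**2 * l_kpc / sin2_i  # Msun

print(f"{m_dyn(530.0, 18.0):.2e}")  # -> ~5.9e11 Msun, as quoted for 6C 1909+722
```

Returning to 4C 60.07, the narrow-velocity component's dynamic mass is the $`\mathrm{M}_{\mathrm{dyn}}\approx 4.6\times 10^{10}(\mathrm{sin}^2\mathrm{i})^{-1}\mathrm{M}_{\odot }`$ just derived.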
This is $`\approx 5\%`$ of the mass of a typical giant elliptical, and a comparison with its molecular gas mass gives $`\mathrm{M}(\mathrm{H}_2)/\mathrm{M}_{\mathrm{dyn}}\approx 0.60\mathrm{sin}^2\mathrm{i}`$. Thus, geometrical factors aside, this component is significantly richer in molecular gas than the more massive one, and it may be the one where star formation has yet to form the bulk of its eventual stellar mass. It is easy to contrive a combination of different excitation properties, $`\mathrm{X}_{\mathrm{CO}}`$ values, and inclinations such that $`\mathrm{M}(\mathrm{H}_2)/\mathrm{M}_{\mathrm{dyn}}`$ in one or both molecular gas components is altered significantly. Still, the fact that remains unaltered is that in 4C 60.07, unlike in a typical ULIRG, the intense star formation occurs over large scales in two spatially and kinematically distinct molecular gas reservoirs. This brings to mind the scenario for the formation of a giant elliptical at high redshift in which several star-forming clumps merge to eventually form the galaxy. In this picture a gas-rich, low-mass clump like the one seen in 4C 60.07 is still in the process of merging and vigorous star formation, while the higher-mass object has already formed most of its stars. Of course the possibility that the wide-linewidth component itself consists of several virialized gas-rich clumps that do not necessarily constitute a bound system cannot be discarded in the light of the present data.

For 6C 1909+722 the CO emission is unresolved, with a size $`\le 4^{\prime \prime }`$ ($`\mathrm{L}\le 18`$ kpc), which for the observed linewidth (Table 1) corresponds to $`\mathrm{M}_{\mathrm{dyn}}\approx 5.85\times 10^{11}(\mathrm{sin}^2\mathrm{i})^{-1}\mathrm{M}_{\odot }`$ and $`\mathrm{M}(\mathrm{H}_2)/\mathrm{M}_{\mathrm{dyn}}\approx 0.08\mathrm{sin}^2\mathrm{i}`$. Thus, while there is little doubt that this galaxy is a starburst, it can still be a high-z counterpart of an ULIRG where most of its eventual stellar population has already formed and the intense star formation is confined to its central region.

Finally, the case of 4C 60.07 makes it clear that when searching for high-z CO lines a large velocity coverage is necessary, not only because of their often uncertain redshifts but also in order to detect the various components that may be far apart in velocity. Higher resolution observations are necessary in order to reveal more structure in the molecular gas reservoir of this galaxy and to allow better constraints on its dynamical mass. However, the faintness of the CO emission observed here may hinder such efforts, and a systematic study of such objects may have to await the advent of the next generation of mm interferometers with significantly larger collecting areas.

## 5 Conclusions

In this paper the detection of mm/sub-mm emission from dust and the first detection of CO J=4–3 in two powerful high-z radio galaxies have been presented. The analysis of the data leads us to the following conclusions:

1. Using the most conservative CO/H<sub>2</sub> conversion factor (1/5 of the galactic value), the CO J=4–3 emission implies molecular gas masses of the order of $`\mathrm{M}(\mathrm{H}_2)\approx (0.5-1)\times 10^{11}\mathrm{M}_{\odot }`$ in the two high-z powerful radio galaxies. Their sub-mm continuum corresponds to large FIR luminosities ($`\approx 10^{13}\mathrm{L}_{\odot }`$), which imply that we are witnessing intense starburst phenomena at $`\mathrm{z}\approx 3.5-3.8`$ converting the aforementioned gas mass into stars.

2.
The wide range of the molecular gas excitation expected in Ultra Luminous IR galaxies over large scales is briefly explored, and we conclude that observing objects with similar ISM conditions at high z using high-J CO transitions may in some cases hinder their detection. This can partly explain why, despite systematic efforts, CO detections of unlensed objects are fewer than detections of their mm/sub-mm continuum from dust. Therefore it is still too early to draw any general conclusions about the abundance of molecular gas in high-z systems from the data currently available in the literature.

3. The estimated molecular gas-to-dust ratios of these two objects are within the range found for local IRAS galaxies, revealing that in both of them most of the heavy elements have already been produced, at present-day abundances.

4. In 4C 60.07 the CO emission, and presumably the starburst activity implied by its large FIR luminosity, is distributed over scales of $`\approx 30`$ kpc and consists of two distinct components spanning a total velocity range of $`\approx 1000`$ km s<sup>-1</sup>. Thus this galaxy does not seem to be just a scaled-up high-z version of a local Ultra Luminous IR galaxy, where most of the starburst activity and the accompanying bright CO/dust emission are confined to a more compact region in the center. A plausible scenario is that we are witnessing the ongoing formation event of a giant elliptical galaxy, the future host of the residing radio-loud AGN.

### 5.1 Acknowledgments

We acknowledge the IRAM staff from the Plateau de Bure and from Grenoble for carrying out the observations and for help provided during the data reduction. P. P. Papadopoulos would like to thank especially Dieter Nürnberger and Anne Dutrey for their patient help during the data reduction in Grenoble and Jessica Arlett for providing us with the LVG code. We thank the referee David Sanders as well as Simon Radford for valuable comments, Chris Carilli for making available the 6 cm maps of 4C 60.07 and Carlos De Breuck for help with the radio images and stimulating discussions. The work of W. v. B. at IGPP/LLNL was performed under the auspices of the US Department of Energy under contract W-7405-ENG-48. P. P. P. is supported by the “Surveys with the Infrared Space Observatory” network set up by the European Commission under contract ERB FMRX-CT96-0086 of its TMR programme.

Table 1 Observational parameters

| Parameter | 4C 60.07 | 6C 1909+722 |
| --- | --- | --- |
| RA (J2000)<sup>a</sup> | 05<sup>h</sup> 12<sup>m</sup> 54<sup>s</sup>.80 | 19<sup>h</sup> 08<sup>m</sup> 23<sup>s</sup>.70 |
| Dec (J2000)<sup>a</sup> | +60° 30′ 51.7″ | +72° 20′ 11.8″ |
| $`\mathrm{z}_{\mathrm{co}}`$ | $`3.791`$ | $`3.532`$ |
| $`\mathrm{I}_{\mathrm{CO}}`$ (Jy km s<sup>-1</sup>) | $`2.50\pm 0.43`$ | $`1.62\pm 0.30`$ |
| $`\mathrm{\Delta }\mathrm{V}_{\mathrm{FWHM}}`$ (km s<sup>-1</sup>) | $`\approx 1000`$ | $`530\pm 70`$ |
| $`\mathrm{S}_{1.25\mathrm{mm}}`$ (mJy) | $`4.5\pm 1.2`$ | $`<2`$ (2$`\sigma `$) |
| $`\mathrm{S}_{3\mathrm{m}\mathrm{m}}`$ (mJy) | $`<0.5`$ (2$`\sigma `$) | $`<0.6`$ (2$`\sigma `$) |
| $`\mathrm{S}_{850\mu \mathrm{m}}`$ (mJy) | $`11.0\pm 1.5`$ | $`13.5\pm 2.8`$ |

<sup>a</sup> Coordinates of the image center.
Table 2 ISM environments and line ratios at $`\mathrm{z}=3.5`$

| Line ratios | “Average” starburst | Diffuse warm gas | Cold gas |
| --- | --- | --- | --- |
| J=2–1 | 0.95–1.12 | 0.64–1.06 | 0.40–0.61 |
| J=3–2 | 0.73–1.07 | 0.30–0.92 | 0.14–0.29 |
| J=4–3 | 0.45–1.04 | 0.10–0.74 | 0.035–0.11 |
| J=5–4 | 0.21–0.99 | 0.025–0.52 | 0.005–0.02 |
| J=6–5 | 0.065–0.93 | 0.006–0.31 | $`<10^{-3}`$–0.002 |
| J=7–6 | 0.015–0.84 | 0.0015–0.15 | $`<10^{-3}`$ |
| J=8–7 | 0.002–0.69 | $`<10^{-3}`$–0.05 | $`<10^{-3}`$ |
| J=9–8 | $`<10^{-3}`$–0.50 | $`<10^{-3}`$–0.013 | $`<10^{-3}`$ |

Note.— All the line ratios are normalized by the <sup>12</sup>CO J=1–0 brightness, and the assumed CMB temperature is $`(1+z)\times 2.75`$ K = 12.375 K. “Average” starburst: $`\mathrm{T}_{\mathrm{kin}}=50-100`$ K, $`\mathrm{n}(\mathrm{H}_2)=10^3-10^4`$ cm<sup>-3</sup>, $`[\mathrm{CO}/\mathrm{H}_2]/(\mathrm{dV}/\mathrm{dr})=3\times 10^{-5}`$ (km s<sup>-1</sup> pc<sup>-1</sup>)<sup>-1</sup>. Diffuse warm gas: $`\mathrm{T}_{\mathrm{kin}}=50-100`$ K, $`\mathrm{n}(\mathrm{H}_2)=10^2-10^3`$ cm<sup>-3</sup>, $`\tau \approx 1-2`$ for <sup>12</sup>CO J=1–0. Cold gas: $`\mathrm{T}_{\mathrm{kin}}\approx 13`$ K (see text), $`\mathrm{n}(\mathrm{H}_2)=10^2-10^3`$ cm<sup>-3</sup>, $`[\mathrm{CO}/\mathrm{H}_2]/(\mathrm{dV}/\mathrm{dr})=3\times 10^{-5}`$ (km s<sup>-1</sup> pc<sup>-1</sup>)<sup>-1</sup>.
# Strange stars in low-mass binary pulsar systems

J. E. Horvath

Instituto Astronômico e Geofísico, Universidade de São Paulo, Av. Miguel Stefano 4200 - São Paulo SP, 04301-904, Brazil, and Steward Observatory, U. of Arizona, N. Cherry Av. 930, Tucson, Arizona, USA

Abstract

Based on observational facts and a variety of theoretical arguments, we discuss in this work the possibility that pulsars in Low-Mass Binary Pulsar systems could be strange stars rather than neutron stars. It is shown that, although subject to reasonable uncertainties, consideration of the physics of the SQM core and thin normal crust leads to the prediction of several observed features of the magnetic field history of these systems within this working hypothesis.

PACS 97.04.40.Dg

Submitted to Int. Jour. Mod. Phys. D

1. INTRODUCTION

Low-Mass Binary Pulsars (hereafter LMBP) are generally regarded as important probes of the effects of substantial accretion rates on the structure and evolution of pulsars (see, for example, for a review). In particular, it has become increasingly clear that the magnetic fields $`B`$ of isolated radio pulsars do not seem to decay on short timescales ($`\tau _{decay}\sim 10^7\mathrm{yr}`$) as previously thought, while LMBP show strong evidence for $`B`$ decay. Accretion has been recently argued to be involved in the very formation of at least one particular system (PSR 1831-00), and possibly all the pulsars in these systems, by triggering an accretion-induced collapse (AIC) of a progenitor white dwarf, thus avoiding problems with formation in a type II supernova explosion.

On the other hand, it has long been speculated that at least some pulsars (if not all \[4-6\]) must be strange stars if the strange matter (SQM) conjecture is true, in the sense that hadronic matter is a metastable state decaying into a cold $`uds`$ plasma (with a net gain of energy) under favorable conditions. Exactly which are these ”favorable conditions” is a matter of some controversy. While the idea of a mixed (neutron $`+`$ strange) population has been implicit in the literature, there is actually a fine-tuning problem to be explained, namely why a metastable state such as neutron matter can survive for a huge macroscopic time without decaying (we have argued elsewhere that the ”natural” conversion is driven by the Kelvin-Helmholtz timescale $`\sim 1\mathrm{s}`$ after the formation of a protoneutron star). On the other hand, it has been argued on observational grounds that field radio pulsars can not be identified with strange stars, a strong conclusion that prompted refined calculations of ”classical” strange stars and exotic pulsar models. Nevertheless, it should be acknowledged that the widely-spread notion that a neutron star must be highly compressed to reach the neutron $`\rightarrow `$ strange matter threshold at its center finds a natural setting in binary LMBP where AIC has formed the secondary before the end of the mass-transfer regime. The purpose of this work is to point out that the identification of pulsars in LMBP systems with strange stars may allow the construction of a consistent scenario for their magnetic, thermal and evolutionary histories. Moreover, the physical features needed for the model to work seem to emerge quite naturally from simple estimates within reasonable uncertainties.

2.
SCHEMATIC OVERVIEW OF A STRANGE STAR STRUCTURE AND EVOLUTION

As is well known, the structure of a strange star consists of a dense degenerate SQM core surrounded by a normal matter crust. Both the Baym-Pethick-Sutherland equation of state (which is generally considered an accurate description for this density regime) and the case of an accretion-generated crust (which is probably more adequate for the systems we are considering) can be parametrized by a polytropic expression $`P=K\rho ^\mathrm{\Gamma }`$ with sufficient accuracy. For the sake of the argument, we may properly describe the structure of this crust by integrating the hydrostatic equilibrium equation in the approximation $`M=constant`$ (compare, for example, with ) to get

$$\delta R=\xi \frac{\mathrm{\Gamma }}{\mathrm{\Gamma }-1}\frac{KR_{ss}^2}{GM_{ss}}\rho _B^{\mathrm{\Gamma }-1}$$ (1)

where $`R_{ss}`$ is the strange core radius ($`\approx `$ the star radius, since $`R_{ss}\gg \delta R`$), $`M_{ss}`$ is the mass of the strange star in the same approximation, $`\rho _B`$ is the density at the base of the normal crust and $`\xi \approx 0.65`$ is a relativistic correction. This approximate (yet accurate) expression for the crust allows us to relate all the relevant quantities to $`\rho _B`$ in a simple and useful way, as will be explained below. We shall further assume that the crust forms on timescales short compared to the thermal evolution of the star.

It is widely agreed that, due to the very features of the SQM core, the density of the normal matter at the base of the crust $`\rho _B`$ has to be limited by the neutron drip value $`\rho _D\approx 4.3\times 10^{11}\mathrm{g}\mathrm{cm}^{-3}`$. However, there is no obvious reason for the latter being the actual value, and any density lower than $`\rho _D`$ is, in principle, possible. In fact, the authors of Ref. have argued that $`\rho _D/5`$ is the maximum value allowed by consideration of the mechanical equilibrium of the crust. In the first case the exact accretion history of the star would be irrelevant, since after the condition $`\rho _B=\rho _D`$ is achieved the dripped neutrons would be swallowed by the SQM core and we will always deal with a maximal crust. On the other hand, if the limiting density is $`<\rho _D`$, the mass of the crust $`\delta M`$ ($`\sim 10^{-6}M_{\odot }`$ in this approximation) may depend on the history of the object (in the former case the dominating isotope would be $`{}_{}{}^{118}Kr`$, while in the latter we would find $`{}_{}{}^{80}Zn`$ at the base of the crust). Fortunately, since $`\mathrm{\Gamma }`$ is near $`4/3`$, the results will be quite insensitive to the actual value of $`\rho _B`$ provided the crust is not extremely tiny. Keeping this in mind, we shall leave the scaling explicit with a fiducial value $`\rho _0=10^{11}\mathrm{g}\mathrm{cm}^{-3}`$ to allow for a range of possibilities. The maximal, neutron drip-limited case is obtained by setting $`\rho =4.3\rho _0`$, $`Z=36`$, $`X=0.3`$ below.

The thermal and magnetic history of a strange star will be very different according to the actual state of the SQM core. It has long been recognized that it is entirely possible that the quark liquid forms a superconducting state. The first schematic model calculations of compact star cooling with superconducting SQM cores have recently been refined and improved, confirming that dramatic effects due to quark pairing can completely alter the quick-cooling signature of strange stars (see, for example, ).
Since the critical temperature for pairing to occur is very high in these models ($`T_c\sim 0.1\mathrm{MeV}`$, see ), we can neglect the very short time after the AIC formation event during which the SQM core remains in the normal state. Once it becomes superconducting, the magnetic field $`B`$ should obey the diffusion equation

$$\mathrm{\nabla }^2B=\frac{1}{D}\frac{\partial B}{\partial t}$$ (2)

where $`D=c^2/4\pi \sigma _{sqm}`$ is the diffusion coefficient. According to Refs., the SQM conductivity $`\sigma _{sqm}`$ can be expressed as

$$\sigma _{sqm}\sim 10^{19}\left(\alpha _cT_9\right)^{-5/3}\left(\frac{\mu }{300MeV}\right)^{8/3}s^{-1}$$ (3)

where $`\alpha _c`$ is the strong coupling constant, $`T_9\equiv T/10^9K`$ is the internal temperature and $`\mu `$ is the quark chemical potential. The consequence of such a conductivity is immediately clear: unless the estimate (valid for normal SQM) happens to be wrong by many orders of magnitude, the magnetic flux is expelled on a timescale as short as $`\tau _{exp}\approx R^2/D\approx 3\times 10^3\mathrm{yr}(\mu /300MeV)^{8/3}(\alpha _cT_9)^{-5/3}`$, an astronomically small value. Now, the conservation of the expelled flux demands that the final value in the normal crust grow by a factor $`B^{crust}=B^{core}(R/\delta R)`$; but, because the elastic stresses in the crust can not support arbitrary magnetic stresses, $`B^{crust}`$ must be limited by $`B_{max}^{crust}=(8\pi \mu \mathrm{\Theta }\delta R/R)^{1/2}`$, where $`\mu `$ is the lattice shear modulus and $`\mathrm{\Theta }\sim 10^{-2}`$ is the shear angle. If we impose $`\mu \sim 10^{26}\mathrm{dyn}\mathrm{cm}^{-2}`$ for the former at $`\rho =\rho _0`$, then $`B_{max}^{crust}\approx 8\times 10^{11}\mathrm{G}`$, which is the initial value expected for the field in a young BP (see below). Because of its limited resistance, the crust would be blown off if the initial $`B^{core}`$ exceeded the threshold $`B_{}^{core}=8\times 10^{11}\mathrm{G}(\delta R/R)\approx 10^{10}\mathrm{G}`$. We suggest that we may in fact be observing those LMBP in which the initial $`B^{core}`$ (for which we do not have any reliable information at all) remained lower than $`B_{}^{core}`$. It is remarkable that none of the 24 known systems possesses a $`B`$ larger than expected in the model.

3. FURTHER EVOLUTION OF THE MAGNETIC FIELD

Due to ohmic currents the field, now totally confined to the crust if a small skin depth is neglected, should proceed to decay according to

$$\frac{\partial B}{\partial t}=-\frac{c^2}{4\pi }\mathrm{\nabla }\times \left(\frac{1}{\sigma _{crust}}\mathrm{\nabla }\times B\right).$$ (4)

The solutions which are relevant to our problem are those satisfying $`B=0`$ immediately above $`\rho =\rho _B`$, in agreement with the complete core flux expulsion expectation. Furthermore, since, as discussed above, strange stars with crusts satisfy the condition $`\delta R/R\ll 1`$, we may properly use the ”thin crust approximation” to obtain the decay timescale of the longest-lived dipole mode

$$\tau _d=\frac{2\pi }{c^2}\xi \frac{\mathrm{\Gamma }}{\mathrm{\Gamma }-1}\overline{\sigma }_{crust}\frac{KR_{ss}^3}{GM_{ss}}\rho _B^{\mathrm{\Gamma }-1},$$ (5)

where $`\overline{\sigma }_{crust}`$ is the average of the electrical conductivity over the crust.
Since the conductivity will be dominated by the densest matter, an upper limit is set by replacing the average of the conductivity by its value at the drip point (that is, $`\overline{\sigma }_{crust}\approx \sigma _{crust}(\rho =\rho _B)`$) hereafter. As in most models of field decay by ohmic dissipation, it is the value of $`\sigma _{crust}`$ which controls the behavior of $`\tau _d`$. At the higher temperatures $`\sigma _{crust}`$ is dominated by the Umklapp processes and reads

$$\sigma _U=1.43\times 10^{21}\left(\frac{\rho }{\rho _0}\right)^{7/6}\left(\frac{X}{0.375}\right)^{5/3}T_9^{-2}s^{-1},$$ (6)

where the fraction of protons per nucleus $`X`$ has been scaled to its expected value at $`\rho _0`$ and $`T_9\equiv T/10^9K`$ is identified with the (isothermal) core temperature. Below a freezeout temperature $`T_F`$ the Umklapp processes are frozen and the conductivity is dominated by the impurity concentration $`Q`$. This $`T_F`$ can be expressed as

$$T_F=1.73\times 10^7\left(\frac{\rho }{\rho _0}\right)^{1/2}\left(\frac{Z}{30}\right)^{1/2}\left(\frac{X}{0.375}\right)K,$$ (7)

where $`Z`$ is the charge of the dominant isotope, $`{}_{}{}^{80}Zn`$ at $`\rho _0`$. Below this temperature the conductivity due to the impurity concentration takes the form

$$\sigma _I=4.13\times 10^{24}\left(\frac{\rho }{\rho _B}\frac{X}{0.375}\right)^{1/3}\left(\frac{Z}{30}\right)\left(\frac{1}{Q}\right)s^{-1},$$ (8)

where $`Q\sim 10^{-3}`$ is the mean square deviation of $`A`$ from its average (it can be readily checked that, unless $`Q`$ is unexpectedly large, $`\sigma _I`$ does not dominate $`\sigma _U`$ for temperatures $`T>T_F`$, and we shall dismiss this possibility in the remainder of this work). An important difference between these regimes is that while $`\sigma _U\propto T^{-2}`$, $`\sigma _I`$ does not depend on the temperature. The crust field decay is therefore different for $`T>T_F`$ than for $`T<T_F`$. In the first case the decay proceeds according to

$$B^{crust}(t)=B_{max}^{crust}\mathrm{exp}\left(-\int _0^t\frac{dt^{}}{\tau (t^{})}\right),$$ (9)

where

$$\tau =7.2\times 10^3\left(\frac{R}{10km}\right)^3\left(\frac{1.4M_{\odot }}{M_{ss}}\right)\left(\frac{\rho }{\rho _0}\right)^{\mathrm{\Gamma }+\frac{1}{6}}\times \left(\frac{X}{0.375}\right)^3T_9^{-2}yr,$$ (10)

depends on time through the temperature $`T`$. In the second case the decay is a simple exponential,

$$B^{crust}(t)=B_F^{crust}\mathrm{exp}(-t/\tau _I),$$ (11)

with $`B_F^{crust}`$ the value of the field at the end of the first regime, and the time constant is

$$\tau _I=2\times 10^7(R/10km)^3(X/0.375)^{5/3}\times (\rho /\rho _0)^{\mathrm{\Gamma }-\frac{2}{3}}(Z/30)(1/Q)yr,$$ (12)

which happens to be always $`\gtrsim 10^9yr`$ for the expected parameters unless $`Q`$ happens to be very high. A determination of the evolution of $`B^{crust}(t)`$ for $`T>T_F`$ requires knowledge of the thermal history $`T(t)`$. The important point here is that SQM superconductivity produces a plateau in the $`T`$ vs. age curve (which is absent in the case of a normal SQM core).
The boldest approximation is to set $`T=constant`$ during the plateau era and to use the relationship between the surface and core temperatures given in Ref., namely $`T_s\approx 10^6(T/10^8K)^{0.55}K`$ (which has been argued to be valid for strange stars as well), to yield the crust field at any time $`t\le t_{pl}`$

$$B^{crust}(t)=B_{max}^{crust}\mathrm{exp}\left[-\frac{1}{7.2}\left(\frac{t}{10^5yr}\right)\left(\frac{<T_s>}{10^6K}\right)^{3.64}\right],$$ (13)

where $`<T_s>`$ is the average of the surface temperature over the plateau era. Since, according to recent calculations, $`<T_s>\approx 2\times 10^6K`$, we conclude that the field should decay by a factor of $`\sim 10^2`$ during the plateau era lasting a $`few\times 10^5yr`$. Using a more accurate fit to the results of Ref., $`T_s=10^{6.5}(yr/t)^{0.05}K`$, we find the refined estimate of the decay during this epoch

$$B_F^{crust}=B_{max}^{crust}\mathrm{exp}\left[-1.45\left(\frac{t}{10^5yr}\right)^{0.82}\right],$$ (14)

which leads to essentially identical conclusions.

4. DISCUSSION

Based on the results of the former section, it is tempting to suggest that a decay of the field by a factor of $`\sim 10^3`$ in the first $`10^5-10^6yr`$ is built in by the physics of strange star crusts with superconducting SQM cores. The same line of reasoning shows that, unless the impurity concentration is very high ($`Q>10^{-2}`$), further decay of $`B^{crust}`$ below $`B_F^{crust}\sim 10^9\mathrm{G}`$ is inhibited and its value remains effectively frozen because of the larger value of the decay constant $`\sim 10^9\mathrm{yr}`$, again an effect controlled by the microphysics of the thin normal crust of a strange star.

From the astrophysical point of view a model in which LMBPs form from a symbiotic system ($`\sim 1M_{\odot }`$ low-mass giant $`+`$ white dwarf) is attractive, since the latter are abundant in the galaxy. According to the accepted scenario (see for example ), as the non-degenerate star leaves the main sequence its radius increases until it fills the critical lobe and mass transfer starts. During this mass-transfer stage AIC of the white dwarf happens and, because of the $`\sim 0.1M_{\odot }`$ energy loss, the binary temporarily detaches. When accretion resumes, the collapse of the neutron star to a strange star should follow after the accretion of $`\sim 0.1M_{\odot }`$ from the companion, which is sufficient to drive the conversion by compression. The SS is born hot but must cool very quickly below $`T_c`$, and therefore expel the interior field as described (the latest work on quark pairing seems to suggest much larger gaps, of $`\sim 100\mathrm{MeV}`$, with potentially important effects on the cooling which have not been explored as yet; see for example ). After a $`few\times 10^5yr`$ this field would decay to the ”bottom value” $`B_F^{crust}\sim 10^8\mathrm{G}`$ (eqs. 9-11). We suggest that systems like PSR 1718-19 and PSR 1831-00 are quite young and their evolution downwards in the $`B_s-P_{orb}`$ plane (Fig. 1 of Ref. ) may be measurable. It is also important to note that spin periods in the millisecond range are possible for strange stars in LMBP, in agreement with observations, while they would be prohibited for a neutron composition because of the r-mode instability.
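Equations 12 and 14 combine into a one-line evolution model; a minimal sketch (using the fiducial values of the text, i.e. $`B_{max}^{crust}=8\times 10^{11}`$ G and $`Q=10^{-3}`$):

```python
# Sketch: crust field decay during the plateau era (Eq. 14), followed by the
# impurity-limited stage with time constant tau_I (Eq. 12, fiducial values).
import math

def b_plateau(t_yr, b_max=8e11):
    return b_max * math.exp(-1.45 * (t_yr / 1e5)**0.82)  # Gauss

tau_i = 2e7 / 1e-3  # yr -> 2e10 yr, so the field is effectively frozen later

for t in (1e5, 3e5, 5e5):
    print(f"t = {t:.0e} yr : B ~ {b_plateau(t):.1e} G")
# -> decay by a factor ~1e2-1e3 within a few 1e5 yr, down to B ~ 1e9 G
```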
Our estimates above may be helpful for interpreting why we should expect the field to decay differently on two different timescales, a point not easily made for neutron star models, where the much denser crust behaves differently and complete flux expulsion remains controversial. We finally remark that, at least in principle, the identification is also consistent with the lack of glitches and the substantial timing noise of pulsars in LMBP systems (which is naturally expected from strange stars), and explains the sharp contrast with the $`B`$ evolution of isolated field radio pulsars, whose composition need not be exotic because of formation arguments.

5. ACKNOWLEDGEMENTS

We would like to acknowledge the financial support of the Brazilian Agencies FAPESP (São Paulo) and CNPq through several forms of grants. M. P. Allen, P. Benaglia, G. A. Romero and, particularly, J. A. de Freitas Pacheco are gratefully acknowledged for useful suggestions.

6. REFERENCES

D. Bhattacharya and E. P. J. van den Heuvel, Phys. Rep. 203, 1 (1991).
E. P. J. van den Heuvel and O. Bitzaraki, Astron. Astrophys. 297, L41 (1995).
D. J. Helfand, M. Ruderman and J. Shaham, Nature 304, 423 (1983).
E. Witten, Phys. Rev. D 30, 272 (1984).
C. Alcock, E. Farhi and A. V. Olinto, Astrophys. J. 310, 261 (1986); P. Haensel, J. L. Zdunik and R. Schaeffer, Astron. Astrophys. 160, 121 (1986).
O. G. Benvenuto, J. E. Horvath and H. Vucetich, Int. Jour. Mod. Phys. A 6, 4769 (1991).
A. Bodmer, Phys. Rev. D 4, 1601 (1971); see also H. Terazawa, INS Report 338 (INS, Univ. of Tokyo, 1979) and S. A. Chin and A. K. Kerman, Phys. Rev. Lett. 43, 1292 (1979).
O. G. Benvenuto and J. E. Horvath, Phys. Rev. Lett. 63, 716 (1989).
M. A. Alpar, Phys. Rev. Lett. 58, 2152 (1987).
N. K. Glendenning and F. Weber, Astrophys. J. 400, 647 (1992).
O. G. Benvenuto, J. E. Horvath and H. Vucetich, Phys. Rev. Lett. 64, 713 (1990).
G. Baym, C. Pethick and P. Sutherland, Astrophys. J. 170, 299 (1971).
P. Haensel and J. L. Zdunik, Astron. Astrophys. 229, 117 (1990).
Y. F. Huang and T. Lu, Astron. Astrophys. 325, 189 (1997).
D. Bailin and A. Love, Phys. Rep. 107, 325 (1984).
J. E. Horvath, O. G. Benvenuto and H. Vucetich, Mod. Phys. Lett. A 7, 995 (1992).
J. E. Horvath, O. G. Benvenuto and H. Vucetich, Phys. Rev. D 44, 1147 (1991).
C. Schaab, B. Hermann, F. Weber and M. K. Wiegel, Astrophys. J. Lett. 480, 111 (1997).
P. Pizzochero, Phys. Rev. Lett. 66, 2425 (1991).
H. Heiselberg and C. J. Pethick, Phys. Rev. D 48, 2916 (1993); note that these refined calculations give a different behavior than, e.g., P. Haensel and A. J. Jerzak, Acta Phys. Pol. B 20, 141 (1989).
Bailin and Love, op. cit., had in fact first speculated that very short expulsion timescales were possible.
M. Ruderman, Astrophys. J. 382, 576 (1991).
C. Pethick and M. Sharling, Astrophys. J. 453, L29 (1995).
V. A. Urpin, Sov. Astron. 36, 393 (1992).
V. A. Urpin and D. G. Yakovlev, Sov. Astron. 24, 303 (1980).
E. Flowers and M. A. Ruderman, Astrophys. J. 215, 302 (1977).
E. H. Gudmunsson, C. J. Pethick and R. Epstein, Astrophys. J. 272, 286 (1983).
V. V. Usov, Astrophys. J. Lett. 481, L107 (1997).
M. de Kool and J. van Paradijs, Astron. Astrophys. 173, 279 (1987).
T. Schaefer and F. Wilczek, hep-ph/9906512.
J. Madsen, Phys. Rev. Lett. 81, 3311 (1998).
N. Andersson, K. Kokkotas and B. F. Schutz, Astrophys. J. 510, 846 (1999).
P. B. Jones, Mon. Not. R.A.S. 246, 364 (1990).
# Numerical Study of the Vortex Phase Diagram Using the Bose Model in the STLS Approximation

## I Introduction

The phases of flux lines (FLs) in high temperature superconductors are the subject of many current experimental and theoretical investigations. In the classical (Abrikosov) picture of type-II superconductors, FLs penetrate a clean material for fields $`H>H_{c1}`$ to form a triangular solid lattice. This mean field phase diagram is modified by the inclusion of thermal fluctuations, which are strong and play an important role in high-$`T_c`$ materials. The presence of various forms of disorder (such as point or columnar disorder) also affects the mean field phase diagram. In this paper we focus on the phase diagram of pure materials; however, there are many interesting experimental and theoretical works describing the effect of both thermal fluctuations and disorder on the phase diagram.

It is well known that thermal fluctuations lead to melting of the vortex lattice and the appearance of a vortex liquid. It has now become possible to observe the melting transition experimentally. Indirect observations based on resistivity measurements, and recent experiments based on measuring a jump in the magnetization or the latent heat, show a first-order vortex lattice melting transition. On the other hand, vortex lattice melting has been studied theoretically using various approximations. Early works using the renormalization group or density functional theory indicated a first-order transition. Elastic theory combined with the Lindemann criterion produces a melting line in good agreement with the experimental observations. There is also large interest in numerical simulations of this problem. An interesting work in this direction was done by Nordborg and Blatter, who present an extensive numerical study of the vortex matter using the mapping to 2D bosons and path-integral Monte Carlo simulations.

It was suggested by Nelson that the vortex system is equivalent to a system of interacting bosons in two dimensions (Bose model). This mapping predicts a melting transition into an entangled vortex liquid. Therefore, the problem of a vortex system maps to a system of $`N`$ bosons in two dimensions interacting through the potential $`V(r)=g^2K_0(r/\lambda )`$, where $`K_0`$ is the modified Bessel function, $`\lambda `$ is the London penetration depth, and $`g^2`$ is a constant that scales the energy of the interaction. In the language of vortices, this potential comes from the interaction between vortices in the London theory, and $`g`$ is related to the elastic moduli of the vortex lattice. The Bose model differs from the real vortex system (see next section); however, it still contains the main part of the interaction. Hence, it is reasonable and interesting to use this model for describing the properties of the vortex phase diagram. More recently, the Bose model has been used in a numerical study of the vortex matter. Physical quantities such as the structure factor and the superfluid density at different temperatures are addressed, and the first-order vortex lattice melting transition into an entangled vortex liquid is confirmed by the numerical simulations. In the language of the boson system, this transition is related to the quantum phase transition from a Wigner crystal to a superfluid. The Bose model also allows for the use of many-body techniques.
In this work, we apply the self-consistent-field approximation of Singwi, Tosi, Land, and Sjölander (STLS) and calculate the static structure factor, pair correlation function, interaction energy and the spectrum of the excited energies for different magnetic field strengths and temperatures. The STLS approximation was originally proposed for describing a degenerate electron gas and has been used successfully to study a variety of other systems too. In the STLS theory, the correlation effects are incorporated through a static local-field correction, which is obtained numerically in a self-consistent way. We find numerical results for the static structure factor $`S(q)`$ over a wide range of the parameters. From the calculation of $`S(q)`$, we obtain results for the pair correlation function, the interaction energy and the spectrum of the excited energies. The different behaviors of these quantities may be used for studying the phases of the vortex lines. It is well known that oscillatory behavior in $`g(r)`$ is a signature of the solid phase. Therefore, the phase transition can be detected by looking at the behavior of $`g(r)`$ at a fixed temperature (magnetic field) but varying magnetic field (temperature). The solid-liquid transition can also be observed using the static structure factor: the disappearance of the peaks in the structure factor signals the onset of the phase transition. Hence, using the behaviors of the pair correlation function and the static structure factor, the phase diagram can be explained qualitatively; however, it is not possible to determine the melting transition temperature precisely. One signature of the transition is the appearance of a special $`q`$ at which the spectrum of the excited energies vanishes. We have therefore numerically investigated the dispersion relation of the excited energies as a good quantity for the estimation of the $`B-T`$ phase diagram and the melting temperature. Quantitatively, our results for the excited energies are compatible with the expected results for the phase diagram and with recent Monte Carlo simulations.

On the other hand, the direction-dependent interaction of real vortices has very interesting consequences and predicts a van der Waals interaction even for straight vortices. It is shown that in the decoupled limit, $`\gamma \rightarrow 0`$ (where $`\gamma `$ is the anisotropy parameter), the van der Waals attraction is proportional to $`1/R^4`$. This attractive interaction, together with the entropic repulsion, has very important consequences for the low-field phase diagram of anisotropic superconductors. Recently, Volmer and Schwartz introduced a new variational approach to consider the effects of the van der Waals attraction and the repulsive interaction on the low-field phase diagram. We also consider the same model in the STLS approximation for studying the combined effects of the repulsive and attractive potentials on the solid and liquid phases of pure anisotropic or layered superconductors.

The rest of this paper is organized as follows. In Sec. II, we briefly review the Bose model and discuss its applicability to the vortex system. The STLS approximation is briefly discussed in Sec. III. The numerical results for the repulsive interaction are presented and discussed in Sec. IV. The results for the van der Waals interaction and its consequences for the phase diagram are given in Sec. V, and finally the conclusions appear in Sec. VI.
## II Bose model

In the Feynman path-integral picture, the system of $`N`$ interacting bosons in two dimensions is described by the action

$$\frac{S}{h}=\frac{1}{h}\int _0^{h/T}d\tau \left\{\sum _i\frac{M}{2}\left(\frac{d\vec{R}_i}{d\tau }\right)^2+\sum _{i<j}g^2K_0\left(\frac{R_{ij}}{\lambda }\right)\right\},$$ (1)

where $`\vec{R}_i(\tau )`$ is a two-dimensional vector representing the positions of the bosons, and $`T`$ is the temperature of the system. In the Schrödinger picture the above action is equivalent to the two-dimensional Schrödinger equation,

$$\left[-\sum _i\frac{\mathrm{\nabla }_i^2}{2M}+\sum _{i,j}V(R_{ij})\right]\psi _0=E_0\psi _0,$$ (2)

where the potential $`V(R_{ij})`$ is proportional to the modified Bessel function. It was pointed out by Nelson that the above action can also be interpreted as the London free energy for a system of vortex lines. The London free energy for a system of interacting vortices in a sample of length $`L_z`$ is given by

$$\frac{F}{T}=\frac{1}{T}\int _0^{L_z}dz\left\{\sum _i\frac{\epsilon _l}{2}\left(\frac{d\vec{R}_i}{dz}\right)^2+\sum _{i<j}2\epsilon _0K_0\left(\frac{R_{ij}}{\lambda }\right)\right\},$$ (3)

where $`\epsilon _l\approx \gamma ^2\epsilon _0a_0/(2\sqrt{\pi }\xi )`$, $`\epsilon _0=(\mathrm{\Phi }_0/4\pi \lambda )^2`$, $`\mathrm{\Phi }_0`$ is the flux quantum, $`\xi `$ is the coherence length, and $`a_0`$ is the lattice spacing. Comparing this free energy functional, which is referred to as the Bose model for the vortex system, with the action (1) shows the relationship between the parameters of the 2D bosons and the vortices.

The modified Bessel function $`K_0(R/\lambda )`$ describes a screened logarithmic interaction: $`K_0(x)\sim -\mathrm{ln}(x)`$ for $`x\rightarrow 0`$, and $`K_0(x)\sim x^{-1/2}e^{-x}`$ for $`x\rightarrow \mathrm{\infty }`$. Thus, the London penetration depth defines the interaction range. Note that according to the two-fluid model, the penetration depth diverges at the zero-field transition temperature $`T_c`$, and therefore the interaction range becomes considerably longer upon approaching $`T_c`$. As discussed in detail in Ref., in spite of the fact that the Bose model differs from the real vortex system in the choice of boundary conditions, in the linearization leading to the elastic term, and in the retarded interaction, it contains the main parts of the interaction between vortices, and one expects the results to be in rough quantitative agreement with those of real systems.

## III STLS approximation

In this section we briefly review the principal equations of the STLS approximation. These equations relate the response function, the static structure factor, and the local field correction. In the STLS approximation, the response function can be expressed as

$$\chi (q,\omega )=\frac{\chi _0(q,\omega )}{\left[1-\psi (q)\chi _0(q,\omega )\right]}.$$ (4)

In this equation $`\chi _0`$ is the response function of the free Bose gas and $`\psi `$ is the effective potential, given by $`\psi (q)=v(q)(1-G(q))`$, where $`v(q)`$ is the Fourier transform of the bare potential and $`G(q)`$ is the local field correction.
The STLS local field correction is $$G(q)=-\frac{1}{n}\int \frac{d\mathbf{q}^{\prime }}{(2\pi )^2}\,\frac{\mathbf{q}\cdot \mathbf{q}^{\prime }}{q^2}\,\frac{v(q^{\prime })}{v(q)}\left[S(|\mathbf{q}-\mathbf{q}^{\prime }|)-1\right],$$ (5) where $`n`$ is the density, and $`S(q)`$ is the static structure factor, related to the response function as ($`\hbar =1`$) $$S(q)=-\frac{1}{n\pi }\int _0^{\infty }d\omega \,\mathrm{Im}\,\chi (q,\omega ).$$ (6) For a noninteracting two-dimensional Bose gas the free response function is given by $$\chi _0(q,\omega )=\frac{2n\epsilon (q)}{(\omega +i\eta )^2-\epsilon (q)^2},$$ (7) where $`\epsilon (q)=q^2/(2m)`$ is the free particle energy, and $`\eta `$ is a positive infinitesimal quantity. Using Eq. (7), the integral in Eq. (6) can be calculated analytically, which leads to the following result for the static structure factor $$S(q)=\frac{1}{\left[1+2n\psi (q)/\epsilon (q)\right]^{1/2}}.$$ (8) Eqs. (5) and (8) are solved numerically to obtain $`S(q)`$ self-consistently. From the knowledge of the structure factor, the pair correlation function $`g(r)`$ is calculated as $$g(r)=1+\frac{1}{n}\int \frac{d\mathbf{q}}{(2\pi )^2}e^{i\mathbf{q}\cdot \mathbf{r}}\left[S(q)-1\right].$$ (9) The interaction energy is also related to the structure factor as $$E_{int}=\frac{1}{4\pi }\int _0^1\frac{d\lambda }{\lambda }\int v_\lambda (q)\left[S_\lambda (q)-1\right]q\,dq,$$ (10) where $`v_\lambda (q)=\lambda v(q)`$ and $`S_\lambda (q)`$ is the corresponding static structure factor. One can also compute the excitation energies from the poles of Eq. (4), leading to $$\omega (q)=\sqrt{\epsilon ^2(q)+2n\epsilon (q)v(q)\left[1-G(q)\right]}.$$ (11) In the next section, we present numerical results for various quantities of interest. ## IV Numerical results In this section we present the results of the numerical calculations for the physical quantities of interest. We numerically solve the set of Eqs. (5) and (8) with the repulsive potential defined in Eq. (3), and find the static structure factor. The calculations are done for different values of the two parameters $`m`$ and $`r_s`$. For the boson system $`m`$ can be considered as the mass of the particles (in units where $`\hbar =1`$), and for vortex lines $`m`$ is related to the temperature as $`m=\epsilon _0\epsilon _l\lambda ^2/T^2`$. Substituting the numerical values of the parameters, $`\epsilon _0=50`$ K/Å, $`\gamma =100`$, $`\lambda =1000`$ Å, the equivalent temperature is fixed as $`T\simeq 500/\sqrt{m}`$ K. On the other hand, $`r_s`$ is the inverse-density parameter, corresponding to the particle density for the boson system, or to the density of flux lines (FLs) in the vortex matter. The latter relationship is expressed as $`B=\varphi _0/(\pi r_s^2\lambda ^2)`$, or $`B\simeq 0.06/r_s^2`$ (Tesla). One of our numerical results, obtained by iterating Eqs. (5) and (8), is the behavior of the static structure factor, which is shown in Fig. 1 for a fixed $`r_s=0.6`$ ($`B\simeq 0.17`$ Tesla) and different $`m`$’s (temperatures). Fig. 1: The static structure factor in terms of $`q`$, for $`r_s=0.6`$ and different $`m`$’s. Fig. 1 shows the expected behavior of the static structure factor: the peak of $`S(q)`$ decreases with increasing $`T`$ (decreasing $`m`$). The disappearance of the peak in $`S(q)`$ with increasing temperature shows that the system undergoes a phase transition from the solid phase to the liquid phase. We can also see the melting transition by fixing the temperature and increasing the magnetic field. Choosing $`m=100`$ ($`T=50`$ K), we have found $`S(q)`$ for different values of $`r_s`$; see Fig. 2. Fig. 2: The static structure factor for $`m=100`$ and with different values of $`r_s`$. 
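A minimal sketch of the self-consistent cycle behind Figs. 1 and 2 may be helpful. The following Python fragment iterates Eqs. (5) and (8) on a small polar grid; the dimensionless units, grid sizes, mixing parameter and starting point are illustrative assumptions rather than the actual numerical settings used here:

```python
import numpy as np

# Sketch of the self-consistent STLS cycle, Eqs. (5) and (8), in dimensionless
# units (lambda = 1).  v(q) = 2*pi*v0/(1+q^2) is the transform of the screened-log
# potential; eps(q) = q^2/(2m).  Grids, mixing and tolerances are illustrative.
m, v0, rs = 100.0, 1.0, 0.6
n = 1.0 / (np.pi * rs**2)                  # density corresponding to r_s
q = np.linspace(1e-3, 20.0, 200)
th = np.linspace(0.0, 2.0 * np.pi, 121)
vq = 2.0 * np.pi * v0 / (1.0 + q**2)
eps = q**2 / (2.0 * m)

def local_field(S):
    """Eq. (5) evaluated on a polar (q', theta) grid."""
    Sm1 = lambda x: np.interp(x, q, S, right=1.0) - 1.0
    qp, c = q[:, None], np.cos(th)[None, :]
    G = np.empty_like(q)
    for i, qi in enumerate(q):
        dist = np.sqrt(qi**2 + qp**2 - 2.0 * qi * qp * c)
        integrand = (qi * qp * c / qi**2) * (vq[:, None] / vq[i]) * Sm1(dist) * qp
        G[i] = -np.trapz(np.trapz(integrand, th, axis=1), q) / (n * (2.0 * np.pi) ** 2)
    return G

S = np.ones_like(q)                        # start from the non-interacting gas
for it in range(80):
    psi = vq * (1.0 - local_field(S))      # effective potential
    S_new = 1.0 / np.sqrt(np.maximum(1.0 + 2.0 * n * psi / eps, 1e-10))  # Eq. (8)
    if np.max(np.abs(S_new - S)) < 1e-4:
        break
    S = 0.5 * (S + S_new)                  # linear mixing for stability
print(f"stopped after {it} iterations; max of S(q) = {S.max():.3f}")
```

The guard on the square-root argument reflects the physical instability near the transition, where $`G(q)`$ can exceed one and Eq. (8) ceases to have a real solution.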
The amplitude of the peaks becomes smaller with decreasing $`r_s`$, and it disappears for high magnetic fields. We thus have the same scenario for describing the phase transition. The melting transition of the vortex lattice can also be discussed by studying the behavior of the pair correlation function. The results for $`S(q)`$ can be used to calculate $`g(r)`$, defined in Eq. (9). The results for $`g(r)`$ at $`r_s=0.4`$ ($`B\simeq 0.38`$ Tesla) and different masses (temperatures) are plotted in Fig. 3. Fig. 3: The pair correlation function for $`r_s=0.4`$ with different temperatures. Fig. 3 shows that the pair correlation function has oscillatory behavior at low temperatures; its amplitude, however, becomes smaller with increasing temperature and disappears at very high temperatures. The oscillatory behavior is a signature of the solid phase; therefore, the system undergoes a transition from the solid phase to the liquid phase with increasing temperature. We have also determined the interaction energy of the system using Eq. (10). The results are shown in Table 1. $$\begin{array}{ccc}& & \\ r_s& B& E_{int}\\ & & \\ 0.1& 6& -2.15\\ 0.2& 1.5& -1.78\\ 0.4& 0.38& -1.36\\ 0.6& 0.17& -1.09\\ 0.8& 0.09& -0.89\\ 1& 0.06& -0.74\end{array}$$ Table 1: The interaction energy in the STLS approximation for $`m=50`$ and different densities (magnetic fields in Tesla). The numerical results show that the interaction energy increases with decreasing magnetic field strength. The results for the pair correlation function and the static structure factor are in good qualitative agreement with the expected phase diagram of the vortex system. However, to obtain a quantitative description, we use the spectrum of the excited energies of the system. We have plotted $`\omega (q)`$ in terms of $`q`$; see Fig. 4. Fixing the magnetic field but varying the temperature, the graph shows that $`\omega (q)`$ tends to zero at a finite $`q`$ . The appearance of a minimum in $`\omega (q)`$ corresponds to the fact that $`G(q)`$ becomes greater than one for some ranges of $`m`$. Fig. 4: The dispersion relation of the excited energy for $`r_s=0.6`$ for different values of $`m`$. The vanishing of the dispersion relation indicates that the system undergoes a phase transition at that point. Because of numerical restrictions we are not able to find the exact value of the point where $`\omega (q)`$ becomes zero. However, our numerical experience shows that close to the transition point the $`q`$ value of the minimum of $`\omega (q)`$ and the local field correction $`G(q)`$ are not highly sensitive to variations of $`m`$. Therefore, at least for determining the transition temperature, we have ignored their dependence on $`m`$ in Eq. (11). We thus fixed the magnetic field and found the temperature at which $`\omega (q)`$ becomes zero. The resulting $`B`$–$`T`$ phase diagram is plotted in Fig. 5. Fig. 5: The $`B`$–$`T`$ phase diagram of the flux lines. The solid line is the prediction of Ref. , obtained from the Lindemann criterion. Triangular points are the results of our numerical calculations. Fitting our data with the known result $`B=4c_L^4\varphi _0\epsilon _0\epsilon _l/(\sqrt{3}T^2)`$ , we found that the best fit is achieved by fixing $`c_L\simeq 0.15`$, and the results support the reports of Monte Carlo simulations . 
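Such a one-parameter Lindemann fit can be sketched as follows; note that the melting points entered below are made-up placeholders (the actual fitted points are those shown in Fig. 5), and $`\varphi _0\epsilon _0\epsilon _l`$ is absorbed into a single unit constant:

```python
import numpy as np

# Sketch of the one-parameter fit B_m(T) = 4 c_L^4 phi0 eps0 eps_l / (sqrt(3) T^2).
# The (T, B) melting points below are placeholders, not the data of Fig. 5,
# and phi0*eps0*eps_l is lumped into one constant in arbitrary units.
phi0_eps = 1.0
T = np.array([30.0, 40.0, 50.0, 60.0])     # K (placeholders)
B = np.array([0.45, 0.25, 0.16, 0.11])     # Tesla (placeholders)

# B*T^2 should equal the constant 4 c_L^4 phi0_eps/sqrt(3); average over the points.
const = np.mean(B * T**2)
cL = (np.sqrt(3.0) * const / (4.0 * phi0_eps)) ** 0.25
print(f"fitted Lindemann number: c_L = {cL:.3f}")
```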
## V van der Waals interaction Taking into account the direction-dependent interaction for real vortices gives rise to interesting results and predicts a van der Waals (vdW) interaction even for straight vortices . Therefore, it is interesting to consider the superposition of the short-range attractive and the long-range repulsive interactions and to study its consequences for the low-field phase diagram of anisotropic superconductors. It is shown that in the decoupled limit, $`\epsilon \to 0`$, the interaction potential is given by $$V(R)=v_0\left(K_0(R/\lambda )-a_{vdw}\varphi (R/\lambda )\frac{\lambda ^4}{R^4}\right),$$ (12) where $`v_0=2\epsilon _0/T`$ measures the amplitude of the direct interaction between flux lines, and $`a_{vdw}`$ determines the strength of the thermal vdW attraction. The function $`\varphi (x)`$ smoothly cuts off the power-law part for $`R<\lambda `$ and is defined as $$\varphi (x)=\{\begin{array}{cc}0,\hfill & x\le x_1\hfill \\ \frac{1}{4}\left[1+\mathrm{sin}\left(\pi \frac{x-(x_1+x_2)/2}{x_2-x_1}\right)\right]^2,\hfill & x_1<x<x_2\hfill \\ 1,\hfill & x\ge x_2\hfill \end{array}$$ (13) with $`x_1=1`$ and $`x_2=5`$. The choice of the cutoff function and the values of $`x_1`$ and $`x_2`$ are to some extent arbitrary . The amplitude of the vdW attraction is given by $`a_{vdw}\simeq T/(2\epsilon _0d\mathrm{ln}^2(\pi \lambda /d))`$, where $`d`$ is the layer spacing. Using the BSCCO numerical values for the parameters, one finds that $`a_{vdw}\simeq 2\times 10^{-5}\,T/\mathrm{K}`$, which is $`2\times 10^{-3}`$ at the temperature $`T=100`$ K, close to the critical temperature $`T_c`$ . The potential defined in Eq. (12) is used to calculate the static structure factor in the STLS approximation. The results are plotted in Fig. 6, where $`A_{vw}=a_{vdw}v_0`$. Fig. 6: The static structure factor for fixed magnetic field $`r_s=0.2`$ with different temperatures and strengths of the vdW attraction. The results show that at high magnetic fields the vdW interaction is important only at temperatures close to the transition. One should note that the vdW interaction has very important consequences at low magnetic fields and temperatures. However, in our approach it is not possible to consider this case, and one has to use other approximations or Monte Carlo simulations to see these effects. ## VI Conclusions In this paper we studied the Bose model in the STLS approximation. We found the static structure factor, pair correlation function, interaction energy and spectrum of the excited energies for different values of the mass (temperature) and the density (magnetic field). We discussed how the results may be applied to the phase diagram of the vortex matter. If one fixes the magnetic field (temperature) and calculates $`S(q)`$ for different temperatures (magnetic fields), the gradual disappearance of the peaks in $`S(q)`$ manifests the existence of the phase transition. One can also use the pair correlation function to describe the melting transition. We showed that the change in the behavior of $`g(r)`$ from oscillatory to rather smooth can be explored within our numerical scheme, so that the hallmark of the transition from the solid phase to the liquid phase can be observed . The results were in good qualitative agreement with the phase diagram of the FLs. 
To estimate the $`B`$–$`T`$ phase diagram quantitatively, we invoked the behavior of the spectrum of the excited energies; the resulting phase diagram supports the expected results for high-temperature superconductors and Monte Carlo simulations quite well. We also added the attractive van der Waals potential and studied the effect of both repulsive and attractive potentials on the phase diagram. The results indicate that at high magnetic fields the vdW interaction is important only at temperatures close to the melting temperature. We emphasize that our approach does not work at low magnetic fields; hence, it would be interesting to use other methods to determine the effects of both repulsive and attractive potentials at low magnetic fields and temperatures. To our knowledge, this is the first time that the STLS approximation has been applied to the study of the vortex system, and our work shows that the STLS approximation is applicable to other aspects of vortex systems as well and might help to reveal some unknown properties in further investigations. We have benefited from useful discussions with R. Asgari, J. Davoudi and M. Kardar. M. Kohandel acknowledges support from the Institute for Advanced Studies in Basic Sciences, Zanjan, Iran. We also acknowledge support from the Institute for Studies in Theoretical Physics and Mathematics, Tehran, Iran.
# Hadroproduction of the 𝜒₁ and 𝜒₂ States of Charmonium in 800 GeV/c Proton-Silicon Interactions Charmonium hadroproduction has provided interesting challenges to the understanding of QCD. Early attempts to describe the formation of a $`c\overline{c}`$ bound state, according to the color evaporation or color singlet models, did not provide a satisfactory description of the available data. More recently, the Non-Relativistic QCD Factorization Approach , which incorporates in a more rigorous fashion some of the features of the previous models, has provided a more successful description of the process. For any model, a rather crucial test has been the prediction of the relative rate of production for different charmonium states. In particular, when dealing with proton-induced processes, the absence of quark annihilation diagrams and the suppression in Leading Order of $`\chi _1`$ production gluonic diagrams implies rather small values, typically less than 10%, for the ratio of $`\chi _1`$ to $`\chi _2`$ production . The measurement presented here represents a significant contribution to the available data, since it is the first observation of cleanly resolved $`\chi _1`$ and $`\chi _2`$ states in a proton-induced fixed target experiment. The FNAL E771 experiment utilized a large-acceptance spectrometer to measure several processes containing muons in the final state. Protons of 800 GeV/c momentum were transported by the Fermilab Proton West beam line to the High Intensity Laboratory, where they hit a 24 mm thick silicon target. Operating at a beam intensity of $`3.6\times 10^7`$ protons per spill-second, the experiment accumulated a total of 6.4 $`\times 10^{11}`$ p-Si interactions. The incoming proton beam trajectory and flux were measured by a six plane silicon detector station. The 0.26 radiation length target was composed of twelve 2 mm silicon foils separated by 4 mm. The target was followed by a microvertex detector consisting of fourteen 300 $`\mu `$m thick silicon planes that, while not used in the analysis presented here, contributed an additional 0.045 $`X_0`$ to the target region radiation length. The spectrometer’s tracking system consisted of seven multi-wire proportional chambers and three drift chambers upstream plus three drift chambers and six combination drift/pad/strip chambers downstream of a dipole analysis magnet which provided an 821 MeV/c $`p_t`$ kick in the horizontal plane. Downstream of the wire chamber system, an electromagnetic calorimeter consisting of an active converter and 396 scintillating glass and lead glass blocks was used for electron/positron identification. The final element of the spectrometer, a set of three planes of resistive plate counters (RPC’s) segmented into 512 readout pads and sandwiched between layers of steel and concrete absorbers, provided muon identification. The material in the absorber walls represented an energy loss of 10 GeV in the central region and 6 GeV in the outer region of the detector for the incident muons. A dimuon trigger selected events with a $`J/\psi `$ in the final state via the decay $`J/\psi \to \mu ^+\mu ^-`$. A trigger muon was defined as the triple coincidence of the *OR* of 2 $`\times `$ 2 pads in the first RPC plane and the *OR* of 6 $`\times `$ 6 pads in the second and third RPC planes in projective arrangements. A dimuon trigger was defined as two such triple coincidences. 
The trigger reduced the 1.9 MHz interaction rate by a factor of $`10^4`$, selecting approximately 1.3 $`\times 10^8`$ dimuon events to be written to tape. The seed for muon track reconstruction was provided by the RPC triple coincidences. The roads formed by the pads involved in the coincidences were projected into the rear chamber set, identifying a region in which to search for candidate muon tracks. Muon tracks reconstructed in the rear chamber set were then matched with tracks found in the front chamber set by requiring a good front-rear linking $`\chi ^2`$. Muon pairs were required to come from a common vertex by applying a cut to their distance of closest approach. About fifty thousand dimuon events survived the reconstruction process, quality cuts, and vertex cuts. Figure 1 shows the resulting dimuon mass spectrum containing peaks corresponding to the $`J/\psi `$, $`\psi (2S)`$, and $`\mathrm{{\rm Y}}`$ (inset). Superimposed on the dimuon mass spectrum is a fit to the data obtained with the sum of two Gaussians for the $`J/\psi `$ peak, a single Gaussian for the $`\psi (2S)`$, and the form $`\frac{a}{m_{\mu \mu }^3}\mathrm{exp}(-bm_{\mu \mu })`$ for the continuum background. The two-Gaussian fit to the $`J/\psi `$ peak is a good approximation (as confirmed by Monte Carlo) to a non-constant mass resolution, caused by the confusion associated with increases in hit density near the beam region. The number of $`J/\psi `$’s and $`\psi (2S)`$’s after background subtraction was 11,660 $`\pm `$ 139 and 218 $`\pm `$ 24, respectively . Events in a window of $`\pm `$ 100 MeV/c² around the $`J/\psi `$ mass were refit varying the muon momenta within measurement errors, with the constraint that the invariant mass of the pair be equal to the $`J/\psi `$ mass. The resulting dimuon event sample was then inspected to search for $`e^+e^-`$ pairs that might be the result of conversions of photons from $`\chi \to J/\psi \gamma `$ decays. Dimuon events which contained pairs of tracks matching the topology of a $`\gamma \to e^+e^-`$ conversion in the target region – collinear before the magnet in both bend and non-bend projections, collinear in the non-bend plane and coplanar in the bend plane after the magnet – were then designated as $`\chi `$ decay candidates. All electron/positron pair candidates were required to satisfy additional conditions. At least one of the two track candidates was required to be associated with an energy deposition in the calorimeter consistent with an electromagnetic shower. In addition, the total transverse momentum of the $`e^+e^-`$ pair in the rear of the magnet was required to be zero (within the resolution of the spectrometer) relative to the common $`e^+e^-`$ trajectory in front of the magnet. To quantify how well a pair fitted the $`\gamma \to e^+e^-`$ hypothesis, a $`\chi ^2`$ was formed, $$\chi ^2=\frac{(a_{x1}-a_{x2})^2}{\sigma _{ax1}^2+\sigma _{ax2}^2}+\frac{(a_{y1}-a_{y2})^2}{\sigma _{ay1}^2+\sigma _{ay2}^2}+\frac{(b_{y1}-b_{y2})^2}{\sigma _{by1}^2+\sigma _{by2}^2}$$ (1) where $`a_{x1}`$ and $`a_{x2}`$ are the electron and positron track intercepts at the magnet in the bend plane, $`a_{y1}`$ and $`a_{y2}`$ the track intercepts in the non-bend plane, $`b_{y1}`$ and $`b_{y2}`$ the track slopes in the non-bend plane, and the $`\sigma `$’s are the measurement errors on these quantities. The electron/positron candidate with the smallest $`\chi ^2`$ in a given event was designated as a photon conversion candidate. 
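A compact sketch of how the selection of Eq. (1) might be coded is given below; the track parameters in the example are invented numbers, and this is only an illustration of the χ² logic, not the E771 reconstruction software:

```python
# Sketch of the pairing chi^2 of Eq. (1); all numbers below are invented.
def conversion_chi2(ax, sax, ay, say, by, sby):
    """chi^2 for the gamma -> e+ e- hypothesis built from two tracks.
    ax, ay: (track 1, track 2) intercepts at the magnet in the bend/non-bend planes;
    by: (track 1, track 2) slopes in the non-bend plane; s*: measurement errors."""
    return ((ax[0] - ax[1]) ** 2 / (sax[0] ** 2 + sax[1] ** 2)
            + (ay[0] - ay[1]) ** 2 / (say[0] ** 2 + say[1] ** 2)
            + (by[0] - by[1]) ** 2 / (sby[0] ** 2 + sby[1] ** 2))

# per event, keep the candidate pair with the smallest chi^2
pairs = [
    ((0.12, 0.13), (0.01, 0.01), (0.30, 0.31), (0.02, 0.02), (0.050, 0.056), (0.005, 0.005)),
    ((0.40, 0.10), (0.01, 0.01), (0.25, 0.33), (0.02, 0.02), (0.050, 0.090), (0.005, 0.005)),
]
best = min(pairs, key=lambda p: conversion_chi2(*p))
print("smallest chi^2 =", round(conversion_chi2(*best), 2))
```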
Additional cuts requiring a good $`\chi ^2`$, the transverse momentum of the parent photon to be between 250 and 700 MeV/c, and the invariant mass squared of the $`e^+e^-`$ pair to be less than 3000 (MeV/c²)² were applied to maximize signal to background in the final sample of events containing both a $`J/\psi `$ and a photon conversion. The $`J/\psi \,e^+e^-`$ invariant mass shown in Fig. 2 was calculated using the electron and positron momenta obtained from the tracking system. Clear $`\chi _1`$ and $`\chi _2`$ signals can be seen. The background to the $`\chi _1`$ and $`\chi _2`$ was well-described by uncorrelated $`e^+e^-`$ and $`J/\psi `$ combinations: the solid line of Fig. 2 was obtained by fitting two Gaussians plus a polynomial background. The polynomial background was obtained by fitting the mass distributions of $`J/\psi `$’s and $`e^+e^-`$’s extracted from different events. The numbers of $`\chi _1`$ and $`\chi _2`$ obtained from the fit are 33 $`\pm `$ 9 and 33 $`\pm `$ 10, respectively. The fitted width is 5.2 $`\pm `$ 2.0 MeV/c² for both the $`\chi _1`$ and $`\chi _2`$ peaks. To determine the total cross section for $`\chi _1`$ and $`\chi _2`$ production, the overall acceptance times efficiency for photon conversion and for electron/positron acceptance and reconstruction had to be determined. To accomplish this, a Monte Carlo sample of $`\chi \to J/\psi \gamma `$, $`J/\psi \to \mu ^+\mu ^-`$ decays was generated using Pythia . The photon and the muons were then propagated through a GEANT simulation of the E771 detector, including $`\gamma `$ conversion, scattering, bremsstrahlung and dE/dx. Hits from the Monte Carlo tracks obtained by this prescription were then inserted into actual dimuon trigger events to realistically simulate backgrounds and losses in pattern recognition due to confusion from noise hits and other tracks. Measured detector efficiencies were also applied to the inserted hits. These hybrid Monte Carlo and data events were analyzed in a manner identical to the data in order to determine acceptances and tracking efficiencies. Rather than attempting to simulate the electromagnetic calorimeter response to $`e^\pm `$ in detail in a Monte Carlo, the efficiency of matching an electron or positron candidate to a shower in the calorimeter was determined using a large sample of electron/positron pairs from photon conversions in minimum bias events. A sample of $`e^+e^-`$ pairs with kinematics similar to those of the $`\chi `$ $`e^+e^-`$ pairs was collected using very tight cuts to ensure an $`e^+e^-`$ identity. This sample was then subjected to the same constraints as those applied in the $`\chi `$ analysis. The resulting overall acceptance times efficiency (inclusive of conversion probability) for photons from $`\chi `$ decay was determined to be $`(8.25\pm 0.4)\times 10^{-3}`$. Using the $`\gamma \to e^+e^-`$ acceptance and efficiency, the measured branching ratios for $`\chi _1`$ and $`\chi _2`$ into $`J/\psi \gamma `$ , the measured $`J/\psi `$ pN forward cross section at $`\sqrt{s}`$=38.8 GeV and the number of observed $`\chi _1`$, $`\chi _2`$ and $`J/\psi \to \mu ^+\mu ^-`$, the absolute $`\chi _1`$ and $`\chi _2`$ cross sections for $`x_F>0`$ were calculated to be $`\sigma (\chi _1)`$ = 263 $`\pm `$ 69(stat) $`\pm `$ 32 (syst) nb/nucleon and $`\sigma (\chi _2)`$ = 498 $`\pm `$ 143(stat) $`\pm `$ 67 (syst) nb/nucleon, respectively. 
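As a rough illustration of how such quoted uncertainties combine in the cross-section ratio discussed below, one may propagate the statistical and systematic errors in quadrature. The sketch deliberately ignores correlated systematics, such as the common J/ψ cross-section error, which cancel in the ratio:

```python
import math

# Quadrature combination and propagation of the quoted uncertainties to the ratio
# sigma(chi1)/sigma(chi2).  Note: systematics common to both states (e.g., the 9%
# J/psi cross-section error) cancel in the ratio, so the published systematic error
# on the ratio is smaller than this naive, uncorrelated estimate.
s1, ds1_stat, ds1_syst = 263.0, 69.0, 32.0    # nb/nucleon
s2, ds2_stat, ds2_syst = 498.0, 143.0, 67.0   # nb/nucleon

r = s1 / s2
dr_stat = r * math.hypot(ds1_stat / s1, ds2_stat / s2)
dr_syst = r * math.hypot(ds1_syst / s1, ds2_syst / s2)
print(f"ratio = {r:.2f} +- {dr_stat:.2f} (stat) +- {dr_syst:.2f} (syst, uncorrelated)")
```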
The main contributions to the systematic errors came from the error on the $`J/\psi `$ cross section (9%), the uncertainty in the knowledge of the cut efficiencies (5%) and the errors on the branching ratios for $`\chi _1`$ (6%) and $`\chi _2`$ (8%) . Using the production cross sections for $`\chi _1`$ and $`\chi _2`$, the ratio of the $`\chi _1`$ to $`\chi _2`$ production cross sections was determined to be $`\sigma (\chi _1)/\sigma (\chi _2)`$ = 0.53 $`\pm `$ 0.20(stat) $`\pm `$ 0.07(syst). Combining this result with the two previous measurements of $`\chi `$ production by a proton beam , we have computed the world average (shown in Fig. 3) to be $`\sigma (\chi _1)/\sigma (\chi _2)`$ = 0.31 $`\pm `$ 0.14. This figure is consistent with the latest NRQCD estimates of $`\sim `$0.3 , where $`\chi _1`$ production was boosted by the inclusion of higher order terms in the velocity expansion . Finally, the energy dependence of the combined $`\chi _1`$ and $`\chi _2`$ production near threshold was compared to the corresponding quantity for $`J/\psi `$ production. In Ref. data on $`J/\psi `$ production from seventeen different pN experiments over a large range of center-of-mass energy, $`\sqrt{s}\simeq `$ 8 to 52 GeV, were fit as a function of $`\sqrt{s}`$. The $`J/\psi `$ production data near threshold was well represented by the function $`\sigma (\sqrt{s})_{J/\psi }=\sigma _0(1-M_{J/\psi }/\sqrt{s})^\beta `$, with $`\sigma _0=1.0\pm 0.1`$ $`\mu `$b/nucleon and $`\beta =11.8\pm 0.5`$. To check whether $`\chi `$ production has similar dynamics as $`J/\psi `$ production, the sum of the $`\chi `$ cross sections has been fit to a similar parameterization with $`\beta `$ fixed to the $`J/\psi `$ value and $`M_\chi `$ replacing $`M_{J/\psi }`$. The result of the fit, shown in Fig. 4, demonstrates the similarity of the $`J/\psi `$ threshold production parameterization to the threshold behavior of the combined $`\chi `$ state cross sections. The fit yields $`\sigma _0=2.3\pm 0.4`$ $`\mu `$b/nucleon for the asymptotic $`\sigma (\chi )`$ cross section. We wish to thank Fermilab, the U.S. Department of Energy, the National Science Foundation, the Istituto Nazionale di Fisica Nucleare of Italy, the Natural Science and Engineering Research Council of Canada, the Institute for Particle and Nuclear Physics of the Commonwealth of Virginia, and the Texas Advanced Research Program for their support. To whom correspondence should be addressed. Electronic address: cox@uvahep.phys.virginia.edu
# Realization of Bose-Einstein condensation of dilute magnons in TlCuCl3 ## Abstract The recent observation \[Oosawa et al. J. Phys. : Condens. Matter 11, 265 (1999)\] of the field-induced Néel ordering in the spin-gap magnetic compound TlCuCl₃ is interpreted as a Bose-Einstein condensation of magnons. A mean-field calculation based on this picture is shown to describe well the temperature dependence of the magnetization. The present system opens a new area for studying Bose-Einstein condensation of a thermodynamically large number of particles in a grand-canonical ensemble. Bose-Einstein condensation (BEC) is one of the most exotic phenomena predicted by quantum mechanics . Although the superfluid transition of helium-4 may be regarded as a BEC, it is influenced strongly by the interaction and is much different from the condensation of an ideal or dilute Bose gas. Recently, there has been a renewed interest in BEC, because the realization of BEC by ultracooling of dilute atoms has become possible . While the BEC of ultracooled atoms is of great interest, there are various experimental limitations. On the other hand, it has been known for a long time that a quantum spin system can be mapped to an interacting Bose gas, and that the off-diagonal long-range order which characterizes BEC corresponds to a long-range magnetic order in the spin system . It is then possible to tune the density of bosons (magnons) by a magnetic field to observe BEC of dilute bosons. However, such an attempt has apparently been lacking. In this letter, we argue that BEC of dilute bosons in a thermodynamic number $`\sim 10^{20}`$ is realized in a recent high-field experiment on TlCuCl₃ , which is composed of chemical double chains of Cu₂Cl₆ . The compound has an excitation gap $`\mathrm{\Delta }/k_\mathrm{B}\simeq 7.5`$ K above the singlet ground state, in the absence of the magnetic field . The origin of the gap may be attributed to the antiferromagnetic dimer coupling in the double chain. When the external field reaches $`H_g=\mathrm{\Delta }/(g\mu _\mathrm{B})`$, corresponding to the gap, the gap collapses. At finite temperature, the “collapse” of the gap at $`H_g`$ does not give a singularity because thermal excitations exist even if $`H<H_g`$. However, there seems to be a phase transition due to the interchain interactions at a higher field $`H=H_c>H_g`$, which depends on the temperature. In Ref. the phase transition was identified as a long-range magnetic ordering, and was compared with a mean-field theory (MFT) based on a dimer model. While the dimer MFT does predict the field-induced ordering, the experimental features were not well reproduced. In particular, it predicts an almost flat dependence of the critical temperature $`T_c`$ on the magnetic field, while in the experiment $`T_c`$ depends on the magnetic field by a power law $`T_c^\varphi \propto H-H_g`$ (see Fig. 1). Moreover, it predicts an almost constant magnetization for $`T<T_c`$ and a concave magnetization for $`T>T_c`$, as a function of temperature $`T`$. However, in the experiment, the magnetization was found to increase with decreasing $`T`$ below $`T_c`$, and it is a convex function of $`T`$ for $`T>T_c`$ (see Fig. 2). We will show that the transition is rather well described as a BEC of magnons. While the details of the exchange interactions in TlCuCl₃ are not known yet, excitations above the singlet ground state can generally be treated as a collection of bosonic particles – magnons . 
If the exchange interaction is isotropic, which seems to be the case in TlCuCl₃, the number of magnons is conserved on a short timescale (but not on a longer timescale). We assume that magnons carry spin $`1`$, as generally expected. Under a magnetic field $`H\simeq H_g`$, the magnons with $`S^z=1`$ can be created at small energy cost. Thus, at low temperatures $`T\ll \mathrm{\Delta }`$ and $`H\simeq H_g`$, we can consider only those magnons. The chemical potential of the magnons is given by $`\mu =g\mu _\mathrm{B}(H-H_g)`$. The total number of magnons $`N`$ is associated with the total magnetization $`M`$ through $`M=g\mu _\mathrm{B}N`$. If the magnons were free bosons, the number of magnons would be infinite for $`H>H_g`$. However, in the spin system, magnons cannot occupy the same sites and thus there is a hard-core-type interaction between them. The interaction keeps the number of magnons finite. The transverse components of the exchange interactions give rise to hopping of the magnons, while the longitudinal component gives rise to the interaction. Although the exchange interaction, and thus the hopping, might be complicated, generically the dispersion relation of a magnon is quadratic near the bottom. Thus the low-energy effective Hamiltonian for the ($`S^z=1`$) magnons is given by $$H\simeq \sum _k\left[\left(\sum _{\alpha =x,y,z}\frac{\hbar ^2k_\alpha ^2}{2m_\alpha }\right)-\mu \right]a_k^{\dagger }a_k+\frac{1}{2}\sum _{k,k^{\prime },q}v(\mathbf{q})\,a_{k+q}^{\dagger }a_{k^{\prime }-q}^{\dagger }a_ka_{k^{\prime }}+\cdots $$ (2) Here the momentum $`\mathbf{k}`$ is measured from the minimum of the magnon dispersion. For simplicity, we do not consider the case where the magnon dispersion has more than one minimum . The effective masses $`m_\alpha `$ are related to the curvature of the dispersion relation in the direction $`\alpha `$. By a rescaling of momentum, we may consider an isotropic effective Hamiltonian instead. This is nothing but a system of non-relativistic bosons with a short-range interaction. Moreover, in the low-density and low-temperature limit, only the two-particle interaction is important and it can be replaced by a delta-function interaction $`v(q)\to v_0`$. Thus the effective Hamiltonian is given by $$H=\sum _k\left(\frac{\hbar ^2k^2}{2m}-\mu \right)a_k^{\dagger }a_k+\frac{v_0}{2}\sum _{k,k^{\prime },q}a_{k+q}^{\dagger }a_{k^{\prime }-q}^{\dagger }a_ka_{k^{\prime }}.$$ (3) This effective Hamiltonian can be derived from some specific models . However, we emphasize that it is universal in the low-temperature and low-magnon-density limit, and does not depend on details of the exchange interaction. Since the number of magnons is actually not conserved, due to the small effects neglected in the Hamiltonian, we have a grand canonical ensemble of the bosons. The “chemical potential” can be controlled precisely by tuning the magnetic field. When the chemical potential becomes larger than a critical value, the system undergoes a BEC. Thus the spin-gap system in general would provide a great opportunity to study BEC in a grand canonical ensemble, with a thermodynamically large number of particles. The idea that BEC is induced by the magnetic field in a spin-gap system has appeared several times. There was a discussion of (quasi-) Bose condensation in a Haldane gap system under a magnetic field . However, there is no BEC at finite temperature in a one-dimensional system. 
On the other hand, the experiments on Haldane gap systems are often affected by the anisotropy and the staggered $`g`$-tensor, which wipe out the BEC. Giamarchi and Tsvelik have recently discussed the three-dimensional ordering in coupled ladders in connection with BEC. However, as far as we know, there has been no experimental observation of the magnon BEC induced by an applied field. We first consider the normal (non-condensed) phase. Within the Hartree-Fock (HF) approximation, the momentum distribution of the magnons is given by $$n_k\equiv \langle a_k^{\dagger }a_k\rangle =\frac{1}{e^{\beta (\epsilon _k-\mu _{\mathrm{eff}})}-1},$$ (4) with $`\epsilon _k\equiv \hbar ^2k^2/2m`$ and $`\mu _{\mathrm{eff}}\equiv \mu -2v_0n`$. The magnon density $`n\equiv N/N_d`$ ($`N_d`$ is the total number of the dimer pairs) has to be determined self-consistently from $$n=\int \frac{d^3k}{(2\pi )^3}n_k=\frac{1}{\mathrm{\Lambda }^3}g_{3/2}(z),$$ (5) where $`z\equiv e^{\beta \mu _{\mathrm{eff}}}`$ is the fugacity, $`\mathrm{\Lambda }\equiv (2\pi \hbar ^2/mk_\mathrm{B}T)^{1/2}`$ is the thermal de Broglie wavelength, and $`g_n(z)\equiv \sum _{l=1}^{\infty }z^l/l^n`$ is the Bose-Einstein function. BEC occurs when the effective chemical potential $`\mu _{\mathrm{eff}}`$ vanishes, so that $`\mu =2v_0n`$. Setting $`z=1`$ in (5) gives the temperature dependence of the critical value of the chemical potential $$\mu _c=2v_0\left(\frac{mk_\mathrm{B}T}{2\pi \hbar ^2}\right)^{3/2}\zeta (3/2).$$ (6) This implies that the temperature dependence of the critical magnetic field at low temperatures is $`H_c(T)-H_g\propto T^{3/2}`$. This power-law dependence is independent of the interaction parameter $`v_0`$. When $`\mu >\mu _c`$, one has the macroscopic condensate order parameter $`\langle a_0\rangle =\sqrt{N_c}e^{i\theta }\ne 0`$, where $`N_c`$ is the total number of the condensate magnons. In terms of the original spin system, this means that there is a (staggered) transverse magnetization component $`m_\perp =g\mu _\mathrm{B}\sqrt{n_c/2}`$ with $`n_c\equiv N_c/N_d`$. Within the Hartree-Fock-Popov (HFP) approximation, the condensate density is determined by $$\mu =v_0n_c+2v_0\stackrel{~}{n},$$ (7) where $`\stackrel{~}{n}=n-n_c`$ is the density of the non-condensed magnons, which is given by $$\stackrel{~}{n}=\int \frac{d^3k}{(2\pi )^3}\left[\left(\frac{\epsilon _k+v_0n_c}{2E_k}-\frac{1}{2}\right)+\frac{\epsilon _k+v_0n_c}{E_k}f_\mathrm{B}(E_k)\right]$$ (8) $$=\frac{1}{3\pi ^2}\left(\frac{mv_0n_c}{\hbar ^2}\right)^{3/2}+\int \frac{d^3k}{(2\pi )^3}\frac{\epsilon _k+v_0n_c}{E_k}f_\mathrm{B}(E_k),$$ (9) where we have used the HFP energy spectrum $`E_k=\sqrt{\epsilon _k^2+2\epsilon _kv_0n_c}`$ and the Bose distribution $`f_\mathrm{B}(E_k)=1/(e^{\beta E_k}-1)`$. The first term of (9) represents the depletion of the condensate due to the interaction between magnons, which reduces to the ground-state non-condensate density at $`T\to 0`$. The second term is the contribution from thermally excited non-condensate magnons, which vanishes at $`T\to 0`$. Eqs. (7) and (9) are to be solved self-consistently. 
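The normal-phase self-consistency and the condensation boundary can be sketched numerically as follows (Python; units with ħ = k_B = 1, a truncated series for g_{3/2}, and the rough parameter values below are illustrative assumptions):

```python
import numpy as np
from scipy.special import zeta

# Sketch of the normal-phase self-consistency, n = g_{3/2}(z)/Lambda^3 with
# z = exp[(mu - 2 v0 n)/T], and of the BEC boundary mu_c(T) of Eq. (6).
# Units: hbar = k_B = 1; the values of m and v0 are rough assumptions.
def g32(z, lmax=4000):
    l = np.arange(1, lmax + 1)
    return np.sum(z**l / l**1.5)            # truncated series; crude near z = 1

def density(mu, T, m=0.025, v0=400.0, iters=400, mix=0.3):
    lam3 = (2.0 * np.pi / (m * T)) ** 1.5   # Lambda^3
    n = 0.0
    for _ in range(iters):
        z = np.exp(min((mu - 2.0 * v0 * n) / T, 0.0))  # mu_eff <= 0 in the normal phase
        n = (1.0 - mix) * n + mix * g32(z) / lam3      # damped fixed-point iteration
    return n

def mu_c(T, m=0.025, v0=400.0):
    return 2.0 * v0 * zeta(1.5, 1) * (m * T / (2.0 * np.pi)) ** 1.5  # Eq. (6)

for T in (1.0, 2.0, 4.0):
    print(f"T = {T}: mu_c = {mu_c(T):.4f},  n(mu = mu_c) = {density(mu_c(T), T):.2e}")
```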
Then the total magnon density is given by $$n=n_c+\stackrel{~}{n}=\frac{\mu }{v_0}-\stackrel{~}{n}.$$ (10) In particular, the magnon density at $`T\to 0`$ is given by $$n\simeq \frac{\mu }{v_0}-\frac{1}{3\pi ^2}\left(\frac{m\mu }{\hbar ^2}\right)^{3/2}.$$ (11) If we ignore the deviation of $`z`$ from $`1`$ for $`T\ge T_c`$, we obtain a simple result (neglecting $`v_0n_c`$ in the thermal term of (9) for $`T<T_c`$): $$\frac{n(T)}{n(T_c)}=\left(\frac{T}{T_c}\right)^{3/2}\quad (T>T_c),$$ (12) $$\frac{n(T)}{n(T_c)}=2-\left(\frac{T}{T_c}\right)^{3/2}\quad (T<T_c).$$ (13) Thus it predicts a cusp-like minimum of the magnon density (magnetization) at $`T=T_c`$. In contrast, the dimer MFT predicts a constant magnetization below $`T_c`$. Figure 2 shows the observed low-temperature magnetizations of TlCuCl₃ at various external fields for $`H\parallel b`$. We can see the cusp-like anomaly at the transition temperature, as predicted by the present theory. A similar temperature dependence of the magnetization can be observed for $`H\perp (1,0,\overline{2})`$ . Thus the main qualitative feature of the temperature dependence of the magnetization, which cannot be understood in the dimer MFT, is captured by the magnon BEC picture. The increase of $`n`$ with decreasing $`T`$ below $`T_c`$ is due to condensation of the bosons; the cusp shape of the magnetization curve observed in the experiment can be regarded as evidence of the magnon BEC. We note that, in the range of the experiment, the magnon density is of order $`10^{-3}`$ and is consistent with the assumption of diluteness. However, the approximation (13) does not precisely reproduce the experimental result. In particular, it predicts $`n`$ to be independent of the applied field ($`\mu `$) for $`T>T_c`$, while a dependence was observed experimentally. Part of the discrepancy may be due to the approximation $`z=1`$. Actually, even in the HF framework, the approximation $`z=1`$ cannot be justified. In Fig. 3, we plot the temperature dependence of the total density $`n`$ above and below the transition temperature $`T_c`$, obtained by numerically solving the self-consistency equations ((5) above $`T_c`$; (7) and (9) below $`T_c`$). The interaction parameter $`v_0`$ and the effective mass $`m`$ are estimated from the experimental data as $`v_0/k_\mathrm{B}\simeq 400`$ K and $`mk_\mathrm{B}/\hbar ^2\simeq 0.025`$ $`\mathrm{K}^{-1}`$. The self-consistent calculation does predict the total density $`n`$ dependent on the applied field for $`T>T_c`$, which is qualitatively consistent with the experiment. In Fig. 4 we also plot the temperature dependence of the staggered transverse magnetization component $`m_\perp `$. Direct measurements of $`m_\perp `$ using neutron diffraction are in fact planned. We see a discontinuity in the magnon density (magnetization) at the transition point. This is because our HFP approximation is inappropriate in the critical region and leads to an unphysical jump in the condensate density $`n_c`$ (for a detailed discussion, see ). In the vicinity of the critical point, the HFP approximation eventually breaks down; the critical behavior then belongs to the so-called 3D XY universality class . On this ground, in the vicinity of $`T_c`$, the transverse magnetization is expected to behave as $`m_\perp \propto (T_c-T)^\beta `$, where $`\beta \simeq 0.35`$. Figure 1 shows the experimentally determined magnetic phase diagram of TlCuCl₃. 
We fit the phase boundary $`H_c`$ as a function of temperature with the following formula: $$(g/2)\left[H_c(T)-H_c(0)\right]\propto T^\varphi .$$ (14) The best fit is obtained with $`(g/2)H_c(0)=5.61`$ T and $`\varphi =2.2`$ . The obtained exponent $`\varphi =2.2`$ disagrees somewhat with the HF approximation (6), which gives $`\varphi =3/2`$. We note that $`z=1`$ holds exactly at the transition point, and thus $`\varphi =3/2`$ is a definite conclusion within the HF framework. On the other hand, the dimer MFT predicts $`H_c(T)`$ to be exponentially flat at low temperature . The observed power-law dependence is qualitatively consistent with the magnon BEC picture, compared to the dimer MFT. As discussed above, our mean-field analysis for a dilute Bose gas is not reliable in the critical region, and thus the discrepancies with the experiment may be attributed to fluctuation effects. A more precise description of the experiment near the critical point therefore requires the inclusion of the fluctuation effects. Furthermore, in the experiment there may be other effects that were ignored in the effective Hamiltonian (3), such as impurities. These will be interesting problems to be studied in the future. To conclude, we believe that the essential feature of the experimental observation on TlCuCl₃, which cannot be understood in the traditional dimer MFT, is captured by the magnon BEC picture. The present system provides the first clear experimental observation of field-induced magnon BEC, with a thermodynamically large number of particles. It opens a new area of BEC research in a grand canonical ensemble with an easily tunable chemical potential (magnetic field). Similar BEC of magnons would be observed in other magnetic materials in the vicinity of the gapped phase, which may be the singlet ground state due to large single-ion anisotropy , the completely polarized state , or the “plateau” phase in the middle of the magnetization curve . An essential requirement for observing BEC is that the system has rotational invariance about the direction of the applied magnetic field, so that the number of magnons is (approximately) conserved. We thank H. Shiba for useful comments. T.N. was supported by JSPS and M.O. was supported in part by a Grant-in-Aid from the Ministry of Education, Culture and Science of Japan.
# Phase Lags of QPOs in Microquasar GRS 1915+105 ## 1 Introduction Black hole candidates (BHCs) are known to vary strongly in X-ray (van der Klis 1995; Cui 1999). The variability can sometimes show a characteristic periodicity, in the form of a quasi-periodic oscillation (QPO). For BHCs, QPOs were initially observed only in a few sources at very low frequencies ($`<`$ 1 Hz), with the exception of the rare “very-high-state” (VHS) QPOs at a few Hz observed of GX 339-4 and GS 1124-68 (van der Klis 1995 and references therein). Since the launch of Rossi X-ray Timing Explorer (RXTE; Bradt et al. 1993), new QPOs have been discovered at an accelerated pace, thanks to the advanced instrumentation of RXTE. Not only are the QPOs now seen in more BHCs, they are also detected at increasingly higher frequencies (Cui 1999 and references therein). The phenomenon now spans a broad spectrum from a few mHz to several hundred Hz. Despite the observational advances, the origin of QPOs remains uncertain. Progress has been made empirically by correlating the observed properties of the QPOs, such as centroid frequency and amplitude, to physical quantities, such as photon energy and X-ray flux (or mass accretion rate). It has been shown that the correlations can be quite different for different QPOs, perhaps indicating that for BHCs the QPOs form a heterogeneous class of phenomena (Cui 1999; Cui et al. 1999a). GRS 1915+105 is one of only three known BHCs that occasionally produce relativistic radio jets with superluminal motion (Mirabel & Rodriguez 1999 and references therein). These sources are often referred to as microquasars. First discovered in X-ray (Castro-Tirado et al. 1992), GRS 1915+105 has been studied extensively at this wavelength. Recent RXTE observations revealed a great variety of QPOs associated with the source (Morgan et al. 1997), in addition to its complicated overall temporal (as well as spectral) behaviors (e.g., Greiner et al. 1996). The most famous of all is the QPO at 67 Hz. At the time, it was the highest-frequency QPO ever detected in BHCs. More interestingly, the centroid frequency of the QPOs hardly varies with X-ray flux (Morgan et al. 1997), unlike a great majority of other QPOs. Suggestions have subsequently been made to associate the feature to the dynamical processes in the immediate vicinity of the central black hole, where general relativistic effects may be strongly manifested (Morgan et al. 1997; Nowak et al. 1997; Cui et al. 1998). As a result, a lot of excitement has been generated by the prospect of using such signals to test the general theory of relativity in the strong-field limit (Cui et al. 1999b and references therein). Before this ultimate goal can be reached, however, it is clearly important to best characterize and understand this particular QPO observationally. In this Letter, I report the discovery of an important property of the feature: the oscillation lags more behind at higher photon energies. Some of the results have already been presented elsewhere in preliminary form (Cui 1997; Cui 1999). ## 2 Observations The 67 Hz QPO has been detected in various different states of GRS 1915+105 (Morgan et al. 1997). For this investigation, I have selected one RXTE observation when the oscillation appears the strongest (based on Morgan et al. 1997). The observation was made on May 5, 1996, when the source was in a bright state, with a total exposure time about 10 ks. Multiple high-resolution timing modes were adopted for the observation. 
Considering the trade-off between statistics and energy resolution, I have decided to rebin the $`16\mu s`$ Event data to 2 ms and to combine the sixteen energy bands (above $`\sim `$13 keV) into one. I have then merged the Event mode data with the 2 ms Binned mode data (which covers four energy bands below $`\sim `$13 keV). Now, a total of 5 energy bands are available for carrying out subsequent analyses. The bands are approximately defined as 2–5.2 keV, 5.2–7.0 keV, 7.0–9.6 keV, 9.6–13.2 keV, and 13.2–60 keV. ## 3 Data Analysis and Results A collection of power-density spectra (PDS) of GRS 1915+105 can be found in Morgan et al. (1997), for the initial 31 RXTE observations of the source, but only in one energy band (2–20 keV). Of great interest here is the energy dependence of the temporal properties of the source, so I have constructed the PDS (with the deadtime-corrected Poisson noise power subtracted) in the five energy bands defined. Fig. 1 shows the results. The presence of QPOs is apparent. Most prominent is the one centered at about 67 mHz, along with its first three harmonics. As noted by Morgan et al. (1997), the amplitude of the fundamental component seems to follow a general decreasing trend toward high energies, while that of the harmonics shows just the opposite. The 67 Hz QPO is clearly visible in the PDS, especially at high energies. Also detected are a weak QPO at about 400 mHz (which is missing from Morgan et al. 1997) and a broad QPO at about 840 mHz. To be more quantitative, I have fitted the PDS with an empirical model consisting of Lorentzian functions for the QPOs and a double-broken power law for the underlying continuum. I have limited the frequency range to roughly 0.001–10 Hz during the fitting to focus on the low-frequency QPOs, since the 67 Hz QPO has already been quantified (Morgan et al. 1997). Table 1 summarizes the best-fit centroid frequency and width for each QPO at low frequencies, derived from the 7.0–9.6 keV band (which is chosen as a compromise between the signal strength and the data quality). Fig. 2 shows the fractional rms amplitude of each QPO at different energies. The 400 mHz QPO and the 840 mHz QPO, as well as the 67 Hz QPO (Morgan et al. 1997), strengthen toward high energies (with a hint of saturation), similar to most QPOs of BHCs (Cui 1999). The behavior of the 67 mHz QPO is, however, more complicated and very intriguing: while the harmonics of the QPO follow the usual trend, the fundamental component becomes stronger first and then weakens significantly at higher energies, with the amplitude peaking at 8–9 keV. To derive phase lags, I have chosen the 2–5.2 keV band as a reference band, and computed a cross-power spectrum (CPS) between it and each of the higher energy bands. Note that the final CPS represents an ensemble average over the results from multiple 512-second segments of the time series (similarly for the PDS shown). The phase of the CPS represents a phase shift of the light curve in a selected energy band with respect to that of the reference band. Here, I follow the convention that a positive phase indicates hard X-rays lagging behind soft X-rays, i.e., a hard lag. The uncertainty of the phase is estimated from the standard deviation of the real and imaginary parts of the CPS. The magnitude of the QPO lags is derived from fitting the profile of the lags with Lorentzian functions at the QPO frequencies, which are fixed during the fitting. 
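For concreteness, the following fragment sketches how such an averaged CPS and its phase can be computed; the light curves here are synthetic, with a QPO and a hard lag injected by hand, and all amplitudes and noise levels are illustrative. The QPO lags themselves are then extracted by fitting Lorentzians to the resulting phase spectrum, as described next.

```python
import numpy as np

# Synthetic demonstration of the averaged cross-power spectrum (CPS) and its phase.
# Two simultaneous light curves are generated with a QPO and a hard lag put in by
# hand; the rms amplitudes and noise level are illustrative.
dt, seg = 2e-3, 512.0                        # 2 ms bins, 512-s segments (as in the text)
npts = int(seg / dt)
f0 = 34.0 / seg                              # QPO placed on a Fourier bin (~66 mHz)
lag = 0.3                                    # injected phase lag (radians)
t = np.arange(npts) * dt
rng = np.random.default_rng(1)

cps = np.zeros(npts // 2 + 1, dtype=complex)
for _ in range(64):                          # ensemble average over segments
    ph = rng.uniform(0.0, 2.0 * np.pi)       # QPO phase varies from segment to segment
    soft = 5.0 * np.sin(2 * np.pi * f0 * t + ph) + rng.normal(0, 2, npts)
    hard = 3.0 * np.sin(2 * np.pi * f0 * t + ph - lag) + rng.normal(0, 2, npts)
    # sign convention: positive phase <=> hard photons lag soft photons
    cps += np.fft.rfft(soft) * np.conj(np.fft.rfft(hard))
cps /= 64

freq = np.fft.rfftfreq(npts, dt)
i = np.argmin(np.abs(freq - f0))
print(f"phase at {1e3 * freq[i]:.1f} mHz: {np.angle(cps[i]):+.3f} rad (injected {lag:+.3f})")
```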
Note that for the 67 Hz QPO I have also fixed the width of the profile to that of the QPO (see Table 2 in Morgan et al. 1997), due to the lack of statistics. The measured lags are plotted in figures 3 and 4 for the 67 Hz QPO and other low-frequency QPOs, respectively. The errors are derived by varying the parameters until $`\mathrm{\Delta }\chi ^2=1`$ (i.e., corresponding roughly to $`1\sigma `$ confidence intervals; Lampton et al. 1976). Most QPOs show significant hard lags. Surprisingly, however, the odd harmonics of the 67 mHz QPO display soft lags. It is clear that the QPO lags depend strongly on photon energy — the higher the energy the larger the lag, with the exception of the 840 mHz QPO where the measured hard lag increases first and then drops above 13 keV. For the 67 Hz QPO, the phase lag reaches as high as 2.3 radians, which is equivalent to a time lag of about 5.6 ms. The phase lags are smaller for low-frequency QPOs, but the corresponding time lags are quite large. For instance, the first harmonic of the 67 mHz QPO shows a time lag greater than 1 second for the highest energy band. ## 4 Discussion It is known that hard lags are associated with the X-ray emission from BHCs (van der Klis 1995; Cui 1999). Although the studies of hard lags are mostly based on broad-band variability, the large lags associated with the VHS QPOs of GS 1124-68 have been noted (van der Klis 1995). Often, the hard lags are attributed to thermal inverse-Comptonization processes (e.g., Miyatomo et al. 1988; Hua & Titarchuk 1996; Kazanas et al. 1997; Böttcher & Liang 1998; Hua et al. 1999), which are generally thought to be responsible for producing the hard power-law tail of X-ray spectra of BHCs (Tanaka & Lewin 1995). In these models, the lags are expected to be larger for photons with higher energies, since a greater number of scatterings are required for seed photons to gain enough energy. More quantitatively, the hard lags, which indicate the diffusion timescales through the Comptonizing region, should scale logarithmically with photon energy (e.g., Payne 1980; Hua & Titarchuk 1996); this roughly agrees with the observations (Cui et al. 1997; Crary et al. 1998; Nowak et al. 1999; also see figures 3 and 4). However, the measured time lags can often be quite large, e.g., a few tenths of a second, at low frequencies, which would require a large hot electron corona (roughly one light second across; Kazanas et al. 1997; Böttcher & Liang 1998; Hua et al. 1999). A even larger corona would be needed to account for the hard lags observed of the first harmonic of the 67 mHz QPO in GRS 1915+105. It is still much debated whether the required corona can be maintained physically (Nowak et al. 1999; Böttcher & Liang 1998; Poutanen & Fabian 1999). Also, the observed soft lags of the 67 mHz QPO (and its second harmonic) are entirely incompatible with the Compton models. On the other hand, the smaller time lags associated with the QPOs at higher frequencies (e.g., the 67 Hz QPO) can still be accommodated by the models. Like the QPOs themselves, the phase lags may very well be of multiple origins. Another class of models link the time lags to the propagation or drift time scales of waves or blobs of matter through an increasingly hotter region, toward the central black hole, where hard X-rays are emitted (Miyamoto et al. 1988; Böttcher & Liang 1998). In this scenario, as the disturbance (such as waves, blobs, and so on) propagates, its X-ray spectrum hardens, producing the observed hard lags. 
The models can also produce the logarithmic energy dependence of the hard lags (Böttcher & Liang 1998). The origin of clumps of matter may lie in the Lightman-Eardley instability which sets in wherever radiation pressure dominates over gas pressure in the accretion disk (Lightman & Eardley 1974). It has recently been proposed that such a condition is perhaps generally met for the inner region of accretion disks in BHCs and thus the presence of blobs may not be surprising (Krolik 1998). However, it is not clear how to associate QPOs with the dynamics of the blobs. While the Keplerian motion of the blobs could manifest itself in a QPO observationally, any radial drift of the blobs would cause the QPO frequency to increase with energy, which is not observed. On the other hand, it is perhaps easier to associate QPOs with traveling waves. For instance, Kato (1989) suggested that the propagation of corrugation-mode oscillations might explain the hard lags observed of BHCs. Note also that inward-propagating perturbations have recently been invoked to explain the X-ray variability of BHCs (Manmoto et al. 1996), in the context of advection-dominated accretion flows (ADAFs; e.g., Narayan & Yi 1994). In general, recent works seem to converge toward an ADAF-like geometry for the accretion flows around black holes, i.e., an inner quasi-spherical, optically thin region and an outer thin, optically thick disk (e.g., Narayan & Yi 1994; Chen et al. 1995; Dove et al. 1997; Luo & Liang 1998). Applied to GRS 1915+105, the wave-propagation models might be able to account for the hard lags observed of the QPOs, although the required speed of propagation would be quite small (much smaller than that of free fall) in some cases. Moreover, the peculiar behavior of the 67 mHz QPO — the even harmonics showing soft lags while the odd harmonics hard lags — can perhaps be explained by invoking a significant change in the wave form as the wave propagates. To better illustrate this point, I have simulated an oscillation of fundamental frequency 67 mHz that also includes the first three of its harmonics. A sine function is used to describe each harmonic component, which uses the measured fractional rms amplitude and phase of each component of the 67 mHz QPO in GRS 1915+105. The overall profiles of the simulated oscillation are constructed by summing up the four components for the 2–5.2 keV band (the reference band in which the phases are initialized to zero) and the 13.2–60 keV band, respectively. They are shown in Fig. 5. The inferred evolution of the oscillation profile is quite drastic. The soft and hard lags are mixed together in the figure, contributing to the overall variation of the wave form. But, because the hard lag is so dominating, it can still be recognized by comparing the times of the minimum points between the two profiles. However, the models cannot naturally explain why the fractional amplitude of the QPOs increases with energy if the QPOs are of disk origin (Cui 1999); neither can most Compton models, unless a certain spatial distribution of the Compton y-parameter is assumed (Wagoner et al. 1999; Lehr et al. 1999). A third class of models associate the hard lags with dynamical processes in Comptonizing regions themselves. Poutanen & Fabian (1999) proposed that the time lags may be identified with the spectral hardening of magnetic flares as the magnetic loops inflate, detach and move away from the accretion disk. Now, the time lags are directly related to the evolution timescales of the flares. 
This model differs fundamentally from others in allowing Comptonizing regions to vary. It, therefore, provides an interesting possibility that the QPOs of BHCs might be an observational manifestation of the oscillatory nature of these regions. The oscillation may occur, for example, in the temperature of hot electrons and/or in the Compton optical depth, both of which cause “pivoting” of the Comptonized spectrum. The spectral pivoting might naturally explain the observed energy dependence of the QPO amplitude (e.g., Lee & Miller 1998; see Kazanas & Hua 1999 for another possibility). To summarize, besides other observable properties of the QPOs, the phase lags provide additional information that may be critical for our understanding of the QPO origins. The energy dependence of the lags has already had serious implications on theoretical models — Comptonization seems always required. It might also shed light on the evolution of intrinsic wave forms, when combined with the energy dependence of the QPO amplitude, and thus on underlying physical processes and conditions that can cause the evolution. Moreover, the magnitude of the lags might provide a direct measure of such important physical properties of the system as the size of the Comptonizing region, or the propagation speed of disturbances in accretion flows, or the evolution timescales of magnetic flares originating in the accretion disk. Therefore, the QPO lags might ultimately prove essential for understanding the geometry and dynamics of mass accretion processes in BHCs. I gratefully acknowledge useful discussions with many participants of the first Microquasar Workshop (which was held in May, 1997 at the Goddard Space Flight Center, Greenbelt Maryland) where some of the preliminary results were first presented. I thank the referee, Dr. Bob Wagoner, for his prompt report and many useful comments. Financial support for this work is partially provided by NASA through an LTSA grant and several RXTE grants.
# Flux penetration and expulsion in thin superconducting disks ## Abstract Using an expansion of the order parameter over the eigenfunctions of the linearized first Ginzburg-Landau (GL) equation, we obtain numerically the saddle points of the free energy separating the stable states with different numbers of vortices. In contrast to known surface and geometrical barrier models, we find that in a wide range of magnetic fields below the penetration field, the saddle point state for flux penetration into a disk does not correspond to a vortex located near the sample boundary, but to a region of suppressed superconductivity at the disk edge with no winding of the current, which is a nucleus for the subsequent vortex creation. The height of this nucleation barrier, which determines the time of flux penetration, is calculated for different disk radii and magnetic fields. PACS number(s): 74.24.Ha, 74.60.Ec, 73.20.Dx The study of magnetic flux penetration and expulsion in type-II superconductors has traditionally attracted much attention in view of important technological and fundamental questions concerning hysteretic behavior and phase transitions in bounded samples. The vortex creation problem is also related to phase transitions in superfluids. It is well known that for type-II superconductors ($`\lambda /\xi >1/\sqrt{2}`$, where $`\lambda `$ and $`\xi `$ are the penetration and coherence lengths, respectively), the Meissner state becomes energetically unfavorable with increasing magnetic field at $`H=H_{c1}`$ in comparison to the Abrikosov vortex lattice. In a finite system these two states, which correspond to minima of the superconductor free energy, are separated by a barrier. Therefore, a first-order transition between the Meissner and Abrikosov states takes some time, which decreases with temperature and approximately follows an Arrhenius law $`\tau \propto \mathrm{exp}(U/kT)`$, where $`U`$ and $`T`$ are the barrier height and sample temperature, respectively. For $`T=0`$, the Meissner state survives up to the penetration field $`H_p`$ and transits suddenly to the Abrikosov state due to a dynamic instability of the order parameter . With decreasing magnetic field at zero temperature, the vortex state remains stable down to the expulsion field $`H_e<H_{c1}`$ and then goes over to the Meissner state due to vortex expulsion. The origin of the barriers for flux penetration and expulsion has been discussed during the last thirty years. According to the Bean-Livingston (BL) model, the surface barrier appears due to a competition between the vortex attraction to the sample walls by its mirror image and its repulsion by screening currents. This model was further developed for: i) cylindrical samples, where the vortex shape was assumed not to be an infinite line but a semicircle , ii) thin disks and iii) strips , where shielding, due to finite size effects, does not decay exponentially. For samples with a non-elliptical cross section, the geometrical barrier arises because of Meissner screening currents flowing on the top and bottom surfaces of a flat strip . In addition, vortex pinning by defects can play an important role in delaying vortex expulsion or promoting vortex penetration. It should be stressed that the above-mentioned barrier models, which are based on the London theory, do not account for the process of vortex formation and describe only the vortex motion far from the sample boundary. 
The Ginzburg-Landau (GL) theory has previously been applied to the study of barriers only for the 1D cases of narrow wires and rings. The approaches based on solving time-dependent GL equations allow one to treat flux penetration (expulsion) only for magnetic fields higher (lower) than the penetration (expulsion) field. In this Letter, starting from the non-linear GL theory we present an approach for finding the saddle point states in thin disks and calculate numerically the heights of the free energy barriers separating the stable states with different numbers of vortices. We consider a superconducting defect-free disk with radius $`R`$ and thickness $`d`$ immersed in an insulating medium in the presence of a perpendicular uniform magnetic field $`H`$. For thin disks, $`Rd\ll \lambda ^2`$, we can neglect the distortion of the magnetic field induced by screening and vortex currents, and write the GL functional as $$G=G_n+\int 𝑑\stackrel{}{r}\left(\alpha |\mathrm{\Psi }|^2+\frac{\beta }{2}|\mathrm{\Psi }|^4+\mathrm{\Psi }^{}\widehat{L}\mathrm{\Psi }\right),$$ (1) where $`G`$, $`G_n`$ are the free energies of the superconducting and normal states; $`\mathrm{\Psi }`$ is the complex order parameter; $`\widehat{L}=(-i\mathrm{}\stackrel{}{}-e^{}\stackrel{}{A}/c)^2/2m^{}`$ is the kinetic energy operator for Cooper pairs of charge $`e^{}=2e`$ and mass $`m^{}=2m`$; $`\stackrel{}{A}=\stackrel{}{e}_\varphi H\rho /2`$ is the vector potential of the uniform magnetic field written in cylindrical coordinates $`\varphi ,\rho `$; $`\alpha `$, $`\beta `$ are the GL coefficients depending on the sample temperature. Expanding the order parameter $`\mathrm{\Psi }=\sum _{i=1}^{N}C_i\psi _i`$ in the orthonormal eigenfunctions of the kinetic energy operator $`\widehat{L}\psi _i=ϵ_i\psi _i`$ we go from the functional form (1) to the free energy written in terms of complex variables $$G-G_n=\sum _i(\alpha +ϵ_i)C_iC_i^{}+\frac{\beta }{2}\sum _{i,j,k,l}A_{kl}^{ij}C_i^{}C_j^{}C_kC_l,$$ (2) where the matrix elements $`A_{kl}^{ij}=\int 𝑑\stackrel{}{r}\psi _i^{}\psi _j^{}\psi _k\psi _l`$ are calculated numerically. Note that the sample geometry enters the calculations only through the eigenenergies $`ϵ_i`$ and eigenfunctions $`\psi _i`$, which are well known for the disk case. In thin ($`d\ll \xi `$) disks these eigenfunctions have the form $`\psi _{i=(l,n)}=\mathrm{exp}(il\varphi )f_n(\rho )`$, where $`l`$ is the angular momentum and the index $`n`$ counts different states with the same $`l`$. In contrast to previous approaches, we do not restrict ourselves to the lowest Landau level approximation (i.e. $`n=1`$) and expand the order parameter over all eigenfunctions with $`ϵ_i<ϵ_{}`$, where the cutoff parameter $`ϵ_{}`$ is chosen such that increasing it does not influence our results. The typical number of complex components used is in the range $`N=30`$–$`50`$. Thus the superconducting state is mapped onto a 2D cluster of $`N`$ classical particles $`(x,y)=(Re(C),Im(C))`$, which is governed by the Hamiltonian (2). To find a saddle point, which is an extremum of the free energy, we use a technique similar to the eigenvector following method. We start with some set of coefficients $`C`$.
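As a rough illustration of how compact this representation is, the following C++ sketch evaluates the free energy of Eq. (2) for a given coefficient vector. It is a minimal sketch under stated assumptions: the names (`GLExpansion`, `free_energy`), the flattened storage of the matrix elements $`A_{kl}^{ij}`$, and the explicit quadruple loop are illustrative choices of mine, not the authors' code.

```cpp
#include <complex>
#include <vector>
using cplx = std::complex<double>;

// Hypothetical container for the expansion data entering Eq. (2).
struct GLExpansion {
    std::vector<double> eps;   // eigenenergies eps_i of the kinetic operator
    std::vector<double> A;     // matrix elements A^{ij}_{kl}, flattened
    double alpha, beta;        // GL coefficients
    int N;                     // number of retained eigenfunctions
    double Aelem(int i, int j, int k, int l) const {
        return A[((i * N + j) * N + k) * N + l];
    }
};

// Free energy difference G - G_n of Eq. (2) for coefficient vector C.
double free_energy(const GLExpansion& gl, const std::vector<cplx>& C) {
    double G = 0.0;
    for (int i = 0; i < gl.N; ++i)                    // quadratic term
        G += (gl.alpha + gl.eps[i]) * std::norm(C[i]);
    for (int i = 0; i < gl.N; ++i)                    // quartic term
        for (int j = 0; j < gl.N; ++j)
            for (int k = 0; k < gl.N; ++k)
                for (int l = 0; l < gl.N; ++l)
                    G += 0.5 * gl.beta * gl.Aelem(i, j, k, l)
                         * std::real(std::conj(C[i]) * std::conj(C[j]) * C[k] * C[l]);
    return G;
}
```

For the quoted range $`N=30`$–$`50`$ the quadruple loop involves at most a few million terms, so repeated evaluations during a saddle point search remain cheap.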
In the vicinity of this point the free energy $`\delta G=G(C^n)-G(C)`$ can be represented as a quadratic form for small deviations $`\delta =C^n-C`$: $$\delta G=F_m\delta _m^{}+B_{mn}\delta _n\delta _m^{}+D_{mn}\delta _n^{}\delta _m^{}+c.c.,$$ (3) where $`F_m=(\alpha +ϵ_m)C_m+\beta A_{kl}^{mj}C_jC_k^{}C_l`$, $`B_{mn}=(\alpha +ϵ_m)I_{mn}+2\beta A_{kl}^{mn}C_kC_l^{}`$, $`D_{mn}=\beta A_{kl}^{mn}C_kC_l`$, and $`I_{mn}`$ is the unit matrix. The quadratic form (3), which is Hermitian, can be rewritten in normal coordinates $`\delta _m=x_kQ_m^k`$ as $`\delta G=2(\gamma _kx_k+\eta _kx_k^2)`$, where $`\gamma _k=Q_m^kF_m`$, and the eigenvalues $`\eta _k`$ and eigenvectors $`Q^k`$ are found by solving numerically the following equation $$\left|\begin{array}{cc}B+Re(D)\hfill & Im(D)\hfill \\ Im(D)\hfill & B-Re(D)\hfill \end{array}\right|\left|\begin{array}{c}Re(Q^k)\\ Im(Q^k)\end{array}\right|=\eta _k\left|\begin{array}{c}Re(Q^k)\\ Im(Q^k)\end{array}\right|$$ Moving in the direction of the negative free energy gradient $`\gamma _k`$ we approach a minimum of the free energy corresponding to the ground or a metastable state. In order to find a saddle point we move towards a minimum of the free energy in all directions, $`x_k=-\gamma _k/(ϵ+\eta _k)`$, except the one with the lowest eigenvalue, along which we go towards a maximum, $`x_l=\gamma _l/(ϵ+\eta _l)`$, and find $`C_m^n=C_m+x_kQ_m^k`$ for all $`k`$. The iteration parameter $`ϵ>0`$ controls the convergence, which is always reached for any initial state close enough to a saddle point. Starting from different initial states, for which the coefficients $`C`$ are chosen randomly, we find the saddle points for different magnetic fields (Fig. 1). Due to fluctuations (e.g. thermal) the system can reach the saddle point and then transfer to another superconducting state. When the magnetic field approaches the expulsion or penetration field, the attraction region of a saddle point state decreases and random searching becomes inefficient. Therefore, to trace the saddle point evolution in the vicinity of the penetration (expulsion) field we start from the saddle point state and increase (decrease) the magnetic field up to the penetration (expulsion) field, where the lowest eigenvalue goes to zero (see Fig. 2, dashed curve). The spatial distributions of the superconducting electron density $`|\mathrm{\Psi }|^2`$ and velocity $`\stackrel{}{V}=\mathrm{}\stackrel{}{}S-2e\stackrel{}{A}/c`$ ($`\mathrm{\Psi }=F\mathrm{exp}(iS)`$) in the saddle point state corresponding to the transition from the Meissner state to the vortex state are depicted in Figs. 3 and 4 for different magnetic fields and disk radius $`R=4.8\xi `$. These figures demonstrate two different stages in the saddle point evolution. Below the penetration field, the saddle point state corresponds to a region of suppressed superconductivity (Fig. 3(d)) with a minimum of $`|\mathrm{\Psi }|^2`$ located at the disk boundary. While the minimum value of the order parameter remains different from zero, the vorticity $`L=\frac{1}{2\pi }\oint 𝑑\stackrel{}{l}\stackrel{}{}S`$, where the integration is performed along the disk boundary, equals zero and the supervelocity distribution is similar to that of the Meissner state (Fig. 4(a)). When the order parameter reaches zero at the nucleation field $`H_n`$ (Fig. 2, solid curve), the vorticity transits suddenly to $`L=1`$. For lower magnetic fields $`H<H_n`$, the saddle point state presents a vortex-like state with closed velocity circulation (Fig. 4(b)).
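The following sketch shows one iteration of the eigenvector-following update in normal coordinates, as I read the prescription above. The sign convention (downhill in all modes, uphill only along the softest mode) and the function names are my assumptions; the diagonalization producing `gamma` and `eta`, and the back-transformation to the coefficients $`C`$, are assumed to happen elsewhere.

```cpp
#include <algorithm>
#include <vector>

// One eigenvector-following step towards a saddle point. gamma[k] and eta[k]
// are the gradient components and eigenvalues from the Hessian of Eq. (3);
// eps > 0 is the iteration parameter of the text.
void saddle_step(std::vector<double>& x,              // output displacements
                 const std::vector<double>& gamma,
                 const std::vector<double>& eta,
                 double eps) {
    const std::size_t K = gamma.size();
    x.assign(K, 0.0);
    // index of the lowest-eigenvalue (softest) mode
    std::size_t l = std::min_element(eta.begin(), eta.end()) - eta.begin();
    for (std::size_t k = 0; k < K; ++k) {
        if (k == l)
            x[k] =  gamma[k] / (eps + eta[k]);  // uphill along the softest mode
        else
            x[k] = -gamma[k] / (eps + eta[k]);  // downhill in all other modes
    }
    // New coefficients would then be C_m <- C_m + sum_k x[k] * Q^k_m.
}
```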
Note that this transition is not followed by any discontinuity in the free energy or the curvature of the potential curve $`\eta `$. With further decrease of the magnetic field, the saddle point corresponds to a vortex located closer to the disk center (Fig. 3(d),(c)). This physical picture of flux expulsion and penetration remains valid for other transitions $`L\to L+1`$ with different $`L`$, independently of the disk radius and the type (giant vortex or multivortex) of superconducting state. The free energy, measured in the condensation energy $`G_0=\alpha ^2\pi R^2d/2\beta `$, is shown in Fig. 5 for the saddle point (dotted curves) and stable (solid curves) states for the disk radius $`R=4.8\xi `$. The difference between the free energy of the saddle point state and the nearby metastable state corresponds to the transition barrier shown in the inset of Fig. 5 for the transition $`0\to 1`$. As seen from Fig. 5, the penetration barrier grows more slowly deep inside the metastable region than the expulsion barrier. Therefore, we expect a larger fluctuation of the penetration field at a finite sample temperature, which agrees with recent experimental observations by Geim. Below the nucleation field, when the saddle point state is similar to a vortex state, the penetration and expulsion barriers (see Fig. 6) can be estimated from the London theory, which leads to the following expression for the vortex free energy $$\frac{G_1}{4\pi G_{}}=\mathrm{ln}\left(\frac{R^2-\rho ^2}{r_cR}\right)-\mathrm{\Phi }\left(1-\frac{\rho ^2}{R^2}\right)+\frac{1}{4}\left(\mathrm{\Phi }^2-\frac{R^2}{\xi ^2}\right),$$ (4) where $`G_{}=\alpha ^2\xi ^2d/2\beta =G_0\xi ^2/\pi R^2`$, $`\rho `$ is the radial vortex position, $`r_c\sim \xi `$ is the vortex core radius, $`\mathrm{\Phi }=\pi HR^2/\mathrm{\Phi }_0`$ is the unitless magnetic flux, and $`\mathrm{\Phi }_0=hc/2e`$ is the flux quantum. Note that: i) the expulsion field $`H_e=\mathrm{\Phi }_0/\pi R^2`$, ii) the vortex position $`\rho _s=R\sqrt{1-1/\mathrm{\Phi }}`$ in the saddle point, and iii) the BL expulsion barrier $`U=G_1(\rho _s)-G_1(0)=4\pi G_{}(\mathrm{\Phi }-1-\mathrm{ln}\mathrm{\Phi })`$ do not depend on the vortex core energy, which is represented by the first term in Eq. (4). As seen from Fig. 6(a), the London theory predictions for the expulsion barriers are confirmed by the GL theory in the limit of large disks, $`R\gg \xi `$. We extended the BL model to arbitrary $`R/\xi `$ by taking into account the spatial nonuniformity of the modulus of the order parameter, which obeys the first GL equation $$\frac{\mathrm{}^2}{2m^{}}\left(\mathrm{\Delta }F-\left(\stackrel{}{}S-\frac{2e}{\mathrm{}c}\stackrel{}{A}\right)^2F\right)=\alpha F+\beta F^3,$$ (5) with the boundary condition $`(\partial F/\partial \rho )_{\rho =R}=0`$. Following the BL model we assume that the phase distribution is created by a vortex and its mirror image, which are located at the distances $`\rho _v,R^2/\rho _v`$ from the disk center, respectively. Solving Eq. (5) numerically for different vortex positions $`\rho _v`$ we find the expulsion (solid circles) and penetration (open circles) barriers shown in the inset of Fig. 5. Below the nucleation field there is an excellent quantitative agreement between our GL theory and this improved BL model. Nevertheless, this model breaks down in the range $`H_n<H<H_p`$. Note that the barrier height at $`H=H_n`$ increases with disk radius and the role of the nucleation barrier may become even more important in macroscopic systems, where possible 3D (for $`d>\xi `$) and demagnetization (for $`Rd>\lambda ^2`$) effects must also be taken into account.
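To make the London-limit expressions above concrete, this small C++ program evaluates the saddle position $`\rho _s=R\sqrt{1-1/\mathrm{\Phi }}`$ and the BL expulsion barrier $`U=4\pi G_{}(\mathrm{\Phi }-1-\mathrm{ln}\mathrm{\Phi })`$ for a few flux values; it is a plain numerical check of those formulae, not part of the authors' analysis.

```cpp
#include <cmath>
#include <cstdio>

// Saddle position and BL expulsion barrier from the London-limit formulae,
// valid for Phi >= 1 (below that the vortex state does not exist).
int main() {
    const double pi = 3.141592653589793;
    const double R  = 4.8;               // disk radius in units of xi
    for (double Phi = 1.5; Phi <= 4.01; Phi += 0.5) {
        double rho_s = R * std::sqrt(1.0 - 1.0 / Phi);
        double U     = 4.0 * pi * (Phi - 1.0 - std::log(Phi)); // in units of G_star
        std::printf("Phi = %.2f  rho_s/xi = %.3f  U/G_star = %.3f\n",
                    Phi, rho_s, U);
    }
}
```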
In the unitless variables $`2\pi H\xi R/\mathrm{\Phi }_0`$ ($`\mathrm{\Phi }/\mathrm{\Phi }_0`$), the penetration (expulsion) barriers measured in $`G_{}`$ are proportional to the disk thickness and increase slightly with the disk radius. In conclusion, we have demonstrated that in a wide range of magnetic fields $`H_n<H<H_p`$ the saddle point state presents a vortex nucleus, i.e. a region of suppressed superconductivity surrounded by a background of the Meissner state, and that it transits to a vortex state at $`H<H_n`$. We have found the penetration field and the corresponding nucleation barriers for thin disks. For lower magnetic fields $`H_e<H<H_n`$, the saddle point state can be reasonably described by the conventional London theory. We also extended the BL model to finite disk radius. We gratefully acknowledge discussions with A.K. Geim, R. Blossey, A. MacDonald and V. Moschchalkov. This work is supported by the Flemish Science Foundation (FWO-Vl) through project 5.0277.97 and the “Interuniversity Poles of Attraction Program - Belgian State, Prime Minister’s Office - Federal Office for Scientific, Technical and Cultural Affairs”. One of us (FMP) is a research director with the FWO-Vl.
no-problem/9908/astro-ph9908338.html
ar5iv
text
# TeV Cherenkov Events as Bose-Einstein Gamma Condensations ## 1 Introduction High energy gamma rays are readily absorbed in the intergalactic medium through pair production in a sufficiently dense, diffuse, microwave or infrared radiation field (Gould & Schréder, 1966; Stecker, De Jager, & Salamon 1992). For this reason, a great deal of attention has been paid to gamma rays at energies apparently reaching $`10`$ TeV, recently detected from the galaxy Mkn 501 (Hayashida et al., 1998, Pian et al., 1998, Aharonian et al., 1999, Krennrich et al., 1999). Mkn 501 is a BL Lac object at a distance of $`200`$ Mpc for a Hubble constant H<sub>0</sub> = 50 km s<sup>-1</sup> Mpc<sup>-1</sup>. Unattenuated transmission of $`10`$ TeV photons across distances of this order would place severe constraints on the diffuse extragalactic infrared background radiation (Coppi & Aharonian, 1997, Stanev & Franceschini, 1998), placing upper limits on the radiation density that are close to values derived from COBE detections and IRAS source counts alone (Hauser et al., 1998; Hacking & Soifer, 1991; Gregorich et al., 1995). Given these close coincidences it is useful to re-examine the severity of the constraints that these observations place on the density of the diffuse extragalactic infrared radiation (DEIR). ## 2 Bose-Einstein Condensations of Photons Coherent radiation, i.e. highly excited quantum oscillators, is produced in a variety of processes, but is also a regular component of blackbody radiation in the Rayleigh-Jeans tail of the energy distribution. These excited oscillators correspond to densely occupied radiation phase cells — a Bose-Einstein condensation of photons all having quantum-mechanically indistinguishable properties, i.e. identical momenta, positions, polarizations, and directions of propagation, within the Heisenberg uncertainty constraints. Given that cosmic ray particles can have energies going up to $`3\times 10^{20}`$ eV, and given that one expects a cutoff for gammas from Mkn 501 at energies many orders of magnitude lower, around 10 or 20 TeV, it does not seem far-fetched to think that the actually observed gammas reaching Earth might lie far out in the low-frequency tail of some significantly more energetic radiation field characterized by an equivalent temperature much higher than a few TeV. If this were the case, we would expect that the radiation arriving at Earth could be highly coherent, meaning that phase cells would be filled to rather high occupation numbers, $`N`$. As they interact with the DEIR, densely filled phase cells can decline in population and lose energy only by going stepwise from an initial occupation number $`N`$, to $`(N-1)`$, and from there to $`(N-2)`$, etc. Because the mean free path for interactions of photons with the DEIR is energy dependent, a fraction of a coherent assembly of photons could penetrate appreciably greater distances through the diffuse extragalactic radiation field than, say, a single photon of the same total energy. A number $`N_a`$ of such arriving photons, each with energy $`h\nu `$, would impinge on the Earth’s atmosphere at precisely the same instant, and would interact with the atmosphere producing an air shower that emits Cherenkov light that could mimic that of a single photon with energy $`N_ah\nu `$ impinging on the atmosphere.
These two kinds of impacts could be distinguished by the shower images they produce and probably also by the fluctuations in the energy distribution observed near the cut-off energy $`E_{\mathrm{CO}}`$ for a series of Cherenkov events. Because of their high momenta, the arriving bunched photons would spread over only the smallest distance $`\mathrm{\Delta }y`$ in their traversal through extragalactic space, given by the uncertainty relation $`\mathrm{\Delta }p_y\mathrm{\Delta }y\approx h`$, where $`\mathrm{\Delta }p_y`$ is the uncertainty in transverse momentum. $`\mathrm{\Delta }p_y`$ is the product of the photon momentum $`h\nu /c`$ and the angular size that the source subtends at Earth. The smallest dimension we could expect would be of the order of an AGN black hole Schwarzschild radius $`3\times 10^{13}M/(10^8M_{})`$ cm. This would make $`\mathrm{\Delta }y\approx (10^8M_{}/M)\times 10^{-3}`$ cm — negligible in Cherenkov detection. ## 3 Interpretation of Cherenkov Radiation Data TeV $`\gamma `$-rays are detected through the Cherenkov radiation generated in the Earth’s atmosphere by electrons in an “air shower” initiated by the $`\gamma `$-ray. Such air showers are electromagnetic cascades involving pair production and bremsstrahlung interactions. As long as the energy of the photon entering the atmosphere is sufficiently high, the Cherenkov yield of the air shower is sensitive primarily to the total energy deposited, not to the number of instantaneously arriving photons. Accordingly, one might expect such telescopes to mistakenly record five simultaneously arriving 5 TeV photons as a single shower of 25 TeV. On the other hand, if the number of simultaneously arriving photons, $`N`$, were much higher, then the showers would look very different, and if $`N`$ were really large there would be no Cherenkov radiation at all. To quantify the discussion above, we shall compare the mean and standard deviation of the number of electrons in the shower, $`N_e(t)`$, as a function of depth into the atmosphere measured in radiation lengths, $`t`$, for the two cases. Note that the atmosphere is approximately 1030 g cm<sup>-2</sup> thick and the radiation length of air including forward scattering is 36.66 g cm<sup>-2</sup>. Although the cross section for interaction of an assembly of $`N`$ coherent photons is $`N`$ times higher than that of an individual photon, a shower initiated by an assembly of $`N`$ coherent photons having total energy $`N\epsilon `$ would be identical to a superposition of $`N`$ showers due to individual photons of energy $`\epsilon `$. Above $`3`$ GeV the pair production mean free path for photons in air is constant at $`t_{\mathrm{pair}}=9/7`$ radiation lengths. For an assembly of $`N`$ coherent photons, the distribution of first interaction points of each photon is therefore exponential with mean $`t_{\mathrm{pair}}=9/7`$, i.e. the same as for a single photon. This also implies that at depth $`t`$ the average number of photons remaining in the assembly is $`N\mathrm{exp}(-t/t_{\mathrm{pair}})`$. Crewther and Protheroe (1990) provide a parametrization of the distribution of the number of electrons in photon-initiated showers, $`p[N_e(t-t_1)]`$, as a function of depth into the atmosphere beyond the first interaction points of the primary photons, $`(t-t_1)`$.
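A minimal Monte Carlo sketch of this first step is shown below: first-interaction depths are drawn from an exponential distribution with mean $`t_{\mathrm{pair}}=9/7`$ and the simulated survival fraction is compared with $`N\mathrm{exp}(-t/t_{\mathrm{pair}})`$. The seed, the photon number per assembly and the probe depth are arbitrary illustrative values, not taken from the paper.

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// First-interaction depths of photons in a coherent assembly are exponential
// with mean 9/7 radiation lengths, so N*exp(-t/t_pair) photons are expected
// to survive to depth t.
int main() {
    const double t_pair = 9.0 / 7.0;
    const int N = 25;                    // photons per coherent assembly
    std::mt19937 rng(12345);
    std::exponential_distribution<double> first_int(1.0 / t_pair);

    const double t = 2.0;                // probe depth in radiation lengths
    int trials = 100000, survived = 0;
    for (int i = 0; i < trials * N; ++i)
        if (first_int(rng) > t) ++survived;  // not yet interacted at depth t
    std::printf("simulated survivors per assembly: %.3f, expected: %.3f\n",
                (double)survived / trials, N * std::exp(-t / t_pair));
}
```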
We use their results together with our Monte Carlo simulation of the first interaction points of each of the $`N`$ photons in a coherent assembly to simulate the development of the air shower due to the coherent assembly, thus taking account of all fluctuations in shower development. In Fig. 1 we show, as a function of atmospheric depth $`t`$, $`\overline{N_e}`$ and $`\overline{N_e}\pm 1\sigma `$ based on 1000 simulations for the case of single photons of energy 25 TeV, assemblies of 5 coherent photons each having energy 5 TeV, and assemblies of 25 coherent photons each having energy 1 TeV (each assembly has energy 25 TeV). As can be seen, air showers due to coherent assemblies develop higher in the atmosphere, and have much smaller fluctuations in shower development. Such differences between showers due to single photons and assemblies of coherent photons would produce different Cherenkov light signatures and should be detectable with state-of-the-art Cherenkov telescopes such as HEGRA (see e.g. Konopelko et al. 1999). ## 4 Extragalactic Optical Depth Propagation of assemblies of $`N_0`$ coherent photons each of energy $`\epsilon `$ through the microwave and DEIR fields is analogous to their propagation through the atmosphere. However, assemblies of coherent photons having total energy $`E_{\mathrm{tot}}=N_0\epsilon `$ may travel farther than single photons of energy $`N_0\epsilon `$ without interaction because, unlike in the atmosphere, the mean free path for pair-production in the extragalactic radiation fields depends strongly on photon energy. Just as in the air-shower cascade, only a single photon at a time can be lost from a phase cell, with a corresponding decline in occupation number from $`N`$ to $`N-1`$. On each encounter with an infrared photon, the coherent assembly of $`N`$ photons has an $`N`$-fold increase in probability for some photon to be removed, so the mean free path is $`x_{\mathrm{pair}}(\epsilon )/N`$ where $`x_{\mathrm{pair}}(\epsilon )`$ is the mean free path for photon-photon pair production by single photons of energy $`\epsilon `$ through the extragalactic radiation fields. This implies that at distance $`x`$ from the source the average number of photons remaining in the assembly is $`N_R(x)=N_0\mathrm{exp}[-x/x_{\mathrm{pair}}(\epsilon )]`$, precisely the expression that would hold for $`N_0`$ independent photons. If $`d`$ is the distance from the source to Earth, then the energy observable by Cherenkov telescopes is $`E_{\mathrm{obs}}=N_R(d)\epsilon `$, and the number of photons in the assembly of coherent photons on emission was $`N_0=N_R\mathrm{exp}[d/x_{\mathrm{pair}}(E_{\mathrm{obs}}/N_R)]`$. For the purpose of illustration, we use for $`x_{\mathrm{pair}}(E)`$ the logarithmic mean of the upper and lower curves of Fig. 1(a) of Bednarek and Protheroe (1999), which is based on the infrared background models of Malkan and Stecker (1998). We show in Fig. 2 the result for propagation of coherent photons through the microwave and DEIR fields across $`d=200`$ Mpc appropriate to Mkn 501. We note, for example, that a coherent assembly of forty 10 TeV photons emitted would typically arrive at Earth as a coherent assembly of ten 10 TeV photons with an observable energy of 100 TeV, while a single photon of 100 TeV would have a probability of much less than $`10^{-6}`$ of reaching Earth.
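The attenuation arithmetic can be illustrated as follows. Note that the mean free path value used here is a made-up placeholder (the real $`x_{\mathrm{pair}}(E)`$ must be read off Fig. 1(a) of Bednarek and Protheroe 1999), chosen only so that the forty-to-ten example quoted above is reproduced.

```cpp
#include <cmath>
#include <cstdio>

// Illustration of N_R(d) = N_0 * exp(-d / x_pair(eps)) and E_obs = N_R * eps.
int main() {
    const double d      = 200.0;  // source distance in Mpc
    const double x_pair = 144.0;  // ASSUMED mean free path at eps = 10 TeV, Mpc
    const double N0 = 40.0, eps = 10.0;        // emitted photons, energy in TeV
    double N_R   = N0 * std::exp(-d / x_pair); // surviving photons, ~10
    double E_obs = N_R * eps;                  // energy a Cherenkov telescope sees
    std::printf("N_R = %.1f, E_obs = %.0f TeV\n", N_R, E_obs);
}
```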
## 5 Fluctuations in the Arriving Phase Cell Energy Content A stream of photons characterized by a brightness temperature $`T_b`$ of necessity will also have a distribution of phase cell occupation numbers, $`N`$, which, for high average values $`N`$, fluctuates as $`(\mathrm{\Delta }N)_{\mathrm{rms}}\sim N`$. For emission of a stream of identical assemblies of coherent photons, each containing $`N_0`$ photons on emission, fluctuations in the number of photons, $`N_R`$, remaining in each assembly after propagation to Earth through the DEIR, are Poissonian about the mean value $`N_R`$, i.e. $`(\mathrm{\Delta }N)_{\mathrm{rms}}\approx \sqrt{N_R}`$, for $`N_R\ll N_0`$, and less than Poissonian for $`N_R\sim N_0`$. Both these effects broaden the energy distributions of observed Cherenkov events. ## 6 What Mechanisms Could Produce Coherent TeV Gammas? In the laboratory (such as DESY), coherent X-radiation can be produced by stimulated emission from relativistic electrons passing through a periodically varying magnetic field (Madey, 1971). This shows that such processes are available in principle. A more promising astrophysical process might arise from the interaction of a collimated beam of relativistic electrons moving roughly upstream against an OH or H<sub>2</sub>O megamaser. This process is attractive, because a substantial number of AGNs are known to have nuclear megamasers. Inverse Compton scattering would produce photons with an energy increase of order $`\gamma ^2`$ in the co-moving frame of the jet of relativistic, randomly directed electrons. Here $`\gamma `$ is the Lorentz factor of electrons in the jet’s co-moving frame. To produce 1 TeV photons from H<sub>2</sub>O megamaser radiation at 22 GHz, we would require $`(\delta \gamma )^2\approx 1.1\times 10^{16}(E/\mathrm{TeV})`$, where $`\delta =[\mathrm{\Gamma }(1-\beta \mathrm{cos}\theta )]^{-1}`$ is the Doppler factor, $`\beta =v/c`$ refers to the relativistic bulk velocity $`v`$, and $`\mathrm{\Gamma }=[1-v^2/c^2]^{-1/2}`$ is the Lorentz factor of the jet. The factor $`\delta ^2`$ translates the photon’s initial energy to the co-moving frame and back to the frame of an Earth-based observer. For Mkn 501, the line-of-sight angle $`\theta `$ appears to be very small, so we may choose $`\delta =2\mathrm{\Gamma }\approx 25`$ (e.g. Tavecchio et al. 1998). As shown below, the number of phase cells into which the maser photons can be inverse-Compton scattered is limited, and these cells quickly fill up for relativistic jets with high column densities. At the photon densities discussed, nonlinear effects can be neglected. To provide a representative example, we might cite conditions in the galaxy NGC 1052, which contains a water megamaser with components that appear to lie along the direction of a radio jet (Claussen et al. 1998). Though this may just be a projection effect, we will assume, as these authors have, that it may signify interaction of the jet with dense clumps of molecular clouds – possibly producing maser activity in shocks. The observed radiation intensity of the maser per unit bandwidth at 22 GHz is $`I(\nu )=(c\rho (\nu )/4\pi )=50\mathrm{mJy}`$ for a beam size that is unresolved at $`0.3\times 1`$ mas. The beam, however, is clearly much larger than the roughly forty individual sources that are detected by virtue of their different velocities along the line of sight, whose centroids are separated by as little as $`0.1`$ mas.
The brightness temperature of these individual sources is $`T_b(\nu )=[I(\nu )c^2/2k\nu ^2]>4.5\times 10^8\mathrm{K}`$ if the nominal beam size is assumed. The density of phase space cells at this frequency is $`n(\nu )=8\pi \nu ^2/c^3\approx 4.5\times 10^{-10}\mathrm{cm}^{-3}\mathrm{Hz}^{-1}`$ so that the phase cell occupation number becomes $`N_{\mathrm{occ}}=[\rho (\nu )/h\nu n(\nu )]=(kT_b/h\nu )>4.3\times 10^8`$. All these figures are lower limits, since neither the angular resolution nor the spectral resolution suffices to resolve the individual maser sources. For this reason, it may be better to assume the properties of the better-resolved Galactic H<sub>2</sub>O masers, which have a brightness temperature of order $`T_b\sim 10^{14}`$ K, and a corresponding occupation number of order $`N_{\mathrm{occ}}\sim 10^{14}`$ (Moran, 1997). To be somewhat more conservative, we will adopt a value of $`N_{\mathrm{occ}}\approx 3\times 10^{13}`$ below. Under a Lorentz transformation $`I(\nu )`$ and $`\rho (\nu )`$ scale as $`\nu ^3`$, as does $`h\nu n(\nu )`$, so that the phase cell occupation number transforms as a constant. We can therefore deal with the occupation number as though it were in the rest frame of the jet of relativistic electrons. These electrons with energy $`\gamma m_ec^2`$ will have some velocity dispersion, leading to an energy bandwidth $`\mathrm{\Delta }\gamma m_ec^2`$. On inverse-Compton scattering the effective occupation number of scattered photons will be reduced by the ratio of bandwidths, $`(\mathrm{\Delta }\gamma /\gamma )/(\mathrm{\Delta }\nu /\nu )`$. If we take $`(\mathrm{\Delta }\gamma /\gamma )\approx 1`$, and $`(\mathrm{\Delta }\nu /\nu )\approx 3\times 10^{-6}`$ corresponding to a 1 km s<sup>-1</sup> velocity spread, the reduction in occupation number is of order $`3\times 10^5`$, bringing the actual occupation number down to $`10^8`$. The occupation number of inverse-Compton scattered photons could in principle also be diluted by the low effective cross section for back-scatter, i.e. by the Klein-Nishina cross section for back-scattering. However, despite the $`\gamma \delta `$ value of $`10^8(\delta /25)(\gamma /(4\times 10^6))`$ for electrons, the incident photons only have energy $`\gamma h\nu \delta \approx 10^4`$ eV in the electron’s rest frame, far lower than the 0.511 MeV electron rest mass. The Klein-Nishina cross section, therefore, reduces to the Thomson cross section $`\sigma _T\approx 6.6\times 10^{-25}`$ cm<sup>2</sup>. We can assume that the masers are isotropic, or else, if they are not, that there are a larger number than are actually observed. Either way, the scattered light they produce would be the same. If we further assume a jet with relativistic electron column density through which the maser photons pass of order $`n_{e,rel}\mathrm{}\sim 10^{17}`$ cm<sup>-2</sup>, we can estimate the phase cell occupation number of the scattered radiation. It is the product of the maser beam phase-cell occupation number, the ratio of bandwidths, the electron column density, and the Thomson cross section, giving $$N_{\mathrm{occ}}^{\mathrm{scat}}\approx 6\left(\frac{N_{\mathrm{occ}}}{3\times 10^{13}}\right)\left(\frac{(\mathrm{\Delta }\nu /\nu )/(\mathrm{\Delta }\gamma /\gamma )}{3\times 10^{-6}}\right)\left(\frac{n_{e,rel}\mathrm{}}{10^{17}\mathrm{cm}^{-2}}\right)$$ (1) Interestingly, those phase cells with high back-scattered occupation number $`N_b`$ will increase their occupancy at a rate $`(N_b+1)`$ times faster than unoccupied cells, since induced scattering then begins to play a role — there is gain.
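The occupation-number estimates above are simple cgs arithmetic and can be checked directly; the following sketch reproduces the quoted values of $`n(\nu )`$, $`N_{\mathrm{occ}}`$ and the product in Eq. (1).

```cpp
#include <cstdio>

// Reproduces the occupation-number arithmetic above in cgs units.
int main() {
    const double pi = 3.141592653589793;
    const double k  = 1.3807e-16;  // erg/K
    const double h  = 6.6261e-27;  // erg s
    const double c  = 2.9979e10;   // cm/s
    const double nu = 22.0e9;      // maser frequency, Hz

    double T_b   = 4.5e8;                              // brightness temperature, K
    double n_nu  = 8.0 * pi * nu * nu / (c * c * c);   // phase cell density, cm^-3 Hz^-1
    double N_occ = k * T_b / (h * nu);                 // occupation number kT_b/(h nu)
    std::printf("n(nu) = %.2e cm^-3 Hz^-1, N_occ = %.2e\n", n_nu, N_occ);

    // Scattered occupation number, Eq. (1): ~6 for the fiducial values
    double N_maser = 3.0e13, bw_ratio = 3.0e-6;
    double col = 1.0e17, sigma_T = 6.65e-25;           // cm^-2, cm^2
    std::printf("N_occ^scat = %.1f\n", N_maser * bw_ratio * col * sigma_T);
}
```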
We may, therefore, expect such a configuration to give rise to reasonably high occupation numbers for TeV photons and energy densities compatible with observed values. NGC 1052 exhibits nearly 40 maser hot spots, with a total 22 GHz luminosity of $`200L_{}\approx 8\times 10^{35}`$ erg s<sup>-1</sup>. Let us assume that the maser power available for interacting with the relativistic jet would be equivalent to only 25% of this. If only a fraction $`n_{e,rel}\mathrm{}\sigma _T\approx 6.6\times 10^{-8}`$ of this radiation is scattered, but each photon’s energy increases by a factor $`1.1\times 10^{16}`$, the 1 TeV luminosity is $`1.5\times 10^{44}`$ erg s<sup>-1</sup>. This needs to be compared to the TeV flux from Mkn 501 in its high state, which is of order $`3\times 10^{-10}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, corresponding for a distance of 200 Mpc to an apparent omnidirectional luminosity of $`1.5\times 10^{45}`$ erg s<sup>-1</sup> (Pian et al. 1998). Since our model assumes only a single jet spherically expanding within a relatively narrow solid cone whose axis is directed at us, these two figures are roughly consonant. ## 7 Synchrotron Emission from Relativistic Electrons A highly relativistic electron with energy $`E`$ emits synchrotron power in its rest frame $`P(E)\approx 2.6\times 10^{-4}(B/0.1\mathrm{gauss})^2(\gamma /4\times 10^6)^2\mathrm{erg}\mathrm{s}^{-1}`$. The peak frequency the photons attain in this frame will be of the order of $$\nu _m\approx \frac{eB\gamma ^2}{2\pi m_ec}\approx 4.5\times 10^{18}\left(\frac{B}{0.1\mathrm{gauss}}\right)\left(\frac{\gamma }{4\times 10^6}\right)^2\mathrm{Hz}$$ (2) where $`m_e`$ is the electron rest mass, $`B\approx 0.1`$ gauss (e.g. Bednarek & Protheroe 1999) is the local magnetic field strength, and $`e`$ the electron charge. In the terrestrial observer’s frame the frequency becomes $`\nu _m\delta \approx 10^{20}`$ Hz, which roughly corresponds to the peak synchrotron radiation frequency of Mkn 501 in the high state. OSSE observations during flaring (Catanese et al. 1997) show that the energy flux per log energy interval continues up to $`500`$ keV at roughly the same level as that observed by Beppo-SAX (Pian et al. 1998), indicating that Mkn 501 emits a synchrotron power at 0.5 MeV comparable to the TeV power during flaring. The emitted synchrotron power in the relativistic jet’s comoving frame would be $`2.4\times 10^{42}(25/\delta )^2`$ erg s<sup>-1</sup>, implying emission from $`10^{46}(25/\delta )^2(0.1\mathrm{gauss}/B)^2(4\times 10^6/\gamma )^2`$ relativistic electrons. In recent models of AGN jet dynamics (e.g. Falcke & Biermann 1999) a relativistic jet can readily interact with $`N_{\mathrm{cl}}`$ dense ambient molecular clumps located at $`10^{19}`$ cm from the central engine, to produce relativistic shocks that could trigger maser emission in these clumps. Local acceleration at the shock fronts or production from hadronic interaction and decays could then also provide relativistic particle energies $`\gamma m_ec^2\approx 3.2(\gamma /4\times 10^6)`$ erg in the jet’s comoving system. The time scale for energy loss for these particles through synchrotron radiation is of order $`t_{synch}\approx 1.25\times 10^4(4\times 10^6/\gamma )(0.1\mathrm{gauss}/B)^2`$ seconds. Since the relativistic shocks propagate into the jet at a significant fraction of the speed of light, the radiating post-shock volumes have dimensions of order $`ct_{synch}\approx 10^{14}`$ to $`10^{15}`$ cm on a side.
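Eq. (2) is likewise easy to verify numerically; the sketch below evaluates the comoving peak frequency for the quoted fiducial values $`B=0.1`$ gauss, $`\gamma =4\times 10^6`$ and the Doppler boost $`\delta =25`$.

```cpp
#include <cstdio>

// Evaluates the synchrotron peak frequency of Eq. (2) in cgs units.
int main() {
    const double pi  = 3.141592653589793;
    const double e   = 4.8032e-10;  // electron charge, esu
    const double m_e = 9.1094e-28;  // electron mass, g
    const double c   = 2.9979e10;   // speed of light, cm/s

    double B = 0.1, gamma = 4.0e6, delta = 25.0;
    double nu_m = e * B * gamma * gamma / (2.0 * pi * m_e * c);
    std::printf("nu_m = %.2e Hz (comoving), delta*nu_m = %.2e Hz (observed)\n",
                nu_m, delta * nu_m);  // ~4.5e18 Hz and ~1e20 Hz
}
```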
At particle densities of order $`n_{e,rel}\sim 10^2/N_{\mathrm{cl}}`$ cm<sup>-3</sup>, a post-shock column density of $`10^{17}`$ cm<sup>-2</sup> through the $`N_{\mathrm{cl}}`$ shocks therefore appears possible. ## 8 Discussion It is possible that highly energetic gamma radiation from distant cosmological sources will be found to appear in conflict with pair-production constraints imposed by the diffuse extragalactic infrared background radiation. This apparent violation could then be due to coherent TeV gammas of lower energy, whose Cherenkov radiation superficially mimics individual photons of much higher energy. We have suggested how the Cherenkov radiation signatures of coherent and incoherent radiation can be distinguished, and have sketched a plausible way in which coherent TeV photons could be astrophysically generated. Whether this particular mechanism is realized in nature remains to be determined, but other sources of coherent TeV gamma radiation are entirely possible. If coherent TeV photons can be produced in nature then we have shown that there exists a mechanism by which multi-TeV Cherenkov signals may be observed from high redshift sources. The work of one of us (MH) is supported by grants from NASA. The Alexander von Humboldt Foundation, the Max Planck Institute for Radio Astronomy in Bonn, and the Australia Telescope National Facility were his gracious hosts during work on this paper. Drs. Vladimir Strelnitski and Karl Menten kindly provided helpful comments. The work of RJP is supported by the Australian Research Council. PLB’s work on high energy physics is partially supported by a DESY grant. He wishes to acknowledge discussions with Dr. Carsten Niebuhr of DESY, Hamburg, Dr. Yiping Wang of PMO, Nanjing, and Dr. Heino Falcke and Ms. Giovanna Pugliese from Bonn. ## References
Aharonian, F. A., et al. (HEGRA collaboration) 1999, A&A, submitted, astro-ph/9903386
Bednarek, W. & Protheroe, R. J. 1999, MNRAS, in press, astro-ph/9902050
Catanese, M., et al. 1997, ApJ, 487, L143
Claussen, M. J., et al. 1998, ApJ, 500, L129
Coppi, P. S. & Aharonian, F. A. 1997, ApJ, 487, L9
Crewther, I. Y. & Protheroe, R. J. 1990, J. Phys. G: Nucl. Part. Phys., 16, L13
Falcke, H. & Biermann, P. L. 1999, A&A, 342, 49
Gould, R. J. & Schréder, G. 1966, PRL, 16, 252
Gregorich, D. T., et al. 1995, AJ, 110, 259
Hacking, P. B. & Soifer, B. T. 1991, ApJ, 367, L49
Hauser, M. G., et al. 1998, ApJ, 508, 25, astro-ph/9806167
Hayashida, N., et al. 1998, ApJ, 504, L71
Konopelko, A., et al. (HEGRA collaboration) 1999, Astropart. Phys., 10, 275
Krennrich, F., et al. 1999, ApJ, 511, 149
Madey, J. M. J. 1971, J. Appl. Phys., 42, 1906
Malkan, M. A. & Stecker, F. W. 1998, ApJ, 496, 13
Moran, J. M. 1997, “Modern Radio Science”, ed. J. H. Hamelin, International Radio Science Union (URSI), Oxford University Press (also Harvard-Smithsonian Center for Astrophysics Preprint No. 4305)
Pian, E., et al. 1998, ApJ, 492, L17
Stanev, T. & Franceschini, A. 1998, ApJ, 494, L159
Stecker, F. W., De Jager, O. C., & Salamon, M. H. 1992, ApJ, 390, L49
Tavecchio, F., Maraschi, L., & Ghisellini, G. 1998, ApJ, 509, 608
no-problem/9908/hep-lat9908010.html
ar5iv
text
# 1 Introduction ## 1 Introduction The static quark potential at high temperatures is interesting for several reasons. Phenomenologically, the properties of quark bound states, in particular of heavy quarkonia, can be derived from potential models quite successfully. It is then important to compute the temperature dependence of the potential, as this might lead to observable consequences in heavy ion collision experiments. Notably, it has been suggested to use the suppression of $`J/\mathrm{\Psi }`$ and $`\mathrm{\Psi }^{}`$ production as a signal for the quark-gluon plasma. For this purpose, a detailed knowledge of the temperature dependence of the potential appears very helpful. Moreover, it is well known that a linearly increasing potential at large distances arises naturally from a string picture of confinement. As long as one stays in the confined phase of QCD, string models then also predict a definite behaviour of the potential at finite temperatures. These predictions ought to be tested by lattice analyses. In close vicinity of the deconfinement transition temperature, the static quark potential and the mass gap, i.e. the potential integrated over perpendicular directions, are sensitive to the order of the phase transition. In colour $`SU(3)`$ the observation of a finite mass gap at the critical temperature supported a first-order transition, while in $`SU(2)`$ a continuous decrease to zero with the appropriate Ising critical exponents was found. In the deconfined phase, asymptotic freedom suggests that at high temperatures the plasma consists of weakly interacting quarks and gluons. Previous numerical studies \[8-12\] have, however, shown that non-perturbative phenomena prevail up to temperatures of at least several times the critical temperature. In particular, the heavy quark potential did not show the simple Debye-screened behaviour anticipated from a resummed lowest-order perturbative treatment. This might not be too surprising as various non-perturbative modes may play a role in the long distance sector of the plasma. It is then important to quantify colour screening effects by a genuinely non-perturbative approach. In the present paper we compute the static quark potential in the pure gluonic sector of QCD. We investigate the temperature dependence of the potential over a range of temperatures from 0.8 to about 4 times the critical temperature $`T_c`$. The analysis is based on gluon configurations generated on lattices of size $`32^3\times N_\tau `$ with $`N_\tau =4,6`$ and $`8`$. This enables us to gain some control over finite lattice spacing artefacts. On the smallest lattice a tree-level improved gauge action was used, while on the two bigger lattices a standard Wilson action was employed. We go beyond previous studies of the potential insofar as the temperature range is covered more densely and a larger set of lattice distances was probed. This helps to extract fit parameters with higher reliability. The paper is organized such that the next section summarizes theoretical expectations on the behaviour of the potential both below and above the transition temperature. In section 3 we present and discuss our results for the potential in the confined phase. Section 4 contains our findings for temperatures above $`T_c`$ and section 5 the conclusion.
## 2 Theoretical Expectations Throughout this paper the potential is computed from Polyakov loop correlations $$\langle L(\stackrel{}{0})L^{}(\stackrel{}{R})\rangle =\mathrm{exp}\{-V(|\stackrel{}{R}|,T)/T\}$$ (1) where $$L(\stackrel{}{x})=\frac{1}{3}\mathrm{tr}\prod _{\tau =1}^{N_\tau }U_0(\stackrel{}{x},\tau )$$ (2) denotes the Polyakov loop at spatial coordinates $`\stackrel{}{x}`$. In the limit $`R\to \mathrm{\infty }`$ the correlation function should approach the cluster value $`|\langle L(0)\rangle |^2`$, which vanishes if the potential is rising to infinity at large distances (confinement) and which acquires a finite value in the deconfined phase. In the limit where the flux tube between two static quarks can be considered as a string, predictions about the behaviour of the potential are available from computations of the leading terms arising in string models. For zero temperature one expects $$V(R)=V_0-\frac{\pi }{12}\frac{1}{R}+\sigma R$$ (3) where $`V_0`$ denotes the self energy of the quark lines, $`\sigma `$ is the string tension and the Coulomb-like $`1/R`$ term stems from fluctuations of the string. Eq. (3) generally gives a good description of the zero temperature ground-state potential, although it has been shown that the excitation spectrum meets string model predictions only at large quark pair separations. For non-vanishing temperatures below the critical temperature of the transition to deconfinement, a temperature-dependent potential has been computed as $`V(R,T)`$ $`=`$ $`V_0-\left[{\displaystyle \frac{\pi }{12}}-{\displaystyle \frac{1}{6}}\mathrm{arctan}(2RT)\right]{\displaystyle \frac{1}{R}}`$ (4) $`+\left[\sigma -{\displaystyle \frac{\pi }{3}}T^2+{\displaystyle \frac{2}{3}}T^2\mathrm{arctan}({\displaystyle \frac{1}{2RT}})\right]R+{\displaystyle \frac{T}{2}}\mathrm{ln}(1+(2RT)^2)`$ In the limit $`R\gg 1/T`$ this goes over into $$V(R,T)=V_0+\left[\sigma -\frac{\pi }{3}T^2\right]R+T\mathrm{ln}(2RT)$$ (5) which had been calculated previously. Note the logarithmic term which originates from transverse momentum fluctuations<sup>1</sup><sup>1</sup>1In the context of analyzing numerical data this term has been mentioned and discussed in detail in earlier work.. So far, it has been left open whether the string tension $`\sigma `$ appearing in eqs. (4) and (5) is identical to the zero temperature value. In the context of a low temperature or large $`R`$ expansion, the temperature dependent terms appearing in eqs. (4) and (5) should, however, be considered as thermal corrections to the zero temperature string tension. An explicitly temperature-dependent string tension was computed by means of a $`1/D`$ expansion $$\frac{\sigma (T)}{\sigma (0)}=\sqrt{1-\frac{T^2}{T_c^2}}$$ (6) where $`T_c`$ was obtained as $$T_c^2=\frac{3}{\pi (D-2)}\sigma (0).$$ (7) Note, however, that for $`D\to \mathrm{\infty }`$ the phase transition is of second order, leading to a continuous vanishing of the string tension at the deconfinement temperature. In colour $`SU(2)`$, which also exhibits a second order transition, it was established that $`\sigma (T)`$ vanishes as $`(\beta _c-\beta )^\nu `$ with a critical exponent $`\nu `$ taking its 3-D Ising value of 0.63, as suggested by universality. In the present case of colour $`SU(3)`$ one expects a discontinuous behaviour and a non-vanishing string tension at the critical temperature. In the deconfined phase the Polyakov loop acquires a non-zero value. Thus, we can normalize the correlation function to the cluster value $`|\langle L\rangle |^2`$, thereby removing the quark-line self energy contributions.
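For later reference, the finite-temperature ansatz of Eq. (4) is straightforward to code up as a fit function; the sketch below is a direct transcription with $`V_0`$ and $`\sigma `$ as the free parameters, not the collaboration's actual fitting code.

```cpp
#include <cmath>

const double pi = 3.141592653589793;

// The finite-temperature string-model ansatz of Eq. (4); V0 and sigma are the
// fit parameters, all quantities in lattice units with T fixed by 1/(N_tau a).
double V_string(double R, double T, double V0, double sigma) {
    double a = std::atan(2.0 * R * T);
    double b = std::atan(1.0 / (2.0 * R * T));
    return V0
         - (pi / 12.0 - a / 6.0) / R                        // Coulomb-like piece
         + (sigma - pi * T * T / 3.0
                  + 2.0 * T * T * b / 3.0) * R              // linear piece
         + 0.5 * T * std::log(1.0 + 4.0 * R * R * T * T);   // logarithmic term
}
```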
Moreover, the quark-antiquark pair can be in either a colour singlet or a colour octet state. Since in the plasma phase quarks are deconfined the octet contribution does not vanish<sup>2</sup><sup>2</sup>2It is, however, small compared to the singlet part. This is true perturbatively, see eq. (9), as well as numerically. and the Polyakov loop correlation is a colour-averaged mixture of both $$e^{-V(R,T)/T}=\frac{1}{9}e^{-V_1(R,T)/T}+\frac{8}{9}e^{-V_8(R,T)/T}$$ (8) At high temperatures, perturbation theory predicts that $`V_1`$ and $`V_8`$ are related as $$V_1=-8V_8+𝒪(g^4)$$ (9) Correspondingly, the colour-averaged potential is given by $$\frac{V(R,T)}{T}=-\frac{1}{16}\frac{V_1^2(R,T)}{T^2}$$ (10) Due to the interaction with the heat bath the gluon acquires a chromo-electric mass $`m_e(T)`$ as the IR limit of the vacuum polarization tensor. To lowest order in perturbation theory, this is obtained as $$\left(\frac{m_e^{(0)}(T)}{T}\right)^2=g^2(T)\left(\frac{N_c}{3}+\frac{N_F}{6}\right)$$ (11) where $`g(T)`$ denotes the temperature-dependent renormalized coupling, $`N_c`$ is the number of colours and $`N_F`$ the number of quark flavours. The electric mass is also known in next-to-leading order, in which it depends on an anticipated chromo-magnetic gluon mass, although the magnetic gluon mass itself cannot be calculated perturbatively. Fourier transformation of the gluon propagator leads to the Debye-screened Coulomb potential for the singlet channel $$V_1(R,T)=-\frac{\alpha (T)}{R}e^{-m_e(T)R}$$ (12) where $`\alpha (T)=g^2(T)(N_c^2-1)/(8\pi N_c)`$ is the renormalized T-dependent fine structure constant. It has been stressed that eq. (12) holds only in the IR limit $`R\to \mathrm{\infty }`$ because momentum-dependent contributions to the vacuum polarization tensor have been neglected. Moreover, at temperatures just above $`T_c`$ perturbative arguments will not apply, so that we have chosen to attempt a parametrization of the numerical data with the more general ansatz $$\frac{V(R,T)}{T}=-\frac{e(T)}{(RT)^d}e^{-\mu (T)R}$$ (13) with an arbitrary power $`d`$ of the $`1/R`$ term, an arbitrary coefficient $`e(T)`$ and a simple exponential decay determined by a general screening mass $`\mu (T)`$. Only for $`T\gg T_c`$ and large distances do we expect that $`d\to 2`$ and $`\mu (T)\to 2m_e(T)`$, cf. eq. (10), corresponding to two-gluon exchange. ## 3 Results below $`T_c`$ The results to be presented here as well as in the next section are based on two different sets of data. The first set, referred to as (I) in the following, was generated with a tree-level Symanzik-improved gauge action consisting of $`1\times 1`$ and $`2\times 1`$ loops. The lattice size was $`32^3\times 4`$. We used a pseudo-heatbath algorithm with FHKP updating in the $`SU(2)`$ subgroups. Each heatbath iteration is supplemented by 4 overrelaxation steps. To improve the signal in calculations of Polyakov loop correlation functions, link integration was employed. For each $`\beta `$-value the data set consists of 20000 to 30000 measurements separated by one sweep. The second set of data (II) was obtained as a by-product of earlier work, the analysis of the equation of state. The gauge configurations used in the present study were generated with the standard Wilson gauge action on lattices of size $`32^3\times 6`$ and $`32^3\times 8`$. The same algorithm as for (I) was employed. The statistics amounts to 1000 to 4000 measurements separated by 10 sweeps for the $`N_\tau =6`$ data and between 15000 and 30000 measurements, taken after every sweep, for $`N_\tau =8`$.
The errors on the potentials as well as the fit parameters were determined by jackknife in both cases. The lattice results for the potential at temperatures below $`T_c`$ are shown in Figures 1, 2 and 3. The correlation functions, eq. (1), have been computed not only for on-axis separations but also for some, in case (I) almost all, off-axis distance vectors $`\stackrel{}{R}`$. Although the lattice spacing for the $`N_\tau =4`$ data is larger than for the other two lattice sizes, rotational symmetry is quite well satisfied due to the use of an improved action in this case. As we will focus on the intermediate to large distance behaviour of the potential, it was not attempted to specifically treat the deviations from rotational invariance at small separations. Note that the distances covered by the data extend to $`RT<4`$ for (I), while in case (II) we could obtain signals up to $`RT<2`$. The potentials have first been fitted to eq. (4) with two free parameters, the self-energy $`V_0`$ and a possibly temperature-dependent string tension $`\sigma (T)`$. These fits work rather well even when data at small separations are included because the fit ansatz also accounts for a $`1/R`$ piece in the potential. The results to be quoted for the string tension, Table 1, have, however, been obtained with the data at small separations excluded from the fit. Typically, a minimal distance of $`RT\approx 1/2`$ was chosen. The fits are stable under variation of $`R_{\mathrm{min}}`$ in this ballpark and return good $`\chi ^2`$ values. Varying the maximum distance to be fitted does not lead to noticeable changes of the results. This holds for all three lattices. The results for the string tension, normalized to the critical temperature squared, $`\sigma /T_c^2`$, are summarized in Figure 4. The temperature scale has been determined from measurements of the string tension at $`T=0`$. The finite temperature string tension is compared to these results at zero temperature, $`\sigma (0)/T_c^2`$, shown as the line in the figure. Quite clearly, in the investigated temperature range there are substantial deviations from the zero temperature string tension. These deviations amount to about 10 % at $`T/T_c=0.8`$ and become larger when the temperature is raised. Close to $`T_c`$ the results from $`N_\tau =6`$ and $`8`$ at first sight do not seem to agree with the numbers coming from the $`N_\tau =4`$ lattices. However, recall that the $`SU(3)`$ quenched theory exhibits a first order transition with the coexistence of hadron and plasma phase at the critical temperature. The tunneling rate between the two phases decreases exponentially, $`\mathrm{exp}(-2\widehat{\sigma }\times (N_\sigma /N_\tau )^2)`$, where $`\widehat{\sigma }=\sigma _I/T_c^3`$ is the normalized interface tension. The lattices of data set (II) have smaller aspect ratios of $`N_\sigma /N_\tau =5.33`$ and 4, respectively, than the $`N_\tau =4`$ lattice, whose aspect ratio is 8. Correspondingly, the ensemble of configurations of the second set contains (more) configurations in the “wrong”, i.e. deconfined, phase. In fact, close to $`T_c`$, Polyakov loop histograms reveal this two-state distribution for $`N_\tau =6`$ and 8, with a clear separability between the two Gaussian-like peaks. Such a two-state signal is absent for the $`N_\tau =4`$ data. Carrying out the averaging of the potential only over configurations with Polyakov loops in the confined peak leads to the corrected data points in Figure 4.
At temperatures not so close to $`T_c`$ this separation of phases is no longer possible, as the Polyakov loop histogram has tails into the deconfined phase and it is not clear where one should set the cut. When we apply this correction the agreement of the results from the three different lattices is evident. This shows that the temperature dependence of the string tension is not subject to severe discretisation effects. Moreover, the functional form of the fit ansatz eq. (4), as suggested by string model calculations, describes the behaviour of the lattice data quite well. However, with increasing temperature we observe a substantial decrease of the string tension away from its zero temperature value. Since the fit ansatz, eq. (4), already contains a $`\pi T^2/3`$ term, the decreasing slope of the linear part of the potential cannot solely be accounted for by this leading correction. In order to analyze the linearly rising part of the potential in a more model-independent way, in a second round of fits we have compared our data with the ansatz $$V(R,T)=V_0+\sigma (T)R+CT\mathrm{ln}(2RT)$$ (14) Note that this ansatz differs from eq. (5) insofar as it summarizes all the linear dependence on the distance $`R`$ in an explicitly temperature dependent string tension $`\sigma (T)`$. Since it lacks a $`1/R`$ piece, this formula is capable of describing the data only if the fit is restricted to large distances, $`RT1`$. For data set (II) this requirement leaves not too many data points to be fitted. In this case we checked that eq. (14) is able to parametrize the potential. However, since we do not have as much room to check for stability of the results as one would wish, we refrain from quoting results for data set (II). In case (I) we do have enough distances and obtain fits with good $`\chi ^2`$ values which are stable under variation of the minimal distance to be included in the minimization. On data set (I) we clearly observe the logarithmic term contained in eqs. (5) and (14). The fits return values for the coefficient $`C`$ of the logarithm which are equal to 1 within an error margin of less than 10 %. We thus confirm a logarithmic piece in the potential with a strength as anticipated from the string model calculation or, equivalently, a subleading power-like $`1/R`$-factor with power 1 contributing to the Polyakov-loop correlation function. Because of these findings, we fix this coefficient to 1 in the following. The resulting string tension, normalized to its zero temperature value, is shown in Figure 5. The temperature dependence compares well with the (modified) prediction of the Nambu-Goto model, eq. (6), $$\frac{\sigma (T)}{\sigma (0)}=a\sqrt{1-b\frac{T^2}{T_c^2}}$$ (15) Recall that the string model prediction assumes a second order transition with a continuous vanishing of the string tension at the critical temperature. The deconfinement transition in pure $`SU(3)`$ Yang-Mills theory, however, is known to be of first order. Thus, a discontinuity at the critical temperature is expected. To account for this, the coefficients $`a`$ and $`b`$ in eq. (15) are allowed to deviate from unity. In fact, the fit to the data, shown as the line in Figure 5, results in the values $`a=1.21(5)`$ and $`b=0.990(5)`$.
This leads to a non-vanishing string tension at the critical temperature of $$\frac{\sigma (T_c)}{\sigma (0)}=0.121(35)$$ (16) This number can be converted into a value for the (physical) mass gap at the transition point, $`m_{\mathrm{phys}}(T_c)/T_c=\sigma (T_c)/T_c^2=0.30(9)`$. This is a bit below, but not incompatible with, earlier results of dedicated analyses of the order of the deconfinement transition, $`m_{\mathrm{phys}}(T_c)/T_c=0.4`$–$`0.8`$. Finally, we compared the string tension $`\sigma (T)`$ defined in eq. (14) with the leading behaviour $`\sigma (0)-\pi T^2/3`$ as given in eq. (5). This is shown as the dotted line in Figure 5. As in Figure 4, the comparison fails, reflecting that non-leading terms contribute substantially. ## 4 Results above $`T_c`$ Above the critical temperature we have normalized the Polyakov loop correlations to their cluster value $$V(|\stackrel{}{R}|,T)=-T\mathrm{ln}\frac{\langle L(0)L^{}(\stackrel{}{R})\rangle }{|\langle L(0)\rangle |^2}$$ (17) to eliminate the self-energy contributions. In principle, the correlation function itself is periodic in $`R`$. Alternatively, one can fit the potential, eq. (17), with a periodic ansatz, $`V(R)\to V(R)+V(N_\sigma a-R)`$. The second contribution turns out to be very small at the distances fitted and both procedures lead to the same results for the fit parameters. In the following we first concentrate on data set (I), which has somewhat better statistics and which, more importantly, covers the explored range of distances more densely, see Figure 6. As has been explained in section 2, we fit the potentials above $`T_c`$ with the generalized screening ansatz, eq. (13), where the exponent $`d`$ of the Coulomb-like part is treated as a free parameter. It turns out that the value of the exponent and the value of the screening mass $`\mu `$ are strongly correlated. In particular at the higher temperatures it is difficult to obtain fit results which are stable under the variation of the minimum distance included in the fit. These fluctuations have been taken into account in our estimates of the error bars. At the highest temperatures analyzed we observed that at large quark separations the Polyakov-loop correlation decreases below the cluster value. It has been argued that finite momentum contributions to the vacuum polarization tensor can give rise to a modified screening function which undershoots the exponential Debye decay at intermediate distances and approaches the infinite distance limit from below. Despite the high, yet limited, precision of our data we are, however, not in a position to confirm this suggestion. Instead, we have taken an operational approach and have added an overall constant to our fit ansatz. In Figure 7 we summarize the results for the exponent. At temperatures very close to $`T_c`$, the exponent $`d`$ is compatible with $`1`$. When the temperature is increased slightly, $`d`$ starts rising to about $`1.4`$ for temperatures up to $`2T_c`$. Between 2 and 3 times $`T_c`$, the exponent centers around 1.5, although the error bars tend to become rather large. A value of 2 as predicted by perturbation theory seems to be ruled out, however, in the investigated temperature range. The results for the screening mass $`\mu (T)`$ obtained from the same fits with eq. (13) are shown in Figure 8. The screening mass turns out to be small but finite just above $`T_c`$ and rises rapidly when the temperature is increased. It reaches a value of about $`2.5T`$ at temperatures around $`1.5T_c`$ and seems to stabilize there.
Figure 8 also includes a comparison with lowest order perturbation theory, $`\mu (T)=Am_e^{(0)}(T)`$ with $`m_e^{(0)}(T)`$ as given in eq. (11). For the temperature dependent renormalized coupling $`g^2(T)`$ the two-loop formula $$g^{-2}(T)=2b_0\mathrm{ln}\left(\frac{2\pi T}{\mathrm{\Lambda }_{\overline{MS}}}\right)+\frac{b_1}{b_0}\mathrm{ln}\left(2\mathrm{ln}\left(\frac{2\pi T}{\mathrm{\Lambda }_{\overline{MS}}}\right)\right)$$ (18) was used, where $`T_c/\mathrm{\Lambda }_{\overline{MS}}=1.14(4)`$ and the scale was set by the lowest Matsubara frequency $`2\pi T`$. Perturbation theory predicts the factor $`A`$ to be 2. Indeed, adjusting to the data points at the two highest temperatures, $`T>2T_c`$, leads to a value of $`A=1.82\pm 0.15`$, which is close to the prediction. However, in view of the results for the exponent $`d`$ we regard this as an accidental coincidence. This is further supported by analyses in colour $`SU(2)`$, where the electric gluon mass was obtained from gluon propagators and from the singlet potential $`V_1`$, see eq. (12). There it was found that the observed mass follows a behaviour $`m_e(T)\approx 1.6m_e^{(0)}(T)`$. If this result could be transferred to the case of $`SU(3)`$ (and there is some early evidence for this), we would have $`\mu (T)\approx m_e(T)`$, contrary to the perturbative factor of 2. The potentials above $`T_c`$ from data set (II) are very similar to the ones already discussed. Fits with eq. (13) with a free exponent do work and return parameter values in the same ballpark as in case (I). However, because of the much smaller number of distances probed in this set, the fit results are not as reliable as in case (I). Therefore we have chosen to carry out fits with eq. (13) but with $`d`$ kept fixed. For comparison, data set (I) has also been treated this way. The general feature of these fits is that increasing $`d`$ from 1.0 to 2.0 leads to decreasing numbers for the screening mass. For instance, at $`3T_c`$ we obtain $`\mu /T\approx 3`$ for $`d=1.0`$ whereas with $`d=2.0`$ the result for the screening mass is $`\mu /T\approx 2`$. Similar shifts occur at all temperatures. The quality of the fits, however, is not always the same. Typically, at temperatures close to $`T_c`$ fits with $`d=2`$ return unacceptable $`\chi ^2`$ values, while for $`T2T_c`$ the $`\chi ^2`$ values are equally good for all values of $`d`$ and cannot be used to distinguish between the various exponent values anymore. This observation fits nicely into the picture shown in Figure 7. As an example of the temperature dependence of the screening mass at fixed $`d`$, in Figure 9 we show our results at $`d=1.5`$ for all three different lattice sizes. Recall that a value of $`d\approx 1.5`$ was favored at all temperatures $`T>1.2T_c`$ of data set (I). The general behavior is similar to that shown in Figure 8: the screening mass is small close to $`T_c`$ and starts to rise quickly. It reaches a kind of plateau with a value of $`\mu /T\approx 2.5`$ for temperatures between roughly 1.5 and 3 $`T_c`$. For temperatures beyond $`3T_c`$ the $`N_\tau =8`$ data may indicate a slow decrease with rising temperature. The main conclusion to be drawn from Figure 9 is that the results from the different lattices, i.e. at different lattice spacings, are in agreement with each other within the error bars. Thus, in the investigated temperature range colour screening effects are not yet properly described by simple perturbative predictions.
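A small numerical sketch of Eqs. (18) and (11) is given below. The two-loop coefficients are taken to be the standard pure-gauge values $`b_0=11N_c/48\pi ^2`$ and $`b_1=34N_c^2/3(16\pi ^2)^2`$; that their normalization matches the paper's convention is an assumption on my part.

```cpp
#include <cmath>
#include <cstdio>

// Two-loop running coupling of Eq. (18) and the leading-order electric mass
// of Eq. (11) for pure SU(3) gauge theory (N_F = 0).
int main() {
    const double pi = 3.141592653589793;
    const double Nc = 3.0;
    const double b0 = 11.0 * Nc / (48.0 * pi * pi);
    const double b1 = 34.0 * Nc * Nc / (3.0 * 256.0 * pi * pi * pi * pi);
    const double Tc_over_Lambda = 1.14;                 // T_c / Lambda_MSbar

    for (double ToverTc = 1.5; ToverTc <= 4.01; ToverTc += 0.5) {
        double L   = std::log(2.0 * pi * ToverTc * Tc_over_Lambda);
        double g2  = 1.0 / (2.0 * b0 * L + (b1 / b0) * std::log(2.0 * L));
        double me0 = std::sqrt(g2 * Nc / 3.0);          // m_e^(0)/T from Eq. (11)
        std::printf("T/Tc = %.1f  g^2 = %.2f  m_e^(0)/T = %.2f\n",
                    ToverTc, g2, me0);
    }
}
```

At $`T=3T_c`$ this gives $`g^2(T)\approx 1.9`$ and $`m_e^{(0)}/T\approx 1.4`$, so the fitted $`A=1.82`$ indeed corresponds to $`\mu /T\approx 2.5`$ there.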
## 5 Conclusion In this paper we have analyzed the heavy quark potential at finite temperatures in the range $`0.8T_c`$ up to about $`4T_c`$ in $`SU(3)`$ Yang-Mills theory. We have done so on lattices with 3 different temporal extents and found results consistent with each other. Moreover, the standard Wilson action as well as a tree-level improved Symanzik action were used. Again, consistency was observed. This indicates that finite lattice spacing artefacts are not invalidating the analysis. The potentials at temperatures below the critical temperature of the deconfinement transition are well parametrized by formulae which have been derived within string models. In particular, the presence of a logarithmic term with the predicted strength could be established. However, the obtained string tension shows a substantial temperature dependence which is not in accord with the leading string model result. Instead, we find a decrease of the string tension which is compatible with being proportional to $`(T_c-bT)^{1/2}`$ in the critical region below $`T_c`$. At the critical temperature the string tension retains a finite value of $`\sigma (T_c)/\sigma (0)=0.121(35)`$, consistent with a first order transition. Above the deconfinement transition the potentials show a screened power-like behaviour. By comparing the data with perturbative predictions we can further strengthen earlier claims that these predictions do not properly describe the potentials up to temperatures of a few times the critical one. In particular, it can be excluded that the exchange of two gluons with an effective chromo-electric mass is the dominant screening mechanism. Judging from the exponent of the $`1/R`$ term in the potential, at temperatures close to $`T_c`$ it seems that the complex interactions close to the phase transition arrange themselves in such a way as to be effectively describable by some kind of one-gluon exchange. At temperatures of about 1.5 to 3 times $`T_c`$ we observe a behaviour which could be interpreted as a mixture of one- and two-gluon exchange. The resulting screening mass scales with the temperature, $`\mu (T)\simeq 2.5\,T`$; a perturbative decrease due to the temperature-dependent renormalized coupling $`g(T)`$ is not really seen. Thus, it is very likely that non-perturbative phenomena and higher order perturbative contributions are needed to explain the observed screening behaviour in the investigated temperature range. Acknowledgements: This work was supported by the TMR network ERBFMRX-CT-970122, the DFG grant Ka 1198/4-1 and partly by the “Nederlandse Organisatie voor Wetenschappelijk Onderzoek” (NWO) via a research program of the “Stichting voor Fundamenteel Onderzoek der Materie” (FOM). The numerical work has been carried out on Quadrics QH2 and QH1 computers at the University of Bielefeld which in part have been funded by the DFG under grant Pe 340/6-1. F.K. acknowledges support through the visitor program of the Center for Computational Physics at the University of Tsukuba and thanks the CCP for the kind hospitality extended to him. E.L. thanks the NIKHEF for the kind hospitality and J. Koch for critical comments on the manuscript.
# Message passing on the QCDSP supercomputer ## 1 Introduction The massively parallel QCDSP supercomputers located at Columbia University and the RIKEN/BNL Research Center were explicitly designed for large scale lattice gauge calculations. The machines are primarily run with software highly tuned for four dimensional lattices with an internal SU(3) gauge group. As such, they have effectively been serving as special purpose machines for a single problem. An open question is whether this architecture is sufficiently flexible for more general tasks. With this in mind, as well as with a personal desire to explore the machine, I developed a simple message passing scheme. My goal is a small number of generic functions for manipulation of a large data set spread over the entire machine. For a machine to be “general purpose”, two prerequisites must be met. First is a compiler for a higher level language. This is provided by the optimizing Tartan C/C++ compiler from Texas Instruments. Second is an efficient communication scheme between the individual nodes. The rich software environment of the RIKEN/BNL/Columbia collaboration provides this for the primary application of the machine. In this mode the machine, while capable of MIMD operation, runs in a SIMD manner. A high degree of tuning achieves excellent performance, up to 30% of the theoretical peak speed of the machine. In contrast, the aim of the project described here is a highly flexible communication package for rapid prototyping of a variety of problems. In the process, some efficiency loss is expected. I hide the basic geometry of the machine from the top level, and applications are developed entirely in a higher level language. The source and more details are available on the web . The goal is similar to but much less ambitious than the MPI project . ## 2 Top level My top level interface is designed with simplicity as the primary goal. The usage begins with the definition of a basic data type, and proceeds with a small number of routines for manipulation of a large assembly of objects of this type. For example, in the case of lattice gauge theory the data type might be $`SU(3)`$ matrices. After the basic type is defined, the communication package is included, making available several routines to manipulate such objects. A call to the function allocate(n) sets up space for n items of this type. The way the allocation is spread over the processors is meant to be fully hidden. In the case of lattice gauge theory one would allocate space for the total number of links. The scheme revolves about three basic functions to manipulate the allocated objects. First, store(i, &item) stores the data item at the i’th allocated location. The complementary function fetch(i, &item) recovers the item. Any processor can store or fetch any item, and need not know on which processor it is stored. After stacking up a number of stores or fetches, all processors call a synchronizing function worksync(). This allows the communication to proceed, with the data being passed until all pending stores and fetches are completed. While multiple stores/fetches can occur simultaneously, there is no guarantee of the order in which events are completed. When worksync() returns, the machine is synchronized. For efficient loops over the variables, it is useful to know what data is stored on the current node. This is accomplished with the boolean function onnode(i), which returns true if item i is local. 
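As an illustration of this interface, the fragment below sketches how a user program might look. The function names (allocate, store, fetch, worksync, onnode) are those described above; the data type, the include-file name, and the surrounding scaffolding are invented for the example and are not taken from the actual package.

```cpp
// The basic data type is defined first; the communication package,
// included afterwards, then provides allocate/store/fetch/worksync/onnode
// for objects of this type.  A small complex matrix stands in for the
// SU(3) matrices of a real lattice gauge application.
#include <complex>
struct Item { std::complex<double> m[2][2]; };

#include "com.C"   // hypothetical name for the communication package

void example(int nlinks) {
  allocate(nlinks);                  // space for nlinks items, machine-wide

  Item unit = {};                    // initialize every item
  unit.m[0][0] = unit.m[1][1] = 1.0;
  for (int i = 0; i < nlinks; ++i) store(i, &unit);
  worksync();                        // drain all pending stores

  // Each node sweeps only its local items, fetching one (possibly
  // remote) neighbor per item; all fetches complete by the worksync().
  Item nbr;
  for (int i = 0; i < nlinks; ++i)
    if (onnode(i)) fetch((i + 1) % nlinks, &nbr);
  worksync();
}
```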
In addition to the basic interface, there are several conveniences available. A variant of store(), add(i, &item) adds the new item to whatever is already stored in location i. This improves efficiency by eliminating the need to fetch the old stored value, which could be on a distant processor. A variant of fetch() obtains multiple stored items in parallel. A few utility functions are included, such as global sums and broadcasts. A function cmalloc() attempts to malloc space in the fast memory on the processor chip. These and similar functions will presumably eventually be built into the machine operating system. To test these routines, I implemented a “fast Fourier transform”, a pure gauge code, and a Grassmann integration routine involving manipulation of large Fock spaces via hash table techniques. The FFT code works by recursively subdividing the lattice, giving each half to half the remaining processors. Once a sub-lattice is assigned to only one processor, the procedure is a standard FFT. After assigning the various tasks, the results are combined, which involves heavy communication. The dominance of communication makes the overall process discouragingly slow compared to running on a workstation. My pure gauge code is more satisfying, running at about 2/3 the speed of the equivalent code of the RIKEN/BNL/Columbia collaboration. However, it is extremely flexible, allowing an arbitrary number of space time dimensions, each of arbitrary even size. The group is an arbitrary $`SU(N)`$. Much of the communication speed is due to the ability to fetch several neighbors at once using the multiple fetch function. The Grassmann integration implementation works particularly well. The algorithm is based on Ref. , and involves a large distributed hash table, spread over all the processors. Each processor handles a portion of this table, sending stores non-locally to randomly chosen other processors. The efficiency is primarily due to the parallel nature of the communication, and the fact that an item in the process of being stored is not needed for immediate computation. The primary limitation of the algorithm is the exponentially large amount of memory required, which quickly exhausts the limited amount on the current machine. The distributed hash table uses simple extensions of the communications class. Instead of a single data type, two are used. One, hindex, is a type used to index the other, an hvalue. Once these classes are defined, including the file hashcom.C brings in the communication routines. Storing an item uses hstore(hindex, hvalue), while fetching involves the complementary hvalue hfetch(hindex). The function worksync() is used as before for the communication to proceed. The storage is random over the entire machine. To manipulate the table, each processor handles its local part. On storing, the final location is unknown, but it is not needed by the algorithm. Parallel loops over the table are fast since all operations are carried out locally and the non-local storage proceeds in parallel. A processor need only occasionally check for active messages to keep the communication running. A short sketch of this hash interface is given below.
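The fragment is a minimal sketch of the two-type hash interface just described; the member fields and the include pattern are illustrative assumptions, while hstore, hfetch, and worksync are the calls named in the text.

```cpp
// Define the index and value types first; including hashcom.C afterwards
// provides hstore/hfetch (and worksync) for these types.
struct hindex { unsigned long fock_state; };  // illustrative key: a Fock-state bitmask
struct hvalue { double amplitude; };          // illustrative payload

#include "hashcom.C"

void accumulate(unsigned long state, double amp) {
  // Non-local store: the item lands on a randomly chosen processor
  // somewhere in the machine; the caller never learns, or needs, where.
  hstore(hindex{state}, hvalue{amp});
}

double lookup(unsigned long state) {
  // hfetch returns the stored value; a worksync() afterwards resynchronizes
  // the machine (ordering of concurrent stores/fetches is not guaranteed).
  hvalue v = hfetch(hindex{state});
  worksync();
  return v.amplitude;
}
```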
## 3 Middle level My goal was to keep the details of the communication as hidden from the top level as possible. The data is passed around in messages, the basic message structure containing the identities of the source and destination processors, one data element, a verb to indicate what to do with the data (store, fetch, acknowledge, error, etc.), and an extra word for various purposes, such as to carry the index of the element. The machine architecture is a four dimensional toroid with nearest neighbor serial connections. While these are in principle bi-directional links, for simplicity I always send messages in one of the positive directions. Each processor listens for incoming messages on the negative wires. Thus any particular serial connection is used in only one direction. The advantage is simplicity, while the disadvantage is that the messages may not follow the shortest path to their destination. A store-and-acknowledge combination between different processors thus circles the machine. Given a message, a lookup table determines which wires lead closer to the destination. The first free one is used. If none are free, the message enters a queue. In this scheme all wires can be simultaneously active. The route from one processor to another is not predetermined, but progresses according to the currently available wires. At this level, several internal functions appear. First, sendmessage() selects and activates a wire to start a message traveling. If no wire is available, the message is put in a FIFO queue. The complementary function readmessage() checks the incoming wires for a completed transmission and forwards messages not meant for the current processor. A function handlemessages() calls readmessage(), performs any requested actions, sends acknowledgements, and checks the message queue. The function worksync() works by repeatedly calling handlemessages() until all unfinished stores and fetches are completed. ## 4 Bottom level The basic communication works through the custom serial communication unit (SCU) of the individual nodes. Program initialization fixes the SCU registers for the message size and sets the receive address registers to buffers in RAM. To send a message, a write to a send address register starts the transfer. Monitoring progress uses a poll of the SCU status register. This is all implemented in C/C++, without any assembly language. The function worksync() uses two (of three) global interrupt lines available on the machine. One flags unfinished stores/fetches. When this line is set by all processors, a second interrupt synchronizes the machine. I also currently use the interrupt lines for global ANDs, broadcasts and sums, but these will presumably eventually be replaced by operating system functions. ## 5 Summary I have described a simple interface to the QCDSP machines. The goal is rapid prototyping of new ideas in a high level language. I compromise efficiency for flexibility. For most problems I expect a loss of a factor of 2 to 3 in speed. The test examples show varying performance. The FFT performs somewhat disappointingly; here all the complexity is in non-local communication. For a simple lattice gauge algorithm the approach performs nicely, with more flexibility than in the highly tuned production code. Remarkably, for the Fock state manipulations involved in evaluating Grassmann integrals, the performance was excellent up to system sizes where inherent memory limitations appear.
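As a concluding illustration, the sketch below writes out a message record and the wire-selection step in the form described in section 3. The field widths, the queue, and the helper routines are assumptions of the sketch (stubbed here so the fragment compiles), not the package's actual layout.

```cpp
#include <queue>

// One message, as described in section 3: source and destination node ids,
// a verb, one data element, and an extra word (e.g. the item index).
enum Verb { STORE, FETCH, ACKNOWLEDGE, ERROR };
struct Message {
  int source, destination;
  Verb verb;
  double data;        // one data element (stand-in type)
  unsigned extra;     // e.g. index of the element
};

static std::queue<Message> pending;  // FIFO for messages with no free wire

// Stubs for illustration: the real code consults a routing lookup table,
// polls the SCU status register, and writes a send address register.
bool leadsCloser(int /*wire*/, int /*dest*/) { return true; }
bool wireFree(int /*wire*/) { return false; }
void activateWire(int /*wire*/, const Message&) {}

void sendmessage(const Message& m) {
  for (int wire = 0; wire < 4; ++wire)       // positive directions only
    if (leadsCloser(wire, m.destination) && wireFree(wire)) {
      activateWire(wire, m);                 // route fixed only step by step
      return;
    }
  pending.push(m);                           // no free wire: queue it
}
```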
# TOWARDS THE CLASSIFICATION OF CONFORMAL FIELD THEORIES IN ARBITRARY EVEN DIMENSION CERN-TH/99-234, hep-th/9908014 Damiano Anselmi CERN, Theory Group, CH-1211, Geneva 23, Switzerland Abstract I identify the class of even-dimensional conformal field theories that is most similar to two-dimensional conformal field theory. In this class the formula, elaborated recently, for the irreversibility of the renormalization-group flow applies also to massive flows. This implies a prediction for the ratio between the coefficient of the Euler density in the trace anomaly (charge $`a`$) and the stress-tensor two-point function (charge $`c`$). More precisely, the trace anomaly in external gravity is quadratic in the Ricci tensor and the Ricci scalar and contains a unique central charge. I check the prediction in detail in four, six and eight dimensions, and then in arbitrary even dimension. Pacs: 11.25.H; 11.10.Gh; 11.15.Bt; 11.40.Ex; 04.62.+v Four-dimensional conformal field theories have two central charges, $`c`$ and $`a`$, defined by the trace anomaly in a gravitational background. The charge $`c`$ multiplies the conformal invariant $`W_{\mu \nu \rho \sigma }^2`$ (square of the Weyl tensor) and is the coefficient of the two-point function of the stress tensor. The quantity $`a`$ multiplies the Euler density $`\mathrm{G}_4=\epsilon _{\mu \nu \rho \sigma }\epsilon ^{\alpha \beta \gamma \delta }R_{\alpha \beta }^{\mu \nu }R_{\gamma \delta }^{\rho \sigma }`$. A third term, $`\Box R`$, is multiplied by a coefficient $`a^{}`$: $`\mathrm{\Theta }={\displaystyle \frac{1}{(4\pi )^2}}\left[cW^2+{\displaystyle \frac{a}{4}}\mathrm{G}_4-{\displaystyle \frac{2}{3}}a^{}\Box R\right],`$ (1) where $`c=\frac{1}{120}(N_s+6N_f+12N_v)`$, $`a=\frac{1}{360}(N_s+11N_f+62N_v)`$ for free field theories of $`N_{s,f,v}`$ real scalars, Dirac fermions and vectors, respectively. In higher, even dimension $`n`$ the trace anomaly contains more terms, which can however be grouped into the same three classes as in four dimensions. Several terms are exactly invariant under conformal transformations and are not total derivatives. They generalize $`W^2`$. The constants in front of these terms will be denoted collectively by $`c`$. One such central charge, in particular, is related to the stress-tensor two-point function. It multiplies an invariant of the form $`W_{\mu \nu \rho \sigma }\Box ^{n/2-2}W^{\mu \nu \rho \sigma }+𝒪(W^3)`$. The central charge $`a`$ is always unique and multiplies the Euler density $`\mathrm{G}_n`$, which is not conformally invariant, but a non-trivial total derivative. Finally, the constants in front of the trivial total derivatives, which generalize $`\Box R`$, will be collectively denoted by $`a^{}`$. Only in two dimensions does the trace anomaly have a unique term, the Ricci scalar $`R`$. In some sense, we can say that “$`c=a=a^{}`$” there. It is natural to expect that there exists a special class of higher-dimensional conformal field theories that is most similar to two-dimensional conformal field theory. This class will have to be identified by a universal relationship between the central charges $`c`$, $`a`$ and $`a^{}`$. The main purpose of this paper is to identify this class of conformal theories, collecting present knowledge and offering further evidence in favour of the statement. I first use the sum rule of refs. 
for the irreversibility of the renormalization-group flow to derive a quantitative prediction from this idea, namely the ratio between the coefficient $`a_n`$ of the Euler density $`\mathrm{G}_n`$ and the coefficient $`c_n`$ of the invariant $`W_{\mu \nu \rho \sigma }\Box ^{n/2-2}W^{\mu \nu \rho \sigma }+𝒪(W^3)`$ (or, what is the same, the constant in front of the stress-tensor two-point function). Secondly, I argue that the conformal field theories of our special class are also those whose trace anomaly in external gravity is quadratic in the Ricci tensor and Ricci scalar. This property relates unambiguously the central charges $`c`$ to the unique central charge $`a`$ and, in particular, should agree with the ratio $`c_n/a_n`$ found using the irreversibility of the RG flow. I then proceed to check the prediction. This is first done in detail in four, six and eight dimensions and then extended to the general case. The results are also a very non-trivial test of the ideas of refs. about the irreversibility of the RG flow. I recall that in it was shown that in four dimensions there is a “closed limit”, in which the stress-tensor operator product expansion (OPE) closes with a finite number of operators up to the regular terms. The idea of this limit was suggested by a powerful theorem, due to Ferrara, Gatto and Grillo and to Nachtmann , on the spectrum of anomalous dimensions of the higher-spin currents generated by the OPE, which follows from very general principles (unitarity) and is therefore expected to hold in arbitrary dimension. When $`c=a`$ , OPE closure is achieved in a way that is reminiscent of two-dimensional conformal field theory, with the stress tensor and the central extension. Instead, when $`c\ne a`$ the algebraic structure is enlarged and contains spin-1 and spin-0 operators, yet in finite number. Therefore, the subclass of theories we are interested in is identified, in four dimensions, by the equality of $`c`$ and $`a`$ and the closed limit. Secondly, it is well known that $`\mathrm{\Theta }`$ vanishes on Ricci-flat metrics when $`c=a`$ in four dimensions. A closer inspection of (1) shows that actually $`\mathrm{\Theta }`$ is quadratic in the Ricci tensor and the Ricci scalar. We are led to conjecture that the subclass of “$`c=a`$”-theories in arbitrary even dimension are those that have a trace anomaly quadratic in the Ricci tensor and the Ricci scalar. Summarizing, in arbitrary even dimension greater than 2 we can distinguish the following important subclasses of conformal field theories: i) The “closed” theories, when the quantum conformal algebra, i.e. the algebra generated by the singular terms of the stress-tensor OPE, closes with a finite number of operators. They can have $`c=a`$ , but also $`c\ne a`$ . ii) The $`c=a`$-theories, whose trace anomaly is quadratic in the Ricci tensor and the Ricci scalar. They can be either closed or open. iii) The closed $`c=a`$-theories, which exhibit the highest degree of similarity with two-dimensional conformal field theory. While the equality $`c=a`$ is a restriction on the set of conformal field theories, the equality of $`a`$ and $`a^{}`$ is not. In refs. 
the equality $`a=a^{}`$ was studied in arbitrary even dimension $`n`$, leading to the sum rule $`a_n^{\mathrm{U}V}-a_n^{\mathrm{I}R}={\displaystyle \frac{1}{2^{\frac{3n}{2}-1}n\,n!}}{\displaystyle \int \mathrm{d}^nx|x|^n\langle \mathrm{\Theta }(x)\mathrm{\Theta }(0)\rangle },`$ (2) expressing the total renormalization-group (RG) flow of the central charge $`a_n`$, induced by the running of dimensionless couplings. This formula was checked up to and including the fourth-loop order in the most general renormalizable theory in four and six dimensions. No restriction on the central charges $`c`$ and $`a`$ is required here. The charge $`a_n`$ is normalized so that the trace anomaly reads $$\mathrm{\Theta }=a_n\mathrm{G}_n=a_n(-1)^{\frac{n}{2}}\epsilon _{\mu _1\nu _1\mathrm{\cdots }\mu _{\frac{n}{2}}\nu _{\frac{n}{2}}}\epsilon ^{\alpha _1\beta _1\mathrm{\cdots }\alpha _{\frac{n}{2}}\beta _{\frac{n}{2}}}\prod _{i=1}^{\frac{n}{2}}R_{\alpha _i\beta _i}^{\mu _i\nu _i}$$ plus conformal invariants and trivial total derivatives. As was explained in the introduction of , the arguments of do not necessarily apply to flows generated by super-renormalizable couplings and mass terms. (In general, the effect of masses can be included straightforwardly .) The sum rule (2) measures the effect of the dynamical RG scale $`\mu `$ in lowering the amount of massless degrees of freedom of the theory along the RG flow. The basic reason why massive flows behave differently is that in a finite theory Duff’s identification $`a^{}=c`$ is consistent (but not unique), while along a RG flow the only consistent identification is $`a^{}=a`$, as shown in . Divergences are crucial in discriminating between the two cases. A flow induced by divergences cannot, in general, be assimilated to a flow induced by explicit (“classical”) scales. Repeating the arguments of in two dimensions, we would come to the same conclusion as in higher dimensions: that the sum rule (2) works for RG flows and not necessarily for massive ones. The point is, nevertheless, that the two-dimensional version of (2), due to Cardy , is universal; in particular, it does work for massive flows. It is therefore compulsory to understand in what cases the domain of validity of our sum rule (2) is similarly enhanced in higher dimensions. This property identifies the special class of theories we are looking for. The arguments and explicit checks that we now present show that this enhancement takes place in the subclass of theories with $`c=a`$ (classes ii and iii above), because of the higher similarity with the two-dimensional theories. The two relevant terms of the trace anomaly are $$\mathrm{\Theta }=a_n\mathrm{G}_n-\frac{c_n(n-2)\left(\frac{n}{2}\right)!}{4(4\pi )^{\frac{n}{2}}(n-3)(n+1)!}W\Box ^{\frac{n}{2}-2}W+\mathrm{\cdots },$$ where $$c_n=N_s+2^{\frac{n}{2}-1}(n-1)N_f+\frac{n!}{2\left[\left(\frac{n}{2}-1\right)!\right]^2}N_v$$ is the value of the central charge $`c`$ for free fields in arbitrary dimension $`n`$. $`N_v`$ denotes the number of $`\left(n/2-1\right)`$-forms. This calculation is done in ref. , section 9, starting from the stress-tensor two-point function. Massive flows have been considered, among other things, by Cappelli et al. in . An explicit computation for free massive scalar fields and fermions gives $`{\displaystyle \int \mathrm{d}^nx|x|^n\langle \mathrm{\Theta }(x)\mathrm{\Theta }(0)\rangle }={\displaystyle \frac{c_n\left(\frac{n}{2}\right)!}{\pi ^{\frac{n}{2}}(n+1)}}.`$ (3) Repeating the computation for massive vectors, or $`\left(n/2-1\right)`$-forms, is problematic in the UV. 
However, the relative coefficient between the scalar and fermion contributions is sufficient to show that the result is proportional to $`c_n`$ and not $`a_n`$. Our prediction is that in the special $`c=a`$-theories the sum rule (2) should reproduce (3) for massive flows, which means $`c_n=a_n{\displaystyle \frac{2^{\frac{n}{2}-1}(4\pi )^{\frac{n}{2}}n(n+1)!}{\left(\frac{n}{2}\right)!}}.`$ (4) The trace anomaly therefore has the form $`\mathrm{\Theta }=a_n\left(\mathrm{G}_n-{\displaystyle \frac{2^{\frac{n}{2}-3}n(n-2)}{n-3}}W\Box ^{n/2-2}W\right)+\mathrm{\cdots }`$ (5) Formula (4) is the generalized version of the relation $`c=a`$. It is uniquely implied by the requirement that $`\mathrm{\Theta }`$ be quadratic in the Ricci tensor and Ricci curvature. This condition fixes all the central charges of type $`c`$ in terms of $`a_n`$, not only the constant $`c_n`$ in front of the stress-tensor two-point function. These further relationships are not important for our purposes. In four dimensions the combination between the parentheses in (5) is indeed quadratic in the Ricci tensor: $$\frac{\mathrm{G}_4}{4}-W^2=-2R_{\mu \nu }^2+\frac{2}{3}R^2.$$ I stress that this is a non-trivial check of the prediction that formula (2) correctly describes massive flows when $`c=a`$. In higher dimensions the check is less straightforward, owing to the high number of invariants. Using the results of Bonora et al. from (see also ), where the terms occurring in the trace anomaly were classified in six dimensions, we can perform a second non-trivial check of our prediction. The conformal invariants are three: $`I_1`$ $`=`$ $`W_{\mu \nu \rho \sigma }W^{\mu \alpha \beta \sigma }W_{\alpha \beta }^{\nu \rho },I_2=W_{\mu \nu \rho \sigma }W^{\mu \nu \alpha \beta }W_{\alpha \beta }^{\rho \sigma },`$ $`I_3`$ $`=`$ $`W_{\mu \alpha \beta \gamma }\left(\Box \delta _\nu ^\mu +4R_\nu ^\mu -{\displaystyle \frac{6}{5}}R\delta _\nu ^\mu \right)W^{\nu \alpha \beta \gamma },`$ and the general form of the trace anomaly is $$\mathrm{\Theta }=a_6\mathrm{G}_6+\sum _{i=1}^3c^{(i)}I_i+\mathrm{t}.t.d.,$$ where “t.t.d.” means “trivial total derivatives” (as opposed to $`\mathrm{G}_6`$, which is a non-trivial total derivative). Our notation differs from the one of in the signs of $`R_{\mu \nu }`$ and $`R`$. More importantly, the invariant $`I_3`$ differs from the invariant $`M_3`$ of and other references , the latter containing a spurious contribution proportional to $`\mathrm{G}_6`$ (see also , section 3), as well as a linear combination of $`I_1`$ and $`I_2`$. Precisely, we find $$M_3=\frac{5}{12}\mathrm{G}_6+\frac{80}{3}I_1+\frac{40}{3}I_2-5I_3.$$ Finally, our $`I_3`$ differs from the expression of ref. , formula (19), by the addition of t.t.d.’s, which, however, can be consistently omitted for our purposes. In it is pointed out that there exists a simple combination of the four invariants $`\mathrm{G}_6`$ and $`I_{1,2,3}`$, which reads $`𝒥_6`$ $`=`$ $`R_{\mu \nu }\Box R^{\mu \nu }-{\displaystyle \frac{3}{10}}R\Box R-RR_{\mu \nu }R^{\mu \nu }`$ (6) $`-2R_{\mu \nu }R_{\rho \sigma }R^{\mu \rho \sigma \nu }+{\displaystyle \frac{3}{25}}R^3`$ $`=`$ $`{\displaystyle \frac{1}{24}}\mathrm{G}_6-4I_1-I_2+{\displaystyle \frac{1}{3}}I_3+\mathrm{t}.t.d.`$ The BPB (Bonora–Pasti–Bregola) term $`𝒥_6`$ is precisely the combination we are looking for. A closer inspection of this expression shows that it is uniquely fixed by the requirement that it be quadratic in the Ricci tensor and Ricci curvature. 
On the other hand, the requirement that $`𝒥_6`$ just vanishes on Ricci-flat metrics is not sufficient to fix it uniquely; in particular it does not imply the relation “$`c=a`$” that we need. In conclusion, the $`c=a`$-theories have a unique central charge, multiplying the BPB invariant $`𝒥_6`$, $$\mathrm{\Theta }=24a_6𝒥_6,c^{(1)}=-96a_6,c^{(2)}=-24a_6,c^{(3)}=8a_6,$$ so that $`\mathrm{\Theta }`$ is of the predicted form (5): $`\mathrm{\Theta }=a_6(\mathrm{G}_6-8W\Box W)+\mathrm{\cdots }`$ (7) Our prediction is meaningful in arbitrary even dimension and can be checked using the recent work of Henningson and Skenderis , which contains, as I now discuss, an algorithm to generate precisely the invariants $`𝒥_n`$’s that we need. It is easy to verify this in four and six dimensions. In six dimensions the result can be read from formula (30) of , taking into account that in the BPB invariant $`M_3`$ is used. A more convenient decomposition of the anomaly into Euler density and conformal invariants is the last equality of (6), leading directly to (7). It is therefore natural to expect that the algorithm of answers our question and constructs the invariants $`𝒥_n`$’s. I now check agreement with formula (5) in arbitrary even dimension. I begin with $`n=8`$. The relevant terms of $`𝒥_8`$ are $$𝒥_8=R_{\mu \nu }\Box ^2R^{\mu \nu }-\frac{2}{7}R\Box ^2R+𝒪(R^3)=\alpha _8\mathrm{G}_8+\mathrm{c}.i.+\mathrm{t}.t.d.,$$ $`\alpha _8`$ being the unknown coefficient and “$`\mathrm{c}.i.`$” denoting conformal invariants. On a sphere, in particular, all terms but $`\alpha _8\mathrm{G}_8`$ vanish, so that $`\alpha _8`$ can be found by evaluating the integral of $`𝒥_8`$: $$\int _{S^8}\sqrt{g}𝒥_8\mathrm{d}^8x=768\alpha _8(4\pi )^4.$$ Using $$W\Box ^2W=\frac{10}{3}\left(R_{\mu \nu }\Box ^2R^{\mu \nu }-\frac{2}{7}R\Box ^2R\right)+𝒪(R^3)+\mathrm{t}.t.d.,$$ our prediction (5) is $`\alpha _8=-1/64.`$ Indeed, applying the method of on a conformally-flat metric with $`R_{\mu \nu }=\mathrm{\Lambda }g_{\mu \nu },`$ we get, after a non-trivial amount of work, $$𝒥_8=\alpha _8\mathrm{G}_8=-\frac{1440}{343}\mathrm{\Lambda }^4,$$ which gives the desired value of $`\alpha _8`$. The check can be generalized to arbitrary $`n`$. The invariant $`𝒥_n`$ is, up to an overall factor $`\beta _n`$, the coefficient of $`\rho ^{n/2}`$ in the expansion of $`\sqrt{detG},`$ where $$G_{\mu \nu }=g_{\mu \nu }+\sum _{k=1}^{n/2}\rho ^kg_{\mu \nu }^{(k)}+𝒪(\rho ^{n/2}\mathrm{ln}\rho ,\rho ^{n/2+1},\mathrm{\cdots })$$ and the $`\rho `$-dependence is fixed by the equations $`\mathrm{t}r[G^{-1}G^{\prime \prime }]-{\displaystyle \frac{1}{2}}\mathrm{t}r[G^{-1}G^{}G^{-1}G^{}]`$ $`=`$ $`0,`$ (8) $`2\rho (G^{\prime \prime }-G^{}G^{-1}G^{})`$ $`=`$ $`(G-\rho G^{})\mathrm{t}r[G^{-1}G^{}]`$ $`+\mathrm{R}ic(G)+(n-2)G^{}.`$ Precisely, $`{\displaystyle \frac{1}{\left(\frac{n}{2}\right)!}}{\displaystyle \frac{\mathrm{d}^{\frac{n}{2}}}{\mathrm{d}\rho ^{\frac{n}{2}}}}{\displaystyle \frac{\sqrt{detG}}{\sqrt{detg}}}|_{\rho =0}`$ $`=`$ $`\beta _n𝒥_n`$ $`\equiv \beta _n(R_{\mu \nu }\Box ^{\frac{n}{2}-2}R^{\mu \nu }+\alpha _n\mathrm{G}_n+\mathrm{r}est).`$ First, we consider metrics with $`R_{\mu \nu }=\mathrm{\Lambda }g_{\mu \nu }`$. 
The form of the solution and the first equation of (8) read $$G_{\mu \nu }=u(\rho \mathrm{\Lambda })g_{\mu \nu },\frac{u^{\prime \prime }}{u}=\frac{1}{2}\left(\frac{u^{}}{u}\right)^2.$$ The second equation of (8) is used to fix the integration constants, with the result $$u(\rho \mathrm{\Lambda })=\left(1-\frac{\rho \mathrm{\Lambda }}{4(n-1)}\right)^2,\beta _n𝒥_n=\frac{(-1)^{\frac{n}{2}}n!\mathrm{\Lambda }^{\frac{n}{2}}}{2^n(n-1)^{\frac{n}{2}}\left[\left(\frac{n}{2}\right)!\right]^2}.$$ Then, we fix the normalization $`\beta _n`$ by looking for the term $`R_{\mu \nu }\Box ^{\frac{n}{2}-2}R^{\mu \nu }`$ (we can set the Ricci curvature $`R`$ to zero for simplicity). We write $$G_{\mu \nu }=g_{\mu \nu }+\frac{1}{\Box }v(\rho \Box )R_{\mu \nu }+R_{\mu \alpha }\frac{1}{\Box ^2}y(\rho \Box )R_\nu ^\alpha +O(R^3),$$ with $`v(0)=y(0)=y^{}(0)=0`$. We have $$\beta _n=\frac{1}{2\left(\frac{n}{2}\right)!}\frac{\mathrm{d}^{\frac{n}{2}}x}{\mathrm{d}t^{\frac{n}{2}}}|_{t=0}$$ where $`t=\rho \Box `$ and $`x=y-v^2/2`$. Integrating $`𝒥_n`$ over a sphere, we can convert our prediction (5) into a prediction for $`\beta _n`$, or $`{\displaystyle \frac{\mathrm{d}^{\frac{n}{2}}x}{\mathrm{d}t^{\frac{n}{2}}}}|_{t=0}=-{\displaystyle \frac{1}{2^{n-1}\mathrm{\Gamma }\left(\frac{n}{2}\right)}}.`$ (9) Equations (8) relate $`y`$, and therefore $`x`$, to $`v`$ and imply that $`v`$ is a Bessel function of the second type: $`x^{\prime \prime }=-{\displaystyle \frac{(v^{})^2}{2}},2tv^{\prime \prime }-1+{\displaystyle \frac{v}{2}}-(n-2)v^{}=0.`$ (10) $`\beta _n`$ is a coefficient in the series expansion of the square of a Bessel function of the second type, and is not usually found in the mathematical tables. Solving (10) recursively with the help of a calculator, we have checked agreement between (9) and (10) up to dimension $`1000`$. Our picture and the quantitative agreement with prediction (5) explain, among other things, the physical meaning of the construction of ref. . Furthermore, the mathematical properties of the invariant $`𝒥_n`$, and therefore the identification of $`c`$ and $`a`$ (in the subclasses of theories ii and iii where it applies), are a nice counterpart of the notion of extended (pondered) Euler density introduced in , which explained the identification $`a=a^{}`$. The results presented in this paper are a further check of the ideas of and of the picture offered there. These are, we believe, the first steps towards the classification of all conformal field theories. The set of higher-dimensional quantum field theories, conformal or not, is not rich in physical models. Yet, one can consider higher-derivative theories, which, despite the issues about unitarity (see for example ), are useful toy-models for our purposes. Here higher-dimensional higher-derivative theories are meant as a convenient laboratory where the results of the present paper might be applied. I thank A. Cappelli for reviving my interest in massive flows, the organizers of the 4<sup>th</sup> Bologna Workshop on CFT and integrable models, D.Z. Freedman and N. Warner for stimulating conversations on the four-dimensional problem, M. Porrati for drawing my attention to the six-dimensional results of ref. , and finally L. Girardello and A. Zaffaroni.
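As a footnote, the recursive check of (9) against (10) described above is easy to reproduce. The sketch below expands $`v(t)=\mathrm{\Sigma }_jv_jt^j`$ with $`v_0=0`$; the $`t^0`$ component of (10) gives $`v_1=-1/(n-2)`$ and the $`t^j`$ component gives $`v_{j+1}=-v_j/[2(j+1)(2j-n+2)]`$, after which $`x^{\prime \prime }=-(v^{})^2/2`$ fixes the $`n/2`$-th derivative of $`x`$ at $`t=0`$. It uses ordinary double precision rather than the exact rational arithmetic needed to push to dimension 1000, so only moderate $`n`$ should be trusted; the series coefficients are read off from eqs. (9)–(10) as written above.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
  for (int n = 4; n <= 40; n += 2) {
    const int K = n / 2;
    std::vector<double> v(K, 0.0);      // v[j] = coefficient of t^j, v[0] = 0
    v[1] = -1.0 / (n - 2);              // from the t^0 component of eq. (10)
    for (int j = 1; j + 1 <= K - 1; ++j)
      v[j + 1] = -v[j] / (2.0 * (j + 1) * (2 * j - n + 2));

    // c = coefficient of t^{K-2} in (v')^2 = sum_{i+j=K} (i v_i)(j v_j).
    double c = 0.0;
    for (int i = 1; i <= K - 1; ++i) c += i * v[i] * (K - i) * v[K - i];

    // x'' = -(v')^2/2  =>  K(K-1) x_K = -c/2, and d^K x/dt^K|_0 = K! x_K.
    double xK  = -c / (2.0 * K * (K - 1));
    double lhs = std::tgamma(K + 1) * xK;
    double rhs = -1.0 / (std::pow(2.0, n - 1) * std::tgamma(K));  // eq. (9)
    std::printf("n=%2d  lhs=%+.6e  rhs=%+.6e\n", n, lhs, rhs);
  }
}
```

For $`n=4`$ this gives $`-1/8`$ on both sides, and for $`n=6`$ it gives $`-1/64`$, in agreement with (9).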
# Extraordinary Baryon Fluctuations and the QCD Tricritical Point \[ ## Abstract The dynamic separation into phases of high and low baryon density in a heavy ion collision can enhance fluctuations of the net rapidity density of baryons compared to model expectations. We show how these fluctuations arise and how they can survive through freezeout. \] QCD can exhibit a first order phase transition at high temperature and baryon density, culminating in a tricritical point . Specifically, below the tricritical point, a phase coexistence region separates distinct phases of QCD matter at different baryon densities, as shown in fig. 1. Stephanov, Rajagopal and Shuryak have pointed out that critical fluctuations of $`E_T`$ and similar meson measurements can lead to striking signals at the tricritical point in relativistic heavy ion collisions . We suggest that measurements of fluctuations of the net baryon number in nuclear collisions can help establish the first order coexistence region and, ultimately, the tricritical point. We characterize baryon fluctuations by the variance $`𝒱_B\equiv \langle N_B^2\rangle -\langle N_B\rangle ^2`$, where $`N_B=N-\overline{N}`$ is the net baryon number in one unit of rapidity, obtained from the baryon $`N`$ and antibaryon $`\overline{N}`$ distributions; the average is over events. Ordinarily, net baryon fluctuations in thermal and participant nucleon models of heavy ion collisions satisfy $$𝒱_B^0=TV\partial \rho _B/\partial \mu _B=\langle N\rangle +\langle \overline{N}\rangle ,$$ (1) where the second equality holds for an ideal gas . In contrast, we argue below that enhanced fluctuations occur if the expansion of the system quenches the matter from an initial high density state into the phase coexistence region. At the tricritical point, these fluctuations diverge because $`V\partial \rho _B/\partial \mu _B=-\partial ^2\mathrm{\Omega }/\partial \mu _B^2\to \mathrm{\infty }`$ in (1), where $`\mathrm{\Omega }`$ is the free energy. Baryon fluctuation measurements add new leverage to a search for the first order region, complementing information from pion and kaon interferometry, intermittency and wavelet analyses . The latter measurements probe the spatial structure introduced by phase separation and droplet formation. By comparison, baryon fluctuations are weakly dependent on the morphology of the mixed phase, because they do not rely on distinct droplets escaping a rather dense system. In this paper we explore the onset of baryon fluctuations and the possibility of their dissipation in nuclear collisions. That an order parameter such as the baryon density should undergo extraordinary fluctuations during a phase transition comes as no surprise – critical opalescence results from an analogous divergence of density fluctuations. Furthermore, less extreme but nevertheless measurable density fluctuations are familiar in condensed matter systems that are rapidly quenched into a phase coexistence region . Perhaps more surprising is the possibility that observable baryon fluctuations can survive the subsequent evolution of the system. As motivation, we start by describing how phase separation can produce observable fluctuations. We then use a spinodal decomposition model to illustrate how a highly supercooled system can produce large fluctuations. Finally, we ask whether diffusion can dissipate these fluctuations before they are detected. Generally, there are two ways the dynamics can obscure large net baryon fluctuations in a subregion of the system. First, particles can diffuse throughout the fluid, diluting any “hot spots” and their consequent fluctuations. 
Second, fluid flow can carry the particles away to similar effect. We focus on the effect of diffusion because transport theory estimates indicate that the relevant diffusion coefficient can be large . In principle, chemical reactions introduce dissipation by annihilating and creating baryon-antibaryon pairs. However, these reactions cannot affect the net baryon number, which is conserved, although they do affect the individual baryon and antibaryon fluctuations . Figure 1 shows the expected QCD phase diagram at high temperature $`T`$ and baryon density $`\rho _B`$. A phase coexistence region lies below the tricritical point. Inside this region, uniform matter must eventually break up into distinct domains containing the high and low density phases respectively. Within the coexistence region is the spinodal region (the shaded area). There, uniform matter is mechanically unstable, so that small fluctuations can rapidly generate bubbles throughout the fluid. Matter is metastable between the spinodal and coexistence boundaries, so that fluctuations must overcome an energy barrier to nucleate bubbles. Phase separation by spinodal decomposition and bubble nucleation have been discussed in the context of heavy ion collisions , albeit with different phase diagrams in mind. Ideally, a heavy ion collision will produce extraordinary fluctuations if experimenters can adjust the beam energy and the ion combination to produce initial baryon densities and temperatures within the spinodal region or, optimally, at the tricritical point. Alternatively, fluctuations can arise if the expansion of the heavy ion system rapidly quenches a high density system deeply into the spinodal region. The dashed curves in fig. 1 show one such quenching trajectory (a) and one trajectory that only reaches the nucleation region (b). Either process may produce phase separation far from equilibrium . In spinodal decomposition, runaway density fluctuations rapidly contort and compress the high density fluid into filamentary regions. These runaway modes appear because the fluid is dynamically unstable, with $`\partial \rho _B/\partial \mu _B<0`$ inside the spinodal region . These modes grow exponentially until the density outside the regions reaches the equilibrium density for that temperature, $`\rho _h`$, at the low-density boundary of the phase coexistence curve. After this burst of nonequilibrium growth, phase separation in an ion collision can proceed smoothly as the system expands and rarefies. Nucleation may proceed more uniformly, with perhaps one bubble growing smoothly as the system expands. However, there is little distinction between spinodal decomposition and nucleation in a highly supercooled system . To understand why these baryon density fluctuations can exceed our baseline estimate (1), suppose that by a proper time $`\tau _Q`$ the phase transition has created a relatively stable initial fraction $`1-f`$ of the low density bubbles within the high density phase. The net baryon density is then $`\rho _B=f\rho _q+(1-f)\rho _h`$, where $`\rho _q`$ and $`\rho _h`$ are the equilibrium densities of the respective phases. This density corresponds to a net rapidity density of baryons $`N_B\simeq 𝒜\rho _B\tau _Q`$, where $`𝒜`$ is the transverse area of the collision volume. We write $$N_B\simeq fN_q+(1-f)N_h,$$ (2) where $`N_{q,h}\equiv \rho _{q,h}𝒜\tau _Q`$. The fluctuations of $`N_B`$ are enhanced relative to a uniform system because the distribution of densities within each event is bimodal, with peaks at $`\rho _q`$ and $`\rho _h`$. 
The variance is therefore $$𝒱_B\simeq 𝒱_B^0+f(1-f)(N_q-N_h)^2,$$ (3) where $`𝒱_B^0=f𝒱_q+(1-f)𝒱_h`$ is the weighted average of the variance of each phase. If we take each component to be a nearly ideal gas, then $`𝒱_{q,h}=N_{q,h}+\overline{N}_{q,h}`$, where $`N`$ and $`\overline{N}`$ are the rapidity densities of baryons and antibaryons . The quantity $`𝒱_B^0`$ then reduces to (1), precisely the Poissonian fluctuations we would expect if the system were uniform. The total variance (3) exceeds this value by an amount proportional to the square of the density contrast of the two phases. A nonequilibrium model of the quench would be needed to compute $`f`$. Observe that the two-phase effect, which vanishes with the density contrast, gives way to the critical divergence of (1) near the tricritical point. To estimate the effect of flow on baryon evolution, we write the net baryon current conservation law: $$\partial \rho _B/\partial \tau +\partial _\mu j_B^\mu =0,$$ (4) where $`\partial /\partial \tau \equiv u^\mu \partial _\mu `$ for a fluid of four velocity $`u^\mu `$. The flow of the system changes the baryon density through $`u^\mu `$. If we neglect dissipation for the moment, then the net baryon number flows along with the fluid as a whole, i.e. $`j_B^\mu =\rho _Bu^\mu `$. For the Bjorken scaling flow, $`u^\mu =u_\mathrm{s}^\mu \equiv \tau ^{-1}(t,0,0,z)`$ and $`\tau \equiv \sqrt{t^2-z^2}`$. As is well known, the density satisfies $`\rho _B(\tau )\tau =\rho _B(\tau _Q)\tau _Q`$ for scaling flow. The rapidity density is then $`\tau `$ independent. The variance of $`N_B`$ for an ensemble of events $`i`$ is $`𝒱_B=\mathrm{\Sigma }_i(N_B^i-\langle N_B\rangle )^2`$. Differentiating, we find the scaling results $$dN_B/d\tau \equiv 0\mathrm{and}d𝒱_B/d\tau \equiv 0.$$ (5) Even though the expansion after $`\tau _Q`$ will cause $`f(\tau )`$ to decrease, both $`N_B`$ and $`𝒱_B`$ are fixed at the initial values (2, 3). It follows that (3) can represent the observed fluctuations, provided that the flow satisfies scaling. Baryon diffusion can play an important role in both the onset and the propagation of fluctuations. We follow and write $`j_B^\mu =\rho _Bu^\mu +j_{\mathrm{diss}}^\mu `$, where we define $`u^\mu `$ so that the total momentum density of the fluid vanishes in the local rest frame. The diffusion of baryons relative to the fluid center of momentum gives rise to: $$j_{\mathrm{diss}}^\mu =-MT\nabla ^\mu \left(\mu _B/T\right),$$ (6) where $`M=D\partial \rho _B/\partial \mu _B`$ is loosely termed the mobility, $`D`$ is the diffusion coefficient and $`\nabla ^\mu =(g^{\mu \nu }+u^\mu u^\nu )\partial _\nu `$. To illustrate the onset of spinodal decomposition, we modify (6) to describe the strong inhomogeneities that spontaneously arise in the fluid. We follow the classic linear stability analysis of Cahn , duplicating the salient details here a) to motivate QCD calculations of the microscopic inputs and b) to highlight the differences and similarities with DCC studies . We study the evolution of a small perturbation $`\stackrel{~}{\rho }_B(k,t)=\stackrel{~}{\rho }_B(t)\mathrm{exp}(i\vec{k}\cdot \vec{r})`$ in the spinodal region, where $`\partial \rho _B/\partial \mu _B`$ is negative. Anticipating that the fluid will spontaneously become inhomogeneous, we assume that the free energy of ref. can be written as an effective Ginzburg-Landau functional of the baryon density, $$F\{\rho _B(\vec{r})\}=\int \mathrm{d}^3r\{f[\rho _B(\vec{r})]+\xi (\nabla \rho _B)^2/2\},$$ (7) in the local rest frame, where $`f=\mathrm{\Omega }/V+\mu _B\rho _B`$ is the Helmholtz free energy density for a uniform system. 
Replacing $`\mu _B=(\partial F/\partial N_B)_{T,V}`$ in (6) with the functional derivative with respect to $`\rho _B`$, we find that $$\frac{\partial }{\partial t}\stackrel{~}{\rho }_B(t)=-Mk^2\left(\frac{\partial \mu _B}{\partial \rho _B}+\frac{\xi }{M}k^2\right)\stackrel{~}{\rho }_B(t).$$ (8) For $`\partial \rho _B/\partial \mu _B<0`$, a perturbation of $`k<k_\mathrm{c}=[|\partial \rho _B/\partial \mu _B|/M\xi ]^{1/2}`$ grows exponentially. The fastest mode at $`k_{\mathrm{sp}}=k_\mathrm{c}/\sqrt{2}`$ grows at the shortest time scale $$\tau _{\mathrm{sp}}=\frac{2|\partial \rho _B/\partial \mu _B|}{Mk_{\mathrm{sp}}^2},\mathrm{where}k_{\mathrm{sp}}=\frac{k_\mathrm{c}}{\sqrt{2}}.$$ (9) This mode dominates the early stage of the decomposition. Outside the spinodal region where $`\partial \rho _B/\partial \mu _B>0`$, perturbations decay exponentially at rates that differ from normal diffusion only for wavelengths smaller than $`2\pi /k_\mathrm{c}`$ (roughly the equilibrium droplet size in a vacuum). At these early times, the variance averaged over the thermal fluctuations within an event grows as $`\langle \stackrel{~}{\rho }_B(r,t)^2\rangle \propto \mathrm{e}^{2t/\tau _{\mathrm{sp}}}`$. This growth exceeds that of the density, $`\mathrm{e}^{t/\tau _{\mathrm{sp}}}`$, allowing large fluctuations to build. A quantitative treatment of spinodal decomposition using modern techniques awaits the introduction of kinetic terms to the coarse-grained free energy density — neither $`M`$ nor $`\xi `$ is known. However, we can get a rough idea of the time scales needed for the onset of instabilities by identifying the inverse momentum $`k_{\mathrm{sp}}`$ with the correlation length, $`m_\sigma ^{-1}\simeq 1`$ fm, as in . Well within the spinodal region, we surmise that $`|M(\partial \rho _B/\partial \mu _B)^{-1}|`$ is of the order of the diffusion coefficient in a stable, perturbative plasma, $`1`$–$`3`$ fm . Therefore $`\tau _{\mathrm{sp}}\simeq 1`$ fm or smaller, suggesting that spinodal decomposition can erupt violently when a quench is achieved. One faces a similar situation in disoriented chiral condensate formation . There, one studies nonequilibrium fluctuations of the chiral order parameter at very small baryon density. Equilibrium fluctuations of that order parameter diverge along with the baryon density fluctuations at the tricritical point . The essential distinction between our first order transition and those phenomena is that, here, the low density phase can coexist with the high density phase in thermodynamically stable bubbles. This is impossible in a second order transition. The wavelength scale $`2\pi /k_{\mathrm{sp}}`$ of the fastest growing mode (9) is a nonequilibrium manifestation of the equilibrium size of bubbles. The second order chiral transition is also heralded by unstable modes, but the fastest mode has $`k=0`$, reflecting the critical divergence of the correlation length. Furthermore, the corresponding time scale diverges in a “critical slowing down” as the correlation length $`m_\sigma ^{-1}\to \mathrm{\infty }`$, see, e.g., eq. (2) in . Analogous behavior takes place at the tricritical point, where $`M^{-1}\partial \rho _B/\partial \mu _B`$ and (9) diverge. Optimistic that fluctuations can be produced by a quench, we turn to the dissipation of these fluctuations due to diffusion. For a stable liquid, we use (4,6) to obtain the diffusion equation, $$\partial \rho _B/\partial \tau +\rho _B\partial _\mu u^\mu =D\nabla ^2\rho _B,$$ (10) where we take $`D`$ to be constant and neglect thermodiffusion. Observe that diffusion cannot affect scaling flow because the diffusive term vanishes if $`\rho _B`$ is a function of $`\tau `$ alone. In that case (5) would still apply. 
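Plugging in the rough numbers quoted above reproduces the advertised time scale. In the sketch below the combination $`|M(\partial \rho _B/\partial \mu _B)^{-1}|`$ is treated as an effective diffusion constant $`D_{\mathrm{eff}}`$, so that eq. (9) becomes $`\tau _{\mathrm{sp}}=2/(D_{\mathrm{eff}}k_{\mathrm{sp}}^2)`$; the inputs are the estimates from the text, and the identification is an assumption of the sketch.

```cpp
#include <cstdio>

// Spinodal time-scale estimate, eq. (9): tau_sp = 2 |drho/dmu| / (M k_sp^2).
// Writing D_eff = |M (drho/dmu)^{-1}| (estimated at ~ 1-3 fm in the text),
// this is tau_sp = 2 / (D_eff k_sp^2), with k_sp ~ m_sigma ~ 1 fm^{-1}.
int main() {
  const double k_sp = 1.0;                        // fm^{-1}, ~ m_sigma
  for (double D_eff = 1.0; D_eff <= 3.0; D_eff += 1.0) {
    double tau_sp = 2.0 / (D_eff * k_sp * k_sp);  // fm
    std::printf("D_eff=%.0f fm  ->  tau_sp=%.2f fm\n", D_eff, tau_sp);
  }
}
```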
Let us then face a conservative scenario in which phase separation only occurs as a perturbation $`\stackrel{~}{\rho }(\tau ,\eta )`$ of the scaling hadronic system (here $`\mathrm{tanh}\eta =z/t`$ is the spatial rapidity). To be concrete, we take $`N_B`$ and $`𝒱_B`$ to be perturbed at time $`\tau _Q`$ by $`\stackrel{~}{N}_B\equiv f(N_q-N_h)`$ and $`\stackrel{~}{𝒱}_B\equiv f(1-f)(N_q-N_h)^2`$ for $`f=f_0\mathrm{exp}\{-\eta ^2/2\sigma ^2\}`$. Note that $`\sigma \simeq 0.88`$ for a spherically symmetric perturbation. We describe the evolution for $`\tau >\tau _Q`$ using (10) for $`\rho _B\partial _\mu u^\mu \simeq \rho _B/\tau `$. Writing the rapidity density at fixed $`\tau `$ as $`N_B(\tau ,\eta )=\rho _B(\tau ,\eta )𝒜\tau `$, we obtain $$\tau ^2\partial \stackrel{~}{N}_B/\partial \tau =D\partial ^2\stackrel{~}{N}_B/\partial \eta ^2.$$ (11) The unperturbed rapidity density $`N_B^0`$ is constant. It follows that $`\stackrel{~}{N}_B=\stackrel{~}{N}_B(\tau _Q)\varphi _\sigma (\eta )`$, where $`\stackrel{~}{N}_B(\tau _Q)=f_0(N_q-N_h)`$ and $$\varphi _\sigma =\frac{\mathrm{exp}\{-\eta ^2/2(\sigma ^2+2Ds)\}}{(1+2Ds\sigma ^{-2})^{1/2}},s=\tau _Q^{-1}-\tau _F^{-1},$$ (12) and $`\tau _F`$ is the freezeout time. Longitudinal flow limits the degree to which the Gaussian perturbation can be dispersed. For $`\tau _F\gg \tau _Q`$, we see that the rapidity density near $`y\simeq \eta \simeq 0`$ is $`N_B\simeq N_B^0+\stackrel{~}{N}_B(\tau _Q)\{1+2D/(\tau _Q\sigma ^2)\}^{-1/2}`$ since $`s\to \tau _Q^{-1}`$. To study the contribution of the perturbation to the variance for $`\tau \ge \tau _Q`$, we write the event averaged $`𝒱_B\simeq 𝒱_B^0+2N_B^0\stackrel{~}{N}_B`$ . We then differentiate $`𝒱_B`$ with respect to $`\tau `$ and use (11) to obtain: $$\tau ^2\partial \stackrel{~}{𝒱}_B/\partial \tau =D\partial ^2\stackrel{~}{𝒱}_B/\partial \eta ^2,$$ (13) to linear order in $`\stackrel{~}{N}_B/N_B`$. We find $`\stackrel{~}{𝒱}_B=(N_q-N_h)^2\{f_0\varphi _\sigma (\eta )-f_0^2\varphi _{\sigma /\sqrt{2}}(\eta )\}`$, for $`\varphi _\sigma (\eta )`$ given by (12). The diffusion coefficient for baryon current is unknown for the mixed phase described in . However, we can get a rough estimate for $`D`$ in the pure phases from kinetic theory, which implies that $`D\simeq \tau _{\mathrm{diff}}v_{\mathrm{th}}^2/3`$, where $`v_{\mathrm{th}}`$ is the thermal velocity of baryons and $`\tau _{\mathrm{diff}}`$ is the relaxation time for diffusion. For nucleons diffusing through a hadron gas, $`\tau _{\mathrm{diff}}\simeq 3`$–$`5`$ fm and $`D\simeq 6`$ fm at a temperature of 150 MeV. Flavor diffusion through a perturbative quark gluon plasma yields $`D\simeq 1`$–$`3`$ fm . We estimate diffusion in the mixed phase using the larger hadronic value, $`D\simeq 6`$ fm. In fact, arguments in ref. suggest similarities between hadrons and droplets of mixed phase. At $`\eta \simeq y\simeq 0`$, we find the rather small decrease $`\stackrel{~}{𝒱}_B/\stackrel{~}{𝒱}_B(\tau _Q)\simeq 76\%`$ for $`\tau _Q\simeq 5`$ fm, $`\sigma =0.88`$, $`\tau _F=10`$ fm and $`f_0=1/2`$. Experimenters can use baryon fluctuations to search for the tricritical point as follows. Collisions for a range of beam energies, ion combinations and centralities can produce high density systems that follow trajectories as in fig. 1. Results can be compared as shown in fig. 2 by plotting the normalized ratio , $$\omega _B\equiv 𝒱_B/(\langle N\rangle +\langle \overline{N}\rangle )$$ (14) as a function of a normalized centrality selector, e.g. the total charged particle multiplicity $`N_{\mathrm{ch}}`$. In the absence of unusual fluctuations, this ratio is energy independent, with a value close to unity, cf. eq. (1). In fig. 
2 we show the results of simulated collisions incorporating the above results to compute the mean and variance in the event generator of refs. ; see these refs. for details. The rapidity densities of baryons, antibaryons and charged hadrons are taken to be 60, 15 and 300 for impact parameter $`b=0`$ and scale with the number of participants for $`b>0`$, as appropriate at SPS energy. A value of $`f`$ is assigned to each event using the ad hoc distribution $`f(b)=0.25[1-(b/b_0)^2]`$ for $`b<b_0=3`$ fm. The mean baryon number and its fluctuations are computed at $`y\simeq \eta =0`$ for $`\tau _Q=5`$ fm, $`\tau _F=10`$ fm, $`D=6`$ fm, and rapidity density contrasts $`\delta N=N_q-N_h=20`$ and 40, corresponding to $`\rho _q-\rho _h\simeq 0.10`$ and 0.2 fm<sup>-3</sup> on the scale of normal nuclear matter density. The ‘hadron’ curve is computed assuming no enhancement. The difference between this curve and unity is due to impact parameter fluctuations (volume and thermal fluctuations are omitted). The top curve is computed with $`\delta N=40`$ but without diffusion. We see that diffusion is a small effect for our parameter choices. Much more work is needed to obtain crucial information, such as the beam energies and rapidity ranges needed to probe the high density phase. To start, we need a better idea of the phase diagram away from the tricritical point. We must also understand transport processes in the unstable and mixed phase regions. Then, we can combine these ingredients to perform dynamical simulations such as those in to assess the likelihood of a quench. We have also neglected the possibility of superconductivity in the high density phase, a reasonable assumption near the tricritical point . Superconductivity would modify baryon transport because a) the condensate carries baryon current and b) additional transport modes are possible. In summary, we have explored the possibility that the phase transition of refs. can enhance fluctuations of the net baryon number. We find that fluctuations can plausibly persist through freezeout to surmount impact-parameter fluctuations . A collision that reaches the tricritical point will have the largest fluctuations, but collisions that pass below can also be extraordinary, provided that the dynamics can quench the system. Calculations for $`y\simeq 0`$ at RHIC energy suggest that proton fluctuations alone can reveal net baryon fluctuations at the level of fig. 2. Work is in progress to evaluate this signal at higher rapidity and lower beam energy where higher baryon densities are met. I am grateful to C. Pruneau for many illuminating discussions, and to J. Berges, K. Rajagopal, E. Surdutovich and S. Voloshin for useful comments. I also thank the Aspen Center for Physics for hospitality during the completion of this work. This work is supported in part by the U.S. DOE grant DE-FG02-92ER40713.
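As a numerical footnote to fig. 2, the two-phase formulas are easy to evaluate directly. The sketch below combines eqs. (1), (3) and (14) with the rapidity densities quoted above (60 baryons and 15 antibaryons at $`b=0`$); it illustrates the arithmetic behind the scale of the enhancement, not a rerun of the event generator.

```cpp
#include <cstdio>

// Two-phase fluctuation enhancement, eqs. (2)-(3) and (14):
//   V_B      = (N + Nbar) + f (1-f) (N_q - N_h)^2
//   omega_B  = V_B / (N + Nbar)
int main() {
  const double N = 60, Nbar = 15;   // rapidity densities at b = 0 (from the text)
  const double dN = 40;             // density contrast delta N = N_q - N_h
  for (double f = 0.0; f <= 0.5; f += 0.1) {
    double poisson = N + Nbar;                    // eq. (1) baseline
    double VB = poisson + f * (1 - f) * dN * dN;  // eq. (3)
    std::printf("f=%.1f  omega_B=%.2f\n", f, VB / poisson);
  }
}
```

For instance, the maximal $`f=0.25`$ of the simulated distribution with $`\delta N=40`$ gives $`\omega _B=5`$, the order of magnitude of the top curve in fig. 2.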
# THE INFLUENCE OF BULGE PROFILE SHAPES ON CLAIMS FOR A SCALE-FREE HUBBLE SEQUENCE FOR SPIRAL GALAXIES ## 1 Introduction De Jong (1996b) and Courteau, de Jong, & Broeils (1996) have suggested that “the Hubble sequence of spirals is scale-free”. They claim that “the constant ratio of bulge-to-disk scale-lengths appears to be independent of galaxy type”. If true, this would not only be at odds with the classification scheme proposed by Hubble (1926, 1936) and later Sandage (1961) - in which the bulge-to-disk ratio progressively decreases as one goes from early to late type spirals (Simien & de Vaucouleurs 1986) - but would have consequences for theories of galaxy formation. While the surface brightness profiles of the disks of spirals are well described by exponential models, the light profiles of the bulges are known to possess a range of structural shapes (Andredakis, Peletier & Balcells 1995; Carollo, Stiavelli & Mack 1998). These can be easily modelled with the Sersic (1968) $`r^{1/n}`$ law. A generalization of de Vaucouleurs’ (1948) $`r^{1/4}`$ law, it has a free parameter $`n`$ that can describe the observed range of bulge profile shapes. Indeed, a subset of this model (namely $`n`$=1, 2, 4) was applied by de Jong (1996b) to the surface brightness profiles of the bulges in his sample of 86 face-on spiral galaxies. de Jong found that 60% of his sample were better modelled (based on the $`\chi ^2`$ statistic) with an $`n`$=1 model, while 40% preferred a larger value of $`n`$, with 15% preferring $`n`$=4. Similarly, Courteau (1996) found, when fitting both an $`n`$=1 and an $`n`$=4 profile model, that 15% of the bulges in his sample of spiral galaxies were better modelled with the $`n`$=4 profile (Courteau et al. 1996). Given these results, Courteau et al. (1996) reported that late-type spirals are best fitted by two exponential models, and they chose to represent all their spiral galaxies this way. Subsequently, their claim for a scale-free Hubble sequence for spirals was based upon structural parameters obtained from fitting exponential light profile models to both the disk and the bulge. However, the above percentages become most interesting when one notes that the galaxies preferring the larger values of $`n`$ are the early-type spirals, while the late-type spirals prefer values of $`n`$$`\sim `$1 (Andredakis et al. 1995; Moriondo, Giovanardi, & Hunt 1998). Additionally, Andredakis et al. (1995) have shown that the bulge-to-disk ratio of luminosities varies systematically with profile shape, such that galaxies with a larger bulge-to-disk luminosity ratio have larger shape parameters. Logically, any conclusions drawn from structural parameters which have ignored these structural differences must surely be questioned (Moriondo et al. 1998). By using the best-fitting profile models (either $`n`$$`=`$1, 2, or 4), this paper reinvestigates the claim for a scale-free Hubble sequence of spiral galaxies. ## 2 Data We have re-analyzed the data presented by Courteau et al. (1996). They presented two data sets; however, only one is appropriate for explorations of galaxy properties as a function of morphological type. Lahav et al. (1995) showed that the dispersion in galaxy type index, T, between six experienced galaxy classifiers was on average 1.8 T-units, and 2.2 T-units when comparing the RC3 (de Vaucouleurs et al. 1991) T-index with those of the six classifiers. A similar figure of disagreement (2.0-2.5 T-units) was obtained by four human classifiers of HST images (Odewahn et al. 1996). 
Unfortunately, because of this, the larger of the two data sets presented in Courteau et al. (1996) - 243 Sb–Sc galaxies from the 349 Sb–Sc galaxies of Courteau (1996) - cannot by itself be used to explore possible trends within the Hubble sequence of spiral galaxies. What the R-band data of Courteau (1996) do show is that the individual ratios of bulge-to-disk scale-lengths span a broad range of values (Courteau et al. 1996, Figure 1). Scale-length ratios within just one standard deviation of the median are shown to span a range greater than a factor of 4, with the 1$`\sigma `$ confidence interval ranging from 0.029 to 0.135, and a long tail in the distribution stretching to 0.35. To obtain the ratio of the bulge effective radius $`r_e`$ to the disk scale-length $`h`$, these numbers should be multiplied by 1.679, giving ratios of $`r_e/h`$ up to ∼0.6. Therefore, in passing, we stress that caution should be employed when using any sort of mean bulge-to-disk scale-length ratio, since a broad range of values spanning one order of magnitude exists amongst the real galaxy population. The second data set, that of de Jong & van der Kruit (1994), is however useful. It includes galaxy types from Sa through to Sm. This sample of 86 galaxies actually includes two S0 galaxies which are removed here as de Jong (1996b) notes that their surface brightnesses are significantly below the trend seen for the rest of the spiral galaxies, and their connection with the early-type spiral galaxies is still unclear. The sole irregular galaxy (T=10) is also removed, leaving 83 face-on (minor over major axis ratios greater than 0.625) Sa to Sm galaxies, imaged in six passbands (BVRIHK). ## 3 Analysis In recent years, some of the limitations of the classical surface brightness profile models, such as the exponential or the $`r^{1/4}`$ law, have been realised. Departures in the radial falloff of light from these models have not only been detected but successfully modelled for: the dwarf galaxy population (Davies et al. 1988; Young & Currie 1994; Binggeli & Jerjen 1998), the ellipticals (Caon et al. 1993; Graham & Colless 1997), brightest cluster galaxies (Graham et al. 1996), and for the bulges of spirals (Andredakis et al. 1995). The Sersic (1968) law has proved successful in parameterizing such departures from the traditional models and can be written as $`I(r)`$ $`=`$ $`I_0\mathrm{exp}\left[-\left({\displaystyle \frac{r}{h_b}}\right)^{1/n}\right]`$ $`=`$ $`I_e\mathrm{exp}\left[-(2n-0.327)\left\{\left({\displaystyle \frac{r}{r_e}}\right)^{1/n}-1\right\}\right].`$ The first line shows how the intensity $`I`$ varies with radius $`r`$; $`I_0`$ is the central intensity where $`r`$$`=`$0. We use $`h_b`$ here to denote the bulge scale-length and avoid confusion with the disk scale-length which is denoted by $`h`$ elsewhere in this paper. The third model parameter, $`n`$, describes the level of curvature in the light profile. For example, when $`n`$=1 the Sersic law is equivalent to an exponential light distribution; when $`n`$=4 it mimics the de Vaucouleurs $`r^{1/4}`$ law. The value of $`n`$ is of course not restricted to integer values and remains meaningful up until values of around 10-15. The second line is a variant of the first expression, with the model parameters now $`I_e`$, the intensity at the radius $`r_e`$ which encloses half of the total light of the bulge. Equating like-terms, one has that $`I_0`$$`=`$$`I_e\mathrm{exp}(2n-0.327)`$ and $`(r_e/h_b)`$$`=`$$`(2n-0.327)^n`$.
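To make the scale conversions concrete, here is a minimal Python sketch (ours, not part of the original analysis) evaluating the Sersic law and the $`r_e/h_b`$ relation quoted above; only the $`(2n-0.327)^n`$ approximation used in the text is assumed.

```python
import math

def sersic_intensity(r, I0, h_b, n):
    """Sersic (1968) profile: I(r) = I0 * exp[-(r/h_b)**(1/n)]."""
    return I0 * math.exp(-((r / h_b) ** (1.0 / n)))

def re_over_hb(n):
    """Effective-radius to scale-length ratio, r_e/h_b = (2n - 0.327)**n."""
    return (2.0 * n - 0.327) ** n

for n in (1, 2, 4):
    print(n, round(re_over_hb(n), 1))
# n=1 -> 1.7, n=2 -> 13.5, n=4 -> ~3466, reproducing the values used in the text
```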
Therefore, when $`n`$$`=`$1, $`r_e`$$`=`$1.67$`h_b`$, and when $`n`$$`=`$2, $`r_e`$=13.5$`h_b`$. One can also easily see why effective radii rather than scale-lengths are used for the $`r^{1/4}`$ law, since $`h_b`$=$`r_e`$/3466. Given that this paper uses parameters from $`n`$=1, 2, and 4 Sersic models, we have used effective radii rather than scale-lengths. de Jong (1996a) fitted three models to the surface brightness profiles of the bulges, all with an accompanying exponential profile model to the disk. The goodness-of-fit for each model was measured using the $`\chi ^2`$ statistic.<sup>1</sup><sup>1</sup>1The data can be found at http://cdsweb.u-strasbg.fr/htbin/Cat?J/A+AS/118/557. For the B, V, H and K passbands, it is observed that for every two galaxy bulges that are best fit with an $`n`$=2 or $`n`$=4 profile, there are three galaxy bulges whose best-fitting profile model is the $`n`$=1 model. For the R and I passbands, the number of galaxy bulges best fit with the $`n`$=1 model equals the number of bulges better fitted with the alternative $`n`$=2 or $`n`$=4 models (Table 1). In using the best-fitting bulge models (either $`n`$$`=`$1, 2, or 4) the associated model parameters were not always reliable. In particular, the $`r^{1/4}`$ model sometimes resulted in values for $`r_e`$ that were either inaccurately determined and/or were unrealistically large. To account for this, each value of $`r_e`$ was inspected and the galaxy either retained, or rejected if $`\mathrm{\Delta }r_e/r_e`$$`>`$40% or $`r_e/r_{max}`$$`>`$0.5, where $`r_{max}`$ is the maximum radius to which the surface brightness profiles extend (∼26$`\pm `$1 in B). This typically resulted in the exclusion of only 1 or 2 galaxies from each of the morphological type bins T=1-3 and T=7-9 used in this comparative study. Table 2 shows the difference in the mean value of $`r_e/h`$ for the early- and late-type morphological class bins used by Courteau et al. (1996). It shows this ratio for the K- and R-band data fit with an exponential bulge model by de Jong (1996b) and Courteau et al. (1996). Using the best-fitting $`n`$$`=`$1, 2, and 4 models, we present this difference of means for all six passbands (BVRIHK). However, this difference in the ratio is meaningless on its own. What is important is the significance of this difference, and this depends on the sample size and standard deviation of the distributions. To this end, we have applied Student’s t-test. The probability, Prob(t), that the difference in means could be as large as it is by chance is given in Table 2; small values indicate that the means are significantly different from each other. ## 4 Discussion The majority of the early-type spirals (≲Sb) prefer to have values of $`n`$$`>`$1, while late-type galaxies (≳Sd) are better fit with an exponential bulge (see Table 1). The universal application of the exponential fitting function ignores from the start real differences in galaxy structure, and introduces a systematic bias into the parameterization of these galaxies - under-estimating the effective half-light radius of the bulge.<sup>2</sup><sup>2</sup>2A similar behaviour is known to occur with $`r^{1/4}`$ modelling of the light profiles of elliptical galaxies (Graham & Colless 1997, Figure 11).
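The quality cuts and the significance test above are stated operationally, so a short Python sketch may help; it assumes SciPy is available, and the array names are hypothetical placeholders of ours.

```python
import numpy as np
from scipy import stats

def quality_mask(r_e, dr_e, r_max):
    """Retain galaxies with Delta(r_e)/r_e <= 40% and r_e/r_max <= 0.5."""
    r_e, dr_e, r_max = map(np.asarray, (r_e, dr_e, r_max))
    return (dr_e / r_e <= 0.40) & (r_e / r_max <= 0.5)

def prob_t(ratios_early, ratios_late):
    """Student's t-test on the early- vs late-type r_e/h distributions;
    a small Prob(t) means the two means differ significantly."""
    t, p = stats.ttest_ind(ratios_early, ratios_late)
    return p
```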
Figure 4 shows the ratio of the effective radii derived from the $`r^{1/4}`$ model ($`r_{e,4}`$) and the effective radii derived from the exponential model ($`r_{e,exp}`$), plotted against the ratio of the exponential model disk scale length co-fitted with the $`r^{1/4}`$ bulge model ($`h_4`$) and the exponential disk scale length co-fitted with the exponential bulge model ($`h_{exp}`$). It shows that $`r_{e,4}/r_{e,exp}`$$`>`$1, while the exponential disk scale-length remains largely unchanged as the bulge profile model is adjusted. Similarly, fitting an $`n`$=1 profile will over-estimate the half-light radii for some of the late-type spirals. Although de Jong (1996a) shows for the late-type spirals that an $`n`$=1 model provides a better representation of the bulge than an $`n`$=2 or $`n`$=4 model, he also notes that values as low as $`n`$=0.5 are obtained when applying the Sersic profile to the bulge (de Jong 1996a). Furthermore, Andredakis et al. (1995), in fitting the Sersic model to the K-band bulge light profiles of 30 spiral galaxies, found some Sb–Sd galaxies to have bulge profiles with shape parameters smaller than 1. Consequently, restricting the structural profiles of all late-type galaxies to be described by an $`n`$=1 model may be increasing their mean bulge scale-length and hence reducing the true difference between the $`r_e/h`$ ratio of the early- and late-type spirals. That is, the probabilities in Table 2 may be larger than they should be. As stated by de Jong (1996b), the K band is the passband of choice for such studies, making it “possible for the first time to trace fundamental parameters related to the luminous mass while hardly being hampered by the effects of dust and stellar populations.” Indeed, some of the galaxies in de Jong’s sample were noted to possess dust lanes and circumnuclear star formation. Furthermore, bulges are brighter in K than in B with respect to the disk, and since the bulge/disk decomposition is easier when the bulge is relatively brighter, the fitting algorithm therefore works better in the K-band (de Jong 1996b). Using exponential bulge models, Courteau et al. (1996) mention that the $`r_e/h`$ ratios of the early-type spirals appear systematically below the average $`r_e/h`$ value for all spiral galaxy types. They assert that this difference is not large, and claim that the constant ratio of bulge-to-disk scale-length is independent of Hubble type. However, our analysis of the exponential models fitted to the K band data of de Jong (1996b, Figure 18) reveals that the mean value of $`r_e/h`$ for the Sa–Sb type galaxies is actually smaller than that for the late-type spirals at a significance of 98% (3$`\sigma `$)! (Table 2). Similarly, with the R-band data presented by Courteau et al. (1996), and in fact for all wavelengths used (excluding the V-band), the ratio of $`r_e/h`$ is smaller for the Sa–Sb galaxies than it is for types ≳Sbc. This result is at odds with the classical picture of the Hubble sequence, where early-type spirals have larger bulge-to-disk scale-length ratios than late-type spirals. Due to the use of exponential bulge models for the Sa–Sb type galaxies, the above result can be understood in terms of systematically under-estimating the size of these bulges. Correcting for this, by taking the best-fitting structural parameters from either the $`n`$=1, 2, or 4 models, we find that the situation reverses itself.
The average value of $`r_e/h`$ for the Sa–Sb type galaxies is found to be larger than the average value of $`r_e/h`$ for galaxy types ≳Sd, in all six passbands. Table 2 shows that the hypothesis that the Sa–Sb type galaxies have the same mean $`r_e/h`$ as the Sd–Sm type galaxies is weakly ruled out, at the 1.5–2 $`\sigma `$ level, in five of the six passbands used by de Jong & van der Kruit (1994). Interestingly, it is the K-band data which suggest that the difference in means is not significant. However, this result in itself is significant when compared to the result obtained using only exponential bulge profile models. Using the best-fitting models, the average $`r_e/h`$ ratio for the sample is larger – at the 3 $`\sigma `$ significance level – than when obtained using only the $`n`$=1 model. We plan to refine this work by fitting a Sersic profile with free (i.e. not fixed) shape parameter, $`n`$, to the bulges of the spirals in the sample of de Jong & van der Kruit (1994). Furthermore, Courteau et al. (1996) noted that about 1/3 of the sample of galaxies from de Jong (1996a) had a bar modelled as an additional component - requiring eight structural model parameters for these galaxies. While de Jong modelled a bar when fitting the exponential bulge models to the 2D images, the one-dimensional decomposition technique which he used to fit the $`r^{1/4}`$ and $`r^{1/2}`$ bulge models did not allow for the influence of a bar. Consequently, we must caution that failure to model the bar in the 1D data used here may influence the scale-lengths obtained. Arguments for secular evolution, namely the exponential bulge light profile and the restricted range of bulge-to-disk scale-lengths, are either wrong or questionable. Andredakis et al.’s (1995) alternative to secular evolution - based upon the continuous trend between galaxy structure, as measured by $`n`$, and galaxy type - is largely supported by the data of Courteau et al. (1996). In the framework of this model, n-body simulations (Andredakis 1998) have shown how the imprint of disk formation is left upon the bulge, creating the observed trend between shape parameter and morphological type. Yet another alternative is offered by Aguerri & Balcells (1999), where the shape of the bulge grows from an $`n`$=1 profile to larger values of $`n`$ as shown through n-body simulations of merger events. Whether the bulges of spiral galaxies formed after the disk, as in the secular evolution model (Courteau et al. 1996), or whether the bulge is in fact older than the disk (Andredakis 1998, and references therein) may be better answered when the range and trends of bulge-to-disk ratios are better known. We thank Marc Balcells for his comments on this paper prior to its submission. We also wish to thank the anonymous referee for their comments and suggestions.
no-problem/9908/cond-mat9908340.html
ar5iv
text
# Surface Excitations in a Bose-Einstein Condensate \[ ## Abstract Surface modes in a Bose-Einstein condensate of sodium atoms have been studied. We observed excitations of standing and rotating quadrupolar and octopolar modes. The modes were excited with high spatial and temporal resolution using the optical dipole force of a rapidly scanning laser beam. This novel technique is very flexible and should be useful for the study of rotating Bose-Einstein condensates and vortices. \] Elementary excitations play a crucial role in the understanding of many-body quantum systems. Landau derived the properties of superfluid liquid helium from the spectrum of collective excitations . After the observation of Bose-Einstein condensation in dilute alkali gases , considerable theoretical and experimental efforts focused on collective excitations. This has already led to advances in our understanding of the weakly interacting Bose gas . In most studies, collective modes were excited by modulating the parameters of the magnetic trapping potential . This method of exciting collective modes is limited to spatial perturbations that reflect the geometry of the trapping coils. Such a limitation is particularly severe for the widely used dc magnetic traps, where only modes with cylindrical symmetry have been excited . Studies of high multipolarity modes are important for a number of reasons. First, high multipolarity modes are the closest counterpart to the surface excitations in mesoscopic liquid helium droplets. These surface modes are considered crucial to understand finite size effects in superfluids, but are difficult to achieve experimentally . Second, for higher angular momentum the surface modes change their character from collective to single particle type . This crossover could be crucial for the existence of a critical rotational velocity for vortex formation . Also, because the thermal atoms are localized around the Thomas-Fermi radius, surface modes should be more sensitive to finite temperature effects . In this Letter we report on the observation of surface excitations of a Bose-Einstein condensate confined in a dc magnetic trap. The excitations were induced by the optical dipole force of a focused red-detuned laser beam which was controlled by a 2-axis acousto-optic deflector. With these tools, local and controllable deformations of the magnetic trapping potential with both arbitrary spatial symmetry and timing can be achieved. This opens the way to selectively excite modes with higher multipolarity and complex spatial patterns. Elementary excitations in a dilute Bose condensate are usually described by the hydrodynamic equations derived from the Bogoliubov theory , which closely resemble the equations describing superfluids at zero temperature : $$m\frac{\partial 𝐯}{\partial t}+\nabla \left(\frac{1}{2}mv^2+V_{ext}(𝐫)-\mu +\frac{4\pi \hbar ^2a}{m}\rho \right)=0.$$ (1) Here $`\rho (𝐫,t)`$ and $`𝐯(𝐫,t)`$ are the condensate density and velocity respectively (linked by a continuity equation), $`m`$ the atomic mass, $`a`$ the $`s`$-wave scattering length, $`\mu `$ the chemical potential, and $`V_{ext}`$ the external trapping potential.
For an isotropic harmonic oscillator potential $`V_{ext}=m\omega _0^2r^2/2`$ the solution for the density perturbation $`\delta \rho `$ can be expressed as: $$\delta \rho (𝐫)=P_{\ell }^{(2n)}(r/R)r^{\ell }Y_{\ell m}(\theta ,\varphi ),$$ (2) where $`P_{\ell }^{(2n)}(r/R)`$ are polynomials of degree $`2n`$ ($`R`$ being the Thomas-Fermi radius $`R=\sqrt{2\mu /m\omega _0^2}`$), $`Y_{\ell m}(\theta ,\varphi )`$ are the spherical harmonics, and $`\ell `$, $`m`$ are the total angular momentum of the excitation and its $`z`$ component, respectively. The dispersion law for the frequency of the normal modes is expressed in terms of the trapping frequency $`\nu _0=\omega _0/2\pi `$ as : $$\nu (n,\ell )=(2n^2+2n\ell +3n+\ell )^{1/2}\nu _0,$$ (3) which should be compared to the prediction for an ideal Bose gas in a harmonic trap, $`\nu _{HO}=(2n+\ell )\nu _0`$. The effect of interactions in determining the transition from a collective to a single-particle regime is particularly evident for the excitations whose radial dependence of the density perturbation has no nodes ($`n`$=0). These modes are referred to as surface excitations since the density perturbation, while vanishing at the origin, is peaked at the surface of the condensate. In thin films of superfluid liquid <sup>4</sup>He and <sup>3</sup>He, their study has led to the observation of third sound . In a semiclassical picture these excitations can be considered the mesoscopic counterpart to tidal waves at the macroscopic level . The experimental results were obtained using a newly developed apparatus for studying Bose-Einstein condensates of sodium atoms. A Zeeman slower with magnetic field reversal delivers $`10^{11}`$ slow atoms s<sup>-1</sup> which are collected in a magneto-optical (MOT) trap. A loading time of 3 s allowed us to obtain $`10^{10}`$–$`10^{11}`$ atoms in a dark-SPOT trap at ∼1 mK. After 5 ms of polarization gradient cooling, atoms in the $`F=1,m_F=-1`$ ground state at 50–100 $`\mu `$K are loaded into a magnetic trap. The latter realizes a Ioffe-Pritchard configuration modified with four Ioffe bars and two strongly elongated pinch coils, symmetrically located around a quartz glass cell. This novel design combines excellent optical access with tight confinement. The typical values for the axial curvature and radial gradient of the magnetic field at the trap center are $`202`$ G/cm<sup>2</sup> and $`330`$ G/cm, among the largest ever obtained in such magnetic traps. The resulting trapping frequencies are $`\nu _r=547`$ Hz and $`\nu _z=26`$ Hz for the radial and the axial directions respectively. The background gas-limited lifetime of the atoms in the magnetic trap at $`10^{-11}`$ Torr is around 1 min. After evaporative cooling with an rf-sweep lasting 20 s, around 5–10 million atoms are left in a condensate with a chemical potential of $`200`$ nK and a negligible thermal component (condensate fraction ≳90%). A decompression to the final radial and axial trapping frequencies of $`(90.1\pm 0.5)`$ Hz and $`18`$ Hz lowered the density to $`2\times 10^{14}`$ cm<sup>-3</sup> where three-body recombination losses were less prominent. The radial trapping frequency was measured by exciting the condensate motion with a short modulation of the bias magnetic field and looking at the free center of mass oscillation in the magnetic trap.
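The dispersion law is easy to evaluate numerically; the following Python sketch (ours) compares the hydrodynamic prediction of eq. (3) with the ideal-gas result, using the measured radial trap frequency quoted above in place of $`\nu _0`$.

```python
import math

def nu_hydro(n, ell, nu0):
    """Hydrodynamic mode frequency, eq. (3): sqrt(2n^2 + 2n*ell + 3n + ell) * nu0."""
    return math.sqrt(2 * n**2 + 2 * n * ell + 3 * n + ell) * nu0

def nu_ideal(n, ell, nu0):
    """Ideal Bose gas in a harmonic trap: nu_HO = (2n + ell) * nu0."""
    return (2 * n + ell) * nu0

nu_r = 90.1                     # final radial trap frequency (Hz)
for ell in (1, 2, 4):           # nodeless (n = 0) surface modes
    print(ell, round(nu_hydro(0, ell, nu_r), 1), nu_ideal(0, ell, nu_r))
# ell=2 gives sqrt(2)*nu_r ~ 127 Hz and ell=4 gives 2*nu_r ~ 180 Hz
```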
Surface modes were excited by perturbing the magnetic trapping potential with light from a Nd:YAG laser (emitting at 1064 nm) travelling parallel to the axis of the trap and focused near the center of the magnetic trap. Because of the low intensity of the laser beam and the large detuning from the sodium resonance, heating from spontaneous scattering was negligible. The laser beam was red-detuned from the sodium resonance and therefore gave rise to an attractive dipole potential . The 1 mm Rayleigh range of the beam waist is considerably longer than the 220 $`\mu `$m axial extent of the condensate. Therefore, the laser only created radial inhomogeneities in the trapping potential, leaving the axial motion almost undisturbed. The spatial and temporal control of the beam was achieved with two crossed acousto-optic deflectors. Using the 2-axis deflector arbitrary laser patterns could be scanned in a plane transverse to the propagation of the laser beam. The maximum size of these patterns is 100 beam widths in both directions. The scan rate was chosen to be 10 kHz, which is much larger than the trapping frequencies. Thus, the atoms experienced a time-averaged potential that is superimposed upon the magnetic trap potential as depicted in Fig. 1a. For these experiments a beam width of 15 $`\mu `$m and a power of $`80\mu `$W were used to generate a potential depth corresponding to 20 $`\%`$ of the chemical potential for each point. For an anisotropic axially symmetric trapping potential, only the $`z`$ component of the angular momentum is conserved and the eigenfunctions are more complicated than in the isotropic case. However, surface modes of the form as in Eq. (2) with $`m=\pm \ell `$ are still solutions with a frequency : $$\nu (m=\pm \ell )=\sqrt{\ell }\nu _r.$$ (4) Quadrupolar standing waves were studied by exciting a superposition of $`\ell =2,m=2`$ and $`\ell =2,m=-2`$ modes with a pattern of two points located on opposite sides of the condensate. The light intensity was modulated in phase at the expected quadrupole frequency $`\nu _2=\sqrt{2}\nu _r`$. After 5 cycles the IR light was turned off, leaving the condensate free to oscillate in the magnetic trap. The condensate was then released from the magnetic trap and after 20 ms of ballistic expansion it was probed by resonant absorption imaging along the axis of the trap. In Fig. 2a, images are shown for different phases of the oscillation. The aspect ratio of the condensate oscillates at a frequency of $`(130.5\pm 2.5)`$ Hz with a damping time of about 0.5 s. A similar damping time was observed for the lowest $`m=0`$ mode of an almost pure condensate . A rotating wave $`\ell =2,m=2`$ was excited with two IR spots of constant intensity rotating around the axis at half the measured quadrupole frequency. This excitation scheme was highly frequency selective. When the rotation frequency deviated by $`10\%`$ from the resonance no excitation of this mode was observed, consistent with the narrow bandwidth of the mode. In Fig. 2b we show a set of 10 pictures of the rotating mode taken with non-destructive phase-contrast imaging in the magnetic trap. The higher lying $`\ell `$=4 surface mode (superposition of $`m=\pm 4`$) was driven with a four-point pattern that was intensity-modulated at the expected frequency $`\nu _4=2\nu _r`$ (Fig. 1b). In Fig.
3a, time-of-flight absorption images are shown for variable hold times in the magnetic trap after stopping the drive and compared to the time evolution for a pure $`\ell `$=4 surface mode (Fig. 3b). By analyzing the density distribution close to the surface we extracted the Fourier spectrum of the first radial moment $`r=r(\theta )`$, which was strongly peaked at $`\ell =4`$. The time evolution of the $`\ell =4`$ Fourier cosine coefficient was obtained by repeating the analysis for various hold times (Fig. 3c). We observed an exponentially decaying oscillation at $`\nu _4=(177\pm 5)`$ Hz with a damping time of $`\tau _4=(9.5\pm 2.2)`$ ms. The agreement between the measured frequencies and the hydrodynamic predictions is very good (see Table I). Note that the octopolar mode is damped much faster than the quadrupole mode. This could indicate that higher order surface excitations interact more strongly with the thermal cloud. Due to the mean-field repulsion of the condensate, the effective potential felt by the thermal atoms has a minimum at the Thomas-Fermi radius. For increasing $`\ell `$ surface waves are more localized in the same region (Eq.(2)). Thus, a systematic study of the temperature dependence of frequencies and damping times of higher order surface modes could extend thermometry for Bose-Einstein condensates to lower temperatures where no thermal cloud is discernible (and the usual method of fitting the wings of the thermal distribution is no longer applicable).

| $`\ell `$ | $`\nu _{\ell }`$ (Hz) | $`\nu _{\ell }/\nu _1(exp)`$ | $`\nu _{\ell }/\nu _1(th)`$ |
| --- | --- | --- | --- |
| 1 | 90.1 $`\pm `$ 0.5 | – | – |
| 2 | 130.5 $`\pm `$ 2.5 | 1.45 $`\pm `$ 0.04 | $`\sqrt{2}`$ |
| 4 | 177 $`\pm `$ 5 | 1.96 $`\pm `$ 0.06 | 2 |

Table I: Comparison between observed and predicted frequencies for the quadrupole and the octopole surface excitations, normalized to the radial trapping frequency (dipole mode) $`\nu _1`$. For higher $`\ell `$, the crossover from the hydrodynamic regime to the single particle picture could be explored. This is expected to occur for $`\ell \gtrsim \ell _{crit}=2^{1/3}(R/a_{HO})^{4/3}\simeq 24`$ (where $`a_{HO}=[\hbar /2\pi m(\nu _r^2\nu _z)^{1/3}]^{1/2}`$ is the harmonic oscillator length) for our trap parameters. The excitation of such modes would require smaller beam waists. However, smaller waists resulted either in a very weak excitation or, when the power was increased, in the condensate becoming strongly distorted and/or localized around the laser focus, leading to high densities and to large recombination losses. We plan to use blue-detuned light in the future, which will make it easier to create stronger perturbations without loss mechanisms. Our method of generating time-averaged optical potentials can also be used to create purely optical traps in a variety of geometries. By increasing the laser intensity and shutting off the magnetic trap we were able to transfer the condensate into multiple optical dipole traps, as shown in Fig. 4. They can be used for interference of multiple condensates and studies of coherence and decoherence. Another interesting possibility is the study of condensates in rotating potentials where vortices should be stable. Our first attempts showed a very short trapping time, probably caused by heating due to micromotion. It should be possible to overcome this limitation by increasing the scan frequency beyond the current maximum value of $`100`$ kHz.
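As a cross-check of Table I and of the quoted crossover estimate, here is a small Python sketch (ours); the value taken for $`R/a_{HO}`$ is our assumption, chosen to reproduce $`\ell _{crit}\simeq 24`$.

```python
import math

def predicted_ratio(ell):
    """nu_ell / nu_1 for m = +/- ell surface modes, eq. (4): sqrt(ell)."""
    return math.sqrt(ell)

def ell_crit(R_over_aho):
    """Collective-to-single-particle crossover: 2**(1/3) * (R/a_HO)**(4/3)."""
    return 2 ** (1.0 / 3.0) * R_over_aho ** (4.0 / 3.0)

for ell, obs in ((2, 130.5 / 90.1), (4, 177.0 / 90.1)):
    print(ell, round(obs, 2), round(predicted_ratio(ell), 2))
# ell=2: 1.45 observed vs 1.41 predicted; ell=4: 1.96 vs 2.00
print(round(ell_crit(9.2)))   # an assumed R/a_HO ~ 9 gives ell_crit ~ 24
```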
In conclusion, we have developed a technique to excite surface modes in a Bose-Einstein condensate by inducing deformations of the trap potential with a rapidly scanning red-detuned laser beam. With this technique we could excite both standing and rotating modes. The measured frequencies for quadrupole and octopole modes are in agreement with the predictions of the hydrodynamic theory for collective excitations of dilute Bose gases. This flexible technique should be useful for the investigation of the interplay between collective excitations and the physics of rotating Bose-Einstein condensates. We would like to thank J. Gore, Z. Hadzibabic, and J. Vogels for experimental assistance and useful discussions. This work was supported by the ONR, NSF, JSEP (ARO), NASA, and the David and Lucile Packard Foundation. M.K. also acknowledges support from the Studienstiftung des Deutschen Volkes.
no-problem/9908/cond-mat9908016.html
ar5iv
text
# Spin and charge ordering in self-doped Mott insulators \[ ## Abstract We have investigated possible spin and charge ordered states in 3$`d`$ transition-metal oxides with small or negative charge-transfer energy, which can be regarded as self-doped Mott insulators, using Hartree-Fock calculations on $`d`$-$`p`$-type lattice models. It was found that an antiferromagnetic state with charge ordering in oxygen 2$`p`$ orbitals is favored for relatively large charge-transfer energy and may be relevant for PrNiO<sub>3</sub> and NdNiO<sub>3</sub>. On the other hand, an antiferromagnetic state with charge ordering in transition-metal 3$`d`$ orbitals tends to be stable for highly negative charge-transfer energy and can be stabilized by the breathing-type lattice distortion; this is probably realized in YNiO<sub>3</sub>. \] The electronic structure of 3$`d`$ transition metal oxides is described by the Zaanen-Sawatzky-Allen (ZSA) scheme, in which they are classified into two regimes according to the relative magnitude of the oxygen-to-metal charge-transfer energy $`\mathrm{\Delta }`$ and the $`dd`$ Coulomb interaction energy $`U`$. While the magnitude of the band gap is given by $`U`$ in the Mott-Hubbard regime, it is given by $`\mathrm{\Delta }`$ in the charge-transfer regime $`\mathrm{\Delta }<U`$. 3$`d`$ transition-metal oxides with high valence generally have very small or negative charge-transfer energy $`\mathrm{\Delta }`$ and fall in a region which is not included in the ZSA scheme . Actually, perovskite-type 3$`d`$ transition-metal oxides such as LaCu<sup>3+</sup>O<sub>3</sub>, PrNi<sup>3+</sup>O<sub>3</sub> and SrFe<sup>4+</sup>O<sub>3</sub> have been studied by high energy spectroscopy and have been found to have very small or even negative charge-transfer energy $`\mathrm{\Delta }`$. With small or negative $`\mathrm{\Delta }`$, the highest part of the oxygen 2$`p`$ bands can overlap with the lowest part of the upper Hubbard band constructed from the transition-metal 3$`d`$ orbitals so that some holes are transferred from the 3$`d`$ orbitals to the 2$`p`$ orbitals in the ground state. This state can be viewed as a self-doped state of a Mott insulator such as has recently been suggested for CrO<sub>2</sub> . The properties of such a system are far from clear and can be very rich. It can be a paramagnetic metal, a ferromagnetic (FM) metal, or a non-magnetic insulator similar to Kondo-insulators . However, there exists another possibility which has not been explored until now: it may have charge ordering or a charge-density wave. It is possible that, in a self-doped state of a Mott insulator, holes in the oxygen 2$`p`$ orbitals undergo charge ordering just as in doped Mott insulators such as La<sub>2-x</sub>Sr<sub>x</sub>NiO<sub>4</sub> . In this letter, we study this possibility using model Hartree-Fock (HF) calculations and show that spin and charge ordered states may appear in perovskites with negative $`\mathrm{\Delta }`$. Based on the calculations, we argue that this phenomenon occurs in perovskites containing Fe<sup>4+</sup> (CaFeO<sub>3</sub>) and Ni<sup>3+</sup> ($`R`$NiO<sub>3</sub> where $`R`$ is a rare earth) . Specifically, we consider the latter system, whose properties, especially the strange magnetic ones, have remained a puzzle until now . We use the multi-band $`d`$-$`p`$ model with 16 Ni sites in which full degeneracy of the Ni 3$`d`$ orbitals and the oxygen 2$`p`$ orbitals is taken into account .
The Hamiltonian is given by $`H=H_p+H_d+H_{pd},`$ (1) $`H_p=\sum _{k,l,\sigma }ϵ_k^pp_{k,l\sigma }^+p_{k,l\sigma }+\sum _{k,l>l^{\prime },\sigma }V_{k,ll^{\prime }}^{pp}p_{k,l\sigma }^+p_{k,l^{\prime }\sigma }+H.c.,`$ (2) $`H_d`$ $`=`$ $`ϵ_d\sum _{i,m\sigma }d_{i,m\sigma }^+d_{i,m\sigma }+u\sum _{i,m}d_{i,m\uparrow }^+d_{i,m\uparrow }d_{i,m\downarrow }^+d_{i,m\downarrow }`$ (3) $`+`$ $`u^{\prime }\sum _{i,m\ne m^{\prime }}d_{i,m\uparrow }^+d_{i,m\uparrow }d_{i,m^{\prime }\downarrow }^+d_{i,m^{\prime }\downarrow }`$ (4) $`+`$ $`(u^{\prime }-j^{\prime })\sum _{i,m>m^{\prime },\sigma }d_{i,m\sigma }^+d_{i,m\sigma }d_{i,m^{\prime }\sigma }^+d_{i,m^{\prime }\sigma }`$ (5) $`+`$ $`j^{\prime }\sum _{i,m\ne m^{\prime }}d_{i,m\uparrow }^+d_{i,m^{\prime }\uparrow }d_{i,m\downarrow }^+d_{i,m^{\prime }\downarrow }`$ (6) $`+`$ $`j\sum _{i,m\ne m^{\prime }}d_{i,m\uparrow }^+d_{i,m^{\prime }\uparrow }d_{i,m^{\prime }\downarrow }^+d_{i,m\downarrow },`$ (7) $`H_{pd}=\sum _{k,m,l,\sigma }V_{k,lm}^{pd}d_{k,m\sigma }^+p_{k,l\sigma }+H.c.`$ (8) $`d_{i,m\sigma }^+`$ are creation operators for the 3$`d`$ electrons at site $`i`$. $`d_{k,m\sigma }^+`$ and $`p_{k,l\sigma }^+`$ are creation operators for Bloch electrons with wave vector $`k`$ which are constructed from the $`m`$-th component of the 3$`d`$ orbitals and from the $`l`$-th component of the 2$`p`$ orbitals, respectively. The intra-atomic Coulomb interaction between the 3$`d`$ electrons is expressed using Kanamori parameters, $`u`$, $`u^{\prime }`$, $`j`$ and $`j^{\prime }`$ . The transfer integrals between the transition-metal 3$`d`$ and oxygen 2$`p`$ orbitals $`V_{k,lm}^{pd}`$ are given in terms of Slater-Koster parameters $`(pd\sigma )`$ and $`(pd\pi )`$. The transfer integrals between the oxygen 2$`p`$ orbitals $`V_{k,ll^{\prime }}^{pp}`$ are expressed by $`(pp\sigma )`$ and $`(pp\pi )`$. Here, the ratio $`(pd\sigma )`$/$`(pd\pi )`$ is -2.16. $`(pp\sigma )`$ and $`(pp\pi )`$ are fixed at -0.60 and 0.15, respectively, for the undistorted lattice. When the lattice is distorted, the transfer integrals are scaled using Harrison’s law. The charge-transfer energy $`\mathrm{\Delta }`$ is defined by $`ϵ_d^0-ϵ_p+nU`$, where $`ϵ_d^0`$ and $`ϵ_p`$ are the energies of the bare 3$`d`$ and 2$`p`$ orbitals and $`U`$ ($`=u-20j/9`$) is the multiplet-averaged $`dd`$ Coulomb interaction. $`\mathrm{\Delta }`$, $`U`$, and $`(pd\sigma )`$ for PrNiO<sub>3</sub> are 1.0, 7.0, and -1.8 eV, respectively, which are taken from the photoemission study . The formally Ni<sup>3+</sup> (low-spin $`d^7`$) compounds $`R`$NiO<sub>3</sub> exhibit a metal-insulator transition as a function of temperature and the size of the $`R`$ ion . Among them, PrNiO<sub>3</sub> and NdNiO<sub>3</sub> are antiferromagnetic insulators below the metal-insulator transition temperature. Neutron diffraction study of PrNiO<sub>3</sub> and NdNiO<sub>3</sub> has shown that the magnetic structure has a propagation vector of (1/2,0,1/2) with respect to the orthorhombic unit cell or is an up-up-down-down stacking of the ferromagnetic planes along the (1,1,1)-direction of the pseudocubic lattice (see Fig. 1(a)) . In order to explain the magnetic structure, orbital ordering of the $`x^2-y^2`$ and $`3z^2-r^2`$ orbitals has been proposed because one of the $`e_g`$ orbitals is occupied in the low-spin $`d^7`$ configuration . However, previous model HF calculations have shown that the orbital ordered state of $`x^2-y^2/3z^2-r^2`$-type has a relatively high energy, suggesting that orbital ordering is not responsible for the magnetic structure .
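To see how the interaction parameters enter a mean-field (HF) energy, the following Python sketch (ours) evaluates only the density-density terms of eqs. (3)-(5) for given orbital occupations; the spin-flip and pair-hopping terms, eqs. (6)-(7), are omitted here, and the occupation arrays are hypothetical inputs.

```python
def density_interaction_energy(n_up, n_dn, u, u_p, j_p):
    """Mean-field energy of the density-density part of eqs. (3)-(5):
    u (intra-orbital, opposite spins), u_p (inter-orbital, opposite spins),
    u_p - j_p (inter-orbital, parallel spins)."""
    M = len(n_up)
    E = sum(u * n_up[m] * n_dn[m] for m in range(M))
    E += sum(u_p * n_up[m] * n_dn[mp]
             for m in range(M) for mp in range(M) if m != mp)
    E += sum((u_p - j_p) * ns[m] * ns[mp]
             for ns in (n_up, n_dn) for m in range(M) for mp in range(m))
    return E

def averaged_U(u, j):
    """Multiplet-averaged dd interaction, U = u - (20/9) j."""
    return u - 20.0 * j / 9.0
```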
The photoemission study of PrNiO<sub>3</sub> has shown that the charge-transfer energy $`\mathrm{\Delta }`$ of PrNiO<sub>3</sub> is ∼1 eV and that the ground state is a mixture of the $`d^7`$ and $`d^8\underset{¯}{L}`$ configurations, where $`\underset{¯}{L}`$ denotes a hole at the oxygen 2$`p`$ orbitals. Since the ground state has a large amount of oxygen 2$`p`$ holes, it is also possible to describe it starting from the $`d^8\underset{¯}{L}`$ state. In this picture, the system can be viewed as a self-doped Mott insulator and the antiferromagnetic and insulating state in PrNiO<sub>3</sub> and NdNiO<sub>3</sub> may be interpreted as a spin and charge ordered state in the self-doped Mott insulator. Indeed, our calculations confirmed the existence of such ordered states which are consistent with the neutron diffraction measurement. They are illustrated in Fig. 1. In the state shown in Fig. 1(a), half of the oxygen sites have more holes than the other half. The excess holes located at the oxygen sites cause the ferromagnetic coupling between the neighboring two Ni spins. Therefore, the up-up-down-down stacking of the ferromagnetic planes along the (1,1,1)-direction is realized without orbital ordering. On the other hand, all the Ni sites have the same number of 3$`d`$ electrons. Let us denote this state as an oxygen-site charge-ordered (OCO) state. In the state shown in Fig. 1(b), while all of the oxygen sites are occupied by the same amount of holes, half of the Ni sites have more 3$`d`$ electrons than the other half. This state can be called a metal-site charge-ordered (MCO) state. In Fig. 2(a), the energies of the spin and charge ordered states relative to the FM and metallic state are plotted as functions of the charge-transfer energy $`\mathrm{\Delta }`$ for the cubic perovskite lattice. For $`\mathrm{\Delta }`$ ≲ 1 eV, the OCO and MCO states exist as stable solutions. The OCO state is lower in energy than the MCO state for −5 eV ≲ $`\mathrm{\Delta }`$ ≲ 1 eV. At $`\mathrm{\Delta }`$ = −7 eV, the OCO and MCO states are almost degenerate in energy. This result indicates that, as the charge-transfer energy $`\mathrm{\Delta }`$ decreases, the MCO state becomes favored compared to the OCO state. In Fig. 2(b), the energies of the OCO and MCO states relative to the FM state are plotted for the perovskite lattice with the orthorhombic distortion which is due to the tilting of the NiO<sub>6</sub> octahedra. Here, the tilting angle is 15°, which is a typical value found in $`R`$NiO<sub>3</sub> . At $`\mathrm{\Delta }`$ = −7 eV, the MCO state is slightly lower in energy than the OCO state, indicating that the orthorhombic distortion or the GdFeO<sub>3</sub>-type distortion favors the MCO state. However, for $`\mathrm{\Delta }`$ ≳ −5 eV, the OCO state has lower energy than the MCO state even with the substantial distortion. Since, in PrNiO<sub>3</sub> and NdNiO<sub>3</sub>, every Ni site has the same magnitude of the magnetic moment , it is reasonable to attribute the antiferromagnetic and insulating state in PrNiO<sub>3</sub> and NdNiO<sub>3</sub> to the OCO state. In the present model calculation without lattice distortions, the OCO state is higher in energy than the FM and metallic state for realistic $`\mathrm{\Delta }`$. However, since the charge ordering at the oxygen sites is expected to strongly couple with a lattice relaxation, a structural modulation may stabilize the OCO state as discussed in the following paragraphs.
The number of 3$`d`$ holes $`N_d^h`$ and spin $`S_d`$ at the Ni sites are plotted as functions of $`\mathrm{\Delta }`$ in Fig. 3(a). In the OCO state, $`N_d^h`$ is uniform at all the Ni sites. As $`\mathrm{\Delta }`$ decreases, $`N_d^h`$ becomes smaller because the transfer of holes from the Ni sites to the oxygen sites increases. In these solutions, $`N_d^h`$ is approximately two and the population of the $`d_{x^2-y^2}`$ orbital is the same as that of the $`d_{3z^2-r^2}`$ orbital, indicating that, in the OCO state, Ni is essentially +2 and the orbital degeneracy is lifted. On the other hand, $`N_d^h`$ are 2.00 and 2.28 in the MCO state for $`\mathrm{\Delta }`$ = 1 eV. The Ni sites with $`N_d^h`$ of 2.00 have the spin $`S_d`$ of 0.80 and are Ni<sup>2+</sup>-like, as are those in the OCO state. The Ni sites with $`N_d^h`$ of 2.28 have no spin and are well described by the $`d^8\underset{¯}{L}^2`$ state which can hybridize with the low-spin $`d^6`$ state. In this sense, the Ni sites can be viewed as Ni<sup>4+</sup>-like (low-spin $`d^6`$) sites. Therefore, the MCO state is a kind of charge disproportionated state in which two Ni<sup>3+</sup> sites are turned into the Ni<sup>2+</sup>-like and Ni<sup>4+</sup>-like sites as pointed out by Solovyev et al. based on LDA+$`U`$ calculations . An antiferromagnetic ordering of magnetic Ni<sup>2+</sup>-like sites (see Fig. 1(b)) is also consistent with the neutron diffraction results . As $`\mathrm{\Delta }`$ decreases, the difference of $`N_d^h`$ between the Ni<sup>2+</sup>-like and Ni<sup>4+</sup>-like sites becomes smaller in the MCO state. The difference almost disappears at $`\mathrm{\Delta }`$ = -7 eV, where the MCO state is almost degenerate in energy with the OCO state. Here, it is interesting to note that the charge disproportionation of 2Fe<sup>4+</sup> → Fe<sup>3+</sup> + Fe<sup>5+</sup> has been observed in CaFeO<sub>3</sub> which has highly negative charge-transfer energy $`\mathrm{\Delta }`$ . The number of 2$`p`$ holes $`N_p^h`$ at the oxygen sites are plotted as functions of $`\mathrm{\Delta }`$ in Fig. 3(b). The OCO state has the hole-rich and hole-poor oxygen sites. For $`\mathrm{\Delta }`$ of 1 eV, $`N_p^h`$ is ∼ 0.33 at the hole-rich oxygen sites and is ∼ 0.22 at the hole-poor oxygen sites. On the other hand, in the MCO state, $`N_p^h`$ is uniform at all the oxygen sites. Recently, Medarde et al. have observed a strong <sup>16</sup>O-<sup>18</sup>O isotope effect on the metal-insulator transition of $`R`$NiO<sub>3</sub>, indicating that the electron-lattice coupling is important in $`R`$NiO<sub>3</sub>. Very recently, Alonso et al. performed neutron diffraction studies of YNiO<sub>3</sub> and found the breathing-type distortion which may be an indication of charge ordering . In Fig. 4, the relative energies of the OCO and MCO states compared to the ferromagnetic state are plotted for $`\mathrm{\Delta }`$ of -1 eV as functions of the various lattice distortions with which the charge orderings are expected to couple. The MCO state becomes stable with the breathing-type lattice distortion as shown in Fig. 4(a). Here, $`\delta _\mathrm{O}`$ is the shift of the oxygen ions which gives the breathing-type distortion. The MCO state becomes the lowest in energy for rather small distortion, indicating that the MCO state coupled with the breathing-type distortion is relevant for YNiO<sub>3</sub>. The OCO state can be stabilized with the modulation of the bond length which is a consequence of the shift of the Ni ions as shown in Fig. 4(b).
Here, the shifts of the Ni ion are along the (1,1,1) direction and are given by ($`\delta _{\mathrm{Ni}}`$,$`\delta _{\mathrm{Ni}}`$,$`\delta _{\mathrm{Ni}}`$) and (-$`\delta _{\mathrm{Ni}}`$,-$`\delta _{\mathrm{Ni}}`$,-$`\delta _{\mathrm{Ni}}`$). Consequently, the Ni-O bond length for the FM coupling becomes shorter and that for the AFM coupling becomes longer. Fig. 4(c) shows that the OCO state is also stabilized by the modulation of the bond angle which is derived from the tilting of the octahedra and the shift of the oxygen ions. In this model distortion, the Ni-O-Ni bond angle for the FM coupling is 180° and that for the AFM coupling is smaller than 180°. In Fig. 4(c), the relative energy is plotted as a function of the smaller Ni-O-Ni bond angle. Although these distortions can stabilize the OCO state, we need unreasonably large modulations in order to make the OCO state lower in energy than the FM state. We need more experimental and theoretical investigations to identify the lattice distortion realized in PrNiO<sub>3</sub> and NdNiO<sub>3</sub>. In conclusion, we have studied spin and charge ordered states in self-doped Mott insulators with small or negative charge-transfer energy. It was found that two types of charge ordered states are possible: the OCO state with charge ordering at the oxygen sites and the MCO state with charge ordering at the transition-metal sites. The present HF calculation without distortion has shown that the OCO state has lower energy than the MCO state for moderately small $`\mathrm{\Delta }`$ and that the OCO and MCO states are almost degenerate for highly negative $`\mathrm{\Delta }`$. Since, in PrNiO<sub>3</sub> and NdNiO<sub>3</sub>, every Ni site has the same magnitude of the magnetic moment , the antiferromagnetic and insulating state in PrNiO<sub>3</sub> and NdNiO<sub>3</sub> can be attributed to the OCO state of the self-doped Mott insulators. The OCO state in the self-doped Mott insulators is novel in that, even without explicit doping, the spin ordering at the transition-metal sites and the charge ordering at the oxygen sites coexist and couple with each other just like the spin and charge ordered states in the doped Mott insulators. On the other hand, for YNiO<sub>3</sub>, the strong breathing-type distortion stabilizes the MCO state. Here, it is interesting to note that $`\mathrm{\Delta }`$ of CaFeO<sub>3</sub> is highly negative, so that CaFeO<sub>3</sub> can have the MCO state even without the strong breathing-type distortion. The charge disproportionated state observed in CaFeO<sub>3</sub> may be regarded as a kind of MCO state in the self-doped Mott insulators. In $`R`$NiO<sub>3</sub> and CaFeO<sub>3</sub>, the homogeneous state corresponds to an orbitally degenerate state ($`t_{2g}^6e_g^1`$ for Ni<sup>3+</sup> and $`t_{2g}^3e_g^1`$ for Fe<sup>4+</sup>). The charge disproportionation observed in CaFeO<sub>3</sub> and the OCO and MCO states for $`R`$NiO<sub>3</sub> may be another way to get rid of this orbital degeneracy besides the usual cooperative Jahn-Teller (or orbital) ordering. The authors would like to thank M. Medarde, J. Rodriguez-Carvajal, J. L. Garcia-Muñoz, J. Matsuno, A. Fujimori, I. Solovyev and J. B. Goodenough for useful discussions. This work was supported by the Netherlands Organization for Fundamental Research of Matter (FOM) and by the European Commission TMR network on Oxide Spin Electronics (OXSEN).
no-problem/9908/astro-ph9908126.html
ar5iv
text
# Steep Slopes and Preferred Breaks in GRB Spectra: the Role of Photospheres and Comptonization ## 1 Introduction The standard fireball shock scenario for gamma ray bursts (GRB) assumes a synchrotron and/or an inverse Compton (IC) spectrum, in good general agreement with a number of observations (Tavani 1996; Cohen et al.1997, Panaitescu, Spada & Mészáros 1999). The physical motivation for this scenario is strong; it would indeed be surprising if the expansion of the ejecta from the huge energies inferred in GRB did not lead to shocks, where synchrotron and IC radiation play a significant role. One need only remind oneself of the dominant role of these effects in AGN jets and supernova remnants. However, in a significant fraction of bursts there is evidence in the 1-10 keV range for spectral intensity slopes steeper than 1/3 (photon number slopes flatter than -2/3), as well as in some cases for a soft X-ray excess over the extrapolated power law from higher energies (Preece et al.1998, 1999; Crider et al.1997, etc.), and this has served as the motivation for considering a thermal (e.g. Liang 1997) or a thermal/nonthermal (Liang et al.1999) comptonization mechanism. While an astrophysical model where this mechanism would arise naturally has been left largely unspecified, Ghisellini & Celotti (1999) have pointed out that if internal shocks lead to high compactness parameters and pair formation, this could naturally produce conditions where the pair temperature and the scattering opacity are self-regulated at values favoring comptonization. There is also a trend in current analyses of GRB spectra indicating that the apparent clustering of spectral break energies in the 50 keV-1 MeV range is not due to observational selection effects (e.g. Preece et al.1998; Brainerd et al.1998; see however Dermer et al.1999a). Models to explain a preferred break physically, e.g. through a Compton attenuation model (Brainerd et al.1998) require reprocessing by an external medium whose column density adjusts itself near a few g cm<sup>-2</sup>. More recently a preferred break has been attributed to the blackbody peak of the fireball photosphere when this occurs at the comoving pair recombination temperature in the accelerating regime, which is blueshifted to the appropriate observer frame energy (Eichler & Levinson 1999), and in this case the Rayleigh-Jeans portion of the photosphere provides a steep low energy spectral slope. To get a photosphere close enough in to occur at the pair annihilation temperature requires an extremely low baryon load outflow; the presence of a high energy power law extending to GeV energies then requires a separate explanation. At the other extreme of large baryon load outflows, Thompson (1994) has considered scattering off MHD turbulence in photospheres in the coasting regime, which boosts the adiabatically cooled thermal photons near the photospheric peak up to a larger break energy, leading to canonical energy slopes of 0 and -1 (photon slopes -1 and -2) below and above the break.
While this does not explain steep low energy slopes, it does provide nonthermal spectral energy power laws of slope ∼0 close to those of synchrotron, as well as a standard temperature which would lead to a preferred break in the observer frame, if the bulk Lorentz factor is in a narrow range. In this paper we synthesize and extend some of these ideas within the framework of the standard fireball internal shock model. While in our previous work (e.g. Mészáros , Laguna & Rees 1993, Rees & Mészáros 1994) we considered photospheres and pair formation, their thermal character, the strong uncompensated photosphere redshift in the coasting phase, and the lack of a straightforward way to get from these a power law extending to GeV energies served as compelling arguments for concentrating on the synchrotron and inverse Compton mechanisms. Here we re-examine the relative roles of photospheres and shocks, as well as those of synchrotron, pair breakdown, scattering on MHD waves and comptonization. This leads to a unified picture where both shocks and/or a photosphere with a nonthermal component can provide most of the luminosity, and where the synchrotron and IC mechanisms in shocks provide the primary spectrum or (in high comoving luminosity cases) the trigger for pair breakdown leading to comptonization. We investigate the range of burst model parameters over which different mechanisms come to the fore, and discuss their role in providing flatter or steeper spectral slopes as well as preferred spectral break energies. ## 2 Fireball Photospheric Luminosity We assume a fireball wind of total luminosity output $`L_o=10^{52}L_{52}\mathrm{erg}\mathrm{s}^{-1}`$ expanding from some initial radius $`r_o`$, which for the sake of argument is normalized to the last stable orbit $`r_o`$ at three Schwarzschild radii around a non-rotating black hole of mass $`M_{bh}=10\mu _1M_{\odot }`$, with a corresponding Kepler rotation timescale $`t_o`$, $`r_o=`$ $`6GM_{bh}/c^2=0.9\times 10^7\mu _1\text{cm},`$ $`t_o=`$ $`2\pi r_o^{3/2}(2GM_{bh})^{-1/2}=3.25\times 10^{-3}\mu _1\text{s}.`$ (1) The initial blackbody temperature in electron rest mass units at that radius is $$\mathrm{\Theta }_o=(k/m_ec^2)(L_o/4\pi r_o^2c\mathrm{\Gamma }_o^2a)^{1/4}=2.42L_{52}^{1/4}\mu _1^{-1/2}\mathrm{\Gamma }_o^{-1/2},$$ (2) (or ∼1.2 MeV) for a $`10\mu _1`$ solar mass BH and an initial bulk Lorentz factor $`\mathrm{\Gamma }_o\sim 1`$ at $`r_o`$. As the optically thick (adiabatic) wind expands with comoving internal energy $`ϵ^{\prime }\propto n^{\prime 4/3}`$, where $`n^{\prime }`$ is the comoving baryon number density, the baryon bulk Lorentz factor increases as $`\mathrm{\Gamma }\propto r`$ and the comoving temperature drops $`\mathrm{\Theta }^{\prime }\propto r^{-1}`$. The $`e^\pm `$ pairs drop out of equilibrium (Paczyński 1986, 1990) at $`\mathrm{\Theta }_p^{\prime }\simeq 0.03`$ (∼17 keV), at a radius $`r_p`$ where the bulk Lorentz factor has grown linearly to a value $`\mathrm{\Gamma }_p`$, $$\frac{r_p}{r_o}=\frac{\mathrm{\Gamma }_p}{\mathrm{\Gamma }_o}=\frac{\mathrm{\Theta }_o}{\mathrm{\Theta }_p}=7\times 10^1L_{52}^{1/4}\mu _1^{-1/2}\mathrm{\Gamma }_o^{-1/2}.$$ (3) This is the radius of an $`e^\pm `$ pair photosphere, above which the scattering optical depth is less than unity, unless the wind carries enough baryons to provide an electron scattering photosphere above $`r_p`$. For a wind baryon load $`\dot{M}`$ parameterized by a dimensionless entropy $`\eta =L/\dot{M}c^2`$, the baryonic electrons lead to a photosphere larger than equation (3) if $`\eta <\eta _p`$ (equation (7)).
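For orientation, here is a Python sketch (ours) evaluating the characteristic scales of equations (1)-(3) in cgs units; the pair-freezeout temperature $`\mathrm{\Theta }_p^{\prime }=0.03`$ is the value quoted in the text.

```python
import math

G, C, K_B = 6.674e-8, 2.998e10, 1.381e-16        # cgs constants
M_SUN, M_E, A_RAD = 1.989e33, 9.109e-28, 7.566e-15

def fireball_base(L52=1.0, mu1=1.0, gamma0=1.0):
    """r_o, t_o, Theta_o and the pair-photosphere radius r_p/r_o, eqs. (1)-(3)."""
    L, M = L52 * 1e52, 10.0 * mu1 * M_SUN
    r0 = 6.0 * G * M / C**2                                # ~0.9e7 mu1 cm
    t0 = 2.0 * math.pi * r0**1.5 / math.sqrt(2.0 * G * M)  # ~3.3e-3 mu1 s
    T0 = (L / (4.0 * math.pi * r0**2 * C * gamma0**2 * A_RAD)) ** 0.25
    theta0 = K_B * T0 / (M_E * C**2)                       # ~2.4, i.e. ~1.2 MeV
    return r0, t0, theta0, theta0 / 0.03                   # last entry: r_p/r_o

print(fireball_base())   # roughly (0.9e7 cm, 3.2e-3 s, 2.45, ~80)
```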
As long as the wind remains optically thick it is radiation dominated and continues to expand as a relativistic gas with $`\mathrm{\Gamma }\propto r`$ (e.g. Shemi & Piran 1990). Clearly $`\mathrm{\Gamma }`$ cannot exceed $`\eta =L_o/\dot{M}c^2`$, and for large baryon loads or moderate $`\eta `$ this occurs at a saturation radius $`r_s/r_o=\eta /\mathrm{\Gamma }_o`$ (for low loads or large $`\eta `$ see however equation (11)). Above the saturation radius, the flow continues to coast with $`\mathrm{\Gamma }=`$ constant equal to the final value achieved at $`r_s`$. An electron scattering photosphere is defined by $`\tau _s^{\prime }=n^{\prime }Y\sigma _Tr_{ph}/\mathrm{\Gamma }=1`$, where $`n^{\prime }=(L/4\pi r^2m_pc^3\mathrm{\Gamma }\eta )`$ is the comoving baryon density, $`Y`$ is the number of electrons per baryon and $`r/\mathrm{\Gamma }`$ is a typical comoving length. For relatively low values of $`\eta <\eta _{*}`$ (defined in equation (5)) the flow remains optically thick above the saturation radius $`r_s`$ and the photosphere arises in the coasting regime $`\mathrm{\Gamma }=\eta =`$ constant, at a radius $`r_{ph}>r_s`$, (Rees & Mészáros , 1994, Thompson 1994), $`{\displaystyle \frac{r_{ph}^>}{r_o}}=`$ $`{\displaystyle \frac{L\sigma _TY}{4\pi r_om_pc^3\eta ^3}}=1.3\times 10^6L_{52}\mu _1^{-1}Y\eta _2^{-3}`$ $`=`$ $`\mathrm{\Gamma }_o^{-1}\eta _{*}(\eta /\eta _{*})^{-3}.`$ (4) Here $`\eta _{*}`$ is the critical value at which $`r_{ph}^>=r_s`$, $$\eta _{*}=\left(\frac{L\sigma _TY\mathrm{\Gamma }_o}{4\pi m_pc^3r_o}\right)^{1/4}\simeq 10^3(L_{52}\mu _1^{-1}Y\mathrm{\Gamma }_o)^{1/4},$$ (5) which is the wind equivalent of the critical $`\eta `$ of an impulsive fireball photosphere discussed in Mészáros , Laguna & Rees 1993, labeled there $`\eta _m`$. For low baryon loads where $`\eta >\eta _{*}`$ a baryonic electron photosphere appears in the accelerating portion $`\mathrm{\Gamma }\propto r`$ of the flow at $`r_{ph}<r_s`$, $`{\displaystyle \frac{r_{ph}^<}{r_o}}=`$ $`\left({\displaystyle \frac{L\sigma _TY}{4\pi r_om_pc^3\eta \mathrm{\Gamma }_o^2}}\right)^{1/3}=2.35\times 10^3(L_{52}Y\mathrm{\Gamma }_o^{-2})^{1/3}\mu _1^{-1/3}\eta _2^{-1/3}`$ $`=`$ $`\mathrm{\Gamma }_o^{-1}\eta _{*}(\eta /\eta _{*})^{-1/3}.`$ (6) These photospheric radii are shown in Figure 1. For sufficiently high $`\eta `$, the baryon photospheric radius given by equation (6) can formally become smaller than the pair photosphere radius of equation (3), in which case the latter should be used. The minimum possible photospheric radius is therefore achieved at $`r=r_p`$ given by equation (3), requiring extremely large values of $`\eta >\eta _p`$ ($`>\eta _{*}`$), $$\eta _p=4\times 10^6L_{52}^{1/4}\mu _1^{1/2}Y\mathrm{\Gamma }_o^{-1/2}=3.75\times 10^3\eta _{*}(\mu _1Y\mathrm{\Gamma }_o^{-1})^{3/4},$$ (7) which implies extremely low baryon loads, $`\dot{M}\lesssim 1.5\times 10^{-8}M_{\odot }\mathrm{s}^{-1}`$. The Lorentz factor attained at the photosphere, $`\mathrm{\Gamma }_{ph}`$, grows linearly with $`\eta `$ for $`1\lesssim \eta \lesssim \eta _{*}`$ (assuming $`\mathrm{\Gamma }_o=1`$), then it decays as $`\eta ^{-1/3}`$ up to $`\eta _p`$ and remains constant above that. The comoving temperature $`\mathrm{\Theta }_{ph}^{\prime }\propto n^{\prime 1/3}\propto r^{-1}`$ for a photosphere at $`r_{ph}<r_s`$, while $`\mathrm{\Theta }_{ph}^{\prime }\propto r^{-2/3}`$ for $`r_{ph}>r_s`$.
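The two photospheric branches are straightforward to code; a minimal Python sketch (ours), taking $`Y=\mathrm{\Gamma }_o=1`$ and the fiducial $`L`$ and $`r_o`$ as defaults:

```python
import math

SIGMA_T, M_P, C = 6.652e-25, 1.673e-24, 2.998e10   # cgs

def eta_star(L=1e52, r0=0.9e7, Y=1.0, gamma0=1.0):
    """Critical entropy of eq. (5); ~1e3 for the fiducial parameters."""
    return (L * SIGMA_T * Y * gamma0 /
            (4.0 * math.pi * M_P * C**3 * r0)) ** 0.25

def r_ph_over_r0(eta, L=1e52, r0=0.9e7, Y=1.0, gamma0=1.0):
    """Photospheric radius: eq. (4) for eta < eta_*, eq. (6) above it."""
    A = L * SIGMA_T * Y / (4.0 * math.pi * r0 * M_P * C**3)
    if eta < eta_star(L, r0, Y, gamma0):
        return A / eta**3                          # coasting photosphere
    return (A / (eta * gamma0**2)) ** (1.0 / 3.0)  # accelerating photosphere

print(eta_star(), r_ph_over_r0(100.0))   # ~1.1e3 and ~1.3e6
```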
The observer-frame photospheric temperature $`\mathrm{\Theta }_{ph}=\mathrm{\Theta }_{ph}^{\prime }\mathrm{\Gamma }_{ph}`$ is then $$\frac{\mathrm{\Theta }_{ph}}{\mathrm{\Theta }_o}=\{\begin{array}{cc}(r_{ph}/r_s)^{-2/3}=(\eta /\eta _{*})^{8/3},\hfill & \text{for }\eta <\eta _{*},r_{ph}>r_s\text{;}\hfill \\ 1,\hfill & \text{for }\eta >\eta _{*},r_{ph}<r_s\text{.}\hfill \end{array}$$ (8) The observed photospheric thermal luminosity is $`L_{pht}\propto r^2\mathrm{\Gamma }^2\mathrm{\Theta }_{ph}^{\prime 4}\propto r^0`$ for $`r<r_s`$ and $`L_{pht}\propto r^{-2/3}`$ for $`r>r_s`$, hence $$\frac{L_{pht}}{L_o}=\{\begin{array}{cc}(r_{ph}^>/r_s)^{-2/3}=(\eta /\eta _{*})^{8/3},\hfill & \text{for }\eta <\eta _{*},r_{ph}>r_s\text{;}\hfill \\ 1,\hfill & \text{for }\eta >\eta _{*},r_{ph}<r_s\text{.}\hfill \end{array}$$ (9) ## 3 Kinetic and Internal Shock Luminosity For typical values of $`\eta \lesssim 10^3`$ it is clear that the terminal baryon Lorentz factor is $`\mathrm{\Gamma }=\eta `$, since the photosphere occurs beyond the saturation radius after the baryons are already coasting with $`\eta `$. However the terminal baryon Lorentz factor is less obvious in cases where $`\eta >\eta _{*}`$. For such values, a photosphere occurs in the regime where $`\mathrm{\Gamma }\propto r`$, so $`r_{ph}<r_s`$, but the question is what happens to the baryons above this photosphere, and what is the appropriate value of $`r_s`$. One possibility is that the outflow has magnetic fields strong enough that Poynting stresses continue to accelerate baryons outside the photosphere. If radiation provides the dominant relativistic pressure, to achieve a saturation radius at the value $`r_s/r_o=\eta /\mathrm{\Gamma }_o`$ would require the baryons to be coupled to the radiation out to that radius, beyond the photosphere. Alternatively, it is sometimes assumed that the baryons decouple at the photosphere $`r_{ph}`$, and coast thereafter with $`\mathrm{\Gamma }=\mathrm{\Gamma }_{ph}=\mathrm{\Gamma }_o(r_{ph}/r_o)`$. However, the fact that the outflow has become optically thin to scattering means that most photons no longer scatter. Nonetheless, most of the electrons above the photosphere can still scatter with a decreasing fraction of free-streaming photons, as long as the comoving Compton drag time $`t_{dr}^{\prime }=m_pc^2/c\sigma _Tu_\gamma ^{\prime }`$ is less than the comoving expansion time $`t_{ex}^{\prime }=r/c\mathrm{\Gamma }`$. The ratio of these two times, $$(t_{dr}^{\prime }/t_{ex}^{\prime })=(4\pi m_pc^3r\mathrm{\Gamma }^3/L\sigma _T)=(\eta _{*}/\mathrm{\Gamma }_o)^{-4}(r/r_o)^4,$$ (10) exceeds unity above a radius $`r_{*}/r_o=\eta _{*}/\mathrm{\Gamma }_o`$. Thus for $`\eta >\eta _{*}`$ the appropriate saturation Lorentz factor is $`\eta _{*}`$ (instead of the larger $`\eta `$) and the saturation radius is $`r_s/r_o=r_{*}/r_o=\eta _{*}/\mathrm{\Gamma }_o<\eta /\mathrm{\Gamma }_o`$. Thus, in general the terminal bulk Lorentz factor and the saturation radius are $$\mathrm{\Gamma }_s=\mathrm{min}[\eta ,\eta _{*}],\;\;(r_s/r_o)=\mathrm{min}[\eta ,\eta _{*}]/\mathrm{\Gamma }_o,$$ (11) where the critical value $`\eta _{*}`$ is given by equation (5). The observer-frame kinetic (matter) luminosity of the outflow $`L_k\propto r^2\mathrm{\Gamma }^2n^{\prime }(kT^{\prime }+m_pc^2)\propto r`$ for $`r<r_s`$ and $`L_k\propto r^0`$ for $`r>r_s`$. For $`\eta <\eta _{*}`$ it is clear that $`L_k`$ reaches the level $`L_o=\dot{M}c^2\eta `$ since the terminal bulk Lorentz factor saturates at the initial dimensionless entropy $`\eta `$.
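A piecewise evaluation of equations (8), (9) and (11), as a hedged Python sketch (ours):

```python
def gamma_s(eta, eta_s):
    """Terminal Lorentz factor, eq. (11): Gamma_s = min(eta, eta_*)."""
    return min(eta, eta_s)

def photosphere_peak_and_luminosity(eta, eta_s):
    """(Theta_ph/Theta_o, L_pht/L_o) from eqs. (8)-(9): both equal
    (eta/eta_*)**(8/3) for eta < eta_*, and 1 above it."""
    if eta < eta_s:
        f = (eta / eta_s) ** (8.0 / 3.0)
        return f, f
    return 1.0, 1.0

for eta in (100.0, 300.0, 1000.0):
    print(eta, photosphere_peak_and_luminosity(eta, 1000.0))
# high baryon loads push the thermal peak down in energy and in luminosity
```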
However, for $`\eta >\eta _{*}`$ the terminal $`L_k`$ can only reach a lower level, since the bulk Lorentz factor saturates at the lower value $`\eta _{*}<\eta `$. The terminal value of $`L_k`$ above $`r_s`$ is then $$\frac{L_k}{L_o}=\{\begin{array}{cc}1,\hfill & \text{ for }\eta <\eta _{*},r>r_s\text{;}\hfill \\ (\eta _{*}/\eta )<1,\hfill & \text{ for }\eta >\eta _{*},r>r_s\text{.}\hfill \end{array}$$ (12) Internal shocks can occur when the flow has variations in the initial $`\eta `$ or $`L_o`$ with consequent variations in the terminal Lorentz factors, so shells with different $`\mathrm{\Gamma }_s`$ catch up with each other. The shocks cannot occur at $`r<r_s`$ since in this region both shells accelerate at the same rate $`\mathrm{\Gamma }\propto r`$ and do not catch up. After $`\mathrm{\Gamma }`$ has saturated at $`r>r_s`$, shocks develop at radii $`r\sim 2ct_v\mathrm{\Gamma }_1\mathrm{\Gamma }_2\sim 2ct_v\mathrm{\Gamma }^2`$ for shells whose terminal Lorentz factors differ by $`\mathrm{\Delta }\mathrm{\Gamma }=\mathrm{\Gamma }_2-\mathrm{\Gamma }_1\sim \mathrm{\Gamma }`$ as a result of initial variations in $`\eta `$ or $`L`$ over timescales $`t_v=\xi _vt_o<t_w`$, where $`\xi _v\geq 1`$ and $`t_o`$ is the minimum dynamic timescale in equation (1). $$(r_{sh}/r_o)=2.17\times 10^5\xi _v\mu _1\mathrm{\Gamma }_2^2=2.17\times 10^1\xi _v\eta _{*}^2(\mathrm{\Gamma }/\eta _{*})^2.$$ (13) (The factor $`21.7=2\times 2\pi \times \sqrt{3}`$ comes from the factor $`2ct_v`$ in the shock radius definition, the $`2\pi `$ from taking the rotation time at $`r_o`$ rather than the crossing time, and $`\sqrt{3}`$ because $`r_o`$ is at three Schwarzschild radii.) To produce non-thermal radiation, shocks must occur in an optically thin region, which requires $$\eta >\eta _{sh,m}=1.42\times 10^2L_{52}^{1/5}\mu _1^{-1/5}Y^{1/5}\xi _v^{-1/5},\text{for }r_{sh}>r_{ph}^>,$$ (14) in order for a shock to occur above a photosphere which is in the coasting region, $`r_{sh}>r_{ph}^>>r_s`$. For $`\xi _v=10^3`$ corresponding to variability timescales $`10^3t_o\sim 1`$ s, this can be as low as $`\eta _{sh,m}\sim 35`$. For even higher loads such that $`\eta <\eta _{sh,m}`$, the photosphere is further out in the coasting regime, and any shocks would occur inside the photosphere. Internal shock radii are shown in Figure 1 as a function of $`\eta `$ for various multiples $`\xi _v=t_v/t_o`$ of the Kepler time at the last stable orbit $`t_o`$. However, from causality considerations shocks may occur at even lower radii, formally corresponding to $`\xi _v\sim 1/21.7`$, since as soon as the $`r>r_s`$ coasting regime is reached shells of different $`\eta `$ can catch up with each other. This would allow smaller regions where the variability timescale is as small as $`r_o/c`$. For very low baryon loads or very high $`\eta >\eta _{*}`$, the photosphere arises in the accelerating region, and in this region shocks are not possible. They are, however, possible beyond $`r_s=r_{*}=r_o\eta _{*}\mathrm{\Gamma }_o^{-1}`$, where the baryon Lorentz factor has saturated to the value $`\mathrm{\Gamma }_s=\eta _{*}`$. Any initial variations on a timescale $`t_v`$ will lead to shocks at a radius $`r_{sh}`$ given by equation (13) with $`\mathrm{\Gamma }_s=\eta _{*}`$, which will be located above $`r_s=r_{*}`$.
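The logic of equations (11), (13) and (14) is easy to wire together numerically; the following is a rough sketch (our function names; it assumes the $`21.7\xi _v\mathrm{\Gamma }^2`$ form of equation (13)).

```python
def r_shock_over_r_o(gamma, xi_v=1.0):
    """Equation (13): internal-shock radius in units of r_o."""
    return 21.7 * xi_v * gamma**2

def eta_shock_min(L52=1.0, mu1=1.0, Y=1.0, xi_v=1.0):
    """Equation (14): minimum eta for shocks above a coasting-regime photosphere."""
    return 1.42e2 * (L52 * Y / (mu1 * xi_v)) ** 0.2

def shocks_above_photosphere(eta, xi_v=1.0, **pars):
    """True if r_sh > r_ph^>, so the shocks radiate in the optically thin region."""
    return eta > eta_shock_min(xi_v=xi_v, **pars)

print(eta_shock_min(xi_v=1e3))  # ~36, cf. the eta_sh,m ~ 35 quoted for t_v ~ 1 s
```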
There is thus a maximum possible internal shock radius for any given $`\xi _v`$, $$r_{sh,M}=2.17\times 10^1\xi _v\eta _{*}^2r_o=2.17\times 10^{14}(L_{52}\mu _1\mathrm{\Gamma }_oY)^{1/2}\xi _v\text{cm},$$ (15) which, unless $`\xi _v`$ is large or the external density is very large, can still be smaller than the radius where external shocks are expected. The range where internal shocks can occur is shown as the horizontally or vertically striped region in Figure 1. The internal shocks in the wind can dissipate a fraction of the terminal kinetic energy luminosity $`L_k`$ above the saturation radius $`r_s`$. For a mechanical efficiency $`\epsilon _{sh}=10^{-1}\epsilon _{sh1}`$ of conversion of kinetic energy $`L_k`$ into random energy which can be radiated, the shock luminosity in the radiative regime is $$\frac{L_{sh}}{L_o}=\{\begin{array}{cc}10^{-1}\epsilon _{sh1},\hfill & \text{ for }\eta <\eta _{*},r>r_s\text{;}\hfill \\ 10^{-1}\epsilon _{sh1}(\eta /\eta _{*})^{-1},\hfill & \text{ for }\eta >\eta _{*},r>r_s\text{.}\hfill \end{array}$$ (16) ## 4 Photosphere and Shock Spectra: Comptonization and Pair Formation The basic photospheric spectrum is that of a blackbody, $`xF_x\propto x^3\mathrm{exp}(-x/\mathrm{\Theta }_{ph})`$, with a thermal peak at $$x_{ph}\simeq 3\mathrm{\Theta }_{ph}\leq x_{pho}=3\mathrm{\Theta }_o,$$ (17) where $`\mathrm{\Theta }_{ph},\mathrm{\Theta }_o`$ are given by equations (8) and (2). Notice that at $`r_s`$ the level is $`L_{pht}=L_o`$ and the spectral peak is at $`x_{ph}\simeq 3\mathrm{\Theta }_o\sim 1`$, from equations (9) and (8). For $`r_{ph}>r_s`$, both $`L_{pht}`$ and $`\mathrm{\Theta }_{ph}`$ decrease $`\propto (r_{ph}/r_s)^{-2/3}`$. In an $`xF_x`$ or $`xL_x`$ diagram the thermal peak $`x_{ph}`$ (labeled with T in Figure 2) moves down and to the left with a slope 1 as $`\eta `$ decreases. Before leaving the photosphere, however, the blackbody photons can act as seeds for scattering to higher energies, if there is a substantial amount of energy in scattering centers moving with characteristic speeds or energies larger than that of the emitting electrons (which are subrelativistic since $`\mathrm{\Theta }_{ph}^{\prime }\lesssim \mathrm{\Theta }_p^{\prime }=0.03`$). Alfvén waves generated by magnetic field reconnection or MHD turbulence can act as such centers (Thompson 1994). Alfvén waves travel at speeds $`V_w`$ nearly the speed of light, with an equivalent comoving electron energy $`\mathrm{\Theta }_w^{\prime }=kT_w^{\prime }/m_ec^2=(1/3)(\gamma _w^2-1)\simeq (1/3)V_w^2/c^2\lesssim (1/3)`$, where $`V_w/c\lesssim 1`$, and these waves can be efficiently damped for $`\tau _s>1`$. Alternatively, shocks which occur inside the photosphere may also induce Alfvén waves. Repeated scattering on the Alfvén waves acts in the same way as comptonization off hot electrons. The spectrum follows from conservation of photon number and conservation of energy. As seen in the observer frame, starting from seed photons at energy $`\mathrm{\Theta }_{ph}\lesssim \mathrm{\Theta }_o`$ this yields a spectrum $`F_x\propto x^0`$ or $`xF_x\propto x`$, as the conserved photon number is scattered up in energy. The photons diffuse up and to the right on a slope $`xF_x\propto x`$ in the $`xF_x`$ diagram. From conservation of energy, the maximum energy they can reach is $`3\mathrm{\Theta }_o\sim 1`$.
However, if the energy in Alfvén waves or turbulence is a fraction $`ϵ_w<1`$ of the total $`L_o`$, the comptonized photosphere luminosity is $$L_{phc}=ϵ_wL_o,$$ (18) and the comptonized photosphere spectrum $`xF_x\propto x`$ can extend only up to a break energy $$x_{phc}=\mathrm{min}[ϵ_w3\mathrm{\Theta }_o,(L_{phc}/L_{ph})3\mathrm{\Theta }_{ph}]\leq 3\mathrm{\Theta }_o.$$ (19) This is still much less than the wave equivalent energy in the lab-frame, $`\mathrm{\Theta }_w=(kT_w^{\prime }/m_ec^2)\eta \lesssim (1/3)\mathrm{\Gamma }`$. Thus above this break energy, an increasingly smaller fraction of the total photons can be scattered with spectrum $`xF_x\propto x^0`$ up to a maximum energy in the lab frame $$x_w\simeq 3\mathrm{\Theta }_w\lesssim \mathrm{\Gamma }.$$ (20) As discussed by Thompson (1994) and in classical references on comptonization, such a spectrum $`xF_x\propto x^0`$ is naturally expected from the diffusion of photons out of bounded scattering regions (such as reconnection hot-spots or turbulent cells in this case). We show in Figure 2 the comptonized photosphere component (labeled PHC), assuming a turbulent wave energy level $`ϵ_w=10^{-1}L_o`$, for various values of $`\eta `$, while in Figure 3 a lower value $`ϵ_w=10^{-2}`$ is assumed. Internal shocks outside the photosphere, expected in the coasting regime if the outflow is unsteady, provide a significant nonthermal component of the spectrum. The primary energy loss mechanism in the shocks is synchrotron radiation, or inverse Compton (IC). It is common to assume that at the shocks the magnetic field energy density is some fraction $`ϵ_B`$ of the equipartition value with the outflow. The dimensionless field at the base of the flow is then $$x_{Bo}=(B/B_Q)=(2L_oϵ_B/r_o^2c)^{1/2}B_Q^{-1}=2L_{52}^{1/2}ϵ_B^{1/2}\mu _1^{-1},$$ (21) where $`B_Q=2\pi m_e^2c^3/eh=4.44\times 10^{13}G`$ is the critical field. Thus the dimensionless comoving field at the shock is $$x_{Bsh}^{\prime }=x_{Bo}(r_{sh}/r_o)^{-1}\mathrm{\Gamma }^{-1}=10^{-7}\mathrm{\Gamma }_2^{-3}L_{52}^{1/2}ϵ_B^{1/2}\xi _v^{-1}\mu _1^{-1}.$$ (22) Note that if the fields are turbulently generated and equipartition is with respect to $`L_{sh}`$, instead of $`L_o`$, then from equation (16), $`ϵ_B\sim \mathrm{min}[\epsilon _{sh},\epsilon _{sh}(\eta _{*}/\eta )]`$. This is a time average value of $`B^{\prime }`$ over the duration of the shocks, and it neglects any time-varying compression factors associated with individual pulses. The observer-frame dimensionless synchrotron peak frequency in units of electron rest mass is $`x_{sy}=(3/2)x_B^{\prime }\gamma _m^2\mathrm{\Gamma }`$, where the minimum electron random Lorentz factor in internal shocks $`\gamma _m\simeq 0.9\times 10^3ϵ_e`$ is typically a fraction $`ϵ_e`$ of the equipartition value $`0.5m_p/m_e`$, remembering that in internal shocks the shells collide at relative Lorentz factors $`\mathrm{\Gamma }_{rel}\sim 1`$. Thus the observed synchrotron peak is at $$x_{sy}=x_{*}(\mathrm{\Gamma }/\eta _{*})^{-2}=1.26\times 10^1\mathrm{\Gamma }_2^{-2}L_{52}^{1/2}ϵ_B^{1/2}ϵ_e^2\mu _1^{-1}\xi _v^{-1},$$ (23) where $`x_{*}=1.26\times 10^{-1}ϵ_B^{1/2}ϵ_e^2\mu _1^{-1/2}(Y\mathrm{\Gamma }_o)^{-1/2}\xi _v^{-1}`$. The comoving synchrotron cooling time is $`t_{sy}^{\prime }=2.5\times 10^{-19}x_B^{\prime -2}\gamma ^{-1}=2.5\times 10^{-8}\mathrm{\Gamma }_2^6L_{52}^{-1}ϵ_B^{-1}ϵ_e^{-1}\mu _1^2\xi _v^2`$ s, using equation (22). For $`ϵ_B\ll 1`$ the IC mechanism can become important for the MeV radiation. However, for $`ϵ_B`$ not too far below equipartition values, the IC losses occur on timescales comparable to or longer than synchrotron, and produce photons well above the MeV range where breaks and anomalous slopes occur.
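As a quick sanity check on equations (22) and (23), here is a small numerical sketch (our names; fiducial parameters of order unity are assumed).

```python
def x_B_comoving(gamma2, L52=1.0, eps_B=1.0, mu1=1.0, xi_v=1.0):
    """Equation (22): comoving field at the shock in units of B_Q = 4.44e13 G."""
    return 1e-7 * gamma2**-3 * (L52 * eps_B) ** 0.5 / (xi_v * mu1)

def x_synchrotron_peak(gamma2, L52=1.0, eps_B=1.0, eps_e=1.0, mu1=1.0, xi_v=1.0):
    """Equation (23) via x_sy = (3/2) x'_B gamma_m^2 Gamma, with
    gamma_m ~ 0.9e3 eps_e and Gamma = 1e2 gamma2."""
    gamma_m = 0.9e3 * eps_e
    return 1.5 * x_B_comoving(gamma2, L52, eps_B, mu1, xi_v) * gamma_m**2 * 1e2 * gamma2

print(x_synchrotron_peak(3.0))  # ~1.4 (i.e. a peak near ~0.7 MeV) for Gamma = 300
```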
The comoving expansion time is $`t_{ex}^{\prime }=r/c\mathrm{\Gamma }=0.65\mathrm{\Gamma }_2\mu _1\xi _v`$ s, and the ratio $`t_{sy}^{\prime }/t_{ex}^{\prime }`$ exceeds unity only for rather large $`\mathrm{\Gamma }\gtrsim 3\times 10^3L_{52}^{1/5}ϵ_B^{1/5}ϵ_e^{1/5}\mu _1^{-1/5}\xi _v^{-1/5}`$ if $`\gamma \sim \gamma _m`$. Thus the electrons are in the radiative regime above the synchrotron break $`x_{sy}`$ in all cases considered, and also for a large range below it. Above the break $`x_{sy}`$ the synchrotron spectrum is then $`xF_x\propto x^0`$, or more generally $`xF_x\propto x^{-(p-2)/2}`$, from the power law electrons $`N(\gamma )\propto \gamma ^{-p}`$ above $`\gamma =0.9\times 10^3ϵ_e`$ produced by Fermi acceleration in the shocks (for the rest of the discussion we assume $`p=2`$ or $`x^0`$ above the break as an example). Below $`x_{sy}`$ one has $`xF_x\propto x^{1/2}`$, down to a synchrotron self-absorption frequency $$x_a\lesssim 10^{-4}\mathrm{\Gamma }_2^{-4/5}L_{52}^{3/10}\epsilon _{sh1}^{2/5}ϵ_{B1}^{1/10}ϵ_e^{-4/5}\mu _1^{-3/5}\xi _v^{-3/5},$$ (24) where the equality applies if $`xF_x\propto x^{1/2}`$ down to $`x_a`$, and the inequality applies if there is a transition to an adiabatic regime $`\propto x^{4/3}`$ before reaching $`x_a`$ (for very high $`\mathrm{\Gamma }`$ or $`\xi _v`$). Schematic synchrotron spectra are shown in Figure 2 from shocks where $`\epsilon _{sh}=10^{-1}`$, and in Figure 3 (left panel) where $`\epsilon _{sh}=3\times 10^{-3}`$, assuming $`p=2`$. These curves, labeled S, show the break at $`x_{sy}`$ and a radiative slope 1/2 below that down to $`x_a`$. For cases where an adiabatic regime is achieved at frequencies below $`x_{sy}`$ but above $`x_a`$, the synchrotron slope would steepen to 4/3 and $`x_a`$ moves further down in frequency. Pair breakdown via $`\gamma \gamma \rightarrow e^\pm `$ can occur when the comoving synchrotron luminosity from the internal shocks is large enough, which is the case over a non-negligible range of parameter space. The comoving compactness parameter is $`\ell ^{\prime }=n_\gamma ^{\prime }\sigma _T(r/\mathrm{\Gamma })`$, where $`n_\gamma ^{\prime }=(\alpha L_{sh}/4\pi r_{sh}^2m_ec^3\mathrm{\Gamma }^2)`$ is the comoving photon density and $`\alpha `$ is the fraction above threshold. For typical synchrotron spectra peaking at $`x_{sy}`$ and with $`xF_x\propto x^0`$ above that, a fraction above threshold in the comoving frame $`\alpha =0.3\alpha _{.3}`$ is a typical value for the high $`\ell ^{\prime }`$ cases. Thus $$\ell ^{\prime }=(\alpha L_{sh}\sigma _T/8\pi m_ec^3t_v\mathrm{\Gamma }^5)\simeq 3\times 10^2\mathrm{\Gamma }_2^{-5}L_{52}\epsilon _{sh1}\alpha _{.3}\mu _1^{-1}\xi _v^{-1}.$$ (25) For values of $`\mathrm{\Gamma }>\mathrm{\Gamma }_{\ell ^{\prime }}\simeq 3.1\times 10^2(L_{52}\epsilon _{sh1}\alpha _{.3}\mu _1^{-1}\xi _v^{-1})^{1/5}`$ the compactness $`\ell ^{\prime }<1`$ and pair formation does not occur. This corresponds to shock radii $`(r_{sh}/r_o)_{\ell ^{\prime }}\simeq 0.7\times 10^8L_{52}\epsilon _{sh1}\alpha _{.3}\mu _1^{-1}\mathrm{\Gamma }_2^{-3}`$. Below that, in the range $`r_s<r_{sh}<r_{sh\ell ^{\prime }}`$, one has $`\ell ^{\prime }\gtrsim 1`$, and pair breakdown rapidly adds to the opacity and cooling. This leads to a self-regulating pair plasma whose scattering optical depth is $`\tau _s\sim `$ several, with a characteristic mean $`e^\pm `$ energy $`kT_c^{\prime }\sim m_ec^2/\tau _s`$, i.e. $`\mathrm{\Theta }_c^{\prime }\sim 10^{-1}`$ (e.g. Svensson 1987, Ghisellini & Celotti 1999). Clearly, $`\tau _s`$ cannot be too large, otherwise advection and adiabatic cooling would dominate over comptonization and diffusion.
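The pair-formation boundary of equation (25) is easily mapped out; the following is a minimal sketch (our names), consistent with the threshold $`\mathrm{\Gamma }_{\ell ^{\prime }}`$ quoted above.

```python
def compactness(gamma2, L52=1.0, eps_sh1=1.0, alpha3=1.0, mu1=1.0, xi_v=1.0):
    """Equation (25): comoving compactness l' at the internal shocks."""
    return 3e2 * gamma2**-5 * L52 * eps_sh1 * alpha3 / (mu1 * xi_v)

def gamma_pair_threshold(L52=1.0, eps_sh1=1.0, alpha3=1.0, mu1=1.0, xi_v=1.0):
    """Bulk Lorentz factor at which l' = 1; above it pair breakdown is suppressed."""
    return 1e2 * (3e2 * L52 * eps_sh1 * alpha3 / (mu1 * xi_v)) ** 0.2

print(gamma_pair_threshold())  # ~3.1e2, matching the Gamma_l' quoted in the text
```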
At such nonrelativistic energies, cyclotron radiation produces seed photons at harmonics whose energy is much lower than $`x_{sy}`$, but repeated scattering on the much hotter $`e^\pm `$ produces a comptonized spectrum $`F_x^{\prime }\propto x^{\prime 0}`$ up to the maximum energy $`\mathrm{\Theta }_c^{\prime }`$. For $`\mathrm{\Gamma }=\eta `$ (and $`\ell ^{\prime }>1`$), one then has an observer-frame characteristic break energy $$x_c\simeq 10^{-1}\mathrm{\Gamma }\simeq 10\mathrm{\Gamma }_2,$$ (26) and below that a spectrum $`xF_x\propto x`$. The pair Comptonized luminosity is comparable to the (initial) synchrotron shock luminosity in the absence of pairs, $`L_c/L_o=L_{sh}/L_o`$ given by equation (16), since the self-regulation of the comptonizing pair plasma is achieved by a time-averaged balance between energy dissipation by the shocks and radiative losses. Above the break $`x_c`$, one would expect a drop-off of the spectrum, in the absence of other effects. However, since internal shocks occur in the coasting regime and self-regulating pair breakdown tends to maintain a moderate scattering optical depth, reconnection and MHD turbulence may arise here too, leading to Alfvén waves of much higher characteristic energies than that of the pairs, which could lead to a flatter power law spectrum $`xF_x\propto x^0`$ extending above $`x_c`$ up to energies $`x_w`$ similar to that given in equation (20). In Figure 2 the top row shows cases where $`\ell ^{\prime }<1`$ and hence only the photosphere (with thermal T and nonthermal PHC) and the shock synchrotron (S) components are present. The middle row shows the marginal cases where $`\ell ^{\prime }=1`$ and hence besides the photosphere T and PHC components one expects the synchrotron S and the pair breakdown comptonized C components to be comparably important. The bottom row of Figure 2 shows cases with $`\ell ^{\prime }\gg 1`$, where pair breakdown is so important as to completely replace the synchrotron component S with a self-regulated comptonizing pair plasma component C. Both the S and C components have a luminosity level given by a shock efficiency (e.g. Kumar 1999) $`\epsilon _{sh}=10^{-1}`$ in Figure 2, with the effects of a lower value $`\epsilon _{sh}=3\times 10^{-3}`$ shown in Figure 3 (left). ## 5 Discussion Within the framework of the standard internal shock model, we have analyzed the observable effects of the two major radiating regions, the photosphere and the internal shocks, which are expected to contribute to the flux from an unsteady fireball outflow model of GRB. Note that, as in most GRB models, the spherically symmetric assumption applies to any conical outflow whose opening half-angle is $`\gtrsim \mathrm{\Gamma }^{-1}`$. (We note also that, if the cone angle were very narrow, transverse pressure gradients would cause significant departures from radial outflow; under these circumstances, the $`r`$-dependences would change, although the qualitative features would not be substantially altered.) We have purposely left out of our discussion any radiation from an external shock, which is expected to occur at radii beyond those considered here (and which can add other radiation components, especially a long term afterglow). The standard internal shock model of GRB is generally assumed to produce its observed nonthermal radiation by the synchrotron (or possibly inverse Compton) process.
Here, in addition to synchrotron we have also considered in more detail the role of the outflow photosphere and of possible nonthermal spectral distortions in it, as well as the role of pair breakdown in shocks with very high comoving luminosity. In a diagram (Figure 1) of radius vs. dimensionless entropy $`\eta =L_o/\dot{M}c^2`$, the regions where the internal shocks are dominated by synchrotron radiation (and pair breakdown is unimportant, $`\ell ^{\prime }<1`$) are shown by the vertically striped area S. The line where the compactness parameter $`\ell ^{\prime }=1`$ runs parallel to the line $`r_{ph}^>`$ for the photosphere in the coasting regime, in the same figure. The region where internal shocks are dominated by pair-breakdown, $`\ell ^{\prime }>1`$, is given by the horizontally striped region C in Figure 1, where the shock spectrum is dominated by comptonizing pairs. To the left and below the photospheric lines, shocks would occur at high optical depths and their spectrum would be thermalized, adding to the purely thermal (T) and non-thermal (PHC) components emerging from the photosphere. These various spectral components are shown in Figures 2 and 3 in a power per decade $`xL_x`$ vs. $`x`$ plot, where $`x`$ is photon energy in electron rest mass units. In our earlier papers on fireball shock models of GRB and most subsequent related work, the role of photospheres and pair breakdown was briefly considered, but until recently the observations did not appear to provide much support for their being important. The need for a non-thermal spectrum continues to be a strong argument for shocks and a synchrotron component, while for the less problematic case of large baryon loads the photospheres occur in the coasting regime, where their observer frame thermal luminosity is drastically weakened by adiabatic cooling. This is confirmed by the spectra of Figures 2 and 3, where for moderate to large baryon loads (low $`\eta `$) and moderate variability $`\xi _v\sim 10^2`$ or $`t_v\gtrsim 0.3`$ s the thermal peaks T are strongly suppressed, especially in the more “conservative” region going part way above and below from the central and right-of-center panels of Figure 2. Pair breakdown is also a phenomenon considered in earlier papers (cf. also Pilla & Loeb 1998, Papathanassiou & Mészáros 1996), which received less attention than it deserves because its importance appears to be restricted to a relatively narrow region of parameter space. This is illustrated in Figure 1, where one can compare the vertically striped region $`\ell ^{\prime }<1`$ labeled S versus the narrowish horizontally striped region $`\ell ^{\prime }>1`$ labeled C. This region would be even narrower if one normalized to $`L_o\sim 10^{50}\mathrm{erg}\mathrm{s}^{-1}`$ as opposed to $`L_{52}`$. However, fresh motivation for reconsidering the role of photospheres and pairs is provided by the evidence, in a non-negligible fraction of bursts, of low energy (1-10 keV) spectral slopes steeper than 1/3 in energy (or 4/3 in $`\mathrm{log}xL_x`$) and in some cases an X-ray excess above the power law extrapolation from higher energies, as well as the ubiquitousness of observed break energies clustering between 50-1000 keV (discussed in §1 and references there). A look at Figures 2 and 3 shows that allowing a larger role to photospheres and comptonized pairs provides a way of addressing these observational trends. If photospheres are rather more important than shocks (e.g.
due to some preference for very high $`\eta `$, weakly varying outflows or low shock efficiencies) the thermal component can provide low energy slopes as steep as $`x^3`$ in $`xL_x`$ (while flatter slopes can be achieved through integration or distributions). The same can explain an X-ray excess well above the power law extrapolation from above. Reasonable break energies can be obtained from either the synchrotron or pair comptonization mechanisms in the shock, but they depend on $`\mathrm{\Gamma }=\eta `$, and unless the range of $`\eta `$ is narrow they would not necessarily cluster between 50-1000 keV. In the case of pair comptonization they also tend to be a bit high, unless the equilibrium pair temperature is $`\mathrm{\Theta }_c^{\prime }\lesssim 0.03`$. A preferred break could be attributed to a photospheric peak, provided baryon loads are very low, $`\eta \gtrsim \eta _{*}\sim 10^3`$ in all cases; this still requires a strong shock synchrotron component, or possibly Alfvén wave comptonization in the photosphere, to explain the high energy power law spectra. It would also imply a pronounced upward change of slope above the break, from the thermal peak to a flatter power law in all bursts where a break is observed, unless the shock or Alfvén wave scattering always produces a luminosity comparable to the thermal photosphere. Preferred break energies arise naturally if Thompson’s (1994) proposed mechanism of comptonization by Alfvén wave damping in the photosphere is taken at face value and has good efficiency, leading to a source frame break around 0.5 MeV. Note that the comptonized spectral slopes discussed here (either from thermal pairs in shocks or Alfvén waves in the photosphere) are nominally 1 and 0 in $`xF_x`$, but for simple time-dependent calculations an evolving slope is expected (e.g. Liang et al., 1999, Thompson 1994). In reality the actual time dependence for an unsteady outflow leading to shocks, pair breakdown and comptonization could be more complicated. If both a photospheric and a shock component are detected, one would expect the thermal photospheric luminosity (and its non-thermal part, if present) to vary on similar timescales as the nonthermal synchrotron or pair comptonized shock component (unless the shock efficiency is radius dependent, or unless one or both are beyond $`r=r_o\eta ^2`$, in which case $`r/c\eta ^2`$ imposes a lower limit on the corresponding variability timescale). But even if the bolometric luminosity varies on the same timescale, the luminosity in a given band (e.g. BATSE) probably varies differently, since the thermal peak energy is $`\propto L^{1/4}`$ and falls off steeply on either side, while the synchrotron peak energy varies $`\propto B^{\prime }\eta \propto L^{-3/2}`$ and falls off more slowly. The pair comptonized break energy on the other hand varies as $`\eta \propto L`$, and also drops off slowly on the low energy side, or on both sides if scattering off waves is present in the high energy side. If observations at high time resolution become possible in X-ray or optical during the burst (as opposed to the afterglow), we would expect (cf. Figure 2) the bursts with shorter time structure (low $`\xi _v`$) to be more suppressed at these wavelengths, compared to those with longer variability timescales. In summary, there are several plausible mechanisms for producing a preferred energy break, which rely on internal properties of the outflow.
In those bursts with low energy slopes steeper than implied by synchrotron, the prominence of the photosphere is a likely explanation, in which case its luminosity could in some cases vary differently from the higher energy power law component. Intrinsically high luminosity bursts, where pair breakdown is inferred, would be predicted to have generally harder high energy power law slopes than in lower luminosity bursts where synchrotron provides the high energy slope. If photospheric comptonization on Alfvén waves is responsible for the high energy power law slopes, the thermal peak and the power law should vary together in time. In this case a straightforward prediction is that the thermal peak photon energy and the intrinsic luminosity allow one to determine the expected maximum photon energy $`x_w\lesssim \mathrm{\Gamma }`$. This research has been supported by NASA NAG5-2857, NSF PHY94-07194 and the Royal Society.
# Luminous “Dark” Halos ## 1 Introduction Several years ago it was proposed that cold gas could make up a significant fraction of the dark matter in spiral galaxies (Pfenniger, Combes & Martinet 1994). This particular proposal advocated massive, fractal gas clouds distributed in a thin disk, but subsequent authors have contemplated spherical clouds in the (dynamically more conventional) context of a quasi-spherical halo (Henriksen & Widrow 1995; de Paolis et al 1995; Gerhard & Silk 1996; Walker & Wardle 1998). Walker & Wardle’s (1998) model for Extreme Scattering Events (radio wave lensing events) requires that essentially all of the Galactic dark matter be in the form of cold, dense gas clouds. Data on Galactic emissions – notably in the $`\gamma `$-ray region (de Paolis et al 1995; Kalberla, Shchekinov & Dettmar 1999) – and LMC microlensing properties (Draine 1998; Rafikov & Draine 2000) do not exclude this possibility. A natural question, then, is to ask whether all of the dark matter in all halos (galaxies and clusters of galaxies) might assume the form of cold gas. However, this extension precipitates a conflict with established ideas, because clusters of galaxies are generally assumed to be so large that they are representative samples of the Universe, and well-known arguments favour a predominantly non-baryonic Universe (e.g. Peebles 1993). One possible resolution of this conflict has been considered by Walker & Wardle (1999), who emphasised the loopholes in the case for non-baryonic dark matter. Even in the absence of a resolution, though, it is useful to contemplate purely baryonic models of dark halos, in order to clarify the strengths and weaknesses of these models; that is the spirit of this paper. Three discoveries in the last two years have promoted the basic idea of dark matter in the form of cold, dense gas clouds. First, Walker & Wardle (1998) were able to explain the enigmatic “Extreme Scattering Events” (Fiedler et al 1987) as radio-wave lensing events caused by the photoionised surfaces of cold clouds. This model requires “lens” radii of order 2 AU, individual masses in the planetary range, and a total mass which dominates the mass of the Galaxy. Secondly, Dixon et al (1998) found that the $`\gamma `$-ray background contains a substantial component attributable to the Galactic halo; given that the diffuse gas in the Galactic plane is the principal feature of the $`\gamma `$-ray sky (e.g. Bloemen 1989), this is prima facie evidence for unseen gas in the Galactic halo (de Paolis et al 1995). Thirdly, Walker (1999: W99) showed that the cold-cloud model predicts a relation between visible mass and halo velocity dispersion, $`M_{vis}\propto \sigma ^{7/2}`$, which agrees extraordinarily well with data on spiral galaxies. Indeed this result appears to underlie the Tully-Fisher relation, with the latter following when most of the visible mass is in stellar form. In the model of W99 these results arise from consideration of the collisions which must occur between clouds (Gerhard & Silk 1996); such collisions destroy the colliding pair, and in this picture the visible content of any halo increases with time as dark matter is converted to visible forms. The success of this simple picture of (visible) galaxy assembly encourages further investigation into the physics of cloud-cloud collisions. One of the most basic features of the collision process is that it involves strong shocks in the cold gas.
By virtue of the high particle densities ($`10^{12}\mathrm{cm}^{-3}`$) within the clouds, the post-shock radiative cooling time-scales are very short and the shocks are radiative. This means that the bulk of the kinetic energy dissipated during a collision goes into radiation, implying a minimum level of emission from “dark” halos. This is a key prediction which must be squared with the data: is the model in conflict with observations? Here we investigate that question and answer in the negative. The model does lead to a highly unconventional picture of the origin of X-ray emission in clusters of galaxies, and some readers will regard this as intrinsically problematic because they believe the phenomenon to be well understood. However, in both the proposed picture and the conventional one, X-ray emission arises from a two-body collision process acting in an isothermal gas in virial equilibrium in the cluster potential; consequently the two theories are degenerate in many respects, as will be evident in §3. One aspect of the new theory is, however, so strikingly different from the accepted interpretation that there will be no difficulty in deciding between the two, once new data are acquired. Because of the simplicity with which this issue will be decided, no attempt is made here to model anything other than the fundamental properties of the X-ray emission. These properties are consistent with existing data and, bearing in mind that the new theory has no free parameters, it seems appropriate to accept the model as a bona fide alternative for the time being, pending the outcome of the test described in §4. We follow W99 in modelling dark halos as isothermal spheres which are entirely composed of dense gas clouds; we adopt a Maxwellian distribution function, and for simplicity we assume that all clouds have the same mass and radius. The expected properties of the emission are presented in the next section. Because the predicted luminosity is a steep function of halo velocity dispersion, we then (§3) focus on the application to clusters of galaxies, where intense X-radiation is expected. Implications of the theory and ways in which it can be tested are given in §4. ## 2 Basic properties of the emission In order to calculate the emissivity, $`\epsilon `$, of a halo we need to estimate the rate at which kinetic energy is dissipated in collisions between clouds. Essentially all collisions are highly supersonic, so we can apply conservation of momentum to each element of area, $`\mathrm{\Delta }A`$, of the two colliding clouds, with local surface density $`\mathrm{\Sigma }_1,\mathrm{\Sigma }_2`$ (these surface densities being measured parallel to the relative velocity vector). If each cloud has speed $`u`$, in the frame of the centre-of-mass, before the collision, then for a fully inelastic collision the final speed is just $`u|\mathrm{\Sigma }_1-\mathrm{\Sigma }_2|/(\mathrm{\Sigma }_1+\mathrm{\Sigma }_2)`$. The change in kinetic energy in each elemental area is thus $`2\mathrm{\Delta }A\mathrm{\Sigma }_1\mathrm{\Sigma }_2u^2/(\mathrm{\Sigma }_1+\mathrm{\Sigma }_2)`$. If we define $`\eta `$ to be the total kinetic energy dissipated as a fraction of the total initial kinetic energy (in the centre-of-mass frame), then we have $$\eta =\frac{1}{M}\int dA\frac{2\mathrm{\Sigma }_1\mathrm{\Sigma }_2}{\mathrm{\Sigma }_1+\mathrm{\Sigma }_2},$$ $`(1)`$ where $`M`$ is the mass of a cloud.
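Equation (1), together with the impact-parameter average $`\overline{\eta }`$ introduced just below in equation (2), is straightforward to evaluate numerically. Here is a small Monte Carlo sketch for the idealized case of uniform-density spheres, rather than the polytropes adopted in the text; the function names and the sampling scheme are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigma_uniform(r_perp):
    """Column density through a uniform unit sphere (rho = 1), measured
    parallel to the relative velocity: Sigma = 2 sqrt(1 - r^2) for r < 1."""
    inside = np.clip(1.0 - r_perp**2, 0.0, None)
    return 2.0 * np.sqrt(inside)

def eta_of_b(b, n=200_000):
    """Equation (1) by Monte Carlo over the plane of the sky."""
    x = rng.uniform(-1.0, 1.0 + b, n)  # box containing both projected disks
    y = rng.uniform(-1.0, 1.0, n)
    area = (2.0 + b) * 2.0
    s1 = sigma_uniform(np.hypot(x, y))       # cloud centred at the origin
    s2 = sigma_uniform(np.hypot(x - b, y))   # cloud centred at (b, 0)
    denom = s1 + s2
    f = np.divide(2.0 * s1 * s2, denom, out=np.zeros_like(denom), where=denom > 0)
    M = 4.0 * np.pi / 3.0                    # cloud mass for rho = 1, r = 1
    return area * f.mean() / M

def eta_bar(nb=41):
    """Equation (2): average over impact parameter, weight b db / (2 r^2)."""
    bs = np.linspace(0.0, 2.0, nb)
    return np.trapz([b * eta_of_b(b) for b in bs], bs) / 2.0

# Head-on collisions dissipate everything (eta_of_b(0) ~ 1); uniform spheres
# give a larger eta-bar than the centrally concentrated polytropes of the text.
```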
Evidently $`\eta =\eta (b)`$ is a function of impact parameter, $`b`$, for the collision, and depends on the density profile within the cloud. We shall be primarily concerned with the average value $`\overline{\eta }`$: $$\overline{\eta }=\frac{1}{2r^2}\int _0^{2r}db\hspace{0.17em}b\hspace{0.17em}\eta (b),$$ $`(2)`$ where $`r`$ is the cloud radius. We have evaluated $`\overline{\eta }`$ for polytropic cloud models of indices $`n=1.5,\mathrm{\hspace{0.33em}3},\mathrm{\hspace{0.33em}4}`$, with the results $`\overline{\eta }=0.136,\mathrm{\hspace{0.33em}5.37}\times 10^{-2},\mathrm{\hspace{0.33em}1.66}\times 10^{-2}`$ respectively. A firm model for the density profile of the putative dark clouds has not yet been constructed. Wardle & Walker (1999) suggest that solid molecular hydrogen plays a key rôle in their thermal regulation, in which case most of the radiative losses are likely to occur from a thin surface layer because the precipitation/sublimation balance is very temperature sensitive. Beneath this radiative layer the dominant cooling is expected to come from spectral lines which are very optically thick, and we anticipate that these regions are thus unstable to convection (Clarke & Pringle 1997). We therefore adopt a polytropic model with $`n=1.5`$ as an approximation to the likely cloud density structure, leading to $`\overline{\eta }\simeq 0.136`$. In deriving the rate of collisions between clouds, $`\mathcal{R}`$, we assume that the cloud population follows a Maxwellian velocity distribution with total density $`\rho `$, leading to (W99) $$\mathcal{R}=\frac{16}{\sqrt{\pi }}\frac{\rho ^2\sigma }{M\mathrm{\Sigma }},$$ $`(3)`$ ($`\mathrm{\Sigma }`$ is henceforth the average surface density of a cloud), with a mean kinetic energy of $`2\overline{\eta }M\sigma ^2`$ dissipated per collision. (Note that the numerical coefficient in eq. 3 differs slightly from W99’s eq. 1, because we have specified a Maxwellian distribution function. Similar, slight differences will be evident when comparing some of our subsequent results with those of W99.) It is now trivial to determine the local emissivity of the halo: $`\epsilon =2\overline{\eta }M\sigma ^2\mathcal{R}`$. We employ W99’s eq. 3 for the halo density distribution – implicitly assuming that the dark halo is entirely made up of cold clouds – whence the intensity $$I=\frac{1}{4\pi }\int ds\hspace{0.17em}\epsilon =\frac{\overline{\eta }}{64}\left[\frac{\sigma ^5\mathrm{\Sigma }}{Gt^3\sqrt{\pi }}\right]^{1/2}\frac{1}{(1+x^2)^{3/2}},$$ $`(4)`$ where $`x`$ is the projected distance of the line-of-sight from the centre of the halo, in units of the core radius, $`r_c`$, and $`r_c^2=16\sigma ^3t/\pi ^{3/2}G\mathrm{\Sigma }`$. Here $`t`$ is the time which has elapsed since the halo virialised; we adopt $`t\simeq 10`$ Gyr, corresponding to halos which virialised at redshifts $`z\gtrsim 1`$. The total luminosity can be found simply by integrating eq. 4, leading to $$L=8\pi ^2r_c^2\int _0^{\infty }dx\hspace{0.17em}xI(x)=2\overline{\eta }\left[\frac{\sigma ^{11}\sqrt{\pi }}{G^3\mathrm{\Sigma }t}\right]^{1/2}.$$ $`(5)`$ This result may also be written as $`L=\overline{\eta }\dot{M}_{vis}\sigma ^2`$, where $`M_{vis}=\pi \sigma ^2r_c/G`$, emphasising the connection with the pseudo-Tully-Fisher relation derived by W99. The average column density of the individual clouds can be measured by fitting the theoretical relation $`M_{vis}(\sigma )`$ to data for spiral galaxies (W99); for our Maxwellian distribution function this yields $`\mathrm{\Sigma }=134\mathrm{g}\mathrm{cm}^{-2}`$ for $`t=10`$ Gyr. We can now evaluate eq.
5 numerically: $$L\simeq 3.2\times 10^{44}\sigma _3^{11/2}\mathrm{erg}\mathrm{s}^{-1},$$ $`(6)`$ where $`\sigma =10^3\sigma _3\mathrm{km}\mathrm{s}^{-1}`$; this implies very luminous halos for clusters of galaxies ($`\sigma _3\sim 1`$). The equality $`L=\overline{\eta }\dot{M}_{vis}\sigma ^2`$ is important because it demonstrates a close tie between the predicted luminosity and the observed Tully-Fisher relation. That is, if W99’s theory is a correct explanation for the Tully-Fisher relation, then a result very similar to equation (6) must follow for the bolometric luminosity of the halo; this is independent of the value of $`\mathrm{\Sigma }`$, or the number density and spatial distribution of the clouds. What about the spectrum of the radiation? For the present it suffices to note two general points. First, the radiation is thermal; and secondly, a fiducial temperature for the radiation is that of the shocked gas. This temperature can be estimated from the jump conditions for a strong shock: $`kT_s=(3/16)\mu u^2`$, where $`\mu `$ is the mean molecular mass. There is no unique value of the shock speed, $`u`$, but $`\langle u^2\rangle =2\sigma ^2`$ so $`kT_s\simeq 2\sigma _3^2`$ keV, and this gives us a crude measure of the typical photon energy. In this way we see that the halos of dwarf galaxies ($`\sigma \lesssim 50\mathrm{km}\mathrm{s}^{-1}`$) should emit mostly in the optical and near-IR; this radiation is observable in principle, but we note the low luminosities implied by eq. 6 ($`L\lesssim 2\times 10^{37}\mathrm{erg}\mathrm{s}^{-1}`$). Normal and giant-galaxy halos should emit mainly far- and extreme-UV, which is not ordinarily observable because of the large opacity of the Galactic interstellar medium in these bands. The halos of clusters of galaxies should emit X-radiation which is both observable and at a level which is easy to detect. Consequently we expect that clusters offer the best prospects for testing our theory, and we now focus our attention on these systems. ## 3 X-ray emission from clusters of galaxies The first question we must address is whether the predicted luminosity is consistent with the data for clusters. Conventionally the observations are interpreted in terms of two components: one due to hot gas spread throughout the cluster, and another due to a central “cooling flow” (e.g. Fabian 1994). Cooling flows introduce a large scatter in the observed luminosity-temperature correlation (Fabian et al 1994). Our model involves X-ray emission arising from cloud collisions throughout the cluster, not just the central regions, and must be compared with the cluster-wide component; it is this component which is meant henceforth when we refer to the data. The systematic trend of luminosity with X-ray spectral temperature, $`L(T_X)`$, has been the subject of several recent studies (Markevitch 1998; Arnaud & Evrard 1999; Reichart, Castander & Nichol 1999), with very similar results: $`L_{bol}\propto T_X^{2.80\pm 0.15}`$ (Reichart et al 1999). In our model all temperatures scale with $`\sigma ^2`$, so eq. 5 implies a close parallel with the data: $`L_{bol}\propto T_X^{11/4}`$. We note also the study of Wu, Xue & Fang (1999) which, although it did not exclude the cooling flow contribution to $`L_{bol}`$, employed such a large sample of clusters that the deduced correlation was nevertheless very precisely determined: $`L_{bol}\propto T_X^{2.72\pm 0.05}`$, in agreement with the theory we have presented.
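As a quick numerical check on equation (6) and the fiducial shock temperature, here is a two-function sketch (our names; Virgo-like parameters are used in the comments).

```python
def halo_luminosity(sigma3):
    """Equation (6): bolometric cloud-collision luminosity [erg/s]."""
    return 3.2e44 * sigma3**5.5

def shock_temperature_keV(sigma3):
    """Fiducial post-shock temperature, kT_s ~ 2 sigma_3^2 keV."""
    return 2.0 * sigma3**2

# For a Virgo-like sigma ~ 650 km/s (sigma3 = 0.65):
print(halo_luminosity(0.65))         # ~3e43 erg/s
print(shock_temperature_keV(0.65))   # ~0.8 keV
```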
For clusters which have detailed optical spectroscopy in addition to the X-ray data, we can assess the dependence of $`L`$ directly on $`\sigma `$, as measured from cluster galaxy velocity dispersions. Contamination by field galaxies, small sample sizes, sub-clustering and anisotropic velocity distributions all mean that measuring $`\sigma `$ is not easy. Girardi et al (1996) have made careful estimates of $`\sigma `$ in 38 rich clusters; their sample has 13 and 6 clusters in common with the samples of Markevitch (1998) and Arnaud & Evrard (1999), respectively. Taking bolometric luminosities (for $`H_0=75\mathrm{km}\mathrm{s}^{-1}\mathrm{Mpc}^{-1}`$) from the latter data sets, and the velocity dispersions determined by Girardi et al (1996), we arrive at the points shown in figure 1. Also shown is the theoretical prediction given by eq. 6, from which we see that the data are all consistent with or in excess of the prediction. In the context of our model, a measured luminosity which is significantly in excess of the prediction must be interpreted in one of two ways: as emission from diffuse hot gas, spread throughout the cluster (see §4), or as a halo which virialised relatively recently (cf. eq. 5), e.g. in a cluster merger event. Is the spatial distribution of emission within clusters consistent with our theory? Our model has a mean intensity profile (eq. 4) which is identical to the standard model profile (e.g. Sarazin 1988) $`(1+x^2)^{-3\beta +1/2}`$ with $`\beta =2/3`$; this is an adequate approximation for many clusters (Jones & Forman 1984). Exact agreement should not be expected because an isothermal sphere is only a crude approximation to the likely dark halo density distribution. A more realistic model distribution might include many smaller halos, perhaps associated with individual cluster galaxies (cf. Moore et al 1999) and these would give local enhancements in the mean X-ray intensity, with modest attendant spectral changes. An important aspect of the present model is that it predicts a graininess in the intensity profile, at any instant, because the total cluster emission is contributed by a large number of discrete sources. This feature is fundamental to the theory and admits an unequivocal test with high resolution imaging data, as discussed in §4. Attempting to predict the spectra resulting from cloud collisions is a formidable task. Consider first that the unshocked gas (density $`10^{12}\mathrm{cm}^{-3}`$, and temperature of several Kelvin) is, initially, entirely opaque to X-rays as a result of the bound-free opacities of hydrogen and helium. Because of the high temperature of the shocked gas, the resultant photons ionise the upstream material (cf. Shull & McKee 1979), thus erasing the principal source of opacity. For collisions occurring in clusters the mean energy dissipated per unit cloud mass is so large, roughly $`100\sigma _3^2`$ times the total chemical binding energy of the cold gas, that we expect the ionisation fronts to break out of the clouds very quickly. Thereafter the primary opacity presented by the unshocked gas is due to electron scattering. Each X-ray photon is expected to scatter hundreds of times before escaping, with a few eV exchanged between electron and photon on each scattering. Thus, although the thermal coupling is loose, in total there is a significant exchange of energy between the escaping photons and the upstream gas.
Add to this the complex, time-dependent geometry associated with shocks in a pair of colliding clouds, and we see that it will not be easy to arrive at reliable quantitative predictions of the spectra even for single collisions. The observed spectrum is, of course, a sum of the spectra of a large number of collisions, with a spread in collision speed; but this aspect of the calculation is more straightforward, as the cloud kinetics are likely to be reasonably well approximated by a Maxwellian distribution function. It is beyond the scope of this paper to attempt a prediction of the spectrum; instead we confine our attention to a single qualitative point: the observed spectra should exhibit a strong low-energy component. One can easily see that such a component should be present because the post-shock gas cools as it flows downstream, and emission from this gas will be predominantly in the soft X-ray and EUV bands. To illustrate this point we have calculated an idealised spectrum which neglects radiative transfer through the upstream gas. This calculation assumes: a Maxwellian cloud distribution function; the strong shock limit (cold upstream gas); pure bremsstrahlung emission; and the optically thin limit. The resulting spectral form is given by $$\frac{\nu L_\nu }{L}=\frac{\omega }{\gamma -1}\int _{\xi _1}^{\infty }\frac{\mathrm{d}\xi }{\xi }\left[\gamma -\frac{1}{\xi -1}\right]𝒮(p),$$ $`(7)`$ where $`p\equiv \omega \xi ^2/(\xi -1)`$, $`\omega \equiv h\nu /\mu \sigma ^2`$, $`\xi _1=(\gamma +1)/(\gamma -1)`$, $`\gamma =5/3`$ and $$𝒮(p)=\int _0^{\infty }dq\hspace{0.17em}q\mathrm{exp}(-q-p/q).$$ $`(8)`$ The spectrum of eq. 7 is shown in figure 2, along with a bremsstrahlung spectrum from isothermal gas with $`kT=\mu \sigma ^2`$, representing the conventional theory of cluster X-ray emission. Relative to the standard theory it can be seen that this calculation predicts a much broader spectrum which peaks at lower energies, with a much larger fraction of the power emerging at $`\omega \ll 1`$. We emphasise that this calculation is only intended to be illustrative; the assumptions employed are not good approximations to the actual physical conditions, and the computed spectrum is therefore not quantitatively correct. However, the qualitative point that a high EUV luminosity is expected, relative to the X-ray luminosity, should be model independent. This result is of particular current interest as it has recently become apparent that some clusters have EUV luminosities which are much higher than expected on the basis of an isothermal bremsstrahlung model for the X-ray emission (Mittaz, Lieu & Lockman 1998; Lieu, Bonamente & Mittaz 1999). It is not currently known whether this difficulty extends to all clusters. Various models have been proposed specifically to account for these EUV data (e.g. Sarazin & Lieu 1998), but dark matter in the form of cold clouds may be able to explain this emission without the need for such ad hoc introductions. The implication of a relatively large luminosity at low energies raises the question of whether the proposed model is consistent with the known X-ray spectra of clusters, which for the most part are well described by optically thin emission from a single-temperature hot gas. In particular, the model demands a somewhat surprising coincidence whereby a complex amalgam of physics leads to apparently simple X-ray spectra. Unfortunately this issue is difficult to address because it requires a detailed computation of the spectral shape.
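The double integral of equations (7) and (8) is easy to evaluate numerically; below is a minimal sketch (our code), using the closed form $`𝒮(p)=2pK_2(2\sqrt{p})`$ for the inner integral (a standard identity for integrals of this type), so that only the outer $`\xi `$ integral needs quadrature.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv  # modified Bessel function K_nu

GAMMA = 5.0 / 3.0
XI1 = (GAMMA + 1.0) / (GAMMA - 1.0)  # strong-shock limit, = 4 for gamma = 5/3

def S(p):
    """Equation (8): S(p) = int_0^inf q exp(-q - p/q) dq = 2 p K_2(2 sqrt(p))."""
    return 2.0 * p * kv(2, 2.0 * np.sqrt(p)) if p > 0 else 1.0

def spectrum(omega):
    """Equation (7): nu L_nu / L at omega = h nu / (mu sigma^2)."""
    integrand = lambda xi: (GAMMA - 1.0 / (xi - 1.0)) / xi * S(omega * xi**2 / (xi - 1.0))
    val, _ = quad(integrand, XI1, np.inf, limit=200)
    return omega / (GAMMA - 1.0) * val

for w in (0.01, 0.1, 1.0, 3.0):
    print(w, spectrum(w))  # a broad peak well below omega ~ 1, per figure 2
```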
By contrast one can confidently assert a high EUV luminosity, because even a modest difference in spectral index, over a large range in photon energy, will manifest itself as a significant difference in flux. One should, therefore, also expect real clusters to show significant departures from the conventional model at very high X-ray energies, although it is not clear whether an “excess” or a “deficit” is to be expected at these energies. (One would, for example, need to know the precise form of the dark matter distribution function in order to decide this question.) Observationally, studies at very high X-ray energies are difficult because of the paucity of photons, but in at least some cases, e.g. the cluster A2199 (Kaastra et al 1999), there is evidence of an excess relative to the conventional model. We emphasise that for the model we have presented, the issue of the detailed spectral shape is not a critical one at present, because a powerful test of the theory is available via the predicted spatial distribution (see §4). ## 4 Discussion It is important to recognise that the theory presented in §§2,3 is not incompatible with hot, diffuse gas contributing to the observed X-ray emission, rather the opposite in fact. W99 computed the total visible mass which should accumulate within a halo of given velocity dispersion, as a consequence of cloud-cloud collisions (all of which disrupt the cold clouds): $`M_{vis}=7.4\times 10^{13}\sigma _3^{7/2}\mathrm{M}_{\odot }`$ after an interval of 10 Gyr. W99 gave no predictions as to what form this visible material should take (e.g. stars vs. diffuse gas). The material released by collisions is initially just gas in free, near-adiabatic expansion, with a centre-of-mass moving on a ballistic trajectory in the cluster potential. As it interacts with the intracluster medium, this gas can be shock heated to high temperatures. We note that in a cluster the mean particle densities are very low, implying long cooling times and a large fraction of $`M_{vis}`$ could therefore be in the form of hot, diffuse gas. In consequence, phenomena such as the Sunyaev-Zel’dovich (SZ) effect, which are contingent on the existence of tenuous hot gas, are expected to be present in our theory. Because we are ascribing a substantial fraction of the observed X-ray emission to cloud collisions, it is clear that the expected magnitude of the SZ effect is diminished relative to the standard theory of cluster X-ray emission. However, accurate measurements would be necessary to distinguish between our theory and the standard model, whereas the SZ effect has only recently been convincingly detected at all (e.g. Rephaeli 1995). A referee has brought to my attention the point that the spatial distribution of SZ decrement can be used to test the proposed model. The theory predicts that the diffuse, hot gas should lie mostly within a radius of order $`r_c`$ of the cluster centre — i.e. more compact than is conventionally assumed. Existing images (e.g. Carlstrom et al 1999) appear to be consistent with the proposed model, in that they indicate characteristic radii comparable to $`r_c`$ for the underlying gas distribution. We note that if the theory presented here is correct then the SZ effect is unlikely to prove useful as a technique for measuring the distances to clusters. Although our theory predicts a similar mean intensity profile (eq.
4) to that of the conventional model, the instantaneous distribution consists of a large number of point-like sources, and the most fundamental test of the theory would be to attempt to resolve the emission from a cluster into its component sources. Each of these sources should be transient, with a characteristic time-scale $`t_0\sim r/2\sigma `$, and for $`r\sim 1`$ AU this is $`t_0\sim 1/\sigma _3`$ days. The mean luminosity is $`L_0\simeq 6.7\times 10^{39}\sigma _3^3\mathrm{erg}\mathrm{s}^{-1}`$ (this estimate assumes a virial temperature of order 10 K for the clouds, cf. Wardle & Walker 1999); in turn this implies a total number $`\sim 4.7\times 10^4\sigma _3^{5/2}`$ of sources contributing to the cluster luminosity. In the case of the Virgo cluster, the nearest rich cluster of galaxies ($`\sigma \simeq 650\mathrm{km}\mathrm{s}^{-1}`$, $`D\simeq 15`$ Mpc), we deduce: a mean flux of roughly $`7\times 10^{-14}\mathrm{erg}\mathrm{cm}^{-2}\mathrm{s}^{-1}`$; $`t_0\simeq 1.5`$ days; a total number of order 16,000 sources; and a peak source density (in the cluster core) of $`\sim 17\mathrm{arcmin}^{-2}`$. These estimates should be interpreted as order-of-magnitude estimates only; nevertheless they indicate that the X-ray satellite Chandra should easily detect individual transients within the Virgo cluster, even in relatively short observations of an hour or so. By virtue of Chandra’s high resolution imaging, source confusion should not be a problem even in the core of the cluster. The ROSAT satellite was less sensitive than Chandra, and had much poorer angular resolution, but even the ROSAT All Sky Survey (RASS) should have revealed the brightest of the ongoing collisions at the periphery of the Virgo cluster, where source confusion is expected to be less of a problem than in the core. Inspection of the publicly available RASS image of Virgo (reproduced in figure 3) suggests that this is indeed the case, as the outer regions of the cluster appear to possess a great deal of compact substructure. These data are discussed in Böhringer et al (1994). What are the implications of non-detection of the predicted population of transient sources in Virgo? Chandra’s resolution and sensitivity are such that the predicted sources should be detectable, and not confused with each other, even if their fluxes are an order of magnitude lower than the predicted value. This is a sufficiently wide margin for error (in the modelling) that observations with Chandra should be definitive: if Chandra does not detect these sources, then clouds of the type we have discussed make only a small contribution to the dark matter in the Virgo cluster. As the collision rate, and hence the expected number of detectable sources, is proportional to the square of the number of clouds per unit volume, if no collisions are observed where $`\sim 10^4`$ are expected, then the putative clouds comprise $`\lesssim 1`$% of all the matter in Virgo. Entities as large as clusters are widely regarded as representative samples of the Universe as a whole, so in turn this can be taken as a limit on the contribution of cold clouds to the total matter density of the Universe. In principle, non-detection by Chandra admits another possible interpretation: collisions which are so brief that less than five photons are collected by Chandra from each event; this circumstance would require cloud masses $`M\lesssim 7\times 10^{-8}\mathrm{M}_{\odot }`$.
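The Virgo numbers quoted above follow directly from the scalings of §2; the snippet below (our packaging, with a hypothetical `virgo_predictions` helper) reproduces them.

```python
import math

def virgo_predictions(sigma3=0.65, D_Mpc=15.0):
    """Order-of-magnitude transient-source numbers for a Virgo-like cluster."""
    L0 = 6.7e39 * sigma3**3        # mean source luminosity [erg/s]
    t0_days = 1.0 / sigma3         # characteristic event duration [days]
    N = 4.7e4 * sigma3**2.5        # concurrent sources in the cluster
    D_cm = D_Mpc * 3.086e24        # distance [cm]
    flux = L0 / (4.0 * math.pi * D_cm**2)
    return L0, t0_days, N, flux

L0, t0, N, flux = virgo_predictions()
print(f"{L0:.1e} erg/s, {t0:.1f} d, {N:.0f} sources, {flux:.1e} erg/cm^2/s")
# -> ~1.8e39 erg/s, ~1.5 d, ~16000 sources, ~7e-14 erg/cm^2/s
```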
This, however, is not a self-consistent model: the condition $`\mathrm{\Sigma }=134\mathrm{g}\mathrm{cm}^{-2}`$, from the Tully-Fisher relation (§2; W99), with the simultaneous requirement that the cloud temperature be greater than the temperature of the microwave background (3 K), fixes a lower limit on the cloud mass of $`10^{-5}\mathrm{M}_{\odot }`$. Neither of these requirements can be relaxed without compromising the model, so Chandra will provide a strong test of the theory. We also note that the thermodynamic (temperature) requirement alone demands very small radii, $`r\lesssim 4\times 10^{10}\mathrm{cm}`$, for $`M\lesssim 7\times 10^{-8}\mathrm{M}_{\odot }`$, making it difficult to explain the extreme scattering events (Walker & Wardle 1998) if low-mass clouds are invoked. An interesting qualitative point is that the X-ray spectra of galaxy clusters typically exhibit iron abundances of order 0.3 (in solar units: Mushotzky & Loewenstein 1997). If a large fraction of the X-ray emission does indeed arise from cloud-cloud collisions, then these clouds presumably contain iron and other heavy elements. (This conclusion can also be tested by Chandra observations.) The simplest interpretation of this point is that it indicates primordial non-zero heavy element abundances. This is highly unconventional, but no more so than the idea that all of the dark matter might be in the form of cold gas clouds. Indeed, as emphasised by Walker & Wardle (1999), the two ideas are linked: if all of the dark matter is baryonic, then consistency of the Big Bang nucleosynthesis calculations with the observed light element abundances requires that the Universe was inhomogeneous at the epoch of cosmic nucleosynthesis, and in turn this admits the possibility of primordial heavy element nucleosynthesis. This logic led Walker & Wardle (1999) to propose that the genesis of the (proto-)clouds involved a phase transition in the very early Universe (i.e. prior to the epoch of cosmic nucleosynthesis) – cf. Hogan (1978). The abundant iron in cluster X-ray spectra underlines this interpretation, thereby connecting the cold-cloud model directly to the physics of elementary particles. ## 5 Conclusions We have shown that if the dark matter is entirely composed of cold gas clouds, then a substantial fraction of the observed X-ray emission from clusters should be due to physical collisions between these clouds. This possibility appears to be consistent with existing data. Indeed the form of the observed cluster luminosity-temperature correlation, and the measurement of high EUV luminosities for some clusters, both suggest that this process may well be occurring. If so then high-resolution images of the Virgo cluster should reveal a large number of point-like, transient X-ray sources contributing to the emission. Conversely, if these sources are not seen, then cold clouds of $`M\gtrsim 10^{-7}\mathrm{M}_{\odot }`$ cannot contribute more than about 1% of the dark matter, either in clusters or, by extension, in the Universe as a whole. ## Acknowledgements I thank Mark Wardle for providing numerical polytropic density profiles, and for several useful discussions. Andy Fabian, Ron Ekers and Haida Liang contributed helpful advice on clusters. ## References Arnaud M., Evrard A.E. 1999, MNRAS, 305, 631 Bloemen H. 1989, ARAA, 27, 469 Böhringer H., Briel U.G., Schwarz R.A., Voges W., Hartner G., Trümper J. 1994 Nature 368, 828 Carlstrom J.E., Joy M.K., Grego L., Holder G.P., Holzapfel W.L., Mohr J.J., Patel S., Reese E.D.
1999 “Particle physics and the universe” eds L. Bergstrom, P. Carlson, C. Fransson (In press, astro-ph/9905255) Clarke C.J., Pringle J.E. 1997, MNRAS, 288, 674 de Paolis F., Ingrosso G., Jetzer Ph., Roncadelli M. 1995, Phys. Rev. Lett., 74, 14 Dixon D.D., Hartmann D.H., Kolaczyk E.D., Samimi J., Diehl R., Kanbach G., Mayer-Hasselwander H., Strong A.W. 1998, New Ast., 3, 539 Draine B.T. 1998 ApJL 509, L41 Fabian A.C 1994, ARAA, 32, 277 Fabian A.C., Crawford C.S., Edge A.C., Mushotzky R.F. 1994, MNRAS, 267, 779 Fiedler R.L., Dennison B., Johnston K.J., Hewish A. 1987, Nature, 326, 675 Gerhard O., Silk J. 1996, ApJ, 472, 34 Girardi M., Fadda D., Giuricin G., Mardirossian F., Mezzetti M., Biviano A. 1996, ApJ, 457, 61 Henriksen R.N., Widrow L.M. 1995 ApJ 441, 70 Hogan C.J. 1978 MNRAS 185, 889 Jones C., Forman W. 1984, ApJ, 276, 38 Kaastra J.S., Lieu R., Mittaz J.P.D., Bleeker J.A.M., Mewe R., Colafrancesco S., Lockman F.J. 1999, ApJL, 519, L119 Kalberla P.M.W., Shchekinov Yu.A., Dettmar R.J. 1999, A&A, 350, L9 Lieu R., Bonamente M., Mittaz J.P.D. 1999, ApJ, 517, L91 Markevitch M. 1998, ApJ, 504, 27 Mittaz J.P.D., Lieu R., Lockman F.J 1998, ApJ, 498, L17 Moore B., Ghigna S., Governato F., Lake G., Quinn T, Stadel J., Tozzi P. 1999 ApJL 524, L19 Mushotzky R.F., Loewenstein M. 1997, ApJ, 481, L63 Peebles P.J.E. 1993 “Principles of Physical Cosmology” (Princeton Univ. Press: Princeton) Pfenniger D., Combes F., Martinet L. 1994, A&A, 285, 79 Rafikov R.R., Draine B.T. 2000 ApJ (submitted) astro-ph/0006320 Reichart D.E., Castander F.J., Nichol R.C. 1999, ApJ, 516, 1 Rephaeli Y. 1995, ARAA, 33, 541 Sarazin C.L. 1988 “X-ray emissions from clusters of galaxies” (CUP: Cambridge) Sarazin C.L., Lieu R. 1998, ApJ, 494, L177 Shull J.M., McKee C.F. 1979, ApJ, 227, 131 Walker M. 1999, MNRAS 308, 551 (W99) Walker M., Wardle M. 1998, ApJ, 498, L125 Walker M., Wardle M. 1999, PASA, 16 (3), 262 Wardle M., Walker M. 1999, ApJL, 527, L109 Wu X.P., Xue Y.J., Fang L.Z. 1999, ApJ, 524, 22
# Physical Mechanisms for the Variable Spin-down of SGR 1900+14
## 1 Introduction
Woods et al. (1999c; hereafter Paper I) have shown that over the period September 1996 – May 1999, the spin-down history of SGR $`1900+14`$ is generally smooth, with an average rate of 6 $`\times `$ 10<sup>-11</sup> s s<sup>-1</sup>. However, during an 80 day interval starting in June 1998 which contains the extremely energetic August 27 flare (Hurley et al. 1999a; Mazets et al. 1999), the average spindown rate of SGR $`1900+14`$ increased by a factor $`\sim `$ 2.3. The sampling of the period history of SGR $`1900+14`$ is insufficient to distinguish between a long-term (i.e. 80 days) increase of the spin-down rate to an enhanced value and a sudden increase (a ‘braking’ glitch) in the spin period connected with the luminous August 27 flare. In this paper, we investigate several physical processes that may generate a positive period increment of the observed magnitude ($`\mathrm{\Delta }P/P\sim 10^{-4}`$) directly associated with the August 27 flare. We focus on two mechanisms: a particle wind coinciding with the period of hyper-Eddington radiative flux; and an exchange of angular momentum between the crustal neutron superfluid and the rest of the neutron star. We show that both models point to the presence of an intense magnetic field. The change in the persistent pulse profile of SGR 1900+14 following the August 27 outburst is considered, and related to continuing particle output in the active region of the burst. We also consider mechanisms that could drive the (nearly) steady spindown observed in both SGRs and AXPs, as well as departures from uniform spindown.
## 2 Braking driven by a particle outflow
The radiative flux during the oscillatory tail of the August 27 event decreased from $`\sim 1\times 10^{42}(D/10\mathrm{kpc})^2`$ erg/s, with an exponential time constant of $`90`$ s (Mazets et al. 1999). The net energy in the tail, radiated in photons of energy $`>`$ 15 keV, was $`\sim 5\times 10^{43}(D/10\mathrm{kpc})^2`$ erg. The tail was preceded by a much harder, narrow pulse of duration $`\sim 0.35`$ s and energy $`>7\times 10^{43}(D/10\mathrm{kpc})^2`$ erg (Mazets et al. 1999). The very fast rise time of $`\sim 10^{-3}`$ s points convincingly to an energy source internal to the neutron star. Just as in the case of the 1979 March 5 event, several arguments indicate the presence of a magnetic field stronger than $`10^{14}`$ G (Thompson & Duncan 1995; hereafter “TD95”). Not only can such a field spin down the star to its observed 5.16 s period (Hurley et al. 1999c; Kouveliotou et al. 1999), but it can power the burst by inducing a large-scale fracture of the neutron star crust. Indeed, only a fraction $`\sim 10^{-2}(B_{\ast }/10B_{\mathrm{QED}})^{-2}`$ of the external dipole magnetic energy must be tapped, where $`B_{\mathrm{QED}}=4.4\times 10^{13}`$ G. This allows for individual SGR sources to emit $`\sim 10^2`$ such giant flares over their $`\sim 10^4`$ yr active lifetimes. More generally, any energy source that excites internal seismic modes of the neutron star must be combined with a magnetic field of this strength, if seismic energy is to be transported across the stellar surface at the (minimum) rate observed in the initial spike (cf. Blaes et al. 1989). A field stronger than $`1.5\times 10^{14}(E/6\times 10^{43}\mathrm{erg})^{1/2}(\mathrm{\Delta }R/10\mathrm{km})^{-3/2}[(1+\mathrm{\Delta }R/R_{\ast })/2]^3`$ G is also required to confine the energy radiated in the oscillatory tail (Hurley et al.
1999a), which maintained a very constant temperature even while the radiative flux declined by an order of magnitude (Mazets et al. 1999). The radiative flux was high enough throughout the August 27 event to advect outward a large amount of baryonic plasma at relativistic speed. Even though one photon polarization mode (the E-mode) has a suppressed scattering cross-section when $`B>B_{\mathrm{QED}}`$ (Paczyński 1992), splitting of E-mode photons will regenerate the O-mode outside the E-mode scattering photosphere, and ensure that the radiation and matter are hydrodynamically coupled near the stellar surface (TD95). Matter will continue to accumulate further out in the magnetosphere during the burst, but cannot exceed $`\tau _\mathrm{T}\sim 1`$ outside a radius where the energy density of the freely streaming photons exceeds the dipole magnetic energy density, $$\frac{L_\mathrm{X}}{4\pi R_\mathrm{A}^2c}\simeq \frac{B_{\ast }^2}{4\pi }\left(\frac{R_\mathrm{A}}{R_{\ast }}\right)^{-6},$$ (1) or equivalently $$\frac{R_\mathrm{A}}{R_{\ast }}=\left(\frac{B_{\ast }^2R_{\ast }^2c}{L_\mathrm{X}}\right)^{1/4}=280\left(\frac{B_{\ast }}{10B_{\mathrm{QED}}}\right)^{1/2}\left(\frac{\mathrm{\Delta }E_\mathrm{X}}{10^{44}\mathrm{erg}}\right)^{-1/4}\left(\frac{\mathrm{\Delta }t_{\mathrm{burst}}}{100\mathrm{s}}\right)^{1/4}.$$ (2) The radiation pressure acting on the suspended matter will overcome the dipole magnetic pressure at a radius $`R_\mathrm{A}`$; the same is true for the ram pressure of matter streaming relativistically outward along the dipole field lines. Photons scattering last at radius $`R_\mathrm{A}`$ and polar angle $`\theta `$ (or relativistic matter escaping the dipole magnetic field from the same position) will carry a specific angular momentum $`\mathrm{\Omega }R_\mathrm{A}^2\mathrm{sin}^2\theta `$. The net loss of angular momentum corresponding to an energy release $`\mathrm{\Delta }E`$ is $$I_{\ast }\mathrm{\Delta }\mathrm{\Omega }\simeq -\frac{\mathrm{\Delta }E}{c^2}\mathrm{\Omega }R_\mathrm{A}^2\mathrm{sin}^2\theta .$$ (3) The period increase accumulated on a timescale $`\mathrm{\Delta }t_{\mathrm{burst}}`$ is largest if the outflow is concentrated in the equatorial plane of the star: $$\frac{\mathrm{\Delta }P}{P}\sim (\mathrm{\Delta }E\mathrm{\Delta }t_{\mathrm{burst}})^{1/2}\frac{B_{\ast }R_{\ast }^3}{I_{\ast }c^{3/2}}=8\times 10^{-6}\left(\frac{\mathrm{\Delta }E}{10^{44}\mathrm{erg}}\right)^{1/2}\left(\frac{\mathrm{\Delta }t_{\mathrm{burst}}}{100\mathrm{s}}\right)^{1/2}\left(\frac{B_{\ast }}{10B_{\mathrm{QED}}}\right).$$ (4) The torque is negligible if the dipole field is in the range $`B_{\ast }\sim 0.1B_{\mathrm{QED}}`$ typical of ordinary radio pulsars. Even for $`B_{\ast }\sim 10B_{\mathrm{QED}}`$ this mechanism can induce $`\mathrm{\Delta }P/P\sim 1\times 10^{-4}`$ only if the outflow lasts longer than the observed duration of the oscillatory tail. Release of $`10^{44}`$ erg over $`\sim 10^4`$ s would suffice; but extending the duration of the outflow to $`\sim 10^5`$ s would imply $`\dot{P}\sim 1.3\times 10^{-8}`$ s s<sup>-1</sup> one day after the August 27 event, in contradiction with the measured value $`\sim 200`$ times smaller. Note also that the short initial spike is expected to impart a negligible torque to the star. This is the basic reason that persistent fluxes of Alfvén waves and particles are more effective at spinning down a magnetar than are sudden, short bursts of equal fluence.
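The fiducial numbers in eqs. (2)–(4) are easy to verify numerically; the following minimal sketch (in Python) assumes the standard values $`R_{\ast }=10`$ km and $`I_{\ast }=10^{45}`$ g cm<sup>2</sup> used in the text.

```python
# Numerical check of eqs. (2) and (4): the Alfven radius of the radiation-
# driven outflow and the resulting period increment. Fiducial values as in
# the text; R_* = 10 km and I_* = 1e45 g cm^2 are standard assumptions.
c     = 3.0e10              # speed of light [cm/s]
B_QED = 4.4e13              # quantum critical field [G]
B     = 10.0 * B_QED        # surface dipole field [G]
R_ns  = 1.0e6               # stellar radius [cm]
I_ns  = 1.0e45              # moment of inertia [g cm^2]
dE    = 1.0e44              # energy radiated in the tail [erg]
dt    = 100.0               # duration of the outflow [s]

L_X  = dE / dt                                   # radiative luminosity [erg/s]
R_A  = R_ns * (B**2 * R_ns**2 * c / L_X)**0.25   # eq. (2)
dP_P = (dE / c**2) * R_A**2 / I_ns               # eq. (3), i.e. eq. (4)

print(f"R_A = {R_A / R_ns:.0f} R_*")             # ~280 R_*
print(f"Delta P / P = {dP_P:.1e}")               # ~8e-6
```

The result falls an order of magnitude short of the observed $`\mathrm{\Delta }P/P\sim 10^{-4}`$, which motivates the slower, mass-loaded outflow considered next.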
One might consider increasing the torque by increasing the inertia of the outflow, so that it moves subrelativistically at the Alfvén surface, at speed $`V`$. For a fixed kinetic luminosity, $`\dot{E}=(1/2)\dot{M}V^2`$, the Alfvén radius scales in proportion to $`(V/c)^{1/4}`$, and one finds $$\frac{\mathrm{\Delta }P}{P}\simeq 1\times 10^{-4}\left(\frac{\mathrm{\Delta }E}{10^{44}\mathrm{erg}}\right)^{1/2}\left(\frac{\mathrm{\Delta }t_{\mathrm{burst}}}{100\mathrm{s}}\right)^{1/2}\left(\frac{B_{\ast }}{10B_{\mathrm{QED}}}\right)\left(\frac{V}{0.2c}\right)^{-3/2}.$$ (5) However, the energy needed to lift this material from the surface of the neutron star exceeds $`\mathrm{\Delta }E=\int \dot{E}𝑑t`$ by a factor $`\sim 10(V/0.2c)^{-2}`$ (assuming $`GM_{\ast }/(R_{\ast }c^2)=0.2`$). This scenario therefore requires some fine-tuning, if the flow is to remain subrelativistic far from the neutron star. Moreover, such a slow outflow is very thick to Thomson scattering and free-free absorption. The Thomson depth along a radial line through the outflow is $$\tau _\mathrm{T}(R_\mathrm{A})=10\left(\frac{\mathrm{\Delta }E}{10^{44}\mathrm{erg}}\right)^{5/4}\left(\frac{B_{\ast }}{10B_{\mathrm{QED}}}\right)^{-1/2}\left(\frac{\mathrm{\Delta }t_{\mathrm{burst}}}{100\mathrm{s}}\right)^{-5/4}\left(\frac{V}{c}\right)^{-13/4}$$ (6) at the Alfvén radius. The free-free optical depth is $$\tau _{\mathrm{ff}}\simeq \frac{\alpha _{\mathrm{em}}\overline{g}_{\mathrm{ff}}}{3^{1/2}(2\pi )^{3/2}}\left(\frac{kT}{m_ec^2}\right)^{-1/2}\frac{\tau _\mathrm{T}^2(hc)^3}{\sigma _\mathrm{T}R(kT)^3}f\left(\frac{h\nu }{kT}\right),$$ (7) where $$f\left(\frac{h\nu }{kT}\right)\equiv \left(\frac{h\nu }{kT}\right)^{-3}\left(1-e^{-h\nu /kT}\right),$$ (8) and $`\alpha _{\mathrm{em}}=1/137`$ is the fine structure constant. This becomes $$\tau _{\mathrm{ff}}(R)=3\times 10^2\left(\frac{R}{R_\mathrm{A}}\right)^{-5}\left(\frac{\mathrm{\Delta }P/P}{10^{-4}}\right)^{5/4}\left(\frac{\mathrm{\Delta }E}{10^{44}\mathrm{erg}}\right)^{1/2}\left(\frac{B_{\ast }}{10B_{\mathrm{QED}}}\right)^{-16/3}\left(\frac{\mathrm{\Delta }t_{\mathrm{burst}}}{100\mathrm{s}}\right)^{-5}f\left(\frac{h\nu }{kT}\right).$$ (9) Here, we have substituted the value of $`V/c`$ needed to generate the observed $`\mathrm{\Delta }P/P`$. Notice that the magnetic dipole field and burst duration enter into $`\tau _{\mathrm{ff}}`$ with strong negative powers. The optical depth through a flow along rigid dipole magnetic field lines is $`\tau _\mathrm{T}(R)=(R/R_\mathrm{A})^{-2}\tau _\mathrm{T}(R_\mathrm{A})`$ at constant $`V`$. This calculation indicates that the flow will be degraded to a black body temperature corresponding to an emission radius of $`\sim 100R_{\ast }`$ = 1000 km, which is $`\sim 1`$ keV at a luminosity $`\sim 10^4L_{\mathrm{edd}}`$, far below the observed value (Mazets et al. 1999; Feroci et al. 1999). Note, however, that Inan et al. (1999) found evidence for an intense ionizing flux of soft X-rays in the Earth’s ionosphere, coincident with the first second of the August 27th event. They fit this ionization data with an incident spectrum containing two thermal components, of temperatures 200 and 5 keV, and with the soft component carrying $`80\%`$ of the energy flux at 5 keV. This model contrasts with the initial spectrum of the August 27 event measured by BeppoSAX, which contained a very hard power-law component ($`\nu F_\nu \propto \nu ^{1/2}`$: Feroci et al. 1999). The effects of pair creation on the ionization rate have yet to be quantified. The four-pronged profile seen within the later pulses of the August 27 event (Feroci et al. 1999; Mazets et al. 1999) has a plausible interpretation in the magnetar model.
The radiation-hydrodynamical outflow originates near the surface of the neutron star, where the opacity of X-ray photons moving across the magnetic field lines is smallest (TD95). This is the case even if the trapped $`e^\pm `$ fireball that powers the burst extends well beyond the stellar surface. In this model, the pattern of the emergent X-ray flux is a convolution of the multipolar structure of the stellar magnetic field, with the orientation of the trapped fireball. The presence of four X-ray ‘jets’ requires that the trapped fireball connect up with four bundles of magnetic field lines extending to at least a few stellar radii.
## 3 Braking via the internal exchange of angular momentum
Now let us consider the exchange of angular momentum between the crustal superfluid neutrons and the rest of the magnetar. Because an SGR or AXP source is slowly rotating, $`\mathrm{\Omega }_{\mathrm{cr}}\sim 1`$ s<sup>-1</sup>, the maximum angular velocity difference $`\omega =\mathrm{\Omega }_{\mathrm{sf}}-\mathrm{\Omega }_{\mathrm{cr}}`$ that can be maintained between superfluid and crust is a much larger fraction of $`\mathrm{\Omega }_{\mathrm{cr}}`$ than it is in an ordinary radio pulsar – and may even exceed it. At the same time, these sources are observed to spin down very rapidly, on a timescale comparable to young radio pulsars such as Crab or Vela. If the rotation of the superfluid were to lag behind the crust in the usual manner hypothesized for glitching radio pulsars, the maximum glitch amplitude would increase in proportion to the spin period (Thompson & Duncan 1996, hereafter TD96; Heyl & Hernquist 1999). One deduces $`|\mathrm{\Delta }P|/P\sim 1\times 10^{-5}`$ by scaling to the largest glitches of the Crab pulsar, and $`|\mathrm{\Delta }P|/P\sim 1\times 10^{-4}`$ by scaling to Vela. How would a glitch be triggered in a magnetar? A sudden fracture of the crust, driven by a magnetic field stronger than $`10^{14}`$ G, induces a horizontal motion at the Alfvén speed $`V_\mathrm{A}=1.3\times 10^7(B/10B_{\mathrm{QED}})(\rho /10^{14}\mathrm{g}\mathrm{cm}^{-3})^{-1/2}`$ cm s<sup>-1</sup>, or higher. This exceeds the maximum velocity difference $`V_{\mathrm{sf}}-V_{\mathrm{cr}}`$ that can be sustained between superfluid and crust, before the neutron vortex lines unpin (e.g. Link, Epstein, & Baym 1993). The internal heat released in a large flare such as the August 27 event is probably comparable to the external X-ray output, if the flare involves a propagating fracture of the neutron star crust. This heat is $`\sim 100`$ times the minimum energy of $`10^{42}`$ erg that will induce a sudden increase in the rate of thermal vortex creep (Link & Epstein 1993). For both reasons, giant flares from magnetars probably trigger the widespread unpinning of superfluid vortices in the crust and hence large rotational glitches. Magnetically-driven fractures have also been suggested as the trigger for vortex unpinning in ordinary radio pulsars (Thompson & Duncan 1993, hereafter TD93; Ruderman, Zhu, & Chen 1998).
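The period scaling invoked in the Crab and Vela estimates above can be made explicit; in the sketch below the glitch amplitudes are representative literature values, quoted only to fix the order of magnitude.

```python
# The linear scaling of the maximum glitch amplitude with spin period, made
# explicit. If unpinning occurs at a fixed critical lag, |Delta Omega| is
# roughly constant, so |Delta P|/P = |Delta Omega|/Omega grows with P.
# The glitch amplitudes below are representative values, not fits.
P_crab, dPP_crab = 0.033, 8.0e-8   # Crab: period [s], largest glitches
P_vela, dPP_vela = 0.089, 2.0e-6   # Vela: period [s], largest glitches
P_sgr = 5.16                       # SGR 1900+14 spin period [s]

print(f"Crab scaling: |dP|/P ~ {dPP_crab * P_sgr / P_crab:.0e}")  # ~1e-5
print(f"Vela scaling: |dP|/P ~ {dPP_vela * P_sgr / P_vela:.0e}")  # ~1e-4
```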
The observation of a period increase associated with the August 27 outburst leads us to re-examine whether the superfluid should, in fact, maintain a faster spin than the crust and charged interior of the star. Transport of superfluid vortices by thermal creep will cause the angular velocity lag $`\omega `$ to relax to its equilibrium value $`\omega _{\mathrm{\infty }}`$ on a timescale $$t_r^{-1}=\left|\frac{\partial \mathrm{\Omega }_{\mathrm{cr}}}{\partial t}\right|\left(\frac{\partial \mathrm{ln}V_{\mathrm{cr}}}{\partial \omega }\right)_{\omega _{\mathrm{\infty }}},$$ (10) if the creep is driven primarily by spindown (Alpar, Anderson, Pines, & Shaham 1984; Link, Epstein, & Baym 1993). The partial derivative of the creep velocity $`\partial V_{\mathrm{cr}}/\partial \omega `$ depends mainly on temperature and density. As a result, this relaxation time is expected to be proportional to $`t/\mathrm{\Omega }_{\mathrm{cr}}`$ at constant temperature. Comparing with a prompt (intermediate) relaxation time of $`\sim 1`$ day ($`\sim 1`$ week) for glitches of the Crab pulsar ($`t\sim 10^3`$ yr; Alpar et al. 1996), one infers $`t_r\sim 1`$ ($`\sim 10`$) years for a magnetar of spin period $`\sim 6`$ s and characteristic age $`P/\dot{P}=3000`$ yr. The response of the crust to the evolving magnetic field is expected to be a combination of sudden fractures and plastic deformation. When the temperature of the crust exceeds about $`0.1`$ of the melt temperature, it will deform plastically (Ruderman 1991). One deduces $`T\simeq 2.4\times 10^8(B/10^2B_{\mathrm{QED}})^2`$ K for magnetars of age $`10^4`$ yr (TD96; Heyl & Kulkarni 1998). Plastic deformation is also expected when $`B^2/4\pi >\mu `$ in the deep crust (TD96). In a circumstance where the magnetic field is transported through the stellar interior on a timescale shorter than the age of the star, departures from corotation between superfluid and crust are primarily due to advection of the superfluid vortices across the stellar surface by the deforming crust, not due to spindown. (Recall the principal definition of a magnetar: a neutron star in which magnetism, not rotation, is the dominant source of free energy.) If these deformations occur on a timescale much less than the spindown age, they will control the equilibrium lag between the rotation of the superfluid and crust. Indeed, the SGR bursts provide clear evidence for deformations on short timescales. More precisely, a large burst such as the August 27 event may be preceded (or followed) by an extended period of slow, plastic deformation. If the superfluid starts near corotation with the crust, this process will take angular momentum out of the superfluid, and force its rotation to lag behind the rest of the star. A glitch triggered by a violent disturbance such as the August 27 event will then cause the neutron star crust to spin down. The angular momentum of the thin shell of crustal superfluid can be expressed simply as $$J_{\mathrm{sf}}=\frac{\kappa }{2}M_{\mathrm{sf}}R_{\ast }^2\int _{-1}^{1}d(\mathrm{cos}\theta )\mathrm{cos}^2\theta n_V(\theta ),$$ (11) when the cylindrical density $`n_V(\theta )`$ of neutron vortex lines depends only on angle $`\theta `$ from the axis of rotation. Here $`\kappa =h/2m_n`$ is the quantum of circulation, and we neglect the rotational deformation of the star. One observes from this expression that the outward motion of vortex lines reduces $`J_{\mathrm{sf}}`$, because the weighting factor $`\mathrm{cos}^2\theta `$ decreases with distance from the axis of rotation. The simplest deformation of the neutron star crust, which preserves its mass and volume, involves a rotational twist of a circular patch through an angle $`\mathrm{\Delta }\varphi `$.
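Before working through the twist geometry, the normalization of eq. (11) can be checked directly: for uniform co-rotation the integral must reduce to $`J_{\mathrm{sf}}=\frac{2}{3}M_{\mathrm{sf}}\mathrm{\Omega }_{\mathrm{cr}}R_{\ast }^2`$, the value used below. A minimal numerical sketch (the superfluid mass is an illustrative assumption):

```python
# Check of the normalization of eq. (11): with n_V = 2 Omega_cr / kappa the
# integral over mu = cos(theta) must return J_sf = (2/3) M_sf R_*^2 Omega_cr.
import math

h, m_n   = 6.63e-27, 1.67e-24
kappa    = h / (2.0 * m_n)           # quantum of circulation [cm^2/s]
Omega_cr = 2.0 * math.pi / 5.16      # crust angular velocity [rad/s]
M_sf     = 2.0e31                    # crustal superfluid mass [g] (assumed)
R_ns     = 1.0e6                     # stellar radius [cm]

N, integral = 100000, 0.0
for i in range(N):                   # midpoint rule over mu in [-1, 1]
    mu = -1.0 + (i + 0.5) * 2.0 / N
    integral += mu**2 * (2.0 * Omega_cr / kappa) * (2.0 / N)

J_sf = 0.5 * kappa * M_sf * R_ns**2 * integral
print(J_sf / (M_sf * R_ns**2 * Omega_cr))   # -> 0.6667, i.e. 2/3
```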
Indeed, the stable stratification of the star (Reisenegger & Goldreich 1992) forces the crust to move horizontally, parallel to the local equipotential surfaces. For this reason, one can neglect horizontal displacements of the crustal material that are compressible in the two non-radial dimensions. The patch has radius $`a\lesssim R_{\ast }`$ and is centered at an angle $`\theta `$ from the axis of rotation. The superfluid is assumed initially to corotate with the crust, $`\mathrm{\Omega }_{\mathrm{sf}}=\mathrm{\Omega }_{\mathrm{cr}}`$, everywhere within the patch, so that $`n_V(\theta )=2\mathrm{\Omega }_{\mathrm{cr}}/\kappa `$. As the patch is rotated, the number of vortex lines per unit surface area of crust is conserved. A piece of crust that moves from $`\theta _i`$ to $`\theta _f`$ ends up with a vortex density $`n_V=(2\mathrm{\Omega }_{\mathrm{cr}}/\kappa )\mathrm{cos}\theta _i/\mathrm{cos}\theta _f`$. The vortex lines are squeezed together in a piece of the crust that moves away from the rotation axis, and are spread apart if the movement is in the opposite direction. If the vortex density is smoothed out in azimuth following this process, the net decrease in the angular momentum of the superfluid is $$\frac{\mathrm{\Delta }J_{\mathrm{sf}}}{J_{\mathrm{sf}}}=\frac{3}{4}\left(\frac{a}{R_{\ast }}\right)^4\left(1-\mathrm{cos}\mathrm{\Delta }\varphi \right)\mathrm{sin}^2\theta .$$ (12) Here, $`J_{\mathrm{sf}}=\frac{2}{3}M_{\mathrm{sf}}\mathrm{\Omega }_{\mathrm{cr}}R_{\ast }^2\simeq 10^{-2}I_{\ast }\mathrm{\Omega }_{\mathrm{cr}}`$ is the total angular momentum of the crustal superfluid. A transient, plastic deformation of the crust would induce a measurable spinup of the crust, by forcing the neutron superfluid further from corotation with the crust. Such a gradual glitch would have the same negative sign as in ordinary radio pulsars, but would not necessarily involve any sudden unpinning of the vortex lines. For example, rotation of a patch of radius $`a=\frac{1}{3}R_{\ast }`$ through an angle $`\mathrm{\Delta }\varphi \sim 1`$ radian would cause a period decrease $`\mathrm{\Delta }P/P=-\mathrm{\Delta }J_{\mathrm{sf}}/(I_{\ast }-I_{\mathrm{sf}})\mathrm{\Omega }_{\ast }=-4\times 10^{-5}`$. A transient spinup of this magnitude may have been observed in the AXP source 1E2259+586 (Baykal & Swank 1996). That excursion from a constant, long term spindown trend can be modelled with a glitch of amplitude $`\mathrm{\Delta }P/P\sim -3\times 10^{-5}`$, although the X-ray period observations are generally too sparse to provide a unique fit.
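The numbers in this example follow directly from eq. (12); a short sketch, with the patch placed at the rotational equator (an assumption that maximizes the effect):

```python
# Evaluation of eq. (12) and the implied period shift for a twisted patch of
# radius a = R_*/3 rotated through ~1 radian, with J_sf ~ 1e-2 I_* Omega_cr.
import math

R_ns    = 1.0e6          # stellar radius [cm]
a       = R_ns / 3.0     # patch radius [cm]
dphi    = 1.0            # twist angle [rad]
theta   = math.pi / 2.0  # patch at the rotational equator (assumed)
J_ratio = 1.0e-2         # J_sf / (I_* Omega_cr), from the text

dJ_over_J = 0.75 * (a / R_ns)**4 * (1.0 - math.cos(dphi)) * math.sin(theta)**2
dP_over_P = -dJ_over_J * J_ratio / (1.0 - J_ratio)   # crust spins up

print(f"Delta J_sf / J_sf = {dJ_over_J:.1e}")   # ~4e-3
print(f"Delta P / P       = {dP_over_P:.1e}")   # ~ -4e-5
```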
## 4 The long-term spin-down of SGRs and AXPs
Let us now consider the persistent spindown rate of SGR 1900+14, and its broader implications for the ages and spindown histories of the SGR and AXP sources. Recall that the spindown rate was almost constant at $`\dot{P}\simeq 6.1\times 10^{-11}`$ s/s before May 1998, and after August 28 1998 (Paper I). A May 1997 measurement of $`P`$ revealed a 5% deviation from this trend; and larger variations in the ‘instantaneous’ spindown rate ($`\sim 40`$%) were found by RXTE in September 1996 and May/June 1998. Another important constraint comes from the observed angular position of SGR 1900+14. It lies just outside the edge of the $`\sim 10^4`$ yr-old supernova remnant G42.8+0.6 (Hurley et al. 1994; Vasisht et al. 1994). A strong parallel can be drawn with SGR 0526-66, which also emitted a giant flare (on 5 March 1979) and is projected to lie inside, but near the edge of, SNR N49 in the Large Magellanic Cloud (Cline et al. 1982). The other known SGRs also have positions coincident with supernova remnants of comparable ages (Kulkarni & Frail 1993; Kulkarni et al. 1994; Murakami et al. 1994; Woods et al. 1999b; Smith, Bradt, & Levine 1999; Hurley et al. 1999d). It seems very likely that these physical associations are real; so we will hereafter adopt the hypothesis that SGR 1900+14 formed at the center of SNR G42.8+0.6. The implied transverse velocity is $$V_{\perp }\simeq 3400\left(\frac{D}{7\text{kpc}}\right)\left(\frac{t}{10^4\text{yr}}\right)^{-1}\text{km s}^{-1}$$ (13) (Hurley et al. 1996; Vasisht et al. 1996; Kouveliotou et al. 1999). Several mechanisms may impart large recoil velocities to newborn magnetars (Duncan & Thompson 1992, hereafter “DT92”), but this very high speed indicates that an age much less than $`1\times 10^4`$ yrs is unlikely. In this context, the short characteristic spindown age $`P/2\dot{P}\simeq 1400`$ yr of SGR 1900+14 gives evidence that the star is currently in a transient phase of accelerated spindown (Kouveliotou et al. 1999). The almost identical spindown age measured for SGR 1806-20 suggests that a similar effect is being observed in that source (Kouveliotou et al. 1998; Table 1). If each SGR undergoes accelerated spindown during a minor fraction $`ϵ_{\mathrm{active}}\equiv (P/\dot{P})/t_{\mathrm{SNR}}\sim 0.25`$ of its life, then its true age increases to $$t=ϵ_{\mathrm{active}}^{-1}\left(\frac{P}{\dot{P}}\right).$$ (14)
### 4.1 Wind-Aided Spindown
Seismic activity will accelerate the spindown of an isolated neutron star, if the star is slowly rotating and strongly magnetized (Thompson & Blaes 1998, hereafter “TB98”). Fracturing in the crust generates seismic waves which couple directly to magnetospheric Alfvén modes and to the relativistic particles that support the associated currents. The fractures are frequent and low energy ($`\sim 10^{35}`$ erg) when the magnetic field is forced across the crust by compressive transport in the core (TD96). When the persistent luminosity $`L_\mathrm{A}`$ of waves and particles exceeds the magnetic dipole luminosity $`L_{\mathrm{MDR}}`$ (as calculated from the stellar dipole field and angular velocity), the spindown torque increases by a factor $`\sqrt{L_\mathrm{A}/L_{\mathrm{MDR}}}`$. This result follows directly from our treatment of hydrodynamic torques in §2. Magnetic stresses force the relativistic wind to co-rotate with the star out to the Alfvén radius $`R_\mathrm{A}`$, which is determined by substituting $`L_\mathrm{A}`$ for $`L_\mathrm{X}`$ in eq. (2): $$\frac{R_\mathrm{A}}{R_{\ast }}=1.6\times 10^4L_{\mathrm{A}\,35}^{-1/4}\left(\frac{B_{\ast }}{10B_{\mathrm{QED}}}\right)^{1/2}.$$ (15) The torque then has the form $`I\dot{\mathrm{\Omega }}=-\mathrm{\Lambda }(L/c^2)R_\mathrm{A}^2\mathrm{\Omega }`$, or equivalently $$\dot{P}=\mathrm{\Lambda }\frac{B_{\ast }R_{\ast }^3}{I_{\ast }}\left(\frac{L_\mathrm{A}}{c^3}\right)^{1/2}P.$$ (16) Here, $`\mathrm{\Lambda }`$ is a numerical factor of order unity that depends on the angle between the angular velocity $`𝛀`$ and the dipole magnetic moment $`𝐦_{\ast }`$. One finds $`\mathrm{\Lambda }\simeq \frac{2}{3}`$ by integrating eq. (3) over polar angle, under the assumption that $`𝛀`$ and $`𝐦_{\ast }`$ are aligned, that the ratio of mass flux to magnetic dipole flux is constant, and that the magnetic field is swept into a radial configuration between the Alfvén radius and the light cylinder.
This normalization is $`\sim 6`$ times larger than deduced by TB98 for a rotator with $`𝐦_{\ast }`$ inclined by 45° with respect to $`𝛀`$: they considered the enhanced torque resulting from the sweeping out of magnetic field lines, but not the angular momentum of the outflow itself. The dipole magnetic field inferred from $`P`$ and $`\dot{P}`$ depends on the persistent wind luminosity. Normalizing $`L_\mathrm{A}`$ to the persistent X-ray luminosity, $`L_\mathrm{A}=L_{\mathrm{A}\,35}\times 10^{35}`$ erg s<sup>-1</sup>, one finds for SGR 1900+14, $$B_{\ast }=3\times 10^{14}L_{\mathrm{A}\,35}^{-1/2}\left(\frac{\mathrm{\Lambda }}{2/3}\right)^{-1}I_{45}\left(\frac{\dot{P}}{6\times 10^{-11}}\right)\left(\frac{P}{5.16\text{s}}\right)^{-1}\text{G.}$$ (17) A very strong magnetic field is needed to channel the flux of Alfvén waves and particles in co-rotation with the star out to a large radius. This extended “lever arm” enhances the magnetic braking torque for a given wind luminosity. The surface dipole field of SGR 1900+14 is inferred to be less than $`B_{\mathrm{QED}}=4.4\times 10^{13}`$ G only if $`L_\mathrm{A}>10^{37}`$ erg s<sup>-1</sup>. That is, the wind must be $`\sim 30`$–100 times more luminous than the time-averaged X-ray output of the SGR in either quiescent or bursting modes. Such a large wind luminosity may conflict with observational bounds on the quiescent radio emission of SGR 1900+14 (Vasisht et al. 1994; Frail, Kulkarni, & Bloom 1999). From these considerations alone (which do not involve the additional strong constraints from bursting activity) we find it difficult to reconcile the observed spindown rate of SGR 1900+14 with dipole fields typical of ordinary radio pulsars (as suggested recently by Marsden, Rothschild, & Lingenfelter 1999). Note also that the synchrotron nebula surrounding SGR 1806-20 (Frail & Kulkarni 1993), thought until recently to emanate from the SGR itself and to require a particle source of luminosity $`\sim 10^{37}`$ erg s<sup>-1</sup> (TD96), appears instead to be associated with a nearby luminous blue variable star discovered by Van Kerkwijk et al. (1995). The new IPN localization of the SGR source (Hurley et al. 1999b) is displaced by 12<sup>′′</sup> from the peak of the radio emission. There is no detected peak in radio emission at the revised location. Since the two SGRs have nearly identical $`\dot{P}/P`$, we estimate a dipole field $`B_{\ast }=3\times 10^{14}L_{\mathrm{A}\,35}^{-1/2}`$ G for SGR 1806-20. During episodes of wind-aided spindown, the period grows exponentially: $$P(t)=𝒫\text{exp}(t/\tau _\mathrm{w}),$$ (18) if the luminosity $`L_\mathrm{A}`$ in outflowing Alfvén waves and relativistic particles remains constant. In this equation, $`\tau _\mathrm{w}\equiv P/\dot{P}=I_{\ast }c^{3/2}/(\mathrm{\Lambda }B_{\ast }R_{\ast }^3L_\mathrm{A}^{1/2})`$ is a characteristic braking time, and $`𝒫`$ is the rotation period at the onset of wind-aided spindown. If $`L_\mathrm{A}`$ has remained unchanged over the lifetime of the star, then $`𝒫`$ would be set by the condition that the Alfvén radius sit inside the light cylinder, $`𝒫=2\pi (B_{\ast }^2R_{\ast }^6/c^3L_\mathrm{A})^{1/4}=1.9L_{\mathrm{A}\,35}^{-1/4}(B_{\,14}/3)^{1/2}`$ s (cf. eq. \[15\]). (Here, $`B_{\ast }=10^{14}B_{\,14}`$ G is the polar magnetic field.)
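Equations (15) and (17) can be evaluated as they stand; the sketch below treats them as plug-in scalings, with the wind luminosity normalized to the persistent X-ray output as assumed in the text.

```python
# Plug-in evaluation of eqs. (15) and (17): dipole field implied by wind-aided
# spindown, and the corresponding Alfven radius. L_A35 = 1 assumes the wind
# luminosity matches the persistent X-ray output.
B_QED = 4.4e13
L_A35 = 1.0                    # wind luminosity in units of 1e35 erg/s
I_45  = 1.0                    # moment of inertia in units of 1e45 g cm^2
P, Pdot = 5.16, 6.0e-11        # SGR 1900+14

B   = 3.0e14 * L_A35**-0.5 * I_45 * (Pdot / 6e-11) / (P / 5.16)   # eq. (17) [G]
R_A = 1.6e4 * L_A35**-0.25 * (B / (10.0 * B_QED))**0.5            # eq. (15) [R_*]
print(f"B ~ {B:.1e} G ({B / B_QED:.1f} B_QED); R_A ~ {R_A:.1e} R_*")
# Raising L_A35 to 100 drops the inferred field to ~3e13 G < B_QED,
# the trade-off discussed above.
```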
The narrow distribution of spin periods in the SGR/AXP sources ($`P=5`$–12 s) would be hard to explain if every source underwent this kind of extended exponential spindown; but the possibility cannot be ruled out in any one source. The total age of such a source would be $$t=(P/\dot{P})\mathrm{ln}(P/𝒫)+t(𝒫),$$ (19) where $`t(𝒫)`$ is the time required to spin down to period $`𝒫`$. Notice that $`\dot{P}\propto P`$ at constant $`L_\mathrm{A}`$, as compared with $`\dot{P}\propto P^{-1}`$ in the case of magnetic dipole radiation (MDR). The net result is to lengthen the spindown age deduced from a given set of $`P`$ and $`\dot{P}`$, relative to the usual estimate $`t_{\mathrm{MDR}}\equiv P/2\dot{P}`$ employed for radio pulsars. Note also that $`P/\dot{P}`$ remains constant throughout episodes of wind-aided spindown. Applying these results to SGR 1900+14 (eq. \[19\]), we would infer that wind-aided spindown has been operating for $`(P/\dot{P})\mathrm{ln}(P/𝒫)=2700`$ yrs (assuming a steady wind of luminosity $`L_{\mathrm{A}\,35}=1`$). Its total age, including the age $`t(𝒫)`$ at the onset of wind-aided braking, would be $`2700+1300=4000`$ yrs. (This number only increases to 5600 yrs if $`L_\mathrm{A}`$ increases to $`10^{36}`$ erg s<sup>-1</sup>.) This age remains uncomfortably short to allow a physical association with SNR G42.8+0.6: it would imply a transverse recoil velocity $`V_{\perp }\simeq 0.03(D/7\text{kpc})c`$ \[eq. (13)\]. The age of SGR 1900+14 can be much longer, and $`V_{\perp }`$ much smaller, if the accelerated spindown we now observe occurs only intermittently (eq. \[14\]). In the magnetar model, it is plausible that small-scale seismic activity and Alfvén-driven winds are only vigorous during transient episodes, which overlap periods of bursting activity (§4.4 below).
### 4.2 Connection with Anomalous X-ray Pulsars
If each magnetar undergoes accelerated spindown only for a fraction $`ϵ_{\mathrm{active}}\equiv (P/\dot{P})/t_{\mathrm{SNR}}\sim 0.25`$ of its life (eq. \[14\]), then the observed SGRs should be outnumbered some $`ϵ_{\mathrm{active}}^{-1}\sim 4`$ times by inactive sources that spin down at a rate $`\dot{P}\sim P/2t_{\mathrm{SNR}}`$. The Anomalous X-ray Pulsars (AXPs) have been identified as such inactive SGRs (Duncan & Thompson 1996; TD96; Vasisht & Gotthelf 1997; Kouveliotou et al. 1998). Although harder to find because they do not emit bright bursts, 6 AXPs are already known in our Galaxy, as compared with 3 Galactic SGRs. Table 1 summarizes the spin behavior and age estimates of the two AXP sources that are presently associated with supernova remnants (1E2259+586 and 1E1841-045). Their characteristic ages are larger than those of SGRs 1900+14 and 1806-20. The characteristic age of 1E2259+586 also appears to be much longer, by about an order of magnitude, than the age of the associated SNR CTB 109. From Wang et al. (1992), $$t_{\mathrm{SNR}}=13,000\left(\frac{E_{\mathrm{SN}}}{0.4\times 10^{51}\text{erg}}\right)^{-1/2}\left(\frac{n}{0.13\text{cm}^{-3}}\right)^{1/2}\text{yr,}$$ (20) where $`E_{\mathrm{SN}}`$ is the supernova energy and $`n`$ is the ISM particle density into which the remnant has expanded. Such a large characteristic age has a few possible explanations in the magnetar model. First, the source may previously have undergone a period of wind-aided spindown that increased its period to $`\sim 4`$ times the value that it would have reached by magnetic dipole braking alone. Indeed, there is marginal evidence for an extended X-ray halo surrounding the source, suggesting recent output of energetic particles (Rho & Petre 1997).
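The age bookkeeping used above for SGR 1900+14 (eqs. \[18\]–\[19\]) is compact enough to script; the sketch below uses the fiducial numbers quoted in §4.1 ($`L_{\mathrm{A}\,35}=1`$).

```python
# Age bookkeeping under wind-aided spindown, eqs. (18)-(19), for SGR 1900+14.
import math

yr = 3.156e7                  # seconds per year
P, Pdot = 5.16, 6.0e-11       # current period [s] and spindown rate [s/s]
P0   = 1.9                    # onset period [s], from the estimate in Sec. 4.1
t_P0 = 1300.0                 # age at onset of wind-aided braking [yr] (text)

tau_w  = (P / Pdot) / yr                    # braking time P/Pdot [yr]
t_wind = tau_w * math.log(P / P0)           # duration of the wind-aided phase
print(f"P/Pdot = {tau_w:.0f} yr; wind-aided phase = {t_wind:.0f} yr")
print(f"total age ~ {t_wind + t_P0:.0f} yr")   # ~4000 yr, cf. ~1e4 yr for SNR
```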
Alternatively, the long characteristic age of 1E2259+586 could be caused by significant decay of the dipole field (TD93 §14.3 and 15.2); or by the alignment of a vacuum magnetic dipole with the axis of rotation (Davis & Goldstein 1970; Michel & Goldwire 1970). Episodes of seismic activity can increase the spindown torque in aligned rotators both by driving the conduction current above the displacement current in the outer magnetosphere, and by carrying off angular momentum in particles and waves. Indeed, the outer boundary of the rigidly corotating magnetosphere, calculated by Melatos (1997) to lie at a radius $`R_{\mathrm{mag}}/R_{\ast }=1\times 10^3\gamma ^{1/5}(B_{\ast }/10^{14}\mathrm{G})^{2/5}`$ (when the displacement current dominates the conduction current), is contained well inside the speed of light cylinder, $`R_{\mathrm{lc}}/R_{\ast }=3\times 10^4(P/6\mathrm{s})`$. Here, $`\gamma `$ is the bulk Lorentz factor of the streaming charges. There may be some tendency toward an initial alignment of $`𝐦_{\ast }`$ and $`𝛀`$ in rapidly rotating neutron stars that support a large scale $`\alpha `$-$`\mathrm{\Omega }`$ dynamo. However, as we argue in §4.3, rapid magnetic field decay will generically force $`𝐦_{\ast }`$ out of alignment with $`𝛀`$ and the principal axes of the star. The remarkable AXP 1841–045 discovered by Vasisht & Gotthelf (1997) is only $`\sim 2000`$ yr old, as inferred from the age of the counterpart supernova remnant (Gotthelf & Vasisht 1997). The ratio $`t_{\mathrm{MDR}}/t_{\mathrm{SNR}}`$ is consistent with unity, in contrast with all other magnetar candidates that have measured spindown and are associated with supernova remnants (Table 1). Of these sources, AXP 1841–045 is also unique in failing to show measurable variations in its spindown rate, X-ray luminosity, or X-ray pulse shape over 10 years (Gotthelf, Vasisht, & Dotani 1999); nor has it emitted any X-ray bursts, or evinced any evidence for a particle outflow through a radio synchrotron halo. These facts reinforce the hypothesis that departures from simple magnetic dipole braking are correlated with internal activity in a magnetar, and suggest that inactive phases can occur early in the life of a magnetar.
### 4.3 Free Precession in SGRs and AXPs
Magnetic stresses will distort the shape of a magnetar (Melatos 1999). The internal magnetic field generated by a post-collapse $`\alpha `$-$`\mathrm{\Omega }`$ dynamo is probably dominated by a toroidal component (DT92; TD93). A field stronger than $`100B_{\mathrm{QED}}`$ is transported through the core and deep crust of the neutron star on a timescale short enough for SGR activity (TD96). Such a magnetar is initially prolate, with quadrupole moment $`ϵ=1\times 10^{-5}(B_{\mathrm{in}}/100B_{\mathrm{QED}})^2`$ (Bonazzola & Gourgoulhon 1996). Rapid field decay may cause the magnetic moment $`𝐦_{\ast }`$ to rotate away from the long principal axis $`\widehat{𝐳}`$ of the star, irrespective of any initial tendency for these two axes to align. The distortion of the rotating figure of the star induced by the rigidity of the crust can be neglected when calculating the spin evolution of the star, as long as $`B>10^{12}(P/1\mathrm{s})^{-1}`$ G (Goldreich 1970).
This hydromagnetic distortion gives rise to free precession on a timescale $$\tau _{\mathrm{pr}}=\frac{2\pi }{ϵ\mathrm{\Omega }}=2\times 10^{-2}\left(\frac{B_{\mathrm{in}}}{100B_{\mathrm{QED}}}\right)^{-2}\left(\frac{P}{6\mathrm{s}}\right)\mathrm{yr}.$$ (21) Even when the magnetosphere is loaded with plasma, the spindown torque will depend on the angle between $`𝐦_{\ast }`$ and the angular velocity $`𝛀`$. Free precession modulates this angle when $`𝐦_{\ast }`$ is canted with respect to the long principal axis $`\widehat{𝐳}`$, and so induces a periodic variation in the spindown torque. Observation of free precession in an SGR or AXP source would provide a direct measure of its total magnetic energy. How may free precession be excited? In the case of a rigid vacuum dipole, free precession is damped by the radiation torque if the inclination between $`𝐦_{\ast }`$ and $`\widehat{𝐳}`$ is less than 55° (Goldreich 1970). At larger inclinations, free precession is excited. In the more realistic case of a plasma-loaded magnetosphere, the rate at which free precession is excited or damped by electromagnetic and particle torques is, unfortunately, not yet known. An additional, internal excitation mechanism, which may be particularly effective in an active SGR, involves rapid transport of the field in short, intense bursts. This is a likely consequence of energetic flares like the March 5 or August 27 events, which probably have occurred $`\sim 10^2`$ times over the lifetimes of these sources. If the principal axes of the star are rearranged on a timescale less than $`\tau _{\mathrm{pr}}`$, then $`𝛀`$ will not have time to realign with the principal axes and precession is excited. Only if the magnetic field is transported on a timescale longer than $`\tau _{\mathrm{pr}}`$, will $`𝛀`$ adiabatically track the principal axes. An interesting alternative suggestion (Melatos 1999) is that forced radiative precession in a magnetar drives the bumpy spindown of the AXP sources 1E2259+586 and 1E1048-593 on a timescale of years. When $`𝐦_{\ast }`$ is not aligned with $`𝛀`$, the asymmetric inertia of the corotating magnetic field induces a torque along $`𝛀\times 𝐦_{\ast }`$ (Davis & Goldstein 1970). This near-field torque acts on a timescale $`\tau _{\mathrm{nf}}`$ that is $`(\mathrm{\Omega }R/c)`$ times the electromagnetic braking time: $$\tau _{\mathrm{nf}}\simeq 0.3\left(\frac{B_{\ast }}{10B_{\mathrm{QED}}}\right)^{-2}\left(\frac{P}{6\mathrm{s}}\right)\mathrm{yr}.$$ (22) As long as $`\tau _{\mathrm{nf}}<\tau _{\mathrm{pr}}`$, this near-field torque drives an anharmonic wobble of the neutron star; in particular, Melatos (1999) considers the case where $`\tau _{\mathrm{nf}}\ll \tau _{\mathrm{pr}}`$. However, inspection of equations (21) and (22) suggests instead that $`\tau _{\mathrm{pr}}\ll \tau _{\mathrm{nf}}`$, because the magnetic energy is dominated by an internal toroidal component. In this case, the near-field torque averages to zero (Goldreich 1970). Note also that this mechanism is predicated on an evacuated inner magnetosphere, although the nonthermal spectra of SGRs and AXPs indicate that this may not be a good approximation (Thompson 1999). The model has the virtue of making clear predictions of the future rotational evolution of the AXPs, which will be tested in coming years.
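A direct comparison of the two timescales, using the scalings of eqs. (21) and (22) with the internal and external field strengths assumed in the text:

```python
# Comparison of the free-precession and near-field timescales, eqs. (21)-(22),
# for an internal toroidal field ~100 B_QED and an external dipole ~10 B_QED
# (both assumptions, as in the text).
B_QED = 4.4e13
P     = 6.0                     # spin period [s]
B_in  = 100.0 * B_QED           # internal (toroidal) field [G]
B_ext = 10.0 * B_QED            # external dipole field [G]

tau_pr = 2.0e-2 * (B_in / (100.0 * B_QED))**-2 * (P / 6.0)   # eq. (21) [yr]
tau_nf = 0.3 * (B_ext / (10.0 * B_QED))**-2 * (P / 6.0)      # eq. (22) [yr]
print(f"tau_pr ~ {tau_pr:.2f} yr << tau_nf ~ {tau_nf:.2f} yr")
# With tau_pr << tau_nf the near-field torque averages to zero over a
# precession cycle, as argued above.
```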
### 4.4 Almost Constant Long-term Spindown
We now address the near-uniformity of the long-term spindown rate of SGR 1900+14, before and after the August 27 outburst (Woods et al. 1999a; Marsden, Rothschild & Lingenfelter 1999; Paper I). It provides an important clue to any mechanism causing acceleration of the rate of spindown. There appears to be no measurable correlation between bursting activity and long-term spindown rate (Paper I). This observation is consistent with the occurrence of short, energetic bursts: the period increment caused by the release of a fixed amount of energy is smaller for outbursts of short duration $`\mathrm{\Delta }t`$, scaling as $`(\mathrm{\Delta }t)^{1/2}`$ (eq. \[4\]). The implied constancy of the magnetic dipole moment is also consistent with the energetic output of the August 27 burst: only $`\sim 0.01(E_{\mathrm{Aug}27}/10^{44}\mathrm{erg})(B_{\ast }/10B_{\mathrm{QED}})^{-2}`$ of the exterior dipole energy need be expended to power the burst. Indeed, if the burst is powered by a large-scale magnetic instability, one infers, from this argument alone, that the dipole field cannot be much smaller than $`10B_{\mathrm{QED}}`$. An additional clue comes from the bursting history of SGR 1806-20. In that source, the cumulative burst fluence grows with time, in a piecewise linear manner (Palmer 1999). This indicates that there exist many quasi-independent active regions in the star, each of which expends a fraction $`\sim 10^{-5}`$ of the total energy budget. The continuous output of waves and particles from the star is therefore the cumulative effect of many smaller regions. Nonetheless, the long term uniformity of $`\dot{P}`$ requires the rate of persistent seismic activity in the crust to remain carefully regulated over a period of years (or longer), even though the bursting activity is much more intermittent. Persistent seismic activity is excited in a magnetar by the compressive mode of ambipolar diffusion of the magnetic field through the core (TD96). The resulting compressive transport of the magnetic field through the crust requires frequent, low energy ($`E\sim 10^{35}`$ erg) fractures of the crust induced by the Hall term in the electrical conductivity. The total energy released in magnetospheric particles has the same magnitude as the heat conducted out from the core to the stellar surface. The (orthogonal) rotational mode of ambipolar diffusion will shear the crust. It can induce much larger fractures that create optically thick regions of hot $`e^\pm `$ plasma trapped by the stellar magnetic field (TD95). The strong intermittency of SGR burst activity appears to be closely tied to the energy distribution of SGR bursts, which is weighted toward the largest events (Cheng et al. 1996). This suggests that the rate of low-energy Hall fracturing will be more uniform, being modulated by longer term variations in the rate of ambipolar diffusion through the neutron star core. Nonetheless, the modest variability observed in the short term measurements of $`\dot{P}`$ (Paper I) must be accounted for. Stochastic fluctuations in the rate of small-scale crustal fractures provide a plausible mechanism. An alternative source of periodic, short-term variability involves free precession in a magnetar whose dipole axis is tilted from the long principal axis (§4.3). Although angular momentum exchange with the crustal superfluid is a promising mechanism to account for the $`\mathrm{\Delta }P/P\sim 10^{-4}`$ period shift associated with the August 27 event, it is less likely to dominate long-term variations in the spindown rate. An order of magnitude increase in the spindown rate driven by such exchange could persist only for a small fraction $`\sim 10^{-1}I_{\mathrm{sf}}/I_{\ast }\sim 10^{-3}`$ of the star’s life.
Moreover, a gradual deformation of the neutron star crust by magnetic stresses will remove angular momentum from the superfluid and decrease the rate of spindown.
## 5 Changes in the Persistent X-ray Flux and Lightcurve
The persistent X-ray lightcurve of SGR 1900+14 measured following the August 27 event (Kouveliotou et al. 1999; Murakami et al. 1999) appears dramatically different from the pulse profile measured earlier: indeed, the profile measured following the burst activity of May/June 1998 (Kouveliotou et al. 1999) is identical to that measured in April 1998 (Hurley et al. 1999c) and September 1996 (Marsden, Rothschild & Lingenfelter 1999). Not only did the pulse-averaged luminosity increase by a factor $`\sim 2.3`$ between the 1998 April 30 and 1998 September 17/18 ASCA observations (Hurley et al. 1999c; Murakami et al. 1999), but the lightcurve also simplified into a single prominent pulse, from a multi-pulsed profile before the August 27 flare. The brighter, simplified lightcurve is suggestive of enhanced dissipation in the active region of the outburst (Kouveliotou et al. 1999). We now discuss the implications of this observation for the dissipative mechanism that generates the persistent X-rays, taking into account the additional constraints provided by the period history of SGR 1900+14.
### 5.1 Magnetic Field Decay
The X-ray output of a magnetar can be divided into two components (TD96): thermal conduction to the surface, driven by heating in the core and inner crust; and external Comptonization and particle bombardment powered by persistent seismic activity in the star. Both mechanisms naturally generate $`\sim 10^{35}`$ erg s<sup>-1</sup> in continuous output. The appearance of a thermal pulse at the surface of the neutron star will be delayed with respect to a deep fracture or plastic rearrangement of the neutron star crust, by the thermal conduction time of $`\sim 1`$ year (e.g. Van Riper, Epstein, & Miller 1991). By contrast, external heating will vary simultaneously with seismic activity in the star. We have previously argued that if 1E2259+586 is a magnetar, then the coordinated rise and fall of its two X-ray pulses (as observed by Ginga; Iwasawa et al. 1992) requires the thermal component of the X-ray emission to be powered, in part, by particle bombardment of two connected magnetic poles (TD96, §4.2). Neither internal heating, nor variability in the rate of persistent seismic activity, appears able to provide a consistent explanation for the variable lightcurve of SGR 1900+14. Deposition of $`\sim 10^{44}`$ erg of thermal energy in the deep crust, of which a fraction $`1-ϵ`$ is lost to neutrino radiation, will lead to an increased surface X-ray output of $`\sim 3\times 10^{35}(ϵ/0.1)`$ erg s<sup>-1</sup>. If, in addition, the heat deposited per unit mass is constant with depth $`z`$ in the crust, then the heat per unit area scales as $`z^4`$; whereas the thermal conduction time varies weakly with $`z`$ at densities above neutron drip (Van Riper et al. 1991). The outward heat flux should, as a result, grow monotonically. This conflicts with the appearance of the new pulse profile of SGR 1900+14 no later than one day after the August 27 event. By the same token, a significant increase in persistent seismic activity – at the rate needed to power the increased persistent luminosity $`L_\mathrm{X}\simeq 1.5\times 10^{35}(D/7\mathrm{kpc})^2`$ erg s<sup>-1</sup> (Murakami et al. 1999) – would induce a measurable change in the spindown rate that was not observed.
The observations require instead a steady particle source that is confined to the inner magnetosphere. A large-scale deformation of the crust of the neutron star, which likely occurred during the August 27 outburst, must involve a horizontal twisting motion (§3). If this motion were driven by internal magnetic stresses (a sudden unwinding of an external magnetic field could release enough energy to power the March 5 or August 27 event, but it was argued in TD95 that the timescale $`\sim R_{\ast }/c\sim 10^{-4}`$ s would be far too short to explain the width of the initial $`\sim 0.2`$ s hard spike; a pulse broadened by a heavy matter loading would suffer strong adiabatic losses and carry a much greater kinetic energy than is observed in $`\gamma `$-rays; and shearing of the external magnetic field requires internal motions that will, in themselves, trigger a large outburst by fracturing the crust), then the external magnetic field lines connected to the rotating patch would be twisted with respect to their opposite footpoints (which we assume to remain fixed in position). We suppose that the twist angle decreases smoothly from a value $`\theta _{\mathrm{max}}`$ at the center of the patch to its boundary at radius $`a`$. This means that a component of the twist will remain even after magnetic reconnection eliminates any tangential discontinuities in the external magnetic field resulting from the motion. The current carried by the twisted bundle of magnetic field is $$I\simeq \frac{\theta _{\mathrm{max}}\mathrm{\Phi }c}{8\pi L},$$ (23) where $`\mathrm{\Phi }=\pi a^2B_{\ast }`$ is the magnetic flux carried by the bundle and $`L`$ is its length. The surface of an AXP or SGR is hot enough ($`T\sim 0.5`$ keV) to feed this current via thermionic emission of $`Z<12`$ ions from one end of the flux bundle, and electrons from the other end. In magnetic fields stronger than $`Z^3\alpha _{\mathrm{em}}^2B_{\mathrm{QED}}=4\times 10^{13}(Z/26)^3`$ G, even iron is able to form long molecular chains. The cohesion energy per atom is $$\frac{\mathrm{\Delta }E}{Z^3\times 13.6\mathrm{eV}}=1.52\left(\frac{B}{Z^3\alpha _{\mathrm{em}}^2B_{\mathrm{QED}}}\right)^{0.37}-\frac{7}{24}\left[\mathrm{ln}\left(\frac{B}{Z^3\alpha _{\mathrm{em}}^2B_{\mathrm{QED}}}\right)\right]^2.$$ (24) In this expression, the first term is the binding energy per atom in the chain (Neuhauser, Koonin, & Langanke 1987; Lai, Salpeter, & Shapiro 1992), from which we subtract the binding energy of an isolated atom (Lieb, Solovej, & Yngvason, 1992). Thermionic emission of ions is effective above a surface temperature $$T_{\mathrm{thermionic}}\simeq \frac{\mathrm{\Delta }E}{30}.$$ (25) Substituting $`B=10B_{\mathrm{QED}}=4.4\times 10^{14}`$ G, one finds that $`T_{\mathrm{thermionic}}`$ remains well below 0.5 keV for $`Z<12`$, but grows rapidly at higher $`Z`$. Thus, the surface of a magnetar should be an effective thermionic emitter for a wide range of surface compositions. We can now estimate the energy dissipated by the current flow. The kinetic energy carried by ions of charge $`Z`$ and mass $`A`$ is $$L_{\mathrm{ion}}=\left(\frac{A}{Z}\right)\frac{Im_p\varphi }{e}=3\times 10^{35}\frac{\theta _{\mathrm{max}}A}{Z}\left(\frac{B_{\ast }}{10B_{\mathrm{QED}}}\right)\left(\frac{L}{R_{\ast }}\right)^{-1}\left(\frac{a}{0.5\mathrm{km}}\right)^2\mathrm{erg}\mathrm{s}^{-1}.$$ (26) Here, $`\varphi \simeq g_{\ast }R_{\ast }=GM_{\ast }/R_{\ast }`$ is the gravitational potential that the charges have to climb along the tube, and we assume $`M_{\ast }=1.4M_{\odot }`$, $`R_{\ast }=10`$ km.
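For concreteness, the current and dissipation estimates of eqs. (23) and (26) can be evaluated for a $`\sim 1`$ radian twist of a 0.5 km patch; the composition factor $`A/Z`$ and the bundle length below are assumptions.

```python
# Magnitude of the field-aligned current, eq. (23), and the ion luminosity in
# its scaling form, eq. (26), for a ~1 rad twist of a 0.5 km patch. The
# composition factor A/Z and the bundle length L are assumptions.
import math

c     = 3.0e10
B_QED = 4.4e13
B     = 10.0 * B_QED       # surface dipole field [G]
a     = 5.0e4              # patch radius [cm] (0.5 km)
L     = 1.0e6              # flux-bundle length [cm] (~R_*)
theta_max = 1.0            # twist angle [rad]

Phi = math.pi * a**2 * B                         # magnetic flux [G cm^2]
I   = theta_max * Phi * c / (8.0 * math.pi * L)  # eq. (23) [esu/s]
print(f"I ~ {I:.1e} esu/s = {I / 3e9:.1e} A")

A_over_Z = 2.0
L_ion = 3.0e35 * theta_max * A_over_Z * (B / (10.0 * B_QED)) \
        / (L / 1.0e6) * (a / 5.0e4)**2           # eq. (26) scaling [erg/s]
print(f"L_ion ~ {L_ion:.1e} erg/s")   # comparable to the persistent output
```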
Note that the particle flow estimated here is large enough to break up heavy nuclei even where the outflowing current has a positive sign: electrons returning from the opposite magnetic footpoint are energetic enough for electron-induced spallation to be effective (e.g. Schaeffer, Reeves, & Orland 1982). On what timescale will this twist decay? Each charge accumulates a potential energy $`Am_pg_{\ast }z`$ at a height $`z`$ above the surface of the neutron star. Equating this energy with the electrostatic energy released along the magnetic field, one requires a longitudinal electric field $`E=Am_pg_{\ast }/Ze`$. The corresponding electrical conductivity is $$\sigma =\frac{I}{\pi a^2E}=\left(\frac{Z\theta _{\mathrm{max}}}{8\pi A}\right)\frac{eBc}{m_pg_{\ast }L},$$ (27) and the ohmic decay time is $$t_{\mathrm{ohmic}}=\frac{4\pi \sigma L^2}{c^2}=\left(\frac{Z\theta _{\mathrm{max}}}{2A}\right)\frac{eB_{\ast }L}{m_pg_{\ast }c}=300\left(\frac{Z\theta _{\mathrm{max}}}{A}\right)\left(\frac{B_{\ast }}{10B_{\mathrm{QED}}}\right)\left(\frac{L}{10\mathrm{km}}\right)\mathrm{yr}.$$ (28) This timescale agrees with that obtained by dividing the persistent luminosity $`L_{\mathrm{ion}}`$ into the available energy of the twisted magnetic field. Further twisting of the field lines would prolong or shorten the lifetime of the current flow. A static twist in the surface magnetic field will not produce a measurable increase in the torque because the current flow is contained well inside the Alfvén radius (eq. \[15\]). The particles that carry the current lose their energy to Compton scattering and surface impact on a timescale $`\sim R_{\ast }/c`$ or shorter. By contrast, a persistent flux of low amplitude Alfvén waves into the magnetosphere causes the wave intensity to build up, until the wave luminosity transported beyond the Alfvén radius balances the continuous output of the neutron star (TB98).
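A numerical check of eq. (28), with $`Z/A=1/2`$, $`\theta _{\mathrm{max}}=1`$, and a 10 km flux bundle (all assumptions carried over from the text):

```python
# Numerical check of the ohmic decay time of the twist, eq. (28).
G     = 6.67e-8
c     = 3.0e10
e_esu = 4.8e-10             # elementary charge [esu]
m_p   = 1.67e-24            # proton mass [g]
M_ns, R_ns = 1.4 * 2.0e33, 1.0e6
g_ns  = G * M_ns / R_ns**2  # surface gravity [cm/s^2]

B = 10.0 * 4.4e13           # dipole field [G]
L = 1.0e6                   # flux-bundle length [cm] (10 km)
Z_over_A, theta_max = 0.5, 1.0

t_ohmic = (Z_over_A * theta_max / 2.0) * e_esu * B * L / (m_p * g_ns * c)
print(f"t_ohmic ~ {t_ohmic / 3.156e7:.0f} yr")
# ~180 yr, the same order as the ~300 (Z theta_max / A) yr scaling of eq. (28).
```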
### 5.2 Evidence Against Persistent Accretion
Direct evidence that the persistent X-ray output of SGR 1900+14 is not powered by accretion comes from measurements one day after the August 27 outburst (Kouveliotou et al. 1999). The increase in persistent $`L_\mathrm{X}`$ is not consistent with a constant spindown torque, unless there was a substantial change in the angular pattern of the emergent X-ray flux following the burst. In addition, the radiative momentum deposited by that outburst on a surrounding accretion disk would more than suffice to expel the disk material, out to a considerable distance from the neutron star. In such a circumstance, the time to re-establish the accretion flow onto the neutron star, via inward viscous diffusion from the inner boundary $`R_{\mathrm{in}}`$ of the remnant disk, would greatly exceed one day. (This estimate of the viscous timescale is conservative for two reasons: first, if the binding energy of the disk material were balanced with the incident radiative energy, the inner boundary of the remnant disk would lie at an even larger radius; second, the central X-ray source may puff up the disk, which increases $`\tau _{\mathrm{visc}}`$, eq. \[30\].) Let us consider this point in more detail. The accretion rate (assumed steady and independent of radius before the outburst) is related to the surface mass density $`\mathrm{\Sigma }(R)`$ of the hypothetical disk via $$\dot{M}=\frac{2\pi R^2\mathrm{\Sigma }(R)}{t_{\mathrm{visc}}(R)}.$$ (29) The viscous timescale is, as usual, $$t_{\mathrm{visc}}(R)\simeq \alpha _{\mathrm{SS}}^{-1}\left(\frac{H(R)}{R}\right)^{-2}\left(\frac{R^3}{GM_{\ast }}\right)^{1/2},$$ (30) where $`H(R)`$ is the half-thickness of the disk at radius $`R`$ and $`\alpha _{\mathrm{SS}}<1`$ is the viscosity coefficient (Shakura & Sunyaev 1973). Balancing the radiative momentum incident on a solid angle $`2\pi (2H/R)`$ against the momentum $`\pi \mathrm{\Sigma }(R)R^2(2GM_{\ast }/R)^{1/2}`$ of the disk material moving at the escape speed, and equating the persistent X-ray luminosity $`L_\mathrm{X}`$ with $`GM_{\ast }\dot{M}/R_{\ast }`$, one finds $$t_{\mathrm{visc}}=\frac{E_{\mathrm{Aug}27}}{L_\mathrm{X}}\left(\frac{2GM_{\ast }}{R_{\ast }c^2}\right)^{1/2}\left(\frac{R_{\mathrm{in}}}{R_{\ast }}\right)^{1/2}\left(\frac{H(R_{\mathrm{in}})}{R_{\mathrm{in}}}\right).$$ (31) The most important factor in this expression is the ratio of burst energy to persistent X-ray luminosity, $`E_{\mathrm{Aug}27}/L_\mathrm{X}=30(E_{\mathrm{Aug}27}/10^{44}\mathrm{erg})(L_\mathrm{X}/10^{35}\mathrm{erg}\mathrm{s}^{-1})^{-1}`$ yr. The timescale is long as the result of the enormous energy of the August 27 flare, and the relatively weak persistent X-ray flux preceding it. It is interesting to compare with Type II X-ray bursts from the Rapid Burster and GRO J1744-28, which are observed to be followed by dips in the persistent emission (Lubin et al. 1992; Kommers et al. 1997). These bursts, which certainly are powered by accretion, involve energies $`\sim 10^4`$ times smaller and a persistent source luminosity that is $`10^2`$–$`10^3`$ times higher. Indeed, the dips in the persistent emission following the Type II bursts last for only 100-200 s, consistent with the above formula. Now let us evaluate eq. (31) in more detail. At a fixed $`\dot{M}`$, the surface mass density of the disk increases with decreasing $`\alpha _{\mathrm{SS}}`$, and so a conservative upper bound on $`t_{\mathrm{visc}}`$ is obtained by choosing $`\alpha _{\mathrm{SS}}`$ to be small. (Note that eq. (31) depends implicitly on $`\alpha _{\mathrm{SS}}`$ only through the factor of $`R_{\mathrm{in}}^{1/2}\propto \alpha _{\mathrm{SS}}^{-1/2}`$.) For the observed parameters $`E_{\mathrm{Aug}27}\simeq 10^{44}`$ erg (Mazets et al. 1999) and $`L_\mathrm{X}=10^{35}`$ erg s<sup>-1</sup> (before the August 27 outburst; Hurley et al. 1999a), one finds $`R_{\mathrm{in}}=1\times 10^{10}`$ cm when $`\alpha _{\mathrm{SS}}=0.01`$. The corresponding thickness of the gas-pressure dominated disk is (Novikov & Thorne 1973) $`H(R_{\mathrm{in}})/R_{\mathrm{in}}\simeq 5\times 10^{-3}`$. The timescale over which the persistent X-ray flux would be re-established is extremely long, $`t_{\mathrm{visc}}\sim 10`$ yr. One final note on disk accretion. There is no observational evidence for a binary companion to any SGR or AXP (Kouveliotou 1999). Because of its large recoil velocity (eq. \[13\]), SGR 1900+14 almost certainly could not remain bound in a binary system. A similar argument applies to the other giant flare source, SGR 0526–66 (DT92). Thus, any accretion onto SGR 1900+14 would have to come from a fossil disk. To remain bound, the initial radius of such a disk must be less than $`GM_{\ast }/V_{\mathrm{rec}}^2\simeq 10^4`$ km, for stellar recoil velocity $`V_{\mathrm{rec}}\simeq (3/2)^{1/2}V_{\perp }`$ \[eq. (13)\].
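Returning to eq. (31), evaluating it with the parameters adopted above confirms the quoted recovery time:

```python
# Evaluation of eq. (31): time to re-establish accretion after the radiative
# impulse of the August 27 flare, with the parameters adopted in the text.
import math

G, c  = 6.67e-8, 3.0e10
M_ns  = 1.4 * 2.0e33        # stellar mass [g]
R_ns  = 1.0e6               # stellar radius [cm]
E_aug = 1.0e44              # flare energy [erg]
L_X   = 1.0e35              # pre-flare persistent luminosity [erg/s]
R_in  = 1.0e10              # swept-clear inner disk radius [cm]
H_R   = 5.0e-3              # disk aspect ratio H/R at R_in

t_visc = (E_aug / L_X) * math.sqrt(2.0 * G * M_ns / (R_ns * c**2)) \
         * math.sqrt(R_in / R_ns) * H_R
print(f"t_visc ~ {t_visc / 3.156e7:.0f} yr")   # ~10 yr, versus <~1 day observed
```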
The behavior of a passively spreading remnant disk appears inconsistent with the measured spin evolution of the AXP and SGR sources (Li 1999). A trigger involving sudden accretion of an unbound planetesimal (Colgate & Petschek 1981) is not consistent with the log-normal distribution of waiting periods between bursts (Hurley et al. 1994) in SGR 1806-20. An internal energy source is also indicated by the power-law distribution of burst energies, with index $`dN/dE\propto E^{-1.6}`$ similar to the Gutenberg-Richter law for earthquakes (Cheng et al. 1996). In addition, the mass of the accreted planetesimals must exceed $`1/30`$ times the mass of the Earth’s Moon in the case of the March 5 and August 27 events. It is very difficult to understand how the accretion of a baryon-rich object could induce a fireball as clean as the initial spike of these giant flares (TD95, §7.3.1). When $`B_{\ast }\ll 10^{14}`$ G, only a tiny fraction $`\sim (B_{\ast }/B_E)^2`$ of the hydrostatic energy released would be converted to magnetic energy; here, $`B_E\sim 10^{14}`$ G is the minimum field needed to directly power the outburst.
## 6 Conclusions
The observation (Paper I) of a rapid spindown associated with the August 27 event, $`\mathrm{\Delta }P/P=+1\times 10^{-4}`$, provides an important clue to the nature of SGR 1900+14. We have described two mechanisms that could induce such a rapid loss of angular momentum from the crust and charged interior of the star. The torque imparted by a relativistic outflow during the August 27 event is proportional to $`B_{\ast }`$, but falls short by an order of magnitude even if $`B_{\ast }\simeq 10B_{\mathrm{QED}}=4.4\times 10^{14}`$ G. Only if SGR 1900+14 released an additional $`\sim 10^{44}`$ erg for an extended period $`\sim 10^4`$ s immediately following the August 27 outburst would the loss of angular momentum be sufficient. (The integrated torque increases with the duration $`\mathrm{\Delta }t`$ of the outflow as $`(\mathrm{\Delta }t)^{1/2}`$; eq. \[4\].) The alternative model, which we favor, involves a glitch driven by the violent disruption of the August 27 event. The unpinned neutron superfluid will absorb angular momentum if it starts out spinning more slowly than the rest of the star – the opposite of the situation encountered in glitching radio pulsars. We have argued that a slowly spinning neutron superfluid is the natural consequence of magnetic stresses acting on the neutron star crust. A gradual, plastic deformation of the crust during the years preceding the recent onset of bursting activity in SGR 1900+14 would move the superfluid out of co-rotation with the rest of the star, and slow its rotation. The magnitude of the August 27 glitch can be crudely estimated by scaling to the largest glitches of young, active pulsars with similar spindown ages and internal temperatures. Depending on the object considered, one deduces $`|\mathrm{\Delta }P|/P\sim 10^{-5}`$–$`10^{-4}`$. This model for the August 27 period increment has interesting implications for the longer-term spindown history of the Soft Gamma Repeaters and Anomalous X-ray Pulsars. It suggests that these objects can potentially glitch, with or without associated bursts, and that $`P`$ will suddenly shift upward, rather than downward as in radio pulsar glitches. By the same token, an accelerated rate of plastic deformation within a patch of the neutron star crust will force the superfluid further out of co-rotation and induce a transient (but potentially resolvable) spin-up of the crust (TD96). The magnitude of such a ‘plastic spin-up’ event (eq. \[12\]
) could approach that inferred for the August 27 event, but with the usual (negative) sign observed in radio pulsar glitches. Indeed, RXTE spin measurements provide evidence for a rapid spin-up of the AXP source 1E2259+586 (Baykal & Swank 1996), to the tune of $`\mathrm{\Delta }P/P=-3\times 10^{-5}`$. Transient variations in the persistent X-ray flux of the AXP 1E2259+586, which were not associated with any large outbursts, also require transient plastic deformations of the neutron star crust (TD96). The rapid spindown rate of SGR 1900+14 during the past few years, $`\dot{P}=6\times 10^{-11}`$ s/s, indicates that this SGR is in a transient phase of accelerated spindown, with stronger braking torques than would be produced by simple magnetic dipole radiation (Kouveliotou et al. 1999). Such accelerated spindown can be driven by magnetically-induced seismic activity, with small-scale fractures powering a steady relativistic outflow of magnetic vibrations and particles. This outflow, when channeled by the dipole magnetic field, carries away the star’s angular momentum. A very strong field, $`B_{*}\gtrsim B_{\mathrm{QED}}`$, is required to give a sufficiently large “lever arm” to the outflow. Further evidence for episodic accelerated spindown comes from the two AXPs that are directly associated with supernova remnants: 1E2259+586 and 1E1841-045 (§4.2). The characteristic ages $`P/2\dot{P}`$ of these stars are longer than the ages of the associated supernova remnants, and also longer than the characteristic ages of the SGRs. This suggests that the AXPs are magnetars observed during phases of seismic inactivity. The constancy of the long-term spindown rate before and after the bursts and giant flare of 1998 (Woods et al. 1999a; Marsden, Rothschild & Lingenfelter 1999; Paper I) gives evidence that the spindown rate correlates only weakly with bursting activity. It is easy to understand why short, intense bursts are not effective at spinning down a magnetar: the Alfvén radius (the length of the “lever arm”) decreases as the flux of Alfvén waves and particles increases. A persistent output of waves and particles could be driven by the compressive mode of ambipolar diffusion in the liquid neutron star interior (TD96). As the magnetic field is forced through the crust, the Hall term in the electrical conductivity induces many frequent, small fractures ($`\mathrm{\Delta }E\sim 10^{35}`$ erg). By contrast, large fractures of the crust are driven by shear stresses that involve the orthogonal (rotational) mode of ambipolar diffusion. The greater intermittency of bursting activity is a direct consequence of the dominance of the total burst fluence by the largest bursts (Cheng et al. 1996). Forced radiative precession could cause a short-term modulation of the spindown rate in a magnetar (Melatos 1999), but this requires an evacuated magnetosphere that may not be consistent with the observed non-thermal spectra of the SGR and AXP sources (Thompson 1999). We have argued that transport of the neutron star’s magnetic field will deform the principal axes of the star and induce free precession. The resulting modulation of the spindown torque has an even shorter timescale (eq. ), and is potentially detectable. A twist in the exterior magnetic field induced by a large scale fracture of the crust will force a persistent thermionic current through the magnetosphere (§5). 
The resulting steady output in particles would explain the factor $`\sim 2.3`$ increase in the persistent X-ray flux of SGR 1900+14 immediately following the August 27 event (Murakami et al. 1999) if $`B_{*}\simeq 10B_{\mathrm{QED}}`$ and the twist is through $`\sim 1`$ radian. In this model, the simplification of the lightcurve – into a single large pulse – is due to concentrated particle heating at the site of the August 27 event. We conclude by emphasizing the diagnostic potential of coordinated measurements of spectrum, flux, bursting behavior and period derivative. When considered together, they constrain not only the internal mechanism driving the accelerated spindown of an SGR source, but also the mechanism powering its persistent X-ray output. For example: an increase in surface X-ray flux will be delayed by $`\sim 1`$ year with respect to an episode of deep heating (e.g. Van Riper et al. 1991); whereas a shearing and twisting of the external magnetic field of the neutron star will drive a simultaneous increase in the rate of external particle heating (TD96). The magnetar model offers a promising framework in which to interpret these observations. We acknowledge support from NASA grant NAG 5-3100 and the Alfred P. Sloan Foundation (C.T.); NASA grant NAG5-8381 and Texas Advanced Research Project grant ARP-028 (R.C.D.); the cooperative agreement NCC 8-65 (P.M.W.); and NASA grants NAG5-3674 and NAG5-7808 (J.vP.). C.T. thanks A. Alpar and M. Ruderman for conversations.
# Dynamic Screening in Thermonuclear Reactions ## 1 Introduction Our knowledge of reaction rates in the Sun is becoming more and more accurate. There have been several works dealing with electrostatic screening effects in the solar plasma (as for example, Carraro, Schäfer, & Koonin 1988; Gruzinov & Bahcall 1997; Brüggen & Gough 1997). Recently, more precise calculations have been made beyond the linear regime (Gruzinov & Bahcall 1998). The screening has the effect of lowering the Coulomb barrier between the interacting ions, therefore enhancing the reaction rates. This is included in the enhancing factor in the weak screening limit (where $`Z_1Z_2e^2/R_DT\ll 1`$ and $`R_D`$ is the Debye radius) $$w=\mathrm{exp}\lambda ,$$ (1) where $$\lambda =Z_1Z_2e^2/R_DT.$$ (2) This screening factor uses the Debye-Hückel expression. Usually, the calculation is made in the electrostatic case, based on the calculation of Salpeter (1954). It is assumed that the motion of the screened ion is slow, compared to the motion of the screening particles. Carraro, Schäfer, & Koonin (1988) have studied the case of dynamic screening. In a recent study, Gruzinov (1998) argued that in the weak screening limit, there are no dynamic screening corrections to Salpeter’s enhancement factor, even for high energies. He based his conclusion on two arguments: 1) For thermodynamic equilibrium, the Gibbs probability distributions in velocity and configuration space are decoupled; and 2) Through an analysis of the thermal electric field, we can estimate the random electrostatic potential and conclude that the enhancement of the reaction rates is given by the Salpeter expression. We show below that, although these arguments appear simple, careful examination reveals that they are wrong. In §2 and §3 we recall the arguments and show why they are incorrect. ## 2 Gibbs Distribution The enhancement factor for a reaction rate between nuclei of charges $`Z_1e`$ and $`Z_2e`$ is $$w=\mathrm{exp}(Z_2e\varphi _0/T)$$ (3) ($`k_B=1`$), where $`\varphi _0`$ is the electrostatic potential, created by the plasma on the particle $`Z_1e`$. A test particle $`Z_1e`$ moving through a plasma with velocity $`v^{\prime }`$ suffers dynamic screening. The electrostatic potential is written as (see Krall & Trivelpiece (1973), chap.11) $$\varphi _0=4\pi eZ_1\int \frac{d^3k}{(2\pi )^3}\left[\frac{1}{\epsilon (k,kv^{\prime })}-1\right]k^{-2},$$ (4) where $`\epsilon `$ is the dielectric permittivity, which describes the plasma response to the test particle. ($`\epsilon `$ depends on the velocity distribution of the plasma particles $`f(v)`$, which usually is taken as Maxwellian). From the above expression, it can be seen that $`\varphi _0`$ depends on the velocity $`v^{\prime }`$ of particle $`Z_1e`$. Gruzinov (1998) argued that the Gibbs probability distribution $`\rho `$ is $$\rho \propto \mathrm{exp}\left(-\beta \sum _i\frac{m_iv_i^2}{2}-\beta \sum _{i>j}\frac{e_ie_j}{r_{ij}}\right),$$ (5) (where $`\beta =1/T`$) which can be factored into $$\rho \propto \mathrm{exp}\left(-\beta \sum _i\frac{m_iv_i^2}{2}\right)\mathrm{exp}\left(-\beta \sum _{i>j}\frac{e_ie_j}{r_{ij}}\right).$$ (6) From the above, it was argued that the distributions in velocity and configuration space are decoupled. This first argument is simple. However, it is based on a misunderstanding. The general Gibbs probability distribution $`\rho `$ for a plasma is $$\rho \propto \mathrm{exp}\left(-\beta \sum _i\frac{m_iv_i^2}{2}\right)\mathrm{exp}\left(-\beta \sum _{i>k}W_{ik}\right),$$ (7) where $`W_{ik}`$ is the interaction energy between the particles. 
$`W_{ik}`$ is the interaction energy of particle $`i`$ with all the other particles in the plasma. This energy is the Coulomb energy $`e_ie_j/r_{ij}`$, which depends on the positions of all the other particles. It is an assumption that their positions $`r_i`$, $`r_j`$ are independent of their velocities. However, we know from dynamic screening, Eq. (4), that their coordinates are, in fact, velocity dependent. Eq. (6) states that the coordinates are independent of the velocities only to zeroth order. The exact Gibbs distribution takes into account dynamic corrections. In fact, this argument is circular. The separability of the Gibbs distribution into velocity and configuration contributions is true only if the particles are moving under a (static) conservative force. The argument to show that there are no dynamical contributions, therefore, rests on a statement that is valid only if there are no dynamical contributions, which, in fact, is what he set out to prove. ## 3 Thermal Electric Field Due to thermal fluctuations in the plasma, the reaction rate between two fast moving ions $`Z_1e`$ and $`Z_2e`$ is enhanced. The enhancement factor of a reaction rate is $$w=1+\beta ^2e^2Z_1Z_2\langle \varphi ^2\rangle ,$$ (8) where $`\langle \varphi ^2\rangle `$ is the average of the square of the random electrostatic potential $`\varphi `$. $`\langle \varphi ^2\rangle `$ is given by $$\langle \varphi ^2\rangle =\int \frac{d^3k}{(2\pi )^3}\langle \varphi ^2\rangle _k,$$ (9) where $`\langle \varphi ^2\rangle _k=\langle E^2\rangle _k/k^2`$. In his second argument, the expression used for the fluctuation electric field was (Krall & Trivelpiece (1973), chap.11) $$\frac{\langle E^2\rangle _k}{8\pi }=\left(\frac{T}{2}\right)\left[1-\frac{1}{\epsilon (0,k)}\right].$$ (10) Substituting this expression in Eq.(8) and Eq.(9), Salpeter’s expression is obtained. However, the expression of the thermal electric field (for a Maxwellian plasma), given by Eq. (10), assumes that $`\omega \ll T`$ ($`\mathrm{\hbar },k_B=1`$). The general expression for the intensity of the electric field is given by the Fluctuation-Dissipation Theorem (see for example Sitenko (1967), Akhiezer et al. (1975)): $$\frac{\langle E^2\rangle _{k\omega }}{8\pi }=\frac{1}{e^{\omega /T}-1}\frac{\mathrm{Im}\epsilon }{|\epsilon |^2}$$ (11) and $$\frac{\langle E^2\rangle _k}{8\pi }=\int d\omega \frac{1}{e^{\omega /T}-1}\frac{\mathrm{Im}\epsilon }{|\epsilon |^2}.$$ (12) This expression includes electrostatic Langmuir waves as well as all other fluctuations that exist in a plasma. In the limit of $`\omega \ll T`$, $$\frac{\langle E^2\rangle _k}{8\pi }=\int d\omega \frac{T}{\omega }\frac{\mathrm{Im}\epsilon }{|\epsilon |^2},$$ (13) for which it is then possible to use the Kramers-Kronig relations. Eq.(13) then turns out to be $$\frac{\langle E^2\rangle _k}{8\pi }=\left(\frac{T}{2}\right)\left[1-\frac{1}{\epsilon (0,k)}\right],$$ (14) which is the expression used. The assumption that $`\omega \ll T`$, however, is very strong. In fact, in the case of the transverse electric field, we showed (Opher & Opher 1997a, 1997b) that only by not making this strong assumption is the blackbody spectrum at high frequencies obtained. We recently showed (Opher & Opher 1999) that by not making the assumption that $`\omega \ll T`$, the energy of a plasma in the classical limit is larger than previously thought. Without assuming that $`\omega \ll T`$, the enhancement factor is given by Eq. (8) with Eq. (12). It is to be noted that the second argument is also circular: it assumes that $`\omega \ll T`$, making it a static analysis, which is then used to prove that there does not exist a dynamic contribution. It is to be emphasized that we are not arguing here whether or not dynamic screening exists in thermonuclear reactions. 
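(As a purely numerical aside, not part of the original argument, one can quantify how strong the assumption $`\omega \ll T`$ is by comparing the exact weight $`1/(e^{\omega /T}-1)`$ of Eq. (12) with the classical limit $`T/\omega `$ used in Eq. (13):)

```python
import math

def exact_weight(x):        # x = omega / T
    return 1.0 / math.expm1(x)      # 1/(e^x - 1), numerically stable

def classical_weight(x):
    return 1.0 / x                  # T/omega, in units of T

for x in (0.01, 0.1, 0.5, 1.0, 3.0):
    ex, cl = exact_weight(x), classical_weight(x)
    print(f"omega/T = {x:4.2f}: exact = {ex:9.4f}, "
          f"classical = {cl:9.4f}, error = {(cl - ex) / ex:6.1%}")
```

Already at $`\omega \simeq T/2`$ the classical weight overestimates the exact one by about 30%, and at $`\omega \simeq T`$ by more than 70%.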
We only show that the arguments used by Gruzinov (1998) to prove that dynamic screening does not exist are not valid. M.O. would like to thank the Brazilian agency FAPESP for support (no. 97/13427-8). R.O. thanks the Brazilian agency CNPq for partial support. Both authors would also like to thank the Brazilian project Pronex/FINEP (no. 41.96.0908.00) for support.
# Phase–Space Constraints on Visible and Dark Matter Distributions in Elliptical Galaxies ## 1. Introduction High resolution observations of elliptical galaxies and bulges of spirals show that their (spatial) central stellar densities are well described by a power–law profile, $$\rho _{*}(r)\propto r^{-\gamma },r\to 0,$$ (1) where $`0\le \gamma <2.4`$ (e.g., Gebhardt et al. 1996; Carollo & Stiavelli 1998). Moreover, it is now commonly accepted that a non negligible fraction of the total mass in elliptical galaxies is made of a dark component, whose density distribution at large radii differs significantly from that of the visible one. The radial distribution of the dark matter in the galactic central regions is less well known. In the past a commonly used, centrally flat model for the dark matter distribution was the so called quasi isothermal (QI) density distribution ($`\beta =2`$ in eq. ), but, more recently, high resolution numerical simulations of dark matter halo formation showed that the dark matter density distribution also behaves as $$\rho _{\mathrm{DM}}(r)\propto r^{-\beta }$$ (2) in the central regions, with $`\beta \simeq 1`$ (e.g., Dubinski & Carlberg 1991; Navarro, Frenk, & White 1997). Thus, there are indications that both the visible and the dark matter density distributions increase significantly toward the center of galaxies. A natural question arises: is the origin of this qualitatively similar behavior independent for the two density distributions, or is it related for some reason? From a theoretical point of view it is well known that in any physically acceptable multicomponent galaxy model the phase–space distribution function (DF) of each density component must be non negative. A model satisfying this minimal requirement is called a consistent model. Thus, the question above can be reformulated as follows: can the request of positivity of the DF of each density component in multicomponent galaxy models tell us something about the relative distribution of dark and visible matter in real galaxies? In order to answer this question I studied the problem of the construction and investigation of the DFs of two component galaxy models with variable amount and distribution of dark matter, and variable orbital anisotropy. ## 2. Technique For a multi component spherical system, where the (radial) orbital anisotropy of each component is modeled according to the OM parameterization (Osipkov 1979; Merritt 1985), the DF of the density component $`\rho _k`$ is given by: $$f_k(Q_k)=\frac{1}{\sqrt{8}\pi ^2}\frac{d}{dQ_k}\int _0^{Q_k}\frac{d\varrho _k}{d\mathrm{\Psi }_T}\frac{d\mathrm{\Psi }_T}{\sqrt{Q_k-\mathrm{\Psi }_T}},\varrho _k(r)=\left(1+\frac{r^2}{r_{ak}^2}\right)\rho _k(r),$$ (3) where $`\mathrm{\Psi }_T(r)=\sum _k\mathrm{\Psi }_k(r)`$ is the total relative potential, and $`Q_k=\mathcal{E}-L^2/2r_{ak}^2`$. $`\mathcal{E}`$ and $`L`$ are respectively the relative energy and the angular momentum modulus per unit mass, $`r_{ak}`$ is the anisotropy radius, and $`f_k(Q_k)=0`$ for $`Q_k\le 0`$ (e.g., Binney & Tremaine 1987). Preliminary information on the model consistency can be easily obtained using the following results (Ciotti & Pellegrini 1992, CP; Ciotti 1996, C96; Ciotti 1999, C99). 
A necessary condition (NC) for the non negativity of each $`f_k`$ is: $$\frac{d\varrho _k(r)}{dr}\le 0,0\le r\le \mathrm{}.$$ (4) If the NC is satisfied, strong and weak sufficient conditions for the non negativity of each $`f_k`$ are: $$\frac{d}{dr}\left[\frac{d\varrho _k(r)}{dr}\frac{r^2\sqrt{\mathrm{\Psi }_T(r)}}{M_T(r)}\right]\ge 0,\frac{d}{dr}\left[\frac{d\varrho _k(r)}{dr}\frac{r^2}{M_T(r)}\right]\ge 0,0\le r\le \mathrm{}.$$ (5) The explored two component galaxy models are made of various combinations of spherically symmetric density distributions, such as the flat–core mass distributions $$\rho (r)=\frac{\rho _0r_\mathrm{c}^\beta }{(r_\mathrm{c}^2+r^2)^{\beta /2}},$$ (6) the centrally–peaked spatial density associated with the deprojection of the Sersic (1968) profile $$I(R)=I(0)\mathrm{exp}[-b(m)(R/R_\mathrm{e})^{1/m}],\rho (r)\propto r^{(1-m)/m},r\to 0,$$ (7) (where the explicit expression for $`b(m)`$ can be found in Ciotti & Bertin 1999), and finally the $`\gamma `$ models (Dehnen 1993) $$\rho (r)=\frac{3-\gamma }{4\pi }\frac{Mr_\mathrm{c}}{r^\gamma (r_\mathrm{c}+r)^{4-\gamma }},0\le \gamma <3.$$ (8) The mass, scale–length, and anisotropy radius of each component are free parameters. For example, in C96 and C99 the family of two component self–consistent galaxy models, where one density distribution follows a $`\gamma _1`$ profile, and the other a $`\gamma _2`$ profile \[$`(\gamma _1,\gamma _2)`$ models\], is presented. ## 3. Results The main results can be summarized as follows. In CP, by numerical investigation of the inequalities given in eqs. (4)-(5), it was shown that it is not possible to add an isotropic QI or Hubble modified halo ($`\beta =3`$ in eq. ) to an $`R^{1/4}`$ galaxy ($`m=4`$ in eq. , de Vaucouleurs 1948), because their DFs run into negative values near the model center. On the contrary, the isotropic $`R^{1/4}`$ galaxy was found to be consistent for any value of the superimposed halo mass and scale–length (i.e., over all the parameter space). At variance with the previous case, the isotropic two component models where both density distributions are characterized by a centrally flat profile (the Hubble modified + QI models), or by a centrally peaked profile (the $`R^{1/4}`$+$`R^{1/4}`$ models), were found to be consistent over all the parameter space. In C96 and C99 the analytical DF for both components of OM anisotropic (1,1) and (1,0) models is found in terms of elliptic functions. Moreover, the method described by eqs. (4)-(5) is applied analytically to the more general family of $`(\gamma _1,\gamma _2)`$ models. It is proved that for $`1\le \gamma _1<3`$ and $`\gamma _2\le \gamma _1`$ the DF of the $`\gamma _1`$ component in isotropic $`(\gamma _1,\gamma _2)`$ models is nowhere negative, independent of the mass and concentration of the $`\gamma _2`$ component. As a special application of this result, it follows that a black hole (BH) of any mass can be consistently added at the center of any isotropic member of the family of $`\gamma `$ models when $`1\le \gamma <3`$, and that both components of isotropic $`(\gamma ,\gamma )`$ models (with $`\gamma \ge 1`$) are consistent over all the parameter space. As a consequence, the isotropic $`\gamma =1`$ component in (1,1) and (1,0) models is consistent. In the anisotropic case, it is shown that an analytic estimate of a minimum value of $`r_a`$ for one-component $`\gamma `$ models with a massive BH at their center can be explicitly found. As expected, this minimum value decreases for increasing $`\gamma `$. 
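These conditions are straightforward to check numerically. As an illustration (a sketch only; the parameter values below are arbitrary choices of ours, not taken from the papers cited), the NC of eq. (4) for the OM-anisotropic $`\gamma `$ models of eq. (8):

```python
import numpy as np

def rho_gamma(r, gamma, M=1.0, rc=1.0):
    """Dehnen gamma-model density, eq. (8)."""
    return (3.0 - gamma) / (4.0 * np.pi) * M * rc \
        / (r**gamma * (rc + r)**(4.0 - gamma))

def augmented_density(r, gamma, ra):
    """Osipkov-Merritt augmented density, eq. (3)."""
    return (1.0 + r**2 / ra**2) * rho_gamma(r, gamma)

def nc_satisfied(gamma, ra, n=4000):
    """Necessary condition, eq. (4): d(varrho)/dr <= 0 everywhere."""
    r = np.logspace(-4.0, 4.0, n)
    dvr = np.gradient(augmented_density(r, gamma, ra), r)
    return bool(np.all(dvr <= 0.0))

for gamma, ra in [(1.0, 10.0), (1.0, 0.1), (0.0, 10.0), (0.0, 0.1)]:
    print(f"gamma = {gamma}, r_a = {ra}: NC satisfied -> {nc_satisfied(gamma, ra)}")
```

With these choices the nearly isotropic cases ($`r_a=10`$) pass the NC, while the strongly anisotropic ones ($`r_a=0.1`$) fail it, in line with the existence of the minimum anisotropy radius just discussed.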
The region of the parameter space in which (1,0) models are consistent is then explored using the derived DFs: it is shown that, unlike the $`\gamma =1`$ component, the $`\gamma =0`$ component becomes inconsistent when the $`\gamma =1`$ component is sufficiently concentrated, even in the isotropic case. The combined effect of the mass concentration and orbital anisotropy is also investigated, and an interesting behavior of the DF of the anisotropic $`\gamma =0`$ component is found and explained: it is analytically shown that there exists a small region in the parameter space where a sufficient amount of anisotropy can compensate for the inconsistency produced by the $`\gamma =1`$ component concentration on the structurally analogous but isotropic case. ## 4. Conclusions Some of the general trends that emerge when comparing different one and two component models with OM anisotropy, as those investigated in CP, Ciotti, Lanzoni, & Renzini (1996), Ciotti & Lanzoni (1997), C96 and C99, can be summarized as follows. 1) In the isotropic case, when the two density components (i.e., stars and dark matter) are similarly distributed in the galactic center, the corresponding models are consistent for any choice of the mass ratio and scale–length ratio. 2) On the contrary, when the two density components are significantly different in the galactic central regions, the mass component with the steeper density is “robust” against inconsistency. The mass component with the flatter density in the central regions is the most “delicate” and can easily become inconsistent. 3) For sufficiently small values of the anisotropy radius, OM anisotropy may produce a negative DF outside the galaxy center, while the halo concentration affects mainly the DF at high (relative) energies (i.e., near the galaxy center). 4) The trend of the minimum value for the anisotropy radius as a function of the halo concentration shows that a more diffuse halo allows for a larger amount of anisotropy: in other words, the possibility of sustaining a strong degree of anisotropy is weakened by the presence of a very concentrated halo. As a consequence of previous points 1 and 2, one could speculate that in the presence of a centrally peaked dark matter halo, centrally flat elliptical galaxies should be relatively rare, or that a galaxy with a central power law density profile cannot have a dark halo that is too flat in the center. Thus, the qualitatively similar behavior of visible and dark matter density profiles in the central regions of elliptical galaxies could be a consequence of phase–space constraints. ### Acknowledgments. This work was supported by MURST, contract CoFin98. ## References
Binney, J., & Tremaine, S. 1987, Galactic Dynamics, Princeton University Press
Carollo, C.M., & Stiavelli, M. 1998, AJ, 115, 2306
Ciotti, L. 1996, ApJ, 471, 68 (C96)
Ciotti, L. 1999, ApJ, 520, 574 (C99)
Ciotti, L., & Lanzoni, B. 1997, A&A, 321, 724
Ciotti, L., & Pellegrini, S. 1992, MNRAS, 255, 561 (CP)
Ciotti, L., Lanzoni, B., & Renzini, A. 1996, MNRAS, 282, 1
Ciotti, L., & Bertin, G. 1999, this conference
de Vaucouleurs, G. 1948, Ann. d’Ap., 11, 247
Dehnen, W. 1993, MNRAS, 265, 250
Dubinski, J., & Carlberg, R.G. 1991, ApJ, 378, 496
Gebhardt, K., et al. 1996, AJ, 112, 105
Merritt, D. 1985, AJ, 90, 1027
Navarro, J.F., Frenk, C.S., & White, S.D.M. 1997, ApJ, 490, 493
Osipkov, L.P. 1979, Pis’ma Astron. Zh., 5, 77
Sersic, J.L. 1968, Atlas de galaxias australes, Observatorio Astronomico, Cordoba
# Quadrilateral Meshing by Circle Packing ## 1 Introduction We investigate here problems of unstructured quadrilateral mesh generation for polygonal domains, with two conflicting requirements. First, we require there to be few quadrilaterals, linear in the number of input vertices; this is appropriate for methods in which high order basis functions are used, or in multiblock grid generation in which each quadrilateral is to be further subdivided into a structured mesh. Second, we require some guarantees on the quality of the mesh: either the elements themselves should have shapes restricted to certain classes of quadrilaterals, or the mesh should satisfy some more global quality requirements. Computing a linear-size quadrilateralization, without regard for quality, is quite easy. One can find quadrilateral meshes with few elements, for instance, by triangulating the domain and subdividing each triangle into three quadrilaterals. For convex domains, it is possible to exactly minimize the number of elements. However these methods may produce very poor quality meshes. High-quality quadrilateralization, without rigorous bounds on the number of elements, is an area of active practical interest. Techniques such as paving can generate high-quality meshes for typical inputs; however these meshes may have many more than $`O(n)`$ elements. Indeed, if the requirements on element quality include a constant bound on aspect ratio, then meshing a rectangle of aspect ratio $`A`$ will require $`\mathrm{\Omega }(A)`$ quadrilaterals, even though in this case $`n=\mathrm{4}`$. We provide a first investigation into the problem of finding a suitable tradeoff between those two requirements: for which measures of mesh quality is it possible to find guaranteed-quality meshes with guaranteed linear complexity? The results—and indeed the algorithms—of this paper are analogous to the problem of nonobtuse triangulation. Interestingly, for quadrilaterals there seem to be several analogues of nonobtuseness. Our algorithms are based on circle packing, a powerful geometric technique useful in a variety of contexts. Specifically, we build on a circle-packing method due to Bern, Mitchell, and Ruppert. In this method, before constructing a mesh, one fills the domain with circles, packed closely together so that the gaps between them are surrounded by three or four tangent circles. (Circle packings with only three-sided gaps form a sort of discrete analogue to conformal mappings. However for most domains some four-sided gaps are necessary, and in some of our algorithms four-sided gaps are actually helpful since they lead to degree-four mesh vertices.) One then uses these circles as a framework to construct the mesh, by placing mesh vertices at circle centers, points of tangency, and within each gap. Earlier work by Shimada and Gossard also uses approximate circle packings and sphere packings to construct triangular meshes of 2-d domains and 3-d surfaces. Other authors have introduced related circle packing ideas into meshing via conforming Delaunay triangulation, conformal mapping, and decimation. Circle packing ideas closely related to the algorithms in this paper have also been applied in origami design. We use circle packing to develop four new quadrilateral meshing methods. First, in Section 3, we show that the Voronoï diagram of the points of tangency of a suitable circle packing forms a quadrilateral mesh. 
Although the individual elements in this mesh may not have good quality, the Voronoï structure of the mesh may prove useful in some applications such as finite volume methods. Second, in Section 4, we overlay this Voronoï mesh with its dual Delaunay triangulation; this overlay subdivides each Voronoï cell into quadrilaterals having two opposite right angles. Note that any such quadrilateral must have all four of its vertices on a common circle. Third, in Section 5, we show that a small change to the method of Bern et al. (basically, omitting some edges), produces a mesh of kites (quadrilaterals having two adjacent pairs of equal-length sides). The resulting mesh optimizes the cross ratio of the elements (a measure of the aspect ratio of the rectangles into which each element may be conformally mapped): any kite can be conformally mapped onto a square. Finally, in Section 6, we subdivide these kites into smaller quadrilaterals, producing a mesh in which each quadrilateral has maximum angle at most $`\mathrm{1}\mathrm{2}\mathrm{0}^{}`$. This is optimal: there exist domains for which no mesh has angles better than $`\mathrm{1}\mathrm{2}\mathrm{0}^{}`$. ## 2 Circle Packing Let us first review the nonobtuse triangulation method of Bern et al.. This algorithm is given an $`n`$-vertex polygonal region (possibly with holes), and outputs a triangulation with $`O(n)`$ new Steiner points in which no triangle has an obtuse angle. In outline, it performs the following steps: 1. Protect reflex vertices of the polygon by placing circles tangent to the boundary of the polygon on either side of them, small enough that they do not intersect each other or other features of the polygon (Figure 1(a)). 2. Connect holes of the polygon by placing nonoverlapping circles, tangent to edges of the polygon or to previously placed circles, so that the domain outside the circles forms one or more simply connected regions with circular-arc sides. 3. Simplify each region by packing it with further circles until each remaining region has three or four circular-arc or straight-line sides (Figure 1(b)). 4. Partition the polygon into 3- and 4-sided polygonal regions by connecting the centers of tangent circles (Figure 1(c)). 5. Triangulate each region with nonobtuse triangles (Figure 1(d)). Our quadrilateralization algorithms will be based on the same general outline, and in several cases the quadrilaterals we form can be viewed as combinations of several of the triangles formed by this algorithm. Steps 1, 4, and 5 are straightforward to implement. Eppstein showed that step 2 could be implemented efficiently, in time $`O(nlogn)`$, as independently did Mike Goodrich and Roberto Tamassia, and Warren Smith (unpublished). We now describe in some more detail step 3, simplification of regions, as we will need to modify this step in some of our algorithms. ###### Lemma 1 (Bern et al.) Any simply connected region of the plane bounded by $`n`$ circular arcs and straight line segments, meeting at points of tangency, can be packed with $`O(n)`$ additional circles in $`O(nlogn)`$ time, such that the remaining regions between circles are bounded by at most four tangent circular arcs. Proof: Compute the Voronoï diagram of the circles within this region; that is, a partition of the region into cells, each of which contains points closer to one of the circles than to any other circle (Figure 2). Because the region is simply connected, the cell boundaries of this diagram form a tree. 
We choose a vertex $`v`$ of this tree such that each of the subtrees rooted at $`v`$ has at most half the leaves of the overall tree, and draw a circle centered at $`v`$ and tangent to the circles having Voronoï cells incident at $`v`$. This splits the region into simpler regions. We continue recursively within these regions, stopping when we reach regions bounded by at most four arcs (in which no further simplification is possible). Adding each new circle to the Voronoï diagram can be done in time linear in the number of arcs bounding the region, so the total time to subdivide all regions is $`O(n\mathrm{log}n)`$. We call the region between circles of this packing a gap. We now state without proof two technical results of Bern et al. about these gaps. ###### Lemma 2 (Bern et al.) The points of tangency on the boundary of a gap are cocircular. A three-sided gap is one bounded by three circular arcs. A good four-sided gap is a gap bounded by four arcs, such that the circumcenter of its points of tangency is contained within the convex hull of those points. A bad four-sided gap is any other four-arc gap. ###### Lemma 3 (Bern et al.) Any bad four-sided gap can be split into two good four-sided gaps by the addition of a circle tangent to two of the bad four-sided gap’s circles. (Figure 3.) In some cases two opposite circles bounding one of the new gaps created by Lemma 3 may overlap, but this poses no problem for the rest of the algorithm. ## 3 Voronoï Quadrilateralization We begin with the quadrilateralization procedure most likely to be useful in practice, due to its low output complexity and lack of complicated special cases. The geodesic Voronoï diagram of a set of point sites in a polygonal domain is a partition of the domain into cells, each consisting of the points whose geodesic distance (distance along paths within the domain) to one particular site is smaller than to any other site. We now describe a method of finding a point set for which the geodesic Voronoï diagram forms a quadrilateral mesh. One potential application of this type of mesh would be in the finite volume method, as the dual of this Voronoï mesh could be used to define control volumes for that method (see e.g. Miller et al.). The angle between each primal and dual edge pair would be $`90^{\circ }`$, causing some terms in the finite volume method to cancel and therefore saving some multiplications. Our mesh will also have the possibly useful property that all Voronoï edges will cross their duals. We modify the initial circle packing of Bern et al., as follows. We start by protecting vertices, as before; but in this case that protection consists of a circle centered at each domain vertex. Then, as before, we fill the remainder of the domain by tangent circles; however, we do not attempt to create tangencies with the domain boundary; instead the circle packing should meet the boundary at circles with their centers on the boundary. Further, no tangent point between two circles should lie on the domain boundary, although circles centered on the boundary may meet in the domain interior. (Some circles may cross or be tangent to the boundary; however, we ignore these incidences, instead treating these circles as part of three-sided gaps.) It is not hard to modify the previous circle packing algorithms to meet these conditions. The result will be a packing with, again, three-sided and four-sided gaps. However, the gaps involving boundary edges are all four-sided and have right-angled corners rather than points of tangency on those edges. 
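(Incidentally, the inscribed circle of a three-sided gap can be computed explicitly: for three mutually tangent circles, the classical Descartes circle theorem, which is not spelled out in the text, gives the curvature of the inner tangent circle. A sketch:)

```python
import math

def inner_soddy_curvature(r1, r2, r3):
    """Curvature of the circle inscribed in the gap between three
    mutually externally tangent circles (Descartes circle theorem):
    k4 = k1 + k2 + k3 + 2*sqrt(k1*k2 + k2*k3 + k3*k1)."""
    k1, k2, k3 = 1.0 / r1, 1.0 / r2, 1.0 / r3
    return k1 + k2 + k3 + 2.0 * math.sqrt(k1 * k2 + k2 * k3 + k3 * k1)

# The gap between three mutually tangent unit circles:
print(1.0 / inner_soddy_curvature(1.0, 1.0, 1.0))  # 1/(3 + 2*sqrt(3)) ~ 0.1547
```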
###### Theorem 1 In $`O(nlogn)`$ time we can find a circle packing as above, such that the geodesic Voronoï diagram of the points of tangencies of the circles forms a quadrilateral mesh. Proof: The vertex protection step can be performed in $`O(n)`$ time using circles with radius half the minimum distance between vertices. (This minimum distance is an edge of the Delaunay triangulation and can be found in $`O(nlogn)`$ time.) Next we find a set of $`O(n)`$ circles to add to our packing, so that any remaining gaps are simply connected. If in this step we ever add a circle $`c`$ tangent to the boundary, we replace it by a set of circles: a circle with radius $`ϵ`$ centered on the boundary at the point of tangency, circles with radius $`ϵ/\mathrm{2}`$ tangent to $`c`$ at each of its other points of tangency, and one circle concentric to $`c`$ with radius reduced by $`ϵ`$; this replacement is depicted in Figure 5. Finally, as in the algorithm of Bern et al. we repeatedly find the Voronoï diagram of the circles bounding any remaining gap and place a circle on a Voronoï vertex so as to divide the gap into two smaller parts of roughly equal complexity. However, in order to avoid placing circles tangent to the boundary in this step, we change their method by using only the circles around a gap as Voronoï sites, omitting any diagram edges that may bound the gap. Our Voronoi diagram’s edges (together with any boundary edges of the gap) form a tree, so we can find a vertex which splits the tree’s leaves roughly evenly. Placing a circle at that vertex produces two simpler gaps and does not cause essential tangencies with the domain boundary. Unlike Bern et al., we do not bother eliminating bad four-sided gaps. We form a mesh by connecting each center of a circle in the packing to the circumcenters of adjacent gaps (Figure 4). In the four-sided gaps along the domain boundary, we place an additional edge from the boundary to the center of the opposite circle, bisecting the chord between the tangencies with the two other circles. These edges form a quadrilateral mesh since each face surrounds a point of tangency, and each point of tangency is surrounded by the vertices from two circles and two gaps. Each mesh element is the Voronoi cell of the point of tangency it contains; its boundary is composed of perpendicular bisectors of dual Delaunay edges. Each dual Delaunay edge has a circumscribing circle from the circle packing as witness to the empty-circle property of Delaunay graphs. Curiously, this mesh is not only a certain type of generalized Voronoï diagram; it is also another type of generalized Delaunay triangulation! The power of a circle with respect to a point in the plane is the squared radius of the circle minus the squared distance of the point to the circle’s center. The power diagram of a set of (not necessarily disjoint) circles is a partition of the plane into cells, each consisting of the points for which the power of some particular circle is greatest. Like the usual kind of Voronoï diagram, the power diagram has convex cells, since the separator between any two circles’ cells is a line (if the two circles overlap, their separator is the line through their two intersection points). We can restrict the power diagram to a polygonal domain by defining the power only for points visible to the center of the given circle. 
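A minimal sketch of the power function just defined, using the paper's sign convention (the power is largest for the nearest circle; all names below are ours):

```python
from dataclasses import dataclass

@dataclass
class Circle:
    cx: float
    cy: float
    r: float

def power(p, c):
    """Power of circle c at point p, with the text's sign convention:
    squared radius minus squared distance to the center."""
    dx, dy = p[0] - c.cx, p[1] - c.cy
    return c.r**2 - (dx * dx + dy * dy)

def power_cell_winner(p, circles):
    """Index of the circle whose power at p is greatest (= cell containing p)."""
    return max(range(len(circles)), key=lambda i: power(p, circles[i]))

# Two overlapping circles: along the line through their intersection
# points (x = 0.75 here, by symmetry) the two powers coincide,
# illustrating that the separator between cells is a straight line.
a, b = Circle(0.0, 0.0, 1.0), Circle(1.5, 0.0, 1.0)
for y in (-0.5, 0.0, 0.5):
    print(power((0.75, y), a), power((0.75, y), b))  # equal pairs
```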
From the construction above, define a family $`F`$ of circles by including the original packing and a “dual” collection of circles through the tangencies surrounding each gap, centered at the gap’s site. As we now show, the power diagram of this family (depicted in Figure 6) is the planar dual to our mesh. ###### Theorem 2 The quadrilateral mesh defined above includes an edge between two points if and only if the corresponding circles’ cells share an edge in the power diagram of $`F`$. Proof: $`F`$ has one circle centered at each vertex; the two circles corresponding to the endpoints of an edge overlap in a lune. The lune’s two corners are points of tangency in the original circle packing (or, if the edge is on the domain boundary, the corners are one such point of tangency and its reflection) and are contained in the two quadrilaterals on either side of the edge. These corners have power zero with respect to the two circles, and are not interior to any other circles; therefore they have those two circles (and possibly some others) as nearest power neighbors. Since power diagram cells are convex, those two circles must continue to be the nearest neighbors to each point along the center line of the lune; in other words this center line lies along an edge in the power diagram corresponding to the given mesh edge. Conversely, we must show that every power diagram adjacency corresponds to a mesh edge. But the power diagram boundaries described above form a convex polygon completely containing the center of the cell’s circle; therefore there can be no other adjacencies than the ones we have already found, which correspond to mesh edges. Since quadrilaterals in this mesh typically correspond to eight triangles in the nonobtuse triangulation algorithm of Bern et al., the constant factors in the $`O(n)`$ bound above should be quite small in practice. Bern et al. observed that their method typically generated between $`20n`$ and $`30n`$ triangles, so we should expect between $`3n`$ and $`4n`$ quadrilaterals in our mesh. ## 4 Opposite Right Angles As we now show, the Voronoï quadrilateralization above can be used to find another quadrilateral mesh, in which each quadrilateral has two opposite right angles. Such a quadrilateral must be cyclic (having all four vertices on a common circle); further, the circumcenter bisects the diagonal connecting the two remaining vertices. Our algorithm works by overlaying the power diagram defined above onto the quadrilaterals of Theorem 1, resulting in their subdivision into smaller quadrilaterals. In order to perform this subdivision, we may need to place a few additional circles into our packing. On the boundary of the domain, the gaps between circles will be formed by chains of three tangent circles, the two ends of which are circles centered on the domain boundary. The center circle in this chain is allowed to cross the boundary; we ignore this crossing. Reflecting such a chain across the domain boundary edge produces a four-sided gap partially outside the domain; like Bern et al. we say that this gap is good or bad if the convex hull of its points of tangency contains or doesn’t contain their circumcenter respectively. The algorithm of this section requires these gaps to be good. As in the method of Bern et al., any bad four-sided gap can be subdivided into two good four-sided gaps by the addition of another circle, which by symmetry can be placed with its center on the domain boundary. 
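As a quick numerical check of the claim opening this section, that a quadrilateral with two opposite right angles is cyclic with circumcenter at the midpoint of the diagonal joining the two remaining vertices (the example coordinates are arbitrary):

```python
import math

def is_right_angle(v, p, q, tol=1e-12):
    """True if the angle p-v-q is 90 degrees (zero dot product)."""
    d1 = (p[0] - v[0], p[1] - v[1])
    d2 = (q[0] - v[0], q[1] - v[1])
    return abs(d1[0] * d2[0] + d1[1] * d2[1]) < tol

# Place B and D anywhere on the circle with diameter AC (Thales' theorem):
A, C = (0.0, 0.0), (2.0, 0.0)
center = (1.0, 0.0)                       # midpoint of diagonal AC
B = (1.0 + math.cos(0.7), math.sin(0.7))
D = (1.0 + math.cos(3.9), math.sin(3.9))

print(is_right_angle(B, A, C), is_right_angle(D, A, C))        # True True
print([round(math.dist(P, center), 12) for P in (A, B, C, D)]) # all 1.0
```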
###### Theorem 3 In $`O(n\mathrm{log}n)`$ time we can partition any polygon into a mesh of $`O(n)`$ quadrilaterals, each having two opposite right angles. Proof: We form the Voronoï quadrilateralization of Theorem 1, and subdivide each quadrilateral $`Q`$ into four smaller quadrilaterals by dropping perpendiculars from the Voronoï site contained in $`Q`$ to each of $`Q`$’s four sides. On edges where two cells of the Voronoï quadrilateralization meet, the two perpendiculars end at a common vertex because they are the two halves of a chord connecting two tangent points on the same circle. For the same reason, each perpendicular meets the edge to which it is perpendicular without crossing any other cell boundaries first. The same procedure of dropping perpendiculars will work whenever we have a Voronoï diagram in which the site generating each cell can be connected by a perpendicular to each cell edge. Therefore, some heuristic simplification can be applied to the mesh above, reducing its complexity further: after forming the Voronoï quadrilateralization of Theorem 1, remove sites one by one from the set of generators as long as this condition is met. ## 5 Kites The next type of quadrilateralization we describe is one in which all quadrilaterals are kites (convex quadrilaterals with an axis of symmetry along one diagonal). Although kites may have bad angles (very close to $`0^{\circ }`$ or $`180^{\circ }`$), they have some other nice theoretical properties. In particular, the cross ratio of a kite is always one. The cross ratio of a quadrilateral with consecutive side lengths $`a`$, $`b`$, $`c`$, and $`d`$ is the ratio $`ac:bd`$. Since this ratio is invariant under conformal mappings, a conformal mapping from the quadrilateral to a rectangle (taking vertices to vertices) can only exist if the rectangle has the same cross ratio; but the cross ratio of a rectangle is just the square of its aspect ratio. Therefore, kites are among the few quadrilaterals that can be conformally mapped onto squares. ###### Theorem 4 In $`O(n\mathrm{log}n)`$ time we can partition any polygon into a mesh of $`O(n)`$ kites. Proof: As in the algorithm of Bern et al., we find a circle packing; however as discussed below we place some further constraints on the placement of circles. We then connect pairs of tangent circles by radial line segments through their points of tangency, and apply a case analysis to the resulting set of polygons. As shown in Figure 7, all interior gaps can be subdivided into kites: three-sided gaps result in three kites, good four-sided gaps result in four, and bad four-sided gaps result in seven. Also shown in the figure are three types of gaps on the boundary of the polygon: three-sided gaps along the edge, reflex vertices protected by two equal tangent circles, and convex vertices packed by a single circle. There are two remaining cases, in which one or two of the sides of a four-sided gap are portions of the domain boundary, and the four-sided gap has a high aspect ratio preventing these boundary edges from being covered by a small number of three-sided gaps. In the simpler of these cases, two opposite sides of the four-sided gap are both boundary edges. Such a gap is necessarily good. If it has aspect ratio $`O(1)`$, we can line the domain edges by $`O(1)`$ additional circles, as in the next case. Otherwise, our construction is illustrated in Figure 8. We find a mesh using an auxiliary set of circles, perpendicular to the original packing. 
We first place at each end of the four-sided gap a pair of identical circles, tangent to each other and crossing the boundary edges perpendicularly at their points of tangency. These are the medium-sized circles in the figure. We next place two more circles, each perpendicular to one of the boundary edges and crossing it at the same points already crossed by the previously added circles; these are the large overlapping circles in the figure. Finally, each end of the original four-sided gap now contains a gap formed by four circles, but two of these circles cross rather than sharing a tangency. We fill each gap with an additional circle; these are the small circles in the figure. The resulting set of eight circles forms six three-sided gaps and one good four-sided gap, and can be meshed as shown in the figure. The final case consists of four-sided gaps (not necessarily good) involving one boundary edge. To make this case tractable, we restrict our initial placement of circles so that, if we place a circle $`C`$ within a gap involving boundary edges, then $`C`$ is either tangent to those edges or separated from them by a distance of at least $`ϵ`$ times its radius, for some sufficiently small value $`ϵ`$. Then, any remaining four-sided boundary gap must have bounded aspect ratio, and we can place $`O(1)`$ small circles along the boundary edge leaving only three-sided gaps on that edge (Figure 9). The interior of the gap can then be packed with $`O(1)`$ additional circles leaving only the previously solved three- and four-sided internal gap cases. ## 6 No Large Angles The maximum angle of any triangle has been shown to be one of the more important indicators of triangular mesh quality, and it is believed that the maximum angle is similarly important in quadrilateral meshes. For triangular meshes, a maximum angle of $`90^{\circ }`$ can be achieved, but for quadrilaterals this would imply that all elements are rectangles, which can only be achieved when the domain has axis-parallel sides. Indeed, as we now show, some domains require $`120^{\circ }`$ angles. ###### Theorem 5 Any simple polygon with all angles at least $`120^{\circ }`$ cannot be meshed by quadrilaterals having all angles less than $`120^{\circ }`$. Proof: Suppose we have such a simple polygon, and a quadrilateral mesh on it. Let $`x`$ denote the number of mesh vertices on the boundary of the polygon, $`i`$ denote the number of interior vertices, $`e`$ denote the number of mesh edges, and $`q`$ denote the number of mesh quadrilaterals. Then, since each quadrilateral has four edges, each interior edge appears twice, and there are $`x`$ boundary edges, we have the relation $`4q=2e-x`$. Combining this with Euler’s formula $`x+i+q-e=1`$ and cancelling $`q`$ leaves $`e=2i+(3/2)x-2`$. However, if all interior vertices of the mesh were incident to four or more edges, and all exterior vertices were incident to three or more edges, we would have $`e\ge 2i+(3/2)x`$ (since each edge contributes two to the sum of vertex degrees), a contradiction. So, the mesh has either an interior vertex with degree three, or an exterior vertex with degree two, and in either case at least one of the angles at that vertex must be at least $`120^{\circ }`$. As we now show, this lower bound can be matched by our circle packing methods. 
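(The counting identities used in the proof above are easy to sanity-check on a concrete example, here a hypothetical 2-by-2 grid of squares meshing a square domain:)

```python
# A 2x2 grid of unit squares meshing a square domain:
x, i, q = 8, 1, 4      # boundary vertices, interior vertices, quadrilaterals
e = 12                 # mesh edges

assert 4 * q == 2 * e - x             # each quadrilateral has four edges
assert x + i + q - e == 1             # Euler's formula
assert e == 2 * i + (3 * x) // 2 - 2  # the derived relation
print("all counting identities hold")
```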
###### Theorem 6 In $`O(n\mathrm{log}n)`$ time we can partition any polygon into a mesh of $`O(n)`$ quadrilaterals with maximum angle $`120^{\circ }`$. Proof: The result follows from Theorem 4, since any kite (which we can assume without loss of generality to have a vertical axis of symmetry) can be divided into six $`120^{\circ }`$ quadrilaterals in one of three ways depending on how the top and bottom angles of the kite compare to $`120^{\circ }`$. Specifically, we add new subdivision points on the midpoints of each kite edge. Then, if both the top and bottom angle of the kite are sharp (less than $`120^{\circ }`$), we can split the kite along a line between the left and right vertices, and subdivide both of the resulting triangles into three $`120^{\circ }`$ quadrilaterals (Figure 10(a)). If both angles are large (greater than $`60^{\circ }`$), we can similarly split the kite vertically along a line from top to bottom and again subdivide both of the resulting triangles (Figure 10(b)). In both of these two cases the subdivisions are axis-aligned or at $`60^{\circ }`$ angles to the axes. In the final case, the top angle is large (at least $`120^{\circ }`$) and the bottom is sharp (less than $`120^{\circ }`$). In this case, like the second, we partition the kite vertically into two triangles, and again partition each triangle into three; however in this final case the subdivisions are along lines between the bottom of the triangle and the two opposite edge midpoints, and at $`60^{\circ }`$ angles to those lines. It is easily verified that with the given assumptions on the angles of the original kite, all vertices of the subdivision lie as depicted in the figures and all angles are at most $`120^{\circ }`$. ## 7 Conclusions We have shown that circle packing may be used in a variety of ways for quadrilateral mesh generation with simultaneous guaranteed bounds on complexity and quality. Many questions remain open: How small can we make the constant factors in our complexity bounds, both in the worst case and in practice? Can we generate linear-complexity quadrilateral meshes with no small angles? Can we combine guarantees on several quality measures at once? Extensions of the circle packing method to three dimensional tetrahedral or hexahedral meshing would be of interest, but seem difficult due to the inability of three dimensional spheres to partition the domain into bounded-complexity regions. However perhaps our methods can be generalized to guaranteed-quality quadrilateral surface meshes. Some of the methods we describe are purely of theoretical interest, due to high constant factors or distorted quadrilateral shapes, but we believe circle packing should be useful in practice as well. Among our methods, perhaps the low constant factors and lack of complicated cases in the Voronoï quadrilateralization make it the most practical choice. ## Acknowledgements Eppstein’s work was supported in part by NSF grant CCR-9258355 and by matching funds from Xerox Corp.
# Some Insights into the Method of Center Projection Presented by Š. Olejník. Supported in part by the Slovak Grant Agency for Science (Grant VEGA No. 2/4111/97). ## 1 CENTER PROJECTION IN MAXIMAL CENTER GAUGE In recent years a wealth of evidence has been accumulated on the lattice in favour of the center vortex theory of colour confinement. Our procedure for identifying center vortices consists of the following steps: 1. Generate thermalized SU(2) lattice gauge field configurations. 2. Fix to maximal center gauge by maximizing: $$F[U]=\sum _{x,\mu }\left|\text{Tr}[U_\mu (x)]\right|^2.$$ (1) This in fact is adjoint Landau gauge; the above condition is equivalent to maximizing $$F[U^A]=\sum _{x,\mu }\text{Tr}[U_\mu ^A(x)].$$ (2) 3. Make center projection by replacing: $$U_\mu (x)\to Z_\mu (x)\equiv \text{sign}\text{Tr}[U_\mu (x)].$$ (3) 4. Identify excitations (P-vortices) of the resulting $`Z_2`$ lattice configurations. P-vortices after center projection in MCG appear to be correlated with thick center vortices of full configurations. Their density scales in MCG. Removal of center vortices destroys confinement and restores chiral symmetry. In the present paper we address the question why the above procedure is able to locate center vortices and why, in some cases, it fails on lattice configurations preconditioned in a special way. ## 2 VORTEX-FINDING PROPERTY The simplest condition which a successful method for locating center vortices has to fulfill is to be able to find vortices inserted into a lattice configuration “by hand”. This will be called the “vortex-finding property”. Does the method described in Section 1 have this property? An argument for a positive answer is rather simple: A center vortex is created, in a configuration $`U`$, by making a discontinuous gauge transformation. Call the result $`U^{\prime }`$. Apart from the vortex core, the corresponding link variables in the adjoint representation, $`U^A`$ and $`U^{\prime A}`$, are gauge equivalent. Let $`F[U^A]=\text{max}`$ be a complete gauge-fixing condition (e.g. adjoint Landau gauge) on the adjoint links. Then (ignoring both Gribov copies and the core region) $`U^A`$ and $`U^{\prime A}`$ are mapped into the same gauge-fixed configuration $`\stackrel{~}{U}^A`$. The original fundamental link configurations $`U`$ and $`U^{\prime }`$ are thus transformed by the gauge-fixing procedure into configurations $`\stackrel{~}{U},\stackrel{~}{U}^{\prime }`$ which correspond to the same $`\stackrel{~}{U}^A`$. This means that $`\stackrel{~}{U},\stackrel{~}{U}^{\prime }`$ can differ only by continuous or discontinuous $`Z_2`$ gauge transformations, with the discontinuous transformation corresponding to the inserted center vortex in $`U^{\prime }`$. Upon center projection, $`\stackrel{~}{U},\stackrel{~}{U}^{\prime }\to Z,Z^{\prime }`$, and the projected configurations $`Z,Z^{\prime }`$ differ by the same discontinuous $`Z_2`$ transformation. The discontinuity shows up as an additional thin center vortex in $`Z^{\prime }`$, not present in $`Z`$, at the location of the vortex inserted by hand. This vortex-finding property goes a long way towards explaining the success of maximal center gauge in locating center vortices in thermalized lattice configurations, and also suggests that there may be an infinite class of gauges with this property. However, there are two caveats that could invalidate the argument: 1. We have neglected the vortex core region, where $`U`$ and $`U^{\prime }`$ differ by more than a (dis)continuous gauge transformation; and 2. Fixing to $`F[U^A]=\text{max}`$ is bedeviled by Gribov copies. 
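(At the level of single links, the mechanics of center projection and of the $`Z_2`$ bookkeeping in the argument above can be sketched as follows; this illustration is ours, not the authors' code. An SU(2) element is a unit quaternion $`U=a_0+ia_k\sigma _k`$ with $`\text{Tr}U=2a_0`$, so the projected link is just the sign of $`a_0`$:)

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su2(n):
    """n random SU(2) elements as unit quaternions (a0, a1, a2, a3)."""
    a = rng.normal(size=(n, 4))
    return a / np.linalg.norm(a, axis=1, keepdims=True)

def center_project(links):
    """Eq. (3): U -> Z = sign Tr U = sign a0."""
    return np.sign(links[:, 0])

def mcg_functional(links):
    """The MCG quantity F[U] = sum over links of |Tr U|^2."""
    return float(np.sum((2.0 * links[:, 0]) ** 2))

U = random_su2(1000)
Z = center_project(U)

# A Z2 'vortex insertion' (flipping the sign of links) flips the projected
# links in exactly the same pattern, while the MCG functional, which sees
# only the adjoint representation, is blind to the flip:
assert np.all(center_project(-U) == -Z)
assert np.isclose(mcg_functional(-U), mcg_functional(U))
print("projection tracks Z2 flips; the gauge condition does not fix them")
```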
To find out whether these problems destroy the vortex-finding property, we have carried out a series of numerical tests. The simplest is the following: 1. Take a set of equilibrium SU(2) configurations. 2. From each configuration make three: I – the original one; II – the original one with $`U_4(x,y,z,t)\to (-1)\times U_4(x,y,z,t)`$ for $`t=t_0`$, $`x_1\le x\le x_2`$ and all $`y`$, $`z`$, i.e. with 2 vortices (one lattice spacing thick) inserted by hand. To hide them a bit, a random gauge copy is made of the configuration with inserted vortices; III – a random copy of I. 3. Measure: $$G(x)=\frac{\sum _{y,z}\langle P_I(x,y,z)P_{II}(x,y,z)\rangle }{\sum _{y,z}\langle P_I(x,y,z)P_{III}(x,y,z)\rangle }.$$ (4) $`P_i(x,y,z)`$ is the Polyakov line measured on the configuration $`i=`$I, II, or III. If the method correctly identifies the inserted vortices, one simply expects $$G(x)=\{\begin{array}{cc}\hfill -1& x\in [x_1,x_2]\hfill \\ \hfill 1& \text{otherwise}\hfill \end{array}.$$ (5) The result of the test is shown in Fig. 1. The inserted vortices are clearly recognized, and the associated Dirac volume is found in its correct location. A more sophisticated test is to insert vortices with a core a few lattice spacings thick. Our method also passes that test satisfactorily. ## 3 WHEN GRIBOV COPIES BECOME PROBLEMATIC: PRECONDITIONING WITH LANDAU GAUGE Gribov copies in maximal center gauge do not seem to be a severe problem in our procedure; it appears that P-vortex locations vary comparatively little from copy to copy. However, it has been shown recently that if one first fixes to Landau gauge (LG), before relaxation to maximal center gauge, center dominance is lost. This failure has a simple explanation: LG preconditioning destroys the vortex-finding property. This is illustrated by redoing the test shown in Fig. 1, only with a prior fixing to Landau gauge. The result, shown in Fig. 2, is that the vortex-finding condition is not satisfied; the Dirac volume is not reliably identified. The Gribov copy problem, which is fairly harmless on most of the gauge orbit, seems severe enough to ruin vortex-finding on a tiny region of the gauge orbit near Landau gauge. (Cooling and smoothing, which modify thermalized configurations and greatly expand vortex cores, also pose some problems for center projection. Whether these are related to the Gribov problem, as found in Landau gauge preconditioning, is currently under investigation.) ## 4 MCG IS NOT ALONE The vortex-finding argument above does not seem to single out MCG. In fact, there should exist (infinitely) many gauges with the vortex-finding property. They should fulfill the following conditions: 1. The gauge fixing condition depends on the adjoint link variable. 2. The gauge fixing condition is complete for adjoint links, leaving a residual $`Z_2`$ gauge symmetry for fundamental links. 3. The gauge fixing condition is smooth, the gauge-fixed adjoint link is close to the identity matrix for large $`\beta `$. An example is a slight generalization of MCG, namely a gauge maximizing the quantity $$F^{\prime }[U]=\sum _{x,\mu }c_\mu \left|\text{Tr}[U_\mu (x)]\right|^2$$ (6) with some choice of $`c_\mu `$, e.g. $`c_\mu =\{1,1.5,0.75,1\}`$. Fig. 3 shows that this gauge has the vortex-finding property. Also, center dominance is observed in this gauge, in the same manner as in MCG. ## 5 CONCLUSION We conclude with a sort of tautology: To find center vortices, one must use a procedure with the vortex-finding property. If that property is destroyed somehow, e.g. 
by Landau gauge preconditioning, then center vortices are not correctly identified, and center dominance in the projected configurations is lost. This fact does not call into question the physical relevance of P-vortices found by our usual method (which has the vortex-finding property); that relevance is well established by the strong correlation that exists between these objects and gauge-invariant observables. A gauge-fixing technique which completely avoids the Gribov copy problem is desirable. A viable alternative has been proposed by Ph. de Forcrand at this conference.
# Scaling violations: Connections between elastic and inelastic hadron scattering in a geometrical approach ## Abstract Starting from a short range expansion of the inelastic overlap function, capable of describing quite well the elastic $`pp`$ and $`\overline{p}p`$ scattering data, we obtain extensions to the inelastic channel, through unitarity and an impact parameter approach. Based on geometrical arguments we infer some characteristics of the elementary hadronic process and this allows an excellent description of the inclusive multiplicity distributions in $`pp`$ and $`\overline{p}p`$ collisions. With this approach we quantitatively correlate the violations of both geometrical and KNO scaling in an analytical way. The physical picture from both channels is that the geometrical evolution of the hadronic constituents is principally reponsible for the energy dependence of the physical quantities rather than the dynamical (elementary) interaction itself. PACS numbers: 13.85.Hd, 13.65.Ti, 13.85.Dz, 11.80.Fv e-mail: menon@ifi.unicamp.br, fax: 55-19-7885512, phone: (19) 7885530 Table of Contents I. INTRODUCTION II. EXPERIMENTAL DATA AND PHENOMENOLOGICAL CONTEXT A. Elastic channel B. Inelastic channel C. Strategies III. UNITARITY AND IMPACT PARAMETER PICTURE A. Hadronic and elementary multiplicity distributions B. Elastic channel input: the BEL $`G_{in}`$ C. Elementary hadronic process in a geometrical picture $``$ Analytical relation between multiplicity function and eikonal $``$ Elementary multiplicity distribution $``$ Power coefficient D. Results for the hadronic multiplicity distributions IV DISCUSSION A. Sensitivity of the parametrizations $``$ Changing $`G_{in}`$ $``$ Changing the elementary distribution $`\phi `$ $``$ Changing the power coefficient $`\gamma `$ B. The multiplicity function and the power assumption C. Physical picture V. CONCLUSIONS AND FINAL REMARKS I. INTRODUCTION Hadron scattering is presently one of the most intriguing process in high energy particle physics. Unlike the unification scheme which characterizes the electroweak sector of the standard model, some topical aspects of quantum chromodynamics (QCD) remain yet unknown and this has been a great challenge for decades. One point concerns some subtleties emerging from its running coupling constant, which entails that high energy hadronic phenomena have been classified into two wide and nearly incongrous areas, namely, large $`p_T`$ or hard processes and low $`p_T`$ or soft hadronic physics. From a purely theoretical point of view (QCD), these phenomena are treated through perturbative and non-perturbative approaches respectively, and this renders difficult an unified formalism able to describe the totality of experimental data available on high energy hadronic interactions. The reason is that, despite the successes of perturbative QCD in the description of the hard (inelastic) hadronic scattering and also the successes of non-perturbative QCD in treating static properties of the hadronic systems, the scattering states in the soft (long range) region yet remains without a pure QCD explanation: Perturbative approaches do not apply and pure non-perturbative formalisms are not yet able to predict the bulk of the scattering states. 
This soft hadronic physics is associated with the elastic and diffractive scattering, characterized, experimentally, by several physical quantities in both the elastic and inelastic channels, such as elastic differential cross section, total cross section, charged multiplicity distributions, average multiplicities and others . In spite of the long efforts to describe these data through a pure microscopic theory (QCD), our knowledge is still largely phenomenological and also based on a wide class of models and some distinct theoretical concepts, such as Pomeron, Odderon, impact parameter picture, parton and dual models, Monte Carlo approaches and so on. However, presently, this phenomenological treatment of the soft hadronic physics plays an important role as a source of new theoretical insights and as a strategy in the search for adequate calculational schemes in QCD. The multiple facets associated with this phenomenological scenario have been extensively discussed in the literature and Ref. presents a detailed outlook of the progresses and present status of the area. In addition to the intrinsic importance of high energy diffractive physics associated with our limited theoretical understanding, a renewed interest in the subject may be seen in the last years. This, in part, is due to the Hera and Tevatron programs, but also to the advent of the next accelerator generation, the Relativistic Heavy Ion Collider (RHIC) and the CERN Large Hadron Collider (LHC). In fact, with these new machines it shall be possible to investigate $`pp`$ collisions at center-of-mass energies never reached before in accelerators, allowing comparative studies between $`pp`$ and $`\overline{p}p`$ scattering at the highest energies, including both hard and soft processes. Presently, at this “pre-new-era” stage and due to the lack of an widely accepted unified theoretical treatment of both elastic and inelastic channels, it may be important to re-investigate ways of connecting these channels, looking for new information. Even if the treatment is essentially phenomenological, as explained before, the predictions shall be checked and may contribute with future theoretical (QCD) developments. To this end, in this work we shall investigate some aspects of both elastic and inelastic $`pp`$ and $`\overline{p}p`$ scattering in the context of a particular phenomenological approach. Our goal is to obtain simultaneous descriptions of some experimental data from both channels, that is, our primary interest concerns connections between elastic and inelastic hadron scattering. Accordingly we shall base our study on one of the most important principle of quantum field theory: Unitarity. For reasons that will be explained in detail in what follows, our framework is the impact parameter formalism (geometrical approach). At first, under geometrical considerations, we shall not refer to quarks/gluons or partons, but treat hadron-hadron interactions as collisions between composite objects made up by elementary parts, which we shall generically refer as “constituents”. At the end we discuss some possible connections between our results and the framework of QCD. Also, as shall be explained, our starting point is the description of physical quantities that characterize the elastic channel in $`pp`$ and $`\overline{p}p`$ scattering. We then proceed to consider the inelastic channel through unitarity arguments and in a geometrical approach. 
Following other authors , we shall express the “complex” (overall) hadron-$`p`$ multiplicity distributions (inelastic channel) in terms of an “elementary” distribution (associated with an elementary process taking place at given impact parameter) and the inelastic overlap function, which is constructed from descriptions of the elastic channel data. The novel aspects concern: (a) quantitative correlation between the violations of the Koba-Nielsen-Olesen (KNO) scaling (inelastic channel) and geometrical scaling (elastic channel); (b) introduction of novel parametrizations for the elementary quantities based on geometrical arguments and taking suitable account of the most recent data on contact interactions. With this general formalism, in addition to the description of the elastic data (even at large momentum transfers), the hadronic multiplicity distributions may be evaluated and an excellent reproduction of the experimental data on $`pp`$ and $`\overline{p}p`$ inelastic multiplicities is achieved . We also present predictions at the LHC energies. The paper is organized as follows. In Sec. II we discuss the underlying phenomenological ideas, the data to be investigated and the strategy assumed. In Sec. III we present the theoretical framework connecting elastic and inelastic channels, the inputs from elastic scattering data, the novel parametrizations for the elementary processes, the predictions for the hadronic multiplicities distributions and comparison with the experimental data. In Sec. IV we discuss in some detail all the results obtained and the physical and geometrical interpretations. The conclusions and some final remarks are the content of Sec. V. In what follows we shall represent the main physical quantities associated with hadron-hadron scattering (complex/overall system) by capital letters and those associated with constituent-constituent interactions (elementary process) by lower case. II. EXPERIMENTAL DATA AND PHENOMENOLOGICAL CONTEXT The broad classification in hard, soft (and also semi-hard) processes is based on the momentum transferred in the collision. On the other hand, depending on the physical process involved, high energy hadron scattering may also be classified into elastic and inelastic processes and the later, in diffractive (single and double dissociation) and non diffractive. Concerning both elastic and inelastic channels, one of the striking features that emerged from early experiments was the violation of the scaling laws, namely, the geometrical scaling in elastic scattering and the KNO scaling in the inelastic events . For this reason, our main interest in this work is to correlate quantitatively the above scaling violations and to discuss its phenomenological and dynamical aspects. To this end, before we present the underlying formalism and results, we discuss in this section the physical observables to be investigated and the reasons for our choices concerning phenomenology and strategies. A. Elastic channel The differential cross section is the most important physical observable in the elastic channel, since from it other quantities may be obtained, in particular, the integrated elastic cross section, $`\sigma _{el}`$, and the total cross section, $`\sigma _{tot}`$ (optical theorem). The violation of the geometrical scaling may be characterized by the increase of the ratio $`\sigma _{el}/\sigma _{tot}`$ with the energy at the CERN Intersecting Storage Ring ($`ISR`$) and at the CERN Super Proton Synchrotron ($`S\overline{p}pS`$) regions. 
The differential cross section yields the elastic hadronic amplitude, $`F(q,s)`$, by $$\frac{d\sigma }{dq^2}=\pi |F(q,s)|^2$$ (1) and this amplitude may be expressed in terms of the elastic profile function, $`\mathrm{\Gamma }(b,s)`$, by $$F(q,s)=ib𝑑bJ_0(qb)\mathrm{\Gamma }(b,s),$$ (2) where $`b`$ is the impact parameterand $`\sqrt{s}`$ the center-of-mass energy. As commented before, despite the bulk of models able to reproduce the differential cross section data at the $`ISR`$, $`S\overline{p}pS`$ and Tevatron energies , an approach based exclusively in QCD is still missing. Obviously due to its soft character, a QCD treatment of the elastic scattering should be non-perturbative. Along this line, despite the difficulties, important results have recently been reached through the works by Landshoff, Nachtmann, Simonov, Dosch, Ferreira and Kramer . The approach, based on the functional integral representation (QCD) and eikonal approximation, allows to extract a quark-quark profile function $`\gamma (b)`$ (impact parameter space) from the gluon gauge-invariant two-point correlation function, determined, for example, from lattice QCD . Through the Fourier transform (analogous to Eq. (2) at the elementary level), the quark-quark scattering amplitude, $`f(q,s)`$, may be obtained: $$f(q,s)=ib𝑑bJ_0(qb)\gamma (b,s).$$ (3) One possible connection with the hadronic scattering amplitude, Eq. (2), is by means of the Stocastic Vacuum Model (SVM) and some important results have recently been obtained . However, presently, this theoretical framework still depends on some phenomenological inputs. Also, it is able to reproduce only the experimental data in the forward region and/or very small values of the momentum transfer and does not distinguish $`pp`$ and $`\overline{p}p`$ scattering (dip region), even at $`ISR`$ energies . Another way to obtain the hadronic amplitude from the elementary one is through the Glauber’s multiple diffraction theory (MDT) and this plays a central role in our choices concerning phenomenology and strategies as discussed in what follows. Originally the MDT was applied to hadron-nucleus and nucleus-nucleus collisions and after to hadron-hadron scattering . The topical point which interest us is the allowed general connection between complex quantity (composite object) and elementary quantity (constituents). In the case of hadron-hadron collisions, the connection between the hadronic amplitude (composite object) and the elementary amplitude (constituents) is also established through the eikonal approximation. In this approach the hadronic profile function, Eq. (2), is expressed by $$\mathrm{\Gamma }(b,s)=1e^{i\chi (b,s)},$$ (4) where $$\chi (b,s)=Cq𝑑qJ_0(qb)G_AG_Bf$$ (5) is the eikonal function, $`G_{A,B}`$ the hadronic form factors, $`f`$ the elementary (constituent - constituent) amplitude and $`C`$ does not depend on the transferred momentum. The above notation shall be useful for the discussion we are interested in. In spite of their simplicity, Eqs. (2), (4) and (5) are extremely useful. Recently, with suitable parametrizations for the form factors and with the elementary amplitude (quark-quark) extracted from a parametrization for the gluonic correlator through the functional approach (non-perturbative QCD), Grandel and Weise obtained good descriptions of the differential cross secion data for $`pp`$ and $`\overline{p}p`$ elastic scattering at the $`ISR`$ and $`S\overline{p}pS`$ energies, but only in the region of small momentum transfer . 
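The impact parameter relations (1)-(2) translate directly into a numerical recipe. Note that the integral signs in Eqs. (2), (3) and (5) were lost in extraction; the standard form is $`F(q,s)=i_0^{\mathrm{}}b𝑑bJ_0(qb)\mathrm{\Gamma }(b,s)`$. A hedged sketch follows, with a purely illustrative profile that is not one of the fitted models discussed here:

```python
import numpy as np
from scipy.special import j0

def amplitude_from_profile(profile, q, b_max=30.0, nb=4000):
    """F(q,s) = i * int_0^inf b db J0(q b) Gamma(b,s), i.e. Eq. (2) with
    the integral sign restored.  `profile` is a callable Gamma(b)."""
    b = np.linspace(0.0, b_max, nb)
    return 1j * np.trapz(b * j0(q * b) * profile(b), b)

def dsigma_dq2(profile, q):
    """d sigma / d q^2 = pi |F(q,s)|^2, Eq. (1)."""
    return np.pi * abs(amplitude_from_profile(profile, q)) ** 2

# Illustrative Gaussian grey-disk profile (not a fit):
# print(dsigma_dq2(lambda b: 0.7 * np.exp(-b**2 / 8.0), q=0.5))
```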
On the other hand, excellent descriptions of experimental data, including also large momentum transfers, have been obtained in a rather phenomenological context, through suitable parametrizations for $`G_A`$, $`G_B`$ and $`f`$ . Moreover, elementary amplitudes obtained through the SVM and the gluonic correlator from lattice QCD have been investigated and also comparisons with empirical analysis and model predictions have been discussed . We understand that all these facts indicate that the impact parameter formalism (and the eikonal approximation), connecting the complex (overall) amplitude with the elementary amplitude (constituent-constituent), Eqs. (2), (4) and (5), is a very fruitful and simple approach in the investigation of the elastic hadron scattering. As shown, it seems also to be an adequate bridge between phenomenology and non-perturbative QCD. These conclusions constitute one of the foundations of our approach and, as discussed in what follows, the extensions to the inelastic channel shall be based on the general idea of connections between overall and elementary quantities in an impact parameter picture. B. Inelastic channel Concerning scaling in the inelastic channel, the quantity of interest is the hadronic charged particle multiplicity distribution $`P_N`$, normalized in terms of the KNO variable, $`Z=N(s)/<N>(s)`$, as $$<N>(s)P_N(Z)\mathrm{\Phi }.$$ (6) The broader distribution observed at the $`S\overline{p}pS`$ characterizes the violation of the KNO scaling, namely, $`\mathrm{\Phi }=\mathrm{\Phi }(Z,s)`$. As in the case of elastic differential cross section data, a wide class of models describes this behavior, as for example, dual parton , fireball , two-component models , and others. Also, hadronic processes have been extensively treated through Monte Carlo event generators and the Lund parton approach . However, we observe that, despite some QCD inspired approaches and good descriptions of some soft processes, all these formalims and models are concerned exclusively with the inelastic channel and this is the topical point that distinguishes our strategy, as discussed in what follows. We shall also return to this subject in Secs. IV and V. C. Strategies Connections between Geometrical and KNO scalings were established a long time ago, by Dias de Deus , Lam and Yeung . However, we are interested here in their violations and the central point is: Does one need a new connection when the two phenomena are violated at the $`Sp\overline{p}S`$ or can the two effects be correlated both phenomenologically and dynamically? We will argue that the latter alternative seems to be prevail. Specifically, our goal is to correlate quantitatively both violations in an analytical way and we shall show that, beginning with a formalism that describes quite well the violation of the Geometrical scaling (elastic channel input), it is possible to extend it and to describe, quantitatively, the violation of the KNO scaling in an analytical way. We stress that this strategy distinguishes our approach from all the other model and theoretical descriptions of elastic or inelastic scattering that treat these interactions separately, in an independent way or in distinct contexts. Since the connection between elastic channel $``$ inelastic channel is our primary interest, the approach shall be based in direct analogy with the ideas discussed in Sec. II.A, that is, we consider hadron-hadron collisions as collisions between complex objects, each one composed by a number of more elementary ones. 
As an extension of the Glauber multiple diffraction theory, which connects hadronic and elementary elastic amplitudes, we shall consider the impact parameter formalism and also express the hadronic multiplicity distribution (complex system) in terms of elementary multiplicity distributions (constituents). The point is to describe a “wide” distribution (hadronic) by superimposing a number of narrower ones (elementary) . What we shall do here is to infer what these elementary distributions should be, in order to reproduce the experimental data on hadronic distributions and in the context of the impact parameter picture. He hope that, as in the elastic case, these information can contribute to further theoretical developments. III. UNITARITY AND IMPACT PARAMETER PICTURE Unitarity is one of the most important principles in quantum field theory. In the geometrical picture, unitarity correlates the elastic scattering amplitude in the impact parameter $`b`$ space, $`\mathrm{\Gamma }(b,s)`$, Eq. (2), with the inelastic overlap function, $`G_{in}(b,s)`$, by $$2Re\mathrm{\Gamma }(b,s)=|\mathrm{\Gamma }(b,s)|^2+G_{in}(b,s)$$ (7) which is term-by-term equivalent to $$G_{tot}(b,s)=G_{el}(b,s)+G_{in}(b,s).$$ (8) For a purely imaginary elastic amplitude in momentum transfer space the profile function $`\mathrm{\Gamma }(b,s)`$ is real and in the eikonal approximation is expressed by $$\mathrm{\Gamma }(b,s)=1exp[\mathrm{\Omega }(b,s)],$$ (9) where $`\mathrm{\Omega }(b,s)=Im\chi (b,s)`$ in Eq. (4). With this, $$G_{in}(b,s)=1exp[2\mathrm{\Omega }(b,s)]\sigma _{in}(b,s)$$ (10) is the probability for an inelastic event to take place at $`b`$ and $`s`$ and $$\sigma _{in}(s)=d^2𝐛G_{in}(b,s).$$ (11) In this picture the topological cross section for producting an even number $`N`$ of charged particles at CM energy $`\sqrt{s}`$ is given by $$\sigma _N(s)=d^2𝐛\sigma _N(b,s)=d^2𝐛\sigma _{in}(b,s)\left[\frac{\sigma _N(b,s)}{\sigma _{in}(b,s)}\right]$$ (12) where the quantity in brackets can be interpreted as the probability of producing $`N`$ particles at impact parameter $`b`$. A. Hadronic and elementary multiplicity distributions We now introduce the multiplicity distributions for both an overall and an elementary processes in terms of corresponding KNO variables and also the formal connection between these distributions. Representing the hadronic (overall) multiplicity distribution by $`\mathrm{\Phi }`$ and the corresponding KNO variable by $$Z=\frac{N(s)}{<N>(s)}$$ (13) where $`<N>(s)`$ is the average hadronic multiplicity at $`\sqrt{s}`$, we have in general $$\mathrm{\Phi }=<N>(s)\frac{\sigma _N(s)}{\sigma _{in}(s)}=\mathrm{\Phi }(Z,s).$$ (14) Now, let $`<n>(b,s)`$ be the average number of particles produced at $`b`$ and $`s`$, $`\phi `$ the elementary multiplicity distribution and $$z=\frac{N(s)}{<n>(b,s)}$$ (15) a KNO variable associated with the elementary process taking place at $`b`$ (and $`s`$). Then, in general, $$\phi =<n>(b,s)\frac{\sigma _N(b,s)}{\sigma _{in}(b,s)}=\phi (z,s).$$ (16) Both distributions are normalized by the usual conditions $$_0^{\mathrm{}}\mathrm{\Phi }(Z)𝑑Z=2=_0^{\mathrm{}}\mathrm{\Phi }(Z)Z𝑑Z.$$ (17) $$_0^{\mathrm{}}\phi (z)𝑑z=2=_0^{\mathrm{}}\phi (z)z𝑑z,$$ (18) The relationship between $`\mathrm{\Phi }`$ and $`\phi `$ then follows from Eqs. (10-12), (14) and (16): $$\mathrm{\Phi }=\frac{<N>(s)d^2𝐛\frac{G_{in}(b,s)}{<n>(b,s)}\phi }{d^2𝐛G_{in}(b,s)}.$$ (19) Now, let us define a multiplicity function $`m(b,s)`$ by the ratio $$m(b,s)=\frac{<n>(b,s)}{<N>(s)},$$ (20) so that Eq. 
(19) becomes $$\mathrm{\Phi }=\frac{d^2𝐛\frac{G_{in}(b,s)}{m(b,s)}\phi (\frac{Z}{m(b,s)})}{d^2𝐛G_{in}(b,s)}=\mathrm{\Phi }(Z,s).$$ (21) It is well known that connections between KNO and Geometrical scaling may be established if $`m(b,s)=m(b/R(s))`$ and also $`G_{in}(b,s)=G_{in}(b/R(s))`$ , where $`R(s)`$ is the “geometrical radius”. In this case $`\mathrm{\Phi }(Z,s)`$ is only a function of $`Z`$. The general result (21) means that, once one has parametrizations for $`G_{in}(b,s)`$ and the elementary quantities $`\phi `$ (multiplicities distribution) and $`m(b,s)`$ (multiplicity function) the overall hadronic multiplicity distribution may be evaluated. In this work we consider $`G_{in}(b,s)`$ from analyses of elastic $`pp`$ and $`\overline{p}p`$ scattering data (taking account of geometrical scaling violation) and infer the elementary quantities based on geometric arguments, as explained in what follows. In so doing, we shall correlate quantitatively the violations of both KNO and Geometrical scaling in an analytical way. B. Elastic channel input: the BEL $`G_{in}`$ In the elastic channel, the breaking of Geometrical scaling is quite well described by the BEL behaviour, analytically expressed by the Short Range Expansion of the inelastic overlap function $$G_{in}(b,s)=P(s)exp\{b^2/4B(s)\}k(x,s),$$ (22) with $`k`$ being expanded in terms of a short-range variable $`x=bexp\{(ϵb)^2/4B(s)\}`$, i.e. $$k(x,s)=\underset{n=0}{\overset{N}{}}\delta _{2n}(s)\left[\frac{ϵexp\{1/2\}}{\sqrt{2B(s)}}x\right]^{2n}.$$ (23) The quantity in the bracket of (23) by itself exhibits GS for constant values of $`ϵ^2`$ $``$ 0.78, but $`k(x,s)`$ doesn’t because of the $`s`$-dependence of $`\delta _{2n}(s)`$ and therefore $`G_{in}(b,s)`$ doesn’t either (also because of P($`s`$)). Each term in the bracket of (23) has a maximum value of 1 and the rapid convergence of the series reproduces data for all values of $`t`$ $``$ (0, 14) GeV<sup>2</sup> with N=3. For $`k`$=1=$`P`$, we recover the Van Hove limit for $`\sigma _{el}/\sigma _{tot}`$=1-$`\frac{1}{4(1\mathrm{ln2})}`$ $``$ 0.1853 which is nearly attained at the ISR. The deviation of $`k`$ from the constant value of 1, in particular the increase of $`\delta _2(s)`$ with increasing $`s`$ is responsible for the Edgier behaviour of $`G_{in}(b,s)`$, while increasing values of $`P(s)`$ and $`B(s)`$ make the proton Blacker and Larger respectively (BEL behavior of the inelastic overlap function). Excellent agreement with experimental data on $`pp`$ and $`\overline{p}p`$ elastic scattering is achieved for the following parametrizations in terms of the Froissart-like variable $`y=\mathrm{ln}^2(s/s_0)`$ with $`s_0=100`$ GeV<sup>2</sup> $$P(s)=\frac{0.908+0.027y}{1+0.027y},\delta _2(s)=0.115+0.00094y$$ (24) $`B(s)=6.64+0.044y`$ and $`\delta _4`$ determined from $`\delta _2`$ in some models by $`\delta _4=\delta _2^2/4`$. C. Elementary hadronic process in a geometrical picture We now turn to the discussion of the elementary hadronic process, characterized by $`\phi `$ and $`m`$ in Eq. (21). By construction, these quantities are associated to collisions of strongly interacting hadronic constituents. 
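Before turning to the elementary quantities, the BEL overlap function of Eqs. (22)-(24) is easy to evaluate numerically. In the sketch below the minus signs dropped in extraction are restored, $`G_{in}=P(s)exp\{b^2/4B\}k`$ and $`x=bexp\{(ϵb)^2/4B\}`$, which is the choice that makes each bracket term in Eq. (23) peak at 1 as stated. We also assume $`\delta _0=1`$ (so that $`k=1`$ recovers the Van Hove limit quoted above) and truncate at $`n=2`$ since $`\delta _6`$ is not quoted here; the text itself uses N=3.

```python
import numpy as np

EPS2 = 0.78  # epsilon^2, fixed value quoted in the text

def bel_parameters(s, s0=100.0):
    """P(s), B(s), delta_2(s) of Eq. (24); y = ln^2(s/s0), s in GeV^2."""
    y = np.log(s / s0) ** 2
    P  = (0.908 + 0.027 * y) / (1.0 + 0.027 * y)
    B  = 6.64 + 0.044 * y
    d2 = 0.115 + 0.00094 * y
    return P, B, d2

def g_in_bel(b, s):
    """BEL inelastic overlap function, Eqs. (22)-(23), minus signs restored,
    series truncated at n = 2 with delta_4 = delta_2^2 / 4 (model relation
    quoted in the text)."""
    P, B, d2 = bel_parameters(s)
    d4 = d2 ** 2 / 4.0
    eps = np.sqrt(EPS2)
    x = b * np.exp(-EPS2 * b ** 2 / (4.0 * B))
    u = (eps * np.exp(0.5) / np.sqrt(2.0 * B)) * x   # bracket of Eq. (23)
    k = 1.0 + d2 * u ** 2 + d4 * u ** 4
    return P * np.exp(-b ** 2 / (4.0 * B)) * k
```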
As commented before, due to the success of the geometrical models in the investigation of elastic hadron scattering (for example, the above BEL approach) and the presently lack of a pure QCD approach and/or Monte Carlo models to the subject (elastic/soft scattering states), we shall discuss what an elementary hadronic process could be in the geometrical framework and in an analytical way. Our arguments are as follows. In the geometrical approach an elementary process is a process occuring at a given impact parameter. Concerning contact interactions, experimental information is only available from lepton-lepton collisions, which is a process occuring in a unique angular momentum state and therefore also at a given impact parameter (zero in this case). Although these processes can not be the same as collisions between hadrons constituents, it is reasonable, from the geometrical point of view, to think that some characteristics of both processes could be similar. The point is to find out or infer what they could be. For these reasons we shall consider the experimental data available on $`e^+e^{}`$ collisions as a possible source of (limited) geometrical information concerning elementary hadronic interactions (at given impact parameter). We do not pretend to look for connections between $`e^+e^{}`$ annihilations and $`pp`$ and $`\overline{p}p`$ collisions but to extract from the former processes suitable information that allows the construction of the hadronic multiplicities (and the connections with the corresponding elastic amplitude) in an analytical way and in the geometrical context. This may be achieved for both $`\phi `$ and $`m`$ in Eq. (21) through the following procedure. $``$ Analytical relation between multiplicity function and eikonal First, in order to connect the multiplicity function $`m(b,s)`$ and the eikonal $`\mathrm{\Omega }(b,s)`$ (and so $`G_{in}(b,s)`$ by Eq. (10)) in an analytical way, let us consider the very simple assumption that the average multiplicity at given impact parameter depends on the center-of-mass energy in the form of a general power law $$<n>(bfixed,s)E_{CM}^\gamma .$$ (25) We shall discuss this assumption in detail in Sec. IV.B.. Now, from Eqs. (10) and (11), $`exp\{2\mathrm{\Omega }(b,s)\}`$ is the transmission coefficient, i.e. the probability of having no interaction at a given impact parameter, and therefore $`\mathrm{\Omega }`$ should be proportional to the thickness of the target, or as first approximation, to the energy $`E_{CM}`$ that can be deposited at $`b`$ for particle production at a given $`s`$. By Eq. (25) this implies $$<n>(b,s)\mathrm{\Omega }^\gamma (b,s).$$ (26) Comparison of Eqs. (20) and (26) allows us to correlate the multiplicity function $`m(b,s)`$ with the eikonal through a non-factorizing relation (in $`b`$ and $`s`$): $$m(b,s)=\xi (s)\mathrm{\Omega }^\gamma (b,s),$$ (27) with $`\xi (s)`$ being determined by the normalization condition of the overall multiplicity distribution, Eq. (18). With this, Eq. (21) becomes $$\mathrm{\Phi }=\frac{d^2𝐛\frac{G_{in}(b,s)}{\xi \mathrm{\Omega }^\gamma (b,s)}\phi (\frac{Z}{\xi \mathrm{\Omega }^\gamma (b,s)})}{d^2𝐛G_{in}(b,s)}$$ (28) where $$\xi (s)=\frac{𝑑b^2G_{in}(b,s)}{𝑑b^2G_{in}(b,s)\mathrm{\Omega }^\gamma (b,s)}.$$ (29) Once the above analytical connection is assumed, the elementary hadronic process is now characterized by only two quantities, namely, the elementary distribution $`\phi `$ and the power coefficient $`\gamma `$. 
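The normalization $`\xi (s)`$ of Eq. (29) follows immediately once $`\mathrm{\Omega }`$ is recovered from $`G_{in}`$ via Eq. (10), $`\mathrm{\Omega }=\frac{1}{2}\mathrm{ln}(1G_{in})`$. A sketch building on `g_in_bel` from the previous snippet; the grid choices are arbitrary:

```python
import numpy as np

def omega_from_gin(b, s):
    """Eikonal from unitarity, Eq. (10): Omega = -0.5 * ln(1 - G_in)."""
    return -0.5 * np.log(1.0 - g_in_bel(b, s))

def xi_of_s(s, gamma=0.516, b_max=20.0, nb=2000):
    """xi(s) of Eq. (29): int db^2 G_in / int db^2 G_in * Omega^gamma,
    with db^2 = 2 b db."""
    b = np.linspace(1e-6, b_max, nb)
    gin = g_in_bel(b, s)
    om = omega_from_gin(b, s)
    w = 2.0 * b
    return np.trapz(w * gin, b) / np.trapz(w * gin * om ** gamma, b)
```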
We proceed with the determination of these quantities through quantitative analyses of $`e^+e^{}`$ data and under the following arguments. $``$ Elementary multiplicity distribution Because the elementary process occurs at a given impact parameter, its elementary structure suggests that it should scale in the KNO sense. Now, since experimental information on $`e^+e^{}`$ multiplicity distributions shows agreement with this scaling , we shall base our parametrization for $`\phi `$ just on these data. In particular, it is sufficient to assume a gamma distribution (one free parameter), normalized according to Eq. (17), $$\phi (z)=2\frac{K^K}{\mathrm{\Gamma }(K)}z^{K1}exp\{Kz\}.$$ (30) Fit to the most recent data, covering the interval 22.0 $`GeV`$ $``$ $`\sqrt{s}`$ $``$ 161 GeV furnished $`K=10.775`$ $`\pm `$ 0.064 with $`\chi ^2/N_{DF}=508/195=2.61`$ and the result is shown in Fig. 1. Concerning this fit, we verified that data at $`29`$ and $`56`$ GeV make the highest contributions in terms of $`\chi ^2`$ values. For example, if the former data are excluded we obtain $`K=10.62`$ and $`\chi ^2/N_{DF}=414/181=2.29`$ and if both sets are excluded then $`K=10.88`$ and $`\chi ^2/N_{DF}=286/162=1.77`$. For comparison we recall that the DELPHI fit through a negative binomial distribution to data at only $`91`$ GeV gives $`\chi ^2/N_{DF}=80/34=2.35`$ (and $`\chi ^2/N_{DF}=43/33=1.30`$ through a modified negative binomial distribution) . However, we also verified that the above two values for $`K`$ are not so sensitive in the final result concerning the hadronic multiplicity distribution, which is our goal (we shall return to this point in Sec. IV.A). For this reason and since we are only looking for experimental information that could represent contact interactions (geometrical point of view) we consider our first result shown in Fig. 1 as the representative one. $``$ Power coefficient Finally, following Eq. (25), we consider fits to the $`e^+e^{}`$ average multiplicity through the general power law $$<n>_{e^+e^{}}=A[\sqrt{s}]^\gamma .$$ (31) We collected experimental data at center-of-mass energies above resonances and thresholds and also the most recent data at the highest energies, covering the interval 5.1 GeV $``$ $`\sqrt{s}`$ $``$ 183 GeV . Fitting to Eq. (31) yields $`A`$=2.09 $`\pm `$ 0.02, $$\gamma =0.516\pm 0.002$$ (32) with $`\chi ^2/N_{DF}=409/46=8.89`$ and the result is shown in Fig. 2. We observe that this parametrization deviates from the data above $`\sqrt{s}`$ $``$ 100 GeV and this contributes to the high $`\chi ^2`$ value. However, as commented before, we do not expect that $`e^+e^{}`$ annihilation exactly represent the collisions between hadrons constituents. The power law is a form which allows an analytical and simple connection between the multiplicity function and the eikonal as expressed by Eq. (27). In Sec. IV.B we present a detailed discussion concerning this power assumption and in Sec. IV.A and IV.C, we discuss the physical meaning of the differences between our parametrization and the experimental data on $`<n>_{e^+e^{}}`$. D. Results for the hadronic multiplicities distributions With the above results we are now able to predict the hadronic inelastic multiplicity distribution $`\mathrm{\Phi }(Z,s)`$, Eqs. (28, 29), without free parameters: $`G_{in}(b,s)`$ (and $`\mathrm{\Omega }(b,s)`$) comes from analysis of the elastic scattering data (Eq. (10) and (22-24)); $`\phi (z)`$ and $`\gamma `$ from fits to $`e^+e^{}`$ data, Eq. (30) and (31-32) respectively. 
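The whole chain of Eqs. (28)-(29) can then be assembled with no free parameters, as stated. The sketch below uses the functions defined in the two previous snippets and plain trapezoidal quadrature; it is illustrative, not the production-level integration presumably behind Figs. 3 and 4.

```python
import numpy as np
from scipy.special import gammaln

K = 10.775  # fitted shape parameter of Eq. (30)

def phi_elementary(z):
    """Gamma-distributed elementary KNO function, Eq. (30), evaluated in
    log space for numerical stability."""
    z = np.asarray(z, dtype=float)
    out = np.zeros_like(z)
    pos = z > 0
    logphi = (np.log(2.0) + K * np.log(K) - gammaln(K)
              + (K - 1.0) * np.log(z[pos]) - K * z[pos])
    out[pos] = np.exp(logphi)
    return out

def Phi_hadronic(Z, s, gamma=0.516, b_max=20.0, nb=2000):
    """Hadronic KNO distribution, Eqs. (28)-(29): BEL overlap function plus
    the elementary inputs phi(z) and gamma."""
    b = np.linspace(1e-6, b_max, nb)
    gin = g_in_bel(b, s)
    m = xi_of_s(s, gamma, b_max, nb) * omega_from_gin(b, s) ** gamma
    w = 2.0 * b
    num = np.trapz(w * (gin / m) * phi_elementary(Z / m), b)
    return num / np.trapz(w * gin, b)

# e.g. the prediction at the collider energy: Phi_hadronic(1.0, s=546.0**2)
```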
We express $`\mathrm{\Phi }`$ in terms of the scaling variable $`Z^{^{}}=N^{^{}}/<N^{^{}}>`$ where $`<N^{^{}}>=N(s)N_0`$ with $`N_0`$=0.9 leading charges removed. It is well known that such a subtraction improve the KNO curves for all measured data below the $`S\overline{p}pS`$ Collider with the above value of $`N_0`$. This is completely equivalent to the Wroblewski relation for the dispersion $`D=\sqrt{N^2<N>^2}`$=0.594 $`[<N>N_0]`$ with the same value of $`N_0`$. Values of $`N_0`$ around 1 and the numerical value of the $`D`$ vs $`N`$ can be found by parton model arguments for valence quark distributions. The predictions for $`pp`$ scattering at ISR energies and $`\overline{p}p`$ at 546 GeV are shown in Figs. (3) and (4), respectively, together with the experimental data . The theoretical curves present excellent agreement with all the data, showing a slow evolution with the energy at ISR for large $`Z^{^{}}`$ and reproducting the KNO violations for large $`Z^{^{}}`$ values at 546 GeV. In Fig. (4) is also shown the predictions at 14 TeV (LHC). IV DISCUSSION In the last section we obtained a quantitative correlation between the violations of both KNO and geometrical scaling. In the framework of the impact parameter picture, Sec. III.A, we only used four inputs, three parametrizations from fits to experimental data and one geometrical assumption, namely, 1. $`G_{in}(b,s)`$ from fits to elastic scattering data (BEL behavior); 2. $`\phi (z)`$ from fit to $`e^+e^{}`$ multiplicity distribution data; 3. The geometrical assumption (27) concerning the multiplicity function $`m(b,s)`$; 4. The power coeficient $`\gamma `$, from fit to $`e^+e^{}`$ average multiplicities data. In this section, we first investigate the sensitivity of each parametrization from fits to the experimental data in the output of interest, namely, the hadronic multiplicity distribution. Then, we discuss in some detail the geometrical assumption concerning the multiplicity function and the power law. Finally, based on this study, we outline the physical picture associated with all the results from both geometrical/phenomenological and QCD point of views. A. Sensitivity of the parametrizations First we observe that the power coefficient $`\gamma `$ in Eqs. (28-29) could, formally, be considered as a free parameter in a direct fit to the data on the hadronic multiplicity distributions and, in this case, it would not be necessary to take account of the $`e^+e^{}`$ average multiplicity data. This, however, leds to a strong correlation between $`\gamma `$ and the other two inputs, $`G_{in}(b,s)`$ and $`\phi (z)`$. On the other hand, with our procedure, the values and behaviours of the three inputs, $`G_{in}`$, $`\phi `$ and $`\gamma `$, are rougly uncorrelated and this allows tests of the inputs by fixing two of them and changing the third. In what follows we perform this kind of analysis, beginning allways with the results obtained in the last section and considering, separately, a change in each one of the inputs. Since we are interested in the scaling violation we shall base this study in the results for the hadronic multiplicity distribution only at the collider energy. $``$ Changing $`G_{in}`$ Among the wide class of models for $`G_{in}`$ , we shall consider a multiple diffraction model (MDM) and also the traditional approach by Chou and Yang, as a class of geometrical model (GM). The reason is based on the discussion in Sec. II.A. Also, as we shall show in Sec. 
IV.C, these models allow a suitable connection with the interpretations that can be inferred from our general approach. - Multiple diffraction model (MDM) This class of models is characterized by each particular choice of parametrizations for the physical quantities in Eq. (5), namely, the form factors $`G_{A,B}`$ and the elementary scattering amplitude $`f`$ . In particular, Menon and Pimentel obtained a good description of the experimental data on $`pp`$ and $`\overline{p}p`$ elastic scattering, above $`\sqrt{s}=10GeV`$, through the following choices: $$G_A(q)=G_B(q)=\frac{1}{(1+\frac{q^2}{\alpha ^2})(1+\frac{q^2}{\beta ^2})},$$ (33) $$f(q)=\frac{i[1(q^2/a^2)]}{[1+(q^2/a^2)^2]}.$$ (34) The parameters $`a^2`$ and $`\beta ^2`$ are fixed and the dependence on the energy is contained in the other two parameters: $$C(s)=\xi _3exp\{\xi _4[\mathrm{ln}(s)]^2\},$$ (35) $$\alpha ^2(s)=\xi _1[\mathrm{ln}(s)]^{\xi _2},$$ (36) where $`\xi _i`$, $`i=1,2,3,4`$, are real constants. The reasons for these choices and their physical interpretations are extensively discussed in . With these parametrizations the opacity function, $$\mathrm{\Omega }(b,s)=\mathrm{Im}\chi (b,s),$$ (37) is analytically determined, and then the inelastic overlap function through Eq. (10). - Geometrical model (GM) In the geometrical approach by Chou and Yang, the essential ingredient is the convolution of form factors in the impact parameter space . However, in the context of the multiple diffraction theory, it can also be specified by the following choices : $$G_A(q)=G_B(q)=\frac{1}{2\pi [1+(\frac{q^2}{\mu ^2})]^2},$$ (38) $$f=1.$$ (39) In Ref. the parameters $`\mu `$ and $`C`$ were determined through fits to elastic $`pp`$ data at $`\sqrt{s}=23.5GeV`$ and $`\overline{p}p`$ at $`546GeV`$. Following the authors, we consider the parametrizations $$C(s)=a_1+a_2\mathrm{ln}s,\frac{1}{\mu ^2(s)}=b_1+b_2\mathrm{ln}s.$$ (40) With the above double-pole parametrization for the form factors, the opacity, Eq. (5), is analytically determined, and so is the inelastic overlap function, Eq. (10). - Results The results for the inelastic overlap function at $`546GeV`$ are shown in Fig. 5, from both the MDM and the GM, together with the BEL $`G_{in}`$ for comparison. In what follows we shall use the notation $`G_{in}^{MDM}`$, $`G_{in}^{GM}`$, and $`G_{in}^{BEL}`$ for these parametrizations, respectively. We then calculate the hadronic multiplicity distribution, Eqs. (28-29), at this energy, by fixing both $`\gamma =0.516`$, Eq. (32), and the gamma parametrization for the elementary multiplicity $`\phi (z)`$, Eq. (30), and using $`G_{in}^{MDM}`$ and $`G_{in}^{GM}`$. The results are displayed in Fig. 6 together with that obtained with $`G_{in}^{BEL}`$ (Fig. 4) for comparison. We observe that, for central collisions (small $`b`$), $`G_{in}^{BEL}`$ and $`G_{in}^{GM}`$ are very similar, but $`G_{in}^{MDM}`$ has higher values (Fig. 5a). This leads to the differences in $`\mathrm{\Phi }(Z^{})`$ at high multiplicities, as can be seen in Fig. 6. In the same way, the differences between $`G_{in}^{MDM}`$, $`G_{in}^{GM}`$, and $`G_{in}^{BEL}`$ at large $`b`$ (Fig. 5b) give rise to the differences in $`\mathrm{\Phi }(Z^{})`$ at small multiplicities. In all cases the physical picture is that large multiplicities (large $`Z^{}`$) occur for small impact parameters, while grazing collisions (large $`b`$) lead to small multiplicities, as one would naively expect.
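Since the fitted constants $`a_1,a_2,b_1,b_2`$ of Eq. (40) are not quoted here, a numerical sketch of the GM opacity must leave $`C`$ and $`\mu `$ as inputs; the Hankel transform of Eq. (5) with the choices (38)-(39) is then short:

```python
import numpy as np
from scipy.special import j0

def omega_gm(b, C, mu, q_max=20.0, nq=4000):
    """Chou-Yang (GM) opacity: Omega(b) = C * int q dq J0(q b) G_A G_B f,
    with G(q) = 1 / (2 pi (1 + q^2/mu^2)^2) and f = 1, Eqs. (38)-(39);
    C and mu must come from the fits of Eq. (40)."""
    q = np.linspace(0.0, q_max, nq)
    G = 1.0 / (2.0 * np.pi * (1.0 + q ** 2 / mu ** 2) ** 2)
    return C * np.trapz(q * j0(q * b) * G * G, q)

def g_in_gm(b, C, mu):
    """Inelastic overlap via unitarity, Eq. (10)."""
    return 1.0 - np.exp(-2.0 * omega_gm(b, C, mu))
```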
An important conclusion is that, with $`\gamma `$ and $`\phi (z)`$ fixed, the hadronic multiplicity distributions obtained with $`G_{in}^{MDM}`$, $`G_{in}^{GM}`$, and $`G_{in}^{BEL}`$ reproduce the experimental data quite well. We shall return to this point in Sec. IV.C. $``$ Changing the elementary distribution $`\phi `$ As a pedagogical exercise, we shall consider only an early parametrization introduced by Barshay and Yamaguchi , $$\phi _{BY}(z)=\frac{81\pi ^2}{64}z^3exp\{\frac{9\pi }{16}z^2\}.$$ (41) This function was used in the analysis of $`e^+e^{}`$ multiplicity distributions at lower energies and, as can be seen in Fig 7, does not reproduce the data at higher energies as well as the gamma parametrization. As before, we now proceed by fixing both $`\gamma =0.516`$ and $`G_{in}^{BEL}`$ and using the above parametrization for the elementary distribution. The result for the hadronic multiplicity distribution at $`546GeV`$ is shown in Fig. 8, together with the result obtained with the gamma parametrization for the elementary process (Fig. 4). The broader width of $`\phi _{BY}(z)`$ as compared with that of the gamma distribution, is directly reflected in the hadronic multiplicity. Despite the differences between the two parametrizations for the elementary process the final result for the hadronic distribution with $`\phi _{BY}`$ can yet be considered as a resonable reproduction of the experimental data. $``$ Changing the power coefficient $`\gamma `$ Finally, we consider different parametrizations for the $`e^+e^{}`$ average multiplicity data in the interval $`5.1\sqrt{s}183`$ GeV ,but under the assumption of the power dependence. We shall discuss this assumption in the next section. First we consider the naive parametrization based on the thermodynamic model (see next section) $$<n>_{e^+e^{}}=2.20[\sqrt{s}]^{0.500}.$$ (42) For the above ensemble of data one obtains $`\chi ^2/DOF=209/48=4.35`$. Second, and more importantly, we shall investigate the effect of the data at the highest energies, which are not reproduced by our original parametrization, as can be seen in Fig. 2. To this end, we consider only the data above $`10`$ GeV (25 data points) and the general power law parametrization. With this procedure we obtained $$<n>_{e^+e^{}}=3.46[\sqrt{s}]^{0.396\pm 0.008}.$$ (43) with $`\chi ^2/DOF=27/23=1.7`$. The result is displayied in Fig. 9 together with Eq. (42) and our original parametrization, Eqs. (31-32). We observe that, concerning $`e^+e^{}`$ average multiplicity Eq. (43) brings information from data at high energies (roughly above $`50`$ GeV), while the original parametrization, Eqs. (31-32) is in agreement with data at smaller energies (below $`100`$ GeV) and the same is true for the parametrization with Eq. (42). As before, we now calculate the corresponding hadronic multiplicity distribution by fixing both the gamma parametrization for the elementary distribution, Eq. (30), and the $`G_{in}^{BEL}`$, Eqs. (22-24), and considering the three parametrizations for the average multiplicity, Eqs. (31-32), (42) and (43). The results at $`546GeV`$ are shown in Fig. 10. We conclude that, in the context of our approach with the fixed inputs $`G_{in}^{BEL}`$ and gamma parametrization for $`\phi (z)`$, the information from the $`e^+e^{}`$ average multiplicites at high energies with the power-law does not reproduce the hadronic multiplicity distribution. 
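As a quick cross-check of Eq. (41), whose exponent, like others in this text, lost its minus sign in extraction and should read $`exp\{(9\pi /16)z^2\}`$, both KNO normalizations of Eq. (17) come out exactly 2; a sketch:

```python
import numpy as np
from scipy.integrate import quad

def phi_by(z):
    """Barshay-Yamaguchi form, Eq. (41), with the minus sign restored."""
    return (81.0 * np.pi ** 2 / 64.0) * z ** 3 \
           * np.exp(-9.0 * np.pi * z ** 2 / 16.0)

print(quad(phi_by, 0.0, np.inf)[0])               # -> 2.0
print(quad(lambda z: z * phi_by(z), 0.0, np.inf)[0])  # -> 2.0
```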
That is, the elementary average multiplicity distributions in hadronic interactions must deviates from the $`<n>_{e^+e^{}}(s)`$ as the energy increases, roughly above $`50100GeV`$. We shall discuss the physical interpretations of this result in Sec. IV.C. B. The multiplicity function and the power assumption We now turn to the discussion of a crucial assumption in our approach, namely, that the elementary average multiplicity at fixed impact parameter collisions grows as a power of the center-of-mass energy. To this end we shall first briefly recall some aspects of the power-law in hadron-hadron and $`e^+e^{}`$ collisions, both in experiment and theory, and after, based on these ideas, we shall present a discussion concerning the use of this assumption in our approach and also the meaning of the multiplicity function. From the early sixties cosmic ray results on extensive air showers, at energies $`E_{lab}<10^610^7GeV`$, led to empirical fits of the type $`<N>E_{lab}^{1/4}[\sqrt{s}]^{1/2}`$ (see for a review). A general power law with the exponent as a free parameter was used a long time ago, in order to allows analytical connections in analysis of cosmic ray data . Also, in the beginnig of sixties, these investigations introduced the concept of inelasticity . This comes from the observation that the energy effectively available for particle production could not be identified with the c.m. energy, as believed before $`1953`$ (Wataghin, Fermi, Landau), but only with a fraction of it: $$W=k\sqrt{s}.$$ (44) The remained $`(1k)\sqrt{s}`$ was associated with the early named “isobar” system, presently known as leading particle. From the theoretical side, the power dependence emerged in the context of statistical models (Fermi, Pomeranchuck) and hydrodynamical models (Heisenberg, Ladau) . For example, taking account of the inelasticity, in the Landau model, the fact that the averaged multiplicity is proportional to the total entropy leads to the result $$<N>k^{3/4}[\sqrt{s}]^{1/2}.$$ (45) Dependences on $`s^{1/2}`$ is characteristic of the Heisenberg and Pomeranchuck models and even $`s^{1/8}`$ appears in the Landau model, when viscosity is taken into account . In the context of termodynamic models, a universal formula was discovered for proton targets and for energies below $`50GeV`$: Data including $`\gamma ,\pi ,N`$ and $`p`$ collisions with $`p`$ were quite well reproduced by $`<N>=1.75s^{1/4}`$ . Concerning $`e^+e^{}`$ data on average multiplicity this model suggested $`<n>=1.5s^{3/8}`$ and pure fits to low energy data furnished $`<n>=(2.2\pm 0.1)s^{0.25\pm 0.01}`$, and also $`<n>=(1.73\pm 0.03)s^{0.34\pm 0.01}`$ . Moreover, the power-law, with the exponent $`1/4`$, was successfully used in the context of the parton model, either connecting KNO and Bjorken scaling or treating directly the violation of the KNO scaling . The power-law may also appear under more general arguments. For example, suppose that an intermediate state (fireball) of invariant mass $`M\sqrt{s}`$ decays into two systems each of invariant mass $`M_1=M/c`$, where $`c`$ is a constant. Suppose also that similar processes continue through some steps (sucessive cluster production) until the masses reach a value $`M_0`$ (some minimum ressonance mass). It is easy to show that the final multiplicity reads $$n[\sqrt{s}]^{\mathrm{ln2}/\mathrm{ln}c}.$$ (46) For example for $`c=4ns^{1/4}`$. 
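A toy simulation of this splitting picture (our own illustration of Eq. (46), with assumed values of $`M_0`$ and $`c`$):

```python
def cascade_multiplicity(sqrt_s, M0=1.0, c=4.0):
    """Successive-splitting cascade behind Eq. (46): a fireball of mass
    M ~ sqrt(s) splits into two of mass M/c, repeatedly, until M0 is
    reached; the number of final clusters is 2^k, k = ln(M/M0)/ln(c)."""
    M, n = float(sqrt_s), 1
    while M / c >= M0:
        M /= c
        n *= 2
    return n

# n grows like (sqrt_s)^(ln 2 / ln c); for c = 4 this is s^(1/4):
print([cascade_multiplicity(E) for E in (16.0, 64.0, 256.0)])  # [4, 8, 16]
```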
The exponent $`\gamma `$ (our notation) may be inferred from $`c=2^{1/\gamma }`$, so that higher $`\gamma `$ values imply in higher splited masses in each step (for $`\gamma =0.516c3.8`$). Based on the above review, we see that the power-law is characteristic of several analysis of experimental data on hadron-hadron and $`e^+e^{}`$ collisions and also several theoretical approaches and models. Now we shall discuss this law in the context of our approach. First let us stress that in our formalism this assumption concerns an elementary hadronic process taking place at fixed impact parameter b. Thus, it does not pretend to represent the average hadronic multiplicity $`<N(s)>`$. Also, we used $`e^+e^{}`$ data only as a possible source of information on contact interactions (fixed $`b`$) and therefore the power assumption does not pretend to represent the average multiplicity in $`e^+e^{}`$ collisions. This is a subtle point in our approach and we would like to discuss it in some detail. The main reason for the power assumption was to obtain an analytical and simple connection between the multiplicity function $`m(b,s)`$ and the eikonal, Eq. (27), which allows the general analytical connection between the elastic and inelastic channels. Since it is typical of several kind of collisions, as reviewed above, it is not unreazonable that it could represent an elementary hadronic process taking place at fixed impact parameter. Just for its elementary character (at given $`b`$), there seems also to be no reason to include any inelasticity effect (leading particle) in the basic assumption represented by Eq. (25). That is, it seems reasonable that $`<n>(bfixed,s)`$ may be just proportional only to $`E_{CM}^\gamma `$. The multiplicity function $`m(b,s)`$, as defined by Eq. (20), connects the hadronic and elementary (at given $`b`$) average multiplicities. With the power assumption and the geometrical arguments of Sec. II.C, $`m(b,s)`$ may be expressed in terms of the eikonal and the power coefficient $`\gamma `$. The subtle point in our approach is that, since by definition $`m(b,s)`$ is proportional also to the average elementary multiplicity at given $`b`$, the coefficient $`\gamma `$ was determined by fit to data available on contact interactions. In this sense, the model “imposes” the power-law and the $`e^+e^{}`$ data are supposed to provide the limited, but possible, information on contact interactions. These considerations may allow to infer a distinction between $`e^+e^{}`$ average multiplicity and what this quantity could be in an elementary hadronic process. Specifically, we showed in Sec. IV.A that data on the average multiplicity in $`e^+e^{}`$ collisions, presently available above $`5GeV`$, can not be reproduced by the power-law. For example, a second degree polynomial in $`lns`$ gives a quite good fit to all the data above $`5GeV`$: $$<n>_{e^+e^{}}(s)=0.0434+0.775\mathrm{ln}s+0.168\mathrm{ln}^2s$$ (47) with $`\chi ^2/DOF=145/45=3.2`$. However, besides this parametrization does not allow the analytical connection with the eikonal, we showed that, with the power-law, the behavior of $`<n>_{e^+e^{}}(s)`$ at energies above $`50GeV`$ does not lead to the description of the hadronic multiplicity distribution. In other words, in the context of our approach, the increase of the elementary average multiplicity with energy in hadronic collisions must be faster than that observed in $`e^+e^{}`$ collisions. 
This is not the case at lower energies, since the power-law with $`\gamma =0.516`$ gives a satisfactory description of the $`e^+e^{}`$ data. In the next section we discuss the physical interpretations associated with these observations. C. Physical picture Based on the results of Secs. II and III, we now discuss the physical picture associated with the scaling violations, specifically, with the evolution of the hadronic multiplicity distribution $`\mathrm{\Phi }(Z^{})`$ from the ISR to the collider and LHC energies, Figs. 3 and 4. From Eq. (21) the hadronic multiplicity $`\mathrm{\Phi }`$ is constructed in terms of $`G_{in}`$ and the elementary quantities $`\phi `$ and $`m`$. In our approach, $`\phi `$ scales and so does not depend on the energy. The multiplicity function $`m(b,s)`$ is connected with $`G_{in}`$ through Eqs. (10) and (27), $$m(b,s)=\xi (s)\{\mathrm{ln}[1G_{in}(b,s)]\}^\gamma ,$$ (48) where $`\xi `$ comes from the normalization condition (29). Both $`\xi (s)`$ and $`m(b,s)`$ depend on the power coefficient $`\gamma `$, which is a constant determined from the fit through Eq. (31). Therefore, the evolution of the hadronic distribution with energy comes directly from $`G_{in}(b,s)`$ and depends also on the value of the exponent $`\gamma `$. This exponent, in turn, comes from the dependence of the elementary average multiplicity on the energy, Eq. (31), and therefore is associated with the effective number of colliding constituents in the hadronic process. Based on the above observations, the physical picture that emerges is that the energy evolution of the hadronic multiplicity distribution is correctly reproduced by changing only the overlap function, without tampering with the underlying, more elementary process ($`\phi `$). The geometrical evolution of the constituents of the hadron is responsible for the energy dependence, and not the dynamical interaction itself. This is what one would expect if the underlying interaction is unique (QCD) but the relative importance of the constituents involved in collisions changes with energy (indicated by the exponent $`\gamma `$). We showed that, with the power assumption, the information from $`e^+e^{}`$ data above, say, $`100`$ GeV leads to an underestimation of the hadronic multiplicity distribution (Fig. 10). This means that the average multiplicity in an elementary hadronic process must increase with energy faster than that associated with $`e^+e^{}`$ collisions. This result seems quite reasonable since, in a QCD-guided approach, we expect different contributions from gluon/quark interactions than those associated with lepton-lepton collisions. As the average multiplicity increases, the relevance of the original parton decreases, so that at high energies $`e^+e^{}`$ can serve as a good first guide to quark-quark, quark-gluon and gluon-gluon multiplicity distributions. In a parton model (following QCD), this effect above $`100`$ GeV may be interpreted as the onset of gluon interactions . The faster increase represented by our power-law with $`\gamma =0.516`$ (Fig. 2) may be attributed to the full development of the gluonic structure, rather than the quark (valence) structure. These “microscopic” interpretations are also directly associated with the BEL behavior, since its origin may be traced either to gluon interactions in the eikonal formalism , or to the increased size of spot scattering in the overlap function formalism .
As commented before, a novel aspect of this work concerns the simultaneous treatment of both the elastic and inelastic channels. Specifically, we started from elastic channel descriptions ($`pp`$ and $`\overline{p}p`$ differential cross sections) and extended the results to the inelastic channel (multiplicity distributions). In this sense, we expect the physical picture from both channels to be the same. Besides the microscopic interpretation associated with the BEL $`G_{in}(b,s)`$, even if we consider the naive models represented by the MDM and the GM, discussed in Sec. IV.A, the same scenario emerges. In fact, in both models the elementary interaction, represented by the elementary elastic amplitude $`f(q,s)`$, does not depend on the energy, Eqs. (34) and (39). The energy dependence is associated with the form factor $`G(q,s)`$ and the “absorption constant” $`C(s)`$. The former, through the associated radius, $$R^2(s)=6\frac{dG}{dq^2}|_{q^2=0},$$ (49) describes the expansion effect (geometry). The latter is associated with absorption (blackening) in the context of the geometrical (Chou-Yang) model, and with the relevant number of constituents in the context of the multiple diffraction theory . Therefore, we also conclude that the elementary interaction is unique (does not depend on the energy), but the geometrical evolution of the constituents and their relevant number in collisions change as the energy increases. V. CONCLUSIONS AND FINAL REMARKS The underlying theory of hadronic phenomena is QCD. As commented in Secs. I and II, despite all its successes, the theory presently has limited efficiency in the treatment of soft hadronic processes, mainly in what concerns unified descriptions of physical quantities from both elastic and inelastic channels. Moreover, some QCD approaches are based on extensive Monte Carlo calculations and, concerning this point, we understand that, although these techniques represent a powerful tool for experimentalists, it is questionable whether they could really be the adequate and final scenario for a theoretical understanding of hadronic interactions, mainly if we think of connections with first principles of QCD. At this stage, it seems that phenomenology must play an important role to bridge the gap or, at least, to indicate or suggest suitable calculational schemes for further theoretical developments. On the other hand, all the phenomenological approaches presently available also have very limited intervals of validity and efficiency in the treatment of hadronic processes at high energies. One of the serious limitations of the geometrical approach is the difficulty of directly connecting its relative efficiency with the well-established microscopic ideas (QCD). However, it has not been proved that this direct connection cannot be obtained. In this work, making use of the unitarity principle and in the context of a geometrical picture, we obtained analytical connections between physical quantities from both elastic and inelastic channels. In particular we correlated quantitatively the violations of the geometrical and KNO scalings in an analytical way. The physical picture that emerges from both channels, for $`pp`$ and $`\overline{p}p`$ collisions above $`10GeV`$, is the following.
The dependence of the physical quantities on the energy (elastic differential cross section and inelastic multiplicity distributions) is associated with the geometrical evolution of the constituents and with the relative importance of the constituents involved in the collisions. The underlying elementary process or interaction does not change with the energy. This is in agreement with what could be expected from QCD. With this kind of approach the correct information extracted from the elastic channel is fundamental. Our prediction at LHC energies was based on extrapolations from analyses at lower energies and so has a limited character. This observation, and obviously other considerations regarding different models, point to the importance of complete measurements of physical quantities associated with the elastic channel at the LHC, that is, not only total cross sections but also the $`\rho `$ parameter and differential cross sections at large momentum transfer. Based on the limitations referred to in this section, we do not pretend that the forms we inferred for the hadronic constituent-constituent collisions, multiplicity distribution and average multiplicity, are a conclusive solution. However, we hope that, at least, they can bring new information on what some aspects of an elementary hadronic process could be. ACKNOWLEDGEMENTS Thanks are due to Pierre St. Hilaire. P.C.B. and M.J.M. are thankful to CNPq and FAPESP (Proc. N. 1998/2249-4) for financial support. Figure Captions Fig. 1. The KNO charged multiplicity distribution for $`e^+e^{}`$ annihilation data and the fitted gamma distribution, Eq. (30) (dashed). Fig. 2. The average charged multiplicity for $`e^+e^{}`$ annihilation data and the fitted power law, Eq. (31). Fig. 3. Scaled multiplicity distribution for inelastic $`pp`$ data at ISR energies compared to theoretical expectations using Eqs. (28-29). Fig. 4. Scaled multiplicity distribution for inelastic $`\overline{p}p`$ data at 546 GeV compared to theoretical expectations using Eqs. (28-29) (solid) and predictions at 14 TeV (dashed). Fig. 5. Inelastic overlap functions for $`\overline{p}p`$ collisions at $`546GeV`$, predicted by the multiple diffraction model (MDM), the geometrical model (GM) and the short-range-expansion, Blacker-Edgier-Larger (BEL) approach: (a) central region; (b) large distances. Fig. 6. Same as Fig. 4 with the three different inputs for the inelastic overlap function. Same legend as Fig. 5. Fig. 7. Same as Fig. 1 with the Barshay-Yamaguchi parametrization, Eq. (41). Fig. 8. Same as Fig. 4 with two different inputs for the elementary multiplicity distributions: gamma function, Eq. (30) (solid) and Barshay-Yamaguchi parametrization, Eq. (41) (dot-dashed). Fig. 9. Same as Fig. 2 with three power-law parametrizations: $`\gamma =0.516`$ (solid), $`\gamma =0.500`$ (dotted) and $`\gamma =0.396`$ (dot-dashed). In the last case only data above $`10GeV`$ were fitted. Fig. 10. Same as Fig. 4 using the three different parametrizations for the elementary average multiplicity (Fig. 9 and same legend).
no-problem/9908/hep-ex9908029.html
ar5iv
text
# 1 Introduction ## 1 Introduction CP violating phenomena arise in the Standard Model because of the single complex parameter in the quark mixing matrix. Such phenomena are expected to occur widely in $`B`$ meson decays and are the incentive for most of the current $`B`$-physics initiatives in the world. As yet there is little direct experimental evidence. CDF’s recent determination of $`\mathrm{sin}2\beta `$ at the $`2\sigma `$ level, which followed upon earlier less sensitive searches by both CDF and OPAL, is consistent with expectations for mixing-induced Standard Model CP violation and is to date the only evidence of CP effects in $`B`$ mesons. Direct CP violation, however, is also anticipated to play a prominent role in the CP phenomena of $`B`$ decay. To date the only published search for direct CP violation in $`B`$ decay is the recent CLEO limit on $`𝒜_{\mathrm{CP}}`$ in $`b\rightarrow s\gamma `$. CP asymmetries may show up in any mode where there are two or more participating diagrams which differ in weak and strong phases. $`B\rightarrow K\pi `$ modes, for instance, involve $`b\rightarrow u`$ tree diagrams carrying the weak phase $`\mathrm{Arg}\left(V_{ub}^{*}V_{us}\right)\approx \gamma `$ and $`b\rightarrow s`$ penguin diagrams carrying the weak phase $`\mathrm{Arg}\left(V_{tb}^{*}V_{ts}\right)=\pi `$. Though the branching ratios are small, such cases are experimentally straightforward to search for. Rate differences between $`B\rightarrow K^+\pi ^-`$ and $`\overline{B}\rightarrow K^-\pi ^+`$ decays would be unambiguous signals of direct CP violation if seen. We report here five searches for direct CP violation in charmless hadronic $`B`$ decay modes, based on the full CLEO II and CLEO II.V datasets which together comprise 9.66 million $`B\overline{B}`$ events. The modes searched for are the three $`K\pi `$ modes, $`K^\pm \pi ^{\mp }`$, $`K^\pm \pi ^0`$, $`K_S^0\pi ^\pm `$, the mode $`K^\pm \eta ^{\prime }`$, and the vector-pseudoscalar mode $`\omega \pi ^\pm `$. In all but the first case the flavor of the parent $`b`$ or $`\overline{b}`$ quark is tagged simply by the sign of the high momentum charged hadron; for $`K^\pm \pi ^{\mp }`$ one must further identify $`K`$ and $`\pi `$. In what follows we will refer when needed to the generic final state from $`b`$ and $`\overline{b}`$ as $`f`$ and $`\overline{f}`$ respectively. The corresponding event yields of signal and background we will label as $`𝒮`$, $`\overline{𝒮}`$, $`\mathcal{B}`$, and $`\overline{\mathcal{B}}`$. We define the sign of $`𝒜_{\mathrm{CP}}`$ with the following convention: $$𝒜_{\mathrm{CP}}\equiv \frac{Br\left(b\rightarrow f\right)-Br\left(\overline{b}\rightarrow \overline{f}\right)}{Br\left(\overline{b}\rightarrow \overline{f}\right)+Br\left(b\rightarrow f\right)}=\frac{𝒮-\overline{𝒮}}{𝒮+\overline{𝒮}}$$ (1) The statistical precision one can achieve in a measurement of $`𝒜_{\mathrm{CP}}`$ depends on the signal yield $`S\equiv 𝒮+\overline{𝒮}`$, the CP-symmetric background $`B=\mathcal{B}+\overline{\mathcal{B}}`$, any correlation $`\chi `$ that might exist between the measurements of $`f`$ and $`\overline{f}`$, and of course on $`𝒜_{\mathrm{CP}}`$ itself: $$\sigma _{𝒜_{\mathrm{CP}}}^2=\frac{1-𝒜_{\mathrm{CP}}^2}{S}\left(1+\frac{B}{S}\left(\frac{1+𝒜_{\mathrm{CP}}^2}{1-𝒜_{\mathrm{CP}}^2}\right)-2\chi \sqrt{\frac{1-𝒜_{\mathrm{CP}}^2}{4}+\frac{B^2}{S^2}+\frac{B}{S}}\right)$$ (2) In most cases there is no chance of confusing $`f`$ and $`\overline{f}`$, so $`\chi =0`$. However, for $`f=K^-\pi ^+`$ and $`\overline{f}=K^+\pi ^-`$, a small degree of crossover is possible due to imperfect particle identification; we find $`\chi =0.11`$ for this case.
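As a quick numerical check of Eq. (2) — a minimal sketch, with illustrative parameter values rather than the measured ones:

```python
import math

def sigma_acp(S, B, chi=0.0, A=0.0):
    """Statistical error on A_CP, transcribed from Eq. (2)."""
    term = (1.0 + (B / S) * (1.0 + A**2) / (1.0 - A**2)
            - 2.0 * chi * math.sqrt((1.0 - A**2) / 4.0 + (B / S)**2 + B / S))
    return math.sqrt((1.0 - A**2) / S * term)

print(sigma_acp(S=100, B=100))            # 0.141: the sqrt(2/S) rule of thumb below
print(sigma_acp(S=100, B=100, chi=0.11))  # 0.129: effect of the K/pi crossover term
```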
For $`\chi \simeq 0`$, $`B/S\simeq 1`$, and $`𝒜_{\mathrm{CP}}\simeq 0`$ one has an easy rule of thumb, $`\sigma \simeq \sqrt{2/S}`$. For $`S\simeq 100`$ this means one expects statistical precision in the neighborhood of $`\pm 0.15`$. As will be discussed later, systematic errors are small, and consequently the precision in $`𝒜_{\mathrm{CP}}`$ measurements can be expected to be dominated by statistical errors for a long time to come. As can be seen in Eq. 2, the statistical error is in turn dominated by the leading $`1/\sqrt{S}`$ coefficient, which reminds us that the only path to better $`𝒜_{\mathrm{CP}}`$ measurements will be more data. Improvements to analysis technique that reduce $`B/S`$ or $`\chi `$ have less impact. ## 2 Theoretical Expectations The existence of a CP violating rate asymmetry depends on having both two different CP nonconserving weak phases and two different CP conserving strong phases. The former may arise from either the Standard Model CKM matrix or from new physics, while the latter may arise from the absorptive part of a penguin diagram or from final state interaction effects. The difficulty of calculating strong interaction phases, particularly when long distance non-perturbative effects are involved, largely precludes reliable predictions. Under well-defined model assumptions, however, numerical estimates may be made and the dependence on both model parameters and CKM parameters can be probed. A recent and comprehensive review of CP asymmetries under the assumption of generalized factorization has been published by Ali et al. We quote in Table 1 their predictions for the modes examined in this paper. If final state interactions are not neglected, however, strong phases as large as $`90^{\circ }`$ are not ruled out and $`\left|𝒜_{\mathrm{CP}}\right|`$ could reach 0.44 in favorable cases. In this nonperturbative regime numerical predictivity is limited, but a variety of relationships among asymmetries or $`f,\overline{f}`$ rate differences can be found in the literature. ## 3 Data Set, Detector, Event Selection The data set used in this analysis was collected with the CLEO II and CLEO II.V detectors at the Cornell Electron Storage Ring (CESR). It consists of $`9.1\mathrm{fb}^{-1}`$ taken at the $`\mathrm{\Upsilon }`$(4S) (on-resonance) and $`4.5\mathrm{fb}^{-1}`$ taken below $`B\overline{B}`$ threshold. The below-threshold sample is used for continuum background studies. The on-resonance sample contains 9.66 million $`B\overline{B}`$ pairs. This is a factor 2.9 increase in the number of $`B\overline{B}`$ pairs over the published measurements of the modes considered here. In addition, the CLEO II.V data set, which has significantly improved particle identification and momentum resolution as compared with CLEO II, now dominates the data set. CLEO II and CLEO II.V are general purpose solenoidal magnet detectors, described in detail elsewhere. In CLEO II, the momenta of charged particles are measured in a tracking system consisting of a 6-layer straw tube chamber, a 10-layer precision drift chamber, and a 51-layer main drift chamber, all operating inside a 1.5 T superconducting solenoid. The main drift chamber also provides a measurement of the specific ionization loss, $`dE/dx`$, used for particle identification. For CLEO II.V the 6-layer straw tube chamber was replaced by a 3-layer, double-sided silicon vertex detector, and the gas in the main drift chamber was changed from an argon-ethane to a helium-propane mixture. Photons are detected using a 7800-crystal CsI(Tl) electromagnetic calorimeter.
Muons are identified using proportional counters placed at various depths in the steel return yoke of the magnet. Charged tracks are required to pass track quality cuts based on the average hit residual and the impact parameters in both the $`r\varphi `$ and $`rz`$ planes. Candidate $`K_S^0`$ mesons are selected from pairs of tracks forming well-measured displaced vertices. Furthermore, we require the $`K_S^0`$ momentum vector to point back to the beam spot and the $`\pi ^+\pi ^-`$ invariant mass to be within $`10`$ MeV, two standard deviations ($`\sigma `$), of the $`K_S^0`$ mass. Isolated showers with energies greater than $`40`$ MeV in the central region of the CsI calorimeter, and greater than $`50`$ MeV elsewhere, are defined to be photons. Pairs of photons with an invariant mass within 2.5$`\sigma `$ of the nominal $`\pi ^0`$ ($`\eta `$) mass are kinematically fitted with the mass constrained to the nominal $`\pi ^0`$ ($`\eta `$) mass. To reduce combinatoric backgrounds we require the lateral shapes of the showers to be consistent with those from photons. To further suppress low energy showers from charged particle interactions in the calorimeter we apply a shower-energy-dependent isolation cut. Charged particles are identified as kaons or pions using $`dE/dx`$. Electrons are rejected based on $`dE/dx`$ and the ratio of the track momentum to the associated shower energy in the CsI calorimeter. We reject muons by requiring that the tracks do not penetrate the steel absorber to a depth greater than seven nuclear interaction lengths. We have studied the $`dE/dx`$ separation between kaons and pions for momenta $`p\simeq 2.6`$ GeV$`/c`$ in data using $`D^0\rightarrow K^-\pi ^+\left(\pi ^0\right)`$ decays; we find a separation of $`\left(1.7\pm 0.1\right)\sigma `$ for CLEO II and $`\left(2.0\pm 0.1\right)\sigma `$ for CLEO II.V. Resonances are reconstructed through the decay channels: $`\eta ^{\prime }\rightarrow \eta \pi ^+\pi ^-`$ with $`\eta \rightarrow \gamma \gamma `$; $`\eta ^{\prime }\rightarrow \rho \gamma `$ with $`\rho \rightarrow \pi ^+\pi ^-`$; and $`\omega \rightarrow \pi ^+\pi ^-\pi ^0`$. ## 4 Analysis The $`𝒜_{\mathrm{CP}}`$ analyses presented are intimately related to the corresponding branching ratio determinations presented in separate contributions to this conference. We summarize here the main points of the analysis. We select hadronic events and impose efficient quality cuts on tracks, photons, $`\pi ^0`$ candidates, and $`K_S^0`$ candidates. We calculate a beam-constrained $`B`$ mass $`M=\sqrt{E_\mathrm{b}^2-p_B^2}`$, where $`p_B`$ is the $`B`$ candidate momentum and $`E_\mathrm{b}`$ is the beam energy. The resolution in $`M`$ ranges from 2.5 to 3.0 $`\mathrm{MeV}/c^2`$, where the larger resolution corresponds to the $`B^\pm \rightarrow h^\pm \pi ^0`$ decay. We define $`\mathrm{\Delta }E=E_1+E_2-E_\mathrm{b}`$, where $`E_1`$ and $`E_2`$ are the energies of the daughters of the $`B`$ meson candidate. The resolution on $`\mathrm{\Delta }E`$ is mode-dependent. For final states without photons the $`\mathrm{\Delta }E`$ resolution for CLEO II.V(II) is $`20\left(26\right)`$ MeV. Most other modes are only slightly worse, but for the $`B^\pm \rightarrow h^\pm \pi ^0`$ analysis the $`\mathrm{\Delta }E`$ resolution is worse by about a factor of two and becomes asymmetric because of energy loss out of the back of the CsI crystals. The energy constraint also helps to distinguish between modes of the same topology.
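As a small worked illustration of these kinematic variables (a sketch with representative numbers only — the beam energy and momentum below are assumptions, not fitted values):

```python
import math

p = 2.6                                # GeV/c: typical daughter momentum at the Y(4S)
m_pi, m_K = 0.1396, 0.4937             # GeV/c^2
E = lambda m: math.sqrt(p**2 + m**2)   # daughter energy under mass hypothesis m

# Delta E for a true K+ pi- event evaluated under the pi+ pi- hypothesis:
# assigning the pion mass to the kaon shifts Delta E by E(m_pi) - E(m_K).
print(1e3 * (E(m_pi) - E(m_K)))        # ~ -43 MeV, cf. the -42 MeV shift quoted next
```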
For example, $`\mathrm{\Delta }E`$ for $`B\rightarrow K^+\pi ^-`$, calculated assuming $`B\rightarrow \pi ^+\pi ^-`$, has a distribution that is centered at $`-42`$ MeV, giving a separation of $`2.1\left(1.6\right)\sigma `$ between $`B\rightarrow K^+\pi ^-`$ and $`B\rightarrow \pi ^+\pi ^-`$ for CLEO II.V(II). We accept events with $`M`$ within $`5.2-5.3`$ $`\mathrm{GeV}/\mathrm{c}^2`$ and $`\left|\mathrm{\Delta }E\right|<200`$ MeV. The $`\mathrm{\Delta }E`$ requirement is loosened to 300 MeV for the $`B^\pm \rightarrow h^\pm \pi ^0`$ analysis. This fiducial region includes the signal region and a sideband for background determination. Similar regions are included around each of the resonance masses ($`\eta ^{\prime }`$, $`\eta `$, and $`\omega `$) in the likelihood fit. For the $`\eta ^{\prime }\rightarrow \rho \gamma `$ case, the $`\rho `$ mass is not included in the fit; we require $`0.5\mathrm{GeV}<m_{\pi \pi }<0.9\mathrm{GeV}`$. We have studied backgrounds from $`b\rightarrow c`$ decays and other $`b\rightarrow u`$ and $`b\rightarrow s`$ decays and find that all are negligible for the analyses presented here. The main background arises from $`e^+e^-\rightarrow q\overline{q}`$ (where $`q=u,d,s,c`$). Such events typically exhibit a two-jet structure and can produce high momentum back-to-back tracks in the fiducial region. To reduce contamination from these events, we calculate the angle $`\theta _{\mathrm{sph}}`$ between the sphericity axis of the candidate tracks and showers and the sphericity axis of the rest of the event. The distribution of $`\mathrm{cos}\theta _{\mathrm{sph}}`$ is strongly peaked at $`\pm 1`$ for $`q\overline{q}`$ events and is nearly flat for $`B\overline{B}`$ events. We require $`\left|\mathrm{cos}\theta _{\mathrm{sph}}\right|<0.8`$, which eliminates $`83\%`$ of the background. For $`\eta ^{\prime }`$ and $`\omega `$ modes the cut is made at 0.9. Additional discrimination between signal and $`q\overline{q}`$ background is provided by a Fisher discriminant technique as described in detail in Ref. . The Fisher discriminant is a linear combination $`\sum _{i=1}^{N}\alpha _iy_i`$ where the coefficients $`\alpha _i`$ are chosen to maximize the separation between the signal and background Monte-Carlo samples. The 11 inputs, $`y_i`$, are $`\left|\mathrm{cos}\theta _{cand}\right|`$ (the cosine of the angle between the candidate sphericity axis and beam axis), the ratio of Fox-Wolfram moments $`H_2/H_0`$, and nine variables that measure the scalar sum of the momenta of tracks and showers from the rest of the event in nine angular bins, each of $`10^{\circ }`$, centered about the candidate’s sphericity axis. For the $`\eta ^{\prime }`$ and $`\omega `$ modes, $`\left|\mathrm{cos}\theta _B\right|`$ (the angle between the $`B`$ meson momentum and beam axis) is used instead of $`H_2/H_0`$. Using a detailed GEANT-based Monte-Carlo simulation we determine overall detection efficiencies of $`15-46\%`$, as listed in Table 2. Efficiencies contain secondary branching fractions for $`K^0\rightarrow K_S^0\rightarrow \pi ^+\pi ^-`$ and $`\pi ^0\rightarrow \gamma \gamma `$ as well as $`\eta ^{\prime }`$ and $`\omega `$ decay modes where applicable. We estimate a systematic error on the efficiency using independent data samples. In Table 2 we summarize, for each mode, cuts, efficiencies, and the total number of events which pass the cuts and enter the likelihood fit described in the next paragraph.
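For illustration, here is a minimal sketch of how Fisher coefficients of this kind can be obtained from signal and background Monte Carlo samples; this is the textbook Fisher construction under our own naming, not CLEO's actual implementation:

```python
import numpy as np

def fisher_coefficients(sig, bkg):
    """Coefficients alpha_i of F = sum_i alpha_i * y_i that maximize the
    separation between two samples (rows = events, columns = inputs y_i)."""
    mu_s, mu_b = sig.mean(axis=0), bkg.mean(axis=0)
    W = np.cov(sig, rowvar=False) + np.cov(bkg, rowvar=False)  # within-class scatter
    return np.linalg.solve(W, mu_s - mu_b)

# The discriminant is then evaluated per event, e.g. F_sig = sig @ alpha,
# and its distribution serves as one input PDF of the likelihood fit below.
```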
To extract signal and background yields we perform unbinned maximum-likelihood (ML) fits using $`\mathrm{\Delta }E`$, $`M`$, $`\mathcal{F}`$, $`\left|\mathrm{cos}\theta _B\right|`$ (if not used in $`\mathcal{F}`$), and $`dE/dx`$ (where applicable), daughter resonance mass (where applicable), and helicity angle in the daughter decay (where applicable). The free parameters to be fitted are the asymmetry ($`\left(f-\overline{f}\right)/\left(f+\overline{f}\right)`$) and the sum ($`f+\overline{f}`$) in both signal and background. In most cases there is more than one possible signal component and its corresponding background component; for instance, we fit simultaneously for $`K^+\pi ^0`$ and $`\pi ^+\pi ^0`$ to ensure proper handling of the $`K\pi `$ identification information. The probability distribution functions ($`PDF`$s) describing the distribution of events in each variable are parametrized by simple forms (Gaussians, polynomials, etc.) whose parameter values are determined in separate studies. For signal $`PDF`$ shapes the parameter determinations are made by fitting signal Monte Carlo events. Backgrounds in these analyses are dominated by continuum $`e^+e^-\rightarrow q\overline{q}`$ events, and we determine parameters of the background $`PDF`$s by fitting data taken below the $`\mathrm{\Upsilon }\left(4S\right)`$ resonance or data taken on resonance but lying in the sidebands of the signal region. The uncertainties associated with such fits are used later to assess the final systematic error. ## 5 Results ### ($`K\pi ^0`$) In the mode $`B^\pm \rightarrow K^\pm \pi ^0`$ we find a total of $`45.6_{-10.2}^{+11.2}`$ events with an asymmetry of $`𝒜_{\mathrm{CP}}\left(K\pi ^0\right)=-0.27\pm 0.23`$. This corresponds to $`28.9\pm 7.5`$ $`K^+\pi ^0`$ and $`16.8\pm 7.5`$ $`K^-\pi ^0`$ events. (Here and elsewhere we will quote the yields $`𝒮`$ and $`\overline{𝒮}`$ for the convenience of the reader. These are derivative quantities, as the fit directly extracts $`𝒜_{\mathrm{CP}}`$ and $`𝒮+\overline{𝒮}`$.) We note that $`\pi \pi ^0`$, which is analyzed simultaneously but does not have a sufficiently significant yield to measure a branching fraction, shows an asymmetry of $`𝒜_{\mathrm{CP}}\left(\pi \pi ^0\right)=0.03\pm 0.40`$. Fig. 1 shows the likelihood function dependence on $`𝒜_{\mathrm{CP}}\left(K\pi ^0\right)`$. As a cross check we measure the asymmetry of the background events, finding $`0.023\pm 0.026`$ for $`K\pi ^0`$ background and $`0.000\pm 0.017`$ for $`\pi \pi ^0`$ background. These values are consistent with the expected null result for continuum background. ### ($`K_s\pi `$) In the mode $`B^\pm \rightarrow K_s\pi ^\pm `$ we find a total of $`25.2_{-5.6}^{+6.4}`$ events with an asymmetry of $`𝒜_{\mathrm{CP}}\left(K_s\pi \right)=0.17\pm 0.24`$. This corresponds to $`10.2\pm 4.0`$ $`K_s\pi ^+`$ and $`14.5\pm 4.4`$ $`K_s\pi ^-`$ events. The background events show an asymmetry of $`0.02\pm 0.04`$, consistent with zero as expected. ### ($`K\pi `$) In the mode $`B\rightarrow K^\pm \pi ^{\mp }`$ we find a total of $`80.2_{-11.0}^{+11.8}`$ events with an asymmetry of $`𝒜_{\mathrm{CP}}\left(K\pi \right)=-0.04\pm 0.16`$. This corresponds to $`41.6_{-8.0}^{+8.9}`$ $`K^+\pi ^-`$ and $`38.6_{-8.1}^{+9.0}`$ $`K^-\pi ^+`$ events. The dependence of the likelihood function on $`𝒜_{\mathrm{CP}}`$ is shown in Fig. 2. The background events show an asymmetry of $`0.02\pm 0.04`$, consistent with zero as expected. ### ($`\omega \pi `$) In the mode $`B^\pm \rightarrow \omega \pi ^\pm `$ we find a total of $`28.5_{-7.3}^{+8.2}`$ events with $`𝒜_{\mathrm{CP}}\left(\omega \pi \right)=-0.34_{-0.26}^{+0.24}`$.
This corresponds to $`19.1_{-5.9}^{+6.8}`$ $`\omega \pi ^+`$ events and $`9.4_{-4.0}^{+4.9}`$ $`\omega \pi ^-`$ events. Figure 3 shows the likelihood as a function of $`𝒜_{\mathrm{CP}}`$. The fit also allows for a possible charge asymmetry in the continuum background, and finds values consistent with zero: $`0.013\pm 0.015`$ for $`\omega K^\pm `$ background, and $`0.001\pm 0.010`$ for $`\omega \pi ^\pm `$ background. ### ($`\eta ^{\prime }K`$) We find $`𝒜_{\mathrm{CP}}\left(\eta ^{\prime }K\right)=0.03\pm 0.12`$. Separating the $`\eta ^{\prime }K`$ signal sample by submodes $`\eta ^{\prime }\rightarrow \eta \pi ^+\pi ^-`$ and $`\eta ^{\prime }\rightarrow \rho \gamma `$ we find $`𝒜_{\mathrm{CP}}=0.06\pm 0.17`$ and $`𝒜_{\mathrm{CP}}=0.01\pm 0.17`$, respectively. Fig. 4 shows the dependence of the fitted likelihood function on $`𝒜_{\mathrm{CP}}`$ for the separate and combined submodes. Background $`\eta ^{\prime }K`$ events are found to have an asymmetry of $`0.01\pm 0.07`$ in the $`\eta \pi ^+\pi ^-`$ mode, and $`0.009\pm 0.015`$ in the $`\rho \gamma `$ mode, both consistent with zero as expected. ## 6 Systematic Errors The charge asymmetries measured in this analysis hinge primarily on the properties of high momentum tracks. The charged meson that tags the parent $`b/\overline{b}`$ flavor has momentum in all cases between 2.3 and 2.8 GeV/c. In independent studies using very large samples of high momentum tracks we have searched for and set stringent limits on the extent of possible charge-correlated bias in the CLEO detector and analysis chain for tracks in the $`2-3`$ GeV range. Based on a sample of 8 million tracks, we find that the $`𝒜_{\mathrm{CP}}`$ bias introduced by differences in reconstruction efficiencies for positive and negative high momentum tracks passing the same track quality requirements as are imposed in this analysis is less than $`\pm 0.002`$. For $`K^\pm \pi ^{\mp }`$ combinations, where differential charge-correlated efficiencies must also be considered in correlation with $`K/\pi `$ flavor, we use 37,000 $`D^0\rightarrow K\pi \left(\pi ^0\right)`$ decays and set a corresponding limit on $`𝒜_{\mathrm{CP}}`$ bias at $`\pm 0.005`$. These $`D^0`$ decays, together with an additional 24,000 $`D_{\left(s\right)}^\pm `$ decays, are also used to set a tight upper limit of 0.4 MeV/c on any charge-correlated or charge-strangeness-correlated bias in momentum measurement. The resulting limit on $`𝒜_{\mathrm{CP}}`$ bias from this source is $`\pm 0.002`$. We conclude that there is no significant $`𝒜_{\mathrm{CP}}`$ bias introduced by track reconstruction or selection. We note that for each mode we crosscheck the asymmetry of the background events (normally a fairly large sample) and find results consistent with zero, as anticipated. Particle identification information for $`K^\pm `$ and $`\pi ^\pm `$ is shown in Fig. 5. No significant differences are seen between different charge species. Quantification of the effect on $`𝒜_{\mathrm{CP}}`$ is covered by the $`PDF`$ variation studies discussed immediately below. All $`PDF`$ shapes which are used in the maximum likelihood fits are varied within limits prescribed by the fits which determine the shape parameters, to assess the systematic error associated with uncertainty in the parameters. The resulting changes in $`𝒜_{\mathrm{CP}}`$ are summed in quadrature to estimate the systematic error due to possible misparametrization of the $`PDF`$ shapes. This contribution to the systematic error ranges from $`\pm 0.02`$ to $`\pm 0.04`$ depending on mode. We choose to assign a conservative systematic error of $`\pm 0.05`$ for all modes.
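Before summarizing, and to make the fitting procedure of Sec. 4 concrete, here is a toy version of an unbinned extended likelihood fit in a single variable with a charge tag; all shapes, yields, and the asymmetry are invented for illustration and are not the actual CLEO PDFs or results:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)

# Toy events: a "beam-constrained mass" x in [5.2, 5.3] GeV and a tag q = +-1.
x_sig = rng.normal(5.2794, 0.003, 40)             # 40 signal events
q_sig = rng.choice([+1, -1], 40, p=[0.4, 0.6])    # P(q) = (1 + q*A)/2 with A = -0.2
x_bkg = rng.uniform(5.2, 5.3, 400)                # 400 flat background events
q_bkg = rng.choice([+1, -1], 400)
x = np.concatenate([x_sig, x_bkg]); q = np.concatenate([q_sig, q_bkg])

def nll(p):
    S, A_s, B, A_b = p
    if S < 0 or B < 0 or abs(A_s) >= 1 or abs(A_b) >= 1:
        return np.inf                             # keep the simplex in a valid region
    f_s = norm.pdf(x, 5.2794, 0.003)              # signal PDF in x
    f_b = 1.0 / 0.1                               # flat PDF over the 0.1 GeV window
    dens = 0.5 * S * (1 + q * A_s) * f_s + 0.5 * B * (1 + q * A_b) * f_b
    return (S + B) - np.sum(np.log(dens))         # extended unbinned likelihood

best = minimize(nll, [50, 0.0, 380, 0.0], method="Nelder-Mead")
print(best.x)  # fitted (signal yield, signal asymmetry, bkg yield, bkg asymmetry)
```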
## 7 Summary Table 3 and Fig. 6 summarize all results. We have presented in this paper the first measurements of charge asymmetries in hadronic $`B`$ decay. We see no evidence for CP violation in the five modes analyzed here and set 90% CL intervals (systematics included) that reduce the possible range of $`𝒜_{\mathrm{CP}}`$ by as much as a factor of four. While the sensitivity is not yet sufficient to address the rather small $`𝒜_{\mathrm{CP}}`$ values predicted by factorization models, extremely large $`𝒜_{\mathrm{CP}}`$ values, which could arise if large phases were available from final state interactions, are firmly ruled out. For the cases of $`K\pi `$ and $`\eta ^{\prime }K`$ we can exclude at 90% confidence values of $`\left|𝒜_{\mathrm{CP}}\right|`$ greater than 0.35 and 0.28, respectively. The search for CP violating asymmetries will intensify with the new datasets expected in the coming years. The precision of such searches will be statistics limited and should improve with integrated luminosity $`\mathcal{L}`$ as $`1/\sqrt{\mathcal{L}}`$. If modes with large asymmetries and reasonable branching ratios exist, they could be found within a few years.
no-problem/9908/astro-ph9908194.html
ar5iv
text
# Spiral Galaxies with HST - Near-Infrared Properties of Bulges ## 1. Introduction The nuclei of galaxies harbor clues to the processes involved in the formation of the host galaxies. Although extensive work has been performed using HST imaging of nearby elliptical galaxies (e.g., Lauer et al. 1995; Carollo et al. 1997a, b; Faber et al. 1997), there is a lack of work concerning the formation of disk galaxies. There are a few recent studies that have started to address this. Peletier & Balcells (1999) published results obtained using multi-color observations with NICMOS and WFPC2 data of 20 nearby early-type galactic bulges. They found that the nuclei were typically dusty, with a small age spread among bulges of early-type spiral galaxies (about 2-3 Gyr). There has also been an optical study by Carollo et al. (1997c, 1998) and Carollo & Stiavelli (1998). Here we present NICMOS data of the galaxies that they observed. ## 2. Analytical fits to the nuclear surface brightness profiles The nuclear surface brightness profiles of early-type galaxies are described by Lauer et al. (1995). Their description was used by Carollo et al. (1997b, 1998) to describe nuclear profiles of spiral galaxies imaged with WFPC2, and we now use it here to describe the profiles of 72 galaxies imaged with NICMOS on board HST in the H-band. An uncertainty exists in removing the nuclear compact source contribution from the galaxy light profile, as the form of the light profile of the compact source is not well defined. This uncertainty was quantified by performing several different light profile fits to the same galaxy with a variable inner radial cutoff. Of the 72 galaxies, 55 were fitted. From these fits the average logarithmic nuclear slope $`\gamma `$ has been derived. Figure 1 shows that galaxies fitted with $`R^{1/4}`$ law profiles are generally found in the top right hand part of the ($`\gamma `$, $`M_H`$) diagram, with exponential law fits being in the bottom left hand area, coincident with the area where elliptical galaxies are usually found (Lauer et al. 1995; Carollo et al. 1997a; Faber et al. 1997; Carollo & Stiavelli 1998). A clear dichotomy is also shown, with early-type spirals generally having steeper cusps than late-type spirals, as well as a trend for galaxies with $`R^{1/4}`$ law fits to have steeper cusps than those with exponential law light-profile fits. Both of these results are in agreement with the main results of the optical study performed by Carollo & Stiavelli (1998). ## 3. Conclusions Our main result is that $`R^{1/4}`$-law bulges and exponential bulges have significantly different nuclear stellar cusps. Specifically, $`R^{1/4}`$-law bulges have steep stellar cusps which steepen with decreasing luminosity. Their stellar cusp slopes are also comparable to those of elliptical galaxies of similar luminosities. By contrast, in exponential bulges the inward extrapolation performed implies shallow cusp slopes. This is similar to the results of optical studies (Carollo & Stiavelli 1998). ## References Carollo C.M., Stiavelli M., 1998, AJ, 115, 2306 Carollo C.M., Danziger I.J., Rich R.M., Chen X., 1997a, ApJ, 491, 545 Carollo C.M., Stiavelli M., de Zeeuw P.T., Mack J., 1997b, AJ, 114, 2366 Carollo C.M., Stiavelli M., Mack J., 1998, AJ, 116, 68 Faber S.M., et al., 1997, AJ, 114, 1771 Lauer T.R., et al., 1995, AJ, 110, 2622 Peletier R.F., Balcells M., 1999, MNRAS, in press
no-problem/9908/astro-ph9908259.html
ar5iv
text
# NEW MID-INFRARED DIAGNOSTIC OF THE DUSTY TORUS MODEL FOR SEYFERT NUCLEI ## 1. INTRODUCTION Dusty tori around active galactic nuclei (AGNs) play an important role in the classification of Seyfert galaxies (Antonucci & Miller 1985; see also Antonucci 1993 for a review). Seyfert galaxies observed from a face-on view of the torus are recognized as type 1 Seyferts (S1s) while those observed from an edge-on view are recognized as type 2 Seyferts (S2s). In this way, the dusty tori act as material anisotropically obscuring the emission from their interior region. Dusty tori themselves are also important emitting sources in AGNs. Dust grains within the torus absorb high-energy photons from the central engine, and re-emit them in the mid-infrared (MIR) regime. Therefore, infrared radiation from the dusty torus emission is useful in studying the physical properties of the tori in AGNs (e.g., Dopita et al. 1998 and references therein). Since the tori are quite optically thick, the MIR spectrum is predicted to have a strong dependence on the viewing angle [Efstathiou & Rowan-Robinson 1990; Pier & Krolik 1992, 1993 (hereafter PK92 and PK93, respectively); Granato & Danese 1994; Granato, Danese, & Franceschini 1996, 1997]. When the torus is observed from a face-on view, its hot inner surface is seen and the emission at $`\lambda \lesssim 10`$ µm is enhanced. When observed from an edge-on view, the emission at $`\lambda \lesssim 10`$ µm is obscured and thus weakened. Heckman (1995) observed that the averaged ratio of $`N`$-band (10 µm) flux to nonthermal radio flux is higher in S1s than in S2s (see also Giuricin, Mardirossian, & Mezzerre 1995). Heckman, Chambers, & Postman (1992) observed a similar enhancement in radio-loud quasars (i.e., type 1) with respect to radio galaxies (i.e., type 2). PK93 observed that flux ratios of $`L`$ band (3.5 µm) to $`N`$ band in S1s are higher than those in S2s. Fadda et al. (1998) observed that the MIR spectrum is steeper (i.e., redder) in S2s than in S1s. However, further details of the MIR emission from dusty tori are unknown. This paper proposes the flux ratio of the $`L`$ band to the IRAS 25 µm band as a new MIR diagnostic for the dusty torus model (§2). We compile the observational data from the literature (§3), compare the above ratios of S1s with those of S2s (§4), and discuss properties of the tori (§5). ## 2. NEW MIR DIAGNOSTIC As stated above, the torus emission is expected to be more anisotropic at $`\lambda \lesssim 10`$ µm than at $`\lambda \gtrsim 20`$ µm, because the visibility of the inner wall of the torus is highly viewing angle dependent. Therefore, it is of interest to compare S1s with S2s in a flux ratio between $`\lambda \sim 10`$ µm and $`\lambda \sim 20`$ µm. Since the IRAS photometric data are available for most of the nearby Seyfert galaxies (Moshir et al. 1992), we adopt the flux ratio between the $`L`$ band and the IRAS 25 µm band, $$R(L,25)=\mathrm{log}[(\nu _{3.5\mu \mathrm{m}}S_{\nu _{3.5\mu \mathrm{m}}})/(\nu _{25\mu \mathrm{m}}S_{\nu _{25\mu \mathrm{m}}})].$$ The basic concept of our new MIR diagnostic is schematically shown in Figure 1. Since the viewing angle dependence is more significant at 3.5 µm, S2s are expected to have lower values of $`R(L,25)`$ than S1s. Here we note that PK93 used the flux ratio between the $`L`$ and $`N`$ bands, $`R(L,N)`$, to compare S1s with S2s. However, the $`N`$-band flux is affected by a silicate line at 9.7 µm.
When the torus is observed from an edge-on view, the silicate line is seen as an absorption line, and thus both the fluxes in the $`L`$ and $`N`$ bands are weakened. The difference between S1s and S2s in $`R(L,N)`$ is thereby expected not to be as prominent as that in $`R(L,25)`$. ## 3. DATA SAMPLE To perform a statistical analysis with the MIR diagnostic defined in the previous section, we have compiled photometric data in the $`L`$, $`N`$, and IRAS 25 µm bands from the literature (e.g., Ward et al. 1987; Roche et al. 1991; Moshir et al. 1992; PK93). Since radiation from AGNs is anisotropic in most of the energy bands, it is difficult to construct a statistically complete sample. We instead adopt three samples chosen by different selection criteria. The first sample consists of the CfA Seyfert galaxies (Huchra & Burg 1992), which provide a well defined collection of objects limited by the $`B`$ magnitude of their host galaxies. (A preliminary analysis based on the CfA Seyfert galaxies was reported in Murayama, Mouri, & Taniguchi 1997.) The second sample is the one limited by the hard X-ray flux from 2 to 10 keV (Ward et al. 1987). Since hard X-rays arise from the central engine itself and are not affected seriously by dust grains, this sample is expected to be fair, at least for S1s. The third sample is taken from Roche et al. (1991). This sample is not complete but is composed of $`N`$-band bright objects. For each object in this sample, Roche et al. (1991) observed an emission feature at 11.3 µm, which allows us to examine the presence or absence of any circumnuclear star formation activity. Our CfA, Ward, and Roche samples contain 18 S1s and 6 S2s, 20 S1s and 4 S2s, and 11 S1s and 11 S2s, respectively. Some objects are included in more than one sample. In total, there are 31 S1s and 14 S2s. Their basic data are summarized in Table 1. Besides the dusty torus, several sources in Seyfert galaxies contribute to the observed MIR fluxes (see below). We exclude galaxies where the contamination from such sources appears to be significant. These galaxies are indicated in Table 1. The resultant final samples consist of 27 S1s and 5 S2s. The IRAS 25 µm measurements were made with an aperture which is large enough to cover the entire galaxy ($`0.75\times 4.6`$ arcmin; Neugebauer et al. 1984). There could be contamination from the disk of the host galaxy. To find galaxies where the disk emission dominates over the torus emission, we use the compactness parameter $`CP`$ at 10 µm (Devereux 1987), $`CP=f_{\mathrm{cc}}\times S_{\nu _N}/S_{\nu _{12\mu \mathrm{m}}}.`$ Here $`S_{\nu _N}`$ is the $`N`$-band flux, $`S_{\nu _{12\mu \mathrm{m}}}`$ is the IRAS 12 µm flux, and $`f_{\mathrm{cc}}`$ is the color correction factor, $`f_{\mathrm{cc}}=0.12S_{\nu _{12\mu \mathrm{m}}}/S_{\nu _{25\mu \mathrm{m}}}+1.04.`$ This compactness parameter gives an estimate of the ratio of the small-beam flux to the entire flux at 10 µm. The $`CP`$ value of each galaxy is given in Table 1. Some objects exhibit $`CP>1`$. This is due to uncertainties in the measurement or time variation of the nuclear flux. In such cases, we give $`CP=1`$. The MIR fluxes of galaxies with $`CP<0.5`$ are likely to be dominated by the disk emission. These galaxies are not used in our following analysis. On the other hand, the $`L`$- and $`N`$-band data given in Table 1 were obtained with small apertures ($`\varphi \simeq `$ 5″–10″).
In these data, the contamination from the host galaxy is unlikely to be important. From $`K`$-band images of Seyfert galaxies, Kotilainen et al. (1992) estimated the average light contribution from the host galaxy as 32%. Zitelli et al. (1993) found that $`L`$-band images of Seyfert galaxies are more centrally concentrated than the $`K`$-band ones. Hence the contribution from the host galaxy to the $`L`$-band flux is less than $`\sim 30`$%. Seyfert galaxies often exhibit circumnuclear starburst activity, which could affect the MIR emission (see Keto et al. 1992 for the case of NGC 7469). Such objects are excluded from our analysis. As a signature of the starburst activity, we use emission features in the 8–13 µm regime (Roche et al. 1991). They are due to transient heating of polycyclic aromatic hydrocarbon molecules (PAHs) by UV photons from OB stars. Since PAHs are destroyed by X-rays, PAH features are absent in genuine AGNs (see Voit 1992 and references therein). We also exclude narrow-line X-ray galaxies (NLXGs), i.e., S2s with strong hard X-ray emission (Shuder 1980; Véron et al. 1980; Ward et al. 1987). The central engine of these galaxies is believed to be hidden not by a dusty torus but by the disk of the host galaxy. Most NLXGs are actually edge-on galaxies (see Ulvestad & Wilson 1984 and Keel 1980 for the cases of NGC 2992 and NGC 5506). Furthermore, Glass et al. (1981) reported that the $`R(L,N)`$ values of NLXGs are similar to those of S1s rather than to those of S2s. ## 4. RESULTS Figure 2 shows frequency distributions of $`R(L,25)`$ of S1s and S2s separately for the CfA sample ($`a`$), the Ward sample ($`b`$), the Roche sample ($`c`$), and the total sample ($`d`$). The galaxies excluded in the previous section are shown by white bars. Only the galaxies shown by black bars are used in the following analysis. All of the S1s have $`R(L,25)>-0.6`$ while most of the S2s have $`R(L,25)<-0.6`$. The S2 which lies exceptionally at $`R(L,25)>-0.6`$ is Mrk 348. This galaxy exhibits no silicate absorption feature at 9.7 µm (Roche et al. 1991). Since the silicate absorption is a common property of S2s, Mrk 348 is considered to be in a face-on view like usual S1s. The absence of the broad-line region in this galaxy could result from, e.g., obscuration of the central region by a small cloud. There is no significant difference between the distributions of S1s and S2s among the three samples in Figures 2$`a`$–$`c`$; thus our samples are probably free of large orientation bias. If we apply the Kolmogorov-Smirnov (KS) test, the probability that the observed distributions of S1s and S2s originate in the same underlying population turns out to be 0.275%. When the galaxies shown by white bars are included, the distribution of S2s is different among the three samples. This difference is likely to come from the different sampling criteria. Figure 3 compares the $`R(L,25)`$ ratio with the nuclear absolute $`B`$ magnitude [$`M_B`$(nucleus)] for S1s. The $`M_B`$(nucleus) values are taken from Kotilainen, Ward, & Williger (1993) and Granato et al. (1993), and are used as a measure of the luminosity of the central engine. If the observed $`R(L,25)`$ ratio depended on the intrinsic luminosity of the central engine rather than on the Seyfert type, there would be a certain relationship between $`R(L,25)`$ and $`M_B`$(nucleus). Since no clear correlation is seen in Figure 3, we conclude that the difference in the intrinsic nuclear luminosity does not affect the observed value of $`R(L,25)`$ in S1s.
This conclusion is applicable to S2s because S1s and S2s are likely to have the same torus properties. Finally, we show the frequency distributions of $`R(L,N)`$ in Figure 4. The KS probability that the underlying populations of S1s and S2s are the same is 0.0404%. Although this value is smaller than that for the $`R(L,25)`$ ratio, the separation between S1s and S2s in $`R(L,N)`$ (Figure 4) is less clear than that in $`R(L,25)`$ (Figure 2). This is because the $`N`$-band emission is affected by the silicate feature at 9.7 µm. ## 5. DISCUSSION The $`R(L,25)`$ values for the S1s are clearly separated from those for the S2s at the critical value of $`-0.6`$. This limits the extent to which the dusty torus can vary among Seyfert galaxies; such variations would add “noise” and cause overlap between the two types. Hereafter, we compare our results with the theoretical torus models of PK92 and PK93, and investigate the model which agrees best with the observations. Figure 5 shows the geometrical configuration assumed in PK92 and PK93. The torus cylindrically surrounds the central engine and the broad-line region. The semi-opening angle $`\theta _{\mathrm{open}}`$ is given by the inner radius $`a`$ and the height $`h`$ of the torus; $`\theta _{\mathrm{open}}=\mathrm{tan}^{-1}(2a/h)`$. The viewing angle $`i`$ is defined as the angle between the rotation axis of the torus and the line of sight. The critical viewing angle $`i_{\mathrm{cr}}`$ is defined such that the broad-line region is visible at $`i<i_{\mathrm{cr}}`$. Since the actual torus should be clumpy and not have a sharp edge, we expect $`i_{\mathrm{cr}}\gtrsim \theta _{\mathrm{open}}`$. The torus emission is parameterized by three quantities in the models of PK92 and PK93: 1) $`T`$: the effective temperature of the inner wall of the torus, 2) $`a/h`$: the inner aspect ratio, and 3) $`\tau _\mathrm{r}`$ and $`\tau _\mathrm{z}`$: the radial and vertical Thomson optical depths. In the upper panel of Figure 6, we show the theoretical $`R(L,25)`$ values as a function of the viewing angle $`i`$ for six dusty torus models of PK92 and PK93. In the lower panel, the observed $`R(L,25)`$ values are shown separately for S1s and S2s. For each of the models, the observed critical ratio, $`R(L,25)=-0.6`$, yields a critical viewing angle. The results, together with the model parameters, are given in Table 2. The derived critical viewing angle ranges from 46° to 87° (Models 1, 4, 5, and 6). Since Models 2 and 3 do not give a critical viewing angle, these models are not appropriate for dusty tori in Seyfert galaxies. To proceed further, we have to compare the results with other observational properties of Seyfert galaxies (see below). Narrow-line regions of S2s often exhibit conical morphologies, which are due to shadowing of the nuclear ionizing continuum by the torus. The observed semi-opening angle of the cone, $`\theta _{\mathrm{open}}(\mathrm{NLR})`$, is thereby equal to the semi-opening angle of the torus $`\theta _{\mathrm{open}}`$. Table 3 summarizes statistical results from observations of conical narrow-line regions (Pogge 1989; Wilson & Tsvetanov 1994; Schmitt & Kinney 1996). These results indicate $`\theta _{\mathrm{open}}(\mathrm{NLR})\simeq 30\mathrm{°}`$. On the other hand, Model 1 has $`\theta _{\mathrm{open}}=11\mathrm{°}`$. Thus this model is not appropriate for dusty tori in Seyfert galaxies.
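The geometric relation above is straightforward to evaluate; for instance (the aspect ratios here are illustrative values chosen to reproduce the quoted opening angles, not parameters taken directly from PK92/PK93):

```python
import math

def theta_open(a_over_h):
    """Torus semi-opening angle, theta_open = arctan(2a/h), in degrees."""
    return math.degrees(math.atan(2.0 * a_over_h))

print(theta_open(0.1))   # 11.3 deg, cf. Model 1
print(theta_open(0.3))   # 31.0 deg, cf. Model 6
```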
The critical viewing angle $`i_{\mathrm{cr}}`$ can be estimated from the number statistics of S1s and S2s if, on statistical grounds, we observe Seyfert nuclei from random orientations: $$\frac{N(\mathrm{S1})}{N(\mathrm{S1})+N(\mathrm{S2})}=1-\mathrm{cos}i_{\mathrm{cr}}(\mathrm{stat}),$$ where $`N`$(S1) and $`N`$(S2) are the observed numbers of S1s and S2s, respectively (Miller & Goodrich 1990). Table 4 summarizes the results for three different surveys of Seyfert galaxies (Osterbrock & Shaw 1988; Salzer 1989; Huchra & Burg 1992). The derived critical viewing angles range from 27° to 46°. Since Models 4 and 5 give too large critical viewing angles ($`i_{\mathrm{cr}}>80\mathrm{°}`$), they are not appropriate for dusty tori in Seyfert galaxies. Consequently, among the six models of PK92 and PK93, Model 6 with $`\theta _{\mathrm{open}}=31\mathrm{°}`$ and $`i_{\mathrm{cr}}=46\mathrm{°}`$ is the best torus model. The $`R(L,25)`$ values of the S2s lie between $`-1.48`$ and $`-0.6`$, which correspond to viewing angles between $`86\mathrm{°}`$ and $`46\mathrm{°}`$. On the other hand, the $`R(L,25)`$ values of the S1s lie between $`-0.6`$ and $`0.37`$. This range is not explained by Model 6. The locus of Model 6 in Figure 6 is drawn down only to $`i=41\mathrm{°}`$. Since the $`R(L,25)`$ value at smaller viewing angles is expected to be nearly constant, it would be impossible to reproduce an $`R(L,25)`$ ratio as high as $`0.37`$. One possibility that explains this higher ratio may be the 3 µm bump often seen in type 1 AGNs. This bump may be attributed to thermal emission from hot dust grains with $`T\simeq 1300`$ K (e.g., PK93). Because the models of PK92 and PK93 assumed that the hot dust component is an additional source to the torus, $`R(L,25)`$ is underpredicted at small inclination angles. Although it is controversial whether this hot dust component is a component separate from the torus or the inner surface of the torus, those hot dust grains lie close to the central engine in either case. Since their emission is important only when the central region is clearly visible, the 3 µm bump is negligible in S2s. Therefore the critical $`R(L,25)`$ and $`i_{\mathrm{cr}}`$ are not affected by the treatment of the 3 µm bump. We have examined only the small and sparse sets of model parameters presented by PK92 and PK93. Further analyses with larger and denser parameter sets are required to understand the torus properties in more detail. Nevertheless, our most important result, which has been obtained for the first time with the new MIR diagnostic, is the clear separation in the $`R(L,25)`$ ratio between S1s and S2s. This strongly suggests that the torus properties do not vary among Seyfert galaxies. The effective temperature of the inner wall may be universal as a result of the fact that the inner wall is formed by balancing the rate of dust destruction with the rate at which the torus clouds drift inward (Krolik & Begelman 1988; PK92). We suspect that there are also certain mechanisms confining the vertical structure of the torus and shaping the uniform semi-opening angle. We thank K. Iwasawa and R. Antonucci for helpful comments and suggestions that improved the paper. TM is a JSPS Research Fellow. This work was financially supported in part by Grant-in-Aids for Scientific Research (Nos. 07044054, 10044052, and 10304013) of the Japanese Ministry of Education, Science, Sports, and Culture.
no-problem/9908/cond-mat9908456.html
ar5iv
text
# 1 Introduction ## 1 Introduction It was well established two decades ago that all electronic states of one dimensional (1D) disordered systems are exponentially localized in the absence of external fields, irrespective of the amount of disorder . However, recently some models of disorder introducing correlation and nonlinearity have been shown to exhibit extended states at particular energies. The electric field, on the other hand, has been shown to delocalize the electronic states in 1D disordered systems, where the wave function becomes power-law decaying \[5-7\], while for sufficiently large field strengths the eigenstates become extended . Furthermore, it can affect the backscattering and the interferences, yielding a strong enhancement of the localization (Wannier-Stark localization) . In a recent paper, we found that the nonlinearity can either localize or delocalize the electronic states depending on the strength and the sign of the nonlinear potential . Physically, a repulsive nonlinear (NL) potential represents the electron-electron interaction while an attractive one corresponds to the electron-phonon interaction. These interactions are important in various systems such as quantum dots, superlattices, etc. . Therefore, the electric field and the nonlinear potential effects can compete, and their presence together in the system may lead to the suppression of some effects such as the Wannier-Stark localization. This is the aim of the present letter, where we examine the effect of the NL interaction on the electronic properties of a chain of potentials in the presence of a constant electric field. Note that the effect of the NL interaction on the resonant transmission has been investigated by Cota et al . These resonances seem to change their structure with the NL strength. However, to the best of our knowledge its effect on the nature of the eigenstates has not been investigated before. ## 2 Model description The model studied in this letter is defined by the following nonlinear Schrödinger equation $$\left\{-\frac{d^2}{dx^2}+\underset{n}{\sum }(\beta _n+\alpha \left|\mathrm{\Psi }(x)\right|^2)\delta (x-n)-eFx\right\}\mathrm{\Psi }(x)=E\mathrm{\Psi }(x)$$ (1) Here $`\mathrm{\Psi }(x)`$ is the single particle wavefunction at $`x`$, $`\beta _n`$ the potential strength at the $`n`$th site, $`\alpha `$ the nonlinearity strength, and $`E`$ the single particle energy in units of $`\hbar ^2/2m`$ with $`m`$ the electronic effective mass and $`F`$ the electric field. The electronic charge $`e`$ and the lattice parameter $`a`$ are taken here for simplicity to be unity. The two ends of the system are assumed to be connected ohmically to ideal leads (where the electron moves freely) and maintained at a constant potential difference $`V=FL`$. The potential strength $`\beta _n`$ is uniformly distributed between $`0`$ and $`W`$ in the case of potential barriers and between $`-W`$ and $`0`$ in the case of potential wells ($`W`$ being the degree of disorder). Equation (1) can be mapped by means of the Poincaré map representation in the ladder approximation (i.e., when the field can be approximated as constant between two consecutive sites .
This approximation is valid for $`eFa\ll E`$) to the following recursive equation $$\mathrm{\Psi }_{n+1}=\left[\mathrm{cos}k_{n+1}+\frac{k_n\mathrm{sin}k_{n+1}}{k_{n+1}\mathrm{sin}k_n}\mathrm{cos}k_n+(\beta _n+\alpha |\mathrm{\Psi }_n|^2)\frac{\mathrm{sin}k_{n+1}}{k_{n+1}}\right]\mathrm{\Psi }_n-\frac{k_n\mathrm{sin}k_{n+1}}{k_{n+1}\mathrm{sin}k_n}\mathrm{\Psi }_{n-1}$$ (2) where $`\mathrm{\Psi }_n`$ is the value of the wavefunction at site $`n`$ and $`k_n=\sqrt{E+Fn}`$ is the electron wave number at site $`n`$. The solution of equation (2) is carried out iteratively by taking the two initial wave functions at sites $`1`$ and $`2`$ of the ideal leads: $`\mathrm{\Psi }_1=\mathrm{exp}(ik)`$ and $`\mathrm{\Psi }_2=\mathrm{exp}(2ik)`$. We consider here an electron with a wave number $`k`$ incident at site $`N+3`$ from the right side (taking the chain length $`L=N`$, i.e. $`N+1`$ scatterers). The transmission coefficient ($`T`$) reads $$T=\frac{k_0}{k_L}\frac{|1-\mathrm{exp}(2ik_L)|^2}{|\mathrm{\Psi }_{N+2}-\mathrm{\Psi }_{N+3}\mathrm{exp}(ik_L)|^2}$$ (3) where $`k_0=\sqrt{E}`$ and $`k_L=\sqrt{E+FN}`$. ## 3 Results and discussion In this section we examine, in a first step, the effect of nonlinearity on the energy spectrum of a periodic system in the presence of an electric field. We choose in this case $`\beta =1`$, $`F=0.01`$ and $`L=500`$. For linear systems ($`\alpha =0`$), the electric field seems to narrow the allowed bands because of the Wannier-Stark localization. Indeed, in this case the transmission coefficient has been shown to decrease abruptly near the band edges while Bloch oscillations appear . The nonlinearity, on the other hand, was found to delocalize, under certain conditions, the electronic states in periodic systems, in the sense that the allowed bands become larger and the gaps get narrower . Figure 1 shows the effect of the NL on a periodic chain of potential barriers in the presence of an electric field. In particular, we observe, for increasingly negative $`\alpha `$, an increase of the transmission coefficient in the regions localized by the electric field (i.e. Wannier-Stark localization). This field-induced localization tends to disappear for a given NL strength. On the other hand, the amplitude of the Bloch oscillations observed in the linear case (solid line) seems to decrease. This delocalization is, however, not observed if we consider periodic potential wells with repulsive NL, although we found recently that this type of NL delocalizes the electronic states in the gap for such systems . This surprising effect may come from the instabilities (strong drop of the transmission) observed at certain length scales, where any amount of the NL potential strength enhances the localization . These instabilities should appear at larger length scales for the potential barriers. Let us now examine the effect of NL interactions on disordered chains in the presence of an electric field. It was shown that the wave function becomes power-law decaying in the presence of an electric field . On the other hand, the electric field was also found in certain cases to modify the scaling of the transmission in jumps, with a behavior as $`\mathrm{exp}(-L^\gamma )`$ (with $`\gamma >1`$ and $`L`$ the length scale) between them . This case was shown to correspond to a negative differential resistance . Figure 2 shows the transmission coefficient versus the chain length in the case of disordered potential wells. We choose $`E=5`$, $`F=0.015`$ and $`W=2`$, with an ensemble averaging over 2000 samples (sufficient for an accuracy of about $`1\%`$).
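For concreteness, here is a minimal sketch of the iteration in Eqs. (2)-(3) used to generate such transmission curves; the placement of the scatterers (sites 2 to N+2) and the example parameter values are our own illustrative assumptions:

```python
import numpy as np

def transmission(E, F, beta, alpha):
    """Iterate the Poincare map, Eq. (2), through the N+1 scatterers of
    strengths beta[0..N] and return T from Eq. (3)."""
    N = len(beta) - 1
    k = lambda n: np.sqrt(E + F * n)                 # k_n = sqrt(E + F n)
    psi = np.zeros(N + 4, dtype=complex)
    psi[1], psi[2] = np.exp(1j * k(0)), np.exp(2j * k(0))   # waves in the lead
    for n in range(2, N + 3):
        kn, kn1 = k(n), k(n + 1)
        r = kn * np.sin(kn1) / (kn1 * np.sin(kn))
        site = (np.cos(kn1) + r * np.cos(kn)
                + (beta[n - 2] + alpha * abs(psi[n])**2) * np.sin(kn1) / kn1)
        psi[n + 1] = site * psi[n] - r * psi[n - 1]
    k0, kL = np.sqrt(E), np.sqrt(E + F * N)
    num = abs(1 - np.exp(2j * kL))**2
    den = abs(psi[N + 2] - psi[N + 3] * np.exp(1j * kL))**2
    return (k0 / kL) * num / den

# e.g. an ordered chain of barriers with attractive NL:
# T = transmission(E=1.0, F=0.01, beta=np.ones(501), alpha=-0.1)
```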
We observe clearly that the superlocalization before the first jump tends to be suppressed in the presence of a repulsive NL ($`\alpha >0`$) and the eigenstates become power-law decaying. The same behavior can be observed in the case of potential barriers (not shown here to avoid a lengthy paper). We note here that in almost all cases the instabilities of $`T`$ discussed above appear after the first jump of $`T`$. Therefore, we restricted ourselves to the first jump. In figure 2 we also observed a characteristic length $`l_c`$ separating the superlocalized states at small lengths from the power-law decaying ones at larger length scales. This characteristic length seems to decrease logarithmically with the NL strength in the case of disordered potential wells, while it decreases more rapidly for potential barriers (see Fig.3). ## 4 Conclusion We studied in this letter the effect of nonlinearity on electrified periodic and disordered chains using a simple Kronig-Penney model. We found that in periodic potential barriers, the nonlinearity contributes to the delocalization of the Wannier-Stark localized states induced by the electric field. In the case of disordered systems, we found that the superlocalization observed recently in such systems in the presence of an electric field is suppressed progressively by the NL interaction, and the wave functions become power-law decaying above a characteristic length $`l_c`$ (which seems to decrease at least logarithmically with nonlinearity). However, beyond a certain length scale (corresponding to beyond the first jump), any amount of the NL interaction destroys the transmission in certain samples instead of enhancing it, due to the instability observed in nonlinear systems. Most probably this instability points to very interesting statistical properties of the transmission in such systems, which should be carefully examined. This investigation should be the subject of a forthcoming paper. ## Figure Captions Fig.1 $`Ln(T)`$ versus the wavenumber $`k`$ in units of $`\pi /a`$ for ordered systems with potential barriers ($`\beta =1`$), a length scale $`L=500`$ and $`E=1`$. Effect of the NL interaction. Fig.2 $`<Ln(T)>`$ versus $`L`$ for disordered potential wells with $`E=5`$, $`W=2`$, $`F=0.015`$. Effect of the NL interaction. Fig.3 $`L_c`$ versus $`Log(|\alpha |)`$ for disordered potential barriers and wells for the same parameters as Fig.2.
no-problem/9908/cond-mat9908124.html
ar5iv
text
# Unusual Charge Localization in Zn-doped and Heavily Underdoped YBa2Cu3O7-δ at Low Temperatures ## Abstract The in-plane normal-state resistivity of Zn-doped $`\mathrm{YBa}_2\mathrm{Cu}_3\mathrm{O}_{7-\delta }`$ and heavily underdoped pure YBCO single crystals is measured down to low temperatures under magnetic fields up to 18 T. We found that the temperature dependence of the normal-state $`\rho _{ab}`$ does not obey $`\mathrm{log}(1/T)`$ and tends not to diverge in the low temperature limit. The result suggests that the “ground state” of the normal state of YBCO is metallic. PACS numbers: 74.25.Fy, 74.62.Dh, 74.72.Bk In high-$`T_c`$ cuprates, the low-temperature normal-state resistivity is expected to reflect the electronic structure which underlies the high-$`T_c`$ superconductivity. In the $`\mathrm{La}_{2-x}\mathrm{Sr}_x\mathrm{CuO}_4`$ (LSCO) system, the measurement of the normal-state resistivity at low temperatures was performed using a pulsed magnet . The in-plane resistivity $`\rho _{ab}`$ of underdoped LSCO was reported to show logarithmic divergence in high magnetic fields in the zero-temperature limit. Thus, the “ground state” of the normal state of underdoped LSCO appears to be insulating. In order to see whether the insulating “ground state” is a common feature of the underdoped high-$`T_c`$ cuprates or not, one needs to perform the measurement in other high-$`T_c`$ systems. In the case of $`\mathrm{YBa}_2\mathrm{Cu}_3\mathrm{O}_{7-\delta }`$ (YBCO), the normal-state $`\rho _{ab}`$ has been measured at low temperatures by using an 18 T magnetic field . Zn substitution in excess of 2.5% was reported to induce an upturn in the temperature dependence of $`\rho _{ab}`$ at low temperatures, and a logarithmic temperature dependence of $`\rho _{ab}`$ was observed; however, it remains unclear whether the logarithmic divergence continues to very low temperatures in the YBCO system, because the 18 T magnetic field was insufficient to completely suppress superconductivity at low temperatures in superconducting samples. On the other hand, the superconducting fluctuation in non-superconducting samples can easily be suppressed by an 18 T field at low temperatures. Thus the temperature dependence of the resistivity in such non-superconducting samples can be measured down to low temperatures without significant difficulties. If one investigates the temperature dependence of the resistivity in the non-superconducting samples as well as in the superconducting Zn-doped samples, the behavior of $`\rho _{ab}(T)`$ in the low temperature limit is expected to be clarified. Here we report measurements of the low-temperature normal-state resistivity along the $`\mathrm{CuO}_2`$ planes, $`\rho _{ab}`$, of Zn-doped YBCO and non-superconducting heavily-underdoped pure YBCO. We found that the normal-state $`\rho _{ab}`$ is not likely to diverge in the low-temperature limit. The result suggests that the “ground state” of the normal state is metallic in YBCO. The single crystals of $`\mathrm{YBa}_2(\mathrm{Cu}_{1-z}\mathrm{Zn}_z)_3\mathrm{O}_{7-\delta }`$ are grown by the flux method in pure $`\mathrm{Y}_2\mathrm{O}_3`$ crucibles to avoid inclusion of any impurities other than Zn . The oxygen content $`y`$ ($`\equiv 7-\delta `$) in the crystals is controlled by annealing in evacuated and sealed quartz tubes at 500–650 °C for 1-2 days together with sintered blocks and/or powders, and then quenching with liquid nitrogen.
The final oxygen content is confirmed by iodometric titration with an accuracy of better than $`\pm 0.01`$. The measurement of $`\rho _{ab}`$ is performed with the ac four-probe technique under dc magnetic fields up to 18 T applied along the $`c`$ axis. Figure 1(a) shows the temperature dependence of $`\rho _{ab}`$ in 0 and 15 T, for $`z`$=2.7% samples with $`y`$=6.70 (sample A), 6.75 (sample B) and 6.80 (sample C), and Fig.1(b) shows $`\rho _{ab}(T)`$ in 0 and 16 T for a sample with $`z`$=1.3%, $`y`$=6.50 (sample D). The $`\rho _{ab}(T)`$ of these four samples all show an upturn in magnetic fields at low temperatures. Figs.1(c)(d)(e)(f) show the $`\mathrm{log}T`$ plots of $`\rho _{ab}(T)`$ for samples A, B, C and D, respectively. The temperature dependences of $`\rho _{ab}`$ of these samples can be seen to be consistent with $`\mathrm{log}T`$ only in the temperature ranges of 5-12 K, 9-20 K, 7-15 K and 12-20 K, respectively, for samples A, B, C and D. Below those temperature regions, $`\rho _{ab}(T)`$ starts to deviate downward from $`\mathrm{log}T`$. The downward deviation is due to a different temperature dependence of $`\rho _{ab}`$ at lower temperatures and/or the superconducting fluctuation, which is not sufficiently suppressed in 15-18 T magnetic fields. Figures 1(g)(h)(i)(j) show $`\sigma _{ab}(T)`$ vs $`T^{1/2}`$ plots for samples A, B, C and D. The temperature dependences of $`\rho _{ab}`$ are consistent with $`\sigma _{ab}(T)=a+bT^{1/2}`$, in the ranges of 1.5-8 K (1.2-2.8 $`\mathrm{K}^{1/2}`$), 6-12 K (2.4-3.5 $`\mathrm{K}^{1/2}`$), 5-9 K (2.2-3.0 $`\mathrm{K}^{1/2}`$) and 8-20 K (2.8-4.5 $`\mathrm{K}^{1/2}`$), respectively, for samples A, B, C and D. In particular, $`\sigma _{ab}(T)`$ of sample A is well fitted with $`a+bT^{1/2}`$ down to 1.5 K. This temperature dependence, $`\sigma _{ab}(T)=a+bT^{1/2}`$, is fundamentally different from $`\rho _{ab}(T)\propto \mathrm{log}(1/T)`$; $`\mathrm{log}(1/T)`$ diverges as $`T\rightarrow 0`$, whereas $`1/(a+bT^{1/2})`$ remains finite. Thus, the result of Fig.1(g), in which $`\sigma _{ab}(T)=a+bT^{1/2}`$ fits the measured data well, suggests that the in-plane resistivity in YBCO is not likely to diverge in the zero-temperature limit. However, the resistivity might be reduced by the superconducting fluctuation at low temperatures in those samples. If so, it is possible that the intrinsic $`\rho _{ab}(T)`$ would be diverging in the low temperature limit if there were no superconducting fluctuations. In order to avoid such uncertainty, we prepared non-superconducting samples by reducing the oxygen content in pure YBCO. Fig.2(a) shows $`\rho _{ab}(T)`$ in 0 T for pure samples with $`y`$=6.40 (sample E) and 6.38 (sample F). Sample E is close to the superconducting region and its resistivity value corresponds to $`k_Fl\simeq 2`$, where $`k_F`$ is the Fermi wave number and $`l`$ is the mean free path. Sample F is strongly insulating and its resistivity corresponds to $`k_Fl<1`$ at low temperatures. Both samples E and F show no superconducting transition. However, in sample E, $`\rho _{ab}`$ shows a decrease due to the superconducting fluctuation in 0 T at low temperatures. The superconducting fluctuation in sample E can easily be suppressed in 18 T down to 0.2 K. Figure 2(b) shows the $`\mathrm{log}T`$ plot of $`\rho _{ab}(T)`$ for sample E. This $`\rho _{ab}(T)`$ is not consistent with $`\mathrm{log}(1/T)`$.
In contrast, as Fig. 2(d) shows, $`\sigma_{ab}(T)`$ is well fitted with $`a+T^{1/2}`$ down to 0.2 K (0.45 K<sup>1/2</sup>). We note that Lavrov et al. reported a similar $`T`$-dependence of the conductivity in heavily underdoped YBCO whose oxygen content lies between those of samples E and F. However, their data were consistent with $`\rho_{ab}(T)\sim \mathrm{log}(1/T)`$ as well as with $`\sigma_{ab}(T)\sim a+T^{1/2}`$ at low temperatures. In contrast to their data, our data definitely show that the increase in $`\rho_{ab}`$ with decreasing temperature is weaker than $`\mathrm{log}(1/T)`$ at low temperatures. Thus, we may conclude that $`\rho_{ab}`$ in sample E does not diverge in the low-temperature limit. For sample F, Fig. 2(c) shows the $`\mathrm{log}T`$ plot of $`\rho_{ab}(T)`$. It is clear in Fig. 2(c) that the resistivity of sample F tends to diverge more strongly than $`\mathrm{log}(1/T)`$ with decreasing temperature. We find instead that $`\rho_{ab}(T)\sim T^{-1/3}`$ is a better description of the temperature dependence of $`\rho_{ab}`$ of sample F. Figure 2(e) shows the plot of $`\rho_{ab}(T)`$ vs $`T^{-1/3}`$ for sample F, where one can see that the data lie on a straight line reasonably well at low temperatures (right-hand side of the plot). Thus, the resistivity of sample F is likely to diverge in the low-temperature limit. In both the superconducting Zn-doped samples and the non-superconducting sample close to the superconducting composition, $`\rho_{ab}`$ does not behave as $`\mathrm{log}(1/T)`$ at low temperatures. Instead, the behavior $`\sigma_{ab}(T)\sim a+T^{1/2}`$ is observed in the low-temperature limit. This result suggests that $`\rho_{ab}`$ does not diverge in the low-temperature limit and thus that the “ground state” of the normal state of YBCO is metallic. Of course, not all YBCO samples are “metallic”; for example, the $`y`$=6.38 sample remains insulating at low temperatures. The resistivity of the insulating $`y`$=6.38 sample corresponds to $`k_Fl<1`$, whereas that of the “metallic” $`y`$=6.40 sample corresponds to $`k_Fl\sim 2`$. This suggests that, as in conventional metals, samples with $`k_Fl<1`$ cannot remain metallic at low temperatures. In summary, we measured the in-plane resistivity of Zn-doped YBCO and of non-superconducting, heavily underdoped YBCO crystals down to low temperatures under magnetic fields of up to 18 T. We find that the temperature dependence of the normal-state $`\rho_{ab}`$ tends not to diverge in the low-temperature limit. The result suggests that the “ground state” of the normal state of YBCO is metallic. We thank A. N. Lavrov for fruitful discussions and experimental assistance.
# The relation of extragalactic cosmic ray and neutrino fluxes: the logic of the upper bound debate ## 1 Introduction The most commonly assumed mechanism for the origin of cosmic rays is the Fermi acceleration of protons and ions at strong shock waves in magnetized plasmas. The same mechanism also accelerates electrons, which then emit their energy in synchrotron radiation in the magnetic field, and it is therefore probably responsible for most of the non-thermal radiation in the universe. This has motivated theorists to search for the possible sources of cosmic rays up to the highest energies among the known objects which emit non-thermal radiation. A large fraction of the non-thermal power in the universe is provided by Active Galactic Nuclei (AGN), but the enigmatic Gamma Ray Bursts, the most powerful explosive events in the universe, have also received some attention as possible sources of energetic protons, i.e. cosmic rays. If cosmic rays exist in these sources, they can interact with the dense ambient photon field, producing secondary mesons (mostly pions), which decay and give rise to the emission of very high energy gamma-rays and neutrinos. Moreover, such interactions can convert protons into neutrons, which are released from the magnetic confinement and can be emitted as cosmic rays. The tight connection between cosmic ray, gamma ray and neutrino production has led to some attempts to constrain the possible fluxes of one component by observations of the others. This is particularly important for the neutrino fluxes, for which large experiments are currently under construction. Their exploration can be expected to be one of the primary goals of astrophysics in the next century. ## 2 Extragalactic cosmic rays, gamma rays and neutrinos The origin of very high energy ($`>100\mathrm{TeV}`$) neutrinos in the universe is generally attributed to the decay of pions or other mesons (we discuss pions only, for simplicity). Pions exist in three isospin states, $`\pi ^+`$, $`\pi ^{}`$ and $`\pi ^0`$, which on gross average are equally likely to be produced. The decay of charged pions leads to the production of neutrinos and charged leptons, while neutral pions decay electromagnetically into photons. Under the assumption that electrons deposit their energy into photons by radiative processes, equipartition of pion flavors immediately leads to an approximately equal luminosity in gamma rays and neutrinos from any pion-producing source, $$[\gamma \mathrm{luminosity}](\mathrm{cascade})\approx [\nu \mathrm{luminosity}](E_\nu )$$ ($`\ast `$) One can show that this relation holds even when other mesons (e.g. kaons) are involved. The production of pions usually involves the presence or the production of cosmic rays. In most models, pions are produced in $`pp`$ and $`p\gamma `$ interactions involving a pre-existing cosmic ray population in the source. To specify how cosmic rays are produced, most scenarios assume Fermi acceleration, which requires that cosmic ray protons are magnetically confined in the source. However, confinement breaks up when a proton interacts and changes into a neutron (isospin flip), which can be ejected from the source. Charge conservation requires the production of at least one charged meson in this interaction, which therefore inevitably leads to the production of neutrinos.
This can be expressed in the relation $$[\nu \mathrm{luminosity}](E_\nu )=\mathrm{kinematics}\times [n\mathrm{luminosity}](E_\mathrm{n}=fE_\nu )$$ ($`\ast \ast `$) For both $`pp`$ and $`p\gamma `$ interactions, the factor $`\mathrm{kinematics}`$ is in the range 0.2-1, depending on the mean CMF interaction energy . In some exotic models that involve the decay of topological defects in the universe, cosmic rays and mesons are produced in the same process, and their relation can be predicted from QCD models of hadronic fragmentation . This can also be expressed in the form of ($`\ast \ast `$), but with $`\mathrm{kinematics}\gg 1`$. The above relations apply to the emission process only. To establish the corresponding relation for the observable luminosities of an astrophysical source, we have to take into account opacity factors that may reduce the ejected power in photons and neutrons through interactions with ambient matter or photons. No such modification applies to high-energy neutrinos in the cases of interest here. The interaction of neutrons generally leads to a reduction of the bolometric cosmic ray luminosity, since cosmic ray energy is converted into neutrinos or electromagnetic radiation. Of particular importance is the possibility that a neutron flips back into a proton in such interactions, which keeps it confined over a long time and effectively allows all its energy to be removed. Photons interacting with matter can convert their energy into heat and therefore reduce the energy content in non-thermal radiation. The interaction of energetic non-thermal photons with background radiation, however, does not in general lead to a reduction of the bolometric electromagnetic energy, since the $`e^\pm `$ pairs emit their energy at lower frequencies in synchrotron radiation or inverse-Compton scattering. Unless the cascade develops into saturated comptonization, this energy can be emitted in the $`\mathrm{MeV}`$-$`\mathrm{GeV}`$ gamma ray regime, still having non-thermal characteristics. To infer relations between observable fluxes, a further factor arising from the different propagation properties of gamma rays, neutrinos and cosmic rays has to be considered. Neutrinos, and gamma rays below $`100\mathrm{GeV}`$, are not strongly affected by the presence of cosmic photon or matter backgrounds. The luminosity-flux relation is therefore given by standard cosmological expressions in both cases. Cosmic rays above $`10^{18}\mathrm{eV}`$ are affected by photohadronic pair and pion production on the local cosmic microwave background. Also at lower energies these interactions can modify the relation between cosmic ray and neutrino fluxes arriving from large redshifts. In addition, cosmic ray propagation can be influenced by the presence of magnetic fields in the source, in our galaxy, and on large scales. Following this, we may write the relation of observable neutrino fluxes to observable gamma ray and cosmic ray fluxes as $$[\nu \mathrm{flux}](\mathrm{bolometric})<[\gamma \mathrm{flux}](\mathrm{MeV}\mathrm{GeV})\times \mathrm{opacity}_\gamma $$ (relation 1) and $$[\nu \mathrm{flux}](E_\nu )\lesssim [\mathrm{CR}\mathrm{flux}](E_{\mathrm{CR}}=fE_\nu )\times \mathrm{opacity}_n\times \mathrm{kinematics}\times \mathrm{propagation}$$ (relation 2) where the value of $`\mathrm{propagation}`$ is determined by the inverse ratio of the proton attenuation length to the Hubble radius, and by the dependence of the comoving source luminosity density on redshift.
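To make relation 2 concrete, here is an order-of-magnitude evaluation of its right-hand side at $`E_{\mathrm{CR}}=10^{19}\mathrm{eV}`$. This sketch is ours: the cosmic ray flux normalization and all factor values are assumptions inserted for illustration (the factors follow the representative numbers quoted just below), not a calculation taken from this paper.

```python
# Order-of-magnitude evaluation of relation 2 at E_CR = 1e19 eV.
# All numbers below are illustrative assumptions.
e2_flux_cr = 2e-8      # E^2 * J_CR near 1e19 eV [GeV cm^-2 s^-1 sr^-1], assumed
kinematics = 0.25      # typical for p-gamma / pp interactions (see text)
propagation = 3.0      # non-evolving sources at E_CR ~ 1e19 eV (see text)
opacity_n = 1.0        # a neutron-transparent source, the most restrictive case

e2_flux_nu_bound = e2_flux_cr * opacity_n * kinematics * propagation
print(f"E^2 Phi_nu  <~  {e2_flux_nu_bound:.1e} GeV cm^-2 s^-1 sr^-1")
# ~1.5e-8 GeV cm^-2 s^-1 sr^-1, the order of magnitude of the bound
# discussed in Section 4; source evolution or a neutron opacity > 1
# raises the bound proportionally.
```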
For non-evolving sources, it ranges from $`\sim 1`$ for $`E_{\mathrm{CR}}<10^{17}\mathrm{eV}`$, through $`\sim 3`$ at $`E_{\mathrm{CR}}=10^{19}\mathrm{eV}`$, to $`>100`$ for $`E_{\mathrm{CR}}>10^{20}\mathrm{eV}`$. With source evolution on the level suggested by observations for AGN and starburst galaxies, the value at $`E_{\mathrm{CR}}>10^{19}\mathrm{eV}`$ is increased by about a factor of $`5`$. The parameter $`f\sim 0.01`$-$`0.05`$ is determined from the interaction kinematics and depends on the CMF energy . Relation 1 holds for the comparison of bolometric luminosities, and takes into account that a (maybe dominant) fraction of the non-thermal radiation of the source is not produced in photohadronic interactions, but by synchrotron-self-Compton emission of co-accelerated primary electrons. The factor $`\mathrm{opacity}_\gamma `$ accounts for the fraction of energy reprocessed to energies below $`\mathrm{MeV}`$ by comptonization. In relation 2, which also holds for specific energies, we have taken into account the possibility that part of the cosmic ray ejection is due to the direct, non-adiabatic ejection of protons. ## 3 Can we impose “robust” or “model-independent” upper bounds? Obviously, both relations 1 and 2 pose upper limits on the observable neutrino flux, if we choose proper upper limits for the parameters averaged over all contributing sources. $`[\gamma \mathrm{flux}]`$ and $`[\mathrm{CR}\mathrm{flux}]`$ on the right-hand sides are observables, so it is in principle possible to fix their values by observations. All other parameters have to be determined by theory. We may distinguish, however, whether the theory used here is well established and sufficiently supported by observations, or whether we have to rely on weakly supported hypotheses. For example, the parameter $`\mathrm{kinematics}`$ is, within narrow bounds, well known and little dependent on astrophysical model assumptions, as long as we confine ourselves to neutrino production in $`p\gamma `$ or $`pp`$ interactions. The situation is more difficult for $`\mathrm{propagation}`$: while its value is well known for a given cosmic ray energy assuming straight-line propagation, since it only depends on well-measured cross sections and the temperature of the microwave background, the influence of poorly known magnetic fields in the universe is difficult to determine. Obviously, $`\mathrm{opacity}_\gamma `$ and $`\mathrm{opacity}_n`$ can hardly be constrained a priori without specifying a particular choice of sources. As an additional complication, the determination of $`[\gamma \mathrm{flux}]`$ and $`[\mathrm{CR}\mathrm{flux}]`$ is also not straightforward, since we have to distinguish the extragalactic contribution to these fluxes from the generally dominant galactic contribution. However, here we may remember the logic inherent in an “upper bound”: we do not need precise measurements of these quantities; it is sufficient to have upper limits for them. Obviously, a safe upper limit for any extragalactic flux contribution is, unless determined more precisely, given by the total observed flux. The limits determined by this minimal condition, however, may be very weak and of little practical relevance. The extragalactic $`\mathrm{MeV}`$-$`\mathrm{GeV}`$ gamma-ray background (hereafter EGRB; for the convenience of a simple argument, we understand under the EGRB the diffuse plus identified point-source contribution to the extragalactic gamma ray background, i.e. the total density of ambient extragalactic gamma rays;
note that the experimental values usually given for this quantity subtract extragalactic point sources) is fairly well determined . Relation 1 has therefore readily been used to normalize flux estimates for extragalactic neutrinos. The only complication is that there is little theoretical agreement about the exact energy range in which the reprocessed electromagnetic cascade radiation emerges from the source. For example, normalizing to the EGRB above $`1\mathrm{MeV}`$ yields about one order of magnitude higher fluxes than normalizing to the EGRB above $`100\mathrm{MeV}`$. The extragalactic contribution to the cosmic ray spectrum is generally believed to dominate the total observed flux above $`3\times 10^{18}\mathrm{eV}`$, where the cosmic ray spectrum shows a distinct feature, called “the ankle”. This belief is also somewhat supported by the absence of a signature of the galactic plane in the arrival-direction distribution of the cosmic ray events, but it may be noted that models for a galactic halo origin of cosmic rays, which would not show such a signature, have been suggested. Another clear signature of an extragalactic origin would be the Greisen-Zatsepin-Kuzmin (GZK) cutoff expected above $`5\times 10^{19}\mathrm{eV}`$ due to photoproduction losses of the cosmic rays on the microwave background. Unfortunately, the current experiments disagree about the presence or absence of this cutoff in the data, so we have to wait for better data statistics to achieve clarification. Below the ankle, very little is known about the origin of the dominant cosmic ray component. Some experiments suggest a composition change from a dominantly heavy (iron) component to a light (proton) component at the ankle, i.e., at the same position where the spectrum also flattens. This result was recently corroborated by measurements at lower energies, which indicate an increasingly heavy composition of cosmic rays around “the knee” of the CR spectrum at $`10^{14}`$-$`10^{16}\mathrm{eV}`$. This, together with some tentative results on a possible signature of the galactic plane in the arrival directions at $`10^{17}\mathrm{eV}`$, has led to a common view that cosmic rays below the ankle are dominantly of galactic origin. Although this might be the case, and it is indeed expected from theoretical arguments too, we have to be careful not to over-interpret these data. In particular, the composition measurements at $`10^{17}\mathrm{eV}`$ rely on Monte Carlo simulations using extrapolations of particle interaction cross sections into energy regions which are not yet explored by accelerators. It has been shown that the presence or absence of the composition-change signature in different experiments depends on the Monte Carlo code used for the data interpretation . In summary, we may have good observational evidence that somewhere between “the knee” and “the ankle” of the cosmic ray spectrum the dominant origin changes from galactic to extragalactic. Any stronger statement on the exact shape of the extragalactic spectrum below the ankle, however, rather falls into the category of personal or public opinion. The only conservative upper limit on the extragalactic cosmic ray flux below the ankle and above the knee is therefore the measured total flux. ## 4 The Waxman-Bahcall upper bound In a recent paper, Waxman and Bahcall claimed that the relation between cosmic ray and neutrino fluxes (relation 2) sets a model-independent upper bound on extragalactic neutrino fluxes at all energies.
This bound is about 1-2 orders of magnitude stricter than previously assumed bounds from comparison with the EGRB (relation 1). Consequently, the authors claim to rule out most present models of neutrino production, in particular those connected to hadronic AGN models, which have generally been normalized to the EGRB. As a corollary, they claim that this provides a model-independent proof that the EGRB is not completely produced by hadronic processes in AGN. From the discussion above, this conclusion seems rather surprising, particularly in view of the fact that their bound is energy independent in power per decade, a behavior that is not seen in the cosmic ray data which they claim are the only observational basis for their conclusion. It is therefore worth asking: (a) how exactly this bound was derived; (b) in which respect it is really model independent, which would mean that it affects any model suggested for extragalactic neutrino production in the past, present and future; (c) in which respect it really affects present models for neutrino production in AGN jets; and (d) whether it really rules out a hadronic production of the EGRB. Indeed, Waxman and Bahcall (WB) use relation 2 to derive their bound, although this is somewhat hidden. Instead of writing down the relation of neutrino and cosmic ray mean free paths, they use a result obtained in an earlier paper for the local cosmic ray injection density at $`10^{19}\mathrm{eV}`$, where the propagation of ultra-high energy (UHE) cosmic rays was properly treated using a common transport approximation . Applying then the trivial equations for cosmological neutrino transport, they derive the correct (straight-line) $`\mathrm{propagation}`$ factor used in relation 2. They also properly discuss the dependence of this factor on source evolution. Their factor $`\mathrm{kinematics}=0.25`$ also falls into the right range for photohadronic (or $`pp`$) neutrino production. Furthermore, WB point out that no statement can be made on sources which are opaque to cosmic ray neutrons, which they exclude a priori from their treatment. We may at this point summarize the assumptions that have so far entered the derivation of the WB bound:

1. Neutrinos are produced in interactions of cosmic rays with background photons or matter.
2. The sources are transparent for neutrons of an energy $`10^{19}\mathrm{eV}`$.
3. Cosmic rays of energy $`10^{19}\mathrm{eV}`$ ejected by these sources are not affected by magnetic fields in or in the vicinity of the source, or on large scales.

While assumption 2 is clearly stated in the paper, assumption 1 is rather implicitly understood. It is certainly justified, since in fact most models of extragalactic neutrino production invoke this process. In contrast, WB devote an extensive discussion to the justification of assumption 3, where they show that (a) neutrons of this energy ($`10^{19}\mathrm{eV}`$) escape from most known strong-field regions around putative neutrino sources before undergoing $`\beta `$-decay, and (b) protons of this energy cannot be confined in large-scale fields (i.e. clusters of galaxies or superclusters) for time scales comparable to the Hubble time. From this, they conclude that magnetic fields cannot lead to inhomogeneities in the universal distribution of cosmic rays, which can easily be seen to be equivalent to the straight-line propagation assumption. Thus, according to WB, assumption 3 can be derived from our present observational upper limits on extragalactic magnetic fields.
The authors neglect, however, that particles moving diffusively through an adiabatically decreasing magnetic field suffer energy losses due to expansion work done on the outer medium. We return to this issue below. Under assumptions 1-3, the bound is only valid at one energy of the spectrum: it corresponds to a cosmic ray energy of $`10^{19}\mathrm{eV}`$, or a neutrino energy of $`3\times 10^{17}\mathrm{eV}`$ assuming standard kinematical relations. To extend it to other energies, WB introduce a fourth assumption:

4. The overall cosmic ray injection spectrum in the universe has the spectral shape $`dN/dE\propto E^{-2}`$, and extends without break up to $`10^{19}\mathrm{eV}`$ and beyond.

To support this assumption, WB refer to the theory of diffusive shock acceleration, which canonically predicts an $`E^{-2}`$ power-law spectrum for particle acceleration at strong, non-relativistic shocks. They give no reason why the spectrum should extend to $`10^{19}\mathrm{eV}`$ for all sources in the universe. With assumption 4, the spectral shape of the WB bound becomes obvious: since the factor $`\mathrm{kinematics}`$ is only weakly dependent on energy (see Mücke et al. , these proceedings, for a more detailed discussion), the assumed flat (= constant power per decade) cosmic ray spectrum produces a flat neutrino spectrum. The WB upper bound is therefore the result of fitting a model spectrum to the observed cosmic ray flux at $`10^{19}\mathrm{eV}`$. ## 5 Critique of the Waxman-Bahcall bound as a general upper limit Obviously, Waxman and Bahcall made no “mistake” in deriving their bound: for extragalactic neutrino sources which comply with assumptions 1-4, it is indeed a valid upper limit for the observable flux, based on observations. The question we have to ask is whether this justifies the claim of “model-independence”: is it in fact reasonable to believe that assumptions 1-4 are all of general validity for any possible neutrino source? Assumption 1 is certainly the one that can most easily be accepted; it is so far the only mechanism predicting high-energy neutrinos which is not entirely speculative. Nevertheless, we may remind the reader that other models have been suggested. Such models, which are based on string hadronization (the so-called “topological defect” models for cosmic ray origin ), predict for the given cosmic ray flux a neutrino flux about two orders of magnitude larger than the WB bound. This is based on a relation similar to our relation 2, but with $`\mathrm{kinematics}\sim 100`$, rather than the $`0.25`$ used by WB. It has been shown that these models are strongly constrained by the more general relation 1, i.e. the requirement not to over-produce the EGRB . Regarding assumption 2, we may simply follow Waxman and Bahcall and restrict our consideration a priori to sources fulfilling it. However, we disagree with WB in stating that such sources cannot be identified or constrained by any emission other than neutrinos. For a large subclass of them, i.e., those which emit gamma rays in the $`\mathrm{MeV}`$-$`\mathrm{GeV}`$ (but not up to the $`\mathrm{TeV}`$) regime, non-thermal gamma emission produced by $`\pi ^0`$-induced unsaturated synchrotron-pair cascades can emerge from the source. As correctly noted by Waxman and Bahcall, there is a strict connection between $`\gamma \gamma `$ and $`n\gamma `$ opacity .
While sources transparent to $`\mathrm{TeV}`$ gamma rays can be shown to be transparent to UHE neutrons too, one can use the same relation to show that sources which show an opacity break in the gamma ray spectrum at a few $`\mathrm{GeV}`$ must be opaque to neutrons at $`10^{19}\mathrm{eV}`$. For example, this is the case for most high-luminosity gamma ray blazars. In fact, the observed non-thermal gamma-ray emission from such blazars did motivate the assumption that they are strong neutrino sources, and at the same time it restricts the maximum neutrino flux from such sources by relation 1 to a level about two orders of magnitude above the WB bound. We discuss below in which respect the WB bound still affects the expected neutrino fluxes from blazars. Before discussing assumption 3, we turn to assumption 4, which is certainly the one with the weakest observational support. In fact, even the theoretical support WB give has to be questioned. For the large variety of shocks of different speeds and compression ratios in astrophysical sources, shock acceleration theory predicts a large range of spectral indices; the common value of $`2`$ is just a canonical assumption, applying to strong, non-relativistic shocks. Even more questionable is the assumption that in all sources the spectrum extends to $`10^{19}\mathrm{eV}`$. Obviously, even if all accelerators had a spectral index of $`2`$, the contribution of sources with cutoffs below $`10^{19}\mathrm{eV}`$ can locally produce a much steeper overall spectrum. This would allow higher associated neutrino fluxes at energies below $`10^{17}\mathrm{eV}`$, without being in conflict with the cosmic ray data at $`10^{19}\mathrm{eV}`$, where only a few sources contribute. Obviously, the existence of Fermi-accelerating sources with a proton maximum energy below $`10^{19}\mathrm{eV}`$ cannot be ruled out from first principles. Rather, it is suggested by a consistent interpretation of gamma ray observations of blazars within the hadronic model. Without assumption 4, which allows one to restrict the comparison of neutrino and cosmic ray transport properties to the energy $`10^{19}\mathrm{eV}`$, the validity of assumption 3 has to be revised. Taking the results of WB for confinement in cores of rich galaxy clusters and large-scale supergalactic filaments, we obtain confinement over a Hubble time for protons below $`10^{18}\mathrm{eV}`$ and $`10^{16}\mathrm{eV}`$, respectively. We can apply the same calculation to galactic halos of magnetic field strength $`\sim 1\mu \mathrm{G}`$ on a variation scale of $`\sim 10\mathrm{kpc}`$, extending to $`\sim 300\mathrm{kpc}`$, and again obtain confinement for cosmic rays below $`10^{16}\mathrm{eV}`$. At the same energy, we can also expect the halo of our own galaxy to modify the incoming extragalactic proton flux, similar to the modification by the solar wind observed in cosmic rays around $`1\mathrm{GeV}`$. Since $`10^{16}\mathrm{eV}`$ protons are connected to the production of $`\sim 300\mathrm{TeV}`$ neutrinos, it would be unreasonable to propose an upper bound on their extragalactic flux based on cosmic ray observations directly related to their energy, regardless of what the limits on the extragalactic contribution to the observed cosmic ray flux at $`10^{16}\mathrm{eV}`$ are.
Moreover, protons which migrate through a gradually decreasing magnetic field, with a gyro-radius much smaller than the scale over which the field changes, lose energy to adiabatic expansion if there is some interaction with the cold plasma which allows them to keep an isotropic distribution in the rest frame of the bulk flow. The detailed energy loss depends on the exact field configuration, but for the simplest case of a largely chaotic field, the energy loss follows the same rule as the adiabatic expansion of a relativistic gas, i.e. $`E_{\mathrm{CR}}\propto R^{-1}`$. If this applies to the putative outflows in galactic halos (galactic winds), and if we assume that most neutrino sources reside in galaxies having winds, then the cosmic ray bound can be relaxed by one order of magnitude or more for neutrino energies below $`10^{17}\mathrm{eV}`$. A similar effect can be obtained by the likely assumption that a considerable fraction of neutrino sources reside in galaxies belonging to more or less dense clusters or groups with stronger-than-average magnetic fields between galaxies, leading to a (partial) large-scale confinement of cosmic rays below $`10^{18}\mathrm{eV}`$. We note, in agreement with Waxman and Bahcall, that none of the above effects can strongly influence the propagation of cosmic rays at $`>10^{19}\mathrm{eV}`$. One reason for this is that neutrons of this energy jump out of the confinement of most of the structures we discuss before undergoing $`\beta `$-decay. Our critique is rather directed against tying the justification of assumption 3 to the validity of assumption 4. If assumption 4, i.e. the application of a model spectrum, is dropped, the influence of magnetic fields can no longer be neglected. On the basis of relation 2, together with assumptions 1 and 2, we have therefore derived a neutrino upper bound which is truly based on the observed cosmic ray flux . We have also discussed the possible influence of magnetic fields on this bound. The result is that, as expected, we confirm the WB bound for a neutrino energy of $`3\times 10^{17}\mathrm{eV}`$, but find much less restrictive limits at lower energies. At neutrino energies below about $`10^{15}\mathrm{eV}`$, the flux is only limited by the EGRB, regardless of the choice of parameters. Both the cosmic ray data we use and the magnetic fields we assume suffer from difficulties in the interpretation of the data, and can therefore be disputed. However, at this point we may again recall the logical meaning of the term “upper bound”: to derive a true upper bound, we can only use the observational upper limits on both the extragalactic cosmic ray flux and the magnetic fields connected to extragalactic sources and large scales. Everything else would not comply with the standards of a reliable scientific result. It is needless to say that we do not want to propose neutrino fluxes of this strength; we only state that they cannot be ruled out by general theoretical arguments and current observations. It is also worth noting that the qualitative feature of our result, i.e., that our bound is nowhere as strict as at $`3\times 10^{17}\mathrm{eV}`$, is independent of whether or not we use the cosmic ray composition data, and of whether or not we assume an effect of magnetic fields. Finally, we may also take a look at neutrino energies higher than $`3\times 10^{17}\mathrm{eV}`$, which are produced by cosmic rays above $`10^{19}\mathrm{eV}`$.
Here, as we see immediately from relation 2, the factor $`\mathrm{propagation}`$ rises from $`\sim 3`$ to about $`100`$ (for the no-evolution case). Assuming that the observed quantity $`[\mathrm{CR}\mathrm{flux}]`$ does not change drastically, this would imply a strong increase of the upper bound. In fact, when we look at the data, a continuation of the cosmic ray spectrum as a power law $`dN/dE\propto E^{-2.7}`$ beyond $`10^{20}\mathrm{eV}`$ is suggested by one of the three large-exposure experiments , is consistent with the combined result of all experiments (including the ones with lower exposure), and can therefore not be ruled out. The fact that the WB bound does not show this increase again goes back to assumption 4: the assumption of a flat injection spectrum implies a drastic change in the slope of the observed CR spectrum, i.e. the existence of the GZK cutoff. This is also consistent with present data, but we note that it corresponds to the current lower limit on the observed CR flux at this energy, and can therefore not be used for upper-limit estimates. We also point out that the common assumption that the post-GZK cosmic rays originate from a strong, local source would not imply an increase of the neutrino flux: for a local source, the factor $`\mathrm{propagation}`$ in relation 2 obviously approaches unity, since both cosmic rays and neutrinos propagate (approximately) loss-free and in straight lines. An increase of the neutrino flux correlated with our bound would imply that the non-observation of the GZK cutoff is due to an increased activity of all CR sources in the universe, rather than to one local source. Although this scenario is currently not favored by theoretical arguments, it cannot be ruled out a priori. Only an observational upper limit excluding the associated neutrino flux could do this. We note that the present theoretical upper limit on the UHE neutrino flux is set by the observed EGRB through relation 1. ## 6 The impact of the Waxman-Bahcall upper bound on present models In the last section we have shown how the special selection of parameters and assumptions allowed Waxman and Bahcall to set their cosmic ray upper bound to the lowest value possible for any model assumption. We have also shown that there is large freedom to invent models which evade this bound at energies different from those chosen by WB in their derivation. Here we want to discuss in which respect their bound affects present models which have already been discussed in the literature. Clearly, since such models make a clear prediction about the global source spectrum, the bound may be of more relevance here, since we can compare in the most restrictive energy regime, $`E_{\mathrm{CR}}\sim 10^{19}\mathrm{eV}`$ or $`E_\nu \sim 3\times 10^{17}\mathrm{eV}`$. We start with AGN models. Waxman and Bahcall have already noted that there is one class of AGN-related neutrino models for which no bound whatsoever can be stated, except by direct neutrino observations: the so-called AGN core model , which is opaque to both neutrons and gamma rays. (Actually, there is a bound also on this model, since the energy in gamma rays is converted to X-rays by saturated comptonization, and can be compared to the total flux of observed extragalactic X-ray point sources and the diffuse background. This is in fact how this model has been normalized.) Here we concentrate, like Waxman and Bahcall, on the discussion of AGN jet models.
First of all, it should be noted that Waxman and Bahcall did not discover that such models may be constrained by cosmic ray data. Mannheim (1995, ) already pointed out this problem, and suggested two models: Model A, which was constructed to explain both the cosmic ray data (assuming neutron transparency and straight-line propagation) and the EGRB above $`100\mathrm{MeV}`$ (however, using an incorrect relation of gamma ray and neutrino fluxes; see Mücke et al. , these proceedings), and Model B, normalized to the gamma ray background above $`1\mathrm{MeV}`$, which was at that time overestimated by one order of magnitude due to the incorrect Apollo measurements. It was noted in ref. that for Model B, in order to evade overproduction of cosmic rays above the ankle, one has to assume some mechanism preventing its cosmic rays from reaching us at these energies. Citing Model B only, and two similar models by Protheroe , and Halzen and Zas , Waxman and Bahcall claimed that all these models violate the cosmic ray bound by two orders of magnitude and can therefore be ruled out. In fact, using the correct bound derived for the cosmological evolution observed in AGN, the discrepancy is reduced to a factor $`\sim 30`$, and by another factor of $`\sim 2`$ when we recalculate the WB bound at $`E_{\mathrm{CR}}=10^{19}\mathrm{eV}`$ using a precise Monte Carlo simulation of extragalactic transport , rather than the approximate treatment performed by Waxman (1995, ). Obviously, hadronic blazar models follow assumption 1, and since we have a model spectrum given, we can choose an energy in the spectrum where assumption 3 applies as well. To support the validity of assumption 2 (transparency) for AGN jets, WB refer to the observed TeV emission of Mrk 421 and Mrk 501. Unfortunately, they misinterpret the TeV data in stating that the observed emission at $`\sim 10\mathrm{TeV}`$ proves that blazar jets are optically thin at this energy. In fact, the observed break of the gamma ray spectral index between the EGRET regime ($`<30\mathrm{GeV}`$) and the Whipple/HEGRA data ($`>300\mathrm{GeV}`$) implies that these sources become optically thick at $`<300\mathrm{GeV}`$. It should be pointed out that in a homogeneous emitter, a $`\gamma \gamma `$-opacity larger than one does not lead to an exponential cutoff, as sometimes erroneously assumed, but to a spectral break by the amount of the low-energy flux spectral index. For Mrk 421 and Mrk 501, the observed $`\mathrm{GeV}`$-$`\mathrm{TeV}`$ break matches this prediction very well. Moreover, a spectral break of this kind is not expected in the emission spectrum of the hadronic scenario, so a consistent interpretation of the data within this model requires the assumption that the observed break is due to opacity. Therefore, the $`\gamma \gamma `$-opacity of Mrk 501 at 10 TeV is $`\sim 30`$, rather than $`\sim 1`$ as assumed by WB. Correcting this in the estimate, we obtain a neutron opacity of Mrk 501 at $`10^{19}\mathrm{eV}`$ of $`\sim 0.1`$. Mrk 501 is therefore indeed optically thin for neutrons, but we have to note that it is a low-luminosity blazar, and that the opacity is directly proportional to the blazar luminosity. Averaging the neutron opacity of blazars over the flat blazar luminosity function , we obtain an average value $`\mathrm{opacity}_n\sim 10`$, which reduces the cosmic ray ejection per given neutrino flux (or, in other words, increases the bound) by the same factor (see \[2, v2\] for details).
This removes the discrepancy with the WB bound: existing AGN jet models are not at any energy in conflict with the cosmic ray data, because they do not fulfill assumption 2. Obviously, this also shows that a hadronic production of the EGRB is not ruled out by the cosmic ray constraint, as Waxman & Bahcall claim. Rather, improved determinations of the EGRB and its origin may set the strongest constraints on the possible neutrino fluxes. We note that this result was obtained without any modifications to the models, and without invoking other energy loss processes for cosmic rays expected in the extended halos of radio-loud AGN. As a side remark, we note that due to their very flat neutrino spectrum, AGN models expect (and always expected) neutrino fluxes in the interesting PeV regime which are much below the WB bound. Even if the bound were perfectly valid, a direct discrepancy at these energies never existed. We now add a few remarks on neutrinos from Gamma Ray Bursts. Here, no discrepancy with the cosmic ray bound has been found by Waxman and Bahcall. We may remark that, due to the property of this model to expect an optically thin, $`E^{-2}`$ cosmic ray emission spectrum with cutoffs generally above $`10^{19}\mathrm{eV}`$, it is the only model to which all assumptions of Waxman and Bahcall, and thus also their bound, fully apply. However, we may point out an interesting turn of the argument: in highly relativistic flows like GRB (and AGN also), we have good reason to assume that the direct ejection of protons is strongly suppressed, because the adiabatic loss time is of the order of the crossing time of the shell. Thus, it is likely that only neutrons can be ejected from a GRB shell . If this is the case, then we can use the WB bound for an $`E^{-2}`$ neutron spectrum, as expected to be produced by GRB, as a test flux for the hypothesis that GRB are the dominant sources of ultra-high energy cosmic rays . If it could be independently confirmed that GRBs follow the evolution pattern observed in star formation, an observed neutrino flux correlated with GRB events “only” on the level predicted by Waxman and Bahcall (1997 ), which is then about one order of magnitude below the appropriate bound, would provide evidence against rather than in favor of this scenario. ## 7 Conclusions The relation of cosmic ray and neutrino fluxes has been shown to be an important, so far insufficiently appreciated, measure of the viability of models of extragalactic neutrino production. It is thanks to Waxman and Bahcall that this point has now gained the attention of the community. Unfortunately, the way their result was presented, namely as a model-independent upper bound on any kind of extragalactic neutrino production, could lead to some severe misunderstandings. The most serious could be that this result might shatter our confidence in the objective of very-high and ultra-high energy neutrino observatories. In this paper we have presented the case that the Waxman-Bahcall upper bound is not model-independent, but rather relies on very special model assumptions. We have also shown that present models for extragalactic neutrino fluxes, which have provided one motivation for the construction of the experiments mentioned above, are not seriously affected by their result and need no modifications. However, it is also clear that the consistency of these models with cosmic ray observations is marginal, so that cosmic ray data can be regarded as an important constraint on their parameter space.
With respect to the motivation of experiments, we may make one point very clear: the debate whether the Waxman-Bahcall bound is valid or not is a purely theoretical dispute. It is the dispute whether assumptions 1-4 stated in Section 4 generally apply to nature, or whether they do not. The decision can only be made by experiment. Although theories are necessary to understand our data, they can never replace them. Truly model-independent bounds are only observational upper limits. The discussion in this paper has also made clear how many important questions regarding the origin of cosmic rays can be decided by neutrino observations. The prediction of neutrinos above the WB bound at energies $`10^{16}`$-$`10^{18}\mathrm{eV}`$ is an important test of the viability of hadronic blazar models, and of the total contribution of hadronic blazar emission to the extragalactic gamma ray background. Setting upper limits on neutrinos from Gamma-Ray Bursts below the WB bound may enable us to limit their contribution to the ultra-high energy cosmic ray spectrum; finding them at the level of the bound, on the other hand, would provide strong evidence that they are indeed the dominant sources of these cosmic rays. Finally, in case the non-existence of the GZK cutoff in the cosmic ray spectrum is further supported by observations, searching for neutrinos in excess of the WB bound at ultra-high energies ($`>10^{19}\mathrm{eV}`$) can test whether this is due to an increased overall activity of the cosmic ray/neutrino sources in the universe, or rather due to the contribution from one local source (or even our own galactic halo). All in all, these are only a few reasons why the tight connection between extragalactic cosmic ray and neutrino fluxes provides a strong additional motivation for VHE and UHE neutrino observatories. ## Acknowledgements JPR acknowledges support by the EU-TMR network “Astro-Plasma Physics” under contract number ERBFMRX-CT98-0168. RJP is supported by the Australian Research Council.
# An Explicit Space-time Adaptive Method for Simulating Complex Cardiac Dynamics ## Abstract For plane-wave and many-spiral states of the experimentally based Luo-Rudy 1 model of heart tissue in large (8 cm square) domains, we show that an explicit space-time-adaptive time-integration algorithm can achieve an order of magnitude reduction in computational effort and memory, but without a reduction in accuracy, when compared to an algorithm using a uniform space-time mesh at the finest resolution. Our results indicate that such an explicit algorithm can be extended straightforwardly to simulate quantitatively large-scale three-dimensional electrical dynamics over the whole human heart. Understanding the dynamics of excitable media such as heart tissue is a problem of substantial interest to physicists, physiologists, biomedical engineers, and doctors. For reasons not yet understood experimentally, the healthy time-periodic spatially-coherent beating of a human heart will sometimes change to a nonperiodic spatially-incoherent fibrillating state in which the heart cannot pump blood effectively (leading to death if suitable treatment is not administered quickly). It would be valuable to understand how the onset of arrhythmias that lead to fibrillation depends on details such as the heart’s size , geometry, electrical state, anisotropic fiber structure , and inhomogeneities. A deeper understanding of the heart’s dynamics may also make possible the invention of protocols by which electrical feedback could be used to prevent fibrillation . Because of many experimental difficulties in studying the three-dimensional dynamics of a heart , simulations of cardiac tissue (and more generally of excitable media) play an extremely important role in identifying and testing specific mechanisms of arrhythmia. However, quantitatively accurate simulations of an entire three-dimensional human heart are not yet feasible. The essential difficulty is that human heart muscle is a strongly excitable medium whose electrical dynamics involve rapidly varying, highly localized fronts (see Figs. 1 and 2). The width of such a front is about 0.05 cm, and a simulation that approximates well the dynamics of such a front requires a spatial resolution at least 5 times smaller, $`\mathrm{\Delta }x\approx 0.01\mathrm{cm}`$. The muscle in an adult human heart has a volume of about $`250\mathrm{cm}^3`$, so a uniform spatial resolution of 0.01 cm would require a computational grid with $`3\times 10^8`$ nodes. Depending on the assumed material properties of the heart and on the quantities of interest to analyze, up to 50 floating-point numbers might be associated with each node, requiring the storage and processing of about $`10^{10}`$ numbers per time step. The fastest time scale in heart dynamics is associated with the rapid depolarization of the cell membrane, about 0.1 msec in duration, and a reasonable resolution of this depolarization requires a time step about a fifth of this, $`\mathrm{\Delta }t\approx 0.02\mathrm{msec}`$. Since arrhythmias such as fibrillation may require several seconds to become established, the $`10^{10}`$ numbers associated with the spatial mesh would have to be evolved over about $`10^6`$ time steps. Such a uniform-mesh calculation currently exceeds existing computational resources and has not yet been carried out.
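The arithmetic behind this estimate is elementary; the short sketch below (ours, with the input values copied from the preceding paragraph, and the run duration chosen to match the quoted step count) reproduces the node, storage, and step counts for a uniform mesh.

```python
# Uniform-mesh cost estimate for a whole-heart simulation, using the
# figures quoted in the text above (illustrative only).
volume_cm3 = 250.0        # muscle volume of an adult human heart
dx_cm = 0.01              # spatial resolution needed to resolve the fronts
dt_ms = 0.02              # time step needed to resolve depolarization
floats_per_node = 50      # fields and membrane variables stored per node
duration_ms = 20000.0     # run length giving the ~1e6 steps quoted

nodes = volume_cm3 / dx_cm**3                # -> 2.5e8, i.e. ~3e8 nodes
numbers_per_step = nodes * floats_per_node   # -> ~1e10 numbers per step
steps = duration_ms / dt_ms                  # -> 1e6 time steps
print(f"nodes ~ {nodes:.1e}, numbers/step ~ {numbers_per_step:.1e}, "
      f"steps ~ {steps:.0e}")
```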
A clue about how to improve heart simulations comes from experiments and simulations which suggest that the electrical membrane potential $`V(t,\mathbf{x})`$ in the fibrillating state consists of many spirals (for approximately two-dimensional tissue such as the atrium, see Fig. 2) or of many scroll waves (for thicker cardiac tissue such as the left ventricle ). A striking feature of these spatiotemporally disordered states is that the dynamics is sparse: at any given time, only a small volume fraction of the excitable medium is occupied by the fronts, and away from the fronts the dynamics is slowly varying in space and time. It may then be the case that the computational effort and storage can be greatly reduced, from being proportional to the volume of the excitable medium (the case for a spatially uniform mesh) to being proportional to the arclength (in 2d) or surface area (in 3d) of the fronts. In this Letter, we show for representative solutions of the quantitatively accurate Luo-Rudy 1 (LR1) membrane model of cardiac tissue that an explicit space-time adaptive-mesh-refinement algorithm (AMRA) can indeed take advantage of the sparse excitable dynamics to reduce by an order of magnitude the computational effort and memory needed to simulate arrhythmias in large domains. Further, we show that there is no significant reduction in accuracy when using an AMRA compared to an algorithm that uses a spatially uniform mesh at the finest resolution of the AMRA. Since the AMRA is explicit in time and has a fairly simple data structure consisting of nested patches of uniform Cartesian meshes, the AMRA can be parallelized straightforwardly , leading to a further reduction in computational effort by the number of processors. The AMRA is also general and does not require the use of reduced models , which increase efficiency but sacrifice experimental accuracy by using fewer variables and perhaps explicitly eliminating rapid variables. The results presented below suggest that a quantitatively accurate AMRA simulation of fibrillation in an entire human left ventricle for several seconds with an effective $`0.01\mathrm{cm}`$ resolution should already be practical with existing computers. In the following, we discuss some details of the AMRA and then its accuracy and efficiency for simulations of the LR1 model in large one- and two-dimensional domains. Our particular algorithm was a straightforward modification of an AMRA that has been used by other researchers to integrate hyperbolic sets of partial differential equations such as the Euler equations of fluid dynamics . Since key mathematical and algorithmic details are available elsewhere , only some essential ingredients and our modifications of them are briefly described here; a more detailed discussion will be given separately . The AMRA approximates a given continuous field, such as the cardiac membrane potential $`V(t,\mathbf{x})`$, on a set of nested, locally uniform patches of $`d`$-dimensional Cartesian meshes in a $`d`$-dimensional Cartesian box . On each patch, spatial derivatives in the dynamical equations are approximated by second-order-accurate finite differences, and an explicit method (we use forward-Euler) is used to advance in time. The power of the algorithm arises from its ability to automatically and efficiently refine or coarsen the representations of fields by varying the number of grid points locally to achieve a specified truncation error.
A further reduction in computational effort is achieved by allowing the time step to change locally with the spatial mesh . In related prior work, Quan et al. have studied cardiac models using spatially adaptive time steps but with a uniform spatial mesh and alternation of implicit and explicit time steps, while Moore has studied reaction-diffusion equations using a spatially-adaptive fully-implicit method but with a spatially-uniform adaptive time step. To our knowledge, ours is the first study of an algorithm for excitable media for which the spatial and temporal resolutions change locally. An important subtlety is that our AMRA was designed for hyperbolic equations but is here applied to an excitable medium, which is described by parabolic equations. For explicit time integrations of hyperbolic equations, the Courant-Friedrichs-Lewy (CFL) condition for the onset of numerical instability bounds the largest possible local time step $`\mathrm{\Delta }t`$ by the first power of the local spatial resolution $`\mathrm{\Delta }x`$. For parabolic equations, the stability condition for an explicit algorithm bounds the time step by $`\mathrm{\Delta }x^2`$ for $`\mathrm{\Delta }x`$ sufficiently small. In the LR1 model, this dependence is evident only for spatial resolutions an order of magnitude finer ($`\mathrm{\Delta }x<0.0025\mathrm{cm}`$) than those used in our calculations. For resolutions in our range of interest, the fast reaction kinetics, not the diffusion operator, sets the stability limit on the time step . A standard way to avoid the stability restriction on $`\mathrm{\Delta }t`$ is to use a semi- or fully-implicit time-integration algorithm . We have estimated that, by using an explicit integration scheme, our time steps on the finest meshes are about an order of magnitude smaller than those needed to achieve a 10% relative error in the speed of the front (the AMRA uses 0.003 ms, as opposed to the value 0.04 ms for the semi-implicit case) . However, one cannot conclude that a semi-implicit algorithm is automatically better than our explicit one since, for a fixed spatial resolution, the larger time step allowed by a semi-implicit method may give less accuracy during the upstroke and require more computation (some of these issues will be discussed quantitatively elsewhere for the 1d case ). Since the spatiotemporal dynamics of even the most detailed cardiac membrane models are not yet understood, and the relation between a specified local truncation error and correct dynamics is also not understood, the present calculations should be considered an early but significant step in finding a good balance between efficiency and accuracy for simulating arrhythmias in large domains and over long times. Our results for the AMRA were obtained for the quantitatively accurate LR1 model , which in 2d can be written in the form: $$C_m\frac{\partial V}{\partial t}(t,x,y)=\frac{1}{\beta }\left(g_x\frac{\partial ^2V}{\partial x^2}+g_y\frac{\partial ^2V}{\partial y^2}\right)-I_{\mathrm{ion}}(\mathbf{m})-I_{\mathrm{stim}}(t,x,y),$$ (1) $$\frac{d\mathbf{m}}{dt}=\mathbf{f}(\mathbf{m},V),$$ (2) where $`V(t,\mathbf{x})`$ is the membrane potential at time $`t`$ and at position $`\mathbf{x}=(x,y)`$, $`C_m`$ is the membrane capacitance per unit area, $`\beta `$ is a surface-to-volume ratio of a heart cell, $`g_x`$ and $`g_y`$ are membrane conductivities (generally not equal, since the heart is anisotropic), $`I_{\mathrm{ion}}`$ is the total ionic current flowing across the membrane, and $`I_{\mathrm{stim}}`$ is a specified current injected to initiate a propagating wave.
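To fix ideas, the sketch below shows the kind of update the AMRA performs on each uniform patch: one forward-Euler step of Eqs. (1)-(2) on a single 2d Cartesian patch with the no-flux boundary condition used in this work. This is our schematic illustration, not the production code: the LR1 ionic currents and the seven gating equations are replaced by placeholder functions, and all names and parameter values are illustrative.

```python
import numpy as np

def laplacian_noflux(V, dx):
    """5-point Laplacian; (n.grad)V = 0 imposed by mirroring edge values."""
    Vp = np.pad(V, 1, mode="edge")
    return (Vp[2:, 1:-1] + Vp[:-2, 1:-1]
            + Vp[1:-1, 2:] + Vp[1:-1, :-2] - 4.0 * V) / dx**2

def euler_step(V, m, t, dt, dx, Cm, beta, g, i_ion, f_gate, i_stim):
    """One explicit forward-Euler step of Eqs. (1)-(2), isotropic g."""
    dVdt = ((g / beta) * laplacian_noflux(V, dx)
            - i_ion(m, V) - i_stim(t)) / Cm
    return V + dt * dVdt, m + dt * f_gate(m, V)

# Placeholder kinetics standing in for the 7-variable LR1 membrane model:
i_ion = lambda m, V: 0.5 * m * (V + 85.0)             # illustrative only
f_gate = lambda m, V: (np.tanh(0.1 * (V + 60.0)) - m) / 10.0
i_stim = lambda t: 0.0

# One step on a coarse 2d patch (resolutions as quoted below in the text).
nx, dx, dt = 180, 0.05, 0.012                          # cm, ms
V = -85.0 * np.ones((nx, nx))
V[:, :4] = 20.0                                        # crude left-edge stimulus
m = np.zeros((nx, nx))
V, m = euler_step(V, m, 0.0, dt, dx, Cm=1.0, beta=3000.0, g=1.0,
                  i_ion=i_ion, f_gate=f_gate, i_stim=i_stim)
```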
(For all calculations reported below, the boundary condition $`(\widehat{n}\cdot \nabla )V=0`$ was used, where $`\widehat{n}`$ is the unit vector normal to a given boundary point.) The seven voltage-sensitive membrane variables $`m_i(t,\mathbf{x})`$ of the LR1 model determine the flow of various ions across the membrane and satisfy ordinary differential equations, which are also integrated by a forward-Euler method. The same membrane parameter values as those of Ref. were used, except for the calcium conductivity $`g_{\mathrm{Ca}}`$ in the $`I_{\mathrm{ion}}`$ term, whose value was changed from 0.09 to 0.045 (in units of $`\mathrm{m}\mathrm{\Omega }^{-1}\mathrm{cm}^{-2}`$). The medium was isotropic, with $`g_x`$ and $`g_y`$ set to 1 $`\mathrm{k}\mathrm{\Omega }^{-1}\mathrm{cm}^{-1}`$ and $`\beta `$ set to 3000 $`\mathrm{cm}^{-1}`$. These values shortened the action potential duration and led to dynamical states with many spirals, providing a more challenging test of the AMRA. In addition to the physical parameters in Eqs. (1) and (2), many numerical and algorithmic parameters need to be specified . Several of the more important choices are: an initial resolution for a uniform coarse mesh covering the domain (we used $`\mathrm{\Delta }x=0.05\mathrm{cm}`$); the temporal resolution for the coarse mesh (we used $`\mathrm{\Delta }t=0.012\mathrm{ms}`$); the maximum number of grid levels allowed for refinement (we used the value 3); the factor by which the spatial mesh is refined locally (we chose the factor 2); the error tolerance used in the Richardson-extrapolation estimate of the local truncation error (we chose $`ϵ=2\times 10^{-3}`$); and the number of time steps to elapse before estimating a local error and regridding (we chose 2). As a first demonstration of the effectiveness of the AMRA, Fig. 1 summarizes a 3-level calculation of the LR1 model in a 1d domain of length $`L=9\mathrm{cm}`$. The system was stimulated at $`t=0`$ with a $`0.2\mathrm{cm}`$ square pulse along the left edge of the domain, which evolved into a front propagating to the right (the spatial profile is independent of the initial condition and of the system size for $`L\gtrsim 9\mathrm{cm}`$). One can see from the spatial profile in Fig. 1a at time $`t=240\mathrm{ms}`$ how narrow the front (region of depolarization) is compared to the profile’s extent, and this specifically is what makes the numerical simulation of highly excitable media so difficult. In the vicinity of the front, Fig. 1b shows the grid structure that was automatically calculated by the AMRA; the colors black, green, and red indicate the coarse, fine, and finest mesh regions, respectively. Taking into account the reduction of spatial mesh points and the asynchronous updating of grid points using spatially varying time steps , the AMRA overall used a factor of 3.6 fewer grid points and did less computational work by a factor of 9 for the LR1 model than a constant-time-step uniform-spatial-mesh forward-Euler code using the finest space-time resolutions of the AMRA. The spatial adaptivity of the time step accounts for a factor of 2 of this overall factor of about 10, and so is an important part of the algorithm. The temporal profiles at a fixed point in space, the front speeds, and the times between peak and recovery at a fixed point in space (action potential duration) for the AMRA and for a high-resolution uniform-mesh code (discussed in Ref. ) agree within 0.1% relative errors, except at the peaks of the temporal profiles, where the relative error is about 4%.
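The regridding step can also be illustrated in isolation. In the following sketch (ours; `step` stands for any single-step integrator such as the forward-Euler update above, and the toy problem, names, and values are illustrative), the local truncation error is estimated Richardson-style by comparing one double-size step against two normal steps, and cells whose estimated error exceeds a tolerance are flagged for refinement by the next finer level; the production tolerance quoted above is $`ϵ=2\times 10^{-3}`$.

```python
import numpy as np

def flag_for_refinement(V, step, dt, eps=2e-3):
    """Richardson-style local error estimate: one 2*dt step vs. two dt steps.
    `step(V, dt)` must advance the field V by dt and return the result."""
    coarse = step(V, 2.0 * dt)          # one big step
    fine = step(step(V, dt), dt)        # two small steps
    err = np.abs(fine - coarse)         # ~ proportional to truncation error
    return err > eps                    # boolean mask: cells to refine

# Toy usage with a 1d diffusion step (the tiny errors of this smooth toy
# problem need a tiny tolerance; for LR1 fronts the default eps applies):
def step(V, dt, dx=0.05, D=1e-3):
    lap = (np.roll(V, 1) + np.roll(V, -1) - 2.0 * V) / dx**2
    return V + dt * D * lap

x = np.linspace(0.0, 9.0, 180)
V = np.exp(-((x - 2.0) ** 2) / 0.1)     # localized bump mimicking a front
mask = flag_for_refinement(V, step, dt=0.012, eps=1e-7)
print("cells flagged for refinement:", int(mask.sum()), "of", V.size)
```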
We conclude that there is no significant loss of accuracy when using the more efficient AMRA. Fig. 2 shows how the AMRA performs for the LR1 model in a large square domain of size $`L=8\mathrm{cm}`$, using the same parameter values as the 1d case, for which spirals are unstable and break up into other spirals. This complex many-spiral dynamical state is a much stronger test of the efficiency and utility of an AMRA than Fig. 1 since the geometry of the fronts fluctuates strongly in time. A multi-spiral state was initiated by a standard S1-S2 stimulation protocol in which a right-going planar pulse is created by stimulating the left edge of the domain (the S1 stimulus), and the lower left quadrant of the domain is excited (the S2 stimulus) 334 ms later, when the left half of the domain has returned to rest but the right half is still repolarizing. A comparison of the field $`V`$ with the instantaneous grid structure approximating $`V`$ is given in Fig. 2 1346 ms after S2 and demonstrates how the AMRA is able to increase automatically the space-time resolution only in the vicinity of the fronts, greatly decreasing the overall computational effort since, at any given time, the sharp fronts indeed occupy only a small fraction of the domain. The total number of mesh points used by the AMR varies substantially with time, from $`3\times 10^4`$ to $`7\times 10^4`$ mesh points with an average of $`5\times 10^4`$. A comparison of these results with those required by a uniform-spatial-mesh constant-time-step code using the finest AMRA resolution shows that the AMRA uses about 8 times fewer mesh points, requires less integration work by a factor of 12, and achieves a speedup of about a factor of 11 . The above results can be used to estimate the computer time needed by the ARMA to integrate for one second the LR1 model for a 3d section of left ventricular wall of dimensions $`8\mathrm{cm}\times 8\mathrm{cm}\times 1\mathrm{cm}`$, with an effective fine uniform mesh resolution of $`\mathrm{\Delta }x=0.0125\mathrm{cm}`$ in space and $`\mathrm{\Delta }t=0.003\mathrm{msec}`$ in time. On a Pentium III 500 MHz computer, we found that a 3-level 2d AMRA calculation at this resolution takes about 3 days. The time for the 3d calculation then can be estimated by assuming that each of the spirals in Fig. 2 becomes a continuous stack of spirals (a scroll wave), with the stack transverse to the square sides of the domain , and correspondingly that the mesh refinements extend uniformly from the 2d case through the transverse direction. A 3d AMRA calculation should then take roughly 15 days, which is a factor of 17 speedup over the 9 months required to complete a similar calculation using a uniform space-time mesh with the above resolution. Without substantial change to the AMRA, an additional speedup of at least 10 can be gained by using a distributed parallel computer with 100 Pentium III processors, and another speedup of 5 by using table-lookups to avoid the many exponentiations associated with the integration of the membrane variables $`m_i(t)`$. These further gains would reduce the total simulation time for one second of the LR1 model in this 3d domain to 7 hours or less. (With a substantial modification to make the AMRA semi-implicit, another reduction by a factor of 2-3 might be possible.) 
Simulation of an entire heart (a factor of 4 greater in volume) for one second with an LR1 model should then be possible on the time scale of one day, which is acceptably fast for exploring many interesting questions about the dependence of arrhythmias on parameters. In summary, we have shown that an explicit space-time adaptive algorithm using one of the simplest possible data structures (a hierarchy of Cartesian meshes) can already attain an order of magnitude reduction in computational effort and memory when applied to the experimentally based LR1 cardiac membrane model , and that this reduction is achieved without incurring a corresponding reduction in accuracy when compared to an explicit code using a uniform space-time mesh. Important next steps include determining whether the algorithm can be improved by using implicit time integration, generalizing the method to curved boundaries, and making specific applications to the initiation and control of human arrhythmias. We thank M. Berger, Z. Qu, and A. Garfinkel for useful discussions and especially M. Berger for making available to us one of her AMRA codes. This work was supported by an NSF Graduate Research Fellowship, by NSF grant DMS-9722814, and by NIH grant R29-HL-57478.
# Continuity bounds for entanglement ## Acknowledgments This research was supported by DARPA through the Quantum Information and Computing Institute (QUIC) administered through the ARO, and by the California Institute of Technology through a Tolman Fellowship.
# Track Restore Technique (RST) Applied to Analysis of Waveform of Voltage Pulse in SAGE Proportional Counters ## I Introduction One of the main procedures of solar neutrino flux measurement in gallium experiments is the detection of several atoms of <sup>71</sup>Ge in a small proportional counter. <sup>71</sup>Ge decays solely via electron capture to the ground state of <sup>71</sup>Ga. In the proportional counter, 1.2 keV and 10.4 keV Auger electrons are usually detected. These low-energy electrons produce a nearly point-like ionization in the counter gas. This ionization will arrive at the anode wire of the proportional counter as a unit, resulting in a fast rise time for the pulse. In contrast, although a typical $`\beta `$-particle produced by a background process may also lose 1 keV to 15 keV in the counter gas, it will leave an extended trail of ionization. The ionization will arrive at the anode wire distributed in time according to its radial extent in the counter, which usually gives a pulse with a slower rise time than for a <sup>71</sup>Ge event. The identification of true <sup>71</sup>Ge events and the rejection of background events is thus greatly facilitated by using a two-parameter analysis: a candidate <sup>71</sup>Ge event must not only fall within the appropriate energy region, but must also have a rise time consistent with point-like ionization. The anode wire is directly connected to a charge-sensitive preamplifier. After further amplification the signal goes (in SAGE) to the digital oscilloscope HP5411D, which records the voltage pulse waveform with 8-bit voltage resolution and 1 ns time resolution for 800 ns after pulse onset. A typical pulse produced by a 10.4 keV Auger electron after <sup>71</sup>Ge decay is shown in Fig. 1. ## II Standard analysis of waveform There are several different techniques which are applied to waveform analysis in both gallium solar neutrino experiments. All of them are described in detail elsewhere (see , for SAGE, for GALLEX). For example, in the standard analysis of SAGE data the so-called $`T_N`$ method is used: a functional form described in , with a parameter $`T_N`$ characterizing the rise time of the pulse, is fitted to the observed pulse. This technique gives the correct description of the shape of the voltage pulse as recorded by the digital oscilloscope when the ionization produced in the proportional counter consists of a set of point ionizations evenly distributed along a straight track. Since <sup>71</sup>Ge events are usually a single cluster of ionization, this method works satisfactorily to select <sup>71</sup>Ge candidate events. It is, however, restricted to the particular form of ionization that is assumed, and gives a poor fit to other types of charge deposit in the counter, such as the combination of a point event from <sup>71</sup>Ge $`K`$-electron capture followed by capture of the 9.3-keV x ray at some other location in the counter. To give us the capability to investigate all possible events that may occur in the counter, we have also developed a more general method which can analyze an event produced by ionization with an arbitrary distribution of charge. We call this the ‘restored pulse method’, or ‘RST method’ for short. ## III Description of RST technique We begin with the measured voltage pulse $`V(t)`$ as recorded by the digitizer.
For an ideal point charge that arrives at the counter anode wire, $`V(t)`$ has the Wilkinson form $`V(t)=W(t)=V_0\mathrm{ln}(1+t/t_0)`$, provided the counter is ideal and the pulse processing electronics has infinite bandwidth. For a real event from the counter, with unknown charge distribution, $`V(t)`$ can in general be expressed as the convolution of the Wilkinson function with a charge collection function $`G(t)`$: $$V(t)=W(t)\ast G(t).$$ (1) The function $`G(t)`$ contains within it the desired information about the arrival of charge at the counter anode, coupled with any deviations of the counter or electronics from ideal response. Equation (1) can be considered as the definition of $`G(t)`$. To get the desired function $`G(t)`$, one must deconvolute Eq. (1). To perform this deconvolution, we have found it mathematically convenient to use the current pulse $`I(t)`$, which is obtained by numerical differentiation of $`V(t)`$: $$I(t)=\frac{dV}{dt}=\frac{d}{dt}\left(W(t)\ast G(t)\right)$$ (2) $$=\frac{dW}{dt}\ast G(t)=W^{\prime }(t)\ast G(t),$$ (3) where $`W^{\prime }(t)`$ is normalized over the observed time of pulse measurement $`T_{\text{obs}}`$ such that $`\int _0^{T_{\text{obs}}}W^{\prime }(t)𝑑t=1`$. To deconvolute, we Fourier transform to the frequency domain and then use the theorem that convolution in the time domain becomes multiplication in the frequency domain. This simply gives $`I(f)=W^{\prime }(f)G(f)`$, which can be solved for $`G(f)`$. We then Fourier transform $`G(f)`$ back to the time domain to get the desired function $`G(t)`$. The energy of the event is given by $`\int _0^{T_{\text{obs}}}G(t)𝑑t`$. The duration of the collection of ionization is given by the width of $`G(t)`$, which can be used as a measure of the rise time. An example of this procedure as applied to a typical <sup>71</sup>Ge $`K`$-peak event is given in Fig. 2. This pulse has $`T_N=3.9`$ ns. The recorded voltage pulse after inversion and smoothing is given by $`V(t)`$ in the lower panel. The current pulse, obtained by numerical differentiation of the voltage pulse, is given by $`I(t)`$ in the upper panel. The deduced function $`G(t)`$ is also shown in the upper panel. It has a FWHM of about 15 ns, found to be typical for true <sup>71</sup>Ge $`K`$-peak events. The integrated current pulse, which records the pulse energy, is given by $`\int G(t)𝑑t`$ in the lower panel. ## IV Conclusion This method has the advantage that it can reveal the basic nature of the ionization in the counter for an arbitrary pulse. It is also capable of determining the pulse energy over a wider range than the $`T_N`$ method. A problem that has been found with this method in practice, however, is that when <sup>71</sup>Ge data are analyzed one obtains multiple collection functions (i.e., $`G(t)`$ has several distinct peaks separated in time) more often than is expected from the known physical processes that take place in the counter. These multiple peaks are due to noise on the pulse and the cutoff of the system frequency response at about 100 MHz. Attempts have been made to remove these extraneous peaks by filtering and smoothing the original pulse, but they have not been fully successful. Evidently we need faster electronics and a reduction in the noise level to be able to fully exploit this pulse shape analysis technique. As a result, we have only been able to use this method to select events on the basis of energy.
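The deconvolution itself amounts to a few FFT operations. The following is a schematic NumPy version (our own illustration, not the SAGE code); the Wilkinson parameters `V0` and `t0`, the crude spectral regularization, and the omission of the smoothing/filtering step are all placeholder choices:

```python
# Schematic RST deconvolution of Eqs. (1)-(3): G = F^-1[ F(I) / F(W') ].
import numpy as np

def restore(V, dt=1.0, V0=1.0, t0=2.0):
    """Return the charge-collection function G(t) from a digitized pulse V(t)."""
    t = np.arange(len(V)) * dt        # ns; the digitizer samples every 1 ns
    I = np.gradient(V, dt)            # current pulse I(t) = dV/dt
    Wp = np.gradient(V0 * np.log1p(t / t0), dt)  # W'(t), ideal Wilkinson response
    Wp /= Wp.sum() * dt               # normalize: integral of W' dt = 1
    Wf = np.fft.rfft(Wp)
    eps = 1e-3 * np.abs(Wf).max()     # crude regularization of small |W'(f)|
    G = np.fft.irfft(np.fft.rfft(I) / (Wf + eps), n=len(V)) / dt
    return G   # energy ~ np.sum(G)*dt; rise time ~ FWHM of G
```

## V Acknowledgments We thank many members of SAGE for fruitful and stimulating discussions.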
We especially thank B. T. Cleveland for his help in the careful preparation of this article.
# All-dielectric one-dimensional periodic structures for total omnidirectional reflection and partial spontaneous emission control ## I Introduction It is well known that spontaneous emission can be strongly modified by changing the environment near an excited atom. A microcavity with perfectly reflecting walls can considerably inhibit or enhance spontaneous emission of atoms placed inside it . Metallic mirrors, being good reflectors at any angle of incidence, fulfill the requirements for the inside walls of microcavities. At optical frequencies there is, however, a problem with them: they display notable dissipative losses. Photonic crystals were originally proposed by Yablonovitch to fix this problem. Photonic crystals are periodically microstructured dielectric materials which can exhibit frequency bands that are completely free of electromagnetic states. The forbidden band is usually referred to as a full three-dimensional (3D) photonic band gap. Being made from positive-dielectric-constant materials, a photonic crystal can be almost free of dissipative losses at any prescribed frequency. In the limit of a thick sample, a photonic crystal behaves as an omnidirectional high reflector. Since the first works , the concept of the photonic crystal has been attracting close attention of the scientific community and many applications have been proposed (see and refs. therein). However, until now the possibility to design an all-dielectric microstructure displaying total omnidirectional reflection has mainly been associated with 3D periodic materials. In spite of current success in the microstructuring of 3D photonic crystals , there is still a serious technological problem in fabricating a periodic structure of arbitrary wavelength-scale period. The investigations of low-dimensional periodic media have been attracting considerable interest. It has recently been recognized that 2D and 1D periodic structures can display some features of a full 3D photonic band gap, namely, display total omnidirectional reflection of an arbitrarily polarized wave within a certain frequency region. A one-dimensional photonic crystal is nothing other than the well-known dielectric Bragg mirror consisting of alternating layers with low and high indices of refraction. In contrast to 3D microstructures, 1D dielectric lattices are unique in that modern technology is currently able to produce the needed wavelength-scale period. Depending on the chosen geometry and frequency region, many applications are possible. The planar geometry, for example, can be used to improve the properties of vertical-cavity surface-emitting lasers and microwave antennas, and to design transmission and energy-saving filters. Rolled into hollow fibers, the mirror can be used as the inside walls of high-finesse waveguides and microcavities. The paper is organized as follows. Section II outlines the origin of the total omnidirectional reflection displayed by 1D photonic crystals. Optimization criteria of the mirror design are reported. The possibility of partial spontaneous emission control is discussed. In Section III, the experimental demonstration of the mirror is presented at optical frequencies. ## II Theoretical results ### II-A Total omnidirectional reflection Consider an infinite periodic stack of alternating layers of low, $`n_1`$, and high, $`n_2`$, indices of refraction and thicknesses $`d_1`$ and $`d_2`$, respectively (Fig. 1). The period is $`\mathrm{\Lambda }=d_1+d_2`$.
The periodicity of the structure leads to Bloch wave solutions of the Maxwell equations. The Bloch wave number $`K`$ may be obtained from the dispersion relation (see e.g. ) $$K(𝐤_{\parallel },\omega )\mathrm{\Lambda }=\mathrm{arccos}\left(\frac{1}{2}(A+D)\right)$$ (1) where $`𝐤_{\parallel }`$ is the tangential component of the Bloch wave vector and $`\omega `$ is the frequency. A particular form of the quantities $`A`$ and $`D`$ may be found elsewhere . Due to the planar geometry of the problem, the separation of the electromagnetic field into TE (transverse electric) and TM (transverse magnetic) polarization states is possible, where the electric or magnetic field vector, respectively, is parallel to the layer interfaces. This splits the problem into two independent ones for TE and TM polarizations, respectively. The photonic band structure of an infinite system of layers is depicted in figure 2. The refractive indices, $`n_1=1.4`$ and $`n_2=3.4`$, are chosen close to those of SiO<sub>2</sub> and Si in the near IR region, where these materials are essentially transparent. The band structure has been calculated using the dispersion equation (1). The left panel is for TE polarization, and the right one for TM. An infinite periodic structure can support both propagating and evanescent Bloch waves, depending on whether the Bloch wave numbers are real or imaginary. In figure 2, gray areas correspond to the propagating states, whereas white areas contain the evanescent states only and are usually referred to as photonic band gaps. As only the normal component of the wave vector is involved in the band gap formation for oblique propagation, the band gaps shift towards higher frequencies with the tangential component of the wave vector (Fig. 2). A common feature of the band structure of 1D lattices is that the forbidden gaps always close up (e.g. at the crossed circles in figure 2). Another feature of the band structure is that the TM forbidden gaps shrink to zero on the Brewster light-line, where $`\omega =c\left|𝐤_{\parallel }\right|/(n_1\mathrm{sin}\alpha _B)`$ (Fig. 2), $`\alpha _B=\mathrm{arctan}(n_2/n_1)`$ being the Brewster angle. The TM polarized wave propagates without any reflection from the $`n_1`$ to the $`n_2`$ layer, and from the $`n_2`$ to the $`n_1`$ layer, at the Brewster angle $`\alpha _B`$. Suppose that a plane electromagnetic wave illuminates the boundary of a semi-infinite periodic stack under the angle $`\alpha _{inc}`$ from a semi-infinite homogeneous medium of refractive index $`n`$ (Fig. 1). When the frequency and the wave vector of an incident wave fall within the forbidden gaps of the photonic crystal (it is important to note that infinite and semi-infinite photonic crystals have the same band structure ; the only difference is the existence of surface modes in the case of a semi-infinite stack), the incident wave undergoes total reflection. The band gaps of the crystal lead to total reflection bands in the spectra, which are very sensitive to the incident angle. Two questions arise: (i) whether it is possible to avoid the coupling of the incident wave to the Brewster window, where the reflection coefficient at the interface of the low, $`n_1`$, and high, $`n_2`$, index layers is identically zero; and (ii) whether the reflection bands (photonic band gaps) can be wide enough to be open for all incident angles. When an electromagnetic wave illuminates the boundary of the semi-infinite crystal, the possible values of the internal angles are restricted by Snell’s law.
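Equation (1) above is straightforward to evaluate numerically. The sketch below reconstructs $`(A+D)/2`$ from the standard transfer-matrix result for a two-layer unit cell (Yeh's form); since the explicit $`A`$ and $`D`$ are only cited, not reproduced, in the text, the expressions here are our reconstruction rather than a quotation:

```python
# Evaluate cos(K*Lambda) = (A+D)/2 of Eq. (1) for the two-layer unit cell.
# Units: lengths in units of the period Lambda and c = 1, so w stands for
# omega/c and the normalized frequency of the text is w/(2*pi).
import numpy as np

def cos_KL(w, kpar, n1=1.4, n2=3.4, d1=0.5, d2=0.5, pol="TE"):
    """Return (A+D)/2; a point (w, kpar) lies in a forbidden gap if |.| > 1."""
    k1 = np.sqrt((n1 * w) ** 2 - kpar**2 + 0j)  # normal wavevector, layer 1
    k2 = np.sqrt((n2 * w) ** 2 - kpar**2 + 0j)  # normal wavevector, layer 2
    r = k2 / k1 if pol == "TE" else (k2 * n1**2) / (k1 * n2**2)
    val = (np.cos(k1 * d1) * np.cos(k2 * d2)
           - 0.5 * (r + 1 / r) * np.sin(k1 * d1) * np.sin(k2 * d2))
    return val.real  # the imaginary part vanishes for lossless layers

# Example: locate the normal-incidence gaps of the SiO2/Si-like stack.
w = np.linspace(0.05, 3.0, 3000)
gap = np.abs(cos_KL(w, 0.0)) > 1.0   # True inside the forbidden gaps
```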
The full domain of incident angles $`[-\pi /2,\pi /2]`$ is mapped into the internal cone of half-angle $`\alpha _1^{max}=\mathrm{arcsin}(n/n_1)`$ (the light gray area in figure 1). The larger the refractive indices of the layers with respect to the medium outside the crystal, the narrower the cone of internal angles. For a sufficiently large index ratio $`\delta n_0=n_1/n`$, the internal cone’s half-angle $`\alpha _1^{max}`$ can be smaller than the Brewster angle $`\alpha _B`$. An externally incident wave will then never couple to the Brewster window. This answers the first question. To answer the second question, consider the reduced region of k-space associated with the ambient medium (Fig. 2). For an incident wave with the wave vector $`\left|𝐤\right|=n\omega /c`$, the tangential component of the wave vector remains constant throughout the crystal and equals $`\left|𝐤_{\parallel }\right|=(n\omega /c)\mathrm{sin}\alpha _{inc}`$. Here, $`\alpha _{inc}`$ is the incident angle, $`\omega `$ is the frequency and $`c`$ is the speed of light in vacuum. The wave coming from the outside can only excite the states lying above the ambient-medium light-line (the solid line in figure 2 corresponds to the ambient medium with refractive index $`n=1`$). To have an omnidirectional reflection, the forbidden gap should be open within this reduced region of k-space. A sufficiently large index ratio $`\delta n=n_2/n_1`$ can do the trick, leading to a wide band gap open for all incident angles. For the structure presented (Fig. 2) and air as an ambient medium, $`n=1`$, both index ratios $`\delta n=n_2/n_1\approx 2.4`$ and $`\delta n_0=n_1/n=1.4`$ are sufficiently large, so the first two overall band gaps are open for all external angles of incidence. No propagating modes are allowed in the stack for any propagating mode in the ambient medium within the gaps for both TE and TM polarizations (shaded areas in figure 2). Total omnidirectional reflection arises. A band of surface modes lies below the ambient-medium light-line and thus surface modes do not affect the external reflectivity. It is instructive also to represent the band structure in terms of internal angles. An internal angle parametrizes the tangential component of the wave vector as $`\left|𝐤_{\parallel }\right|=(n_i\omega /c)\mathrm{sin}\alpha _{int}`$, where $`n_i`$ is the refractive index of the layer and $`\alpha _{int}`$ is the internal angle in the $`n_i`$ layer. In figure 3 the band structure is redrawn in terms of internal angles in the low index layer. One can see that the overall forbidden gap is open for all external incident angles for both fundamental polarizations, forming an omnidirectional total reflection band (gray area in figure 3). ### II-B Optimization criteria To design an omnidirectional mirror, one needs to have total reflection for all incident angles and all polarization states. The TM photonic band gap is narrower than the TE one and so defines the bandwidth of an omnidirectional reflection band. The upper edge of the reflection band corresponds to the upper edge of the forbidden gap at normal incidence. The lower edge is defined by the intersection of the ambient-medium light-line with the upper edge of the corresponding TM band (Fig. 2). For given parameters of the periodic structure, the refractive index of an ambient medium may be used to control the bandwidth. By increasing the refractive index of an ambient medium from $`n=1`$ to some $`n=n_{max}`$ the bandwidth decreases until an omnidirectional reflection band closes up.
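As an aside, the internal-cone/Brewster-angle geometry invoked above is easy to check for the quoted indices (our own check):

```python
# Internal-cone vs Brewster-angle check for n = 1.0, n1 = 1.4, n2 = 3.4:
import numpy as np
n, n1, n2 = 1.0, 1.4, 3.4
alpha_max = np.degrees(np.arcsin(n / n1))  # internal-cone half-angle, ~45.6 deg
alpha_B = np.degrees(np.arctan(n2 / n1))   # Brewster angle, ~67.6 deg
print(alpha_max < alpha_B)   # True: the Brewster window is unreachable from outside
```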
In figure 4 the maximum refractive index of the ambient medium $`n_{max}`$, for which the first omnidirectional reflection band closes up, is presented as a function of the index ratio $`\delta n=n_2/n_1`$ for various values of the refractive index of the low index layer $`n_1`$ and a fixed filling fraction $`\eta =d_2/\mathrm{\Lambda }=0.5`$. The filling fraction $`\eta =d_2/\mathrm{\Lambda }`$ optimizes the relative reflection bandwidth $`\mathrm{\Delta }\omega /\omega _0`$ of an omnidirectional reflection band with respect to the given refractive indices of the layers constituting the 1D photonic crystal and the index of the ambient medium. Here $`\mathrm{\Delta }\omega `$ is the width of the omnidirectional reflection band and $`\omega _0`$ is the central frequency. In figure 5 the overall band gaps leading to the omnidirectional total reflection bands are presented versus the filling fraction. For the first overall band gap, the gaps corresponding to normal and grazing incidence are presented. The solid curves are for normal incidence; the dashed (dotted) curves are for grazing incidence for TE (TM) polarization. The omnidirectional reflection band, which is due to the overlap of the gaps corresponding to normal and grazing incidence, is depicted as the shaded area. Omnidirectional reflection bands of higher order also open up. The inset (Fig. 5) shows the relative bandwidth of the first total reflection band versus filling fraction. There is a clear optimum filling fraction $`\eta _{opt}`$ leading to the maximum of the relative bandwidth. We further present a set of contour plots (Fig. 6) which provides the full information about the first omnidirectional total reflection band for given parameters of the system. The optimal filling fraction and corresponding central frequency are shown in figures 6 (a) and 6 (b), respectively, as functions of the index ratio $`\delta n`$ for different values of the index ratio $`\delta n_0`$. The dashed curve in figure 6 (a) corresponds to the filling fraction of a quarter-wave stack, which is $`\eta _{\lambda /4}=1/(1+\delta n)`$. Within the given parameter range a quarter-wave stack is not the optimal configuration to reach the maximum relative bandwidth of the omnidirectional reflection band; however, it gives a relative bandwidth which is usually a few percent smaller than the optimal one \[Fig. 6 (c)\]. In figure 6 (c) the optimal relative bandwidth is depicted as a function of the index ratio $`\delta n`$ for different values of the index ratio $`\delta n_0`$. A wide omnidirectional total reflection band exists for reasonable values of both $`\delta n`$ and $`\delta n_0`$. To obtain an omnidirectional band with bandwidth larger than 5%, the index ratios should be larger than 1.5 ($`\delta n>1.5,\delta n_0>1.5`$). A decrease in one of the index ratios is partially compensated by an increase in the other one. For the SiO<sub>2</sub>/Si ($`n_1=1.4`$, $`n_2=3.4`$) structure in air ($`n=1.0`$), the omnidirectional reflection band is centered at the normalized frequency $`\omega \mathrm{\Lambda }/2\pi c=0.275`$ with the optimal filling fraction $`\eta _{opt}=0.324`$. The relative bandwidth is about 25%. To obtain omnidirectional reflection centered at the radiation wavelength $`\lambda =1.5\mu m`$, one needs a period of the structure of about $`0.412\mu m`$.
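The design numbers just quoted follow from simple arithmetic (our own check):

```python
# Quarter-wave filling fraction and the period for a 1.5 um band centre:
delta_n = 3.4 / 1.4             # index ratio of the SiO2/Si stack
eta_qw = 1.0 / (1.0 + delta_n)  # ~0.292, a few percent below eta_opt = 0.324
period_um = 0.275 * 1.5         # Lambda = (omega*Lambda/2*pi*c) * lambda
print(round(eta_qw, 3), round(period_um, 3))   # ~0.292, ~0.412
```

### II-C Perfect mirror or more?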
To suppress substantially the spontaneous emission of radiating species placed inside the layer of a periodic stack, the layer must be free of all unbound and bound modes. For the structure presented (Fig. 3), the overall forbidden gap is open up to grazing angles in the case of TE polarized radiation, while it closes up near an internal angle of $`60^{\circ }`$ for TM polarization. The internal angle in the low index layer, for which the overall forbidden gap closes up, is presented in figure 7 as a function of the index ratio $`\delta n`$. The black dot corresponds to the stack parameters as in figure 3: $`n_1=1.4`$, $`n_2=3.4`$. For all index ratios $`\delta n`$ which are to the right of the dashed vertical line, an overall TE forbidden gap is open for all internal angles. For such design parameters, the low index layer of the structure can be completely free of TE polarized propagating modes. To ensure the layer is free of all modes, bound modes, which are essentially guided modes, should be absent as well. However, there must be no modes within the low index layer which can couple to the high-index-layer modes, including guided modes . The radiation of emitting species embedded in the low index layer may be inhibited over about a $`\pm 60^{\circ }`$ aperture of internal angles (Fig. 3). Prospects for partial spontaneous emission control using this kind of 1D periodic structure are discussed in : introducing a low index defect layer, Russell et al. resolve the Brewster window problem, namely, they design a defect layer which may be free of all unbound and bound modes. A properly designed defect layer may provide emission control over a solid angle of $`4\pi `$ steradians. Another way to resolve the Brewster window problem is associated with anisotropic periodic media . Forbidden band gaps of such a periodic structure need not shrink to zero anywhere, so the low index layers of the structure may be free of all electromagnetic modes. ## III Experimental results We have chosen to check the theoretical predictions at optical wavelengths. A lattice consisting of 19 layers of Na<sub>3</sub>AlF<sub>6</sub> and ZnSe ($`n_1=1.34`$ and $`n_2=2.5-2.8`$, respectively) was fabricated by standard optical technology using layer-by-layer deposition of the materials on a glass substrate. The multilayer stack was terminated at both ends with a ZnSe layer. The thickness of each layer was $`d_1=d_2=90`$ nm. An omnidirectional total reflection was expected within the spectral band $`\mathrm{\Delta }\lambda =604.3-638.4`$ nm, with a relative bandwidth of 5.3%. Transmission spectra for TE- and TM-polarizations at different incident angles in the range of $`0-60^{\circ }`$ were measured using a ’Cary 500’ spectrophotometer (right panel in figure 8). The calculated transmission spectra are depicted in the left panel of figure 8; a good agreement is obtained. From figure 8 one can see that for the spectral range 600–700 nm the transmission coefficient is very low for both polarizations. The absolute value of transmission for TE-polarization in the spectral range 630–700 nm was less than 0.001 within the $`\pm 60^{\circ }`$ aperture, corresponding to a reflection coefficient of 99.9%. To reach higher values of the angle of incidence, a simple set-up consisting of a He-Ne laser and a CCD detector was used. The intensity of the laser beam passed through the sample was detected by the CCD camera. The sample was mounted on a rotational stage to allow different angles of incidence.
With this set-up one can directly determine the transmission coefficient of samples at angles up to $`70^{\circ }`$. For larger angles it is necessary to measure the reflection coefficient of the samples. The dependence of the transmission coefficients for TE- and TM-polarized incident radiation of a He-Ne laser at 632.8 nm on the angle of incidence is presented in figure 9. For TM-polarization, circles depict the directly measured transmission coefficient, and squares depict data obtained from reflection measurements. The mismatch between them can be explained by additional reflection from the air–ZnSe, ZnSe–substrate and substrate–air interfaces. The solid (dashed) curve in figure 9 gives the theoretically calculated transmission coefficients for TE- (TM-) polarized light. As can be seen from figure 9, the transmission coefficient of TM-polarized radiation remains below $`10^{-3}`$ over a wide angular range. Due to the Brewster effect at the air–ZnSe, ZnSe–substrate and substrate–air interfaces at large angles, it increases to 0.33 at $`80^{\circ }`$ and then decreases again. In contrast, the transmission of TE-polarized radiation decreases monotonically with growing angle of incidence, being less than $`10^{-5}`$ for angles larger than $`40^{\circ }`$. Transmission coefficients of less than $`10^{-5}`$ are beyond the capabilities of the experimental set-up used. For this reason, the transmitted signal at more than $`60^{\circ }`$ cannot be detected. Because of this, no data points for TE-polarization at these angles are presented in figure 9. The reflectivity of TM-polarized radiation at large angles can be enhanced in structures terminated at both ends with the low index layer, Na<sub>3</sub>AlF<sub>6</sub> (Fig. 9). In this case, transmission at $`80^{\circ }`$ is as small as 0.03, corresponding to a reflection coefficient of 97%. The overall reflectivity can be enhanced in structures with a larger number of layers. ## IV Conclusions In summary, we have demonstrated the possibility to achieve total omnidirectional reflection with one-dimensional periodic dielectric structures. The origins of the total omnidirectional reflection have been discussed. Optimization criteria for the design of an omnidirectional totally reflecting mirror have been presented. We have found that for reasonable values of the structure parameters ($`\delta n>1.5,\delta n_0>1.5`$) a relatively large omnidirectional total reflection band ($`>5\%`$) may be obtained, making the fabrication of a perfect all-dielectric thin-film mirror feasible. The possibility of partial spontaneous emission control with one-dimensional periodic structures has been discussed. The experimental demonstration of the mirror has been presented at optical frequencies.
# THE STELLAR INITIAL MASS FUNCTION <sup>1</sup> Presented at the conference on “Star Formation 1999”, Nagoya, Japan, June 21–25, 1999; to be published, edited by T. Nakamoto Richard B. Larson Yale Astronomy Department New Haven, CT 06520-8101, USA larson@astro.yale.edu ABSTRACT The current status of both the observational evidence and the theory of the stellar initial mass function (IMF) is reviewed, with particular attention to the two basic, apparently universal features shown by all observations of nearby stellar systems: (1) a characteristic stellar mass of the order of one solar mass, and (2) a power-law decline of the IMF at large masses similar to the original Salpeter law. Considerable evidence and theoretical work support the hypothesis that the characteristic stellar mass derives from a characteristic scale of fragmentation in star-forming clouds which is essentially the Jeans scale as calculated from the typical temperature and pressure in molecular clouds. The power-law decline of the IMF at large masses suggests that the most massive stars are built up by scale-free accretion or accumulation processes, and the observed formation of these stars in dense clusters and close multiple systems suggests that interactions between dense prestellar clumps or protostars in forming clusters will play a role. A simple model postulating successive mergers of subsystems in a forming cluster, accompanied by the accretion of a fraction of the residual gas by the most massive protostar during each merger, predicts an upper IMF of power-law form and reproduces the Salpeter law with a plausible assumed accretion efficiency. 1 Introduction The stellar initial mass function (IMF), or distribution of masses with which stars are formed, is the most fundamental output function of the star formation process, and it controls nearly all aspects of the evolution of stellar systems. The importance of understanding the origin of the IMF and its possible universality has therefore been a stimulus for much research on star formation, both theoretical and observational, and interest in this subject is of long standing, going back at least to the pioneering study by Nakano (1966) of some of the processes that might be responsible for determining the stellar IMF. In recent years there has been much progress in observational studies relating to the IMF, and somewhat more modest progress in reaching a theoretical understanding of its origin; here I review briefly the current status of both the observational evidence and the theoretical ideas concerning the origin of the IMF. Other recent reviews of the observations and the theory of the IMF have been given by Scalo (1998), Clarke (1998), Larson (1998, 1999), Elmegreen (1999), and Meyer et al. (2000). 2 Basic Observed Features of the Stellar IMF Numerous observational studies have been carried out to measure or constrain the IMF in systems with as wide a range in properties as possible in order to establish whether it is universal or whether it varies with place or time, depending for example on parameters such as metallicity. The regions that have been studied with direct star counts so far include the local field star population in our Galaxy and many star clusters of all ages and metallicities in both our Galaxy and the Magellanic Clouds. As summarized below, this large body of direct evidence does not yet demonstrate convincingly any variability of the IMF, although the uncertainties are still large.
Some indirect evidence based on the photometric properties of more distant and exotic systems suggests that the IMF may vary in extreme circumstances, possibly being more top-heavy in starbursts and high-redshift galaxies (Larson 1998), but this indirect evidence is less secure and will not be discussed further here. As reviewed by Miller & Scalo (1979), Scalo (1986, 1998), Kroupa (1998), and Meyer et al. (2000), the IMF derived for the field stars in the solar neighborhood exhibits an approximate power-law decline with mass above one solar mass that is consistent with, or somewhat steeper than, the original Salpeter (1955) law; however, below one solar mass the IMF of the field stars clearly flattens, showing a possible broad peak at a few tenths of a solar mass in the number of stars per unit logarithmic mass interval. If the logarithmic slope $`x`$ of the IMF is defined by $`dN/d\mathrm{log}m\propto m^{-x}`$, then the slope at large masses is $`x\approx 1.5`$, while the slope at small masses is $`x\approx 0`$, the range of values or uncertainty in $`x`$ being about $`\pm 0.5`$ in each case. The IMF inferred for the local field stars is subject to significant uncertainty, especially in the range around one solar mass, because it depends on the assumed evolutionary history of the local Galactic disk and on assumed stellar lifetimes. In contrast, the IMFs of individual star clusters can be derived with fewer assumptions and should be more reliable, since all of the stars in each cluster have the same age and since, at least in the youngest clusters, all of the stars ever formed are still present and can be directly counted as a function of mass without the need for evolutionary corrections. Much effort has therefore gone into determining IMFs for clusters with a wide range of properties in both our Galaxy and the Magellanic Clouds. As reviewed by von Hippel et al. (1996), Hunter et al. (1997), Massey (1998), and Scalo (1998), the results of these studies are generally consistent with the IMF inferred for the local field stars, and the values found for the slope $`x`$ of the IMF above one solar mass generally scatter around the Salpeter value $`x=1.35`$ (see figure 5 of Scalo 1998). In all cases in which it has been possible to observe low-mass stars, the cluster IMFs also show a flattening below one solar mass. No clear evidence has been found for any systematic dependence of the IMF on any property of the systems studied, and this has led to the current widely held view that the IMF is universal, at least in the local universe. Recent studies have provided more information about very faint stars and brown dwarfs, and the IMF estimated for them remains approximately flat or shows only a moderate decline into the brown dwarf regime, consistent with an extrapolation of the IMF of lower main sequence stars and showing no evidence for any abrupt truncation at low masses (Basri & Marcy 1997; Martín et al. 1998; Bouvier et al. 1998; Reid 1998). Another area of recent progress has been the determination of IMFs for a number of newly formed star clusters that still contain many pre-main-sequence stars; as reviewed by Meyer et al. (2000), these results again show general consistency with the field star IMF, including a similar flattening below one solar mass and a possible broad peak at a few tenths of a solar mass.
Significant numbers of brown dwarf candidates have been found in these young clusters, and although the derivation of an IMF for them is complicated by the need to know their ages accurately, the results again suggest an IMF that is flat or moderately declining at the low end (Luhman & Rieke 1999; Hillenbrand & Carpenter 1999). In summary, within the still rather large uncertainties, all of the data that have been described are consistent with a universal IMF that is nearly flat at low masses and that can be approximated by a declining power law with a slope similar to the original Salpeter slope above one solar mass. The fact that the IMF cannot be approximated by a single power law at all masses but flattens below one solar mass means that there is a characteristic stellar mass of the order of one solar mass such that most of the mass that condenses into stars goes into stars with masses of this order. In fact, a more robust statement about the IMF than any claimed functional form is the fact that about 75% of the mass that forms stars goes into stars with masses between 0.1 and $`10`$ M$`_{\odot }`$, while about 20% goes into stars more massive than $`10`$ M$`_{\odot }`$ and only 5% into stars less massive than $`0.1`$ M$`_{\odot }`$ (these fractions are checked numerically below). The existence of this characteristic stellar mass is the most fundamental fact needing to be explained by any theoretical understanding of star formation. The second fundamental fact to be explained is that a significant fraction of the mass goes into massive stars in a power-law tail of the IMF extending to masses much larger than the characteristic mass. Possible theoretical explanations of these two basic facts will be discussed in the following sections. 3 The Origin of the Characteristic Stellar Mass For some years, there have been two contending viewpoints about the origin of the characteristic stellar mass, one holding that it results from a characteristic mass scale for the fragmentation of star-forming clouds (e.g., Larson 1985, 1996), and the other holding that it results from the generation of strong outflows at some stage of protostellar accretion (e.g., Adams & Fatuzzo 1996); both effects might in fact play some role, as reviewed by Meyer et al. (2000). The fragmentation hypothesis for the origin of the characteristic mass has recently received support from observations showing that the $`\rho `$ Ophiuchus cloud contains many small, apparently pre-stellar clumps with masses between 0.05 and $`3`$ M$`_{\odot }`$ whose properties are consistent with their having been formed by the gravitational fragmentation of the cloud, and whose mass spectrum is very similar to the stellar IMF discussed above, including the flattening below one solar mass (Motte, André, & Neri 1998; see also André 2000; André, Ward-Thompson, & Barsony 2000). In particular, the mass spectrum of the clumps in the $`\rho `$ Oph cloud is quite similar to the mass spectrum of the young stars observed in this cloud (Luhman & Rieke 1999), suggesting that the IMF of the stars derives directly from the mass spectrum of the clumps. A clump mass spectrum consistent with the stellar IMF has also been found in the Serpens cloud by Testi & Sargent (1998), and additional evidence for a possible mass scale of order one solar mass in the structure of molecular clouds has been reviewed by Evans (1999) and Williams, Blitz, & McKee (2000).
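As noted above, the quoted mass fractions can be reproduced with a deliberately crude two-segment IMF; the check below, with its assumed slopes ($`x=0`$ below 1 M$`_{\odot }`$, the Salpeter $`x=1.35`$ above) and assumed mass limits (0.01 to 100 M$`_{\odot }`$), is ours, not Larson's:

```python
# Mass fractions for dN/dlog m ~ m^(-x), x = 0 (m < 1 Msun), 1.35 (m > 1 Msun).
import numpy as np

def mass_in(lo, hi):
    """Integral of m * dN/dlog m over [lo, hi], masses in solar units."""
    m = np.logspace(np.log10(lo), np.log10(hi), 200001)
    x = np.where(m < 1.0, 0.0, 1.35)
    return np.trapz(m ** (1.0 - x), np.log(m))

total = mass_in(0.01, 100.0)
for lo, hi in ((0.1, 10.0), (10.0, 100.0), (0.01, 0.1)):
    print(lo, hi, round(mass_in(lo, hi) / total, 2))  # ~0.76, ~0.21, ~0.03
```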
Although the original analysis of Jeans (1929) showing the existence of critical length and mass scales for the fragmentation of a collapsing cloud was not self-consistent, rigorous stability analyses that yield dimensionally equivalent results can be made for various equilibrium configurations, including sheets, disks, and filaments (Spitzer 1978; Larson 1985). In all cases, there is a predicted characteristic mass scale for fragmentation that is a few times $`c^4/G^2\mu `$, where $`c`$ is the isothermal sound speed and $`\mu `$ is the surface density of the assumed equilibrium configuration. For a typical molecular cloud temperature of $`10`$ K and a typical surface density of 100 M$`_{\odot }`$ pc<sup>-2</sup>, this mass scale is about one solar mass, similar to the observed typical stellar mass (Larson 1985). Alternatively, if collapsing pre-stellar clumps form not by the fragmentation of equilibrium configurations but as condensations in a medium with some characteristic ambient pressure $`P`$, the minimum mass that can collapse gravitationally is that of a marginally stable ‘Bonnor-Ebert’ sphere with a boundary pressure $`P`$, or $`1.18c^4/G^{3/2}P^{1/2}`$ (Spitzer 1968). Since any self-gravitating configuration has an internal pressure $`P\approx \pi G\mu ^2/2`$, this result is dimensionally equivalent to the fragmentation scale given above, and it can be regarded as a different expression for the same basic physical quantity, which can still conveniently be called the ‘Jeans mass’. It is not yet clear to what extent star-forming molecular clouds or their denser subregions can be regarded as equilibrium configurations, and it may instead be that much of the structure in these clouds consists of transient density fluctuations generated by supersonic turbulence (Larson 1981). Some of the filamentary structure in molecular clouds may be created by violent dynamical phenomena in an active star-forming environment (Bally et al. 1991), and simulations of turbulence in the interstellar medium often show the appearance of transient filamentary features that form where supersonic turbulent flows converge (Vázquez-Semadeni et al. 1995, 2000; Ballesteros-Paredes et al. 1999). In the presence of gravity, some of the densest clumps produced in this way may become self-gravitating and begin to collapse; the initial state for their collapse might then be roughly approximated by a marginally stable Bonnor-Ebert sphere whose boundary pressure is determined by the ram pressure of the turbulent flow. If there is a rough balance between turbulent pressure and gravity in molecular clouds, the turbulent pressure will be approximately equal to the gravitational pressure $`P\approx \pi G\mu ^2/2`$, yielding a pressure $`P\approx 3\times 10^5`$ cm$`^{-3}`$ K for a typical surface density $`\mu \approx 100`$ M$`_{\odot }`$ pc<sup>-2</sup>. Alternatively, a typical pressure may be estimated by noting that the correlations among linewidth, size, and density that hold among many molecular clouds (Larson 1981; Myers & Goodman 1988) imply that these clouds all have similar turbulent ram pressures $`\rho v^2`$, for which a typical value is again approximately $`3\times 10^5`$ cm$`^{-3}`$ K. For a marginally stable Bonnor-Ebert sphere with a temperature of 10 K bounded by this pressure, the predicted mass and radius are about $`0.7`$ M$`_{\odot }`$ and $`0.03`$ pc respectively (Larson 1991, 1996, 1999).
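The quoted mass scale is easy to verify in cgs units (our own check; the mean molecular weight of 2.33 assumed for molecular gas is our choice, and `mu` below denotes molecular weight, not the surface density used above):

```python
# M_BE = 1.18 c^4 / (G^(3/2) P^(1/2)) for T = 10 K and P/k = 3e5 cm^-3 K.
import math
k_B, m_H, G, M_sun = 1.381e-16, 1.673e-24, 6.674e-8, 1.989e33   # cgs
T, mu = 10.0, 2.33            # K; assumed mean molecular weight of H2 gas
c2 = k_B * T / (mu * m_H)     # isothermal sound speed squared
P = 3e5 * k_B                 # boundary pressure in dyn cm^-2
M_BE = 1.18 * c2**2 / (G**1.5 * math.sqrt(P))
print(M_BE / M_sun)           # ~0.7 solar masses, as stated
```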
Although factors of 2 may not be very meaningful, these quantities are similar in magnitude to the typical masses and sizes of the pre-stellar clumps observed in molecular clouds (e.g., Motte et al. 1998) and to the characteristic stellar mass noted above. Thus there may indeed be an intrinsic mass scale in the star formation process, and this mass scale may be essentially the Jeans mass as defined above. There are also some hints that there may be a corresponding size scale for star-forming clumps. Analyses of the spatial distributions of the newly formed T Tauri stars in several regions show the existence of two regimes in a plot of average companion surface density versus separation, namely a binary regime with a steep slope at small separations and a clustering regime with a shallower slope at large separations, with a clear break between them at a separation of about $`0.04`$pc that may represent the size of a typical collapsing pre-stellar clump (Larson 1995; Simon 1997). Although this interpretation of the observations is not unique and the scale of the break may also depend on superposition effects and on the dynamical evolution of the system (Nakajima et al. 1998; Bate, Clarke, & McCaughrean 1998), the interpretation of the break in terms of a typical clump size may still be valid in low-density regions like Taurus where these effects may not be as important as in denser regions. A similar size scale has been found by Ohashi et al. (1997) in a study of the rotational properties of collapsing pre-stellar clumps, which shows that their specific angular momentum is apparently conserved on scales smaller than about $`0.03`$pc; this may represent the characteristic size of a region that collapses rapidly to form a star or binary system (see also Ohashi 2000; Myers, Evans, & Ohashi 2000). Finally, an analysis of the internal kinematics of star-forming cloud cores by Goodman et al. (1998) shows a transition from a turbulent regime on large scales, where the linewidth increases systematically with region size, to a regime of ‘velocity coherence’ on scales smaller than about $`0.1`$pc, where the linewidth becomes nearly independent of region size. These authors suggest that this change in kinematic behavior is related to the break between the clustering and binary regimes for the T Tauri stars noted above, and they suggest that it has the same basic cause, namely a transition from chaotic dynamics on large scales to more ordered behavior on small scales. Such a transition might be expected because molecular clouds are dominated by turbulent and magnetic pressures on large scales and by thermal pressure on small scales (Larson 1981; Myers 1983), and the transition between the two regimes is in fact what defines the Jeans scale when the latter is calculated by assuming pressure balance between a thermally supported isothermal clump and a turbulent ambient medium. All of the evidence described here is thus consistent with the existence of a scale in the star formation process which is essentially the Jeans scale as derived above. The characteristic stellar mass may thus depend, via the Jeans mass, on the typical temperature and pressure in star-forming clouds, being proportional to $`T^2/P^{1/2}`$. The temperatures of molecular clouds are controlled by radiative processes, but their pressures are probably of dynamical origin and result from the cloud formation process, since their internal pressures are much higher than the general pressure of the interstellar medium (Larson 1996). 
Molecular clouds are probably created by the collisional agglomeration of smaller, mostly atomic clouds in regions where large-scale converging flows assemble the atomic clouds into large complexes. The resulting cloud collisions produce a ram pressure $`\rho v^2`$ which may determine the typical internal pressure of the molecular clouds formed. If the typical density of the colliding clouds is 20 atoms per cm<sup>3</sup> and if they collide with a velocity of 10 km s<sup>-1</sup>, the ram pressure produced is $`3\times 10^5`$ cm$`^{-3}`$ K, similar to the inferred internal pressures of molecular clouds. Thus the typical pressures in molecular clouds can be understood in terms of the structure and dynamics of the atomic component of the interstellar medium. It may further be possible to understand the properties of the atomic clouds in terms of the classical two-phase model of the ISM, which postulates a balance in thermal pressure between a cool cloud component and a warm intercloud component and predicts cloud densities of a few tens of atoms per cm<sup>3</sup> (Field, Goldsmith, & Habing 1969; Wolfire et al. 1995). Thus it may be possible to understand the characteristic temperatures and pressures of molecular clouds, and hence the characteristic stellar mass, in terms of relatively well-studied thermal and dynamical properties of the interstellar medium (Larson 1996). If the mass scale for star formation depends on the temperature and pressure in star-forming clouds as predicted above, one might expect to see some variability of the IMF between regions with different properties; for example, clouds with higher temperatures might be expected to form stars with a higher characteristic mass (Larson 1985). There is possible evidence for such an effect in extreme cases such as starburst systems and high-redshift galaxies (Larson 1998), but no clear dependence of the IMF on the temperature or other properties of star-forming clouds has been found in local star-forming regions. In fact, clouds with higher temperatures generally also have much higher pressures, so there is a partial cancellation of these effects when the Jeans mass is calculated, and it is not clear that one effect or the other dominates. Elmegreen (1999) has argued that such an approximate cancellation of effects is to be expected for physical reasons since the cloud temperature depends on radiative heating rates while the overall pressure of the ISM depends on the local column density of matter in a galaxy, both of which increase with the stellar surface density in such a way that $`T^2/P^{1/2}`$ is approximately constant. 4 The Formation of Massive Stars and the Origin of the Power-Law Upper IMF The second basic fact about star formation needing to be explained is that the IMF has a power-law tail extending to masses much larger than the characteristic mass, such that about 20% of the total mass goes into stars more massive than $`10`$ M$`_{\odot }`$. Most of the feedback effects of star formation on the evolution of galaxies depend on energy input from these massive stars, so it is clearly of great importance to understand the origin and possible universality of the upper IMF. At present the formation of massive stars is relatively poorly understood, both observationally and theoretically, so most of what can be said about the origin of the upper IMF remains speculative.
Recent observational and theoretical progress in understanding the formation of massive stars has been reviewed by Evans (1999), Garay & Lizano (1999), and Stahler, Palla, & Ho (2000). A theoretical constraint on the formation processes of massive stars is provided by the fact that, for stellar masses larger than about $`10`$ M$`_{\odot }`$, radiation pressure begins to exceed gravity in the infalling envelope around an accreting protostar (Wolfire & Cassinelli 1987); this means that standard radial infall models probably cannot account for the formation of stars much more massive than about $`10`$ M$`_{\odot }`$, although such models may still suffice for stars of up to about this mass (Stahler et al. 2000). Therefore, non-spherical or non-uniform accretion processes are probably required to continue building up the most massive stars. One possibility is that the infalling gas settles into a disk which can then be accreted without hindrance from radiation pressure (Nakano 1989; Jijina & Adams 1996). Evidence that disks may play a role in the formation of massive stars has been reviewed by Garay & Lizano (1999), but the role of disks for massive stars is not as clear as in the case of low mass stars. Another possibility is that the formation of massive stars involves the accretion of very dense clumps, or of dense circumstellar matter accreted as a result of interactions among protostars in a forming cluster of stars (Larson 1982, 1990). Relevant observational evidence is provided by the fact that newly formed massive stars are always found to be surrounded by clusters of less massive stars, the more massive stars tending to have larger associated clusters (Hillenbrand 1995; Testi, Palla, & Natta 1999; Garay & Lizano 1999). This means that the conditions that favor the formation of massive stars also favor the formation of many less massive stars in the same vicinity. The most massive stars in young clusters tend to be centrally located in these clusters, as is exemplified by the Trapezium system (Larson 1982; Zinnecker, McCaughrean, & Wilking 1993; Hillenbrand & Hartmann 1998), and this can be understood only if these stars were in fact formed near the cluster center (Bonnell & Davies 1998). Massive stars also have a high frequency of massive companions, and even the runaway O stars must have been formed in close proximity to other massive stars in very dense stellar systems (Stahler et al. 2000). All of this evidence indicates that massive stars form only in regions of exceptionally high density along with many less massive stars, and that they typically form in very close proximity to other massive stars. Massive star-forming cloud cores also show evidence for more internal substructure than the less massive cores that have been more widely studied (Evans 1999). Interactions among the many dense pre-stellar clumps and accreting protostars that must exist in such an environment will therefore almost certainly play a role in the accretional growth of the most massive stars, perhaps accounting for the accretion by them of matter that is sufficiently dense to overcome the effects of radiation pressure. Large amounts of matter must be accumulated very rapidly to form a massive star, so the process must be a rather violent one. An extreme case of such a violent formation process, which must sometimes happen, would be the merging of two already-formed less massive stars (Bonnell, Bate, & Zinnecker 1998; Stahler et al. 2000).
If no new mass scale larger than the Jeans mass enters the problem, it is possible that the accumulation processes involved in the formation of the massive stars might proceed in an approximately scale-free fashion to build up a power-law upper IMF. Several types of approximately scale-free accumulation models have been considered in efforts to explain how a power-law upper IMF might be produced. The first to be developed in some detail was that of Nakano (1966), who suggested that clumps formed by the fragmentation of a collapsing cloud would collide randomly and sometimes coalesce to create a spectrum of clump masses extending to values much larger than the initial fragment mass; he showed that this process could yield an approximate power-law mass spectrum similar to the observed IMFs of some star clusters. Such models were elaborated further by Arny & Weissman (1973), Silk & Takahashi (1979), Pumphrey & Scalo (1983), and Nakano, Hasegawa, & Norman (1995). A second possibility is that protostars might continue to accrete ambient matter gravitationally at a rate that increases with their mass, as is true for Bondi-Hoyle accretion whose rate increases with the square of the mass; this process can build up a power-law tail on the IMF with a slope $`x=1`$ (Zinnecker 1982). A third possibility is that if stars form in a hierarchy of groups and subgroups, and if accumulation processes tend to build more massive stars in the more massive subgroups of such a hierarchy, then a power-law upper IMF can be produced (Larson 1991, 1992). Most stars do indeed form in clusters, and in at least some cases there is evidence for hierarchical subclustering (Zinnecker et al. 1993; Gomez et al. 1993; Larson 1995; Elmegreen et al. 2000). Since the larger subgroups in such a hierarchy contain more ‘raw materials’ from which to build massive stars, they will almost certainly produce stars with a mass spectrum extending to a larger maximum mass. If the mass $`M_{\mathrm{max}}`$ of the most massive star formed in any subgroup increases with a power $`n<1`$ of the mass of the subgroup, i.e. if $`M_{\mathrm{max}}\propto M_{\mathrm{group}}^n`$, and if all stars form in a self-similar hierarchy of such groups, then a power-law IMF is generated whose slope is $`x=1/n`$ (Larson 1992). For example, the IMF slope $`x=1.4\pm 0.4`$ suggested by the evidence discussed in Section 2 could be reproduced if $`n`$ were $`0.7\pm 0.2`$. One hypothesis involving hierarchical structure that has been developed further is that star-forming clouds have fractal structures, and that the universal power-law upper IMF results from a universal fractal cloud structure produced by turbulence (Larson 1992, 1995; Elmegreen 1997, 1999). In the model of Larson (1992), stars are assumed to form by gas accumulation along filaments in a fractal filamentary network, and the resulting IMF slope $`x`$ is equal to the fractal dimension $`D`$ of the network. Elmegreen (1997) has proposed a more generic model in which stars form by random selection from different levels of any fractal hierarchy. However, while there is evidence that molecular clouds have fractal boundary shapes, it is less clear that they have fractal mass distributions, and most of their mass cannot plausibly be fractally distributed but must have a smoother spatial distribution.
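As an aside, the Bondi-Hoyle slope $`x=1`$ quoted above follows from a short calculation that the text does not spell out (the derivation here is ours). Integrating $`dm/dt=Cm^2`$ gives $`m(t)=m_0/(1-Cm_0t)`$, so that $`dm/dm_0=(m/m_0)^2`$. For stars drawn from a narrow range of initial masses $`m_0`$, conservation of their number then implies $`dN/dm=(dN/dm_0)(dm/dm_0)^{-1}\propto m^{-2}`$, and hence $`dN/d\mathrm{log}m=m\,dN/dm\propto m^{-1}`$, i.e. a power-law tail with $`x=1`$ in the convention of Section 2.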
In any case, even if a fractal picture were correct, the accumulation processes required to form stars in such a model would first form small stars from small cloud substructures before matter could be accumulated from larger regions to form more massive stars; the cloud regions that form massive stars would then contain substructure that has already begun to form less massive stars. Such a picture would predict the formation of massive stars only in clusters, as is indeed observed, but the interactions among star-forming clumps and protostars that would necessarily occur during the accumulation of matter to form the more massive stars were not taken into account in the above fractal models. Such interactions would almost certainly play a role in determining the final stellar mass spectrum. It may in fact be that all of the ideas mentioned above have some merit, and that a more realistic model will involve elements of all of them, namely clump collisions, continuing gas accretion, and hierarchical clustering. In its original form, the clump coagulation model of Nakano (1966) did not take into account the fact that clumps formed by the fragmentation of a contracting cloud will often begin to collapse into stars before colliding and interacting with each other. Many of the colliding clumps will then contain accreting protostars, and the effects of their interactions on the protostellar accretion process and on the structure of the forming system of stars will play an important role in its further development. Since these interactions will generally be dissipative, the star-forming clumps will tend to become bound into progressively larger and denser aggregates (Larson 1990). In this way, star clusters may be built up hierarchically by the merging of smaller subsystems, perhaps basically as in the clump coagulation model of Nakano (1966) but with the clumps replaced here by groups of forming stars. For a brief time, a newly formed cluster of stars may continue to show hierarchical subclustering, but this substructure will soon be erased by dynamical relaxation processes. As smaller systems of forming stars continue to merge into larger ones, the protostars in the most favored central locations may continue to gain mass from larger and larger accretion zones, building up an extended spectrum of stellar masses. Numerical simulations illustrate the likely importance of interactions for the continuing accretional growth of the more massive stars in such a scenario. Interactions between newly formed stars with residual disks can strongly perturb their surrounding disks, causing part of the disk matter to be ejected and part to be accreted by the central star (Heller 1991, 1995); in general, the more massive system tends to gain mass from the less massive one in such interactions. In the simulations of cloud fragmentation and accretion by Larson (1978), the most massive objects gained much of their final mass during episodes of rapid accretion associated with close encounters or mergers between dense clumps. Simulations of accretion processes in forming clusters of stars (Bonnell et al. 1997, 1998; Clarke, Bonnell, & Hillenbrand 2000) show the development of a broad spectrum of masses, the more massive objects tending to form near the cluster center where the accretion and interaction rates are highest; the most massive stars may even gain much of their final mass by mergers between already-formed stars (Bonnell et al. 1998; Stahler et al. 2000). 
The simple Bondi-Hoyle accretion model of Zinnecker (1982) assumes a protostellar accretion rate that increases with mass in qualitatively the expected way, and it predicts a rapidly increasing spread in protostellar masses and the growth of a power-law tail on the IMF that is qualitatively similar to what is observed. However, it also has the unrealistic feature that it predicts the unlimited runaway growth in mass of the most massive protostar because it is assumed to accrete matter from a region of unlimited size. More realistically, each protostar in a forming cluster will have an accretion zone of finite size associated with the subsystem in which it forms (Larson 1978), and the total amount of gas available to form massive stars will be limited by the size of the cluster. Since the gas supply is depleted as accreting protostars continue to gain mass from it, a decreasing amount of mass is available to build stars of higher and higher mass, resulting in an IMF with $`x>1`$ in which there is less and less mass in stars of increasing mass, as is observed. The amount of mass accreted by each protostar may then be determined by the amount of gas in the subsystem in which it forms, and by the effects of continuing interactions and mergers among the subsystems in a forming cluster; each such interaction or merger is likely to cause additional gas to be accreted by the most massive protostar present. One can easily construct simple interaction and accretion schemes based on these ideas that generate a power-law IMF. The only essential requirement is that the accretion processes involved are basically scale-free, that is, they do not depend on any new mass scale larger than the Jeans mass. This would be the case if, for example, each interaction or merger between two subsystems causes a constant fraction of the remaining gas to be accreted by the most massive protostar present. If we assume, in the simplest formulation of such a model, that the mass of the most massive protostar increases by a constant factor $`f`$ when the mass of the system to which it belongs increases by another constant factor $`g`$ because of a merger with another system (for example, $`g=2`$ for equal-mass mergers), then the mass of the most massive star formed in a cluster built up by a sequence of such mergers increases as a power $`n`$ of the cluster mass, where $`n=\mathrm{log}f/\mathrm{log}g`$. If all stars more massive than the Jeans mass are formed in a self-similar hierarchy of such merging subsystems, then the assumptions of the hierarchical clustering model of Larson (1991, 1992) are satisfied and a power-law upper IMF is produced that has a slope $`x=1/n`$. The Salpeter slope is recovered if, for example, $`g=2`$ and $`f=5/3`$; then $`n=\mathrm{log}(5/3)/\mathrm{log}2=0.74`$ and $`x=1/n=1.36`$. If the most massive protostar grows by accreting residual gas, then it can be shown that in this simple example, 1/6 of the remaining gas in the two subsystems is accreted during each merger. Conversely, if it is assumed that 1/6 of the remaining gas is accreted by the most massive protostar during each merger, then a Salpeter IMF is produced. If the fraction of the gas accreted in each merger varies between 1/10 and 1/4, then the predicted value of $`x`$ varies between 1.18 and 1.71. 
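The bookkeeping behind these numbers reduces to a one-line formula. A sketch (Python; the $`(g,f)`$ pair is the Salpeter example quoted above, and the helper function is an illustration rather than the authors' code; the mapping from the accreted gas fraction to $`f`$ requires the additional accretion bookkeeping described in the text and is not reproduced here):

```python
import math

def imf_slope(g, f):
    """Slope x = 1/n, with n = log f / log g, for a hierarchy in which
    each merger multiplies the system mass by g and the mass of the most
    massive protostar by f."""
    n = math.log(f) / math.log(g)
    return n, 1.0 / n

# Equal-mass mergers (g = 2) with f = 5/3 recover the Salpeter slope:
n, x = imf_slope(2.0, 5.0 / 3.0)
print(f"g = 2, f = 5/3 : n = {n:.2f}, x = {x:.2f}")   # n ~ 0.74, x ~ 1.36
```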
These results are not very sensitive to the assumption of equal-mass mergers; for example, if the mass ratio of the interacting subsystems is not 1 but 3, a typical value for clump coalescence models, and if again 1/6 of the remaining gas is accreted during each merger, the resulting IMF slope is $`x=1.44`$. These assumptions do not seem obviously implausible in the light of the observational evidence and the theoretical results noted above, and they all result in IMF slopes that are consistent with the observations, within the uncertainties. While it would be easy to construct more elaborate and perhaps more realistic accretion models that also yield power-law IMFs, what is really needed to advance our understanding of the origin of the upper IMF is better physical input regarding the processes involved in the accretional growth of massive stars, and estimates of the efficiency of these processes, for example the fraction of residual gas accreted during each interaction or merger between subsystems. The processes involved can now be studied in some detail with numerical simulations, which as noted above have already begun to simulate some of the processes likely to be important. At present these simulations do not provide sufficient quantitative information to test in any detail the kind of model that has been proposed. However, if more detailed simulations support the kind of interaction/accretion picture suggested above, and if the accretion processes involved are indeed approximately scale-free and characterized by similar efficiencies, important progress will have been made toward understanding the formation of massive stars and the origin of the upper IMF. Ultimately such simulations will have to reproduce not only the IMF of the massive stars but also the clustering and binary properties of these stars as well, and this test will place strong constraints on the models. Whatever processes may be involved, the formation of massive stars cannot be understood without explaining the striking facts that they form only in dense clusters, and typically in very close proximity to other massive stars. It seems almost unavoidable that complex and perhaps violent dynamical interactions will play an important role.

References

Adams, F. C., & Fatuzzo, M. 1996, ApJ, 464, 256
André, P. 2000, this conference
André, P., Ward-Thompson, D., & Barsony, M. 2000, in Protostars and Planets IV, eds. V. Mannings, A. P. Boss, & S. S. Russell (University of Arizona Press, Tucson), in press
Arny, T., & Weissman, P. 1973, AJ, 78, 309
Ballesteros-Paredes, J., Vázquez-Semadeni, E., & Scalo, J. 1999, ApJ, 515, 286
Bally, J., Langer, W. D., Wilson, R. W., Stark, A. A., & Pound, M. W. 1991, in Fragmentation of Molecular Clouds and Star Formation, IAU Symposium No. 147, eds. E. Falgarone, F. Boulanger, & G. Duvert (Kluwer, Dordrecht), 11
Basri, G., & Marcy, G. W. 1997, in Star Formation Near and Far, eds. S. S. Holt & L. G. Mundy (AIP Conference Proceedings 393, Woodbury, NY), 228
Bate, M. R., Clarke, C. J., & McCaughrean, M. J. 1998, MNRAS, 297, 1163
Bonnell, I. A., & Davies, M. B. 1998, MNRAS, 295, 691
Bonnell, I. A., Bate, M. R., Clarke, C. J., & Pringle, J. E. 1997, MNRAS, 285, 201
Bonnell, I. A., Bate, M. R., & Zinnecker, H. 1998, MNRAS, 298, 93
Bouvier, J., Stauffer, J. R., Martín, E. L., Barrado y Navascués, D., Wallace, B., & Béjar, V. J. S. 1998, A&A, 336, 490
Clarke, C. 1998, in The Stellar Initial Mass Function, eds. G. Gilmore & D. Howell (ASP Conference Series, Vol. 142, San Francisco), 189
Clarke, C. J., Bonnell, I. A., & Hillenbrand, L. A. 2000, in Protostars and Planets IV, eds. V. Mannings, A. P. Boss, & S. S. Russell (University of Arizona Press, Tucson), in press
Elmegreen, B. G. 1997, ApJ, 486, 944
Elmegreen, B. G. 1999, in The Evolution of Galaxies on Cosmological Timescales, eds. J. E. Beckman & T. J. Mahoney (ASP Conference Series, San Francisco), in press
Elmegreen, B. G., Efremov, Y., Pudritz, R. E., & Zinnecker, H. 2000, in Protostars and Planets IV, eds. V. Mannings, A. P. Boss, & S. S. Russell (University of Arizona Press, Tucson), in press
Evans, N. J. 1999, ARA&A, 37, in press
Field, G. B., Goldsmith, D. W., & Habing, H. J. 1969, ApJ, 155, L149
Garay, G., & Lizano, S. 1999, PASP, in press
Gomez, M., Hartmann, L., Kenyon, S. J., & Hewett, R. 1993, AJ, 105, 1927
Goodman, A. A., Barranco, J. A., Wilner, D. J., & Heyer, M. H. 1998, ApJ, 504, 223
Heller, C. H. 1991, PhD thesis, Yale University
Heller, C. H. 1995, ApJ, 455, 252
Hillenbrand, L. A. 1995, PhD thesis, Univ. of Massachusetts
Hillenbrand, L., & Carpenter, J. 1999, BAAS, 31, 906
Hillenbrand, L. A., & Hartmann, L. W. 1998, ApJ, 492, 540
Hunter, D. A., Light, R. M., Holtzman, J. A., Lynds, R., O’Neil, E. J., & Grillmair, C. J. 1997, ApJ, 478, 124
Jeans, J. H. 1929, Astronomy and Cosmogony (Cambridge University Press, Cambridge; reprinted by Dover, New York, 1961)
Jijina, J., & Adams, F. C. 1996, ApJ, 462, 874
Kroupa, P. 1998, in Brown Dwarfs and Extrasolar Planets, eds. R. Rebolo, E. L. Martín, & M. R. Zapatero-Osorio (ASP Conference Series, Vol. 134, San Francisco), 483
Larson, R. B. 1978, MNRAS, 184, 69
Larson, R. B. 1981, MNRAS, 194, 809
Larson, R. B. 1982, MNRAS, 200, 159
Larson, R. B. 1985, MNRAS, 214, 379
Larson, R. B. 1990, in Physical Processes in Fragmentation and Star Formation, eds. R. Capuzzo-Dolcetta, C. Chiosi, & A. Di Fazio (Kluwer, Dordrecht), 389
Larson, R. B. 1991, in Fragmentation of Molecular Clouds and Star Formation, IAU Symposium No. 147, eds. E. Falgarone, F. Boulanger, & G. Duvert (Kluwer, Dordrecht), 261
Larson, R. B. 1992, MNRAS, 256, 641
Larson, R. B. 1995, MNRAS, 272, 213
Larson, R. B. 1996, in The Interplay Between Massive Star Formation, the ISM and Galaxy Evolution, eds. D. Kunth, B. Guiderdoni, M. Heydari-Malayeri, & T. X. Thuan (Editions Frontières, Gif sur Yvette), 3
Larson, R. B. 1998, MNRAS, 301, 569
Larson, R. B. 1999, in The Orion Complex Revisited, eds. M. J. McCaughrean & A. Burkert (ASP Conference Series, San Francisco), in press
Luhman, K. L., & Rieke, G. H. 1999, ApJ, in press
Martín, E. L., Zapatero-Osorio, M. R., & Rebolo, R. 1998, in Brown Dwarfs and Extrasolar Planets, eds. R. Rebolo, E. L. Martín, & M. R. Zapatero-Osorio (ASP Conference Series, Vol. 134, San Francisco), 507
Massey, P. 1998, in The Stellar Initial Mass Function, eds. G. Gilmore & D. Howell (ASP Conference Series, Vol. 142, San Francisco), 17
Meyer, M. R., Adams, F. C., Hillenbrand, L. A., Carpenter, J. M., & Larson, R. B. 2000, in Protostars and Planets IV, eds. V. Mannings, A. P. Boss, & S. S. Russell (University of Arizona Press, Tucson), in press
Miller, G. E., & Scalo, J. M. 1979, ApJS, 41, 513
Motte, F., André, P., & Neri, R. 1998, A&A, 336, 150
Myers, P. C. 1983, ApJ, 270, 105
Myers, P. C., & Goodman, A. A. 1988, ApJ, 329, 392
Myers, P. C., Evans, N. J., & Ohashi, N. 2000, in Protostars and Planets IV, eds. V. Mannings, A. P. Boss, & S. S. Russell (University of Arizona Press, Tucson), in press
Nakajima, Y., Tachihara, K., Hanawa, T., & Nakano, M. 1998, ApJ, 497, 721
Nakano, T. 1966, Prog. Theor. Phys., 36, 515
Nakano, T. 1989, ApJ, 345, 464
Nakano, T., Hasegawa, T., & Norman, C. 1995, ApJ, 450, 183
Ohashi, N. 2000, this conference
Ohashi, N., Hayashi, M., Ho, P. T. P., Momose, M., Tamura, M., Hirano, N., & Sargent, A. I. 1997, ApJ, 488, 317
Pumphrey, W. A., & Scalo, J. M. 1983, ApJ, 269, 531
Reid, I. N. 1998, in The Stellar Initial Mass Function, eds. G. Gilmore & D. Howell (ASP Conference Series, Vol. 142, San Francisco), 121
Salpeter, E. E. 1955, ApJ, 121, 161
Scalo, J. 1986, Fundam. Cosmic Phys., 11, 1
Scalo, J. 1998, in The Stellar Initial Mass Function, eds. G. Gilmore & D. Howell (ASP Conference Series, Vol. 142, San Francisco), 201
Silk, J., & Takahashi, T. 1979, ApJ, 229, 242
Simon, M. 1997, ApJ, 482, L81
Spitzer, L. 1968, in Nebulae and Interstellar Matter (Stars and Stellar Systems, Vol. 7), eds. B. M. Middlehurst & L. H. Aller (University of Chicago Press, Chicago), 1
Spitzer, L. 1978, Physical Processes in the Interstellar Medium (Wiley-Interscience, New York)
Stahler, S. W., Palla, F., & Ho, P. T. P. 2000, in Protostars and Planets IV, eds. V. Mannings, A. P. Boss, & S. S. Russell (University of Arizona Press, Tucson), in press
Testi, L., & Sargent, A. I. 1998, ApJ, 508, L91
Testi, L., Palla, F., & Natta, A. 1999, A&A, 342, 515
Vázquez-Semadeni, E., Passot, T., & Pouquet, A. 1995, ApJ, 441, 702
Vázquez-Semadeni, E., Ostriker, E. C., Passot, T., Gammie, C. F., & Stone, J. M. 2000, in Protostars and Planets IV, eds. V. Mannings, A. P. Boss, & S. S. Russell (University of Arizona Press, Tucson), in press
von Hippel, T., Gilmore, G., Tanvir, N., Robinson, D., & Jones, D. H. P. 1996, AJ, 112, 192
Williams, J. P., Blitz, L., & McKee, C. F. 2000, in Protostars and Planets IV, eds. V. Mannings, A. P. Boss, & S. S. Russell (University of Arizona Press, Tucson), in press
Wolfire, M. G., & Cassinelli, J. P. 1987, ApJ, 319, 850
Wolfire, M. G., Hollenbach, D., McKee, C. F., Tielens, A. G. G. M., & Bakes, E. L. O. 1995, ApJ, 443, 152
Zinnecker, H. 1982, in Symposium on the Orion Nebula to Honor Henry Draper, eds. A. E. Glassgold, P. J. Huggins, & E. L. Schucking (Ann. New York Academy of Sciences, Vol. 395, New York), 226
Zinnecker, H., McCaughrean, M. J., & Wilking, B. A. 1993, in Protostars and Planets III, eds. E. H. Levy & J. I. Lunine (University of Arizona Press, Tucson), 429
# Directed current due to broken time-space symmetry ## I Introduction Transport phenomena are at the heart of many problems in physics. Nonlinear effects (as well as their quantized counterparts) may lead to many novel results in this area even for seemingly simple models (see e.g. ). Well-known applications include the dynamics of Josephson junctions and electronic transport through superlattices , to name a few. In the bulk of theoretical work on transport phenomena nonzero dc currents are obtained by applying time-dependent fields with nonzero mean. It is normally expected that the opposite case may not lead to a nonzero dc current. However it has also been known for a long time that nonlinear dynamical systems may allow for generation of ac fields from external dc fields (Josephson effect) and even vice versa . Of course what matters is a proper average over initial conditions, so that one has to ask whether there exist (or do not exist) sets of solutions which cancel their contribution to the total current. This question calls for an analysis of the symmetry properties of the system under consideration. Let us make things more precise by considering a paradigmatic equation of the following type: $$\ddot{X}+\gamma \dot{X}+f(X)+E(t)=0.$$ (1) Functions $`f`$ and $`E`$ are bounded and periodic with period $`2\pi `$ and $`T=2\pi /\omega `$ respectively and have zero mean, and $`\mathrm{max}(|f(X)|)\sim 1`$. This equation describes e.g. a particle moving in a periodic potential $`U(X)`$ with $`f(X)=U^{\prime }(X)`$ in one space dimension under the influence of a periodic external field with friction . It also may describe the current-voltage properties of a small Josephson junction under the action of a time-periodic external current (here $`X`$ becomes the phase difference of the complex order parameter across the junction). This equation has been considered by numerous authors, however typically with harmonic functions $`f`$ and $`E`$. We will show below that this choice induces symmetries which lead to zero total dc current. The purpose of this letter is to demonstrate that a proper lowering of the symmetries, even of $`E(t)`$ alone (still keeping its above defined properties), will lead to a nonzero dc current. ## II Dissipationless case $`\gamma =0`$ We first consider the case of zero friction $`\gamma =0`$ in (1). In the limit of large velocities $`|\dot{X}|\gg 1`$, $`f(X)`$ can be neglected and the solution $`X(t)=X_0+P_0t+\int _0^tdt^{\prime }\int _0^{t^{\prime }}E(t^{\prime \prime })dt^{\prime \prime }`$ has a bounded first derivative. Thus the time average over the velocity on a given trajectory is a well defined nondiverging quantity. To characterize the relevant symmetries of (1) we have to consider transformations in $`X,t`$ which lead to a change of sign in $`P`$. These are i) a reflection $`X\to -X`$ and a shift in $`t`$ or ii) a shift in $`X`$ and a reflection $`t\to -t`$. We need first to characterize the relevant symmetries of $`f(X)`$ and $`E(t)`$. For that we expand $`f`$ and $`E`$ into Fourier series: $`f(X)=\sum _kf_k\mathrm{e}^{\mathrm{i}kX},E(t)=\sum _kE_k\mathrm{e}^{\mathrm{i}\omega kt}`$. Zero mean implies $`f_0=E_0=0`$, and reality yields $`f_{-k}=f_k^{*}`$, $`E_{-k}=E_k^{*}`$ ($`A^{*}`$ means complex conjugation). If $`f(X)=U^{\prime }(X)`$ is antisymmetric after some appropriate argument shift $`f(X+𝒳)=-f(-X+𝒳)`$ we call $`f(X)`$ possessing $`\widehat{f}_a`$ symmetry. If $`E(t)`$ is symmetric after some appropriate argument shift $`E(t+\tau )=E(-t+\tau )`$ we call $`E(t)`$ possessing $`\widehat{E}_s`$ symmetry.
If $`E(t)`$ changes sign after a fixed argument shift (which trivially can be only equal to any odd multiple of $`T/2`$) $`E(t)=-E(t+T/2)`$, resulting in $`E_{2k}=0`$, we call $`E(t)`$ possessing $`\widehat{E}_{sh}`$ symmetry. Now we can define the two relevant symmetry cases of (1) called $`\widehat{S}_a`$ and $`\widehat{S}_b`$ below. If functions $`f(X)`$ and $`E(t)`$ possess $`\widehat{f}_a`$ and $`\widehat{E}_{sh}`$ symmetries respectively, then (1) is invariant under symmetry $`\widehat{S}_a`$: $`X\to -X+2𝒳`$, $`t\to t+T/2`$. If function $`E(t)`$ possesses $`\widehat{E}_s`$ symmetry, (1) is invariant under symmetry $`\widehat{S}_b`$: $`t\to -t+2\tau `$. Given a trajectory $`X(t;X_0,P_0),P(t;X_0,P_0)`$ with $`X(t_0;X_0,P_0)=X_0`$ and $`P(t_0;X_0,P_0)=P_0`$ the presence of any of the two symmetries $`\widehat{S}_a`$, $`\widehat{S}_b`$ allows one to generate new trajectories given by $`\widehat{S}_a:`$ $`-X(t+T/2;X_0,P_0)+2𝒳,-P(t+T/2;X_0,P_0),`$ (2) $`\widehat{S}_b:`$ $`X(-t+2\tau ;X_0,P_0),-P(-t+2\tau ;X_0,P_0).`$ (3) Note that these transformations change the sign of the velocity $`P`$. Consequently the time average of $`P`$ on the original trajectory will be opposite to the time averages of $`P`$ on the generated new trajectories. There can be more symmetry operations generating other trajectories, but those will not change the sign of $`P`$ and are thus not of interest here. The dynamical evolution of (1) allows both for quasiperiodic solutions (cyclic in $`X`$ for large $`P_0`$ and periodic in $`X`$ for small $`P_0`$) and chaotic trajectories embedded in a stochastic layer . Assuming that ergodicity holds in the stochastic layer we conclude that the average velocity will be one and the same for all trajectories of the layer. Since $`\widehat{S}_a`$ and $`\widehat{S}_b`$ when applied to a trajectory inside the layer generate again trajectories inside the layer, the presence of any of these symmetries implies that the time-averaged velocity of any trajectory in the layer will be zero. Note that we cannot obtain such a conclusion if both symmetries are absent! Indeed in Fig. 1 we show the long-time run $`X(t)`$ for a trajectory in the layer for several cases with and without symmetries $`\widehat{S}_a,\widehat{S}_b`$. While with $`\widehat{S}_a,\widehat{S}_b`$ we find zero average velocities, we observe that the loss of $`\widehat{S}_a,\widehat{S}_b`$ leads to a nonzero average velocity which is independent of the initial conditions but whose sign depends on the way the symmetry is broken. The dynamics is characterized by anomalous transport, i.e. by Lévy flights of different length interrupted by direction-changing perturbations. Nonzero current appears due to a desymmetrization between Lévy flights to the left and right, respectively. Notably, trajectory 2 in Fig. 1 yields a nonzero velocity for a spatially symmetric $`U(X)`$. To answer the question of how to invert the direction of a nonzero current in the stochastic layer, we note that considering the equation $`\ddot{X}+f(X)+E(-t)=0`$ we arrive back at (1) by substitution $`t^{\prime }=-t`$. So the current can be inverted by applying $`E(-t)`$ instead of $`E(t)`$ in (1). A second way is to consider equation $`\ddot{X}-f(-X)-E(t)=0`$ which after substitution $`X^{\prime }=-X`$ again is mapped onto (1). Thus another way of inverting the current is to apply $`-f(-X)`$ instead of $`f(X)`$ and $`-E(t)`$ instead of $`E(t)`$ in (1). There is no simple way to invert the current by just inverting space i.e. by considering $`f(-X)`$.
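The desymmetrized stochastic-layer transport described above is easy to probe numerically. A minimal sketch (Python with SciPy; the parameters are those of set (3) of Fig. 1, the initial condition is assumed to lie in the stochastic layer, and the run length is an arbitrary illustrative choice):

```python
# Sketch: time-averaged velocity for Eq. (1) with gamma = 0 and the
# Fig. 1 parameter set (3), where both S_a and S_b symmetries are broken.
import numpy as np
from scipy.integrate import solve_ivp

omega, v2, E1, E2 = 2.4, 0.6, 2.4, 1.38

def f(X):                      # zero-mean periodic force
    return np.cos(X) + v2 * np.cos(2.0 * X + 0.4)

def E(t):                      # zero-mean ac drive with two harmonics
    return E1 * np.sin(omega * t) + E2 * np.sin(2.0 * omega * t + 0.7)

def rhs(t, y):                 # y = (X, P); Eq. (1) with gamma = 0
    X, P = y
    return [P, -f(X) - E(t)]

T = 5.0e3                      # a long but finite averaging window
sol = solve_ivp(rhs, (0.0, T), [0.0, 0.1], max_step=0.05, rtol=1e-9)
print("time-averaged velocity <P> ~", (sol.y[0, -1] - sol.y[0, 0]) / T)
```

With either symmetry restored (e.g. $`E_2=0`$ or $`v_2=0`$ with a symmetric phase) the same average should drift toward zero as $`T`$ grows.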
To get a grasp of this result we consider the quasiperiodic cyclic regime for $`U(X)=\mathrm{cos}X`$ and $`E(t)=E_1\mathrm{cos}\omega t+E_2\mathrm{cos}(2\omega t+\alpha )`$. Note that $`\widehat{S}_a`$ symmetry is present if $`E_2=0`$ or $`E_1=0`$ and $`\widehat{S}_b`$ symmetry is present if $`\alpha =0,\pi `$ or $`E_1=0`$ or $`E_2=0`$. Each individual trajectory for sufficiently large $`P_0`$ gives a nonzero average velocity. The question is whether we obtain a nonzero velocity after averaging over initial conditions with some distribution function $`\rho (X_0,P_0,t_0)`$ reflecting equilibrium properties, at least of course $`\rho (X_0,P_0,t_0)=\rho (X_0,-P_0,t_0)`$. Here $`t_0`$ is the time when the trajectories with initial conditions $`X_0,P_0`$ are started. In the simplest case we might assume that $`\rho `$ is independent of $`t_0`$. Consider the case $`P_0\gg 1`$ and $`\omega \gg P_0`$. In that case we can separate the solution $`X(t)`$ into a slow part $`X_s(t)`$ and a small fast part $`\xi (t)`$. Expanding to linear order in the fast variable yields $$\ddot{X}_s+\ddot{\xi }-\mathrm{sin}X_s-\mathrm{cos}(X_s)\xi +E(t)=0.$$ (4) Collecting the fast variables we find $`\ddot{\xi }-\mathrm{cos}(X_s)\xi +E(t)=0`$. This equation has to be solved by assuming that $`X_s`$ is constant and skipping the slow homogeneous solution part. We find $`\xi =A_1\mathrm{cos}\omega t+A_2\mathrm{cos}(2\omega t+\alpha )`$ with $`A_1=E_1/[\omega ^2+\mathrm{cos}X_s]`$ and $`A_2=E_2/[4\omega ^2+\mathrm{cos}X_s]`$. Final averaging over the fast variables in (4) gives $`\ddot{X}_s-\mathrm{sin}X_s=0`$. The crucial point is to observe that the initial condition is now $`X_0=X_s(t_0)+\xi (t_0),P_0=\dot{X}_s(t_0)+\dot{\xi }(t_0)`$. Since $`\xi (t)`$ is a completely defined function, defining the initial conditions for $`X,P`$ we obtain initial conditions for the slow variables. The symmetry breaking will be hidden there. Indeed, averaging over time we find $`<P(t)>=<\dot{X}_s(t)>`$. Assuming e.g. large values of $`P_0`$ the time average velocity of the slow variable will be simply $`<\dot{X}_s(t)>=sgn(P_0)\sqrt{2H_s}[1-1/(4H_s^2)+O(P_0^{-8})]`$ with $`2H_s=P_s^2+2\mathrm{cos}X_s`$. Expanding $`<\dot{X}_s(t)>`$ in powers of $`1/P_0`$ we will encounter terms $`P_0^{-6}\dot{\xi }^3(t_0)\mathrm{cos}^2[X_0-\xi (t_0)]`$. Averaging over $`X_0`$ and $`t_0`$ we obtain in leading order for the average velocity $$\sqrt{2}\frac{25}{32}\frac{1}{P_0^6}\frac{E_1^2E_2}{\omega ^3}\mathrm{sin}\alpha $$ (5) which remains nonzero and will contribute to an average nonzero current after further averaging over $`P_0`$. Note that the directed current disappears if $`E_1=0`$ or $`E_2=0`$ or $`\alpha =0,\pi `$ when the mentioned symmetries are restored. The current direction is defined in this perturbation limit by the sign of the product $`E_2\mathrm{sin}\alpha `$. Finally in the limit $`P_0\to \infty `$ the current amplitude tends to zero, although the symmetries are not restored. The reason is that in this limit we recover the problem of a free particle moving under the influence of an external field $`E(t)`$ which can be easily solved . Averaging over $`t_0`$ in this case yields zero total current. It follows that nonzero total currents occur if symmetries $`\widehat{S}_a`$ and $`\widehat{S}_b`$ are violated and if we provide a mechanism of mixing of different harmonics as it happens in nonlinear equations of motion (see also ).
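The $`\alpha `$-dependence of the leading-order drift (5) is trivial to tabulate. A sketch (Python; the parameter values are arbitrary illustrative choices, only the $`\mathrm{sin}\alpha `$ dependence and overall scaling matter):

```python
# Sketch: evaluate the leading-order average velocity, Eq. (5),
# for a few drive phases alpha.
import math

def drift(E1, E2, alpha, omega, P0):
    return math.sqrt(2) * (25.0 / 32.0) * E1**2 * E2 * math.sin(alpha) \
           / (omega**3 * P0**6)

for alpha in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2):
    print(f"alpha = {alpha:5.2f} : <v> = {drift(1.0, 0.5, alpha, 10.0, 5.0):+.3e}")
# The current vanishes at alpha = 0 and pi (S_b restored) and flips
# sign with sin(alpha), as stated in the text.
```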
We checked the above statements of the perturbation theory for the quasiperiodic regime by computing numerically the average velocity $`<\dot{X}_s>`$ for two initial conditions with opposite initial velocities $`\pm P_0`$, taking their half sum, and finally averaging over all possible initial positions $`X_0`$ and over the initial time $`t_0`$. We observe a nonzero current except for the symmetric values of $`\alpha `$. Finally we did the same direct computation in the initial equation (1). The results are similar. In order to keep the dc current nonzero the value of $`\alpha `$ should be kept fixed with time, or at least be allowed to fluctuate only with small amplitude. Additional averaging over $`\alpha `$ will lead to a disappearance of the dc current. To our understanding this should not pose a technical difficulty, since one can take a monochromatic field source, and then experimentally generate a second harmonic from it such that the phase $`\alpha `$ is fixed. ## III The case with dissipation Consider now a small but nonzero value of $`\gamma `$ in (1) (see ). Generically the phase space of the system will separate into basins of attraction of low-dimensional attractors. There exist strong hints that, close to the Hamiltonian case, these attractors will be periodic orbits or limit cycles (cyclic in $`X`$) . The stochastic layer is transformed into a complex transient part in phase space, where the basins of attraction of different limit cycles are entangled in a complicated way. For stronger deviations from the conservative limit the periodic attractors undergo (period doubling) bifurcations, and finally possibly chaotic attractors are generated, which are however not directly related to the stochastic layer of the conservative limit (see also ). Of the two symmetries $`\widehat{S}_a`$, $`\widehat{S}_b`$ in the conservative case only $`\widehat{S}_a`$ may survive for nonzero dissipation. Consider such a case when (2) holds. Suppose we find a limit cycle which is characterized by $`X(t+T)=X(t)+2\pi m`$ and $`P(t+T)=P(t)`$, $`m\in Z`$. Due to the external time-periodic field $`E(t)`$ we have $`T=n2\pi /\omega `$, $`n\in Z`$. The average velocity $`<P>=\frac{1}{T}\int _0^T\dot{X}dt`$ on such a cycle will be given by $`<P>=\omega m/n`$. Due to the required symmetry there will be also a limit cycle with $`<P>=-\omega m/n`$. Moreover the symmetry presence also implies that the basins of attraction of the two symmetry related limit cycles are equivalent. Assume now that we violate $`\widehat{S}_a`$. The two cycles previously related by symmetry to each other will generically continue to exist, but there is no obvious symmetry which relates them to each other. However after computing the average velocities, we will still find that they equal each other up to a sign! The symmetry breaking is in fact hidden in a desymmetrization of the two basins of attraction. It is this asymmetry which after averaging over initial condition distributions (symmetric in $`P`$) will lead to a different number of particles attracted to both cycles and thus to a nonzero current. To observe the desymmetrization of the basins locally we may tune some parameter of the equation to such a value that one of the cycles becomes unstable. In that case its basin of attraction shrinks to zero and disappears. If the other (previously symmetry related) cycle is still stable, i.e.
if its basin of attraction still exists, the asymmetry in the basins becomes obvious: one of them has completely disappeared, the other one still exists. We tested these predictions and found complete agreement. We used $`f(X)`$ $`=`$ $`\mathrm{sin}X+v_2\mathrm{sin}(2X+0.4),`$ (6) $`E(t)`$ $`=`$ $`E_1\mathrm{sin}\omega t+E_2\mathrm{sin}(2\omega t+0.7)`$ (7) with $`\gamma =0.005`$ and $`\omega =1.1`$. The two symmetry related limit cycles ($`n=1`$ and $`m=\pm 1`$) have been computed with a Newton method (see e.g. ) for $`v_2=E_2=0`$, $`E_1=2.0`$. Then the parameters were changed to $`v_2=0.02`$, $`E_1=2.017`$ and $`E_2=0.06051`$ and the two limit cycles were traced again with a Newton method. Finally the eigenvalue problem ($`3\times 3`$ matrix) of the linearized phase space flow around each of the cycles has been evaluated in order to check the stability (see for details). For the given parameter values the $`m=1`$ cycle is stable (all Floquet eigenvalues inside the unit circle) while the $`m=-1`$ cycle is unstable (one Floquet eigenvalue is outside the unit circle). To observe the effect of asymmetry of basins of attraction globally, we computed the ensemble averaged velocity for a distribution of initial conditions in the phase space of (1) with forces (6)-(7). The distribution was uniform in $`X`$ and $`t_0`$ (40 points on the interval from 0 to $`2\pi `$ for each of them) and $`2\times 20`$ points symmetrically chosen on the $`P`$-axis according to a Maxwell distribution with inverse dimensionless temperature $`\beta =0.01`$. In total 64000 trajectories have been computed. The velocity per trajectory averaged over the whole set of trajectories is shown in Fig. 2 as a function of time for the case with $`\widehat{S}_a`$ symmetry (curve 1) and the one without $`\widehat{S}_a`$ symmetry (curve 2). While the first case gives zero current density as $`t\to \infty `$, the second case yields nonzero negative current density in this limit. In order to invert the direction of a nonzero total current we have to apply $`-f(-X)`$ instead of $`f(X)`$ and $`-E(t)`$ instead of $`E(t)`$ in (1). In contrast to the dissipationless case we cannot just invert time in $`E(t)`$ but have to perform a combined transformation both in space and time. Taking just $`-f(-X)`$ or $`-E(t)`$ may or may not lead to a change of the current direction. Recall that directed currents can be generated by keeping $`U(X)=U(-X)`$ and lowering the symmetry in $`E(t)`$ only. In that case the current direction is inverted by applying $`-E(t)`$. ## IV Discussion There exist a lot of publications on the properties of (1) with $`\gamma =0`$ (and similar equations reduced to discrete maps), however we did not find studies of such a system when both symmetries $`\widehat{S}_a`$ and $`\widehat{S}_b`$ are broken. Evidently, when taking $`f`$ and $`E`$ with only one harmonic, no symmetry broken transport is possible. The closest study in this respect we found in , where however, as explicitly stated, the symmetry was kept, leading to zero current when averaging over all possible trajectories. The overdamped case was studied in . Finally we want to discuss the relation of our results to the well-known case of directed currents for particles moving in so-called ratchet potentials under the influence of friction and a stochastic force (see and references therein). These potentials lack inversion symmetry in space and thus lack $`\widehat{f}_a`$ symmetry (see above).
However the noise process characterizing the stochastic force has to be non-white (see for details). It was then found that proper correlations in the noise allow for directed currents even in the presence of $`\widehat{f}_a`$ symmetry, i.e. for “non-ratchet” potentials. In , these equations have been modified by adding time-periodic fields. Note that our model allows for an easy treatment of the symmetry analysis, since the symmetry breaking is not hidden in higher order moments of distribution functions. If we consider corresponding quantum systems, the symmetry breaking will be reflected in the properties of the eigenstates, and nonzero currents can be expected as well. The addition of e.g. particle-particle interaction or noise can only affect the amplitude of the current, since the broken symmetries cannot be restored by additional interactions. Applications of similar ideas to coherent photocurrents in semiconductors have been reported in . Further applications may include driven Josephson junctions or superlattices, electrons in time-dependent magnetic fields to name a few. Note that it should be much easier to realize experimentally our proposed symmetry breaking rather than to prepare correlated noise as proposed for ratchet transport. ## V Acknowledgements This work was partially supported by the INTAS foundation (grant No. 97-574). We are deeply indebted to A. A. Ovchinnikov and P. Hänggi for fruitful discussions and a critical reading of the manuscript. We thank D. K. Campbell, F. Izrailev, Y. A. Kosevich, F. Kusmartsev, M. Sieber, G. Zaslavsky for stimulating discussions and U. Feudel for sending us preprints prior to publication. FIGURE CAPTIONS Fig.1 Dependence X(t) versus $`t`$ for different realizations of (1) and $`\gamma =0`$ with $`f(X)=\mathrm{cos}X+v_2\mathrm{cos}(2X+0.4)`$, $`E(t)=E_1\mathrm{sin}(\omega t)+E_2\mathrm{sin}(2\omega t+0.7)`$ and $`\omega =2.4`$. (1): $`v_2=0,E_1=2.4,E_2=0`$; (2): $`v_2=0,E_1=2.4,E_2=1.38`$; (3): $`v_2=0.6,E_1=2.4,E_2=1.38`$; (4): same as (2) but with $`f(-X)`$ instead of $`f(X)`$. Note that in this case the direction of the current is not inverted as explained in the text. Fig.2 The averaged velocity (see text) as a function of time for (1) with $`\gamma =0.1`$, $`\omega =2.4`$ and $`E_1=5.23`$. (1): symmetric case, $`v_2=E_2=0`$; (2): asymmetric case, $`v_2=0.6`$, $`E_2=5.23`$.
Figure 1: Schematic view of the Kaluza-Klein gravity modes. The x-axis is the fifth dimension. The left/right vertical lines represent the Planck/TeV branes. The “volcano” potential rises then falls off rapidly away from the Planck brane. Plotted are the squared amplitudes of two KK gravity modes relative to the graviton zero mode. The heavy $`m\gg 1`$ TeV mode takes its asymptotic (oscillating) form at the TeV brane, the other mode exhibits the characteristic behavior for $`m\ll 1`$ TeV. Very light modes with $`m<10^{-4}`$ eV would appear as flat lines, since they track the zero mode.

Extra dimensions provide an alternative route to addressing the hierarchy problem. This is because the Planck scale, describing the strength of the graviton coupling at low energies, is a derived scale. In a simple factorizable geometry, the Planck scale of a four-dimensional world is related to that of a higher dimensional world simply by a volume factor. The large Planck scale indicates weak graviton coupling which is in turn a consequence of the large volume over which the graviton can propagate . In this scenario, a large hierarchy only arises in the presence of a large volume for the compactified dimensions, which is very difficult to justify. A more compelling alternative has been suggested in Ref. . The idea of this paper was that the weak graviton coupling arises because of an interesting shape of the graviton wave function in the extra dimensions. The graviton is localized away from the 3+1-dimensional world on which the Standard Model resides. The large value of the Planck scale arises because of the small amplitude for the graviton to coincide with our “brane”. In Ref. , it was shown that the geometry of a single brane with cosmological energy densities tuned to guarantee Poincare invariance takes the form: $$ds^2=e^{-2k|y|}\eta _{\mu \nu }dx^\mu dx^\nu +dy^2,$$ (1) where $`\mu ,\nu `$ parameterize the four-dimensional coordinates of our world, and $`y`$ is the coordinate of a fifth dimension. The remarkable aspect of the above geometry is that it gives rise to a localized graviton field. Mechanisms for confining matter and gauge fields to a smaller dimensional subspace were already known. The new feature here is that the background geometry gives rise to a single gravitational bound state. This mode plays the role of the graviton of a four-dimensional world, and is responsible for reproducing four-dimensional gravity. In Ref. , the Kaluza-Klein (KK) spectrum reflecting the large extra dimension was derived and it was argued that the additional continuum gapless spectrum of states gives rise to very suppressed corrections to conventional four-dimensional gravity, suppressed by $`(\mathrm{energy}/M_{Pl})^2`$. However, from the perspective of generating the mass hierarchy between the Planck and weak scales, the important aspect of this geometry is the correspondence between location in the fifth dimension and the overall mass scale. This can be understood by the fact that the warp factor is a conformal factor so far as a four-dimensional world located at a fixed $`y`$ is concerned. Mass parameters are rescaled by this factor, so that a natural scale for mass parameters might be $`M_{Pl}=10^{19}`$ GeV on a brane at the origin, but is $`M_{Pl}e^{-k|y|}`$ for physics confined to a location $`y`$. This exponential could be the source of the hierarchy between the electroweak scale of order TeV and the Planck scale which is approximately $`10^{15}`$ times bigger.
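A quick arithmetic check of the size of the warp exponent required (a sketch; whether one takes the Planck scale as $`10^{18}`$ or $`10^{19}`$ GeV barely matters):

```python
# Sketch: the warp exponent k*y0 needed to bridge the weak/Planck hierarchy.
import math

for M_Pl_GeV in (1.0e18, 1.0e19):
    print(f"M_Pl = {M_Pl_GeV:.0e} GeV : k*y0 = ln(M_Pl/TeV) = "
          f"{math.log(M_Pl_GeV / 1.0e3):.1f}")
# ~35-37 in either case, i.e. an exponent of a few tens.
```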
Notice that the generation of this hierarchy only requires an exponential of order 30. In Ref. , this observation was exploited by introducing an orbifold geometry, and locating a positive energy brane at one point and a negative energy brane at the second orbifold point. If the standard model is located on this second, negative energy brane, the amplitude of the graviton is exponentially suppressed and a hierarchy is generated. The potential disadvantage of this setup is the necessity for the negative energy object and the orbifold geometry. Although not ruled out, it is desirable to have an alternative setup involving only positive energy objects. The advantages of such a setup are as follows. First, there are positive energy objects, namely D-branes and NS-branes, that are well understood and on which gauge fields and matter fields can be localized so that the Standard Model fields can be placed there. Second, some potentially problematic aspects of the cosmology of this system were presented in , though it is not yet clear how general the conclusions will prove. Finally, there is the aesthetic advantage of allowing for an infinite dimensional space in which mass scales are associated with definite locations in the space, a point further emphasized by . If one permits all possible mass scales (all possible distances in the fifth dimension), one presumably has a better chance of addressing difficult cosmological issues such as the cosmological constant problem and black-hole physics. One also has a better chance of exploiting holographic ideas via the correspondence between location in the fifth dimension and mass scale. In this paper, we demonstrate that one can address the hierarchy problem with only positive energy objects by combining the two observations of Ref. , namely 1) it is consistent to live with an infinite fifth dimension, and 2) one can generate a hierarchy by living far from the brane on which gravity is localized. This was implicit also in Ref. , where the connection between distance in the fifth dimension and overall mass scale was made explicit in an AdS geometry derived from D-3 branes (so that the Maldacena conjecture could be exploited), and so that the TeV scale corresponded to a fixed coordinate $`y_0`$. The crucial question is whether an observer on this “TeV brane” sees a consistent theory of gravity. In Ref. , it was only shown that one sees a theory of gravity that is very close to a four-dimensional gravitational theory if one lives on the brane on which the graviton is localized. In this paper, we argue that even for an observer quite far from that brane, one obtains an acceptable gravitational theory, essentially indistinguishable from a four-dimensional world! The picture that emerges is remarkably beautiful. The graviton is localized on a brane that we call the Planck brane from now on. We live on a brane separated from the Planck brane by roughly 30 Planck lengths in the fifth dimension. On this brane, mass scales are exponentially suppressed, yielding a natural generation of the weak scale. Furthermore, the maximal scale we can probe on our brane is this same TeV scale, since all string modes become strongly interacting at this scale. The location of the brane, which we denote by $`y_0`$, was determined to give the correct ratio of the weak scale to the Planck scale. We call the brane at this location the TeV brane. The potentially dangerous aspect of this setup is the multiplicity of the arbitrarily light Kaluza-Klein modes. In Ref.
, it was argued that the KK modes were extremely strongly coupled (with TeV coupling suppression rather than $`M_{Pl}`$). In Ref. , it was shown that one signal of the infinite extra dimension is a gapless continuum of Kaluza-Klein modes. Clearly, if these modes were all so strongly coupled, the theory would be disastrous, since gravitational and particle physics tests would be badly violated. What we show in this paper is that the situation is far more clever. Production of modes lighter than the TeV scale is suppressed. Furthermore, modes lighter than $`10^{-4}`$ eV (which happens to correspond to the length scales on which gravity has been directly probed) still couple with Planck scale suppression. Thus the theory interpolates between a four-dimensional and five-dimensional world (reminiscent of a holographic interpretation). The observer on the brane at the TeV scale sees the modes below a TeV in energy as weakly coupled. Modes higher in mass than a TeV are much more strongly coupled, and would in principle reproduce the expected five-dimensional result. However, they are impossible to access! Generalizing to an arbitrary location, one never recognizes the higher dimensional geometry. Independent of location, the world appears lower dimensional at low energies. We now elaborate on this observation. The results follow readily from papers . Our setup is a “Planck brane” (or set of branes) on which the graviton zero mode is confined, exponentially falling off in the direction $`y`$. The new feature is a single brane (or multiple branes) located a distance $`y_0`$ from this brane, where $`e^{-ky_0}=\mathrm{TeV}/M_{Pl}`$, where $`k`$ is related to the cosmological constant on the brane and determines the exponential falloff of the graviton, as in Ref. . The new brane can be regarded as a probe of the geometry determined by the Planck brane, either by assuming that the Planck brane has much larger tension, or consists of a large set of branes. It is readily seen that inclusion of a small brane tension does not substantially affect the result. We also remark that we do not address the question of determining the location $`y_0`$ here, though mechanisms that stabilize the orbifold geometry (such as in Ref. ) should also apply. It is clear that the zero mode generates consistent gravity. If we take the coordinate $`y=0`$ to be the location of the Planck brane, one can readily derive: $$M_{Pl}^2=2\int _0^{\infty }dy\,e^{-2ky}M^3=\frac{M^3}{k},$$ (2) so that with $`M`$ and $`k`$ taken of order $`M_{Pl}=10^{19}`$ GeV, the zero mode is coupled correctly to generate four-dimensional gravity. It is therefore the contribution of the additional KK modes that is our focus. Everything follows from the detailed form of these modes, derived in . The graviton zero mode (properly normalized) is $$\widehat{\mathrm{\Psi }}_0(z)=\frac{1}{k(|z|+1/k)^{3/2}},$$ (3) where the coordinate $`z`$ is related to $`y`$ by the expression $$z=\frac{\mathrm{sgn}(y)}{k}\left(e^{k|y|}-1\right).$$ (4) Note that at the TeV brane $`z=z_0\sim 1`$ TeV$`^{-1}`$. The continuum KK modes are given by: $$\widehat{\psi }_m=N_m(|z|+1/k)^{1/2}\left[Y_2(m(|z|+1/k))+\frac{4k^2}{\pi m^2}J_2(m(|z|+1/k))\right],$$ (5) where $`m`$ is the mass of the mode, $`Y_2`$ and $`J_2`$ are Bessel functions, and $`N_m`$ is a normalization factor. For large $`mz`$, these modes asymptote to continuum plane wave behavior.
This can be seen from the asymptotic form for the Bessel functions: $$\sqrt{z}J_2(mz)\to \sqrt{\frac{2}{\pi m}}\mathrm{cos}(mz-\frac{5}{4}\pi ),\sqrt{z}Y_2(mz)\to \sqrt{\frac{2}{\pi m}}\mathrm{sin}(mz-\frac{5}{4}\pi ).$$ (6) The normalization constants $`N_m`$ are determined by this plane wave behavior : $$N_m\simeq \frac{\pi m^{5/2}}{4k^2}.$$ (7) We are adopting here a delta-function normalization such that physical quantities will always involve an integration over $`m`$ for which the proper measure is just $`dm`$. None of our calculations will involve any dependence on the $`y\to \infty `$ regulator scheme (i.e. the “regulator brane” of or the alternative proposed in ). It is edifying to consider what these modes tell us in a couple of limiting situations. First, let us remind the reader of what happens if you live on the Planck brane ($`z=0`$). The exact effect depends on the particular gravitational process under consideration; let us first consider the corrections to Newton’s law from the KK modes. One finds a potential between two masses $`m_1`$ and $`m_2`$: $$V=G_N\frac{m_1m_2}{r}+\int _0^{\infty }\frac{dm}{k}G_N\frac{m}{k}\frac{m_1m_2e^{-mr}}{r}=G_N\frac{m_1m_2}{r}\left(1+\frac{1}{k^2r^2}\right).$$ (8) The KK contribution is suppressed at large distances over and above that expected from having one additional dimension, because of the amplitude suppression near the brane. This is due to the barrier of the analog quantum mechanical problem used to find the KK modes. Now let us consider the opposite extreme. Suppose we were at high energies and suppose it were appropriate to use the plane wave form of the modes. At a given location $`y_0`$, what would be the corrections to Newton’s law? They would be $$V\simeq G_N\frac{m_1m_2}{r}+\int _0^{\infty }\frac{dm}{k}G_N\frac{m_1m_2e^{-mr}}{r}e^{3ky_0}\simeq G_N\frac{m_1m_2}{r}\left(1+\frac{e^{3ky_0}}{kr}\right).$$ (9) It is useful to write this answer in terms of mass scales and compare to a flat five-dimensional space. If $`y_0`$ is chosen to address the TeV hierarchy, one finds the correction factor $`(M_{Pl}/\mathrm{TeV})^3/kr`$. Taking $`k\sim M_{Pl}`$, one derives $`M_{Pl}^2/\mathrm{TeV}^3r`$. With a cross product background metric, one would derive the TeV scale by choosing $`r_c`$ as $`M^3r_c=M_{Pl}^2`$ where $`M`$ is of order a TeV and the mass of the KK modes would start at $`1/r_c`$. The corrections to gravity would be those of the number of modes of energy less than $`1/r`$, which would be $`M_{Pl}^2/rM^3`$. This precisely agrees with the contribution one would have in the warped background if one saw the full continuum contribution. This would of course be ruled out by current experiments as it is far too strong a correction to gravity. However, the calculation in the AdS background based on the continuum form of the KK modes is not appropriate. It helps to examine the detailed form of the KK modes. Recall that they were derived in a background AdS space in which there was a four-dimensional flat brane with localized energy density. One derives the KK modes by assuming they factorize into momentum eigenstates with mass $`m`$, where $`m`$ is determined by solving an analog quantum mechanics problem describing the shape of the KK mode in the fifth dimension. The analog potential for these modes was dubbed the “volcano” potential because there was a delta-function at the origin, a barrier, and then a smooth fall-off to zero. The zero mode is the single bound state.
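The interplay of the two Bessel terms in Eq. (5) at the TeV brane can be checked numerically. A minimal sketch (Python with SciPy; working in units with $`k=1`$, so $`z_0\sim M_{Pl}/\mathrm{TeV}\sim 10^{16}`$ is an illustrative order-of-magnitude choice):

```python
# Sketch: compare the Y2 and J2 terms of the KK mode, Eq. (5), at the
# TeV-brane location z0, in units k = 1 (masses in units of k).
import numpy as np
from scipy.special import jv, yv

z0 = 1.0e16                          # ~ M_Pl/TeV in units of 1/k

def mode_terms(m):
    x = m * (z0 + 1.0)
    return yv(2, x), (4.0 / (np.pi * m**2)) * jv(2, x)

for m in (1e-34, 1e-32, 1e-30, 1e-28):
    tY, tJ = mode_terms(m)
    print(f"m = {m:.0e} k : |Y2 term| = {abs(tY):.2e}, |J2 term| = {abs(tJ):.2e}")
# The Y2 term, which tracks the zero mode, dominates for
# m << 1/(k z0^2) ~ 1e-32 k, i.e. ~1e-4 eV for k ~ M_Pl; above that
# the J2 term takes over, and for m z0 >> 1 the mode becomes an
# unsuppressed plane wave.
```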
All other modes are suppressed at the origin (as the first calculation of corrections to Newton’s law showed) and then turn into continuum plane wave modes in the large $`y`$ region, far from the brane. At a given location $`y_0`$, modes which are sufficiently light are suppressed relative to their continuum form, while modes which have already assumed their asymptotic form are unsuppressed. We can quantify this statement by examining the explicit expression for the modes Eq. (5). The asymptotic forms of the Bessel functions, and thus the onset of continuum behavior, require $`mz_0`$ much greater than 1, which is only true for modes of mass greater than a TeV. This is an important result. It says that modes at all energies below the strongly interacting regime are more suppressed than a continuum KK mode. This result could have been anticipated from Ref. , where it was shown that quantization was in units of approximately TeV. Modes do not appear to have their continuum form until they are at least this massive. The suppression of the lighter modes is addressed by looking at the asymptotic form of the Bessel functions for small $`mz`$: $$\sqrt{z}J_2(mz)\to \frac{m^2}{8}z^{5/2},\sqrt{z}Y_2(mz)\to -\frac{4}{\pi m^2z^{3/2}}-\frac{z^{1/2}}{\pi }.$$ (10) We see that $`Y_2`$ tracks the zero mode, whereas $`J_2`$ rises sharply with respect to the zero mode. So long as $`Y_2`$ dominates, the contribution from the KK modes is as suppressed relative to that of the zero mode as if we were probing gravity on the Planck brane; e.g. the corrections to Newton’s law are given by Eqn. (8). We find that $`Y_2`$ dominates so long as we are exploring modes with mass less than $`1/(kz_0^2)`$, which is approximately $`10^{-4}`$ eV. All gravitational experiments to date see the corrections to gravity to be as small as if we were living on the Planck brane! Modes with masses in the region intermediate between $`10^{-4}`$ eV and 1 TeV are controlled by the small $`mz`$ behavior of the dominant $`J_2`$ term. If these modes had already reached their continuum form at $`z=z_0`$, the cross section for real emission of these modes would be proportional to $`E/(\mathrm{TeV})^3`$, where $`E`$ denotes the relevant physical energy scale. This agrees with for $`n=1`$ extra dimensions, and leads to astrophysical and collider effects which are clearly excluded by observations. Using the actual form of these modes at $`z_0`$, we find instead that the real emission cross section is proportional to $$\sigma \propto \frac{E^6}{(\mathrm{TeV})^8}.$$ (11) So in fact the leading order energy dependence of these modes agrees with the large torus compactifications of for the case of $`n=6`$ extra dimensions! Because these effects are much softer in the infrared, they turn out to be easily compatible with all existing observations . In fact, a stronger result readily follows. If matter is localized to any four-dimensional flat brane between the Planck and TeV branes, the force between the matter will look four-dimensional for energies less than a TeV. This means that one could imagine doing physics in the bulk, analogous to what one might have tried in the orbifold case, to explain features of our observable world. What emerges is a very compelling picture. The world is five dimensional: the coordinate $`y`$ extends to infinity. However, for any observer localized to a given location $`y_0`$, the modes of mass greater than $`M_{Pl}e^{-k|y_0|}`$ are strongly coupled. The amplitude of lighter modes on the $`y_0`$ brane is suppressed.
Those of mass less than $`1/kz_0^2`$ ($`10^{-4}`$ eV in our case) are coupled by $`1/M_{Pl}`$ with further amplitude suppression. Heavier modes are power law suppressed over what would be expected had the metric been flat. So the observer confined to the brane sees gravity as essentially four-dimensional, no matter where the brane is located! We can live with an infinite extra dimension and simply not know it. The scenario presented here will be tested at future collider experiments. To leading order in $`(E/\mathrm{TeV})`$, real emission effects will mimic those of $`n=6`$ extra dimensions in the scenario of . Because of the strong power suppression, it is important to be able to probe energy scales close to $`(1/z_0)`$. There may also be detectable effects from virtual exchanges of KK modes. However such effects are difficult to compute since they are dominated by heavy modes near the TeV cutoff; a string theory calculation is probably required to get a reliable estimate. This is a very tantalizing scenario. It clearly ties in well with the holographic picture advocated in . Again, with the infinite dimension, one expects the gravitational theory to correspond to a gauge theory cut-off in the ultraviolet. Within this theory, there is a correspondence between location $`y`$ and mass scale determined by the shape of the zero mode. The additional contribution of this paper is to demonstrate that the Kaluza-Klein excitations do not disturb this picture. They give small corrections to the theory of gravity, so long as one is at sufficiently low energy. This new venue should provide new avenues for addressing important problems in cosmology and gravity. Acknowledgements: We wish to acknowledge useful discussions with Savas Dimopoulos, Ann Nelson, Stuart Raby, Raman Sundrum, and Herman Verlinde. We thank Martin Gremm and Emanuel Katz for comments on the manuscript. We also wish to thank the Aspen Center for Physics, where this work was initiated. The research of Joe Lykken was supported by NSF grant PHY94-07194, and by DOE grant DE-AC02-76CH03000. The research of Lisa Randall was supported in part by DOE under cooperative agreement DE-FC02-94ER40818 and under grant number DE-FG02-91ER4071.
# Spectral Lags of Gamma-Ray Bursts From Ginga and BATSE ## 1 INTRODUCTION More than 26 years have passed since gamma-ray bursts (GRBs) were discovered (Klebesadel, Strong & Olson (1973)). The Burst and Transient Source Experiment (BATSE) on the Compton Gamma Ray Observatory (CGRO) found that GRBs appear to be isotropic on the sky, yet there is a dearth of faint events compared to the brightest events, implying that the bursts are at cosmological distances (Meegan et al. (1992)). The cosmological origin of many GRBs was firmly established as a result of follow-up observations of fading X-ray counterparts to GRB sources discovered with the Beppo-SAX mission (e.g., Costa et al. (1997)). However, the radiation mechanism of GRBs is still unclear. This situation is partly due to the absence of any significant correlations between the GRB spectral and temporal properties. Recently, several research groups have investigated the correlation between GRB spectral and temporal properties. By using the average autocorrelation function and the average pulse width, Fenimore et al. (1995) presented a very well defined relationship: the average pulse width of many bursts, $`\mathrm{\Delta }\tau `$, is well fitted by a power law of the energy $`E`$ at which the observation is made: $`\mathrm{\Delta }\tau \approx 18.1E^{-0.45\pm 0.05}`$. Norris et al. (1996) proposed a “pulse paradigm” and also found that the average pulse shape dependence on energy is approximately a power law, with an index of $`-0.40`$, consistent with the analysis of Fenimore et al. (1995). Kazanas, Titarchuk & Hua (1998) proposed that synchrotron cooling could account for the power law relationship between $`\mathrm{\Delta }\tau `$ and $`E`$. The general observed trend in spectral evolution is hard to soft (Norris et al. (1986), Norris et al. (1996), Band (1997)). The hard-to-soft spectral evolution can lead to distinct, observed effects: pulse peaks migrate to later times and become wider at lower energies (Norris et al. (1996), Norris et al. (1999)). Cheng et al. (1995) claimed that about 24% of bursts in their sample of BATSE bursts have detectable time delay between BATSE channel 1 (25-57 keV) and channel 3 (115-320 keV). In this paper, we analyzed the cross-correlation average lags between different sets of energy bands for two GRB samples: 19 events detected by Ginga, and 109 events detected by BATSE. We discuss our results and their implications for GRB models. ## 2 METHODOLOGY The cross-correlation function (CCF) has been widely used to measure the temporal correlation of two GRB energy bands $`c_{1i}`$ and $`c_{2i}`$ (Link et al. (1993), Cheng et al. (1995), Norris et al. (1999)). Here $`c_i=m_i-b_i`$ is the net counts from GRB time profiles, where the background contribution, $`b_i`$, has been subtracted from raw counts $`m_i`$. A time interval of T is selected around the largest peak consisting of $`N`$ time bins each of duration $`\mathrm{\Delta }T`$ s, indexed between $`-N/2`$ and $`+N/2`$. The CCF as a function of time lag, $`j\mathrm{\Delta }T`$, is $`CCF_j`$ $`=`$ $`{\displaystyle \underset{i=-N/2}{\overset{N/2}{\sum }}}{\displaystyle \frac{c_{1i+j}c_{2i}}{S}}\mathrm{for}j\ne 0`$ $`=`$ $`1\mathrm{for}j=0`$ where the normalization factor $`S`$ is $$S=\underset{i=-N/2}{\overset{N/2}{\sum }}(c_{1i}c_{2i}-\sqrt{m_{1i}m_{2i}})$$ The $`\sqrt{m_{1i}m_{2i}}`$ term in S normalizes the CCF so that coherent noise at $`j=0`$ is accounted for.
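In code, the definition above translates directly. A minimal NumPy sketch (not the authors' implementation; the symmetric index range is handled by simple array slicing, with terms outside the overlap dropped):

```python
# Sketch of the cross-correlation function defined above.
# c1, c2 : background-subtracted counts; m1, m2 : raw (gross) counts.
import numpy as np

def ccf(c1, c2, m1, m2, max_lag):
    """Return {j: CCF_j} for j = -max_lag..max_lag."""
    S = np.sum(c1 * c2) - np.sum(np.sqrt(m1 * m2))  # removes coherent noise at j = 0
    N = len(c1)
    out = {0: 1.0}                                  # CCF_0 = 1 by definition
    for j in range(1, max_lag + 1):
        out[j] = np.sum(c1[j:] * c2[:N - j]) / S    # sum over c1_{i+j} * c2_i
        out[-j] = np.sum(c1[:N - j] * c2[j:]) / S
    return out
```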
Here, we must point out that it is not always reliable to find the lags by the CCF, especially when the observed profiles are relatively smooth or there is strong spectral evolution. The reason, in part, is that the CCF is an average quantitative description of more than one peak in two time series. Norris et al. (1996) analyzed individual peaks by obtaining best fit positions, intensities, and widths for each energy channel. From this, we can tell if the CCF lag, indeed, tells us the lag between two energy bands. A worst case might be BATSE burst 1085. Burst 1085 was fitted with 4 pulses for channel 1 and channel 3. The peak shifts for the pulses are 0.657 s, 1.128 s, 1.181 s and $`-0.160`$ s (Norris et al. (1996)). However, the CCF of the observed profile yields a time shift of 2.624 s between channel 1 and channel 3 (Cheng et al. (1995)). Another example is BATSE burst 543, which was fitted with 4 pulses for channel 1 and channel 3, and the peak shifts for the pulses are $`-0.002`$, $`-0.015`$, 0.06 and 0.09 s (Norris et al. (1996)), while the CCF of the observed profile yields a time shift of 0.256 s between channel 1 and channel 3 (Cheng et al. (1995)). In cases where the lags are small the CCF lag and the pulse fit lag usually agree. Thus, pulse fitting and CCF can give very different results. One must be careful to use them appropriately. To use the CCF lag, one must always compare to a model which has calculated two energy ranges and a resulting expected CCF. The pulse fitting could provide a more direct measurement of a lag than the CCF. Unfortunately, we do not often have statistically significant samples in most of the peaks, especially in Ginga. We will select bright GRB events from two samples: the 4th BATSE catalog and Ginga. From the current 4th BATSE catalog (Paciesas et al. (1999)), we use the 4-channel LAD DISCSC data with a time resolution of 64 ms which meet the criteria of $`T_{90}`$ $`>`$ 2 s and fluence $`>`$ 5 photon cm$`^{-2}`$ s$`^{-1}`$. We set the criteria for two main reasons: 1) short bursts are not suitable for timing analysis with a time resolution of 64 ms; 2) the strong bursts provide good statistics. This resulted in a total of 109 usable events. The Ginga GBD (Gamma-ray Burst Detector) was in operation from March 1987 to October 1991. During this time $`\sim 120`$ GRBs were identified (Ogasaka et al. (1991), Fenimore et al. (1993)). The GRB detectors on Ginga consisted of a proportional counter (PC) sensitive to photons in the 2-to-25 keV range and a scintillation counter (SC) recording photons with energies between 15 and 350 keV. The temporal resolution of the time history data depended on the telemetry mode. The on-board trigger system was very similar to BATSE. It checked the 50-to-380 keV count rate for a significant increase ($`11\sigma `$ on either 1/8, 1/4, or 1 s time scales). Upon such a trigger, special high resolution temporal data (called “Memory Read Out” [MRO]) would be produced. The MRO time history has a time resolution of 0.03125 s. The PC MRO time history extends from 32 s before the trigger to 96 s after the trigger. The SC MRO time history extends from 16 s before the trigger to 48 s after the trigger. Besides the time history data, the MRO also contains spectral data recorded at 0.5 s intervals.
We selected from the 120 Ginga GRBs a sample of 19 MRO events for which statistically good light curves were available, for which the events were entirely covered by the MRO interval, and for which we could be reasonably certain that the burst occurred within the forward, $`\pi `$ steradian field of view of the detectors (front-side events). We selected three sets of lower and upper energy bands to calculate the spectral lags: $`CCF_{x\gamma }`$, $`CCF_{pcsc}`$ and $`CCF_{13}`$. From the Ginga sample, $`CCF_{x\gamma }`$ is based on 2-10 keV and 50-100 keV count rates from the MRO spectral data (0.5 s resolution). Also from Ginga, $`CCF_{pcsc}`$ is based on the PC count rates (2-25 keV) and the SC count rates (15-350 keV) from the MRO time histories (31.25 ms resolution). The BATSE $`CCF_{13}`$ is based on channel 1 (25-57 keV) and channel 3 (115-320 keV) and has 64 ms resolution.

Before computing the CCF, the background must be estimated and subtracted from the observed profiles to yield signal profiles. For the majority of the analyzed bursts, a linear fit or quadratic fit was reasonable and it was unnecessary to propagate the background uncertainty into the CCF. If the observed data had much finer resolution than the GRB temporal features and high signal-to-noise levels, the CCF curves would be very smooth and their side lobes would be much lower than the central peak. We could find the lag by simply recording the lag of the CCF peak itself (e.g., as was done by Cheng et al. (1995)). In our actual Ginga samples, the time resolution and/or SNR are not good enough. Norris et al. (1999) measured the lag in BATSE bursts by fitting the peak of the CCF and finding the peak location from the best fit function. Neither method provides an uncertainty for the lag. One goal of this paper is to obtain approximate uncertainties via a bootstrap method so we can determine the significance of the lags.

## 3 AVERAGE LAGS WITH APPROXIMATE UNCERTAINTIES

To obtain spectral lags with approximate uncertainties (the uncertainties provide a measure of the stability of the calculation of the $`\tau `$ estimates), we did 10,000 Monte Carlo realizations from the observed time histories for each burst, and therefore got 10,000 lags for each burst by recording the lag of the CCF peak. From this method, we could find the mean lag and its approximate uncertainty (i.e., the region that includes $`68.3\%`$ of the realizations) for each burst and the average lag from all GRBs for each set of energy ranges. For each realization, we selected a count sample for each time bin and each spectral bin from a Poissonian distribution based on the observed gross counts (including background). We then removed the background and calculated the CCF. Based on the Monte Carlo realizations for each burst, we computed the average lag and its variance for each set of energy bands. To compute the CCFs, a time interval $`T=N\mathrm{\Delta }T`$ of 32 s was used for the Ginga bursts, and $`T=N\mathrm{\Delta }T`$ of 32.768 s for the BATSE bursts. If the duration of a burst is less than the time interval, the whole profile of the burst was used. Table 1 summarizes our results for two sets of energy bands for the Ginga bursts. The main difference between them is that $`\tau _{pcsc}`$ is based on time history MRO data (combining two samples together to give 62.5 ms resolution), and $`\tau _{x\gamma }`$ uses the spectral MRO data (0.5 s resolution). The detailed results for BATSE GRBs are not listed in the paper.
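The bootstrap just described can be sketched as follows (a hedged illustration: reading the 68.3% interval off with percentiles, and reusing the observed gross counts in the normalization $`S`$, are our choices; the authors' actual code is not available). It uses the ccf() sketch above.

```python
import numpy as np

rng = np.random.default_rng(1)

def lag_distribution(m1, m2, b1, b2, dt, max_lag, n_real=10_000):
    """Poisson-resample the gross counts m1, m2, subtract the background
    models b1, b2, and record the CCF-peak lag of each realization,
    following the Monte Carlo procedure described in the text."""
    peak_lags = np.empty(n_real)
    for k in range(n_real):
        c1 = rng.poisson(m1) - b1          # realization of net counts
        c2 = rng.poisson(m2) - b2
        lags, vals = ccf(c1, c2, m1, m2, max_lag)
        peak_lags[k] = lags[np.argmax(vals)] * dt   # seconds
    lo, hi = np.percentile(peak_lags, [15.85, 84.15])
    return peak_lags.mean(), lo, hi        # mean lag and 68.3% range
```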
In Table 1, the burst date is given by year/month/day in the first column, the next two columns ($`\tau _{x\gamma }`$ and $`\sigma _{x\gamma }`$) are the average lag and its standard deviation for energy ranges x (2-10 keV) and $`\gamma `$ (50-100 keV). The next two columns ($`\tau _{pcsc}`$ and $`\sigma _{pcsc}`$) are the average lag and its standard deviation for energy ranges PC (2-25 keV) and SC (15-350 keV). In the last two columns we list the duration ($`T_{90}`$) and peak intensity (50-300 keV) based on the SC time histories. Based on the $`CCF_{x\gamma }`$ from Ginga, 6 out of 19 GRBs ($`32\%`$) have detectable lags at a $`1\sigma `$ level (870707, 880205, 880725, 890929, 901001, and 910815); for $`CCF_{pcsc}`$, 5 of 19 GRBs ($`26\%`$) have lags at the $`1\sigma `$ level (880205, 880725, 890929, 900126, and 910206). The difference between $`CCF_{x\gamma }`$ and $`CCF_{pcsc}`$ is not significant and is caused by the different spectral ranges and the different time resolution.

In our BATSE sample, (a): 36 out of 109 GRBs ($`33\%`$) have lags less than $`0.0+1\sigma `$ s; (b): 53 out of 109 GRBs ($`49\%`$) have lags less than $`0.064+1\sigma `$ s; and (c): 20 of 109 GRBs ($`18\%`$) have lags greater than $`0.064+1\sigma `$ s. (Here, we have compared the lags against a rough combination of the quantization uncertainty \[64 ms\] and the statistical uncertainty.) The lags for class (a) are not detectable within the current time resolution; the reality of lags for class (b) is questionable because the lags are very close to the uncertainty; and the lags for class (c) are fairly reliable lags. The fraction in our sample with lags is roughly consistent with the results of Cheng et al. (1995), who claimed that about $`24\%`$ of the bursts in their sample of BATSE bursts have a detectable time delay. However, Cheng et al. (1995) included a few bursts in that $`24\%`$ whose lags were 0.064 s, equal to the time resolution. (Remember, Cheng et al. (1995) identified the value of the lag as the 64 ms CCF sample that was the largest, so the lags had to be multiples of 64 ms.) Our criterion is more strict, requiring a lag to be at least bigger than the time resolution plus $`1\sigma `$. Even comparing to $`0.064+1\sigma `$ s might count some events as having lags when, in fact, the lag arose from statistical variations. Thus, we probably actually found more events with lags than did Cheng et al. (1995). Also, about 10% in the sample have lags more than $`3\sigma `$ from zero, and only 4% have negative lags.

Figure 1 is the distribution of lags from the Monte Carlo realizations. There are 190,000 and 1,090,000 entries for Ginga and BATSE, respectively. These distributions are not estimates of the distribution of the true lags in the samples as might be found from pulse fitting. Rather, the distributions are very roughly a convolution of the true lag distribution with a function describing the measurement accuracy of using the CCF to estimate the lags. These distributions should only be compared to models where the lags are determined by a CCF of two energy ranges from the model. Figure 1a is the distribution of CCF lags $`\tau _{x\gamma }`$ for 19 GRBs from Ginga with a temporal resolution of 0.5 s. The mean lag for the sample is $`\tau _{x\gamma }=0.32_{-0.48}^{+0.44}`$ s. Figure 1b is the distribution of lags $`\tau _{pcsc}`$ for 19 GRBs of Ginga with the temporal resolution of 0.0625 s. The mean lag for the sample is $`\tau _{pcsc}=0.19_{-0.26}^{+0.32}`$ s.
Figure 1c is the distribution of lags $`\tau _{13}`$ between channel 1 and channel 3 for the 109 GRBs of the BATSE sample with the temporal resolution of 0.064 s. The mean lag for the sample is $`\tau _{13}=0.077_{-0.072}^{+0.034}`$ s. We used an average energy to qualitatively estimate the energy range represented by the CCF lag. We assumed a power-law spectrum with an index of $`-1.5`$ and used typical response functions to estimate the average photon energy in each energy range. The average energies for BATSE’s channel 1 (25-57 keV) & channel 3 (115-320 keV) are 48 keV and 193 keV, respectively. The average lag between 48 keV and 193 keV is less than $`\sim 0.08`$ s. The average energies for Ginga’s PC (2-25 keV) & SC (15-350 keV) are 9 keV and 86 keV, respectively. The average lag between 9 keV and 86 keV is less than $`\sim 0.5`$ s. The above qualitative results show that the peak of the emission is not delayed substantially at lower energy. We note that the uncertainties of the (average) lags from Ginga are relatively large, which is mainly due to the limited signal-to-noise ratio (SNR). The BATSE events have much higher SNR than Ginga’s; therefore the uncertainty of the lags for the BATSE sample is relatively small.

Norris et al. (1999) have used lags from cross-correlations of BATSE data as a predictor for absolute luminosity. Based on six GRBs with known red-shifts, they found that the (isotropic) luminosity is approximately $`1.3\times (\tau _{13}/0.01\,\mathrm{s})^{-1.14}\times 10^{53}`$ erg s<sup>-1</sup>. We calculated the lag somewhat differently than Norris et al. (1999). They included data above a fixed fraction of the highest peak and interpolated the cross-correlation function with a quadratic function. We used the whole time history and performed Monte Carlo realizations to determine the uncertainty for the lag. In Table 2 we have analyzed four of the six BATSE GRBs with redshifts. (Only four had publicly available time histories.) The first column is the burst date, the second and third columns are the luminosity and $`\tau _{13}`$ lag as reported by Norris et al. (1999), the fourth column is our lag, the fifth column is our $`1\sigma `$ uncertainty, the sixth column is our average lag found with a quadratic interpolation of each realization, and the last column is the $`1\sigma `$ uncertainty found with the quadratic interpolation of each realization. Note that we tend to find larger lags than Norris et al. (1999) by a factor of 3 to 7. The difference is not due to the use of quadratic interpolation because that technique gives values very close to the average of many realizations (see Table 2). Perhaps the difference is due to the fact that Norris et al. (1999) take only the tips of peaks while we use the whole time history including times when the signal is at background. In any case, it is clear that the luminosity-lag relationship is strongly dependent on how the lag is defined.

## 4 DISCUSSION

In the standard fireball scenario, the most likely radiation process in GRBs is synchrotron emission (Katz (1994); Sari, Narayan & Piran (1996)). Synchrotron emission often gives a spectral-temporal correlation $`t_{\mathrm{syn}}(E)\propto E^{-0.5}`$ which is not very different from the observed correlation $`\mathrm{\Delta }\tau \propto E^{-0.45\pm 0.05}`$ (Fenimore et al. (1995)). Kazanas, Titarchuk & Hua (1998) claimed that the Fenimore et al. $`\mathrm{\Delta }\tau `$ relationship arises from synchrotron cooling.
In the synchrotron cooling model, the electrons cool and the electrons’ average energy becomes smaller, which causes the emission to peak at lower energies at later times. But, if the lags we found in this paper are caused by the cooling process, the typical magnetic field should be $`\sim 100`$ gauss. In fact, most of the current GRB models require a much stronger magnetic field (e.g., Piran (1999)), which leads to very fast cooling. For example, using typical parameters in the internal shock model, the observed cooling time $`\tau _{cool}`$ at a given frequency is (Piran (1999)):

$$\tau _{cool}(h\nu )\approx 2\times 10^{-6}\,\mathrm{sec}\ ϵ_B^{-3/4}\left(\frac{h\nu _{obs}}{100\,\mathrm{keV}}\right)^{-1/2}$$

where the dimensionless equipartition parameter $`ϵ_B`$ is the ratio of the magnetic field energy density to the total thermal energy $`e`$: $`ϵ_B=\frac{B^2}{8\pi e}`$, and its typical value is $`0.01`$. Thus, the typical cooling timescale is $`6.3\times 10^{-5}`$ sec. We conclude that the similarity noted by Kazanas, Titarchuk & Hua (1998) between synchrotron cooling and the Fenimore et al. (1995) pulse width-energy relationship is actually a coincidence. The synchrotron cooling cannot simultaneously explain the large flux without a large magnetic field and the delays without a smaller magnetic field.

What then could produce the delays? There are three contributors to the pulse width in the external shock model of GRB pulses: cooling, hydrodynamics, and angular spreading. To produce the observed flux, the strong magnetic field produces a cooling time that is much shorter than the hydrodynamic time scale and the angular time scale. The hydrodynamic time scale is the time it takes for the reverse shock to cross the shell, $`\tau _{\mathrm{hydro}}=2\mathrm{\Gamma }_{rs}^2\mathrm{\Delta }T`$ (Kobayashi, Piran & Sari (1997)). The angular time scale is the time it takes for off-axis photons to arrive at the detector, $`\tau _{\mathrm{ang}}=2\mathrm{\Gamma }_{\mathrm{new}}^2\mathrm{\Delta }T`$, where $`\mathrm{\Gamma }_{\mathrm{new}}`$ is the Lorentz factor after the two shells collide. The resulting time profile is a convolution of the three processes. The cooling time is much shorter while the hydrodynamic and angular time scales are comparable. Thus, the time structure (e.g., the lags) in the profile can only come from the hydrodynamic time (which dominates the rise of the pulse) and the angular time scale (which dominates the fall of the pulse). The angular time arises from kinematics and so has no dependence on energy. Therefore, the energy dependent lags that we report here must come from variations associated with hydrodynamic processes, such as variations in emission as the reverse shock moves through the shell. Perhaps density or magnetic field variations cause the differences we observe.

We thank the referee for a very extensive and detailed report on this manuscript. This work was done under the auspices of the US Department of Energy.
# Semiclassical mechanics of a non-integrable spin cluster

## I Introduction

When $`S`$ is large, spin systems can be modeled by classical and semiclassical techniques. Here we reserve “semiclassical” to mean not only that the technique works in the limit of large $`S`$ (as the term is sometimes used) but that it implements the quantum-classical correspondence (relating classical trajectories to quantum-mechanical behavior). Spin systems (in particular $`S=1/2`$) are often thought of as the antithesis of the classical limit. Notwithstanding that, classical-quantum correspondence has been studied at large values of $`S`$ in systems such as an autonomous single spin , kicked single spin , and autonomous two and three spin systems. When the classical motion has a chaotic regime, for example, the dependence of level statistics on the regularity of classical motion has been studied . In regimes where the motion is predominantly regular, the pattern of quantum levels of a spin cluster can be understood with a combination of EBK (Einstein-Brillouin-Keller, also called Bohr-Sommerfeld) quantization and tunnel splitting (Sec. V is such a study for the current system.)

The latter sort of calculation has potential applications to some problems of current numerical or experimental interest. Numerical diagonalizations for extended spin systems (in ordered phases) on lattices of modest size ($`10`$ to $`36`$ spins) may be analyzed by treating the net spin of each sublattice as a single large spin and thereby reducing the system to an autonomous cluster of a few spins; the clustering of low-lying eigenvalues can probe symmetry breakings that are obscured in a system of such size if only ground-state correlations are examined. Nonlinear self-localized modes in spin lattices , which typically span several sites, have to date been modeled classically, but seem well suited to semiclassical techniques. Another topic of recent experiments is the molecular magnets such as $`\mathrm{Mn}_{12}\mathrm{Ac}`$ and $`\mathrm{Fe}_8`$, which are more precisely modeled as clusters of several interacting spins rather than a single large spin; semiclassical analysis may provide an alternative to exact diagonalization techniques for theoretical studies of such models.

In this paper, we will study three aspects of the classical correspondence of an autonomous cluster of three spins coupled by easy-plane exchange anisotropy, with the Hamiltonian

$$H=\sum_{i=1}^{3}\left[\mathbf{S}_i\cdot \mathbf{S}_{i+1}-\sigma S_i^zS_{i+1}^z\right].$$

This model was introduced in Ref. , a study of level repulsion in regions of $`(E,\sigma )`$ space where the classical dynamics is predominantly chaotic . Eq. (1) has only two nontrivial degrees of freedom, since it conserves total angular momentum around the z-axis. As did Ref. , we consider only the case of $`\sum_i S_i^z=0`$. While studying classical mechanics we set $`|\mathbf{S}|=1`$; to compare quantum energy levels at different $`S`$, we divide energies by $`S(S+1)`$ to normalize them. The classical maximum energy, $`E=3`$, occurs at the ferromagnetic (FM) state – all three spins are coaligned in the equatorial (easy) plane. The classical ground state energy is $`E=-1.5`$, in the antiferromagnetic (AFM) state, in which the spins lie $`120^{\circ}`$ apart in the easy plane; there are two such states, differing by a reflection of the spins in a plane containing the $`z`$ axis.
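A quick numerical check of these limiting energies (a small Python sketch under our sign reconstruction of the Hamiltonian; it is not taken from the paper):

```python
import numpy as np

def spin(theta, phi):
    """Unit spin vector from polar angles."""
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def energy(spins, sigma):
    """H = sum_i [ S_i . S_{i+1} - sigma * S_i^z S_{i+1}^z ] for three spins."""
    return sum(spins[i] @ spins[(i + 1) % 3]
               - sigma * spins[i][2] * spins[(i + 1) % 3][2]
               for i in range(3))

sigma = 0.5
fm = [spin(np.pi / 2, 0.0)] * 3                               # coaligned in the easy plane
afm = [spin(np.pi / 2, 2 * np.pi * k / 3) for k in range(3)]  # 120 degrees apart
print(energy(fm, sigma), energy(afm, sigma))                  # -> 3.0  -1.5
```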
Both the FM and AFM states, as well as all other states of the system, are continuously degenerate with respect to rotations around the $`z`$-axis. The classical dynamics follows from the fact that $`\mathrm{cos}\theta _i`$ and $`\phi _i`$ are conjugate, where $`\theta _i`$ and $`\phi _i`$ are the polar angles of the unit vector $`\mathbf{S}_i`$; then Hamilton’s equations of motion say

$$\frac{d\,\mathrm{cos}\theta _i}{dt}=-\hbar ^{-1}\frac{\partial H}{\partial \phi _i};\qquad \frac{d\phi _i}{dt}=\hbar ^{-1}\frac{\partial H}{\partial \,\mathrm{cos}\theta _i}.$$

In the rest of this paper, we will first introduce the classical dynamics by surveying the fundamental periodic orbits of the three-spin cluster, determined by numerical integration of the equations of motion (Sec II). The heart of the paper is Sec. III: starting from the quantum density of states (DOS) obtained from numerical diagonalization, we apply nonlinear spectral analysis to detect the oscillations in the quantum DOS caused by classical periodic orbits; to our knowledge, this is the first time the DOS has been related to specific orbits in a multi-spin system. Also, in Sec. IV we smooth the DOS and compare it to a lowest-order Thomas-Fermi approximation counted by Monte Carlo integration of the classical energy surface; a flat interval is visible in the quantum DOS between two critical energies where the topology of the classical energy surface changes. Finally, in Sec. V, we use a combination of EBK quantization and tunneling analysis to explain the clustering patterns of the quantum levels in our system.

## II Classical periodic orbits

Our subsequent semiclassical analysis will depend on identification of all the fundamental orbits and their qualitative changes as parameters are varied. Examining Poincaré sections and searching along symmetry lines of the system, we found four families of fundamental periodic orbits for the three-spin cluster. Figure 1 is an illustration of their motion, and Figure 2 gives classical energy-time curves. Orbits of types (a)-(c) are always at least threefold degenerate, since one spin is different from the other two; orbits of types (a)-(c) are also time-reversal invariant. Orbit (a), the counterbalanced orbit, exists when $`E>1`$ (including the FM limit) and, in the range $`0<\sigma <1`$ which we’ve studied, is always stable. Orbit (b), the unbalanced orbit, is unstable and exists when $`E<E_p`$, where

$$E_p=\frac{3}{4}\sigma -\frac{3}{2}.$$

Orbits of type (c), or stationary-spin orbits, exist at all energies. Type (c) orbits are unstable in the range $`3>E>1`$. Below $`E=1`$ the stationary spin orbit bifurcates into two branches without breaking the symmetry of the ferromagnetic ground state. At

$$E_c(\sigma )=\frac{3-3\sigma +\sigma ^2}{\sigma -2},$$

one branch vanishes and the other branch bifurcates into two orbits that are distorted spin waves of the two AFM ground states. (Below, in Sections IV and V, we will discuss topology changes of the entire energy surface.) Although they are not related by symmetry, all orbits of type (c) at a particular energy have the same period. Orbits of type (d), or three-phase orbits, are named in analogy to three-phase AC electricity, as the spin vectors move along distorted circles, $`120^{\circ}`$ out of phase. The type (d) orbits break time-reversal symmetry and are hence at least twofold degenerate.
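Since $`(\mathrm{cos}\theta _i,\phi _i)`$ are canonical pairs, trajectories like the orbits above can be traced by direct numerical integration of Eqs. (2)-(3). The following sketch is our own (with $`\hbar =1`$, numerical gradients for simplicity, and configurations assumed to stay away from the poles); locating the periodic orbits themselves additionally requires Poincaré sections and root searches, which we omit.

```python
import numpy as np

SIGMA = 0.5

def H(y):
    """y = (z1, z2, z3, phi1, phi2, phi3) with z_i = cos(theta_i)."""
    z, phi = y[:3], y[3:]
    s = np.sqrt(1.0 - z ** 2)
    S = np.column_stack([s * np.cos(phi), s * np.sin(phi), z])
    return sum(S[i] @ S[(i + 1) % 3] - SIGMA * S[i, 2] * S[(i + 1) % 3, 2]
               for i in range(3))

def rhs(y, eps=1e-7):
    """dz_i/dt = -dH/dphi_i, dphi_i/dt = +dH/dz_i (hbar = 1)."""
    grad = np.array([(H(y + eps * e) - H(y - eps * e)) / (2 * eps)
                     for e in np.eye(6)])
    return np.concatenate([-grad[3:], grad[:3]])

def integrate(y0, dt=1e-3, nsteps=10_000):
    """Fixed-step RK4; the energy H along the trajectory should stay constant."""
    y, traj = np.asarray(y0, float), []
    for _ in range(nsteps):
        k1 = rhs(y); k2 = rhs(y + dt / 2 * k1)
        k3 = rhs(y + dt / 2 * k2); k4 = rhs(y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        traj.append(y.copy())
    return np.array(traj)
```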
A symmetry-breaking pitchfork bifurcation of the (d) family occurs (for $`\sigma =0.5`$ around $`E=0.75`$) at which a single stable orbit, approaching from high energy, bifurcates into an unstable and two stable precessing three-phase orbits without period doubling. (Strictly speaking, the precessing three-phase orbits are not periodic orbits of the three-spin system, since after one “period” the spin configuration is not the same as before, but rather, all three spins are rotated by the same angle around the $`z`$-axis). The unstable three-phase orbit disappears quickly as we lower energy, but the precessing three-phase orbits persist until $`E=-1.5`$, and become intermittently stable and unstable in a heavily chaotic regime near $`E_A`$, but regain stability before $`E\to -1.5`$: thus in the AFM limit, orbits (c) are stable while orbits (b) are unstable. More information on the classical mechanics of this system appears in Refs. and .

## III Orbit spectrum analysis

Gutzwiller’s trace formula, the central result of periodic orbit theory,

$$\rho (E)=\mathrm{Re}\sum_{p}A_p(E)\,\mathrm{exp}[iS_p(E)/\hbar ]+\rho _{tf}(E),$$

decomposes the quantum DOS $`\rho (E)`$ into a sum of oscillating terms contributed by classical orbits indexed by $`p`$ (where $`S_p(E)`$ is the classical action, and $`A_p(E)`$ is a slowly varying function of the period, stability and geometric properties of the orbit $`p`$), plus the zeroth-order Thomas-Fermi term,

$$\rho _{tf}(E)=\int \frac{d^{2N}\stackrel{~}{z}}{(2\pi \hbar )^N}\,\delta \left(E-H(\stackrel{~}{z})\right).$$

This integral over phase space $`\stackrel{~}{z}`$ is simply proportional to the area of the energy surface. We do not know of any mathematical derivation of (6) in the case of a spin system. At a fixed $`H`$, the orbit spectrum is, as a function of $`\tau `$, the power spectrum of $`\rho (E)`$ inside the energy window $`H-\mathrm{\Delta }H/2<E<H+\mathrm{\Delta }H/2`$. (Figure 3, explained below, is an example of an orbit spectrum.) Since the classical period $`\tau _p(E)=\partial S_p(E)/\partial E`$, Eq. (6) implies that $`O(H,\tau )`$ is large if there exists a periodic orbit with energy $`H`$ and period $`\tau `$. The orbit spectrum can be estimated by Fourier transform,

$$O(H,\tau )=\left|\int _{H-\mathrm{\Delta }H/2}^{H+\mathrm{\Delta }H/2}\rho (E)\,e^{i\hbar ^{-1}E\tau }\,dE\right|^2.$$

Variants of Eq. (8) have been used to extract information about classical periodic orbits from quantum spectra. Unfortunately, the resolution of the Fourier transform is limited by the uncertainty principle, $`\delta E\,\delta t=\hbar /2`$. Nonlinear spectral estimation techniques, however, can surpass the resolution of the Fourier transform. One such technique, harmonic inversion, has been successfully applied to scaling systems – i.e., systems like billiards or Kepler systems in which the (classical and quantum) dynamics at one energy are identical to those at any other energy, after a rescaling of time and coordinate scales. In a scaling system, windowing is unnecessary because there are no bifurcations and the scaled periods of orbits are constant. In this section, we will apply nonlinear spectral estimation to our system (1), which is nonscaling.

### A Diagonalization

To get the quantum level spectrum, we wrote software to diagonalize arbitrary spin Hamiltonians polynomial in ($`S_i^x,S_i^y,S_i^z`$), where $`i`$ is an index running over an arbitrary number $`N`$ of spins of arbitrary (and often large) spin $`S`$.
The program, written in Java, takes advantage of discrete translational and parity symmetries by constructing a basis set in which the Hamiltonian is block diagonal, letting us diagonalize the blocks independently with an optimized version of LAPACK. Picturing the spins in a ring, the Hamiltonian Eq. (1) is invariant to cyclic permutations of the spins, so the eigenstates are states of definite wavenumber $`k=0,\pm \frac{2\pi }{3}`$ (matrix blocks for $`k=\pm \frac{2\pi }{3}`$ are identical by symmetry). In the largest system we diagonalized (three-spin cluster with $`S=65`$), the largest blocks contained $`N=4620`$ states.

### B Autoregressive approach to construct spectrum

The input to an orbit spectrum calculation is the list of discrete eigenenergies with total $`S_z=0`$; no other information on the eigenstates (e.g. the wavenumber quantum number) is necessary. This level spectrum is smoothed by convolving with a Gaussian (width $`10^{-3}`$ for Figure 3) and discretely sampling over energy (with sample spacing $`\delta =4.5\times 10^{-4}`$). We estimate the power spectrum by the autoregressive (AR) method. AR models a discretely sampled input signal, $`y_i`$ (in our case the density of states), with a process that attempts to predict $`y_i`$ from its previous values,

$$y_i=\sum_{j=1}^{N}a_jy_{i-j}+x_i.$$

Here $`N`$ is a free parameter which determines how many spectral peaks the model can fit; Refs. and discuss guidelines for choosing $`N`$. Fast algorithms exist to implement least-squares, i.e. to choose the $`N`$ coefficients $`a_j`$ to minimize (within constraints) $`x_i^2`$; of these we used the Burg algorithm . To estimate the power spectrum, we discard the original $`x_i`$ and model $`x_i`$ with uncorrelated white noise. Thinking of Eq. (9) as a filter acting on $`x_i`$, the power spectrum of $`y_i`$ is computed from the transfer function of Eq. (9) and is

$$P(\nu )=\frac{\langle x_i^2\rangle }{\left|1-\sum_{j=1}^{N}a_je^{ij\nu \delta }\right|^2}.$$

Unlike the discrete Fourier transform, $`P(\nu )`$ can be evaluated at any value of $`\nu `$. In our application, of course, $`\delta `$ has units of energy, so $`\nu `$ (more exactly $`\nu /\hbar `$) actually has units of time and is to be identified with $`\tau `$ in (8).

### C Orbit spectrum results and discussion

Figure 3 shows the orbit spectrum of our system with $`S=65`$ and $`\sigma =0.5`$; it is displayed as a $`500\times 390`$ array of pixels, colored light where $`O(H,\tau )`$ is large. Each horizontal row is the power spectrum in an energy window centered at $`H`$; we stack rows of varying $`H`$ vertically. With a window width 250 energy samples long ($`\mathrm{\Delta }H=0.1125`$), we fit $`N=150`$ coefficients in Eq. (9). To improve visual resolution, we let windows overlap and spaced the centers of successive windows 25 samples apart. Comparing Figure 3 and Figure 2 we see that our orbit spectrum detects the fundamental periodic orbits as well as multiple traversals of the orbits. Interestingly, we produced Figure 3 before we had identified most of the fundamental orbits; Figure 3 correctly predicted three out of four families of orbits. We believe that, given the same data, the AR method normally produces a far sharper spectrum than the Fourier transform. This is not surprising, since the Fourier analysis allows the possibility of orbit-spectrum density at all $`\tau `$ values, whereas AR takes advantage of our a priori knowledge that there are only a few fundamental periodic orbits and hence only a few peaks.
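For concreteness, here is a compact Python sketch of the Burg recursion and the resulting AR power spectrum (Eqs. (9)-(10)); this is a generic textbook implementation written by us, not the code described above, and the phase convention and $`\hbar =1`$ choice are ours.

```python
import numpy as np

def burg(y, order):
    """Fit y_i = sum_j a_j y_{i-j} + x_i (Eq. 9) by Burg's method.
    Returns the coefficients a_1..a_order and the noise power <x_i^2>."""
    y = np.asarray(y, float)
    f, b = y[1:].copy(), y[:-1].copy()   # forward/backward prediction errors
    a = np.zeros(0)
    sigma2 = np.dot(y, y) / len(y)
    for _ in range(order):
        k = 2.0 * np.dot(f, b) / (np.dot(f, f) + np.dot(b, b))
        a = np.concatenate([a - k * a[::-1], [k]])       # Levinson update
        f, b = f[1:] - k * b[1:], b[:-1] - k * f[:-1]
        sigma2 *= (1.0 - k * k)
    return a, sigma2

def ar_power(a, sigma2, delta, tau):
    """Eq. (10) evaluated at 'time' tau, for energy sampling step delta."""
    j = np.arange(1, len(a) + 1)
    return sigma2 / abs(1.0 - np.sum(a * np.exp(1j * j * tau * delta))) ** 2
```

One row of the orbit spectrum is then obtained by applying burg() to the smoothed, windowed DOS samples and evaluating ar_power() on a grid of $`\tau `$.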
We have compared the Fourier and AR versions of the spectrum in a few cases, but have not systematically tested them against each other. Unfortunately, the artifacts and limitations of the AR method are less understood than those of the Fourier transform. At high energies, the classical periods are nearly degenerate, so we expect closely spaced spectral peaks in the orbit spectrum. In this situation, the Burg algorithm vacillates between fitting one or two peaks, causing the braiding between the (a) and (c) orbits (labeled in Figure 2) in Figure 3. Also, in the range $`1<E<1.3`$, where classical chaos is widespread, bifurcations increase the number of contributing orbits so that we cannot interpret the orbit spectrum for $`\tau >10`$.

## IV Averaged density of states

The lowest-order Thomas-Fermi approximation, Eq. (7), predicts that the area of the classical energy surface is proportional to the DOS. We verify this in Figure 4, a comparison of the heavily smoothed quantum DOS to the area of the energy surface computed by Monte Carlo integration. An energy interval is visible in which the quantum DOS appears to be constant; we then verified that the classical DOS (which is more precise) is constant to our numerical precision; a similar interval was observed for all values of $`\sigma `$. We identified this interval as $`(E_p,1)`$, where the endpoints are associated with changes in the topology of the energy surface as the energy varies. At energies below $`E_c`$ (see Eq. (5)), the energy surface consists of two disconnected pieces, one surrounding each AFM ground state. The two parts coalesce as the energy surface becomes multiply connected at $`E_c`$. For $`E<E_p`$ (see Eq. (4)), the anisotropic interaction confines the spins to a limited band of latitude away from the poles. At $`E_p`$ it becomes possible for spins to pass over the poles. At $`E=1`$, the holes that appeared in the energy surface at $`E_c`$ close up. A discontinuity in the slope of the area of the energy surface occurs at energy $`E_c`$ (not visible in Figure 4); in the range $`E_p<E<1`$ the area of the energy surface (and hence the slowly varying part of the DOS) seems to be constant as a function of energy. In the special isotropic ($`\sigma =0`$) case, the flat interval is $`(-1.5,1)`$ and it can be analytically derived that the DOS is constant there. This is simplest for the smoothed quantum DOS, since for $`n=1,2,\mathrm{}`$ there are clusters of $`n`$ energy levels with level spacing proportional to $`n`$. (A derivation also exists for the classical case, but is less direct.) We have no analytic results for general $`\sigma `$. This flat interval is specific to our three-spin cluster, but we expect that the compactness of spin phase space will, generally, cause changes in the energy surface topology of spin systems that do not occur in traditionally studied particle systems.

## V Level clustering

The quantum levels with total $`S_z=0`$ show rich patterns of clustering, some of which are visible in Figure 5. The levels that form clusters correspond to three different regimes of the classical dynamics in which the motion becomes nearly regular: (1) the FM limit (not visible in Figure 5); (2) the AFM limit (bottom edge of Figure 5); and (3) the isotropic limit $`\sigma =0`$ (left edge of Figure 5). Indeed, the levels form a hierarchy as the clusters break up into subclusters. In this section, we first approximately map the phase space from four coordinates to two coordinates – with the topology of a sphere.
(Two of the original six coordinates are trivial, or decoupled, due to symmetry, as noted in Sec. II.) Then, using Einstein-Brillouin-Keller (EBK) quantization and some consideration of quantum tunneling, many features of the level hierarchy will be understood.

### A Generic behavior: the polyad phase sphere

In all three limiting regimes, the classical dynamics becomes trivial. For small deviations from the limit, the equations of motion can be linearized and one finds that the trajectory decomposes into a linear combination of two harmonic oscillators with degenerate frequency $`\omega `$, i.e., in a 1:1 resonance; the oscillators are coupled only by higher-order (= nonlinear) terms. There is a general prescription for understanding the classical dynamics in this situation . Near the limit, the low-excited levels have approximate quantum numbers $`n_{1,2}`$ such that the excitation energy $`\mathrm{\Delta }E_i`$ in oscillator $`i`$ is $`\hbar \omega (n_i+1/2)`$. (In the FM limit, regime (1), this difference is actually measured downwards from the energy maximum.) Clearly, the levels with a given total quantum number $`P\equiv (n_1+n_2+1)`$ must have nearly degenerate energies, and thus form a cluster of levels, which are split only by the effects (to be considered shortly) of the anharmonic perturbation. A level cluster arising in this fashion is called a polyad .

To reduce the classical dynamics, make a canonical transformation to the variables $`\mathrm{\Phi }`$ and $`\mathbf{P}\equiv (P_x,P_y,P_z)`$, where $`\mathrm{\Phi }`$ is the mean of the oscillators’ phases and $`\mathrm{\Psi }_x`$ is their phase difference, and

$$P_x\equiv \frac{1}{2}(n_1-n_2),$$
$$(P_y,P_z)\equiv 2\sqrt{(n_1+1/2)(n_2+1/2)}\,(\mathrm{cos}\mathrm{\Psi }_x,\mathrm{sin}\mathrm{\Psi }_x).$$

Here $`\mathrm{\Phi }`$ is the fast coordinate, with trivial dynamics $`d\mathrm{\Phi }/dt=\omega `$ in the harmonic limit. The slow coordinates $`\mathbf{P}`$ follow a trajectory confined to the “polyad phase sphere” $`|\mathbf{P}|=P`$, since $`\mathrm{\Delta }E=\hbar \omega P`$ is conserved by the harmonic-order dynamics. The reduced dynamics on this sphere is properly a map $`\mathbf{P}_i\to \mathbf{P}_{i+1}`$, defined by (say) the Poincaré section at $`\mathrm{\Psi }_x=0\ (\mathrm{mod}\ 2\pi )`$. But $`d\mathbf{P}/dt`$ contains only higher powers of the components of $`\mathbf{P}`$, so near the harmonic (small $`P`$) limit, $`|\mathbf{P}_{i+1}-\mathbf{P}_i|`$ vanishes and the reduced dynamics becomes a flow. At the limit in which it is a flow, an effective Hamiltonian $`I`$ can be defined so that the dynamics becomes integrable. Applying EBK quantization to the reduced dynamics on the polyad phase sphere gives the splitting of levels within a polyad cluster. (Near the harmonic limit, the energy scale of $`I`$ is small compared to the splitting between polyads.) In all three of our regimes, we believe this flow has the topology shown schematically in Figure 6. Besides reflection symmetry about the “equator”, it also has a threefold rotation symmetry around the $`P_z`$ axis, which corresponds to the cyclic permutation of the three spins. (Figure 6 is natural for the three-spin system because it is the simplest generic topology of the phase sphere with that threefold symmetry.) The reduced dynamics has two symmetry-related fixed points at the “poles” $`P_z=\pm P`$, which always correspond to motions of the three-phase sort like (d) on Figure 1. There are also three stable and three unstable fixed points around the “equator”.
The KAM tori of the full dynamics correspond to orbits of the reduced dynamics. These orbits follow contours of the effective Hamiltonian $`I`$ of the reduced dynamics (as in Figure 6). In view of the symmetries mentioned,

$$I=\alpha P_z^2+\beta (P_x^3-3P_xP_y^2)+\mathrm{const}$$

to leading order, where $`\alpha `$, $`\beta `$, and the constant may depend on $`\sigma `$, $`S`$, and $`P`$. The KAM tori surrounding the three-phase orbits represented by the “poles” are twofold degenerate, while the tori in the stable resonant islands represented on the “equator” are threefold degenerate. Hence, the EBK construction produces degenerate subclusters containing two or three levels depending on the energy range within the polyad cluster. The fraction of levels in one or the other kind of subcluster is proportional to the spherical areas on the corresponding side of the separatrix, which passes through the unstable points in Figure 6. These areas in turn depend on the ratio of the first to the second term in Eq. (13), i.e. $`\alpha P^2/\beta P^3`$. Evidently, as one moves away from the harmonic limit to higher values of $`P`$, one universally expects to have a larger and larger fraction of threefold subclusters. Given the numerical values of energy levels in a polyad, we can estimate the terms of Eq. (13) in the following fashion: (i) the energy difference between the highest and lowest 3-fold subcluster is the difference between the stable and unstable orbits on the equator, which is $`2\beta P^3`$ according to (13); (ii) the mean of the highest and lowest 3-fold subcluster would be the energy all around the equator if $`\beta `$ were to vanish; the difference between this energy and that of the farthest 2-fold subcluster in the polyad is $`\alpha P^2`$ according to (13). Furthermore, tunneling between nearby tori creates fine structure splitting inside the sub-clusters. The slow part of the dynamics on the polyad phase sphere is identical to that of a single semiclassical spin with (13) as its effective Hamiltonian, so the effective Lagrangian is essentially the same, too. Then different tunneling paths connecting the same two quantized orbits must differ in phase by a topological term, with a familiar form proportional to the (real part of the) spherical area between the two paths.

### B Results

Here we summarize some observations made by examination of polyads in the three regimes, for a few combinations of $`S`$ and $`\sigma `$.

#### 1 Ferromagnetic limit

This regime is the best-behaved in that regular behavior persists for a wide range of energies. The ferromagnetic state, an energy maximum, is a fixed point of the dynamics; around it are “spin-wave” excitations (viewing our system as the 3-site case of a one-dimensional ferromagnet). These are the two oscillators from which the polyad is constructed. Thus, the “pole” points in Figure 6 correspond to “spin waves” propagating clockwise or counterclockwise around the ring of three spins, an example of the “three-phase” type of orbit. The stable and unstable points on the “equator” are identified respectively with the orbits (a) and (c) of Figure 1. Classically, in this regime, the three-phase orbit is the fundamental orbit with lowest frequency $`\omega _{3phase}`$; thus the corresponding levels in successive polyads have a somewhat smaller spacing $`\hbar \omega _{3phase}`$ than other levels, and they end up at the top of each polyad. (Remember, excitation energy is measured downwards from the FM limit.)
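The estimation prescription (i)-(ii) above is easy to automate; below is a small sketch (our own helper, with a hypothetical interface assuming the subcluster energies of one polyad have already been identified) that returns the two energy scales of Eq. (13).

```python
import numpy as np

def eq13_scales(threefold, twofold):
    """Estimate beta*P^3 from rule (i) and alpha*P^2 from rule (ii), given
    arrays of 3-fold and 2-fold subcluster energies within a single polyad."""
    threefold = np.asarray(threefold, float)
    twofold = np.asarray(twofold, float)
    beta_P3 = (threefold.max() - threefold.min()) / 2.0   # rule (i)
    equator = (threefold.max() + threefold.min()) / 2.0   # equator energy if beta -> 0
    pole = twofold[np.argmax(np.abs(twofold - equator))]  # farthest 2-fold subcluster
    alpha_P2 = abs(pole - equator)                        # rule (ii)
    return alpha_P2, beta_P3
```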
Indeed, we observe that the high-energy end of each polyad consists of twofold subclusters and the low-energy end consists of threefold subclusters. We see a pattern of fine structure (presumably tunnel splittings) which is just like the pattern in the four-spin problem . Namely, throughout each polyad the degeneracies of successive levels follow the pattern (2,1,1,2) and repeat. (Here – as also for regime 3 – every “2” level has $`k=\pm 1`$ and every “1” level has $`k=0`$, where wavenumber $`k`$ was defined in Sec. III A.) Numerical data show that (independent of $`S`$) the pattern (starting from the lowest energy) begins (2112…) for even $`P`$, but for odd $`P`$ it begins (1221…). In the energy range of twofold subclusters, the levels are grouped as (2)(11)(2), i.e. one tunnel-split subcluster between two unsplit subclusters (and repeat); in the threefold subcluster regime, the grouping is (21)(12), so that each subcluster gets tunnel-split into a pair and a single level, but the sense of the splitting alternates from one subcluster to the next. An analysis of $`\sigma =0.4`$, $`S=30`$ showed that the fraction of threefold subclusters indeed grows from around 0.3 for small $`P`$ to nearly $`0.5`$ at $`P\approx 40`$. Furthermore, when $`\alpha P^2`$ and $`\beta P^3`$ were estimated by the method described near the end of Subsec. V A, they indeed scaled as $`P^2`$ and $`P^3`$ respectively.

#### 2 Antiferromagnetic limit

This regime occurs at $`E<E_c(\sigma )`$, where $`E_c(\sigma )`$ is given by (5). That means the classical energy surface is divided into two disconnected pieces, related by a mirror reflection of all three spins in any plane normal to the easy plane. Analogous to regime one, two degenerate antiferromagnetic “spin waves” exist around either energy minimum, and the polyad states are built from the levels of these two oscillators. Thus the clustering hierarchy outlined in Sec. V A – polyad clusters, EBK quantization of $`I`$, and tunneling over barriers of $`I`$ on the polyad phase sphere – is repeated within each disconnected piece, leading to a prediction that all levels should be twofold degenerate. Consequently, on the level diagram (Figure 5), there should be half the apparent level density below the line $`E=E_c(\sigma )`$ as above it. Indeed, a striking qualitative change in the apparent level crossing behavior is visible at that line (shown dashed in the figure). Actually, tunneling is possible between the disconnected pieces of the energy surface and may split these degenerate pairs. In fact this hyperfine splitting happens to 1/3 of the pairs, again following the (2112) pattern within a given polyad. This (2112) pattern starts to break up as the energy moves away from the AFM limit; even for large $`S`$ ($`30`$ or $`65`$), this breakup happens already around the polyad with $`P=10`$, so it is much harder than in the FM case to ascertain the asymptotic pattern of subclustering. We conjecture that the breakup may happen near the energies where, classically, the stable periodic orbits bifurcate and a small bit of phase space goes chaotic. The barrier for tunneling between the disconnected energy surfaces has the energy scale of the bare Hamiltonian, which is much larger (at least, for small $`P`$) than the scale of the effective Hamiltonian $`I`$ which provides the barrier for tunneling among the states in a subcluster. Hence, the hyperfine splittings are tiny compared to the fine splittings discussed at the end of Subsection V A.
To analyze numerical results, we replace a degenerate level pair by one level and a hyperfine-split pair by the mean level, and treat the result as the levels from one of the two disconnected polyad phase spheres, neglecting tunneling to the other one. Then in the AFM limit, the “pole” points in Figure 6 again correspond to spin waves propagating around the ring, while the stable and unstable points on the equator are (c) and (b) on Figure 1. The three-phase orbit is the highest frequency orbit in the AFM limit, so again the twofold and threefold subclusters should occur at the high and low energy ends of each polyad cluster. What we observe, however, is that all the subclusters are twofold, except the lowest one is often threefold.

#### 3 Isotropic limit

This regime includes only $`S_{\mathrm{tot}}<\sqrt{5}\,S`$, i.e. $`E<1`$ – the same regime in which the flat DOS was observed (Sec. IV). Above the critical value $`E=1`$, the levels behave as in the “FM limit” described above. At $`\sigma =0`$, it is well-known that the quantum Hamiltonian reduces to $`\frac{1}{2}[S_{\mathrm{tot}}^2-3S(S+1)]`$. Thus each level has degeneracy $`P\equiv 2S_{\mathrm{tot}}+1`$. (That is the number of ways three spins $`S`$ may be added to make total spin $`S_{\mathrm{tot}}`$, and each such multiplet has one state with $`S_{\mathrm{tot}}^z=0`$.) When $`\sigma `$ is small, these levels split and will be called a polyad. Classically, at $`\sigma =0`$ the spins simply precess rigidly around the total spin vector. These are harmonic motions of four coordinates; hence the polyad phase sphere can be constructed by (12). From the threefold symmetry, there should again be three orbit types as represented generically by Figure 6 and Eq. (13). For example, an umbrella-like configuration in which the three spin directions are equally tilted out of their plane corresponds to a three-phase type orbit, with two cases depending on the handedness of the arrangement. A configuration where one spin is parallel/antiparallel to the net moment (and the other two spins offset symmetrically from it) follows one of the threefold degenerate orbits. Numerically, the level behavior in the near-isotropic limit is similar to the near-FM limit. The fine structure degeneracies are a repeat of the (2112) pattern as in the other regimes; the lowest levels of any polyad always begin with (1221). The fraction of threefold subclusters is large here and, as expected, grows with $`P`$ (from 0.5 to 0.7 in the case $`S=15`$). However, the energy scales of $`\alpha P^2`$ and $`\beta P^3`$ behave numerically as $`\sigma P^0`$ and $`\sigma P^1`$. What is different about the isotropic limit is that the precession frequency – hence the oscillator frequency $`\omega `$ – is not a constant, but is proportional to $`S_{\mathrm{tot}}`$. Since perturbation techniques give formulas for $`I`$ with inverse powers of $`\omega `$, it is plausible that $`\alpha `$ and $`\beta `$ in (13) include factors of $`P^{-2}`$ here, which were absent in the other two regimes.

## VI Conclusion and summary

To summarize, by using detailed knowledge of the classical mechanics of a three-spin cluster , we have studied the semiclassical limit of spin in three ways. First, using autoregressive spectral analysis, we identified the oscillating contributions that the fundamental orbits of the cluster make to the density of states; in fact, we detected the quantum signature of the orbits before discovering them.
Secondly, we verified that the quantum DOS is proportional to the area of the energy surface; we also observed kinks in the smoothed quantum DOS, which are the quantum manifestation of topology changes of the classical energy surface; such topology changes, we expect, are more common in spin systems than in particle phase space, since even a single spin has a nontrivial topology. Finally, we have identified three regimes of near-regular behavior in which the levels are clustered according to a four-level hierarchy, and we explained many features qualitatively in terms of a reduced, one degree-of-freedom system. This system appears promising for two extensions analogous to Ref. : tunnel amplitudes (and their topological phases) could be computed more explicitly; also, the low-energy levels from exact diagonalization of a finite piece of the anisotropic-exchange antiferromagnet on the triangular lattice could probably be mapped to three large spins and analyzed in the fashion sketched above in Sec. V.

###### Acknowledgements.

This work was funded by NSF Grant DMR-9612304, using computer facilities of the Cornell Center for Materials Research supported by NSF grant DMR-9632275. We thank Masa Tsuchiya, Greg Ezra, Dimitri Garanin, Klaus Richter and Martin Sieber for useful discussions.
# Construction of regular languages and recognizability of polynomials

## 1 Introduction

Recently, P. Lecomte and I have introduced the concept of numeration system on a regular language. A numeration system is a triple $`(L,\mathrm{\Sigma },<)`$ where $`L`$ is an infinite regular language over a totally ordered finite alphabet $`(\mathrm{\Sigma },<)`$. The lexicographic ordering of $`L`$ gives a one-to-one correspondence $`\mathrm{r}_S`$ between the set of the natural numbers $`\mathbb{N}`$ and the language $`L`$. For each $`n\in \mathbb{N}`$, $`\mathrm{r}_S(n)`$ denotes the $`(n+1)^{th}`$ word of $`L`$ with respect to the lexicographic ordering and is called the $`S`$-representation of $`n`$. For $`w\in L`$, we set $`\mathrm{val}_S(w)=\mathrm{r}_S^{-1}(w)`$ and we call it the numerical value of $`w`$.

When one has a simple method to represent integers, some natural questions about “recognizability” arise. By recognizability, one means the following. Let $`S`$ be a numeration system and $`X`$ be a subset of $`\mathbb{N}`$. Then $`X`$ is said to be $`S`$-recognizable if $`\mathrm{r}_S(X)`$ is recognizable by a finite automaton. Therefore we can consider two kinds of questions.

• For a given numeration system $`S`$, is it possible to determine which subsets of $`\mathbb{N}`$ are $`S`$-recognizable?

• For a given subset $`X`$ of $`\mathbb{N}`$, is it possible to find a numeration system $`S`$ in which $`X`$ is $`S`$-recognizable?

To give a partial but very important answer to the first question, it is shown in that arithmetic progressions are always recognizable in any numeration system. It is also shown that if $`X`$ is recognizable for some system $`S`$ then $`X+k`$ is also $`S`$-recognizable. (These two results will be useful in some proofs of this paper.) In , we were interested in the second question when $`X`$ is the set $`𝒫`$ of primes. It is shown that $`\mathrm{r}_S(𝒫)`$ is never recognizable for any numeration system $`S`$.

In this paper, we will be mainly concerned with the second question when $`X`$ is a polynomial image of $`\mathbb{N}`$. For classical numeration systems with integer base, it is well-known that the set of the perfect squares is not $`k`$-recognizable for any $`k\in \mathbb{N}\setminus \{0,1\}`$ (see for a survey about classical numeration systems). However, in we show quite easily that the numeration system

$$S=(a^{*}b^{*}a^{*}c^{*},\{a,b,c\},a<b<c)$$

is such that the set $`\mathrm{r}_S(\{n^2:n\in \mathbb{N}\})`$ is regular. The choice of the language $`a^{*}b^{*}a^{*}c^{*}`$ was given by some density considerations: this language has exactly $`2n+1`$ words of length $`n`$. In view of this result, J.-P. Allouche asked the following question. Is it possible to generalize the result about the set of the perfect squares to the set $`\{n^k:n\in \mathbb{N}\}`$, $`k>2`$? Moreover, if $`P`$ is a polynomial belonging to $`\mathbb{N}[x]`$ (resp. $`\mathbb{Z}[x]`$ or $`\mathbb{Q}[x]`$) such that $`P(\mathbb{N})\subset \mathbb{N}`$ then can one find a numeration system such that $`P(\mathbb{N})`$ is recognizable? In all these cases, we answer affirmatively. For a given polynomial $`P`$, we give an explicit method to construct a numeration system such that $`\mathrm{r}_S(P(\mathbb{N}))`$ is regular. For this purpose, we show how to obtain a regular language which contains exactly $`P(n+1)-P(n)`$ words of length $`n`$ for $`n`$ large enough. The construction of regular languages with specified density is a problem beyond the concern of numeration systems.
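To make these definitions concrete, the following Python sketch enumerates $`a^{*}b^{*}a^{*}c^{*}`$ in the order used here (we read the “lexicographic ordering” of the infinite language as the genealogical order: by length first, then lexicographically with $`a<b<c`$, which is what makes $`\mathrm{r}_S`$ a bijection with $`\mathbb{N}`$) and confirms that the first word of each length sits at a perfect-square position. The brute-force filtering by a regular expression is our own shortcut, not the paper's construction.

```python
import itertools, re

PAT = re.compile(r"a*b*a*c*\Z")

def words(max_len):
    """a*b*a*c* enumerated by length, then lexicographically (a < b < c)."""
    for n in range(max_len + 1):
        for t in itertools.product("abc", repeat=n):
            w = "".join(t)
            if PAT.match(w):
                yield w

ws = list(words(6))
firsts = [next(i for i, w in enumerate(ws) if len(w) == n) for n in range(7)]
print(firsts)   # -> [0, 1, 4, 9, 16, 25, 36]: val_S(first word of length n) = n^2
```

This works because the $`2n+1`$ words of each length $`n`$ accumulate to exactly $`n^2`$ words of length less than $`n`$.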
The fact that the set of primes is never recognizable and that the polynomial images of $`\mathbb{N}`$ are recognizable give another interpretation of a well-known result (see \[4, Theorem 21\]): no non-constant polynomial $`f(n)`$ with integral coefficients can be prime for all $`n`$, or for all sufficiently large $`n`$.

## 2 Recognizability of polynomials

Our aim will be to construct a numeration system in which $`P(\mathbb{N})`$ is recognizable when $`P\in \mathbb{Q}[x]`$ and $`P(\mathbb{N})\subset \mathbb{N}`$. We will proceed in four steps. First of all, we give an explicit iterative method to obtain regular languages such that the number of words of length $`n`$ is exactly $`n^k`$ (in it is said that such languages can be easily obtained). The languages which are given here can be interpreted as the basic constructors of our method. In the three other steps, we increase gradually the difficulty. First we consider the case $`P\in \mathbb{N}[x]`$ which is quite simple since we only deal with the operation of addition. Next we consider $`P\in \mathbb{Z}[x]`$; here the problem of subtraction must be resolved. Finally, we have the most general case, $`P\in \mathbb{Q}[x]`$ and the problem of division. In each of these last three steps, we give an instructive short example of construction.

i) Languages with density $`n^k`$

First we recall some basic definitions and operations on languages.

###### Definition 1

The density function of a language $`L\subset \mathrm{\Sigma }^{*}`$ is

$$\rho _L:\mathbb{N}\to \mathbb{N}:n\mapsto \mathrm{\#}(\mathrm{\Sigma }^n\cap L)$$

where $`\mathrm{\#}A`$ denotes the cardinality of the set $`A`$.

###### Definition 2

If $`x`$ and $`y`$ are two words of $`\mathrm{\Sigma }^{*}`$ then the shuffle of $`x`$ and $`y`$ is the language $`x⧢y`$ defined by

$$\{x_1y_1\mathrm{}x_ny_n:x=x_1\mathrm{}x_n,\ y=y_1\mathrm{}y_n,\ x_i,y_i\in \mathrm{\Sigma }^{*},\ 1\le i\le n,\ n\ge 1\}.$$

If $`L_1,L_2\subset \mathrm{\Sigma }^{*}`$ then the shuffle of the two languages is the language

$$L_1⧢L_2=\{w\in \mathrm{\Sigma }^{*}:w\in x⧢y,\ \mathrm{for\ some}\ x\in L_1,\ y\in L_2\}.$$

Recall that if $`L_1,L_2`$ are regular then $`L_1⧢L_2`$ is also regular (see for instance \[3, Proposition 3.5\]).

###### Definition 3

Let $`L\subset \mathrm{\Sigma }^{*}`$. Then $`\mathrm{\Sigma }`$ is the minimal alphabet of $`L`$ if for every $`\sigma \in \mathrm{\Sigma }`$ there exists $`w\in L`$ such that $`w=u\sigma v`$ with $`u,v\in \mathrm{\Sigma }^{*}`$.

We want to construct regular languages $`L_k`$ such that $`\rho _{L_k}(n)=n^k`$. The first two languages are, for example, $`L_0=a^{*}`$ and $`L_1=a^{+}b^{*}`$. To construct a language $`L_2`$, we first need a language $`M_2`$ such that $`\rho _{M_2}(n)=n+1`$. We can take $`M_2=a^{*}b^{*}`$. Hence $`L_2=M_2⧢\{c\}`$. Indeed if one considers the words of length $`n`$ belonging to $`L_2`$, they are obtained from $`n`$ distinct words of length $`n-1`$ belonging to $`M_2`$ and for each of these words, $`c`$ can be positioned in $`n`$ different places. Thus one has exactly $`n^2`$ words of length $`n`$ in $`L_2`$. As an example, we have below the construction of the nine words of length $`3`$,

$$\begin{array}{ccc}a^{*}b^{*}& & a^{*}b^{*}⧢\{c\}\\ aa& \to & aac,\ aca,\ caa\\ ab& \to & abc,\ acb,\ cab\\ bb& \to & bbc,\ bcb,\ cbb.\end{array}$$

Observe that the letter $`c`$ does not belong to the minimal alphabet of $`M_2`$. To construct $`L_3`$, we simply need a language $`M_3`$ such that $`\rho _{M_3}(n)=(n+1)^2`$. This can be done using the previously defined languages $`L_0,L_1,L_2`$, each of them written on a different alphabet,

$$M_3=\underbrace{(a^{*}b^{*}⧢\{c\})}_{\rho (n)=n^2}\cup \underbrace{d^{+}e^{*}\cup f^{+}g^{*}}_{\rho (n)=2n}\cup \underbrace{h^{*}}_{\rho (n)=1}.$$

Then we have $`L_3=M_3⧢\{i\}`$.
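Before continuing the iteration, here is a short sketch (our own code) that builds the words of $`L_2=(a^{*}b^{*})⧢\{c\}`$ exactly as in the table above and checks the density count:

```python
def L2_words(n):
    """Words of length n in (a*b*) shuffled with {c}: each arises from an
    a*b* word of length n-1 with the letter 'c' inserted in one of n slots."""
    out = set()
    for k in range(n):                       # the n base words a^k b^(n-1-k)
        w = "a" * k + "b" * (n - 1 - k)
        for i in range(len(w) + 1):          # n insertion slots for 'c'
            out.add(w[:i] + "c" + w[i:])
    return out

print([len(L2_words(n)) for n in range(1, 7)])   # -> [1, 4, 9, 16, 25, 36]
```

Since $`c`$ does not occur in the base word, every insertion position yields a distinct word, which is why the count is exactly $`n\cdot n=n^2`$.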
This procedure can be repeated and thus for any $`k\ge 2`$, $`L_k`$ can be obtained as a union of previously constructed languages and one operation of shuffle with a new letter. In the following, the notations $`M_k`$ and $`L_k`$ will refer to the previously constructed languages such that $`\rho _{M_k}(n)=(n+1)^{k-1}`$ and $`\rho _{L_k}(n)=n^k`$.

###### Remark 1

Let $`u_k`$ be the size of the minimal alphabet of $`L_k`$. The construction of $`L_k`$ gives

$$\{\begin{array}{c}u_0=1,\ u_1=2,\ u_2=3,\hfill \\ u_m=\sum_{k=0}^{m-1}u_k\left(\genfrac{}{}{0pt}{}{m-1}{k}\right)+1,\ m\ge 3.\hfill \end{array}$$

By direct inspection, one can check that $`u_3=9`$, $`u_4=26`$, $`u_5=90<5!`$ and for $`n=6,\mathrm{},10`$, $`u_n<n!`$. Let $`m\ge 11`$. Since $`\left(\genfrac{}{}{0pt}{}{m-1}{i}\right)<\left(\genfrac{}{}{0pt}{}{m-1}{5}\right)`$ for $`i\le 4`$, one has easily, by recurrence on $`m`$, the following upper bound

$$u_m<\sum_{k=0}^{m-1}k!\left(\genfrac{}{}{0pt}{}{m-1}{k}\right)=e\,\mathrm{\Gamma }(m,1)<e\,(m-1)!$$

where $`\mathrm{\Gamma }(m,1)`$ is the incomplete gamma function defined by

$$\mathrm{\Gamma }(a,b)=\int _b^{+\mathrm{\infty }}t^{a-1}e^{-t}\,dt.$$

###### Remark 2

In view of an earlier version of this paper, J. Shallit suggested another construction of a language $`K`$ such that $`\rho _K(n)=n^k`$. It uses the following result (see \[1, Section 6.5\])

$$n^k=\sum_{t=0}^{k}t!\,S(k,t)\left(\genfrac{}{}{0pt}{}{n}{t}\right)$$

where $`S(k,t)`$ are the Stirling numbers of the second kind. The language over {a,b} with all strings of length $`n`$ containing exactly $`t`$ letters $`b`$ is regular and has a density $`\rho (n)=\left(\genfrac{}{}{0pt}{}{n}{t}\right)`$. Therefore a union of such languages on distinct alphabets gives the language $`K`$. This construction is perhaps simpler than the construction of $`L_k`$ but uses a greater alphabet. The size of the minimal alphabet is $`\mathrm{max}_{t=0,\mathrm{},k}\,t!\,S(k,t)`$ and a lower bound is given by $`k!`$. We won’t use it in the following.

ii) Recognizability of polynomials belonging to $`\mathbb{N}[x]`$

The main idea is that we have to find a regular language such that the positions of the first words of each length are the values taken by the polynomial.

###### Proposition 4

Let $`P\in \mathbb{N}[x]`$. If $`P(\mathbb{N})\subset \mathbb{N}`$ then there exists a numeration system $`S=(L,\mathrm{\Sigma },<)`$ such that $`P(\mathbb{N})`$ is $`S`$-recognizable.

Proof. Since the translation by a constant doesn’t alter the recognizability of a set, as recalled in the introduction (see for details), we can assume that $`P(0)=0`$. We have to construct a regular language $`L`$ such that the number of words of length $`n`$ is exactly $`P(n+1)-P(n)`$. Since $`P(n+1)-P(n)`$ only contains powers of $`n`$ with non-negative integral coefficients, the construction of $`L`$ can be easily achieved by union of languages $`L_k`$ on distinct alphabets (one has a small restriction for the language $`L_0`$; we explain it in the following example to keep this proof simple). To conclude the proof, the reader must recall that if a language $`L`$ is regular then the language $`\mathrm{min}(L)`$ formed of the smallest words of each length for the lexicographic ordering is still regular . One can check that $`\mathrm{r}_S(P(\mathbb{N}))=\mathrm{min}(L)`$. ∎

###### Example 1

Let $`P(x)=2x^2+3x`$. Then

$$P(x+1)-P(x)=4x+5.$$

We consider the language $`L`$ which is formed by four copies of $`L_1`$ and five copies of $`L_0`$. A very important remark is that with five copies of $`L_0`$, we obtain five words of any positive length but only one empty word $`\epsilon `$.
So to get rid of this problem we add to our language four new words of length $`1`$ (we thus add four letters to the alphabet). This remark applies for all the following constructions: if one uses $`n`$ copies of $`L_0`$ then add $`n-1`$ words of length $`1`$ and treat the case $`n=1`$ separately. One can check that for $`n\ne 1`$, the first word of length $`n`$ is the $`[P(n)+1]^{th}`$ word of $`L`$ and

$$\mathrm{r}_S(P(\mathbb{N}\setminus \{1\}))=\mathrm{min}(L)\setminus \mathrm{\Sigma }.$$

Therefore $`\mathrm{r}_S(P(\mathbb{N}))`$ is regular since we only add one word for $`\mathrm{r}_S(P(1))`$ to a regular language.

###### Corollary 5

Let $`k\in \mathbb{N}\setminus \{0,1\}`$. There exists a numeration system $`S`$ such that the set $`\{x^k:x\in \mathbb{N}\}`$ is $`S`$-recognizable. ∎

iii) Recognizability of polynomials belonging to $`\mathbb{Z}[x]`$

This lemma gets rid of the problem of the coefficients belonging to $`\mathbb{Z}`$ instead of $`\mathbb{N}`$.

###### Lemma 6

Let $`k`$ and $`\alpha `$ be two positive integers. There exists a regular language $`\mathcal{L}`$ such that $`\rho _{\mathcal{L}}(n)=n^k-\alpha n^{k-1}`$ for all $`n\ge \alpha `$.

Proof. Assume that $`k\ge 2`$. Let $`\mathrm{\Sigma }_k`$ be the minimal alphabet of $`M_k`$. Then $`L_k=M_k⧢\{\sigma \}`$ where $`\sigma \notin \mathrm{\Sigma }_k`$. For $`i=1,\mathrm{},n`$, $`L_k`$ has exactly $`n^{k-1}`$ words of length $`n`$ with $`\sigma `$ in position $`i`$. From this observation, one can check that

$$\mathcal{L}=L_k\setminus \bigcup _{i=0}^{\alpha -1}\mathrm{\Sigma }_k^{*}\sigma \mathrm{\Sigma }_k^i$$

has exactly $`n^k-\alpha n^{k-1}`$ words of length $`n`$ for $`n\ge \alpha `$. Notice that $`\rho _{\mathcal{L}}(n)=0`$ if $`n<\alpha `$. If $`k=1`$ then we have to remove the $`\alpha `$ first words of each length from $`L_1`$,

$$\mathcal{L}=L_1\setminus [\underbrace{\mathrm{min}(L_1)}_{\begin{array}{c}\text{first words}\\ \text{of each length}\end{array}}\cup \underbrace{\mathrm{min}(L_1\setminus \mathrm{min}(L_1))}_{\begin{array}{c}\text{second words}\\ \text{of each length}\end{array}}\cup \mathrm{}]$$

Notice one more time that $`\rho _{\mathcal{L}}(n)=0`$ if $`n<\alpha `$. ∎

###### Proposition 7

Let $`P\in \mathbb{Z}[x]`$. If $`P(\mathbb{N})\subset \mathbb{N}`$ then there exists a numeration system $`S=(L,\mathrm{\Sigma },<)`$ such that $`P(\mathbb{N})`$ is $`S`$-recognizable.

Proof. We proceed as in Proposition 4 and consider the polynomial $`Q(n)=P(n+1)-P(n)`$. Observe that since $`P(\mathbb{N})\subset \mathbb{N}`$, the coefficient of the dominant power in $`P`$ is positive and thus the same remark holds for $`Q`$. By adding extra terms of the form $`x^j-x^j`$, if $`\mathrm{deg}(Q)=k`$ we can assume that

$$Q(x)=x^{i_1+1}-a_{i_1}x^{i_1}+\mathrm{}+x^{i_r+1}-a_{i_r}x^{i_r}+\sum_{l=0}^{k}b_lx^l$$

where $`i_1,\mathrm{},i_r\in \{0,\mathrm{},k-1\}`$, $`a_{i_1},\mathrm{},a_{i_r}\in \mathbb{N}\setminus \{0\}`$ and $`b_0,\mathrm{},b_k\in \mathbb{N}`$. Let $`\alpha =\mathrm{sup}_{j=1,\mathrm{},r}a_{i_j}`$. Using Lemma 6, for $`j=1,\mathrm{},r`$ we construct languages $`\mathcal{L}_j`$ such that for all $`n\ge \alpha `$, $`\rho _{\mathcal{L}_j}(n)=n^{i_j+1}-a_{i_j}n^{i_j}`$. The reader can construct easily a language $`L`$ such that for all $`n\ge \alpha `$, $`\rho _L(n)=Q(n)`$ by union of languages $`\mathcal{L}_j`$ and $`L_l`$. If we want to consider the smallest word of each length, as in Proposition 4, then the language $`L`$ must contain exactly $`P(\alpha )`$ words of length at most $`\alpha -1`$ (in this case, the first word of length $`\alpha `$ is the $`[P(\alpha )+1]^{th}`$ word of $`L`$ and its numerical value is thus $`P(\alpha )`$). This can be achieved by adding or removing a finite number of words from the regular language $`L`$ (this operation doesn’t alter the regularity of $`L`$). Thus

$$\mathrm{r}_S(\{P(n):n\ge \alpha \})=\mathrm{min}(L)\setminus \mathrm{\Sigma }^{<\alpha }.$$

To conclude we have to add a finite number of words for the representation of $`P(0),\mathrm{},P(\alpha -1)`$ and

$$\mathrm{r}_S(P(\mathbb{N}))=(\mathrm{min}(L)\setminus \mathrm{\Sigma }^{<\alpha })\cup \{\mathrm{r}_S(P(0)),\mathrm{},\mathrm{r}_S(P(\alpha -1))\}.$$

∎

###### Example 2

Let $`P(x)=x^4-3x^2-2x+5`$. Then

$$Q(x)=P(x+1)-P(x)=4x^3+6x^2-2x-4=4x^3+5x^2+x^2-3x+x-4.$$

With four copies of $`L_3`$, five copies of $`L_2`$ and using Lemma 6, one can construct a regular language $`L`$ such that

$$\rho _L(n)=\{\begin{array}{cc}4n^3+6n^2-2n-4\hfill & \mathrm{if}\ n\ge 4\hfill \\ 4n^3+5n^2\hfill & \mathrm{otherwise}.\hfill \end{array}$$

(Here the expression of $`\rho _L(n)`$ is very simple since $`3`$ and $`4`$ only differ by one unit: remark that $`4n^3+6n^2-2n-4=4n^3+6n^2-3n`$ exactly when $`n=4`$, and $`4n^3+6n^2-3n=4n^3+5n^2`$ exactly when $`n=3`$ or $`0`$.) We have $`P(4)=205`$ and the number of words of length at most $`3`$ belonging to $`L`$ is $`214`$, thus we remove $`9`$ words of length at most $`3`$ in $`L`$. Therefore, the first word of length $`4`$ in $`L`$ is the representation of $`P(4)`$ and

$$\mathrm{r}_S(\{P(n):n\ge 4\})=\mathrm{min}(L)\setminus \mathrm{\Sigma }^{<4}$$ (1)

is a regular subset of $`L`$. Since $`\{P(0),\mathrm{},P(3)\}`$ is equal to $`\{1,5,53\}`$, we add the second, the $`6^{th}`$ and the $`54^{th}`$ word of $`L`$ to (1) to obtain $`\mathrm{r}_S(P(\mathbb{N}))`$.

###### Example 3

We begin another example which shows how to obtain a correct expression for $`\rho _L(n)`$ in a trickier situation. Let $`P(x)=x^5-4x^3-2x^2+8`$, then

$$Q(x)=5x^4+9x^3+x^3-3x^2+x^2-12x+x-5.$$

To construct a language $`L`$, we use five copies of $`L_4`$, nine copies of $`L_3`$ and apply Lemma 6 three times. Thus

$$\rho _L(n)=\{\begin{array}{cc}Q(n)\hfill & \mathrm{if}\ n\ge 12\hfill \\ 5n^4+10n^3-3n^2+n-5\hfill & \mathrm{if}\ 12>n\ge 5\hfill \\ 5n^4+10n^3-3n^2\hfill & \mathrm{if}\ 5>n\ge 3\hfill \\ 5n^4+9n^3\hfill & \mathrm{otherwise}.\hfill \end{array}$$
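For $`k=2`$ the construction in the proof of Lemma 6 can be checked directly; the sketch below (our code) keeps from $`L_2=(a^{*}b^{*})⧢\{c\}`$ only the words whose letter $`\sigma =c`$ does not fall within the last $`\alpha `$ positions, and recovers $`\rho (n)=n^2-\alpha n`$ for $`n\ge \alpha `$ (and $`0`$ below $`\alpha `$).

```python
def lemma6_count(n, alpha):
    """Count length-n words of L_2 = (a*b*) shuffle {c} whose 'c' is not
    among the last alpha positions (the Lemma 6 language, case k = 2)."""
    total = 0
    for k in range(n):                 # the n base words a^k b^(n-1-k)
        total += max(n - alpha, 0)     # allowed positions for 'c'
    return total

print([lemma6_count(n, 2) for n in range(1, 8)])
# -> [0, 0, 3, 8, 15, 24, 35], i.e. n**2 - 2*n for n >= alpha = 2
```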
Thus

$$\mathrm{r}_S(\{P(n):n\ge \alpha \})=\mathrm{min}(L)\cap \mathrm{\Sigma }^{\ge \alpha }.$$

To conclude we have to add a finite number of words for the representation of $`P(0),\dots ,P(\alpha -1)`$ and

$$\mathrm{r}_S(P(\mathbb{N}))=(\mathrm{min}(L)\cap \mathrm{\Sigma }^{\ge \alpha })\cup \{\mathrm{r}_S(P(0)),\dots ,\mathrm{r}_S(P(\alpha -1))\}.$$

□

###### Example 2

Let $`P(x)=x^4-3x^2-2x+5`$. Then $`Q(n)=P(n+1)-P(n)`$ $`=`$ $`4x^3+6x^2-2x-4`$ $`=`$ $`4x^3+5x^2+x^2-3x+x-4.`$ With four copies of $`L_3`$, five copies of $`L_2`$ and using Lemma 6, one can construct a regular language $`L`$ such that<sup>1</sup> (footnote: here the expression of $`\rho _L(n)`$ is very simple since $`3`$ and $`4`$ only differ by one unit; remark that $`4n^3+6n^2-2n-4=4n^3+6n^2-3n`$ if and only if $`n=4`$, and $`4n^3+6n^2-3n=4n^3+5n^2`$ if and only if $`n=3`$ or $`n=0`$)

$$\rho _L(n)=\{\begin{array}{cc}4n^3+6n^2-2n-4\hfill & \mathrm{if}\;n\ge 4\hfill \\ 4n^3+5n^2\hfill & \mathrm{otherwise}.\hfill \end{array}$$

We have $`P(4)=205`$ and the number of words of length at most $`3`$ belonging to $`L`$ is $`214`$, thus we remove $`9`$ words of length at most $`3`$ from $`L`$. Therefore, the first word of length $`4`$ in $`L`$ is the representation of $`P(4)`$ and

$$\mathrm{r}_S(\{P(n):n\ge 4\})=\mathrm{min}(L)\cap \mathrm{\Sigma }^{\ge 4}$$ (1)

is a regular subset of $`L`$. Since $`\{P(0),\dots ,P(3)\}`$ is equal to $`\{1,5,53\}`$, we add the second, the $`6^{th}`$ and the $`54^{th}`$ word of $`L`$ to (1) to obtain $`\mathrm{r}_S(P(\mathbb{N}))`$.

###### Example 3

We begin another example which shows how to obtain a correct expression for $`\rho _L(n)`$ in a trickier situation. Let $`P(x)=x^5-4x^3-2x^2+8`$, then

$$Q(x)=5x^4+9x^3+x^3-3x^2+x^2-12x+x-5.$$

To construct a language $`L`$, we use five copies of $`L_4`$, nine copies of $`L_3`$ and apply Lemma 6 three times. Thus

$$\rho _L(n)=\{\begin{array}{cc}Q(n)\hfill & \mathrm{if}\;n\ge 12\hfill \\ 5n^4+10n^3-3n^2+n-5\hfill & \mathrm{if}\;12>n\ge 5\hfill \\ 5n^4+10n^3-3n^2\hfill & \mathrm{if}\;5>n\ge 3\hfill \\ 5n^4+9n^3\hfill & \mathrm{otherwise}.\hfill \end{array}$$

iv) Recognizability of polynomials belonging to $`\mathbb{Q}[x]`$

Finally, we obtain the theorem of recognizability in the general case.

###### Theorem 8

Let $`P\in \mathbb{Q}[x]`$. If $`P(\mathbb{N})\subseteq \mathbb{N}`$ then there exists a numeration system $`S=(L,\mathrm{\Sigma },<)`$ such that $`P(\mathbb{N})`$ is $`S`$-recognizable.

Proof. Let

$$P(x)=\frac{a_k}{b_k}x^k+\frac{a_{k-1}}{b_{k-1}}x^{k-1}+\dots +\frac{a_0}{b_0}$$

with $`b_0,\dots ,b_k,a_k\in \mathbb{Z}\setminus \{0\}`$ and $`a_0,\dots ,a_{k-1}\in \mathbb{Z}`$. Let $`s`$ be the least common multiple of $`b_0,\dots ,b_k`$. One has

$$P=\frac{P^{}}{s}$$

with $`P^{}\in \mathbb{Z}[x]`$. By hypothesis $`P(\mathbb{N})\subseteq \mathbb{N}`$; thus $`P^{}(\mathbb{N})\subseteq s\mathbb{N}`$. As in Proposition 7, there exist a constant $`\alpha `$ and a language $`L^{}`$ such that for all $`n\ge \alpha `$,

$$\rho _{L^{}}(n)=P^{}(n+1)-P^{}(n)=s[P(n+1)-P(n)].$$

We modify $`L^{}`$ (by adding or removing a finite number of words) to have

$$\sum _{i=0}^{\alpha -1}\rho _{L^{}}(i)=sP(\alpha ).$$

It has been proved that the arithmetic progression $`s\mathbb{N}`$ is recognizable in any numeration system. Let $`S^{}=(L^{},\mathrm{\Sigma },<)`$; then $`L=\mathrm{r}_{S^{}}(s\mathbb{N})`$ is a regular language such that

$$\sum _{i=0}^{\alpha -1}\rho _L(i)=P(\alpha )\quad \mathrm{and}\quad \rho _L(n)=P(n+1)-P(n)\;\mathrm{for}\;\mathrm{all}\;n\ge \alpha .$$

We conclude as in Proposition 7. □

###### Example 4

Let $`P(x)`$ $`=`$ $`{\displaystyle \frac{x^4}{3}}-2x^3+{\displaystyle \frac{37}{6}}x^2-{\displaystyle \frac{17}{2}}x+4`$ $`=`$ $`{\displaystyle \frac{1}{3}}(x-7)x^2(x+1)+{\displaystyle \frac{17}{2}}x(x-1)+4.`$ The reader can easily check that $`P(\mathbb{N})\subseteq \mathbb{N}`$.
We have $`s=6`$ and $`P^{}(n+1)-P^{}(n)`$ $`=`$ $`8n^3-24n^2+46n-24`$ $`=`$ $`7n^3+45n+n^3-24n^2+n-24.`$ Using seven copies of $`L_3`$, $`45`$ copies of $`L_1`$ and applying Lemma 6 twice, we construct a language $`L^{}`$ such that

$$\rho _{L^{}}(n)=\{\begin{array}{cc}6(P(n+1)-P(n))\hfill & \mathrm{if}\;n\ge 24\hfill \\ 7n^3+45n\hfill & \mathrm{otherwise}.\hfill \end{array}$$

The number of words of length at most $`23`$ in $`L^{}`$ is $`545652`$ and $`6P(24)=517776`$. Thus we remove $`27876`$ words from $`L^{}\cap \mathrm{\Sigma }^{\le 23}`$. In this new language, lexicographically ordered, we only take the words at position $`6i+1`$, $`i\in \mathbb{N}`$, to obtain the regular language $`L`$. Thus the $`[P(24)+1]^{th}`$ word of $`L`$ is the first word of length $`24`$ belonging to $`L`$ and

$$\mathrm{r}_S(\{P(n):n\ge 24\})=\mathrm{min}(L)\cap \mathrm{\Sigma }^{\ge 24}.$$

To conclude, we have as usual to add a finite number of words for the representation of $`P(0),\dots ,P(23)`$.

###### Remark 3

In a previous paper, we have studied the problem of changing the ordering of the alphabet and we have exhibited a subset $`X`$ of $`\mathbb{N}`$ and numeration systems $`S`$ and $`S^{}`$ which only differ by the ordering of the alphabet such that $`\mathrm{r}_S(X)`$ is regular and $`\mathrm{r}_{S^{}}(X)`$ is not. This kind of singularity doesn’t appear here. For a given polynomial $`P`$, we have shown how to construct a particular numeration system $`S=(L,\mathrm{\Sigma },<)`$ such that $`P(\mathbb{N})`$ is $`S`$-recognizable. By construction, one can easily check that $`P(\mathbb{N})`$ is also $`T`$-recognizable for any system $`T=(L,\mathrm{\Sigma },\prec )`$ where $`\prec `$ is a reordering of $`\mathrm{\Sigma }`$.

## 3 Acknowledgments

The author would like to thank J.-P. Allouche and P. Lecomte for their support and fruitful conversations. We also thank J. Shallit for his valuable suggestions.
## 1 Introduction

Among all astrophysical objects neutron stars (NSs) attract much attention from physicists. Now we know more than 1000 NSs as radiopulsars and more than 100 NSs emitting X-rays, but the Galactic population of these objects is about $`10^8`$–$`10^9`$. Here the first number comes mainly from radiopulsar statistics, and should be considered as a lower limit, because it is not clear whether all NSs pass through the stage of a radiopulsar, since the initial parameters (spin period and magnetic field) of a significant part of NSs can be different from the “standard” values: $`B\sim 10^{12}`$ G, $`p\sim 1`$–$`20`$ ms. For example, NSs can be born below the death-line due to small initial magnetic fields, or relatively long periods (fall-back after a supernova explosion can also be important, because the magnetic moment or spin period can be changed in that process). And the second number is in correspondence with models of chemical evolution of the Galaxy. So only a tiny fraction of one of the most fascinating classes of astrophysical objects is observed at present. NSs can appear as sources of different nature: as isolated objects (radio pulsars, old isolated accreting NSs, soft $`\gamma `$-repeaters etc.) and as binary companions, usually as X-ray sources in close binary systems, powered by wind or disk accretion from a secondary companion. X-ray pulsars are probably among the most prominent of these sources, because there important parameters of NSs (spin period, magnetic field etc.) can be determined. Now we know more than 40 X-ray pulsars (see e.g. Bildsten et al. 1997, Borkus 1998). Observations of optical counterparts of X-ray sources give an opportunity to determine distances to these objects and other parameters with relatively high precision, and with cyclotron line detections one can obtain the value of the magnetic field, $`B`$, of a NS. But such lines are not detected in all sources of that type (partly because they can lie outside the range of spectral sensitivity of the devices, when fields are too high, $`>10^{13}`$ G, for example), and the magnetic field can then be estimated from period measurements (see e.g. Lipunov 1982, 1992). Precise distance measurements usually are not available immediately after an X-ray discovery (especially if localization error boxes are large and the X-ray sources have a transient nature). In that sense, methods of simultaneous determination of field and distance based only on X-ray observations can be useful, and several of them were suggested by different authors previously. Here we try to obtain estimates of the magnetic fields (and distances) of NSs in X-ray pulsars from their period (and flux) variations.

## 2 Estimates of the magnetic field

Magnetic fields of accreting NSs can be estimated using period variations or using the hypothesis of the equilibrium period (see Lipunov 1992). We use both of these methods. For estimating the magnetic moment of a NS from observed values of maximum spin-down we use the following main equation:

$$\frac{dI\omega }{dt}=-k_t\frac{\mu ^2}{R_{co}^3},$$

where $`I`$ is the NS’s moment of inertia, $`\omega =\frac{2\pi }{p}`$ the spin frequency, $`\mu `$ the magnetic moment, and $`R_{co}=\left(\frac{GM}{\omega ^2}\right)^{1/3}`$ the corotation radius. We used $`k_t=1/3`$, $`I=10^{45}`$ g cm<sup>2</sup>, $`M=1.4M_{\odot }`$. We can use this approximation, with no spin-up (accelerating) torque, because we choose moments of maximum spin-down, when the spin-down (braking) torque is much larger than the accelerating one.
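To make the recipe concrete, here is a minimal sketch (ours, not from the original text) that inverts the braking equation above for $`\mu `$, given a spin period and a maximum observed spin-down rate; the numbers in the example call are placeholders, not measured values:

```python
import math

G = 6.674e-8          # cm^3 g^-1 s^-2
MSUN = 1.989e33       # g
I = 1e45              # g cm^2, moment of inertia
M = 1.4 * MSUN        # g
K_T = 1.0 / 3.0

def mu_from_spindown(p, pdot_max):
    """Magnetic moment (G cm^3) from I*d(omega)/dt = -k_t * mu^2 / R_co^3.

    p        -- spin period in seconds
    pdot_max -- maximum observed spin-down rate |dp/dt| (dimensionless)
    """
    omega = 2.0 * math.pi / p
    omega_dot = 2.0 * math.pi * pdot_max / p**2      # |d(omega)/dt|
    r_co = (G * M / omega**2) ** (1.0 / 3.0)         # corotation radius, cm
    return math.sqrt(I * omega_dot * r_co**3 / K_T)

# Illustrative numbers only: a 100 s pulsar spinning down at 1e-8 s/s
print("mu ~ %.2e G cm^3" % mu_from_spindown(100.0, 1e-8))
```

For such placeholder inputs the routine returns $`\mu `$ of order $`10^{31}`$ G cm<sup>3</sup>, the typical scale discussed below.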
This estimate should normally be considered as a lower limit on the value of the magnetic field. We used the graphs from (Bildsten et al., 1997) to derive spin-up and spin-down rates and flux measurements. The data on these graphs are shown with one-day time resolution. Usually the errors are relatively small, and we neglect them. Such estimates were obtained several times by different authors with different data sets, but usually these sets had worse time resolution (see some examples in (Lipunov 1992)). The BATSE data (Bildsten et al., 1997) give an excellent opportunity to repeat these simple calculations. The equilibrium period can be written in different forms for disk and wind-fed systems. For the first case we used the following equation:

$$p_{eq.disk}=2.7\mu _{30}^{6/7}L_{37}^{-3/7}\;\mathrm{s}.$$ (1)

For wind-accreting systems we have:

$$p_{eq.wind}=10.4L_{37}^{-1}T_{10}^{1/6}\mu _{30}\;\mathrm{s}.$$ (2)

Here $`L_{37}`$ is the luminosity in units of $`10^{37}`$ erg s<sup>-1</sup>, $`T_{10}`$ the orbital period in units of 10 days, and $`\mu _{30}`$ the magnetic moment in units of $`10^{30}`$ G cm<sup>3</sup>. Estimates of the magnetic moment, $`\mu `$, obtained with different assumptions are shown in table 1. Three values are shown: an estimate from the spin-down obtained from the BATSE data (Bildsten et al., 1997); an estimate from the equilibrium period for wind-fed systems (eq. (2)); and an estimate for disk-accreting systems (eq. (1)). The last two estimates were both made for X-ray pulsars for which we were not sure whether they are disk- or wind-accreting systems; less probable values (wind accretion in Be-transients) are marked with an asterisk. In table 2 we show the values which were used for the estimates with the hypothesis of the equilibrium period: spin period, mean luminosity in units of $`10^{37}`$ erg s<sup>-1</sup>, and orbital period in units of 10 days (see a compiled catalogue of X-ray pulsars on the Web at the URL: http://xray.sai.msu.ru/~polar/html/publications/cat/x-ray\_n2.www). In table 1 we use the following notation: LMXRB – Low Mass X-Ray Binary; HMSG – High Mass SuperGiant; BeTR – Be-transient source. More precise estimates can be made by fitting all observed values of the spin-up and spin-down rate together with the flux measurements. When the distance to the source is known, only the value of the magnetic field should be fitted. On figures 1–2 we show such estimates for two X-ray pulsars. We plot spin-up and spin-down rates as a function of a parameter which is a combination of the spin period and the source’s luminosity. Spin-up and spin-down values derived from the BATSE data (Bildsten et al., 1997) are plotted as black dots, and theoretical curves for different values of the magnetic moment are also shown. Ideally, a single best curve for the magnetic moment should exist which fits all observational points. In reality the points have some errors, the distance to the source is also known only with some uncertainty, and the simple model of spin-up and spin-down can be only a first approximation. But these estimates of the magnetic moment are more precise than the ones obtained with the equilibrium hypothesis. These estimates can differ from other ones obtained from the equilibrium periods or from a single value of spin-down, as can be seen from table 1.

## 3 Discussion and conclusions

We made estimates of the magnetic field of NSs in X-ray pulsars. Estimates which were made with the assumption that $`p=p_{eq}`$ are rather rough.
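For reference, the equilibrium-period estimates are straightforward to automate. The sketch below (ours, with illustrative numbers only) inverts Eqs. (1)–(2) for $`\mu _{30}`$ as restored above, and also evaluates the radius relation $`R=(2\mu /B)^{1/3}`$ used later in table 3:

```python
def mu30_disk(p, L37):
    """Invert Eq. (1): p_eq = 2.7 * mu30**(6/7) * L37**(-3/7) seconds."""
    return (p / 2.7) ** (7.0 / 6.0) * L37 ** 0.5

def mu30_wind(p, L37, T10):
    """Invert Eq. (2): p_eq = 10.4 * L37**(-1) * T10**(1/6) * mu30 seconds."""
    return p * L37 / (10.4 * T10 ** (1.0 / 6.0))

def ns_radius_cm(mu, B):
    """R = (2*mu/B)**(1/3), with mu in G cm^3 and B in G."""
    return (2.0 * mu / B) ** (1.0 / 3.0)

# Placeholder inputs, not values taken from tables 1-2:
mu = 1e30 * mu30_disk(10.0, 1.0)
print("disk-equilibrium mu ~ %.2e G cm^3" % mu)
print("implied radius for B = 1e12 G: %.1f km" % (ns_radius_cm(mu, 1e12) / 1e5))
```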
The obtained values depend (apart from uncertainties connected with the method itself) on unknown parameters of the NSs, such as masses, radii, and moments of inertia. All of them were taken to have “standard” values, and of course this is only a first approximation. For example, our estimate for the source GRO 1744-28 is $`\mu \approx 10^{30}`$ G cm<sup>3</sup>, and it is smaller than the estimate shown in (Borkus 1998), which is $`B\approx (2`$–$`5)\times 10^{12}`$ G (we note that the estimate obtained by Joss & Rappaport (1997) is significantly lower than both the Borkus estimate and ours). But if one takes a “non-standard” value for $`R`$, these estimates of $`\mu `$ and $`B`$ can be in good correspondence. We show several examples in table 3. The NS radii are calculated from the following simple formula:

$$R=\left(2\mu /B\right)^{1/3}.$$

Here $`\mu `$ is taken from table 1, and the values of $`B`$ are taken from Nagase (1992), Borkus (1998) and Wang (1996). As one can see from the table, for several sources the measured $`B`$ is not in correspondence with our calculated $`\mu `$, and the implied radii of the NSs are too large. Mostly these cases are long-period wind-fed pulsars like GX 301-2, where the formation of a temporary reverse disk is possible during episodes of fast spin-down, so there the maximum spin-down may not give the best field estimate, and estimates from the equilibrium period for the wind-accretion case are in better correspondence with observations. For A 0535+26 our estimate was obtained only from the equilibrium period. And since this system is transient, it can be far from equilibrium. We note that, in general, the existence of a high magnetic field in that source, as follows from our estimates, is confirmed by observations. In the case of 4U 0115+63 the errors on the maximum spin-down rate are significant, and the discrepancy between observed and calculated values can be due to this. We also note that Ginga was not sensitive enough in the spectral region $`\sim 40`$ keV, where cyclotron lines for $`\mu \approx (2`$–$`3)\times 10^{30}`$ G cm<sup>3</sup> are situated. In clearer cases (Her X-1, GRO 1744-28), where we are sure that accretion is of the disk type, our estimates from maximum spin-down are in good correspondence with observations. And we predict, for the cases of Be-transients where disk accretion certainly operates, that in 2S 1417-624, GRO 1948+32, GRO 1008-57, A 1118-616 and 4U 1145-61 observations of cyclotron lines at energies $`\sim 100`$ keV may become possible in the future. Estimates obtained from the maximum spin-down rate and estimates obtained with the hypothesis of the equilibrium period are in rough correspondence, except for the sources OAO 1657-415 and 4U 1145-61, where the maximum spin-down estimates are significantly higher. This can be an indication that the systems are far from equilibrium (especially in the case of the Be-transient 4U 1145-61), or that some additional mechanism of spin-down (outflows, reverse disks, …?) works. In the case of OAO 1657-415 the estimate based on the maximum spin-down rate can be incorrect, similarly to GX 301-2, for the reasons discussed above. Observations of period and flux variations can also be used for the simultaneous determination of the magnetic field of a NS and the distance to the X-ray source (Popov 1999). The method is based on several measurements of the period derivative, $`\dot{p}`$, and the X-ray pulsar’s flux, $`f`$.
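In practice this amounts to a two-parameter fit. A minimal sketch of the procedure (ours, with synthetic placeholder data, standard NS parameters, cgs units, and the disk-accretion relation quoted as Eq. (3) just below) is:

```python
import math
import numpy as np

G = 6.674e-8
I, M, R = 1e45, 1.4 * 1.989e33, 1e6          # standard NS parameters (cgs)
KPC = 3.086e21                               # cm per kpc

def pdot_model(p, flux, d_cm, mu):
    """Eq. (3): braking term minus disk-accretion spin-up term (cgs units)."""
    L = 4.0 * math.pi * d_cm**2 * flux
    brake = 4.0 * math.pi**2 * mu**2 / (3.0 * G * I * M)
    spinup = (math.sqrt(0.45) * 2.0**(-1.0 / 14.0) * mu**(2.0 / 7.0) / I
              * (G * M)**(-3.0 / 7.0) * (p**(7.0 / 3.0) * L)**(6.0 / 7.0)
              * R**(6.0 / 7.0))
    return brake - spinup

# Synthetic "observations" generated at a known (d, mu); placeholders only.
p_obs, d_true, mu_true = 93.5, 5.8 * KPC, 3.76e31
data = [(p_obs, f, pdot_model(p_obs, f, d_true, mu_true)) for f in (1e-9, 3e-10)]

best = min(((d, mu, sum((pdot_model(p, f, d * KPC, mu) - pd)**2
                        for p, f, pd in data))
            for d in np.linspace(2, 10, 81)
            for mu in np.logspace(30, 32, 81)),
           key=lambda t: t[2])
print("best fit: d = %.1f kpc, mu = %.2e G cm^3" % best[:2])
```

The grid search recovers the input pair, illustrating that two ($`\dot{p}`$, $`f`$) points generically suffice to fix both parameters.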
Fitting the distance, $`d`$, and the magnetic moment, $`\mu `$, one can obtain good correspondence with the observed $`p,\dot{p}`$ and $`f`$, and in that way produce good estimates of the distance and magnetic field (see also another way of estimating these parameters, based on the equilibrium period and spin-up measurements, applied to GRO 1744-28 in (Joss & Rappaport 1997) and (Rappaport & Joss 1997)). Let us consider only disk accretion, since we apply our method to a system in which accretion is most probably of the disk type. In that case one can write (see Lipunov 1982, 1992):

$$\dot{p}=\frac{4\pi ^2\mu ^2}{3GIM}-\sqrt{0.45}\,2^{-1/14}\frac{\mu ^{2/7}}{I}\left(GM\right)^{-3/7}\left[p^{7/3}L\right]^{6/7}R^{6/7},$$ (3)

where $`L=4\pi d^2f`$ is the luminosity and $`f`$ the observed flux. So, with some small uncertainty, in the equation above we know all parameters ($`I`$, $`M`$, $`R`$ etc.) except $`\mu `$ and $`d`$. Fitting the observed points with them, we can obtain estimates of $`\mu `$ and $`d`$. The uncertainties mainly depend on the applicability of this simple model. To illustrate the method, we apply it to the X-ray pulsar GRO J1008-57, discovered by BATSE (Bildsten et al., 1997). It is a $`93.5`$ s X-ray pulsar, with a BATSE flux of about $`10^{-9}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. A 33-day outburst was observed by BATSE in August 1993. The source was identified with a Be-system with a $`135^d`$ orbital period (Shrader et al. 1999). We use here only the 1993 outburst, described in Bildsten et al. (1997), who show the flux and frequency history of the source with 1-day integration. In the maximum of the burst the errors are rather small, and we neglect them. Points with large errors were not used. We used standard values of the NS parameters: $`I=10^{45}`$ g cm<sup>2</sup>, the moment of inertia; $`R=10`$ km, the NS radius; $`M=1.4M_{\odot }`$, the NS mass. On figures 3–4 we show the observations (as black dots) and calculated curves (in the disk model; see Shrader et al. (1999), who proposed disk formation during the outbursts, in contrast with Macomb et al. (1994), who proposed wind accretion) on the plane $`\dot{p}`$–$`p^{7/3}f`$, where $`f`$ is the observed flux (logarithms of these quantities are shown). The curves were plotted for different values of the source distance, $`d`$, and the NS magnetic moment, $`\mu `$. Spin-up and spin-down rates were obtained from the graphs in Bildsten et al. (1997). The best fit (both for spin-up and spin-down) gives $`d\approx 5.8\,\mathrm{kpc}`$ and $`\mu \approx 37.6\times 10^{30}`$ G cm<sup>3</sup>. It is shown on both figures. The distance is in correspondence with the value in (Shrader et al. 1999), and such a field value is not unusual for NSs in general and for X-ray pulsars in particular (see, for example, (Lipunov 1992) and (Bildsten et al. 1997)); this value of $`\mu `$ is also consistent with the maximum spin-down (see table 1). Tests on some other X-ray pulsars with known distances and magnetic fields also showed good results. The method of distance and field estimation is approximate and depends on several assumptions (type of accretion, the specified values of $`M,I,R`$, etc.). Estimates of $`\mu `$, for example, can be only in rough correspondence with determinations of the magnetic field $`B`$ from cyclotron lines if the standard value of the NS radius, $`R=10`$ km, is used (see, for example, the case of Her X-1 in (Lipunov 1992)). When the field and the distance are known with high precision, observations of period and flux variations can be used to put limits on the equation of state (see e.g.
Schaab & Weigel 1999). If one uses the maximum spin-up or the maximum spin-down values to evaluate the parameters of the pulsar, then one can obtain values different from the best fit (they are also shown on the figures): $`d\approx 8`$ kpc, $`\mu \approx 37.6\times 10^{30}`$ G cm<sup>3</sup> for the maximum spin-up, and two values for the maximum spin-down: $`d\approx 4\,\mathrm{kpc}`$, $`\mu \approx 37.6\times 10^{30}`$ G cm<sup>3</sup> and one close to our best fit (two similar values of maximum spin-down were observed for different fluxes, but we note that formally the maximum spin-down corresponds to the values which are close to our best fit). This can be used as an estimate of the errors of our method: the accuracy is about a factor of 2 in distance, and about the same in magnetic field, as can be seen from the figures. Determination of the magnetic field (and, probably, the distance) only from X-ray observations can be very useful in uncertain situations, for example, when only X-ray observations without precise localizations are available. Acknowledgments PSB thanks prof. Joss for discussions. The work was supported by the RFBR (98-02-16801) and the INTAS (96-0315) grants.
# Hierarchical Four-Neutrino Oscillations With a Decay Option

UCRHEP-T261, August 1999

Ernest Ma<sup>1</sup>, G. Rajasekaran<sup>2</sup>, Ion Stancu<sup>1</sup>
<sup>1</sup> Physics Department, University of California, Riverside, CA 92521, USA
<sup>2</sup> Institute of Mathematical Sciences, Madras 600113, India

## Abstract

We present a new and novel synthesis of all existing neutrino data regarding the disappearance and appearance of $`\nu _e`$ and $`\nu _\mu `$. We assume four neutrinos: $`\nu _e,\nu _\mu ,\nu _\tau `$, as well as a heavier singlet neutrino $`\nu _s`$ of a few eV. The latter may decay into a massless Goldstone boson (the singlet Majoron) and a linear combination of the doublet antineutrinos. We comment on how this scenario may be verified or falsified in future experiments.

Accepting the totality of present experimental evidence for neutrino oscillations, it is not unreasonable to entertain the idea that there are four light neutrinos. Since the invisible decay of the $`Z`$ boson tells us that there are only three light doublet neutrinos, i.e. $`\nu _e,\nu _\mu ,\nu _\tau `$, the fourth light neutrino $`\nu _s`$ should be a singlet. Usually, $`\nu _s`$ is assumed to mix with the other neutrinos in a $`4\times 4`$ mass matrix for a phenomenological understanding of all the data. However, given that $`\nu _s`$ is different from $`\nu _{e,\mu ,\tau }`$, it may have some additional unusual property, such as decay. In fact, as shown below, this is a natural consequence of the spontaneous breakdown of lepton number in the simplest model, and it has some very interesting and verifiable predictions in future neutrino experiments. If only atmospheric and solar neutrino data are considered, then hierarchical three-neutrino oscillations with

$`\nu _1`$ $`=`$ $`\nu _e\mathrm{cos}\theta -{\displaystyle \frac{1}{\sqrt{2}}}(\nu _\mu +\nu _\tau )\mathrm{sin}\theta ,`$ (1)

$`\nu _2`$ $`=`$ $`\nu _e\mathrm{sin}\theta +{\displaystyle \frac{1}{\sqrt{2}}}(\nu _\mu +\nu _\tau )\mathrm{cos}\theta ,`$ (2)

$`\nu _3`$ $`=`$ $`{\displaystyle \frac{1}{\sqrt{2}}}(\nu _\mu -\nu _\tau ),`$ (3)

where $`m_1<<m_2<<m_3`$, would fit the data very well. Here $`m_3^2\approx 10^{-3}`$ eV<sup>2</sup>, $`(\mathrm{sin}^22\theta )_{atm}=1`$, and $`m_2^2\approx 10^{-5}`$ eV<sup>2</sup> for the matter-enhanced oscillation solution to the solar neutrino deficit with $`(\mathrm{sin}^22\theta )_{sol}\approx 10^{-3}`$ or near 1, or $`m_2^2\approx 10^{-10}`$ eV<sup>2</sup> for the vacuum oscillation solution with $`(\mathrm{sin}^22\theta )_{sol}\approx 1`$. We now add a fourth neutrino $`\nu _s`$ and assume that it mixes a little with $`\nu _e`$ and $`\nu _\mu `$ to explain the LSND data. Since the relevant $`\mathrm{\Delta }m^2`$ is now about 1 eV<sup>2</sup>, it is natural to take $`m_4^2\approx 1`$ eV<sup>2</sup>, but this hierarchical solution is disfavored, because the observed $`\overline{\nu }_\mu \rightarrow \overline{\nu }_e`$ probability is contradicted by the $`\nu _\mu \rightarrow \nu _\mu `$ data of CDHSW together with the $`\overline{\nu }_e\rightarrow \overline{\nu }_e`$ data of Bugey. However, there are two ways that this conclusion may be evaded. (1) Let $`m_4^2\approx 25`$ eV<sup>2</sup>; then the constraint due to the CDHSW experiment is not a factor, but now there are three other accelerator $`\nu _\mu \rightarrow \nu _e`$ experiments: BNL-E734, BNL-E776, and CCFR, which have bounds close to but allowed by the LSND 99% likelihood contour. This is a marginal hierarchical four-neutrino oscillation solution to all the data.
(2) If $`\nu _4`$ decays, then the parameter space for an acceptable solution should open up. For example, in the CDHSW experiment, two detectors at different distances compare their respective $`\nu _\mu `$ fluxes and the ratio is taken. If the $`\nu _4`$ component of $`\nu _\mu `$ decays away already before reaching the first detector, the ratio remains at unity. In contrast to the case of only oscillations, this experiment is then unable to restrict $`m_4^2`$. Not only that: since the argument against the hierarchical four-neutrino spectrum depends crucially on the CDHSW experiment, it is clear that it cannot be valid in general. The idea of neutrino decay is of course not new. It is naturally related to the spontaneous breakdown of lepton number. The associated massless Nambu-Goldstone boson is called the Majoron, and the typical decay $`\nu _2\rightarrow \overline{\nu }_1+`$ Majoron occurs if kinematically allowed. The triplet Majoron is ruled out experimentally because the decay $`Z\rightarrow `$ Majoron + partner (imaginary and real parts respectively of the lepton-number carrying scalar field) would have counted as the equivalent of two extra neutrino flavors. The singlet Majoron is unconstrained because it has no gauge interactions. We assign lepton number $`L=-1`$ to $`\nu _s`$ and assume the existence of a scalar particle $`\chi ^0`$ with $`L=2`$. \[By convention, $`\nu _s`$ is left-handed. If we use a right-handed singlet neutrino $`\nu _R`$ instead, then it would be assigned $`L=+1`$.\] Hence the relevant terms of the interaction Lagrangian are given by

$$\mathcal{L}_{int}=g_s\nu _s\nu _s\chi ^0+\underset{\alpha =e,\mu ,\tau }{\sum }h_\alpha \nu _s(\nu _\alpha \varphi ^0-l_\alpha \varphi ^+)+h.c.$$ (4)

As $`\chi ^0`$ and $`\varphi ^0`$ acquire nonzero vacuum expectation values, $`\nu _s`$ becomes massive and also mixes with $`\nu _{e,\mu ,\tau }`$ to form the mass eigenstates $`\nu _{1,2,3,4}`$. At the same time, $`\sqrt{2}\,\mathrm{Im}\,\chi ^0`$ becomes the massless Majoron $`M`$ and the decay

$$\nu _4\rightarrow \overline{\nu }_{1,2,3}+M$$ (5)

is now possible. Neutrino decay involving only $`\nu _{e,\mu ,\tau }`$ was recently proposed to explain the atmospheric data, but that becomes a poor fit after the inclusion of the upward going muons. More recently, it was shown that combining oscillation and decay (at the expense of also adding $`\nu _s`$) gives again a good fit. In contrast, the effects we envisage here of $`\nu _4`$ decay in atmospheric and solar neutrino data are both small and do not change the usual oscillation interpretation appreciably, as shown below. Let $`\nu _{e,\mu ,\tau ,s}`$ be related to the mass eigenstates $`\nu _{1,2,3,4}`$ (with masses $`m_{1,2,3,4}`$) through the unitary matrix $`U_{\alpha i}`$, which will be assumed real in the following for simplicity. Let $`m_4>>m_3>>m_2>>m_1`$ with $`\nu _4`$ having the decay lifetime $`\tau _4`$.
Then for solar and atmospheric neutrino oscillations with $`m_4^2L/4E>>1`$, the probability of $`\nu _\alpha \rightarrow \nu _\beta `$ is given by

$$P_{\alpha \beta }=\delta _{\alpha \beta }(1-2U_{\alpha 4}^2)+U_{\alpha 4}^2U_{\beta 4}^2(1+x^2)-4\underset{i<j<4}{\sum }U_{\alpha i}U_{\alpha j}U_{\beta i}U_{\beta j}\mathrm{sin}^2\frac{\mathrm{\Delta }m_{ij}^2L}{4E},$$ (6)

where

$$x=e^{-m_4L/2E\tau _4}.$$ (7)

In the case of laboratory experiments where $`\mathrm{\Delta }m_{ij}^2L/4E<<1`$ for $`i<j<4`$ but $`m_4^2L/4E`$ is not necessarily large or small, the corresponding formula is

$$P_{\alpha \beta }=\delta _{\alpha \beta }\left[1-2U_{\alpha 4}^2\left(1-x\mathrm{cos}\frac{m_4^2L}{2E}\right)\right]+U_{\alpha 4}^2U_{\beta 4}^2\left[1-2x\mathrm{cos}\frac{m_4^2L}{2E}+x^2\right].$$ (8)

Note that the above expression simplifies to a function of $`U_{\alpha 4}`$, $`U_{\beta 4}`$, and $`x`$ if $`m_4`$ is large, and to a function of $`U_{\alpha 4}`$ and $`U_{\beta 4}`$ alone if $`x=0`$, whatever the value of $`m_4`$. In those circumstances, the corresponding laboratory experiment has no sensitivity to oscillations, but does measure one fixed number. Specifically, if $`m_4`$ is large, then

$$P_{\mu e}=U_{e4}^2U_{\mu 4}^2(1+x^2),\quad P_{ee}=(1-U_{e4}^2)^2+x^2U_{e4}^4,\quad P_{\mu \mu }=(1-U_{\mu 4}^2)^2+x^2U_{\mu 4}^4.$$ (9)

If $`x=0`$, then regardless of $`m_4`$, Eq. (8) reduces to Eq. (9) but with $`x`$ set equal to zero. The LSND experiment obtains

$$P_{\mu e}=\left(3.1\begin{array}{c}+1.1\hfill \\ -1.0\hfill \end{array}\pm 0.5\right)\times 10^{-3},$$ (10)

whereas BNL-E734 has $`P_{\mu e}<1.7\times 10^{-3}`$ and BNL-E776 has $`P_{\mu e}<1.5\times 10^{-3}`$. Using the LSND 90% confidence-level limit of $`P_{\mu e}>1.3\times 10^{-3}`$, we find therefore reasonable consistency among these experiments. \[The most recent result of the ongoing KARMEN II experiment is $`P_{\mu e}<2.1\times 10^{-3}`$, which will eventually have the sensitivity to test Eq. (10).\] The recent CCFR experiment measures $`P_{\mu e}<0.9\times 10^{-3}`$, but its average $`L/E`$ is one to two orders of magnitude smaller than those of the other experiments, hence its $`x`$-value may be taken to be close to one and the usual oscillation interpretation of the data holds. This constraint implies that $`m_4^2<30`$ eV<sup>2</sup>. At $`m_4\approx 5`$ eV, we are below the CCFR exclusion and in a marginal region of the parameter space for pure neutrino oscillations consistent with the LSND evidence and the exclusion from BNL-E734 and BNL-E776. Between $`m_4\approx 5`$ eV and $`m_4\approx 3`$ eV, the BNL-E734 data exclude a solution if $`x=1`$, and because that experiment has an average $`L/E`$ an order of magnitude smaller than that of BNL-E776, LSND, or CDHSW, the decay factor goes against having a consistent solution here even if $`x<1`$. Below $`m_4\approx 3`$ eV, the oscillation + decay interpretation of the latter 3 experiments becomes important, as shown below. Ideally, one should reanalyze the results of all the laboratory experiments using Eq. (8) and verify whether the positive LSND signal can coexist with the exclusion limits from the other laboratory experiments by extending the usual parameter space of $`m_4`$, $`U_{e4}`$, and $`U_{\mu 4}`$ to include $`\tau _4`$ as well. This can be done only by using the full data set of each of the experiments and is best performed by the experimenters themselves.
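For orientation, Eq. (8) together with the decay factor of Eq. (7) is easy to evaluate numerically. The sketch below (ours) uses the standard conversion $`m_4^2L/2E=2\times 1.267\,\mathrm{\Delta }m^2[\mathrm{eV}^2]\,(L/E)[\mathrm{m}/\mathrm{MeV}]`$ and purely illustrative mixing and decay values:

```python
import numpy as np

def phase(m4sq_eV2, L_over_E):
    """m4^2 L / (2E), with m4^2 in eV^2 and L/E in m/MeV."""
    return 2.0 * 1.267 * m4sq_eV2 * L_over_E

def P_ab(same_flavor, Ua4sq, Ub4sq, m4sq, L_over_E, gamma_over_m4=0.0):
    """Eq. (8), with the decay factor x of Eq. (7) as exp(-(Gamma4/m4)*phase)."""
    ph = phase(m4sq, L_over_E)
    x = np.exp(-gamma_over_m4 * ph)
    p = Ua4sq * Ub4sq * (1.0 - 2.0 * x * np.cos(ph) + x * x)
    if same_flavor:
        p += 1.0 - 2.0 * Ua4sq * (1.0 - x * np.cos(ph))
    return p

# Illustrative inputs: |U_e4|^2 |U_mu4|^2 at the LSND scale, m4^2 = 5.2 eV^2.
Ue4sq = Umu4sq = np.sqrt(1.35e-3)   # chosen so their product is 1.35e-3
for name, LE in [("LSND", 0.75), ("BNL-E776", 0.5),
                 ("CDHSW near", 0.065), ("CDHSW far", 0.442)]:
    p = P_ab(False, Ue4sq, Umu4sq, 5.2, LE, gamma_over_m4=0.05)
    print("%-10s  L/E = %.3f m/MeV   P(mu->e) = %.2e" % (name, LE, p))
```

Scanning `gamma_over_m4` until $`x\,\mathrm{cos}(m_4^2L/2E)`$ is equal at the two CDHSW baselines reproduces the compensation catalogued in Table I below.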
In the absence of such a calculation, we point out here the crucial fact that the CDHSW experiment would see no difference in its two detectors at distances of 130 m and 885 m if the effective value of the quantity $`\mathrm{exp}(-m_4L/2E\tau _4)\mathrm{cos}(m_4^2L/2E)`$ is the same at both. In Table I, we show $`\mathrm{\Gamma }_4/m_4(=1/\tau _4m_4)`$ as a function of $`m_4^2`$ near 6 eV<sup>2</sup> for which this happens, using as our very crude approximation the fixed values of $`L_1/E=0.065`$ m/MeV and $`L_2/E=0.442`$ m/MeV. This illustrates the possibility that the decrease from $`x_1`$ to $`x_2`$ due to decay may be compensated by the increase in the value of the cosine from $`L_1`$ to $`L_2`$ due to oscillations. Note also that there is a range of $`m_4^2`$ for which a null solution exists with varying $`\mathrm{\Gamma }_4/m_4`$, whereas if the latter is zero, then $`m_4^2`$ has only discrete solutions (at 4.8 and 6.6 eV<sup>2</sup> for example). In the realistic case of integrating over the experimental energy spectrum, both solutions will be smeared out, but the possibility of decay should result in a larger range of acceptable values of $`m_4^2`$. For consistency, we also show in Table I the values of $`f\equiv P_{\mu e}/U_{e4}^2U_{\mu 4}^2=1-2x\mathrm{cos}(m_4^2L/2E)+x^2`$ for the LSND and BNL-E776 experiments, using the fixed values of $`L/E`$ = 0.75 and 0.5 m/MeV respectively. This shows that the value of $`P_{\mu e}`$ as seen by the LSND experiment can be larger than that of BNL-E776 for $`4.8<m_4^2<5.8`$ eV<sup>2</sup>. To discuss solar and atmospheric neutrino oscillations, let us focus on the following specific model. Let $`\mathrm{cos}\theta =\sqrt{2/3}`$ and $`\mathrm{sin}\theta =\sqrt{1/3}`$ in Eqs. (1) and (2), and let $`\nu _s`$ mix with $`\nu _2`$ only; then $`U_{\alpha i}`$ is given by

$$U=\left[\begin{array}{cccc}\sqrt{2/3}& c\sqrt{1/3}& 0& s\sqrt{1/3}\\ -\sqrt{1/6}& c\sqrt{1/3}& \sqrt{1/2}& s\sqrt{1/3}\\ -\sqrt{1/6}& c\sqrt{1/3}& -\sqrt{1/2}& s\sqrt{1/3}\\ 0& -s& 0& c\end{array}\right],$$ (11)

where $`c`$ and $`s`$ are respectively the cosine and sine of the $`\nu _s`$–$`\nu _2`$ mixing angle. For solar neutrino oscillations, we have

$$P_{ee}=\left(1-\frac{s^2}{3}\right)^2-\frac{4}{9}(1-s^2)\left(1-\mathrm{cos}\frac{\mathrm{\Delta }m_{12}^2L}{2E}\right)+\frac{x^2s^4}{9}.$$ (12)

In the limit $`s=0`$, this reduces to the usual two-neutrino formula with $`\mathrm{sin}^22\theta =8/9`$, which is a good fit to the data, either as the large-angle matter-enhanced solution or the vacuum oscillation solution. With a small $`s^2/3`$ of order a few percent \[between 0.026 ($`x=1`$) and 0.037 ($`x=0`$) for $`P_{\mu e}`$(LSND) = $`1.35\times 10^{-3}`$\], this is definitely still allowed. Note that this result is not sensitive at all to the last term because $`s^4/9`$ is of order $`10^{-3}`$. For atmospheric neutrino oscillations, we have

$$P_{ee}=\left(1-\frac{s^2}{3}\right)^2+\frac{x^2s^4}{9},\quad P_{e\mu }=P_{\mu e}=(1+x^2)\frac{s^4}{9},$$ (13)

$$P_{\mu \mu }=\left(1-\frac{s^2}{3}\right)^2-\frac{1}{2}\left(1-\frac{2s^2}{3}\right)\left(1-\mathrm{cos}\frac{\mathrm{\Delta }m_{23}^2L}{2E}\right)+\frac{x^2s^4}{9}.$$ (14)

Here the limit $`s=0`$ corresponds to the canonical $`\nu _\mu \rightarrow \nu _\tau `$ solution with $`\mathrm{sin}^22\theta =1`$. As it is, the prediction of $`\nu _e\rightarrow \nu _e`$ is still a fixed number, but smaller than unity (0.93 for $`s^2/3=0.037`$).
Given that there is an uncertainty of about 20% in the absolute flux normalization, we should consider instead the ratio

$$\frac{2P_{\mu \mu }+P_{e\mu }}{P_{ee}+2P_{\mu e}}\approx 2\left[1-\frac{s^4}{6}-\frac{1}{2}\left(1-\frac{2s^4}{9}\right)\left(1-\mathrm{cos}\frac{\mathrm{\Delta }m_{23}^2L}{2E}\right)\right],$$ (15)

where we have made an expansion in powers of $`s^2`$ and assumed that the ratio of $`\nu _\mu `$ to $`\nu _e`$ produced in the atmosphere is two. It is clear that this is numerically indistinguishable from the case $`s=0`$. In this model, the decay $`\nu _4\rightarrow \overline{\nu }_2+M`$ has some very interesting experimental consequences. For example, $`\nu _e`$ from the sun decays through its $`\nu _4`$ component into $`\overline{\nu }_2=(c/\sqrt{3})(\overline{\nu }_e+\overline{\nu }_\mu +\overline{\nu }_\tau )-s\overline{\nu }_s`$. Hence

$$P(\nu _e\rightarrow \overline{\nu }_e)=P(\nu _e\rightarrow \overline{\nu }_\mu )=P(\nu _e\rightarrow \overline{\nu }_\tau )=\frac{s^2c^2}{9}\approx 10^{-2},$$ (16)

where the energy of $`\overline{\nu }_\alpha `$ is only 1/2 that of $`\nu _e`$ and $`x=0`$ has been assumed. This is in principle detectable, especially since the $`\overline{\nu }_ep`$ capture cross section is about 100 times that of $`\nu _ee`$ scattering at a few MeV. Unfortunately, the Super-Kamiokande experiment has an energy threshold of 6.5 MeV for the recoil electron, and taking into account the additional 1.8 MeV threshold for the $`\overline{\nu }_ep\rightarrow e^+n`$ reaction, this would require the original $`\nu _e`$ energy to be above 16.6 MeV, placing it outside the solar neutrino spectrum. With the recently lowered Super-Kamiokande energy threshold of 5.5 MeV, the fraction of solar $`\nu _e`$ above 14.6 MeV is $`1.6\times 10^{-4}`$. Given the small probability of $`P(\nu _e\rightarrow \overline{\nu }_e)`$, this will not change appreciably the total number of observed $`e`$-like events. Regardless of energy threshold, the inability of Super-Kamiokande to distinguish $`e^+`$ from $`e^{-}`$ or to detect the 2.2 MeV photon from neutron capture on free protons makes it difficult to pin down this possibility in any case. In the Sudbury (SNO) neutrino experiment, the energy threshold for detecting recoil electrons is 5 MeV, but since there is also a threshold of about 4 MeV for breaking up the deuterium nucleus into two neutrons and a positron, the neutrino energy required is more than about 18 MeV. This again places it outside the solar neutrino spectrum. On the other hand, if the experimental energy threshold can be significantly lowered, then SNO may be able to see this effect because the $`\overline{\nu }_e`$ signature ($`\overline{\nu }_e+d\rightarrow n+n+e^+`$) is distinct from that of $`\nu _e`$. The best chance for detecting antineutrinos from the decay of $`\nu _4`$ is offered by the BOREXINO experiment with a very low energy threshold of 0.25 MeV. Taking into account the 1.8 MeV needed for inverse beta decay, i.e. $`\overline{\nu }_ep\rightarrow e^+n`$, this means that solar neutrinos with energy above 4.1 MeV can be detected as antineutrinos. The idea of looking for antineutrinos from the sun was motivated by the possibility of a large neutrino magnetic moment which may convert $`\nu _e`$ into $`\overline{\nu }_e`$ in the sun’s magnetic field. The capability of BOREXINO for detecting this has been discussed earlier. For our new distinctive effect of $`\nu _4`$ decay, the observed antineutrino energy spectrum is predicted to go from $`f(E)`$ to $`f(E/2)`$, where $`E`$ is the energy of the original neutrino.
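As a numerical cross-check of Eqs. (11)–(16) (the sign pattern of $`U`$ written above is the one required by unitarity), the short sketch below, ours, verifies $`UU^T=1`$ and reproduces the quoted numbers $`P_{ee}\approx 0.93`$ and $`P(\nu _e\rightarrow \overline{\nu }_e)\approx 10^{-2}`$:

```python
import numpy as np

s2over3 = 0.037                      # the x = 0 value quoted in the text
s = np.sqrt(3 * s2over3)
c = np.sqrt(1 - s * s)
a, b, r, f = (np.sqrt(2 / 3), np.sqrt(1 / 3), np.sqrt(1 / 2), np.sqrt(1 / 6))
U = np.array([[ a,  c * b,  0, s * b],
              [-f,  c * b,  r, s * b],
              [-f,  c * b, -r, s * b],
              [ 0, -s,      0, c    ]])
assert np.allclose(U @ U.T, np.eye(4))            # Eq. (11) is unitary

x = 0.0                                           # fully decayed nu_4
P_ee_atm = (1 - s**2 / 3) ** 2 + x**2 * s**4 / 9  # Eq. (13)
P_e_to_antie = s * s * c * c / 9                  # Eq. (16)
print("P_ee (atmospheric) = %.3f" % P_ee_atm)     # ~0.93
print("P(nu_e -> anti-nu_e) = %.4f" % P_e_to_antie)  # ~1e-2
```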
For atmospheric neutrinos, since $`\overline{\nu }_\mu `$ and $`\overline{\nu }_e`$ are produced together with $`\nu _\mu `$ and $`\nu _e`$ in about equal amounts, it is not possible to tell if a given event comes from the primary neutrino or its decay product, even if the detector could measure the charge of the observed lepton. To search for the $`\nu _\mu \rightarrow \overline{\nu }_e`$ transition in the LSND and KARMEN experiments, one would use the monoenergetic (29.8 MeV) $`\nu _\mu `$ from $`\pi ^+`$ decay at rest, which has the signature of a monoenergetic positron of 13.1 MeV from inverse beta decay, i.e. $`\overline{\nu }_ep\rightarrow e^+n`$, in coincidence with a 2.2 MeV photon from the subsequent capture of the neutron by a free proton. However, this signal is overwhelmed by the neutral-current reaction $`\nu {}_{}{}^{12}C\rightarrow \nu {}_{}{}^{12}C^{*}`$, with the subsequent emission of a 15.1 MeV photon. In proposed long-baseline $`\nu _\mu \rightarrow \nu _\tau `$ appearance experiments, the oscillation probability is given by

$$P_{\mu \tau }=\left(1-\frac{s^2}{3}\right)^2-\frac{1}{2}\left(1-\frac{2s^2}{3}\right)\left(1+\mathrm{cos}\frac{\mathrm{\Delta }m_{23}^2L}{2E}\right)+\frac{x^2s^4}{9},$$ (17)

which is not easily distinguished from the $`s=0`$ case. However, the decay products of $`\nu _4`$, i.e. $`\overline{\nu }_e`$, $`\overline{\nu }_\mu `$, and $`\overline{\nu }_\tau `$, may be observable with their own unique signatures, depending on the capabilities of the proposed detectors. In the case of four-neutrino oscillations, the effective number of neutrinos $`N_\nu `$ in Big Bang Nucleosynthesis is an important constraint. In this model, with $`m_4\approx `$ a few eV and $`s^2\approx `$ a few percent, the presence of a stable $`\nu _s`$ would have counted as an extra neutrino species, making $`N_\nu =4`$. This may not be acceptable if $`N_\nu <4`$, as indicated by the observed primordial <sup>4</sup>He abundance. The decay of $`\nu _4`$ changes $`N_\nu `$ to 3 + the contribution of the Majoron (i.e. 4/7). With $`\nu _4`$ as a component of $`\nu _e`$, neutrinoless double $`\beta `$ decay has an effective $`\nu _e`$ mass of $`(s^2/3)m_4\approx 0.2`$ eV if $`m_4\approx 5`$ eV. This value is just at the edge of the most recent experimental upper bound. Finally a comment on the neutrino contribution to dark matter may be in order. With $`\nu _4`$ decaying and $`m_1`$, $`m_2`$, and $`m_3`$ being too small, there is no neutrino dark matter. However, it is possible that $`m_1\approx m_2\approx m_3\approx `$ a few eV, while $`m_4`$ is higher by another few eV, in which case $`\nu _1`$, $`\nu _2`$, and $`\nu _3`$ will contribute to dark matter. Our discussion goes through almost unchanged, except that $`m_4^2`$ in Eq. (8) will be replaced by $`m_4^2-m_{1,2,3}^2`$. In conclusion, we have shown in this paper that a hierarchical four-neutrino scenario is acceptable as a solution to all present neutrino data regarding the disappearance and appearance of $`\nu _e`$ and $`\nu _\mu `$. The assumed singlet neutrino of a few eV may decay into a linear combination of the three known doublet neutrinos, each carrying half of the energy. This new feature allows our proposal to be tested in future solar neutrino experiments such as BOREXINO (and perhaps SNO), and should be considered in forthcoming long-baseline accelerator neutrino experiments. ACKNOWLEDGEMENT One of us (G.R.) thanks the Physics Department, University of California, Riverside, for hospitality while this work was done. The research of E.M. was supported in part by the U. S. Department of Energy under Grant No. DE-FG03-94ER40837.
# Spin-polarized tunneling of La0.67Sr0.33MnO3/YBa2Cu3O7-δ junctions

## I Introduction

Hybrid structures between ferromagnets and superconductors have been the focus of much attention in terms of spin-dependent spectroscopy and spin-injection devices. The fundamental properties of ferromagnet/insulator/superconductor (F/I/S) junctions fabricated from conventional metal superconductors have been studied for about 30 years. The recent rediscovery of perovskite manganites which exhibit colossal magnetoresistance (CMR) has opened up new possibilities in this field, because layered structures of ferromagnets and high-$`T_c`$ superconductors can be fabricated from these oxide compounds. There are two main intriguing aspects in this field. One is the influence of carrier injection on bulk superconducting properties, such as the suppression of the critical current density and the critical temperature due to nonequilibrium states. The injection of spin-polarized quasiparticles is expected to enhance the nonequilibrium because the spin-relaxation time is estimated to be much longer than the quasiparticle-recombination time in light metals. It has been experimentally verified that suppression of the critical current in high-$`T_c`$ superconductors is induced by spin-polarized quasiparticle injection from ferromagnets, either CMR compounds or pure metals. The other aspect is the boundary properties, such as the connection of wave functions, the Andreev reflection, and bound-state formation at the surface for $`p`$-wave and for $`d`$-wave superconductors. Moreover, several theories have elucidated the transport properties under the influence of an exchange field for $`s`$-wave superconductors and for $`d`$-wave superconductors. However, detailed comparisons between theory and experiment have not yet been accomplished. In this paper, we study the transport properties and magnetic field response of La<sub>0.67</sub>Sr<sub>0.33</sub>MnO<sub>3</sub>/YBa<sub>2</sub>Cu<sub>3</sub>O<sub>7-δ</sub> (LSMO/YBCO) cross-strip type junctions. We discuss the properties of the interface between ferromagnets and high-$`T_c`$ superconductors, which is important even for injection devices because spin injection relies on transport through surfaces. In the case of $`d`$-wave superconductors, the formation of surface bound states is known to modify the transport properties. When the orientation of the junction is along the $`ab`$-plane and the misorientation of the $`a`$-axis to the boundary is finite, zero-energy bound states are formed at the boundary. The presence of zero-energy states at the boundary of high-$`T_c`$ superconductors has been detected as zero-bias conductance peaks (ZBCPs) in a wide variety of tunneling junctions. The modification of this property in LSMO/YBCO junctions, where quasiparticles on the normal side are strongly spin polarized, is an interesting problem, because the transport properties of the junctions are expected to be sensitive to an applied magnetic field and to the interface properties.

## II Experimental

Because of their simple geometry, cross-strip junctions are used for the measurements. Figure 1 shows schematic illustrations of the top view and the cross-sectional view of the junctions. $`C`$-axis oriented epitaxial YBCO thin films of 100 to 150 nm thickness were deposited on SrTiO<sub>3</sub> (100) substrates by pulsed laser deposition (PLD).
The films were patterned into bridges 30–60 $`\mu `$m wide and 30–100 $`\mu `$m long by conventional photolithography and wet chemical etching employing phosphoric acid. Next, 100-nm-thick LSMO films were deposited on the patterned YBCO films, also by PLD. After the deposition, the films were subsequently annealed at 400 °C in an oxygen atmosphere for 1–2 hours. The LSMO/YBCO layered films were patterned into cross-strip structures by Ar ion milling. The substrate temperature was 750 °C for the deposition of YBCO, and 700 °C for LSMO. The laser energy density was 1.5 J/cm<sup>2</sup> for YBCO, and 2 J/cm<sup>2</sup> for LSMO. The laser repetition frequency was 2 Hz and the oxygen pressure was 700 mTorr for both materials. Au film contact pads, annealed at 400 °C in an oxygen atmosphere to reduce the contact resistance, were used as electrodes. The LSMO films on the $`c`$-axis-oriented YBCO films were confirmed to be $`c`$-axis oriented by X-ray diffraction measurement. The temperature dependence of the resistance showed that the superconducting transition temperature $`T_c`$ of YBCO was about 90 K, and the magnetization measurement showed that the ferromagnetic transition temperature of LSMO was about 350 K. The $`I`$-$`V`$ characteristics of the junctions were measured using a dc four-probe method, and the conductance spectra ($`i.e.`$, $`dI/dV`$-$`V`$ curves) were numerically calculated from the $`I`$-$`V`$ data. Magnetic fields of 0–12 T generated by a superconducting magnet were applied along the direction parallel to the film surface and perpendicular to the trajectories of the tunneling electrons.

## III Results and Discussion

In the following, experimental results on the conductance spectra and their magnetic field response in the LSMO/YBCO junctions are presented. Since most of our samples exhibited similar features, we concentrate on the data obtained from the sample with a junction area of $`30\times 30`$ $`\mu `$m<sup>2</sup>. We will demonstrate two features peculiar to the LSMO/YBCO junctions: the tunneling electrons are actually spin-polarized, and the barrier naturally formed between YBCO and LSMO behaves as a ferromagnetic insulator, leading to the spin-filtering effect. The conductance spectra are analyzed based on a theoretical formula for ferromagnet/ferromagnetic insulator/$`d`$-wave superconductor (F/FI/d) junctions, and the notation used in the analysis mostly follows that used in Ref. . A cylindrical Fermi surface is assumed with a Fermi energy $`E_{FS}`$ of 0.3 eV in YBCO, and the effective masses are set to be equal in YBCO and LSMO. For the model of the ferromagnet, the Stoner model is adopted. The polarization $`P`$ and the Fermi-wave vectors of quasiparticles for up\[down\]-spins $`k_{N,\uparrow [\downarrow ]}`$ in LSMO are not independent parameters in the framework of the Stoner model. The normalized barrier heights for up\[down\]-spin $`V_{\uparrow [\downarrow ]}`$ in the ferromagnetic insulator are used as the fitting parameters in the following analysis. We note that the formula used in the present study has some drawbacks in the analysis of an actual junction. One is that the delta function form $`V_{\uparrow [\downarrow ]}\delta (x)`$ is applied at the interface $`x=0`$, although the barrier in an actual junction has a finite thickness. Moreover, the spin-flip effect at the interface and the nonequilibrium properties of YBCO are neglected. Figure 2 shows the temperature dependence of the conductance spectra of the junction. As the temperature is lowered from room temperature, no noticeable changes are detected above 70 K.
At temperatures below 40 K, a gap-like structure (suppression of conductance) appears at energies between $`\pm `$15 mV ($`\sim \mathrm{\Delta }`$). The presence of the ZBCP becomes clear as the temperature is further decreased. At the lowest temperature (4.2 K), a large peak appears at zero-bias level. The presence of the gap-like structure and the ZBCP indicates that a barrier exists between the LSMO and YBCO layers. In this study, however, we did not deposit any material as a barrier between the LSMO and YBCO layers. It has been reported that normal metal/YBCO junctions in which no insulating material had been deposited between the normal metal and YBCO layers exhibited similar differential conductance spectra. Thus, the barrier layer is expected to form naturally at the interface between YBCO and other materials. However, the composition and structure of the barrier in our junctions are not clarified at present, because the barrier is too thin for these characteristics to be investigated. Moreover, as mentioned in the introduction, the existence of the ZBCPs is well explained within the tunneling theory for anisotropic superconductors by assuming that the tunneling current is governed by in-plane ($`ab`$-plane) components. In the cross-strip geometry, the in-plane contact between YBCO and LSMO existed at the side of the YBCO film, and it also existed on the $`c`$-axis oriented surface because atomic force microscope measurements showed that it contains a large amount of $`ab`$-edges due to the island growth. Therefore, the above assumption is reasonable because the conductivity in the $`ab`$-plane is far larger than that along the $`c`$-axis direction in high-$`T_c`$ superconductors. The observed ZBCP is qualitatively consistent with those found in other reports on normal metal/YBCO tunneling junctions. On the other hand, the most significant difference between the present case and those in the other reports is that the counter electrode is not a normal metal but a ferromagnet, and as a result the tunneling electrons are expected to be spin-polarized. It has been theoretically shown that the polarization can be estimated from the height of the ZBCP, since the ZBCP is largely suppressed by the spin-polarization. This effect corresponds to the fact that the Andreev reflected quasiparticle exists not as a propagating wave but as an evanescent wave, referred to as the virtual Andreev reflection process, when the injection angle $`\theta `$ of quasiparticles to the interface satisfies $`\mathrm{sin}^{-1}(k_S/k_{N,\uparrow })<\theta <\mathrm{sin}^{-1}(k_{N,\downarrow }/k_{N,\uparrow })`$, where $`k_S`$ is the Fermi-wave vector in the superconductor. Hence, the current through the surface bound states is prohibited when the energy of the quasiparticle is less than the gap amplitude. Figure 3 shows calculated normalized conductance spectra $`\sigma (eV)`$ for various polarizations ($`\sigma (eV)=\overline{\sigma }_S(eV)/\overline{\sigma }_N(eV)`$, where $`\overline{\sigma }_S(eV)`$ and $`\overline{\sigma }_N(eV)`$ are the tunneling conductance in the superconducting and normal states, respectively). It is clear that the height of the ZBCP is strongly suppressed as the polarization becomes larger. From this relationship, it is estimated that the polarization in the present experiments is much less than 90%.
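A rough quantitative feeling for this suppression follows from the critical angle alone. The snippet below (ours) assumes a 3D parabolic Stoner band, so that the spin-resolved Fermi wave vectors obey $`k_{N,\downarrow }/k_{N,\uparrow }=[(1-P)/(1+P)]^{1/3}`$; this parametrization is an assumption of the illustration, not a statement from the paper:

```python
import numpy as np

def andreev_window_deg(P):
    """Critical injection angle beyond which the Andreev-reflected
    quasiparticle is evanescent, for a parabolic Stoner band where the
    spin densities give k_down/k_up = ((1 - P)/(1 + P))**(1/3)."""
    ratio = ((1.0 - P) / (1.0 + P)) ** (1.0 / 3.0)
    return np.degrees(np.arcsin(ratio))

for P in [0.0, 0.3, 0.6, 0.9]:
    print("P = %.1f -> virtual Andreev reflection beyond theta = %.1f deg"
          % (P, andreev_window_deg(P)))
```

At $`P=0`$ the window closes (critical angle 90°), while at large $`P`$ a substantial fraction of injection angles no longer feeds the zero-energy bound states, which is why a high polarization would flatten the ZBCP far more than is observed here.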
As the magnetic field becomes larger, an enhancement of the background conductance is always observed. A similar feature has been reported by Vas’ko et al. in a DyBa<sub>2</sub>Cu<sub>3</sub>O<sub>7</sub>/La<sub>2/3</sub>Ba<sub>1/3</sub>MnO<sub>3</sub> junction, but has not been observed in normal metal/YBCO junctions. The origin of the change of the background conductance will be discussed later. All conductance spectra almost collapse onto a single curve, except for a small change inside the gap ($`|eV|<\mathrm{\Delta }`$), when they are normalized by the conductance at $`V=20`$ mV, as shown in the inset of Fig. 4. On the other hand, a small field response can be seen around zero-bias level. Figure 5 shows the variation of the normalized conductance $`\sigma ^{}(eV,H)`$ near zero-bias level due to the applied field. To clearly show the features, the normalized conductance spectra are plotted with the zero-field conductance subtracted \[$`\sigma ^{}(eV,H)-\sigma ^{}(eV,0)`$\] for several applied fields. Two notable features have been observed. One is the development of a dip at zero bias, indicating that the ZBCP splits into two peaks. ZBCP splitting has been observed in normal metal/high-$`T_c`$ superconductor junctions. The other is the asymmetric heights of the two shoulders beside the dip. This asymmetry may not be induced by the asymmetric background conductance, because symmetric peak splitting has been observed in normal metal/high-$`T_c`$ superconductor junctions which exhibit asymmetric backgrounds. Several possible origins of the ZBCP splitting have been proposed for $`d`$-wave superconductors: i) the Zeeman splitting of $`\pm g\mu _BH/2`$ in the energy levels between up and down spins, where $`g`$ is the $`g`$-factor and $`\mu _B`$ is the Bohr magneton; ii) the inducement of broken time-reversal symmetry (BTRS) states such as the $`d_{x^2-y^2}+is`$ wave; iii) the spin-filtering effect due to a ferromagnetic tunneling barrier. In the following, we will show that the spin-filtering effect is the most plausible origin of the observed magnetic field response. The inset of Fig. 5 shows the amplitude of the peak splitting $`\delta _p`$, estimated from the peak-to-peak separation of the two shoulders, as a function of applied magnetic field. Completely different from the usual Zeeman splitting, whose response is linear in the applied field ($`g\mu _BH/2`$), $`\delta _p`$ shows nonlinear behavior with respect to the applied field: a rapid rise near zero field ($`H<0.5`$ T), and almost linear behavior in high fields ($`H>5`$ T). Moreover, the observed $`\delta _p`$ is much larger than $`g\mu _BH`$ independent of $`H`$. It is important to note that this behavior is similar to the $`M`$–$`H`$ curves of a conventional paramagnet and is also consistent with peak splitting due to the spin-filtering effect, which has been observed in Al/EuO- and Al/EuS-based junctions. Based on this fact, we can reject the possibility of simple Zeeman splitting. Moreover, although the inducement of BTRS can also explain the nonlinear splitting, the asymmetry of the splitting observed in our result cannot be explained by this theory, as described in Ref. . If we assume that the barrier naturally forms between YBCO and LSMO and that it attains a ferromagnetic insulator nature similar to that of EuO and EuS barriers, the observed field response can be consistently explained in terms of the spin-filtering effect.
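The discrimination used here is easy to visualize: a pure Zeeman effect grows strictly linearly as $`g\mu _BH`$, while a splitting tied to the barrier magnetization saturates like a paramagnetic $`M`$–$`H`$ curve. The small illustration below is ours; the saturating curve is a tanh stand-in for a Brillouin-type response, and its scale parameters are arbitrary placeholders rather than fitted values:

```python
import numpy as np

MU_B = 5.788e-5                     # Bohr magneton, eV/T
g = 2.0
H = np.linspace(0.01, 12.0, 49)     # applied field, tesla

zeeman_meV = g * MU_B * H * 1e3     # strictly linear in H
# Paramagnet-like saturating splitting (illustrative tanh proxy):
delta_sat_meV, H_char_T = 2.5, 0.5  # placeholder scale parameters
filter_meV = delta_sat_meV * np.tanh(H / H_char_T) + 0.05 * H

for h, z, sf in zip(H[::8], zeeman_meV[::8], filter_meV[::8]):
    print("H = %5.2f T   Zeeman = %5.3f meV   filter-like = %5.2f meV" % (h, z, sf))
```

Even at 12 T the Zeeman term stays near 1 meV, while the saturating curve rises steeply below 0.5 T and then grows slowly, which is qualitatively the behavior of $`\delta _p`$ described above.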
The strength of the exchange interaction in the barrier (represented by $`\widehat{U}_B`$ in Ref. ) responds to the applied field in a way similar to paramagnetic materials. In the applied magnetic field, the effective barrier height becomes different for the up- and down-spin components due to the finite $`\widehat{U}_B`$, and the tunneling electrons then exhibit a spin-dependent energy splitting. Figure 6 simulates a theoretical calculation of the magnetic field response of $`\sigma (eV,H)-\sigma (eV,0)`$ with $`P`$=30%. As $`H`$ becomes larger, the shoulders develop. Moreover, the spin-polarization induces different heights of the split peaks. These features coincide with the experimental data. The idea that the spin-filtering effect is the origin of the ZBCP splitting is also supported by the magnetic field response of the background conductance. As described above, the background conductance increases as the magnetic field is increased. Although LSMO exhibits CMR around the ferromagnetic transition temperature ($`\sim `$350 K), the magnetoresistance of the LSMO film at 4.2 K was less than 1% over the measured magnetic field range. Therefore, the increase of the conductance with magnetic field originates in the transport property of the junction rather than in a change of the resistivity of LSMO. In the framework of the spin-filtering effect, the magnetic field response of the background conductance is understood as the field dependence of $`V_{\uparrow [\downarrow ]}`$: i) in the absence of the field, $`V_{\uparrow }=V_{\downarrow }`$ applies; ii) as the magnetic field is increased, $`V_{\uparrow }`$ decreases and $`V_{\downarrow }`$ increases; iii) since the tunneling barrier height for the majority carriers (up-spin) decreases, the total conductance of the junction is rapidly enhanced. Figure 7 shows the simulated results for the background conductance as a function of $`(V_{\downarrow }-V_{\uparrow })/(V_{\uparrow }+V_{\downarrow })`$ for the $`P`$=0%, 30%, and 60% cases. It is clear that as the difference between $`V_{\uparrow }`$ and $`V_{\downarrow }`$ increases, the background conductance also increases. The influence of the polarization is especially clear near zero field. Without the polarization, $`\partial \sigma (eV)/\partial H`$ is zero near $`H=0`$. This is because the effect of the imbalance in the tunneling probabilities of up- and down-spins cancels out without the polarization, while the cancellation becomes smaller as the polarization increases. We have shown that the observed conductance spectra and their magnetic field response in LSMO/YBCO junctions can be consistently understood in terms of the spin-filtering effect in $`d`$-wave superconductors. Two possibilities are deduced from the present results. One is that a degraded layer existing between LSMO and YBCO functions as an intrinsic barrier and behaves as a ferromagnetic insulator. The other is that a new type of magnetic boundary (surface) effect, such as a Schottky barrier for spins, exists at the interface or on the surface due to the termination of the CuO or the MnO planes. However, several questions still remain: i) what kind of material exhibits the ferromagnetic insulator behavior at the YBCO/LSMO interface, and ii) can the peak splitting observed in normal metal/high-$`T_c`$ superconductor junctions be attributed to the spin-filtering effect? To clarify these problems, a more detailed characterization of the interface layer will be carried out in the near future. In addition, the origin of the linear background conductance has not been discussed here. As is well known, this feature has been widely observed in high-$`T_c`$ superconductor junctions.
In addition, the origin of the linear background conductance has not been discussed here. As is well known, this feature has been widely observed in high-$`T_c`$ superconductor junctions . Based on the above-mentioned observation that the normalized conductance curves collapse onto a single curve, we assume that the origin of the linear background conductance is an effect independent of the boundary properties discussed above. Kirtley and Scalapino attributed the energy-dependent conductance to an increase of the tunneling probability due to an inelastic tunneling process via spin fluctuations . Although we believe that the present results do not contradict this theory, further study is required to clarify this point. ## IV Summary We observed magnetic field responses of the conductance spectra peculiar to LSMO/YBCO junctions, such as an increase of the background conductance and asymmetric ZBCP splitting, which have not been observed in normal metal/YBCO junctions. Moreover, the nonlinear response of $`\delta _p`$ to an applied field is different from simple Zeeman splitting. Although the induction of BTRS can explain a nonlinear ZBCP splitting in an applied field, this explanation is not suitable because the asymmetry of the splitting cannot be explained. On the other hand, it is shown that the observed features in the present study agree with the theory of tunneling spectroscopy for F/FI/d junctions, which assumes a spin-dependent transmission (tunneling) probability between a ferromagnet and a superconductor. This suggests that a ferromagnetic barrier naturally forms between LSMO and YBCO, and that the field response of the conductance spectra is due to the spin polarization of the tunneling carriers and the spin-filtering effect. From the present results, we deduce two possibilities for the ferromagnetic barrier. One is that a degraded layer existing between LSMO and YBCO functions as an intrinsic barrier and behaves as a ferromagnetic insulator. The other is that a new type of magnetic boundary (surface) effect, such as a Schottky barrier of spins, exists at the interface or on the surface due to the termination of the CuO or the MnO planes. ###### Acknowledgements. This work has been partially supported by the Core Research for Evolutional Science and Technology (CREST) of the Japan Science and Technology Corporation (JST) of Japan.
# Fermi-level alignment at metal-carbon nanotube interfaces: application to scanning tunneling spectroscopy ## Abstract At any metal-carbon nanotube interface there is charge transfer, and the induced interfacial field determines the position of the carbon nanotube band structure relative to the metal Fermi-level. In the case of a single-wall carbon nanotube (SWNT) supported on a gold substrate, we show that the charge transfers induce a local electrostatic potential perturbation which gives rise to the Fermi-level shift observed in scanning tunneling spectroscopy (STS) measurements. We also discuss the relevance of this study to recent experiments on carbon nanotube transistors and argue that the Fermi-level alignment will be different for carbon nanotube transistors with low resistance and high resistance contacts. The discovery of carbon nanotubes opened up a new artificial laboratory in which one-dimensional transport can be investigated, similar to semiconductor quantum wires. However, the study of transport in carbon nanotubes has been complicated by the difficulty of making low resistance contacts to the measuring electrodes. The high resistances reported in various two- and three-terminal measurements have led Tersoff (and also the present authors ) to suggest that wavevector conservation at the metal-carbon nanotube contact may play an important role in explaining the high contact resistance. In this paper we address a different question: how does the Fermi-level in the metallic contact align with the energy levels of the nanotube? The answer to this question is very important in interpreting the transport measurements. Depending on the contact geometry, transport can occur in the direction parallel to the nanotube axis, in the case of the nanotube field-effect-transistor (FET) , or perpendicular to it, in the case of the STS measurement. In the STS measurement, the Fermi-level is found to have shifted to the valence band edge of the semiconducting nanotube, which is then used to explain the operation of the nanotube FETs with high resistance contacts, where the measured two-terminal resistance for metallic nanotubes is $`\sim 1\mathrm{M}\mathrm{\Omega }`$. However, low temperature transport measurements using low resistance contacts (where the contact resistance is of the order of the resistance quantum) indicate that the Fermi-level is located between the valence and conduction bands of the semiconducting nanotube, instead of being pinned to the valence band edge. This conflict raises the important question of whether the Fermi-level positioning may depend on the contact geometry and/or the interface coupling. In this paper we present a theory of the scanning tunneling spectroscopy of a single-wall carbon nanotube (SWNT) supported on the Au(111) substrate. The main results of our work are: (1) the work function difference between the gold substrate and the nanotube leads to charge transfers across the interface, which induce a local electrostatic potential perturbation on the nanotube side, giving rise to the Fermi-level shift observed in the STS measurement; (2) for nanotube transistors, the atomic-scale potential perturbation at the interface is not important *if the coupling between the metal and the nanotube is strong*. The metal-induced gap states (MIGS) model provides a good starting point for determining the Fermi-level position; (3) a proper theory of STS should take the tip electronic structure into account.
For an ordinary metal-semiconductor interface, the MIGS model provides a conceptually simple way of understanding the band lineup problem: it predicts that the metal Fermi-level $`E_F`$ should align with the “charge neutrality level” (which can be taken as the energy where the gap states cross over from valence- to conduction-type character) in the semiconductor. This elegant idea has been applied with impressive success by Tersoff to various metal-semiconductor junctions and semiconductor heterojunctions; it greatly simplifies the band lineup problem and gives quantitatively accurate predictions of the Schottky barrier height in many cases. The success of this model relies on the fact that there exists a continuum of gap states around $`E_F`$ on the semiconductor side of the metal-semiconductor interface due to the tails of the metal wavefunctions decaying into the semiconductor, which can have significant amplitude over a few atomic layers near the interface. Any deviation from local charge neutrality in the interface region will result in metallic screening by the MIGS. However, this is not true for the interface formed when a SWNT is deposited onto the gold substrate. Since the coupling to the substrate is weak and the metal wave function decays across a significant van der Waals separation, the MIGS will provide only relatively weak screening. When the conductance spectrum is measured using a scanning tunneling microscope (STM), transport occurs perpendicular to the nanotube axis and the characteristic length scale is the diameter of the SWNT, which is on the scale of nanometers and can be comparable to the range of the interfacial perturbation. The detailed potential variations over this dimension will be important in determining the STS current-voltage characteristics, similar to the case of molecular adsorbates on metal surfaces. Fig. 1 illustrates schematically the local electrostatic potential profile at the substrate-nanotube-tip heterojunction. If the charge distributions on both sides did not change when the interface is formed, the vacuum levels would line up. However, due to the difference of work functions (as shown in Fig. 1(b)), electrons will transfer from the SWNT to the gold substrate, and the resulting electrostatic potential profile $`\delta \varphi `$ must be determined self-consistently (since the perturbation due to the tip is much weaker, we neglect its effect when treating the substrate-SWNT interface). We assume an ideal substrate-SWNT interface and study the interface electronic structure using the $`\pi `$-electron tight-binding (TB) model of the SWNT. In this model, the band structure of SWNTs is symmetric with respect to the position of the on-site $`\pi `$ orbital energy. We take the Fermi-level of the gold as the energy reference; the initial $`\pi `$ orbital energy at each carbon atom of the SWNT is then $`W_m-W_{nt}=0.8`$ eV, the difference between the work functions of the metal and the nanotube. The final on-site $`\pi `$ orbital energy is the superposition of this initial value and the change in the electrostatic potential $`\delta \varphi `$, which varies as one moves away from the gold substrate (Fig. 1(c)). For the gold substrate we use the TB parameters of Papaconstantopoulos . For the coupling between the SWNT and the gold surface, we use the values obtained from Extended Hückel Theory (EHT) . Only the carbon atoms closest to the gold surface are assumed to be coupled. Since the SWNT has periodic symmetry along its axis, only one unit cell needs to be considered.
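As a side note, the basic geometry entering the $`\pi `$-electron model follows from standard zone-folding relations (textbook results, not our interface calculation); a small sketch with an assumed hopping energy $`\gamma _0=2.7`$ eV:

```python
import math

# Diameter of a (n, m) SWNT from the graphene lattice constant a = 2.46 A,
# the metallicity rule (n - m) mod 3 == 0, and the gap estimate
# E_g ~ 2*gamma0*a_cc/d for semiconducting tubes (a_cc = 1.42 A).
A_LATT, A_CC, GAMMA0 = 2.46, 1.42, 2.7

def diameter_A(n, m):
    return A_LATT * math.sqrt(n * n + n * m + m * m) / math.pi

def describe(n, m):
    d = diameter_A(n, m)
    if (n - m) % 3 == 0:
        print(f"({n},{m}): d = {d / 10:.2f} nm, metallic")
    else:
        print(f"({n},{m}): d = {d / 10:.2f} nm, semiconducting, E_g ~ {2 * GAMMA0 * A_CC / d:.2f} eV")

describe(10, 10)  # armchair tube considered below
describe(16, 0)   # zigzag tube considered below
```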
We use the Green’s function method to calculate the electron population of each carbon atom from the expression $`n_i=-\frac{2}{\pi }\mathrm{Im}\left\{\int _{-\mathrm{\infty }}^{E_F}G_{i,i}(E)\,dE\right\}`$ where $`G(E)`$ is the projection of the Green’s function onto one unit cell of the SWNT and $`G_{i,i}`$ is the $`i`$th diagonal matrix element corresponding to atom $`i`$ in the unit cell. $`G(E)`$ is calculated by reducing the Hamiltonian of the whole interface to an effective one in which the interactions between the given unit cell and the rest of the interface system are incorporated into the corresponding self-energy operators, using the same method as described in ch. 3 of Datta . Within tight-binding theory, self-consistency is achieved by adjusting the diagonal elements of the Hamiltonian and imposing Hartree consistency between the potential perturbation $`\delta \varphi _i`$ and the charge perturbation $`\delta n_i`$, using a self-consistent scheme similar to that developed by Flores and coworkers and also Harrison (for details see Ref. ). Fig. 2(a)-(b) and Fig. 3(a)-(b) show the results for (10,10) and (16,0) SWNTs with diameters of $`1.35`$ and $`1.25`$ nm respectively, close to those measured in Ref. . The substrate-SWNT distance is $`3.2`$ Å. We have also studied (15,0) and (14,0) SWNTs. All nanotubes show similar behavior. We believe similar conclusions can be reached for chiral nanotubes, since the electronic structure of SWNTs depends only on their metallicity and diameter, not on chirality. The similarity between the metallic and the semiconducting nanotube shown here can be understood from the work of Benedict et al. , who show that the dielectric response of SWNTs in the direction perpendicular to the axis does not depend on the metallicity, only on the diameter. Since the $`\pi `$ orbital energy coincides with the position of the Fermi-level (mid-gap level) of the isolated metallic (semiconducting) SWNT, the Fermi-level shift in the STS measurement should correspond to the on-site $`\pi `$ orbital energy of the carbon atom closest to the STM tip *if only this atom is coupled to the tip*. However, considering the cylindrical shape of the SWNT, more carbon atoms could be coupled to the tip, and the Fermi-level shift then corresponds to the average value of the on-site orbital energies of the carbon atoms within the coupling range. From the plotted values of Fig. 2 and Fig. 3, we then expect Fermi-level shifts of $`0.2`$ eV for both nanotubes, close to the measured values. The peak structures in the local density-of-states (LDOS) of the bottom carbon atom (closest to the gold substrate) corresponding to the Van Hove singularities are broadened due to the hybridization with the gold surface atomic orbitals. Their positions also change, which can be understood from the bonding-antibonding splitting resulting from the hybridization of the nanotube molecular orbitals and the gold orbitals. Also notable is the enhancement of the density of states in the gap at the expense of the valence band, reminiscent of the Levinson theorem, which states that the total number of states should be conserved in the presence of a perturbation, be it due to an impurity or a surface. In contrast, the perturbation of the LDOS at the carbon atom furthest from the substrate is much weaker. The calculated charge transfer per atom is small and mainly localized on the carbon atoms close to the gold surface, in agreement with recent *ab initio* calculations.
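To make the population formula used above concrete, the following self-contained toy calculation (an illustrative short tight-binding chain, not the actual substrate-SWNT Hamiltonian) evaluates $`n_i`$ from the diagonal of the retarded Green’s function:

```python
import numpy as np

# n_i = -(2/pi) Im \int^{E_F} G_ii(E + i*eta) dE with a small broadening eta,
# for a uniform nearest-neighbour chain at half filling (E_F at band centre).
N, t, E_F, eta = 8, -1.0, 0.0, 0.05
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = t          # nearest-neighbour hopping

energies = np.linspace(-4.0, E_F, 2000)    # the band lies within [-2, 2]
ldos = np.zeros((len(energies), N))
for k, E in enumerate(energies):
    G = np.linalg.inv((E + 1j * eta) * np.eye(N) - H)  # retarded Green's function
    ldos[k] = -np.imag(np.diag(G)) / np.pi             # per-spin local DOS
n = 2.0 * np.trapz(ldos, energies, axis=0)             # factor 2 for spin
print(np.round(n, 2))  # roughly one electron per site at half filling
```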
*Applications to the scanning tunneling spectroscopy.* The differential conductance $`dI/dV`$ (or the normalized one $`d\mathrm{ln}I/d\mathrm{ln}V`$) obtained from the STS measurement is often interpreted as reflecting the local density of states of the sample, based on the s-wave model of the tip. However, first-principles calculations have shown this model to be inadequate for tips made from transition metals, where small clusters tend to form at the tip surface, giving rise to localized d-type tip states. As a result, the tip electronic structure can have profound effects on the interpretation of the STS measurement. The STS current-voltage characteristics can be calculated using the standard technique of scattering theory. Here we have taken a simpler approach instead, aiming only to illustrate how the tip electronic structure may affect the interpretation of the STS measurement. Since the coupling across the SWNT-tip interface is weak, the tunneling Hamiltonian theory may be invoked to write the current crudely as: $$I\propto \int _0^{eV}\rho _{nt}(E)\,\rho _{tip}(E-eV)\,dE$$ (1) where $`\rho _{nt}`$ and $`\rho _{tip}`$ are the densities of states of the SWNT and the tip respectively. The differential conductance thus obtained reflects the convolution of the density of states of the SWNT and the tip. If $`\rho _{tip}`$ is constant within the range of the integral, we recover the usual expression $`dI/dV\propto \rho _{nt}`$. Note that $`\rho _{nt}`$ is calculated taking the on-site perturbations and the coupling to the gold substrate into account (we use the LDOS of the carbon atom closest to the tip here). We have used two models for the tip: (1) a semi-infinite Pt(111) crystal; (2) a Pt atom adsorbed on the surface of the semi-infinite Pt(111) crystal . The results are shown in Fig. 2(c)-(d) and Fig. 3(c)-(d) along with those obtained from Eq. (1) assuming constant $`\rho _{tip}`$. As can be seen from the plots, additional fine structures are introduced between the peak structures of $`\rho _{nt}`$ when we take the electronic structure of the tip into account.
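A minimal numerical sketch of Eq. (1) (with toy densities of states of our own choosing, not the computed $`\rho _{nt}`$ of Figs. 2–3) illustrates how a structured tip reshapes the measured spectrum:

```python
import numpy as np

def rho_nt(E):   # toy SWNT DOS: flat background plus two van Hove-like peaks
    return 1.0 + 0.05 / np.sqrt((E - 0.6) ** 2 + 1e-3) \
               + 0.05 / np.sqrt((E + 0.6) ** 2 + 1e-3)

def rho_tip(E, flat=True):  # flat tip vs. a localized d-like tip resonance
    return np.ones_like(E) if flat else 1.0 + 2.0 * np.exp(-((E + 0.2) / 0.05) ** 2)

def current(V, flat=True):  # Eq. (1), positive sample bias only for simplicity
    E = np.linspace(0.0, V, 600)
    return np.trapz(rho_nt(E) * rho_tip(E - V, flat), E)

V = np.linspace(0.02, 1.2, 120)
for flat in (True, False):
    I = np.array([current(v, flat) for v in V])
    dIdV = np.gradient(I, V)
    tag = "flat tip" if flat else "structured tip"
    print(f"{tag}: largest dI/dV feature near V = {V[np.argmax(dIdV)]:.2f}")
# A flat tip gives dI/dV ~ rho_nt (feature at 0.6); the tip resonance adds a
# replica displaced by its energy, i.e. extra structure between the peaks.
```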
*Discussions and conclusions* With the advancement of new techniques for making electric contact to the SWNT , low resistance contacts with two-terminal conductance close to the conductance quantum have been obtained . Current-voltage characteristics measured at low temperature using these new techniques show that the Fermi-level is located in the gap of the semiconducting SWNT. In these experiments, SWNTs are grown from patterned catalyst islands on a silicon wafer; Au/Ti contact pads are then placed on the catalyst islands, fully covering the islands and extending over their edges. Since the SWNTs thus grown are mostly capped, the coupling between the SWNTs and the electrode is presumably similar to that of fullerenes, which are well known to form strong chemical bonds with noble and transition metal surfaces (see Dresselhaus et al. ). The large contact area between the SWNTs and the metal makes the coupling across the interface even stronger, which then allows the metal wavefunctions to penetrate deep into the nanotube side. Therefore, we expect that the dominant contribution to the barrier height is from the metallic screening by MIGS, which tends to line up the metal Fermi-level with the “charge neutrality level” of the SWNT. Since the band structure of the SWNT is exactly symmetric within the $`\pi `$-electron model, the “charge neutrality level” will be at mid-gap, although it can be different when a more accurate model of the electronic structure is used. Our emphasis here is not to give a quantitative estimate of the barrier height, but rather to show that the MIGS model provides the conceptual basis for understanding the limit of strong interface coupling. The situation becomes more complicated for measurements using high resistance contacts, where the SWNT is side-contacted and the coupling across the interface is weak. In this case, the MIGS model of the Schottky barrier is no longer applicable. Interface defects and bending of the SWNT at the edge of the contact can induce localized states in the interface region which will accommodate additional charges and affect the formation of the Schottky barrier. Therefore, we expect that the final Fermi-level position depends on the detailed contact conditions and *may or may not be located at the valence band edge*. We believe that a detailed *ab initio* analysis is needed to clarify the various mechanisms involved. This work is jointly supported by NSF and ARO through grant number 9708107-DMR. We are indebted to M.P. Anantram for drawing our attention to this important topic.
# The small-Péclet-number approximation in stellar radiative zones ## 1 Introduction The understanding of the flow dynamics within stellar radiative zones constitutes a major challenge for the current theory of stellar evolution. These motions transport chemical elements, and it turns out that their contribution might reconcile the existing models of stellar structure with the observations of surface abundances (Pinsonneault 1998). By transporting angular momentum, such flows also play an important role in the evolution of stellar rotation. In particular, they could explain the nearly solid body rotation of the solar radiative zone which has been revealed by helioseismology (Gough et al. 1996). We consider here the effects of the very high thermal diffusivity of stellar interiors on the dynamics of these motions. In most cases, radiation dominates the thermal exchanges within radiative zones. This heat transport is so efficient that the thermal diffusivities associated with the radiative flux are larger by several orders of magnitude than the thermal diffusivities encountered in colder media like planetary atmospheres. For example, the thermal diffusivity varies between $`10^5`$ and $`10^7\mathrm{cm}^2\mathrm{s}^{-1}`$ inside the sun whereas it is equal to $`0.18\mathrm{cm}^2\mathrm{s}^{-1}`$ in the standard conditions of the terrestrial atmosphere. This property of the stellar fluid is expected to strongly affect the flow dynamics, especially inside the stably stratified radiative zone where the time scale of thermal diffusion appears to be shorter than the dynamical time scale characterizing radial motions. Helioseismology data show that the thermal structure of this region is very close to the one predicted by hydrostatic models, indicating that existing fluid motions are not fast enough to modify significantly the thermal structure built up by the radiative flux (Canuto & Christensen-Dalsgaard 1998). Qualitatively, the damping of temperature fluctuations by thermal diffusion is expected to have two main effects on the dynamics. The first one is to reduce the amplitude of the buoyancy force. This restoring force acts on fluid parcels displaced from their equilibrium level and is proportional to the density difference between the parcel and its environment. Since density fluctuations are proportional to temperature fluctuations for incompressible motions, fast thermal exchanges reduce the force amplitude. An important consequence of this effect is to favour the onset of shear layer instabilities in stably stratified layers (Dudis 1974, Zahn 1974). The second main effect of the thermal diffusion is to increase the dissipation of kinetic energy. Any vertical motion in a quiescent atmosphere involves work done by the buoyancy force, so that a fraction of the injected kinetic energy is necessarily transformed into potential energy. If the fluid parcels could “fall” adiabatically towards their equilibrium position, all the stored potential energy could return back to kinetic energy. However, the damping of temperature fluctuations provokes an irreversible loss of kinetic energy. A simple example of this process is the damping of gravity waves. Both effects of the thermal diffusivity are thus opposed. While a decrease of the buoyancy force amplitude reduces the associated work and thus the amount of kinetic energy extracted, the second effect increases the fraction of the kinetic energy which is irreversibly lost.
Then, for a given mechanical forcing, a relevant question is whether a larger thermal diffusivity reduces or enhances the kinetic energy of the flow. We lack quantitative results, especially in non-linear regimes, to answer such a basic question and, more generally, to understand the effect of thermal diffusivity in a stellar context. This situation is partly due to the difficulty of reproducing flows with realistic Prandtl numbers either in laboratory experiments or in numerical simulations. The Prandtl number $`P_r=\nu /\kappa `$, which compares the kinematic viscosity $`\nu `$ and the thermal diffusivity $`\kappa `$, varies between $`10^{-6}`$ and $`10^{-9}`$ within the sun whereas it is equal to $`0.7`$ in air. Although some fluids like liquid metals may have small Prandtl numbers in laboratory conditions ($`P_r=0.025`$ for mercury; see for example Cioni et al. 1997), these values remain far from the stellar case. The severe numerical limitation is explained by the huge separation between the time scales of viscous dissipation and thermal diffusion. The computation of both processes over a few dynamical times would require a prohibitive amount of computer time. In this paper, we investigate the limit where the time scale characterizing the thermal exchanges is much shorter than the time scale of the motions (the ratio between both time scales defines the Péclet number). In Sect. 2, an asymptotic form of the governing equations is derived in the context of the Boussinesq approximation. Evidence that these asymptotic equations actually approximate the Boussinesq equations for small Péclet numbers is presented in Sect. 3. Then, in Sect. 4, the elementary properties of the small-Péclet-number equations are described, emphasizing their theoretical and practical interest. Finally, the relevance of this approximation in a stellar context is discussed in Sect. 5. ## 2 Derivation of the small-Péclet-number approximation We restrict ourselves to a fluid layer embedded in a uniform vertical gravity field and bounded by two horizontal plates. A mechanical forcing is assumed to drive motions which can be described by the Boussinesq approximation. We do not need to specify the forcing for the moment; we only assume that it introduces a velocity scale $`U_*`$. The temperature is fixed on both plates so that a linear diffusive profile denoted $`T^i(z)`$ is established initially. The dynamical effect of the stable stratification is measured by the Brunt-Väisälä frequency, $`N_*=\left(\beta g\mathrm{\Delta }T_*/L_*\right)^{1/2}`$, where $`g`$ denotes the gravitational acceleration, $`\beta `$ is the thermal expansion coefficient, $`\mathrm{\Delta }T_*`$ the temperature difference between the upper and lower plates and $`L_*`$ the distance separating the plates. In the context of the Boussinesq approximation, the governing non-dimensional equations read: $$\frac{\partial 𝐮}{\partial t}+\left(𝐮\cdot \nabla \right)𝐮=-\nabla p+R_i\theta 𝐞_z+\frac{1}{R_e}\nabla ^2𝐮,$$ (1) $$\frac{\partial \theta }{\partial t}+𝐮\cdot \nabla \theta +w=\frac{1}{P_e}\nabla ^2\theta ,$$ (2) $$\nabla \cdot 𝐮=0,$$ (3) where $`𝐮=u𝐞_x+v𝐞_y+w𝐞_z`$ is the velocity vector, $`p`$ the pressure and $`\theta (x,y,z)=T(x,y,z)-T^i(z)`$ the temperature deviation from the initial temperature profile. The $`z`$ axis refers to the vertical direction, while the $`x`$ and $`y`$ axes refer to the horizontal ones. In the heat equation, the third term on the left-hand side corresponds to the vertical advection of temperature against the mean temperature gradient $`dT^i(z)/dz`$. This gradient is equal to unity in dimensionless units.
To non-dimensionalize the equations we used the velocity scale $`U_*`$, the length scale $`L_*`$, the dynamical time scale $`t_\mathrm{D}=L_*/U_*`$, the pressure scale $`\varrho _0U_*^2`$ and the temperature variation $`\mathrm{\Delta }T_*`$. The system is then governed by the Richardson number, $`R_i`$, the Péclet number, $`P_e`$, and the Reynolds number, $`R_e`$, respectively defined as $$R_i=\left(\frac{N_*L_*}{U_*}\right)^2,P_e=\frac{U_*L_*}{\kappa },R_e=\frac{U_*L_*}{\nu }.$$ The Richardson number is the square of the ratio between the dynamical time scale $`t_\mathrm{D}`$ and the buoyancy time scale $`t_\mathrm{B}=1/N_*`$. The thermal diffusivity $`\kappa `$ appears in the Péclet number, which compares the thermal diffusion time scale $`t_\kappa =L_*^2/\kappa `$ with the dynamical time scale. The Reynolds number is the ratio between the viscous time scale $`L_*^2/\nu `$ and the dynamical time scale. In the limit of small Péclet number, we assume that the solutions $`𝐮`$ and $`\theta `$ of the Boussinesq equations behave like Taylor series: $$𝐮=𝐮_\mathrm{𝟎}+P_e𝐮_\mathrm{𝟏}+P_e^2𝐮_\mathrm{𝟐}+\mathrm{}$$ (4) $$\theta =\theta _0+P_e\theta _1+P_e^2\theta _2+\mathrm{}.$$ (5) Note that in the context of the Boussinesq equations, the pressure is an intermediate variable determined by the incompressibility condition (3). By inserting these asymptotic expansions in the heat equation, we find at zero order in $`P_e`$: $$\nabla ^2\theta _0=0.$$ (6) Since the temperature remains fixed to its initial value on both bounding plates, temperature deviations vanish on both plates. Then, Eq. (6) implies $$\theta _0=0.$$ (7) Thus, at the lowest order in $`P_e`$, the Boussinesq equations reduce to the Navier-Stokes equations: $$\frac{\partial 𝐮_\mathrm{𝟎}}{\partial t}+\left(𝐮_\mathrm{𝟎}\cdot \nabla \right)𝐮_\mathrm{𝟎}=-\nabla p_0+\frac{1}{R_e}\nabla ^2𝐮_\mathrm{𝟎},$$ (8) together with the incompressibility condition, $$\nabla \cdot 𝐮_\mathrm{𝟎}=0.$$ (9) At this order, the dynamical and thermal equations are decoupled. The coupling is recovered at first order in $`P_e`$: $$\frac{\partial 𝐮_\mathrm{𝟏}}{\partial t}+\left(𝐮_\mathrm{𝟎}\cdot \nabla \right)𝐮_\mathrm{𝟏}+\left(𝐮_\mathrm{𝟏}\cdot \nabla \right)𝐮_\mathrm{𝟎}=-\nabla p_1+R_i\theta _1𝐞_z+\frac{1}{R_e}\nabla ^2𝐮_\mathrm{𝟏},$$ (10) $$w_0=\nabla ^2\theta _1,$$ (11) $$\nabla \cdot 𝐮_\mathrm{𝟏}=0.$$ (12) Solutions $`\widehat{𝐮}=𝐮_\mathrm{𝟎}+P_e𝐮_\mathrm{𝟏},\widehat{\theta }=\theta _0+P_e\theta _1`$ valid up to first order in $`P_e`$ must satisfy the above system of equations (7), (8), (9), (10), (11), (12). We note that the Lagrangian derivative of the temperature deviations does not appear in the heat equation of this system. Thus, at first order in $`P_e`$, one would have found the same system of equations for $`𝐮_\mathrm{𝟎}`$, $`𝐮_\mathrm{𝟏}`$, $`\theta _0`$, $`\theta _1`$ if the Taylor series had been introduced in the following equations: $$\frac{\partial 𝐮}{\partial t}+\left(𝐮\cdot \nabla \right)𝐮=-\nabla p+R_i\theta 𝐞_z+\frac{1}{R_e}\nabla ^2𝐮,$$ (13) $$P_ew=\nabla ^2\theta ,$$ (14) $$\nabla \cdot 𝐮=0.$$ (15) Therefore, if $`𝐮`$ and $`\theta `$ actually behave as Taylor series for small Péclet numbers, the solution of the above equations is identical to the solution of the Boussinesq equations up to first order in $`P_e`$. The unique difference with the Boussinesq equations comes from the heat equation; a minimal numerical illustration of the practical gain brought by Eq. (14) is sketched below.
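The following sketch (our illustration, in an assumed doubly periodic toy geometry rather than the bounded layer considered here) shows the practical consequence of Eq. (14): the temperature deviation is obtained diagnostically from the vertical velocity by a single Poisson solve, with no diffusive time step to resolve.

```python
import numpy as np

# Solve nabla^2 theta = Pe * w spectrally on a doubly periodic grid.
def theta_from_w(w, Pe, Lx=2 * np.pi, Lz=2 * np.pi):
    nz, nx = w.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=Lx / nx)
    kz = 2 * np.pi * np.fft.fftfreq(nz, d=Lz / nz)
    KX, KZ = np.meshgrid(kx, kz)
    k2 = KX ** 2 + KZ ** 2
    k2[0, 0] = 1.0                          # avoid 0/0; mean mode set to zero below
    theta_hat = -Pe * np.fft.fft2(w) / k2
    theta_hat[0, 0] = 0.0
    return np.real(np.fft.ifft2(theta_hat))

# single-mode test: w = sin(x)sin(z) must give theta = -(Pe/2) sin(x)sin(z)
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
X, Z = np.meshgrid(x, x)
w = np.sin(X) * np.sin(Z)
print(np.allclose(theta_from_w(w, 0.01), -0.01 / 2 * w, atol=1e-10))
```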
Physically, the process leading to the balance $`P_ew=\nabla ^2\theta `$ can be described as follows: for large values of the thermal diffusivity, the temperature fluctuations are expected to be small and the mean temperature stratification to remain unchanged by the mechanical heat flux. However, vertical motions advecting fluid parcels against the mean temperature gradient always produce temperature deviations and, unlike the non-linear advection term $`𝐮\cdot \nabla \theta `$, this generation process does not depend on the amplitude of the temperature deviations. As fluid parcels go up (or down) in a mean temperature gradient, the amplitude of the temperature deviations tends to increase continuously. In the meantime, thermal diffusion tends to reduce these temperature deviations. Inspection of the heat Eq. (2) shows that this diffusive process can lead to a stationary solution, namely $`P_ew=\nabla ^2\theta `$. Clearly, if the time scale of the vertical motions is very slow compared to the diffusive time scale, one expects that this stationary solution is practically instantaneously reached. Again, it describes a balance between thermal diffusion and vertical advection against the mean temperature stratification. In the remainder of this paper, we will refer to the set of equations (13), (14), (15) as the small-Péclet-number equations or as the small-Péclet-number approximation. However, a formal mathematical proof that $`𝐮`$ and $`\theta `$ actually behave as Taylor series does not exist in the general case. Then, to prove that the small-Péclet-number equations actually approximate the Boussinesq equations in the limit of small Péclet number, specific cases have to be considered. In the next section, we shall present two types of linear flows where the validity of the small-Péclet-number approximation can be proved. Some evidence will also be given for a non-linear flow. The theoretical and practical interest of the small-Péclet-number equations will be emphasized in Sect. 4. ## 3 Validity of the small-Péclet-number approximation The first example we consider is that of small amplitude perturbations in a linearly stably stratified atmosphere. The perturbations are resolved into modes proportional to $`\mathrm{exp}(\sigma t)\mathrm{exp}[i(k_xx+k_yy+k_zz)]`$ where $`\sigma `$ is a complex number and $`k_x`$, $`k_y`$, $`k_z`$ represent the horizontal and vertical wave numbers of the perturbation. In the following, the dispersion relation obtained using the Boussinesq equations is compared to that derived from the small-Péclet-number equations. The calculation is conducted for two-dimensional disturbances ($`k_y=0`$), but the three-dimensional case can be readily recovered by replacing $`k_x^2`$ with $`k_x^2+k_y^2`$ in the following expressions. To simplify the presentation we also limit ourselves to the inviscid case. It has been verified that our conclusions are not affected by taking the viscosity into account. Using the Boussinesq equations, the dispersion relation is: $$\sigma ^2+\sigma _T\sigma +\sigma _B^2=0$$ (16) whereas the dispersion relation reduces to $$\sigma =-\frac{\sigma _B^2}{\sigma _T}$$ (17) in the context of the small-Péclet-number equations. In these expressions, $$\sigma _T=\frac{k_x^2+k_z^2}{P_e}$$ is the damping rate associated with a pure thermal diffusion, and $$\sigma _B=\sqrt{R_i}\frac{k_x}{\sqrt{k_x^2+k_z^2}}$$ is the frequency of gravity waves in the absence of diffusive processes. We observe that, in the context of the small-Péclet-number approximation, all disturbances are damped with a rate equal to $`\sigma _B^2/\sigma _T`$. On the contrary, the dispersion relation of the Boussinesq equations shows different types of solutions. These solutions are now analyzed for increasing values of the thermal diffusivity; the two roots of (16) are also evaluated numerically in the sketch below.
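As a quick numerical cross-check (our sketch, for one illustrative mode with assumed values $`k_x=k_z=\pi `$ and $`R_i=1`$), the two roots of Eq. (16) can be evaluated directly and compared with the small-Péclet-number prediction (17):

```python
import numpy as np

kx = kz = np.pi
Ri = 1.0
k2 = kx ** 2 + kz ** 2
sigma_B2 = Ri * kx ** 2 / k2            # square of the gravity-wave frequency

for Pe in (1.0, 0.1, 0.01, 0.001):
    sigma_T = k2 / Pe
    disc = sigma_T ** 2 - 4.0 * sigma_B2    # real roots for these small Pe
    slow = (-sigma_T + np.sqrt(disc)) / 2.0  # weakly damped branch
    fast = (-sigma_T - np.sqrt(disc)) / 2.0  # rapidly damped branch
    print(f"Pe = {Pe:6.3f}: slow = {slow:.6e} "
          f"(Eq. 17 gives {-sigma_B2 / sigma_T:.6e}), fast = {fast:.3e}")
# The slow root converges to -sigma_B^2/sigma_T, while the fast root
# approaches the purely diffusive rate -sigma_T.
```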
At small thermal diffusivity, solutions of the dispersion relation correspond to gravity waves damped by thermal diffusion. The two roots of Eq. (16) correspond to two gravity waves propagating in opposite directions. Increasing the thermal diffusion reduces the wave frequency until the roots of (16) become purely real and the associated modes are damped without propagating. This occurs when $$P_e<\frac{\left(k_x^2+k_z^2\right)^{3/2}}{2\sqrt{R_i}k_x}.$$ It is important to note that, as long as $`k_z`$ is not equal to zero, there always exists a Péclet number such that this expression is verified for all $`k_x`$ and $`k_z`$. If this were not the case, one could have gravity waves whatever the value of $`P_e`$. Then, the small-Péclet-number approximation would not be valid since gravity waves are absent in this approximation. In deriving the small-Péclet-number equations, we restricted ourselves to a fluid layer bounded vertically. Vertical wave numbers therefore have a lower limit, $`k_z^{min}`$, so that all modes are damped without propagation if $`P_e<3\sqrt{3}(k_z^{min})^2/(4\sqrt{R_i})`$. Then, the two distinct roots of the dispersion relation correspond to two damped modes. By further increasing the diffusivity, the damping rates take increasingly different values and the associated modes correspond to two different types of motions. In the limit of small Péclet numbers, the damping rate of the first type of mode is: $$\sigma =-\frac{\sigma _B^2}{\sigma _T}=-R_iP_e\frac{k_x^2}{\left(k_x^2+k_z^2\right)^2}$$ These are exactly the weakly damped modes found in the context of the small-Péclet-number approximation. Note that, despite the high thermal diffusivity, temperature perturbations can be weakly damped if they are associated with vertical motions against the mean temperature gradient. The damping rate of the second type of mode is: $$\sigma =-\sigma _T=-\frac{k_x^2+k_z^2}{P_e}$$ Such modes are not found in the context of the small-Péclet-number equations. Note that this is not surprising since they correspond to solutions of the Boussinesq equations which do not behave like Taylor series (see equations (4) and (5)) when the Péclet number goes to zero. These modes undergo a purely diffusive damping which can be made arbitrarily fast as the Péclet number vanishes. Indeed, whatever the values of $`k_x`$ and $`k_z`$, all these modes are reduced by an arbitrarily large factor after a time proportional to $`P_e/(k_z^{min})^2`$. For this type of motion, the vertical advection term appearing in the linearized heat Eq. (2) is negligible. This shows that, in the limit of small Péclet number, temperature perturbations which are not produced by vertical advection are damped in a very short time. According to the above discussion, it is always possible to find a Péclet number such that, after an arbitrarily small time, the evolution of the infinitesimal perturbations is equally well described by the Boussinesq equations or by the small-Péclet-number equations. We now consider another example of flow, yet in a linear regime. It concerns the evolution of small disturbances in a stably stratified shear layer. This configuration differs from the previous example by the presence of a mean horizontal flow sheared in the vertical direction. In this case, the validity of the linear version of the small-Péclet-number approximation has already been proved by Dudis’ theoretical work (1974).
This author specifically considered a hyperbolic-tangent velocity profile in a stable atmosphere characterized by a hyperbolic-tangent temperature profile, and used a normal mode approach to study the stability of the flow. He first determined the neutral stability curve, i.e. the curve separating the stable and unstable regions in the parameter space, for decreasing values of the Péclet number. Then, he showed that for small Péclet numbers these neutral curves could be recovered using a linear version of the small-Péclet-number equations. The convergence of the Boussinesq equations towards the small-Péclet-number equations appears fairly rapid in this case since, already at $`P_e=0.2`$, the maximum difference between the neutral curves is within $`3`$ percent. We recently revisited the work of Dudis by considering a linear temperature profile instead of the hyperbolic-tangent profile to characterize the stable stratification (Lignières et al. 1999). We confirmed the validity of the small-Péclet-number equations to describe the neutral curves. In addition, we verified its validity for other types of modes (unstable modes symmetric with respect to the shear layer mid-plane) as well as in the viscous case. Note that very rapidly damped modes corresponding to the second type of mode found in the previous discussion may also exist in this case. However, they cannot affect the stability of the shear layer since they are very strongly damped. The third example is a two-dimensional non-linear flow where a shear layer is forced at the top of a linearly stratified fluid. This flow has been studied numerically by Lignières et al. (1998) for large Reynolds numbers ($`R_e\simeq 2000`$, where $`R_e`$ is based on the layer thickness and the velocity difference across it). Figure 1a shows the typical vorticity field resulting from the destabilization of the shear layer and the concentration of vorticity into vortices. The mean shear and the thermal stratification are also represented in Figs. 1b and 1c (here, the means refer to horizontal averages). The other parameters being held fixed, we reduced the Péclet number from $`1000`$ to $`1`$ (equivalently, the Prandtl number $`P_r`$ has been decreased from $`0.5`$ to $`5\times 10^{-4}`$, which already requires some computational effort). Figures 1d and 1e present horizontal profiles of the vertical velocity and the temperature deviation for the two extreme values of the Péclet number. When this number is equal to $`1000`$ (Fig. 1d), one recovers a classical property of the inflexional shear layer instability in a stably stratified medium, namely that the phase lag between the vertical velocity and the temperature deviation is $`\pi /2`$. By contrast, we observe that both fields are in antiphase when the Péclet number is equal to unity (Fig. 1e). This striking difference reveals a change in the predominant terms of the heat equation, and we verified that this equation is now dominated by a balance between the vertical advection against the mean temperature gradient and the thermal diffusion. These first results are consistent with the convergence of the Boussinesq equations towards the small-Péclet-number approximation. A detailed comparison of the results of these simulations with those obtained using the asymptotic equations will be reported in a forthcoming paper. In this section, the validity of the small-Péclet-number approximation has been proved for two types of linear flows.
For the case of infinitesimal perturbations in a linearly stably stratified atmosphere, we noted that the vertical length scales of the perturbations have to be limited to finite values to ensure uniform convergence. Moreover, the Boussinesq and the small-Péclet-number equations give the same solution after an arbitrarily small time, once the initial temperature perturbations not associated with vertical advection against the mean temperature gradient have been damped. The example of a non-linear flow we considered is also consistent with the validity of the approximation. ## 4 Elementary properties of the small-Péclet-number approximation The elementary properties of the small-Péclet-number equations are analyzed in this section. From a practical point of view, the main interest of these equations is that their numerical integration does not require the computation of the very rapid temporal variation of temperature due to thermal diffusion. The Lagrangian derivative is indeed absent from the asymptotic heat Eq. (14). This property is crucial for the investigation of small Péclet number regimes, since numerical simulations are no longer limited by the huge separation between the dynamical and diffusive time scales. Another simplifying property appears when the small-Péclet-number equations (13), (14), (15), are written in terms of $$\psi =\frac{\theta }{P_e}.$$ Using this rescaled temperature deviation, the small-Péclet-number equations become: $$\frac{\partial 𝐮}{\partial t}+\left(𝐮\cdot \nabla \right)𝐮=-\nabla p+R\psi 𝐞_z+\frac{1}{R_e}\nabla ^2𝐮,$$ (18) $$w=\nabla ^2\psi ,$$ (19) $$\nabla \cdot 𝐮=0,$$ (20) where $$R=R_iP_e=\frac{t_\mathrm{D}t_\kappa }{t_\mathrm{B}^2}.$$ (21) Although these equations have been derived as a first order approximation of the Boussinesq equations in the limit of small Péclet number (see Sect. 2), they can also be interpreted as a zero order approximation of the Boussinesq equations in the limit of small Péclet number, provided the non-dimensional number $`R=R_iP_e`$ is assumed to remain finite. Starting from the dimensional Boussinesq equations, one only has to use $`P_e\mathrm{\Delta }T_*`$ as a reference temperature instead of $`\mathrm{\Delta }T_*`$ and to assume Taylor-like expansions of the form (4) and (5). Then, provided $`R`$ remains constant, the above equations arise at zero order in $`P_e`$. The derivation presented in Sect. 2 has been preferred because it does not require the assumption of an infinite Richardson number $`R_i`$. The main interest of the system (18), (19), (20), is that it only depends on two non-dimensional numbers, $`R`$ and $`R_e`$. This is an important simplification as compared to the original Boussinesq equations, which are governed by three non-dimensional numbers, $`R_i`$, $`P_e`$ and $`R_e`$. This simplification corresponds to the fact that the amplitude of the buoyancy force is no longer determined by two distinct processes, namely the vertical advection against the stable stratification which produces temperature deviations and the thermal diffusion which smoothes them out. It is now determined by a single physical process which combines the effects of both. We shall show below that this process is purely dissipative and that this dissipation is anisotropic (not effective for horizontal motions) and faster for large scale motions. To do so, we write down the kinetic energy conservation. Multiplying the momentum Eq.
(18) by the velocity vector and integrating over the whole domain, we obtain: $$\frac{dE_{\mathrm{kin}}}{dt}=R\int _Vw\psi \,dV-\frac{1}{R_e}\int _Vϵ\,dV+\int _S𝐅_{\mathrm{kin}}\cdot d𝐒,$$ (22) where, on the r.h.s. of this equation, the first term is the work done by the buoyancy force, the second term represents the viscous dissipation into heat and the third term is the kinetic energy flux through the surface bounding the domain. Using Eq. (19), the work done by the buoyancy force can be divided into two terms to give: $$R\int _Vw\psi \,dV=-R\int _V\left(\nabla \psi \right)^2dV+R\int _S\psi \nabla \psi \cdot d𝐒.$$ (23) As the temperature deviations vanish on the bounding plates, the second term also vanishes, so that the kinetic energy conservation reduces to: $$\frac{dE_{\mathrm{kin}}}{dt}=-R\int _V\left(\nabla \psi \right)^2dV-\frac{1}{R_e}\int _Vϵ\,dV+\int _S𝐅_{\mathrm{kin}}\cdot d𝐒.$$ (24) This equation shows that the combined effect of the stable stratification and the thermal diffusivity is purely dissipative. This simple result has to be compared with the case of the Boussinesq equations, where the integrated work of the buoyancy could be positive or negative. As described in detail by Winters et al. (1995), it is then necessary to distinguish the amount of kinetic energy which is irreversibly lost from the amount of kinetic energy which has been transformed into potential energy but can still return back to kinetic energy. Here, the situation is simpler since all the kinetic energy extracted by the buoyancy work is irreversibly lost. In order to specify the time scale of this dissipative process, we rewrite the above equations without the non-linear terms and for inviscid motions restricted to a vertical plane ($`𝐞_x,𝐞_z`$). The pressure term can first be eliminated using the incompressibility condition (20). Then, the two momentum equations are combined to eliminate the horizontal velocity, and the simplified heat equation allows one to eliminate the rescaled temperature deviation. Finally, the evolution of the vertical velocity is governed by: $$\frac{\partial \left(\mathrm{\Delta }\mathrm{\Delta }w\right)}{\partial t}=R\frac{\partial ^2w}{\partial x^2}.$$ (25) Considering isotropic motions of length scale $`l`$, the time scale of this process is $`1/(Rl^2)`$, that is $`t_\mathrm{B}^2/t_\kappa `$ in dimensional units. It appears that this dissipative process is faster at large scales than at small scales, which is just the opposite of what is observed in usual dissipative processes like thermal diffusion or viscous dissipation. Here, however, the thermal diffusion does not act directly on the dynamics; it affects the temperature deviations, which in turn modify the buoyancy force amplitude. We have already seen that, in the limit of small Péclet numbers, rapid thermal exchanges lead instantaneously to a balance between vertical advection against the mean stratification and thermal diffusion. This balance is described by equation (19), and it is straightforward to show that the resulting amplitude of the temperature deviations is stronger if the vertical velocity varies over a large length scale. The amplitude of the buoyancy force is therefore stronger for velocity fields varying over large length scales, and this explains why the combined effect of the stable stratification and the thermal diffusion is faster at larger length scales. Note that, while classical dissipative processes are characterized by a Laplacian operator, the operator of the present dissipation is the inverse of a Laplacian. This can be seen by expressing $`\psi `$ as the inverse Laplacian of the vertical velocity and by reporting this expression in the momentum equation.
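For a Fourier mode proportional to $`\mathrm{exp}(\sigma t)\mathrm{exp}[i(k_xx+k_zz)]`$, Eq. (25) gives $`\sigma =-Rk_x^2/(k_x^2+k_z^2)^2`$. The short sketch below (with an assumed $`R=1`$) tabulates the corresponding damping time and illustrates both properties announced above: faster damping at large scales, and no damping for nearly horizontal motions.

```python
import numpy as np

# Damping time 1/|sigma| = (k_x^2 + k_z^2)^2 / (R k_x^2) implied by Eq. (25):
# small wavenumber modulus r (large scales) means fast damping, and the time
# diverges as the motion becomes horizontal (k_x -> 0).
R = 1.0

def damping_time(kx, kz):
    return (kx ** 2 + kz ** 2) ** 2 / (R * kx ** 2)

for r in (0.5, 1.0, 4.0):                    # wavenumber modulus
    for alpha_deg in (0.0, 45.0, 80.0):      # 0: vertical motions, ->90: horizontal
        a = np.deg2rad(alpha_deg)
        kx, kz = r * np.cos(a), r * np.sin(a)
        print(f"r = {r:3.1f}, alpha = {alpha_deg:3.0f} deg: "
              f"tau = {damping_time(kx, kz):10.2f}")
```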
Another interesting property is the anisotropy of this dissipative process. If one considers a velocity field of the form $`w\mathrm{exp}[i(k_xx+k_zz)]`$, where, as before, $`k_z`$ and $`k_x`$ represent its vertical and horizontal scales, the characteristic time deduced from Eq. (25) is: $$\tau =\frac{\left(k_x^2+k_z^2\right)^2}{Rk_x^2},$$ (26) which unsurprisingly corresponds to the inverse of the damping rate found in Sect. 3. The use of polar coordinates in Fourier space is more appropriate to study the anisotropy of the process. With $`r^2=k_x^2+k_z^2`$ and $`\mathrm{tan}(\alpha )=k_z/k_x`$, the time scale becomes: $$\tau =\frac{1}{R}\left(\frac{r}{\mathrm{cos}(\alpha )}\right)^2$$ (27) where $`\alpha =\pi /2`$ corresponds to horizontal motions and $`\alpha =0`$ corresponds to vertical motions. We observe that the dissipation acts primarily on vertical motions while purely horizontal motions are not affected. This is not surprising since the buoyancy force only applies to the vertical component of the velocity. What is more interesting is that this anisotropy is stronger than in the context of the non-diffusive Boussinesq equations. Indeed, for a given value of the wave vector modulus, the time scale $`\tau `$ increases faster towards horizontal motions ($`\alpha \to \pi /2`$) than the corresponding time scale of the buoyancy force in a non-diffusive atmosphere, $`1/\sigma _B=1/(\mathrm{cos}(\alpha )\sqrt{R_i})`$. Considering motions strongly affected by the buoyancy force, we thus expect that these motions would be more predominantly horizontal in an atmosphere dominated by thermal diffusion than in a non-diffusive atmosphere. ## 5 Discussion In this paper, we derived a small-Péclet-number approximation, discussed its validity for three flow examples and analyzed its basic properties. In particular, we showed that the practical and theoretical difficulties characterizing the regime of very large thermal diffusivities, mentioned in the introduction, are considerably simplified in the context of the small-Péclet-number approximation. Regarding applications to the dynamics of stellar radiative zones, it must be stressed that some types of motions cannot be investigated using this approximation. First, there are no gravity waves in the context of the approximation, whereas these waves could play an important role in the dynamics of radiative zones (Schatzman 1996). Second, thermal convective motions penetrating the radiative zone boundary have a high Péclet number, so that the small-Péclet-number approximation is not suitable to investigate the overshooting layer at the boundary with the thermal convective zone. By contrast, there are various indications that some motions contributing to the radial transport of chemical elements and angular momentum are characterized by very small Péclet numbers and could therefore be studied in the context of the small-Péclet-number approximation. The fact that the thermal structure is determined by the radiative heat flux alone shows that the Péclet number characterizing eventual radial motions is necessarily smaller than unity. But other observational constraints, obtained by measuring the surface abundances of chemical elements, give much smaller Péclet numbers (see a recent review by Michaud & Zahn 1998).
These Péclet numbers are defined as the ratio between the diffusion coefficient necessary to recover the observed surface abundances and the thermal diffusivity. In the absence of more sophisticated models, this diffusion coefficient is assumed to represent a vertical turbulent transport. For the sun, a Péclet number as small as $`2\times 10^{-4}`$ is obtained. This value may however be underestimated because the turbulence is most probably anisotropic. This aspect is taken into account in the model of Spiegel & Zahn (1992), which describes the tachocline (the abrupt change in angular velocity at the top of the solar radiative zone). The Péclet number which characterizes the horizontal turbulent motions and which is compatible with the observed thickness of the tachocline remains much smaller than unity ($`P_e\sim 10^{-2}`$). These estimates suggest using the small-Péclet-number approximation to investigate the properties of small scale turbulent motions in stellar radiative zones. A first possible investigation could be homogeneous turbulence in the presence of a uniform mean shear and mean temperature gradient. With geophysical applications in mind, this configuration has already been extensively studied for large Péclet numbers (see Schumann 1996 for a review). A comparison with the small-Péclet-number case should be very instructive. Another important topic concerns the anisotropy between vertical and horizontal motions in an atmosphere dominated by thermal diffusion. Our linear study suggests that this anisotropy can be stronger than in a non-diffusive atmosphere. The ratio between vertical and horizontal turbulent viscosities could be affected, and this can be estimated through numerical simulations of the small-Péclet-number equations. Before concluding, it must be noted that the limit of large diffusivities has already been considered (Spiegel 1962, Thual 1992) in the context of Rayleigh-Bénard convection. However, an important physical property of thermal convection is lost in this limit. The thermal stratification is indeed assumed unchanged, and this is not compatible with the general observation that convective motions transform the initial unstable stratification into an adiabatic stratification. There is no such inconsistency for the case of the stably stratified radiative zones considered here.
# Gravitational Lensing by NFW Halos ## 1 Introduction Several recent numerical investigations (e.g., Navarro, Frenk & White 1997, 1996, 1995) have indicated the existence of a universal density profile for dark matter halos that results from the generic dissipationless collapse of density fluctuations. Interior to the virial radius, the Navarro, Frenk & White (NFW) profile appears to be a very good description of the radial mass distribution of simulated objects that span 9 orders of magnitude in mass (mass scales ranging from that of globular clusters to that of large galaxy clusters). The apparent generality of the NFW density profile has been confirmed independently by a number of studies (e.g., Bartelmann et al. 1998; Thomas et al. 1998; Carlberg et al. 1997; Cole & Lacey 1997; Kravtsov, Klypin & Khokhlov 1997; Tormen, Bouchet & White 1997); however, there are a few controversial claims that the NFW prescription may fail at very small radii (e.g., Ghigna et al. 1998; Moore et al. 1998). The NFW density profile is given by $$\rho (r)=\frac{\delta _c\rho _c}{\left(r/r_s\right)\left(1+r/r_s\right)^2},$$ (1) where $`\rho _c=\frac{3H^2(z)}{8\pi G}`$ is the critical density for closure of the universe at the redshift, $`z`$, of the halo, $`H(z)`$ is Hubble’s parameter at that same redshift, and $`G`$ is Newton’s constant. The scale radius $`r_s=r_{200}/c`$ is a characteristic radius of the cluster, $`c`$ is a dimensionless number known as the concentration parameter, and $$\delta _c=\frac{200}{3}\frac{c^3}{\mathrm{ln}(1+c)-c/(1+c)}$$ (2) is a characteristic overdensity for the halo. The virial radius, $`r_{200}`$, is defined as the radius inside which the mean mass density of the halo is equal to $`200\rho _c`$ (see, e.g., Navarro, Frenk & White 1997). The mass of an NFW halo contained within a radius of $`r_{200}`$ is therefore $$M_{200}\equiv M(r_{200})=\frac{800\pi }{3}\rho _cr_{200}^3=\frac{800\pi }{3}\frac{\overline{\rho }(z)}{\mathrm{\Omega }(z)}r_{200}^3$$ (3) where $`\overline{\rho }(z)`$ is the mean mass density of the universe at redshift $`z`$ and $`\mathrm{\Omega }(z)`$ is the density parameter at redshift $`z`$.
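Equations (2) and (3) already fix the basic halo scales; a short sketch (our illustration, with an assumed present-day Hubble parameter and an assumed concentration, purely for orientation):

```python
import numpy as np

G = 4.301e-9  # Newton's constant in Mpc (km/s)^2 / M_sun

def delta_c(c):
    """Characteristic overdensity of Eq. (2)."""
    return (200.0 / 3.0) * c ** 3 / (np.log(1.0 + c) - c / (1.0 + c))

def r200_Mpc(M200_Msun, H_kms_Mpc=70.0):
    """Virial radius from Eq. (3): M(r200) = (800*pi/3) * rho_c * r200^3."""
    rho_c = 3.0 * H_kms_Mpc ** 2 / (8.0 * np.pi * G)  # M_sun / Mpc^3
    return (3.0 * M200_Msun / (800.0 * np.pi * rho_c)) ** (1.0 / 3.0)

# e.g. a cluster-scale halo with an assumed concentration c = 5:
M200, c = 1e15, 5.0
print(f"delta_c = {delta_c(c):.0f}, r200 = {r200_Mpc(M200):.2f} Mpc, "
      f"r_s = {r200_Mpc(M200) / c:.2f} Mpc")
```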
Although it has not been proven categorically, it is widely thought that the masses of large galaxies, groups of galaxies, galaxy clusters, and superclusters are dominated by some form of dissipationless dark matter. Therefore, it would not be unreasonable to expect that the spherically-averaged density profiles of these objects would be approximated fairly well by NFW profiles. Observationally, the total masses and mass-to-light ratios of these objects are not constrained especially well at present; however, this situation is changing rapidly, due in large part to the fact that high-quality imaging of gravitational lens systems is yielding direct constraints on the nature of the mass distribution within the dark matter halos. Observations of gravitational lensing provide powerful constraints on both the total mass and the mass distribution within the lens itself, owing to the fact that one essentially uses photons emitted by objects more distant than the lens to trace the underlying gravitational potential of the lens directly. In particular, large clusters of galaxies (which are both massive and centrally condensed) are especially good gravitational lens candidates, and detections of the coherent pattern of weak lensing shear due to a number of clusters have led to interesting constraints on the masses of these objects (e.g., Tyson, Wenk & Valdes 1990; Bonnet et al. 1994; Dahle, Maddox & Lilje 1994; Fahlman et al. 1994; Mellier et al. 1994; Smail et al. 1994, 1995, 1997; Tyson & Fischer 1995; Smail & Dickinson 1995; Kneib et al. 1996; Seitz et al. 1996; Squires et al. 1996a,b; Bower & Smail 1997; Fischer et al. 1997; Fischer & Tyson 1997; Luppino & Kaiser 1997; Clowe et al. 1998; Hoekstra et al. 1998). Although more controversial than the results for lensing clusters, detections of systematic weak lensing of distant field galaxies by foreground field galaxies have been reported, and these have been used to place constraints on the physical sizes and total masses of the dark matter halos of the lens galaxies (e.g., Brainerd, Blandford & Smail 1996; Griffiths et al. 1996; Ebbels 1998; Hudson et al. 1998; Natarajan et al. 1998). Additionally, a detection of the coherent weak lensing shear due to a supercluster has been reported recently (Kaiser et al. 1998). Because of the apparent direct applicability of the NFW density profile to the dominant mass component of all of these objects, and because of the potential of observations of gravitational lensing to provide strong, direct constraints on the amount and distribution of dark matter within them, we investigate the lensing characteristics of dark matter halos with generic NFW-type density profiles in this paper. In §2 we compute the convergence and the shear profiles of NFW halos. In §3 we compare the mean shear induced by NFW lenses to that of simpler singular isothermal sphere (SIS) lenses and consider the implications of our results for possible systematic errors in lens masses that are determined in observational investigations which invoke an a priori assumption of an isothermal lens potential. A discussion of the results is presented in §4. ## 2 Convergence and Shear of an NFW Object We perform all of our calculations below using the thin lens approximation, in which an object’s lensing properties can be computed solely from a scaled, 2-dimensional Newtonian potential. The thin lens approximation is valid in the limit that the scale size of the lens is very much less than the path length traveled by the photons as they propagate from the source to the lens and from the lens to the observer. In this case the lensing properties of an object are completely described by two quantities, the convergence, $`\kappa `$, and the shear, $`\vec{\gamma }`$. The names of these quantities are indicative of their effects upon a lensed image; the convergence describes the isotropic focusing of light rays while the shear describes the effect of tidal gravitational forces. Convergence acting alone leads to an isotropic magnification or demagnification while the shear induces distortions in the shapes of lensed images. If we define $`z`$ to be the optic axis, then for a lens with a 3-dimensional potential $`\mathrm{\Phi }(D_d\vec{\theta },z)`$ we can formulate a conveniently scaled potential as projected on the sky: $$\psi (\vec{\theta })=\frac{D_{ds}}{D_dD_s}\frac{2}{c^2}\int \mathrm{\Phi }(D_d\vec{\theta },z)\,dz.$$ (4) Here $`\vec{\theta }`$ is a radius vector on the sky and $`D_d`$, $`D_s`$, and $`D_{ds}`$ are, respectively, the angular diameter distances between the observer and the lens, the observer and the source, and the lens and the source.
Under the definition of $`\psi (\vec{\theta })`$ above, the convergence and the components of the shear tensor may be written as straightforward combinations of second-order derivatives of $`\psi `$ with respect to image plane coordinates $`\vec{\theta }=(\theta _1,\theta _2)`$, $$\kappa (\vec{\theta })=\frac{1}{2}\left(\frac{\partial ^2\psi }{\partial \theta _1^2}+\frac{\partial ^2\psi }{\partial \theta _2^2}\right)$$ (5) $$\gamma _1(\vec{\theta })=\frac{1}{2}\left(\frac{\partial ^2\psi }{\partial \theta _1^2}-\frac{\partial ^2\psi }{\partial \theta _2^2}\right)$$ (6) $$\gamma _2(\vec{\theta })=\frac{\partial ^2\psi }{\partial \theta _1\partial \theta _2}=\frac{\partial ^2\psi }{\partial \theta _2\partial \theta _1}.$$ (7) The magnitude of the shear is simply $`\gamma =|\vec{\gamma }|=\sqrt{\gamma _1^2+\gamma _2^2}`$ (e.g., Schneider, Ehlers & Falco 1992). In the limit of weak gravitational lensing, the convergence and shear are formally small (i.e., $`\kappa \ll 1`$, $`\gamma \ll 1`$), the ellipticity induced in the image of an intrinsically circular source due to lensing is of order $`\gamma /2`$, and the position angle of the lensed image ellipse is of order the phase of $`\vec{\gamma }`$ (e.g., Schramm & Kayser 1995; Seitz & Schneider 1997). The local value of the convergence may be expressed simply as the ratio of the local value of the surface mass density to the critical surface mass density: $$\kappa (\vec{\theta })=\frac{\mathrm{\Sigma }(\vec{\theta })}{\mathrm{\Sigma }_c},$$ (8) where $$\mathrm{\Sigma }_c\equiv \frac{c^2}{4\pi G}\frac{D_s}{D_dD_{ds}}$$ (9) (e.g., Schneider, Ehlers & Falco 1992) and $`c`$ in the equation above is the velocity of light. The radial dependence of the surface mass density of a spherically symmetric lens such as an NFW lens is obtained simply by integrating the 3-dimensional density profile along the line of sight, $$\mathrm{\Sigma }(R)=2\int _0^{\infty }\rho (R,z)\,dz,$$ (10) where $`R=D_d\sqrt{\theta _1^2+\theta _2^2}`$ is the projected radius relative to the center of the lens. For convenience we will adopt a dimensionless radial distance, $`x=R/r_s`$. Integrating equation (1) along the line of sight, the radial dependence of the surface mass density of an NFW lens can then be written as: $$\mathrm{\Sigma }_{\mathrm{nfw}}(x)=\{\begin{array}{cc}\frac{2r_s\delta _c\rho _c}{\left(x^2-1\right)}\left[1-\frac{2}{\sqrt{1-x^2}}\mathrm{arctanh}\sqrt{\frac{1-x}{1+x}}\right]\hfill & \left(x<1\right)\hfill \\ & \\ \frac{2r_s\delta _c\rho _c}{3}\hfill & \left(x=1\right)\hfill \\ & \\ \frac{2r_s\delta _c\rho _c}{\left(x^2-1\right)}\left[1-\frac{2}{\sqrt{x^2-1}}\mathrm{arctan}\sqrt{\frac{x-1}{1+x}}\right]\hfill & \left(x>1\right)\hfill \end{array}$$ (11) (e.g., Bartelmann 1996). The radial dependence of the convergence due to an NFW lens is then simply $`\kappa _{\mathrm{nfw}}(x)=\mathrm{\Sigma }_{\mathrm{nfw}}(x)/\mathrm{\Sigma }_c`$. Since the NFW density profile is spherically symmetric, the radial dependence of the shear can be written as $$\gamma _{\mathrm{nfw}}(x)=\frac{\overline{\mathrm{\Sigma }}_{\mathrm{nfw}}(x)-\mathrm{\Sigma }_{\mathrm{nfw}}(x)}{\mathrm{\Sigma }_c}$$ (12) (e.g., Miralda–Escudé 1991) where $`\overline{\mathrm{\Sigma }}_{\mathrm{nfw}}(x)`$ is the mean surface mass density interior to the dimensionless radius $`x`$. 
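A direct transcription of the piecewise surface mass density of Eq. (11) might look as follows; this is a sketch assuming scalar input, with `dc_rho_c` standing for the product $`\delta _c\rho _c`$.

```python
import numpy as np

def sigma_nfw(x, r_s, dc_rho_c):
    """Surface mass density of an NFW lens, Eq. (11).
    x        : projected radius in units of r_s (scalar > 0)
    r_s      : scale radius
    dc_rho_c : product delta_c * rho_c
    """
    pref = 2.0 * r_s * dc_rho_c
    if np.isclose(x, 1.0):
        return pref / 3.0
    if x < 1.0:
        f = 1.0 - 2.0 / np.sqrt(1.0 - x**2) * np.arctanh(np.sqrt((1.0 - x) / (1.0 + x)))
    else:
        f = 1.0 - 2.0 / np.sqrt(x**2 - 1.0) * np.arctan(np.sqrt((x - 1.0) / (1.0 + x)))
    return pref * f / (x**2 - 1.0)
```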
In terms of this radius, then, the mean surface mass density of an NFW halo is given by $$\overline{\mathrm{\Sigma }}_{\mathrm{nfw}}(x)=\frac{2}{x^2}\int _0^xx^{\prime }\mathrm{\Sigma }_{\mathrm{nfw}}(x^{\prime })\,dx^{\prime }=\{\begin{array}{cc}\frac{4}{x^2}r_s\delta _c\rho _c\left[\frac{2}{\sqrt{1-x^2}}\mathrm{arctanh}\sqrt{\frac{1-x}{1+x}}+\mathrm{ln}\left(\frac{x}{2}\right)\right]\hfill & \left(x<1\right)\hfill \\ & \\ 4r_s\delta _c\rho _c\left[1+\mathrm{ln}\left(\frac{1}{2}\right)\right]\hfill & (x=1)\hfill \\ & \\ \frac{4}{x^2}r_s\delta _c\rho _c\left[\frac{2}{\sqrt{x^2-1}}\mathrm{arctan}\sqrt{\frac{x-1}{1+x}}+\mathrm{ln}\left(\frac{x}{2}\right)\right]\hfill & \left(x>1\right)\hfill \end{array}$$ (13) and the radial dependence of the shear is, therefore, $$\gamma _{\mathrm{nfw}}(x)=\{\begin{array}{cc}\frac{r_s\delta _c\rho _c}{\mathrm{\Sigma }_c}g_<(x)\hfill & \left(x<1\right)\hfill \\ & \\ \frac{r_s\delta _c\rho _c}{\mathrm{\Sigma }_c}\left[\frac{10}{3}+4\mathrm{ln}\left(\frac{1}{2}\right)\right]\hfill & \left(x=1\right)\hfill \\ & \\ \frac{r_s\delta _c\rho _c}{\mathrm{\Sigma }_c}g_>(x)\hfill & \left(x>1\right)\hfill \end{array}$$ (14) where the functions $`g_{<,>}(x)`$ above depend upon only the dimensionless radius $`x`$ and are explicitly independent of the cosmology: $`g_<(x)`$ $`=`$ $`{\displaystyle \frac{8\,\mathrm{arctanh}\sqrt{\frac{1-x}{1+x}}}{x^2\sqrt{1-x^2}}}+{\displaystyle \frac{4}{x^2}}\mathrm{ln}\left({\displaystyle \frac{x}{2}}\right)-{\displaystyle \frac{2}{\left(x^2-1\right)}}+{\displaystyle \frac{4\,\mathrm{arctanh}\sqrt{\frac{1-x}{1+x}}}{\left(x^2-1\right)\left(1-x^2\right)^{1/2}}}`$ (15) $`g_>(x)`$ $`=`$ $`{\displaystyle \frac{8\,\mathrm{arctan}\sqrt{\frac{x-1}{1+x}}}{x^2\sqrt{x^2-1}}}+{\displaystyle \frac{4}{x^2}}\mathrm{ln}\left({\displaystyle \frac{x}{2}}\right)-{\displaystyle \frac{2}{\left(x^2-1\right)}}+{\displaystyle \frac{4\,\mathrm{arctan}\sqrt{\frac{x-1}{1+x}}}{\left(x^2-1\right)^{3/2}}}.`$ (16) Equation (14) above can also be obtained straightforwardly from equations (7) through (11) of Bartelmann (1996). The radial dependence of the shear due to an NFW lens is shown in Fig. 1. The shear due to a given lens (e.g., a cluster of galaxies) is computed directly from the coherent distortion pattern that it induces in the images of distant source galaxies. In the realistic observational limit of weak shear and a finite number of lensed images, a measurement of the mean shear interior to a radius $`x`$ centered on the center of mass of the lens (i.e., $`\overline{\gamma }(x)`$) is more easily determined than the differential radial dependence of the shear (i.e., $`\gamma (x)`$). In the case of the NFW profile, the mean shear interior to a (dimensionless) radius $`x`$ can be computed directly from equation (14) above: $$\overline{\gamma }_{\mathrm{nfw}}(x)=\frac{2}{x^2}\int _0^xx^{\prime }\gamma (x^{\prime })\,dx^{\prime }=\frac{r_s\delta _c\rho _c}{\mathrm{\Sigma }_c}\left[\frac{2}{x^2}\left(\int _0^1g_<(x^{\prime })x^{\prime }\,dx^{\prime }+\int _1^xg_>(x^{\prime })x^{\prime }\,dx^{\prime }\right)\right].$$ (17) A useful fiducial radius interior to which one might measure the mean shear is the virial radius, $`R=r_{200}`$, or equivalently, interior to $`x=\left(r_{200}/r_s\right)=c`$, where $`c`$ is the concentration parameter. 
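The dimensionless shear functions of Eqs. (15)–(16) and the piecewise shear of Eq. (14) translate directly into code; again a sketch in our own naming, valid for scalar x.

```python
import numpy as np

def g_lt(x):
    """g_<(x) of Eq. (15), valid for 0 < x < 1."""
    at = np.arctanh(np.sqrt((1.0 - x) / (1.0 + x)))
    return (8.0 * at / (x**2 * np.sqrt(1.0 - x**2))
            + 4.0 / x**2 * np.log(x / 2.0)
            - 2.0 / (x**2 - 1.0)
            + 4.0 * at / ((x**2 - 1.0) * np.sqrt(1.0 - x**2)))

def g_gt(x):
    """g_>(x) of Eq. (16), valid for x > 1."""
    at = np.arctan(np.sqrt((x - 1.0) / (1.0 + x)))
    return (8.0 * at / (x**2 * np.sqrt(x**2 - 1.0))
            + 4.0 / x**2 * np.log(x / 2.0)
            - 2.0 / (x**2 - 1.0)
            + 4.0 * at / (x**2 - 1.0)**1.5)

def gamma_nfw(x, r_s, dc_rho_c, sigma_crit):
    """Shear of an NFW lens, Eq. (14)."""
    pref = r_s * dc_rho_c / sigma_crit
    if np.isclose(x, 1.0):
        return pref * (10.0 / 3.0 + 4.0 * np.log(0.5))
    return pref * (g_lt(x) if x < 1.0 else g_gt(x))
```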
For all masses of astrophysical interest $`c`$ is greater than 1 and, therefore, the mean shear interior to the virial radius becomes $$\overline{\gamma }_{\mathrm{nfw}}(r_{200})=\frac{r_s\delta _c\rho _c}{\mathrm{\Sigma }_c}\left[\frac{2}{c^2}\left(\int _0^1g_<(x^{\prime })x^{\prime }\,dx^{\prime }+\int _1^cg_>(x^{\prime })x^{\prime }\,dx^{\prime }\right)\right]$$ (18) which we rewrite as $$\overline{\gamma }_{\mathrm{nfw}}(r_{200})=\frac{r_s\rho _c}{\mathrm{\Sigma }_c}\mathcal{G}\left(c\right),$$ (19) where $$\mathcal{G}\left(c\right)=\delta _c\left[\frac{2}{c^2}\left(\int _0^1g_<(x^{\prime })x^{\prime }\,dx^{\prime }+\int _1^cg_>(x^{\prime })x^{\prime }\,dx^{\prime }\right)\right]$$ (20) is a function of the concentration parameter alone. ## 3 Comparison to the Singular Isothermal Sphere Like the NFW mass profile, the singular isothermal sphere (SIS) mass profile is characterized by a single parameter (i.e., the velocity dispersion, $`\sigma _v`$). The mass of an SIS interior to a three dimensional radius $`r`$ is: $$M(r)=\frac{2\sigma _v^2r}{G}$$ (21) (e.g., Binney & Tremaine 1987) and the mean gravitational lensing shear interior to a radius $`R`$ that is induced by an SIS lens is: $$\overline{\gamma }_{\mathrm{sis}}\left(R\right)=\frac{1}{\mathrm{\Sigma }_c}\frac{\sigma _v^2}{GR}$$ (22) (e.g. Schneider, Ehlers & Falco 1992). Because of its simplicity, the SIS density profile is sometimes adopted in observational investigations in order to obtain an estimate of the mass of a lens without fully reconstructing its true underlying density profile (e.g., Tyson, Wenk & Valdes 1990; Bonnet et al. 1994; Smail et al. 1994, 1997; Smail & Dickinson 1995; Bower & Smail 1997; Fischer & Tyson 1997). By assuming that the underlying potential of the lens is well–approximated by an SIS, a measurement of the mean shear interior to a projected radius $`R`$ leads directly to a measurement of the velocity dispersion of the lens (e.g., equation 22), which in turn leads directly to an estimate of the mass of the lens (e.g., equation 21). The NFW density profile, which is shallower than isothermal on small scales, and which turns over to isothermal on large scales has, however, been shown to be a far better approximation than the SIS to the spherically–averaged density profiles of halos formed via dissipationless collapse. Therefore, it is likely that lens mass estimates based on an a priori assumption of an isothermal potential will be systematically in error. In this section we compare the mean shear induced by NFW lenses to that induced by SIS lenses, under the constraint that the NFW and SIS lenses both have identical virial radii, $`r_{200}`$, and, therefore, identical masses interior to $`r_{200}`$. From this we will then investigate the possible systematic errors in lens mass estimates that would arise due to the assumption of an isothermal potential when, in fact, the lens is best represented by an NFW density profile. Let us consider two lenses which have identical masses, $`M_{200}`$, interior to the virial radius. One of the lenses has an NFW density profile with a concentration parameter of $`c`$ and the other is a singular isothermal sphere with velocity dispersion $`\sigma _v`$. 
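The function $`\mathcal{G}(c)`$ of Eq. (20) involves only one-dimensional quadratures and can be sketched as below, reusing `g_lt` and `g_gt` from the previous snippet (so this block is not standalone); scipy’s `quad` is assumed available. The integrand vanishes as x → 0 and is finite at x = 1, so ordinary adaptive quadrature suffices.

```python
import numpy as np
from scipy.integrate import quad

def calG(c):
    """G(c) of Eq. (20): mean NFW shear inside r200 in units of r_s*rho_c/Sigma_c.
    Relies on g_lt and g_gt defined in the previous sketch."""
    delta_c = (200.0 / 3.0) * c**3 / (np.log(1.0 + c) - c / (1.0 + c))
    inner, _ = quad(lambda x: g_lt(x) * x, 0.0, 1.0)
    outer, _ = quad(lambda x: g_gt(x) * x, 1.0, c)
    return delta_c * (2.0 / c**2) * (inner + outer)

print(calG(10.0))   # c = 10 is an illustrative value only
```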
If these two objects have identical redshifts, $`z_d`$, and act as lenses for populations of source galaxies which have identical redshifts, $`z_s`$, then from equations (19) and (22) above, the ratio of the mean shears induced by these two lenses interior to $`r_{200}`$ is given by: $$\frac{\overline{\gamma }_{\mathrm{nfw}}\left(r_{200}\right)}{\overline{\gamma }_{\mathrm{sis}}\left(r_{200}\right)}=\frac{r_s\rho _cr_{200}G}{\sigma _v^2}\mathcal{G}\left(c\right).$$ (23) Using equations (3) and (21) above and recalling that the concentration parameter is $`c=r_{200}/r_s`$, it is straightforward to show that equation (23) reduces to $$\frac{\overline{\gamma }_{\mathrm{nfw}}\left(r_{200}\right)}{\overline{\gamma }_{\mathrm{sis}}\left(r_{200}\right)}=\frac{3}{400\pi }\frac{\mathcal{G}\left(c\right)}{c},$$ (24) which is a function solely of the concentration parameter of the NFW lens and is explicitly independent of the redshift of the sources, $`z_s`$. Because of the dependence of the concentration parameter on both the redshift of the lens and the cosmology through $`\overline{\rho }(z_d)`$, equation (24) is not explicitly independent of either the cosmology or the lens redshift, $`z_d`$. However, for lenses of a given mass, its dependence on both $`z_d`$ and the cosmology is relatively weak. Shown in Figs. 2 and 3 is the ratio of the mean shears interior to $`r_{200}`$ for NFW and SIS lenses with virial masses in the range of $`10^{11}M_{\odot }\le M_{200}\le 10^{16}M_{\odot }`$. Fig. 2 shows the results for lenses located at $`z_d=0.1`$ and Fig. 3 shows the results for lenses located at $`z_d=0.5`$. The four panels in the figures show the effects of varying the cosmology, and plotted along the top axes of all of the panels is the NFW concentration parameter which corresponds to the lens mass plotted on the lower axes. Two of the cosmologies illustrated in Figs. 2 and 3 are standard cold dark matter (CDM) cosmologies, which differ from one another only in the choice of the normalization of the power spectrum (SCDM–I is a cluster abundance normalization while SCDM–II is COBE-normalized). The other two cosmologies are an open CDM model with zero cosmological constant (OCDM) and a spatially flat, low matter density CDM model with a large cosmological constant ($`\mathrm{\Lambda }`$CDM). The parameters adopted for each of the models are summarized in Table 1 where $`\mathrm{\Lambda }_0=\lambda /3H_0^2`$, $`H_0=100h`$ km/s/Mpc, $`n`$ is the index of the primordial power spectrum of density fluctuations and $$\sigma _8\equiv \left\langle \left[\frac{\delta \rho }{\rho }(8h^{-1}\,\mathrm{Mpc})\right]^2\right\rangle ^{\frac{1}{2}}.$$ (25) Table 1: Cosmological Model Parameters | | $`\mathrm{\Omega }_0`$ | $`\mathrm{\Lambda }_0`$ | $`h`$ | $`\sigma _8`$ | $`n`$ | | --- | --- | --- | --- | --- | --- | | SCDM–I | 1.0 | 0.0 | 0.50 | 0.63 | 1.0 | | SCDM–II | 1.0 | 0.0 | 0.50 | 1.20 | 1.0 | | OCDM | 0.25 | 0.0 | 0.70 | 0.85 | 1.0 | | $`\mathrm{\Lambda }`$CDM | 0.25 | 0.75 | 0.75 | 1.30 | 1.0 | The FORTRAN program charden.f, written and generously provided by Julio Navarro, was used to calculate the values of the concentration parameters for the NFW lenses in the above cosmologies. For each of the cosmologies, $`c`$ was determined for halos with masses in the range of $`10^{11}M_{\odot }\le M_{200}\le 10^{16}M_{\odot }`$ at redshifts of $`z_d=0.1`$ and $`z_d=0.5`$. These values of $`c`$ were then used in conjunction with equation (24) to compute the ratio of the NFW to SIS mean shear interior to the virial radius. For a given cosmology, it is clear by comparing Fig. 2 with Fig. 
3 that equation (24) is only weakly dependent on the lens redshift, $`z_d`$. The largest difference between the various panels in Figs. 2 and 3 which correspond to identical cosmologies occurs for SCDM lenses with masses $`\sim 10^{11}M_{\odot }`$, and in this case the difference between $`z_d=0.1`$ and $`z_d=0.5`$ is only $`\sim 10\%`$. Similarly, by comparing the results plotted in all of the individual panels of Fig. 2 and Fig. 3 at fixed $`z_d`$, it is clear that equation (24) is not tremendously sensitive to the cosmology. In particular, the $`\mathrm{\Lambda }`$CDM, OCDM, and SCDM–I models all yield functions with nearly identical amplitudes for a given value of $`z_d`$. The SCDM–II model yields a function which is somewhat higher than the other three models, exceeding the others by $`\sim 25\%`$ for halos with masses $`\sim 10^{11}M_{\odot }`$ and by $`\sim 20\%`$ for halos with masses $`\sim 10^{16}M_{\odot }`$. Over the majority of the mass range investigated, the NFW lenses give rise to a mean shear interior to $`r_{200}`$ which is systematically larger than that of the SIS lenses. As a result, if one were to measure the mean shear interior to a radius of $`r_{200}`$ of an NFW halo, yet assume it to be an isothermal sphere, the resulting estimate of the virial mass of the lens ($`M_{200}`$) would be systematically high. From equations (21) and (22) above, it follows that the mass of an SIS lens interior to $`r_{200}`$ is simply: $$M_{200}=2\mathrm{\Sigma }_cr_{200}^2\overline{\gamma }(r_{200})$$ (26) so that the mass inferred for the lens scales linearly with the mean shear. Therefore, the systematic error in the true virial mass of the lens is simply the ratio of the mean shear due to an NFW lens to that of an SIS lens with an identical amount of mass contained inside $`r_{200}`$ (i.e., Figs. 2 and 3). Shown in Fig. 4 is the ratio of the mean shear (interior to $`r_{200}`$) of an NFW lens and an SIS lens, plotted as a function of the NFW concentration parameter. (As in Figs. 2 and 3, both lenses have identical masses interior to $`r_{200}`$). From this figure, then, if one were to measure a mean shear for a given NFW lens, yet model the lens as an isothermal sphere, the degree of systematic error in an estimate of the virial mass would clearly be a function of the concentration parameter of the lens. For a given halo mass, the concentration parameter is a function of the cosmology (e.g., the top axes of Figs. 2 and 3); however, it is always the case that for a given cosmology, the larger the value of $`c`$, the lower is the value of $`M_{200}`$. The general conclusions that can be drawn from Fig. 4 are: the lower the mass of an NFW halo, the larger the systematic error in the mass estimate if the lens is assumed to be an isothermal sphere and for a halo of a given mass, the largest systematic error in the mass estimate occurs in a COBE-normalized cosmology (i.e., SCDM-II). With the exception of SCDM-II for which the error is somewhat larger, the systematic error in an estimate of $`M_{200}`$ for rich clusters ($`M_{200}\sim 10^{15}M_{\odot }`$) is negligible ($`<10\%`$). The systematic error in an estimate of $`M_{200}`$ for galaxy–mass objects ($`M_{200}\sim 10^{11}M_{\odot }`$) is, on the other hand, considerable (of order 55% to 65% for the SCDM–II model and of order 30% to 40% for the other models). 
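Combining Eq. (24) with $`\mathcal{G}(c)`$ gives the mass-bias factor directly; the sketch below builds on `calG` above and evaluates it for a few illustrative concentrations — these values of c are placeholders, not the charden.f outputs used in the figures.

```python
import numpy as np

def nfw_to_sis_shear_ratio(c):
    """Eq. (24): ratio of NFW to SIS mean shear inside r200 for equal M200;
    this is also the factor by which an SIS-based estimate of M200 is biased."""
    return 3.0 / (400.0 * np.pi) * calG(c) / c

for c in (5.0, 10.0, 20.0):   # placeholder concentrations for illustration
    print(c, nfw_to_sis_shear_ratio(c))
```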
## 4 Discussion It is generally thought that the masses of large galaxies, galaxy groups, galaxy clusters, and superclusters are dominated by some form of dissipationless dark matter and, thus, it is not unreasonable to expect that their underlying mass density profiles will be represented reasonably well by NFW profiles. In addition, since their total masses and mass-to-light ratios are not strongly constrained at present, a significant effort is currently being devoted to the use of observations of gravitational lensing by these objects to quantify the amount and distribution of dark matter within them. We have, therefore, investigated the properties of NFW lenses in this paper and we have presented analytic expressions for the radial dependence of the convergence, $`\kappa (x)`$, and shear, $`\gamma (x)`$, due to dark matter halos which have NFW density profiles. We have also presented an expression for the mean shear interior to a given radius, $`\overline{\gamma }(x)`$, due to NFW lenses and we have compared the mean shear interior to the virial radius of an NFW lens to that yielded by a singular isothermal sphere lens with an identical virial mass. It is not uncommon for the mass of a gravitational lens to be estimated under an assumption that the lens may be approximated by a singular isothermal sphere. However, it has been clearly demonstrated that the NFW density profile is a far better approximation to the density profile of objects formed by generic dissipationless collapse than is the isothermal sphere. We have computed the systematic error that would be encountered in an estimate of the mass of an NFW lens, where the lens is assumed a priori to be an isothermal sphere. Over mass scales of $`10^{11}M_{\odot }<M_{200}<10^{15}M_{\odot }`$, the mass of the NFW lens is systematically overestimated when it is assumed that, for a given measured value of $`\overline{\gamma }(r_{200})`$, the lens can be approximated by an isothermal sphere. The size of the systematic error in the lens mass due to the isothermal sphere assumption is a function of the NFW concentration parameter of the lens, with the largest error occurring for halos with the largest values of $`c`$ and, hence, with the smallest masses. The systematic error in the mass is not dramatic (i.e., not even as much as a factor of $`\sim 2`$), but this is unsurprising since the shape of the NFW density profile in the outer regions of the halo is fairly close to an isothermal profile. In the case of halos with masses comparable to that of rich clusters, $`M_{200}\sim 10^{15}M_{\odot }`$, the systematic error in the mass due to the assumption of an isothermal potential is small. Therefore, the masses of lensing clusters that are estimated under the assumption of an isothermal potential (and in the limit that the shear is detected out to a radius that is large enough to be comparable to $`r_{200}`$) should not have large systematic errors if, indeed, their density profiles are fitted well by NFW profiles. However, recent observations of lensing of distant field galaxies by nearby field galaxies (and, additionally, by the individual galaxies within clusters, e.g., Natarajan et al. 1998) have inspired a number of investigations through which the mass and extent of the dark matter halos of the lens galaxies might be constrained. 
The technique, known as galaxy–galaxy lensing, seems very promising at the moment, and in the near future a considerable amount of effort will be devoted to the use of observations of galaxy–galaxy lensing to constrain the nature of the dark matter halos of galaxies. The results of our investigation of systematic errors in the mass estimated for NFW lenses under the assumption of an isothermal potential indicate that these errors can be significant for galaxy–mass lenses ($`\sim 60\%`$ in the case of a COBE-normalized CDM universe). Therefore, in the upcoming studies of galaxy–galaxy lensing, should an observational constraint on the masses of galaxy halos be based upon the assumption of an isothermal potential, it will be important to keep such systematic errors in mind when judging the strength of such a constraint. ## Acknowledgments Support under NSF contract AST-9616968 (TGB and COW) and an NSF Graduate Fellowship (COW) are gratefully acknowledged.
no-problem/9908/hep-th9908086.html
ar5iv
text
# Values of the Couplings and Internal Geometry ## 1 Heuristic considerations and motivation It is expected, in a certain sense, that the underlying fundamental theory, if such a theory exists, does not have adjustable free parameters. The values of the couplings in QFT, on the other hand, are not a priori determined at fixed points. E.g. the infrared value $`\alpha _{em\,IR}`$ is given by experiment, but it can be expected to be, at least in principle, calculable within some fundamental theory. That is, we expect the fundamental theory to have more degrees of freedom, which are integrated out when going to the QFT limit and thus provide us with the parameters of the effective theory. We also expect both theories, fundamental and effective, to have the same infrared behaviour. Standard Renormalization Group (RG) arguments would suggest that at low energies one can integrate out all fluctuations of the string except the gauge theory degrees of freedom. This would seem to imply that the ST could not in principle teach us anything about low energy gauge dynamics. Recent work suggests that there are sectors of ST which are important for the low energy structure of the theory. In brane theory, gauge theory arises as an effective low energy description that is useful in some region in the moduli space of the vacua. Of course, this does not help us much concerning the behaviour in the *literal infrared limit*. In any case, it is unlikely that, without some new ingredient in the theory, the gauge couplings can be practically calculable. We shall try here to test the applicability of some (at this stage rather loose) ideas to this problem. The squared couplings, at their places in the generating functional, really look like factors normalizing the statistical weights of the field configurations, but within QFT itself we do not expect any new fundamental degrees of freedom. We can pose the following question: if there is some structure in the underlying geometrized theory responsible for the gauge degrees of freedom, is it possible that it allows a set of models at a given energy level, whose measure should be integrated out? Without a specific model, the question seems rather vague. The first key ingredient in our hypothesis is that a manifold with the same geometrical structure as the covering space of the gauge group manifold itself can be used to study the set of possible models. This requires an explanation. Namely, the moduli space of a theory which depends on some parameters is defined as the range of the parameters leading to distinct physics. If the vacua are related by the symmetry, this is not the case, so we cannot use parts of the gauge group manifold as the moduli space. To elucidate this issue, let us remind ourselves how the ground state in the electroweak theory was chosen. Using the global gauge freedom, the vacuum condensate was located in the lower, neutral component of the isospinor. This was done using the means of the theory, and not as an actual choice between physically distinct situations. But if in the underlying fundamental theory all possible assignments of the charges (and all possible ways of symmetry breaking) consistent with the geometry of the gauge group manifold were allowed, the choice would be a physical one. As a matter of fact, we suppose that within the underlying theory the phenomenological gauge charges are not quantized in the familiar way. 
We observe that the simple factors of the electroweak gauge group correspond to non-vanishing cycles on the covering space of the gauge group manifold. The topic of cycles in the internal geometry is a rapidly evolving subject, and an attempt to try something that resembles the machinery already in use is the second key ingredient in our hypothesis. The existence of solitonic degrees of freedom in string theory, together with the properties of the internal geometry of the space *M* upon which the string is compactified, leads to many interesting phenomena. In particular, a *p*-brane can wrap around a *q*-dimensional cycle *C*$``$*M*, leading to a (*p*−*q*)-dimensional brane in the noncompactified space. The possible *C*’s and their physical consequences have been discussed in numerous contributions, e.g.. Basically, it is possible to consider the vanishing cycles, which are usually regarded as sources of the gauge charges, but we shall be interested in the non-vanishing ones, typically considered in the study of black holes, where the problem of the “disappeared” degrees of freedom also arises, although in a different context. They can be related to the number of ways the cycle (together with the choice of gauge field on it) can be deformed in the compactification geometry. For an excellent overview of this and related issues see . It should also be mentioned that models have been considered where strings (WZNW models) and D-branes (models considered e.g. in refs. ) live on group manifolds. Having this in mind, the use of ideas motivated by research in D-brane theory in the study of the gauge group manifold does not seem unnatural. We are testing our idea in the environment of the electroweak theory. There are three basic reasons to do so: the topology of the gauge group manifold is rather simple, being represented by a product of spheres; the charges can be measured with high precision; and the inverse coupling is maximal at the IR fixed point. ## 2 The basic idea and geometrical set-up Loosely speaking, we are interested in how many ways the cycle containing the subgroup that corresponds to a particular charge can be embedded in the covering space of the gauge group manifold. To find this out, we need to parametrize the sets of non-vanishing cycles that can support the gauge groups corresponding to the particular charge operators. We consider the cycles that are geometrically similar<sup>1</sup> to the cycles identified as containing the particular subgroup. (<sup>1</sup>I.e. we are discussing cycles, round spheres etc. Our statements could be expressed e.g. using “cycles with minimal volume”. At this stage we are trying to avoid mathematical formalization as much as possible.) After that we shall try to relate the integrals over the parameter space to the phenomenological values of the couplings. For the standard electroweak theory, the gauge group manifold is $`S^3\times S^1/Z_2`$ and we shall use it as a toy model in an attempt to realize this idea. Let us first identify such cycles. For $`SU(2)\times U(1)`$ and the group manifold $`S^3\times S^1/Z_2`$, with the natural isomorphisms $$SU(2)\simeq S^3,\qquad U(1)\simeq S^1$$ (1) any of the points on $`S^3`$ can be identified with the unit matrix and any of the points on $`S^1`$ can be attached to the unit. So any of the $`S^3`$ can be identified as SU(2), and by this we have also identified as SU(2) the antipodal $`S^3`$ (with respect to $`S^1`$). 
Having the identification of the two cycles given by (1), we can see that the cycles that can support the U(1) gauge group belong to the following sets: * a: the set of $`S^3`$ cycles parallel to the $`S^3`$ that is given by (1); in the sense of our statement these cycles can support both the SU(2) group and some U(1) subgroup of it. * b: the set of $`S^1`$ cycles normal to the $`S^3`$ that is given by (1). * c: the cycles $`S^1`$ described, for the given identification (1), by $$S^1\ni \mathrm{exp}(i\varphi )\left[\mathrm{cos}\varphi \,I+i\,\mathrm{sin}\varphi \,(a_i\tau _i)\right],\qquad a_ia_i=1$$ (2) and those parallel to them. For $`a_3=1`$ it is obviously the usual electromagnetism, represented as the subgroup of U(2): $$\left|\begin{array}{cc}w& 0\\ 0& 1\end{array}\right|,|w|=1$$ with the generator $`\frac{i}{2}(I+\tau ^3)`$ . * For case "a", it is natural to take as the measure of the number of ways the cycle can be located on the gauge group manifold the integral over $`S^1/Z_2`$, which equals $`\mathrm{\Xi }_a=\pi `$ . * For "b" we have in the same sense the integral over $`S^3/Z_2`$, which equals $`\mathrm{\Xi }_b=\pi ^2`$ . * For "c", the situation is slightly more complicated. With the given point I on the $`S^3`$, every cycle given by (2) is described by some fixed $`\stackrel{}{a}`$. It is natural to take the integral over the unit sphere $`S^2`$ ($`a_ia_i=1`$ from (2)), which equals $`4\pi `$, as the measure of such cycles, and this holds for each point on $`S^3/Z_2`$ . So we have $`\mathrm{\Xi }_c=4\pi \mathrm{\Xi }_b`$, i.e. $`\mathrm{\Xi }_c=4\pi ^3`$. In that way, for all possible cycles that can support U(1), including those given by a linear combination of the generators of the simple factors, we have the following measure: $$\mathrm{\Xi }_a+\mathrm{\Xi }_b+\mathrm{\Xi }_c=\pi +\pi ^2+4\pi ^3=137.036\mathrm{\dots }$$ (3) which is very close to the phenomenological value of the inverse fine structure constant. If we accept for a moment that we are not dealing with a bizarre coincidence, we are faced with many questions and puzzles. First of all, there is the question of the Weinberg angle. Namely, the value (3) is expected to hold as the infrared limit in all possible unbroken low energy $`U(1)`$ theories. This is not only a consequence of the presented construction. It is rather an expected property of the theory; see e.g. Ch. 18 of for the discussion of the couplings for a continuous set of vacua with unbroken U(1). The very nature of the Weinberg angle is connected with the specific way of the symmetry breaking. The same is true for the formula from the electroweak theory $$\frac{1}{e^2}=\frac{1}{g^2}+\frac{1}{g^{\prime 2}}$$ (4) It is not clear how to apply our idea in cases where the couplings cannot be related to all the considered cycles (couplings which are not related to the unbroken gauge group). In plain words, (3) is independent of the Weinberg angle, contributing only to the appropriate normalization of $`g`$ and $`g^{\prime }`$. The only thing that we can do now is to proceed straightforwardly and consider $`g^2`$ and $`g^{\prime 2}`$ to be proportional to $`1/\mathrm{\Xi }_a`$ and $`1/\mathrm{\Xi }_b`$ respectively. As we remember, these are the measures of the sets of cycles parallel, on the $`S^3\times S^1/Z_2`$, to those identified as the weak isospin and hypercharge groups. In that way $`\mathrm{sin}^2\theta _W=0.241\mathrm{\dots }`$, which is roughly satisfactory having in mind that the energy scale has not been precisely identified. Of course, without an elaborated theory the seductive numerical result (3) can in the best case be considered only as an indication that we are on the right track. 
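The numerology of Eq. (3) and the Weinberg-angle estimate are trivial to check numerically; the short Python sketch below only reproduces the arithmetic of this section.

```python
import numpy as np

xi_a = np.pi             # measure of S^1/Z_2                 (case "a")
xi_b = np.pi**2          # measure of S^3/Z_2                 (case "b")
xi_c = 4.0 * np.pi**3    # 4*pi for each point of S^3/Z_2     (case "c")

print(xi_a + xi_b + xi_c)     # 137.0363..., Eq. (3)

# g^2 ~ 1/xi_a and g'^2 ~ 1/xi_b give sin^2(theta_W) = g'^2 / (g^2 + g'^2)
sin2_tw = (1.0 / xi_b) / (1.0 / xi_a + 1.0 / xi_b)
print(sin2_tw)                # 0.2415..., i.e. 1/(1 + pi)
```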
## 3 The possible meaning of the obtained construction It is premature to link our statements to any specific development of the ST, which was used here only as a motivation. At the same time, we can mention several theoretical constructions that exhibit properties similar to the ones described in this note. First of all, there are some recent developments in D-brane theory, where the inverse squares of the gauge coupling constants are represented as volumes of the cycles $`S^3`$ and $`S^2\times S^1`$ on Calabi-Yau manifolds . The volume is determined by the compactification radius $$\frac{1}{g^2}\sim V_{C_i}$$ (5) The possibility that the infrared couplings correspond to some self dual (in the sense of T duality) radius deserves further attention, especially because the enhancement of the gauge symmetry is in some cases expected to occur there. We have already mentioned the branes living on group manifolds . Strictly speaking, couplings as integrals over the gauge group manifolds and over cycles have been known for a long time in the context of extended field configurations, and especially in SUGRA; for a recent review see and references therein. However, the concept of duality was not developed enough, and it was quite unclear how to relate facts about extended configurations to the QFT of point-like objects. It is also intriguing that our toy model is completely formally equivalent to the conformally compactified Minkowski space , as in the formalism developed by Penrose, which perhaps opens the possibility of a link with holography. It seems that *in the underlying fundamental theory* the notion of charge quantization must be somewhat modified, at least classically. This would also have consequences for the notions of duality, BPS states and the anomaly structure of the theory, which are all conceptually deeply interrelated with the conservation of the charge. We think that this construction can be reconciled with our present knowledge about coupling unification. The expected gauge group manifolds at the unification energy scale have a different topology, and the cycles described here do not directly influence the values of the couplings, but this issue requires further study. The presented work has a preliminary and heuristic nature, and it does not make a claim about an "explanation of $`\alpha ^{-1}`$" or similar. It rather attempts to help bridge the gap between phenomenological physics and the ST, starting from the phenomenological side. To achieve this, very strong assumptions were made, both explicitly and implicitly. In that way we obtained expression (3) at once. This raises the hope that we are on the right track, but only future development could justify the proposed identifications. ## 4 Acknowledgments I would like to thank all the people who were, at different stages of the work, patient enough to discuss the presented ideas with me. Of course this does not make any of them responsible for my undue assumptions. I especially owe thanks to Kenan Suruliz, Finn Ravndal and Ulf Lindström.
no-problem/9908/astro-ph9908141.html
ar5iv
text
# The millimeter VLBI Properties of EGRET Blazars ## 1 Introduction A high fraction of the high energy gamma-ray sources detected with the EGRET telescope are known to be associated with blazars (Mattox et al. 1997). Many of these blazars are among the brightest and most variable sources at millimeter wavelengths. Surveys at centimeter wavelengths have shown that a high fraction of the EGRET blazars are detected on long baselines with VLBI (Moellenbrock et al. 1996, Kellerman et al. 1998). The conditions necessary to produce high energy gamma-rays are similar to those for compact radio sources. For these reasons, it is likely that many of the EGRET blazars are also millimeter wavelength VLBI sources. Previous source compilations and surveys indicate that the EGRET blazars comprise between 1/3 and 2/3 of all 86 GHz-VLBI detections (Rogers 1994, Lonsdale, Doeleman et al. 1998). However, a survey of these objects has not been performed with high sensitivity and under good weather conditions. We report here on the first epoch of our survey of the EGRET blazars. The full survey will observe all EGRET blazars in the Mattox et al. (1997) sample with $`\delta >-20^{\circ }`$. All of these sources have $`S_5\ge 1\,\mathrm{Jy}`$ and flat spectral indices ($`\alpha \ge -0.5`$). Additional sources were observed in the program. These were selected from the Moellenbrock et al. (1996) and Kellerman et al. (1998) surveys as sources with long baseline fluxes greater than $`1`$ Jy. ## 2 Observations and Results Observations were made on 2 April 1998 between 0 and 8 UT with a global array of millimeter telescopes. Three sources were observed per hour, each for 6.5 minutes. Additional time was given for pointing and flux density measurements. Due to poor weather and to equipment failures, reliable data were obtained for only 3 stations: Pico Veleta, Bonn and Onsala. Coverage in the visibility plane was essentially linear. Data were correlated at Haystack and analyzed with the Haystack Observatory Post-processing Software (HOPS). Standard polynomial gain curves were applied for Onsala and Bonn. Antenna temperature measurements on source were used to calibrate Pico Veleta. These antenna temperature measurements were also used to determine the zero-baseline flux of the sources. We summarize in Table 1 the results of detection and model-fitting. Signal-to-noise ratios for the incoherent average are reported. An snr greater than 3.5 is considered a detection. Inspection of the fringe rate and delay solutions for 1739+522 indicates that the detection at $`s_{SX}=3.8`$ is firm. Similarly, MK 421 ($`s_{SX}=2.8`$) was clearly not detected. The sources 0954+658 and 1219+285 were not observed by Pico Veleta. Model-fitting was performed in two ways. For sources detected on all three baselines we fit a single Gaussian component to the three visibilities. For sources not detected on the Bonn-Onsala baseline, we fit a single-component Gaussian to the zero-baseline flux and the Pico Veleta-Onsala visibility. We show in Figure 1 the results for the source 1606+106. The errors given are the formal errors assuming thermal noise and a 10% amplitude calibration error. This significantly underestimates the error for two reasons. First, the source structure may be more complex than a single component. This leads us in most cases to overestimate the size and underestimate the brightness temperature. However, in the event of beating between two components, we may underestimate the size significantly. 
Second, calibration errors may in fact be much larger than 10%. The low correlated fluxes on the baselines to Bonn for 3C 454.3, BL Lac and CTA 102 are suggestive of a pointing error. These three sources were observed in succession and may have been affected by poor weather. We do note that for many sources the Gaussian model indicates that most of the zero-baseline flux is recovered on the short baselines. However, in none of the sources is the zero-baseline flux fully recovered on the long baselines. That is, all sources are resolved to some extent on baseline lengths of 500 $`M\lambda `$. Closure phases are also included in Table 1. The sources 3C 279 and 3C 454.3 show significantly non-zero closure phases. ## 3 Discussion A high fraction (17/18) of the EGRET blazars observed on the Pico Veleta-Onsala baseline were detected. Six of the sources were detected for the first time. The detection threshold on this baseline was $`\sim 0.2`$ Jy, making this the most sensitive 3-mm VLBI survey to be performed. Many sources with total fluxes below 1 Jy were detected. We also detected four non-EGRET blazars, three of them for the first time. The high fraction of EGRET blazars detected supports the conclusion drawn at lower frequencies that peak gamma-ray intensity is a strong predictor of millimeter wavelength intensity. This has several implications. One, currently unidentified EGRET sources may be associated with specific objects through high frequency VLBI surveys of sources within the error box. The Third EGRET catalog contains 170 unknown sources (Hartman et al. 1999). The principal difficulty here will be discriminating the compact non-EGRET blazar sources from the compact EGRET sources. Two, improvements in gamma-ray telescope sensitivity will produce a much larger class of sources available for study. The GLAST mission will be 30 times more sensitive than EGRET, implying a possible increase in the source counts by a factor of $`\sim 150`$. These sources will be of intrinsic interest, of use as phase and flux calibrators for other areas of research, and of use as probes of the intervening molecular gas. Three, in order for these sources to be accessible to millimeter VLBI, array sensitivity must be improved. The results presented on the Haystack water vapor radiometer are very encouraging in this regard (Tahmoush & Rogers 1999, these proceedings).
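For illustration, the single circular-Gaussian model fit described in §2 can be sketched as below. The Gaussian-visibility form is the standard Fourier pair of a circular Gaussian brightness distribution; the data arrays are hypothetical placeholders, not the measured visibilities of Table 1.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_vis(r_uv, s_total, theta_fwhm):
    """Visibility amplitude [Jy] of a circular Gaussian component.
    s_total    : total (zero-spacing) flux density [Jy]
    theta_fwhm : FWHM angular size [rad]
    r_uv       : baseline length [wavelengths]
    """
    return s_total * np.exp(-(np.pi * theta_fwhm * r_uv)**2 / (4.0 * np.log(2.0)))

# hypothetical data: zero-baseline flux plus two correlated-flux points
r_uv = np.array([0.0, 150e6, 500e6])   # baseline lengths [lambda]
amp  = np.array([3.0, 2.2, 1.1])       # visibility amplitudes [Jy]

popt, pcov = curve_fit(gauss_vis, r_uv, amp, p0=(3.0, 1e-9))
# popt[0] is the fitted flux; popt[1]*2.06265e11 converts the FWHM to micro-arcsec
print(popt)
```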
no-problem/9908/nucl-th9908058.html
ar5iv
text
# Quartet n-d Scattering Lengths LA-UR-99-4309-REV by J. L. Friar and D. Hüber Theoretical Division Los Alamos National Laboratory Los Alamos, NM 87545 and H. Witała Institute of Physics Jagellonian University PL-30059 Cracow, Poland and G. L. Payne Department of Physics and Astronomy University of Iowa Iowa City, IA 52242 ## Abstract Quartet n-d scattering lengths are calculated using second-generation nucleon-nucleon potential models. These results are compared to the corresponding quantity recently calculated using chiral perturbation theory. Solving exact few-body equations offers a possibility to test the present understanding of nuclear forces by direct comparison of theoretical predictions with experimental data. It is the scattering problem which provides the real opportunity to explore in depth the accuracy of our knowledge of the nucleon-nucleon interaction. Neutron-deuteron (n-d) elastic scattering at zero incident energy is the simplest three-nucleon scattering problem. At this energy only the s-wave scattering lengths survive. In the limit of relative n-d momentum $`q_0\to 0`$ the eigenphase shift in the total angular momentum 3/2 state can be written in terms of the quartet n-d scattering length $`a_4`$ as $`\delta _4(q_0)\approx -a_4q_0.`$ (1) Accurate calculations of n-d quartet scattering lengths were first performed 10 years ago. This quantity is known to be insensitive to most physics, such as $`\mathrm{\ell }>0`$ partial waves of the nucleon-nucleon (NN) potential and three-nucleon forces, because of constraints arising from the Pauli principle. The low (actually, zero) energy of the incoming neutron emphasizes s-waves, while the quartet spin emphasizes $`S=1`$ between the two neutrons, which combination is Pauli forbidden. This reaction at zero energy depends only on details of the deuteron s-wave for an accurate calculation. The potentials of a decade ago (sometimes called “first-generation” potentials) were not particularly accurate fits to the NN data base (or even to the data bases in use when those potentials were constructed). Deuteron properties, such as binding energies and asymptotic normalization constants, had considerable variations. Thus, it is not surprising that three-nucleon properties showed considerable spread due to these indifferent fits, although it was never clear in advance which properties were suspect. One such property was $`a_4`$, the n-d quartet scattering length, where values of 6.304 fm and 6.380 fm were obtained for the RSC and AV14 potential models, respectively. Variations of these numbers due to partial-wave limitations or three-nucleon forces are of the order of $`10^{-3}a_4`$ (or less), which is much smaller than the potential-model difference. Such minimal influence of three-nucleon force effects and higher nucleon-nucleon partial waves is due to the fact that Pauli repulsion for three nucleons in the same spin state keeps the nucleons apart. Recently, a new class of potentials has been developed (sometimes called “second-generation”) that provides greatly improved fits to the NN data base. Only a single calculation of $`a_4`$ exists for a single second-generation potential model (AV18), and that result lies between the RSC and AV14 results listed above. Until very recently, no particular motivation existed for revisiting the $`a_4`$ calculations. 
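Equation (1) suggests a simple way to extract $`a_4`$ numerically from computed low-energy eigenphase shifts: fit the slope of $`\delta _4(q_0)`$ through the origin. The phase-shift values in the Python sketch below are hypothetical, constructed only to illustrate the procedure (they correspond to $`a_4\approx 6.34`$ fm, not to any of the potential models discussed).

```python
import numpy as np

# hypothetical low-energy quartet eigenphase shifts [rad] at momenta q0 [fm^-1]
q0     = np.array([0.002, 0.005, 0.010])
delta4 = np.array([-0.0127, -0.0317, -0.0634])   # made-up values ~ -6.34 * q0

# a_4 = -d(delta_4)/d(q0) as q0 -> 0; least-squares line through the origin
a4 = -np.sum(q0 * delta4) / np.sum(q0**2)
print(a4, "fm")
```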
Chiral perturbation theory ($`\chi `$PT) provides an alternative path (to conventional potentials) for calculating few-nucleon observables. Scattering amplitudes are constructed directly from a field theory, employing one or another scheme of regularization and renormalization. In this fashion the first three-nucleon calculation exploiting chiral perturbation theory was recently performed for the observable $`a_4`$. The result, 6.33(10) fm, lies between the RSC and AV14 results quoted above, which motivates this brief update of the theoretical situation. We have calculated $`a_4`$ for a variety of second-generation NN potentials listed in Table I. [Table 1 — caption: Quartet n-d scattering lengths ($`a_4`$, in fm) calculated using potential models and $`\chi `$PT, together with the experimental value.] These include the Nijmegen 93 (N93; nonlocal), the Nijmegen II (N II; partial-wave local), the Reid soft core 93 (RSC93; partial-wave local), the CD-Bonn (CDB; nonlocal), and the Argonne V18 (AV18; local) potentials. The large difference ($`>`$1%) seen between the previous (first-generation) potential-model results is not reproduced in our five (second-generation) results, which are within a factor of $`2\times 10^{-3}`$ of each other. We also note that the AV18 potential contains an electromagnetic force that must be turned off in momentum-space procedures in order to obtain a result. We have determined using a configuration-space approach that eliminating this force component lowers $`a_4`$ by approximately 0.018 fm, which is a very small change. Our result in Table 1 incorporates the complete force, and is slightly larger than that of Ref. . All (second-generation) theoretical results agree with the experimental value. The large discrepancy seen for first-generation potentials has vanished. Second-generation potential results are now in close agreement with the $`\chi `$PT result. Although the latter has a relatively large theoretical error bar, that error reflects an estimate of uncalculated higher-order Lagrangian terms. Given that these would roughly correspond to small components of the nuclear potential (which scarcely affect the result), it seems likely that the error is overestimated for this reaction. In summary, second-generation NN potential calculations of $`a_4`$ are in much better agreement with each other, and with chiral perturbation theory, than were older first-generation potential calculations. Acknowledgments The work of JLF and DH was performed under the auspices of the United States Department of Energy, while that of GLP was supported in part by the United States Department of Energy. The work of DH was supported in part by the Deutsche Forschungsgemeinschaft under Project No. Hu 746/1-2. The work of HW was supported by the Polish Committee for Scientific Research. One of us (JLF) would like to thank G. M. Hale of Los Alamos for helpful discussions about n-d scattering.
no-problem/9908/astro-ph9908217.html
ar5iv
text
# Compton Electrons and Electromagnetic Pulse in Supernovae and Gamma-Ray Bursts J. I. Katz Department of Physics and McDonnell Center for the Space Sciences Washington U., St. Louis, Mo. 63130 Abstract When gamma-rays emerge from a central source they may undergo Compton scattering in surrounding matter. The resulting Compton-scattered electrons radiate. Coherent radiation by such Compton electrons follows nuclear explosions above the Earth’s atmosphere. Particle acceleration in instabilities produced by Compton electron currents may explain the radio emission of SN1998bw. Bounds on coherent radiation are suggested for supernovae and gamma-ray bursts; these bounds are very high, but it is unknown if coherent radiation occurs in these objects. High altitude (exoatmospheric) nuclear explosions are well known to produce striking electromagnetic phenomena on the surface of the Earth. These phenomena, termed HEMP (High altitude ElectroMagnetic Pulse) or EMP (Karzas and Latter 1962, 1965) occur when prompt gamma-rays following nuclear fission, radiative neutron capture or inelastic neutron scattering suffer Compton scattering in the upper atmosphere. The Compton electrons, with energies typically $`\sim 1`$ MeV, are preferentially directed along the direction of the incident gamma-rays, radially away from the gamma-ray source, and move at a speed close to the speed of light. They are deflected by the geomagnetic field and radiate synchrotron radiation. Because the gamma-rays and Compton electrons are produced over an interval $`<10^{-7}`$ sec, shorter than the characteristic gyroperiod ($`\sim 10^{-6}`$ sec, allowing for the relativistic energy of the electrons), this radiation is coherent; it may be regarded as the effect of a continuously distributed time-dependent current density, rather than as the radiation of individual electrons. The number of radiating electrons is very large so that the currents and coherent radiation intensity are high, and are limited by the condition that the radiation field not exceed the geomagnetic field, for the radiation field of the Compton current acts to screen the geomagnetic field. In fact, the radiation may be crudely approximated as the diamagnetic field exclusion by the conducting swarm of Compton electrons. In atmospheric EMP the Compton electrons produce large numbers of low energy electrons by collisional ionization, and the effects of these electrons on the emergent radiation are the chief subject of the published calculations. Analogous phenomena may be produced by astronomical events. Karzas and Latter (1965) indicate a threshold gamma-ray fluence for the observation of EMP of $`10^{-6}`$ erg/cm<sup>2</sup>. This is less than the fluence of many observed gamma-ray bursts (GRB), in some cases by more than two orders of magnitude. However, GRB do not produce observable EMP in the Earth’s atmosphere because their emission occurs over a duration from milliseconds to minutes, several orders of magnitude (even for the shortest GRB) longer than the electrons’ gyroperiod in the geomagnetic field. 
As a result the EMP, although coherent in the sense that many electrons radiate in phase, is much reduced in amplitude because it is an incoherent average over a very large number of cycles of electron gyromotion; the source current varies slowly, with only a very small Fourier component at synchrotron frequencies; in contrast, EMP from nuclear explosions is produced by a current distribution which is essentially a Dirac $`\delta `$-function in time. Incoherent radiation by the individual electrons does have the usual synchrotron spectrum, but because these fields add incoherently the total power is small. It may also be possible for radiation, analogous to EMP, by Compton electrons to be produced within distant astronomical objects rather than in the Earth’s atmosphere. This distant radiation may, in principle, be detected at Earth. Unfortunately, known astronomical sources of gamma-rays are not nearly as impulsive as nuclear bombs, so the resulting radiation cannot be predicted to be coherent. However, in some cases plasma phenomena may unexpectedly produce intense coherent radiation, so this possibility cannot be disregarded with confidence. A familiar example is the radio emission of pulsars, which is intense enough to observe only because, in a manner not yet well understood, coherent radiation by clumps of electrons is produced. Many supernovae produce large quantities of the radioactive isotope <sup>56</sup>Ni, which decays by electron capture with a 6.1 day half life to <sup>56</sup>Co, which in turn decays, 80% by electron capture and 20% by e<sup>+</sup> emission, with a 77 day half life to stable <sup>56</sup>Fe. These decays produce gamma-rays of several energies; the most important and abundant are the 0.812 MeV gamma-ray produced in 85% of <sup>56</sup>Ni decays and the 0.847 MeV and 1.24 MeV gamma-rays produced in 100% and 66%, respectively, of <sup>56</sup>Co decays. These gamma-rays then undergo Compton scattering as they travel through the expanding supernova debris. In the very energetic SN1998bw nearly 1 M<sub>⊙</sub> of <sup>56</sup>Ni was produced (Iwamoto, et al. 1998). SN1998bw was remarkable for its unprecedentedly intense, and double-peaked, radio emission, with evidence of a self-absorption frequency of several GHz, declining with time, and large magnetic fields (Kulkarni, et al. 1998). The characteristic time scales of the Compton currents are set by the radioactive half lives and the hydrodynamic expansion time scale, each of which is many days. It is thus unlikely that there will be coherent synchrotron emission, as in nuclear EMP. However, other kinds of emission are possible. Katz (1999) suggested that as the Compton electrons propagate into surrounding matter there will be counterstreaming plasma instabilities which may accelerate a few electrons to the Lorentz factors $`\sim 10^2`$ required to explain the observed synchrotron radiation. The synchrotron photosphere advances at the speed of the mildly relativistic Compton electrons, in agreement with the observed (Kulkarni, et al. 1998) Lorentz factor of expansion of the synchrotron source of 1.6–2. This model is consistent with the observed double-peaked time history of the radio intensity, for the first peak may be associated with Compton electrons following <sup>56</sup>Ni decay. The second peak may occur when a second front of more energetic Compton electrons, produced by the more energetic gamma-rays of <sup>56</sup>Co decay, overtake the slower initial wave of Compton electrons. 
The inferred magnetic fields in SN1998bw are $`\sim 10^{-1}`$ gauss. Such fields are remarkably large for such a large astronomical object, apparently implying a large magnetic flux (although only the magnitude, and not the sign or direction, of the magnetic field is inferred from the observations). Despite this, the magnetic field may contain only a small fraction of the supernova energy, although this last conclusion is very sensitive to the expansion Lorentz factor of the radio source. The synchrotron radiation by the Compton electrons is therefore expected to be at unobservably low frequencies $`\sim 1`$ MHz. The observed radio emission implies a power law electron distribution function extending to much higher energies. There is no evidence of coherent emission, nor would it be expected from electrons accelerated by a local plasma instability driven by a smoothly varying flux of Compton electrons. An upper bound on the power radiated in coherent EMP by a source of size $`R`$ is given by the condition that the radiation field not exceed the background magnetic field $`B`$: $$P<\frac{B^2}{8\pi }4\pi R^2c\approx 10^{41}\left(\frac{B}{0.1\,\mathrm{gauss}}\right)^2\left(\frac{R}{3\times 10^{16}\mathrm{cm}}\right)^2\mathrm{erg}/\mathrm{sec}.$$ This is about two orders of magnitude in excess of the observed (presumably incoherent) radio luminosity of SN1998bw. It is possible for the incoherent synchrotron radiation power to exceed this bound on the coherent power; this would correspond to the condition that synchro-Compton radiation exceed synchrotron radiation in power, which violates no fundamental condition, although it is not often observed. The plasma physics of GRB is more complex. Their multi-peaked temporal structure implies the existence of many interacting relativistic shells. In order to explain the dissipative interaction of shells with each other (and with an external medium) there must be collective plasma interaction. In order to accelerate electrons to energies sufficient to produce the observed radiation by the synchrotron process there must be at least approximate electron-ion equipartition, again by some ill-understood collective process. Coherent emission can only be a speculation, but it is worthwhile to bound it. In order to bound the possible intensity of coherent EMP from GRB it is necessary to bound their magnetic fields and dimensions. The dimensions can be reasonably, but very roughly, estimated from the observed time scales of variations and Lorentz factors inferred by a variety of complex arguments. The magnetic fields are, however, nearly unknown. An earlier estimate (Katz 1994) of coherent EMP assumed microgauss interstellar fields, and concluded that even if coherent EMP were produced it would be unobservably weak. That estimate might be applicable if GRB involved only an “external” shock between the relativistic debris and the interstellar medium, but even in that case the appropriate field may be a turbulently amplified field in the shocked interstellar medium, possibly orders of magnitude higher. It is now generally accepted that GRB involve “internal” shocks between debris flows of differing Lorentz factors. Then the appropriate magnetic fields may be those of the shocked debris clouds, which may approach equipartition with the particle kinetic energy, although this last value is controversial and estimates differing by several orders of magnitude exist. 
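The bound displayed above for SN1998bw — and applied below, with appropriate B and R, to GRB — is a one-line evaluation in cgs units; the Python sketch simply reproduces the $`\sim 10^{41}`$ erg/sec estimate.

```python
import numpy as np

def coherent_power_bound(B, R):
    """Upper bound on coherent EMP power, P < (B^2 / 8 pi) * 4 pi R^2 c, cgs units.
    B in gauss, R in cm; returns erg/s."""
    c = 2.998e10  # speed of light [cm/s]
    return B**2 / (8.0 * np.pi) * 4.0 * np.pi * R**2 * c

print(coherent_power_bound(0.1, 3e16))   # ~1.3e41 erg/s, as in the text
```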
Alternatively, it may be that GRB are powered by low frequency electromagnetic radiation from a magnetized accretion disc (Katz 1997). If this radiation is not completely converted to pair gas the magnetic energy density will be a significant fraction of the relativistic hydrodynamic energy density in the wind. In either case, the upper bound on the coherent EMP may approach the GRB power itself, although it is impossible to predict how the close the actual coherent power (if there is any at all) comes to this bound. Characteristic frequencies are also very uncertain, but typically of order GHz. An additional, and stricter, bound may be obtained by noting that the radiating matter in GRB models is optically very thin, with Compton optical depths typically $`10^6`$, although dependent on very uncertain parameters. This sets an upper bound on the efficiency of conversion of photon energy in Compton scattering. This upper bound is further reduced because in most models the photons have energy $`m_ec^2`$ in the co-moving frame, so only a small fraction of their energy is transferred to the electron in a scattering event. This suggests, very crudely, an overall efficiency bound of $`<10^9`$. The conditions for coherent EMP may be met in the evaporation of small primordial black holes, for their final burst of gamma-rays is very brief. The difficulty (aside from the very speculative assumption that such objects exist) is in the astronomical environment. In intergalactic or other nearly empty space the Compton scattering length is very long (a problem exacerbated by the decline of the Klein-Nishina cross-section with energy) and the production of Compton electrons is negligible. If the black hole is captured by dense matter, and is found within a star, planet, or similar object, then no radiation is observable because of absorption by the matter. The most favorable conditions may be those of a black hole in a giant molecular cloud, protostar, or a similar object, with column densities of order the Compton scattering length, but the expected event rates are very low, if not zero. For such favorably located small black holes, detection of their EMP may be more sensitive than direct observation of their gamma-rays. References Iwamoto, K., et al. 1998 Nature 395, 672. Karzas, W. J. and Latter, R. 1962 J. Geophys. Res. 67, 4635 Karzas, W. J. and Latter, R. 1965 Phys. Rev. B137, 1369. Katz, J. I. 1994 Ap. J. 422, 248. Katz, J. I. 1997 Ap. J. 490, 633. Katz, J. I. 1999 Ap. J. submitted (astro-ph/9904053). Kulkarni, S. R., et al. 1998 Nature 395, 663.
# Effects of line asymmetries on the determination of solar internal structure ## 1 Introduction There is strong evidence that the observed profiles of solar oscillations are asymmetric (e.g. Duvall et al. 1993). This is thought to be a consequence of the localized nature of the stochastic driving source, possibly combined with a contribution due to noise which is correlated with the driving (e.g. Gabriel 1994; Abrams & Kumar 1996; Roxburgh & Vorontsov 1997; Nigam et al. 1998; Rosenthal 1998a, b; Nigam & Kosovichev 1998a). Yet in most analyses of helioseismic data the observed Fourier or power spectra have been fitted with symmetric Lorentzian profiles. This leads to systematic errors in the inferred frequencies, of possibly quite serious effect on the inversion for, e.g., the solar internal structure. Just how serious the effect on the inversion will be depends on the variation of the frequency shift with mode frequency ($`\nu `$) and degree ($`l`$). Our aim then is first to assess how the frequency shift due to fitting asymmetric peaks with Lorentzian profiles varies in the $`l\nu `$ plane, and then to consider its effect on structural inversion, specifically a SOLA inversion for sound speed and density. This continues our earlier investigation of this problem (Christensen-Dalsgaard et al. 1998, hereafter Paper I). First we need a model of the line asymmetry. We use the following simple representation of the asymmetric profiles in oscillation power (see also Rosenthal 1998b): $$P_{\mathrm{as}}(\nu )=\frac{\alpha _1^2}{x^2+\alpha _1^2}\left(1-\frac{2x}{\alpha _4}\right)+\frac{\alpha _1^2}{\alpha _4^2}+N.$$ (1) Here $$x=\frac{\pi }{\mathrm{\Delta }\nu }(\nu -\nu _0),$$ (2) where $`\mathrm{\Delta }\nu `$ is the frequency separation between modes of adjacent radial orders. Also, $`\nu _0`$ is the eigenfrequency of the mode, and $`\alpha _4`$ is a measure of the asymmetry. Simple algebra shows that, provided $`\alpha _1^2\ll \alpha _4^2`$, $`P_{\mathrm{as}}(\nu )`$ has a minimum value of $`N`$ at $`x\simeq \alpha _4`$. Thus $`N`$ is a measure of the ratio of noise to signal in the power. In the limit $`\alpha _4\to \infty `$ we recover a Lorentzian profile with maximum value unity, and with half width at half maximum (measured in terms of $`x`$) equal to $`\alpha _1`$, sitting on a uniform background of level $`N`$. We note that Nigam & Kosovichev (1998a) presented a formally different, but mathematically essentially equivalent, expression for the asymmetrical line profile. This is characterized by the dimensionless asymmetry parameter $`B`$ which, in our notation, is given by $`B=\alpha _1/\alpha _4`$. To estimate the systematic error in the frequency determination we have carried out fits of Lorentzian profiles to timestrings of artificial data, assumed stochastically excited and with a power envelope (see Section 2 for a more precise description) given by Eq. (1). This simulates the analysis of the actual data. The fit results in a determination of the location $`x^{(\mathrm{as})}`$ of that symmetric Lorentzian which best fits the asymmetric power distribution. From this, the frequency error $`\delta ^{(\mathrm{as})}\nu `$ resulting from the assumption of a Lorentzian profile is obtained as $$\delta ^{(\mathrm{as})}\nu =\frac{x^{(\mathrm{as})}}{\pi }\mathrm{\Delta }\nu .$$ (3) Of course, $`x^{(\mathrm{as})}`$ depends on parameters $`\alpha _1`$ (linewidth), $`\alpha _4`$ (asymmetry parameter) and $`N`$ (background).
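To make Eqs (1)–(3) concrete, here is a minimal sketch (not from the paper; the parameter values are purely illustrative) of the asymmetric envelope and of one stochastic realization of a power spectrum, generated with exponentially distributed (chi-squared, 2 d.o.f.) fluctuations about the envelope:

```python
import numpy as np

def p_as(x, alpha1, alpha4, N):
    """Asymmetric power envelope of Eq. (1); x = pi*(nu - nu0)/Delta_nu."""
    return alpha1**2 / (x**2 + alpha1**2) * (1.0 - 2.0 * x / alpha4) \
           + alpha1**2 / alpha4**2 + N

rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 2001)                 # dimensionless frequency grid
env = p_as(x, alpha1=0.05, alpha4=2.0, N=1e-4)   # illustrative parameters
# stochastic excitation: the observed power fluctuates about the envelope
spec = env * rng.exponential(scale=1.0, size=x.size)
```

With these parameters one can verify numerically that the envelope indeed reaches its minimum value $`N`$ near $`x=\alpha _4`$.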
In order to assess how $`\delta ^{(\mathrm{as})}\nu `$ varies over the $`l\nu `$ diagram, we need to know how linewidth, asymmetry parameter and background vary with $`l`$ and $`\nu `$. We have used GONG data to estimate the variation of these quantities. Finally, we have used a SOLA inversion technique to calculate the apparent differences in sound speed and density related to the frequency error $`\delta ^{(\mathrm{as})}\nu `$. ## 2 Predicted shifts using artificial data We have considered p-mode power spectra in the vicinity of a single mode. These were constructed, on the assumption of stochastic excitation, as normally distributed Fourier spectra with zero mean and variance given by Eq. (1). To save time and still have good frequency resolution, the simulated data frequency resolution was taken to be $`\alpha _1`$ divided by 5. Each synthetic power spectrum ($`P(\nu _i)`$) was fitted by minimizing the negative of the logarithmic likelihood function ($`L`$): $$S(𝒂)=-\text{ln}(L)=\underset{i}{}\left\{\mathrm{log}[v(𝒂,\nu _i)]+\frac{P(\nu _i)}{v(𝒂,\nu _i)}\right\},$$ (4) where $`v(𝒂,\nu _i)`$ is the model of the variance of the spectrum, determined by the parameters $`𝒂`$. The model was taken to be a symmetric Lorentzian profile, $$v(𝒂,\nu )\equiv P_{\mathrm{Lor}}(\nu )=A\frac{\gamma ^2}{(\nu -\nu _0)^2+\gamma ^2}+B,$$ (5) where $`A`$ is the amplitude, $`\nu _0`$ is the fitted frequency, $`\gamma `$ is the half width at half maximum and $`B`$ is the background. As in the fits of real data, the usual parameters were fitted: the central frequency, and the logarithms of the amplitude, linewidth and background noise level. We have fitted over intervals of 100 times $`\alpha _1`$. Fig. 1 shows two examples of ‘peaks’ in such artificial power spectra, and the discrepancy between the true line profile and the fitted Lorentzian. We have used data obtained by the GONG project (e.g. Hill et al. 1998) to estimate the parameters of Eq. (1) used to generate the synthetic spectra corresponding to different points in the $`l\nu `$ plane. The data were determined in the GONG pipeline by peak-bagging (using Lorentzian profiles) individual modes in the averaged spectra from six 3-month series of observations (GONG months 10-27). For each $`(n,l)`$ we computed the $`m`$-averaged frequency, line width (specified in the GONG tables as the FWHM), mode power and background power; based on these averages, we determined the parameters $`\alpha _1`$ and $`N`$ (assuming the limit $`\alpha _4\to \infty `$) as well as $`\mathrm{\Delta }\nu `$ in Eqs (1) and (2); the resulting $`\alpha _1`$ and $`N`$ are illustrated in Fig. 2. Unfortunately, the GONG data contain few modes with $`l=0`$ and none with $`l=1`$ or 2. To complement our mode set and make a more realistic representation of inversions to infer solar structure, we have extrapolated the parameters $`\alpha _1`$ and $`N`$ determined through the GONG data for $`l<7`$ modes to estimate those for $`l<3`$. The estimation was made for the data set observed by the BiSON group and described by Basu et al. (1997). We restrict the analysis to modes with frequency smaller than $`3510\mu \mathrm{Hz}`$; the mode set contains modes of degree up to 150 (Fig. 3). The dimensionless asymmetry parameter $`\alpha _4`$ is determined by conditions near the upper turning point of the mode, including the excitation and possibly the effects of correlated noise.
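A minimal version of the maximum-likelihood fit of Eqs (4)–(5) (again illustrative, not the GONG pipeline code; it reuses `p_as`, `x` and `spec` from the sketch above) might look as follows:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, nu, P):
    """S(a) of Eq. (4) for the symmetric Lorentzian model of Eq. (5).
    params = (nu0, log A, log gamma, log B), as in the fits of real data."""
    nu0, logA, logG, logB = params
    A, gamma, B = np.exp(logA), np.exp(logG), np.exp(logB)
    v = A * gamma**2 / ((nu - nu0)**2 + gamma**2) + B
    return np.sum(np.log(v) + P / v)

start = np.array([0.0, 0.0, np.log(0.05), np.log(1e-4)])
fit = minimize(neg_log_likelihood, start, args=(x, spec), method="Nelder-Mead")
x_as = fit.x[0]   # fitted peak location: one realization of the shift x^(as)
```

Averaging `x_as` over many realizations mimics the simulations described below.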
Although the details of the excitation and of the correlated noise are as yet poorly understood, it is perhaps not unreasonable to expect that they depend little on degree, at least for the relatively modest degrees considered here; for in this case the horizontal scale of the modes far exceeds the scales of the relevant convective processes. Thus we might expect $`\alpha _4`$ to be a function of frequency alone. We assume that this is precisely the case and estimate $`\alpha _4`$ from the dimensional results for low-degree modes in Fig. 4 of Rosenthal (1998b), scaling with the value of $`\mathrm{\Delta }\nu `$ for low-degree modes: $`\text{log}(\alpha _4)=-2.70+9.52\times 10^{-4}\nu `$, where $`\nu `$ is given in $`\mu `$Hz. We note that this $`\alpha _4`$ and our estimate of $`\alpha _1`$ are essentially consistent with the asymmetry parameter $`B`$ obtained from analysis of MDI (Toutain et al. 1998) and GOLF (Thiery et al. 1999) data. We estimate the systematic frequency shifts by fitting symmetric Lorentzians to asymmetric profiles, using the parameter values estimated from the GONG data. For each mode, 1000 simulations were performed; averages of the estimated dimensionless frequency shifts $`x^{(\mathrm{as})}/\pi `$ are shown in Fig. 4. To a considerable extent, $`x^{(\mathrm{as})}`$ seems to be just a function of frequency. This simple behaviour becomes more understandable if we consider the dependence of the relevant parameters on $`l`$ and $`\nu `$. As already stated, it is assumed that the dimensionless asymmetry parameter $`\alpha _4`$ is a function of frequency alone. Fig. 2 shows that the dimensionless line-width $`\alpha _1`$ and the noise-to-signal ratio $`N`$ for the GONG mode set are predominantly functions of frequency. (That this is so for $`\alpha _1`$ was indeed to be expected, from the physical properties of the damping.) However, they have also some small dependence on order $`n`$. The dimensional frequency shifts $`\delta ^{(\mathrm{as})}\nu `$ will have a much stronger $`l`$-dependence, dominated by the dependence of $`\mathrm{\Delta }\nu `$ on $`l`$. However, it was shown in Paper I that $`\mathrm{\Delta }\nu `$ is essentially inversely proportional to mode inertia (e.g. Christensen-Dalsgaard & Berthomieu 1991); this, therefore, is also the dominant $`l`$-dependence of the $`\delta ^{(\mathrm{as})}\nu `$. ## 3 Effect on Inferred Solar Structure To test the effect on a solar structure inversion (e.g. for sound speed or density) of the systematic error from fitting asymmetric peaks with Lorentzian profiles, we have run an inversion that assumes the frequency differences between observation and a reference model to arise from differences in internal structure, plus a possible contribution from the surface layers: $$\frac{\delta \nu _{nl}}{\nu _{nl}}=\int K_{c^2,\rho }^{(nl)}(r)\frac{\delta c^2}{c^2}(r)\mathrm{d}r+\int K_{\rho ,c^2}^{(nl)}(r)\frac{\delta \rho }{\rho }(r)\mathrm{d}r+\frac{F_{\mathrm{surf}}(\nu _{nl})}{Q_{nl}}.$$ (6) We use a SOLA inversion which seeks to estimate the sound-speed (or density) difference as a function of position within the Sun (see Basu et al. 1996 for details).
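As a purely schematic illustration of the forward relation (6) (the kernels and structure differences below are invented solely to show the quadrature; real kernels come from a solar model), one could evaluate a predicted relative frequency difference like this:

```python
import numpy as np

r = np.linspace(0.0, 1.0, 500)                   # fractional radius r/R
dc2_c2 = 0.002 * np.exp(-((r - 0.7) / 0.05)**2)  # toy sound-speed difference
drho_rho = -0.004 * r                            # toy density difference

def delta_nu_over_nu(K_c2, K_rho, F_surf_nu, Q_nl):
    """Relative frequency difference of Eq. (6), by trapezoidal quadrature."""
    struct = np.trapz(K_c2 * dc2_c2, r) + np.trapz(K_rho * drho_rho, r)
    return struct + F_surf_nu / Q_nl

# toy kernels, loosely peaked near a mode's lower turning point
K_c2 = np.exp(-((r - 0.8) / 0.1)**2); K_rho = 0.3 * K_c2
print(delta_nu_over_nu(K_c2, K_rho, F_surf_nu=1e-4, Q_nl=1.0))
```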
Since the whole inversion process is linear, we do not need to create a full set of artificial data: rather, we simply use $`\delta ^{(\mathrm{as})}\nu _{nl}/\nu _{nl}`$ as our data, and this corresponds to the systematic error that would be added to a real inversion of the GONG data due to fitting asymmetric peaks with Lorentzian profiles. Note that our formulation in Eq. (6) assumes that the difference between the observed and model frequencies in general contains a contribution of the form $`F_{\mathrm{surf}}(\nu _{nl})/Q_{nl}`$ where $`F_{\mathrm{surf}}`$, which as indicated is a function of frequency alone, is determined by the near-surface errors in the model, and $`Q_{nl}`$ is the mode inertia normalized by the inertia of a radial mode of the same frequency (e.g. Christensen-Dalsgaard & Berthomieu 1991). The SOLA inversion allows us to suppress the contribution from the function $`F_{\mathrm{surf}}`$ by imposing a constraint that would completely remove such a function if it were a polynomial of degree $`\mathrm{\Lambda }`$ or smaller. Typically we choose $`\mathrm{\Lambda }=6`$: alternatively we can simply choose not to impose this extra constraint at all. If the frequency shifts introduced by mis-fitting the asymmetric peaks were also to be of the form $`F_{\mathrm{surf}}(\nu _{nl})/Q_{nl}`$, they too would therefore be suppressed in the inversions. In Fig. 5, we have plotted the relative frequency error resulting from the assumption of a Lorentzian profile, multiplied by the normalized mode inertia $`Q_{nl}`$. As $`Q_{nl}`$ is asymptotically related to $`\mathrm{\Delta }\nu `$, Fig. 5 is very similar to Fig. 4. The scattered points at low frequency (crosses) are associated with the scatter in $`\alpha _1`$ (crosses in Fig. 2). The linewidth determination at low frequency is particularly difficult because of its small magnitude. We have looked at other data sets, e.g. MDI data, and this scatter at low frequency is not present, which suggests that it is an artifact of a poor determination (see Fig. 9a, below). Fig. 5 shows that the relative frequency error due to fitting Lorentzians, when scaled by mode inertia, is in fact largely just a function of frequency; thus one might expect that much of it would be suppressed by the suppression of the surface term in the SOLA inversion. The actual inferred sound-speed and density differences from performing SOLA inversions of $`\delta ^{(\mathrm{as})}\nu _{nl}`$ are illustrated in Fig. 6. For comparison, we show also in Fig. 7 the inferred sound-speed and density differences between the Sun and Model S of Christensen-Dalsgaard et al. (1996), using the $`m`$-averaged mode frequencies obtained by the GONG project plus the BiSON mode frequencies for $`l<3`$ (described in Section 2). For consistency with the results for the solar data, in the inversions we assumed that the errors in the $`\delta ^{(\mathrm{as})}\nu _{nl}`$ were the same as the errors in these observed frequencies. The important conclusion is that the change in the inferred sound speed due to $`\delta ^{(\mathrm{as})}\nu _{nl}`$ is small compared with the total sound-speed difference between the Sun and the model. The same is true of the change in inferred density below the convection zone, for $`r<0.7R`$. However, it appears that the inference of density in the convection zone is more sensitive to the effects of asymmetry, as described here. The magnitude of the effect can perhaps be better appreciated in Fig. 7.
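The role of the $`F_{\mathrm{surf}}(\nu )/Q_{nl}`$ term can be mimicked numerically: scale the relative frequency differences by $`Q_{nl}`$, fit a low-degree polynomial in frequency, and inspect the residuals. This sketch (our own illustration; the arrays `nu`, `Q` and `dnu_nu` stand in for a mode set and its frequency differences) follows that recipe with $`\mathrm{\Lambda }=6`$:

```python
import numpy as np

def remove_surface_term(nu, Q, dnu_nu, degree=6):
    """Fit F_surf(nu)/Q_nl with F_surf a polynomial of degree 'degree',
    and return the residual relative frequency differences."""
    scaled = Q * dnu_nu                      # isolates F_surf(nu) if exact
    coeffs = np.polyfit(nu, scaled, degree)  # least-squares polynomial fit
    return dnu_nu - np.polyval(coeffs, nu) / Q

# toy mode set: a purely surface-like signal is removed almost exactly
nu = np.linspace(1500.0, 3500.0, 200)        # muHz
Q = 1.0 + 0.5 * np.random.default_rng(1).random(200)
dnu_nu = (1e-4 * (nu / 3000.0)**3) / Q       # of the form F_surf(nu)/Q_nl
print(np.max(np.abs(remove_surface_term(nu, Q, dnu_nu))))  # ~ 0
```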
In Fig. 7 the dotted lines show the result that would be obtained from inversion of solar frequencies corrected for the effect of asymmetry, i.e., from observed frequencies fitted with an asymmetric profile given by Eq. (1), on the assumption that our representation of asymmetry is correct. The only significant modification from the original solar inference is for $`\delta \rho /\rho `$ at $`r>0.7R`$: there we see that the inferred density difference in the convection zone is no longer nearly constant, in contrast with our usual experience with density differences between pairs of solar models (e.g. Christensen-Dalsgaard 1996). We note that it actually makes rather little difference in this case whether the SOLA inversion explicitly suppresses surface contributions or not: the variation with frequency in Fig. 5 has little effect on the inversion even when the surface constraint is not applied. In fact, even without explicitly taking account of a surface term, the inversion provides some suppression of contributions of this form: this follows from the fact that the near-surface effect of, e.g., $`\delta c^2/c^2`$ in Eq. (6) is also of a form similar to the surface term; thus the localization implicit in the inversion leads to a partial elimination of such contributions. (Of course, the surface term in frequency differences between the Sun and adiabatic frequencies of a model is typically more than an order of magnitude larger than the differences in Fig. 5, so it is indeed important to apply a strict surface constraint when inverting real data.) It follows that the results of the inversions, shown in Fig. 6, are dominated by the scatter of the points in Fig. 5 around the curve fitted as a function of frequency. Much of this scatter could be due to noise in the GONG parameter values. This then affects the inversions in much the same way as random noise in the frequency data. We have carried out inversions of normally distributed random numbers with variances corresponding to the spread in the points in Fig. 5 from the fitted curve (neglecting the low-frequency values shown by crosses), to demonstrate the effect of the scatter on our inversions. The results are shown in Fig. 8, indicating that the sound-speed results are essentially consistent with such a random distribution. In addition, we have verified that the monotonic increase with $`r`$ in $`\delta \rho /\rho `$ in the convection zone results from the scattered points at low frequency, shown as crosses in Fig. 5. ## 4 Discussion As described in Section 2, when fitting the observations, the mode parameters were estimated for each ($`n,l,m`$) and averaged over the $`(2l+1)`$ different $`m`$ to give the values for the multiplet ($`n,l`$). To reproduce the properties of the observations, it would therefore be more realistic to perform $`2l+1`$ simulations and average the estimated $`\delta ^{(\mathrm{as})}\nu `$, instead of averaging 1000 realizations as considered so far. This will increase the scatter in the frequency shift, especially for low-degree modes. The dotted lines in Fig. 6 show the corresponding sound-speed and density differences. The monotonic decrease in sound-speed and increase in density differences towards the surface are still present, somewhat enhanced for sound speed as a result of the larger scatter in $`\delta ^{(\mathrm{as})}\nu `$. However, the variations are still generally small compared with the total difference between Sun and model (Fig. 7).
The inferred frequency shifts depend rather sensitively on the parameters assumed for the modes, particularly their line widths, and hence the use of parameter values based on just a single data set might be cause for some concern. We have already noted that noise in the GONG parameter determination might have a substantial influence on the inversion results. Another point is that the GONG pipeline may systematically overestimate the linewidths. Evidence for this assertion is shown in Fig. 9a, comparing the GONG linewidths used here with MDI linewidths for a period corresponding to GONG months 29-31 obtained by the MDI Medium $`l`$ Program (Schou 1998); evidently, the MDI linewidths are systematically smaller than those from GONG. It seems probable that the systematic discrepancy arises from the fact that the MDI parameter determination takes the $`m`$-leakage into account, whereas the GONG determination did not. (As an aside, we also note that the scatter at low frequency in the GONG data – crosses in Fig. 2a – is not present in the MDI data.) On the basis of this comparison it would obviously be of interest to repeat our study using parameters obtained from the MDI pipeline. Unfortunately, there is not at the moment a good determination of the background power when using the MDI pipeline (the fit is performed only in a very narrow window around each peak, so that the background far from the peak cannot be determined reliably). However, we have tried using MDI linewidths to estimate $`\alpha _1`$ and GONG data to estimate $`N`$: the resulting $`\delta ^{(\mathrm{as})}\nu /\nu `$ are shown in Fig. 9b (gray dots) and compared with the results we obtained with GONG linewidths (black dots). As one would expect, the smaller MDI linewidths result in smaller values of $`\delta ^{(\mathrm{as})}\nu `$. Thus, the results in the present study may be pessimistic in the sense that if the GONG linewidths are overestimated, then this will cause us also to overestimate the effects of line asymmetry. Moreover, the MDI linewidths do not show a dependence on degree (or order) at given frequency, in contrast to GONG data (cf. Fig. 2a). Thus the dependence of $`\delta ^{(\mathrm{as})}\nu `$ on degree is small; the degree dependence that remains is probably introduced mostly by the $`N`$ parameter obtained from GONG. The dashed lines in Fig. 6 show the resulting inferred sound-speed and density differences. The monotonic variation towards the surface is substantially smaller than for the full GONG parameters, particularly for density, and has the opposite sign; this confirms our impression that the inferred structural variations in the solar interior (Fig. 6, circles) are probably largely an artifact of the scatter in the GONG parameter determinations, which we have then used in our asymmetry model. ## 5 Conclusions Our principal conclusion is that the frequency shift that arises from erroneously fitting the asymmetric line profiles with symmetric Lorentzians (as is commonly done at present) is rather benign: it is predominantly of the same form as a structural near-surface contribution – viz. of the form $`F_{\mathrm{surf}}(\nu _{nl})/Q_{nl}`$, a function of frequency divided by mode inertia, which is largely suppressed in the inversion, and much of the departure from this behaviour is likely to be caused by observational scatter in the GONG mode parameters which form the basis for our study.
Thus, although there is some residual error introduced into the sound-speed and density inversions at depth, its magnitude is generally very small. We therefore find no evidence to suggest that ignoring line asymmetry has compromised the helioseismic structural inversions published to date. One might worry that since the values of $`N`$ and $`\alpha _1`$ obtained from the GONG tables are themselves the result of a fit of a symmetric Lorentzian profile, they are also subject to systematic error. The symmetric fit picks out the power level far from the peak as being the noise level. This systematically over-estimates the true noise level which is given by the power minimum or trough which lies close to the peak. However, that error depends only on $`\alpha _1`$ and $`\alpha _4`$ and should therefore also be a function only of frequency. On the other hand, the amplitude and line width are only moderately affected by the asymmetry (see Paper I). Hence we do not expect this to affect our conclusions that $`x^{(\mathrm{as})}`$ is predominantly a function of frequency and that $`\delta ^{(\mathrm{as})}\nu `$ has predominantly the same functional form as a near-surface contribution. A recently published letter by Toutain et al. (1998) purports to show a much more significant effect on the inferred sound speed in the solar core from neglecting line asymmetry when determining mode frequencies. Although we have not considered precisely the same mode set that they did, we view the result of Toutain et al. with some caution. They compared the inversion of mode frequencies obtained by fitting observational data with symmetric Lorentzians with the inversion of mode frequencies in which the low-degree modes only had been fitted with asymmetric profiles. By fitting the low degrees asymmetrically, and the rest symmetrically, it is quite probable that one will artificially introduce an $`l`$-dependent error that does not scale like inverse mode mass and which will be erroneously interpreted in the inversion as a spatial variation of the structure in the solar core. We did a similar experiment with our artificial data. In Fig. 10, we compare the inversion of mode frequencies obtained by fitting observational data with symmetric Lorentzians (triangles) with the inversion of mode frequencies which for the low-degree modes ($`l\le 2`$) only have been corrected for the effect of asymmetry, by applying our estimates of the frequency shift (circles with error bars). Note that the inversion of the symmetrical fits (triangles) and the inversion of mode frequencies which have all been corrected for the effect of asymmetry (squares) agree quite well in the solar core, as already shown in Fig. 7a. However, using the asymmetrical fits only for $`l\le 2`$ modes introduces a change in the solution in the solar core; this confirms our concern that such an inconsistent treatment of asymmetry introduces an artificial $`l`$-dependence and hence may affect the inversion in the core. We note that the effect obtained here is more modest than that of Toutain et al. (1998); the effect we find is quantitatively similar to what has been found by S. Turck-Chièze (private communication). We have been able to make some qualitative comparisons with the asymmetry corrections applied by Toutain et al., thanks to T. Toutain (private communication).
It appears that their asymmetry corrections for $`l=1`$ and $`l=2`$ are very similar and, encouragingly, these agree with our simulated $`\delta ^{(\mathrm{as})}\nu `$ quite well; but the $`l=0`$ corrections are rather different. Evidently such an effect would introduce some degree dependence and therefore radial variation into Toutain et al.’s inversion results. Although we conclude that ignoring line asymmetry has likely not compromised helioseismic structural inversions to date, we should like to emphasize the importance of taking into account the asymmetric profile in the estimation of mode parameters now and in the future. With the SOHO satellite and the GONG network giving us longer time series and with a better signal to noise ratio, the effect of asymmetry on the mode parameter determinations is significant. This is particularly true as we shall be looking for ever more subtle features in the solar interior. Another important reason for fitting an asymmetric profile is the possibility of combining different observables. For example, for solar oscillations observed in Doppler velocity and continuum intensity, the asymmetry in their power spectra has an opposite sign (Nigam & Kosovichev 1998b) which can lead to different estimates of the mode parameters when fitting a symmetric profile. An inversion of datasets with incompatible frequencies due to different line asymmetries will be seriously compromised, because the combined frequency error will not look like a single near-surface term and will in general be interpreted by the inversion method as a spatial variation inside the Sun. Likewise, it is possible that non-contemporaneous data or data from instruments with different levels of background noise would contain systematic errors with serious consequences for inversions, unless the peakbagging takes line asymmetry into account. We finally note that the determination of the asymmetry, and other parameters of the modes, provides crucial information about the excitation of the solar oscillations (e.g. Rosenthal 1998a; Kumar & Basu 1999; Nigam & Kosovichev 1999). ###### Acknowledgements. We have utilized data obtained by the Global Oscillation Network Group (GONG) project, managed by the National Solar Observatory, a Division of the National Optical Astronomy Observatories, which is operated by AURA, Inc. under a cooperative agreement with the National Science Foundation. The data were acquired by instruments operated by the Big Bear Solar Observatory, High Altitude Observatory, Learmonth Solar Observatory, Udaipur Solar Observatory, Instituto de Astrofisica de Canarias, and Cerro Tololo Interamerican Observatory. We are most grateful to Rachel Howe for providing us with the ‘grand average’ peak-bagged data we have used in this investigation. We thank Thierry Toutain for providing us with details of the MDI low-degree asymmetry corrections, to inform our discussion in Section 5 of the results of Toutain et al. (1998), and Jesper Schou for the MDI mode linewidths. The work was supported in part by the Danish National Research Foundation through its establishment of the Theoretical Astrophysics Center, by SOI/MDI NASA GRANT NAG5-3077, and by the UK Particle Physics and Astronomy Research Council. The National Center for Atmospheric Research is sponsored by the National Science Foundation.
# Improved Overlap Fermions ## Abstract We test exact and approximate Ginsparg-Wilson fermions with respect to their chiral and scaling behavior in the 2-flavor Schwinger model. We first consider explicit approximate GW fermions in a short range, then we proceed to their chiral correction by means of the “overlap formula”, and finally we discuss a numerically efficient perturbative chiral correction. In this way we combine very good chiral and scaling properties with a relatively modest computational effort. Recent work revealed that the Ginsparg-Wilson relation (GWR) $$\{D_{x,y},\gamma _5\}=2(D\gamma _5RD)_{x,y}$$ provides the correct chiral behavior of the lattice fermion characterized by the lattice Dirac operator $`D`$, if $`R`$ is a local Dirac scalar. However, this relation does not imply anything about the scaling quality. Our goal is the combination of excellent chiral and scaling behavior – as well as practical applicability. For details, see Ref. . In perfect and classically perfect actions, the chiral symmetry is represented correctly, since it is preserved under block variable renormalization group transformations, so it can be traced back to the continuum. This is in agreement with the fact that such actions solve the GWR. For the classically perfect actions, this has been shown in Ref. , and it is also known that they scale excellently. However, they cannot be applied immediately since they involve an infinite number of couplings. Unfortunately this seems to be true for any solution of the GWR. A truncation, which is applicable to QCD, is the “hypercube fermion” (HF) with couplings to all sites in a unit hypercube. We test the quality of suitable 2d HFs regarding: * chirality: we focus on the form $`2R_{x,y}=\mu \delta _{x,y}`$ ($`\mu >0`$), where the spectrum $`\sigma (D)`$ of an exact GW fermion lies on the circle in the complex plane with radius and center $`1/\mu `$ (GW circle). We check how well this is approximated. * scaling: we test the fermionic and “mesonic” dispersion relation. Our general ansatz reads $`D=\rho _\nu \gamma _\nu +\lambda `$, where the vector term $`\rho _\nu `$ is odd in $`\nu `$ direction and even in the other direction, while the scalar term $`\lambda `$ is entirely even. In the free case, $`\rho _\nu (x-y)`$ $`\{\lambda (x-y)\}`$ contains 2 $`\{3\}`$ different couplings. We consider a massless “scaling optimal” HF (SO-HF), which is constructed by hand. Its free couplings are $`\rho _1(10)=0.334`$, $`\rho _1(11)=0.083`$; $`\lambda (00)=3/2`$, $`\lambda (10)=2\lambda (11)=-1/4`$ (similar to a truncated perfect free fermion). The free spectrum approximates well the GW circle for $`\mu =1`$ (standard GWR). From the free fermion dispersion relation shown in Fig. 1, we see that the SO-HF is strongly improved over the Wilson fermion. We now proceed to the 2-flavor Schwinger model, and we attach the free couplings to the shortest lattice paths only (where there exist 2 shortest paths, each one picks up half of the coupling). Moreover, we add a clover term with coefficient 1, since this turned out to be useful. <sup>1</sup><sup>1</sup>1This is not renormalized in the Schwinger model. Typical configurations on a $`16\times 16`$ lattice show that the deviation from the circle increases with the coupling strength, see Fig. 2. As a scaling test we consider the dispersion relations of the massless and the massive meson-type state – which we denote as $`\pi `$ and $`\eta `$ – and we find again a strong improvement of the SO-HF over the Wilson fermion, see Fig. 3.
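For the free SO-HF one can check the proximity to the GW circle directly in momentum space. The sketch below is our own illustration, not code from the paper; it assumes the couplings listed above, with $`\lambda (11)=-1/8`$ following from $`\lambda (10)=2\lambda (11)=-1/4`$ and from the masslessness condition $`\lambda (p=0)=0`$. It evaluates the eigenvalues $`\lambda (p)\pm i|\rho (p)|`$ of the free operator over the Brillouin zone and measures their distance from the circle of radius 1 centered at 1 (the case $`\mu =1`$):

```python
import numpy as np

# free SO-HF couplings (2d hypercube fermion)
r10, r11 = 0.334, 0.083             # vector couplings rho_1(1,0), rho_1(1,1)
l00, l10, l11 = 1.5, -0.25, -0.125  # scalar couplings lambda(0,0), (1,0), (1,1)

p = np.linspace(-np.pi, np.pi, 201)
p1, p2 = np.meshgrid(p, p)

# Fourier transforms: rho_nu odd in direction nu, even in the other; lambda even
rho1 = 2 * np.sin(p1) * (r10 + 2 * r11 * np.cos(p2))
rho2 = 2 * np.sin(p2) * (r10 + 2 * r11 * np.cos(p1))
lam = l00 + 2 * l10 * (np.cos(p1) + np.cos(p2)) + 4 * l11 * np.cos(p1) * np.cos(p2)

eig = lam + 1j * np.sqrt(rho1**2 + rho2**2)   # upper branch of sigma(D)
dist = np.abs(eig - 1.0) - 1.0                # deviation from the GW circle
print(f"max |deviation| over the Brillouin zone: {np.max(np.abs(dist)):.3f}")
```

One can verify that $`\lambda (0)=0`$ (massless mode) while the doublers sit at $`\lambda =2`$, i.e. on the circle.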
The SO-HF reaches the same level as the classically perfect action, although it only involves 6 different couplings per site (as opposed to 123). Hence the SO-HF has a remarkable quality with respect to chirality and scaling, but the figures also show one unpleasant feature: there is a strong additive mass renormalization, which leads to $`m_\pi \approx 0.13`$ (at $`\beta =6`$). There are various ways to move towards the chiral limit: the standard method is to start from a negative bare mass, but one can also achieve criticality solely by introducing fat links with negative staple terms (which need to be tuned). We now want to discuss yet another way, using the overlap formula. Our ansatz for $`D`$ obeys $`D^{\dagger }=\gamma _5D\gamma _5`$. We now define $`A=D-\mu `$, thus the GWR (with $`2R_{x,y}=\mu \delta _{x,y}`$) is equivalent to $`A^{\dagger }A=\mu ^2`$. If we start from some $`A_0=D_0-\mu `$, then the GWR does not hold in general. However, the operator can be “chirally corrected” to $`A=\mu A_0/\sqrt{A_0^{\dagger }A_0}`$. Now $`A`$ does solve the GWR. H. Neuberger suggested inserting the Wilson fermion $`D_0=D_W=A_W+1`$. We denote the fermion characterized by $`D_{Ne}=1+A_W/\sqrt{A_W^{\dagger }A_W}`$ as Neuberger fermion. At least in a smooth gauge background it is local (in the sense that its couplings decay exponentially). The overlap type of solution to the GWR can be generalized to a large class by varying $`D_0`$. In particular, if $`D_0`$ represents a GW fermion already (with $`2R_{x,y}=\mu \delta _{x,y}`$), then $`D=D_0`$. If we now insert a short-ranged, approximate (standard) GW fermion like $`D_{SOHF}`$, then it is altered only little by the overlap formula (with $`\mu =1`$), $`D\approx D_{SOHF}`$. As a consequence, we can expect a high degree of locality – i.e. a fast exponential decay – which is indeed confirmed in Fig. 4. Also the good scaling behavior of $`D_{SOHF}`$ is essentially preserved in the overlap SO-HF, see Figs. 1 and 5 for the fermionic and mesonic dispersions. The same is true for the approximate rotational invariance. At the same time, we do have exact chiral properties now (hence $`m_\pi =0`$). If we look at the spectra of certain configurations before and after the use of the overlap formula, the effect of the latter comes close to a radial projection of the eigenvalues onto the GW circle. However, in QCD the use of the full improved overlap fermion might be tedious due to the square root. In view of $`d=4`$ we suggest to evaluate the square root in $$D=\mu \left[1+A_0/\sqrt{A_0^{\dagger }A_0}\right]$$ just perturbatively around $`\mu `$. For an approximate GW fermion like $`A_0=A_{SOHF}`$ this expansion converges rapidly, since the operator $`ϵ =A_0^{\dagger }A_0/\mu ^2-1`$ obeys $`ϵ \ll 1`$. It fails to converge, however, for the Neuberger fermion. If we perform the perturbative chiral correction, we obtain an operator of the form $`D_{p\chi c}=\mu +A_0Y`$. For the correction to $`O(ϵ ^n)`$ the operator $`Y`$ is given by a polynomial in $`A_0^{\dagger }A_0/\mu ^2`$ of order $`n`$. The implementation then requires essentially $`1+2n`$ matrix-vector multiplications (the matrix being $`A`$ resp. $`A^{\dagger }`$), hence the computational effort increases only linearly in $`n`$. It turns out that for moderate couplings already the leading orders are efficient in doing most of the chiral projection, see Fig. 6.
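The following sketch is our own illustration of this construction, not code from the paper: a small random matrix stands in for $`A_0`$ (chosen close to $`-\mu `$ so that the expansion converges), the exact overlap operator is built from the eigendecomposition of $`A_0^{\dagger }A_0`$, and the perturbative correction uses the Taylor coefficients of $`(1+ϵ )^{-1/2}`$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n = 1.0, 8
# stand-in for A0 = D0 - mu, with D0 close to a GW operator (so that
# A0^dagger A0 is close to mu^2 and the expansion converges)
A0 = -mu * np.eye(n) + 0.05 * rng.standard_normal((n, n))

H = A0.conj().T @ A0                    # A0^dagger A0 (Hermitian, positive)
w, V = np.linalg.eigh(H)
inv_sqrt_H = V @ np.diag(w ** -0.5) @ V.conj().T

# exact overlap: D = mu * [1 + A0 / sqrt(A0^dagger A0)]
D_exact = mu * (np.eye(n) + A0 @ inv_sqrt_H)

def D_pert(order):
    """Perturbative chiral correction D = mu + A0*Y, where Y is the Taylor
    series of (1+eps)^(-1/2), eps = A0^dagger A0 / mu^2 - 1, truncated."""
    eps = H / mu**2 - np.eye(n)
    Y, term, coeff = np.eye(n), np.eye(n), 1.0
    for k in range(1, order + 1):
        coeff *= -(k - 0.5) / k          # Taylor coefficients of (1+x)^(-1/2)
        term = term @ eps
        Y = Y + coeff * term
    return mu * np.eye(n) + A0 @ Y

for order in (1, 2, 3):
    print(order, np.linalg.norm(D_pert(order) - D_exact))
```

The printed norms shrink rapidly with the order, illustrating why a few terms already do most of the chiral projection.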
Therefore, the perturbative chiral correction of a good HF combines excellent scaling and chirality (and rotational invariance) as well as a high degree of locality with a relatively modest computational overhead. This method is very promising for the extension to 4d, which is currently in progress.
# 1 Introduction ## 1 Introduction Strong-interaction measurements at a future high-energy linear e<sup>+</sup>e<sup>-</sup> collider (LC) will form an important component of the physics programme. A 1 TeV collider has an energy reach comparable with the LHC, and offers the possibility of testing QCD in the experimentally clean, more theoretically accessible e<sup>+</sup>e<sup>-</sup> environment. In addition, $`\gamma \gamma `$ interactions will be delivered free by Nature, and a dedicated $`\gamma \gamma `$ collider is an additional option, allowing detailed measurements of the relatively poorly understood photon structure. Here I review the main topics; more details can be found in :
* Precise determination of the strong coupling $`\alpha _s`$.
* Measurement of the $`Q^2`$ evolution of $`\alpha _s`$, searches for new coloured particles and constraints on the GUT scale.
* Measurements of the $`\mathrm{t}\overline{\mathrm{t}}(g)`$ system.
* Measurement of the total $`\gamma \gamma `$ cross section and the photon structure function.

Related top-quark, $`\gamma \gamma `$ and theoretical topics are summarised elsewhere. ## 2 Precise Determination of $`\alpha _s`$ The current precision of individual $`\alpha _s(M_Z^2)`$ measurements is limited at best to several per cent. Since the uncertainty on $`\alpha _s`$ translates directly into an uncertainty on perturbative QCD (pQCD) predictions, especially for high-order multijet processes, it would be desirable to achieve much better precision. In addition, since the weak and electromagnetic couplings are known with much greater precision, the error on $`\alpha _s`$ represents the dominant uncertainty on our ‘prediction’ of the scale for grand unification of the strong, weak and electromagnetic forces. Several techniques for $`\alpha _s`$ determination are available at the LC: ### 2.1 Event Shape Observables The determination of $`\alpha _s`$ from event ‘shape’ observables that are sensitive to the 3-jet nature of the particle flow has been pursued for 2 decades and is generally well understood. In this method one usually forms a differential distribution, makes corrections for detector and hadronisation effects, and fits a pQCD prediction to the data, allowing $`\alpha _s`$ to vary. Examples of such observables are the event thrust and jet masses. The latest generation of such $`\alpha _s`$ measurements, from SLC and LEP, has shown that statistical errors below the 1% level can be obtained with samples of a few tens of thousands of hadronic events. With the current LC design luminosities of $`5\times 10^{33}`$/cm<sup>2</sup>/s (NLC/JLC) and $`3\times 10^{34}`$/cm<sup>2</sup>/s (TESLA), at $`Q`$ = 500 GeV, tens/hundreds of thousands of e<sup>+</sup>e<sup>-</sup> $`\to q\overline{q}`$ events would be produced each year, and a statistical error on $`\alpha _s`$ below the 1% level could be achieved easily. Detector systematic errors, which relate mainly to uncertainties on the corrections made for acceptance and resolution effects, are under control at the 1-3% level (depending on the observable). If the LC detectors are designed to be very hermetic, with good tracking resolution and efficiency, as well as good calorimetric jet energy resolution, all of which are required for the search for new physics processes, it seems reasonable to expect that the detector-related uncertainties can be beaten down to the 1% level or better.
e<sup>+</sup>e<sup>-</sup> $`\to Z^0Z^0`$, $`W^+W^-`$, or $`\mathrm{t}\overline{\mathrm{t}}`$ events will present significant backgrounds to $`q\overline{q}`$ events for QCD studies, and the selection of a highly pure $`q\overline{q}`$ event sample will not be as straightforward as at the $`Z^0`$ resonance. The application of kinematic cuts would cause a significant bias to the event-shape distributions, necessitating compensating corrections at the level of 25%. More recent studies have shown that the majority of $`W^+W^-`$ events can be excluded without bias by using only right-handed electron-beam produced events in the $`\alpha _s`$ analysis. Furthermore, the application of highly-efficient $`b`$-jet tagging can be used to reduce the $`\mathrm{t}\overline{\mathrm{t}}`$ contamination to the 1% level. After statistical subtraction of the remaining backgrounds (the $`Z^0Z^0`$ and $`W^+W^-`$ event properties (will) have been measured accurately at SLC and LEP), the residual bias on the event-shape distributions is expected to be under control at the 1% level on $`\alpha _s`$. Additional corrections must be made for the effects of the smearing of the particle momentum flow caused by hadronisation. These are traditionally evaluated using Monte Carlo models. The models have been well tuned at SLC and LEP and are widely used for evaluating systematic effects. The size of the correction factor, and hence the uncertainty, is observable dependent, but the ‘best’ observables have uncertainties as low as 1% on $`\alpha _s`$. Furthermore, one expects the size of these hadronisation effects to diminish with c.m. energy at least as fast as 1/$`Q`$. Hence 10%-level corrections at the $`Z^0`$ should dwindle to less than 2% corrections at $`Q\sim `$ 500 GeV, and the associated uncertainties should be well below the 1% level on $`\alpha _s`$. Currently pQCD calculations of event shapes are available complete only up to O($`\alpha _s^2`$). Since the data contain knowledge of all orders one must estimate the possible bias inherent in measuring $`\alpha _s(M_Z^2)`$ using the truncated QCD series. Though not universally accepted, it is customary to estimate this from the dependence of the fitted $`\alpha _s(M_Z^2)`$ on the QCD renormalisation scale, yielding a large and dominant uncertainty of about $`\pm 0.007`$. Since the missing terms are O($`\alpha _s^3`$), and since $`\alpha _s`$(500 GeV) is expected to be about 25% smaller than $`\alpha _s(M_Z^2)`$, one expects the uncalculated contributions to be almost a factor of two smaller at the higher energy, leading to an estimated uncertainty of $`\pm 0.004`$ on $`\alpha _s`$(500 GeV). However, translating to the conventional yardstick $`\alpha _s(M_Z^2)`$ yields an uncertainty of $`\pm 0.006`$, only slightly smaller than currently. Therefore, a 1%-level $`\alpha _s(M_Z^2)`$ measurement is possible experimentally, but will not be realised unless O($`\alpha _s^3`$) contributions are calculated. ### 2.2 The $`\mathrm{t}\overline{\mathrm{t}}`$ System The value of $`\alpha _s`$ controls the strong potential that binds quarkonia resonances. In the case of $`\mathrm{t}\overline{\mathrm{t}}`$ production near threshold, the large top mass and decay width ensure that the top quarks decay in a time comparable with the classical period of rotation of the bound system, washing out most of the resonant structure in the cross-section, $`\sigma _{\mathrm{t}\overline{\mathrm{t}}}`$.
The shape of $`\sigma _{\mathrm{t}\overline{\mathrm{t}}}`$ near threshold hence depends strongly on both $`m_t`$ and $`\alpha _s`$. Fits of next-to-leading-order (NLO) pQCD calculations to simulated measurements of $`\sigma _{\mathrm{t}\overline{\mathrm{t}}}`$ showed that $`m_t`$ is strongly correlated with $`\alpha _s`$. Fixing $`\alpha _s`$ allowed the error on $`m_t`$ to be reduced by a factor of 2. Since the main aim of such an exercise is to determine $`m_t`$ as precisely as possible, the optimal strategy would be to input $`\alpha _s`$ from elsewhere. Moreover, recent NNLO calculations of $`\sigma _{\mathrm{t}\overline{\mathrm{t}}}`$ near threshold have caused consternation, in that the size of the NNLO contributions appears to be comparable with that of the NLO contributions, and the change in the shape causes a shift of roughly 1 GeV in the value of the fitted $`m_t`$. This mass shift can be avoided by a judicious top-mass definition, which also reduces the $`m_t`$–$`\alpha _s`$ correlation. However, the resulting cross-section normalisation uncertainty translates into an uncertainty of $`\pm 0.012`$ on $`\alpha _s(M_Z^2)`$, i.e. about 5 times larger than the estimated statistical error. Although this may provide a useful ‘sanity check’ of $`\alpha _s`$ in the $`\mathrm{t}\overline{\mathrm{t}}`$ system, it does not appear to offer the prospect of a 1%-level measurement. A preliminary study has also been made of the determination of $`\alpha _s`$ from $`R=\sigma _{\mathrm{t}\overline{\mathrm{t}}}/\sigma _{\mu ^+\mu ^-}`$ above threshold. For $`Q\sim `$ 400 GeV the theoretical uncertainty on $`R`$ is roughly 3%; for $`Q\sim `$ 500 GeV the exact value of $`m_t`$ is much less important and the uncertainty is smaller, around 0.5%. However, on the experimental side the limiting precision on $`R`$ will be given by the uncertainty on the luminosity measurement. If this is only as good as at LEPII, i.e. around 2%, then $`\alpha _s(M_Z^2)`$ could be determined with an experimental precision of at best 0.007, which is not especially useful other than as a consistency check. Finally, there remains the possibility of determining $`\alpha _s`$ using $`t\overline{t}g`$ events, which have recently been calculated at NLO. For reasonable values of the jet-resolution scale $`y_c`$ the NLO contributions are substantial, of order 30%, which is comparable with the situation for massless quarks. The discussion of unknown higher-order contributions above is hence also valid here, and $`t\overline{t}g`$ events will only be useful for determination of $`\alpha _s`$ once the NNLO contributions have been calculated. If the $`t\overline{t}g`$ event rate can be measured precisely, the ansatz of flavour-independence of strong interactions can be tested for the top quark, and the running of $`m_t`$ could be determined in a similar manner to the running $`b`$-quark mass. A precision of 1% implies a measurement of $`m_t(Q)`$ with an error of 5 GeV. ### 2.3 A High-luminosity Run at the $`Z^0`$ Resonance A LC run at the $`Z^0`$ resonance is attractive for a number of reasons. At nominal design luminosity tens of millions of $`Z^0`$/day would be delivered, offering the possibility of a year-long run to collect a Giga $`Z^0`$ sample for ultra-precise electroweak measurements and tests of radiative corrections. Even substantially lower luminosity, or a shorter run, at the $`Z^0`$ could be useful for detector calibration.
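Several of the estimates above involve evolving $`\alpha _s`$ between $`M_Z`$ and LC energies, e.g. when translating an uncertainty on $`\alpha _s`$(500 GeV) back to the yardstick $`\alpha _s(M_Z^2)`$. As a rough standalone illustration (standard one-loop QCD running, not a calculation from this review; the input value 0.118 is merely a typical world average, and $`n_f=5`$ is kept for simplicity even above the top threshold), the quoted ~25% decrease of $`\alpha _s`$ from $`M_Z`$ to 500 GeV is approximately reproduced by:

```python
import math

def alpha_s_one_loop(Q, alpha_mz=0.118, mz=91.19, n_f=5):
    """One-loop running of the strong coupling from M_Z to scale Q (GeV)."""
    beta0 = 11.0 - 2.0 * n_f / 3.0
    return alpha_mz / (1.0 + alpha_mz * beta0 / (2.0 * math.pi) * math.log(Q / mz))

for Q in (91.19, 200.0, 500.0, 1000.0):
    print(f"Q = {Q:6.1f} GeV : alpha_s = {alpha_s_one_loop(Q):.4f}")
```

New coloured particles below threshold would modify `beta0`, which is the handle behind the searches for anomalous running discussed in Section 3 below.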
A Giga $`Z^0`$ sample offers two additional options for $`\alpha _s`$ determination via measurements of the inclusive ratios $`\mathrm{\Gamma }_Z^{had}/\mathrm{\Gamma }_Z^{lept}`$ and $`\mathrm{\Gamma }_\tau ^{had}/\mathrm{\Gamma }_\tau ^{lept}`$. Both depend on $`\alpha _s`$ only through small QCD corrections, and hence require a very large event sample for a precise measurement. For example, the current LEP data sample of 16M $`Z^0`$ yields an error of 0.003 on $`\alpha _s(M_Z^2)`$ from $`\mathrm{\Gamma }_Z^{had}/\mathrm{\Gamma }_Z^{lept}`$. The statistical error could, naively, be pushed to below the 0.0005 level, but systematic errors arising from the lepton selection will probably limit the precision to 0.0016. Nevertheless this would be a very precise, reliable measurement. In the case of $`\mathrm{\Gamma }_\tau ^{had}/\mathrm{\Gamma }_\tau ^{lept}`$ the experimental precision from LEP and CLEO is already at the 0.001 level on $`\alpha _s(M_Z^2)`$. However, there has been considerable debate about the size of the theoretical uncertainties, with estimates ranging from 0.002 to 0.006. If this situation is clarified, and the theoretical uncertainty is small, $`\mathrm{\Gamma }_\tau ^{had}/\mathrm{\Gamma }_\tau ^{lept}`$ may offer a further 1%-level $`\alpha _s(M_Z^2)`$ measurement. ## 3 $`Q^2`$ Evolution of $`\alpha _s`$ The running coupling is sensitive to the presence of any new coloured particles, such as gluinos, beneath the c.m. energy threshold via their vacuum polarisation contributions. Measurements of event shape observables at high energies, combined with existing lower energy data, would allow one to search for anomalous running. In addition, extrapolation of the running $`\alpha _s`$ can be combined with extrapolations of the dimensionless weak and electromagnetic couplings in order to try to constrain the GUT scale. The highest-energy measurements, up to $`Q`$ = 200 GeV, are currently provided by LEPII. Older data from e<sup>+</sup>e<sup>-</sup> annihilation span the range $`14\le Q\le 91`$ GeV. A 0.5 - 1.0 TeV linear collider would increase significantly the lever-arm for measuring the running. However, over a decade from now the combination of LC data with the older data may not be straightforward, and will certainly not be optimal since some of the systematic errors are correlated among data at different energies. It would be desirable to measure in the same apparatus, with the same technique, and by applying the same treatment to the data at least one low-energy point - at the $`Z^0`$ or even lower - in addition to points at the $`W^+W^-`$ and $`\mathrm{t}\overline{\mathrm{t}}`$ thresholds, as well as at the highest c.m. energies. ## 4 Other e<sup>+</sup>e<sup>-</sup> QCD Topics Limited space allows only a brief mention of several important topics :
* Searches for anomalous chromo-electric and chromo-magnetic dipole moments of quarks, which effectively modify the rate and pattern of gluon radiation. Limits on the anomalous $`b`$-quark chromomagnetic moment have been obtained at the $`Z^0`$ resonance. The $`t\overline{t}g`$ system would be important to study at the LC.
* Gluon radiation in $`\mathrm{t}\overline{\mathrm{t}}`$ events is expected to be strongly regulated by the large mass and width of the top quark. Measurements of gluon radiation patterns in $`t\overline{t}g`$ events may provide additional constraints on the top decay width.
* Polarised electron (and positron) beams can be exploited to test symmetries using multi-jet final states.
For polarized e<sup>+</sup>e<sup>-</sup> annihilation to three hadronic jets one can define $`\vec{S}_e\cdot (\vec{k}_1\times \vec{k}_2)`$, which correlates the electron-beam polarization vector $`\vec{S}_e`$ with the normal to the three-jet plane defined by $`\vec{k}_1`$ and $`\vec{k}_2`$, the momenta of the two quark jets. If the jets are ordered by momentum (flavour) the triple-product is CP even (odd) and T odd. Standard Model T-odd contributions of this form are expected to be immeasurably small, and limits have been set for the $`b\overline{b}g`$ system. At the LC these observables will provide a search-ground for anomalous effects in the $`t\overline{t}g`$ system.
* The difference between the particle multiplicity in heavy- ($`b,c`$) and light-quark events is predicted to be independent of c.m. energy. Precise measurements have been made at the $`Z^0`$, but measurements at other energies are limited in precision, rendering a limited test of this important prediction. High-precision measurements at the LC would add the lever-arm for a powerful test.
* Colour reconnection and Bose-Einstein correlations are fascinating effects. They are important to study precisely since they may affect the precision with which the masses of heavy particles, such as the $`W^\pm `$ and top-quark, can be reconstructed kinematically via their multijet decays.

## 5 Photon Structure Though much progress has been made in recent years at LEP and HERA, a thorough understanding of the ‘structure’ of the venerable photon is still lacking. Away from the $`Z^0`$ resonance the relative cross-section for $`\gamma \gamma `$ scattering is large, but good detector acceptance in the low-polar-angle regions is required. The LC provides an opportunity to make definitive measurements, either from the ‘free’ $`\gamma \gamma `$ events provided in the e<sup>+</sup>e<sup>-</sup> collision mode, or via a dedicated high-luminosity ‘Compton collider’ facility. From the range of interesting $`\gamma \gamma `$ topics I mention only a few important ‘QCD’ measurements:
* The total cross-section, $`\sigma _{\gamma \gamma }`$, and the form of its rise with $`Q`$, will place constraints on models which cannot be differentiated with today’s data; ‘proton-like’ models predict a soft rise, whereas ‘minijet’ models predict a steep rise.
* The photon structure function, $`F_2^{\gamma \gamma }(x,Q^2)`$, and the nature of its rise at low $`x`$ in relation to ‘BFKL’ or ‘DGLAP’ evolution.
* Polarised structure functions, the charm content of the photon, and diffractive phenomena.

## 6 Summary and Conclusions Tests of QCD will enrich the physics programme at a high-energy e<sup>+</sup>e<sup>-</sup> collider. Measurement of $`\alpha _s(M_Z^2)`$ at the 1% level of precision appears feasible experimentally, but will require considerable theoretical effort. A search for anomalous running of $`\alpha _s(Q^2)`$ is an attractive prospect, but presents serious requirements on the design of both the collider and detectors. Electron-beam polarisation can be exploited to perform symmetry tests using multi-jet final states. Interesting gluon radiation patterns in $`\mathrm{t}\overline{\mathrm{t}}`$ events could be used to constrain the top quark decay width. Measurement of the gluon radiation spectrum would also constrain anomalous strong top-quark couplings. Realistic hadron-level Monte Carlo simulations, including detector effects, need to be performed to evaluate these possibilities quantitatively.
## Acknowledgements I thank A. Brandenburg and A. de Roeck for their help in preparing this summary.
# Nonadiabaticity and single electron transport driven by surface acoustic waves ## Abstract Single-electron transport driven by surface acoustic waves (SAW) through a narrow constriction, formed in a two-dimensional electron gas, is studied theoretically. Due to long-range Coulomb interaction, the tunneling coupling between the electron gas and the moving minimum of the SAW-induced potential rapidly decays with time. As a result, nonadiabaticity sets a limit for the accuracy of the quantization of acoustoelectric current. Recently, a new type of single electron devices was introduced. In the experiments, surface acoustic waves (SAW) induce, via piezo-electric coupling, charge transport through a point contact in a GaAs heterostructure. When the point contact is biased beyond the pinch-off, the acoustoelectric current develops plateaus, where $$I=N_0ef.$$ (1) Here $`f`$ is the SAW frequency, and $`N_0`$ is an integer. The plateaus were shown to be stable over a range of temperature, gate voltages, SAW power, and source-drain voltages. The remarkably high accuracy of the quantization (1), and the high frequency of operation ($`f\sim 3\mathrm{GHz}`$) immediately suggest a possibility of metrological applications of the effect. However, deep understanding of these results is still lacking. Qualitatively, the effect is explained by a simple picture of moving quantum dots. Electrons, captured in the local potential minima (‘dots’) created by the SAW, are dragged through the potential barrier. The strong Coulomb repulsion prevents excess occupation of the dot. Increase of the SAW power deepens the dots, more states become available for the electrons to occupy, and new plateaus appear. By changing the gate voltage, the slope of the potential barrier can be lowered, which has a similar effect. Interestingly, the quantization was not observed in the open channel regime, although it should be expected on quite general theoretical grounds. For the mechanism of the quantization, discussed in , it is essential that the DC conductance for each instantaneous configuration of the SAW-induced potential is zero. In the open channel regime this would require the channel length to be much longer than the SAW wavelength $`\lambda `$, which is difficult to realize. In the experiments this problem is avoided, since in the pinch-off regime the DC conductance is zero. However, as explained below, the rapid change of the SAW potential near the entrance to the channel creates a new problem, leading to nonadiabatic corrections to (1). Long-range Coulomb interaction plays a crucial role in this phenomenon. The two-dimensional electron gas (2DEG) is depleted in the vicinity of the gates (see Fig.1). On the other hand, in the depleted region screening is lacking. The important parameters of the problem can be understood from the solution of the electrostatic model. In this approach, one assumes that, since the Fermi velocity $`v_F`$ is large compared to the sound velocity $`v_s`$, the 2DEG is able to follow the SAW-induced potential as it changes in time. Therefore, it is sufficient to consider an instantaneous electrostatic problem, treating time as a parameter. Since the screening length in the 2DEG ($`\sim 10\mathrm{nm}`$) is much smaller than $`\lambda \sim 10^3\mathrm{nm}`$, one can assume that the SAW-induced potential is completely screened in the 2DEG-occupied region. Therefore, one has to solve the Poisson equation subjected to complicated boundary conditions.
In particular, the potential at the gates ($`\phi =V_g`$), as well as the potential of the 2DEG region ($`\phi =0`$), and the density $`\rho =\rho _0`$ of the positive background charge in the depleted region are fixed. Furthermore, for simplicity, one can take the effect of the SAW into account through a weak periodic modulation of $`\rho `$: $`\rho \to \rho _0+\delta \rho (x,t)`$. The self-consistent solution would yield the location of the edge of the 2DEG, the potential in the depleted region $`\phi \left(𝐫\right)`$, and the number density $`n\left(𝐫\right)`$ of the 2DEG. Then, using the Thomas-Fermi-type relation $`U\left(𝐫\right)+\left(\pi \hbar ^2/m^{*}\right)n\left(𝐫\right)=ϵ_F`$, one would be able to find an effective confining potential $`U\left(𝐫\right)`$. However, the details of the full solution are not required, since the most important properties can be understood from the following simple arguments. Firstly, it is clear that the potential $`\phi (x,y)`$ has a minimum in the gap between the gates (see Fig.1). Secondly, the very presence of plateaus shows that the charge states of the dot are separated by finite energy gaps. Since both $`N_0=\mathrm{odd}`$ and $`N_0=\mathrm{even}`$ plateaus are observed, the energy gap is associated with Coulomb repulsion rather than single-particle level spacing. This means that for any given $`x`$, $`\phi (x,y)`$ has a sharp minimum near $`y=0`$. The shape of this minimum is slowly changing with $`x`$, and this change is controlled by the geometry of the device. Note that the smoothness of the change of the confining potential in the $`y`$-direction is supported by the fact that the same systems exhibit a very nice pattern of conductance quantization in the open channel regime. Finally, the weak perturbation of the background charge density, $`\left|\delta \rho \right|\ll \rho _0`$, should not affect significantly the position $`x_0`$ of the 2DEG edge. As a function of time, $`x_0\left(t\right)`$ oscillates with the SAW frequency $`f`$. However, the amplitude of the oscillations is small compared to $`\lambda `$, therefore the velocity of the edge is negligible compared to $`v_s`$. On the other hand, the SAW-induced potential minimum (the dot) moves away from the edge with precisely the sound velocity $`v_s`$ (see Fig.2). Therefore, the width of the potential barrier separating the 2DEG and the dot increases linearly with time. This in turn means that the tunneling coupling between the 2DEG and the level localized in the dot rapidly decreases, approximately exponentially. The characteristic time can be estimated as $`\tau \sim l_0/v_s`$, where $`l_0`$ is the distance over which the localized wave function extends under the barrier. Since, evidently, $`l_0\ll \lambda `$, the relation $$f\tau \ll 1$$ (2) holds. Other parameters, such as the height of the barrier $`W`$, the energy of the localized level $`ϵ_0`$, etc., change during a time which is of the order of the SAW period $`1/f`$. Thus, the time dependence of all these parameters during the time $`\tau `$ can be neglected. Due to the rapid decrease of the tunneling coupling, thermal equilibrium in the system cannot be maintained, causing fluctuations of the occupation number of the dot. This results in nonadiabatic corrections to the quantized values of the acoustoelectric current. Whether these corrections have a significant impact on the accuracy of the quantization depends on the value of the characteristic energy scale $`\hbar /\tau `$, as compared to other energy scales in the problem.
To obtain an order-of-magnitude estimate, we expand the potential near the minimum $`x_1`$ (see Fig.2), $`V(x)\approx Aq^2(x-x_1)^2/2`$, $`q=2\pi /\lambda `$. The amplitude $`A`$ is related to the single-particle level spacing in the dot $`\mathrm{\Delta }`$ via $`Aq^2=m^{}(\mathrm{\Delta }/\hbar )^2`$ ($`m^{}`$ is the effective electron mass), to the ’size’ $`r`$ of the localized wave function via $`Aq^2r^2\sim \mathrm{\Delta }`$, and to the charging energy $`E_c`$ via $`E_c\sim e^2/ϵr`$. $`l_0`$ is estimated from the WKB relation $`(\hbar /l_0)^2\sim 2m^{}W`$. Assuming that $`W\sim A`$, this gives four equations for five unknown quantities. An additional relation follows from the experimental results. It was demonstrated that the quantization disappears above the activation temperature $`T^{}\approx 10\mathrm{K}`$, which we identify with the charging energy. Using typical parameters for the experiments, we find $`\tau \sim 10\mathrm{ps}`$. All the parameters manage to pass the minimal consistency requirements $`r\ll \lambda `$, $`\mathrm{\Delta }\lesssim A,E_c`$, $`f\tau \ll 1`$. Since the corresponding energy scale is $`\hbar /\tau \sim 0.1\mathrm{meV}`$, the nonadiabatic effects may have a significant influence on (1) at low temperature. This can be understood from the following model Hamiltonian:

$$H=H_{eg}+H_{dot}+H_T.$$ (3)

Here

$$H_{eg}=\sum _{k\sigma }ϵ_kc_{k\sigma }^{\dagger }c_{k\sigma }$$ (4)

describes the electron gas in the lead,

$$H_{dot}=\sum _{n\sigma }E_nd_{n\sigma }^{\dagger }d_{n\sigma }+E_c\left(N-𝒩_g\right)^2$$ (5)

is the Hamiltonian of the dot ($`N=\sum _{n\sigma }d_{n\sigma }^{\dagger }d_{n\sigma }`$ is the total number of electrons in the dot), and

$$H_T=V(t)\sum _{kn\sigma }c_{k\sigma }^{\dagger }d_{n\sigma }+\mathrm{H}.\mathrm{c}.,$$ (6)

describes the tunneling coupling with a time-dependent tunneling amplitude. We have included only one lead in the model, since the tunneling coupling to the second lead is negligible. The electron gas in the lead is assumed to be in thermodynamic equilibrium at all times, by virtue of the inequality $`v_F\gg v_s`$. As discussed above, we neglect the time dependence of various parameters in (5), owing to the separation of the time scales, $`f\tau \ll 1`$. The last term in (5) describes the intra-dot Coulomb interaction. The parameter $`𝒩_g`$ describes the effect of the gate voltage. Since the width of the plateaus is approximately independent of the plateau’s number, it is a good approximation to assume that $`𝒩_g`$ is a linear function of $`V_g`$. The most important ingredient of (6) is the time-dependent tunneling amplitude. We take it in the form

$$V(t)=V_0e^{-t/\tau }$$ (7)

(other possible choices will be discussed below). The time-independent version of (3-6) is commonly used in the theory of the Coulomb blockade. Similar models have also been employed to study the transfer of charge during atom-surface scattering, and nonadiabatic effects in charge pumping. Given that the system described by (3-6) is in thermodynamic equilibrium at $`t=-\mathrm{\infty }`$, our task is to calculate the occupation of the dot at $`t\to +\mathrm{\infty }`$, $`N_0=N_{t=+\mathrm{\infty }}`$. The acoustoelectric current is related to $`N_0`$ through (1). Away from the Coulomb blockade degeneracy points (half-integer $`𝒩_g`$), when the inequality

$$2E_c\left|𝒩_g-n_0-1/2\right|\gg \mathrm{max}\{T,1/\tau \}$$ (8)

is satisfied, the time dependence is too slow to cause transitions between different charge states of the dot. In (8), $`n_0`$ is the integer part of $`𝒩_g`$; units where $`\hbar =k_B=1`$ are used throughout the rest of the paper.
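To make the order-of-magnitude chain above concrete, the following sketch solves the five relations numerically for assumed GaAs parameters (effective mass, dielectric constant, SAW velocity, wavelength and frequency are illustrative inputs of ours, not values quoted in the text); it reproduces the quoted scales $`\tau \sim 10\mathrm{ps}`$ and $`\hbar /\tau \sim 0.1\mathrm{meV}`$ to within small numerical factors.

```python
import numpy as np

# Order-of-magnitude sketch for (A, Delta, r, E_c, l0) from the relations
# in the text, closed by identifying E_c with T* ~ 10 K. All material and
# SAW parameters below are assumed GaAs values, not data from the paper.
hbar, e, kB, eps0 = 1.0546e-34, 1.602e-19, 1.381e-23, 8.854e-12
m_eff = 0.067 * 9.109e-31            # GaAs effective mass, kg
eps_r = 12.9                         # GaAs dielectric constant
v_s, lam, f = 2.9e3, 1.0e-6, 2.9e9   # SAW speed (m/s), wavelength (m), frequency (Hz)
q = 2 * np.pi / lam

E_c = kB * 10.0                              # E_c ~ k_B T*, T* ~ 10 K
r = e**2 / (4*np.pi*eps0*eps_r*E_c)          # from E_c ~ e^2/(eps*r)
Delta = hbar**2 / (m_eff * r**2)             # from A q^2 r^2 ~ Delta and A q^2 = m*(Delta/hbar)^2
A = m_eff * (Delta/hbar)**2 / q**2
l0 = hbar / np.sqrt(2 * m_eff * A)           # WKB with W ~ A
tau = l0 / v_s

print(f"r = {r*1e9:.0f} nm (<< lambda = {lam*1e9:.0f} nm)")
print(f"Delta = {Delta/e*1e3:.2f} meV, A = {A/e*1e3:.2f} meV, E_c = {E_c/e*1e3:.2f} meV")
print(f"tau = {tau*1e12:.0f} ps, f*tau = {f*tau:.2f}, hbar/tau = {hbar/tau/e*1e3:.3f} meV")
```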
In this respect the evolution of the system is almost adiabatic, and the occupation of the dot $`N_0`$ is expected to coincide with the equilibrium occupation corresponding to the Hamiltonian (5), with $`T`$ replaced by the effective temperature $`T_{eff}\sim \mathrm{max}\{T,1/\tau \}`$. However, in the vicinity of the transition region between the plateaus, when (8) breaks down, the time dependence mixes states with $`N=n_0`$ and $`N=n_0+1`$. This means that the width of the transition region is given by $`T_{eff}`$, and in the zero-temperature limit it saturates to $`1/\tau `$. In this regime the nearly adiabatic picture fails. The width of the charge states due to tunneling, $`\mathrm{\Gamma }(t)`$, decreases with time. When $`\mathrm{\Gamma }(t)\lesssim 1/\tau `$, the system can no longer follow the changing tunneling coupling. Effectively, it can be described within the sudden approximation, where $`\mathrm{\Gamma }(t)`$ is replaced by the step function, $`\mathrm{\Gamma }(t)\to \mathrm{\Gamma }_s\theta (-t)`$, $`\mathrm{\Gamma }_s\sim 1/\tau `$. The occupation of the dot at $`t\to +\mathrm{\infty }`$ is therefore determined by (3-6) with a time-independent tunneling amplitude, corresponding to the width $`\mathrm{\Gamma }_s`$. Due to the interaction term in (5), the model (3-6) is still difficult to solve analytically. To simplify the discussion, we limit our attention to the interval of $`𝒩_g`$ which includes only one transition region between the plateaus: $`n_0<𝒩_g<n_0+1`$. Furthermore, we neglect the spin degeneracy, and consider the limit of large single-particle level spacing in the dot, $`\mathrm{\Delta }\gg 1/\tau `$. With these restrictions, at low temperature $`T\ll \mathrm{\Delta }`$, only the lowest energy configurations, corresponding to $`N=n_0`$ and $`N=n_0+1`$, are important. Since these states are non-degenerate, one can introduce the fermion operator $`d=|n_0\rangle \langle n_0+1|`$ to describe transitions between these states, and replace (5) by

$$H_{dot}=E_0d^{\dagger }d,E_0=2E_c\left(1/2+n_0-𝒩_g\right).$$ (9)

The advantage of the model (3),(6),(9) is that it is exactly solvable for arbitrary $`V(t)`$. Indeed, the occupation of the dot $`n(t)=\langle d^{\dagger }(t)d(t)\rangle =\langle N(t)\rangle -n_0`$ satisfies the equation of motion

$$\frac{d}{dt}n(t)=-\mathrm{\Gamma }(t)n(t)+\int d\epsilon n_F(\epsilon )A(\epsilon ,t),$$ (10)

$$A(\epsilon ,t)=\frac{1}{\pi }\mathrm{Im}\int dt^{\prime }\sqrt{\mathrm{\Gamma }(t)\mathrm{\Gamma }(t^{\prime })}e^{i\epsilon (t-t^{\prime })}G^R(t,t^{\prime }).$$ (11)

Here $`\mathrm{\Gamma }(t)=2\pi \nu V^2(t)`$ is the width of the charge state, $`\nu `$ is the density of states of the conduction electrons at the Fermi level, $`n_F(\epsilon )`$ is the Fermi function, and $`G^R(t,t^{\prime })=-i\theta (t-t^{\prime })\langle \{d(t),d^{\dagger }(t^{\prime })\}\rangle `$ is the exact retarded Green function of the dot: $`G^R(t,t^{\prime })=-i\theta (t-t^{\prime })e^{-i\int _{t^{\prime }}^{t}dt_1\left[E_0-i\mathrm{\Gamma }(t_1)/2\right]}.`$ The solution for $`N_0`$ follows by simple integration of (10-11). With $`V(t)`$ given by Eq. (7), the result is

$$N_0-n_0=\frac{\tau _0}{2\pi }\int _{-\mathrm{\infty }}^{+\mathrm{\infty }}d\epsilon \frac{n_F(\epsilon )}{\mathrm{cosh}\left[(\epsilon -E_0)\tau _0\right]},$$ (12)

where $`\tau _0=\pi \tau /2`$. At zero temperature, (12) reduces to

$$N_0-n_0=\frac{2}{\pi }\mathrm{tan}^{-1}\left[e^{-E_0\tau _0}\right].$$ (13)

At finite temperature, (12) is described very well by the Fermi function

$$N_0-n_0\approx \left(e^{E_0/T_{eff}}+1\right)^{-1},$$ (14)

with an effective temperature $`T_{eff}=\sqrt{(c/\tau _0)^2+T^2}`$. We found that $`c=0.88`$ gives the best numerical fit.
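As a quick consistency check (ours, using only the two closed-form expressions above), the sketch below compares the zero-temperature result (13) with the Fermi-function fit (14) evaluated at $`T=0`$, where $`T_{eff}=c/\tau _0`$ with the quoted $`c=0.88`$.

```python
import numpy as np

# Compare Eq. (13) with the Fermi-function fit Eq. (14) at T = 0,
# where T_eff -> c/tau0 (units with hbar = k_B = 1).
tau0 = 1.0                 # pi*tau/2; only the product E0*tau0 matters
c = 0.88                   # best-fit constant quoted in the text

E0 = np.linspace(-4.0, 4.0, 9)
eq13 = (2 / np.pi) * np.arctan(np.exp(-E0 * tau0))
eq14 = 1.0 / (np.exp(E0 * tau0 / c) + 1.0)

for e0, a, b in zip(E0, eq13, eq14):
    print(f"E0*tau0 = {e0:+.1f}:  Eq.(13) = {a:.3f}   Eq.(14) = {b:.3f}")
```

The two expressions track each other closely across the transition region, illustrating how the nonadiabatic width $`c/\tau _0`$ plays the role of an effective temperature even at $`T=0`$.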
For $`𝒩_g\to n_0`$, Eqs. (1) and (14) give the following expression for the slope of the plateau:

$$S=\frac{1}{I_0}\left(\frac{dI}{d𝒩_g}\right)_{𝒩_g\to n_0}\approx \left(2E_c/T_{eff}\right)e^{-E_c/T_{eff}}.$$ (15)

Here $`I_0=n_0ef`$ corresponds to perfect quantization. Strictly speaking, to obtain the correct value of the slope precisely in the middle of the plateau, the two-state approximation is not sufficient: the state with $`N=n_0-1`$ makes exactly the same contribution as that with $`N=n_0+1`$. This complication, however, should not affect significantly the validity of (15): the exact result differs from (15) only by a factor of order unity. Due to the exponential factor in (15), $`S`$ depends very strongly on the ratio $`E_c/T_{eff}`$. For example, for $`E_c/T_{eff}=10`$, $`S\sim 10^{-3}`$, while for $`E_c/T_{eff}=20`$, $`S\sim 10^{-7}`$. According to the discussion above, in the transition region the result can be obtained with the sudden approximation. Thus, we have $`N_0-n_0\approx \int d\epsilon n_F(\epsilon )\frac{\mathrm{\Gamma }_s/2\pi }{(\mathrm{\Gamma }_s/2)^2+(\epsilon -E_0)^2},`$ or, for $`T=0`$, $`N_0-n_0\approx 1/2-(1/\pi )\mathrm{tan}^{-1}(2E_0/\mathrm{\Gamma }_s)`$. This expression indeed coincides with (13) in the limit $`E_0\to 0`$, if $`\mathrm{\Gamma }_s/2=\tau _0^{-1}`$. Note that the width of the transition region is determined by essentially the same $`T_{eff}`$ that enters (15). The model (3-6) introduced above allows study of the nonadiabatic effects at short time scales. As the system evolves with time, the SAW-induced potential minimum moves uphill (see Fig.2), and may eventually cross the Fermi level. Due to the residual tunneling coupling in this regime, the leakage from the dot will introduce additional corrections to (1). These corrections, however, do not affect strongly the slope of the plateaus (15), and can be taken into account by multiplying (1) by the leakage factor $`P_l\le 1`$: $`I=P_lN_0ef`$, where $`P_l`$ is expected to depend on system parameters, such as the gate voltage and the SAW power. Thus, the exact value of the quantized current $`I_0=P_ln_0ef`$ does not necessarily coincide with the transfer of a precisely integer number of electrons per period, $`n_0`$, and the plateaus can move in parameter space. In conclusion, we have shown that at low temperature, long-range Coulomb interactions may have a dramatic effect on the accuracy of the quantization of single-electron transport driven by surface acoustic waves through a narrow constriction formed in a two-dimensional electron gas. The effect of screening on the SAW-induced potential near the edge of the 2DEG can be described by a single parameter - the time $`\tau `$ of the switching-off of the tunneling coupling between the 2DEG and the moving quantum dot. As a result, both the slope of the plateaus and the width of the transition regions between the plateaus saturate at low temperature to values determined by the characteristic energy scale for nonadiabatic corrections, $`\hbar /\tau `$. We benefited from discussions with Henrik Bruus, Yuri Galperin, Antti-Pekka Jauho, Anders Kristensen, and Julian Shilton. This work was supported by the EC under the SETamp project through the contract SMT4-CT96-2049 (KF), through the contract SMT4-CT98-9030 (MP), by the NSF under Grants No. PHY94-07194 and DMR 9705406 (QN), and by the Welch Foundation (QN). Two of us (KF and QN) acknowledge the hospitality of the ITP at UC Santa Barbara, where part of this work was performed.
# The Amount of Interstellar Carbon Locked in Solid Hydrogenated Amorphous Carbon

## 1. Introduction

It is now well established on the basis of astronomical and laboratory-based infrared (IR) spectroscopy that some form of solid, hydrogenated carbonaceous material is present in the diffuse interstellar medium (ISM). The absorption feature at $`2950cm^{-1}`$ characteristic of the C-H stretching mode in hydrogenated carbonaceous materials is clearly and consistently revealed by observational studies of dusty lines of sight, from early near-IR spectrophotometric observations of the Galactic center by Soifer, Russell, & Merrill (1976) to the most recent observations of Cygnus OB2 #12, the prototypical diffuse ISM line of sight, by Whittet et al. (1997) using the Infrared Space Observatory (ISO). Other relevant observational works include: the observation of Cygnus OB2 #12 by Adamson, Whittet, & Duley (1990), the near-IR spectral surveys of Galactic center sources and other diffuse ISM probes by Sandford et al. (1991) and Pendleton et al. (1994), and the near-IR spectroscopy of the proto-planetary nebula CRL 618 by Chiar et al. (1998). Even without considering observational and laboratory work outside the $`3500-2500cm^{-1}`$ spectral region, the spectral profile of the $`2950cm^{-1}`$ C-H stretching feature itself is not consistent with any simple hydrocarbon molecule – including polycyclic aromatic hydrocarbons (PAHs) – or mixtures thereof (e.g. Bellamy 1975). It is the signature of some form of cold (but not frozen) hydrogenated carbonaceous material which we will call HAC. HACs comprise a wide range of materials composed only of carbon and hydrogen, but which vary in $`sp^1:sp^2:sp^3`$ hybridization ratios and H/C ratio. There is substantial empirical and experimental evidence to indicate that for C-H networks the carbon hybridization ratios, which determine the structural and optical properties of the material, and the H/C ratio are related; they do not vary independently. We point to the reviews of HAC by Robertson (1986); the “random covalent network” model by Angus & Jansen (1988); the “defected graphite” model by Tamor & Wu (1990); and the experimental characterizations of various HACs by Dischler, Bubenzer & Koidl (1983); McKenzie et al. (1983); Tsai & Bogy (1987); Gonzalez-Hernandez, Chao & Pawlik (1988); and Tamor, Wu et al. (1989). It is clear from these works that for HAC in general, regardless of production method: 1) the $`sp^3/sp^2`$ ratio, the optical gap, and the real part of the index of refraction all increase with increasing H/C ratio; 2) the density and the imaginary part of the index of refraction both decrease with increasing H/C ratio; 3) HACs seldom contain a significant concentration of $`sp^1`$ hybridized carbon; 4) the maximum value of H/C is near 1.5; and 5) HACs photoluminesce with an efficiency and energy of peak emission that depend on the H/C ratio. In total, these works demonstrate that amorphous, carbonaceous solids in general have predictable, well-determined optical and physical properties, despite having fundamental parameters like the H/C ratio and the $`sp^3/sp^2`$ ratio which are wide ranging. Thus it is possible and indeed practical to investigate and draw conclusions about the interstellar HACs from HAC analogs which are produced by methods and under conditions far different from the interstellar case. So, while the astrophysical production method of HAC may be the photolysis of organic ices as proposed by Bernstein et al. (1995) or Greenberg et al.
(1995) (among others), or may be by direct deposition onto silicate cores as suggested by Jones, Duley, & Williams (1990), results of experiments on HAC materials produced by plasma-enhanced chemical-vapor deposition (PECVD), arc-evaporation, or laser ablation, for example, are directly applicable to the astrophysical problem under consideration. It is imperative to determine more precisely the nature of interstellar HAC in the light of recent reconsiderations of the abundance of solid- and gas-phase carbon in the diffuse ISM. It now seems likely that the cosmic abundance of carbon (and other heavy elements) is perhaps as low as $`0.5-0.75`$ of the solar abundance, which had long been thought to be representative of abundances in the ISM. Snow & Witt (1995) review the existing literature on the carbon abundance in the sun, in a number of recently formed B stars, and in F and G stars similar to the sun, and conclude that the abundance of carbon relative to hydrogen in the ISM is $`225\pm 50ppM`$. In addition, the amount of carbon in the gas phase has recently been redetermined more precisely by Cardelli et al. (1996) through observations with the Goddard High-Resolution Spectrograph of the weak C II\] $`\lambda 2325\mathrm{\AA }`$ absorption line along lines of sight with greatly different ratios of atomic to molecular hydrogen. They derive a gas-phase carbon abundance relative to hydrogen of $`140\pm 50ppM`$. Thus it appears that the amount of carbon available to form solid material in the ISM – a parameter that is crucial to all dust models of interstellar extinction (e.g., Mathis 1996; Li & Greenberg 1997) – is likely to be only about $`80ppM`$ of hydrogen, to within an uncertainty of about a factor of two. Thus, the first goal of this current work is to present a rigorous determination of the portion of the solid-phase carbon in the diffuse ISM that must be locked in the form of HAC to give rise to the observed absorption in the interstellar $`2950cm^{-1}`$ C-H stretching feature. Further, it is important to understand the role interstellar HAC grains play in the extinction of starlight, given the presence of HAC in the general diffuse ISM. In order to proceed, the complex index of refraction of HACs as a function of energy in the ultraviolet (UV) and visible regions of the electromagnetic spectrum must be known, in addition to the amount of this material that is present. Assumptions about the structural nature of the grains (i.e., core/mantle or uniform composition) and the size/shape distributions for the grains must also be made, but laboratory investigation is of little help here. Tabulated optical constants of materials presented as analogs to the diffuse ISM carbonaceous material, ranging from “organic refractory” materials to HACs and other forms of amorphous carbon, are plentiful in the literature and are used as needed in models of interstellar extinction. For example, see Smith (1984); Duley (1984); Bussoletti et al. (1987); Alterovitz et al. (1991); Jenniskens (1993); Colangeli (1995); and Schnaiter et al. (1998, 1999). The list of “optical constants” works just cited, it is important to note, does not completely overlap with the list of “IR spectroscopy papers” cited in the previous paragraphs. So, while there has been a great deal of work in the areas of IR and UV/visible spectroscopies of HAC materials, the works are in large measure disjoint. The very recent works of Jaeger et al. (1998) and Schnaiter et al.
(1998, 1999) deserve special mention as they begin to consider the problems of grain composition, structure, and optical properties collectively. A second goal of this current work, then, is to present the complex index of refraction as a function of energy in the UV/visible for a HAC material with an IR absorption spectrum that is quantitatively consistent with the latest IR observations of dust in the diffuse ISM, particularly the ISO spectrum of Cyg. OB2 #12 by Whittet et al. (1997), a data set that was until now not available in the literature. In addition to absorbing and scattering starlight, it has been suggested that HAC grains are the carrier of the broad, red emission band known as extended red emission (ERE) (e.g., Duley 1985; Witt & Schild 1988; Witt 1994). ERE has been detected in a variety of dusty astrophysical environments, including: reflection nebulae, planetary nebulae, dark nebulae, Galactic cirrus clouds, an H II region, the halo of M82, and a nova. Most recently, Gordon, Witt, & Friedmann (1998) have shown that ERE is also a phenomenon associated with general diffuse interstellar medium dust (see this work for references to the numerous original ERE detections). They also conclude that the quantum efficiency of the ERE process is in excess of 10%. Witt, Gordon, & Furton (1998) argue that this extremely high efficiency, among other things, forces a reconsideration of the notion that ERE is due to interstellar HAC grains. This same argument has also been made by Ledoux et al. (1998). At the very least, this finding demands a careful measurement of the photoluminescence (PL) efficiency of HAC materials that are otherwise viable analogs to the interstellar carbonaceous material. The experimental aspects of this work, concerning the PECVD of HAC films and their subsequent characterization by UV/visible/IR absorption spectroscopy, optical PL spectroscopy, and UV/visible spectroscopic ellipsometry, are described in §2. A rigorous determination of the amount of carbon locked in interstellar HAC grains is presented in §3.1. The characteristics that any viable interstellar HAC analog must possess, in light of the analysis presented in §3.1, are discussed in §3.2. The optical role HAC grains are likely to play in the extinction of UV/visible starlight, and in the dust luminescence process known as ERE in the diffuse ISM, is considered in §3.3. All of this is followed by a brief conclusion in §4.

## 2. Experimental Considerations, Analysis and Results

In our laboratories at the University of Toledo and Rhode Island College, we have been involved in depositing and characterizing thin films of HAC and related materials for almost a decade. We use both RF- and DC-based PECVD systems to produce thin, solid films from mixtures of gas-phase precursors. We rely on UV/visible/IR absorption spectroscopies and visible and near-IR PL spectroscopy, among other techniques such as spectroscopic ellipsometry and electron microscopy, for analysis and characterization of these thin-film materials. In this work, we present a thorough characterization of a single HAC sample, from among the many samples we have produced and analyzed, as a viable analog to the interstellar hydrogenated carbonaceous material. The deposition method and conditions, along with a summary of the derived optical and physical properties, are presented in §2.1.
The measurement and analysis that were completed to derive the physical and optical properties of this sample, including its electronic band-gap, density, H/C ratio, $`sp^3/sp^2`$ ratio, PL efficiency, complex index of refraction, and IR ($`4000-1000cm^{-1}`$) mass-absorption coefficient, are presented in §§2.2-2.5.

### 2.1. HAC Sample Preparation and Summary of Derived Physical and Optical Properties

This interstellar HAC analog was produced using a DC-based PECVD system of a somewhat unusual design. Although DC PECVD is electronically simpler than RF PECVD, it does not lend itself as well to depositing materials (either conducting or insulating) onto insulating substrates, which are most commonly used (i.e., fused silica and salt). The system we have designed minimizes this problem. It consists of a vacuum chamber pumped by a roughing pump and a diffusion pump, with a four-channel precursor gas mixing manifold. Electronically, the chamber itself is grounded and the sample is deposited onto the substrate, placed on a support about $`5mm`$ beneath a $`3.0cm`$ diameter nickel-chromium screen which serves as the cathode. All of this is surrounded by a chimney-like glass tube about $`7cm`$ in diameter, open at the bottom and very near the top of the chamber, in order to confine the plasma and to minimize possible contamination of the deposited samples by contaminants on the metal walls of the vacuum chamber. During the deposition process, the conducting screen is pulled to a negative DC potential, variable between $`0`$ and $`-2000V`$, by a high-voltage power supply and a current-limiting resistor. The screen maintains the plasma, which causes deposition onto the substrate just below as ionized molecular fragments of the precursor gas mixture “overshoot” it. The screen itself is coated during the deposition process as well. For this reason, a clean screen is used for each deposition and, as a drawback, the deposition process is self-limiting because the current through the plasma into the screen decreases as the screen becomes coated by the insulating material, eventually becoming too low to support the plasma. HAC films can be deposited with thicknesses up to about $`200nm`$ in a single deposition lasting about $`30min`$; thicker films can be deposited by interrupting the process to replace the cathode screen. In general, samples deposited under identical conditions have identical physical and optical properties. The materials that result are thin, solid, homogeneous films; they are not porous or particulate in nature. The deposition conditions for the HAC sample presented in this work are as follows: the precursor gas was 99.99% pure methane, at a pressure of $`200mTorr`$ with a flow-rate of $`5.0sccm`$; the electric potential was $`-1000V`$ DC, which established a current of $`3.0mA`$ initially. The total deposition time was $`60min`$, consisting of two sequential $`30min`$ depositions between which the cathode screen was changed. The several films produced in this way turned out to be about $`300nm`$ thick. A sodium chloride substrate was used for subsequent IR spectroscopic analysis of the sample; a fused silica substrate was used for subsequent UV/visible analyses. There was no indication that the HAC material deposited differently onto the salt substrate than onto the fused silica substrate.
The UV/visible absorption spectrum of this HAC sample was recorded over the wavelength range $`900-190nm`$ with a Perkin-Elmer 552 dual beam spectrophotometer; the absorbance spectrum and a corresponding “Tauc” plot, as described in §2.2, are shown in Figure 1. The optical PL spectrum covering the $`500-950nm`$ wavelength range was recorded with an Ocean Optics S2000 fiber-fed CCD spectrometer using the $`488nm`$ line of an argon-ion laser as the source of excitation. The PL spectrum shown in Figure 2 was reduced to quantum efficiency as described in §2.3. The sample was analyzed by spectroscopic ellipsometry over the wavelength range $`300-1000nm`$ with a J.A. Woollam Company SASE-1.1 ellipsometer. This analysis, which is discussed in much more detail in §2.4, was completed with the help of Dr. Margaret Tuma of the NASA Lewis Research Center; we are grateful for her assistance. The indices of refraction ($`n`$ and $`k`$) derived from this analysis are shown in Figure 3. Finally, the IR absorption of this sample was recorded over the frequency range $`4000-1000cm^{-1}`$ with a Nicolet 510P FT-IR spectrometer (the low-frequency limit is imposed by the salt substrate); this spectrum, reduced to mass-extinction coefficient as described in §2.5, is shown in Figure 4. A summary of the physical and optical properties of this HAC sample, derived from the measurements indicated in the previous paragraph, is presented in Table 1. The details of each analysis are presented in the following subsections.

### 2.2. HAC Sample Band-gap and Physical Properties

A number of laboratory studies have demonstrated the electronic band-gap of HAC to be a useful parameter related to many of its optical and physical properties (e.g., Robertson 1986; Angus & Jansen 1988; Tamor, Haire et al. 1989; Tamor & Wu 1990; and Witt, Ryutov, & Furton 1997). The electronic band-gap of any amorphous semiconductor is conveniently characterized via the method outlined by Tauc (1973). This method prescribes that the band-gap is equal to the $`x`$-intercept of a plot of $`(\alpha E)^{\frac{1}{2}}`$ versus $`E`$ ($`\alpha `$ is the absorption coefficient, $`E`$ is energy), provided that the conduction and valence band edges are parabolic in energy, a condition that has proven to be more or less satisfied for most HACs. Figure 1 shows a plot of the UV/visible optical depth spectrum for the HAC sample, along with a plot of $`(\tau E)^{\frac{1}{2}}`$ derived from the optical depth spectrum according to $`\tau =\alpha t=-\mathrm{ln}(\frac{I}{I_o})`$. The $`x`$-intercept of the $`(\tau E)^{\frac{1}{2}}`$ plot is $`E_g=1.9eV`$. Note that plots of $`(\tau E)^{\frac{1}{2}}`$ and $`(\alpha E)^{\frac{1}{2}}`$ versus $`E`$ differ only in slope, not in $`x`$-intercept.<sup>1</sup><sup>1</sup>1Note also that $`\alpha t=-\mathrm{ln}(\frac{I}{I_o})`$ is only an approximate relation; $`\alpha `$ really needs to be determined more carefully. The use of this approximation in the “Tauc plot” is justified, however, given the assumptions made in this formalism. One physical property that is well parameterized by $`E_g`$ is mass density. For example, Figure 3 in Tamor, Wu, et al. (1989) reveals an approximately inverse relationship between mass density and band-gap. The density of HAC with $`E_g=1.9eV`$, the case here, is about $`1.5g/cm^3`$.
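A minimal sketch of the Tauc construction just described, using synthetic data rather than the measured spectrum: an optical-depth curve is generated so that $`(\tau E)^{\frac{1}{2}}`$ is linear above an assumed gap, and $`E_g`$ is recovered from the $`x`$-intercept of a straight-line fit.

```python
import numpy as np

# Tauc-plot sketch on synthetic data: (tau*E)^(1/2) ~ B*(E - Eg) above
# the gap; the band-gap is the x-intercept of a linear fit.
rng = np.random.default_rng(0)
Eg_true, B = 1.9, 2.0                     # assumed gap (eV) and slope
E = np.linspace(1.0, 4.0, 120)            # photon energy, eV
y = np.clip(B * (E - Eg_true), 0.0, None) + 0.02 * rng.normal(size=E.size)

mask = E > Eg_true + 0.3                  # fit only the clearly linear region
slope, intercept = np.polyfit(E[mask], y[mask], 1)
print(f"recovered Eg = {-intercept/slope:.2f} eV (input {Eg_true} eV)")
```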
We also verified this density by direct measurement: the mass of HAC deposited onto the substrate was determined with a microbalance (mass after deposition minus mass before deposition), and then, with the sample thickness determined by the ellipsometric analysis described below, the HAC sample density of $`1.5g/cm^3`$ was verified to within the precision of the measurements. The uncertainty in this measurement is approximately $`0.1g/cm^3`$. A review of the literature shows that the $`sp^3/sp^2`$ carbon hybridization ratio, the H/C atom ratio, and the band-gap are all well correlated for HACs in general. Table I in Kaplan, Jansen, & Machonkin (1985); Figure 7 in Angus & Hayman (1988); and Figure 4 in Tamor & Wu (1990) are representative of this correlation. From these figures we determine that $`sp^3/sp^2=0.5\pm 0.1`$ and H/C$`=0.5\pm 0.1`$ for this HAC sample with $`E_g=1.9eV`$. The estimated errors for these quantities stem from the uncertainty in $`E_g`$, and from the precision with which the data in the tables and figures cited above can be interpolated. We also determined the H/C atom ratio directly from an analysis of the C-H stretch absorption feature via the method prescribed by Jacob & Unger (1996); the details of this derivation are described in §2.5. We find, consistent with the results above, that H/C$`\approx 0.5`$.

### 2.3. HAC Sample Photoluminescence

It is well known that HACs luminesce upon exposure to blue and near-UV radiation, with an overall efficiency and energy of peak emission that depend strongly and directly on the band-gap of the material (e.g., Watanabe, Hasegawa, & Kurata 1982; Silva, et al. 1996; Witt, Ryutov, & Furton 1997). The optical PL spectrum for this HAC sample was recorded over the wavelength range $`500-950nm`$ with an Ocean Optics S2000 fiber-fed CCD spectrometer. The excitation source was the $`488nm`$ line of an argon-ion laser. This fiber/spectrometer combination has been calibrated to absolute units via a NIST-traceable standard lamp. Furthermore, the spectrum, which is shown in Figure 2, has been reduced to approximately represent the PL photon quantum efficiency (QE) by dividing the observed PL photon count by the number of absorbed excitation laser photons, as computed from the measured laser intensity and the measured sample absorbance at the laser wavelength. This method is approximate because the radiative transport of the exciting and emitted photons, including scattering and self-absorption, is not accounted for. The error introduced by this simplification is smaller than the other errors associated with the PL observation and absolute calibration. Overall, this HAC sample has an integrated PL QE of $`0.05\pm 0.02`$, with a wavelength of peak emission around $`700nm`$.

### 2.4. HAC Sample Index of Refraction

The complex index of refraction ($`n-ik`$) of this HAC sample and its thickness were determined over the wavelength range $`300-1000nm`$ by spectroscopic ellipsometry. In this technique, the relative amplitudes and phase-shifts between parallel and perpendicularly polarized light reflected from the surface of a thin-film sample are measured as functions of wavelength; these functions are denoted $`\mathrm{\Psi }(\lambda )`$ and $`\mathrm{\Delta }(\lambda )`$, respectively. Rather sophisticated modeling is required, however, to reduce the $`\mathrm{\Psi }(\lambda )`$ and $`\mathrm{\Delta }(\lambda )`$ data to the sample thickness and its optical constants $`n(\lambda )`$ and $`k(\lambda )`$ (Aspnes 1985).
Despite this difficulty, spectroscopic ellipsometry is arguably the best technique to determine the index of refraction for thin-film samples, and one which has not been applied to HAC materials in great measure. The ellipsometric analysis of this HAC sample (the measurement of $`\mathrm{\Psi }(\lambda )`$ and $`\mathrm{\Delta }(\lambda )`$) was performed with a J.A. Woollam Company SASE-1.1 spectroscopic ellipsometer by Dr. Margaret Tuma at the NASA Lewis Research Center. The reduction of $`\mathrm{\Psi }(\lambda )`$ and $`\mathrm{\Delta }(\lambda )`$ to $`n(\lambda )`$ and $`k(\lambda )`$ (and sample thickness) was completed by one of the authors (JWL); the method of analysis is described in some detail below since the results are heavily model dependent. Equations describing the reflection and transmission of light incident at some angle on a thin film of an optically isotropic and homogeneous material with complex index of refraction $`n-ik`$ deposited on a substrate are derived from Maxwell’s equations. The amplitude and phase of the reflected beam, which are analyzed by the ellipsometer, are different for light polarized parallel to the plane of incidence than for light polarized perpendicular to the plane of incidence. The ratio $`\frac{R_{\parallel }}{R_{\perp }}`$ defines two functions of wavelength – $`\mathrm{\Psi }(\lambda )`$ and $`\mathrm{\Delta }(\lambda )`$ – according to the relation

$$\frac{R_{\parallel }(\lambda )}{R_{\perp }(\lambda )}=e^{i\mathrm{\Delta }(\lambda )}\mathrm{tan}\mathrm{\Psi }(\lambda ).$$ (1)

Since $`R_{\parallel }(\lambda )`$ and $`R_{\perp }(\lambda )`$ are complicated functions of $`n`$ and $`k`$, so are $`\mathrm{\Psi }(\lambda )`$ and $`\mathrm{\Delta }(\lambda )`$. The equations for $`\mathrm{\Psi }(\lambda )`$ and $`\mathrm{\Delta }(\lambda )`$ in terms of $`n`$ and $`k`$ cannot be algebraically inverted to give $`n`$ and $`k`$ in terms of $`\mathrm{\Psi }(\lambda )`$ and $`\mathrm{\Delta }(\lambda )`$ except under the simplest of circumstances. Thus, the determination of $`n(\lambda )`$ and $`k(\lambda )`$ (and the sample thickness) from the $`\mathrm{\Psi }(\lambda )`$ and $`\mathrm{\Delta }(\lambda )`$ measured by the ellipsometer is a problem which requires a numerical solution. We developed a non-linear least-squares fitting technique using MATHEMATICA to determine $`n(\lambda )`$, $`k(\lambda )`$ and the film thickness for this HAC sample. In our method, $`n(\lambda )`$ and $`k(\lambda )`$ were assumed to be parameterized functions of wavelength, and the values of the parameters were determined so as to minimize the difference between the $`\mathrm{\Psi }(\lambda )`$ and $`\mathrm{\Delta }(\lambda )`$ measured at a given angle of incidence and those computed using the parameterized $`n(\lambda )`$ and $`k(\lambda )`$ functions. The thickness was also a parameter adjusted during the fitting process. The general forms of the parameterized functions describing $`n(\lambda )`$ and $`k(\lambda )`$ were determined by trial-and-error fitting to $`n`$ and $`k`$ data presented by Smith (1984) for a number of HAC samples ranging from highly hydrogenated and transparent to graphitic and more opaque.
It was found that the following functional forms for $`n(\lambda )`$ and $`k(\lambda )`$ were the simplest that could fit, given the choice of appropriate parameters, all of the data given by Smith (1984):

$$n(\lambda )=n_o+n_1\lambda ^{-1}+n_2\lambda ^{-2},$$ (2)

$$k(\lambda )=k_o\mathrm{exp}(k_1\lambda +k_2\lambda ^{-1}).$$ (3)

Values for the parameters describing $`n(\lambda )`$ and $`k(\lambda )`$ that minimized the least-squares difference between the measured and computed $`\mathrm{\Psi }(\lambda )`$ and $`\mathrm{\Delta }(\lambda )`$ were determined for the HAC sample, and for the fused silica substrate itself, at four angles of incidence evenly spaced between $`60^{\circ }`$ and $`70^{\circ }`$. The values of $`n`$ and $`k`$ derived for the fused silica blank agreed well with literature values, thus verifying the technique, and it was found for the HAC sample that one set of parameters could reproduce all the experimental data to a level of $`\chi ^2\sim 10^{-3}`$. The solutions were also found to be unique. The functions that emerged as solutions were generally insensitive to the initial guesses for the parameters, and were unchanged by the addition of other parameters (i.e., $`n_3`$, $`n_4`$, …, $`k_3`$, $`k_4`$, …). The values for the parameters of Equations 2 and 3 describing $`n(\lambda )`$ and $`k(\lambda )`$ for this HAC sample are shown in Table 2; the thickness of the film was found to be $`300\pm 30nm`$. Plots of $`n(\lambda )`$ and $`k(\lambda )`$ are shown in Figure 3. It should be noted that our determinations of $`n(\lambda )`$ and $`k(\lambda )`$ are certain to approximately $`10-20\%`$ at any given wavelength, and that they compare favorably to other published data for similar HAC materials (e.g., Smith 1984; Alterovitz et al. 1991) and for organic grain mantle materials (Chlewicki & Greenberg 1990; Jenniskens 1993).

### 2.5. HAC Sample IR Mass-extinction Coefficient

The IR absorption spectrum of this interstellar HAC analog was recorded over the frequency range $`4000-1000cm^{-1}`$ with a Nicolet 510P FT-IR spectrometer. The sample was deposited onto a NaCl substrate for this analysis. The resulting spectrum, percent transmission as a function of frequency, was quantified by reducing it to a spectral mass-absorption coefficient as follows. First, the raw absorption spectrum was divided by an absorption spectrum of the NaCl substrate itself and baseline corrected. This was done in order to highlight the principal absorption features, and to suppress continuum absorption by the HAC, which is primarily due to scattering and interference from the HAC/NaCl interface and the exposed surfaces. The baseline-corrected spectrum was then converted to optical depth and reduced to mass-absorption according to the relation

$$\kappa (\nu )=\frac{-\mathrm{ln}(I/I_o)}{\rho t},$$ (4)

where $`\rho `$ and $`t`$ are the sample density and thickness, determined as previously described. The final spectrum is shown in Figure 4. The small, double feature at $`2350cm^{-1}`$ and the high-frequency structure in the $`2000-1500cm^{-1}`$ range and above $`3500cm^{-1}`$ are not associated with the HAC sample; they arise from water vapor and CO<sub>2</sub> not completely purged from the spectrometer when the spectrum was recorded. Inspection of this plot reveals that the single strongest absorption feature in this type of HAC is the $`2950cm^{-1}`$ C-H stretching band, with a maximum mass-absorption coefficient of $`1.4\times 10^3cm^2/g`$.
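The reduction of Equation 4 amounts to a few lines of arithmetic; the sketch below applies it to a handful of assumed baseline-corrected transmission values (illustrative numbers of ours, not the measured spectrum), using the density and thickness derived above, and returns a peak value near the quoted $`1.4\times 10^3cm^2/g`$.

```python
import numpy as np

# Eq. (4): kappa(nu) = -ln(I/I0) / (rho * t), applied to illustrative
# baseline-corrected percent-transmission values.
rho = 1.5                                  # g/cm^3, from Sec. 2.2
t = 300e-7                                 # 300 nm film thickness, in cm

nu = np.array([1000.0, 2000.0, 2950.0, 4000.0])     # cm^-1
T_pct = np.array([96.0, 99.0, 93.9, 99.5])          # assumed data
kappa = -np.log(T_pct / 100.0) / (rho * t)          # cm^2/g
for v, kap in zip(nu, kappa):
    print(f"nu = {v:6.0f} cm^-1 : kappa = {kap:8.1f} cm^2/g")
```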
Other absorption features are also apparent at the low-frequency end of the spectrum, but at levels significantly weaker than the main C-H stretching peak; these bands are due to C-H wagging and C-C stretching modes. The integrated absorption strength for this sample in the $`2950cm^{-1}`$ feature is $`\kappa =46cm^2/g`$, which corresponds to an intrinsic, integrated band strength of $`\sigma =1.7\times 10^{-21}cm^2`$ per C-H bond.<sup>2</sup><sup>2</sup>2See §3.1 for the details of this calculation. From Figure 1 in Jacob & Unger (1996) we thus find that H/C for this sample is about 0.5, as presented in §2.2. It should be pointed out that only bound hydrogen is included in this ratio.

## 3. Discussion

Carbon is the most abundant element capable of maintaining itself in solid form under the conditions that prevail in the diffuse ISM. Various forms of solid carbon are invoked as sources or carriers of a variety of phenomena; for example, graphite and amorphous carbon grains for the $`220nm`$ bump (Mathis 1994; Mennella et al. 1998), HAC grains for ERE and for near-IR continuum emission (Duley 1985), and gas-phase HAC fragments for diffuse interstellar bands (Duley 1995). However, the only direct evidence that some form of solid carbonaceous material is present in the diffuse ISM is the detection of the $`2950cm^{-1}`$ absorption feature due to C-H bonds along dusty lines of sight (e.g., Soifer, Russell, & Merrill 1976; Adamson, Whittet, & Duley 1990; Sandford et al. 1991; Pendleton et al. 1994; Whittet et al. 1997). This detection and identification are firm. The extent to which HAC can account for other astrophysical phenomena depends on its optical and physical properties, as derived from laboratory investigations of these materials. The most superficial comparison that can be made between laboratory-produced HAC and the true interstellar HAC, but nonetheless an important one, is of the profile of the C-H stretch feature. Indeed, comparisons of this sort are plentiful in the literature (e.g., Sandford et al. 1991; Duley 1994; Greenberg et al. 1995; Schnaiter et al. 1998). We provide our own such comparison in Figure 5, which shows the $`2750-3050cm^{-1}`$ relative optical depth spectrum for the interstellar HAC analog presented here, along with that observed toward the Galactic center source IRS 6 and toward Cyg. OB2 #12. The IRS 6 data were provided by Y. Pendleton and are published in Pendleton et al. (1994); the Cyg. OB2 #12 data recorded by ISO were provided by D.C.B. Whittet and are published in Whittet et al. (1997). It is clear from this figure that the two interstellar profiles are similar and are both well represented by the profile of the interstellar HAC analog (see also Chiar et al. (1998) for additional, very recent comparisons of this sort). The principal uncertainties in the data are associated with determining the appropriate baseline to use to extract the profile of the feature from the astronomical spectra. It is, however, not sufficient to identify a laboratory-produced material as an analog to some interstellar material based on the observation of a single absorption feature. What other absorption features are produced by dust in the diffuse ISM? This question is most unambiguously answered by the recent observation of Cyg. OB2 #12 by Whittet et al. (1997) using ISO.
Absorption along this line of sight is believed to arise nearly entirely from dust in the diffuse ISM.<sup>3</sup><sup>3</sup>3At least some of the absorption along the lines of sight to IR-bright Galactic center sources such as IRS 6 is known to arise from molecular cloud edges (McFadzean et al. 1989). Perhaps the most striking fact about the ISO Cyg. OB2 #12 observation is the lack of any strong absorption features in addition to the Si-O band at $`1000cm^{-1}`$ and the C-H band at $`2950cm^{-1}`$, except possibly a very weak feature at $`1600cm^{-1}`$, due either to C-C stretching or to C-H wagging vibrations. This, however, is useful evidence in itself, which serves to exclude a large number of the interstellar carbonaceous grain analog materials that have been proposed to date. Note, in this respect, that the IR absorption spectrum of the interstellar HAC analog presented here (Figure 4) is dominated by C-H stretching absorption at $`2950cm^{-1}`$, with much less prominent absorption in the $`1000-1800cm^{-1}`$ region, which would be still less conspicuous at the signal-to-noise of the ISO Cyg. OB2 #12 observation. In addition to comparing the profiles of IR absorption features observed along lines of sight through the diffuse ISM with those produced by laboratory analogs, it is possible and indeed necessary to address the question of absorption-band strength. How much carbon needs to be locked in the form of interstellar HAC in order to explain the strength of the interstellar $`2950cm^{-1}`$ feature? This question is crucial in light of the recent determinations of the amount of solid carbon likely to be available to form interstellar HAC material (Snow & Witt 1995).

### 3.1. The Amount of Carbon in Diffuse ISM HAC Grains

In principle it is straightforward to determine the amount of carbon locked in diffuse ISM HAC grains from a quantitative analysis of the $`2950cm^{-1}`$ interstellar absorption feature due to C-H bonds. One measures the total optical depth in the band (either by integrating over the band profile or by simply noting the maximum optical depth of the feature) and determines the number of C-H bonds necessary to produce the absorption using laboratory measurements of the intrinsic absorption strength, or cross section, per C-H bond. Indeed, several groups have done this, most notably Sandford et al. (1991) (hereafter S91) and Duley et al. (1998) (hereafter D98). But, in practice, there are significant difficulties with the laboratory measurement and interpretation of the C-H stretching absorption-band strength which have been overlooked. S91 analyzed the $`2950cm^{-1}`$ C-H absorption feature observed toward several bright IR sources believed to be probes of the diffuse ISM. They concluded that the amount of carbon necessary to produce the observed optical depth is $`10\%`$, to within a factor of two, of the cosmic abundance of carbon as presented in Allen (1973) ($`370ppM`$ H), or roughly $`20-80ppM`$ of hydrogen. They relied, however, on C-H absorption-band strengths (and feature widths) measured in small hydrocarbon molecules (C<sub>5</sub>H<sub>12</sub>, C<sub>6</sub>H<sub>14</sub>, and cyclo-C<sub>6</sub>H<sub>14</sub>), when there is no a priori reason to expect the C-H stretching band strength in small molecules to be the same as that in solid matter. D98 measured the C-H absorption band strength for a single HAC produced by laser ablation of carbon in a low-pressure, hydrogen-rich atmosphere and analyzed the IR observations of lines of sight toward the Galactic center by Pendleton et al.
(1994) to conclude that $`72-97ppM`$ H of carbon is necessary to explain the optical depth in the interstellar $`2950cm^{-1}`$ feature. But here, the assumption is made that the C-H stretching band strength is independent of the H/C ratio in HAC, when there is in fact evidence to the contrary (Jacob & Unger 1996, hereafter JU96). It is clear from both of these studies that the laboratory determinations of the intrinsic absorption strengths of various C-H modes contribute a large uncertainty to the determination of the amount of carbon locked in interstellar HAC. The literature concerning the quantitative analysis of IR absorption features is especially confusing to the uninitiated; there are about as many units for characterizing the intrinsic strength of an IR mode as there are papers doing so. The following paragraphs serve to connect two frameworks relevant to this current work. The relation most commonly used in the astronomical literature, and the one used by S91, among others, is

$$N=\frac{1}{A}\tau _{max}\mathrm{\Delta }\nu ,$$ (5)

where $`N`$ is the column density of absorbers, $`\tau _{max}`$ is the maximum optical depth in an IR absorption feature, and $`\mathrm{\Delta }\nu `$ is the full-width at half maximum of the feature in wavenumbers. $`A`$ is a constant with units of $`cm`$ per group or $`cm`$ per bond. Equation 5 is an approximate relation, derived from the more formal, rigorous definition for the number density of C-H bonds,

$$n=\frac{1}{\sigma }\int _{feature}\frac{\alpha (\nu )}{\nu }d\nu ,$$ (6)

where $`\alpha (\nu )`$ is the absorption coefficient as a function of frequency and the normalization constant $`\sigma `$ is the integrated cross section per absorber, in units of $`cm^2`$ per absorber. Some physical-chemistry studies of IR band strengths report band cross sections, not the $`A`$-value of Equation 5, most notably in this context the work of JU96. Note that there is an approximate relation between $`A`$ and $`\sigma `$,

$$A\approx \sigma \overline{\nu },$$ (7)

where $`\overline{\nu }`$ is the average or central frequency of the absorption band. In connection with determining the amount of carbon locked in interstellar HAC, the work of JU96 is of utmost relevance. These authors determined the integrated absorption cross section per C-H bond experimentally in a large number of HAC samples covering a range in H/C ratio from about 0.3 to about 1.<sup>4</sup><sup>4</sup>4They define a constant $`A`$ which is not the Sandford et al. $`A`$, but rather the reciprocal of the integrated band cross section. They found that the cross section varied linearly with H/C such that $`1\times 10^{-21}\le \sigma \le 5\times 10^{-21}cm^2`$ for $`0.3\le `$H/C$`\le 1`$.<sup>5</sup><sup>5</sup>5There is some scatter in the data presented by JU96, and possibly an indication of a saturation of the $`2950cm^{-1}`$ band strength at high levels of hydrogenation, but these attributes do not significantly affect the discussion and conclusions that follow. This result complicates the analysis of the interstellar $`2950cm^{-1}`$ feature because it reveals that the H/C ratio in interstellar HAC grains must be estimated in order to determine how much carbon these grains contain, since the intrinsic band strength is a function of the H/C ratio. Before proceeding to discuss the results of JU96 more thoroughly, it is instructive to compare their results to those of S91 and D98.
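A worked example of Equations 5 and 7 for the Cyg. OB2 #12 sight line (a sketch of ours, anticipating the numbers derived in the remainder of this section; the hydrogen column is the value obtained below from $`A_V`$):

```python
# Eqs. (5) and (7): N = tau_max * dnu / A with A ~ sigma * nu_bar.
tau_max = 0.04            # peak optical depth of the 2950 cm^-1 feature
nu_bar = 2950.0           # band center, cm^-1
dnu = nu_bar / 30.0       # width, from dnu/nu_bar ~ 1/30 (Sec. 3.1)
sigma = 1.7e-21           # cm^2 per C-H bond (H/C ~ 0.5, this work)
N_H = 1.9e22              # cm^-2, from tau_V = 4.92e-22 N(H) and A_V = 10.2

A = sigma * nu_bar                    # cm per bond, Eq. (7)
N_CH = tau_max * dnu / A              # C-H bonds per cm^2, Eq. (5)
ppM_C = (N_CH / 0.5) / N_H * 1e6      # carbon atoms: one per 0.5 bound H
print(f"N(C-H) = {N_CH:.1e} cm^-2  ->  C/H locked in HAC ~ {ppM_C:.0f} ppM")
```

The result, roughly $`80ppM`$ of carbon for an H/C$`\approx 0.5`$ material, anticipates the more careful treatment that follows.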
Both of these groups determined $`A`$-values for the four principal $`sp^3`$ C-H stretching modes: –CH<sub>3</sub> symmetric and asymmetric, and $`>`$CH<sub>2</sub> symmetric and asymmetric. Their results are summarized in Table 3; $`A`$-values were converted to cross sections using Equation 7. The results in Table 3 are consistent with those of JU96. The relatively low value of $`\sigma _{D98}`$ is consistent with their determination of H/C in the range $`0.3-0.4`$ for the sample. The relatively high value of $`\sigma _{S91}`$ is consistent with the fact that the hydrocarbon molecules they analyzed were saturated, with H/C in excess of unity. The interstellar HAC analog presented in our work has H/C$`\approx 0.5`$, and we determine the integrated absorption cross section, according to Equation 6, to be $`\sigma =1.7\times 10^{-21}cm^2`$, which is also consistent with the results of JU96. These results clearly demonstrate the problem which complicates the analysis of the interstellar $`2950cm^{-1}`$ feature: the intrinsic cross section per C-H bond depends strongly on the H/C ratio of the interstellar HAC grains, which is not well determined. The only data that give an indication of the H/C ratio in the interstellar material are the profile of the C-H stretch feature itself. Materials with very low hydrogen content (and very low $`sp^3/sp^2`$ ratios) have profiles dominated by $`sp^2`$ modes, which occur at frequencies near and above $`3000cm^{-1}`$; materials with high hydrogen content (and higher $`sp^3/sp^2`$ ratios) have profiles dominated by $`sp^3`$ modes, as seems to be the astrophysical situation. But the profile is not very sensitive for HACs with H/C greater than about 0.3, perhaps because there is a preference for hydrogen to bond to $`sp^3`$-hybridized carbon. So, the profile of the C-H stretch feature alone is not sufficient to remove the complication. The mass-absorption coefficient $`\kappa `$ for the C-H stretch feature is the quantity that directly relates the strength of the observed interstellar band to the amount of carbon locked in interstellar HAC grains. $`\kappa `$ is defined to be the cross section per unit mass of material,

$$\kappa =\frac{\sigma }{m_{CH}},$$ (8)

where $`m_{CH}`$ is the mass of material per C-H bond, since $`\sigma `$ is the cross section per C-H bond. But the definition

$$\kappa ^{}(\nu )=\frac{\alpha (\nu )}{\rho },$$ (9)

where $`\alpha (\nu )`$ is the absorption coefficient as a function of frequency and $`\rho `$ is the mass density, is also commonly used. These equations define $`\kappa `$’s which are different but related, as reviewed below. Equation 6, which rigorously defines $`\sigma `$, can be approximated by

$$n\approx \frac{1}{\sigma }\alpha _{max}\frac{\mathrm{\Delta }\nu }{\overline{\nu }},$$ (10)

where $`\alpha _{max}`$ is the maximum absorption coefficient in the feature, and $`\mathrm{\Delta }\nu `$ and $`\overline{\nu }`$ are the feature width and central frequency, respectively. For the interstellar C-H feature, $`\mathrm{\Delta }\nu /\overline{\nu }\approx \frac{1}{30}`$. Given that $`n=\rho /m_{CH}`$ is the number density of C-H bonds, combining Equations 9 and 10 gives

$$\kappa \approx \frac{1}{30}\kappa ^{}(2950)$$ (11)

as the relation between $`\kappa `$, determined from $`\sigma `$ according to Equation 8, and $`\kappa ^{}`$ at the interstellar $`2950cm^{-1}`$ feature maximum, defined by Equation 9. Thus Equation 11 can be used to relate $`\kappa `$, the integrated mass-absorption coefficient, to $`\kappa ^{}(2950)`$, which is most commonly used in the astronomy literature.
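A short check (ours) connecting Equations 8 and 11 for this work's sample, anticipating the expression for $`m_{CH}`$ given as Equation 12 below:

```python
# kappa = sigma/m_CH (Eq. 8), kappa'(2950) ~ 30*kappa (Eq. 11),
# with m_CH = m_H + m_C/(H/C) (Eq. 12, anticipated).
m_H, m_C = 1.67e-24, 2.0e-23          # atomic masses, g
HC, sigma = 0.5, 1.7e-21              # this work's sample

m_CH = m_H + m_C / HC                 # g of material per C-H bond
kappa = sigma / m_CH                  # integrated, cm^2/g
print(f"kappa = {kappa:.0f} cm^2/g, kappa'(2950) ~ {30*kappa:.0f} cm^2/g")
```

This returns $`\kappa \approx 41cm^2/g`$ and $`\kappa ^{}(2950)\approx 1.2\times 10^3cm^2/g`$, consistent to roughly 15% with the measured $`46cm^2/g`$ and $`1.4\times 10^3cm^2/g`$.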
The mass-absorption coefficient for the C-H stretch feature in HAC materials depends on the H/C ratio. In Equation 8, both $`\sigma `$ and $`m_{CH}`$ are functions of the H/C ratio; specifically, it can be shown that

$$m_{CH}=m_H+\frac{m_C}{\mathrm{H}/\mathrm{C}},$$ (12)

where $`m_H`$ and $`m_C`$ are the masses of the hydrogen and carbon atoms, respectively. Using this relation, the values for $`\sigma `$ determined by JU96, and Equation 11, the mass-absorption coefficient as a function of H/C ratio can be computed, as presented in Table 4. It is clear from these data that the mass-absorption coefficient for the C-H band in HAC materials varies by over a factor of 20 for $`0.2\le `$H/C$`\le 1`$. What is the mass-absorption coefficient for interstellar HAC grains as determined observationally? The mass-absorption coefficient $`\kappa ^{}`$ defined by Equation 9 can be thought of as the optical depth per column mass-density of absorbers. Accordingly, it can be stated in terms of relevant astrophysical quantities as follows:

$$\kappa ^{}(2950)=\frac{\tau (2950)}{N(\mathrm{H})}\left(\frac{N(\mathrm{C})}{N(\mathrm{H})}\right)_{\delta HAC}^{-1}\left(\frac{\mathrm{C}/\mathrm{H}}{m_{CH}}\right),$$ (13)

where $`N(\mathrm{X})`$ is the column density of element $`X`$, $`\left(\frac{N(\mathrm{C})}{N(\mathrm{H})}\right)_{\delta HAC}`$ is the amount of carbon relative to hydrogen depleted into the HAC grains (the quantity we wish to determine), and C/H is that of the HAC material itself. In what follows, we will use Equation 13 to analyze exclusively the line of sight toward Cyg. OB2 #12, the prototypical diffuse ISM line of sight. Similar results are obtained for lines of sight toward the Galactic center. The maximum optical depth in the $`2950cm^{-1}`$ feature observed toward Cyg. OB2 #12 is $`\tau (2950)\approx 0.04`$ with $`A_V=10.2mag`$. Using $`\tau _V=4.92\times 10^{-22}N(\mathrm{H})`$ given by Mathis (1996), we find $`\tau (2950)/N(\mathrm{H})=2.1\times 10^{-24}cm^2`$ (which is consistent with other lines of sight as well). The ratio $`\left(\frac{\mathrm{C}/\mathrm{H}}{m_{CH}}\right)`$, derived from the values in Table 4, ranges from $`4.5\times 10^{22}g^{-1}`$ to $`5.0\times 10^{22}g^{-1}`$, so a constant value of $`4.7\times 10^{22}g^{-1}`$ is representative of HACs to within the uncertainty of the observations. Thus, the mass-absorption coefficient for the interstellar C-H stretch feature is

$$\kappa ^{}(2950)=\left(0.10cm^2g^{-1}\right)\left(\frac{N(\mathrm{C})}{N(\mathrm{H})}\right)_{\delta HAC}^{-1},$$ (14)

which depends only on the amount of carbon relative to hydrogen depleted into HAC grains. Notice that the system of variables $`\left(\frac{N(\mathrm{C})}{N(\mathrm{H})}\right)_{\delta HAC}`$, H/C for HAC, and $`\kappa ^{}(2950)`$ is underconstrained. There are three variables, but only two constraints: the relation between H/C for HAC and $`\kappa ^{}(2950)`$ summarized in Table 4, and the relation between $`\left(\frac{N(\mathrm{C})}{N(\mathrm{H})}\right)_{\delta HAC}`$ and $`\kappa ^{}(2950)`$ given in Equation 14. Table 5 summarizes this. The first column lists values for the amount of carbon relative to hydrogen locked in interstellar HAC grains, $`\left(\frac{N(\mathrm{C})}{N(\mathrm{H})}\right)_{\delta HAC}`$; the second column shows the corresponding, required values of $`\kappa ^{}(2950)`$. The third column gives values for the HAC H/C ratio associated with the $`\kappa `$’s in the second column, interpolated from Table 4.
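The content of Equations 13 and 14, and the scaling behind Table 5, can be reproduced with a few lines of arithmetic; in this sketch the atomic masses and the quoted $`\tau (2950)/N(\mathrm{H})`$ are the only inputs.

```python
# Eq. (14): kappa'(2950) required of interstellar HAC as a function of
# the carbon fraction delta = (N(C)/N(H))_dHAC locked in HAC grains.
m_H, m_C = 1.67e-24, 2.0e-23        # atomic masses, g
tau_over_NH = 2.1e-24               # cm^2, tau(2950)/N(H) for Cyg. OB2 #12

def CH_over_mCH(HC):
    """(C/H)/m_CH with m_CH = m_H + m_C/(H/C), Eq. (12); C/H = 1/(H/C)."""
    return (1.0 / HC) / (m_H + m_C / HC)

prefactor = tau_over_NH * CH_over_mCH(0.5)   # ~0.10 cm^2/g, as in Eq. (14)
for delta_ppM in (20, 40, 80, 160):
    kappa = prefactor / (delta_ppM * 1e-6)
    print(f"delta = {delta_ppM:3d} ppM  ->  kappa'(2950) ~ {kappa:7.0f} cm^2/g")
```

The inverse scaling makes the criterion of §3.2 explicit: at the nominal $`80ppM`$ carbon budget, a viable analog needs $`\kappa ^{}(2950)`$ in excess of roughly $`10^3cm^2/g`$.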
Note the approximate inverse relationship between the amount of carbon locked in HAC grains and the H/C ratio the HAC material would need to have in order to explain the observed optical depth in the interstellar $`2950cm^{-1}`$ feature. Thus we are led to the conclusion that the amount of carbon locked in interstellar HAC grains is in the range of $`80ppM`$ hydrogen (or more, depending on the gas-phase abundance and depletion of carbon) to possibly less than $`20ppM`$ hydrogen, depending on the degree of hydrogenation of the material. Note that this result is similar to that presented by S91, with one significant difference: the large range in their estimate of $`\left(\frac{N(\mathrm{C})}{N(\mathrm{H})}\right)_{\delta HAC}`$ is, by their own acknowledgment, due to uncertainties in the observations and in the band strengths they rely on. This is not the case here; the large range in $`\left(\frac{N(\mathrm{C})}{N(\mathrm{H})}\right)_{\delta HAC}`$ is due to a fundamental lack of information to close the problem. We can, however, draw important conclusions from this analysis, and can use these results to place limits on the interstellar HAC grain problem. On the surface, it is clear that interstellar HAC grains must be hydrogenated at least to the level H/C$`\approx 0.5`$, and at this limit most of the available solid carbon would need to be in the form of HAC grains. The other limit is not so precisely delineated. It is clear that if the HAC grains are extremely highly hydrogenated, to the maximum H/C$`\approx 1-1.5`$, then the amount of carbon needed to produce the observed optical depth in the $`2950cm^{-1}`$ feature would be small, perhaps as low as $`20ppM`$ hydrogen. But, as we discuss in §3.3, this seems unlikely because these HACs photoluminesce efficiently in the blue-green, and such emission has never been observed in interstellar environments.

### 3.2. Viable Interstellar HAC Analogs

The data in Table 5 can be used to evaluate the viability of other potential interstellar HAC analogs. From a purely qualitative standpoint, any material proposed as an analog to the hydrogenated carbonaceous component of the diffuse ISM must produce absorption in the IR that is dominated by the C-H stretch feature at $`2950cm^{-1}`$, given the recent ISO observation of Cyg. OB2 #12 by Whittet et al. (1997). The lack of other strong features along this line of sight (excepting the Si-O band at $`1000cm^{-1}`$) is meaningful. In light of this, the suggestion that solids such as the organic refractory EUREKA materials of Greenberg et al. (1995), for example, are analogs to the carbonaceous component of the diffuse ISM must not be taken too literally, because they show significant absorption due to bonds other than C-H and C-C. These materials do, however, offer intriguing insight into a possible interstellar HAC production method. From a quantitative standpoint, the results of the previous section provide a key criterion to which all interstellar HAC analogs must be held. Given that only $`80ppM`$ of carbon relative to hydrogen is likely to be depleted from the gas phase to form solid carbonaceous material (Snow & Witt 1995), any laboratory analog to the interstellar HAC material must have a mass-absorption coefficient at the peak of the C-H stretch feature in excess of $`10^3cm^2/g`$, and in order to do so must be hydrogenated to a level of H/C$`>0.5`$.
The interstellar HAC analog presented in this work, with H/C $`\simeq 0.5`$ and $`\kappa ^{}(2950)=1.4\times 10^3cm^2/g`$, appears to be a limiting material in the sense that it produces absorption in the C-H stretch feature with a strength that would require all of the available solid carbon to be in the form of this type of material. Other interstellar HAC analogs presented to date, however, appear in this analysis to require more solid carbon than is available in order to account for the interstellar $`2950cm^{-1}`$ feature (Borghesi et al. 1985; Ogmen & Duley 1988; Colangeli et al. 1995; Jaeger et al. 1998). Of course, uncertainty in the true amount of solid-phase carbon in the ISM affects each of these interstellar HAC analogs to different degrees; each must be considered on a case by case basis. However, that materials with lower mass-absorption coefficients in the $`2950cm^{-1}`$ band require more solid-phase carbon to account for the observed strength of this feature – perhaps more than is available – remains a valid conclusion. While determination of IR mass-absorption spectra for interstellar grain analogs is a necessary quantification, it has only been done by a limited number of groups. Bussoletti et al. (1987) and Colangeli et al. (1995) produced and analyzed a variety of HAC materials (denoted BE, ACAR, ACH2, etc.), derived by arc-evaporation of graphite, and determined mass-absorption spectra for them through the near-IR along with other properties, including indices of refraction, which, incidentally, have been used subsequently by Mathis (1996) to compute his most recent dust model. For these materials, however, $`\kappa ^{}(2950)`$ is only on the order of 100–150 $`cm^2/g`$ – clearly too low to account for the interstellar $`2950cm^{-1}`$ feature. More recently, Schnaiter et al. (1998) produced a variety of matrix-isolated nano-sized carbon grains and, among other things, determined the mass-absorption coefficient in the C-H stretch feature. They found $`\kappa ^{}(2950)`$ to range from near zero to around $`600cm^2/g`$ for the most highly hydrogenated materials they produced. The most highly hydrogenated of these materials may possibly be able to account for the $`2950cm^{-1}`$ feature, given the uncertainty in the true amount of solid-phase carbon that is available. Finally, D98 produced an interstellar HAC analog by laser ablation of graphite in a low-pressure, hydrogen-rich atmosphere, and analyzed the strength of the C-H stretch feature. Although they do not specifically compute $`\kappa ^{}(2950)`$, it is apparently approximately $`450cm^2/g`$, based on the H/C ratio they quote and the “$`A`$-value” (of Equation 5) they determine; again considerably lower than necessary to account for all of the absorption in the interstellar $`2950cm^{-1}`$ feature. So, although a number of carbonaceous materials have been presented in the literature over the past few years as sources of the interstellar $`2950cm^{-1}`$ feature, and thus as interstellar HAC analogs, the HAC presented in this work is the first to be shown to be quantitatively consistent with IR observations in the 4000–1000 $`cm^{-1}`$ spectral region of lines of sight which unambiguously sample only the diffuse ISM.

### 3.3. The Optical Role of HAC Grains in the Diffuse ISM

Given the presence in the diffuse ISM of some form of hydrogenated carbonaceous solid, as betrayed by the interstellar $`2950cm^{-1}`$ feature, it is necessary to consider carefully the roles it may and may not play in other UV/visible/IR phenomena.
HAC is included as a source of opacity in the current dust models of Mathis (1996) and Li & Greenberg (1997), and has been proposed as the source or carrier of the broad, red dust luminescence band known as ERE (e.g., Duley 1985; Witt 1994), near-IR (1–3 $`\mu m`$) non-thermal continuum radiation (Jones, Duley, & Williams 1990), and the $`220nm`$ bump (Colangeli et al. 1993; Schnaiter et al. 1998), in addition to being the assumed source of the interstellar $`2950cm^{-1}`$ feature. The extent to which HAC grains can indeed fill all of these roles must continue to be reviewed in light of new astronomical and laboratory data. Although HAC is assumed to be a major source of opacity in the near-UV and visible (Li & Greenberg 1997), its optical constants in this spectral region are by far the most poorly determined among astrophysically important materials. In addition, the recent dust models of Li & Greenberg (1997) and Mathis (1996) require that the imaginary part of the index of refraction for the material causing visible/near-UV absorption be as high as $`0.4`$. If so, then the material causing this absorption is not the HAC responsible for the interstellar $`2950cm^{-1}`$ feature, for the simple reason that HACs which produce significant optical depth per carbon atom in the C-H stretch band are not nearly this absorbing in the visible. As demonstrated in Figure 3, our HAC sample, which matches the interstellar C-H feature both in profile and in mass-absorption coefficient (at a minimum level), is essentially transparent in the visible, with $`k`$ rising to about $`0.4`$ only in the near-UV. Further, we consider this a limiting material, because increasing its visual opacity would require lowering its bandgap below its present value ($`1.9eV`$), decreasing its mass-absorption coefficient in the C-H stretch band to where more than $`80ppM`$ solid-phase carbon would be required to match the C-H band strength, and shifting its C-H band profile toward frequencies higher than observed. HAC, then, is not the sole source of absorption in the visible/near-UV. A recent paper by Adamson et al. (1998), describing spectropolarimetric observations of the interstellar $`2950cm^{-1}`$ feature observed toward the Galactic center source Sgr. A IRS7, provides an additional clue to the nature of the carrier of this absorption band. They find this feature to be unpolarized, with an upper limit well below that expected on the basis of a model in which the carrier bonds are associated with the aligned silicate grains (i.e., silicate-core/HAC-mantle grains). This suggests the possibility that the IR-active HAC grains are extremely small – below the Rayleigh limit. If this is the case, then it seems again that the HAC material is very highly hydrogenated, with a correspondingly high C-H stretch mass-absorption coefficient. Interestingly enough, small HAC grains have recently been shown in the lab by Schnaiter et al. (1998) to produce broad absorption around $`220nm`$, thus making them candidates for the carrier of the interstellar $`220nm`$ extinction feature. They concluded, in fact, that small HAC grains locking up about $`100ppM`$ carbon relative to hydrogen could explain the observed strength of the $`220nm`$ bump. In addition, these laboratory-produced HAC grains have a mass-absorption coefficient in the $`2950cm^{-1}`$ feature of $`\kappa ^{}(2950)\simeq 600cm^2/g`$, which is near but lower than the required $`10^3cm^2/g`$.
It thus appears that small HAC grains are a promising source of both the $`220nm`$ bump and the $`2950cm^{-1}`$ C-H feature. But they would need to lock up essentially all the solid-phase carbon if the recent results of Schnaiter et al. (1998, 1999) hold. It is important to note that the HAC sample presented in this current work does not produce structure in the 190–250 $`nm`$ spectral region. Finally, it has been suggested that HAC grains are the carrier of the ERE observed in a variety of astrophysical environments (Duley, Seahra, & Williams 1997). More recently, however, Gordon, Witt, & Friedmann (1998) and Szomoru & Guhathakurta (1998) have shown that ERE is also produced by dust in the diffuse ISM, with a quantum efficiency so high as to call into question its HAC origin. By comparing the absorbed fraction of the interstellar radiation field, integrated over the 91.2–550 $`nm`$ range, with the observed ERE intensity over the same lines of sight, Gordon, Witt, & Friedmann (1998) concluded, and Szomoru & Guhathakurta (1998) confirmed, that $`(10\pm 3)\%`$ of all absorbed photons lead to the production of ERE photons. Since the ERE carrier is most likely only one of several dust components contributing to the absorption in the UV/visible wavelength range, the true quantum efficiency of the ERE process must be larger than 10%, and could be as high as 40% to 50%. The quantum efficiency of PL in HAC (Silva et al. 1996; Rusli, Robertson, & Amaratunga 1996; Witt, Ryutov, & Furton 1997) is closely correlated with the bandgap, which in turn is correlated with the H/C ratio. High-efficiency PL occurs in HAC only if the bandgap is large, of order $`3eV`$, which implies, under UV illumination, blue-green luminescence instead of red emission. The absence of blue emission in astronomical sources (Rush & Witt 1975; Witt & Boroson 1990) thus argues against the presence of such high-bandgap HAC and the large H/C ratios this would imply. Materials such as the HAC sample discussed in this paper can contribute to the ERE in the correct wavelength range, but only with a modest quantum efficiency of about 5%. At best, it would be a partial contributor to the ERE observed in the diffuse ISM. Other possible sources for the ERE, such as silicon nanoparticles (Witt, Gordon, & Furton 1998; Ledoux et al. 1998) or large PAH molecules (d’Hendecourt et al. 1986), must therefore be considered as main contributors. Further, the absence of blue emission in ERE sources also argues against the presence of HACs with H/C$`>0.5`$. This places a lower limit on the amount of carbon contained in interstellar HAC grains, because of the existence of the correlation between the H/C ratio and the band mass-absorption coefficient (see Table 5). We conclude, therefore, that the amount of carbon locked up in HAC is nearer $`80ppM`$ than $`20ppM`$ of hydrogen.

## 4. Summary

In summary, we have produced and thoroughly characterized a HAC sample that is, on the basis of a quantitative comparison of the IR absorption spectra of the HAC and of the diffuse ISM, a viable analog to the true interstellar HAC material. Both spectra are dominated by the C-H stretching feature at $`2950cm^{-1}`$, with much weaker absorption in the 2000–1000 $`cm^{-1}`$ range due to C-C stretching and C-H wagging modes (in addition to the Si-O band in the interstellar spectrum).
This HAC has a density of $`1.5g/cm^3`$, an electronic band-gap of $`1.9\pm 0.1eV`$, H/C and $`sp^3/sp^2`$ ratios both near $`0.5`$, a peak mass-absorption coefficient in the $`2950cm^{-1}`$ band of $`\kappa ^{}(2950)=1.4\times 10^3cm^2/g`$, and an integrated PL quantum efficiency of $`0.05\pm 0.02`$ with peak emission near $`700nm`$. We have also determined via spectroscopic ellipsometry the complex index of refraction for this HAC in the wavelength range 300–1000 $`nm`$, finding $`n`$ to be nearly constant near $`1.7`$ and $`k`$ to rise exponentially from near zero in the visible to around $`0.4`$ in the near-UV. We carefully analyzed the profile and strength of the $`2950cm^{-1}`$ C-H stretching absorption band in HAC and the diffuse ISM. We reviewed the results of Jacob & Unger (1996), showing that the intrinsic strength of the $`2950cm^{-1}`$ feature per C-H bond, and thus the amount of carbon that needs to be bound in HAC in the diffuse ISM, is a strong function of the degree of hydrogenation of the material. HACs with H/C$`\simeq 0.5`$, like the sample described in this work, are minimally able to provide the observed optical depth in the interstellar $`2950cm^{-1}`$ feature, requiring approximately $`80ppM`$ carbon relative to hydrogen – essentially all the available solid-phase carbon. HACs with H/C much lower than $`0.5`$ require increasingly more than $`80ppM`$ of carbon relative to hydrogen, and thus do not appear to be viable interstellar HAC analogs. HACs with H/C much higher than $`0.5`$, while able to account for the $`2950cm^{-1}`$ feature optical depth with less carbon, perhaps as little as $`20ppM`$ of hydrogen, also do not appear to be viable analogs, because these materials as a rule exhibit blue-green photoluminescence with an efficiency on the order of $`10\%`$ – such dust emission has never been observed. So, we conclude that interstellar HAC must be hydrogenated to a level of about 0.5–0.7 and that this material locks up most of the available solid-phase carbon. Finally, we consider other optical roles interstellar HAC is likely to play in the diffuse ISM. We conclude that HAC is not the dominant source of the ERE band, which has now been conclusively detected in the diffuse ISM, because HAC’s PL spectrum and quantum efficiency differ significantly from those of the ERE. We also conclude that HAC is not the sole source of opacity in the visible and near-UV, because HACs with $`k\gg 0`$ in this spectral region require far more carbon than is available in the solid phase to account for the observed $`2950cm^{-1}`$ band optical depth. In addition, it is important to note that the bulk-solid HAC which we have produced does not show any structure in the spectral vicinity of $`220nm`$. The authors are grateful to Yvonne Pendleton and Doug Whittet for providing observational data, Margaret Tuma for assistance with the spectroscopic ellipsometry of our HAC sample, and Thomas Henning for a thorough critique of the manuscript. This work was supported by NASA grants to the University of Toledo and by subcontract to Rhode Island College.
# Possibility of Sound Propagation in Vacuums with the Speed of Light

Robert Lauter

Max-Planck Institut für Kolloid- und Grenzflächenforschung, Max-Planck Campus, Haus 2, Am Mühlenberg 2, 14476 Golm (Potsdam)

An important question of theoretical physics is whether sound is able to propagate in vacuums at all; if this is the case, it must lead to the reinterpretation of one zero-restmass particle as the particle corresponding to vacuum-sound waves. Taking the electron-neutrino as the corresponding particle, its observed non-vanishing rest-energy may only appear for neutrino-propagation inside material media. The idea may also influence the physics of dense matter, restricting the maximum speed of sound, both in vacuums and in matter, to the speed of light.

PACS numbers: 03.30.+p; 14.60.Pq; 43.35.Gk; 97.60.Jd

Introduction: Since the idea of sound propagation in vacuums is generally rejected, I will begin by discussing the various reasons for which this idea is not believed by the great majority of physicists. The most obvious reason is that the transmission of normal acoustic sound waves is, in contrast to electromagnetic waves, not observed in vacuums. Nevertheless, this is not in itself an argument against sound propagation in vacuums, since sound waves correspond to zero-restmass particles which have to propagate with velocity c in vacuums. Therefore, from the very high ratio of the phase velocity of sound in vacuums to that in, for example, air, of approximately $`10^6`$, one has to expect negligible transmission of normal acoustic sound waves into vacuums even if sound propagation in vacuums exists. The second reason why it is not believed that sound propagates in vacuums is that compressional waves do not exist in vacuums. However, the criterion of compressibility does not determine which thermodynamical parameter of state is altered during a particular propagation of sonic waves. Hence, there are some examples of acoustic wave propagation which occur without altering the pressure, the density, or the volume of the medium. The most ordinary case is the propagation of transversal sound waves, which occur without altering the volume of the medium. The second example is the propagation of the so-called “second sound”, and the third one the so-called “zero sound”. The two latter types of acoustic phenomena only occur inside superfluid matter; the first of these represents periodic oscillations of the temperature without altering the pressure (while the density is altered), while zero sound propagates without altering the density of the medium (while the pressure changes). Thus, in material media, it depends on the actual physical circumstances inside the medium which parameters of state are changed during sound propagation, and neither changes of pressure or volume, nor changes in density, are necessary preconditions for sound propagation in general. It is therefore doubtful whether compressibility is a necessary property of a medium to permit sound propagation in general. The third argument against sound propagation in vacuums is the same as that used during the discussion about the ether-hypothesis in the last century. Most people, including physicists, could not imagine any wave propagation without the displacement of particular components (atoms, molecules) inside a medium.
But since there is no doubt that electromagnetic waves propagate in vacuums without the oscillation of any material medium through which they are propagating, the same question also has to be asked for acoustic waves. In the following investigation I shall first derive some basic properties of vacuum-sound waves from the thermodynamical properties of a vacuum. This is possible because thermodynamics does not make any restriction concerning the internal structure of matter. Therefore the whole of thermodynamics would remain true even if matter were a continuum (which is how we will treat the vacuum). Another approach is the investigation of vacuum-sound waves by means of lattice-theory. Secondly, the results of this investigation are compared to the electron-neutrino’s properties, which seem to fit the properties of vacuum-sound waves.

Investigation of the vacuum’s properties concerning sound propagation: The thermodynamical investigation of sound-propagation in vacuums requires at first a classical, i.e. non-quantized, definition of a vacuum. I shall adopt the definition of T. H. Boyer, who states that a vacuum is a region of space from which all rest-energy and additionally all thermal radiation has been removed. This region of empty space still contains the zero-point energy which comes from the fluctuating fields of the strong and weak nuclear as well as the electromagnetic force, and which cannot be removed. Thus, propagation of sound in vacuums is considered as occurring at zero K, like zero sound in material media. The above classical definition of a vacuum consequently leads to vanishing values for the pressure and the total energy density of a vacuum, or $`p=0`$ and $`ϵ=0`$. The relativistic relationship for the velocity of sound propagation according to Bludman and Ruderman is: $$c_s^2=c^2(\frac{dp}{dϵ})_s$$ (1) with $`c_s`$ the speed of sound, $`c`$ the speed of light, $`p`$ the pressure, $`ϵ`$ the total energy density (including rest energy), and $`S`$ the entropy. This equation was derived by replacing the density, $`\rho `$, by the energy-density. This is possible because of the mass-energy equivalence which resulted from the theory of special relativity. It shows that rest-energy, and hence matter as a supporting medium, is not a necessary precondition for propagation of sonic waves in general; hence sound propagation in vacuums is permitted by the theory of special relativity. It also shows that $`c_S`$ equals $`c`$ if, as in the case of a vacuum, the following equation is fulfilled: $$\underset{dp\to 0,dϵ\to 0}{lim}(\frac{p}{ϵ})_s=1$$ (2) According to Bludman and Ruderman, Lorentz-invariance imposes no restriction on the speed of sound in material media (this can also be explained by the inapplicability of the Lorentz-transformation to zero-restmass particles, since they have to propagate with velocity c in a vacuum). Thus, the principle of sound propagation in vacuums provides a limiting condition for the velocity of signal transmission. Bludman’s and Ruderman’s treatment proved that a material system which became ultrabaric, i.e. p exceeding $`ϵ`$, would first have to become superluminal, i.e. dp exceeding $`dϵ`$. This is only possible in the case of one-dimensional pressure-fluctuations, since otherwise $`p\le \frac{1}{3}ϵ`$ (the equality holds for a photon-gas). Thus I want to investigate the case of a one-dimensional photon-gas in order to show that a vacuum which can be constructed according to the above-given definition will yield $`p\to 0`$ and $`ϵ\to 0`$ while $`p=ϵ`$ always holds true.
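To illustrate Equation (1), a minimal numerical sketch: for a linear equation of state $`p=wϵ`$ one has $`(dp/dϵ)_S=w`$, so the isotropic photon gas ($`w=1/3`$) gives $`c_s=c/\sqrt{3}`$, while the one-dimensional photon gas considered below ($`w=1`$) saturates the limit $`c_s=c`$:

```python
import math

c = 2.998e8  # speed of light [m/s]

def sound_speed(w):
    """Eq. (1) for a linear equation of state p = w*eps, where (dp/d eps)_S = w."""
    return c * math.sqrt(w)

print(f"isotropic photon gas, w = 1/3: c_s = {sound_speed(1/3):.3e} m/s")   # c/sqrt(3)
print(f"one-dimensional photon gas, w = 1: c_s = {sound_speed(1.0):.3e} m/s")  # c
```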
Consider a system of non-interacting photons which are moving in parallel direction between two ideally reflecting surfaces perpendicular to the direction of the movement of the photons. The pressure on the two plates can be calculated from the momentum which is transferred per time-interval to the two plates. For one photon this yields: $$p=\frac{F}{A}$$ (3) with $`p`$ the pressure, $`F`$ the force, and $`A`$ the surface-area, and: $$F=\frac{dP}{dt}$$ (4) with $`P`$ the momentum and $`t`$ the time. Since we considered non-interacting photons and ideal reflection, we can also write: $$F=\frac{\mathrm{\Delta }P}{\mathrm{\Delta }t}$$ (5) The time which is needed by a photon to return to the same position after two reflections is: $$\mathrm{\Delta }t=\frac{2l}{c}$$ (6) with $`l`$ the distance between the two plates. Consequently: $$p=\frac{c\mathrm{\Delta }P}{2lA}$$ (7) At each reflection a photon transfers twice its momentum to a plate, thus: $$p=\frac{cP}{lA}=\frac{E}{V}=ϵ$$ (8) with $`E`$ the energy of one particle and $`V`$ the volume. For a system of $`N`$ particles this gives: $$p=ϵ=\frac{\sum _{N=1}^{N}E_N}{V}$$ (9) with $`N`$ the number of particles. In this regime the vacuum can be constructed by two approaches: either the number of particles approaches zero or the volume approaches infinity. In both cases $`p=ϵ`$ always holds true. Thus it is possible to apply thermodynamical functions to a vacuum. Since $`p=0`$ and $`dp=0`$ in a vacuum, sonic excitations must represent isobaric changes of state, or $`dp=0`$. The heat which is exchanged in an isobaric process is the enthalpy, H. The total differential of the enthalpy is: $$dH=dU+pdV+Vdp$$ (10) with $`U`$ the internal energy. For isobaric changes of state this reduces to: $$dH=dU+pdV$$ (11) Because of its Lorentz-invariance, every zero-restmass field which propagates in a vacuum must represent isentropic (or adiabatic) changes of state. Since $`p=0`$, no work is applied to the vacuum for sonic excitations, and since for adiabatic changes of state: $$\delta W=-\delta Q$$ (12) with $`W`$ the work and $`Q`$ the heat, it follows that: $$dH=dU=0$$ (13) $$dU=(\frac{\partial U}{\partial T})_VdT+(\frac{\partial U}{\partial V})_TdV=C_VdT+pdV$$ (14) and, since $`p=0`$, $$dU=(\frac{\partial U}{\partial T})_pdT=C_VdT$$ (15) which reproduces the condition that the temperature of a vacuum always remains zero K. (This means that empty space can be heated up, yielding a thermal spectrum in addition to its zero-point spectrum; hence a vacuum is equivalent to empty space at zero K.) Briefly summarising the results obtained until now, one can conclude that no thermodynamical parameter of state is changed during sound-propagation in vacuums. This result is obtained because we used a classical description of a vacuum. Considering quantum-theory, the particle corresponding to sound waves carries a momentum as well as energy. The energy is added to a vacuum merely for time-intervals which are permitted due to the uncertainty-principle, while the momentum of zero-restmass particles is a global property of the corresponding wave-field, since individual particles are not localizable. But since the corresponding wave-field carries a momentum, a wave of determined internal symmetry (longitudinal or transversal) would also apply a pressure on the vacuum, because in this case maxima and minima of pressure would have to appear. Thus we additionally have to assume that the vacuum’s sonic wave-field must be a scalar one. The theory of wave-propagation in periodic structures (lattices) shows another property of the vacuum. This theory shows that waves only suffer from dispersion (i.e.
the phase-velocity is a function of the energy) when the wavelength is comparable to twice the distance between lattice-points. No dispersion is observed for wavelengths which are large compared with the distance between lattice-points. Thus, the absence of dispersion at all wavelengths agrees only with vanishing distances between lattice-points, which is generally known as the continuum-limit. This implies that the vacuum’s sonic waves suffer from dispersion when they propagate in material media and hence obtain a virtual restmass, like photons which propagate inside optically dense media. Since I want to compare the above-derived properties of vacuum-sound waves to the properties of the electron-neutrino, let me first summarize the results obtained. We have to expect the following properties of vacuum-sound waves:

* Sound waves correspond to zero-restmass particles which have to propagate with velocity c in a vacuum.
* The corresponding zero-restmass particle should be emitted only from matter whose density is high enough to permit sound-propagation near velocity c inside the material. This restricts emission of vacuum-sound waves to material of nuclear density. The high energy which is liberated in nuclear processes causes an equivalently short wavelength of the emitted particles.
* Thus, these sound waves should interact negligibly with matter of normal density, e.g. air, due to the high ratio of the phase-velocity of sound propagation in a vacuum to that in air of approximately $`10^6`$, and because of their very short wavelength, which is smaller than double the lattice-spacings of normal materials.
* Propagation of those sound waves inside material of suitable phase-velocity should lead to a non-vanishing virtual restmass, as in the equivalent case of photons when they propagate through optically dense media.
* The corresponding elementary particle should correspond to a scalar wave-field.
* The corresponding particle should carry no electric charge.

As far as it is known to the author, the electron-neutrino $`(\nu _e,\overline{\nu }_e)`$ represents the unique zero-restmass particle which corresponds to a scalar wave-field and is additionally electrically neutral. Thus we now want to consider this elementary particle as corresponding to the vacuum-sound wave’s zero-restmass particle. Other theoretical evidence which agrees with this assumption is provided by the description of neutrino-emission in neutron-stars by statistical physics. Ruderman et al. showed that below the transition-temperature of the neutron fluid to the superfluid state, pairs of excited neutron quasiparticles may recombine, resulting in the emission of neutrino-antineutrino pairs. This process is similar to the phonon emission by recombination of quasiparticles in conventional superconductors, and hence the emission of sound-quanta (phonons) and the emission of neutrino-antineutrino pairs cannot be distinguished in this case. Furthermore, neutrinos are emitted only from matter of nuclear or even higher density, in which the speed of sound approaches the speed of light. They also have very short wavelengths due to their generation in high-energetic, nuclear processes (neutrinos are, in a sense, the $`\gamma `$-quanta of sound), which causes them to be transmitted through materials of normal density. Actually, there is some discussion as to whether the observed rest-energy of the electron-neutrino appears only for neutrino-propagation inside matter or also in a vacuum.
A non-vanishing rest-energy of this particle would enable it to be transformed into different kinds of neutrinos, $`\nu _\mu `$ and $`\nu _\tau `$. Until now it is not clear whether this transformation is possible also in a vacuum or only when the particle propagates in material media. The latter case is similar to that of photons, since photons also possess no rest-energy when they propagate in a vacuum but obtain a rest-energy when they propagate in optically dense media. This leads to the proposal of an experiment which is, in principle, able to show whether neutrinos propagate in a vacuum as zero-restmass particles or as non-zero-restmass particles: if neutrinos are zero-restmass particles when they propagate in a vacuum, a detector which approaches the sun will register an intensity-increase which is exactly proportional to $`d^{-2}`$ ($`d`$ = distance of the detector from the sun).

Conclusion: Firstly, the possibility of sound-propagation in vacuums implies an important consequence for the cooling of neutron-stars by neutrino-emission. If neutrinos are the zero-restmass particles corresponding to vacuum-sound waves, a star whose matter remains in a state which approaches the velocity c for sound propagation is able to emit internal heat energy by the emission of its acoustic excitations in the form of neutrinos. This cooling-mechanism has to be considered as an important process for the calculation of the stellar collapse of stars which have exhausted their nuclear fuel. Secondly, the idea of sound propagation in vacuums distinguishes clearly between Einstein’s and Lorentz’s classical interpretations of the principle of relativity. Lorentz’s interpretation, that there exists only one distinguished inertial frame, the one in which the speed of light is isotropically c, permits the existence of an “ether” (the medium-like properties of empty space), which according to J. Brandes comprises every physical property which empty space has in addition to its volume. Generally it is assumed that there is no measurable difference between both theories of gravitation. But since Lorentz’s interpretation allows for the concept of an ether, this concept is really a necessary consequence of his theory. Thus, the necessary existence of an ether, and hence of sound waves which propagate inside this medium of empty space, consequently requires the existence of a zero-restmass particle which corresponds to these sonic waves. For this particle, only zero rest-energy is permitted when it propagates through a vacuum. If neutrinos can be interpreted as the corresponding particle, the only measurable consequence which leads to a decision between Einstein’s and Lorentz’s interpretations may be its rest-energy in vacuums.

Acknowledgement: I wish to thank all my colleagues who contributed to this paper through engaged discussion.

References:

T. H. Boyer, Scientific American, 70, (August 1968).
P. Yam, Scientific American, 54, (December 1997).
S. A. Bludman, M. A. Ruderman, Phys. Rev. 170, 1176 (1968).
L. Brillouin, Wave Propagation in Periodic Structures, (Dover Publications, 1953) p. 4.
M. Ruderman, E. Flowers, P. Sutherland, Ap.J. 205, 541 (1976).
G. Drexlin, Phys. Bl. 55/2, 25 (1999/2).
J. Brandes, Die relativistischen Paradoxien und Thesen zu Raum und Zeit: Interpretationen der speziellen und allgemeinen Relativitätstheorie, (Verlag relativistischer Interpretationen, 1995) p. 176 ff.
# NMR Detection of Temperature-Dependent Magnetic Inhomogeneities in URu2Si2

## Abstract

We present $`^{29}`$Si-NMR relaxation and spectral data in URu$`_2`$Si$`_2`$. Our echo-decay experiments detect slowly fluctuating magnetic field gradients. In addition, we find that the echo-decay shape (time dependence) varies with temperature $`T`$ and its rate behaves critically near $`T_\mathrm{N}`$, indicating a correlation between the gradient fluctuations and the transition to small-moment order. $`T`$-dependent broadening contributions become visible below $`\sim `$100 K and saturate somewhat above $`T_\mathrm{N}`$, remaining saturated at lower temperatures. Together, the line width and shift suggest partial lattice distortions below $`T_\mathrm{N}`$. We propose an intrinsic minority phase below $`T_\mathrm{N}`$ and compare our results with one of the current theoretical models.

PACS numbers: 71.27.+a, 76.60.-k, 75.10.-b, 75.25.+z, 75.30.-m

In URu$`_2`$Si$`_2`$, there is coexistence of magnetic order ($`\mu \simeq 0.04\mu _B`$/U, $`T_\mathrm{N}\simeq 17.5`$ K) and superconductivity ($`T_\mathrm{c}\simeq 1.2`$ K). Much attention has been focused on the transition at $`T_\mathrm{N}`$, with studies ultimately suggesting a quadrupolar order parameter. However, no direct evidence has been found for quadrupolar ordering. URu$`_2`$Si$`_2`$ has little or no residual chemical disorder or magnetic frustration, but evidence for chemical order does not guarantee magnetic homogeneity. Here, we report $`^{29}`$Si-NMR data (line width $`\sigma `$, shift $`K`$, and spin-echo decay rate $`R`$) which show $`T`$-dependent magnetic inhomogeneities below $`\sim `$100 K that correlate with the unusual magnetic order. The sample was an oriented (alignment $`\sim `$95%), epoxy-embedded powder (particle size $`\sim `$100 $`\mu `$m). Fig. 1(a) shows $`R(T)`$ obtained using conventional Hahn-echo (HE) \[$`R_{\mathrm{HE}}`$: solid circles\] and Carr-Purcell (CP) \[$`R_{\mathrm{CP}}`$: open circles\] sequences ($`H\parallel c`$-axis, 9.4 T). $`R_{\mathrm{CP}}`$ is nearly $`T`$-independent, whereas $`R_{\mathrm{HE}}`$ is not. Also, $`R_{\mathrm{HE}}>R_{\mathrm{CP}}`$ for $`T\lesssim 40`$ K. The shape of the HE-decay signal as a function of time $`t`$ (2$`\times `$ pulse-separation) can be fitted to $`\mathrm{exp}[-(R_{HE}t)^2]`$ or $`\mathrm{exp}[-(R_{HE}t)^3]`$ for $`T>40`$ K, $`\mathrm{exp}(-R_{HE}t)`$ for 40 K$`>T>T_\mathrm{N}`$, and $`\mathrm{exp}[-(R_{HE}t)^{1/2}]`$ for $`T<T_\mathrm{N}`$. $`R_{HE}`$ remains close to its maximum value for $`T<T_\mathrm{N}`$, with no indication of dying out at lower $`T`$’s. Crude estimates of the homogeneous linewidth in the dilute limit (natural abundances of $`^{29}`$Si, $`^{101,99}`$Ru: 5%, 17%, 13%) come out too low ($`\sim `$20% of $`R_{\mathrm{HE}}`$ at low $`T`$) and appear inconsistent with the observed $`T`$-dependent shape of the HE decay. Our $`R`$ measurements are at odds with previously reported experiments. The NMR spectra were fitted to a single Gaussian function. $`\sigma (T)`$, $`K(T)`$ (from the fit), and the magnetic susceptibility $`\chi (T)`$ (from a SQUID magnetometer) are presented in Fig. 1(b). For $`T\gtrsim 100`$ K, both $`K`$ (solid circles) and $`\sigma `$ (triangles) display the same $`T`$-dependence as $`\chi `$ (open circles). For $`T\lesssim 100`$ K, there are non-linearities in the $`K`$ and $`\sigma `$ vs. $`\chi `$ plots (inset). The excess width is inhomogeneous, i.e., $`R_{\mathrm{HE}}`$ can only account for about 1% of the total.
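The shape analysis described above amounts to fitting the echo amplitude to a stretched or compressed exponential, $`M(t)=M_0\mathrm{exp}[-(R_{HE}t)^\beta ]`$, and tracking both $`R_{HE}`$ and the exponent $`\beta `$ with temperature. A minimal sketch of such a fit, on synthetic data (all numbers hypothetical, for illustration only):

```python
import numpy as np
from scipy.optimize import curve_fit

def echo_decay(t, M0, R, beta):
    """Stretched/compressed exponential echo amplitude M0*exp(-(R*t)**beta)."""
    return M0 * np.exp(-(R * t) ** beta)

# Synthetic data mimicking a decay with R = 2.0 ms^-1 and beta = 1/2
rng = np.random.default_rng(0)
t = np.linspace(0.05, 3.0, 40)                       # t = 2 x pulse separation [ms]
y = echo_decay(t, 1.0, 2.0, 0.5) + 0.01 * rng.normal(size=t.size)

popt, _ = curve_fit(echo_decay, t, y, p0=[1.0, 1.0, 1.0],
                    bounds=([0, 0, 0], [np.inf, np.inf, 4]))
M0, R, beta = popt
print(f"M0 = {M0:.3f}, R_HE = {R:.3f} ms^-1, beta = {beta:.3f}")
```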
In a solid, there is no reason to expect a large difference between $`R_{\mathrm{HE}}`$ and $`R_{\mathrm{CP}}`$ (Fig. 1(a), $`T\gtrsim 40`$ K). In liquids it is quite common to find such a difference, because the combination of nuclear diffusion and magnetic field gradients creates more dephasing of the echo signal in the HE method; the CP method measures a smaller $`R`$ by eliminating the dephasing effect. By comparison, having $`R_{\mathrm{HE}}>R_{\mathrm{CP}}`$ ($`T\lesssim 40`$ K) here implies randomly moving or fluctuating modulations of the U-spin system that can be sensed by the static $`^{29}`$Si nuclei. We believe that this is the first time that such an effect has been measured in a heavy-fermion system. The $`T`$-dependence of $`R_{\mathrm{HE}}`$ and $`\sigma `$ could suggest local charge-density-wave (CDW) structures as a source of U-spin modulations. In CDW systems, $`R`$ can display a $`(T-T_0)^{-1/2}`$ dependence for $`T\to T_0`$ ($`T_0`$: CDW transition temperature) \[inset Fig. 1(a); $`T_0\simeq T_N`$\]. Also, incommensuration/randomness of forming CDWs would distribute the U-$`^{29}`$Si transferred hyperfine coupling locally and explain the observed $`T`$-dependence of $`\sigma `$. In addition, there is no clear change in NMR line shape below $`T_\mathrm{N}`$, so only a fraction of the $`^{29}`$Si nuclei must be affected; the majority would sample an underlying paramagnetic fluid. Given the coexistence of magnetism and superconductivity, and that two kinds of transitions seem to be needed to explain the macroscopic anomalies, an interpretation in terms of two different phases (as opposed to introducing a $`T`$-dependent hyperfine coupling) seems plausible. Therefore, we fit the data using the simple relations $`K=K_0+a\chi (T)+K_\alpha (T)`$ and $`\sigma ^2=\sigma _0^2+(\delta a)^2\chi ^2(T)+\sigma _\alpha ^2(T)`$. Here $`K_0=0.06`$% (contact hyperfine interaction), $`a=3.5`$ kOe/$`\mu _B`$, $`\sigma _0=0.005`$%, and $`(\delta a)\simeq 0.5`$ kOe/$`\mu _B`$. $`K_\alpha (T)`$ and $`\sigma _\alpha (T)`$ are the contributions due to the second phase. In Fig. 1(c), a plot of $`\sigma _\alpha `$ vs. $`K_\alpha `$ shows proportionality at high $`T`$ (slope $`\simeq 1`$) and a breakdown of linearity somewhat above $`T_\mathrm{N}`$. This is more qualitative evidence that the inhomogeneous magnetism is correlated with the unusual transition. The slope in this plot represents the fractional width of the distribution of U-$`^{29}`$Si transferred hyperfine couplings for the second phase. The abrupt slope change just above $`T_\mathrm{N}`$ (from $`\simeq 1`$ to $`\simeq 0`$) would signal an intrinsic rearrangement of the U-spins in this phase with respect to the $`^{29}`$Si nuclei, i.e., partial lattice distortions, since no distortion affecting the whole crystal has been detected. In conclusion, we have found evidence for random or incommensurate structures in URu$`_2`$Si$`_2`$ at low $`T`$. We propose slowly moving and/or fluctuating charge-density modulations, which could in turn serve as domain walls to antiferromagnetic regions, consistent with the resistance anomaly measured at $`T_\mathrm{N}`$. One could also argue that our results (local distortions near, but not at, $`T_\mathrm{N}`$, as well as U-spin modulation fluctuations as $`T\to T_\mathrm{N}`$) are consistent with the picture presented in Ref. . Finally, the spectra for $`H\perp c`$-axis are also consistent with U-spin modulations (Fig. 1(d)).
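A minimal sketch of the two-phase decomposition just described, using the quoted fit constants; the measured arrays here are placeholders standing in for the actual data of Fig. 1(b):

```python
import numpy as np

# Fit constants quoted in the text
K0, a       = 0.06, 3.5     # contact shift [%], hyperfine coupling [kOe/mu_B]
sigma0, d_a = 0.005, 0.5    # residual width [%], coupling spread [kOe/mu_B]

def second_phase(K, sigma, chi):
    """K_alpha = K - K0 - a*chi;  sigma_alpha^2 = sigma^2 - sigma0^2 - (d_a*chi)^2.
    chi must be supplied in units such that a*chi is a shift in %."""
    K_alpha = K - K0 - a * chi
    s2 = sigma**2 - sigma0**2 - (d_a * chi)**2
    return K_alpha, np.sqrt(np.clip(s2, 0.0, None))

# Placeholder arrays (hypothetical numbers, for illustration only)
chi   = np.array([1.0e-2, 1.5e-2, 2.0e-2])
K     = np.array([0.12, 0.15, 0.20])
sigma = np.array([0.02, 0.03, 0.05])
print(second_phase(K, sigma, chi))
```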
Systematic field and orientation dependence studies of the NMR anomaly are being carried out to make progress in this direction. We acknowledge helpful discussions with R.E. Walstedt and D.L. Cox. This research was supported by an award from Research Corporation and NSF grant DMR-9820631.
# Can pulling cause right- to left-handed structural transitions in negatively supercoiled DNA double-helix?

## Abstract

The folding angle distribution of stretched and negatively supercoiled DNA double-helix is investigated based on a previously proposed model of double-stranded biopolymers (H. Zhou et al., Phys. Rev. Lett. 82, 4560 (1999)). It is shown that pulling can transit a negatively supercoiled DNA double-helix from the right-handed B-form to a left-handed configuration which resembles DNA Z-form in some important respects. The energetics of this possible transition is calculated and the comparison with recent experimental observations is qualitatively discussed.

Because of its vital biological significance, the elasticity of DNA has invoked considerable interest during recent years, and it is now known experimentally that radical transitions in DNA internal structure can be induced by the action of mechanical forces and/or torques. For example, pulling a DNA chain with a force of $`70`$ piconewtons (pN) will convert the DNA standard B-form conformation into an over-stretched S-form; and under the joint action of a positive torque and a force of about $`3`$ pN, a DNA will take on a novel P-form with exposed bases. Here we suggest a further possibility for this transition scenario and show that under-twisted (negatively supercoiled) DNA can take on a left-handed configuration under the action of a moderate stretching force. Such a left-handed conformation is found to resemble DNA Z-form in some important respects. The energetics of this possible transition is calculated and a qualitative comparison with very recent experiments is also performed. The extension vs supercoiling relation for under-twisted DNA is studied based on a model proposed earlier (see Fig. 1a). For small pulling forces ($`\lesssim 0.3`$ pN), a supercoiled DNA can shake off its twisting stress by writhing its central axis and forming plectonemic structures. However, this leads to shortening of the DNA end-to-end distance and hence becomes very unfavorable as the force increases. At this stage, the torsional stress caused by supercoiling begins to unwind the B-form double-helix and triggers the transition of DNA internal structure, where a continuously increasing portion of DNA takes on some certain new configuration as supercoiling increases, while its total extension keeps almost invariant. Information about the new configuration can be revealed by the folding angle $`\phi `$ distribution $`P(\phi )`$. This distribution is calculated by $$P(\phi )=\int \mathrm{\Phi }^2(𝐭,\phi )\,d𝐭,$$ where $`𝐭`$ is the tangent vector of DNA’s central axis and $`\mathrm{\Phi }(𝐭,\phi )`$ is the (normalized) ground-state eigenfunction of the Green equation, Eq. (9) in Ref. 8. The folding angle distribution (Fig. 1b) has the following aspects: when the torsional stress is small (with the supercoiling degree $`|\sigma |<0.025`$), the distribution has only one narrow and steep peak at $`\phi \simeq +57.0^{\circ}`$, indicating that DNA is completely in B-form. With the increase of torsional stress, however, another peak appears at $`\phi \simeq -48.6^{\circ}`$, and the total probability for the folding angle to be negatively valued increases gradually with supercoiling. Since negative folding angles correspond to left-handed configurations, we can conclude that, with increasing supercoiling, a left-handed DNA conformation is nucleated and then elongates along the DNA chain as B-DNA disappears gradually. The whole chain becomes completely left-handed at $`\sigma \simeq -1.85`$.
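The evolution just described can be summarized by the weight of the negative-angle (left-handed) part of $`P(\phi )`$. As an illustration only (the true $`P(\phi )`$ comes from the ground-state eigenfunction above; the two-Gaussian form, peak widths, and weights below are assumptions), the left-handed fraction can be obtained by numerical integration:

```python
import numpy as np

def P(phi, w, width=np.deg2rad(8.0)):
    """Illustrative two-peak model of the folding-angle distribution:
    Gaussians at the B-form (+57.0 deg) and left-handed (-48.6 deg) peaks,
    with hypothetical weight w for the left-handed peak and width 8 deg."""
    g = lambda mu: np.exp(-(phi - mu) ** 2 / (2 * width ** 2))
    return (1 - w) * g(np.deg2rad(57.0)) + w * g(np.deg2rad(-48.6))

phi = np.linspace(-np.pi / 2, np.pi / 2, 2001)
for w in (0.0, 0.5, 1.0):
    pdf = P(phi, w)
    pdf /= np.trapz(pdf, phi)                      # normalize the distribution
    left = np.trapz(pdf[phi < 0], phi[phi < 0])    # probability of phi < 0
    print(f"w = {w:.1f}: left-handed fraction = {left:.2f}")
```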
It is worth noticing that (i) as the supercoiling degree changes, the positions of the two peaks of the folding angle distribution remain almost fixed, and (ii) between these two peaks there exists an extended region of folding angle from $`0`$ to $`-\pi /6`$ which always has only an extremely small probability of occurrence. Thus, a negatively supercoiled DNA can have two possible stable configurations, a right-handed B-form and a left-handed configuration with an average folding angle of $`-48.6^{\circ}`$. A transition between these two structures for a DNA segment will generally lead to an abrupt and finite variation in the folding angle. The sum of the average base-stacking energy and the torsional energy caused by the external torque (see caption of Fig. 2a) as a function of torque $`\mathrm{\Gamma }`$ is shown in Fig. 2a, and the relation between $`\sigma `$ and $`\mathrm{\Gamma }`$ in Fig. 2b. From these figures we can infer that (i) for negative torque smaller in magnitude than the critical value $`\mathrm{\Gamma }_c\simeq -3.8`$ $`k_BT`$, DNA can only stay in the B-form state; (ii) near $`\mathrm{\Gamma }_c`$ DNA can be either right- or left-handed and, as negative supercoiling increases (see Fig. 2b), more and more DNA segments will stay in the left-handed form, which is much lower in energy ($`-2.0`$ $`k_BT`$ per base pair (bp)) but stable only when the torque reaches $`\mathrm{\Gamma }_c`$; (iii) for negative torque larger in magnitude than $`\mathrm{\Gamma }_c`$ DNA is completely left-handed. B-form DNA at $`\mathrm{\Gamma }_c`$ has energy of about $`0.0`$ $`k_BT`$ per bp, indicating that the work done by the external torque just cancels the base-stacking energy. Therefore, it might not be enough to further break the hydrogen bonds between DNA complementary bases and cause denaturation. Nevertheless, since the transition from right- to left-handed structure requires radical rearrangement of DNA base pairs, the possibility of transient denaturation in the DNA double-helix cannot be ruled out. This is a subtle question, and maybe transient denaturations can occur in the weaker AT-rich regions, or even be induced and then captured by the added homologous single-stranded DNA probes in the solution. For the left-handed state revealed by Figs. 1b and 2 we have obtained that, at $`\mathrm{\Gamma }=-4.0`$ $`k_BT`$ (where DNA is completely left-handed), the average rise per bp is about $`3.83`$ $`\AA `$, and the pitch per turn of helix is $`46.76`$ $`\AA `$, with the number of bps per turn of helix being $`12.19`$. Notice that these characteristic quantities are very similar to those of the DNA left-handed Z-form, which are $`3.8`$ $`\AA `$, $`45.6`$ $`\AA `$, and $`12`$ bps, respectively. We suspect that the identified left-handed configuration belongs to DNA Z-form. Recently, Léger et al. also pointed out that Z-form structure should be included to qualitatively interpret their experimental result.
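As a simple consistency check on the helical parameters quoted above, the number of base pairs per turn follows directly from the pitch and the rise, and can be compared with the canonical Z-DNA values:

```python
# Helical parameters quoted in the text (left-handed state at Gamma = -4.0 k_BT)
rise_model, pitch_model = 3.83, 46.76   # Angstrom per bp, Angstrom per turn
rise_Z, pitch_Z         = 3.8, 45.6     # canonical Z-DNA values

print(f"model: {pitch_model / rise_model:.2f} bp per turn")  # ~12.2, vs quoted 12.19
print(f"Z-DNA: {pitch_Z / rise_Z:.2f} bp per turn")          # 12.0
```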
# A CCD Vertex Detector for a High-Energy Linear e+e- Collider

## 1 Introduction to CCDs

Charge-coupled devices (CCDs) were originally applied in high-energy particle physics at a fixed-target charm-production experiment, and their utility for high-precision vertexing of short-lived particles was quickly realised. More recently, two generations of CCD vertex detectors (VXDs) were used in the $`e^+e^-`$ colliding-beam environment of the SLD experiment at the first linear collider, SLC, at SLAC. CCDs are silicon pixel devices which are widely used for imaging; one common application is in home video cameras, and there is extensive industrial manufacturing experience in Europe, Japan and the US. CCDs can be made with high pixel granularity. For example, those used at SLD comprise 20$`\times `$20 $`\mu `$m$`^2`$ pixels, offering the possibility of intrinsic space-point resolution of better than 4 $`\mu `$m, determined from the centroid of the small number of pixels which are hit when a charged particle traverses the device. The active depth in the silicon is only 20 $`\mu `$m, so each pixel is effectively a cube of side 20 $`\mu `$m, yielding true 3-dimensional spatial information. Furthermore, this small active depth allows CCDs to be made very thin, ultimately perhaps as thin as 20 $`\mu `$m, which corresponds to less than 0.1% of a radiation length ($`X_0`$), and yields very small multiple scattering of charged particles. Also, large-area CCDs can be made for scientific purposes, allowing an elegant VXD geometry with very few (if any) cracks or gaps for readout cables or support structures. For example, the second-generation CCDs used at SLD were of size 80 $`\times `$ 16 mm$`^2`$. The combination of superb spatial resolution, low multiple scattering, and large-area devices, with a decade of operating experience at the first linear collider, SLC, hence makes CCDs a very attractive option for use in a vertex detector at a second-generation linear collider (LC). Such colliders are being actively pursued by consortia centred around SLAC and KEK (NLC/JLC), DESY (TESLA) and CERN (CLIC). In Section 2 I review briefly the SLD CCD VXD experience; more details are given in a complementary presentation. In Section 3 I discuss the physics requirements for the LC VXD. In Section 4 I present the current conceptual design, and the simulated flavour-tagging performance. The R&D programme that is underway to achieve these goals is described in Section 5. In Section 6 I give a brief summary and outlook.

## 2 SLD CCD VXD Experience

The SLD experiment has utilised three CCD arrays for heavy-flavour tagging in $`Z^0`$ decays. In 1991 a 3-ladder prototype detector, VXD1, was installed for initial operating experience. In 1992 a complete four-layer vertex detector, VXD2, was installed and operated until 1995. VXD2 utilised 64 ladders arranged in 4 incomplete azimuthal layers (Fig. 1). Due to the incomplete coverage, a track at a polar angle of 90$`^{\circ}`$ to the beamline passed through, on average, only 2.3 layers, and $`\ge `$2-hit coverage extended down to polar angles within $`|\mathrm{cos}\theta |\le 0.75`$. The device contained a total of 512 roughly $`1\times 1`$ cm$`^2`$ CCDs, giving a total of 120M pixels. In 1996 a brand new detector, VXD3, was installed that capitalised on improvements in CCD technology since VXD2 was originally designed.
The main improvement was to utilise much larger, $`8\times 1.6`$ cm$`^2`$, and thinner ($`\times `$3) CCDs, which allowed a significantly improved geometry (Fig. 1). Ladders were formed from two CCDs placed end-to-end (with a small overlap in coverage) on opposite sides of a beryllium support beam, and arranged in 3 complete azimuthal layers, with a ‘shingled’ geometry to ensure no gaps in azimuth. A much better acceptance of $`|\mathrm{cos}\theta |\le 0.85`$ ($`\ge `$3 hits) and $`|\mathrm{cos}\theta |\le 0.90`$ ($`\ge `$2 hits) was achieved with these longer ladders. 96 CCDs were used, giving a total count of 307M pixels. In operations from 1996 through 1998 VXD3 performed beautifully, yielding a measured single-hit resolution of 3.8 $`\mu `$m, and a track impact-parameter resolution of 7.8 $`\mu `$m (9.7 $`\mu `$m) in $`r\varphi `$ ($`rz`$) respectively, measured using 46 GeV $`\mu `$ tracks in $`Z^0\to \mu ^+\mu ^-`$ events. The multiple scattering term was found to be $`33/p\mathrm{sin}^{3/2}\theta `$ $`\mu `$m. The measured precision on the position of the micron-sized (mm-long) SLC interaction point (IP) was found to be 3 $`\mu `$m in $`r\varphi `$ (30 $`\mu `$m in $`rz`$). As an illustration of the vertexing performance, the resolution on the decay-length of $`B_s`$ mesons w.r.t. the IP was estimated to be characterised by a double-Gaussian function with widths of 46 $`\mu `$m, representing 60% of the population, and 158 $`\mu `$m representing the remainder, which is outstanding compared with other $`e^+e^-`$ experiments. For inclusive $`b`$-hemisphere tagging a sample purity of 98% can be obtained with a tag efficiency of up to 45%, and for inclusive $`c`$-tagging a sample purity of around 70% can be obtained with a tag efficiency of up to 20%. Again, this performance is unsurpassed by other experiments. It is worth noting several important lessons learned from the SLD VXD3 experience:

* The intrinsic spatial resolution of 6 $`\mu `$m, which one would naively estimate for a pixel size of 20 $`\mu `$m, was significantly improved to better than 4 $`\mu `$m by capitalising on the charge-sharing between several pixels and performing cluster centroid-finding.
* The complete 3-layer coverage for $`|\mathrm{cos}\theta |\le 0.85`$ allowed vector hits to be found within the VXD alone, which could then be included at an earlier stage of the track-finding algorithm, allowing a ‘pointing out’ rather than ‘pointing in’ approach to track-linking with the main tracking chamber. One (or two) VXD hits were also included in the track-finding of low polar-angle $`\mu ^+\mu ^-`$ events, yielding an improved acceptance and significantly better momentum determination.
* Even though in each triggered event there were, on average, roughly 15,000 pixels hit by background particles from SLC, the large pixellation of VXD3 ensured that the occupancy was approximately $`5\times 10^{-5}`$, yielding essentially zero confusion between bona fide hits on tracks and background hits.
* The long readout time of VXD3, around 180 ms, did not lead to any deadtime at all. If a second trigger was taken during the readout period of the previously triggered event, the VXD readout was simply restarted and two ‘overlapping frames’ were recorded. The low hit density ensured that there was zero confusion of hits between the two overlapping events.
* Despite the VXD3 ladders being very thin, roughly 0.4% $`X_0`$, multiple scattering dominated the impact-parameter resolution for tracks with momenta less than 3 GeV/$`c`$, i.e. the vast majority of tracks in $`Z^0`$ decays.

These lessons have proven invaluable for consideration of the design of a VXD for the future LC.

## 3 Linear Collider Physics Demands

The second-generation linear collider will probably be built to operate at c.m. energies in the range between the current LEP2 energy of around 200 GeV and up to around 0.8–1 TeV. The strategy for choosing the energy steps will be developed as we learn more about the Higgs boson(s) and beyond-Standard-Model particles from searches at LEP2, the Tevatron, HERA and the LHC. Consideration of a high-statistics run at the $`Z^0`$ resonance, for super-precise measurements of electroweak parameters, is also under discussion. Many of the interesting physics processes can be characterised as multijet final states containing heavy-flavour jets. Some representative examples are:

1) $`e^+e^-\to Z^0H^0\to q\overline{q}\,b\overline{b}`$
2) $`e^+e^-\to Z^0H^0\to q\overline{q}\,c\overline{c}`$
3) $`e^+e^-\to Z^0H^0\to q\overline{q}\,\tau ^+\tau ^-`$
4) $`e^+e^-\to \mathrm{t}\overline{\mathrm{t}}\to bW^+\,\overline{b}W^-`$
5) $`e^+e^-\to H^0A^0\to \mathrm{t}\overline{\mathrm{t}}\,\mathrm{t}\overline{\mathrm{t}}`$
6) $`e^+e^-\to \stackrel{~}{t}\stackrel{~}{\overline{t}}\to \stackrel{~}{\chi ^0}c\,\stackrel{~}{\chi ^0}\overline{c}`$
7) $`e^+e^-\to \mathrm{t}\overline{\mathrm{t}}H^0\to bW^+\,\overline{b}W^-\,b\overline{b}`$

It should be noted that charm- and $`\tau `$-tagging, as well as $`b`$-tagging, will be very important. For example, measurements of the branching ratios for (the) Higgs boson(s) to decay into $`b`$, $`c`$, and $`\tau `$ pairs (examples 1–3) (and/or $`W`$, $`Z^0`$ and $`t`$ pairs for a heavy Higgs) will be crucial to map out the mass-dependence of the Higgs coupling and to determine the nature (SM, MSSM, SUGRA, …) of the Higgs particle(s). Example 4 could yield a 6-jet final state containing 2 $`b`$-jets. Example 5 could yield a 12-jet final state containing 4 $`b`$-jets. Example 6 comprises 2 charm jets + missing energy in the final state. Example 7 could yield an 8-jet final state containing 4 $`b`$-jets. Because of this multijet structure, even at $`\sqrt{s}`$ = 1 TeV many of these processes will have jet energies in the range 50–200 GeV, which is not significantly larger than at SLC, LEP or LEP2. The track momenta will be correspondingly low. For example, at $`\sqrt{s}`$ = 500 GeV the mean track momentum in $`e^+e^-\to q\overline{q}`$ events is expected to be around 2 GeV/$`c`$, so that with the current SLD VXD3 multiple scattering would limit the impact-parameter resolution for the majority of tracks! Furthermore, some of these processes may lie close to the boundary of the accessible phase space, suggesting that extremely high flavour-tagging efficiency will be crucial for identifying a potentially small sample of events above a large multijet combinatorial background.
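Since several of the benchmark processes above require tagging two or four heavy-flavour jets simultaneously, the usable event yield scales as a power of the single-jet tagging efficiency. A minimal sketch (illustrative efficiencies only) makes the scaling explicit:

```python
# An n-jet heavy-flavour tag keeps a fraction eps**n of the signal events,
# so doubling the single-jet efficiency eps gains a factor 2**n in usable yield.
def n_jet_yield(eps, n):
    return eps ** n

for eps in (0.3, 0.6):                       # illustrative efficiencies only
    print(f"eps = {eps:.1f}: 4-jet yield fraction = {n_jet_yield(eps, 4):.4f}")

print(n_jet_yield(0.6, 4) / n_jet_yield(0.3, 4))   # -> 16.0, the factor quoted below
```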
It is worth bearing in mind that a doubling of the single-jet tagging efficiency at high purity is equivalent to a luminosity gain of a factor of 16 for a 4-jet tag (examples 5, 7); it is likely to be a lot cheaper (and easier) to achieve this gain by building a superior VXD than by increasing the luminosity of the accelerator by over an order of magnitude!

## 4 LC VXD Conceptual Design

The LC VXD conceptual design is illustrated in Fig. 2. The main goals to be met to achieve this design can be summarised as:

* Utilise large-area CCDs to construct a geometrically elegant, large array.
* Obtain VXD self-tracking with redundancy by building a 5-layer device.
* Require as short a track extrapolation to the IP as possible by putting the first layer as close as 12 mm to the beamline.
* Reduce multiple scattering by thinning the ladders to as little as 0.1% $`X_0`$ per layer.
* Maintain the low occupancy, and hence zero hit confusion, by increasing the pixel readout rate to 50 MHz.
* Improve the radiation tolerance beyond the $`10^{10}`$ neutrons/cm$`^2`$ level.

We have simulated the jet flavour-tagging performance that could be achieved with such a ‘dream’ VXD. We have adapted the SLD charm- and bottom-jet tags, which are based on the mass of secondary decay vertices reconstructed using a topological vertex-finding algorithm. The purity vs. efficiency trajectories are shown in Fig. 3, where the current SLD results, as well as results for an earlier LC VXD design, are shown for reference. For $`b`$-jet tagging a sample purity of 98% can be maintained for a tagging efficiency up to around 70%, almost a factor of two better than the current (world’s best) SLD VXD3. In the case of $`c`$-jet tagging a sample purity of 85% can be achieved for an efficiency up to 75%, which is a substantial gain in both purity and efficiency ($`\times `$3.5) w.r.t. SLD VXD3. A substantial R&D programme is needed to achieve this impressive flavour-tagging potential.

## 5 R&D Programme

The UK-based Linear Collider Flavour Identification collaboration was formed, and was approved by the UK funding committee in October 1998, to initiate a 3-year programme of research and development in order to address these design challenges. The collaboration is working closely with the UK-based CCD manufacturer, EEV, as well as with colleagues in the US and Japan who are also engaged in CCD R&D for the LC VXD. Table 1 summarises the improvement factors that it is hoped to achieve, relative to the current SLD VXD3, for various parameters. In the first phase of the R&D programme so-called ‘setup grade’ CCDs will be purchased from EEV and used to test individual design aspects. These are typically devices with some defect(s) of a mechanical or electrical nature, but which are perfectly adequate for testing unaffected performance aspects. Two modular CCD test setups are being constructed, one located at the Rutherford-Appleton Laboratory (RAL) and the other at Liverpool University. The aim is to use the two test rigs to focus on complementary aspects: readout and electrical tests at RAL, and radiation damage studies and low-temperature operation at Liverpool. In addition, a metrology setup for mechanical testing and measurement is being developed at Oxford and RAL, which will focus on thin prototype ladder supports and thermal distortion characterisation. In a later phase custom-made CCDs may be commissioned from the manufacturer, and system design/integration issues will be investigated.
By designing modular test setups in which only the local motherboard is CCD-specific it is hoped that CCDs can be readily exchanged with our US and Japanese colleagues, as well as with different CCD manufacturers. ### 5.1 CCD Area A modest increase in length of the longest CCD, by a factor of 1.6, is needed for the geometry shown in Fig. 2, with a corresponding area increase by just over a factor of two. With the widespread movement to 8-, and even 12-, inch wafers in the silicon chip industry there is not believed to be any fundamental obstacle towards manufacturing CCDs of this size. However, the increased area does impinge directly on the rigidity/stability requirements for the low-mass support structures. ### 5.2 Ladder Thickness If the CCDs can be successfully thinned down to the thickness of the epitaxial active silicon surface layer, they could in principle be as thin as 20 $`\mu `$m, or 0.02% $`X_0`$. Such CCDs would, if unsupported, immediately curl up into a ‘swiss roll’, and so the thinning process, and adhesion to a thin (beryllium) support beam, must be carefully thought out. A schematic thinning process has been devised and is illustrated in Fig. 4. Before thinning one would adhere the top surface of a processed CCD wafer with temporary adhesive, eg.wax, to a dummy wafer. One would then lap and etch the back of the processed wafer to the desired thickness, and dice the wafer into individual CCD units. One would then adhere the back CCD surface to the support beam with permanent adhesive, and finally disengage the dummy silicon block by removing the temporary adhesive. The support beam itself requires careful design to achieve a low-mass structure with the desired planarity and mechanical stability. One possibility is to use a thin flat Be beam with an intrinsic ‘omega’ or ‘V’ support structure. Finite-element analysis simulations have shown that such structures offer the possibility of small, and predictable, deformations under temperature cycling of the order of tens of $`\mu `$m. If the support beam comprises 250 $`\mu `$m Be-equivalent, or 0.07% $`X_0`$, and the adhesive an additional 0.02% $`X_0`$, the total ladder material budget might be made as low as 0.11% $`X_0`$. ### 5.3 Backgrounds, Inner Radius and Readout Rate The radial position of the innermost layer w.r.t. the beamline is strongly influenced by the accelerator-related backgrounds, and is correlated with the pixel readout rate, which determines the hit density accumulated during the CCD readout cycle, and hence the degree of fake hit confusion for bonafidetracks. The main sources of accelerator-related backgrounds are * Muons from beam interactions with upstream collimators. * e<sup>+</sup>e<sup>-</sup>pairs from converted photons and ‘beamstrahlung’. * Photoproduced neutrons from the interaction region material and back-shine from the beam dumps. * Hadrons from beam-gas and $`\gamma \gamma `$ interactions. From the occupancy point-of-view the most serious are the e<sup>+</sup>e<sup>-</sup>pairs. For example, beam-beam interaction simulations indicate that tens of thousands of e<sup>+</sup>e<sup>-</sup>pairs will be created per bunch crossing of the accelerator. A significant fraction of these populate the $`>`$ 1 GeV tail of the energy distribution, and it is clear that a large detector magnetic field will be required to contain the bulk within the beampipe, and maintain an acceptably low hit density in the VXD. Field strengths of between 3 and 6 T are being considered by the detector working groups. 
For example, in a 3 T field at TESLA 0.2 hits/mm<sup>2</sup> per beam-crossing are expected at the nominal first-layer radius of 12 mm. At NLC/JLC the corresponding figure is 0.1 hits/mm<sup>2</sup> in a 6 T field. At first sight these numbers do not appear forbidding. However, one must then consider the serial pixel readout of the CCD and the time-structure of the accelerator bunch trains. In the NLC/JLC case there are 100 bunches per train, with a bunch separation of 1.4 ns and a train separation of 8.3 ms. Hence if the CCD pixel readout rate is 5 MHz, as in SLD VXD3, it would take roughly 20 bunch-trains to read out a complete CCD, implying an integrated hit density of 200/mm<sup>2</sup>. Since there are 2500 pixels/mm<sup>2</sup> this implies an occupancy of almost 10%, which would lead to significant hit confusion. An occupancy of 1% would be much more manageable, and this could be achieved by increasing the pixel readout rate by a factor of 10, to 50 MHz. This will be one of the main topics in the CCD R&D programme. In the TESLA design the situation is made more difficult by the fact that there are roughly 3000 bunches per train, with a bunch separation of 337 ns and a train separation of 200 ms. Although a CCD could be read out completely, even at 5 MHz, between bunch trains, the hit density integrated during the readout cycle would be 600/mm<sup>2</sup>. Even with an increased pixel readout rate of 50 MHz the resulting 60 hits/mm<sup>2</sup> is not comfortable. For this reason we are also investigating the possibility of a higher-multiplex CCD readout scheme in which groups of, or even individual, columns would be read out through individual readout nodes. The hit densities are predicted to be significantly lower at larger radius, and are not expected to be a concern for layers 2-5, which lie at $``$24 mm from the beamline. If the inner layer were omitted, or if the whole detector were pushed out in radius to start at 24 mm, the extrapolation from the first track hit back towards the IP would be doubled, and our simulations have shown that the flavour tagging performance would be noticeably worse. Moreover, the backgrounds in the real accelerator may be larger than the current estimates suggest, so that in any case an increased CCD readout rate will help to secure additional ‘headroom’ against such an eventuality. It is likely that by a combination of increased pixel readout rate and multiplexed CCD readout, a large detector B-field, and in the last resort a (compromised) larger inner layer radius, the hit density can be kept at or below the 10 hits/mm<sup>2</sup> level. ### 5.4 Radiation Damage Studies Preliminary simulations have indicated that the neutron flux in the inner detector may be at the level of $`10^8`$ \- 10<sup>9</sup> per cm<sup>2</sup>. Though orders of magnitude less than at the LHC, the rate is large enough that more detailed simulations are warranted, and that consideration be given to the radiation tolerance of the CCDs. In many years of normal operating conditions at SLC, no radiation damage was observed in the CCDs. However, during one unusual period in which undamped beams were delivered for accelerator studies a noticeable charge-transfer inefficiency (CTI) was observed. This effect was completely ameliorated by cooling the CCDs by an additional 20 degrees to around 185 K. Neutrons are believed to cause a CTI by producing bulk damage sites in the silicon lattice, which act as charge-trapping centres. 
The minimum-ionising signal of roughly 2,000 electrons typically undergoes several thousand serial transfers from pixel to pixel before reaching the readout node, and a CTI of $`5\times 10^4`$ would cause a serious loss of signal. In recent neutron irradiation studies using VXD3 CCDs, corresponding to an integrated dose of 6.5$`\times 10^9`$ 1-4 MeV neutrons/cm<sup>2</sup>, a mean m.i.p. signal loss of 29% was observed. Interestingly, after flushing the CCD with charge, the loss was reduced to 18%, and reduced further to 11% by lowering the operating temperature from 185 K to 178 K. These studies suggest that charge-flushing may serve to fill, at least temporarily, the charge traps caused by radiation damage, and that the full-trap lifetime can be extended by lowering the temperature. We intend to pursue both approaches and there is good reason to believe that CCDs can be made radiation tolerant at the 10<sup>10</sup> neutrons/cm<sup>2</sup> level, which is 1-2 orders of magnitude above the estimated flux at the LC. In addition, the development of more optimised shielding strategies may serve to reduce the expected neutron flux in the interaction region. Finally, ideas to reengineer the CCD architecture, so as to reduce the effective charge-storage volume and hence the sensitivity to bulk damage, may also be pursued. ## 6 Summary and Outlook In summary, CCDs offer a very attractive option for a high-energy linear collider vertex detector. CCD VXDs have been ‘combat-tested’ at the first linear collider, SLC, and have allowed SLD to achieve unrivalled $`b`$ and $`c`$-jet tagging performance. Through further improvements, via the production of thinner, larger-area, faster-readout CCDs, there is every reason to expect that 21st-Century flavour-tagging at the next-generation linear collider will be substantially better, and able to meet the demands of high-efficiency tagging in a multijet environment. Groups centred in the USA/Japan and Europe are currently preparing the technical design reports for the accelerator and detector(s), which will be presented to the respective funding agencies in 2001/2. From the technical point-of-view construction could start as early as 2003, with first physics in 2008/9. The LCFI Collaboration has started an R&D programme to address the CCD design issues, such that a vertex detector blueprint could be credibly produced on a matching timescale. We look forward to presenting the first results of this endeavour at Vertex2000!
no-problem/9908/cond-mat9908361.html
ar5iv
text
# Hall effect of epitaxial double-perovskite Sr2FeMoO6 thin films ## I Introduction Half-metallic ferromagnetic oxides as the manganites have reattracted much interest. Recently, a large negative magnetoresistance (MR) in a further oxide Sr<sub>2</sub>FeMoO<sub>6</sub> (SFMO) was observed . This compound has an ordered double-perovskite structure and is, like the manganites, a ferrimagnetic (or ferromagnetic ) oxide with a Curie-temperature of 410-450 K and a highly spin-polarized conduction band. For applications as a magnetic field sensor at room temperature a high spin-polarization is necessary to obtain a large magnetoresistance in low fields . Therefore spin-polarized magnetic compounds with Curie temperatures well above 300 K are interesting. We prepared epitaxial thin films of SFMO and investigated their structural and magnetic properties. Further, the longitudinal and transverse resistivity were measured as a function of temperature and magnetic field. ## II Experiment Using pulsed laser deposition we prepared epitaxial thin films of SFMO from a stoichiometric target on (100) SrTiO<sub>3</sub> (STO) substrates. The substrate temperature during deposition was 700C in an oxygen partial pressure of $`10^3`$ Pa. In x-ray diffraction only film reflections corresponding to a (00$`\mathrm{}`$) orientation are visible. In Fig. 1 a detail of the $`\theta /2\theta `$-scan, showing the (008) reflection peak of SFMO nearby the (004) reflection peak of STO, is presented. A high degree of orientation of the $`c`$-axis is achieved. Rocking angle analysis of the SFMO (004) reflection shows an angular spread of 0.04, as can be seen in the inset of Fig. 1. The in-plane orientation was investigated by $`\varphi `$-scans using the {224} reflections. The film axes are parallel to the substrate axes with a 4-fold in-plane symmetry, demonstrated in Fig. 2. With this preparation conditions the $`c`$-axis is elongated to 7.998 Å compared to bulk material . This indicates either epitaxial strain between SFMO and the STO substrate or (and) a non stoichiometric oxygen content of the films. With scanning electron microscopy and atomic force microscopy we found a smooth surface with a roughness of 10 nm and the average grain size to be 50 nm. By Rutherford backscattering on a reference sample on MgO substrate the metal atom stoichiometry was determined to be Sr<sub>2</sub>Fe<sub>1.06</sub>Mo<sub>0.90</sub>O<sub>x</sub>. This is within the experimental errors identical to the nominal composition of the target (Sr<sub>2</sub>FeMoO<sub>6±δ</sub>). The film thickness was 250 nm, measured with a Mireau interferometer. The samples, not resistant against water, were patterned photolithographically and etched to a Hall bar structure. The longitudinal and transverse resistivity were measured in magnetic fields up to 8 T from liquid helium temperature to 300 K. A standard four point technique with DC current was used. The Hall coefficient was determined by slowly sweeping the magnetic field in positive and negative field direction with an asymmetric current injection to minimize the parasitic longitudinal voltage on the Hall contacts . Spontaneous magnetization was determined in small fields ($`B=100`$ mT) with a SQUID magnetometer. ## III Results and Discussion The zero field resistivity is presented in Fig. 3. The room temperature resistivity value of 3 m$`\mathrm{\Omega }`$cm is comparable with other reported results , but down to 4 K the resistivity increases almost an order of magnitude. 
Depending on preparation the sign of the temperature coefficient and the sign of the MR changes . We observed in our semiconducting films a negative MR of -3.5 % at a temperature of 4 K and a magnetic field of 8 T, similar to the result of Asano et al. . At higher temperatures the MR vanishes. While in the plot of Fig. 3 the resistivity shows a smooth temperature variation with a negative temperature coefficient, closer inspection indicates a change in conduction mechanism. Below 30 K the resistivity increases strictly logarithmic with falling temperature. This behavior is known from Kondo systems, where it results from carrier scattering at uncorrelated magnetic impurities. Above 100 K the temperature dependence of the resistivity is best described by $`\rho \mathrm{exp}((T_0/T)^{0.25})`$ with $`T_0=4900`$ K, as can be seen in the inset of Fig. 3. The origin of the opposite behavior in transport properties between different epitaxial films remains up to now unclear. One cause may be order-disorder effects in the B-cation arrangement of the A<sub>2</sub>B’B”O<sub>6</sub> double-perovskite. A segregation into clusters of compositions SrMoO<sub>3</sub> ($`a=3.975`$ Å) and SrFeO<sub>3</sub> ($`a=3.869`$ Å) should be visible in x-ray diffraction as severe peak broadening for small clusters or as peak splitting for large clusters. Both effects are not observed. The peak splitting visible in Fig. 1 for substrate and film peak is due to the Cu $`K_\alpha `$ doublet only. For the double-perovskites random, rock salt and layered B’,B” sublattice types are known . In neutron powder diffraction of SFMO Rietveld refinement showed perfect B’,B”-rock salt structure . Due to extinction conditions the x-ray diffraction in Bragg-Brentano geometry cannot resolve the sublattice structure for (00$`\mathrm{}`$) oriented films. However, the saturation magnetization of our films is with one $`\mu _\mathrm{B}`$ per formula unit (f.u.) much smaller than the moment of 3 $`\mu _\mathrm{B}`$/f.u. observed by Kobayashi et al. in a bulk sample. This discrepancy is a hint to cation site disorder . Other possible influences as non stoichiometric oxygen content and substrate induced strain will be the subject of future investigations. In ferromagnetic materials the transverse resistivity is given by $$\rho _{xy}=R_HB+R_A\mu _0M$$ (1) with the magnetization $`M`$ and the ordinary and anomalous Hall coefficients $`R_H`$ and $`R_A`$, respectively . The Hall voltage $`U_{hall}`$ was measured at several constant temperatures between 4 K and 300 K in magnetic fields up to 8 T. Fig. 4 shows the results for the Hall resistivity and the Hall voltage. The error in the data is smaller than the symbol size. The low magnetoresistivity of both the sample and the Pt-thermometer allows at high temperatures the elimination of the parasitic longitudinal part of the Hall voltage and a quantitative analysis of $`R_H`$. Due to the increasing resistance the measurement current was reduced from 1 mA to 100 $`\mu `$A for the data taken at $`T=4`$ K, leading to a worse signal to noise ratio. In low fields, a steep increase of $`U_{hall}`$ with increasing field is seen. This part, where the magnetization of the sample changes, is dominated by the anomalous Hall contribution. In the case of SFMO $`R_A`$ is holelike, in contrast to the manganites . At 1 T a maximum occurs and at higher fields the data show a linear negative slope. In this high field regime the magnetization of the sample is constant and therefore, according to Eq. 
1, the ordinary Hall effect becomes visible. This behavior, positive $`R_A`$ and negative $`R_H`$, was also observed in iron and ferromagnetic iron alloys . The linear negative slope $`\mathrm{d}\rho _{xy}/\mathrm{d}B`$ indicates an electronlike charge-carrier concentration. The Hall coefficent at 300 K is $`1.87\times 10^{10}`$ m<sup>3</sup>/As, corresponding to a charge carrier density in a one-band model of 4.1 electrons/f.u.. The value of $`R_H`$ increases with decreasing temperature to $`1.15\times 10^{10}`$ m<sup>3</sup>/As at 80 K. If one assumes that there exists a residual magnetization increase in the high-field regime its anomalous contribution is holelike. Therefore it will lead to an underestimation of $`R_H`$, but not to a sign change. The observation of an electronlike ordinary Hall effect corresponding to several electrons per f.u. in SFMO is the central result of this work. The anomalous Hall effect in ferromagnetic materials has two possible origins, an asymmetry of scattering (skew scattering) or a sideward displacement of the center of weight of an electron wave packet during the scattering process (side-jump), both due to spin-orbit interaction . The anomalous Hall effect is closely related to the longitudinal resistivity $`\rho _{xx}`$ by $$R_A\mu _0M=\gamma \rho _{xx}^n,$$ (2) but with a different exponent $`n=1`$ and $`n=2`$ for skew scattering and side jump, respectively. The anomalous Hall coefficient can be extracted from the data by extrapolation the linear high-field data to $`B=0`$ . The obtained value is then, according to Eq. 1, $`R_A\mu _0M_{Sat}:=\rho _{xy}^{}`$. The resistivites $`\rho _{xx}`$ versus $`\rho _{xy}^{}`$ in a double logarithmic plot show indeed in the case of SFMO a linear slope with $`n=0.75`$, indicating skew scattering. Due to the opposite sign of the temperature coefficient of the resistivity between SFMO and the manganites in the ferromagnetic regime, the anomalous Hall coefficient increases for SFMO with decreasing temperature. In the manganites the anomalous Hall effect vanishes for very low temperatures, because of increasing magnetic order, as expected by theory . This is a further hint that in our SFMO thin films a full magnetic order is not obtained. ## IV conclusion In summary we prepared high epitaxial thin films of the compound Sr<sub>2</sub>FeMoO<sub>6</sub> with narrow rocking curves by pulsed laser deposition. We performed detailed transport measurements of the diagonal and nondiagonal elements of the resistivity tensor from 4 K up to room temperature in magnetic fields up to 8 T. An electronlike ordinary Hall effect and a holelike anomalous Hall contribution were observed. These signs are reversed compared to the colossal magnetoresistive manganites. The value of the nominal charge carrier density at 300 K is four electrons per formula unit. A full magnetic order was not observed in our samples. ###### Acknowledgements. We thank G. Linker from Forschungszentrum Karlsruhe for the Rutherford backscattering analysis of the film stoichiometry. This work was supported by the Deutsche Forschungsgemeinschaft through Project No. JA821/3.
no-problem/9908/cond-mat9908019.html
ar5iv
text
# Channel Flow of Smectic Films ## I Introduction The rheological properties of liquid crystalline systems continue to be of considerable interest because of their rich and complex behaviour. Thin films of anisotropic molecules, such as Langmuir monolayers at an air-water interface, are relevant to many industrial applications, and as such, have been subjected to detailed experimental studies. In particular, the viscous response of liquid crystalline films is often found to be non-Newtonian. Among the dominant causes is coupling of the flow to molecular alignment. In smectic films (crystalline in one dimension, but liquid-like in the other), the presence of unbound dislocations becomes a major factor affecting viscous response. When the film is riding over another phase, usually water, viscous drag from this subphase, if large, can modify the flow profile of the film quite significantly. The coupling between molecular alignment and flow has been seen in some cases in experiments by Mingotaud et al. , Maruyama et al. , and Kurnaz & Schwartz . The experiments involve Langmuir monolayers of rod-shaped molecules that are usually tilted with respect to the surface normal, forming a hexatic phase with anisotropic in-plane bond orientations. Typically the film consists of domains of a liquid crystalline phase co-existing with another liquid crystalline phase or with the liquid expanded phase (no orientational order). The domains can be distinguished through Brewster Angle Microscopy which is sensitive to molecular orientation, making it possible to follow the shape and movement of the domains along the flow. There is evidence of nonlinear shear response emerging from such studies, as well as of the molecular orientation being influenced by flow . Shear-thinning has often been observed in experiments involving Langmuir monolayers . A possible explanation of this phenomenon is provided by Bruinsma et al. in terms of shear-induced defect proliferation. Dislocation defects in a solid, if unbound, can relax an applied strain by moving in response to the resulting stress. Since the force on such a defect depends on its “charge”, oppositely charged defects tend to separate under an external stress. Tightly bound pairs cannot contribute to the steady state viscous response, although they can modify the response at non-zero frequencies and wave-vectors . However, at finite temperatures, bound pairs can dissociate under this separating influence which effectively tilts the potential well confining the pair, allowing them to be free beyond a potential barrier. Being thermally activated, the free dislocation density, and hence the viscous rsponse, would be very sensitive to temperature. Experiments conducted by Schwartz et al. on anisotropic hexatic and crystalline phases of Langmuir monolayers do indeed see a strong temperature dependence of the critical shear rate for onset of non-Newtonian behaviour. In this paper we study a simpler problem, the linear hydrodynamics of two-dimensional smectic films in a channel flow geometry. Dislocations still play an important role, and it is easier to analyze their effect on the smectic order embodied in a single set of Bragg planes. Although we do not study this here, channel flow of two-dimensional smectics would also be a promising context in which to explore a tractable model of shear thinning. 
In the absence of external strains, free dislocations or disclinations don’t occur in the most ordered two-dimensional phases; they are instead bound in pairs of opposite charges by a logarithmic potential. However, in two-dimensional layered materials such as smectics or cholesterics, there is exponential decay of translational order in both the layering direction, and the liquid-like direction along the layers . As a result, isolated dislocations have a finite energy and exist in a finite concentration at any finite temperature. In these materials, shear response is primarily due to the free dislocations; the viscosity diverges inversely as the dislocation density when it becomes small at low temperatures. This divergence is cut off at short length scales by the permeation mode of mass transfer in smectics, where a layer distortion induces molecules to jump from layer to layer without affecting the layering structure, allowing the distortion to relax over a finite distance. An analogy can be made to the screening effect of supercurrents in a superconductor that allow the magnetic field to penetrate a finite distance in from the surface, although the field is expelled from the bulk. The “permeation current” plays a similar role with respect to shear in a smectic without dislocations. In three dimensions, this analogy has been carried further to predict divergence of the response functions of the smectic near the second-order smectic-to-nematic transition. The coupling of the nematic order parameter to fluctuations in the magnitude of the smectic order parameter causes, among other quantities, the permeation constant of the smectic and the viscosity denoted $`\eta _1`$ in the literature, to diverge . However, in two dimensions, a dislocation-driven thermodynamic transition to the nematic state occurs at zero temperature , dislocations being the analog of Abrikosov flux vortices in two-dimensional superconductors. At a finite temperature, the nematic melts into an isotropic liquid via a disclination unbinding transition. Below this temperature, local smectic order is disrupted by singularities in the phase of the order parameter (i.e., dislocations). However, the local smectic order parameter has a finite magnitude, and fluctuations in the magnitude are irrelevant in the renormalization sense . Thus renormalization of the elastic coefficients and response functions as in the 3-d case does not occur. The phase of the local smectic order parameter is related to the layer displacement, and dislocations, which lead to branch cuts in this phase, cost a finite energy as for magnetic vortices in the 3-d superconductor. Coupling of the film flow to a subphase (a fluid body supporting the film on its surface) can also significantly alter its flow profile. Such experiments have been conducted by Schwartz et al. using Langmuir monolayers on water. When the subphase drag dominates the flow, the flow profile becomes semi-elliptical. Stone has performed calculations which confirm this profile and also yield the profiles interpolating between the elliptical and the parabolic as the viscosity of the film relative to that of the subphase is increased. The depth of the subphase was also a parameter in the calculations, since decreasing the depth results in increased drag. As we show below, Stone’s results can also be applied to hexatic films. The hydrodynamics of partially ordered hexatic films has been studied in detail by Zippelius et al. . 
These authors find a correction to the effective viscosity under flow conditions where the hexatic bond orientation is pinned at the boundaries, as compared to the case where the bond orientation is free to rotate. The correction comes from coupling of the flow to the bond orientation order parameter under the constraint imposed by the boundaries. A similar coupling can be enforced by imposing a pressure gradient on the flow. In the experiments conducted by Kurnaz & Schwartz on hexatic film flow, the domain structure of the hexatic mesophase can impose constraints on the bond orientation at domain boundaries, thus increasing the viscosity from its bare value. Annealing of the domains would then lead to a reduction in the effective viscosity. The experimental signature of this effect would be a time-dependent viscous response. Some transients have indeed been observed in these experiments, although other factors may be involved, such as domain boundary elasticity and shear thinning. To illustrate how to incorporate effects of a subphase into the hydrodynamics of a partially ordered film, we adapt the analysis of Ref. to films with hexatic order. In the presence of a subphase, the hydrodynamic equations of motion for a hexatic get modified by adding a subphase drag term to the viscous force: denoting by $`x`$ the co-ordinate across the channel, $`z`$ along the channel, and $`y`$ along the channel depth (see Fig. 1), we have (with stress matrix $`\sigma _{ij}`$), $`{\displaystyle \frac{g_z}{t}}`$ $`=`$ $`_z\sigma _{zz}+_x\sigma _{zx}+_y\sigma _{zy}`$ (2) $`=`$ $`\pi ^{}+_x\left(\nu _xg_z{\displaystyle \frac{K_A}{2}}_x^2\theta \right)\nu _b_yg_z|_{filmsurface}`$ (3) $`{\displaystyle \frac{\theta }{t}}`$ $`=`$ $`{\displaystyle \frac{_xg_z}{2\rho }}+\mathrm{\Gamma }_6K_A_x^2\theta `$ (4) where $`𝐠`$ is the momentum density, $`\theta `$ is the hexatic bond orientation order parameter, $`K_A`$ the bond orientation stiffness, $`\mathrm{\Gamma }_6`$ the corresponding kinetic coefficient, $`\nu `$ the dynamic shear viscosity ($`\nu =\eta /\rho `$), $`\nu _b`$ the viscosity of the bulk subphase, and $`\pi ^{}`$ the surface pressure gradient driving the film down the channel. Assuming $`_x\theta `$ is time-independent in the steady state, we have $`_t_x\theta =0`$. Therefore $`_t\theta `$ must be constant across the channel. Since $`g_z`$ is an even function of $`x`$ in this flow situation, $`_t\theta `$ is odd, and hence, must be zero. Eq. (4) then gives us the coupling between the flow and the bond orientation: $$_x^2\theta =\frac{1}{2\rho \mathrm{\Gamma }_6K_A}_xg_z.$$ (5) Upon substituting this result into Eq. (2), we find: $$\frac{g_z}{t}=\pi ^{}+_x\left(\left(\nu +\frac{1}{4\rho \mathrm{\Gamma }_6}\right)_xg_z\right)\nu _b_yg_z|_{filmsurface}.$$ (6) Thus $`g_z`$ obeys an equation of motion identical to that for an isotropic film with a subphase, but with the modified viscosity $`\eta +1/4\mathrm{\Gamma }_6`$, and we can take over the results of Ref. . As we shall see (Sec. V), this simplification does not apply to smectic flow. In the next section, we briefly review the equilibrium properties of two-dimensional smectic films . Section III discusses the hydrodynamics of two-dimensional smectics, and the implications of the coupling between the smectic order parameter and the flow for the response functions of both quantities. Section IV looks at flow of a smectic film in a channel flow geometry, and examines the behaviour in different regimes of the channel width. 
In Section V, we consider the effect of subphase drag on the smectic film flow as compared to previous results for an isotropic film. The results of both sections IV and V are consistent with the results of section III for the effective viscosity. The last section summarizes the results of this paper. ## II Review of Smectic Films Smectics are characterized by a crystal-like periodic modulation of the density along one direction, say, the $`z`$-direction, and liquid-like correlations perpendicular to it. In two dimensions, we take this to be the $`x`$-direction. The preferred orientation of the “layers” is also the average direction along which the directors $`\widehat{𝐧}`$ of the nematic molecules are oriented. Although it represents a spontaneously broken rotational symmetry, the layer orientation can be forced by boundary conditions on the molecules, or even by flow. Smectic order is characterized by a wavevector $`𝐪_0=\widehat{𝐳}2\pi /d`$, where $`d`$ is the layer spacing, usually slightly larger than the molecular length. The smectic density wave can be represented as $$\rho (𝐫)=\rho _0\left(1+\psi (𝐫)e^{i𝐪_0𝐫}\right).$$ (7) Here, $`\psi (𝐫)`$ is the complex smectic order parameter: its amplitude represents the strength of the smectic ordering, whereas the phase $`\varphi (𝐫)=q_0u(𝐫)`$ describes the phonons associated with broken translational symmetry along the layering direction. Phonons in two dimensions are very effective in destroying the one-dimensional translational order: The correlation $`\psi (𝐫)\psi ^{}(\mathrm{𝟎})`$ decays as the exponential of a power of the displacement. Since the square of the wavevector appears in the exponent , higher harmonics of $`q_0`$ in the density modulation are ignored. The Landau-Ginzburg free energy takes the form $$=d^2r\left[\frac{a}{2}|\psi |^2+\frac{u}{4}|\psi |^4+\frac{c_{}}{2}|_z\psi |^2+\frac{c_{}}{2}|(_xiq_0\delta n)\psi |^2+\frac{K_1}{2}(_x\delta n)^2+\frac{K_3}{2}(_z\delta n)^2\right].$$ (8) Here, $`K_1`$ and $`K_3`$ are splay and bend elastic constants. The twist elastic constant $`K_2`$ is absent in two dimensions. The coupling between $`_x\psi `$ and $`\delta n(\widehat{𝐧}\widehat{𝐳})\widehat{𝐱}`$ is required to satisfy the rotational invariance of $``$. Terms of order higher than quadratic in the order parameter and its gradients have been neglected. Well below the mean-field smectic-nematic transition temperature, fluctuations in the amplitude of $`\psi `$ can also be ignored, and in the absence of singularities in $`\widehat{𝐧}`$ (disclinations), $`\delta n`$ can be integrated out.The remaining long wavelength fluctuations can be expressed completely in terms of the layer displacement $`u(𝐫)`$ as $$=d^2r\frac{1}{2}B\left[(_zu)^2+\lambda ^2(_x^2u)^2\right],$$ (9) with $`B=\psi _0^2q_0^2c_{}`$, and $`\lambda ^2=K_1/B`$. Note that uniform gradients of $`u`$ along the layer direction ($`_xu`$) don’t cost any energy, because they represent tilting of the layering direction. This important difference compared to two-dimensional solids, hexatics, etc. implies that the lowest energy defects in the system, dislocations, have a finite energy $`E_D`$ and are not constrained to be bound in pairs at low temperatures . 
Whereas a smectic with thermally excited phonons would behave like a nematic with only a splay degree of freedom, the presence of dislocations allows for bend in the average layer orientation over scales larger than the typical size $`\xi _D`$ of a correlated “smectic blob” , given by $`\xi _D^2n_D^1a_D^2e^{E_D/k_BT}`$ ($`a_D`$ is a dislocation core diameter, $`a_D^2d\sqrt{\lambda d}`$). Therefore the long-wavelength behaviour of the smectic is that of a nematic with free energy $$=d^2r\frac{1}{2}\left[K_1(_x\delta \widehat{𝐍})^2+K_3(_z\delta \widehat{𝐍})^2\right],$$ (10) where $`\widehat{𝐍}`$ denotes the layer normal, and $`K_3\xi _D^2`$. As discussed by Nelson & Pelcovits , non-linearities in the nematic free energy modify the nematic Frank constants $`K_1`$ and $`K_3`$ such that at scales longer than $`de^{\xi _D^2/a^2}`$ the nematic can be described by a single Frank constant $`\xi _D^2`$. In practice, this length scale can be very large compared to typical system sizes, so one usually sees a 2-Frank constant nematic. A study of the dynamics of smectic films, taking dislocations into account , yields nematic behaviour corresponding to Eq. (10) at long length scales, with a nematic kinetic coefficient that vanishes like $`n_De^{E_D/k_BT}`$ at low temperatures. ## III Smectic Hydrodynamics The hydrodynamic variables for a two-dimensional smectic are the layer displacement $`u`$, and the conserved momentun densities $`g_x`$, $`g_z`$. In this section we focus for simplicity on the dynamics of free-standing smectic films , where the momentum is conserved to a good approximation. The drag due to a liquid subphase is considered in Sec. V. We assume the smectic to be incompressible, and so neglect density fluctuations, setting the density $`\rho =const`$. Consider the stress tensor $`\sigma _{ij}=\nu _{ijkl}_kg_l`$. From the symmetry properties of the viscosity tensor, it can easily be argued that there should be 4 independent viscosity coefficients: $`\eta _{xxxx}`$, $`\eta _{zzzz}`$, $`\eta _{xxzz}=\eta _{zzxx}`$, and $`\eta _{xzxz}=\eta _{zxzx}=\eta _{xzzx}=\eta _{zxxz}`$. Upon denoting $$h\frac{\delta }{\delta u}=B(_z^2\lambda ^2_x^4)u,$$ (11) the equations of motion can be written as $`{\displaystyle \frac{u}{t}}`$ $`=`$ $`{\displaystyle \frac{g_z}{\rho }}+\lambda _ph`$ (13) $`{\displaystyle \frac{g_z}{t}}`$ $`=`$ $`h_zp+\stackrel{~}{\nu _z}_z^2g_z+\nu _x^2g_z+\nu ^{}_z_xg_x`$ (14) $`{\displaystyle \frac{g_x}{t}}`$ $`=`$ $`\text{ }_xp+\stackrel{~}{\nu _x}_x^2g_x+\nu _z^2g_x+\nu ^{}_x_zg_z`$ (15) where $`\lambda _p`$ is the permeation constant for the smectic, $`p`$ is the surface pressure, and we have switched to kinematic viscosities by dividing by the density, $`\nu \eta /\rho `$, and denoted the four viscosities $`\nu _x,\nu _z,\nu `$ and $`\nu ^{}`$. The condition of constant density: $`\frac{\rho }{t}=_xg_x_zg_z=0`$, can be used to reduce the number of viscosity coefficients to 3, and decouple the $`g_x`$-motion from $`g_z`$. Permeation refers to the dissipative mode of mass transfer in smectics where the molecules jump from layer to layer in order to relax a layer distortion. Dislocations in the smectic introduce cuts into the displacement field, but it is possible to define the gradient $`𝐬=u`$ as a single-valued quantity . In the presence of dislocations, $`\mathrm{s}_x`$ and $`\mathrm{s}_z`$ are independent variables, with $`\times 𝐬=\widehat{𝐲}dm(𝐫)`$, where $`m(𝐫)`$ is the conserved dislocation density. 
Now $$\frac{𝐬}{t}=\frac{u}{t}+d\widehat{𝐲}\times 𝐉_D,$$ (16) where $`𝐉_D`$, the dislocation current, which satisfies $$_tm+𝐉_D=0,$$ (17) is given by $$𝐉_D=n_D\underset{¯}{\mathrm{\Gamma }}𝐟T\underset{¯}{\mathrm{\Gamma }}m.$$ (18) Here we have introduced the kinetic coefficients $`\underset{¯}{\mathrm{\Gamma }}=\left[\begin{array}{cc}\mathrm{\Gamma }_x& 0\\ 0& \mathrm{\Gamma }_z\end{array}\right]`$, and the 2-d analog of the Peach-Koehler force $`𝐟=d(B\mathrm{s}_z,B\lambda ^2_x^2\mathrm{s}_x)`$; we have also set $`k_B=1`$ for convenience. We have imposed the Einstein relation which relates the mobility embodied in the first term to the diffusion constant implicit in the second through the common matrix $`\underset{¯}{\mathrm{\Gamma }}`$. Since $`\mathrm{\Gamma }_z`$ and $`\mathrm{\Gamma }_x`$ correspond to dislocation glide and climb respectively, we expect $`\mathrm{\Gamma }_z\mathrm{\Gamma }_x`$. The equations of motion are then: $`{\displaystyle \frac{g_z}{t}}`$ $`=`$ $`B(_z\mathrm{s}_z\lambda ^2_x^3\mathrm{s}_x)+\pi ^{}+\nu _x^2g_z+\nu _z_z^2g_z`$ (20) $`{\displaystyle \frac{\mathrm{s}_x}{t}}`$ $`=`$ $`_x{\displaystyle \frac{g_z}{\rho }}+\lambda _pB(_z^2\lambda ^2_x^4)\mathrm{s}_x+\mathrm{\Gamma }_z\left(n_Dd^2B\lambda ^2_x^2\mathrm{s}_xT_z(_x\mathrm{s}_z_z\mathrm{s}_x)\right)`$ (21) $`{\displaystyle \frac{\mathrm{s}_z}{t}}`$ $`=`$ $`_z{\displaystyle \frac{g_z}{\rho }}+\lambda _pB(_z^2\lambda ^2_x^4)\mathrm{s}_z\mathrm{\Gamma }_x\left(n_Dd^2B\mathrm{s}_zT_x(_x\mathrm{s}_z_z\mathrm{s}_x)\right)`$ (22) where we have used $`\pi ^{}`$ to denote $`_zp`$, and $`\nu _z=\stackrel{~}{\nu _z}\nu ^{}`$. If conservation of momentum is neglected , Eqs. (18) lead in the limit of long wavelengths and low frequencies to a relaxation frequency for $`s_z`$ (which describes layer compression) $$\omega _{s_z}=i\mathrm{\Gamma }_xn_Dd^2B,$$ (23) and for $`\mathrm{s}_x`$ (i.e., layer undulations) a diffusive frequency $$\omega _{s_x}(𝐪)=i\left((\mathrm{\Gamma }_zn_Dd^2B\lambda ^2)q_x^2+(T\mathrm{\Gamma }_z+\lambda _pB)q_z^2\right).$$ (24) Using the relation $`\delta n=_xu`$ , this last result corresponds to a nematic-like behaviour for the director $`\widehat{𝐍}=\widehat{𝐳}+\mathrm{s}_x\widehat{𝐱}`$. Including $`g_z`$ in the hydrodynamic treatment introduces a pair of coupled $`g`$-$`s_x`$ modes with both diffusive and propagating characteristics. Denoting $`\omega _{g_z}(𝐪)=i(\nu q_x^2+\nu _zq_z^2)`$, the coupled modes have characteristic frequencies $$\omega (𝐪)\left(\frac{\omega _{g_z}+\omega _{s_x}}{2}\right)\pm \sqrt{\left(\frac{\omega _{g_z}\omega _{s_x}}{2}\right)^2+\frac{B}{\rho }\lambda ^2q_x^4}.$$ (25) Propagation dominates for $`𝐪\widehat{𝐱}`$ if the dissipation is small enough, which is possible at low temperatures, leading to $$\omega (𝐪)=\pm \sqrt{\frac{B}{\rho }}\lambda q_x^2.$$ (26) As in the case of hexatics, coupling to the smectic displacement field modifies the viscosity of the film. In the absence of dislocations, it is not possible to shear the smectic film without breaking it. The glide motion of dislocations facilitates shear deformation. Permeation can also support shear at short length scales. To calculate the effective viscosity, we apply an external shear stress to the system, and calculate the steady state response for $`g_z`$. 
The stress-strain relation calculated in Appendix A then yields $$\eta ^{eff}=\eta \left(1+\frac{1}{\eta \mathrm{\Gamma }_zn_Dd^2}\right)$$ (27) and $$\eta _z^{eff}=\eta _z\left(1+\frac{1}{\eta _z\mathrm{\Gamma }_xn_Dd^2}\right)$$ (28) in the long wavelength limit. This is similar in form to the viscosity correction for hexatics: $`\eta \eta \left(1+\frac{1}{4\eta \mathrm{\Gamma }_6}\right)`$. A similar calculation for $`\mathrm{s}_x`$ and $`\mathrm{s}_z`$ (again using the method sketched in Appendix A) shows that $$\mathrm{\Gamma }_zn_Dd^2\mathrm{\Gamma }_zn_Dd^2\left(1+\frac{1}{\eta \mathrm{\Gamma }_zn_Dd^2}\right)$$ (29) and $$\mathrm{\Gamma }_xn_Dd^2\mathrm{\Gamma }_xn_Dd^2\left(1+\frac{1}{\eta _z\mathrm{\Gamma }_xn_Dd^2}\right)$$ (30) The permeation constant is not affected by the coupling. At low temperatures (or large dislocation energy $`E_D`$), $`n_D`$ rapidly approaches 0 as $`e^{E_D/T}`$, and the effective viscosity of the smectic begins to diverge as $`e^{E_D/T}`$, whereas the response of the smectic displacement strains $`\mathrm{s}_x`$ and $`\mathrm{s}_z`$ to an external force goes to the finite value $`\eta ^1`$ instead of vanishing as $`\mathrm{\Gamma }_zn_Dd^2`$. However, since the permeation mode relaxes shear over scales shorter than the permeation length $`\delta =\sqrt{\lambda _p\eta }`$, the divergence of the shear viscosity is cut off for $`q_x\sqrt{\eta \mathrm{\Gamma }_z}(d/\delta \xi _D)`$ according to $$\mathrm{\Delta }\eta q_x^2=\frac{q_x^2}{\mathrm{\Gamma }_zn_Dd^2+\lambda _pq_x^2},\mathrm{o}r,\eta \eta \left(1+\frac{1}{\eta \mathrm{\Gamma }_zn_Dd^2+\delta ^2q_x^2}\right).$$ (31) Since these hydrodynamic equations are valid only for wavelengths longer than the dislocation correlation length in the x-direction, i.e., $`q_x\xi _{}^1`$ where $`\xi _{}=(\lambda \xi _D^2)^{1/3}`$ , this rounding off of the viscosity will extend to the hydrodynamic range only if $`\sqrt{\eta \mathrm{\Gamma }_z}(d\lambda ^{1/3}/\delta \xi _D^{1/3})1`$. We expect the bare viscosity $`\eta `$ and $`\lambda =\sqrt{K_1/B}`$ to stay finite as $`T0`$. However, $`\xi _D`$ diverges as $`e^{E_D/2T}`$. We expect the permeation constant $`\lambda _p`$ to behave like $`e^{E_p/T}`$ where $`E_p`$ is the energy barrier for molecules to jump from one layer to the next. The dislocation kinetic coefficient $`\mathrm{\Gamma }_z`$ would similarly correspond to the activation energy $`E_g`$ for dislocation glide by breaking and reforming of bonds around the dislocation core. But this energy barrier should be small compared to that required for molecular hopping across the layers, and we shall ignore it in comparison. Then the above condition is satisfied provided $`E_D/3>E_p`$, so that $`\xi _D^{1/3}\mathrm{}`$ faster than $`\delta 0`$. A similar rounding off is possible in principle for $`\eta _z`$: $$\eta _z\eta _z+\frac{1}{\mathrm{\Gamma }_xn_Dd^2+\lambda _pq_z^2},$$ (32) however, this saturation is unobservable in the hydrodynamic limit because the dislocation correlation length diverges more strongly in the z-direction: $`q_z^1\xi _{}=(\xi _D^4/\lambda )^{1/3}`$. Since dislocation climb is similar to the permeation process, $`\mathrm{\Gamma }_x`$ should behave like $`\lambda _p`$, and $`q_z^2n_D^{4/3}`$ would be much smaller than $`n_D`$ at low temperatures. Although we have assumed the viscosity to be independent of shear rate, at high shear rates we must account for shear thinning brought about by the increase in unbound dislocations in the presence of the shear strain. 
The mechanism for dislocation proliferation under a shear stress is similar to that described by Bruinsma et al. for a 2d crystal of point particles. The stress tilts the effective potential well binding the dislocation pair, allowing the pair to dissociate. The extra density of unbound dislocations facilitates further relaxation of the stress so that the effective viscosity decreases with increasing shear rate (the shear strain in the steady state depends on the shear rate imposed upon the flow). Note from Eqs. (27) and (28) that the effective viscosities do indeed drop with increasing dislocation density $`n_D`$. The same mechanism would also apply to shear flow in a hexatic film where disclination unbinding would occur in the presence of a strain in the bond-orientation angle. Since the orientational order parameter is coupled to the flow as in Eqs. (I), disclinations can mediate the shear thinning mechanism in the hexatic phase. ## IV Channel flow of smectic films We are interested in flow under shear or a pressure gradient for a film oriented with the layering direction along the channel (see Fig. 2). From the previous discussion, we expect a nematic-like profile for $`\mathrm{s}_x`$ unless the dislocation density is small, in which case we are in the permeation regime and shear is only supported in a boundary layer of width $`\delta `$. We assume the channel is much wider than the dislocation correlation length $`\xi _D`$, so that the hydrodynamic treatment is valid. Discussion of the effects of a subphase will be deferred to Sec. V. In the steady state, we expect $`\mathrm{s}_z`$ to be constant, and the equations of motion (18) reduce to $`{\displaystyle \frac{g_z}{t}}`$ $`=`$ $`\pi ^{}+\nu _x^2g_zB\lambda ^2_x^3\mathrm{s}_x`$ (34) $`{\displaystyle \frac{\mathrm{s}_x}{t}}`$ $`=`$ $`{\displaystyle \frac{g_z}{\rho }}\lambda _pB\lambda ^2_x^4\mathrm{s}_x+\mathrm{\Gamma }_zn_Dd^2B\lambda ^2_x^2\mathrm{s}_x`$ (35) For convenience we reduce the variables by their dimensionless counterparts: $$xax,g_z\left(\frac{\pi ^{}a^2}{\nu }\right)g_z,\mathrm{s}_x\left(\frac{\pi ^{}a^3}{B\lambda ^2}\right)\mathrm{s}_x,$$ (36) where $`2a`$ is the channel width, and $`x`$, $`g_z`$ and $`s_x`$ are now dimensionless. We also define the dimensionless dislocation density $$\mathrm{\Delta }\eta \mathrm{\Gamma }_zn_Dd^2,$$ (37) and $`ba/\delta `$, the scaled channel width. In the presence of dislocations, it is convenient to define $`b^{}b\sqrt{1+\mathrm{\Delta }}`$. Upon solving the equations above with the no-slip boundary condition: at $`x=\pm a`$, $`g_z=0`$ and the permeation current $`h_x^3\mathrm{s}_x=0`$ (see Eqs. (11) and (13)), we find $$g_z(x)=\frac{1}{1+\mathrm{\Delta }}\left[\mathrm{\Delta }\frac{(1x^2)}{2}+\frac{1}{b^2}\left(1\frac{\text{cosh}(b^{}x)}{\text{cosh}(b^{})}\right)\right]$$ (38) and $`\mathrm{s}_x=_xu`$ where $$u(x)=\frac{1}{1+\mathrm{\Delta }}\left[\frac{(1x^2)^2}{24}+\frac{\text{tanh}b^{}}{b^3}\frac{(1x^2)}{2}+\frac{1}{b^4}\left(1\frac{\text{cosh}(b^{}x)}{\text{cosh}(b^{})}\right)\right].$$ (39) There are two regimes of interest here: * narrow channel: $`b^{}1`$ ($`\delta a\sqrt{1+\mathrm{\Delta }}`$): $`g_z{\displaystyle \frac{(1x^2)}{2}},`$ i.e. we recover the usual Poiseuille profile expected for a structureless fluid. The permeation current $`\lambda _ph`$ has the same form, but is smaller than the momentum density by a factor of $`𝒪(b^2)`$. 
Also, in the same limit, $`u(x){\displaystyle \frac{b^2}{720}}(1x^2)^2(13x^2).`$ Since $`u1`$, the deviations in layer tilt, $`\theta =\mathrm{s}_x`$, are small. * wide channel: $`b^{}1`$ ($`a\sqrt{1+\mathrm{\Delta }}\delta `$): there are two distinct contributions to $`g_z`$ (Fig. 3): $`g_z\left({\displaystyle \frac{\mathrm{\Delta }}{1+\mathrm{\Delta }}}\right)\left({\displaystyle \frac{1x^2}{2}}\right)+{\displaystyle \frac{1e^{b^{}(1|x|)}}{b^2(1+\mathrm{\Delta })^2}}.`$ If $`\mathrm{\Delta }b^21`$ ($`a\sqrt{\mathrm{\Delta }(1+\mathrm{\Delta })}\delta `$), then the second term can be neglected and dislocations restore a fluid like response, but with an effective viscosity $$\eta ^{eff}=\eta \left(1+\frac{1}{\mathrm{\Delta }}\right),$$ (40) confirming the result we found in the previous section. On the other hand, if the dislocation density is so small that $`\mathrm{\Delta }b^21`$ ($`a\sqrt{1+\mathrm{\Delta }}\delta a\sqrt{\mathrm{\Delta }(1+\mathrm{\Delta })}`$), then the second term dominates and the plug flow profile characteristic of permeation flow can be seen. In the wide channel limit, we also have $`u(x){\displaystyle \frac{1}{\mathrm{\Delta }}}{\displaystyle \frac{(1x^2)^2}{24}}+{\displaystyle \frac{1}{(1+\mathrm{\Delta })^{5/2}}}{\displaystyle \frac{1}{b^3}}{\displaystyle \frac{(1x^2)}{2}}+{\displaystyle \frac{1}{(1+\mathrm{\Delta })^3}}{\displaystyle \frac{1}{b^4}}\left(1e^{b^{}(1|x|)}\right).`$ For the general case, we can estimate the effective viscosity from the flow rate: for Poiseuille flow, the momentum flux is given by $`_a^ag_z𝑑x=\frac{2}{3}\frac{\pi ^{}a^3\rho }{\eta }`$. Using this as the definition of $`\eta ^{eff}`$, we find $$\frac{\eta }{\eta ^{eff}}=\frac{\mathrm{\Delta }}{1+\mathrm{\Delta }}+\frac{3}{b^2(1+\mathrm{\Delta })}\left(1\frac{\text{tanh}b^{}}{b^{}}\right).$$ (41) For $`b^{}1`$ (permeation regime), $$\eta ^{eff}=\eta \left(\frac{1+\mathrm{\Delta }}{\mathrm{\Delta }+3/b^2}\right)$$ (42) which reduces to (40) for $`\mathrm{\Delta }b^21`$, whereas for $`\mathrm{\Delta }b^21`$ (and hence $`\mathrm{\Delta }1`$), $$\frac{\eta ^{eff}}{\eta }=\frac{b^2}{3}\mathrm{o}r\eta ^{eff}=\frac{a^2}{3\lambda _p},$$ (43) reminiscent of the result $`1/\lambda _pq_x^2`$ we found for low dislocation densities in the previous section. For $`b^{}1`$ ($`b1`$) (permeation regime), we have $$\frac{\eta }{\eta ^{eff}}=1+\frac{2}{5}b^2\mathrm{o}r\eta ^{eff}=\eta +\frac{2}{5}\frac{a^2}{\lambda _p},$$ (44) which is a small correction due to the permeation boundary layer. ## V Channel flow with a normal fluid subphase In practice , both the film and the barriers that restrict its flow to a channel geometry, float on a volume of fluid with viscosity $`\eta _b`$ and finite depth $`H`$ (Fig. 1). If the dimensionless parameter $`\mathrm{\Lambda }=\eta /\eta _bH`$ is small, then, as for a normal film, the effect of this subphase can be neglected, and the analysis of the previous section is sufficient to describe the two-dimensional film flow. If the subphase drag cannot be neglected, the flow profile can be calculated through an analysis similar to Ref. . We outline the steps here, and comment on the limiting cases. We consider a subphase extending from $`y=0`$ (surface with film) to $`y=H`$ (bottom). Let $`\stackrel{}{v}(x,y)`$ be the ($`z`$-independent) velocity field describing the subphase, $`\stackrel{}{v}(v_x,v_y,v)`$. The velocity profile in the film itself is $`v_0(x)v(x,y=0)`$. 
In steady state, the equation of motion for $`\stackrel{}{v}`$ in the bulk of the subphase is: $`(_x^2+_y^2)\stackrel{}{\upsilon }=0.`$ The boundary conditions on the subphase are: * $`\stackrel{}{\upsilon }=0`$ for $`x\pm \mathrm{}`$ or $`y=H`$ or $`y=0,|x|>a`$, * $`v_x`$ and $`v_y`$ $`=0`$ for $`y=0,|x|<a`$ as well. We assume that the subphase is incompressible ($`\stackrel{}{}\stackrel{}{\upsilon }=0`$), which implies that $`v_x`$ and $`v_y`$ must be zero. For the film, the equations of motion at the surface are modified by the subphase drag ($`K_1=B\lambda ^2`$): $`\pi ^{}+\eta _x^2v_0K_1_x^3\mathrm{s}_x\eta _b_yv|_{y=0}=0,`$ (46) $`_x(v_0\lambda _pK_1_x^3\mathrm{s}_x)+\mathrm{\Gamma }_zn_Dd^2K_1_x^2\mathrm{s}_x=0.`$ (47) Once again, we scale variables such that $$xax,yay,v\left(\frac{\pi ^{}a^2}{\eta }\right)v,\mathrm{a}nd_x^3\mathrm{s}_x\left(\frac{\pi ^{}}{K_1}\right)v_p.$$ (48) as well as $$\delta a\delta ,HaH.$$ (49) All quantities are now dimensionless. Since $`v_p`$ is proportional to the “permeation current”, it obeys the same boundary conditions as $`v_0`$. The equations of motion can now be written as $`_x^2v+_y^2v=0\mathrm{f}orH<y<0\mathrm{w}ithv=0\mathrm{a}ty=0,H`$ (51) $`1+\left(\delta ^2_x^2(1+\mathrm{\Delta })\right)v_p\mathrm{\Lambda }_yv|_{y=0}=0`$ (52) $`_x^2v_0=(\delta ^2_x^2\mathrm{\Delta })v_p`$ (53) where $`\delta `$ and $`\mathrm{\Delta }`$ were defined in Sec. IV. Eq. (51) implies that $`v`$ must have the form $$v(x,y)=_0^{\mathrm{}}𝑑k\frac{A(k)}{\mathrm{cos}kH}\mathrm{cos}kxsinhk(H+y).$$ (54) In terms of the Fourier transform $`A(k)`$, the film velocity $$v_0(x)=_0^{\mathrm{}}𝑑kA(k)\mathrm{t}anh(kH)\mathrm{cos}kx.$$ (55) Using Eq. (53), we can express $`v_p`$ in terms of $`v_0`$: $$v_p(x)=_0^{\mathrm{}}𝑑k\mathrm{\Omega }(k)coskx\mathrm{w}here\mathrm{\Omega }(k)=A(k)\mathrm{t}anh(kH)\left(\frac{k^2}{\delta ^2k^2+\mathrm{\Delta }}\right).$$ (56) Upon substituting these relations into Eq. (52), we obtain a relation for the Fourier transform $`A(k)`$: $$1=_0^{\mathrm{}}𝑑kA(k)\left[\left(\delta ^2k^2+(1+\mathrm{\Delta })\right)\left(\frac{k^2}{\delta ^2k^2+\mathrm{\Delta }}\right)\mathrm{t}anh(kH)+\mathrm{\Lambda }k\right]\mathrm{cos}kx.$$ (57) The boundary condition $`v_0(x)=0`$ for $`|x|>1`$ imposes another constraint on the $`A(k)`$: $$_0^{\mathrm{}}𝑑kA(k)\mathrm{t}anh(kH)\mathrm{cos}kx=0\mathrm{f}or|x|>1.$$ (58) This can be satisfied by $`A(k)`$ of the form $$A(k)\mathrm{t}anh(kH)=k^{1/2\beta }\underset{m=0}{\overset{\mathrm{}}{}}a_mJ_{2m1/2+\beta }(k)$$ (59) where $`\beta `$ can be chosen for convenience of computation. If this form is substituted into Eq. (57), the x-dependence can be integrated out to yield an infinite set of linear equations for the coefficients $`a_m`$: $$\underset{m=0}{\overset{\mathrm{}}{}}a_mG_{mn}^\beta (\mathrm{\Delta },\delta ,\mathrm{\Lambda },H)=\frac{\delta _{n0}}{2^{\beta 1/2}\mathrm{\Gamma }(\beta +1/2)},n=0,1,2,\mathrm{}$$ (60) where $$G_{mn}^\beta (\mathrm{\Delta },\delta ,\mathrm{\Lambda },H)=_0^{\mathrm{}}𝑑kG(k;\mathrm{\Delta },\delta ,\mathrm{\Lambda },H)k^{12\beta }J_{2m1/2+\beta }(k)J_{2n1/2+\beta }(k).$$ (61) The “kernel” for the smectic case, $$G(k;\mathrm{\Delta },\delta ,\mathrm{\Lambda },H)=k^2\left(1+\frac{1}{\mathrm{\Delta }+\delta ^2k^2}\right)+\frac{\mathrm{\Lambda }k}{\mathrm{t}anh(kH)},$$ (62) differs from that for a structureless fluid by the term $`\frac{k^2}{\mathrm{\Delta }+\delta ^2k^2}`$ (see Fig. 4). This term reflects the correction to $`\eta q_x^2`$ we found in Section III. 
As discussed there, the correction is small compared to the normal term $`k^2`$ for $`\mathrm{\Delta }1`$, but can grow at low temperatures. If $`\mathrm{\Delta }\delta ^2k^2`$, this correction simply appears as an enhancement of the effective viscosity. However, when $`\mathrm{\Delta }\delta ^2k^2`$, the correction gives rise to a qualitative change in the velocity profile, characteristic of the permeation regime. The plug flow profile (Fig. 3) in this regime is similar to that seen in the case of a thin sublayer ($`H1`$), with most of the shear occurring in a boundary layer thickness $`\delta `$ (as opposed to $`\sqrt{H/\mathrm{\Lambda }}`$ for the “thin sublayer” case). When the subphase drag on the film is large ($`\mathrm{\Lambda }1`$), the profile is very similar to that of a normal film, since the second term in the “kernel”, which is independent of the film structure, dominates the flow. The profile is semi-elliptical when the subphase is deep ($`H1`$), and resembles plug flow for thin sublayers ($`H1`$). ## VI Summary We have studied the hydrodynamics of two-dimensional smectics incorporating dislocations in the context of shear flow across the layers. The behaviour resembles that of a nematic for length scales beyond the dislocation correlation length, with an effective viscosity that represents the role of dislocation motion in making shear possible. At smaller length scales, the permeation mode of smectics determines the shear response. These different regimes can be observed in channel flow under a pressure head where the channel width sets the observation length scale, provided the drag due to the subphase can be neglected. At small dislocation densities, the permeation mode determines the flow profile, which evolves from a parabolic profile for channels narrower than the permeation length to a plug-flow shape as the channel becomes much wider. In the latter case (the permeation regime), shear is supported only in a boundary layer of thickness equal to the permeation length, hence the effective viscosity as determined by the net flow rate across the channel grows as the square of the channel width. On the other hand, for large dislocation densities, the flow profile is again parabolic, but with the viscosity modified by the dislocation density. The dislocation density in turn depends on the shear rate through the shear strain supported by the layers in steady state. Under this strain, dislocation pairs in the smectic unbind at a lower energy cost, increasing their equilibrium density and helping to further relax the imposed strain, resulting in a shear thinning effect. This effect has been calculated for a 2d crystal of point particles by Bruinsma et al. , and would also be present in the hexatic phase, mediated by disclinations rather than dislocations. The flow results in steady state strains in the bond orientation order parameter, which are relaxed by disclination motion. It would be interesting to explore this mechanism for shear thinning for the smectic films discussed here. When the film flows on the surface of a fluid subphase, and drag from the subphase must be taken into account, the flow profile depends on the relative viscosities of the film and the subphase as well as on the channel width and the subphase depth. The analysis by Stone , which is supported by experiments, predicts the evolution of the parabolic profile into a semi-elliptical or plug-flow profile, depending on whether the drag is due to the subphase viscosity or a shallow subphase. 
In the Introduction, we showed that the same results apply to a hexatic film if it is described by an effective viscosity incorporating the coupling to the bond-orientation order parameter. In the situations described above where the subphase drag dominates the flow, these results are also applicable to smectic films, since the modification to the “flow kernel” of a smectic film with respect to an isotropic film is decoupled from the terms describing the influence of the subphase. The subphase drag manifests itself at long length scales, where the film structure is unimportant. ###### Acknowledgements. It is a pleasure to acknowledge helpful conversations with D. Schwartz. This research was supported by the National Science Foundation, through the MRSEC Program through Grant DMR-98-09363 and through Grant DMR-9714725. ## A Calculation of effective viscosity The hydrodynamic equations of motion can be represented schematically as

$$-i\omega X(𝐪,\omega)=-𝐋(𝐪)X(𝐪,\omega)+f^{ext}(𝐪,\omega),$$ (A1)

where $`X\equiv(g_z,\mathrm{s}_x,\mathrm{s}_z)`$, $`𝐋`$ is a hydrodynamic matrix (see Eq. (A7) below), and $`f^{ext}=(\partial_j\sigma_{zj}^{ext},0,0)`$, $`\sigma_{zj}^{ext}`$ being the applied stress. Upon solving for the response to $`\sigma_{zj}^{ext}`$, we find

$$X(𝐪,\omega)=\left(-i\omega+𝐋(𝐪)\right)^{-1}f^{ext}(𝐪,\omega).$$ (A2)

In the limit $`\omega\to 0`$, this simplifies to $`X(𝐪,\omega=0)=𝐋(𝐪)^{-1}f^{ext}(𝐪,\omega=0)`$, or

$$g_z(𝐪,\omega=0)=\left(𝐋(𝐪)^{-1}\right)_{g_zg_z}f^{ext}(𝐪,\omega=0).$$ (A3)

Upon inverting this relation, we find

$$iq_j\sigma_{zj}^{ext}(𝐪,\omega=0)=\left(\left(𝐋(𝐪)^{-1}\right)_{g_zg_z}\right)^{-1}g_z(𝐪,\omega=0).$$ (A4)

Upon comparing this result to the definition of the viscosity, $`\sigma_{ij}\equiv\nu_{ijkl}\,\partial_kg_l`$, we find the effective viscosity tensor $`\nu^{eff}(𝐪)`$. Upon writing

$$\left(\left(𝐋(𝐪)^{-1}\right)_{g_zg_z}\right)^{-1}=\frac{\text{Det}\,𝐋(𝐪)}{\text{Minor}_{g_zg_z}𝐋(𝐪)}=𝐋(𝐪)_{g_zg_z}+\frac{𝐋(𝐪)_{g_z\mathrm{s}_x}\text{Minor}_{g_z\mathrm{s}_x}𝐋(𝐪)+𝐋(𝐪)_{g_z\mathrm{s}_z}\text{Minor}_{g_z\mathrm{s}_z}𝐋(𝐪)}{\text{Minor}_{g_zg_z}𝐋(𝐪)},$$ (A5–A6)

we see that the first term simply yields the bare viscosity, and the second is the correction due to the coupling. A similar calculation can be carried out for the effective response of the displacement gradients $`\mathrm{s}_x`$ and $`\mathrm{s}_z`$ to external forces. The response matrix, $`𝐋(𝐪)`$, for a smectic is

$$\left[\begin{array}{ccc}(\nu q_x^2+\nu_zq_z^2)& iB\lambda^2q_x^3& iBq_z\\ iq_x/\rho & \lambda_pB(q_z^2+\lambda^2q_x^4)+\Gamma_z(n_Dd^2B\lambda^2q_x^2+Tq_z^2)& T\Gamma_zq_zq_x\\ iq_z/\rho & T\Gamma_xq_xq_z& \lambda_pB(q_z^2+\lambda^2q_x^4)+\Gamma_x(n_Dd^2B+Tq_x^2)\end{array}\right]$$ (A7)

We take $`q_z=0`$ when calculating $`\nu^{eff}`$, and $`q_x=0`$ when calculating $`\nu_z^{eff}`$, which leads to Eqs. (27) and (28).
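The cofactor identity in Eq. (A5) is easy to verify numerically. The sketch below checks it for a random invertible $`3\times 3`$ matrix standing in for $`𝐋(𝐪)`$; it is only an illustration of the linear algebra, not an evaluation of the physical response matrix (A7).

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.normal(size=(3, 3))  # random stand-in for the 3x3 matrix L(q)

# Left-hand side of Eq. (A5): invert L and take 1 / (L^{-1})_{00},
# with index 0 playing the role of the g_z row and column.
direct = 1.0 / np.linalg.inv(L)[0, 0]

# Right-hand side: Det L divided by the (g_z, g_z) minor of L.
minor_00 = np.linalg.det(np.delete(np.delete(L, 0, axis=0), 0, axis=1))
cofactor_form = np.linalg.det(L) / minor_00

assert np.isclose(direct, cofactor_form)
```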
# Transverse polarization distributions ## 1 Introduction The transverse polarization (or transversity) distributions $`\Delta_Tq(x)`$, introduced 20 years ago by Ralston and Soper and studied in more detail in the last decade , are one of the three sets of leading-twist quark and antiquark distribution functions – the other two are the momentum distributions $`q(x)`$ and the helicity distributions $`\Delta q(x)`$. (Note that $`\Delta_Tq`$ is often called $`h_1`$.) Formally $`\Delta_Tq`$ is given by

$$\Delta_Tq(x)=\frac{1}{\sqrt{2}P^+}\int\frac{\mathrm{d}\alpha}{2\pi}\,\mathrm{e}^{i\alpha x}\,\langle PS|\psi_+^{\dagger}(0)\gamma_{\perp}\gamma_5\psi_+(\alpha n)|PS\rangle,$$ (1)

where $`P`$ and $`S`$ are the momentum and the spin of the proton, respectively, $`n`$ is a null vector such that $`n\cdot P=1`$, and $`\psi_+=\frac{1}{2}\gamma^-\gamma^+\psi`$. The antiquark distributions are obtained from (1) by exchanging $`\psi`$ with $`\psi^{\dagger}`$. Inserting a complete set of intermediate states $`\{|X\rangle\}`$ and using the Pauli–Lubanski projectors $`𝒫_{\perp}^{\uparrow\downarrow}=\frac{1}{2}(1\pm\gamma_{\perp}\gamma_5)`$ one gets

$$\Delta_Tq(x)=\frac{1}{\sqrt{2}}\sum_X\left\{|\langle PS|𝒫_{\perp}^{\uparrow}\psi_+(0)|X\rangle|^2-|\langle PS|𝒫_{\perp}^{\downarrow}\psi_+(0)|X\rangle|^2\right\}\,\delta[(1-x)P^+-p_X^+],$$ (2)

which clearly shows the probabilistic meaning of $`\Delta_Tq`$ in the transverse polarization basis: $`\Delta_Tq(x)`$ is the number density of quarks with momentum fraction $`x`$ and transverse polarization $`\uparrow`$ minus the number density of quarks with the same momentum and transverse polarization $`\downarrow`$, in a transversely polarized hadron. In the helicity basis $`\Delta_Tq`$ is non-diagonal and hence has no probabilistic interpretation. Being a chirally odd distribution, $`\Delta_Tq`$ is not measurable in inclusive deep inelastic scattering. This makes it quite an elusive quantity. At present we have no experimental information on it. That is why model calculations and other nonperturbative studies are particularly useful. ## 2 Models The transverse polarization distributions have been calculated in a large number of models: i) bag model , ii) chromodielectric model , iii) chiral quark soliton and NJL model , iv) light-cone models , v) spectator model . Many of these calculations show that, at small $`Q^2`$ ($`\lesssim 0.5`$ GeV²), $`\Delta_Tq`$ is not very different from $`\Delta q`$, at least for $`x\gtrsim 0.1`$. At low $`x`$ the situation is more controversial: some models predict a sizable difference between the two distributions. A definite conclusion cannot be drawn, since the various models are valid at different scales and it is known that the QCD evolution induces a difference between $`\Delta_Tq`$ and $`\Delta q`$ which is relevant especially at low $`x`$ . As for the tensor charges

$$\delta q=\int_0^1\mathrm{d}x\,[\Delta_Tq(x)-\Delta_T\overline{q}(x)],$$ (3)

in addition to the predictions of the models listed above, there are other nonperturbative estimates: by QCD sum rule methods , and by lattice QCD . A rough (and personal) average of all model results is

$$\delta u\simeq 1.0\pm 0.2,\qquad\delta d\simeq -0.3\pm 0.1,\qquad\mathrm{at}\;Q^2\simeq 2\;\mathrm{GeV}^2,$$

where the error does not account for the intrinsic uncertainty of each model, but represents only the range spanned by the various results.
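Before turning to the lattice comparison, Eq. (3) can be illustrated numerically. The snippet below evaluates the tensor charges for a toy valence-like parametrization of $`\Delta_Tq(x)`$; the shapes and normalizations are hypothetical, tuned only to land near the model averages quoted above, and the antiquark transversities are set to zero for simplicity.

```python
import numpy as np
from scipy.integrate import quad

# Toy transversity densities: Delta_T q(x) ~ N x^a (1-x)^b (hypothetical).
def dT_u(x):
    return 25.6 * x**1.2 * (1.0 - x)**3   # tuned so that delta_u ~ 1.0

def dT_d(x):
    return -7.7 * x**1.2 * (1.0 - x)**3   # tuned so that delta_d ~ -0.3

delta_u, _ = quad(dT_u, 0.0, 1.0)   # Eq. (3) with Delta_T qbar = 0
delta_d, _ = quad(dT_d, 0.0, 1.0)
print(f"delta_u = {delta_u:.2f}, delta_d = {delta_d:.2f}")
```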
For comparison, the lattice finding is

$$\delta u=0.84,\qquad\delta d=-0.23,\qquad\mathrm{at}\;Q^2\simeq 2\;\mathrm{GeV}^2.$$

Note that $`\delta u`$ and $`\delta d`$ are, in absolute value, only slightly smaller than the nonrelativistic expectations ($`\delta u_{NR}=4/3`$, $`\delta d_{NR}=-1/3`$). ## 3 Possible measurements As already mentioned, the transverse polarization distributions cannot be measured in inclusive DIS. To extract $`\Delta_Tq`$ one needs either two hadrons in the initial state (hadron–hadron collisions), or one hadron in the initial state and one in the final state (semi-inclusive deep inelastic scattering). The measurement of $`\Delta_Tq`$ in proton–proton collisions is part of the physics program of the experiments at RHIC and of the proposed HERA-$`\vec{N}`$ project . Among the possible $`pp`$ initiated processes one can make a selection by choosing those which are expected to yield the largest spin asymmetry. Since there is no gluon transversity distribution , all processes dominated at the partonic level by $`qg`$ or $`gg`$ scattering produce a very small transverse asymmetry . Hence the most promising reaction is Drell–Yan lepton pair production with two transversely polarized beams. The relevant observable is the double-spin transverse asymmetry

$$A_{TT}=\frac{\mathrm{d}\sigma_{\uparrow\uparrow}-\mathrm{d}\sigma_{\uparrow\downarrow}}{\mathrm{d}\sigma_{\uparrow\uparrow}+\mathrm{d}\sigma_{\uparrow\downarrow}},$$ (4)

which depends on the product ($`A`$ and $`B`$ are the two protons)

$$\Delta_Tq(x_A)\,\Delta_T\overline{q}(x_B).$$ (5)

The Drell–Yan $`A_{TT}`$ has been calculated at leading order and next-to-leading order . In , $`\Delta_Tq=\Delta q`$ was assumed at a very low scale (the input $`\mu^2`$ of the GRV distributions). The authors of , instead, set $`2|\Delta_Tq|=q+\Delta q`$ at the GRV scale, assuming the saturation of Soffer’s inequality. This yields the maximal value for $`A_{TT}`$. Summarizing the results of these calculations, we can say that at RHIC energies ($`\sqrt{s}>100`$ GeV) one expects for the double-spin asymmetry, integrated over the invariant mass $`M^2`$ of the dileptons,

$$A_{TT}\simeq(1\text{–}2)\%,\;\mathrm{at\ most}.$$ (6)

It is quite interesting to note that, as $`\sqrt{s}`$ gets lower, the asymmetry tends to increase, as was first pointed out in . Thus at the HERA-$`\vec{N}`$ energy ($`\sqrt{s}=40`$ GeV) $`A_{TT}`$ can reach a value of $`(3\text{–}4)\%`$, which should be measurable within the expected statistical errors for that experiment . Let us turn now to semi-inclusive DIS on a transversely polarized proton. There are three candidate reactions for determining $`\Delta_Tq`$ at leading twist. Detecting a transversely polarized hadron $`\vec{h}`$ (e.g., a $`\Lambda`$) in the final state,

$$e\vec{p}\to e\vec{h}X,$$ (7)

one measures the product

$$\Delta_Tq(x)\,H_1^q(z),$$ (8)

where $`H_1^q`$ is a chirally odd leading-twist fragmentation function. In principle there is no reason why $`H_1`$ should be much smaller than the unpolarized fragmentation function $`D_1`$. The model calculation of gives for instance $`H_1^u/D_1^u\simeq 0.5`$ and $`H_1^d/D_1^d\simeq 0.2`$. The second relevant reaction is semi-inclusive DIS with an unpolarized final hadron,

$$e\vec{p}\to ehX.$$ (9)

In this case $`\Delta_Tq`$ might appear as a consequence of the Collins effect (a T-odd contribution arising from final-state interactions). Here one measures

$$\Delta_Tq(x)\,H_1^{\perp q}(z),$$ (10)

where $`H_1^{\perp q}`$ is a T-odd leading-twist fragmentation function.
The estimate of $`H_1^{\perp q}`$ presented in , based on the analysis of $`pp`$ reactions, shows that this quantity is non-negligible only at high $`z`$. A third way to extract $`\Delta_Tq`$ from semi-inclusive DIS has been explored in . The idea is to study the process

$$e\vec{p}\to eh_1h_2X,$$ (11)

where $`h_1,h_2`$ are two mesons in a correlated state which is the superposition of two resonances $`h,h^{\prime}`$:

$$|h_1h_2\rangle=\mathrm{e}^{i\delta}|h\rangle+\mathrm{e}^{i\delta^{\prime}}|h^{\prime}\rangle.$$ (12)

For instance, $`h_1,h_2=\pi^+,\pi^-`$ and $`h,h^{\prime}=\sigma,\rho`$. In this reaction one measures

$$\sin\delta\,\sin\delta^{\prime}\,\sin(\delta-\delta^{\prime})\,\Delta_Tq(x)\,I_q(z),$$ (13)

where $`I_q(z)`$ is the $`h`$–$`h^{\prime}`$ interference fragmentation function. Nothing is known at present about this quantity. From this sketchy presentation of the phenomenological perspectives it should be clear that, whereas the Drell–Yan process allows one to determine $`\Delta_Tq`$ in a clean way, semi-inclusive DIS is characterized by the presence of fragmentation functions which are little known and, in some cases, are expected to be rather small. It is a pleasure to thank A. Drago for a long collaboration on this subject and M. Anselmino for useful discussions.
# Tunable Charge Density Wave Transport in a Current-Effect Transistor ## Abstract The collective charge density wave (CDW) conduction is modulated by a transverse single-particle current in a transistor-like device. Nonequilibrium conditions in this geometry lead to an exponential reduction of the depinning threshold, allowing the CDWs to slide at much lower bias fields. The results are in excellent agreement with a recently proposed dynamical model in which “wrinkles” in the CDW wavefronts are “ironed” by the transverse current. The experiment might have important implications for other driven periodic media, such as moving vortex lattices or “striped phases” in high-$`T_c`$ superconductors. PACS numbers: 71.45.Lr, 71.45.-d, 72.15.Nj The charge-density-wave (CDW) state, characterized by a periodic modulation of the conduction electron density, is commonly observed in low-dimensional conductors . It is found to be the ground state in various inorganic and organic materials with a chain-like structure, giving rise to remarkable electrical properties . Similar charge-ordered states (“striped phases”) play an important role in high-$`T_c`$ superconductors and two-dimensional electron gases in the quantum Hall regime . A particularly interesting feature of the CDW state is its collective transport mode, very similar to superconductivity : under an applied electric field, the CDWs slide along the crystal, giving rise to a strongly nonlinear conductivity. Since even a small amount of disorder pins the CDWs, sliding occurs only when the applied electric field exceeds a certain threshold field. The pinning mechanisms, the onset of collective motion and the dynamics of a moving CDW are typical characteristics of the complex physics which describes a very general class of disordered periodic media . These include a wide variety of periodic systems, as diverse as vortex lattices in superconductors and Josephson junction arrays , Wigner crystals , colloids , magnetic bubble arrays and models of mechanical friction . The focus of recent theoretical and experimental research on disordered periodic media has been their nonequilibrium dynamical properties. One of the issues that have been raised is the effect of a single-particle current, due to uncondensed electrons and quasiparticle excitations. In a recent theoretical work, Radzihovsky and Toner discovered that a single-particle current has the most profound effects when it flows perpendicular to the CDW sliding direction. Based on general symmetry principles, this leads to nonequilibrium CDW dynamics even if the CDW itself is stationary. Here we report our study of the CDW transport in the presence of such a transverse single-particle current. We find that the sliding CDW motion is stable against a small transverse current, but large currents have a dramatic effect: the longitudinal depinning threshold field is exponentially reduced for normal current densities which exceed some crossover value $`J_c`$. In other words, the collective longitudinal current is enhanced by the transverse single-particle current. The characteristics of this current-effect transistor are in excellent agreement with the predictions of Radzihovsky and Toner . The experiments were carried out on single crystals of NbSe₃. This material has a very anisotropic, chain-like structure . It exhibits two CDW transitions, each involving different types of chains, at $`T_{P1}=145`$ K and $`T_{P2}=59`$ K.
A small portion of the conduction electrons remains uncondensed, providing a metallic single-particle channel. A single crystal of dimensions 2.7 mm$`\times`$36 $`\mu`$m$`\times`$240 nm was glued onto a sapphire substrate. A pattern of gold contacts was then defined on top of it using electron-beam lithography. The pattern consisted of two current leads at the two ends of the crystal, and a row of devices, each with two transverse current leads and two voltage leads. A scheme of such a transistor device is shown in the inset of Fig. 1. The transverse current leads were 5-100 $`\mu`$m wide, and overlapped the crystal by 1-5 $`\mu`$m. To ensure contact on both sides of the crystal, a 180 nm thick layer of gold was evaporated at angles of 45 degrees with respect to the substrate, as well as perpendicular to it. The contact resistance of the transverse leads was 1-2 orders of magnitude larger than the resistance of the crystal in the longitudinal direction, which precludes considerable shunting of the current through the transverse leads. A dc current of up to 1 mA was injected at the transverse leads. The transverse leads were not electrically connected to the longitudinal circuit, except through the crystal. Since the CDWs can only slide in the longitudinal direction, the transverse current is due to single electrons. The longitudinal current was injected at the two far ends of the crystal. The voltage leads were 180 nm thick, 5 $`\mu`$m wide, and the spacing between them was 50-500 $`\mu`$m. The longitudinal current-voltage characteristics and the differential resistance were studied as a function of transverse current at different temperatures, ranging from 25-120 K. The current-voltage characteristics for one of the devices are shown in Fig. 1. In the absence of the transverse current, the CDWs are pinned at low bias voltages. The I-V is linear, as the current is due to uncondensed electrons and quasiparticles that are thermally excited above the CDW gap. When the applied voltage reaches the threshold value $`V_T(I_x=0)`$, marked by an arrow in Fig. 1, the CDWs are depinned and start to slide. A sharp increase in current is observed at $`V_T`$ due to this additional conduction channel. When a transverse current $`I_x`$ is applied, $`V_T`$ decreases and the sliding starts at lower bias voltages. Thus, CDWs that were pinned for $`J_x=0`$ start sliding at lower fields when a transverse current is applied. A new linear regime sets in at bias voltages between the reduced threshold $`V_T(I_x)`$ and $`V_T(I_x=0)`$. The resistance in this regime is lower than the single-particle contribution $`R`$ at $`J_x=0`$. This makes the effect easily distinguishable from heating: since most of the measurements were carried out at temperatures at which $`dR/dT>0`$, heating would result in a higher single-particle resistance. The threshold field reduction is more strikingly visible in the differential resistance measurements, shown in Fig. 2. The differential resistance at low bias fields, due to uncondensed electrons and excited quasiparticles, is mostly unaffected by the transverse current. The onset of CDW sliding, characterized by a sharp drop in differential resistance, is shifted towards zero as $`I_x`$ is increased. The same reduction of the threshold field is also observed for negative bias voltages, and the plots are nearly symmetric around $`V=0`$. We have found no differences when changing the sign of either the longitudinal current or the transverse current.
The reduction of the sliding threshold does not occur for arbitrarily small transverse currents. The dependence of the threshold field $`E_T`$ on the transverse current density $`J_x`$ for two samples is shown in Fig. 3. It is evident that $`E_T`$ remains unchanged until $`J_x`$ reaches some crossover value $`J_c`$. For $`J_x>J_c`$, $`E_T`$ decreases with increasing $`J_x`$. The transverse current density dependence of the threshold field $`E_T`$ for $`J_x>J_c`$ can be fit by:

$$E_T(J_x)=E_T(0)\,\frac{J_x}{J_c}\,\mathrm{exp}\left(1-\frac{J_x}{J_c}\right),$$ (1)

where $`E_T(0)`$ is the threshold field at $`J_x=0`$. Once the crossover value of the transverse current $`J_c`$ is exceeded, the depinning threshold field decreases and the CDW conduction channel is activated by much lower bias voltages. The observation of a crossover current $`J_c`$ rules out the possibility that the threshold field reduction is due to current inhomogeneities around the transverse contacts. If the changes in $`E_T`$ were due to a longitudinal component of an inhomogeneous transverse current, then such changes would be apparent at any value of $`J_x`$, and no $`J_c`$ would be observed. Furthermore, it is not clear that such inhomogeneities would lead to the observed exponential reduction of $`E_T`$. The exponential decrease of the threshold field described by Eq. (1) has recently been predicted by Radzihovsky and Toner . In their model, the value of the crossover current density $`J_c`$ needed for the initial suppression of $`E_T`$ is expected to be proportional to the value of the threshold field at $`J_x=0`$, and is given by :

$$J_c\sim\sigma_0E_T(0)\,(\xi_Lk_F)\,(\rho_n/\rho_{CDW}),$$ (2)

where $`\sigma_0`$ is the conductivity at very high bias fields, $`k_F`$ is the Fermi wave vector, and $`\rho_n`$ and $`\rho_{CDW}`$ are the normal and CDW electron densities, respectively. The correlation length $`\xi_L`$ is a measure of the coherence in the sample and decreases with increasing disorder. $`E_T`$ is known to be temperature dependent, following $`E_T=E_T(0)\,e^{-T/T_0}`$ , where $`T`$ is the temperature and $`T_0`$ is a constant. The dependence of $`J_c`$ on $`E_T(0)`$ can therefore be studied by measuring at different temperatures. The dependence of $`J_c`$ on $`\sigma_0E_T(0)`$ is shown in the inset of Fig. 3: $`J_c`$ grows linearly with $`\sigma_0E_T(0)`$, and it extrapolates to zero for $`E_T(0)=0`$. The crossover current densities of $`10^3`$–$`10^4`$ A/cm² estimated from Eq. (2) are in excellent agreement with the values measured in our experiment. We have shown that the conduction in the CDW channel can be enhanced by a single-particle current flowing transversely to the CDW sliding direction. This surprising behavior has been observed in samples with different geometries, at different temperatures, and in both CDW regimes of NbSe₃, suggesting that it is a general property of CDW transport. The dynamical model of Radzihovsky and Toner provides a physical origin for this effect: the CDWs become more ordered due to momentum transfer with the transversely moving normal carriers.
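Before describing that mechanism, note that Eq. (1), together with the observation that $`E_T`$ is flat below the crossover, amounts to the simple piecewise model sketched below. The numerical values of $`E_T(0)`$ and $`J_c`$ are placeholders of the order quoted in the text, not fitted parameters.

```python
import numpy as np

def E_T(J_x, E_T0=1.0, J_c=5e3):
    """Threshold field vs transverse current density.

    Flat for J_x < J_c; Eq. (1) for J_x > J_c. The two branches match
    continuously at J_x = J_c, where Eq. (1) gives exactly E_T(0).
    """
    J_x = np.asarray(J_x, dtype=float)
    suppressed = E_T0 * (J_x / J_c) * np.exp(1.0 - J_x / J_c)
    return np.where(J_x < J_c, E_T0, suppressed)

print(E_T([0.0, 2.5e3, 5e3, 1e4, 2e4]))  # exponential drop beyond J_c
```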
This mechanism is illustrated in Fig. 4. In the absence of defects, the charge density wave fronts are straight and parallel to each other (left side of the picture). The single-particle transverse current, marked by “a” in Fig. 4, can flow with little or no interaction with the CDW. In the presence of defects or impurities in the crystal, the CDW deforms to lower its energy and the wavefronts are “wrinkled” (right side of the picture). In this case, the transversely moving electrons (“b”) are more likely to be deflected. The conservation of linear momentum results in a reaction force back on the CDW. In this way the CDW roughness is reduced, as the CDW wavefronts are straightened out or “ironed” by the transverse current. The CDW transport across the sample is therefore more coherent and less susceptible to pinning. The lower pinning strength then leads to a lower threshold field. Since the conduction in the CDW channel can be modulated by a current in the single-particle channel, this device in principle works as a transistor, raising the question of a possible practical application. The maximum gain observed in our experiments was $`\Delta I/I_x=0.15`$. A simple estimate from our measurements suggests that the maximum gain is proportional to $`\xi_L^{-1}`$. The gain can therefore be improved by using dirtier crystals, or smaller samples in which $`\xi_L`$ is limited by the sample size. Apart from being intriguing in their own right as an important test of the theory, our results may provide useful insight into related phenomena which are much more difficult to study experimentally. As mentioned above, this novel effect is relevant to a variety of other periodic systems which share the same symmetries and a similar geometry. A particularly interesting example might be the “striped phases” in superconducting oxides, whose role in high-$`T_c`$ superconductivity is still not resolved. The authors are grateful to Yu. Latyshev and P. Monceau for providing the crystal, and to L. Radzihovsky, Yu. Nazarov and S. Zaitsev-Zotov for useful discussions. This work was supported by the Netherlands Foundation for Fundamental Research on Matter (FOM). HSJvdZ was supported by the Dutch Royal Academy of Arts and Sciences (KNAW).
# Triplet Waves in a Quantum Spin Liquid ## Abstract We report a neutron scattering study of the spin-1/2 alternating bond antiferromagnet $`\mathrm{Cu}(\mathrm{NO}_3)_2\cdot 2.5\mathrm{D}_2\mathrm{O}`$ for $`0.06<k_BT/J_1<1.5`$. For $`k_BT/J_1\ll 1`$ the excitation spectrum is dominated by a coherent singlet-triplet mode centered at $`J_1=0.442(2)`$ meV with sinusoidal dispersion and a bandwidth of $`J_2=0.106(2)`$ meV. A complete description of the zero temperature contribution to the scattering function from this mode is provided by the Single Mode Approximation. At finite temperatures we observe exponentially activated band narrowing and damping. The relaxation rate is thermally activated and wave vector dependent with the periodicity of the reciprocal lattice. Transverse phonons and spin waves are propagating small amplitude oscillations of a static order parameter in a broken symmetry phase. Isotropic quantum antiferromagnets with a gap in their excitation spectra can also support coherent wave-like excitations, but these differ from phonons and spin waves in that they move through a system with no static order. Specific examples of such systems include the spin-1 chain, even-leg ladders and the alternating bond spin-1/2 chain. Since they are not based on the existence of a static order parameter that sets in at a well defined transition temperature, coherent excitations in these systems are expected to emerge smoothly with decreasing temperature as short-range correlations develop. In this letter we document this unique cooperative behavior through an experimental study of the temperature dependence of magnetic excitations in an isotropic, order-parameter-free quantum magnet. Specifically, we have studied magnetic excitations in the alternating spin-1/2 chain Copper Nitrate (CN) as a function of wave vector, energy, and temperature. The spin Hamiltonian for this system can be written

$$\mathcal{H}=\sum_n\left(J_1\,𝐒_{2n}\cdot 𝐒_{2n+1}+J_2\,𝐒_{2n+1}\cdot 𝐒_{2n+2}\right).$$ (1)

Because $`J_2/J_1\simeq 0.24`$ is small, it is useful to think of CN as a chain of pairs of spins-1/2. Each pair has a singlet ground state separated from a triplet at $`J_1\simeq 0.44`$ meV. The weak inter-dimer coupling ($`J_2\simeq 0.11`$ meV) yields a collective singlet ground state at low temperatures, with triplet excitations that propagate coherently along the chain. We have characterized dynamic spin correlations in the temperature range $`0.06<k_BT/J_1<1.5`$ in considerable detail. We find that heating yields thermally activated band narrowing and an increased relaxation rate that varies with wave vector transfer with the periodicity of the reciprocal lattice. CN ($`\mathrm{Cu}(\mathrm{NO}_3)_2\cdot 2.5\mathrm{D}_2\mathrm{O}`$) is monoclinic (space group $`\mathrm{I12}/\mathrm{c1}`$) with low-$`T`$ lattice parameters $`a=16.1`$ Å, $`b=4.9`$ Å, $`c=15.8`$ Å, and $`\beta=92.9^{\circ}`$. The vector connecting dimers center to center is $`𝐮_0=[111]/2`$ for half the chains, and $`𝐮_0^{\prime}=[1\overline{1}1]/2`$ for the other half. The corresponding intra-dimer vectors are $`𝐝_1=[0.252,\pm 0.027,0.228]`$, respectively. In our experiment, the wave vector transfer $`𝐐`$ was perpendicular to $`𝐛`$, so the two sets of chains contributed equally to magnetic neutron scattering. The sample consisted of four 92% deuterated, co-aligned single crystals with total mass 14.09 g.
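As an aside before describing the measurements, the dimer picture behind Eq. (1) can be checked by exact diagonalization of a short open chain. The sketch below is an illustration only; the chain length, open boundaries, and use of dense matrices are choices of convenience. It uses the exchange constants fitted later in the paper and returns a singlet-triplet gap near the bottom of the one-triplet band, roughly $`J_1-J_2/2`$.

```python
import numpy as np

J1, J2 = 0.442, 0.106  # meV, the fitted intra-chain couplings
N = 8                  # sites, i.e. four dimers (2^8 = 256 states)

sx = np.array([[0.0, 0.5], [0.5, 0.0]])
sy = np.array([[0.0, -0.5j], [0.5j, 0.0]])
sz = np.array([[0.5, 0.0], [0.0, -0.5]])

def bond(op, i, n):
    """Embed op on sites (i, i+1) of an n-site chain via Kronecker products."""
    ops = [np.eye(2)] * n
    ops[i], ops[i + 1] = op, op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

H = np.zeros((2**N, 2**N), dtype=complex)
for i in range(N - 1):
    J = J1 if i % 2 == 0 else J2      # alternating bonds of Eq. (1)
    for s in (sx, sy, sz):
        H += J * bond(s, i, N)

E = np.linalg.eigvalsh(H)
print("singlet-triplet gap:", E[1] - E[0])  # ~ J1 - J2/2, up to finite size
```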
Inelastic neutron scattering measurements were performed on the inverse geometry time-of-flight spectrometer IRIS at the Rutherford Appleton Laboratory, UK . Disk choppers selected an incident spectrum from 1.65 meV to 3.25 meV pulsed at 50 Hz, and a backscattering pyrolytic graphite analyzer bank selected a final energy $`E_f=1.847`$ meV. The Half Width at Half Maximum (HWHM) elastic energy resolution was $`10.5\,\mu`$eV. The $`𝐛`$ direction of CN was perpendicular to the horizontal scattering plane and pointed towards the low angle part of the analyzer bank at an angle of $`\varphi=20(1)^{\circ}`$ to the direct beam. The direction for rotation of the $`𝐚`$ axis into the $`𝐜`$ axis coincided with the direction of decreasing scattering angle. In this configuration the projection of wave vector transfer on the chain, $`Q_{\parallel}=k_i\cos\varphi-k_f\cos(\varphi-2\theta)`$, takes on a unique value for each detector in the range of scattering angles $`20^{\circ}<2\theta<160^{\circ}`$ covered. We can therefore present our data as a function of energy transfer $`\hbar\omega`$ and wave vector transfer along the chain, $`\tilde{q}=𝐐\cdot 𝐮_0`$. In addition, there is a specific value of $`Q_{\perp}=k_i\sin\varphi-k_f\sin(\varphi-2\theta)`$ associated with each $`(\tilde{q},\hbar\omega)`$ point, such that sensitivity to dispersion perpendicular to the chain is maintained in the projection. Count-rates were normalized to incoherent elastic scattering from the sample to provide absolute measurements of $`\tilde{I}(𝐐,\omega)=|\frac{g}{2}F(Q)|^2\,2\mathcal{S}(𝐐,\omega)`$. Here $`g=\sqrt{(g_b^2+g_{\perp}^2)/2}=2.22`$ , $`F(Q)`$ is the magnetic form factor for Cu²⁺ , and $`\mathcal{S}(𝐐,\omega)`$ is the scattering function. Figures 1(a)-(c) show the measured neutron scattering spectrum at $`T=0.3`$ K, 2 K, and 4 K. Focusing at first on the 0.3 K data, we observe a resonant mode centered around $`J_1=0.44`$ meV with bandwidth $`J_2`$, consistent with the predictions of perturbation theory . The mode energy has the periodicity $`2\pi`$ of the one-dimensional reciprocal lattice, with minima for $`\tilde{q}=𝐐\cdot 𝐮_0=2n\pi`$, indicating antiferromagnetic inter-dimer interactions. The intensity of the mode varies with a periodicity that is incommensurate with that of the dispersion relation. There is a simple explanation for this, namely that the intra-dimer spacing that enters in the neutron scattering cross section is incommensurate with the period of the alternating spin chain. An exact sum-rule for $`\mathcal{S}(𝐐,\omega)`$ provides the following direct link between the microscopic structure and the intensity of inelastic neutron scattering:

$$\langle\hbar\omega\rangle_{𝐐}\equiv\hbar^2\int_{-\infty}^{\infty}\omega\,\mathcal{S}(𝐐,\omega)\,d\omega$$ (2)

$$=-\frac{2}{3}\sum_{𝐝}J_{𝐝}\,\langle 𝐒_0\cdot 𝐒_{𝐝}\rangle\,(1-\cos 𝐐\cdot 𝐝),$$ (3)

where $`\{𝐝\}`$ is the set of all bond vectors connecting a spin to its neighbors. Figure 2(a) shows $`\langle\hbar\omega\rangle_{𝐐}`$ at $`T=0.3`$ K and $`T=4`$ K, derived by integrating the corresponding data sets in Fig. 1. The solid lines show fits based on Eq. 2 including only the intra-dimer correlation $`\langle 𝐒_0\cdot 𝐒_{𝐝_1}\rangle`$ and a constant to account for multiple scattering (see below). The excellent agreement between model and data provides direct evidence for singlet formation between spins separated by $`𝐝_1`$. The modulation amplitude is proportional to $`J_1\langle 𝐒_0\cdot 𝐒_{𝐝_1}\rangle`$. From the wave vector dependence of energy integrated intensities we turn to spectra at fixed wave vector transfer.
Figure 3 shows cuts through raw data for $`\tilde{q}=2\pi`$ and $`\tilde{q}=3\pi`$ at three temperatures. At $`T=0.3`$ K we see resolution limited peaks. Upon increasing temperature these peaks broaden and shift towards the center of the band. Careful inspection also reveals that the higher energy peak at $`\tilde{q}=3\pi`$ broadens less than the lower energy peak at $`\tilde{q}=2\pi`$. From gaussian fits to constant energy cuts such as these, we extract the resonance energy and half width at half maximum versus wave vector transfer shown in Fig. 2(c) and (d). This analysis shows that the finite-$`T`$ relaxation rate is wave vector dependent, with the apparent periodicity of the reciprocal lattice. We now proceed to extract the detailed temperature dependence of the integrated intensity, the effective bandwidth and the relaxation rate. To take full advantage of the wide sampling of $`𝐐-\omega`$ space in our experiment and to account for resolution effects, the analysis is based on “global fits” of the following phenomenological form for $`\mathcal{S}(𝐐,\omega)`$ to the complete $`𝐐`$ and $`\omega`$ dependent data set at each temperature. For $`\hbar\omega>0`$ we write

$$\mathcal{S}(𝐐,\omega)=\frac{\langle\hbar\omega\rangle_{𝐐}}{\epsilon(𝐐)}\,\frac{1}{1-\mathrm{exp}(-\beta\epsilon(𝐐))}\,f(\hbar\omega-\epsilon(𝐐)),$$ (4)

where $`f(E)`$ is a normalized spectral function. The other terms implement the first moment sum-rule of Eq. 2 in the limit where $`f(E)`$ is sharply peaked on the scale of $`J_1`$. Eq. 4 represents the “Single Mode Approximation” (SMA) that has been used with success to link the equal time structure factor and the dispersion relation for collective modes in numerous many body systems. Given that two magnon scattering carries less than 1% of the spectral weight for CN, the SMA should be excellent at sufficiently low $`T`$. For the dispersion relation we use the following variational form, based on first order perturbation theory:

$$\epsilon(𝐐)=J_1-\frac{1}{2}\sum_{𝐮}J_{𝐮}\cos 𝐐\cdot 𝐮.$$ (5)

The vectors $`\{𝐮\}`$ connect neighboring dimers center to center, both within and between the chains. For comparing to the experimental data, $`\mathcal{S}(𝐐,\omega)`$ was convolved with the instrumental resolution. In all fits it was necessary to take into account multiple neutron scattering events involving elastic incoherent nuclear scattering followed or preceded by coherent inelastic magnetic scattering. We believe that such processes are responsible for the weak horizontal band of intensity that is visible in Fig. 1(a). For $`T=0.3`$ K we used $`f(E)=\delta(E)`$ and obtained the fit shown in Fig. 1(d). To better evaluate the quality of the global fit we also show cuts through the model calculation as solid lines in Fig. 3(a). There is excellent agreement between model and data, with an overall prefactor and four exchange constants as the only fit parameters. The prefactor refined to $`\langle 𝐒_0\cdot 𝐒_{𝐝_1}\rangle=-0.9(2)`$, consistent with the value of $`-3/4`$ expected for isolated singlets. Allowing for inter-dimer correlations by including the corresponding term from Eq. 2 in the global fit yields $`\langle 𝐒_0\cdot 𝐒_{𝐝_2}\rangle=0.04(8)`$. From the best fit parameters in the variational dispersion relation we get $`J_1=0.442(2)`$ meV and $`J_2=0.106(2)`$ meV for the two intra-chain interactions, and $`J_L=0.012(2)`$ meV and $`J_R=0.018(2)`$ meV for dimers separated by $`𝐮_L=[\frac{1}{2},0,0]`$ and $`𝐮_R=[0,0,\frac{1}{2}]`$, respectively.
Because $`𝐛\perp 𝐐`$ throughout the experiment, our data do not yield an estimate for inter-chain interactions between dimers displaced by $`[111]/2`$. However, in a separate measurement with the crystal oriented in the $`(hkh)`$ plane, we were able to place an upper limit of 0.02 meV on the corresponding parameter in Eq. 5. The value $`J_2/J_1=0.240(5)`$ that we obtain is measurably smaller than the value $`J_2/J_1\simeq 0.27`$ derived from magnetic susceptibility data. A likely explanation for this discrepancy is that susceptibility measurements cannot distinguish intra-chain from inter-chain interactions. To analyze the finite-$`T`$ data we replaced the spectral function with a normalized Gaussian with HWHM

$$\Gamma(\tilde{q})=\Gamma_0+\frac{\Gamma_1}{2}\cos\tilde{q}.$$ (6)

The functional form for the dispersion relation (Eq. 5) was maintained, but to account for the bandwidth narrowing that is apparent in the raw data, we introduced an overall renormalization parameter relating finite-$`T`$ “effective” exchange constants in the variational dispersion relation to the bare, temperature independent exchange parameters: $`\tilde{J}_{𝐮}=n(T)J_{𝐮}`$. The fits obtained are excellent, as can be ascertained by comparing the left and right columns in Fig. 1 and the solid lines through the data in Fig. 3. The solid lines in Fig. 2(b) and (c) show the dispersion relation and the $`\tilde{q}`$-dependent relaxation rate derived from the global fits. They are consistent with the data points derived from constant-$`\tilde{q}`$ cuts, indicating that the variational forms employed for $`\epsilon(\tilde{q})`$ and $`\Gamma(\tilde{q})`$ have not biased the global fitting analysis. The temperature-dependent parameters derived from this analysis are shown in Fig. 4. The prefactor for global fits at each temperature yields the intra-dimer spin correlation function, which we plot versus $`T`$ in Fig. 4(a). As expected, $`|\langle 𝐒_0\cdot 𝐒_{𝐝_1}\rangle|`$ decreases with increasing $`T`$ as the populations of the four states of each spin pair equalize. For an isolated spin pair it can be shown that $`\langle 𝐒_0\cdot 𝐒_{𝐝}\rangle=-(3/4)\Delta n(\beta J_1)`$, where $`\Delta n(\beta J_1)=(1-e^{-\beta J_1})/(1+3e^{-\beta J_1})`$ is the singlet triplet population difference. After fitting a scale factor, this form provides an excellent description of the temperature dependence of the data (solid line in Fig. 4(a)). The Random Phase Approximation (RPA) applied to interacting spin dimers predicts that the bandwidth renormalization factor, $`n(T)`$, also follows the singlet triplet population difference. A similar result holds when the singlet ground state is induced by single ion anisotropy. We compare $`n(T)`$ to $`\Delta n(\beta J_1)`$ in Fig. 4(b). While there is qualitative agreement, the RPA clearly predicts more bandwidth narrowing than is actually observed. Triplet relaxation is due to scattering from the thermal ensemble of excited states. Because the density of triplets is thermally activated, the relaxation rates should be too.
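A one-dimensional cut through the phenomenological lineshape of Eqs. (4)-(6) is simple to evaluate. The sketch below drops the inter-chain terms in Eq. 5, replaces the first-moment prefactor by a constant amplitude $`A`$, and uses placeholder values for $`\Gamma_0`$, $`\Gamma_1`$ and $`T`$; only $`J_1`$ and $`J_2`$ are taken from the fits.

```python
import numpy as np

kB = 0.08617  # meV / K

def eps(q, J1=0.442, J2=0.106):
    return J1 - 0.5 * J2 * np.cos(q)       # 1D cut of Eq. 5

def gamma(q, G0=0.03, G1=0.02):
    return G0 + 0.5 * G1 * np.cos(q)       # HWHM, Eq. 6

def S(q, w, T=4.0, A=1.0):
    """SMA lineshape of Eq. 4 with a normalized Gaussian spectral function."""
    e, hwhm = eps(q), gamma(q)
    sigma = hwhm / np.sqrt(2.0 * np.log(2.0))
    f = np.exp(-0.5 * ((w - e) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    balance = 1.0 / (1.0 - np.exp(-e / (kB * T)))
    return (A / e) * balance * f

w = np.linspace(0.3, 0.6, 301)             # energy transfer in meV
peak_2pi = S(2 * np.pi, w)                 # broader, lower-energy peak
peak_3pi = S(3 * np.pi, w)                 # narrower, higher-energy peak
```

Note that with $`\Gamma_1>0`$, Eq. 6 automatically reproduces the observation above that the $`\tilde{q}=3\pi`$ peak is narrower than the $`\tilde{q}=2\pi`$ peak.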
Fits of a simple activated form $`\Gamma_i=\gamma_i\,\mathrm{exp}(-\Delta_i/k_BT)`$ to the $`T\le 4`$ K data in Figs. 4(c) and (d) give $`\Delta_0=0.24(2)`$ meV, $`\gamma_0=0.10(2)`$ meV, $`\Delta_1=0.32(3)`$ meV, and $`\gamma_1=0.08(2)`$ meV. If we instead fix $`\Delta_i\equiv J_1`$ and allow for a power law prefactor, $`\Gamma_i(T)=\gamma_i(J_1/k_BT)^{\alpha_i}\mathrm{exp}(-J_1/k_BT)`$, we also obtain excellent fits, with $`\alpha_0=1.0(2)`$, $`\gamma_0=0.13(4)`$ meV, $`\alpha_1=0.6(2)`$, and $`\gamma_1=0.10(1)`$ meV. In summary, we have examined dynamic spin correlations in the strongly alternating spin chain $`\mathrm{Cu}(\mathrm{NO}_3)_2\cdot 2.5\mathrm{D}_2\mathrm{O}`$ for $`0.06<k_BT/J_1<1.5`$. For $`k_BT\ll J_1`$ we find a coherent dispersive triplet mode whose contribution to the scattering function is perfectly accounted for by the SMA, and we have determined accurate values for inter-dimer interactions in the material. Upon heating, the sharp dispersive mode gradually deteriorates through band narrowing and the development of a wave-vector dependent lifetime. A semiclassical theory for finite-$`T`$ excitations in gapped spin chains was recently developed by Sachdev and Damle. It relies on $`\Delta/J`$ being a small parameter, as is the case for the Haldane phase of spin-1 chains and in weakly dimerized spin-1/2 chains. In CN the spin gap is instead much greater than the magnetic bandwidth. Our data should provide a focus for theoretical attempts to describe finite temperature properties in this “strong coupling” limit. It is a pleasure to acknowledge the assistance provided by the staff of the RAL during the measurements, and we thank R. Eccleston for illuminating discussions. We also thank W. Wong-Ng for help with characterizing crystals, and R. Paul for neutron activation analysis at NIST. Work at JHU was supported by the NSF through DMR-9453362 and DMR-9357518. DHR acknowledges support from the David and Lucile Packard Foundation.
# An Update on NA50 and LUCIFER ## I J/$`\psi `$ Suppression in ‘99 This brief note is presented to clear up any misinformation that may have been created by the NA50 contribution to QM’99 or to the Proceedings of the International School of Nuclear Physics, Erice, 17-25 September, 1998 . The QM’99 document contains new data from the NA50 collaboration, taken in late 1998 with a thinner target than used previously. The total data sample is reanalysed and compared to a variety of theoretical simulations. In at least one case this comparison was not straightforwardly made . There are two noteworthy features of the combined new data set. One is the striking absence of the discontinuity in the $`E_t`$ spectrum which first appeared in the 1996 measurements for J/$`\psi `$ from Pb+Pb. The second is the very evident change in the $`E_t`$ scale between the present and several earlier submissions. Not much is said in the QM’99 NA50 presentation by way of explaining either of these changes, although presumably the ‘minimum-bias’ smoothing procedure, introduced in QM’99, accomplishes the removal of the discontinuity. The lack of such a singularity was of course anticipated in any theoretical calculation based on cascade-like simulations. Only theories proposing some or other phase change have introduced such singular behaviour, and it must be said, generally have done so in a rather ad hoc fashion. It is in fact not clear to what extent discontinuities can persist in finite systems, even when a ‘change of phase’ is present in an infinite medium. The change of scale for transverse energy was also anticipated in at least one theoretical work . In the present authors’ discussion of J/$`\psi `$ suppression a scale factor was clearly referred to in these two publications . In Reference this was done specifically in the caption to the figure describing the comparison with NA50 for Pb+Pb at $`158`$ GeV (Figure 15 in , Figure 1 below). At that point it was noted that an $`E_t`$ scale factor had been introduced to reconcile the experimental and theoretical spectra. Again in the earlier Reference , in the caption to Figure 11, the reader is referred to the text for a discussion of the $`E_t`$ scale, wherein it is stated that a scaling factor of $`1.25`$ was employed. Very clearly then, the present authors gave warning that the Pb+Pb transverse energy scale achieved with LUCIFER was not in accord with that presented by the NA50 collaboration. Reference was also made in both publications to private communications with the NA50 collaboration. These communications led to the necessity of a scale change and revealed that the ‘Collaboration’ did not at that time actually have good knowledge of the absolute $`E_t`$ scale. Indeed, a figure of $`125`$ GeV for the end point was cited as a reasonable alternative to the heretofore published value of 150-160 GeV . We are of course pleased if the absolute $`E_t`$ is now better understood by NA50 and for the moment, at least, is closer to our estimate. In fact, the scale factor we used to compare the LUCIFER calculation with experiment also took account of the seeming 5–10% discrepancy in cutoff between the full $`E_t`$ spectra of NA49 and LUCIFER (see Figure 13 in Reference , which is Figure 2 here). The agreements between simulations and both the NA49 inclusive meson and baryon spectra and this NA49 $`E_t`$ spectrum suggest an inconsistency with the cutoff earlier quoted for NA50 data.
The theoretical calculation acts as an interpolation between experiments and predicted a cutoff transverse energy nearer $`120`$ GeV than $`150`$. Thus the overall factor was close to $`1.30`$, i.e. the factor between the LUCIFER and NA50 ’96 $`E_t`$ scales. To further clear up any possible misapprehensions, we present here the earlier Figures 13 and 15 from Reference (Figures 1 and 2 here) and a new Figure 3 comparing our $`E_t`$-unscaled spectrum for Pb+Pb with one of the recent NA50 figures (Figure 57 in Kluberg QM’99). This NA50 plot is apparently obtained by rebinning from more complete spectra, but the overall effect is the same as graphing the totality of $`E_t`$ measurements. It is clear that some of our last J/$`\psi `$ to Drell-Yan values, at the end of the previous $`E_t`$ scaled calculation, appear slightly more suppressed. No significance can be attached to this, since any cascade is necessarily an inexact theory, to say the least, and our normalisations are subject to some error from taking a ratio to Drell-Yan, perhaps $`7\%`$ or less. It would have been better to compare the theoretical survival probability directly to some experimental estimate of this quantity. The calculated survival rates are unchanged from our previous calculation. Of course none of these normalisation problems attach to the comparison with minimum bias J/$`\psi `$ production, which in both simulation and experiment in principle uses absolute cross-sections. Our explanation of the anomalous suppression there remains in place. If anything, in this presentation of ‘unscaled $`E_t`$,’ we have exaggerated a small discrepancy at peripheral $`E_t`$, where unfortunately one does not expect any unusual or ‘plasma-like’ behaviour, and one must keep in mind the NA50 caveat concerning their absolute $`E_t`$ scale. We must still conclude that present deviations from NA50 do not justify any claims for startling medium-based effects. One might well argue that the breakup of a small object like the J/$`\psi `$ could never be ascribed to screening by a plasma. Ultimately, dissolution of the J/$`\psi `$ must result from gluon exchange interactions between quarks initially in hadrons and in the $`c\overline{c}`$ preresonant pair. It is probably hard for the charmonium state to distinguish between three quarks in a nucleon, say, and the same three quarks somewhat spread out as in a plasma. There is no true continuous medium which can permeate the bound or preresonant charmonium state. The cascade theory can hardly be called ad hoc, as it is described in Reference . An attempt is simply made to incorporate as much information as is known from the elementary hadronic data in a comprehensive multi-scattering formalism. In our comparison with inclusive NA49 Pb+Pb spectra only a single intrinsic parameter of the model was determined from ion-ion data, i.e. the formation time for secondary mesons , and that was obtained from the light system S+S. The resulting good description of the NA49 Pb+Pb spectra surely removes the theory from any ad hoc category. The same cannot be said for the ‘Glauber’ calculations which yield neither inclusive meson spectra nor direct $`E_t`$ distributions, and which nevertheless were used to justify the inability of standard theory to explain the suppression in Pb+Pb.
In particular, close to the correct number of produced mesons is achieved in the LUCIFER simulation, and thus the breakup of charmonium states by these comovers is appropriately estimated, a feature intimately tied to the theory’s correct evaluation of the total transverse energy. For the purposes of calculating J/$`\psi `$ suppression, one requires other cross-sections. Breakup of J/$`\psi `$ from its collisions with baryons is determined from the nucleon-nucleus production data; breakup cross-sections on mesons are essentially taken as $`2/3`$ of that on nucleons. We indicated that breakup in meson-charmonium collisions mostly takes place well above threshold, so the latter estimate is likely good. It is incumbent on those proposing the production of ‘plasma’ in their measurements to demonstrate a clear deviation from the normal ‘background’ a cascade provides. This will prove as necessary at RHIC as it was at the SPS. ## II Acknowledgments The authors are grateful to Boris Kopeliovitch for several illuminating discussions. The present manuscript has been authored under US DOE grant No. DE-AC02-98CH10866. One of us (SHK) is pleased to acknowledge continuing support from the Alexander von Humboldt Foundation, Bonn, Germany.
# Perturbative superluminal censorship and the null energy condition ## Introduction The relationship between the causal aspects of spacetime and the stress-energy of the matter that generates the geometry is a deep and subtle one. In this note, which is a simplified presentation based on our earlier work Non-perturbative , we shall focus in somewhat more detail on the perturbative investigation of the connection between the null energy condition (NEC) and the light-cone structure. We shall demonstrate that in linearized gravity the NEC always forces the light cones to contract (narrow): thus the validity of the NEC for ordinary matter implies that in weak gravitational fields the Shapiro time delay is always a delay rather than an advance. This simple observation has implications for the physics of (effective) faster-than-light (FTL) travel via “warp drive”. It is well established, via a number of rigorous theorems, that any possibility of effective FTL travel via traversable wormholes necessarily involves NEC violations Morris-Thorne ; MTY ; Visser ; HV . On the other hand, for effective FTL travel via warp drive (for example, via the Alcubierre warp bubble Alcubierre , or the Krasnikov FTL hyper-tube Krasnikov ) NEC violations are observed in specific examples, but it is difficult to prove a really general theorem guaranteeing that FTL travel implies NEC violations Non-perturbative . Part of the problem arises in even defining what we mean by FTL, and recent progress in this regard is reported in Non-perturbative ; Olum . In this note we shall (for pedagogical reasons) restrict attention to weak gravitational fields and work perturbatively around flat Minkowski spacetime. One advantage of doing so is that the background Minkowski spacetime provides an unambiguous definition of FTL travel. A second advantage is that the linearized Einstein equations are simply (if formally) solved via the gravitational Liénard–Wiechert potentials. The resulting expression for the metric perturbation provides information about the manner in which light cones are perturbed. ## Linearized gravity For a weak gravitational field, linearized around flat Minkowski spacetime, we can in the usual fashion write the metric as Visser ; MTW ; Wald

$$g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu},$$ (1)

with $`|h_{\mu\nu}|\ll 1`$. Then adopting the Hilbert–Lorentz gauge (aka Einstein gauge, harmonic gauge, de Donder gauge, Fock gauge)

$$\partial_{\nu}\left[h^{\mu\nu}-\frac{1}{2}\eta^{\mu\nu}h\right]=0,$$ (2)

the linearized Einstein equations are Visser ; MTW ; Wald

$$\Delta h_{\mu\nu}=-16\pi G\left[T_{\mu\nu}-\frac{1}{2}\eta_{\mu\nu}T\right].$$ (3)

This has the formal solution Visser ; MTW ; Wald

$$h_{\mu\nu}(\vec{x},t)=16\pi G\int d^3y\,\frac{\left[T_{\mu\nu}(\vec{y},\tilde{t})-\frac{1}{2}\eta_{\mu\nu}T(\vec{y},\tilde{t})\right]}{|\vec{x}-\vec{y}|},$$ (4)

where $`\tilde{t}`$ is the retarded time $`\tilde{t}=t-|\vec{x}-\vec{y}|`$. These are the gravitational analog of the Liénard–Wiechert potentials of ordinary electromagnetism, and the integral has support on the unperturbed backward light cone from the point $`\vec{x}`$. In writing down this formal solution we have tacitly assumed that there is no incoming gravitational radiation. We have also assumed that the global geometry of spacetime is approximately Minkowski, a somewhat more stringent condition than merely assuming that the metric is locally approximately Minkowski.
Finally, note that the fact that we have been able to completely gauge-fix Einstein gravity in a canonical manner is essential to the argument. That we can locally gauge-fix to the Hilbert–Lorentz gauge is automatic. By the assumption of asymptotic flatness implicit in linearized Einstein gravity, we can apply this gauge at spatial infinity, where the only remaining ambiguity, after we have excluded gravitational radiation, is that of the Poincare group. (That is: solutions of the Hilbert–Lorentz gauge condition, which can be rewritten as $`\partial^2x^{\mu}=0`$, are under these conditions unique up to Poincare transformations.) We now extend the gauge condition inward to cover the entire spacetime, the only obstructions to doing so globally coming from black holes or wormholes, which are excluded by definition. Thus adopting the Hilbert–Lorentz gauge in linearized gravity allows us to assign a canonical flat Minkowski metric to the entire spacetime, and it is the existence of this canonical flat metric that permits us to make the comparisons (between two different metrics on the same spacetime) that are at the heart of the argument that follows. Now consider a vector $`k^{\mu}`$ which we take to be a null vector of the unperturbed Minkowski spacetime:

$$\eta_{\mu\nu}k^{\mu}k^{\nu}=0.$$ (5)

In terms of the full perturbed geometry this vector has a norm

$$k^2\equiv g_{\mu\nu}k^{\mu}k^{\nu}$$ (6)

$$=h_{\mu\nu}k^{\mu}k^{\nu}$$ (7)

$$=16\pi G\int d^3y\,\frac{T_{\mu\nu}(\vec{y},\tilde{t})\,k^{\mu}k^{\nu}}{|\vec{x}-\vec{y}|}.$$ (8)

Now assume the NEC,

$$T_{\mu\nu}k^{\mu}k^{\nu}\geq 0,$$ (9)

and note that the kernel $`|\vec{x}-\vec{y}|^{-1}`$ is positive definite. Using the fact that the integral of an everywhere positive integrand is also positive, we deduce $`g_{\mu\nu}k^{\mu}k^{\nu}\geq 0`$. Barring degenerate cases, such as a completely empty spacetime, the integrand will be positive definite, so that

$$g_{\mu\nu}k^{\mu}k^{\nu}>0.$$ (10)

That is, a vector that is null in the Minkowski metric will be spacelike in the full perturbed metric. Thus the null cone of the perturbed metric must everywhere lie inside the null cone of the unperturbed Minkowski metric. Because the light cones contract, the coordinate speed of light must everywhere decrease. (This is not the physical speed of light as measured by local observers, which, as always in Einstein gravity, is of course a constant.) This does however mean that the time required for a light ray to get from one spatial point to another must always increase compared to the time required in flat Minkowski space. This is the well-known Shapiro time delay, and we see two important points: (1) to even define the delay (delay with respect to what?) we need to use the flat Minkowski metric as a background; (2) the fact that in the solar system it is always a delay, never an advance, is due to the fact that everyday bulk matter satisfies the NEC. (We mention in passing that the strong energy condition \[SEC\] provides a somewhat stronger result: if the SEC holds, then the proper time interval between any two timelike separated events in the presence of the gravitational field is always larger than the proper time interval between these two events as measured in the background Minkowski spacetime.)
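The positivity argument of Eqs. (6)-(10) is easy to check numerically. The sketch below (units $`G=c=1`$; the Gaussian blob, grid, and Monte Carlo weights are arbitrary choices, not taken from the text) evaluates Eq. (8) for a static distribution of NEC-respecting dust, for which $`T_{\mu\nu}k^{\mu}k^{\nu}=T_{00}>0`$ for the null vector $`k=(1,\hat{n})`$, and confirms $`k^2>0`$ at sample field points outside the source.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(2000, 3))     # sample points in the source
T00 = np.exp(-np.sum(pts**2, axis=1) / 0.1)      # positive Gaussian blob
dV = 2.0**3 / len(pts)                           # Monte Carlo volume element

def k_squared(x, G=1.0):
    """Eq. (8) for static dust: 16 pi G times the integral of T00 / |x - y|."""
    r = np.linalg.norm(pts - x, axis=1)
    return 16.0 * np.pi * G * np.sum(T00 / r) * dV

for x in ([2.0, 0.0, 0.0], [0.0, 3.0, 1.0], [1.5, 1.5, 1.5]):
    print(k_squared(np.array(x)) > 0.0)          # True: the cone narrows
```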
Now subtle quantum-based violations of the NEC are known to occur Visser:ANEC , but they are always small and are in fact tightly constrained by the Ford–Roman quantum inequalities Ford-Roman ; Pfenning-Ford . There are also classical NEC violations that arise from non-minimally coupled scalar fields Flanagan-Wald , but these NEC violations require Planck-scale expectation values for the scalar field. NEC violations are never appreciable in a solar system or galactic setting. (SEC violations are, on the other hand, relatively common. For example: cosmological inflation, classical massive scalar fields, etc.) From the point of view of warp drive physics, this analysis is complementary to that of Olum (and also to the comments by Coule , regarding energy condition violations and “opening out” the light cones). Though the present analysis is perturbative around Minkowski space, it has the advantage of establishing a direct and immediate physical connection between FTL travel and NEC violations. Generalizing this result beyond the weak field perturbative regime is somewhat tricky Non-perturbative , and we have addressed this issue elsewhere. To even define effective FTL one will need to compare two metrics (just to be able to ask the question “FTL with respect to what?”). Even if we simply work perturbatively around a general metric, instead of perturbatively around the Minkowski metric, the complications are immense: (1) the Laplacian in the linearized gravitational equations must be replaced by the Lichnerowicz operator; (2) the Green function for the Lichnerowicz operator need no longer be concentrated on the past light cone \[physically, there can be back-scattering from the background gravitational field, and so the Green function can have additional support from within the backward light cone\]; and (3) the Green function need no longer be positive definite. For example, even for perturbations around a Friedman–Robertson–Walker (FRW) cosmology, the analysis is not easy FRW-Ford . Because linearized gravity is not conformally coupled to the background, the full history of the spacetime back to the Big Bang must be specified to derive the Green function. From the astrophysical literature concerning gravitational lensing it is known that voids (as opposed to over-densities) can sometimes lead to a Shapiro time advance Advance1 ; Advance2 ; Advance3 . This is not in conflict with the present analysis and is not evidence for astrophysical NEC violations. Rather, because those calculations compare an inhomogeneous universe with a void to a homogeneous FRW universe, the existence of a time advance is related to a suppression of the density below that of the homogeneous FRW cosmology. The local speed of light is determined by the local gravitational potential relative to the FRW background. Voids cause an increase of the speed of photons relative to the homogeneous background. The total time delay along a particular geodesic is, however, affected by two factors: the gravitational potential effect on the speed of propagation, and the geometric effect due to the change in path of the photon (lensing), which may make the total path length longer. Thus traveling through a void does not necessarily imply an advance relative to the background geometry. ## Discussion This note argues that any form of FTL travel requires violations of the NEC. The perturbative analysis presented here is very useful in that it demonstrates that it is already extremely difficult to even get started: any perturbation of flat space that exhibits even the slightest amount of FTL (defined as widening of the light cones) must violate the NEC.
The perturbative analysis also serves to focus attention on the Shapiro time delay as a diagnostic for FTL, and it is this feature of the perturbative analysis we have extended elsewhere to the non-perturbative regime to provide both a non-perturbative definition of FTL Non-perturbative , and a non-perturbative theorem regarding superluminal censorship.
# Magnetic Stress at the Marginally Stable Orbit: Altered Disk Structure, Radiation, and Black Hole Spin Evolution

## 1. Introduction

Early work on black hole accretion disks pointed out the possibility that magnetic stresses might exert a torque on the inner parts of the accretion disk (Page & Thorne 1974, Thorne 1974, Ruffini & Wilson 1975, King & Lasota 1977). However, in virtually every recent account of the dynamics of accretion disks around black holes, it has been assumed that there is no stress at the disk’s inner edge, which should occur very close to the radius of the marginally stable orbit, $`r_{ms}`$. That this should be so was variously argued on the basis that the plunging matter in the region of unstable orbits has too little inertia to affect the disk, or rapidly becomes causally disconnected from the disk, or that such stresses were due to relatively weak transport processes that could not compete with the large gravitational forces pulling matter away from the disk. Recently this view has been questioned (Krolik 1999) on the basis that magnetic fields are the likely agent of torque in accretion disks (Balbus & Hawley 1998). If this is so, and their strength in the plunging region is what would be expected on the basis of flux-freezing, they should be strong enough in that zone to both make the Alfvén speed relativistic (postponing the point of causal decoupling) and exert forces competitive with gravity. If matter inside the marginally stable orbit does, indeed, remain magnetically connected to the disk, it can exert a sizable torque on the portion of the disk containing the field-line footpoints. Gammie (1999) has shown that, within the confines of a highly-idealized model of inflow dynamics, this torque can considerably enhance the amount of energy released in the disk. In fact, even if there were no continuing accretion, field lines attached to the event horizon of a spinning black hole and running through the disk could exert torques of a very similar character (Blandford 1998, D.M. Eardley, private communication). We will call this situation the “infinite efficiency limit.”

A corollary of torque on the inner edge of the disk is an increase in the outward angular momentum flux. In a time-steady state, this additional angular momentum flux must be conveyed by additional stress. Additional local dissipation must accompany the additional stress. It is the principal object of this paper to compute how this dissipation is distributed through the disk, and examine the consequences for observable properties. Time-steady torques at $`r_{ms}`$ are not the only way that energy may be transmitted from the plunging region to the disk: the torque may be variable, it may be delivered over a range of radii, and there may be radial forces exerted that carry no angular momentum. However, in this paper, we will restrict our attention to this simplest possible case.

## 2. The Relativistic Correction Factors

### 2.1. Dissipation as a function of radius

Novikov & Thorne (1973) and Page & Thorne (1974) showed how the surface brightness and vertically-integrated stress in the fluid frame for a time-steady, geometrically thin, relativistic accretion disk could be written as the Newtonian forms multiplied by correction factors that approach unity at large radius.
In the notation of Page & Thorne (1974), conservation of angular momentum is given by

$$\frac{\partial}{\partial r}\left(L^{\dagger}+\frac{C^{1/2}}{B\Omega/r}\,f\right)=L^{\dagger}f,$$ (1)

where $`r`$ is the Boyer-Lindquist radial coordinate, $`L^{\dagger}`$ is the conserved specific angular momentum of a circular orbit at radius $`r`$, $`f`$ is a function of radius defined such that the flux at the disk surface in the fluid frame is $`F=\dot{M}_of/(4\pi r)`$, and $`\dot{M}_o`$ is the rest-mass accretion rate. As usual, $`\Omega`$ is the angular frequency of a circular orbit at radius $`r`$. We also follow Novikov & Thorne (1973) by defining four auxiliary functions:

$$B(x)=1+a_*/x^{3/2},$$ (2)
$$C(x)=1-3/x+2a_*/x^{3/2},$$ (3)
$$D(x)=1-2/x+a_*^2/x^2,$$ (4)
$$F(x)=1-2a_*/x^{3/2}+a_*^2/x^2,$$ (5)

with $`x`$ the radius in units of $`r_g=GM/c^2`$ and $`a_*`$ the dimensionless black hole spin parameter. In the usual approach, the boundary condition on $`f`$ at the radius $`r_{ms}`$ of the marginally stable orbit is $`f_{ms}=0`$. The appropriate boundary condition when there is non-zero stress at $`r_{ms}`$ is

$$f_{ms}=\frac{3}{2}\frac{\Delta\epsilon}{x_{ms}C_{ms}^{1/2}},$$ (6)

where $`C_{ms}=C(r_{ms})`$, and $`\Delta\epsilon`$ is the additional radiative efficiency relative to the one computed in terms of the binding energy at $`r_{ms}`$, $`\epsilon_0`$, so that $`\epsilon=\Delta\epsilon+\epsilon_0`$. This choice of $`f_{ms}`$ ensures that the integrated additional dissipation matches $`\Delta\epsilon`$, and corresponds to a stress

$$W_\varphi^r(r_{ms})=\frac{\Delta\epsilon\,\dot{M}_o}{2\pi r_{ms}\Omega_{ms}}.$$ (7)

We refer to a disk with $`\Delta\epsilon=0`$ as a “Novikov-Thorne disk.” Using this boundary condition, the locally generated surface flux becomes

$$F(x)=\frac{3}{8\pi}\frac{GM\dot{M}_o}{r^3}\left[\frac{x_{ms}^{3/2}C_{ms}^{1/2}\,\Delta\epsilon}{C(x)\,x^{1/2}}+R_R^{NT}(x)\right]$$ (8)

where $`R_R^{NT}(x)`$ is the expression found by Novikov & Thorne (1973). The standard relativistic correction factor $`R_R^{NT}`$ goes to zero as $`x`$ approaches $`x_{ms}`$ from above, so that $`F`$ (when the inner-edge stress is zero) peaks well outside the marginally stable orbit. By contrast, the additional dissipation due to a torque on the inner edge is concentrated very close to $`r_{ms}`$, and is non-zero at the inner edge. The degree of concentration can be quantified by measuring $`r_{1/2}`$, the radius within which fifty percent of the radiation is emitted: the half-light radius is a factor of a few smaller for torque-driven flux than for Novikov-Thorne flux. Figure 1 shows the half-light radius for a Novikov-Thorne disk, $`r_{1/2}^0`$, and an infinite-efficiency disk, $`r_{1/2}^{\infty}`$, as a function of $`a_*`$. In the limit of infinite efficiency or zero accretion rate, the product $`\dot{M}_o\epsilon`$ remains finite, so the first term in equation (8) dominates; in this case, the flux scales as $`r^{-7/2}`$ at large $`r`$ rather than as $`r^{-3}`$ as in the standard thin disk. The expression for the surface flux becomes:

$$F^{(\infty)}(x)=\frac{3}{2}\frac{c^3}{r_g\kappa_T x^{7/2}}\frac{L}{L_{Edd}}\frac{x_{ms}^{3/2}C_{ms}^{1/2}}{C(x)},$$ (9)

where $`L_{Edd}`$ is the Eddington luminosity and $`\kappa_T`$ is the Thomson opacity per unit mass.

Fig. 1. Half-light radii for the Novikov-Thorne disk (dashed line) and infinite-efficiency disk (solid line). Also plotted are $`r_{ms}`$ (dotted line) and the region inside the horizon (shaded).
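As a concrete numerical illustration of equations (2)–(5) and of the torque term in equation (8), here is a small Python sketch (our own, not from the paper; it uses the standard Bardeen, Press & Teukolsky formula for the marginally stable orbit, which the paper assumes but does not write out):

```python
import numpy as np

def x_ms(a):
    """Prograde marginally stable orbit in units of r_g = GM/c^2
    (Bardeen, Press & Teukolsky 1972)."""
    z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = np.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def B(x, a): return 1 + a / x**1.5                      # eq. (2)
def C(x, a): return 1 - 3 / x + 2 * a / x**1.5          # eq. (3)
def D(x, a): return 1 - 2 / x + a**2 / x**2             # eq. (4)
def F(x, a): return 1 - 2 * a / x**1.5 + a**2 / x**2    # eq. (5)

def torque_term(x, a, d_eps):
    """Inner-torque bracket term of eq. (8), dimensionless."""
    xms = x_ms(a)
    return xms**1.5 * np.sqrt(C(xms, a)) * d_eps / (C(x, a) * np.sqrt(x))

a = 0.998
for s in (1.01, 2.0, 10.0):                 # radii in units of x_ms
    x = s * x_ms(a)
    print(f"x = {x:7.3f}  torque term = {torque_term(x, a, 1.0):8.4f}")
```

At large $`x`$ the printed term falls off as $`x^{-1/2}`$ relative to the asymptotic value of $`R_R^{NT}`$, which is the origin of the $`r^{-7/2}`$ versus $`r^{-3}`$ scaling noted above.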
The angular momentum conservation equation corresponding to equation (1) is

$$\int dz\,T_{r\varphi}(z)=\frac{\dot{M}_o\Omega_K(x)}{2\pi}\left[\frac{x_{ms}^{3/2}C_{ms}^{1/2}\,\Delta\epsilon}{D(x)\,x^{1/2}}+R_T^{NT}(x)\right],$$ (10)

where $`R_T^{NT}`$ is the torque correction factor (Novikov & Thorne 1973, Page & Thorne 1974) in the notation of Krolik (1999). To clarify the meaning of the extra dissipation, we will write down equation (8) pretending that gravity is purely Newtonian:

$$F^{(N)}(r)=\frac{3}{8\pi}\frac{GM\dot{M}_o}{r^3}\left[\Delta\epsilon\sqrt{\frac{r_{in}}{r}}\frac{r_{in}}{r_g}+\left(1-\sqrt{\frac{r_{in}}{r}}\right)\right],$$ (11)

where $`r_{in}`$ is the disk inner edge. The second term in the bracket is the usual Shakura-Sunyaev (1973) correction factor, while the first term is derived from the extra torque at the inner edge. This equation (actually first derived by Popham & Narayan 1993) never applies in the relativistic case, but can apply, for example, to a disk around a star where a torque is exerted by the spinning magnetosphere or through a boundary layer, or to a thin disk surrounding a different disk solution, such as an ADAF, where a torque is exerted by the flow inside the transition point.

### 2.2. Returning radiation

To find the surface brightness distribution of the disk as seen by distant observers, it is necessary first to correct the intrinsic surface brightness due to local dissipation for the additional energy supplied by photons originally emitted at a different radius, but returned to the disk by gravity. In the conventional picture, this is a small correction (Cunningham 1976). Here, however, because so much more of the energy is released deep in the relativistic potential, it can be a much greater effect. To compute the additional returning radiation, we followed the method developed by Cunningham (1976), with a few modifications. The numerical method is described in Agol (1997). We compute the flux transfer function, $`T_f`$, by following photons emitted from each radius that return to the accretion disk, assuming the disk surface is flat and the radiation is isotropic in the fluid frame. We ignore the stress carried by these photons (i.e., we set $`T_s=0`$ in Cunningham’s parlance). We also assume that any radiation that returns to the disk inside $`r_{ms}`$ is captured by the black hole; this radiation will be advected or scattered inwards by the inflowing gas, which has a large inward radial velocity. Finally, we (temporarily) assume (as does Cunningham) that the radiation returning to the disk is absorbed and thermalized before being reemitted; this assumption is probably not appropriate in practice, but greatly simplifies computation of the transfer function since in this approximation $`T_f`$ is independent of frequency. We will discuss later how breaking this assumption may change the spectrum. In Figure 2a, we plot versus radius the fraction of emitted radiation which returns to the disk outside $`r_{ms}`$, which enters the black hole or returns to the disk inside $`r_{ms}`$, and which reaches infinity directly, for the cases $`a_*=0.9999`$ and $`a_*=0`$. The fraction reaching infinity and the fraction returning to the disk are nearly independent of the black hole spin. The fraction returning to the disk is greater than 10% for $`r\lesssim 6r_g`$, so when the emitted energy is concentrated inside this radius (see figure 1), returning radiation will play an important role in modifying disk spectra.
For $`r\lesssim 1.5r_g`$, less than half of the radiation reaches infinity directly; most returns to the disk. For $`a_*=0`$, the fraction of radiation which is captured by the black hole or returns inside $`r_{ms}`$ increases since $`r_{ms}`$ is so large; this fraction never exceeds 8%. The fraction of returning radiation integrated over all radii, $`f_{\rm ret}`$ (as measured at infinity), is shown (as a function of $`a_*`$) in Figure 2b (dashed lines) for two limiting efficiencies: $`\epsilon=\epsilon_0`$ ($`f_{\rm ret}^0`$) and $`\epsilon=\infty`$ ($`f_{\rm ret}^{\infty}`$). The fraction for any other efficiency can be found by taking a linear combination of the fraction for these two efficiencies

$$f_{\rm ret}=(f_{\rm ret}^0\epsilon_0+f_{\rm ret}^{\infty}\Delta\epsilon)/\epsilon.$$ (12)

As can be seen from the figure, $`f_{\rm ret}`$ is relatively small for $`a_*=0`$, even for $`\epsilon=\infty`$. However, $`f_{\rm ret}`$ grows quickly with increasing $`a_*`$. The primary reason for this is that $`r_{ms}`$ shrinks with increasing $`a_*`$, so that relativistic effects on the photon trajectories become more important. Trajectory curvature is especially strong for those photons coming from small radii whose initial direction would carry them over the black hole. When $`a_*`$ and $`\Delta\epsilon`$ are comparatively large, up to 58% of the energy due to the extra dissipation ends up striking the disk.

Fig. 2. (a) Fraction of locally emitted flux which reaches infinity (solid lines), returns to the disk (dotted lines), and enters the black hole or returns inside $`r_{ms}`$ (dashed lines) versus radius. The heavy lines are for $`a_*=0.9999`$ and the lighter lines for $`a_*=0`$. (b) Fraction of energy that returns to the accretion disk integrated over radius (dashed lines) or that is absorbed by the black hole (solid lines) as a function of spin. The heavy lines are for an infinite efficiency disk, while the lighter lines are for a Novikov-Thorne disk.

The enhanced dissipation near $`r_{ms}`$ also leads to an increase in the fraction of captured photons. We have computed this fraction integrated over all radii, $`f_{BH}`$, using the same general relativistic transfer code just described. Our results for this effect are also illustrated in Figure 2b, again for $`\epsilon=\epsilon_0,\infty`$. The $`\epsilon=\epsilon_0`$ results agree with Thorne (1974). Equation (12) applies to $`f_{BH}`$ as well. The fraction of locally generated radiation that ultimately escapes from the disk to infinity is simply $`f_{esc}\equiv 1-f_{BH}`$. Radiation that returns to the accretion disk we assume is reradiated isotropically and locally, and thus eventually reaches infinity or the black hole. We fold these multiply reprocessed photons into the final result. The nominal accretion efficiency, $`\epsilon`$, is then multiplied by $`f_{esc}`$ to find the actual radiative efficiency of the flow. The largest $`f_{BH}`$ is 0.15, achieved for $`\epsilon\rightarrow\infty`$ and $`a_*\rightarrow 1`$. The black hole bends the radiation back to the disk so that an observer on the disk sees the far side of the disk as a mirage above the black hole, which peaks in brightness within a few $`r_g`$ of the disk plane. The flux at large radius then scales as $`H/r^3`$, where $`H`$ represents the flux-weighted height of the image above the disk plane. The ratio of the returning radiation to locally generated radiation, $`R_{ret}(\epsilon,a_*,r)`$, varies as a function of radius.
For a Novikov-Thorne disk, $`R_{ret}`$ is infinite at $`r_{ms}`$, then decreases rapidly, asymptoting to a constant for $`r\gtrsim 10r_g`$. In the case of an infinite efficiency disk (with $`\dot{M}_o=0`$), the locally generated surface brightness scales as $`r^{-7/2}`$ at large radius, while the returning radiation scales as $`r^{-3}`$, so $`R_{ret}`$ diverges as $`r^{1/2}`$ at large radius. For finite $`\Delta\epsilon`$, the returning flux may dominate at intermediate radii; however, at large radius, $`R_{ret}`$ asymptotes to a constant due to the fact that for large enough radius both returning and locally generated flux scale as $`r^{-3}`$. For $`r\gtrsim 10r_{ms}`$ and $`\epsilon\lesssim 1`$, $`R_{ret}`$ differs by at most 25% from the value at $`r=\infty`$. We computed $`R_{ret}(\epsilon,a_*,\infty)`$ as a function of $`a_*`$ and $`\epsilon`$; this function is shown in Figure 3. Fitting formulae for this quantity are given in the appendix; these formulae can be used to compute the returning flux at large radius for arbitrary $`a_*,\epsilon`$. As can be seen in Figure 3, the returning radiation can be a significant fraction of the locally generated radiation, and may therefore be important for construction of disk atmospheres. Returning radiation can also lead to significant fluctuations on the light-crossing time, as discussed in §3.3.

Fig. 3. Contour plot of $`R_{ret}`$ at large radius. Solid contours are shown with spacing of 0.2, from $`R_{ret}=0`$ at the bottom to $`R_{ret}=1.8`$ at the top. The dotted line is $`\epsilon_0`$; the dashed line is the contour of $`R_{ret}=1`$.

## 3. Consequences

### 3.1. Black hole growth and spin-up (or spin-down)

Accreting matter enters the black hole with a certain amount of angular momentum, changing the spin of the black hole. When there is no stress at the marginally stable orbit, the angular momentum absorbed per unit rest mass accreted is exactly the specific angular momentum of the marginally stable orbit, $`L_{ms}^{\dagger}=MF_{ms}C_{ms}^{-1/2}x_{ms}^{1/2}`$ (here we use conventional relativistic units in which $`G=c=1`$). However, when there are stresses at $`r_{ms}`$, angular momentum is transferred from the matter inside $`r_{ms}`$ to the disk. This reduces the accreted angular momentum by an amount $`\mathcal{L}_{ms}L_{ms}^{\dagger}`$, where

$$\mathcal{L}_{ms}=x_{ms}B_{ms}C_{ms}^{1/2}F_{ms}^{-1}\,\Delta\epsilon$$ (13)

when all the energy liberated in the plunging region is delivered to the disk in the form of work done by torque. $`\mathcal{L}_{ms}=3\sqrt{2}\,\Delta\epsilon`$ when $`a_*=0`$, falling towards $`\sqrt{3}\,\Delta\epsilon`$ when $`a_*`$ approaches one. Thus, the rate at which black holes are spun up is substantially reduced relative to what would be expected in the conventional picture. Surprisingly, even when the black hole is initially spinless, it can be spun backwards when $`\epsilon>1-1/\sqrt{2}\simeq 0.29`$! Considerations of black hole spin-up also place an upper bound on the possible increase in efficiency due to torques on the disk. By the second law of black hole dynamics, the area, $`A`$, of the black hole must increase with time; that is,

$$\frac{dA}{dt}=\frac{\partial A}{\partial M}\frac{dM}{dt}+\frac{\partial A}{\partial J}\frac{dJ}{dt}>0.$$ (14)

Since $`dM/dt`$ and $`dJ/dt`$ both depend on $`a_*`$ and $`\epsilon`$, this constraint can be changed into a constraint on $`\epsilon`$ as a function of $`a_*`$. For $`a_*<0.3584`$, there is a maximum achievable efficiency

$$\epsilon_{max}=1-\frac{a_*C_{ms}^{1/2}x_{ms}^{3/2}}{a_*^2+a_*x_{ms}^{3/2}-2(1+\sqrt{1-a_*^2})}.$$ (15)

Note that $`\epsilon_{max}=1`$ for $`a_*=0`$, because, of course, there is no spin energy to tap.
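A short Python check of the bound (15) as reconstructed here (the placement of the minus signs is our reading of a garbled expression, so the numbers are illustrative):

```python
import numpy as np

def x_ms(a):
    """Prograde marginally stable orbit in units of r_g."""
    z1 = 1 + (1 - a**2)**(1/3) * ((1 + a)**(1/3) + (1 - a)**(1/3))
    z2 = np.sqrt(3 * a**2 + z1**2)
    return 3 + z2 - np.sqrt((3 - z1) * (3 + z1 + 2 * z2))

def eps_max(a):
    """Equation (15): maximum efficiency allowed by dA/dt > 0."""
    xms = x_ms(a)
    cms = 1 - 3 / xms + 2 * a / xms**1.5
    num = a * np.sqrt(cms) * xms**1.5
    den = a**2 + a * xms**1.5 - 2 * (1 + np.sqrt(1 - a**2))
    return 1 - num / den

for a in (0.0, 0.1, 0.2, 0.3, 0.35):
    print(f"a_* = {a:4.2f}  eps_max = {eps_max(a):8.3f}")
```

The denominator passes through zero near $`a_*\simeq 0.3584`$, which is where the bound disappears.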
At $`a_*=0.3584`$ the denominator equals zero, so $`\epsilon_{max}`$ diverges; above this critical spin, the decrease in angular momentum dominates the change in surface area, eliminating any upper bound on $`\epsilon_{max}`$. When accreted radiation is included, $`\epsilon_{max}`$ increases slightly. We plot $`\epsilon_{max}`$ in Figure 4.

Fig. 4. Plot of the maximum achievable efficiency ($`\epsilon_{max}`$) vs. spin due to the limit imposed by the second law of black hole dynamics with (solid) and without (dotted) the effects of radiation.

As Thorne (1974) showed, when $`a_*`$ approaches one, the angular momentum of the black hole is also affected by photon capture. Most of the photons emitted close to $`r_{ms}`$ directed against the sense of black hole rotation are captured by the black hole, whereas fewer of the prograde photons fall into the hole. This extra negative angular momentum prevents it from spinning up all the way to $`a_*=1`$. Under the assumption of isotropic radiation in the fluid frame (and, of course, zero stress at $`r_{ms}`$), Thorne estimated that the maximum achievable $`a_*\simeq 0.998`$. To describe this effect in our context, we again normalize to $`L_{ms}^{\dagger}`$, so that the photon “reverse torque” per unit accreted mass is $`\mathcal{L}_\gamma\equiv -(L_{ms}^{\dagger}\dot{M}_o)^{-1}(dJ/dt)_{rad}`$, where the notation is adapted from Thorne (1974). Combining the effects of mechanical torque and photon capture, we find that the net rate of change of the black hole’s angular momentum is

$$\frac{dJ}{dt}=L_{ms}^{\dagger}\dot{M}_o\,(1-\mathcal{L}_{ms}-\mathcal{L}_\gamma).$$ (16)

Because both $`\mathcal{L}_\gamma`$ and $`\mathcal{L}_{ms}`$ depend on the state of magnetic coupling, as well as on $`a_*`$, it is no longer possible to speak of a definite upper bound on the attainable black hole spin. Rather, one can instead define the accretion efficiency, $`\epsilon_{eq}(a_*)`$, at which $`da_*/dM=0`$ for a given $`a_*`$; for $`\epsilon>\epsilon_{eq}`$, the black hole is spun down due to accretion. To compute $`\epsilon_{eq}`$, we write $`(dJ/dt)_{rad}/(\dot{M}_oL_{ms}^{\dagger})=J_1^{\prime}+\epsilon J_2^{\prime}`$ and $`(dM/dt)_{rad}/\dot{M}_o=M_1^{\prime}+\epsilon M_2^{\prime}`$ (we give fitting formulae for these functions in the appendix). Then, the equilibrium efficiency is given by:

$$\epsilon_{eq}=\frac{2a_*(1+M_1^{\prime})-\epsilon_0x_{ms}^{3/2}B_{ms}-L_{ms}^{\dagger}(1+J_1^{\prime})}{2a_*(1-M_2^{\prime})-x_{ms}^{3/2}B_{ms}+L_{ms}^{\dagger}J_2^{\prime}}.$$ (17)

Fig. 5. Plot of $`\epsilon_{eq}`$ with (solid) and without (dashed) effects of returning radiation. The dotted line is $`\epsilon_0`$, the Novikov & Thorne efficiency.

Figure 5 shows $`\epsilon_{eq}(a_*)`$ when only magnetic and matter torques are included (dashed line), and when magnetic, matter, and radiation torques are included (solid line), as well as $`\epsilon_0(a_*)`$, the efficiency of accretion (not in equilibrium) when magnetic torques are ignored (dotted line). When accreted radiation is ignored, equation (17) simplifies to:

$$\epsilon_{eq}=1-\frac{\sqrt{C_{ms}}}{2-B_{ms}}.$$ (18)

This limit is accurate for $`a_*<0.5`$, and only creates a significant error for $`a_*>0.9`$, for the radiation torque is unimportant when the spin is relatively small. Some interesting limiting values are $`\epsilon_{eq}(0)=1-1/\sqrt{2}`$, and, when radiation effects are ignored, $`\epsilon_{eq}(1)=1-1/\sqrt{3}`$, which equals $`\epsilon_0(1)`$. However, when the radiation torque is included, $`\epsilon_{eq}`$ is significantly reduced for $`a_*>0.9`$, and $`\epsilon_{eq}<\epsilon_0`$ for $`a_*>0.998`$, the same maximum spin found by Thorne (1974). The maximum equilibrium efficiency is 0.36, and occurs at $`a_*=0.94`$.
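As a consistency check on equation (18) as reconstructed here (the denominator is our reading of a garbled expression): at $`a_*=0`$ one has $`x_{ms}=6`$, $`B_{ms}=1`$, and $`C_{ms}=1/2`$, so

$$\epsilon_{eq}(0)=1-\frac{\sqrt{1/2}}{2-1}=1-\frac{1}{\sqrt{2}}\simeq 0.293,$$

matching the limiting value quoted above, and taking $`a_*\rightarrow 1`$ (where $`B_{ms}\rightarrow 2`$ and $`\sqrt{C_{ms}}/(2-B_{ms})\rightarrow 1/\sqrt{3}`$) reproduces $`\epsilon_{eq}(1)=1-1/\sqrt{3}`$.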
A variety of spin histories is possible in this picture, for the efficiency is controlled jointly by the strength of the magnetic torques and the black hole spin. If the torque on the disk is always positive, the region below the curve $`\epsilon_0(a_*)`$ in figure 5 is unreachable. In that case, $`a_*=0.998`$ would still be the maximum spin achievable by accretion, although other spin-up mechanisms, such as black hole mergers or non-magnetic accretion, might permit this limit to be exceeded.

Fig. 6. Plot of the effective temperature vs. radius for a black hole with $`a_*=0.998`$. The temperature is normalized to $`T_0\equiv[\dot{M}_oc^2/(r_g^2\sigma_B)]^{1/4}`$, where $`\sigma_B`$ is the Stefan-Boltzmann constant. The lower three curves are for a Novikov-Thorne disk, while the upper two are for an $`\epsilon=1`$ disk. The solid lines are without returning radiation, while the dotted lines include returning radiation. The dashed line shows the result of Cunningham (1976).

### 3.2. Emitted spectrum

The effective temperature is determined by the sum of the locally generated and returning flux. Figure 6 illustrates the effects discussed in §2, showing both how the intrinsic dissipation varies as a function of radius when there is a torque on the disk inner edge, and the total surface flux if one assumes that any incident radiation is absorbed. By comparing the curves for $`\Delta\epsilon=0`$ with the other curves, it is clear that the additional stress has two effects: the additional intrinsic dissipation creates a region at small radius where the effective temperature is rather higher than the disk could achieve otherwise; and returning radiation elevates the effective temperature at all radii, especially when $`a_*`$ is near unity and $`\Delta\epsilon\sim 1`$ or more. Ideally, detailed atmosphere calculations should be performed in order to ascertain the predicted disk spectrum, with the downgoing flux of returning radiation included in the upper boundary condition. Interesting effects might well be expected due to comparable amounts of heat arriving from above as from below. Pending the completion of that work, we make the much simpler assumption that the intensity at the surface of the disk is a blackbody at the local effective temperature, and isotropic in the outward half-sphere. With that assumption, Figures 7 and 8 show the predicted integrated spectrum for a variety of values of $`a_*`$, $`\Delta\epsilon`$, and inclination (parameterized by $`\mu=\cos i`$).

Fig. 7. Comparison of the spectra as a function of inclination angle for $`\epsilon=\epsilon_0`$ (dashed lines) and $`\epsilon=\infty`$ (solid lines). The other parameters are $`a_*=0.998`$ and $`r_{out}=500r_g`$. The heavy lines are for $`\mu=0.01`$ while the lighter lines are for $`\mu=0.99`$. The frequency is scaled to $`\nu_0\equiv(k/h)\left[L/(r_g^2\sigma_B)\right]^{1/4}`$. The quantity $`L_\nu\equiv 4\pi D^2F_\nu(\mu)`$, where $`F_\nu(\mu)`$ is the flux seen by a Euclidean observer at distance $`D`$ and angle $`\mu=\cos i`$ relative to the accretion disk.

Fig. 8. Comparison of the spectra as a function of inclination angle for $`a_*=0`$ (solid lines) and $`a_*=0.998`$ (dashed lines) with $`\epsilon=1`$ and $`r_{out}=500r_g`$. The heavier lines are for $`\mu=0.01`$, while the lighter lines are for $`\mu=0.99`$. Units are the same as in Figure 7.

Figure 7 shows that for fixed luminosity and large spin, the efficiency of accretion can change the observed flux by factors of a few at different inclination angles.
The angle dependence of the flux depends strongly on frequency: the highest frequency radiation is concentrated towards the disk plane, while the lowest frequencies are radiated as $`\cos i`$. Figure 8 shows the dependence of the spectrum on black hole spin for fixed luminosity and efficiency. The relativistic effects are much stronger for the higher spin, hardening the edge-on spectrum and causing strong limb-brightening at the highest frequencies. In contrast, the disk around the Schwarzschild hole is limb-darkened at most frequencies, and, when face-on, is brighter by a factor of a few at the mid-range frequencies than the extreme Kerr hole. These effects may also impact the profiles of Fe K$`\alpha`$ emission lines. If their emissivity is proportional to the local flux, the enhanced flux in the inner rings of the disk strengthens the red wings of the lines when viewed more or less face-on. We plot the profiles of K$`\alpha`$ lines for disks with $`\Delta\epsilon=0`$ and $`\epsilon=\epsilon_{eq}=0.293`$ for $`a_*=0`$ and $`i=30^{\circ}`$ in Figure 9. Disks with higher spin have a smaller change in the shape of the iron line as a function of $`\epsilon`$ because the returning radiation is much stronger and creates an emissivity profile very similar to the Novikov-Thorne profile. Magnetized accretion may also lead to enhanced coronal activity immediately above the plunging region (Krolik 1999); if so, this would provide a physical realization for models like those of Reynolds & Begelman (1997), which call for a source of hard X-rays on the system axis a few gravitational radii above the disk plane.

Fig. 9. Profiles of Fe K$`\alpha`$ lines for $`a_*=0`$, $`i=30^{\circ}`$, and $`\epsilon=\epsilon_{eq}=0.293`$ (solid curve), $`\epsilon=\epsilon_0`$ (dashed curve). Frequency is normalized to the unshifted line frequency, and line amplitude is normalized to the line maximum.

### 3.3. Coordinated variations

When $`\Delta\epsilon`$ is comparable to the ordinary efficiency, the inner rings of the disk radiate an amount of energy comparable to that radiated by all the rest of the disk. When, in addition, $`a_*`$ is large enough that $`f_{\rm ret}`$ is significant, much of the light produced even at larger radii is reprocessed energy from the additional dissipation. If there are variations in that dissipation rate, they will be reproduced, at appropriate delays, in the reprocessed light. A prediction of this picture is therefore that fluctuations at a wide range of frequencies $`\nu`$ should all be describable as driven by a single source. When the fluctuations in the returning flux are small compared to the mean local flux (combining both the intrinsic and the mean returning flux), the relation between input and output may be written as the linear convolution

$$\delta L_\nu(t)=\int d\tau\,\Psi_\nu(\tau)\,\delta L_c(t-\tau),$$ (19)

where $`\delta L_c(t)`$ is the history of fluctuations in the intrinsic output near $`r_{ms}`$ and $`\Psi_\nu`$ is a frequency-specific “response function” (we use the term “response function” to avoid confusion with the relativistic “transfer function”) that describes the distribution of relevant light-travel times. Note, however, that if there is a corona at small radii that receives a significant fraction of the total dissipation (indeed, such a corona might receive much of the extra accretion energy: Krolik 1999), it will also drive fluctuations in the output of the outer disk in very much the same manner, and with a substantially identical response function.
The response function $`\Psi_\nu(\tau)`$ is also predicted by this model. To compute this function we make several simplifying approximations: that all the returning radiation is absorbed; that it is reradiated in a spectrum that is locally blackbody and isotropic in the outer half-sphere; and that the radii of interest are far enough out in the disk that relativistic effects may be ignored. Then

$$\Psi_\nu=\frac{f_{\rm ret}\,\mu\,r_*h\nu^3}{2cL_*}\int_{r_1}^{r_2}dr\,\frac{r^{3/4}}{e^{r^{3/4}}+e^{-r^{3/4}}-2}\left[1-\frac{(c\tau/r-1)^2}{1-\mu^2}\right]^{-1/2},$$ (20)

where radius $`r`$ and $`c\tau`$ are measured in units of $`r_*`$, the radius at which $`h\nu=kT`$ when the flux takes its mean value, $`r_{in}`$ is the innermost radius at which the returning flux is $`\propto r^{-3}`$, $`r_1=max[r_{in},c\tau/(1+\sqrt{1-\mu^2})]`$, $`r_2=c\tau/(1-\sqrt{1-\mu^2})`$, and $`\mu`$ is the cosine of the inclination angle. The characteristic radius $`r_*`$ is given by

$$r_*=\left(\frac{L_*r_{in}k^4}{4\pi\sigma h^4\nu^4}\right)^{1/3},$$ (22)

where $`k`$ is the Boltzmann constant, $`\sigma`$ is the Stefan-Boltzmann constant, and $`L_*`$ is the mean value of the luminosity emitted by the portion of the disk whose emissivity is $`\propto r^{-3}`$. Some sample response functions are illustrated in Figure 10. All of the curves have significant tails extending out to $`\sim 10r_*/c`$, and, almost independent of inclination angle, the “half-response time” (in the sense of the median of $`\int d\tau\,\Psi_\nu`$) occurs at $`\tau\simeq 3r_*/c`$. However, the peak in the response function becomes sharper and moves to smaller multiples of $`r_*/c`$ as the inclination angle increases. Two effects account for this behavior. The tails are due to the fact that the temperature declines only as $`r^{-3/4}`$, so that the Wien cut-off sets in relatively slowly. The sharp peaks at small lag exhibited by disks with larger inclination angle are due to the significant amount of disk surface that lies close to the line of sight for those viewing angles.

Fig. 10. Plot of continuum response $`\Psi_\nu`$ as a function of lag for different inclinations. $`\Psi_\nu`$ is in units of $`f_{\rm ret}r_*h\nu^3/(2cL_*)`$ and $`\tau`$ is in units of $`r_*/c`$.

### 3.4. Polarization

In fact, the inner rings of realistic disks, whether in AGN or Galactic black holes, are likely to be scattering-dominated, so that their albedo to the returning radiation will be significantly greater than zero. The scattered light may then be polarized. As discussed in §2.2, the maximum altitude $`H`$ above the disk plane achieved by any photon that ultimately returns to the disk cannot be much greater than a few gravitational radii. If the disk flare is small (see §3.6 for further discussion), the returning photons striking the disk at radius $`r`$ must then arrive from an angle $`\sim H/r\ll 1`$ from the disk equator. When electron scattering is the dominant scattering opacity (as is nearly always the case), only those photons polarized parallel to the disk normal can scatter to outgoing directions near the equatorial plane but perpendicular to the original photon direction. The result is that disks viewed obliquely should acquire a small amount of polarization parallel to the disk axis, especially at the high frequencies produced predominantly in the inner rings.
To quantify this suggestion, we have computed the disk spectrum, treating the locally generated radiation as a blackbody, and assuming the returning radiation is scattered off a semi-infinite electron scattering atmosphere (Chandrasekhar 1960). We assume the locally generated disk flux either (1) has the polarization of a semi-infinite electron scattering atmosphere (Chandrasekhar 1960) or (2) is unpolarized. The true polarization will be modified by Faraday rotation due to magnetic fields in the disk’s atmosphere and by absorption/emission (Agol, Blaes, & Ionescu-Zanetti 1998), but the true answer will likely lie between our two assumptions. Figure 11 shows the flux, polarization, and polarization angle computed under these two assumptions. The spectrum is much broader than would be predicted by complete absorption, and returning radiation can cause a sharp rise in polarization towards the highest frequencies. There is a rotation in the polarization angle since the scattered returning radiation tends to be polarized perpendicular to the disk plane, while the locally generated radiation is polarized parallel to the disk plane. Whichever component dominates the flux at a given frequency determines the strength and angle of the polarization. We have included all relativistic effects that modify the final polarization angle (Laor, Netzer, & Piran 1990, Agol 1997). The returning fraction is largest for photons generated in the inner region of the disk; consequently, the highest frequencies have the largest scattered fraction and thus the highest polarization. In addition, the inner parts of the disk are strongly blueshifted, and the returning radiation is (weakly) Compton up-scattered by the bulk motion of the disk.

Fig. 11. Flux, polarization, and polarization angle as a function of frequency for a disk viewed with an inclination of $`\mu=\cos i=0.2`$, with $`a_*=0.998`$, $`\Delta\epsilon=1`$. The dashed curves are for no returning radiation; dash-dot for returning radiation, but unpolarized locally generated flux; the solid curves are for returning radiation plus polarized locally generated flux; and the dotted line in the top panel shows the flux computed assuming complete absorption, as in §3.2. The polarization angle, $`\theta`$, is zero for $`\mathbf{E}`$ parallel to the disk plane. The units are defined in Figure 7.

### 3.5. Bolometric Limb-brightening

For a Newtonian disk, foreshortening causes limb-darkening proportional to $`\mu=\cos i`$, where $`i=0`$ is a face-on disk. Relativistic effects cause beaming and bending of the radiation towards the equatorial plane, which decrease the limb-darkening for disks around black holes. For large $`a_*`$ and $`\epsilon`$, the relativistic effects become so strong that a disk can actually become limb-brightened. In Figure 12 we show the bolometric disk flux as a function of inclination angle for the cases $`\epsilon=\infty`$ (dashed line), $`\epsilon=1`$ (dotted line), and $`\epsilon=\epsilon_0`$ (solid line) for $`a_*=0.998`$. The limb-brightening is also dependent on frequency, as shown in §3.2; in practice, determining this quantitatively will require a detailed disk atmosphere model.

Fig. 12. Comparison of the bolometric limb-brightening of a disk for $`a_*=0.998`$ with $`\epsilon=\epsilon_0`$ (solid line), $`\epsilon=1`$ (dotted line), and $`\epsilon=\infty`$ (dashed line). The inclination angle, $`\mu=\cos i`$, is edge-on for $`\mu=0`$ and face-on for $`\mu=1`$.
### 3.6. Geometrical thickness of the disk

When the accretion rate is greater than a small fraction of the Eddington rate, the innermost regions of accretion disks are expected to be supported against the vertical component of gravity by radiation (Shakura & Sunyaev 1973). In that case, the disk’s vertical thickness is directly proportional to the ratio of the local radiation flux to the vertical component of gravity; that is, $`h\propto Fr^3/R_z`$, where $`R_z`$ is the relativistic adjustment to the vertical gravity (Page & Thorne 1974; Abramowicz, Lanza, & Percival 1997). At radii large enough that the relativistic effects are small, but not so large as to no longer be in the radiation-dominated regime in a Novikov-Thorne disk, $`h`$ should be constant. In the relativistic portion of the disk, $`h`$ would shrink $`\propto R^{NT}/R_z`$ if the stress at its inner edge were zero; additional dissipation, depending on its strength, could actually make the disk become somewhat thicker there (cf. equation 8). We plot some examples of $`h(r)`$ in Figure 13. At the inner edge of the disk, the height of the disk is non-zero when there is a non-zero torque, so the thin-disk approximation is valid only if $`h(r_{ms})\lesssim 0.1r_{ms}`$. This criterion can be translated into a limit on the extra luminosity due to the torque at the inner edge:

$$\frac{L}{L_{Edd}}<0.1\frac{2}{3x_{ms}}\left(x_{ms}C_{ms}^{1/2}F_{ms}^2a^2G_{ms}+a^2C_{ms}^{1/2}\right),$$ (23)

where $`L=\Delta\epsilon\,\dot{M}_oc^2`$. This limiting luminosity is plotted in Figure 14. When $`\Delta\epsilon=0`$, the thin disk approximation breaks down if $`h/r\gtrsim 0.1`$ where $`h/r`$ is maximum. This limit can in turn be expressed as a limit on the luminosity $`\epsilon_0\dot{M}_oc^2`$, which is also shown in Figure 14. The luminosity upper limit for the infinite efficiency disk is much smaller than for the Novikov-Thorne disk since $`h/r`$ peaks at $`r_{ms}`$, while for the Novikov-Thorne disk $`h/r`$ peaks at larger radius, where its magnitude is smaller \[$`(h/r)_{max}`$ occurs at $`r=24r_g`$ for $`a_*=0`$ and $`r=7r_g`$ for $`a_*=1`$\]. If either $`\epsilon_0\dot{M}_oc^2`$ or $`\Delta\epsilon\dot{M}_oc^2`$ exceeds its respective limit, then the thin-disk approximation breaks down. In addition, if $`h/r`$ is small, then the approximation of a flat disk in the computation of the returning radiation will be appropriate. Treating the interesting cases where $`L\sim L_{Edd}`$ will require a 2-D solution of the disk equations, which is beyond the scope of this work. The returning radiation will not affect the disk height since it diffuses through the disk on a thermal timescale, so there is no net flux due to returning radiation (unless the disk is warped).

Fig. 13. Height of a radiation pressure-supported disk vs. radius with $`\epsilon=\epsilon_0`$ (dotted line) and $`\epsilon=1`$ (solid line) for $`a_*=0.998`$. The height is normalized by $`h_0\equiv 3\kappa\dot{M}_o/(8\pi c)`$.

Fig. 14. Upper limit on the luminosity for the thin-disk ($`h/r\lesssim 0.1`$) approximation to be valid at $`r_{ms}`$ for a radiation pressure-supported disk. Dotted line is for the Novikov-Thorne disk; solid line is for the infinite-efficiency disk.
## 4. Conclusions

We have generalized the equations for an azimuthally symmetric, geometrically thin, time-steady accretion disk around a black hole to include the effects of a torque operating at the inner boundary, taken to be at $`r_{ms}`$. Constant non-zero torque at $`r_{ms}`$ causes several physical consequences that change the fundamental properties of the accretion flow:

1) The flux can be expressed as a sum of the usual Novikov & Thorne expression plus a part due to the torque which scales roughly as $`r^{-7/2}`$.

2) The accretion efficiency has a fundamental upper limit due to the second law of black hole dynamics for $`a_*<0.36`$. For larger $`a_*`$, infinite efficiency is possible in principle.

3) The black hole spin can reach an equilibrium for $`a_*<0.998`$ since the angular momentum reaching the hole is smaller. Radiation can also exert a significant torque on the black hole, which changes the value of the equilibrium-spin efficiency. Above an efficiency of $`\epsilon=0.36`$, the black hole must always be spun down.

4) Since the extra emissivity is peaked at the inner edge of the disk, if $`r_{ms}`$ is small then gravity causes a large fraction of the radiation (up to 58%) to return to the accretion disk. The flux of returning radiation scales as $`r^{-3}`$ at large radius. Up to 15% of the radiation can be captured by the black hole.

5) The extra heating within the disk will increase the height of the disk if it is radiation pressure-supported. This severely limits the luminosity at which the thin-disk approximation is appropriate.

6) Doppler beaming and relativistic bending are strongest in the inner parts of the accretion disk where the extra flux peaks, so that for large $`a_*`$ and $`\epsilon`$, the disk will be limb-brightened.

These each have multiple observable consequences:

1) The extra surface brightness changes the locally radiated spectrum. Though the local surface brightness is usually not directly observable, it may be possible to map it using several devices: eclipse mapping (Baptista et al. 1998), although no eclipsing black hole X-ray binaries have been discovered yet; quasar microlensing (Agol & Krolik 1999); or reverberation mapping (Collier et al. 1999). If the Fe K$`\alpha`$ emissivity is proportional to the local dissipation, these effects can strengthen the red wing of the line, particularly when the spin is small. This effect may undercut the argument that lines with strong red wings come from disks around black holes with higher spin (e.g. Dabrowski et al. 1997). Several authors have used the Novikov-Thorne model to fit the soft X-ray spectra of galactic black hole candidates. Their procedure was to estimate the effective radiating area required to emit the observed luminosity at the observed effective temperature. On the basis of these fits they inferred that some black holes have rather high spins, because the effective radiating area of a Novikov-Thorne disk decreases with increasing spin (Zhang, Cui, & Chen 1997). However, for fixed spin and central mass, a disk with large $`\Delta\epsilon`$ has a smaller effective radiating area than a Novikov-Thorne disk, mimicking the effect of greater spin.

2) The outer parts of accretion disks can be unstable to warping due to irradiation from the center (Pringle 1996). The minimum radius for growth of small warps is proportional to $`\epsilon^{-2}`$; if the efficiency is much higher than that of a standard disk, the minimum radius may be greatly shrunk. Limb-brightening increases the effective efficiency and therefore makes the linear growth more rapid; on the other hand, the corresponding relative decrease in intensity away from the central disk plane may weaken this effect in the non-linear regime.
3) Wavelength-dependent limb-brightening (or limb-darkening) introduces viewing angle-dependent biases into any flux-limited sample. This is a particularly strong effect in the context of quasar surveys because the number count distribution is so steep. A variety of distortions could occur in our view of what constitutes a “typical” quasar (cf. Krolik & Voit 1998).

4) Returning radiation can change the conditions for launching a radiation-driven disk wind (e.g. Murray et al. 1995). The returning radiation is reradiated locally, so the net vertical force depends on the frequency-averaged opacities of the downgoing and upgoing radiation fields. The radial component of the radiation force will be larger than for a standard disk due to the higher efficiency and limb brightening.

5) Returning radiation causes the various annuli to “communicate” on the light-crossing timescale. Fluctuations of the flux at small radii will cause only slightly delayed fluctuations of the flux at larger radii, which emit at longer wavelengths. Indeed, exactly this sort of behavior is commonly seen in accreting black hole systems. For example, campaigns monitoring AGN have consistently found that continuum fluctuations are very nearly simultaneous all the way from 1300 Å to 5000 Å (Clavel et al. 1991; Korista et al. 1995; Wanders et al. 1997; Collier et al. 1998; O’Brien et al. 1998; Cutri et al. 1985). Comparing the upper bounds on any inter-band delays to the radial scales expected on the basis of conventional disk models, these observations have been interpreted as requiring a coordinating signal group speed of at least $`0.1c`$ (e.g., Krolik et al. 1991; Courvoisier & Clavel 1991; Collier et al. 1998).

6) The scattered component of returning radiation is highly polarized parallel to the disk axis at high frequencies. This polarization rise may be related to the sharp rises observed in several quasars (Koratkar et al. 1995, Impey et al. 1995). If the inner regions of the disk have a strong Lyman continuum in emission, then this scattered emission edge will appear as a strongly polarized, blueshifted emission edge in the spectrum. In an irradiated disk atmosphere there might be an additional effect: heating of the upper layers of the atmosphere can cause a temperature inversion, which changes the sense of the polarization. We leave all such detailed calculations to future work.

A question left unanswered by this work is what spin and efficiency we expect to be achieved by black holes in nature. That there is an upper limit on efficiency for an equilibrium spin means that a black hole with $`\epsilon>\epsilon_{eq}`$ must be born with original spin, or must be spun up by accretion in which magnetic torques inside $`r_{ms}`$ do not play an important role. In the supermassive black hole case, the spin may result from a merger. No one has computed the final spin of the resulting merger of two black holes; however, current approximate calculations indicate that the final spin could be quite large (Khanna et al. 1999). The strength of the torque at $`r_{ms}`$ depends on the strength of the magnetic field in the accretion disk and the geometrical thickness of the flow, which in turn depend on the accretion rate. This dependence will be best addressed with numerical simulations of MHD accretion.

We would like to thank Roger Blandford, Doug Eardley, and Omer Blaes for helpful conversations. We would also like to thank the Institute for Theoretical Physics at U.C., Santa Barbara for its hospitality.
This work was partially supported by NASA Grant NAG 5-3929 and NSF Grant AST-9616922.

## Appendix

The function $`R_{ret}`$ can be expressed as the sum of a part due to the Novikov & Thorne accretion rate, and a part due to the torque at the inner edge:

$$R_{ret}(\epsilon,a_*,r)=R_0(a_*,r)+R_{\infty}(a_*,r)\,\Delta\epsilon.$$ (1)

At large radius, these functions become constant, and we have fitted them with polynomials in $`x\equiv\log_{10}(1-a_*)`$:

$$R_0(a_*,\infty)=0.0200-0.0360x+0.0279x^2+0.00213x^3-0.00153x^4-0.000225x^5,$$ (2)
$$R_{\infty}(a_*,\infty)=0.594-0.199x-0.116x^2-0.107x^3-0.0373x^4-0.00409x^5.$$ (3)

The formulae are accurate to better than 0.6% from $`a_*=0`$ to $`a_*=0.9999`$. For computing the spin evolution of an accreting black hole, it is useful to know $`(dJ/dt)_{rad}=(dJ/dt)_{rad}^0+\Delta\epsilon\,(dJ/dt)_{rad}^{\infty}`$ and $`(dM/dt)_{rad}=(dM/dt)_{rad}^0+\Delta\epsilon\,(dM/dt)_{rad}^{\infty}`$. Note that $`(dM/dt)_{rad}^0=M_1^{\prime}+\epsilon_0M_2^{\prime}`$ and $`(dM/dt)_{rad}^{\infty}=M_2^{\prime}`$, and likewise for $`J`$. We have fitted these with fifth-order polynomials in $`x`$. The coefficients $`a_i`$ of the fits ($`\sum_{i=0}^{5}a_ix^i`$) are given in Table 1, where we have multiplied each $`a_i`$ by $`10^4`$.
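The large-radius fits (2)–(3) transcribe directly into code; a minimal Python version follows (the signs of the coefficients reflect our reconstruction of the garbled polynomials, so check against the published paper before quantitative use):

```python
import numpy as np

# Polynomial coefficients in x = log10(1 - a_*), eqs. (2)-(3) as reconstructed.
R0_COEF   = [0.0200, -0.0360, 0.0279, 0.00213, -0.00153, -0.000225]
RINF_COEF = [0.594, -0.199, -0.116, -0.107, -0.0373, -0.00409]

def r_ret(a, d_eps):
    """Appendix eq. (1) at large radius: R_ret = R_0 + R_inf * Delta_eps."""
    x = np.log10(1 - a)
    r0   = sum(c * x**i for i, c in enumerate(R0_COEF))
    rinf = sum(c * x**i for i, c in enumerate(RINF_COEF))
    return r0 + rinf * d_eps

for a in (0.0, 0.9, 0.998, 0.9999):
    print(f"a_* = {a:6.4f}   R_ret(d_eps=1) = {r_ret(a, 1.0):6.3f}")
```

Since the fits are stated to hold only up to $`a_*=0.9999`$, the function should not be evaluated beyond that spin.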
# Age Estimates for Galaxies in Groups

## 1 Introduction

A galaxy’s environment plays a key role in determining its evolution. For elliptical galaxies, it is generally thought that mergers of disk galaxies in the field and in groups are the dominant formation mechanism. Elliptical–rich groups that fall in along filaments create the elliptical–rich clusters we see today. Tracking the assembly of elliptical galaxies and the evolutionary status of groups would provide further insight into these processes. Until recently it was very difficult to directly age–date the stars in old stellar populations due to the age–metallicity degeneracy. This degeneracy has now been broken by new spectroscopic observations and models (e.g. Worthey 1994; Trager et al. 1999). Thus it is now possible to form an evolutionary sequence of elliptical galaxy formation and to age–date the ellipticals in different environments.

## 2 Deviations from Galaxy Scaling Relations

In two recent papers (Forbes et al. 1998; Forbes & Ponman 1999) we showed that a galaxy’s position relative to the fundamental plane and other scaling relations depends on the galaxy’s age. Here age is the central luminosity weighted age of the galaxy from stellar spectroscopy. We found that young ellipticals were brighter with a higher surface brightness. Ellipticals that were $`\sim`$10 Gyr old would lie on the FP. From simple starburst models, we showed that a fading central starburst could explain the overall trend. The situation was similar for the deviations from 2D scaling relations such as B–V vs M<sub>B</sub> and Mg<sub>2</sub>–$`\sigma`$. Younger galaxies would redden and their Mg<sub>2</sub> line strengths weaken as the central starburst faded. We concluded that these scaling relations are metallicity–mass sequences with deviations caused by a galaxy’s age.

## 3 The Age and Metallicity Distribution of Galaxies

Perhaps the best, high quality study of field ellipticals is that of Gonzalez (1993). He obtained new absorption line indices for about 40 early type galaxies in the field, and claimed that when plotted on a Worthey (1994) grid of H$`\beta`$ vs \[MgFe\] they generally scatter across a range in ages with metallicities concentrated around solar. Another high quality study is that of Kuntschner & Davies (1998), who studied early type galaxies in the Fornax cluster. They found all ellipticals to have a similar age of $`\sim`$8 Gyr, covering a range in metallicity. Only the S0 galaxies scattered to young ages in the Worthey grid. There is certainly support from the Coma cluster that the colour–magnitude relation is largely a metallicity–mass sequence with the small scatter due to age effects (Terlevich et al. 1999). These field and cluster samples are shown in Fig. 1. Although the cluster galaxy trends are fairly convincing, more field data is needed to confirm the Gonzalez claims. If field ellipticals appear to describe a sequence in age, while cluster ellipticals describe a sequence in metallicity (at constant age), how do group ellipticals behave?

• If group ellipticals resemble cluster ellipticals, then it suggests that ‘evolutionary’ processes have already occurred, and must be related to non–cluster environments, e.g. merging.

• If group ellipticals resemble field ellipticals, then it suggests that ‘evolutionary’ processes have yet to occur, and must be related to cluster environments, e.g. ram pressure stripping, harassment.

An H$`\beta`$ vs \[MgFe\] plot for loose groups and compact groups is shown in Fig. 2.
In both cases, most early type galaxies are old ($`\sim`$10 Gyr), with some of young age, but there are too few to make conclusive statements. However, building up large samples of group galaxies with age estimates should provide unique clues to their star formation histories and, in the case of compact groups, to the evolutionary status of the group itself.

###### Acknowledgements.

We would like to thank R. Brown and T. Ponman for their contributions to this work.
# Flavor Symmetry as a Spontaneously Broken Discrete Permutation Symmetry Embedded in Color

## Abstract

A new mechanism for breaking an internal symmetry spontaneously is discussed, which is intermediate between the Nambu-Goldstone and Wigner modes of symmetry breaking. Here the $`q\overline{q}`$ sea takes the role of the vacuum of the Nambu-Goldstone case. Flavor symmetry becomes a discrete permutation symmetry of the valence quarks with respect to the sea quarks, which can be spontaneously broken without generation of massless Goldstone bosons.

PACS numbers: 12.39.Ki, 11.30.Hv, 11.30.Qc, 12.15.Ff

It is a well known fact that most hadrons are built from $`q\overline{q}`$ or $`qqq`$ valence quarks together with a $`q\overline{q}`$ and gluon sea. This two-component picture is crucial for my mechanism, to be discussed below, where I shall give the finite $`q\overline{q}`$ sea a prominent role in the symmetry breaking, which usually is given to the vacuum when a symmetry is spontaneously broken. I consider flavor symmetry and take $`N_f=N_c=3`$ in the discussion, although the same arguments should hold for any number of flavors and colors, and might be applied to other symmetries as well (like chiral symmetry). If we undress a hadron from its soft confined gluons, the $`q\overline{q}`$ valence quarks of a meson can be thought of as a degenerate nonet in color (and the $`qqq`$ valence quarks of a baryon as a $`\mathbf{3}\otimes\mathbf{3}\otimes\mathbf{3}=\mathbf{1}\oplus\mathbf{8}\oplus\mathbf{8}\oplus\mathbf{10}`$ multiplet). Likewise the undressed $`q\overline{q}`$ sea is composed of a nonet and higher representations in color. After dressing with soft gluons, the hadrons become singlets in color. (A mechanism by which one can understand this is to assume all gluonic transitions, $`q_i\overline{q}_j+\mathrm{glue}\rightarrow q_{i^{\prime}}\overline{q}_{j^{\prime}}+\mathrm{glue}`$, between the $`N`$ degenerate states to be equal, $`H_{ij,i^{\prime}j^{\prime}}=\mathrm{const}`$. After diagonalization this gives 0 for all transitions other than singlet to singlet, which is $`N\cdot\mathrm{const}`$; i.e., all states except the singlet decouple.) Now choose a particular global reference frame in color, in which the sea becomes diagonal, such that it can be composed of diagonal Gell-Mann matrices $`\lambda_i`$: $`S(q\overline{q})=\epsilon_0\lambda_0+\epsilon_3\lambda_3+\epsilon_8\lambda_8=\mathrm{diag}(x,y,z)`$. This picks out a special direction and ordering in color space. One can still permute the $`x,y,z`$ but maintain the diagonal form. This permutation freedom will define my flavor symmetry, and we label one particular choice by the flavors, i.e.,

$$S(q\overline{q})=\left(\begin{array}{ccc}S(u\overline{u})& 0& 0\\ 0& S(d\overline{d})& 0\\ 0& 0& S(s\overline{s})\end{array}\right)$$ (1)

where the diagonal terms need not be equal once flavor symmetry is broken. Of course, still under global color transformations of both valence and sea one remains within the same hadron: a $`\pi^{-}`$ remains a $`\pi^{-}`$ and a $`K^{-}`$ remains a $`K^{-}`$, etc. More precisely, the charge and strangeness operators,

$$Q=\left(\begin{array}{ccc}\frac{2}{3}& 0& 0\\ 0& -\frac{1}{3}& 0\\ 0& 0& -\frac{1}{3}\end{array}\right),\qquad S=\left(\begin{array}{ccc}0& 0& 0\\ 0& 0& 0\\ 0& 0& -1\end{array}\right)$$ (2)

which define (we remind the reader that flavor must be represented by complex or non-Hermitian fields, e.g., $`|\pi^+\rangle=(\lambda_1-\mathrm{i}\lambda_2)/2`$, and that charge and strangeness are a kind of interference term between the Hermitian and the anti-Hermitian part;
in practice, in a C-invariant theory one can often neglect the anti-Hermitian part and consider instead flavorless, Hermitian superpositions like $`|\pi^+\rangle+|\pi^-\rangle=\lambda_1`$) the charge as $`q=\mathrm{Tr}[\Sigma Q\Sigma^{\dagger}-\Sigma^{\dagger}Q\Sigma]`$ and the strangeness as $`s=\mathrm{Tr}[\Sigma S\Sigma^{\dagger}-\Sigma^{\dagger}S\Sigma]`$ of a nonet meson $`\Sigma`$, must transform covariantly under color:

$$Q^{\prime}=UQU^{\dagger},\qquad S^{\prime}=USU^{\dagger},\qquad \Sigma^{\prime}=U\Sigma U^{\dagger},$$ (3)

which of course only implies that our choice of $`u,d,s`$ above is always done in a particular, but arbitrary, color reference frame, as chosen above for convenience. We can write, e.g., for the vector flavor nonet when unmixed

$$V=\left(\begin{array}{ccc}(\omega+\rho)/\sqrt{2}& \rho^+& K^+\\ \rho^{-}& (\omega-\rho)/\sqrt{2}& K^0\\ K^{-}& \overline{K}^0& \varphi\end{array}\right)\otimes\left(\begin{array}{ccc}S(u\overline{u})& 0& 0\\ 0& S(d\overline{d})& 0\\ 0& 0& S(s\overline{s})\end{array}\right)$$ (4)

Each meson is also a nonet in color (before the gluon dressing), i.e., we have 9 nonets in all. Flavor is a relative quantum number given by the ordering of $`u,d,s`$ in the valence part with respect to the ordering of the diagonal terms in the sea. Thus one transforms a $`\pi^{-}`$ to a $`K^{-}`$ by permuting $`d\leftrightarrow s`$ in the valence part but not in the sea. On the other hand, in a global color transformation one performs an SU3 rotation (or a permutation of quark labels) in both the valence and the sea part of the wave function. In the limit of a symmetric sea, $`S(u\overline{u})=S(d\overline{d})=S(s\overline{s})`$, all 9 degenerate nonets lie on top of each other, and both flavor and color are unbroken. But if the sea is asymmetric the flavor nonets are generally split, but color remains always exact. (A tricky point is that the sea can be somewhat asymmetric (since $`\mathbf{8}\otimes\mathbf{8}`$ contains an octet), but still one can have a flavor symmetric spectrum. Thus, e.g., all of the $`d/u`$ asymmetry seen in deep inelastic scattering on the proton need not result in isospin breaking. Only if the $`d/u`$ asymmetry in the proton is not equal to the $`u/d`$ asymmetry in the neutron do we have isospin violation. For the $`s/d`$ quark asymmetry the flavor symmetry breaking is more obvious if one knows that in the sea of all hadrons the $`s`$ quark is less frequent than the $`d`$.) A natural mechanism for generating the asymmetric sea is given by quantum loops such as $`K^{*}\rightarrow K^{*}\pi,K\pi,K\varphi,\ldots\rightarrow K^{*}`$ or $`\pi\rightarrow\pi\sigma,K^{*}\overline{K},\ldots\rightarrow\pi`$, etc. Hadrons are, in fact, unique compared to other bound states, like atoms or nuclei, in that they are partly composed of the hadrons themselves, although the latter are in virtual off-shell states. A proton is part of the time a proton and a pion, and a pion is part of the time in a three pion state, etc. Constituent quarks are again composed of virtual quarks and gluons

$$|q\rangle\rightarrow|q\rangle\left(1+\alpha|q\overline{q}\rangle+\beta|q\overline{q}q\overline{q}\rangle+\ldots\right)\times\mathrm{gluons}$$ (5)

i.e., the same constituents occur on both the l.h.s. and the r.h.s. Introducing a reference frame for the quarks, the same reference frame appears on both sides of the equation. A color rotation is like rotating an object (valence) and the observer (sea), resulting in no change for the observer. On the other hand, a change (permutation) in part of a state (the valence part) while the rest remains intact results in a new (flavor) state.
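The trace formulas for charge and strangeness can be checked mechanically. The following small numpy sketch (our illustration; it uses the footnote convention $`|\pi^+\rangle=(\lambda_1-\mathrm{i}\lambda_2)/2`$ together with the analogous choice $`|K^+\rangle=(\lambda_4-\mathrm{i}\lambda_5)/2`$, the latter being our assumption) verifies $`q(\pi^+)=+1`$ and $`s(K^+)=+1`$:

```python
import numpy as np

# Diagonal charge and strangeness operators, eq. (2), with S = diag(0, 0, -1).
Q = np.diag([2/3, -1/3, -1/3])
S = np.diag([0.0, 0.0, -1.0])

# The Gell-Mann matrices needed for pi+ and K+.
l1 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
l2 = np.array([[0, -1j, 0], [1j, 0, 0], [0, 0, 0]], dtype=complex)
l4 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
l5 = np.array([[0, 0, -1j], [0, 0, 0], [1j, 0, 0]], dtype=complex)

pi_plus = (l1 - 1j * l2) / 2     # footnote convention
k_plus  = (l4 - 1j * l5) / 2     # analogous convention (our assumption)

def charge(sig):
    """q = Tr[Sigma Q Sigma^dag - Sigma^dag Q Sigma]."""
    return np.trace(sig @ Q @ sig.conj().T - sig.conj().T @ Q @ sig).real

def strangeness(sig):
    """s = Tr[Sigma S Sigma^dag - Sigma^dag S Sigma]."""
    return np.trace(sig @ S @ sig.conj().T - sig.conj().T @ S @ sig).real

print(charge(pi_plus), strangeness(pi_plus))   # -> 1.0 0.0
print(charge(k_plus), strangeness(k_plus))     # -> 1.0 1.0
```

Both printed lines reproduce the expected quantum numbers, confirming that with $`S=\mathrm{diag}(0,0,-1)`$ the interference-term definition assigns the usual charge and strangeness.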
Since a strange-antistrange ($`K^+K^{-}`$) virtual state should be less frequent than a non-strange ($`\pi ^+\pi ^{-}`$) virtual state, loops naturally lead to an asymmetric sea in all hadrons, where $`s\overline{s}`$ is less frequent than $`u\overline{u}`$ or $`d\overline{d}`$. Furthermore, this mechanism can be self-enhancing, resulting in an instability: a small initial strange-nonstrange splitting for the (input) virtual states generates a bigger splitting in the output physical state. Although our ansatz that flavor symmetry is broken by an asymmetric sea, rather than by the infinite vacuum, might seem to be a minor one, it has dramatic consequences. The most important one is that the symmetry can be broken spontaneously, without the appearance of massless Goldstone bosons. This has been a major stumbling block in previous attempts to break flavor symmetry spontaneously. The discrete flavor symmetry introduced above can thus be broken spontaneously without the necessary appearance of unseen Goldstone bosons, and the continuous SU3 symmetry is never broken. In a series of previous publications actual dynamical calculations of such spontaneous symmetry breaking were performed. These involved only scalars in a simplified model, and pseudoscalars and vectors in a somewhat more realistic model. It was demonstrated that the symmetry breaking goes in the right direction compared to experiment, and that, e.g., the mechanism naturally explains the approximate nature of the Okubo-Zweig-Iizuka rule, the near ideal mixing and the equal spacing rule of meson multiplets. Furthermore, this opens up a new scenario for predicting quark masses (and possibly the CKM matrix), since one can start from an exactly symmetric theory with few parameters (and in accord with the flavor blindness of QCD) which can have both an unstable symmetric and a stable flavor asymmetric solution. Then, of course, only the stable asymmetric one is the true physical solution. Of course our discrete symmetry can also be broken explicitly. In fact, one expects that electro-weak interactions should break the up-down symmetry and give the $`u,c,t`$ quarks an extra mass, while the $`d,s,b`$ quarks may be degenerate or nearly massless before the spontaneous breaking by strong interactions. A true lepton-quark symmetry might emerge. There is some similarity between the color-flavor connection discussed here and the color-flavor locking of Schäfer and Wilczek and collaborators, although the latter is applied within another context of high density QCD, where the $`q\overline{q}`$ sea and the vacuum merge. The suggested new interpretation of flavor symmetry also throws some new light on the nature of superselection rules, which were vigorously discussed almost half a century ago when isospin invariance had been introduced.
# Cλ-extended oscillator algebras: Theory and applications to (variants of) supersymmetric quantum mechanics ## 1 Introduction Deformations and extensions of the oscillator algebra have found a lot of applications to physical problems, such as the description of systems with non-standard statistics, the construction of integrable lattice models, the investigation of nonlinearities in quantum optics, as well as the algebraic treatment of quantum exactly solvable models and of $`n`$-particle integrable systems. The generalized deformed oscillator algebras (GDOAs) (see e.g. Ref. and references quoted therein) arose from successive generalizations of the Arik-Coon and Biedenharn-Macfarlane $`q`$-oscillators. Such algebras, denoted by $`𝒜_q(G(N))`$, are generated by the unit, creation, annihilation, and number operators $`I`$, $`a^{\dagger }`$, $`a`$, $`N`$, satisfying the Hermiticity conditions $`(a^{\dagger })^{\dagger }=a`$, $`N^{\dagger }=N`$, and the commutation relations $$[N,a^{\dagger }]=a^{\dagger },\qquad [N,a]=-a,\qquad [a,a^{\dagger }]_q\equiv aa^{\dagger }-qa^{\dagger }a=G(N),$$ (1.1) where $`q`$ is some real number and $`G(N)`$ is some Hermitian, analytic function. On the other hand, $`𝒢`$-extended oscillator algebras, where $`𝒢`$ is some finite group, appeared in connection with $`n`$-particle integrable models. For the Calogero model, for instance, $`𝒢`$ is the symmetric group $`S_n`$. For two particles, the $`S_2`$-extended oscillator algebra $`𝒜_\kappa ^{(2)}`$, where $`S_2=\{I,K\mid K^2=I\}`$, is generated by the operators $`I`$, $`a^{\dagger }`$, $`a`$, $`N`$, $`K`$, subject to the Hermiticity conditions $`(a^{\dagger })^{\dagger }=a`$, $`N^{\dagger }=N`$, $`K^{\dagger }=K^{-1}`$, and the relations $$[N,a^{\dagger }]=a^{\dagger },\qquad [N,K]=0,\qquad K^2=I,$$ $$[a,a^{\dagger }]=I+\kappa K\ (\kappa \in \mathrm{R}),\qquad a^{\dagger }K=-Ka^{\dagger },$$ (1.2) together with their Hermitian conjugates. When the $`S_2`$ generator $`K`$ is realized in terms of the Klein operator $`(-1)^N`$, $`𝒜_\kappa ^{(2)}`$ becomes a GDOA characterized by $`q=1`$ and $`G(N)=I+\kappa (-1)^N`$, and known as the Calogero-Vasiliev or modified oscillator algebra. The operator $`K`$ may be alternatively considered as the generator of the cyclic group $`C_2`$ of order two, since the latter is isomorphic to $`S_2`$. By replacing $`C_2`$ by the cyclic group of order $`\lambda `$, $`C_\lambda =\{I,T,T^2,\dots ,T^{\lambda -1}\mid T^\lambda =I\}`$, one then gets a new class of $`𝒢`$-extended oscillator algebras, generalizing that describing the two-particle Calogero model. In the present communication, we will define the $`C_\lambda `$-extended oscillator algebras, study some of their properties, and show that they have some interesting applications to supersymmetric quantum mechanics (SSQM) and some of its variants. ## 2 Definition and properties of $`𝑪_𝝀`$-extended oscillator algebras Let us consider the algebras generated by the operators $`I`$, $`a^{\dagger }`$, $`a`$, $`N`$, $`T`$, satisfying the Hermiticity conditions $`(a^{\dagger })^{\dagger }=a`$, $`N^{\dagger }=N`$, $`T^{\dagger }=T^{-1}`$, and the relations $$[N,a^{\dagger }]=a^{\dagger },\qquad [N,T]=0,\qquad T^\lambda =I,$$ $$[a,a^{\dagger }]=I+\sum _{\mu =1}^{\lambda -1}\kappa _\mu T^\mu ,\qquad a^{\dagger }T=e^{\mathrm{i2}\pi /\lambda }Ta^{\dagger },$$ (2.1) together with their Hermitian conjugates.
Here $`T`$ is the generator of (a unitary representation of) the cyclic group $`C_\lambda `$ (where $`\lambda \in \{2,3,4,\dots \}`$), and $`\kappa _\mu `$, $`\mu =1`$, 2, …, $`\lambda -1`$, are some complex parameters restricted by the conditions $`\kappa _\mu ^{*}=\kappa _{\lambda -\mu }`$ (so that there remain altogether $`\lambda -1`$ independent real parameters). $`C_\lambda `$ has $`\lambda `$ inequivalent, one-dimensional matrix unitary irreducible representations (unirreps) $`\mathrm{\Gamma }^\mu `$, $`\mu =0`$, 1, …, $`\lambda -1`$, which are such that $`\mathrm{\Gamma }^\mu \left(T^\nu \right)=\mathrm{exp}(\mathrm{i2}\pi \mu \nu /\lambda )`$ for any $`\nu =0`$, 1, …, $`\lambda -1`$. The projection operator on the carrier space of $`\mathrm{\Gamma }^\mu `$ may be written as $$P_\mu =\frac{1}{\lambda }\sum _{\nu =0}^{\lambda -1}e^{-\mathrm{i2}\pi \mu \nu /\lambda }T^\nu ,$$ (2.2) and conversely $`T^\nu `$, $`\nu =0`$, 1, …, $`\lambda -1`$, may be expressed in terms of the $`P_\mu `$’s as $$T^\nu =\sum _{\mu =0}^{\lambda -1}e^{\mathrm{i2}\pi \mu \nu /\lambda }P_\mu .$$ (2.3) The algebra defining relations (2.1) may therefore be rewritten in terms of $`I`$, $`a^{\dagger }`$, $`a`$, $`N`$, and $`P_\mu ^{\dagger }=P_\mu `$, $`\mu =0`$, 1, …, $`\lambda -1`$, as $$[N,a^{\dagger }]=a^{\dagger },\qquad [N,P_\mu ]=0,\qquad \sum _{\mu =0}^{\lambda -1}P_\mu =I,$$ $$[a,a^{\dagger }]=I+\sum _{\mu =0}^{\lambda -1}\alpha _\mu P_\mu ,\qquad a^{\dagger }P_\mu =P_{\mu +1}a^{\dagger },\qquad P_\mu P_\nu =\delta _{\mu ,\nu }P_\mu ,$$ (2.4) where we use the convention $`P_{\mu ^{}}=P_\mu `$ if $`\mu ^{}-\mu =0\mathrm{mod}\lambda `$ (and similarly for other operators or parameters indexed by $`\mu `$, $`\mu ^{}`$). Equation (2.4) depends upon $`\lambda `$ real parameters $`\alpha _\mu =\sum _{\nu =1}^{\lambda -1}\mathrm{exp}(\mathrm{i2}\pi \mu \nu /\lambda )\kappa _\nu `$, $`\mu =0`$, 1, …, $`\lambda -1`$, restricted by the condition $`\sum _{\mu =0}^{\lambda -1}\alpha _\mu =0`$. Hence, we may eliminate one of them, for instance $`\alpha _{\lambda -1}`$, and denote $`C_\lambda `$-extended oscillator algebras by $`𝒜_{\alpha _0\alpha _1\mathrm{}\alpha _{\lambda -2}}^{(\lambda )}`$. The cyclic group generator $`T`$ and the projection operators $`P_\mu `$ can be realized in terms of $`N`$ as $$T=e^{\mathrm{i2}\pi N/\lambda },\qquad P_\mu =\frac{1}{\lambda }\sum _{\nu =0}^{\lambda -1}e^{\mathrm{i2}\pi \nu (N-\mu )/\lambda },\qquad \mu =0,1,\dots ,\lambda -1,$$ (2.5) respectively. With such a choice, $`𝒜_{\alpha _0\alpha _1\mathrm{}\alpha _{\lambda -2}}^{(\lambda )}`$ becomes a GDOA, $`𝒜^{(\lambda )}(G(N))`$, characterized by $`q=1`$ and $`G(N)=I+\sum _{\mu =0}^{\lambda -1}\alpha _\mu P_\mu `$, where $`P_\mu `$ is given in Eq. (2.5). For any GDOA $`𝒜_q(G(N))`$, one may define a so-called structure function $`F(N)`$, which is the solution of the difference equation $`F(N+1)-qF(N)=G(N)`$, such that $`F(0)=0`$. For $`𝒜^{(\lambda )}(G(N))`$, we find $$F(N)=N+\sum _{\mu =0}^{\lambda -1}\beta _\mu P_\mu ,\qquad \beta _0\equiv 0,\qquad \beta _\mu \equiv \sum _{\nu =0}^{\mu -1}\alpha _\nu \ (\mu =1,2,\dots ,\lambda -1).$$ (2.6) At this point, it is worth noting that for $`\lambda =2`$, we obtain $`T=K`$, $`P_0=(I+K)/2`$, $`P_1=(I-K)/2`$, and $`\kappa _1=\kappa _1^{*}=\alpha _0=-\alpha _1=\kappa `$, so that $`𝒜_{\alpha _0}^{(2)}`$ coincides with the $`S_2`$-extended oscillator algebra $`𝒜_\kappa ^{(2)}`$ and $`𝒜^{(2)}(G(N))`$ with the Calogero-Vasiliev algebra. In Ref.
, we showed that $`𝒜^{(\lambda )}(G(N))`$ (and more generally $`𝒜_{\alpha _0\alpha _1\mathrm{}\alpha _{\lambda -2}}^{(\lambda )}`$) has only two different types of unirreps: infinite-dimensional bounded from below unirreps and finite-dimensional ones. Among the former, there is the so-called bosonic Fock space representation, wherein $`a^{\dagger }a=F(N)`$ and $`aa^{\dagger }=F(N+1)`$. Its carrier space $`\mathcal{F}`$ is spanned by the eigenvectors $`|n\rangle `$ of the number operator $`N`$, corresponding to the eigenvalues $`n=0`$, 1, 2, …, where $`|0\rangle `$ is a vacuum state, i.e., $`a|0\rangle =N|0\rangle =0`$ and $`P_\mu |0\rangle =\delta _{\mu ,0}|0\rangle `$. The eigenvectors can be written as $$|n\rangle =𝒩_n^{-1/2}\left(a^{\dagger }\right)^n|0\rangle ,\qquad n=0,1,2,\dots ,$$ (2.7) where $`𝒩_n=\prod _{i=1}^nF(i)`$. The creation and annihilation operators act upon $`|n\rangle `$ in the usual way, i.e., $$a^{\dagger }|n\rangle =\sqrt{F(n+1)}|n+1\rangle ,\qquad a|n\rangle =\sqrt{F(n)}|n-1\rangle ,$$ (2.8) while $`P_\mu `$ projects on the $`\mu `$th component $`\mathcal{F}_\mu \equiv \{|k\lambda +\mu \rangle \mid k=0,1,2,\dots \}`$ of the $`\mathrm{Z}_\lambda `$-graded Fock space $`\mathcal{F}=\oplus _{\mu =0}^{\lambda -1}\mathcal{F}_\mu `$. It is obvious that such a bosonic Fock space representation exists if and only if $`F(\mu )>0`$ for $`\mu =1`$, 2, …, $`\lambda -1`$. This gives the following restrictions on the algebra parameters $`\alpha _\mu `$, $$\sum _{\nu =0}^{\mu -1}\alpha _\nu >-\mu ,\qquad \mu =1,2,\dots ,\lambda -1.$$ (2.9) In the bosonic Fock space representation, we may consider the bosonic oscillator Hamiltonian, defined as usual by $$H_0\equiv \frac{1}{2}\{a,a^{\dagger }\}.$$ (2.10) It can be rewritten as $$H_0=a^{\dagger }a+\frac{1}{2}\left(I+\sum _{\mu =0}^{\lambda -1}\alpha _\mu P_\mu \right)=N+\frac{1}{2}I+\sum _{\mu =0}^{\lambda -1}\gamma _\mu P_\mu ,$$ (2.11) where $`\gamma _0\equiv \frac{1}{2}\alpha _0`$ and $`\gamma _\mu \equiv \sum _{\nu =0}^{\mu -1}\alpha _\nu +\frac{1}{2}\alpha _\mu `$ for $`\mu =1`$, 2, …, $`\lambda -1`$. The eigenvectors of $`H_0`$ are the states $`|n\rangle =|k\lambda +\mu \rangle `$, defined in Eq. (2.7), and their eigenvalues are given by $$E_{k\lambda +\mu }=k\lambda +\mu +\gamma _\mu +\frac{1}{2},\qquad k=0,1,2,\dots ,\qquad \mu =0,1,\dots ,\lambda -1.$$ (2.12) In each $`\mathcal{F}_\mu `$ subspace of the $`\mathrm{Z}_\lambda `$-graded Fock space $`\mathcal{F}`$, the spectrum of $`H_0`$ is therefore harmonic, but the $`\lambda `$ infinite sets of equally spaced energy levels, corresponding to $`\mu =0`$, 1, …, $`\lambda -1`$, may be shifted with respect to each other by some amounts depending upon the algebra parameters $`\alpha _0`$, $`\alpha _1`$, …, $`\alpha _{\lambda -2}`$, through their linear combinations $`\gamma _\mu `$, $`\mu =0`$, 1, …, $`\lambda -1`$. For the Calogero-Vasiliev oscillator, i.e., for $`\lambda =2`$, the relation $`\gamma _0=\gamma _1=\kappa /2`$ implies that the spectrum is very simple and coincides with that of a shifted harmonic oscillator. For $`\lambda \geq 3`$, however, it has a much richer structure. According to the parameter values, it may be nondegenerate, or may exhibit some ($`\nu +1`$)-fold degeneracies above some energy eigenvalue, where $`\nu `$ may take any value in the set $`\{1,2,\dots ,\lambda -1\}`$. In Ref. , we obtained for $`\lambda =3`$ the complete classification of nondegenerate, twofold and threefold degenerate spectra in terms of $`\alpha _0`$ and $`\alpha _1`$.
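Everything in this section is finite matrix algebra once the Fock space is truncated, so the defining relation (2.4) and the spectrum (2.12) can be verified directly. The following Python sketch is a numerical check of my own, not part of the original communication; the parameter values are arbitrary choices satisfying Eq. (2.9).

```python
import numpy as np

lam = 3                                 # lambda
alpha = np.array([0.5, -0.3, -0.2])     # sum(alpha) = 0, obeys Eq. (2.9)
beta = np.concatenate(([0.0], np.cumsum(alpha)[:-1]))   # Eq. (2.6)

dim = 30                                # truncated basis |0>, ..., |dim-1>
n = np.arange(dim)
F = n + beta[n % lam]                   # structure function F(n)

# a, a^dag, N and the grading projectors P_mu, via Eqs. (2.7)-(2.8)
a = np.diag(np.sqrt(F[1:]), k=1)
ad = a.T
N = np.diag(n.astype(float))
P = [np.diag((n % lam == mu).astype(float)) for mu in range(lam)]

# [a, a^dag] = I + sum_mu alpha_mu P_mu, checked away from the truncation edge
G = a @ ad - ad @ a
target = np.eye(dim) + sum(alpha[mu] * P[mu] for mu in range(lam))
assert np.allclose(G[:dim-1, :dim-1], target[:dim-1, :dim-1])

# H_0 = (1/2){a, a^dag}: eigenvalues k*lam + mu + gamma_mu + 1/2, Eq. (2.12)
gamma = beta + alpha / 2
H0 = 0.5 * (a @ ad + ad @ a)
E = np.diag(H0)[:dim-1]
assert np.allclose(E, n[:dim-1] + gamma[n[:dim-1] % lam] + 0.5)

# lambda = 2 (Calogero-Vasiliev) check: the spectrum is a shifted oscillator
kappa = 0.7
F2 = n + kappa * (n % 2)                # beta = (0, kappa)
H0_2 = 0.5 * (F2[:-1] + F2[1:])         # diagonal of H_0 on |0>,...,|dim-2>
assert np.allclose(H0_2, n[:dim-1] + (1 + kappa) / 2)
```

The last two lines reproduce the statement above that for $`\lambda =2`$ the spectrum coincides with that of a shifted harmonic oscillator.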
In the remaining part of this communication, we will show that the bosonic Fock space representation of $`𝒜^{(\lambda )}(G(N))`$ and the corresponding bosonic oscillator Hamiltonian $`H_0`$ have some useful applications to SSQM and some of its variants. ## 3 Application to supersymmetric quantum mechanics with cyclic shape invariant potentials In SSQM with two supercharges, the supersymmetric Hamiltonian $`\mathcal{H}`$ and the supercharges $`Q^{\dagger }`$, $`Q=(Q^{\dagger })^{\dagger }`$, satisfy the sqm(2) superalgebra, defined by the relations $$Q^2=0,\qquad [\mathcal{H},Q]=0,\qquad \{Q,Q^{\dagger }\}=\mathcal{H},$$ (3.1) together with their Hermitian conjugates. In such a context, shape invariance provides an integrability condition, yielding all the bound state energy eigenvalues and eigenfunctions, as well as the scattering matrix. Recently, Sukhatme, Rasinariu, and Khare introduced cyclic shape invariant potentials of period $`p`$ in SSQM. They are characterized by the fact that the supersymmetric partner Hamiltonians correspond to a series of shape invariant potentials, which repeats after a cycle of $`p`$ iterations. In other words, one may define $`p`$ sets of operators $`\{\mathcal{H}_\mu ,Q_\mu ^{\dagger },Q_\mu \}`$, $`\mu =0`$, 1, …, $`p-1`$, each satisfying the sqm(2) defining relations (3.1). The operators may be written as $$\mathcal{H}_\mu =\left(\begin{array}{cc}\mathcal{H}^{(\mu )}-\mathcal{E}_0^{(\mu )}I& 0\\ 0& \mathcal{H}^{(\mu +1)}-\mathcal{E}_0^{(\mu )}I\end{array}\right),\qquad Q_\mu ^{\dagger }=\left(\begin{array}{cc}0& A_\mu ^{\dagger }\\ 0& 0\end{array}\right),\qquad Q_\mu =\left(\begin{array}{cc}0& 0\\ A_\mu & 0\end{array}\right),$$ (3.2) where $$\mathcal{H}^{(0)}=A_0^{\dagger }A_0,\qquad \mathcal{H}^{(\mu )}=A_{\mu -1}A_{\mu -1}^{\dagger }+\mathcal{E}_0^{(\mu -1)}I=A_\mu ^{\dagger }A_\mu +\mathcal{E}_0^{(\mu )}I,\qquad \mu =1,2,\dots ,p,$$ $$A_\mu =\frac{d}{dx}+W(x,b_\mu ),\qquad A_\mu ^{\dagger }=-\frac{d}{dx}+W(x,b_\mu ),\qquad \mu =0,1,\dots ,p,$$ (3.3) and $`\mathcal{E}_0^{(\mu )}`$ denotes the ground state energy of $`\mathcal{H}^{(\mu )}`$ (with $`\mathcal{E}_0^{(0)}=0`$). Here the superpotentials $`W(x,b_\mu )`$ depend upon some parameters $`b_\mu `$, such that $`b_{\mu +p}=b_\mu `$, and they satisfy $`p`$ shape invariance conditions $$W^2(x,b_\mu )+W^{\prime }(x,b_\mu )=W^2(x,b_{\mu +1})-W^{\prime }(x,b_{\mu +1})+\omega _\mu ,\qquad \mu =0,1,\dots ,p-1,$$ (3.4) where $`\omega _\mu `$, $`\mu =0`$, 1, …, $`p-1`$, are some real constants. From the solution of Eq. (3.4), one may then construct the potentials corresponding to the supersymmetric partners $`\mathcal{H}^{(\mu )}`$, $`\mathcal{H}^{(\mu +1)}`$ in the usual way, i.e., $`V^{(\mu )}=W^2(x,b_\mu )-W^{\prime }(x,b_\mu )+\mathcal{E}_0^{(\mu )}`$, $`V^{(\mu +1)}=W^2(x,b_\mu )+W^{\prime }(x,b_\mu )+\mathcal{E}_0^{(\mu )}`$. For $`p=2`$, Gangopadhyaya and Sukhatme obtained such potentials as superpositions of a Calogero potential and a $`\delta `$-function singularity. For $`p\geq 3`$, however, only numerical solutions of the shape invariance conditions (3.4) have been obtained , so that no analytical form of $`V^{(\mu )}`$ is known. In spite of this, the spectrum is easily derived and consists of $`p`$ infinite sets of equally spaced energy levels, shifted with respect to each other by the energies $`\omega _0`$, $`\omega _1`$, …, $`\omega _{p-1}`$. Since for some special choices of parameters, spectra of a similar type may be obtained with the bosonic oscillator Hamiltonian (2.10) acting in the bosonic Fock space representation of $`𝒜^{(p)}(G(N))`$, one may try to establish a relation between the class of algebras $`𝒜^{(p)}(G(N))`$ and SSQM with cyclic shape invariant potentials of period $`p`$. In Ref. , we proved that the operators $`\mathcal{H}^{(\mu )}`$, $`A_\mu ^{\dagger }`$, and $`A_\mu `$ of Eqs.
(3.2) and (3.3) can be realized in terms of the generators of $`p`$ algebras $`𝒜^{(p)}(G^{(\mu )}(N))`$, $`\mu =0`$, 1, …, $`p-1`$, belonging to the class $`\left\{𝒜^{(p)}(G(N))\right\}`$. The parameters of such algebras are obtained by cyclic permutations from a starting set $`\{\alpha _0,\alpha _1,\dots ,\alpha _{p-1}\}`$ corresponding to $`𝒜^{(p)}(G^{(0)}(N))=𝒜^{(p)}(G(N))`$. Denoting by $`N`$, $`a_\mu ^{\dagger }`$, $`a_\mu `$ the number, creation, and annihilation operators corresponding to the $`\mu `$th algebra $`𝒜^{(p)}(G^{(\mu )}(N))`$, where $`a_0^{\dagger }=a^{\dagger }`$, and $`a_0=a`$, we may write the fourth relation in the algebra defining relations (2.4) as $$[a_\mu ,a_\mu ^{\dagger }]=I+\sum _{\nu =0}^{p-1}\alpha _\nu ^{(\mu )}P_\nu ,\qquad \alpha _\nu ^{(\mu )}\equiv \alpha _{\nu +\mu },\qquad \mu =0,1,\dots ,p-1,$$ (3.5) while the remaining relations keep the same form. The realization of $`\mathcal{H}^{(\mu )}`$, $`A_\mu ^{\dagger }`$, $`A_\mu `$, $`\mu =0`$, 1, …, $`p-1`$, is then given by $$\mathcal{H}^{(\mu )}=F(N+\mu )=N+\mu I+\sum _{\nu =0}^{p-1}\beta _{\nu +\mu }P_\nu =H_0^{(\mu )}-\frac{1}{2}\sum _{\nu =0}^{p-1}\left(1+\alpha _\nu ^{(\mu )}\right)P_\nu +\mathcal{E}_0^{(\mu )}I,$$ $$A_\mu ^{\dagger }=a_\mu ^{\dagger },\qquad A_\mu =a_\mu ,$$ (3.6) where $`H_0^{(\mu )}\equiv \frac{1}{2}\{a_\mu ,a_\mu ^{\dagger }\}`$ is the bosonic oscillator Hamiltonian associated with $`𝒜^{(p)}(G^{(\mu )}(N))`$, $`\mathcal{E}_0^{(\mu )}=\sum _{\nu =0}^{\mu -1}\omega _\nu `$, and the level spacings are $`\omega _\mu =1+\alpha _\mu `$. For this result to be meaningful, the conditions $`\omega _\mu >0`$, $`\mu =0`$, 1, …, $`p-1`$, have to be fulfilled. When combined with the restrictions (2.9), the latter imply that the parameters of the starting algebra $`𝒜^{(p)}(G(N))`$ must be such that $`-1<\alpha _0<\lambda -1`$, $`-1<\alpha _\mu <\lambda -\mu -1-\sum _{\nu =0}^{\mu -1}\alpha _\nu `$ if $`\mu =1`$, 2, …, $`\lambda -2`$, and $`\alpha _{\lambda -1}=-\sum _{\nu =0}^{\lambda -2}\alpha _\nu `$. ## 4 Application to parasupersymmetric quantum mechanics of order $`𝒑`$ The sqm(2) superalgebra (3.1) is most often realized in terms of mutually commuting boson and fermion operators. Plyushchay, however, showed that it can alternatively be realized in terms of only boson-like operators, namely the generators of the Calogero-Vasiliev algebra $`𝒜^{(2)}(G(N))`$ (see also Ref. ). Such an SSQM bosonization can be performed in two different ways, by choosing either $`Q=a^{\dagger }P_1`$ (so that $`\mathcal{H}=H_0-\frac{1}{2}(K+\kappa )`$) or $`Q=a^{\dagger }P_0`$ (so that $`\mathcal{H}=H_0+\frac{1}{2}(K+\kappa )`$). The first choice corresponds to unbroken SSQM (all the excited states are twofold degenerate while the ground state is nondegenerate and at vanishing energy), and the second choice describes broken SSQM (all the states are twofold degenerate and at positive energy). SSQM was generalized to parasupersymmetric quantum mechanics (PSSQM) of order two by Rubakov and Spiridonov, and later on to PSSQM of arbitrary order $`p`$ by Khare. In the latter case, Eq. (3.1) is replaced by $$Q^{p+1}=0\ (\mathrm{with}\ Q^p\neq 0),$$ $$[\mathcal{H},Q]=0,$$ $$Q^pQ^{\dagger }+Q^{p-1}Q^{\dagger }Q+\cdots +QQ^{\dagger }Q^{p-1}+Q^{\dagger }Q^p=2pQ^{p-1}\mathcal{H},$$ (4.1) and is retrieved in the case where $`p=1`$. The parasupercharges $`Q`$, $`Q^{\dagger }`$, and the parasupersymmetric Hamiltonian $`\mathcal{H}`$ are usually realized in terms of mutually commuting boson and parafermion operators. A property of PSSQM of order $`p`$ is that the spectrum of $`\mathcal{H}`$ is ($`p+1`$)-fold degenerate above the ($`p-1`$)th energy level.
This fact and Plyushchay’s results for $`p=1`$ hint at a possibility of representing $`\mathcal{H}`$ as a linear combination of the bosonic oscillator Hamiltonian $`H_0`$ associated with $`𝒜^{(p+1)}(G(N))`$ and some projection operators, as in Eq. (3.6). In Ref. (see also Refs. ), we proved that PSSQM of order $`p`$ can indeed be bosonized in terms of the generators of $`𝒜^{(p+1)}(G(N))`$ for any allowed (i.e., satisfying Eq. (2.9)) values of the algebra parameters $`\alpha _0`$, $`\alpha _1`$, …, $`\alpha _{p-1}`$. For such a purpose, we started from ansätze of the type $$Q=\sum _{\nu =0}^{p}\sigma _\nu a^{\dagger }P_\nu ,\qquad \mathcal{H}=H_0+\frac{1}{2}\sum _{\nu =0}^{p}r_\nu P_\nu ,$$ (4.2) where $`\sigma _\nu `$ and $`r_\nu `$ are some complex and real constants, respectively, to be determined in such a way that Eq. (4.1) is fulfilled. We found that there are $`p+1`$ families of solutions, which may be distinguished by an index $`\mu \in \{0,1,\dots ,p\}`$ and from which we may choose the following representative solutions $$Q_\mu =\sqrt{2}\sum _{\nu =1}^{p}a^{\dagger }P_{\mu +\nu },$$ $$\mathcal{H}_\mu =N+\frac{1}{2}(2\gamma _{\mu +2}+r_{\mu +2}-2p+3)I+\sum _{\nu =1}^{p}(p+1-\nu )P_{\mu +\nu },$$ (4.3) where $$r_{\mu +2}=\frac{1}{p}\left[(p-2)\alpha _{\mu +2}+2\sum _{\nu =3}^{p}(p-\nu +1)\alpha _{\mu +\nu }+p(p-2)\right].$$ (4.4) The eigenvectors of $`\mathcal{H}_\mu `$ are the states (2.7) and the corresponding eigenvalues are easily found. All the energy levels are equally spaced. For $`\mu =0`$, PSSQM is unbroken, otherwise it is broken with a ($`\mu +1`$)-fold degenerate ground state. All the excited states are ($`p+1`$)-fold degenerate. For $`\mu =0`$, 1, …, $`p-2`$, the ground state energy may be positive, null, or negative depending on the parameters, whereas for $`\mu =p-1`$ or $`p`$, it is always positive. Khare showed that in PSSQM of order $`p`$, $`\mathcal{H}`$ has in fact $`2p`$ (and not only two) conserved parasupercharges, as well as $`p`$ bosonic constants. In other words, there exist $`p`$ independent operators $`Q_r`$, $`r=1`$, 2, …, $`p`$, satisfying with $`\mathcal{H}`$ the set of equations (4.1), and $`p`$ other independent operators $`I_t`$, $`t=2`$, 3, …, $`p+1`$, commuting with $`\mathcal{H}`$, as well as among themselves. In Ref. , we obtained a realization of all such operators in terms of the $`𝒜^{(p+1)}(G(N))`$ generators. As a final point, let us note that there exists an alternative approach to PSSQM of order $`p`$, which was proposed by Beckers and Debergh, and wherein the multilinear relation in Eq. (4.1) is replaced by the cubic equation $$[Q,[Q^{\dagger },Q]]=2Q\mathcal{H}.$$ (4.5) In Ref. , we proved that for $`p=2`$, this PSSQM algebra can only be realized by those $`𝒜^{(3)}(G(N))`$ algebras that simultaneously bosonize the Rubakov-Spiridonov-Khare PSSQM algebra. ## 5 Application to pseudosupersymmetric quantum mechanics Pseudosupersymmetric quantum mechanics (pseudoSSQM) was introduced by Beckers, Debergh, and Nikitin in a study of relativistic vector mesons interacting with an external constant magnetic field. In the nonrelativistic limit, their theory leads to a pseudosupersymmetric oscillator Hamiltonian, which can be realized in terms of mutually commuting boson and pseudofermion operators, where the latter are intermediate between standard fermion and $`p=2`$ parafermion operators.
It is then possible to formulate a pseudoSSQM, characterized by a pseudosupersymmetric Hamiltonian $`\mathcal{H}`$ and pseudosupercharge operators $`Q`$, $`Q^{\dagger }`$, satisfying the relations $$Q^2=0,\qquad [\mathcal{H},Q]=0,\qquad QQ^{\dagger }Q=4c^2Q\mathcal{H},$$ (5.1) and their Hermitian conjugates, where $`c`$ is some real constant. The first two relations in Eq. (5.1) are the same as those occurring in SSQM, whereas the third one is similar to the multilinear relation valid in PSSQM of order two. Actually, for $`c=1`$ or 1/2, it is compatible with Eq. (4.1) or (4.5), respectively. In Ref. , we proved that pseudoSSQM can be bosonized in two different ways in terms of the generators of $`𝒜^{(3)}(G(N))`$ for any allowed values of the parameters $`\alpha _0`$, $`\alpha _1`$. This time, we started from the ansätze $$Q=\sum _{\nu =0}^{2}\left(\xi _\nu a+\eta _\nu a^{\dagger }\right)P_\nu ,\qquad \mathcal{H}=H_0+\frac{1}{2}\sum _{\nu =0}^{2}r_\nu P_\nu ,$$ (5.2) and determined the complex constants $`\xi _\nu `$, $`\eta _\nu `$, and the real ones $`r_\nu `$ in such a way that Eq. (5.1) is fulfilled. The first type of bosonization corresponds to three families of two-parameter solutions, labelled by an index $`\mu \in \{0,1,2\}`$, $$Q_\mu (\eta _{\mu +2},\phi )=\left(\eta _{\mu +2}a^{\dagger }+e^{\mathrm{i}\phi }\sqrt{4c^2-\eta _{\mu +2}^2}a\right)P_{\mu +2},$$ $$\mathcal{H}_\mu (\eta _{\mu +2})=N+\frac{1}{2}(2\gamma _{\mu +2}+r_{\mu +2}-1)I+2P_{\mu +1}+P_{\mu +2},$$ (5.3) where $`0<\eta _{\mu +2}<2|c|`$, $`0\leq \phi <2\pi `$, and $$r_{\mu +2}=\frac{1}{2c^2}(1+\alpha _{\mu +2})\left(|\eta _{\mu +2}|^2-2c^2\right).$$ (5.4) Choosing for instance $`\eta _{\mu +2}=\sqrt{2}|c|`$, and $`\phi =0`$, hence $`r_{\mu +2}=0`$ (producing an overall shift of the spectrum), we obtain $$Q_\mu =c\sqrt{2}\left(a^{\dagger }+a\right)P_{\mu +2},$$ $$\mathcal{H}_\mu =N+\frac{1}{2}(2\gamma _{\mu +2}-1)I+2P_{\mu +1}+P_{\mu +2}.$$ (5.5) A comparison between Eq. (5.3) or (5.5) and Eq. (4.3) shows that the pseudosupersymmetric and $`p=2`$ parasupersymmetric Hamiltonians coincide, but that the corresponding charges are of course different. The conclusions relative to the spectrum and the ground state energy are therefore the same as in Sec. 4. The second type of bosonization corresponds to three families of one-parameter solutions, again labelled by an index $`\mu \in \{0,1,2\}`$, $$Q_\mu =2|c|aP_{\mu +2},$$ $$\mathcal{H}_\mu (r_\mu )=N+\frac{1}{2}(2\gamma _{\mu +2}-\alpha _{\mu +2})I+\frac{1}{2}(1-\alpha _{\mu +1}+\alpha _{\mu +2}+r_\mu )P_\mu +P_{\mu +1},$$ (5.6) where $`r_\mu \in \mathrm{R}`$ changes the Hamiltonian spectrum in a significant way. We indeed find that the levels are equally spaced if and only if $`r_\mu =(\alpha _{\mu +1}-\alpha _{\mu +2}+3)\mathrm{mod}\mathrm{\hspace{0.17em}6}`$. If $`r_\mu `$ is small enough, the ground state is nondegenerate, and its energy is negative for $`\mu =1`$, or may have any sign for $`\mu =0`$ or 2. On the contrary, if $`r_\mu `$ is large enough, the ground state remains nondegenerate with a vanishing energy in the former case, while it becomes twofold degenerate with a positive energy in the latter. For some intermediate $`r_\mu `$ value, one gets a two or threefold degenerate ground state with a vanishing or positive energy, respectively. ## 6 Application to orthosupersymmetric quantum mechanics of order two Mishra and Rajasekaran introduced order-$`p`$ orthofermion operators by replacing the Pauli exclusion principle by a more stringent one: an orbital state shall not contain more than one particle, whatever be the spin direction.
The wave function is thus antisymmetric in spatial indices alone, with the order of the spin indices frozen. Khare, Mishra, and Rajasekaran then developed orthosupersymmetric quantum mechanics (OSSQM) of arbitrary order $`p`$ by combining boson operators with orthofermion ones, for which the spatial indices are ignored. OSSQM is formulated in terms of an orthosupersymmetric Hamiltonian $`\mathcal{H}`$, and $`2p`$ orthosupercharge operators $`Q_r`$, $`Q_r^{\dagger }`$, $`r=1`$, 2, …, $`p`$, satisfying the relations $$Q_rQ_s=0,\qquad [\mathcal{H},Q_r]=0,\qquad Q_rQ_s^{\dagger }+\delta _{r,s}\sum _{t=1}^{p}Q_t^{\dagger }Q_t=2\delta _{r,s}\mathcal{H},$$ (6.1) and their Hermitian conjugates, where $`r`$ and $`s`$ run over 1, 2, …, $`p`$. In Ref. , we proved that OSSQM of order two can be bosonized in terms of the generators of some well-chosen $`𝒜^{(3)}(G(N))`$ algebras. As ansätze, we used the expressions $$Q_1=\sum _{\nu =0}^{2}\left(\xi _\nu a+\eta _\nu a^{\dagger }\right)P_\nu ,\qquad Q_2=\sum _{\nu =0}^{2}\left(\zeta _\nu a+\rho _\nu a^{\dagger }\right)P_\nu ,\qquad \mathcal{H}=H_0+\frac{1}{2}\sum _{\nu =0}^{2}r_\nu P_\nu ,$$ (6.2) and determined the complex constants $`\xi _\nu `$, $`\eta _\nu `$, $`\zeta _\nu `$, $`\rho _\nu `$, and the real ones $`r_\nu `$ in such a way that Eq. (6.1) is fulfilled. We found two families of two-parameter solutions, labelled by $`\mu \in \{0,1\}`$, $$Q_{1,\mu }(\xi _{\mu +2},\phi )=\xi _{\mu +2}aP_{\mu +2}+e^{\mathrm{i}\phi }\sqrt{2-\xi _{\mu +2}^2}a^{\dagger }P_\mu ,$$ $$Q_{2,\mu }(\xi _{\mu +2},\phi )=e^{\mathrm{i}\phi }\sqrt{2-\xi _{\mu +2}^2}aP_{\mu +2}+\xi _{\mu +2}a^{\dagger }P_\mu ,$$ $$\mathcal{H}_\mu =N+\frac{1}{2}(2\gamma _{\mu +1}-1)I+2P_\mu +P_{\mu +1},$$ (6.3) where $`0<\xi _{\mu +2}\leq \sqrt{2}`$ and $`0\leq \phi <2\pi `$, provided the algebra parameter $`\alpha _{\mu +1}`$ is taken as $`\alpha _{\mu +1}=-1`$. As a matter of fact, the absence of a third family of solutions corresponding to $`\mu =2`$ comes from the incompatibility of this condition (i.e., $`\alpha _0=-1`$) with conditions (2.9). The orthosupersymmetric Hamiltonian $`\mathcal{H}`$ in Eq. (6.3) is independent of the parameters $`\xi _{\mu +2}`$, $`\phi `$. All the levels of its spectrum are equally spaced. For $`\mu =0`$, OSSQM is broken: the levels are threefold degenerate, and the ground state energy is positive. On the contrary, for $`\mu =1`$, OSSQM is unbroken: only the excited states are threefold degenerate, while the nondegenerate ground state has a vanishing energy. Such results agree with the general conclusions of Ref. . For $`p`$ values greater than two, the OSSQM algebra (6.1) becomes rather complicated because the number of equations to be fulfilled increases considerably. A glance at the 18 independent conditions for $`p=3`$ led us to the conclusion that the $`𝒜^{(4)}(G(N))`$ algebra is not rich enough to contain operators satisfying Eq. (6.1). Contrary to what happens for PSSQM, for OSSQM the $`p=2`$ case is therefore not representative of the general one. ## 7 Conclusion In this communication, we showed that the $`S_2`$-extended oscillator algebra, which was introduced in connection with the two-particle Calogero model, can be extended to the whole class of $`C_\lambda `$-extended oscillator algebras $`𝒜_{\alpha _0\alpha _1\mathrm{}\alpha _{\lambda -2}}^{(\lambda )}`$, where $`\lambda \in \{2,3,\dots \}`$, and $`\alpha _0`$, $`\alpha _1`$, …, $`\alpha _{\lambda -2}`$ are some real parameters.
In the same way, the GDOA realization of the former, known as the Calogero-Vasiliev algebra, is generalized to a class of GDOAs $`𝒜^{(\lambda )}(G(N))`$, where $`\lambda \in \{2,3,\dots \}`$, for which one can define a bosonic oscillator Hamiltonian $`H_0`$, acting in the bosonic Fock space representation. For $`\lambda \geq 3`$, the spectrum of $`H_0`$ has a very rich structure in terms of the algebra parameters $`\alpha _0`$, $`\alpha _1`$, …, $`\alpha _{\lambda -2}`$. This can be exploited to provide an algebraic realization of SSQM with cyclic shape invariant potentials of period $`\lambda `$, a bosonization of PSSQM of order $`p=\lambda -1`$, and, for $`\lambda =3`$, a bosonization of pseudoSSQM and OSSQM of order two.
# First order phase transition of the vortex lattice in twinned YBa2Cu3O7 single crystals in tilted magnetic fields ## I Introduction During the last years, vortex physics in high-temperature superconductors has become a major topic of research. The main reason for this interest is that the interplay between thermal fluctuations, anisotropy and disorder determines the existence of several vortex phases in the magnetic phase diagram of these materials. These phases are separated by different kinds of thermodynamic transitions. In particular, it is now well established that in clean YBa2Cu3O7 crystals the vortex solid transforms into a liquid through a first order phase transition. This thermodynamic transition shows up in the transport properties as a sharp drop or “kink” in the resistivity, $`\rho (T)`$, at the melting temperature $`T_m(H,\theta )`$, and has a hysteretic behaviour both in temperature and field which corresponds to a superheating of the solid phase. When correlated disorder is present in the sample, as for example in twinned YBa2Cu3O7 crystals, the transition transforms into second order and occurs between a solid called Bose-Glass and an entangled vortex liquid. The Bose-Glass phase, existing only within a small angular region around the direction of the correlated defects ($`\theta =0`$), is characterized by a universal behaviour of the non-linear and linear resistivity with well defined critical exponents. At the Bose-Glass transition temperature, $`T_{BG}(H,\theta )`$, the presence of correlated defects introduces changes in the thermodynamic properties of the mixed state that go beyond the mass anisotropy approximation. In contrast to $`T_m(H,\theta )`$, which smoothly follows the angular dependence given by the anisotropy, $`T_{BG}(H,\theta )`$ shows a sharp cusp around $`\theta =0`$. Within this small angular region, $`\rho (T)`$ has a smooth temperature dependence near the transition, while at larger angles a kink similar to that observed in untwinned crystals develops. Using the Lindemann criterion and the scaling rules for anisotropic superconductors, Kwok and co-workers fitted the angular dependence of the temperature at which the kink occurred. This fit was interpreted as an indication of a recovery of the melting transition in the vortex system when the magnetic field was tilted away from the planar defect direction. Using this interpretation, Langan et al. showed in a recent paper that the kink in $`\rho (T)`$ is suppressed for fields larger than a certain field $`H^{*}(\theta )`$. This behaviour is similar to that observed in untwinned crystals where a critical end point for the melting line has been identified, and gives support to the occurrence of a first order solid-liquid transition when the field is rotated away from the twin planes. An interesting finding in oriented twinned crystals in inclined magnetic fields was done by Morré et al., who showed that the transport properties of the liquid state at angles well beyond the so-called depinning angle, $`\theta _d`$, are quite different from those observed in clean samples. The depinning angle concept is commonly used in the literature to indicate the angle beyond which the twin boundaries become ineffective as a correlating potential. Morré et al. showed, however, that the twin boundary potentials continue to induce vortex velocity correlation even for $`\theta \gg \theta _d`$.
Their results demonstrated that the vortex liquid remains correlated in the field direction above the resistivity kink temperature. This behaviour is in sharp contrast to that observed in untwinned samples, where the vortex velocity correlation in the field direction is lost at the melting temperature. The important difference between the characteristics of the two liquids was used to cast doubts about the interpretation of the resistivity kink in twinned crystals as a manifestation of a first order transition. In order to investigate further the nature of the vortex solid-liquid transition in the presence of correlated defects at finite angles, we studied the vortex dynamics in YBa2Cu3O7 crystals with oriented twin boundaries. We performed exhaustive transport measurements as a function of magnetic field, $`H`$, and angle, $`\theta `$, with respect to the twin boundary direction. Characterizing the resistivity kink near the transition by its width, we show that it remains almost constant as $`H`$ is increased until a field $`H^{*}(\theta )`$ is reached. Below this field the V-I characteristics are non-linear and, what is more important, the resistive transitions are hysteretic. Moreover, our results indicate that the hysteresis corresponds to a superheating of the vortex lattice, in agreement with the results obtained by Charalambous et al. Above $`H^{*}`$, which scales as $`\epsilon (\theta )H^{*}(\theta )`$, the kink width suddenly increases, the V-I characteristics become linear and the hysteresis disappears. We find that the resistivity kink height follows a universal behaviour when the reduced variable $`\epsilon (\theta )H`$ is used, in contrast to the results of Langan et al. Our data, together with those previously reported, lead us to conclude that when $`H`$ is tilted away from the direction of the twin planes the transition to the vortex liquid state, characterized by a steep jump in the resistivity, is indeed a first order phase transition. This transition occurs between a highly correlated vortex liquid and a solid of unknown symmetry and therefore supports an interpretation in terms of line-like melting rather than a sublimation of the vortex lattice. ## II Experimental We carried out transport measurements on YBa2Cu3O7 single crystals with only one family of oriented twins. The crystals were grown using the self-flux method. The fully oxygenated samples had typical dimensions $`1\times 0.3\times 0.01mm^3`$, critical transition temperatures $`T_c=93.2K`$, and transition widths $`\mathrm{\Delta }T_c\approx 0.6K`$. Four parallel contacts separated by 150$`\mu m`$ were made with silver epoxy over evaporated gold pads, resulting in contact resistances lower than 1$`\mathrm{\Omega }`$. The crystal was mounted onto a rotatable sample holder with an angular resolution better than $`0.05^{\circ }`$ inside a commercial gas flow cryostat with an 18T magnet. The current was injected at $`45^{\circ }`$ off the twin planes. Transport measurements were performed using conventional DC techniques. All resistivity measurements were made within the linear regime using a current density $`J\approx 3A/cm^2`$. ## III Results and Discussion In Figure 1 we show the angular dependence of the temperature at which the resistivity of the crystal becomes zero within experimental resolution, at an applied magnetic field of 6T. Similar results were obtained for other applied fields.
As clearly seen, there is a cusp at small angles which is indicative of the Bose-Glass phase, as has been recently demonstrated by Grigera et al. In this paper we will concentrate on the investigation of the nature of the solid-liquid transition in the angular region away from that in which the Bose-Glass phase exists. In Figure 2 we present $`R(T)`$ data for three different angles, as a function of the applied magnetic field. As can be seen, the transitions at low fields display a sharp drop to zero which we identify with the characteristic resistivity kink. Note that, for a fixed angle, the kink is washed out as the field is increased, an effect similar to that observed in ref. and to that reported for untwinned crystals with the magnetic field applied parallel to the $`c`$-axis. In the case under study, the field value at which the transitions start to broaden depends on the angle between $`H`$ and the correlated defects: the larger the angle, the larger the field at which the kink is washed out. This feature is more clearly seen in Figure 3 where, in order to quantify the above mentioned behaviour, we plotted the full width at half maximum (FWHM) of the temperature derivative, $`d\rho /dT`$, of the curves shown in Fig. 2 as a function of the applied field $`H`$. It can be seen that for $`\theta =23^{\circ }`$ the transitions start to broaden at $`H\approx 12T`$, while this field value is increased up to 14T for $`\theta =38^{\circ }`$, and to almost 18T for $`\theta =58^{\circ }`$. The values of the magnetic field at which the transitions start to broaden are independent of the criterion used for the definition of the transition width. According to the scaling theory of Blatter et al., in an anisotropic superconductor a physical magnitude which is a function of angle and field should scale as the product $`H\epsilon (\theta )`$, where the anisotropy factor $`\epsilon (\theta )=\sqrt{cos^2(\theta )+\gamma ^{-2}sin^2(\theta )}`$, with $`\gamma ^2=m_c/m_{ab}`$. Such scaling for the FWHM, with $`\gamma =7`$, is shown in Figure 4, where we have also included data for other measured angles. Note that all curves collapse onto one and that the sudden increase of the transition widths occurs at the same reduced field $`H^{*}=H\epsilon (\theta )\approx 11T`$ for all angles. The sudden change in the transition width is indicative of a corresponding change in the vortex dynamics below and above the characteristic field $`H^{*}`$. As is widely accepted, one of the most powerful tools to investigate vortex dynamics is the measurement of V-I characteristics. We performed such measurements in a temperature interval around the transitions for magnetic fields below and above $`H^{*}`$. Typical results for the resistance as a function of the applied current at an angle $`\theta =23^{\circ }`$ are shown in Figure 5. In panel (a) we plot the measurements for a field of 8T, lower than the critical field $`H^{*}(23^{\circ })`$. At low currents and high temperatures the vortex response is ohmic, changing to a non-linear behaviour as the temperature is reduced towards $`T_m=80.4K`$. Note that in the intermediate temperature region, the R(I) curves have a characteristic s-shape. This feature, which has been reported in clean samples where a first order phase transition in the vortex lattice was identified, is characteristic of all the magnetic field region below $`H^{*}(23^{\circ })`$. It is more pronounced as $`H`$ is reduced, becoming less evident as $`H`$ approaches $`H^{*}`$.
Well above $`H^{*}`$ the $`R(I)`$ curves are linear up to the maximum current that can be used without producing heating effects. In panel (b) we show the results for $`H>H^{*}(23^{\circ })`$. Clearly, the vortex response is remarkably different from that shown in panel (a). In a wide range of applied currents the response is linear. Non-linearities develop at high currents and low temperatures and are due to vortex loop excitations. One may wonder if these non-linearities are a sign of a glass transition taking place at lower temperatures, a question that has also been raised concerning the nature of the transition above the critical point in clean crystals. If this were the case, the tails of the resistive transitions above the critical field should follow the scaling behaviour predicted by the theory, $`R\propto (T-T_g)^{\nu (z-1)}`$. The analysis of our resistivity data in terms of this scaling yielded negative results. We cannot discard, however, the occurrence of the glass transition, since this negative result might be related to the voltage resolution we have in our experiments. Safar et al. experimentally observed the above mentioned scaling in Bi2Sr2CaCu2O8+δ crystals by using SQUID picovoltimetry. The extensive experimental results shown above display special features pointing towards the occurrence of a first order solid to liquid transition in the vortex lattice when $`H<H^{*}`$. In order to consolidate this scenario, we have performed measurements to look for hysteretic behaviour in the resistive transitions. Due to experimental constraints, instead of searching for hysteresis in temperature, we performed such measurements at a fixed temperature (regulation better than 5 mK) and sweeping the magnetic field up and down. The results for an angle $`\theta =23^{\circ }`$ and a measuring current $`I=50\mu A`$ are plotted in Figure 6. Panel (a) shows the data for $`T<T^{*}`$, where $`T^{*}`$ is the temperature corresponding to the critical field at $`\theta =23^{\circ }`$. The arrows indicate the sense in which the field was swept. Within experimental resolution no hysteresis is seen in this region. However, when the temperature is increased in such a way that $`T>T^{*}`$, a clear hysteretic behaviour in the resistive transition develops, as can be seen in panel (b). It is important to mention that the width of the hysteresis is current independent between $`50`$ and $`100\mu A`$. When the applied current is increased above this value curve A shifts towards curve B and, at high enough currents (above $`150\mu A`$), the hysteresis is washed out. Following Charalambous et al., we interpret this behaviour as indicative of a superheating of the vortex lattice. In Figure 7 we compare the superconductor phase diagram for our sample with that obtained in a clean untwinned crystal. The inset shows the raw H-T data for different angles, together with the zero-angle data for the untwinned crystal. In order to take into account the anisotropy change as the angle between $`H`$ and the defects is changed, in the main panel we have used the corresponding scaling field $`H\epsilon (\theta )`$, while the reduced temperature scale $`t=T/T_c(H=0)`$ is used to account for the different critical temperatures at zero field of both samples.
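The anisotropic scaling used in Figures 4 and 7 can be checked directly against the numbers quoted above. The short Python sketch below is my own illustration: the broadening fields are the values read off Figure 3 as quoted in the text, and the 58° point is only approximate since 18 T is the highest field reached in these experiments.

```python
import numpy as np

def eps(theta_deg, gamma=7.0):
    """Anisotropy factor eps(theta) = sqrt(cos^2 + gamma^-2 sin^2)."""
    t = np.radians(theta_deg)
    return np.sqrt(np.cos(t)**2 + np.sin(t)**2 / gamma**2)

# (angle from the twin planes, field where the kink starts to broaden)
data = [(23.0, 12.0), (38.0, 14.0), (58.0, 18.0)]   # degrees, tesla
for theta, H in data:
    print(f"theta = {theta:4.1f} deg -> H * eps = {H * eps(theta):5.2f} T")
# -> about 11 T for the first two angles; the 58 deg entry comes out
#    lower because broadening had not clearly set in by 18 T.
```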
The collapse of all data onto one universal curve not only provides an impressive graphical view of our interpretation of the resistivity kink temperature in twinned samples as a melting temperature, but also indicates that the presence of the twin boundary potentials does not modify this temperature. Later on we will come back to this point, continuing now with the anisotropy dependence of other physical quantities. In clean crystals a relevant quantity related to the melting transition is the kink height at the melting temperature. It has been found to be angle and field independent, and this particular behaviour has attracted theoretical interest. In contrast, we have found that in $`45^{\circ }`$ oriented twinned crystals the kink height is angle and field dependent, an observation already reported in ref. . Since this quantity is related to the occurrence of a first order phase transition which follows the anisotropy, we expect the kink height also to scale with it. In Fig. 8 we show such scaling. We have plotted the kink height measured at fixed angles and increasing the magnetic field (open symbols), together with measurements of the same quantity but at a fixed applied field (6T) and reducing the angle $`\theta `$ (full symbols). Clearly all data collapse onto a universal curve. The behaviour reported by Langan et al. is distinctly different. The reason for the lack of scaling in their data is related to the fact that their measurements at $`\theta =5^{\circ }`$ were taken in an angular region where the dissipation at the transition is greatly reduced due to the effect of the twin planes on the vortex dynamics. This reduction is easily seen in $`V(\theta )`$ measurements where a dip in the dissipation occurs for $`\theta \approx 10^{\circ }`$, an angle usually identified with $`\theta _d`$. One may wonder if this change in the vortex dynamics is a consequence of a corresponding change in the thermodynamic nature of the transition for $`\theta <\theta _d`$. This is certainly true when the Bose-Glass phase is reached, since it has been demonstrated that in this case the transition is second order. However, this phase exists below $`\theta _{BG}\approx 2^{\circ }`$, leaving a rather wide angular region ($`\theta _{BG}<\theta <\theta _d`$) where the full kink height develops (see e.g. ref. ). One interesting possibility is that this angular region comprises a reentrant liquid phase separating the Bose-Glass from the vortex solid, in a similar way to what has been predicted to occur near $`H_{c1}`$ in clean samples. In this case the resistive transitions below the developing kink should be linear due to an increased viscosity of the reentrant liquid phase as the temperature is lowered. Within this picture $`\theta _d`$ would be the angle below which the reentrant phase starts to develop. Another possibility is that the angular region between $`\theta _{BG}`$ and $`\theta _d`$ is governed by critical fluctuations of the Bose-Glass phase associated to the new thermodynamic variable $`H_{\perp }=Hsin(\theta )`$. If this were the case the so-called depinning angle should be interpreted as the critical angle above which the first order solid-liquid transition sets in. Although a detailed analysis of the nature of the solid-liquid transition in this angular region is out of the scope of this paper, further investigation is underway to elucidate this interesting issue. In the following, we would like to comment on a few important points concerning the $`H-T`$ phase diagram shown in Fig. 7.
The first one is related to the value of the critical field $`H^{*}`$ compared to that of the untwinned sample. It is well known that in untwinned crystals this field does not have a universal value. Its magnitude depends on the amount of point-like disorder present in each crystal: the larger the disorder, the lower the critical field. The results in Figure 7, which show $`H^{*}\approx H_S^{*}`$, may therefore suggest that in tilted magnetic fields the presence of twin boundaries does not increase the amount of point-like disorder. This suggestion seems to be corroborated by our measurements in other oriented twinned crystals and by the results shown in ref. , with a value of $`H^{*}\approx 11T`$, similar to that obtained in our crystals. The second remark is more fundamental, since it is related to the nature of the solid and liquid vortex phases in the twinned crystal. In a recent paper Sasagawa et al. suggested that the first order solid-liquid phase transition in the vortex system of high temperature superconductors, including the less anisotropic YBa2Cu3O7, corresponds to a vortex sublimation rather than a line melting. Within this scenario the vortex velocity correlation length in the field direction should vanish at the melting temperature because the vortex liquid phase is formed by uncorrelated pancakes. As already mentioned in the introduction, Morré et al. have shown, using the flux transformer contact configuration, that the vortex liquid in tilted magnetic fields maintains the vortex velocity correlation in the field direction even at rather large angles off the twin planes. Since we have concluded that the transition in tilted fields is indeed first order, their results imply that the solid transforms into a vortex liquid of correlated lines, contrary to what happens in clean samples. This has two important implications: first, the sublimation scenario proposed by Sasagawa et al. does not hold for the melting transition in twinned crystals for tilted magnetic fields, and second, although the twin boundary potentials do not affect the melting temperature, they play an important role in building up the vortex velocity correlation in the field direction. This last point indicates that the degree of correlation in the vortex liquid does not determine the nature of the thermodynamic transition. Therefore one may conclude that the liquid phase in twinned and untwinned samples (in tilted magnetic fields) is essentially the same but with different dynamics due to the different vortex velocity correlation. On the other hand, it could also be possible that the nature of the liquid phase differs from that in untwinned samples. We speculate that for twinned crystals in tilted magnetic fields, the entangled vortex liquid might be formed by stair-like correlated lines which are stabilized by the twin boundary potential. Within this picture, the solid might have a complicated ordered structure also formed by stair-like vortices. ## IV Conclusion We have shown that the vortex solid-liquid transition in twinned YBa2Cu3O7 crystals for magnetic fields tilted away from the planar defect direction is first order. This thermodynamic transition is reflected in the transport properties as a sharp kink in the $`R(T)`$ curves at the melting temperature $`T_m(H,\theta )`$. The resistive transitions are hysteretic below a critical field that scales with the anisotropy of the material, with the hysteresis indicating a superheating of the solid phase.
The correlated vortex liquid phase near the melting temperature could probably be formed by stair-like entangled lines that maintain their vortex velocity correlation in the field direction due to the effect of the twin boundary potential. The structure of the vortex solid is unknown. ## V Acknowledgments We acknowledge stimulating discussions and comments from F. De la Cruz. We thank J. Luzuriaga for a critical reading of the manuscript. This work is partially supported by ANPCyT, Argentina, PICT 97 No.03-00061-01116, and CONICET PIP 4207. E.O. and G.N. acknowledge financial support from Consejo Nacional de Investigaciones Científicas y Técnicas.
# The D/H Ratio in Interstellar Gas Towards G191-B2B (Based on observations done with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy (AURA), Inc. under NASA contract NAS5-26555) ## 1 Introduction The best post-IUE measurement of the D/H ratio in the LISM within $`\approx `$100 pc has been the HST-GHRS study by Linsky et al. (L93 & L95), who found D/H = 1.6 $`\pm `$ 0.2 $`\times 10^{-5}`$ towards Capella, a nearby late-type spectroscopic binary system [$`d`$ = 12.5 pc; ($`l`$,$`b`$) = (162.6, $`+`$4.6)]. Subsequent GHRS measurements towards other nearby late-type stars and WDs, reported by various authors, indicate that within measurement uncertainties the D/H ratio in the LISM is constant and the value is consistent with Linsky et al.’s results (Landsman et al. 1996, Piskunov et al. 1997, Dring et al. 1998). Only one result suggests significant variation of the D/H ratio within 69 pc. Using GHRS echelle data, VM98 reported the presence of three velocity components towards the white dwarf G191-B2B, with D/H ratios ranging between 0.9 $`\times 10^{-5}`$ and 1.56 $`\times 10^{-5}`$, implying a variation of the D/H ratio by $`\approx `$30% in the LISM. Towards the more distant star $`\delta `$ Ori A ($`d`$ $`\approx `$ 350 pc), however, Interstellar Medium Absorption Profile Spectrograph (IMAPS) observations by Jenkins et al. (1999) suggest spatial variations of the D/H ratio. Current models of Galactic chemical evolution predict variations in the D/H ratio over length scales of $`\approx `$1 kpc (e.g. Tosi, 1998) but not over length scales as short as 69 pc. Possible implications of the spatial variation of the D/H ratio over such short length scales include a non-primordial source of deuterium production (Mullan & Linsky 1998). We have reinvestigated the question of spatial variation of the D/H ratio towards G191-B2B, using newer HST-Space Telescope Imaging Spectrograph (STIS) data. Compared to the GHRS echelle data, the STIS data have better scattered light corrections and include lines such as Si ii ($`\lambda `$$`\lambda `$1190, 1193, 1260 and 1526) and Fe ii $`\lambda `$1608.5 which were not observed with GHRS. In this Letter, we report the D/H ratio derived from STIS data and the disagreement between the D/H ratios derived from STIS and GHRS data. The most probable reason for this disagreement is the inadequate scattered light corrections available for the GHRS data ($`\mathrm{\S }`$6). An extended analysis of the STIS and GHRS data will be presented elsewhere. ## 2 Observations and data reduction G191-B2B was observed with STIS on 1998 December 17 using the high-resolution E140H (R $`\approx `$ 110,000) mode in the $`\approx `$1140 to 1700Å region. These observations were part of STIS flux calibrations and the entrance aperture used was 0.2 $`\times `$ 0.2 arcsec. The use of this aperture decreases the effective spectral resolution, particularly in the bluer wavelength regions near the Lyman-$`\alpha `$ region (where the telescope PSF halo is more pronounced). The spectra were processed with the IDL-based CALSTIS reduction package developed by the STIS Instrument Development Team (IDT) at Goddard Space Flight Center (Lindler 1999). There was appreciable scattered light in the Lyman-$`\alpha `$ region ($`\approx `$11% of the continuum at 1213Å) which was corrected using an iterative correction algorithm developed by the STIS IDT (Bowers & Lindler 1999, in preparation).
This algorithm models several sources of stray light, including the telescope PSF halo, echelle grating scatter and the detector halo. Archival GHRS echelle data of G191-B2B (observed 1995 July 26 to 28) were reduced using the IDL-based CALHRS routine developed by the GHRS IDT at Goddard. Three individual GHRS FP-SPLIT exposures are combined to obtain the final spectrum containing the D i and H i interstellar lines. The core of the saturated Lyman-$`\alpha `$ absorption dips below zero flux and was corrected by setting the inter-order coefficients $`a`$ and $`b`$ at 0.9 (Cardelli et al. 1993). Near the D i feature, the S/N per data point in the continuum for both GHRS and STIS is $`\approx `$ 20, although GHRS has better wavelength sampling (0.003Å versus 0.005Å for STIS). ## 3 Use of NLTE stellar atmosphere models To disentangle the interstellar D i and H i absorption from the observed profiles, it is essential to use the most physically realistic model for the intrinsic stellar Lyman-$`\alpha `$ profile. G191-B2B [$`d`$ = 68.8 pc; ($`l`$,$`b`$) = (155.9, $`+`$7.1)] (Vauclair et al. 1997) belongs to a class of hot, DA WDs that contain significant amounts of heavy elements such as C, N, O, Si, Fe and Ni in their atmospheres. Lanz et al. (1996) performed NLTE calculations including the effects of line-blanketing from more than 9 $`\times `$ $`10^6`$ atomic transitions (mainly Fe and Ni) and matched the flux level and shape of the EUV spectrum of G191-B2B for the first time. The apparent effective temperature of WDs like G191-B2B is sensitive to assumptions about the photospheric composition (Barstow et al. 1998) and must be taken into account in modeling the stellar Lyman-$`\alpha `$ profile. Barstow and co-workers (in preparation) have refined their stratified line-blanketed NLTE calculations (Barstow et al. 1999) and the best-fit model atmosphere [$`T_{\mathrm{eff}}`$ = 54,000 $`\pm `$ 2000 K and log $`g`$ = 7.5 $`\pm `$ 0.03] is adopted in our analysis to predict the intrinsic WD Lyman-$`\alpha `$ profile and to check for contamination of the interstellar lines by narrow WD absorption lines. The radial velocity of G191-B2B used in our analysis, estimated from STIS data of other WD lines (Bruhweiler et al. 1999, in preparation), is 24.6 $`\pm `$ 0.4 km s$`^{-1}`$ (including gravitational redshift). ## 4 The number of velocity components in the line-of-sight In addition to the interstellar D i and H i absorption lines, the STIS echelle spectra show interstellar absorption due to N i ($`\lambda `$$`\lambda `$1199.5, 1200.2 and 1200.7), C ii $`\lambda `$1334.5, C ii $`\lambda `$1335.7, O i $`\lambda `$1302, Si ii ($`\lambda `$$`\lambda `$1190, 1193, 1260, 1304 and 1526), Si iii $`\lambda `$1206.5, Al ii $`\lambda `$1670.8, S ii $`\lambda `$1259.5 and Fe ii $`\lambda `$1608.5. The interstellar N i $`\lambda `$1200.7, Si ii $`\lambda `$$`\lambda `$1193 & 1304 and Fe ii lines are not contaminated by WD lines. Figure 1 (a, b) shows the profile fits to the N i $`\lambda `$1200.7 and Si ii $`\lambda `$1304 lines, and two distinct velocity components are seen in these uncontaminated lines (all velocities are in the heliocentric frame). One component is at $`\approx `$ 8.6 km s$`^{-1}`$ (hereafter referred to as comp 1). The other component is at 19.3 km s$`^{-1}`$, which is within measurement uncertainties of the projected velocity of the LIC (20.3 km s$`^{-1}`$) in the line-of-sight to G191-B2B (Lallement et al. 1995).
This component is also seen in the Capella data (L93), suggesting both the G191-B2B and Capella sightlines intercept the LIC. Figure 1c shows the STIS and GHRS Si iii 1206.5 Å line utilized by VM98 to determine the number of components in the line-of-sight towards G191-B2B. Our GHRS profile is shifted to lower velocities by $``$ 4 km s<sup>-1</sup> compared to the profile in the top panel of Figure 5 in VM98. Unlike VM98, we are able to obtain an excellent fit to both the STIS and GHRS profiles of Si iii using only these two components. Detailed profile fitting of the other interstellar species confirmed that no more than two components are required (within the constraints imposed by S/N and spectral resolution of the STIS data) to yield acceptable fits. Our analysis of the STIS and GHRS data explicitly assumes the existence of two distinct components. ## 5 Profile fitting of the interstellar D i and H i lines Each component is assumed to be homogeneous and characterized by a column density $`N`$, radial velocity $`v`$ and a line-of-sight velocity dispersion defined by $`b`$ = (2kT/$`m`$ \+ $`\xi `$<sup>2</sup>)<sup>1/2</sup> where $`\xi `$ is the turbulent velocity parameter along the line-of-sight, T is the kinetic temperature and $`m`$ is the ion mass. The D i and H i interstellar lines were fit simultaneously since they are separated by only 0.33Å and the D i absorption is located on the wing of the broad H i absorption. Line profiles were convolved with either the STIS instrumental LSF for the 0.2 $`\times `$ 0.2 arcsec slit given by the STIS Handbook (Sahu 1999) or the two-component Gaussian LSF for GHRS given by Spitzer & Fitzpatrick (1993). The turbulent velocity parameters for the two components were determined by plotting the $`b`$ values for the various atomic species as a function of ion mass $`m`$ and performing a least-squares fit. The best-fit $`\xi `$ value for the LIC component is 1.7 km s<sup>-1</sup> (consistent with L93, L95) while for comp 1, $`\xi `$ is 2.5 km s<sup>-1</sup>. The STIS spectra near the Fe ii (the heaviest ion) absorption have low S/N and the $`\xi `$ value is probably not very accurate. However, the derived column densities are insensitive to the assumed values of $`\xi `$, and the D/H ratios presented here are not affected. For modeling of the Lyman-$`\alpha `$ profile, the velocities of the two components are kept fixed at 8.6 (comp 1) and 19.3 km s<sup>-1</sup> (LIC) and the $`\xi `$ values are fixed at 1.7 (LIC) and 2.5 km s<sup>-1</sup> (comp 1). Two types of profile fits are done for the STIS and GHRS data sets: (1) keeping the value of D/H free in both components (STIS-FREE, GHRS-FREE) and (2) forcing the same value of D/H in both components (STIS-FIXED and GHRS-FIXED). Table 1 lists the results of the profile fitting for the STIS and GHRS data sets. The parameters obtained for the two components are the H i column density, N(H i), the temperature derived from the H i thermal velocity dispersion and the D/H ratio (listed for comp 1 and the LIC component respectively in columns 4 through 9). The uncertainties quoted for the D/H ratios denote the 2$`\sigma `$ (95%) confidence limits obtained using the method of constant $`\chi `$<sup>2</sup> boundaries (Press et al. 1992). Figure 2 (a, b) shows the best-fit models to the STIS and GHRS data for the STIS-FREE and GHRS-FREE cases respectively. 
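The $`b`$-versus-mass analysis described above is simple enough to sketch. In the snippet below the species masses and $`b`$-values are illustrative placeholders (the measured values are not tabulated in this Letter); only the method — a least-squares fit of $`b(m)=(2kT/m+\xi ^2)^{1/2}`$ — follows the text:

```python
# Sketch of the b-value analysis described above: fit b(m) = sqrt(2kT/m + xi^2)
# to per-species Doppler parameters. The b-values below are illustrative
# placeholders, NOT the measured ones.
import numpy as np
from scipy.optimize import curve_fit

AMU = 1.6605e-27   # kg
KB = 1.3807e-23    # J/K

def b_model(m_amu, T, xi):
    """Doppler parameter in km/s for ion mass m (amu), temperature T (K),
    turbulent velocity xi (km/s)."""
    therm = 2.0 * KB * T / (m_amu * AMU)      # (m/s)^2
    return np.sqrt(therm * 1e-6 + xi**2)      # km/s

m = np.array([1.0, 12.0, 14.0, 16.0, 28.0, 56.0])   # H, C, N, O, Si, Fe
b_obs = np.array([10.5, 3.8, 3.6, 3.4, 2.9, 2.6])   # km/s, hypothetical
(T_fit, xi_fit), _ = curve_fit(b_model, m, b_obs, p0=(7000.0, 2.0))
print(f"T = {T_fit:.0f} K, xi = {xi_fit:.2f} km/s")
```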
The total H i column density towards G191-B2B obtained from the STIS data is ∼2.04$`\times `$10<sup>18</sup> cm<sup>-2</sup>, consistent with the value of 2.05$`\times `$10<sup>18</sup> cm<sup>-2</sup> derived from the best-fit parameters to the EUVE data over the wavelength range 100 to 500Å (Barstow et al. 1999). The D/H ratios derived for the two components from the STIS and GHRS data clearly disagree (compare the STIS-FREE and GHRS-FREE cases in Table 1). ## 6 Why do STIS and GHRS data give different values of the D/H ratios? Figure 3 compares the STIS and GHRS spectra in the region of the D i absorption. Note that the D i absorption in the GHRS spectrum is shallower than in the STIS spectrum. The difference in the derived D/H values is unlikely to be due to statistical fluctuations. For example, a value of D/H = 1.35 $`\times 10^{-5}`$ in both components is 3$`\sigma `$ above the best-fit determination from the GHRS-FIXED fit, and 3$`\sigma `$ below the best-fit determination from the STIS-FIXED fit. Another unlikely possibility for the difference in the two data sets is time variability in the observed profile near the D i feature, perhaps due to a stellar wind. While Barstow et al. (1999) do suggest the presence of a weak stellar wind in G191-B2B to maintain the stratification of the Fe abundances, they point out that the wind must be less than $`10^{-16}`$ M<sub>⊙</sub>/yr to avoid elimination of the heavy elements in the photosphere. Such a weak wind would not be detectable, even in Lyman-$`\alpha `$. The most probable cause is a systematic error in one of the data sets, and two lines of evidence suggest that this error is more likely to be in the GHRS data. First, whereas the $`\chi ^2`$/$`\nu `$ value of 1.065 for the model fit to the STIS data indicates a 16% probability that the model is correct and that the uncertainties are correctly estimated, the $`\chi ^2`$/$`\nu `$ value of 1.200 for the GHRS data indicates only a 0.01% probability of this being true. Second, when the D/H ratio is kept fixed in both components (STIS-FIXED and GHRS-FIXED), the D/H ratio derived from the STIS data (1.71$`{}_{-0.24}^{+0.32}`$$`\times `$10<sup>-5</sup>) is consistent with the value of 1.5$`\pm `$0.1$`\times `$10<sup>-5</sup> determined for the LIC by Linsky (1998). The corresponding value derived from the GHRS data (1.17$`{}_{-0.11}^{+0.12}`$$`\times `$10<sup>-5</sup>) is not consistent with observed LIC values. Fixed-pattern noise or wavelength drifts during FP-SPLIT subexposures in the GHRS data set could result in a shallower D i absorption. However, the observed scatter in the flux level among the 144 subexposures (comprising the three FP-SPLIT GHRS exposures) shows good agreement with the errors predicted by the CALHRS routine, making this possibility unlikely. We believe the most probable cause for the difference between the two data sets around the D i absorption is the characterization of the background due to scattered light in the GHRS and STIS spectrographs. The D i line, located on the wing of the H i profile, almost reaches zero flux in the core and is sensitive to the background subtraction procedures employed. The two-dimensional MAMA detectors and smaller pixel sizes on STIS allow a better estimate of the wavelength and spatial dependence of scattered light than the one-dimensional GHRS digicon science diodes.
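The goodness-of-fit probabilities quoted above follow from the $`\chi ^2`$ survival function. A minimal sketch; since the number of degrees of freedom $`\nu `$ is not quoted in the text (and presumably differs between the two data sets), the value below is purely an assumption for illustration:

```python
# Sketch: probability of exceeding a given reduced chi^2, P(chi^2 >= nu*chi2red).
# The text does not quote the number of degrees of freedom nu; nu = 500 here
# is an illustrative assumption, not the actual value.
from scipy.stats import chi2

nu = 500                       # assumed degrees of freedom (hypothetical)
for chi2red in (1.065, 1.200):
    p = chi2.sf(chi2red * nu, nu)
    print(f"chi2/nu = {chi2red}: P = {p:.2%}")
```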
After the scattered light correction was applied to the STIS data, the residual flux in the core of the saturated H i profile is less than 1% of the continuum flux at 1213 Å. The GHRS observations were done with the default STEP-PATT option, where the background is measured with the science diodes for only 6% of the total exposure time (Soderblom 1995). Due to the low S/N of this background measurement, only a low-order polynomial fit can be made to the background variations (Cardelli et al. 1990). Use of the standard values of the four echelle scatter correction coefficients recommended by Cardelli et al. (1993) yields a significant ($``$ 8% of the continuum flux at 1213 Å) oversubtraction in the core of the saturated H i profile (see Fig 1, VM98). We corrected for this by multiplying the background by 0.9 prior to subtraction ($`\mathrm{\S }`$2), but if there are significant variations in the background with wavelength, this ad hoc correction is inadequate. Most previous D/H ratios derived with GHRS have used late-type stars as the background sources and these emission-line sources should have a much less significant scattered light problem. Given the importance of the question of spatial variations of the D/H ratio in the LISM, future STIS observations of G191-B2B with a smaller aperture (to achieve higher wavelength resolution), would be valuable. This research was supported by a GTO grant to the STIS IDT. MAB acknowledges support from PPARC, UK. We are grateful to Jeff Linsky, the referee, for insightful and helpful comments.
# Electron-Phonon Coupling Deduced from Phonon Line Shapes ## Abstract We investigate the Fano-type line shape of the Ba mode of $`\mathrm{Y}_{1-\mathrm{x}}\mathrm{Ca}_\mathrm{x}\mathrm{Ba}_2\mathrm{Cu}_3\mathrm{O}_{6+\mathrm{y}}`$ films observed in Raman spectra with $`\mathrm{A}_{1\mathrm{g}}`$ symmetry. The line shape is described with an extended Fano formula that allows us to obtain the bare phonon parameters and the self-energy effects based on a phenomenological description of the real and imaginary part of the low-energy electronic response. It turns out that the phonon intensity originates almost entirely from a coupling to this electronic response, with negligible contributions from interband (high-energy) electronic excitations. In the normal state we obtain a measure of the electron-phonon coupling via the mass-enhancement factor $`\lambda `$. We find $`\overline{\lambda }=6.8\pm 0.5`$ % around optimum doping and only weak changes of the self-energy in the superconducting state. With increased disorder at the Ba site we find a decreased intensity of the Ba mode, which we can relate to a decreased electron-phonon coupling. Fano-type phonon line shapes and continua of electronic excitations are rather common observations in Raman experiments of doped high-temperature superconductors. Using extended Fano models like those presented by Chen et al. and Devereaux et al., the self-energy contributions to the bare phonon parameters as a consequence of the electron-phonon interaction can in principle be obtained. Moreover, knowledge of the self-energy effects allows one to obtain the mass-enhancement factor $`\lambda `$ as a measure of the electron-phonon coupling strength. This, however, requires a simultaneous description of the real and imaginary part of the electronic response function $`\chi ^e(\omega )=R^e(\omega )+i\varrho ^e(\omega )`$. Such a description has recently been presented by us and applied to the $`\mathrm{B}_{1\mathrm{g}}`$ Raman-active phonon of $`\mathrm{Y}_{1-\mathrm{x}}(\mathrm{Pr},\mathrm{Ca})_\mathrm{x}\mathrm{Ba}_2\mathrm{Cu}_3\mathrm{O}_7`$ \[$`\mathrm{Y}_{1-\mathrm{x}}(\mathrm{Pr},\mathrm{Ca})_\mathrm{x}`$-123\] films. Here, we will use our description in order to investigate the Fano-type line shape of the Ba mode observed in $`\mathrm{A}_{1\mathrm{g}}`$ Raman spectra of $`\mathrm{Y}_{1-\mathrm{x}}\mathrm{Ca}_\mathrm{x}`$-123-O<sub>6+y</sub> films near optimum doping. This mode exhibits a pronounced asymmetry indicative of a strong electron-phonon coupling, which is in fact obtained with our description. We study epitaxial $`\mathrm{Y}_{1-\mathrm{x}}\mathrm{Ca}_\mathrm{x}`$-123-O<sub>6+y</sub> films grown by pulsed laser deposition on SrTiO<sub>3</sub>(100) substrates in a process described elsewhere. Standard films (x=0; y=1) are slightly overdoped; increased (decreased) doping levels are obtained by Ca substitution (oxygen reduction). Some properties of the films are given in Table I. Raman spectra have been taken using the Ar<sup>+</sup> laser line at 458 nm in a setup described elsewhere with the spectral resolution (HWHM) set to 3 cm<sup>-1</sup>. They have been corrected for the spectral response of spectrometer and detector and are normalized such that the $`\mathrm{B}_{1\mathrm{g}}`$ spectrum of the film #O4 at 18 K approaches unity above 650 cm<sup>-1</sup>. $`\mathrm{A}_{1\mathrm{g}}`$ spectra are obtained by subtracting the $`\mathrm{B}_{2\mathrm{g}}`$ data from those measured in $`z(x^{\prime }x^{\prime })\overline{z}`$ geometry.
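Before turning to the line-shape analysis below, it may help to sketch its two numerical ingredients: a Fano-type interference profile, and a Kramers-Kronig (principal-value) transform of a model background $`\varrho (\omega )\mathrm{tanh}(\omega /\omega _T)`$ of the kind used later. All parameter values here are illustrative, not fitted values from this work:

```python
# Sketch: a conventional Fano profile, and a crude principal-value
# Kramers-Kronig transform of a model electronic background ~ tanh(w/w_T).
# All parameters are illustrative, not values from the text.
import numpy as np

def fano(eps, q):
    """Conventional Fano line shape (q + eps)^2 / (1 + eps^2)."""
    return (q + eps) ** 2 / (1.0 + eps ** 2)

def kk_real_part(w, rho):
    """R(w_i) = (1/pi) PV int rho(w')/(w' - w_i) dw' on a uniform grid,
    skipping the singular point (crude principal value)."""
    dw = w[1] - w[0]
    R = np.empty_like(rho)
    for i in range(len(w)):
        mask = np.arange(len(w)) != i
        R[i] = dw / np.pi * np.sum(rho[mask] / (w[mask] - w[i]))
    return R

print("Fano profile at eps = -1, 0, 1 for q = 3:",
      fano(np.array([-1.0, 0.0, 1.0]), 3.0))

# odd-extended grid so that rho(-w) = -rho(w); the finite grid edge acts
# as the high-frequency cutoff, as in the analysis below
w = np.linspace(-2000.0, 2000.0, 4001)     # cm^-1
rho = np.tanh(w / 300.0)                   # model background, w_T = 300 cm^-1
R = kk_real_part(w, rho)
print("R at w = 120 cm^-1:", R[np.argmin(np.abs(w - 120.0))])
```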
All temperatures are spot temperatures with typical heatings of 2 K. In order to describe the Fano-type line shape of the Ba mode in $`\mathrm{A}_{1\mathrm{g}}`$ symmetry we subdivide the Raman efficiency $`I_0(\omega )`$ into the electronic response and an electron-phonon interference term: $`I_0(\omega )=\varrho _{}(\omega )+{\displaystyle \frac{C}{\gamma (\omega )\left[1+ϵ^2(\omega )\right]}}\left\{\left[{\displaystyle \frac{R_{tot}(\omega )}{C}}\right]^2-2ϵ(\omega ){\displaystyle \frac{R_{tot}(\omega )}{C}}{\displaystyle \frac{\varrho _{}(\omega )}{C}}-\left[{\displaystyle \frac{\varrho _{}(\omega )}{C}}\right]^2\right\}.`$ (1) The constant $`C=A\gamma ^2/g^2`$ is a fit parameter for the intensity, where $`\gamma `$ represents the symmetry element of the electron-photon vertex projected out by the incoming and outgoing polarization vectors, and $`g`$ is the lowest order expansion coefficient of the electron-phonon vertex describing the coupling to non-resonant intraband electronic excitations. $`\varrho _{}(\omega )=Cg^2\varrho ^e(\omega )`$ and $`R_{}(\omega )=Cg^2R^e(\omega )`$ are the measured electronic response and the real part of the electronic response function, which are connected by Kramers-Kronig relations. $`R_{tot}(\omega )=R_{}(\omega )+R_0`$ with $`R_0=Cg(g_{pp}/\gamma )`$, where $`g_{pp}`$ is a constant which represents an abbreviated “photon-phonon” vertex that describes the coupling to resonant interband electronic excitations. The renormalized phonon parameters in the above equation are $`\gamma (\omega )=\mathrm{\Gamma }+\varrho _{}(\omega )/C`$ and $`\omega _\nu ^2(\omega )=\omega _p^2-2\omega _pR_{}(\omega )/C`$ with $`ϵ(\omega )=\left[\omega ^2-\omega _\nu ^2(\omega )\right]/[2\omega _p\gamma (\omega )]`$. Note that the interference term in Eq. (1) can be negative, in contrast to the Raman efficiency. The measured electronic response (background) is modeled by two contributions: $`I_{\mathrm{\infty }}\mathrm{tanh}(\omega /\omega _T)`$ and $`I_{red}(\omega ,\omega _{2\mathrm{\Delta }},\mathrm{\Gamma }_{2\mathrm{\Delta }},I_{2\mathrm{\Delta }},I_{supp})`$. The first term models the incoherent background using a hyperbolic tangent, and the second the redistribution below $`T_c`$ using two Lorentzians. For the two contributions analytic expressions of the real part of the electronic response function exist. In the present work, the hyperbolic tangent is cut off at $`\omega _{cut}=8000`$ cm<sup>-1</sup>. $`R_{}(\omega _p)/C`$ increases by typically 10 % when the cutoff is increased to 12000 cm<sup>-1</sup>. With the description according to Eq. (1) a measure of the electron-phonon coupling can be obtained via the mass-enhancement factor $`\lambda `$, defined by $`\lambda \omega _p=2R_{}(\omega _p)/C`$. In reference to the conventional Fano mechanism the total and the bare phonon intensity are $`I_{tot}=\frac{\pi }{C}R_{tot}^2(\omega _p)`$ and $`I_{phon}=\frac{\pi }{C}R_0^2`$. To give an example, Fig. 1 displays the results of the analysis of the $`\mathrm{A}_{1\mathrm{g}}`$ efficiency of the film #Ox4 taken at 18 K. We describe this figure from top to bottom: At the top, the measured efficiency as well as its description (solid line) is displayed. For the description we used interference terms given in Eq. (1) for the Ba and the O(4) mode, Lorentzians for the Cu(2) and the O(2)+O(3) mode, a simple Fano formula for the $`\mathrm{B}_{1\mathrm{g}}`$ phonon, and the two background contributions stated above. In the second trace from the top, the phononic signal, i.e.
the Lorentzians, the Fano profile, and the interference terms are given. Obviously, the interference term of the Ba mode becomes negative in a region above $`120`$ cm<sup>-1</sup>. The electronic response $`\varrho _{}(\omega )`$ that remains after subtraction of the phonons is shown below the phononic signal. It exhibits a $`2\mathrm{\Delta }`$ peak at $`280`$ cm<sup>-1</sup> as well as a monotonically decreasing intensity for $`\omega \to 0`$ and is well described with our background model (solid line). At the bottom of Fig. 1 the real part of the electronic response function $`R_{}(\omega )`$ is shown. In order to obtain $`R_{}(\omega )`$ we have performed a numerical Hilbert transformation of $`\varrho _{}(\omega )`$. For the transformation the measured spectrum is taken as constant for high frequencies up to $`\omega _{cut}`$ and is interpolated to zero intensity at $`\omega =0`$; for negative frequencies the antisymmetry of $`\varrho _{}(\omega )`$ has been used. Evidently, the description of $`R_{}(\omega )`$ used in the fit agrees well with the numerically obtained data. This is important for the determination of the self-energy effects. At 152 K, i.e. considerably above $`T_c`$ and below room temperature, our description of the Ba mode in the film #Ox4 yields $`\omega _p=121.6`$ cm<sup>-1</sup> and $`\lambda =6.3`$ %. Similar values are obtained in the other films, as given in Table I. More specifically, we find a somewhat increasing bare phonon frequency with increasing doping, with a mean value of $`\overline{\omega _p}=121.9\pm 1.4`$ cm<sup>-1</sup> and a mean value of the mass-enhancement factor of $`\overline{\lambda }=6.8\pm 0.5`$ %. The disordered film #Ca1 clearly deviates from the others, exhibiting a low bare phonon frequency compared with its doping value and an almost 50 % smaller mass-enhancement factor. The low frequency is in fact one indication for the presence of disorder at the Ba site, as enlarged $`c`$-axis parameters are expected in this case. In order to look more closely at the peculiarities appearing in the disordered film, we compare the temperature dependencies of the fit parameters of the Ba modes in the disordered film #Ca1 with those of the ordered one #Ca2 in Fig. 2. Besides the bare and renormalized phonon parameters, also the self-energy contributions at $`\omega =\omega _p`$ are depicted. Dashed lines are fits to anharmonic decays. Except for sharpenings of the bare and the renormalized phonon linewidth in the ordered film #Ca2, clear superconductivity-induced changes of the self-energies are not observed. This is in good agreement with earlier results obtained on a Y-123 single crystal. The self-energy contributions are weaker in the disordered film compared to the ordered one. In particular, they differ by a factor of two above $`T_c`$, which is the same value by which the mass-enhancement factors are apart. Notably, the sharpening of the renormalized linewidth in the ordered film cannot be related to a suppression of the measured electronic response. This indicates the presence of an additional decay channel for the phonon. A similar effect has been observed in the case of the $`\mathrm{B}_{1\mathrm{g}}`$ phonon in $`\mathrm{Y}_{1-\mathrm{x}}(\mathrm{Pr},\mathrm{Ca})_\mathrm{x}`$-123 films, where it was suspected that electronic excitations which are not Raman-active may be present in these compounds. In the lowest panel in Fig. 2 the total intensities $`I_{tot}`$ as well as the bare phonon intensities $`I_{phon}`$ are given.
It turns out that the bare phonon intensities are negligibly small in both films, at least for temperatures below 250 K. The same finding is also observed in the other films studied. The total intensities, on the other hand, are 30 % stronger in the ordered film #Ca2 compared to the disordered one. This appears to be related to the stronger mass-enhancement factor in the ordered film, which is further supported when comparing with the factors and intensities of the other films given in Table I. Regarding Table I, one finds an increasing intensity of the Ba mode with decreasing doping in the studied doping regime. This increase is carried by an increasing linewidth, which rises from $`\mathrm{\Gamma }=4`$ cm<sup>-1</sup> in the film #Ca2 up to $`\mathrm{\Gamma }=8`$ cm<sup>-1</sup> in the film #Ox2. At even lower dopings, however, the Ba mode eventually diminishes and is no longer observed in plane-polarized Raman spectra in the parent compound Y-123-O<sub>6</sub>. Isotope experiments of Y-123 have shown that the mode, which we have called the Ba mode so far, is indeed dominated by vibrations of the Ba atom with less than 20 % admixture from the Cu(2) site. This experimental eigenvector has recently been obtained in a linearized-augmented-plane-wave (LAPW) frozen-phonon calculation within a generalized gradient approximation (GGA). Using the LAPW method within the local-density approximation (LDA), Cohen et al. find large non-local contributions to the electron-phonon coupling from the Ba site with $`\lambda \approx 5.4`$ % in the Brillouin zone. The coupling appears to be in good agreement with the results of our extended Fano description of the ordered films. This is somewhat surprising, as the observed background is believed to be a consequence of the strong electronic correlations which are not included in LDA-type calculations. Results of the electron-phonon coupling within the GGA are therefore of interest for a comparison, as that method includes correlation effects. To conclude, we investigate the Ba mode of $`\mathrm{Y}_{1-\mathrm{x}}\mathrm{Ca}_\mathrm{x}`$-123-O<sub>6+y</sub> films observed in Raman spectra with $`\mathrm{A}_{1\mathrm{g}}`$ symmetry. Our Fano-type analysis reveals that this mode is entirely described by a coupling to low-energy electronic excitations for $`T<250`$ K. The absence of this mode in antiferromagnetic Y-123-O<sub>6</sub> might therefore simply be a consequence of the vanishing low-energy electronic response. Our analysis yields mass-enhancement factors which appear to be in agreement with the result of the LAPW method within the LDA. With increasing disorder at the Ba site, the intensity of the Ba mode diminishes as a consequence of a reduced coupling strength. The authors thank D. Manske, U. Merkt, C.T. Rieck and M. Rübhausen for stimulating discussions. S.O. acknowledges a grant of the German Science Foundation via the Graduiertenkolleg “Physik nanostrukturierter Festkörper”.
# Contents ## 1 Decomposition of 10 Dimensions Some physics models have 10 dimensions that are usually decomposed into: 4 spacetime dimensions with local Lorentz $`Spin(1,3)`$ symmetry plus a 6-dimensional compact space related to internal symmetries. A possibly useful alternative decomposition is into: 6 spacetime dimensions with local $`C(1,3)=Spin(2,4)=SU(2,2)`$ Conformal symmetry, plus a 4-dimensional compact Internal Symmetry Space. ### 1.1 6-Dimensional Conformal spacetime Conformal symmetries and some of their physical applications are described in the book of Barut and Raczka . The Conformal group $`C(1,3)`$ of Minkowski spacetime is the group $`SU(2,2)=Spin(2,4)`$. As $`Spin(2,4)`$, the Conformal group acts on a 6-dimensional (2,4)-space that is related to the 6-dimensional $`𝐂P^3`$ space of Penrose twistors . It is reasonable to consider the 6-dimensional Conformal space as the spacetime in the dimensional decomposition of 10-dimensional models because Conformal symmetry is consistent with such physics structures as: Maxwell’s equations of electromagnetism; the quantum theoretical hydrogen atom; the canonical Dirac Lagrangian for massive fermions, as shown by Liu, Ma, and Hou ; gravity derived from the Conformal group using the MacDowell-Mansouri mechanism, as described by Mohapatra ; the Lie Sphere geometry of spacetime correlations; and the Conformal physics model of I. E. Segal . ### 1.2 4-Dimensional Internal Symmetry Space An example of a possibly useful 4-dimensional compact Internal Symmetry Space is complex projective 2-space $`𝐂P^2`$. Since $`𝐂P^2=SU(3)/U(2)`$, it is a natural representation space for $`SU(3)`$. Further, $`U(2)=SU(2)\times U(1)`$ can be represented naturally on $`𝐂P^2=SU(3)/U(2)`$ as a local action. Therefore, all three of the gauge groups of the Standard Model $`SU(3)\times SU(2)\times U(1)`$ can be represented on the 4-dimensional compact Internal Symmetry Space $`𝐂P^2=SU(3)/U(2)`$. The following section lists some examples of physics models that have such 10-dimensional spaces: Superstring theory; the Division Algebra model of Geoffrey Dixon; and the $`D_4`$-$`D_5`$-$`E_6`$-$`E_7`$ physics model. ## 2 Superstrings, Dixon, and D4-D5-E6-E7 ### 2.1 Superstrings The 10-dimensional space of Superstring theory is well known, and described in many references, so I will not try to summarize it here. One particularly current and thorough reference is the 2-volume work of Polchinski . ### 2.2 Geoffrey Dixon’s Division Algebra model Geoffrey Dixon, in his publications and website , considers the real division algebras: the real numbers $`𝐑`$; the complex numbers $`𝐂`$; the quaternions $`𝐐`$; and the octonions $`𝐎`$. Dixon then forms the tensor product $`𝐓=𝐑𝐂𝐐𝐎`$ and considers the 64-real-dimensional space $`𝐓`$. Then Dixon takes the left-adjoint actions $`𝐓_𝐋=𝐂_𝐋𝐐_𝐋𝐎_𝐋`$, and notes that $`𝐓_𝐋`$ is isomorphic to $`𝐂(16)=Cl(0,9)=𝐂Cl(0,8)`$. Then Dixon considers the algebra $`𝐓`$ to be the spinor space of $`𝐓_𝐋`$. Then Dixon forms a matrix algebra $`𝐓_L(2)`$ as the $`2\times 2`$ matrices whose elements are in the left-action adjoint matrix algebra $`𝐓_𝐋`$ and notes that $`𝐓_𝐋(\mathrm{𝟐})`$ is isomorphic to $`𝐂(32)=𝐂Cl(1,9)`$. Dixon describes the matrices $`𝐓_𝐋(\mathrm{𝟐})`$ as having spinor space $`𝐓𝐓`$ and $`𝐂Cl(1,9)`$ as the Dirac algebra of 10-dimensional (1,9)-space. Dixon then describes leptons and quarks in terms of reduction of the Dirac spinors of the 10-dimensional (1,9)-space to the Dirac spinors of a 4-dimensional (1,3)-spacetime.
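The next section's point — that the left and right adjoint actions of $`𝐐`$ are isomorphic but not identical — can be checked directly: left- and right-multiplication by a quaternion act as different $`4\times 4`$ real matrices. A minimal sketch (the representation below is one standard convention, not taken from Dixon's text):

```python
# Sketch: quaternion left- and right-multiplication as 4x4 real matrices,
# illustrating the point made below that Q_L and Q_R are isomorphic but
# not identical, while left and right actions commute with each other.
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions p = (p0,p1,p2,p3), q likewise."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([
        p0*q0 - p1*q1 - p2*q2 - p3*q3,
        p0*q1 + p1*q0 + p2*q3 - p3*q2,
        p0*q2 - p1*q3 + p2*q0 + p3*q1,
        p0*q3 + p1*q2 - p2*q1 + p3*q0,
    ])

def left_matrix(q):
    """Matrix L(q) with L(q) x = q * x."""
    return np.column_stack([qmul(q, e) for e in np.eye(4)])

def right_matrix(q):
    """Matrix R(q) with R(q) x = x * q."""
    return np.column_stack([qmul(e, q) for e in np.eye(4)])

i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
print(np.allclose(left_matrix(i), right_matrix(i)))   # False: Q_L != Q_R
print(np.allclose(left_matrix(i) @ right_matrix(j),
                  right_matrix(j) @ left_matrix(i)))  # True: they commute
```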
The right-action adjoint matrix algebra $`𝐓_𝐑`$ is not the same as the left-action adjoint $`𝐓_𝐋`$, because, although $`𝐂_𝐑=𝐂_𝐋`$ and $`𝐎_𝐑=𝐎_𝐋`$, it is a fact that $`𝐐_𝐑𝐐_𝐋`$ (they are isomorphic but not identical). Since $`𝐐_𝐑=𝐐`$, the part of the matrix algebra $`𝐓_𝐑`$ that differs from $`𝐓_𝐋`$ is just $`𝐐`$, and the different part of the $`2\times 2`$ matrix algebra $`𝐓_R(2)`$ is just the $`2\times 2`$ matrix algebra with quaternion entries $`𝐐(2)`$. In section 6.7 of his book , Dixon shows that commutator closure of the set of traceless $`2\times 2`$ matrices over the quaternions $`𝐐`$, which he denotes by $`sl(2,𝐐)`$, is the Lie algebra of $`Spin(1,5)`$. Since the Lie algebra $`Spin(1,5)`$ is just the Lie algebra of the Conformal group $`C(1,3)=Spin(2,4)=SU(2,2)`$ with a different signature, I conjecture that it might be useful to consider the spacetime part of Dixon’s 10-dimensional (1,9)-space to be the 6-dimensional (1,5)-spacetime of $`Spin(1,5)`$. That would leave a 4-dimensional (0,4)-space to be used as an Internal Symmetry Space. ### 2.3 the D4-D5-E6-E7 model The $`D_5`$ Lie algebra of the $`D_4`$-$`D_5`$-$`E_6`$-$`E_7`$ physics model corresponds (with Conformal signature) to the Lie algebra $`Spin(2,8)`$ of the Clifford algebra $`Cl(2,8)`$ whose vector space is 10-dimensional. As the $`D_4`$-$`D_5`$-$`E_6`$-$`E_7`$ physics model is described on the web , I will not try to summarize it here. ## 3 Acknowledgements The idea of 6-dimensional spacetime with Conformal symmetry was motivated by the works of I. E. Segal and by e-mail conversations with Robert Neil Boyd. The idea of 4-dimensional Internal Symmetry Space was motivated by Cayley calibrations of octonions and by e-mail conversations with Matti Pitkanen.
# Broken Symmetries in the Reconstruction of $`\nu =1`$ Quantum Hall Edges (NORDITA-1999/36 CM) S.M. Reimann, M. Koskinen, S. Viefers<sup>(∗)</sup>, M. Manninen and B. Mottelson<sup>(∗)</sup> University of Jyväskylä, PO Box 35, FIN-40351 Jyväskylä <sup>(∗)</sup> NORDITA, Blegdamsvej 17, DK-2100 Copenhagen Abstract > Spin-polarized reconstruction of the $`\nu =1`$ quantum Hall edge is accompanied by a spatial modulation of the charge density along the edge. We find that this is also the case for finite quantum Hall droplets: current spin density functional calculations show that the so-called Chamon-Wen edge forms a ring of apparently localized electrons around the maximum density droplet (MDD). The boundaries of these different phases qualitatively agree with recent experiments. For very soft confinement, Chern-Simons Ginzburg-Landau theory indicates formation of a non-translational invariant edge with vortices (holes) trapped in the edge region. > > PACS 73.20.Dx, 73.61.-r, 85.30.Vw, 73.40.Hm Introduction Edge states in the quantum Hall regime have been subject to extensive study in recent years. In particular, much interest has focused on how the edge may reconstruct as the confining potential strength is varied (see Refs. therein). Various theoretical approaches, including Hartree-Fock methods, density functional theory, composite fermion models and effective (mean field) theories, have been used to examine both small electron droplets (quantum dots) and large quantum Hall systems, with and without spin. In particular, many authors have been interested in edge reconstruction of ferromagnetic quantum Hall states, including $`\nu =1`$ and simple fractional (Laughlin) fillings. Softening of the confining edge potential allows charge to move outward, and the edge may reconstruct. How this happens, and whether or not the reconstruction involves spin textures, depends on the relative strength of the electron-electron interactions and the Zeeman energy, and on the steepness of the confining potential. Much work has been based on Hartree-Fock techniques. In 1994, Chamon and Wen found that the sharp $`\nu =1`$ edge of large systems or quantum dots may undergo a polarized reconstruction to a “stripe phase”, in which a lump of electrons becomes separated at a distance $`2l_B`$ away from the original edge ($`l_B=\sqrt{\hbar /eB}`$). This reconstructed state is translation invariant along the edge. Using an effective sigma model and Hartree-Fock techniques, Karlhede et al. then showed that Chamon and Wen’s polarized reconstruction may be preempted by edge spin textures if the Zeeman gap is sufficiently small. Similar results were obtained by Oaknin et al. for finite quantum Hall droplets. Spin textures are configurations of the spin field in which the spins tilt away from their bulk direction on going across the edge; on going along the edge, they precess about the direction of the external field with some wave vector $`k`$. The edge textures possess a topological density, which can be shown to be proportional to the electron density. Thus, one may say that tilting spins moves charge, which is why edge spin textures represent a mechanism for edge reconstruction. Later it turned out that the Chamon-Wen edge is, in fact, unstable: a polarized reconstruction with a modulated charge density along the edge is always lower in energy than the translation invariant Chamon-Wen edge.
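The statement above that the edge textures carry a topological density proportional to the electron density can be made explicit. In the usual convention (the normalization below is the standard skyrmion one, assumed here rather than taken from this text), the excess charge density at filling $`\nu `$ is tied to the Pontryagin density of the unit spin field $`\mathbf{n}(\mathbf{r})`$:

$$\delta \rho (\mathbf{r})=-\nu \,q_{\mathrm{top}}(\mathbf{r}),\qquad q_{\mathrm{top}}(\mathbf{r})=\frac{1}{8\pi }\,\epsilon ^{ij}\,\mathbf{n}\cdot \left(\partial _i\mathbf{n}\times \partial _j\mathbf{n}\right),$$

so a texture with non-zero winding necessarily carries charge — the precise sense in which “tilting spins moves charge”.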
Numerical (Hartree-Fock) studies of the ground state, together with an analysis of the softening of low-energy edge modes at weak confining potentials, have resulted in a phase diagram, giving the following picture of the $`\nu =1`$ edge: For very steep confining potentials, the edge is sharp and fully polarized. Upon softening of the confining potential, the edge will either reconstruct into a spin textured state with a translation invariant charge density along the edge (for small Zeeman gaps) or into a polarized charge density wave edge (for large Zeeman gaps). For even softer confining potentials and sufficiently small Zeeman gaps, a combination of charge modulation along the edge and spin textures may occur. Broken-symmetry edge states in quantum dots The above-mentioned phases of edge reconstruction can also occur in finite quantum Hall systems such as quantum dots. The spin-textured edge exists only for sufficiently smooth confinement and small enough Zeeman coupling. We restrict the following discussion to the spin-polarized regime. In a strong enough magnetic field, the electrons fill the lowest Landau level: the so-called maximum density droplet (MDD) is formed, in which the electrons occupy adjacent orbitals with consecutive angular momentum. The MDD is the finite-size analogue of the bulk $`\nu =1`$ quantum Hall state, with an approximately constant density at its maximum value $`(2\pi l_B^2)^{-1}`$. Increasing the magnetic field effectively compresses the electron droplet. At a certain field strength, the dense arrangement of electrons costs too much Coulomb energy. The droplet then takes advantage of moving electrons from lower to higher angular momentum states and re-distributes its density. This, however, may occur together with a breaking of the rotational symmetry in the internal coordinates of the many-body wave function. The self-consistent mean-field solution can show such intrinsic symmetry breaking. The latter implies the occurrence of a rotational band, which can be obtained by projection. For filling factors around $`\nu =1`$ we apply current spin density functional theory (CSDFT) to calculate the ground-state densities of $`N`$ parabolically confined electrons, avoiding any spatial symmetry restrictions of the solutions. For the technical details of the calculations, we refer to . An example of the edge reconstruction in finite quantum Hall droplets is shown on the left of Fig. 1 (see next page) for $`N=42`$ electrons. In the ground state the MDD is stable up to a field of about $`2.6`$ T. At about $`2.7`$ T, reconstruction has taken place: at a distance of $`2l_B`$ from the remaining inner (smaller) MDD, a ring of separate lumps of charge density is formed, with each lump containing one electron and having a radius somewhat larger than the magnetic length $`l_B`$. Goldmann and Renn recently suggested crystallized edge states which appear similar to the reconstructed edges within CSDFT. For still higher fields, the sequential formation of ring-like edges continues until the whole droplet is fully reconstructed. The apparent localization at the edge is accompanied by a narrowing of the corresponding band of single-particle energy levels. The existence of the inner MDD surrounded by the broken-symmetry edge opens up the possibility of observing rotational spectra of the edge. We next study the formation of the MDD and its reconstruction systematically, varying both particle number $`N`$ and magnetic field $`B`$.
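Before turning to that systematic study, note that the MDD density profile described above can be written down explicitly. The minimal sketch below (symmetric gauge, lowest Landau level — an assumption consistent with, but not spelled out in, the text) shows the flat interior at $`(2\pi l_B^2)^{-1}`$ and the edge near $`r\sqrt{2N}l_B`$:

```python
# Sketch: radial density of the maximum density droplet (MDD), built from
# the N lowest-angular-momentum orbitals of the lowest Landau level in the
# symmetric gauge. Lengths in units of l_B; density in units of
# (2*pi*l_B^2)^-1.
import numpy as np
from math import lgamma

def mdd_density(r, N):
    """rho(r) = sum_{m=0}^{N-1} (r^2/2)^m exp(-r^2/2) / m!  (dimensionless)."""
    x = r ** 2 / 2.0
    m = np.arange(N)
    # evaluate terms in log space to stay stable at larger m
    logterms = m[:, None] * np.log(np.maximum(x[None, :], 1e-300)) \
               - x[None, :] - np.array([lgamma(k + 1) for k in m])[:, None]
    return np.exp(logterms).sum(axis=0)

r = np.linspace(0.0, 12.0, 241)
rho = mdd_density(r, N=42)
print("central density:", rho[0])    # ~1, i.e. (2 pi l_B^2)^-1
print("density at r=12:", rho[-1])   # ~0, outside the edge at r ~ sqrt(2N)
```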
In this systematic study, for fixed $`N`$ we keep the average electron density constant. Changing the field $`B`$ has a similar effect on the reconstruction as varying the softness of the external confinement: a higher field compresses the droplet. At constant oscillator strength and fixed particle number, the confinement is thus effectively weaker at larger field. We obtain a phase diagram as a function of the number $`N`$ of confined electrons and the field $`B`$, which is schematically shown in Fig. 1. (For more details see ). With increasing $`N`$, the polarization line which separates the fully polarized MDD states from the unpolarized states approaches the reconstruction line. The latter separates the MDD regime from the Chamon-Wen (CW) edge formation. This is schematically indicated by the dashed lines in Fig. 1. Note that the shapes of these phase boundaries differ from the results of Ferconi and Vignale, as they used a fixed confinement strength for different dot sizes. In recent experiments a phase diagram was obtained from addition energy spectra measured as a function of magnetic field. The phase boundaries qualitatively agree with the results obtained from the CSDFT calculations, if the average electron density is kept constant. Its value determines the magnetic field strength at which the phase transitions occur: increasing the density shifts the phase boundaries to higher $`B`$-values. Edge reconstruction within CSGL theory Turning to filling fractions $`\nu \le 1`$, we now study the infinite, straight quantum Hall edge within the framework of Chern-Simons Ginzburg-Landau (CSGL) theory . This is an effective (mean field) model of the FQHE, based on the concept of “statistical transmutation”: it models the electrons at $`\nu =1/(2m+1)`$ as bosons, each carrying $`2m+1`$ quanta of (“statistical”) flux; in the mean field sense, this statistical field is cancelled by the external magnetic field, making the $`\nu =1/(2m+1)`$ quantum Hall state equivalent to a system of charged bosons in zero magnetic field. This model has proven quite successful in describing bulk properties of the FQHE. The edge can be studied by solving the CSGL field equations in the presence of an external confining potential. Leinaas and Viefers recently showed the existence of edge spin textures in this model for soft enough confining potentials and Zeeman energies smaller than some critical value, in qualitative agreement with previous work. Fig. 2 shows such a solution in the limit where the minority spin density is small. As mentioned, the charge density of the spin textured edge is translation invariant along the edge. The CSGL studies further indicate the possibility of another kind of edge reconstruction, at even softer confining potentials, to a non-translation invariant edge with vortices (holes) trapped in the edge region. Several authors have addressed this type of reconstruction (see references therein). References A. Karlhede and K. Lejnell, Physica E 1, 41 (1997). C. de C. Chamon and X.G. Wen, Phys. Rev. B49, 8227 (1994). A. Karlhede et al., Phys. Rev. Lett. 77, 2061 (1996). J.H. Oaknin et al., Phys. Rev. B 54, 16850 (1996); 57, 6618 (1998). M. Franco and L. Brey, Phys. Rev. B56, 10383 (1997). A. H. McDonald, S.R.E. Yang, M. D. Johnson, Aust. J. Phys. 46, 345 (1993). S.M. Reimann, M. Koskinen, M. Manninen and B. Mottelson, cond-mat/9904067. G. Vignale and M. Rasolt, Phys. Rev. B37, 10685 (1988). E. Goldmann and S. Renn, to be published. H.-M. Müller and S.E. Koonin, Phys. Rev. B54, 14532 (1996). M.
Ferconi and G. Vignale, Phys. Rev. B56, 12108 (1997). T. H. Oosterkamp et al., Phys. Rev. Lett. 82, 2931 (1999). S.C. Zhang et al., Phys. Rev. Lett. 62, 82 (1988); D.H. Lee and C.L. Kane, Phys. Rev. Lett. 64, 1313 (1990). J.M. Leinaas and S. Viefers, Nucl. Phys. B520, 675 (1998).
# Vortex Structure in Abelian-Projected Lattice Gauge Theory ## Abstract We report on a breakdown of both monopole dominance and positivity in abelian-projected lattice Yang-Mills theory. The breakdown is associated with observables involving two units of the abelian charge. We find that the projected lattice has at most a global $`Z_2`$ symmetry in the confined phase, rather than the global U(1) symmetry that might be expected in a dual superconductor or monopole Coulomb gas picture. Implications for monopole and center vortex theories of confinement are discussed. Center vortices can be located on thermalized lattices by the technique of center projection in maximal center gauge, and their effects on gauge-invariant observables such as Wilson loops and topological charge have been clearly seen (e.g. in refs. and in contributions to these Proceedings). A competing theory of confinement is the dual-superconductor/abelian-projection theory, which has been intensively studied on abelian-projected lattices. It is of some interest to ask if there is evidence of vortex structure also on abelian-projected lattices and, if so, whether this structure is consistent with a picture of the vacuum as a Coulomb gas of monopole loops (for a more detailed presentation of this contribution, cf. ). There is already some evidence that center vortices, in the abelian projection, would appear in the form of a monopole-antimonopole chain, with the $`\pm 2\pi `$ monopole flux collimated (at fixed time) in tubelike regions of $`\pm \pi `$ flux. If this is so, then several qualitative predictions follow, which can be tested numerically: * There is $`Z_2`$, rather than $`U(1)`$, magnetic disorder on finite, abelian-projected lattices; * Monopole dominance breaks down for even multiples of abelian charge; * There is strong directionality of field strength around an abelian monopole, in the direction of the vortex. Consider large Wilson loops or Polyakov lines on the abelian-projected lattice, corresponding to $`q`$ units of the abelian electric charge: $`W_q(C)`$ $`=`$ $`<\mathrm{exp}[iq{\displaystyle \oint dx^\mu A_\mu }]>`$ $`P_q`$ $`=`$ $`<\mathrm{exp}[iq{\displaystyle \int dtA_0}]>`$ (1) Collimated $`\pm \pi `$ flux tubes cannot disorder $`q=`$ even Wilson loops and Polyakov lines. If these vortex tubes are the confining objects, then only for $`q=`$ odd would we expect $`P_q=0`$, and an area law falloff for Wilson loops. In consequence, there would be $`Z_2`$, rather than U(1), magnetic disorder/global symmetry in the confined phase. In contrast, in the monopole Coulomb gas or dual superconductor pictures, we would expect all multiples $`q`$ of electric charge to be confined; $`P_q=0`$ for all $`q`$. This is inferred from saddlepoint calculations in $`QED_3`$ and strong-coupling calculations in $`QED_4`$; it also appears to be true for the dual abelian Higgs theory (a model of dual superconductivity), as well as in a simplified treatment of the monopole gas in ref. . The $`Z_2`$ subgroup of U(1) plays no special role in the monopole picture. Center vortices are rather thick objects ($`\sim 1`$ fm), so at, e.g., $`\beta =2.5`$ we would need $`12\times 12`$ $`q=2`$ Wilson loops to see string breaking. This is impractical. The “fat link” technique is untrustworthy in this case, due to the absence of a transfer matrix, and in any case rectangular loops (with $`RT`$) are inadequate; the appropriate operator mixings have to be taken into account.
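As an aside, the observables in eq. (1) are cheap to measure once the abelian link angles are in hand. The sketch below computes $`P_q`$ from timelike link angles on a small lattice; the configuration is synthetic random data standing in for a thermalized abelian-projected ensemble:

```python
# Sketch: measuring q-charge Polyakov lines P_q = < exp(i q sum_t theta_0) >
# from abelian link angles theta_0(x, t). The "configuration" below is
# synthetic random data, not a thermalized abelian-projected ensemble.
import numpy as np

rng = np.random.default_rng(0)
L, T = 12, 3
theta0 = rng.uniform(-np.pi, np.pi, size=(L, L, L, T))  # timelike link angles

def polyakov(theta0, q):
    """Average over spatial sites of exp(i q * sum of timelike link angles)."""
    phase = theta0.sum(axis=-1)       # sum over the T timelike links
    return np.exp(1j * q * phase).mean()

for q in (1, 2):
    print(f"P_{q} =", polyakov(theta0, q))
```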
For the reasons given above, it is best to study $`q=`$ even Polyakov lines, rather than Wilson loops. Figure 1 shows our data, in the confined phase, for $`q=2`$ abelian Polyakov lines on a $`12^3\times 3`$ lattice. The upper line is the monopole dominance (MD) approximation for this quantity, following the method of . We see that $`P_2`$ is finite, negative, and that there is a severe breakdown of the MD approximation in this case. The negative sign is allowed by the absence of reflection positivity in maximal abelian gauge. The finiteness of $`P_2`$ is expected in the center vortex picture, and implies $`Z_2`$ rather than U(1) disorder on the abelian lattice, while the breakdown of the MD approximation indicates that the abelian monopole flux is *not* distributed Coulombically. It is possible to avoid positivity problems by fixing to a spacelike maximal abelian gauge $$R=\sum _x\sum _{k=1}^3\text{Tr}[\sigma _3U_k(x)\sigma _3U_k^{\dagger }(x)]\text{is maximized}$$ (2) What happens in this case is that the loss of positivity is replaced by a breaking of $`90^{\circ }`$ rotation symmetry. Spacelike $`P_2`$ lines remain negative. Timelike $`P_2`$ lines become positive, although much smaller in magnitude, on a hypercubic lattice, than the spacelike lines. Since this is a physical gauge, the result means that $`q=2`$ electric charge is unconfined. It is also interesting to write the link phase angles $`\theta _\mu (x)`$ of the abelian link variables as a sum of the link phase angles $`\theta _\mu ^M(x)`$ in the MD approximation, plus a so-called “photon” contribution $`\theta _\mu ^{ph}(x)\equiv \theta _\mu (x)-\theta _\mu ^M(x)`$. It is known that the photon field has no confinement properties at all ; Polyakov lines constructed from links $`U_\mu =\mathrm{exp}[i\theta _\mu ^{ph}]`$ are finite (also at higher $`q`$), and corresponding Wilson loops have no string tension. Since $`\theta _\mu ^M`$ would appear to carry all the confining properties, a natural conclusion is that the abelian lattice is indeed a monopole Coulomb gas. To see that this conclusion may be mistaken, suppose we *add*, rather than subtract, the MD angles to the abelian angles, i.e. $$\theta _\mu ^{\prime }(x)=\theta _\mu (x)+\theta _\mu ^M(x)=\theta _\mu ^{ph}(x)+2\theta _\mu ^M(x)$$ (3) in effect doubling the strength of the monopole Coulomb field. In the monopole picture, this doubling would be expected to increase the $`q=1`$ string tension, with $`P_1`$ remaining zero. Surprisingly, the opposite occurs; we in fact find that $`P_1`$ is negative in the $`\theta ^{\prime }`$ configurations, with values shown in Table 1. What this indicates is that the “photon” and MD contributions do *not* factorize in Polyakov lines and Wilson loops, contrary to the case in the Villain model. In fact, there is an important and non-perturbative correlation between Polyakov line phases $`\theta ^{ph}`$ and $`\theta ^M`$, with the former breaking the (near) U(1) symmetry of the MD lattice down to an exact $`Z_2`$ symmetry. For example, if one computes the average value of $`\theta ^{ph}`$ for $`\theta ^M`$ in the intervals $`[0,\frac{\pi }{2}]`$ and $`[\frac{\pi }{2},\pi ]`$ ($`\beta =2.1,T=3`$), one finds $$\overline{\theta }^{ph}=\{\begin{array}{cc}\hfill -0.027(4)& \text{for }\theta ^M\in [0,\frac{\pi }{2}]\hfill \\ \hfill +0.027(4)& \text{for }\theta ^M\in [\frac{\pi }{2},\pi ]\hfill \end{array}$$ (4) The question, of course, is what is the origin of this correlation.
From the standpoint of the vortex theory, what is happening is that the Coulombic distribution of $`2\pi `$ monopole flux in the MD approximation is modified, by its correlation with $`\theta ^{ph}`$, into a configuration with an exact $`Z_2`$ remnant symmetry; confining flux has the same magnitude on the abelian projected and MD lattices, but is distributed differently (collimated vs. Coulombic) at large scales. The negative value of $`P_1`$ in the additive $`\theta ^{\prime }`$ configurations can actually be deduced from the negative value of $`P_2`$ on the abelian projected lattice. For this, we refer the interested reader to ref. . Finally, one would like to see the collimation of field strength, in the neighborhood of an abelian monopole, more directly. Here we have extended the original efforts in ref. in two ways: First, in the indirect maximal center gauge, we have verified that there is an almost exact alternation of monopoles with antimonopoles along P-vortex lines, as previously conjectured. In the few exceptional cases, there is a static monopole or antimonopole within one lattice spacing of the P-vortex which, if counted as lying along the P-vortex, would restore the exact alternation. Secondly, we have considered spacelike cubes $`N=1`$–$`4`$ lattice spacings wide, pierced on two faces by a single P-vortex, and containing either one or zero static abelian monopoles. We define $`W_n^M(N,N)`$ as the vev of unprojected Wilson loops, bounding faces of an $`N\times N`$ cube containing one static monopole. The subscript $`n=0,1`$ indicates that the face is pierced ($`n=1`$) or unpierced ($`n=0`$) by a P-vortex line. $`W_n^0(N,N)`$ is the corresponding data for spacelike cubes containing no monopole currents. We then define the fractional deviations $`A_{0,1}^M={\displaystyle \frac{W_0^0(N,N)-W_{0,1}^M(N,N)}{W_0^0(N,N)}}`$ $`A_{0,1}^0={\displaystyle \frac{W_0^0(N,N)-W_{0,1}^0(N,N)}{W_0^0(N,N)}}`$ (5) The result for 4-cubes is shown in Fig. 2. It is clear that the flux is correlated very strongly with the P-vortex direction, and only rather weakly with the presence or absence of a monopole inside the cube. This is what is expected in the center vortex picture. We conclude that the (i) non-confinement of $`q=`$ even abelian electric charge; (ii) breakdown of the monopole dominance approximation; and (iii) highly asymmetric distribution of confining fields around monopoles, are consistent with vortex structure on the abelian lattice, but are probably not compatible with monopole Coulomb gas or dual superconductor pictures. An important point is that charged fields (e.g. off-diagonal gluons) in a confining theory, even if very massive, can have a profound effect on infrared structure. We think it likely that the monopole Coulomb gas and dual-superconductor pictures also break down in the D=3 Georgi-Glashow and the Seiberg-Witten models, respectively (cf. the discussion in refs. ), albeit on a $`q=2`$ string-breaking scale which increases with the mass of the W-bosons.
# Molecular ratchets - verification of the principle of detailed balance and driving them in one direction ## Abstract We argue that the recent experiments of Kelly et al. (Angew. Chem. Int. Ed. Engl. 36, 1866 (1997)) on molecular ratchets, in addition to being in agreement with the second law of thermodynamics, constitute a test of the principle of detailed balance for the ratchet. We suggest new experiments, using an asymmetric ratchet, to further test the principle. We also point out methods, involving a time variation of the temperature, to drive the ratchet in one direction. It was pointed out long ago by Feynman that a microscopic ratchet, in equilibrium with an isothermal heat bath, cannot have a net rotation in any direction - otherwise, the ratchet could be used to extract work from an isothermal system, which is a violation of the second law of thermodynamics. Recently, in a very interesting paper, Kelly et al. reported the synthesis and the study of the rotational motion of a molecular ratchet. They found the rotation of the ratchet to occur with equal likelihood in either direction, and they concluded that this is in agreement with the second law of thermodynamics (see also the comment on this paper by Davis). In the following, we argue that the experiment not only verifies the second law of thermodynamics, but also provides a direct test of the principle of detailed balance. Our argument is based upon the fact that the experiment is equivalent to putting a label on the hydrogens which are opposite the pawl and then probing their dynamics under the rotation of the ratchet. By putting such a label, we are preparing the system in a rather special, non-equilibrium state (see below). As time passes, the probability distribution evolves and would eventually reach equilibrium. Hence the fact that the results of the experiment show no net rotation is surprising! We argue that this results from detailed balance, and hence that in this experiment one is verifying more than the second law - actually the principle of detailed balance. We suggest new experiments, using an asymmetric ratchet, that would further test this conclusion. We also point out methods, involving a time variation of the temperature, that give the ratchet a directional motion. In the experiment, first the spin of the atom H<sub>a</sub> in the molecule is selectively polarized. This means that a population inversion of the spin states of these atoms has been caused. Then, as the internal rotation proceeds, H<sub>a</sub> gets converted into H<sub>b</sub> or H<sub>c</sub> depending on the direction in which the rotation happens, resulting in a transfer of the polarization, and the amount of this transfer is measured. We denote the population difference between the up and down states of H<sub>a</sub> at the time $`t`$ by $`\mathrm{\Delta }N_a(t)`$. Its equilibrium value is $`\mathrm{\Delta }N_{a,e}=N_0\frac{1-\mu }{1+\mu }`$, where $`N_0`$ is the total number of molecules and $`\mu =e^{-\mathrm{\Delta }E/(k_BT)}`$, $`\mathrm{\Delta }E`$ being the energy difference between the up and the down spin states. Let $`n_𝒜(t)=\mathrm{\Delta }N_a(t)-\mathrm{\Delta }N_{a,e}`$ denote the deviation of $`\mathrm{\Delta }N_a(t)`$ from its equilibrium value. Its initial value is $`n_𝒜(0)=2N_0\frac{1-\mu }{1+\mu }`$. The molecular ratchet can undergo internal rotation, and the corresponding angle coordinate is denoted by $`\phi `$. It varies in the range $`(-\pi ,\pi )`$.
We divide this range into three regions $`𝒜\equiv (-\pi /3,\pi /3)`$, $`ℬ\equiv (\pi /3,\pi )`$ and $`𝒞\equiv (-\pi ,-\pi /3)`$ (see figure 1). The equilibrium probability distribution $`P_e(\phi )`$ (see below) is shown in figure 2(a). At equilibrium, all three regions are equally likely. When H<sub>a</sub> is selectively spin polarized, one is effectively putting a label on a population $`n_𝒜(0)`$ of the molecules, which have $`\phi `$ in the range $`𝒜`$. The experiment studies the dynamics of internal rotation of these molecules by measuring the amounts $`n_ℬ(t)`$ and $`n_𝒞(t)`$ crossing over to the other regions $`ℬ`$ and $`𝒞`$. The rotational motion may be taken to obey the diffusion equation $$\frac{\partial P(\phi ,t)}{\partial t}=\left\{\frac{\partial }{\partial \phi }V^{\prime }(\phi )+k_BT\frac{\partial ^2}{\partial \phi ^2}\right\}P(\phi ,t)$$ (1) We have absorbed the (unnecessary) constants into our definitions of variables, because of which the “time” $`t`$ now has dimensions of 1/energy. $`V(\phi )`$ is the potential energy for the (internal) rotation. It has an asymmetric form, making the molecule a ratchet. We shall neglect spin relaxation in our analysis. The above equation has an equilibrium state with $`P_e(\phi )=𝒩e^{-\beta V(\phi )}`$, where $`𝒩=1/\int _{-\pi }^\pi d\phi \,e^{-\beta V(\phi )}`$ with $`\beta =1/(k_BT)`$. As $`V(\phi )`$ is periodic with period $`2\pi /3`$, the equilibrium probability distribution too is periodic with the same period. The spin polarization of H<sub>a</sub> corresponds to an initial distribution with the excess population spread only over the region $`𝒜`$ with a probability distribution $`P_e(\phi )`$. That is, $`P(\phi ,0)=3P_e(\phi )`$ if $`-\pi /3<\phi <\pi /3`$ and $`P(\phi ,0)=0`$ otherwise. (The numerical factor 3 ensures normalization; the number density of molecules in the population having an angle $`\phi `$ is $`n_𝒜(0)P(\phi ,0)`$.) To calculate the values of $`n_𝒜(t)`$, $`n_ℬ(t)`$ and $`n_𝒞(t)`$, we need to look at the dynamics of this population. For this, we have to solve equation (1) subject to this initial condition and then calculate $`n_I(t)=\int _Id\phi \,P(\phi ,t)`$, for $`I=𝒜,ℬ,𝒞`$. This initial probability distribution function is shown by the full line in figure 2(b). The initial probability distribution $`P(\phi ,0)`$ is a truncated equilibrium probability function, truncated to zero outside the region $`𝒜`$. The second law and the symmetry of the ratchet require that the amounts passing over to $`ℬ`$ and $`𝒞`$ be the same initially - that is, at $`t=0`$, $`\frac{dn_ℬ(t)}{dt}=\frac{dn_𝒞(t)}{dt}`$. However, as time passes, one expects $`P(\phi ,t)`$ to become a truly non-equilibrium probability distribution (a typical one is shown by the dotted curve of figure 2(b)), and hence one would expect that $`n_ℬ(t)\ne n_𝒞(t)`$ in general, even though experiment shows the two are equal. We now ask why this is so. The solution of equation (1) may be written as $$P(\phi ,t)=\int _{-\pi }^\pi d\phi _1G(\phi ,t;\phi _1,0)P(\phi _1,0)$$ (2) where $`G(\phi ,t;\phi _1,0)`$ is the Green’s function for the differential equation in (1). The principle of detailed balance implies $$G(\phi ,t;\phi _1,0)P_e(\phi _1)=G(\phi _1,t;\phi ,0)P_e(\phi )$$ (3) It is easy to derive this equation starting from equation (1). Equation (2) can be written as $$P(\phi ,t)=3\int _𝒜d\phi _1G(\phi ,t;\phi _1,0)P_e(\phi _1)$$ Now, $`n_ℬ(t)=\int _ℬd\phi \,P(\phi ,t)`$ $`=3\int _ℬd\phi \int _𝒜d\phi _1G(\phi ,t;\phi _1,0)P_e(\phi _1)`$.
Using the detailed balance condition of equation (3) we get $$n_ℬ(t)=3\int _𝒜d\phi \int _ℬd\phi _1G(\phi ,t;\phi _1,0)P_e(\phi _1).$$ (4) As the potential is a periodic function with period $`2\pi /3`$, the propagator and the equilibrium probability distribution too are periodic functions with the same period. Hence we can write $$n_ℬ(t)=3\int _𝒞d\phi \int _𝒜d\phi _1G(\phi ,t;\phi _1,0)P_e(\phi _1)$$ (5) $$=\int _𝒞d\phi \,P(\phi ,t)$$ $$=n_𝒞(t)$$ (6) Thus, though the probability distribution develops into a non-equilibrium one as in figure 2(b), the distribution is rather special and $`n_ℬ(t)=n_𝒞(t)`$ at all times! Further, it is also clear that one can arrive at the same conclusion for any problem for which equations (2) and (3) are valid. Having proved the general result, we ask: how can one overcome this and cause $`n_ℬ(t)\ne n_𝒞(t)`$? Noticing that our arguments made use of the periodicity of the potential $`V(\phi )`$, we conclude that if one had an asymmetric ratchet, like the one in figure 3, the step from equation (4) to (5) would not go through. Hence $`n_ℬ(t)`$ cannot be equal to $`n_𝒞(t)`$, and this should be seen if an experiment similar to that of Kelly et al. is performed. Making the ratchet asymmetric is not difficult - one would have to use a molecule like the one in figure 4. It is also possible to use such a molecule for a more stringent test of the principle of detailed balance. One first polarizes H<sub>a</sub> and measures $`n_ℬ(t)`$, and then polarizes H<sub>b</sub> and measures $`n_𝒜(t)`$ - detailed balance implies that the two have to be equal. A similar test can be done with the molecule of Kelly too (though it has not been done), but an experiment with an asymmetric ratchet would be more interesting. An easy experiment to make the molecule have a net transient motion in one direction is to apply a sudden temperature jump in the experiments of Kelly et al. immediately after spin polarizing H<sub>a</sub>. This should lead to $`n_ℬ(t)\ne n_𝒞(t)`$, which can then be experimentally observed. Finally, it is possible to vary the temperature periodically in time - this would correspond to a Carnot cycle for the molecular ratchet. This will cause the system to settle into a steady state with net rotation in one direction. We have performed model calculations and computer simulations and verified these possibilities. In principle, when ultrasonic waves pass through a liquid containing the molecular ratchet, transfer of energy from the translational motion of the surrounding liquid molecules to the rotational motion of the ratchet can set the ratchet in a steady state with net rotation in one direction. I thank Professors E. Arunan, J. Chandrasekhar, S. Ramakrishnan and S.K. Rangarajan, and A. Chakraborty for interesting discussions. Figure Captions 1. Figure 1: The ratchet and the regions $`𝒜`$, $`ℬ`$ and $`𝒞`$ 2. Figure 2: (a) The equilibrium probability distribution against the angle co-ordinate. (b) The full line shows the initial probability distribution. It develops into a non-equilibrium distribution of the type shown by the dotted line. 3. Figure 3: The asymmetric ratchet. Notice that the teeth are of different sizes. 4. Figure 4: An asymmetric molecular ratchet.
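As a numerical check on the detailed-balance argument above, one can integrate equation (1) directly for a $`2\pi /3`$-periodic potential, start from the truncated equilibrium distribution on $`𝒜`$, and track the populations of $`ℬ`$ and $`𝒞`$. In the sketch below the potential is an illustrative asymmetric-tooth form, not the actual molecular potential; the two populations agree up to discretization error, as the argument demands:

```python
# Sketch: integrate eq. (1) for a 2*pi/3-periodic ratchet potential, start
# from the truncated equilibrium distribution on A = (-pi/3, pi/3), and
# check that the populations crossing into B and C stay equal.
# V(phi) below is an illustrative ratchet shape, not the molecular potential.
import numpy as np

M = 600                                  # grid points on (-pi, pi)
phi = -np.pi + 2 * np.pi * np.arange(M) / M
dphi = 2 * np.pi / M
kT = 1.0
V = 0.5 * (np.sin(3 * phi) + 0.4 * np.sin(6 * phi))  # period 2*pi/3, asymmetric teeth

Pe = np.exp(-V / kT)
Pe /= Pe.sum() * dphi                    # equilibrium distribution
inA = np.abs(phi) < np.pi / 3
P = np.where(inA, 3 * Pe, 0.0)           # truncated equilibrium start

Vp = (np.roll(V, -1) - np.roll(V, 1)) / (2 * dphi)   # V'(phi)

def step(P, dt):
    """One explicit Euler step of dP/dt = d/dphi (V' P) + kT d2P/dphi2."""
    drift = (np.roll(Vp * P, -1) - np.roll(Vp * P, 1)) / (2 * dphi)
    diff = kT * (np.roll(P, -1) - 2 * P + np.roll(P, 1)) / dphi**2
    return P + dt * (drift + diff)

dt = 0.2 * dphi**2 / kT                  # safely below the diffusion limit
for _ in range(20000):
    P = step(P, dt)

nB = P[phi > np.pi / 3].sum() * dphi
nC = P[phi < -np.pi / 3].sum() * dphi
print(nB, nC)                            # equal up to discretization error
```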
## 1 Introduction Surface diffusion is a very important process in many phenomena, in particular in crystal growth. That is why the diffusion of single adatoms on stepped metal surfaces has recently been widely investigated both experimentally and theoretically. Energy barriers for the moves of an adatom on a surface with steps or islands are not easily accessible by experiment, but for many elementary processes they can be calculated at the microscopic level by molecular dynamics. The knowledge of the barriers can then be utilized in the construction of kinetic Monte Carlo models to study growth processes. The diffusion energy barriers have already been calculated for various metals and different surface orientations. For example, in the case of the fcc (111) surface there are calculations for Al , Ag , Au , Cu , Ni , Pt , Ir . In most of these studies semi-empirical potentials were used; their simplicity allows a systematic study of numerous possible processes. Comparable ab initio calculations demand much more computer power, and therefore the number of investigated processes must be considerably reduced. Although recent first-principles calculations indicate that in the case of Pt(111) the semi-empirical potentials may be insufficient, in many other studies they lead to reasonable results, and their application has helped to at least qualitatively understand diffusion energetics and to reveal new processes (see e.g. ). In this paper we study diffusion on the stepped Rh(111) surface. The research was motivated by a recent STM experiment on unstable growth of Rh(111), where coarsening due to a step-edge barrier was observed over three orders of magnitude of the deposited amount. A more recent observation on Pt(111) indicates almost no coarsening over a similar interval of deposited material. Whereas the step-edge barriers on Pt(111) surfaces have been extensively studied (see references in ), results for Rh(111) are not available. We present here a systematic study of energy barriers for inter-layer transport as well as for diffusion along the step edges. ## 2 Method Our simulations were done for finite atomic slabs with a free surface on the top, two atomic layers fixed on the bottom, and periodic boundary conditions in the two directions parallel to the surface. The slab representing the substrate of the (111) surface was 11 layers thick with 448 atoms per layer. We used systems of approximately 5000 atoms consisting of 19 to 44 layers, with 110 to 240 atoms per layer, for diffusion along channels on the vicinal surfaces (311), (211), (331), (221) and (332). The semi-empirical many-body Rosato–Guillope–Legrand (RGL) potential including interactions up to the fifth nearest neighbors was used. For computational details see . The energy barrier for a particular diffusion process was obtained by systematically testing various possible paths of an adatom. The path with the lowest diffusion barrier was chosen to be the optimum one, and the diffusion barrier, $`E_d`$, was calculated as $`E_d=E_{sad}-E_{min}`$, where $`E_{sad}`$ and $`E_{min}`$ are the total energies of the system with the adatom at the saddle point and at the equilibrium adsorption site, respectively. We considered both the jump and exchange processes.
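Schematically, this kind of barrier search is a constrained ("drag") minimization: step a reaction coordinate between two minima, relax the remaining degrees of freedom at each step, and read off the maximum of the resulting energy profile. A toy sketch on an assumed two-dimensional model potential (a caricature of the actual RGL calculation with full atomic relaxation) is:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def U(x, y):
    # assumed model potential: minima at (0.5, 0) and (1.5, 0), saddle at (1, 0)
    return np.cos(2.0*np.pi*x)*(1.0 + 0.5*y**2) + 2.0*y**2

xs = np.linspace(0.5, 1.5, 101)            # drag x between adjacent minima
profile = []
for x in xs:
    # relax the transverse coordinate y at fixed x
    res = minimize_scalar(lambda y, x=x: U(x, y), bounds=(-2.0, 2.0),
                          method="bounded")
    profile.append(res.fun)
profile = np.array(profile)
E_min, E_sad = profile.min(), profile.max()
print(f"E_min = {E_min:.3f}, E_sad = {E_sad:.3f}, E_d = {E_sad - E_min:.3f}")
```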
The minimum energy path for jump diffusion was determined by moving an adatom in small steps between two equilibrium positions, allowing the adatom to relax in the plane perpendicular to the line connecting the two equilibrium positions. The rest of the atoms in the system were allowed to relax in all directions. The energy barrier for the exchange process was determined by moving the edge atom that should be replaced in small steps toward its final position. This final position was one of the neighboring equilibrium sites. The moving atom was allowed to relax in the plane perpendicular to the exchange direction at each step, whereas the other atoms, including the adatom, relaxed freely in all directions. ## 3 Results ### 3.1 Flat surface In our simulation we obtained the energy barrier $`0.15`$ eV for self-diffusion on the flat Rh(111) surface, which is in good agreement with experiments. In the field-ion-microscope (FIM) experiment the barrier $`0.15\pm 0.02`$ eV was found, and recently the value $`0.18\pm 0.06`$ eV was obtained in an STM experiment from the temperature dependence of the island density. The results of molecular statics calculations and experimental values are summarized in Table 1. We also calculated the binding energy of the supported dimer. The value $`E_B=0.57`$ eV is in good agreement with $`0.6\pm 0.4`$ eV obtained in the STM experiment . ### 3.2 Descent to the lower terrace We studied the descent of an adatom to the lower terrace from both types of steps on the (111) surface, i.e., step A with a {100} microfacet and step B with a {111} microfacet (see Fig. 1). We performed calculations for several geometries: straight steps, steps with a kink, and also for a small island of 3 $`\times `$ 3 atoms. For all considered geometries we systematically investigated all possible adatom jumps and pair exchange processes. Our results for straight steps and steps with a kink are summarized in Table 2. The energy barrier for a direct jump from the upper to the lower terrace is 0.73 eV for the straight step A and 0.74 eV for the straight step B. The presence of a kink decreases the barrier for the jump to $`0.57`$ eV on both steps. We can see that the energy barriers for the jumps are always larger than those for the exchange, which are 0.47 eV and 0.39 eV for the A and B step, respectively. In more complex geometries the number of competing processes to be energetically compared increases; e.g., in the case of step B with a kink we consider four types of processes according to which step-edge atom (denoted by r1, r2, r3, or r4) is pushed out (see Fig. 2). We call them exchange next to corner, exchange over kink I, exchange over kink II, and exchange next to kink, respectively. We consider all possible combinations of initial and final positions. For example, in the case of the exchange next to the corner there are three possible processes: $`1\to \mathrm{r1}`$, $`2\to \mathrm{r1}`$, $`3\to \mathrm{r1}`$. In the process $`3\to \mathrm{r1}`$, e.g., the adatom starts in the fcc site labeled by 3 and pushes out the edge atom r1. Two possible directions of motion for the pushed atom r1 are shown schematically in Fig. 2. The lowest barrier for inter-layer transport is the barrier for the two exchange processes near the kink on step B ($`0.24`$ eV), i.e., the Ehrlich-Schwoebel barrier is only $`90`$ meV. The barriers for exchange processes on step A are significantly higher. For a 3 $`\times `$ 3 island, the minimal values were obtained for the exchange of the atom in the middle of the edge (0.43 eV for the A-type edge and 0.24 eV for the B-type edge). We found that for Rh(111), similarly to Pt(111), the barriers for the descent at a small island are significantly lower than for the descent at straight long steps. ### 3.3 Diffusion along the step edges
Fig. 3 shows the energy profile for the diffusion along two edges of a large island. The structure in the middle corresponds to diffusion around the corner formed by the two edges. The angle contained by the edges is 120°. There is a small minimum just at the corner position. The transport between the two edges is asymmetric. We found that the diffusion along the straight step of type A is faster (the barrier is 0.40 eV) than along the step of type B (the barrier is 0.81 eV). This could be attributed to a purely geometrical effect due to the different local geometries along the steps. The adatom diffusing along step B has to pass closer to the topmost atoms of the lower terrace than when it is diffusing along step A (see Fig. 1). There are no available experimental data for diffusion along the steps on the Rh(111) surface. Only one measurement, on the (311) and (331) surfaces, has been published . In order to have some comparison, we calculated the energy barriers for the diffusion along steps on vicinal surfaces with terraces: (211), (311) - terraces with step edges of type A, and (332), (221) and (331) - step edges of type B. Results are summarized in Table 3. The vicinal surfaces are ordered according to the distance between the steps. We can see that there is a clear tendency with decreasing distance between steps: the barrier for diffusion along step A increases as the steps get closer, whereas for step B it decreases. We obtained the barriers $`0.45`$ eV and $`0.78`$ eV for the diffusion along steps on the (311) and (331) surfaces, respectively. Experimental results of FIM measurements are the energy barriers $`E_{311}=0.52`$ eV and $`E_{331}=0.62`$ eV . There is qualitative agreement between the experimental and calculated data: $`E_{311}<E_{331}`$. ## 4 Conclusion Using the RGL potential we calculated the energy barrier for self-diffusion on the flat Rh(111) surface and the binding energy of the supported dimer, which are in good agreement with the experimental data. With the same potential, we systematically studied energy barriers for the descent at straight as well as rough steps on Rh(111). We found that the lowest energy barrier for the descent to the lower terrace is that for the exchange process near a kink on step B. We also calculated barriers for the diffusion along step edges on the Rh(111) surface and along step edges on several vicinal surfaces. We found that the diffusion along step A is faster than along step B, which is in qualitative agreement with the FIM experiment. We observed that these barriers are slightly affected by the step-step interaction. We expect that, due to the rather large barriers for the diffusion along steps, both steps will be rough during growth at lower temperatures and the interlayer transport will prefer step B. At a higher temperature the diffusion along step A starts to be active and the descent on both steps will be possible. However, step B will remain rough and the descent on this step will be easier. In island growth this would imply that B-edges of an island grow faster than A-edges; therefore, B-edges become shorter. However, the number of kinks for the easy descent on a shorter B-edge will be lower. Hence we expect that for a certain interval of temperatures the shape of the growing island will be asymmetric, with longer A steps. This picture seems to be in agreement with the morphologies presented in . Acknowledgment Financial support for this work was provided by the COST project P3.80. Figure captions
Fig. 1: Two types of step edges, A and B, for a large island. The solid line shows the diffusion path along the island edge. The atoms of different layers from the surface to the bulk are shown as large filled circles, large open circles, small open circles, and tiny open circles. Fig. 2: Different exchange processes near a kink site on step B on the Rh(111) surface. The edge atoms undergoing exchange diffusion (r1,…,r4) and the starting positions of an adatom (1,…,9) are shown. The four topmost atomic layers from the surface to the bulk are distinguished by different circle radii (cf. Fig. 1). Fig. 3: Energy of the adatom along the path following the edge of a large island. The path is composed of three sections: along edge A, around the corner, and along edge B. Tables Table 1: Self-diffusion barriers $`E_S`$ (in eV) on the flat Rh(111) surface; SCh - Sutton-Chen potential, LJ - modified Lennard-Jones potential | | Method | Ref. | $`E_S`$ | | --- | --- | --- | --- | | Exp. | FIM | | 0.15 $`\pm `$ 0.02 | | | STM | | 0.18 $`\pm `$ 0.06 | | Theory | LJ | | 0.234 | | | SCh | | 0.106 | | | RGL | present | 0.15 | Table 2: Energy barriers $`E_d`$ (in eV) for descent at steps on Rh(111) | Step | Process | $`E_d`$ | | --- | --- | --- | | A | Jump over step | 0.73 | | | Jump over kink | 0.57 | | | Exchange over step | 0.47 | | | Exchange next to corner ($`3\to \mathrm{r1}`$) | 0.81 | | | Exchange over kink I ($`3\to \mathrm{r2}`$) | 0.47 | | | Exchange over kink II ($`4\to \mathrm{r3}`$) | 1.0 | | | Exchange next to kink ($`9\to \mathrm{r4}`$) | 0.80 | | B | Jump over step | 0.74 | | | Jump over kink | 0.57 | | | Exchange over step | 0.39 | | | Exchange next to corner ($`3\to \mathrm{r1}`$) | 0.24 | | | Exchange over kink I ($`3\to \mathrm{r2}`$) | 0.48 | | | Exchange over kink II ($`7\to \mathrm{r3}`$) | 0.63 | | | Exchange next to kink ($`9\to \mathrm{r4}`$) | 0.24 | Table 3: Calculated activation energy barriers $`E_d`$ (in eV) for diffusion along steps on different surfaces | Step A | | Step B | | | --- | --- | --- | --- | | Surface | $`E_d`$ | Surface | $`E_d`$ | | 111 | 0.40 | 111 | 0.81 | | 211 | 0.41 | 332 | 0.80 | | 311 | 0.45 | 221 | 0.78 | | - | - | 331 | 0.78 |
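To connect the tabulated barriers with the growth scenario sketched in the Conclusion, one can convert them into Arrhenius hopping rates, $`\mathrm{\Gamma }=\nu _0\mathrm{exp}(-E_d/k_BT)`$. The attempt frequency $`\nu _0=10^{12}`$ Hz below is a typical assumed value, not a quantity computed in the paper:

```python
import math

kB = 8.617e-5                 # Boltzmann constant, eV/K
nu0 = 1.0e12                  # assumed attempt frequency, Hz

barriers = {                  # selected values from Tables 1-3, in eV
    "terrace hop on (111)":          0.15,
    "exchange at kink, step B":      0.24,
    "exchange over straight step B": 0.39,
    "along step A on (111)":         0.40,
    "exchange over straight step A": 0.47,
    "along step B on (111)":         0.81,
}

for T in (200.0, 300.0, 500.0):
    print(f"T = {T:.0f} K")
    for name, Ed in barriers.items():
        rate = nu0*math.exp(-Ed/(kB*T))
        print(f"  {name:31s} E_d = {Ed:.2f} eV  rate = {rate:9.3e} /s")
```

At low temperature the along-step-B rate is many orders of magnitude below the along-step-A rate, consistent with the expectation that both edges stay rough at low temperature while edge diffusion along step A activates first on heating.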
# Baryogenesis and Low Energy $`CP`$ Violation ## 1 Baryogenesis in the MSSM The squarks (scalar partners of the quarks) present in the MSSM get contributions to their masses from supersymmetry breaking, as well as from electroweak symmetry breaking via the Higgs mechanism. In particular, $`\stackrel{~}{t}_L`$ and $`\stackrel{~}{t}_R`$, the scalar partners of the top quark, get a large contribution from the Higgs mechanism due to the size of the top quark Yukawa coupling to the Higgs boson. If the supersymmetry breaking mass of the $`\stackrel{~}{t}_R`$ is negligible, and there is no $`\stackrel{~}{t}_L`$–$`\stackrel{~}{t}_R`$ mixing (the $`\stackrel{~}{t}_R`$ is chosen to be light in order to avoid conflicts with the $`\rho `$ parameter if the $`\stackrel{~}{t}_L`$ were light), Eq. (2) gets modified to $$\frac{H(T_0)}{T_0}\simeq \frac{2M_W^3+M_Z^3+2m_t^3}{2m_H^2v}\simeq 3$$ (3) for $`m_t=175`$ GeV. Thus we see that the condition of Eq. (1) can be satisfied and we have a strongly first order phase transition. This simple relation is modified by the presence of supersymmetry breaking masses, $`\stackrel{~}{t}_L`$–$`\stackrel{~}{t}_R`$ mixing, and finite temperature effects. A detailed analysis shows that an electroweak phase transition strong enough to allow baryogenesis is possible if $`m_{\stackrel{~}{t}_R}\lesssim 175`$ GeV and $`m_H\lesssim 115`$ GeV. Moreover, efficient baryogenesis requires rapid interconversion between the particles and their supersymmetric partners. This means that most of the supersymmetric particles, and especially the gauginos (fermionic partners of the gauge bosons), must also have masses of order $`T_0\sim 100`$ GeV, where $`T_0`$ is the critical temperature for the electroweak phase transition. Besides the obvious direct search implications of these light sparticles and the light Higgs boson, the light $`\stackrel{~}{t}_R`$ and charginos also result in large contributions to $`B\overline{B}`$ mixing. This is because the $`b_L\stackrel{~}{t}_R\stackrel{~}{h}`$ coupling, proportional to the top quark mass, removes the possibility of any GIM cancellation of its contribution . The most effective way to generate a particle number asymmetry for some species is to arrange that, during the electroweak phase transition, a $`CP`$ violating space-time dependent phase appears in the mass matrix for that species. If this phase cannot be rotated away at subsequent points by the same unitary transformation, it leads to different propagation probabilities for particles and anti-particles, thus resulting in a particle number asymmetry. The existence of two Higgs fields in the MSSM makes this possible. If $`\mathrm{tan}\beta `$ (the ratio of the expectation values of the two Higgs fields) changes as one traverses the bubble wall separating the symmetric phase from the broken one, particle number asymmetries can be generated, which will be proportional to $`\mathrm{\Delta }\beta `$, the change in $`\beta `$ across the bubble wall . It has been estimated that $`\mathrm{\Delta }\beta \sim m_h^2/m_A^2\sim 0.01`$ for the pseudoscalar Higgs boson mass $`m_A=200`$–$`300`$ GeV . This can actually be turned into an upper bound for $`\mathrm{\Delta }\beta `$ using the relation $`m_{h_+}^2=m_A^2+m_W^2`$, where $`m_{h_+}`$ is the charged Higgs boson mass. Charged Higgs bosons make large positive contributions to the $`b\to s\gamma `$ decay rate. The current experimental value for $`Br(b\to s\gamma )`$ already sets the limit $`m_{h_+}\gtrsim 300`$ GeV at the $`2\sigma `$ level. This then implies $`\mathrm{\Delta }\beta \lesssim 0.01`$ through the relations above.
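As a quick numerical aside (a sketch, not the detailed analysis referred to above), the ratio in Eq. (3) is easy to evaluate; $`M_W`$, $`M_Z`$ and $`v`$ are standard values, and the Higgs mass $`m_H=92`$ GeV is an illustrative choice assumed here:

```python
# Evaluate Eq. (3); all inputs are assumptions of this sketch.
MW, MZ, mt, v = 80.4, 91.2, 175.0, 246.0   # GeV
mH = 92.0                                  # GeV, illustrative
ratio = (2*MW**3 + MZ**3 + 2*mt**3)/(2*mH**2*v)
print(f"H(T0)/T0 ~ {ratio:.2f}")           # ~ 3: a strongly first order transition
```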
Baryogenesis in the MSSM proceeds most efficiently through the generation of higgsino number or axial squark number in the bubble wall, which then diffuses to the symmetric phase. Here, they bias the Standard Model $`B+L`$ violation to produce a net baryon number . In this paper we present the special case of baryogenesis through the production of axial squark number, where the CKM phase responsible for kaon $`CP`$ violation is also directly responsible for baryogenesis . Consider the mass squared matrix for the up-type squarks: $$M_{\stackrel{~}{u}}^2=\left(\begin{array}{cc}M_{\stackrel{~}{u}_{LL}}^2& M_{\stackrel{~}{u}_{LR}}^2\\ M_{\stackrel{~}{u}_{LR}}^{2\dagger }& M_{\stackrel{~}{u}_{RR}}^2\end{array}\right)$$ (4) where $`M_{\stackrel{~}{u}_{LL}}^2`$ $`=`$ $`m_Q^2A_{U_{LL}}+(F,D)\mathrm{terms},`$ $`M_{\stackrel{~}{u}_{RR}}^2`$ $`=`$ $`m_U^2A_{U_{RR}}+(F,D)\mathrm{terms},`$ $`M_{\stackrel{~}{u}_{LR}}^2`$ $`=`$ $`m_Av_2\lambda _UA_{U_{LR}}+\mu v_1\lambda _U.`$ (5) Here $`m_Q`$, $`m_U`$, and $`m_A`$ are supersymmetry breaking masses, $`\lambda _U`$ is the Yukawa coupling matrix for up-type quarks, and the $`A_U`$'s are dimensionless matrices. Concentrating only on the production of $`\stackrel{~}{t}_R`$, and using $`m_{\stackrel{~}{t}_R}=175`$ GeV, $`m_{\stackrel{~}{t}_L}=300`$ GeV, and $`\mathrm{tan}\beta \sim 1`$, we obtain the result $$\frac{n_B}{s}\approx 10^{-8}\frac{\kappa \mathrm{\Delta }\beta }{v_w}\frac{m_A}{T_0}\frac{|\mu |}{T_0}Im[e^{i\varphi _B}A_{U_{LR}}^{\dagger }\lambda _U^{\dagger }\lambda _U]_{(3,3)}$$ (6) $`\kappa `$ is related to the rate of anomalous $`B+L`$ violation, $`\mathrm{\Gamma }_{B+L}=\kappa \alpha _w^4T`$. There is a large uncertainty in its precise value, with current estimates giving $`\kappa =1`$–$`0.03`$ . $`v_w\sim 0.1`$ is the velocity of the wall separating the phase where electroweak symmetry is broken (the Higgs field has an expectation value) from where it is unbroken (the Higgs field has no expectation value). $`\mathrm{\Delta }\beta \lesssim 0.01`$, and $`T_0\sim m_A\sim |\mu |\sim 100`$ GeV. The approximations made in deriving Eq. (6) and their validity are outlined in . If $`\stackrel{~}{t}_L`$ and $`\stackrel{~}{t}_R`$ have very different masses there is a suppression of the baryon asymmetry by $`m_{\stackrel{~}{t}_R}^2/m_{\stackrel{~}{t}_L}^2`$ that is not explicit in their work. Thus the estimate of Eq. (6) would be modified if $`m_{\stackrel{~}{t}_L}\gg 300`$ GeV. Consider the possibility that the supersymmetric parameters $`A_{U_{LR}}`$ and $`\mu `$ are real, with all the $`CP`$ violation being in the quark mass matrix . Notice that $`\lambda _U^{\dagger }\lambda _U`$ in Eq. (6) is Hermitian, hence the phase is on one of the off-diagonal terms. One then requires $`A_{U_{LR}}`$ to have off-diagonal entries in order to move this phase to the (3,3) element of the product $`A_{U_{LR}}^{\dagger }\lambda _U^{\dagger }\lambda _U`$. These large off-diagonal terms in $`A_{U_{LR}}`$ always lead to large $`D\overline{D}`$ mixing due to gluino mediated box diagrams. The magnitude of the mixing is generically about an order of magnitude lower than the current experimental bound $`\mathrm{\Delta }(m_D)<1.3\times 10^{-13}`$ GeV. Further, given the hierarchical structure of the quark masses and mixings, one expects the largest off-diagonal entry in $`\lambda _U^{\dagger }\lambda _U`$ to be $`\theta _C^2\approx 0.04`$. For example the ansatz $`\lambda _U=V_{CKM}^{\dagger }\widehat{\lambda }_UV_{CKM}`$, where $`V_{CKM}`$ is the CKM matrix and $`\widehat{\lambda }_U`$ is the diagonal matrix of up-type Yukawa couplings, can lead to $$Im[A_{U_{LR}}^{\dagger }\lambda _U^{\dagger }\lambda _U]_{(3,3)}=\lambda _t^2|V_{cb}|\mathrm{sin}\gamma $$ (7) for $$A_{U_{LR}}=\left(\begin{array}{ccc}1& 0& 0\\ 0& 1& 1\\ 0& 1& 1\end{array}\right),$$ (8) where $`\gamma \sim 1`$ is the phase in the CKM matrix. Thus, we see that the baryon asymmetry is directly related to the phase responsible for $`CP`$ violation in $`K\overline{K}`$ mixing. We can obtain a large enough baryon asymmetry \[cf. Eq. (6)\] for $`\kappa =1,\mathrm{\Delta }\beta =0.01`$.
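Putting in these numbers gives an order-of-magnitude feel for the estimate; $`\lambda _t\approx 1`$ and $`\mathrm{sin}\gamma \approx 1`$ are additional assumptions of this sketch:

```python
# Order-of-magnitude evaluation of Eqs. (6)-(8); all inputs are the
# illustrative values quoted in the text or assumptions of this sketch.
kappa, dbeta, v_w = 1.0, 0.01, 0.1         # kappa, Delta beta, wall velocity
mA_over_T0 = mu_over_T0 = 1.0              # m_A ~ |mu| ~ T_0 ~ 100 GeV
lam_t, Vcb, sin_gamma = 1.0, 0.04, 1.0     # assumed CKM inputs

Im33 = lam_t**2*Vcb*sin_gamma              # Eq. (7)
nB_over_s = 1e-8*(kappa*dbeta/v_w)*mA_over_T0*mu_over_T0*Im33
print(f"n_B/s ~ {nB_over_s:.0e}")          # ~ 4e-11, the observed order
```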
## 2 Baryogenesis via Leptogenesis The idea that one can obtain a baryon asymmetry by first generating a lepton asymmetry was first proposed in , and subsequently explored in several papers . As mentioned earlier, $`B+L`$ is anomalously violated in the Standard Model, and the rate for this process is large at high temperatures. However, $`B-L`$ is conserved. Thus, given enough time for the $`B+L`$ violating processes to act, we obtain the relations: $$(B-L)_f=(B-L)_i,\qquad (B+L)_f=0$$ (9) where the subscripts $`f`$ and $`i`$ stand for final and initial respectively. Thus if one started with zero initial baryon number, but non-zero initial lepton number, one would obtain the final condition $`B_f=-L_i/2`$ (this relationship is slightly modified by a careful consideration of all the Standard Model interactions ). The initial lepton number asymmetry is obtained from the $`CP`$ and lepton number violating decay of heavy right-handed Majorana neutrinos. Consider a model with right-handed Majorana neutrinos $`N_R`$. By definition these fields are self-conjugate, $`N_R^c=N_R`$, where the superscript $`c`$ denotes the charge conjugated field. Thus, given the Yukawa interaction $$\mathcal{L}_Y=h_{ij}\overline{l}_L^iN_R^jH+\mathrm{h.c.}$$ (10) where $`h_{ij}`$ is the matrix of Yukawa couplings, the $`l_L`$ are left-handed Standard Model leptons and $`H`$ is the Higgs field, one finds that $`N_R`$ can decay into both light leptons and anti-leptons. If these decays are $`CP`$ violating they will generate an excess of one over the other. Let us define an asymmetry $$\delta =\frac{\mathrm{\Gamma }-\mathrm{\Gamma }^{CP}}{\mathrm{\Gamma }+\mathrm{\Gamma }^{CP}}$$ (11) where $`\mathrm{\Gamma }`$ is the decay rate into leptons, and $`\mathrm{\Gamma }^{CP}`$ that into anti-leptons. In the case that the heavy neutrinos are not degenerate in mass, which is the case we study here, it is sufficient to consider only $`CP`$ violation in the decays of the heavy neutrinos (direct $`CP`$ violation). One then obtains the result $$\delta =\frac{1}{2\pi (h^{\dagger }h)_{11}}\underset{j=1}{\overset{6}{\sum }}\mathrm{Im}[(h^{\dagger }h)_{1j}]^2f(m_j^2/m_1^2),$$ (12) where $`f(x)`$ is a kinematic function of order one for reasonable choices of the masses . The subscript 1 in the terms above is due to the fact that the lepton asymmetry is generated by the decay of the lightest of the right-handed neutrinos (any asymmetry generated by the heavier right-handed neutrinos will be washed out by the decays of the lightest). The first constraint on the mass scale of the $`N_R`$ is obtained by insisting that it be out of thermal equilibrium with the rest of the universe when it decays. This will hold if it lives until the universe has cooled to a temperature below the mass of the particle.
This condition is encoded in the requirement that $$\mathrm{\Gamma }_R\lesssim H(T=m_R)$$ (13) where $`\mathrm{\Gamma }_R`$ is the decay rate of the right-handed neutrino with mass $`m_R`$, and $`H`$ is the Hubble constant. This translates to $$\frac{(h^{\dagger }h)_{11}m_R}{8\pi }\lesssim \frac{20m_R^2}{M_P}\quad \mathrm{or}\quad \frac{(h^{\dagger }h)_{11}}{m_R}\lesssim 10^{-16}\mathrm{GeV}^{-1}$$ (14) if the dominant decay is via the Yukawa coupling of Eq. (10). The second constraint is obtained by insisting that the heavy Majorana mass scale explain the solar and atmospheric neutrino data. If we assume that the observed deficit of $`\nu _e`$'s from the sun is due to $`\nu _e`$–$`\nu _\mu `$ mixing, then the mass squared difference $`\mathrm{\Delta }m^2\approx 10^{-6}\mathrm{eV}^2`$, preferred by the data, implies $`m_{\nu _\mu }\approx 10^{-3}`$ eV. Similarly, assuming the deficit in atmospheric $`\nu _\mu `$'s is due to $`\nu _\mu `$–$`\nu _\tau `$ mixing, the preferred mass squared difference, $`\mathrm{\Delta }m^2\approx 10^{-3}\mathrm{eV}^2`$, implies $`m_{\nu _\tau }\approx 3\times 10^{-2}`$ eV. The see-saw mass relations $$m_{\nu _\mu }\simeq \frac{m_\mu ^2}{m_R};\qquad m_{\nu _\tau }\simeq \frac{m_\tau ^2}{m_R}$$ (15) then imply that $`m_R\sim 10^{10}`$–$`10^{11}`$ GeV. Eq. (14) then tells us that $`h_{11}\lesssim 10^{-2}`$–$`10^{-3}`$ (assuming a hierarchical matrix of Yukawa couplings). Note that if the electron got its mass at tree level from the Yukawa coupling $`h`$, as in the Standard Model, one would obtain $`m_e=h_{11}v\sim 1`$ GeV for $`v=246`$ GeV, which is too large by several orders of magnitude. In order for this model to work, one has to impose symmetries such that the Standard Model fermions only get their masses at the loop level. In such a case the fermion masses would be proportional to the squares of the Yukawa coupling constants, and one obtains $`m_e=h_{11}^2v\approx 0.2`$ MeV for $`h_{11}=10^{-3}`$, which is the correct order of magnitude.
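The two constraints can be combined in a short back-of-the-envelope sketch; the charged lepton masses are standard inputs, and the light neutrino masses are the values quoted above:

```python
# Back-of-the-envelope sketch of the see-saw relations, Eq. (15), and the
# out-of-equilibrium bound, Eq. (14).
m_mu, m_tau = 0.1057, 1.777                # GeV
m_nu_mu, m_nu_tau = 1e-12, 3e-11           # 1e-3 eV and 3e-2 eV, in GeV

mR_mu = m_mu**2/m_nu_mu                    # ~ 1e10 GeV
mR_tau = m_tau**2/m_nu_tau                 # ~ 1e11 GeV
print(f"m_R (mu sector):  {mR_mu:.1e} GeV")
print(f"m_R (tau sector): {mR_tau:.1e} GeV")

# Eq. (14): (h'h)_11/m_R < 1e-16 GeV^-1  =>  h_11 < sqrt(1e-16 * m_R)
for mR in (mR_mu, mR_tau):
    print(f"m_R = {mR:.1e} GeV  =>  h_11 < {(1e-16*mR)**0.5:.1e}")
```

The output reproduces the ranges quoted in the text: $`m_R\sim 10^{10}`$–$`10^{11}`$ GeV and $`h_{11}\sim 10^{-3}`$–$`10^{-2}`$.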
It is indeed possible to construct a model that incorporates all these requirements . Moreover, in this model, the $`CP`$ violation responsible for baryogenesis is related in a calculable way to the $`CP`$ violation present in the CKM matrix. The model is based on the $`SU(4)\times SU(2)_L\times SU(2)_R`$ group. The Standard Model fermions transform in the usual representations: $$\mathrm{\Psi }_L^i\sim (4,2,1)^i\sim \left(\begin{array}{cccc}u_1& u_2& u_3& \nu \\ d_1& d_2& d_3& e^{-}\end{array}\right)_L^i$$ (16) $$\mathrm{\Psi }_R^i\sim (4,1,2)^i\sim \left(\begin{array}{cccc}u_1& u_2& u_3& N\\ d_1& d_2& d_3& e^{-}\end{array}\right)_R^i$$ (17) where $`i=1,2,3`$ is a generation index, and we have included a right-handed neutrino $`N`$. We add to this three generations of (right-handed) sterile neutrinos $$s^i\sim (1,1,1)^i.$$ (18) The matter spectrum is supersymmetric, so the scalars $`\stackrel{~}{\mathrm{\Psi }}_L^i`$, $`\stackrel{~}{\mathrm{\Psi }}_R^i`$, and $`\stackrel{~}{s}^i`$ in the model transform in exactly the same way. We will impose a discrete $`Z_3`$ symmetry on the gauge singlets (broken by the interactions of the Standard Model particles) under which $`s^j\to e^{i(j\pi )/3}s^j`$ and $`\stackrel{~}{s}^j\to e^{i(2j\pi )/3}\stackrel{~}{s}^j`$. This permits us to make the Lagrangian $`CP`$ invariant, with the vacuum expectation values of the $`\stackrel{~}{s}^j`$ breaking $`CP`$ spontaneously. We can choose parameters for the scalar potential such that it is minimized when $$\langle \stackrel{~}{s}_j\rangle =\frac{v_0}{\sqrt{2}}e^{i\alpha _j};\qquad \langle \stackrel{~}{N}_j\rangle =\frac{v_R}{\sqrt{2}}\delta _{1j};\qquad \langle \stackrel{~}{\nu }_j\rangle =\frac{v_L}{\sqrt{2}}\delta _{1j}$$ (19) with $`|v_0|>|v_R|\gg |v_L|`$. This provides the correct pattern of symmetry breaking. The Yukawa interactions are given by $$\mathcal{L}_{\mathcal{Y}}=-y_i(\overline{s}^c)^is^i\stackrel{~}{s}^i-(\kappa _L^a)_{ij}\overline{\mathrm{\Psi }}_L^is^j\stackrel{~}{\mathrm{\Psi }}_L^a-(\kappa _R^a)_{ij}^T\overline{\mathrm{\Psi }}_R^i(s^c)^j\stackrel{~}{\mathrm{\Psi }}_R^a+\mathrm{h}.\mathrm{c}.,$$ (20) with all of the coupling constants real. However, the mass matrix of the $`s_i`$ will contain the phases $`\alpha `$ due to the spontaneous breaking of $`CP`$ invariance when the $`\stackrel{~}{s}_i`$ obtain vacuum expectation values. Note that since $`\overline{\mathrm{\Psi }}_L\mathrm{\Psi }_R`$ transforms as $`(1,2,2)`$ and there are no scalars in this representation, none of the Standard Model fermions get masses at tree level. Their masses are generated at one loop by diagrams involving the $`s_i`$ on the internal lines. The $`CP`$ violating phases in the quark mass matrices, and hence in the CKM matrix, are a function of the phases $`\alpha `$ in the masses of the $`s_i`$. The out of equilibrium decays of the $`s_i`$ generate the lepton (and hence baryon) asymmetry. It is this same phase $`\alpha `$ that is responsible for the $`CP`$ violation in these decays. Thus one obtains a relationship between the CKM phase and the phase responsible for the baryon asymmetry. ## 3 Conclusions We have presented an overview of two models of baryogenesis that also have other low energy experimental consequences. Baryogenesis in the MSSM is possible if the Higgs and $`\stackrel{~}{t}_R`$ are light. Moreover, one expects large contributions to the $`b\to s\gamma `$ rate and the $`B\overline{B}`$ mixing amplitude. Baryogenesis via the decay of heavy neutrinos can be constrained by insisting that they be at the see-saw scale implied by the solar and atmospheric neutrino data. We have presented specific implementations of these models where the CKM phase responsible for $`CP`$ violation in the neutral kaons is related to the phase responsible for the baryogenesis. Acknowledgements This work was supported in part by the National Science Foundation under grant PHY-95-147947 and by the U.S. Department of Energy under Contract DE-AC03-76SF00098.
# Quantum Zeno effect in the decay onto an unstable level (Published in Phys. Lett. A 257, 227–231 (1999)) ## 1 Introduction The quantum Zeno effect (paradox) is the name for the phenomenon of freezing (or slowing down) the evolution of a continuously observed quantum system. Originally the effect was discussed in the case of a spontaneously decaying system, and prevention (or slowing down) of the decay was predicted. Later on it was argued that the Zeno effect cannot arise in spontaneous decay. Instead, the Zeno effect has been thoroughly investigated, and finally experimentally proved, in a repeatedly measured two-level system otherwise undergoing Rabi oscillations. The possibility of the Zeno effect in the initial non-exponential stage of a spontaneous decay is still under discussion . It was argued that the Zeno effect is observed in real radioactive decay. A review on the subject can be found in . We shall present below a simple model predicting slowing down of a spontaneous decay in the case when the final (after this decay) state of the system is also unstable. This situation could in principle be interpreted as the Zeno effect in a continuously observed spontaneous decay, the second decay serving as a mechanism for the observation of the first one. We shall consider a 3-level system with level 2 spontaneously decaying onto level 1 and level 1 spontaneously decaying onto level 0. If the system is originally on level 2, then the decay $`1\to 0`$ (practically, observation of a photon radiated simultaneously with this decay) is a sign that the system has already arrived at level 1 and therefore that the transition $`2\to 1`$ occurred. Vice versa, the absence of the decay $`1\to 0`$ means that the system is still at level 2. Thus, the very possibility of the decay $`1\to 0`$ means that the system prepared originally at level 2 is under permanent observation (measurement). Then, as a result of the Zeno effect, the system must be frozen at level 2, or at least the decay of this level must be essentially slowed down. In Sect. 3 we shall confirm by a direct quantum-mechanical calculation that this is the case: the decay $`2\to 1`$ is slowed down if level 1 is unstable; the greater the instability of level 1, the smaller the rate of the decay $`2\to 1`$. In Sect. 4 we shall return to the question whether this phenomenon can be interpreted as a result of the Zeno effect. To make the calculation more clear, we shall consider in Sect. 2 the decay onto a stable level, and then in Sect. 3 the model will be generalized to the case of interest. ## 2 The decay onto a stable level As the preliminary step, let us consider, by the method given in , a model of the decay $`2\to 1`$ onto a stable level 1. Let $`H_0`$ be the Hamiltonian of a multilevel system (atom), including also a continuous spectrum. The latter may originate from the interaction between the atom and the electromagnetic field (photons), which could be absorbed or radiated simultaneously with transitions of the atom. The nature of the continuous spectrum may be arbitrary, but for concreteness we shall speak of photons. The total Hamiltonian $`H=H_0+V`$ will also contain a potential $`V`$ leading to transitions between levels accompanied by a change of the photon number. Denote the state of the atom on level 2 by $`|2\rangle `$. Suppose that there are no photons (more generally, no contribution from the continuous spectrum) in the state $`|2\rangle `$.
We wish to describe the decay of this state to the state $`|1E\rangle `$ in which the atom is on level 1 and there are also some photons, so that the total energy of the atom and the electromagnetic field is $`E`$. For simplicity we shall assume that the only non-zero matrix elements of the potential $`V`$ are $`\langle 1E|V|2\rangle =\overline{\langle 2|V|1E\rangle }`$. To describe the transition $`|2\rangle \to |1E\rangle `$, consider the general state of the system in the form $$|\psi \rangle =a_2(t)|2\rangle e^{-iE_2t}+\underset{E}{\sum }a_{1E}(t)|1E\rangle e^{-iEt}$$ (1) where the natural units ($`\hbar =1`$) are used and the integration in energy is denoted as a sum. To return to the usual units, we have to replace $`t`$ by $`t/\hbar `$. To return to the genuinely continuous spectrum, we have to replace the sum over $`E`$ by integration over $`E`$ with the weight $`\rho _1(E)`$ representing the local density of the states $`|1E\rangle `$. Substituting this form of the state into the Schrödinger equation, we have the following equations for the coefficients $`a_2`$, $`a_{1E}`$: $$\dot{a}_2e^{-iE_2t}=-i\underset{E}{\sum }\langle 2|V|1E\rangle a_{1E}e^{-iEt},$$ (2) $$\dot{a}_{1E}e^{-iEt}=-i\langle 1E|V|2\rangle a_2e^{-iE_2t}.$$ (3) To solve these equations, let us accept the ansatz $`a_2(t)=\mathrm{exp}(-\gamma _2t)`$ corresponding to the exponential law of the decay of level 2 (this law is valid for not too small times). Then Eq. (2) will take the form $$i\underset{E}{\sum }a_{1E}\langle 2|V|1E\rangle e^{-i(E-E_2)t}=\gamma _2e^{-\gamma _2t}$$ (4) while Eq. (3) may be explicitly solved to give $$a_{1E}(t)=\frac{\langle 1E|V|2\rangle }{E-E_2+i\gamma _2}\left[1-e^{i(E-E_2+i\gamma _2)t}\right].$$ (5) The initial condition $`a_{1E}(0)=0`$ is used to describe the system being initially on level 2. Now we have to substitute the expression (5) for the function $`a_{1E}(t)`$ into Eq. (4). Evaluating the sum (integral) over energies in Eq. (4), we shall assume that the weight function $`\rho _1(E)`$ and the matrix element $`\langle 1E|V|2\rangle `$ are slow functions of energy and can be replaced by constants equal to the values of these functions at $`E=E_2`$ (the energy of level 2, the point where the denominator in Eq. (5) has a minimum). Under this assumption the energy integral can be evaluated. Eq. (4) may be shown to be satisfied provided that $$\gamma _2=\pi \rho _1(E_2)|\langle 1E_2|V|2\rangle |^2.$$ (6) This is nothing other than Fermi's golden rule for the decay of an unstable level. ## 3 The decay onto a decaying level Let us apply an analogous consideration to the three-level system of interest: level 2 may decay to level 1, and level 1 in turn may decay to level 0. The general state of the system (again containing a continuous spectrum, photons) may be presented in the form $$|\psi \rangle =a_2(t)|2\rangle e^{-iE_2t}+\underset{E}{\sum }\left(a_{0E}(t)|0E\rangle +a_{1E}(t)|1E\rangle \right)e^{-iEt}.$$ (7) Here $`|1E\rangle `$ denotes the state with the atom at level 1 and the total energy of the system (atom plus photons) equal to $`E`$, and $`|0E\rangle `$ is an analogous state but with the atom at level 0. The sums over energies will later be replaced by integrals with the corresponding weights: $`\rho _1(E)`$ for the states $`|1E\rangle `$ and $`\rho _0(E)`$ for $`|0E\rangle `$. The Hamiltonian of the system will be taken in the form $`H=H_0+V`$ with the following non-zero matrix elements of $`V`$: $`\langle 1E|V|2\rangle `$ and $`\langle 0E|V|1E^{\prime }\rangle `$.
Then the Schrödinger equation gives the following equations for the coefficients: $$\dot{a}_2=-i\underset{E}{\sum }\langle 2|V|1E\rangle a_{1E}e^{-i(E-E_2)t}$$ (8) $$\dot{a}_{1E}=-i\langle 1E|V|2\rangle a_2e^{-i(E_2-E)t}-i\underset{E^{\prime }}{\sum }\langle 1E|V|0E^{\prime }\rangle a_{0E^{\prime }}e^{-i(E^{\prime }-E)t}$$ (9) $$\dot{a}_{0E}=-i\underset{E^{\prime }}{\sum }\langle 0E|V|1E^{\prime }\rangle a_{1E^{\prime }}e^{-i(E^{\prime }-E)t}.$$ (10) To solve this set of equations, we shall present them in the vector form $$\dot{a}_2=-iV_{21}a_1$$ (11) $$\dot{a}_1=-iV_{12}a_2-iV_{10}a_0$$ (12) $$\dot{a}_0=-iV_{01}a_1$$ (13) where the following vectors and matrices are introduced: $`(a_1)_E=a_{1E},(a_0)_E=a_{0E},`$ $`(V_{21})_E=\langle 2|V|1E\rangle e^{-i(E-E_2)t},(V_{12})_E=\langle 1E|V|2\rangle e^{-i(E_2-E)t},`$ $`(V_{10})_{EE^{\prime }}=\langle 1E|V|0E^{\prime }\rangle e^{-i(E^{\prime }-E)t},(V_{01})_{EE^{\prime }}=\langle 0E|V|1E^{\prime }\rangle e^{-i(E^{\prime }-E)t}.`$ (14) Let us introduce also the integral operations acting on time-dependent vectors: $$I_{kl}a=-i\int _0^tV_{kl}a\,dt,\qquad J=I_{10}I_{01}.$$ (15) Then Eqs. (12)–(13) may be replaced by the integral equations $$a_1=I_{12}a_2+I_{10}a_0,\qquad a_0=I_{01}a_1$$ (16) having the solution $$a_1=(1-J)^{-1}I_{12}a_2=\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}J^nI_{12}a_2.$$ (17) Making use of the ansatz $`a_2=e^{-\mathrm{\Gamma }_2t}`$ in the right-hand side of this equation and substituting the resulting expression for $`a_1`$ into Eq. (11), we have the following equation for $`\mathrm{\Gamma }_2`$: $$i\underset{n=0}{\overset{\mathrm{\infty }}{\sum }}V_{21}J^nI_{12}e^{-\mathrm{\Gamma }_2t}=\mathrm{\Gamma }_2e^{-\mathrm{\Gamma }_2t}.$$ (18) We can evaluate each term in the sum. For calculating the sums (integrals) over energies, we shall use the same approximation as in the preceding section, considering all matrix elements of $`V`$ and the weight functions $`\rho _1(E)`$ for the states $`|1E\rangle `$ and $`\rho _0(E)`$ for the states $`|0E\rangle `$ to be slow functions of energy. Then each term on the left-hand side of Eq. (18) can be evaluated. It turns out that the term corresponding to a given $`n`$ differs from the term corresponding to $`n-1`$ only by the numerical factor $`(-N)`$, where $$N=\pi ^2\rho _0(E_2)|\langle 0E_2|V|1E_2\rangle |^2\rho _1(E_2).$$ (19) This gives $$\mathrm{\Gamma }_2=\frac{\gamma _2}{1+N}$$ (20) where $`\gamma _2`$ is defined by Eq. (6). Eq. (20) is proved under the assumption that $`N<1`$; however, this does not exclude that it may be valid also in a wider region. The assumptions made above about the behavior of the matrix elements of $`V`$ and the functions $`\rho _1`$ and $`\rho _0`$ are essential. The formula (20) leads to the main conclusion. The quantity $`N`$ in its denominator is proportional to the rate of the decay of level 1 in the situation when the system starts in the state $`|1E_2\rangle `$. In other words, $`N`$ is a measure of the instability of level 1 (under the condition that there are also photons, so that the total energy of the system is $`E_2`$). We see therefore that the rate of the decay of level 2 decreases because of the instability of the target level 1. The greater the instability of level 1, the smaller the rate of the decay $`2\to 1`$.
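The result (20) can be checked directly on a discretized toy version of the model: two flat bands with constant matrix elements, exactly as assumed above. In the following sketch (bandwidth, grid size, and rates are illustrative choices, not values from the paper) the full Hamiltonian is diagonalized and the decay rate of the amplitude $`a_2`$ is fitted:

```python
import numpy as np

# Toy check of Gamma_2 = gamma_2/(1 + N): level |2> at E_2 = 0 coupled to a
# discretized |1,E> band, which is coupled to a discretized |0,E> band with
# constant matrix elements (the paper's assumption). All numbers illustrative.
W, n = 10.0, 201                     # half-bandwidth and grid size per band
E = np.linspace(-W, W, n)
dE = E[1] - E[0]
rho = 1.0/dE                         # flat density of states

gamma2 = 0.1                         # target bare amplitude rate, Eq. (6)
v1 = np.sqrt(gamma2/(np.pi*rho))     # <1E|V|2>
N_par = 1.0                          # instability parameter N, Eq. (19)
v0 = np.sqrt(N_par)/(np.pi*rho)      # <0E|V|1E'>

dim = 1 + 2*n
H = np.zeros((dim, dim))
H[1:1+n, 1:1+n] = np.diag(E)         # |1,E> block
H[1+n:, 1+n:] = np.diag(E)           # |0,E> block
H[0, 1:1+n] = H[1:1+n, 0] = v1       # |2> <-> |1,E>
H[1:1+n, 1+n:] = v0                  # |1,E> <-> |0,E'>, all pairs
H[1+n:, 1:1+n] = v0

w, Umat = np.linalg.eigh(H)
c2 = Umat[0, :]**2                   # |<2|eigenstate>|^2
ts = np.linspace(5.0, 25.0, 41)
a2 = np.array([(c2*np.exp(-1j*w*t)).sum() for t in ts])

slope = np.polyfit(ts, np.log(np.abs(a2)), 1)[0]
print(f"fitted Gamma_2 = {-slope:.4f}")
print(f"gamma_2/(1+N)  = {gamma2/(1.0 + N_par):.4f}")
```

Increasing $`N`$ (making the $`|1E\rangle `$ states more unstable) slows the fitted decay accordingly, while $`N\to 0`$ recovers the bare golden-rule rate.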
The last claim may be made more concrete if we (roughly) estimate the rate $`\mathrm{\Gamma }_1`$ of the decay of level 1. It depends on the energy band of the decaying states $`|1E\rangle `$. Since these states result from the decay of the state $`|2\rangle `$, the energy $`E`$ should be of the order of $`E_2`$ and the width $`\mathrm{\Delta }E`$ of the energy band is of the order of $`\mathrm{\Gamma }_2`$. The rate of the decay of level 1 may then be obtained (as a rough estimate) by multiplying the number (19) by $`\mathrm{\Delta }E`$, giving $`\mathrm{\Gamma }_1\sim \mathrm{\Gamma }_2N`$. According to Eq. (20), the slowing down of the decay $`2\to 1`$ is essential if $`\mathrm{\Gamma }_1`$ is larger than or of the order of $`\mathrm{\Gamma }_2`$. ## 4 Discussion We showed, under certain assumptions, that the rate of a decay is slowed down by the instability of the target level. It was argued in the Introduction that this may be interpreted as a consequence of the quantum Zeno effect. However, this must be compared with the arguments against the Zeno effect in spontaneous decays. Some authors argued that the quantum Zeno effect is impossible in spontaneous decay because of its exponential law (contrary to the quadratic small-time asymptotics of Rabi oscillations). One more doubt may be based on the following. A spontaneously decaying atom located in a plasma is subject to repeated scattering of electrons off the atom. These scattering events may be thought of as repeated measurements of the atom discriminating its levels. In this situation the Zeno effect, if existing, could slow down the decay. In reality the rate of the decay changes insignificantly due to the scattering of electrons. This gives an additional argument against the Zeno effect in spontaneous decay. The phenomenon discussed in the preceding section may then be interpreted as a Zeno-like but not genuinely Zeno effect. In our opinion, the conclusion should not be as radical as this. Instead, one may consider the possibility of the Zeno effect for different types of continuous measurements. In the situation considered in the present paper the transition $`1\to 0`$ is evidence that the system has already arrived at level 1, but after this evidence has been obtained the system is no longer at level 1. If we consider a 2-level system with both levels 2 and 1 as our measured system, then the measurement leads to the destruction of this system. On the contrary, the scattering of electrons gives information about the level the atom is on and leaves it at the same level. The measurements described by von Neumann's projections (discussed in most papers on the Zeno effect) act analogously. Therefore, the results obtained in the present paper may point out that 1) the quantum Zeno effect does not arise in a spontaneous decay if the measurement is “minimally disturbing” (described by projectors), but 2) the effect takes place if the measurement is “destructive”, i.e. leads to the disappearance of the measured state. Some other remarks must be added. The conclusion about the slowing down of the decay $`2\to 1`$ due to the decay $`1\to 0`$ has been proved above under certain assumptions. The most important of them are that the phase volumes of the transitions may be correctly accounted for by the weights $`\rho _1`$, $`\rho _0`$ and that these weights are slow functions of energy $`E`$. Under other conditions the conclusion about the slowing down of the decay could be different. This may be considered as one more argument against the Zeno interpretation of the slowing down, but instead it may point to the necessity of a more accurate treatment of the concept of “observation”. Observation of the decay is characterized by the time of observation and the energy of the decay products. Simple statements that the decay “is observed” or “is not observed” are hardly adequate. One needs quantitative characteristics of the observations. The degree of slowing down of the decay must depend on these characteristics.
The assumption accepted in the present paper, that the functions $`\rho _1(E)`$, $`\rho _0(E)`$ are slowly varying, means that the products of the decays are efficiently observed in a wide energy band. Under this condition the Zeno effect may be expected. If the functions $`\rho _1(E)`$, $`\rho _0(E)`$ have the shape of narrow peaks, the Zeno effect may be absent because of inefficient observation. It may be remarked in this connection that the very existence of the decay products may naively be considered as evidence of the decay. If one accepts this point of view, he is forced to conclude that the decay is always under continuous observation and therefore is always subject to the Zeno effect. This is, however, invalid (see the paper of A. Peres in ), because the decay products have to be considered as a part of the system necessary for the description of the decay itself. The secondary decay in the three-level system analyzed in the present paper may in most cases be considered as external to the primary decay. Hence, this secondary decay may be treated as an observation and must lead to the slowing down of the primary decay. Even in this case a much more accurate and detailed analysis is necessary to have a complete and reliable description of the Zeno effect. This analysis has to include all temporal and energy characteristics of the process. As a limiting case, it cannot be excluded that in some conditions (for certain characteristics of the system and its environment) the secondary decay cannot be considered as external with respect to the primary decay. Summing up, we suggest that a complete analysis of the Zeno effect in decay requires a more detailed definition of the concept of observation and of the Zeno effect itself. The results of the present paper show that developments of this sort should be fruitful. ACKNOWLEDGEMENT The author acknowledges fruitful discussions with V. Namiot and A. Panov concerning the interpretation of the results. The work was supported in part by the Russian Foundation for Basic Research, grant 98-01-00161.
# Gravitational Thermodynamics of Space-time Foam in One-loop Approximation ## Abstract We show from one-loop quantum gravity and statistical thermodynamics that the thermodynamics of quantum foam in flat space-time and Schwarzschild space-time is exactly the same as that of Hawking-Unruh radiation in thermal equilibrium. This means we show unambiguously that Hawking-Unruh thermal radiation should contain thermal gravitons or the contribution of quantum space-time foam. As a by-product, we also give the quantum gravity correction in the one-loop approximation to the classical black hole thermodynamics. The space-time foam-like structure (FLS) was first proposed by J. A. Wheeler about forty years ago. He argued that space-time may have a multiply connected, non-trivial topological structure at the Planck scale, though it seems smooth and simply connected in the large. The possible influence of FLS on field theory and the thermodynamical properties of FLS itself have been discussed by many authors [2–9]. In this paper, we would like to discuss the thermodynamical properties of FLS only from one-loop quantum gravity and statistical thermodynamics, so the result we get may be much more reliable. If time in Euclidean quantum gravity has an imaginary period $`i\beta =i(1/T)`$ (henceforth, we take $`\hbar =c=G=k=1`$), then the partition function $$Z=\underset{n}{\sum }\mathrm{exp}(-\beta E_n)$$ (1) of the canonical ensemble can be rewritten as a Euclidean path integral $$Z=\int D(g,\varphi )\mathrm{exp}(-\widehat{I}(g,\varphi )),$$ (2) where $`\widehat{I}(g,\varphi )`$ is the Euclideanized action of gravity, $`g`$, and matter field, $`\varphi `$, and $`E_n`$ is the $`n`$-th energy eigenvalue of a certain differential field operator on its eigenstate vector $`|g,\varphi \rangle _n`$. For the pure quantum gravity case, we put $`\varphi =0`$ and $$g_{ab}=g_{ab}^{(0)}+\overline{g}_{ab},$$ (3) where $`\overline{g}_{ab}`$ is the metric fluctuation of the background metric $`g_{ab}^{(0)}`$; then we can expand the action in a Taylor series about the background field $`g^{(0)}`$ as $$\widehat{I}(g)=\widehat{I}(g^{(0)})+\widehat{I}_2(\overline{g})+[\text{higher-order terms}],$$ (4) where $`\widehat{I}_2(\overline{g})`$ is the well known one-loop term of the Euclidean gravitational action. In the one-loop approximation, the logarithm of the partition function, $`Z`$, reads $$\mathrm{ln}Z=-\widehat{I}(g^{(0)})+\mathrm{ln}\int D(\overline{g})\mathrm{exp}(-\widehat{I}_2(\overline{g})).$$ (5) As $`\widehat{I}(g^{(0)})`$ is equal to the Gibbons-Hawking surface term for vacuum Einstein gravity without a cosmological term, the contributions to $`\mathrm{ln}Z`$ come from the surface term and from $`\widehat{I}_2(\overline{g})`$ in the one-loop approximation. Now from quantum gravity and statistical thermodynamics we try to study the gravitational thermodynamics of one-loop quantum gravity for flat space-time and Schwarzschild space-time backgrounds. Let us consider the gravitational field inside a volume $`V`$ and with an imaginary time period $`i\beta `$. Hawking showed exactly that $`\mathrm{ln}Z`$ in the one-loop approximation with flat space-time background reads $$\mathrm{ln}Z=\frac{4\pi ^3r_0^3T^3}{135}=\frac{\pi ^2}{45}\beta ^{-3}V$$ (6) for a system at temperature $`T=\beta ^{-1}`$, contained in a spherical box of radius $`r_0`$, where the Casimir effect of the finite size of the volume $`V`$ is neglected. (Note that the factor $`4\pi ^5`$ in Eq. (15.99) of Hawking's original paper should be corrected to $`4\pi ^3`$ in Eq. (6).)
Hawking argued that Eq. (6) is just the contribution of the thermal gravitons to the partition function. However, in our opinion, if FLS is created from metric fluctuations, an equivalent interpretation of Eq. (6) as the contribution of FLS can also be given. Let $`P_n`$ be the probability of the FLS in volume $`V`$ being in the $`n`$-th energy eigenstate; then from $$S=-\underset{n}{\sum }P_n\mathrm{ln}P_n$$ (7) and $$P_n=Z^{-1}\mathrm{exp}(-\beta E_n),$$ (8) the entropy of FLS in $`V`$ is given by $$S=\beta <E>+\mathrm{ln}Z,$$ (9) where the expected value of the energy is given by $$<E>=-\frac{\partial }{\partial \beta }\mathrm{ln}Z.$$ (10) From Eqs. (6), (9) and (10) it is easy to show that the entropy, $`S`$, and energy, $`U`$, of FLS inside the volume $`V`$ are respectively $$S=\frac{4\pi ^2}{45}\beta ^{-3}V$$ (11) and $$U=<E>=\frac{\pi ^2}{15}\beta ^{-4}V.$$ (12) So, the entropy density, $`\rho _S`$, and energy density, $`\rho _U`$, of FLS are respectively $$\rho _S=\frac{4\pi ^2}{45}T^3$$ (13) and $$\rho _U=\frac{\pi ^2}{15}T^4.$$ (14) They are exactly the same as those of black body radiation. As is known, the time coordinate of an inertial system in flat space-time has no imaginary period or, equivalently, the period $`\beta `$ in an inertial system is infinite. Hence the temperature of FLS is always zero for an inertial observer. However, the only case in which the time coordinate in flat space-time has a finite imaginary period is that of a Rindler system. This is clear from the coordinate transformation from inertial coordinates ($`t,x,\theta ,\phi `$) to Rindler coordinates ($`\eta ,\xi ,\theta ,\phi `$), $$\begin{array}{c}t=a^{-1}e^{a\xi }sh(a\eta )\\ x=a^{-1}e^{a\xi }ch(a\eta )\end{array}.$$ (15) Evidently, $`\eta `$ has an imaginary period of $`2\pi /a`$ after it is Euclideanized, i.e., $`\eta \to i\eta `$. So, $`\beta =2\pi /a`$ and the famous Unruh temperature $$T_U=\beta ^{-1}=\frac{a}{2\pi }$$ (16) results. The above discussion suggests that, though the temperature of FLS for an inertial observer is zero, a Rindler observer will find himself immersed in a heat bath of FLS at the temperature $`T_U`$. Now a problem confronts us: are we really in an inertial system or in a Rindler system with a heat bath? It seems no choice can be given a priori. The only reasonable choice depends on whether we can measure the temperature of FLS or not. As is known, there is a universal background black body radiation of $`\sim 3\,\mathrm{K}`$ everywhere in the universe. If the idea of thermal gravitons or thermal FLS is not wrong, the densities $`\rho _S`$ and $`\rho _U`$ may be too small compared with those of the $`\sim 3\,\mathrm{K}`$ background radiation, so that a measurement of them can hardly be made, especially if we remember that in order to get an Unruh temperature of $`1\,\mathrm{K}`$ the proper acceleration of the Rindler observer should be approximately $$\alpha ^{-1}=ae^{-a\xi }\approx 2.4\times 10^{20}\,\mathrm{m/s^2}\approx 10^{19}g_E,$$ (17) where $`g_E`$ is the proper acceleration on the surface of the Earth. Hence, it seems highly unlikely that FLS can have any measurable thermal properties in practice.
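Restoring ordinary units in Eq. (16) makes the practical difficulty explicit; the constants below are standard values, and the computation is a consistency check of Eq. (17) rather than a new result:

```python
import math

kB, hbar, c = 1.380649e-23, 1.054571817e-34, 2.99792458e8   # SI units
gE = 9.81                                # m/s^2, surface gravity of the Earth
T = 1.0                                  # target Unruh temperature, K

a = 2.0*math.pi*kB*T*c/hbar              # a = 2*pi*k_B*T*c/hbar
print(f"a = {a:.2e} m/s^2 = {a/gE:.1e} g_E")   # ~2.5e20 m/s^2, ~2.5e19 g_E
```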
Hawking also showed that the one-loop approximation of the logarithm of the partition function, $`\mathrm{ln}Z`$, for pure gravity in the Schwarzschild black hole background metric is given approximately by $$\mathrm{ln}Z=-\widehat{I}(g^{(0)})+\mathrm{ln}\int D(\overline{g})\mathrm{exp}(-\widehat{I}_2(\overline{g}))=-\frac{\beta ^2}{16\pi }+\frac{106}{45}\mathrm{ln}(\frac{\beta }{\beta _0})+\frac{4\pi ^3r_0^3}{135\beta ^3}+O(r_0^2\beta ^{-2}),$$ (18) where $`\widehat{I}(g^{(0)})=\beta ^2/16\pi `$ is the Gibbons-Hawking surface term of the Schwarzschild space-time, $`\beta =T^{-1}=8\pi M`$ is just the imaginary time period of the Schwarzschild black hole, $`\beta _0`$ is an arbitrary constant with the same dimension as $`\beta `$, and $`r_0`$ is the proper radius of a spherical box enclosing the Euclidean section of a Schwarzschild black hole at its center. From Eqs. (9), (10) and (18), we can get the gravitational entropy, $`S`$, and energy, $`U`$, of the whole system of proper volume $`V`$ as $$U=-\frac{\partial }{\partial \beta }\mathrm{ln}Z=M-\frac{53}{180\pi M}+\frac{\pi ^2}{15}T^4V,$$ (19) $$S=\beta <E>+\mathrm{ln}Z=4\pi M^2+\frac{106}{45}[\mathrm{ln}(\frac{M}{M_0})-1]+\frac{4\pi ^2}{45}T^3V,$$ (20) where $`M_0\equiv \beta _0/8\pi `$. The last terms on the right-hand sides of Eqs. (19) and (20) are exactly the same as those of the Hawking radiation in equilibrium, which can only be ascribed to the contribution of thermal gravitons or of quantum foam, while the first terms are the familiar contributions of the classical Schwarzschild black hole, and the second terms, which originate from one-loop quantum gravity, are no doubt the quantum gravity correction to the classical Schwarzschild black hole. It is interesting to note that the quantum gravity correction to the black hole energy is always negative and inversely proportional to the black hole mass, while the quantum gravity correction to the black hole entropy is negative, positive or zero in the cases $`\frac{M}{M_0}<e`$, $`>e`$, or $`=e`$, respectively. In summary, the gravitational thermodynamics of FLS in the one-loop approximation is exactly the same as that of Hawking-Unruh radiation in thermal equilibrium. In other words, we have shown unambiguously that the Hawking-Unruh thermal radiation not only contains matter particles but also contains thermal gravitons, i.e. the contribution of the quantum FLS of space-time. We have also shown that one-loop quantum gravity gives a quantum correction to the classical thermodynamics of the Schwarzschild black hole. ###### Acknowledgements. This work is supported by the National Natural Science Foundation of China under Grant No. 19473005.