# Perturbative study of multiphoton processes in the tunneling regime
## Abstract
A perturbative study of the Schrödinger equation in a strong electromagnetic field in the dipole approximation is carried out in the Kramers-Henneberger frame. A proof that only odd harmonics appear in the spectrum for a linearly polarized laser field is given, assuming that the atomic radius is much smaller than the free-electron quiver motion amplitude. Within this approximation a perturbation series in the Keldysh parameter is obtained, giving a description of multiphoton processes in the tunneling regime. The theory is applied to hydrogen-like atoms: the spectrum of higher order harmonics and the above-threshold ionization rate are derived. The ionization rate computed in this way determines the amplitudes of the harmonics. The wave function of the atom proves to be rigid with respect to the perturbation, so that the effect of the laser field on the Coulomb potential can be neglected as a first approximation in the computation of the probability amplitudes: this approximation improves as the ratio between the amplitude of the electron quiver motion and the atomic radius becomes larger. The semiclassical description currently adopted for harmonic generation is thus rederived by solving the Schrödinger equation perturbatively.
The availability of powerful sources of laser light has permitted, in recent years, experiments in gaseous media that have revealed several new physical effects, such as photoionization with the number of photons absorbed by the electron well above the ionization threshold, and the generation of a broad range of harmonics of the laser frequency . This latter effect could have many technological applications and, as such, has been widely studied both theoretically and experimentally.
The possibility of turning a physical effect into a practical application is strongly linked to the availability of a satisfactory theoretical model. However, it is commonly believed that, due to the intensity of the laser field, no perturbation theory is possible. The main aim of this paper is then to show how perturbation theory can be straightforwardly applied even to intense laser fields, and how analytical expressions can be computed for any kind of multiphoton process, at least for hydrogen-like atoms. The expansion parameter turns out to be the square root of the ratio between the ionization energy $`I_B`$ and the ponderomotive energy $`U_p`$ (the latter proportional to the laser intensity), known in the literature as the Keldysh parameter $`\gamma `$. The regime of a small Keldysh parameter characterizes the so-called tunneling regime, which is the one of interest here.
Theoretical approaches to multiphoton processes are non-perturbative in nature and resort to Floquet theory as in , numerical methods applied directly to the Schrödinger equation as done first in , or semiclassical models . On the basis of the semiclassical ideas, a quantum theory of harmonic generation has been obtained by L’Huillier and coworkers in : our theory justifies the main assumptions of the quantum theory of these authors, so that, in turn, the semiclassical ideas prove to be a fairly good description of harmonic generation.
The approach we apply to the Schrödinger equation for an atom in an electromagnetic field can be easily understood using a two-level model, widely used for harmonic generation . This model has the Hamiltonian (here and in the following we will take $`\mathrm{}=c=1`$)
$$H=\frac{\omega _0}{2}\sigma _3+\mathrm{\Omega }\mathrm{cos}(\omega t)\sigma _1$$
(1)
where $`\omega _0`$ is the level separation, $`\mathrm{\Omega }`$ the intensity of the laser field and $`\omega `$ its frequency, and $`\sigma _1`$ and $`\sigma _3`$ are Pauli matrices. If $`\mathrm{\Omega }`$ is small with respect to $`\omega _0`$, standard perturbation theory applies in the interaction picture, through a unitary transformation that removes the unperturbed part of the Hamiltonian: this gives a Dyson series in the small expansion parameter $`\mathrm{\Omega }/(\omega _0\pm \omega )`$, out of resonance. Recently, duality has been introduced in perturbation theory and a dual interaction picture has been devised in which one instead performs a unitary transformation that removes the perturbation. For the above Hamiltonian one has to take $`U=e^{-i\sigma _1\frac{\mathrm{\Omega }}{\omega }\mathrm{sin}(\omega t)}`$, which yields the transformed Hamiltonian
$`H_F`$ $`=`$ $`\frac{\omega _0}{2}e^{2i\sigma _1\frac{\mathrm{\Omega }}{\omega }\mathrm{sin}(\omega t)}\sigma _3`$ (2)
$`=`$ $`\frac{\omega _0}{2}J_0\left(\frac{2\mathrm{\Omega }}{\omega }\right)\sigma _3+\frac{\omega _0}{2}\sum _{n\ne 0}J_n\left(\frac{2\mathrm{\Omega }}{\omega }\right)e^{in\sigma _1\omega t}\sigma _3`$
where now perturbation theory can be done for $`\mathrm{\Omega }\gg \omega _0,\omega `$. We see straightforwardly that the unperturbed part of the Hamiltonian is “dressed” by the laser field, so that the energy levels are shifted. The perturbation then contains both odd and even harmonics of the laser frequency, and both can appear in the spectrum. But probability amplitudes that enter the computation of the spectrum do not depend on the unitary transformations one performs on the Hamiltonian and the states. So, we have sketched the physics of the two-level model in an intense monochromatic field just through the dual interaction picture. As we will show, the two-level model does not apply to current experiments with atomic samples, where one observes just odd harmonics in agreement with our full theory, and this is not just a matter of a proper experimental setup. Nevertheless, it could have a wide range of applications in magnetic resonance experiments, in other kinds of media such as optical cavities, or wherever the conditions met for atomic samples are no longer fulfilled.
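The rearrangement in eq. (2) is just the Jacobi–Anger expansion. As a quick sanity check, here is a minimal numerical verification of the identity underlying the dressed Hamiltonian (a sketch assuming NumPy and SciPy; the test values standing in for $`2\mathrm{\Omega }/\omega `$ and $`\omega t`$ are arbitrary):

```python
import numpy as np
from scipy.special import jv  # Bessel functions J_n

# Jacobi-Anger identity behind eq. (2): exp(i z sin(phi)) = sum_n J_n(z) exp(i n phi)
z = 2 * 0.8        # plays the role of 2*Omega/omega
phi = 0.7          # plays the role of omega*t
lhs = np.exp(1j * z * np.sin(phi))
rhs = sum(jv(n, z) * np.exp(1j * n * phi) for n in range(-40, 41))
print(abs(lhs - rhs))  # ~1e-16: the expansion of the dressed Hamiltonian is exact
```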
The dual interaction picture applies in the same way to the Schrödinger equation in a semiclassical laser field within the dipole approximation, as currently treated in the literature . The correspondence with the two-level model above is remarkable. The Hamiltonian in this case is
$$H=\frac{\mathbf{p}^2}{2m}+V(\mathbf{x})+\frac{e}{m}\mathbf{A}(t)\cdot \mathbf{p}+\frac{e^2}{2m}\mathbf{A}^2(t).$$
(3)
By the unitary transformation $`U(t)=\mathrm{exp}\left(-i\frac{e}{m}\int _0^tdt^{\prime }\,\mathbf{A}(t^{\prime })\cdot \mathbf{p}-i\frac{e^2}{2m}\int _0^tdt^{\prime }\,\mathbf{A}^2(t^{\prime })\right)`$ the above Hamiltonian transforms into
$$H_{KH}=U^{\dagger }(t)\left(\frac{\mathbf{p}^2}{2m}+V(\mathbf{x})\right)U(t)=\frac{\mathbf{p}^2}{2m}+V[\mathbf{x}+\mathbf{a}(t)]$$
(4)
where $`\mathbf{a}(t)=\frac{e}{m}\int _0^tdt^{\prime }\,\mathbf{A}(t^{\prime })`$. This is the well-known Kramers-Henneberger Hamiltonian, and the unitary transformation above defines the so-called Kramers-Henneberger frame; it shows that the effect of the electromagnetic field is to introduce a time-dependent translation of the potential of the unperturbed Hamiltonian by a length $`\mathbf{a}(t)`$. The laser field can be modeled as $`\mathbf{A}(t)=\frac{E(t)}{\omega \sqrt{1+\xi ^2}}[\widehat{\mathbf{x}}\mathrm{cos}(\omega t)+\xi \widehat{\mathbf{y}}\mathrm{sin}(\omega t)]`$ for a general ellipticity parameter $`\xi `$. Here we consider the simplest case of linear polarization, $`\xi =0`$, and an instantaneous turn-on of the laser field, that is $`E(t)=\mathrm{const}`$. So, one has
$$H_{KH}=\frac{\mathbf{p}^2}{2m}+\int _{-\lambda _L}^{\lambda _L}\frac{dx^{\prime }}{\pi }\frac{V(x-x^{\prime },y,z)}{\sqrt{\lambda _L^2-x^{\prime 2}}}+\sum _{k=1}^{+\infty }i^k\left[e^{ik\omega t}+(-1)^ke^{-ik\omega t}\right]v_k(\mathbf{x})$$
(5)
with
$$v_k(\mathbf{x})=\int _{-\lambda _L}^{\lambda _L}\frac{dx^{\prime }}{\pi }V(x-x^{\prime },y,z)\frac{T_k\left(\frac{x^{\prime }}{\lambda _L}\right)}{\sqrt{\lambda _L^2-x^{\prime 2}}}$$
(6)
where $`T_k(x)=\mathrm{cos}(k\mathrm{arccos}(x))`$ is the $`k`$-th Chebyshev polynomial of the first kind and $`\lambda _L=\frac{eE}{m\omega ^2}=\sqrt{\frac{4U_p}{m}}\frac{1}{\omega }`$ is the maximum excursion of the free-electron quiver motion. This length is pivotal in the study of atoms in an intense laser field, as generally one has $`\lambda _L\gg a`$, with $`a=\frac{1}{mZe^2}`$ the Bohr radius. One can see that, as for the two-level model, the potential of the unperturbed part of the Hamiltonian is “dressed” by the laser field, and all the harmonics, odd and even, are present in the perturbation. We now show that, in all current experiments where the potential $`V(\mathbf{x})`$ depends just on $`r=|\mathbf{x}|`$ and $`\lambda _L\gg a`$, only odd harmonics appear in the spectrum. Indeed, we can rewrite eq.(6) as
$$v_k(\mathbf{x})=\int _{-1}^1dx^{\prime }\,V\left(\sqrt{(x-\lambda _Lx^{\prime })^2+y^2+z^2}\right)\frac{T_k(x^{\prime })}{\pi \sqrt{1-x^{\prime 2}}}.$$
(7)
If the laser field is intense enough, a series in $`\frac{a}{\lambda _L}`$ is obtained by expanding eq.(7) in a Taylor series as
$`v_k(\mathbf{x})`$ $`=`$ $`\int _{-1}^1dx^{\prime }\,V\left(\sqrt{(x-\lambda _Lx^{\prime })^2+y^2+z^2}\right)\Big |_{x=0,y=0,z=0}\frac{T_k(x^{\prime })}{\pi \sqrt{1-x^{\prime 2}}}`$ (8)
$`-x\int _{-1}^1dx^{\prime }\,V^{\prime }(\lambda _L|x^{\prime }|)\frac{x^{\prime }}{|x^{\prime }|}\frac{T_k(x^{\prime })}{\pi \sqrt{1-x^{\prime 2}}}+\cdots .`$
Despite its appearance, the terms of this series can be evaluated for a Coulomb potential and proved to be finite, assuring convergence. This is because in this case the integrals can be computed analytically. Two main conclusions can then be drawn from the above expression. Firstly, multiphoton effects are due to a dipole induced on the atom in the same direction as the electric field of the laser. Secondly, Chebyshev polynomials have a definite parity and, due to the symmetric range of integration, only odd polynomials give a non-null contribution to the second term, while the first term has no physical consequences and will be neglected in the following. So, only odd harmonics contribute to the spectrum, while even harmonics are quadrupole radiation and thus strongly suppressed. Indeed, for a Coulomb potential one obtains
$$v_{2n+1}(\mathbf{x})\approx i(-1)^n\frac{x}{\lambda _L}(2n+1)\frac{Ze^2}{\lambda _L}.$$
(9)
This result, which involves no approximation other than the symmetry of the potential and the largeness of the electron quiver amplitude with respect to the atomic radius, supports in some way the physical picture recently given in , where it is assumed that the electron, recolliding with the atomic core, emits bremsstrahlung radiation that is cut off at the maximum amplitude of the quiver motion of the electron, producing in this way just odd harmonics.
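This parity argument is easy to check numerically. The weight $`1/(\pi \sqrt{1-x^{\prime 2}})`$ in eq. (7) is exactly the Gauss–Chebyshev one, so $`v_k`$ can be evaluated by quadrature. The following sketch (our own illustration, not part of the original computation; it uses a softened Coulomb potential and arbitrary units with $`Ze^2=1`$) shows that the part of $`v_k`$ odd in $`x`$, the one responsible for dipole radiation, survives only for odd $`k`$:

```python
import numpy as np

lam, eps = 50.0, 1e-3   # quiver amplitude lambda_L >> atomic scale; eps softens the Coulomb core
N = 4001
xi = np.cos((2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N))  # Gauss-Chebyshev nodes

def v_k(k, x, y=0.1, z=0.0):
    # eq. (7): the node weight pi/N cancels the 1/(pi*sqrt(1-x'^2)) factor
    r2 = (x - lam * xi)**2 + y**2 + z**2
    V = -1.0 / np.sqrt(r2 + eps**2)        # softened Coulomb, Ze^2 = 1
    Tk = np.cos(k * np.arccos(xi))         # Chebyshev polynomial T_k
    return np.sum(V * Tk) / N

for k in range(1, 6):
    odd_part = v_k(k, 0.05) - v_k(k, -0.05)
    print(k, odd_part)   # sizeable for odd k, zero to machine precision for even k
```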
To complete the above discussion before introducing perturbation theory, we have to study the “dressed” potential $`v_0`$. This should be managed differently from the time-dependent part. Indeed, we have to separate the original potential $`V(r)`$ from the shifts induced by the laser field on the energy levels of the atom. This can be obtained by a Taylor expansion as
$`v_0(\mathbf{x})`$ $`=`$ $`\int _{-1}^1\frac{dx^{\prime }}{\pi }\frac{V\left(\sqrt{(x-\lambda _Lx^{\prime })^2+y^2+z^2}\right)}{\sqrt{1-x^{\prime 2}}}`$ (10)
$`=`$ $`V(r)+\delta _LV(\mathbf{x})`$
$`=`$ $`V(r)+\frac{\lambda _L^2}{4r^3}\left[V^{\prime }(r)y^2+V^{\prime }(r)z^2+V^{\prime \prime }(r)x^2r\right]+\cdots `$
where it is seen that only even terms survive and higher order terms fall off very rapidly with $`r`$. The above expression takes a very simple form for a Coulomb potential:
$$v_0(\mathbf{x})=-\frac{Ze^2}{r}\left[1+\sum _{n=1}^{+\infty }A_n\left(\frac{\lambda _L}{r}\right)^{2n}P_{2n}\left(\frac{x}{r}\right)\right]$$
(11)
where $`A_n=\int _{-1}^1dx\,x^{2n}/(\pi \sqrt{1-x^2})`$ and $`P_n`$ is the $`n`$-th Legendre polynomial. This way of expressing the dressed Coulomb potential gives us a way to prove that the wave function is “rigid” with respect to the perturbation, using the standard Rayleigh-Schrödinger perturbation scheme, for the kind of problems we discuss here. It should be pointed out, however, that for stabilization things are quite different .
The equations for the amplitudes are given by
$`i\dot{a}_m(t)`$ $`=`$ $`\sum _{n\ne m}a_n(t)\langle m|\delta _LV(\mathbf{x})|n\rangle e^{i(\tilde{E}_n-\tilde{E}_m)t}+`$ (12)
$`\sum _n\sum _{k=1}^{+\infty }i^ka_n(t)\langle m|v_k(\mathbf{x})|n\rangle \left[e^{i(\tilde{E}_n-\tilde{E}_m-k\omega )t}+(-1)^ke^{i(\tilde{E}_n-\tilde{E}_m+k\omega )t}\right]`$
where we have set $`\tilde{E}_n=E_n+\langle n|\delta _LV(\mathbf{x})|n\rangle `$, with $`\delta _LV(\mathbf{x})`$ the part of the static potential due to the laser field. At this point, all the machinery of standard perturbation theory applies . For our purposes, we have to show that the Rayleigh-Schrödinger part indeed gives a small contribution to the amplitudes. Assuming the atom is initially in its ground state, this contribution is
$$a_m^{RS}(t)\approx \frac{\langle m|\delta _LV(\mathbf{x})|1\rangle }{\tilde{E}_1-\tilde{E}_m}\left(e^{i(\tilde{E}_1-\tilde{E}_m)t}-1\right).$$
(13)
Using eq.(11) it is easy to verify that no contribution comes from $`m=2`$, as $`\langle 2|\delta _LV(\mathbf{x})|1\rangle =0`$, but the degeneracy of level 2 is removed by the dressed potential: one has $`\langle m=2,l=1,l_z=0|\delta _LV(\mathbf{x})|m=2,l=1,l_z=0\rangle =Ze^2/(240a)(\lambda _L/a)^2`$ and $`\langle m=2,l=1,l_z=\pm 1|\delta _LV(\mathbf{x})|m=2,l=1,l_z=\pm 1\rangle =Ze^2/(480a)(\lambda _L/a)^2`$, while $`\langle m=2,l=0,l_z=0|\delta _LV(\mathbf{x})|m=2,l=0,l_z=0\rangle =0`$. Indeed, one can see that all the states with $`m`$ even give no first order contribution even if the level shift is not null, while the level shift is always $`0`$ when $`l=0`$. Instead, for $`m=3`$ one has e.g. $`\langle m=3,l=2,l_z=0|\delta _LV(\mathbf{x})|m=1,l=0,l_z=0\rangle =Ze^2\sqrt{150}/(10800a)(\lambda _L/a)^2`$ and, for the level shifts, $`\langle m=3,l=2,l_z=0|\delta _LV(\mathbf{x})|m=3,l=2,l_z=0\rangle =Ze^2/(5670a)(\lambda _L/a)^2-Ze^2/(136080a)(\lambda _L/a)^4`$ and $`\langle 1|\delta _LV(\mathbf{x})|1\rangle =0`$, so the correction of eq.(13) turns out to be
$$a_{3,2,0}^{RS}(t)\approx \frac{\frac{\sqrt{150}}{10800}\left(\frac{\lambda _L}{a}\right)^2}{\frac{4}{9}+\frac{1}{5760}\left(\frac{\lambda _L}{a}\right)^2-\frac{1}{136080}\left(\frac{\lambda _L}{a}\right)^4}\left(e^{i\frac{8}{9}E_1t}-1\right)$$
(14)
which is indeed negligible, so the wave function turns out to be “rigid” with respect to the deformations introduced by the laser field. This is all the more true the larger the ratio $`\lambda _L/a`$ becomes. The reason is that only a finite number of terms of eq.(11) give a non-null contribution to the matrix elements. It is interesting to note that for the stabilization of an atom in an intense laser field the situation is exactly the opposite, as one should be able to diagonalize the Hamiltonian $`H_0=\mathbf{p}^2/2m+v_0(\mathbf{x})`$ with the time-dependent part negligible, an approximation that becomes exact in the limit of infinite frequency of the laser field .
Then, the iterative procedure for solving eq.(12) can be applied to compute the transition probability for any process. This approach implies that off-resonant contributions should be systematically neglected. In this way, a golden rule is straightforwardly obtained as
$`P_{if}=2\pi \sum _{n=1}^{+\infty }|\langle i|v_n(\mathbf{x})|f\rangle |^2\delta \left[\tilde{E}_f-\tilde{E}_i-n\omega \right]`$ (15)
from which several results for multiphoton processes can be obtained. A continuum of final states to sum over is assumed, so that excited levels can decay; otherwise quantum resonance theory applies and Rabi flopping is obtained. In any case, going to second order gives a.c. Stark shifts of the energy levels. The Rabi frequency due to resonance of the $`k`$-th harmonic of the perturbation with two levels $`m`$ and $`n`$ of the atom is (ref.) $`\frac{\mathrm{\Omega }_R}{2}=|\langle m|v_k(\mathbf{x})|n\rangle |`$.
From eq.(15) we can easily compute the rate of above-threshold ionization. For hydrogen-like atoms \[eq.(9)\], assuming the atom initially in its ground state, one has
$$\mathrm{\Gamma }=\frac{32}{3}\frac{\omega ^2}{U_p}\gamma ^2\sum _{n=n_0}^{+\infty }\left[\frac{I_B}{(2n+1)\omega }\right]^{\frac{5}{2}}\left[1-\frac{I_B}{(2n+1)\omega }\right]^{\frac{3}{2}}$$
(16)
where $`n_0`$ is the minimum integer for which $`(2n_0+1)\omega -I_B\ge 0`$. We have used the fact that, as shown above, for the ground state of hydrogen-like atoms there is no shift by the part of the static potential due to the laser field, that is, $`\langle 1|\delta _LV(\mathbf{x})|1\rangle =0`$ for a Coulomb potential. Besides, a plane wave is assumed for the particle in the final state to simplify the computation. Taking the experimental results of ref. , we can check the above expression for helium and neon, which show a large plateau in the tunneling regime. So, we have $`U_p=`$ 155 eV for an intensity of 1.5 $`\times `$ $`10^{15}`$ W/cm<sup>2</sup>, $`\omega =`$ 1.177 eV and $`I_B=`$ 24.59 eV. Then, $`\gamma \approx 0.4`$ and $`\mathrm{\Gamma }\approx 0.026`$ eV, which is small, as should be expected. The same computation for neon gives approximately 0.02 eV.
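These numbers are easy to reproduce from eq. (16). The sketch below (our own check, assuming the definition $`\gamma =\sqrt{I_B/U_p}`$ stated in the introduction; the sum is truncated at a large but finite order) gives $`\gamma \approx 0.4`$, $`n_0=10`$ and $`\mathrm{\Gamma }`$ of a few times $`10^{-2}`$ eV for the helium parameters, consistent with the quoted estimate:

```python
import numpy as np

Up, omega, IB = 155.0, 1.177, 24.59          # helium values quoted in the text (eV)
gamma = np.sqrt(IB / Up)                     # Keldysh parameter
n0 = int(np.ceil((IB / omega - 1) / 2))      # smallest n with (2n+1)*omega - IB >= 0
n = np.arange(n0, 200000)                    # truncation order: terms decay as n**(-5/2)
r = IB / ((2 * n + 1) * omega)
Gamma = (32 / 3) * omega**2 / Up * gamma**2 * np.sum(r**2.5 * (1 - r)**1.5)
print(gamma, n0, Gamma)                      # ~0.40, 10, a few times 1e-2 eV
```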
To analyse harmonic generation, one has to compute $`<x>=\langle \mathrm{\Psi }(t)|x|\mathrm{\Psi }(t)\rangle `$. To carry out this computation, we assume that no intermediate resonance is present, and we will justify this assumption a posteriori through the quantum resonance theory of ref. , which applies here. So, let us take an atom initially prepared in its ground state, so that $`a_i(0)=\delta _{i0}`$. From eq.(12) one has
$`a_m(t)`$ $`=`$ $`\delta _{m0}+\frac{\langle m|\delta _LV(\mathbf{x})|0\rangle }{\tilde{E}_0-\tilde{E}_m-iϵ}e^{i(\tilde{E}_0-\tilde{E}_m-iϵ)t}+`$ (17)
$`\sum _{k=1}^{+\infty }i^k\langle m|v_k(\mathbf{x})|0\rangle \left[\frac{e^{i(\tilde{E}_0-\tilde{E}_m-k\omega -iϵ)t}}{\tilde{E}_0-\tilde{E}_m-k\omega -iϵ}+(-1)^k\frac{e^{i(\tilde{E}_0-\tilde{E}_m+k\omega -iϵ)t}}{\tilde{E}_0-\tilde{E}_m+k\omega -iϵ}\right]+\cdots `$
with the limit $`ϵ\to 0`$ understood, so as to have $`\frac{1}{x\pm i0}=P\frac{1}{x}\mp i\pi \delta (x)`$, with $`P`$ the principal value. As is customary in perturbation theory, we keep just those terms that are nearly resonant with the harmonics of the perturbation: the only possibility left is the continuous spectrum, as it should be according to the current understanding of harmonic generation. So, we take
$$a_{\mathbf{p}}(t)\approx \sum _{k=1}^{+\infty }i^k\langle \mathbf{p}|v_k(\mathbf{x})|0\rangle (-1)^k\frac{e^{i(E_{\mathbf{p}}-\tilde{E}_0-k\omega +iϵ)t}}{E_{\mathbf{p}}-\tilde{E}_0-k\omega +iϵ}$$
(18)
where $`\mathbf{p}`$ is the momentum of the particle in the continuous part of the spectrum. Now, we specialize this expression to the case of hydrogen-like atoms, obtaining
$$a_{\mathbf{p}}(t)\approx \frac{Ze^2}{\lambda _L^2}\sum _{n=0}^{+\infty }(2n+1)\langle \mathbf{p}|x|0\rangle \frac{e^{i(E_{\mathbf{p}}-E_0-(2n+1)\omega +iϵ)t}}{E_{\mathbf{p}}-E_0-(2n+1)\omega +iϵ}.$$
(19)
Then, for the dipole moment one has
$$<x>\approx \sum _{\mathbf{p}}a_{\mathbf{p}}(t)e^{-i(E_{\mathbf{p}}-E_0)t}\langle 0|x|\mathbf{p}\rangle +c.c.$$
(20)
After passing from the sum to an integration through $`\sum _{\mathbf{p}}\to V\int \frac{d^3p}{(2\pi )^3}`$ and taking a plane wave for the final state, one gets the final expression for the harmonic spectrum
$$<x>\approx \frac{64}{3^{\frac{9}{2}}}\frac{Ze^2\omega }{U_p^2}\gamma ^5\sum _{n=n_0}^{+\infty }\frac{x_n^{\frac{3}{2}}}{\left(x_n+\frac{\gamma ^2}{3}\right)^5}\mathrm{sin}((2n+1)\omega t)$$
(21)
where $`x_n=\frac{(2n+1)\omega -I_B}{3U_p}`$. The normalization of $`x_n`$ to $`3U_p`$ originates from the fact that, from the above expression, the intensities of the harmonics decrease as the factor $`3U_p`$ increases. Then, if the Keldysh parameter $`\gamma `$ is small enough, we can take
$$<x>\approx \frac{64}{3^{\frac{9}{2}}}\frac{Ze^2\omega }{U_p^2}\gamma ^5\sum _{n=n_0}^{+\infty }\frac{1}{x_n^{\frac{7}{2}}}\mathrm{sin}((2n+1)\omega t)$$
(22)
so that the harmonic amplitudes are large only for $`x_n\lesssim 1`$. This is the approximate cut-off law found through semiclassical methods in ref. . The existence of a minimum harmonic order $`n_0`$ should also be stressed, which is to be expected due to the close connection between harmonic generation and multiphoton ionization. Indeed, this lower bound comes out of the phase space through the integration of the Dirac delta function, both for the golden rule (15) and for the computation of the dipole moment $`<x>`$. One gets $`n_0=10`$ and $`9`$ for helium and neon respectively, which means harmonic $`21`$ as the starting point of the spectrum in the regime of interest. It should be pointed out that the above equation for $`<x>`$ has to properly take into account the ionization rate $`\mathrm{\Gamma }`$ of eq.(16), so as to have at last
$$<x>\approx \frac{64}{3^{\frac{9}{2}}}\frac{Ze^2\omega }{U_p^2}\gamma ^5\sum _{n=n_0}^{+\infty }\frac{x_n^{\frac{3}{2}}}{\left(x_n+\frac{\gamma ^2}{3}\right)^5}\mathrm{sin}((2n+1)\omega t)\,e^{-\mathrm{\Gamma }t}.$$
(23)
One can estimate the constant factor that determines the amplitude of the harmonics, $`\frac{64}{3^{\frac{9}{2}}}\frac{Ze^2\omega }{U_p^2}\gamma ^5`$. Indeed, for helium one obtains approximately 0.32 $`\times `$ 10<sup>-8</sup> eV<sup>-1</sup> and for neon about 0.12 $`\times `$ 10<sup>-7</sup> eV<sup>-1</sup>, showing, as it should be, a larger amplitude for neon.
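As an illustration of the cut-off law, the relative harmonic amplitudes of eq. (23) at $`t=0`$ can be tabulated directly (a sketch with the helium parameters used above; $`Ze^2`$ is absorbed into the overall scale, so only the shape of the spectrum is meaningful):

```python
import numpy as np

Up, omega, IB = 155.0, 1.177, 24.59      # helium parameters (eV)
gamma = np.sqrt(IB / Up)
n = np.arange(10, 300)                   # n0 = 10 for helium
order = 2 * n + 1
xn = (order * omega - IB) / (3 * Up)
amp = xn**1.5 / (xn + gamma**2 / 3)**5   # shape of the harmonic amplitudes, eq. (23)
print(order[np.argmax(amp)])             # order of the strongest harmonic
print(order[xn > 1][0])                  # beyond x_n ~ 1, i.e. (2n+1)w ~ I_B + 3U_p, amplitudes collapse
```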
A further analysis concerns the effect of intermediate resonances on the spectrum of harmonics. On the basis of the theory of ref. , one can rewrite eq.(18) as
$$a_{\mathbf{p}}(t)\approx \sum _{k=1}^{+\infty }i^k\langle \mathbf{p}|v_k(r)|0\rangle (-1)^k\frac{e^{i(E_{\mathbf{p}}-\tilde{E}_0-k\omega +iϵ)t}}{E_{\mathbf{p}}-\tilde{E}_0-k\omega +iϵ}\mathrm{cos}\left(\frac{\mathrm{\Omega }_R}{2}t\right)$$
(24)
with $`\mathrm{\Omega }_R`$ the Rabi frequency computed taking into account the resonances between the ground state and the other discrete levels. To compute the above expression we assumed that the atom is initially prepared in its ground state so that $`a_0(t)=\mathrm{cos}\left(\frac{\mathrm{\Omega }_R}{2}t\right)`$, essentially the rotating wave approximation. It is easy to see that the harmonics in the spectrum are shifted by the quantity $`\pm \frac{\mathrm{\Omega }_R}{2}`$.
The above theory could have wide applicability since, in principle, analytical formulae can be computed for any multiphoton process and compared with experimental results. For instance, an improvement easy to implement is the use of a full Coulomb wave function for the final state in the above computations. On the other hand, even if the major features of multiphoton processes are described by this theory, several problems surely remain open, such as the applicability of the theory to an ellipticity parameter $`\xi \ne 0`$, the introduction of a slower turn-on of the laser field, or how to take into account all the features of real harmonic-generation experiments. Besides, when the intensity of the laser field becomes too high the above approach should be properly modified, as relativistic effects enter the physical picture and, e.g., even harmonics can also become significant . Experiments generating even harmonics have also been carried out using solid surfaces, as in . Anyhow, it should be stressed that the possibility of deriving a perturbative solution to the Schrödinger equation offers a chance to check models of multiphoton physics that no other approach provides.
# Oscillation modes of two-dimensional nanostructures within the time-dependent local-spin-density approximation
## Abstract
We apply the time-dependent local-spin-density approximation as a general theory to describe ground states and spin-density oscillations in the linear response regime of two-dimensional nanostructures of arbitrary shape. For this purpose, a frequency analysis of the simulated real-time evolution is performed. The effect on the response of the recently proposed spin-density waves in the ground state of certain parabolic quantum dots is considered. They lead to the prediction of a new class of excitations, soft spin-twist modes, with energies well below that of the spin dipole oscillation.
Recent advances in semiconductor technology nowadays allow the fabrication of nanostructures with many different shapes. In these systems the electrons, which are laterally confined at the semiconductor boundary, form a two-dimensional quantum dot with a shape which, to a certain extent, follows that of the nanostructure. This opens up the exciting possibility of producing and studying an enormous variety of quantum dots, or artificial atoms as they are often called. For instance, it has been shown that the electronic structure in the small vertical quantum dots of Ref. is given by the successive filling of shells obeying Hund’s rules as in atoms. Very relevant information about electronic excitations in quantum dots is also presently obtained from sophisticated far-infrared absorption and light scattering experiments .
Up to now, the great majority of experimental and theoretical efforts have focused on quantum dots with circular symmetry. Many of the properties of circular dots are well reproduced by considering the electrons as confined by a parabolic potential, or by a simple jellium disk. To treat the electronic interactions, besides exact diagonalization for very small dots , the most successful approaches have been mean field theories like Hartree-Fock (HF) and density functional theory in the local-spin-density approximation (LSDA) .
The latter have been extended using the random-phase approximation (RPA) to analyze collective excitations . To our knowledge, all theoretical approaches addressing collective excitations in 2d quantum dots are limited from the start by the assumption of circular symmetry. In this Letter we show how LSDA can describe both the ground state and the linear response of 2d quantum dots of arbitrary shape by using, respectively, energy minimization and real-time simulation of the spin-density oscillations as basic principles. We will show how, from the response frequencies in the different channels (density, spin and free responses), it is possible to gain information about the system deformation in a quantitative way. Besides, we will also analyze the effect on the response of the recently proposed spin-density waves in the ground state of particular parabolic quantum dots. Static spin-density waves could manifest their existence by means of soft spin-twist modes, at energies well below that of the spin dipole oscillation.
Several authors have recently addressed the problem of describing quantum dot ground states within LSDA. In particular, in Ref. the single particle Kohn-Sham equations for electrons in a parabolic potential were solved avoiding any symmetry restriction by using a plane-wave basis. We use here the same LSDA functional of Ref. , i.e., the local functional based on the von Barth-Hedin interpolation of the Tanatar-Ceperley results for the non-polarized and fully polarized 2d electron gas. However, we employ a different technique, based on the discretization of the $`xy`$ plane in a grid of uniformly spaced points. For each spin ($`\eta =\uparrow ,\downarrow `$) the Kohn-Sham equations read
$`\left[-\frac{1}{2}\nabla ^2+v^{(\mathrm{conf})}(\mathbf{r})+v^{(H)}(\mathbf{r})+v_\eta ^{(xc)}(\mathbf{r})\right]\phi _{i\eta }(\mathbf{r})`$ (1)
$`=ϵ_{i\eta }\phi _{i\eta }(\mathbf{r})`$ $`,`$ (2)
where $`v^{(\mathrm{conf})}(\mathbf{r})`$ and $`v^{(H)}(\mathbf{r})=\int d\mathbf{r}^{\prime }\,\rho (\mathbf{r}^{\prime })/|\mathbf{r}-\mathbf{r}^{\prime }|`$ are, respectively, the confining and Hartree potentials. The exchange-correlation contributions are obtained from the local energy density $`\mathcal{E}_{xc}(\rho ,m)`$ by
$$v_\eta ^{(xc)}(\mathbf{r})=\frac{\partial }{\partial \rho _\eta }\mathcal{E}_{xc}(\rho ,m).$$
(3)
We have defined the total density and magnetization, in terms of the spin densities $`\rho _\eta (\mathbf{r})=\sum _i|\phi _{i\eta }(\mathbf{r})|^2`$, as $`\rho =\rho _{\uparrow }+\rho _{\downarrow }`$ and $`m=\rho _{\uparrow }-\rho _{\downarrow }`$, respectively.
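As an illustration of the grid technique, a minimal sketch of the discretized single-particle part of Eq. (1) is given below (our own example, assuming effective atomic units and sparse SciPy linear algebra; the Hartree and exchange-correlation terms, which would be added self-consistently, are omitted, and the grid parameters are illustrative):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

Ngrid, h = 64, 0.5                          # grid points per side and spacing (illustrative values)
x = (np.arange(Ngrid) - Ngrid / 2) * h
X, Y = np.meshgrid(x, x, indexing='ij')

D2 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(Ngrid, Ngrid)) / h**2
lap = sp.kron(D2, sp.eye(Ngrid)) + sp.kron(sp.eye(Ngrid), D2)   # five-point 2D Laplacian

wx, wy = 0.29, 0.22                         # deformed-parabola frequencies used in the text
vconf = 0.5 * (wx**2 * X**2 + wy**2 * Y**2)
h0 = -0.5 * lap + sp.diags(vconf.ravel())   # Kohn-Sham h without the Hartree/xc pieces

E = eigsh(h0, k=6, which='SM', return_eigenvectors=False)
print(np.sort(E))   # ~ (nx+1/2)*wx + (ny+1/2)*wy, the bare deformed-oscillator levels
```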
As a test of the numerical code using the $`xy`$ grid, we have checked that for a circular dot (namely, the parabolic one confined by $`\frac{1}{2}m\omega _0^2r^2`$, with $`r`$ the radial coordinate, $`\omega _0=0.25`$ H and $`N=20`$ electrons) we find the same solution obtained by solving only the radial equation and imposing $`e^{i\ell _i\theta }`$ as the angular part of the single particle wave functions. Next, we have considered different confining geometries for dots with $`N=20`$ electrons. In particular, we present here results for a deformed parabola $`\frac{1}{2}m(\omega _x^2x^2+\omega _y^2y^2)`$ with $`\omega _y=0.75\omega _x=0.22`$ H; for a square jellium with $`r_s=1.51`$, corresponding to a side length $`L=11.96a_B^{*}`$; and for a rectangular jellium with the same $`r_s`$ and sides $`L_y=0.75L_x=10.37a_B^{*}`$. A more systematic investigation of ground states for different sizes and confining geometries is left for future work. Here our aim is mainly to show the feasibility of the method and to concentrate on the spin-density oscillations.
The previously mentioned ground states are shown in Fig. 1. In the deformed parabola, the density shows an ellipsoidal shape with an aspect ratio similar to $`\omega _y/\omega _x`$. Three rows of local maxima can be seen in the inner part of the dot, aligned with the long axis. For the square jellium we obtain a rather abrupt electron density, with maxima at the corners and four additional inner maxima following the square symmetry. A similar structure is seen for the rectangle. Quite interestingly, while for the deformed parabola the magnetization vanishes everywhere, for the square and rectangle there is a magnetization wave in the ground state. The amplitude of this wave is approximately 15% and 25% of the maximum density, respectively. This finding is similar to the spin density waves predicted by Ref. in some circular parabolic dots.
The description of spin-density oscillations in quantum dots has raised great interest, mainly due to the manifestation of these modes in far-infrared absorption and in Raman scattering experiments . We refer here to general spin-density oscillations: when both spin components oscillate in phase they produce density modes, and when they are out of phase, spin modes. In circularly symmetric dots, density modes have been studied using the Hartree and Hartree-Fock methods. More recently the LSDA to density functional theory has been used in circular dots to describe density and spin channels , taking into account the coupling between both. All these methods are based on the perturbative treatment of the response by diagonalization of the residual interaction within a space of particle-hole excitations. They share as an essential ingredient the angular momentum selection rules given by the circular symmetry. In fact these methods use the well known RPA, based on different ground state theories.
To describe spin-density oscillations in an arbitrary structure, the RPA approach becomes practically unfeasible because of the enormous dimension of the matrices. This is due to the lack of symmetry, which forces one to deal with matrices $`\chi (x,y,x^{\prime },y^{\prime };\omega )`$ in the formal RPA equation $`\chi =\chi ^{(0)}+\chi ^{(0)}V_{ph}\chi `$, where $`\chi ^{(0)}`$ is the independent particle correlation function and $`V_{ph}`$ is the residual particle-hole interaction. The calculation is also complicated by the breaking of degeneracies with deformation, which greatly increases the number of different particle-hole pairs contributing to $`\chi ^{(0)}`$. An alternative approach that permits overcoming these problems is based on real-time methods. These in fact originate from time-dependent HF theory, and have been applied with success to nuclear and to cluster physics . In what follows we briefly comment on this approach and show how it applies to 2d nanostructures.
In the small amplitude limit, it is well known that both real time (TDHF, TDLSDA) and RPA methods based on the corresponding ground states become equivalent.
We have performed TDLSDA calculations by integrating the time-dependent Kohn-Sham equations
$$i\frac{\partial }{\partial t}\phi _{i\eta }(\mathbf{r},t)=h_\eta [\rho ,m]\phi _{i\eta }(\mathbf{r},t),$$
(4)
where $`h_\eta `$ is the operator in square brackets in Eq. (1). We have integrated Eq. (4) using the Crank-Nicolson algorithm (from time step $`n`$ to $`n+1`$)
$$\left(1+\frac{i\mathrm{\Delta }t}{2}h_\eta ^{(n+1)}\right)\phi _{i\eta }^{(n+1)}=\left(1-\frac{i\mathrm{\Delta }t}{2}h_\eta ^{(n)}\right)\phi _{i\eta }^{(n)}.$$
(5)
This is an implicit problem for $`\phi ^{(n+1)}`$, since $`h_\eta ^{(n+1)}`$ depends on the orbitals through the density and magnetization. In practice this forces one to proceed by iteration: with $`h_\eta `$ from the previous time step, a first guess of the new wave functions is obtained by solving (5). These are then used to build a new $`h_\eta `$ and restart the iteration. With a rather small time step it is enough to solve twice per time step. If $`h_\eta `$ were constant in time, the algorithm would be exactly unitary. We have found that with small $`\mathrm{\Delta }t`$ norm conservation is fulfilled with excellent accuracy.
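A compact sketch of this predictor-corrector step is given below (our own illustration; `h_of_orbitals` stands for whatever routine rebuilds the sparse Kohn-Sham matrix from the current orbitals, and the spin index is suppressed for brevity):

```python
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def cn_step(phis, h_of_orbitals, dt):
    """One Crank-Nicolson step, Eq. (5), solved twice as described in the text."""
    I = sp.identity(phis[0].size, format='csc', dtype=complex)
    h_n = h_of_orbitals(phis)                               # h at time step n
    rhs = [(I - 0.5j * dt * h_n) @ p for p in phis]
    # predictor: approximate h^(n+1) by h^(n)
    guess = [spsolve((I + 0.5j * dt * h_n).tocsc(), r) for r in rhs]
    # corrector: rebuild h from the predicted orbitals and solve once more
    h_np1 = h_of_orbitals(guess)
    return [spsolve((I + 0.5j * dt * h_np1).tocsc(), r) for r in rhs]
```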
In order to excite the oscillation modes of the system, an initial perturbation of the wave functions is needed. Physically, this corresponds for instance to the interaction with a short laser pulse or with an appropriate projectile. In the calculation, it can be mimicked simply by a rigid translation of the wave functions by means of the operator $`\mathcal{T}(\mathbf{a}_\sigma )=e^{\mathbf{a}_\sigma \cdot \nabla }`$ or by an initial impulse with $`\mathrm{\Pi }(\mathbf{q}_\sigma )=e^{i\mathbf{q}_\sigma \cdot \mathbf{r}}`$. When either $`\mathbf{a}_\sigma `$ or $`\mathbf{q}_\sigma `$ is small, these perturbations induce predominantly dipole oscillations and the system’s response is restricted to the linear regime. Total density and spin modes are obtained with the rigid translations $`\mathbf{a}_{\uparrow }=\mathbf{a}_{\downarrow }=\mathbf{a}`$ and $`\mathbf{a}_{\uparrow }=-\mathbf{a}_{\downarrow }=\mathbf{a}`$, respectively. With the impulse initial conditions these are: $`\mathbf{q}_{\uparrow }=\mathbf{q}_{\downarrow }=\mathbf{q}`$ (density), $`\mathbf{q}_{\uparrow }=-\mathbf{q}_{\downarrow }=\mathbf{q}`$ (spin). After the initial perturbation we keep track of the time dependent dipole moments $`d_t`$: $`\langle \mathbf{e}\cdot \mathbf{r}\rangle _t`$ for density modes and $`\langle \mathbf{e}\cdot \mathbf{r}\,\sigma _z\rangle _t`$ for spin modes. Here $`\mathbf{e}`$ is the direction of the initial perturbation given by $`\mathbf{a}`$ or $`\mathbf{q}`$.
A frequency analysis of the dipole signal $`d_t`$ gives the response frequencies of the system. Fourier transform methods can be used for this purpose. However, we have found a direct peak fit to the simulated signal to be more efficient: we perform a least squares minimization of $`\chi ^2=\sum _t(d_t-D(t))^2`$, where $`D(t)`$ is given by
$$D(t)=\sum _{n=1}^N\left[A_n\mathrm{cos}(\omega _nt)+B_n\mathrm{sin}(\omega _nt)\right].$$
(6)
In Eq. (6) a fixed number of frequencies is assumed. The minimization yields the amplitudes $`A_n`$, $`B_n`$ and frequencies $`\omega _n`$. Of course, it must be checked that the number $`N`$ of frequencies is large enough to provide a good reproduction $`D(t)`$ of the time series $`d_t`$ and that convergence with increasing $`N`$ has been reached. From the fitted $`D(t)`$ we obtain $`D(\omega )`$ as a discrete set of Dirac delta functions. In practice, these are smoothed into Lorentzians and the response strength is obtained as $`S(\omega )=|D(\omega )|`$, or the power spectrum as $`𝒫(\omega )=|D(\omega )|^2`$.
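A possible implementation of the fit is sketched below (assuming SciPy; the frequency guesses `w0` would in practice come from an FFT peak search, and the small nonzero starting amplitudes avoid a degenerate Jacobian):

```python
import numpy as np
from scipy.optimize import least_squares

def fit_frequencies(t, d, w0):
    """Fit D(t) of Eq. (6) to the dipole signal d sampled at times t."""
    Nf = len(w0)
    def resid(p):
        A, B, w = p[:Nf], p[Nf:2*Nf], p[2*Nf:]
        D = (A[:, None] * np.cos(np.outer(w, t))
             + B[:, None] * np.sin(np.outer(w, t))).sum(axis=0)
        return D - d
    a0 = np.full(2 * Nf, 0.1 * np.abs(d).max())   # small nonzero amplitude guesses
    sol = least_squares(resid, np.concatenate([a0, np.asarray(w0, float)]))
    return sol.x[2*Nf:], sol.x[:2*Nf]             # fitted frequencies, then A_n and B_n
```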
We have performed the response calculation in real time for the same dots with 20 electrons discussed above. Excellent agreement with the RPA calculation was obtained for the circular parabola. Figure 2 shows the three responses for the deformed parabola, for which the magnetization $`m(\mathbf{r})`$ is vanishingly small. The density response has only two peaks, which coincide with the parabola frequencies $`\omega _x=0.29`$ and $`\omega _y=0.22`$. This shows that TDLSDA satisfies the generalized Kohn theorem for a deformed parabola. Since the density dipole operator couples only to the center of mass motion, absorption in this channel can take place only exactly at the frequencies $`\omega _x`$ and $`\omega _y`$. The situation is completely different for the free and spin responses. They are rather fragmented and lie at lower energies. Peaks associated with oscillations along different axes are shown with different line types. The free response corresponds to keeping the effective confining potential in (4), $`v^{(\mathrm{conf})}+v^{(H)}+v_\eta ^{(xc)}`$, fixed at its static value. It thus models the oscillation of non-interacting particles in the static mean field. By comparing free, density and spin responses we see the different nature of the residual interaction in the two channels: weakly attractive in the spin response and repulsive in the density one. Figure 2 also shows the simulated time series in each case, as well as the fitted signal, which on the plot scale superposes on the simulated one.
Figure 3 shows the corresponding results for the $`N=20`$ electron dots in the square and rectangular jellium. The density response of the square is still characterized by a very dominant peak which, nevertheless, is slightly fragmented. The free and spin responses are more fragmented. Quite interestingly, the spin response of the square shows four groups of peaks with an almost constant separation of 0.05 H. In the rectangle case, the density response clearly shows that the oscillation frequencies are different in the $`x`$ and $`y`$ directions. The same fact is an additional source of fragmentation for the free and spin channels.
In circular parabolic dots LSDA predicts spontaneously symmetry-broken ground states, of the spin-density-wave type, for particular sizes . This LSDA spin density wave is more pronounced in quasi one-dimensional rings . In Ref. similar spin density waves, as well as Wigner crystallized ground states, have very recently been reported in circular parabolic dots using an unrestricted Hartree-Fock approach. There is, however, a continuing discussion about the interpretation and possible relevance of these states . In the rest of this Letter we present the TDLSDA result for the linear oscillations of a spin-density-wave ground state in a circular dot.
Finally, we have analyzed the oscillations of a dot with $`N=24`$ electrons in a circular parabolic confinement ($`\omega _0=0.24`$ H), for which there is a static spin-density wave in the LSDA ground state . We have found that the dipole oscillations of the spin densities are quite similar to those obtained from a fully circular model . However, the spin density wave can sustain a new type of oscillation, given by an alternating rotation of the two spin densities in opposite directions, i.e., a spin twist of the static wave excited with the rotation operator $`\mathcal{R}(\theta _\sigma )`$, with $`\theta _{\uparrow }=-\theta _{\downarrow }`$ being opposite rotation angles for each spin. The frequency of this mode is obtained by analyzing the time evolution of the circular currents that appear after an initial rotation with $`\mathcal{R}(\theta _\sigma )`$. Figure 4 shows the strength of the spin-twist mode, in comparison with the normal spin dipole mode. Spin-twist modes are very soft, with an energy well below that of the spin dipole. They could signal the existence of static spin density waves in circular dots . The spin-twist frequency reflects the curvature of the energy minimum of the symmetry-broken ground state with respect to the circular one. We expect that circular systems having strong spin density waves in their ground states will also exhibit enhanced spin-twist modes. This happens in circular parabolic dots with increasing $`r_s`$ at fixed $`N`$ (the $`N`$-systematics at fixed $`r_s`$ is less clear) or in quasi one-dimensional rings .
In conclusion, we have shown that TDLSDA can be applied to obtain the oscillation frequencies of nanostructures with arbitrary shape. It leads to the prediction of soft spin-twist modes in dots with circular parabolic confinement having static spin-density waves in their ground state.
This work was performed under Grant No. PB95-0492 from CICYT, Spain.
## 1 Introduction
We are used to thinking of Earth’s particle accelerators as the realm of special relativity. But nature can do better and bigger, and is able to accelerate really large masses to relativistic speeds. Besides cosmic rays, we now know that in the jets of those active galactic nuclei (AGN) which are also strong radio emitters plasma flows at $`v\approx 0.99c`$, and that in gamma–ray bursts (GRB) the fireball resulting from the release of $`10^{52}`$ erg in a volume of radius comparable to the Schwarzschild radius of a solar mass black hole reaches $`v\approx 0.999c`$. These speeds are so close to the speed of light that it is more convenient to use the corresponding Lorentz factor of the bulk motion, $`\mathrm{\Gamma }=(1-\beta ^2)^{-1/2}`$: for $`\mathrm{\Gamma }\gg 1`$, we have $`\beta \approx 1-1/(2\mathrm{\Gamma }^2)`$.
The energetics involved is huge. In AGNs, a fraction of a solar mass per year can be accelerated to $`\mathrm{\Gamma }\approx 10`$, leading to powers of $`10^{46}`$ erg s<sup>-1</sup> in bulk motion. In GRBs, the radiation we see, if isotropically emitted, can reach $`10^{54}`$ erg s<sup>-1</sup>, suggesting even larger values for the bulk motion power. Recently, very interesting sources have been discovered within our Galaxy through their activity in X–rays: they occasionally produce radio jets closely resembling those of radio–loud quasars. During these ejection episodes, the power in bulk motion can exceed $`10^{40}`$ erg s<sup>-1</sup>, a value thought to exceed the Eddington limit for these sources.
The study of these $`extended`$ objects moving close to $`c`$ requires taking into account the different travel paths of the photons reaching us. Curiously enough, the resulting effects were not studied until 1959, when Terrell (1959) pointed out that a moving sphere does not $`appear`$ contracted, but $`rotated`$, contrary to what was generally thought (even by Einstein himself …). These results, which were “academic” in those years, are now fully applied to the above mentioned relativistic cosmic objects.
In this paper I will present some of the evidence in support of relativistic bulk motion in astronomical objects, and then discuss how “text–book special relativity” has to be applied when the information is carried by photons.
## 2 Superluminal motion
Rees (1966) realized that an efficient way of transporting energy from the vicinity of super-massive black holes to the radio lobes of the then recently discovered radiogalaxies is through the bulk motion of relativistically moving plasma. If this plasma emits radiation on its way, then we ought to see moving spots of emission in radio maps. One of the most spectacular predictions by Rees was that this motion could appear to exceed the speed of light. This was indeed confirmed in the early seventies, when radio–interferometric techniques made it possible to link radio telescopes thousands of kilometers apart. Among the first few observed targets was 3C 279, a radio–loud quasar at a redshift of $`z=0.538`$. Bright spots in radio maps taken at intervals of months were apparently moving at a speed exceeding 10 times $`c`$. For obvious reasons, sources presenting this phenomenon are referred to as superluminal sources.
This phenomenon can be simply explained, as long as the plasma moves at a velocity close to $`c`$ at a small viewing angle (i.e. the angle between the velocity vector and the observer’s line of sight). Consider Fig. 1: suppose the moving blob emits a photon first from position $`A`$ and then from $`B`$.
The time between the two emissions, as measured by an observer who sees the blob moving, is $`\mathrm{\Delta }t_e`$. Therefore the distance $`AB`$ is equal to $`\beta c\mathrm{\Delta }t_e`$, and $`AC=\beta c\mathrm{\Delta }t_e\mathrm{cos}\theta `$. In the same time interval, the photon emitted in $`A`$ has reached the point $`D`$, and the distance $`AD`$ is equal to $`c\mathrm{\Delta }t_e`$. Thus the two photons are now separated by $`DC=AD-AC=c\mathrm{\Delta }t_e(1-\beta \mathrm{cos}\theta )`$. The difference between the arrival times of these two photons is then $`\mathrm{\Delta }t_a=\mathrm{\Delta }t_e(1-\beta \mathrm{cos}\theta )`$, and the projected separation of the blob in the two images is $`CB=c\beta \mathrm{\Delta }t_e\mathrm{sin}\theta `$, leading to an apparent velocity:
$$\beta _{app}=\frac{\beta \mathrm{sin}\theta }{1-\beta \mathrm{cos}\theta }$$
(1)
It can be readily seen that $`\beta _{app}>1`$ for $`\beta \to 1`$ and small viewing angles $`\theta `$. The apparent speed is maximized for $`\mathrm{cos}\theta =\beta `$, where $`\beta _{app}=\beta \mathrm{\Gamma }`$. Notice that this simple derivation does not require any Lorentz transformation (no $`\mathrm{\Gamma }`$ factor involved!). The superluminal effect arises only from the Doppler contraction of the arrival times of photons.
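These statements are immediate to verify numerically, e.g. (a minimal sketch assuming NumPy; the value $`\mathrm{\Gamma }=10`$ is just an example):

```python
import numpy as np

def beta_app(beta, theta):
    """Apparent transverse speed of Eq. (1), in units of c."""
    return beta * np.sin(theta) / (1 - beta * np.cos(theta))

Gamma = 10.0
beta = np.sqrt(1 - 1 / Gamma**2)
theta = np.linspace(1e-4, np.pi / 2, 200000)
print(beta_app(beta, theta).max(), beta * Gamma)   # maximum ~ beta*Gamma, reached at cos(theta) = beta
```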
## 3 Beaming
Let us assume that a source emits isotropically in its rest frame $`K^{\prime }`$. In the frame $`K`$, where the source moves relativistically, the radiation is strongly anisotropic, and three effects occur:
* Light aberration: photons emitted at right angles with respect to the velocity vector (in $`K^{}`$) are observed in $`K`$ to make an angle given by $`\mathrm{sin}\theta =1/\mathrm{\Gamma }`$. This means that in $`K`$ half of the photons are concentrated in a cone of semi-aperture angle corresponding to $`\mathrm{sin}\theta =1/\mathrm{\Gamma }`$.
* Arrival time of the photons: as discussed above, the emission and arrival time intervals are different. As measured in the same frame $`K`$ we have, as before, $`\mathrm{\Delta }t_a=\mathrm{\Delta }t_e(1-\beta \mathrm{cos}\theta )`$. If $`\mathrm{\Delta }t_e^{\prime }`$ is measured in $`K^{\prime }`$, then $`\mathrm{\Delta }t_e=\mathrm{\Gamma }\mathrm{\Delta }t_e^{\prime }`$, leading to
$$\mathrm{\Delta }t_a=\mathrm{\Gamma }(1-\beta \mathrm{cos}\theta )\mathrm{\Delta }t_e^{\prime }\equiv \frac{\mathrm{\Delta }t_e^{\prime }}{\delta }$$
(2)
Here we have introduced the factor $`\delta =1/[\mathrm{\Gamma }(1-\beta \mathrm{cos}\theta )]`$, referred to as the beaming or Doppler factor. It exceeds unity for small viewing angles, and in that case observed time intervals are contracted.
* Blueshift/Redshift of frequencies: since frequencies are the inverse of times, we just have $`\nu =\delta \nu ^{\prime }`$.
It can be demonstrated (see e.g. Rybicki & Lightman 1979) that the specific intensity $`I(\nu )`$ divided by the cube of the frequency is Lorentz invariant, and therefore
$$I(\nu )=\delta ^3I^{\prime }(\nu ^{\prime })=\delta ^3I^{\prime }(\nu /\delta ).$$
(3)
Integration over frequencies yields $`I=\delta ^4I^{\prime }`$. The corresponding transformation between bolometric luminosities sometimes generates confusion. It is often said that $`L=\delta ^4L^{\prime }`$. What this means is: if we estimate the luminosity $`L`$ from the received flux under the assumption of isotropy, it is related to $`L^{\prime }`$ through the above equation.
But suppose that the photon receiver covers the entire sky, i.e. completely surrounds the emitting source: what then is the relation between the received power in the frames $`K`$ and $`K^{\prime }`$? In this case we must integrate $`dL/d\mathrm{\Omega }`$ over the solid angle, obtaining:
$$L=\int \delta ^4\frac{L^{\prime }}{4\pi }2\pi \mathrm{sin}\theta \,d\theta =\mathrm{\Gamma }^2\frac{\beta ^2+3}{3}L^{\prime }\approx \frac{4}{3}\mathrm{\Gamma }^2L^{\prime }$$
(4)
Note that in this situation the power is not a Lorentz invariant. This is because here we are concerned with the power received in the two frames, not with the emitted one (which is indeed Lorentz invariant). This is yet another difference with respect to “text–book” special relativity not accounting for photons: here the time transformation involves the Doppler term $`(1-\beta \mathrm{cos}\theta )`$, causing the difference between emitted and received power (see also Rybicki & Lightman 1979, p. 141).
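The angular integral in Eq. (4) is easily checked numerically (a sketch assuming SciPy; $`\mathrm{\Gamma }=5`$ is an arbitrary test value):

```python
import numpy as np
from scipy.integrate import quad

Gamma = 5.0
beta = np.sqrt(1 - 1 / Gamma**2)
delta = lambda th: 1.0 / (Gamma * (1 - beta * np.cos(th)))
integrand = lambda th: delta(th)**4 * 2 * np.pi * np.sin(th) / (4 * np.pi)
L_over_Lprime, _ = quad(integrand, 0, np.pi)
print(L_over_Lprime, Gamma**2 * (beta**2 + 3) / 3)   # agree; both ~ (4/3)*Gamma^2 for beta -> 1
```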
Because of beaming, relativistically moving objects appear much brighter if their beams point at us, and can therefore be visible up to large distances. Besides being extremely important in order to calculate the intrinsic physical parameters of a moving source, beaming is also crucial for the moving object itself. The observed objects are rarely isolated and more often are part of a jet immersed in a bath of radiation. Just for illustration, let us consider a blob moving close to an accretion disk and surrounded by gas clouds responsible for the emission of the broad lines seen in the spectra of quasars: as the blob moves at relativistic speed in a bath of photons it will see this radiation enhanced. Furthermore, because of aberration, in its frame most of the photons will appear to come from the hemisphere towards which the blob is moving, and be blueshifted. For an observer at rest with respect to the photon bath, the radiation energy density seen by the blob is enhanced by $`\mathrm{\Gamma }^2`$. This increases the rate of interaction between the photons and the electrons in the blob, leading to enhanced inverse Compton emission and possibly even deceleration of the blob by the so called Compton drag effect.
## 4 Evidences for relativistic motion
### 4.1 Radio–loud AGN with flat radio spectrum
* Superluminal motion — The most striking evidence of bulk motion comes from the observation of superluminal sources. Improvements in interferometric techniques have led to the discovery of more than 100 of these sources (see e.g. Vermeulen & Cohen 1994). The typical bulk Lorentz factors inferred range between 5 and 20.
* Compton emission — From radio data (size, flux, spectrum) one can derive, through synchrotron theory, the number density of emitting particles and the radiation energy density (Hoyle, Burbidge & Sargent 1966). These quantities determine the probability that particles and photons interact through the inverse Compton process, and thus it is possible to predict the amount of high energy radiation (i.e. X–rays) produced. However, this estimated flux is often orders of magnitude larger than what is observed, if beaming is not taken into account. Conversely, the requirement that the radio source emits at most the observed X–ray flux sets a (lower) limit on the beaming factor $`\delta `$. Typical values are in agreement with those derived from superluminal motion (Ghisellini et al. 1993).
* High brightness temperatures — This argument is similar to the one just presented. The brightness temperature $`T_b`$ \[defined through $`I(\nu )\equiv 2kT_b\nu ^2/c^2`$\] is related to the density of particles and photons, and therefore to the probability of Compton scattering: a high brightness temperature implies powerful Compton emission. More precisely, if $`T_b>10^{12}`$ K (this value is called the Compton limit), the luminosity produced by first order Compton scattering is larger than the synchrotron luminosity, that in the second order exceeds (by the same factor) that in the first order, and so on. Clearly, this can only occur until the typical photon energy reaches the typical electron energy, above which the power has to drop. This increasingly important particle cooling is called the Compton catastrophe. To avoid it, we resort to beaming, recalling that $`T_b`$ transforms according to
$`T_b`$ $`=`$ $`\delta T_b^{\prime }`$ (source size measured directly)
$`T_b`$ $`=`$ $`\delta ^3T_b^{\prime }`$ (source size measured through variability) (5)
The Compton limit of $`T_b>10^{12}`$ K is derived by requiring that the radiation energy density be smaller than the magnetic energy density. A more severe limit can be obtained by imposing equipartition between particle and magnetic energy densities, as proposed by Readhead (1994). In the latter case one derives larger $`\delta `$ factors, in agreement with those obtained by the two previous methods. A note of caution: there is a significant number of sources, called intraday variables, whose radio flux changes on a timescale of hours. For them, $`T_b>10^{18}`$ K, and therefore $`\delta >100`$ is inferred, a value too large to be consistent with those derived in other ways. This is an open issue, and probably effects due to interstellar scintillation and/or coherent radiation (as in pulsars) have to be invoked (for a review see Wagner & Witzel 1995).
* Gamma–ray emission — The $`\gamma `$–ray satellite CGRO discovered that radio–loud quasars with flat radio spectrum (FSRQ) can be strong $`\gamma `$–ray emitters, and that often most of their power is radiated in this band. This emission is also strongly variable, on timescales of days or less.
However, photons above $`m_ec^2=`$ 511 keV can interact with lower energy photons, producing electron–positron pairs. This happens if the optical depth for the photon–photon interaction is greater than unity, which depends on the density of the target photons and the typical size of the region they occupy. If not beamed, the large power and rapid variability observed imply optical depths largely in excess of unity, so the $`\gamma `$–rays would be absorbed within the source. The condition of transparency to this process leads to lower limits on the beaming factors somewhat smaller than in the previous cases. There are also objects (still a few, but increasing in number) observed above a few tenths of a TeV (from the ground by Cherenkov telescopes). The corresponding limits on $`\delta `$ are the most severe.
* One–sidedness of jets — Radio sources are very often characterized by two lobes of extended emission, but in some of them only one jet - starting from the nucleus and pointing to one of the lobes - is visible. If the emitting plasma is moving relativistically, the radiation from the jet approaching us is enhanced, while that from the receding jet is dimmed, explaining the observed asymmetry. This is often referred to as Doppler favoritism.
* Super–Eddington luminosities — FSRQs show violent activity: large luminosity changes on short timescales. If the variability timescales are associated with the Schwarzschild radius (hence the black hole mass), luminosities exceeding the Eddington limit are often derived (if isotropy is assumed). This difficulty can easily be overcome by beaming (which affects both the variability timescales and the observed power).
* The Laing–Garrington effect — The two lobes of radio sources are often differently polarized (or, more precisely, they are differently depolarized): as the (one sided) jet always points towards the more polarized lobe, this convincingly indicates that the visible jet is the one approaching the observer (Laing 1988, Garrington et al. 1988).
* Jet bending — Jets of AGNs are often significantly curved, posing problems of stability. But the strong bendings could be largely apparent, if the jet is seen at a small viewing angle. This constitutes a further independent hint that these jets are pointing at us (although it is not direct evidence of relativistic motion).
* Parent population — For each source whose jet is directed towards us, there must be $`\mathrm{\Gamma }^2`$ other sources with jets pointing elsewhere. If beaming is important, then these objects must be less luminous, not (extremely) superluminal, and not show violent activity. This parent population of sources can be identified with radiogalaxies (Blandford & Rees 1978; for a review see Urry & Padovani 1995): indeed their number agrees with what is expected from the beaming parameters derived by the other methods.
### 4.2 Galactic superluminal sources
In 1994, the two X–ray transients GRS 1915+105 and J1655–40 were monitored with the VLA during a campaign aimed at finding radio jets associated with galactic sources. Surprisingly, superluminal motion was discovered in both of them (Mirabel & Rodriguez 1994; Hjellming & Rupen 1995). The most interesting observational fact is that in these two sources we see both the jet and the counter-jet. This is unprecedented among superluminal sources, and it is possible because: i) the viewing angle is large, suppressing the Doppler favoritism effect; ii) the bulk Lorentz factor is small, $`\mathrm{\Gamma }=2.5`$, leading to a moderate beaming and thus an observable flux even at large viewing angles.
Under the assumption that the superluminal blobs move in opposite directions and are characterized by the same (true) velocity $`\beta c`$, we can apply Eq. (1) $`twice`$ to derive $`both`$ $`\beta `$ and $`\theta `$. This in turn determines $`\delta `$.
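Explicitly, writing Eq. (1) for the approaching and receding blobs gives $`\mu _a=\beta \mathrm{sin}\theta /(1-\beta \mathrm{cos}\theta )`$ and $`\mu _r=\beta \mathrm{sin}\theta /(1+\beta \mathrm{cos}\theta )`$, which invert to $`\beta \mathrm{cos}\theta =(\mu _a-\mu _r)/(\mu _a+\mu _r)`$ and $`\beta \mathrm{sin}\theta =2\mu _a\mu _r/(\mu _a+\mu _r)`$. A sketch of the inversion (the apparent speeds below are illustrative values of the order of those reported for GRS 1915+105, not the measured ones):

```python
import numpy as np

mu_a, mu_r = 1.25, 0.65                      # apparent speeds (units of c), illustrative values
beta_cos = (mu_a - mu_r) / (mu_a + mu_r)     # = beta*cos(theta)
beta_sin = 2 * mu_a * mu_r / (mu_a + mu_r)   # = beta*sin(theta)
beta = np.hypot(beta_sin, beta_cos)
theta = np.degrees(np.arctan2(beta_sin, beta_cos))
Gamma = 1 / np.sqrt(1 - beta**2)
delta = 1 / (Gamma * (1 - beta_cos))
print(beta, theta, Gamma, delta)             # beta ~ 0.9, theta ~ 70 deg, Gamma close to 2.5
```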
Very severe limits can be set on the power carried by the moving blobs in the form of both particle bulk motion and Poynting flux. The radio emission observed with the VLA comes from a region $`\sim 10^{15}`$ cm in size, and is believed to be produced by synchrotron emission: it is then possible to calculate the number of particles responsible for the observed radiation (this is a lower limit, since other non-emitting, i.e. sub-relativistic, particles might be present).
The number inferred depends on the value of the magnetic field $`B`$: an increase of $`B`$ decreases the number of particles required, but increases the implied Poynting flux. The sum of the power in particle bulk motion and in Poynting flux therefore has a minimum. For GRS 1915+105, this is of the order of $`10^{40}`$ erg s<sup>-1</sup> (Gliozzi, Bodo & Ghisellini, 1999), and exceeds by a factor $`\sim `$ 10 the power emitted by the accretion disk (in soft and medium energy X–rays). This result in itself puts strong constraints on any model for the acceleration of jets, by excluding the possibility that it occurs through radiation pressure.
### 4.3 Gamma–ray Bursts
GRBs are flashes of hard X– and $`\gamma `$–rays, lasting for a few seconds. Discovered by the Vela satellites in the late sixties, their origin remained mysterious until the Italian–Dutch satellite BeppoSAX succeeded in locating them on the sky with small enough error–boxes: prompt follow–up observations were then possible and led to the discovery that they are at cosmological distances. This in turn made it possible to estimate the energy involved which, if the radiation is emitted isotropically, is in the range $`10^{52}`$–$`10^{54}`$ erg.

The energetics is in itself strong evidence of relativistic bulk motion: the short duration and the even shorter variability timescale (of the order of 1 ms) imply a huge compactness (i.e. the luminosity over size ratio). This resembles the conditions during the Big Bang, and implies a similar evolution: no matter in which form the energy is initially injected, a quasi–thermal equilibrium between matter and radiation is reached, with the formation of electron–positron pairs accelerated to relativistic speeds by the high internal pressure. This is a fireball. When the temperature of the radiation (as measured in the comoving frame) drops below $`\sim `$50 keV the pairs annihilate faster than the rate at which they are produced (50 keV, not 511 keV, as a thermal photon distribution has a high energy tail…). But the presence of even a small amount of baryons, corresponding to only $`10^{-6}M_{\odot }`$, makes the fireball opaque to Thomson scattering: the internal radiation thus continues to accelerate the fireball until most of its initial energy has been converted into bulk motion. The fireball then expands at a constant speed and at some point becomes transparent.

If the central engine is not completely impulsive, but works intermittently, it can produce many shells (i.e. many fireballs) with slightly different Lorentz factors. Late but faster shells can catch up with early slower ones, producing shocks which give rise to the observed burst emission. In the meantime, all shells interact with the interstellar medium, and at some point the amount of swept-up matter is large enough to decelerate the fireball and produce other radiation which can be identified with the afterglow emission observed at all frequencies.
For GRBs the limits to the required bulk Lorentz factors follow mainly from two arguments (for reviews see Piran 1999; Meszaros 1999):
* Compactness — We see high energy (i.e. $`\gamma `$–rays above 100 MeV) emission, varying on short timescales. As for the AGN case (see above), bulk motion is required to lower the implied luminosity and increase the intrinsic size of the emission region in order to decrease the opacity to pair production. Limits to the Lorentz factor in the range 100–300 are derived.
* Variability — The radiation we see originates when the fireball has become optically thin to Thomson scattering, i.e. when it has expanded to radii of the order of $`R_t=10^{13}`$ cm. Yet, we see 1 ms variability timescales. Assuming that the radiation is produced while the fireball expands by a factor of two in radius, short timescales are possible because $`\mathrm{\Delta }t_a=(1-\beta \mathrm{cos}\theta )R_t/c`$. Furthermore, if the fireball is isotropic, we always see the portion of it pointing at us ($`\theta =0^{\circ }`$), and thus $`\mathrm{\Delta }t_a\sim 0.01(R_t/10^{13}\mathrm{cm})(100/\mathrm{\Gamma })^2`$ s. Also this argument leads to values of $`\mathrm{\Gamma }`$ in the range 100–300 (a quick numerical check follows below).
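For $`\theta =0^{\circ }`$ one has $`1-\beta \simeq 1/2\mathrm{\Gamma }^2`$, so the quoted timescale follows from two lines of arithmetic:

```python
c, R_t = 3e10, 1e13                      # cm/s, transparency radius in cm
for Gamma in (100, 300):
    dt_a = R_t / (2 * Gamma**2 * c)      # (1 - beta) R_t / c at theta = 0
    print(Gamma, dt_a)                   # ~0.017 s and ~0.002 s: ms variability
```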
## 5 Rulers and clocks vs photographs and light curves
In special relativity we are used to two fundamental effects:
* Lengths shrink in the direction of motion
* Times get longer
The Lorentz transformations for a motion along the $`x`$ axis are ($`K`$ is the lab frame and $`K^{}`$ is the comoving one)
$`x^{\prime }`$ $`=`$ $`\mathrm{\Gamma }(x-vt)`$
$`t^{\prime }`$ $`=`$ $`\mathrm{\Gamma }\left(t-{\displaystyle \frac{v}{c^2}}x\right)`$ (6)
with the inverse relations given by
$`x`$ $`=`$ $`\mathrm{\Gamma }(x^{}+vt^{})`$
$`t`$ $`=`$ $`\mathrm{\Gamma }\left(t^{}+{\displaystyle \frac{v}{c^2}}x^{}\right).`$ (7)
The length of a moving ruler has to be measured through the position of its extremes at the same time $`t`$. Therefore, as $`\mathrm{\Delta }t=0`$, we have
$$x_2^{\prime }-x_1^{\prime }=\mathrm{\Gamma }(x_2-x_1)-\mathrm{\Gamma }v\mathrm{\Delta }t=\mathrm{\Gamma }(x_2-x_1)$$
(8)
i.e.
$$\mathrm{\Delta }x=\frac{\mathrm{\Delta }x^{\prime }}{\mathrm{\Gamma }}\rightarrow \mathrm{contraction}$$
(9)
Similarly, in order to determine a time interval, a (lab) clock has to be compared with one in the comoving frame, which has, in this frame, the same position $`x^{\prime }`$. It follows that
$$\mathrm{\Delta }t=\mathrm{\Gamma }\mathrm{\Delta }t^{\prime }+\mathrm{\Gamma }\frac{v}{c^2}\mathrm{\Delta }x^{\prime }=\mathrm{\Gamma }\mathrm{\Delta }t^{\prime }\rightarrow \mathrm{dilation}$$
(10)
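A minimal numerical check of Eqs. (6)–(10), in units with $`c=1`$:

```python
import numpy as np

def boost(x, t, v):
    """Lorentz boost from the lab frame K to the comoving frame K', Eq. (6), c=1."""
    g = 1 / np.sqrt(1 - v * v)
    return g * (x - v * t), g * (t - v * x)

v = 0.8
g = 1 / np.sqrt(1 - v * v)                       # Gamma = 5/3

# Contraction, Eq. (9): lab ends at x=0 and x=1/Gamma, read off at the same
# lab time t=0, correspond to a proper separation of 1 in K'.
(x1, _), (x2, _) = boost(0, 0, v), boost(1 / g, 0, v)
print(x2 - x1)                                   # -> 1.0

# Dilation, Eq. (10): a clock at rest in K' (Delta x' = 0) advancing by
# Delta t' = 1 advances by Gamma in the lab.
print(g * 1.0)                                   # -> 1.666...

# The interval t^2 - x^2 is invariant under the boost.
xp, tp = boost(0.3, 0.7, v)
print(0.7**2 - 0.3**2, tp**2 - xp**2)            # equal
```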
An easy way to remember the transformations is to think of mesons produced by cosmic-ray collisions in the upper atmosphere, which can be detected even if their lifetime (in the comoving frame) is much shorter than the time needed to reach the earth’s surface. For us, on ground, relativistic mesons live longer.
All this is correct if we measure lengths by comparing rulers (at the same time) and by comparing clocks (at the same position): the meson lifetime $`is`$ a clock. In other words, if we do not use photons for the measurement process.
### 5.1 The moving bar
If the information (about position and time) is carried by photons, we must take into account their (different) paths. When we take a picture, we detect photons arriving at our camera at the same time: if the moving body which emitted them is extended, we must consider that these photons have been emitted at different times, when the moving object occupied different locations in space. This may seem quite obvious. And it is. Nevertheless these facts were pointed out in 1959 (Terrell 1959), more than 50 years after the publication of the theory of special relativity.
Let us consider a moving bar, of proper dimension $`\ell ^{\prime }`$, moving in the direction of its length at velocity $`\beta c`$ and at an angle $`\theta `$ with respect to the line of sight (see Fig. 2). The length of the bar in the frame $`K`$ (according to relativity “without photons”) is $`\ell =\ell ^{\prime }/\mathrm{\Gamma }`$. The photon emitted in $`A_1`$ reaches the point $`H`$ in the time interval $`\mathrm{\Delta }t_e`$. After $`\mathrm{\Delta }t_e`$ the extreme $`B_1`$ has reached the position $`B_2`$, and by this time, photons emitted by the other extreme of the bar can reach the observer simultaneously with the photons emitted by $`A_1`$, since the travel paths are equal. The length $`B_1B_2=\beta c\mathrm{\Delta }t_e`$, while $`A_1H=c\mathrm{\Delta }t_e`$. Therefore
$$A_1H=A_1B_2\mathrm{cos}\theta \rightarrow \mathrm{\Delta }t_e=\frac{\ell ^{\prime }\mathrm{cos}\theta }{c\mathrm{\Gamma }(1-\beta \mathrm{cos}\theta )}.$$
(11)
Note the appearance of the term $`\delta =1/[\mathrm{\Gamma }(1-\beta \mathrm{cos}\theta )]`$ in the transformation: this accounts for both the relativistic length contraction $`(1/\mathrm{\Gamma })`$, and the Doppler effect $`[1/(1-\beta \mathrm{cos}\theta )]`$. The length $`A_1B_2`$ is then given by
$$A_1B_2=\frac{A_1H}{\mathrm{cos}\theta }=\frac{\ell ^{\prime }}{\mathrm{\Gamma }(1-\beta \mathrm{cos}\theta )}=\delta \ell ^{\prime }.$$
(12)
In a real picture, we would see the projection of $`A_1B_2`$, i.e.:
$$HB_2=A_1B_2\mathrm{sin}\theta =\ell ^{\prime }\frac{\mathrm{sin}\theta }{\mathrm{\Gamma }(1-\beta \mathrm{cos}\theta )}=\ell ^{\prime }\delta \mathrm{sin}\theta .$$
(13)
The maximum observed length is $`\ell ^{\prime }`$ for $`\mathrm{cos}\theta =\beta `$.
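This is easy to verify numerically: the projected length of Eq. (13), in units of $`\ell ^{\prime }`$, indeed peaks at the value 1 where $`\mathrm{cos}\theta =\beta `$. A short sketch:

```python
import numpy as np

beta = 0.9
gamma = 1 / np.sqrt(1 - beta**2)
theta = np.linspace(1e-4, np.pi - 1e-4, 200_000)
proj = np.sin(theta) / (gamma * (1 - beta * np.cos(theta)))  # HB2/l', Eq. (13)
i = int(np.argmax(proj))
print(proj[i], np.cos(theta[i]), beta)   # max ~1, reached where cos(theta)~beta
```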
### 5.2 The moving square
Now consider a square of size $`\ell ^{\prime }`$ in the comoving frame, moving at $`90^{\circ }`$ to the line of sight (Fig. 3). Photons emitted in $`A`$, $`B`$, $`C`$ and $`D`$ have to arrive at the film plate at the same time. But the paths of photons from $`C`$ and $`D`$ are longer, and so they have to be emitted earlier than photons from $`A`$ and $`B`$: when photons from $`C`$ and $`D`$ were emitted, the square was in another position.
The interval of time between emission from $`C`$ and from $`A`$ is $`\ell ^{\prime }/c`$. During this time the square moves by $`\beta \ell ^{\prime }`$, i.e. the length $`CA`$. Photons from $`A`$ and $`B`$ are emitted and received at the same time and therefore $`AB=\ell ^{\prime }/\mathrm{\Gamma }`$. The total observed length is given by
$$CB=CA+AB=\frac{\ell ^{\prime }}{\mathrm{\Gamma }}(1+\mathrm{\Gamma }\beta ).$$
(14)
As $`\beta `$ increases, the observer sees the side $`AB`$ increasingly shortened by the Lorentz contraction, but at the same time the length of the side $`CA`$ increases. The maximum total length is observed for $`\beta =1/\sqrt{2}`$, corresponding to $`\mathrm{\Gamma }=\sqrt{2}`$ and to $`CB=\sqrt{2}\ell ^{\prime }`$, i.e. equal to the diagonal of the square. Note that we have considered the square (and the bar in the previous section) to be at large distances from the observer, so that the emitted light rays are all parallel. If the object is near the observer, we must take into account that different points of one side of the square (e.g. the side $`AB`$ in Fig. 3) have different travel paths to reach the observer, producing additional distortions. See Mook and Vargish (1987) for some interesting illustrations.
### 5.3 Rotation, not contraction
The net result (taking into account both the length contraction and the different paths) is an apparent rotation of the square, as shown in Fig. 3 (right panel). The rotation angle $`\alpha `$ can be simply derived (even geometrically) and is given by
$$\mathrm{cos}\alpha =\beta $$
(15)
A few considerations follow:
* If you rotate a sphere you still get a sphere: you do not observe a contracted sphere.
* The total length of the projected square, appearing on the film, is $`\ell ^{\prime }(\beta +1/\mathrm{\Gamma })`$. It is maximum when the “rotation angle” $`\alpha =45^{\circ }`$, i.e. for $`\beta =1/\sqrt{2}`$ and $`\mathrm{\Gamma }=\sqrt{2}`$. This corresponds to the diagonal.
* The appearance of the square is the same as what is seen in a comoving frame for a line of sight making an angle $`\alpha ^{}`$ with respect to the velocity vector, where $`\alpha ^{}`$ is the aberrated angle given by
$$\mathrm{sin}\alpha ^{\prime }=\frac{\mathrm{sin}\alpha }{\mathrm{\Gamma }(1-\beta \mathrm{cos}\alpha )}=\delta \mathrm{sin}\alpha $$
(16)
See Fig. 4 for a schematic illustration.
The last point is particularly important, because it introduces a great simplification in calculating not only the appearance of bodies with a complex shape but also the light curves of varying objects.
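As a consistency check of Eqs. (15)–(16): at the angle of maximum apparent length, $`\mathrm{cos}\alpha =\beta `$, one has $`\delta =\mathrm{\Gamma }`$ and $`\mathrm{sin}\alpha =1/\mathrm{\Gamma }`$, so that $`\mathrm{sin}\alpha ^{\prime }=1`$: the aberrated-angle observer views the square exactly side-on. Numerically:

```python
import numpy as np

beta = 0.75
gamma = 1 / np.sqrt(1 - beta**2)
alpha = np.arccos(beta)                           # apparent rotation, Eq. (15)
delta = 1 / (gamma * (1 - beta * np.cos(alpha)))  # equals Gamma at this angle
s = np.clip(delta * np.sin(alpha), -1.0, 1.0)     # clip guards against rounding
print(np.degrees(np.arcsin(s)))                   # Eq. (16) -> 90 deg
```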
### 5.4 Light curves
We have already seen how intrinsic time intervals $`\mathrm{\Delta }t_e^{\prime }`$ transform into observed $`\mathrm{\Delta }t_a`$ when taking into account the different photon travel paths. The Doppler effect can oppose time dilation and, depending on the viewing angle, $`\mathrm{\Delta }t_a`$ can be longer or shorter than $`\mathrm{\Delta }t_e^{\prime }`$. There are however more complex cases, where it may be difficult to derive a prescription as simple as Equation (2). For instance, a relativistically moving blob which also expands relativistically (i.e. “a bomb” exploding in flight). Accounting for the superposition of the two motions is complex, but the introduction of the “aberrated angle observer” greatly simplifies this kind of problem: this observer would see the blob without bulk motion and the different travel paths are the geometric ones. Then the observer in the lab–frame $`K`$ just sees the same light curve as the “aberrated angle observer”, but with time intervals divided by $`\delta `$ and specific intensities multiplied by $`\delta ^3`$. In fact, in the frame $`K^{\prime }`$ the photons emitted at the “de–aberrated angle” $`\alpha ^{\prime }`$ are the very same ones that reach the observer in $`K`$, at the “aberrated angle” $`\alpha `$.
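The recipe can be written in two lines: compute the light curve in the frame of the aberrated-angle observer, then divide the time axis by $`\delta `$ and multiply the specific intensities by $`\delta ^3`$. A minimal sketch, with a made-up exponential flare as the comoving light curve:

```python
import numpy as np

def observe(t_comoving, I_comoving, delta):
    """Comoving light curve -> observed one: t/delta, I*delta^3 (monochromatic
    specific intensity; a bolometric flux would carry delta^4 instead)."""
    return t_comoving / delta, delta**3 * I_comoving

t_c = np.linspace(0.0, 10.0, 1001)       # comoving time, arbitrary units
I_c = np.exp(-t_c / 2.0)                 # made-up comoving flare profile
t_obs, I_obs = observe(t_c, I_c, delta=10.0)
print(t_obs[-1], I_obs[0])               # 10x shorter and 1000x brighter
```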
## 6 Conclusions
In the last 25 years special relativity has become necessary for the understanding of some of the most violent phenomena in our universe, in which large masses are accelerated to relativistic velocities. Instead of dealing with elementary particles in accelerators or with energetic cosmic rays, we can see fractions of a solar mass moving at 0.999$`c`$. These motions give rise to several effects, the most important being probably the collimation of the radiation along the direction of motion, making the source visible to distant observers. This implies that relativistic emitting sources – such as the plasma in jets of AGNs and gamma–ray bursts – can be good probes of the far universe.
As discussed in this contribution, spatial and temporal information are carried by photons, and therefore the differences in their paths to reach the observer must be taken into account. Extended moving objects are seen rotated rather than contracted, and therefore spheres remain spheres.
Acknowledgments
It is a pleasure to thank Annalisa Celotti, Laura Maraschi, Aldo Treves and Meg Urry for years of fruitful collaboration.
# Zeeman lines and a single cyclotron line in the low-accretion rate polar 1RXS J012851.9–233931 (RBS0206)
(Based on observations at the European Southern Observatory La Silla, Chile)
## 1 Introduction
Several dozen new AM Herculis binaries (polars) were identified in recent years as optical counterparts of bright soft X-ray/EUV emitters, most of them in the systematic identification program of bright, soft, high-galactic latitude X-ray sources found in the RASS (ROSAT All-Sky Survey) by Thomas et al. (1998). Other main sources of new systems are the ROSAT/WFC survey (Pye et al. 1995) and serendipitous ASCA/ROSAT/EUVE discoveries. Thus, a total of about 60 polars are known by now, more than three times the number of sources from the pre-ROSAT era. New interesting individual systems, e.g. bright eclipsing systems, have emerged and systematic studies have become possible concerning e.g. the period or magnetic field distribution.
We are running an identification program of all bright, high-galactic latitude sources found in the RASS, primarily in order to establish complete X-ray selected samples of extragalactic X-ray emitters. Selection criteria are a RASS count rate above 0.2 s<sup>-1</sup> and galactic latitude $`|b|>30\mathrm{°}`$. This program, termed the ROSAT Bright Survey RBS (Fischer et al. 1998), also has an impact on galactic work. Among other fields, we mention the quest for isolated neutron stars (Schwope et al. 1999, Neuhäuser & Trümper 1999). By expanding the selection criteria applied by Thomas et al. to include fainter and harder sources, we also find new cataclysmic variables. The first one emerging from this program, RBS1735 (=EUVE J2115–58.6), turned out to belong to the rare class of only 4 polars with a slightly asynchronously rotating white dwarf (Schwope et al. 1997). This system had, contrary to the large majority of the newly discovered polars, a rather hard X-ray spectrum. Here we report on the discovery of a new polar with a soft X-ray spectrum. The new system shows two features of magnetic activity, photospheric Zeeman absorption features and a cyclotron emission spectrum. Hence, it is a prime candidate for a detailed investigation establishing a map of the magnetic field on the surface of the white dwarf. The present database is too small to achieve this, and further observations (photometry, spectroscopy, spectro-polarimetry) at optical, infra-red and X-ray wavelengths are urgently needed.
## 2 Observations
### 2.1 X-ray observations
1RXS J012851.9–233931 was detected during the RASS as a variable soft X-ray source at a mean countrate of 0.34 s<sup>-1</sup>. The source was scanned 23 times during the RASS and had a total exposure of 452 sec. It displayed flux variations by 100% with a minimum countrate of zero and a peak countrate of 0.93 s<sup>-1</sup>. A variability analysis revealed no obvious periodicity. It has a soft spectrum with X-ray hardness ratio HR1 = $`-0.84\pm 0.04`$, where HR1 is defined in the usual way, HR1 $`=(H-S)/(H+S)`$, with $`S`$ and $`H`$ being the counts in the soft (0.1–0.4 keV) and hard (0.5–2.0 keV) bands, respectively. The X-ray source appeared as object 206 in the target list of the RBS (Schwope et al. 1999, in preparation); we refer to it as RBS0206 in the following.
It was not detected in the all-sky surveys performed in the softer spectral regimes with the EUVE satellite (Bowyer et al. 1996, Lampton et al. 1997) and with the WFC onboard ROSAT (Pye et al. 1995).
### 2.2 Low-resolution spectroscopy
The X-ray positional uncertainty of RBS0206 as given in the 1RXS-catalogue (Voges et al. 1996) is 9 arcsec. The digitized sky survey DSS shows several possible counterparts at or just within the 90% confidence X-ray error circle. The object nearest to the RASS X-ray position, at a distance of $`7.6`$″, is the faintest (Gunn $`i=19\stackrel{m}{.}9`$); it appears just above the plate limit and was not observed optically by us. The others, labeled ‘A’ and ‘B’ on the finding chart reproduced in Fig. 1, are at distances of $`13.2`$″ and $`15.9`$″, respectively.
Low-resolution spectra of objects ‘A’ and ‘B’ were taken with the ESO Faint Object Spectrograph and Camera EFOSC mounted on the ESO 3.6m telescope on the night of November 14, 1998. Both objects were placed on the spectrograph slit; the integration time was 1200 sec, and the exposure started at 4.2872 UT. A grism with 236 grooves/mm was used as dispersive device (grism #13), providing a wavelength coverage of more than 5000 Å at a resolution of about 12 Å FWHM. Spectrophotometric standard stars were observed twice in the night. Due to transparency changes of the atmosphere these observations gave inconsistent photometric results. The observations of the standard stars were nevertheless useful in order to establish the overall spectral response. An absolute calibration of the stars in the field was possible with the CCD-photometry of Jan. 1999 (see next section). We estimate the finally achieved photometric accuracy of our spectra to be better than $`\sim `$15 %, where this estimate is based on the photometric accuracy of our Jan. 1999 imaging observations. Object ‘B’ turned out to be a faint K-star ($`V=19\stackrel{m}{.}1`$) and is not the counterpart of the RASS X-ray source. The spectrum of object ‘A’ is reproduced in Fig. 2. With $`V=18\stackrel{m}{.}9`$ this star was slightly brighter than ‘B’. It has a blue continuum with deep absorption lines and an intense broad hump centred on $`\sim `$8000 Å. H$`\alpha `$ was observed to be weak in emission. The object was identified as a likely AM Herculis star in a low state of accretion.
### 2.3 Optical photometry
Optical photometry of RBS0206 was obtained on January 24 and 26, 1999, with the Danish 1.5m telescope at ESO La Silla. The source was exposed for a total of 2 and 2.3 hours, respectively, with integration times of individual exposures of 60 sec. The main aim of these observations was the determination of the orbital period of the binary. The feature with the expected highest photometric variability in the spectrum of RBS0206 (Fig. 2) is the hump at 8000 Å. The photometric observations were therefore performed with a Gunn $`i`$ filter with central wavelength at 7978 Å and width of 1430 Å. Differential magnitudes of objects ‘A’ and ‘B’ with respect to star ‘C’ were derived using the profile fitting algorithm of DOPHOT (Mateo & Schechter 1989). The accuracy of a single observation is $`0.05`$ mag. Subsequent observations of Landolt (1992) standards allowed the absolute calibration of our photometric and spectroscopic reference stars, $`i=13\stackrel{m}{.}92`$ for star ‘C’ and $`i=17\stackrel{m}{.}38`$ for star ‘B’. The resulting light curves of the two occasions are shown in Fig. 3. RBS0206 displayed brightness variations by 0.5 mag on a timescale of less than one hour, with short-term fluctuations of 30% superimposed. The mean brightness level of $`i\sim 16\stackrel{m}{.}3`$ indicates a brightening of the source with respect to the spectroscopic observations by about $`1.3`$ mag.
## 3 Analysis and discussion
### 3.1 Spectroscopic identification of RBS0206
The spectrum of RBS0206 shows the signs of a polar in a low state of accretion. The emission lines of Hydrogen and Helium, which are so prominent in the high accretion states of these systems, have almost vanished. Only H$`\alpha `$ and H$`\beta `$ are weakly detectable in emission. The absence of all radiation components dominating the optical spectra of polars in their high states (atomic emission lines, recombination continuum, quasi-continuous cyclotron emission) unveils the presence of the magnetic white dwarf in RBS0206. The continuum in the blue spectral range is dominated by the white dwarf, the pronounced absorption lines are Zeeman shifted Balmer lines of photospheric origin (H$`\alpha `$, H$`\beta `$ and H$`\gamma `$). Although in a low state, the spectrum has a broad intensity hump in the near-infrared at 8000 Å and width of about 1000 Å (FWHM) which can be nothing else than a cyclotron feature. Hence, accretion did not cease entirely. The spectrum does not show any obvious feature from the mass-donating secondary star. We will discuss the various spectral features subsequently, but try to determine the binary period beforehand.
### 3.2 Optical variability
The lightcurves shown in Fig. 3 display pronounced minima which are in principle useful for a period determination. With the small database at hand, however, our results are not unique. It is in particular not clear whether the two minima in the lightcurve of Jan. 26, 1999, represent the same feature in two successive cycles or not. A periodogram computed with the analysis-of-variance (AOV) algorithm (Fig. 4; Schwarzenberg-Czerny 1989) reveals possible periods around 90 min (0.0625 days) and around 146 min (0.1 days). The former solution would imply that the two observed minima on Jan. 26 represent the same feature, the latter that they are different. Folded light curves using the three trial periods indicated in Fig. 4 are shown in Fig. 5. Each folded light curve has intervals with a very large scatter of data points at a given phase, probably caused by cycle-to-cycle variations of the brightness. The folded light curves for periods around 90 min resemble those of MR Ser or QQ Vul, which both have their main accreting pole continuously in view. The optical light curves of these two systems are modulated by strong cyclotron beaming. Primary minima occur in these systems when we are looking almost pole-on, secondary minima in the centre of the bright hump, when the main pole vanishes partly behind the limb of the white dwarf. The folded light curve using the trial period of 146.4 min shows a rather short bright phase (centred on phase zero) which is disrupted by an eclipse-like minimum. Such a feature may be caused by absorption in the intermittent accretion stream crossing the line of sight to the accretion spot. We regard a period around 90 min as likely but cannot exclude longer periods; significantly shorter periods are certainly excluded. Clearly, observations with a longer time base are needed to clarify the situation. With this period, the likely accretion geometry is such that the accretion spot is continuously in view. This view is supported by the RASS X-ray light curve, which has non-vanishing X-ray flux for all but one scan.
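For readers wishing to reproduce this kind of period search, a bare-bones phase-binned analysis-of-variance statistic is sketched below; it is a schematic stand-in for the full Schwarzenberg-Czerny (1989) estimator, meant only to illustrate the principle.

```python
import numpy as np

def aov(t, x, period, nbins=10):
    """Phase-binned one-way ANOVA statistic for a trial period: the ratio of
    the between-bin to the within-bin variance of the folded data."""
    phase = (t / period) % 1.0
    idx = np.minimum((phase * nbins).astype(int), nbins - 1)
    xbar, s_between, s_within, r = x.mean(), 0.0, 0.0, 0
    for b in range(nbins):
        xb = x[idx == b]
        if xb.size == 0:
            continue
        r += 1
        s_between += xb.size * (xb.mean() - xbar) ** 2
        s_within += ((xb - xb.mean()) ** 2).sum()
    return (s_between / (r - 1)) / (s_within / (x.size - r))

# usage sketch: scan trial periods (in days) around the two candidates, e.g.
# theta = [aov(t_obs, mag, p) for p in np.linspace(0.05, 0.12, 2000)];
# peaks of theta mark the most likely periods.
```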
### 3.3 Spectral analysis
The absence of any obvious M-star feature in our spectrum can be used to derive a distance estimate to the system. We assume that the maximum contribution of the M-star at 9000 Å is 80% and use as template spectrum for the secondary in RBS0206 the M6 dwarf Gl406 with $`M_K=9\stackrel{m}{.}19`$, $`V-K=7\stackrel{m}{.}37`$. This type of M-star would be appropriate for a cataclysmic binary with $`P_{\mathrm{orb}}\sim 90`$ min. The scaled $`V`$- and $`K`$-band brightnesses of Gl406 are $`V_{\mathrm{sc}}=23\stackrel{m}{.}5`$ and $`K_{\mathrm{sc}}=16\stackrel{m}{.}1`$. We assume a Roche-lobe filling secondary star at a period of 90 min, which has a spherical equivalent Roche radius of $`\mathrm{log}(R_2/R_{\odot })=-0.86`$. Using Bailey’s (1981) method combined with the improved calibration of the surface brightness of late-type stars by Beuermann & Weichhold (1999, in prep.), which predicts a surface brightness $`S_K=4.9`$ for a star like Gl406, the distance to RBS0206 is $`>240`$ pc.
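The arithmetic behind this limit is compact. Assuming the usual form of Bailey’s relation, $`S_K=K+5-5\mathrm{log}d+5\mathrm{log}(R_2/R_{\odot })`$ (with $`d`$ in pc), the numbers quoted above give:

```python
import math

K_sc, S_K, logR2 = 16.1, 4.9, -0.86        # scaled K mag, S_K, log(R2/Rsun)
log_d = (K_sc + 5 + 5 * logR2 - S_K) / 5   # solve Bailey's relation for log d(pc)
print(10 ** log_d)                         # ~240 pc
```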
Using the observed slope and flux level of the white dwarf in the blue spectral regime, the questions of white dwarf radius, temperature and distance to the system can be addressed, too. For this exercise we use the model spectra for non-magnetic white dwarf atmospheres by Gänsicke et al. (1995), kindly provided by B. Gänsicke. We assume a normal 0.6 M<sub>⊙</sub> white dwarf with $`R_{wd}=8\times 10^8`$ cm. The observed spectrum is reasonably well reproduced by a 10000 K white dwarf at a distance of only 130 pc, although the model predicts a steeper spectral slope than observed. A white dwarf at a distance of 240 pc as estimated above must have a considerably higher temperature of about 20000 K in order to match the observed flux level at 6000 Å. At this high temperature the continuum slope is much steeper than observed and the fit in general is clearly worse than that for 10000 K. In order to resolve the discrepancy between the different distance estimates one definitely needs phase-resolved data. These would, in the first place, allow the orbital period to be determined. Should our period estimate for some reason be wrong and the orbital period be shorter than 90 min, one could hide an even fainter secondary star of later spectral type in the spectrum, which would be less distant than the 240 pc derived for a period of 90 min. Phase-resolved spectroscopic data in the blue spectral regime would allow one to disentangle radiation from the undisturbed photosphere and from a warm accretion spot.
The purity of the Zeeman spectrum shortward of H$`\alpha `$ allows a direct measurement of the mean magnetic field strength. We fitted a second order polynomial to interactively defined continuum points and divided the observed spectrum by this curve. The result is shown in Fig. 6 together with a simple Zeeman model plotted below the observed spectrum. The model is based on the detailed computations of the wavelengths and oscillator strengths of the individual non-degenerate Zeeman transitions of H-Balmer lines by Forster et al. (1984) and Rösner et al. (1984). For the present purpose we use the transitions of H$`\alpha `$, H$`\beta `$ and H$`\gamma `$, which lie in the spectral range covered by our spectrum. Our model just sums the oscillator strengths of all Balmer transitions mentioned, weighted according to an assumed magnetic field distribution. For the model shown in Fig. 6 we assumed a Gaussian field distribution centred on $`B=36`$ MG with a spread $`\sigma _B=2`$ MG. The model, therefore, does not predict real spectral intensities, but it predicts the wavelengths where spectral features are expected to occur. At the field strength realized in RBS0206 nearly all subcomponents of the Balmer lines appear as individual non-degenerate transitions due to the dominance of the quadratic over the linear Zeeman effect. Due to magnetic field smearing and to the limited spectral resolution these cannot be resolved in our spectrum; the Zeeman lines mainly appear as broad troughs. All features longward of 5300 Å belong to H$`\alpha `$; the features shortward of this wavelength belong to H$`\beta `$ and H$`\gamma `$ and become partly intermixed. There are, however, some isolated features, e.g. at 4580 Å and 4920 Å, which react sensitively to the adopted value of the centroid field strength. We estimate the uncertainty of the centroid field strength to be about 1 MG. We regard this field strength as the mean photospheric field strength. If we want to infer the value of the field strength at the pole, we need to know (a) the inclination of the magnetic axis with respect to the line of sight at the time of the observation, (b) the true underlying distribution of the magnetic field, and (c) information about the temperature distribution over the white dwarf surface. All these quantities and distributions are unknown. However, if we assume a dipolar field structure, a homogeneous temperature distribution and a line of sight not directly towards the magnetic pole, then a typical value for the conversion between the mean magnetic field and the polar field strength is 1.6 – 1.7. Using these values a likely value of the magnetic field at the pole is 55 – 60 MG. A lower limit for the field strength at the magnetic pole is given by the high-field wings of the observed Zeeman lines, which indicate about 40 MG.
The occurrence of a single cyclotron hump at 8000 Å is puzzling. All AM Herculis stars with individually resolved cyclotron harmonics show more than one harmonic in their optical or infrared spectra. One apparent exception is the recently discovered polar HS 1023+3900 (Reimers et al. 1999), observed at a very low accretion rate, which shows at certain phases only one prominent cyclotron line at 6000 Å. Only the detailed analysis revealed a second cyclotron line as a faint addition to the bright spectrum of the secondary star in the near infrared at 9000 Å. In RBS0206 the cyclotron fundamental lies in the unobserved infrared, $`\lambda _{\mathrm{cyc}}=17850`$–$`26780`$ Å for $`B_{\mathrm{pole}}\simeq B_{\mathrm{cyc}}=40`$–$`60`$ MG. Our spectrum covers the corresponding harmonic numbers 2 – 5.3 or 3 – 8, respectively, hence we could expect to observe several higher cyclotron harmonics. The only way to hide these higher harmonics is to assume that the particular observed one is already almost optically thin, that the plasma temperature is very low (which gives a steep dependence of the absorption coefficient on the harmonic number) and that the plasma is rather dilute. As one example for such a model we show also in Fig. 6 a cyclotron model which fits these requirements. It is computed for a homogeneous, isothermal plasma with $`kT=2`$ keV, magnetic field strength $`B=45`$ MG, viewing angle $`\mathrm{\Theta }=50\mathrm{°}`$ (angle between magnetic field and observer) and plasma density parameter $`\mathrm{log}\mathrm{\Lambda }=2`$ (see Barrett & Chanmugam 1985, Thompson & Cawthorne 1987). With these parameters the observed cyclotron hump is the $`3^{\mathrm{rd}}`$ harmonic, the $`2^{\mathrm{nd}}`$ is expected to lie in the infrared J-band (centred on 1.2 $`\mu `$m) and the cyclotron fundamental lies at 2.4 $`\mu `$m, at the edge of the K-band. The next higher harmonic, the fourth, is seen in the model as a small hump at 6000 Å. Such a feature can easily be overlooked in somewhat noisy data and can be hidden in a continuum which is strongly affected by Zeeman absorption. Our modeled hump at 8000 Å has the same width as the observed one. Since any inhomogeneity in the emitting plasma, like a variation of the value of the magnetic field strength, tends to smear a cyclotron line, the true plasma temperature is probably below 2 keV.
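The wavelengths quoted above follow from the cyclotron fundamental $`\lambda _{\mathrm{cyc}}=2\pi m_ec^2/(eB)`$ (Gaussian units), with the $`n`$-th harmonic at $`\lambda _{\mathrm{cyc}}/n`$; a quick check for the model value $`B=45`$ MG:

```python
import numpy as np

m_e_c2, e = 8.187e-7, 4.803e-10          # electron rest energy (erg), charge (esu)
B = 45e6                                  # field in the emission region (G)
lam_cyc = 2 * np.pi * m_e_c2 / (e * B)    # fundamental, in cm (~2.4 micron)
for n in range(1, 5):
    print(n, round(lam_cyc / n * 1e8), "Angstrom")  # ~23800, 11900, 7900, 5950
```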
This is the lowest plasma temperature among all polars derived so far from optical cyclotron spectroscopy. It is in particular much lower than the shock temperature of a free-falling accretion stream on the white dwarf, by a factor of 10 for an assumed 0.6 M<sub>⊙</sub> white dwarf. This suggests an accretion scenario termed the ‘bombardment solution’ (Kuijpers & Pringle 1982), where the accretion spot is heated by particle collisions. This scenario has been worked out in detail in a series of papers by Woelk & Beuermann (1992, 1993, 1996) and Beuermann & Woelk (1996). Interpreting the measured temperature as the maximum electron temperature, Figs. 6, 8, and 9 of their 1996 paper suggest that the specific mass accretion rate $`\dot{m}`$ is below 0.1 g cm<sup>-2</sup> s<sup>-1</sup>. Their modelling predicts furthermore that 100% of the accretion luminosity appears as cyclotron radiation. This is in apparent contradiction with the fact that RBS0206 was discovered as an X-ray source. However, AM Herculis binaries are known to have highly variable mass accretion rates on different time scales, and we cannot exclude something like a high accretion state during the RASS and a low accretion state during our spectroscopic observations.
The very low value of the plasma parameter $`\mathrm{\Lambda }`$ supports the applicability of the bombardment solution and predicts an even lower specific mass accretion rate. The plasma parameter is given in terms of the magnetic field strength $`B_{45}=B/45\mathrm{MG}`$, the geometric path length through the plasma $`s_6=s/10^6`$ cm and the electron density $`n_{16}=n_e/10^{16}`$ cm<sup>-3</sup> as $`\mathrm{\Lambda }=1.3\times 10^6s_6n_{16}B_{45}^{-1}`$. With the measured value of $`\mathrm{\Lambda }=100`$ this becomes $`n_{16}\simeq 10^{-4}s_6^{-1}`$. The cooling length $`h`$ of the plasma will not be smaller than $`10^6`$ cm (Beuermann & Woelk 1996, Fig. 2), and by setting the cooling length $`h`$ equal to the geometrical path length $`s`$ the electron density in the cyclotron region must be extraordinarily low, $`n_e\lesssim 10^{12}`$ cm<sup>-3</sup>. For standard composition the post-shock density is $`n_e=3.8\times 10^{17}(\dot{m}/10^{-2}\mathrm{g}\mathrm{cm}^2\mathrm{s}^1)(M_{\mathrm{wd}}/M_{\odot })^{-1/2}(R_{\mathrm{wd}}/10^9\mathrm{cm})^{1/2}`$ cm<sup>-3</sup>, suggesting $`\dot{m}\sim 10^{-3}`$ for the case of RBS0206.
The predicted integrated cyclotron flux (integrated over all harmonics) is about $`2\times 10^{-12}`$ erg cm<sup>-2</sup> s<sup>-1</sup>. For a distance of $`130`$ pc the cyclotron luminosity is $`L_{\mathrm{cyc}}=\pi d^2F_{\mathrm{cyc}}=1\times 10^{30}(d/130\mathrm{pc})^2`$ erg s<sup>-1</sup>. Assuming that 100% of the accretion luminosity is released as cyclotron radiation, as predicted by the Woelk & Beuermann models, the total mass accretion rate is $`\dot{M}=LR_{\mathrm{wd}}/GM_{\mathrm{wd}}=1\times 10^{13}\mathrm{g}/\mathrm{s}=1.5\times 10^{-13}`$ M<sub>⊙</sub> yr<sup>-1</sup>. This value is 2–3 orders of magnitude below the canonical value for a short-period polar, i.e. a system below the period gap, in a high accretion state.
The RASS X-ray data can be fitted with a soft blackbody and a (marginally detected) hard X-ray bremsstrahlung model. The integrated unabsorbed flux of the $`kT_{\mathrm{bb}}=20`$ eV blackbody gives $`5\times 10^{-11}`$ erg cm<sup>-2</sup> s<sup>-1</sup>, which is a factor of 25 more than the derived cyclotron flux. These estimates suggest that the accretion rate has changed by a large amount between the RASS X-ray observations and the optical spectroscopy in November 1998.
## 4 Conclusions
We have presented first spectroscopic and photometric observations of the newly discovered AM Herculis star 1RXS J012851.9–233931 = RBS0206. Although not finally conclusive, our photometry suggests that it is a short-period system with $`P_{\mathrm{orb}}\sim 90`$ min or that it is a system in the period gap with $`P_{\mathrm{orb}}`$ around 140 min. The probable accretion geometry is such that the accreting pole is continuously in view. The discovery spectrum of RBS0206 was taken when the system was in an extreme low state of accretion. This derives from the absence of bright emission lines and the presence of photospheric absorption lines from the white dwarf. Even in the single discovery spectrum, Zeeman and cyclotron lines could be detected. The inferred magnetic field strengths are $`36\pm 1`$ MG for the mean photospheric field and $`45\pm 1`$ MG for the accretion spot. The plasma temperature in the cyclotron emission region is extremely low, $`kT<2`$ keV, compared to the more usual 5 – 20 keV encountered in most AM Her systems. The derived integrated mass accretion rate is 2 – 3 orders of magnitude below the canonical value for short-period polars, indicating a deep low accretion state of that system. RBS0206 seems to be an extraordinarily good target for a detailed study of the magnetic field structure over the white dwarf surface due to the presence of pronounced Zeeman lines and, in addition, a cyclotron line. The Zeeman lines are sensitive to the average surface field, whereas the cyclotron line gives an extra constraint on the field in one particular spot. The system is also an excellent target for studies of cyclotron-line formation in the lowest harmonics by spectroscopy in the J and K bands, where pronounced optical depth effects are expected to occur (Woelk & Beuermann 1996).
###### Acknowledgements.
We thank Klaus Beuermann (Göttingen) for his surface brightness calibration of late-type stars and Boris Gänsicke (Göttingen) for providing a grid of white dwarf atmosphere models. This work was supported by the German Bundesministerium für Bildung, Wissenschaft, Forschung und Technologie (BMBF/DLR) under grants 50 OR 9403 5, 50 OR 9708 6 and 50 QQ 9602 3. The ROSAT project is supported by BMBF/DLR and the Max-Planck-Society.
# Charge Density Wave Ratchet
## Abstract
We propose to operate a locally-gated charge density wave as an electron pump. Applying an oscillating gate potential with frequency $`f`$ causes equally spaced plateaux in the sliding charge density wave current separated by $`\mathrm{\Delta }I=2eNf,`$ where $`N`$ is the number of parallel chains. The effects of thermal noise are investigated.
A metallic gate electrode on the surface of charge density wave (CDW) compounds can cause transistor action by modulating the threshold field for depinning of the collective CDW mode . Several suggested microscopic mechanisms fail to explain the observed large asymmetric gate modulation. Nevertheless, we may expect that the sliding mode can be manipulated even more effectively on thin-films of charge density wave materials which have been grown recently . Here we propose an electron-pump based on structured CDW films, which we call a “CDW ratchet”.
In the early nineties the “single-electron turnstile” was realized in both metals and semiconductors. In these structures electrons are transferred one by one by using time-dependent gate voltages to modulate the Coulomb blockade or by alternately raising and lowering tunnel barrier heights. The dc current through such a device scales linearly with the applied frequency, $`I=ef`$, and the current-(bias)voltage characteristics shows equally spaced current plateaux. The single-electron pump may find an application as a current standard, since the oscillation frequency $`f`$ can be controlled to high precision. The accuracy of the current is affected by numerous error mechanisms such as offset charges, cycle missing, thermal fluctuations, or co-tunneling. Some of these errors may be suppressed in linear arrays of tunnel junctions or in periodically gated quantum wires, but devices with a charge density wave ground state should not suffer from these drawbacks at all.
Charge density waves occur in quasi one-dimensional metals, like NbSe<sub>3</sub> or K<sub>0.3</sub>MoO<sub>3</sub>. Below a critical temperature $`T_c`$ ($`183`$ K for $`\mathrm{K}_{0.3}\mathrm{MoO}_3`$), the ground state consists of a lattice distortion coupled to an electron density modulation $`n_{CDW}\propto |\mathrm{\Delta }(x,t)|\mathrm{cos}\left[2k_Fx+\chi (x,t)\right]`$, where $`2|\mathrm{\Delta }|`$ is the gap in the quasi-particle spectrum and the phase $`\chi `$ denotes the position of the CDW relative to the crystal lattice. Incommensurate CDW’s support a unique sliding mode of transport above a weak threshold field. This collective motion of the CDW carries electrical current proportional to $`\partial _t\chi `$ and is the source of narrow-band noise and non-linear conductance characteristics. The threshold field arises from the interaction of the CDW with defects or impurities in the system, which can, as mentioned above, be manipulated by external gate electrodes.
As the experimental setup for the CDW electron pump we envisage a thin strip of CDW material consisting of $`N`$ chains, with dimensions of the order of the Fukuyama-Lee-Rice coherence lengths ($`\xi _{\parallel }`$ is typically micrometers and $`\xi _{\parallel }/\xi _{\perp }\sim 10`$–$`100`$), such that the CDW is characterized by a single degree of freedom. A thin metallic gate electrode separated by an insulating layer is placed on top, perpendicular to the CDW chains, and connected to an oscillating voltage. Alternatively, one could also think of the tip of an STM as gate electrode. The dynamics of the CDW with an oscillating time-dependent gate potential can be described by the classical equation of motion for the phase $`\chi (t)`$ in the single particle model
$$\eta \frac{\partial \chi }{\partial t}+V_p(\chi ;t)=V_b+V_n(t)$$
(1)
where $`\eta =\hbar R_c/eR_Q`$, $`R_c`$ is a damping resistance and $`R_Q=h/2Ne^2`$ is the $`N`$-mode quantum resistance. The pinning potential $`eV_p`$ is periodic in the phase and contains an explicit time dependence, $`V_p(\chi +2\pi ;t+2\pi /\omega )=V_p(\chi ;t)`$, where $`\omega =2\pi f`$ is the frequency of the oscillating gate potential. The driving term consists of the bias voltage $`V_b`$ and a thermal (Nyquist) noise term $`V_n(t)`$ with $`<V_n(t)>=0`$ and $`<V_n(t)V_n(t^{\prime })>=2\eta k_BT\delta (t-t^{\prime })/e`$. We will disregard the inertia since CDW’s are in general strongly overdamped due to their large effective mass. Note that there is no oscillating drive term, which is known to lead to phase-locking and Shapiro steps in the current–voltage characteristic. The interpretation of Eq. (1) is straightforward: a thermally activated classical particle moving in a tilted washboard potential, the amplitude of which changes periodically in time. The applied electric field directs the motion of the CDW, and the oscillating gate guides the CDW downwards, thus causing an electric current.
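Equation (1) is straightforward to integrate numerically. Below is a minimal Euler–Maruyama sketch in dimensionless variables (voltages in units of $`V_T`$, time in units of $`\eta /V_T`$, matching the parameters introduced for Eq. (3) below); the step size and parameter values are illustrative, not optimized. The mean winding rate $`<d\chi /d\tau >`$ gives the dc current, since each $`2\pi `$ phase slip transports the charge $`2eN`$; on the $`m`$-th plateau $`<d\chi /d\tau >=m\stackrel{~}{\omega }`$, i.e. $`I=2eNmf`$.

```python
import numpy as np

def winding_rate(vb, vg, w, D=0.0, dt=5e-3, n_steps=400_000, seed=0):
    """Euler-Maruyama integration of the dimensionless form of Eq. (1):
    dchi/dtau = vb - (1 + vg*sin(w*tau))*sin(chi) + sqrt(2D)*xi(tau).
    Returns the mean phase-winding rate <dchi/dtau>."""
    rng = np.random.default_rng(seed)
    kick = rng.standard_normal(n_steps) * np.sqrt(2.0 * D * dt)
    chi, tau = 0.0, 0.0
    for k in range(n_steps):
        chi += (vb - (1.0 + vg * np.sin(w * tau)) * np.sin(chi)) * dt + kick[k]
        tau += dt
    return chi / tau

# Scanning the bias at fixed gate drive traces out the staircase of Fig. 1:
# ratios close to integers m indicate phase locking, i.e. the CDW advances
# m wavelengths per gate cycle and I = 2eNmf, Eq. (8).
w, vg = 0.4, 0.5
for vb in (0.3, 0.7, 0.95):
    print(vb, winding_rate(vb, vg, w) / w)
```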
In the presence of thermal noise, the dynamics of the CDW can be described by a Fokker-Planck equation for the probability density $`P(\chi ,\tau )`$ of finding the phase between $`\chi `$ and $`\chi +d\chi `$ at time $`\tau `$. In the following we will assume that the pinning potential may be approximated by its lowest harmonic:
$$V_p(\chi ;t)=(V_T+\alpha V_g\mathrm{sin}\omega t)\mathrm{sin}\chi ,$$
(2)
where $`V_T`$ is a constant threshold potential, which is modulated by the fraction $`\alpha `$ of the oscillating gate potential $`V_g.`$ In this case the Fokker-Planck equation reads
$$\frac{\partial P}{\partial \tau }=D\frac{\partial ^2P}{\partial \chi ^2}+(1+\stackrel{~}{V}_g\mathrm{sin}\stackrel{~}{\omega }\tau )\frac{\partial }{\partial \chi }(\mathrm{sin}\chi P)-\stackrel{~}{V}_b\frac{\partial P}{\partial \chi },$$
(3)
where we introduced the dimensionless parameters $`\stackrel{~}{V}_g=\alpha V_g/V_T`$, $`\stackrel{~}{V}_b=V_b/V_T`$, $`D=k_BT/eV_T`$, and $`\stackrel{~}{\omega }=\eta \omega /V_T`$. The initial condition at time $`\tau =\tau _0`$ is given by $`P(\chi ,\tau _0)=\delta (\chi -\chi _0).`$ The problem is now reduced to the diffusion of a classical particle in a time-dependent periodic potential. By substituting the Fourier series
$$P(\chi ,\tau )=\sum _{n=-\infty }^{\infty }P_n(\tau )e^{in\chi }$$
(4)
into Eq. (3) we obtain the equation for the Fourier components $`P_n(\tau )`$
$`{\displaystyle \frac{\partial P_n}{\partial \tau }}=`$ (5)
$`-(Dn^2+in\stackrel{~}{V}_b)P_n-(1+\stackrel{~}{V}_g\mathrm{sin}\stackrel{~}{\omega }\tau ){\displaystyle \frac{n}{2}}[P_{n+1}-P_{n-1}].`$ (6)
The total dc current $`I`$ through the system is defined as
$$I=\frac{V_b}{R_c}-\frac{1}{2iR_c}<(V_T+V_g\mathrm{sin}\stackrel{~}{\omega }\tau )[P_1(\tau )-P_{-1}(\tau )]>,$$
(7)
where the brackets denote time averaging.
We first consider the $`T=0`$ (noiseless) limit (Eq. (1)). Without gate voltage the current is obviously given by $`I=0`$ for $`V_b<V_T`$ and $`I=\sqrt{V_b^2-V_T^2}/R_c`$ for $`V_b\geq V_T.`$ Figure 1 shows the $`I`$–$`V_b`$ characteristic for different values of the external frequency $`f`$. Distinct current plateaux appear in the $`I`$–$`V_b`$ below the normal threshold potential. Each plateau corresponds to the displacement of a quantized number of wave fronts in one cycle. The steps are equally separated by $`\mathrm{\Delta }I=2eNf`$, where the factor $`2`$ reflects spin-degeneracy. Far above threshold the differential resistance approaches its normal value $`R_c`$. Furthermore, the current-frequency relation has a fan structure, as is shown in Fig. 2. The current scales linearly with frequency
$$I=2emNf,$$
(8)
where the integer $`m`$ is the number of displaced wave lengths in one cycle. In the limit $`\omega 0`$ the current approaches the constant value as
$$I(\omega 0)=\frac{1}{2\pi R_c}\mathrm{Re}_0^{2\pi }𝑑\tau \sqrt{V_b^2(V_T+V_g\mathrm{sin}\tau )^2}.$$
(9)
The current drops to zero at the frequency which corresponds to the time to displace the CDW by one wave length.
Next we investigate the effects of the thermal noise at finite temperatures. In the case where the external oscillating gate potential is absent, the stationary state solution to Eq. (6) is easily calculated as
$$P_n(\tau \rightarrow \infty )=\frac{I_{n-iz_0}(z)}{I_{-iz_0}(z)},$$
(10)
where $`I_q(z)`$ is a modified Bessel function with imaginary index $`q`$, $`z_0=\stackrel{~}{V}_b/D`$ and $`z=1/D`$. Using the relation $`P_n^{*}=P_{-n}`$, the CDW current is obtained from Eq. (7)
$$I=\frac{V_b}{R_c}-\frac{V_T}{R_c}\mathrm{Im}\frac{I_{1-iz_0}(z)}{I_{-iz_0}(z)}.$$
(11)
This static result is exactly analogous to the case of strongly overdamped Josephson junctions with an external noise current. The main effect of finite temperatures is a smoothening of the square-root threshold singularity near $`V_T`$ and an exponentially small (for $`1/D\gg 1`$) but nonzero conductance as $`V_b\rightarrow 0`$. In the presence of the oscillating gate we solve Eq. (6) numerically. In Fig. 3 the $`I`$–$`V_b`$ curves at external frequency $`\stackrel{~}{\omega }=0`$ and $`\stackrel{~}{\omega }=0.4`$ are shown for different temperatures $`D=k_BT/eV_T.`$ As expected, finite temperatures smear out the sharp transitions between the current plateaux. This is more pronounced at larger bias, since the escape rate due to the thermal fluctuations increases. At even higher temperatures, $`1/D\lesssim 5`$, the plateaux disappear and the resistance becomes linear, $`V_b=IR_c`$.
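Equation (11) itself is easy to evaluate, e.g. with mpmath, which supports modified Bessel functions of complex order; the sketch below (currents in units of $`V_T/R_c`$) reproduces the thermal rounding of the threshold.

```python
import mpmath as mp

mp.mp.dps = 40   # extra precision for Bessel functions of complex order

def cdw_current(vb, D):
    """Static CDW current of Eq. (11), in units of V_T/R_c; vb = V_b/V_T."""
    z, z0 = 1.0 / D, vb / D
    ratio = mp.besseli(1 - 1j * z0, z) / mp.besseli(-1j * z0, z)
    return vb - float(mp.im(ratio))

for vb in (0.5, 1.0, 1.5):
    # small D: sharp threshold (~sqrt(vb^2 - 1) above it); larger D: rounding
    print(vb, cdw_current(vb, 0.05), cdw_current(vb, 0.5))
```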
For operation as a current standard, an individual CDW ratchet must first be gauged to determine the number of parallel chains $`N`$. Once known, the linear dependence of the current on $`N`$ in principle improves the accuracy of the current quantization as compared to single electron pumps. The band-width of single-electron turnstile devices is limited by the competition between large detection currents (large frequencies) and low noise levels (low frequencies). The present CDW device, however, allows for large currents at low frequencies, and is robust to the error sources of the single-electron pump.
Since CDW’s have a large single-particle energy gap $`2|\mathrm{\Delta }|`$, the quasi-particle contribution to the current is of the order $`\mathrm{exp}(-2|\mathrm{\Delta }|/k_BT)`$ and can be neglected for sufficiently low temperatures $`T\ll T_c`$. The gap also makes the system robust to static disorder and single-particle quantum fluctuations in the total charge per unit cell. Phase-slip processes at the current contacts can be avoided in a four-terminal measurement, where the voltage probes are located far from the current source and drain. Higher harmonics of the pinning potential Eq. (2) will give corrections only near the thresholds. The elasticity of the CDW can phenomenologically be described by geometrical capacitances $`C`$ of the leads, which define the typical Coulomb energy for phase deformations. The assumption of a rigid CDW is then justified in the limit of weak pinning and low temperatures, such that $`|\mathrm{\Delta }|\gg e^2/C\gg eV_T\gg k_BT`$. We disregarded the role of macroscopic quantum tunneling of the CDW, which we consider less important than coherent co-tunneling in single-electron devices because of the high effective mass.
We conclude by summarizing our results. We present the idea of a locally gated CDW as an electronic ratchet. An oscillating gate potential causes equidistant current plateaux. The current scales linearly with the external frequency and the number of chains. The coherent electronic ground state of CDW’s and the intrinsic property of parallel conducting chains makes this device a serious candidate for the current standard with possible higher accuracy than single electron devices, even in the presence of thermal noise.
It is a pleasure to thank Behzad Rejaei, Michel Devoret, Erik Visscher, Herre van der Zant, and Yuli Nazarov for stimulating discussions. This work is part of the research program for the “Stichting voor Fundamenteel Onderzoek der Materie” (FOM), which is financially supported by the ”Nederlandse Organisatie voor Wetenschappelijk Onderzoek” (NWO). This study was supported by the NEDO joint research program (NTDP-98).
# Magnetic behavior of Eu<sub>2</sub>CuSi<sub>3</sub>: Large negative magnetoresistance above the Curie temperature
## Abstract
We report here the results of magnetic susceptibility, electrical-resistivity, magnetoresistance (MR), heat-capacity and <sup>151</sup>Eu Mössbauer effect measurements on the compound Eu<sub>2</sub>CuSi<sub>3</sub>, crystallizing in an AlB<sub>2</sub>-derived hexagonal structure. The results establish that the Eu ions are divalent, undergoing long-range ferromagnetic ordering below (T<sub>C</sub>=) 37 K. An interesting observation is that the sign of the MR is negative even at temperatures close to 3T<sub>C</sub>, with a magnitude that increases with decreasing temperature, exhibiting a peak at T<sub>C</sub>. This observation, made here for the first time for a Cu-containing magnetic rare-earth compound, is of relevance to the field of colossal magnetoresistance.
Large negative magnetoresistance (LNMR) in perovskite-based manganites \[colossal magnetoresistance (CMR) systems\], peaking in the vicinity of the Curie temperature (T<sub>C</sub>), is one of the most important observations in modern condensed matter physics. In this regard, our earlier observation of negative magnetoresistance, the magnitude of which increases with decreasing temperature and peaks near the magnetic ordering temperature (T<sub>o</sub>), in indirect-exchange controlled (not only ferromagnetic, but also antiferromagnetic) magnetic systems like Gd, Tb and Dy based alloys, calls for mechanisms other than double-exchange and Jahn-Teller effects to explain such features. We have proposed the need to consider the possible role of magnetic polaronic effects even in metallic alloys. The importance of such an observation and hypothesis is apparent also from similar recent reports, both theoretical and experimental, from other groups as well. Apparently, it is not always necessary to observe such features in every Gd alloy, and in fact many such alloys show the normally expected, negligibly small positive magnetoresistance (MR) at all temperatures above T<sub>o</sub>. However, it was noted that such features, if observed, are restricted to Pd, Pt, Co, Ni or Mn containing compounds, but not to Cu, Ag and Au containing ones, which raises the question whether such anomalies result from the intrinsic tendency of the d bands of the former class of elements to be easily polarised by the 4f local moment. In this article, among other magnetic properties, we emphasize the observation of LNMR in a (Eu-based) Cu-containing alloy, for the first time over a wide range of temperature above the Curie temperature (T<sub>C</sub>); Cu being non-magnetic, this result endorses our view that LNMR is a magnetic precursor effect of 4f-ion long-range magnetic ordering.
The investigation of Eu<sub>2</sub>CuSi<sub>3</sub>, crystallizing in an AlB<sub>2</sub>-type hexagonal structure has been undertaken considering current interest in synthesizing and investigating magnetic properties of ternary rare-earth/actinide compounds with the atomic ratios of 2:1:3, particularly those adopting variants of either the tetragonal ThSi<sub>2</sub> or hexagonal AlB<sub>2</sub> structure types. It is to be noted that the reports on Eu compounds of this type are generally rare. Recently, we reported the synthesis and interesting magnetic behavior of the compound, Eu<sub>2</sub>PdSi<sub>3</sub> \[Ref. 19\]. The scarcity of the reports on such Eu compounds is presumably due to the difficulties in controlling the Eu stoichiometry during sample preparation. We have carried out electrical resistivity ($`\rho `$), MR, magnetization (M) and magnetic susceptibility ($`\chi `$), heat-capacity (C) and <sup>151</sup>Eu Mössbauer effect (ME) studies on Eu<sub>2</sub>CuSi<sub>3</sub>, in order to understand the magnetic behavior of this alloy, the results of which are reported here.
The sample was prepared by induction melting stoichiometric amounts of the constituent elements in an inert atmosphere. The ingot was melted four times, and the loss due to evaporation of Eu after the first melting was compensated by adding a corresponding amount of Eu. We noticed negligible loss of Eu in subsequent meltings. The ingot was homogenised in an evacuated, sealed quartz tube at 800<sup>o</sup>C. The X-ray diffraction pattern, obtained by employing Cu K<sub>α</sub> radiation, established that the present compound crystallizes in an AlB<sub>2</sub>-derived hexagonal structure. Since we do not see any superstructure lines in the x-ray diffraction pattern, we presently believe that there is an intrinsic disorder between the Cu and Si sites, unlike in Eu<sub>2</sub>PdSi<sub>3</sub> (Ref. 19). The lattice constants obtained are a = 4.095 Å and c = 4.488 Å. No additional lines attributable to any other phase within the detection limit of x-ray diffraction could be seen.
The $`\rho `$ measurements (2-300 K) were performed by a conventional four-probe method, employing silver paint for making electrical contacts. The sample is found to be very porous and tends to crumble to powder with ageing, and hence there are difficulties in arriving at absolute values of $`\rho `$. The $`\chi `$ measurements (2-300 K) were performed employing a commercial SQUID magnetometer in the presence of several magnetic fields. The C data (2-70 K) were obtained by a semi-adiabatic heat-pulse method. The (longitudinal mode) MR data were obtained in the presence of a magnetic field (H) of 30 kOe in the temperature range 4.2-100 K and also as a function of H at selected temperatures (4.2, 25 and 50 K). <sup>151</sup>Eu ME measurements at selected temperatures (4.2 - 300 K) were performed employing a <sup>151</sup>SmF<sub>3</sub> source (21.6 keV transition) in the transmission geometry.
The results of $`\rho `$ (normalised to the value at 300 K), inverse $`\chi `$ (measured in the presence of 2 kOe) and C measurements are shown in Fig. 1, only below 100 K; the data at higher temperatures are not shown as there are no interesting features to be highlighted. It is clear that there is a sudden drop in $`\rho `$ at 37(1) K as the temperature is lowered. This drop arises from the onset of magnetic ordering as evidenced below. There is a distinct anomaly even in the temperature dependent C data around 37 K, which originates from magnetic ordering. The $`\chi `$ is found to exhibit Curie-Weiss behaviour in the temperature range 40-300 K and the effective moment obtained from this linear region is found to be 7.8 $`\mu `$<sub>B</sub>/Eu, which is very close to that expected for divalent Eu ions, thereby confirming that all the Eu ions are divalent in this compound. This also suggests that there is no magnetic moment on Cu. The value of the paramagnetic Curie-temperature ($`\theta `$<sub>p</sub>) is found to be 38 K; inverse $`\chi `$ tends to saturate below the same temperature. These results establish that Eu ions undergo a ferromagnetic-type of magnetic ordering at 37(1) K; this temperature is practically the same as $`\theta `$<sub>p</sub>, indicating absence of competition from antiferromagnetic interaction, a situation different from that observed for Eu<sub>2</sub>PdSi<sub>3</sub> (Ref. 19).
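The divalence assignment rests on simple free-ion arithmetic: for Eu<sup>2+</sup> (L = 0, S = 7/2, g = 2) the expected effective moment is $`g\sqrt{S(S+1)}\approx 7.94\mu _B`$, close to the measured 7.8 $`\mu _B`$/Eu, whereas Eu<sup>3+</sup> (J = 0) would contribute no Curie term apart from Van Vleck corrections. A short check, together with the usual conversion to the molar Curie constant:

```python
import math

S = 3.5                              # Eu2+: L = 0, J = S = 7/2, Lande g = 2
mu_eff = 2 * math.sqrt(S * (S + 1))  # free-ion moment in mu_B -> 7.94
C_mol = (mu_eff / 2.828) ** 2        # molar Curie constant, emu K / (mol Eu)
print(mu_eff, C_mol)                 # ~7.94 mu_B (measured: 7.8), ~7.88
```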
There are also qualitative changes in the low temperature susceptibility behavior measured at different fields, as seen in Fig. 2. In addition to the sharp rise of $`\chi `$ around 40 K due to the onset of ferromagnetic ordering, there is a peak at about 5 K followed by a drop at lower temperatures for the data recorded in the presence of 100 or 1000 Oe, a feature absent (but showing a weak upturn) for the application of a higher field (say, 2 kOe). This might imply the presence of another magnetic transition around 5 K, which is modified with increasing magnetic field. This is corroborated by the observation of a very weak peak in C around 5 K (visible if the low temperature data are plotted on an expanded scale). Presumably, the two magnetic transitions arise from two types of Eu ions with different chemical environments, which may be intrinsic to this 2-1-3 class of alloys; possibly the degree of Cu-Si disorder is actually small. Evidence for two types of Eu ions can be found even in the Mössbauer data, and the broadening of the spectra of the minority site due to the transferred hyperfine field from the majority site (discussed below) establishes that the minority Eu is not extrinsic to the sample. We also note that the zero-field-cooled and field-cooled data diverge at T<sub>C</sub> as the temperature is lowered, presumably due to the anisotropy of the material; this divergence cannot arise from a spin-glass phenomenon, considering that the features in C and $`\rho `$ are sharp at the magnetic transition. We may also add that the isothermal M, measured as a function of H (Fig. 3), shows hysteretic behavior with a small coercive field (300 Oe) at 2 K and a still smaller coercive field at 10 K, typical of soft ferromagnets. It is obvious from Fig. 3 that the isothermal M does not saturate even at high fields and that the value, say at 2 K for H= 50 kOe, is far below the full value of 14$`\mu `$<sub>B</sub>/formula unit, which may suggest that the ferromagnetism could be of a canted type.
We have also performed <sup>151</sup>Eu Mössbauer effect measurements as a function of temperature in order to get a microscopic picture of the magnetism. It may be remarked that <sup>151</sup>Eu ME studies at 300 K for the composition EuCu<sub>0.5</sub>Si<sub>1.5</sub> were reported in Ref. 24 several years ago, as a continuation of substitutional studies in EuSi<sub>2</sub>, but to our knowledge there has been no further study on this alloy. Typical spectra obtained at various temperatures are shown in Fig. 4, reflecting the magnetic behavior below 38 K. The spectrum at 38 K indicates the absence of magnetic order at this temperature, with the dominant feature at -10.5 mm/s characteristic of divalent Eu ions. A weak feature around zero velocity (at -0.2 mm/s) with about 3% fractional intensity (as derived from the low temperature spectra) arises from trivalent Eu ions, presumably produced by surface oxidation when powdering the sample for the preparation of the absorber. The magnetic hyperfine split spectra below 38 K could not be consistently fitted with only one site; the assumption of two sites with the same intensity ratio of 3:1 as in Eu<sub>2</sub>PdSi<sub>3</sub> (Ref. 19), however, resulted in a better description of the observed data. The isomer shifts of the two sites are almost identical (-10.4 and -10.7 mm/s), in contrast to the distinctly different values (-10.0 and -8.6 mm/s) observed in Eu<sub>2</sub>PdSi<sub>3</sub>. The hyperfine fields, B<sub>eff</sub>, of the two sites exhibit, similar to the Eu<sub>2</sub>PdSi<sub>3</sub> case, a drastically different temperature dependence, as displayed in Fig. 5. At 4.2 K, the values of B<sub>eff</sub>, -290 and -315 kOe, are rather close for the majority and minority sites, unlike the situation in Eu<sub>2</sub>PdSi<sub>3</sub> (in which case the corresponding values are -408 and -255 kOe, respectively). Since the unit-cell volumes of these Cu and Pd containing alloys are almost identical, the differences in isomer shifts and hyperfine fields between these two compounds may be attributed to the different transition metal ion surroundings for Eu. Obviously, the influence of Pd is stronger in bringing about a local modification of the conduction electron characteristics around the two Eu sites. In addition, Pd and Cu environments also modify the nature of magnetic ordering; while in the Pd case there is evidence for antiferromagnetic coupling, the magnetic ordering is of a ferromagnetic type in the Cu case. It should also be noted that the plot of B<sub>eff</sub> versus temperature does not follow the magnetization curve expected for an S=7/2 spin system, which may be indicative of some degree of crystallographic disorder. There is a transfer of hyperfine field to the minority Eu site below 38 K, as indicated by the broadening of the spectral features of this site; the minority site, however, orders magnetically only at lower temperatures. Around 6 K, similar to a feature in the $`\chi `$ data, there is a noticeable change in the B<sub>eff</sub>(T) curves for both the sites, indicating a rearrangement of the spin direction, possibly resulting in a canted ferromagnetic alignment. From the absence of a significant reduction in the value of B<sub>eff</sub> below that due to the core-polarization contribution, following the arguments given in our earlier article, we conclude that the antiferromagnetic coupling is negligible.
This is consistent with the observation that not only the sign but also the magnitude of $`\theta `$<sub>p</sub> matches T<sub>C</sub>.
We now present an observation of importance to the field of CMR. An application of H (say, 30 kOe) depresses the value of the electrical resistance, resulting in a negative MR even around 100 K, which is far above T<sub>C</sub> (Fig. 6). The magnitude of MR, defined as $`\mathrm{\Delta }\rho /\rho `$= \[$`\rho `$(H)-$`\rho `$(0)\]/$`\rho `$(0), grows with decreasing temperature, peaking at T<sub>C</sub>, as shown in Fig. 6, similar to the behavior observed in the by-now well-known perovskite-based La manganites (CMR systems). Needless to mention, negative MR in the ferromagnetically ordered state is usually expected. The data were also collected (Fig. 6, bottom) as a function of H at selected temperatures both above and below T<sub>C</sub> in order to correlate the values with those obtained from the temperature dependent measurements. It is clear that the value is as large as about -9% at 70 kOe even at 50 K. We also remark that there is a sign crossover around 120 K, beyond which one sees a positive MR; for instance, in a field of 30 kOe at 200 K, the value is about 2%, which reduces to negligibly small (but positive) values at 300 K (not shown in the figure).
To conclude, the compound Eu<sub>2</sub>CuSi<sub>3</sub>, crystallizing in an AlB<sub>2</sub>-derived hexagonal structure, exhibits long-range ferromagnetic ordering at 37 K. The temperature dependent magnetoresistance behavior is qualitatively similar to that observed in CMR systems, though double-exchange or Jahn-Teller mechanisms are not operative in such Eu systems. Similar temperature dependent behavior in the paramagnetic state has been observed by us in the past in some Gd, Tb, Dy based alloys. The uniqueness of the present result is that such an observation is made for the first time in a Eu-based / Cu-based alloy. The Cu ions do not carry any magnetic moment, and therefore the present results establish that the observed magnetoresistance anomalies in the vicinity of the magnetic transition temperature are solely related to magnetic precursor effects due to 4f magnetism. We suggest that the alloys exhibiting such features should be viewed together with LNMR systems to understand this phenomenon better. The results also give a clue to identifying potential candidates exhibiting large MR at room temperature for applications, in the sense that one can search for magnetic precursor effects in compounds undergoing magnetic ordering at temperatures rather close to (but below) 300 K.
The authors thank R. Pöttgen for a discussion.
Electronic address: sampath@tifr.res.in
# Electrostatics of Inhomogeneous Quantum Hall Liquid
## I Introduction
During the last few years, new methods have been developed for direct imaging of the charge and current distributions of the quantum Hall liquid. They are especially effective if the distribution changes along the sample, either because of a sweeping magnetic field or due to an intentionally created gradient of the electron density. These methods can be useful for studying a spatial modulation of the charge density in the electron liquid, which appears due to electron-electron interaction. Another very robust and very simple reason for inhomogeneity of the electron density is fluctuations of charged donors in the doped region of the semiconductor. These fluctuations cause changes of the electron density which are strongly dependent on the screening. On the other hand, the screening decreases drastically each time the chemical potential passes through an integer or a fractional gap. Thus, both the screening and the electron density itself strongly depend on the magnetic field. To study the comparatively weak effects of the “internal” modulation of the liquid one should first be able to discriminate the “external” effects. Moreover, the screening properties of the quantum Hall liquid are an interesting problem, and imaging of the fluctuations induced by donors may provide new opportunities for a quantitative understanding of this problem.
Theoretical studies of the screening properties have been conducted by many people. In the first part of this paper I discuss the hierarchy of lengths and magnitudes of the density fluctuations in the macroscopically homogeneous liquid at different values of the parameters. These results were partially obtained in the papers ; however, since the local density fluctuations were not an experimental object at the time, these fluctuations were not properly described. I concentrate here on the thermodynamic density of states $`dn/d\mu `$. In fact, fluctuations of the density make the liquid compressible at all values of the average density. This means that $`dn/d\mu `$ is non-zero even when the chemical potential $`\mu `$ is inside the gap. Note that from the microscopic point of view a system with long-range disorder is still a mixture of metallic and incompressible phases. Finite compressibility appears in the electrostatics, where the electric field is averaged over scales larger than the sizes of typical fluctuations. Depending on the sizes of the tips which are used for imaging and the sizes of the fluctuations, one can measure either average values, like $`dn/d\mu `$, or the fluctuations themselves.
In the second part of the paper I consider a system with a density gradient. In this case the average occupations of different Landau levels (LL's) depend on the coordinate. The problem is that between the regions occupied by electrons of two different LL's there is a region of “incompressible liquid”, where the density is supposed to be constant. This problem has been previously addressed by Chklovskii, Shklovskii and Glazman, referred to below as CSG. They have presented an analytical solution of the equations of non-linear screening in a magnetic field, completely ignoring disorder. Their solution contains a strip of incompressible liquid which separates compressible regions of two different LL's. From the electrostatic point of view this strip has a dipole moment per unit length which depends on the magnetic field.
I consider the case when the applied gradient of density is much smaller than the random gradients created by disorder. In this case the transition region is a random mixture of the different compressible phases and the incompressible phase. The system is neutral on average if one takes into account a realistic value of $`dn/d\mu `$. However, there are strong fluctuations of the density, which are the only manifestation of the transition region.
## II Density fluctuations in macroscopically homogeneous system
I consider a model which consists of the plane with the two-dimensional electron liquid (TDEL) in a perpendicular magnetic field. Only a few lowest LL's are supposed to be occupied. The parallel plane with randomly distributed charged donors is at a distance $`s`$ from the TDEL. The average density of the charged donors is $`C`$. We assume that $`s`$ is much larger than both the average distance between electrons and the distance between donors. All spatial harmonics of the donor fluctuations with wavelength $`R`$ smaller than $`s`$ create an exponentially small potential in the plane of the TDEL. The mean square fluctuation of the donor charge in the square $`s\times s`$ is $`e\sqrt{Cs^2}`$, while the density fluctuation is of order $`\sqrt{C}/s`$. Such fluctuations create a random potential of order of $`W=e^2\sqrt{C}/\kappa `$, where $`\kappa `$ is the lattice dielectric constant. This potential can be successfully screened by a small redistribution of the electron density if the average density $`n`$ of the highest partially occupied LL is larger than $`\sqrt{C}/s`$. This type of screening, with $`\delta n/n\ll 1`$, is called linear screening. Because of the linear screening the potential of the TDEL can be considered as a constant. From the electrostatic point of view the TDEL in this case is a metal; however, $`dn/d\mu `$ is negative.
The ratio of the gap $`\mathrm{\Delta }`$ to the energy $`W`$ is an important parameter of the problem. For integer gaps in GaAs-based structures $`\mathrm{\Delta }=W`$ in a magnetic field $`B\approx 2\mathrm{T}`$ if $`C=10^{11}\mathrm{cm}^{-2}`$. In what follows it is convenient to consider separately two different cases.
### A $`\mathrm{\Delta }\gg W`$.
In this subsection I refer to the work where the non-linear screening has been considered in the one-level approximation, ignoring holes in the lower LL. The breakdown of the linear screening has been found at $`n_c=0.42\sqrt{C}/s`$. It happens very sharply, and at lower density the chemical potential goes down as
$$\mu =-\frac{W}{14}\left(\frac{C^{1/2}}{ns}-2.4\right)=-\frac{WC^{1/2}}{14s}\left(\frac{1}{n}-\frac{1}{n_c}\right),$$
(1)
where $`n<n_c`$.
The computer modeling does not show any deviation from Eq. (1) in the region defined by the inequalities $`0>\mu >-2W`$ and $`2.4<C^{1/2}/ns<40`$. Thus, if $`\mathrm{\Delta }<4W`$, one can use Eq. (1) to find the thermodynamic density of states $`dn/d\mu `$. Note that $`d\mu /dn`$ as obtained from Eq. (1) is positive. This approximation ignores the internal energy of the TDEL which comes from the interaction and which is responsible for the negative compressibility. The chemical potential of an ideal interacting system can be just added to the right hand part of Eq. (1). Then one can find that the compressibility changes sign with decreasing $`n`$ at $`n\approx n_c`$.
In the region of the non-linear screening the characteristic size $`R`$ of the potential fluctuations should increase with decreasing density as $`\sqrt{C}/n`$. This result has been first obtained by Shklovskii and Efros for the 3d non-linear screening. It can be easily generalized to the 2d case as well. At $`n=n_c`$, the size $`R`$ is of the order of the linear screening radius. This length is supposed to be much smaller than all characteristic lengths of the non-linear screening, which start with the spacer length $`s`$. Assuming that the linear screening radius is zero, one can propose the equation
$$R=\frac{b\sqrt{C}}{n}\left(0.42-\frac{ns}{\sqrt{C}}\right)=\frac{sb}{n}(n_c-n),$$
(2)
where $`b`$ is a numerical constant and $`n<n_c`$. For a very rough estimate of the constant $`b`$ one can analyze the size effect in computer simulation of the chemical potential at small $`n`$. Such an analysis shows that $`b`$ is not very far from unity.
At $`\mu <-\mathrm{\Delta }/2`$ the screening by the holes in the lower LL becomes important. One can rewrite Eq. (1) in such a form that it will be applicable when $`\mu `$ is near the lower LL
$$\mu =-\mathrm{\Delta }+\frac{W}{14}\left(\frac{C^{1/2}}{ps}-2.4\right)=-\mathrm{\Delta }+\frac{WC^{1/2}}{14s}\left(\frac{1}{p}-\frac{1}{n_c}\right),$$
(3)
where the density of holes $`p`$ in the lower LL is less than $`n_c`$.
The Eqs. (1,3) are not exact in the middle of the gap. Nevertheless, one can use them to estimate the electron and hole densities at $`\mu =-\mathrm{\Delta }/2`$ as
$$n_m=p_m=\frac{e^2C}{7\kappa s\mathrm{\Delta }}.$$
(4)
The electrostatic potential in the plane of the TDEL changes with decreasing electron density $`n`$ in the following way. At $`n>n_c`$ a strong linear screening makes the change of the potential negligible. Peaks in the potential appear when $`n<n_c`$. In the planar regions the energy equals $`\mu `$ and the electron density is not constant. We call these regions metallic. In the regions of incompressible liquid the potential energy exceeds $`\mu `$. The percolation through the metallic regions disappears at $`n_p=0.11\sqrt{C}/s`$. At $`\mu =-\mathrm{\Delta }/2`$ almost all the area is occupied by the incompressible liquid. When $`\mu `$ comes closer to the lower LL, metallic regions appear again, but this time they consist of the holes in the lower LL. The percolation through this new metal appears at hole density $`p_p=n_p`$ and, at $`p_c=n_c`$, the linear screening transforms all the area into metal.
In my early papers I predicted that the width of the plateaus of the quantum Hall effect in a clean material should be $`2n_p`$. Later on it became clear that the low-temperature experimental values are much larger, and this contradiction is still not resolved completely. Koulakov et al. argued that the plateaus become larger because of the pinning of the modulated TDEL. Hopefully, simultaneous measurements of the electron density distribution and the quantum Hall effect may answer this question.
### B $`\mathrm{\Delta }\ll W`$.
The distribution of electron density in this case can be obtained in the framework of an analytical theory which has been constructed for the fractional gaps to explain the capacitance data by Eisenstein et al. This theory can be reformulated for the integer gaps in the following way.
The non-linear screening appears at the same value of the electron density in the upper LL, namely at $`n_c=0.42\sqrt{C}/s`$. With decreasing $`n`$ the random potential soon becomes of the order of $`\mathrm{\Delta }`$. The plane with the TDEL consists of metallic regions of two different types. One type contains electrons of the upper LL and the other one contains holes of the lower LL. The typical size of these regions is of the order of the spacer width $`s`$. A simple electrostatic estimate shows that the width of the incompressible strips $`l`$ between these two metals is of the order of $`s\sqrt{\mathrm{\Delta }/W}`$. Since $`l\ll s`$, one can use the quantitative theory by CSG to write
$$l^2=\frac{4\kappa \mathrm{\Delta }}{\pi ^2e^2|\nabla n(𝐫)|},$$
(5)
where the density gradient is taken at a boundary of a metallic region in the direction perpendicular to the boundary. Since the statistical properties of the donor distribution $`C(𝐫)`$ are known, one can calculate the distribution of the electron density. The most important result is the fraction $`Q`$ of the plane occupied by the incompressible liquid. It is given by the equation
$$Q=2(3/\pi ^5)^{1/4}\mathrm{\Gamma }(\frac{5}{4})\left(\frac{\mathrm{\Delta }}{W}\right)^{1/2}\mathrm{exp}\left[-\frac{(n-\nu n_0)^2}{\delta ^2}\right],$$
(6)
where $`\delta ^2=C/(4\pi s^2)`$, and $`\nu `$ is an integer filling factor. The incompressible liquid occupies the maximum fraction of the plane when the average electron density $`n`$ is $`\nu n_0`$. In this case $`Q\approx 0.57(\mathrm{\Delta }/W)^{1/2}`$. Computer simulation shows that this result is valid at least at $`\mathrm{\Delta }/W\lesssim 0.4`$. Note that at $`\mathrm{\Delta }/W=0.4`$ the maximum fraction of incompressible liquid is $`Q\approx 0.36`$.
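The numerical prefactor in Eq. (6) is easy to check directly. The short sketch below (an illustration added here, not part of the original analysis) reproduces both quoted numbers.

```python
# Check of the prefactor in Eq. (6): 2*(3/pi^5)^(1/4)*Gamma(5/4) ~ 0.57,
# and of the maximum fraction Q at Delta/W = 0.4.
import math

prefactor = 2.0 * (3.0 / math.pi**5) ** 0.25 * math.gamma(1.25)
print(f"prefactor = {prefactor:.4f}")                            # ~0.5704
print(f"Q_max(Delta/W=0.4) = {prefactor * math.sqrt(0.4):.3f}")  # ~0.36
```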
Finally, at $`\mathrm{\Delta }<W`$ the fraction of incompressible liquid is small even when the chemical potential $`\mu `$ is exactly in the middle of the gap. At this point most of the plane is occupied by the electron and hole metals, with narrow filaments of incompressible liquid between them. The thermodynamic density of states is of the order of $`n_c/\mathrm{\Delta }`$.
## III Spatial transition between Landau levels due to a macroscopic density gradient.
Suppose now that there is a small density gradient $`g`$ in the $`x`$-direction created by side electrodes or by an inhomogeneous distribution of donors. We assume that this gradient is much smaller than the typical microscopic gradient of density $`g_f`$ that appears due to fluctuations of the donor charge on the scale $`s`$. One gets
$$g_f=\sqrt{C}/s^2.$$
(7)
The value of $`g_f`$ is $`1.3\times 10^{16}\mathrm{cm}^{-3}`$ at $`C=10^{11}\mathrm{cm}^{-2}`$ and $`s=5\times 10^{-6}\mathrm{cm}`$.
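This estimate is simple arithmetic and can be reproduced directly; the following sketch (illustrative only) evaluates Eq. (7) with the quoted parameters.

```python
# Evaluate g_f = sqrt(C)/s^2, Eq. (7), for the parameters quoted in the text.
import math

C = 1.0e11    # charged donor density, cm^-2
s = 5.0e-6    # spacer width, cm
print(f"g_f = {math.sqrt(C) / s**2:.2e} cm^-3")   # ~1.3e16 cm^-3
```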
If the thermodynamic density of states is large enough and $`g`$ is small enough, one can consider the TDEL as a metal, where the macroscopic electric field is averaged over distances larger than the size of the fluctuations described in the previous section. The electron density can be found from the condition that the electrostatic potential $`\varphi `$ is constant in the plane of the TDEL, while the density itself is determined by the normal component of the electric field $`E_z`$. In this approximation the density is independent of the magnetic field and reproduces the density gradient of the charged donors. Since the TDEL is not an ideal metal, this approximation is not exact. The lateral electric field $`E_x`$ is non-zero and can be found from the condition $`\mu +e\varphi =\mathrm{const}`$. Thus, $`E_x=(1/e)(d\mu /dn)g`$. The metallic approximation for the electron density is good if $`|E_x|\ll |E_z|`$, where $`E_z=4\pi en(x)/\kappa `$ is the normal component of the electric field and the density $`n(x)`$ does not include completely occupied LL's. The general condition for the metallic approximation is
$$\left|\frac{d\mu }{dn}\right|\frac{g\kappa }{4\pi e^2n(x)}\ll 1.$$
(8)
This approximation is good at small gradients $`g`$ and at large $`|dn/d\mu |`$. The theory by CSG assumes $`dn/d\mu =0`$. Thus, the two theories are applicable at different conditions.
In the case $`\mathrm{\Delta }<W`$ one can substitute the estimate $`d\mu /dn\sim \mathrm{\Delta }/n_c`$ into Eq. (8) to find that the metallic approximation is valid if
$$\frac{g}{4\pi g_f}\frac{\mathrm{\Delta }}{W}\ll 1.$$
(9)
At $`\mathrm{\Delta }\ll W`$ this condition is always fulfilled. Note that CSG also mentioned that their theory is not applicable at $`\mathrm{\Delta }<W`$.
Now I come to the case of a strong magnetic field and weak disorder, where $`\mathrm{\Delta }\gg W`$. The electron density becomes inhomogeneous at $`n<n_c`$. At this density the estimate for the thermodynamic density of states is $`dn/d\mu \sim n_c/W`$, and the metallic approximation works if $`g\ll g_f`$. This condition is usually fulfilled, but at lower density, when the chemical potential is deep in the gap, the condition changes. The spatial point where the chemical potential is in the middle of the gap is the most dangerous for the metallic approximation because of the small thermodynamic density of states. Thus, the metallic approximation is valid everywhere if Eq. (8) is fulfilled at the point $`\mu =-\mathrm{\Delta }/2`$, where $`n=n_m`$. Making use of Eqs. (1), (4), (8) one gets the condition
$$2\left(\frac{\mathrm{\Delta }}{W}\right)^3\frac{g}{g_f}\ll 1.$$
(10)
If the condition Eq. (10) is not fulfilled, the metallic approximation is violated within some spatial region with the chemical potential $`|\mu +\mathrm{\Delta }/2|<\mathrm{\Delta }^{}`$. In this case one should apply the theory by CSG to the central part of the transition region, substituting the effective gap $`\mathrm{\Delta }^{}`$ instead of $`\mathrm{\Delta }`$. With increasing gradient and increasing magnetic field the length of this central part increases. Note that this possibility has been considered by CSG in Fig. 5 of their paper.
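To make the criterion concrete, the minimal sketch below evaluates the left-hand side of Eq. (10); the sample values of $`\mathrm{\Delta }/W`$ and $`g/g_f`$ are hypothetical, chosen only to illustrate where the metallic approximation starts to fail.

```python
# Left-hand side of Eq. (10); the metallic approximation holds when this << 1.
def metallic_criterion(delta_over_w, g_over_gf):
    return 2.0 * delta_over_w**3 * g_over_gf

for d_w in (1.0, 3.0, 10.0):               # hypothetical gap-to-disorder ratios
    print(d_w, metallic_criterion(d_w, g_over_gf=0.01))
# With g/g_f = 0.01 the criterion fails once Delta/W exceeds a few units.
```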
## IV Conclusion
The theory by CSG gives a sharp transition strip between two metallic regions corresponding to two LL's. The electron density changes across this transition so as to form the “dipolar strip”, with the dipole moment depending on the magnetic field. The width of the strip is given by Eq. (5), where $`|\nabla n(𝐫)|`$ should be substituted by $`g`$.
I have shown above that at small $`g`$ and moderate magnetic fields the average electron density is the same as without magnetic field. The fluctuations of this density are the only manifestation of the transition. These fluctuations are strong in the spatial region with non-linear screening. The size of this region $`l`$ can be obtained from the condition $`gl=2n_c`$. This might be a large band of micron size. It is much larger than the typical size of the fluctuations. That is why I can use Eq. (8) with the compressibility $`d\mu /dn`$ resulting from these fluctuations. Both the shape and the size of the fluctuations strongly depend on the magnetic field.
If $`\mathrm{\Delta }\ll W`$, the fraction of incompressible liquid as a function of the coordinate can be obtained from Eq. (6) by substituting $`n=n(x)`$, where $`n(x)`$ is the electron density without magnetic field.
One can compare this conclusion with the experimental data by Yakoby et al., where $`\mathrm{\Delta }<W`$. These data show wide transition regions with intermediate compressibility, which is probably the result of averaging.
## V Acknowledgment
I am grateful to A. Yakoby for attracting my attention to this problem and for sending me the preprint of the paper.
# A QCD analysis of HERA and fixed target structure function data
## 1 INTRODUCTION
With the integrated luminosity of about 50 pb<sup>-1</sup> collected at HERA during the years 1994–1997 a new kinematic domain of large $`x`$ and $`Q^2`$ becomes accessible for the study of deep inelastic scattering in $`ep`$ collisions. Measurements by ZEUS of the $`e^+p`$ single differential neutral current (NC) and charged current (CC) Born cross sections for $`Q^2>200`$ GeV<sup>2</sup> have recently become available .
Standard Model (SM) predictions calculated with, for instance, the parton distribution set CTEQ4 are in good agreement with the NC cross sections but fall below the CC measurements at large $`x`$ and $`Q^2`$. This could be an indication of new physics beyond the SM but might also be due to an imperfect knowledge of the parton densities in this kinematic region. For instance in it is shown that a modification of the CTEQ4 down quark density yields SM predictions in agreement with the CC $`e^+p`$ data.
To investigate these issues we have performed a global NLO QCD analysis of structure function data to obtain the parton densities in the proton. A full error analysis provides estimates of the uncertainties in these parton densities, the structure functions and the SM predictions of the NC and CC cross sections.
## 2 QCD ANALYSIS
The data used in the fit are $`F_2^p`$ from ZEUS and H1 together with $`F_2^p`$, $`F_2^d`$ and $`F_2^d/F_2^p`$ from fixed target experiments. Also included were neutrino data on $`xF_3^{\nu Fe}`$ for $`x>0.1`$ and data on the difference $`x(\overline{d}-\overline{u})`$ .
After cuts the data cover a kinematic range of $`10^{-3}<x<0.75`$, $`3<Q^2<5000`$ GeV<sup>2</sup> and $`W^2>7`$ GeV<sup>2</sup>. Both $`F_2^d`$ and $`F_2^d/F_2^p`$ as well as $`xF_3^{\nu Fe}`$ were corrected for nuclear effects using the parameterization of for deuterium (with an assumed uncertainty of 100%) and that of for iron (assumed uncertainty 50%).
The QCD predictions for the structure functions were obtained by solving the QCD evolution equations in NLO in the $`\overline{\mathrm{MS}}`$ scheme . At the scale $`Q_0^2=4`$ GeV<sup>2</sup> the gluon density ($`xg`$), the sea and valence quark densities ($`xS,xu_v,xd_v`$) and the difference $`x(\overline{d}-\overline{u})`$ were parameterized in the standard way (16 free parameters). The strange quark density was taken to be a fraction $`K_s=0.20(\pm 0.03)`$ of the sea . The normalizations of the parton densities were constrained such that the momentum sum rule and the valence quark counting rules are satisfied. The charm and bottom quarks were assumed to be massless and were generated dynamically above the thresholds $`Q_c^2=4(\pm 1)`$ and $`Q_b^2=30`$ GeV<sup>2</sup> respectively. The value of the strong coupling constant was set to $`\alpha _s(M_Z^2)=0.118(\pm 0.005)`$.
Higher twist contributions to $`F_2^p`$ and $`F_2^d`$ were taken into account phenomenologically by describing these structure functions as
$$F_2^{HT}=F_2^{LT}[1+H(x)/Q^2]$$
(1)
where $`F_2^{LT}`$ obeys the NLO QCD evolution equations and where $`H(x)`$ was parameterized as a fourth degree polynomial in $`x`$ (5 free parameters).
The normalizations of the ZEUS, H1 and NMC data were kept fixed to unity whereas those of the remaining data sets were allowed to float within the quoted normalization errors (7 parameters).
The uncertainties in the parton densities, structure functions and related cross sections were estimated taking into account all correlations. Included in the error calculation are the experimental statistical errors, 57 independent sources of systematic uncertainty (propagated using the technique described in ) and the errors on the input parameters of the fit.
## 3 RESULTS
The fit yielded a good description of the data with a $`\chi ^2`$ of 1540 for 1578 data points and 28 free parameters. The quality of the fit is illustrated in Fig. 1 where we show the fixed target $`F_2^p`$ data for $`x>0.1`$.
The higher twist coefficient $`H(x)`$ in Eq. (1) is found to be negative for $`x<0.5`$ and becomes large and positive at high $`x`$. Our result on the higher twist contribution is very close to that obtained by MRST .
Nuclear effects in neutrino scattering were investigated by calculating the ratios $`xF_3^{\nu Fe}/xF_3^{\nu N}`$ and $`F_2^{\nu Fe}/F_2^{\nu N}`$ of the CCFR data to the QCD predictions of scattering on a free nucleon. Both these ratios do not significantly depend on $`Q^2`$ in accordance with measurements of nuclear effects in muon scattering . The ratios, averaged over $`Q^2`$, are plotted in Fig. 2
and show the typical $`x`$ dependence of nuclear effects, including the rise at large $`x`$ due to Fermi motion. However, within the present accuracy it is not possible to establish whether these nuclear effects are significantly different from those measured in charged lepton scattering (full curves in Fig. 2).
In Fig. 3
are shown the parton densities obtained from the QCD fit (full curves). We have verified that the strange quark density is compatible with the measurements from CCFR and also that the QCD prediction of $`F_2^c`$ agrees well with the ZEUS data above $`Q^2=10`$ GeV<sup>2</sup>. This supports the assumption made in this analysis that, at least for $`x>10^{-3}`$, quark mass effects do not spoil the QCD extrapolations to large $`Q^2`$.
Also shown in Fig. 3 are the parton densities from CTEQ4 and CTEQ5 . There is, within errors, good agreement between these results and the QCD fit, although both the present analysis and CTEQ5 yield a slightly harder $`xd_v`$ density than CTEQ4. Bodek and Yang also obtain a harder $`xd`$ density by modifying CTEQ4 (where the ratio $`d/u\to 0`$ as $`x\to 1`$) such that
$$d/u\to d^{\prime }/u=d/u+Bx(1+x).$$
(2)
They find $`B=0.10\pm 0.01`$ which implies that $`d/u\to 0.2`$ as $`x\to 1`$. The $`d/u`$ ratio is shown in Fig. 4.
It is seen that the result from the QCD fit with $`B=0`$ (full curve) is for $`x<0.75`$ close to the modified CTEQ4 distribution with $`B=0.1`$ (dotted curve); if we leave $`B`$ a free parameter in the fit, we obtain $`B=0.02\pm 0.01`$ (statistical error), close to zero. In any case, the large error band clearly indicates that the exact behavior of $`d/u`$ at large $`x`$ is not well constrained. It might go to zero (CTEQ), to a constant (Bodek and Yang) or may even diverge (this analysis) as $`x\to 1`$.
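The effect of the modification in Eq. (2) on the large-$`x`$ ratio is easy to illustrate numerically. In the sketch below the input ratio is a toy stand-in (an assumption, not the actual CTEQ4 parameterization), chosen only so that $`d/u\to 0`$ as $`x\to 1`$.

```python
# Bodek-Yang modification, Eq. (2): d'/u = d/u + B*x*(1+x).
def du_toy(x):
    return 0.6 * (1.0 - x)          # toy stand-in for the CTEQ4 d/u ratio

def du_modified(x, B=0.1):
    return du_toy(x) + B * x * (1.0 + x)

for x in (0.5, 0.75, 0.99, 1.0):
    print(f"x={x:4.2f}  d/u={du_toy(x):.3f}  d'/u={du_modified(x):.3f}")
# As x -> 1 the modified ratio tends to B*1*(1+1) = 0.2, as stated in the text.
```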
The CC $`e^+p\to \overline{\nu }X`$ cross section is predominantly sensitive to the $`d`$ quark density
$$d^2\sigma /dxdQ^2\propto (1-y)^2(xd+xs)+x\overline{u}+x\overline{c}$$
(3)
where $`y=Q^2/(xs)`$ with $`s`$ the square of the $`ep`$ center of mass energy ($`\approx 10^5`$ GeV<sup>2</sup> at HERA). In Fig. 5
we show the ZEUS measurements of the CC $`e^+p`$ single differential cross sections $`d\sigma /dQ^2`$ and $`d\sigma /dx`$ for $`Q^2>200`$ GeV<sup>2</sup>, normalized to the NLO predictions calculated with the CTEQ4 parton distribution set. The full (dotted) curves correspond to the predictions from the QCD fit (CTEQ5) which both achieve a better description of the data at large $`Q^2`$ and $`x`$ due to an improved determination of the $`d`$ quark density. SM predictions of the $`e^+p`$ NC cross sections, calculated with the parton densities obtained in this analysis, also agree very well with the recent ZEUS data (not shown).
We conclude that, within the present experimental accuracy, no significant deviations can be observed between the data and the Standard Model predictions.
|
no-problem/9905/hep-ph9905556.html
|
ar5iv
|
text
|
# 1 Introduction: The Spinless Salpeter Equation in One Dimension
## 1 Introduction: The Spinless Salpeter Equation in One Dimension
The spinless Salpeter equation arises either as a standard reduction of the well-known Bethe–Salpeter formalism for the description of bound states within the framework of relativistic quantum field theory or as a straightforward relativistic generalization of the nonrelativistic Schrödinger equation. This semirelativistic equation of motion with a static interaction described by the Coulomb potential (originating, for instance, from the exchange of a massless particle between the bound-state constituents) defines what we call, for short, the “spinless relativistic Coulomb problem.<sup>1</sup><sup>1</sup>1 The present state-of-the-art of the three-dimensional relativistic Coulomb problem has been reviewed, for instance, in Refs. .
Recently, confining the configuration space to the positive half-line (and mimicking thereby the effect of a “hard-core”), the relativistic Coulomb problem has been studied in one dimension . This one-dimensional case may serve as a toy model which might prove to be instructive for the analysis of the still unsolved three-dimensional problem. In view of its potential importance, we re-analyze this nontrivial and delicate problem.
The spinless Salpeter equation may be regarded as the eigenvalue equation
$$H|\chi _k\rangle =E_k|\chi _k\rangle ,\qquad k=1,2,3,\ldots ,$$
for the complete set of Hilbert-space eigenvectors $`|\chi _k`$ and corresponding eigenvalues
$$E_k\equiv \frac{\langle \chi _k|H|\chi _k\rangle }{\langle \chi _k|\chi _k\rangle }$$
of a self-adjoint operator $`H`$ of Hamiltonian form, consisting of a momentum-dependent kinetic-energy operator and a coordinate-dependent interaction-potential operator:
$$H=T+V,$$
(1)
where $`T`$ is the “square-root” operator of the relativistic kinetic energy of some particle of mass $`m`$ and momentum $`p`$,
$$T=T(p)\equiv \sqrt{p^2+m^2},$$
(2)
and $`V=V(x)`$ is an arbitrary, coordinate-dependent, static interaction potential. The action of the kinetic-energy operator $`T`$ on an element $`\psi `$ of $`L_2(R)`$, the Hilbert space of square-integrable functions on the real line, $`R`$, is defined by (cf. also Eq. (3) of Ref. )
$$(T\psi )(x)=\frac{1}{2\pi }\int _{-\infty }^{+\infty }dp\int _{-\infty }^{+\infty }dy\sqrt{p^2+m^2}\mathrm{exp}[\mathrm{i}p(x-y)]\psi (y).$$
(3)
In Ref. , the domain of $`H`$ is restricted to square-integrable functions $`\mathrm{\Psi }(x)`$ with support on the positive real line, $`R^+`$, only, vanishing at $`x=0`$ (cf. Eq. (28) of Ref. ):
$$\mathrm{\Psi }(x)=0\quad \text{for}\quad x\le 0.$$
This restriction may be interpreted as due to the presence of a “hard-core” interaction potential effective for $`x\le 0`$. For $`x>0`$, the interaction potential $`V`$ is chosen to be of Coulomb type, its strength parametrized by a positive coupling constant $`\alpha `$, i.e., $`\alpha >0`$:
$$V(x)=V_\mathrm{C}(x)=-\frac{\alpha }{x}\quad \text{for}\quad x>0.$$
Let the Coulomb-type semirelativistic Hamiltonian $`H_\mathrm{C}`$ be the operator defined in this way.
## 2 Concerns — Dark Clouds Appear at the Horizon
Now, according to the analysis of Ref. , the point spectrum of the Hamiltonian $`H_\mathrm{C}`$ consists of the set of eigenvalues (cf. Eq. (33) of Ref. )
$$\stackrel{~}{E}_n=\frac{m}{\sqrt{1+{\displaystyle \frac{\alpha ^2}{n^2}}}},n=1,2,3,\mathrm{}.$$
(4)
The corresponding eigenfunctions $`\mathrm{\Psi }_n(x)`$ must be of the form (cf. Eq. (28) of Ref. )
$$\mathrm{\Psi }_n(x)=\psi _n(x)\mathrm{\Theta }(x),n=1,2,3,\mathrm{},$$
(5)
where $`\mathrm{\Theta }(x)`$ denotes the Heaviside step function, defined here by
$`\mathrm{\Theta }(x)`$ $`=`$ $`1\text{ for }x>0,`$
$`\mathrm{\Theta }(x)`$ $`=`$ $`0\text{ for }x\le 0.`$
In particular, the (not normalized) eigenfunctions $`\psi _n(x)`$, $`n=1,2,3,`$ corresponding to the lowest energy eigenvalues $`\stackrel{~}{E}_n`$ are explicitly given by (cf. Eqs. (37)–(40) of Ref. )
$`\psi _1(x)`$ $`=`$ $`x\mathrm{exp}(-\beta _1x),`$
$`\psi _2(x)`$ $`=`$ $`x\left(x-{\displaystyle \frac{m^2}{S_2^2\beta _2}}\right)\mathrm{exp}(-\beta _2x),`$
$`\psi _3(x)`$ $`=`$ $`x\left(x^2-{\displaystyle \frac{3m^2}{S_3^2\beta _3}}x+{\displaystyle \frac{3m^2(\beta _3^2+m^2)}{2S_3^4\beta _3^2}}\right)\mathrm{exp}(-\beta _3x),`$ (6)
with (cf. Eq. (32) of Ref. )
$$\beta _n\frac{m\alpha }{n\sqrt{1+{\displaystyle \frac{\alpha ^2}{n^2}}}}=\frac{\alpha }{n}\stackrel{~}{E}_n,n=1,2,3,\mathrm{},$$
and the abbreviation (cf. Eq. (26) of Ref. )
$$S_n\equiv \sqrt{m^2-\beta _n^2},\qquad n=1,2,3,\ldots .$$
However, there are some facts which cause severe doubts about the validity of this solution:
For coupling constants $`\alpha `$ larger than some critical value $`\alpha _\mathrm{c}`$ (which has yet to be determined), the operator $`H_\mathrm{C}`$ is not bounded from below. This may be seen, for instance, already from the expectation value of $`H_\mathrm{C}`$ with respect to the (normalized) trial state $`|\mathrm{\Phi }`$ defined by the configuration-space trial function
$$\mathrm{\Phi }(x)=\phi (x)\mathrm{\Theta }(x)$$
with
$$\phi (x)=2\mu ^{3/2}x\mathrm{exp}(-\mu x),\qquad \mu >0,$$
and satisfying the normalization condition
$$\||\mathrm{\Phi }\rangle \|^2\equiv \langle \mathrm{\Phi }|\mathrm{\Phi }\rangle =\int _0^{\infty }dx|\phi (x)|^2=1.$$
Apart from the arbitrariness of the variational parameter $`\mu `$, this trial function $`\mathrm{\Phi }`$ coincides, in fact, with the ground-state solution $`\mathrm{\Psi }_1`$ as given in Eqs. (5), (6). The expectation value of the Coulomb interaction-potential operator $`V_\mathrm{C}`$ with respect to the trial state $`|\mathrm{\Phi }`$ reads
$$\langle \mathrm{\Phi }|V_\mathrm{C}|\mathrm{\Phi }\rangle =-\alpha \int _0^{\infty }dx\frac{1}{x}|\phi (x)|^2=-\mu \alpha .$$
There is a trivial (but nevertheless fundamental) inequality for the expectation values of a self-adjoint (but otherwise arbitrary) operator $`𝒪=𝒪^{}`$ and its square, taken with respect to an arbitrary Hilbert-space state $`|\psi `$ in the domain $`𝒟(𝒪)`$ of this operator $`𝒪`$:
$$\frac{|\langle \psi |𝒪|\psi \rangle |}{\langle \psi |\psi \rangle }\le \sqrt{\frac{\langle \psi |𝒪^2|\psi \rangle }{\langle \psi |\psi \rangle }}\quad \text{for all}\quad |\psi \rangle \in 𝒟(𝒪).$$
Application of this inequality to the kinetic-energy operator $`T`$ of Eq. (2) allows one to get rid of the troublesome square-root operator:
$$\langle \mathrm{\Phi }|T|\mathrm{\Phi }\rangle \le \sqrt{\langle \mathrm{\Phi }|T^2|\mathrm{\Phi }\rangle }=\sqrt{\langle \mathrm{\Phi }|p^2|\mathrm{\Phi }\rangle +m^2}.$$
The expectation value of $`p^2`$ required here reads
$$\langle \mathrm{\Phi }|p^2|\mathrm{\Phi }\rangle =\mu ^2.$$
Thus the expectation value of the Coulomb-like semirelativistic Hamiltonian $`H_\mathrm{C}`$ with respect to the trial state $`|\mathrm{\Phi }`$ is bounded from above by
$$\langle \mathrm{\Phi }|H_\mathrm{C}|\mathrm{\Phi }\rangle =\langle \mathrm{\Phi }|T+V_\mathrm{C}|\mathrm{\Phi }\rangle \le \sqrt{\mu ^2+m^2}-\mu \alpha .$$
(7)
When inspecting this inequality in the limit of large $`\mu `$, that is, for $`\mu \to \infty `$, one realizes that, for $`\alpha `$ large enough, the operator $`H_\mathrm{C}`$ is not bounded from below. In fact, the expectation value of the kinetic-energy operator $`T`$ with respect to the trial state $`|\mathrm{\Phi }\rangle `$,
$$\langle \mathrm{\Phi }|T|\mathrm{\Phi }\rangle =\int _{-\infty }^{+\infty }dx\mathrm{\Phi }^{*}(x)(T\mathrm{\Phi })(x)=\frac{4\mu ^3}{\pi }\int _0^{\infty }dp\frac{\sqrt{p^2+m^2}}{(p^2+\mu ^2)^2},$$
(8)
is simple enough to be investigated explicitly. For $`\mu \gg m`$, this expectation value simplifies to
$$\langle \mathrm{\Phi }|T|\mathrm{\Phi }\rangle =\frac{2\mu }{\pi }\quad \text{for}\quad \mu \gg m.$$
Consequently, in the (ultrarelativistic) limit $`\mu \to \infty `$, the expectation value of $`H_\mathrm{C}`$ behaves like
$$\underset{\mu \to \infty }{\mathrm{lim}}\frac{\langle \mathrm{\Phi }|H_\mathrm{C}|\mathrm{\Phi }\rangle }{\mu }=\frac{2}{\pi }-\alpha .$$
This clearly indicates that for the Hamiltonian $`H_\mathrm{C}`$ to be bounded from below the Coulomb coupling constant $`\alpha `$ has to be bounded from above by the critical value
$$\alpha _\mathrm{c}\le \frac{2}{\pi }.$$
(This upper bound on $`\alpha _\mathrm{c}`$ is, in fact, identical to the critical coupling constant $`\alpha _\mathrm{c}`$ found in the case of the three-dimensional spinless relativistic Coulomb problem .)
As a rather trivial consequence of the famous minimum–maximum principle , the expectation value
$$\frac{\langle \psi |H|\psi \rangle }{\langle \psi |\psi \rangle }$$
of a self-adjoint operator $`H`$ bounded from below, with respect to some arbitrary state $`|\psi `$ in the domain of $`H`$, $`𝒟(H)`$, is always larger than or equal to the lowest eigenvalue $`E_1`$ of $`H`$:<sup>2</sup><sup>2</sup>2 This statement constitutes what is sometimes simply called “Rayleigh’s principle.”
$$E_1\le \frac{\langle \psi |H|\psi \rangle }{\langle \psi |\psi \rangle }\quad \text{for all}\quad |\psi \rangle \in 𝒟(H).$$
Accordingly, minimizing the expression on the right-hand side of inequality (7) with respect to the variational parameter $`\mu `$ yields a simple analytic upper bound $`\widehat{E}_1`$ on the ground-state energy eigenvalue $`E_1`$ of the Coulomb-like semirelativistic Hamiltonian $`H_\mathrm{C}`$:
$$E_1\le \widehat{E}_1$$
with
$$\widehat{E}_1=m\sqrt{1-\alpha ^2}.$$
(9)
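This minimization is elementary to check numerically; the sketch below (illustrative, with the arbitrary choice $`m=1`$, $`\alpha =0.5`$) minimizes the right-hand side of inequality (7) over $`\mu `$ and compares the result with Eq. (9).

```python
# Minimize sqrt(mu^2 + m^2) - mu*alpha over mu; compare with m*sqrt(1 - alpha^2).
import math
from scipy.optimize import minimize_scalar

m, alpha = 1.0, 0.5
res = minimize_scalar(lambda mu: math.sqrt(mu**2 + m**2) - mu * alpha,
                      bounds=(0.0, 50.0), method="bounded")
print(res.fun, m * math.sqrt(1.0 - alpha**2))   # both ~0.8660
```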
The same analytic upper bound on the ground-state energy $`E_1`$ has been found in the case of the three-dimensional spinless relativistic Coulomb problem . Reality of this latter expression requires again the existence of a critical coupling constant $`\alpha _\mathrm{c}`$ and indicates that this critical value of $`\alpha `$ is less than or equal to 1:
$$\alpha _\mathrm{c}\le 1.$$
Moreover, at least for the energy eigenvalue $`E_1`$ corresponding to the ground state of the Hamiltonian $`H_\mathrm{C}`$, the supposedly exact value of Eq. (4),
$$\stackrel{~}{E}_1=\frac{m}{\sqrt{1+\alpha ^2}},$$
(10)
is in clear conflict with the naive upper bound $`\widehat{E}_1`$ of Eq. (9):
$$\frac{\widehat{E}_1}{\stackrel{~}{E}_1}=\sqrt{1-\alpha ^4}$$
and therefore
$$\widehat{E}_1<\stackrel{~}{E}_1\text{for}\alpha >0.$$
For larger values of the Coulomb coupling constant $`\alpha `$, the upper bound (9) on the ground-state energy can be easily improved by fixing in the expectation value (8) of the kinetic-energy operator $`T`$ the variational parameter $`\mu `$ to the value $`\mu =m`$. In this case, this expectation value reads
$$\langle \mathrm{\Phi }|T|\mathrm{\Phi }\rangle =\frac{4m}{\pi }.$$
Accordingly, the ground-state energy eigenvalue $`E_1`$ is bounded from above by
$$E_1\le \left(\frac{4}{\pi }-\alpha \right)m.$$
(11)
For the Coulomb coupling constant $`\alpha `$ in the range
$$\frac{2}{\pi }-\sqrt{\frac{1}{2}-\frac{4}{\pi ^2}}<\alpha \le \frac{2}{\pi },$$
the above expression represents a genuine improvement of the upper bound (9).
The expectation value (8) of the kinetic-energy operator $`T`$ with respect to the trial state $`|\mathrm{\Phi }`$ may be written down explicitly:
$$\langle \mathrm{\Phi }|T|\mathrm{\Phi }\rangle =\frac{2}{\pi }m\left[\frac{\mu }{m}+\frac{\mathrm{arccos}(\mu /m)}{\sqrt{1-\mu ^2/m^2}}\right].$$
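As a consistency check, this closed form can be compared with a direct numerical quadrature of the integral in Eq. (8); the sketch below does so for the arbitrarily chosen values $`m=1`$ and $`\mu =0.4<m`$, where the arccos form applies.

```python
# Compare the integral representation of Eq. (8) with the closed form above.
import math
from scipy.integrate import quad

m, mu = 1.0, 0.4
integral, _ = quad(lambda p: math.sqrt(p**2 + m**2) / (p**2 + mu**2)**2,
                   0.0, math.inf)
T_integral = 4.0 * mu**3 / math.pi * integral
T_closed = (2.0 / math.pi) * m * (mu / m
            + math.acos(mu / m) / math.sqrt(1.0 - (mu / m)**2))
print(T_integral, T_closed)   # agree to quadrature accuracy (~1.0598)
```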
Now, for
$$\mu =\beta _1=\frac{m\alpha }{\sqrt{1+\alpha ^2}},$$
the trial function $`\mathrm{\Phi }`$ coincides with the normalized ground-state eigenfunction $`\mathrm{\Psi }_1`$. In this case, the corresponding expectation value of the Hamiltonian $`H_\mathrm{C}`$ becomes
$$\langle \mathrm{\Psi }_1|H_\mathrm{C}|\mathrm{\Psi }_1\rangle =\frac{m}{\sqrt{1+\alpha ^2}}\left[\frac{2}{\pi }\left(\alpha +(1+\alpha ^2)\text{arccot}\alpha \right)-\alpha ^2\right].$$
(12)
Unfortunately, the above expectation value does not agree with the ground-state energy (10) deduced from Eq. (4):
$$\langle \mathrm{\Psi }_1|H_\mathrm{C}|\mathrm{\Psi }_1\rangle \ne \stackrel{~}{E}_1.$$
Eigenstates $`|\chi _i`$, $`i=1,2,3,\mathrm{},`$ of some self-adjoint operator $`H`$ corresponding to distinct eigenvalues of $`H`$ are mutually orthogonal:
$$\langle \chi _i|\chi _k\rangle \propto \delta _{ik},\qquad i,k=1,2,3,\ldots .$$
This feature is definitely not exhibited by the overlaps
$$\langle \mathrm{\Psi }_i|\mathrm{\Psi }_k\rangle =\int _{-\infty }^{+\infty }dx\mathrm{\Psi }_i^{*}(x)\mathrm{\Psi }_k(x)=\int _0^{\infty }dx\psi _i^{*}(x)\psi _k(x),\qquad i,k=1,2,3,\ldots ,$$
of the lowest eigenfunctions $`\mathrm{\Psi }_i(x)`$, $`i=1,2,3,`$ given in Eqs. (5), (6). For instance, the overlap $`\langle \mathrm{\Psi }_1|\mathrm{\Psi }_2\rangle `$ of the ground state $`|\mathrm{\Psi }_1\rangle `$ and the first excitation $`|\mathrm{\Psi }_2\rangle `$ is given by
$$\langle \mathrm{\Psi }_1|\mathrm{\Psi }_2\rangle =\frac{2[3S_2^2\beta _2-m^2(\beta _1+\beta _2)]}{(\beta _1+\beta _2)^4S_2^2\beta _2},$$
revealing thus, beyond doubt, the non-orthogonality of the vectors $`|\mathrm{\Psi }_1`$ and $`|\mathrm{\Psi }_2`$.
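A quick numerical evaluation of this overlap makes the point explicit; the sketch below (with the arbitrary choice $`m=1`$, $`\alpha =0.5`$) returns a clearly nonzero value.

```python
# Evaluate the overlap <Psi_1|Psi_2> from the formula above, m = 1, alpha = 0.5.
import math

m, alpha = 1.0, 0.5

def beta(n):
    return m * alpha / (n * math.sqrt(1.0 + alpha**2 / n**2))

b1, b2 = beta(1), beta(2)
S2sq = m**2 - b2**2
overlap = 2.0 * (3.0 * S2sq * b2 - m**2 * (b1 + b2)) / ((b1 + b2)**4 * S2sq * b2)
print(overlap)   # ~ -0.19, clearly nonzero
```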
## 3 Exact Analytic Upper Bounds on Energy Levels
In view of the above, let us try to collect unambiguous results for the one-dimensional spinless relativistic Coulomb problem. With the help of the definition (3) of the action of a momentum-dependent operator in coordinate space, it is easy to convince oneself of the validity of the operator inequality
$$T\le T_{\mathrm{NR}}\equiv m+\frac{p^2}{2m};$$
the relativistic kinetic-energy operator $`T`$ is bounded from above by its nonrelativistic counterpart $`T_{\mathrm{NR}}`$: when introducing the Fourier transform $`\stackrel{~}{\psi }(p)`$ of the coordinate-space representation $`\psi (x)`$ of the Hilbert-space vector $`|\psi `$,
$$\stackrel{~}{\psi }(p)\equiv \frac{1}{\sqrt{2\pi }}\int _{-\infty }^{+\infty }dx\mathrm{exp}[-\mathrm{i}px]\psi (x),$$
one finds
$`\langle \psi |T_{\mathrm{NR}}-T|\psi \rangle `$ $`=`$ $`{\displaystyle \int _{-\infty }^{+\infty }}dx\psi ^{*}(x)\left[(T_{\mathrm{NR}}\psi )(x)-(T\psi )(x)\right]`$
$`=`$ $`{\displaystyle \int _{-\infty }^{+\infty }}dp|\stackrel{~}{\psi }(p)|^2\left(m+{\displaystyle \frac{p^2}{2m}}-\sqrt{p^2+m^2}\right)`$
$`\ge `$ $`0.`$
Hence, adding the Coulomb interaction potential $`V_\mathrm{C}`$, the semirelativistic Hamiltonian $`H_\mathrm{C}`$ is, of course, bounded from above by the corresponding nonrelativistic Hamiltonian $`H_{\mathrm{C},\mathrm{NR}}`$:
$$H_\mathrm{C}\le H_{\mathrm{C},\mathrm{NR}}\equiv T_{\mathrm{NR}}+V_\mathrm{C}.$$
Now, upon invoking the minimum–maximum principle (which requires the operator $`H_\mathrm{C}`$ to be both self-adjoint and bounded from below) and combining this principle with the above operator inequality, we infer that every eigenvalue $`E_n`$, $`n=1,2,3,\mathrm{},`$ of $`H_\mathrm{C}`$ is bounded from above by a corresponding eigenvalue $`E_{n,\mathrm{NR}}`$, $`n=1,2,3,\mathrm{},`$ of $`H_{\mathrm{C},\mathrm{NR}}`$:<sup>3</sup><sup>3</sup>3 The line of arguments leading to the general form of this statement may be found, for instance, in Refs. . It is summarized in Appendix A. For a rather brief account of the application of these ideas to the three-dimensional spinless relativistic Coulomb problem, see, e.g., Ref. .
$$E_n\le E_{n,\mathrm{NR}}\quad \text{for}\quad n=1,2,3,\ldots .$$
It is a simple and straightforward exercise to calculate the latter set of eigenvalues:
$$E_{n,\mathrm{NR}}=m\left(1-\frac{\alpha ^2}{2n^2}\right),\qquad n=1,2,3,\ldots .$$
These upper bounds on the energy eigenvalues $`E_n`$ may be easily improved by the same reasoning as before. Introducing an arbitrary real parameter $`\eta `$ (with the dimension of mass), we find a set of operator inequalities for the kinetic energy $`T`$ , namely,
$$T\le \frac{p^2+m^2+\eta ^2}{2\eta }\quad \text{for all}\quad \eta >0$$
and, consequently, a set of operator inequalities for the Coulomb-type semirelativistic Hamiltonian $`H_\mathrm{C}`$ :
$$H_\mathrm{C}\le \widehat{H}_\mathrm{C}(\eta )\equiv \frac{p^2+m^2+\eta ^2}{2\eta }+V_\mathrm{C}\quad \text{for all}\quad \eta >0.$$
Accordingly, every eigenvalue $`E_n`$, $`n=1,2,3,\mathrm{},`$ of $`H_\mathrm{C}`$ is bounded from above by the minimum, with respect to the mass parameter $`\eta `$, of the corresponding eigenvalue
$$\widehat{E}_{n,\mathrm{C}}(\eta )=\frac{1}{2\eta }\left[m^2+\eta ^2\left(1-\frac{\alpha ^2}{n^2}\right)\right],\qquad n=1,2,3,\ldots ,$$
of $`\widehat{H}_\mathrm{C}(\eta )`$:
$$E_n\le \underset{\eta >0}{\mathrm{min}}\widehat{E}_{n,\mathrm{C}}(\eta )=m\sqrt{1-\frac{\alpha ^2}{n^2}}\quad \text{for all}\quad \alpha \le \alpha _\mathrm{c}.$$
For $`n=1`$, this (variational) upper bound coincides with the previous upper bound (9). It goes without saying that these upper bounds are violated by the energy eigenvalues $`\stackrel{~}{E}_n`$ given in Eq. (4):
$$\frac{1}{\stackrel{~}{E}_n}\underset{\eta >0}{\mathrm{min}}\widehat{E}_{n,\mathrm{C}}(\eta )=\sqrt{1-\frac{\alpha ^4}{n^4}}<1\quad \text{for}\quad \alpha \ne 0,\quad \text{for all}\quad n=1,2,3,\ldots ,$$
means
$$\underset{\eta >0}{\mathrm{min}}\widehat{E}_{n,\mathrm{C}}(\eta )<\stackrel{~}{E}_n\quad \text{for}\quad \alpha \ne 0,\quad \text{for all}\quad n=1,2,3,\ldots !$$
Moreover, for $`\mu =m\alpha `$, our generic trial state $`|\mathrm{\Phi }`$ becomes the lowest eigenstate of the nonrelativistic Hamiltonian $`H_{\mathrm{C},\mathrm{NR}}`$, corresponding to the ground-state eigenvalue<sup>4</sup><sup>4</sup>4 The Coulomb problem involves no dimensional parameter other than the particle mass $`m`$. Therefore, both the energy eigenvalues $`E_n`$ and the parameter(s) $`\mu `$ have to be proportional to $`m`$.
$$E_{1,\mathrm{NR}}=m\left(1-\frac{\alpha ^2}{2}\right),$$
which may be easily seen:
$$(T_{\mathrm{NR}}\phi )(x)=\left(m-\frac{1}{2m}\frac{\mathrm{d}^2}{\mathrm{d}x^2}\right)\phi (x)=\left(m-\frac{\mu ^2}{2m}+\frac{\mu }{m}\frac{1}{x}\right)\phi (x)\quad \text{for}\quad x>0$$
implies (with $`\mu =m\alpha `$)
$$H_{\mathrm{C},\mathrm{NR}}|\mathrm{\Phi }\rangle =E_{1,\mathrm{NR}}|\mathrm{\Phi }\rangle .$$
It appears rather unlikely that the same functional form represents also the eigenstate of the semirelativistic Hamiltonian $`H_\mathrm{C}`$.
## 4 Summary, Further Considerations, Conclusions
This work is devoted to the study of the one-dimensional spinless relativistic Coulomb problem on the positive half-line. Assuming a (dense) domain in $`L_2(R^+)`$ such that the semirelativistic Coulombic Hamiltonian $`H_\mathrm{C}`$ defined in the Introduction is self-adjoint, analytic upper bounds on the energy eigenvalues $`E_k`$, $`k=1,2,3,\mathrm{},`$ have been derived:
$$E_k\le m\sqrt{1-\frac{\alpha ^2}{k^2}}\quad \text{for all}\quad k=1,2,3,\ldots .$$
(13)
Surprisingly, the explicit solution presented in Ref. does not fit into these bounds.
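The conflict is easy to exhibit numerically; the sketch below tabulates, for the arbitrary choice $`m=1`$ and $`\alpha =0.5`$, the bounds of Eq. (13) next to the eigenvalues of Eq. (4).

```python
# Bounds of Eq. (13) versus the eigenvalues E_tilde of Eq. (4), m = 1, alpha = 0.5.
import math

m, alpha = 1.0, 0.5
for k in (1, 2, 3):
    bound = m * math.sqrt(1.0 - alpha**2 / k**2)
    e_tilde = m / math.sqrt(1.0 + alpha**2 / k**2)
    print(k, round(bound, 4), round(e_tilde, 4), e_tilde <= bound)
# Every E_tilde exceeds the corresponding bound, so the last column is False.
```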
In order to cast some light on this confusing situation, let us inspect the action (3) of the kinetic-energy operator $`T`$ in more detail. Consider the (not normalized) Hilbert-space vectors $`|\mathrm{\Phi }_n\rangle `$, $`n=0,1,2,\mathrm{},`$ defined, as usual, by the coordinate-space representation
$$\mathrm{\Phi }_n(x)=x^n\mathrm{exp}(-\mu x)\mathrm{\Theta }(x),\qquad \mu >0,\qquad n=0,1,2,\ldots .$$
These vectors certainly belong to the Hilbert space $`L_2(R)`$ for all $`n=0,1,2,\mathrm{},`$ since
$$\||\mathrm{\Phi }_n\rangle \|^2\equiv \langle \mathrm{\Phi }_n|\mathrm{\Phi }_n\rangle =\int _{-\infty }^{+\infty }dx|\mathrm{\Phi }_n(x)|^2=\int _0^{\infty }dxx^{2n}\mathrm{exp}(-2\mu x)=\frac{\mathrm{\Gamma }(2n+1)}{(2\mu )^{2n+1}}<\infty .$$
However: The norm $`\|T|\mathrm{\Phi }_n\rangle \|`$ of the vectors $`T|\mathrm{\Phi }_n\rangle `$, $`n=0,1,2,\mathrm{},`$ may be found from
$$\|T|\mathrm{\Phi }_n\rangle \|^2=\int _{-\infty }^{+\infty }dx|(T\mathrm{\Phi }_n)(x)|^2=\frac{[\mathrm{\Gamma }(n+1)]^2}{2\pi }\int _{-\infty }^{+\infty }dp\frac{p^2+m^2}{(p^2+\mu ^2)^{n+1}}.$$
For $`n=0`$ the integrand tends to unity at large $`|p|`$, so this integral diverges. This observation might be a hint that the vector $`|\mathrm{\Phi }_0\rangle `$, that is, $`\mathrm{\Phi }_0(x)=\mathrm{exp}(-\mu x)\mathrm{\Theta }(x)`$, does not belong to the domain of the kinetic-energy operator $`T`$. If this is indeed true, it is by no means obvious how to make sense of Eq. (16) of Ref. for the case $`n=0`$.
Trivially, if Eq. (16) of Ref. is correct for $`n=0`$, all these relations for arbitrary $`n=1,2,\mathrm{}`$ may be obtained by a simple differentiation of the relation for $`n=0`$ with respect to the (generic) parameter $`\mu `$, taking advantage of
$$Tx^n\mathrm{exp}(-\mu x)=\left(-\frac{\mathrm{d}}{\mathrm{d}\mu }\right)^nT\mathrm{exp}(-\mu x).$$
Similarly, it is somewhat hard to believe that Eq. (16) of Ref. holds for $`n=1`$. In our notation, Eq. (16) of Ref. would read for $`n=1`$
$$(T\mathrm{\Phi }_1)(x)=\left(S+\frac{\mu }{Sx}\right)\mathrm{\Phi }_1(x)$$
with
$$S\equiv \sqrt{m^2-\mu ^2}.$$
Considering merely the norms of the vectors on both sides of this equation, we find, for the norm of the vector on the left-hand side,
$$\|T|\mathrm{\Phi }_1\rangle \|^2=\frac{m^2+\mu ^2}{4\mu ^3}$$
but, for the norm of the vector on the right-hand side,
$$\left\|\left(S+\frac{\mu }{Sx}\right)|\mathrm{\Phi }_1\rangle \right\|^2=\frac{m^4+\mu ^4}{4\mu ^3S^2}.$$
These two expressions for the norms become equal only for the—excluded—case $`\mu =0`$. Unfortunately, precisely the above relation forms the basis for the assertion in Ref. that $`\mathrm{\Phi }_1(x)`$ with $`\mu =\beta _1`$ is the ground-state eigenfunction of the (“hard-core amended”) one-dimensional spinless relativistic Coulomb problem as defined in the Introduction.
In conclusion, let us summarize our point of view as follows: The energy eigenvalues $`E_k`$, $`k=1,2,3,\mathrm{},`$ of the one-dimensional spinless relativistic Coulomb problem (with hard-core interaction on the nonpositive real line) are bounded from above by Eq. (13). For the ground-state energy eigenvalue $`E_1`$, this upper bound may be improved to some extent, by considering appropriately the minimum of the bounds of Eq. (11), Eq. (12), or Eq. (13) for $`k=1`$, that is, Eq. (9). To our knowledge, these upper bounds represent the only information available at present about the exact location of the energy levels of the (“hard-core amended”) one-dimensional spinless relativistic Coulomb problem.
## Acknowledgements
We would like to thank H. Narnhofer for stimulating discussions and a critical reading of the manuscript.
## Appendix A Combining Minimum–Maximum Principle with Operator Inequalities
There exist several equivalent formulations of the well-known “min–max principle” . For practical purposes, the most convenient one is perhaps the following:
* Let $`H`$ be a self-adjoint operator bounded from below.
* Let $`E_k`$, $`k=1,2,3,\mathrm{},`$ denote the eigenvalues of $`H`$, defined by
$$H|\chi _k\rangle =E_k|\chi _k\rangle ,\qquad k=1,2,3,\ldots ,$$
and ordered according to
$$E_1\le E_2\le E_3\le \cdots .$$
* Consider only the eigenvalues $`E_k`$ below the onset of the essential spectrum of $`H`$.
* Let $`D_d`$ be some $`d`$-dimensional subspace of the domain $`𝒟(H)`$ of $`H`$: $`D_d\subset 𝒟(H)`$.
Then the $`k`$th eigenvalue $`E_k`$ (when counting multiplicity) of $`H`$ satisfies the inequality
$$E_k\le \underset{|\psi \rangle \in D_k}{\mathrm{sup}}\frac{\langle \psi |H|\psi \rangle }{\langle \psi |\psi \rangle }\quad \text{for}\quad k=1,2,3,\ldots .$$
The min–max principle may be employed in order to compare eigenvalues of operators:
* Assume the validity of a generic operator inequality of the form
$$H\le 𝒪.$$
Then
$$E_k=\frac{\langle \chi _k|H|\chi _k\rangle }{\langle \chi _k|\chi _k\rangle }\le \underset{|\psi \rangle \in D_k}{\mathrm{sup}}\frac{\langle \psi |H|\psi \rangle }{\langle \psi |\psi \rangle }\le \underset{|\psi \rangle \in D_k}{\mathrm{sup}}\frac{\langle \psi |𝒪|\psi \rangle }{\langle \psi |\psi \rangle }.$$
* Assume that the $`k`$-dimensional subspace $`D_k`$ in this inequality is spanned by the first $`k`$ eigenvectors of the operator $`𝒪`$, that is, by precisely those eigenvectors of $`𝒪`$ that correspond to the first $`k`$ eigenvalues $`\widehat{E}_1,\widehat{E}_2,\mathrm{},\widehat{E}_k`$ of $`𝒪`$ if the eigenvalues of $`𝒪`$ are ordered according to
$$\widehat{E}_1\le \widehat{E}_2\le \widehat{E}_3\le \cdots .$$
Then
$$\underset{|\psi \rangle \in D_k}{\mathrm{sup}}\frac{\langle \psi |𝒪|\psi \rangle }{\langle \psi |\psi \rangle }=\widehat{E}_k.$$
Consequently, every eigenvalue $`E_k`$, $`k=1,2,3,\mathrm{},`$ of $`H`$ is bounded from above by the corresponding eigenvalue $`\widehat{E}_k`$, $`k=1,2,3,\mathrm{},`$ of $`𝒪`$:
$$E_k\le \widehat{E}_k\quad \text{for}\quad k=1,2,3,\ldots .$$
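A finite-dimensional illustration of this comparison theorem may be helpful. The sketch below (an added illustration, using random real symmetric $`6\times 6`$ matrices) verifies that the ordered eigenvalues obey $`E_k\le \widehat{E}_k`$ whenever $`𝒪-H`$ is positive semidefinite.

```python
# If H <= O (i.e., O - H positive semidefinite), then E_k <= E_hat_k for all k.
import numpy as np

rng = np.random.default_rng(0)
n = 6
A = rng.standard_normal((n, n)); H = (A + A.T) / 2    # random symmetric H
B = rng.standard_normal((n, n)); O = H + B @ B.T      # O = H + positive semidefinite
E = np.linalg.eigvalsh(H)                             # ascending eigenvalues
E_hat = np.linalg.eigvalsh(O)
print(np.all(E <= E_hat + 1e-12))                     # True
```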
# The Mesostructure of Polymer Collapse and Fractal Smoothing
## Abstract
We investigate the internal structure of a polymer during collapse from an expanded coil to a compact globule. Collapse is more probable in local regions of high curvature, so a smoothing of the fractal polymer structure occurs that proceeds systematically from the shortest to the longest length scales. A proposed universal scaling relationship is tested by comparison with Monte Carlo simulations. We speculate that the universal form applies to various fractal systems with local processes that promote smoothness over time. The results complement earlier work showing that on the macroscale polymer collapse proceeds by driven diffusion of the polymer ends.
Understanding the collapse of homopolymers from a flexible coil to a compact globule is a first step towards modeling the kinetics of molecular self-organization. It may be relevant to a description of DNA aggregation and the initial collapse of proteins from an expanded state to a molten globule from which the final ordered structure is formed. We have performed scaling analysis and simulation of this transition to investigate kinetic effects during collapse. The results suggest that the motion of the polymer ends plays an important role in the kinetics because their motion is constrained only by a single bond. Along the contour, monomers have two bonds, their motion is more constrained, and aggregation is more difficult. Thus, collapse for a long polymer occurs almost as a one-dimensional process where the polymer ends accumulate mass by moving along the contour of the polymer while accreting monomers and small aggregates. Encounters between monomers far apart along the contour to form rings are rare, so they play no role in the collapse. As a result of the faster aggregation at the polymer ends, the collapsing polymer on a macroscopic scale takes on a dumbbell-like appearance. The few DNA fluorescence measurements that follow a single polymer collapse and its metastable states also indicate the special role of polymer ends.\[5-7\] However, a description of the internal structure of the polymer away from the polymer ends has not, thus far, been obtained.
In this manuscript we consider the internal structure of the polymer during collapse, not including the ends. Our objective is to understand the local contour structure that consists of small aggregates and polymer segments between them. Our arguments generalize the consideration of the freedom of motion of monomers, because monomers found in straight segments are much more constrained in their motion than monomers in curved segments. This results in faster collapse in regions of high polymer curvature.
We will focus on intermediate length scales between the size of the expanded polymer and the size of the collapsed aggregate. The length and time scales to which this analysis is relevant are between the size of the initial coil, which scales as $`N^\nu `$ where $`N`$ is the number of monomers and $`\nu =0.6`$ is the Flory exponent, and the size of the final aggregate, which scales as $`N^{1/3}`$ (assuming a compact aggregate). For long polymers these scales are well separated. During collapse, at these intermediate length scales, the internal structure of the final aggregate as well as of intermediate clusters that are formed should not be relevant. When convenient we can treat clusters as point objects, though this is not always necessary. The dynamics of cluster movement follows Stokes' law: the diffusion constant of clusters decreases slowly with cluster size, $`D\sim R^{-1}`$, where $`R`$ is the radius of a cluster. This implies that the dynamics of clusters varies smoothly from that of the original monomers, and a universal scaling behavior of the polymer during collapse should be found. By focusing on intermediate length scales, our results should be widely relevant to polymers with varied properties. While the eventual structure of the collapsed polymer depends in detail on monomer-monomer interactions, the separation of length scales implies that for a long enough polymer with a compact final aggregate, the details of these interactions should not be relevant to the kinetics of the collapse at early times.
To characterize the collapse it is useful to compare the distance between two monomers with the contour length of the polymer connecting them. In conventional scaling the polymer end-to-end distance $`R`$ is expressed as a function of the number of monomers $`N`$, or the number of links in the chain $`L=N-1`$. When aggregation occurs, the small aggregates that form, appearing like beads on a chain, decrease the effective contour length of the polymer. We can define the effective contour length by counting the minimum number of monomer-monomer bonds that one must cross in order to traverse the polymer from one end to the other. Bonds formed by aggregation allow us to bypass parts of the usual polymer contour. Because we are not interested in the structure of aggregates we can neglect the difference between different kinds of bonds. In this way the effective number of links in the chain decreases over time. Thus, in order to study the internal polymer structure during collapse we investigate, via scaling arguments and simulations, the scaling of the end-to-end distance $`r(l,t)`$ of internal polymer segments as a function of their effective contour length $`l`$.
The equilibrium structure of the polymer before collapse (in good solvent conditions) is a self-avoiding random walk, where $`r\sim l^\nu `$, and $`\nu =0.6`$ in three dimensions. The contour length is proportional to the number of monomers. During collapse, monomers are constrained from aggregating with other monomers by their already existing bonds. A completely straight segment of polymer does not allow aggregation because no monomer can move to bond with another monomer. In contrast, highly curved regions are more flexible and monomers in these regions may aggregate. Aggregation in a curved region reduces the contour length and the polymer becomes straighter, smoothing the rough fractal polymer structure. We therefore expect that the scaling exponent will increase over time. At long enough times the scaling will approach that of a straight line ($`r\sim l`$). However, this smoothing occurs first at the shortest length scales. In effect the polymer structure becomes consistent with a progressively longer persistence length. Assuming scaling, we anticipate that the polymer end-to-end distance for a polymer segment away from the ends of contour length $`l`$ will follow the dynamic scaling formula:
$$r=lf(t/l^z)$$
(1)
The universal function $`f(x)`$ is a constant for large values of its argument, so that $`r\sim l`$ (long times), and scales as $`x^{(1-\nu )/z}`$ for small values of its argument, so that $`r\sim l^\nu `$ (short times). The short time regime described by Eq. (1) starts after an initial transient (a very short time regime) in which no new bonding has taken place. During the very short time regime the time dependence of the universal scaling function does not apply. The usual scaling of the contour length and end-to-end distance persists until just after the very short time regime because the bonds that are formed initially do not form large rings and thus do not affect the large scale polymer structure. The short time regime begins with the first formation of individual bonds and lasts until the characteristic relaxation time of the contour of length $`l`$. This relaxation time of the contour of length $`l`$ is the crossover time between the short and long time regimes and follows a scaling law $`\tau \sim l^z`$. The dynamic exponent $`z`$ is assumed to be consistent with conventional Zimm relaxation, $`z=3\nu `$. Finally, we can also rewrite this scaling relation in terms of the number of monomers $`n`$ in a polymer segment. Since the average mass along the contour is $`M\sim n/l`$ and $`M`$ follows the power law scaling $`M\sim t^s`$, we substitute $`l\sim nt^{-s}`$ in Eq. (1) to obtain $`r(n,t)`$.
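The rescaling implied by Eq. (1), which is used later for Fig. 4, is easy to script. The following sketch is ours, with a toy universal function standing in for real simulation data; it simply checks that plotting $`r/l`$ against $`t/l^z`$ collapses the curves for different segment lengths onto one master curve:

```python
import numpy as np

nu = 0.6       # Flory exponent in three dimensions
z = 3 * nu     # Zimm dynamic exponent

def f_toy(x):
    # Toy universal function with the limits required by Eq. (1):
    # f(x) ~ x^((1 - nu)/z) for x << 1 (so r ~ l^nu), f -> const for x >> 1 (r ~ l).
    return (x / (1.0 + x)) ** ((1.0 - nu) / z)

times = np.logspace(-2, 6, 200)
segments = [16, 32, 64, 128]

# Stand-in for measured end-to-end distances r(l, t) of internal segments.
r = {l: l * f_toy(times / l ** z) for l in segments}

# Rescaling step: plot r/l against t/l^z for every l.
master = {l: (times / l ** z, r[l] / l) for l in segments}

# All rescaled curves coincide, e.g. at the common rescaled time x0 = 1:
x0 = 1.0
values = [np.interp(x0, *master[l]) for l in segments]
assert np.allclose(values, values[0], rtol=1e-3)
```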
We emphasize that kinetic effects become important for collapse of polymers in poor solvent after equilibration in good solvent, i.e. following a quench in solvent affinity or temperature below the thermodynamic transition at $`\mathrm{\Theta }`$-solvent conditions. Close to the $`\mathrm{\Theta }`$-point a mean field argument, in which kinetics do not play a significant role, is likely to apply. In contrast, we will approximate the collapse by a completely irreversible model where no disaggregation occurs. Because the scaling variable that determines the effective distance from the $`\mathrm{\Theta }`$-point is $`N^{1/2}\mathrm{\Delta }T`$, long lengths are equivalent to small temperature differences, and microscopic reversibility becomes irrelevant at long enough length scales. We are thus consistently adopting a description that is valid for lengths longer than the microscopic regime. We further restrict our study to diffusive monomer motion and short-range interactions.
The scaling relationship, Eq. (1), was tested by Monte Carlo simulations. These simulations in part include the effects of hydrodynamics during collapse by scaling the diffusion constant of aggregates according to Stokes' law. The simulations are based on the two-space lattice Monte Carlo algorithm developed for simulating high-molecular-weight polymers, and shown to be significantly faster than previous state-of-the-art techniques. In the two-space algorithm odd monomers and even monomers of a polymer are distinct and may most easily be described as residing in two separate spaces. Each monomer occupies one cell of the lattice. Both connectivity of the polymer and excluded volume are imposed by requiring that, in the opposite space, only the nearest neighbors along the contour reside in the $`3\times 3\times 3`$ neighborhood of cells around each monomer. Motion of monomers is performed by Monte Carlo steps that satisfy the polymer constraints. Since adjacent monomers (and only adjacent monomers) may lie on top of each other, the local motion of the polymer is flexible. Despite the unusual local polymer properties, the behavior of long polymers is found to agree with conventional scaling results.
The polymer is initially relaxed into an equilibrium configuration using a fast non-local “reptation” Monte Carlo algorithm. Monomers are randomly moved from one end of the polymer to the other, which, for equilibrium geometries, provides equivalent results to the local two-space dynamics.
Collapse of the polymer is then simulated using local diffusive Monte Carlo dynamics, but without the excluded volume constraint. Simulations of a variety of models indicate that excluded volume does not significantly affect the kinetics of collapse. Monomers are no longer stopped from entering the neighborhoods of other monomers; they continue to be required not to leave any neighbors behind. This allows monomers in the same space (odd or even) to move on top of each other, and thus aggregate. Aggregates of any mass occupy only a single lattice site, and are moved as a unit by the same dynamics used for monomers. The mass of an aggregate is set equal to the number of monomers located at that site. The probability of moving an aggregate is adjusted to be consistent with a diffusion constant that scales by Stokes' law for spherical bodies in three dimensions, $`D\sim M^{-1/3}`$. This represents the effects of hydrodynamics on individual aggregates, but does not include coupling of the motion of different aggregates. One time interval consists of attempting a number of aggregate moves equal to the number of remaining aggregates.
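As an illustration of the Stokes'-law move selection described above (a schematic sketch of ours, not the authors' two-space code; the data structures are invented for the example):

```python
import random

def aggregate_sweep(aggregates, lattice_moves):
    """One time interval: attempt as many moves as there are aggregates.

    aggregates: list of dicts with keys 'pos' (lattice site) and 'mass' (int).
    lattice_moves: allowed displacement vectors on the lattice.
    """
    for _ in range(len(aggregates)):
        agg = random.choice(aggregates)
        # Stokes' law in three dimensions: D ~ M^(-1/3).  Accepting a move
        # with this probability makes heavier aggregates diffuse more slowly.
        if random.random() < agg["mass"] ** (-1.0 / 3.0):
            step = random.choice(lattice_moves)
            agg["pos"] = tuple(p + d for p, d in zip(agg["pos"], step))
            # A full implementation would also enforce the polymer
            # connectivity constraints and merge co-located aggregates.

# Example: three aggregates of different masses on a cubic lattice.
moves3d = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
aggs = [{"pos": (0, 0, 0), "mass": m} for m in (1, 8, 64)]
for _ in range(1000):
    aggregate_sweep(aggs, moves3d)
```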
The end-to-end distances of polymer segments, $`r`$, were measured as a function of the effective contour length, $`l`$. The effective contour length is the minimum number of links along the polymer that connect a monomer at one end of the segment with the other end of the segment. Since an aggregate occupies only a single lattice site, interior bonds of the aggregate need not be counted, and it can be treated like a single monomer. The end-to-end distance is not exactly the Euclidean distance between the ends. Instead it is correctly defined as the minimum number of links needed to connect the two ends by any curve in space. Due to the underlying lattice in our algorithm a Manhattan metric, with the inclusion of diagonals, is appropriate.
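Because aggregation bonds simply shortcut the contour, the effective contour length is a shortest-path count on the bond graph. A small breadth-first-search sketch (ours, with illustrative arguments) makes the definition concrete:

```python
from collections import deque

def effective_contour_length(n_monomers, aggregation_bonds):
    """Minimum number of links connecting monomer 0 to monomer n-1.

    Contour bonds (i, i+1) are always present; aggregation_bonds is a
    list of (i, j) pairs that allow one to bypass the usual contour."""
    adj = {i: set() for i in range(n_monomers)}
    for i in range(n_monomers - 1):
        adj[i].add(i + 1)
        adj[i + 1].add(i)
    for i, j in aggregation_bonds:
        adj[i].add(j)
        adj[j].add(i)

    dist = {0: 0}
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist[n_monomers - 1]

# A bond between monomers 2 and 9 bypasses part of the contour:
print(effective_contour_length(12, [(2, 9)]))  # 5 instead of 11
```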
In addition to simulations of polymer collapse in three dimensions we also performed simulations of polymers whose motion is confined to two dimensions, which are convenient for pictorial illustration (Fig. 1). These are not conventional two-dimensional simulations because of the well-known problems with hydrodynamics in two dimensions. Instead, they represent the dynamics of a polymer confined at an interface (e.g. between two fluids). Thus the polymer is confined to two dimensions while the hydrodynamics is three dimensional. In this case we have $`\nu =0.75`$, $`z=3\nu `$ and $`D\sim M^{-1/2}`$. The frames in Fig. 1 illustrate contour smoothing. Starting with short length scales, the polymer becomes progressively smoother and approaches a straight line.
In Fig. 2 we show log-log plots of contour length ($`l`$) versus end-to-end distance ($`r`$) for both 2-d and 3-d simulations. Initially the results are consistent with $`r\sim l^\nu `$ for a self-avoiding random walk. As time progresses the polymer becomes smooth, resulting in a slope that approaches $`1`$. The asymptotic behavior can be seen to occur earlier at the shortest length scales.
Fig. 3 shows the derivative, obtained from finite differences, as a function of time for different segment contour lengths. For all segment lengths the derivative starts at approximately $`\nu `$ and approaches $`1`$ as the collapse proceeds. The rate of collapse becomes progressively slower as $`l`$ increases. The scaling relation, Eq. (1), predicts that the relaxation time will scale with $`l`$ as $`l^z`$. Fig. 4 shows the data following rescaling. $`r/l`$ is plotted against the rescaled time, $`t/l^z`$. The generally good coincidence of the different curves confirms that the simulation obeys Eq. (1). An attempt to use an asymptotic scaling exponent, $`r\sim l^u`$, with $`u=0.95`$, led to a visibly poorer fit, as did small variations in the exponent $`z`$.
The excellent agreement with the expected universal scaling relationship using the hydrodynamic exponent, $`z=3\nu `$, may be fortuitous because the simulations do not contain the full effects of hydrodynamics. Specifically, they contain only the effect of hydrodynamics on individual aggregates and not the coupling between aggregate motion.
In summary, we have found that polymer collapse displays a process of fractal smoothing that occurs first at the shortest length scales. Our simulations were found to be in good agreement with a universal scaling relationship. It is interesting to speculate that this may also apply to other fractal systems where local smoothing processes occur. Several groups have attempted to measure the self-affine scaling of horizontal transects of mountain ranges. They found that a unique fractal dimension $`\mathrm{D}_H`$ cannot be assigned, but that the effective fractal dimension decreases with decreasing length scale. For example, Dietler and Zhang have performed calculations for Switzerland, an area of $`7\times 10^4\,km^2`$, with a resolution of 100 m. They obtained $`\mathrm{D}_H\simeq 1.43`$ at length scales below approximately 5 km, and $`\mathrm{D}_H\simeq 1.73`$ for larger length scales. The data points could also lie on a continuous curve rather than two distinct scaling regimes. Thus the landscape appears smoother at shorter length scales. Short range smoothing may arise from processes, such as weathering, that also give rise to short range correlations. Various fractal biological systems formed as a result of an initial developmental process may also suffer smoothing as part of aging.
Since this work was completed, a number of other works have explored the kinetics of collapse using simulations, analytic treatments and scaling arguments. Timoshenko, Kuznetsov and Dawson studied the kinetics of collapse using Monte Carlo simulations and a mean-field “Gaussian self-consistent” approach. Their Monte Carlo simulations are based upon an underlying lattice model which is similar to ours; however, they do not move aggregates as a unit. Since they only move individual monomers, Monte Carlo rejection of moves causes the diffusion constant of aggregates to decrease very rapidly with aggregate size (naively, it decreases exponentially). By contrast, in a fluid, collective motion results in Stokes' law diffusion, which we have included in our simulations. The slow diffusion of aggregates in their simulations causes their results to be distinct from ours. From their figures it appears clear that aggregates tend to pin the polymer contour. Their Gaussian self-consistent approach is analytically elaborate; however, it is not clear from their analysis whether it treats the diffusion of clusters correctly. Moreover, since some equilibrium scaling laws are not correct in this method, it is hard to evaluate whether the kinetic properties are correct, and their analysis does not clarify this point aside from the claim that the analytic results are in agreement with their Monte Carlo simulations.
Buguin, Brochard-Wyart and de Gennes have presented scaling arguments based on a model of local clusters (“pearls”) forming during collapse close to the $`\mathrm{\Theta }`$-point in the mean field regime where surface tension is the driving force of collapse. Pitard has further considered the dynamics of collapse in this mean field regime by discussing the effect of tension along a polymer contour between two clusters (pearls) and extended the arguments to considerations of a string of clusters. These papers refer to a different regime (i.e. the mean field regime) than our analysis. Within this regime they provide complementary insights about the structure of clusters or pearls during collapse and the formation of a globule, which is important both to the kinetics of collapse and to the eventual structure of the aggregate that is formed at the end. It is worth noting that our simulations do not allow monomer motion along the polymer contour, which would allow monomers to leave and join aggregates. The distribution of cluster sizes may be affected by such motion. We note, however, that the essential results of this paper should not be changed by redistribution of monomers along the contour, and the resulting change in the distribution of cluster sizes, because these only affect the distribution of diffusion constants, which vary only weakly with aggregate size. Moreover, the scaling law Eq. (1) does not refer to aggregate size and should not be affected.
Finally, Kantor and Kardar have investigated the properties of charged polymers and find that their compact form exhibits a necklace shape, with end aggregates and intermediate aggregates forming as a function of the charge density. These results also display some interesting similarities to collapse behavior, and further research may reveal a connection between their results and the studies of collapse.
We would like to thank A. Grosberg and M. Kardar for helpful discussions. A referee is to be acknowledged for pointing out that the universal scaling law does not apply in the limit $`t0`$ before the first bonding events.
# Generation of density perturbations due to the birth of baryons
D.L. Khokhlov
Sumy State University, R.-Korsakov St. 2,
Sumy 244007, Ukraine
E-mail: khokhlov@cafe.sumy.ua
## Abstract
Generation of adiabatic density perturbations from fluctuations caused by the birth of baryons is considered. This is based on the scenario of baryogenesis in which the birth of protons takes place at a temperature equal to the electron mass.
The observed large-scale structure of the universe forms from the primary density perturbations after recombination, $`z_{rec}=1400`$. In the epoch of recombination, the photons decouple from the baryons and last scatter. After recombination the photons do not interact with the baryons, so the cosmic microwave background (CMB) anisotropy allows us to define the density perturbations of the baryonic matter in the epoch of recombination. Fluctuations in the CMB on large scales detected by the COBE satellite are $`\mathrm{\Delta }T/T=1.06\times 10^{-5}`$. Generation of adiabatic density perturbations from quantum fluctuations is considered within the framework of inflationary cosmology.
Let us consider generation of adiabatic density perturbations from fluctuations caused by the birth of baryons. According to the scenario of baryogenesis proposed in , at $`T>m_e`$, primordial plasma consists of neutral fermions. Neutral electrons are in the state being the superposition of electron and positron
$$|e^0>=\frac{1}{\sqrt{2}}(e^{-}+e^{+}).$$
(1)
Neutral protons are in the state being the superposition of proton and antiproton
$$|p^0>=\frac{1}{\sqrt{2}}(\overline{p}+p).$$
(2)
At $`T>m_e`$, there exists neutral proton-electron symmetry. Proton-electron equilibrium is defined by the proton-electron mass difference. At $`T=m_e`$, pairs of neutral electrons annihilate into photons, while pairs of neutral protons and electrons survive as protons and electrons. At $`T=m_e`$, the baryon-photon ratio is given by
$$\frac{N_b}{N_\gamma }=\frac{3}{4}\left(\frac{1}{2}\right)^5\left(\frac{m_e}{m_p}\right)^2.$$
(3)
Calculations yield $`N_b/N_\gamma =6.96\times 10^{-9}`$. It should be noted that the observed value of $`N_b/N_\gamma `$ lies in the range $`(2-15)\times 10^{-10}`$. A possible explanation of the discrepancy between the result of the calculations and the observed value is that most of the baryonic matter decays into non-baryonic matter during the evolution of the universe.
The birth of baryons causes potential fluctuations
$$\frac{\delta \phi }{\phi }=\frac{\rho _b}{\rho }.$$
(4)
In the case of a homogeneous universe, the spectrum of fluctuations is flat. At $`T=m_e`$, the value of the fluctuations (4) is given by
$$\frac{\delta \phi }{\phi }=\frac{4}{3}\times \frac{7}{8}\frac{N_b}{N_\gamma }\frac{m_p}{m_e}.$$
(5)
Here the factor $`4/3\times 7/8`$ describes the conversion from particle number density to energy density. Calculations yield $`\delta \phi /\phi =1.49\times 10^{-5}`$.
Density perturbations are related to potential fluctuations via the Poisson equation. Let us assume that the state of the fluid in the universe is given by
$$|\psi >=\frac{1}{\sqrt{3}}(|\psi _x>+|\psi _y>+|\psi _z>).$$
(6)
Compared with particles in the separate states $`|\psi _x>`$, $`|\psi _y>`$, $`|\psi _z>`$, the probability density for particles in the superposition state (6) is three times greater. In this case the adiabatic speed of sound is given by
$$v_s=\left(\frac{3p}{\rho }\right)^{1/2}.$$
(7)
For radiation with the equation of state
$$p=\frac{\rho c^2}{3}$$
(8)
the adiabatic speed of sound is equal to the speed of light
$$v_s=c.$$
(9)
At $`T=m_e`$, the fluid is radiation dominated. The relation between potential fluctuations and density perturbations of radiation is given by
$$\frac{\delta \phi }{\phi }=\frac{\delta \rho _\gamma }{\rho _\gamma }.$$
(10)
The relation between density perturbations of the baryonic matter and density perturbations of radiation is given by
$$\frac{\delta \rho _b}{\rho _b}=\frac{3}{4}\frac{\delta \rho _\gamma }{\rho _\gamma }.$$
(11)
In view of (7), fluctuations in the CMB are given by
$$\frac{\delta T}{T}=\frac{\delta \rho _b}{\rho _b}.$$
(12)
Calculations yield $`\delta T/T=1.12\times 10^{-5}`$.
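The quoted numbers follow directly from Eqs. (3), (5), (11) and (12); as a check, this short sketch (using the standard mass values $`m_e=0.511`$ MeV and $`m_p=938.27`$ MeV, which the text leaves implicit) reproduces them:

```python
m_e = 0.511      # electron mass, MeV
m_p = 938.272    # proton mass, MeV

# Eq. (3): baryon-to-photon ratio at T = m_e.
nb_over_ng = 0.75 * 0.5 ** 5 * (m_e / m_p) ** 2
print(f"N_b/N_gamma = {nb_over_ng:.3e}")   # ~ 6.96e-09

# Eq. (5): potential fluctuations at T = m_e.
dphi = (4.0 / 3.0) * (7.0 / 8.0) * nb_over_ng * (m_p / m_e)
print(f"dphi/phi    = {dphi:.3e}")         # ~ 1.49e-05

# Eqs. (10)-(12): dT/T = (3/4) * dphi/phi.
print(f"dT/T        = {0.75 * dphi:.3e}")  # ~ 1.12e-05
```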
## 1 Introduction
Mappings of sky brightness for extended areas were performed by Walker (1970, 1973) and Albers (1998) in the USA, by Bertiau, de Graeve and Treanor (1973; Treanor 1974) in Italy, and by Berry (1976) in Canada, with some simple modelling. These authors used population data of cities to estimate their upward light emission and a variety of propagation laws for light pollution in order to compute the sky brightness. Recently, DMSP satellite images have provided direct information on the upward light emission from almost all countries around the world (Sullivan 1989, 1991; Elvidge et al. 1997a, 1997b, 1998) and were used to study the increase of this flux with time (Isobe 1993; Isobe & Hamamura 1998).
In this paper we present a project to obtain detailed maps of artificial sky brightness in Europe in astronomical photometric bands with a resolution better than 3 km. In order to bypass the errors that arise when population data are used to estimate the upward flux, we construct the maps by measuring the upward flux directly in DMSP satellite night-time images and convolving it with a light pollution propagation function.
The DMSP satellites of the Defense Meteorological Satellite Program (DMSP) of the National Oceanic and Atmospheric Administration (NOAA) fly in a low-altitude (830 km) sun-synchronous polar orbit with an orbital period of 101 minutes. Visible and infrared imagery from the DMSP Operational Linescan System (OLS) instruments monitors the distribution of clouds all over the world twice a day, once in daytime and once at night. At night the instrument for visible imagery is a Photo Multiplier Tube (PMT) sensitive to radiation from 410 nm to 990 nm (470-900 FWHM) with the highest sensitivity at 550-650 nm, where the most used lamps for external night-time lighting have their strongest emission: Mercury Vapour (545 nm and 575 nm), High Pressure Sodium (from 540 nm to 630 nm), and Low Pressure Sodium (589 nm). The IR detector is sensitive to radiation from 10.0 $`\mu m`$ to 13.4 $`\mu m`$ (10.3-12.9 FWHM). Every fraction of a second each satellite scans a narrow swath extending 3000 km in the east-west direction. Data received by the NOAA National Geophysical Data Center have a nominal spatial resolution of 2.8 km, obtained by on-board averaging of five by five blocks of finer data with a spatial resolution of 0.56 km.
## 2 Description of the method
The main steps of our method are:
1. Construction of an expanded dynamics composite image. We search for cloud-free images, as verified by inspection of the IR images taken at the same time. In fact the presence of clouds over some cities constitutes a possible source of error: these clouds could hide or dim the light received by the satellite. Due to the limited dynamic range of the satellite detectors, the automatic gain normally saturates the most lit pixels of the largest cities. Sensitivity reaches $`10^{-5}`$ W $`m^{-2}sr^{-1}\mu m^{-1}`$ (Elvidge et al. 1997). A few images are sometimes taken with lower gain (50-24 dB) and these have only a few saturated pixels. We construct a composite image by replacing the saturated pixels in the higher-gain images, which are useful for accurately measuring low-population sources, with the measurements coming from the low-gain images, adequately rescaled. The pixel values are currently relative rather than absolute values because instrumental gain levels are adjusted to maintain a constant cloud reference brightness under different lighting conditions related to the solar and lunar illumination at the time. Elvidge et al. (1998) obtained a calibrated composite image of the entire world in 1998 from images specially taken without gain. A possible source of error in the satellite measurements is that each pixel of the 2.8 km resolution images is the sum of smaller detector pixels, and we have no way to check whether some of them were saturated. This uncertainty will be resolved only when high-resolution images are at our disposal.
2. Estimate of the upward light flux. We analyze the composite image by measuring pixel counts. Under the hypothesis that the shape of the average upward emission function is uniform, we obtain the relative upward light flux of the area covered by each pixel.
3. Propagation of the upward light flux in the atmosphere. The scattering from atmospheric particles and molecules spreads the light emitted upward by the cities. If $`f((x,y),(x^{\prime },y^{\prime }))`$ is a propagation law for light pollution giving the artificial sky brightness produced at a given position of the sky in a site in $`(x^{\prime },y^{\prime })`$ by an infinitesimal area $`dS=dxdy`$ in $`(x,y)`$ with unitary upward emission per unit area, the total artificial sky brightness $`b`$ at that position in the site is given by:
$$b(x^{\prime },y^{\prime })=\int e(x,y)f((x,y),(x^{\prime },y^{\prime }))dxdy$$
(1)
This expression is the convolution of the upward emission per unit area $`e(x,y)`$ with the propagation function $`f((x,y),(x^{\prime },y^{\prime }))`$. In practice, we divide the surface of Europe into pixels with the same positions and dimensions as in the satellite image. We assume each area of the country defined by a pixel to be a source of light pollution with an upward emission $`e_{x,y}`$ proportional to the measured pixel counts. In this case the sky brightness at the center of each pixel, given by expression (1), becomes:
$$b_{i,j}=\underset{h}{\sum }\underset{k}{\sum }e_{h,k}f((x_i,y_j),(x_h,y_k))$$
(2)
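Over a flat territory at fixed atmospheric conditions the propagation function depends only on the offset between source and site, so the double sum (2) is a discrete convolution and can be evaluated efficiently with FFTs. A schematic sketch of this step (our own illustration; the kernel below is a placeholder distance law, not the full model described next):

```python
import numpy as np
from scipy.signal import fftconvolve

def sky_brightness_map(emission, kernel):
    """Evaluate Eq. (2) as a 2-d convolution.

    emission: 2-d array of upward flux per pixel from the satellite image.
    kernel:   2-d array, propagation function f sampled at pixel offsets
              (valid when f depends only on source-site distance).
    """
    return fftconvolve(emission, kernel, mode="same")

# Toy example: one bright city on a 101 x 101 grid of 2.8 km pixels.
pix = 2.8  # km
emission = np.zeros((101, 101))
emission[50, 50] = 1.0

yy, xx = np.mgrid[-50:51, -50:51]
d = np.hypot(xx, yy) * pix
d[50, 50] = pix / 2.0            # regularize the central pixel
kernel = np.exp(-0.026 * d) / d  # schematic distance law, for illustration only

b = sky_brightness_map(emission, kernel)
print(b.max())
```

The FFT form makes the cost O(N log N) per map rather than O(N^2) for a direct double sum over pixels.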
The propagation function $`f((x_i,y_j),(x_h,y_k))`$ for each pair of points $`(x_i,y_j)`$ and $`(x_h,y_k)`$ (the positions of the observing site and the polluting area) is obtained with detailed models based on the modelling technique introduced and developed by Garstang (1986, 1987, 1988, 1989a, 1989b, 1989c, 1991a, 1991b, 1991c, 1992, 1993, 1999) and also applied by Cinzano (1999a, 1999b, 1999c). For each infinitesimal volume of atmosphere along the line of sight, the direct illuminance produced by each source and the illuminance due to light scattered once by molecules and aerosols are computed, the latter estimated with the approach of Treanor (1973) as extended by Garstang (1984, 1986). The total flux that molecules and aerosols in the infinitesimal volume scatter toward the observer is computed from the illuminance and, with an integration, the artificial brightness of the sky in that direction is obtained. Extinction along light paths is taken into account. The model assumes Rayleigh scattering by molecules and Mie scattering by aerosols. It is possible to take into account the altitudes of sites and sources, Earth-curvature effects, the scattering functions in the chosen photometric band, the shape of the average city upward emission function, and its geographical gradients. These models make it possible to associate the predictions with well-defined parameters related to the aerosol content, so the atmospheric conditions to which the predictions refer are well known.
4. Calibration of the results for a chosen aerosol content. Except in the case of Elvidge et al. (1998), satellite images are usually not calibrated, so upward flux measurements and artificial sky brightness maps are only relative. We calibrate the maps on the basis of (i) analysis of existing radiance-calibrated satellite images (Cinzano, Falchi, Elvidge, Serke, in prep.) and/or (ii) accurate measurements of sky brightness/luminance together with extinction from the Earth's surface (e.g. Falchi, Cinzano 1999).
## 3 Subsequent upgrades
We plan subsequently to take into account:
1. Altitude of each area. At first we plan to compute the maps for sea level.
2. Photometric bands. We plan to start with the V and B photometric bands and to extend to the R and U bands later. Astronomical brightness in V mag/arcsec<sup>2</sup> can be transformed into luminance in cd/m<sup>2</sup>.
3. Space resolution. We will start from ∼2.8 km resolution but we hope to go down to ∼0.5 km as soon as high-resolution images are obtained.
4. Curvature of the earth. This might produce an error for isolated areas but in strongly urbanized areas it is negligible. The effect of earth curvature is about 2 percent at 50 km (Garstang 1989). We will extend our calculations later.
5. Geographical gradients of the atmospheric aerosol content. The same atmospheric model as Garstang (1996, 1991) (also used by Cinzano 1999a, 1999b, 1999c) will be assumed at first, with the density of molecules and aerosols decreasing exponentially with height. We plan to use better atmospheric models as soon as they become available.
6. Geographical gradients of the average aerosol angular scattering function. The average scattering function of aerosols that will be adopted at first is the representation by Garstang (1991) of the function measured by McClatchey et al. (1978).
7. Geographical gradients of the average upward flux emission function of cities. For simplicity, we will assume at first that lighting habits are similar in all cities. Some studies have been undertaken to determine the best average function from satellite data (Falchi and Cinzano, in prep.) and from Earth-based observations (Cinzano, in prep.).
Accounting for the presence of sporadic denser aerosol layers at various heights or at ground level, as in Garstang (1991b), is beyond the scope of this work. We will also neglect the presence of mountains, which might shield a fraction of the atmospheric particles along the observer's line of sight from the light emitted by the sources. Given the vertical extent of the atmosphere with respect to the height of the mountains, the shielding is non-negligible only when the source is very near the mountain and both are quite far from the site (Garstang 1989; see also Cinzano 1999a). We also neglect the effects of the ozone layer and the presence of volcanic dust studied by Garstang (1991b, 1991c).
## 4 Sky brightness in Italy in 1971 and in 1998.
In order to test our method we obtained a first map of the V-band zenith artificial sky brightness in Italy (Falchi and Cinzano 1999; Falchi 1999) by applying the Treanor law, a very simple light pollution propagation function obtained by simplifying the Garstang integral under the hypothesis of Treanor (1973):
$$f((x,y),(x^{\prime },y^{\prime }))\propto \left(\frac{1}{\sqrt{(x-x^{\prime })^2+(y-y^{\prime })^2}}+\frac{C}{(x-x^{\prime })^2+(y-y^{\prime })^2}\right)e^{-k\sqrt{(x-x^{\prime })^2+(y-y^{\prime })^2}}$$
(3)
where C and k are constants related to the relative importance of the direct beam and the scattered component, and to the extinction of the city light. The constants were empirically determined by Bertiau et al. (1973) and checked by Falchi (1999) by fitting the zenith artificial brightness measured at various distances from some cities in Italy. Differences between B-band and V-band propagation are below the fluctuations produced by atmospheric conditions on standard ”clean” nights (Falchi 1999). Details about the input images and the computation have been discussed in Falchi & Cinzano (1999). In the same paper some maps of artificial and total sky brightness in linear and magnitude scales have been presented. The preliminary calibration was obtained by comparing the results with available measurements of sky brightness (Falchi & Cinzano 1999).
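A direct transcription of Eq. (3) is straightforward; in the sketch below the values of $`C`$ and $`k`$ are placeholders for illustration, not the constants fitted by Bertiau et al. (1973) and Falchi (1999):

```python
import math

def treanor_f(x, y, xs, ys, C=10.0, k=0.026):
    """Treanor propagation law, Eq. (3): relative zenith artificial sky
    brightness at a site (xs, ys) due to a unit source at (x, y).
    C and k are illustrative placeholders, not the fitted constants;
    the distance d is assumed to be nonzero and expressed in km."""
    d = math.hypot(x - xs, y - ys)
    return (1.0 / d + C / d ** 2) * math.exp(-k * d)

# Brightness falls steeply with distance from the source:
for d in (5.0, 10.0, 25.0, 50.0):
    print(d, treanor_f(d, 0.0, 0.0, 0.0))
```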
In order to compare our results with those of Bertiau et al. (1973), we present in figure 1 the map of artificial sky brightness in Italy in 1971 obtained from their data. Levels correspond to the following fractions of the natural sky brightness: $`<`$0.05, 0.05-0.15, 0.15-0.35, 0.35-1.1, $`>`$1.1. The resolution of the map is about 15 km.
In figure 2 we present the map of the artificial sky brightness in 1998. Levels are represented as in the previous image, but three more levels have been added at 3-10, 10-30 and $`>`$30 times the natural sky brightness in order to better show the situation in the most polluted areas. The true spatial resolution is about 2.7 km. The comparison of the maps shows that in less than 30 years the artificial sky brightness has increased quite uniformly by a factor of about 5-10, so that it is now greater than the natural brightness over almost all of the Italian territory. Only in a few small areas of Tuscany, Basilicata, Calabria and Sardinia is the natural sky brightness still greater than the artificial one. No area in Italy has an artificial sky brightness which can be considered acceptable according to IAU recommendations (i.e. lower than 10% of the natural sky brightness - Smith 1979). A few areas show an increase above the mean, e.g. the north-east of Sardinia (Costa Smeralda), where the artificial sky brightness has increased more than 20 times. The average growth rates in Italy are about 6%-9% per year, in agreement with Cinzano (1999a, 1999b).
Figure 3 shows the prediction for the night sky brightness in 2025, extrapolated from the average increase from 1971 to 1998 measured in a sample of 40 sites ($`6.3\pm 0.3`$). We assumed that the trends of light pollution growth remain unchanged. Levels are represented as in the previous images, but another level at brightness $`>`$100 times the natural sky has been added to better show the brightest areas. If the average growth rates remain the same as in the last 27 years, we predict that the night sky in 2025 will be 40 times brighter than the sky in 1971.
Our conclusion is therefore that the artificial night sky brightness produced by light pollution is a problem which cannot be neglected or postponed any longer.
## Acknowledgements
# Single and Double Resonance Microwave Spectroscopy in Superfluid <sup>4</sup>He Clusters
## Abstract
Purely rotational transitions of a molecule embedded in large <sup>4</sup>He clusters have been detected for the first time. The saturation behavior shows that, in contrast to previous expectations, the microwave line profiles are dominated by inhomogeneous broadening mechanisms. Spectral holes and peaks produced by microwave-microwave double resonance have widths comparable to those of single resonance lines, indicating that relaxation occurs among quantum states of the inhomogeneous distribution of each rotational level, at a rate $``$10 times faster than rotationally inelastic relaxation.
The spectroscopy of molecules embedded in large <sup>4</sup>He clusters has recently received considerable experimental and theoretical attention. Nanometer scale helium clusters (nanodroplets), containing from several hundreds to more than 10<sup>4</sup> He atoms, provide a unique environment for high resolution matrix spectroscopy where the advantages of both conventional matrix spectroscopy and molecular beam spectroscopy are combined. Since these clusters will pick up molecules or atoms that they encounter on their path without being appreciably deflected, they allow for a high degree of synthetic flexibility, and in particular for the formation and stabilization of weakly bound and unstable species. Evaporative cooling has been found to maintain <sup>4</sup>He nanodroplets at a temperature of 0.4 K, well below the predicted superfluid transition temperature which ranges from 2.14 K in bulk liquid helium to 1.5 K for clusters of only 10<sup>2</sup> He atoms. As the perturbations imposed on the guest molecules by the helium host are minimal, the shift and width of spectroscopic lines in <sup>4</sup>He clusters are considerably less than for traditional matrix environments. Furthermore, rotationally resolved spectra have been observed for a large variety of molecules which show the structure predicted by the gas phase symmetry of the molecules with, however, reduced rotational constants. By showing that ro-vibrational spectra in <sup>3</sup>He clusters collapse into a single line, the weakly damped molecular free rotation present in liquid <sup>4</sup>He has recently been demonstrated to be a direct consequence of the boson nature of <sup>4</sup>He and is considered a microscopic manifestation of superfluidity.
An important unresolved question posed by the IR spectra relates to the physical process responsible for the line broadening observed in ro-vibrational transitions, which ranges from 150 MHz in the case of the R(0) line in the $`\nu _3`$ fundamental in OCS to 5.7 cm<sup>-1</sup> for the case of the P(1) line in the $`\nu _3`$ asymmetric stretch in H<sub>2</sub>O. For the carefully studied case of the $`\nu _3`$ fundamental of SF<sub>6</sub>, the lines were found to have a Lorentzian shape of width ∼300 MHz independent of the rotational transition, which led to the suggestion that the linewidth reflected vibrational relaxation and/or dephasing.
Since He clusters remain fluid down to zero temperature and because of the very large zero point motion of the <sup>4</sup>He atoms, it appears natural to assume that the spectra of molecules seeded in this medium should not display inhomogeneous effects other than contributions from the cluster size distribution via size-dependent frequency shifts, which, however, have been shown to be small. In solids, variations of local binding sites lead to a distribution of vibrational frequencies, which results in inhomogeneous broadening that dominates the linewidths at low temperature. In contrast, in liquids local solvation fluctuations lead to dynamic dephasing. Treating the clusters as a classical liquid, one may expect the timescale of the solvation fluctuations (due to the large zero point kinetic energy of the He atoms) to be much shorter than the dephasing times observed in most ro-vibrational spectra, and hence the effect of fluctuating solvation would likely be strongly motionally averaged, leading to homogeneous, Lorentzian lineshapes. The spectra of SF<sub>6</sub> and CH<sub>3</sub>CCH, which are well described with a free rotor Hamiltonian and Lorentzian line shapes, seem to confirm the assumption that the major source of line broadening is of homogeneous nature.
However, at temperatures as low as 0.4 K in a superfluid medium solvation fluctuations are probably more appropriately described in terms of the interaction with the thermally populated fundamental modes of the cluster. For typical cluster sizes (well below $`N`$=10<sup>5</sup>) only surface excitations (ripplons) have to be considered. The coupling of the molecular vibration to these modes has been estimated and found to be too weak to explain the observed linewidths.
The observation of rotational structure in vibrational transitions suggests that pure rotational spectroscopy, by excitation of microwave (MW) radiation, should provide a useful probe of the rotational dynamics of the dopant molecules in the superfluid helium environment. It should be noted that it was not clear a priori whether such spectra could be observed at all. Due to the short absorption path and low densities characteristic of molecular beams, direct MW absorption measurements are not viable. Whereas transitions in the UV and visible spectral range can be efficiently detected by laser induced fluorescence, beam depletion spectroscopy is employed to detect transitions in the near and mid IR. In this method, photon absorption and subsequent relaxation of the molecular excitation energy leads to He atom evaporation from the cluster and a decrease in the flux of He atoms in the droplet beam is observed. While absorption of a single IR photon leads to the evaporation of hundreds or more He atoms from a droplet, at least 10 microwave photons per droplet will need to be absorbed to provide sufficient energy to evaporate a single He atom (∼5 cm<sup>-1</sup>). In order to produce a signal of sufficient size to be detected, many He atoms per cluster must be evaporated. This requires the rotational relaxation to occur on a time scale significantly shorter than 10 $`\mu `$s. Although a lower limit of the rotational relaxation time of the order of hundreds of ps is established by the linewidth of typically 1 GHz observed in the IR spectra, the upper limit could be as high as tens of $`\mu `$s, in which case no signal would be observed. The upper limit is imposed only by the fact that for all observed IR spectra the rotational populations are fully thermalized at the temperature of the He droplets by the time the clusters reach the laser interaction region.
Here we report measurements of the microwave spectrum of HCCCN in He nanoclusters detected by the method of beam depletion spectroscopy. HCCCN was used because of its large dipole moment (3.7 Debye) and its linear structure (rotational constant $`B=1.5`$ GHz in the helium nanodroplets), which leads to strong and well-resolved rotational transitions.
The molecular beam set-up will be described in detail elsewhere. Here we will give only a short summary highlighting the aspects unique to the present study. Clusters are formed in a supersonic free-jet helium expansion from a cold 5 $`\mu `$m diameter nozzle which, in the measurements presented here, is operated at 26 K and a stagnation pressure of 100 atm, yielding an average cluster size of ∼3$`\times `$10<sup>3</sup> atoms/cluster (estimated from Ref. ). After collimation by a conical skimmer, the clusters pass through a pickup cell containing typically $`3\times 10^{-4}`$ torr of the gas of interest and collect (on average) one foreign molecule each. Subsequently the clusters pass through a 10 cm long P-band microwave guide (nominal 12-18 GHz) which is aligned parallel to the cluster beam. The MW amplitude is modulated at 310 Hz. The molecular beam enters and exits the waveguide through two 3 mm holes in E-bends located at each end of the device. If multiple resonant photon absorption and subsequent relaxation of the molecule-helium cluster system occurs, the beam depletion signal is recorded by a liquid helium cooled silicon bolometer using a lock-in technique.
The microwave radiation is produced by a sweep generator (HP 8350B) with a 0.01-26.5 GHz plug-in (HP 8359A) and is amplified by a traveling wave tube amplifier (Logi Metrics A310/IJ) to a power level between 0.05 W and 3.4 W (corresponding to a field strength of 0.78 to 6.5 kV/m in the center of the P-band waveguide). The power transmitted through the waveguide is attenuated by 30 dB and measured by a crystal detector (HP8473B), the output of which is used to level the power of the sweep generator during frequency scans.
Spectra of the J=3→4 and the J=4→5 transitions in the ground vibrational state of HCCCN obtained for various MW field strengths between 0.78 and 6.5 kV/m are shown in Fig. 1. The line centers agree well with the line positions predicted from the molecular constants obtained from the ro-vibrational IR spectrum of HCCCN in the helium clusters. The linewidths (FWHM) are observed to increase from ∼0.6 to ∼1 GHz for the J=3→4 transition and from ∼0.8 to ∼1.2 GHz for the J=4→5 transition when the microwave field is increased from 0.78 to 6.5 kV/m. At low MW fields these linewidths are comparable to the width of the corresponding ro-vibrational transitions in the spectra of the fundamental CH stretching mode, indicating that vibrational relaxation and dephasing, which frequently are the dominant line broadening mechanisms in the spectra of impurities in classical liquids, are not the main source of broadening for a molecule such as HCCCN in a superfluid helium cluster. Similar linewidths have been observed by us for the corresponding microwave transitions in CH<sub>3</sub>CN and CH<sub>3</sub>CCH.
The dependence of the signal amplitude $`S`$ on the microwave field strength $`E`$ has been measured for HCCCN with the MW frequency fixed at the top of either the J=3→4 or the J=4→5 transition and is well described by $`S\propto E^2/(1+E^2/E_{sat}^2)^{1/2}`$ with the saturation field $`E_{sat}=`$1.1(2) kV/m (see inset of Fig. 1). This saturation behavior, resulting in a linear dependence of the absorption as a function of the MW field intensity for $`E\gg E_{sat}`$, demonstrates that, in contrast to the previous expectations, the linewidth is dominated by inhomogeneous broadening. With the saturation parameter $`(E/E_{sat})^2`$, the homogeneous unsaturated linewidth is calculated to be at least a factor of 6 narrower than the inhomogeneous linewidth observed at a MW field intensity of 6.5 kV/m. This sets the lower limit of the rotational relaxation time to about 2 ns.
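Extracting $`E_{sat}`$ amounts to a two-parameter least-squares fit of the saturation curve; the following sketch uses synthetic data standing in for the measured amplitudes (the noise level and seed are arbitrary):

```python
import numpy as np
from scipy.optimize import curve_fit

def sat_model(E, A, E_sat):
    # S = A * E^2 / sqrt(1 + (E / E_sat)^2)
    return A * E ** 2 / np.sqrt(1.0 + (E / E_sat) ** 2)

# Placeholder data over the experimental field range in kV/m;
# real values would be the measured signal amplitudes.
E = np.linspace(0.78, 6.5, 12)
S = sat_model(E, 1.0, 1.1) * (1.0 + 0.03 * np.random.default_rng(1).normal(size=E.size))

(A, E_sat), cov = curve_fit(sat_model, E, S, p0=[1.0, 1.0])
print(f"E_sat = {E_sat:.2f} +/- {np.sqrt(cov[1, 1]):.2f} kV/m")
```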
An upper limit for the rotational relaxation time has been set by a MW amplitude modulation experiment: with the MW frequency fixed on top of the 3→4 transition at a MW field of 7.8 kV/m, the signal height is monitored while the MW field is 100% square wave modulated at a frequency $`f`$. By modelling the 3→4 transition as a driven two-level system, the absorbed microwave power is calculated to increase by a factor of 2 when $`f`$ changes from $`f\ll 1/T_1`$ to $`f\gg 1/T_1`$, where $`1/T_1`$ is the population relaxation rate for the transition. This is basically independent of the dephasing rate ($`1/T_2`$), as long as the microwave power is sufficiently large to allow for saturation. From the fact that no increase in signal is observed for modulation frequencies up to 10 MHz we estimate an upper limit for the rotational relaxation time of about 20 ns. This limit is in agreement with the independent but less stringent estimate inferred from the comparison of the strengths of the MW spectra and the IR spectra, which implies that the rotational relaxation takes place on a time scale of tens of ns or faster.
In order to determine the homogeneous linewidth of the rotational transition, microwave-microwave double resonance experiments have been carried out. A second microwave source (HP8690B, plug-in HP8694B: 8-12.4 GHz) is employed to generate microwave radiation at a fixed frequency. While the first microwave field (the probe) is frequency scanned across the 3→4 and the 4→5 transitions, the second microwave field pumps the J=3→4 transition at about 11.1 GHz.
As the probe frequency approaches the pump frequency a strong decrease in signal is observed due to the depletion of the J=3 state by the pump, whereas the 4→5 transition signal is increased according to the enhanced population of the J=4 state (Fig. 2). Remarkably, the hole burnt into the 3→4 transition has a width of ∼50-70% of the single resonance linewidth, implying that the rotational population inversion relaxation time is larger than 4 ns. The increase in the 4→5 signal even occurs over the total width of the signal, indicating that there is a fast relaxation within the inhomogeneous distribution of each individual J level. This observation is important as it implies that a substantial part of the inhomogeneous line broadening is due to a dynamic effect rather than to a static effect such as the cluster size distribution.
The phenomenon of ‘dynamic’ inhomogeneous broadening can be understood assuming that there are additional degree(s) of freedom associated with a splitting of the rotational state into several substates and that the molecule may transit among these substates. Such transitions, which may well change the kinetic and potential energy of the molecule, but produce only a small change in its rotational energy, will be denoted as “elastic”. If the elastic relaxation rate is much less than the spectral line width, the lineshape reflects the distribution of resonance frequencies of the molecules in these additional quantum states. Each substate has a homogeneous width much narrower than the width of the inhomogeneous line. A double resonance experiment would be expected to show a correspondingly narrow hole in the pumped transition and a peak on top of the transition starting from the population enhanced rotor level. However, if the relaxation rate for the rotor quantum number is slower still than the relaxation between the substates, then the population disequilibrium produced by the MW pumping will be spread over many or all the substates. This produces, in our case, a broad depletion in the lower rotor level and an enhancement over the complete width of the higher rotor level.
The relative areas of the depletion and enhancement signals compared to the single resonance MW signal can be used to extract the relative rates of the two relaxation processes. By kinetic modelling of the transition rates between the J=3 and J=4 rotational levels and among the substates of each individual rotational state we estimate that the elastic relaxation within one rotational state is about one order of magnitude faster than the inelastic population inversion relaxation.
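A toy version of such kinetic modelling (our own sketch with invented rate values, not the authors' model) integrates rate equations for the sub-states of the pumped J=3 and J=4 manifolds, with the elastic mixing rate ten times the inelastic one; the fast elastic rate spreads the pump-induced hole and enhancement over all sub-states:

```python
import numpy as np

n_sub = 10               # sub-states per rotational level (illustrative)
G_el, G_rot = 1.0, 0.1   # elastic vs. inelastic rates: G_el ~ 10 * G_rot
pump = 0.5               # pump rate on one sub-state of J=3 -> one of J=4

p3 = np.full(n_sub, 1.0 / n_sub)   # initial populations, J=3 manifold
p4 = np.zeros(n_sub)               # J=4 manifold

dt = 0.01
for _ in range(20000):
    # Elastic relaxation drives each manifold toward its own mean;
    # inelastic relaxation returns J=4 population to J=3.
    dp3 = G_el * (p3.mean() - p3) + G_rot * p4
    dp4 = G_el * (p4.mean() - p4) - G_rot * p4
    dp3[0] -= pump * p3[0]         # pump depletes one J=3 sub-state
    dp4[0] += pump * p3[0]         # ... and feeds one J=4 sub-state
    p3 += dt * dp3
    p4 += dt * dp4

# Fast elastic mixing spreads the hole and the enhancement over all sub-states:
print(p3.round(3))
print(p4.round(3))
```

Comparing the depth of the hole and the height of the enhancement in such a model to the single resonance signal is what constrains the ratio of the two rates.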
In order to determine possible mechanisms underlying the observed inhomogeneous broadening one of the authors has analyzed the dynamics of a neutral impurity in a nanometer scale <sup>4</sup>He cluster showing that for an anisotropic impurity significant sources of line broadening arise from the coupling of the molecular rotation with the center of mass motion of the dopant. These couplings arise both from an anisotropic effective potential for the dopant when shifted from the exact center of the clusters, and from an orientationally dependent hydrodynamic contribution to the effective inertial mass of the dopant. It should be noted, however, that it is not clear how the molecular energy is transferred to the cluster since the energy released or absorbed by the molecular rotationally inelastic or elastic transition in general is not likely to match the quantized energy of the lowest cluster excitations.
The measurements presented here are the first observation of a purely rotational spectrum of any molecule in a liquid He environment, and have provided a unique window on the sources of line broadening and in particular on the rotational dynamics of the dopant in the superfluid helium environment. It has been unambiguously demonstrated that the rotational lines are dominated by inhomogeneous broadening, which is attributed to the coupling of the center of mass motion of the molecule within the finite size cluster to the molecular rotation. The second major observation is that the molecule transits among the quantum states of the inhomogeneous distribution on a time scale much faster than the rotational relaxation. These MW-MW double resonance measurements have been followed by a separate study using microwave-infrared double resonance, which provides new information on the relaxation dynamics of the dopant in the cluster and its dependence on the finite cluster size.
We would like to acknowledge R.E. Miller and his coworkers for the free flow of information between the two groups. We are indebted to Prof. W. Warren and Prof. S. Staggs for providing us with the MW sweep generators and to Dr. J. Fraser for lending us the traveling wave tube amplifier. This work was supported by the National Science Foundation (CHE-97-03604). I.R. is grateful to the Alexander-von-Humboldt Foundation for financial support.
Present address: Physikalisches Institut, Universität Heidelberg, Germany
# Hard diffraction from small-size color sources
Talk given by F. Hautmann at the Division of Particles and Fields Conference, Los Angeles, January 5-9, 1999.
## Abstract
We describe diffractive hard processes in the framework of QCD factorization and discuss what one can learn from the study of hadronic systems with small transverse size.
Diffractive deeply inelastic structure functions satisfy a factorization theorem of the form
$$F_2^{\mathrm{diff}}=\underset{a}{\sum }\widehat{F}_a\otimes \frac{df_{a/A}^{\mathrm{diff}}}{dx_Pdt},$$
(1)
where the first factor on the right-hand side is a short-distance scattering function and the second factor is a diffractive parton distribution, containing the long-distance physics. The short-distance factor is no different than in inclusive deeply inelastic scattering. The long-distance factor is. Although the evolution equation for the diffractive parton distribution functions is the same as that of the inclusive parton distribution functions, their behavior at a fixed scale $`\mu _0`$ that serves as the starting point for evolution may be very different from the behavior of the inclusive functions. The different phenomenology that characterizes diffractive versus inclusive deeply inelastic scattering depends entirely on this.
Diffractive parton distributions in a proton at the scale $`\mu _0`$ are not perturbatively calculable. This is because the proton has a large transverse size. Suppose one had a hadron of a size $`1/M`$ that is small compared to $`1/\mathrm{\Lambda }_{\mathrm{QCD}}`$. Then one could compute diffractive parton distributions as a perturbation expansion. Results on diffraction of small-size hadronic systems have been presented in Ref. .
Fig. 1 shows a typical Feynman graph for one such case. Here we have considered a color-singlet current that couples only to heavy quarks of mass $`M\gg \mathrm{\Lambda }_{\mathrm{QCD}}`$. This system gets diffracted and acts as a color source with small radius. This is represented in the lower part of the graph. The top part of the graph represents the bilocal field operator that defines the gluon distribution. The particular Feynman graph in Fig. 1, although of a rather high order in $`\alpha _s`$, is leading in the limit $`1/x_P\to \mathrm{}`$, where $`x_P`$ is the fractional loss of longitudinal momentum from the diffracted hadron.
The physical picture that emerges from the analysis of graphs like that shown in Fig. 1 in the limit $`1/x_P\to \mathrm{}`$ is that of the familiar “aligned jet” model. The bilocal operator creates a large-momentum parton together with a color source of the opposite color. This is confined to move on a lightlike line and is part of the definition of the (inclusive or diffractive) parton distribution functions. This happens far from the incoming small-size hadron. The system created by the operator then passes through the color field of the small-size hadron, absorbing two gluons. What we have, then, is the scattering of two color dipoles.
At this order of perturbation theory, the result for the diffractive parton distributions has the following form
$$\frac{df_{a/A}^{\mathrm{diff}}}{dx_Pdt}(\beta ,x_P,\text{q}^2,M)=\frac{\alpha ^2e_Q^4\alpha _s^4}{x_P^2M^2}h_a(\beta ,\text{q}^2/M^2)\left[1+𝒪(x_P)\right].$$
(2)
Here $`\beta x_P`$ is the fraction of the hadron’s longitudinal momentum carried by the parton and $`𝐪`$ is the diffracted transverse momentum ($`t\simeq -𝐪^2`$). The functions $`h_a`$ are plotted in Fig. 2.
For small $`\beta `$ the distributions behave as
$$h_g\sim \beta ^{-1},\qquad h_q\sim \beta ^0\qquad (\beta \to 0).$$
(3)
For large $`\beta `$, both the gluon and quark distributions evaluated at any finite q have a constant behavior:
$$h_g,h_q\sim (1-\beta )^0\qquad (\beta \to 1,\text{q}\ne 0).$$
(4)
At $`\text{q}=0`$ there are cancellations in the leading $`\beta 1`$ coefficients, so that the distributions vanish in the $`\beta 1`$ limit:
$$h_g\sim (1-\beta )^2,\qquad h_q\sim (1-\beta )^1\qquad (\beta \to 1,\text{q}=0).$$
(5)
The distributions are dominated by small $`|t|\text{q}^2`$ everywhere except at very large $`\beta `$. Note that
$$h_g\gg h_q.$$
(6)
Roughly, the order of magnitude of the ratio between the diffractive gluon and quark distributions can be accounted for by the ratio of the associated color factors, $`C_A^2(N_c^2-1)/[C_F^2N_c]=27/2`$.
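This color-factor estimate is simple to verify, with $`C_A=N_c`$ and $`C_F=(N_c^2-1)/(2N_c)`$:

```python
from fractions import Fraction

N_c = Fraction(3)
C_A = N_c
C_F = (N_c ** 2 - 1) / (2 * N_c)   # = 4/3 for SU(3)

ratio = C_A ** 2 * (N_c ** 2 - 1) / (C_F ** 2 * N_c)
print(ratio)   # 27/2
```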
The result given above does not describe scaling violation. When additional gluons are emitted from the top subgraph in Fig. 1, on the other hand, ultraviolet divergences arise. The renormalization of these divergences leads to the dependence of the diffractive parton distributions on a renormalization scale $`\mu `$. This dependence is governed by the renormalization group evolution equations. The higher order, ultraviolet divergent graphs are suppressed compared to the graphs of the type in Fig. 1 by a factor $`\alpha _s\mathrm{log}(\mu ^2/M^2)`$. When $`\mathrm{log}(\mu ^2/M^2)`$ is large, these contributions are important, and thus evolution is important. On the other hand, when $`\mu `$ is of the same order as the heavy quark mass $`M`$, the higher order contributions are small corrections to the graphs considered above. Thus one may interpret the result given above as a result for the diffractive parton distributions at a fixed scale of order $`\mu ^2M^2`$. Then the diffractive parton distributions at higher values of $`\mu ^2`$ are given by solving the evolution equations with the result of Eq. (2) as a boundary condition.
How are these calculations for small-size systems related to the real world? Obviously, the protons probed in deeply inelastic scattering experiments at HERA have a large transverse size. Suppose that one had available a hadron of adjustable size. Start with a very small size, in which case diffraction is forced to take place mostly on short distances, and let the size increase. Since this scale acts as a physical infrared cut-off, longer and longer distances are now allowed to contribute to the diffraction process. In a naive perturbation expansion, by the time one gets to $`1\text{fm}`$ (about the proton radius) the answer would be completely dominated by the soft region. On the other hand, as the size of the hadronic system increases, nonperturbative dynamics sets in. The infrared-sensitive behavior suggested by the perturbative power counting is likely smoothed out by this dynamics. We hypothesize that, as we go to larger and larger sizes, the distance scales that dominate the diffraction process, rather than continuing to grow, stay frozen at some intermediate, semihard scale. The effect of this is to enhance the contribution from hard physics with respect to the contribution from soft physics.
Note that recent experimental observations on the $`x_P`$ dependence in diffraction may be regarded as providing support for the hypothesis of dominance of semihard scales in the diffractive parton distributions. It has been stressed that the value of $`\alpha _P(0)-1`$ (where $`\alpha _P(0)`$ is the pomeron intercept) measured in diffractive deeply inelastic scattering differs by a factor of about $`2`$ from the corresponding value measured in soft hadron-hadron cross sections.
We may explore this hypothesis by investigating whether the short-distance result that we have found for small-size systems is also relevant to describe the physics of diffraction from large-size objects, such as protons at HERA. In particular, we are interested in two features of the HERA data for $`F_2^{\mathrm{diff}}`$ : the surprising delay in the fall-off with $`Q^2`$ and the surprising flatness in $`\beta `$.
To carry out this study, we set $`M`$ in Eq. (2) equal to $`1.5\text{GeV}`$ and take the scale dependence of the diffractive parton distributions to be that given by the two-loop evolution equations with the results (2) as a boundary condition at $`\mu =M`$. The choice of the value for $`M`$ corresponds to choosing a value for the semihard scale discussed above. The value of this scale, strictly speaking, is to be regarded as a free parameter to be adjusted phenomenologically. In setting the value to $`1.5\text{GeV}`$ we are guided by the expectation that this scale should be roughly of the order of a GeV. See below for some qualitative remarks on different choices.
The upper panel of Fig. 3 shows the result we obtain for the scale dependence of the diffractive (flavor-singlet) quark distribution at a fixed value of $`\beta `$, $`\beta =0.2`$. (In this figure the dependence on $`\text{q}^2\simeq |t|`$ is integrated over, from $`0`$ to $`M^2`$). By comparison, in the lower panel we show the analogous result for the ordinary (inclusive) quark distribution, taken from the standard set CTEQ4M . Fig. 3 illustrates the different pattern of scaling violation in diffractive and inclusive deeply inelastic scattering. At moderate values of momentum fractions, while the ordinary quark distribution is flat or weakly decreasing with $`Q^2`$, the diffractive distribution is rising with $`Q^2`$. The explanation for the rise in the diffractive case lies with the behavior of the distributions at the initial scale $`M`$ (Fig. 2). More precisely, it depends on the gluon distribution being dominant throughout the range of momentum fractions. As $`Q^2`$ increases, gluons splitting into $`q\overline{q}`$ pairs feed the quark distribution and cause it to grow in the region of moderately large $`\beta `$.
In Fig. 4 we show results for the diffractive structure function $`F_2^{\mathrm{diff}}`$ as functions of $`\beta `$ at various values of $`Q^2`$. In Fig. 5 we show the same results along with the ZEUS data . Notice the main qualitative features of these results. The sign of the scaling violation is positive up to about $`\beta 0.55`$, reflecting the behavior noted for the quark distribution $`\mathrm{\Sigma }`$ in Fig. 3. In the range of intermediate values of $`\beta `$ (centered about $`\beta 0.5`$) $`F_2^{\mathrm{diff}}`$ is rather flat in $`\beta `$. These features are distinctive of the diffractive structure function compared to the inclusive structure function.
These features are in qualitative agreement with what is seen in the HERA data (Fig. 5). Note that, once one combines the analysis of diffraction from small-size states with the hypothesis of dominance of semihard scales (with a particular scale choice), all the dependences of the structure function are fully determined. That is, not only the $`Q^2`$ dependence but also the $`\beta `$ dependence are determined from theory. (The same applies to the $`t`$ dependence. In the results presented here $`t`$ is integrated over.) The only free parameter left is an overall normalization, which has been adjusted arbitrarily in Figs. 4,5.
In the region of small $`\beta `$, the curves of Figs. 4,5 have a different behavior from that suggested by the two data points at the lowest values of $`\beta `$ and lowest values of $`Q^2`$ ($`Q^2=8\text{GeV}^2`$ and $`Q^2=14\text{GeV}^2`$). If further data were to confirm this difference, this could point to interesting effects. Here we limit ourselves to a few qualitative remarks. As far as the theoretical curves are concerned, we note that the diffractive distributions that serve as a starting point for the evolution are fairly mild as $`\beta \to 0`$. The gluon distribution goes like $`1/\beta `$, while the quark distribution goes like a constant (see Eq. (3)). The small-$`\beta `$ rise of the structure function $`F_2^{\mathrm{diff}}`$ in the curves of Figs. 4,5 is essentially due to the form of the perturbative evolution kernels. As regards the data, it has been observed that for small $`\beta `$ the experimental identification of the rapidity gap signal may be complicated by the presence of low $`p_{\perp }`$ particles in the final state. If the current data hold up and especially if the same features are observed at lower values of $`\beta `$, it would be interesting to see whether detailed models for the saturation of the unitarity bound could accommodate this small $`\beta `$ behavior.
It is of interest to study how the comparison in Fig. 5 changes as the value of the semihard scale $`M`$ is changed. For example, one may be interested to lower this scale with respect to the value of $`1.5\text{GeV}`$ used so far. For $`M=1\text{GeV}`$ we find that, roughly speaking, the overall description of the data is of comparable quality (except at the lowest $`\beta `$ and $`Q^2`$ values, where the discrepancy noted above becomes more pronounced). However, we also find, in particular from the modest steepness of the $`\beta `$ shape at the highest $`Q^2`$, that a scale a bit higher than $`1\text{GeV}`$ seems to be preferred, perhaps suggesting that the pomeron is a relatively “small” object. We do not push this study to a quantitative level at present, but it appears that in the future more and better data on diffraction could be able to give us useful information on the value of this scale. It would be very interesting if one could connect it to other nonperturbative scales that enter in related areas of hadronic physics.
This research is supported in part by the U.S. Department of Energy grants No. DE-FG02-90ER40577 and No. DE-FG03-96ER40969.
# CCFM prediction for 𝐹₂ and forward jets at HERA
## 1 Introduction
The parton evolution at small values of $`x`$ is believed to be best described by the CCFM evolution equation , which for $`x\to 0`$ is equivalent to the BFKL evolution equation and for large $`x`$ reproduces the standard DGLAP equations. The CCFM evolution equation takes coherence effects of the radiated gluons into account via angular ordering. On the basis of this evolution equation, the Monte Carlo program SMALLX was developed already in 1992. In 1997 the Linked Dipole Chain model was developed as a reformulation of the original CCFM equation. Predictions of the CCFM equation for hadronic final state properties were studied in , paying special attention to non-leading effects. All approaches found a good description of $`F_2`$ but failed completely to describe the forward jet data of the HERA experiments, which are believed to be a signature for new small-$`x`$ parton dynamics.
In the following I discuss the treatment of the non-Sudakov form factor $`\mathrm{\Delta }_{ns}`$ as well as the effects of the so-called “consistency constraint”, which was found to be necessary to include non-leading contributions to the BFKL equation . I show that a good description of $`F_2`$ and the forward jet data can be achieved.
## 2 Implementation of CCFM in SMALLX
The implementation of the CCFM parton evolution in the forward evolution Monte Carlo program SMALLX is described in detail in . Here I only concentrate on the basic ideas and discuss the treatment of the non-Sudakov form factor.
The initial state gluon cascade is generated in a forward evolution approach from a starting parametrization of the $`k_t`$-unintegrated gluon distribution:
$$xG_0(x,k_t^2)=N(1-x)^4\mathrm{exp}\left(-k_t^2/k_0^2\right)$$
(1)
where $`N`$ is a normalization constant and $`k_0^2=1`$ GeV<sup>2</sup>. Gluons then can branch into a virtual ($`t`$-channel) gluon $`k_{i+1}`$ and a final gluon $`q_{i+1}`$ according to the CCFM splitting function :
$$dP_i=\stackrel{~}{P}_g^i(z_i,q_{ti}^2,k_{ti}^2)\,\mathrm{\Delta }_s\,dz_i\,\frac{d^2q_{ti}^{\prime }}{\pi q_{ti}^{\prime 2}}\,\mathrm{\Theta }(q_{ti}^{\prime }-z_iq_{ti-1}^{\prime })\,\mathrm{\Theta }(1-z_i-ϵ_i)$$
(2)
with $`q_{ti}^{\prime }=q_{ti}/(1-z_i)`$ being the rescaled transverse momentum, $`z_i=x_i/x_{i-1}`$, $`ϵ_i=Q_0/q_{ti}^{\prime }`$ being a collinear cutoff to avoid the $`1/(1-z)`$ singularity and $`\mathrm{\Delta }_s`$ being the Sudakov form factor:
$$\mathrm{\Delta }_s(q_{ti}^{\prime },z_iq_{ti-1}^{\prime })=\mathrm{exp}\left(-\int \frac{d^2q_t^{\prime }}{q_t^{\prime 2}}\int dz\frac{\overline{\alpha }_s}{1-z}\right)$$
(3)
which at an inclusive level cancels against the $`1/(1-z)`$ collinear singularity and is used to generate $`q_{ti}^{\prime }`$. The gluon splitting function $`\stackrel{~}{P}_g^i`$ is given by:
$$\stackrel{~}{P}_g^i=\frac{\overline{\alpha }_s(q_{ti}^2)}{1-z_i}+\frac{\overline{\alpha }_s(k_{ti}^2)}{z_i}\mathrm{\Delta }_{ns}(z_i,q_{ti}^2,k_{ti}^2)$$
(4)
with the non-Sudakov form factor $`\mathrm{\Delta }_{ns}`$ being defined as:
$$\mathrm{log}\mathrm{\Delta }_{ns}=-\overline{\alpha }_s(k_{ti}^2)\int \frac{dz^{\prime }}{z^{\prime }}\int \frac{dq^2}{q^2}\,\mathrm{\Theta }(k_{ti}-q)\,\mathrm{\Theta }(q-z^{\prime }q_{ti})$$
(5)
which gives for the region $`k_{ti}^2>z_iq_{ti}^2`$:
$$\mathrm{log}\mathrm{\Delta }_{ns}=-\overline{\alpha }_s(k_{ti}^2)\mathrm{log}\left(\frac{1}{z_i}\right)\mathrm{log}\left(\frac{k_{ti}^2}{z_iq_{ti}^2}\right)$$
(6)
The constraint $`k_{ti}^2>z_iq_{ti}^2`$ is often referred to as the “consistency constraint” . However, the upper limit of the $`z^{\prime }`$ integral is constrained by the $`\mathrm{\Theta }`$ functions in eq.(5) to $`z_i\le z^{\prime }\le \mathrm{min}(1,k_{ti}/q_{ti})`$ (I am grateful to J. Kwiecinski for the explanation of these constraints), which results in the following form of the non-Sudakov form factor :
$$\mathrm{log}\mathrm{\Delta }_{ns}=-\overline{\alpha }_s(k_{ti}^2)\mathrm{log}\left(\frac{z_0}{z_i}\right)\mathrm{log}\left(\frac{k_{ti}^2}{z_0z_iq_{ti}^2}\right)$$
(7)
where
$$z_0=\{\begin{array}{cc}1\hfill & \text{if }k_{ti}/q_{ti}>1\hfill \\ k_{ti}/q_{ti}\hfill & \text{if }z<k_{ti}/q_{ti}\le 1\hfill \\ z\hfill & \text{if }k_{ti}/q_{ti}\le z\hfill \end{array}$$
In the region $`k_{ti}/q_{ti}\le z`$ there is thus no suppression and $`\mathrm{\Delta }_{ns}=1`$. The Monte Carlo program SMALLX has been modified to include the non-Sudakov form factor according to eq.(7), and the scale in $`\alpha _s`$ was changed to $`k_t^2`$ in the cascade and in the matrix element. To avoid problems at small $`k_t^2`$, $`\alpha _s(k_t^2)`$ is restricted to $`\alpha _s(k_t^2)\le 0.6`$.
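For bookkeeping purposes, a minimal sketch of eq.(7) with the $`z_0`$ prescription might look as follows; the fixed value of $`\overline{\alpha }_s`$ and the kinematics chosen are purely illustrative.

```python
import math

def log_delta_ns(z, q_t, k_t, alpha_bar=0.2):
    """log(Delta_ns) of eq.(7) with the z0 prescription given above.
    alpha_bar plays the role of alpha_s-bar(k_t^2); it is held fixed
    here purely for illustration."""
    ratio = k_t / q_t
    if ratio > 1.0:
        z0 = 1.0
    elif ratio > z:
        z0 = ratio
    else:
        return 0.0  # k_t/q_t <= z: no suppression, Delta_ns = 1
    return -alpha_bar * math.log(z0 / z) * math.log(k_t**2 / (z0 * z * q_t**2))

# The suppression grows as z decreases, as expected for a non-Sudakov factor.
for z in (0.1, 0.01, 0.001):
    print(z, math.exp(log_delta_ns(z, q_t=2.0, k_t=5.0)))
```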
The “consistency constraint” was introduced to account for next-to-leading effects in the BFKL equation, and was found to simulate about 70% of the full next-to-leading corrections to the BFKL equation. Since in LO BFKL the true kinematics of the branchings are neglected, they can be interpreted as next-to-leading effects, and this constraint is often also called the “kinematic constraint”. In the CCFM equation energy and momentum conservation are already included at LO, and it is not clear whether the arguments coming from BFKL also apply to CCFM. In the following the effects of the $`1/(1-z)`$ terms and the “consistency constraint” are studied in more detail.
## 3 Predictions for $`F_2`$ and forward jets at HERA
With the modifications on the treatment of the non-Sudakov form factors described above, the predictions of the CCFM evolution equation, as implemented in the program SMALLX , for the structure function $`F_2(x,Q^2)`$ are shown in Fig. 1 without applying any additional “consistency constraint”.
Here the following parameters were used: $`Q_0=1.1`$ GeV, $`\mathrm{\Lambda }_{QCD}=0.2`$ GeV, $`N=0.4`$ and the masses for light (charm) quarks were set to $`m_q=0.25(1.5)`$ GeV. The scale of $`\alpha _s`$ in the off-shell matrix element was set to $`k_t^2`$. With these parameter settings a very good description of $`F_2`$ over the range $`0.5\times 10^{-5}<x<0.05`$ and $`3.5<Q^2<90`$ GeV<sup>2</sup> is obtained. In Fig. 2 the prediction for the forward jets using the same parameter setting is shown. The data are nicely described.
The effect of the $`1/(1-z)`$ term in the splitting function is shown in Figs. 1 and 2 separately with the dashed line. It is obvious that these terms are important for a reasonable description of $`F_2`$ and the forward jet data.
Including the “consistency constraint” the $`x`$ dependence of the cross section changes, but a similarly good description of $`F_2`$ is obtained by changing $`Q_0`$ to $`Q_0=0.85`$ GeV. However the forward jet cross section becomes smaller, as shown with the dotted line in Figs. 1 and 2. It is interesting to note, that this prediction is very similar to the one obtained from the BFKL equation as shown in for $`k_t^2`$ as the scale in $`\alpha _s`$.
## 4 Acknowledgments
I am very grateful to B. Webber for providing me with the code of SMALLX. I am grateful to B. Andersson, G. Gustafson, H. Kharraziha, J. Kwiecinski, L. Lönnblad, A. Martin, S. Munier, R. Peschanski and G. Salam for many very helpful discussions about CCFM.
# Shape and blocking effects on odd-even mass differences and rotational motion of nuclei
## Abstract
Nuclear shapes and odd-nucleon blockings strongly influence the odd-even differences of nuclear masses. When such effects are taken into account, the determination of the pairing strength is modified resulting in larger pair gaps. The modified pairing strength leads to an improved self-consistent description of moments of inertia and backbending frequencies, with no additional parameters.
Since the BCS theory was applied to atomic nuclei , pairing correlations have been crucial to the understanding of many properties, such as binding energies, collective rotational motion and quasiparticle excitation energies. The interaction strength, $`G`$, of the pairing force is the key parameter that governs the properties of the short range correlations.
The $`G`$ value is usually determined by fitting the BCS pairing gaps ($`\mathrm{\Delta }=G\sum _iU_iV_i`$) of even-even nuclei to experimental odd-even mass differences, $`D_{\mathrm{oe}}`$ (where $`U_i`$ and $`V_i`$ are the emptiness and occupation amplitudes of nucleon pairs). However, when one calculates theoretical $`D_{\mathrm{oe}}`$ values (i.e. in the same manner as calculating experimental $`D_{\mathrm{oe}}`$ values, but with theoretical nuclear masses) with the $`G`$-value determined according to the above prescription, it turns out that they do not agree with experiment. The theoretical values are systematically smaller than the pairing gaps, at least for the rare-earth deformed nuclei described below. In principle, experimental odd-even mass differences should be compared with theoretical $`D_{\mathrm{oe}}`$ values. Hence, the pairing strength $`G`$ should be adjusted to reproduce, at least on average, theoretical odd-even masses and not pair gaps of even-even nuclei.
Although pairing correlations are dominantly responsible for odd-even mass differences, there exist other non-negligible effects, in view of the systematic differences mentioned above. One important effect stems from the deformed mean field. Due to the Kramers degeneracy of single-particle levels, odd- and even-nucleon systems will have different energies in a deformed field. The interplay between pairing and this ‘mean field’ effect has been clarified in a recent work by Satula et al. . For light- and medium-mass nuclei, it is comparable with the pairing contribution. Furthermore, when neighbouring nuclei (which are involved in calculating odd-even mass differences) have different deformations, shape-changing effects will also play a role. These two factors originate from the mean field and we will refer to them in the following as shape effects.
Another important influence is the blocking effect. Experimental odd-even mass differences contain the odd-nucleon blocking effect in adjacent odd-particle systems, which is absent in even systems. This effect can become significant, especially when the densities of single-particle levels around the neutron and proton Fermi surfaces are not very high. Simple BCS calculations typically show that odd-nucleon blockings reduce pairing gaps by more than 10% for the nuclei in the rare-earth region. Both the shape and blocking effects will influence the determination of the pairing strengths.
Two very sensitive probes of pairing correlations, and therefore of the pairing strengths, are moments of inertia (see e.g. ) and backbending (bandcrossing) frequencies . States of high seniority may serve as another probe, and recent calculations of the energies of multi-quasiparticle states show the need for adjustment of the pairing strength. Hence, the question arises as to whether the pairing strength determined from odd-even mass differences is consistent with the pairing strength used to calculate moments of inertia or energies of high seniority states. A consistent way to determine the $`G`$-value is therefore an important issue for the quantitative description of nuclear properties. In this paper, we show that when shape and blocking effects are taken into account, the pairing strength $`G`$ needs to be modified in order to reproduce experimental $`D_{\mathrm{oe}}`$ values. Such modifications result in an improved self-consistent description for both moments of inertia and band-crossing frequencies.
In order to minimize the influences of the quantities that are not relevant for our discussion, we use the five-point formula of Ref. to determine experimental $`D_{\mathrm{oe}}`$ values. For an even-even nucleus,
$$D_{\mathrm{oe}}=-\frac{1}{8}[M(N+2)-4M(N+1)+6M(N)-4M(N-1)+M(N-2)],$$
(1)
where $`M(N)`$ is the mass of an atom with neutron number, $`N`$ (or $`Z`$ for protons). The quantity, $`D_{\mathrm{oe}}`$, is calculated along an isotopic (or isotonic) chain. With Eq.(1) we investigate shape and blocking effects using the deformed Woods-Saxon (WS) model . According to the Strutinsky energy theorem , the total energy of a nucleus can be decomposed into a macroscopic and microscopic part. The latter consists of shell and pairing correction energies. For the macroscopic energy, we employ the standard liquid-drop model of Ref.. Pairing correlations are treated by a technique of approximate particle-number projection, known as the Lipkin-Nogami (LN) method which takes particle-number-fluctuation effects into account by introducing an additional Lagrange multiplier, $`\lambda _2`$. Both monopole and quadrupole pairings are included for residual two-body interactions.
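As a sanity check of Eq.(1), a minimal sketch with a schematic mass table can be used; the smooth part and the 1 MeV stagger on odd-$`N`$ masses below are illustrative only.

```python
def d_oe_five_point(M, N):
    """Five-point odd-even mass difference of Eq.(1), evaluated at even N.
    M maps neutron number to atomic mass (e.g. in MeV)."""
    return -(M[N + 2] - 4 * M[N + 1] + 6 * M[N] - 4 * M[N - 1] + M[N - 2]) / 8.0

# Toy mass table: a smooth quadratic part plus a 1 MeV pairing stagger
# on the odd-N masses (all numbers are illustrative).
masses = {N: 0.1 * N**2 + (1.0 if N % 2 else 0.0) for N in range(90, 111)}
print(d_oe_five_point(masses, 100))  # -> 1.0; the smooth part cancels exactly
```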
Nuclear shapes are determined by minimizing calculated potential-energy-surface (PES) energies in the quadrupole deformation ($`\beta _2,\gamma `$) space with hexadecapole ($`\beta _4`$) variation. For well-deformed nuclei, pairing energies only weakly influence equilibrium deformations. Therefore, we use the monopole pairing strengths obtained by the average gap method to determine nuclear deformations. The quadrupole pairing strength is determined by restoring the local Galilean invariance with respect to quadrupole shape oscillations . Whereas quadrupole pairing is essential for the proper description of the moments of inertia , its influence on nuclear binding energies is negligible, since we use the doubly-stretched quadrupole operators .
In the present work, we focus on well-deformed rare-earth nuclei where an abundance of regular collective rotational bands with backbending have been observed. The shape effects, coming from the shell-correction and the macroscopic deformation energies, can be calculated using Eq.(1), after determining the equilibrium deformations. Our calculations show that the shape effects are usually of the order of 100 to 200 keV for a range of even-even Er, Yb, Hf and W isotopes. If one neglects the changes of deformation, the shape effect for a deformed even system ($`N=2n`$) can be written as $`\frac{1}{2}(e_{n+1}-e_n)`$ for the three-point formula or $`\frac{1}{4}(e_{n+1}-e_n)`$ for the five-point formula of Eq.(1) (where $`e_i`$ is the single-particle energy). Shape effects calculated from the above simple forms differ from those calculated according to Eq.(1), when shape changes are included. This implies that the polarization effects of the odd nucleons have to be considered explicitly, as is done in the present work. In contrast to light nuclei , the mean-field effects for heavy nuclei are not so large due to the relatively close spacing of the single-particle levels.
In the LN model (for the case of monopole pairing) the quantity, $`\mathrm{\Delta }+\lambda _2`$, is assumed to be identified with the odd-even mass difference, $`D_{\mathrm{oe}}`$, provided that other physical influences (e.g. shape and blocking effects) are ignored. Hence, the additional contribution coming from the blocking effect can be defined as
$$\delta _{\mathrm{block}}=D_{\mathrm{oe}}^{\mathrm{pair}}-(\mathrm{\Delta }+\lambda _2),$$
(2)
where the $`D_{\mathrm{oe}}^{\mathrm{pair}}`$ is the theoretical odd-even difference of pairing energies. The $`D_{\mathrm{oe}}^{\mathrm{pair}}`$ values are calculated using Eq.(1) with the odd-nucleon blocking effect taken into account. Note that blocking also affects the pairing self energy (Hartree-term), which also contributes to the $`D_{\mathrm{oe}}^{\mathrm{pair}}`$ value. If the blocking (and deformation changing) effects were neglected, we should have $`D_{\mathrm{oe}}^{\mathrm{pair}}\simeq \mathrm{\Delta }+\lambda _2`$. Since both the $`D_{\mathrm{oe}}^{\mathrm{pair}}`$ and $`\mathrm{\Delta }+\lambda _2`$ values increase (decrease) with increasing (decreasing) pairing strengths, the $`\delta _{\mathrm{block}}`$ values are not very sensitive to the changes in the $`G`$ values. We calculate the blocking effects with the $`G`$ values obtained by the average gap method . The results show that the blocking effects are usually about $`-200`$ to $`-400`$ keV for the rare-earth nuclei. The shape and blocking effects partially cancel, but non-zero effects remain systematically.
The obtained shape and blocking effects, $`\delta `$, are shown in Fig.1. These values range mostly from $`-100`$ to $`-300`$ keV, or about 10% to 30% of the corresponding odd-even mass differences in magnitude, clearly suggesting that one cannot neglect this component. Note that the size and the fluctuations of $`\delta `$ reflect two different ways to calculate odd-even mass differences, and not the difference between experimental and theoretical odd-even masses.
In general, the shape and blocking effects change smoothly with particle number and one would like to separate the contributions from $`\delta _{\mathrm{shape}}`$ and $`\delta _{\mathrm{block}}`$. However, the situation can become rather complex: For $`N=98`$–102, the calculated PES’s show that the nuclei are soft in $`\beta _2`$ deformation, particularly for <sup>172-176</sup>W. The $`\beta _2`$ softness results in relatively large uncertainties in the determination of the $`\beta _2`$ values and hence significantly influences the $`\delta _{\mathrm{shape}}`$ values. The same holds for the $`\delta _{\mathrm{block}}`$ values, leading to fluctuating results. Hence, separate consideration of the $`\delta _{\mathrm{shape}}`$ and $`\delta _{\mathrm{block}}`$ values can be misleading. In contrast, the combined value of the shape and blocking effects is less shape dependent, since the total energy of a nucleus is not so sensitive to the deformation value in a range around the minimum of a soft PES.
With the above shape and blocking effect ($`\delta `$) theoretical odd-even mass differences ($`D_{\mathrm{oe}}^{\mathrm{th}}`$) can be determined, and compared with experimental values ($`D_{\mathrm{oe}}^{\mathrm{expt}}`$) to obtain the pairing strengths. The modified $`D_{\mathrm{oe}}^{\mathrm{th}}`$ value can be written as
$$D_{\mathrm{oe}}^{\mathrm{th}}=\mathrm{\Delta }+\lambda _2+\delta .$$
(3)
In the practical calculations, the contribution from quadrupole pairing is included, though this term is not written explicitly in the above equation. However, as mentioned above, the contribution of the doubly-stretched quadrupole-pairing energies is very small (usually less than 30 keV in magnitude). In the right side of Eq.(3) the pairing gap, $`\mathrm{\Delta }`$, is the dominant term. Obviously, due to the presence of the $`\delta `$ term, $`D_{\mathrm{oe}}^{\mathrm{th}}`$ is in general not equivalent to the pairing gap ($`\mathrm{\Delta }+\lambda _2`$) of even-even nuclei. The $`\mathrm{\Delta }`$ value is very sensitive to the change in the $`G`$ value, while the $`\lambda _2`$ and $`\delta `$ values are not.
The presence of the negative $`\delta `$ value implies that the pairing strength $`G`$ needs to increase when one aims at self-consistent calculations of odd-even mass differences. By adjusting the pairing strength, one can reproduce the experimental $`D_{\mathrm{oe}}`$ value. However, in that case we found that each nucleus requires its separate determination of $`G`$. Apparently, the average gap method does not have the proper particle-number and deformation dependence. Of course, other quantities contributing to the odd-even mass difference may be lacking in our model. One such effect is the coupling to phonons, which will influence the ground-state binding energy, depending on the softness of the nuclear shape. Also, displacements of the single-particle spectrum of the Woods-Saxon potential will affect the calculated $`D_{\mathrm{oe}}`$ values. To disentangle the different contributions, especially to optimize a method to determine the average pairing strength, is outside the scope of the present work.
In order to better reproduce the average of experimental $`D_{\mathrm{oe}}`$ values, we scale the pairing strength by $`G=FG^0`$, where $`G^0`$ is the pairing strength obtained by the average gap method . For reasons of simplicity, we use a constant factor $`F`$, which we determine from individual $`F`$ values that have been fitted to the corresponding $`D_{\mathrm{oe}}^{\mathrm{expt}}`$ values in this mass region. We expect that using an average $`F`$ value will reduce the fluctuations arising from the uncertainties of experimental masses and from possible discrepancies between theoretical and experimental single-particle levels. For the region of the studied nuclei, we obtain $`\stackrel{~}{F}_\nu =1.08`$ (neutrons) and $`\stackrel{~}{F}_\pi =1.05`$ (protons) using the experimental masses of Ref.. This results in increases of the LN pairing gaps by about 25% for neutrons and 15% for protons.
To investigate the consistency of our method, we calculate the moments of inertia ($`J(\omega )`$) of yrast rotational bands by means of the pairing-deformation self-consistent cranked shell model . As mentioned at the beginning of the paper, the moment of inertia is a very sensitive probe of pairing correlations. It is not at all obvious that a pairing interaction that reproduces the odd-even mass difference can at the same time also reproduce the moments of inertia. In Fig. 2, we compare experimentally deduced moments of inertia with the results of our calculations, done with the standard pairing strength $`G^0`$, and the adjusted $`G=FG^0`$. Clearly, the adjusted $`G`$ values lead to improved descriptions of both moments of inertia and backbending frequencies. (No additional parameters are adjusted in the present calculations.)
In this context, one needs to recall the long-standing problem of cranking calculations with monopole pairing, which do not simultaneously describe both moments of inertia and band-crossing frequencies (see e.g. ). In order to reproduce moments of inertia, one in general needs to use a reduced pairing strength . But on the other hand, an enhanced pairing field is required to reproduce band-crossing frequencies . The presence of the time-odd component of the quadrupole pairing field results in a stiffer nucleus, which allows an increase of the $`G`$ value. Apparently, the doubly stretched quadrupole pairing interaction in combination with the Lipkin-Nogami method enables a consistent description of both band-crossing frequencies and moments of inertia. Other effects, like the coupling to the vibrational phonons, that may affect the crossing frequencies are of course not taken into account.
For some heavy isotopes, however, using the average $`F`$ values results in too small moments of inertia, e.g. in <sup>178</sup>Hf and <sup>180</sup>W. For these nuclei, the average $`F`$ values give too large $`D_{\mathrm{oe}}^{\mathrm{th}}`$ values. In fact, the $`\mathrm{\Delta }+\lambda _2`$ values obtained from $`G^0`$ already overestimate the experimental odd-even mass differences for some heavy isotopes, e.g. by 166 keV (neutrons) and 190 keV (protons) in <sup>178</sup>Hf and correspondingly 98 and 125 keV in <sup>180</sup>W. In general, the average gap method gives too large $`\mathrm{\Delta }+\lambda _2`$ values for heavy isotopes and too small $`\mathrm{\Delta }+\lambda _2`$ values for light isotopes, indicating a problem with the $`A`$-dependence of the average gap method. Obviously, averaging the $`G`$ adjusting factors does not change the $`A`$-dependence of the pairing gaps.
In order to check the influence of the possible discrepancy in the $`A`$-dependence of pairing gaps, we have also done the calculation with a pairing strength ($`G_0`$) that reproduces the $`D_{\mathrm{oe}}^{\mathrm{expt}}`$ value with $`\mathrm{\Delta }+\lambda _2`$ for each given nucleus. The $`G_0`$ values are normally smaller than the average-gap-method $`G^0`$ values for the heavy nuclei, e.g. <sup>178</sup>Hf and <sup>180</sup>W. Results show that the calculated moments of inertia with such $`G_0`$ values are systematically larger than the corresponding experimental values. However, when instead the $`G_0`$ values are adjusted by reproducing the $`D_{\mathrm{oe}}^{\mathrm{expt}}`$ values with $`\mathrm{\Delta }+\lambda _2+\delta `$ (i.e. including shape and blocking effects, $`D_{\mathrm{oe}}^{\mathrm{th}}=D_{\mathrm{oe}}^{\mathrm{expt}}`$) a significantly improved description can be obtained, as shown in Fig.3 for the <sup>178</sup>Hf and <sup>180</sup>W examples. Here, we have obtained different $`F`$ values (see Fig.3 caption) compared to the above average values, mainly because the different pairing strengths ($`G^0`$ or $`G_0`$) have been chosen as the reference of the $`G`$ adjustment. The non-average $`F`$ values are mostly in the range 1.05–1.10 for neutrons and 1.03–1.08 for protons. Clearly, the proper pairing strength for odd-even mass differences is also consistent with experimental moments of inertia. In addition, the increase of the pairing strength found in our work agrees with that needed to reproduce the excitation energies of high-seniority states.
Deformations, which can change with rotational frequency, are determined self-consistently by calculating the Total Routhian Surfaces (see e.g. ). With the determined deformations, the calculated intrinsic quadrupole moments ($`Q_0`$) agree with corresponding experimental values . The deformation changes due to the adjustments of the $`G`$ values are very small ($`|\mathrm{\Delta }\beta _2|<0.003`$ and $`|\mathrm{\Delta }\beta _4|\le 0.002`$) for nuclei that are not soft. In <sup>172,176</sup>W, some shifts in $`J(\omega )`$ can be seen, which are due to the shifts of the $`\beta _2`$ values with increasing rotational frequency. As mentioned, the calculated PES’s for <sup>172-176</sup>W are soft in $`\beta _2`$.
In summary, we have investigated the shape and blocking effects on odd-even mass differences for even-even rare-earth nuclei. These effects are shown to be in the range of 10–30% of the corresponding odd-even mass differences. The blocking effect, in principle, should belong to the category of pairing effects. Clearly, these effects should not be neglected in determining the pairing strengths. Indeed, when blocking and shape effects are taken into account, pairing strengths are increased by about 5–10%, resulting in sizable changes of the pair gaps. The adjusted strengths are consistent with what is needed to reproduce the excitation energies of multi-quasiparticle configurations, and lead to an improved description of nuclear collective rotational motion, through calculating moments of inertia and backbending frequencies. The present work establishes a consistent relation between mass differences, moments of inertia and excitation energies of high seniority states.
This work was supported by the UK Engineering and Physical Sciences Research Council and the Swedish Natural Sciences Research Council.
# High-𝑡 Diffraction
CERN-TH/99-143, May 1999. Talk presented at the 7th International Workshop on Deep Inelastic Scattering, Zeuthen, Germany, April 1999.
## 1 Introduction
High-$`t`$ diffraction is the scattering of two particles $`A`$ and $`B`$ into a final state which is made up wholly of two systems $`X`$ and $`Y`$ which are far apart in rapidity and between which there is a large relative transverse momentum, $`P_t`$. The square of the exchanged four-momentum is $`t\simeq -P_t^2`$. High $`t`$ means that $`|t|\gg \mathrm{\Lambda }_{\mathrm{QCD}}^2`$. By studying this class of events we can hope to gain valuable insight into the Regge limit of strong interactions using the methods of perturbative QCD. It has been shown that large $`t`$ is a very effective way of squeezing the rapidity gap producing mechanism to short distances regardless of the sizes of the external particles .
Perturbative QCD in the Regge limit leads to the production of rapidity gaps via the exchange of a pair of interacting reggeised gluons in an overall colour singlet configuration (a reggeised gluon can be thought of as a colour octet compound state of any number of ordinary gluons). To leading logarithmic accuracy this colour singlet exchange is described by the BFKL equation and predicts a strong rise in the cross-section as the rapidity gap increases. It should be remembered that a characteristic of all leading logarithmic BFKL calculations is a large uncertainty in the overall normalisation of cross-sections. Next-to-leading logarithmic corrections to the BFKL equation have not yet been computed for $`t\ne 0`$.
## 2 Gaps between jets
One way of selecting high-$`t`$ events is to look for events which contain one or more jets in system $`X`$ and one or more in system $`Y`$ . The leading contribution to this process comes from the elastic scattering of partons via colour singlet exchange; the outgoing partons hadronise to produce the jets which are seen. Experiments at FNAL and HERA have reported their first results on the fraction of all dijet events which contain a gap in rapidity . For rapidities greater than about 3 units between the jet centres, all see a clear excess of rapidity gap events over the number expected in the absence of a strongly interacting colour singlet exchange. It is worth recalling that a large excess of gap events at high-$`t`$ cannot be explained by traditional soft pomeron exchange since such a contribution will have died away long before due to shrinkage.
Unfortunately, spectator interactions can spoil the gap. This physics is poorly understood and is often modelled by an overall ‘gap survival’ factor. Assuming the gap survival factor depends only upon the centre-of-mass energy of the colliding beams, the gap fraction data are mostly in agreement with the leading order BFKL predictions with $`\alpha _s\simeq 0.2`$ (it doesn’t make sense to run the coupling in the leading order BFKL formalism). But the data still have rather large errors and it is too early to draw definitive conclusions. Notwithstanding this, there appears to be a problem for BFKL: the $`E_T`$ dependence of the D0 gap fraction shows a rise with increasing $`E_T`$ whereas the BFKL prediction is flat or falling, see Fig.1. A few comments are in order. Firstly, the solid line on Fig.1 falls as $`E_{T2}`$ increases as a consequence of the running of the QCD coupling (the gap fraction goes like $`\alpha _s^4/\alpha _s^2`$). Fixing the coupling, as is strictly proper in the leading logarithmic BFKL approach, leads to an essentially flat $`E_{T2}`$ distribution. As an aside, it might be of interest to note that a fixed coupling was needed in order to explain the high-$`t`$ data on high energy $`p\overline{p}`$ elastic scattering . Secondly, it is to be noted that all jets appearing in Fig.1 are corrected for an underlying event using minimum bias data. Typically, this means subtracting around 1 GeV from the jets. This is true even for the jets which make up the numerator of the gap fraction which, by definition, are produced in an environment free of any underlying event. This correction is not made in generating the theoretical prediction (solid line) which, roughly speaking, means that the theory curve ought to be multiplied by $`[E_{T2}/(E_{T2}+1\mathrm{G}\mathrm{e}\mathrm{V})]^4`$ (since both the numerator and denominator fall as $`1/t^2`$). For the lowest bin at $`E_{T2}=18`$ GeV this correction factor is about $`0.8`$. The upshot is that theory is probably consistent with a flat $`E_{T2}`$ spectrum at large $`E_{T2}`$ falling to about $`80\%`$ of this value in the lowest $`E_{T2}`$ bin, and this is not inconsistent with the D0 data .
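The size of this underlying-event correction is easy to quantify:

```python
# Rough correction factor [E_T2 / (E_T2 + 1 GeV)]^4 discussed above.
for E_T2 in (18.0, 25.0, 40.0):  # GeV; 18 GeV is the lowest D0 bin
    print(E_T2, (E_T2 / (E_T2 + 1.0)) ** 4)
# At E_T2 = 18 GeV the factor comes out at about 0.8, as quoted in the text.
```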
The gaps between jets process has a number of drawbacks: the rapidity reach is seriously compromised by the need to see the two jets; there is the usual theoretical uncertainty in going from partons to jets; there is the cloudy issue of gap survival to deal with. One way of improving the situation is to look at the more inclusive double dissociation sample .
## 3 Gluon reggeisation
As mentioned above, gluons reggeise in QCD. The effect of their reggeisation can be studied directly in present day experiments by measuring a process first suggested in . The process is $`h_1+h_2\to j_1+j_2+X`$ where $`h_1`$ and $`h_2`$ are the incoming hadrons, $`j_1`$ and $`j_2`$ are the outgoing jets which are far apart in rapidity and $`X`$ is the rest of the final state subject to the constraint that, in the region between the two jets, there be no jets with transverse momentum larger than some value, $`\mu `$. The $`\mu `$ dependence of the ratio of these events to all dijet events is interesting since it provides a direct test of gluon reggeisation. The ratio goes (for $`\mu \ll P_t`$) like
$$\mathrm{e}^{2\mathrm{\Delta }\eta (\alpha _g(P_t)-1)}$$
where $`\alpha _g(P_t)=1-(N_c/\pi )\alpha _s\mathrm{ln}(P_t/\mu ).`$
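A rough numerical sketch of this $`\mu `$ dependence, where the values of $`\alpha _s`$, $`P_t`$ and $`\mathrm{\Delta }\eta `$ below are purely illustrative:

```python
import math

def gap_ratio(delta_eta, P_t, mu, alpha_s=0.2, N_c=3):
    """exp(2*delta_eta*(alpha_g(P_t) - 1)) with
    alpha_g(P_t) = 1 - (N_c/pi)*alpha_s*ln(P_t/mu); valid for mu << P_t."""
    alpha_g = 1.0 - (N_c / math.pi) * alpha_s * math.log(P_t / mu)
    return math.exp(2.0 * delta_eta * (alpha_g - 1.0))

for mu in (1.0, 2.0, 5.0):  # GeV: the ratio grows as the veto scale mu is raised
    print(mu, gap_ratio(delta_eta=4.0, P_t=20.0, mu=mu))
```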
Another interesting consequence of gluon reggeisation is that, at large $`t`$, it indicates that fixed order perturbation theory can lead one to infer the wrong physics. To see this, one needs to realise that the first correction to two-gluon exchange contains a piece like
$$\mathrm{\Delta }=\alpha _s\mathrm{ln}\left(\frac{𝐤^2(𝐤-𝐪)^2}{t^2}\right)\mathrm{ln}s$$
where $`𝐤`$ and $`𝐤-𝐪`$ are the transverse momenta of the two internal gluons (to get to a cross-section requires an integration over these momenta with a weighting function dependent upon the system to which they couple). Such a contribution invites one to conclude that the most important configuration arises when one gluon carries all the momentum transfer ($`𝐪^2=-t`$) and the other carries none. However this is not the case, to order $`\alpha _s^2`$ the contribution goes like $`\mathrm{\Delta }^2/2`$ and more generally the series exponentiates to $`e^\mathrm{\Delta }`$ which strongly suppresses the asymmetric configurations. Consequently, the dominant configurations are the symmetric ones where the exchanged gluons share the momentum transfer equally .
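A rough numerical illustration of this suppression, taking $`𝐤`$ collinear with $`𝐪`$ with sharing fraction $`x=|𝐤|/|𝐪|`$; all parameter values below are illustrative:

```python
import math

def exp_delta(x, t, alpha_s=0.2, log_s=6.0):
    """exp(Delta) with Delta = alpha_s * ln(k^2 (k-q)^2 / t^2) * ln s,
    for k collinear with q and x = |k|/|q|, using q^2 = -t."""
    q2 = -t
    k2 = x**2 * q2
    kmq2 = (1.0 - x) ** 2 * q2
    return math.exp(alpha_s * math.log(k2 * kmq2 / t**2) * log_s)

for x in (0.05, 0.25, 0.5):
    # asymmetric sharings (x far from 1/2) are strongly suppressed
    print(x, exp_delta(x, t=-10.0))
```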
## 4 Exclusive vector meson and photon production
At HERA, focussing on the more exclusive subsample of high-$`t`$ diffractive events in which the photon dissociates to either a photon or a vector meson and nothing else allows one to neatly sidestep the issue of gap survival. The detection of vector mesons and photons is also clean enough to allow the $`t`$ distribution (here $`t`$ is essentially determined by the transverse momentum of the vector particle) to be measured down to small values. H1 has measured the $`t`$ distribution for $`J/\mathrm{\Psi }`$ production all the way from $`t\simeq 0`$ out to $`|t|\simeq 10`$ GeV<sup>2</sup>. The data agree with the leading order BFKL calculation with $`\alpha _s=0.2`$ . In addition, ZEUS has investigated $`\rho `$ mesons produced at high $`t`$ . The ability of the experiments to determine the kinematics without needing to observe the proton dissociation system is the crucial factor which allows the HERA experiments to study events with rapidity gaps as large as 6 units. Gaps of this size are certainly well into the region of theoretical interest. Leading logarithmic level calculations now exist for both vector meson production and photon production . As well as teaching us about colour singlet exchange, high-$`t`$ vector meson production can also help us understand the vector meson production mechanism. In this respect, ratios of vector meson production will be particularly useful.
Over the coming years HERA promises to produce high quality data on light and heavy vector meson production, and on photon production (where there is no uncertainty due to wavefunction effects). These data, in combination with the data on the more inclusive processes discussed above, will set the benchmark against which we can test our ever evolving understanding of QCD in the high energy domain.
## Acknowledgment
This work was supported by the EU Fourth Framework Programme ‘Training and Mobility of Researchers’, Network ‘Quantum Chromodynamics and the Deep Structure of Elementary Particles’, contract FMRX-CT98-0194 (DG 12-MIHT).
# A phenomenological spin–orbit three-body force
## I Introduction
In the standard description of light nuclei as composed of structureless nucleons, three–nucleon forces (3NF’s) play an important role. The simple assumption of nucleons interacting through a pairwise $`NN`$ potential fails to reproduce the experimental binding energies. This suggests that the model has to be extended to include many–body forces in the potential energy. First derivations of 3NF’s are based on two-pion exchange involving a $`\mathrm{\Delta }`$ excitation. The most often used potentials of this kind are the Tucson-Melbourne (TM) , the Brazil (BR) models, and the Urbana (UR) model which also includes a central phenomenological repulsive term with no spin-isospin structure. In practical applications, the chosen 3NF is adjusted to reproduce the $`A=3,4`$ binding energies. When the calculations are extended to describe bound states in $`p`$–shell nuclei, a persistent underbinding is observed in the mass region with $`A=5`$–$`8`$ . In particular, several excited states are not well reproduced, indicating that the 3NF could contain a more complicated structure.
A different problem has been observed in N-d scattering at low energies. The calculated vector analyzing powers $`A_y`$ and $`iT_{11}`$ show an unusually large discrepancy with the experimental data . Attempts to improve the description of these observables with the inclusion of a two-pion three-nucleon force ($`2\pi `$–3NF) were unsuccessful . This discrepancy has been called the $`A_y`$ puzzle since it is, by far, the largest disagreement observed in the theoretical description of the three-nucleon system at low energies. Recently, it has been shown that the puzzle is not limited to the three-nucleon system but a similar problem appears in the calculation of $`A_y`$ in $`p`$-<sup>3</sup>He scattering . The $`A_y`$ puzzle could also be a signal for different forms of the three-body potential.
In ref. p-d and n-d scattering have been studied below the deuteron breakup. Using the experimental data for cross section and vector and tensor analyzing powers at $`E_{lab}=2.5`$ and $`3.0`$ MeV from Shimizu et al. , it was possible to perform a phase-shift analysis (PSA) and to compare the results to the theoretical phases. The conclusion was that small differences in the $`P`$-wave parameters are responsible for the large disagreement in $`A_y`$ and $`iT_{11}`$. It is a general feature that all realistic $`NN`$ potentials underpredict the splitting in the $`{}_{}{}^{4}P_{J}^{}`$ phases and the magnitude of the $`ϵ_{3/2}`$ mixing parameter. When one of the $`2\pi `$–3NF’s is included in the Hamiltonian there is not an appreciable reduction of the discrepancy, giving in some cases a poorer description. In fact, the inclusion of the $`2\pi `$–3NF tends to reduce the splitting in $`{}_{}{}^{4}P_{J}^{}`$ and to slightly increase $`ϵ_{3/2}`$. These two opposite effects almost cancel each other in the construction of $`A_y`$ and $`iT_{11}`$. The operator form of these particular models of 3NF’s does not include $`𝐋𝐒`$ terms, which are to a large extent responsible for the splitting in the $`P`$-wave parameters.
In the present paper a phenomenological spin–orbit three–nucleon force (SO–3NF) is introduced in order to study its effect on the N-d vector analyzing powers $`A_y`$ and $`iT_{11}`$ at low energies. The $`𝐋𝐒`$ term in the $`NN`$ potential has been modified by including a two-parameter three-body function depending on the hyperradius $`\rho `$. The two parameters are related to the strength and range of the force and they have been fixed with the intention of improving the description of $`A_y`$ and $`iT_{11}`$ at $`E_{lab}=3.0`$ MeV, just below the deuteron breakup. Three different sets of parameters have been considered and, accordingly, used to calculate scattering observables from $`E_{lab}=648`$ keV up to $`E_{lab}=10`$ MeV. The SO–3NF has been introduced in channels where the pair of nucleons $`(i,j)`$ are coupled to spin $`S_{ij}=1`$ and isospin $`T_{ij}=1`$. At the level of the two-nucleon ($`2N`$) system this channel is related to scattering in odd waves. In N-d scattering the $`{}_{}{}^{4}P_{J}^{}`$ parameters and hence $`A_y`$ and $`iT_{11}`$ are very sensitive to the force in this particular channel. On the other hand its effect on the binding energy of the three-nucleon system is very small. This is the opposite behavior from that produced by the $`2\pi `$–3NF which gives the main contribution in the $`J=1/2^+`$ state and has small influence on the vector observables. Therefore these two different classes of 3NF’s are to some extent complementary and can be studied separately.
The calculations of the N-d scattering observables presented in the following at different energies have been performed using the Pair Correlated Hyperspherical Harmonic (PHH) basis. In this method the wave function of the system is expanded in terms of correlated basis elements and the description of the system proceeds via a variational principle. Bound states are obtained using the Rayleigh-Ritz variational principle whereas scattering states are obtained using the generalized Kohn variational principle. This technique has been extensively discussed in refs. for energies below the deuteron breakup threshold and, very recently, also applied to energies above the breakup threshold .
The paper is organized as follows. In Sec. II the two parameter spin–orbit three-body force is introduced. In Sec. III the polarization observables as well as the binding energy of <sup>3</sup>He are studied for specific values of these parameters. The conclusions and perspectives are given in the last section.
## II A two-parameter spin–orbit three-body force
Disregarding for the moment the presence of three-nucleon forces, the potential energy operator of the three-nucleon system is
$$V_{3N}=\sum _{i<j}V_{2N}(i,j),$$
(1)
where $`V_{2N}`$ is the $`NN`$ interaction that, in general, is constructed by fitting the $`2N`$ scattering data and the deuteron properties. Recently, several potential models have been determined including explicitly charge dependence which describe the $`2N`$ data with a $`\chi ^2`$ per datum $`1`$. Here we will refer to the Argonne AV18 interaction, which is one of these new generation potentials . The nuclear part of the AV18 potential consists of a sum over 18 different terms. The first 14 terms are charge independent whereas the four additional operators introduce charge symmetry breaking. Each of the first 14 terms includes a projector $`P_{ST}(ij)`$ onto the spin-isospin states $`S,T`$ of particles $`(i,j)`$ multiplied by one of the following operators, $`𝒪^p=1,S_{12},𝐋𝐒,L^2,(𝐋𝐒)^2`$. The strength of each term is given by a scalar function $`v_{ST}^p(r_{ij})`$ depending on the relative distance between particles $`(i,j)`$. For example the $`𝐋𝐒`$ interaction between particles $`(i,j)`$ is defined in the two channels with isospin $`T_{ij}=0,1`$ and spin $`S_{ij}=1`$ and is given by the functions $`v_{10}^{ls}(r_{ij})`$ and $`v_{11}^{ls}(r_{ij})`$, respectively.
In the three-nucleon system odd-parity scattering states are directly related to the potential in channels where particles $`(i,j)`$ are coupled to $`S_{ij}=1`$, $`T_{ij}=1`$. In particular here we are interested in the $`𝐋𝐒`$ term and its relation to the vector observables $`A_y`$ and $`iT_{11}`$. Limiting the discussion to the $`𝐋𝐒`$ interaction in this particular channel, the corresponding term in the three-nucleon potential energy is
$$V_{3N}^{ls}=\sum _{i<j}v_{11}^{ls}(r_{ij})𝐋_{ij}𝐒_{ij}P_{11}(ij).$$
(2)
At this point we can conjecture that the presence of the third nucleon (particle $`k`$) could modify the interaction, introducing in the function $`v_{11}^{ls}`$ a dependence on the distances between the third nucleon $`k`$ and particles $`(i,j)`$. Accordingly, the above interaction transforms to
$$V_{3N}^{ls}=\sum _{i<j}\frac{1}{2}[w_{11}^{ls}(r_{ijk})𝐋_{ij}𝐒_{ij}+𝐋_{ij}𝐒_{ij}w_{11}^{ls}(r_{ijk})]P_{11}(ij),$$
(3)
where $`r_{ijk}`$ is a scalar function of the three interparticle distances $`r_{ij},r_{jk},r_{ki}`$. The symmetric form has been introduced since, in general, the $`𝐋𝐒`$ operator does not commute with an operator depending on $`r_{ijk}`$. Different forms are possible for the three-body interaction $`w_{11}^{ls}(r_{ijk})`$ provided that $`w_{11}^{ls}(r_{ijk})v_{11}^{ls}(r_{ij})`$ when $`r_{ik},r_{jk}\mathrm{}`$. A simple two-parameter form that we are going to analyze here is
$$w_{11}^{ls}(r_{ijk})=v_{11}^{ls}(r_{ij})+W_0\mathrm{e}^{-\alpha \rho },$$
(4)
where the hyperradius $`\rho `$ is
$$\rho ^2=\frac{2}{3}(r_{12}^2+r_{23}^2+r_{31}^2)$$
(5)
and $`W_0`$ and $`\alpha `$ are parameters characterizing the strength and range of the three-body term. When the dependence in the scalar function $`r_{ijk}`$ is limited to $`r_{ij}`$ and $`\rho `$, the operators $`w_{11}^{ls}(r_{ij},\rho )`$ and $`𝐋_{ij}𝐒_{ij}`$ commute. Accordingly, the spin–orbit force becomes
$$V_{3N}^{ls}=\sum _{i<j}v_{11}^{ls}(r_{ij})𝐋_{ij}𝐒_{ij}P_{11}(ij)+W_0\mathrm{e}^{-\alpha \rho }\sum _{i<j}𝐋_{ij}𝐒_{ij}P_{11}(ij).$$
(6)
Replacing this term in eq.(1) and including now also the $`2\pi `$-3NF, the final form for the three-nucleon potential energy to be used in the present work is
$$V_{3N}=\sum _{i<j}V_{2N}(i,j)+\sum _{i<j<k}W_{3N}^{ls}(i,j,k)+\sum _{i<j<k}W_{3N}^{2\pi }(i,j,k).$$
(7)
The $`W_{3N}^{ls}`$ term is the phenomenological spin–orbit force defined in the second term of eq.(6); for the $`W_{3N}^{2\pi }`$ term the discussion will be limited to the Urbana force.
The choice of the two-parameter exponential form in the definition of $`w_{11}^{ls}`$ (see eq.(4)) is arbitrary and was made in order to provide a phenomenological representation of a 3NF containing a spin–orbit interaction. The hyperradial dependence is the simplest scalar function depending on the three interparticle distances which has the property of commuting with the spin–orbit operator. These choices, driven by simplicity, have been made in order to focus on the spin–orbit operator and its relation to $`A_y`$ and $`iT_{11}`$ in p-d scattering. The numerical values for the constants $`W_0,\alpha `$ are discussed in the next section.
In principle, the argument used to introduce the scalar function $`w_{11}^{ls}(r_{ijk})`$ as a modification of the function $`v_{11}^{ls}(r_{ij})`$ due to the presence of particle $`k`$, could be extended to the other functions $`v_{ST}^p(r_{ij})`$ of the $`NN`$ potential. This leads to a three-body force with a hybrid form in which nucleons $`(i,j)`$ interact with the same operator structure as the $`NN`$ potential but with scalar functions depending on the interparticle distances of the three nucleons $`(i,j,k)`$. The original function $`v_{ST}^p(r_{ij})`$ must be recovered when nucleon $`k`$ is at $`\mathrm{\infty }`$. In the present work we are limiting the argument to one specific term and trying to relate its parametrization to the two vector observables $`A_y`$ and $`iT_{11}`$.
## III N-d scattering calculations with the SO–3NF
The inclusion of the SO–3NF has been done with the hope of improving the description of $`A_y`$ and $`iT_{11}`$ without destroying the agreement already observed for the cross section and tensor observables. At energies below the deuteron breakup, measurements for tensor and vector observables exist for p-d scattering at $`E_{lab}=648`$ keV and at $`E_{lab}=2.5`$ MeV and $`3.0`$ MeV . The $`648`$ keV data set is at the lowest energy at which these measurements have been made. The other two sets of measurements lie just below the deuteron breakup. Theoretical calculations using the AV18 potential underpredict $`A_y`$ and $`iT_{11}`$ by about $`30\%`$. Comparisons to phase-shift and mixing parameters extracted from PSA performed at these three energies show the aforementioned insufficient splitting in the $`{}_{}{}^{4}P_{J}^{}`$ phase shifts as well as an underpredicted $`ϵ_{3/2}`$. Calculations using the AV18+UR potential essentially do not change these findings. The almost constant underprediction in the vector polarization observables (in percentage) and the fact that the inclusion of the $`2\pi `$–3NF’s does not increase the splitting in $`{}_{}{}^{4}P_{J}^{}`$ is a motivation for considering new additional forms for the three–body potential. The selection of the $`𝐋𝐒`$ operator is a natural choice since when applied to the $`{}_{}{}^{4}P_{J}^{}`$ state it acts with opposite sign in the states $`J=1/2^{-}`$ and $`J=5/2^{-}`$, tending to increase the splitting.
Three different choices of the exponent $`\alpha `$ in the hyperradial spin–orbit interaction defined in eq.(4) have been selected with the intention of constructing forces with different ranges. The strength $`W_0`$ has been adjusted in each case in an attempt to improve the description of the vector observables. The analysis has been performed at $`E_{lab}=3.0`$ MeV. The selected ranges are $`\alpha =0.7,1.2,1.5`$ fm<sup>-1</sup>, so as to simulate a long, medium and short range force. The corresponding values for the depth are $`W_0=-1,-10,-20`$ MeV. The calculations have been performed using the nuclear part of AV18 plus the Coulomb interaction. The $`2\pi `$-3NF has been disregarded at the present stage since its contribution to the description of the vector observables is small.
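For orientation, the size of the hyperradial factor in eq.(6) can be sketched for the three parameter sets; the equilateral configuration chosen below is illustrative only.

```python
import math

def hyperradius(r12, r23, r31):
    """Hyperradius of eq.(5), in fm."""
    return math.sqrt(2.0 * (r12**2 + r23**2 + r31**2) / 3.0)

def so3nf_factor(rho, W0, alpha):
    """Hyperradial strength W0*exp(-alpha*rho) of the SO-3NF, eq.(6), in MeV."""
    return W0 * math.exp(-alpha * rho)

rho = hyperradius(2.0, 2.0, 2.0)  # equilateral triangle, 2 fm sides -> ~2.83 fm
for alpha, W0 in ((0.7, -1.0), (1.2, -10.0), (1.5, -20.0)):  # fm^-1, MeV
    print(f"alpha={alpha}, W0={W0}: {so3nf_factor(rho, W0, alpha):.3f} MeV")
```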
The results for the proton and neutron analyzing powers $`A_y`$ and the deuteron analyzing power $`iT_{11}`$ are given in Fig.1 together with the experimental data of ref. . The four curves correspond to the AV18 potential and the three different choices for the parameters $`(\alpha ,W_0)`$. The dotted line is the AV18 prediction and shows the expected discrepancy. The solid line corresponds to the AV18 plus the long range force (AV18+LS1), the long-dashed line to the AV18 plus the medium range force (AV18+LS2) and the dotted-dashed line to the AV18 plus the short range force (AV18+LS3). The inclusion of the spin–orbit force improves the description of the vector observables, although there is a slightly different sensitivity in $`A_y`$ and $`iT_{11}`$. The AV18+LS1 curve is slightly above the data, especially for $`iT_{11}`$. The AV18+LS2 curve is slightly below (above) the data in $`A_y`$ ($`iT_{11}`$). The AV18+LS3 curve is slightly below the data, especially for $`A_y`$. In the bottom panel of Fig.1 the n-d analyzing power has been calculated using the same potential models as before. Again, there is an improvement in the description of $`A_y`$ equivalent to that obtained in the p-d case. In Fig.2 the tensor analyzing powers $`T_{20},T_{21},T_{22}`$ are shown at the same energy and compared to the data of ref. . The inclusion of the SO–3NF has no appreciable effect and the four curves are practically on top of each other. These observables are not very sensitive to the splitting in $`{}_{}{}^{4}P_{J}^{}`$-waves. They are sensitive to scattering in $`D`$–waves and higher partial waves, which are only weakly distorted by the SO–3NF.
Before extending the calculations to other energies, the analysis of the binding energy of <sup>3</sup>He deserves some attention. We expect that the inclusion of the $`W_{3N}^{ls}`$ term will produce only a small distortion in the bound state due to the low occupation probability of channels with $`S_{ij}=1`$ and $`T_{ij}=1`$. This is corroborated by the calculations shown in table I. The binding energy, the kinetic energy and the $`S^{\prime }`$-, $`D`$-, and $`P`$-wave probabilities are given for the different potential models. The AV18 and the AV18+UR interactions have been considered. The $`2\pi `$–3NF of Urbana has now been taken into account since it produces large effects in the bound state. Calculations have been done for these two potentials with and without the inclusion of the three different choices for the SO–3NF. Its inclusion produces a small repulsion which in any case is not greater than $`50`$ keV, with all the other mean values modified very little. The force with the longest range produces very tiny effects due to its very small strength. The other two forces produce slightly greater, and similar, modifications. We can conclude that the structure of the three-body bound state remains essentially unaffected by the spin–orbit 3NF with the ranges and strengths considered.
The analysis of the bound state is important since changes in $`A_y`$ and $`iT_{11}`$ of the size shown in Fig.1 could, in principle, be obtained with modified forms of the $`2\pi `$–3NF’s. But, in general, these modifications are no longer compatible with a correct description of the bound state. This is not the case for the spin–orbit 3NF we are considering. Therefore, we can extend the calculations to other energies based on the fact that the new model has a selective and appreciable effect only on the vector observables. Calculations have been done at $`E_{lab}=648`$ keV and $`2.5`$ MeV. The results for $`A_y`$ and $`iT_{11}`$ are shown in Fig.3 and compared to the experimental data of refs. . Again a remarkable improvement in the description of both observables is obtained. At the lowest energy the AV18+LS1 model seems to be more effective because of its long range, whereas for the other two ranges the centrifugal barrier plays some role. The results at $`E_{lab}=2.5`$ MeV have the same characteristics as those at $`3.0`$ MeV.
In table II we compare results for the $`P`$-wave parameters at $`E_{lab}=2.5`$ and $`3.0`$ MeV. Again, different cases using the AV18 potential with and without the inclusion of the spin–orbit 3NF have been considered. In the last column the parameters corresponding to the PSA from ref. are given for the sake of comparison. In the three cases the SO–3NF increases the splitting of the $`{}_{}{}^{4}P_{J}^{}`$ phases. The phases $`{}_{}{}^{4}P_{1/2}^{}`$ and $`{}_{}{}^{4}P_{5/2}^{}`$ and the mixing parameter $`ϵ_{3/2}`$, which play a major role in the description of the vector observables, are now in much better agreement with the PSA values. It was not obvious from the beginning that with two parameters $`(\alpha ,W_0)`$ it would be possible to increase the difference $`\mathrm{\Delta }P={}^{4}P_{5/2}-{}^{4}P_{1/2}`$ to the extent suggested by the PSA, together with a change in $`ϵ_{3/2}`$ in the expected direction and magnitude. At the three energies the change in $`ϵ_{3/2}`$ was observed to occur in the correct direction. Not all parameters are closer to the PSA values after including the SO–3NF. For example $`{}_{}{}^{4}P_{3/2}^{}`$ has changed slightly in the direction opposite to that suggested by the PSA. But the net result was always found to be an improvement in the description of the observables.
In order to complete the study of the SO-3NF, the extension to p-d calculations above the breakup channel is now considered. This extension is not straightforward since, when the breakup channel is open, a correct description of the three outgoing nucleons has to be given. In particular, in the p-d case, the Coulomb interaction introduces difficulties that have been, and are at present, the subject of intense investigation. Recently the PHH technique has been extended to describe p-d elastic scattering above the deuteron breakup . The method is based on the use of the Kohn variational principle in its complex form and it provides an accurate description of the polarization observables. In ref. differential cross sections and vector and tensor analyzing powers at $`E_{lab}=5`$ and $`10`$ MeV have been calculated using the AV18 interaction. In the present work, calculations have been done at the same two energies using the AV18 interaction with and without the SO–3NF. The results for $`A_y`$ and $`iT_{11}`$ are given in Fig.4 together with the experimental data from ref. ($`E_{lab}=5`$ MeV) and ref. ($`E_{lab}=10`$ MeV). Also at these energies we observe the same trend as before: both observables are now better described. The sensitivity to the different ranges is slightly different, with the medium range model starting to be more effective. In fact, at $`5`$ MeV the long and medium range curves overlap and at $`10`$ MeV the splitting in the curves is now reversed in that the upper curve corresponds to the medium range force.
Finally, let us extend the analysis of the effects of the SO–3NF to other observables at different energies. In Fig.5 the effects on the elastic cross section are shown in the energy interval from $`E_{lab}=648`$ keV up to $`E_{lab}=10`$ MeV. At each energy, the four curves corresponding to the different potential models are on top of each other. There is not enough sensitivity in this observable to changes in the phase-shift and mixing parameters of the magnitude introduced by the SO–3NF used. The tensor analyzing powers at $`E_{lab}=5`$ MeV and $`10`$ MeV are shown in Fig.6. The effects on these observables are very small. The three curves corresponding to the AV18 plus the SO–3NF are practically superposed, with slight differences from the AV18 curve (dotted line). In particular, the second minimum of $`T_{20}`$ is slightly improved as well as the second maximum of $`T_{21}`$, whereas $`T_{22}`$ is essentially unchanged.
In order to give a quantitative measure of the improvement introduced by the SO-3NF in the description of the data, a $`\chi ^2`$ (per datum) analysis is displayed in table III. The analysis includes the cross section, the two vector and the three tensor analyzing powers at $`E_{lab}=648`$ keV, $`3`$ MeV, $`5`$ MeV and $`10`$ MeV. The data set has been taken from ref. . The $`\chi ^2`$ per datum has been calculated using the theoretical predictions of the AV18 potential model with and without the inclusion of the SO-3NF. The cases corresponding to the three different choices of the strength and range parameters have been considered. From the table the selective effect of the spin-orbit force on the vector observables is clear. There is a dramatic improvement in terms of $`\chi ^2`$ in these observables, whose value is reduced by more than one order of magnitude in several cases. At each energy, the AV18+LS1 and AV18+LS2 potential models give the best description. At the first three energies both potential models reproduce the data reasonably well and with similar quality. At $`10`$ MeV the obtained $`\chi ^2`$ per datum for $`A_y`$ is very high in all cases. This is a consequence of the extremely small error bars in the data set at this particular energy. However, the $`\chi ^2`$ has been improved by a factor of $`\sim `$ 25 going from AV18 to AV18+LS2, and a further reduction could be obtained with a fine tuning of the strength and range parameters $`W_0,\alpha `$.
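As a guide to how the numbers in table III are produced, a minimal sketch of the $`\chi ^2`$ per datum is given below; the data points, errors and theory values are invented for illustration and are not the measured $`A_y`$ set.

```python
import numpy as np

def chi2_per_datum(theory, data, errors):
    """chi^2 per datum between a theoretical curve and measured points,
    both evaluated at the same scattering angles."""
    theory, data, errors = map(np.asarray, (theory, data, errors))
    return float(np.sum(((theory - data) / errors) ** 2) / data.size)

# invented numbers for illustration only
data    = np.array([0.020, 0.035, 0.042, 0.030])
errors  = np.array([0.002, 0.002, 0.003, 0.002])
av18    = np.array([0.014, 0.025, 0.031, 0.022])   # ~30% underprediction
av18ls2 = np.array([0.019, 0.034, 0.041, 0.029])   # with the SO-3NF
print(chi2_per_datum(av18, data, errors), chi2_per_datum(av18ls2, data, errors))
```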
Looking at the differential cross section, in all cases the $`\chi ^2`$ per datum is a large number and it changes very little when the SO-3NF is included. There is a sensitivity to the <sup>3</sup>He binding energy, which is not well reproduced unless a $`2\pi `$-3NF is considered. For example, the value $`\chi ^2=30.3`$ at $`3`$ MeV obtained with the AV18 potential reduces to $`\chi ^2=4.0`$ with the AV18+UR potential. In the case of the tensor observables the changes in terms of $`\chi ^2`$ when the SO-3NF is considered are moderate. There is a slight improvement in the description of $`T_{20}`$ and $`T_{21}`$, whereas the reverse situation is seen in $`T_{22}`$.
## IV Conclusions
Elastic N-d scattering has been studied in the energy range from $`E_{lab}=648`$ keV to $`E_{lab}=10`$ MeV using a potential model which includes a spin–orbit three-nucleon interaction. This three-body potential was introduced as a “distortion” of the function $`v_{11}^{ls}(r_{ij})`$ of the $`NN`$ potential. This function gives the magnitude of the $`𝐋𝐒`$ interaction in the channels where particles $`(i,j)`$ are coupled to spin $`S_{ij}=1`$ and isospin $`T_{ij}=1`$ and was converted to a function $`w_{11}^{ls}(r_{ijk})`$ depending on the three-interparticle distances $`r_{ij},r_{jk},r_{ki}`$. The condition on $`w_{11}^{ls}`$ was that the interaction $`v_{11}^{ls}`$ is recovered when the third particle is far from the other two. A phenomenological hyperradial exponential form depending on two parameters was used which fixes the range and strength of the three-body part of the interaction.
The choice of the $`𝐋𝐒`$ operator acting on the $`S_{ij}=1`$, $`T_{ij}=1`$ spin-isospin channels was based on the large effects it has in the description of the two vector observables $`A_y`$ and $`iT_{11}`$ in N-d scattering. The origin of the discrepancy in these observables lies in the too small splitting predicted by all realistic potential models for the $`{}_{}{}^{4}P_{J}^{}`$ phase shifts. There is a close relation, due to Pauli blocking, between scattering in $`{}_{}{}^{4}P_{J}^{}`$–waves and the interaction in this channel. Moreover, the $`𝐋𝐒`$ operator is attractive (repulsive) in the $`J=1/2^{-}`$ state ($`J=5/2^{-}`$ state), therefore increasing the splitting of these phases.
The study of the SO–3NF was done phenomenologically by fixing three different values for its range parameter $`\alpha `$ and selecting the strength parameter $`W_0`$ to provide a better description of the vector observables. The energy $`E_{lab}=3.0`$ MeV was chosen for this analysis and subsequently the three sets of parameters were used to describe the same observables at different energies. The difference between the curves obtained for the description of $`A_y`$ and $`iT_{11}`$ after including the spin–orbit force could be further reduced with a fine tuning of $`W_0`$. An important check showed that the structure of the bound state and other observables, for example the tensor analyzing powers, are not appreciably disturbed by this interaction. Moreover, the n-d and p-d $`A_y`$ are described equally well when the spin–orbit 3NF is included. This means that no additional charge–symmetry breaking effects are needed.
The use of this phenomenological SO–3NF improves the description of the vector observables in all the studied cases, from $`E_{lab}=648`$ keV to $`E_{lab}=10`$ MeV. The sensitivity of the observables to the range parameters is slightly different at the energies studied and there is no definitive preference for one of them. Perhaps the set with the medium range parameter ($`\alpha =1.2`$ fm<sup>-1</sup>, $`W_0=10`$ MeV) gives on average the best description. A better assessment of the simple hyperradial dependence introduced here, chosen for its nice property of commuting with the $`𝐋𝐒`$ operator, can be obtained only after extending the calculations to higher energies. The main conclusion of the present work is the identification of the $`𝐋𝐒`$ force in the $`S_{ij}=1`$, $`T_{ij}=1`$ spin-isospin channel as one which can resolve the $`A_y`$ puzzle. In addition, a simple model has been proposed to repair the discrepancy.
Further investigations, which are in progress, consist of the inclusion of the $`2\pi `$–3NF in the description of the scattering observables in addition to the bound state, the extension of the calculations to higher energies and the study of the force in the four–body reaction p-<sup>3</sup>He. Whereas in the first case we can expect at most a small change of the two parameters $`(\alpha ,W_0)`$, the two other studies will give more insight into the force. In particular, by studying the p-<sup>3</sup>He reaction the extension to heavier systems can be tested.
Finally, the possibility of introducing a dependence on the three interparticle distances in other NN functions $`v_{ST}^p(r_{ij})`$, as discussed at the end of Sec. II, deserves some attention. Disregarding other types of 3NF’s, the modified potential energy between three nucleons would be
$$V_{3N}=\sum _{i<j}\sum _{S,T}\sum _p\frac{1}{2}[w_{ST}^p(r_{ijk})𝒪_{ij}^p+𝒪_{ij}^pw_{ST}^p(r_{ijk})]P_{ST}(ij).$$
(8)
The original functions $`v_{ST}^p`$ are recovered when nucleon $`k`$ is far from the pair $`(i,j)`$. In addition, the symmetric form can be avoided if the three-body radial dependence is limited to the hyperradius, i.e. $`w_{ST}^p(r_{ijk})=w_{ST}^p(r_{ij},\rho )`$. The next step is the parametrization of the functions $`w_{ST}^p`$ with a number of parameters to be determined by a fit procedure. The maximum number of free parameters and the selection of the channels where the distorted functions are introduced are related to the observables included in the fit. For example, it would be impossible to reproduce simultaneously the binding energy of <sup>3</sup>He and the vector analyzing powers in p-d scattering by modifying the potential only in channels with $`S_{ij}=1,T_{ij}=1`$ and neglecting the $`2\pi `$-3NF. Conversely, this would be possible by extending the modification to other terms in channels with $`S_{ij}=1,T_{ij}=0`$ or $`S_{ij}=0,T_{ij}=1`$.
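The recovery condition can be made concrete with a toy radial function; both the $`v`$ profile and the additive exponential distortion below are invented for illustration and do not correspond to AV18.

```python
import numpy as np

# Toy illustration of the recovery condition below eq.(8): a hypothetical
# distorted function w(r_ij, rho) reducing to the NN function v(r_ij) when
# the hyperradius rho -> infinity (nucleon k far from the pair).
def v(r):                                    # schematic NN radial shape
    return -60.0 * np.exp(-1.2 * r)

def w(r, rho, W0=10.0, alpha=1.2):
    return v(r) + W0 * np.exp(-alpha * rho)  # three-body distortion dies off

for rho in (2.0, 5.0, 10.0, 30.0):
    print(rho, w(1.0, rho), v(1.0))          # w -> v as rho grows
```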
###### Acknowledgements.
I would like to thank the University of North Carolina at Chapel Hill and the Triangle Universities Nuclear Laboratory for hospitality and support during my stay in Chapel Hill, where this work was performed. Moreover, I would like to thank W. Tornow, E. Ludwig, H. Karwowski, C. Brune, L. Knutson and E. George as well as M. Viviani, S. Rosati and A. Fabrocini for useful discussions.
Table Captions
Table I. The binding energy of <sup>3</sup>He is shown together with the kinetic energy and the $`S^{\prime }`$-, $`P`$- and $`D`$-wave probabilities. The AV18 and the AV18+UR potentials are considered with and without the spin–orbit 3NF for three choices of the parameters (see text).
Table II. The $`P`$-wave phase-shift and mixing parameters calculated at two different energies. The AV18 potential and three different sets for the parameters in the spin–orbit 3NF (see text) have been used. The results from the PSA of ref. are given in column 6 for the sake of comparison.
Table III. $`\chi ^2`$ per datum at four different energies calculated using the AV18 potential model with and without the three different parametrizations of the SO-3NF. The data set has been taken from refs. . The number in parenthesis is the number of data points.
Figure Captions
Fig.1. The p-d and n-d analyzing power $`A_y`$ and the deuteron analyzing power $`iT_{11}`$ are shown at $`E_{lab}=3.0`$ MeV. The different curves correspond to the following potential models: AV18(dotted line), AV18+LS1 (solid line), AV18+LS2 (dashed line), AV18+LS3 (dotted-dashed line). The experimental data are from ref. (p-d) and ref. (n-d).
Fig.2. The tensor analyzing powers $`T_{20},T_{21},T_{22}`$ at $`E_{lab}=3.0`$ MeV for the same potential model as Fig.1. The experimental data are from ref. .
Fig.3. $`A_y`$ and $`iT_{11}`$ at two different lab energies. The four curves correspond to the same potentials as in the previous figures. Experimental data are from ref. at $`648`$ keV and from ref. at $`2.5`$ MeV.
Fig.4. As in Fig.3 at two different energies above the deuteron breakup threshold. Experimental data are from ref. at $`5`$ MeV and from ref. at $`10`$ MeV.
Fig.5. The differential cross section at five different lab energies. At each energy the four curves corresponding to the AV18 potential with and without the SO-3NF overlap. Experimental points are from refs. .
Fig.6. The tensor analyzing powers at $`E_{lab}=5`$ MeV and $`10`$ MeV. At each energy there are four curves corresponding to AV18 (dotted line), AV18+LS1 (solid line), AV18+LS2 (dashed line) and AV18+LS3 (dotted-dashed line).
# Influence of damping on the excitation of the double giant resonance
## Abstract
We study the effect of the spreading widths on the excitation probabilities of the double giant dipole resonance. We solve the coupled-channels equations for the excitation of the giant dipole resonance and the double giant dipole resonance. Taking $`Pb+Pb`$ collisions as an example, we study the resulting effect on the excitation amplitudes and cross sections as a function of the width of the states and of the bombarding energy.
Double giant dipole resonances have been mainly studied in heavy ion Coulomb excitation experiments at high energies (for a recent review, see ). The feasibility of such experiments was predicted in 1986, when the magnitude of the cross sections for the excitation of the Double Giant Dipole Resonance (DGDR) was calculated (see also ). In ref. a recipe was given for treating the effect of the width of the giant resonances on the excitation probabilities and cross sections. In this letter we make a quantitative prediction of this effect using a realistic coupled-channels calculation for the excitation amplitudes. The coupling interaction for the nuclear excitation $`i\to f`$ in a semiclassical calculation for an electric $`\left(\pi =E\right)`$, or magnetic $`\left(\pi =M\right)`$, multipolarity, is given by (eqs. (6-7) of ref. )
$$W_C=\frac{V_C}{ϵ_0}=\sum _{\pi \lambda \mu }W_{\pi \lambda \mu }\left(\tau \right),$$
(1)
where
$$W_{\pi \lambda \mu }\left(\tau \right)=\left(-1\right)^{\lambda +1}\frac{Z_1e}{\hbar vb^\lambda }\frac{1}{\lambda }\sqrt{\frac{2\pi }{\left(2\lambda +1\right)!!}}Q_{\pi \lambda \mu }(\xi ,\tau )\mathcal{M}(\pi \lambda ,\mu ).$$
(2)
Above, $`b`$ is the impact parameter, $`\gamma =\left(1-\beta ^2\right)^{-1/2}`$, $`\beta =v/c`$, $`\tau =\gamma vt/b`$ is a dimensionless time variable, $`ϵ_0=\gamma \hbar v/b`$ sets the energy scale and $`Q_{\pi \lambda \mu }(\xi ,\tau )`$, with $`\xi =\xi _{if}=\left(E_f-E_i\right)/ϵ_0`$ as an adiabatic parameter, depends exclusively on the properties of the projectile-target relative motion. The multipole operators, which act on the intrinsic degrees of freedom, are, as usual,
$$\mathcal{M}(E\lambda ,\mu )=\int d^3r\rho (𝐫)r^\lambda Y_{\lambda \mu }(𝐫),$$
(3)
and
$$\mathcal{M}(M1,\mu )=\frac{i}{2c}\int d^3r𝐉(𝐫)\cdot 𝐋\left(rY_{1\mu }\right),$$
(4)
We treat the excitation problem by the method of Alder and Winther. We solve a time-dependent Schrödinger equation for the intrinsic degrees of freedom in which the time dependence arises from the projectile-target motion, approximated by the classical trajectory. For relativistic energies, a straight line trajectory is a good approximation. We expand the wave function in the set $`\{|k\rangle ;k=0,\dots ,N\}`$ of eigenstates of the nuclear Hamiltonian, where 0 denotes the ground state and $`N`$ is the number of intrinsic excited states included in the coupled-channels (CC) problem. We obtain a set of coupled equations.
To simplify the expression we introduce the dimensionless parameter $`\mathrm{\Theta }_{kj}^{(\lambda \mu )}`$ by the relation
$$\mathrm{\Theta }_{kj}^{(\lambda \mu )}=\left(-1\right)^{\lambda +1}\frac{Z_1e}{\hbar \mathrm{v}b^\lambda }\frac{1}{\lambda }\sqrt{\frac{2\pi }{\left(2\lambda +1\right)!!}}\mathcal{M}_{kj}(E\lambda )$$
(5)
Then we write the coupled channels equations in the form
$$\frac{da_k(\tau )}{d\tau }=-i\sum _{j=0}^{N}\sum _{\pi \lambda \mu }Q_{\pi \lambda \mu }(\xi _{kj},\tau )\mathrm{\Theta }_{kj}^{\left(\lambda \mu \right)}\mathrm{exp}\left(i\xi _{kj}\tau \right)a_j(\tau ).$$
(6)
In what follows we concentrate on the $`E1`$ excitation mode. In this case, we have
$$Q_{E10}(\xi ,\tau )=\gamma \sqrt{2}\left[\tau \varphi ^3(\tau )-i\xi \left(\frac{\mathrm{v}}{c}\right)^2\varphi (\tau )\right];Q_{E1\pm 1}(\xi ,\tau )=\varphi ^3(\tau ),$$
(7)
where $`\varphi (\tau )=\left(1+\tau ^2\right)^{-1/2}.`$
Following ref. the inclusion of damping leads to the coupled-channels equations with damping terms, i.e.,
$$\frac{da_k(\tau )}{d\tau }=-i\sum _{j=0}^{N}\sum _\mu Q_{E1\mu }(\xi _{kj},\tau )\mathrm{\Theta }_{kj}^{\left(1\mu \right)}\mathrm{exp}\left(i\xi _{kj}\tau \right)a_j(\tau )-\frac{\mathrm{\Gamma }_k}{2ϵ_0}a_k(\tau ).$$
(8)
These equations lead to master equations for the occupation probabilities, $`\stackrel{~}{P}_j(\tau )=\left|a_j(\tau )\right|^2`$, in the form
$$d\stackrel{~}{P}_k(\tau )/d\tau =G_k(\tau )-L_k(\tau )$$
(9)
where
$$G_k(\tau )=2\,\mathrm{Im}\sum _j\sum _\mu Q_{E1\mu }(\xi _{kj},\tau )\mathrm{\Theta }_{kj}^{\left(1\mu \right)}\mathrm{exp}(i\xi _{kj}\tau )a_k(\tau )a_j^{\ast }(\tau )$$
(10)
and
$$L_k(\tau )=\frac{\mathrm{\Gamma }_k}{ϵ_0}\stackrel{~}{P}_k(\tau ).$$
(11)
These equations can be integrated, yielding the conservation law,
$$\sum _k\left(\stackrel{~}{P}_k(\tau )+\stackrel{~}{F}_k(\tau )\right)=1,\mathrm{where}\stackrel{~}{F}_k(\tau )=\frac{\mathrm{\Gamma }_k}{ϵ_0}\int _{-\infty }^\tau \stackrel{~}{P}_k(\tau ^{\prime })𝑑\tau ^{\prime }$$
(12)
Due to the exponential decay of the states with $`k\ge 1`$, we have for $`t\to \infty `$ the limit $`\stackrel{~}{P}_j(\infty )=\delta _{j0}\stackrel{~}{P}_0(\infty )`$ and
$$\stackrel{~}{P}_0(\infty )+\sum _k\stackrel{~}{F}_k(\infty )=1.$$
(13)
This means that for $`t\to \infty `$ there is a probability to find the system in the ground state given by $`\stackrel{~}{P}_0(\infty )`$ and a probability that it has been excited and has decayed through channel $`j`$, given by $`\stackrel{~}{F}_j(\infty )`$. Thus, the set of equations (8) is shown to correctly describe the contribution to the excitation through channel $`j`$.
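A minimal numerical sketch of eqs. (8) and (12)-(13) for a three-level system (ground state, GDR, DGDR) is given below. The coupling strengths, the retained $`Q_{E1\pm 1}`$ multipole (the complex $`Q_{E10}`$ term is dropped for brevity) and the kinematic inputs are illustrative choices, not the matrix elements of the full calculation.

```python
import numpy as np
from scipy.integrate import solve_ivp

hbarc = 197.327                        # MeV fm
beta, gamma, b = 0.81, 1.69, 15.0      # ~640A MeV Pb+Pb, b in fm (illustrative)
eps0 = gamma * beta * hbarc / b        # energy scale gamma*hbar*v/b
E   = np.array([0.0, 13.5, 27.0])      # MeV: ground, GDR, DGDR
Gam = np.array([0.0, 4.0, 5.7])        # MeV: ground state does not decay
theta = np.zeros((3, 3))
theta[0, 1] = theta[1, 0] = 0.2                  # illustrative strength
theta[1, 2] = theta[2, 1] = 0.2 * np.sqrt(2.0)   # boson factor sqrt(2)
xi = (E[:, None] - E[None, :]) / eps0

def rhs(tau, y):                       # eq.(8), keeping only Q_{E1,+-1}=phi^3
    a = y[:3] + 1j * y[3:]
    Q = (1.0 + tau**2) ** (-1.5)
    da = -1j * Q * np.sum(theta * np.exp(1j * xi * tau) * a[None, :], axis=1) \
         - Gam / (2.0 * eps0) * a
    return np.concatenate([da.real, da.imag])

sol = solve_ivp(rhs, (-40.0, 40.0), [1.0, 0, 0, 0, 0, 0],
                rtol=1e-8, atol=1e-10, max_step=0.1)
P = sol.y[:3] ** 2 + sol.y[3:] ** 2    # occupation probabilities P_k(tau)
F = (Gam[:, None] / eps0) * np.cumsum( # eq.(12) by trapezoidal integration
    0.5 * (P[:, 1:] + P[:, :-1]) * np.diff(sol.t), axis=1)
print("P0(inf) + sum_k Fk(inf) =", P[0, -1] + F[:, -1].sum())   # ~1, eq.(13)
```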
The excitation probability of an intrinsic state $`j`$ in a collision with impact parameter $`b`$ is obtained from an average over the initial orientation and a sum over the final orientation of the nucleus, as
$$P_j(b)=\frac{1}{2I_0+1}\sum _{M_0,M_j}|\stackrel{~}{F}_j(\infty )|^2,$$
(14)
and the cross section is obtained by the classical expression
$$\sigma _j=2\pi \int P_j(b)T(b)b𝑑b.$$
(15)
Above, $`T(b)`$ accounts for absorption according to the prescription of ref. , using the nucleon-nucleon cross sections and the ground state density of $`Pb`$ from experimental data.
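A sketch of eq. (15), with a sharp-cutoff stand-in for the survival factor $`T(b)`$ and a toy excitation probability falling like $`1/b^4`$ (all values are illustrative):

```python
import numpy as np

def sigma(P_of_b, b_min=14.3, b_max=200.0, n=4000):
    b = np.linspace(b_min, b_max, n)
    T = np.ones_like(b)                        # black-disk stand-in for T(b)
    y = 2.0 * np.pi * P_of_b(b) * T * b
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(b))   # fm^2; 1 fm^2 = 10 mb

print(sigma(lambda b: 0.1 * (14.3 / b) ** 4), "fm^2")    # toy P(b) ~ b^-4
```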
We consider the excitation of giant resonances in <sup>208</sup>Pb projectiles, incident on <sup>208</sup>Pb targets at 640 A MeV. This reaction has been studied at the GSI/SIS, Darmstadt . For this system the excitation probabilities of the isovector giant dipole resonance ($`GDR`$) at 13.5 MeV are large and, consequently, higher-order effects of channel coupling should be relevant. To assess the importance of the damping effects, we calculate the matrix elements assuming that the GDR is an isolated state depleting 100% of the energy-weighted sum-rule. The matrix element for the GDR $`\to `$ DGDR transition incorporates the boson factor $`\sqrt{2}`$, as usual . The energy location of the DGDR state is taken as 27 MeV, consistent with the experimental data. The spins and parities of the states are $`1^{-}`$ for the GDR, and $`0^+`$ and $`2^+`$ for the DGDR, respectively. The distribution of the strength among the $`0^+`$ and $`2^+`$ DGDR states is simply obtained from Clebsch-Gordan coefficients .
In figure 1 we plot the time-dependent occupation probabilities of the ground state in $`Pb`$, $`N=0`$, of the GDR state, $`N=1`$, and of the DGDR state, $`N=2`$, respectively. Figure 1(a) shows the occupation probabilities with the widths equal to zero, $`\mathrm{\Gamma }_N=0`$. In figure 1(b) we plot the occupation probabilities of the GDR state, $`N=1`$, and of the DGDR state, $`N=2`$, with $`\mathrm{\Gamma }_N=0`$ (full lines), and with $`\mathrm{\Gamma }_{GDR}=4`$ MeV (experimental), and $`\mathrm{\Gamma }_{DGDR}=5.7`$ MeV (dashed lines). The width of the DGDR is set to $`\mathrm{\Gamma }_{DGDR}=\sqrt{2}\mathrm{\Gamma }_{GDR}`$, following the apparent trend of the experimental data . Note that $`\mathrm{\Gamma }_{DGDR}=2\mathrm{\Gamma }_{GDR}`$ has a better (and simpler) theoretical explanation . But the time-integrated population of the DGDR state will not be much influenced by using the latter parametrization.
We observe that the inclusion of damping leads to strong modifications in the time-dependent occupation probabilities of the GDR and DGDR states. One might wrongly deduce from figure 1 that the excitation probabilities are reduced proportionally to the difference between the maximum values of $`\stackrel{~}{P}_N(\tau )`$ with and without damping. However, the quantities shown in figure 1(b) include the loss of occupation probability of a given state while it is being populated by the time-dependent transitions. Thus, the reduction of the excitation probabilities of the GDR and the DGDR due to damping is smaller than deduced from figure 1. The relevant quantities for calculating the excitation cross sections are $`\stackrel{~}{F}_j(\infty )`$, which account for the time-integrated transition probability, followed by decay, of the state $`j`$.
In figure 2 we plot the flux functions, or time-integrated transition probabilities, to the GDR and the DGDR states as a function of the width of the collective state, $`\mathrm{\Gamma }_{GDR}`$, keeping constant the ratio $`\mathrm{\Gamma }_{DGDR}/\mathrm{\Gamma }_{GDR}=\sqrt{2}`$. We keep the impact parameter fixed, $`b=15`$ fm. We note that varying the width from 0 up to 4 MeV leads to a 10% decrease of the flux functions into the GDR and DGDR states. A similar tendency is observed for the total cross section, integrated over impact parameters. The excitation probability decreases roughly as $`1/b^4`$; therefore the grazing collisions are weighted most strongly.
In figure 3(a) we plot the effect of damping on the total cross sections, as a function of the bombarding energy. We take $`\mathrm{\Gamma }_{GDR}=0`$ (dashed line) and $`\mathrm{\Gamma }_{GDR}=4`$ MeV (full line), keeping constant the ratio $`\mathrm{\Gamma }_{DGDR}/\mathrm{\Gamma }_{GDR}=\sqrt{2}`$. We observe that the effect of damping disappears as the bombarding energy increases. At high energies the reaction is fast, and the system does not have time to dissipate during its excitation. In this regime the sudden approximation is a valid approach to the calculation of the excitation amplitudes.
We note that our model is restricted to the excitation of isolated resonant states, including a time-dependent loss term on the far right of the coupled-channels equations (8). This is different from a study of the influence of the fragmentation of the resonances into many neighbouring states. In this case, the Coulomb excitation of the giant resonances is obtained as a superposition of excitations to states spread over an energy envelope, usually taken to have a Lorentzian shape. States at lower energy are more easily excited than states at higher energies. Thus, the spreading of the resonances may lead to another kind of width effect, not obtainable in the above treatment. To study this effect in a simple way, we use the harmonic model of ref. . The photo-nuclear cross sections which enter these calculations are of Lorentzian shape, with a cut at the low energy limit of 8 MeV, corresponding to the threshold for neutron emission, since most of the DGDR manifestation in experiments comes from neutron emission after relativistic Coulomb excitation. The magnitude of the photo-nuclear cross sections is obtained by using a 100% depletion of the Thomas-Reiche-Kuhn energy-weighted sum-rule applied to the GDR.
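The photo-nuclear input just described can be sketched as follows; rescaling the cut Lorentzian back to the full TRK value is one simple implementation choice, assumed here for illustration.

```python
import numpy as np

Z, N, A = 82, 126, 208
E = np.linspace(8.0, 40.0, 2000)          # MeV, cut at the 8 MeV threshold
E0, Gam = 13.5, 4.0                       # GDR centroid and width in Pb
lor = Gam**2 * E**2 / ((E**2 - E0**2)**2 + Gam**2 * E**2)
trk = 60.0 * N * Z / A                    # TRK sum rule, MeV mb
norm = np.sum(0.5 * (lor[1:] + lor[:-1]) * np.diff(E))
sigma_gamma = lor * trk / norm            # 100% of the sum rule above threshold
check = np.sum(0.5 * (sigma_gamma[1:] + sigma_gamma[:-1]) * np.diff(E))
print(check, "MeV mb =", trk)             # recovers 60 NZ/A
```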
The harmonic model provides a simple analytical formula to calculate the excitation probabilities of the DGDR . The resulting cross sections are shown in figure 3(b), where the solid line corresponds to $`\mathrm{\Gamma }_{GDR}=\mathrm{\Gamma }_{DGDR}=0`$, while the dashed lines are for $`\mathrm{\Gamma }_{DGDR}=5.7`$ MeV and $`\mathrm{\Gamma }_{GDR}=4`$ MeV. We observe a similar effect as in figure 3(a). This is understood as a reduction due to the spread of states at energies above the energy centroid of the Lorentzian envelope. The excitation amplitudes for these states are smaller, thus leading to a net reduction of the energy-integrated Coulomb excitation cross sections.
In conclusion, we have obtained the dependence of the excitation amplitudes on the width of the giant resonance states. We show that the effect reduces excitation probabilities and cross sections. We have developed an approach to solve this problem in realistic situations. It is demonstrated that the dynamical effect of the widths of the GR’s in a time-dependent picture leads to a decrease of the cross sections, more accentuated for low energy collisions. The energy fragmentation of the giant resonances can be studied in a simple fashion within the harmonic model. The net effect is also to decrease the cross sections with increasing width, especially at low energy collisions.
Acknowledgments
This work was supported in part by the Brazilian funding agencies CNPq, FAPERJ, FUJB/UFRJ, and PRONEX, under contract 41.96.0886.00.
Figure Captions
Fig. 1 - Occupation probabilities in Coulomb excitation of $`Pb`$, in $`Pb+Pb`$ collisions at 640 A MeV. $`N=0`$ for the ground state, $`N=1`$ for the GDR state, $`N=2`$, for the DGDR state, respectively. Figure 1(a) shows the occupation probabilities with the widths equal to zero, $`\mathrm{\Gamma }_N=0`$. In figure 1(b) we plot the occupation probabilities of the GDR state, $`N=1`$, and of the DGDR state, $`N=2`$, with $`\mathrm{\Gamma }_N=0`$ (full lines), and with $`\mathrm{\Gamma }_{GDR}=4`$ MeV, and $`\mathrm{\Gamma }_{DGDR}=5.7`$ MeV (dashed lines). The width of the DGDR is set to $`\mathrm{\Gamma }_{DGDR}=\sqrt{2}\mathrm{\Gamma }_{GDR}`$, according to the trend of the experimental data .
Fig. 2 - Flux functions, or time-integrated transition probabilities to the GDR and the DGDR states in $`Pb`$ as a function of the width of the collective state, $`\mathrm{\Gamma }_{GDR}`$, keeping constant the ratio $`\mathrm{\Gamma }_{DGDR}/\mathrm{\Gamma }_{GDR}=\sqrt{2}`$. We keep the impact parameter fixed, $`b=15`$ fm. $`N=1`$ for the GDR state, $`N=2`$, for the DGDR state.
Fig. 3 - (a) Total cross sections, as a function of the bombarding energy. We take $`\mathrm{\Gamma }_{GDR}=0`$ (dashed line) and $`\mathrm{\Gamma }_{GDR}=4`$ MeV (full line), keeping constant the ratio $`\mathrm{\Gamma }_{DGDR}/\mathrm{\Gamma }_{GDR}=\sqrt{2}`$. $`N=1`$ for the GDR state, $`N=2`$, for the DGDR state. (b) Results of the harmonic model for the Coulomb excitation cross sections of the GDR ($`N=1`$) and the DGDR ($`N=2`$). The solid lines use $`\mathrm{\Gamma }_{GDR}=\mathrm{\Gamma }_{DGDR}=0`$, while the dashed-lines are for $`\mathrm{\Gamma }_{DGDR}=5.7`$ MeV and $`\mathrm{\Gamma }_{GDR}=4`$ MeV.
## 1 Introduction
Our present knowledge of the hadronic structure of the photon rests on rather limited data from inclusive deep-inelastic electron-photon scattering. At leading order (LO) of perturbative QCD, the photon structure function $`F_2^\gamma (x,Q^2)`$ is related to the singlet quark densities (dominated by the up-quark density) which are the only well constrained parton densities in the photon. In contrast, the gluon density in the photon is only constrained theoretically by a global momentum sum rule. Experimental constraints are weak, since the gluon contributes to $`F_2^\gamma (x,Q^2)`$ only at next-to-leading order (NLO) of QCD. Therefore, the available parametrizations of the photon structure function rely heavily on assumptions like Vector Meson Dominance. Valuable information on the gluon density in the photon is provided by jet photoproduction, where existing data have already ruled out a very large and hard gluon density.
Jet photoproduction has been measured with increasing precision at HERA since the electron-proton collider became operational in 1992. These data make it now possible to discriminate between different parametrizations of the photon structure if uncertainties from the proton structure and from the partonic scattering process can be minimized. The proton structure is well constrained in the relevant regions of $`x`$ from deep-inelastic HERA data. Direct and resolved photon-proton scattering processes into one or two jets have been calculated by three groups in NLO QCD . These calculations are also applicable to LO three-jet production. The purpose of this paper is to check the consistency of these three calculations using the kinematic cuts of recent ZEUS dijet and three-jet analyses at a precision that is only limited by the accuracy of the numerical integration. The organization of the paper is as follows: In Sect. 2 we briefly describe the theoretical methods used in the perturbative calculations. In Sect. 3 we present a detailed comparison of the LO three-jet distributions, and in Sect. 4 we present the comparison of the NLO dijet distributions. In Sect. 5 we discuss remaining theoretical and experimental uncertainties, and we give our conclusions in Sect. 6.
## 2 Theoretical Methods Used in the NLO Calculations
The basic components in current NLO jet photoproduction calculations are $`2\to 2`$ body squared matrix elements through one-loop order and tree-level $`2\to 3`$ body squared matrix elements, for both photon-parton and parton-parton initiated subprocesses. It is therefore possible to study single- and dijet production at NLO and three-jet production at LO. The goal of the next-to-leading order calculations is to organize the soft and collinear singularity cancellations without loss of information in terms of observable quantities. The methods to accomplish this cancellation can be categorized as the phase space slicing and subtraction methods.
The calculation of uses the subtraction method. In the center of mass frame of the incoming partons the final state parton four vectors may be written as $`p_i=\frac{\sqrt{S}}{2}\xi _i(1,\sqrt{1-y_i^2}\vec{e}_{iT},y_i)`$ where $`\vec{e}_{iT}`$ is a transverse unit vector. By construction, the parton $`i`$ becomes soft when $`\xi _i\to 0`$, and collinear to the incoming partons when $`y_i\to \pm 1`$. The $`n`$-dimensional three-body phase space written in terms of $`\xi _i`$ and $`y_i`$ is proportional to $`\xi _i^{1-2ϵ}(1-y_i^2)^{-ϵ}`$, where $`ϵ=2-n/2`$. The soft singularities in the matrix element squared, which are of $`𝒪(\xi _i^{-2})`$, are regulated by multiplying them by $`\xi _i^2`$ and at the same time dividing the phase space by $`\xi _i^2`$, resulting in a $`\xi _i^{-1-2ϵ}(1-y_i^2)^{-ϵ}`$ structure. The term $`\xi _i^{-1-2ϵ}`$ is replaced by plus distributions and soft poles in $`ϵ`$. Within these terms $`(1-y_i^2)^{-ϵ}`$ is replaced by additional plus distributions and collinear poles in $`ϵ`$. The soft and final state collinear singularities cancel upon addition of the interference of the leading order diagrams with the renormalized one-loop virtual diagrams. The initial state collinear singularities are removed through mass factorization. The result is a finite function of various combinations of plus distributions. The main drawback of this method is that in the subtracted integrals the numerical singularity cancellation takes place between terms of different kinematics, which requires special care.
The calculation of uses a phase space slicing method that employs two small cut-offs $`\delta _s`$ and $`\delta _c`$ to delineate soft and collinear regions of phase space. This avoids partial fractioning at the expense of a somewhat more complicated split of phase space. Defining the four vectors of the three-body scattering process as $`p_1+p_2\to p_3+p_4+p_5`$ one takes $`s_{ij}=(p_i+p_j)^2`$. The soft region is then $`E_i<\delta _s\sqrt{s_{12}}/2`$ where $`E_i`$ is the energy of the emitted gluon. In this region one puts $`p_i=0`$ everywhere except in denominators of matrix elements, and performs the integral over the restricted phase space in $`n`$ dimensions. The complementary region, $`E_i>\delta _s\sqrt{s_{12}}/2`$, is called the hard region. That portion of the hard region satisfying $`s_{ij}<\delta _cs_{12}`$ or $`|t_{ij}|<\delta _cs_{12}`$, with $`t_{ij}=(p_i-p_j)^2`$, is treated with collinear kinematics and is also integrated in $`n`$ dimensions. The poles in $`ϵ`$ cancel as described above, and terms of order $`\delta _c`$ and $`\delta _s`$ are neglected compared to double and single logarithms of the cut-offs. The hard and non-collinear phase space region is integrated numerically. The sum is independent of the cut-offs provided they are chosen small enough. This serves as a useful check on results.
The calculation of , which is also of the phase space slicing type, uses an invariant mass cut to isolate singular regions of phase space. The $`2\to 3`$ body squared matrix elements are partially fractioned to separate overlapping soft and collinear singularities. As above one defines $`s_{ij}=(p_i+p_j)^2`$. In the situation when $`s_{ij}<ys_{12}`$ the partons $`i`$ and $`j`$ cannot be resolved. In this region the phase space integrals are performed in $`n`$ dimensions, which produces double and single poles in $`ϵ`$. They cancel as described above. Terms of order $`y`$ are neglected in the process, but double and single logarithms in $`y`$ are retained. The region $`s_{ij}>ys_{12}`$ is integrated numerically. The sum is independent of $`y`$ provided it is chosen small enough. As above, this serves as a useful check on results.
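The cut-off (in)dependence underlying both slicing methods can be mimicked with a one-dimensional toy integral: the region $`x>y`$ is integrated numerically, while the region $`x<y`$ is treated with $`x=0`$ kinematics, leaving an explicit logarithm of the cut-off that cancels in the sum. The integrand below is invented for illustration.

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: (1.0 - x) ** 2             # stands in for a squared matrix element
hard = lambda y: quad(lambda x: f(x) / x, y, 1.0)[0]   # numerical hard region
soft = lambda y: f(0.0) * np.log(y)      # analytic soft piece, poles cancelled
for y in (1e-2, 1e-3, 1e-4, 1e-5):
    print(f"y = {y:.0e}  hard = {hard(y):9.5f}  sum = {hard(y) + soft(y):9.5f}")
# the sum tends to int_0^1 [f(x)-f(0)]/x dx = -1.5, independent of the cut-off
```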
The final result of these calculations is an expression that is finite in four-dimensional space-time. One can compute all phase space integrations using standard Monte-Carlo integration techniques. The result is a program which returns parton kinematic configurations and their corresponding weights, accurate to $`𝒪(\alpha \alpha _s^2)`$. The user is free to histogram any set of infrared-safe observables and apply parton level cuts, all in a single histogramming subroutine. The calculations have the added benefit that when one considers a manifestly three-body observable the two-body contributions vanish and a leading order three-jet prediction results.
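The user-side histogramming loop can be sketched as follows; the event generator, observable and weights are all hypothetical stand-ins for the output of the NLO programs.

```python
import numpy as np

bins = np.linspace(11.0, 60.0, 25)            # e.g. E_T of the leading jet [GeV]
hist = np.zeros(bins.size - 1)
rng = np.random.default_rng(1)
for _ in range(100_000):                      # stand-in for the MC phase space
    et = 11.0 + 49.0 * rng.random()           # hypothetical observable
    w = rng.normal(np.exp(-et / 8.0), 0.1)    # hypothetical weight, can be < 0
    i = np.searchsorted(bins, et) - 1
    if 0 <= i < hist.size:
        hist[i] += w
dsigma_det = hist / np.diff(bins)             # differential distribution in E_T
```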
## 3 Three-Jet Cross Sections
During 1995 and 1996, positrons of energy $`E_e=27.5`$ GeV were collided at HERA with protons of energy $`E_p=820`$ GeV. In ZEUS, photoproduction events were selected by anti-tagging the positron such that the photon has a virtuality $`Q^2`$ smaller than 1 GeV<sup>2</sup> and an energy fraction in the positron $`0.2<y<0.8`$. Three-jet events were analysed with a $`k_T`$ clustering algorithm using a jet separation parameter of $`R=1`$ in a rapidity range of $`|\eta |<2.4`$. The jets were required to have transverse energies above 6 GeV (two highest $`E_T`$ jets) and above 5 GeV (third jet). Additional cuts were placed on the three-jet mass $`M_{3\mathrm{jet}}>50`$ GeV, the leading jet energy fraction $`x_3<0.95`$, and the cosine of the leading jet scattering angle $`|\mathrm{cos}\theta _3|<0.8`$ . NLO calculations for three-jet photoproduction are not yet available, so the theoretical predictions for three-jet distributions, which are compared here, are only accurate to LO. Therefore one tests only the $`2\to 3`$ phase space generators and the tree-level $`2\to 3`$ matrix elements, but no soft or collinear singular regions. All calculations use CTEQ4L and GRV-LO parton distributions in the proton and photon, respectively. The strong coupling constant $`\alpha _s(\mu )`$ is calculated in leading order with five flavors and $`\mathrm{\Lambda }_{\mathrm{QCD}}^{(5)}=181`$ MeV, and the renormalization and factorization scale $`\mu `$ is identified with the largest transverse energy of the three jets.
In Fig. 1 we compare the theoretical predictions for the LO three-jet mass distribution by Harris and Owens (HO) and by Frixione and Ridolfi (FR) to those by Klasen and Kramer (KK). In the upper figure we plot the absolute cross section, which falls exponentially with $`M_{3\mathrm{jet}}`$. This demonstrates that the total cross section is dominated by the region close to the $`M_{3\mathrm{jet}}=50`$ GeV cut. In the lower figure we plot the relative difference of the results by HO and by FR with respect to the results by KK, normalized to the latter. The statistical accuracy of the different calculations is comparable. It has been included in the error bars and decreases simultaneously with the magnitude of the cross section. The calculation of HO presented here differs from the previous results as published in , where the $`E_T`$ cuts were applied to energy, not transverse energy, ordered jets. It now agrees very well (better than 0.5% at low $`M_{3\mathrm{jet}}`$) with that by KK. The calculation by FR is systematically 2% lower.
A similar comparison for the distributions in the energy fractions of the leading and next-to-leading jets is shown in Fig. 2.
These distributions are dominated by the available phase space, not the QCD dynamics, and thus present a test of the two-to-three phase space generators of the numerical programs. The statistical accuracy depends again on the size of the cross section. Where the cross section is large, HO agree with KK to better than 0.5%. FR are again 2% lower, which indicates that the difference may come from the phase space generator or kinematic cuts.
The QCD matrix elements are tested in distributions of the cosine of the fastest jet scattering angle $`\mathrm{cos}\theta _3`$ and the angle $`\psi _3`$ between the three-jet plane and the plane containing the leading jet and the average beam direction. The results are presented in Fig. 3.
We find again very good agreement between HO and KK and a 2% difference with FR.
## 4 Dijet Cross Sections
For the dijet photoproduction analysis ZEUS again selected photons with a virtuality below 1 GeV<sup>2</sup>. The range of the energy fraction of the photon in the positron, $`0.2<y<0.85`$, was slightly larger than in the three-jet analysis, and in addition a narrower band of $`0.5<y<0.85`$ was analyzed, which enhances the sensitivity to the parton densities in the photon. The transverse energy of the leading (second) jet was required to be larger than 14 (11) GeV with both jets lying in the rapidity region of $`-1<\eta _{1,2}<2`$ . All NLO calculations use the CTEQ4M and GRV-HO parton densities for the proton and photon and $`\mathrm{\Lambda }_{\mathrm{QCD}}^{(5)}=202`$ MeV corresponding to CTEQ4M in the NLO approximation of $`\alpha _s(\mu =\mathrm{max}(E_{T_{1,2}}))`$.
In Fig. 4 we compare the theoretical predictions for the NLO dijet cross section as a function of the transverse energy $`E_T`$ of the leading jet with both jets at central pseudorapidities $`0<\eta _{1,2}<1`$. The three calculations agree within the statistical accuracy. The errors are comparable for all three calculations. They are about $`\pm 1`$% ($`\pm 2`$%) at low $`E_T`$ in the full (high) $`y`$ regime and larger at high $`E_T`$ due to the steeply falling cross section.
In Fig. 5 we present rapidity distributions for the NLO dijet cross section with a central first jet, $`\eta _1\in [0;1]`$. HO agree with FR for the full $`y`$ range and are about 5% higher than KK. In the high $`y`$ range, FR are about 4% higher than KK, whereas HO have a slope from +4% in the backward direction to -4% in the forward direction. Studies have shown that HO agree with KK very well for the resolved processes and for the Born and virtual direct processes. This indicates that the difference, which is still under study, may come from the real direct processes. Within the statistical accuracy of about $`\pm 2`$% the overall agreement is, however, still acceptable.
## 5 Remaining Uncertainties
The main remaining theoretical uncertainties arise from the dependence of the hadronic cross section on the renormalization and factorization scale $`\mu `$. This scale dependence is an artifact of the truncation of the perturbative series at next-to-leading order. The scale $`\mu `$ has to be larger than $`𝒪(\mathrm{\Lambda }_{\mathrm{QCD}})`$ to ensure the applicability of perturbation theory. Although the scale $`\mu `$ is in principle arbitrary and the renormalization and factorization scales need not be equal, the logarithmic NLO corrections can be made small by choosing a common scale $`\mu `$ of the order of the hard scattering parameter. In jet photoproduction, the relevant large scales are the transverse energies of the jets $`E_{T_i}`$ which need not be equal in NLO QCD. This justifies the choice of $`\mu =\mathrm{max}(E_{T_i})`$ which we have used consistently throughout this paper.
It is customary to estimate the theoretical uncertainty of perturbative calculations by varying $`\mu `$ around the central scale. The dependence of the total dijet cross section with the same kinematic cuts as before on the scale $`\mu `$ is plotted in Fig. 6.
We have checked that the calculations by HO and KK agree very well. The LO cross section depends strongly and logarithmically on $`\mu `$ through the strong coupling constant $`\alpha _s(\mu )`$ and the parton densities in the photon $`f_{q,g}^\gamma (x_\gamma ,\mu ^2)`$ and proton $`f_{q,g}^p(x_p,\mu ^2)`$. The dependence is reduced in NLO due to explicit logarithms in the virtual and real corrections. However, it still amounts to a considerable uncertainty of about $`\pm 8`$%, which can be traced back to the photon factorization scale dependence of the NLO resolved contribution. Whereas the LO photon factorization scale dependence is almost completely cancelled by the NLO direct contribution, the same cancellation for the NLO resolved contribution would require the next-to-next-to-leading order (NNLO) direct contribution, which is unknown. The three-jet cross section is only accurate to LO QCD and suffers from even larger scale uncertainties. They have been estimated to be about a factor of two .
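The mechanism behind the reduced scale dependence can be illustrated with a one-loop toy model, in which the NLO coefficient contains the renormalization logarithm that compensates the running of $`\alpha _s`$; the coefficients $`c_0`$ and $`c_1`$ below are invented.

```python
import numpy as np

nf, Lam, ET = 5, 0.202, 25.0                        # GeV
b0 = (33.0 - 2.0 * nf) / (12.0 * np.pi)
als = lambda mu: 1.0 / (b0 * np.log(mu**2 / Lam**2))
c0, c1 = 1.0, 3.0                                   # invented coefficients
for x in (0.5, 1.0, 2.0):
    mu = x * ET
    lo  = als(mu)**2 * c0
    nlo = als(mu)**2 * (c0 + als(mu) * (c1 + 2.0*b0*c0*np.log(mu**2 / ET**2)))
    print(f"mu = {x:3.1f} E_T : LO = {lo:.4f}  NLO = {nlo:.4f}")
# d(NLO)/dln(mu^2) vanishes through O(alpha_s^3); the NLO column is flatter
```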
Further uncertainties arise from the power corrections in the Weizsäcker-Williams approximation . The non-logarithmic terms have been included in all of our numerical results. Although power corrections of $`𝒪(m_e^2/Q^2)`$ might be expected to be negligible, omitting these terms results in an increase of the dijet and three-jet cross sections of about 5%. The remaining uncertainty beyond this $`𝒪(m_e^2/Q^2)`$ correction is of $`𝒪(\theta _e^2,m_e^2/E_e^2)`$ and thus small.
While theoretical calculations are on the parton level, experiments measure hadronic jets. For LO three jet cross sections, every parton corresponds to a jet, making it impossible to implement an experimental jet definition in the theoretical calculation. For NLO dijet cross sections, every jet consists of one or two partons, and a jet definition can be implemented. The cone algorithm suffers from uncertainties with $`R_{\mathrm{sep}}`$, which are absent in the $`k_T`$ algorithm used here .
Although jet cross sections are mainly sensitive to the dynamics of the hard subprocess, the measured cross sections will at some level be affected by hadronization. These effects are expected to become smaller when the cross section refers to higher transverse energy jets. We have estimated hadronization effects based on the leading order Monte Carlo models HERWIG 5.9 and PYTHIA 5.7 . The jet cross section for hadrons in the final state was compared to the cross section of the partons produced from leading order matrix elements and parton showers (see Fig. 7). In HERWIG the change in the cross section due to fragmentation was found to be less than 10% in most of the kinematic regime. Only for events with one or more very backward jets ($`\eta ^{jet}<0.5`$) was a more sizeable change observed. For these events the cross section is reduced by up to 40% due to fragmentation. In PYTHIA the reduction of the cross section is much smaller, but shows the same trend. In a related study, presented in , HERWIG 5.9 was used to compare the cross section for the final state hadrons to that for the partons of the leading order matrix elements. The relative difference between these cross sections was found to be less than 20%, except again for events with very backward jets ($`\eta ^{jet}<0.5`$), where the change in the cross section can be as large as 50%. For the three-jet measurement, a study of fragmentation effects using PYTHIA 5.7 was presented in . The three-jet cross section based on the hadrons in the final state was compared to that of the partons produced in the hard subprocess and the parton showers. The cross section for hadrons was found to be approximately 5% lower than that for partons.
The experimental uncertainty on the dijet cross section is dominated by systematic uncertainties up to transverse jet energies of approximately 25 GeV, depending on the angles of the jets. At higher transverse energies statistical uncertainties dominate. The systematic uncertainties are roughly between 10 and 20% . The experimental uncertainty in the three-jet measurement is dominated by systematics up to a three-jet mass of approximately $`100`$ GeV and by statistics at higher masses. Here, the systematic uncertainties are of the order of 20% .
These measurements correspond to luminosities of $`6.3`$ and $`16`$ pb<sup>-1</sup>, respectively. Up to the beginning of 1999 the HERA experiments have each collected around $`50`$ pb<sup>-1</sup>. When these data are used to repeat the discussed measurements, it will be possible to reduce the statistical uncertainties significantly and to extend the measurement to higher transverse energies and masses. Moreover, it is likely that the increase in statistics can be exploited to reduce the systematic uncertainties as well. For the dijet analysis, it was estimated that, when using all available data, statistical uncertainties should dominate the measurement only above transverse energies of approximately 50 GeV, again depending on the angles of the jets. In the long term, after the luminosity upgrade planned in the year 2000, HERA is aiming to deliver about $`250`$ pb<sup>-1</sup> of luminosity each year. This will allow for the measurement of jet photoproduction cross sections up to still higher transverse energies and masses.
## 6 Conclusions
We have presented a detailed comparison of three theoretical predictions for LO three-jet and NLO dijet cross sections as measured recently by ZEUS. We found that in general all three calculations agree within the statistical accuracy of the Monte Carlo integration. In certain restricted regions of phase space, the calculations differ by up to 5%. We briefly discussed remaining theoretical and experimental uncertainties and future developments.
## Acknowledgments
Work in the High Energy Physics Division at Argonne National Laboratory is supported by the U.S. Department of Energy, Division of High Energy Physics, under Contract W-31-109-ENG-38. Work at NIKHEF is supported by the Dutch Foundation for Research on Matter (FOM). The FR results were produced by us, and we thank Stefano Frixione for assistance with his computer code.
# Possible scenarios for soft and semi-hard components structure in central hadron–hadron collisions in the TeV region: pseudo-rapidity intervals.
## 1 Introduction
In the first paper of this series (from now on referred to as ‘I’), possible scenarios for collective variables properties in the TeV region have been examined in full phase space. Given that the shoulder structure in $`P_n`$ vs. $`n`$ and the $`H_q`$ vs. $`q`$ oscillations can be interpreted for c.m. energies larger than 200 GeV as the effect of the weighted superposition of soft and semi-hard events, each class being described by a single Pascal (also known as negative binomial) multiplicity distribution (Pa(NB)MD),
$$P_n^{\text{(PaNB)}}(\overline{n},k)=\frac{k(k+1)\cdots (k+n-1)}{n!}\frac{\overline{n}^nk^k}{(\overline{n}+k)^{n+k}}$$
(1)
our approach consisted in finding physically motivated extrapolations of the free parameters of the mentioned distributions, starting from their known behaviour in the GeV energy region. This led us first to consider possible extreme scenarios in the TeV region, which basically fix upper and lower bounds on the allowed variation of the average multiplicity $`\overline{n}_i`$ and of the parameter $`k_i`$, which is linked to the dispersion $`D_i`$ by
$$(D_i^2-\overline{n}_i)/\overline{n}_i^2=1/k_i$$
(2)
Here $`i`$ stands for ‘soft’ or ‘semi-hard’. Accordingly, in scenario 1 we assumed KNO scaling to be valid both for the $`P_{n,\text{soft}}`$ and the $`P_{n,\text{semi-hard}}`$ multiplicity distributions (MDs); consequently, the $`k_{\text{soft}}`$ and $`k_{\text{semi-hard}}`$ parameters were taken to be constant with energy. In scenario 2, KNO scaling is realized for the soft component only ($`k_{\text{soft}}`$ constant with energy) and $`1/k_{\text{semi-hard}}`$ increases linearly in $`\mathrm{ln}(\sqrt{s})`$. Between these two quite extreme possibilities we proposed a third scenario, which in view of the chosen behaviour of the parameters of $`P_{n,\text{semi-hard}}`$ was called a QCD-inspired scenario (the scenarios are described in greater detail in Section 3).
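For reference, eq. (1) is evaluated below in logarithmic form, together with a numerical check of the dispersion relation (2); the parameter values are illustrative.

```python
import numpy as np
from scipy.special import gammaln

def pa_nb(n, nbar, k):
    """Pascal (negative binomial) MD of eq.(1), computed in log form."""
    return np.exp(gammaln(k + n) - gammaln(k) - gammaln(n + 1.0)
                  + n * np.log(nbar / (nbar + k)) + k * np.log(k / (nbar + k)))

n = np.arange(0, 500)
nbar, k = 30.0, 3.5                        # illustrative values
P = pa_nb(n, nbar, k)
mean = np.sum(n * P)
D2 = np.sum(n**2 * P) - mean**2
print((D2 - mean) / mean**2, 1.0 / k)      # the two sides of eq.(2) agree
```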
It is interesting to remark that data on MDs at 1.8 TeV c.m. energy (from the E735 experiment ), when compared with our predictions, are closer to scenario 2, characterised by a huge mini-jet production in the semi-hard component, but go beyond it, showing an even wider MD: assuming these data are confirmed, the observed deviations from the expectations of scenario 2 might very well indicate the onset, in our framework, of new substructures in the total MDs, which we suggested interpreting as probably due to a new species of mini-jets (see I). Our caution on this point was and is motivated by the fact that the mentioned data on MDs in full phase space (f.p.s.) at lower c.m. energies show systematic differences with respect to the UA5 data on which our general scheme for defining scenarios 1 and 2 is based. That is the reason why we decided to maintain this scheme in extending our previous work from f.p.s. to pseudo-rapidity intervals. Once the data of the E735 experiment are consolidated, it will not be hard to adapt our approach to the new experimental framework.
## 2 $`P_n`$ vs $`n`$ behaviour in pseudo-rapidity intervals in the GeV and TeV energy domains
In going from full phase space (f.p.s.) to pseudo-rapidity ($`\eta `$) intervals, our main concern is to be consistent with the scenarios explored in f.p.s., and extend them.
In f.p.s., the quadratic growth (in $`\mathrm{ln}\sqrt{s}`$) of the total average multiplicity was attributed to the growing contribution of semi-hard events. Notice that semi-hard events are defined by the presence of mini-jets or jets in the final state, irrespective of the pseudo-rapidity interval under consideration. It must be stressed that only after this classification of events has been carried out do we look at phase space intervals: thus the description of the total MD, $`P_n(\eta _c,\sqrt{s})`$, in terms of a weighted superposition of two multiplicity distributions holds in $`\eta `$ intervals with the same weighting factor as in f.p.s., namely $`\alpha _{\text{soft}}`$, a function of energy only and not of $`\eta _c`$ (the $`\eta _c`$ dependence comes from the $`\overline{n}`$ and $`k`$ parameters), i.e.:
$$\begin{array}{cc}\hfill P_n(\eta _c,\sqrt{s})=& \alpha _{\text{soft}}(\sqrt{s})P_n^{\text{(PaNB)}}(\overline{n}_{\text{soft}}(\eta _c,\sqrt{s}),k_{\text{soft}}(\eta _c,\sqrt{s}))+\hfill \\ & \left(1-\alpha _{\text{soft}}(\sqrt{s})\right)P_n^{\text{(PaNB)}}(\overline{n}_{\text{semi-hard}}(\eta _c,\sqrt{s}),k_{\text{semi-hard}}(\eta _c,\sqrt{s}))\hfill \end{array}$$
(3)
precisely as in eq. (I.2).
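As an aside for readers who want to reproduce the curves, eqs. (1) and (3) are straightforward to evaluate numerically. The following is a minimal Python sketch we add for illustration; the parameter values are invented placeholders, not the fitted entries of table 1.

```python
import numpy as np
from scipy.special import gammaln

def pa_nb(n, nbar, k):
    """Pa(NB)MD of eq. (1), evaluated in log space for numerical stability."""
    n = np.asarray(n, dtype=float)
    log_p = (gammaln(n + k) - gammaln(k) - gammaln(n + 1)
             + n * np.log(nbar / (nbar + k)) + k * np.log(k / (nbar + k)))
    return np.exp(log_p)

def total_md(n, alpha_soft, nbar_s, k_s, nbar_sh, k_sh):
    """Weighted superposition of soft and semi-hard components, eq. (3)."""
    return (alpha_soft * pa_nb(n, nbar_s, k_s)
            + (1 - alpha_soft) * pa_nb(n, nbar_sh, k_sh))

n = np.arange(0, 300)
# Illustrative parameters only (not the values of table 1):
p = total_md(n, alpha_soft=0.7, nbar_s=24.5, k_s=7.0, nbar_sh=64.0, k_sh=13.0)
print(p.sum())          # ~1: normalization
print((n * p).sum())    # total average multiplicity, cf. eq. (6)
```

Note that the two components enter with the same energy-dependent weight $`\alpha _{\text{soft}}`$ in every $`\eta `$ interval, as stressed above.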
In this paper, we will be concerned with symmetric pseudo-rapidity intervals $`[-\eta _c,\eta _c]`$, with $`1\le \eta _c\le 3`$. The joining of these intervals to f.p.s. is assumed to be smooth.
### 2.1 Average multiplicity in pseudo-rapidity intervals
In f.p.s. (see I, Section 2.1), it was assumed that each component has an average multiplicity which grows linearly with $`\mathrm{ln}\sqrt{s}`$:
$$\overline{n}_{\text{soft}}(\sqrt{s})=-5.54+4.72\mathrm{ln}(\sqrt{s})$$
(I.3)
$$\overline{n}_{\text{semi-hard}}(\sqrt{s})\approx 2\overline{n}_{\text{soft}}(\sqrt{s})$$
(I.4.A)
Since the width of available phase space also grows linearly with $`\mathrm{ln}\sqrt{s}`$, we find that the simplest way to be consistent with our assumptions is to say that the single particle density must show an energy independent plateau around $`\eta =0`$ which extends some units in each direction (a plateau of this size is found in experimental data at UA5 energies for the full distribution: its height increases with c.m. energy indicating violation of Feynman scaling.)
Numerically, we fix the height $`\overline{n}_0`$ of the soft and semi-hard plateaus again respecting the result of in the investigation of UA5 data:
$$\overline{n}_{0,\text{soft}}\approx 2.45,\overline{n}_{0,\text{semi-hard}}\approx 6.4$$
(4)
and
$$\overline{n}_i(\eta _c)=2\overline{n}_{0,i}\eta _c(i=\text{soft,semi-hard})$$
(5)
Accordingly, from eq. (3)
$$\overline{n}_{\text{total}}(\eta _c,\sqrt{s})=\alpha _{\text{soft}}(\sqrt{s})\overline{n}_{\text{soft}}(\eta _c)+\left(1-\alpha _{\text{soft}}(\sqrt{s})\right)\overline{n}_{\text{semi-hard}}(\eta _c)$$
(6)
Notice that the semi-hard component is more than twice the soft component, and the value 2.45 for the soft component is compatible with low energy data (e.g., ISR data), where only the soft component is present.
There are no compelling physical reasons to assume that also the semi-hard component has an energy independent plateau. Indeed a logarithmic growth of the plateau with c.m. energy is compatible with a second possibility that was considered in I for the growth of $`\overline{n}_{\text{semi-hard}}`$:
$$\overline{n}_{\text{semi-hard}}(\sqrt{s})\approx 2\overline{n}_{\text{soft}}(\sqrt{s})+0.1\mathrm{ln}^2(\sqrt{s})$$
(I.4.B)
In the simplest approach where one neglects energy variations of $`d\overline{n}/d\eta `$ at the boundary of phase space, a parameterisation of the growth numerically compatible with eq. (I.4.B) is
$$\overline{n}_{0,\text{semi-hard}}\approx 6.3+0.07\mathrm{ln}\sqrt{s}$$
(7)
the effect of which in the 1–20 TeV range is in complete agreement with the f.p.s. results. Therefore we limit ourselves to showing figures only for the case of linear $`\overline{n}_{\text{semi-hard}}`$, mentioning the differences in the text below when relevant. We postpone to future work the discussion of the case in which the particle density varies at the boundary of phase space.
### 2.2 Dispersion in pseudo-rapidity intervals
We now examine the width of the multiplicity distribution; to this end, we use the parameter $`k`$ as defined in eq. (2). In particular, we have the following relation:
$$\overline{n}_{\text{total}}^2\left(1+\frac{1}{k_{\text{total}}}\right)=\alpha _{\text{soft}}\overline{n}_{\text{soft}}^2\left(1+\frac{1}{k_{\text{soft}}}\right)+\left(1-\alpha _{\text{soft}}\right)\overline{n}_{\text{semi-hard}}^2\left(1+\frac{1}{k_{\text{semi-hard}}}\right)$$
(8)
obtained from eqs. (3) and (6) (for brevity, the dependence on $`\eta _c`$ and $`\sqrt{s}`$ has been omitted in this formula). The behaviour of $`k_{\text{soft}}`$ and $`k_{\text{semi-hard}}`$ is indeed of great importance in our subsequent discussion for at least three reasons. Firstly, in view of $`k`$’s relationship with the two-particle correlation function $`C_2(\eta _1,\eta _2;\sqrt{s})`$ :
$$k^{-1}(\eta _c;\sqrt{s})=\frac{1}{\overline{n}^2(\eta _c;\sqrt{s})}\int _{-\eta _c}^{\eta _c}C_2(\eta _1,\eta _2;\sqrt{s})d\eta _1d\eta _2$$
(9)
$`k_{\text{soft}}`$ and $`k_{\text{semi-hard}}`$ control two-particle correlation properties of the two components. Secondly, clan structure parameters, $`\overline{N}_i(\eta _c,\sqrt{s})`$ and $`\overline{n}_{c,i}(\eta _c,\sqrt{s})`$ ($`i`$ = soft, semi-hard), are defined for each component in terms of $`\overline{n}_i`$ and $`k_i`$ as follows :
$$\overline{N}_i(\eta _c,\sqrt{s})=k_i(\eta _c,\sqrt{s})\mathrm{ln}\left(1+\overline{n}_i(\eta _c)/k_i(\eta _c,\sqrt{s})\right);$$
$$\overline{n}_{c,i}(\eta _c,\sqrt{s})=\overline{n}_i(\eta _c)/\overline{N}_i(\eta _c,\sqrt{s})$$
(10)
It should be pointed out that the above definition is valid for a single Pa(NB)MD only: as explained in , clans cannot be defined for the total MD which, being the superposition of two (or possibly more) Pa(NB)MDs with in general different parameters, is not of Pa(NB)MD type. The third reason is a consequence of eq. (10), which allows us to interpret $`1/k_i`$ for a single Pa(NB)MD only:
$$k_i^{-1}=\frac{𝙿_i(1;2)}{𝙿_i(2;2)}$$
(11)
where $`𝙿_i(N;m)`$ is the probability to have $`m`$ particles belonging to $`N`$ clans . Therefore any assumption or result on the energy or pseudo-rapidity dependence of $`k_i`$ has its counterpart in all above mentioned frameworks.
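Since $`\overline{N}_i`$ and $`\overline{n}_{c,i}`$ recur throughout the discussion below, a small Python sketch of eqs. (2) and (10) may be useful; the input numbers are hypothetical placeholders, not the entries of table 1.

```python
import numpy as np

def clan_parameters(nbar, k):
    """Average number of clans and particles per clan, eq. (10)."""
    N_clans = k * np.log(1.0 + nbar / k)
    n_per_clan = nbar / N_clans
    return N_clans, n_per_clan

def dispersion(nbar, k):
    """Dispersion D from eq. (2): (D^2 - nbar)/nbar^2 = 1/k."""
    return np.sqrt(nbar + nbar**2 / k)

# Placeholder soft-component values in a fixed eta interval:
nbar, k = 9.8, 3.5
print(clan_parameters(nbar, k))   # (N_clans, n_per_clan)
print(dispersion(nbar, k))
```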
The soft component is taken to have $`1/k`$ constant with energy for each $`\eta `$ interval, but variable with the width of the interval. In low energy experimental data, $`1/k`$ is not constant but KNO scaling holds: in view of the growing value of $`\overline{n}`$, KNO scaling implies at high energies that $`1/k`$ reaches a constant value, which we infer from the highest energy data point in reference . For the actual numbers, see table 1; the behaviour of all the relevant Pa(NB)MD parameters is shown in Figures 1, 2 and 3. We notice that $`1/k`$ decreases slowly as the width of the $`\eta `$ interval increases: as the interval gets larger, there is less aggregation. Particles generated by new clans fill the growing interval faster than those generated by old clans. Accordingly the linear growth of the average number of clans $`\overline{N}`$ is faster than the increase in the average number of particles per clan, $`\overline{n}_c`$. This behaviour also implies that, since clans have a large extension in pseudo-rapidity, long range correlations become important.
In summary, for the soft component both $`\overline{n}`$ and $`k`$ are constant with energy: so are the clan parameters. $`\overline{n}`$, $`\overline{N}`$ and $`\overline{n}_c`$ all grow with $`\eta _c`$, while $`1/k`$ decreases.
## 3 The three scenarios in pseudo-rapidity intervals
For the semi-hard component, since in f.p.s. we devised three scenarios with a different variation of the dispersion of the multiplicity distribution with energy, we use the fact that low energy experimental results for the dispersion show the same energy behaviour in $`\eta `$ intervals as in f.p.s., and extend the f.p.s. behaviour to pseudo-rapidity intervals.
### 3.1 Scenario 1
The first scenario is characterized by a Feynman scaling and KNO scaling semi-hard component, just as for the soft component, but with different values for the parameters, see table 1 and the leftmost column of Figure 1: the average multiplicity is more than double, and $`1/k`$ is smaller, so that even with more particles there is less aggregation. In this case, the total distribution’s $`k_{\text{total}}`$ parameter is given by the superposition formula, Eq. (8).
It is interesting to notice in connection with this scenario that correlations increase due to the superposition of events of different type, both with smaller correlations, as
$$1/k_{\text{total}}>1/k_{\text{soft}}>1/k_{\text{semi-hard}}$$
(12)
This is an example of the situation examined in detail in : these enhanced correlations are a consequence of the fluctuations in single particle densities (due to the superposition of events with different average multiplicity), superimposed to “genuine” two-particle correlations.
Furthermore, the behaviour of $`1/k_{\text{total}}`$ with energy is peculiar in that it first increases (up to about 1 TeV) then decreases: the maximum is rather wide, resulting in an accidental KNO scaling behaviour for c.m. energy $`0.5\lesssim \sqrt{s}\lesssim 1.8`$ TeV.
The KNO scaling behaviour of the total MD is unexpected because although we are superimposing two KNO scaling distributions, we are doing it with an energy dependent weight parameter. Of course scaling behaviour is expected both at low energy (only the soft component is present) and at very high energy, because in this simple picture only the semi-hard component is present.
The energy independence of $`\overline{n}`$ and $`k`$ in fixed $`\eta `$ intervals is contrasted with their energy dependence in f.p.s. in terms of clan parameters in the leftmost column in Figures 2 and 3.
The effect of a quadratic growth of $`\overline{n}_{\text{semi-hard}}`$ with energy is to increase slightly (around 10%) the value of $`1/k_{\text{total}}`$, compatibly with what we have seen in f.p.s.
### 3.2 Scenario 2
For the second scenario we choose to violate KNO scaling by making $`1/k_{\text{total}}`$ continue to grow with energy as it does up to UA5 energies with a linear behaviour in $`\mathrm{ln}\sqrt{s}`$ as given in table 1, where the parameters $`a`$ and $`b`$ have been fitted to experimental data from ISR to UA5. Notice that it appears that the best slope $`b`$ is the same for each interval. $`1/k_{\text{semi-hard}}`$ is then obtained using eq. (8): it also grows approximately linearly with c.m. energy, and decreases rapidly as $`\eta _c`$ increases (see Figure 1, central column). In particular, above 1 TeV, $`1/k_{\text{semi-hard}}`$ becomes larger than $`1/k_{\text{soft}}`$: this implies that correlations are much larger in the semi-hard events than in the soft events; because in both cases $`k<\overline{n}`$, this is probably due again to fluctuations in single particle distribution (the semi-hard component has indeed larger fluctuations in multiplicity).
At the same time, the average number of clans is seen to decrease very rapidly with energy for the semi-hard component, while $`\overline{n}_c`$ is seen to increase with energy; it also increases with $`\eta _c`$, and the increase is faster when the energy is higher (Figures 2 and 3).
The effect of a quadratic growth of $`\overline{n}_{\text{semi-hard}}`$ with energy is to decrease slightly (less than 10%) the value of $`1/k_{\text{semi-hard}}`$, compatibly with what we have seen in f.p.s.
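To make the procedure of this scenario concrete, a short sketch of the inversion of eq. (8) is given below: given the fitted linear behaviour of $`1/k_{\text{total}}`$ and the soft-component parameters, it returns $`1/k_{\text{semi-hard}}`$. All numerical values here are placeholders, not the fitted parameters $`a`$ and $`b`$ of table 1.

```python
import math

def inv_k_semihard(alpha_s, nbar_s, k_s, nbar_sh, inv_k_tot):
    """Solve eq. (8) for 1/k_semi-hard, given 1/k_total and the
    remaining component parameters (all at fixed eta_c and sqrt(s))."""
    nbar_tot = alpha_s * nbar_s + (1 - alpha_s) * nbar_sh   # eq. (6)
    lhs = nbar_tot**2 * (1.0 + inv_k_tot)
    soft = alpha_s * nbar_s**2 * (1.0 + 1.0 / k_s)
    return (lhs - soft) / ((1 - alpha_s) * nbar_sh**2) - 1.0

for sqrt_s in (900.0, 1800.0, 14000.0):                     # GeV
    inv_k_tot = 0.1 + 0.02 * math.log(sqrt_s)               # placeholder a + b ln(sqrt(s))
    print(sqrt_s, inv_k_semihard(0.7, 9.8, 3.5, 12.8, inv_k_tot))
```

The output grows with energy, mirroring the approximately linear rise of $`1/k_{\text{semi-hard}}`$ described above.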
### 3.3 Scenario 3
In the third scenario we chose a QCD-inspired shape, whose behaviour turns out to be intermediate between scenarios 1 and 2: it starts growing with energy but asymptotically (well above the energy range we consider here) tends to a constant value:
$$\frac{1}{k_{\text{semi-hard}}}=C+\frac{D}{\sqrt{\mathrm{ln}(\sqrt{s}/10)}}$$
(13)
Again the values of the parameters for each interval are given in table 1: they were chosen to be compatible with the 900 GeV points and to lead to an intermediate value of $`1/k`$. The general behaviour of the Pa(NB)MD parameters is shown in the rightmost column of Figures 1, 2 and 3.
While the behaviour of the parameters for the semi-hard component is qualitatively similar to that of scenario 2, the behaviour for the total MD is qualitatively similar to that of scenario 1. Indeed, the increase of $`1/k`$ for the semi-hard with c.m. energy is not as fast as in scenario 2, and $`1/k`$ is smaller in this case, so this leads, for the total distribution, to a broad maximum in the energy range 2-10 TeV, which implies KNO scaling. The decrease from the maximum is slower than in scenario 1, and this accidental KNO scaling appears at higher energies.
The effect of a quadratic growth of $`\overline{n}_{\text{semi-hard}}`$ with energy is to increase slightly (around 10%) the value of $`1/k_{\text{total}}`$, compatibly with what we have seen in f.p.s.
## 4 Comments on the three scenarios
It is quite clear that in scenario 1 both soft and semi-hard components show wide self-similarity regions : the parameters $`k_{\text{soft}}`$ and $`k_{\text{semi-hard}}`$ vary very little from one pseudo-rapidity interval to another. This is a quite strong point, which can easily be tested by using a Pa(NB)MD with a fixed $`k`$ (soft or semi-hard) parameter (determined by data in a small domain of rapidity space) as a “microscope”: by slowly enlarging the initial domain in rapidity one can explore up to which interval MDs are described by Pa(NB)MDs with the same initial $`k`$. This exercise will tell us that in that region two-particle correlations are dominant and that they vary according to the normalization $`\overline{n}_{\text{semi-hard}}^2(\eta _c)`$ only. It should also be noticed that the fact that the $`k`$ parameter is energy independent in a fixed rapidity interval and varies very little from one interval to another has important consequences for $`\overline{N}_{\text{soft}}`$ and $`\overline{N}_{\text{semi-hard}}`$ (see Fig. 2): they do not vary with energy in a fixed rapidity interval and only very slowly by increasing the rapidity interval; their growth with energy in full phase space is due to the growth of the average number of particles with a constant $`k`$ parameter.
In scenario 2 the soft component shows of course for all parameters the same behaviour seen in the other scenarios. The interest here is in the semi-hard component structure and in its difference from that of scenario 1. $`\overline{N}_{\text{semi-hard}}`$ in scenario 2 decreases very fast as the c.m. energy increases; this trend should be compared with that of the semi-hard component in scenario 1, where $`\overline{N}_{\text{semi-hard}}`$ is an increasing function of c.m. energy in f.p.s. and is constant in different pseudo-rapidity intervals. Accordingly, $`\overline{n}_c`$ (see Fig. 3) grows very fast with c.m. energy in scenario 2; it grows very slowly in f.p.s. and is constant with energy in pseudo-rapidity intervals in scenario 1. These completely different clan structure behaviours when KNO and Feynman scaling are satisfied (scenario 1) and violated (scenario 2) have an interesting interpretation.
Newly created particles of the semi-hard component in scenario 1, since their aggregation power ($`1/k_{\text{semi-hard}}`$) is quite limited and energy independent, give origin to clans whose average number of particles is an energy independent quantity in rapidity intervals (very slowly growing with the extension of the interval) and gently increasing from $`\approx `$ 2.5 at 1 TeV to $`\approx `$ 3 at 15 TeV in f.p.s. In scenario 2, as the energy increases, newly created particles not only continue to aggregate into the existing clans in view of the large value of $`1/k_{\text{semi-hard}}`$ (if only this occurred, $`\overline{N}_{\text{semi-hard}}`$ would be an energy independent quantity, a situation which could be true in scenario 2 for the semi-hard component only asymptotically), but in addition $`\overline{N}_{\text{semi-hard}}`$ starts to decrease in the TeV region, i.e., the aggregation now involves the clans themselves. Clan aggregation into “super-clans” is an unexpected new phenomenon, which occurs in all rapidity intervals and is less pronounced for pseudo-rapidity intervals of smaller size. For energies much higher than 10 TeV clan aggregation stops ($`\overline{N}_{\text{semi-hard}}`$ constant) and the newly created particles continue to go into the existing super-clans. Scenario 3 confirms for the semi-hard component its main peculiarity of having properties intermediate between those of the semi-hard components in scenarios 1 and 2.
In order to complete our study, in Figure 4 we show a comparison of the multiplicity distributions for the small interval $`\eta _c=1`$ at c.m. energies 1.8 TeV and 14 TeV. (The figure should also be compared with the one shown in paper I in full phase space.)
We notice here how the semi-hard component becomes dominant at 14 TeV c.m. energy, although in the low multiplicity part of the distributions the soft component is almost as large as the semi-hard one. These graphs confirm the behaviour already seen in f.p.s.
From the analysis of these figures, we conclude that the interesting phenomenon which clan structure analysis allowed us to see does not show up clearly in the multiplicity distributions themselves. This fact points out that in order to distinguish the different scenarios, one should look at the $`1/k`$ parameters and the related clan structure analysis; in particular, the different behaviour between scenarios 1 and 2 is striking. In principle it should be measurable already at the Tevatron. Scenario 3 is different, as its parameters, due to the lack of precise QCD calculations on the matter, are more flexible and can be adjusted to get close to either one of the other scenarios.
## 5 Summary
We studied possible scenarios for the soft and semi-hard component structures in central hadron-hadron collisions in the TeV region in symmetric pseudo-rapidity intervals. The paper is the natural extension of previous work on hadron-hadron collisions in full phase space and has a twofold motivation. Firstly, in order to understand the dynamics of multiparticle production and the related correlations, it is important to study their behaviour in regions where the role of conservation laws is negligible. Secondly, future particle accelerators in the TeV energy domain are expected not to be equipped with full acceptance detectors; unfortunately, they will explore limited sectors of full phase space only. Accordingly, the contact of theoretical expectations with experiments in the next decade should be looked for in pseudo-rapidity intervals and not in full phase space. Since we wanted to avoid complications due to the presence of the dip around zero pseudo-rapidity, we considered intervals greater than one unit in rapidity; and in order to be sure that the influence of conservation laws is small, we fixed the upper bound of our intervals at three units in pseudo-rapidity on both sides of the origin.
The selected intervals are therefore wide enough (they extend up to six units in rapidity) to allow significant predictions, and are chosen in regions neither too small nor too close to the borders of phase space, so as to guarantee results unaffected by the problems mentioned above. Following paper I, and supported by data at lower energies, our main assumption has been that soft and semi-hard events are described by Pascal (NB) multiplicity distributions; in addition, the soft event fraction has been taken pseudo-rapidity (interval) independent and varying with the center of mass energy only. Total multiplicity distributions are the result of the weighted superposition of the two above mentioned more elementary substructures. In our picture, single-particle densities develop an energy independent central plateau for the soft and semi-hard components, and their difference is limited to the heights of the corresponding two plateaus. The joining to full phase space is taken to be smooth for simplicity, leaving more complex situations for future work.
The soft component structure is fully characterized by a $`k_{\text{soft}}`$ Pa(NB)MD parameter which is constant with energy for each pseudo-rapidity interval but varies with its width, i.e., it is characterized by Feynman and KNO scaling behaviour.
Three possible scenarios are discussed for the semi-hard component. Scenario 1 has Feynman and KNO scaling as for the soft component, but with different values of the parameters ($`1/k_{\text{semi-hard}}`$ is energy independent and varies very little with pseudo-rapidity intervals). In scenario 2 KNO scaling is violated and the width of the total multiplicity distribution grows linearly as $`\mathrm{ln}\sqrt{s}`$ ($`1/k_{\text{semi-hard}}`$ increases quickly with energy and decreases with pseudo-rapidity intervals). In scenario 3 the slope of $`1/k_{\text{semi-hard}}`$ is suggested by QCD: it grows initially with energy but asymptotically (well above the extreme values of the abscissa allowed in figure 1) it tends to a constant value; its increase with energy is not as fast as in scenario 2, but its decrease with pseudo-rapidity intervals is quite similar.
In conclusion, $`k_{\text{soft}}`$ shows wide self-similarity regions and the average number of (soft) clans in a fixed rapidity interval is an energy independent quantity in all three scenarios. In scenario 1, $`k_{\text{semi-hard}}`$ behaves as $`k_{\text{soft}}`$ but has a larger value; since the average number of particles in the semi-hard sector is larger than in the soft one, clans of the semi-hard component are more numerous than clans of the soft component and have a larger number of particles per clan. In scenario 2 self-similarity appears only as an asymptotic property for the semi-hard component; the average number of clans decreases with energy and as the pseudo-rapidity interval decreases, but the average number of particles per clan becomes quite large as the energy increases. Scenario 3 has predictions which are —as expected— intermediate between the previous two.
Of course, the final word now belongs to experiments. They will determine which one of the discussed possibilities is closest to the real world. CDF can help in this direction. It is a fact that one should expect the dominance of the semi-hard component structure as the energy increases also in pseudo-rapidity intervals, i.e., huge mini-jet production is also here the main characteristic in the new region, as shown explicitly by MDs in a fixed pseudo-rapidity interval at different energies in Figure 4.
In addition, the semi-hard component behaviour has a suggestive interpretation in all scenarios in terms of its clan properties. In scenario 1 one notices numerous clan production of nearly equal size as the energy increases in all rapidity intervals. In scenario 2, in all pseudo-rapidity intervals, the aggregation of newly created particles into existing clans is followed by the aggregation of the clans themselves into super-clans (a new species of (mini)-jets), whose average number becomes nearly constant at asymptotic energies. This is an interesting phenomenon, resembling a phase transition in the clan production mechanism. Notice that the average number of clans is higher in larger rapidity intervals and its decrease with energy favours stronger long range correlations.
Scenario 3 leads to predictions which are —as usual— intermediate between the previous two extreme situations but, in view of its flexibility, can be modified in both directions as long as QCD does not provide new constraints on our formulae.
## 6 Acknowledgements
This work was supported in part by M.U.R.S.T. (under Grant 1997). R. U. would like to acknowledge the financial support of the Portuguese Ministry of Science and Technology via the “Sub-Programa Ciência e Tecnologia do $`2^o`$ Quadro Comunitário de Apoio.”
# Information-Theoretic Limits of Control
## Abstract
Fundamental limits on the controllability of physical systems are discussed in the light of information theory. It is shown that the second law of thermodynamics, when generalized to include information, sets absolute limits to the minimum amount of dissipation required by open-loop control. In addition, an information-theoretic analysis of closed-loop control shows feedback control to be essentially a zero sum game: each bit of information gathered directly from a dynamical system by a control device can serve to decrease the entropy of that system by at most one bit additional to the reduction of entropy attainable without such information (open-loop control). Consequences for the control of discrete binary systems and chaotic systems are discussed.
Information and uncertainty represent complementary aspects of control. Open-loop control methods attempt to reduce our uncertainty about system variables such as position or velocity, thereby increasing our information about the actual values of those variables. Closed-loop methods obtain information about system variables, and use that information to decrease our uncertainty about the values of those variables. Although the literature in control theory implicitly recognizes the importance of information in the control process, information is rarely regarded as the central quantity of interest . In this Letter we address explicitly the role of information and uncertainty in control processes by presenting a novel formalism for analyzing these quantities using techniques of statistical mechanics and information theory. Specifically, based on a recent proposal by Lloyd and Slotine , we formulate a general model of control and investigate it using entropy-like quantities. This allows us to make mathematically precise each part of the intuitive statement that in a control process, information must constantly be acquired, processed and used to constrain or maintain the trajectory of a system. Along this line, we prove several limiting results relating the ability of a control device to reduce the entropy of an arbitrary system in the cases where (i) such a controller acts independently of the state of the system (open-loop control), and (ii) the control action is influenced by some information gathered from the system (closed-loop control). The results are applied both to the stochastic example of coupled Markovian processes and to the deterministic example of chaotic maps. These results not only combine concepts of dynamical entropy and information in a unified picture, but also prove to be fundamental in that they represent the ultimate physical limitations faced by any control systems.
The basic framework of our present study is the following. We assign to the physical plant $`𝒳`$ we want to control a random variable $`X`$ representing its state vector (of arbitrary dimension), whose value $`x`$ is drawn according to a probability distribution $`p(x)`$. Physically, this probabilistic or ensemble picture may account for interactions with an unknown environment, noisy inputs, or unmodelled dynamics; it can also be related to a deterministic sensitivity to some parameters which makes the system effectively stochastic. The recourse to a statistical approach then allows the treatment of both the unexpectedness of the control conditions and the dynamical stochastic features as two faces of a single notion: uncertainty.
As is well known, a suitable measure quantifying uncertainty is entropy . For a classical system with a discrete set of states with probability mass function $`p(x)`$, it is expressed as
$$H(X)\equiv -\sum _xp(x)\mathrm{log}p(x),$$
(1)
(all logarithms are assumed to the base $`2`$ and the entropy is measured in bits). Other similar expressions also exist for continuous state systems (fine-grained entropy), quantum systems (von Neumann entropy), and coarse-grained systems obtained by discretization of continuous densities in the phase space by means of a finite partition. In all cases, entropy offers a precise measure of disorderliness or missing information by characterizing the minimum amount of resources (bits) required to encode unambiguously the ensemble describing the system . As for the time evolution of these entropies, we know that the fine-grained (or von Neumann) entropy remains constant under volume-preserving (unitary) evolution, a property closely related to a corollary of Landauer’s principle which asserts that only one-to-one mappings of states, i.e., reversible transformations preserving information, are exempt from dissipation. Coarse-grained entropies, on the other hand, usually increase in time even in the absence of noise. This is due to the finite nature of the partition used in the coarse-graining which, in effect, blurs the divergence of sufficiently close trajectories, thereby inducing a “randomization” of the motion. For many systems, the typical average rate of this increase is given by a dynamical invariant known as the Kolmogorov-Sinai entropy .
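As a concrete anchor for what follows, here is a minimal Python sketch of the discrete entropy of eq. (1); the example distributions are arbitrary.

```python
import numpy as np

def shannon_entropy(p):
    """H(X) = -sum_x p(x) log2 p(x), in bits, as in eq. (1)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                    # 0 log 0 = 0 by convention
    return -np.sum(p * np.log2(p))

print(shannon_entropy([0.5, 0.5]))  # 1.0 bit: maximal uncertainty for 2 states
print(shannon_entropy([1.0, 0.0]))  # 0.0 bits: state perfectly specified
```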
In this context, we now address the problem of how a control device can be used to reduce the entropy of a system or to immunize it from sources of entropy, in particular those associated with noise, motion instabilities, incomplete specification of states, and initial conditions. Although the problem of controlling a system requires more than limiting its entropy, the ability to limit entropy is a prerequisite to control. Indeed, the fact that a control process is able to localize a system in definite stable states or trajectories simply means that the system can be constrained to evolve into states of low entropy starting from states of high entropy.
To illustrate, in its most simple way, how the entropy of a system can be affected by external systems, let us consider a basic model consisting of our system $`𝒳`$ coupled to an environment $`ℰ`$. For simplicity, and without loss of generality, we assume that the states of $`𝒳`$ form a discrete set. The initial state is again distributed according to $`p(x)`$, and the effect of the environment is taken into account by introducing a perturbed conditional distribution $`p(x^{\prime }|e)`$, where $`x^{\prime }`$ is a value of the state later in time and $`e`$ a particular realization of the stochastic perturbation, appearing with probability $`p(e)`$. For each value $`e`$, we assume that $`𝒳`$ undergoes a unique evolution, referred to here as a subdynamics, taken to be entropy conserving in analogy with the Hamiltonian time evolution of a continuous physical system:
$$H(X^{\prime }|e)\equiv -\sum _{x^{\prime }}p(x^{\prime }|e)\mathrm{log}p(x^{\prime }|e)=H(X).$$
(2)
After the time transition $`X\to X^{\prime }`$, the distribution $`p(x^{\prime })`$ is obtained by tracing out the variables of the environment, and is used to calculate the change of the entropy $`H(X^{\prime })=H(X)+\mathrm{\Delta }H`$. From the concavity property of entropy, it can be easily shown that $`\mathrm{\Delta }H\ge 0`$, with equality if and only if (iff) the state $`ℰ`$ is perfectly specified, i.e., if a value $`e`$ appears with probability one. In practice, however, the environment degrees of freedom are uncontrollable and the uncertainty associated with the environment coupling can be suppressed by “updating” somehow our knowledge of $`X`$ after the evolution. One direct way to reveal that state is to imagine a measurement apparatus $`𝒜`$ coupled to $`𝒳`$ in such a way that the dynamics of the composed system $`𝒳+ℰ`$ is left unaffected. For this measurement scheme, the outcome of some discrete random variable $`A`$ of the apparatus is described by a conditional probability matrix $`p(a|x^{\prime })`$ and the marginal $`p(a)`$, from which we can derive $`H(X^{\prime }|A)\le H(X^{\prime })`$ with equality iff $`A`$ is independent of $`X`$ . In this last inequality we have used $`H(X^{\prime }|A)\equiv \sum _aH(X^{\prime }|a)p(a)`$, and $`H(X^{\prime }|a)`$ given similarly as in Eq.(2).
Now, upon the application of the measurement, one can define the reduction of entropy of the system conditionally on the outcome of $`A`$ by $`\mathrm{\Delta }H_A\equiv H(X^{\prime }|A)-H(X)`$, which, obviously, satisfies $`\mathrm{\Delta }H_A\le \mathrm{\Delta }H`$, and $`H(A)\ge \mathrm{\Delta }H-\mathrm{\Delta }H_A`$. In other words, the decrease in the entropy of $`𝒳`$ conditioned on the state of $`𝒜`$ is compensated for by the increase in entropy of $`𝒜`$. This latter quantity represents information that $`𝒜`$ possesses about $`𝒳`$. Accordingly, the entropy of $`X`$ given $`A`$ plus the entropy of $`A`$ is nondecreasing, which is an expression of the second law of thermodynamics as applied to interacting systems . In a similar line of reasoning, Schack and Caves showed that some classical and quantum systems can be termed “chaotic” because of their exponential sensitivity to perturbation, by which they mean that the minimal information $`H(A)`$ needed to keep $`\mathrm{\Delta }H_A`$ below a tolerable level grows exponentially in time in comparison to the entropy reduction $`\mathrm{\Delta }H-\mathrm{\Delta }H_A`$.
It must be stressed that the reduction of entropy of $`𝒳`$ discussed so far is conditional on the outcome of $`A`$. By assumption, $`𝒳`$ is not affected by $`𝒜`$; as a result, according to an observer who does not know this outcome, the entropy of $`𝒳`$ is unchanged. In order to reduce entropy for all observers unconditioned on the state of any external systems, a direct dynamical action on $`𝒳`$ must be established externally by a controller $`𝒞`$ whose influence on the system is represented by a set of control actions $`x\stackrel{c}{\to }x^{\prime }`$ triggered by the controller’s state $`c`$. Mathematically, these actions can be modelled by a probability transition matrix $`p(x^{\prime }|x,c)`$ giving the probability that the system in state $`x`$ goes to state $`x^{\prime }`$ given that the controller is in state $`c`$. The specific form of this actuation matrix will in general depend on the subdynamics envisaged in the control process: some of the actions, for example, may correspond to control strategies forcing several initial conditions to a common stable state, in which case the corresponding subdynamics is entropy decreasing. Others can model uncontrolled transitions perturbed by external or internal noise leading to “fuzzy” actuation rules which increase the entropy of the system. Hence, the systems $`𝒳`$ and $`𝒞`$ need not in general model a closed system; $`𝒳`$, as we already noted, can also be affected by external systems (e.g., environment) on which one usually has no control. However, formally speaking, one can always embed any open-system evolution in a higher dimensional closed system whose dynamics mimics a Hamiltonian system. This can be done by supplementing an open system with a set of ancillary variables acting as an environment $`ℰ`$ in order to construct a global volume-preserving transition matrix such that, when the ancillary variables are traced out, the reduced transition matrix reproduces the dynamics of the system $`𝒳+𝒞`$.
Note that these ancillary variables thus introduced need not have any physical significance: they are only there for the purpose of simplifying the analysis of the evolution of the system. In particular, no control can be achieved through the choice of $`ℰ`$. Within our model, the control of the system $`𝒳`$ can only be assured by the choice of the control $`C`$ whereby we can force an ensemble of transitions leading the system to a net entropy change $`\mathrm{\Delta }H`$. Since the overall dynamics of the system, controller and environment is Hamiltonian, Landauer’s principle immediately implies that if the controller is initially uncorrelated with the system (open-loop control), a decrease in entropy $`\mathrm{\Delta }H`$ for the system must be compensated for by an increase in entropy of at least $`\mathrm{\Delta }H`$ for the controller and the environment . Furthermore, using again the concavity property of $`H`$, it can be shown that the maximum decrease of entropy achieved by a particular subdynamics of control variable $`\widehat{c}`$ is always optimal in the sense that no probabilistic mixture of the control parameter can improve upon that decrease. Explicitly, we have the following theorem (we omit the proof which follows simply from the concavity property.)
Theorem 1.—For open-loop control, the maximum value of $`\mathrm{\Delta }H`$ can always be attained for a pure choice of the control variable, i.e., with $`p(\widehat{c})=1`$ and $`p(c)=0`$ for all $`c\widehat{c}`$, where $`\widehat{c}`$ is the value of the controller leading to $`\mathrm{max}\mathrm{\Delta }H`$. Any mixture of the control variables either achieves the maximum or yields a smaller value.
From the standpoint of the controller, one major drawback of acting independently of the state of the system is that often no information other than that available from the state of $`𝒳`$ itself can provide a reasonable way to determine which subdynamics are optimal or even accessible given the initial state. For this reason, open-loop control strategies implemented independently of the state of the system or solely on its statistics usually fail to operate efficiently in the presence of noise because of their inability to react or be adjusted in time. In order to account for all the possible behaviors of a stochastic dynamical system, we have to use the information contained in its evolution by considering a closed-loop control scheme in which the state of the controller is allowed to be correlated to the initial state of $`𝒳`$. This correlation can be thought of as a measurement process described earlier that enables $`𝒞`$ to gather an amount of information given formally in Shannon’s information theory by the mutual information $`I(X;C)=H(X)+H(C)-H(X,C)`$, where $`H(X,C)=-\sum _{x,c}p(x,c)\mathrm{log}p(x,c)`$ is the joint entropy of $`𝒳`$ and $`𝒞`$. Having defined these quantities, we are now in a position to state our main result, which is that the maximum improvement that closed-loop can give over open-loop control is limited by the information obtained by the controller. More formally, we have
Theorem 2.—The amount of entropy $`\mathrm{\Delta }H_{\text{closed}}`$ that can be extracted from any dynamical system by a closed-loop controller satisfies
$$\mathrm{\Delta }H_{\text{closed}}\le \mathrm{\Delta }H_{\text{open}}+I(X;C),$$
(3)
where $`\mathrm{\Delta }H_{\text{open}}`$ is the maximum entropy decrease that can be obtained by open-loop control and $`I(X;C)`$ is the mutual information gathered by the controller upon observation of the system state.
Proof.—We construct a closed system by supplementing an ancilla $`ℰ`$ to our previous system $`𝒳+𝒞`$. Moreover, let $`𝒞`$ and $`ℰ`$ be collectively denoted by $`ℬ`$ with state variable $`B`$. Since the complete system $`𝒳+ℬ`$ is closed, its entropy has to be conserved, and thus $`H(X,B)=H(X^{\prime },B^{\prime })`$. Defining the entropy changes of $`𝒳`$ and $`ℬ`$ by $`\mathrm{\Delta }H=H(X)-H(X^{\prime })`$ and $`\mathrm{\Delta }H_B=H(B^{\prime })-H(B)`$ respectively, and by using the definition of the mutual information, this condition of entropy conservation can also be rewritten in the form $`\mathrm{\Delta }H=\mathrm{\Delta }H_B-I(X^{\prime };B^{\prime })+I(X;B)`$ . Now, define $`\mathrm{\Delta }H_{\text{open}}`$ as the maximum amount of entropy decrease of $`𝒳`$ obtained in the open-loop case where $`I(X;C)=I(X;B)=0`$ (by construction of $`ℰ`$, $`I(X;E)=0`$.) From the conservation condition, we hence obtain $`\mathrm{max}\mathrm{\Delta }H=\mathrm{\Delta }H_{\text{open}}+I(X;B)`$, which is the desired upper bound for a feedback controller.
To illustrate the above results, suppose that we control a system in a mixture of the states $`\{0,1\}`$ using a controller restricted to use the following two actions
$$\{\begin{array}{c}c=0:x\to x^{\prime }=x\hfill \\ c=1:x\to x^{\prime }=\mathrm{not}x\hfill \end{array}$$
(4)
(in other words, the controller and the system behave like a so-called ‘controlled-not’ gate). Since these actuation rules simply permute the state of $`𝒳`$, $`H(X^{\prime })\ge H(X)`$ with equality if we use a pure control strategy or if $`H(X)=H_{\text{max}}=1`$ bit, in agreement with our first theorem. Thus, $`\mathrm{\Delta }H_{\text{open}}=0`$. However, by knowing the actual value of $`x`$ ($`H(X)`$ bit of information) we can choose $`C`$ to obtain $`\mathrm{\Delta }H=H(X)`$, therefore achieving Eq.(3) with equality. Evidently, as implied by this equation, information is required here as a result of the non-dissipative nature of the actuations and would not be needed if we were allowed to use dissipative (volume contracting) subdynamics. Alternatively, no open-loop controlled situation is possible if we confine the controller to use entropy-increasing actuations as, for instance, in the control of nonlinear systems using chaotic dynamics.
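The ‘controlled-not’ example can be checked numerically with a few lines of Python. This is an illustration we add here, with an arbitrary initial mixture: open-loop mixtures of the two permutations never lower $`H(X)`$, while feedback with $`c=x`$ (so that $`I(X;C)=H(X)`$) drives the system to the state 0 with certainty.

```python
import numpy as np

def H(p):
    p = np.asarray(p, dtype=float); p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_x = np.array([0.8, 0.2])          # arbitrary initial mixture over {0, 1}

# Open loop: a mixed strategy applies 'not' with probability 1 - q0,
# mapping p -> q0 p + (1 - q0) p_reversed; entropy can only grow or stay.
for q0 in (1.0, 0.7, 0.5):
    p_out = q0 * p_x + (1 - q0) * p_x[::-1]
    print(q0, H(p_out) - H(p_x))    # >= 0, so Delta_H_open = 0 at best

# Closed loop: knowing x, choose c = x; 'not' fires exactly when x = 1 and
# the final state is 0 with probability one: Delta_H_closed = H(X).
print(H(p_x), "bits extracted with feedback")   # saturates eq. (3)
```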
To demonstrate this last statement, let us consider the feedback control scheme proposed by Ott, Grebogi and Yorke (OGY) as applied to the logistic map
$$x_{n+1}=rx_n(1-x_n),x\in [0,1],$$
(5)
(the extension to more general systems naturally follows). The OGY method, specifically, consists of applying to Eq.(5) small perturbations $`r\to r+\delta r_n`$ according to $`\delta r_n=\gamma (x_n-x^*)`$, whenever $`x_n`$ falls into a region $`D`$ in the vicinity of the target point $`x^*`$. The gain $`\gamma >0`$ is chosen so as to ensure stability . For the purpose of chaotic control, all the accessible control actions determined by the values of $`\delta r_n`$, and thereby by the coordinates $`x_n\in D`$, can be constrained to be entropy-increasing for a proper choice of $`D`$, meaning that the Lyapunov exponent $`\lambda (r)`$ associated with any actuation indexed by $`r`$ is such that $`\lambda (r)>0`$ . Physically, this implies that almost any initial uniform distribution for $`X`$ covering an interval of size $`\epsilon `$ “expands” by a factor $`2^{\lambda (r)}`$ on average after one iteration of the map with parameter $`r`$ . Now, for an open-loop controller, it can readily be shown in that case that no control of the state $`x`$ is possible; without knowing the position $`x_n`$, a controller merely acts as a perturbation to the system, and the optimal control strategy then consists of using the smallest Lyapunov exponent available so as to achieve $`\mathrm{\Delta }H_{\text{open}}=-\lambda _{\mathrm{min}}<0`$. Following theorem 2, it is thus necessary, in order to achieve a controlled situation $`\mathrm{\Delta }H>0`$, to have $`I(X;C)\ge \lambda _{\mathrm{min}}`$ using a measurement channel characterized by an information capacity of at least $`\lambda _{\mathrm{min}}`$ bits per use.
In the controlled regime ($`n\to \mathrm{\infty }`$), this means specifically that if we want to localize the trajectory generated by Eq.(5) uniformly within an interval of size $`\epsilon `$ using a set of chaotic actuations, we need to measure $`x`$ within an interval no larger than $`\epsilon 2^{-\lambda _{\mathrm{min}}}`$. To understand this, note that an optimal measurement of $`I(X;C)=\mathrm{log}a`$ bits consists, for a uniform distribution $`p(x)`$ of size $`\epsilon `$, in partitioning the interval $`\epsilon `$ into $`a`$ subintervals of size $`\epsilon /a`$. The controller under the partition then applies the same actuation $`r^{(i)}`$ for all the coordinates of the initial density lying in each of the subintervals $`i`$, therefore stretching them by a factor $`2^{\lambda (r^{(i)})}`$. In the optimal case, all the subintervals are directed toward $`x^*`$ using $`\lambda _{\mathrm{min}}`$ and the corresponding entropy change is thus
$$\mathrm{\Delta }H_{\text{closed}}=\mathrm{log}\epsilon -\mathrm{log}\left(2^{\lambda _{\mathrm{min}}}\epsilon /a\right)=-\lambda _{\mathrm{min}}+\mathrm{log}a,$$
(6)
which is consistent with Eq.(3) and yields the aforementioned value of $`a`$ for $`\mathrm{\Delta }H=0`$. Clearly, this value constitutes a lower bound for the OGY scheme since not all the subintervals are controlled with the same parameter $`r`$, a fact that we observed in numerical simulations .
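To make the OGY discussion concrete, the sketch below stabilizes the unstable fixed point of the logistic map, eq. (5). The gain follows from a first-order linearization around the fixed point and is our illustrative choice; the values of $`r`$, the size of the control region $`D`$, and the initial condition are assumptions, not parameters taken from the cited works.

```python
import numpy as np

r = 3.9                                # chaotic regime of eq. (5)
x_star = 1.0 - 1.0 / r                 # unstable fixed point (target point)
# Linearization: x' - x* ~ (2 - r)(x - x*) + x*(1 - x*) dr, so the choice
# dr_n = gamma (x_n - x*) with gamma = (r - 2)/(x*(1 - x*)) > 0 cancels the
# deviation to first order.
gamma = (r - 2.0) / (x_star * (1.0 - x_star))
d = 0.02                               # control region D: |x - x*| < d

x = np.random.default_rng(1).random()  # random initial condition in [0, 1)
for n in range(1000):
    dr = gamma * (x - x_star) if abs(x - x_star) < d else 0.0
    x = (r + dr) * x * (1.0 - x)
print(abs(x - x_star))                 # ~0: trajectory locked on the target
```

Once the chaotic orbit wanders into $`D`$, the perturbation keeps it there, in line with the entropy bookkeeping of eq. (6).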
In summary, we have introduced a formalism for studying control problems in which control units are analyzed as informational mechanisms. In this respect, a feedback controller functions analogously to a Maxwell’s demon , getting information about a system and using that information to decrease the system’s entropy. Our main result showed that the amount of entropy that can be extracted from a dynamical system by a controller is upper bounded by the sum of the decrease of entropy achievable in open-loop control and the mutual information between the dynamical system and the controller established during an initial interaction. This upper bound sets a fundamental limit on the performance of any controller whose design is based on the possibility of reaching low-entropy states, and was proven without any reference to a specific control system. Hence, its practical implications can be investigated for the control of linear, nonlinear and complex systems (discrete or continuous), as well as for the control of quantum systems, for which our results also apply. For this latter topic, our probabilistic approach seems particularly suitable for the study of quantum controllers.
The authors would like to thank J.-J.E. Slotine for helpful discussions. This work has been partially supported by NSERC (Canada), and by a grant from the d’Arbeloff Laboratory for Information Systems and Technology, MIT.
# New evolutionary scenarios for short orbital period CVs.
## 1 Introduction
The ‘hibernation scenario’ (Shara 1989 for review) suggests that dwarf novae $`\to `$ nova-likes $`\to `$ novae $`\to `$ nova-likes $`\to `$ dwarf novae $`\to `$ hibernation $`\to `$ dwarf novae etc. However, it was later proposed that the hibernation stage ($`\dot{M}\approx 0`$) might not exist at all (Livio 1989), thus dwarf novae $`\to `$ nova-likes $`\to `$ novae $`\to `$ nova-likes $`\to `$ dwarf novae… The typical time scales for the transitions were estimated as a few centuries.
An alternative view to the ‘hibernation scenario’ was presented by Mukai and Naylor (1995). They suggested that nova-likes and dwarf novae constitute different classes of pre-nova systems. Therefore, there are two possibilities: 1. nova-likes $`\to `$ novae $`\to `$ nova-likes… 2. dwarf novae $`\to `$ novae $`\to `$ dwarf novae… Nova-likes should have more frequent nova outbursts than dwarf novae because their mass transfer rates are higher. The critical mass for the thermonuclear runaway is thus achieved much faster. According to this model, a long-term evolution between the two phases might occur.
## 2 Discussion
### 2.1 Permanent superhump novae
We note the similarity between long and short orbital period CVs. Permanent superhump systems have thermally stable accretion discs as do nova-likes, while SU UMa and U Gem systems are thermally unstable.
So far only two non-magnetic novae below the period gap have been discovered - CP Pup 1942 and V1974 Cyg 1992. Both have permanent superhumps in their light curves. To these systems, we naturally add V603 Aql 1918, the third permanent superhump nova, whose binary period is just on the other side of the gap (Retter & Leibowitz 1998). These three objects show clearly that certain classical novae become permanent superhump systems. Since the three post-novae are permanent superhumpers, their accretion discs should be thermally stable. When we compare, however, the pre-outburst luminosities with the post-nova values, various types of behaviour are discovered. V603 Aql seems to have returned exactly to its pre-outburst magnitude. The upper limit on the brightness of the progenitor of CP Pup (Warner 1995) shows that it was fainter than its post-outburst quiescent value, but prevents a precise decision concerning the thermal stability of the pre-nova. V1974 Cyg is the most interesting case among the three. Retter & Leibowitz (1998) argued that the pre-nova was faint, and therefore should have been a dwarf nova (SU UMa system) with a thermally unstable accretion disc. It is thus the only clear case of a classical nova that has changed its thermal stability state – a change to the thermally stable state from the thermally unstable state has been caused by the nova outburst. CP Pup might be a second example of this transition.
### 2.2 Summary of relevant observations
Observations of novae have shown the following:
1. Most systems probably have only a short cycle (nova-likes $`\to `$ novae $`\to `$ nova-likes… – e.g. Robinson 1975). Two selection effects might, however, be involved – brighter novae are better covered, and a longer observational base line might show a larger cycle.
2. There are two clear examples of novae that have turned into dwarf novae – V446 Her (Honeycutt et al. 1998) and GK Per (Sabbadin & Bianchini 1983). GK Per is, however, not a typical nova.
3. There is at least one example of a nova like (permanent superhump system) – V1974 Cyg, that should have been a dwarf nova (SU UMa system) before its nova event (Retter & Leibowitz 1998).
### 2.3 New evolutionary scenarios
We suggest that most (or perhaps all) non-magnetic short orbital period novae should evolve into permanent superhump systems. We also propose that most (or all) permanent superhump systems are ex-novae. Indirect evidence supporting the last claim is the fact that permanent superhumps have been detected in a few SW Sex stars (Patterson 1999), and three SW Sex candidates are actually old novae (Hoard 1998). Further evidence for this idea comes from the possible identification of BK Lyn, a permanent superhump system (Patterson 1999), with a Chinese guest star, which erupted in 101 (Hertzog 1986).
We further suggest that evolutionary scenarios, similar to those offered for the long orbital period CVs, are applicable to the short orbital period systems as well. The equivalent of the ‘hibernation scenario’ is: SU UMa systems ($`\to `$ permanent superhump systems) $`\to `$ novae $`\to `$ permanent superhumpers $`\to `$ SU UMa systems $`\to `$ hibernation $`\to `$ SU UMa systems… The analogue of the ‘modern hibernation scenario’ is: SU UMa systems ($`\to `$ permanent superhump systems) $`\to `$ novae $`\to `$ permanent superhumpers $`\to `$ SU UMa systems… The extension of the two options of Mukai and Naylor’s (1995) ideas is: permanent superhumpers $`\to `$ novae $`\to `$ permanent superhumpers… and SU UMa systems $`\to `$ novae $`\to `$ SU UMa systems… The observed evolution of V1974 Cyg is consistent with this view only if the CV was caught at a very specific point in its long-term evolution – a transition from the faint stage to the bright stage.
# Shell model studies of the proton drip line nucleus 106Sb
## Abstract
We present results of shell model calculations for the proton drip line nucleus <sup>106</sup>Sb. The shell model calculations were performed based on an effective interaction for the $`2s1d0g_{7/2}0h_{11/2}`$ shells employing modern models for the nucleon-nucleon interaction. The results are compared with the recently proposed experimental yrast states. A good agreement with experiment is found, lending support to the experimental spin assignments.
PACS number(s): 21.60.-n, 21.60.Cs, 27.60.+j
Considerable attention is at present being devoted to the experimental and theoretical study of nuclei close to the limits of stability. Recently, heavy neutron deficient nuclei in the mass regions of $`A=100`$ have been studied, and nuclei like <sup>100</sup>Sn and neighboring isotopes have been identified . Moreover, the proton drip line has been established in the $`A=100`$ and $`A=150`$ regions, and nuclei like <sup>105</sup>Sb and <sup>109</sup>I have recently been identified as ground-state proton emitters . The next-to-drip-line nucleus for the antimony isotopes, <sup>106</sup>Sb, with a proton separation energy of $`400`$ keV, was studied recently in two experiments and a level scheme for the yrast states was proposed in Ref. .
The aim of this work is thus to see whether shell-model calculations, which employ realistic effective interactions based on state of the art models for the nucleon-nucleon interaction, are capable of reproducing the experimental results for systems close to the limits of stability. Before we present our results, we will briefly review our theoretical framework. In addition, we present results for effective proton and neutron charges based on perturbative many-body methods. These effective charges will in turn be used in a shell-model analysis of $`E2`$ transitions.
The aim of microscopic nuclear structure calculations is to derive various properties of finite nuclei from the underlying hamiltonian describing the interaction between nucleons. We derive an appropriate effective two-body interaction for valence neutrons and protons in the single-particle orbits $`2s_{1/2}`$, $`1d_{5/2}`$, $`1d_{3/2}`$, $`0g_{7/2}`$ and $`0h_{11/2}`$. As closed shell core we use <sup>100</sup>Sn. This effective two-particle interaction is in turn used in the shell-model study of <sup>106</sup>Sb. The shell model problem requires the solution of a real symmetric $`n\times n`$ matrix eigenvalue equation
$$\stackrel{~}{H}|\mathrm{\Psi }_k\rangle =E_k|\mathrm{\Psi }_k\rangle ,$$
(1)
with $`k=1,\mathrm{},K`$. At present our basic approach to finding solutions of Eq. (1) is the Lanczos algorithm, an iterative method which yields the lowest eigenstates. The technique is described in detail in Ref. ; see also Ref. .
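For readers unfamiliar with the method, a toy Python implementation of the Lanczos step is sketched below. It uses full reorthogonalization for clarity and a random symmetric test matrix; it is of course not the production shell-model code of the cited references.

```python
import numpy as np

def lanczos_lowest(H, m=80, seed=0):
    """Lowest eigenvalue of a real symmetric matrix H from an m-step
    Lanczos tridiagonalization (full reorthogonalization for clarity)."""
    n = H.shape[0]
    Q = np.zeros((n, m)); alpha = np.zeros(m); beta = np.zeros(m)
    q = np.random.default_rng(seed).standard_normal(n)
    q /= np.linalg.norm(q)
    for j in range(m):
        Q[:, j] = q
        w = H @ q
        alpha[j] = q @ w
        w -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)   # reorthogonalize
        beta[j] = np.linalg.norm(w)
        if beta[j] < 1e-12:                        # invariant subspace found
            m = j + 1
            break
        q = w / beta[j]
    T = (np.diag(alpha[:m]) + np.diag(beta[:m - 1], 1)
         + np.diag(beta[:m - 1], -1))
    return np.linalg.eigvalsh(T)[0]

A = np.random.default_rng(2).standard_normal((400, 400)); A = (A + A.T) / 2
print(lanczos_lowest(A), np.linalg.eigvalsh(A)[0])  # the two values agree
```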
To derive the effective interaction, we employ a perturbative many-body scheme starting with the free nucleon-nucleon interaction. This interaction is in turn renormalized taking into account the specific nuclear medium. The medium renormalized potential, the so-called $`G`$-matrix, is then employed in a perturbative many-body scheme, as detailed in Ref. and reviewed briefly below. The bare nucleon-nucleon interaction we use is the charge-dependent meson-exchange model of Machleidt and co-workers , the so-called CD-Bonn model. The potential model of Ref. is an extension of the one-boson-exchange models of the Bonn group , where mesons like $`\pi `$, $`\rho `$, $`\eta `$, $`\delta `$, $`\omega `$ and the fictitious $`\sigma `$ meson are included. In the charge-dependent version of Ref. , the first five mesons have the same set of parameters for all partial waves, whereas the parameters of the $`\sigma `$ meson are allowed to vary.
The first step in our perturbative many-body scheme is to handle the fact that the repulsive core of the nucleon-nucleon potential $`V`$ is unsuitable for perturbative approaches. This problem is overcome by introducing the reaction matrix $`G`$ given by the solution of the Bethe-Goldstone equation
$$G=V+V\frac{Q}{\omega -QTQ}G,$$
(2)
where $`\omega `$ is the unperturbed energy of the interacting nucleons, and $`H_0`$ is the unperturbed hamiltonian. The operator $`Q`$, commonly referred to as the Pauli operator, is a projection operator which prevents the interacting nucleons from scattering into states occupied by other nucleons. In this work we solve the Bethe-Goldstone equation for five starting energies $`\omega `$, by way of the so-called double-partitioning scheme discussed in e.g., Ref. . A harmonic-oscillator basis was chosen for the single-particle wave functions, with an oscillator energy $`\mathrm{\hbar }\mathrm{\Omega }`$ given by $`\mathrm{\hbar }\mathrm{\Omega }=45A^{-1/3}-25A^{-2/3}=8.5`$ MeV, $`A=100`$ being the mass number.
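Schematically, once a finite two-particle basis is chosen and $`H_0`$ is diagonal, eq. (2) becomes a linear matrix equation that can be solved by direct inversion for each starting energy. The toy Python sketch below illustrates this; the basis size, spectrum, and interaction are invented, and the double-partitioning scheme of the realistic calculation is not included.

```python
import numpy as np

def g_matrix(V, e, in_model_space, omega):
    """Schematic solution of eq. (2): G = V + V Q/(omega - H0) G, with H0
    diagonal (energies e) and a Pauli operator Q projecting on intermediate
    states outside the chosen model space (schematic)."""
    Q = np.diag([0.0 if inside else 1.0 for inside in in_model_space])
    prop = Q @ np.diag(1.0 / (omega - e))          # Q/(omega - H0)
    # G = V + V prop G  <=>  (I - V prop) G = V
    return np.linalg.solve(np.eye(len(e)) - V @ prop, V)

rng = np.random.default_rng(3)
V = rng.standard_normal((6, 6)); V = (V + V.T) / 2  # toy interaction
e = np.array([0.0, 1.0, 2.0, 8.0, 9.0, 10.0])       # toy H0 spectrum
print(g_matrix(V, e, [True] * 3 + [False] * 3, omega=-2.0))
```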
Finally, we briefly sketch how to calculate an effective two-body interaction for the chosen model space in terms of the $`G`$-matrix. Since the $`G`$-matrix represents just the summation to all orders of ladder diagrams with particle-particle intermediate states, there are obviously other terms which need to be included in an effective interaction. Long-range effects represented by core-polarization terms are also needed. The first step then is to define the so-called $`\widehat{Q}`$-box given by
$$P\widehat{Q}P=PGP+$$
(3)
$$P\left(G\frac{Q}{\omega -H_0}G+G\frac{Q}{\omega -H_0}G\frac{Q}{\omega -H_0}G+\cdots \right)P.$$
(4)
The $`\widehat{Q}`$-box is made up of non-folded diagrams which are irreducible and valence linked. The projection operators $`P`$ and $`Q`$ define the model space and the excluded space, respectively, with $`P+Q=I`$. All non-folded diagrams through third order in the interaction $`G`$ are included in the definition of the $`\widehat{Q}`$-box while so-called folded diagrams are included to infinite order through the summation scheme discussed in Refs. .
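The goal of the Q̂-box plus folded-diagram summation, an energy-independent effective interaction acting in the model space whose eigenvalues coincide with a subset of the exact ones, can be illustrated on a toy matrix. The Python sketch below obtains such an effective Hamiltonian non-perturbatively, by diagonalizing exactly and projecting the model-space-dominated eigenvectors; it is only meant to show the target of the resummation, not the iterative scheme of the cited references.

```python
import numpy as np

def exact_heff(H, d):
    """Effective Hamiltonian in the d-dimensional model space spanned by the
    first d basis states, built from the d exact eigenvectors with largest
    model-space overlap (biorthogonal construction)."""
    E, U = np.linalg.eigh(H)
    overlaps = np.sum(U[:d, :]**2, axis=0)
    sel = np.argsort(overlaps)[-d:]        # states dominated by the P space
    Up = U[:d, sel]                        # their projections onto P
    return Up @ np.diag(E[sel]) @ np.linalg.inv(Up)

# Toy Hamiltonian: weak coupling between the P space (first two states)
# and the excluded space:
H = np.diag([0.0, 1.0, 6.0, 7.0]) + 0.5 * (np.ones((4, 4)) - np.eye(4))
heff = exact_heff(H, d=2)
print(np.sort(np.linalg.eigvals(heff).real))  # two lowest exact eigenvalues
print(np.linalg.eigvalsh(H)[:2])
```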
Effective interactions based on the CD-Bonn nucleon-nucleon interaction have been used by us for several mass regions, and give in general a very good agreement with the data, see Refs. .
In addition to deriving an effective interaction for the shell model, we present also effective proton and neutron charges based on our perturbative many-body methods. These charges are used in our studies of available $`E2`$ data below. In this way, degrees of freedom not accounted for by the shell-model space are partly included through the introduction of an effective charge. The effective single-particle operators for the effective charge are calculated along the same lines as the effective interaction. In nuclear transitions, the quantity of interest is the transition matrix element between an initial state $`|\mathrm{\Psi }_i`$ and a final state $`|\mathrm{\Psi }_f`$ of an operator $`𝒪`$ (here it is the $`E2`$ operator) defined as
$$𝒪_{fi}=\frac{\left\langle \mathrm{\Psi }_f\left|𝒪\right|\mathrm{\Psi }_i\right\rangle }{\sqrt{\left\langle \mathrm{\Psi }_f|\mathrm{\Psi }_f\right\rangle \left\langle \mathrm{\Psi }_i|\mathrm{\Psi }_i\right\rangle }}.$$
(5)
Since we perform our calculation in a reduced space, the exact wave functions $`\mathrm{\Psi }_{f,i}`$ are not known, only their projections $`\mathrm{\Phi }_{f,i}`$ onto the model space. We are then confronted with the problem of evaluating $`𝒪_{fi}`$ when only the model space wave functions are known. In treating this problem, it is usual to introduce an effective operator $`𝒪_{\mathrm{eff}}`$ different from the original operator $`𝒪`$ defined by requiring
$$𝒪_{fi}=\langle \mathrm{\Phi }_f|𝒪_{\mathrm{eff}}|\mathrm{\Phi }_i\rangle .$$
(6)
The standard scheme is then to employ a perturbative expansion for the effective operator, see e.g. Refs. .
To obtain effective charges, we evaluate all effective-operator diagrams through second order, excluding Hartree-Fock insertions, in the $`G`$-matrix obtained with the CD-Bonn interaction. Such diagrams are discussed in the reviews by Towner and by Ellis and Osnes . The state-dependent effective charges are listed in Table I for the diagonal contributions only. In order to reproduce the experimental $`B(E2;4_1^+\rightarrow 2_1^+)`$ transition of Ref. , the authors introduced effective charges $`e_p=1.72e`$ and $`e_n=1.44e`$ for protons and neutrons, respectively. We see from Table I that the microscopically calculated values differ significantly from the above values from Ref. . This could, however, very well be an artefact of the chosen model space and effective interaction employed in the shell-model analysis of Ref. . The reader should also keep in mind that our model for the single-particle wave functions, namely the harmonic oscillator, may not be the most appropriate for the proton single-particle states, since the proton separation energy is only a few hundred keV. When compared with the theoretical calculation of Sagawa et al. , our neutron effective charges agree well with theirs, whereas the proton effective charge deduced in Ref. is slightly larger, $`e_p\sim 1.4e`$. We note also that in a Hartree-Fock calculation with a Skyrme interaction, accounting for effects from the continuum, Hamamoto and Sagawa obtained effective charges of $`e_n=1.35e`$ and $`e_p=1.0e`$ for <sup>100</sup>Sn. Below we will allow the effective charges to vary in order to reproduce as far as possible the experimental value of $`2.8(3)`$ W.u. for the transition $`B(E2;4_1^+\rightarrow 2_1^+)`$. There we will also relate the theoretical values for the effective charges to those extracted from data around $`A=100`$, see, e.g., Ref. .
The calculations were performed with two possible model spaces, one which comprises all single-particle orbitals of the $`1d_{5/2}0g_{7/2}1d_{3/2}2s_{1/2}0h_{11/2}`$ shell and one which excludes the $`0h_{11/2}`$ orbit. The latter model space was employed by the authors of Ref. in their shell-model studies. Since the single-neutron and single-proton energies with respect to <sup>100</sup>Sn are not well established, we have adopted for neutrons the same single-particle energies as used in large-scale shell-model calculations of the Sn isotopes, see Refs. . The neutron single-particle energies are $`\epsilon _{0g_{7/2}}-\epsilon _{1d_{5/2}}=0.2`$ MeV, $`\epsilon _{1d_{3/2}}-\epsilon _{1d_{5/2}}=2.55`$ MeV, $`\epsilon _{2s_{1/2}}-\epsilon _{1d_{5/2}}=2.45`$ MeV and $`\epsilon _{0h_{11/2}}-\epsilon _{1d_{5/2}}=3.2`$ MeV. These energies, when employed with our effective interaction described above, gave excellent results for both even and odd tin isotopes from <sup>102</sup>Sn to <sup>116</sup>Sn. The proton single-particle energies are less well established, and we simply adopt those for the neutrons. Since the proton separation energy is of the order of $`400`$ keV, it should suffice to carry out a shell-model calculation with just the $`1d_{5/2}`$ and $`0g_{7/2}`$ orbits for protons. The total wave function, see the discussion below, is in any case to a large extent dominated by the $`1d_{5/2}`$ orbital for protons, with small admixtures from the $`0g_{7/2}`$ proton orbital; the influence of the other proton orbits is thus minimal.
The resulting eigenvalues are displayed in Fig. 1 for the two choices of model space, together with the experimental levels reported in Ref. . Not all experimental levels have been given a spin assignment, and all experimental spin values are tentative. The label FULL stands for the model space which includes all orbits of the $`1d_{5/2}0g_{7/2}1d_{3/2}2s_{1/2}0h_{11/2}`$ shell, while REDUCED stands for the model space where the $`0h_{11/2}`$ orbit has been omitted. As can be seen from Fig. 1, the agreement with experiment is rather good, with the model space which includes all orbitals of the $`1d_{5/2}0g_{7/2}1d_{3/2}2s_{1/2}0h_{11/2}`$ shell being closest to the experimental level assignments. The reader should however note that Ref. does not exclude that the ground state could have spin $`1^+`$, which would mean that the experimental spin values in Fig. 1 should be reduced by 1. In our theoretical calculations we obtain, in addition to a state with spin $`1^+`$, also a state with $`3^+`$ not seen in the experiment of Ref. . In case the ground state turns out to have spin $`1^+`$, the reduced model space in our calculations would yield a better agreement with the data.
The wave functions for the various states are to a large extent dominated by the $`0g_{7/2}`$ and $`1d_{5/2}`$ single-particle orbits for neutrons ($`\nu `$) and the $`1d_{5/2}`$ single-particle orbit for protons ($`\pi `$). The $`\nu 0g_{7/2}`$ and $`\nu 1d_{5/2}`$ single-particle orbits represent in general more than $`90\%`$ of the total neutron single-particle occupancy, while the $`\pi 1d_{5/2}`$ single-particle orbit stands for $`80-90\%`$ of the proton single-particle occupancy. The other single-particle orbits play an almost negligible role in the structure of the wave functions; the only notable exception is the $`7_1^+`$ state, where $`\pi 0g_{7/2}`$ accounts for $`84\%`$ of the proton single-particle occupancy. This also has important repercussions on the contributions to the measured $`E2`$ transition $`B(E2;4_1^+\rightarrow 2_1^+)`$, where the structure of the wave functions of the $`4_1^+`$ and $`2_1^+`$ states is to a large extent dominated by the $`\nu 1d_{5/2}`$ and $`\pi 1d_{5/2}`$ single-particle orbits. The $`0g_{7/2}`$ orbits play a less significant role in the structure of the wave functions, and since the $`0g_{7/2}\rightarrow 1d_{5/2}`$ transition matrix element tends to be weaker than the $`1d_{5/2}\rightarrow 1d_{5/2}`$ one, the $`E2`$ transition will be dominated by the latter contributions. As also noted by Sohler et al. , the $`E2`$ transition is dominated by neutron contributions. This can also be seen from Fig. 2, where we show the result for the above $`E2`$ transition as a function of different choices for the effective charges. We see that the largest change in the value of the $`E2`$ transition takes place when we vary the effective charge of the neutron, whereas when the proton charge is changed, the percentage change is smaller. Furthermore, if we use the largest values for the effective charges of Table I, namely $`e_n=0.72e`$ and $`e_p=1.16e`$, we obtain $`1.84`$ W.u. for the $`E2`$ transition. Compared with the experimental value of $`2.8(3)`$ W.u., this may indicate that both the proton and the neutron effective charges should be slightly increased. From Fig. 2 we see that effective charges of $`e_n=0.9e`$ and $`e_p=1.4e\pm 0.2e`$ seem to yield the best agreement with experiment, although neutron charges of $`e_n=0.8e`$ and $`e_n=1.0e`$ also yield results within the experimental window of Fig. 2. The neutron effective charges would agree partly with those extracted from the data on the Sn isotopes and from theoretical calculations of $`E2`$ transitions in heavy Sn isotopes , where a value $`e_n\sim 1`$ is adopted in order to reproduce the data; the effective charges calculated in Table I are, however, on the lower side. The effective charges deduced from the $`B(E2;6_1^+\rightarrow 4_1^+)`$ transitions in <sup>102</sup>Sn and <sup>104</sup>Sn indicate that $`e_n\sim 1.6-2.3e`$, depending on the effective interaction employed in the shell-model analyses. Clearly, the effective interaction which is used, its pertinent model space, the approximations made in the many-body formalism, etc., will influence the extraction of effective charges. This notable difference in the effective charges could be due to the fact that the $`B(E2;6_1^+\rightarrow 4_1^+)`$ transitions in <sup>102</sup>Sn and <sup>104</sup>Sn involve configurations not accounted for by the $`1d_{5/2}0g_{7/2}1d_{3/2}2s_{1/2}0h_{11/2}`$ model space. A proton effective charge of $`e_p\sim 1.4e`$ is close to values inferred from experiment for the $`N=50`$ isotones, see Ref. 
, shell-model calculations of $`E2`$ transitions for the $`N=82`$ isotones and the theoretical estimates of Ref. .
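Since the shell-model wave functions are fixed, the transition amplitude is linear in the effective charges, so the $`B(E2)`$ value scanned in Fig. 2 has the simple structure $`B(E2)\propto |e_pA_p+e_nA_n|^2`$. The sketch below illustrates such a scan; the bare amplitudes $`A_p`$ and $`A_n`$ are invented placeholder numbers, chosen only so that the neutron term dominates as it does for <sup>106</sup>Sb, and are not our calculated matrix elements.

```python
# B(E2) as a function of the effective charges, with the proton and neutron
# amplitudes A_p, A_n (per unit charge) held fixed by the wave functions.
A_p, A_n = 0.3, 1.4          # hypothetical bare amplitudes, neutron-dominated
target, err = 2.8, 0.3       # experimental B(E2; 4+ -> 2+) in W.u.

def b_e2(e_p, e_n):
    return (e_p * A_p + e_n * A_n) ** 2

for e_n in (0.8, 0.9, 1.0):
    for e_p in (1.2, 1.4, 1.6):
        val = b_e2(e_p, e_n)
        tag = "<- within experiment" if abs(val - target) <= err else ""
        print(f"e_n={e_n:.1f}  e_p={e_p:.1f}  B(E2)={val:5.2f} W.u. {tag}")
```

With these placeholder amplitudes the acceptable region falls near $`e_n\approx 0.9e`$ and $`e_p\approx 1.4e`$, mimicking the behavior seen in Fig. 2.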
In summary, a shell-model calculation with realistic effective interactions for the newly reported low-lying yrast states of the proton drip-line nucleus <sup>106</sup>Sb reproduces the experimental data well. Since the wave functions of the various states are to a large extent dominated by neutron degrees of freedom, and the neutrons are well bound with a separation energy of $`\sim 8`$ MeV, this may explain why a shell-model calculation within a restricted model space gives a satisfactory agreement with the data even for a system close to the proton drip line. In order to reproduce the experimental $`B(E2;4_1^+\rightarrow 2_1^+)`$ transition, we obtained effective charges from our shell-model wave functions of $`e_n=0.8-1.0e`$ and $`e_p=1.4e\pm 0.2e`$. Our microscopically calculated effective charges are, however, slightly smaller, $`e_n=0.5-0.7e`$ and $`e_p=1.1-1.2e`$.
# Quantum Mechanics as a Classical Field Theory
## Abstract
The formalism of quantum mechanics is presented in a way that emphasizes its interpretation as a classical field theory. Two coupled real fields are defined with given equations of motion. Densities and currents associated with the fields are found, with their corresponding conserved quantities. The behavior of these quantities under a galilean transformation suggests the association of the fields with a quantum-mechanical free particle. An external potential is introduced in the Lagrange formalism. The description is equivalent to the conventional Schrödinger-equation treatment of a particle. We discuss attempts to build an interpretation of quantum mechanics based on this scheme, in which the fields become the primary ontology of the theory and the particles appear as emergent properties of the fields. These interpretations face serious problems for systems with many degrees of freedom.
KEY WORDS: quantum mechanics, field theory, interpretation.
I. INTRODUCTION
Almost a century after Planck and Einstein made the first quantum postulates, and some 70 years after the mathematical formalism of quantum mechanics was established, the challenge posed by quantum mechanics is still open. Until now, no completely satisfactory interpretation of quantum mechanics has been found, and we can still say today that “nobody understands quantum mechanics”. The lack of an interpretation was compensated by the development of an extremely precise and esthetic mathematical formalism; we do not know what quantum mechanics is, but we know very well how it works. A consequence of the development of the very successful axiomatic formalism was that many physicists were satisfied with the working of quantum mechanics and no longer tried to understand it. This attitude was favored by the establishment of an orthodox instrumentalist “interpretation” which, if we are allowed to put it in a somewhat oversimplified manner, amounts to saying “thou shalt not try to understand quantum mechanics”. Only a few authorities like Einstein, Schrödinger, and Planck could dare not to accept the dogma and insist on trying to understand quantum mechanics. Fortunately the situation has changed, and today the search for an interpretation of quantum mechanics is an acceptable subject. The roots of this change are found in the pioneering work of Einstein, Podolsky, and Rosen, which pointed out some peculiar correlations in the theory; it was followed by the work of Bell, which established conditions for the existence of those correlations in nature, and finally by the experimental evidence for their existence. Although no definite interpretation of quantum mechanics has been found, we have made some progress in the understanding of the quantum world. There exist correlations among the observables of a quantum system that cannot be explained by classical effects. These correlations always appear among noncommuting observables and in some cases also appear among commuting ones, as happens, for instance, in nonlocal or nonseparable states of quantum systems. It is no longer possible to think that the observables assume values independent of the context, that is, of the values assigned to other (commuting) observables.
It is impossible to assign the established features of quantum systems to a classical particle; it is impossible to develop an image of a particle having those properties. Therefore, if we want to understand quantum mechanics while keeping in mind the image of a particle, we are in trouble. In this work we will see that it is possible to develop an image, not of a particle but of a field, that is compatible with the properties of a quantum system. Guided by our sense perception, we may have the tendency to assign somehow a higher priority to particles than to fields. Indeed, we have a clear sense perception of macroscopic particles but no “feeling” for fields (only after a long intellectual development did we realize that “light” is a field). So we usually think of particles as having ontological entity, that is, as really existent, whereas fields are taken as mathematical constructions associated with particles, like the electric or gravitational fields associated with charged or massive particles. Classical electromagnetism, however, gives us some indication that this hierarchy may be incorrect. True, electric fields are a property of charged particles, but we find, through Maxwell's equations, that time-varying electric fields can exist without charged particles associated with them. We therefore have no deep reason to think that particles are “more fundamental” than fields. It could even be the other way round: we will see that quantum mechanics is simpler if we consider the fundamental entities to be the fields instead of taking the particles as primary objects. Quantum mechanics, considered as a field theory, is no weirder than electrodynamics, but if we take it as a theory of particles, very strange, unnatural, and contradictory things must be introduced. For instance, we must assign to a particle, the paradigm of a localized thing, a nonlocal quality. Movement and position become incompatible, although movement is the change of position in every image we can make of a particle. After having understood the mathematical and physical properties of the fields, we can recognize that some of their features can be associated with properties of particles. In this way we may recover the particles as emergent properties of the fields. One further advantage of considering nonrelativistic quantum mechanics as a field theory is that it paves the way towards relativistic quantum field theory.
In this work we will see a scheme that avoids the counterintuitive structures that appear when the quantum behavior of particles is presented. The main idea of the work is to present quantum mechanics as a field theory with acceptable features, no more abstract than the ones found in classical electromagnetism. We will see, however, that this revival of interpretations of quantum mechanics that assign ontological reality to the fields must face some severe problems. In section II, the mathematical structure of a field theory is presented and associated, in section III, with a free physical system. A potential is easily introduced in section IV, and contact with the conventional formalism of quantum mechanics is made in section V. In section VI the possibility of building an interpretation of quantum mechanics based on this scheme is discussed.
II. MATHEMATICAL FEATURES
Let us assume a physical system represented by two coupled real fields. From their equations of motion we can find all relevant properties of the fields and associate them with physical concepts. Let $`A(x,t)`$ and $`B(x,t)`$ denote the fields in the simplest case of one-dimensional space and time. These fields have no external sources, like the electric charges and currents for the electromagnetic fields, but each field acts as a source for the polarization of the other field. This coupling through the polarization becomes clear in a discrete simulation of these fields on a lattice, where particles and antiparticles associated with the fields are created at neighboring sites in each step of time evolution. Let the time evolution of the fields be determined by the equations
$`\partial _tA(x,t)=`$ $`-\partial _x^2B(x,t),`$ (1)
$`\partial _tB(x,t)=`$ $`\partial _x^2A(x,t).`$ (2)
We will use $`\partial _x^nA(x,t)`$ to denote the $`n`$-th order partial derivative with respect to $`x`$, and similarly for the time derivatives. In these equations we have suppressed a constant in order to make the fields $`A`$ and $`B`$, as well as the space-time variables, dimensionless. For the moment we are interested only in the mathematical structure of the fields; all constants and dimensions needed to make contact with physical reality can be introduced later. These two equations play a rôle similar to that of Maxwell's equations for the electric and magnetic fields, but are much simpler. By direct substitution we can prove the following result:
If $`A(x,t)`$ and $`B(x,t)`$ are solutions of Eqs. 1–2, then $`\partial _x^nA(x,t)`$ and $`\partial _x^nB(x,t)`$ are also solutions for all $`n`$. The same can be said of the time derivatives $`\partial _t^mA(x,t)`$ and $`\partial _t^mB(x,t)`$ for all $`m`$.
Since Eqs. 1–2 are linear, all linear combinations of solutions are also solutions. Therefore if $`A(x,t)`$ and $`B(x,t)`$ are solutions of Eqs. 1–2, then $`\sum _n\lambda _n\partial _x^nA(x,t)`$ and $`\sum _n\lambda _n\partial _x^nB(x,t)`$ are also solutions. As a special case we may choose $`\lambda _n=L^n/n!`$, and the summations become the Taylor expansions of $`A(x+L,t)`$ and $`B(x+L,t)`$. Therefore the solutions of Eqs. 1–2 are invariant under the space translation $`x\rightarrow x+L`$. In the same way we can prove that the solutions are also invariant under a time translation $`t\rightarrow t+T`$. These two results also follow directly from inspection of Eqs. 1–2. Another symmetry of the solutions that can be proven by direct substitution is that, if $`A(x,t)`$ and $`B(x,t)`$ are solutions of Eqs. 1–2, then
$`A^{\prime }(x,t)=`$ $`cA(x,t)-sB(x,t),`$ (3)
$`B^{\prime }(x,t)=`$ $`sA(x,t)+cB(x,t),`$ (4)
where $`c`$ and $`s`$ are arbitrary real constants, are also solutions. An interesting question is whether the constants $`c`$ and $`s`$ can become functions $`c(x,t)`$ and $`s(x,t)`$. Indeed, it can easily be proven that the $`A^{\prime }(x,t)`$ and $`B^{\prime }(x,t)`$ given below are also solutions:
$`A^{\prime }(x,t)=`$ $`\mathrm{cos}\left({\displaystyle \frac{v}{2}}(x-{\displaystyle \frac{v}{2}}t)\right)A(x-vt,t)-\mathrm{sin}\left({\displaystyle \frac{v}{2}}(x-{\displaystyle \frac{v}{2}}t)\right)B(x-vt,t),`$ (5)
$`B^{\prime }(x,t)=`$ $`\mathrm{sin}\left({\displaystyle \frac{v}{2}}(x-{\displaystyle \frac{v}{2}}t)\right)A(x-vt,t)+\mathrm{cos}\left({\displaystyle \frac{v}{2}}(x-{\displaystyle \frac{v}{2}}t)\right)B(x-vt,t),`$ (6)
where $`v`$ is an arbitrary constant. This solution is interesting because it shows how the fields behave under a galilean transformation $`x\rightarrow x^{\prime }=x-vt`$, $`t\rightarrow t^{\prime }=t`$, if we require that the equations of motion, Eqs. 1–2, remain invariant. A similar situation is found in electrodynamics, where the requirement of invariance of Maxwell's equations under a Lorentz transformation mixes the electric and magnetic fields. In this nonrelativistic case we ask for invariance under a galilean transformation, since Eqs. 1–2 are clearly not invariant under a Lorentz transformation because of their different treatment of the time and space variables. The equations above, considered as an active transformation, show how to boost the fields at an initial time ($`t=0`$) with a velocity $`v`$.
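These properties are easy to explore numerically. Below is a minimal sketch of a direct integration of Eqs. 1–2, using a centered second difference for $`\partial _x^2`$ on a periodic grid and a classical Runge-Kutta step in time; the grid size, the initial Gaussian, and the time step are arbitrary illustrative choices.

```python
import numpy as np

N, L = 256, 40.0
dx = L / N
x = np.arange(N) * dx - L / 2
A = np.exp(-x**2)                  # initial data: Gaussian A, vanishing B
B = np.zeros_like(x)

def lap(f):                        # centered second difference, periodic grid
    return (np.roll(f, -1) - 2.0 * f + np.roll(f, 1)) / dx**2

def rhs(A, B):                     # Eqs. 1-2: dA/dt = -B'' , dB/dt = A''
    return -lap(B), lap(A)

def rk4(A, B, dt):
    k1a, k1b = rhs(A, B)
    k2a, k2b = rhs(A + 0.5 * dt * k1a, B + 0.5 * dt * k1b)
    k3a, k3b = rhs(A + 0.5 * dt * k2a, B + 0.5 * dt * k2b)
    k4a, k4b = rhs(A + dt * k3a, B + dt * k3b)
    return (A + dt * (k1a + 2 * k2a + 2 * k3a + k4a) / 6.0,
            B + dt * (k1b + 2 * k2b + 2 * k3b + k4b) / 6.0)

M0 = lambda A, B: np.sum(A**2 + B**2) * dx
print("M0 before:", M0(A, B))
dt = 0.2 * dx**2                   # like a diffusion equation, stability needs dt ~ dx^2
for _ in range(2000):
    A, B = rk4(A, B, dt)
print("M0 after :", M0(A, B))      # conserved up to the tiny time-stepping error
```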
We will now see that Eqs. 1–2 fully determine the time evolution of the fields; that is, given the fields at some time, say $`t=0`$, we can determine the fields for all other times. In order to see this, we first derive an expression for all time derivatives of the fields in terms of their space derivatives. The result for the $`A`$ field is:
$$\partial _t^nA=\begin{cases}(-1)^{n/2}\,\partial _x^{2n}A & n=0,2,4,\ldots \\ (-1)^{(n+1)/2}\,\partial _x^{2n}B & n=1,3,5,\ldots \end{cases}$$
(7)
The result for $`B`$ is similar, except for a sign change when $`n`$ is odd, and can be obtained from the above equation by the replacements $`A\rightarrow B`$ and $`B\rightarrow -A`$. The proof of these equations follows by iterated time derivatives of Eqs. 1–2. Now we can use these time derivatives in a Taylor expansion around $`t=0`$. That is, for the $`A`$ field we have,
$$A(x,t)=\sum _{n=0}^{\infty }\frac{t^n}{n!}\partial _t^nA|_{t=0}=\sum _{n=0,2,4,\ldots }\frac{t^n}{n!}(-1)^{n/2}\partial _x^{2n}A(x,0)-\sum _{n=1,3,5,\ldots }\frac{t^n}{n!}(-1)^{(n-1)/2}\partial _x^{2n}B(x,0).$$
(8)
We can nicely express this result if we notice that the sums above correspond to the series expansion of sine and cosine. Therefore we obtain the formal expression, also for the $`B`$ field,
$`A(x,t)=`$ $`\mathrm{cos}\left(t\partial _x^2\right)A(x,0)-\mathrm{sin}\left(t\partial _x^2\right)B(x,0),`$ (9)
$`B(x,t)=`$ $`\mathrm{sin}\left(t\partial _x^2\right)A(x,0)+\mathrm{cos}\left(t\partial _x^2\right)B(x,0).`$ (10)
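Because $`\partial _x^2`$ acts as multiplication by $`-k^2`$ in Fourier space, the formal propagator of Eqs. 9–10 becomes an ordinary rotation of the Fourier amplitudes of $`A`$ and $`B`$, mode by mode, and can be evaluated exactly on a periodic grid. A small sketch (the grid parameters are arbitrary):

```python
import numpy as np

N, L = 256, 40.0
x = np.arange(N) * (L / N) - L / 2
k2 = (2 * np.pi * np.fft.fftfreq(N, d=L / N)) ** 2   # k^2 of every Fourier mode

def propagate(A0, B0, t):
    """Eqs. 9-10 with d^2/dx^2 -> -k^2: a rotation by angle k^2 t per mode."""
    a, b = np.fft.fft(A0), np.fft.fft(B0)
    c, s = np.cos(k2 * t), np.sin(k2 * t)
    return (np.real(np.fft.ifft(c * a + s * b)),
            np.real(np.fft.ifft(-s * a + c * b)))

A0, B0 = np.exp(-x**2), np.zeros_like(x)
A, B = propagate(A0, B0, t=5.0)
print("M0(0) =", np.sum(A0**2 + B0**2) * (L / N))
print("M0(5) =", np.sum(A**2 + B**2) * (L / N))      # identical to machine precision
```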
Now we want to find some conserved properties of the fields; these are candidates for a physical interpretation. In the search for conserved quantities we must eliminate the space dependence, which we do by integrating the fields, and functions of them, over all space. In order to have such integrals mathematically well defined, we impose the supplementary condition that the fields be confined: the fields and all their derivatives must vanish at infinity,
$$\partial _x^nA(\pm \infty ,t)=\partial _x^nB(\pm \infty ,t)=0,\qquad n=0,1,2,\ldots .$$
(11)
Actually we need the stronger condition that they should vanish faster than any power of $`x`$.
Before we identify the conserved quantities, we need a couple of definitions. Given two fields $`A(x,t)`$ and $`B(x,t)`$, we define their associated $`n^{th}`$-order density $`ℛ_n(x,t)`$ and $`n^{th}`$-order current $`𝒫_n(x,t)`$ (n-density and n-current for short) by
$`ℛ_n(x,t)`$ $`=`$ $`(\partial _x^nA)^2+(\partial _x^nB)^2,`$ (12)
$`𝒫_n(x,t)`$ $`=`$ $`\partial _x^nA\,\partial _x^{n+1}B-\partial _x^nB\,\partial _x^{n+1}A,\qquad n=0,1,2,\ldots `$ (13)
The reason for the names chosen is that if $`A(x,t)`$ and $`B(x,t)`$ are solutions of Eqs. 1–2, then these quantities obey a continuity equation,
$$\partial _tℛ_n(x,t)+2\partial _x𝒫_n(x,t)=0.$$
(14)
This can be proven directly by performing the derivatives and substituting from Eqs. 1–2. We can now prove that the space-integrated n-densities and n-currents are conserved. That is, the quantities $`M_n`$ and $`P_n`$ defined as
$`M_n`$ $`={\displaystyle \int _{-\infty }^{\infty }}ℛ_n(x,t)\,dx,`$ (15)
$`P_n`$ $`={\displaystyle \int _{-\infty }^{\infty }}𝒫_n(x,t)\,dx,`$ (16)
are such that
$$\partial _tM_n=0,\qquad \partial _tP_n=0.$$
(17)
These equations can be proven by taking the time derivative of the products in Eqs. 12–13, substituting from Eqs. 1–2, and finally performing appropriate integrations by parts, with vanishing boundary terms due to Eq. 11. However, the first of these equations follows more easily from the continuity equation:
$$\partial _tM_n=\int _{-\infty }^{\infty }\partial _tℛ_n(x,t)\,dx=-2\int _{-\infty }^{\infty }\partial _x𝒫_n(x,t)\,dx=-2\,𝒫_n(x,t)|_{-\infty }^{\infty }=0,$$
(18)
where, again, the last term vanishes because of Eq. 11. For later reference it is convenient to perform $`n`$ integrations by parts on the densities and currents in Eqs. 15–16. The boundary terms in all these integrations vanish, and we get the results
$`M_n`$ $`=(-1)^n{\displaystyle \int _{-\infty }^{\infty }}\left(A\,\partial _x^{2n}A+B\,\partial _x^{2n}B\right)dx,`$ (19)
$`P_n`$ $`=(-1)^n{\displaystyle \int _{-\infty }^{\infty }}\left(A\,\partial _x^{2n+1}B-B\,\partial _x^{2n+1}A\right)dx.`$ (20)
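A quick numerical check of these conservation laws can be made with the exact spectral propagator sketched above, monitoring the discrete versions of $`M_0`$, $`P_0`$, and $`M_1`$ for a moving wave packet (again a self-contained sketch with arbitrary parameters):

```python
import numpy as np

N, L = 256, 40.0
dx = L / N
x = np.arange(N) * dx - L / 2
k2 = (2 * np.pi * np.fft.fftfreq(N, d=dx)) ** 2

def propagate(A0, B0, t):                    # exact free evolution, mode by mode
    a, b = np.fft.fft(A0), np.fft.fft(B0)
    c, s = np.cos(k2 * t), np.sin(k2 * t)
    return (np.real(np.fft.ifft(c * a + s * b)),
            np.real(np.fft.ifft(-s * a + c * b)))

def d(f):                                    # centered first derivative, periodic
    return (np.roll(f, -1) - np.roll(f, 1)) / (2 * dx)

M0 = lambda A, B: np.sum(A**2 + B**2) * dx
P0 = lambda A, B: np.sum(A * d(B) - B * d(A)) * dx
M1 = lambda A, B: np.sum(d(A)**2 + d(B)**2) * dx

A0 = np.exp(-x**2) * np.cos(2 * x)           # wave packet with nonzero mean momentum
B0 = np.exp(-x**2) * np.sin(2 * x)
for t in (0.0, 1.0, 3.0):
    A, B = propagate(A0, B0, t)
    print(f"t={t:3.1f}  M0={M0(A,B):.6f}  P0={P0(A,B):.6f}  M1={M1(A,B):.6f}")
```

All three printed values stay constant in $`t`$, in accordance with Eq. 17.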
III. PHYSICAL FEATURES
In the last section we identified several mathematical features of the fields $`A(x,t)`$ and $`B(x,t)`$ determined by Eqs. 1–2. Now we want to investigate what physical systems these fields may describe. A natural thing to do is to associate the conserved properties of the fields with permanent features of the physical system.
We have seen that if we know the fields at a particular time, then the solutions are determined for all times. However, there is a large class of possible initial conditions, producing different solutions to the field equations. We will identify these different solutions with different states of the same physical system. All these solutions share the property of invariance under the space and time translations $`x\rightarrow x+L`$, $`t\rightarrow t+T`$; that is, it is irrelevant where and when the system is considered. Such a system is free, that is, noninteracting with any external system.
The fields have an inescapable and inherent time dependence; that is, there are no static $`A(x,t)`$ and $`B(x,t)`$ fields. Clearly, if the fields have no time dependence, that is, if $`\partial _tA=\partial _tB=0`$, the only solution consistent with the asymptotic conditions of Eq. 11 is $`A(x,t)=B(x,t)=0`$. In electromagnetism we have static fields that are generated by static electric charges and currents; our field equations have no such sources, and therefore we have no static solutions. The fields $`A`$ and $`B`$ must then have a time dependence. There are, however, solutions with time-varying fields such that the n-densities and n-currents are time independent. Let us call them stationary solutions. From Eqs. 12–13 we can find that the condition $`\partial _tℛ_n(x,t)=0`$ is satisfied if
$`\partial _tA(x,t)=`$ $`EB(x,t),`$ (21)
$`\partial _tB(x,t)=`$ $`-EA(x,t),`$ (22)
where $`E`$ is some constant; and $`\partial _t𝒫_n(x,t)=0`$ is valid if
$`\partial _x^2A(x,t)`$ $`=-EA(x,t),`$ (23)
$`\partial _x^2B(x,t)`$ $`=-EB(x,t).`$ (24)
To reach this last condition we have used Eqs. 1–2. These two sets of equations are the sufficient (necessary?) conditions for the stationary states and are consistent with Eqs. 1–2; indeed, the second set can be obtained by using the first one in Eqs. 1–2. Each stationary solution is then characterized by a real constant $`E`$ (we will see later that $`E>0`$). Stationary solutions with different constants are linearly independent, and linear combinations of stationary solutions are not themselves stationary: if $`E_1`$ and $`E_2`$ characterize the solutions $`(A_1,B_1)`$ and $`(A_2,B_2)`$, one can prove that $`(c_1A_1+c_2A_2,c_1B_1+c_2B_2)`$ is not a stationary solution (although it is of course a solution of Eqs. 1–2). In order to study the properties of these solutions it is very useful to notice, from Eqs. 23–24, that the differential operator $`\partial _x^2`$, when applied to the stationary solutions, can simply be replaced by the constant $`-E`$. With this we can immediately give the stationary state in terms of some initial stationary state, replacing $`\partial _x^2\rightarrow -E`$ in Eqs. 9–10:
$`A(x,t)=`$ $`\mathrm{cos}\left(Et\right)A(x,0)+\mathrm{sin}\left(Et\right)B(x,0),`$ (25)
$`B(x,t)=`$ $``$ $`\mathrm{sin}\left(Et\right)A(x,0)+\mathrm{cos}\left(Et\right)B(x,0).`$ (26)
In the “$`A,B`$ plane”, the stationary states at fixed $`x`$ rotate on a circle of radius $`\sqrt{ℛ_0}`$ with constant angular velocity $`E`$. With respect to time the stationary solutions oscillate with (time) frequency $`E`$, and with respect to space they oscillate with (space) frequency $`\sqrt{E}`$, as can be seen from the solutions of Eqs. 23–24. Here we must impose the condition $`E>0`$ in order to avoid divergent solutions.
In the stationary states, the n-densities and n-currents have no time dependence. What about their spatial dependence? For the n-currents the answer is immediately given by the continuity equation 14:
$$\partial _tℛ_n(x,t)=0\implies \partial _x𝒫_n(x,t)=0.$$
(27)
The n-currents in the stationary states are therefore constants, independent of time and space. To find the value of these constants we can use the replacement $`\partial _x^2\rightarrow -E`$ in Eq. 13 to find
$$𝒫_n=E𝒫_{n-1}\implies 𝒫_n=E^n𝒫_0.$$
(28)
The n-densities are, as seen, time independent, but in general they may depend on $`x`$. With the same replacement $`\partial _x^2\rightarrow -E`$ we can prove that (dropping the explicit time variable $`t`$)
$$ℛ_n(x)=\begin{cases}E^nℛ_0(x) & n=0,2,4,\ldots \\ E^{n-1}ℛ_1(x) & n=1,3,5,\ldots \end{cases}$$
(29)
Furthermore, $`ℛ_1`$ can be given in terms of $`ℛ_0`$, because $`\partial _xℛ_1=\partial _x((\partial _xA)^2+(\partial _xB)^2)=2\partial _xA\,\partial _x^2A+2\partial _xB\,\partial _x^2B=-E(2A\partial _xA+2B\partial _xB)=-E\partial _x(A^2+B^2)=-E\partial _xℛ_0`$; therefore $`ℛ_1=E(C-ℛ_0)`$, with an arbitrary constant $`C`$. This constant must be such that $`C>\mathrm{max}ℛ_0`$ to guarantee the positivity of $`ℛ_1`$.
These stationary solutions oscillate for all time and in all space. They are similar to the electromagnetic plane-wave solutions of Maxwell's equations, and the same caveat applies: such solutions can represent a physical system only approximately. Strictly speaking they are unphysical, because they imply an infinitely extended system, and the conserved quantities of Eqs. 15–16 are meaningless for them. However, they are useful for representing approximate situations and, most important, they are linearly independent and can therefore be superposed to construct wave packets of any desired shape and extension. There is, however, an important difference between the wave-packet solutions of Maxwell's equations and the propagating “wave packet” solutions of our Eqs. 1–2. There exist electromagnetic pulses or packets of arbitrary shape that propagate without changing shape; this is not possible in our case. Indeed, we can show that Eqs. 1–2 do not admit solutions of the type $`A=f(x+vt),B=g(x+vt)`$ where $`f`$ and $`g`$ are arbitrary (differentiable) functions; only sines and cosines are acceptable, and these are the stationary solutions already mentioned. Therefore the shape of our field distributions must change during the time evolution. In other words, the fields representing a quantum system described by Eqs. 1–2 are dispersive: localization and shape of the distributions are not permanent features of quantum systems.
We will now see, for arbitrary states, how the conserved quantities behave under the transformations of the fields. Let us first consider the transformation described by Eqs. 3–4. If the fields $`A(x,t)`$ and $`B(x,t)`$ have the n-densities and n-currents $`ℛ_n(x,t)`$ and $`𝒫_n(x,t)`$, with corresponding conserved quantities $`M_n`$ and $`P_n`$, then the transformed fields $`A^{\prime }(x,t)`$ and $`B^{\prime }(x,t)`$ given by Eqs. 3–4 will correspond to $`ℛ_n^{\prime }(x,t)=(c^2+s^2)ℛ_n(x,t)`$ and $`𝒫_n^{\prime }(x,t)=(c^2+s^2)𝒫_n(x,t)`$, and also $`M_n^{\prime }=(c^2+s^2)M_n`$ and $`P_n^{\prime }=(c^2+s^2)P_n`$. Therefore this transformation produces just a change of scale, which can be canceled by a numerical factor multiplying the fields. Furthermore, if the constants are such that $`c^2+s^2=1`$, then the transformation is irrelevant for the physical quantities represented by the n-densities and n-currents and their conserved quantities.
The next transformation, Eqs. 5–6, gives much more information about the physical nature of the conserved quantities. This transformation represents either the observation of the same physical system from a reference frame moving with velocity $`v`$ or, alternatively, the same physical system “boosted” with a velocity $`v`$. Let us first consider the 0-density. Although the fields are mixed by this transformation, this density remains unchanged in shape and is just boosted with velocity $`v`$:
$$ℛ_0^{\prime }(x,t)=A^{\prime 2}(x,t)+B^{\prime 2}(x,t)=ℛ_0(x-vt,t).$$
(30)
This suggests that the 0-density $`ℛ_0(x,t)`$ represents the spatial localization of the physical system described by the fields $`A(x,t)`$ and $`B(x,t)`$. We can always scale the fields such that the conserved quantity associated with the 0-density takes the value $`M_0=1`$. With this normalization, the 0-density can be thought of as a distribution of localization. Let us now consider the 0-current $`𝒫_0(x,t)`$. Using the transformation of Eqs. 5–6 we obtain, after straightforward manipulations,
$$𝒫_0^{\prime }(x,t)=𝒫_0(x-vt,t)+\frac{v}{2}ℛ_0(x-vt,t),$$
(31)
and the corresponding conserved quantities, assuming the normalization $`M_0=1`$,
$$P_0^{\prime }=P_0+\frac{v}{2}.$$
(32)
For the 1-density we get:
$$ℛ_1^{\prime }(x,t)=ℛ_1(x-vt,t)+v𝒫_0(x-vt,t)+\left(\frac{v}{2}\right)^2ℛ_0(x-vt,t),$$
(33)
and the corresponding conserved quantities are
$$M_1^{\prime }=M_1+vP_0+\left(\frac{v}{2}\right)^2.$$
(34)
The conserved quantities $`P_0`$ and $`M_1`$ can be consistently associated with the momentum and kinetic energy of the system (with mass $`m=1/2`$), because under a boost of velocity $`v`$ they change accordingly. Indeed, if $`P_0=\frac{1}{2}u`$ and $`M_1=\frac{1}{4}u^2`$, then $`P_0^{\prime }=\frac{1}{2}(u+v)`$ and $`M_1^{\prime }=\frac{1}{4}(u+v)^2`$. For the remaining n-densities and n-currents we can also find their transformation properties under a boost, but there are no remaining particle observables to which they can be related. Furthermore, one must be cautious in identifying the conserved properties of the quantum system with the conserved properties of a particle. For instance, the relation between kinetic energy and momentum valid for classical particles is not always true for the quantum system: one can prove that in general $`M_1\ge P_0^2`$, and not $`M_1=P_0^2`$ as one would expect for a particle of mass $`m=1/2`$. Of course, the identification of a delocalized quantum system, for instance one in a stationary state, with a localized classical particle becomes very suspect.
We can obtain further confirmation that the system is moving with momentum $`P_0`$ by calculating the time derivative of the center of the localization distribution defined as
$$X=\int _{-\infty }^{\infty }xℛ_0(x,t)\,dx.$$
(35)
The center of the distribution moves with constant velocity $`2P_0`$:
$`\partial _tX`$ $`={\displaystyle \int _{-\infty }^{\infty }}x\,\partial _tℛ_0(x,t)\,dx=-2{\displaystyle \int _{-\infty }^{\infty }}x\,\partial _x𝒫_0\,dx`$ (36)
$`=-2\,x𝒫_0|_{-\infty }^{\infty }+2{\displaystyle \int _{-\infty }^{\infty }}𝒫_0\,dx=2P_0,`$ (37)
where we have used the continuity equation, and where the boundary term of the integration by parts vanishes because the decay in Eq. 11 is faster than any power of $`x`$. We can use the relation between the velocity of the center of the distribution and the conserved momentum to define the mass of the free quantum system: $`m=P_0/\partial _tX`$. Notice that with a similar calculation we reach the conclusion that the centers of the distributions corresponding to the n-densities $`ℛ_n(x,t)`$ all move with constant “velocities” equal to $`2P_n`$. The fact that these velocities are not all equal is indicative of the dispersive nature of the evolution of the quantum system.
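This linear drift is also easy to verify with the exact propagator sketched above: for a normalized packet ($`M_0=1`$), $`X(t)`$ should grow with slope $`2P_0`$. A self-contained sketch (the packet parameters are arbitrary):

```python
import numpy as np

N, L = 512, 80.0
dx = L / N
x = np.arange(N) * dx - L / 2
k2 = (2 * np.pi * np.fft.fftfreq(N, d=dx)) ** 2

def propagate(A0, B0, t):
    a, b = np.fft.fft(A0), np.fft.fft(B0)
    c, s = np.cos(k2 * t), np.sin(k2 * t)
    return (np.real(np.fft.ifft(c * a + s * b)),
            np.real(np.fft.ifft(-s * a + c * b)))

# Normalized packet with P0 = 2, hence drift velocity 2*P0 = 4.
A0 = np.exp(-x**2 / 2) * np.cos(2 * x)
B0 = np.exp(-x**2 / 2) * np.sin(2 * x)
norm = np.sqrt(np.sum(A0**2 + B0**2) * dx)
A0, B0 = A0 / norm, B0 / norm

for t in (0.0, 1.5, 3.0):
    A, B = propagate(A0, B0, t)
    X = np.sum(x * (A**2 + B**2)) * dx
    print(f"t={t:3.1f}  X={X: .3f}")        # X = 4 t, even though the packet spreads
```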
We summarize: the fields $`A(x,t)`$ and $`B(x,t)`$ determined by Eqs. 1–2 describe a free quantum system localized according to the density distribution $`ℛ_0(x,t)=A^2(x,t)+B^2(x,t)`$, moving with a constant drift velocity $`2P_0=2\int (A\partial _xB-B\partial _xA)dx`$ and constant kinetic energy $`M_1=\int ((\partial _xA)^2+(\partial _xB)^2)dx`$. By reference to a similar classical system, we can denote this system a free quantum particle. The shape of the distributions changes in time, indicating that localization is not a permanent property of the free quantum system. When the distribution $`ℛ_0(x,t)`$ is localized in a small region, the system is in a particle-like state; when it is delocalized over a large region, the system is in a wave-like state.
IV. EXTERNAL INTERACTION
The field equations given in Eqs. 1–2 can be derived from the Euler-Lagrange equations applied to the following lagrangian density, a functional of the fields and of their space and time derivatives,
$$ℒ=B\,\partial _tA-A\,\partial _tB-(\partial _xA)^2-(\partial _xB)^2.$$
(38)
As usual, terms like $`\partial _tB\,\partial _xA-\partial _tA\,\partial _xB`$ or $`B\,\partial _xA+A\,\partial _xB`$ and other total differentials can be added to this lagrangian density, but they are irrelevant because they make a vanishing contribution to the field equations. From this lagrangian density we can find the canonical field momenta associated with the fields $`A(x,t)`$ and $`B(x,t)`$; it turns out that $`A(x,t)`$ and $`B(x,t)`$ are reciprocally the canonical momenta of each other (up to a minus sign). With these momenta and with the lagrangian density we obtain the canonical hamiltonian density
$$ℋ=(\partial _xA)^2+(\partial _xB)^2.$$
(39)
Here again, total-differential terms can be added that make no contribution when integrated over all space, because of Eq. 11. Notice that $`ℋ=ℛ_1(x,t)`$, and therefore the integrated hamiltonian density is conserved. We should not naively attempt to derive the equations of motion from the hamiltonian formalism using the last expression. This fails because we are dealing here with a nonstandard lagrangian, linear in the “velocities”; for such a lagrangian we cannot solve for the “velocities” in terms of the canonical momenta. We will not treat this constrained hamiltonian field theory here. We have introduced the hamiltonian density, which can be interpreted as an energy density, only for the purpose of guiding us in the introduction of an interaction in our system.
The free system of Eqs. 1–2 is translation invariant, and all regions of space are equivalent. Let us break this translation symmetry: assume now that there are some regions of space where the energy of the system is changed by the action of an external field denoted by $`V(x)`$. We call this a potential field. The presence, or localization, of the system in the regions where the potential field $`V(x)`$ is nonvanishing will change the energy of the system by an amount given by the product of the potential with the density of localization of the system in that region. This last density is given by $`ℛ_0(x,t)`$. Therefore we introduce the interaction of the system with an external potential $`V(x)`$ by simply changing the hamiltonian density to the new expression
$$ℋ=(\partial _xA)^2+(\partial _xB)^2+V(x)\left(A^2+B^2\right).$$
(40)
With this new hamiltonian, which includes the interaction, we get a new lagrangian density
$$ℒ=B\,\partial _tA-A\,\partial _tB-(\partial _xA)^2-(\partial _xB)^2-V(x)\left(A^2+B^2\right),$$
(41)
and with the Euler-Lagrange equations we obtain the equations of motion for the fields describing a quantum particle in a potential,
$`\partial _tA(x,t)=`$ $`-\partial _x^2B(x,t)+V(x)B(x,t),`$ (42)
$`\partial _tB(x,t)=`$ $`\partial _x^2A(x,t)-V(x)A(x,t).`$ (43)
An analysis similar to the one performed for the free system could now be carried out. These equations are invariant under the transformation of Eqs. 3–4, and an observer in a moving frame will see the potential moving, $`V(x-vt)`$, as expected. The time evolution can be determined in a similar way as before, and a similar study of stationary states can be made with the replacement $`\partial _x^2-V(x)\rightarrow -E`$.
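Numerically, the two terms of Eqs. 42–43 invite a standard split-step treatment: the kinetic part is the exact mode-wise rotation used for the free system, and the potential part is, at each grid point, a rotation of $`(A,B)`$ by the angle $`V(x)\,dt`$, since $`\partial _tA=VB`$, $`\partial _tB=-VA`$ is itself a pointwise rotation. A minimal sketch, with a harmonic well as the (arbitrary) example potential:

```python
import numpy as np

N, L = 256, 40.0
dx = L / N
x = np.arange(N) * dx - L / 2
k2 = (2 * np.pi * np.fft.fftfreq(N, d=dx)) ** 2
V = 0.5 * x**2                                 # illustrative potential

def kinetic(A, B, t):                          # exact free rotation per Fourier mode
    a, b = np.fft.fft(A), np.fft.fft(B)
    c, s = np.cos(k2 * t), np.sin(k2 * t)
    return (np.real(np.fft.ifft(c * a + s * b)),
            np.real(np.fft.ifft(-s * a + c * b)))

def potential(A, B, t):                        # dA/dt = V B, dB/dt = -V A, exactly
    c, s = np.cos(V * t), np.sin(V * t)
    return c * A + s * B, -s * A + c * B

A, B = np.exp(-(x - 3.0)**2), np.zeros(N)      # packet displaced in the well
dt = 0.002
for _ in range(2000):                          # Strang splitting
    A, B = potential(A, B, dt / 2)
    A, B = kinetic(A, B, dt)
    A, B = potential(A, B, dt / 2)
print("M0 =", np.sum(A**2 + B**2) * dx)        # still exactly conserved with V
```

Both substeps are exact rotations, so the 0-density integral $`M_0`$ is conserved to machine precision even though the splitting itself is only second-order accurate in $`dt`$.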
V. CONVENTIONAL FORMALISM
Now that we have seen the advantages of describing quantum mechanics as a classical field theory, we must make contact with the conventional formalism. For this we just have to define a complex field $`\mathrm{\Psi }(x,t)=A(x,t)+iB(x,t)`$. From the equations of motion for the fields $`A`$ and $`B`$, given in Eqs. 42–43, we obtain the corresponding equation for $`\mathrm{\Psi }`$. Furthermore, we introduce constants in order to recover physical dimensions for space, time, mass, and energy, and we thereby obtain Schrödinger's equation, in all its glory:
$$i\hbar \frac{\partial \mathrm{\Psi }(x,t)}{\partial t}=-\frac{\hbar ^2}{2m}\frac{\partial ^2\mathrm{\Psi }(x,t)}{\partial x^2}+V(x)\mathrm{\Psi }(x,t).$$
(44)
All the formal expressions found before have a corresponding representation in terms of the complex field; for instance, $`ℛ_0(x,t)=|\mathrm{\Psi }(x,t)|^2`$. The conserved quantities (when $`V=0`$) of Eqs. 19–20 correspond to the expectation values of even and odd powers of an operator,
$`M_n`$ $`=\langle \mathrm{\Psi },(-i\partial _x)^{2n}\mathrm{\Psi }\rangle ,`$ (45)
$`P_n`$ $`=\langle \mathrm{\Psi },(-i\partial _x)^{2n+1}\mathrm{\Psi }\rangle ,`$ (46)
suggesting the association of that operator with the conserved momentum.
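In Fourier space these expectation values are just moments of the power spectrum of $`\mathrm{\Psi }`$, which gives a compact way to evaluate them on a grid and, for example, to check the inequality $`M_1\ge P_0^2`$ mentioned above. A sketch with an arbitrary Gaussian packet:

```python
import numpy as np

N, L = 256, 40.0
dx = L / N
x = np.arange(N) * dx - L / 2
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

psi = np.exp(-x**2) * np.exp(2j * x)            # Psi = A + i B, a moving packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)     # normalize so that M0 = 1

power = np.abs(np.fft.fft(psi))**2 * dx / N     # |Psi(k)|^2, Parseval-normalized
moment = lambda m: np.sum(k**m * power)         # <Psi, (-i d/dx)^m Psi>

print("M0 =", moment(0))                        # 1
print("P0 =", moment(1))                        # mean momentum, 2 for this packet
print("M1 =", moment(2))                        # kinetic energy; note M1 >= P0^2
```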
In the derivation of the Schrödinger equation, or of the coupled equations Eqs. 42–43, from the lagrangian of Eq. 41, or its equivalent in terms of $`\mathrm{\Psi }`$ and $`\mathrm{\Psi }^{\ast }`$, the two fields, either $`A`$ and $`B`$ or $`\mathrm{\Psi }`$ and $`\mathrm{\Psi }^{\ast }`$, must be varied independently. It is not possible to write a lagrangian in terms of only one field that leads to Schrödinger's equation: two fields must be taken. The necessity of two fields is inherent to the formalism, and therefore the use of a complex field in quantum mechanics (or of the equivalent two fields $`A`$ and $`B`$) is different from other cases in physics, for instance electrodynamics, where complex quantities are just a convenience. This necessity of two fields is stated most clearly in the approach presented here, where the elegance of the Lagrange formalism becomes a very convenient way of introducing an external potential.
Another advantage of the formalism in terms of the two fields $`A`$ and $`B`$, instead of one complex field $`\mathrm{\Psi }`$, is the possibility of an enlightening analogy with classical mechanics. The phase space of a classical particle (on a line) is two dimensional and is spanned by the dynamical variables $`X`$ and $`P`$, whereas the “phase space” for the quantum system is also two dimensional and is spanned by the dynamical fields $`A`$ and $`B`$. This parallel can be continued by noticing that the canonical transformation corresponding to a rotation in phase space,
$`X^{\prime }=`$ $`cX-sP,`$ (47)
$`P^{\prime }=`$ $`sX+cP,`$ (48)
with $`c^2+s^2=1`$, is equivalent to the transformation of the fields in Eqs. 3–4. Furthermore, $`A`$ and $`B`$, as well as $`X`$ and $`P`$, are canonical conjugates of each other (up to a minus sign).
VI. INTERPRETATION
Quantum mechanics loses most of its strange and awkward aspects when it is seen as a field theory, and it becomes a simple subject in comparison with the unsurpassable conceptual difficulties found when we look at it as a theory of particles. The advantages of presenting quantum mechanics as a field theory are obvious. Encouraged by this, we now want to face the question of whether this is just a formal advantage or whether this device can become an interpretation of quantum mechanics.
To turn this into an interpretation we must decide to choose the fields as the primary ontology; that is, the fields are the really existent objects that the theory must describe, instead of taking the particles as the primary objects to be described by quantum mechanics. In a simplified manner, we may choose between two extreme interpretations: either the coupled fields $`A(x,t)`$ and $`B(x,t)`$ really exist and are determined by field equations with quite reasonable properties, no stranger than those of the electromagnetic fields; or, on the contrary, the particles really exist, and the fields are a mathematical construction corresponding to the probability of finding, or of localization of, such particles, which must then have the strange, schizophrenic property of being at different places at the same time, sometimes appearing as point-like particles and sometimes as extended waves. In the first option we just have fields as the primary ontology, and “the particles” are not existent objects but a name, learned from the classical physics of macroscopic objects, that we may use to denote a set of emergent properties of the fields. In this interpretation, what we usually call “an electron” is not a point-like particle that is detected somewhere according to a “probability cloud”: it is the cloud itself.
There have been several attempts at interpretations based on different choices for the primary ontology. Without going into detail, we just mention some of them. The well-known probability interpretation of the wave function proposed by Max Born favors a particle ontology: in this case the field $`\mathrm{\Psi }`$ does not carry energy and is not really existent in physical space. Opposite to it we find Schrödinger's interpretation, proposing that only the wave function has objective existence. In between we find Madelung's hydrodynamical interpretation, no longer considered, probably because it does not clearly state what precisely is the fluid described by the Schrödinger equation. A hybrid interpretation was proposed by L. de Broglie with his “double solution”, suggesting a mixed particle and field ontology. Another idea originated by L. de Broglie is based on a particle ontology with a pilot wave determining the particles' motion; this interpretation was successfully taken up by D. Bohm in his causal quantum mechanics (Bohmian mechanics), which has received much attention in recent years. We can safely say that none of the mentioned interpretations is fully satisfactory, because they all face some difficulties, and probably none of them is totally excluded at present. It is desirable to present all these alternatives instead of taking, as is unfortunately often done, an instrumentalist position that amounts to an evasion of the problem.
Since the scheme of this work clearly favors Schrödinger's interpretation, we must recall the most serious problem faced by this choice. For a single quantum system, the formalism presented can be interpreted along the lines shown by Schrödinger. However, for a compound system with many degrees of freedom we must introduce more variables upon which the fields depend: the fields no longer “live” in physical space but in a $`3N`$-dimensional configuration space. Furthermore, if we want to give the fields an ontological character, then we must somehow resolve the ambiguity in the fields apparent in Eqs. 3–4. Is there a way to fix the phase and define the fields without ambiguity? Or can we tolerate the ambiguity and define the primary ontology as an equivalence class of fields? Another item that requires much thought is the determination of what precisely the observable properties of the fields are, and of the meaning of the measurement process. The fields are, apparently, not directly observable, but this would not be a serious difficulty for the interpretation, considering that any measurement is a complex process involving macroscopic apparatus. The electron, as a particle in the conventional interpretation of quantum mechanics (whatever that is), is likewise not directly observable.
VII. CONCLUSIONS
It has been shown that the consideration of quantum mechanics as a classical field theory can have significant advantages. In this way, very confusing concepts, like the probability of observing, or of existence of, a particle somewhere, or the incompatibility between position and momentum, are avoided. Quantum mechanics, as a field theory, is no stranger than classical electrodynamics. The extension of this work to realistic three-dimensional space is trivial and can be performed by the substitutions $`x\rightarrow 𝐫`$ and $`\partial _x\rightarrow \nabla `$.
The ideas presented in this work suggest a revival of the interpretations of quantum mechanics based on a field ontology, similar to the original interpretation proposed by Schrödinger. These interpretations have, however, serious difficulties in the case of a system with many degrees of freedom, because the dimension of the space on which the fields have support is not equal to the dimension of physical space.
We would like to acknowledge helpful discussions with H. Mártin and P. Sisterna. This work has received partial support from the “Consejo Nacional de Investigaciones Científicas y Técnicas” (CONICET), Argentina.
# 𝛾 Doradus Stars: Defining a New Class of Pulsating Variables
## 1 Introduction
Cousins & Warren (1963) discovered that the bright F0 v star $`\gamma `$ Doradus was variable over a range of several hundredths of a magnitude with two principal periods ($`0\stackrel{\mathrm{d}}{\mathrm{.}}733`$ and $`0\stackrel{\mathrm{d}}{\mathrm{.}}757`$). $`\gamma `$ Doradus has an absolute magnitude similar to that of a $`\delta `$ Scuti star, but is somewhat cooler, and thus for many years it was deemed a “variable without a cause”. Cousins (1992) stated: “The suggested W-UMa type no longer seems a possibility, but rotation with starspots and/or tidal distortion might account for the variability. The light-curve and dual periodicity would favor some form of pulsation, but the period is much longer than expected for a $`\delta `$ Scuti star.” Balona et al. (1994) tried to model the star using two starspots and differential rotation. They found that the large size of the required spots and the high stability of their periods did not bode well for the starspot hypothesis. Furthermore, they found evidence of a third period, later confirmed by Balona et al. (1996), which further diminishes the likelihood of the starspot hypothesis.
9 Aurigae (= HD 32537), a star very similar to $`\gamma `$ Doradus, was first noted to be variable by Krisciunas & Guinan (1990). Krisciunas et al. (1993) found evidence for two photometric periods between 1.2 and 3 days. Using infrared and IUE data, Krisciunas et al. (1993) found no evidence for a close companion or a lumpy ring of dust surrounding the star, but they could not rule out the idea of starspots.
Over the past decade, more than 40 variable stars with spectral types and luminosity classes similar to $`\gamma `$ Doradus have been discovered that exhibit variability on a time scale that is an order of magnitude slower than $`\delta `$ Scuti stars. Mantegazza et al. (1994), Krisciunas (1994), and Hall (1995) suggested that these objects may constitute a new class of variable stars. Breger & Beichbuchner (1996) investigated whether any known $`\delta `$ Scuti stars also showed $`\gamma `$ Doradus-type behavior and found no clear cut examples of stars that show both “fast” and “slow” variability; Fig. 1 of their paper nicely illustrates the locations of the two kinds of variables in the color-magnitude diagram. However, not all of their $`\gamma `$ Doradus stars are regarded as bona fide members of the group.
Krisciunas (1998) provides a good summary of our knowledge of $`\gamma `$ Doradus stars as a new class, but to date there is no publication in the refereed journal literature which summarizes and “defines” the characteristics of the class itself. It was quite evident early on that significant advancement in the understanding of the physical nature of $`\gamma `$ Doradus stars could be made only on the basis of a large observational effort. Hence, activities were concentrated in international multi-longitude photometric and spectroscopic campaigns.
On the basis of extensive photometry, radial velocities, and line-profile variations, it has been proven that 9 Aurigae (Krisciunas et al. 1995a, Zerbi et al. 1997a, Kaye 1998a), $`\gamma `$ Doradus (Balona et al. 1996), HD 164615 (Zerbi et al. 1997b; Hatzes 1998), HR 8330 (Kaye et al. 1999), HD 62454 and HD 68192 (Kaye 1998a), and HR 8799 (Zerbi et al. 1999) are indeed pulsating variable stars. Given the nature of the observed variability in these stars, the cause must be high-order ($`n`$), low-degree ($`\ell `$), non-radial $`g`$-modes. We assert this on the basis of evidence for non-radial $`g`$-modes and the lack of convincing evidence for other explanations, including starspots. Furthermore, we argue that since this small (but growing) group of objects all have similar physical characteristics and show broad-band light- and line-profile variations resulting from the same physical mechanism, they form a new class of variable stars. In this paper, we indicate the cohesiveness of this group and its differences from other variable star classes. Finally, we provide a set of criteria by which new candidates may be judged.
## 2 General Characteristics of the Class
Our list of bona fide $`\gamma `$ Doradus stars is complete to April 1999, and all objects of this class have photometric and/or spectroscopic data sets extensive enough to rule out other variability mechanisms. A complete, commented, up-to-date list of all proposed candidates for this group, as well as their observational history, is kept by Handler and Krisciunas at the World Wide Web site: http://www.astro.univie.ac.at/~gerald/gdor.html.
Table 1 lists the observed quantities of each of the 13 objects used to define this new class of variable stars. Column 1 gives the most common name of each object. Column 2 provides the best available value of $`(b-y)`$; columns 3 and 4 list the average apparent visual magnitude of each object ($`<V>`$) and the best determined spectral type. Column 5 lists the best available value of the projected equatorial velocity, $`v\mathrm{sin}i`$, in km s<sup>-1</sup>. Column 6 reports the Hipparcos trigonometric parallax in milli-arcseconds (ESA 1997).
Table 2 presents derived properties of the thirteen objects. Estimates for the total metallicity ($`[Me/H]`$) are derived from the relations of Nissen (1988) and Smalley (1993), which are precise to within 0.1 dex in $`[Me/H]`$, and are listed in Column 2. The absolute visual magnitudes (Column 3) are calculated from the Hipparcos parallaxes. Luminosities, using bolometric corrections listed in Lang (1992) and $`M_{\mathrm{bol},\odot }=4.75`$ (Allen 1973), are presented in Column 4. The effective temperatures are determined from the new calibration of Strömgren photometry by Villa (1998), for which we estimate errors of $`\pm `$ 100 K (Column 5); stellar radii precise to $`\pm 0.05R_{\odot }`$ are then calculated (Column 6). Finally, masses precise to $`\pm 0.03M_{\odot }`$ (internal model error) are inferred by comparison with solar-metallicity evolutionary tracks by Pamyatnykh et al. (1998) (Column 7). The final entry in Table 2 represents the unweighted average of each of the columns; presumably, these are the physical parameters of a “typical” $`\gamma `$ Doradus variable.
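The chain from the observed quantities of Table 1 to the derived quantities of Table 2 is short enough to write down compactly. The sketch below traces it for a single made-up star; the input numbers are placeholders rather than any entry of Table 1, and the bolometric correction, which in practice is interpolated from the tables in Lang (1992), is simply assumed here.

```python
import math

# Hypothetical inputs (placeholders, not an actual Table 1 entry).
V = 6.20          # mean apparent visual magnitude
plx_mas = 25.0    # Hipparcos parallax in milliarcseconds
BC = -0.05        # assumed bolometric correction for an early-F dwarf
Teff = 7000.0     # effective temperature in K (from the photometric calibration)
TEFF_SUN, MBOL_SUN = 5777.0, 4.75

d_pc = 1000.0 / plx_mas                       # distance in parsecs
M_V = V - 5.0 * math.log10(d_pc / 10.0)       # absolute visual magnitude
M_bol = M_V + BC
L = 10.0 ** (0.4 * (MBOL_SUN - M_bol))        # luminosity in solar units
R = math.sqrt(L) * (TEFF_SUN / Teff) ** 2     # radius in solar units (Stefan-Boltzmann)

print(f"d = {d_pc:.1f} pc, M_V = {M_V:.2f}, L = {L:.2f} L_sun, R = {R:.2f} R_sun")
```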
We present a color-magnitude diagram of all 13 stars in Figure 1, using the Hipparcos parallaxes to calculate accurate values of $`M_V`$. The observed zero-age main sequence (Crawford 1975) and the observed edges of the $`\delta `$ Scuti instability strip (Breger 1979) are shown as a solid line and dashed lines, respectively.
The truly intriguing characteristic of $`\gamma `$ Doradus stars is that they are variable; considering the part of the Hertzsprung-Russell diagram in which they lie, previous pulsational models say they should not be. The outer convection zones of these stars are too shallow to generate and sustain a large magnetic dynamo, thus making starspots improbable. Most of the $`\gamma `$ Doradus stars are multi-periodic; the average period is close to 0.8 days. The observed variations are not necessarily stable, and may be highly dynamic (Kaye & Zerbi 1997). Typical amplitudes cluster around 4 percent (= $`0\stackrel{\mathrm{m}}{\mathrm{.}}04`$) in Johnson $`V`$, and may vary during the course of an observing season by as much as a factor of four. For the best-studied stars (e.g., $`\gamma `$ Doradus itself, 9 Aurigae, and HR 8330), line-profile variations with periods equal to the photometric periods have been confirmed (Balona et al. 1996; Kaye 1998a; Kaye et al. 1999). No high-frequency signals have been detected in either the photometry or the spectroscopy, indicating a lack of the $`p`$-mode pulsation common in $`\delta `$ Scuti stars.
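A multiperiodic signal of this kind is typically established through iterative periodogram analysis of the photometric time series. The fragment below is a minimal sketch using a standard Lomb-Scargle periodogram on synthetic, irregularly sampled two-frequency data; the periods, amplitudes, and noise level are illustrative values reminiscent of $`\gamma `$ Doradus itself, not a fit to any real light curve.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 60.0, 400))          # 400 points over a 60-day season
f1, f2 = 1.0 / 0.733, 1.0 / 0.757                 # two close frequencies (cycles/day)
y = (0.020 * np.sin(2 * np.pi * f1 * t) +
     0.015 * np.sin(2 * np.pi * f2 * t) +
     rng.normal(0.0, 0.004, t.size))              # few-mmag noise, in magnitudes

freqs = np.linspace(0.1, 3.0, 20000)              # trial frequencies in cycles/day
power = lombscargle(t, y - y.mean(), 2 * np.pi * freqs)   # takes angular frequencies
best = freqs[np.argmax(power)]
print(f"strongest period: {1.0 / best:.3f} d")    # near 0.733 d
```

In practice the dominant frequency is removed (“prewhitened”) and the periodogram is recomputed to extract the secondary periods, as done in the multi-site campaigns cited above.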
Despite these common properties, a small subset of $`\gamma `$ Doradus stars show remarkably peculiar pulsation characteristics. In several objects [e.g., HD 224945 (Poretti et al. 1998), HD 224638 (Poretti et al. 1994), and 9 Aurigae (see Krisciunas et al. 1995, Zerbi et al. 1997a, Kaye 1998)], amplitude variability of order 50% over a few years is observed. Other objects [e.g., $`\gamma `$ Doradus (Cousins 1992), HD 164615 (Zerbi et al. 1997b), and HR 8799 (Zerbi et al. 1999)] show amplitude modulation selectively located at the moment of maximum brightness, a characteristic of variability that is new to the field of stellar pulsation. Still other objects (e.g., HD 68192) show remarkably constant periods and amplitudes over long time scales. Clearly, these peculiarities within the $`\gamma `$ Doradus class need many more long-term observations to be explained.
## 3 Defining a New Class
We argue that the thirteen stars named and described above form a homogeneous set, based on their physical characteristics and their common mechanism of variability, and thus form the basis for a new class of variable stars.
Following the informal discussions at the “Astrophysical Applications of Stellar Pulsation” conference (Stobie & Whitelock 1995), held in 1995 at Cape Town, South Africa, and recent papers in the literature (see, e.g., Krisciunas et al. 1993, Balona et al. 1996, Zerbi et al. 1997a, Poretti et al. 1997, Kaye 1998a, Kaye et al. 1999), we propose that this type of variable star henceforth be known and recognized by the name $`\gamma `$ Doradus variable stars. The extent of the $`\gamma `$ Doradus phenomenon, as it is currently known, consists of variable stars with an implied range in spectral type A7–F5 and in luminosity class IV, IV-V, or V; their variations are consistent with the model of high-order ($`n`$), low-degree ($`\ell `$), non-radial, gravity-mode oscillations. Although it is conceivable that variations such as those of the stars in this class may occur outside of this region, it is likely that other mechanisms of variability would then dominate; this combination of spectral type, luminosity class, and (most importantly) variability mechanism thus forms a suitable definition.
From an observational point of view, the $`g`$-mode oscillations seen in $`\gamma `$ Doradus variables are characterized by periods between 0.4 and 3 days and peak-to-peak amplitudes below $`0\stackrel{\mathrm{m}}{\mathrm{.}}1`$ in Johnson $`V`$. The presence of multiple periods and/or amplitude modulation is common among these stars, but is not included in the formal definition presented here. Spectroscopic variations are also observed, and these manifest themselves both as low-amplitude radial-velocity variations (that cannot be attributed to duplicity effects) and as photospheric line-profile variations.
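For quick screening of candidates, the photometric part of this definition reduces to a handful of numeric cuts. The sketch below is a minimal Python illustration of our own (the function name, record layout, and the coarse spectral-type check are assumptions, not part of any published pipeline); the exclusion criteria of the next paragraph still require spectroscopy.

```python
def passes_photometric_screen(periods_days, amp_pp_mag, spectral_type):
    """Photometric gamma Doradus screen: at least one period between 0.4 and
    3 days, a peak-to-peak Johnson V amplitude below 0.1 mag, and a spectral
    type in the A7-F5 range quoted in the class definition."""
    strip = {"A7", "A8", "A9", "F0", "F1", "F2", "F3", "F4", "F5"}
    period_ok = any(0.4 <= p <= 3.0 for p in periods_days)
    return spectral_type in strip and amp_pp_mag < 0.1 and period_ok

# e.g. a multi-periodic F0 star with a 0.75-day period and 0.04 mag amplitude
print(passes_photometric_screen([0.75, 0.89], 0.04, "F0"))   # True
```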
In addition to these features, we stress that any object put forth for consideration as a confirmed $`\gamma `$ Doradus variable star must not vary exclusively by other mechanisms, including $`p`$-mode pulsations (e.g., $`\delta `$ Scuti stars); rotational modulation of dark, cool, magnetically-generated starspots; rotational modulation of bright, hot, abundance-anomaly regions; duplicity-induced variations; or other rotational effects. Obviously, dual-nature objects (e.g., pulsating stars showing both $`\gamma `$ Doradus– and $`\delta `$ Scuti-type behavior) must not be rejected. Prime candidates for $`\gamma `$ Doradus stars should therefore not be primarily variable due to the rotational modulation occurring in Am stars, Ap stars, Fm stars, RS CVn stars, or BY Dra stars. However, candidates may be members of a spectroscopically-defined class (e.g., $`\lambda `$ Boötis stars; see, e.g., Gray & Kaye 1999a).
## 4 Concluding Perspective
$`\gamma `$ Doradus stars constitute a new class of variable stars because they all have about the same mass, temperature, luminosity, and the same mechanism of variability. They are clearly not a sub-class of any of the other A/F-type variable or peculiar stars in this part of the HR diagram, and may offer additional insight into stellar physics when they are better understood (e.g., they may represent the cool portion of an “iron opacity instability strip” currently formed by the $`\beta `$ Cephei stars, the SPB stars, and the subdwarf B stars; they may also offer insight into the presence of $`g`$-modes in solar-like stars). Modeling by Kaye et al. (1999) is beginning to shed light on the theoretically required interior structure and on the specific physics driving the observed variability, but much theoretical work lies ahead.
To understand the behavior of $`\gamma `$ Doradus stars and to investigate how they differ from the $`\delta `$ Scuti variables and spotted stars, we need to investigate a number of star clusters of differing ages, perhaps up to as old as 1 Gyr. The fact that the Hyades has no $`\gamma `$ Doradus variables (Krisciunas et al. 1995b) may be a quirk of the Hyades, rather than proof that stars $``$ 600 Myr old are too old to exhibit $`\gamma `$ Doradus-type behavior. Clearly, the “outliers” of the $`\gamma `$ Doradus candidates that would extend the limits of the region of the HRD in which these new variables are found should be checked carefully for both photometric and spectroscopic evidence indicative of pulsations versus starspots, duplicity effects, and other causes of variability not consistent with the definition presented above (see, e.g., Aerts et al. 1998). Finally, additional observations of individual $`\gamma `$ Doradus stars are clearly warranted in order to understand better the nature of these objects. After all, thirteen objects does not an instability strip make. In the meantime, we must keep an open and critical mind about these variables.
## 5 Acknowledgments
This work was performed under the auspices of the U. S. Department of Energy by the Los Alamos National Laboratory under contract No. W-7405-Eng-36. We gratefully acknowledge the unpublished spectral types of some of the stars in this paper from R. O. Gray, and we thank Holger Pikall for computing evolutionary models upon request. ABK also gratefully acknowledges Drs. Guzik and Bradley for reading various drafts of this paper. GH was partially supported by the Austrian Fonds zur Förderung der wissenschaftlichen Forschung under grant No. S7304-AST.
# Photoluminescence Detected Doublet Structure in the Integer and Fractional Quantum Hall Regime
## Abstract
We present here the results of polarized magneto-photoluminescence measurements on a high-mobility single heterojunction. The presence of a doublet structure over a large magnetic field range (2$`>`$$`\nu `$$`>`$1/6) is interpreted as possible evidence for the existence of magneto-roton minima of the charge-density waves. This is understood as an indication of strong electronic correlations even in the IQHE limit.
The use of magneto-photoluminescence (MPL) spectroscopy to study the integer (IQHE) and fractional quantum Hall effects (FQHE) has attracted considerable interest in recent years. In the case of the IQHE, the appearance of the gap in the spectrum at the Fermi energy is caused by the quantization of the magnetic Landau levels or the quantization of the spin energy levels and is essentially a single-particle phenomenon. In the case of the FQHE, the appearance of the gap is understood as a result of condensation of the 2DEG into an incompressible quantum liquid (IQL) due to strong electron-electron correlations, which takes place at fractional filling factors $`\nu `$=p/q$`<`$1.
One of the most significant phenomena that occur in the FQHE regime is the emergence of low-lying charge-density (CD) waves that display characteristic magneto-roton (MR) minima at a wave vector close to the inverse magnetic length. At large wave vectors, neutral CD-wave excitations consist of pairs of fractionally charged quasiparticles that are associated with the energy gaps of the IQL. More recently, it was suggested that the strong correlations between electrons are important in explaining the structure observed in photoluminescence at $`\nu `$=1, and this state is considered to be a strongly correlated state, similar to the incompressible states at fractional filling factors. It was shown that the system always has a gap, even when the single-particle gap vanishes (i.e. when g=0), as a result of electron-electron repulsion. It has also been shown that, in the case when the electron Zeeman energy is large, the low energy states at $`\nu `$=1 are the excitonic states in which the spin-↓ lowest Landau level is filled and the valence-band hole binds with a single spin-↑ electron to form an exciton. In analyzing the PL spectra obtained from a 2DEG in the FQHE regime, the appearance of a doublet structure in the photoluminescence (PL) spectrum around $`\nu `$=2/3 was interpreted by Apalkov and Rashba as evidence for the appearance of an indirect, single MR transition from an extensive area in k space for k$`\mathrm{\ell }`$<sub>B</sub>$`\sim `$1 (where $`\mathrm{\ell }`$<sub>B</sub> is the magnetic length). The formation of the rotons is due to the reduction, at large wave vectors, of the excitonic binding energy between the electron and the hole. The MR minimum is a precursor to the gap collapse associated with the Wigner crystal instability and is connected with the excitonic attraction between fractionally charged quasiparticles. The calculations reported for the case of the filling factor $`\nu `$=1/3 showed that the dispersion of the excitons is strongly suppressed by their coupling to the IQL and the lowest branch of the exciton spectrum passes completely below the MR spectrum. Another important result of note is the fact that, even if its theoretical prediction is based on calculations performed in the case of fractional filling factor $`\nu `$=1/3, the existence of the MR peak was observed in the MPL also at filling factors that do not involve the FQHE. In addition, evidence of excitonic binding and roton minima formation was found by Pinczuk et al. for the filling factors $`\nu `$$`>`$1 and $`\nu `$=1/3 from inelastic light scattering studies performed on the 2DEG formed in high-mobility GaAs structures. Karrai et al. analyzed the magneto-transmission spectra obtained from a quasi-three-dimensional electron system subjected to a parallel magnetic field and found evidence for the existence of a MR excitation for a wide range of magnetic fields.
In this report, we present the results of MPL measurements of an MBE-grown high quality GaAs/Al<sub>0.3</sub>Ga<sub>0.7</sub>As single heterojunction (SHJ) with a dark electron density of 1.2$`\times `$10<sup>11</sup> cm<sup>-2</sup> and a mobility higher than 3$`\times `$10<sup>6</sup> cm<sup>2</sup>/Vs. In these experiments, during constant laser illumination, the 2DEG density increased to 2.1$`\times `$10<sup>11</sup> cm<sup>-2</sup>. The experimental layout for the PL measurements has been described previously. Using a quasi-continuous magnet, the field was varied from 0 to 60 T, while the temperature was varied from 1.5 K down to 450 mK. The polarized spectra that we obtained showed the appearance of a doublet at filling factors $`\nu `$$`>`$3/2 and the persistence of this effect to the highest magnetic fields utilized. In Fig. 1 we show the unpolarized spectra obtained at a temperature of 1.5 K at the filling factors $`\nu `$=2, 3/2 and 1. The E0-hh peak that appears at $`\nu `$=5 (B=1.82 T) shows a splitting for 2$`>`$$`\nu `$$`>`$1, and this is clearly shown in the spectra at $`\nu `$=3/2. This splitting, once formed, is present for the whole range of magnetic fields examined. We believe that the lower energy peak of the doublet is associated with the neutral exciton (X) transition, while the higher energy peak may be evidence of a MR transition. The difference in energy ($`\mathrm{\Delta }`$) between these two peaks as a function of magnetic field is shown in the inset of Fig. 1. It can be seen that $`\mathrm{\Delta }`$ increases suddenly in the region of $`\nu `$=1/2 (16-19 T) and then saturates for fields higher than 30 T, a behavior similar to that reported by Heiman et al. in their MR studies. The value of the separation (0.4-1.2 meV) is close to the FQHE quasiparticle-quasihole separation gap energy (about 1 meV). It does not scale as the magnetic energy (B<sup>1/2</sup>) but rather follows an almost linear behavior in the range 1$`>`$$`\nu `$$`>`$1/3.
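For orientation, the quoted filling factors follow from the illuminated density through $`\nu =n_sh/eB`$. The short Python check below is our own illustration (not part of the experimental analysis) and reproduces the quoted assignments within the stated density:

```python
from scipy import constants as c

n_s = 2.1e11 * 1e4                        # illuminated 2DEG density, m^-2

def filling_factor(B_tesla):
    """nu = n_s h / (e B) for a single 2D subband."""
    return n_s * c.h / (c.e * B_tesla)

def magnetic_length_nm(B_tesla):
    """l_B = sqrt(hbar / (e B)), the scale entering the MR wave vector."""
    return 1e9 * (c.hbar / (c.e * B_tesla)) ** 0.5

print(round(filling_factor(1.82), 1))     # ~4.8, consistent with nu = 5
print(round(n_s * c.h / c.e, 1))          # ~8.7 T: the field where nu = 1
print(round(magnetic_length_nm(8.7), 1))  # ~8.7 nm at nu = 1
```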
Cooper and Chklovskii noted that in the situation where the distance of the valence-band hole from the electron gas is small compared with the electron-electron spacing, the most important initial states are the excitonic states, in which the Landau level of spin-↓ electrons is fully occupied, and the valence-band hole binds with a spin-↑ electron to form an exciton. The excitonic states will compete for the initial ground state of the system with the ”free-hole” states in which the photoexcited electron fills the vacant spin-↑ state and the hole occupies the lowest Landau level single-particle state. As the filling factor is swept from $`\nu `$$`>`$1 to $`\nu `$$`<1`$, the form of the initial state contributing to the PL signal is believed to undergo a transition from an excitonic state to a ”free-hole” state, and a red shift in the luminescence line should be expected. The fact that, in the case of our sample, there is no discontinuity in the energy line is an indication that there is no change in the nature of the initial excitonic state. The lack of a transition to a ”free-hole” state in our case is also confirmed by the absence of the additional peak in the LCP polarization as observed by Plentz et al.
The evolution of the intensities of the two peaks is shown in Fig. 2. The excitonic line shows distinctive minima at $`\nu `$=1, 2/3, 5/9, 4/11, 1/3. Similar behavior has been reported by Turberfield et al. and can be related to the localization of the electrons in these states, concomitant with a reduction of the screening factor. The intensity behavior of the line labeled MR shows a deep minimum at $`\nu `$=8/11 and local maxima at $`\nu `$=2/5 and 1/3 (the 1/3 fractional hierarchy). A similar transfer of intensity from the lower energy line to the higher energy one at $`\nu `$=1/3 was previously reported by Heiman et al. and may be caused by an enhancement of the CD wave as a result of a reduction of screening.
Fig. 3 shows the right circularly polarized (RCP) spectra at the same temperature of 1.5K at low and high magnetic fields. It can be seen that the spectra taken at low magnetic fields show a decrease of the intensity of the lower energy peak with magnetic field as a result of depopulation of the spin-↑ electronic level, a behavior that is expected in the case of the neutral exciton. At large magnetic fields, the intensities of the two peaks are almost the same as a result of the low filling factor. Fig. 4 shows the ratios of the intensities of the MR and excitonic peaks for the right (RCP) and left (LCP) circular polarizations. In the LCP case this ratio is less than one as a result of a higher intensity of the excitonic peak compared to the RCP case. The difference in the energies (not shown) of the two peaks as a function of magnetic field is almost the same for both polarizations; like the unpolarized data, this difference does not scale as B<sup>1/2</sup> as might be expected. Haldane and Rezayi estimated a separation value $`\mathrm{\Delta }`$=0.075e<sup>2</sup>/$`ϵ`$$`\mathrm{\ell }`$<sub>B</sub> (where $`ϵ`$=12.8 for GaAs), which is larger than the value that we measured for magnetic fields in excess of 15T ($`\nu `$=0.6). The evolution of the energies with the magnetic field in both polarizations at the filling factor $`\nu `$=1 does not show any significant blue or red shift.
In the case of narrow quantum wells it has been shown, both theoretically and experimentally, that the correlation hole of the hole term is the primary mechanism that generates the blue-shifted energy associated with the IQHE. In the case of wider quantum wells, a red shift in energy at $`\nu `$=1 indicates that the screening is reduced and the electron-hole Coulomb interaction (vertex correction) is enhanced. The fact that no significant energy shift is seen in either the RCP or the LCP spectra at filling factor $`\nu `$=1 seems to indicate that the cancellation of the screened exchange and Coulomb hole terms for the electrons is close to exact, while the vertex correction term and the correlation hole of the hole term cancel each other to a very good degree. This can be interpreted as evidence of an incomplete cancellation of screening at this filling factor. Because of the finite temperatures, we expect that some of the low energy excited states will also be populated. These states will consist both of finite-momentum exciton states and of long wavelength spin wave excitations of the system. They will be seen as fluctuations in the overall polarization of the system and will lead to a mixing between the two circular polarizations. For this reason, a degree of uncertainty in the measured ratio of the intensities is possible. For the LCP spectra, the doublet is resolved at magnetic fields higher than in the RCP spectra. We believe that this may be caused by a broadening of the excitonic transition line as a result of the presence of the spin waves in the LCP spectra. This appears as a result of the recombination of one of the valence-band holes with a spin-↓ electron, a process that leaves a spin reversal in the final state.
Figs. 5a and 5b show the MPL spectra in both LCP and RCP polarizations for two temperatures (1.5K and 450mK) at the two filling factors $`\nu `$=1/5 and 1/3. All the intensities are normalized with respect to the zero field spectra. It can be seen that, with decreasing temperature, the higher energy peak is the one that becomes stronger, ruling out the possibility of it being generated by changes in the population of electrons and photo-excited holes. A similar behavior was reported by Heiman et al. and was explained by Apalkov and Rashba as being due to the proximity between the electron and hole confinement planes. This proximity can also be explained by a process similar to the one described by Kim et al., namely the reduction in screening between electrons, and is the main reason for the existence of the excitonic states at $`\nu `$=1. In Fig. 5a the intensity of the excitonic peak in the LCP spectra is clearly larger than that observed in the RCP spectra. The poorer resolution of the two peaks in the LCP spectra compared to the RCP spectra can be explained by the broadening of the excitonic transition line as a result of the presence of the spin waves in the LCP spectra.
In conclusion, MPL measurements have been performed on a high quality GaAs/AlGaAs SHJ. The spectra showed a doublet structure over a large range of magnetic field (2$`>`$$`\nu `$$`>`$1/6) that we believe to be evidence for the coexistence of the E0 excitonic excitations and MR minima of the CD waves. Their presence over such a wide range of field and filling factor is understood as a confirmation of the strong correlation effects among electrons in the integer and fractional quantum Hall regimes.
The authors gratefully acknowledge the engineers and technicians at NHMFL-LAPF in the operation of the 60T QC magnet. Work at NHMFL-LAPF is supported by NSF Cooperative Agreement DMR 9527035, the Department of Energy and the State of Florida. Work at Sandia National Laboratory is supported by the Department of Energy.
## 1 Introduction
Several measurements concerning the QED structure of the photon have been performed by various experiments. Prior to LEP, mainly the structure function $`F_2^\gamma `$ of the quasi-real photon, $`\gamma `$, was measured as a function of the virtuality $`Q^2`$ of the virtual photon, $`\gamma ^{\ast }`$. The LEP experiments refined the analysis of the $`\mu \mu `$ final state and derived more information on the QED structure of the photon. The interest in the investigation of the QED structure of the photon is twofold. Firstly, the investigations serve as tests of QED to $`𝒪(\alpha ^4)`$; secondly, and equally importantly, they are used to refine the experimentalists' tools in a real but clean experimental environment, in order to investigate the possibilities of extracting similar information from the much more complex hadronic final state.
## 2 The photon structure function $`F_2^\gamma `$
The structure function $`F_2^\gamma `$ has been measured using data in the $`Q^2`$ range from about 0.14 up to 400 $`\mathrm{GeV}^2`$. Results were published by the CELLO , DELPHI , L3 , OPAL , PLUTO and TPC/2$`\gamma `$ experiments. Special care has to be taken when comparing the experimental results to the QED predictions, because slightly different quantities are derived by the experiments. Some of the experiments express their result as an average structure function, $`\langle F_2^\gamma (x,Q^2)\rangle `$, measured within their experimental acceptance in $`Q^2`$, whereas the other experiments unfold their result as a structure function for an average $`Q^2`$ value, $`F_2^\gamma (x,\langle Q^2\rangle )`$. Figure 1 shows the world summary of the $`F_2^\gamma `$ measurements compared either to $`\langle F_2^\gamma (x,Q^2)\rangle `$, assuming a flat acceptance in $`Q^2`$, or to $`F_2^\gamma (x,\langle Q^2\rangle )`$, using the appropriate values for $`Q^2`$ and $`\langle Q^2\rangle `$ given by the experiments. For the measurements which quote an average virtuality $`\langle P^2\rangle `$ of the quasi-real photon for their dataset, this value is chosen in the comparison; otherwise $`P^2=0`$ is used. There is a nice agreement between the data and the QED expectations for about three orders of magnitude in $`Q^2`$. The LEP data are so precise that the effect of the small virtuality of the quasi-real photon can clearly be established, as shown, for example, in Figure 2 for the most precise data from OPAL.
The data are compared to the QED predictions of $`F_2^\gamma `$(x,$`Q^2`$,$`P^2`$,$`m_\mu `$), where either $`P^2`$ or $`m_\mu `$ is varied. Using these data, the mass of the muon is found to be $`m_\mu =0.113_{-0.017}^{+0.014}`$ $`\mathrm{GeV}`$, assuming the $`P^2`$ value predicted by QED. Although this is not a very precise measurement of the mass of the muon, it can serve as an indication of the precision possible for the determination of $`\mathrm{\Lambda }`$, if it were only for the pointlike contribution to $`F_{2,\mathrm{had}}^\gamma `$.
## 3 Azimuthal correlations
The structure functions $`F_\mathrm{A}^\gamma `$ and $`F_\mathrm{B}^\gamma `$ are obtained from the measured $`F_2^\gamma `$, and a fit to the shape of the distribution of the azimuthal angle $`\chi `$, which is the angle between the plane defined by the momentum vectors of the muon pair and the plane defined by the momentum vectors of the incoming and the deeply inelastically scattered electron. For small values of $`y`$, the $`\chi `$ distribution can be written as:
$$\frac{\mathrm{d}N}{\mathrm{d}\chi }\propto 1-F_\mathrm{A}^\gamma /F_2^\gamma \mathrm{cos}\chi +\frac{1}{2}F_\mathrm{B}^\gamma /F_2^\gamma \mathrm{cos}2\chi .$$
(1)
The recent theoretical predictions from Ref. , which take into account the important mass corrections up to $`𝒪(m_\mu ^2/W^2)`$, are consistent with the measurements of Refs. (Figure 3).
Both $`F_\mathrm{A}^\gamma `$ and $`F_\mathrm{B}^\gamma `$ are found to be significantly different from zero. The shape of $`F_\mathrm{B}^\gamma `$ cannot be determined very accurately, but it is not compatible with a constant. The best fit to a constant $`F_\mathrm{B}^\gamma `$/$`\alpha `$ leads to 0.032 and 0.042 with $`\chi ^2/\mathrm{dof}`$ of 8.9 and 3.1 for the L3 and OPAL results respectively.
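Operationally, the ratios $`F_\mathrm{A}^\gamma /F_2^\gamma `$ and $`F_\mathrm{B}^\gamma /F_2^\gamma `$ follow from a harmonic fit of Eq. (1) to the binned $`\chi `$ distribution. The sketch below is a minimal Python illustration of our own on synthetic data (the ratio values, bin count, and noise level are invented for the example; they are not the measured ones).

```python
import numpy as np

rng = np.random.default_rng(0)
chi = np.linspace(0.0, 2.0 * np.pi, 24, endpoint=False) + np.pi / 24  # bin centres
A_true, B_true = 0.10, 0.30                   # invented F_A/F_2 and F_B/F_2
counts = 1.0 - A_true * np.cos(chi) + 0.5 * B_true * np.cos(2.0 * chi)
counts *= 1.0 + 0.02 * rng.standard_normal(chi.size)   # toy statistical noise

# linear least squares for N0 * (1 - A cos(chi) + (B/2) cos(2 chi))
M = np.column_stack([np.ones_like(chi), -np.cos(chi), 0.5 * np.cos(2.0 * chi)])
n0, n0A, n0B = np.linalg.lstsq(M, counts, rcond=None)[0]
print(n0A / n0, n0B / n0)                     # recovered ratios, close to 0.10, 0.30
```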
## 4 Cross-section for highly virtual photons
The cross-section for the exchange of two highly virtual photons in the kinematical region under study can schematically be written as:
$$\sigma \sim \sigma _{\mathrm{TT}}+\sigma _{\mathrm{TL}}+\sigma _{\mathrm{LT}}+\sigma _{\mathrm{LL}}+\frac{1}{2}\tau _{\mathrm{TT}}\mathrm{cos}2\overline{\varphi }-4\tau _{\mathrm{TL}}\mathrm{cos}\overline{\varphi }.$$
(3)
Here the total cross-sections $`\sigma _{\mathrm{TT}}`$, $`\sigma _{\mathrm{TL}}`$, $`\sigma _{\mathrm{LT}}`$ and $`\sigma _{\mathrm{LL}}`$ and the interference terms $`\tau _{\mathrm{TT}}`$ and $`\tau _{\mathrm{TL}}`$ correspond to specific helicity states of the photons (T=transverse and L=longitudinal), and $`\overline{\varphi }`$ is the angle between the electron scattering planes.
There is good agreement between $`\mathrm{d}\sigma /\mathrm{d}x`$ measured by OPAL (Figure 4) and the QED predictions using the Vermaseren and the GALUGA Monte Carlo programs, provided all terms of the differential cross-section are taken into account. However, as apparent from Figure 4, if either $`\tau _{\mathrm{TT}}`$ (dot-dash) or both $`\tau _{\mathrm{TT}}`$ and $`\tau _{\mathrm{TL}}`$ (dash) are neglected in the QED prediction as implemented in the GALUGA Monte Carlo, there is a clear disagreement between the data and the QED prediction. This measurement clearly shows that both terms, $`\tau _{\mathrm{TT}}`$ and especially $`\tau _{\mathrm{TL}}`$, are present in the data in the kinematical region of the OPAL analysis. The contributions to the cross-section are negative and large, mainly at $`x>0.1`$.
As the kinematically accessible range in terms of $`Q^2`$ and $`P^2`$ for the measurement of the QED and the QCD structure of the photon is the same, and given the size of the interference terms in the leptonic case, special care has to be taken when the measurements of the QCD structure are interpreted in terms of hadronic structure functions of virtual photons.
## 5 Conclusions
QED has been tested to $`𝒪(\alpha ^4)`$ using the reaction $`\mathrm{ee}\to \mathrm{ee}\gamma ^{(\ast )}\gamma ^{\ast }\to \mathrm{ee}\mu \mu `$, and was found to be in good agreement with all experimental results. Because the precision of the measurements is limited mainly by the statistical error, significant improvements will be made by using the full expected statistics of 500 $`\mathrm{pb}^{-1}`$ of the LEP2 programme.
Acknowledgement:
I wish to thank the organisers of this interesting workshop for the fruitful atmosphere they created throughout the meeting.
## 1 Broken Lorentz invariance?
Lorentz invariance has, of course, been found to be satisfied to a high degree in all experiments performed to date. However, it has long been realized that attempts at finding a quantized description of space-time may lead to the appearance of non-Lorentz invariant terms. For example, applying quantum deformations to the 4-dimensional Poincaré group results in a so called $`\kappa `$deformed Poincaré algebra<sup>1</sup>, which corresponds to discretizing time, while “preserving almost all classical properties of three-dimensional euclidean space.” The $`\kappa `$deformation leads to a distorted mass shell condition of the form<sup>1,2</sup> $`m^2+𝐩^2=[2\kappa \mathrm{sinh}(p_0/2\kappa )]^2`$, with similar changes to the law of energy-momentum conservation. Here, $`\kappa `$ is presumably the energy scale at which Lorentz invariance no longer holds accurately.
More recently, it has been suggested<sup>3</sup> that in a wider class of approaches to quantizing gravity the photon dispersion relation would be
$$pc=E\sqrt{1+E/E_{QG}},$$
$`(1)`$
where $`E_{QG}`$ could be as low as $`E_{QG}\sim 10^{16}`$ GeV $`\sim 10^{-3}E_{Planck}`$, and would presumably be related, e.g., to some characteristic length scale on which the discrete nature of space-time becomes apparent, $`l\sim E_{QG}^{-1}`$. One consequence<sup>3</sup> of photons having a three-momentum of the magnitude given by eq. (1) would be that high-energy electromagnetic radiation would travel with a speed dependent on the photon energy, $`v=c(1-E/E_{QG})`$, where $`c`$ is the ordinary speed of light.
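The energy dependence of the speed translates directly into an arrival-time spread over cosmological distances. As a back-of-the-envelope Python illustration of our own (the distance, photon energy, and $`E_{QG}`$ below are chosen for the example, not taken from any quoted measurement):

```python
MPC_IN_M = 3.0857e22        # metres per megaparsec
C_M_S = 2.9979e8            # speed of light, m/s

def arrival_delay_s(E_GeV, D_Mpc, E_QG_GeV=1e16):
    """Delta t ~ (D / c) * (E / E_QG) for v = c (1 - E / E_QG)."""
    return (D_Mpc * MPC_IN_M / C_M_S) * (E_GeV / E_QG_GeV)

# a 20 TeV photon travelling ~100 Mpc (roughly the Mkn 421/501 distance scale)
print(arrival_delay_s(2e4, 100.0))   # ~2e4 s: hours of lag behind low-energy light
```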
In the discussion below, I assume that the photon dispersion relation of eq. (1) is valid, and show<sup>4</sup> that it would have drastic effects on the kinematics of photon-photon collisions. When the soft photon has energy $`E_1`$ in the optical or infrared range, these new effects would appear when the energy, $`E_2`$, of the other photon is in the TeV range, more precisely, when $`E_2\sim \sqrt{E_1E_{QG}}`$. This would make the currently observable Universe transparent to high energy photons. Similar considerations<sup>5,6</sup> suggest that the Greisen-Zatsepin-Kuzmin cutoff would also not hold, i.e. the universe would be transparent to multi-PeV protons as well.
## 2 Extragalactic TeV astronomy and the IR background
TeV photon astronomy is a well established field with the spectrum of at least one steady Galactic source (the Crab pulsar) reliably and reproducibly determined,<sup>7</sup> through observations of the Čerenkov radiation of atmospheric showers. Numerous powerful extragalactic sources, as well, are expected to exist in the violent Universe. However, only the closest are thought to be observable, because of severe attenuation<sup>8,9</sup> of TeV radiation over distances much larger than $`\sim `$100 Mpc by pair creation on the infrared background (IR), and (over much shorter distances) of $`\sim `$PeV radiation by the 2.7K cosmic microwave background. The limits on the propagation distance of Ultra High Energy (UHE) photons are so well established theoretically, that possible detections<sup>10,11</sup> of two particular gamma-ray bursts (GRBs) at, respectively, more than 50 TeV and more than 13 TeV, were interpreted in the context of supposed Galactic origin of GRBs.
However, the observational evidence for (or against) attenuation of the UHE signal over large distances is less secure. It is true that only a few extragalactic sources have been reported, with the two best established (Markarian 421 and 501) at a relatively close distance (redshift 0.031 and 0.033, respectively), while brighter but more distant blazars in the EGRET catalog (of sources of multi–GeV radiation) have not been observed—this would be consistent with the predicted attenuation. On the other hand, the fairly large distance to these sources and the high energy range of observation (above 20 TeV) were an embarrassment to some fairly recent estimates of the attenuation, so now the trend seems to be to constrain<sup>12,13</sup> the IR by fitting the data on Mkn 421 and Mkn 501 to theoretical models of UHE emission, and even to use the suggested attenuation to model Galaxy formation.<sup>14</sup> Further, the observed power-law spectrum of Mkn 421 is clearly different from the “curved” spectrum of Mkn 501, in spite of their nearly identical distance, strongly suggesting<sup>15</sup> different intrinsic spectra in different sources. Moreover, Mkn 421 is known to be extremely variable<sup>16</sup> (with one flare so short, several minutes, that it was possible to limit<sup>17</sup> the “quantum gravity” scale to $`E_{QG}>4\times 10^{16}`$ GeV, on the assumption that eq. (1) holds, so that a dispersion would be expected<sup>3</sup> between higher and lower energy photons from Mkn 421, as discussed in Section 1).
Thus, it is as yet uncertain whether the more distant AGNs are truly invisible in UHE (e.g. 30 TeV) photons. The same is even more true of gamma-ray bursts, where models<sup>18</sup> of UHE emission and TeV observations<sup>10,11,19,20</sup> are even less clear. However, as all GRBs are thought to be cosmological in origin, with redshifts $`z=0.835`$, $`z=3.4`$, and $`z=0.97`$ reported for three particular sources<sup>21,22,23</sup>, the importance of potential secure detections of UHE photons in spatial and temporal coincidence with a GRB cannot be overstated, as they could clearly indicate that the universe is transparent to high energy photons.
## 3 Kinematics of pair creation
It remains to show that, with the dispersion relation of eq. (1), pair creation is forbidden for very asymmetric photons. This is related to the excess momentum of the higher energy photon. Regardless of whether an additional term appears, or not, in the dispersion relation for the electron, as in
$$m^2c^4+p^2c^2=E^2+E^3/E_{QG}.$$
$`(2)`$
and whether or not a term of the same order as the last term in eq. (2) appears in the law of conservation of energy-momentum, the result is qualitatively the same: in addition to the usual threshold for pair creation, $`E_1E_2\ge m^2c^4`$, there is a maximum energy of the hard photon, $`E_2\sim \sqrt{E_1E_{QG}}`$. This result is really a consequence of the non-existence of the center of momentum when $`E_2`$ exceeds $`2\sqrt{E_1E_{QG}}`$.
As a specific example, suppose that the usual conservation law holds,
$$E_T=E_1+E_2=E_1^{}+E_2^{},\mathrm{and}𝐩_\mathrm{𝟏}+𝐩_\mathrm{𝟐}=𝐩_\mathrm{𝟏}^{}+𝐩_\mathrm{𝟐}^{}.$$
$`(3)`$
Of course, all calculations must be done in a single frame of reference, because we do not know how to transform frames.
Now, if eq. (2) holds (with eq. (1) a special case for m=0), and if two particles of equal energy are created in a head-on collision of two photons, then<sup>4</sup> eq. (3) holds, i.e., high energy photons are absorbed, only if
$$E_2\le 2\sqrt{2E_1E_{QG}}.$$
$`(4)`$
To illustrate the independence of the result on the dispersion relation, and indeed the mass, of the electron, consider now that eqs. (1) and (3) hold together with $`m^2c^4+p^2c^2=E^2`$. Then pair creation by two photons of energies $`E_1`$, $`E_2`$, is possible iff
$$E_1E_2\ge m^2c^4+(E_1-E_2)^2(E_1+E_2)/(4E_{QG}),$$
yielding the usual threshold for symmetric energies, but also an upper limit to the photon energy, similar to that of eq. (4)
$$\frac{m^2c^4}{E_1}\le E_2\le 2\sqrt{E_1E_{QG}}.$$
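Numerically, this allowed window is easy to evaluate. A Python sketch of our own (the energies are illustrative; $`E_{QG}=10^{16}`$ GeV as in Section 1):

```python
M_E_GEV = 0.511e-3     # electron rest energy in GeV

def pair_creation_allowed(E1_GeV, E2_GeV, E_QG_GeV=1e16):
    """Window m^2 c^4 / E1 <= E2 <= 2 sqrt(E1 E_QG) for head-on photons,
    with the modified photon dispersion and ordinary electrons."""
    lower = M_E_GEV ** 2 / E1_GeV
    upper = 2.0 * (E1_GeV * E_QG_GeV) ** 0.5
    return lower <= E2_GeV <= upper

E1 = 1e-9                                  # a 1 eV infrared background photon
print(pair_creation_allowed(E1, 1e3))      # 1 TeV photon: absorbed (True)
print(pair_creation_allowed(E1, 1e5))      # 100 TeV photon: transparent (False)
```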
## 4 Conclusions
If Lorentz invariance is modified, the kinematics of photon-photon collisions (as well as of other transformations of particle identity) will certainly be drastically modified. Although there is no known self-consistent theory of quantum gravity, if it is true that the dispersion relation of eq. (1) will hold for photons in a future theory, it seems very likely, as for the specific examples above, e.g. eq. (4), that the universe will be transparent to photons of energies higher than $`(30\mathrm{TeV})\times \sqrt{E_{QG}/10^{17}\mathrm{GeV}}`$.
# Numerical renormalization-group study of spin correlations in one-dimensional random spin chains
## I Introduction
One-dimensional (1D) quantum spin systems have attracted much attention over the decades. This is not only because these systems have been a good testing ground for various theoretical techniques and approximations but also because they exhibit a wealth of fascinating phenomena in their ground states and low-lying excitations. These include quasi-long-range order (LRO), topological order and ground-state phase transitions, which are all purely quantum effects due to the low dimensionality of the systems. Among these quantum phenomena, the effects of randomness on quantum spin systems have been studied intensively by many groups. These studies revealed the appearance of various exotic phases, which are realized neither in regular quantum systems nor in classical random systems. The interplay of randomness and quantum fluctuations plays an essential role in these phases. Systems which are gapless in the absence of randomness are unstable against weak randomness, while gapped systems, e.g. integer spin chains in the Haldane phase and dimerized chains, are comparatively robust.
A simplest model of the 1D random spin-$`\frac{1}{2}`$ Heisenberg spin systems is described by the Hamiltonian of the form
$$H=\sum _{i=1}^{L-1}J_i\stackrel{}{s}_i\stackrel{}{s}_{i+1},$$
(1)
where $`\stackrel{}{s}_i`$ are $`S=\frac{1}{2}`$ spin operators and the exchange coupling constants $`J_i`$ are distributed randomly according to a probability distribution $`P(J_i)`$. There are several quasi-1D systems which provide the realizations of the model Hamiltonian (1). To our knowledge, the first example of such systems belongs to the class of the organic charge-transfer salts tetracyanoquinodimethanide (TCNQ). The low-temperature magnetic properties of these systems are successfully described by the model Hamiltonian (1) with random antiferromagnetic (AF) coupling, in which the couplings $`J_i`$ are restricted to take positive random values. A more recently studied system is Sr<sub>3</sub>CuPt<sub>1-x</sub>Ir<sub>x</sub>O<sub>6</sub>. While the pure compounds Sr<sub>3</sub>CuPtO<sub>6</sub> and Sr<sub>3</sub>CuIrO<sub>6</sub> represent, respectively, AF and ferromagnetic (FM) spin chains, Sr<sub>3</sub>CuPt<sub>1-x</sub>Ir<sub>x</sub>O<sub>6</sub> contains both AF and FM couplings. The fraction of FM bonds is simply related to the concentration $`x`$ of Ir. These compounds are modeled adequately by the Hamiltonian (1) with $`P(J_i)=p\delta (J_i+J)+(1-p)\delta (J_i-J)`$, where $`p`$ is the probability of FM bonds. The experimental data of the susceptibility of the compounds were in fact explained successfully by a theory based on the Hamiltonian (1) with the above probability distribution of FM and AF bonds. The model (1) is also realized in the low-temperature regime of randomly depleted 1D spin-$`\frac{1}{2}`$ even-leg ladders. In these systems, effective $`S=\frac{1}{2}`$ spins are induced in the vicinity of each depleted site. If the density of depleted spins is sufficiently small, the induced effective spins are the only degrees of freedom which are active in the low-energy limit. The strength of the residual interaction between effective spins depends on the distance between the effective spins, and the coupling can be either antiferromagnetic or ferromagnetic, depending on whether or not two spins are depleted from the same sublattice. Thus, the low-energy physics of the randomly depleted spin-$`\frac{1}{2}`$ even-leg ladders is described as a random spin chain with both AF and FM exchange couplings.
For 1D random spin chains containing only AF couplings, a rather complete theoretical understanding of their low-energy properties has been achieved. One of the most powerful techniques to study such systems is the real-space renormalization group (RSRG) method introduced by Ma, Dasgupta, and Hu. The basic idea of this method is iterative decimation of spin degrees of freedom by successively integrating out the strongest bonds in the chain. For the random AF chains this procedure keeps the form of the Hamiltonian (1) and renormalizes the probability distribution $`P(J_i)`$. Fisher has given a solution to this RG equation for $`P(J_i)`$ that becomes exact in the low-energy limit and has shown that any normalizable initial distribution $`P(J_i)`$ flows to a single universal fixed-point distribution. The ground state of the chain which belongs to the fixed point is characterized by the “random singlet phase,” where each spin forms a singlet with another spin which can be arbitrarily far away. From the intuitive picture of the random singlet phase, Fisher has also shown that the random average of the two-spin correlation function in this phase decays algebraically, like $`1/r^2`$, with $`r`$ being the distance between the two spins. The main contribution to the average comes from the rare events that two spins separated by the distance $`r`$ form a singlet pair. The $`1/r^2`$ power-law decay has been verified by a numerical calculation for the random $`XX`$ spin-$`\frac{1}{2}`$ chains, which can be mapped to a disordered system of noninteracting fermions.
The random spin chains containing both AF and FM couplings have also been studied theoretically for the last five years, from which a qualitative picture of the low-energy physics of such random FM-AF chains has emerged. Westerberg et al. have adapted the RSRG scheme and shown through extensive numerical simulations that the distribution $`P(J_i)`$ is renormalized to a single universal fixed-point distribution unless the initial distribution is highly singular around $`J_i=0`$. The chain at this fixed point can be viewed as an ensemble of weakly interacting large effective spins whose average size $`\overline{S}\sim T^{-\kappa }`$ with $`\kappa \approx 0.22`$ for temperature $`T\to 0`$. These large effective spins are generated as a result of decimations of two spins coupled via a strong FM coupling into a larger spin. The results of the RSRG simulations are supported by a recent calculation by Frischmuth et al., who have used the continuous-time quantum Monte Carlo loop algorithm. In contrast to the success in the quantitative calculations on the thermodynamic properties, less is known about the spin correlations in the random FM-AF chains. Since the average size of the effective spins becomes very large in the low-energy (long-distance) limit, one can expect that the system should be close to a classical spin chain that can show the LRO of the generalized staggered spin component. One might also argue, however, that there should be no LRO in 1D quantum spin chains. It is therefore an interesting open question how the correlation function of the generalized staggered moment (whose definition will be given in Sec. III) behaves in the ground state. The purpose of this paper is in fact to present results of our extensive numerical calculation of the two-spin correlation function. We find that it decays very slowly with the form close to $`1/\mathrm{ln}(r)`$, which is consistent with the naive argument that the ground state is extremely long-range correlated, but not really long-range ordered.
The outline of this paper is as follows. Section II A is devoted to a brief review of the generalized RSRG scheme, which is applicable to both the random AF and FM-AF cases. Using the Wigner-Eckart theorem, we simplify the algorithm to allow for calculating the two-spin correlation function between original $`S=\frac{1}{2}`$ spins. In section II B, in order to achieve higher accuracy, we extend the RSRG scheme to take into account the contributions of local excitations to the ground state of the whole system. We perform both the conventional and extended RSRG algorithms numerically on random AF and FM-AF chains to calculate the “staggered” spin correlations in their ground states. The results for the random AF case are shown in section III A. In section III B, we show the results for the random FM-AF case. Analyzing the data obtained with the extended RSRG method (which can be applied to larger systems than the DMRG), we conclude that the mean correlations in the random FM-AF case decay very slowly, with a logarithmic dependence on $`r`$. We also discuss the distribution functions of the logarithm of the correlation functions in section III C. We find that, in the random AF case, the rare spin pairs, which are strongly correlated, dominate the mean correlation, while such rare events are not essential in the random FM-AF case. Finally, our results are summarized in section IV.
## II The RSRG algorithm
Ma, Dasgupta and Hu introduced an RSRG procedure to investigate the low-temperature properties of random AF spin chains. The method has been generalized to the random FM-AF case by Westerberg et al. In this section we explain our extension of this scheme to calculate the ground-state two-spin correlation functions. We begin with a brief review of the RSRG method for the random FM-AF case.
### A Conventional RSRG
Let us consider a random FM-AF spin-$`\frac{1}{2}`$ chain described by the Hamiltonian (1). The basic strategy of RSRG is to decimate spin degrees of freedom by combining two spins connected via a strong bond into one effective spin. Consequently, the system is described in terms of effective spins of various sizes coupled by random exchange interactions, although the original Hamiltonian (1) consists of only $`S=1/2`$ spins. We accordingly treat the effective Hamiltonian,
$$H=\sum _lJ_l\stackrel{}{S}_l\stackrel{}{S}_{l+1},$$
(2)
where both the coupling $`J_l`$ and the size of the effective spins $`S_l`$ are random. We call the $`S=1/2`$ spins $`\stackrel{}{s}_i`$ appearing in the Hamiltonian (1) “original” spins in the following to distinguish them from the effective spins $`\stackrel{}{S}_l`$.
We define $`\mathrm{\Delta }_l`$ as the energy gap between the ground-state multiplet and the first excited multiplet of the corresponding bond Hamiltonian $`H_l=J_l\stackrel{}{S}_l\stackrel{}{S}_{l+1}`$,
$`\mathrm{\Delta }_l=\{\begin{array}{cc}|J_l|(S_l+S_{l+1})\hfill & (J_l<0)\hfill \\ J_l(|S_l-S_{l+1}|+1)\hfill & (J_l>0)\hfill \end{array}`$
and focus on the bond with the largest gap $`\mathrm{\Delta }_l`$ in the chain. The terms in the effective Hamiltonian (2) which involve the effective spins $`\stackrel{}{S}_l`$ and $`\stackrel{}{S}_{l+1}`$ are
$$H^{}=H_0^{}+H_1^{}$$
(4)
where
$`H_0^{}`$ $`=`$ $`J_l\stackrel{}{S}_l\stackrel{}{S}_{l+1},`$ (5)
$`H_1^{}`$ $`=`$ $`J_{l-1}\stackrel{}{S}_{l-1}\stackrel{}{S}_l+J_{l+1}\stackrel{}{S}_{l+1}\stackrel{}{S}_{l+2}.`$ (6)
If $`\mathrm{\Delta }_l`$ is much larger than the gaps of the neighboring bonds, $`\mathrm{\Delta }_{l1}`$ and $`\mathrm{\Delta }_{l+1}`$, the spins $`\stackrel{}{S}_l`$ and $`\stackrel{}{S}_{l+1}`$, to a good approximation, are frozen into the ground-state multiplet of the local Hamiltonian $`H_0^{}`$. We, therefore, replace the block composed of $`\stackrel{}{S}_l`$ and $`\stackrel{}{S}_{l+1}`$ by the single effective spin $`\stackrel{}{S}`$. The Wigner-Eckart theorem then implies that both $`\stackrel{}{S}_l`$ and $`\stackrel{}{S}_{l+1}`$ are proportional to $`\stackrel{}{S}`$:
$`\stackrel{}{S}_l`$ $`=`$ $`\alpha \stackrel{}{S},`$ (7)
$`\stackrel{}{S}_{l+1}`$ $`=`$ $`\beta \stackrel{}{S},`$ (8)
where $`\alpha `$ and $`\beta `$ can be obtained from the Clebsch-Gordan coefficients. Substituting Eq. (8) into the four-spin Hamiltonian (4), we obtain, apart from a constant term coming from $`H_0^{}`$,
$$\stackrel{~}{H}=\stackrel{~}{J}_{l-1}\stackrel{}{S}_{l-1}\stackrel{}{S}+\stackrel{~}{J}_{l+1}\stackrel{}{S}\stackrel{}{S}_{l+2},$$
(9)
where
$`\stackrel{~}{J}_{l-1}=\alpha J_{l-1},`$
$`\stackrel{~}{J}_{l+1}=\beta J_{l+1}.`$
The case where $`J_l`$ is antiferromagnetic with $`S_l=S_{l+1}`$ needs a special treatment. In this case the two spins $`\stackrel{}{S}_l`$ and $`\stackrel{}{S}_{l+1}`$ form a singlet, and accordingly, we remove both spins from the effective Hamiltonian. Between the spins $`\stackrel{}{S}_{l1}`$ and $`\stackrel{}{S}_{l+2}`$, a coupling is generated through a second-order process that virtually breaks the singlet of $`\stackrel{}{S}_l`$ and $`\stackrel{}{S}_{l+1}`$. We obtain
$$\stackrel{~}{H}=\stackrel{~}{J}\stackrel{}{S}_{l-1}\stackrel{}{S}_{l+2},$$
(10)
where
$`\stackrel{~}{J}={\displaystyle \frac{2J_{l-1}J_{l+1}}{3J_l}}S_l(S_l+1).`$
By replacing the four-spin Hamiltonian (4) with $`\stackrel{~}{H}`$ \[Eq. (9) or Eq. (10)\] in the Hamiltonian of the whole system, we obtain a new effective Hamiltonian (See Fig. 1). We note that this procedure preserves the form of the effective Hamiltonian (2) but changes the distributions of the exchange couplings, $`J_l`$, and the size of effective spins, $`S_l`$. We repeat this procedure of integrating out the strongest bonds in a chain successively until the distribution functions for $`J_l`$ and $`S_l`$ converge to a scaling form. This RG flow has been investigated extensively for various initial distributions including both AF and FM-AF case. In particular, for the random AF $`S=1/2`$ chains Fisher has solved the RG equation exactly.
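In the special case of the random AF $`S=\frac{1}{2}`$ chain, every decimation freezes the two spins of the strongest bond into a singlet, and Eq. (10) reduces to $`\stackrel{~}{J}=J_{l-1}J_{l+1}/(2J_l)`$, so the procedure can be coded compactly. The following minimal Python sketch is our own illustration (a lazy-deletion heap over a doubly linked chain; not the code used for the simulations below) and returns the resulting random-singlet pairing:

```python
import heapq

def mdh_random_singlet(J):
    """Ma-Dasgupta-Hu decimation of a random AF S=1/2 open chain.

    J: positive couplings J_0 ... J_{L-2}; bond k joins spins k and k+1.
    Returns the singlet pairs (i, j) in the order they are frozen out.
    """
    nxt = {i: i + 1 for i in range(len(J))}   # right neighbour of each spin
    prv = {i + 1: i for i in range(len(J))}   # left neighbour of each spin
    Jb = dict(enumerate(J))                   # active bonds, keyed by left spin
    heap = [(-Jk, i) for i, Jk in Jb.items()]
    heapq.heapify(heap)
    pairs = []
    while heap:
        negJ, i = heapq.heappop(heap)
        if Jb.get(i) != -negJ:                # stale entry: bond already decimated
            continue
        j = nxt[i]
        pairs.append((i, j))                  # freeze spins i, j into a singlet
        p, q = prv.get(i), nxt.get(j)         # outer neighbours (None at chain ends)
        Jmid = Jb.pop(i)
        Jl, Jr = Jb.pop(p, None), Jb.pop(j, None)
        for s in (i, j):
            nxt.pop(s, None)
            prv.pop(s, None)
        if p is not None and q is not None:   # second-order coupling, Eq. (10) at S=1/2
            Jb[p] = Jl * Jr / (2.0 * Jmid)
            nxt[p], prv[q] = q, p
            heapq.heappush(heap, (-Jb[p], p))
        elif p is not None:
            nxt.pop(p, None)                  # p becomes the new right end
        elif q is not None:
            prv.pop(q, None)                  # q becomes the new left end
    return pairs

# usage sketch:
# import random; pairs = mdh_random_singlet([random.random() for _ in range(10**4)])
```

In the FM-AF case the same loop structure applies, except that a decimation may instead merge the two spins into a larger effective spin with the projected couplings of Eq. (9).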
It is also possible to calculate the correlation functions between original spins within the scheme described above. Here we make use of the fact that the original spin operator $`\stackrel{}{s}_i`$ is, at every step in the RSRG procedure, proportional to the effective spin $`\stackrel{}{S}_l`$ to which it belongs. We can keep track of the coefficient for each original spin operator by applying Eq. (8) at each step. At the step where the effective spins $`\stackrel{}{S}_l`$ and $`\stackrel{}{S}_{l+1}`$ are combined into $`\stackrel{}{S}`$, we calculate the correlations between $`\stackrel{}{s}_i`$ and $`\stackrel{}{s}_j`$ in the ground state of the bond Hamiltonian $`H_0^{}`$,
$$\langle \stackrel{}{s}_i\stackrel{}{s}_j\rangle =\alpha _i\alpha _j\langle \stackrel{}{S}_l\stackrel{}{S}_{l+1}\rangle ,$$
(11)
where $`\langle \cdots \rangle `$ represents the expectation value in the ground state; $`\stackrel{}{s}_i`$ and $`\stackrel{}{s}_j`$ belong to $`\stackrel{}{S}_l`$ and $`\stackrel{}{S}_{l+1}`$, respectively; $`\alpha _i`$ and $`\alpha _j`$ are the proportionality coefficients for each spin.
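Both the coefficients of Eq. (8) and the correlator entering Eq. (11) follow from elementary angular-momentum algebra (the projection theorem) in the ground multiplet of $`H_0^{}`$. A small Python helper of our own devising, used purely for illustration:

```python
def ground_multiplet_spin(J, Sl, Sr):
    """Total spin S of the ground multiplet of H0' = J S_l . S_{l+1}:
    S = Sl + Sr for FM (J < 0), S = |Sl - Sr| for AF (J > 0)."""
    return Sl + Sr if J < 0 else abs(Sl - Sr)

def bond_correlator(J, Sl, Sr):
    """<S_l . S_{l+1}> in the ground multiplet, from
    2 S_l . S_{l+1} = S(S+1) - Sl(Sl+1) - Sr(Sr+1)."""
    S = ground_multiplet_spin(J, Sl, Sr)
    return 0.5 * (S * (S + 1) - Sl * (Sl + 1) - Sr * (Sr + 1))

def projection_coefficients(J, Sl, Sr):
    """(alpha, beta) of Eq. (8) via the projection theorem, valid for S > 0
    (the AF singlet S = 0 is decimated separately, Eq. (10)):
    alpha = [S(S+1) + Sl(Sl+1) - Sr(Sr+1)] / [2 S(S+1)], and beta likewise."""
    S = ground_multiplet_spin(J, Sl, Sr)
    d = 2.0 * S * (S + 1)
    alpha = (S * (S + 1) + Sl * (Sl + 1) - Sr * (Sr + 1)) / d
    beta = (S * (S + 1) + Sr * (Sr + 1) - Sl * (Sl + 1)) / d
    return alpha, beta

print(bond_correlator(+1.0, 0.5, 0.5))   # -0.75, the S=1/2 AF singlet value
print(projection_coefficients(-1.0, 0.5, 0.5))   # (0.5, 0.5) for the FM triplet
```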
### B Extension of the RSRG scheme
As seen in the previous subsection, the conventional RSRG procedure consists of “diagonalizing the bond Hamiltonian with the largest gap” and “projecting the low-energy states onto the lowest multiplet.” This means that we completely neglect the contribution of the excited multiplets of the local Hamiltonian (5) to the ground state of the whole system. This approximation is valid only if the energy gap of $`H_0^{}`$ is much larger than those of the neighboring bonds. Fisher’s solution for the random AF $`S=1/2`$ chain becomes asymptotically exact since this condition is satisfied near the fixed point of the RG flow.
However, the condition is often not satisfied, in particular in the early stage of the RG, unless the initial distribution of energy gaps is very broad. As a result, we have no choice but to cut off the contributions of local excitations. This is a poor approximation that has a serious effect especially on the calculation of the expectation values of microscopic operators such as the original spins. The ground-state correlation functions between original spins calculated via the conventional RSRG in fact deviate considerably from those obtained from the DMRG method and the extended RSRG algorithm described below (see Fig. 3 and Fig. 5 in section III).
One possible prescription to avoid this error is “to keep the local multiplet excitations” at each step. Of course, if we keep all eigenstates of the bond Hamiltonian $`H_0^{}`$ at every step of the RG, the calculation on $`H_0^{}`$ in the final step is equivalent to the exact diagonalization of the Hamiltonian of the whole system. In practice what we should do is to extend the RSRG algorithm under the policy that we keep as many states as we can store in computer memory. In the original RSRG scheme a segment of original spins is combined and represented by a single effective spin. In the extended scheme we keep more states than the lowest multiplet and call the segment a ‘block.’ Each block consists of several original spins and is represented by ‘block states,’ which are the $`m`$ lowest eigenstates of the block Hamiltonian. Since we have to keep or discard all states of a multiplet to ensure the SU(2) symmetry, the actual number of kept states for block $`l`$ is $`m_l^{}\le m`$. An original spin operator in block $`l`$ is accordingly represented by an $`m_l^{}\times m_l^{}`$ matrix on the set of the block states.
Let us consider the effective Hamiltonian
$`H`$ $`=`$ $`{\displaystyle \sum _l}H_l^{(B)}+{\displaystyle \sum _l}H_{l,l+1}`$ (12)
$`H_{l,l+1}`$ $`=`$ $`\stackrel{~}{J}_l\stackrel{}{s}_l^{(R)}\stackrel{}{s}_{l+1}^{(L)},`$ (13)
where $`H_l^{(B)}`$ is a block Hamiltonian of the $`l`$th block, diagonal in the block states; $`H_{l,l+1}`$ is a coupling Hamiltonian between the $`l`$th and $`(l+1)`$th blocks; $`\stackrel{}{s}_l^{(R)}`$ and $`\stackrel{}{s}_l^{(L)}`$ are original spin operators on the right and left edge of the $`l`$th block, respectively; $`\stackrel{~}{J}_l`$ is a coupling between $`\stackrel{}{s}_l^{(R)}`$ and $`\stackrel{}{s}_{l+1}^{(L)}`$. In the extended RSRG scheme we renormalize the two-block Hamiltonian
$$H_{l,l+1}^{}=H_l^{(B)}+H_{l,l+1}+H_{l+1}^{(B)}$$
(14)
with the largest gap $`\mathrm{\Delta }_l`$ into one block Hamiltonian (see Fig. 2). Here we define $`\mathrm{\Delta }_l`$ as the energy difference between the highest energy among the eigen-multiplets of $`H_{l,l+1}^{}`$ which will be kept and the lowest energy among the multiplets which will be discarded after the decimation. The basic scheme of the extended RSRG is the same as that of the conventional RSRG, except for the changes described above. The algorithm is summarized as follows:
1. Focus on the bond with the largest gap $`\mathrm{\Delta }_l`$. Construct the two-block Hamiltonian (14) of the bond.
2. Diagonalize the two-block Hamiltonian to find a set of eigenvalues and eigenstates. At this point, we can calculate expectation values of various operators in the two blocks, such as the two-spin correlation functions between the original spins, using the ground state of the two-block Hamiltonian.
3. Discard all but the lowest $`m_l^{}`$ $`(\le m)`$ eigenstates in the block Hamiltonian.
4. Express operators, such as the block Hamiltonian itself and the original spin operators in the new block, in terms of the new block states.
5. Rewrite the coupling Hamiltonians between the new block and its neighboring blocks in terms of the new $`\stackrel{}{s}^{(L)}`$ and $`\stackrel{}{s}^{(R)}`$. Diagonalize them to update the distribution of the energy gap $`\mathrm{\Delta }`$.
6. Return to step (i).
We obtain all two-spin correlation functions by repeating this procedure until the whole chain is finally represented by one block.
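As a concrete illustration of steps (i)-(iv), a dense-matrix numpy sketch of the two-block merge is given below. It is our own simplified version: it truncates at exactly $`m`$ states and, unlike the algorithm described above, does not enforce the whole-multiplet rule needed to preserve the SU(2) symmetry.

```python
import numpy as np

sz = np.diag([0.5, -0.5])                    # bare S=1/2 edge-spin operators
sp = np.array([[0.0, 1.0], [0.0, 0.0]])      # s+ ; s- is its Hermitian conjugate

def join_blocks(Hl, edge_l, Hr, edge_r, Jt, m):
    """Steps (i)-(iv): build the two-block Hamiltonian (14) on the product
    basis, diagonalize it, and keep the m lowest eigenstates.
    Hl, Hr : block Hamiltonians in their (truncated) block bases
    edge_l : (sz, s+) of the right-edge spin of the left block
    edge_r : (sz, s+) of the left-edge spin of the right block
    """
    dl, dr = Hl.shape[0], Hr.shape[0]
    szl, spl = edge_l
    szr, spr = edge_r
    H2 = (np.kron(Hl, np.eye(dr)) + np.kron(np.eye(dl), Hr)
          + Jt * (np.kron(szl, szr)
                  + 0.5 * (np.kron(spl, spr.conj().T)
                           + np.kron(spl.conj().T, spr))))
    w, v = np.linalg.eigh(H2)
    O = v[:, :min(m, dl * dr)]               # NB: the text keeps whole SU(2)
                                             # multiplets instead, so m_l' <= m
    # step (iv): any old-block operator A carries over as
    # O.conj().T @ np.kron(A, np.eye(dr)) @ O for the left block, and
    # analogously with np.kron(np.eye(dl), A) for the right block.
    return w[:O.shape[1]], O.conj().T @ H2 @ O, O

# e.g. merging two bare spins across an AF bond recovers the singlet at -3/4:
E, Hb, O = join_blocks(np.zeros((2, 2)), (sz, sp), np.zeros((2, 2)), (sz, sp), 1.0, 4)
print(E[0])   # -0.75
```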
## III Numerical Results
Using both the conventional RSRG and the extended RSRG with various values of $`m`$, we calculated the correlation functions $`\langle \stackrel{}{s}_i\stackrel{}{s}_j\rangle `$ in the ground state of open chains for both the random AF case and the random FM-AF case. The maximum size of the chains used for the conventional and the extended RSRG simulations is $`L=100000`$ and $`L=1000`$, respectively. In the FM-AF case, the random average of the spin correlation, $`\langle \stackrel{}{s}_i\stackrel{}{s}_j\rangle `$, always decays exponentially and is not an appropriate quantity to characterize the spin correlation, because the sign of $`\langle \stackrel{}{s}_i\stackrel{}{s}_j\rangle `$ can be either positive or negative depending on the number of AF bonds between $`\stackrel{}{s}_i`$ and $`\stackrel{}{s}_j`$. Instead we introduce the “generalized staggered” correlation function
$$C(|i-j|)=\eta _{ij}\langle \stackrel{}{s}_i\stackrel{}{s}_j\rangle ,$$
(15)
where $`\eta _{ij}=\prod _{k=i}^{j-1}\mathrm{sgn}\left(J_k\right)`$ for $`j>i`$. For the random AF case, $`C(r)`$ is the usual staggered spin correlation. We take the random averages of $`C(r)`$ and $`\mathrm{ln}C(r)`$, which represent the mean correlations and the logarithm of the typical correlations, respectively. Note that it is impossible to take the random average of $`\mathrm{ln}C(r)`$ numerically in the conventional RSRG algorithm, within which $`C(r)=0`$ for two spins that do not form a singlet pair. To check the results of the RSRG methods, we also calculated the correlation functions on $`L=100`$ open chains using the DMRG method with the improved algorithm proposed by White. The number of kept states in the DMRG calculation was up to 100 and 200 for the random AF and FM-AF cases, respectively. In both cases, the mean and typical correlations calculated by the extended RSRG are in excellent agreement with those by DMRG (see below). We note that the systems we have treated are much bigger than those in the earlier work by Hida, in which the DMRG method is applied to the FM-AF chains.
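Since the signs square to one, $`\eta _{ij}`$ factorizes into prefix signs, which makes it cheap to tabulate for all pairs. A short Python sketch of our own:

```python
import numpy as np

def staggered_signs(J):
    """Prefix signs sigma[i] = prod_{k<i} sgn(J_k); because each sign squares
    to one, eta_ij = sigma[i] * sigma[j] equals prod_{k=i}^{j-1} sgn(J_k)."""
    return np.concatenate(([1.0], np.cumprod(np.sign(J))))

J = np.array([0.3, -1.2, 0.7, -0.4])
sigma = staggered_signs(J)
i, j = 0, 3
assert sigma[i] * sigma[j] == np.prod(np.sign(J[i:j]))
```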
### A Random AF case
As a typical probability distribution of the random AF case, we choose the box-type bond distribution $`P(J_i)`$,
$$P(J_i)=\{\begin{array}{cc}\frac{1}{J_0}\hfill & (0\le J_i\le J_0)\hfill \\ 0\hfill & (\mathrm{otherwise})\hfill \end{array}$$
(16)
where the cutoff, $`J_0`$, is taken as the energy unit. For the random AF case, it is known that any normalizable initial distribution flows to a single universal fixed point. Hence, the results we obtained for the initial distribution (16) are generic. The number of sample chains we used for each method is shown in Table I.
Before discussing our numerical results, we briefly comment on the $`m`$-dependence of the data of the extended RSRG. As noted in the last section, the extended RSRG provides more accurate results as the number of kept states, $`m`$, increases. In the random AF case, the ground-state multiplet of a block Hamiltonian is either a singlet or a doublet, depending on the number of original spins belonging to the block. The degeneracy of each low-lying excited multiplet is, therefore, always small. As a result, we can keep a considerable number of multiplets even if $`m`$ is rather small. We estimate the $`m`$-dependence of the results of the extended RSRG from the numerical data with $`m=10,20,30`$.
Figures 3 and 4 show, respectively, the numerical data of the mean correlations, $`\langle C(r)\rangle `$, and the average of the logarithm of the correlations, $`\langle \mathrm{ln}C(r)\rangle `$, where $`\langle \cdots \rangle `$ represents the random average. It is clear that the data of the extended RSRG have already converged at $`m=30`$, and we can regard those data with $`m=30`$ as those in the limit $`m\to \infty `$. In Fig. 3 the data of the extended RSRG with $`m=30`$ are in good agreement with those of DMRG for $`r\le 30`$, where the DMRG data are considered to be exact and free from finite-size effects. On the other hand, the conventional RSRG considerably underestimates the correlations, but its data lie on a line parallel to the data of the extended RSRG in the log-log plot. We conclude from these observations that the results of the extended RSRG are quantitatively reliable, whereas the conventional RSRG can be used to estimate the exponent of the power-law decay. Encouraged by the agreement between the extended RSRG and the DMRG data, we anticipate that the extended RSRG provides quantitatively reliable data even for $`r>30`$, where reliable DMRG data are not available. From the data of the extended RSRG with $`m=30`$ for $`r\le 300`$, where the data are expected not to be hampered by finite-size effects, we estimate the asymptotic form of the average correlations to be
$$\langle C(r)\rangle \sim r^{-2}.$$
(17)
For the average of $`\mathrm{ln}C(r)`$, we also rely on the data obtained from the extended RSRG scheme. Figure 4 gives
$$\langle \mathrm{ln}C(r)\rangle \sim -r^{0.5}.$$
(18)
Both results (17) and (18) agree with Fisher’s theory and can be considered a numerical verification of his solution for the mean and typical correlations in the Heisenberg case. For the random $`XX`$ chains and for a related model, the random transverse-field Ising model, the behavior (17) and (18) has been observed numerically. To our knowledge, Figs. 3 and 4 are the first numerical results confirming Fisher’s theory for the Heisenberg case.
### B Random FM-AF case
Westerberg et al. showed that the RG trajectories of the random FM-AF chains flow towards a single universal fixed point under the conventional RSRG procedure, unless the initial distribution of couplings is more singular than $`P(J_i)\sim |J_i|^{-y_c}`$, $`y_c\approx 0.7`$. In this section we investigate the spin correlations at this fixed point with the extended RSRG scheme. For this purpose we assume the box-type bond distribution $`P(J_i)`$,
$$P(J_i)=\{\begin{array}{cc}\frac{1}{2J_0}\hfill & (-J_0\le J_i\le J_0),\hfill \\ 0\hfill & (\mathrm{otherwise}),\hfill \end{array}$$
(19)
with $`J_0`$ as the energy unit, as a representative of distributions with no singularity at $`J_i=0`$. We expect that our results obtained for the initial distribution (19) capture the universal behavior of the random FM-AF chains which lie in the basin of the universal fixed point found in Ref. . We have numerically calculated spin correlations using the conventional RSRG, the extended RSRG with $`m=30,40,50,60`$, and the DMRG method.
In the random FM-AF chains the degeneracy of the lowest multiplet of a block becomes larger on average as the size of the block grows. When the degeneracy exceeds the number of kept states, $`m`$, fixed beforehand in the algorithm, the extended RSRG scheme breaks down, because we need to keep all the degenerate states in the lowest multiplet to preserve the SU(2) spin symmetry. In fact, we could complete the extended RSRG procedure without exceeding the limit of the number of kept states $`m=30`$ ($`m=60`$) for 30% (75%) of the sample chains ($`L=1000`$). We then used only those samples for which the RSRG could be completed to take the random average. Although this sorting out of sample chains may lead to a systematic underestimate of the average values, we believe that we can correct for it by carefully checking the $`m`$-dependence of the data. The number of sample chains used to take the random average for each RG scheme is shown in Table II.
The numerical results for the mean correlations, $`\langle C(r)\rangle `$, are shown in Fig. 5 in a log-log plot. It is clear that the data of the extended RSRG have almost converged at $`m=60`$ for $`r<500`$, where the data are expected to be free from the effect of the open boundaries. The data with $`m=60`$ are also in excellent agreement with those of the DMRG. We, therefore, regard the results for the mean correlation with $`m=60`$ as essentially converged. We notice that the curve in the log-log plot is bent upward, indicating that the mean correlations $`\langle C(r)\rangle `$ decay more slowly than a power law. To analyze the $`r`$-dependence of the mean correlations, we plot the inverse of $`\langle C(r)\rangle `$ as a function of $`\mathrm{ln}r`$ in the inset of Fig. 5. We find that the data lie on a straight line in this plot, from which we speculate that the mean correlations decay with the logarithmic form,
$$\langle C(r)\rangle \simeq \frac{a}{\mathrm{ln}\left(r/r_0\right)},$$
(20)
where $`a`$ and $`r_0`$ are constants. We note, however, that Eq. (20) is not the only form that can account for the numerical data. In any case the mean correlations show a very weak $`r`$-dependence, probably of the form $`\mathrm{ln}(r/r_0)`$, certainly different from a power law.
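If the logarithmic form (20) holds, $`1/\langle C(r)\rangle `$ is linear in $`\mathrm{ln}r`$, which is exactly the plot used in the inset of Fig. 5. A minimal sketch of the corresponding fit (Python; the arrays stand in for the numerical data, which are not reproduced here):

```python
import numpy as np

# Placeholder data of the form (20) with a = 0.3 and r0 = 2.0:
r = np.array([50.0, 100.0, 200.0, 300.0, 400.0, 500.0])
C_mean = 0.3 / np.log(r / 2.0)

# Linear fit 1/<C(r)> = (ln r - ln r0)/a against ln r:
slope, intercept = np.polyfit(np.log(r), 1.0 / C_mean, 1)
a = 1.0 / slope
r0 = np.exp(-intercept * a)
print(f"a = {a:.3f}, r0 = {r0:.3f}")   # recovers a = 0.300, r0 = 2.000
```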
Figure 6 shows the numerical results for $`\langle \mathrm{ln}C(r)\rangle `$. The data of the extended RSRG with $`m=60`$ exhibit a behavior similar to the logarithm of the mean correlations; the typical correlation $`\mathrm{exp}\left[\langle \mathrm{ln}C(r)\rangle \right]`$ again decays more slowly than a power law. This leads us to plot the inverse of the typical correlations as a function of $`\mathrm{ln}r`$ (see the inset of Fig. 6). Although the curve at $`m=60`$ seems almost linear for $`50<r<400`$, we cannot determine the $`r`$-dependence of the typical correlations from the figure, due to the slow convergence of the data with increasing $`m`$. This slow convergence arises from the fact that the numerical estimate of $`\langle \mathrm{ln}C(r)\rangle `$ is very sensitive to small fluctuations of $`C(r)`$, especially when the value of $`C(r)`$ is extremely small. Calculations with much larger values of $`m`$ would be needed to obtain data accurate enough to determine the $`r`$-dependence of $`\langle \mathrm{ln}C(r)\rangle `$.
### C Distribution of the correlation functions
To make a distinction between the characteristics of the ground-state correlation functions in the random AF chains and in the random FM-AF chains, we analyze the distributions of the logarithm of the correlations, $`D(x;r)`$, where $`x=\mathrm{ln}C(r)`$. Henelius and Girvin have shown numerically that for large $`r`$ the distribution function of $`XX`$ chains with random AF couplings scales to a fixed-point distribution of the form
$$D(x;r)=f(r)F\left(x/g(r)\right)$$
(21)
with
$`f(r)g(r)=1,`$ (22)
$`g(r)=c\left|\langle \mathrm{ln}C(r)\rangle \right|,`$ (23)
where $`c`$ is a positive constant. The scaling function $`F(x/g(r))`$ satisfies the normalization condition $`\int F(y)\,dy=1`$. We will demonstrate that the distribution $`D(x;r)`$ in random Heisenberg chains also exhibits the scaling behavior of Eq. (21), for both the random AF and the random FM-AF case.
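Operationally, the scaling test of Eq. (21) amounts to histogramming $`y=x/g(r)`$ at several distances and checking that the rescaled densities coincide. A minimal sketch (Python; with $`f(r)=1/g(r)`$ the density of $`y`$ directly equals the scaling function $`F(y)`$, and the input samples are hypothetical):

```python
import numpy as np

def collapsed_distribution(x_samples, g):
    """Rescaled distribution of x = ln C(r) for one distance r.

    Since D(x;r) = F(x/g(r))/g(r), the probability density of
    y = x/g(r) is exactly the scaling function F(y).
    """
    y = np.asarray(x_samples, dtype=float) / g
    hist, edges = np.histogram(y, bins=50, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist

# For the random AF case one would use g(r) = r**0.5 and overlay the
# curves returned for different r; their collapse signals Eq. (21).
```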
Figure 7 shows the data of $`D(x;r)`$ for the random AF case obtained using the extended RSRG with $`m=30`$. According to Eqs. (18) and (23), we can set $`g(r)=r^{0.5}`$ and $`f(r)=r^{-0.5}`$. The data points (circles and squares) in Fig. 7 collapse onto a single curve, indicating that the scaling (21) applies. In Fig. 7 we also plot
$$W(x;r)=\frac{e^xD(x;r)}{\langle C(r)\rangle },$$
(24)
which measures the contribution to the mean value of the correlation $`\langle C(r)\rangle `$. Although the curves are rather rough due to statistical errors, it is clear that $`W(x;r)`$ has a considerable weight in the range where $`D(x;r)`$ is very small. This means that a very few spin pairs that have much stronger correlations than typical ones give the dominant contribution to the mean correlation $`\langle C(r)\rangle `$. We also find that, as $`r`$ increases, the region of $`x/g(r)`$ where $`W(x;r)`$ is large moves towards $`x/g(r)=0`$. Indeed this is the behavior expected from Eq. (17); the value of $`x/g(r)`$ where $`W(x;r)`$ takes a large weight should change as $`\mathrm{ln}\langle C(r)\rangle /g(r)\sim -\mathrm{ln}r/r^{0.5}\to 0`$ as $`r\to \infty `$. Thus we regard the results shown in Fig. 7 as further support for the random-singlet picture of the ground state of the random AF Heisenberg chains, where each spin forms a singlet pair with another spin that can be arbitrarily far away. The mean correlation $`\langle C(r)\rangle `$ is dominated by the contribution from the rare case in which two spins separated by the distance $`r`$ form a spin singlet.
Figure 8 shows the numerical data of $`D(x;r)`$ from the extended RSRG with $`m=60`$ for the random FM-AF chains. Here we set $`g(r)=\langle \mathrm{ln}C(r)\rangle /\langle \mathrm{ln}C(r=200)\rangle `$ and $`f(r)=1/g(r)`$. The data points (circles and squares) for $`100<r<500`$, where we may ignore the boundary effect, clearly lie on a single scaling curve. In this range of $`r`$, however, $`g(r)`$ changes by only several percent, from $`0.95`$ to $`1.04`$. To establish the scaling behavior over a wide range of $`g(r)`$, calculations on much (exponentially) larger systems might be necessary. Such large-scale calculations are obviously unfeasible with computers available at present, and thus we can only conclude that our results shown in Fig. 8 are consistent with the scaling hypothesis (21). In the following discussion, we regard $`D(x;r)/f(r)`$ in Fig. 8 as a fixed-point form of the distribution function.
Figure 8 clearly shows that the distribution function of the random FM-AF chains has a quite different form from that of the random AF chains. It is essentially zero for $`x/g(r)>-1`$ and increases approximately linearly for $`x/g(r)<-1`$. The weight function $`W(x;r)`$ representing the contribution to the mean correlation is also shown in Fig. 8. In contrast to the random AF case, $`W(x;r)`$ has most of its weight in the region where $`D(x;r)`$ is not negligible. This feature highlights the different nature of the spin correlations in the random FM-AF chains. As shown in the RSRG analysis, many spins in the random FM-AF chains correlate and form a large effective spin. This suggests that a spin is correlated with many other spins belonging to the same large effective spin, and, therefore, the mean value of the spin correlation function is not at all dominated by the “rare” events in which two spins, far apart from each other, form a singlet pair.
From the observation that $`D(x;r)/f(r)`$ of the random FM-AF chains is negligible for $`x/g(r)>-A`$ ($`A\approx 1`$ in Fig. 8) and has an approximately linear dependence for $`x/g(r)<-A`$ until it takes a maximum value, we can also draw the conclusion that $`\mathrm{ln}\langle C(r)\rangle `$ and $`\langle \mathrm{ln}C(r)\rangle `$ should have a similar dependence on $`r`$, in agreement with Figs. 5 and 6. Let us assume that the scaling function has the form
$$F(y)=\{\begin{array}{cc}-k(y+A)\hfill & y\le -A,\hfill \\ 0\hfill & y>-A,\hfill \end{array}$$
(25)
where $`y=x/g(r)`$, and $`k`$ and $`A`$ are positive constants. Since $`g(r)\to \infty `$ in the limit $`r\to \infty `$, the mean correlation $`\langle C(r)\rangle `$ is dominated by the contribution from the region of small $`|x/g(r)|`$. Thus, we are allowed to use Eq. (25) for calculating $`\langle C(r)\rangle `$, even though this form is not correct for large $`|x/g(r)|`$. Using Eqs. (21), (22) and (25), we calculate the mean correlation:
$$\langle C(r)\rangle =\int e^xD(x;r)\,dx=-\frac{k}{g(r)}\int _{-\infty }^{-Ag(r)}e^x\left(\frac{x}{g(r)}+A\right)dx=\frac{k}{[g(r)]^2}\,e^{-Ag(r)},$$
yielding
$$\mathrm{ln}\langle C(r)\rangle =-Ag(r)+\mathcal{O}\left(\mathrm{ln}g(r)\right)=Ac\,\langle \mathrm{ln}C(r)\rangle +\mathcal{O}\left(\mathrm{ln}|\langle \mathrm{ln}C(r)\rangle |\right)$$
(26)
in the limit $`r\to \infty `$. This result is in agreement with our observation that the curves in Figs. 5 and 6 look very similar. It is also consistent with the fact that $`W(x;r)`$ in Fig. 8 stays in almost the same region of $`x/g(r)`$ with increasing $`r`$. Equation (26) is not changed qualitatively even when $`F(y)`$ has an algebraic form $`F(y)\propto [-(y+A)]^\alpha `$, as long as $`A>0`$. Note that Eq. (26) does not hold in the random AF chains, for which we should take $`A=0`$ and $`\alpha =3`$.
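For completeness, the elementary integral behind the last equality above is

$$\int _{-\infty }^{-Ag}e^x\left(\frac{x}{g}+A\right)dx=\left[e^x\left(\frac{x}{g}+A-\frac{1}{g}\right)\right]_{-\infty }^{-Ag}=-\frac{e^{-Ag}}{g},$$

so that, with the prefactor $`-k/g(r)`$ coming from $`D(x;r)=F(x/g(r))/g(r)`$ and Eq. (25), one indeed obtains $`\langle C(r)\rangle =k\,e^{-Ag(r)}/[g(r)]^2`$, whose logarithm gives Eq. (26).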
## IV Conclusion
We have studied the spin correlations in spin-$`\frac{1}{2}`$ random Heisenberg chains for both the random AF and the random FM-AF case. One of the most powerful methods for the study of random spin chains is the real-space RG, in which the spin degrees of freedom are decimated by integrating out the strongest bonds successively. In order to calculate the two-spin correlation functions between original $`S=\frac{1}{2}`$ spins, we modified the algorithm of the conventional RSRG using the Wigner-Eckart theorem. We also extended the RSRG scheme by keeping the local excited multiplets as block states, to keep track of quantum effects. We demonstrated that the extended RSRG algorithm is very powerful, in the sense that it can be applied to rather large systems and provides accurate numerical data that are in excellent agreement with those of the DMRG method. The numerical data for the mean and typical correlations in the random AF chains verified Fisher’s prediction [Eqs. (17) and (18)] for the Heisenberg case, which cannot be mapped to noninteracting fermions. For the random FM-AF chains, we found that the mean correlation, $`\langle C(r)\rangle `$, decays very slowly, with a logarithmic $`r`$-dependence. We, therefore, conclude that the generalized staggered spin correlation in the random FM-AF chains has no LRO, although it shows very long-range correlations decaying more slowly than any power law. Investigating the distribution of the logarithm of the spin correlation functions, we also clarified the different nature of the ground states of the random AF and FM-AF chains. In both cases, our data for the distributions satisfy the scaling hypothesis, Eq. (21). Analyzing the form of the scaling function, we found that the “rare spin pairs”, which are correlated much more strongly than typical ones, dominate the mean correlation $`\langle C(r)\rangle `$ in the random AF case. It is also shown that the $`r`$-dependence of the mean correlations in this case is quite different from that of the typical ones. These features strongly support the conclusion that the “random singlet phase” is realized in the ground state of the random AF Heisenberg chains. On the other hand, the scaling form of the distribution functions for the random FM-AF chains suggests that such “rare spin pairs” do not play an important role in these chains. We also deduced that $`\mathrm{ln}\langle C(r)\rangle `$ and $`\langle \mathrm{ln}C(r)\rangle `$ have a very similar $`r`$-dependence, using the scaling hypothesis and the specific feature of the scaling function obtained numerically. This result is non-trivial and consistent with our numerical data (Figs. 5 and 6).
###### Acknowledgements.
HT was supported by the Research Fellowship of the Japan Society for the Promotion of Science for Young Scientists. The work of AF and MS was in part supported by a Grant-in-Aid from the Ministry of Science, Education and Culture of Japan. Numerical calculations were carried out at the Yukawa Institute Computing Facility.
# Scaling transformation and probability distributions for financial time series
## 1 Introduction
One of the pillars of modern physics is the covariance of theories under certain group actions. What is particular to a given application, such as initial and boundary conditions, usually breaks the symmetries of the theory. The symmetry group of observed data is therefore usually much smaller than the covariance group of the theory. An example is hydrodynamics, where the equations are invariant under space-time translation and scaling, but where the solutions are not, in general, invariant. For theories that are covariant under scaling (to be specific, we can think of the Navier-Stokes or Korteweg-De Vries equation) the situation is clear: the scaling properties, such as the spectral decomposition of the solution at each time, are given by the scaling properties of the initial condition, the boundary conditions and the external forces. The study of scaling transformation properties in the domain of economics and finance is more complicated because the evolution equations (or even the theory) governing the dynamics are largely unknown. It is thus not possible, in this case, to separate these scaling properties into a general property of an underlying theory and into what is particular to the situation under study. The observed financial chronological data result, at least to some extent, from the particularities of each market and not only from a general abstract dynamics. Therefore there is no a priori reason to expect the data to exhibit simple properties under scaling transformations. Keeping this simple observation in mind, we will base our analysis of the scaling transformation on methods adapted to physical systems with complex behavior:
i) Multifractal analysis of fully turbulent systems, introduced in , and the inversion techniques further developed within that approach in (see also ).
ii) Non-linear group representation theory developed in and applied to many non-linear evolution equations in mathematical physics (see for recent contributions).
We apply these methods to two sets of financial chronological series:
1) Foreign exchange rate DM/$`\$`$: The data set provided by Olsen and Associates contains 1,472,241 worldwide bid-ask quotes for US dollar-German mark exchange rates from 1 October 1992 until 30 September 1993. Tick-by-tick data are irregularly spaced in recorded time. To obtain price values at a regular time, we use linear interpolation between the two recorded times that immediately precede and follow the regular time. We obtain in this way, for a regular time step of $`15`$ seconds, $`1,059,648`$ data points. Our study focuses on the average price, which is the mean of the bid and ask prices.
2) Stock index CAC 40: The data set provided by the “Société de Bourse Française” contains $`1,045,890`$ quotes of the CAC 40 index from 3 January 1993 until 31 December 1996. Tick-by-tick data are recorded regularly every $`30`$ seconds during opening hours (every day from 10 am until 5 pm, except weekends and national holidays). Our data base consists of the daily registers, from which a constant has been subtracted such that the value at 10 am is equal to the value of the previous day at 5 pm. The subtracted jump process (with jumps at fixed times) can be analyzed on its own. This separation allows for a finer analysis of the rest of the process.
Using these data sets, we obtained three new results. First, the scaling transformation of the moments of the observed probability distribution is a non-linear representation that is well approximated, for small scaling parameters, by a linear representation. This linear representation turns out to be diagonal. Secondly, the function of the order of the moment defined by the spectrum of the generator is (non-trivially) concave. This shows, by definition , that the data are multifractal. Note that the concavity in the case of the FX market (DM/$`\$`$) can partially be deduced from and is confirmed, independently of our work, by . Our third new result is an explicit expression for the family of probability distributions of price increments corresponding to different time increments.
For larger values of the scaling parameter, the linear approximation breaks down and the non-linear terms of the representation have to be considered. The analysis of this letter can also be applied to the SP 500 index, where the results should be compared to the (unifractal) scaling behavior found in . It should also be compared with the, from the point of view of finance, more fundamental approach of stochastic time transformations (subordinated processes) that were applied to the SP 500 . These points are left for future investigations.
## 2 The mathematical framework
We suppose that the financial variable is described by a stochastic process $`(u(t))_{t\ge 0}`$ such that the set of increments $`u(t+\tau )-u(t)`$, $`\tau \ge 0`$, has a well defined transformation property under scaling of the time increment $`\tau `$, $`\tau \to a\tau `$, $`a>0`$. To avoid complications, irrelevant for the quite crude application reported in this letter, we suppose that $`(u(t))_{t\ge 0}`$ is stationary. Moreover, we will only consider the absolute value $`|u(t+\tau )-u(t)|`$ of the increments. Let $`w(\tau )=|u(\tau )-u(0)|`$, $`\tau \ge 0`$. This means that for each (scaling) $`a>0`$, there is a map $`T_a`$ of the set $`𝒲=\{w(\tau )|\tau \ge 0\}`$ such that $`T_a(w(\tau ))=w(a\tau )`$. A group action $`T`$ of the scaling (dilatation) group ID (the set of strictly positive real numbers) on the set $`𝒲`$ is then defined, i.e. $`T_{ab}(x)=T_a(T_b(x))`$ and $`T_e(x)=x`$ for $`a,b\in \text{ID}`$, $`x\in 𝒲`$, where $`e=1`$ is the identity element in ID. In the cases under consideration in this letter, it follows from the observed time series that the estimated probability distribution $`p_{w(\tau )}`$ in IR of $`w(\tau )`$ is different for different $`\tau >0`$. This is enough to ensure the existence of the action $`T`$, and moreover shows that $`T`$ induces a group action $`\overline{T}`$ on the set $`𝒫=\{p_{w(\tau )}|\tau \ge 0\}`$ of probability distributions, defined by $`\overline{T}_a(p_{w(\tau )})=p_{w(a\tau )}`$.
The group action $`\overline{T}`$ is not linear, in spite of its appearance. To make the properties of the scaling action $`\overline{T}`$ explicit, we change the coordinates of the elements in $`𝒫`$. As in the case of fully developed turbulence, we use the moments as coordinates. For $`q\in 𝒫`$, let the moment vector be the sequence $`S(q)=(S_r(q))_{r\ge 0}`$, where $`S_r(q)=\int _0^{\infty }x^rq(x)\,dx`$ and $`r\in \text{IR}^+`$. Here we suppose that the set $`𝒫`$ of probability measures is such that $`S_r(q)`$ exists for all orders $`r`$ and moreover that $`q`$ is determined by its moments of order $`r\in \text{IR}^+`$ (which is the case if, for example, the Fourier transforms of elements in $`𝒫`$ are quasi-analytic). Let $`𝒮`$ be the image (in the space $`C(\text{IR}^+)`$ of continuous real functions on $`\text{IR}^+`$) of $`𝒫`$ under the coordinate transformation $`S`$. The image $`U`$ of the group action $`\overline{T}`$ is given by $`U_a=S\overline{T}_aS^{-1}`$, i.e. $`U_a(S(p_{w(\tau )}))=S(p_{w(a\tau )})`$. In the case where $`U`$ is a linear diagonal representation, it has the form $`U_a=U_a^{(1)}`$, where for given real numbers $`\zeta _r`$ with $`r\in \text{IR}^+`$:
$`U_a^{(1)}(m)=(a^{\zeta _r}m_r)_{r\ge 0}`$ (1)
for $`a\in \text{ID}`$ and $`m\in C(\text{IR}^+)`$, $`m_r`$ corresponding to a moment of order $`r`$. We note that $`\{\zeta _r|r\in \text{IR}^+\}`$ is the spectrum of the generator of the representation $`U^{(1)}`$. When $`U`$ is a non-linear perturbation of $`U^{(1)}`$, there are algorithms permitting its construction; however, they are outside the scope of this letter . For convenience we denote $`s_r(\tau )=S_r(p_{w(\tau )})`$, which is the $`r`$-th component of $`U_\tau (S(p_{w(1)}))`$. An accurate and explicit approximation of the inverse transformation $`S^{-1}`$, from the moment vectors $`s_r(\tau )`$ to the probability distribution $`p_{w(\tau )}`$, has been developed in . This permits one to obtain, directly from experimental data, an explicit formula for the family $`𝒫=\{p_{w(\tau )}\}_{\tau >0}`$ of probabilities. In fact, for each $`\tau \in \text{IR}^+`$ we can determine an element $`p_{w(\tau )}`$ by the formulas:
$`xp_{w(\tau )}(x)`$ $`=`$ $`\overline{p}(\mathrm{ln}x)`$
$`\alpha (r,\tau )`$ $`=`$ $`{\displaystyle \frac{d\mathrm{ln}s_r(\tau )}{dr}}`$ (2)
$`\mathrm{ln}\overline{p}(\alpha (r,\tau ))`$ $`=`$ $`\mathrm{ln}s_r(\tau )-r{\displaystyle \frac{d\mathrm{ln}s_r(\tau )}{dr}}-{\displaystyle \frac{1}{2}}\mathrm{ln}(2\pi )-{\displaystyle \frac{1}{2}}\mathrm{ln}{\displaystyle \frac{d^2\mathrm{ln}s_r(\tau )}{dr^2}}.`$ (3)
where $`r\in \text{IR}^+`$.
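A direct numerical transcription of formulas (2) and (3) is straightforward. The following sketch (Python) computes the moments $`s_r(\tau )`$ from a series of absolute increments and inverts them with finite differences in $`r`$; the input array is a placeholder for the actual data, and the log-normal test case is chosen only because the Gaussian saddle-point inversion is then exact:

```python
import numpy as np

def inverse_moment_method(w, r_grid):
    """Reconstruct ln p_bar(alpha) from absolute increments w > 0.

    w: samples of w(tau) = |u(t + tau) - u(t)| at one fixed tau.
    r_grid: moment orders r at which s_r = <w**r> is evaluated.
    """
    w = np.asarray(w, dtype=float)
    ln_s = np.array([np.log(np.mean(w**r)) for r in r_grid])
    alpha = np.gradient(ln_s, r_grid)        # alpha(r) = d ln s_r / dr
    d2 = np.gradient(alpha, r_grid)          # d^2 ln s_r / dr^2
    ln_pbar = (ln_s - r_grid * alpha
               - 0.5 * np.log(2.0 * np.pi) - 0.5 * np.log(d2))
    return alpha, ln_pbar

# Test on log-normal increments, for which the reconstruction is exact:
rng = np.random.default_rng(1)
w = np.exp(0.5 * rng.standard_normal(200_000) - 2.0)
alpha, ln_pbar = inverse_moment_method(w, np.linspace(0.5, 6.0, 40))
```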
## 3 Results
When the representation $`U`$ is linear it follows from expression (1) that $`\mathrm{ln}s_r(\tau )=A_r+\zeta _r\mathrm{ln}\tau `$, where $`A_r`$ and $`\zeta _r`$ are independent of $`\tau `$. Figure (1A) shows that, in the case of FX DM/$`\$`$, this is satisfied to a good approximation with time increments $`\tau `$ and moments of order $`r`$ in the intervals $`11\le \tau \le 2896`$ minutes and $`1\le r\le 10`$. In contrast, for the CAC40 the domain of validity of
the linear approximation also contains the small values of $`\tau `$: $`1\le \tau \le 2048`$ minutes and $`1\le r\le 10`$ (see figure (2A)). Outside this domain in the ($`r,\tau `$) plane, the linear-representation approximation breaks down. Inside the domain of validity of the linear-representation approximation, the spectrum of the generator is presented in figure (1B) (resp. (2B)) in the case of FX DM/$`\$`$ (resp. CAC40). The function $`r\mapsto \zeta _r`$ is in both cases (non-trivially) concave, which by definition (see ) shows that the system has a multifractal behavior.
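The spectrum $`\zeta _r`$ is thus obtained by straight-line fits of $`\mathrm{ln}s_r(\tau )`$ against $`\mathrm{ln}\tau `$, one fit per moment order. A minimal sketch (Python; `series` stands in for the regularly sampled price data):

```python
import numpy as np

def zeta_spectrum(series, taus, r_grid):
    """Fit ln s_r(tau) = A_r + zeta_r ln(tau) for each order r.

    series: regularly sampled prices u(t); taus: time increments in
    units of the sampling step. Returns the exponents zeta_r.
    """
    series = np.asarray(series, dtype=float)
    zetas = []
    for r in r_grid:
        ln_s = [np.log(np.mean(np.abs(series[tau:] - series[:-tau])**r))
                for tau in taus]
        slope, _ = np.polyfit(np.log(taus), ln_s, 1)
        zetas.append(slope)
    return np.array(zetas)

# Monofractal check: for Brownian motion zeta_r = r/2, a straight line;
# concavity of zeta_r over r is the multifractal signature noted above.
```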
Finally, we present in figure (3A) and figure (3B) (resp. figure (4A) and figure (4B)) the probability densities (M.A.M) given by (2) and (3), for $`\tau =8`$ minutes and $`\tau =512`$ minutes, in the case of the FX DM/$`\$`$ rate (resp. the CAC40 index). In all cases, the experimental probability distribution is well approximated, over a large range of price increments, by the corresponding probability distribution in the family $`\{p_{w(\tau )}\}_{\tau >0}`$ constructed by the inverse method developed in and . Other commonly used probability distributions are also presented in the figures for illustration.
# Renormalization group improved BFKL equation
Work supported in part by the E.U. QCDNET contract FMRX-CT98-0194 and by MURST (Italy).
## Abstract
I report on the recent proposal of a generalized small-$`x`$ equation which, in addition to exact leading and next-to-leading BFKL kernels, incorporates renormalization group constraints in the relevant collinear limits.
The calculation of next-to-leading log $`x`$ corrections to the BFKL equation was completed last year, after several years of theoretical effort. The results, for both the anomalous dimension and the hard Pomeron, show however signs of instability due to both the size and the (negative) sign of the corrections, possibly also leading to problems with positivity .
If we write the eigenvalue equation corresponding to the BFKL solution in the form
$`\omega `$ $`=`$ $`\overline{\alpha }_s(t)\left[\chi _0(\gamma )+\overline{\alpha }_s(\mu ^2)\chi _1(\gamma )+\cdots \right],`$
$`t`$ $`=`$ $`\mathrm{log}{\displaystyle \frac{k^2}{\lambda ^2}},`$ (1)
where $`\omega =N-1`$ is the moment index and $`\gamma `$ is an anomalous dimension variable, the NL eigenvalue function has the shape of Fig. 1, which completely overthrows the LL picture, even for coupling values as low as 0.04.
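For orientation, the leading-log characteristic function has the standard form $`\chi _0(\gamma )=2\psi (1)-\psi (\gamma )-\psi (1-\gamma )`$, and the LL curve $`\omega =\overline{\alpha }_s\chi _0(\gamma )`$ is trivial to evaluate numerically. A minimal sketch (Python; the NL function $`\chi _1`$, whose explicit form is lengthy, is deliberately omitted, so this reproduces only the LL picture that the NL corrections destabilize):

```python
import numpy as np
from scipy.special import digamma

def chi0(gamma):
    """Leading-order BFKL characteristic function."""
    return 2.0 * digamma(1.0) - digamma(gamma) - digamma(1.0 - gamma)

alpha_bar = 0.2                        # fixed coupling, illustration only
gamma = np.linspace(0.05, 0.95, 91)
omega_LL = alpha_bar * chi0(gamma)
# The LL hard-Pomeron intercept sits at the symmetric minimum gamma = 1/2:
print(alpha_bar * chi0(0.5))           # = 4 ln(2) * alpha_bar ~ 0.55
```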
The basic reason for the instability above lies in the $`\gamma `$-singularity structure of $`\chi _1`$ (cubic poles), which is of collinear origin and keeps track of the choice of the scaling variable, whether it is $`kk_0/s`$, or $`k^2/s`$, or $`k_0^2/s`$, in a two-scale hard process. An additional reason lies in the renormalization-scale ($`\mu `$) dependence of Eq. (1), related to the method of solution.
In a recent proposal , both problems are overcome at once by a proper use of R.G. constraints on both the kernel and the solution. On one hand, the requirement of single-log scaling violations both for $`k\gg k_0`$ with Bjorken variable $`k^2/s`$ and for the symmetrical limit implies an $`\omega `$-dependent shift of the $`\gamma `$-singularities in the kernel, which resums the double-logarithmic ones mentioned before.
On the other hand, a novel method of solution, called the $`\omega `$-expansion, replaces $`\alpha _s`$ with $`\omega `$ as the perturbative parameter of the subleading hierarchy, and allows an R.G.-invariant formulation of the solution. More precisely, for large $`t`$ the gluon Green’s function takes the factorized form
$$G_\omega (𝐤,𝐤_0)=\mathcal{F}_\omega (𝐤)\tilde{\mathcal{F}}_\omega (𝐤_0),\qquad t\gg t_0\gg 1,$$
(2)
where
$`\dot{g}_\omega (t)`$ $`\equiv `$ $`𝐤^2\mathcal{F}_\omega (𝐤)`$ (3)
$`=`$ $`{\displaystyle \int \frac{d\gamma }{2\pi i}\,\mathrm{exp}\left[\gamma t-\frac{1}{b\omega }X(\gamma ,\omega )\right]}`$
is the $`t`$-dependent unintegrated gluon density. The phase function X is given in terms of the effective eigenvalue function
$$\frac{\partial }{\partial \gamma }X(\gamma ,\omega )=\chi (\gamma ,\omega )=\chi _0^\omega (\gamma )+\omega \frac{\chi _1^\omega (\gamma )}{\chi _0^\omega (\gamma )}+\cdots ,$$
(4)
which now has a fully stable $`\omega `$-dependence (Fig. 2).
The improved kernel eigenvalue functions $`\chi _0^\omega `$ and $`\chi _1^\omega `$ are constructed from the exact L+NL kernels by incorporating the $`\omega `$-shift requirement. The neglected terms in Eq. (4) yield a small error, corresponding to a coupling change $`\delta \alpha _s/\alpha _s=O(\alpha _s\omega )`$, subleading in both the $`\frac{\alpha _s}{\omega }`$ and $`\alpha _s`$ expansions of the small-$`x`$ hierarchy.
The solution for the effective anomalous dimension $`\gamma _{eff}=\dot{g}_\omega (t)/g_\omega (t)`$ is shown in Fig. 3, compared to the L and NL approximations. The resummed result is remarkably similar to the fixed-order value until very close to the singularity point $`\omega _c(t)`$, which lies below the saddle-point breakdown value $`\omega _s(t)`$ used in previous NL estimates of the hard Pomeron. The latter signals the failure of the large-$`t`$ saddle point $`b\omega t=\chi (\overline{\gamma },\omega )`$ to yield a reliable anomalous dimension $`\overline{\gamma }`$, due to infinite $`\gamma `$-fluctuations. The former is the position of the true $`t`$-dependent $`\omega `$-singularity, and is systematically lower [Fig. 4]. No instabilities and very little renormalization-scheme dependence are found.
The critical exponents $`\omega _c(t)`$ and $`\omega _s(t)`$ are actually both needed for a full understanding of the Green’s function (2), whose coefficient $`\tilde{\mathcal{F}}_\omega (𝐤_0)`$ carries the $`t`$-independent, leading Pomeron singularity, which is really nonperturbative. While a precise estimate of the latter requires extrapolating the small-$`x`$ equation into the strong-coupling region $`k^2\sim \mathrm{\Lambda }^2`$, one can argue that $`\omega _c`$ and $`\omega _s`$ provide lower and upper bounds on $`\omega _P`$, and thus a first rough estimate of the Pomeron intercept.
I wish to thank Dimitri Colferai and Gavin Salam for friendly and helpful discussions.
# Mesonic correlations and quark deconfinement
## 1 Introduction
The experimental search for the QCD deconfinement phase transition in ultrarelativistic heavy-ion collisions will enter a new stage when the relativistic heavy-ion collider (RHIC) at Brookhaven provides data complementary to those from the CERN SPS . It is desirable to have a continuum field-theoretical modeling of quark deconfinement and chiral restoration at finite temperature and density (or chemical potential $`\mu `$) that can be extended also to hadronic observables in a rapid and transparent way. Significant steps in this direction have recently been taken through a continuum approach to QCD<sub>T,μ</sub> based on the truncated Dyson-Schwinger equations (DSEs) within the Matsubara formalism , and a recent review is available . A most appealing feature of this approach to modeling nonperturbative QCD<sub>T,μ</sub> is that dynamical chiral symmetry breaking and confinement are embodied in the model gluon 2-point function, constrained by chiral observables at $`T=\mu =0`$, and no new parameters are needed for the extension to $`T,\mu >0`$. Approximations introduced by a specific truncation scheme for the set of DSEs can be systematically relaxed. However, due to the breaking of $`O(4)`$ symmetry and the number of discrete Matsubara modes needed, the finite-$`T,\mu `$ extension of realistic DSE models entails the solution of a complicated set of coupled integral equations. The generation of hadronic observables from such solutions, although a straightforward adaptation of the approach found to be successful at $`T=\mu =0`$, adds further to the difficulties. In the separable model we study here, detailed realism is sacrificed in the hope that the dominant and essential features may be captured in a simple and transparent format. To this end we simplify an existing $`T=\mu =0`$ confining separable interaction Ansatz to produce a Gaussian separable model for $`T,\mu >0`$ .
## 2 Confining separable Dyson-Schwinger equation model
In a Feynman-like gauge where we take $`D_{\mu \nu }=\delta _{\mu \nu }D(p-q)`$ to be the effective interaction between quark colored vector currents, the rainbow approximation to the DSE for the quark propagator $`S(p)=[i\text{/}pA(p)+B(p)+m_0]^{-1}`$ yields, in Euclidean metric,
$`B(p)`$ $`=`$ $`{\displaystyle \frac{16}{3}}{\displaystyle \int \frac{d^4q}{(2\pi )^4}D(p-q)\frac{B(q)+m_0}{q^2A^2(q)+\left[B(q)+m_0\right]^2}},`$ (1)
$`\left[A(p)-1\right]p^2`$ $`=`$ $`{\displaystyle \frac{8}{3}}{\displaystyle \int \frac{d^4q}{(2\pi )^4}D(p-q)\frac{(p\cdot q)A(q)}{q^2A^2(q)+\left[B(q)+m_0\right]^2}}.`$ (2)
We study a separable interaction given by
$$D(p-q)=D_0f_0(p^2)f_0(q^2)+D_1f_1(p^2)(p\cdot q)f_1(q^2),$$
(3)
where $`D_0,D_1`$ are strength parameters and the form factors, for simplicity, are here taken to be $`f_i(p^2)=\mathrm{exp}(-p^2/\mathrm{\Lambda }_i^2)`$ with range parameters $`\mathrm{\Lambda }_i`$. It is easily verified that if $`D_0`$ is non-zero, then $`B(p)=\mathrm{\Delta }mf_0(p^2)`$, and if $`D_1`$ is non-zero, then $`A(p)=1+\mathrm{\Delta }af_1(p^2)`$. The DSE then reduces to nonlinear equations for the constants $`\mathrm{\Delta }m`$ and $`\mathrm{\Delta }a`$. The form factors should be chosen to simulate the $`p^2`$ dependence of $`A(p)`$ and $`B(p)`$ from a more realistic interaction. We restrict our considerations here to the rank-1 case where $`D_1=0`$ and $`A(p)=1`$. The parameters $`D_0`$, $`\mathrm{\Lambda }_0`$ and $`m_0`$ are used to produce reasonable $`\pi `$ and $`\omega `$ properties as well as to ensure that the produced $`B(p)`$ has a reasonable strength, with a range $`\mathrm{\Lambda }_0\approx 0.6-0.8`$ GeV, to be realistic .
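In the rank-1 case the single nonlinear equation for $`\mathrm{\Delta }m`$ can be solved by fixed-point iteration once the O(4)-symmetric 4-momentum integral is reduced to a radial one, $`\int d^4q/(2\pi )^4=(1/8\pi ^2)\int dq\,q^3`$. A minimal numerical sketch (Python; the discretization and the iteration scheme are illustrative choices, not the authors’ method):

```python
import numpy as np

def gap_rank1(D0, Lam, m0, q_max=20.0, n_q=4000, n_iter=200):
    """Fixed-point solution of the rank-1 gap equation (GeV units):
    Dm = (16 D0/3) Int d^4q/(2pi)^4 f0(q^2) [Dm f0 + m0] / (q^2 + [Dm f0 + m0]^2),
    with f0(q^2) = exp(-q^2 / Lam^2)."""
    q = np.linspace(1e-4, q_max, n_q)
    dq = q[1] - q[0]
    f0 = np.exp(-q**2 / Lam**2)
    measure = q**3 / (8.0 * np.pi**2)    # radial measure for d^4q/(2pi)^4
    dm = 0.3                             # starting guess for the mass gap
    for _ in range(n_iter):
        M = m0 + dm * f0
        dm = (16.0 * D0 / 3.0) * np.sum(measure * f0 * M / (q**2 + M**2)) * dq
    return dm

# Illustrative call with the parameter set quoted in Sec. 3:
Lam = 0.687
print(gap_rank1(D0=128.0 / Lam**2, Lam=Lam, m0=0.0096 * Lam))
```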
If there are no solutions to $`p^2A^2(p)+(B(p)+m_0)^2=0`$ for real $`p^2`$ then the quarks are confined. If in the chiral limit ($`m_0=0`$) there is a nontrivial solution for $`B(p)`$, then chiral symmetry is dynamically broken. Both phenomena can be implemented in this separable model. In the chiral limit, the model is confining if $`D_0`$ is strong enough to make $`\mathrm{\Delta }m/\mathrm{\Lambda }_0\ge 1/\sqrt{2\mathrm{e}}`$. Thus for a typical range $`\mathrm{\Lambda }_0`$, confinement will typically occur with $`M(p\to 0)\gtrsim 300`$ MeV.
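The origin of the threshold $`1/\sqrt{2\mathrm{e}}`$ can be made explicit. In the chiral limit, with $`A(p)=1`$ and $`B(p)=\mathrm{\Delta }m\,e^{-p^2/\mathrm{\Lambda }_0^2}`$, a quasiparticle mass pole requires $`p^2+B^2(p)=0`$ for some real $`p^2`$; writing $`p^2=-s`$ with $`s>0`$, this condition becomes

$$\mathrm{\Delta }m^2=s\,e^{-2s/\mathrm{\Lambda }_0^2},$$

whose right-hand side attains its maximum $`\mathrm{\Lambda }_0^2/(2\mathrm{e})`$ at $`s=\mathrm{\Lambda }_0^2/2`$. No real solution, and hence confinement, results once $`\mathrm{\Delta }m^2`$ exceeds $`\mathrm{\Lambda }_0^2/(2\mathrm{e})`$.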
Mesons as $`q\overline{q}`$ bound states are described by the Bethe-Salpeter equation which in the ladder approximation for the present approach is
$$\lambda (P^2)\mathrm{\Gamma }(p,P)=\frac{4}{3}\int \frac{d^4q}{(2\pi )^4}D(p-q)\gamma _\mu S(q_+)\mathrm{\Gamma }(q,P)S(q_{-})\gamma _\mu ,$$
(4)
where $`q_\pm =q\pm P/2`$ and $`P`$ is the meson momentum. The meson mass is identified from $`\lambda (P^2=-M^2)=1`$. With the rank-1 separable interaction, only the $`\gamma _5`$ and the $`\gamma _5\overline{)}P`$ covariants contribute to the $`\pi `$ , and here we retain only the dominant term $`\mathrm{\Gamma }_\pi (p,P)=i\gamma _5E_\pi (p,P)`$. For the vector meson, the only surviving form is $`\mathrm{\Gamma }_{\rho \mu }(p,P)=\gamma _\mu ^T(P)E_\rho (p,P)`$, with $`\gamma _\mu ^T(P)`$ being the projection of $`\gamma _\mu `$ transverse to $`P`$. The separable solutions have the form $`E_i(p,P)=f_0(p^2)C_i(P^2),i=\pi ,\rho `$, where the $`C_i`$ factor out from Eq. (4).
In the limit where a zero momentum range for the interaction is simulated by $`f_0^2(q^2)\to \delta ^4(q)`$, the expressions for the various BSE eigenvalues $`\lambda (P^2)`$ reduce to those of the Munczek and Nemirovsky model , which implements extreme infrared dominance via $`D(p-q)\propto \delta ^{(4)}(p-q)`$. The correspondence is not complete because the quark DSE solution in this model has $`A(p)\ne 1`$. The $`T,\mu >0`$ generalizations of this infrared-dominant (ID) model have been studied recently .
## 3 Pion and rho-meson properties
With parameters $`m_0/\mathrm{\Lambda }_0=0.0096`$, $`D_0\mathrm{\Lambda }_0^2=128`$ and $`\mathrm{\Lambda }_0=0.687`$ GeV, the present Gaussian separable model (GSM) yields $`M_\pi =0.14`$ GeV, $`M_\rho =M_\omega =0.783`$ GeV, $`f_\pi =0.104`$ GeV, a chiral quark condensate $`-\langle \overline{q}q\rangle ^{1/3}=0.248`$ GeV, and a $`\rho `$-$`\gamma `$ coupling constant $`g_\rho =5.04`$.
The generalization to $`T\ne 0`$ is systematically accomplished by transcription of the Euclidean quark 4-momentum via $`q\to q_n=(\omega _n,\stackrel{}{q})`$, where $`\omega _n=(2n+1)\pi T`$ are the discrete Matsubara frequencies. The obtained $`T`$-dependence of the mass gap $`\mathrm{\Delta }m(T)`$ allows for a study of the deconfinement and chiral-restoration features of this model. We find that both occur at $`T_c=146`$ MeV where, in the chiral limit, both $`\mathrm{\Delta }m(T)`$ and $`\langle \overline{q}q\rangle _0`$ vanish sharply as $`(1-T/T_c)^\beta `$, with the critical exponent having the mean-field value $`\beta =1/2`$.
For the $`\overline{q}q`$ meson modes, the $`O(4)`$ symmetry is broken and the type of mass shell condition employed must be specified. If there is a bound state, the associated pole contribution to the relevant $`\overline{q}q`$ propagator or current correlator will have a denominator proportional to
$$1-\lambda (\mathrm{\Omega }_m^2,\stackrel{}{P}^2)\propto \mathrm{\Omega }_m^2+\stackrel{}{P}^2+M^2(T).$$
(5)
We investigate the meson mode eigenvalues $`\lambda `$ using only the lowest meson Matsubara mode ($`\mathrm{\Omega }_m=0`$) and the continuation $`\stackrel{}{P}^2\to -M^2`$. The masses so identified are spatial screening masses corresponding to a behavior $`\mathrm{exp}(-Mx)`$ in the conjugate 3-space coordinate $`x`$ and should correspond to the lowest bound state if one exists.
The obtained $`\pi `$ and $`\rho `$ masses are displayed in Fig. 1 and are seen to be only weakly $`T`$-dependent until near $`T_c=146`$ MeV. This result for $`M_\pi (T)`$ reproduces the similar behavior obtained from the ladder-rainbow truncation of the DSE-BSE complex with a more realistic interaction . The qualitative behavior obtained for the 3-space transverse and longitudinal masses $`M_\rho ^T(T),M_\rho ^L(T)`$ agrees with that reported for the limiting case of the zero-momentum-range or ID model .
To explore the extent to which the model respects the detailed constraints from chiral symmetry, we investigate the exact QCD pseudoscalar mass relation which, after extension to $`T>0`$, is
$$M_\pi ^2(T)f_\pi (T)=2m_0r_P(T).$$
(6)
Here $`r_P`$ is the residue at the pion pole in the pseudoscalar vertex, and in the present model, is given by
$$ir_P(T)=N_cT\underset{n}{\sum }\mathrm{tr}_s\int \frac{d^3q}{(2\pi )^3}\gamma _5S(q_n+\frac{\stackrel{}{P}}{2})\mathrm{\Gamma }_\pi (q_n;\stackrel{}{P})S(q_n-\frac{\stackrel{}{P}}{2}).$$
(7)
The relation in Eq. (6) is a consequence of the pion pole structure of the isovector axial Ward identity, which links the quark propagator, the pseudoscalar vertex and the axial-vector vertex . In the chiral limit, $`r_P\to -\langle \overline{q}q\rangle _0/f_\pi ^0`$ and Eq. (6), for small mass, produces the Gell-Mann–Oakes–Renner (GMOR) relation.
The exact mass relation, Eq. (6), can only be approximately satisfied when the various quantities are obtained approximately such as in the present separable model. The error can be used to assess the reliability of the present approach to modeling the behavior of the pseudoscalar bound state as the temperature is raised towards $`T_c`$. Our findings are displayed in Fig. 1. There the solid line represents $`r_P(T)`$ calculated from the quark loop integral in Eq. (7); the dotted line represents $`r_P`$ extracted from the other quantities in Eq. (6). It is surprising that the separable model obeys this exact QCD mass relation to better than 1% for the complete temperature range up to the restoration of chiral symmetry. Also evident from Fig. 1 is that $`M_\pi (T)`$ and the chiral condensate are largely temperature independent until within about $`0.8T_c`$ whereafter $`M_\pi `$ rises roughly as fast as the condensate falls.
We have also investigated the (approximate) GMOR relation for the present model. The quantity $`-\langle \overline{q}q\rangle _0/N_\pi ^0`$ is displayed in Fig. 1 as the long-dashed line; if the GMOR relation were exactly obeyed, this would coincide with $`M_\pi ^2f_\pi /2m_0`$, which is the dotted line. The quantity $`N_\pi ^0`$ enters here via its role as the normalization constant of the chiral-limit $`\pi `$ BS amplitude $`E_\pi ^0(p^2)=B_0(p^2)/N_\pi ^0`$. If all covariants for the pion were to be retained and the axial-vector Ward identity were obeyed, one would have $`N_\pi =f_\pi `$ . The results in Fig. 1 indicate that the GMOR relation contains an error of about $`5`$% when compared either to the exact mass relation or to the quantities produced by the separable model, and that this error is temperature-independent until about $`0.9T_c`$. It should be noted that $`f_\pi ^0,N_\pi ^0`$ and $`\langle \overline{q}q\rangle _0`$ are equivalent order parameters near $`T_c`$ and have a weak $`T`$-dependence below $`T_c`$. A consequence is that $`M_\pi ^2f_\pi `$, $`r_P`$ and $`-\langle \overline{q}q\rangle _0/N_\pi ^0`$ are almost $`T`$-independent, and so are the estimated errors for the two mass relations linking these quantities. Since we obtain $`M_\pi `$ and $`f_\pi `$ from the model BSE solutions at finite current quark mass, $`f_\pi `$ does not exactly decrease to zero and $`M_\pi `$ does not exactly diverge at $`T_c`$.
Vector mesons play an important role as precursors to di-lepton events in relativistic heavy-ion collisions and it is important to explore the intrinsic $`T`$-dependence of electromagnetic and leptonic vector coupling constants that can arise from the quark-gluon dynamics that underlies the finite extent of the vector $`\overline{q}q`$ modes. The present model provides a simple framework for such investigations. The electromagnetic decay constant $`g_\rho (T)`$ that describes the coupling of the transverse $`\rho ^0`$ to the photon is given by
$`{\displaystyle \frac{M_\rho ^{T\,2}(T)}{g_\rho (T)}}`$ $`=`$ $`{\displaystyle \frac{N_c}{3}}T{\displaystyle \underset{n}{\sum }}\mathrm{tr}_s{\displaystyle \int \frac{d^3q}{(2\pi )^3}\gamma _\mu S(q_n+\frac{\stackrel{}{P}}{2})\mathrm{\Gamma }_\mu ^T(q_n;\stackrel{}{P})S(q_n-\frac{\stackrel{}{P}}{2})},`$ (8)
after accounting for the normalization of the present BS amplitudes. The electromagnetic decay width of the transverse $`\rho `$ mode is calculated from
$`\mathrm{\Gamma }_{\rho ^0\to e^+e^{-}}(T)`$ $`=`$ $`{\displaystyle \frac{4\pi \alpha ^2M_\rho ^T(T)}{3g_\rho ^2(T)}}.`$ (9)
At $`T=0`$ the experimental value is $`\mathrm{\Gamma }_{\rho ^0\to e^+e^{-}}(0)=6.77`$ keV, corresponding to the value $`g_\rho (0)=5.03`$. Our results for $`g_\rho (T)`$ and $`\mathrm{\Gamma }_{\rho ^0\to e^+e^{-}}(T)`$ are displayed in Fig. 1. This electromagnetic width of the 3-space transverse $`\rho `$ increases with $`T`$ and reaches a maximum of 1.72 times the $`T=0`$ width at about $`0.9T_c`$. An increasing electromagnetic width for the $`\rho `$ has been found empirically to be one of the possible medium effects that influence the heavy-ion dilepton spectrum .
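As a quick consistency check of Eq. (9), using the experimental $`\rho `$ mass $`M_\rho \approx 770`$ MeV and $`\alpha \approx 1/137`$ (standard values not quoted above):

$$g_\rho =\left[\frac{4\pi \alpha ^2M_\rho }{3\,\mathrm{\Gamma }_{\rho ^0\to e^+e^{-}}}\right]^{1/2}\approx \left[\frac{4\pi \,(1/137)^2\times 770\text{ MeV}}{3\times 6.77\times 10^{-3}\text{ MeV}}\right]^{1/2}\approx 5.0,$$

in agreement with the value $`g_\rho (0)=5.03`$ used here.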
## 4 Equation of state (EOS) for quark matter
The thermodynamical properties of the confining quark model, and in particular the EOS and the phase diagram, can be obtained from the grand canonical thermodynamical potential $`\mathrm{\Omega }(T,V,\mu )=-T\mathrm{ln}Z(T,V,\mu )=-p(T,\mu )V`$, where the contributions to the pressure (for a homogeneous system)
$$p(T,\mu )=p_{\mathrm{cond}}(T,\mu )+p_{\mathrm{kin}}(T,\mu )+p_0$$
(10)
are obtained from a mean-field approximation to the Euclidean path-integral representation of the grand canonical partition function $`Z(T,V,\mu )`$. In the rank-one separable gluon propagator model, the condensate contribution is $`p_{\mathrm{cond}}(T,\mu )=-3\mathrm{\Delta }m(T,\mu )^2/(16D_0)`$ and the kinetic part of the quark pressure is given by
$$p_{\mathrm{kin}}(T,\mu )=2N_cN_f\int \frac{d^3k}{(2\pi )^3}\,T\underset{n}{\sum }\mathrm{ln}\left(\frac{k_n^2+M^2(k_n^2)}{k_n^2+m_0^2}\right)+p_{\mathrm{free}}(T,\mu ).$$
(11)
In Eq. (11), $`k_n^2=[(2n+1)\pi T+i\mu ]^2+𝐤^2`$, $`M(k_n^2)=m_0+\mathrm{\Delta }m(T,\mu )f_0(k_n^2)`$ and the divergent 3-momentum integration has been regularized by subtracting the free quark pressure and adding it in the well-known regularized form $`p_{\mathrm{free}}(T,\mu )`$. The pressure contribution $`p_0`$ is found such that the total pressure (10) at the phase boundary in the $`T,\mu `$-plane vanishes, see Fig. 3. While investigating this EOS for the separable confining quark model defined above we have observed that, as a function of the coupling parameter $`D_0`$, an instability $`\mathrm{d}(p_{\mathrm{cond}}+p_{\mathrm{kin}})/\mathrm{d}T<0`$ occurs when the criterion for confinement (absence of quasiparticle mass poles) is fulfilled. The physical quark pressure in the confinement domain of the phase diagram vanishes, see Fig. 2. The results for the EOS and the phase diagram can be compared to those for the zero momentum range model with the important modification that in the present finite range model the tricritical point is obtained at finite chemical potential whereas with zero range it was found on the $`\mu =0`$ axis of the phase diagram, see Fig. 3. The location of the tricitical point could be experimentally verified in CERN-SPS experiments provided that changes in the pion momentum correlation function could be detected as a function of the beam energy .
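For reference, the subtraction term $`p_{\mathrm{free}}(T,\mu )`$ is the standard ideal-gas pressure of quarks and antiquarks of mass $`m_0`$. A minimal numerical sketch (Python; the overall factor $`2N_cN_f`$ matches Eq. (11), $`m_0=0.0096\,\mathrm{\Lambda }_0\approx 0.0066`$ GeV follows from the Sec. 3 parameters, and moderate $`\mu /T`$ is assumed so that no overflow handling is needed):

```python
import numpy as np
from scipy.integrate import quad

def p_free(T, mu, m0=0.0066, Nc=3, Nf=2):
    """Ideal quark + antiquark pressure in GeV^4 (T, mu, m0 in GeV)."""
    def integrand(k):
        E = np.sqrt(k * k + m0 * m0)
        return k * k * (np.log1p(np.exp(-(E - mu) / T))
                        + np.log1p(np.exp(-(E + mu) / T)))
    kmax = 20.0 * max(T, mu, m0, 0.1)    # effective cutoff of the integral
    val, _ = quad(integrand, 0.0, kmax)
    return 2 * Nc * Nf * T * val / (2.0 * np.pi**2)

print(p_free(T=0.15, mu=0.0))
```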
A particularly interesting phenomenological application is the $`T=0`$ quark matter EOS, which is a necessary element for studies of quark deconfinement in neutron stars. The present Gaussian separable model leads to a bag-model EOS for quark matter with a bag constant $`B(T=0)=150`$ MeV/fm<sup>3</sup> for the parameter set ($`D_0\mathrm{\Lambda }_0^2=128`$) employed in Sec. 3. A second parameter set ($`D_0\mathrm{\Lambda }_0^2=97`$) that is also confining and provides an equally good description of the same $`\pi `$ and $`\rho `$ properties produces $`B(T=0)=75`$ MeV/fm<sup>3</sup>; see also Fig. 3. More stringent constraints on the low-temperature EOS will require the inclusion of hadronic excitations, including the nucleon.
## 5 Deconfinement in rotating neutron stars
For the discussion of deconfinement in neutron stars, it is crucial to go beyond the mean-field description and to include in the EOS also the hadronic bound states (neutrons, protons, mesons) in the confined phase. This task has not yet been solved and therefore we adopt for this phase a Walecka model as introduced in . In constructing the phase transition to the deconfined quark matter as described by the GSM, with a parameter set leading to $`B=75`$ MeV/fm<sup>3</sup>, we have to obey the constraints of global baryon number conservation and charge neutrality . The composition of the neutron star matter is also constrained by the processes which establish $`\beta `$ equilibrium in the quark phase ($`d\to u+e^{-}+\overline{\nu }`$) and in the hadronic phase ($`n\to p+e^{-}+\overline{\nu }`$). For the given EOS we obtain a deconfinement transition of first order, where the hadronic phase is separated from the quark matter one by a mixed phase in the density interval $`1.39\le n/n_0\le 2.37`$, where $`n_0=0.16\,\mathrm{fm}^{-3}`$ is the nuclear saturation density .
All observable consequences should be discussed for rapidly rotating compact objects and therefore we have studied these rotating configurations using this model EOS with a deconfinement transition. The result is shown in Fig. 4 and demonstrates that within the present approach a deconfinement transition in compact stars is compatible with constraints on radius and mass recently derived from the observation of QPOs in low-mass X-ray binaries (LMXBs) .
The basic quantity for the study of a deconfinement transition in rotating compact stars is the moment of inertia, which governs the rotation and thus the spin-down characteristics. Changes in the angular velocity $`\mathrm{\Omega }(t)`$ as a function of time can occur, e.g., due to magnetic dipole radiation or mass accretion . During the time evolution of an isolated pulsar, the deviation of the braking index from $`n(\mathrm{\Omega })=3`$ can signal not only the occurrence of a quark matter core, but also its size . We have found that in LMXBs with mass accretion at conserved angular momentum the occurrence of a quark matter core would reflect itself in a change from a spin-down to a spin-up era , see Fig. 5.
More detailed investigations of these scenarios have to be performed with a more realistic EOS, in particular for the hadronic phase. The possibility of a $`T=0`$ quark matter EOS which corresponds to a bag-model EOS with small bag constants of the order of $`B=70`$ MeV/fm<sup>3</sup> is an important result of the study of the confining quark model, which bears interesting consequences for the study of further nontrivial phases in high-density QCD, e.g. (color) superconductivity.
## 6 Superconducting quark matter
The possible occurrence of a superconducting quark matter phase has recently been reconsidered on the basis of nonperturbative approaches to the effective quark 4-point interaction , and critical temperatures of the order of $`50`$ MeV with diquark pairing gaps of $`100`$ MeV have been obtained. So, if quark matter occurs in the interior of compact stars as advocated in the previous section, it would have to be realised in such a superconducting phase. Deconfinement in compact stars can thus result in effects on the magnetic field structure as well as on the cooling curves of pulsars. Contrary to previous estimates , low-temperature quark matter is a superconductor of the second kind, and thus the magnetic field can penetrate into the quark core in Abrikosov vortices and does not decay on timescales shorter than $`10^7`$ years. Thus the occurrence of a superconducting quark matter phase in compact stars does not contradict observational data . The recently developed nonperturbative approaches to diquark condensates in high-density quark matter can be further constrained by studying the consequences for the cooling curves of pulsars, which have to be consistent with the observational data .
## 7 Conclusions
A simple confining separable interaction Ansatz for the rainbow-ladder truncated QCD Dyson-Schwinger equations is found capable of modeling $`\overline{q}q`$ meson states at $`T>0`$ together with quark deconfinement and chiral restoration. Deconfinement and chiral restoration are both found to occur at $`T_c=146`$ MeV. The spatial screening masses for the meson modes are obtained. We find that, until near $`T_c`$, $`M_\pi (T)`$ and $`f_\pi (T)`$ are weakly $`T`$-dependent and that this model obeys the exact QCD pseudoscalar mass relation to better than 1%. The GMOR relation is found to be accurate to within 5% until very near $`T_c`$. For the vector mode, the 3-space transverse and longitudinal masses $`M_\rho ^T(T)`$ and $`M_\rho ^L(T)`$ are weakly $`T`$-dependent, while the width for the electromagnetic decay $`\rho ^0\to e^+e^{-}`$ is found to increase to 1.72 times the $`T=0`$ width. The equation of state (EOS) of the model is investigated in the $`T`$-$`\mu `$ plane and shows a tricritical point at $`T=127\,\mathrm{MeV},\mu =120\,\mathrm{MeV}`$. At $`T=0`$ the EOS can be given the form of a bag model, where a broad range of bag constants $`B=75-150`$ MeV/fm<sup>3</sup> is obtained, consistent with possible parametrizations of $`\pi `$ and $`\rho `$ observables.
The consequences for the deconfinement transition in rapidly rotating neutron stars are considered and a new signal from the pulsar timing in binary systems with mass accretion is suggested. The model EOS under consideration meets the new constraints for maximum mass and radius recently derived from QPO observations. Within the present model, quark matter below $`T_c\approx 50`$ MeV is a superconductor of the second kind, and it is suggested that the magnetic field in a neutron star forms an Abrikosov vortex lattice which penetrates into the quark matter core and thus, in accordance with observation, does not decay on timescales of $`10^4`$ years as previously suggested.
## Acknowledgments
P.C.T. acknowledges support by the National Science Foundation under Grant No. INT-9603385 and the hospitality of the University of Rostock where part of this work was conducted. The work of D.B. has been supported in part by the Deutscher Akademischer Austauschdienst (DAAD) and by the Volkswagen Stiftung under grant No. I/71 226. The authors thank Yu. Kalinovsky, P. Maris, C.D. Roberts and S. Schmidt for discussions and criticism.
## 1 Introduction:
The interest in precise measurements of the flux of neutrinos produced in cosmic ray cascades in the atmosphere has been growing over the last years due to the anomaly in the ratio of contained muon neutrino to electron neutrino interactions. The past observations of Kamiokande, IMB and Soudan 2 are now confirmed by those of SuperKamiokande, MACRO and Soudan 2 (with higher statistics), and the anomaly finds an explanation in the scenario of $`\nu _\mu `$ oscillation (Fukuda 1998a). The effects of neutrino oscillations have to appear also in higher energy ranges. The flux of muon neutrinos in the energy region from a few GeV up to a few TeV can be inferred from measurements of upward throughgoing muons (Ahlen 1995, Ambrosio 1998b, Hatakeyama 1998, Fukuda 1998b). As a consequence of oscillations, the flux of upward throughgoing muons should be affected both in the absolute number of events and in the shape of the zenith angle distribution, with relatively fewer observed events near the vertical than near the horizontal, due to the longer path length of neutrinos from production to observation.
Here an update of the measurement of the high energy muon neutrino flux is presented. The new data are in agreement with the old data. The MACRO low energy data are presented in another paper at this conference (Surdo 1999).
## 2 Upward Throughgoing Muons:
The MACRO detector is described elsewhere (Ahlen 1993, Ambrosio 1998b). Active elements are streamer tube chambers used for tracking and liquid scintillator counters used for the time measurement. The direction that muons travel through MACRO is determined by the time-of-flight between two different layers of scintillator counters. The measured muon velocity is calculated with the convention that muons going down through the detector are expected to have 1/$`\beta `$ near +1 while muons going up through the detector are expected to have 1/$`\beta `$ near -1.
Several cuts are imposed to remove backgrounds caused by radioactivity or showering events which may result in a bad time reconstruction. The most important cut requires that the position of a muon hit in each scintillator, as determined from the timing within the scintillator counter, agrees within $`\pm `$70 cm with the position indicated by the streamer tube track.
When a muon hits 3 scintillator layers, there is redundancy in the time measurement and 1/$`\beta `$ is calculated from a linear fit of the times as a function of the pathlength. Tracks with a poor fit are rejected. Other minor cuts are applied for the tracks with only two layers of scintillator hit.
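The timing fit amounts to a linear regression of the hit times on the path lengths along the track. A minimal sketch (Python; the three-layer hit data are hypothetical, and the path is measured downward along the track so that upgoing muons give a negative slope):

```python
import numpy as np

C_LIGHT = 0.2998  # speed of light in m/ns

def inverse_beta(path_m, time_ns):
    """1/beta from the linear fit t = t0 + (1/(beta*c)) * L over >= 3 layers.

    Convention: downgoing tracks give 1/beta near +1,
    upgoing tracks give 1/beta near -1.
    """
    slope, _ = np.polyfit(path_m, time_ns, 1)   # ns per metre
    return slope * C_LIGHT

# Hypothetical upgoing muon: the bottom layer (largest path) fires first.
L = np.array([0.0, 4.8, 9.6])        # m, measured downward along the track
t = np.array([64.0, 48.0, 32.0])     # ns
print(inverse_beta(L, t))            # ~ -1.0
```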
It has been observed that downgoing muons which pass near or through MACRO may produce low-energy, upgoing particles. These could appear to be neutrino-induced upward throughgoing muons if the downgoing muon misses the detector (Ambrosio 1998a). In order to reduce this background, we impose a cut requiring that each upgoing muon must cross at least 200 g/cm<sup>2</sup> of material in the bottom half of the detector. Finally, a large number of nearly horizontal ($`\mathrm{cos}\theta >-0.1`$), but upgoing muons have been observed coming from azimuth angles corresponding to a direction containing a cliff in the mountain, where the overburden is insufficient to remove nearly horizontal, downgoing muons which have scattered in the mountain and appear as upgoing. We exclude this region from both our observation and the Monte-Carlo calculation of the upgoing events.
Figure 1A) shows the $`1/\beta `$ distribution for the throughgoing data from the full detector running. A clear peak of upgoing muons is evident, centered on $`1/\beta =-1`$.
There are 561 events in the range $`-1.25<1/\beta <-0.75`$, which we define as upgoing muons for this data set. We combine these data with the previously published data (Ahlen 1995) for a total of 642 upgoing events. Based on events outside the upgoing muon peak, we estimate there are $`12.5\pm 6`$ background events in the total data set. In addition to these events, we estimate that there are $`10.5\pm 4`$ events which result from upgoing charged particles produced by downgoing muons in the rock near MACRO. Finally, it is estimated that $`12\pm 4`$ events are the result of interactions of neutrinos in the very bottom layer of MACRO scintillators. Hence, removing the backgrounds, the observed number of upward throughgoing muons integrated over all zenith angles is 607.
In the upgoing muon simulation we have used the neutrino flux computed by the Bartol group (Agrawal 1996). The cross-sections for the neutrino interactions have been calculated using the GRV94 (Glück 1995) parton distribution set, which gives a +1% variation with respect to the Morfin and Tung parton distributions that we have used in the past. We estimate a systematic error of 9% on the upgoing muon flux due to uncertainties in the cross section, including low-energy effects (Lipari 1995). The propagation of muons to the detector has been done using the energy loss calculation (Lohmann 1985) for standard rock. The total systematic uncertainty on the expected flux of muons, adding the errors from the neutrino flux, cross-section and muon propagation in quadrature, is $`\pm 17\%`$. This theoretical error in the prediction is mainly a scale error that doesn’t change the shape of the angular distribution. The number of events expected, integrated over all zenith angles, is 824.6, giving a ratio of the observed number of events to the expectation of 0.74 $`\pm 0.031`$ (stat) $`\pm 0.044`$ (systematic) $`\pm 0.12`$ (theoretical).
Figure 1 B) shows the zenith angle distribution of the measured flux of upgoing muons with energy greater than 1 GeV for all MACRO data, compared to the Monte Carlo expectation for no oscillations and to a $`\nu _\mu \to \nu _\tau `$ oscillated flux with $`\mathrm{sin}^22\theta =1`$ and $`\mathrm{\Delta }m^2=0.0025`$ eV<sup>2</sup> (dashed line).
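The origin of the expected zenith-angle shape is the two-flavor survival probability $`P(\nu _\mu \to \nu _\mu )=1-\mathrm{sin}^22\theta \,\mathrm{sin}^2(1.27\mathrm{\Delta }m^2L/E)`$ ($`\mathrm{\Delta }m^2`$ in eV<sup>2</sup>, $`L`$ in km, $`E`$ in GeV) with a baseline that grows from the horizontal to the vertical. A minimal illustrative sketch (Python; a single representative neutrino energy is assumed, rather than the full flux and cross-section folding used in the actual Monte Carlo):

```python
import numpy as np

R_EARTH = 6371.0   # km
H_PROD = 15.0      # km, typical neutrino production height

def baseline_km(cos_zenith):
    """Path length from the production point to the detector."""
    c = np.asarray(cos_zenith, dtype=float)
    return (np.sqrt((R_EARTH * c)**2 + 2.0 * R_EARTH * H_PROD + H_PROD**2)
            - R_EARTH * c)

def p_survive(cos_zenith, E_GeV, dm2_eV2=0.0025, sin2_2theta=1.0):
    L = baseline_km(cos_zenith)
    return 1.0 - sin2_2theta * np.sin(1.27 * dm2_eV2 * L / E_GeV)**2

cosz = np.linspace(-1.0, -0.1, 10)    # upgoing, vertical to near-horizontal
print(p_survive(cosz, E_GeV=100.0))   # stronger suppression near the vertical
```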
The shape of the angular distribution has been tested against the hypothesis of no oscillation, excluding the last bin near the horizontal and normalizing data and predictions. The $`\chi ^2`$ is $`22.9`$ for 8 degrees of freedom (probability of 0.35% for a shape at least this different from the expectation). We have also considered $`\nu _\mu \to \nu _\tau `$ oscillations. The best $`\chi ^2`$ in the physical region of the oscillation parameters is 12.5, for $`\mathrm{\Delta }m^2`$ around 0.0025 eV<sup>2</sup> and maximum mixing (the best $`\chi ^2`$ is 10.6, outside the physical region, for an unphysical value of $`\mathrm{sin}^22\theta =1.5`$).
To test the oscillation hypothesis, we calculate the independent probability for obtaining the number of events observed and the angular distribution for various oscillation parameters. They are reported for $`\mathrm{sin}^22\theta =1`$ in Figure 2 A) for $`\nu _\mu \rightarrow \nu _\tau `$ oscillations. It is notable that the value of $`\mathrm{\Delta }m^2`$ suggested by the shape of the angular distribution is similar to the value necessary to obtain the observed reduction in the total number of events in the hypothesis of maximum mixing. Figure 2 B) shows the same quantities for sterile neutrino oscillations (Akhmedov 1993, Liu 1998).
Figure 3 A) shows probability contours for the oscillation parameters using the combination of the probability for the number of events and the $`\chi ^2`$ of the angular distribution. The maximum of the probability is 36.6% for $`\nu _\mu \rightarrow \nu _\tau `$ oscillations. The best probability for oscillations into sterile neutrinos is 8.4%. The probability for no oscillation is 0.36%.
Figure 3 B) shows the confidence regions at the 90% and 99% confidence levels, based on the application of the Monte Carlo prescription of (Feldman 1998). We also plot the sensitivity of the experiment. The sensitivity is the 90% contour which would result from the preceding prescription when the data are equal to the Monte Carlo prediction at the best-fit point.
## 3 Conclusions
The upgoing throughgoing muon data set favors $`\nu _\mu \rightarrow \nu _\tau `$ oscillations with parameters similar to those observed by SuperKamiokande, with a probability of 36.6% against 0.36% for the no-oscillation hypothesis. The probability of oscillations from the angular distribution alone is 13%. These probabilities are higher than those obtained with the old data (Ambrosio 1998b). The sterile neutrino oscillation hypothesis is slightly disfavored.
References
Agrawal V. et al. 1996, Phys. Rev. D53, 1314.
Ahlen S. et al. (MACRO Collaboration) 1995, Phys. Lett. B357, 481.
Ahlen S. et al. (MACRO Collaboration) 1993, Nucl. Instrum. Meth. A324, 337-362.
Akhmedov E., Lipari P. and Lusignoli M. 1993, Phys. Lett. B300, 128-136.
Ambrosio M. et al. (MACRO Collaboration) 1998a, Astropart. Phys. 9, 105-117.
Ambrosio M. et al. (MACRO Collaboration) 1998b, Phys. Lett. B434, 451.
Feldman G. and Cousins R. 1998, Phys. Rev. D57, 3873.
Fukuda Y. et al. (SuperKamiokande Collaboration) 1998a, Phys. Rev. Lett. 81, 1562-1567.
Fukuda Y. et al. (SuperKamiokande Collaboration) 1998b, e-print hep-ex/9812014.
Glück M., Reya E. and Stratmann M. 1995, Z. Phys. C67, 433.
Hatakeyama S. et al. (Kamiokande Collaboration) 1998, Phys. Rev. Lett. 81, 2016.
Lipari P., Lusignoli M. and Sartogo F. 1995, Phys. Rev. Lett. 74, 4384.
Liu Q.Y. and Smirnov A.Yu. 1998, Nucl. Phys. B524, 505.
Lohmann H., Kopp R. and Voss R. 1985, CERN-EP/85-03.
Surdo A. (MACRO Collaboration) 1999, HE4.1.06, in this conference.
ON THE QUESTION OF DEGENERACY OF TOPOLOGICAL SOLITONS IN A GAUGED O(3) NON-LINEAR SIGMA MODEL WITH CHERN-SIMONS TERM
P.Mukherjee<sup>1</sup><sup>1</sup>1e-mail:pradip@boson.bose.res.in
Department of Physics
A.B.N. Seal College, Coochbehar
West Bengal, India
## Abstract
We show that the degeneracy of topological solitons in the gauged O(3) non-linear sigma model with Chern-Simons term may be removed by choosing a self-interaction potential with symmetry-breaking minima. The topological solitons in the model have energy, charge, flux and angular momentum quantised in each topological sector.
The 2+1 dimensional O(3) nonlinear sigma model has been studied extensively over a long period of time, both for its own interest of providing topologically stable soliton solutions which are exactly integrable in the Bogomol’nyi limit and for its applications in condensed matter physics. Soliton solutions of the model modified by the addition of the Hopf term, characterising maps from $`S^3`$ to $`S^2`$, reveal the occurrence of fractional spin and statistics. The system can be cast in the form of a genuine gauge theory by the inclusion of the Chern-Simons (C-S) term, which implements fractional spin and statistics in the context of local field theories. A gauge-independent analysis of the O(3) sigma model with C-S term shows conclusively that this fractional spin is a physical effect and not an artifact of the gauge. The fractional spin of the topologically stable solitons of the model was shown to scale as the square of the vortex number. It is also established that this property is not specific to the particular model \[8-11\] but rather shared by Chern-Simons vortices in general.
A characteristic feature of the soliton solutions of the O(3) sigma model in (2+1) dimensions is their scale-invariance, which prevents a particle interpretation on quantisation. An interesting method of breaking this scale-invariance is to gauge the U(1) subgroup as well as to include a potential term. Recently it has been shown that gauging the U(1) subgroup by the Chern-Simons term with a particular potential gives rise to both topological and nontopological solitons. However, the topological solitons are infinitely degenerate in a given topological sector, with quantised energy but degenerate charge, flux and angular momentum. This is certainly a very uncharacteristic outcome in the light of the findings detailed above about Chern-Simons vortices.
The potential used in has two discrete minima $`\varphi _3=\pm 1`$, and the U(1) symmetry is not spontaneously broken. The topologically stable soliton solutions of the model are classified according to the homotopy $`\mathrm{\Pi }_2(S_2)=Z`$, just as in the model without the gauge field coupling. The observed infinite degeneracy in each topological sector is thus physically undesirable. In the present letter we will show that the inclusion of a different form of self-interaction, with symmetry-breaking minima, leads to topologically stable soliton solutions which have all the desired features, with quantised energy, charge, flux and angular momentum in each topological sector.
The Lagrangian of our model is given by
$$\mathcal{L}=\frac{1}{2}D_\mu \varphi \cdot D^\mu \varphi +\frac{k}{4}ϵ^{\mu \nu \lambda }A_\mu \partial _\nu A_\lambda +U(\varphi )$$
(1)
Here $`\varphi `$ is a triplet of scalar fields constituting a vector in the internal space with unit norm
$`\varphi _a=𝐧_a\cdot \varphi ,(a=1,2,3)`$ (2)
$`\varphi \cdot \varphi =\varphi _a\varphi _a=1`$ (3)
where $`𝐧_a`$ constitute a basis of unit orthogonal vectors in the internal space. We work in Minkowskian space-time with the metric tensor diagonal, $`g_{\mu \nu }=\mathrm{diag}(1,-1,-1)`$.
$`D_\mu \varphi `$ is the covariant derivative given by
$$D_\mu \varphi =\partial _\mu \varphi +A_\mu 𝐧_3\times \varphi $$
(4)
The SO(2) (U(1)) subgroup is gauged by the vector potential $`A_\mu `$, whose dynamics is dictated by the CS term. The potential
$$U(\varphi )=\frac{1}{2k^2}\varphi _3^2(1-\varphi _3^2)$$
(5)
gives a self-interaction of the fields $`\varphi _a`$. Note that the minima of the potential arise when either
$`\varphi _1=\varphi _2=0\text{ and }\varphi _3=\pm 1`$ (6)
$`\text{or, }\varphi _3=0\text{ and }\varphi _1^2+\varphi _2^2=1`$ (7)
In (6) the U(1) symmetry is unbroken, whereas (7) corresponds to the spontaneous breaking of the same. For obvious reasons we will refer to (6) as the symmetric minima and to (7) as the symmetry-breaking minima.
The Euler-Lagrange equations of the system (1) are derived subject to the constraint (3) by the Lagrange multiplier technique:
$`D_\nu (D^\nu \varphi )`$ $`=`$ $`[D_\nu (D^\nu \varphi )\cdot \varphi ]\varphi -{\displaystyle \frac{1}{k^2}}𝐧_3\varphi _3(1-2\varphi _3^2)+{\displaystyle \frac{1}{k^2}}\varphi _3^2(1-2\varphi _3^2)\varphi `$ (8)
$`{\displaystyle \frac{k}{2}}ϵ^{\mu \nu \lambda }F_{\nu \lambda }`$ $`=`$ $`j^\mu `$ (9)
where
$$j^\mu =𝐧_3\cdot 𝐉^\mu \text{ and }𝐉^\mu =\varphi \times D^\mu \varphi $$
(10)
Using (8) we get
$$D_\mu 𝐉^\mu =\frac{1}{k^2}(𝐧_3\times \varphi )\varphi _3(1-2\varphi _3^2)$$
(11)
From (9) we find
$$j^0=kϵ_{ij}\partial ^iA^j=kB$$
(12)
where $`B`$ = curl $`𝐀`$ is the magnetic field. Integrating (12) over the entire space we obtain
$$\mathrm{\Phi }=\frac{Q}{k}$$
(13)
where Q is the charge and $`\mathrm{\Phi }`$ is the magnetic flux. The relation (13) is characteristic of CS theories.
The energy functional is now obtained from Schwinger’s energy-momentum tensor, which in the static limit becomes
$$E=\frac{1}{2}\int d^2x\left[(D_i\varphi )\cdot (D_i\varphi )+\frac{k^2B^2}{1-\varphi _3^2}+\frac{1}{k^2}\varphi _3^2(1-\varphi _3^2)\right]$$
(14)
We have eliminated $`A_0`$ using (10) and (12). The energy functional (14) is subject to the constraint (3). We can construct a conserved current
$$K_\mu =\frac{1}{8\pi }ϵ_{\mu \nu \lambda }[\varphi \cdot D^\nu \varphi \times D^\lambda \varphi -F^{\nu \lambda }\varphi _3]$$
(15)
By a straightforward calculation it can be shown that
$$\partial _\mu K^\mu =0$$
(16)
The corresponding conserved charge is
$$T=\int d^2x\,K_0$$
(17)
Using (15) and (17) we can write
$$T=\int d^2x\left[\frac{1}{8\pi }ϵ_{ij}\varphi \cdot (\partial ^i\varphi \times \partial ^j\varphi )\right]+\frac{1}{4\pi }\oint _{boundary}\varphi _3A_\theta r\,d\theta $$
(18)
where $`r,\theta `$ are polar coordinates in the physical space and $`A_\theta =𝐞_\theta \cdot 𝐀`$.
Let us now consider the symmetric minima (6). For a finite value of the energy functional (14) we require the fields at spatial infinity to be equal to either $`\varphi _3=1`$ or $`\varphi _3=-1`$. The physical infinity is thus one-point compactified to either the north or the south pole of the internal sphere. The static field configurations are thus classified according to the degree of the mapping from $`S_2`$ to $`S_2`$. Note that the first term of (18) gives the winding number of the mapping. But $`\varphi _3\rightarrow \pm 1`$ on the boundary. As a result the value of the topological charge is not quantised. So the static finite energy solutions corresponding to the symmetric minima are nontopological; topological solitons are not obtained in this limit. These observations may be compared with earlier findings about the Chern-Simons solitons.
The situation changes dramatically when we consider the symmetry-breaking minima (7). Here the physical vacua bear a representation of the U(1) symmetry:
$$\psi \rightarrow e^{in\theta }$$
(19)
where $`\psi =\varphi _1+i\varphi _2`$ and n gives the number of times the circle at infinity of the physical space winds around the equatorial circle of the internal sphere. The topological solitons of the model are now classified according to this winding number. When the equatorial circle is traversed once, the physical space is mapped onto a hemisphere of the internal sphere. In general the topological charge (18) will be quantised as
$$T=\frac{n}{2}$$
(20)
allowing half-integral values of T.
Using the definition of $`\varphi `$ we can write
$$D_i\varphi \cdot D_i\varphi =|(\partial _i+iA_i)\psi |^2+(\partial _i\varphi _3)^2$$
(21)
From (14) and (21) we observe that for finite energy configurations we require the covariant derivative $`(\partial _i+iA_i)\psi `$ to vanish at the physical boundary. Using (19) we then get, on the boundary,
$$𝐀=-𝐞_\theta \frac{n}{r}$$
(22)
It is really interesting to observe that the asymptotic form (22) is sufficient to find the magnetic flux $`\mathrm{\Phi }`$, the charge Q and the spin S. Thus the magnetic flux is
$$\mathrm{\Phi }=\int B\,d^2x=\oint _{boundary}A_\theta r\,d\theta =-2\pi n$$
(23)
and spin
$$S=\frac{k}{2}\int d^2x\,\partial ^i\left[x_iA^2-A_ix_jA^j\right]=\pi kn^2$$
(24)
Using (13) and (23) we then find
$$Q=-2\pi kn$$
(25)
Equations (23) to (25) show that the soliton solutions corresponding to the symmetry-breaking vacua have charge, flux and angular momentum quantised in each topological sector.
We now turn to show that the model satisfies Bogomol’nyi conditions. Rearranging (14) we can write
$$E=\frac{1}{2}\int d^2x\left[\frac{1}{2}\left(D_i\varphi \pm ϵ_{ij}\varphi \times D_j\varphi \right)^2+\frac{k^2}{1-\varphi _3^2}\left(F_{12}\pm \frac{1}{k^2}\varphi _3(1-\varphi _3^2)\right)^2\right]\pm 4\pi T$$
(26)
Equation (26) gives the Bogomol’nyi conditions
$`D_i\varphi \pm ϵ_{ij}\varphi \times D_j\varphi =0`$ (27)
$`F_{12}\pm {\displaystyle \frac{1}{k^2}}\varphi _3(1-\varphi _3^2)=0`$ (28)
which minimise the energy functional in a particular topological sector; the upper sign corresponds to a +ve and the lower sign to a -ve value of the topological charge. The equations can be handled in the usual method to show that the scale invariance is removed by the artifice of the gauge-field coupling.
We will now show the consistency of (27) and (28) using the well-known Ansatz
$`\varphi _1(r,\theta )=\mathrm{sin}F(r)\mathrm{cos}n\theta `$
$`\varphi _2(r,\theta )=\mathrm{sin}F(r)\mathrm{sin}n\theta `$
$`\varphi _3(r,\theta )=\mathrm{cos}F(r)`$
$`𝐀(r,\theta )=𝐞_\theta {\displaystyle \frac{na(r)}{r}}`$ (29)
From (7) we observe that we require the boundary condition
$$F(r)\rightarrow \pm \frac{\pi }{2}\text{ as }r\rightarrow \infty $$
(30)
and equation (22) dictates that
$$a(r)\rightarrow -1\text{ as }r\rightarrow \infty $$
(31)
Remember that equation (22) was obtained so that the solutions have finite energy. Again, for the fields to be well defined at the origin we require
$$F(r)\rightarrow 0\text{ or }\pi \text{ and }a(r)\rightarrow 0\text{ as }r\rightarrow 0$$
(32)
Substituting the Ansatz (29) into (27) and (28) we find that
$`F^{\prime }(r)=\pm {\displaystyle \frac{n(a+1)}{r}}\mathrm{sin}F`$ (33)
$`a^{\prime }(r)=-{\displaystyle \frac{r}{nk^2}}\mathrm{sin}^2F\mathrm{cos}F`$ (34)
where the upper sign holds for +ve T and the lower sign corresponds to -ve T. Equations (33) and (34) are not exactly integrable. They may be solved numerically, subject to the appropriate boundary conditions, to get the exact profiles.
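As an illustration of such a numerical treatment, the Python sketch below (our own, with $`k`$ and $`n`$ set to arbitrary values) integrates (33) and (34) with the upper sign outward from the small-r behaviour and tunes the slope of F at the origin by a crude bisection shooting until the boundary conditions at large r are approached:

```python
import numpy as np
from scipy.integrate import solve_ivp

n, k = 1, 1.0   # winding number and CS coupling (arbitrary illustration)

def rhs(r, y):
    F, a = y
    dF = n * (a + 1.0) / r * np.sin(F)                  # eq. (33), upper sign
    da = -(r / (n * k**2)) * np.sin(F)**2 * np.cos(F)   # eq. (34)
    return [dF, da]

def profile(slope, r_max=30.0, r0=1e-4):
    # Near the origin F ~ slope * r^n and a ~ 0, cf. boundary conditions (32)
    y0 = [slope * r0**n, 0.0]
    return solve_ivp(rhs, (r0, r_max), y0, rtol=1e-8)

# Bisect on the initial slope until F(r_max) approaches pi/2
lo, hi = 0.01, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    F_end = profile(mid).y[0][-1]
    lo, hi = (mid, hi) if F_end < np.pi / 2 else (lo, mid)

sol = profile(0.5 * (lo + hi))
print("F(inf) ~", sol.y[0][-1], " a(inf) ~", sol.y[1][-1])  # expect pi/2 and -1
```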
Using the Ansatz (29) we can explicitly compute the topological charge T by performing the integration in (18). The result is
$$T=-\frac{n}{2}\left[\mathrm{cos}F(\infty )-\mathrm{cos}F(0)\right]$$
(35)
So we find that, according to (30) and (32), $`T=\pm \frac{n}{2}`$, which is in agreement with our observation (20). Note that $`F(0)=0`$ corresponds to +ve T and $`F(0)=\pi `$ corresponds to -ve T. If we take +ve T, we find that F(r) bounded between 0 and $`\frac{\pi }{2}`$ is consistent with (30), (32) and (33). Again, a(r) bounded between 0 and $`-1`$ is consistent with (31), (32) and (34). Thus for +ve topological charge the Ansatz (29) with the following boundary conditions
$`F(0)=0,a(0)=0`$
$`F(\infty )={\displaystyle \frac{\pi }{2}},a(\infty )=-1`$ (36)
is consistent with the Bogomol’nyi conditions. Similarly, the consistency may be verified for -ve T.
To conclude, we find that gauging the U(1) subgroup of the nonlinear O(3) sigma model, along with the inclusion of a self-interaction potential with degenerate minima where the U(1) symmetry is spontaneously broken, provides topologically stable soliton solutions which have the desirable feature of the removal of scale invariance, with quantised energy, charge, flux and angular momentum pertaining to each topological sector, in contrast to where such solutions are infinitely degenerate. This breaking of the degeneracy is ascribed to the topology of the minima of the potential considered in our model. We have demonstrated that the theory satisfies Bogomol’nyi conditions and discussed the consistency of the solutions. Detailed calculations of the profiles are pending, along with other related issues; we propose to take up these works subsequently.
I would like to thank Dr. R. Banerjee for helpful discussions and Dr. S. Roy for his encouragement. I also thank Professor C. K. Majumdar, Director, S. N. Bose National Centre for Basic Sciences, for allowing me to use some of his institute’s facilities. Finally, my earnest thanks are due to the referee for his comments, which largely enabled me to put the work in proper perspective.
# PROGRAMS WITH STRINGENT PERFORMANCE OBJECTIVES WILL OFTEN EXHIBIT CHAOTIC BEHAVIOR
## 1 Introduction
IBM’s Deep Blue, a powerful chess playing machine consisting of two parallel-process tandem supercomputers programmed by a team of experts led by team manager C. Tan \[Hsu *et al.*, 1990; Horgan, 1996; Hsu, 1990; Slate, 1984\], played the world chess champion G. Kasparov several games in 1996 and 1997 with fairly even results. Actually, programmer Hsu’s estimate back in 1990 of the future machine’s playing strength was 4000 ELO points (chess’ rating system), far greater than Kasparov’s $`\sim `$2800 present rating. In three minutes, which is the game’s average pondering time, the machine could calculate 20 billion moves, enough for a 24-ply search and an up to 60-ply search in critical tactical lines. Since grandmasters can calculate just a few moves ahead, it seems very peculiar that a human could hold his own in the face of such an overwhelming opposition.
In this paper we are interested in a special kind of problem and the software written for it. It is the kind of problem whose software would score high in the *Stringent performance objectives* \[Abran & Robillard, 1996\] adjustment factor of the International Function Point User’s Group (IFPUG). Examples are, for instance, the control of air traffic at a busy airport, the scheduling of trains in areas with heavy traffic, and field military operations. One way of approaching this kind of problem is to treat it within the context of game theory, as a 2-player game. The first player would be the comptroller, central authority or headquarters, and the second is the system itself, which acts and reacts out of its own nature. The first player seeks to maintain control of a complicated system by choosing his moves, that is, by affecting the system in the ways available to him. He would like the system to always remain in states such that certain state variables (they could be safety, efficiency, lethality or others) are kept extremized. The performance objectives would be to extremize these state variables.
The nature of this kind of problem is such that it is necessary to see ahead what is going to happen. At least in theory, the first player must have arrived at his move only after having taken into consideration all of the possible responses of the second player. This is a common situation in game theory, and is another reason why the language of game theory is very well-suited to discuss both this kind of problem and the software developed to help deal with it. The typical program contains two fundamental sectors:
1. *a ply calculator,* that is able to look ahead at all possible continuations of the tree a certain number of plies ahead,
2. *a static evaluator,* that gives an evaluation of the resulting state of the problem at the end of each branch of plies.
Although there are many different ways of programming the ply calculator or the static evaluator, their complementary, basic functions are clear: the first is a brute force calculator of all possible states to come, and the second is an evaluator of the final resulting state of the system, intrinsically and on the basis of the state itself, without any resort to further calculation.
The different states of the problem can be seen as a function of time. If one is able to express each state using a mathematical description, the problem of the time-development of the state while maintaining certain state variables extremized can be described as an autonomous system. If the equations describing the time-development of the system are nonlinear, it is very likely that the problem is going to exhibit chaotic \[Alligood *et al.*, 1997\] behavior. Therefore, the software for these problems has an intrinsic limitation on its accuracy, even if it still may be extremely useful.
As an example we will work out the case of chess, a useful one because, while nontrivial, it is not nearly as complex as some of the other problems are. Chess’ software scores high in the *Stringent performance objectives* adjustment factor. We will prove that chess exhibits chaotic behavior in its configuration space and that this implies its static evaluators possess intrinsic limitations: there are always going to be states or positions that they will not be able to evaluate correctly. It is likely that this is precisely the explanation for the peculiar situation mentioned in the first paragraph: that a human being can hold his own at a chess game with a supercomputer. The ply calculator part of the program of the supercomputer would be tremendously effective, but the static evaluator would not be so faultless.
## 2 An abstract mathematical representation of chess
We have to describe each possible state (or position) in chess. To describe a particular state we shall use a 64 dimensional vector space, so that to each square of the board we associate a coordinate that takes a different value for each piece occupying it. A possible convention is the following:
* A value of zero for the coordinate of the dimension corresponding to a square means that there is no piece there.
* For the White pieces the convention would be: a value of 1 for the coordinate means the piece is a pawn, of 2 means it is a pawn without the right to the *en passant move,* of 3 that it is a knight, of 4 a bishop, of 5 a rook, of 6 a queen, of 7 a king, and of 8 a king without the right to castle.
* The values for the Black pieces would be the same but negative.
Let us represent the 64-component vector by the symbol $`x`$. A vector filled with the appropriate numbers can then be used to represent a particular state of the game. We shall call the 64-dimensional space consisting of all the coordinates *the configuration space C* of the game. The succeeding moves of a pure strategy can be plotted in this space, resulting in a sequence of points forming a path.
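To make the encoding concrete, a minimal Python sketch (ours; the helper name and the sample squares are illustrative only) that builds the vector $`x`$ from the convention above could read:

```python
# Illustrative encoding of a chess state as a 64-component vector,
# following the sign/value convention described above.
PIECE_VALUES = {
    "empty": 0, "pawn": 1, "pawn_no_ep": 2, "knight": 3,
    "bishop": 4, "rook": 5, "queen": 6, "king": 7, "king_no_castle": 8,
}

def encode_state(white_pieces, black_pieces):
    """Build x in R^64 from {square_index: piece_name} dicts (squares 0..63)."""
    x = [0] * 64
    for square, piece in white_pieces.items():
        x[square] = PIECE_VALUES[piece]        # White pieces: positive values
    for square, piece in black_pieces.items():
        x[square] = -PIECE_VALUES[piece]       # Black pieces: negative values
    return x

# A fragment of a position: white king on e1 (square 4), black queen on d8 (square 59)
x0 = encode_state({4: "king"}, {59: "queen"})
```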
Now we construct a function $`f:C\rightarrow C`$ that gives, for any arbitrary initial state of a chess game, the control strategy to be followed by both players. The existence of this function is assured by the Zermelo-von Neumann theorem \[von Neumann & Morgenstern, 1944\] that asserts that a finite 2-person zero-sum game of perfect information is strictly determined, or, in other words, that a pure strategy exists for it. For a given initial chess state this means that either
* White has a pure strategy that wins,
* Black has a pure strategy that wins,
* both are in possession of pure strategies that lead to a forced draw.
Consider a certain given initial state of the game where White has a pure strategy leading to a win. (The two other cases, where Black has the win or both have drawing strategies, can be dealt with similarly, and we will not treat them explicitly.) Let the initial chess state be given by the 64-component vector $`x_0`$, where we are assuming that White is winning. The states following the initial one will be denoted by $`x_n`$, where the index is the number of plies that have been played from the initial position. Thus $`x_1`$ is the position resulting from White’s first move, $`x_2`$ is the position resulting from Black’s first move, $`x_3`$ is the position resulting from White’s second move, and so on. Since White has a winning pure strategy, it is obvious that, given a certain state $`x_n`$, $`n`$ even, there must exist a vector function $`f`$ such that, if $`x_{n+1}`$ is the next state resulting from White’s winning strategy, then $`f(x_n)=x_{n+1}`$. On the other hand, if $`n`$ is odd, so that it is Black’s turn, then we define $`f`$ to be that strategy for Black that makes the game last the longest before the checkmate. Again, the pure strategy that is available to Black according to the Zermelo-von Neumann theorem allows us to define a function $`f(x_n)=x_{n+1}`$. The function $`f`$ is thus now defined for states with $`n`$ both even and odd.
The function $`f`$ allows us to define another function $`g:C\rightarrow C`$, *the control strategy vector function* \[Abramson, 1989\], defined by $`g(x_n)=f(x_n)-x_n`$. With it we can express the numerical difference between the vectors corresponding to two consecutive moves as follows:
$$g(x_n)=x_{n+1}-x_n.$$
(1)
Given any initial state $`x_0`$, this function gives us an explicit control strategy for the game from that point on.
## 3 Chaos in configuration space
A set of $`N`$ simultaneous differential equations,
$$g(x)=\frac{dx}{dt},$$
(2)
where $`t`$ is the (independent) time variable, $`xR^N`$ and the $`g`$ are known $`g:R^NR^N`$ vector functions, is called an autonomous system \[Alligood *et al.*, 1997\]. The time $`t`$ takes values in the interval $`0tT`$. Let us discretize this variable, as is often done for computational purposes \[Parker & Chua, 1989\]. We assume it takes only discrete values $`t=0,\mathrm{\Delta }t,2\mathrm{\Delta }t,\mathrm{},T`$. After an appropriate scaling of the system one can take the time steps to be precisely $`\mathrm{\Delta }t=1`$. Let the initial condition of the system be $`x(0)x_0`$, and let us define $`x(1)x_1,`$ $`x(2)x_2`$, and so on. By taking $`N=64`$ one can then rewrite (2) in a form that is identical to (1).
Nonlinear autonomous systems in several dimensions are generically chaotic, as experience shows. Is the control strategy function nonlinear? A moment’s consideration of the rules of the game tells us that the control function has to be nonlinear and that, therefore, the system described by (1) has to be chaotic.
For some kinds of chess moves the difference $`x_{n+1}-x_n`$ has a relatively large value that would correspond to a jerky motion of the system, and the question can be raised whether such a motion could really occur in an autonomous system. But the important thing to realize is that if *even* typical autonomous nonlinear systems (which possess a smooth function $`g(x)`$) show chaotic behavior, then *certainly* the system that represents chess (with a jerky control strategy function) should also show it.
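The sensitivity this implies is easy to exhibit numerically. The following Python sketch (a toy illustration of our own, not a model of chess: the update is a simple logistic map applied componentwise) shows two nearly identical 64-component states diverging under iteration:

```python
import numpy as np

def toy_step(x, r=3.9):
    """A simple nonlinear update x -> r*x*(1-x), applied componentwise."""
    return r * x * (1.0 - x)

x  = np.full(64, 0.3)          # a 64-component "state"
xp = x.copy()
xp[0] += 1e-10                 # perturb one coordinate by a tiny amount

for n in range(1, 41):
    x, xp = toy_step(x), toy_step(xp)
    if n % 10 == 0:
        print(f"step {n:2d}: separation = {np.linalg.norm(x - xp):.3e}")
# The separation grows by many orders of magnitude: nearby states diverge.
```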
The chaotic nature of the paths in configuration space has several immediate implications, but certainly one of the most interesting is the following:
###### Proposition 1
It is not possible to program a static evaluator for chess that works satisfactorily on all positions.
Proof. The point of the proposition is that a program with a good static evaluator is always going to have shortcomings: it will always evaluate incorrectly at least some positions. If one programs another static evaluator that correctly evaluates these positions, one will soon notice that there are others that the new program still cannot evaluate correctly. In the last analysis, the perfect evaluator for chess would have to be an extremely long program, and for more complex systems of this kind, an infinite one. To see that it is not possible to program a static evaluator for chess that works correctly on all positions, notice that it would have to evaluate on the basis of the state itself *without recourse to the tree.* The evaluation of the state has to be done using heuristics, that is, using rules that say how good a state is on the basis of the positions of the pieces, without calculating the tree. But this is not possible if chess is chaotic, because then we know the smallest difference between two states leads to completely diverging paths in configuration space, that is, to wholly differing states a few plies later. Therefore the heuristic rules of the static evaluator have to take into account the smallest differences between states, and the evaluators have to be long or infinite routines. Static evaluators, on the other hand, should be short programs, since they have to evaluate the states at the end of each branch of the tree.
Another interesting point is that chaos exacerbates the horizon effect \[Berliner, 1973\]. This is the problem that occurs in game programming when the computer quits the search in the middle of a critical tactical situation and thus it is likely that the heuristics return an incorrect evaluation \[Shannon, 1950\]. In a sense, what the proposition is saying is that practically all states are critical, and that the horizon effect is happening all the time and not only for some supposedly very special positions.
## 4 Comments
We have seen that it is likely that the pure strategy paths of chess in configuration space are chaotic. This implies that practical static evaluators must always evaluate incorrectly some of the states. As a result, the horizon problem is exacerbated.
The reason why a machine such as Deep Blue is not far, far stronger than a human has to do, again, with the problem of programming a static evaluator. Even though the machine searches many more plies than the human does, at the end of each branch it has to use a static evaluator that is bound to evaluate some states incorrectly. This adds an element of chance to the calculation. The fact that Deep Blue at present has a playing strength similar to the best human players tells us that the human mind has a far better static evaluator than Deep Blue (assuming one can apply these terms to the human mind). If chess were not chaotic, the overwhelming advantage in ply calculation that the machine has would allow it to play much better than any human could.
In practice, of course, as long as computers keep getting faster and having more memory available, it is always possible to keep improving the static evaluators. If computers can be programmed to learn from their experience they could improve their static evaluators themselves. This was the idea of the program for another game, link-five \[Zhou, 1993\].
Now, in a general vein, it should be clear why programs that would score high in the IFPUG’s *Stringent performance objectives* adjustment factor would tend to exhibit chaotic behavior in their configuration spaces. The software of this type of program has to foresee what is going to happen while extremizing certain state variables, as we mentioned before. This kind of problem is equivalent to an autonomous system of differential equations that exhibits chaos, so that the control strategy vector function $`g`$ of the system is extremely sensitive to the smallest differences in a state $`x`$. Thus any static evaluator that one programs (and that has to be heuristic in nature) is going to be severely limited.
Nevertheless, the consideration we made two paragraphs ago for chess is also true for this kind of problem in general: as long as computers get faster and have more memory the programs can be prepared to deal with more and more situations. Rules of thumb that humans have learned from experience can be added to the evaluators. Alternatively, the programs can be written so that they learn from their experience. But they are always going to be *very* long programs.
References
Abramson, B. \[1989\] “Control strategies for two-player games”, ACM Comp. Surveys 21, 137-161.
Abran, A. & Robillard, P. N. \[1996\] IEEE Trans. Software Eng. 22, 895-909.
Alligood, K. T., Sauer, T. D. & Yorke, J. A. \[1997\] *Chaos: An Introduction to Dynamical Systems* (New York, Springer-Verlag).
Berliner, H. J. \[1973\] “Some necessary conditions for a master chess program”, *Proceedings of the 3rd International Joint Conference on Artificial Intelligence, Stanford, CA* (Los Altos, Morgan Kaufmann), 77-85.
Horgan, J. \[1996\] “Plotting the next move”, Scient. Am. 274 no. 5, 10.
Hsu, F. \[1990\] “Large scale parallelization of alpha-beta search: an algorithmic and architectural study with computer chess”, Ph.D. thesis, Carnegie-Mellon University Computer Science Department, CMU-CS-90-108.
Hsu, F., Anantharaman, T., Campbell, M. & Nowatzyk, A. \[1990\] “A Grandmaster Chess Machine”, Scient. Am. 263 no. 4, 44-50.
von Neumann, J. & Morgenstern, O. \[1944\] *Theory of Games and Economic Behavior* (Princeton, Princeton University Press).
Parker, T. S. & Chua, L. O. \[1989\] *Practical Numerical Algorithms for Chaotic Systems* (New York, Springer-Verlag).
Shannon, C. E. \[1950\] “Programming a computer for playing chess”, Philos. Mag. 41, 256-275.
Slate, D. J. & Atkin, L. R. \[1984\] “Chess 4.5 - The Northwestern University chess program”, in *Chess Skill in Man and Machine*, ed. Frey, P. W. (New York, Springer-Verlag).
Zhou, Q. \[1993\] “A distributive model of playing the game link-five and its computer implementation”, IEEE Trans. Syst., Man, Cybern. SMC-23, 897-900.
# Non-separability without Non-separability in Nonlinear Quantum Mechanics
## 1 Introduction
Recently, we have presented the nonlinear phase modification of the Schrödinger equation called the simplest minimal phase extension (SMPE) of the Schrödinger equation, together with some one-dimensional solutions to it. The most interesting of them is a free solitonic Gaussian solution describing a quantum particle. By assuming that the self-energy term constitutes the rest mass-energy of the soliton, we obtained a model particle whose physical size, defined as the width of its Gaussian probability density, is equal to its Compton wavelength. In this approach, which we call subrelativistic, the coupling constant is inevitably a particle-dependent quantity, an intrinsic property of the particle not unlike its mass. In fact, it is a function of the mass. In a more general approach, in which no such use is made of the self-energy term, the size of the particle is determined both by its mass and by the coupling constant, the latter being entirely independent of the former.
The main purpose of this paper is to present a free $`n`$-particle solution to the modification in question describing uncorrelated solitons and to demonstrate some unusual property of it that one could call the non-separability without non-separability. As we will see, the $`n`$-particle equation can be separated into $`n`$ one-particle equations, but some residual non-separability that manifests itself in the dependence of individual particle’s wave function on the presence and properties of other particles in the system remains. In addition, we will present a solution describing $`n`$ uncorrelated particles coupled to linear harmonic oscillators and discuss the non-separability that occurs in this case which, in general, is less benign than the non-separability of free particles. It is the analysis of this case that demonstrates that a different $`n`$-particle extension from the one we concentrate on in this paper is necessary to avoid pathological situations in which the nonlocality is manifestly compromised. We suggest such an extension for factorizable wave functions.
In what follows, $`R`$ and $`S`$ denote the amplitude and the phase of the wave function $`\mathrm{\Psi }=R\mathrm{exp}(iS)`$, and $`C`$ is the only constant of the modification which does not appear in linear quantum mechanics. (We follow here the convention that treats the phase as an angle; in the more common convention $`S`$ has the dimensions of action and $`\mathrm{\Psi }=R\mathrm{exp}(iS/\hbar )`$.)
For $`n`$ particles of mass $`m_i`$, the equations of motion for the modification read
$$\hbar \frac{\partial R^2}{\partial t}+\sum _{i=1}^{n}\frac{\hbar ^2}{m_i}\vec{\nabla }_i\cdot \left(R^2\vec{\nabla }_iS\right)-2C\sum _{i=1}^{n}\mathrm{\Delta }_i\left(R^2\sum _{i=1}^{n}\mathrm{\Delta }_iS\right)=0,$$
(1)
$$\sum _{i=1}^{n}\frac{\hbar ^2}{m_i}\mathrm{\Delta }_iR-2R\hbar \frac{\partial S}{\partial t}-2RV-\sum _{i=1}^{n}\frac{\hbar ^2}{m_i}R\left(\vec{\nabla }_iS\right)^2-2CR\left(\sum _{i=1}^{n}\mathrm{\Delta }_iS\right)^2=0,$$
(2)
and the energy functional is
$$E=\int d^3x\left\{\sum _{i=1}^{n}\frac{\hbar ^2}{2m_i}\left[\left(\vec{\nabla }_iR\right)^2+R^2\left(\vec{\nabla }_iS\right)^2\right]+CR^2\left(\sum _{i=1}^{n}\mathrm{\Delta }_iS\right)^2+VR^2\right\}.$$
(3)
These can be derived from the Lagrangian density proposed in and generalized to $`n`$ particles as follows:
$$L_n(R,S)=\hbar R^2\frac{\partial S}{\partial t}+\sum _{i=1}^{n}\frac{\hbar ^2}{2m_i}\left[\left(\vec{\nabla }_iR\right)^2+R^2\left(\vec{\nabla }_iS\right)^2\right]+CR^2\left(\sum _{i=1}^{n}\mathrm{\Delta }_iS\right)^2+R^2V.$$
(4)
The energy functional is a conserved quantity for the potentials $`V`$ that do not depend explicitly on time and coincides with the quantum-mechanical energy defined as the expectation value of the Hamiltonian for this modification .
## 2 N-Particle Solution to the SMPE
As demonstrated in , the modification under discussion possesses a free solitonic solution. Its amplitude is that of a Gaussian,
$$R(x,t)=N\mathrm{exp}\left[-\frac{(x-vt)^2}{s^2}\right],$$
(5)
where $`v`$ is the speed of the particle and $`s`$ is the half-width of the Gaussian, to be determined through the coupling constant $`C`$ and other fundamental constants of the modification. The normalization constant $`N=\left(2/\pi s^2\right)^{1/4}`$. The phase of the soliton has the form
$$S(x,t)=a(x-vt)^2+bvx+c(t),$$
(6)
where $`a`$ and $`b`$ are certain constants and $`c(t)`$ is a function of time, all of which, similarly as $`v`$ and $`s`$, are to be found from the equations of motion. These equations are satisfied provided
$$b=m/\hbar ,\quad s^2=-8mC/\hbar ^2,\quad s^4a^2=1,$$
(7)
and
$$2\hbar s^4m\frac{\partial c(t)}{\partial t}+2\hbar ^2s^2+\hbar ^2s^4b^2v^2+8Ca^2s^4m=0.$$
(8)
We see that the coupling constant $`C`$ has to be negative, $`C=-|C|`$. The energy of the particle turns out to be
$$E=E_{st}+\frac{mv^2}{2},$$
(9)
where
$$E_{st}=\frac{\hbar ^2}{2ms^2}=\frac{\hbar ^4}{16m^2\left|C\right|}$$
(10)
is the stationary part of it.
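These statements can be checked by direct substitution. The following sympy sketch (our own; it sets the normalization to one, since it drops out of (1)-(2), and takes $`c(t)`$ linear in $`t`$ as (8) requires) verifies that the Gaussian (5)-(6) with the constants given by (7)-(8) solves the one-particle versions of (1) and (2) and reproduces the energy (9)-(10):

```python
import sympy as sp

x, t, v, m, hbar, Ca = sp.symbols('x t v m hbar C_a', positive=True)
C = -Ca                          # the coupling constant is negative, C = -|C|
u = x - v*t
s2 = 8*m*Ca/hbar**2              # s^2 = -8mC/hbar^2, eq. (7)
a, b = 1/s2, m/hbar              # s^4 a^2 = 1 and b = m/hbar, eq. (7)
cdot = -(hbar**2/(m*s2) + m*v**2/2 + 4*C*a**2)/hbar   # solved from eq. (8)

R = sp.exp(-u**2/s2)
S = a*u**2 + b*v*x + cdot*t

eq1 = hbar*sp.diff(R**2, t) + (hbar**2/m)*sp.diff(R**2*sp.diff(S, x), x) \
      - 2*C*sp.diff(R**2*sp.diff(S, x, 2), x, 2)
eq2 = (hbar**2/m)*sp.diff(R, x, 2) - 2*R*hbar*sp.diff(S, t) \
      - (hbar**2/m)*R*sp.diff(S, x)**2 - 2*C*R*sp.diff(S, x, 2)**2

print(sp.simplify(eq1), sp.simplify(eq2))   # both print 0
E = -hbar*cdot                   # E = -hbar<dS/dt>, since <x - vt> = 0
print(sp.simplify(E - (m*v**2/2 + hbar**4/(16*m**2*Ca))))   # prints 0
```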
Before we present the solitonic solution for $`n`$ uncorrelated solitons that generalizes the above one, let us first mention the existence of a two-particle solution that differs from this $`n`$-soliton solution when the latter is specialized to the two-particle case. This is so because the solution in question is not factorizable in $`x`$ and $`y`$. If we assume that the particles have the same mass $`m`$, their “entangled” configuration is given by the amplitude and the phase of the wave function as follows:
$$R(x,y,t)=N_2\mathrm{exp}\left[-\frac{(x-y-vt)^2}{s^2}\right]\mathrm{exp}\left[-\frac{(x+y-vt)^2}{s^2}\right],$$
(11)
$$S(x,y,t)=a\left[(x-y-vt)^2+(x+y-vt)^2\right]+v\left[b_{-}(x-y)+b_{+}(x+y)\right]+c(t),$$
(12)
where $`s^2=-32Cm/\hbar ^2`$, $`a^2=1/s^4`$ and $`b_{-}=-b_{+}=m/2\hbar `$, in accord with the upper/lower signs in these formulas. We note that the term $`\left[(x-y-vt)^2+(x+y-vt)^2\right]/s^2`$ that appears in the amplitude cannot be obtained as a linear combination of $`(x-vt)^2/s^2`$ and $`(y-vt)^2/s^2`$, which we will use to build the amplitude for two particles in the general case of uncorrelated solitons. (On the other hand, the amplitudes containing only $`(x-y\pm vt)^2/s^2`$ or $`(x+y\pm vt)^2/s^2`$ would give rise to wave functions that are not normalizable. This confirms that the solution (11-12) cannot be factorized in $`x`$ and $`y`$.) The energy of this configuration is $`E=mv^2/2+\hbar ^4/16m^2\left|C\right|`$, with the kinetic energy being, surprisingly enough, just as for one particle. We did not succeed in finding similar solutions in the presence of potentials. The most obvious choice, the harmonic oscillator potential, does not support solutions of this type. It does, however, support the solutions that we will discuss now.
To find the already mentioned $`n`$-soliton solution, we will proceed in the same manner as for the one particle solution. To begin with, we assume that the solitons are one-dimensional, but we will remove this condition later on. The amplitude for the system of $`n`$ such solitons is chosen by analogy to (5) as
$$R_n(x_1,x_2,\dots ,x_n,t)=N_n\prod _{i=1}^{n}\mathrm{exp}\left[-\frac{(x_i-v_it)^2}{s_i^2}\right],$$
(13)
and its phase is taken in the form
$$S_n(x_1,x_2,\dots ,x_n,t)=\sum _{i=1}^{n}\left[a_i(x_i-v_it)^2+b_iv_ix_i\right]+c(t).$$
(14)
The energy equation (2) is separable if and only if
$$s_i^4=\frac{1}{a_i^2},$$
(15)
and
$$b_i=\frac{m_i}{\hbar },$$
(16)
whereas the continuity equation (1) requires that
$$s_i^2=-\frac{8m_iC\left(\sum _ja_j\right)}{a_i\hbar ^2},$$
(17)
and
$$s_i^2a_im_j=s_j^2a_jm_i.$$
(18)
Even if the last formula is valid for any $`i`$ and $`j`$, it is trivial unless $`i\neq j`$. This comment applies also to other relationships that, like (18), contain $`i`$ and $`j`$ and that we will deal with throughout the rest of this paper. Now, (15) and (18) lead to a consistency condition implying that $`m_1=m_2=\cdots =m_n=m`$. This is somewhat reminiscent of superselection rules. However, whereas the superselection rules stem from some kinematical or dynamical symmetries, the rule in question is the consequence of the assumption that particles are uncorrelated and separable, and it forbids free particles of different masses to belong to the same quantum-mechanical Universe. Thus, this rule is much stronger than the superselection rules of linear quantum mechanics that merely exclude certain types of superpositions, such as between particles of different mass or between different species of particles as, for instance, bosons and fermions. The latter exclusion is called the univalence superselection rule. We suggest that this new type of rules be called supersuperselection rules (or simply $`S^2S`$ rules). The mass, the univalence, and the particle number along with other superselection rules are believed to be firmly established in linear quantum mechanics, although objections against some of them have been raised in the literature.
Moreover, (17) entails that all $`a_i`$’s have to be of the same sign or at least one of the $`s_i^2`$’s will be negative, and that $`C=-|C|`$. Combining (17) and (15) produces another consistency condition,
$$\sum _{i=1}^{n}a_i=\pm \frac{\hbar ^2}{8m|C|},$$
(19)
with the sign of the RHS of (19) depending on the sign of $`a_i`$’s, but otherwise $`a_i`$’s being unspecified. It is also a straightforward implication of (17) that
$$\sum _{i=1}^{n}\frac{1}{s_i^2}=\frac{\hbar ^2}{8m|C|}$$
(20)
which has important consequences for the shape of the probability density of a three-dimensional particle, as we will see later on.
The energy equation (2) together with (15-18) gives
$$\hbar \frac{\partial c(t)}{\partial t}=-\frac{1}{2}\sum _{i=1}^{n}mv_i^2-\frac{\hbar ^2\left(\sum _ja_j\right)}{2ms_i^2a_i}=-\frac{1}{2}m\sum _{i=1}^{n}v_i^2-\frac{1}{n}\sum _{i=1}^{n}\frac{\hbar ^2\left(\sum _ja_j\right)}{2ms_i^2a_i}.$$
(21)
Now, as seen from (2-3), the energy $`E=-\hbar \left<\partial S/\partial t\right>`$, where $`\left<\ \right>`$ denotes the expectation value of the quantity embraced. Therefore, we find that for $`R_n`$ normalized to unity,
$$E=\frac{1}{2}\sum _{i=1}^{n}mv_i^2+\frac{1}{n}\sum _{i=1}^{n}\frac{\hbar ^2\left(\sum _ja_j\right)}{2ms_i^2a_i}=E_{kin}+\frac{\hbar ^4}{16m^2|C|}.$$
(22)
What we have arrived at is quite remarkable: the system consisting of $`n`$ objects, in principle different, even if of the same mass, seemingly weakly non-separable due to the $`\left(\mathrm{\Delta }S\right)^2`$-coupling between them turns out to be separable and the energy of the system is just the sum of the kinetic energy of its constituents and some constant self-energy term. In the subrelativistic approach, this constant term is identified with the rest mass-energy of the system of $`n`$ particles equal $`nmc^2`$. In a more general nonrelativistic scheme, the term in question represents the internal energy of this system not necessarily equal to its rest mass-energy. In either approach though, this term is, surprisingly enough, independent of the number of particles; it is some constant, the same for one particle and for a gazillion of them.
For identical particles the above formulas become somewhat simpler. For instance, (17) turns into
$$s^2=\frac{8nm|C|}{\hbar ^2},$$
(23)
and since $`M=nm`$ is the total mass of the Universe filled with the identical solitonic particles, we see that the size of the particle, a local quantity, is determined by the mass of the entire Universe! One can view it as a truly Machian effect. To recall, Mach ca. 1883 conjectured that the inertia of an object, a local property, is due to the rest of the Universe, the foremost global object. This came to be known as Mach’s principle, and the situations where a local physical quantity is related to some global one can be thought of as manifestations of the generalized Mach principle. As we see, in the case under discussion, the physical size of one particle defined as the width of its Gaussian probability density,
$$L_{ph}=\sqrt{2}s=\frac{4\sqrt{nm|C|}}{\hbar }=\frac{4\sqrt{M|C|}}{\hbar },$$
(24)
is affected by the presence of other particles even though they remain uncorrelated, and its square is directly proportional to the mass of the Universe. In reality, a free single particle is never alone; it is part of a larger entity, the Universe. Assuming that the Universe consists of $`n`$ such non-interacting particles, the formalism of linear quantum mechanics allows us to analyze the equation for the single particle independently of the others, since the Schrödinger equation for the entire Universe can be separated into $`n`$ equations for each of its constituents. There are no side effects to this formal procedure. A similar procedure, as just demonstrated, can be applied to the case of $`n`$ particles in the modification under study. Yet, in this case the presence of other particles affects the physical size of each separate particle. In the limit of large $`n`$, their size would become macroscopic, even gigantic. Experiment, however, suggests that quantum particles do not come in sizes greater than some $`l_{\mathrm{max}}`$ that is assumed to be microscopic. To meet this experimental constraint, $`|C|`$ should not be greater than $`l_{\mathrm{max}}^2\hbar ^2/M`$, where $`M`$ stands for the total mass of identical particles in a model Universe consisting of only one kind of particles. As already observed, free particles of different masses constitute separate Universes in our scheme. Consequently, if one assumes that there is a soliton associated with every free quantum system in this Universe and that their number is large, but their size is small, as it is in the actual Universe we live in where microscopic quantum particles abound, then this constant becomes unimaginably small and the theory becomes “almost linear.” Nevertheless, the solitons persist. On the other hand, if we could somehow determine (the upper bound on) the coupling constant $`C`$, then knowing the size of one particle (say $`s`$) we would be able to tell (the lower bound on) the mass of the model Universe of our modification!
Let us now discuss the case of $`n`$ one-dimensional particles coupled to harmonic oscillators. The wave function for this system of particles is taken in the form of (13-14) and the potential is assumed to be
$$V(x_1,x_2,\dots ,x_n,t)=\sum _{i=1}^{n}k_i(x_i-v_it)^2=\frac{1}{2}\sum _{i=1}^{n}m_i\omega _i^2(x_i-v_it)^2.$$
(25)
The energy equation (2) is separable if and only if
$$b_i=m_i/\hbar $$
(26)
and
$$s_i^4=\frac{2\hbar ^2}{2a_i^2\hbar ^2+\frac{1}{2}m_i^2\omega _i^2}.$$
(27)
The continuity equation (1) is soluble for each $`x_i`$ separately provided
$$s_i^2=-\frac{8m_iC\left(\sum _ja_j\right)}{a_i\hbar ^2}$$
(28)
and
$$s_i^2a_im_j=s_j^2a_jm_i$$
(29)
for any $`i`$ and $`j`$. Equation (28) implies that either all $`a_j`$ are positive or negative whereas equations (27) and (29) lead to the first consistency condition,
$$\left(\frac{s_i}{s_j}\right)^4=\frac{2\hbar ^2a_j^2+\frac{1}{2}m_j^2\omega _j^2}{2\hbar ^2a_i^2+\frac{1}{2}m_i^2\omega _i^2}=\left(\frac{m_i}{m_j}\frac{a_j}{a_i}\right)^2,$$
(30)
valid for any $`i`$ and $`j`$ which in the free particle case produces the condition that the masses of particles must be equal. Now, however, this condition does not have to be met, which, in other words, means that the potential can remove the degeneration in the mass. This last formula can be transformed into
$$\left[\left(\frac{m_i}{m_j}\right)^2-1\right]=\frac{m_j^2\omega _j^2}{4\hbar ^2a_j^2}\left[1-\left(\frac{m_i}{m_j}\right)^4\left(\frac{a_j}{a_i}\right)^2\frac{\omega _i^2}{\omega _j^2}\right].$$
(31)
This constitutes the equation from which to determine $`a_j^2`$ as a function of the other parameters. In what follows, we will consider only the simplest case that corresponds to $`m_i=m_j=m`$ for any $`i`$ and $`j`$. Then
$$\frac{a_j}{a_i}=\pm \frac{\omega _j}{\omega _i}$$
(32)
for arbitrary $`i`$ and $`j`$. Consequently,
$$s_i^2=-\frac{8Cm\left(\sum _j\omega _j\right)}{\hbar ^2\omega _i},$$
(33)
where $`C=-|C|`$. The last formula and (27) lead to another consistency condition,
$$a_1^2=\omega _1^2\left[\frac{\hbar ^4}{64C^2m^2\left(\sum _j\omega _j\right)^2}-\left(\frac{m}{2\hbar }\right)^2\right]$$
(34)
which essentially is an equation for $`a_1`$ in terms of the parameters of the system. Knowing the value of $`a_1`$ one can determine other $`a_i`$’s from (32). Moreover, since $`a_1^2>0`$ (the case of $`a_1^2=0`$ corresponds to linear quantum mechanics), we obtain that
$$\sum _{i=1}^{n}\omega _i<\frac{\hbar ^3}{4|C|m^2}.$$
(35)
This formula replaces (19) valid for free particles, but (20) still holds in this case. Now, the energy equation (2) yields
$$\hbar \frac{\partial c(t)}{\partial t}=-\frac{1}{2}\sum _{i=1}^{n}mv_i^2-\frac{\hbar ^2\left(\sum _ja_j\right)}{2ms_i^2a_i}-\frac{1}{2}\sum _{i=1}^{n}m\omega _i^2s_i^2$$
(36)
so that the total energy of the system is
$$E=-\hbar \left<\frac{\partial S}{\partial t}\right>=E_{kin}+\frac{\hbar ^4}{16m^2|C|}+\frac{4|C|m^2}{\hbar ^2}\left(\sum _{i=1}^{n}\omega _i\right)^2<E_{kin}+\frac{5\hbar ^4}{16m^2|C|}.$$
(37)
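The numerical bound quoted in (37) follows in one line from (35):
$$\frac{4|C|m^2}{\hbar ^2}\left(\sum _{i=1}^{n}\omega _i\right)^2<\frac{4|C|m^2}{\hbar ^2}\cdot \frac{\hbar ^6}{16C^2m^4}=\frac{4\hbar ^4}{16m^2|C|},$$
so that, together with the second term, the two constant contributions stay below $`5\hbar ^4/16m^2|C|`$.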
We observe that the wave function of one particle is affected not only by the frequency of the oscillator it is coupled to but also by the frequencies of other oscillators. One can thus wonder if this violates causality. One can imagine that, for simplicity, in a system of two particles coupled to oscillators of different frequencies changing the frequency of one of them will affect the probability density of the particle coupled to the other one. The change in question means a global change in the potential of one of the particles as the potential is affected in its entire domain. In this sense, this differs from a telegraph that employs spins on which operations are local. Nevertheless, in a system like that one can transmit information instantaneously from one observer to another one separated by an arbitrary distance and thus the causality in this system can indeed be violated. The energy changes in this process, but one cannot tell whether it is transmitted or not for it is not localized.
This is not necessarily unexpected for, as already pointed out in , the equations of motion are not separable in the weak sense that assumes a factorizable wave function for a compound system. On the other hand, the discussed model constitutes an elementary example of a causality violation mechanism in nonlinear quantum mechanics which does not involve the spin or any complex reasoning. If only because of that, it may deserve some attention. It should also be noted that situations like that are rather generic in nonlinear systems. Nevertheless, the consensus is that situations of this kind are unphysical and because of that they are dismissed.
The problem in question can, in fact, be alleviated within the theory itself by selecting the parameters of the solution in such a way that the demonstrated causality violation does not occur. One does that by assuming that all $`a_i`$’s are equal. As a result, (30) implies now that
$$\omega _j^2=\left(\frac{m_1}{m_j}\right)^4\omega _1^2+2\hbar ^2a^2\left[\left(\frac{m_1}{m_j}\right)^2-1\right],$$
(38)
whereas (27) and (28) yield that
$$\omega _i^2=\frac{4\hbar ^6}{64m_i^4C^2n^2}-\frac{4a^2\hbar ^2}{m_i^2}.$$
(39)
Applying the last formula for $`i=1`$ and using it in (38) gives
$$\omega _j^2=\frac{4\hbar ^6}{64m_j^4C^2n^2}-\frac{4m_1^2a^2\hbar ^2}{m_j^4}+2\hbar ^2a^2\left[\left(\frac{m_1}{m_j}\right)^2-1\right]$$
(40)
which is equal to (39) specialized for $`i=j`$ only if $`m_j=m_1`$. In view of the generality of this argument one is led to the conclusion that all the masses $`m_i`$ must be the same and so the particles-oscillators are identical. The parameter $`a`$ is determined by the frequency $`\omega `$ and other constants,
$$a^2=\frac{\hbar ^4}{64m^2C^2n^2}-\frac{m^2\omega ^2}{4\hbar ^2}.$$
(41)
As seen from the last formula or (35), the frequency, which is the same for all oscillators, has to satisfy the condition
$$\omega <\frac{\hbar ^3}{4|C|nm^2}$$
(42)
corresponding to $`a^2>0`$. The energy of the field of oscillators is now
$$E=E_{kin}+\frac{\hbar ^4}{16m^2|C|}+\frac{4|C|m^2n^2}{\hbar ^2}\omega ^2<E_{kin}+\frac{5\hbar ^4}{16m^2|C|}.$$
(43)
The picture that emerges from this is that of a field of $`n`$ identical particles oscillating in unison. Changing the frequency of a single oscillator is possible only if the same change takes place in the entire field. This can happen instantaneously but does not lead to any quantum-mechanical violation of causality because the probability density of the particles is not changed in the process, as opposed to their phase. The situation in question is somewhat similar to the EPR phenomenon in that the observer measuring or changing the frequency of one oscillator knows what the frequency of other oscillators is going to be at the very same moment. However, unlike in the standard EPR experiment, the nonlocal impact of measurement is not confined to just one particle but has a global sweep. Moreover, the particles are not entangled as is the case for the EPR type of correlations. Yet, by changing the frequency of her oscillator, one observer can change the frequency of all the other oscillators in the Universe. If there are any observers associated with them, they will notice this change. It is obvious that one can instantaneously transmit information in this way, although it is not clear if the same is true about the energy for the latter is not localized. It does change though in this process.
It should be stressed that the discussed solitonic solution does not invalidate nor replaces the multi-oscillator solutions of linear quantum mechanics. Such solutions still exist in the modification under study. However, if the wave function of the Universe is to be factorizable, the oscillators of linear theory cannot share this Universe with their solitonic brethren. A wave function that would accomodate both kinds of oscillators is not supported by the equations of motion as seen from (29). This relation cannot be satisfied if one of $`a_i`$’s is zero which would correspond to a “linear” quantum oscillator. Either all $`a_i`$’s must be zero or none. As a matter of fact, even the free solitons and the solitons coupled to harmonic oscillators cannot belong to the same Universe if they are to be uncorrelated. Let us now demonstrate this fact. Without any loss of generality, we can assume that the Universe contains only one free particle, all others being oscillators. Putting $`\omega _1=\omega _i=0`$ in (30) and solving it for $`a_j`$ gives
$$a_j=\pm \frac{m_j\omega _j}{2\hbar \sqrt{1-\eta ^2}},$$
(44)
where we need to assume that $`\eta =m_1/m_j<1`$. $`\eta `$ equal to 1 implies that $`\omega _j=0`$ and so is not interesting, but this observation will prove useful later on. Inserting this into (27) leads to
$$s_j^4=\frac{4\hbar ^2\left[1-\eta ^2\right]}{m_j^2\omega _j^2\left[2-\eta ^2\right]}.$$
(45)
Now, from (27) and (28) applied to the free particle we obtain that
$$1=\left(s_1^2a_1\right)^2=\left(\frac{8m_1|C|}{\hbar ^2}\right)^2\left(\sum _ja_j\right)^2.$$
(46)
Applying (28) to the $`j`$-th oscillator and using (46) and (44) yields
$$s_j^4=\left(\frac{8m_j|C|}{a_j\hbar ^2}\right)^2\left(\sum _ja_j\right)^2=\frac{m_j^2}{m_1^2a_j^2}=\frac{4\hbar ^2\left[1-\eta ^2\right]}{m_1^2\omega _j^2}.$$
(47)
If (45) and (47) are to be equal, $`\eta `$ must be equal to 1, which means that the $`j`$-th particle is also free. Therefore, we conclude that the factorizable wave function of the Universe can describe either only the free or only the coupled solitons, but not both of those species together.
Finally, following the scheme for the $`n`$ one-dimensional particles, it is straightforward to find a $`d`$-dimensional one-particle solitonic solution. The wave function of such a particle is given by (13-14) with $`n`$ replaced by $`d`$. The energy of a single $`d`$-dimensional quanton is
$$E=\frac{1}{2}\sum _{i=1}^{d}mv_i^2+\frac{\hbar ^2}{2s^2}d=E_{kin}+\frac{\hbar ^4}{16m^2|C|},$$
(48)
where
$$s^2=\frac{8d|C|m}{\hbar ^2}.$$
(49)
The last formula assumes that the particle is spherical. One can now treat an even more general case of $`n`$ particles in $`d`$ dimensions, which removes the assumption of one-dimensional particles. Instead of (49) we can also use (20) to obtain a greater variety of shapes for one three-dimensional particle. For instance, if, say, $`s_1`$ and $`s_2`$ are much greater than $`s_3`$, we obtain a sheet-like configuration extended in the $`x_1`$-$`x_2`$ plane, while for $`s_1`$ and $`s_2`$ being small but of the same order and $`s_3`$ being much greater we obtain a string-like configuration extended along the direction of $`x_3`$. Even if these configurations have different shapes, they are topologically equivalent to a three-dimensional spherical particle. Moreover, they all have the same energy.
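A quick numerical illustration of the constraint (20) (our own, in units where $`\hbar ^2/8m|C|=1`$) shows how fixing two of the widths determines the third and hence the shape:

```python
# Constraint (20): 1/s1^2 + 1/s2^2 + 1/s3^2 = hbar^2/(8 m |C|) = 1 in our units
def third_width(s1, s2):
    inv = 1.0 - 1.0/s1**2 - 1.0/s2**2
    if inv <= 0:
        raise ValueError("no real s3 for these widths")
    return inv ** -0.5

print(third_width(10.0, 10.0))  # ~1.01: sheet-like, extended in the x1-x2 plane
print(third_width(1.5, 1.5))    # ~3.0 : string-like, extended along x3
```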
Let us also note that if the multi-particle equations are derived from the Lagrangian
$$L_n(R,S)=\hbar R^2\frac{\partial S}{\partial t}+\sum _{i=1}^n\frac{\hbar ^2}{2m_i}\left[\left(\vec{\nabla }_iR\right)^2+R^2\left(\vec{\nabla }_iS\right)^2\right]+CR^2\sum _{i=1}^n\left(\mathrm{\Delta }_iS\right)^2+R^2V,$$
(50)
they are weakly separable and do not lead to any strange nonlocal effects in the systems we examined above. In particular, one can assume here that $`C=C_i`$, i.e., that the coupling constant is particle-dependent, just as the mass is. For $`n=1`$, the Lagrangian in question reduces to the same form as the Lagrangian of (4), but in the multi-particle case these Lagrangians describe different theories. Nonlocal effects may, however, appear when nonfactorizable wave functions are considered.
## 3 Conclusions
We have presented the $`n`$-particle free solitonic solution and the solution describing $`n`$ particles coupled to harmonic oscillators in the nonlinear modification of the Schrödinger equation put forward recently. In addition, a particular solution for two free solitons of the same mass that does not stem from this general scheme has also been presented. This solution represents two particles whose compound wave function does not factorize and whose kinetic energy is, surprisingly enough, the same as the kinetic energy of one particle.
The most remarkable property of the free $`n`$-particle solution is the remnant inseparability that manifests itself in the wave function of individual particles depending on the properties of other particles of the system. In the simplest case of identical particles, it is the total number of particles that appears in the wave function of an individual particle. As a result, the width of each particle’s Gaussian probability density, a local property, is directly linked to the mass of the entire Universe in a very Machian manner. Nevertheless, they all evolve independently. In the case of solitons coupled to harmonic oscillators the probability density of one particle is affected not only by the frequency of its own oscillator, but by the frequencies of other oscillators as well. This leads to a causality violation, as any change in the frequency of one of the oscillators is instantaneously detected by the observers related to other oscillators. By measuring the probability density of the particles coupled to their oscillators, they realize that someone is “messing” with these particles. We have shown, however, that it is possible to avoid this particular problem by an appropriate choice of the parameters of the solution. Namely, if the ratio $`a_i/a_j`$ is one, the nonseparability becomes somewhat similar to the EPR kind, but without the entanglement typical for the EPR correlations. The probability density of a single oscillator is then not related to what is happening to the other oscillators, but they all have to oscillate with the same frequency, and so in this sense they remain correlated. It is possible to transmit information in this way, just by changing the frequency of one oscillator, exactly as in the more generic case. The simplest way to avoid the discussed nonlocality problems is to adopt a weakly separable multi-particle extension of the basic one-particle nonlinear equation, although this does not guarantee that similar problems will not occur for nonfactorizable wave functions.
It would be interesting to see how one can circumvent this problem in the strong separability framework advocated by Czachor that originated in the work of Polchinski and Jordan. This approach chooses as its starting point the nonlinear von Neumann equation for the density matrices and proceeds from there to the $`n`$-particle extension. In a sense, one can call this approach “effective” as opposed to the “fundamentalist” one. The latter approach does not resort to reformulating, in terms of density matrices, the various nonlinear modifications of the Schrödinger equation proposed in the literature that, like the linear equation itself, are postulated to be obeyed by pure states. The problem of nonlocality is confounded by the fact that these approaches, even though they belong to the same strong separability framework, yield different results. For instance, the Białynicki-Birula and Mycielski modification, which is both weakly and strongly separable in the effective approach, turns out to be essentially nonlocal in the fundamentalist approach for a general nonfactorizable two-particle wave function, as recently demonstrated by Lücke. The same applies to the Doebner-Goldin modification, which, while weakly separable, and strongly so in the effective approach, fails to maintain the separability for a more general nonfactorizable wave function describing a system of two particles if the fundamentalist approach is used. Only in a special case, corresponding to the linearizable Doebner-Goldin equation, are the particles separable in the latter approach.
One of the consequences of the discussed residual inseparability of free particles is the smallness of the coupling constant $`C`$. In the simplest case of identical particles, the width of the probability density of a single particle is proportional to the total number of particles and the constant in question. In the limit of large $`n`$, which is the most natural to consider if one assumes that every quantum system is associated with a particle, this width can be macroscopic, if not gigantic, unless $`C`$ is really very small. Since the experiments do not support the view that quantum particles are of macroscopic size, the nonlinear coupling constant must indeed be very small. One needs to realize though that this is only a model which rests on the assumptions that there exists a particle associated with every quantum system, that quantum systems are microscopic, and that all of them have the same mass. None of these assumptions has to be true, but if there existed a Universe where these assumptions were correct, then the coupling constant of the discussed nonlinear extension of the Schrödinger equation would be incredibly small.
A novel property that also emerged from the free multi-particle solution is an exclusion rule that forbids particles of different masses to be part of the same Universe if they are to be uncorrelated and separated. The same holds true for the solitonic oscillators of common $`a`$ which are supposed to share the frequency. This is somewhat characteristic of superselection rules that stem from symmetry considerations, but is much stronger in its implications, for the standard superselection rule applicable in this case only forbids superpositions of quantum mechanical systems of different masses. We propose that rules of this type be called supersuperselection rules. Similar to it is the univalence supersuperselection rule that, under the same conditions, prevents different species of particles, such as the free solitons and the solitonic oscillators in our case, from belonging to the same Universe. The same rule applies also to the oscillators of linear quantum mechanics and the “nonlinear” oscillators of the modified Schrödinger equation. Like the nonlocal effects, the rules in question are native to only one of the discussed $`n`$-particle extensions; they do not emerge if the multi-particle equations are weakly separable.
## Acknowledgments
I would like to thank Professor P. O. Mazur for bringing my attention to the work of Professor Staruszkiewicz that started my interest in nonlinear modifications of the Schrödinger equation. A correspondence with Dr. M. Czachor and his comments on the preliminary versions of this paper are gratefully acknowledged as is a correspondence with Professor Wolfgang Lücke concerning his recent work. This work was partially supported by the NSF grant No. 13020 F167 and the ONR grant R&T No. 3124141.
# Anomalous spatio-temporal chaos in a two-dimensional system of non-locally coupled oscillators
## I Introduction
Assemblies of dynamical elements coupled with each other are widely seen in nature. Simplified models of such systems, e.g. coupled limit cycle oscillators or chaotic maps, have played important roles not only in modeling such systems realistically, but also in understanding the varieties of possible behavior of systems far from equilibrium. Many important concepts, such as pattern formation or spatio-temporal chaos, have been extracted from the detailed studies on such models.
The interaction between the elements is usually assumed to be attractive and of mean-field type in a wide sense: each element feels the mean amplitude of its neighboring elements and is driven by the difference between its amplitude and the mean amplitude, in such a way that the amplitude differences between the elements decrease.
The diffusive coupling is a representative limiting case. Each element interacts strongly with its nearest neighbors, so that the amplitude field of the system is always continuous and smooth. It is well known that some diffusively coupled systems of dynamical elements, such as the complex Ginzburg-Landau equation, exhibit spatio-temporal chaos.
The opposite limiting case is the global coupling, or the mean field coupling in the narrow sense. Each element feels the mean field of the entire system, and is thus coupled to all the elements with equal strength. The amplitude field becomes statistically spatially homogeneous, and the notion of space is lost. It is known that systems with global coupling generally show some typical behavior, e.g. clustering and collective chaos.
In Ref., Kuramoto introduced an intermediate system between the above two limiting cases, namely a system of non-locally coupled elements. Our subsequent numerical simulations of one-dimensional non-locally coupled systems with various elements revealed that such systems generally exhibit anomalous spatio-temporal chaotic behavior, which cannot be seen in the above two limiting cases. In this chaotic regime, the amplitude field becomes fractal, and the spatial correlation of the amplitude field shows power-law behavior on small scales. Furthermore, the fractal dimension and the exponent of the spatial correlation vary continuously with the coupling strength. Later, we developed a theory that can explain the fractality of the amplitude field and the power-law behavior of the spatial correlation based on a simple multiplicative stochastic model. Such a model is frequently employed in describing the noisy on-off intermittency phenomena found in many physical systems, and this implies that our system should also exhibit this type of temporal intermittency. Induced by the temporal intermittency, our system is also spatially intermittent. In order to study this, we expanded our analysis of the amplitude field into more general $`q`$-th structure functions, and found that the amplitude field shows multi-affinity. We also introduced multi-fractal analysis of the difference field of the original amplitude field, inspired by its seeming similarity to the intermittent energy dissipation field in fluid turbulence.
All our previous studies have been done in one-dimensional systems. However, our previous theory does not require the systems to be one-dimensional, and spatio-temporal chaos with power-law structure functions is also expected in higher dimensions. In this paper, we study a system of non-locally coupled complex Ginzburg-Landau oscillators in two dimensions for the first time, and investigate its anomalous spatio-temporal chaotic regime. Some attention is paid to the multi-scaling properties of the intermittent amplitude and difference fields.
## II Model
As proposed by Kuramoto, the non-local coupling naturally appears in the following plausible situation. Consider an assembly of spatially distributed dynamical elements, e.g. cells. Each element is assumed to interact indirectly with other elements through some (e.g. chemical) substance, which diffuses and decays much faster than the dynamics of the element. Such a situation will be described by the following set of equations:
$`\dot{𝑿}(𝒓,t)`$ $`=`$ $`𝑭(𝑿(𝒓,t))+𝐊𝑨(𝒓,t),`$ (1)
$`ϵ\dot{𝑨}(𝒓,t)`$ $`=`$ $`-\eta 𝑨(𝒓,t)+D\nabla ^2𝑨(𝒓,t)+𝑿(𝒓,t),`$ (2)
where $`𝑿`$ is the amplitude of the element, $`𝑭`$ is the dynamics of the amplitude, and $`𝑨`$ is the concentration of the substance with decay rate $`\eta `$ and diffusion rate $`D`$. The substance $`𝑨`$ is generated at a rate proportional to the amplitude $`𝑿`$, and the amplitude $`𝑿`$ is affected by the substance $`𝑨`$ with a coupling matrix $`𝐊`$. The parameter $`ϵ`$ determines the ratio of time scale of the elements to that of the substance, and is assumed to be very small. Namely, the dynamics of the substance is much faster than that of the elements.
Now, let us consider the $`ϵ\to 0`$ limit and eliminate the dynamics of $`𝑨`$ adiabatically. Setting the left-hand side of Eq.(2) to $`0`$, we can solve the equation for $`𝑨`$ as
$$𝑨(𝒓,t)=\int d𝒓^{}G(𝒓^{}-𝒓)𝑿(𝒓^{},t),$$
(3)
where $`G(𝒓^{}-𝒓)`$ is a kernel that satisfies
$$(\eta -D\nabla ^2)G(𝒓^{}-𝒓)=\delta (𝒓^{}-𝒓).$$
(4)
By inserting Eq.(3) into Eq.(1), we obtain the following system of non-locally coupled dynamical elements:
$$\dot{𝑿}(𝒓,t)=𝑭(𝑿(𝒓,t))+𝐊\int d𝒓^{}G(𝒓^{}-𝒓)𝑿(𝒓^{},t).$$
(5)
The kernel $`G(𝒓^{}-𝒓)`$ can be solved as
$$G(𝒓^{}-𝒓)=\frac{1}{(2\pi )^d}\int d^d𝒒\frac{\mathrm{exp}[i𝒒\cdot (𝒓^{}-𝒓)]}{\eta +D|𝒒|^2},$$
(6)
where $`d`$ is the dimension of the space. When the system is isotropic, the kernel $`G`$ becomes a function of the distance $`r:=|𝒓^{}-𝒓|`$, and is expressed as
$`G(r)`$ $`\propto \mathrm{exp}(-\gamma |r|)`$ $`\text{d=1},`$ (7)
$`\propto K_0(\gamma |r|)`$ $`\text{d=2},`$ (8)
$`\propto {\displaystyle \frac{\mathrm{exp}(-\gamma |r|)}{\gamma |r|}}`$ $`\text{d=3},`$ (9)
where $`K_0`$ is the modified Bessel function. The constant $`\gamma `$ gives the inverse of the coupling length, and is calculated as
$$\gamma =\sqrt{\frac{\eta }{D}}.$$
(10)
Each $`G(r)`$ must satisfy the normalization condition $`\int G(r)d^d𝒓=1`$. Since we treat a two-dimensional system, we use Eq.(8) for $`G(r)`$ hereafter.
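For orientation, working out the normalization condition for $`d=2`$ gives the prefactor $`G(r)=(\gamma ^2/2\pi )K_0(\gamma r)`$; this prefactor is our inference from the normalization condition rather than a formula quoted above. A small numerical sketch of the check:

```python
# A sketch checking the normalization of the d=2 kernel,
# G(r) = (gamma^2 / 2 pi) K_0(gamma r), i.e. that \int G(r) d^2r = 1.
import numpy as np
from scipy.special import k0
from scipy.integrate import quad

gamma = 8.0  # inverse coupling length, gamma^{-1} = 1/8 as in the simulations

def G(r):
    return gamma**2 / (2*np.pi) * k0(gamma*r)

# integrate over the plane in polar coordinates: \int_0^inf G(r) 2 pi r dr
norm, _ = quad(lambda r: G(r) * 2*np.pi*r, 0, np.inf)
print(norm)  # ~1.0
```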
As elements, we use complex Ginzburg-Landau oscillators. They are the simplest limit cycle oscillators that can be derived by the center-manifold reduction technique from generic oscillators near their Hopf bifurcation points. The corresponding non-locally coupled system is given by the following equation for the complex amplitude $`W`$:
$$\dot{W}(𝒓,t)=W-(1+ic_2)|W|^2W+K(1+ic_1)(\overline{W}-W),$$
(11)
where $`K`$ is the coupling strength, $`c_1,c_2`$ are real parameters, and the non-local mean field $`\overline{W}`$ is given by
$$\overline{W}(𝒓,t)=\int d𝒓^{}G(𝒓^{}-𝒓)W(𝒓^{},t).$$
(12)
This is the non-local complex Ginzburg-Landau equation introduced by Kuramoto as the first concrete example of non-locally coupled systems.
## III Anomalous spatio-temporal chaos
In the numerical simulations presented here, we assume the total system to be a square lattice with both sides unit length long. The elements are placed on the lattice sites, and periodic boundary conditions are assumed. We use $`N^2=512^2`$–$`1024^2`$ elements, and fix the coupling length $`\gamma ^{-1}`$ at $`1/8`$. The non-local mean field is easily calculated by using the FFT technique, since it is simply a convolution of the amplitude field with the kernel (8). We fix the parameters $`c_1`$ and $`c_2`$ at $`2`$ and $`2`$, respectively. These are the standard values already used in our previous one-dimensional simulations.
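A minimal sketch of such a simulation step is given below. It is not the code used for the figures: the Euler time step, the step size, and the signs chosen for $`c_1`$ and $`c_2`$ are our assumptions (only the magnitudes are quoted above), and the kernel is applied in Fourier space, where Eq. (6) gives $`\hat{G}(q)=\gamma ^2/(\gamma ^2+|𝒒|^2)`$ after normalization.

```python
# A sketch (not the authors' code) of one Euler step of Eq. (11) on an
# N x N periodic grid, with the non-local mean field computed by FFT.
import numpy as np

N, L = 512, 1.0
K, gamma, dt = 0.85, 8.0, 0.01
c1, c2 = -2.0, 2.0   # magnitudes from the text; the signs are our assumption

q = 2*np.pi*np.fft.fftfreq(N, d=L/N)          # angular wavenumbers
qx, qy = np.meshgrid(q, q)
G_hat = gamma**2 / (gamma**2 + qx**2 + qy**2)  # normalized Fourier kernel

rng = np.random.default_rng(0)
W = 0.1*(rng.standard_normal((N, N)) + 1j*rng.standard_normal((N, N)))

def step(W):
    W_bar = np.fft.ifft2(G_hat * np.fft.fft2(W))   # non-local mean field
    dW = W - (1 + 1j*c2)*np.abs(W)**2*W + K*(1 + 1j*c1)*(W_bar - W)
    return W + dt*dW

for _ in range(1000):
    W = step(W)
```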
In Figs.1-3, typical snapshots of the real part $`X(x,y)`$ of the complex variable $`W(x,y)`$ are shown for three different values of the coupling strength $`K`$. Since we obtain similar figures for the imaginary part $`Y(x,y)`$ by symmetry, we use $`X(x,y)`$ in the following analysis and call it the amplitude field. The amplitude field at $`K=1.05`$ is continuous and smooth, while at $`K=0.65`$ it seems to be discontinuous and disordered, although not completely random. The amplitude field at the intermediate coupling strength $`K=0.85`$ looks somewhat more complex and intriguing; it is composed of intricately convoluted smooth and disordered patches of various length scales. This is the anomalous spatio-temporal chaotic regime on which our interest is focused.
## IV Spatial correlation function
Let us examine the spatial correlation function first. Figures 4(a)-(c) show spatial correlation functions $`C(x,y):=\langle X(0,0)X(x,y)\rangle `$ corresponding to the amplitude fields shown in Figs.1-3. Each correlation function is clearly rotationally symmetric due to the isotropy of the system. As the amplitude field becomes disordered, the correlation function becomes steep, and the center of the graph, which corresponds to the self-correlation $`C(0,0)`$, becomes peaked.
In Ref., the anomalous spatio-temporal chaotic regime was characterized by power-law behavior of the spatial correlation function at small distances:
$$C(l):=\langle X(0)X(l)\rangle \simeq C_0-C_1l^\alpha \quad (l\ll 1),$$
(13)
where $`C_0,C_1`$ are constants and $`\alpha `$ is a non-integer parameter-dependent exponent.
To confirm whether this power-law behavior also holds in two dimensions, we calculated the radial correlation function $`C(l)=\langle X(𝒓)X(𝒓+𝒍)\rangle `$ ($`|𝒍|=l`$) along a straight line in a certain direction (we mainly used the $`(0,1)`$ or the $`(1,1)`$ direction, but the results are independent of the direction), and estimated the best fitting parameters $`C_0`$ and $`C_1`$. Figure 5 shows $`\mathrm{ln}l`$ vs. $`\mathrm{ln}\left[C_0-C(l)\right]`$ for several values of the coupling strength $`K`$. For each coupling strength, the experimental data are almost on a straight line, and the power-law behavior is evident. The exponent $`\alpha `$ of the power law varies continuously with the coupling strength. Although not shown in the figure, the correlation function $`C(l)`$ is continuous at the origin $`l=0`$ for $`K\ge 0.85`$, but discontinuous at $`K=0.80`$. There appears a finite gap between the self-correlation $`C(0)`$ and the correlation between the nearest-neighbor elements $`\mathrm{lim}_{l\to +0}C(l)\simeq C_0`$. This means that the individual motion of the element becomes so violent that the amplitude field is no longer continuous statistically. In Ref., this transition point was identified with the blowout bifurcation point in the on-off intermittent dynamics of the amplitude difference between nearby elements.
Thus, the anomalous spatio-temporal chaos in two dimensions is also characterized by power-law behavior of the spatial correlation function with a parameter-dependent exponent.
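A sketch of how such an estimate of $`\alpha `$ can be obtained from a snapshot is given below; the helper names and the treatment of $`C_0`$ as a fit parameter are our choices, not a prescription from the text.

```python
# A sketch of the exponent estimate of Eq. (13): compute C(l) along one
# lattice direction of a periodic N x N snapshot X, then regress
# ln[C_0 - C(l)] on ln(l), with C_0 treated as a fit parameter.
import numpy as np

def radial_correlation(X, max_shift):
    C = [np.mean(X * np.roll(X, -s, axis=1)) for s in range(1, max_shift)]
    return np.arange(1, max_shift), np.array(C)

def fit_alpha(l, C, C0):
    # ln(C0 - C(l)) = ln(C1) + alpha * ln(l); slope of the fit is alpha
    alpha, lnC1 = np.polyfit(np.log(l), np.log(C0 - C), 1)
    return alpha
```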
## V Noisy on-off intermittency
In Ref., we clarified that the scaling behavior of the spatial correlation is a consequence of underlying multiplicative processes of amplitude differences between neighboring elements. We described this process by a multiplicative stochastic model, and related the exponent $`\alpha `$ of the spatial correlation to the fluctuation of the finite-time Lyapunov exponent of the element. Such a model is essentially identical to those used in describing noisy on-off intermittency, which indicates that our system also exhibits this type of temporal intermittency. Indeed, the finite-time Lyapunov exponent of the complex Ginzburg-Landau oscillator can fluctuate between positive and negative values, and neighboring oscillators are subjected to only slightly different non-local mean fields. Therefore, the conditions for the appearance of noisy on-off intermittency are satisfied in our system.
Now, let us confirm this in our system numerically. The coupling strength is set at $`K=0.85`$. Figure 6 shows typical time sequences of amplitude differences $`\mathrm{\Delta }X_1(t)`$ and $`\mathrm{\Delta }X_2(t)`$. The distance between the elements is $`512^{-1}`$ for $`\mathrm{\Delta }X_1(t)`$, and $`64^{-1}`$ for $`\mathrm{\Delta }X_2(t)`$, respectively. Strong intermittency of the signals is apparent. It can be seen that $`\mathrm{\Delta }X_2(t)`$ shows more frequent bursts than $`\mathrm{\Delta }X_1(t)`$, reflecting that $`\mathrm{\Delta }X_2(t)`$ is subjected to larger fluctuations than $`\mathrm{\Delta }X_1(t)`$.
We can confirm that these intermittent signals are actually noisy on-off intermittent by calculating the laminar length distribution. The laminar phase is defined as a contiguous duration during which the absolute value of the difference does not exceed a certain threshold. Here we choose $`0.5`$ as the threshold value. Figure 7 shows laminar length distributions $`R(t)`$ obtained from $`\mathrm{\Delta }X_1(t)`$ and $`\mathrm{\Delta }X_2(t)`$. The characteristic shape of the distribution $`R(t)`$, i.e., the power-law dependence on $`t`$ with slope $`-3/2`$ for small $`t`$, and the exponential shoulder seen in the large $`t`$ region, clearly indicates that the signals are actually noisy on-off intermittent. The shoulder reflects broken scale invariance due to the additive noise. As expected, the shoulder of $`\mathrm{\Delta }X_2(t)`$ appears at a smaller value of $`t`$ than that of $`\mathrm{\Delta }X_1(t)`$.
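The laminar statistics can be collected with a few lines of code; the following sketch (our construction, with the threshold $`0.5`$ taken from the text) extracts the laminar durations from a sampled difference signal:

```python
# A sketch of the laminar-length statistics: given a difference signal dX
# sampled at interval dt, collect durations over which |dX| stays below
# the threshold; a histogram of the durations gives R(t).
import numpy as np

def laminar_lengths(dX, dt, threshold=0.5):
    laminar = np.abs(dX) < threshold
    lengths, run = [], 0
    for flag in laminar:
        if flag:
            run += 1
        elif run > 0:
            lengths.append(run*dt)
            run = 0
    if run > 0:
        lengths.append(run*dt)
    return np.array(lengths)
```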
## VI Multi-scaling analysis
The notion of multi-scaling, i.e., multi-affinity and multi-fractality, has been employed successfully in characterizing complex spatio-temporal behavior of various phenomena, such as velocity and energy dissipation fields in fluid turbulence, rough interfaces in fractal surface growth, nematic fluid electro-convective turbulence, financial data of currency exchange rates, and even natural images. In Ref., we introduced multi-scaling analysis to our system for the one-dimensional case, inspired by the seeming similarity of the amplitude and difference fields in our system to the velocity and energy dissipation fields in fluid turbulence. Here, we attempt the multi-scaling analysis for the two-dimensional case.
First, we introduce the difference field $`Z(𝒓)`$ as
$$Z(𝒓):=|\nabla X(𝒓)|=\sqrt{\left(\frac{\partial X}{\partial x}\right)^2+\left(\frac{\partial X}{\partial y}\right)^2},$$
(14)
which emphasizes the edges of the original amplitude field $`X(𝒓)`$ (here the differential should not be interpreted literally; we always use a finite difference in the actual calculation, e.g. $`(X(x+\mathrm{\Delta }x,y)-X(x,y))/\mathrm{\Delta }x`$ with sufficiently small $`\mathrm{\Delta }x`$, and this is important for observing the multi-scaling behavior). This quantity is an analogue of the energy dissipation field in fluid turbulence. Figure 8 shows a typical snapshot of the difference field $`Z(x,y)`$ at $`K=0.85`$, corresponding to the amplitude field shown in Fig.2. The intermittency underlying the original amplitude field is now apparent.
We then introduce the following quantities as measures for the amplitude field $`X(𝒓)`$ and the difference field $`Z(𝒓)`$:
$`h(𝒓;l)`$ $`:=`$ $`|X(𝒓+𝒍)-X(𝒓)|,`$ (15)
$`m(𝒓;l)`$ $`:=`$ $`{\displaystyle \int _{S(𝒓;l)}}Z(𝒓^{})d^2𝒓^{},`$ (16)
where $`|𝒍|=l`$, and the domain of integration $`S(𝒓;l)`$ is a square of size $`l`$ placed at $`𝒓`$. The first quantity is a difference of the amplitude field $`X(𝒓)`$ between two points separated by a distance of $`l`$, and the second quantity is a volume enclosed by the difference field $`Z(𝒓)`$ and the square $`S(𝒓;l)`$.
According to the multi-fractal formalism, two types of partition functions are defined as
$`Z_h^q(l)`$ $`:=`$ $`\langle h(l)^q\rangle ={\displaystyle \frac{1}{M(l)}}{\displaystyle \sum _{i=1}^{M(l)}}h(𝒓_i;l)^q,`$ (17)
$`Z_m^q(l)`$ $`:=`$ $`N(l)\langle m(l)^q\rangle ={\displaystyle \sum _{i=1}^{N(l)}}m(𝒓_i;l)^q,`$ (18)
where $`Z_h^q(l)`$ is calculated along a certain straight line in some direction as in the case of the previous spatial correlation function, while $`Z_m^q(l)`$ is calculated over the whole system. $`𝒓_i`$ is either the position of the line segment or the position of the square. $`M(l)`$ is the number of line segments of length $`l`$ that are needed to cover the entire line, and $`N(l)`$ is the number of squares of size $`l`$ that are needed to cover the whole system. The function $`Z_h^q(l)`$ is called structure function in the context of fluid turbulence.
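The two partition functions translate directly into code. The sketch below is our illustration for an $`N\times N`$ snapshot on a unit square: $`Z_h^q`$ is taken along one lattice direction with periodic wrapping, and $`l`$ (in lattice units) is assumed to divide $`N`$ in the box sum.

```python
# A sketch of the partition functions, Eqs. (17) and (18), for an
# amplitude snapshot X and its difference field Z (both N x N arrays).
import numpy as np

def Z_h(X, l, q):
    # structure function along one direction, h = |X(r+l) - X(r)|
    h = np.abs(np.roll(X, -l, axis=1) - X)
    return np.mean(h**q)

def Z_m(Z, l, q, N):
    # box measure m = integral of Z over squares of side l (l divides N);
    # (1/N)^2 is the area element of one grid cell on the unit square
    boxes = Z.reshape(N//l, l, N//l, l).sum(axis=(1, 3)) * (1.0/N)**2
    return np.sum(boxes**q)
```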
When the measures have scaling properties, the partition functions are expected to scale with $`l`$ as $`Z_h^q(l)\sim l^{\zeta (q)}`$ and $`Z_m^q(l)\sim l^{\tau (q)}`$. Furthermore, if these exponents $`\zeta (q)`$ and $`\tau (q)`$ depend nonlinearly on $`q`$, the corresponding measures $`h(l)`$ and $`m(l)`$ are called multi-affine and multi-fractal, respectively.
For the one-dimensional case, we already know that the amplitude field is multi-affine and the difference field is multi-fractal. Moreover, our previous theory predicts the following form for the scaling exponent $`\zeta (q)`$ of the amplitude field:
$$\zeta (q)=\{\begin{array}{cc}q\hfill & (0<q<\beta ),\hfill \\ \beta \hfill & (\beta <q),\hfill \end{array}$$
(19)
where $`\beta `$ is a positive constant determined by the fluctuation of the finite-time Lyapunov exponent of the element, and is related to the slope of the probability distribution of $`h(l)`$. This is the simplest form of multi-affinity, and is sometimes called bi-fractality (there is some confusion in terminology due to historical reasons; the word ’bi-affinity’ would be more appropriate if it existed). The same form of the scaling exponent $`\zeta (q)`$ is also expected in two dimensions, since our previous theory imposed no restriction on the dimensionality of the system.
For the scaling exponent $`\tau (q)`$ of the difference field, we have not been able to develop a satisfactory theory yet. Numerical results in one-dimensional systems suggest that $`\tau (q)`$ also depends nonlinearly on $`q`$, and the difference field is multi-fractal with a rather simple functional form for $`\tau (q)`$. However, the scaling exponent $`\tau (q)`$ for the two-dimensional system may be different from that for the one-dimensional case, since $`Z_m^q(l)`$ is defined depending on the dimensionality of the system, while $`Z_h^q(l)`$ is always defined along a one-dimensional line.
Let us proceed to the numerical results now. The coupling strength is fixed at $`K=0.85`$ hereafter, where the system is fully in the anomalous spatio-temporal chaotic regime.
Figure 9 shows the partition function $`Z_h^q(l)`$ obtained for several values of $`q`$. Each curve depends on $`l`$ in a power-law manner for small $`l`$, and its exponent increases with $`q`$. The dependence of the scaling exponent $`\zeta (q)`$ on $`q`$ is shown in Fig.11. The $`\zeta (q)`$ curve is a strongly nonlinear function of $`q`$, and the multi-scaling property of the amplitude field is evident. Furthermore, the $`\zeta (q)`$ curve has a bi-linear form as expected in Eq.(19), although a sharp transition is absent due to the limited number of oscillators. From the large $`q`$ behavior of the exponent, we can roughly estimate the value of $`\beta `$ as $`0.48`$.
Thus, the amplitude field turns out to be multi-affine, and the behavior of the scaling exponent is the same as that for the one-dimensional case.
Figure 10 shows the partition function $`Z_m^q(l)`$ for several values of $`q`$. It is clear that each curve shows a power-law dependence on $`l`$. The width of the region where the power law holds seems much wider than in the previous case. The scaling exponent $`\tau (q)`$ is plotted with regard to $`q`$ in Fig.12. The corresponding generalized dimension $`D(q):=\tau (q)/(q-1)`$ is also shown in the inset. The $`\tau (q)`$ curve is again a nonlinear function of $`q`$, but its dependence on $`q`$ does not seem to be as simple as that of the $`\zeta (q)`$ curve. However, as we conjectured numerically for the one-dimensional case, asymptotic linearity of the $`\tau (q)`$ curve seems to hold. Correspondingly, the $`D(q)`$ curve seems to saturate to a horizontal line $`D(q)=D(\mathrm{})`$ quickly. But we cannot observe a clear transition to the horizontal line as in the previous one-dimensional case, which may be due to the limited number of oscillators, or to the two-dimensionality of the system.
Thus, the difference field also turns out to be multi-fractal. The behavior of the scaling exponent somewhat resembles that in the one-dimensional case, but is not completely the same.
## VII Probability distributions of the measures
The multi-scaling properties of amplitude and difference fields are consequences of intermittency underlying the system. In order to analyze this intermittency in more detail, we study here the probability distribution functions (PDF) of both measures at each length scale.
Let us consider the PDFs of the measures $`h(l)`$ and $`m(l)`$. It is convenient to use rescaled measures $`h_r(l):=h(l)/l`$, $`m_r(l):=m(l)/l^2`$ and corresponding rescaled PDFs $`P_r(h_r;l)`$, $`Q_r(m_r;l)`$. With this rescaling, the peaks and the widths of the PDFs become relatively close. We show our results in these rescaled variables.
The scaling exponents $`\zeta (q)`$ and $`\tau (q)`$ are fully determined by the dependence of these PDFs on the scale $`l`$. In Ref., we approximated the tails of the PDFs by certain functional forms and extracted the scaling exponents asymptotically in the small $`l`$ limit. Here we present the numerical results only briefly for the purpose of emphasizing the intermittency of the amplitude and difference fields.
Figure 13 shows the rescaled PDF $`P_r(h_r;l)`$ of the rescaled amplitude difference $`h_r(l)`$ for several values of $`l`$ in log-log scales. Each PDF has a characteristic truncated Lévy-like shape, as we already obtained previously for the one-dimensional case; it is composed of a constant region near the origin, a power-law decay in the middle, and a sharp cutoff. Each curve roughly collapses to a scale-invariant curve in the constant and power-law regions, while the cut-off of the tail moves to the right with decreasing $`l`$ due to the intermittency. More precisely, the cut-off position of the PDF (defined in some suitable way) is proportional to $`l^{-1}`$. This gives rise to the bi-fractal behavior of the $`\zeta (q)`$ curve; see Ref. for details.
Our previous theory predicts that the slope of the power-law decay is given by $`-1-\beta `$ with the constant $`\beta `$ from Eq. (19). This is confirmed in Fig.13, where the slope of the power-law decay can be read off as $`-1.4`$, roughly in agreement with the previously obtained value $`\beta \simeq 0.48`$ from the scaling exponent $`\zeta (q)`$. The inset of Fig. 13 shows the PDF of the amplitude difference $`h(l)`$ (without taking the absolute value) rescaled by the standard deviation in linear-log scales, in order to further emphasize the intermittency of the amplitude field. The PDF evolves from nearly Gaussian into an intermittent power-law with the decrease of the scale $`l`$.
Figure 14 shows the PDFs $`Q_r(m_r;l)`$ of the rescaled volume $`m_r(l)`$. Their shapes are not as simple as those of $`P_r(h_r;l)`$. They are also qualitatively different from the PDF of the difference field in the previous one-dimensional case, which is responsible for the slightly different behavior of the scaling exponent $`\tau (q)`$ from the one-dimensional case. But we can still see that both tails extend, and the distribution widens with the decrease of the scale $`l`$. Namely, as we decrease the observation scale $`l`$, strongly deviating events appear more frequently. It is obvious that this intermittency effect gives rise to the nonlinearity of the scaling exponent $`\tau (q)`$, although the precise functional form is difficult to obtain. The inset shows the PDF of the measure $`m(l)`$ rescaled by the standard deviation in linear-log scales for several values of $`l`$. The PDF gradually gets steeper with the decrease of $`l`$ due to the intermittency.
Thus, the PDFs of the measures reveal the intermittency in our system clearly. In particular, the PDF of the amplitude difference $`h(l)`$ has the same shape as that already obtained in the one-dimensional case.
## VIII Conclusion
We numerically analyzed a two-dimensional system of non-locally coupled complex Ginzburg-Landau oscillators. As in the previous one-dimensional case, we found an anomalous spatio-temporally chaotic regime characterized by a power-law behavior of the spatial correlation function. As expected from our previous theory, the amplitude difference between neighboring elements exhibits noisy on-off intermittency, giving a microscopic dynamical origin for the power-law spatial correlations. We performed multi-scaling analysis in that regime, and found that the amplitude and difference fields are indeed multi-affine and multi-fractal, indicating strong intermittency underlying the system. By studying the PDFs of the measures at each length scale, the intermittency was clearly observed as scale-dependent deviations of the PDFs in their tails.
Multi-scaling properties are also known in various phenomena such as turbulence or fractal surface growth. The appearance of similar multi-scaling properties in many different systems implies some underlying common statistical law leading to such behavior. Further study of the intermittency in our system will give more insights and hints for the understanding of the multi-scaling properties observed in complex dissipative systems.
###### Acknowledgements.
The author gratefully acknowledges helpful advice and continuous support by Prof. Yoshiki Kuramoto, and thanks Dr. Axel Rossberg for a critical reading of the manuscript. He also thanks the Yukawa Institute for providing computer resources, and the JSPS Research Fellowships for Young Scientists for financial support.
# Scaling of the distribution of fluctuations of financial market indices
## I Introduction and Background
The analysis of financial data by methods developed for physical systems has a long tradition and has recently attracted the interest of physicists. Among the reasons for this interest is the scientific challenge of understanding the dynamics of a strongly fluctuating complex system with a large number of interacting elements. In addition, it is possible that the experience gained by studying complex physical systems might yield new results in economics.
Financial markets are complex dynamical systems with many interacting elements that can be grouped into two categories: (i) the traders — such as individual investors, mutual funds, brokerage firms, and banks — and (ii) the assets — such as bonds, stocks, futures, and options. Interactions between these elements lead to transactions mediated by the stock exchange. The details of each transaction are recorded for later analysis. The dynamics of a financial market are difficult to understand not only because of the complexity of its internal elements but also because of the many intractable external factors acting on it, which may even differ from market to market. Remarkably, the statistical properties of certain observables appear to be similar for quite different markets, consistent with the possibility that there may exist “universal” results.
The most challenging difficulty in the study of a financial market is that the nature of the interactions between the different elements comprising the system is unknown, as is the way in which external factors affect it. Therefore, as a starting point, one may resort to empirical studies to help uncover the regularities or “empirical laws” that may govern financial markets.
The interactions between the different elements comprising financial markets generate many observables such as the transaction price, the share volume traded, the trading frequency, and the values of market indices \[Fig. 1\]. A number of studies investigated the time series of returns on varying time scales $`\mathrm{\Delta }t`$ in order to probe the nature of the stochastic process underlying it. For a time series $`S(t)`$ of prices or market index values, the return $`G(t)G_{\mathrm{\Delta }t}(t)`$ over a time scale $`\mathrm{\Delta }t`$ is defined as the forward change in the logarithm of $`S(t)`$,
$$G_{\mathrm{\Delta }t}(t)\equiv \mathrm{ln}S(t+\mathrm{\Delta }t)-\mathrm{ln}S(t).$$
(1)
For small changes in $`S(t)`$, the return $`G_{\mathrm{\Delta }t}(t)`$ is approximately the forward relative change,
$$G_{\mathrm{\Delta }t}(t)\simeq \frac{S(t+\mathrm{\Delta }t)-S(t)}{S(t)}.$$
(2)
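Operationally, Eq. (1) is a one-liner; the following sketch (ours, not from the original) computes returns at a lag of $`\mathrm{\Delta }t`$ measured in samples:

```python
# A minimal sketch of Eq. (1): log returns of a price series S at lag dt,
# where dt counts samples of the (uniformly sampled) series.
import numpy as np

def returns(S, dt):
    S = np.asarray(S, dtype=float)
    return np.log(S[dt:]) - np.log(S[:-dt])
```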
In 1900, Bachelier proposed the first model for the stochastic process of returns—an uncorrelated random walk with independent, identically Gaussian distributed (i.i.d) random variables. This model is natural if one considers the return over a time scale $`\mathrm{\Delta }t`$ to be the result of many independent “shocks”, which then lead by the central limit theorem to a Gaussian distribution of returns . However, empirical studies show that the distribution of returns has pronounced tails in striking contrast to that of a Gaussian. To illustrate this fact, we show in Fig. 2 the 10 min returns of the S&P 500 market index for 1986-1987 and contrast it with a sequence of i.i.d. Gaussian random variables. Both are normalized to have unit variance. Clearly, large events are very frequent in the data, a fact largely underestimated by a Gaussian process. Despite this empirical fact, the Gaussian assumption for the distribution of returns is widely used in theoretical finance because of the simplifications it provides in analytical calculation; indeed, it is one of the assumptions used in the classic Black-Scholes option pricing formula.
In his pioneering analysis of cotton prices, Mandelbrot observed that in addition to being non-Gaussian, the process of returns shows another interesting property: “time scaling” — that is, the distributions of returns for various choices of $`\mathrm{\Delta }t`$, ranging from 1 day up to 1 month have similar functional forms. Motivated by (i) pronounced tails, and (ii) a stable functional form for different time scales, Mandelbrot proposed that the distribution of returns is consistent with a Lévy stable distribution — that is, the returns can be modeled as a Lévy stable process. Lévy stable distributions arise from the generalization of the central limit theorem to random variables which do not have a finite second moment \[see Appendix A\].
Conclusive results on the distribution of returns are difficult to obtain, and require a large amount of data to study the rare events that give rise to the tails. More recently, the availability of high frequency data on financial market indices, and the advent of improved computing capabilities, has facilitated the probing of the asymptotic behavior of the distribution. For these reasons, recent empirical studies of the S&P 500 index analyze typically $`10^6`$–$`10^7`$ data points, in contrast to approximately 2000 data points analyzed in the classic work of Mandelbrot. Reference reports that the central part of the distribution of S&P 500 returns appears to be well fit by a Lévy distribution, but the asymptotic behavior of the distribution of returns shows faster decay than predicted by a Lévy distribution. Hence, Ref. proposed a truncated Lévy distribution—a Lévy distribution in the central part followed by an approximately exponential truncation—as a model for the distribution of returns. The exponential truncation ensures the existence of a finite second moment, and hence the truncated Lévy distribution is not a stable distribution. The truncated Lévy process with i.i.d. random variables has slow convergence to Gaussian behavior due to the Lévy distribution in the center, which could explain the observed time scaling for a considerable range of time scales.
In addition to the probability distribution, a complementary aspect for the characterization of any stochastic process is the quantification of correlations. Studies of the autocorrelation function of returns show exponential decay with characteristic decay times $`\tau _{ch}`$ of only 4 min. As is clear from Fig. 3(a), for time scales beyond 20 min the correlation function is at the level of noise, in agreement with the efficient market hypothesis, which states that it is not possible to predict future stock prices from their previous values. If price-correlations were not short-range, one could devise a way to make money from the market indefinitely.
It is important to note that lack of linear correlation does not imply an i.i.d. process for the returns, since there may exist higher-order correlations \[Fig 3(b)\]. Indeed, the amplitude of the returns, referred to in economics as the volatility, shows long-range time correlations that persist up to several months, and are characterized by an asymptotic power-law decay.
## II Motivation
A recent preliminary study reported that the distributions of 5 min returns for 1000 individual stocks and the S&P 500 index decay as a power-law with an exponent well outside the stable Lévy regime. Consistent results were found by studies both on stock markets and on foreign exchange markets. These results raise two important questions:
First, the distribution of returns has a finite second moment, thus, we would expect it to converge to a Gaussian because of the central limit theorem. On the other hand, preliminary studies suggest the distributions of returns retain their power-law functional form for long time scales. So, we can ask which of these two scenarios is correct? We find that the distributions of returns retain their functional form for time scales up to approximately 4 days, after which we find results consistent with a slow convergence to Gaussian behavior.
Second, power-law distributions are not stable distributions, but the distribution of returns retains its functional form for a range of time scales. It is then natural to ask how can this scaling behavior possibly arise? One possible explanation is the recently-proposed exponentially-truncated Lévy distribution. However, the truncated Lévy process is constructed out of i.i.d. random variables and hence is not consistent with the empirically-observed long persistence in the autocorrelation function of the volatility of returns . Moreover, our data support the possibility that the asymptotic nature of the distribution is a power-law with an exponent outside the Lévy regime. Also, we will argue that the scaling behavior observed in the distribution of returns may be connected to the slow decay of the volatility correlations.
The organization of the paper is as follows. Section III describes the data analyzed. Sections IV and V study the distribution of returns of the S&P 500 index on time scales $`\mathrm{\Delta }t\le 1`$ day and $`\mathrm{\Delta }t>1`$ day, respectively. Section VI discusses how time correlations in volatility are related to the time scaling of the distributions, and Sect. VII presents concluding remarks.
## III The Data analyzed
First, we analyze the S&P 500 index, which comprises 500 companies chosen for market size, liquidity, and industry group representation in the US. The S&P 500 is a market-value weighted index (stock price times number of shares outstanding), with each stock’s weight proportional to its market value. The S&P 500 index is one of the most widely used benchmarks of U.S. equity performance. In our study, we first analyze database (i), which contains “high-frequency” data that covers the 13-year period 1984–1996, with a recording frequency of less than 1 min. The total number of records in this database exceeds $`4.5\times 10^6`$. To investigate longer time scales, we study two other databases. Database (ii) contains daily records of the S&P 500 index for the 35-year period 1962–1996, and database (iii) contains monthly records for the 71-year period 1926–1996.
In order to test if our results are limited to the S&P 500 index, we perform a parallel analysis on two other market indices. Database (iv) contains 3560 daily records of the NIKKEI index of the Tokyo stock exchange for the 14-year period 1984–1997, and database (v) contains 4649 daily records of the Hang-Seng index of the Hong Kong stock exchange for the 18-year period 1980–1997.
## IV The distribution of returns for $`\mathrm{\Delta }t\le 1`$ day
### A The distribution of returns for $`\mathrm{\Delta }t=1`$ min
First, we analyze the values of the S&P 500 index from the high-frequency data for the 13-year period 1984–1996, which extends the database studied in Ref. by an additional 7 years. The data are typically recorded at 15 second intervals. We first sample the data at 1 min intervals and generate a time series $`S(t)`$ with approximately 1.2 million data points. From the time series $`S(t)`$, we compute the return $`G\equiv G_{\mathrm{\Delta }t}(t)`$, which is the relative change in the index, defined in Eq. (1).
In order to compare the behavior of the distribution for different time scales $`\mathrm{\Delta }t`$, we define a normalized return $`g\equiv g_{\mathrm{\Delta }t}(t)`$
$$g\equiv \frac{G-\langle G\rangle _T}{v}.$$
(3)
Here, the time averaged volatility $`v\equiv v(\mathrm{\Delta }t)`$ is defined through $`v^2\equiv \langle G^2\rangle _T-\langle G\rangle _T^2`$ and $`\langle \cdots \rangle _T`$ denotes an average over the entire length of the time series. Figure 4(a) shows the cumulative distribution of returns for $`\mathrm{\Delta }t=1`$ min. For both positive and negative tails, we find a power-law asymptotic behavior
$$P(g>x)\sim \frac{1}{x^\alpha },$$
(4)
similar to what was found for individual stocks. For the region $`3\le g\le 50`$, regression fits yield
$$\alpha =\{\begin{array}{cc}3.05\pm 0.04\hfill & \text{(positive tail)}\hfill \\ 2.94\pm 0.08\hfill & \text{(negative tail)}\hfill \end{array},$$
(5)
well outside the Lévy stable range, $`0\le \alpha <2`$. Consistent values for $`\alpha `$ are also obtained from the density function. For a more accurate estimation of the asymptotic behavior, we use the modified Hill estimator \[Fig. 5(a,b)\]. We obtain estimates for the asymptotic slope in the region $`3\le g\le 50`$:
$$\alpha =\{\begin{array}{cc}2.93\pm 0.11\hfill & \text{(positive tail)}\hfill \\ 3.02\pm 0.15\hfill & \text{(negative tail)}\hfill \end{array}.$$
(6)
For the region $`g\le 3`$, regression fits yield smaller estimates of $`\alpha `$, consistent with the possibility of a Lévy distribution in the central region. The values of $`\alpha `$ obtained in this range are quite sensitive to the bounds of the region used for fitting. Our estimates range from $`\alpha \approx 1.35`$ up to $`\alpha \approx 1.8`$ for different fitting regions in the interval $`0.1\le g\le 6`$. For example, in the region $`0.5\le g\le 3`$, we obtain
$$\alpha \approx \{\begin{array}{cc}1.6\hfill & \text{(positive tail)}\hfill \\ 1.7\hfill & \text{(negative tail)}\hfill \end{array},$$
(7)
which are consistent with the result $`\alpha \approx 1.4`$ found for small values of $`g`$ in Ref. Note that in Ref. the estimates of $`\alpha `$ were calculated using the scaling form of the return probability to the origin $`P(0)`$. It is possible that for the financial data analyzed here, $`P(0)`$ is not the optimal statistic, because of the discreteness of the individual-company distributions that comprise it. It is also possible that our values of $`\alpha `$ for small values of $`g`$ could be due to the discreteness in the returns of the individual companies comprising the S&P 500.
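A sketch of the tail regression, with the fitting region $`3\le g\le 50`$ taken from above and the rest (function names, use of $`|g|`$ for a single tail estimate) being our choices:

```python
# A sketch of the tail-exponent estimate: normalize returns per Eq. (3),
# then regress log P(g > x) on log x inside the fitting region.
import numpy as np

def tail_exponent(G, lo=3.0, hi=50.0):
    g = (G - G.mean()) / G.std()           # normalized returns, Eq. (3)
    g = np.sort(np.abs(g))                 # split by sign to get both tails
    ccdf = 1.0 - np.arange(1, g.size + 1) / g.size   # P(g > x) at each x
    mask = (g >= lo) & (g <= hi) & (ccdf > 0)
    slope, _ = np.polyfit(np.log(g[mask]), np.log(ccdf[mask]), 1)
    return -slope                           # alpha ~ 3 for the S&P 500 data
```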
### B Scaling of the distribution of returns for $`\mathrm{\Delta }t`$ up to 1 day
Next, we study the distribution of normalized returns for longer time scales. Figure 6(a) shows the cumulative distribution of normalized S&P 500 returns for time scales up to 512 min (approximately 1.5 days). The distribution appears to retain its power-law functional form for these time scales. We verify this scaling behavior by analyzing the moments of the distribution of normalized returns $`g`$,
$$\mu _k\equiv \langle |g|^k\rangle _T,$$
(8)
where $`\langle \cdots \rangle _T`$ denotes an average over all the normalized returns for all the bins. Since $`\alpha \approx 3`$, we expect $`\mu _k`$ to diverge for $`k\ge 3`$, and hence we compute $`\mu _k`$ for $`k<3`$.
Figure 6(b) shows the moments of the normalized returns $`g`$ for different time scales from 5 min up to 1 day. The moments do not vary significantly for the above time scales, confirming the apparent scaling behavior of the distribution observed in Fig. 6(a).
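A sketch of the moment test of Eq. (8); the grid of $`k`$ values is arbitrary, and `returns` refers to the helper sketched earlier:

```python
# A sketch of the moment analysis, Eq. (8), for moments with k < 3.
import numpy as np

def moments(G, ks=(0.5, 1.0, 1.5, 2.0, 2.5)):
    g = (G - G.mean()) / G.std()            # normalized returns, Eq. (3)
    return {k: np.mean(np.abs(g)**k) for k in ks}

# comparing moments(returns(S, dt)) across dt from 5 min up to 1 day
# probes the apparent scaling of the distribution
```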
## V The distribution of returns for $`\mathrm{\Delta }t\ge 1`$ day
### A The S&P 500 index
For time scales beyond 1 day, we use database (ii) which contains daily-sampled records of the S&P 500 index for the 35-year period 1962–1996. Figure 7(a) shows the agreement between distributions of normalized S&P 500 daily returns from database (i), which contains 1 min sampled data, and database (ii), which contains daily-sampled data. Regression fits for the region $`1\le g\le 10`$ give estimates of $`\alpha \approx 3`$. Figure 7(b) shows the scaling behavior of the distribution for $`\mathrm{\Delta }t=`$ 1, 2, and 4 days. For these choices of $`\mathrm{\Delta }t`$, the scaling behavior is also visible for the moments \[Fig. 7(c)\].
Figure 8(a) shows the distribution of the S&P 500 returns for $`\mathrm{\Delta }t=4,8`$ and 16 days. The data are now consistent with a slow convergence to Gaussian behavior. This is also visible for the moments \[Fig. 8(b)\].
### B The NIKKEI and Hang-Seng indices
The S&P 500 is but one of the many stock market indices. Hence, we investigate if the above results regarding the power-law asymptotic behavior of the distribution of returns hold for other market indices as well. Figure 9 compares the distributions of daily returns for the NIKKEI index of the Tokyo stock exchange and the Hang-Seng index of the Hong Kong stock exchange with that of the S&P 500. The distributions have similar functional forms, suggesting the possibility of “universal” behavior of these distributions. In addition, the estimates of $`\alpha `$ from regression fits,
$$\alpha =\{\begin{array}{cc}3.05\pm 0.16\hfill & \text{(NIKKEI)}\hfill \\ 3.03\pm 0.16\hfill & \text{(Hang-Seng)}\hfill \end{array},$$
(9)
are in good agreement for the three cases.
## VI Dependence of average volatility on time scale
The behavior of the time-averaged volatility $`v(\mathrm{\Delta }t)`$ as a function of the time scale $`\mathrm{\Delta }t`$ is shown in Fig. 3(c). We find a power-law dependence,
$$v(\mathrm{\Delta }t)\sim (\mathrm{\Delta }t)^\delta .$$
(10)
We estimate $`\delta \approx 0.7`$ for time scales $`\mathrm{\Delta }t<20`$ min. This value is larger than $`1/2`$ due to the exponentially-damped time correlations, which are significant up to approximately 20 min. Beyond 20 min, $`\delta \approx 0.5`$, indicating the absence of correlations in the returns, in agreement with Fig. 3(a). The time-averaged volatility is also consistent with essentially uncorrelated behavior for the daily and monthly returns.
## VII Volatility Correlations and Time Scaling
We have presented evidence that the distributions of returns retain the same functional form for a range of time scales \[see Fig. 10 and Table I\]. Here, we investigate possible causes of this scaling behavior. Previous explanations of scaling relied on Lévy stable and exponentially-truncated Lévy processes. However, the empirical data that we analyze are not consistent with either of these two processes.
### A Rate of convergence
Here, we compare the rate of convergence of the distribution of the returns to that of a computer-generated time series which has the same distribution but is statistically independent by construction. This way, we will be able to study the convergence to Gaussian behavior of independent random variables distributed as a power-law with an exponent $`\alpha \approx 3`$.
First, we generate a time series $`X\equiv \{X_k\},k=1,\mathrm{},40\times 10^6`$, distributed as $`P(X>x)\sim 1/x^3`$. We next calculate the new random variables $`I_n\equiv \sum _{k=1}^nX_k`$, and compute the cumulative distributions of $`I_n`$ for increasing values of $`n`$. These distributions show faster convergence with increasing $`n`$ than the distributions of returns \[Fig. 11(a)\]. This convergence is also visible in the moments. Figures 11(a,b) show that for $`n=256`$, both the moments and the cumulative distribution show Gaussian behavior. In contrast, for the distribution of returns, we observe significantly slower convergence to Gaussian behavior: In the case of the S&P 500 index, one observes a possible onset of convergence for $`\mathrm{\Delta }t`$ 4 days (1560 mins), starting from 1 min returns.
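The control experiment is easy to reproduce by inverse-transform sampling. In the sketch below the symmetrization of $`X`$ is our assumption (needed for the sums to converge to a centered Gaussian), as the text does not spell out the sign convention:

```python
# A sketch of the i.i.d. control experiment: draw X with P(X > x) = x^{-3}
# for x >= 1 via inverse-transform sampling, then form partial sums I_n.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(size=40_000_000) ** (-1.0/3.0)   # P(X > x) = x^{-3}, x >= 1
X *= rng.choice([-1, 1], size=X.size)            # symmetrization (assumed)

def partial_sums(X, n):
    m = (X.size // n) * n
    return X[:m].reshape(-1, n).sum(axis=1)      # independent sums I_n
```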
These results confirm the existence of time dependencies in the returns. Next, we show that the scaling behavior observed for the S&P 500 index no longer holds when we destroy the dependencies between the returns at different times.
### B Randomizing the time series of returns
We start with the 1 min returns and then destroy all the time dependencies that might be present by shuffling the time series of $`G_{\mathrm{\Delta }t=1}(t)`$, thereby creating a new time series $`G_1^{sh}(t)`$ which contains statistically-independent returns. By adding up $`n`$ consecutive returns of the shuffled series $`G_1^{sh}(t)`$, we construct the $`n`$ min returns $`G_n^{sh}(t)`$.
Figure 12(a) shows the cumulative distribution of $`G_n^{sh}(t)`$ for increasing values of $`n`$. We find a progressive convergence to Gaussian behavior with increasing $`n`$. This convergence to Gaussian behavior is also clear in the moments of $`G_n^{sh}(t)`$, which rapidly approach the Gaussian values with increasing $`n`$ \[Fig. 12(b)\]. This rapid convergence confirms that the time dependencies cause the observed scaling behavior.
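A sketch of the shuffling surrogate (our code, mirroring the description above):

```python
# A sketch of the surrogate test: shuffle the 1 min returns to destroy all
# time dependencies, then aggregate n consecutive shuffled returns.
import numpy as np

def shuffled_aggregates(G1, n, seed=0):
    rng = np.random.default_rng(seed)
    Gsh = rng.permutation(G1)                    # surrogate series G_1^{sh}
    m = (Gsh.size // n) * n
    return Gsh[:m].reshape(-1, n).sum(axis=1)    # n-min returns G_n^{sh}
```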
## VIII Discussion
We have presented a detailed analysis of the distribution of returns for market indices, for time intervals $`\mathrm{\Delta }t`$ ranging over roughly 4 orders of magnitude, from 1 min up to 1 month ($`\approx `$ 16,000 min). We find that the distribution of returns is consistent with a power-law asymptotic behavior, characterized by an exponent $`\alpha \approx 3`$, well outside the stable Lévy regime $`0<\alpha <2`$. For time scales $`\mathrm{\Delta }t\gtrsim (\mathrm{\Delta }t)_\times `$, where $`(\mathrm{\Delta }t)_\times \approx 4`$ days, our results are consistent with slow convergence to Gaussian behavior.
We have also demonstrated that the scaling behavior does not hold if we destroy all the time dependencies by shuffling. The breakdown of the scaling behavior of the distribution of returns upon shuffling the time series suggests that the long-range volatility correlations, which persist up to several months, may be one possible reason for the observed scaling behavior.
Recent studies show that the distribution of volatility is consistent with an asymptotic power-law behavior with exponent 3, just as observed for the distribution of returns. This finding suggests that the process of returns may be written as
$$g(t)=ϵ(t)v(t),$$
(11)
where $`g(t)`$ denotes the return at time $`t`$, $`v(t)`$ denotes the volatility, and $`ϵ(t)`$ is an i.i.d. random variable independent of $`v(t)`$. Since the asymptotic behavior of the distributions of $`v(t)`$ and $`g(t)`$ is consistent with power-law behavior, $`ϵ(t)`$ should have an asymptotic behavior with faster decay than either $`g(t)`$ or $`v(t)`$. In fact, Eq. (11) is central to all the ARCH models, with $`ϵ(t)`$ assumed to be Gaussian distributed.
Different ARCH processes assume different recursion relations for $`v(t)`$. In the standard ARCH model, $`v(t)=\alpha +\beta g^2(t-1)`$, leading to a power-law distribution of returns with an exponent depending on the parameters $`\alpha `$ and $`\beta `$. However, the standard ARCH process predicts a volatility correlation that decays exponentially, since $`v(t)`$ depends only on the previous event, and cannot account for the observed long-range persistence in $`v(t)`$. To try to remedy this, one can require $`v(t)`$ to depend not only on the previous value of $`g(t)`$ but on a finite number of past events. This generalization is called the GARCH model. Dependence of $`v(t)`$ on the finite past leads not to a power-law decay (as is observed empirically), but to volatility correlations that decay exponentially, with larger decay times as the number of events “remembered” is increased.
In order to explain the long-range persistence of the autocorrelation function of the volatility, one must assume that $`v(t)`$ depends on all of the past rather than on a finite number of past events. Such a description would be consistent with the empirical finding of long-range correlations in the volatility, and the observation that the distributions of $`g(t)`$ and $`v(t)`$ have similar asymptotic forms. If the process of returns were governed by the volatility, as in Eq. (11), then the volatility would seem to be the more fundamental process. In fact, it is possible that the volatility is a measure of the amount of information arriving into the market, and that the statistical properties of the returns may be “driven” by this information.
## IX Acknowledgments
We thank J.-P. Bouchaud, M. Barthélemy, S. V. Buldyrev, P. Cizeau, X. Gabaix, I. Grosse, S. Havlin, K. Illinski, P. Ch. Ivanov, C. King, C.-K. Peng, B. Rosenow, D. Sornette, D. Stauffer, S. Solomon, J. Voit and especially R. N. Mantegna for stimulating discussions and helpful suggestions. The authors also thank Bob Tompolski for his help throughout this work. MM thanks DFG and LANA thanks FCT/Portugal for financial support. The Center for Polymer Studies is supported by the NSF.
## A Lévy Stable Distributions
Lévy stable distributions arise from the generalization of the central limit theorem to a wider class of distributions. Consider the partial sum $`P_n\equiv \sum _{i=1}^{n}x_i`$ of independent identically distributed (i.i.d.) random variables $`x_i`$. If the $`x_i`$’s have finite second moment, the central limit theorem holds and $`P_n`$ is distributed as a Gaussian in the limit $`n\to \mathrm{\infty }`$.
If the random variables $`x_i`$ are characterized by a distribution having asymptotic power-law behavior
$$P(x)\sim x^{-(1+\alpha )},$$
(A1)
where $`\alpha <2`$, then $`P_n`$ will converge to a Lévy stable stochastic process of index $`\alpha `$ in the limit $`n\to \mathrm{\infty }`$.
Except for special cases, such as the Cauchy distribution, Lévy stable distributions cannot be expressed in closed form. They are often expressed in terms of their Fourier transforms or characteristic functions, which we denote $`\phi (q)`$, where $`q`$ denotes the Fourier transformed variable. The general form of a characteristic function of a Lévy stable distribution is
$$\mathrm{ln}\phi (q)=\{\begin{array}{cc}i\mu q-\gamma |q|^\alpha \left[1+i\beta \frac{q}{|q|}tg\left(\frac{\pi }{2}\alpha \right)\right]\hfill & [\alpha \ne 1]\hfill \\ i\mu q-\gamma |q|\left[1+i\beta \frac{q}{|q|}\frac{2}{\pi }\mathrm{ln}|q|\right]\hfill & [\alpha =1]\hfill \end{array},$$
(A2)
where $`0<\alpha \le 2`$, $`\gamma `$ is a positive number, $`\mu `$ is the mean, and $`\beta `$ is an asymmetry parameter. For symmetric Lévy distributions ($`\beta =0`$), one has the functional form
$$P(x)=\frac{1}{2\pi }\int _{-\mathrm{\infty }}^{\mathrm{\infty }}exp(-\gamma |q|^\alpha )e^{-iqx}𝑑q.$$
(A3)
For $`\alpha =1`$, one obtains the Cauchy distribution and for the limiting case $`\alpha =2`$, one obtains the Gaussian distribution.
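Because the symmetric case reduces to a one-dimensional integral, Eq. (A3) is easy to evaluate numerically; the sketch below (with the arbitrary choice $`\gamma =1`$) checks the quadrature against the two closed forms just mentioned:

```python
import numpy as np

# Numerical evaluation of the symmetric Levy stable density, Eq. (A3),
# using the even symmetry of the integrand:
#   P(x) = (1/pi) * Integral_0^infinity cos(q x) exp(-gamma q^alpha) dq.
# gamma = 1 is an arbitrary illustrative choice.
def levy_pdf(x, alpha, gamma=1.0, qmax=50.0, nq=200_001):
    q = np.linspace(0.0, qmax, nq)
    f = np.exp(-gamma * q**alpha) * np.cos(q * x)
    dq = q[1] - q[0]
    return (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1]) * dq / np.pi  # trapezoid

# Checks against the closed forms quoted above:
print(levy_pdf(1.0, 1.0), 1.0 / (2.0 * np.pi))           # Cauchy at x = 1
print(levy_pdf(0.0, 2.0), 1.0 / (2.0 * np.sqrt(np.pi)))  # Gaussian at x = 0
```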
By construction, Lévy distributions are stable, that is, the sum of two independent random variables $`x_1`$ and $`x_2`$, characterized by the same Lévy distribution of index $`\alpha `$, is itself characterized by a Lévy distribution of the same index. The functional form of the distribution is maintained, if we sum up independent, identically distributed Lévy stable random variables.
For Lévy distributions, the asymptotic behavior of $`P(x)`$ for $`x\gg 1`$ is a power-law,
$$P(x)\sim x^{-(1+\alpha )}.$$
(A4)
Hence, the second moment diverges. Specifically, $`E\{|x|^n\}`$ diverges for $`n\ge \alpha `$ when $`\alpha <2`$. In particular, all Lévy stable processes with $`\alpha <2`$ have infinite variance. Thus, non-Gaussian stable stochastic processes do not have a characteristic scale. Although well-defined mathematically, these distributions are difficult to use and raise fundamental problems when applied to real systems where the second moment is often related to the properties of the system. In finance, an infinite variance would make risk estimation and derivative pricing impossible.
## B The Hill estimator (“local slopes”)
A common problem when studying a distribution that decays as a power law is how to obtain an accurate estimate of the exponent characterizing the asymptotic behavior. Here, we review the methods of Hill. The basic idea is to calculate the inverse of the local logarithmic slope $`\zeta `$ of the cumulative distribution $`P(g>x)`$,
$$\zeta \equiv -\left(\frac{d\mathrm{log}P}{d\mathrm{log}x}\right)^{-1}.$$
(B1)
We then estimate the inverse asymptotic slope $`1/\alpha `$ by extrapolating $`\zeta `$ as $`1/x\to 0`$. We start with the normalized returns $`g`$ and proceed in the following steps:
Step I: We sort the normalized returns $`g`$ in descending order. The sorted returns are denoted $`g_k`$, $`k=1,\mathrm{},N`$, where $`g_k>g_{k+1}`$ and $`N`$ is the total number of events.
Step II: The cumulative distribution is then expressed in terms of the sorted returns as
$$P(g>g_k)=\frac{k}{N}.$$
(B2)
Figure 13 is a schematic of the cumulative distribution thus obtained. The inverse local slopes $`\zeta (g)`$ can be written as
$$\zeta (g_k)=-\frac{\mathrm{log}(g_{k+1}/g_k)}{\mathrm{log}(P(g_{k+1})/P(g_k))}.$$
(B3)
Using Eq. (B2), the above expression can be well approximated for large $`k`$ as
$$\zeta (g_k)\approx -k(\mathrm{log}(g_{k+1})-\mathrm{log}(g_k)),$$
(B4)
yielding estimates of the local inverse slopes.
Step III: We obtain the inverse local slopes through Eq. (B4). We can then compute an average of the inverse slopes over $`m`$ points,
$$\zeta \equiv \frac{1}{m}\underset{k=1}{\overset{m}{\sum }}\zeta (g_k),$$
(B5)
where the choice of the averaging window length $`m`$ varies depending on the number of events $`N`$ available.
Step IV: We plot the locally averaged inverse slopes $`\zeta `$ obtained in Step III as a function of the inverse normalized returns $`1/g`$ \[see, e.g., Fig. 5\]. We can then define two methods of estimating $`\alpha `$. In the first method, we extrapolate $`\zeta `$ as a function of $`1/g`$ to $`0`$, similarly to the method of successive slopes; this procedure yields the inverse asymptotic slope $`1/\alpha `$. In the second method, we average over all events for $`1/g`$ smaller than a given threshold, with the average yielding the inverse slope $`1/\alpha `$.
To test the Hill estimator, we analyze two surrogate data sets with known asymptotic behavior: (a) an independent random variable with $`P(g>x)=(1+x)^{-3}`$, and (b) an independent random variable with $`P(g>x)=\mathrm{exp}(-x)`$. As shown in Figs. 13(b,c), the method yields the correct results $`\alpha =3`$ and $`\alpha =\mathrm{\infty }`$, respectively.
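For readers who wish to reproduce this test, a compact sketch of Steps I–IV applied to surrogate data set (a) follows; the sample size and averaging windows are illustrative choices, not those used in our analysis:

```python
import numpy as np

# Steps I-IV on surrogate data set (a): P(g > x) = (1 + x)^(-3), alpha = 3.
rng = np.random.default_rng(1)
N = 1_000_000
g = (1.0 - rng.random(N)) ** (-1.0 / 3.0) - 1.0   # inverse-CDF sampling

g = np.sort(g)[::-1]                              # Step I: descending order
k = np.arange(1, N)                               # Step II: P(g > g_k) = k/N
zeta = -k * (np.log(g[k]) - np.log(g[k - 1]))     # Eq. (B4), local slopes

# Steps III-IV: average zeta over the m largest events and invert. Moving
# the window deeper into the tail (smaller m) approaches the true alpha = 3,
# which is the point of the extrapolation 1/g -> 0 described above.
for m in (100, 1000, 10_000):
    print(m, 1.0 / zeta[:m].mean())
```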
## 1 Introduction
Charm quarks produced in deep-inelastic scattering have been identified in sizable numbers by the H1 and ZEUS collaborations at HERA , and considerably more charm (and bottom) data are anticipated. At the theoretical level the reaction has already been studied extensively. In the framework where the heavy quark is not treated as a parton, leading order (LO) and next-to-leading order (NLO) calculations of the inclusive structure functions exist. Moreover, LO (AROMA, RAPGAP) and NLO (HVQDIS) Monte-Carlo programs, allowing a much larger class of observables to be compared with data, have been constructed in recent years. The NLO QCD description agrees quite well with the HERA data. Most of the recent theoretical attention for this reaction has been in the context of variable flavor number schemes , which we shall not address here. We shall review here two issues regarding heavy quark production in deep-inelastic scattering. In the next section we discuss two new features of HVQDIS, relevant to recent and future analyses; first, the inclusion of charmed-meson semi-leptonic decays, and second, a switch to describe (LO) $`D`$$`\overline{D}`$$`jet`$ final states. In the third section we review the methods and some key results of the next-to-leading logarithmic Sudakov resummation for DIS production of heavy quarks, and NNLO estimates derived from this resummation.
## 2 The NLO Monte-Carlo HVQDIS
The program HVQDIS is based on the next-to-leading order fully differential heavy quark contributions to the proton structure functions . The basic components (in terms of virtual-photon-proton scattering) are the 2–to–2 body squared matrix elements through one-loop order and tree level 2–to–3 body squared matrix elements, for both photon-gluon and photon-light-quark initiated subprocesses. It is therefore possible to study single- and semi-inclusive production at next-to-leading order, and three body final states at leading order. The goal of the next-to-leading order calculation is to organize the soft and collinear singularity cancellations without loss of information in terms of observables that can be predicted.
The subtraction method provides a mechanism for this cancellation. It allows one to isolate the soft and collinear singularities of the 2–to–3 body processes within the framework of dimensional regularization without calculating all the phase space integrals in a space-time dimension $`n\ne 4`$. Expressions for the three-body squared matrix elements in the limit where an emitted gluon is soft appear in a factorized form where poles $`ϵ=2-n/2`$ multiply leading order squared matrix elements. These soft singularities cancel upon addition of the interference of the leading order diagrams with the renormalized one-loop virtual diagrams. The remaining singularities are initial state collinear in origin wherein the three-body squared matrix elements appear in a factorized form, with poles in $`ϵ`$ multiplying splitting functions convolved with leading order squared matrix elements. These collinear singularities are removed through mass factorization.
The result of this calculation is an expression that is finite in four-dimensional space time. One can compute all phase space integrations using standard Monte-Carlo integration techniques. The final result is a program which returns parton kinematic configurations and their corresponding weights, accurate to $`𝒪(\alpha \alpha _s^2)`$. The user is free to histogram any set of infrared-safe observables and apply cuts, all in a single histogramming subroutine. Additionally, one may study heavy hadrons using the Peterson et al. model. Detailed physics results from this program are given in .
Below we discuss and give examples of two new options of HVQDIS version 1.3 (available from harris@hep.anl.gov): electrons from semileptonic decays of the charmed hadron, and a switch for retaining only three body final states.
### 2.1 Semileptonic decays
HVQDIS has been extended to include the electron from the semileptonic decay of the charmed hadron. In the rest frame of the decaying hadron, the electrons are distributed isotropically. Their momentum distribution is the weighted average of multiple decay modes of many different charmed hadrons, and has been deduced from RAPGAP. The parameterization is shown in fig. 1. The implementation used in HVQDIS1.3 (shown as points) was derived from the fit (dashed line) to the RAPGAP output (histogram). The overall normalization of the cross section is then fixed by the semileptonic branching ratio Br$`(c\to e)`$, which we take to be $`9.5\%`$.
The inclusive transverse momentum and pseudo-rapidity distributions of the semileptonic decay electrons in the lab frame in deep inelastic scattering of 820 GeV protons with 27.5 GeV electrons in the kinematic range $`0<y<0.7`$ and $`2<Q^2<100\mathrm{GeV}^2`$ are shown in fig. 2. We also show the corresponding distributions for the parent parton ($`c`$-quark) and hadron (charmed-meson). The curves are produced using the next-to-leading order Glück-Reya-Vogt 1994 (GRV94) parton distribution functions, a two-loop $`\alpha _s`$ with $`n_f=3`$ and $`\mathrm{\Lambda }_{\mathrm{QCD}}^{(n_f=3)}=248`$ MeV, and $`m_c=1.35`$ GeV. The distributions of the charmed partons and hadrons are highly correlated because of the simple Peterson et al. fragmentation model. The semileptonic decay electrons are very soft, taking only a small portion of the hadron $`p_t`$, and more central due to the isotropic nature of the decay.
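A minimal sketch of this decay treatment is given below. It is not the HVQDIS implementation: the fixed rest-frame momentum `p_star` stands in for the parameterized spectrum of fig. 1, and the hadron kinematics are invented for illustration:

```python
import numpy as np

# Isotropic rest-frame decay followed by a boost to the lab. The fixed
# rest-frame momentum p_star and the hadron kinematics below are invented
# stand-ins, not the parameterized spectrum of fig. 1.
rng = np.random.default_rng(2)

def decay_electron(p_hadron, m_hadron, p_star):
    """Lab 3-momentum of a (massless) decay electron."""
    cos_t = rng.uniform(-1.0, 1.0)              # isotropic rest-frame angles
    phi = rng.uniform(0.0, 2.0 * np.pi)
    sin_t = np.sqrt(1.0 - cos_t**2)
    pe = p_star * np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    Ee = p_star                                 # electron mass neglected
    E_h = np.sqrt(m_hadron**2 + p_hadron @ p_hadron)
    beta = p_hadron / E_h                       # hadron velocity in the lab
    b2 = beta @ beta
    if b2 == 0.0:
        return pe
    gamma = 1.0 / np.sqrt(1.0 - b2)
    return pe + ((gamma - 1.0) * (beta @ pe) / b2 + gamma * Ee) * beta

# Example: a D meson with p = 10 GeV/c along z and p_star = 0.5 GeV/c.
print(decay_electron(np.array([0.0, 0.0, 10.0]), 1.87, 0.5))
```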
### 2.2 Three body final states
For speed considerations it pays to add a switch to turn off all two body contributions (primarily the very slow virtual routines) when one is interested only in a manifestly three body observable. Such a switch has been added to HVQDIS1.3. We give here a sample of three body observables.
The final state of interest is $`D`$$`\overline{D}`$jet corresponding to the partonic states $`c`$$`\overline{c}`$$`g`$ and $`c`$$`\overline{c}`$$`q`$. We begin by requiring the $`D`$, $`\overline{D}`$ and jet to be above some minimum transverse momentum ($`P_t^D>1.2`$ GeV, $`P_t^{\overline{D}}>1.2`$ GeV, $`P_t^{jet}>6`$ GeV), to be central ($`|\eta _D|<1.5`$, $`|\eta _{\overline{D}}|<1.5`$, $`|\eta _{jet}|<2.4`$), and to be isolated ($`R_{D\overline{D}}>0.7`$, $`R_{Djet}>0.7`$, $`R_{\overline{D}jet}>0.7`$) in the lab frame. The cone size $`R_{ij}=\sqrt{(\eta _i-\eta _j)^2+(\varphi _i-\varphi _j)^2}`$.
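A minimal helper for such isolation requirements is sketched below; the only subtlety is wrapping the azimuthal difference into $`[-\pi ,\pi ]`$:

```python
import numpy as np

# Cone separation R_ij = sqrt((eta_i - eta_j)^2 + (phi_i - phi_j)^2),
# with the azimuthal difference wrapped into [-pi, pi].
def delta_r(eta1, phi1, eta2, phi2):
    dphi = (phi1 - phi2 + np.pi) % (2.0 * np.pi) - np.pi
    return np.hypot(eta1 - eta2, dphi)

# e.g. the isolation cut R > 0.7 between a D meson and the jet:
print(delta_r(0.3, 0.1, -0.2, 6.2) > 0.7)   # wrapping matters near 2*pi
```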
The total cross section for the deep inelastic production of $`D`$$`\overline{D}`$$`jet`$ as a function of their invariant mass $`M_{D\overline{D}j}`$ is shown in fig. 3 for $`0<y<0.7`$ and $`2<Q^2<100\mathrm{GeV}^2`$. A one loop $`\alpha _s`$ with $`n_f=3`$ and $`\mathrm{\Lambda }_{QCD}^{(n_f=3)}=232`$ MeV, leading order GRV94 parton distributions, and $`m_c=1.35`$ GeV were used. The normalization of this LO curve has a large uncertainty as demonstrated by the various scale choices $`\mu =\{\mu _0/2,\mu _0,2\mu _0\}`$, with $`\mu _0=\sqrt{Q^2+m_c^2+M_{D\overline{D}j}^2}`$. Also shown in the figure is a decomposition into the gluon and light-quark initiated subprocesses. The gluon initiated subprocess dominates due to the relatively large size of the gluon parton distribution function at small $`x`$. As another example, in the $`D\overline{D}j`$ center-of-mass frame we construct the Dalitz energy fractions $`x_i=2E_i/M_{D\overline{D}j}`$, ($`i=D,\overline{D}`$, or $`j`$) that specify how much available energy is shared between the $`D,\overline{D}`$, and jet. They satisfy $`x_D+x_{\overline{D}}+x_j=2`$. The normalized cross section differential in $`x_D`$ and $`x_j`$ is shown in fig. 4.
## 3 Soft-gluon resummation
As already remarked, existing NLO calculations for heavy quark electroproduction provide a solid theoretical perturbative QCD description for this reaction, and agree well with present data . At moderate $`Q^2`$ and $`x`$ values larger than $`0.01`$, the charm structure function $`F_2^{\mathrm{charm}}`$ is increasingly dominated by partonic processes near the charm quark pair production threshold. The large size of the gluon density $`f_g(x,\mu )`$ for small momentum fractions $`x`$ gives relatively large weight to such processes . Although the QCD corrections at presently accessible $`x`$ values are moderate (about 30-40%), with an increasing number of data to be gathered at higher $`x`$, it is worthwhile to have a closer look at such near-threshold subprocesses. In this kinematic region, the QCD corrections are dominated by large Sudakov double logarithms. Recently, these Sudakov logarithms have been resummed to all orders of perturbation theory, to next-to-leading logarithmic (NLL) accuracy, and, moreover, in single-particle inclusive (1PI) kinematics (analytical results for pair-inclusive kinematics are also given in ). Let us recall at this point the main results. First, the quality of the approximation for the next-to-leading logarithmic threshold resummation was found to be clearly superior to the leading logarithmic one. Furthermore, the resummation provided NNLO estimates, which were found to be sizable for $`x\gtrsim 0.05`$.
Below we give a synopsis of the analysis presented in . We study electron proton scattering with the exchange of a single virtual photon, $`Q^2=-q^2`$, and a detected heavy quark (we concentrate on the charm quark case here) in the final state, i.e. the subprocess
$`\gamma (q)+P(p)`$ $`\to `$ $`\mathrm{Q}(p_1)+X[\overline{\mathrm{Q}}],`$ (1)
where $`X`$ denotes any additional hadrons, including the heavy anti-quark, in the final state and $`p_1^2=m^2`$. The Mandelstam invariants, $`S^{\prime }=(p+q)^2+Q^2,T_1=(p-p_1)^2-m^2`$ and $`U_1=(q-p_1)^2-m^2`$ can be used to define $`S_4=S^{\prime }+T_1+U_1`$, which vanishes at the hadronic threshold. The double differential heavy quark structure function $`dF_2`$ associated to (1) may be written as
$$\frac{d^2F_{2,P}(x,S_4,T_1,U_1,Q^2,m^2)}{dT_1dU_1}=\frac{1}{S^{\prime 2}}\underset{i=q,\overline{q},g}{\sum }\underset{z^{\ast }}{\overset{1}{\int }}\frac{dz}{z}f_{i/P}(z,\mu ^2)\omega _{2,i}(\frac{x}{z},\frac{s_4}{\mu ^2},\frac{t_1}{\mu ^2},\frac{u_1}{\mu ^2},\frac{Q^2}{\mu ^2},\frac{m^2}{\mu ^2},\alpha _s(\mu )),$$
(2)
where $`z^{\ast }=-U_1/(S^{\prime }+T_1)`$. The $`f_{i/P}`$ denote parton distributions in the proton at momentum fraction $`z`$ and $`\overline{\mathrm{MS}}`$-mass factorization scale $`\mu `$. The functions $`\omega _{2,i}`$ describe the underlying hard parton scattering processes and depend on the partonic Mandelstam variables $`s^{\prime },t_1,u_1`$ and $`s_4`$, which are derived from (1) by replacing the proton $`P`$ by a parton of momentum $`k=zp`$. At $`n`$-th order in perturbation theory, the gluonic hard part $`\omega _{2,g}`$ in eq.(2) typically depends on singular distributions $`\alpha _s^n[\mathrm{ln}^{2n-1-k}(s_4/m^2)/s_4]_+,k=0,1,\mathrm{\dots }`$, that must be resummed. Contributions from light initial-quark states are neglected, as they are about 5% at NLO.
The resummation of threshold logarithms rests upon the factorization of the kinematics and dynamics of the relevant degrees of freedom near threshold . The dynamical factors involved can each be assigned a kinematic weight $`w_i`$ that is defined to vanish at threshold. For $`dF_{2,P}`$ in eq.(2), the factorization of the kinematics implies that these weights sum to the overall inelasticity near threshold: $`S_4/m^2\simeq w_1(-u_1)/m^2+w_s`$, with $`w_1=1-z`$ and $`w_s=s_4/m^2`$. Correspondingly, the infrared regulated partonic structure function $`dF_{2,g}`$ factorizes into functions that individually organize contributions from these near-threshold degrees of freedom. Thus, there is here a function $`\psi _{g/g}`$ that sums the singular distributions from incoming collinear gluons, and a soft function $`S`$ that organizes those due to soft gluons not collinear to the incoming parton. Finally, there is a hard function $`H_{2,g}`$ incorporating only regular short-distance corrections. Replacing the proton in eq.(2) by a gluon, and passing to Laplace-moment space, $`\stackrel{~}{f}(N)=\int _0^{\mathrm{\infty }}𝑑w\mathrm{exp}[-Nw]f(w)`$, this gives
$`\stackrel{~}{\omega }_{2,g}(N,{\displaystyle \frac{t_1}{\mu ^2}},{\displaystyle \frac{u_1}{\mu ^2}},{\displaystyle \frac{Q^2}{\mu ^2}},{\displaystyle \frac{m^2}{\mu ^2}})`$ $`=`$ $`H_{2,g}({\displaystyle \frac{t_1}{\mu ^2}},{\displaystyle \frac{u_1}{\mu ^2}},{\displaystyle \frac{Q^2}{\mu ^2}},{\displaystyle \frac{m^2}{\mu ^2}})\left[{\displaystyle \frac{\stackrel{~}{\psi }_{g/g}(N_u,p\zeta /\mu )}{\stackrel{~}{\varphi }_{g/g}(N_u,\mu )}}\right]\stackrel{~}{S}({\displaystyle \frac{m}{N\mu }},\zeta ),`$ (3)
where $`\varphi _{g/g}`$ is the usual $`\overline{\mathrm{MS}}`$-distribution from mass factorization and $`N_u=N(-u_1)/m^2`$. In moment space, the Sudakov logarithms appear as factors $`\alpha _s^n\mathrm{ln}^{2n-i}N`$, with $`i=0,1`$ for NLL accuracy. The $`N`$-dependence in eq.(3) exponentiates for each function individually. All leading logarithms (LL) are exclusively contained in $`\stackrel{~}{\psi }_{g/g}`$, which is a gluon distribution at fixed fraction of $`p\zeta `$ and can be defined as an operator matrix element. It depends on a time-like vector $`\zeta =p_2/m`$ ($`p_2`$ is the heavy antiquark momentum). Its collinear poles are canceled by $`\varphi _{g/g}`$. The threshold logarithms in $`\stackrel{~}{\psi }_{g/g}`$ are resummed in analogy to the Drell-Yan process , while all scale dependence of $`\stackrel{~}{\psi }_{g/g}`$ and $`\stackrel{~}{\varphi }_{g/g}`$ is governed by renormalization group equations (RGE) with anomalous dimensions $`\gamma _\psi =2\gamma _g`$ and $`\gamma _{g/g}`$ .
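As a purely numerical aside (independent of any result quoted here), the sketch below illustrates how a singular $`1/w`$ weight, regulated by the moment factor $`(1-\mathrm{e}^{-Nw})`$, generates a $`\mathrm{ln}N`$ in Laplace-moment space; the grid is an arbitrary choice:

```python
import numpy as np

# The moment-space origin of the threshold logarithms: the regulated
# integral of a 1/w singularity grows logarithmically with the moment N,
#   Integral_0^1 dw (1 - exp(-N w)) / w = ln N + gamma_E + O(exp(-N)).
euler_gamma = 0.5772156649015329

def block(N, n=200_000):
    w = (np.arange(n) + 0.5) / n                 # midpoint rule on (0, 1)
    return np.mean((1.0 - np.exp(-N * w)) / w)

for N in (10, 100, 1000):
    print(N, block(N), np.log(N) + euler_gamma)  # agreement improves with N
```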
The soft function $`S`$ requires renormalization, since it is defined as a composite operator that connects Wilson lines in the direction of the scattering partons . Its RGE, $`\mu (d/d\mu )\mathrm{ln}\stackrel{~}{S}(N)=2\mathrm{Re}\mathrm{\Gamma }_S`$, resums all threshold logarithms in $`\stackrel{~}{S}`$. Its gauge dependence cancels precisely that of $`\psi _{g/g}`$. The soft anomalous dimension $`\mathrm{\Gamma }_S`$ is to order $`\alpha _s`$
$`\mathrm{\Gamma }_S(\alpha _s)`$ $`=`$ $`{\displaystyle \frac{\alpha _s}{\pi }}\{({\displaystyle \frac{C_A}{2}}-C_F)(L_\beta +1)-{\displaystyle \frac{C_A}{2}}(\mathrm{ln}\left({\displaystyle \frac{(p\zeta )^2}{m^2}}\right)+\mathrm{ln}{\displaystyle \frac{4m^4}{t_1u_1}})\},`$ (4)
with $`\beta =\sqrt{1-4m^2/s}`$ and $`L_\beta =(1-2m^2/s)\{\mathrm{ln}(1-\beta )/(1+\beta )+\mathrm{i}\pi \}/\beta `$.
The final result for the hard scattering function $`\stackrel{~}{\omega }_{2,g}`$ in moment space resums all large logarithms in single-particle inclusive kinematics up to NLL accuracy. Combining the resummed $`\stackrel{~}{\psi }_{g/g}`$ with the integrated RGE for $`\stackrel{~}{S}`$, we obtain for $`\stackrel{~}{\omega }_{2,g}`$
$`\stackrel{~}{\omega }_{2,g}(N,{\displaystyle \frac{t_1}{\mu ^2}},{\displaystyle \frac{u_1}{\mu ^2}},{\displaystyle \frac{Q^2}{\mu ^2}},{\displaystyle \frac{m^2}{\mu ^2}})`$ $`=`$
$`H_{2,g}({\displaystyle \frac{t_1}{m^2}},{\displaystyle \frac{u_1}{m^2}},{\displaystyle \frac{Q^2}{m^2}},1)\stackrel{~}{S}(1,\alpha _s({\displaystyle \frac{m}{N}}))\mathrm{exp}\left\{2{\displaystyle \underset{\mu }{\overset{m}{\int }}}{\displaystyle \frac{d\mu ^{\prime }}{\mu ^{\prime }}}\gamma _g\left(\alpha _s(\mu ^{\prime })\right)\right\}`$
$`\times \mathrm{exp}\left\{-{\displaystyle \underset{0}{\overset{\mathrm{\infty }}{\int }}}{\displaystyle \frac{dw}{w}}(1-\mathrm{e}^{-N_uw})\left[{\displaystyle \underset{w^2}{\overset{1}{\int }}}{\displaystyle \frac{d\lambda }{\lambda }}A_{(g)}(\alpha _s(\sqrt{\lambda }m))+{\displaystyle \frac{1}{2}}\nu _{(g)}(\alpha _s(wm))\right]\right\}`$
$`\times \mathrm{exp}\left\{{\displaystyle \underset{m}{\overset{m/N}{\int }}}{\displaystyle \frac{d\mu ^{\prime }}{\mu ^{\prime }}}2\mathrm{Re}\mathrm{\Gamma }_S\left(\alpha _s(\mu ^{\prime })\right)\right\}\mathrm{exp}\left\{-2{\displaystyle \underset{\mu }{\overset{2p\zeta }{\int }}}{\displaystyle \frac{d\mu ^{\prime }}{\mu ^{\prime }}}\left(\gamma _g\left(\alpha _s(\mu ^{\prime })\right)-\gamma _{g/g}(N_u,\alpha _s(\mu ^{\prime }))\right)\right\}.`$
The second exponent gives the leading $`N`$-dependence of the ratio $`\stackrel{~}{\psi }_{g/g}/\stackrel{~}{\varphi }_{g/g}`$ with $`\nu _{(g)}(\alpha _s)=2C_A\alpha _s/\pi `$, $`A_{(g)}(\alpha _s)=C_A(\alpha _s/\pi )+(C_AK/2)(\alpha _s/\pi )^2`$ and $`K=C_A(67/18-\pi ^2/6)-5/9n_f`$. For NLL Sudakov resummation, the product $`H_{2,g}S`$ on the first line of eq. (3) is determined from matching to the Born cross section at the scale $`\mu =m`$.
The resummed result for $`\stackrel{~}{\omega }_{2,g}`$ in eq. (3) may be used as a generating functional for fixed order approximate perturbation theory by re-expanding $`\stackrel{~}{\omega }_{2,g}`$ to NLO and NNLO and inverting the Laplace transform. After insertion of eq. (3) into eq. (2) and integration over $`T_1,U_1`$, we may expand the structure function as
$`F_2^{\mathrm{charm}}(x,Q^2,m^2)`$ $`=`$ $`{\displaystyle \frac{\alpha _s(\mu )e_\mathrm{c}^2Q^2}{4\pi ^2m^2}}{\displaystyle \underset{ax}{\overset{1}{\int }}}𝑑zf_{g/P}(z,\mu ^2){\displaystyle \underset{k=0}{\overset{\mathrm{\infty }}{\sum }}}(4\pi \alpha _s(\mu ))^k{\displaystyle \underset{l=0}{\overset{k}{\sum }}}c_{2,g}^{(k,l)}(\eta ,\xi )\mathrm{ln}^l{\displaystyle \frac{\mu ^2}{m^2}},`$ (6)
where $`a=(Q^2+4m^2)/Q^2`$ and $`e_c=2/3`$.
The quality of the NLL approximation eq.(3) can then be investigated by comparing exact, LL and NLL approximation to the gluon coefficient functions $`c_{2,g}^{(k,l)}`$, which is done in fig. 5(a). The functions $`c_{2,g}^{(k,l)}`$ depend on the scaling variables
$`\eta `$ $`=`$ $`{\displaystyle \frac{s}{4m^2}}-1,\xi ={\displaystyle \frac{Q^2}{m^2}},`$ (7)
where $`\eta `$ is a direct measure of the distance to the partonic threshold.
Fig. 5(a) reveals that, although at one loop the LL accuracy provides a good approximation for very small $`\eta `$, the NLL approximation is excellent over a much wider range in $`\eta `$, up to values of about 10 (the same actually holds true for the $`c_{2,g}^{(k,l)},l>0,k\le 2`$). We also show $`c_{2,g}^{(2,0)}`$, which has more structure than the $`c_{2,g}^{(1,0)}`$ curves.
To address the sensitivity of the integrated charm structure function to threshold processes, we perform the integral over $`z`$ in eq. (2) only up to a value $`z_{\mathrm{max}}`$, and plot the integral then as a function of $`z_{\mathrm{max}}`$. In this way we can see where the integral eq. (2) acquires most of its value. The physical structure function corresponds to $`z_{\mathrm{max}}=1`$. Fig. 5(b) demonstrates that for $`x=0.01`$, $`F_2^{\mathrm{charm}}`$ is mostly determined by partonic processes close to threshold.
In fig. 6a we display, at a fixed value of the factorization scale $`\mu =m`$ and over the range $`0.003\le x\le 0.3`$, the effect of the NNLO corrections. We plot the K-factors $`F_{2(NNLO)}^{\mathrm{charm}}/F_{2(NLO)}^{\mathrm{charm}}`$ and, for comparison, also $`F_{2(NLO)}^{\mathrm{charm}}/F_{2(LO)}^{\mathrm{charm}}`$ (for $`F_{2(LO)}^{\mathrm{charm}}`$ we used a two-loop $`\alpha _s`$ and NLO gluon density). At NNLO we have taken the improved NLL approximation to $`F_2^{\mathrm{charm}}`$ (the exact NLO result plus the NLL approximate NNLO result). We see that particularly for smaller $`x`$, the size of the NNLO corrections is negligible, the K-factor being close to one, whereas for larger $`x`$, their overall size is still quite big, almost a factor of 2 at $`x=0.1`$.
Finally, in fig. 6b we show the NLO results as a function of $`p_T`$ for $`x=0.01`$, $`m=1.6\mathrm{GeV}`$, $`Q^2=10\mathrm{GeV}^2`$ and $`\mu =\sqrt{Q^2+4(m^2+p_T^2)}`$. At NLO, we compare our LL and NLL approximate results with the exact results of the second Ref. in . We see that the exact curves are reproduced well both in shape and magnitude by our NLL approximations, whereas the curves for LL accuracy systematically underestimate the true result. We also display the improved NLL approximation to $`dF_2^{\mathrm{charm}}/dp_T`$ at NNLO, which contains sizeable contributions: the value at the maximum increases by $`40\%`$–$`50\%`$.
## 4 Conclusions
Driven by the ever increasing variety and quantity of deep-inelastic charm production data from the H1 and ZEUS experiments at HERA, we have updated and reviewed two important tools: soft gluon threshold resummation and the next-to-leading order Monte-Carlo HVQDIS. The addition of semileptonic decay electron information and an option for only three body final states to the Monte-Carlo will enhance future physics analysis options. Soft gluon threshold resummation, on the other hand, teaches us about the size of the terms neglected in fixed order calculations.
### Acknowledgments
The work of B.W.H. was supported by the U. S. Department of Energy, High Energy Physics Division, Contract No. W-31-109-Eng-38. The work of J.S. was supported in part by the U. S. National Science Foundation grant PHY-9722101. The work of S.M. and E.L. is part of the research program of the Foundation for Fundamental Research of Matter (FOM) and the National Organization for Scientific Research (NWO).
# Quasicrystals in a Monodisperse System
## I Introduction
Stable quasicrystalline phases are typically found in binary mixtures , where the various arrangements of the two components contribute to the degeneracy of the local environments , allowing a quasicrystalline phase to be entropy stabilized . With one notable exception , previous studies did not support the existence of a stable quasicrystalline phase in a monodisperse system interacting with a simple potential .
We study a simple model that allows us to estimate the crystal and quasicrystal entropies and thereby study the Helmholtz potentials of the crystals and quasicrystal. The ground state of this system is a periodic crystal, yet we explore the possibility that the quasicrystalline configuration is the equilibrium state in a certain temperature regime. Although quasicrystals do not have long range translational order, they do have recurring local environments that, in our model, resemble the basic cells of the stable crystalline phases. From the entropies of the stable crystalline phases and by estimating the configurational entropy of the quasicrystal, we infer that the quasicrystal may be an equilibrium state. We observe sharpening of five fold diffraction peaks when the starting amorphous phase is annealed. In two dimensions, five fold diffraction peaks pertain to crystallographically disallowed point groups which characterize quasicrystals .
## II MD Methods
To study quasicrystalline stability in a monodisperse system, we perform molecular dynamics (MD) simulations of a two-dimensional model of hard spheres interacting with an attractive square-well (SW) potential \[Fig. 1\]. The simplicity of this SW potential allows us to study the fundamental characteristics of the system. By tuning the width of the SW potential, we can control the local geometric configurations formed by the particles. The structures of the crystalline and quasicrystalline phases can thus be clearly defined and analyzed.
We perform MD simulations in the NVT ensemble, using a standard collision event list algorithm to evolve the system, while we use a method similar to the Berendsen method to achieve the desired temperature . The depth of the potential well is $`ϵ=1.0`$. Energies are measured in units of $`ϵ`$, temperature is measured in units of energy divided by the Boltzmann constant, $`ϵ/k_B`$, and the mass of the particle is $`m=1`$. We choose the value of the hard core distance to be $`a=10`$, and the ratio of the attractive distance $`b`$ to the hard core distance $`a`$, to be $`b/a=\sqrt{3}`$. Since the diagonal distance between two corners of a square is $`\sqrt{2}`$ times the length of one side, choosing $`b/a=\sqrt{3}`$ favors the formation of a square crystal lattice where each particle interacts with 8 neighbors \[Fig. 2a\]. This constraint inhibits the formation of a triangular crystal, which would form at low temperatures if $`b/a>\sqrt{3}`$ or at high densities.
## III Crystal and Amorphous Phases
Studying the behavior of the system at low temperatures we observe the formation of local structures similar to that shown in Fig. 2. These structures constitute local environments that can reproduce crystallographically allowed symmetry if translationally ordered. First, we consider the stable periodic crystal phases produced by translationally ordering each of the configurations in Fig. 2 and calculate the energies of these crystal structures at $`T=0`$. In our system, the two allowed local configurations are the 4-particle square and the 5-particle pentagon (indicated by the symbol “P” in Fig. 2b,c). Particles form these two geometries because the nearest neighbor diagonal and adjacent distance between particles in these configurations is less than $`b/a=\sqrt{3}`$, the SW width. Four particle squares make up the square crystal; since each particle has $`8`$ neighbors, at $`T=0`$, the potential energy per particle is $`U_{sq}=-4.0`$. Pentagons do not tile the plane; however, the formation of two kinds of crystals based on the local five particle pentagon is possible. In the type I pentagonal crystal, each crystalline cell consists of $`5`$ particles, one of which has $`8`$ neighbors and four of which have $`9`$ neighbors; hence, $`U_{pI}=-4\frac{2}{5}`$ \[Fig. 2b\]. In the type II pentagonal crystal, each crystalline cell consists of $`6`$ particles, two of which have $`8`$ neighbors and $`4`$ of which have $`9`$ neighbors; hence, $`U_{pII}=-4\frac{1}{3}`$ \[Fig. 2c\]. Since $`U_{pI}<U_{pII}<U_{sq}`$, at our chosen density and low enough temperatures, the type I pentagonal crystal should be the stable phase at $`T=0`$ \[Fig. 3\].
Next, we investigate the stability of the three crystalline phases at $`T>0`$ by estimating the Helmholtz potential per particle $`A=U-TS`$ in the square crystal and in the pentagonal crystals of type I and type II. Here $`S`$ is the entropy. Since our simulations are performed at constant density, we must use the Helmholtz potential instead of the Gibbs potential. We study the system at dimensionless number density $`\rho =a^2N/V\approx 0.857`$. We have simulated a square crystal with N=961, a pentagonal crystal type I with N=1040, and a pentagonal crystal type II with N=792, all at the same $`\rho `$. We checked that at low temperatures, $`T<0.1`$, the potential energy $`U(T)`$ is temperature independent, and has the same value as the potential energy of the ideal crystal at $`T=0`$. Hence, we approximate $`U(T)`$ at higher $`T`$ by $`U(0)`$.
In order to plot the behavior of the Helmholtz potentials of the three crystals for $`T>0`$, we find the entropic contributions $`S`$, by estimating the entropy per particle for each of the three crystal types. We use the probability density $`p(x,y)`$ to find a particle at position $`x,y`$, where the average is taken over every particle in the crystalline cell:
$$S=-\left\langle \int p(x,y)\mathrm{ln}p(x,y)𝑑x𝑑y\right\rangle _{cell}.$$
(1)
We estimate $`p(x,y)`$ by the fraction of the total time $`t`$ spent by a particle in a discretized area, $`\mathrm{\Delta }x\mathrm{\Delta }y`$, at a low enough temperature that the potential energy fluctuations of the crystalline structure are negligible. The values of the entropies for the three crystals are given in Table I.
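The estimate can be summarized in a few lines of code; the sketch below is an illustration on synthetic data (a particle fluctuating harmonically about a lattice site), not our MD analysis, and the grid and box are arbitrary choices:

```python
import numpy as np

# Sketch of the entropy estimate of Eq. (1): build the occupancy
# probability p(x, y) from a particle trajectory on a grid covering the
# crystalline cell, then evaluate -sum p ln(p / dA), the discrete
# approximation to -Integral p ln p dx dy.
def entropy_from_trajectory(xy, bins=100, box=((0.0, 1.0), (0.0, 1.0))):
    hist, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=bins, range=box)
    dA = ((box[0][1] - box[0][0]) / bins) * ((box[1][1] - box[1][0]) / bins)
    p = hist / hist.sum()
    p = p[p > 0.0]
    return -np.sum(p * np.log(p / dA))

# Test on Gaussian fluctuations about a site: the exact differential
# entropy of this 2D Gaussian is ln(2 pi e sigma^2).
rng = np.random.default_rng(3)
sigma = 0.05
xy = 0.5 + sigma * rng.standard_normal((200_000, 2))
print(entropy_from_trajectory(xy), np.log(2 * np.pi * np.e * sigma**2))
```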
Our estimates for the temperature dependence of the Helmholtz potential for the three types of crystals are given in Fig. 3. The condition for stability of the pentagonal crystals is that their Helmholtz potentials, $`A_{pI}`$ and $`A_{pII}`$, are lower than the Helmholtz potential of the square crystal, $`A_{sq}`$. In accord with this condition, the square crystal is stable at temperatures above $`T=0.203`$, the type II pentagonal crystal is stable between $`T=0.195`$ and $`T=0.203`$ and the type I pentagonal crystal is stable below $`T=0.195`$.
While studying the interesting region around $`T\approx 0.2`$ (see Fig. 3), we observe the formation of the quasicrystal. We choose to investigate, using MD simulations, our system at $`T\approx 0.2`$ because this is the temperature regime where the three crystals have similar values of Helmholtz potential. Cooling the fluid phase, we find the formation of the square crystal below $`T\approx 0.5`$. However, when further cooled into the temperature regime where the Helmholtz potentials of the two pentagonal crystals are lower than the Helmholtz potential of the square crystal, the system does not form pentagonal crystal I or pentagonal crystal II (within our simulation times), but remains as the square crystal. Hence, we use a different approach to try to form the pentagonal crystals: we heat an amorphous phase. We first form the amorphous phase by quenching the system from high to very low temperatures $`T\approx 0.1`$. To do this, we study a system of $`N=961`$ particles at $`\rho =0.857`$ which is initially in the fluid phase at high temperature, $`T=10`$. We quench this system to $`T=0.1`$ and thermalize for $`10^7`$ time units. Time constraints prevent us from studying systems with more than 961 particles. Long thermalization times are required to stabilize thermodynamic observables like energy and pressure.
The amorphous phase is a homogeneous mixture of pentagons and squares \[Fig. 4a\]. The lack of long range structural order in the amorphous phase is evident from the homogeneity of the circles in the isointensity plot \[Fig. 4b\]. When heating the amorphous phase to temperatures above $`T\approx 0.15`$, we find that diffusion becomes sufficient for local rearrangement to occur, and the pentagons begin to coalesce. Instead of forming type I or II pentagonal crystals, the pentagons begin to form rows \[Fig. 4c\] that bend at angles which are multiples of 36°. The angle in the bending of the rows gives rise to the five-fold orientational symmetry, which corresponds to the ten easily observed peaks in the isointensity plot \[Fig. 4d\]. These 10 peaks are characteristic of the quasicrystal phase , as they are arranged with disallowed fifth order point group symmetry . The configuration that we obtain has defects, mainly patches of square crystal, which cause the discontinuity in the rows and lead to the broadening of the diffraction peaks. For comparison, we present in Fig. 5 the isointensity plots of the simulated square and pentagonal crystals. The diffraction patterns illustrate the symmetry of the original crystal system. The four equal sides of the square crystal unit cell \[Fig. 2a\] are clear in the symmetry of the isointensity plot Fig. 5. In the isointensity plot of pentagonal crystal I \[Fig. 5b\], the central region, which corresponds to the long range order, shows no hints of anything but well defined centered-rectangular symmetry \[Fig. 2b\]. The isointensity plot of pentagonal crystal II has mainly a rectangular symmetry that matches the rectangular symmetry of the unit cells \[Fig. 2c\]. Although the two pentagonal crystals are formed from ordered pentagons, their long range symmetries are four sided. Their corresponding isointensity plots illustrate these four fold symmetries which are distinctly different from the five fold quasicrystal isointensity plot.
## IV Quasicrystal
### A Formation
Since the phase transition between the two pentagonal crystals occurs at $`T\approx 0.2`$, we choose this temperature as the one to investigate for quasicrystal formation. After the amorphous phase is quenched to T=0.1, we anneal the system at $`T=0.205`$ for $`2\times 10^7`$ time units, and calculate the diffusion coefficient $`D`$, pressure $`P`$ and potential energy $`U`$. We calculate $`D`$ using the Einstein relation $`D=\frac{1}{2d}lim_{t\to \mathrm{\infty }}\frac{<\mathrm{\Delta }r(t)^2>}{t}`$, where $`d`$ is the system dimension. After a short initial period of increase, we observe that $`D`$ and $`U`$ decrease with time and reach plateaus \[Fig. 6\]. The diffusion coefficient approaches zero, which is consistent with the possible formation of a quasicrystal phase. The isointensity peaks also sharpen with the duration of annealing. Due to MD time constraints, we are not sure that we reach the potential energy of a perfect quasicrystal, which is expected to be comparable to the energies, $`U_{pI}=-4\frac{2}{5}`$ and $`U_{pII}=-4\frac{1}{3}`$, of the pentagonal crystals. The lowest potential energy reached is $`U_{qc}=-4.25`$.
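A sketch of this diffusion estimate, tested on a free random walk where the answer is known exactly, might look as follows (array shapes and parameters are illustrative):

```python
import numpy as np

# Sketch: estimate D from the Einstein relation in d = 2 via a late-time
# linear fit to the mean-squared displacement, MSD(t) = 2 d D t.
def diffusion_coefficient(positions, dt, d=2):
    """positions: (T, N, d) array of unwrapped particle coordinates."""
    disp = positions - positions[0]
    msd = (disp**2).sum(axis=2).mean(axis=1)        # average over particles
    t = np.arange(len(msd)) * dt
    half = len(t) // 2
    slope = np.polyfit(t[half:], msd[half:], 1)[0]  # fit the late-time part
    return slope / (2.0 * d)

# Check on a free 2D random walk, for which D = sigma^2 / (2 dt) per axis.
rng = np.random.default_rng(4)
steps = 0.1 * rng.standard_normal((1000, 50, 2))
pos = np.cumsum(steps, axis=0)
print(diffusion_coefficient(pos, dt=1.0))           # ~0.1**2 / 2 = 0.005
```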
We observe the spontaneous formation of the quasicrystal phase in the range of temperatures between $`T=0.190`$ and $`T=0.205`$. As we heat either the amorphous phase or the quasicrystal above $`T=0.21`$, the square crystal forms, consistent with the Helmholtz potential estimations of Fig. 3.
Next we address the question of whether the quasicrystal phase is stable, by comparing the values of the Helmholtz potential for the three crystal types. As can be seen \[Fig. 4c,d\], the structure of the quasicrystal arises from the bending rows of pentagons which locally resemble the pentagonal crystals of either type I or II. We assume that local arrangements of particles corresponding to a square crystal are defects that would be absent in the perfect quasicrystal. If we assume that the local arrangement of the quasicrystal is similar to a combination of the local arrangements in the pentagonal crystal I and the pentagonal crystal II, we can approximate the Helmholtz potential of the quasicrystal by the average Helmholtz potential of the two pentagonal crystals. Because the quasicrystals have a positive entropy contribution to the total entropy due to their degeneracy, we add an additional term $`-TS_c`$ to the original estimate of the Helmholtz potential energy. Here $`S_c`$ is the entropy due to the possible configurations of the quasicrystal.
### B Entropy
We estimate $`S_c`$ as the logarithm of the number of configurations formed by $`n`$ pentagons in the quasicrystal. A single pentagon can be oriented in two possible ways when attached side by side to an existing row of pentagons. Neglecting the interaction between adjacent rows, we can estimate the upper bound for the number of configurations as $`2^n`$, where $`n`$ is the total number of pentagons in the quasicrystal. Note that at point A on Fig. 3, the Helmholtz potentials of both pentagonal crystals coincide, so an additional $`-TS_c`$ term should stabilize the quasicrystal in the vicinity of point A.
To better estimate $`S_c`$, we notice that the bending rows of pentagons forming the quasicrystal resemble a compact self-avoiding random walk on the hexagonal lattice. The number of such walks grows as $`Z^n`$ where $`Z\approx 1.3`$ and $`n`$ is the number of steps . Since the formation of one pentagon in the midst of a perfect square crystal lowers the energy of the system by $`\mathrm{\Delta }U=-1`$, we estimate $`n`$ to be $`-(U_{qc}-U_{sq})N`$. Assuming that the ground state energy of the quasicrystal is between $`U_{pI}`$ and $`U_{pII}`$, the number of pentagons in the quasicrystal should not be smaller than the number of pentagons in the crystal of type II (which is the pentagonal crystal with the lesser number of pentagons and has $`n=\frac{1}{3}N`$). We estimate the entropy of configuration per particle to be $`S_c\approx \mathrm{ln}(Z^n)/N=\frac{1}{3}\mathrm{ln}(1.3)=0.087`$. Thus, the quasicrystal should be more stable than the pentagonal crystals between $`T=0.16`$ and $`T=0.23`$, where the gap between the Helmholtz potential of the pentagonal crystals is smaller than the configuration term $`TS_c`$ which ranges from $`0.014`$ to $`0.020`$ in the interval where $`T`$ increases from $`0.16`$ to $`0.23`$. Since the $`-TS_c`$ term lowers the Helmholtz potential of the obtained quasicrystal configuration below the Helmholtz potentials of the two pentagonal crystals, it is likely that the obtained state with five-fold rotational symmetry is not the coexistence of type I and II pentagonal crystals, but is a stable quasicrystalline phase. A more rigorous investigation of this problem would either require the construction of a perfect Penrose tiling or of a random tiling involving the local structures of crystals type I and II.
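The arithmetic behind these numbers is easy to verify:

```python
import numpy as np

# Configurational entropy per particle and the resulting T*S_c window.
Z, n_over_N = 1.3, 1.0 / 3.0
S_c = n_over_N * np.log(Z)
print(S_c)                       # ~0.087 per particle
print(0.16 * S_c, 0.23 * S_c)    # T*S_c spans ~0.014 to ~0.020
```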
## V Discussion
To summarize, perfect pentagonal crystals of type I and II do not form spontaneously during the time scales of our study. Instead, the quasicrystal, having long-range, five-fold orientational order with no translational order, forms from the coalescence of pentagons present in the starting amorphous phase. The starting amorphous configuration must initially be quenched at a low enough temperature in order to prevent crystallization to the square phase. Moreover, the amorphous phase must be carefully thermalized at the quench temperature, as we have observed that, upon heating a poorly equilibrated amorphous phase with a higher concentration of squares, the system phase separates into regions of pentagons and squares. If the starting amorphous phase does not have a sufficient concentration of pentagons, the quasicrystal will not form: large regions of square crystal will inhibit the long range order of pentagons and thus not give rise to the 10 diffraction peaks in the isointensity plot. It is interesting to notice that the bending rows observed in our quasicrystal could resemble the stripe structure of a spinodal decomposition. In the case of spinodal decomposition, however, the diffraction pattern would be similar to that of an amorphous structure.
Before concluding, we note that Jagla, using Monte Carlo simulations, recently reported the existence of quasicrystals in a two-dimensional, monodisperse system of hard spheres interacting with a purely repulsive potential . The quasicrystal we observe has a different structure from that modeled by Jagla: our quasicrystal is not a ground state structure and forms only at nonzero temperature. Also, formation of quasicrystals in monodisperse systems has been observed using complex radially symmetric potentials both in two dimensions and three dimensions . To the best of our knowledge, the quasicrystal found in our simulations has a structure different from those previously studied.
We are very grateful to the late Shlomo Alexander, who pointed out the possibility of the formation of quasicrystals in the square-well potential, and we dedicate this work to his memory. We thank R. Hurt and his colleagues at Brown University for encouraging this project in its early stages, L. A. N. Amaral, C. A. Angell, E. Jagla, J. E. McGarrah, C. J. Roberts, R. Sadr, F. Sciortino, F. W. Starr, A. Umansky, Masako Yamada for helpful interactions, and the referee for constructive criticism. We also thank DOE and NSF for financial support.
# The $`b\overline{b}`$ Production Cross Section and Angular Correlations in $`p\overline{p}`$ Collisions at $`\sqrt{s}=1.8`$ TeV
## Abstract
We present measurements of the $`b\overline{b}`$ production cross section and angular correlations using the DØ detector at the Fermilab Tevatron $`p\overline{p}`$ Collider operating at $`\sqrt{s}=1.8`$ TeV. The $`b`$ quark production cross section for $`|y^b|<1.0`$ and $`p_T^b>6`$ GeV/$`c`$ is extracted from single muon and dimuon data samples. The results agree in shape with the next-to-leading order QCD calculation of heavy flavor production but are greater than the central values of these predictions. The angular correlations between $`b`$ and $`\overline{b}`$ quarks, measured from the azimuthal opening angle between their decay muons, also agree in shape with the next-to-leading order QCD prediction.
PACS numbers: 13.65.Fy, 12.38.Qk, 13.85.Ni, 13.85.Qk, 14.65.Hq
Measurements of the $`𝒃`$ quark production cross section and $`𝒃\overline{𝒃}`$ correlations in $`𝒑\overline{𝒑}`$ collisions provide an important test of perturbative quantum chromodynamics (QCD) at next-to-leading order (NLO). The measured $`𝒃`$ quark production cross section at $`\sqrt{𝒔}`$ = 1.8 TeV is systematically larger than the central values of the NLO QCD predictions.
Measurements of $`𝒃\overline{𝒃}`$ correlations such as the azimuthal opening angle between $`𝒃`$ and $`\overline{𝒃}`$ quarks allow additional details of $`𝒃`$ quark production to be tested since these quantities are sensitive to the relative contributions of different production mechanisms to the total cross section. Two measurements of $`𝒃\overline{𝒃}`$ angular correlations using the azimuthal opening angle of muons from the heavy quark decays, one at $`\sqrt{𝒔}`$ = 1.8 TeV and another at $`\sqrt{𝒔}`$ = 630 GeV, are in qualitative agreement with the NLO QCD predictions. A different measurement at $`\sqrt{𝒔}`$ = 1.8 TeV using the azimuthal opening angle between a muon from $`𝑩`$ meson decay and the $`\overline{𝒃}`$ jet shows qualitative differences with the predictions, while a direct measurement of $`𝒃\overline{𝒃}`$ rapidity correlations is found to be in agreement with the NLO QCD predictions.
In this paper we provide an additional measurement of the $`𝒃`$ quark production cross section and $`𝒃\overline{𝒃}`$ angular correlations. The analysis makes use of the fact that the semileptonic decay of a $`𝒃`$ quark results in a lepton (here a muon) associated with a jet. We use a sample of dimuons and their associated jets to tag both $`𝒃`$ and $`\overline{𝒃}`$ quarks. By tagging both the $`𝒃`$ and $`\overline{𝒃}`$ quarks, we are able to significantly reduce the number of background events in our data sample. Also included is a revised measurement of the $`𝒃`$ quark production cross section based on an earlier DØ analysis using the inclusive single muon measurement.
The DØ detector and trigger system are described in detail elsewhere. The central muon system consists of three layers of proportional drift tubes and a magnetized iron toroid located between the first two layers. The muon detectors provide a measurement of the muon momentum with a resolution parameterized by $`\delta (1/p)/(1/p)=0.18(p-2)/p\oplus 0.008p`$, with $`p`$ in GeV/$`c`$. The calorimeter is used to measure both the minimum ionizing energy associated with the muon track and the electromagnetic and hadronic activity associated with heavy quark decay. The total thickness of the calorimeter plus toroid in the central region varies from 13 to 15 interaction lengths, which reduces the hadronic punchthrough in the muon system to less than 0.5% of low transverse momentum muons from all sources. The energy resolution for jets is approximately 80%/$`\sqrt{E(\mathrm{GeV})}`$.
The data used in this analysis were taken during the 1992–1993 run of the Fermilab Tevatron collider and correspond to a total integrated luminosity $`\int \mathcal{L}dt=6.5\pm 0.4`$ pb$`{}^{-1}`$. The dimuon data were collected using a multilevel trigger requiring at least one reconstructed muon with transverse momentum $`p_T^\mu >3`$ GeV/$`c`$ and at least one reconstructed jet with transverse energy $`E_T>10`$ GeV.
The events are then fully reconstructed offline and subjected to event selection criteria. The offline analysis requires two muons with $`p_T^\mu >4`$ GeV/$`c`$ and pseudorapidity $`|\eta ^\mu |<0.8`$. In addition, both muon tracks have to be consistent with originating from the reconstructed event vertex and deposit $`>1`$ GeV of energy in the calorimeter. Each muon is also required to have an associated jet with $`E_T>12`$ GeV within a cone of radius $`\mathcal{R}=\sqrt{(\mathrm{\Delta }\eta )^2+(\mathrm{\Delta }\varphi )^2}<0.8`$. The jet energies are measured using a cone of $`\mathcal{R}=0.7`$. Finally, muon candidates in the region $`80^{\circ }<\varphi ^\mu <110^{\circ }`$ are excluded due to poor chamber efficiencies near the Main Ring beam pipe.
Further selection criteria are placed on the dimuon candidates to reduce backgrounds to $`b\overline{b}`$ production. The invariant mass of the dimuons is restricted to the range $`6<m^{\mu \mu }<35`$ GeV/$`c^2`$. The lower limit removes dimuons resulting from the cascade decay of single $`b`$ quarks and from $`J/\psi `$ resonance decays, while the upper limit reduces the number of dimuons due to $`Z`$ boson decays. An opening space angle requirement of $`<165^{\circ }`$ between the muons is also applied to remove contamination from cosmic ray muons. A total of 397 events pass all selection criteria.
The trigger and offline reconstruction efficiencies are determined from Monte Carlo event samples. Events generated with the ISAJET Monte Carlo are passed through a GEANT simulation of the DØ detector followed by trigger simulation and reconstruction programs. Trigger and some offline efficiencies found in this way are crosschecked by using appropriate data samples. The overall acceptance times efficiency as a function of the higher (leading) muon $`𝒑_𝑻`$ in the event increases from about 1% at 4 GeV/$`𝒄`$ to a plateau of 9% above 15 GeV/$`𝒄`$. We define the leading muon in the event as the muon with the greater value of $`𝒑_𝑻^𝝁`$.
In addition to $`𝒃\overline{𝒃}`$ production, dimuon events in the invariant mass range of 6–35 GeV/$`𝒄^\mathrm{𝟐}`$ can also arise from other sources. These processes include semileptonic decays of $`𝒄\overline{𝒄}`$ pairs, events in which one or both of the muons are produced by in-flight decays of $`𝝅`$ or $`𝑲`$ mesons, Drell-Yan production, and $`𝚼`$ resonance decays. Muons from the Drell-Yan process and $`𝚼`$ decays are not expected to have jets associated with them. Monte Carlo estimates normalized to the measured Drell-Yan and $`𝚼`$ cross sections show that less than one event is expected to contribute to the final data sample from these two sources. An additional source of dimuon events is cosmic ray muons passing through the detector.
To extract the $`𝒃\overline{𝒃}`$ signal, we use a maximum likelihood fit with four different input distributions. The input distributions are chosen based on their effectiveness in distinguishing between the different sources of dimuon events. We use the transverse momenta of the leading and trailing muons relative to their associated jet axes ($`𝒑_𝑻^{\mathrm{𝐫𝐞𝐥}}`$), the fraction of longitudinal momentum of the jet carried by the leading muon divided by the jet $`𝑬_𝑻`$ ($`𝒓_𝒛`$), and the reconstructed time of passage ($`𝒕_\mathrm{𝟎}`$) of the leading muon track through the muon chambers with respect to the beam crossing time. The variable $`𝒕_\mathrm{𝟎}`$ is used to identify the cosmic ray muon background, which is not expected to be in time with the beam crossing. Monte Carlo studies show that the variables $`𝒑_𝑻^{\mathrm{𝐫𝐞𝐥}}`$ and $`𝒓_𝒛`$ help to discriminate between background and $`𝒃\overline{𝒃}`$ production. For both variables, the jet energy is defined to be the vector sum of the muon energy and the jet energy measured in the calorimeter less the expected minimum ionizing energy of the muon deposited in the calorimeter.
The $`𝒑_𝑻^{\mathrm{𝐫𝐞𝐥}}`$ and $`𝒓_𝒛`$ distributions for $`𝒃\overline{𝒃}`$, $`𝒄\overline{𝒄}`$, and $`𝒃`$ or $`𝒄`$ plus $`𝝅\mathbf{/}𝑲`$ decay are modeled using the ISAJET Monte Carlo. Each of these samples is processed with a complete detector, trigger, and offline simulation. The distributions for $`𝒃`$ quark decays include both direct ($`b\to \mu `$) and sequential ($`b\to c\to \mu `$) decays. The distributions for $`𝒄\overline{𝒄}`$ and a $`𝒄`$ quark plus a $`𝝅`$ or $`𝑲`$ decay are very similar, so both contributions are fit to the same function. The distributions for $`𝒕_\mathrm{𝟎}`$ are obtained from two different sources. The $`𝒕_\mathrm{𝟎}`$ distribution for cosmic ray muons is obtained from data collected between collider runs using cosmic ray triggers. For beam-produced muons, $`𝒕_\mathrm{𝟎}`$ is measured using muons from $`𝑱\mathbf{/}𝝍`$ decays.
Figure 1 shows the result of the maximum likelihood fit for $`𝒑_𝑻^{\mathrm{𝐫𝐞𝐥}}`$ of the leading muon and $`𝒓_𝒛`$. Included in Fig. 1 are the contributions from each of the major sources of dimuon events. The $`𝒃\overline{𝒃}`$ contribution to the final data sample is found to be 45.3$`\mathbf{\pm }`$5.8%. The other fractions fit to the data set consist of $`𝒃`$ quark plus $`𝝅\mathbf{/}𝑲`$ decay (37.9$`\mathbf{\pm }`$5.6%), $`𝒄\overline{𝒄}`$ production (14.0$`\mathbf{\pm }`$3.8%), and cosmic ray muons (2.8$`\mathbf{\pm }`$1.6%). From the fit, we obtain the number of $`𝒃\overline{𝒃}`$ events per bin as a function of $`𝒑_𝑻^𝝁`$ of the leading muon and as a function of the difference in azimuthal angle between the two muons, $`𝚫\mathit{\varphi }^{𝝁𝝁}`$.
The systematic errors on the number of $`𝒃\overline{𝒃}`$ events per bin (8%) are estimated by varying the input distributions to the maximum likelihood fit within reasonable bounds. As a crosscheck of the fitting procedure, we calculate the fraction of events originating from $`𝒃\overline{𝒃}`$ production using appropriately normalized Monte Carlo samples. Good agreement is found between the Monte Carlo calculated fraction and that found from the maximum likelihood fit to the data. The fractions agree as a function of both $`𝒑_𝑻`$ of the leading muon and $`𝚫\mathit{\varphi }^{𝝁𝝁}`$. A complete description of the fitting procedure can be found in Ref. .
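To illustrate the structure of such a fit, the sketch below performs a schematic binned template likelihood fit with placeholder exponential shapes; it is not the fit used in this analysis, which uses four templates and several discriminating variables:

```python
import numpy as np
from scipy.optimize import minimize

# Schematic binned template likelihood fit: the data histogram is modeled
# as a mixture of normalized template shapes and the mixture fraction is
# obtained by minimizing the Poisson negative log-likelihood. The
# exponential "templates" are placeholders, not the analysis shapes.
rng = np.random.default_rng(5)
edges = np.linspace(0.0, 5.0, 21)
t_sig = np.histogram(rng.exponential(0.8, 100_000), edges)[0].astype(float)
t_bkg = np.histogram(rng.exponential(2.0, 100_000), edges)[0].astype(float)
t_sig /= t_sig.sum()
t_bkg /= t_bkg.sum()
data = np.histogram(np.concatenate([rng.exponential(0.8, 450),
                                    rng.exponential(2.0, 550)]), edges)[0]

def nll(theta):
    f = 1.0 / (1.0 + np.exp(-theta[0]))          # fraction mapped to (0, 1)
    mu = data.sum() * (f * t_sig + (1.0 - f) * t_bkg)
    return np.sum(mu - data * np.log(mu + 1e-300))

res = minimize(nll, x0=[0.0], method="Nelder-Mead")
print(1.0 / (1.0 + np.exp(-res.x[0])))           # fitted signal fraction
```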
The dimuon cross section originating from $`𝒃\overline{𝒃}`$ production is calculated using
$$\frac{d\sigma _{b\overline{b}}^{\mu \mu }}{dx}=\frac{1}{\mathrm{\Delta }x}\frac{N_{b\overline{b}}^{\mu \mu }(x)f_p(x)}{ϵ(x)\int \mathcal{L}dt},$$
(1)
where $`x`$ is either the $`p_T`$ of the leading muon or $`\mathrm{\Delta }\varphi ^{\mu \mu }`$. Here, $`ϵ`$ is the total efficiency, $`\int \mathcal{L}dt`$ is the integrated luminosity, $`N_{b\overline{b}}^{\mu \mu }`$ is the number of $`b\overline{b}`$ events determined from the fit, and $`f_p`$ is an unfolding factor to account for smearing caused by the muon momentum resolution. An unfolding technique is used to determine $`f_p`$. The factor $`f_p`$ varies from 0.78 at low $`p_T^{\mu _1}`$ to 0.93 in the highest $`p_T^{\mu _1}`$ bin ($`\mu _1`$ is the leading muon in the event) and takes into account our invariant mass and $`p_T^\mu `$ requirements. The systematic uncertainty associated with $`f_p`$ is found to be $`p_T^\mu `$ dependent and varies from 13% to 22%.
Figure 2(a) shows the result of the cross section calculation as a function of $`𝒑_𝑻^{𝝁_\mathrm{𝟏}}`$ for $`\mathrm{𝟒}\mathbf{<}𝒑_𝑻^𝝁\mathbf{<}\mathrm{𝟐𝟓}`$ GeV/$`𝒄`$, $`\mathbf{|}𝜼^𝝁\mathbf{|}\mathbf{<}\mathbf{0.8}`$, and $`\mathrm{𝟔}\mathbf{<}𝒎^{𝝁𝝁}\mathbf{<}\mathrm{𝟑𝟓}`$ GeV/$`𝒄^\mathrm{𝟐}`$. The total systematic error is found to be $`𝒑_𝑻^{𝝁_\mathrm{𝟏}}`$ dependent, ranging from 25% to 31%. This includes uncertainties from the trigger efficiency (19%), offline selection efficiency (5%), maximum likelihood fit (8%), momentum unfolding (13–22%), and integrated luminosity (5%).
The theoretical curve of Fig. 2(a) is determined using the HVQJET Monte Carlo. HVQJET is an implementation of the NLO calculation of Ref. (MNR) for $`𝒃\overline{𝒃}`$ production. It uses the MNR parton level generator and a modified version of ISAJET for hadronization, particle decays, and modeling of the underlying event. The particle decays are based on the ISAJET implementation of the CLEO decay tables. In HVQJET the MNR prediction is realized by combining parton level events having negative weights with those having positive weights and similar topologies. The prediction shown is the NLO calculation and includes all four $`𝒈𝒈`$, $`𝒈𝒒`$, $`𝒈\overline{𝒒}`$, and $`𝒒\overline{𝒒}`$ initiated subprocesses with $`𝒎_𝒃\mathbf{(}\mathrm{𝐩𝐨𝐥𝐞}\mathrm{𝐦𝐚𝐬𝐬}\mathbf{)}\mathbf{=}\mathbf{4.75}`$ GeV/$`𝒄^\mathrm{𝟐}`$. The MRSR2 parton distribution functions (PDFs) are used with $`𝚲_\mathrm{𝟓}`$ = 237 MeV.
The shaded region in Fig. 2(a) shows the combined systematic and statistical error from the HVQJET prediction ($`{}_{\mathbf{-}\mathrm{𝟓𝟎}}{}^{\mathbf{+}\mathrm{𝟕𝟒}}`$%). This error is dominated by the uncertainty associated with the MNR prediction and is determined by varying the mass of the $`𝒃`$ quark between 4.5 GeV/$`𝒄^\mathrm{𝟐}`$ and 5.0 GeV/$`𝒄^\mathrm{𝟐}`$, and the factorization and renormalization scales, taken to be equal, between $`𝝁_\mathrm{𝟎}\mathbf{/}\mathrm{𝟐}`$ and $`\mathrm{𝟐}𝝁_\mathrm{𝟎}`$, where $`𝝁_\mathrm{𝟎}^\mathrm{𝟐}\mathbf{=}𝒎_𝒃^\mathrm{𝟐}\mathbf{+}\langle 𝒑_𝑻^𝒃\rangle ^\mathrm{𝟐}`$. Additional systematic errors include those associated with the PDFs (20%), the Peterson fragmentation function (8%), the $`𝑩`$ meson semileptonic branching fraction (7%), and the muon decay spectrum from $`𝑩`$ mesons (20%). Varying these parameters does not appreciably change the shape of the prediction. The Monte Carlo statistical errors are less than 10%.
To extract the $`𝒃`$ quark cross section from the dimuon data, we employ a method first used by UA1 and subsequently used by CDF and DØ . Since a correlation exists between the $`𝒑_𝑻`$ of the muon produced in a $`𝒃`$ quark decay and the parent $`𝒃`$ quark $`𝒑_𝑻`$, cuts applied to the muon $`𝒑_𝑻`$ in the data are effectively $`𝒃`$ quark $`𝒑_𝑻`$ cuts. For a set of kinematic cuts, which include cuts on the transverse momentum of the muons, we define $`𝒑_𝑻^{\mathrm{𝐦𝐢𝐧}}`$ as that value of the $`𝒃`$ quark $`𝒑_𝑻`$ where 90% of the accepted events have $`𝒃`$ quark transverse momentum greater than $`𝒑_𝑻^{\mathrm{𝐦𝐢𝐧}}`$. The $`𝒃`$ quark cross section is then calculated as
$$𝝈_𝒃\mathbf{(}𝒑_𝑻^𝒃\mathbf{>}𝒑_𝑻^{\mathrm{𝐦𝐢𝐧}}\mathbf{)}\mathbf{=}𝝈_{𝒃\overline{𝒃}}^{𝝁𝝁}\mathbf{(}𝒑_𝑻^{𝝁_\mathrm{𝟏}}\mathbf{)}\frac{𝝈_𝒃^{\mathrm{𝐌𝐂}}}{𝝈_{𝒃\overline{𝒃}\to 𝝁𝝁}^{\mathrm{𝐌𝐂}}}\mathbf{,}$$
(2)
where $`𝝈_{𝒃\overline{𝒃}}^{𝝁𝝁}\mathbf{(}𝒑_𝑻^{𝝁_\mathrm{𝟏}}\mathbf{)}`$ is the measured dimuon cross section of Eq. (1) integrated over different intervals of $`𝒑_𝑻^{𝝁_\mathrm{𝟏}}`$, $`𝝈_𝒃^{\mathrm{𝐌𝐂}}`$ is the total Monte Carlo $`𝒃`$ quark cross section for $`𝒑_𝑻^𝒃\mathbf{>}𝒑_𝑻^{\mathrm{𝐦𝐢𝐧}}`$ (where $`\mathbf{|}𝒚^𝒃\mathbf{|}\mathbf{<}\mathbf{1.0}`$ and no cut on $`𝒚^{\overline{𝒃}}`$), and $`𝝈_{𝒃\overline{𝒃}\to 𝝁𝝁}^{\mathrm{𝐌𝐂}}`$ is the Monte Carlo cross section for dimuon production with the same requirements used to select the data set. For each interval of $`𝒑_𝑻^{𝝁_\mathrm{𝟏}}`$, $`𝒑_𝑻^{\mathrm{𝐦𝐢𝐧}}`$ and $`𝝈^{\mathrm{𝐌𝐂}}`$ are calculated using HVQJET. Combining the uncertainties of the measured dimuon cross section with those associated with extracting the $`𝒃`$ quark cross section, we obtain a total systematic uncertainty of 34–38% on the measured $`𝒃`$ quark cross section. The latter uncertainties are associated with $`𝒃`$ quark fragmentation (Peterson fragmentation function), the semileptonic branching fraction, and the muon decay spectrum, with the magnitudes noted above.
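The extraction of Eq. (2), together with the 90% definition of the minimum b-quark transverse momentum, can be sketched as follows; the toy Monte Carlo spectrum and all cross-section values are assumptions for illustration only:

```python
import numpy as np

def pt_min(pt_b, fraction=0.90):
    """pT value such that 'fraction' of accepted events have a b-quark
    pT above it, i.e. the 10th percentile of the accepted spectrum."""
    return np.percentile(pt_b, 100.0 * (1.0 - fraction))

def sigma_b(sigma_mumu_meas, sigma_b_mc, sigma_bb_to_mumu_mc):
    """Eq. (2): scale the measured dimuon cross section by the MC ratio."""
    return sigma_mumu_meas * sigma_b_mc / sigma_bb_to_mumu_mc

pt_b = 5.0 + np.random.exponential(8.0, 100000)  # toy accepted spectrum
print("pT_min =", pt_min(pt_b), "GeV/c")
print("sigma_b =", sigma_b(1.2, 5.0e3, 0.9), "nb")
```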
Figure 2(b) shows the $`𝒃`$ quark production cross section for the rapidity range $`\mathbf{|}𝒚^𝒃\mathbf{|}\mathbf{<}\mathbf{1.0}`$ as a function of $`𝒑_𝑻^{\mathrm{𝐦𝐢𝐧}}`$. The NLO QCD prediction is computed using Ref. with $`𝒎_𝒃\mathbf{(}\mathrm{𝐩𝐨𝐥𝐞}\mathrm{𝐦𝐚𝐬𝐬}\mathbf{)}\mathbf{=}\mathbf{4.75}`$ GeV/$`𝒄^\mathrm{𝟐}`$ and the MRSR2 PDFs. The theoretical uncertainty of $`{}_{\mathbf{-}\mathrm{𝟐𝟖}}{}^{\mathbf{+}\mathrm{𝟒𝟕}}`$% results from varying the mass of the $`𝒃`$ quark and the factorization and renormalization scales as described above and is dominated by the variation of the scales. The ratio of the data to the central NLO QCD prediction is approximately three over the entire $`𝒑_𝑻^{\mathrm{𝐦𝐢𝐧}}`$ range covered.
Also shown in Fig. 2(b) is a revised result based on the previous inclusive single muon measurement from DØ . In light of revised $`𝑩`$ meson decay modes and Monte Carlo improvements, the cross section is re-evaluated by using HVQJET to calculate new values of $`𝝈_𝒃^{\mathrm{𝐌𝐂}}\mathbf{/}𝝈_{𝒃\to 𝝁}^{\mathrm{𝐌𝐂}}`$ for extraction of the $`𝒃`$ quark cross section from the measured inclusive single muon spectrum. In addition, the high $`𝒑_𝑻`$ inclusive muon data ($`𝒑_𝑻^𝝁\mathbf{>}`$12 GeV/$`𝒄`$) are excluded due to large uncertainties in the cosmic ray muon background subtractions. The resulting increase in the $`𝒃`$ quark cross section is primarily caused by the new $`𝑩`$ meson decay modes and lower semileptonic branching fractions. The re-evaluated cross section supersedes that of Ref. . The tabulated data for the dimuon and inclusive single muon data sets can be found in Tables I and II.
The differential $`𝒃\overline{𝒃}`$ cross section, $`𝒅𝝈_{𝒃\overline{𝒃}}^{𝝁𝝁}\mathbf{/}𝒅𝚫\mathit{\varphi }^{𝝁𝝁}`$, gives further information on the underlying QCD production mechanisms. The azimuthal opening angle between $`𝒃`$ and $`\overline{𝒃}`$ quarks (or between their decay muons) is sensitive to the contributing production mechanisms. These contributions are the leading order (LO) subprocess, flavor creation, and the next-to-leading order subprocesses, gluon splitting and flavor excitation. There are also contributions from interference terms.
The cross section $`𝒅𝝈_{𝒃\overline{𝒃}}^{𝝁𝝁}\mathbf{/}𝒅𝚫\mathit{\varphi }^{𝝁𝝁}`$ is shown in Fig. 3. Also shown are the LO and NLO QCD predictions which are determined using HVQJET and include all subprocesses. The grey band around the NLO prediction shows the combined statistical and systematic errors associated with the prediction, which is $`{}_{\mathbf{-}\mathrm{𝟓𝟎}}{}^{\mathbf{+}\mathrm{𝟕𝟒}}`$% as detailed above. The data again show an excess above the NLO QCD prediction but agree with the overall shape. The agreement in shape is consistent with the presence of NLO subprocesses since the LO prediction, which contains the smearing from the $`𝒃\to 𝑩\to 𝝁`$ fragmentation and decay chain, does not describe the data.
In conclusion, we have measured the $`𝒃`$ quark production cross section and the $`𝒃\overline{𝒃}`$ azimuthal angle correlations using dimuons to tag the presence of $`𝒃`$ quarks. These measurements, as well as the revised inclusive single muon measurement, are found to agree in shape with the NLO QCD calculation of heavy flavor production but lie above the central values of these predictions.
We thank the staffs at Fermilab and at collaborating institutions for contributions to this work, and acknowledge support from the Department of Energy and National Science Foundation (USA), Commissariat à L’Energie Atomique and CNRS/Institut National de Physique Nucléaire et de Physique des Particules (France), Ministry for Science and Technology and Ministry for Atomic Energy (Russia), CAPES and CNPq (Brazil), Departments of Atomic Energy and Science and Education (India), Colciencias (Colombia), CONACyT (Mexico), Ministry of Education and KOSEF (Korea), CONICET and UBACyT (Argentina), A.P. Sloan Foundation, and the Humboldt Foundation.
FIGURE CAPTIONS
Figure 1 The results of the maximum likelihood fit to the data for (a) $`𝒑_𝑻^{\mathrm{𝐫𝐞𝐥}}`$ of the leading muon and (b) $`𝒓_𝒛`$. Also included are the curves showing the contribution from each process to the dimuon sample.
Figure 2 (a) The unfolded leading muon $`𝒑_𝑻`$ spectrum for $`𝒃\overline{𝒃}`$ production compared to the predicted spectrum (see text) where the data errors are statistical (inner) and total (outer) and the Monte Carlo errors are total (shaded band); (b) the $`𝒃`$ quark production cross section for $`\mathbf{|}𝒚^𝒃\mathbf{|}\mathbf{<}\mathbf{1.0}`$ compared with the revised inclusive single muon results and the NLO QCD prediction. The error bars on the data represent the total error. The theoretical uncertainty shows the uncertainty associated with the factorization and renormalization scales and the $`𝒃`$ quark mass. Also shown are the inclusive single muon data from CDF.
Figure 3 The $`𝚫\mathit{\varphi }^{𝝁𝝁}`$ spectrum for $`𝒃\overline{𝒃}`$ production compared to the predicted spectrum (see text). The errors on the data are statistical and total. The solid histogram shows the NLO prediction with the grey band indicating the total uncertainty. Also shown is the LO prediction (dotted histogram) with the statistical error only.
TABLE CAPTIONS
Table 1 Cross sections for $`𝒃\overline{𝒃}\to 𝝁𝝁`$ production.
Table 2 Results for the $`𝒃`$ quark production cross section for $`\mathbf{|}𝒚^𝒃\mathbf{|}\mathbf{<}\mathbf{1.0}`$.
# Measurement of the decay ϕ→𝜇⁺𝜇⁻.
## 1 Introduction
The $`\varphi \to \mu ^+\mu ^-`$ decay reveals itself as an interference pattern in the energy dependence of the cross section of the process $`e^+e^-\to \mu ^+\mu ^-`$ in the region around the $`\varphi `$-resonance peak. The interference amplitude is determined by the branching ratio of the decay $`\varphi \to \mu ^+\mu ^-`$. The tabulated value of the branching ratio $`B(\varphi \to \mu ^+\mu ^-)=(2.5\pm 0.4)\times 10^{-4}`$ is based on experiments on photoproduction of the $`\varphi `$ meson . In $`e^+e^-`$ collisions, the directly measurable quantity is the square root of the product of the branching ratios
$`\sqrt{B(\varphi \to \mu ^+\mu ^-)B(\varphi \to e^+e^-)}`$. $`\mu e`$-universality requires $`B(\varphi \to \mu ^+\mu ^-)`$ to be equal to $`B(\varphi \to e^+e^-)`$ with an accuracy much higher than our experimental one, despite the difference in the lepton masses. The measurements of the branching ratio $`B(\varphi \to e^+e^-)`$ at $`e^+e^-`$ colliders are performed through summation of the cross sections over all decay channels . The study of the interference in $`e^+e^-\to \varphi \to \mu ^+\mu ^-`$ gives an independent evaluation of the $`\varphi `$-meson leptonic width. The first measurement of the decay $`\varphi \to \mu ^+\mu ^-`$ at an $`e^+e^-`$ collider was performed in Orsay in 1972 . In that experiment the value of the branching ratio of the $`\varphi `$-meson leptonic decay
$`\sqrt{B(\varphi \to \mu ^+\mu ^-)B(\varphi \to e^+e^-)}=(2.93\pm 0.96\pm 0.32)\times 10^{-4}`$ was obtained. Later, similar measurements were performed in Novosibirsk .
## 2 Experiment
The experiment was carried out with the SND detector (Fig. 1) at VEPP-2M in 1996–1997. SND is a general purpose non-magnetic detector . The main part of SND is a spherical electromagnetic calorimeter consisting of 1630 NaI(Tl) crystals. The solid angle of the calorimeter is $`90\%`$ of $`4\pi `$ steradians. The angles of charged particles are
measured by two cylindrical drift chambers covering 95% of the $`4\pi `$ solid angle. An important part of the detector for the process under study is the muon system, consisting of streamer tubes and plastic scintillation counters, with $`1`$ cm iron plates between the blocks of tubes and counters. Simultaneous hits in the streamer tubes and scintillation counters produce a signal of the muon system. The time difference between the hit and the beam collision is measured by the inner and outer scintillation counters .
The experiment was carried out in the energy range $`2E_b=984`$–1040 MeV and consisted of 6 data-taking runs:
PHI\_9601 – PHI\_9606. Five runs were used for the analysis, corresponding to a total integrated luminosity $`\mathrm{\Delta }L=2.61`$ pb$`^{-1}`$.
About $`4.6\times 10^6`$ $`\varphi `$ mesons were produced. The integrated luminosity was measured with an accuracy of about 3% using $`e^+e^-\to e^+e^-`$ and $`e^+e^-\to \gamma \gamma `$ events.
## 3 Event selection
The energy behavior of the cross section of the process
$$e^+e^-\to \mu ^+\mu ^-(\gamma )$$
(1)
was studied in the vicinity of the $`\varphi `$ meson. Events with two collinear charged particles were selected for the analysis. The selection of the events of the process (1) was performed with the following cuts on the acollinearity angles of the charged particles in the azimuthal and polar directions: $`\mathrm{\Delta }\varphi <10^{\circ }`$, $`\mathrm{\Delta }\theta <25^{\circ }`$. An additional photon emitted by either the initial or final particles was permitted. To avoid possible losses of events due to beam background or knock-on electrons in the drift chambers, one additional charged particle was also permitted. To suppress the beam background, the production point of the charged particles was required to be within $`0.5`$ cm of the interaction point in the azimuthal plane and within $`\pm 7.5`$ cm along the beam direction (the longitudinal size of the interaction region $`\sigma _z`$ is about $`2`$ cm). The polar angles of the charged particles were limited to the range $`45^{\circ }<\theta <135^{\circ }`$, corresponding to the acceptance angle of the muon system.
The main sources of background are the cosmic muons and the processes
$$e^+e^-\to e^+e^-,$$
(2)
$$e^+e^-\to \pi ^+\pi ^-(\gamma ).$$
(3)
To suppress the background from the process (2), a procedure of $`e/\pi (\mu )`$ separation was used. The algorithm is similar to that developed for the ND detector . It utilizes the difference in the total energy depositions and the longitudinal energy deposition profiles for electrons, pions and muons. As a result of this procedure the background from the process (2) was suppressed down to 4% of the events of the process (1). To suppress the contribution of the process (3), a procedure of $`\pi /\mu `$ separation by the muon system has been used. In the energy range under study the probability for a muon to hit the muon system varies from 80 to 93% and is as low as 1.5% for pions of the process (3). The requirement of a hit in the muon system also reduced the background from the process (2) by two orders of magnitude, making its contribution negligible.
After these cuts, 80% of the selected events are still cosmic background. The rejection of events with a hit in the two top segments of the muon system (about $`45^{\circ }`$ in the azimuthal direction) (Fig. 1) suppressed the cosmic events by a factor of two. Further suppression of the cosmic background was performed using the following parameters:
1. the time difference between the hit of the muon system and the beam collision – $`\tau `$;
2. the time difference between the hits in upper and lower halfs of the muon system – $`TOF`$;
3. the sum of the distances from the tracks to the production point – $`R`$;
4. the likelihood function – $`P_\mu `$, which is built on the basis of the energy depositions in the calorimeter layers for the muon:
$$P_\mu =P_{\mu 1}(E_1)P_{\mu 2}(E_2)P_{\mu 3}(E_3),$$
(4)
where $`P_{\mu i}(E_i)`$ is the value of the probability density function for the energy deposition $`E_i`$ in the $`i`$-th calorimeter layer. These functions were obtained from a data sample in which muons were selected by the muon system using strict cuts ($`\tau <5`$ ns, $`TOF<0`$ ns; see explanation below).
The distribution of the parameter $`\tau `$ is shown in Fig. 2. The scale of the parameter $`\tau `$ was adjusted to give a peak at $`0`$ for $`e^+e^-\to \mu ^+\mu ^-`$ events. The events with $`\tau <5`$ ns were selected for further analysis. In about 75% of the selected events there is a hit in the lower half of the muon system. For these events, Fig. 3 shows the distribution of the time of flight $`TOF`$. The peaks in the spectra at $`TOF=0`$ ns and $`TOF=7.2`$ ns are due to the process (1) and the cosmic background, respectively.
To subtract the cosmic background the combination of the independent parameters $`R`$ and $`P_\mu `$ was used:
$$RP=\mathrm{ln}(P_\mu )-16R(cm)+25.$$
(5)
The factor in front of $`R`$ has been chosen to provide maximum separation between cosmic events and the events of the process (1). The additive constant equal to $`25`$ fixes the scale of the parameter $`RP`$. Fig. 4 shows the $`RP`$ distribution for the events with $`TOF>5`$ ns (mainly cosmic events) and for the events with $`TOF<0`$ ns (mainly events of the process (1)). Most of the events of the process (1) are in the region $`RP>0`$. At each energy point the number of events of the process (1) was estimated by the following formula:
$$N_\mu =N_{RP>0}-CN_{RP<0}.$$
(6)
Here $`C=\frac{N_{RP>0}^{\prime }}{N_{RP<0}^{\prime }}`$; $`N_{RP>0}^{\prime }`$ and $`N_{RP<0}^{\prime }`$ were determined from the sample of events with $`TOF>5`$ ns, while $`N_{RP>0}`$ and $`N_{RP<0}`$ were determined from the sample of events with $`TOF<5`$ ns.
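A compact sketch of the discriminant of Eq. (5) and the subtraction of Eq. (6) is given below; the event counts are invented for illustration:

```python
import math

def rp(p_mu, r_cm):
    """RP discriminant of Eq. (5) from the muon likelihood P_mu and the
    summed track-to-vertex distance R in cm."""
    return math.log(p_mu) - 16.0 * r_cm + 25.0

def n_signal(n_pos, n_neg, n_pos_cosmic, n_neg_cosmic):
    """Eq. (6): C is calibrated on the TOF>5 ns (cosmic-dominated)
    sample and then applied to the TOF<5 ns sample."""
    c = n_pos_cosmic / n_neg_cosmic
    return n_pos - c * n_neg

print(rp(0.01, 0.3))                # signal-like event: RP > 0
print(n_signal(850, 120, 40, 400))  # background-subtracted N_mu
```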
The detection efficiency $`\epsilon _\mu `$ for the process (1) was determined from the Monte Carlo simulation. The event generator was based on the formulae from the work . The passage of the particles through the detector was simulated by the program UNIMOD2 . The energy dependence of the detection efficiency is especially important for the process under study. This dependence is determined mainly by the probability for a muon to reach the muon system.
The detection efficiency determined from the Monte Carlo simulation was corrected in order to take into account effects which were not included in the simulation. The first is the efficiency of the software third-level trigger, which selected the events during data taking. This efficiency equals 97–98% for all runs except the run PHI\_9601, where it was 93%. The second effect not included in the simulation is the inefficiency of the muon system. It was determined from the experimental data for each energy point and was also used as a correction to the overall efficiency. This correction is about $`9\%`$ and is determined mainly by the dead time of the time channels.
## 4 Data analysis
The energy dependence of the detection cross section was fitted with the following formula:
$$\sigma _{vis}=\epsilon _\mu (E)\sigma _{\mu \mu (\gamma )}(E)+\epsilon _\pi \sigma _{\pi \pi (\gamma )}(E),$$
(7)
where $`E=2E_b`$; $`\epsilon _\mu (E)`$ and $`\epsilon _\pi \approx 0.017`$ are the detection efficiencies of the processes (1) and (3), and $`\sigma _{\mu \mu (\gamma )}`$ and $`\sigma _{\pi \pi (\gamma )}`$ are the cross sections of these processes. The energy dependence of the efficiency obtained from the simulation was approximated by a smooth function.
To perform a combined fit of the detection cross sections for all experimental runs, scale factors $`\epsilon _{sf}^i`$ for each run were introduced into the detection efficiencies as free fit parameters. They are the asymptotic detection efficiencies at energies higher than 520 MeV, because the probability for a muon with such energy to hit the muon system is almost constant. This method provides an evaluation of the efficiencies independent of the simulation.
The cross section $`\sigma _{\pi \pi (\gamma )}`$ was calculated according to the formulae from the work , taking into account the experimental data on the pion form factor and the decay $`\varphi \to 2\pi `$ . The cross section $`\sigma _{\mu \mu (\gamma )}(E)`$ was taken in the following form:
$$\sigma _{\mu \mu (\gamma )}(E)=\sigma _{\mu \mu }(E)\beta (E),$$
(8)
$$\sigma _{\mu \mu }(E)=83.50(nb)\frac{m_\varphi ^2}{E^2}|Z|^2,$$

$$Z=1-Qe^{i\psi _\mu }\frac{m_\varphi \mathrm{\Gamma }_\varphi }{m_\varphi ^2-E^2-iE\mathrm{\Gamma }(E)},$$
where $`\sigma _{\mu \mu }(E)`$ is the Born cross section of the process $`e^+e^-\to \mu ^+\mu ^-`$; $`m_\varphi `$ and $`\mathrm{\Gamma }_\varphi `$ are the mass and width of the $`\varphi `$ meson; $`Q`$ and $`\psi _\mu `$ are the modulus and phase of the interference amplitude; and $`\beta (E)`$ is a factor taking into account the radiative corrections. This factor was obtained as the ratio of the cross section of the process (1) to the Born cross section for the same acceptance angles and with the $`\varphi `$-meson contribution. The cross section of the process (1) was calculated by the Monte Carlo method using the formulae from the work with appropriate cuts on the angles and momenta of the final muons.
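For concreteness, the interference cross section of Eq. (8) can be evaluated numerically as below. We use PDG-like values for the $`\varphi `$ mass and width, fix $`\mathrm{\Gamma }(E)=\mathrm{\Gamma }_\varphi `$ for simplicity (the actual fit uses the energy-dependent width), and take the fitted $`Q`$ as input:

```python
import cmath

M_PHI, G_PHI = 1019.4, 4.3   # MeV; approximate phi mass and width

def sigma_mumu(e, q=0.129, psi=0.0):
    """Born e+e- -> mu+mu- cross section (nb) including the phi
    interference term, with a constant total width assumed."""
    bw = M_PHI * G_PHI / (M_PHI**2 - e**2 - 1j * e * G_PHI)
    z = 1.0 - q * cmath.exp(1j * psi) * bw
    return 83.50 * (M_PHI / e) ** 2 * abs(z) ** 2

for e in (1016.0, 1019.4, 1022.0):   # 2*E_b in MeV
    print(e, sigma_mumu(e))
```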
The luminosity measurement was performed taking into account the interference term in the cross section of the process (2). The interference amplitude is about $`0.5\%`$, and its phase differs by 180 degrees from the phase in the process (1).
The fit of the experimental data gives the interference phase $`\psi _\mu =(4.5\pm 3.4)^{\circ }`$, which is consistent with the theoretical expectation of $`0^{\circ }`$. Therefore the final fit was made with the phase fixed at $`\psi _\mu =0^{\circ }`$. As a result, the value of the interference amplitude
$$Q=0.129\pm 0.009$$
and the detection efficiencies $`\epsilon _{sf}^i`$ for five runs were obtained (table 1).
One can see from the table that the detection efficiencies obtained from the simulation and from the experiment are in good agreement, except for the runs PHI\_9601 and PHI\_9606. In the PHI\_9601 run the muon system worked in a different regime, which was not fully compensated by the corrections. In addition, the efficiency of the software trigger in this run was lower. The explanation of the lower efficiency in the run PHI\_9606 is a lower gain in the drift chambers and, consequently, a worse resolution in $`\theta `$.
The error in the detection efficiency does not contribute directly to the error on the interference amplitude if each run is recorded under the same conditions, because the interference amplitude is a relative quantity. Therefore the data with low gain in the drift chambers were excluded from the runs.
Fig. 5 shows the Born cross section of the process $`e^+e^-\to \mu ^+\mu ^-`$. The $`\chi ^2`$ value obtained in the fit equals $`36.0`$ for 45 degrees of freedom.
The systematic error of the interference amplitude is determined by the errors of the luminosity estimation and of the calculation of the radiative corrections. The measurement of the luminosity using the events of the process $`e^+e^-\to \gamma \gamma `$ gives an estimate of the systematic error $`\delta _{lum}=2.1\%`$. An additional systematic error can arise from an imprecise evaluation of the background from the process (3). The estimate of this contribution is $`\delta _{\pi \pi }=0.6\%`$. The contribution of the error in the calculation of the radiative corrections to the systematic error of the interference amplitude is $`\delta _\beta =4\%`$. The resulting systematic error is $`\delta =4.6\%`$.
The interference amplitude is related to the branching ratio of the decay $`\varphi \to \mu ^+\mu ^-`$ by the following formula:
$$Q=\frac{3\sqrt{B(\varphi \to e^+e^-)B(\varphi \to \mu ^+\mu ^-)}}{\alpha },$$
(9)
where $`\alpha `$ is the fine structure constant. From this relation we obtain the product
$`\sqrt{B(\varphi \to e^+e^-)B(\varphi \to \mu ^+\mu ^-)}=(3.14\pm 0.22\pm 0.14)\times 10^{-4}`$. Using the tabulated value $`B(\varphi \to e^+e^-)=(2.99\pm 0.08)\times 10^{-4}`$ , we obtain $`B(\varphi \to \mu ^+\mu ^-)=(3.30\pm 0.45\pm 0.32)\times 10^{-4}`$.
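The numerics of Eq. (9) can be checked directly; this sketch simply inverts the relation for the fitted $`Q`$:

```python
ALPHA = 1.0 / 137.036   # fine structure constant

def sqrt_bee_bmm(q):
    """sqrt(B(phi->ee) * B(phi->mumu)) from Eq. (9)."""
    return q * ALPHA / 3.0

def b_mumu(q, b_ee=2.99e-4):
    """B(phi->mumu) using the tabulated B(phi->ee)."""
    return sqrt_bee_bmm(q) ** 2 / b_ee

print(sqrt_bee_bmm(0.129))   # ~3.1e-4
print(b_mumu(0.129))         # ~3.3e-4
```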
## 5 Conclusion
In this work the energy dependence of the cross section of the process $`e^+e^-\to \mu ^+\mu ^-`$ in the vicinity of the $`\varphi `$ resonance was studied. The interference pattern determined by the decay $`\varphi \to \mu ^+\mu ^-`$ was measured, giving the product of the leptonic branching ratios of the $`\varphi `$ meson
$`\sqrt{B(\varphi \to e^+e^-)B(\varphi \to \mu ^+\mu ^-)}=(3.14\pm 0.22\pm 0.14)\times 10^{-4}`$. Assuming $`\mu e`$-universality, one can compare this value with the tabulated branching ratio $`B(\varphi \to e^+e^-)=(2.99\pm 0.08)\times 10^{-4}`$ ; the two are in good agreement. Without assuming $`\mu e`$-universality, the branching ratio $`B(\varphi \to \mu ^+\mu ^-)=(3.30\pm 0.45\pm 0.32)\times 10^{-4}`$ was obtained, which differs from the tabulated value $`B(\varphi \to \mu ^+\mu ^-)=(2.5\pm 0.4)\times 10^{-4}`$ by about one standard deviation.
## 6 Acknowledgement
This work is supported in part by the Russian Foundation for Basic Research (grant 96-15-96327) and STP ”Integration” (No. 274).
# Study of the one-dimensional off-lattice hot monomer reaction model.
## I Introduction
Interacting particle systems are relevant to wide-ranging phenomena in physics, chemistry, biophysics, ecology, etc. The concept of ’particles’ is used in a broad sense, that is, ’particles’ can be atoms, molecules, spins, individuals, etc., and whilst attention is drawn to the interactions among particles, no attempt is made to achieve a detailed description (e.g. quantum mechanical) of the particle itself. Therefore, due to interactions, the occurrence of complex behavior, such as phase transitions, self-organization, chaos, bistability, etc., may be observed .
Within this context, an active field of research is the study of far-from-equilibrium reaction systems . Irreversible phase transitions (IPT) between active regimes (where reactions are sustained) and absorbing states (where reactions are no longer possible) have been reported in a great variety of models, such as the Ziff, Gulari, and Barshad (ZGB) model for the catalytic oxidation of CO , the dimer-dimer model , the contact process , forest-fire models , etc. (for a recent review see e.g. ). According to the Janssen-Grassberger conjecture , irreversible reaction systems that exhibit a phase transition to a single absorbing state characterized by a scalar order parameter belong to the Directed Percolation (DP) universality class. This conjecture, stated a long time ago for unique absorbing states, has been further generalized to cases where such states are non-unique . A special case corresponds to non-equilibrium systems where, provided an IPT exists, there is in addition a local or global conservation of particles modulo two, such as the branching and annihilating random walks with an even number of offspring . In these cases a new universality class emerges, commonly called the parity-conserving (PC) class, which is due to the existence of two statistically equivalent absorbing states at the critical point . However, global conservation of particles modulo two may lead to exponents in the PC class only when local spontaneous annihilation ($`X\to 0`$) is highly inhibited. Then, at a coarse-grained level, the relevant surviving processes are those conserving parity. In other words, parity conservation can be restored at a coarse-grained level . A nice example where global parity conservation still leads to DP exponents is given by Inui et al. . It is clear in this case that spontaneous annihilation must be taken into account.
IPT are studied largely by means of Monte Carlo simulations and mean-field approaches. Recent developments in field-theoretic renormalization group techniques have provided a new theoretical framework in which non-equilibrium phase transitions can be studied . These techniques are able to identify the relevant processes in a given universality class, although the quantitative predictions are still poor.
So far, most of the simulations have been performed using discrete lattices where each particle fills a single site on the lattice and neighboring particles may react with a certain probability. In contrast, our knowledge of the behavior of irreversible reaction systems in continuous media is rather poor. In order to stress some dramatic differences that may arise for a reaction system when it is simulated off lattice, let us consider the simplest case of the $`B+B\to 0`$ irreversible reaction, which proceeds according to the Langmuir-Hinshelwood mechanism.
$`B(g)+S\to B(a)`$ (1)
$`B(g)+S\to B(a)`$ (2)
$`B(a)+B(a)\to 0+2S`$ (3)
where $`g`$ and $`a`$ refer to the gas and adsorbed phases, respectively, while $`S`$ represents a site on the surface. At first, we assume that $`B`$-species adsorbed on nearest-neighbor sites react with unit probability ($`P_r=1`$). If we used a discrete lattice, reactions would be sustained indefinitely, i.e. the system could not irreversibly evolve into an absorbing state. However, in a continuous medium, the random adsorption of $`B`$ particles of finite size $`\sigma `$ causes the formation of several interparticle gaps of size smaller than $`\sigma `$. So, in the infinite-time limit ($`t\to \infty `$) the sample becomes imperfectly covered by $`B`$-species separated by small interparticle gaps. Reaction is no longer possible and the system becomes irreversibly trapped in an absorbing state (infinitely degenerated). The maximum jamming coverage attained in one dimension is $`\mathrm{\Theta }_j\approx 0.74759`$, which corresponds to the so-called car parking problem .
In this paper we show that by introducing the adsorption of species with transient mobility in a continuous one-dimensional medium, it is possible to open a window where reactions are sustained. However, by tuning the external parameter which controls the transient mobility of the particles, it is possible to irreversibly drive the system into an absorbing state.
It should be mentioned that the study of reactions of atoms in the gas phase possessing thermal energy with adsorbed atomic CO species on metal and semiconductor surfaces is a topic of current interest. In contrast to thermally activated reactions among adsorbed species, i.e. the so-called Langmuir-Hinshelwood mechanism, this kind of reaction takes place under far-from-equilibrium conditions. Consequently, the determination of the underlying mechanism as well as the understanding of the dynamic behavior is challenging work. Within this context, very recently Kim et al. have reported experimental studies of the reaction of hot $`H`$-atoms with adsorbed $`D`$-atoms (for further experimental works see references in ).
It should be noted that, from the theoretical point of view, a number of related models for random sequential adsorption with diffusion and desorption have also been proposed and studied (for a review see also ). However, interest in such studies is directed toward the asymptotic approach to the jammed state. In contrast, in this paper our interest is focused on the irreversible critical behavior of a reactive system.
This work is thus devoted to the characterization of such IPT in continuous media, and it is organized as follows: Section 2 gives the description of the model and the simulation technique. In Section 3 we discuss the results, while conclusions and remarks are presented in Section 4.
## II The Model and the Monte Carlo simulation method
In this paper, we study a 1D off-lattice adsorption-reaction model in which particles of size $`\sigma `$ undergo a ballistic flight just after depositing on the substrate. The system evolves in time under the following local rules. (i) A position $`x`$ is randomly selected on the substrate. If the interval $`[x-\sigma /2,x+\sigma /2]`$ is empty, the adsorption trial is successful; otherwise it is rejected. So, it is clear that double occupancy of positions is forbidden. (ii) Right after a successful adsorption trial, a random direction is selected (left or right). Then, the particle undergoes a ballistic flight in the previously selected direction up to a distance $`R`$ from the adsorption position $`x`$, provided that no other already deposited particle is found along the way. (iii) If during the flight the particle hits another previously adsorbed particle which is already at rest on the substrate, the following alternatives can occur: (1) annihilation ($`B+B\to 0`$) occurs with probability $`P_r`$; then both particles react and leave the system. (2) The particles do not react (with probability $`1-P_r`$), and the flying particle is frozen at the collision point.
The ballistic flight mimics ’hot monomer’ adsorption, allowing the incoming particle to transform its energy into degrees of freedom parallel to the substratum. The length of the flight $`R`$ is finite in order to account for frictional dissipation. The model has two externally tunable parameters, namely $`R`$ and $`P_r`$. For $`P_r=0`$ one recovers the ’hot monomer’ random sequential adsorption model while for $`R=0`$ and $`P_r=0`$ one has the 1d car parking problem .
In order to simulate a continuous medium on a digital computer, one actually considers a discrete lattice. However, each site of size $`\sigma `$ is subdivided into $`2^{64}`$ different adsorption positions. This high degree of discretization has provided excellent results when compared with the exact analytic solution of a related problem .
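A minimal off-lattice implementation of rules (i)-(iii) can be written as follows (in units of $`\sigma =1`$, with periodic boundaries and a brute-force overlap test; the parameter values are arbitrary examples, not the ones used in the paper):

```python
import random

L, R, PR = 1000.0, 2.0, 1.0    # system size, flight range, P_r
particles = []                 # list of particle center positions

def gap_ok(x):
    """Adsorption trial succeeds only if no deposited particle overlaps
    the interval [x - 1/2, x + 1/2]."""
    return all(abs((x - p + L / 2) % L - L / 2) >= 1.0 for p in particles)

def deposit():
    x = random.uniform(0.0, L)
    if not gap_ok(x):
        return                                   # rule (i): rejected
    d = random.choice((-1.0, 1.0))               # rule (ii): flight direction
    flight, hit = R, None
    for p in particles:                          # nearest obstacle within R
        s = ((p - x) if d > 0 else (x - p)) % L - 1.0   # distance to contact
        if 0.0 <= s < flight:
            flight, hit = s, p
    if hit is not None and random.random() < PR:
        particles.remove(hit)                    # rule (iii.1): B + B -> 0
    else:
        particles.append((x + d * flight) % L)   # lands or freezes (iii.2)

for _ in range(20000):
    deposit()
print(len(particles), "adsorbed particles")
```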
Preliminary results show that the system can undergo a continuous IPT between a stationary reactive state and an absorbing state without reactions when the parameters are varied. This can easily be tested by considering the case $`P_r=1`$ and $`R=0`$ ($`R>1`$), which gives an absorbing (reactive) state, respectively. It should be pointed out that continuous IPT are dominated by fluctuations. Consequently, in a finite system and close to the critical point, the stationary state of the reactive phase can irreversibly evolve into the saturated state (absorbing state). Due to this circumstance, the precise determination of both critical points and critical exponents is rather difficult. However, this shortcoming can be avoided by performing an epidemic analysis. For this purpose one starts, at $`t=0`$, with a configuration close to one of the absorbing states. It is clear that different absorbing states will normally differ in the density of monomers. It should be pointed out that the dynamical critical behavior of systems with infinitely many absorbing configurations is expected to depend upon the initial density of inactive particles . However, the static critical behavior appears to be independent of it. From the set of possible initial densities, one value $`\rho _n`$ is particularly important, namely the stationary density of inactive particles which results after the natural evolution of the system in the subcritical region has finished. The value $`\rho _n`$ is relevant since only for this value does the natural dynamical critical behavior emerge. Preliminary simulation results show that $`\rho _n`$ depends on the parameter $`P_r`$, but these values have not been included in this work for the sake of space. The dependence of the critical behavior on the set of initial densities is the subject of an ongoing investigation.
Consequently, the initial state for the epidemic analysis is generated by the evolution of the system very close to the critical point until poisoning is achieved. After generating this stationary poisoned state, we remove one or two particles from the middle of the system in order to create a small active area where adsorption is now possible. It should be noted that an empty area is considered to be active if it is longer than or equal to $`\sigma `$. Then, the time evolution of the system is analyzed by measuring the following properties: (i) the average amount of active area at time $`t`$, $`A(t)`$; (ii) the survival probability of the active area at time $`t`$, $`P_s(t)`$; and (iii) the average distance over which the active area has spread at time $`t`$, $`D(t)`$. Finite-size effects are absent because the system is taken large enough to avoid the presence of active area at the boundaries. For this purpose a sample of $`10^4\sigma `$ is enough. Averages are taken over $`10^5`$ to $`10^6`$ different samples. Near the critical point, the amount of active area is often very small. Therefore, we improve the efficiency of the algorithm by keeping a list of the positions where there is active area. Time is incremented by $`1/a(t)`$, where $`a(t)`$ is the amount of active area at time $`t`$. The time evolution of the active area is monitored up to $`t=10^5`$. At criticality, the following scaling behavior holds:
$$A(t)t^\eta ,$$
(4)
$$P_s(t)t^\delta ,$$
(5)
and
$$D(t)t^{z/2}$$
(6)
where $`\delta `$, $`\eta `$ and $`z`$ are dynamic exponents.
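In practice the exponents are extracted from log-log regressions of the epidemic observables; below is a sketch with synthetic power-law data standing in for the simulation output:

```python
import numpy as np

def exponent(t, y):
    """Least-squares slope of log(y) versus log(t)."""
    slope, _ = np.polyfit(np.log(t), np.log(y), 1)
    return slope

t = np.logspace(1, 5, 50)
A, Ps, D = t**0.308, t**-0.165, t**0.625   # stand-ins for simulation data

print("eta   =", exponent(t, A))
print("delta =", -exponent(t, Ps))
print("z/2   =", exponent(t, D))
```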
## III Results and discussion
Preliminary simulations show that for $`P_r=1`$ it is possible to achieve a stationary reactive state in the large-$`R`$ limit (i.e. for $`R\gtrsim 2`$), while in the opposite limit ($`R\lesssim 1.5`$) the system becomes irreversibly saturated by $`B`$-species. In order to obtain a quantitative description of the IPT we have performed epidemic studies around the critical edge. Figure 1 (a–c) shows log-log plots of $`A`$, $`P_s`$ and $`D`$ versus time $`t`$, obtained for different parameter values. All three plots exhibit a power-law behavior, which is the signature of a critical state. Using smaller (greater) $`R`$-values we observe slight upward (downward) deviations in the three plots, which indicate supercritical (subcritical) behavior (these results are not shown for the sake of clarity). The critical exponents obtained by regressions are:
$$\eta =0.308\pm 0.004\hspace{1em}\delta =0.165\pm 0.003\hspace{1em}z/2=0.625\pm 0.002$$
(7)
These values are in excellent agreement with the exponents corresponding to the DP universality class in $`1+1`$ dimensions . Recently, extended series expansion calculations have provided very accurate values for the DP critical exponents, namely
$$\eta =0.31368(4)\hspace{1em}\delta =0.15947(3)\hspace{1em}z/2=0.63261(2)$$
(8)
Therefore we conclude that the studied adsorption-reaction model on a continuous medium belongs to the DP universality class, like many other systems already studied on discrete lattices. It should be noticed that the present model has infinitely many absorbing states, so, as in the case of the dimer-dimer model , the DP conjecture holds for non-unique absorbing states, at least as long as the absorbing states can be solely characterized by the vanishing of a single scalar order parameter.
We have also studied the case of imperfect reaction, i.e. $`P_r<1`$. Figure 2 shows a plot of the phase diagram. The phase boundary curve was determined by means of an epidemic analysis, as shown in Figure 1. The obtained critical exponents are
$$\eta =0.312\pm 0.004\hspace{1em}\delta =0.157\pm 0.003\hspace{1em}z/2=0.631\pm 0.001$$
(9)
Once again, these exponents are in good agreement with those corresponding to DP. Scanning the whole critical curve, we obtain second-order IPTs that belong to the DP universality class. However, the special case $`R\to \infty `$ merits further discussion. For $`P_r=1`$ ($`P_r=0`$) the system evolves towards a reactive (absorbing) state, respectively. Then, a transition is expected at some intermediate $`P_r`$ value. In this case, the time evolution of the active area can be described by means of a mean-field equation. In fact, the active area $`A(t)`$ will grow (decrease) proportionally to $`A(t)P_r`$ ($`A(t)(1-P_r)`$), respectively; so
$$\frac{dA}{dt}=A(t)P_r-A(t)(1-P_r)$$
(10)
which leads to
$$A(t)=A_0e^{(2P_r-1)t}$$
(11)
Therefore, $`P_r=1/2`$ is a critical value such that for $`P_r>1/2`$ ($`P_r<1/2`$) the active area will increase (decrease) exponentially in time, while just at criticality $`A(t)`$ will remain constant ($`A(t)=A_0`$), which is consistent with a mean-field exponent $`\eta _{MF}=0`$. The predicted behavior is confirmed by means of simulations, as shown in Figure 3. By means of linear regressions the following exponents are obtained for $`P_r=\frac{1}{2}`$:
$$\eta \approx 0.0\hspace{1em}\delta \approx 1.0\hspace{1em}z/2\approx 1.0$$
(12)
Then, our mean field estimate for $`\eta `$ is in good agreement with the simulation results. Regrettably, we were unable to derive the mean field values for the remaining exponents.
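The mean-field statement of Eqs. (10)-(11) is easy to verify numerically; a simple Euler integration reproduces the exponential growth, decay, or marginal behavior:

```python
import math

def evolve(pr, a0=1.0, t_max=5.0, dt=1e-4):
    """Euler integration of dA/dt = A*Pr - A*(1 - Pr)."""
    a, t = a0, 0.0
    while t < t_max:
        a += a * (2.0 * pr - 1.0) * dt
        t += dt
    return a

for pr in (0.4, 0.5, 0.6):
    print(pr, evolve(pr), math.exp((2.0 * pr - 1.0) * 5.0))  # vs Eq. (11)
```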
We conclude that the particular point $`P_r=1/2`$, $`R\to \infty `$ is a first-order point (see Figure 2), which is not in the DP class that characterizes the whole critical curve.
In the following, we give theoretical arguments, by means of a coarse-grained Langevin description, that support the result concerning the universality class of the model. First, note that the normalized variables needed to characterize the configurations of the system are the amount of active area $`a(x,t)`$, the number of monomers in the system $`n(x,t)`$, and the amount of inactive area $`v(x,t)`$. These variables are not independent since we have the constraint $`a(x,t)+n(x,t)+v(x,t)=1`$. It is clear from the above discussion that the time evolution of the system ends when $`a(x,t)=0`$. Since $`a(x)\to 0`$ at criticality, this quantity can be chosen as the order parameter of the system. Then, we will try to describe the time evolution of the system near the critical point by means of two coupled Langevin equations, for instance, one for $`a(x,t)`$ and the other for $`n(x,t)`$. Due to the nature of the absorbing configurations, each term of these equations must vanish when $`a(x,t)\to 0`$.
Let us consider the microscopic processes which are relevant to characterize the critical behavior of the system. First of all, diffusion of both $`a(x)`$ and $`n(x)`$ can be interpreted as successive adsorption-reaction processes. Within a one-site description, both $`n(x)`$ and $`a(x)`$ will increase proportionally to $`a(x)`$. The reaction processes will contribute to the equations with a coupling term proportional to $`a(x)n(x)`$. It is also clear that monomer flights will introduce terms proportional to $`a(x)^2`$, $`a(x)^3`$, etc. Since only the lower-order terms are relevant for a renormalization group treatment , we keep just the term proportional to $`a(x)^2`$. Then, we can write down the following Langevin equations:
$$\partial n(x,t)/\partial t=k_1\nabla ^2a(x,t)+k_2a(x,t)-k_3a(x,t)^2-k_4a(x,t)n(x,t)+\eta _1(x,t)$$
(13)
$$\partial a(x,t)/\partial t=u_1\nabla ^2a(x,t)+u_2a(x,t)-u_3a(x,t)^2-u_4a(x,t)n(x,t)+\eta _2(x,t)$$
(14)
where $`\eta _1(x,t)`$ and $`\eta _2(x,t)`$ are uncorrelated noises proportional to $`\sqrt{a(x,t)}`$, and $`k_i`$ and $`u_i`$ are coefficients. This system of coupled Langevin equations is similar to that obtained for the ‘pair contact process’ , which is one of the prototype systems with multiple absorbing states. Muñoz et al. have shown that for large $`t`$ the equation corresponding to the activity (equation (14) for the present model) reduces to the Langevin representation of DP. Thus, our simulation results are consistent with the above-presented theoretical arguments. The same authors have also shown that systems with many available absorbing configurations display strong memory effects that may lead to anomalous scaling. In addition, Mendes et al. have proposed a generalized hyperscaling relation which has proved to be valid in systems with multiple absorbing configurations. Simulation results on several lattice models with infinitely many absorbing states support both theoretical arguments . The role that initial states play in the temporal evolution of the present model is under investigation.
## IV Conclusions and final remarks
A model for the irreversible adsorption-reaction of a single species on a continuous medium is studied by means of numerical simulations. We would like to stress and comment upon the following interesting features of the system: (i) in contrast to standard (reversible) transitions, non-equilibrium IPT can happen in one dimension. (ii) The studied adsorption-reaction model clearly shows interesting new effects that may arise when a process is modelled on a continuous medium. Since the system always reaches a stationary reactive state when simulated on a discrete lattice, but a final poisoned state can be observed on a continuous one (e.g. for $`P_r=1`$ and $`R=0`$), one may expect a crossover behavior when the ’discretization degree’ of the surface is tuned from one extreme to the other. This can be achieved by considering the adsorption on discrete lattices of species of arbitrary length $`r`$, i.e. $`r`$-mers. We found that the reactive random sequential adsorption (RRSA) of dimers ($`r=2`$) always leads to a reactive steady state, whose stationary coverage is close to $`\mathrm{\Theta }\approx 0.5`$, and no poisoning is observed. However, the RRSA of trimers ($`r=3`$) causes irreversible poisoning of the lattice with an average saturation coverage close to $`\mathrm{\Theta }_{ST}\approx 0.7`$. In the case of dimers, two absorbing states of the type
$`\cdots BBVBBVBBVBB\cdots `$ (15)
$`\cdots VBBVBBVBBVBBV\cdots `$ (16)
where $`V`$ is an empty site, can be expected. So, during the RRSA the formation of several interfaces of the type $`\cdots BBVBBVVBBVBB\cdots `$ takes place. Due to coarsening, we expect that in the asymptotic limit ($`t\to \infty `$) both absorbing states will form two semi-infinite domains separated by an interface. The competition between these domains will keep the system in the reactive state forever. Just by increasing $`r`$ from 2 to 3, an infinite number of absorbing configurations appear in the system. So, the arguments developed above no longer hold and poisoning is observed. Consequently, an IPT is located between $`r=2`$ and $`r=3`$, and its precise location requires the study of the adsorption-reaction problem with particles of non-integer length. However, adsorption of $`r`$-mers of length $`r=2+ϵ`$ would cause the argument of the two competing absorbing states to fail. Therefore, we expect that the critical size, i.e. the ’discretization degree’, is $`2`$. (iii) To the best of our knowledge, this is the first off-lattice model which exhibits a second-order IPT in the DP universality class. Consequently, this result once again supports the DP conjecture, which can now be generalized to off-lattice models with infinitely many absorbing states.
We expect that the interesting behavior of the present simple reaction model will stimulate further work on irreversible transitions and critical phenomena in continuous media, a field that, to the best of our knowledge, remains almost unexplored.
Acknowledgments: This work was financially supported by CONICET, UNLP, CIC (Provincia Bs. As.), ANPCyT, the Fundación Antorchas, and the Volkswagen Foundation (Germany).
# Cosmic Neutrinos and their Detection
## I Introduction
According to the standard Big Bang model, a cosmic neutrino background, similar to the familiar microwave background, should exist in our universe. Because of their weak interactions, cosmic neutrinos fell out of thermal equilibrium at $`t1\mathrm{sec}`$, and have been redshifting since then. Neutrinos of mass $`10^3`$ eV would still be relativistic today, have a Fermi-Dirac spectrum with $`T1.9`$ K, and be spatially uniform with $`n_{\nu _L}=n_{\overline{\nu }_R}50/\mathrm{cm}^3`$ per flavor. In contrast, neutrinos of mass $`10^3`$ eV would be nonrelativistic today, and be clustered around galaxies with a typical velocity $`v_\nu 10^3`$. They contribute to the cosmological energy density an amount $`\mathrm{\Omega }_\nu _im_{\nu _i}/(92h^2\mathrm{eV})`$, where $`h0.65`$ is the Hubble expansion rate in units of $`100\mathrm{km}\mathrm{s}^1\mathrm{Mpc}^1`$, and where the sum includes all neutrino flavors that are nonrelativistic today. Massive neutrinos are a natural candidate for the hot component in currently favored “Mixed Hot+Cold Dark Matter (HCDM)” models of galaxy formation. In this scenario, neutrinos would contribute $``$ 20 % ($`_im_{\nu _i}8\mathrm{eV}`$), and CDM (e.g. Wimps and axions) the remainder of the dark matter. Another popular model involving a cosmological constant is $`\mathrm{\Lambda }\mathrm{HCDM}`$ with $`\mathrm{\Omega }_\nu 0.050.1`$. The available phase space for massive neutrinos restricts the local neutrino number density to
$$n_\nu <\mathrm{\hspace{0.17em}2}\times 10^6\mathrm{cm}^3\left(\frac{v_m}{10^3}\right)^3\underset{i}{}\left(\frac{m_{\nu _i}}{10\mathrm{eV}}\right)^3.$$
(1)
where $`v_m`$ is the maximum neutrino velocity in the halo. Hence, neutrinos of $`m_\nu `$ eV, cannot account for all of the local halo matter density $`300\mathrm{MeV}/\mathrm{cm}^3`$.
The detection of cosmic neutrinos by interactions with single electrons or nucleons is unlikely because of the extremely small rates and energy deposits. Past proposals have therefore focused on detecting the coherent mechanical effects on macroscopic targets due to the “neutrino wind”. In 1974, Stodolsky suggested to use the energy splitting of an electron,
$$\mathrm{\Delta }E=\sqrt{2}G_Fv_e\left((n_{\nu _e}n_{\overline{\nu }_e})(n_{\nu _\mu }n_{\overline{\nu }_\mu })(n_{\nu _\tau }n_{\overline{\nu }_\tau })\right)$$
(2)
moving through the neutrino background with velocity $`v_e`$. However, even for large neutrino number asymmetries, e.g. $`n_{\overline{\nu }_e}=0`$, $`n_{\nu _{\mu ,\tau }}=n_{\overline{\nu }_{\mu ,\tau }}`$, the splitting $`\mathrm{\Delta }E\approx 1.3\times 10^{-33}\mathrm{eV}(v_e/10^{-3})(n_{\nu _e}/10^7\mathrm{cm}^{-3})`$ is still tiny. In principle, this effect could be measured by observing a torque on a permanent magnet, which is shielded against magnetic noise with superconductors. A benefit of this detection method is that it works equally well for Dirac and Majorana neutrinos. Other forces $`\propto G_F`$ arising from the reflection or refraction of neutrinos by macroscopic objects have been proposed by several authors . However, all of these ideas were later found to be flawed .
Another approach is to consider forces $`\propto G_F^2`$ due to random elastic neutrino scattering . Here, spatial coherence dramatically increases the cross section of targets smaller than the neutrino wavelength $`\lambda _\nu `$. In the nonrelativistic limit, one must distinguish between Majorana and Dirac neutrinos. For Dirac $`\mu `$ or $`\tau `$ neutrinos, the cross section is dominated by the vector neutral current contribution
$$\sigma _D\approx \frac{G_F^2m_\nu ^2}{8\pi }N_n^2=2\times 10^{-55}\left(\frac{m_\nu }{10\mathrm{eV}}\right)^2N_n^2\mathrm{cm}^2$$
(3)
where $`N_n`$ is the number of neutrons in the target of size $`<\lambda _\nu /2\pi =20\mu \mathrm{m}(10\mathrm{eV}/m_\nu )(10^{-3}/v_\nu )`$. For Majorana neutrinos the vector contribution to the cross section is suppressed by a factor $`(v_\nu /c)^2`$, and potentially the largest contribution arises from the axial current. The cross section of a spin-polarized target is $`\sigma _M\propto N_s^2`$ , where $`N_s`$ is the number of aligned spins in the grain.
Due to the Sun’s peculiar motion ($`v_s\approx 220\mathrm{km}/\mathrm{s}`$) through the galaxy, a test body will experience a neutrino wind force through random neutrino scattering events. The wind direction is modulated with the sidereal period of 23 hr 56 min due to the combined effects of the Earth’s diurnal rotation and annual revolution around the Sun. For Dirac neutrinos, the acceleration of a target of density $`\rho `$ and optimal radius $`\lambda _\nu /2\pi `$ has an amplitude
$$a=8\times 10^{-24}\left(\frac{A-Z}{A}\right)^2\left(\frac{v_\mathrm{s}}{10^{-3}}\right)^2\left(\frac{n_\nu }{10^7\mathrm{cm}^{-3}}\right)\left(\frac{\rho }{20\mathrm{gcm}^{-3}}\right)\frac{\mathrm{cm}}{\mathrm{s}^2}$$
(4)
and is independent of $`m_\nu `$. For clustered Majorana neutrinos, the acceleration is suppressed by a factor $`\approx 10^{-6}`$ ($`\approx 10^{-3}`$) for an unpolarized (polarized) target. For unclustered relativistic neutrinos, the force should be aligned with the direction of the microwave background dipole. Neutrinos of Dirac or Majorana type would have the same cross section, giving rise to an acceleration $`\approx 10^{-34}\mathrm{cm}/\mathrm{s}^2`$. A target size much larger than $`\lambda _\nu `$ can be assembled, while avoiding destructive interference, by using foam-like or laminated materials . Alternatively, grains of size $`\approx \lambda _\nu `$ could be randomly embedded (with spacing $`\gtrsim \lambda _\nu `$) in a low density host material .
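Equations (3) and (4) are simple plug-in formulas; the sketch below evaluates them for example parameters (the grain neutron number is an assumed round figure):

```python
def sigma_dirac(m_nu_ev, n_neutrons):
    """Coherent Dirac cross section of Eq. (3) in cm^2."""
    return 2e-55 * (m_nu_ev / 10.0) ** 2 * n_neutrons ** 2

def accel(frac_n=0.5, v_s=1e-3, n_nu=1e7, rho=20.0):
    """Neutrino-wind acceleration of Eq. (4) in cm/s^2;
    frac_n = (A - Z)/A, v_s in units of c, n_nu in cm^-3, rho in g/cm^3."""
    return 8e-24 * frac_n**2 * (v_s / 1e-3)**2 * (n_nu / 1e7) * (rho / 20.0)

print(sigma_dirac(10.0, 1e16))    # grain of roughly optimal size
print(accel(frac_n=0.6))
```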
## II Detector Concept
The measurement of the neutrino-induced forces and torques will require major improvements in sensor technology. At present, the most sensitive detector of small forces is the “Cavendish-type” torsion balance (see Figure 1), which has been widely used for measurements of the gravitational constant and searches for new forces. A typical arrangement consists of a dumbbell shaped test mass suspended by a tungsten fiber inside a vacuum chamber at room temperature. The angular deflection is usually read out with an optical lever. The torsional oscillation frequency is constrained by the yield strength of the fiber to $`>`$ 1mHz. Internal friction in the fiber limits the $`Q`$ to $`<10^6`$. Nonlinearities in the suspension make the balance quite sensitive to seismic noise. Additional background forces arise from thermal noise and time varying gravity gradients due to tides, weather, traffic, people etc. The angular resolution of optical levers is $`10^{10}\mathrm{rad}`$ and the smallest measurable acceleration is $`10^{13}\mathrm{cm}/\mathrm{s}^2`$ .
Several improvements seem possible (see Figure 2): Thermal noise can be decreased by lowering the temperature and by employing a low dissipation (high $`Q`$) suspension, as seen from the expression for the effective thermal noise acceleration
$$a_{\mathrm{th}}2\times 10^{23}\left(\frac{T}{1\mathrm{K}}\right)^{1/2}\left(\frac{1\mathrm{k}\mathrm{g}}{m}\right)^{1/2}\left(\frac{1\mathrm{d}\mathrm{a}\mathrm{y}}{\tau _0}\right)^{1/2}\left(\frac{10^6\mathrm{s}}{\tau }\right)^{1/2}\left(\frac{10^{16}}{Q}\right)^{1/2}\frac{\mathrm{cm}}{\mathrm{s}^2}$$
(5)
where $`m`$ is the target mass, $`\tau `$ is the measurement time, $`\tau _0`$ is the oscillator period, and $`T`$ is the operating temperature. A promising high-$`Q`$ suspension method is a Meissner suspension consisting of a superconducting body floating above a superconducting coil. Niobium or NbTi in bulk or film form have been used in the past in such diverse applications as gravimeters, gyros, and gravitational wave antennas. Generally the magnetic field applied to the superconductor is limited to $`B\lesssim 0.2\mathrm{T}`$ to avoid flux penetration or loss of superconductivity. However, even for small fields the magnetic flux exclusion is usually incomplete in polycrystalline Nb superconductors and flux creep will cause noise . The lifting pressure $`\propto B^2`$ is limited to about $`100\mathrm{g}/\mathrm{cm}^2`$. This could be dramatically increased by replacing the bulk superconductor with a persistent-mode superconducting magnet, as shown in Figure 2. Commonly used NbTi wire has a critical field of several T. Moreover, the flux lines are strongly pinned by artificial lattice defects in the magnet wire, leading to low levels of flux noise and dissipation. Very low resistance ($`\approx 10^{-14}\mathrm{\Omega }`$) wire joints can be made by cold welding, and the magnetic field decay rate can be as low as $`\dot{B}/B=R/L\approx 10^{-9}/\mathrm{day}`$, where $`R`$ is the joint resistance and $`L`$ is the inductance. The current decay due to losses in the joint will cause the suspended magnet to sink gradually. The concomitant change of magnetization will produce a slowly varying torque on the float due to the gyromagnetic (“Einstein-deHaas”) effect. Due to unavoidable deviations from cylindrical symmetry, the oscillator will have a very long, but finite rotational oscillation period. For small amplitudes, the oscillating supercurrents in the wire will cause the fluxoids to move elastically about their pinning centers, and very little dissipation is expected. In order to match the rotation period to the signal period ($`\approx 1`$ day), an additional restoring potential may be added. One possibility is a superconducting or low-loss dielectric (e.g. single-crystal sapphire) ellipsoid in the constant electric field of a parallel plate capacitor. Both negative and positive torsion coefficients can be produced by aligning the long axis of the ellipsoid perpendicular or parallel to the field.
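Eq. (5) evaluates as follows; the default arguments reproduce the quoted reference point, and the other values are illustrative:

```python
import math

def a_thermal(T=1.0, m=1.0, tau0=86400.0, tau=1e6, Q=1e16):
    """Thermal-noise acceleration of Eq. (5) in cm/s^2; T in K, m in kg,
    tau0 and tau in seconds."""
    return 2e-23 * math.sqrt((T / 1.0) * (1.0 / m) * (86400.0 / tau0)
                             * (1e6 / tau) * (1e16 / Q))

print(a_thermal())                  # 2e-23 cm/s^2 at the reference point
print(a_thermal(T=0.1, m=10.0))     # colder and heavier target
```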
The effects of time-varying gravity gradients can be reduced with a highly cylindrically symmetric target (see Figure 1). With the c.m. of the target centered below the suspension support, the leading-order gravitational torque arises from the dipole moment of the torsion balance. For a source mass $`M`$ at a distance $`R`$ from the balance c.m., the torque is
$$\tau _z\frac{G_NM}{R^3}(I_xI_y)$$
(6)
where $`I_x,I_y`$ are the moments of inertia about the two horizontal axes. Diurnally varying mass distributions, e.g. changes of the atmospheric air mass due to solar heating, are especially worrisome. These effects can be minimized with a careful mechanical design and subsequent balancing steps using a movable laboratory source mass. In addition, locating the experiment in a deep mine would be beneficial. The gravitational interactions of the balance with the Sun and the Moon are less dangerous, because they produce torques with a 12 hr period to first order.
Low frequency seismic noise can also be a performance limiting factor in torsion balance experiments. At a period of $`\approx 24`$ hrs, the noise is dominated by tidal effects, with typical variations in the strain of $`\mathrm{\Delta }L/L\approx 10^{-7}`$ , in the tilt about the local (g-referenced) vertical axis of $`\mathrm{\Delta }\theta \approx 10^{-6}`$ rad, and in the local gravitational acceleration of $`\mathrm{\Delta }g/g\approx 10^{-7}`$. Small deviations from ideal symmetry will couple these motions into the rotational mode of the balance. For example, the tilt-rotation coupling of a tungsten fiber suspension is typically $`\approx 0.02`$. Two possible remedies are (1) improving the symmetry of the balance, and (2) employing active anti-seismic isolation systems. A remaining worry is low frequency rotational seismic noise, for which no reliable data are available. It will directly mask the signal and needs to be compensated.
The proposed angular rotation readout has very high immunity against vertical, horizontal, and tilt seismic noise. It consists of a parametric transducer, which converts the angle to an optical frequency. As shown in Figure 3, the transducer consists of a high-$`Q`$ optical cavity of length $`l`$, tuned by a Brewster-angled low-loss dielectric plate of thickness $`d`$. Using high-reflectivity mirrors and a cavity length of $`\approx 10\mathrm{cm}`$ should give linewidths of order 100 kHz for the Gaussian $`\mathrm{TEM}_{00p}`$ modes, with a frequency tuning sensitivity of $`df/d\theta \approx f(d/l)`$ $`\approx 10^{14}\mathrm{Hz}/\mathrm{rad}`$. The angular measurement precision depends on the number of photons $`N`$ via $`\mathrm{\Delta }\theta \approx \lambda /(dF\sqrt{N})`$, where $`F`$ is the cavity finesse, and $`\lambda `$ is the laser wavelength. This is a factor $`F`$ better than the optical lever readout for equal laser power. Cryogenic optical resonators have excellent long-term stability and have been proposed as secondary frequency standards. The measured frequency drifts range from $`\approx 1\mathrm{Hz}`$ over minutes to $`\approx 100\mathrm{Hz}`$ over days. For the measurement of the rotation angle, a stable reference frequency will be required. This can be easily implemented using a laser locked to a second cavity. The described angle readout has little sensitivity to lateral, tilt, and vertical motion, but couples to rotational noise. A possible solution would be to suspend the target as well as the optical cavity in order to suppress the common rotation mode. The cavity suspension should have a much longer natural period than the target suspension to avoid its excitation at the signal frequency. A different approach to suppress rotational noise would employ two identical torsion balances, but rotated by 180 degrees with respect to each other.
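The shot-noise-limited angular resolution quoted above follows from $`\mathrm{\Delta }\theta \approx \lambda /(dF\sqrt{N})`$; a back-of-the-envelope evaluation with assumed parameters:

```python
def dtheta(lam=1e-6, d=1e-3, finesse=1e4, n_photons=1e16):
    """Shot-noise angular resolution in rad; lam and d in meters.
    All input values here are illustrative assumptions."""
    return lam / (d * finesse * n_photons ** 0.5)

print(dtheta())   # ~1e-15 rad for these assumptions
```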
Additional background forces will arise from gas collisions, cosmic ray hits, radioactivity, and solar neutrino and WIMP interactions, resulting in a Brownian motion of the target. The equivalent acceleration is $`a\sim (\overline{p}/m)\sqrt{n/\tau }`$, where $`\overline{p}`$ is the average momentum transfer, and $`n`$ is the collision or decay rate. The requirements on the residual gas pressures are very severe, with only a few collisions with the target allowed per second. Cryopumping and the use of getters will be essential. The cosmic muon flux at sea level is about $`0.01\mathrm{cm}^{-2}\mathrm{s}^{-1}`$ with a mean energy of $`1\mathrm{GeV}`$. Thus, a target of area $`100\mathrm{cm}^2`$ will experience a collision rate $`𝒪(1)\mathrm{s}^{-1}`$, causing an acceleration comparable to the signal in Equation 4. Cosmic rays can also lead to a net charge buildup on the torsion balance and spurious electrostatic forces. This background can be reduced by several orders of magnitude by going underground. Further disturbing forces may be caused by time-varying electric and magnetic background fields, which can be shielded with superconductors. Blackbody radiation and radiometric effects would be greatly reduced in a temperature controlled cryogenic environment.
Finally, there is a fundamental limit imposed by the uncertainty principle. The accuracy of a force measurement obtainable by continuously recording the test mass position (standard quantum limit) is
$$a_{\mathrm{SQL}}=5\times 10^{-24}\left(\frac{10\mathrm{kg}}{m}\right)^{1/2}\left(\frac{1\mathrm{day}}{\tau _0}\right)^{1/2}\left(\frac{10^6\mathrm{s}}{\tau }\right)\frac{\mathrm{cm}}{\mathrm{s}^2}$$
(7)
where $`m`$ is the test mass, $`\tau _0`$ is the oscillator period, and $`\tau `$ is the measurement time. In our proposed position readout, the disturbing back action force arises from spatial fluctuations in the photon flux passing through the central tuning plate. A laser beam pulse containing $`N`$ photons, and centered on the plate rotation axis, will produce a random change in the angular momentum of the plate of $`\mathrm{\Delta }L\approx F\sqrt{N}\mathrm{\hbar }\omega d/c`$. Here, $`d`$ is the average transverse photon displacement with respect to the rotation axis, which is of the order of the plate thickness. A slightly higher sensitivity can be obtained with a stroboscopic “quantum nondemolition” (QND) measurement, where the test mass position is recorded every half period. Here,
$$a_{\mathrm{QND}}=a_{\mathrm{SQL}}\sqrt{\omega _0\mathrm{\Delta }t}$$
(8)
and $`\mathrm{\Delta }t`$ is the strobe duration.
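For other oscillator parameters the two quantum limits scale directly from Eqs. (7) and (8); a minimal sketch (the strobe duration used in the example is an arbitrary assumption):

```python
import math

def a_SQL(m, tau0, tau):
    """Standard quantum limit of Eq. (7), in cm/s^2 (m in kg, times in s)."""
    return 5e-24 * math.sqrt(10.0 / m) * math.sqrt(86400.0 / tau0) * (1e6 / tau)

def a_QND(m, tau0, tau, dt):
    """Stroboscopic QND limit of Eq. (8) for a strobe of duration dt."""
    omega0 = 2 * math.pi / tau0
    return a_SQL(m, tau0, tau) * math.sqrt(omega0 * dt)

print(a_SQL(10.0, 86400.0, 1e6))        # reference point: 5e-24 cm/s^2
print(a_QND(10.0, 86400.0, 1e6, 1.0))   # gain from a 1 s strobe
```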
There is hope that cosmic neutrinos will be detected in the laboratory early in the next century, especially if they have masses in the eV range and are of Dirac type. The most viable way seems to be a greatly improved torsion balance operated in a deep mine.
Naturally, the proposed torsion balance could also be applied to tests of the weak equivalence principle with the Sun as the source mass, and to searches for new short range $`(\sim 1\mathrm{m})`$ forces with a movable laboratory mass.
## ACKNOWLEDGMENTS
This research was performed by LLNL under the auspices of the U.S. Department of Energy under contract no. W-7405-ENG-48. I thank E. Adelberger, S. Baessler, I. Ferreras, A. Melissinos, R. Newman, P. Smith, and W. Stoeffl for useful discussions.
# Globally coupled bistable elements as a model of group decision making
## Abstract
A simple mathematical model is proposed to study the effect of the average trend of a population on the opinion of each individual, when a group decision has to be made by voting. It is shown that if this effect is strong enough a transition to coherent behavior occurs, in which the whole population converges to a single opinion. This transition has the character of a first-order critical phenomenon.
Group decision making is a complex social process in which the inherent factors that determine the position of each individual –such as previous experience, prospective benefits, current personal circumstances, and character– interact in a nontrivial way with the average trend, to which individuals are exposed through communication between them. Group decision results from this interaction as an emerging property of their collective behavior.
Consider as a specific case an ensemble of individuals that have to choose, at a given time in the future and by individual voting, an option among a prescribed set of instances. After vote counting, the decision is simply taken to be the most voted option. This decision –which, once votes have been emitted, is straightforwardly defined– results however from the complex collective process that builds up the opinion of each individual. During a certain period previous to the voting act, in fact, the opinion of a given individual evolves due to the modulation that the knowledge of another’s position imposes on one’s own tendency. In an efficiently communicating ensemble, like any modern population, individuals are continuously exposed to the average opinion of the ensemble –for instance, through poll results, published by mass communication media– and are expected to be more or less strongly influenced by this collective element.
This paper is aimed at exploring, in the frame of a simple mathematical model, how the effects of the personal trend and of the average opinion in defining the individual vote combine with each other to lead the group to its collective decision. For the sake of concreteness, suppose that the population has to choose by voting between two candidates, $`C^+`$ and $`C^-`$. In the model, the time evolution of the opinion of the $`i`$-th individual is described by a variable $`x_i(t)`$, with $`x_i\in [-1,1]`$ for all $`i`$ and $`t`$. Large values of $`x_i`$, $`|x_i|\simeq 1`$, are to be associated with a firm decision to vote for one of the two candidates ($`C^+`$ for $`x_i>0`$ and $`C^-`$ for $`x_i<0`$, say), whereas small values of $`x_i`$ correspond to a looser opinion. In any case, when the voting act takes place the individual opinion is quenched and the decision is made according to the sign of $`x_i`$ at that time. In practice, it is supposed that the typical evolution times for the individual opinion are shorter than the time elapsed up to the voting act, so that attention will be focused on the long-time asymptotics of the model.
The average opinion, which is expected to play a relevant role in the definition of the individual decision, is here characterized by the arithmetic mean value
$$\overline{x}(t)=\frac{1}{N}\underset{i}{}x_i(t),$$
(1)
where $`N`$ is the size of the population. This mean value is a measure of how well defined the global trend towards one of the two candidates is. In fact, the sign of $`\overline{x}`$ at the time of the voting act determines the chosen candidate.
To stress the effect of the average opinion on the individual vote it is assumed that, in the absence of such effect, each individual would simply reinforce his or her initial personal opinion as time elapses. This means, in particular, that a given individual would not change his or her original preference for one of the candidates. This behavior is well represented by the following dynamical equation for $`x_i(t)`$:
$$\frac{dx_i}{dt}=x_i-x_i^3\qquad (i=1,2,\dots ,N).$$
(2)
In fact, the solution to this equation approaches the asymptotic value $`x_i(\infty )=+1`$ or $`x_i(\infty )=-1`$ depending on the initial value $`x_i(0)`$ being positive or negative, respectively. Moreover, $`x_i(t)`$ does not change its sign during the whole evolution. If $`x_i(0)=0`$, then $`x_i(t)=0`$ for all $`t`$, but this stationary state is unstable. These facts can readily be verified from the explicit solution to Eq. (2), which reads,
$$x_i(t)=\frac{x_i(0)}{\sqrt{x_i(0)^2-[x_i(0)^2-1]\mathrm{exp}(-2t)}}.$$
(3)
From a dynamical viewpoint, Eq. (2) implies that each individual behaves as a bistable element, its asymptotic state being fixed by the initial condition. In physics, this kind of model has been used to study spin systems (in the soft-spin approximation) and neural networks.
The effect of the average opinion on the evolution of the individual trend is described by modifying Eq. (2) in the following way:
$$\frac{dx_i}{dt}=x_i-x_i^3+k_i(\overline{x}-x_i)\qquad (i=1,2,\dots ,N),$$
(4)
where $`\overline{x}(t)`$ has been defined in (1) and $`k_i`$ is a constant that, as discussed in the following, measures the influence of the average opinion on the $`i`$-th individual. For $`k_i>0`$ the new terms drive $`x_i(t)`$ towards the average $`\overline{x}(t)`$. In fact, for large values of $`k_i`$ and slowly varying $`\overline{x}`$, the individual variable $`x_i`$ would exponentially approach the average. The new terms thus represent, for positive $`k_i`$, a trend of the $`i`$-th individual to follow the average opinion, which can either reinforce or compete with his or her individual position. Negative values of $`k_i`$ would correspond to individuals who tend to take a position opposite to the average.
In a physical context, the new terms represent an “interaction” between individuals. From a mathematical viewpoint, in fact, those terms couple the set of equations (4) through the average $`\overline{x}`$, which depends on the whole set of $`x_i`$ $`(i=1,\dots ,N)`$. This coupling makes it impossible to give the exact solution to the model equations (4), and the system has to be treated numerically. In particular, note that it is not possible to derive an autonomous equation for the evolution of the average $`\overline{x}(t)`$.
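A minimal numerical sketch of Eqs. (1) and (4) is given below (forward-Euler integration; population size, time step, total time and random seed are arbitrary choices made for illustration). For $`k=0`$ the final states split into two clusters at $`\pm 1`$, while for $`k=1`$ they collapse onto a single value:

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(k, N=1000, T=20.0, dt=0.01):
    """Integrate Eq. (4) with uniform coupling k_i = k; returns final states."""
    x = rng.uniform(-1.0, 1.0, N)      # opinions start uniform in [-1, 1]
    for _ in range(int(T / dt)):
        xbar = x.mean()                # average opinion, Eq. (1)
        x += dt * (x - x**3 + k * (xbar - x))
    return x

for k in (0.0, 0.3, 1.0):
    x = evolve(k)
    print(f"k = {k}: fraction with x > 0 = {np.mean(x > 0):.2f}, "
          f"spread = {x.std():.3f}")
```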
The case where the coupling constant $`k_i`$ is positive and the same for all individuals, $`k_i=k>0`$ for all $`i`$, is considered first. Obviously, for $`k=0`$ the uncoupled ensemble –whose behavior has been discussed above– is recovered. For $`k=1`$, Eq. (4) reduces to
$$\frac{dx_i}{dt}=\overline{x}-x_i^3.$$
(5)
Let $`r_{ij}=x_i-x_j`$ be the difference between the states of any two individuals. It can be shown from Eq. (5) that, if $`-1<x_i,x_j<1`$, $`r_{ij}`$ tends to zero as time elapses. In other words, for $`k=1`$ the model predicts that all the individuals will have the same opinion at sufficiently long times.
Figure 1 shows the evolution of $`x_i`$ in a population of 1000 individuals for $`k=0`$ and $`k=1`$. For the sake of clarity, only 100 variables are displayed. Initially, the individual states are randomly distributed in $`[-1,1]`$. As expected, for $`k=0`$ the states are soon divided into two clusters, according to their signs. For $`k=1`$, instead, all states are attracted to a single cluster. Since for this value of the coupling constant the average opinion is already dominant and all the individuals behave in a coherent way, it can be predicted that for $`k>1`$ the dynamics of the ensemble is qualitatively the same. This is indeed verified from numerical results. On the other hand, for $`0<k<1`$ a transition is expected to occur between the two qualitatively different behaviors observed at $`k=0`$ and $`k=1`$. This transition is characterized in the following.
According to numerical calculations, for sufficiently small values of the coupling constant the collective behavior qualitatively reproduces the evolution of the uncoupled ensemble ($`k=0`$). In fact, if the initial distribution of $`x_i`$ is uniform over $`[-1,1]`$ the population becomes divided into two groups –as when, for $`k=0`$, both signs are initially present. If, instead, one of the signs is much more abundant than the other, all the variables may ultimately converge to one of the extreme values $`x_i=\pm 1`$ –as when, for $`k=0`$, only one sign is initially present. The interacting ensemble is therefore “bistable,” in the sense that two qualitatively different asymptotic states are observed depending on the initial condition: either all individuals behave coherently, or they become divided into two groups. On the other hand, as stated above, for larger values of $`k`$ only coherent behavior is observed. Figure 2 illustrates these behaviors for intermediate values of $`k`$.
The transition between bistable and coherent behavior is due to a stability change in the possible asymptotic states of the coupled ensemble. Suppose that, as the system evolves, the $`N`$ individuals become divided into two groups. One of them, with $`pN`$ individuals ($`0\le p\le 1`$), approaches the asymptotic state $`X_1`$, whereas the other, with $`(1-p)N`$ individuals, approaches $`X_2`$. It has to be stressed that the value of $`p`$ depends in a nontrivial way on the initial condition, and cannot be analytically determined a priori. According to Eq. (4) the following identities should hold for $`t\to \infty `$ and $`N\to \infty `$:
$$\begin{array}{c}0=(1-k)X_1+k[pX_1+(1-p)X_2]-X_1^3\\ 0=(1-k)X_2+k[pX_1+(1-p)X_2]-X_2^3.\end{array}$$
(6)
These equations include also the case of coherent behavior, if one puts $`X_1=X_2`$ with any value of $`p`$. Their solutions constitute the set of stationary states for the system, whose stability can be studied by means of standard linearization around equilibria.
Equations (6) can be reduced to a 9th-degree polynomial equation for either $`X_1`$ or $`X_2`$, and have therefore nine solutions –which in general are complex numbers. The trivial solution $`X_1=X_2=0`$ is unstable. The remaining eight solutions can be grouped into symmetrical pairs, ($`X_1,X_2`$) and ($`-X_1,-X_2`$), both with the same stability properties. It is therefore enough to analyze, for instance, the four solutions with $`X_1\ge 0`$. (i) The first one, $`X_1=X_2=1`$, is stable for all $`k`$ and corresponds to the asymptotic state of coherent evolution. (ii) The second solution is real and unstable for all $`k`$. It approaches the unstable solution $`(0,1)`$ for $`k\to 0`$ and the trivial solution for $`k\to 1`$. (iii) Another unstable solution approaches the unstable state $`(1,0)`$ as $`k\to 0`$. (iv) Finally, there is a stable solution that approaches $`(1,-1)`$ as $`k\to 0`$. This solution corresponds to the case where the individuals have become divided into two groups.
Figure 3 shows the numerical results for $`X_1`$ and $`X_2`$ as a function of $`k`$, for $`p=0.52`$. Solid lines indicate stable solutions whereas dashed lines stand for unstable solutions. As the coupling constant grows, there is a critical value $`k_c`$ at which the two solutions (iii) and (iv) collide and become complex. At this critical value, the solution where the population is divided into two groups disappears. The value of $`k_c`$ is related to $`p`$ by the following polynomial equation:
$$4k_c-18k_c^2(p-p^2)+27k_c^4(p^2-2p^3+p^4)=1.$$
(7)
Thus, for a given value of $`p`$ –which is determined by the initial condition– and $`k<k_c`$, two qualitatively different behaviors can occur. Either $`X_1=X_2=\pm 1`$, and the system evolves coherently, or $`X_2\ne X_1`$, and the individuals are divided into two groups. For $`k>k_c`$, instead, only coherent behavior is possible. Thus, for sufficiently large $`k`$, the opinion of the whole population approaches the same state. Figure 4 shows a phase diagram $`k`$ versus $`p`$, where the boundary between the zones of bistability and coherence given by Eq. (7) is shown.
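The boundary of Eq. (7) is easy to trace numerically; a sketch that finds the smallest positive root by grid scanning and bisection (grid size and bracket are implementation choices) is:

```python
import numpy as np

def k_c(p, kmax=2.0, n=20001):
    """Smallest k > 0 solving Eq. (7) for a given group fraction p."""
    a = p * (1.0 - p)
    f = lambda k: 4*k - 18*a*k**2 + 27*a**2*k**4 - 1.0
    k = np.linspace(0.0, kmax, n)
    i = np.argmax(f(k) > 0)            # first grid point past the root
    lo, hi = k[i - 1], k[i]
    for _ in range(60):                # bisection refinement
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

print(k_c(0.50))   # evenly split groups survive up to k_c = 2/3
print(k_c(0.52))   # the value of p used in Figs. 3 and 5
```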
The transition between bistability and coherence can be characterized by a single order parameter introducing, for instance, the mean difference $`\delta `$ between the states of any pair of individuals,
$$\delta =\frac{1}{2}p(1-p)|X_1-X_2|,$$
(8)
which has been plotted in Fig. 5 as a function of $`k`$ for $`p=0.52`$. The dependence of $`\delta `$ on the coupling constant suggests classifying the transition as a first-order critical phenomenon.
Naturally, the assumption that all the individuals in the population are equally influenced by the average opinion –i.e. that all the individuals have the same coupling constant $`k`$– is not a very realistic one. Rather, it is to be expected that the coupling constants are distributed within a certain interval, with some individuals being more influenced by another’s opinion than others. It could moreover be supposed that some individuals are negatively affected by the mean trend, tending to make their own opinion diverge from the average. This case would correspond to $`k_i<0`$.
In this case of inhomogeneous behavior it can be easily shown from Eq. (4) that the asymptotic state of $`x_i(t)`$ depends on the value of $`k_i`$. This relation is implicitly given by the equation
$$k_i=\frac{x_i^3-x_i}{\overline{x}-x_i}.$$
(9)
Numerical simulations show however that, as long as the distribution of coupling constants $`k_i`$ is moderately narrow, the qualitative collective behavior is the same as for uniform $`k`$. Mathematically, it is not an easy task to characterize the situation in which this behavior breaks down as the values of $`k_i`$ become more and more scattered. It is nevertheless expected that coherent evolution can be destroyed if the coupling constants are sufficiently different from each other, including in particular some negative values. A sufficient condition for coherence to fail is in fact that a single individual has a coupling constant $`k_i<-2`$. In this situation, however, it is only this individual who fails to behave coherently, thus not affecting the result of the collective decision making.
In summary, it has been shown here within a simple mathematical model that, under a sufficiently strong influence of the average trend of a population on the opinion of each individual, the population behaves coherently and votes converge towards a single candidate. For weaker coupling between individuals, instead, votes are more evenly divided between the two candidates. The transition between both behaviors is abrupt and, in fact, has the character of a critical phenomenon. This is qualitatively similar to the ferromagnetic phase transition observed in spin systems –though in this physical phenomenon the phase transition is of the second order. Indeed, beyond the critical point the states of all the elements in the system coincide, even if the initial condition corresponds to a uniform distribution of states. The coupling mechanism is in fact able to break the initial macroscopic homogeneity, enhancing microscopic fluctuations. It would be interesting to further analyze this model, including for instance local communication channels between individuals as well as noise, which can perturb in a nontrivial way the properties of the quoted transition.
The present results could encourage unfair, unscrupulous candidates to manipulate poll results published in mass media –if they have the power to do so– for their own benefit.
# Asymptotic Giant Branch Stars as Astroparticle Laboratories
## 1 Introduction
It is well known that the standard theory does not predict several properties of the elementary particles (such as mass and magnetic moment). Furthermore, the different non-standard theories leave open the possibility that unknown (exotic) particles could still exist. In the recent past, several attempts have been made to understand how these particles could modify stellar evolution and, in turn, to use these modifications to constrain particle theory (see Raffelt 1996 for a recent review). The general procedure consists in the comparison of the observed properties of selected stars (or of a cluster of stars) with the prediction of theoretical stellar models obtained under different assumptions about the microphysics input.
Axions and WIMPs (Weakly Interacting Massive Particles) are the most promising candidates for non-baryonic dark matter. No accelerator tests for axions in the acceptable mass range are available to date. In this situation, stars in general, and especially the best-known one, the Sun, are being widely used to test particle physics. Several experiments to detect solar and galactic axions have been performed (Sikivie 1983; Lazarus et al. 1992). Experiments with better sensitivity are currently underway (van Bibber et al. 1994; Hagmann et al. 1996 and 1998 (U.S. Axion Search); Matsuki et al. 1996 (Kyoto experiment CARRACK); Moriyama 1998, Moriyama et al. 1998; Avignone III et al. 1998 and Gattone et al. (SOLAX collaboration)) and there is a proposal to use the Large Hadron Collider at CERN (Zioutas et al. 1998) to detect solar axions. For the moment, results have proved to be negative and therefore the existence of axions is an attractive but speculative hypothesis.
The main purpose of the present paper is to show that Asymptotic Giant Branch (AGB) stars can also be used successfully as astroparticle physics laboratories. Since they are very bright, their photometric and spectroscopic properties are well known; for example, the AGB luminosity function can be used to test the efficiency of any phenomenon which is related with the production, removal or transport of energy. Furthermore, the possibility of observing directly the ongoing nucleosynthesis provides a rare opportunity to obtain information about the internal physical conditions. We recall that AGB stars are formed by a dense and degenerate core made up of carbon and oxygen (the CO core) surrounded by two interacting burning shells. The typical densities and temperatures of the CO core and of the He and H-rich layers make these stars a suitable environment to check the reliability of the theory of nuclear and particle physics.
In order to illustrate the possible use of AGB stars in the framework of astroparticle research, we will address the question of the existence of axions (Peccei and Quinn 1977a and 1977b). There are two types of axion models, the hadronic or KSVZ model (Kim 1979, Shifman et al. 1980) and the DFSZ model (Zhitnitskii 1980, Dine et al. 1981). In the first model, the axion couples to hadrons and photons, and in the second one also to charged leptons. The axion mass is $`m_{\mathrm{ax}}=0.62\mathrm{eV}(10^7\mathrm{GeV}/f_\mathrm{a})`$, where $`f_\mathrm{a}`$ is the Peccei–Quinn scale, and the axion coupling to matter is proportional to $`f_\mathrm{a}^{-1}`$. In principle, the model does not put any constraint on the value of $`f_\mathrm{a}`$, so the limits on $`m_{\mathrm{ax}}`$ must be obtained from experimental arguments.
Within the accepted mass range of DFSZ axions, 10<sup>-5</sup> eV $`\lesssim m_{\mathrm{ax}}\lesssim `$ 0.01 eV (see Raffelt, 1998), axions produced in stellar interiors can freely escape from the star and remove energy, as neutrinos do. The possibility of using stellar interiors to constrain the axion mass was recognised early (Fukugita et al. 1982). The axion energy loss rates depend on the axion coupling strength to electrons, photons and nucleons, and the corresponding coupling constants depend on the axion mass and other model-dependent constants. In this way axion mass limits are obtained.
Axions may couple to electrons (Kim 1987; Cheng 1988; Raffelt 1990; Turner 1990 and Kolb and Turner 1990); the axionic fine structure constant is taken as $`\alpha =g^2/4\pi `$, where g is a dimensionless coupling constant that in the DFSZ models is $`g=2.83\times 10^{-14}m_\mathrm{a}cos^2\beta `$, $`m_\mathrm{a}`$ is the axion mass in meV (10<sup>-3</sup> eV) and $`cos^2\beta `$ is a model-dependent parameter that is usually set equal to 1.
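For orientation, these relations are straightforward to evaluate; the sketch below encodes the mass–scale relation and the electron coupling quoted above (the $`10^{-14}`$ normalization of $`g`$ is the one consistent with the DFSZ relation; the printed exponent was reconstructed):

```python
import math

def axion_dfsz(m_a_meV, cos2beta=1.0):
    """f_a, electron coupling g, and alpha = g^2/(4 pi) for a DFSZ axion of
    mass m_a (in meV), using the relations quoted in the text."""
    f_a_GeV = 0.62e7 / (m_a_meV * 1.0e-3)   # from m_ax = 0.62 eV (1e7 GeV/f_a)
    g = 2.83e-14 * m_a_meV * cos2beta
    return f_a_GeV, g, g**2 / (4 * math.pi)

for m in (8.5, 20.0):                        # Cases 1 and 2 considered below
    print(m, axion_dfsz(m))
```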
In models in which axions couple to electrons, the two most interesting types of axion interaction that can occur in stellar interiors are photoaxions ($`\gamma +e\to e+a`$), which is a particular branch of the Compton scattering, and bremsstrahlung axions ($`e+[Z,A]\to [Z,A]+e+a`$) (see Raffelt, 1990). Figure 1 illustrates the axion energy loss rates for these two kinds of interactions, for two chemical compositions. Compton emission strongly increases with temperature (approximately as T<sup>6</sup>) and is almost independent of the density. It is, however, inhibited by electron degeneracy. Thus, Compton emission is important in non-degenerate He-burning regions, when the temperature and, in turn, the photon density are high enough. Bremsstrahlung requires greater densities and is not damped at large electron degeneracies. Thus, it is dominant in degenerate He and CO cores.
As yet, few works have included axion interactions in stellar model computations (see Raffelt 1996 and references therein). Isern, Hernanz & García–Berro (1992) use the rate of change of the pulsational period of white dwarfs during the cooling sequence to constrain the axion mass. Such a rate depends on the cooling rate, namely $`dt/d\mathrm{ln}P\propto dt/d\mathrm{ln}T`$, where $`P`$ is the period of pulsation and T is the temperature of the white dwarf. If the white dwarf has not entered the crystallizing region of the cooling sequence, the discrepancy between the rate of change of the period due to photons alone, which can be obtained from models, and the observed one, which can be assumed to be caused by photons and axions, is
$$\frac{L_{\mathrm{phot}}+L_{\mathrm{ax}}}{L_{\mathrm{phot}}}=\frac{\dot{P}_{\mathrm{obs}}}{\dot{P}_{\mathrm{mod}}}$$
In such a way, Isern, Hernanz & García–Berro (1992 and 1993) found, assuming $`cos^2\beta `$ = 1, that $`m_{\mathrm{ax}}\lesssim `$ 8.7, 15 and 12 meV by using the seismological data of the white dwarfs G117–B15A, L19–2 and R548, respectively.
Other constraints can be obtained from the possible influence of axions on the evolution of low mass red giants. These stars develop a degenerate He core in which axions, if they exist, could produce a significant energy loss. Dearborn, Schramm and Steigman (1986) found that the helium ignition would be suppressed for values of $`m_{\mathrm{ax}}`$ over a certain limit. Unfortunately they overestimated the energy loss rate and later on, Raffelt and Dearborn (1987) found that, with the current limits, He ignition would never be suppressed. These authors derived the most stringent limit at that time, for the axion-photon coupling, from the duration of the helium burning lifetime, $`m_{\mathrm{ax}}\lesssim `$ 0.7 eV (KSVZ model). Raffelt and Weiss (1995) subsequently reexamined the problem, analyzing the effects of the axion-electron coupling on observable quantities, such as the luminosity of the RGB (red giant branch) tip. They concluded that $`m_{\mathrm{ax}}`$ is lower than 9 meV, assuming $`cos^2\beta `$=1 (DFSZ model). Another limit was obtained for the coupling strength to photons by Raffelt (1996) by comparing the number of HB (horizontal branch) with the number of RGB stars of galactic Globular Clusters, giving $`m_{\mathrm{ax}}\lesssim `$ 0.4 eV (DFSZ model). The most stringent limit comes from SN 1987A: constraints on the axion-nucleon coupling (Janka et al. 1996; Keil et al. 1997) give, taking into account the overall uncertainties, $`m_{\mathrm{ax}}\lesssim `$ 0.01 eV for both KSVZ and DFSZ models (Raffelt, 1998).
If axions exist, this would have major consequences for the evolution of AGB stars. In fact, both in the He shell and in the CO core the physical conditions for an important emission of axions are met. Inside the core there are no nuclear reactions, and the gain of gravitational energy would be balanced by neutrino and axion energy losses. The He-burning shell reaches temperatures above $`3\times 10^8`$ K, much higher than those experienced during the central He-burning, and this could induce a huge production of photoaxions. Therefore axions could alter the stellar lifetime and change the final mass of the CO core.
In the following sections we will present various models of low and intermediate mass stars in the range of masses $`0.8\le M/M_{\odot }\le 9`$, with and without axion interactions. In particular we assume three different values of the leading parameter $`m_{\mathrm{ax}}`$ (the axion mass), namely 0, 8.5 and 20 meV (Cases 0, 1 and 2 in the following), and always set $`cos^2\beta `$=1. Note that the larger value is roughly double the maximum value proposed by Raffelt (1998), yet still marginally compatible with the properties of white dwarfs. We show that the inclusion of axions would imply a significant modification of the AGB characteristics. We do not address the question of setting an astrophysical upper limit on the axion mass here, since this requires a detailed comparison between observations of galactic and Magellanic Cloud AGB stars, which will be done in a forthcoming paper.
## 2 The models
We have followed the evolution of a set of low and intermediate mass stars (M = 0.8, 1.5, 3, 5, 7, 8 and 9 $`M_{\odot }`$) with solar metallicities, Z=0.02 and Y=0.28. This mass interval includes stars that ignite He in degenerate conditions (0.8 and 1.5 $`M_{\odot }`$), stars that ignite He in non–degenerate conditions but form a degenerate CO core (3–7 $`M_{\odot }`$) and go through the entire AGB phase, and stars that ignite C in partially degenerate conditions (8 and 9 $`M_{\odot }`$). The evolutionary sequences are started at the ZAMS phase. For the two smaller masses (0.8 and 1.5 $`M_{\odot }`$) we stopped the sequence at the onset of the He flash. All the other sequences were terminated at the beginning of the thermal pulse phase or at C ignition. Due to the large amount of computer time required to compute thermally pulsing models, only the 5 $`M_{\odot }`$ sequences (with and without axions) were followed throughout this phase.
All the evolutionary sequences of stars were computed with the FRANEC code (Frascati RAphson Newton Evolutionary Code), as described by Chieffi and Straniero (1989). The most recent updates of the input physics were described by Straniero, Chieffi and Limongi (1997). When included, the axion energy loss rates have been computed according to Raffelt and Weiss (1995) in the case of bremsstrahlung in degenerate conditions and Compton, and according to Raffelt (1990) in the case of bremsstrahlung in non degenerate conditions.
Finally, for the 5 $`M_{\odot }`$ star, we considered the case in which only the CO core experiences axion energy losses, in order to distinguish the relative effects on the core and on the He shell and between the AGB and HB phases.
## 3 Approaching the AGB
Table 1 summarizes the results of the computed evolutionary sequences. From column 1 to 10 we report: the total mass (in solar units), the axion mass (in meV), the surface He mass fraction after the first dredge-up, the tip luminosity of the first RGB (red giant branch), the mass of the He core at the beginning of the He-burning (in solar units), the central He burning lifetime, the He core mass (in solar units) at the end of central He burning, the E-AGB (early asymptotic giant branch) lifetime, the surface He mass fraction after the second dredge-up and the CO core mass (in solar units) at the end of the E-AGB.
During the central H-burning and up to the base of the RGB, the energy loss due to axions is negligible. However, when the temperature in the degenerate He core of a low mass star approaches $`10^8`$ K, the contribution of the axion interactions to the energy balance becomes significant. In spite of the different initial chemical composition, our results for the 0.8 $`M_{\odot }`$ star are in good agreement with those of Raffelt and Weiss (1995). This is no surprise since our axion rates are mainly taken from them. For instance, they obtained an increment of the mass of the He core at the onset of the He–flash of 0.022 $`M_{\odot }`$ and 0.056 $`M_{\odot }`$ for m<sub>ax</sub> = 8.9 and 17.9 meV respectively, while we obtain 0.021 $`M_{\odot }`$ and 0.065 $`M_{\odot }`$ for axion masses of 8.5 and 20 meV respectively.
Note that some widely used expressions like “delay to higher densities” or “He ignition is delayed” might be misleading. The He ignition does indeed occur when the central density is higher, but the density at the off centre ignition point in our calculations is lower (namely 30–40 $`\%`$ less in Case 2 than in Case 0). This point is situated further from the centre in those models that include axions, since the maximum temperature moves outwards as a consequence of the strong axion cooling in the central region. For example, in the case of a 1.5 $`M_{\odot }`$ star, the ignition occurs at 0.14 $`M_{\odot }`$, 0.25 $`M_{\odot }`$ and 0.40 $`M_{\odot }`$ in Cases 0, 1 and 2 respectively. Similar differences are obtained in the 0.8 $`M_{\odot }`$ star. Furthermore, He ignition is not delayed in time, but anticipated. The reason is that axion emission accelerates the contraction of the He core. For instance, concerning the 0.8 $`M_{\odot }`$ star, the RGB evolutionary time is about 150 Myr ($`6\%`$) lower in Case 2 than in Case 0, while that of the 1.5 $`M_{\odot }`$ star is 68 Myr ($`10\%`$) lower. The luminosity at the RGB tip (column 4) is also increased (a factor $`\sim `$ 2 in Case 2) by axions in those stars that ignite He in a degenerate core.
The evolution of the more massive models (M $`\ge `$ 3 $`M_{\odot }`$) is not affected by axions until the central He-burning is well established. As already found in low mass stars by Raffelt and Dearborn (1987), since the nuclear burning must supply the energy lost by axions, the temperature is higher and He consumption is faster. The reduction of the total evolutionary time of the He-burning phase (column 6) varies from nearly 40$`\%`$ for the 3 $`M_{\odot }`$ to 6$`\%`$ for the 9 $`M_{\odot }`$ model (for the higher value of the axion mass, Case 2). In the models in which the smaller axion mass value was used, the reduction is obviously less: 11$`\%`$ for the 3 $`M_{\odot }`$ and negligible for the 9 $`M_{\odot }`$. Owing to the contraction of the evolutionary time, the H-burning shell has less time to advance in mass and so the final He core mass is reduced (column 7 in Table 1). As a consequence of the higher temperature in the He-burning core, the 3$`\alpha `$ reaction is favoured with respect to the concurrent $`\alpha `$–burning of <sup>12</sup>C. Thus at the end of the He-burning, the abundance of carbon is greater. For instance, in the 5 $`M_{\odot }`$ model, the central C abundance changes from 0.224 in Case 0 to 0.322 in Case 2. On the whole, the larger the stellar mass the smaller the effect of axion emission.
## 4 The AGB
The Asymptotic Giant Branch is characterized by two different phases, namely the Early AGB (E-AGB) and the thermal pulse phase (TP-AGB). During the E-AGB, the H and the He-burning shells are simultaneously active. However fuel consumption is faster in the inner He-burning shell and so the mass separation between the two burning regions becomes progressively smaller. The advancing He-burning induces an expansion and a cooling of the more external layers. If the stellar mass is large enough, the H-burning shell is finally quenched. In such a case, the convective envelope can penetrate the H/He discontinuity, bringing to the surface the products of the H-burning (basically N and He). This is called the second dredge-up and marks the end of the E-AGB and the beginning of the thermal pulses. During this second part of the AGB phase, the H-burning shell is reignited while the He-burning one is quenched. Once a suitable amount of He is accumulated by the H-burning, a strong He-burning starts again, expands the external layer and again quenches the H burning. In a very short time the He consumption has progressed so far that the He shell is again extinguished, contraction takes place and H is reignited. This sequence repeats until the mass loss removes the envelope and the star leaves the AGB. Note that during the pulse the region between the two shells becomes convectively unstable and fully mixed. After the thermal pulse, when quiescent He-burning occurs and the H-burning shell is still extinguished, a new convective envelope penetration at the H/He discontinuity brings to the surface the products of the He burning (essentially carbon) and those of the neutron capture nucleosynthesis (s-process, see e.g. Straniero et al. 1996, Gallino et al. 1998). This is the so-called third dredge-up. The following sections describe the modifications to this scenario induced by axion emission.
### 4.1 The early-AGB and the second dredge-up
The most important effect of axion emission during the E-AGB is the faster consumption of fuel within the He-shell. This causes the H-shell to be extinguished more quickly and the second dredge-up to occur earlier, resulting in a markedly smaller CO core mass. This behaviour is shown in Figure 2.
The reduction of the duration of the E-AGB phase (column 8, Table 1) is in the range of 30–40$`\%`$ for Case 1, and as much as 50–70$`\%`$ for Case 2. The CO core mass at the end of the E-AGB is reported in column 10. Note the significant differences with respect to the standard case. For the 7 $`M_{\odot }`$ model, the CO core mass obtained for the highest axion mass is even smaller than that obtained for the standard 5 $`M_{\odot }`$ model. This reduction is not equal for all the masses, varying from 5$`\%`$ to 15$`\%`$ in Case 1, and from 15$`\%`$ to 28$`\%`$ in Case 2, for stars in the mass range 3 $`M_{\odot }`$ to 7 $`M_{\odot }`$.
As a consequence of the deeper penetration of the convective envelope, the surface He abundance after the 2<sup>nd</sup> dredge-up is greater (see column 9 in Table 1). For example, for the 5 $`M_{\odot }`$ stars, the H/He discontinuity moves inward by 0.17 $`M_{\odot }`$ in Case 0, and by 0.30 $`M_{\odot }`$ in Case 2.
As expected, the temperature within the CO core is lowered by the larger energy loss. Figure 3 clearly illustrates such an occurrence.
In order to elucidate how far these changes of the E-AGB phase are due to the previously altered evolution (central He-burning) and/or to axion emission in the He shell, we have limited the inclusion of axion losses to the CO core. The result is that the duration of the E-AGB is similar to the one obtained in the case of the full axion inclusion. This result is important, since it clearly shows the strong influence of the physical conditions in the degenerate core on the evolution of the active shells. Furthermore, it means that the effects found during the E-AGB are almost independent of the axion energy losses during the previous phases.
### 4.2 The thermal pulse phase
During the TP phase, axion emission in the core and in the He-burning shell becomes more important. In Table 2 we report some properties of the three sequences computed for the 5 $`M_{\odot }`$ star, namely the He core mass at the beginning of the TP (column 1), the duration of the interpulse period (column 2), the increment of the He core mass during the interpulse (column 3), the penetration (in solar masses) of the convective envelope at the time of the $`3^{rd}`$ dredge-up (column 4) and the $`\lambda `$ parameter, i.e. the ratio between the quantities in columns 3 and 4 (column 5). In figure 4 we show the evolution of the surface luminosity. The evolution of the location of the external boundary of the CO core, of the He core and that of the internal boundary of the convective envelope are shown in figure 5.
The most striking consequence of axion emission is the extension of the evolutionary time. The duration of a typical interpulse period increases by a factor of $`\sim `$2.5 in Case 1 and by a factor of $`\sim `$6 in Case 2, both with respect to the standard case. This is due to the combined effects of the lower core mass (see the previous subsection) and the axion cooling of the He shell. The luminosity is about 1.5 times greater in Case 0 than in Case 2. However, note that all three sequences asymptotically follow the classical core mass/luminosity relation (Paczynski 1977; Iben and Renzini 1983). This is shown in figure 6 for Cases 0 and 1. The temperature at the bottom of the convective envelope (figure 7) is lower in models which include axion emission. Thus the occurrence of hot bottom burning in the more advanced AGB evolution, and the consequent deviation from the core mass/luminosity relation (Blöcker and Schönberner 1991), should be reduced by this axion cooling.
More He is accumulated by the H-burning during the extended interpulse period, so that more nuclear energy must be produced in the pulse in order to expand the stored matter. A stronger thermal pulse favours the following dredge-up. For this reason the third dredge-up starts after the first TP in the two sequences including axions, while in Case 0 it starts after the $`4^{th}`$ TP. The mass of He and C material mixed into the envelope is nearly twice as much in Case 1 and $`\sim `$8 times larger in Case 2, both with respect to Case 0.
With a resolution of just 1 $`M_{\odot }`$, the minimum mass for which carbon ignition occurs before the TP-AGB phase, i.e. M<sub>up</sub>, is the same in all three cases, namely $`\sim `$8 $`M_{\odot }`$. However the C ignition occurs farther from the centre in models with axions. For instance, in Case 1 we found an off centre C ignition at 0.93 $`M_{\odot }`$ instead of the 0.40 $`M_{\odot }`$ of the standard case. Note that the mass of the CO core is smaller when axions are included, namely 1.148 $`M_{\odot }`$ instead of 1.259 $`M_{\odot }`$.
Finally, it is interesting to note that the inclusion of axions modifies the initial/final mass relation (see e.g. Weidemann 1987). During the thermal pulses, the mass of the CO core increases as a result of the accretion of C freshly synthesized by the He flashes. However, the observed luminosity of AGB stars (Weidemann 1987; Wood et al. 1992) and the observed mass-loss rate at this phase indicate that the occurrence of too many pulses is prevented by the removal of the envelope (Bedijn 1987, Baud and Habing 1983, van der Veen, Habing and Geballe 1989). The mass of the CO white dwarf is essentially determined by the value reached at the end of the E-AGB phase (see Vassiliadis and Wood 1993; Blöcker 1995). In figure 8 we compare the CO core mass (a guess for the WD mass) with the initial mass (progenitor mass), for Case 0 and Case 1. Two more masses have been included, 4 and 6 $`M_{\odot }`$.
The relative abundance of white dwarfs with masses $`m_0`$ and $`m_1`$ is given by
$$r(m_1,m_0)=\frac{\delta n(m_1)}{\delta n(m_0)}=\left[\frac{M(m_0)}{M(m_1)}\right]^\alpha \frac{(dM/dm)_{m_0}}{(dM/dm)_{m_1}}$$
where we have assumed that the star formation rate per unit volume in the solar neighbourhood is constant and that the initial mass function follows Salpeter’s law with $`\alpha =2.35`$, and M(m) is the relationship between the mass of the parent star and that of the white dwarf. Note that if it were possible to separate the influence of the mass loss from the influence of axions during the AGB phase, it would be possible to obtain very strong constraints on the properties of these particles.
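As an illustration of how this ratio behaves, the expression can be evaluated for a hypothetical linear initial/final mass relation (the coefficients below are illustrative assumptions, not the relation of figure 8):

```python
def r(m1, m0, M, dMdm, alpha=2.35):
    """Relative white dwarf abundance, as in the equation above."""
    return (M(m0) / M(m1)) ** alpha * dMdm(m0) / dMdm(m1)

# Hypothetical linear relation m_WD = a + b * M_init, inverted to M(m):
a, b = 0.45, 0.10
M = lambda m: (m - a) / b
dMdm = lambda m: 1.0 / b        # constant for a linear relation

print(r(1.0, 0.6, M, dMdm))     # massive white dwarfs are strongly suppressed
```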
## 5 Conclusions
We have studied the effects of including axions (DFSZ model) in the evolution of intermediate and low mass stars ($`0.8\le M/M_{\odot }\le 9`$). Among the various evolutionary phases affected by axion emission, the AGB seems to be the most promising in terms of obtaining stringent constraints on the theory of particle physics. These effects are:
1.- A severe reduction in the size of the final CO core, as compared with the values obtained from standard evolutions, implies some interesting modifications of observable quantities, such as the luminosity of AGB stars and their residual mass. Concerning the AGB luminosity function in particular, the axion emission would imply a deficit of bright AGB stars. The initial/final mass relation is also substantially modified and a deficit of massive white dwarfs is expected.
2.- As a consequence of the axion cooling, the convective instabilities that characterize the double nuclear shell burning are more extended and the chemical composition of the surface could be significantly modified. A stronger $`3^{rd}`$ dredge-up implies that the C star stage could be more easily (and perhaps more rapidly) obtained. In addition the surface abundances of s-elements, which are the products of the neutron capture nucleosynthesis occurring during the interpulse (through the $`{}_{}{}^{13}C(\alpha ,n)`$ neutron source) and during the thermal pulse (through the $`{}_{}{}^{22}Ne(\alpha ,n)`$ source), are other important “observable” quantities to be used to constrain the properties of the axions, as well as those of any other weakly interacting particle proposed in non-standard theories of particle physics.
It is a great pleasure to thank Alessandro Chieffi and Marco Limongi for helpful discussion. This work has been supported in part by the DGICYT grants PB96-1428 and ESP98-1348 , by the Italian Minister of University, Research and Technology (Stellar Evolution project), by the Junta de Andalucia FQM-108 project and by the CIRIT(GRC/PIC).
Figure 1: The vortex potential, eq. (16).
Potential Energy of Yang–Mills Vortices
in Three and Four Dimensions
Dmitri Diakonov
NORDITA, Blegdamsvej 17, 2100 Copenhagen Ø, Denmark
and
Petersburg Nuclear Physics Institute, Gatchina, St.Petersburg 188350, Russia
E-mail: diakonov@nordita.dk
## Abstract
We calculate the energy of a Yang–Mills vortex as a function of its magnetic flux or, equivalently, of the Wilson loop surrounding the vortex center. The calculation is performed in the 1-loop approximation. A parallel with the potential as a function of the Polyakov line at nonzero temperatures is drawn. We find that quantized $`Z(2)`$ vortices are dynamically preferred, though vortices with arbitrary fluxes cannot be ruled out.
The hypothesis that $`Z(N_c)`$ vortices are responsible for confinement has several attractive features. First, one immediately gets the area law for Wilson loops in the fundamental representation of the $`SU(N_c)`$ gauge group, even under the simplest assumption that vortices are non-interacting and that therefore the number of vortices piercing a given loop is Poisson-distributed. Furthermore, Wilson loops in any representation transforming under the group center $`Z(N_c)`$ will have the same string tension as in the fundamental representation, while the ‘center-blind’ representations will have, asymptotically, zero string tension. This is what is expected from screening by gluons. The temporary approximate Casimir scaling of the string tensions can probably be explained by the finite sizes of the vortex cores.
Second, heating the $`d=3+1`$ Yang–Mills system towards the high-temperature $`d=3`$ case, and then heating the $`d=2+1`$ system eventually to the $`d=2`$ case, one does not observe abrupt changes in the spatial string tensions. This continuity as one goes from four to two dimensions favours $`d=2`$ objects as being basic for confinement in all the above cases<sup>1</sup> (<sup>1</sup>In $`d=2`$ pure Yang–Mills theory the confinement is trivial in axial gauges and may seem to have no relation to vortices; however, it is known that at least on compact $`d=2`$ manifolds the theory is exactly equivalent to a sum over vortices, see a recent paper and references therein). In three dimensions vortices form closed lines; in four dimensions they form closed surfaces.
For vortices to be physical objects and not artifacts of a regularization, their cores should be finite in physical units, i.e. of the order of $`1/\mathrm{\Lambda }\sim M^{-1}\mathrm{exp}(3\cdot 8\pi ^2/11N_cg^2)`$ in $`d=4`$ and of the order of $`1/g_3^2`$ in $`d=3`$. Correspondingly, the energy of a vortex per unit length should be of the order of $`g_3^2`$ in $`d=3`$, and the energy per unit surface should be of the order of $`\mathrm{\Lambda }^2`$ in $`d=4`$. First indications that it might indeed be the case came recently from lattice studies using the smoothing procedure.
In order to reveal ‘thick’ vortices theoretically, one has first of all to integrate out the high-momentum components of the Yang–Mills field. The resulting effective action may then have a stable saddle point of a vortex type, with the core size remaining finite as one takes the UV cutoff to infinity. When and if it happens, one can speak of vortices as physical entities, and then try to build a theory of those extended objects in the vacuum.
The main dynamical variable of a vortex is the azimuthal component of the Yang–Mills field $`A_\varphi ^a(\rho )=ϵ_{\alpha \beta }n_\alpha A_\beta ^a`$, where $`n_\alpha `$ is a unit vector in the plane transverse to the vortex. One can always choose a gauge where $`A_\varphi `$ is independent of the azimuth angle $`\varphi `$. Generally speaking, it implies that the radial component $`A_\rho (\rho )\ne 0`$, however, we shall neglect this component as it can always be reconstructed from gauge invariance by replacing $`\partial _\rho \to \partial _\rho \delta ^{ab}+f^{acb}A_\rho ^c`$. A circular Wilson loop lying in the transverse plane and surrounding the vortex center is then
$$W_{\frac{1}{2}}(\rho )=\frac{1}{2}\text{Tr}\,\mathrm{P}\mathrm{exp}\left(i\oint A_\varphi ^at^a\rho \,d\varphi \right)=\mathrm{cos}[\pi \mu (\rho )],\qquad \mu (\rho )=\rho \sqrt{A_\varphi ^a(\rho )A_\varphi ^a(\rho )},$$
(1)
taking for simplicity the fundamental representation of the $`SU(2)`$ gauge group. For an arbitrary representation of $`SU(2)`$, labelled by spin $`J`$, one has
$$W_J(\rho )=\frac{1}{2J+1}\frac{\mathrm{sin}[(2J+1)\pi \mu (\rho )]}{\mathrm{sin}[\pi \mu (\rho )]}.$$
(2)
If $`\mu (\rho )`$ tends to an integer at large distances $`\rho `$ from the vortex core, the Wilson loop $`W_J(\rho )\to (-1)^{2J\mu }`$. This is the definition of the $`Z(2)`$ vortex. We shall see that integer values of $`\mu (\infty )`$ are dynamically preferred.
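This behaviour of eq. (2) is elementary to check numerically; a short sketch (taking the limit explicitly at integer $`\mu `$) is:

```python
import numpy as np

def W(J, mu):
    """Wilson loop of eq. (2) for an SU(2) representation of spin J."""
    n = int(round(2 * J)) + 1
    mu = np.asarray(mu, dtype=float)
    out = np.empty_like(mu)
    ints = np.isclose(mu, np.round(mu))
    out[~ints] = np.sin(n * np.pi * mu[~ints]) / (n * np.sin(np.pi * mu[~ints]))
    # limit at integer mu: W_J -> (-1)^(2 J mu)
    k = np.round(mu[ints]).astype(int)
    out[ints] = (-1.0) ** (int(round(2 * J)) * k)
    return out

mu = np.array([0.0, 0.5, 1.0, 2.0])
print(W(0.5, mu))   # fundamental:  1, 0, -1, 1
print(W(1.0, mu))   # adjoint is center-blind at integer flux: 1, -1/3, 1, 1
```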
Neglecting all components of the Yang–Mills field except the essential azimuthal one, the classical action of the vortex becomes
$$\int d^dx\,\frac{(F_{\mu \nu }^a)^2}{4g_d^2}=\int d^{d-2}x_{\parallel }\int d^2x_{\perp }\,\frac{(B_{\perp }^a)^2}{2g_d^2}=\frac{1}{2g_d^2}\int d^{d-2}x_{\parallel }\,2\pi \int _0^{\infty }d\rho \,\rho \left[\frac{1}{\rho }\partial _\rho (A_\varphi ^a\rho )\right]^2.$$
(3)
To get the effective action for $`\mu (\rho )`$ we integrate over quantum fluctuations about a given background field $`A_\varphi ^a(\rho )\rho `$ considered to be a slowly varying field with momenta up to certain $`k_{min}`$. Accordingly, quantum fluctuations have momenta from $`k_{min}`$ up to the UV cutoff $`k_{max}`$. Writing $`A_\mu =\overline{A}_\mu +a_\mu `$ where $`\overline{A}_\mu =\delta _{\mu \varphi }A_\varphi ^a(\rho )`$ we expand $`F_{\mu \nu }^2(A+a)`$ in the fluctuation field $`a_\mu `$ up to the second order appropriate for the 1-loop calculation. The term linear in $`a_\mu `$ vanishes due to the orthogonality of high and low momenta. The quadratic form for $`a_\mu `$ is the standard
$$W_{\mu \nu }^{ab}=-[D^2(\overline{A})]^{ab}\delta _{\mu \nu }-2f^{acb}F_{\mu \nu }^c(\overline{A}),$$
(4)
if one imposes the background Lorentz gauge condition,
$$D_\mu ^{ab}(\overline{A})a_\mu ^b=0,\qquad D_\mu ^{ab}(\overline{A})=\partial _\mu \delta ^{ab}+f^{acb}\overline{A}_\mu ^c.$$
(5)
The effective action is
$$S_{eff}[\overline{A}]=\frac{1}{2}\mathrm{ln}\,det(W_{\mu \nu })-\mathrm{ln}\,det(-D_\mu ^2).$$
(6)
In the presence of dynamical fermions in the fundamental representation the Dirac determinant should be added:
$$\mathrm{ln}\,det(\nabla _\mu \gamma _\mu )=\frac{1}{2}\mathrm{ln}\,det\left(-\nabla _\mu ^2-\frac{i}{2}[\gamma _\mu \gamma _\nu ]F_{\mu \nu }^at^a\right),\qquad \nabla _\mu =\partial _\mu -i\overline{A}_\mu ^at^a.$$
(7)
The effective action may be expanded in powers of the (covariant) derivatives of the background field $`\overline{A}_\mu `$. We are now interested in the first nontrivial term of this expansion, namely in the zero-derivative term, that is, in the effective potential as a function of the flux $`\rho A_\varphi `$. The next term with two derivatives will renormalize eq. (3), and there will be, generally speaking, further terms. In the zero-derivative order $`F_{\mu \nu }(\overline{A})=0`$, therefore $`det(W_{\mu \nu })=[det(-D^2)]^d`$, where $`d`$ is the full space dimension. With $`A_\varphi `$ being the only nonzero component of the background field, it is natural to write down $`D^2`$ in cylindric coordinates:
$$D^2=\frac{1}{\rho }\frac{\partial }{\partial \rho }\rho \frac{\partial }{\partial \rho }+\frac{1}{\rho ^2}\left(\frac{\partial }{\partial \varphi }+f^{acb}A_\varphi ^c\rho \right)^2+\partial _3^2+\dots +\partial _d^2.$$
(8)
The effective action (6) for slowly varying $`\rho A_\varphi `$ is
$$S_{eff}[A_\varphi ]=-\left(\frac{d}{2}-1\right)\int _{s_{min}}^{s_{max}}\frac{ds}{s}\,\text{Sp}\,\mathrm{exp}(sD^2)=\int d^{d-2}x_{\parallel }\int d^2x_{\perp }\,V^{(d)}(\mu ),\qquad \mu =\rho \sqrt{A_\varphi ^aA_\varphi ^a},$$
(9)
where we have introduced the ‘vortex potential energy’ in $`d`$ dimensions $`V^{(d)}(\mu )`$. Here Sp denotes the functional and the colour traces, and integration over the proper time $`s`$ has been introduced. The limits $`s_{min}\sim 1/k_{max}^2`$ and $`s_{max}\sim 1/k_{min}^2`$ are gauge-invariant UV and IR cutoffs, respectively. In saturating the functional trace one can use any complete set of functions. Naturally, one uses the plane wave basis for the longitudinal components $`x_3,\dots ,x_d`$, and the polar basis $`\mathrm{exp}(im\varphi )f_{k,m}(\rho )`$ for the transverse ones, where the functions $`f_{k,m}(\rho )`$ must form a complete set, see below. In what follows we shall consider only the $`SU(2)`$ case. The transverse part of the covariant Laplacian (8), after acting on the polar harmonics $`\mathrm{exp}(im\varphi )`$, $`m`$ integer, has three eigenvalues as a $`3\times 3`$ colour matrix:
$$\frac{1}{\rho }\frac{\partial }{\partial \rho }\rho \frac{\partial }{\partial \rho }-\frac{1}{\rho ^2}\{\begin{array}{c}m^2,\\ (m+\mu )^2,\\ (m-\mu )^2.\end{array}$$
(10)
The first one cancels out when one subtracts the free determinant (without the external field); the last two are actually coinciding, given that $`m`$ ranges from minus to plus infinity. The differential operator (eq. (10), last line) has eigenfunctions $`J_{\pm (m-\mu )}(k\rho )`$ with eigenvalues $`-k^2`$. The index of the Bessel functions must be non-negative to ensure regularity at the origin. We thus choose the functions $`F_{m,k}(\varphi ,\rho )=\mathrm{exp}(im\varphi )J_{|m-\mu |}(k\rho )`$ as a complete functional basis in the transverse plane:
$$\frac{1}{2\pi }\sum _{m=-\infty }^{+\infty }\int _0^{\infty }dk\,k\,F_{m,k}(\varphi ,\rho )F_{m,k}^{*}(\varphi ^{\prime },\rho ^{\prime })=\frac{1}{\rho }\delta (\rho -\rho ^{\prime })\,\delta (\varphi -\varphi ^{\prime })|_{\mathrm{mod}\,2\pi }.$$
(11)
These functions satisfy also the ortho-normalization condition:
$$\frac{1}{2\pi }\int _0^{2\pi }d\varphi \int _0^{\infty }d\rho \,\rho \,F_{m,k}(\varphi ,\rho )F_{m^{\prime },k^{\prime }}^{*}(\varphi ,\rho )=\frac{1}{k}\delta (k-k^{\prime })\,\delta _{mm^{\prime }}.$$
(12)
Eqs. (11, 12) can be checked using formula 6.541.1 from Gradshteyn and Ryzhik. Eq. (9) can thus be rewritten as
$$V^{(d)}(\mu )=-(d-2)\int \frac{dp_3\cdots dp_d}{(2\pi )^{d-2}}\int _{s_{min}}^{s_{max}}\frac{ds}{s}\int _0^{\infty }k\,dk\,\mathrm{exp}[-s(k^2+p_3^2+\dots +p_d^2)]$$
$$\times \frac{1}{2\pi }\sum _{m=-\infty }^{\infty }\left[J_{|m-\mu |}^2(k\rho )-J_{|m|}^2(k\rho )\right].$$
(13)
We see that the dependence on the flux $`\mu `$ enters only through the indices of the Bessel functions and that the potential is explicitly periodic in $`\mu `$ with period 1. Integration over momenta can be easily performed using eq. 6.633.2 from the same reference, yielding
$$V^{(d)}(\mu )=-(d-2)\int _{s_{min}}^{s_{max}}\frac{ds}{s}\left(\frac{1}{4\pi s}\right)^{\frac{d}{2}}\mathrm{exp}\left(-\frac{\rho ^2}{2s}\right)\sum _{m=-\infty }^{\infty }\left[I_{|m-\mu |}\left(\frac{\rho ^2}{2s}\right)-I_{|m|}\left(\frac{\rho ^2}{2s}\right)\right].$$
(14)
We next sum over $`m`$ using the integral representation for modified Bessel functions, eq. 8.431.5 of the same reference. To write explicit formulae we imply that $`\mu \in (0,1)`$. If $`\mu `$ is outside this interval, $`V^{(d)}(\mu )`$ should be continued by periodicity. We get
$$V^{(d)}(\mu )=\frac{d-2}{\left(2\pi \rho ^2\right)^{\frac{d}{2}}}\frac{\mathrm{sin}(\pi \mu )}{\pi }\int _0^{\infty }dx\,\frac{\mathrm{cosh}\left(\frac{1}{2}-\mu \right)x}{\mathrm{cosh}\frac{x}{2}}\int _{t_{min}}^{t_{max}}dt\,t^{\frac{d}{2}-1}\mathrm{exp}\left[-t(\mathrm{cosh}x+1)\right],$$
(15)
where we have introduced a new variable $`t=\rho ^2/(2s)`$ instead of $`s`$. Correspondingly, the integration limits become $`t_{min}=\rho ^2/2s_{max}`$ and $`t_{max}=\rho ^2/2s_{min}`$. It should be stressed that the above expression is finite in the limit when one removes both the UV cutoff ($`t_{max}\to \infty `$) and the IR cutoff ($`t_{min}\to 0`$). In this limit both integrations in (15) become elementary, and we finally get:
$$V^{(d)}(\mu )=\frac{d-2}{\left(\pi \rho ^2\right)^{\frac{d}{2}}}\frac{\mathrm{sin}(\pi \mu )}{\pi }\frac{\mathrm{\Gamma }\left(\frac{d}{2}\right)\mathrm{\Gamma }\left(\frac{d}{2}+\mu \right)\mathrm{\Gamma }\left(\frac{d}{2}+1-\mu \right)}{\mathrm{\Gamma }(d+1)}$$
$$=\{\begin{array}{cc}\frac{1}{\rho ^4}\frac{1}{12\pi ^2}\,\mu (1-\mu ^2)(2-\mu )|_{\mathrm{mod}\,1}\hfill & \mathrm{for}\;d=4,\hfill \\ \frac{1}{\rho ^3}\frac{1}{96}\frac{\mathrm{tan}(\pi \mu )}{\pi }(1-4\mu ^2)(3-2\mu )|_{\mathrm{mod}\,1}\hfill & \mathrm{for}\;d=3.\hfill \end{array}$$
(16)
At $`d=2`$ the vortex potential is identically zero as there are no transverse gluons in two dimensions. Though the potential is given just by a polynomial (in the case $`d=4`$), it is actually a periodic function of $`\mu `$ with unit period, as seen explicitly from eq. (13). At integer $`\mu `$ the potential is zero but has a jump in the first derivative; it is depicted in Fig. 1.
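The closed form (16) can be cross-checked against the integral representation (15) with the cutoffs removed, the $`t`$-integration giving $`\mathrm{\Gamma }(d/2)(\mathrm{cosh}x+1)^{-d/2}`$; a sketch, assuming scipy is available:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def V16(mu, d, rho=1.0):
    """Closed form, eq. (16), for 0 < mu < 1."""
    return ((d - 2) / (np.pi * rho**2) ** (d / 2) * np.sin(np.pi * mu) / np.pi
            * gamma(d / 2) * gamma(d / 2 + mu) * gamma(d / 2 + 1 - mu)
            / gamma(d + 1))

def V15(mu, d, rho=1.0):
    """Eq. (15) in the limit t_min -> 0, t_max -> infinity."""
    g = lambda x: (np.cosh((0.5 - mu) * x) / np.cosh(x / 2)
                   * gamma(d / 2) / (np.cosh(x) + 1.0) ** (d / 2))
    I, _ = quad(g, 0.0, 50.0)
    return ((d - 2) / (2 * np.pi * rho**2) ** (d / 2)
            * np.sin(np.pi * mu) / np.pi * I)

for d in (3, 4):
    for mu in (0.25, 0.5, 0.75):
        print(d, mu, V15(mu, d), V16(mu, d))   # the two columns coincide
```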
The problem we have solved has a certain similarity with that of the potential energy as a function of the $`d`$-th component of the Yang–Mills field $`A_{\mu =d}^a`$, in the case of nonzero temperatures. In that problem one integrates over all Matsubara frequencies in the background of a static $`A_d^a`$, where $`d=4`$ for $`3+1`$ dimensions, and $`d=3`$ for $`2+1`$ dimensions. The two problems are ideologically similar, only the topology is different: in the nonzero-temperature case the topology is that of a cylinder $`S^{(1)}\times R^{(d-1)}`$, with the compact dimension in the ‘temperature’ direction, while in the problem considered here the topology is that of a plane with a deleted point at the origin, $`(R^{(2)}\setminus \{0\})\times R^{(d-2)}`$. The role of the Matsubara frequencies is played by the polar harmonics $`\mathrm{exp}(im\varphi )`$. In the first case one finds the potential energy as a function of the Polyakov line winding in the compact dimension; in the second case one finds the potential energy as a function of the Wilson loop winding around the point at the origin. In both cases the potential energy is a periodic function, however the calculations are easier in the nonzero-temperature case as one can evaluate the functional trace (9) in the plane-wave basis. A simple calculation of eq. (9) gives the following form of the nonzero-temperature potential energy as a function of $`\nu =\sqrt{A_d^aA_d^a}/(2\pi T)`$, valid for any number of dimensions:
$$V^{(d)}(\nu )=(d-2)T\int \frac{d^{d-1}p}{(2\pi )^{d-1}}\,\ln \frac{\cosh \frac{p}{T}-\cos 2\pi \nu }{\cosh \frac{p}{T}-1}.$$
(17)
It is explicitly periodic in $`\nu `$. At $`d=4`$ the integration can be performed with the help of eq. 3.533.3 , and we get the well-known result :
$$V^{(4)}(\nu )=\frac{(2\pi T)^4}{12\pi ^2}\,\nu ^2(1-\nu )^2|_{\mathrm{mod}\ 1}=\frac{T^2}{3}(A_4^a)^2+O(A^3).$$
(18)
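The $`d=4`$ case is easy to verify numerically: with $`T=1`$ the momentum integral in (17) reduces to $`(1/\pi ^2)\int _0^{\infty }dp\,p^2\ln [\dots ]`$, and the result reproduces the quartic polynomial of (18). A sketch of ours (the normalization $`\int d^3p/(2\pi )^3=(1/2\pi ^2)\int p^2dp`$ is assumed):

```python
# Numerical check of eq. (17) against the closed form (18) for d = 4, T = 1;
# d^3p/(2 pi)^3 is reduced to (1/(2 pi^2)) p^2 dp.
import numpy as np
from math import pi, cos
from scipy.integrate import quad

def V_17(nu, T=1.0, d=4):
    f = lambda p: p**2 * np.log((np.cosh(p / T) - cos(2 * pi * nu))
                                / (np.cosh(p / T) - 1.0))
    val, _ = quad(f, 0.0, 50.0)
    return (d - 2) * T * val / (2 * pi**2)

def V_18(nu, T=1.0):
    return (2 * pi * T) ** 4 / (12 * pi**2) * nu**2 * (1 - nu) ** 2

for nu in (0.1, 0.25, 0.5):
    print(nu, V_17(nu), V_18(nu))   # nu = 0.5 gives pi^2/12 in both columns
```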
At $`d=3`$ the integral (17) cannot be expressed through elementary functions; however it can be compactly written as
$$V^{(3)}(\nu )=\frac{2T^3}{\pi }\sin ^2\pi \nu \int _0^{\infty }dx\,\frac{x^2\cosh x}{\sinh x\,(\sinh ^2x+\sin ^2\pi \nu )}$$
$$=\pi T^3\nu ^2\ln \frac{1}{\nu ^2}+O(\nu ^4)=\frac{T}{4\pi }(A_3^a)^2\ln \frac{T^2}{(A_3^a)^2}+O(A^4).$$
(19)
To the best of our knowledge, this is a new result.
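The small-$`\nu `$ behaviour quoted in (19) can also be checked numerically: the ratio of the integral expression to the leading term $`\pi T^3\nu ^2\ln (1/\nu ^2)`$ tends to 1 as $`\nu \to 0`$, up to the non-logarithmic $`O(\nu ^2)`$ piece. An illustrative sketch of ours, again with $`T=1`$:

```python
# Small-nu check of eq. (19), T = 1: V(nu) should approach
# pi * nu^2 * ln(1/nu^2) up to a non-logarithmic O(nu^2) correction.
import numpy as np
from math import pi, sin, log
from scipy.integrate import quad

def V_19(nu, T=1.0):
    s2 = sin(pi * nu) ** 2
    f = lambda x: x**2 * np.cosh(x) / (np.sinh(x) * (np.sinh(x) ** 2 + s2))
    val, _ = quad(f, 0.0, 40.0)
    return 2 * T**3 / pi * s2 * val

for nu in (0.1, 0.03, 0.01):
    print(nu, V_19(nu), pi * nu**2 * log(1.0 / nu**2))
```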
It is worth mentioning that for $`d=3`$ the Debye mass is infrared-divergent, hence the non-analyticity in the $`\nu ^2`$ term. In the case of the vortex potential (16) the infrared divergence is even stronger: as a result, the expansion of the potential starts from the non-analytic $`|\mu |`$ term both for $`d=3`$ and 4. It is interesting that in all the cases above the non-analytic terms are due to the contribution of zero Matsubara frequencies (in the case of the vortex, “zero frequency” means the azimuthal harmonic with $`m=0`$). Indeed, the contributions of zero frequencies to the ‘temperature’ potentials (18) and (19) are
$$V_{\omega =0}^{(4)}(\nu )=-\frac{(2\pi T)^4}{12\pi ^2}\,2\nu ^3+\mathrm{UV\ divergence},\qquad V_{\omega =0}^{(3)}(\nu )=\pi T^3\nu ^2\ln \frac{1}{\nu ^2}+\mathrm{UV\ divergence}.$$
(20)
The $`m=0`$ contribution to the vortex potential (16) is
$$V_{m=0}^{(4)}(\mu )=\frac{1}{\rho ^4}\frac{1}{12\pi ^2}(\mu -\mu ^3)+\mathrm{UV\ divergence}.$$
(21)
These are exactly the non-analytic terms in all the cases. The UV divergences are cancelled by contributions from nonzero frequencies.
If there are dynamical fermions in the problem, they should be treated differently in the nonzero-temperature case and in the vortex one. In the first case fermions are antiperiodic in the ‘temperature’ direction, hence one has to sum over half-integer Matsubara frequencies. In the latter case fermion wave functions are periodic functions of the azimuthal angle $`\varphi `$. The resulting fermion contribution to the vortex potential (in the fundamental representation) is obtained from eq. (16) by substituting $`\mu \to \mu /2`$ and multiplying the result by $`-2`$. The fermion contribution is periodic in $`\mu `$ with period 2, and not 1 as in the boson case. It can be easily checked that in the supersymmetric case, with Majorana gluinos belonging to the adjoint representation, the fermion contribution cancels exactly with the boson one, so that the vortex potential is zero.
Returning to the bosonic contribution to the effective vortex potential (16) in the pure glue case, one concludes that integer values of $`\mu (\rho )`$ are a must at $`\rho \to 0`$, otherwise the integral over $`\rho `$,
$$\mathrm{potential\ energy}=2\pi \int _0^{\infty }d\rho \,\rho \,V^{(d)}(\mu ),$$
(22)
diverges. Integer values of $`\mu (\rho )`$ at $`\rho \to \infty `$ are, clearly, energetically favourable, though the energy (22) remains finite at noninteger values of $`\mu (\infty )`$ as well, because the integration over $`\rho `$ converges at large $`\rho `$. It means that the quantized $`Z(2)`$ vortices are dynamically preferred, but noninteger fluxes are not altogether ruled out by the energetics. This should be contrasted with the case of nonzero temperature, where quantized values of $`A_d`$ at spatial infinity are necessary to make the energy finite.
I am grateful to Victor Petrov, Maxim Chernodub and Konstantin Zarembo for useful discussions.
| no-problem/9905/hep-ex9905047.html | ar5iv | text |
## 1 Introduction
It is well known that neutrons produced by cosmic-ray muons can move far away from the muon tracks or muon-initiated cascades and contribute to the background for large underground experiments searching for rare events such as neutrino interactions, proton decay, etc. (see, for example, Khalchukov et al., 1983). At present there are few measurements of the muon-produced neutron flux at large depths underground (Bezrukov et al., 1973, Enikeev et al., 1987, Aglietta et al., 1989). In these experiments the detection of both the muon and the neutron was required, but the distance between them was not measured. In this work we analyse the most general case, in which both the muon and the neutron are detected by LVD (Large Volume Detector at the underground Gran Sasso Laboratory) and the distance between them (or between the neutron and the muon-initiated cascade) is known. The present experiment is carried out with the same scintillator ($`C_nH_{2n}`$, $`\langle n\rangle \approx 9.6`$) as used in the aforementioned earlier experiments.
## 2 Detector and Data Analysis
The data presented here were collected with the 1st LVD tower during 13639 hours of live time. The 1st LVD tower contains 38 identical modules. Each module consists of 8 scintillation counters, each 1.5 m $`\times `$ 1.0 m $`\times `$ 1.0 m, and 4 layers of limited streamer tubes (tracking detector) attached to the bottom and to one vertical side of the metallic supporting structure. Each counter is viewed by 3 photomultiplier tubes (PMT) on top of the counter. A detailed description of the detector was given in Aglietta et al. (1992). The depth of the LVD site, averaged over the muon flux, is about 3650 hg/cm<sup>2</sup>, which corresponds to a mean muon energy underground of about 270 GeV. Each scintillation counter is self-triggered by the three-fold coincidence of the PMT signals after discrimination. The high-energy threshold (HET) is set at 4-5 MeV for inner counters. During the 1 ms time period following an HET trigger, a low-energy threshold (LET) is enabled for counters belonging to the same quarter of the tower, which allows the detection of the 2.2 MeV photons from neutron capture by protons. In what follows we consider only the signals induced by neutrons in the inner counters, where the LET is low enough (0.8 MeV) to allow high neutron detection efficiency, while the background rate is quite small; 138 counters were classified as inner ones.
All muon events were divided into two classes: i) ’muons’ – single muon events, where a single muon track is reconstructed (small cascades cannot be excluded), and ii) ’cascades’ – there is no clear single muon track but the energy release is high enough to indicate that at least one muon is present; such events may be due to either muon-induced cascades or multiple muons.
Each neutron ideally generates two pulses: the first pulse, above the HET, is due to the recoil protons from $`np`$ elastic scattering (its amplitude is proportional to, and even close to, the neutron energy); the second pulse, above the LET within the time gate of about 1 ms, is due to the 2.2 MeV gamma from neutron capture by a proton. The sequence of two pulses (one above the HET and one above the LET) was the signature of neutron detection. The energy of the first pulse (above the HET) was measured and attributed to the neutron energy. Strictly speaking, this is not the neutron energy but the energy transferred to protons in the scintillator and measured by the counter. The distance between the counter in which a neutron was detected and the muon track was calculated. For single muons this was the minimal distance between the center of the counter which detected a neutron and the centers of counters traversed by the muon. The precision (about 1 m) is limited by the fact that neither the point of neutron production nor the point of neutron capture is known with better accuracy. If the neutron is detected in a counter crossed by the muon (distance less than 1 m), the energy cannot be attributed to the neutron alone but includes the muon energy loss. For cascade events we calculated the minimal distance between the center of the counter where the neutron was detected and the centers of the counters struck by the cascade, excluding that with the neutron.
Pairs of seemingly time-correlated pulses might also be produced by random coincidences of high-energy pulses (above HET) due to muon or cascade energy release and low-energy pulses (above LET) due to local radioactivity. The counting rate of such random coincidences per muon event is determined by the counting rate of background pulses in the time gate. To evaluate this background the counting rate of low-energy pulses in the counters in the absence of high-energy pulses was measured. The true neutron flux per unit energy and unit distance was calculated as a difference between the total number of correlated pairs observed and the number of pairs expected due to random coincidences.
## 3 Results and Discussion
A typical time distribution of the LET pulses after an HET pulse is shown in Figure 1. The plot includes single muon events with all HET energies and distances (1–2) m from the muon track. Although the expected background has already been subtracted, as described in Section 2, the figure shows an exponential superimposed on a flat distribution of background pulses. This implies that the real level of background of LET pulses is higher in the counters with HET pulses than without HET pulses. The measured distribution was fitted with the following formula: $`dN/dt=B+(N_n/\tau )\mathrm{exp}(-t/\tau )`$, where $`B=92_{-15}^{+13}`$ is the constant term (residual background) per bin, $`N_n=2746_{-218}^{+273}`$ is the total number of neutrons, and $`\tau =187_{-19}^{+24}`$ $`\mu `$s is the mean time of neutron capture. The value of $`\tau `$ is in good agreement with previous measurements (Aglietta et al., 1989, and references therein). To obtain the numbers of neutrons at various distances from the muon track, similar distributions were fitted to the above equation with a fixed value of $`\tau =190`$ $`\mu `$s. The average neutron multiplicity per event (whether single muon or cascade) per counter is plotted against distance from the muon track or cascade core in Figure 2 (single muons - open circles, cascades - open squares). Only the statistical errors from the fits are shown. Horizontal bars show the range of distances for each point. The total contributions to the neutron flux of single muons and cascades are found to be roughly comparable. As the number of reconstructed single muon events exceeds that of ’cascade’ events by an order of magnitude, the number of neutrons per cascade is several times higher than the number of neutrons per single muon event. This supports the results of previous measurements (Bezrukov et al., 1973, Enikeev et al., 1987, Aglietta et al., 1989) and early estimations (Ryazhskaya & Zatsepin, 1966). The results of the combined treatment of single muons and cascades are shown by filled circles. Only upper limits to the neutron multiplicity can be obtained for the last three bins. The exponential fit to the all-event distribution is shown by the solid curve: $`F=A\mathrm{exp}(-R/\langle R\rangle )`$, where $`A=(4.17\pm 0.17)\times 10^{-3}`$ neutrons/(muon event)/counter and $`\langle R\rangle =(0.634\pm 0.012)`$ m.
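For illustration, a fit of this form can be reproduced in a few lines of Python on synthetic data; the 10 $`\mu `$s bin width, the Poisson noise model and the random seed below are our assumptions for the sketch, not properties of the actual LVD data.

```python
# Sketch of the capture-time fit dN/dt = B + (N_n/tau) exp(-t/tau)
# on synthetic data (assumed 10-us bins, Poisson-fluctuated counts).
import numpy as np
from scipy.optimize import curve_fit

BIN = 10.0                                  # assumed bin width, microseconds
t = np.arange(BIN / 2, 1000.0, BIN)         # bin centres

def model(t, B, A, tau):
    # B: flat residual background per bin; A = N_n * BIN; tau: capture time
    return B + A / tau * np.exp(-t / tau)

rng = np.random.default_rng(1)
counts = rng.poisson(model(t, 92.0, 2746.0 * BIN, 187.0)).astype(float)

popt, _ = curve_fit(model, t, counts, p0=(50.0, 1.0e4, 150.0),
                    sigma=np.sqrt(counts + 1.0))
B_fit, A_fit, tau_fit = popt
print(f"B = {B_fit:.0f} per bin, N_n = {A_fit / BIN:.0f}, tau = {tau_fit:.0f} us")
```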
Note that the LVD is not a uniform detector, and there are several ten-centimeter air gaps between modules and several centimeter gaps between the counters in a module. This means that the neutrons, as well as other secondary particles, can escape from the counters where they were produced and reach another counter (possibly quite far from the original one) by way of low density air gaps.
The energy of a trigger pulse in a counter can be attributed to the neutron kinetic energy if: 1) there is no energy loss of the muon nor that of secondary particles (other than neutrons) in this particular counter; 2) all the neutron kinetic energy is transferred to protons inside this counter; 3) the energy deposited by a recoil proton is proportional to the pulse amplitude, with the same proportionality as for electron pulses. Although these conditions are not strictly satisfied, we assume (as a zeroth approximation) that the measured spectrum of HET pulses at large enough distances from the muon track corresponds to the neutron energy spectrum (near the point of neutron capture, within a sphere with a diameter of about 1 m). Such a spectrum is presented in Figure 3 by filled circles for distances $`R>`$ 1 m. Only statistical errors are shown. Horizontal bars show the range of energies for each point. To check the contamination of the distribution by energy deposited by secondary particles (other than neutrons) we plotted also the energy spectrum of HET pulses at distances $`R>`$ 2 m (open circles). It is obvious that the contributions of secondary particles of all kinds should decrease with $`R`$. Both data samples show similar behaviour at $`E<`$ 200 MeV. At $`E>`$ 200 MeV the points for $`R>`$ 1 m (filled circles) are higher than is expected from the general trend of the spectrum. This can be explained by contamination from the energy loss of muons (this is a region near the peak of muon energy release in the counter) and cascade particles. There is no such excess of events at 200-400 MeV at distances $`R>`$ 2 m. This means that the contribution of secondary particles other than neutrons to the neutron spectrum at high energies is negligible. Both spectra were fitted with power-law functions with two free parameters: $`dN/dE=AE^{-\alpha }`$. The energy bins 200-300 MeV and 300-400 MeV were excluded from the analysis of data at $`R>`$ 1 m. The results of the fits are: $`A=(1.58\pm 0.14)\times 10^{-5}`$ neutrons/(muon event)/counter/MeV, $`\alpha =0.99\pm 0.02`$ for $`R>`$ 1 m, and $`A=(4.67\pm 0.71)\times 10^{-6}`$ neutrons/(muon event)/counter/MeV, $`\alpha =1.08\pm 0.04`$ for $`R>`$ 2 m. The errors are statistical only. The slopes of the two spectra are in good agreement, a more quantitative indication that the contributions of energy losses of particles other than neutrons are not very important, if not negligible, at energies less than 200 MeV or distances more than 2 meters. The slope of the spectrum is also in reasonable agreement with the results of Monte Carlo simulations (Dementyev et al., 1997). The units used in Figures 2 and 3 can be converted to more convenient ones (m<sup>-2</sup>) by assuming that each counter has an average area of about 1.5 m<sup>2</sup> orthogonal to the direction of the neutron flux and dividing each value by the neutron detection efficiency (about 0.6 for MeV-neutrons uniformly distributed in the counter volume).
Finally, we calculated the average number of neutrons produced by a muon per unit path length in liquid scintillator using the formula $`\langle N\rangle =N_nQ/(N_cLϵ)`$, where $`\langle N\rangle `$ is the average number of neutrons produced by a muon per 1 g/cm<sup>2</sup> of its path in scintillator, $`N_n`$ is the total number of neutrons at all distances from the track (the result of a fit similar to that shown in Figure 1), $`N_c`$ is the number of counters crossed by muons, $`L`$ is the mean path length of a muon inside the counter, $`ϵ`$ is the efficiency of neutron detection in the inner LVD counters, and $`Q`$ is the correction factor which takes into account the neutron production in the iron of the supporting structure and counter walls. The fit of the all-data sample gives $`N_n=(2.34\pm 0.46)\times 10^4`$ neutrons. The error includes both statistical and systematic uncertainties. The total number of counters crossed by all muons can be calculated directly only for single muon events, when the muon track is well reconstructed. The estimation for all events results in the value $`N_c=(3.3\pm 0.2)\times 10^6`$. The mean path length of muons in the scintillation counter is equal to $`65\pm 7`$ g/cm<sup>2</sup> (Aglietta et al., 1989). The efficiency of neutron detection by one scintillation counter has been measured with a source and calculated by Monte Carlo techniques (see, for example, Aglietta et al., 1989, 1992). For MeV-neutron sources uniformly distributed in the counter volume the efficiency of neutron detection is $`0.6\pm 0.1`$. The correction factor $`Q`$ takes into account the neutron production in iron, which should be subtracted since we want to calculate that in scintillator only. It was obtained by Aglietta et al. (1989) for LSD: $`Q=0.61\pm 0.04`$. A similar estimate for LVD gives $`Q=0.85\pm 0.10`$. Finally, one gets $`\langle N\rangle =(1.5\pm 0.4)\times 10^{-4}`$ neutrons/(muon event)/(g/cm<sup>2</sup>). The value of $`\langle N\rangle `$ is 3.5 times smaller than that obtained with the LSD detector at larger depth. This difference cannot be easily explained even by large systematic uncertainties and the difference between the depths of the detector sites.
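As a quick arithmetic check of the numbers above, a short Python sketch (ours; the quadrature combination of the relative errors is our assumption about how the uncertainties were propagated) reproduces the quoted result:

```python
# <N> = N_n Q / (N_c L eps) with the values quoted in the text;
# relative uncertainties combined in quadrature.
from math import sqrt

Nn, dNn   = 2.34e4, 0.46e4   # total number of neutrons
Nc, dNc   = 3.3e6,  0.2e6    # counters crossed by muons
L,  dL    = 65.0,   7.0      # mean muon path per counter, g/cm^2
eps, deps = 0.6,    0.1      # neutron detection efficiency
Q,  dQ    = 0.85,   0.10     # correction for neutron production in iron

N = Nn * Q / (Nc * L * eps)
rel = sqrt((dNn / Nn)**2 + (dQ / Q)**2 + (dNc / Nc)**2
           + (dL / L)**2 + (deps / eps)**2)
print(f"<N> = ({N * 1e4:.1f} +/- {N * rel * 1e4:.1f}) x 10^-4 per (g/cm^2)")
# -> about (1.5 +/- 0.5) x 10^-4, consistent with the quoted value
```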
## 4 Conclusions
The muon events collected by the first LVD tower (1.56 years of live time) were used to estimate the neutron flux at various distances from the muon track (or from the muon-produced cascade). The neutron flux decreases by more than three orders of magnitude at distances of more than 5 meters from the muon track or cascade core. The average number of neutrons produced per muon per g/cm<sup>2</sup> of its path in the liquid scintillator is found to be $`(1.5\pm 0.4)\times 10^{-4}`$ neutrons/(muon event)/(g/cm<sup>2</sup>). Under the assumptions mentioned in the previous section, the neutron differential energy spectrum in the kinetic energy range (5 – 400) MeV was found to follow a power law with exponent close to -1.
## 5 Acknowledgements
We wish to thank the staff of the Gran Sasso Laboratory for their aid and collaboration. This work is supported by the Italian Institute for Nuclear Physics (INFN) and in part by the Italian Ministry of University and Scientific-Technological Research (MURST), the Russian Ministry of Science and Technologies, the US Department of Energy, the US National Science Foundation, the State of Texas under its TATRP program, and Brown University. One of us (V. A. Kudryavtsev) is grateful to the University of Sheffield for hospitality.
References
Aglietta, M. et al. 1989, Nuovo Cimento 12C, 467.
Aglietta, M. et al. 1992, Nuovo Cimento 105A, 1793.
Bezrukov, L. B. et al. 1973, Sov. J. Nucl. Phys. 17, 987.
Dementyev, A. et al. 1997, Preprint INFN/AE-97/50.
Enikeev, R. I. et al. 1987, Sov. J. Nucl. Phys. 46, 1492.
Khalchukov, F. F. et al. 1983, Nuovo Cimento 6C, 320.
Ryazhskaya, O. G. & Zatsepin, G. T. 1966, Proc. 9th Intern. Cosmic Ray Conf. (London) 3, 987.
Zatsepin, G. T. et al. 1989, Sov. J. Nucl. Phys. 49, 266.
| no-problem/9905/astro-ph9905064.html | ar5iv | text |
# Near-infrared observations of galaxies in Pisces-Perseus: III. Global scaling relations of disks and bulges
## 1 Introduction
The existence of tight scaling relations between observable photometric and kinematic galaxy parameters, in particular the Fundamental Plane (FP) of elliptical galaxies (Jørgensen et al. 1996, Scodeggio 1997), and the Tully-Fisher relation for spirals (TF, Tully & Fisher 1977), finds its most straightforward application in the evaluation of galaxy distances. The small scatter observed in both relations implies a fine tuning between parameters strictly related to the stellar component alone, such as the optical luminosity, and the kinematic properties of the galaxy, which are affected by the overall mass distribution (both dark and visible).
In the case of spiral galaxies, it is therefore suggestive that a tight scaling relation may also exist between photometric and kinematic properties of the disks alone, with a scatter as low as, or even lower than, the TF relation. Karachentsev (1989) found that, for a sample of 15 edge-on galaxies, the disk scale length $`R_d`$ was connected to the 21 cm line velocity width $`W`$ and the $`I`$–band central disk brightness $`I(0)`$ by the relation $`R_d\propto W^{1.4}I(0)^{-0.74}`$, with a scatter in $`\mathrm{log}(R_d)`$ of 0.048. The implied uncertainty in the distance is around 12%, smaller than the one usually achieved by the TF relation ($`15-20`$%). More recently Chiba & Yoshii (1995, CY95 hereafter) tested on a sample of 14 nearby spirals the relation
$$\mathrm{log}R_d=a\mathrm{log}\left(V_2\,I(0)^{-0.5}\right)+b$$
(1)
in the $`B`$ band, where $`V_2`$ is the galaxy rotation velocity measured at 2.2 disk scalelengths. Since the contribution from the disk to the overall rotation curve (RC) has a maximum at this particular distance, the authors argue that the contributions from bulge and dark halo are likely to be less important there, and therefore the measured velocity should represent a good estimate of the rotation of the disk alone. For a set of exponential disks of fixed mass–to–light ratio ($`M/L`$) one would expect $`\mathrm{log}R_d\propto \mathrm{log}[V_2^2/I(0)]`$, corresponding to $`a=2`$ in Eq. (1). Actually CY95 find $`a=1.045`$, again with a remarkably small scatter. On the other hand, recent work by Giovanelli (1997) suggests that the accuracy of Eq. (1) as a distance indicator is inferior to the one attained by the traditional TF relation by at least a factor of 2.
Besides being a tool to provide redshift-independent distances, scaling relations also contain information about how galaxies – and in particular their “visible” constituents – have formed and evolved (Gavazzi 1997, Ciotti 1997, Dalcanton et al. 1997, Burstein et al. 1997 – BBFN hereafter). In this respect, spiral disks are a potentially “easy” class of systems, since they are all characterized by well defined shape and kinematics: if the effects of extinction on their surface brightness distribution are accounted for, and a reliable estimate of their mass is obtained, the resulting scaling relations will be directly related to systematic variations in the disks’ stellar content. Further information can be provided by the variation with wavelength of such properties, in particular by scaling relations involving colours (e.g., the colour–magnitude relation; see Gavazzi 1993, Tully et al. 1998, Peletier & de Grijs 1998).
In this work we investigate the existence and tightness of a general scaling relation in the near infrared (NIR), similar to the one defining the FP, for both structural components (disks and bulges) of a sample of 40 nearby spiral galaxies. Our aim is to test the power of these relations as tools to measure galaxy distances, and use them to provide new information about the stellar content and star formation history of spiral galaxies. In the present paper we are going to deal mainly with the problem of the distance measurements, leaving a thorough discussion of the second point to future work. The use of NIR photometry is well suited for a study of this kind, since it minimizes the effect of internal extinction, and provides a good tracing of the stellar mass. In most cases, high resolution, optical RC’s allow us to trace the gravitational potential of the galaxies up to their innermost regions.
The photometric parameters are obtained, in the NIR $`H`$ band, from a bi-dimensional decomposition of the galaxy images, as described in Sect. 3. In Sect. 4 we show how the kinematical information is extracted from the galaxy RC’s, to which we fit a model composed of bulge, disk, and a dark halo. We subsequently derive the coefficients of the FP for bulges and disks, and compare the potential accuracy in a distance determination achieved by the disks’ relation to the one we obtain using the TF relation for the same sample. We finally investigate the trends of $`M/L`$’s with luminosity and galaxy size. Throughout the paper we adopt a Hubble constant $`H_0=75`$ km s<sup>-1</sup> Mpc<sup>-1</sup>.
## 2 The data
The 40 spiral galaxies considered for this study are drawn from a larger sample selected in the Pisces–Perseus supercluster region, for which $`H`$ band images are available (see Moriondo et al. 1998b, 1999 for a thorough description of the original sample). This subset in particular contains the galaxies for which a RC is also available, either from the literature or from the private database of RG and MH, and includes morphological types ranging from Sa to Scd. In most cases, the RC’s are derived from optical emission line measurements and are confined within two or three disk scale lengths. For a few galaxies radio aperture–synthesis RC’s (21 cm HI line), extending well beyond the optical radius, are also available. All the RC’s were rescaled to our adopted values of distance and inclination. For all the galaxies (except UGC 2885) 21 cm line velocity widths are also available from the database of RG and MH. We will use these values later on to derive the $`H`$ band TF relation for the sample. Table 1 contains the basic information on the galaxies of the sample as well as the references for the RC’s; many of these were retrieved from the compilations by Corradi & Capaccioli (1991) and Prugniel et al. (1998). We note that this sample is not appropriate to obtain an absolute calibration of distances. It is however well suited to compare two different scaling relations and their relative accuracy as distance indicators.
## 3 The fits to the brightness distributions
The photometric data reduction and analysis are described in detail in Moriondo et al. (1998a, 1999). Briefly, each galaxy image is fitted with a model consisting of an exponential disk and a bulge, whose shape is described by a generalized exponential (Sérsic 1968). Both brightness distributions are assumed to be elliptical in the plane of the sky, with a common center and major axis. The parameters of the fit are the two scalelengths, surface brightnesses, and apparent ellipticities (one for each component). The results of these decompositions were used in Moriondo et al. (1998b) to evaluate the effect of internal extinction on the $`H`$-band structural parameters and derive the corrections to the face-on aspect. The corrected values for scalelengths, surface brightnesses, and total luminosities will be used in the following analysis. In most cases an exponential bulge yields a satisfactory fit to the data. For two galaxies in the subsample considered here (namely UGC 26 and UGC 820), a value $`n=2`$ of the “shape” parameter in the exponent of the bulge brightness distribution produces better results. In two cases (UGC 673 and UGC 975) the disk alone is sufficient to fit the galaxy, i.e. no trace of a bulge is found in the brightness distribution. Table 2 contains the photometric parameters of the sample galaxies, corrected to the face–on aspect and for the redshift; the $`H`$-band total luminosity, in column 8, will be introduced in Sect. 5. The resulting decompositions are plotted in the top panels of Fig. 1, together with the surface brightness profiles averaged along elliptical contours (from Moriondo et al. 1999), and the brightness profiles along the major axis. In the case of UGC 673 and UGC 12666, the discrepancy between these two profiles is due to the ellipse–fitting routine, which in both cases has underestimated the galaxy apparent ellipticity in the regions of lower signal–to–noise ratio. Our fits, however, are in good agreement with the respective brightness distributions, so that the estimated parameters are reliable for these galaxies as well.
## 4 The fits to the rotation curves
To estimate the contribution of bulge and disk to a given RC we use the information from the photometric data to predict the shape of the RC’s of the two components, and derive their $`M/L`$ ratios from a best fit to the observed velocity profile. The accuracy of this technique, which has been used for a long time by many authors (van Albada et al. 1985, Kent 1986, Martimbeau et al. 1994, Moriondo et al. 1998c), is actually limited by our poor knowledge of the contribution of the dark component to the overall RC. This becomes certainly important beyond the disk rotation peak, at around two disk scalelengths; however, it might also be significant at smaller radii, especially for low luminosity and low surface brightness galaxies – as suggested by Persic et al. (1996) and de Blok & McGaugh (1997) – but also in the case of bright spirals (see Bosma 1998 for a recent review on this topic). Courteau and Rix (1999) and Bottema (1993, 1997), for instance, estimate the disk contribution to be about 60% of the overall rotation at 2.2 scalelengths, whereas other authors (e.g. Verheijen & Tully 1998, Dubinski et al. 1999, Gerhard 1999, Bosma 1998) support a “maximum disk” scenario in which such contribution is about 85%, at least for bright galaxies. The shape of the dark matter distribution is rather uncertain as well: even the reliability of the distributions derived from numerical simulations of structure formation (e.g. Navarro et al. 1996) is weakened by their apparent discrepancy with the observed RC’s of low surface brightness galaxies. Since in this paper we are mainly interested in the properties of bulges and disks, the choice of the dark halo distribution is not likely to be important, at least as long as the visible matter dominates the mass distribution in the inner galaxy regions.
To have an idea of how the fit to the RC is influenced by the inclusion of a dark component in the model galaxy, we perform two different fits for each RC: one using only bulge and disk, and one including also the contribution from a dark halo. The expressions for the rotational velocity of an exponential disk and a generic ellipsoidal bulge are reported in Moriondo et al. (1998c), as well as two possible dark halo distributions, namely a constant density sphere and a pseudo–isothermal one (Kent 1986). The first halo distribution yields a linearly rising RC, and is an approximation of the latter as $`R`$ tends to zero; the rotation velocity associated with the pseudo–isothermal sphere, on the other hand, tends to a constant value as $`R`$ goes to infinity. This last distribution has been considered only for the 4 galaxies whose RC is well sampled in the outer, flat part (if this is not the case, and the asymptotic rotation velocity is not well defined, usually the two types of dark matter halos yield the same result, in the sense that only the linear part of the isothermal sphere is used by the minimization routine that fits the data).
In general we find that if a dark component is included in the fit, it never turns out to be dominant in the inner part of the RC, within two disk scalelengths; this is also true when the dark matter distribution is well constrained by a very extended RC. In other words, the solutions we obtain are usually not very different from the “maximum disk” ones, implying that, in most cases, the visible part of the galaxy provides a good match to the observed RC; we will therefore assume that the “maximum disk” hypothesis is basically correct for our galaxies. An alternative scenario, however, will also be considered in Sect. 8. In a few cases the available RC is not extended enough to constrain even a constant density halo, and in these cases we use the results from the “bulge + disk” fits. We consider these values reliable, since in general the inclusion of the dark halo in the fits does not produce major changes in the estimated mass and $`M/L`$ of the visible components.
We also note that, in a few cases, at 2.2 $`R_d`$’s the contribution to the overall rotation from the bulge is still significant, so that the disk contribution is well below the measured circular velocity. In these cases neglecting the presence of the bulge would certainly lead to overestimate the disk mass and $`M/L`$.
Table 3 contains the parameters derived from the RC fits. Also included in the table are the values of the velocity width $`W`$ (corrected for instrumental smoothing, redshift, turbulence, and inclination), and of the velocity at 2.2 $`R_d`$, $`V_2`$ (corrected for redshift and inclination); both quantities will be introduced in the next section. In Fig. 1 (bottom panels) we show the best fits to the RC’s for all the galaxies in our sample.
In 17 cases out of 40, the observed RC either has no data-points in the innermost region (3 cases), or shows no evidence of a contribution by the bulge. For these galaxies the bulge mass and $`M/L`$ are either unconstrained or likely to be underestimated. Most of the 14 galaxies for which the disk contribution alone is sufficient to match the inner RC are characterized by the smallest bulge–to–disk ratios ($`B/D`$) of the sample ($`B/D<0.05`$), or by the faintest bulge luminosities (absolute magnitude fainter than $`-21`$). It is not surprising, therefore, that in these cases the bulge contribution to the overall RC cannot be adequately resolved. A direct comparison between the $`M/L`$ of disks and bulges seems to confirm these statements. In Fig. 2 mass versus luminosity is plotted for the disks, and for the bulges with $`M_b>10^9M_{\odot }`$. These are the 23 cases for which we obtain a reliable fit to the RC for both components, and the two sets of objects seem to form a smooth sequence, roughly delimited by $`M/L=0.25`$ and $`M/L=2.5`$, in solar units. The remaining “low mass” bulges are placed well below the sequence, out of the plot window, suggesting that their mass is largely underestimated. In the following analysis, therefore, we will consider only the bulges plotted in Fig. 2.
## 5 The Tully Fisher relation for the sample
To derive an $`H`$-band TF relation for our galaxies, the 21 cm line velocity widths from the RG and MH database (Table 3) have been corrected for instrumental smoothing, redshift, turbulence, and inclination of the galaxy to the line of sight, according to the prescriptions in Giovanelli et al. (1997, 1998). The absolute galaxy magnitudes in the $`H`$ band (Table 2) are derived by integrating the surface brightness profiles extrapolated up to 8 disk scalelengths; a small correction to face-on aspect is applied, using the results obtained for the disks alone in Moriondo et al. (1998b). Figure 3 shows the plot we obtain, with the best fit to the data after the exclusion of the three discrepant points to the right. In the case of UGC 12618, our value for the inclination is probably underestimated, leading to an exceedingly large correction for $`W`$; on the other hand, due to its large error bars, the contribution of this galaxy to the fit is not very significant anyway. In the case of UGC 1350 and UGC 2617, the observed $`W`$ is likely to be overestimated, possibly because of a misidentification of the galaxy; we note that for these two objects an estimate of the rotation derived from the RC is, respectively, about 75% and 50% smaller than $`W`$.
The errors on both axes are of comparable magnitude; therefore the weight for each data point combines the two contributions. Since the relation we fit is $`y=ax+b`$, the i-th residual is weighted by
$$w(i)=\left[\sqrt{ϵ_{y(i)}^2+a^2ϵ_{x(i)}^2}\right]^{-1},$$
(2)
where the $`ϵ`$’s are the estimated errors. The slope and zero offset, with respect to the average $`\mathrm{log}W`$, are respectively $`-7.7\pm 1.0`$ and $`-23.67\pm 0.08`$.
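A minimal sketch of such a fit with errors on both coordinates is an ordinary weighted least-squares, iterated until the slope entering the weights of Eq. (2) is self-consistent. The code below is illustrative only; the data arrays are random placeholders, not our sample.

```python
# Straight-line fit y = a x + b with errors on both axes, iterating so
# that the weights of eq. (2) are consistent with the fitted slope.
import numpy as np

def fit_xy_errors(x, y, ex, ey, n_iter=20):
    a, b = np.polyfit(x, y, 1)               # unweighted starting point
    for _ in range(n_iter):
        W = 1.0 / (ey**2 + a**2 * ex**2)     # squared weights of eq. (2)
        S, Sx, Sy = W.sum(), (W * x).sum(), (W * y).sum()
        Sxx, Sxy = (W * x * x).sum(), (W * x * y).sum()
        a = (S * Sxy - Sx * Sy) / (S * Sxx - Sx**2)
        b = (Sy - a * Sx) / S
    return a, b

# placeholder usage: x = log W - <log W>, y = absolute H magnitude
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.1, 30)
y = -7.7 * x - 23.67 + rng.normal(0.0, 0.3, 30)
print(fit_xy_errors(x, y, np.full(30, 0.02), np.full(30, 0.3)))
```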
## 6 Fundamental Planes
We now fit separately to bulges and disks a relation
$$\mathrm{log}R=a\mathrm{log}V+b\mathrm{log}I+c$$
(3)
involving a scalelength, a velocity, and a surface brightness, analogous to the FP of elliptical galaxies. The designated parameters for the bulges are the effective radius $`R_e`$, the surface brightness at $`R_e`$ ($`I_e`$), and a velocity defined as
$$V_b=\sqrt{\frac{M_b}{R_e}}$$
(4)
where $`M_b`$ is the bulge mass, in units of $`10^9M_{\odot }`$. Since for all the bulges selected (the ones with $`M_b>1`$) the brightness distribution is fitted by a pure exponential – i.e. they all have the same shape – this velocity scales as the rotation velocity measured at 2.2 bulge scalelengths $`R_b`$. Also, $`R_e`$ and $`I_e`$ are related by constant factors respectively to $`R_b`$ and the central surface brightness $`I(0)`$ (the two parameters usually adopted for the disks): $`R_e=1.67R_b`$, and $`I_e=0.19I(0)`$. For the disks we choose the exponential scalelength $`R_d`$, the central surface brightness $`I(0)`$, and a velocity $`V_d`$ defined via a relation analogous to Eq. (4), with the disk mass and the disk scalelength. Again, this velocity differs from the value at 2.2 $`R_d`$ by the same factor for all the exponential distributions. Following CY95, we also consider a relation involving $`R_d`$, $`I(0)`$, and the total galaxy rotation velocity at 2.2 $`R_d`$, derived directly from the RC, which we will indicate as $`V_2`$.
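The exponential-profile conversion factors quoted above are easily verified: the fraction of light enclosed within $`x=R/R_d`$ is $`1-(1+x)e^{-x}`$, so the half-light radius solves $`(1+x)e^{-x}=1/2`$. A short check (ours, not part of the original analysis):

```python
# Verifying R_e = 1.67 scalelengths and I_e = 0.19 I(0) for an
# exponential profile.
from math import exp
from scipy.optimize import brentq

# enclosed-light fraction of an exponential disk: 1 - (1 + x) e^{-x}
x_e = brentq(lambda x: 1.0 - (1.0 + x) * exp(-x) - 0.5, 0.1, 10.0)
print(x_e, exp(-x_e))   # -> 1.678 and 0.187, i.e. R_e ~ 1.67 R_d and
                        #    I_e = I(0) e^{-x_e} ~ 0.19 I(0)
```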
It turns out that the three parameters to be fitted have comparable uncertainties, and each point needs to be weighted by a combination of them. In the case of the structural parameters ($`R`$, $`I`$), besides the formal error from the fit to the brightness distributions, a major contribution is added from the errors in the corrections to face-on aspect, that is from the uncertainty in the amount of internal extinction. These are evaluated according to the results in Moriondo et al. (1998b ). The uncertainty on the velocity depends in principle on the errors associated both to the RC measurement and to the scale length. However we expect this latter contribution to be less important, especially for the disks, whose scalelength is always the best determined parameter in the surface brightness decompositions. Therefore we assume the uncertainty on the velocities to be well represented by the formal error derived from the fitting routine. The errors on all other quantities are derived from these values: for example, the error on the disk mass is obtained combining the uncertainties on the velocity and the scale length, since $`MV^2R`$.
The fit to the disk parameters yields
$$\begin{array}{ccc}\mathrm{log}R_d\hfill & =\hfill & (1.31\pm 0.19)(\mathrm{log}V_d-\langle \mathrm{log}V_d\rangle )+\hfill \\ & & (-0.62\pm 0.09)(\mathrm{log}I(0)-\langle \mathrm{log}I(0)\rangle )+\hfill \\ & & (1.29\pm 0.07),\hfill \end{array}$$
(5)
where “$`\langle \,\rangle `$” designates the average over the sample. If $`V_2`$ is chosen, instead of the value defined in terms of the disk mass and scalelength, we obtain
$$\begin{array}{ccc}\mathrm{log}R_d\hfill & =\hfill & (1.47\pm 0.16)(\mathrm{log}V_2-\langle \mathrm{log}V_2\rangle )+\hfill \\ & & (-0.61\pm 0.07)(\mathrm{log}I(0)-\langle \mathrm{log}I(0)\rangle )+\hfill \\ & & (1.39\pm 0.06).\hfill \end{array}$$
(6)
We note that these coefficients are close to the values implied by the TF relation derived in Sect. 5: in fact, from $`L\propto W^{3.1}`$, we derive $`R\propto W^{1.5}I^{-0.5}`$. Therefore, the TF relation is a nearly edge-on projection of the FP. Also, these values are not inconsistent with the constraint implicit in the relation suggested by CY95 (Eq. (1)), i.e. $`a=-2b`$.
The best fit to the bulge parameters yields:
$$\begin{array}{ccc}\mathrm{log}R_e\hfill & =\hfill & (0.97\pm 0.13)(\mathrm{log}V_b-\langle \mathrm{log}V_b\rangle )+\hfill \\ & & (-0.61\pm 0.08)(\mathrm{log}I_e-\langle \mathrm{log}I_e\rangle )+\hfill \\ & & (0.46\pm 0.10).\hfill \end{array}$$
(7)
Again, a relation like Eq. (1) is not ruled out, even if with a slope $`a`$ slightly different from the disks’ value.
The errors associated with the various coefficients are estimated by performing a large number of Monte Carlo simulations of the data sample and fitting each simulation, so as to derive a distribution of values for each coefficient. Figures 4 and 5 show the relations we find for the disks, whereas Fig. 6 shows the one for the bulges.
### 6.1 A comparison with published results
The coefficients we derive for the disks are different from the values found by CY95 ($`a=1.0`$, $`b=-0.5`$), but closer to the ones reported by Karachentsev (1989), i.e. $`a=1.4`$, $`b=-0.7`$; we note however that, in the case of CY95, the photometric data were in a very different passband ($`B`$). Both our coefficients and the other quoted values are not consistent with what would be expected on the basis of the virial theorem and a universal mass–to–light ratio ($`a=2`$ and $`b=-1`$).
The various sets of coefficients can also be compared to the ones which define the FP of elliptical galaxies, evaluated using the central velocity dispersion as the kinematic parameter, and the effective surface brightness. For example, Bender et al. (1992) report for the Virgo cluster $`a=1.4`$ and $`b=-0.85`$, in the $`B`$ band; more recently Jørgensen et al. (1996) find $`a=1.24`$ and $`b=-0.82`$ in the Gunn $`r`$ band, with a scatter of 0.084 in $`\mathrm{log}R_e`$; Pahre et al. (1998a) estimate the NIR coefficients of the plane to be $`a=1.53`$ and $`b=-0.79`$. The various FP’s are not very different, and actually the existence of a “cosmic metaplane” has already been claimed by Bender et al. (1997) and BBFN. In particular, they defined a set of three parameters (the $`k`$ parameters) particularly suited to represent the FP of elliptical galaxies in the $`B`$ band, and found that basically all self–gravitating systems show a similar behaviour in the $`k`$-parameter space. In the case of spiral galaxies, they considered their global properties, without attempting a decomposition into structural components, and for the bulges in their data sample they used the central velocity dispersion as the kinematical parameter. This work improves on that approach by characterizing bulges and disks separately from the photometric point of view; in addition, we are able to determine the bulge mass independently of its kinematical status (i.e., whether it is more or less supported by rotation), and obtain an independent estimate of the coefficients of the bulge FP.
Figure 7 shows our data in the $`H`$-band $`k`$–space, with bulges denoted as open circles, and disks as triangles. We have defined the three coordinates as $`k_1=\mathrm{log}(M)`$, $`k_2=\mathrm{log}(M/L\,I_e^3)`$, and $`k_3=\mathrm{log}(M/L)`$, where $`M`$ is the total mass in units of $`M_{\odot }`$, $`M/L`$ is the stellar mass–to–light ratio in solar units, and $`I_e`$ is the effective surface brightness in $`L_{\odot }\,\mathrm{pc}^{-2}`$. These definitions, besides being applied to a different passband, are slightly different from the ones introduced by Bender et al. (1992). We can calculate the transformations between the two sets of coordinates from the relations reported in Appendix A of BBFN, assuming that all our bulges and disks are adequately described by exponentials. Using a typical $`B-H=3.5`$ for both components, we find
$$\{\begin{array}{ccc}k_1^B\hfill & =\hfill & \frac{1}{\sqrt{2}}(k_1-5.97)\hfill \\ k_2^B\hfill & =\hfill & \frac{1}{\sqrt{6}}(k_2+0.58)\hfill \\ k_3^B\hfill & =\hfill & \frac{1}{\sqrt{3}}(k_3+1.37)\hfill \end{array}$$
(8)
where $`k_1^B`$, $`k_2^B`$, and $`k_3^B`$ are the $`B`$-band BBFN parameters.
The dotted line in the upper right corner of the $`k_1`$ vs. $`k_2`$ plot corresponds, in our set of coordinates, to the boundary of the so–called Zone of Exclusion (ZOE), defined in the $`B`$ band by $`k_1^B+k_2^B>8`$: it is consistent with our data, in the sense that most bulges and all the disks are placed to its left side. The BBFN database for the bulges is here extended to lower masses, with several data–points falling in the typical range of dwarf ellipticals ($`M<10^{10}M_{\odot }`$); the two classes of objects appear however separated, with the bulges shifted towards higher values of $`k_2`$, and therefore higher concentrations (note that, for a given mass, $`k_2`$ is proportional to $`\mathrm{log}(I_e^2/R_e^2)`$, which is higher for compact, bright objects).
As far as spiral galaxies are concerned, BBFN find that their average distance from the ZOE increases steadily with morphological type, from Sa’s to Irregulars; the data points of our disks, however, are all placed in about the same region of the $`k_1`$–$`k_2`$ plane, quite distant from the ZOE, and roughly coincident with the locus occupied by Scd galaxies in the BBFN plots. Most likely, this difference arises from the separation of the two structural components, which has allowed us to plot the $`k`$ parameters of the disks alone: if the values for the whole galaxies are considered, one would expect the systems with higher $`B/D`$ (namely, the early–type spirals) to lie closer to the ZOE, due to the contamination of the bulge. The BBFN sequence, therefore, would be mainly driven by the average $`B/D`$, which in turn is roughly correlated with morphological type. When considered separately, on the other hand, disks and bulges are located in two distinct, contiguous regions of the $`k`$-space, as is also evident from the $`k_2`$–$`k_3`$ projection, with the disks shifted towards higher masses and lower concentrations. A different ZOE could in principle be defined for the disks, with about the same slope but shifted by about two decades towards lower $`k_2`$ values.
In the top panel of Fig. 7, we have plotted the slope of the $`B`$-band FP for the Virgo cluster, scaled to the centroid of the bulges (dashed line) and of the disks (solid line). It appears to be consistent with our data, as suggested by the similarity of our coefficients in Eqs. (5) and (6) to the ones reported by Bender et al. (1992). Also, using Eqs. (8) and the definition in BBFN, we can estimate the quantity $`\delta _{3:1}`$ for bulges and disks, representing their average vertical distance from the ellipticals’ FP in the $`k_1^B`$–$`k_3^B`$ projection. In the case of the bulges, we find $`\delta _{3:1}=-0.03`$ dex, in fair agreement with the BBFN estimate (-0.03); for the disks we find a shift of -0.19 dex, which is about the value found by BBFN for spirals from type Sa to Sc.
## 7 The disk FP as a distance indicator
In order to assess the goodness of Eqs. (5) and (6) as tools to measure galaxy distances, we compare the scatter of the data points, with respect to the best fit, to the scatter associated to the TF relation. The dispersion is defined as
$$\sigma =\sqrt{\frac{\sum _i(\mathrm{\Delta }_iw_i)^2}{\sum _iw_i^2}},$$
where $`\mathrm{\Delta }_i`$ is the residual on the i-th data point and $`w_i`$ the associated weight, defined by Eq. (2). We find a dispersion of 0.38 mag for the TF relation, implying an uncertainty of 19% on the distance of a single galaxy. We note that the galaxy distances used in this context have been estimated purely from redshifts. The peculiar velocity field thus adds scatter to the corresponding TF relation, in a measure which we estimate to be of the order of 0.15 to 0.20 mag at the typical distance of these objects. Had peculiar velocity–corrected distances been used, the TF scatter would have been correspondingly smaller, so that distance estimates would have uncertainties of about 16%, rather than 19%.
In the case of the FP, we find a dispersion of 0.11 in $`\mathrm{log}r`$ if we use velocities derived from the best fits (Eq. (5)), and of 0.09 if the velocities are the actual rotation velocities (Eq. (6)). The associated uncertainties in the distance are respectively about 29% and 23%. Even if these two values are less than 1/2 of the dispersion quoted by Giovanelli for the relation proposed by CY95 and a sample of 153 spiral galaxies (0.25 in $`\mathrm{log}r`$, not based on a detailed disk–bulge decomposition analysis such as we use here), they are still larger than the uncertainty yielded by the TF relation for our sample. Therefore, according to our data, the disk FP is not as accurate a distance indicator as the TF relation, even though it allows for one more free parameter in the relation. This result could be attributed to the scatter added to the relation by the uncertainties introduced in the data analysis process (in particular, the decomposition of the surface brightness distribution and the fit to the RC). On the other hand, we find that a modified Tully-Fisher relation, in which the velocity width is replaced by $`V_2`$, is characterized as well by a larger dispersion, of 0.45 mag. This value is equivalent to the dispersion of 0.09 in $`\mathrm{log}r`$ associated with Eq. (6). In a similar way, if the total luminosity is replaced by the luminosity within 2.2 $`R_d`$’s, we find a very large dispersion of 0.64 mag. Since these alternative parameters are not strictly derived from the surface brightness decompositions or from the fit to the RC’s, as is the case for the FP ones, we argue that using parameters associated with the inner part of the galaxy (rather than global quantities) is in itself a major source of scatter in the scaling relation considered. Thus, the fine tuning between photometric and kinematic parameters that yields the TF relation would be more effective because of their global character. As for the much reduced dispersion with respect to the Giovanelli data, we attribute this difference to the use of a more refined data-analysis technique for our sample.
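The translation of these scatters into distance errors is straightforward: a dispersion $`\sigma _m`$ in magnitudes gives a fractional distance error of $`10^{0.2\sigma _m}-1`$, while a dispersion $`\sigma _r`$ in $`\mathrm{log}r`$ gives $`10^{\sigma _r}-1`$. A short check (ours) of the percentages quoted above:

```python
# Converting the quoted dispersions into per-galaxy distance errors.
cases = [("TF, 0.38 mag", 0.38, "mag"),
         ("FP, eq. (5), 0.11 dex", 0.11, "dex"),
         ("FP, eq. (6), 0.09 dex", 0.09, "dex")]
for label, s, kind in cases:
    err = 10 ** (0.2 * s) - 1 if kind == "mag" else 10 ** s - 1
    print(f"{label}: {100 * err:.0f}%")   # -> 19%, 29%, 23%
```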
## 8 The relation between mass and luminosity
Eqs. (5)–(7) show that for both disks and bulges a systematic variation of $`M/L`$ exists, if we assume that we have properly evaluated their shape, photometric properties (including the effect of internal extinction), and contribution to the RC. In particular we can rewrite Eqs. (5) and (7) as
$$L_d\propto \frac{M_d^{1.0\pm 0.2}}{R_d^{0.7\pm 0.4}}$$
(9)
and
$$L_b\propto \frac{M_b^{0.8\pm 0.2}}{R_e^{0.5\pm 0.3}}.$$
(10)
Whereas for elliptical galaxies the $`M`$–$`L`$ plane shows an edge-on view of the FP, this is not the case for disks and bulges, and a residual dependence on the scalelength is left; this dependence is such that, if a given mass settles into a larger size, the corresponding $`M/L`$ ratio is larger. Actually, this result is already implicit in the work by CY95: a good match to equation 1 with $`a\ne 2`$ is possible only if $`M/L`$ is a function of $`R_d`$ alone. Previous results, both observational (Persic et al. 1996) and theoretical (Navarro et al. 1996, Navarro 1998), have suggested that the stellar $`M/L`$ in the $`B`$ band increases with the galaxy luminosity, rather than with its size. This conclusion is probably consistent with ours, since the total galaxy luminosity, especially in the $`B`$ band, is mainly determined by the disk scalelength. Since the average stellar $`M/L`$ ratios are determined by the star–formation history of the galaxy, a connection must necessarily exist between the structural properties of the system, in particular its size, and the characteristics of its average stellar population. This is not surprising, since connections of this kind also exist for elliptical galaxies: for example the one at the base of the well–known color–magnitude relation. On the other hand, the parameters which characterize stellar populations (age, metallicity, star–formation rate, etc.), whose systematic variation could be responsible for the observed trend, are far too many to be constrained by our $`H`$-band data alone. Some additional information on this issue could be provided by a detailed multiband analysis – possibly integrated with spectroscopic data – such as the one recently carried out by Pahre et al. (1998b) for elliptical galaxies.
We note however that a different scenario can be outlined, in particular if we forget Eqs. (9) and (10), and assume instead that the stellar $`M/L`$ ratio is constant for all the galaxies. In this hypothesis, Eq. (6) can be used to predict how the contribution of the disk to the RC should scale with surface brightness and scalelength. Since we have, for an exponential disk:
$$V_d\propto [(M/L)\,I(0)\,R_d]^{0.5},$$
(11)
and from Eq. (6):
$$V_2\propto I(0)^{0.41}R_d^{0.68},$$
if $`M/L`$ is a constant, we derive
$$\frac{V_d}{V_2}\propto \frac{I(0)^{0.09\pm 0.07}}{R_d^{0.18\pm 0.07}},$$
(12)
implying that the relative disk contribution to the RC should be the smallest for large galaxies and, although with less significance, for low surface brightness ones. (From Eqs. (5) and (6) we can see instead that, in our RC fits, the ratio $`V_d/V_2`$ is about constant, i.e. independent of $`R_d`$ and $`I(0)`$: it amounts to about 85% on average, the typical “maximum disk” value. As a consequence, the disk $`M/L`$ must increase with $`R_d`$.) For the bulges, using Eqs. (7) and (11), we find a trend with size similar to the one expressed by Eq. (12). Considering the range of parameters, rather uniformly covered by our sample (about three magnitudes in $`I(0)`$ and a factor of 4 in $`R_d`$), and assuming that the brightest and most compact disks contribute about 90% to the overall RC at 2.2 $`R_d`$, Eq. (12) then implies that, for the galaxies with the largest radii and faintest surface brightness, $`V_d`$ should be about 55% of the observed rotation velocity. In this hypothesis a “maximum disk” RC can be ruled out in most cases; of course, to match the observed RC’s, we have to postulate a conspicuous increase with size of the relative amount of dark matter. As already mentioned in Sect. 4, the results by Bottema (1993, 1997) and by Courteau & Rix (1999) support a scenario of this kind, with the disk contributing, on average, about 60% of the rotation at 2.2 $`R_d`$’s. Assuming that this rule holds for our sample as well, Eq. (12) then predicts that the ratio $`V_d/V_2`$ should vary throughout the sample between about 0.45 and 0.75, in a well defined way according to the size and surface brightness of each disk. Neglecting the presence of the bulge, and assuming an infinitely thin disk, the corresponding ratio of dark to visible mass within $`R_2`$ can be expressed as
$$\frac{M_h}{M_d}\simeq 0.8\left[\left(\frac{V_2}{V_d}\right)^2-1\right].$$
This quantity changes by more than a factor of 8 in our sample, from 0.4 to 3.3, whereas in the “maximum disk” hypothesis it stays roughly constant: in principle, the predictions of a reliable model for galaxy formation could help to distinguish between two such different behaviours.
Of course, any intermediate scenario between the two extremes discussed here (“maximum disk” solutions and constant stellar $`M/L`$) could be considered as well. From the observational point of view, again, more detailed spectrophotometric data might better constrain possible variations in the average stellar population of different galaxies, and help to distinguish between the different possibilities.
## 9 Summary
Using near–infrared images and rotation curves of a sample of 40 spiral galaxies, we have determined the scaling relations between structural and kinematic parameters of bulges and disks, analogous to the Fundamental Plane of elliptical galaxies. The accuracy of the disk FP as a distance indicator, for this set of data and our photometric decompositions, is comparable to, though slightly worse than, the one attained by the Tully–Fisher relation. This suggests that the fine tuning between dark and visible components at the basis of the various scaling relations is more effective for global parameters. Also, we deduce that either (a) the stellar mass-to-light ratio of the disk increases with $`R_d`$, or (b) the disk contribution to the observed RC decreases according to Eq. (12) for galaxies of large size. A similar behaviour is observed for the bulges.
###### Acknowledgements.
We would like to thank the referee, A. Bosma, for his careful reading of the manuscript, C. Giovanardi and L. Hunt for insightful comments and suggestions, and Stephane Courteau for having provided his data in electronic format. This research was partially funded by ASI Grant ARS–98–116/22. Partial support during residency of G.M. at Cornell University was obtained via the NSF grant AST96-17069 to R. Giovanelli.
| no-problem/9905/math9905012.html | ar5iv | text |
# Some Polyomino Tilings of the Plane
## 1 Introduction
Tilings of the plane are of interest both to statistical physicists and recreational mathematicians. While the number of ways to tile a lattice with dominoes can be calculated by expressing it as a determinant, telling whether a finite collection of shapes can tile the plane at all is undecidable.
In the 1994 edition of his wonderful book Polyominoes, Solomon W. Golomb states that the problem of determining how many ways a $`4\times n`$ rectangle can be tiled by right trominoes “appears to be a challenging problem with reasonable hope of an attainable solution.” Here we solve this problem, and several more. Specifically, we find the generating functions for tilings of $`4\times n`$ and $`5\times n`$ rectangles by the right tromino, and for tilings of $`4\times n`$ rectangles by the $`L`$ and $`T`$ tetrominoes.
The number of ways of tiling an $`m\times n`$ rectangle with any finite collection of shapes, where $`m`$ is fixed, can be found by calculating the $`n`$th power of a transfer matrix whose rows and columns correspond to the various interface shapes a partial tiling can have. As $`n`$ grows large, the number of tilings $`N(n)`$ grows as $`\lambda ^n`$, where $`\lambda `$ is the largest eigenvalue of this matrix. In general, the number of interface shapes, and therefore the size of the transfer matrix, grows exponentially in $`m`$. However, if a small number of interface shapes suffice to generate all possible tilings, then we can use a small transfer matrix, and $`\lambda `$ will then be an algebraic number of low degree. In fact, we find that for the problems considered here the largest matrix we need is $`4\times 4`$.
By calculating $`\lambda `$ for these rectangles, we can place a lower bound on the entropy per site of tilings of the plane by these polyominoes. In the case of the $`T`$ tetromino, we improve this bound by using the exact solution of the Ising model in two dimensions.
## 2 The right tromino
Consider tilings of a $`4\times n`$ rectangle with right trominoes, where $`n`$ is a multiple of 3. The number of interfaces seems potentially large. However, it turns out that we only need to think about three, so that a $`3\times 3`$ matrix will suffice. These are shown in Figure 1, with the various kinds of transitions that can take place as the tiling grows to the right.
For these three interfaces, which we call ‘straight,’ ‘deep jog,’ and ‘shallow jog,’ we define $`N(t)`$, $`N_1(t)`$ and $`N_2(t)`$ respectively as the number of tilings when there are $`n=3t`$ columns to the left of the dotted line. Obviously $`N_1(t)`$ and $`N_2(t)`$ stay the same if we count tilings of their vertical reflections instead. Our goal is to find $`N(t)`$. To express these, we write them as generating functions,
$$G(z)=\sum _tN(t)z^t$$
and similarly for $`G_1(z)`$ and $`G_2(z)`$.
From Figure 1, we see that a straight interface can make a transition to itself in four ways, each of which increases $`t`$ by 1. It has one transition each to a deep jog and its reflection, each of which has one transition back to a straight, and both of these increase $`t`$ by 1. Deep and shallow jogs of a given orientation have two transitions in each direction; as we have defined them a shallow-to-deep transition increments $`t`$ but a deep-to-shallow transition does not. A shallow jog has two transitions to its reflection, incrementing $`t`$. Finally, $`N(0)=1`$ since there is exactly one way to tile a $`4\times 0`$ rectangle.
This gives us a $`3\times 3`$ transfer matrix, and the system of linear equations
$`G`$ $`=`$ $`1+4zG+2zG_1`$
$`G_1`$ $`=`$ $`zG+2zG_2`$
$`G_2`$ $`=`$ $`2G_1+2zG_2`$
The solution for $`G`$ is
$$G(z)=\frac{1-6z}{1-10z+22z^2+4z^3}$$
The coefficients $`N(t)`$ of the first few terms of $`G`$’s Taylor expansion are
$$1,4,18,88,468,2672,16072,100064,636368,4097984,26579488,\mathrm{}$$
giving the number of ways to tile rectangles of size $`4\times 0`$, $`4\times 3`$, $`4\times 6`$, and so on.
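These coefficients can be checked mechanically: the denominator of $`G`$ encodes a linear recurrence for $`N(t)`$. Here is a short Python sketch (the helper below is ours, written just for this check):

```python
def series(num, den, nterms):
    """Taylor coefficients of num(z)/den(z), given den[0] == 1."""
    out = []
    for t in range(nterms):
        c = num[t] if t < len(num) else 0
        c -= sum(den[k] * out[t - k] for k in range(1, min(t, len(den) - 1) + 1))
        out.append(c)
    return out

# G(z) = (1 - 6z) / (1 - 10z + 22z^2 + 4z^3)
print(series([1, -6], [1, -10, 22, 4], 11))
# [1, 4, 18, 88, 468, 2672, 16072, 100064, 636368, 4097984, 26579488]
```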
To gain a better understanding of this series, we concentrate on fault-free rectangles, those whose only straight interfaces are those at the left and right end. The first few of these are shown in Figure 2. These always begin and end with transitions from a straight edge to a deep jog and back, except for $`t=1`$ where we have a pair of $`2\times 3`$ rectangles. By disallowing transitions back to a straight interface except at the end, we find that the linear equations for fault-free rectangles are
$`G^{\prime }`$ $`=`$ $`4z+2zG_1^{\prime }`$
$`G_1^{\prime }`$ $`=`$ $`z+2zG_2^{\prime }`$
$`G_2^{\prime }`$ $`=`$ $`2G_1^{\prime }+2zG_2^{\prime }`$
This has the solution
$$G^{\prime }(z)=2z\frac{2-11z-2z^2}{1-6z}$$
The Taylor series of $`G^{\prime }`$ tells us that the number $`N^{\prime }`$ of fault-free tilings is
$$4,2,8,48,288,1728,\mathrm{}$$
which we can write as
$$N^{\prime }(t)=\{\begin{array}{cc}4\hfill & \text{if }t=1\hfill \\ 2\hfill & \text{if }t=2\hfill \\ 8\cdot 6^{t-3}\hfill & \text{if }t\ge 3\hfill \end{array}$$
Since any tiling consists of a concatenation of fault-free tilings, we have
$$G(z)=1+G^{\prime }(z)+G^{\prime }(z)^2+G^{\prime }(z)^3+\mathrm{}=\frac{1}{1-G^{\prime }(z)}$$
which the reader can verify. Note that we have to define $`G^{\prime }(0)=N^{\prime }(0)=0`$ in order for this formula to work.
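The verification is immediate in the same framework: expand $`G^{\prime }`$ from its closed form, then expand $`1/(1-G^{\prime })`$ as a power series (a sketch, restating the `series` helper from above for completeness):

```python
def series(num, den, nterms):
    out = []
    for t in range(nterms):
        c = num[t] if t < len(num) else 0
        c -= sum(den[k] * out[t - k] for k in range(1, min(t, len(den) - 1) + 1))
        out.append(c)
    return out

n = 11
Gp = series([0, 4, -22, -4], [1, -6], n)  # G'(z) = 2z(2 - 11z - 2z^2)/(1 - 6z)
print(Gp[:7])                             # [0, 4, 2, 8, 48, 288, 1728]

G = series([1], [1] + [-c for c in Gp[1:]], n)   # expand 1/(1 - G'(z))
print(G)    # [1, 4, 18, 88, 468, ...] -- the full tiling counts again
```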
The asymptotic growth rate of $`N(t)`$ is the reciprocal of the radius of convergence of $`G`$’s Taylor series. Thus $`N(t)\sim \lambda ^t`$ where $`\lambda `$ is the largest positive root of
$$\lambda ^3-10\lambda ^2+22\lambda +4=0$$
This gives
$$\lambda =\frac{2}{3}\left(5+\sqrt{34}\mathrm{cos}\frac{\theta }{3}\right)$$
where $`\pi /2<\theta <\pi `$ and
$$\mathrm{tan}\theta =-\frac{3}{11}\sqrt{\frac{519}{2}}$$
Numerically, we have
$$\lambda =6.54560770847481152029\mathrm{}$$
It would be nice to find a simpler expression for $`\lambda `$, perhaps using the decomposition into fault-free rectangles.
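Both expressions are easy to check numerically (a quick sketch in Python):

```python
import numpy as np

print(max(np.roots([1, -10, 22, 4]).real))           # 6.5456077084...

theta = np.pi + np.arctan(-(3/11) * np.sqrt(519/2))  # places theta in (pi/2, pi)
print((2/3) * (5 + np.sqrt(34) * np.cos(theta/3)))   # same value, from the closed form
```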
For $`5\times 3t`$ rectangles, four interface shapes suffice. We call these straight, little jog, big jog, and slope, and their transitions are shown in Figure 3 (reflections and reversals are not shown). If their generating functions are $`G`$, $`G_1`$, $`G_2`$ and $`G_3`$ respectively, our $`4\times 4`$ transfer matrix is
$`G`$ $`=`$ $`1+8zG_2+4zG_3`$
$`G_1`$ $`=`$ $`4zG+zG_1+2z^2G_2+(2z+4z^2)G_3`$
$`G_2`$ $`=`$ $`2G_1+zG_2+4zG_3`$
$`G_3`$ $`=`$ $`2zG+4zG_1+(2z+4z^2)G_2+4z^2G_3`$
whose solution is
$$G(z)=\frac{1-2z-31z^2-40z^3-20z^4}{1-2z-103z^2-280z^3-380z^4}$$
The coefficients $`N(t)`$ of the first few terms of $`G`$’s Taylor expansion are
$$1,0,72,384,8544,76800,1168512,12785664,170678784,2014648320,25633231872,\mathrm{}$$
Note that there are no $`5\times 3`$ tilings.
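As a check, the same mechanical expansion reproduces these numbers (a sketch, with the `series` helper restated):

```python
def series(num, den, nterms):
    out = []
    for t in range(nterms):
        c = num[t] if t < len(num) else 0
        c -= sum(den[k] * out[t - k] for k in range(1, min(t, len(den) - 1) + 1))
        out.append(c)
    return out

print(series([1, -2, -31, -40, -20], [1, -2, -103, -280, -380], 11))
# [1, 0, 72, 384, 8544, 76800, 1168512, 12785664, 170678784,
#  2014648320, 25633231872]
```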
The generating functions for fault-free tilings obey
$`G^{\prime }`$ $`=`$ $`8zG_2^{\prime }+4zG_3^{\prime }`$
$`G_1^{\prime }`$ $`=`$ $`4z+zG_1^{\prime }+2z^2G_2^{\prime }+(2z+4z^2)G_3^{\prime }`$
$`G_2^{\prime }`$ $`=`$ $`2G_1^{\prime }+zG_2^{\prime }+4zG_3^{\prime }`$
$`G_3^{\prime }`$ $`=`$ $`2z+4zG_1^{\prime }+(2z+4z^2)G_2^{\prime }+4z^2G_3^{\prime }`$
and so
$$G^{\prime }(z)=24z^2\frac{3+10z+15z^2}{1-2z-31z^2-40z^3-20z^4}$$
The first few $`N^{\prime }(t)`$, starting with $`t=2`$, are then
$$72,384,3360,21504,163968,1136640,8283648,58791936,423121920,\mathrm{}$$
These are shown for $`t=2`$ and $`t=3`$ in Figure 4.
The asymptotic growth of the number of fault-free rectangles is $`N^{\prime }(t)\sim \lambda ^{\prime t}`$ where $`\lambda ^{\prime }`$ is the largest positive root of
$$\lambda ^{\prime 4}-2\lambda ^{\prime 3}-31\lambda ^{\prime 2}-40\lambda ^{\prime }-20=0$$
which has a rather complicated closed-form expression which we will not reproduce here. Numerically,
$$\lambda ^{\prime }=7.16235536278185348653\mathrm{}$$
For all rectangles, including faulty ones, we have $`N(t)\sim \lambda ^t`$ where $`\lambda `$ is the largest positive root of
$$\lambda ^4-2\lambda ^3-103\lambda ^2-280\lambda -380=0$$
This is
$$\lambda =\frac{1}{2}+\frac{\sqrt{627y^{\frac{1}{3}}-3y^{\frac{2}{3}}-13107}}{6y^{\frac{1}{6}}}+\frac{\sqrt{4369+418y^{\frac{1}{3}}+y^{\frac{2}{3}}+2304\sqrt{\frac{3y}{209y^{\frac{1}{3}}-y^{\frac{2}{3}}-4369}}}}{2\sqrt{3}y^{\frac{1}{6}}}$$
where
$$y=1204327-72\sqrt{263697405}$$
Numerically, we have
$$\lambda =12.36366722455963019234\mathrm{}$$
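The closed form can be checked against a direct numerical root (a sketch; the only inputs are $`y`$ and the quartic above):

```python
import numpy as np

y = 1204327 - 72 * np.sqrt(263697405)
c = y ** (1 / 3)
t1 = np.sqrt(627 * c - 3 * c**2 - 13107) / (6 * y**(1 / 6))
inner = 2304 * np.sqrt(3 * y / (209 * c - c**2 - 4369))
t2 = np.sqrt(4369 + 418 * c + c**2 + inner) / (2 * np.sqrt(3) * y**(1 / 6))
print(0.5 + t1 + t2)                                  # 12.3636672245...
print(max(np.roots([1, -2, -103, -280, -380]).real))  # agrees
```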
## 3 $`L`$ tetrominoes
We can perform a similar analysis for tilings of $`4\times n`$ rectangles with $`L`$ tetrominoes. In fact, here the analysis is easier. As Figure 5 shows, we have just two kinds of interfaces, straight and jogged. We define $`N(t)`$ and $`N_1(t)`$ as the number of tilings for a straight or jogged interface respectively when there are $`2t`$ columns to the left of the dotted line. Then a straight has two transitions to itself that increase $`t`$ by 1, four to itself that increase $`t`$ by 2, and two to itself that increase $`t`$ by 3. It has two transitions each to a jog and its reflection, one of which increases $`t`$ by 1 and the other by 2. Finally, a jog has two transitions to another (reflected) jog that increase $`t`$ by 1, one transition to itself that increases $`t`$ by 3, and the two inverse transitions back to a straight.
Thus we have a $`2\times 2`$ transfer matrix, and defining generating functions as before gives us the linear equations
$`G`$ $`=`$ $`1+(2z+4z^2+2z^3)G+(2z+2z^2)G_1`$
$`G_1`$ $`=`$ $`(z+z^2)G+(2z+z^3)G_1`$
so
$$G(z)=\frac{1-2z-z^3}{1-4z-2z^2+z^3+4z^4+4z^5+2z^6}$$
The coefficients $`N(t)`$ of the first few terms of $`G`$’s Taylor expansion are
$$1,2,10,42,182,790,3432,14914,64814,281680,1224182,\mathrm{}$$
giving the number of ways to tile a rectangle of size $`4\times 0`$, $`4\times 2`$, $`4\times 4`$, etc.
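As before, the series can be regenerated from $`G`$ directly (a sketch, same helper):

```python
def series(num, den, nterms):
    out = []
    for t in range(nterms):
        c = num[t] if t < len(num) else 0
        c -= sum(den[k] * out[t - k] for k in range(1, min(t, len(den) - 1) + 1))
        out.append(c)
    return out

print(series([1, -2, 0, -1], [1, -4, -2, 1, 4, 4, 2], 11))
# [1, 2, 10, 42, 182, 790, 3432, 14914, 64814, 281680, 1224182]
```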
As before, we can focus our attention on fault-free rectangles. For $`t>3`$, any such rectangle consists of transitions from a straight edge to a jog, and transitions between jogs in between. For $`t=1,2,3`$ we have an additional 2, 4, and 2 fault-free rectangles respectively as shown in Figure 5. The linear equations for the fault-free generating functions are thus
$`G^{\prime }`$ $`=`$ $`2z+4z^2+2z^3+2(z+z^2)G_1^{\prime }`$
$`G_1^{\prime }`$ $`=`$ $`z+z^2+(2z+z^3)G_1^{\prime }`$
This gives
$$G^{\prime }(z)=\frac{2z(1+z)^2(1-z-z^3)}{1-2z-z^3}$$
whose Taylor expansion tells us that the number $`N^{\prime }`$ of fault-free tilings is
$$2,6,10,18,38,84,186,410,904,1994\mathrm{}$$
The first few of these are shown in Figure 6. The reader can verify that $`G(z)=1/(1-G^{\prime }(z))`$.
The asymptotic growth of the number of fault-free rectangles is $`N^{\prime }(t)\sim \lambda ^{\prime t}`$ where $`\lambda ^{\prime }`$ is the largest root of
$$\lambda ^{\prime 3}-2\lambda ^{\prime 2}-1=0$$
which is
$$\lambda ^{\prime }=\frac{1}{3}\left(2+\sqrt[3]{\frac{43}{2}-\frac{3\sqrt{177}}{2}}+\sqrt[3]{\frac{43}{2}+\frac{3\sqrt{177}}{2}}\right)$$
or numerically
$$\lambda ^{\prime }=2.20556943040059031170\mathrm{}$$
For all rectangles, including faulty ones, we have $`N(t)\sim \lambda ^t`$ where $`\lambda `$ is the largest root of
$$\lambda ^6-4\lambda ^5-2\lambda ^4+\lambda ^3+4\lambda ^2+4\lambda +2=0$$
This appears not to have a closed form, but it is approximately
$$\lambda =4.34601641141649282849\mathrm{}$$
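Both numerical values of this section are easy to confirm (a quick check):

```python
import numpy as np

print(max(np.roots([1, -2, 0, -1]).real))           # lambda' = 2.2055694304...
print(max(np.roots([1, -4, -2, 1, 4, 4, 2]).real))  # lambda  = 4.3460164114...
```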
## 4 $`T`$ tetrominoes
Of these three polyominoes, finding the number of tilings of a $`4\times n`$ rectangle is easiest for the $`T`$ tetromino. In fact, such tilings only exist if $`n`$ is a multiple of 4. Figure 7 shows two kinds of interfaces, straight and jagged, with $`4t`$ columns to the left of the dotted line. A straight edge must make a transition either to a jag or its reflection, incrementing $`t`$. A jag can make a transition back to straight, leaving $`t`$ fixed, or to itself, incrementing $`t`$. Thus the generating functions obey
$`G`$ $`=`$ $`1+2G_1`$
$`G_1`$ $`=`$ $`zG+zG_1`$
This gives
$$G(z)=\frac{1-z}{1-3z}$$
whose Taylor expansion tells us that
$$N(t)=\{\begin{array}{cc}1\hfill & \text{if }t=0\hfill \\ 2\cdot 3^{t-1}\hfill & \text{if }t>0\hfill \end{array}$$
Thus we can find a closed form for $`N(t)`$, unlike in the previous two cases. There are exactly two fault-free rectangles for each $`t`$, as shown in Figure 8. Thus the number of tilings is $`2\cdot 3^{t-1}`$ since there are two choices of initial fault-free rectangle and three choices for each increment of $`t`$, namely either continuing the current fault-free rectangle, or ending it and starting one of two new ones.
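The closed form is easy to confirm against the generating function (a sketch, using the same `series` helper as in the earlier sections):

```python
def series(num, den, nterms):
    out = []
    for t in range(nterms):
        c = num[t] if t < len(num) else 0
        c -= sum(den[k] * out[t - k] for k in range(1, min(t, len(den) - 1) + 1))
        out.append(c)
    return out

print(series([1, -1], [1, -3], 8))                  # [1, 2, 6, 18, 54, 162, 486, 1458]
print([1] + [2 * 3**(t - 1) for t in range(1, 8)])  # the closed form agrees
```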
It is easy to show that the $`T`$ tetromino cannot tile any rectangles of width 5, 6, or 7. Proofs are shown in Figure 9. More generally, Walkup showed that a rectangle can be tiled with $`T`$ tetrominoes if and only if its length and width are both multiples of 4.
## 5 Tilings of the plane
Since the plane can be divided into rows or columns, the entropy per site of tilings of rectangles of a given width serves as a lower bound for the entropy per site of tilings of the plane. Specifically, if the number of ways to tile a $`m\times n`$ rectangle is $`N(m,n)`$, we define the entropy $`\sigma `$ as
$$\sigma =\underset{m,n\rightarrow \mathrm{\infty }}{lim}\frac{\mathrm{ln}N(m,n)}{mn}$$
so that
$$N(m,n)\sim e^{\sigma mn}$$
for large rectangles. For the right tromino, our analysis of $`5\times n`$ rectangles gives
$$\sigma \ge \frac{1}{15}\mathrm{ln}\lambda =0.1676508\mathrm{}$$
where the factor of 15 comes from the fact that each increment of $`t`$ adds 15 sites to the rectangle. Similarly, for the $`L`$ tetromino on $`4\times n`$ rectangles, we have
$$\sigma \ge \frac{1}{8}\mathrm{ln}\lambda =0.183657\mathrm{}$$
Better lower bounds could be obtained by looking at wider rectangles. Note that if there are any tilings we failed to see, this only improves these lower bounds, since additional transitions can only increase $`\lambda `$.
For the $`T`$ tetromino, the lower bound we get from $`4\times n`$ rectangles, $`\frac{1}{16}\mathrm{ln}3\approx 0.06866`$, is not as good as the bound $`\frac{1}{8}\mathrm{ln}2\approx 0.08664`$ which we can derive by noting that the 8-cell shape in Figure 10 can tile the plane, and in turn can be tiled in two ways by the $`T`$. We can get a better bound as follows.
Suppose that the boundaries between tiles at points $`(4x,4y)`$ are clockwise or counterclockwise fylfots as shown in Figure 11. Whenever neighboring fylfots have opposite orientations, there is only one way to tile the space between them, but if they have the same orientation the space between them can be tiled in two ways as in Figure 10. (Note that this is also the source of two of the choices in the $`4\times n`$ tilings above.) If we define the two orientations as $`+1`$ and $`1`$, then we can sum over all configurations of fylfots $`\{s_i=\pm 1\}`$, with each one giving us 1 choice for pairs of unlike neighbors and 2 choices for like ones. Thus we have a lower bound of
$$N(m,n)\ge \sum _{\{s_i=\pm 1\}}\prod _{\langle ij\rangle }\left\{\begin{array}{cc}1\hfill & \text{if }s_is_j=-1\hfill \\ 2\hfill & \text{if }s_is_j=+1\hfill \end{array}\right\}$$
where the product is over all pairs of nearest neighbors, and the sum is over the configurations of a lattice of size $`\frac{m}{4}\times \frac{n}{4}=\frac{mn}{16}`$. We can rewrite this as
$`N(m,n)`$ $`\ge `$ $`{\displaystyle \sum _{\{s_i=\pm 1\}}}{\displaystyle \prod _{\langle ij\rangle }}e^{\frac{1}{2}(s_is_j+1)\mathrm{ln}2}`$
$`=`$ $`2^{\frac{mn}{16}}{\displaystyle \sum _{\{s_i=\pm 1\}}}{\displaystyle \prod _{\langle ij\rangle }}e^{(\frac{1}{2}\mathrm{ln}2)s_is_j}`$
(note that there are $`\frac{mn}{8}`$ edges in the fylfot lattice).
We now note that the latter sum is the partition function of an antiferromagnetic Ising model with $`\beta =\frac{1}{2}\mathrm{ln}2`$. We can transform this to a ferromagnetic model by negating the $`s_i`$ on one of the checkerboard sublattices. Using the exact solution of the Ising model in two dimensions, we then have
$$\sigma \ge \frac{1}{16}(\mathrm{ln}2+\sigma _{\mathrm{Ising}})$$
where
$$\sigma _{\mathrm{Ising}}=\mathrm{ln}2+\frac{1}{2}\frac{1}{(2\pi )^2}\int _0^{2\pi }\int _0^{2\pi }d\omega _1d\omega _2\mathrm{ln}\left[\mathrm{cosh}^22\beta -(\mathrm{sinh}2\beta )(\mathrm{cos}\omega _1+\mathrm{cos}\omega _2)\right]$$
For $`\beta =\frac{1}{2}\mathrm{ln}2`$, where $`\mathrm{cosh}^22\beta =\frac{25}{16}`$ and $`\mathrm{sinh}2\beta =\frac{3}{4}`$, we have
$$\sigma _{\mathrm{Ising}}=0.8270269567\mathrm{}$$
and so
$$\sigma \ge 0.09501088358\mathrm{}$$
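Since the integrand is smooth and periodic, the integral converges very quickly on a uniform grid; a sketch of the numerical check:

```python
import numpy as np

beta = 0.5 * np.log(2)
w = np.linspace(0.0, 2 * np.pi, 1024, endpoint=False)
W1, W2 = np.meshgrid(w, w)
f = np.log(np.cosh(2 * beta)**2 - np.sinh(2 * beta) * (np.cos(W1) + np.cos(W2)))
sigma_ising = np.log(2) + 0.5 * f.mean()   # grid mean approximates (2 pi)^-2 times the integral
print(sigma_ising)                         # 0.827026956..., as quoted above
print((np.log(2) + sigma_ising) / 16)      # 0.095010883..., the lower bound on sigma
```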
For upper bounds, we can generalize an argument given in as follows. If we construct a tiling of the plane with right trominoes by scanning from top to bottom and left to right, at each step the first unoccupied site can be given a tromino with only four different orientations. Since we have at most $`mn/3`$ such choices, the entropy is at most $`\frac{1}{3}\mathrm{ln}4\approx 0.462`$. Similar considerations give $`\frac{1}{4}\mathrm{ln}8\approx 0.520`$ and $`\frac{1}{4}\mathrm{ln}4\approx 0.347`$ for the $`L`$ and $`T`$ tetromino respectively. Obviously the gap between our upper and lower bounds is still quite large.
For dominoes or ‘dimers,’ the entropy is known to be $`G/\pi =0.29156\mathrm{}`$ where $`G`$ is Catalan’s constant. It is tempting to think that other polyominoes might have exact solutions; one related model which does is the covering of the triangular lattice with triangular trimers. However, while the general problem of covering an arbitrary graph with dimers can be solved in polynomial time, covering it with triangles is NP-complete, so generalized versions of the tromino problem are probably hard.
Acknowledgements. I am grateful to Mark Newman, Lauren Ancel, Michael Lachmann and Aaron Meyerowitz for helpful conversations, and Molly Rose and Spootie the Cat for warmth and friendship.
Note added. Meyerowitz has calculated the generating function for the $`T`$ tetromino in rectangles of width 8.
# X-ray Spectroscopy of the Nearby, Classical T Tauri Star TW Hya
## 1 Introduction
A primary result of X-ray surveys of star formation regions by the Einstein and ROSAT X-ray satellite observatories is the association of strong X-ray emission with solar-mass, pre-main sequence (PMS) stars at various stages of evolution (e.g., Montmerle et al. 1993; Feigelson 1996; and references therein). This result, when viewed in the context of recent theoretical predictions concerning the potentially profound effects of X-rays from young stars on the physics and chemistry of their circumstellar, protoplanetary disks (e.g., Glassgold, Najita, & Igea 1997; Igea & Glassgold 1999), makes clear that knowledge of the detailed X-ray spectra of pre-main sequence stars is central to an understanding of the early evolution of the Sun and solar system. Low-mass PMS stars in Taurus-Auriga, Chamaeleon, Lupus, and Ophiuchus are intrinsically 2–4 orders of magnitude more X-ray luminous than the Sun; however, as a result of the large ($`\sim 150`$ pc) distances to these “local” regions of active star formation and, in many cases, the large X-ray absorbing column depths characteristic of the dark clouds in which the PMS stars reside, these stars’ X-ray fluxes rarely exceed $`10^{-12}`$ ergs/cm<sup>2</sup>/s except during strong flares. Thus, it is difficult to perform X-ray spectroscopy of individual pre-main sequence stars that are located in well-studied regions of star formation.
As a consequence of these limitations, the X-ray emission mechanisms of PMS stars remain poorly understood. However, the available evidence suggests that coronal activity is the source of X-rays from low-mass PMS stars. Until quite recently, most of the evidence was indirect and based on, e.g., the observed continuum of X-ray activities from pre-main sequence through early main sequence evolution and the generally large rotation rates and X-ray-flaring behavior of T Tauri stars (Kastner et al. 1997 \[hereafter KZWF\]; Skinner et al. 1997; and references therein). These observations support a scaled-up solar dynamo model underlying the X-ray emission from low-mass PMS stars (e.g., Carkner et al. 1996). With the advent of the ASCA X-ray satellite observatory, direct spectroscopic evidence also is accumulating in support of a coronal origin for the X-rays (e.g., Carkner et al.; Skinner et al.; Skinner & Walter 1998). Generally, ASCA spectra of PMS stars obtained to date can be well modeled as arising in coronal plasma with a bimodal temperature distribution characterized by temperatures of 3–30 MK. Thus, ASCA observations have provided new impetus to the study of PMS stellar X-ray sources.
To better understand what role X-ray emission may play in the chemical and physical evolution of circumstellar material, and thereby test the predictions of recent theory (Glassgold, Najita, & Igea 1997; Shu et al. 1997), we must obtain high-quality X-ray spectra of individual PMS stars that are directly analogous to the primordial Sun and solar system. Such data can also provide probes of intervening circumstellar material, via the spectral signature of absorption of soft X-rays. With a few exceptions (e.g., Skinner & Walter 1998), however, ASCA investigations have focused either on deeply embedded protostars whose nature and evolutionary status is uncertain, or on highly active, weak-lined TTS. Classical TTS — i.e., TTS surrounded by massive disks that are the likely sites of planet formation — have largely been neglected by ASCA studies of PMS stars, mostly due to the difficulty in identifying suitably X-ray bright candidates.
The classical TTS TW Hya, noteworthy for its large projected distance from the nearest dark cloud (Rucinski & Krautter 1983), is a particularly interesting object in this regard. KZWF established that TW Hya belongs to a small group of T Tauri star (TTS) systems likely comprising a physical association (the TW Hya Association; hereafter TWA). These TTS systems form an exceptionally bright group in X-rays compared with TTS in Taurus and Chamaeleon, suggesting they are quite close to the Earth. Indeed, on the basis of ROSAT X-ray and other data, KZWF concluded that the TWA stars are 10–30 Myr old and thus, based on theoretical PMS evolutionary tracks, the Association lies a mere $`50`$ pc distant. Hipparcos distances to Association members TW Hya and HD 98800 — 57 pc (Wichmann et al. 1998) and 48 pc (Soderblom et al. 1998), respectively — confirm these results, although there is growing evidence that the ages of most of the Association stars lie in the range 5 to 15 Myr (Webb et al. 1998; Weintraub et al. 1999). Hence the TW Hya Association, consisting of a small population of T Tauri stars with no known associated cloud, probably represents the nearest region of recent star formation. Recently, Webb et al. identified another five PMS/TTS systems that are likely members of the TWA.
Though all of the late-type TWA members display various TTS characteristics, such as H$`\alpha `$ emission, Li absorption, and/or IR excesses, TW Hya is the only TWA star to display both strong H$`\alpha `$ (Rucinski & Krautter 1983) and submillimeter continuum emission (Weintraub, Sandell, & Duncan 1989). It is also the only known CO emission line source among the TWA stars (Zuckerman et al. 1995) and, furthermore, displays submillimeter emission lines of <sup>13</sup>CO, HCN, CN, and HCO<sup>+</sup> (KZWF), despite the apparent lack of interstellar molecular gas in its vicinity. KZWF conclude that, given the narrow ($`\sim 0.7`$ km s<sup>-1</sup>) observed molecular emission line widths, TW Hya possesses a dusty molecular disk that is viewed nearly face-on. Considering the $`\sim 20`$ Myr age of TW Hya, this disk may closely resemble the solar nebula at a time during, or shortly after, the formation of Jupiter.
Among the small but growing number of PMS solar analogs known to possess circumstellar, molecular disks, TW Hya is the brightest in X-rays. Furthermore, it is found in relative isolation, with no nearby X-ray sources or intervening dark cloud material, and we likely view its star-disk system nearly pole-on (KZWF). For these reasons, TW Hya presents an excellent subject for X-ray spectroscopy with ASCA, which features X-ray spectral resolution and spectral coverage superior to that of ROSAT but spatial resolution inferior to that of its immediate predecessor. In this paper, we present and analyze ASCA data we acquired for TW Hya and also re-analyze archival ROSAT pointed observations (Hoff et al. 1998). The results of this analysis provide constraints for models of X-ray irradiation of circumstellar molecular gas and, more generally, serve as a starting point for inferences concerning the primordial solar X-ray spectrum at the epoch of Jovian planet formation. The results also shed light on the well-documented but puzzling optical variability of TW Hya.
## 2 Observations
### 2.1 ASCA
TW Hya was observed with the two Solid-state Imaging Spectrometers (SIS) and two Gas Imaging Spectrometers (GIS) aboard ASCA on 1997 June 26-27. Both types of detectors are sensitive from $`\sim 0.5`$ keV to $`\sim 10`$ keV. The two SIS instruments afford superior broad-band sensitivity and spectral resolution ($`E/\mathrm{\Delta }E\sim 30`$ at 6.7 keV), while the two GIS are somewhat more sensitive than the SIS at higher energies. For these observations, 2 of the 4 available CCDs in each SIS detector were active. This configuration allowed us sufficient off-source detector field of view to determine background count rates and spectra, while limiting the reporting of spurious events from off-source CCDs.
Data reduction was accomplished via standard processing software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC). We used standard screening criteria (via the HEASARC XSELECT software) to screen event data on the basis of Earth elevation angles, particle background levels, and event CCD pixel distributions, and to reject hot and flickering CCD pixels, thereby producing lists of valid X-ray events. To maximize signal to noise ratio, we have combined data telemetered at “medium” and “low” bit rates with data telemetered at “high” bit rate. The resulting SIS dataset may, in principle, exhibit slightly degraded spectral resolution relative to a dataset constructed from the “high” bit rate data alone, since data obtained in “medium” and “low” bit rates cannot be corrected for the so-called “echo effect” in the SIS CCDs. We cannot discern any such degradation in the combined dataset relative to that of the “high” bit rate data alone, however, and hence the inclusion of “medium” and “low” bit rate data improves the analysis presented below. Resulting net (post-screening) exposure times were 31.1 ks for each SIS detector and 31.4 ks for each GIS detector, out of the total observation duration of $`94`$ ks ($`1.1`$ days).
Within the $`40^{\prime }`$ diameter GIS and $`11^{\prime }\times 22^{\prime }`$ SIS fields, TW Hya is the only X-ray source detected. We extracted events in $`4^{\prime }`$ (SIS) and $`6^{\prime }`$ (GIS) radii circles centered on the position of the source, to determine count rates and to accumulate spectra (§3.2.1). We also extracted events in off-source regions of each detector, again using extraction radii of $`4^{\prime }`$ (SIS) and $`6^{\prime }`$ (GIS), to obtain representative background count rates and spectra. These background regions were located $`12^{\prime }`$ away from the source regions and hence are very unlikely to be contaminated with source photons, given the modest count rates of TW Hya. Results for count rates are listed in Table 1.
### 2.2 ROSAT
TW Hya was the target of a pointed ROSAT Position Sensitive Proportional Counter (PSPC) observation on 1991 December 12, and is also included in the ROSAT All-Sky Survey (RASS) Bright Source Catalog (Voges et al. 1998). Here we re-analyze the pointed PSPC data, which were summarized in KZWF and subsequently presented in more detail by Hoff et al. (1998). The X-ray sensitivity of the PSPC plus ROSAT X-ray telescope extends from about 0.1 keV to 2.4 keV, with energy resolution of $`40`$% at $`1`$ keV. We find that a total of 2100 counts were obtained for TW Hya in 6.8 ks of detector live time. This count rate, 309$`\pm `$7 counts ks<sup>-1</sup>, is consistent with the count rate listed in Table 2 of Hoff et al. The background count rate, as estimated from off-source regions of the PSPC image, was $`<6`$ counts ks<sup>-1</sup>.
## 3 Results
### 3.1 ROSAT
The PSPC spectrum, binned into 34 energy channels, is displayed in Fig. 1 (see also Fig. 2 of Hoff et al. 1998). The spectrum is double-peaked, with peaks near 0.25 and 0.9 keV, and is similar to PSPC spectra of certain Taurus-Auriga T Tauri stars that are located along lines of sight with relatively little intervening cloud material (Carkner et al. 1996). Our tests of fits of pure continuum emission models, such as blackbody or thermal bremsstrahlung emission, indicate that such models cannot match the observed shape of the PSPC spectrum peak near 0.9 keV. Pure continuum models are also ruled out, somewhat more emphatically, by the ASCA data (§3.2.1). Thus, to fit the PSPC spectrum, we adopt the popular model for X-ray emission from active stars, i.e., Raymond-Smith (R-S) coronal plasma emission suffering intervening absorption due to circumstellar and/or interstellar gas. The R-S models assume collisional ionization equilibrium, and include both emission lines and continuum contributions for a given temperature and set of elemental abundances (Raymond & Smith 1976). We attempted fits of both single-component and two-component R-S models, with solar abundances. After convolving with the ROSAT/PSPC spectral response, we find that the latter model, consisting of a linear combination of R-S models at two different temperatures, provides a superior fit to the PSPC data. Results for the best-fit R-S model parameters are listed in Table 2, and the model (folded through the PSPC spectral response) is overlaid on the data in Fig. 1. The best-fit characteristic temperatures of $`kT_1=0.14`$ keV ($`T_1=1.7`$ MK) and $`kT_2=0.85`$ keV ($`T_2=9.7`$ MK) for the soft and hard emission components, respectively, are similar to results obtained by Hoff et al. (1998).
A light curve extracted from the pointed PSPC data (not shown) indicates that, in contrast to the ASCA observations (§3.2.2), the X-ray flux from TW Hya was nearly constant during the ROSAT observing intervals. However, the RASS count rate (570$`\pm `$40 counts ks<sup>-1</sup>; KZWF) is approximately double that of the pointed observations. Our event extraction radius for the pointed data is sufficiently large to rule out differences in extraction radii as the origin of the different RASS and pointed count rates. Thus it appears that, like typical TTS, TW Hya probably is variable in X-rays.
### 3.2 ASCA
#### 3.2.1 Spectral analysis
We extracted source spectra from the screened SIS0 and SIS1 event lists using uniform bins of width 29 eV for both instruments. Although the count rates of the two instruments differ by $`30`$% due to different detector sensitivities (Table 1), the resulting SIS0 and SIS1 spectra appear very similar. In particular, source spectra obtained with both instruments peak at about 0.95 keV and display plateaus or “shoulders” of emission on either side of this peak; these shoulders are centered at about 0.75 keV and 1.3 keV. Other features are also well reproduced in the corresponding source and background spectra, and tests of independent fits to the SIS0 and SIS1 source spectra yield similar results for the two datasets for model parameters such as emitting region temperature and intervening absorbing column (see below), suggesting the SIS0 and SIS1 data can be combined without compromising spectral response. Hence, to maximize signal-to-noise, we averaged the SIS0 and SIS1 source spectra, and then subtracted the average of the SIS0 and SIS1 background spectra from the resulting source spectrum. A weak high-energy tail that is present in the “raw” source spectra is also present in the background spectra, suggesting it is not intrinsic to the source; this feature largely disappears from the combined SIS spectrum of TW Hya once background is subtracted (Fig. 2). We performed all subsequent spectral analysis on this combined, background-subtracted SIS0 and SIS1 spectrum (which we hereafter refer to as the TW Hya SIS spectrum), using a hybrid spectral response matrix constructed from a weighted average of the SIS0 and SIS1 response matrices for the appropriate regions of the detectors.
Attempts to fit pure continuum models, such as blackbody emission and thermal bremsstrahlung emission, fail to reproduce the relatively sharp peak in the spectrum near 0.95 keV. Significantly improved fits are obtained once line emission is included. In particular, R-S thermal plasma models with characteristic temperatures $`\sim 10`$ MK (and solar metallicities) reproduce the intensity and overall shape of the observed 0.95 keV peak (Fig. 2). We therefore conclude that this feature is dominated by a complex of lines generated by highly ionized species of Fe, as these lines dominate the spectra of $`\sim 10`$ MK R-S models in the spectral region near $`\sim 1`$ keV.
We find that single-component R-S models are not able to simultaneously fit the 0.95 keV peak and the low-energy ($`<0.7`$ keV) tail in the SIS spectrum. This residual tail of emission suggests the presence of an additional, softer component. As the presence of a soft component is also required to fit the PSPC spectrum of TW Hya (§3.1; Hoff et al. 1998), we attempted fits with two-component R-S models. We did not attempt to use the PSPC spectral fitting results to constrain the temperatures of the R-S model fits to the ASCA data, in light of the $`5.5`$ year time interval between the ROSAT and ASCA observations. Over this interval there is the possibility of variations in the physical conditions of the X-ray emitting region(s) and/or in the distribution of absorbing material along the line of sight to the star. If we use the results of PSPC spectral analysis to constrain the absorbing column toward TW Hya at $`N_H=5\times 10^{20}`$ cm<sup>-2</sup>, we find characteristic temperatures of $`2.4`$ MK and $`12`$ MK, respectively, for the soft and hard components in the ASCA spectrum (Table 2; Fig. 2). If, on the other hand, we allow $`N_H`$ to be a free parameter, we find from the SIS spectrum characteristic temperatures that are very similar to those obtained from our independent fit to the PSPC data, albeit with substantially larger emission measures for both soft and hard components (since the best-fit $`N_H`$ is larger than that obtained from the fit to the PSPC spectrum). The fit with $`N_H`$ unconstrained also represents a marginal improvement over that obtained by holding $`N_H`$ fixed at $`5\times 10^{20}`$ cm<sup>-2</sup>, particularly in the $`1`$ keV spectral region. The results of the ASCA/SIS fit with $`N_H`$ unconstrained would suggest that $`N_H`$ was larger by a factor of $`6`$ at the epoch of the ASCA observations.
We followed a similar procedure to perform spectral analysis of GIS data. Source spectra were extracted from the screened GIS2 and GIS3 event lists using uniform bins of width 47 eV. These spectra were combined, and the combined background spectrum was subtracted from this combined source spectrum to produce the final GIS spectrum (Figure 3). This spectrum, like the SIS spectrum, displays a sharp peak at $`1`$ keV. A hybrid instrument response matrix was constructed for purposes of spectral fitting. Fits to the GIS spectrum were constrained by the results of PSPC and SIS model fitting. Specifically, we used these results to obtain two separate fits to the GIS spectrum, with $`N_H`$ fixed at $`5\times 10^{20}`$ cm<sup>-2</sup> and emission temperature fixed at $`kT=0.98`$ keV, and with $`N_H`$ fixed at $`2.9\times 10^{21}`$ cm<sup>-2</sup> and emission temperature fixed at $`kT=0.86`$ keV. We did not include a soft component corresponding to those used in the PSPC and SIS model fitting, in light of the poor soft X-ray response of the GIS detectors. With these constraints, we obtain good fits to the GIS spectrum (Table 2; Fig. 3) at energies $`<2`$ keV, provided we apply an offset of 0.1 keV in the energy scale of the GIS response function. These fits suggest the presence of an additional, harder emission component in the GIS spectrum (Fig. 3), although due to the small count rates at channel energies $`>2`$ keV we are unable to determine a unique emission temperature for this potential, additional component.
#### 3.2.2 Temporal Analysis
A light curve extracted from the SIS and GIS event lists is presented in Fig. 4. The merged GIS and SIS light curve shows some evidence for sudden, short-lived increases in X-ray flux. These “flares” (detected $`0.15`$ and $`0.87`$ days after the start of the observation) were of duration $`1`$ hour, separated by about 0.72 days, and represented a doubling to tripling of the count rate. The increases were observed independently in all four detectors; therefore, we are able to rule out background or instrumental effects as the source of these count rate fluctuations. Spectra extracted from time intervals at high count rate yield fit results for model parameters that are the same to within the uncertainties as those obtained for spectra extracted from the entire observation interval. Hence it appears there were no significant changes in the X-ray spectrum of TW Hya during the periods of increased X-ray flux, although the relatively low signal-to-noise ratios of ASCA spectra obtained during these periods preclude a definitive comparison.
The apparent modest flaring detected in the ASCA observations of TW Hya stands in contrast to the strong, long-lived flares that have been detected during ASCA observations of deeply embedded protostars (Koyama et al. 1996) as well as of magnetically active, weak-lined TTS (Skinner et al. 1997). These flares are characterized by an order of magnitude or more increase in X-ray flux accompanied by a significant hardening of the X-ray spectrum, and have durations of order days. Nevertheless, flaring activity seems a more plausible explanation for the ASCA count rate increases observed for TW Hya than does, e.g., rotational modulation of coronal features (§4.3).
## 4 Discussion
### 4.1 The Soft, “Quiescent” X-ray Spectrum of TW Hya
Comparison of the results of model fits to the ROSAT and ASCA spectra (Table 2) indicates that the characteristic emitting region temperatures and X-ray luminosity of TW Hya changed little between the epochs of the ROSAT and ASCA observations, although they were obtained about 5.5 years apart. These results indicate that the ASCA and ROSAT spectra are generally characteristic of a “quiescent” state of TW Hya, although we apparently detected relatively modest, short-lived flares during the ASCA observation. Evidently, the quiescent X-ray spectrum of TW Hya ($`kT\sim 0.85`$ keV for the “hard” component) is much softer than the quiescent X-ray spectra of embedded protostars ($`kT\sim 6`$ keV; Koyama et al. 1996) or of relatively young TTS ($`kT\sim 3`$ keV for the hard components; Skinner et al. 1997, Skinner & Walter 1998). Given KZWF’s conclusion that TW Hya is unusually “old” for a classical TTS, this comparison suggests a trend toward softer X-ray emission as a TTS settles onto the main sequence (see also Skinner & Walter 1998). This trend is evidently accompanied by only a modest decrease, if any, in X-ray luminosity; indeed, the ratio of X-ray to bolometric luminosity increases throughout a late-type star’s pre-main sequence evolution (KZWF). Further X-ray observations may establish whether or not TW Hya exhibits powerful X-ray flares such as have been observed for many other TTS, whether its apparent short-term variability is periodic, and whether its “quiescent” X-ray spectral distribution varies with time.
### 4.2 X-ray Ionization of the TW Hya Molecular Disk
The ASCA data likely well characterize the X-ray spectrum that is incident on surface or subsurface layers of the circumstellar molecular disk surrounding TW Hya. As described by Glassgold et al. (1997), X-ray radiation incident on a circumstellar molecular disk can partially ionize the disk if the incident radiation is sufficiently intense and hard. The position-dependent ionization rate and resulting electron fractions, though small, may be sufficient to produce stratified disk instabilities that can govern accretion onto the star as well as protoplanetary accretion at the disk midplane (Glassgold et al.). Also, X-ray ionization of molecular gas may play a central role in determining circumstellar chemistry and, in particular, may explain the enhanced abundance of HCO<sup>+</sup> measured at TW Hya (KZWF). Thus, the X-ray emission from TW Hya could be an important factor in determining the chemical composition of protoplanetary gas and may play a central role in the formation of any protoplanets themselves, provided there is sufficient X-ray flux to effectively ionize molecular material in the disk.
The Glassgold et al. (1997) and Igea & Glassgold (1999) models are formulated for hypothetical star-disk systems. It is worthwhile, therefore, to consider how closely these models actually resemble the TW Hya system, which represents in many respects the prototypical example of an X-ray-bright TTS surrounded by a circumstellar, molecular disk. We have demonstrated above that TW Hya displays a relatively soft X-ray spectrum; of the spectra considered in the Glassgold et al. (1997) models, for example, the TW Hya spectrum evidently more closely resembles their assumed “soft” (1 keV) spectrum than their “hard” (5 keV) spectrum. As Glassgold et al. point out, the greater penetration of hard X-rays enhances the ionization rates of the hard model relative to the soft model, for a given X-ray luminosity $`L_x`$. Hence, the ionization rates produced by the X-ray field of TW Hya fall short of those that would be produced by, e.g., a younger T Tauri star or protostar of comparable $`L_x`$ (but harder X-ray spectrum) that is surrounded by a molecular disk of comparable mass. However, the X-ray luminosity of TW Hya is an order of magnitude larger than that assumed for the central star in the Glassgold et al. models. Therefore, as the X-ray ionization rate at a given point in the disk is proportional to $`L_x`$, the ionization structure of the TW Hya disk may more closely resemble that of the Glassgold et al. 5 keV model than their 1 keV model (see Fig. 2 of Glassgold et al.). Moreover, the ASCA data demonstrate that, despite its relatively soft spectrum, $`10`$% of the X-ray luminosity of TW Hya, or $`2\times 10^{29}`$ ergs s<sup>-1</sup>, emerges at energies $`>2`$ keV. Hence there is clearly ample hard X-ray flux to produce significant X-ray ionization of the TW Hya molecular disk, according to the Glassgold et al. and Igea & Glassgold models.
The foregoing comparison suggests that a more detailed analysis, along the lines of that carried out by Igea & Glassgold (1999) but tailored to conditions appropriate for TW Hya, would be of great interest. Such an analysis should take into account the fact that the X-ray spectrum of TW Hya evidently is dominated by emission lines at energies near 1 keV (§3.2.1), and that its molecular disk mass ($`\sim 7\times 10^{28}`$ g, or $`\sim 12`$ Earth masses; KZWF) is a factor $`\sim 100`$ smaller than that of the “canonical” minimum mass solar nebula. The latter constraint implies large ionization rates near the TW Hya disk mid-plane, relative to the ionization rates characteristic of models tailored to the minimum mass solar nebula.
### 4.3 $`N_H`$: Constraints on Circumstellar and Viewing Geometries
The range in absorbing column indicated by the fits to the PSPC and SIS spectra constrains both the distribution of circumstellar material and our view of the star through this material. Assuming that $`N_H`$ is dominated by circumstellar (as opposed to interstellar) extinction at the large neutral H densities implied by the detection of HCN line emission ($`n_{H_2}>10^7`$ cm<sup>-3</sup>; KZWF), the value of $`N_H`$ derived from the SIS data ($`N_H\sim 2.9\times 10^{21}`$ cm<sup>-2</sup>) suggests the column length along the line of sight to the star is $`l<3\times 10^{14}`$ cm ($`l<20`$ AU). The PSPC result for $`N_H`$ would impose a more stringent upper limit of $`l<5\times 10^{13}`$ cm ($`l<3`$ AU). Based on CO line strengths and ratios, KZWF concluded that the projected radius of the molecular line emitting region is $`\sim 60`$ AU. Thus, if the material responsible for the absorption measured in the X-ray spectra of TW Hya resides in the same circumstellar structure as is responsible for the molecular line emission, then this structure is flattened and is viewed from high latitude (if viewed edge-on, the TW Hya disk would have a molecular column density $`N_{H_2}>10^{22}`$ cm<sup>-2</sup>). On the other hand, the presence of a considerable column depth of ionized gas is reflected in the fact that TW Hya displays a huge H$`\alpha `$ equivalent width (e.g., KZWF). Such a large H$`\alpha `$ EW likely results from a large mass loss rate (e.g., Bouvier et al. 1995) and this material most likely is ejected along the poles of the star. Hence the X-ray absorption may be produced in an extended, ionized stellar wind. In either case, the results for $`N_H`$ appear to support the conclusion that the dense, relatively cold ($`T\sim 100`$ K) region of circumstellar molecular gas and dust detected in the millimeter and submillimeter wavelength regime resides in a flattened structure surrounding TW Hya that is viewed nearly pole-on (Zuckerman et al. 1995; KZWF).
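These column-length limits are simple arithmetic, $`l<N_H/n_{H_2}`$; a minimal sketch with the numbers used above:

```python
AU_CM = 1.496e13                      # cm per AU
N_H2_MIN = 1.0e7                      # cm^-3, lower limit from the HCN detection
for label, N_H in [("SIS fit", 2.9e21), ("PSPC fit", 5.0e20)]:
    l_cm = N_H / N_H2_MIN             # upper limit on the absorbing column length
    print(f"{label}: l < {l_cm:.1e} cm = {l_cm / AU_CM:.1f} AU")
# SIS fit:  l < 2.9e+14 cm = 19.4 AU
# PSPC fit: l < 5.0e+13 cm = 3.3 AU
```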
### 4.4 Variability of Optical Photometry and X-rays
Photometry published in Rucinski & Krautter (1983) indicated that TW Hya displays large-amplitude ($`0.5`$ mag) variability over short (1-day) timescales. Herbst & Koret (1988) derived periods of 1.28 days and 1.83 days from their own $`UBV`$ photometric monitoring and from the older dataset of Rucinski & Krautter, respectively. Herbst & Koret attributed the variability to rotational modulation of a hot spot. More recently, Mekkaden (1998) derived a period of 2.196 days from independent photometric monitoring and claimed that this longer period satisfies all available photometry, including the monitoring data analyzed by Herbst & Koret. However, the phases of maximum and minimum light can be seen to shift randomly in the folded light curves for various observing epochs presented by Mekkaden.
Given the apparent inconsistency in the various published results for the period of TW Hya, we examined archival optical ($`V`$ band) photometry of TW Hya obtained during the course of the Hipparcos astrometry mission. This photometry is rather sparsely sampled, but extends over a long temporal baseline encompassing the time period of the various photometric datasets presented by Mekkaden (1998). In addition to revealing $`V`$ fluctuations of $`0.2`$ mag over the course of a day or so, the Hipparcos data well illustrate the larger, longer-term optical variability of TW Hya (Fig. 5a). Indeed, these data include the brightest and faintest $`V`$ magnitudes (minimum $`V=10.58\pm 0.03`$; maximum $`V=11.38\pm 0.04`$), and hence display the largest amplitude of $`V`$ band variation ($`\mathrm{\Delta }V=0.8`$ mag), observed thus far for TW Hya.
We have folded the Hipparcos V photometry according to the various periods cited in the literature (Fig. 5b-d). There is considerable scatter in each of these folded lightcurves, and no regular variation can be discerned. We conclude that none of the published periods well describes the Hipparcos photometry. The Hipparcos data therefore cast doubt on the existence of well-defined, short-period optical variability of TW Hya, such as would be due to rotational modulation.
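Phase-folding itself is a one-line operation; the sketch below (with made-up sample values, not the actual Hipparcos measurements) shows the procedure used to produce such folded light curves:

```python
import numpy as np

def fold(t, mag, period, t0=0.0):
    """Sort a light curve by phase for a trial period (same time units)."""
    phase = ((t - t0) / period) % 1.0
    order = np.argsort(phase)
    return phase[order], mag[order]

# Illustrative values only -- not real data.
t = np.array([100.1, 231.7, 350.3, 477.9, 602.4, 734.0])   # days
v = np.array([11.02, 10.91, 11.21, 11.12, 10.78, 11.30])   # magnitudes
for period in (1.28, 1.83, 2.196):     # periods quoted in the literature
    phase, mag = fold(t, v, period)
    print(period, np.round(phase, 2), mag)
```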
A general problem with the interpretation of the optical variability in terms of rotational modulation is the likely viewing geometry. Both Herbst & Koret (1988) and Mekkaden (1998) contend that the wavelength dependence of the variability amplitude of TW Hya is best interpreted as rotation of a hot (as opposed to cool) spot and, hence, may indicate the presence of an accretion hotspot at the boundary layer between star and disk. There now exists independent evidence for the presence of a circumstellar disk (Weintraub et al. 1989; KZWF). However, there is accumulating evidence that the disk is viewed nearly pole-on (§4.3; KZWF), and the measured $`v\mathrm{sin}i<15`$ km s<sup>-1</sup> (Franchini et al. 1992; Webb 1998, private comm.) suggests the star itself is also viewed from a high latitude. Even if the system is viewed nearly, but not precisely, pole-on, a model that interprets the variation of $`UBV`$ radiation in terms of the rotation of an accretion hotspot requires either that the hotspot is very near the star’s equator or that the star’s rotation axis is not well aligned with the disk axis. Neither idea is favored by contemporary protostellar accretion theory (e.g., Shu et al. 1997).
Accepting for the moment that Herbst & Koret (1988) and Mekkaden (1998) have detected rotational modulation in the light curve of TW Hya, any of the periods derived by these investigations would make TW Hya one of the fastest rotating classical TTS known. Generally, however, its somewhat erratic photometric behavior resembles that of other classical TTS stars, many of which exhibit temporal variations in and disappearances/reappearances of photometric periodicity (Bouvier et al. 1995). Like Herbst & Koret and Mekkaden, Bouvier et al. interpret the variability of the stars in their classical TTS sample in terms of rotational modulation of hot spots. However, the changes, disappearances, and reappearances of periodicity in the photometry of TW Hya and other classical TTS suggest that quasi-random flaring, perhaps caused by short-lived, small-scale accretion events, might better explain their short timescale optical variability. Similar processes may be responsible for the X-ray outbursts seen by ASCA (Fig. 4).
The longer-term optical variability of TW Hya, on the other hand, may be related to changes in X-ray absorbing column (§3.2.1). The fits to the PSPC and SIS spectra indicate that $`N_H`$, and hence $`A_V`$, may have changed by a factor $`\sim 6`$ over the intervening $`\sim 5.5`$ yr. Based on conversion factors from $`A_V`$ to $`N_H`$ listed in Ryter (1996) and Neuhauser, Sterzik, & Schmitt (1995), the results for $`N_H`$ correspond to $`A_V\sim 0.25`$ and $`A_V\sim 1.5`$ for the epochs of the ROSAT and ASCA data, respectively. Given the star’s UV and near-IR excesses, it is difficult to determine reddening via e.g. observed vs. intrinsic $`B-V`$ color (Rucinski & Krautter 1983); hence these results for $`A_V`$ serve as perhaps the most reliable indications yet obtained of the magnitude and variability of visual extinction toward TW Hya.
The large range inferred for $`A_V`$ raises the possibility that the long timescale, large amplitude optical variability of TW Hya is due to variable circumstellar obscuration. In this regard it is intriguing that the Hipparcos data appear to show long-term ($`\sim 600`$ day) periodicity characterized by steep declines in $`V`$ followed by more gradual recoveries. This variation is reminiscent of, though not nearly as dramatic as, the behavior of R CrB stars, which vary by many magnitudes in the optical due to the formation and then gradual dispersal of dust clouds along the line of sight to the stars (Clayton 1996). Observations that the amplitude of variation increases steeply with decreasing wavelength and that the degree of optical polarization changes with time (Mekkaden 1998) are consistent with variable obscuration of the star by dust. Furthermore the difference in inferred $`A_V`$ between the epochs of the ROSAT and ASCA observations (1.2 mag) is on the order of the amplitude of $`V`$ band variation measured by Hipparcos (0.8 mag). On this basis, we would predict that TW Hya was at or near its faintest at the time of the ASCA observations (June 1997; J.D. 2450626), as the $`A_V`$ inferred from the ASCA data is large. Unfortunately we are not aware of the existence of optical photometry that might provide a test of this prediction. However, at the approximate time of the ROSAT pointed observations (Dec. 1991; J.D. 2448602), for which we infer a relatively small $`A_V`$ and therefore we anticipate that TW Hya was quite bright optically, Hipparcos measured only $`V\sim 11.0`$ (Fig. 5a) — roughly the median of the Hipparcos $`V`$ band measurements. Thus, while the large range of $`A_V`$ inferred from the X-ray data is suggestive of the presence of variable obscuration, it is not clear that these values provide an accurate measure of the relative $`V`$ magnitude of TW Hya.
In contrast to the episodic dust formation and dispersal characteristic of R CrB stars, we speculate that the appearance and gradual disappearance of dust clouds along the line of sight to TW Hya might be produced by episodes in which infalling disk material is partially accreted and partially redirected into polar jets or outflows. This explanation seems consistent with the variability of optical continuum and H$`\alpha `$ emission from TW Hya (Mekkaden 1998) as well as its X-ray variability. Further contemporaneous X-ray spectroscopy and optical photometric and polarimetric monitoring would test the hypothesis that the long-term variability of TW Hya and other classical T Tauri stars is due to variable circumstellar extinction.
## 5 Conclusions
We have obtained and analyzed ASCA data and re-analyzed archival ROSAT data for the nearby, isolated classical T Tauri star TW Hya. We find that both ROSAT and ASCA spectra are well fit by a two-component Raymond-Smith thermal plasma model, with characteristic temperatures of $`T\sim 1.7`$ MK and $`T\sim 9.7`$ MK and X-ray luminosity of $`L_x\sim 2\times 10^{30}`$ ergs s<sup>-1</sup>. Prominent line emission, likely arising from highly ionized species of Fe, is apparent in the ASCA spectra at energies $`\sim 1`$ keV, while the GIS spectrum displays a weak hard X-ray excess at energies $`>2`$ keV. Although modest flaring is apparent in the ASCA light curve, the overall similarity of the X-ray spectra inferred from the ASCA and ROSAT observations suggests that the X-ray data obtained to date are characteristic of a “quiescent” TW Hya.
There is evidence that the absorption of the X-ray spectrum of TW Hya may have changed in the $`5.5`$ year interval between ROSAT and ASCA observations. The apparent long-term change in $`N_H`$ and the appearance of short-lived flares in the ASCA data appear to be related to the large amplitude optical variability of TW Hya on both long and short timescales; neither the X-ray nor optical variability appears to be well explained by rotational modulation.
Since TW Hya appears to be a single star of age $`\sim 10`$ Myr that is surrounded by a circumstellar molecular disk (KZWF), the ASCA spectra offer an indication of the X-ray spectrum of the young Sun during the epoch of Jovian planet building. These spectra indicate that the hard X-ray flux from TW Hya ($`\sim 2\times 10^{29}`$ ergs s<sup>-1</sup> at energies $`>2`$ keV) is sufficient to produce substantial ionization of its circumstellar molecular disk.
Further X-ray spectroscopic and monitoring observations of TW Hya might find the star in a strong, longer-lived flaring state, and would better establish whether or not the physical conditions responsible for its “quiescent” spectrum are stable. In this regard, we note that TW Hya will be observed by both the Chandra X-ray Observatory (CXO) and the X-ray Multimirror Mission (XMM) during their first years of operation (as of this writing, CXO was scheduled for launch in summer 1999, while XMM launch was scheduled for January 2000). Chandra and XMM observations of TW Hya should far surpass the ASCA and ROSAT results presented here in terms of spectrum quality, and thus should provide additional insight into the nature of the X-ray emission from this seminal young, Sun-like star. In particular, high-resolution (gratings spectrometer) CXO and XMM spectra of TW Hya will provide critical tests of our conclusions that the X-ray spectrum of TW Hya is coronal in origin and that the features seen near 1 keV in ASCA and ROSAT spectra are formed by a blended complex of highly ionized iron lines. Low-resolution (CCD) spectra resulting from both missions will provide additional measures of the degree and variability of intervening circumstellar absorption.
Primary support for this work was provided by NASA ASCA data analysis grant NAG5–4959 to M.I.T. Research by J.H.K., D.P.H., and N.S.S. at M.I.T. is also supported by the Chandra X-ray Observatory Science Center as part of Smithsonian Astrophysical Observatory contract SVI–61010 under NASA Marshall Space Flight Center. Support for research by D.A.W. and J.H.K. was also provided by NASA Origins of Solar Systems program grant NAG5–8295 to Vanderbilt University.
# The HI and Ionized Gas Disk of the Seyfert Galaxy NGC 1144 = Arp 118: A Violently Interacting Galaxy with Peculiar Kinematics
## 1 Introduction
The Arp 118 system comprises a distorted disk galaxy NGC 1144 and an elliptical galaxy NGC 1143. NGC 1143 is $`\sim 40^{\prime \prime }`$ to the NW of NGC 1144, which corresponds to a linear separation of $`\sim 20`$ kpc (assuming a distance of 110 Mpc to Arp 118). NGC 1144 is also classified as a Seyfert 2 galaxy (Hippelein 1989, hereafter H89). There exists a knotty “ring” or loop that extends from NGC 1144 towards the companion in the north-west, and this is most easily seen in the H$`\alpha `$ (+continuum) image of the galaxy pair shown in Figure 1. However, the structure of the disk of NGC 1144 is more complicated than a simple ring, containing many loops and filaments and a prominent dust-lane which runs from the SE to the NW from a point east of the nucleus. These details can be seen more clearly in the red HST image of Figure 1 (image courtesy of M. Malkan).
The inner region of the disk of NGC 1144 contains extended regions of star formation. Quite apart from the strong H$`\alpha `$ emission from these regions (H89), Joy & Ghigo (1988) showed that a giant H II region complex NW of the nucleus contributed 35$`\%`$ of the galaxy’s substantial $`\lambda `$10$`\mu `$m emission. They estimated that NGC 1144 has a bolometric luminosity of 2.5 $`\times `$ 10<sup>11</sup> L<sub>⊙</sub>, 80$`\%`$ of which is re-radiated in the thermal infrared. Their 6 and 20 cm radio continuum images revealed several radio sources corresponding with blue optical emission knots, suggesting that the radio continuum was originating from recent star formation. Higher sensitivity VLA radio observations by Condon et al. (1990) supported the idea that, in addition to the Seyfert nucleus, there are other regions of bright radio emission within the inner disk, including a strong source to the east, and a spiral-like arc of emission extending over several kiloparsecs.
The early pioneering mapping of the H$`\alpha `$ velocity field of NGC 1144 showed a huge velocity spread across the galaxy of over $`\sim `$1100 km s<sup>-1</sup>, implying the necessity for a mass for the galaxy in excess of 10<sup>12</sup> M<sub>⊙</sub> (H89). Furthermore, the velocity field is highly asymmetric, with a change in velocity of over 700 km s<sup>-1</sup> from the center of NGC 1144 to the south-eastern edge, but only a change of over 400 km s<sup>-1</sup> from the center to the north-western edge of the disk. The very large spread in velocity seen in the ionized gas is also apparent in the molecular CO emission (Gao, Solomon, Downes, & Radford 1997, hereafter GSDR), where CO linewidths of 1100 km s<sup>-1</sup> are seen in their IRAM single-dish (22″ beam) spectra. These authors showed Arp 118 to have a CO luminosity nearly twice that of Arp 220. The CO emission is distributed non-uniformly, and is highly concentrated in the SE quadrant of the ring, with much weaker emission from the NW quadrant. CO maps made with the IRAM interferometer (5.3″ $`\times `$ 2.5″ beam) revealed that the CO emission is not centrally concentrated in the nucleus, but traces the southern arm of an inner ring coincident with the most luminous H II regions.
Hippelein (1989) suggested a dynamical model for a ring-making collision between NGC 1144 and NGC 1143 in which the disk of NGC 1144 is severely distorted by the collision. However, in order to reproduce the large spread in velocity, the mass of NGC 1144 seemed excessive (of the order of 10<sup>12</sup> M). More recently, Lamb, Heran & Gao (1998) have attempted to model the interaction in a more sophisticated manner, including a massive halo, full self-gravity and the inclusion of gas-dynamics. Although their final model, with a total mass (including massive halo) for NGC 1144 of 5 $`\times `$ 10<sup>11</sup> M does produce a large velocity spread (of about 950 km s<sup>-1</sup>) in the disk after the collision, it fails to reproduce the asymmetry in the velocity field of the disk, discussed earlier.
This paper presents VLA HI observations of Arp 118 in an attempt to shed new light on the interpretation of the dynamics, and also presents new H$`\alpha `$ observations, made with the ANU 2.3-m telescope, which reveal the kinematics of the ionized gas in the disk of NGC 1144. We assume throughout this paper a distance to Arp 118 of 110 Mpc, based on an assumed heliocentric velocity for the galaxy of 8800 km s<sup>-1</sup> and a Hubble constant of 80 km s<sup>-1</sup> Mpc<sup>-1</sup>. (This value of the assumed radial velocity is based on an interpretation of the H$`\alpha `$ isovelocity map presented in this paper. As we shall see, emission lines across the nucleus reveal a sudden jump in velocity which is probably related to a peculiar outflow; see text.)
## 2 The Observations
### 2.1 The HI Observations of Arp 118
Arp 118 was observed at the VLA on 17 March 1996 using all 27 telescopes configured in the C-array. The correlator was set in a two IF (intermediate frequency) mode (2AD) with on-line Hanning smoothing and 32 channels per IF. For each IF we used a bandwidth of 6.25 MHz, which provided a frequency separation of 195.3 kHz per channel. This frequency separation corresponds to a velocity separation of 43.8 km s<sup>-1</sup> channel<sup>-1</sup> in the rest frame of the galaxy (using the optical definition of redshift). Arp 118 contains H II regions (in the ring) that have radial velocities spanning a range of more than 1000 km s<sup>-1</sup> (H89). GSDR reported a wide velocity range in CO emission (1100 km s<sup>-1</sup>). In order to achieve velocity coverage consistent with these observed velocity ranges, IF1 was centered at 8269 km s<sup>-1</sup> and IF2 at 9230 km s<sup>-1</sup>. The resulting velocity coverage of our observations was 2275 km s<sup>-1</sup>. A total of 3 hours and 47 minutes were spent on source. Strong winds during the observations were responsible for a loss of 25$`\%`$ of our original allocation of observing time. Flux and phase calibration were performed using the sources 3C48 and 0320+053 (B1950), respectively.
The data were first amplitude and phase calibrated, and bad data due to interference were flagged and ignored by the AIPS software. An image cube was created from the UV data by giving more weight to those baselines that sampled the UV plane more frequently (natural weighting). This provided a synthesized beam with a FWHM of 21.2″ × 17.0″.
Subtraction of the continuum emission in each line map was performed using a standard interpolation procedure based on five continuum maps free from line emission at the ends of the bands. The rms noise per channel was 0.36 mJy beam<sup>-1</sup>. The highest dynamic range achieved in any channel map was 10:1. A single image cube (we will refer to this as the combined cube) was made by combining IF1 and IF2.
We determined the HI line integral profiles, or moments, from the spectral data cube. The zeroth moment corresponds to the integrated intensity over velocity and the first moment to the intensity weighted velocity (the second moment–the velocity dispersion map–was uninteresting and was not included here). We employed the following procedure. We smoothed the combined cube with a beam twice that of the synthesized beam. As a result, a smooth cube was created with a resolution of 42.4″ × 34.0″. We applied a signal-to-noise threshold to the smoothed cube, blanking all pixels that fell below a 3$`\sigma `$ cutoff. A new image cube was then created by applying the blanked, smoothed cube as a “mask” to the original full resolution map. The total HI surface density map was then produced from the full resolution, blanked cube. The first-moment (intensity-weighted mean velocity field) was created from the same blanked cube. For those channels where obvious absorption was present, we employed a different procedure. In those cases, we directly fitted to the channel maps showing clear absorption using the AIPS routine IMFIT. The depth of the absorption was then determined from the fit.
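For readers who wish to experiment with this masking scheme, a minimal sketch of the procedure is given below. It is purely illustrative (the actual processing was done within AIPS); the function name, the Gaussian smoothing kernel, and the array layout are our own assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def masked_moments(cube, velocities, sigma_smooth_rms, smooth_pix=2.0):
    """Illustrative masked-moment method.
    cube: (nchan, ny, nx) continuum-subtracted line cube.
    velocities: (nchan,) channel velocities in km/s.
    sigma_smooth_rms: channel noise measured in the *smoothed* cube."""
    # Smooth each channel to roughly twice the synthesized beam.
    smooth = np.array([gaussian_filter(plane, smooth_pix) for plane in cube])
    # Blank everything below 3 sigma in the smoothed cube, then use the
    # result as a mask on the full-resolution data.
    mask = smooth > 3.0 * sigma_smooth_rms
    masked = np.where(mask, cube, 0.0)
    mom0 = masked.sum(axis=0)  # integrated intensity (zeroth moment)
    with np.errstate(invalid="ignore", divide="ignore"):
        mom1 = (masked * velocities[:, None, None]).sum(axis=0) / mom0
    return mom0, mom1  # mom1 is the intensity-weighted velocity field
```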
### 2.2 Optical Spectroscopy
Observations of Arp 118 were made in 1992 with the Double Beam Spectrograph on the 2.3-m ANU telescope at Siding Spring Observatory. Longslit spectra of length 400″ were obtained at a series of position angles across the disk of NGC 1144. The dispersion was 25 km s<sup>-1</sup> pixel<sup>-1</sup> and the slit width was 1.8″, which projects to 2 pixels at the detector. The positioning of the slits will be discussed in a later section. A full description of the spectral reduction and a more complete discussion of these optical data will be presented in a separate paper. In the present paper, we will restrict our discussion to the presentation of the average velocity field of the H$`\alpha `$ emission and its ramifications for the interpretation of the HI observations.
## 3 Integrated Properties of the HI, its Distribution and the Relationship Between the HI and CO Emission
Figure 2 shows the global HI profile of Arp 118, obtained by integrating the HI emission spatially over the galaxy at each velocity. Notice the narrow spread in velocity of the emission, spanning ∼350 km s<sup>-1</sup>, compared with the huge velocity spread seen in the ionized and molecular gas. With the exception of a faint feature seen at 9200 km s<sup>-1</sup>, the global HI profile has about one-third the velocity spread of the CO and H$`\alpha `$ emission, as shown in Figure 3. A majority of the CO emission is at high velocities (in the SE) whereas the HI emission is observed at low velocities (in the NW). Taken together, the total spread in velocity seen in the HI and CO (from 8180-9290 km s<sup>-1</sup>) agrees closely with the spread seen in the ionized gas (H89, also this paper).
Two faint but narrow HI absorption features may be present at velocities centered around 8800 and 9000 km s<sup>-1</sup>. The lower-velocity line is close to the noise-level of the observations, but the second is stronger and is seen at the 3-5$`\sigma `$ level over three channels (see also discussion in $`\mathrm{\S }`$4). Both lines, if real, appear spatially as negative contours against the brightest part of the radio continuum source seen in the galaxy (see $`\mathrm{\S }`$4 and 6.2). There is also fainter emission seen in the higher velocity channels near 9200-9300 km s<sup>-1</sup> (see $`\mathrm{\S }`$4). We will discuss these absorption lines and the possibility that much of the HI profile from NGC 1144 is nullified by a stronger, and broader, absorption line in $`\mathrm{\S }`$6.
Integrated HI properties for NGC 1144 are given in Table 1. Table 1 also includes the integrated HI properties for a dwarf galaxy KUG 0253-003, located $``$8$`\mathrm{}`$ NE of Arp 118 (which we discuss fully in $`\mathrm{\S }`$7). The total HI mass was determined from the spectrum of Figure 2 from the formula:
$$\mathrm{M}_\mathrm{H}=2.356\times 10^5\,\mathrm{D}^2\,\mathrm{F}_\mathrm{H}\;(\mathrm{M}_{\odot }),$$
(1)
where D is the distance in Mpc, and
$$\mathrm{F}_\mathrm{H}=\int \mathrm{S}_\nu \,\mathrm{dV}\;(\mathrm{Jy}\,\mathrm{km}\,\mathrm{s}^{-1}).$$
(2)
Two independent single-dish profiles for NGC 1144 are available from the literature. The integrated flux density F<sub>H</sub> detected by Bushouse (1987) using the NRAO 91-m telescope was 2.83 Jy km s<sup>-1</sup> and by Jeske (1986) using Arecibo was 2.55 Jy km s<sup>-1</sup>. Our value was determined to be 2.40 Jy km s<sup>-1</sup>, or 85$`\%`$ of the total emission quoted by Bushouse and 94$`\%`$ of the total emission quoted by Jeske. The lower value obtained here is consistent with the possibility that the C-array has resolved-out faint extended HI emission which would be detected by the single-dish observations. However, neither single dish spectra cover the velocity range of the possible absorption lines, and our estimate of the total HI mass given in Table 1 does not take into account the possible contamination of the HI emission line profile by a broad, but deep, HI absorption feature.
The total HI mass of Arp 118 based on our C-array observations is M<sub>H</sub> = 7.0 $`\times `$ 10<sup>9</sup> M. This compares with an estimated molecular mass of 1.8 $`\times `$ 10<sup>10</sup> M (GSDR), based on the standard galactic H<sub>2</sub> mass-to-CO luminosity ratio. GSDR, however, noted that the standard conversion factor might be 2-3 times lower in Arp 118 than in the Milky Way, which would therefore correspond to a molecular hydrogen mass of ∼6-9 $`\times `$ 10<sup>9</sup> M.
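As a quick arithmetic check of Eqs. (1) and (2), the quoted HI mass can be reproduced in a few lines. This sketch uses the adopted distance of 110 Mpc and our integrated flux of 2.40 Jy km s<sup>-1</sup>; the small difference from the quoted 7.0 $`\times `$ 10<sup>9</sup> M presumably reflects rounding.

```python
def hi_mass_msun(distance_mpc, flux_jy_kms):
    # Eq. (1): M_H = 2.356e5 * D^2 * F_H, with D in Mpc and F_H in Jy km/s
    return 2.356e5 * distance_mpc**2 * flux_jy_kms

print(f"{hi_mass_msun(110.0, 2.40):.2e}")  # ~6.8e9 solar masses
```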
We also include in Table 1 an estimate of the dynamical mass M<sub>d</sub> of NGC 1144 (see for example Appleton, Charmandaris, & Struck 1996). The value of M<sub>d</sub> was calculated using the formula:
$$\mathrm{M}_\mathrm{d}=\frac{[(\frac{1}{2})\mathrm{\Delta }\mathrm{V}_{1/2}\,\mathrm{cosec}(\mathrm{i})]^2\,\mathrm{R}_{\mathrm{HI}}}{\mathrm{G}}\;(\mathrm{M}_{\odot })$$
(3)
where R<sub>HI</sub> is the radius of the HI disk, $`\mathrm{\Delta }`$V$`_{{\scriptscriptstyle \frac{1}{2}}}`$ is the width of the profile where the flux density is one-half the peak, and i is the inclination of the disk. This formula applies to gas that is in bound circular orbits. Note that the assumption of circular orbits and a “normal” rotation curve may be quite incorrect in Arp 118. Because the spread in velocities of the HI emission is about one-third that of the CO (and H$`\alpha `$), we have estimated $`\mathrm{\Delta }`$V$`_{{\scriptscriptstyle \frac{1}{2}}}`$ from the combined HI and CO profiles, giving a $`\mathrm{\Delta }`$V$`{}_{{\scriptscriptstyle \frac{1}{2}}}{}^{}{}_{}{}^{\mathrm{CO}+\mathrm{HI}}`$ of 1020 km s<sup>-1</sup>. Based on the HI map, we estimate R<sub>HI</sub> to be 10 kpc, and i = 50$`\mathrm{°}`$, based on the ratio of the major to minor axis of the optical outer ring. Hence we obtained a value of M<sub>d</sub> = 1 $`\times `$ 10<sup>12</sup> M.
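The short sketch below, using the stated values of $`\mathrm{\Delta }`$V$`_{1/2}`$, R<sub>HI</sub> and i, is given only as a numerical check of Eq. (3).

```python
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
MSUN = 1.989e30    # kg
KPC = 3.086e19     # m

def dynamical_mass_msun(delta_v_kms, radius_kpc, incl_deg):
    # Eq. (3): circular-orbit mass from the half-power velocity width
    v = 0.5 * delta_v_kms * 1e3 / math.sin(math.radians(incl_deg))  # m/s
    return v**2 * radius_kpc * KPC / G / MSUN

print(f"{dynamical_mass_msun(1020.0, 10.0, 50.0):.1e}")  # ~1e12 solar masses
```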
Figure 4 contains the integrated HI distribution of Arp 118, overlaid on a digitized sky survey (DSS) grey-scale image of Arp 118. The HI emission is distributed non-uniformly throughout the disk of NGC 1144, and is concentrated in the NW quadrant of the ring, with fainter emission to the south.
Figure 5 (see also Figure 6) is an overlay of the integrated HI map with a grey-scale image of the CO knots imaged by GSDR. As mentioned, the high resolution CO data of GSDR show that the CO emission traces the southern arm of the radial ring wave (the approximate location of the ring has been drawn to guide the eye). Further, the knots of CO emission are coincident with luminous H II regions, also concentrated along the southern arm of the ring. Inspection of Figure 5 reveals that the brightest complexes of CO emission do not overlap the HI emission. In Figure 6 we have convolved the CO emission to the same beam size as our VLA data. The CO emission then appears, more strikingly, to “fill in” the “missing” HI emission in the south-eastern half of NGC 1144. Considering Figures 3, 5, and 6, the HI and CO appear “segregated”, both spatially and kinematically. Note, however, that we use the word “segregated” rather loosely, since there is weaker CO emission in the NW quadrant (see next paragraph), and low-level HI emission in the SE. The word is used to point to the fact that the distributions of CO and HI are asymmetrically peaked, CO to the SE and HI to the NW.
Interestingly, there is a striking similarity between the single dish CO profile in the NW quadrant of the ring and our HI profile (see Figure 1 in GSDR for the single dish profile). The CO profile peaks at a velocity of 8250 km s<sup>-1</sup> and the HI profile peaks at 8267 km s<sup>-1</sup>. Both profiles fall off rapidly towards higher velocities.
## 4 HI Kinematics
In Figure 7 we present those channel maps containing detectable emission from 8180 to 8661 km s<sup>-1</sup>. At the rest frame of the galaxy, the channel separation is 43.8 km s<sup>-1</sup>. The emission at the lower velocity channels is mainly concentrated in the northern part of the galaxy in the region associated with the NW quadrant of the outer optical ring seen in Figure 1, but also extends significantly to the NE, outside the optical bounds of the galaxy (V = 8267 km s<sup>-1</sup>). The emission becomes more scattered and moves further south as one proceeds to higher velocities. There is a marginal detection of faint emission to the SE of the main body of the galaxy around 8573 km s<sup>-1</sup> and 8661 km s<sup>-1</sup>. With the exception of faint HI features centered on the nucleus at velocities of 9200 to 9300 km s<sup>-1</sup> (see later), no significant HI emission is observed at velocities in excess of 8661 km s<sup>-1</sup>.
Figure 8 contains the mean velocity field of Arp 118, which appears rather disturbed compared with the optical velocity field (see below). The velocities appear, approximately, to run from lower velocities in the north to higher velocities in the south. Except for contours at 8355 km s<sup>-1</sup>, there is little emission from the SE disk of NGC 1144. The velocity isocontours generally have a bent and/or rather distorted S-shaped appearance. In the regions of common coverage between the optical and HI velocity fields (see later), there is approximate agreement. However, the fact that much of the HI disk of NGC 1144 is missing in the SE makes the HI velocity field very difficult to interpret.
Figure 9 contains another set of channel maps showing the faint absorption features seen against the nuclear radio disk. The absorption appears in 5 channels, at the same spatial location, but is strongest at a velocity of 9011 km s<sup>-1</sup>. Figure 9 also contains a faint emission feature seen near the nuclear region. This emission appears in 5 channels, most strongly in the 9318 and 9274 km s<sup>-1</sup> channels.
In order to show the absorption features more clearly, we have constructed, in Figure 10, a spectrum through the unblanked data cube centered on the galaxy’s nucleus. This has the effect of integrating only that emission or absorption associated with the nuclear source. This spectrum differs in nature from the integrated (global) spectrum of Figure 2 because it concentrates on the nuclear regions only, and is determined from the cube without any flux-threshold blanking. Figure 10 reveals faint nuclear absorption and emission. Interestingly, Figure 10 suggests that the two faint absorption features, which we isolated in Figure 2, may be part of a faint, broader absorption complex. The absorption features at their deepest are detected at the 4$`\sigma `$ (8791 km s<sup>-1</sup>) and 5$`\sigma `$ (9011 km s<sup>-1</sup>) level. Figure 10 also contains the faint emission features at velocities of 9200 to 9300 km s<sup>-1</sup>, as shown in Figure 2. These emission features are at a similar level of significance as the absorption features. In $`\mathrm{\S }`$6.2 we will return to the astrophysical implications of the broad absorption features and their possible effect on the total HI profile.
## 5 The Kinematics of the Ionized Disk
In Figure 11, we show the mean velocity field of the H$`\alpha `$ emitting gas based on the observations made with the ANU 2.3-m telescope. A more complete discussion of these optical data will be presented in a separate article (McCain and Freeman, in preparation). In this paper we restrict ourselves to a discussion of the velocity field of the galaxy. In the upper right of Figure 11 we show the coverage of the long-slits used to create the velocity map. The slits ranged in position angle from 318$`\mathrm{°}`$ to 273$`\mathrm{°}`$, and each slit was positioned on the nucleus of the elliptical companion NGC 1143 during each observation. The one exception was a slit oriented at 309$`\mathrm{°}`$ which was positioned to pass through the nucleus of NGC 1144, but was not centered on NGC 1143. Ionized gas emission was detected over the entire face of NGC 1144. The velocity field in the upper left of Figure 11 shows the entire H$`\alpha `$ emitting disk, and the inset to the lower right shows the details of the disk closer to the center. We overlay the contour map of the velocity field with a grey-scale representation of the $`\lambda `$20cm radio continuum emission from Condon et al. (1990), and we will discuss the significance of this later.
The velocity field of NGC 1144, derived from the ANU 2.3-m spectra, is very similar in overall appearance to the earlier work of H89. The gross characteristics of the system are large-scale rotation over a velocity range of 1100 km s<sup>-1</sup>. The ANU 2.3-m data show a sudden velocity discontinuity in the vicinity of the nucleus, which in the inset shows a drop in radial velocity of about 600 km s<sup>-1</sup>, from a value in the disk of around 9000 km s<sup>-1</sup> to a value around 8400 km s<sup>-1</sup>. This region is almost unresolved at the level of the seeing (1″), and so the velocity discontinuity is most probably associated with the nucleus. The cross shows the nominal position of the optical nucleus, but this is not well determined from the spectra. (Note: in overlaying the map with the radio image we have assumed that the velocity discontinuity corresponds to the position of the nuclear source.)
Putting aside for the moment the question of the origin of the velocity jump at the center of the galaxy, we now address the question of the value of the systemic velocity of the galaxy. Based on the appearance of the isovelocity contours, it would be normal to consider the systemic velocity of a galaxy to be the velocity associated with the contour which passes through the nucleus. Ignoring the velocity discontinuity, the most natural contour which would pass close to the nucleus, and which runs parallel to the minor axis of the galaxy, would be the velocity contour at 9050$`\pm `$50 km s<sup>-1</sup>. It is interesting to note that the ANU 2.3-m spectra show CaII H and K absorption lines, presumably from the underlying stellar population in the galaxy, which yield a velocity of 9000$`\pm `$100 km s<sup>-1</sup> for the nucleus, in approximate agreement with the global velocity field of the ionized gas. It is clear that the velocity discontinuity is a major deviation away from this value for the systemic velocity, since the emission lines in the nucleus yield a velocity of 8400 km s<sup>-1</sup>. Does this mean that the nucleus is moving relative to the gas disk, or simply that there are peculiar motions in the ionized gas in and around the core? In the next section, we argue for the latter, based on the HI absorption results.
## 6 What is the Origin of the Asymmetric HI Distribution in NGC 1144?
One of the surprising results of our HI observations of NGC 1144 is the rather dramatic asymmetry in the distribution of the gas in the disk. Our VLA observations show that most of the gas detected is centered on the NW half of the galaxy, whereas ionized and molecular gas are spread over the whole disk, but are concentrated mainly in the southeastern half of the galaxy. This apparent segregation is also reflected in the kinematics. HI from the disk of NGC 1144 is seen exclusively at velocities below 8600 km s<sup>-1</sup>, and little detectable HI emission is seen over a wide range of velocities represented by a large part of the ionized and molecular disk. We address below two possible explanations for the missing HI. The first is the possibility that the segregation is a result of the large-scale conversion of HI into molecules via strong shocks in the disk of this violently colliding galaxy. The second, and perhaps more plausible explanation, is that HI is present over the entire velocity range, but that very powerful, and extremely broad HI absorption lines are present in the nuclear regions that have “nullified” the entire emission-line profile of the galaxy over a 600 km s<sup>-1</sup> interval. The evidence for the latter is strengthened by the discovery of a rapid change in velocity in the ionized gas over the same velocity range in the nucleus, as we discussed in the previous section.
### 6.1 Conversion of atomic to molecular gas?
If we take at face value the asymmetry in the HI distribution in the disk of NGC 1144, we can attempt to determine quantitatively the relative importance of the atomic and molecular hydrogen content across the disk of NGC 1144 by calculating the molecular-to-atomic gas mass ratio, M$`_{\mathrm{H}_2}`$/M<sub>HI</sub>, in the NW and SE regions. We have converted the observed CO emission from GSDR into a mass of molecular gas (using the standard galactic conversion from CO to H<sub>2</sub>), and calculated the HI mass, separately, for the NW and SE regions. The molecular-to-atomic gas mass ratio is 0.77 in the NW section of the ring, consistent with the observed ratios in normal spiral galaxies (Casoli et al. 1998). The ratio is very large in the SE section, 17.6, consistent with ratios measured in other interacting, infrared luminous galaxies (Mirabel & Sanders 1989). If this scenario is correct, the gas mass ratios suggest that the dominant state of the interstellar medium proceeds from mainly atomic in the NW to molecular in the SE. In the SE region less than 6$`\%`$ of the interstellar gas is in atomic form. Considering that the total (HI+H<sub>2</sub>) mass of hydrogen remains approximately equal in both regions (∼1 and 2 $`\times `$ 10<sup>10</sup> M in the NW and SE, respectively), this suggests the possibility that there may be a large scale conversion of HI to H<sub>2</sub> in the southern region of NGC 1144.
One possible way to enhance the molecular gas mass fraction is by compressing the disk of NGC 1144. Modest compression and/or shocks in the interstellar medium can lead to the conversion of atomic to molecular gas (Elmegreen 1993; Honma, Sofue, & Arimoto 1995). The salient features of Elmegreen’s model are that the HI-H<sub>2</sub> gas phase transition depends sensitively on the pressure in the interstellar medium and the radiation field: H<sub>2</sub> molecules are formed on the surfaces of dust grains, but can be destroyed by UV photons. The results of his models imply that large regions within galaxies can spontaneously convert atomic into molecular gas following an interaction, a tidal encounter that has led to accretion, or an increase in the gas surface density.
While it is plausible that such a process is occurring in the disk of NGC 1144, there exists a much more likely explanation for the “missing” HI, namely the possibility of HI absorption.
### 6.2 HI absorption
We will begin this section by discussing the HI absorption line seen at approximately 9000 km s<sup>-1</sup> in the channel maps, as shown in Figure 9. Figure 12 contains a contour map of the continuum emission, overlaid upon a grey scale representation of the integrated HI emission. The absorption appears nearly coincident with the peak continuum emission in NGC 1144. The total flux we detected from the continuum source associated with NGC 1144 was 139.5 mJy, within a synthesized beam of 21.2″ × 17.0″, which is in excellent agreement with the 20 cm flux of 136 mJy from Condon et al. (1990) in an 18″ × 18″ beam. However, higher resolution observations of the continuum emission by Condon et al. reveal a compact nucleus and two extra-nuclear emission regions, as well as a faint arc within the inner disk (see greyscale overlay in upper right of Figure 11). In determining the HI column density responsible for the faint absorption line at 9000 km s<sup>-1</sup> it is necessary to assume a plausible location for the HI absorbers, i.e., which of the continuum emitting regions, seen in the higher resolution continuum image, the absorbers cover.
For an HI cloud seen in absorption (F<sub>abs</sub>) against a continuum source (F<sub>con</sub>), the optical depth, $`\tau `$, of the cloud is given by (e.g. Mirabel 1982):
$$\tau =\mathrm{ln}(1+\frac{\mathrm{F}_{\mathrm{abs}}}{\mathrm{F}_{\mathrm{con}}})$$
(4)
and hence the hydrogen column density N<sub>HI</sub> is given by:
$$\mathrm{N}_{\mathrm{HI}}=1.823\times 10^{20}\,\mathrm{T}_{\mathrm{S100}}\int \tau \,\mathrm{dv}\;\mathrm{atoms}\,\mathrm{cm}^{-2}$$
(5)
where T<sub>S100</sub> is the spin temperature in units of 100 K.
As a working hypothesis, we will assume that the HI seen in absorption lies in the disk of NGC 1144, and exhibits similar kinematics to the ionized gas. It can be seen from the 20 cm map of Condon et al. (Figure 11) that the brightest radio emission regions are the unresolved nucleus (25.9 mJy) and the eastern extra-nuclear region (22.9 mJy; angular size 2″ × 1″). The third extra-nuclear region to the west of the nucleus is significantly fainter (9.4 mJy), and probably does not contribute to any absorption profile. Of the strong emission regions, only the eastern radio source lies in projection against the velocity field at 9000 km s<sup>-1</sup>. We will adopt this source as a plausible continuum feature against which we might see the higher velocity HI absorption feature of Figure 2. (Had we chosen the nuclear source, rather than the eastern extra-nuclear source, the calculated optical depth would be similar since its flux is almost identical.) For an absorption feature seen over one channel (43.8 km s<sup>-1</sup>) with a depth F<sub>abs</sub> = 2 mJy, and F<sub>con</sub> = 22.9 mJy, the neutral hydrogen column density needed to create the absorption will be N<sub>HI</sub> = 6.7 $`\times `$ 10<sup>20</sup> T<sub>S100</sub> atoms cm<sup>-2</sup> channel<sup>-1</sup>. Since the feature is seen over three channels, the total column density is similar to that seen commonly in the disks of galaxies in emission, and is ∼2 $`\times `$ 10<sup>21</sup> T<sub>S100</sub> atoms cm<sup>-2</sup>. A single cloud covering the area of the eastern radio emitting region, and with the above column density, would have a mass of ∼8 $`\times `$ 10<sup>6</sup> M, the mass of a typical giant molecular cloud (GMC). Hence our basic hypothesis that the absorbing cloud lies in the disk of NGC 1144 is consistent with the observations.
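The numbers above follow directly from Eqs. (4) and (5); the sketch below reproduces them, with the scale of 1″ ≈ 533 pc at 110 Mpc as the only extra input.

```python
import math

def tau(f_abs_mjy, f_con_mjy):
    # Eq. (4): optical depth of the absorbing cloud
    return math.log(1.0 + f_abs_mjy / f_con_mjy)

def n_hi(tau_dv_kms, t_spin_100=1.0):
    # Eq. (5): column density; tau_dv_kms is the integral of tau dv in km/s
    return 1.823e20 * t_spin_100 * tau_dv_kms

# Narrow absorber against the eastern source: 2 mJy deep against 22.9 mJy,
# seen over three 43.8 km/s channels.
t = tau(2.0, 22.9)
print(f"{n_hi(t * 43.8):.2e}")      # ~6.7e20 cm^-2 per channel
print(f"{n_hi(t * 3 * 43.8):.2e}")  # ~2e21 cm^-2 in total

# Mass of a single cloud covering the 2" x 1" eastern source (1" ~ 533 pc):
PC_CM, M_H_G, MSUN_G = 3.086e18, 1.673e-24, 1.989e33
area_cm2 = (2 * 533 * PC_CM) * (1 * 533 * PC_CM)
print(f"{n_hi(t * 3 * 43.8) * area_cm2 * M_H_G / MSUN_G:.1e}")  # ~1e7 Msun
```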
The next question to ask is why we don’t see a large population of such clouds in emission, as we apparently do in the NW part of the disk? (We remind the reader that we do see some faint emission centered on the nucleus over a few channels in the velocity range 9200-9300 km s<sup>-1</sup>.) Here we are confronted with two possible alternatives. The first hypothesis - which we call the “minimal absorption” hypothesis - is that the clouds we see in absorption have a small filling factor compared with the larger VLA beam (21″ × 17″). How many such clouds could be hidden within the C-array VLA beam before we would see the clouds in emission rather than absorption? The 3$`\sigma `$ noise per channel was found to be approximately 1 mJy beam<sup>-1</sup>, and so it would be possible to hide 1.3 $`\times `$ 10<sup>8</sup> M of HI within one beam and still fail to detect it. If we assume that such emission would be in the form of the clouds we detect in absorption, this implies an upper limit of 16 absorber clouds per beam that could go undetected in emission. As a point of reference, in the NW disk of NGC 1144, the typical column density of gas seen in emission would imply approximately 200 similar clouds per beam. This would argue for a depletion in the total HI mass in that region of the disk, and would be consistent with the scenario presented in $`\mathrm{\S }`$6.1, in which some process is destroying HI clouds in the SE part of the disk of NGC 1144.
A second hypothesis - which we call the “extreme-absorption” hypothesis - is that the entire HI profile from 8400 to 9000 km s<sup>-1</sup> is affected by deep absorption against the nucleus, which nullifies any emission seen in the larger VLA C-array HI beam. We note that the HI emission and absorption in the nucleus is spotty and not “perfectly” balanced. Such a scenario is strongly hinted at in Figure 10, which shows the possibility of a broad absorption complex in the nuclear region.
Strong, broad (up to 700 km s<sup>-1</sup> wide) HI absorption against Seyfert nuclei is not uncommon (e.g. Dickey 1982, 1986; Mirabel 1982, 1983). Van Gorkom et al. (1989) detected HI in absorption (up to ∼600 km s<sup>-1</sup> wide) against the nuclei of radio elliptical galaxies. These systems have absorption features which are consistent with both infalling and outflowing HI gas. Further, IC 5063 has a 700 km s<sup>-1</sup> HI absorption feature, with a depth of 10-15 mJy, seen against its Seyfert 2 nucleus (Morganti, Oosterloo, & Tsvetanov 1998). The absorption is seen blueward of the systemic velocity of IC 5063, indicating a blueshifted outflow of HI. The interpretation of these various observations is that the HI is likely to be in the form of an inflowing/outflowing stream, or complex of clouds, seen in projection against a background continuum source (the AGN). If a high column-density stream of HI were mixed, for example, with the ionized gas associated with the velocity discontinuity seen in the nucleus (Figure 11), then such absorption would have a profound effect on the HI emission spectrum.
To explore the feasibility of the extreme-absorption hypothesis, we will assume that, in the absence of absorption, NGC 1144 would have a normal double-horned HI profile with a velocity width similar to the observed spread in the ionized gas velocities (1100 km s<sup>-1</sup>) and a single-channel flux of 10 mJy over that mid-range of the spectrum. We can then ask what column density of HI would be required to be seen throughout the range of 8400-9000 km s<sup>-1</sup> (the velocity jump seen near the nucleus in the ionized gas) to reduce the putative HI emission line profile to zero flux? If F<sub>abs</sub> = 10 mJy and the continuum source is the nucleus (F<sub>con</sub> = 25.9 mJy) then we derive N<sub>HI</sub> = 4.2 $`\times `$ 10<sup>22</sup> T<sub>S100</sub> atoms cm<sup>-2</sup> over the full 600 km s<sup>-1</sup> range seen at the velocity discontinuity. This must be considered an upper limit to the true column density since the radio source is not resolved in the observations made by Condon et al. (1990). If we assume it is just resolved, the implied cloud mass of the HI absorbers would be 1.6 $`\times `$ 10<sup>8</sup> M, assuming that the area of the nuclear region is 1.65 square arcseconds, the approximate area of the beam from Condon et al. (1990).
The advantage of the extreme-absorber hypothesis over the minimal-absorber picture, is that it can readily explain the lack of HI in emission from the mid- to SE disk of NGC 1144, and indeed from the HI data-cube over the entire range of velocities in excess of 8661 km s<sup>-1</sup>. It also is very testable. Higher resolution HI observations should separate any, so far only hypothetical, deep absorption in the nucleus from extended HI emission in the disk.
## 7 HI Distribution and Kinematics of an “Ultraviolet-Excess” Dwarf Companion in the Arp 118 Group = KUG 0253-003
We have discovered HI emission from a position (∼8′ to the NE of Arp 118 = 256 kpc) corresponding to the same spatial location as a galaxy found by Takase & Miyauchi (1988) in a survey of ultraviolet-excess galaxies (KUG 0253-003 or PGC 011066). NED (the NASA/IPAC Extragalactic Database, operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration) lists this galaxy as a “spiral” with a blue magnitude of 16.5, with major & minor axes of 0.3′ × 0.3′, and no cataloged redshift. There are no obvious spiral arms in the DSS image, and it appears as a high-surface brightness, compact galaxy.
Figure 13 contains the global HI profile of the companion, which is narrow and single peaked, with a $`\mathrm{\Delta }`$V$`_{1/2}`$ of 51 km s<sup>-1</sup>. The systemic velocity (V<sub>HI</sub>) is 8749 km s<sup>-1</sup>, which gives a velocity difference of ∼460 km s<sup>-1</sup> between the distant, uv-excess dwarf companion and Arp 118. Assuming it to be part of the extended Arp 118 group, the HI mass of KUG 0253-003 is 3.4 $`\times `$ 10<sup>9</sup> M, roughly half the value of Arp 118 (see Table 1). This “dwarf” galaxy therefore appears to be somewhat rich in atomic hydrogen. In Figure 14 we present the integrated HI image of the companion, overlaid with a DSS grey-scale image. The HI emission is smooth, with a peak that appears to be somewhat offset from the bright centrally concentrated light distribution of KUG 0253-003. The HI emission appears to be elongated to the SW (towards Arp 118).
Figure 15 contains the 3 channel maps with HI emission from the companion. Figure 16 shows the mean velocity field of KUG 0253-003, which, though we barely resolve the galaxy, hints at rotation, with a major kinematic axis that runs approximately SE to the NW.
One might speculate that the dwarf is undergoing a major episode of star formation, perhaps as a result of an interaction with Arp 118. The difference in velocity (460 km s<sup>-1</sup>) and projected separation suggest a time-scale of 5 $`\times `$ 10<sup>8</sup> years to get to its present location if it did pass close to Arp 118 in the past. This is the typical crossing-time for a small group and does not seem unreasonable.
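The quoted time-scale is simple arithmetic; an illustrative one-line check:

```python
KPC_KM, YEAR_S = 3.086e16, 3.156e7   # km per kpc, seconds per year
print(f"{256.0 * KPC_KM / 460.0 / YEAR_S:.1e} yr")  # ~5e8 years
```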
## 8 Conclusions
A mapping of the neutral and ionized gas in the Arp 118 system has revealed the following:
1) The HI emission, in addition to being highly disturbed, is distributed non-uniformly throughout the disk of NGC 1144, being highly concentrated in the NW region of the ring away from the nuclear star formation complexes and the Seyfert 2 nucleus. This distribution is anti-correlated with the most powerful CO emission which lies in the SE part of the disk. Ionized gas is seen distributed throughout the entire disk of NGC 1144. No HI or H$`\alpha `$ emission is seen associated with the elliptical companion NGC 1143.
2) Unlike the huge spread in the ionized gas velocities in the disk of NGC 1144 (1100 km s<sup>-1</sup>), strong HI emission is observed only over one-third of this interval and is consistent with HI emission associated with the NW part of the ionized disk. HI emission over a 600 km s<sup>-1</sup> interval (which covers most of the mid-disk and SE half of the ionized disk) is missing from the velocity channels.
3) Observations of the ionized gas component of the disk of NGC 1144 show that, in addition to the large-scale rotation of the galaxy over the range of 1100 km s<sup>-1</sup>, there exists a sudden jump in velocity of the ionized gas in the nuclear regions of the galaxy of approximately 600 km s<sup>-1</sup>. If we adopt the systemic velocity of the H$`\alpha `$ disk to be 9000 km s<sup>-1</sup> determined from the large-scale velocity field away from the nucleus, then this implies blue-shifted gas streaming motions within the nucleus. Alternatively, the observations may imply that the nucleus itself is moving at a velocity of 600 km s<sup>-1</sup> with respect to the host disk. However, CaII H & K absorption lines support a nuclear velocity of around 9000 km s<sup>-1</sup> suggesting that it is the ionized gas that has the peculiar motion in the nucleus.
4) Very weak HI absorption is seen against the radio continuum sources (including an off-center nucleus) which are concentrated in the SE section of the ring. We interpret the absorption lines in two ways: a) the “minimal-absorption” hypothesis, in which the weak absorption lines are assumed to be the only absorption present, and b) the “extreme-absorption” hypothesis, in which it is assumed that there is strong and deep nuclear HI absorption over the same velocity range as the H$`\alpha `$ velocity discontinuity (600 km s<sup>-1</sup>). Hypothesis a) results in an explanation for the missing HI in the mid-to-SE disk of NGC 1144 as a depletion of HI clouds by at least a factor of 10 over the NW disk. However, b) offers the advantage that it explains the missing HI in the disk of NGC 1144 as being due to a high column-density stream of HI seen in absorption against the nucleus, negating the HI emission in the larger-scale disk. If true, b) suggests a streaming of high column-density HI gas (N<sub>H</sub> $`>`$ 4 $`\times `$ 10<sup>22</sup> atoms cm<sup>-2</sup>) is mixed with ionized gas and is emanating from the nucleus at up to 600 km s<sup>-1</sup>. The total HI mass of such a stream could be as high as 1.8 $`\times `$ 10<sup>8</sup> M in the neutral hydrogen component alone.
5) We detect an HI-rich dwarf galaxy 8′ to the NE of Arp 118. The dwarf galaxy, KUG 0253-003, was detected in a survey of “uv excess” galaxies, and shows a disturbed HI distribution with an extension pointing towards Arp 118. It is possible that the galaxy has interacted with the Arp 118 system in the past (500 Myr ago, based on its projected separation and velocity difference).
It is clear that high signal-to-noise HI observations of much higher spatial resolution are required to determine the true nature of the absorption spectrum of NGC 1144. The source that is naturally implicated is the nuclear source, which we know has a flux of about 25 mJy at $`\lambda `$20 cm. Although observations on the scale of several arcseconds will be required to determine whether the disk of NGC 1144 is truly asymmetric in its HI content, the study of the postulated high-column density stream will require sub-arcsecond observations of the absorption lines over the nuclear source. If we are correct in our extreme-absorber hypothesis, the absorption lines could be as deep as 10 mJy (against a source of 25 mJy) over a velocity range from 8400 to 9000 km s<sup>-1</sup>.
Acknowledgements
The authors would like to thank Yu Gao (U. of Toronto) for helpful exchanges of information about his CO observations of Arp 118. Particular thanks are due to Evan Skillman (U. of Minnesota) whose insight has been of crucial importance. We also thank Matt Malkan for providing us with the HST image of NGC 1144. This work is supported by NSF grant AST-9319596.
# The equation of state of a Bose gas: some analytical results
Vladan Celebonovic
Institute of Physics, Pregrevica 118, 11080 Zemun, Yugoslavia
vladan@phy.bg.ac.yu
vcelebonovic@sezampro.yu
Abstract: This is a short review of the main results on the equation of state of a degenerate, non-relativistic Bose gas. Some known results are expressed in a new form, and their possible applications in astrophysics and solid state physics are pointed out.
Introduction
The aim of this letter is to review briefly some results concerning the equation of state (EOS) of a non-relativistic, degenerate Bose gas. The study to be reported is a logical continuation of previous work concerning the Fermi-Dirac integrals and the EOS of a Fermi gas (Celebonovic, 1998a,b). Apart from being useful pedagogically, this review contains new expressions of some existing results.
Calculations
It can be shown (for example Landau and Lifchitz, 1976) that the number density of a Bose gas is given by the following integral:
$$n=\frac{N}{V}=\frac{gm^{3/2}}{2^{1/2}\pi ^2\hbar ^3}\int _0^{\infty }\frac{\epsilon ^{1/2}\,d\epsilon }{\exp [(\epsilon -\mu )/T]-1}$$
(1)
All the symbols have their usual meanings: $`s`$ is the particle spin, $`g=2s+1`$, and Boltzmann’s constant has been set equal to 1.
Introducing the change of variables $`\frac{ϵ}{T}=z`$, it follows from eq.(1) that
$$n=\frac{g(mT)^{3/2}}{2^{1/2}\pi ^2\hbar ^3}\int _0^{\infty }\frac{\sqrt{z}\,dz}{\exp [z-\mu /T]-1}$$
(2)
The function under the integral can be transformed as follows
$$\frac{\sqrt{z}}{\exp \left[z-\frac{\mu }{T}\right]-1}=\frac{\sqrt{z}}{\exp \left[z-\frac{\mu }{T}\right]\left(1-\exp \left[\frac{\mu }{T}-z\right]\right)}$$
(3)
Developing eq.(3) into a series, one gets the expression for the integral in eq.(2):
$$I_B=\int _0^{\infty }\frac{\sqrt{z}\,dz}{\exp \left[z-\frac{\mu }{T}\right]-1}=\sum _{l=0}^{\infty }\int _0^{\infty }\sqrt{z}\,\exp \left[(l+1)\left(\frac{\mu }{T}-z\right)\right]dz$$
(4)
which after some algebra can be transformed into the following final form
$$I_B=\sum _{l=0}^{\infty }\exp \left[(l+1)\frac{\mu }{T}\right]\int _0^{\infty }\sqrt{z}\,\exp \left[-(l+1)z\right]dz=\sum _{l=0}^{\infty }\exp \left[(l+1)\frac{\mu }{T}\right]\frac{\sqrt{\pi }}{2}(l+1)^{-3/2}$$
(5)
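The equality of the series in eq.(5) with the original integral is easy to verify numerically. The short sketch below is our own illustration (using scipy for the quadrature); it compares the two sides for a few negative values of $`\mu /T`$, for which both expressions converge.

```python
import math
from scipy.integrate import quad

def I_B_series(mu_over_T, lmax=500):
    # Right-hand side of eq.(5)
    return sum(math.exp((l + 1) * mu_over_T) * math.sqrt(math.pi) / 2.0
               / (l + 1) ** 1.5 for l in range(lmax))

def I_B_integral(mu_over_T):
    # Left-hand side of eq.(4); mu < 0 keeps the integrand finite at z = 0
    f = lambda z: math.sqrt(z) / (math.exp(z - mu_over_T) - 1.0)
    return quad(f, 0.0, 50.0)[0]

for mu in (-1.0, -0.1):
    print(I_B_series(mu), I_B_integral(mu))  # the two values agree
```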
Generalizing the reasoning which led to eq.(5), it can be shown that
$$I_n=\int _0^{\infty }\frac{z^n\,dz}{\exp \left[z-\frac{\mu }{T}\right]-1}=\sum _{l=0}^{\infty }\exp \left[(l+1)\frac{\mu }{T}\right]\frac{\mathrm{\Gamma }(n+1)}{(l+1)^{n+1}}$$
(6)
where $`\mathrm{\Gamma }`$ denotes the gamma function. Inserting eq.(5) into eq.(2), one gets the following form of the EOS of a Bose gas:
$$n=\frac{g(mT)^{3/2}}{2^{1/2}\pi ^2\hbar ^3}\sum _{l=0}^{\infty }\frac{\sqrt{\pi }}{2}\exp \left[(l+1)\frac{\mu }{T}\right](l+1)^{-3/2}$$
(7)
This EOS relates the chemical potential, temperature and number density of a Bose gas. Inserting $`\mu =0`$ in eq.(7), one can determine the temperature $`T_B`$ of Bose condensation. It thus turns out that
$$T_B=\frac{2\pi \hbar ^2}{m}\left(\frac{n}{g\zeta (3/2)}\right)^{2/3}$$
(8)
where $`\zeta (3/2)`$ denotes Riemann’s zeta function evaluated at $`3/2`$.
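As a familiar numerical illustration of eq.(8) (with Boltzmann’s constant restored), one can insert parameters appropriate to liquid helium-4; this is only a sketch, since eq.(8) strictly applies to an ideal gas. The ideal-gas estimate of about 3.1 K is the textbook value, to be compared with the observed $`\lambda `$-point of 2.17 K.

```python
import math

HBAR = 1.0546e-34    # J s
KB = 1.3807e-23      # J/K
ZETA_3_2 = 2.6124    # zeta(3/2)

def t_bose(n_per_m3, mass_kg, g=1):
    # Eq. (8) with Boltzmann's constant restored
    return (2.0 * math.pi * HBAR**2 / (mass_kg * KB)) \
           * (n_per_m3 / (g * ZETA_3_2)) ** (2.0 / 3.0)

# Illustrative numbers for liquid helium-4: n ~ 2.2e28 m^-3, m ~ 6.65e-27 kg
print(f"{t_bose(2.2e28, 6.65e-27):.2f} K")  # ~3.1 K
```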
Usually, the EOS of a system is a relationship between its pressure, volume and energy. It is known from general statistical physics (such as Landau and Lifchitz, 1976) that for a Bose gas the EOS has the form
$$pV=\frac{2}{3}E$$
(9)
where V is the volume and E the energy of the system. The expression for the energy of a Bose gas contains an integral of the form given by eq.(6) with $`n=3/2`$ and the same prefactor as the right side of eq.(1). Using eqs.(2), (6) and (9), the following final result is obtained for the EOS of a Bose gas in the region $`T\ge T_B`$:
$$p=\frac{2^{1/2}g}{3\pi ^2}\left(\frac{mT}{\hbar ^2}\right)^{3/2}T\sum _{l=0}^{\infty }\exp \left[(l+1)\frac{\mu }{T}\right]\frac{\mathrm{\Gamma }(5/2)}{(l+1)^{5/2}}$$
(10)
In the domain $`T\le T_B`$, for which $`\mu =0`$, one gets that
$$p=\frac{2^{1/2}g}{3\pi ^2}\left(\frac{mT}{\hbar ^2}\right)^{3/2}T\,\mathrm{\Gamma }(5/2)\,\zeta (5/2)$$
(11)
Discussion
In the preceding section we derived several analytical expressions for the EOS of a Bose gas, both in the region of existence of the Bose condensation and outside it. The derivation of these expressions (and even more their extension to the $`T=0`$ K case) was an interesting problem in itself. However, much more interesting are the possibilities for applying the results obtained here in studies of various systems occurring in astrophysics and physics.
A Bose gas occurs in astrophysics in interesting and varied situations, which range from the early universe to the interiors of the giant planets. In cosmology, Bose gases occur during the reheating after inflation. In a recent paper (Khlebnikov and Tkachev, 1999) a nonequilibrium Bose gas with an attractive interaction between particles was studied. It was shown there that the system evolves into a state that contains drops of the Bose-Einstein condensate. Could this be a possible explanation of the formation of the primordial clumps out of which clusters of galaxies were later formed? In planetary interiors, depending of course on the chemical composition, a Bose gas can occur in some cases. Closely related is the problem of the insulator-metal transition in planetary interiors. The point here is that the transition occurs at a pressure which depends on the chemical composition and the form of the EOS (see Stevenson, 1998 for details on this problem).
Important applications of Bose gas theory are abundant in the theory of superconductivity. It is known that in ordinary superconductors pairs of charge carriers form a superfluid Bose condensate. The mechanism responsible for high temperature superconductivity has not yet been discovered (Marston, 1999), but it is almost certain that some form of a Bose gas will also occur there. The net conclusion from this short list of examples is that studying the EOS formalism is far from being a mathematical “tour de force”; quite to the contrary, it paves the way to important astrophysical and physical applications.
References
Celebonovic, V.: 1998a, Publ. Astron. Obs. Belgrade, 60, 16
Celebonovic, V.: 1998b, Publ. Astron. Obs. Belgrade, 61, 75
Khlebnikov, S. and Tkachev, I.: 1999, preprint CERN-TH/99-21
Landau, L.D. and Lifchitz, E.M.: 1976, Statisticheskaya Fizika, Vol. I, Nauka Publ. House, Moscow
Marston, J.B.: 1999, preprint cond-mat/9904437
Stevenson, D.J.: 1998, J. Phys.: Condens. Matter, 10, 11227
# Magnetic Domain Walls in Double Exchange Materials
## Abstract
We study magnetic domain walls in double exchange materials. The domain wall width is proportional to the square root of the stiffness. In the double exchange model the stiffness has two terms: the kinetic energy and the Hartree term. The kinetic energy term comes from the decrease of the tunneling amplitude in the domain wall region. The Hartree term appears only in double exchange materials and it comes from the connection between bandwidth and magnetization. We also calculate the low-field magnetoresistance associated with the existence of magnetic domains. We find a magnetoresistance of 1-2$`\%`$. The magnetoresistance can be considerably larger in magnetically constrained nanocontacts.
PACS numbers: 71.10.-w, 75.10.-b, 72.10
Mixed valence compounds of the form $`La_{1-x}A_xMnO_3`$ ($`A`$ being Ca, Sr or Ba) have recently been shown to present extremely large (colossal) magnetoresistance. In these materials strong Hund’s interaction between the charge carriers and the manganese ions leads to a strong coupling between the electrical resistivity and the magnetic state. For $`0.1\le x\le 0.5`$ and low temperatures, the system is metallic and presents ferromagnetic order. As the temperature increases the system becomes insulating and paramagnetic. The magnetic transition occurs at an $`x`$-dependent critical temperature $`T_c\sim 300`$ K.
In the $`La_{1-x}A_xMnO_3`$ compounds, the electronically active orbitals are the $`Mn`$ $`d`$ orbitals, and the mean $`d`$ occupancy is $`4-x`$. A strong ferromagnetic Hund’s rule coupling aligns all electron spins in the $`Mn`$ $`d`$ orbitals. The $`Mn`$ ions form a simple cubic lattice of lattice parameter $`a`$. The cubic crystal symmetry splits the $`d`$ orbitals into a $`t_{2g}`$ triplet and an $`e_g`$ doublet. Three electrons fill up the $`t_{2g}`$ levels forming a core spin of magnitude $`S=3/2`$ and the rest of the electrons, $`1-x`$ per $`Mn`$, go to the $`e_g`$ orbitals.
Ferromagnetism in these materials is explained by the Double Exchange (DE) mechanism, in which the electrons get mobility between the $`Mn`$ ions using the oxygen as an intermediate. This conduction process is proportional to the electron transfer integral, and due to the strong ferromagnetic Hund’s rule coupling it is maximum when the two core spins involved in the process are parallel and zero when they are antiparallel. Because the alignment of spins favors electronic motion, the ferromagnetic ground state maximizes the electron kinetic energy. When the temperature increases, the DE model undergoes a phase transition towards a paramagnetic state. In this phase the core spins are randomly oriented and the electron kinetic energy is minimized. In the paramagnetic phase these materials behave as electrical insulators.
Large low-field magnetoresistance has been observed in ferromagnetic $`La_{1-x}A_xMnO_3`$ compounds with different structural discontinuities. These effects are associated with a lack of oxygen at the interface, which produces antiferromagnetic ordering at the interface and breaks the DE mechanism.
Below $`T_C`$ these materials may contain magnetic domains separated by domain walls (DW’s). Domain walls produce a resistance to the electrical current, and for understanding low-field magnetoresistive effects in DE metals it is important to know the width and the resistance of DW’s. Domain wall magnetoresistance effects have been recently observed in itinerant ferromagnet systems such as $`Co`$, $`Ni`$ and $`Fe`$, and also in colossal magnetoresistance perovskites. In reference it is obtained that in $`La_{1-x}Ca_xMnO_3`$ the resistance of a domain wall is $`8\times 10^{-14}\mathrm{\Omega }m^2`$, a quantity that the authors argue is $`4`$ orders of magnitude larger than one might expect based on DE models. Recent theoretical works have studied the ballistic and diffusive transport through domain walls in itinerant ferromagnets; however, for DE systems only the transmission in one-dimensional models has been studied.
In this paper we study magnetic domain walls in DE systems. In the DW the direction of the $`Mn`$ spin changes from $`0`$ to $`\pi `$ over a region of width $`L_W`$. In this region the $`Mn`$ core spins are misaligned and the tunneling amplitude between $`Mn`$ ions along the DW is reduced. This loss of kinetic energy in the DE model is the equivalent of the loss of exchange energy in the Heisenberg model. The chemical potential in the system is fixed by the magnetic domains, which all have the same hole concentration $`x`$. The reduction of the bandwidth in the DW region with respect to the surrounding magnetic domains would produce a change in the density of electrons in the DW region. This effect costs a lot of Hartree energy, and the system prefers to create dipoles at the edges of the DW’s. In this way the $`Mn`$ ion levels change and the local charge is not modified in the DW. The shift of the energy level of the $`Mn`$ ions modifies the cost of creating a DW and therefore its width. This effect is new and occurs due to the DE mechanism. In the first part of the paper we characterize this effect and study how it affects the width of the DW. In the second part of the paper we study the ballistic transport through DW’s in DE systems.
Microscopic Hamiltonian. We are interested in a hole concentration in the range $`0.1\le x\le 0.4`$. For this doping range the $`e_g`$ orbitals are degenerate and for simplicity we consider that the Hamiltonian is degenerate in the $`e_g`$ orbital index. Also, in our model the $`Mn`$ spins are treated as classical. For temperatures below $`T_c`$ and in the limit of infinite Hund’s coupling, the electronic properties of the $`Mn`$ oxides are described by the nearest neighbor tight binding Hamiltonian,
$$\widehat{H}_{DE}=-\sum _{i,j,\alpha }t_{i,j}\widehat{C}_{i,\alpha }^{+}\widehat{C}_{j,\alpha },$$
(1)
Here $`\widehat{C}_{i,\alpha }^+`$ creates an electron at site $`i`$, in the orbital $`\alpha `$ and with spin parallel to the core spin at site $`i`$, and the hopping amplitude is given by
$$t_{i,j}=t\left(\mathrm{cos}\frac{\theta _i}{2}\mathrm{cos}\frac{\theta _j}{2}+\mathrm{sin}\frac{\theta _i}{2}\mathrm{sin}\frac{\theta _j}{2}e^{i(\varphi _i-\varphi _j)}\right),$$
(2)
where $`\theta _i`$ and $`\varphi _i`$ are the angles which characterize the orientation of the core spin at site $`i`$.
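The modulus of the hopping amplitude in Eq.(2) reduces to the Anderson-Hasegawa factor $`t\mathrm{cos}(\theta _{ij}/2)`$, with $`\theta _{ij}`$ the angle between the two core spins. The short sketch below (our own illustration) checks this for spins lying in a common plane.

```python
import cmath, math

def t_ij(theta_i, phi_i, theta_j, phi_j, t=1.0):
    # Complex DE hopping amplitude of Eq. (2)
    return t * (math.cos(theta_i / 2) * math.cos(theta_j / 2)
                + math.sin(theta_i / 2) * math.sin(theta_j / 2)
                * cmath.exp(1j * (phi_i - phi_j)))

# |t_ij| equals t*cos(theta_ij/2), theta_ij being the relative spin angle:
for th in (0.0, math.pi / 3, math.pi / 2, math.pi):
    print(abs(t_ij(0.0, 0.0, th, 0.0)), math.cos(th / 2))
```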
Long-wavelength functional. In a perfect system all core spins are parallel and the electron kinetic energy gets its maximum value. Now consider a modulation of the core spin direction characterized by a vector $`𝐌_i=𝐒_i/S`$. In the long-wavelength limit the modulation can be described by a continuum unitary vector field $`𝐌(𝐫)`$. This modulation produces a decrease of the value of the hopping amplitude, and therefore a kinetic energy loss. For smooth modulations the local loss of kinetic energy is proportional to $`\left(\nabla 𝐌\right)^2`$ and the difference in kinetic energy (KE) between the uniform system and the modulated system is given by
$$\mathrm{\Delta }E^{KE}=\frac{\rho ^{KE}}{2}\int d^3𝐫\left(\nabla 𝐌\right)^2.$$
(3)
where $`\rho ^{KE}`$ is the KE stiffness of the system. This term is the equivalent of the term describing the exchange energy loss in the Heisenberg model. Note, however, that in the DE system $`\rho ^{KE}`$ is associated with the loss of electron kinetic energy due to the spatial variations of $`𝐌`$. By diagonalizing the Hamiltonian Eq.(1) with different boundary conditions we have calculated the value of $`\rho ^{KE}`$ for different hole concentrations in the system. In Fig.1 we plot $`\rho ^{KE}`$ as a function of $`x`$. Note that we have a degeneracy of 2 associated with the $`e_g`$ orbitals and we measure the hole concentration with respect to half-filling. When the electron concentration is zero ($`x=1`$) the band is completely empty, there is no KE in the system, and $`\rho ^{KE}=0`$. When the band is half filled ($`x=0`$) the KE is maximum and $`\rho ^{KE}`$ gets its maximum value.
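A minimal sketch of this type of calculation is given below: a rigid spin spiral of pitch $`q`$ along $`\widehat{x}`$ (equivalent to a twisted boundary condition) reduces the hopping along $`\widehat{x}`$ to $`t\mathrm{cos}(q/2)`$, and $`\rho ^{KE}`$ follows from the quadratic rise of the kinetic energy with $`q`$. The k-mesh size, the finite difference in $`q`$ and the units ($`t`$ and the lattice parameter $`a`$ set to 1) are our own illustrative choices.

```python
import numpy as np

def kinetic_energy_per_site(x_hole, q, nk=24, t=1.0):
    """Kinetic energy per site when the core spins form a spiral of pitch q
    along x, which reduces the hopping along x to t*cos(q/2) (a = 1).
    Each of the two degenerate e_g orbitals holds (1-x)/2 electrons."""
    k = 2.0 * np.pi * np.arange(nk) / nk
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    eps = -2.0 * t * (np.cos(q / 2.0) * np.cos(kx) + np.cos(ky) + np.cos(kz))
    eps = np.sort(eps.ravel())
    n_occ = int(round((1.0 - x_hole) / 2.0 * eps.size))
    return eps[:n_occ].sum() / eps.size

def stiffness_ke(x_hole, dq=0.05):
    # rho^KE in units of t/a: 2*(E(dq)-E(0))/dq^2 per orbital,
    # times 2 for the e_g degeneracy.
    e0 = kinetic_energy_per_site(x_hole, 0.0)
    e1 = kinetic_energy_per_site(x_hole, dq)
    return 2.0 * 2.0 * (e1 - e0) / dq**2

for x in (0.1, 0.3, 0.5):
    print(x, stiffness_ke(x))
```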
Besides the KE, there is also a Hartree (H) term which contributes to the long-wavelength functional. The electron density $`n`$ in different regions of the sample is fixed by the chemical potential $`\mu `$, which is obtained from the value of $`n`$ in the regions of constant magnetization (magnetic domains). The tunneling amplitude only depends on the relative orientation of the $`Mn`$ spins, and therefore $`n`$ and $`\mu `$ take the same values within all the magnetic domains. These regions represent basically all the system and they can be considered as reservoirs for the electrons. The total charge in the system should be zero and a background of positive charge equal to $`1-x`$ is assumed to exist in the sample.
In regions with modulated magnetization, $`\nabla 𝐌\ne 0`$, the tunneling amplitude is reduced and the bandwidth is narrowed. Since $`\mu `$ is fixed by the reservoirs, the decrease of the bandwidth would produce a change in $`n`$ in these regions. This would break local charge neutrality and cost a lot of Hartree energy. The system prefers to create dipoles in order to shift the $`Mn`$ ion energy levels in such a way that local charge neutrality is recovered. The energy shift is negative (positive) for electron concentrations smaller (larger) than half-filling. For smooth modulations of the magnetization, we have calculated from the Hamiltonian Eq.(1) the energy shift required for keeping local charge neutrality. We find that the shift is proportional to $`\left(\nabla 𝐌\right)^2`$ and therefore the local charge neutrality constraint gives a contribution to the energy of the system of the form,
$$\mathrm{\Delta }E^H=\beta \int d^3𝐫\left(\nabla 𝐌\right)^2.$$
(4)
In Fig.1 we plot the value of $`\beta `$ as a function of $`x`$, as obtained by solving the Hamiltonian Eq.(1) with different boundary conditions. In principle we should also add a term describing the energy cost of creating the dipoles responsible for the energy shifts; however, we find that this energy is much smaller than $`\mathrm{\Delta }E^H`$.
Finally, we need to know the energy term corresponding to the constraint which creates the magnetic domains. In general this term has the form,
$$\mathrm{\Delta }E^C=\int d^3𝐫\,f(\theta ).$$
(5)
For uniaxial crystals $`f(\theta )=K\mathrm{sin}^2(\theta )`$, $`K`$ being the anisotropy constant. For domain walls created by magnetic constrictions $`f(\theta )=-g\mu _BHS/a^3\mathrm{cos}\theta \text{sgn}(x)`$, $`H`$ being the magnetic field.
Adding the different contributions we get the functional
$$F=\frac{\rho }{2}\int d^3𝐫\left(\nabla 𝐌\right)^2+\int d^3𝐫\,f(\theta ).$$
(6)
being $`\rho =\rho ^{KE}+2\beta `$.
Domain wall width. A DW along the $`\widehat{x}`$ direction has the general form $`𝐌=(0,\mathrm{sin}\theta ,\mathrm{cos}\theta )`$ and $`\mathrm{cos}\theta (x)`$ is obtained by minimizing $`F`$ with the boundary conditions $`𝐌\to \pm \widehat{z}`$ as $`x\to \pm \mathrm{\infty }`$. For uniaxial crystals the optimum form of the DW is
$$\mathrm{cos}\theta =\mathrm{tanh}\frac{4x}{L_W}$$
(7)
with $`L_W=\sqrt{\rho /2K}`$. For a magnetic constriction the form of the domain wall is given by
$$\mathrm{cos}\theta =\left\{1-2\mathrm{cosh}^{-2}\left[\left|\frac{4x}{L_W}\right|+\mathrm{ln}(\sqrt{2}+1)\right]\right\}\text{sgn}(x)$$
(8)
and $`L_W=4\sqrt{\rho \frac{a^3}{g\mu _BHS}}`$. In both cases $`L_W`$ represents the width of the DW, and it is proportional to the square root of the total stiffness $`\rho `$. The effect of the Hartree term on the DW width depends on the electron concentration: for electron concentrations smaller than half filling $`\beta `$ is negative and the Hartree term favors thin DW's. On the contrary, for electron concentrations bigger than half filling $`\beta `$ is positive and the Hartree term favors wide DW's.
For a given value of $`L_W`$ equations (7) and (8) have essentially the same form and for simplicity we use expression (7) for describing a domain wall.
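As an aside, a short sketch evaluating the two wall profiles is given below; it uses Eq. (7) and the $`\mathrm{cosh}^{-2}`$ form of Eq. (8) as reconstructed above, with positions measured in units of $`L_W`$ (all values illustrative).

```python
import numpy as np

def wall_uniaxial(x, L_W):
    """cos(theta) across a wall in a uniaxial crystal, Eq. (7)."""
    return np.tanh(4.0 * x / L_W)

def wall_constriction(x, L_W):
    """cos(theta) for a wall pinned by a magnetic constriction, Eq. (8)."""
    arg = np.abs(4.0 * x / L_W) + np.log(np.sqrt(2.0) + 1.0)
    return (1.0 - 2.0 / np.cosh(arg) ** 2) * np.sign(x)

x = np.linspace(-3.0, 3.0, 13)        # positions in units of L_W
print(np.round(wall_uniaxial(x, 1.0), 3))
print(np.round(wall_constriction(x, 1.0), 3))
```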
Transport through a domain wall. Now we calculate the ballistic conductance, $`G`$, associated with a DW. From the difference between the conductances of a perfect system and a system with a DW we can evaluate the low field magnetoresistance associated with the alignment of the magnetic domains. A DW modulates the magnetization only along the $`\widehat{x}`$ direction and the Hamiltonian is invariant in the $`\widehat{y}`$-$`\widehat{z}`$ plane. Therefore the conductance of the system can be written as
$$G(\mu )=𝑑E^{}g_{1D}(E^{})n^{2D}(\mu E^{}),$$
(9)
here $`n^{2D}(E)`$ is the two-dimensional density of states per unit area at energy $`E`$, and $`g_{1D}(E)`$ is the conductance of a one dimensional system at energy $`E`$. Expression (9) can be interpreted as the sum of the conductance of all the one dimensional channels. The one dimensional conductance between the sites $`1`$ and $`N`$ is written as,
$$g_{1D}(\mu )=2\frac{e^2}{h}4\pi ^2t^4D^2(\mu )|G_{N,1}(\mu )|^2$$
(10)
where $`D(\mu )=\sqrt{4t^2-\mu ^2}/(2\pi t^2)`$ is the local density of states at an edge of the isolated lead, and $`G_{N,1}(\mu )`$ the Green function connecting the sites $`1`$ and $`N`$ of an infinite one dimensional system. The factor 2 in Eq. (10) corresponds to the $`e_g`$ degeneracy. The DW is fully contained between the sites $`1`$ and $`N`$.
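Eq. (10) can be evaluated with standard lattice Green function techniques. The Python sketch below is a simplified single-channel version: the wall enters only through the reduced bond hoppings $`t\mathrm{cos}(\mathrm{\Delta }\theta /2)`$, the leads through their analytic surface self-energy, and the $`e_g`$ factor 2 as well as the Hartree level shift are omitted. All parameters are illustrative.

```python
import numpy as np

def transmission(E, thetas, t=1.0):
    """Landauer transmission of a 1D chain whose bond hoppings are
    reduced to t*cos((theta_{i+1}-theta_i)/2) inside the wall."""
    n = len(thetas)
    H = np.zeros((n, n), dtype=complex)
    for i in range(n - 1):
        H[i, i + 1] = H[i + 1, i] = -t * np.cos((thetas[i + 1] - thetas[i]) / 2.0)
    # retarded surface self-energy of a semi-infinite ideal lead, |E| < 2t
    sigma = (E - 1j * np.sqrt(4.0 * t**2 - E**2)) / 2.0
    gamma = -2.0 * sigma.imag            # level broadening sqrt(4t^2 - E^2)
    H[0, 0] += sigma
    H[-1, -1] += sigma
    G = np.linalg.inv(E * np.eye(n) - H)  # Green function of the device
    return gamma**2 * abs(G[0, -1]) ** 2

L_W = 5.0                                 # wall width in lattice units
xs = np.arange(-20, 21)
thetas = np.arccos(np.tanh(4.0 * xs / L_W))   # wall profile of Eq. (7)
for E in [-1.9, -1.0, 0.0, 1.0, 1.9]:
    print(E, transmission(E, thetas))
```

For a uniform chain the transmission is unity inside the band; the wall suppresses it most strongly near the band edges, consistent with the discussion below.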
In order to calculate the effect of the DW on the transport properties it is necessary to know the effect that the magnetization $`𝐌=(0,\mathrm{sin}\theta ,\mathrm{cos}\theta )`$ has on the electron Hamiltonian. There are two effects: the modulation of the hopping amplitude along the $`\widehat{x}`$ direction and the shift of the $`Mn`$ ion levels. Both effects are obtained from the form of the DW, Eq. (7).
From Eq.(9) we evaluate the magnetoresistance,
$$MR=\frac{G_0G_{DW}}{G_0}.$$
(11)
Here $`G_{DW}`$ and $`G_0`$ are respectively the conductance in presence and in absence of the DW. $`MR`$ represents the low-field ballistic magnetoresistance associated with the presence of magnetic domains. In Fig. 2 we plot, for different hole concentrations, $`MR`$ as a function of $`L_W`$. For small values of $`L_W`$, $`MR`$ gets rather large values ($`>10\%`$). For $`L_W\gtrsim 20a`$, the magnetoresistance is always smaller than $`1\%`$. In the inset of Fig. 2 we plot the one-dimensional conductance $`g_{1D}`$ as a function of the energy for a DW with $`L_W`$=$`5a`$. The asymmetry with respect to zero energy is due to the Hartree energy level shift. We see that $`g_{1D}`$ is suppressed mainly at the band edges. This is the reason why the $`MR`$ is bigger for small concentrations of electrons, where the Fermi energy is close to the band edge.
Discussion. We now discuss the application of our results to manganese oxides. First we want to know the width of the DW produced by crystal anisotropy. The width is determined by the stiffness, $`\rho `$, and by the anisotropy constant $`K`$. $`K`$ can be obtained experimentally from neutron scattering, by microwave absorption and by studies of the resistance saturation with magnetic fields. From these experiments we estimate an anisotropy constant in the range $`2\times 10^5`$–$`5\times 10^6\mathrm{J}\mathrm{m}^{-3}`$; for $`x=0.3`$ this implies a DW width in the range $`10a\lesssim L_W\lesssim 30a`$ and a ballistic MR of $`1`$–$`2\%`$.
In the case of DW’s created by magnetic constrictions, $`L_W`$ depends on the applied magnetic field. For $`x=0.3`$, we obtain $`L_W\approx 26a/\sqrt{H}`$, where $`H`$ is the external magnetic field in Tesla. This corresponds to a rather large DW. Therefore we expect that the DW width should be determined by the crystal anisotropy, and $`1`$–$`2\%`$ magnetoresistances are expected. This value is similar to that obtained in reference , assuming $`10a\lesssim L_W\lesssim 30a`$. However we do not know what the contribution of diffusive processes to $`MR`$ is. Within the simplest Born approximation, diffusive effects are described by the reduction of the bandwidth, and for the values of $`L_W`$ considered this also produces a $`1\%`$ magnetoresistance.
From the inset of figure 2, we can say that the magnetoresistive effects should be much more important in geometrically constrained domain walls. Recently magnetoresistances of 200$`\%`$ have been observed in nanocontacts of itinerant ferromagnets. We expect similar effects to occur in DE materials confined to reduced dimensions.
Summary. We have calculated the width of domain walls in double exchange materials. The width is proportional to the square root of the stiffness. The stiffness has two terms: the kinetic energy and the Hartree term. The kinetic energy term is the equivalent of the exchange energy in the Heisenberg model and it comes from the decrease of the tunneling amplitude in the domain wall region. The Hartree term appears only in double exchange materials and it comes from the connection between bandwidth and magnetization. We have also calculated the low-field magnetoresistance associated with the existence of magnetic domains. We have found a magnetoresistance of $`1`$–$`2\%`$. The magnetoresistance can be considerably larger in magnetically constrained nanocontacts.
We thank G.Platero, M.J.Calderón, J.A.Vergés and F.Guinea for useful discussions. This work was supported by the CICyT of Spain under Contract No. PB96-0085 and by the Fundación Ramón Areces.
# Urgent problems at small 𝑥
## 1 INTRODUCTION
During the 1960’s, a lot was learnt about the analytic properties of scattering amplitudes. Much of this knowledge was incorporated in Regge theory, but it has been largely forgotten. However, Regge theory provides the best available description of the structure function data at small $`x`$, right from $`Q^2=0`$ up to the very highest available values. It should not be regarded as a competitor for perturbative QCD; rather, it is complementary to it, and we need to learn how to make the two live together. In recent years, a belief has grown that the spectacular small-$`x`$ behaviour seen at HERA may be associated with the collinear singularity of the DGLAP splitting function. However, this belief conflicts with what we know about the analytic properties of the structure function.
## 2 REGGE FIT TO SMALL-$`x`$ DATA
Regge theory should be valid at any value of $`Q^2`$, provided only that $`x`$ is small enough. In its simplest form, it describes the structure function as a sum of fixed powers of $`x`$, multiplied by functions of $`Q^2`$:
$$F_2(x,Q^2)\approx \underset{i}{\sum }f_i(Q^2)x^{-ϵ_i}$$
(1)
Regge theory tells us little about the coefficient functions $`f_i(Q^2)`$, beyond that they are analytic functions of $`Q^2`$. Also, we know from QED gauge invariance that at $`Q^2=0`$ they vanish at least linearly with $`Q^2`$. It may be that the assumption of simple powers of $`x`$ is too simple, and it certainly must be corrected at some level, but it does fit the data extraordinarily well and so there is no reason to suppose that the correction is numerically significant at present $`x`$ values. We find that three powers are sufficient: two are taken from our old fits to hadronic total cross sections
$`ϵ_1`$ $`=0.08`$ soft pomeron exchange (2)
$`ϵ_2`$ $`=-0.45`$ $`f,a\text{ exchange}`$ (3)
The data require the remaining power to be
$$ϵ_0=0.4$$
(4)
with an error of about $`\pm 10\%`$. We call this the “hard pomeron”.
We have made a fit to the data at each available value of $`Q^2`$ to extract the values of the coefficient functions $`f_i(Q^2)`$. The data do not constrain the $`f,a`$-exchange coefficient function $`f_2(Q^2)`$ at all well, but the results for the hard-pomeron function $`f_0(Q^2)`$ and the soft-pomeron function $`f_1(Q^2)`$ are shown in figure 1.
Each vanishes at $`Q^2=0`$, as it has to. The hard-pomeron coefficient remains small until about $`Q^2=10`$ GeV², after which it rises approximately logarithmically. This is no surprise. What is surprising is that the soft-pomeron coefficient, after rising rapidly away from $`Q^2=0`$, reaches a peak at about $`Q^2=10`$ and then falls again. That is, soft-pomeron exchange is higher twist. For even quite large values of $`Q^2`$ this higher-twist component is a major part of the small-$`x`$ structure function: see figure 2. This raises serious questions about all perturbative-QCD fits to structure functions.
The three-term form (1) gives an excellent fit to $`F_2(x,Q^2)`$ for $`x<0.07`$ and $`0\le Q^2\le 2000`$ GeV². With 8 free parameters, including $`ϵ_0`$, one can achieve a $`\chi ^2`$ per data point well below 1.0, so the exact values of the parameters are not completely determined by these data points. Figure 3 shows how such a fit compares with the largest-$`Q^2`$ data and the real-photon data. A combination of the hard and the soft pomerons also describes well the data for the process $`\gamma p\to \psi p`$.
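Since the powers $`ϵ_i`$ are held fixed, the fit of Eq. (1) at each value of $`Q^2`$ is linear in the coefficients $`f_i`$ and can be done by weighted least squares. A minimal sketch, with synthetic arrays standing in for the measured $`F_2`$ points:

```python
import numpy as np

eps = np.array([0.4, 0.08, -0.45])    # hard pomeron, soft pomeron, f/a exchange

def fit_coefficients(x, F2, err):
    """Weighted least-squares fit of F2(x) = sum_i f_i x^(-eps_i)."""
    A = x[:, None] ** (-eps[None, :]) / err[:, None]
    f, *_ = np.linalg.lstsq(A, F2 / err, rcond=None)
    return f                           # [f_0, f_1, f_2] at this Q^2

# synthetic pseudo-data at one Q^2 value
x = np.logspace(-4, np.log10(0.07), 20)
true_f = np.array([0.002, 0.15, 0.3])
F2 = x[:, None] ** (-eps[None, :]) @ true_f
err = 0.02 * F2
noisy = F2 + err * np.random.default_rng(0).normal(size=x.size)
print(fit_coefficients(x, noisy, err))
```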
## 3 PERTURBATIVE EVOLUTION
If we Mellin transform with respect to $`x`$, the DGLAP equation reads
$$\frac{}{\mathrm{log}Q^2}𝐮(N,Q^2)=𝐏(N,Q^2)𝐮(N,Q^2)$$
(5)
where $`𝐮`$ is a two-component object whose elements are the singlet quark distribution and the gluon distribution, while $`𝐏`$ is the splitting matrix. A power contribution
$$f(Q^2)x^{-ϵ}$$
(6)
to $`F_2(x,Q^2)`$ corresponds to a pole
$$\frac{𝐟(Q^2)}{N-ϵ}$$
(7)
in $`𝐮(N,Q^2)`$. Inserting such a pole into each side of (5) gives a differential equation for $`f(Q^2)`$. If we use the lowest-order approximation to the splitting matrix $`𝐏`$ the solution to this equation is that, for large $`Q^2`$, $`f(Q^2)`$ behaves as a power of $`\mathrm{log}Q^2`$.
However, there is a serious problem with this lowest-order approximation. It gives $`𝐏(N,Q^2)`$ a pole at $`N=0`$, and it is this pole that largely determines the magnitude of the power of $`\mathrm{log}Q^2`$. Conventional perturbative-QCD fits to the data also rely on this pole to explain the rapid rise of $`F_2`$ at small $`x`$. But we know that in fact such a pole cannot be present: higher-order corrections must resum it away. We know this because at small $`Q^2`$ $`𝐮(N,Q^2)`$ does not have a singularity at $`N=0`$: rather, its singularities in the complex $`N`$-plane are the standard singularities of Regge theory — the soft pomeron, the mesons, and possibly also a hard pomeron. It is also supposed to be analytic in $`Q^2`$, so a singularity at $`N=0`$ cannot suddenly appear when we continue from small $`Q^2`$ up to beyond the values of $`Q^2`$ at which the DGLAP equation (5) begins to be valid. Thus $`𝐏(N,Q^2)`$ cannot have a pole at $`N=0`$, nor indeed at any other value of $`N`$.
At small $`N`$, the $`gg`$ element of the splitting matrix is found by solving the equation
$$\chi (P_{gg}(N,Q^2),Q^2)=N$$
(8)
where $`\chi (\omega ,Q^2)`$ is the Lipatov characteristic function. In lowest order,
$$\pi \chi (\omega ,Q^2)=3\alpha _S(Q^2)[2\psi (1)-\psi (\omega )-\psi (1-\omega )]$$
(9)
If one uses this approximation to $`\chi (\omega ,Q^2)`$ one indeed finds that $`P_{gg}(N,Q^2)`$ is nonsingular at $`N=0`$, even though the terms of its expansion in powers of $`\alpha _S`$ are each singular at $`N=0`$. Compare the expansion of the function
$$p(N,Q^2)=N-\sqrt{N^2-\alpha _S(Q^2)}$$
(10)
whose expansion is
$$p(N,Q^2)=\frac{\alpha _S(Q^2)}{2N}+\frac{\alpha _S^2(Q^2)}{8N^3}+\mathrm{\cdots }$$
(11)
but which is evidently finite at $`N=0`$. Near $`N=0`$ the expansion parameter $`\alpha _S(Q^2)/N^2`$ is so large that the expansion is illegal.
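The same point can be made numerically: the truncated series blows up as $`N`$ decreases while $`p(N,Q^2)`$ itself stays finite. A small sketch (the value of $`\alpha _S`$ is illustrative):

```python
import numpy as np

def p_exact(N, alpha_s):
    return N - np.sqrt(N**2 - alpha_s + 0j)

def p_series(N, alpha_s, order):
    """Truncated expansion alpha_s/(2N) + alpha_s^2/(8N^3) + ..."""
    u = alpha_s / N**2
    total, coef = 0.0, 1.0
    for k in range(1, order + 1):
        coef *= (0.5 - k + 1.0) / k    # binomial coefficient C(1/2, k)
        total += coef * (-u) ** k
    return -N * total

alpha_s = 0.2
for N in [1.0, 0.5, 0.3, 0.1]:
    print(N, abs(p_exact(N, alpha_s)),
          [round(p_series(N, alpha_s, k), 3) for k in (2, 4, 8)])
```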
We know that $`P_{gg}(N,Q^2)`$ is finite at $`N=0`$, but we do not know how large it is, because the lowest-order approximation (9) to $`\chi (\omega ,Q^2)`$ is apparently not a good one: the next-to-leading order correction is huge.
More work is needed!
# Equilibrium sedimentation profiles of charged colloidal suspensions (LPENS-Th 10/99)
## I Introduction
Under the action of gravity a colloidal suspension sediments to form a stratified fluid. The equilibrium density profile of the colloidal particles results from the balance between the gravitational force and thermodynamic forces as derived from the free energy of the system. The density profile usually exhibits a dense layer of colloidal particles at the bottom of the container above which a light cloud of colloidal particles floats. In this last regime, the density of particles is small enough to treat the fluid as an ideal gas. Under the reasonable assumption that density gradients can be neglected, the equilibrium colloidal density obeys the well known barometric law:
$$\rho _{\mathrm{col}}(z)=\rho _{\mathrm{col}}^0\mathrm{exp}(-z/l_g)$$
(1)
Here, $`\rho _{\mathrm{col}}(z)`$ denotes the density profile of the colloidal particles, $`z`$ is the altitude and $`l_g=(\beta Mg)^{-1}`$ is the gravitational length where $`\beta =(k_\mathrm{B}T)^{-1}`$ is the inverse temperature, $`M`$ is the buoyant mass of a colloidal particle and $`g`$ the intensity of the gravitational field. This exponential law is of practical interest since it gives a prescription for the measurement of the buoyant mass $`M`$ of the particles. However a recent experimental study of the sedimentation profiles of strongly de-ionized charged colloidal suspensions led the authors to challenge the validity of this barometric law. An exponential behaviour was indeed observed in the asymptotic regime, but the measured gravitational length $`l_g^{*}`$ could differ significantly from the expected one (a factor of two). $`l_g^{*}`$ was found to systematically overestimate the actual value $`l_g`$, with the result that the buoyant mass measured within these experiments is systematically reduced compared to the known buoyant mass of the particles.
Some theoretical efforts have been made to study this problem. First Biben and Hansen solved the problem numerically in a mean field approach, but unfortunately due to numerical difficulties the sample heights considered were of the order of a micron while in the experiment the sample heights are of the order of a centimeter. As a consequence, the dilute region at high altitude could not be studied in this approach. Nevertheless the numerical results show a positive charge density at the bottom of the container and a negative charge at the top while the bulk of the container is neutral. This result shows that a non-zero electric field exists in the bulk of the container and acts against gravity for the colloids.
More recently one of the authors studied a two-dimensional solvable model for this problem . This model is not very realistic (the valency of the colloids was $`Z=1`$ and there was no added salt) but has the nice feature of being exactly solvable analytically. It confirmed the condenser effect noticed for small height containers in Ref. . For large height containers it showed a new interesting phenomenon: while there is still a positive charge density at the bottom of the container, the negative charge density is no longer at the top of the container but floats at some altitude. Interestingly, the analytical expression for the density profiles in the asymptotic regime predicts a decay in $`\mathrm{exp}(-z/l_g)/z`$ for the colloidal density. Besides the $`1/z`$ factor that cannot be explained by a mean field approach, no mass reduction is predicted by this model. However one should be cautious when comparing two-dimensional systems to the three dimensional case because the density is not relevant in two-dimensional Coulomb systems: no matter how small the density is, the system is always coupled, and the ideal gas regime is never attained. For this reason a decay of the density similar to that of an ideal gas is in itself surprising in two dimensions.
Lately new results based on an approximate version of the model introduced in reference led the authors of these studies to conclude that the mean-field approach was indeed able to predict a mass reduction in the asymptotic regime. Here we present some new results about this problem treated under the Poisson-Boltzmann approximation, and show that this is indeed not the case.
## II The model and the Poisson-Boltzmann approximation
Let us consider some colloidal particles (for example some latex spheres) in a solution with some amount of added salt. In a polar solvent like water the colloids release some counterions and therefore acquire a surface electric charge $`-Ze`$ ($`Z`$ is an integer, usually positive, and $`-e`$ is the charge of the electron). We consider that the colloidal sample is monodisperse, all colloids have the same valency $`Z`$, and that the counterions and the salt cations are both monovalent and therefore we shall not make any distinction between cations coming from the colloids and salt cations. We then consider a three-component system composed of colloidal particles with electric charge $`-Ze`$ and mass $`M`$, counterions with charge $`+e`$ and coions with charge $`-e`$. We shall neglect the masses of the counterions and coions when compared with the mass of the colloids. The solvent shall be considered in a primitive model representation as a continuous medium of relative dielectric permittivity $`ϵ`$ (for water at room temperature $`ϵ\approx 80`$). The system is in a container of height $`h`$, the bottom of the container is at $`z=0`$ altitude. We consider that the system is invariant in the horizontal directions. The density profiles of each species are denoted by $`\rho _{\mathrm{col}}(z)`$, $`\rho _+(z)`$ and $`\rho _{-}(z)`$ ($`z`$ is the vertical coordinate) for the colloids, the cations and the anions respectively at equilibrium. Let us define the electric charge density (in units of $`e`$) $`\rho =-Z\rho _{\mathrm{col}}-\rho _{-}+\rho _+`$ and the electric potential $`\mathrm{\Phi }`$, solution of the Poisson equation
$$\frac{d^2\mathrm{\Phi }}{dz^2}(z)=-\frac{4\pi }{ϵ}e\rho (z)$$
(2)
It is instructive to recall that the Poisson-Boltzmann equation can be derived from the minimization of the free energy density functional
$`F[\rho _{\mathrm{col}},\rho _+,\rho _{-}]`$ $`=`$ $`{\displaystyle \underset{i\{\mathrm{col},+,-\}}{\sum }}{\displaystyle \int _0^h}k_\mathrm{B}T\rho _i(z)\left[\mathrm{ln}(\lambda _i^3\rho _i(z))-1\right]dz`$ (4)
$`+{\displaystyle \int _0^h}Mgz\rho _{\mathrm{col}}(z)dz+{\displaystyle \frac{1}{2}}{\displaystyle \int _0^h}e\rho (z)\mathrm{\Phi }(z)dz`$
where $`\lambda _i`$ is the de Broglie wavelength of species $`i`$. Minimization of the grand potential with respect to the densities, $`\delta F/\delta \rho _i(z)-\mu _i=0`$, where $`\mu _i`$ is the chemical potential of species $`i`$, yields
$`\rho _{\mathrm{col}}(z)`$ $`=`$ $`\rho _{\mathrm{col}}^0\mathrm{exp}(\beta Ze\mathrm{\Phi }(z)-\beta Mgz)`$ (6)
$`\rho _+(z)`$ $`=`$ $`\rho _+^0\mathrm{exp}(-\beta e\mathrm{\Phi }(z))`$ (7)
$`\rho _{-}(z)`$ $`=`$ $`\rho _{-}^0\mathrm{exp}(\beta e\mathrm{\Phi }(z))`$ (8)
We shall work in the canonical ensemble, the prefactors $`\rho _i^0`$ which depend on the chemical potentials $`\mu _i`$ are determined by the normalizing conditions
$$\int _0^h\rho _i(z)dz=N_i$$
(9)
where $`N_i`$ is the total number of particles per unit area of species $`i`$. The system is globally neutral so we have $`-ZN_{\mathrm{col}}-N_{-}+N_+=0`$.
Let us introduce the following notations: $`l_g=(\beta Mg)^{-1}`$ is the gravitational length of the particles, $`l=\beta e^2/ϵ`$ is the Bjerrum length, $`\varphi =\beta e\mathrm{\Phi }`$ is the dimensionless electric potential and $`\kappa _i=(4\pi lN_i/h)^{1/2}`$. $`\kappa _\pm ^{-1}`$ are the Debye lengths associated with the counterions and the coions and $`(Z\kappa _{\mathrm{col}})^{-1}`$ is the Debye length associated with the colloidal particles. For a quantity $`q(z)`$ depending on the altitude, let us define its mean value $`\langle q\rangle =\int _0^hq(z)dz/h`$. With these notations equations (2) and (II) yield the modified Poisson-Boltzmann equation
$$\frac{d^2\varphi }{dz^2}(z)=Z\kappa _{\mathrm{col}}^2\frac{e^{Z\varphi (z)-z/l_g}}{\langle e^{Z\varphi (z^{\prime })-z^{\prime }/l_g}\rangle }+\kappa _{-}^2\frac{e^{\varphi (z)}}{\langle e^{\varphi (z^{\prime })}\rangle }-\kappa _+^2\frac{e^{-\varphi (z)}}{\langle e^{-\varphi (z^{\prime })}\rangle }$$
(10)
From Eq. (10) it is clear that the problem has the following scale invariance: if $`\varphi (z)`$ is a solution of (10) then $`\varphi (z/\alpha )`$ is a solution of the problem with the rescaled lengths $`\alpha l_g`$ and $`\alpha \kappa _i^{-1}`$.
The advantage of the density functional formulation of the problem is that it allows for systematic corrections to the Poisson-Boltzmann approximation. For instance, one may be interested in the effect of the finite size of the macroions. Let $`\sigma `$ be the diameter of the colloids, $`\eta _{\mathrm{col}}=\pi \sigma ^3\rho _{\mathrm{col}}/6`$ the volume fraction of colloids and $`\rho _\pm ^{\prime }=\rho _\pm /(1-\eta _{\mathrm{col}})`$ the effective densities of the microions. Then, the finite size of the colloids can be accounted for in a local density approximation (LDA) by adding to the free energy density functional (4) the free energy excess term
$$\int _0^hf_{\mathrm{exec}}(\rho _{\mathrm{col}}(z))dz$$
(11)
where $`f_{\mathrm{exec}}(\rho _{\mathrm{col}})`$ is the excess free energy of a hard sphere fluid derived by the Carnahan–Starling equation of state
$$f_{\mathrm{exec}}(\rho _{\mathrm{col}})=k_\mathrm{B}T\rho _{\mathrm{col}}\frac{4\eta _{\mathrm{col}}-3\eta _{\mathrm{col}}^2}{(1-\eta _{\mathrm{col}})^2}$$
(12)
and by replacing in (4) the entropy term of the microions by
$$k_\mathrm{B}T\int _0^h\rho _\pm (z)(\mathrm{ln}(\lambda _\pm ^3\rho _\pm ^{\prime }(z))-1)dz.$$
(13)
Minimization of this new free energy functional gives the modified version of Eqs. (II)
$`\rho _{\mathrm{col}}(z)`$ $`=`$ $`\rho _{\mathrm{col}}^0\mathrm{exp}(\beta Ze\mathrm{\Phi }(z)-\beta Mgz)`$ (16)
$`\times \mathrm{exp}\left[-{\displaystyle \frac{3\eta _{\mathrm{col}}(z)^3-9\eta _{\mathrm{col}}(z)^2+8\eta _{\mathrm{col}}(z)}{(1-\eta _{\mathrm{col}}(z))^3}}+{\displaystyle \frac{\pi \sigma ^3}{6}}\left(\rho _+^{\prime }(z)+\rho _{-}^{\prime }(z)\right)\right]`$
$`\rho _+(z)`$ $`=`$ $`\rho _+^0(1-\eta _{\mathrm{col}}(z))\mathrm{exp}(-\beta e\mathrm{\Phi }(z))`$ (17)
$`\rho _{-}(z)`$ $`=`$ $`\rho _{-}^0(1-\eta _{\mathrm{col}}(z))\mathrm{exp}(\beta e\mathrm{\Phi }(z))`$ (18)
There are several length scales in this problem: the gravitational length of the colloids $`l_g`$, the Debye or screening length, the height $`h`$ of the container and eventually the hard core diameter of the particles. In a realistic case $`h`$ is of the order of a centimeter, $`l_g`$ of the order of $`0.1`$ mm, and the screening length of the order of 10 nm. We are faced with a practical numerical problem: when we transpose the problem to a lattice, the lattice spacing should be smaller than all the physical lengths, but since $`h`$ is much larger than the other lengths, the number of sites in the lattice would have to be very high (of order $`10^6`$). A possible approach to deal with this problem is to study small containers as in Ref. . In this paper we want to study the case of high containers so we will consider very deionized systems in which the screening length is of the order of 0.1 mm, much larger than in usual physical cases, the other lengths taking “physical” values. That way the necessary number of points in the lattice will be reasonable (a few hundred). Also, since the screening length is so large, the hard core of the macroions will not change the results drastically from the case of point particles, so from now on we will concentrate on the Poisson-Boltzmann problem for point particles (Eq. 10).
## III Results
Equation (10) is solved numerically by an iterative method . Using the Green function of the one-dimensional Laplacian
$$G(z,z^{\prime })=-\frac{1}{2}\left|z-z^{\prime }\right|$$
(19)
the Poisson equation (2) can be written as
$$\varphi (z)=4\pi l\int _0^hG(z,z^{\prime })\rho (z^{\prime })dz^{\prime }$$
(20)
Starting with an arbitrary electric potential, one can compute the corresponding density profiles using Eqs. (II) and derive a new electric potential using Eq. (20), then reiterate the process until a stationary solution is attained. In practice, instead of using the new potential directly for the next iteration, a mixing of the old and new densities is used.
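A compact Python sketch of this iteration for the dimensionless problem is given below. It uses the sign conventions of Eqs. (10), (19) and (20) as written above; all parameter values (in reduced units), the mixing coefficient and the iteration count are illustrative and may need tuning.

```python
import numpy as np

def solve_pb(Z, lg, l, N_col, N_plus, N_minus, h, n=400, mix=0.05, n_iter=5000):
    """Iterative solution of the modified Poisson-Boltzmann equation (10)."""
    z = np.linspace(0.0, h, n)
    dz = z[1] - z[0]
    G = -0.5 * np.abs(z[:, None] - z[None, :])        # kernel of Eq. (19)
    phi = np.zeros(n)
    for _ in range(n_iter):
        w_col = np.exp(Z * phi - z / lg)              # colloid Boltzmann weight
        rho_col = N_col * w_col / (w_col.sum() * dz)
        w_p = np.exp(-phi)                            # cations
        rho_p = N_plus * w_p / (w_p.sum() * dz)
        w_m = np.exp(phi)                             # anions
        rho_m = N_minus * w_m / (w_m.sum() * dz)
        rho_q = -Z * rho_col - rho_m + rho_p          # charge density (units of e)
        phi_new = 4.0 * np.pi * l * (G @ rho_q) * dz  # Eq. (20)
        phi = (1.0 - mix) * phi + mix * phi_new       # mixing step
    return z, rho_col, rho_p, rho_m, phi

# reduced illustrative units; neutrality: -Z*N_col - N_minus + N_plus = 0
z, rc, rp, rm, phi = solve_pb(Z=10, lg=1.0, l=1e-4, N_col=1.0,
                              N_plus=12.0, N_minus=2.0, h=30.0)
```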
### A Generic results
As stated before we have to consider very deionized systems in which the Bjerrum length $`l`$ is smaller than $`10^{-6}`$ Å (the physical value of $`l`$ for water at room temperature is 7 Å). Figure 1 shows the density profiles of each species, the charge density, the electric potential and the electric field profile of a typical sample with the following parameters: $`l=7\times 10^{-8}`$ Å, $`l_g=0.128`$ mm, $`h=30`$ mm, $`Z=100`$, a salt concentration $`C_{\mathrm{salt}}=0.1`$ mMol/l and a mean colloidal volume fraction $`\overline{\eta }_{\mathrm{col}}=0.12`$ (we consider that the particles have a hard core diameter $`\sigma =180`$ nm to express the colloidal density as a volume fraction in order to use units familiar from the experiments, but we do not account for hard core effects in the Poisson–Boltzmann equation).
The log plot of the colloidal density profiles is similar to the experimental ones . At the bottom there is a slow decay whereas at high altitudes there is a faster barometric decay. Since we did not take into account the hard core of the particles in the theory we do not find the discontinuity in the density profiles near the bottom of the sample observed in the experiments , due to the phase transition of the colloids from an amorphous solid to a fluid. At very low altitudes (near the bottom, very high volume fractions) the Poisson–Boltzmann theory is not valid.
The charge density profile confirms the results of Ref. , that there is a strong accumulation of positive charges at the bottom of the container while there is a cloud of negative charge density floating at some altitude $`z^{*}`$. There are clearly two neutral regions in the container: one at low altitude between the positive charge density at the bottom and the negative cloud, in which a non-vanishing electric field exists, and a second neutral region at high altitude, over the negative cloud. The electric field in the lower region acts against gravity for the colloids; therefore, as seen in the log plot of the colloid density profile, the decay is much slower than the one for an ideal neutral gas. Numerical results for other series of samples suggest that this electric field is proportional to $`Mg/Z`$. In the upper region the colloidal density drops exponentially as $`\mathrm{exp}(-z/l_g)`$ since the electric potential is almost constant and the electric field vanishes.
Since the different densities vary with the altitude we can define a local screening length which depends on the altitude by
$$\lambda (z)=\left(4\pi l\left(Z^2\rho _{\mathrm{col}}(z)+\rho _+(z)+\rho _{-}(z)\right)\right)^{-1/2}$$
(21)
The two regions of the sediment are characterized by a very different behavior of this local screening length. In the lower region the colloidal density is so high that $`Z^2\rho _{\mathrm{col}}(z)\gg \rho _+(z)+\rho _{-}(z)`$. In that region the colloids dominate the screening length. On the other hand in the upper region the colloidal density is very small and salt now controls the screening length, which is then constant since at high altitudes the cation and anion densities are almost constant and equal to the mean salt concentration as seen in figure 1. It is interesting to notice that electric charges accumulate in the intermediate region around $`z^{*}`$ where there is a change of regime, in agreement with macroscopic electrostatics principles.
The preceding remark allows us to understand how the physical parameters (mean volume fraction of colloids, mass of the colloids, amount of added salt) will modify the altitude $`z^{*}`$ which separates the two regions. For example if we add more salt, $`z^{*}`$ will diminish since we reach sooner the regime where $`Z^2\rho _{\mathrm{col}}(z)<\rho _+(z)+\rho _{-}(z)`$. We have computed the density profiles in several other cases changing the values of the parameters in order to find the dependence of $`z^{*}`$ on these parameters. Our numerical results suggest that
$$z^{*}=\frac{c_1}{\sqrt{lC_{\mathrm{salt}}}}+a_2\frac{Z\sqrt{N_{\mathrm{col}}l_g}}{\sqrt{C_{\mathrm{salt}}}}$$
(22)
with $`c_1=0.15\pm 0.05`$ and $`a_2=1.0\pm 0.1`$. The preceding equation can be written in a more appealing way, introducing the screening length associated with the salt $`\lambda _{\mathrm{salt}}=(4\pi lC_{\mathrm{salt}})^{-1/2}`$ and the effective screening length associated with the colloids $`\lambda _{\mathrm{col}}^{\mathrm{eff}}=(4\pi lZ^2N_{\mathrm{col}}/l_g)^{-1/2}`$, as
$$z^{*}=\lambda _{\mathrm{salt}}\left(a_1+a_2\frac{l_g}{\lambda _{\mathrm{col}}^{\mathrm{eff}}}\right)$$
(23)
with $`a_1=\sqrt{4\pi }c_1=0.5\pm 0.2`$. We do not consider here boundary effects: this equation is only valid if $`z^{*}`$ is smaller than $`h`$. The finite height $`h`$ of the container will have the effect to “push” the negative cloud downwards if the parameters are such that $`z^{*}`$ approaches the top of the container. The same holds for the bottom of the container if $`z^{*}`$ is too small.
Another quantity of interest is the size $`\mathrm{\Delta }z^{*}`$ of the negative cloud, defined as the mid-height width of the negative peak in the charge density profile (see figure 1). Since we know that at altitude $`z^{*}`$, $`Z^2\rho _{\mathrm{col}}(z^{*})`$ is of the same order of magnitude as $`\rho _+(z^{*})+\rho _{-}(z^{*})=2C_{\mathrm{salt}}`$, the screening length at that altitude is proportional to $`\lambda _{\mathrm{salt}}`$. From basic electrostatics we know that the system will only tolerate charges over a length of the order of the screening length; we deduce that $`\mathrm{\Delta }z^{*}`$ is proportional to $`\lambda _{\mathrm{salt}}`$. In fact the numerical results also suggest a linear dependence of $`\mathrm{\Delta }z^{*}`$ on $`l_g`$:
$$\mathrm{\Delta }z^{*}=b_1l_g+b_2\lambda _{\mathrm{salt}}$$
(24)
with $`b_1=5.0\pm 0.5`$, $`b_2=0.7\pm 0.2`$, and the same restrictions concerning boundary effects as for the equation for $`z^{*}`$.
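For convenience, the two empirical scales can be evaluated directly from Eqs. (23) and (24); the default coefficients in the sketch below are the fitted values quoted above, and the sample parameters are illustrative.

```python
import numpy as np

def sediment_scales(l, C_salt, Z, N_col, lg, a1=0.5, a2=1.0, b1=5.0, b2=0.7):
    """Altitude z* of the negative cloud, Eq. (23), and its width, Eq. (24)."""
    lam_salt = 1.0 / np.sqrt(4.0 * np.pi * l * C_salt)
    lam_col_eff = 1.0 / np.sqrt(4.0 * np.pi * l * Z**2 * N_col / lg)
    z_star = lam_salt * (a1 + a2 * lg / lam_col_eff)
    dz_star = b1 * lg + b2 * lam_salt
    return z_star, dz_star

print(sediment_scales(l=1e-4, C_salt=0.05, Z=10, N_col=1.0, lg=1.0))
```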
### B The apparent mass
As we mentioned before, at high altitudes (larger than $`z^{*}`$) the electric potential is almost constant and the electric field vanishes. From this it is clear that the colloidal density will decay as $`\mathrm{exp}(-z/l_g)`$ and there is no apparent reduced mass. Nevertheless let us notice that in the regime where the electric potential is almost constant in our calculations the corresponding colloidal volume fraction is smaller than $`10^{-9}`$. Such volume fractions cannot be measured experimentally. In practice the optical methods used in allow one to measure only volume fractions larger than $`10^{-5}`$. A possible explanation for the apparent mass observed in the experiments is that for volume fractions higher than $`10^{-5}`$ the asymptotic regime where the electric field vanishes has not yet been reached: there is a residual electric field responsible for the observed reduced mass.
In order to test this hypothesis we made a log plot of several colloidal volume fraction profiles restricting the plot to volume fractions higher than $`10^{-5}`$ (Figure 2). We computed the slope of the wing of the colloidal density to find an effective gravitational length $`l_g^{*}`$ which is higher than the actual gravitational length $`l_g`$ as observed in the experiments. Furthermore when we plot the colloidal volume fraction profile and the corresponding electric field profile together (Figure 3) we notice that for volume fractions higher than $`10^{-5}`$ the electric field is not zero.
The different plots in Figure 2 were obtained using different salt concentrations, so the sediment height (which is proportional to $`z^{*}`$) varies. In this case we found that the apparent mass is a decreasing function of the height of the sediment, in agreement with the experiments. However the sediment height can be changed by changing other parameters like the mean colloidal density or their valency $`Z`$. Computing the apparent gravitational length $`l_g^{*}`$ as defined before for other series of samples, we found that the apparent gravitational length $`l_g^{*}`$ does not depend on $`Z`$ or the mean colloidal density. In our model the ratio $`l_g^{*}/l_g`$ is only a function of the salt density. Figure 4 shows the ratio $`l_g^{*}/l_g`$ as a function of the salt screening length $`\lambda _{\mathrm{salt}}`$.
## IV Comparison with previous approaches
As mentioned in the introduction the model presented above has motivated several studies both numerically and analytically . The purpose of this section is to compare our numerical results with the most advanced version of the theory presented in reference . This theoretical approach is based on a constrained minimization of the free energy functional (4) assuming an exponential ansatz for the density profiles:
$`\rho _{\mathrm{col}}(z)`$ $`=`$ $`{\displaystyle \frac{N_{\mathrm{col}}a}{l_g}}\mathrm{exp}(-az/l_g)`$ (26)
$`\rho _+(z)`$ $`=`$ $`C_{\mathrm{salt}}`$ (27)
$`\rho _{-}(z)`$ $`=`$ $`C_{\mathrm{salt}}+{\displaystyle \frac{ZN_{\mathrm{col}}b}{l_g}}\mathrm{exp}(-bz/l_g)`$ (28)
With this parametrization $`a=M^{*}/M`$ is the ratio of the reduced mass $`M^{*}`$ to the buoyant mass $`M`$ of a colloidal particle, $`C_{\mathrm{salt}}`$ denotes the fixed salt concentration, and $`N_{\mathrm{col}}`$ is the fixed overall colloidal density per unit area, i.e. $`\int _0^{+\infty }dz\rho _{\mathrm{col}}(z)\equiv N_{\mathrm{col}}`$. The system considered in reference is semi-infinite, $`z=0`$ corresponds to the bottom of the sample and $`h=+\infty `$. $`a`$ and $`b`$ are the two variational dimensionless parameters of the theory, and the equilibrium density profiles $`\rho _{\mathrm{col}}(z)`$ and $`\rho _{-}(z)`$ correspond to the values of $`a`$ and $`b`$ that minimize the free energy functional (4). Following reference , the minimization conditions read:
$`b(a)=a\left({\displaystyle \frac{2}{\sqrt{1+(1-a)/\gamma }}}-1\right)`$ (30)
$`Zb(a)\kappa I\left({\displaystyle \frac{Zb(a)}{\kappa }}\right)-\gamma +{\displaystyle \frac{4\gamma b^2(a)}{(a+b(a))^2}}`$ $`=`$ $`0`$ (31)
where $`\gamma =\pi Z^2lN_{\mathrm{col}}l_g`$ is the coupling parameter ($`l`$ is the Bjerrum length introduced previously), and $`\kappa =C_{\mathrm{salt}}l_g/N_{\mathrm{col}}`$ is the relative amount of added salt. Function $`I`$ is defined by $`I(x)=\int _0^xdy(\mathrm{ln}(1+y))/y`$.
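Equations (30) and (31) are straightforward to solve numerically. The Python sketch below uses our reading of Eq. (31) above (the relative signs should be checked against reference ) and brackets the root between $`a=1-3\gamma `$ and $`a=1`$, anticipating the strong-gravitational-coupling estimate discussed next.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def I(x):
    """I(x) = integral_0^x dy ln(1+y)/y, by quadrature."""
    return quad(lambda y: np.log1p(y) / y, 0.0, x)[0] if x != 0.0 else 0.0

def b_of_a(a, gamma):
    """Eq. (30)."""
    return a * (2.0 / np.sqrt(1.0 + (1.0 - a) / gamma) - 1.0)

def residual(a, gamma, kappa, Z):
    """Eq. (31) as read above; the relative signs are our reconstruction."""
    b = b_of_a(a, gamma)
    return Z * b * kappa * I(Z * b / kappa) - gamma + 4.0 * gamma * b**2 / (a + b) ** 2

gamma, kappa, Z = 0.01, 1.0, 100
a = brentq(residual, 1.0 - 3.0 * gamma, 1.0 - 1e-9, args=(gamma, kappa, Z))
print(a, 1.0 - 3.0 * gamma)       # both close to 0.97 in this regime
```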
Although equations (IV) require a numerical treatment, it is possible to extract asymptotic expressions when the coupling parameter $`\gamma `$ is vanishingly small (strong gravitational coupling regime) or large compared to unity (strong Coulomb coupling regime). Such an analysis is presented in reference , and we only reproduce here the main features. When gravitational coupling is strong, $`\gamma \ll 1`$, the reduced mass is given by $`a\approx 1-3\gamma `$ (for all values of the salinity $`\kappa `$) and therefore no mass reduction is observed in this regime (in agreement with the numerical results presented in reference ). On the contrary, in strong Coulomb coupling regimes, $`\gamma \gg 1`$, quite a large mass reduction is predicted, even in low salinity regimes $`\kappa \ll 1`$ (in such a situation the mass reduction is given by $`a\approx \left[1+\frac{\kappa }{2}\mathrm{ln}^2(\kappa (1+\frac{1}{Z}))\right]/(Z+1)+O(1/\gamma )`$). Our numerical results based on a free minimization of the functional (4) show that it is indeed not the case, even though we observe nice exponential asymptotic behaviours at high altitudes. To emphasize this point we present in table I data obtained mostly in the strong Coulomb coupling regime. Except for the first value $`\gamma =7.2\times 10^{-3}`$, which corresponds to the opposite strong gravitational coupling regime, the agreement between the theory presented in reference and the present free minimization is very poor. The effective mass predicted by the free minimization procedure corresponds to the actual mass within less than 0.1% in most situations, whereas the theory predicts effective masses that are only 2% of the actual mass!
In our opinion, the failure of the parametrization presented in reference is not due to the exponential ansatz itself, but to the constraint of global charge neutrality applied to the asymptotic regime. The theory presented in reference assumes that the profiles are exponential from the bottom of the sample to the top; as a result the free energy functional (4) has to be minimized with the constraint of global charge neutrality. However, the actual situation is quite different. If we refer to the experimental work done by Piazza et al. , the exponential regime is reached only above a macroscopic layer of strongly interacting colloidal particles. Data presented in the previous section (see e.g. figure 1) resulting from a free minimization of the functional also exhibit a dense macroscopic layer of colloidal particles in the bottom of the cell, and these profiles cannot be simply represented by a single exponential. This feature can be incorporated in the model suggested by Löwen, by splitting the cell into two parts. The upper part of the cell (above a given altitude “$`z_o`$”) corresponds to the asymptotic region where the profiles can be accurately represented by an exponential, whereas below $`z_o`$ the profiles are more complicated. As we can see, $`z_o`$ is defined by the condition that the profiles are exponential above it. There is then no upper bound on the value of $`z_o`$ and the asymptotic profiles should not depend on its precise value. As a result $`z_o`$ can be chosen arbitrarily large. As a consequence, the part of the fluid located below $`z_o`$ can be considered as a reservoir fixing the chemical potential of the ionic species $`\mu _{\mathrm{col}}`$ and $`\mu _{-}`$ ($`\mu _+`$ is irrelevant since the local density $`\rho _+(z)`$ is held fixed, and is thus not a variational parameter). Although the full system must be charge neutral, the asymptotic part above $`z_o`$ has no reason to be neutral. We are then led to minimize the free energy of the upper part of the cell in the grand-canonical ensemble. Assuming that parametrization (IV) is valid above $`z_o`$, the minimization equation associated with the colloidal particles reads:
$$\frac{\partial F[\rho _{\mathrm{col}},\rho _+,\rho _{-}]}{\partial a}=\mu _{\mathrm{col}}\frac{\partial N_{\mathrm{col}}}{\partial a}$$
(32)
where $`F[\rho _{\mathrm{col}},\rho _+,\rho _{-}]`$ is now the free energy functional above $`z_o`$:
$`F[\rho _{\mathrm{col}},\rho _+,\rho _{-}]`$ $`=`$ $`{\displaystyle \underset{i\{\mathrm{col},+,-\}}{\sum }}{\displaystyle \int _{z_o}^{+\infty }}k_\mathrm{B}T\rho _i(z)\left[\mathrm{ln}(\lambda _i^3\rho _i(z))-1\right]dz`$ (34)
$`+{\displaystyle \int _{z_o}^{+\infty }}Mgz\rho _{\mathrm{col}}(z)dz+{\displaystyle \frac{1}{2}}{\displaystyle \int _{z_o}^{+\infty }}e\rho (z)\mathrm{\Phi }(z)dz`$
and $`N_{\mathrm{col}}=\int _{z_o}^{+\infty }\rho _{\mathrm{col}}(z)dz`$ is the number of colloidal particles above $`z_o`$ per unit area. After some algebra, this minimization equation can be written in the form:
$`(a-1)`$ $`\left\{1+z_o^{*}+z_o^{*2}\right\}+a\left\{{\displaystyle \frac{\mu _{\mathrm{col}}}{k_BT}}-\mathrm{ln}\left(\lambda _{\mathrm{col}}^3{\displaystyle \frac{N_{\mathrm{col}}a}{l_g}}\right)\right\}z_o^{*}`$ (36)
$`+\gamma \left[e^{-z_o^{*}}(1+2z_o^{*})+4e^{-z_o^{*}b/a}\left({\displaystyle \frac{a^2}{(a+b)^2}}-{\displaystyle \frac{1}{2}}-{\displaystyle \frac{z_o^{*}(a^2+b^2)}{b(a+b)}}\right)\right]=0`$ (36)
where $`z_o^{*}\equiv az_o/l_g`$. We can easily check that when $`z_o^{*}=0`$ we recover the first equation of condition (IV). As $`z_o^{*}`$ can be chosen arbitrarily large in our model, we easily see that this equation implies $`a=1`$ (no mass reduction) and:
$$\frac{\mu _{\mathrm{col}}}{k_BT}=\mathrm{ln}\left(\lambda _{\mathrm{col}}^3\frac{N_{\mathrm{col}}}{l_g}\right)$$
(37)
This new version of the theory is consistent with our numerical results predicting no mass reduction.
## V Conclusion
A free minimization of the Poisson–Boltzmann functional used in references has been performed in this article, which leads us to conclude that this simple mean field theory does not predict any mass reduction, contrary to previous approximate minimizations of the same functional. These new results are fully consistent with the analytical results obtained in a two-dimensional case by Téllez . In particular, we observe the same condenser effect between the bottom of the container and the top of the dense region, resulting from a competition between electroneutrality and entropy of the microions. Data plotted in figure 2 give a possible explanation for the experimental results obtained by Piazza et al. . Although in the asymptotic regime we observe no mass reduction, this regime is attained for very low values of the colloidal packing fraction, below the experimental resolution ($`10^{-5}`$) in some situations. As a result, the residual electrostatic field can affect the profiles, resulting in an apparent effective mass.
# Limit Cycle Oscillations in Pacemaker Cells.
## I Introduction
In elementary electrostatics it is well known that the relation between the voltage and the charge of a capacitor is
$$q=Cv,$$
(1)
where $`v`$ is voltage, $`C`$ is capacitance and $`q`$ is charge.
$$\frac{dq}{dt}=C\frac{dv}{dt},$$
(2)
where $`\frac{dq}{dt}\equiv i`$ is a current, and the sign of $`v`$ is a matter of convention. In physiology this second relation (2) is used to describe how the membrane potential ($`v`$) changes when ions move across the cell membrane. Unfortunately we observe that this equation is also used in some models where one in addition keeps track of the charge ($`q`$) through the ion concentrations inside and outside the cell . In those models voltage and charge are believed to be independent dynamic variables: first one determines the voltage by integrating the membrane currents, then one determines the charge by integrating the same membrane currents.
The purpose of this article is to point out that integrating the membrane currents once is enough. Voltage and charge cannot simultaneously be independent dynamical variables in a model, simply because of (1).
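The redundancy can be made concrete with a toy model: if both $`v`$ and $`q`$ are integrated from the same current, the combination $`q-Cv`$ keeps its initial value forever, so every inconsistent choice of initial $`(v,q)`$ selects a different invariant set. A minimal Python sketch with a single leak current (all parameter values are illustrative and not taken from any published model):

```python
def integrate(v0, q0, C=1.0, g=0.1, E_rev=-80.0, dt=0.01, steps=5000):
    """Euler-integrate dv/dt = -i/C and dq/dt = -i for a single leak
    current i = g*(v - E_rev); q - C*v keeps its initial value."""
    v, q = v0, q0
    for _ in range(steps):
        i = g * (v - E_rev)
        v += dt * (-i / C)
        q += dt * (-i)
    return v, q

for v0, q0 in [(-60.0, -60.0), (-60.0, -50.0)]:
    v, q = integrate(v0, q0)
    print(f"v={v:.2f}  q={q:.2f}  q-Cv={q - v:.2f}")   # conserved = q0 - v0
```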
In order to visualize the drawbacks of treating voltage and charge as independent variables, we explore numerically the nonlinear dynamics of two different models describing the pacemaker activity of the rabbit sinoatrial node. The procedure is as follows:
1. we integrate numerically the equations of motion for a sufficiently long time to detect a steady state,
2. we change the initial conditions and repeat 1.
First we display results from the Wilders et al. model , a model that treats voltage and charge as independent variables. In that model it is thus possible to select an initial voltage and an initial charge independently. The dynamics of that model seems peculiar. An infinity of limit cycles is observed: each time we select new initial conditions a new limit cycle, corresponding to a new value of the constant of motion $`qCv`$, is found. This hampers the usefulness of the model.
Second we display results from a new model of Endresen et al. , where the voltage is not a dynamic variable. Here we cannot select an initial voltage independently of the initial charge, and only one limit cycle is observed.
## II Existing Models
The model of Wilders et al. of the pacemaker activity of the rabbit sinoatrial node serves as an excellent example of the many models where the membrane potential is thought to be independent of the intracellular and extracellular charge concentrations. In that model the equation of motion for the voltage is given by (2)
$$\frac{dv}{dt}=-\frac{1}{C}(i_{\mathrm{b},\mathrm{Ca}}+i_{\mathrm{b},\mathrm{K}}+i_{\mathrm{b},\mathrm{Na}}+i_{\mathrm{Ca},\mathrm{L}}+i_{\mathrm{Ca},\mathrm{T}}+i_\mathrm{f}+i_\mathrm{K}+i_{\mathrm{Na}}+i_{\mathrm{NaCa}}+i_{\mathrm{NaK}}).$$
(3)
There are fifteen dynamic variables in that model, the voltage $`v`$, the gating variables $`d_\mathrm{L}`$, $`d_\mathrm{T}`$, $`f_\mathrm{L}`$, $`f_\mathrm{T}`$, $`x`$, $`y`$, $`h`$, $`m`$, $`p`$, and the ionic concentrations $`[\mathrm{Ca}]_\mathrm{i}`$, $`[\mathrm{Ca}]_{\mathrm{rel}}`$, $`[\mathrm{Ca}]_{\mathrm{up}}`$, $`[\mathrm{K}]_\mathrm{i}`$, $`[\mathrm{Na}]_\mathrm{i}`$. We want to determine how the long term dynamics in that model is changed when we change the initial conditions. To keep matters simple, we only change one initial condition: the initial intracellular concentration of potassium ($`[\mathrm{K}]_\mathrm{i}`$); and we study the dynamics in two dimensions only: the phase space of $`v`$ and $`[\mathrm{K}]_\mathrm{i}`$.
Figure 1 displays the two–dimensional dynamics of the model with the initial conditions (dimensions skipped)
$$\begin{array}{ccc}v=-60.03\hfill & x=0.3294906\hfill & [\mathrm{Ca}]_\mathrm{i}=0.0000804\hfill \\ d_\mathrm{L}=0.0002914\hfill & y=0.1135163\hfill & [\mathrm{Ca}]_{\mathrm{rel}}=0.6093\hfill \\ d_\mathrm{T}=0.0021997\hfill & h=0.1608417\hfill & [\mathrm{Ca}]_{\mathrm{up}}=3.7916\hfill \\ f_\mathrm{L}=0.9973118\hfill & m=0.1025395\hfill & [\mathrm{K}]_\mathrm{i}=\mathrm{variable}\hfill \\ f_\mathrm{T}=0.1175934\hfill & p=0.2844889\hfill & [\mathrm{Na}]_\mathrm{i}=7.5,\hfill \end{array}$$
and $`[\mathrm{K}]_\mathrm{i}=140`$. In figure 1 (b) the point $`\mathrm{P}_1`$ denotes the $`v`$ and $`[\mathrm{K}]_\mathrm{i}`$ coordinates of the initial conditions, and the closed loop at the bottom is the limit cycle. The trajectory of the model is displayed in 1 (a) where we observe that the model spirals downwards from the point $`\mathrm{P}_1`$ to the limit cycle.
If the limit cycle in figure 1 (b) is unique it should be possible to reach it from another initial condition. Let us try an initial condition below the limit cycle, and investigate whether the model spirals up towards it. We change the initial concentration of potassium from $`140`$ to $`130`$, leaving the fourteen other initial conditions unchanged. The result is displayed in figure 2. The model does not spiral upwards to the limit cycle in figure 1; instead the model spirals downwards to a different limit cycle. We observed numerically a new limit cycle for each new initial value of $`[\mathrm{K}]_\mathrm{i}`$, implying the existence of an infinite number of limit cycles. The model’s fundamental flaw is clearly demonstrated.
## A New Model
In a new model of the pacemaker activity of the rabbit sinoatrial node, the membrane potential is determined by (1)
$$v=\frac{FV}{C}\left\{[\mathrm{K}]_\mathrm{i}-[\mathrm{K}]_\mathrm{e}+2([\mathrm{Ca}]_\mathrm{i}-[\mathrm{Ca}]_\mathrm{e})+[\mathrm{Na}]_\mathrm{i}-[\mathrm{Na}]_\mathrm{e}\right\},$$
(4)
where $`q=FV\left\{[\mathrm{K}]_\mathrm{i}-[\mathrm{K}]_\mathrm{e}+2([\mathrm{Ca}]_\mathrm{i}-[\mathrm{Ca}]_\mathrm{e})+[\mathrm{Na}]_\mathrm{i}-[\mathrm{Na}]_\mathrm{e}\right\}`$ is the charge difference, $`F`$ is Faraday’s constant and $`V`$ is cell volume. Here the ionic currents alter the concentrations which in turn alter the voltage, i.e. the physical quantities were calculated in the following order: current $`i`$ $`\to `$ charge $`q`$ $`\to `$ voltage $`v`$. The model has five dynamic variables, the gating variables $`x`$, $`h`$ and the ionic concentrations $`[\mathrm{Ca}]_\mathrm{i}`$, $`[\mathrm{K}]_\mathrm{i}`$, $`[\mathrm{Na}]_\mathrm{i}`$, and we use the initial conditions:
$$\begin{array}{c}x=0.9165\hfill \\ h=0.0000\hfill \\ [\mathrm{K}]_\mathrm{i}=\mathrm{variable}\hfill \\ [\mathrm{Ca}]_\mathrm{i}=0.004094141\hfill \\ [\mathrm{Na}]_\mathrm{i}=18.73322695.\hfill \end{array}$$
(5)
In this model we first notice that the initial value of $`[\mathrm{K}]_\mathrm{i}`$, due to (4), is not independent of the initial value of the voltage $`v`$. Thus changing $`[\mathrm{K}]_\mathrm{i}`$ changes $`v`$ as is always the case when charging a capacitor (1). Second we notice that a tiny change in $`[\mathrm{K}]_\mathrm{i}`$ corresponds to a large change in voltage $`v`$, since the constant $`FV/C`$ is large in most situations.
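To see the scale of this effect, Eq. (4) can be evaluated with rough numbers; the cell volume and membrane capacitance in the sketch below are our own illustrative assumptions, not the values of the model.

```python
F = 96485.0      # C/mol, Faraday constant
V = 1.0e-14      # m^3, assumed cell volume (about 10 pL)
C = 3.0e-11      # F, assumed membrane capacitance

def voltage(dK, dCa, dNa):
    """Eq. (4): v from concentration differences dX = [X]_i - [X]_e,
    given in mol/m^3 (= mMol/l)."""
    return (F * V / C) * (dK + 2.0 * dCa + dNa)

# FV/C is about 32 V per mMol/l here, so a 0.005 mMol/l shift in [K]_i
# alone already moves v by roughly 160 mV
print(voltage(0.005, 0.0, 0.0))
```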
In figure 3 we have displayed the simulation results from the model of Endresen et al. with three initial values of $`[\mathrm{K}]_\mathrm{i}`$ that differ only slightly (because the constant $`FV/C`$ is large): 130.650 (a), 130.655 (b), and 130.662 (c). In figure 3 (d) the three initial conditions $`\mathrm{P}_1`$, $`\mathrm{P}_2`$ and $`\mathrm{P}_3`$ all converge towards the same limit cycle. In an extensive numerical study we have not observed any physiological initial conditions that do not converge toward this limit cycle. In fact the same limit cycle can be reached when starting from the full equilibrium situation with equal intracellular and extracellular ionic concentrations .
## III Discussion
We have displayed numerical results from two types of mathematical models of the pacemaker activity of the rabbit sinoatrial node. The first type of model showed an infinite number of limit cycles, the second type of model a limit cycle that could be reached from many different initial conditions. In order to avoid the drawback with an infinite number of limit cycles seen in the first type of models, we suggest that one should not treat membrane voltage ($`v`$) as a dynamic variable. Instead one should calculate the voltage using (1), or at least select the initial conditions in agreement with (1) .
## Acknowledgments
Lars Petter Endresen was supported by a fellowship at NTNU, and has received support from The Research Council of Norway (Programme for Supercomputing) through a grant of computing time.
# Critical behavior of the conductivity of Si:P at the metal-insulator transition under uniaxial stress
## Abstract
We report new measurements of the electrical conductivity $`\sigma `$ of the canonical three-dimensional metal-insulator system Si:P under uniaxial stress $`S`$. The zero-temperature extrapolation of $`\sigma (S,T\to 0)\propto (S-S_c)^\mu `$ shows an unprecedentedly sharp onset of finite conductivity at $`S_c`$ with an exponent $`\mu =1`$. The value of $`\mu `$ differs significantly from that of earlier stress-tuning results. Our data show dynamical $`\sigma (S,T)`$ scaling on both metallic and insulating sides, viz. $`\sigma (S,T)=\sigma _c(T)F((S-S_c)/T^y)`$ where $`\sigma _c(T)`$ is the conductivity at the critical stress $`S_c`$. We find $`y=1/(z\nu )=0.34`$ where $`\nu `$ is the correlation-length exponent and $`z`$ the dynamic critical exponent.
Quantum phase transitions have become of steadily increasing interest in recent years . These continuous transitions ideally occur at temperature $`T=0`$ where quantum fluctuations play the role corresponding to thermal fluctuations in classical phase transitions. In particular, certain types of metal-insulator transitions (MIT) such as localization transitions have been studied extensively. Experimentally, the MIT may be driven by an external parameter $`t`$ such as carrier concentration $`N`$, uniaxial stress $`S`$, or electric or magnetic fields. Generally, electron localization might arise from disorder (Anderson transition) or from electron-electron (e-e) interactions (Mott-Hubbard transition) . In Nature, these two features go hand in hand. For instance, the disorder-induced MIT occurring as a function of doping in three-dimensional ($`d`$ = 3) semiconductors where the disorder stems from the statistical distribution of dopant atoms in the crystalline host, bears signatures of e-e interactions as evidenced from the transport properties in both metallic and insulating regimes . This makes a theoretical treatment of the critical behavior of a MIT exceedingly difficult. Even for purely disorder-induced transitions, the critical behavior of the zero-temperature dc conductivity, $`\sigma (0)\propto (t-t_c)^\mu `$ where $`t_c`$ is the critical value of $`t`$, is not well understood. Theoretically, $`\mu `$ is usually inferred from the correlation-length critical exponent $`\nu `$ via Wegner scaling $`\mu =\nu (d-2)`$. Numerical values of $`\nu `$ range between 1.3 and 1.6 .
Experimentally, it has long been suggested that the critical behavior of the conductivity falls into two classes: $`\mu \approx 0.5`$ for uncompensated semiconductors and $`\mu \approx 1`$ for compensated semiconductors and amorphous metals . However, there appears to be no clear physical distinction between these materials that would justify different universality classes. While many different materials were reported to show $`\mu \approx 1`$, the exponent $`\mu \approx 0.5`$ was largely based on the very elegant experiments by Paalanen and coworkers , where uniaxial stress was used to drive an initially insulating uncompensated Si:P sample metallic. This allows one to fine-tune the MIT since the stress can be changed continuously at low $`T`$, thus eliminating geometry errors incurred when different samples are employed in concentration tuning the MIT.
As always when dealing with critical phenomena, the range of critical behavior is a source of controversy. A few years ago we suggested limiting the critical concentration region in doped semiconductors on the metallic side of the MIT to samples where $`\sigma (T)`$ actually decreases with decreasing $`T`$, i.e. the sample becomes less conducting when approaching the MIT. In doped semiconductors, $`\sigma (T)`$ is nearly independent of $`T`$ at the crossover concentration $`N_{cr}`$, with a value $`\sigma _{cr}`$ of a few times $`10\mathrm{\Omega }^{-1}\mathrm{cm}^{-1}`$, e.g. $`\sigma _{cr}\approx 40\mathrm{\Omega }^{-1}\mathrm{cm}^{-1}`$ in Si:P. $`\sigma (T)`$ exhibits a negative temperature coefficient above $`N_{cr}`$ which is explained in terms of e-e interactions . Typically the critical region $`N_c<N<N_{cr}`$ is within 10$`\%`$ or less of the critical concentration $`N_c`$. This eliminates a large number of studies purporting to show $`\mu =0.5`$ where actually only a few samples in the critical regime were investigated. Even the recent study on transmutation-doped Ge:Ga where $`\mu =0.5`$ was suggested, presents only three metallic samples in the critical region below $`\sigma _{cr}\approx 10\mathrm{\Omega }^{-1}\mathrm{cm}^{-1}`$ . An earlier study of a large number of Si:P samples showed that the conductivity exponent $`\mu `$ changed from $`\mu =0.64`$ for $`N>N_{cr}\approx 1.1N_c`$ to 1.3 for $`N_c<N<N_{cr}`$ . On the other hand, sample inhomogeneities might affect the behavior very close to $`N_c`$. For this reason, data for stress-tuning with stress close to the critical value were discarded in the earlier study, leading to $`\mu =0.5`$ . It is therefore absolutely necessary to perform additional stress-tuning experiments on Si:P with finely tuned stress values including data on the insulating side to check for the critical behavior.
The notion of a quantum phase transition allows a second important aspect to be addressed, namely the interdependence of static and dynamic behavior. The dynamics is reflected in the finite-temperature behavior of critical quantities. Concerning the MIT in heavily-doped semiconductors, this point has not received much attention from the experimental side. A first attempt was made using the scaling function
$$\sigma (t,T)=(t-t_c)^\mu F\left(T/(t-t_c)^{z\nu }\right)$$
(1)
where $`z`$ is the dynamic critical exponent. This relation is often referred to as dynamic scaling. Approximate dynamic scaling was observed for Si:P on the metallic side of the MIT with $`t=N`$, yielding $`\mu =1.3`$ and $`z=2.4`$ . On the other hand, the stress-tuning data did not obey scaling . Very recently, Bogdanovich et al. demonstrated that conductivity data for Si:B under uniaxial stress obey the dynamic scaling very nicely on both metallic and insulating sides, yielding $`\mu =1.6`$ and $`z=2`$, while concentration tuning of $`\sigma (0)`$ on the same system had suggested $`\mu =0.63`$ . This large difference is not understood at present. In this situation, an examination of possible dynamic scaling of the canonical metal-insulator system Si:P appears of utmost importance in order to resolve the question of critical behavior and to appraise the possibly strongly different roles of stress and concentration in tuning the MIT.
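In practice, dynamic scaling of the form (1) is tested by tuning $`t_c`$, $`\mu `$ and $`z\nu `$ until all $`\sigma (t,T)`$ curves collapse onto a single function. A sketch of such a collapse-quality measure, with synthetic placeholder data on the metallic side:

```python
import numpy as np

def collapse_quality(t, T, sigma, t_c, mu, znu, bins=20):
    """Spread of sigma/|t-t_c|^mu within bins of T/|t-t_c|^(z*nu);
    a small value signals a good collapse onto Eq. (1)."""
    lx = np.log(T / np.abs(t - t_c) ** znu)
    ly = np.log(sigma / np.abs(t - t_c) ** mu)
    order = np.argsort(lx)
    return sum(c.var() for c in np.array_split(ly[order], bins) if len(c) > 1)

# synthetic metallic-side data obeying the scaling with mu = z*nu = 1
rng = np.random.default_rng(1)
t = rng.uniform(1.8, 2.6, 300)
T = rng.uniform(0.02, 0.5, 300)
sigma = np.abs(t - 1.75) * np.sqrt(1.0 + T / np.abs(t - 1.75))
print(collapse_quality(t, T, sigma, 1.75, 1.0, 1.0))   # near zero
print(collapse_quality(t, T, sigma, 1.75, 0.5, 1.0))   # clearly worse
```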
In this paper, we report on stress tuning of the MIT of Si:P by measuring the electrical conductivity down to 15 mK. By extrapolating to $`T=0`$ we find an unprecedentedly sharp onset of $`\sigma (t,0)`$ which allows us to extract $`\mu \approx 1`$ unambiguously. In addition, dynamic scaling yielding $`z\approx 3`$ is found. The value of $`\mu `$ is in reasonable agreement with that derived from concentration tuning. We further demonstrate that stress tuning and concentration tuning lead to very different $`T`$ dependences of $`\sigma `$.
The samples were taken from the same Si:P crystals which have been employed previously . Here we report on investigations of two crystals with $`N=3.21`$ and $`3.43\times 10^{18}\,\mathrm{cm}^{-3}`$, just below the critical concentration $`N_c=3.52\times 10^{18}\,\mathrm{cm}^{-3}`$ as determined for our samples. Similarly grown samples with an even higher concentration ($`N\approx 7\times 10^{19}\,\mathrm{cm}^{-3}`$) showed no sign of P clustering as investigated with scanning tunneling microscopy . The samples were cut to a size of approximately 15 x 0.8 x 0.9 mm<sup>3</sup> and contacted with four Au leads by spark welding, with the voltage leads approximately 6 mm apart. The sample was mounted in a <sup>4</sup>He-activated uniaxial pressure cell equipped with a piezoelectric force sensor. The stress was applied along the most elongated dimension of the sample. The stress was determined from the ratio of the area of the cell base plate and the sample cross section. Calibration of the cell showed a linear increase of force with pressure applied at room temperature to gaseous He, with no hysteresis between increasing and decreasing pressure. The cell, incorporating a thermal shield, was tightly screwed to the mixing chamber of a dilution refrigerator. During one run a thermometer was attached to the sample, showing that temperature deviations from the main thermometer directly mounted at the mixing chamber were less than 0.5 mK at the lowest measuring temperature of 15 mK. The conductance was measured with an LR 700 resistance bridge at 16 Hz.
Fig. 1 shows the electrical conductivity $`\sigma (T)`$ of sample 1 ($`N=3.21\times 10^{18}\,\mathrm{cm}^{-3}`$) for uniaxial pressures between 1 and 3.05 kbar. The data are plotted vs. $`\sqrt{T}`$, which is the $`T`$ dependence expected due to e-e interactions and indeed observed well above the MIT, $`\sigma (T)=\sigma _0+m\sqrt{T}`$ with $`m<0`$ . The smooth curves are in fact polygons connecting adjacent data points (see Fig. 2a for a set of actual data points). Under uniaxial stress between 1 and 2.57 kbar the $`\sigma (T)`$ curves evolve smoothly from insulating to metallic behavior with $`m>0`$, and $`\sigma (T)`$ becomes nearly independent of $`T`$ with a value $`\sigma _{cr}\approx 12\,\mathrm{\Omega }^{-1}\mathrm{cm}^{-1}`$ at approximately 2.7 kbar. For larger stress $`\sigma (T)`$ passes over a shallow maximum signaling the crossover to $`m<0`$, as observed with concentration tuning . It is interesting to note that $`\sigma _{cr}(S)\approx 0.3\sigma _{cr}(N)`$, thus severely limiting the critical region. Our data do not exhibit the precipitous drop of $`\sigma (T)`$ below approximately 40 mK for pressures closest to the MIT, in distinction to the earlier stress-tuning work on Si:P extending to 3 mK . Instead, our $`\sigma (T)`$ data exhibit a $`T`$ dependence that varies only gently with stress.
Closer inspection shows that the data near the MIT are actually better described by a $`T^{1/3}`$ dependence at low $`T`$, as can be seen from Fig. 2a for a few selected pressures in the immediate vicinity of the MIT. $`\sigma (0)`$ obtained from the $`T^{1/3}`$ extrapolation to $`T=0`$ is shown in Fig. 2b, together with data for sample 2 ($`N=3.43\times 10^{18}\,\mathrm{cm}^{-3}`$) (see Fig. 3 for $`\sigma (T)`$ of this sample for a few representative uniaxial pressures). $`\sigma (0)`$ is plotted linearly vs. $`S`$, yielding $`S_c`$ = 1.75 kbar for sample 1 and 1.54 kbar for sample 2. Note that the critical stress $`S_c`$ is quite well defined, as $`\sigma (0)`$ breaks away roughly linearly from zero within less than 0.1 kbar. Applying our criterion for the critical region, the analysis should be limited to data with $`\sigma <\sigma _{cr}\approx 12\,\mathrm{\Omega }^{-1}\mathrm{cm}^{-1}`$. In this range the critical exponent $`\mu `$ is 0.96 and 1.09 for samples 1 and 2, respectively. $`\mu \approx 1`$ is also found when the more conventional $`\sqrt{T}`$ extrapolation is employed, as can be inferred from Fig. 1. This behavior contrasts with the earlier stress-tuning data reproduced in the inset of Fig. 2b, where appreciable rounding close to the transition is visible as compared to our samples when plotted against $`S-S_c`$ (see also ). However, those $`\sigma (0)`$ data between 4 and $`16\,\mathrm{\Omega }^{-1}\mathrm{cm}^{-1}`$ are compatible with a linear dependence on uniaxial stress.
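To make the two fitting steps explicit, the following minimal Python sketch (with synthetic placeholder numbers, not our measured data) first extrapolates $`\sigma (T)=\sigma (0)+mT^{1/3}`$ to $`T=0`$ and then extracts $`\mu `$ from a power-law fit of $`\sigma (0)`$ vs. $`S-S_c`$ on the metallic side:

```python
import numpy as np

def sigma0_from_T13(T, sigma):
    """Extrapolate sigma(T) to T=0 assuming sigma = sigma0 + m*T**(1/3)."""
    m, sigma0 = np.polyfit(T**(1.0 / 3.0), sigma, 1)  # linear fit in T^(1/3)
    return sigma0, m

def fit_mu(S, sigma0, S_c):
    """Fit sigma(0) ~ (S - S_c)^mu for S > S_c via a log-log linear fit."""
    metallic = S > S_c
    mu, _ = np.polyfit(np.log(S[metallic] - S_c), np.log(sigma0[metallic]), 1)
    return mu

# Synthetic placeholder data constructed with mu = 1; the fit recovers it.
S = np.linspace(1.8, 2.6, 9)
S_c = 1.75
sigma0 = 14.0 * (S - S_c)
print(f"fitted mu = {fit_mu(S, sigma0, S_c):.3f}")   # -> 1.000
```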
Fig. 3 shows $`\sigma (T)`$ of sample 2 for a range of selected uniaxial pressures, again applied along the same direction as for sample 1. The overall behavior is very similar to that of sample 1. The fact that $`\sigma _{cr}`$ is the same for both samples is nevertheless surprising given the difference in $`S_c`$. It has been suggested that tuning with $`S`$ or $`N`$ should yield the same critical exponents . The decrease of $`N_c`$ with uniaxial stress is attributed to the admixture of the more extended 1$`s`$(E) and 1$`s`$(T<sub>2</sub>) excited states to the 1$`s`$(A<sub>1</sub>) ground state of the valley-orbit split sixfold donor 1$`s`$ multiplet . However, comparison of $`\sigma (T)`$ for various $`S`$ and $`N`$ (Fig. 3) reveals that stress and concentration tuning lead to strikingly different $`T`$ dependences of the conductivity in the vicinity of the MIT. As the exact origin of the $`\sigma (T)`$ behavior close to the MIT is unknown, we cannot offer an explanation for the different behavior which, of course, must arise from the change of donor wave functions under uniaxial stress. In this respect, experiments on similar samples with stress applied along different directions, leading to different types of mixing among the states of the 1$`s`$ multiplet, will be helpful. The fact that stress was applied along different directions in the previous study (\[12$`\overline{3}`$\]) and in the present one may well be one reason for the different behavior of $`\sigma (T)`$.
We finally turn to the scaling behavior of $`\sigma `$ at finite temperatures using the data of sample 1. We employ the scaling relation
$$\sigma (t,T)=\sigma _c(T)\mathcal{F}^{}\left((t-t_c)/T^y\right)$$
(2)
where $`\sigma _c(T)=\sigma (t_c,T)`$ is the conductivity at the critical value $`t_c`$ of the parameter $`t`$ driving the MIT. This scaling relation is equivalent to Eq. (1); both are derived from the general scaling relation
$$\sigma (t,T)=b^{-(d-2)}\mathcal{F}^{\prime \prime }\left((t-t_c)b^{1/\nu },b^zT\right)$$
(3)
where $`b`$ is a scaling parameter. If the leading term of $`\sigma _c(T)`$ is proportional to $`T^x`$, one obtains $`x=\mu /\nu z`$ and $`y=1/\nu z`$ from a scaling plot. Figs. 1 and 2a show that $`\sigma `$ for $`S`$ close to $`S_c`$ does not exhibit a simple power-law $`T`$ dependence over the whole $`T`$ range investigated. We therefore determine $`\sigma _c(T)`$ self-consistently in the following manner. The critical stress for sample 1 is taken from the above analysis as $`S_c=1.75`$ kbar. In order to obtain $`\sigma _c(T)`$, we interpolate linearly between the two $`\sigma (T)`$ curves for $`S=1.72`$ and 1.77 kbar. The resultant $`\sigma _c(T)`$ is then fitted by the function $`\sigma _c(T)=aT^x(1+dT^w)`$ with $`a=6.01\,\mathrm{\Omega }^{-1}\mathrm{cm}^{-1},x=0.34,d=-0.202,w=0.863`$, where $`T`$ is expressed in K. Here the $`dT^w`$ term represents a correction to the critical dynamics. This $`\sigma _c(T)`$ curve is shown as a dashed line in Fig. 2a. All $`\sigma (S,T)`$ curves with 1.00 kbar $`<S<`$ 2.34 kbar up to 800 mK are then used for the scaling analysis according to Eq.(2). The same procedure was repeated for other choices of $`\sigma _c(T)`$ between the two measured $`\sigma (T)`$ curves embracing the critical stress, with clearly less satisfactory results.
Fig. 4 shows the resulting scaling plot of $`\sigma (S,T)/\sigma _c(T)`$ vs. $`|S-S_c|/(S_cT^y)`$. The data are seen to collapse on a single branch each for the metallic and insulating side, respectively. The best scaling, as shown, is achieved for $`y=1/z\nu =0.34`$. Together with $`\mu =1.0`$ as obtained from Fig. 1 and assuming Wegner scaling $`\nu =\mu `$ for $`d=3`$, we find $`z=2.94`$, which is indeed consistent with $`\sigma _c\propto T^{1/z}\approx T^{1/3}`$ for $`T\to 0`$ (see Fig. 2a). Alternatively, we may use Eq.(1), plotting $`\sigma (S,T)/|S-S_c|^\mu `$ vs. $`T/|S-S_c|^{z\nu }`$ (not shown) with the three parameters $`S_c,\mu =\nu `$ and $`z`$. The best data collapse is found for $`\mu =1.0\pm 0.1`$ and $`z=2.94\pm 0.3`$, in very good agreement with the values obtained from Fig. 4. Additionally, we note the broad consistency with the earlier concentration-tuning data where $`\mu =1.3`$ and $`z=2.4`$ were inferred . We estimate the error of our combined analysis of the present stress-tuned data at 10% for $`\mu `$ and $`z`$. The critical stress is determined with an accuracy of better than 0.1 kbar. It is important to note that either $`\sigma (0)`$ scaling (Fig. 2b) or dynamic scaling (Fig. 4), when taken by itself, may lead to a rather large error in $`\mu `$ and/or $`z`$, just because of the ambiguity of determining the critical region. However, the consistent determination of exponents from the combined scaling lends confidence to the values reported here.
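Schematically, the exponent $`y`$ in Eq. (2) can be located by scanning $`y`$ and minimizing the spread of $`\sigma /\sigma _c`$ at fixed values of the scaling variable; the Python sketch below (operating on synthetic data, not the actual measurements) illustrates the idea:

```python
import numpy as np

def collapse_spread(S, T, sigma, sigma_c, S_c, y, nbins=40):
    """Collapse quality: sort the data by u = (S-S_c)/T**y and sum the
    variance of sigma/sigma_c within bins of nearby u (smaller = better)."""
    u = (S - S_c) / T**y
    r = (sigma / sigma_c)[np.argsort(u)]
    return sum(np.var(chunk) for chunk in np.array_split(r, nbins)
               if len(chunk) > 1)

def best_y(S, T, sigma, sigma_c, S_c, ys):
    return min(ys, key=lambda y: collapse_spread(S, T, sigma, sigma_c, S_c, y))

# Synthetic demo: data generated with y_true = 0.34 should give back ~0.34.
rng = np.random.default_rng(0)
S, T = rng.uniform(1.0, 2.34, 400), rng.uniform(0.02, 0.8, 400)
S_c, y_true = 1.75, 0.34
sigma_c = 6.01 * T**0.34
sigma = sigma_c * (1.0 + 0.8 * np.tanh((S - S_c) / T**y_true))
print(best_y(S, T, sigma, sigma_c, S_c, np.linspace(0.2, 0.5, 31)))
```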
The above procedure to determine the conductivity at the critical stress is necessary because $`\sigma _c`$ does not obey a simple power-law $`T`$ dependence over the whole $`T`$ range. Above 100 mK the correction term $`dT^w`$ (with $`d<0`$) comes into play. This is at variance with Si:B where $`\sigma _c\propto T^{1/2}`$ was observed in the whole range from 60 to 800 mK . On the other hand, a $`T^{1/3}`$ dependence of $`\sigma `$ in the vicinity of the MIT has been reported for transmutation-doped Ge:Ga over a large $`T`$ range . Certainly, the finite-$`T`$ behavior near the quantum critical point needs closer theoretical scrutiny, in particular since the dynamic scaling is observed up to 800 mK when the correction term to $`\sigma _c(T)`$ is included. We remark that a simple algebraic $`T`$ dependence $`\sigma _c=aT^x`$, which yields good dynamic scaling for Si:B , clearly leads to less satisfactory scaling in Si:P for any choice of $`x`$.
In conclusion, we have demonstrated dynamic scaling of stress-tuned Si:P at the metal-insulator transition. The conductivity exponent $`\mu \approx 1`$ is close to the exponents derived earlier from concentration tuning. However, upon application of stress, the critical range is narrowed to conductivities below 12 $`\mathrm{\Omega }^{-1}\mathrm{cm}^{-1}`$. Therefore, it is the absence of appreciable rounding effects in our samples close to the MIT that allows us to determine $`\mu \approx 1`$ reliably, thus resolving the conductivity exponent puzzle. The temperature dependence of the conductivity starting from the same $`\sigma (0)`$ value for $`T=0`$ is distinctly different for samples under zero stress and under stress. It is predicted that in the region between 15 and 40 $`\mathrm{\Omega }^{-1}\mathrm{cm}^{-1}`$ initially insulating stress-tuned samples will exhibit a negative slope of $`\sigma (T)`$, while samples under zero stress in this range are known to have a positive slope of $`\sigma (T)`$. In view of these differences away from the quantum critical point, the similarity of the asymptotic dynamic scaling behavior is particularly noteworthy. A more detailed theoretical treatment, which may eventually also account for the effective exponent $`\mu \approx 0.5`$ for samples above the crossover conductivity $`\sigma _{cr}`$, is highly desirable.
We thank W. Zulehner, Wacker Siltronic AG, Burghausen, Germany for the samples, and M. P. Sarachik and P. Wölfle for useful discussions. This work was supported by the Deutsche Forschungsgemeinschaft through SFB 195.
# Optical properties of an interacting large polaron gas
## I Introduction.
The infrared absorption of large polarons is a well studied problem. However, most of the studies have focused on the very low density regime (non-interacting polaron limit), which is suitable for not heavily doped polar semiconductors and ionic insulators. In this regime the most accurate approach has been discussed by Devreese et al. starting from the expression of the impedance function derived by Feynman. More recently the interest in this problem has been renewed in connection with the infrared absorption of cuprates in the normal phase. In these materials the interaction among polarons can be important. It is, therefore, of interest to consider the effect of the long-range Coulomb interaction on the large polaron absorption. To our knowledge this effect has not yet been considered within the Devreese-Feynman approach to the polaron absorption.
The aim of this paper is to calculate the conductivity $`\sigma (\omega )`$ of a large polaron gas within the random phase approximation (RPA) by using, for the lowest order polarization insertion, the propagator of the Feynman polaron model. We find that it is possible to identify in $`\sigma (\omega )`$ different contributions with features common to some structures of the infrared spectra of cuprates. In particular we observe a significant displacement of spectral weight from higher to lower frequencies when the polaron density increases.
## II The model.
We consider a system made of band electrons in the effective mass approximation interacting with non-dispersive LO phonons and repelling each other through the Coulomb potential screened by the background high-frequency dielectric constant $`ϵ_{\infty }`$. The electron-phonon (e-ph) interaction is assumed to be:
$$H_{e\text{-}ph}=\sum _{\vec{p},\vec{q},\sigma }M_q\,c_{\vec{p}+\vec{q}\,\sigma }^{\dagger }c_{\vec{p}\sigma }\left(a_{\vec{q}}+a_{-\vec{q}}^{\dagger }\right)$$
(1)
where
$$M_q=i\hbar \omega _0\frac{R_p^{1/2}}{q}\left(\frac{4\pi \alpha }{V}\right)^{1/2}$$
(2)
is the Fröhlich e-ph matrix element.
In the above expressions $`c_{\vec{p}\sigma }`$ ($`c_{\vec{p}\sigma }^{\dagger }`$) and $`a_{\vec{q}}`$ ($`a_{\vec{q}}^{\dagger }`$) indicate, respectively, the annihilation (creation) operators for electrons and phonons, $`V`$ is the system’s volume, $`\alpha `$ is the Fröhlich e-ph coupling constant, $`R_p=\left(\hbar /\left(2m\omega _0\right)\right)^{1/2}`$ is the polaron radius and $`\omega _0`$ is the LO phonon frequency.
The normal state conductivity of this system, $`\sigma (\omega )`$, is related to the retarded form of the current-current correlation function $`\mathrm{\Pi }(\omega )`$ by the Kubo formula:
$$\sigma (\omega )=\frac{i}{\omega }\left(\frac{ne^2}{m}+\mathrm{\Pi }(\omega )\right)$$
(3)
where $`n`$ is the density of charge carriers. In order to estimate the conductivity we follow the approach suggested by Feynman and adopted with success by Devreese et al. for the optical absorption of non-interacting large polarons. Within this approach the real part of the conductivity is given by:
$$Re\left[\sigma (\omega )\right]=G(\omega )\frac{Im\mathrm{\Sigma }(\omega )}{\left(\omega -Re\mathrm{\Sigma }(\omega )\right)^2+\left(Im\mathrm{\Sigma }(\omega )\right)^2}$$
(4)
where $`G(\omega )=ne^2/m`$ and $`\mathrm{\Sigma }(\omega )=\mathrm{\Pi }^0(\omega )\omega /G(\omega )`$ (for the definition of $`\mathrm{\Pi }^0(\omega )`$ see below). A justification of eq.(4) has been given by Feynman, while a more general derivation can be obtained by making use of the Mori-Zwanzig projection operator technique, as shown by Devreese et al. and Maldague. It is worth noting that the sum rule for the real part of the conductivity:
$$\int _0^{\infty }d\omega \,Re\left[\sigma (\omega )\right]=\frac{\pi ne^2}{2m}$$
(5)
is satisfied by using eq.(4).
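As a quick illustration of why Eq. (5) is a useful consistency check, it can be verified numerically for any model conductivity; a minimal sketch with a Drude form, in units $`n=e=m=1`$ (our illustrative choice, not a parameter of the model above), is:

```python
import numpy as np
from scipy.integrate import quad

# Drude model Re[sigma](w) = gamma/(w^2 + gamma^2) in units n = e = m = 1;
# its integral over [0, inf) equals pi/2, saturating the sum rule Eq. (5).
gamma = 0.3
integral, _ = quad(lambda w: gamma / (w**2 + gamma**2), 0.0, np.inf)
print(integral, np.pi / 2.0)   # both ~= 1.5708
```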
To evaluate $`\mathrm{\Pi }^0(\omega )`$ we use the set of diagrams shown in Fig.1, i.e., we consider a R.P.A.-like approximation for polarons interacting with each other through the Coulomb potential screened by $`ϵ_{\infty }`$ ($`v_q^{\infty }`$). Following Mahan, we write:
$$\mathrm{\Pi }^0(\omega )=\frac{e^2}{m^2\omega ^2}\int \frac{d^3q}{\left(2\pi \right)^3}q_\mu ^2\left[N(\vec{q},\omega )-N(\vec{q},0)\right]$$
(6)
where
$$N(\vec{q},\omega )=\int _{-\infty }^{+\infty }\frac{du}{\pi }\int _{-\infty }^{+\infty }\frac{ds}{\pi }\frac{1}{v_q^{\infty }}Im\left[\frac{1}{ϵ(\vec{q},u)}\right]Im\left[\overline{M_q}^2D(\vec{q},s)\right]\left(\frac{n_B(s)-n_B(u)}{u-s+\omega +i\delta }\right).$$
(7)
In eqs.(6) and (7) $`\mu =x,y`$ or $`z`$, $`ϵ(\vec{q},\omega )`$ is the dielectric function, $`\overline{M_{\vec{q}}}`$ is the renormalized e-ph matrix element, $`D(\vec{q},\omega )`$ is the phonon Green function and $`n_B(\omega )`$ is the boson occupation number. In our model the dielectric function of the system is assumed to be (see Fig.1):
$$ϵ(\vec{q},\omega )=1-v_q^{\infty }P(\vec{q},\omega )$$
(8)
where $`P(\vec{q},\omega )`$, the lowest order polarization insertion, can be expressed in terms of the polaron spectral weight function $`A(\vec{q},\omega )`$:
$$P(\vec{q},\omega )=\frac{2}{\hbar ^2}\int \frac{d^3p}{\left(2\pi \right)^3}\int _{-\infty }^{+\infty }\frac{du}{2\pi }\int _{-\infty }^{+\infty }\frac{ds}{2\pi }A(\vec{p},s)A(\vec{p}+\vec{q},u)\left(\frac{n_F(s)-n_F(u)}{\omega +s-u+i\delta }\right).$$
(9)
Here $`n_F(\omega )`$ is the fermion occupation number. For simplicity we shall use in Eq.(7) the unperturbed phonon Green function $`D^0(\omega )=2\omega _0/\left(\omega ^2-\omega _0^2+i\delta \right)`$ and we shall ignore the frequency dependence of the dielectric function in the expression of the renormalized e-ph matrix element, assuming:
$$\overline{M_{\vec{q}}}=\frac{M_q}{ϵ(\vec{q},0)}.$$
(10)
Until this point we have not specified the polaron spectral weight function that appears in the Eq.(9). It is well known that, from a theoretical point of view, the problem of an electron in a polar crystal which interacts with the longitudinal optical modes of lattice vibrations has not been solved exactly. However, it is universally recognized that of all published theories that of Feynman, which uses a variational method based on path integrals, gives the best available results in the entire range of the coupling constant $`\alpha `$. For this reason we choose the spectral weight function of the Feynman polaron model:
$$A(q,\omega )=\sum _{l=-\infty }^{+\infty }\delta \left[\omega -\left(\frac{\hbar ^2q^2}{2M_T}+lv\right)\right]\mathrm{exp}\left[\frac{l\beta v}{2}\right]\mathrm{exp}\left[-\frac{q^2R}{M_T}\left(2n_B(v)+1\right)\right]2\pi I_l(z)$$
(11)
where $`R=\left(\frac{v^2}{w^2}-1\right)/v`$, $`M_T=v^2/w^2`$, $`z=\frac{2q^2R}{M_T}\left[n_B(v)\left(n_B(v)+1\right)\right]^{1/2}`$ and $`I_l`$ are the Bessel functions of complex argument. The dimensionless parameters $`v`$ and $`w`$ are related to the mass and the elastic constant of the model, in which the electron is coupled via a harmonic force to a fictitious second particle simulating the phonon degrees of freedom. The values of $`v`$ and $`w`$ can be obtained by the variational approach described by Feynman and Schultz. In particular, at $`T=0`$, $`A(q,\omega )`$ takes the form:
$$A(q,\omega )=2\pi \sum _{l=0}^{\infty }\delta \left[\omega -\left(\frac{\hbar ^2q^2}{2M_T}+lv\right)\right]e^{-\frac{q^2R}{M_T}}\left(\frac{q^2R}{M_T}\right)^l/l!.$$
(12)
The spectral weight follows a Poissonian distribution and is maximum for an excitation involving a number $`l`$ of phonons of order $`l\approx \frac{q^2R}{M_T}`$. We note that, in all the numerical results which will be shown in this paper, the terms with $`-6\le l\le 6`$ in Eq.(11) and the first six terms in Eq.(12) are enough to ensure good convergence of the real and imaginary parts of the correlation function $`\mathrm{\Pi }^0(\omega )`$ in the frequency range of interest ($`\omega \le 10\omega _0`$).
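The quoted truncation is easy to justify from the Poissonian weights in Eq. (12); a small Python sketch (with illustrative values of $`\lambda =q^2R/M_T`$, not values taken from the paper) shows how much total spectral weight the first few phonon terms carry:

```python
import math

def poisson_weights(lam, lmax):
    """Spectral weights exp(-lam)*lam**l/l! of Eq. (12) for l = 0..lmax."""
    return [math.exp(-lam) * lam**l / math.factorial(l) for l in range(lmax + 1)]

# lam = q^2 R / M_T (illustrative values): weight captured by l <= 6
for lam in (0.5, 1.0, 2.0):
    print(f"lam = {lam}: sum over l<=6 = {sum(poisson_weights(lam, 6)):.6f}")
```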
We want to note that the approach proposed in this paper restores, when $`n\to 0`$, the well known results for the optical absorption of a single polaron and allows us to introduce, within the R.P.A. approximation, the effects of the polaron-polaron interaction. This approach is expected to be valid when the formation of bipolaron states is not favored ($`\eta =ϵ_{\infty }/ϵ_0>0.01`$, or $`\eta <0.01`$ and $`\alpha `$ smaller than about 6–7). In fact, in this situation the overlap between the wells of two particles can be neglected and the system is well approximated by an interacting large polaron gas. Of course the proposed approach is valid in the density range where we can exclude Wigner-like localization. As discussed by Quemerais et al. , for coupling constants not larger than $`\alpha =6`$ and density $`n\lesssim 10^{18}\,\mathrm{cm}^{-3}`$ our approach is justified.
## III The results
Fig.2a reports the optical absorption per polaron as a function of the frequency for different values of the charge carrier density at $`T=0`$. Three different structures appear in the normal state conductivity: a) a zero frequency delta function contribution; b) a strong band starting at $`\omega =\omega _0`$ that is the overlap of two components: a contribution from the intraband process and a peak due to the polaron transition from the ground state to the first relaxed excited state; c) a smaller band at higher frequency due to the Franck-Condon transition of the polaron. The identification and characterization of these structures stem from the detailed analysis of the polaron absorption in the case of non-interacting large polarons. Increasing the charge carrier density, we find that the large polaronic band due to the excitation involving the relaxed states (the b contribution) tends to move towards lower frequencies while its intensity decreases in favor of the rise of a Drude-like term around $`\omega =0`$ (see Fig.2a and Fig.2b).
In particular, at $`T=0`$ the Drude weight D, i.e., the coefficient of the zero frequency delta function contribution, is determined making use of the sum rule for the real part of the conductivity (see Eq.(5)). This allows an estimate of the effective polaron mass $`M_{eff}(n)`$ as a function of the electron density, since the Drude weight D is related to $`M_{eff}(n)`$ by the following expression:
$$D=\frac{\pi ne^2}{2M_{eff}(n)}.$$
(13)
In table 1 the values of $`M_{eff}(n)`$ for $`\alpha =5`$ and $`\alpha =6`$ are reported. It is evident that, increasing the charge carrier density, the screening of the e-ph interaction increases and the polaron mass is reduced, tending to the band mass for large values of $`n`$. This behavior is a confirmation of the trend obtained at weaker coupling in a model that includes polaron screening but neglects exchange effects.
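In practice, the Drude weight is the part of the total sum-rule weight not exhausted by the finite-frequency absorption, and Eq. (13) then gives $`M_{eff}`$; a minimal sketch of this bookkeeping (with an arbitrary toy value for the regular spectral weight, not one computed from the model) is:

```python
import numpy as np

def effective_mass(n, regular_weight, m=1.0, e=1.0):
    """Eq. (13) inverted: D = pi*n*e^2/(2m) - regular_weight, then
    M_eff = pi*n*e^2/(2D), where regular_weight is the integral of the
    finite-frequency part of Re[sigma]."""
    D = np.pi * n * e**2 / (2.0 * m) - regular_weight
    return np.pi * n * e**2 / (2.0 * D)

# Toy example: if 60% of the total weight sits at finite frequency,
# the mass enhancement is 1/(1 - 0.6) = 2.5 band masses.
n = 1.0
print(effective_mass(n, regular_weight=0.6 * np.pi * n / 2.0))   # -> 2.5
```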
In Fig.3a we plot the normal state conductivity for larger electron densities at $`T/T_D=0.5`$, where $`T_D=\hbar \omega _0/k_B`$ and $`k_B`$ is the Boltzmann constant. Increasing the charge density, we find that the optical absorption, in agreement with experimental data in the metallic phase of the cuprates , is more and more controlled by the Drude-like term and, only for very high electron densities, no signature of the b) contribution is left.
The behavior of the polaron absorption with temperature is also of interest. As shown in Fig.3b, with increasing $`T`$ there is a transfer of spectral weight towards higher frequencies. Moreover, the intensity of the contributions b) and c) increases with decreasing temperature, saturating at $`T/T_D\approx 0.5`$.
## IV Discussion
The effects shown are very intriguing in connection with recent measurements of infrared absorption in cuprates. In fact several experiments on the infrared response of the cuprates have pointed out that the optical absorption exhibits features which are common to many families of high-$`T_c`$ superconductors.
In particular, the infrared normal state conductivity does not diminish with frequency as rapidly as one expects from a simple Drude picture. This behavior has been interpreted in terms of two different models: the anomalous Drude model and the multi-component or Drude-Lorentz model. In the former approach, the infrared conductivity has been attributed to a contribution from free charge carriers with an $`\omega `$-dependent scattering rate (arising, for example, from strong interactions with spin waves). In the Drude-Lorentz approach, instead, one assumes that the absorption is the result of the superposition of different structures which can be identified in the conductivity spectra: 1) a Drude-like peak centered at zero frequency; 2) infrared-active vibrational modes (IRAV); 3) a broad excitation in the infrared band which is constituted by two different components: one temperature-independent around $`0.5\,\mathrm{eV}`$ ($`4000\,\mathrm{cm}^{-1}`$) and the other strongly dependent on temperature with a peak around $`0.1\,\mathrm{eV}`$ ($`1000\,\mathrm{cm}^{-1}`$) (d band); 4) the charge transfer band (CT) in the visible range which is attributed to the charge transfer transitions between $`O_{2p}`$ and $`Cu_{3d}`$ states. The Drude, d band and IRAV contributions depend on the charge density injected by the doping, and a significant transfer of spectral weight from the d band and IRAV contributions to the Drude peak is observed with increasing doping. The d band and the IRAV, which appear when extra charges are injected into the lattice of a cuprate, have been assigned a polaronic origin both in electron-doped and hole-doped compounds. However, there is no general consensus on the type of polarons involved in the absorption. In particular, the $`1000\,\mathrm{cm}^{-1}`$ feature has been attributed to optical transitions involving small polarons, large polarons or both types of polarons. In any case, as first noted by Bi and Eklund in $`La_{2-x}Sr_xCuO_{4+\delta }`$, polarons tend to be small in the dilute limit (small $`x`$) whereas in the limit of large $`x`$ polarons tend to expand, i.e., the polaron size increases with $`x`$, tracking a dopant-induced transformation to metallic conductivity.
Finally we note that, recently, Calvani et al. have measured the infrared absorption of four different perovskite oxides, observing opposite behaviours of the cuprates and the non-cuprates. In particular, in $`La_2NiO_{4+y}`$ and in $`SrMnO_{4+y}`$, in which there is strong evidence of the presence of small polarons, the minimum of the d band deepens at low $`T`$ by a transfer of spectral weight towards higher energies. On the contrary, in the slightly and heavily doped cuprates at low temperatures the minimum of the d band tends to be filled by a transfer of spectral weight in the opposite direction (see also F. Li et al. ), as happens for the large polaron system described above.
The above scenario shows that the normal state conductivity of an interacting large polaron system presents features common to the optical absorption of the cuprates. Even if the similarities found are not conclusive in assigning to large polarons the main role in the absorption of cuprates - more accurate calculations are needed to attempt a quantitative comparison between theory and experiments - we believe that the inclusion of long-range Coulomb interactions among polarons is essential in understanding whether infrared absorption in cuprates can be assigned to large polarons.
## V Conclusions
In this paper we calculated the normal state conductivity of a large polaron system including the electrostatic polaron-polaron interaction within the R.P.A. approximation. The approach recovers the optical absorption of non-interacting Feynman polarons and allows us to introduce the many-polaron effects in a perturbative way. With increasing charge carrier density, we have found evidence of a transfer of spectral weight of the optical absorption towards lower frequencies: the Drude-like contribution increases as well as the screening effects of the e-ph interaction. Moreover, with increasing temperature, there is a transfer of spectral weight of the infrared absorption towards higher frequencies. Both these behaviours are observed in the infrared spectra of cuprates.
## Acknowledgments
We thank P. Calvani, S. Lupi and A. Paolone for helpful discussions, and acknowledge the partial support from INFM.
## Table 1 captions
Tab.1. Effective polaron mass, in units of the electron band mass, as a function of the charge carrier density. The value of $`n_0`$ is $`1.4\times 10^{-5}`$ in units of $`R_p^{-3}`$, $`R_p`$ being the Fröhlich polaron radius.
## Figure captions
Fig.1. Diagrammatic representation of the Eq.(6). The solid line indicates the polaron propagator, the phonon is given by the dashed line, the dotted line describes the Coulomb interaction and the dotted-dashed line represents the incident photon.
Fig.2. a) Optical absorption per polaron, at $`T=0`$, as a function of the frequency for different electron densities: $`n=1.4\times 10^{-5}`$ (solid line), $`n=1.4\times 10^{-4}`$ (dashed line), $`n=1.4\times 10^{-3}`$ (dotted line), $`n=1.4\times 10^{-2}`$ (dotted-dashed line); b) Optical absorption per polaron at finite temperature, $`T/T_D=0.5`$, for different values of the charge carrier density: $`n=1.4\times 10^{-4}`$ (dashed line), $`n=1.4\times 10^{-3}`$ (dotted line), $`n=1.4\times 10^{-2}`$ (dotted-dashed line). The value of $`ϵ_0/ϵ_{\infty }`$ is $`3.4`$. The electron density is measured in units of $`R_p^{-3}`$, $`R_p`$ being the Fröhlich polaron radius, and the conductivity $`\sigma (\omega )`$ is expressed in terms of $`ne^2/m\omega _0`$. The value of $`R_p^{-3}`$ is $`7\times 10^{20}\,\mathrm{cm}^{-3}`$ when $`\omega _0=30\,\mathrm{meV}`$ and $`m=m_e`$, where $`m_e`$ is the electron mass.
Fig.3. a) Optical absorption per polaron at finite temperature, $`T/T_D=0.5`$, for different values of the charge carrier density: $`n=4.5\times 10^{-2}`$ (solid line), $`n=1.4\times 10^{-1}`$ (dashed line), $`n=0.28`$ (dotted line), $`n=1.4`$ (dotted-dashed line). The electron density is measured in units of $`R_p^{-3}`$, $`R_p`$ being the Fröhlich polaron radius, and the temperature is given in units of $`T_D=\hbar \omega _0/k_B`$, $`k_B`$ being the Boltzmann constant. In the inset is plotted the polaron mobility ($`\mu `$), i.e. $`Re\,\sigma (\omega \to 0)`$, as a function of the charge density for $`\omega _0=30\,\mathrm{meV}`$, $`ϵ_{\infty }=3`$ and $`m=m_e`$, where $`m_e`$ is the electron mass; b) Optical absorption of a single Feynman polaron for different values of $`T`$: $`T/T_D=0`$ (dashed line), $`T/T_D=0.5`$ (dotted line), $`T/T_D=1`$ (dotted-dashed line).
| | n<sub>0</sub> | 10 n<sub>0</sub> | 10<sup>2</sup>n<sub>0</sub> | 10<sup>3</sup>n<sub>0</sub> |
| --- | --- | --- | --- | --- |
| $`\alpha `$=5 | 3.56 | 3.07 | 2.19 | 1.42 |
| $`\alpha `$=6 | 6.21 | 5.28 | 3.58 | 1.94 |
Table 1
# Self-avoiding polygons on the square lattice
## 1 Introduction
A self-avoiding polygon (SAP) can be defined as a walk on a lattice which returns to the origin and has no other self-intersections. The history and significance of this problem is nicely discussed in . Alternatively we can define a SAP as a connected sub-graph (of a lattice) whose vertices are of degree 0 or 2. Generally SAPs are considered distinct up to a translation, so if there are $`p_n`$ SAPs of length $`n`$ there are $`2np_n`$ walks (the factor of two arising since the walk can go in two directions). The enumeration of self-avoiding polygons on various lattices is an interesting combinatorial problem in its own right, and is also of considerable importance in the statistical mechanics of lattice models .
The basic problem is the calculation of the generating function
$$P(x)=\sum _np_{2n}x^{2n}\sim A(x)+B(x)(1-x^2/x_c^2)^{2-\alpha },$$
(1)
where the functions $`A`$ and $`B`$ are believed to be regular in the vicinity of $`x_c.`$ We discuss this point further in Sec. 3, as it pertains to the presence or otherwise of a non-analytic correction-to-scaling term. Despite strenuous effort over the past 50 years or so this problem has not been solved on any regular two dimensional lattice. However, much progress has been made in the study of various restricted classes of polygons and many problems have been solved exactly. These include staircase polygons , convex polygons , row-convex polygons , and almost convex polygons . Also, for the hexagonal lattice the critical point, $`x_c^2=1/(2+\sqrt{2})`$ as well as the critical exponent $`\alpha =1/2`$ are known exactly , though non-rigorously. Very firm evidence exists from previous numerical work that the exponent $`\alpha `$ is universal and thus equals 1/2 for all two dimensional lattices . Thus the major remaining problem, short of an exact solution, is the calculation of $`x_c`$ for various lattices. Recently the authors found a simple mapping between the generating function for SAPs on the hexagonal lattice and the generating function for SAPs on the $`(3.12^2)`$ lattice . Knowledge of the exact value for $`x_c`$ on the hexagonal lattice resulted in the exact determination of the critical point on the $`(3.12^2)`$ lattice.
In order to study this and related systems, when an exact solution can’t be found one has to resort to numerical methods. For many problems the method of series expansions is by far the most powerful method of approximation. For other problems Monte Carlo methods are superior. For the analysis of $`P(x)`$, series analysis is undoubtedly the most appropriate choice. This method consists of calculating the first coefficients in the expansion of the generating function. Given such a series, using the numerical technique known as differential approximants , highly accurate estimates can frequently be obtained for the critical point and exponents, as well as the location and critical exponents of possible non-physical singularities.
This paper builds on the work of Enting who enumerated square lattice polygons to 38 steps using the finite lattice method. Using the same technique this enumeration was extended by Enting and Guttmann to 46 steps and later to 56 steps . Since then they extended the enumeration to 70 steps in unpublished work. These extensions to the enumeration were largely made possible by improved computer technology. In this work we have improved the algorithm and extended the enumeration to 90 steps while using essentially the same computational resources used to obtain polygons to 70 steps.
The difficulty in the enumeration of most interesting lattice problems is that, computationally, they are of exponential complexity. It would be a great breakthrough if a polynomial time algorithm could be found, while a linear time algorithm is, to all intents and purposes, equivalent to an exact solution. Initial efforts at computer enumeration of square lattice polygons were based on direct counting. The computational complexity was proportional to $`\lambda _1^n`$, where $`n`$ is the length of the polygon, and $`\lambda _1=1/x_c\approx 2.638`$. The dramatic improvement achieved by the finite lattice method can be seen from its complexity, which is proportional to $`\lambda _2^n`$, where $`\lambda _2=3^{1/4}\approx 1.316`$. Our new algorithm, described below, has reduced both time and storage requirements by virtue of a complexity which is proportional to $`\lambda _3^n`$, where $`\lambda _3\approx 1.20`$. It is worth noting that for simpler restricted cases it is possible to devise much more efficient algorithms. For problems such as the enumeration of convex polygons and almost convex polygons the algorithms are of polynomial complexity. Other interesting and related problems for which efficient transfer matrix algorithms can be devised include Hamiltonian circuits on rectangular strips (or other compact shapes) and self-avoiding random walks .
In the next section we will very briefly review the finite lattice method for enumerating square lattice polygons and give some details of the improved algorithm. The results of the analysis of the series are presented in Section 3 including a detailed discussion of a conjecture for the exact critical point.
## 2 Enumeration of polygons
The method used to enumerate SAP on the square lattice is an enhancement of the method devised by Enting in his pioneering work. The first terms in the series for the polygon generating function can be calculated using transfer matrix techniques to count the number of polygons in rectangles $`W+1`$ edges wide and $`L+1`$ edges long. The transfer matrix technique involves drawing a line through the rectangle intersecting a set of $`W+2`$ edges. For each configuration of occupied or empty edges along the intersection we maintain a (perimeter) generating function for loops to the left of the line cutting the intersection in that particular pattern. Polygons in a given rectangle are enumerated by moving the intersection so as to add one vertex at a time, as shown in Fig. 1. The allowed configurations along the intersection are described in . Each configuration can be represented by an ordered set of edge states $`\{n_i\}`$, where
$$n_i=\{\begin{array}{cc}\hfill 0& \text{empty edge},\hfill \\ \hfill 1& \text{lower part of loop closed to the left},\hfill \\ \hfill 2& \text{upper part of loop closed to the left}.\hfill \end{array}$$
(2)
Configurations are read from the bottom to the top. So the configuration along the intersection of the polygon in Fig. 1 is $`\{0112122\}`$.
The rules for updating the partial generating functions as the intersection is moved are identical to the original work, so we refer the interested reader to for further details regarding this aspect of the transfer matrix calculation.
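To fix ideas, a boundary configuration such as $`\{0112122\}`$ can be stored as a tuple over $`\{0,1,2\}`$; read from bottom to top it must be balanced like a string of parentheses, every 2-edge closing an earlier 1-edge. A minimal validity check in Python (purely illustrative, not the production enumeration code) reads:

```python
def is_valid(config):
    """Check that a boundary configuration is a balanced loop pattern:
    reading bottom-to-top, every 2-edge must close an earlier 1-edge."""
    depth = 0
    for state in config:
        if state == 1:          # lower end of a loop closed to the left
            depth += 1
        elif state == 2:        # upper end of a loop closed to the left
            depth -= 1
            if depth < 0:
                return False
    return depth == 0

print(is_valid((0, 1, 1, 2, 1, 2, 2)))   # the configuration of Fig. 1 -> True
print(is_valid((2, 1)))                  # a 2-edge before any 1-edge -> False
```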
Due to the obvious symmetry of the lattice one need only consider rectangles with $`L\ge W`$. Valid polygons were required to span the enclosing rectangle in the lengthwise direction. So it is clear that polygons with projection on the $`y`$-axis $`<W`$, that is polygons which are narrower than the width of the rectangle, are counted many times. It is however easy to obtain the polygons of width exactly $`W`$ and length exactly $`L`$ from this enumeration . Any polygon spanning such a rectangle has a perimeter of length at least $`2(W+L)`$. By adding the contributions from all rectangles of width $`W\le W_{\mathrm{max}}`$ (where the choice of $`W_{\mathrm{max}}`$ depends on available computational resources, as discussed below) and length $`W\le L\le 2W_{\mathrm{max}}-W+1`$, with contributions from rectangles with $`L>W`$ counted twice, the number of polygons per vertex of an infinite lattice is obtained correctly up to perimeter $`4W_{\mathrm{max}}+2`$.
The major improvement of the method used to enumerate polygons in this paper is that we require valid polygons to span the rectangle in both directions. In other words we directly enumerate polygons of width exactly $`W`$ and length exactly $`L`$ rather than polygons of width $`\le W`$ and length $`\le L`$ as was done originally. The only drawback of this approach is that for most configurations we have to use four distinct generating functions, since the partially completed polygon could have reached neither, both, the lower, or the upper boundary of the rectangle. The major advantage is that the memory requirement of the algorithm is exponentially smaller.
Realizing the full savings in memory usage requires two enhancements to the original algorithm. Firstly, for each configuration we must keep track of the current minimum number of steps $`N_{\mathrm{cur}}`$ that have been inserted to the left of the intersection in order to build up that particular configuration. Secondly, we calculate the minimum number of additional steps $`N_{\mathrm{add}}`$ required to produce a valid polygon. There are three contributions, namely the number of steps required to close the polygon, the number of steps needed (if any) to ensure that the polygon touches both the lower and upper boundary, and finally the number of steps needed (if any) to extend at least $`W`$ edges in the length-wise direction. If the sum $`N_{\mathrm{cur}}+N_{\mathrm{add}}>4W_{\mathrm{max}}+2`$ we can discard the partial generating function for that configuration because it won’t make a contribution to the polygon count up to the perimeter lengths we are trying to obtain. For instance polygons spanning a rectangle with a width close to $`W_{\mathrm{max}}`$ have to be almost convex, so very convoluted polygons are not possible. Thus configurations with many loop ends (non-zero entries) make no contribution at perimeter length $`4W_{\mathrm{max}}+2`$.
The number of steps needed to ensure a spanning polygon is straightforward to calculate. The complicated part of the new approach is the algorithm to calculate the number of steps required to close the polygon. There are very many special cases depending on the position of the kink in the intersection and whether or not the partially completed polygon has reached the upper or lower boundary of the bounding rectangle. So in the following we will only briefly describe some of the simple contributions to the closing of a polygon. Firstly, if the partial polygon contains separate pieces these have to be connected as illustrated in Fig. 2. Separate pieces are easy to locate since all we have to do is start at the bottom of the intersection and, moving upwards, count the number of 1’s and 2’s in the configuration. Whenever these numbers are equal a separate piece has been found and (provided one is not at the last edge in the configuration) the currently encountered 2-edge can be connected to the next 1-edge above. $`N_{\mathrm{add}}`$ is incremented by the number of steps (the distance) between the edges and the two edge-states are removed from the configuration before further processing. It is a little less obvious that if the configuration starts (ends) as $`\{112\mathrm{\dots }2\}`$ ($`\{1\mathrm{\dots }122\}`$) the two lower (upper) edges can safely be connected (note that there can be any number of 0’s interspersed within the $`\mathrm{\dots }`$). Again $`N_{\mathrm{add}}`$ is incremented by the number of steps between the edges, and the two edge-states are removed from the configuration – leading to the new configuration $`\{001\mathrm{\dots }2\}`$ ($`\{1\mathrm{\dots }200\}`$) – before further processing. After these operations we may be left with a configuration which has just one 1- and one 2-edge, in which case we are done, since these two edges can be connected to form a valid polygon. This is illustrated in Fig. 2, where the upper left panel shows how to close the partial polygon with the intersection $`\{12112212\}`$, which contains three separate pieces. After connecting these pieces we are left with the configuration $`\{10012002\}`$. We now connect the two 1-edges and note that the first 2-edge is relabeled to a 1-edge (it has become the new lower end of the loop). Thus we get the configuration $`\{00001002\}`$ and we can now connect the remaining two edges and end up with a valid completed polygon. Note that in the last two cases, in addition to the steps spanning the distance between the edges, an additional two horizontal steps had to be added in order to form a valid loop around the intervening edges. If the transformation above doesn’t result in a closed polygon we must have a configuration of the form $`\{111\mathrm{\dots }222\}`$. The difficulty lies in finding the way to close such configurations with the smallest possible number of additional steps. Suffice it to say that if the number of non-zero entries is small one can easily devise an algorithm to try all possible valid ways of closing a polygon and thus find the minimum number of additional steps. In Fig. 2 we show all possible ways of closing polygons with 8 non-zero entries. Note that we have shown the generic cases here. In actual cases there could be any number of 0-edges interspersed in the configurations and this would determine which way of closing requires the least number of additional steps.
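The first of these steps, locating separate pieces by scanning upwards until the counts of 1- and 2-edges balance, can be sketched as follows (illustrative Python; the step-counting for $`N_{\mathrm{add}}`$ is omitted for brevity):

```python
def separate_pieces(config):
    """Split a boundary configuration into separate pieces: moving upwards,
    a piece ends whenever the numbers of 1- and 2-edges seen so far agree."""
    pieces, start, ones, twos = [], 0, 0, 0
    for i, state in enumerate(config):
        if state == 1:
            ones += 1
        elif state == 2:
            twos += 1
        if ones and ones == twos:
            pieces.append(config[start:i + 1])
            start, ones, twos = i + 1, 0, 0
    return pieces

# The configuration {12112212} of Fig. 2 contains three separate pieces:
print(separate_pieces((1, 2, 1, 1, 2, 2, 1, 2)))
# -> [(1, 2), (1, 1, 2, 2), (1, 2)]
```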
With the original algorithm the number of configurations required grew asymptotically as $`3^{W_{\mathrm{max}}}`$ as $`W_{\mathrm{max}}`$ increased . Our enumerations indicate that the computational complexity is reduced significantly. While the number of configurations still grows exponentially as $`\lambda ^{W_{\mathrm{max}}}`$, the value of $`\lambda `$ is reduced from $`\lambda =3`$ to $`\lambda \approx 2`$ with the improved algorithm (Fig. 3 shows the number of configurations required as $`W_{\mathrm{max}}`$ increases). Furthermore, for any $`W`$ we know that contributions will start at $`4W`$ since the smallest polygons have to span a $`W\times W`$ rectangle. So for each configuration we need only retain $`4(W_{\mathrm{max}}-W)+2`$ terms of the generating functions, while in the original algorithm contributions started at $`2W`$ because the polygons were required to span only in the length-wise direction. We also note that on the square lattice all SAP’s are of even length, so for each configuration every other term in the generating function is zero, which allows us to discard half the terms and retain only the non-zero ones.
Finally a few remarks of a more technical nature. The number of contributing configurations becomes very sparse in the total set of possible states along the boundary line and, as is standard in such cases, one uses a hash-addressing scheme . Since the integer coefficients occurring in the series expansion become very large, the calculation was performed using modular arithmetic . This involves performing the calculation modulo various prime numbers $`p_i`$ and then reconstructing the full integer coefficients at the end. In order to save memory we used primes of the form $`p_i=2^{15}-r_i`$ (with small $`r_i`$) so that the residues of the coefficients in the polynomials could be stored using 16 bit integers. The Chinese remainder theorem ensures that any integer has a unique representation in terms of residues. If the largest absolute value occurring in the final expansion is $`m`$, then we have to use a number of primes $`k`$ such that $`p_1p_2\mathrm{\cdots }p_k/2>m`$. Up to 8 primes were needed to represent the coefficients correctly.
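The reconstruction step is the standard Chinese-remainder computation; a compact sketch (with three illustrative primes just below $`2^{15}`$, not necessarily those actually used) is:

```python
def crt(residues, primes):
    """Reconstruct x (mod prod(primes)) from x mod p_i, then map the result
    into the symmetric range so that negative coefficients are recovered."""
    M = 1
    for p in primes:
        M *= p
    x = 0
    for r, p in zip(residues, primes):
        Mp = M // p
        x += r * Mp * pow(Mp, -1, p)   # pow(a, -1, p): modular inverse
    x %= M
    return x - M if x > M // 2 else x

primes = [32749, 32719, 32717]          # illustrative primes below 2**15
secret = 123456789012
print(crt([secret % p for p in primes], primes) == secret)   # -> True
```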
Combining all the memory minimization tricks mentioned above allows us to extend the series for the square lattice polygon generating function from 70 terms to 90 terms using at most 2Gb of memory. Obtaining a series this long with the original algorithm would have required at least 200 times as much memory, or close to half a terabyte! The calculations were performed on an 8 node AlphaServer 8400 with a total of 8Gb memory. The total CPU time required was about a week per prime. Obviously the calculation for each width and prime are totally independent and several calculations were done simultaneously.
In Table 1 we have listed the new terms obtained in this work. They of course agree with the terms up to length 70 computed using the old algorithm. The number of polygons of length $`\le 56`$ can be found in .
## 3 Analysis of the series
We analyzed the series for the polygon generating function by the numerical method of differential approximants . In Table 2 we have listed estimates for the critical point $`x_c^2`$ and exponent $`2-\alpha `$ of the series for the square lattice SAP generating function. The estimates were obtained by averaging values obtained from first order $`[L/N;M]`$ and second order $`[L/N;M;K]`$ inhomogeneous differential approximants. For each order $`L`$ of the inhomogeneous polynomial we averaged over those approximants to the series which used at least the first 35 terms of the series (that is, polygons of perimeter at least 74), and used approximants such that the difference between $`N`$, $`M`$, and $`K`$ didn’t exceed 2. These are therefore “diagonal” approximants. Some approximants were excluded from the averages because the estimates were obviously spurious. The error quoted for these estimates reflects the spread (basically one standard deviation) among the approximants. Note that these error bounds should not be viewed as a measure of the true error as they cannot include possible systematic sources of error. We discuss the systematic error further when we consider biased approximants. Based on these estimates we conclude that $`x_c^2=0.1436806289(5)`$ and $`\alpha =0.5000005(10)`$.
As stated earlier there is very convincing evidence that the critical exponent $`\alpha =1/2`$ exactly. If we assume this to be true we can obtain a refined estimate for the critical point $`x_c^2`$. In Fig. 4 we have plotted estimates for the critical exponent $`2-\alpha `$ against estimates for the critical point $`x_c^2`$. Each dot in this figure represents a pair of estimates obtained from a second order inhomogeneous differential approximant. The order of the inhomogeneous polynomial was varied from 0 to 10. We observe that there is an almost linear relationship between the estimates for $`2-\alpha `$ and $`x_c^2`$ and that for $`2-\alpha =3/2`$ we get $`x_c^2\approx 0.14368062928`$. In order to get some idea as to the effect of systematic errors, we carried out this analysis using polygons of length up to 60 steps, then 70, then 80 and finally 90 steps. The results were $`x_c^2=0.1436806308`$ for $`n=60`$, $`x_c^2=0.14368062956`$ for $`n=70`$, $`x_c^2=0.14368062930`$ for $`n=80`$, and $`x_c^2=0.14368062928`$ for $`n=90`$. This is a rapidly converging sequence of estimates, though we have no theoretical basis that would enable us to assume any particular rate of convergence. However, observing that the differences between successive estimates are decreasing by a factor of at least 5, it is not unreasonably optimistic to estimate the limit at $`x_c^2=0.14368062927(1)`$.
This leads to our final estimate $`x_c^2=0.14368062927(1)`$ and thus we find the connective constant $`\mu =1/x_c=2.63815853034(10)`$. It is interesting to note that some years ago we pointed out that since the hexagonal lattice connective constant is given by the zero of a quadratic in $`x^2,`$ it is plausible that this might be the case also for the square lattice connective constant. On the basis of an estimate of the connective constant that was 4 orders of magnitude less precise, we pointed out then that the polynomial
$$581x^4+7x^2-13=0$$
was the only polynomial we could find with “small” integer coefficients consistent with our estimate. The relevant zero of this polynomial is $`x_c^2=0.1436806292698685\mathrm{\dots }`$, in complete agreement with our new estimate — which, as noted above, contains four more significant digits! Unfortunately the other zero is at $`x_c^2=-0.1557288\mathrm{\dots }`$, and we see no evidence of such a singularity. Nevertheless, the agreement is so astonishingly good that we are happy to take this as a good algebraic approximation to the connective constant. An argument as to why we might not expect to see the singularity on the negative real axis from our series analysis would make the root of the above polynomial a plausible conjecture for the exact value, but at present such an argument is missing.
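The quoted root is easy to reproduce numerically; a two-line sketch in the variable $`y=x^2`$:

```python
import numpy as np

# Roots of 581 y^2 + 7 y - 13 = 0, with y = x^2
y = np.roots([581.0, 7.0, -13.0])
print(y)                              # approx. [-0.15572886, 0.14368063]
print(1.0 / np.sqrt(y[y > 0][0]))     # connective constant mu ~ 2.63815853
```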
Two further analyses were carried out on the data. Firstly, a study of the location of non-physical singularities, and secondly, a study of the asymptotic form of the coefficients — which is relevant to the identification of any correction-to-scaling exponent. Singularities outside the radius of convergence give exponentially small contributions to the asymptotic form of the coefficients, so are notoriously hard to analyse. Nevertheless, we see clear evidence of a singularity on the negative real axis at $`x^2\approx -0.40`$ with an exponent that is extremely difficult to analyse but could be $`1.5`$, in agreement with the physical exponent. There is weaker evidence of a further conjugate pair of singularities. First order approximants locate these at $`0.015\pm 0.36i`$, while second order approximants locate them at $`0.035\pm 0.31i`$. There is also evidence of a further singularity on the negative real axis at $`x^2\approx -0.7`$. We are unable to give a useful estimate of the exponents of these singularities.
We turn now to the asymptotic form of the coefficients. We have argued previously that there is no non-analytic correction-to-scaling exponent for the polygon generating function. This is entirely consistent with Nienhuis’s observation that there is a correction-to-scaling exponent of $`\mathrm{\Delta }=\frac{3}{2}`$. Since for the polygon generating function the exponent $`\alpha =\frac{1}{2}`$, the correction term has an exponent equal to a positive integer, and therefore “folds into” the analytic background term, denoted $`A(x)`$ in Eqn.(1). This is explained in greater detail in . We assert that the asymptotic form for the polygon generating function is as given by Eqn.(1) above. In evidence of this, we remark that from (1) follows the asymptotic form
$$p_{2n}\sim x_c^{-2n}n^{-\frac{5}{2}}[a_1+a_2/n+a_3/n^2+a_4/n^3+\mathrm{\cdots }].$$
(3)
Using our algebraic approximation to $`x_c`$ quoted above, we show in Table 3 the estimates of the amplitudes $`a_1,\mathrm{\dots },a_4`$. From the table we see that $`a_1\approx 0.0994018`$, $`a_2\approx 0.02751`$, $`a_3\approx 0.0255`$ and $`a_4\approx 0.12`$, where in all cases we expect the error to be confined to the last quoted digit. The excellent convergence of all columns is strong evidence that the assumed asymptotic form is correct. If we were missing a term corresponding to, say, a half-integer correction, the fit would be far worse. This is explained at greater length in . So good is the fit to the data that if we take the last entry in the table, corresponding to $`n=45`$, and use the entries as the amplitudes, then $`p_4,\mathrm{\dots },p_{16}`$ are given exactly by the above asymptotic form (provided we round to the nearest integer), and beyond perimeter $`20`$ all coefficients are given to the same accuracy as the leading amplitude.
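The amplitude estimates of Table 3 can be generated by solving, for each $`n`$, the $`4\times 4`$ linear system built from four successive normalized coefficients $`r_k=p_{2k}x_c^{2k}k^{5/2}`$; a sketch of this standard fitting procedure (checked here on synthetic coefficients built from known amplitudes, not the actual series) is:

```python
import numpy as np

def fit_amplitudes(p, xc2, n):
    """Solve r_k = a1 + a2/k + a3/k^2 + a4/k^3 for k = n-3..n,
    where p[k] = p_{2k} and r_k = p_{2k} * xc2**k * k**2.5."""
    ks = np.arange(n - 3, n + 1)
    r = np.array([p[k] * xc2**k * k**2.5 for k in ks])
    A = np.vander(1.0 / ks, 4, increasing=True)  # columns 1, 1/k, 1/k^2, 1/k^3
    return np.linalg.solve(A, r)                 # -> a1, a2, a3, a4

# Synthetic check: coefficients built from known amplitudes are recovered.
xc2, a = 0.14368062927, np.array([0.0994018, 0.02751, 0.0255, 0.12])
p = {k: (a[0] + a[1]/k + a[2]/k**2 + a[3]/k**3) / (xc2**k * k**2.5)
     for k in range(2, 46)}
print(fit_amplitudes(p, xc2, 45))                # ~ [0.0994018, 0.02751, ...]
```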
Finally, to complete our analysis, we estimate the critical amplitudes $`A(x_c^2)`$ and $`B(x_c^2),`$ defined in Eqn.(1). $`A(x_c^2)`$ has been estimated by evaluating Padé approximants to the generating function, evaluated at $`x_c^2.`$ In this way we estimate $`A(x_c^2)0.036,`$ while $`B(x_c^2)`$ follows from the estimate of $`a_1`$ in Eqn.(3), since $`B(x_c^2)=\frac{4\sqrt{\pi }a_1}{3}0.234913.`$
## 4 Conclusion
We have presented an improved algorithm for the enumeration of self-avoiding polygons on the square lattice. The computational complexity of the algorithm is estimated to be $`1.2^n.`$ Implementing this algorithm has enabled us to obtain polygons up to perimeter length 90. Decomposing the coefficients into prime factors reveals frequent occurrence of very large prime factors, supporting the widely held view that there is no “simple” formula for the coefficients. For example, $`p_{78}`$ contains the prime factor 7789597345683901619. Our extended series enables us to give an extremely precise estimate of the connective constant, and we give a simple algebraic approximation that agrees precisely with our numerical estimate. An alternative analysis provides very strong evidence for the absence of any non-analytic correction terms to the proposed asymptotic form for the generating function. Finally we give an asymptotic representation for the coefficients which we believe to be accurate for all positive integers.
## 5 Acknowledgments
We gratefully acknowledge valuable discussions with Ian Enting, useful comments on the manuscript by Alan Sokal and financial support from the Australian Research Council.
# Light-Pair Corrections to Small-Angle Bhabha Scattering in a Realistic Set-up at LEP
## Abstract
Light-pair corrections to small-angle Bhabha scattering have been computed in a realistic set-up for luminosity measurements at LEP. The effect of acollinearity and acoplanarity rejection criteria has been carefully analysed for typical calorimetric event selections. The magnitude of the correction, depending on the details of the considered set-up, is comparable with the present experimental error.
CERN-TH/99-116 FNT/T-99/06
thanks: Supported by a Marie Curie fellowship (tmr-erbfmbict 971934)
In recent years, many efforts have been made to reduce the sources of theoretical error in the prediction of the small-angle Bhabha (hereafter SABH) scattering cross section, in order to match the increased experimental accuracy. The main results were achieved in the sector of the $`O(\alpha ^2L)`$ photonic corrections, by lowering the associated uncertainty to the $`0.03\%`$ level . Moreover, the uncertainty associated with the light-pair contribution was reduced to the $`0.01\%`$ level. The ultimate result of these works was to lower the total theoretical error to the $`0.05\%`$ level for LEP1 energies, as can be read from table 1 (see also refs. ). On the other hand, at present, the experimental error associated with luminosity measurements is below the $`0.05\%`$ level. Since the size of light-pair contributions is of the order of some $`0.01\%`$ and will depend, in general, on the event selection (hereafter ES), it is important to include the best available estimate for light-pair corrections. In particular, the presence of tight cuts which select events with soft-pair emission, such as acollinearity and acoplanarity cuts, can significantly alter the light-pair correction.
At present, the theoretical error on SABH scattering due to pair production can be evaluated by approximate means, such as the Monte Carlo (hereafter MC) results based on the $`t`$-channel approximation or the analytical calculations in the quasi-collinear approximation , or by the MC calculation of ref. , which includes the exact QED four-fermion matrix element, two-loop virtual corrections according to ref. , initial-state radiation (hereafter ISR) in the collinear approximation, and realistic ES’s. In this note the light-pair contribution is studied in the presence of ES’s as realistic as possible by using the approach of ref. , to which the reader is referred for any technical detail. Particular attention is paid to the OPAL ES , as a significant case study.
Before entering the details of the OPAL ES, it is worthwhile to consider a typical calorimetric ES, such as one of the CALO2 ES’s adopted in ref. . Apart from other technical features, it is characterized by an energy cut defined in terms of the kinematical variable $`z`$:
$$z\equiv 1-\frac{E_1E_2}{E_{\mathrm{beam}}^2}\le z_{\mathrm{max}},$$
(1)
where $`E_{1,2}`$ are the energies of the two clusters of particles hitting the forward and backward calorimeters. The definition of the ES, involving angular and energy cuts only, is reported in the caption of fig. 1. Notice that small values of $`z`$ inhibit hard-pair emission, while large values do not. In fig. 1 the light-pair contribution to SABH scattering is shown as a function of $`z`$. As expected, the magnitude of the correction undergoes a significant variation by changing the $`z`$ value. In particular the correction grows in absolute value if the available phase-space region favours soft-pair radiation. This is the same behaviour as observed if one studies photon emission, instead of pair emission. It is worth noticing that the enhancement of the pair correction is valid within the $`t`$-channel approximation too. It is also important to stress that superimposing an acollinearity and acoplanarity cut means inhibiting hard radiation, so that such a cut can effectively mimic a cut on $`z`$, constraining it in the soft region. As an example, these cuts were superimposed on CALO2, by considering only the electron-pair contribution at the tree level. The results are shown in table 2 for asymmetric angular acceptances ($`3.49^{\circ }\le \theta _\mathrm{N}\le 6.11^{\circ }`$ and $`2.97^{\circ }\le \theta _\mathrm{W}\le 6.73^{\circ }`$) at $`\sqrt{s}=92.0\mathrm{GeV}`$ with acollinearity and acoplanarity cuts ($`\theta _{\mathrm{ac}}=0.58^{\circ }`$ and $`\varphi _{\mathrm{ap}}=11.46^{\circ }`$). This exercise shows that the presence of acollinearity and acoplanarity cuts increases, in absolute value, the light-pair correction significantly, i.e. it has the same effect as lowering the $`z`$ cut.<sup>1</sup><sup>1</sup>1With the given values of the acollinearity and acoplanarity cuts, the largely dominant effect is due to the acollinearity cut. This link can be easily understood since acollinearity and acoplanarity cuts select events with soft-pair radiation. It is worth noticing that the higher the value of $`z`$, the higher the relative enhancement of the correction with respect to the correction itself, as expected.
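To make the interplay of these cuts concrete, here is a toy sketch (ours, not the MC code of ref. ) that applies an Eq. (1)-style energy cut together with acollinearity and acoplanarity vetoes to a single event. The angle conventions and the default $`z_{\mathrm{max}}`$ are illustrative assumptions; the acollinearity and acoplanarity values mirror those quoted above.

```python
import math

def passes_calo2_like_cuts(E1, E2, th1, th2, phi1, phi2, E_beam,
                           z_max=0.3,                        # arbitrary toy value
                           th_ac=math.radians(0.58),         # acollinearity cut
                           phi_ap=math.radians(11.46)):      # acoplanarity cut
    """Toy CALO2-style selection (angles in radians); one common convention
    for acollinearity/acoplanarity is assumed here."""
    z = 1.0 - (E1 * E2) / E_beam**2          # Eq. (1); small z vetoes hard radiation
    acoll = abs(th1 - th2)                    # polar angles measured from the
                                              # respective beam directions
    acopl = abs(math.pi - abs(phi1 - phi2))   # deviation from back-to-back azimuth
    return z <= z_max and acoll <= th_ac and acopl <= phi_ap
```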
Let us now consider the more realistic OPAL case. The OPAL luminosity is measured with an experimental precision of $`0.034\%`$ , and similar performances are attained by the other collaborations. On the other hand, the light-pair correction is of the order of some $`0.01\%`$ and, moreover, it could be critically enhanced by tight acollinearity and acoplanarity cuts, as just shown in the CALO2 case (see table 2). It is thus crucial, for the luminosity measurements, to include a careful estimate of the pair corrections. The OPAL collaboration defines a reference theoretical cross section in terms of simple cuts at four-vector level on the generated particles . This rejection set-up, reviewed in table 3, is named M4SEL; it comes in three flavours SWITL, SWITR and SWITA, corresponding to whether the narrow cut in polar angle is applied to the left or the right hand calorimeter, or, in the case of SWITA, to the average of the polar angles measured on the right and left .
A complete calculation, performed with the MC code of ref. , including ISR via collinear structure functions and the muon contribution, gives the results shown in table 4, leading to a correction of $`0.044\%`$. Two comments are in order here. The first is that the pair correction is of the same order as the experimental error. The second is that the pair correction computed for an ES with similar angular and energy cuts, but without an acollinearity and acoplanarity cut, is at the level of $`0.025`$–$`0.030\%`$ , i.e. smaller than the present prediction.
In this short discussion the relevance of taking into account the light-pair correction to SABH scattering for luminosity measurements at $`e^+e^-`$ colliders is pointed out. This need is due to the high experimental accuracy now achieved by the LEP collaborations, better than the $`0.05\%`$ level. In particular the effects of acollinearity and acoplanarity cuts are analysed, and general arguments are given to understand why the presence of such rejection criteria increases the size of light-pair corrections. Moreover a realistic ES, the M4SEL adopted by the OPAL collaboration, has been implemented in a MC code to quantify the light-pair contribution to SABH scattering, leading to a correction for light pairs at the $`0.04\%`$ level.
Acknowledgements
The authors wish to thank R. Kellogg and D. Strom, of the OPAL collaboration, for several stimulating and fruitful discussions and for having provided the specifications of the OPAL reference ES of table 3.
no-problem/9905/cond-mat9905322.html
# Spreading and shortest paths in systems with sparse long-range connections.
## Abstract
Spreading according to simple rules (e.g. of fire or diseases) and shortest-path distances are studied on $`d`$-dimensional systems with a small density $`p`$ per site of *long-range connections* (“Small-World” lattices). The volume $`V(t)`$ covered by the spreading quantity on an infinite system is exactly calculated in all dimensions. We find that $`V(t)`$ grows initially as $`\mathrm{\Gamma }_dt^d/d`$ for $`t<<t^{*}=(2p\mathrm{\Gamma }_d(d-1)!)^{-1/d}`$ and later exponentially for $`t>>t^{*}`$, generalizing a previous result in one dimension. Using the properties of $`V(t)`$, the average shortest-path distance $`\mathrm{}(r)`$ can be calculated as a function of Euclidean distance $`r`$. It is found that $`\mathrm{}(r)\simeq r`$ for $`r<r_c=(2p\mathrm{\Gamma }_d(d-1)!)^{-1/d}\mathrm{log}(2p\mathrm{\Gamma }_dL^d)`$ and $`\mathrm{}(r)\simeq r_c`$ for $`r>r_c`$. The characteristic length $`r_c`$, which governs the behavior of shortest-path lengths, *diverges* with system size for all $`p>0`$. Therefore the mean separation $`s\sim p^{-1/d}`$ between shortcut-ends is not a relevant internal length-scale for shortest-path lengths. We notice however that the globally averaged shortest-path length $`\overline{\mathrm{}}`$ divided by $`L`$ is a function of $`L/s`$ only.
Regular $`d`$-dimensional lattices with a small density $`p`$ per site of long-ranged bonds (or “small-world” networks) model the effect of weak unstructured (mean-field) interactions in a system where the dominant interactions have a regular $`d`$-dimensional structure, and thus may have many applications in physics as well as in other sciences .
In this work we study the spreading (see e.g. and references therein) of some influence (e.g. a forest fire, or an infectious disease) according to the following simple law: we assume that, at each time-step, the fire or disease propagates from a burnt (or infected) site to all unburnt (uninfected) sites connected to it by a link. Long-range connections, or *shortcuts*, represent sparks that start new fires far away from the original front, or, in the disease-spreading case, people who when first infected move to a random location amongst the non-infected population. For the dynamics of this simple problem, an important network property is the set of shortest-path distances $`\{\mathrm{}_{ij}\}`$, where $`\mathrm{}_{ij}`$ is defined as the minimum number of links one has to traverse between $`i`$ and $`j`$. On isotropic $`d`$-dimensional lattices $`\mathrm{}_{ij}`$ is proportional to $`d_{ij}^E`$, the Euclidean distance between $`i`$ and $`j`$. On regular lattices, both the number of sites within an Euclidean distance $`r`$ from $`i`$, and the number of sites within $`r`$ nearest-neighbor steps from $`i`$, behave as $`r^d`$.
Disorder can modify this in several ways. On a random fractal, the number of sites contained in a volume of Euclidean radius $`r`$ is $`\sim r^{d_f}`$ , where $`d_f<d`$ is the fractal dimension. On the other hand, the number of sites visited in at most $`\mathrm{}`$ steps is $`\sim \mathrm{}^{d_f/d_{min}}`$ , where $`d_{min}\ge 1`$ is the *shortest-path dimension*, defined such that $`\mathrm{}\sim r^{d_{min}}`$. On fractals, shortest-path lengths $`\mathrm{}(r)`$ are thus much *larger* than Euclidean distances $`r`$.
Consider now a randomly connected network. If we have $`L^d`$ sites sitting on a regular $`d`$-dimensional lattice, but connect them at random with an average coordination number $`C`$ (i.e. a total of $`L^dC/2`$ bonds), the number of sites in a volume of radius $`r`$ is still $`\sim r^d`$, but we can visit $`\sim C^k`$ sites in $`k`$ steps. Thus all $`L^d`$ sites can be visited in $`𝒪(\mathrm{log}L^d)`$ steps, and therefore the typical shortest-path distance $`\overline{\mathrm{}}`$ is of order $`\mathrm{log}L`$, much *shorter* than the typical Euclidean distance $`\overline{r}\sim L`$.
“Small-World” networks are intermediate between the regular lattice, where $`\overline{\mathrm{}}\sim L`$, and the random graph, where $`\overline{\mathrm{}}\sim \mathrm{log}L`$. They consist of a regular $`d`$-dimensional lattice with $`N=L^d`$ sites, on which $`pL^d`$ *additional* long-range bonds have been connected between randomly chosen pairs of sites. The key finding of Watts and Strogatz is that a vanishingly small density $`p`$ of long-range bonds is enough to make shortest-path distances proportional to $`\mathrm{log}L`$ instead of $`L`$. If $`L^dp<<1`$, the system typically contains no shortcuts, and the average shortest-path distance $`\overline{\mathrm{}}=1/N^2\sum _{<ij>}\left[\mathrm{}_{ij}\right]_p`$ scales as $`L`$. If on the other hand $`L^dp>>1`$, one finds $`\overline{\mathrm{}}\sim \mathrm{log}L`$ . For any fixed density $`p`$ of long-ranged bonds, a *crossover-size* $`L^{*}(p)`$ exists , above which shortest-path distances are only logarithmically increasing with $`L`$. This crossover size diverges as $`p^{-1/d}`$ when $`p\to 0`$. The nature of the small-world transition at $`p=0`$ was recently discussed controversially. It is still a matter of debate whether $`p=0`$ can be regarded as being equivalent to a critical point.
In this work we calculate the volume $`V(t)`$ that is covered, on a small-world network, by a spreading quantity as a function of time when the spreading law is the simple rule above, and derive an exact expression for the average shortest-path $`\mathrm{}(r)`$ on these systems.
Assume a disease spreads with constant radial velocity $`v=1`$ from an original infection site $`A`$, as shown in Fig. 1. Let $`\rho =2p`$ be the density of *shortcut-ends* on the system. We work in the continuum for simplicity, so that the infected volume will initially grow as a sphere of radius $`t`$ and surface $`\mathrm{\Gamma }_dt^{d-1}`$. We call the sphere stemming from $`A`$ the “primary sphere”.
Each time the primary sphere hits a shortcut end, which happens with probability $`\rho \mathrm{\Gamma }_dt^{d-1}`$ per unit time, a new sphere (“secondary”) starts to grow from a random point in non-infected space (the other end of the shortcut). These in turn later give rise to further secondary spheres in the same fashion.
Following Newman and Watts , we notice that the total infected volume is the sum of the primary volume $`\mathrm{\Gamma }_d\int _0^t\tau ^{d-1}d\tau `$ plus a contribution $`V(t-\tau )`$ for each new sphere born at time $`\tau `$. Thus in the continuum the average total infected volume satisfies
$$V(t)=\mathrm{\Gamma }_d\int _0^t\tau ^{d-1}\left\{1+\rho V(t-\tau )\right\}d\tau ,$$
(1)
which can be rewritten in terms of rescaled variables $`\stackrel{~}{V}=\rho V`$ and $`\stackrel{~}{t}=(\rho \mathrm{\Gamma }_d(d-1)!)^{1/d}t`$ as
$$\stackrel{~}{V}(\stackrel{~}{t})=\frac{1}{(d-1)!}\int _0^{\stackrel{~}{t}}(\stackrel{~}{t}-\stackrel{~}{\tau })^{d-1}\left\{1+\stackrel{~}{V}(\stackrel{~}{\tau })\right\}d\stackrel{~}{\tau }$$
(2)
It is interesting to notice that $`\stackrel{~}{V}`$ is the total number of infected shortcut-ends, while $`\stackrel{~}{t}^d/d!`$ is the total number of shortcut-ends infected by the primary sphere. On an infinite system, the functional relation (2) that links these two variables has no parameters except for the space dimensionality $`d`$. On a system of finite volume $`\mathrm{\Gamma }_dL^d/d`$, an important parameter is the rescaled linear size $`\stackrel{~}{L}=(\rho \mathrm{\Gamma }_d(d-1)!)^{1/d}L`$, whose $`d`$-th power gives the total number $`N_s`$ of shortcut-ends as $`N_s=\stackrel{~}{L}^d/d!`$.
Differentiating (2) $`d`$ times with respect to $`\stackrel{~}{t}`$ we obtain
$$\frac{d^d}{d\stackrel{~}{t}^d}\stackrel{~}{V}(\stackrel{~}{t})=1+\stackrel{~}{V}(\stackrel{~}{t}),$$
(3)
whose solution is
$$\stackrel{~}{V}(\stackrel{~}{t})=\sum _{k=1}^{\mathrm{\infty }}\frac{\stackrel{~}{t}^{dk}}{(dk)!}$$
(4)
Notice that (4) is the series expansion of $`(e^{\stackrel{~}{t}}-1)`$ with all powers that are not multiples of $`d`$ removed. Thus (4) can be written as a sum of $`d`$ exponentials, each with a different $`d`$-th root of $`1`$ in its argument. In this way, the powers which are not multiples of $`d`$ cancel out.
$$\stackrel{~}{V}(\stackrel{~}{t})=\frac{1}{d}\sum _{n=0}^{d-1}\mathrm{exp}\{\mu _d^n\stackrel{~}{t}\}-1$$
(5)
where $`\mu _d=e^{i2\pi /d}`$. Some specific examples are
$`\begin{array}{cccc}\stackrel{~}{V}(\stackrel{~}{t})\hfill & =& e^{\stackrel{~}{t}}-1\hfill & \text{in 1d,}\hfill \\ & & & \\ \stackrel{~}{V}(\stackrel{~}{t})\hfill & =& \mathrm{cosh}\stackrel{~}{t}-1\hfill & \text{in 2d,}\hfill \\ & & & \\ \stackrel{~}{V}(\stackrel{~}{t})\hfill & =& \frac{e^{\stackrel{~}{t}}+e^{\mu _3\stackrel{~}{t}}+e^{\mu _3^2\stackrel{~}{t}}}{3}-1\hfill & \text{in 3d,}\hfill \end{array}`$
where the one-dimensional solution coincides with that previously derived by other methods .
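The equivalence of the series (4) and the sum of exponentials (5) is easy to verify numerically; a minimal sketch (ours) comparing the two forms for a few dimensions:

```python
import numpy as np
from math import factorial

def V_series(t, d, kmax=20):
    """Rescaled infected volume from the series (4), truncated at k = kmax."""
    return sum(t**(d * k) / factorial(d * k) for k in range(1, kmax))

def V_closed(t, d):
    """Closed form (5): (1/d) * sum_n exp(mu_d^n * t) - 1, with mu_d = exp(2*pi*i/d)."""
    mu = np.exp(2j * np.pi / d)
    return (sum(np.exp(mu**n * t) for n in range(d)) / d - 1).real

for d in (1, 2, 3):
    t = 1.7
    print(d, V_series(t, d), V_closed(t, d))  # the two columns agree
```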
A general property of (4) is that $`\stackrel{~}{V}`$ grows as $`\stackrel{~}{t}^d/d!`$ for $`\stackrel{~}{t}<1`$, and later exponentially as $`e^{\stackrel{~}{t}}/d`$. Thus the characteristic timescale for the spreading process is $`t^{*}=(\rho \mathrm{\Gamma }_d(d-1)!)^{-1/d}`$.
Notice that (1), and thus also (4), only hold on an infinite system. On a finite system with $`\stackrel{~}{L}^d>>1`$, $`\stackrel{~}{V}`$ will saturate after a time $`\stackrel{~}{t}_{sat}`$ that can be estimated by equating $`\stackrel{~}{V}\sim e^{\stackrel{~}{t}_{sat}}/d\sim \stackrel{~}{L}^d/d!`$ and therefore
$$\stackrel{~}{t}_{sat}\simeq \mathrm{log}(\stackrel{~}{L}^d/(d-1)!),$$
(6)
which can be rewritten as
$$t_{sat}\simeq (\rho \mathrm{\Gamma }_d(d-1)!)^{-1/d}\mathrm{log}(\rho \mathrm{\Gamma }_dL^d)$$
(7)
If on the other hand $`\stackrel{~}{L}^d<<1`$, the spreading stops at $`\stackrel{~}{t}_{sat}=\stackrel{~}{L}`$, before reaching the exponential growth regime.
Thus for a finite system with $`\stackrel{~}{L}^d>>1`$ one has
$$\stackrel{~}{V}(\stackrel{~}{t})\simeq \{\begin{array}{ccc}\stackrel{~}{t}^d/d!\hfill & \text{ for }& \stackrel{~}{t}<<1\hfill \\ e^{\stackrel{~}{t}}/d\hfill & \text{ for }& 1<<\stackrel{~}{t}<\stackrel{~}{t}_{sat}\simeq \mathrm{log}(\frac{\stackrel{~}{L}^d}{(d-1)!})\hfill \\ \stackrel{~}{L}^d/d!\hfill & \text{ for }& \stackrel{~}{t}>\stackrel{~}{t}_{sat}\hfill \end{array}$$
(8)
Assume now that $`\stackrel{~}{L}^d>>1`$. Because of the exponentially fast spreading process, the fraction of the total volume covered by the disease is negligible for $`\stackrel{~}{t}<\stackrel{~}{t}_{sat}`$ and saturates to one abruptly at $`t=t_{sat}`$. Therefore on a large system most of the points become infected essentially at the same time $`\stackrel{~}{t}_{sat}`$.
Now let us see how to calculate the average shortest-path distance $`\mathrm{}(r)`$ as a function of the Euclidean separation $`r`$ between two points. Since we assumed that the disease spreads with unit velocity, it is clear that the time $`t`$ at which a point $`x`$ becomes first infected is exactly the shortest-path distance $`\mathrm{}(A,x)`$ from $`A`$ to $`x`$. By definition, no part of the finite system remains uninfected after $`t=t_{sat}`$, so we conclude that no shortest-path distance can be larger than $`t_{sat}`$ on a finite system. Combining this with the fact that $`\mathrm{}(r)`$ cannot decrease with increasing $`r`$, we conclude that
$$\mathrm{}(r)=t_{sat}\text{ for }r\ge t_{sat}$$
(9)
In order to calculate $`\mathrm{}(r)`$ for $`r<t_{sat}`$, let us write $`V(t)=V_1(t)+V_2(t)`$, where $`V_1`$ is the primary volume and $`V_2`$ the volume infected by secondary spheres.
Let $`p_2(t)`$ be the probability to become infected by the secondary infection exactly at time $`t\le t_{sat}`$. Consequently $`I_2(t)=\int _0^tp_2(\tau )d\tau `$ is the probability for a point to become infected at time $`t`$ or earlier. Assuming that $`p_2(t)`$ is known, it is easy to calculate the average shortest-path distance $`\mathrm{}(r)`$ as a function of Euclidean distance $`r`$, according to the following. If an individual at $`x`$ becomes infected by a secondary sphere at time $`\tau <d^E(A,x)`$, its shortest-path distance $`\mathrm{}(A,x)`$ to $`A`$ is $`\tau `$. Otherwise, if $`x`$ is still uninfected at time $`t=d^E(A,x)`$ (which happens with probability $`1-I_2(d^E(A,x))`$), then $`\mathrm{}(A,x)=d^E(A,x)`$, since at that time the primary sphere hits $`x`$ with probability one. Therefore the average shortest-path satisfies
$`\mathrm{}(r)`$ $`=`$ $`{\displaystyle \int _0^r}tp_2(t)dt+r\left\{1-I_2(r)\right\}`$ (10)
$`=`$ $`r-{\displaystyle \int _0^r}I_2(t)dt`$ (11)
The fact that the secondary volume $`V_2`$ is randomly distributed in space makes this problem relatively simple. The probability $`I_2(t)`$ for a point to be infected by the secondary version of the disease at time $`t`$ or earlier is simply $`I_2=V_2(t)/(\mathrm{\Gamma }_dL^d/d)`$, i.e. the fraction of the total volume which is covered by the secondary infection. Thus
$$\mathrm{}(r)=r-\frac{d}{\mathrm{\Gamma }_dL^d}\int _0^rV_2(t)dt$$
(12)
If there are no shortcuts on the system, $`V_2`$ is zero at all times and thus $`\mathrm{}(r)=r`$ as expected. But it is also clear from this expression that $`\mathrm{}(r)=r`$ when $`L\to \mathrm{\infty }`$, for all *finite* $`r`$, i.e. in the thermodynamic limit the shortest-path distance $`\mathrm{}(r)`$ coincides with the Euclidean distance $`r`$ for all *finite* $`r`$, no matter what $`\rho `$ is.
On a finite system with $`\stackrel{~}{L}^d>>1`$, $`V_2(t)/L^d`$ is negligible for all $`t<t_{sat}`$ as we have already noticed. Therefore $`\mathrm{}(r)=r`$ if $`r<t_{sat}`$. Combining this with (9) we have
$$\mathrm{}(r)\simeq \{\begin{array}{ccc}r\hfill & \text{for}\hfill & r<r_c=(\rho \mathrm{\Gamma }_d(d-1)!)^{-1/d}\mathrm{log}(\rho \mathrm{\Gamma }_dL^d)\hfill \\ r_c\hfill & \text{for}\hfill & r\ge r_c\hfill \end{array}$$
(13)
Detailed knowledge of $`\mathrm{}(r)`$ for $`r\approx r_c`$ would only be possible if the finite-size effects that we ignored in (2) were exactly known, but the interesting remark is that the lack of this knowledge has little or no importance for $`r`$ away from $`r_c`$.
We thus see that on a *finite* system, a characteristic length $`r_c=(\rho \mathrm{\Gamma }_d(d-1)!)^{-1/d}\mathrm{log}(\rho \mathrm{\Gamma }_dL^d)`$ exists, that governs the behavior of average shortest-path distances as a function of Euclidean separation. This characteristic length diverges when $`L\to \mathrm{\infty }`$, for any $`\rho >0`$. The typical separation $`s=\rho ^{-1/d}`$ between shortcut ends , which is size-independent, is *not relevant* for $`\mathrm{}(r)`$. The validity of (13) has been verified numerically in one dimension recently .
It is interesting to notice that the rescaled shortest-path distance $`\stackrel{~}{\mathrm{}}=(\rho \mathrm{\Gamma }_d)^{1/d}\mathrm{}`$ is a simple function of the rescaled Euclidean distance $`\stackrel{~}{r}=(\rho \mathrm{\Gamma }_d)^{1/d}r`$.
$$\stackrel{~}{\mathrm{}}(\stackrel{~}{r})=\{\begin{array}{ccc}\stackrel{~}{r}\hfill & \text{for}\hfill & \stackrel{~}{r}<1\hfill \\ 1\hfill & \text{for}\hfill & \stackrel{~}{r}\ge 1\hfill \end{array}$$
(14)
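Both forms are trivial to transcribe; a minimal numerical sketch (ours; $`\mathrm{\Gamma }_d`$ denotes the solid-angle constant, e.g. $`\mathrm{\Gamma }_1=2`$, $`\mathrm{\Gamma }_2=2\pi `$):

```python
import math

# Sketch (ours) of Eq. (13): the mean shortest-path distance follows the
# Euclidean distance up to r_c and saturates at r_c beyond it.
def r_c(rho, L, d, Gamma_d):
    return (rho * Gamma_d * math.factorial(d - 1))**(-1.0 / d) \
           * math.log(rho * Gamma_d * L**d)

def ell(r, rho, L, d, Gamma_d):
    return min(r, r_c(rho, L, d, Gamma_d))

# e.g. d = 1 (Gamma_1 = 2): ell saturates near r_c ~ 265 for rho = 0.01, L = 1e4
print(ell(100, 0.01, 10_000, 1, 2), ell(1000, 0.01, 10_000, 1, 2))
```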
Using (13) we can now calculate $`\overline{\mathrm{}}(\rho ,L)`$, the (global) average shortest-path length , when $`\stackrel{~}{L}^d>>1`$. One has
$`\overline{\mathrm{}}(\rho ,L)`$ $`=`$ $`{\displaystyle \frac{d}{L^d}}{\displaystyle \int _0^L}\mathrm{}(r)r^{d-1}dr`$ (15)
$`=`$ $`{\displaystyle \frac{d}{L^d}}{\displaystyle \int _0^{r_c}}r^ddr+{\displaystyle \frac{r_cd}{L^d}}{\displaystyle \int _{r_c}^L}r^{d-1}dr`$ (16)
$`=`$ $`r_c\left[1-{\displaystyle \frac{1}{d+1}}\left({\displaystyle \frac{r_c}{L}}\right)^d\right]`$ (17)
So that the “order parameter” $`\mathcal{L}=\overline{\mathrm{}}/L`$ reads
$$\mathcal{L}=z(1-\frac{z^d}{d+1})$$
(18)
where $`z=r_c/L`$.
When $`\rho \to 0`$ faster than $`L^{-d}`$ (so that $`\stackrel{~}{L}^d<<1`$), formula (16) holds with $`r_c\to L`$, and thus $`\mathcal{L}\to d/(d+1)`$ as expected. On the other hand if $`\rho >0`$ one has that $`r_c<<L`$ when $`L\to \mathrm{\infty }`$, and thus $`\mathcal{L}\to 0`$ in this limit. Therefore $`\mathcal{L}`$ undergoes a discontinuity at $`\rho =0`$ in the $`L\to \mathrm{\infty }`$ limit .
Notice that $`r_c/L=\mathrm{log}(\rho \mathrm{\Gamma }_dL^d)/[L(\rho \mathrm{\Gamma }_d(d-1)!)^{1/d}]\sim \mathrm{log}(L/s)/(L/s)`$, where $`s\sim \rho ^{-1/d}`$ is the mean separation between shortcut-ends. Thus $`\mathcal{L}`$ can be written as a function of $`L/s`$ only. Therefore if we measure $`\mathcal{L}`$ on systems with several values of $`L`$ and $`\rho `$ and plot the data versus $`L/s`$, we would find that they *collapse* . Because of this behavior some authors have suggested that the transition at $`\rho =0`$ is a *critical point* with a size-independent characteristic length $`\xi \sim s\sim \rho ^{-1/d}`$. Our results here and in previous work suggest that this is not the case. According to our calculation, the only characteristic length in regard to shortest-paths is $`r_c`$, and it diverges with system size $`L`$.
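The collapse statement is easy to illustrate numerically (our own check of Eq. (18); the pairs of $`\rho `$ and $`L`$ below are arbitrary toy values):

```python
import math

def order_parameter(rho, L, d, Gamma_d):
    """Eq. (18): z*(1 - z**d/(d+1)) with z = r_c/L from Eq. (13)."""
    r_c = (rho * Gamma_d * math.factorial(d - 1))**(-1.0 / d) \
          * math.log(rho * Gamma_d * L**d)
    z = r_c / L
    return z * (1.0 - z**d / (d + 1))

# d = 1 (Gamma_1 = 2): two systems with the same L/s = rho*L = 100
# give the same order parameter, as the L/s scaling predicts.
print(order_parameter(0.01, 10_000, 1, 2))
print(order_parameter(0.02, 5_000, 1, 2))
```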
We have thus shown that, on a finite system with $`L^dp>>1`$, two widely separated timescales for spreading can be identified. The first one, $`t^{*}=(2p\mathrm{\Gamma }_d(d-1)!)^{-1/d}`$, determines the crossover from normal (i.e. proportional to $`t^d`$) to exponential spreading. A much larger timescale $`t_{sat}`$ given by (7) determines the saturation of the spreading process. This second timescale coincides with the lengthscale $`r_c`$ at which the behavior of shortest path lengths $`\mathrm{}(r)`$ saturates, as given by Eq. (13).
It is clear from our calculation that $`r_c`$ diverges with $`L`$ because the locations of the secondary spheres are uncorrelated with the location of the primary infection. In other words, because on a system of size $`L`$, the typical separation between the two ends of a shortcut scales as $`L`$. A different situation would certainly arise if shortcuts had a length-dependent distribution. For example one can connect each site $`i`$, with probability $`p`$, to a single other site $`j`$, chosen with probability $`\sim r_{ij}^{-\alpha }`$, where $`\alpha `$ is a free parameter. For $`\alpha \to 0`$, this model is the same as discussed here, while for $`\alpha `$ large one would only have short-range connections and thus there would be no short-distance regime, even for $`p=1`$. We are presently studying this general model .
###### Acknowledgements.
I acknowledge useful discussions with M. Argollo de Menezes. This work is supported by FAPERJ.
no-problem/9905/hep-ph9905487.html
# Inflation in the five-dimensional universe with an orbifold extra dimension

KEK-TH-630, May 1999
New inflationary solutions to the Einstein equation are explicitly constructed in a simple five-dimensional model with an orbifold extra dimension $`S^1/Z_2`$. We consider inflation caused by cosmological constants for the five-dimensional bulk and the four-dimensional boundaries. In our solutions the extra dimension is static, and the background metric has a non-trivial configuration in the extra dimension. In addition to the solution for a vanishing bulk cosmological constant, which has already been discussed, we obtain solutions for a non-zero one.
Recently, a scenario was proposed to solve the hierarchy problem by invoking large extra dimensions . In this scenario the fundamental Planck mass is near a $`\mathrm{TeV}`$, and the weakness of ordinary gravity is explained by the large ratio between the size of the sub-millimeter extra dimensions and the fundamental scale. The extra dimensions are compactified on a manifold like $`S^2`$, and gravity can propagate in the higher-dimensional bulk, while the standard model fields are localized on a four-dimensional wall with $`\mathrm{TeV}`$-scale thickness in the extra dimensions. The extra dimensions are not homogeneous, since they would have special properties at the wall. Thus it seems important to take a non-trivial structure of the extra dimensions into account in such discussions.
More recently, the authors of Ref. proposed a different scenario where the size of the extra dimension is not as large as in Ref. , but the hierarchy is still explained by an exponential suppression factor. In this scenario the extra dimension is an orbifold $`S^1/Z_2`$. As in Ref. , gravity can propagate in the bulk whereas the standard model fields are confined to one boundary. The background metric has an exponential dependence in the orbifold direction due to non-trivial boundary conditions, and this behavior is essential for solving the hierarchy problem in Ref. . This set-up is similar to that of the heterotic M-theory, where the low-energy effective theory of the strongly coupled heterotic string theory is described by an eleven-dimensional supergravity compactified on $`S^1/Z_2`$, and super Yang-Mills gauge multiplets live on the two boundaries. A non-trivial metric distortion in the orbifold dimension is one of the important ingredients in the M-theory too.
The extra dimensions mentioned above might play an important role in cosmology. Inflationary solutions of the Einstein equation and early cosmology in models with extra dimensions have been discussed by many authors. In some analyses, the metric is assumed to be uniform in the extra dimension. If the metric has a non-trivial dependence in the extra dimension in the early universe, the analyses of inflation may be altered. There are also analyses where non-trivial background metrics are considered. However, exact inflationary solutions in the presence of both a bulk cosmological constant and boundary ones have not been discussed in these papers.
In this paper, we try to find new inflationary solutions with non-trivial configurations in the extra dimension. We restrict ourselves to a five-dimensional gravity with an orbifold extra dimension $`S^1/Z_2`$. Starting from a five-dimensional Einstein action with two boundary terms, we analyze classical solutions to the Einstein equation. The five-dimensional bulk action is described by a five-dimensional gravity and a cosmological constant. For the boundary actions, we treat them as localized cosmological constants. With simple assumptions for the metric, we find new inflationary solutions which have non-trivial dependences in the extra dimension. In addition to the solution for a vanishing bulk cosmological constant, which has already been discussed in Ref., we obtain solutions for a non-zero one.
We consider the five-dimensional universe compactified on an orbifold $`S^1/Z_2`$. We use coordinates $`x^M`$ $`=`$ $`(x^\mu ,y)`$, where $`M`$ $`=`$ $`0,1,2,3,5`$ is an index for the five-dimensional space, and $`\mu `$ $`=`$ $`0,1,2,3`$ is that for the uncompactified four-dimensional space. The coordinate $`y`$ is assigned to the extra orbifold dimension with the identification $`y+2L\sim y`$. To describe $`S^1/Z_2`$, we work in the orbifold picture. Namely, we analyze physics in the region $`-L\le y\le L`$, requiring that every field is even under the $`Z_2`$ action $`y\to -y`$. Fixed points of the $`Z_2`$ action are located at $`y`$ $`=`$ 0, $`L`$. These two points correspond to the two boundaries of the segment $`S^1/Z_2`$.
The action is given by
$`S`$ $`=`$ $`{\displaystyle \frac{1}{2\kappa _5^2}}{\displaystyle \int _{-L}^L}dy{\displaystyle \int d^4x\sqrt{g}(R+2\mathrm{\Lambda })}+{\displaystyle \int d^4x\sqrt{g_1}\mathcal{L}_1}+{\displaystyle \int d^4x\sqrt{g_2}\mathcal{L}_2}.`$ (1)
Here the first term represents the five-dimensional bulk action which includes the five-dimensional gravitational fields and the cosmological constant $`\mathrm{\Lambda }`$. The second and the third terms are the four-dimensional boundary actions localized at $`y`$ $`=`$ 0 and $`y`$ $`=`$ $`L`$, respectively. The fields of the standard model are supposed to be confined to one boundary, and some hidden matter fields are confined to the other boundary. For simplicity, we assume that the boundary potentials have slow-roll properties, and we can treat these boundary terms as localized cosmological constants during inflation. The $`g_1`$ and $`g_2`$ are metrics on the boundaries, and are written in terms of the five-dimensional metric $`g_{MN}`$ as $`g_1^{\mu \nu }`$ $`=`$ $`g^{\mu \nu }(y=0)`$ and $`g_2^{\mu \nu }`$ $`=`$ $`g^{\mu \nu }(y=L)`$. The sign convention for the metric is $`(+,-,-,-,-)`$. Note that the physical length of the orbifold dimension is $`L_{\mathrm{phys}}`$ $`=`$ $`\int dy\sqrt{-g_{55}}`$. The five-dimensional gravitational coupling constant $`\kappa _5`$ is related to the four-dimensional Newton constant $`G_N`$ as $`\kappa _5^2`$ $`=`$ $`16\pi G_NL_{\mathrm{phys}}`$.
Minimizing the action (1), we obtain the Einstein equation as
$`\sqrt{g}\left(R^{MN}-{\displaystyle \frac{1}{2}}g^{MN}R\right)`$ (2)
$`=`$ $`-\kappa _5^2\left[\sqrt{g_1}g_1^{\mu \nu }\delta _\mu ^M\delta _\nu ^N\mathcal{L}_1\delta (y)+\sqrt{g_2}g_2^{\mu \nu }\delta _\mu ^M\delta _\nu ^N\mathcal{L}_2\delta (y-L)\right]+\sqrt{g}g^{MN}\mathrm{\Lambda }.`$
In deriving Eq.(2), we neglected the dynamics of the fields on the boundaries. Namely, $`\mathcal{L}_1`$ and $`\mathcal{L}_2`$ in this equation are treated as constants. We adopt the following ansatz for the metric:
$`ds^2=g_{MN}dx^Mdx^N=u(y)^2dt^2-a(y,t)^2d\stackrel{}{x}^2-b(y,t)^2dy^2,`$ (3)
where $`u`$, $`a`$ and $`b`$ are scale factors for $`t`$ $`(=x^0)`$, $`x^{1,2,3}`$ and $`y`$, respectively. With this metric, Eq.(2) reduces to the following relations:
$`{\displaystyle \frac{1}{u^2}}\left[\left({\displaystyle \frac{\dot{a}}{a}}\right)^2+{\displaystyle \frac{\dot{a}}{a}}{\displaystyle \frac{\dot{b}}{b}}\right]-{\displaystyle \frac{1}{b^2}}\left[{\displaystyle \frac{a^{\prime \prime }}{a}}+\left({\displaystyle \frac{a^{\prime }}{a}}\right)^2-{\displaystyle \frac{a^{\prime }}{a}}{\displaystyle \frac{b^{\prime }}{b}}\right]`$ $`=`$ $`-{\displaystyle \frac{\kappa _5^2}{3b}}\left[\delta (y)\mathcal{L}_1+\delta (y-L)\mathcal{L}_2\right]+{\displaystyle \frac{\mathrm{\Lambda }}{3}},`$ (4)
$`{\displaystyle \frac{1}{u^2}}\left[2{\displaystyle \frac{\ddot{a}}{a}}+{\displaystyle \frac{\ddot{b}}{b}}+\left({\displaystyle \frac{\dot{a}}{a}}\right)^2+2{\displaystyle \frac{\dot{a}}{a}}{\displaystyle \frac{\dot{b}}{b}}\right]-{\displaystyle \frac{1}{b^2}}\left[2{\displaystyle \frac{a^{\prime \prime }}{a}}+{\displaystyle \frac{u^{\prime \prime }}{u}}+\left({\displaystyle \frac{a^{\prime }}{a}}\right)^2+2{\displaystyle \frac{u^{\prime }}{u}}{\displaystyle \frac{a^{\prime }}{a}}-2{\displaystyle \frac{a^{\prime }}{a}}{\displaystyle \frac{b^{\prime }}{b}}-{\displaystyle \frac{u^{\prime }}{u}}{\displaystyle \frac{b^{\prime }}{b}}\right]`$ (5)
$`=`$ $`-{\displaystyle \frac{\kappa _5^2}{b}}\left[\delta (y)\mathcal{L}_1+\delta (y-L)\mathcal{L}_2\right]+\mathrm{\Lambda },`$
$`{\displaystyle \frac{1}{u^2}}\left[{\displaystyle \frac{\ddot{a}}{a}}+\left({\displaystyle \frac{\dot{a}}{a}}\right)^2\right]-{\displaystyle \frac{1}{b^2}}\left[\left({\displaystyle \frac{a^{\prime }}{a}}\right)^2+{\displaystyle \frac{u^{\prime }}{u}}{\displaystyle \frac{a^{\prime }}{a}}\right]`$ $`=`$ $`{\displaystyle \frac{\mathrm{\Lambda }}{3}},`$ (6)
$`-{\displaystyle \frac{\dot{a}^{\prime }}{a}}+{\displaystyle \frac{u^{\prime }}{u}}{\displaystyle \frac{\dot{a}}{a}}+{\displaystyle \frac{a^{\prime }}{a}}{\displaystyle \frac{\dot{b}}{b}}`$ $`=`$ $`0,`$ (7)
where primes and dots denote derivatives with respect to $`y`$ and $`t`$, respectively. We look for classical solutions to these equations from now on.
We focus on Eq.(7) first. A useful observation follows if we assume that $`u(y)`$ and $`a(y,t)`$ have a common $`y`$-dependence and $`a(y,t)`$ is separable:
$`u(y)=f(y),a(y,t)=f(y)v(t).`$ (8)
With this assumption, the first and the second terms on the left-hand side of Eq.(7) cancel out, and we obtain $`\dot{b}=0`$ or $`a^{\prime }=0`$. However, for the latter case, Eqs.(4) and (5) cannot be satisfied due to the non-zero boundary terms. Therefore it follows that the extra dimension is automatically static:
$`\dot{b}(y,t)`$ $`=`$ $`0.`$ (9)
Note that if both $`\mathcal{L}_1`$ and $`\mathcal{L}_2`$ vanish, there is also a possibility of $`\dot{b}\ne 0`$. Though $`b(y,t)`$ is time independent, it may have a non-trivial $`y`$-dependence. For simplicity, we assume that $`b(y,t)`$ also has the same $`y`$-dependence as $`u(y)`$ and $`a(y,t)`$:
$`b(y,t)`$ $`=`$ $`f(y).`$ (10)
We can solve $`v(t)`$ and $`f(y)`$ easily. For $`v(t)`$, we obtain $`\ddot{v}/v`$ $`=`$ $`(\dot{v}/v)^2`$ from Eqs.(4), (5), (8) and (10). Thus the three spatial dimensions expand exponentially:
$`v(t)`$ $`=`$ $`e^{Ht},`$ (11)
where $`H`$ is a constant, and we choose the direction of time so that $`H\ge 0`$. In this way, assuming Eqs.(8) and (10), we can realize the exponential expansion of the universe together with the static extra dimension. The expansion parameter $`H`$ is determined later. Collecting everything, we have assumed the metric
$`ds^2`$ $`=`$ $`f(y)^2(dt^2-e^{2Ht}d\stackrel{}{x}^2-dy^2).`$ (12)
The geometry of hypersurfaces $`y`$ $`=`$ constant is that of the de Sitter space. For $`f(y)`$, Eqs.(4)-(6) provide the following equations:
$`\left({\displaystyle \frac{f^{\prime }}{f}}\right)^2`$ $`=`$ $`H^2-{\displaystyle \frac{\mathrm{\Lambda }}{6}}f^2,`$ (13)
$`{\displaystyle \frac{f^{\prime \prime }}{f}}`$ $`=`$ $`{\displaystyle \frac{\kappa _5^2}{3}}f\left[\delta (y)\mathcal{L}_1+\delta (y-L)\mathcal{L}_2\right]+H^2-{\displaystyle \frac{\mathrm{\Lambda }}{3}}f^2.`$ (14)
We require that the scale factor $`f`$ is continuous across the boundaries, while we allow its derivatives to be discontinuous. The curvature of the five-dimensional space is constant, $`R=-10\mathrm{\Lambda }/3`$, except at the boundaries. The equations (13) and (14) have different types of solutions for $`\mathrm{\Lambda }=0`$, $`\mathrm{\Lambda }>0`$ and $`\mathrm{\Lambda }<0`$.
First we describe the case $`\mathrm{\Lambda }=0`$. The solution to Eq.(13) consistent with the orbifold symmetry $`f(y)`$ $`=`$ $`f(-y)`$ is given by
$`f(y)`$ $`=`$ $`e^{-H|y|+c_0},`$ (15)
where $`c_0`$ is a constant. This solution corresponds to the solution in Refs. and , and can also be written in a form similar to the solution in Ref. by a coordinate transformation $`Y`$ $`=`$ $`[e^{-H|y|+c_0}+1]/H`$. In addition to Eq.(15), a solution $`e^{H|y|+c_0}`$ is also possible. The general solution for $`\mathrm{\Lambda }`$ $`=`$ 0 without the ansatz (12) is given in Ref.. Note that we use the solution (15) only for $`-L`$ $`<`$ $`y`$ $`\le `$ $`L`$, and keep in mind that $`f(y)`$ is a periodic function. Explicitly, we use the formulas $`|y|^{\prime }`$ $`=`$ $`ϵ(y)`$ $`-`$ $`ϵ(y-L)`$ $`-1`$ and $`|y|^{\prime \prime }`$ $`=`$ $`2\delta (y)`$ $`-`$ $`2\delta (y-L)`$, where $`ϵ(y)`$ is 1 for $`y\ge 0`$ and $`-1`$ for $`y<0`$. Calculating $`f^{\prime \prime }`$ from Eq.(15), the delta-functions $`\delta (y)`$ and $`\delta (y-L)`$ arise. For general values of $`\mathcal{L}_1`$ and $`\mathcal{L}_2`$, Eq.(15) is not compatible with Eq.(14). However, if we assume the following conditions, Eq.(15) is consistent with Eq.(14):
$`{\displaystyle \frac{\kappa _5^2}{3}}\mathcal{L}_1=-2He^{-c_0},{\displaystyle \frac{\kappa _5^2}{3}}\mathcal{L}_2=2He^{HL-c_0}.`$ (16)
We must require $`\mathcal{L}_1<0`$ and $`\mathcal{L}_2>0`$ to satisfy these conditions for $`H`$ $`>`$ 0. We must also require $`\mathcal{L}_1`$ $`+`$ $`\mathcal{L}_2`$ $`>`$ 0, since the physical length of the extra dimension $`L_{\mathrm{phys}}`$ $`=`$ $`\int dy\sqrt{-g_{55}}`$ $`=`$ $`-(6/\mathcal{L}_1+6/\mathcal{L}_2)/\kappa _5^2`$ must be positive. For $`\mathcal{L}_1`$ $`>`$ 0 and $`\mathcal{L}_2`$ $`<`$ 0, we can satisfy similar conditions with the other solution $`e^{H|y|+c_0}`$. In the case of $`\mathcal{L}_1\mathcal{L}_2`$ $`>`$ 0, we have no solution for $`\mathrm{\Lambda }`$ $`=`$ 0 under the present ansatz for the metric (12). We can determine $`H`$ and $`c_0`$ from Eq.(16) as $`H`$ $`=`$ $`-\mathrm{log}(-\mathcal{L}_1/\mathcal{L}_2)`$ $`/L`$ and $`c_0`$ $`=`$ $`\mathrm{log}(-6H/\kappa _5^2\mathcal{L}_1)`$. For $`\mathcal{L}_1`$ $`=`$ $`\mathcal{L}_2`$ $`=`$ 0, which corresponds to $`H`$ $`=`$ 0, the solution (15) reduces to $`f`$ $`=`$ constant. Then the metric is static and uniform in the extra dimension. Note that for vanishing $`\mathcal{L}_1`$ and $`\mathcal{L}_2`$ we also have the possibility $`a^{\prime }`$ $`=0`$ instead of Eq.(9).
The expansion rate depends on the position in the extra dimension. After we canonically normalize the four-dimensional coordinates so that $`ds^2`$ $`=`$ $`dt^2-v^2d\stackrel{}{x}^2`$ for fixed $`y`$, we observe that in general the effective expansion rate $`H_{\mathrm{eff}}(y)`$ $`\equiv `$ $`H/f(y)`$ depends on the position in the extra dimension: $`H_{\mathrm{eff}}(y)`$ $`=`$ $`(-\kappa _5^2\mathcal{L}_1/6)`$ $`(-\mathcal{L}_1/\mathcal{L}_2)^{-|y|/L}`$. In particular the expansion rates at the boundaries are given by $`H_{\mathrm{eff}}(0)`$ $`=`$ $`-\kappa _5^2\mathcal{L}_1/6`$ and $`H_{\mathrm{eff}}(L)`$ $`=`$ $`\kappa _5^2\mathcal{L}_2/6`$.
The relation between the expansion rate and the cosmological constant mentioned above is quite unconventional, as pointed out in Ref.. In a usual four-dimensional scenario, we have the relation $`H\sim \sqrt{V}/M_{\mathrm{Pl}}`$, where $`V`$ denotes an inflaton potential and $`M_{\mathrm{Pl}}`$ is the four-dimensional Planck mass. On the other hand, in the present case we have $`H\sim \kappa _5^2|\mathcal{L}_1|\sim L_{\mathrm{phys}}V/M_{\mathrm{Pl}}^2`$, where we put $`|\mathcal{L}_1|\sim V`$. The expansion rate is suppressed by $`M_{\mathrm{Pl}}^2`$ rather than $`M_{\mathrm{Pl}}`$, hence it may be difficult to obtain a sufficient $`e`$-folding unless we take $`L_{\mathrm{phys}}\gg 1/M_{\mathrm{Pl}}`$.
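As a numerical illustration of the $`\mathrm{\Lambda }=0`$ case, here is a sketch of ours (the values of $`\mathcal{L}_1`$, $`\mathcal{L}_2`$ and $`L`$ are arbitrary toy numbers, with $`\kappa _5`$ set to 1) that determines $`H`$ and $`c_0`$ from Eq.(16) and recovers the boundary expansion rates quoted above:

```python
import math

kappa5_sq = 1.0
L1, L2 = -0.3, 0.5   # boundary terms with L1 < 0 < L2 and L1 + L2 > 0 (toy values)
L = 1.0              # coordinate half-size of the orbifold

H = -math.log(-L1 / L2) / L                  # from the ratio of the two Eqs. (16)
c0 = math.log(-6.0 * H / (kappa5_sq * L1))   # from the y = 0 condition
f = lambda y: math.exp(-H * abs(y) + c0)     # warp factor of Eq. (15)

print(H / f(0), -kappa5_sq * L1 / 6)   # H_eff(0): both ~ 0.05
print(H / f(L),  kappa5_sq * L2 / 6)   # H_eff(L): both ~ 0.0833
```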
We can also find a solution for $`\mathrm{\Lambda }>0`$. In this case, the solution to Eq.(13) consistent with the orbifold symmetry is
$`f(y)`$ $`=`$ $`\sqrt{{\displaystyle \frac{6H^2}{\mathrm{\Lambda }}}}{\displaystyle \frac{1}{\mathrm{cosh}(H|y|+c_+)}},`$ (17)
where $`c_+`$ is a constant. Conditions for Eq.(17) to be consistent with Eq.(14) are
$`{\displaystyle \frac{\kappa _5^2}{3}}\mathcal{L}_1`$ $`=`$ $`-2\sqrt{{\displaystyle \frac{\mathrm{\Lambda }}{6}}}\mathrm{sinh}(c_+),`$
$`{\displaystyle \frac{\kappa _5^2}{3}}\mathcal{L}_2`$ $`=`$ $`2\sqrt{{\displaystyle \frac{\mathrm{\Lambda }}{6}}}\mathrm{sinh}(HL+c_+).`$ (18)
From these relations, we can determine $`H`$ and $`c_+`$ in terms of $`\mathcal{L}_1`$, $`\mathcal{L}_2`$ and $`\mathrm{\Lambda }`$. The physical length of the extra dimension is given by $`L_{\mathrm{phys}}`$ $`=`$ $`\sqrt{6/\mathrm{\Lambda }}`$ ($`\mathrm{arctan}\theta _1`$ $`+`$ $`\mathrm{arctan}\theta _2`$), where $`\theta _i`$ $`=`$ $`\sqrt{6/\mathrm{\Lambda }}`$ $`\kappa _5^2\mathcal{L}_i/6`$. Hence we must require $`\mathcal{L}_1`$ $`+`$ $`\mathcal{L}_2`$ $`>`$ 0 to ensure $`L_{\mathrm{phys}}>0`$. The shape of the solution (17) depends on the signs of the boundary terms $`\mathcal{L}_1`$ and $`\mathcal{L}_2`$. For $`\mathcal{L}_1`$ $`<`$ 0 and $`\mathcal{L}_2`$ $`>`$ 0, we have a solution with $`c_+`$ $`>`$ 0. In this case the shape of the solution in the region 0 $`<`$ $`y`$ $`<`$ $`L`$ is similar to that of the case $`\mathrm{\Lambda }`$ $`=`$ 0. The scale factor $`f`$ has a maximum at $`y`$ $`=`$ 0 and a minimum at $`y`$ $`=`$ $`L`$. For $`\mathcal{L}_1`$ $`>`$ 0 and $`\mathcal{L}_2`$ $`>`$ 0, we have a solution with $`-HL`$ $`<`$ $`c_+`$ $`<`$ 0. In this case the scale factor has a maximum at $`y`$ $`=`$ $`-c_+/H`$ and a minimum at $`y`$ $`=`$ 0 or $`y`$ $`=`$ $`L`$. For $`\mathcal{L}_1`$ $`>`$ 0 and $`\mathcal{L}_2`$ $`<`$ 0, the scale factor has a maximum at $`y`$ $`=`$ $`L`$ and a minimum at $`y`$ $`=`$ 0. We have no solution for $`\mathcal{L}_1`$ $`<`$ 0 and $`\mathcal{L}_2`$ $`<`$ 0.
The effective expansion rate $`H_{\mathrm{eff}}(y)`$ depends on $`\mathcal{L}_1`$, $`\mathcal{L}_2`$ and $`\mathrm{\Lambda }`$. If the bulk contribution $`\sqrt{\mathrm{\Lambda }}`$ is much larger than the magnitude of the boundary terms $`\kappa _5^2\mathcal{L}_1`$ and $`\kappa _5^2\mathcal{L}_2`$, the effective expansion rate is dominated by $`\mathrm{\Lambda }`$ in the whole region of the orbifold: $`H_{\mathrm{eff}}(y)\simeq \sqrt{\mathrm{\Lambda }/6}`$. If $`\mathrm{\Lambda }`$ is much smaller than the boundary contributions, $`H_{\mathrm{eff}}(y)`$ reduces to the result in the case $`\mathrm{\Lambda }`$ $`=`$ 0: $`H_{\mathrm{eff}}(0)`$ $`=`$ $`|\kappa _5^2\mathcal{L}_1/6|`$ and $`H_{\mathrm{eff}}(L)`$ $`=`$ $`|\kappa _5^2\mathcal{L}_2/6|`$. Notice that the extra dimension is static even in the presence of the bulk cosmological constant $`\mathrm{\Lambda }`$. This constant works to generate the metric distortion in the extra dimension and also to inflate the four-dimensional subspaces.
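For $`\mathrm{\Lambda }>0`$ the two conditions (18) can be solved numerically for $`H`$ and $`c_+`$; a sketch of ours (all inputs are arbitrary toy values, and the use of scipy's root finder is just one convenient choice):

```python
import numpy as np
from scipy.optimize import fsolve

kappa5_sq, Lam, L = 1.0, 0.6, 1.0     # toy inputs (ours)
L1, L2 = -0.3, 0.5                    # boundary terms, here with L1 < 0 < L2

def conditions(x):
    H, cp = x
    s = 2.0 * np.sqrt(Lam / 6.0)
    return [kappa5_sq / 3.0 * L1 + s * np.sinh(cp),          # first of Eq. (18)
            kappa5_sq / 3.0 * L2 - s * np.sinh(H * L + cp)]  # second of Eq. (18)

H, cp = fsolve(conditions, x0=[0.5, 0.5])
print(H, cp)   # H > 0 and cp > 0, as expected for L1 < 0 < L2
```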
Finally we describe the case $`\mathrm{\Lambda }<0`$. The solution to Eq.(13) is obtained as follows:
$`f(y)`$ $`=`$ $`\sqrt{-{\displaystyle \frac{6H^2}{\mathrm{\Lambda }}}}{\displaystyle \frac{1}{\mathrm{sinh}(H|y|+c_{-})}}.`$ (19)
This scale factor is finite everywhere in the extra dimension if $`c_{-}>0`$. We can also find a similar solution $`\propto `$ $`1/\mathrm{sinh}(-H|y|+c_{-})`$ with $`c_{-}>HL`$. Conditions for Eq.(19) to be consistent with Eq.(14) are written as
$`{\displaystyle \frac{\kappa _5^2}{3}}\mathcal{L}_1`$ $`=`$ $`-2\sqrt{-{\displaystyle \frac{\mathrm{\Lambda }}{6}}}\mathrm{cosh}(c_{-}),`$
$`{\displaystyle \frac{\kappa _5^2}{3}}\mathcal{L}_2`$ $`=`$ $`2\sqrt{-{\displaystyle \frac{\mathrm{\Lambda }}{6}}}\mathrm{cosh}(HL+c_{-}).`$ (20)
To satisfy these conditions, we must require $`\mathcal{L}_1`$ $`<`$ 0 and $`\mathcal{L}_2`$ $`>`$ 0. The scale factor (19) has a maximum at $`y=0`$ and a minimum at $`y=L`$. For $`\mathcal{L}_1`$ $`>`$ 0 and $`\mathcal{L}_2`$ $`<`$ 0, we can satisfy similar conditions with the other solution $`\propto `$ $`1/\mathrm{sinh}(-H|y|+c_{-})`$. We have no solution for $`\mathcal{L}_1\mathcal{L}_2`$ $`>`$ 0. The physical length of the extra dimension $`L_{\mathrm{phys}}`$ is determined by $`\mathcal{L}_1`$, $`\mathcal{L}_2`$ and $`\mathrm{\Lambda }`$ in a similar way to the cases $`\mathrm{\Lambda }`$ $`=0`$ and $`\mathrm{\Lambda }>0`$.
A few comments are in order. We can also find inflationary solutions to Eqs.(4)-(7) by assuming $`b`$ $`=`$ constant instead of Eq.(10). However, with this assumption, it follows that we must tune the parameters $`\mathrm{\Lambda }`$, $`\mathcal{L}_1`$ and $`\mathcal{L}_2`$ to obtain solutions. This tuning is similar to that in Refs. and . On the other hand, with the assumption (10) which we adopted, we do not need to tune the parameters to find solutions, and can rather determine the expansion parameter $`H`$ in terms of these parameters. Hence we expect that the assumption (10) is more natural than $`b`$ $`=`$ constant unless we introduce some principle to justify such a tuning of parameters.
The general solution for $`\mathrm{\Lambda }=0`$ is given in Ref., where $`u`$, $`a`$ and $`b`$ in Eq.(3) are allowed to be general functions of $`t`$ and $`y`$. In that analysis it is discussed that the solution with the static extra dimension corresponds to a specific choice of initial conditions, and all other choices of the initial conditions lead to a collapse of the extra dimension or the three spatial dimensions. This implies that the solution (15) for $`\mathrm{\Lambda }=0`$ is unstable under a small perturbation. It is worth studying whether the situation is the same or not for $`\mathrm{\Lambda }\ne 0`$.
In addition to the problem stated above, there remain a lot of works to be done in order to construct a realistic scenario in this framework. To describe the scenario, we must include the dynamics of inflaton fields. For example, we replace the bulk cosmological constant $`\mathrm{\Lambda }`$ by a Lagrangian for bulk inflaton fields, or take into account the time dependence of the Lagrangians for the boundary inflaton fields included in $`\mathcal{L}_1`$ and $`\mathcal{L}_2`$. We expect that the inflating universe is in a false vacuum, and the universe goes to the true vacuum in the end. In such a set-up, we should discuss various subjects: slow-roll conditions, $`e`$-folding number, density fluctuations, end of inflation and reheating temperature, big-bang nucleosynthesis, and so on.
In summary, we have constructed new inflationary solutions to the Einstein equation in a simple five-dimensional theory with an orbifold extra dimension $`S^1/Z_2`$. We have considered inflation caused by cosmological constants for the five-dimensional bulk and the four-dimensional boundaries. The solutions have non-trivial behaviors in the extra dimension. In addition to the solution for a vanishing bulk cosmological constant, we have obtained solutions for a non-zero one.
We would like to thank Y. Okada for a useful discussion. This work was supported in part by the Grant-in-Aid for Scientific Research from the Ministry of Education, Science and Culture, Japan.
no-problem/9905/physics9905023.html
# Project for the establishment of the CENTRO DI DIVULGAZIONE DELLA CULTURA SCIENTIFICA (Centre for the Popularization of Scientific Culture) of Sesto Fiorentino

The Comune of Sesto Fiorentino (Province of Florence, Italy) is basing a large part of its intervention in the public schools of its territory on this project, within the framework of the School Autonomy Reform introduced by Minister L. Berlinguer of the Prodi and D'Alema governments. The English version is going to be published as soon as possible!
## I Foreword.

One of the most peculiar and important characteristics of "civil society" in the City of Sesto Fiorentino is without doubt the massive presence of associations; what is striking is not only their number, literally immense in proportion to the inhabitants, but also their variety in organizational character, in interests, in goals.

Striking are the historical importance of the Case del Popolo of Sesto, of the parish groups, and the recreational and humanitarian initiatives they give rise to, as well as the tradition of the Venerabile Misericordia. Above all, striking is the vitality with which new associations, sporting, cultural and political, are continually born and grow, flourishing vigorously alongside the great protagonists that preceded them.

For this reason Sesto Fiorentino must be considered a unique example of a city in which it is easy and encouraging to build new initiatives in the associative field, in which a long tradition, and a far-sighted Municipal Administration, have allowed this great richness to evolve.

In this city there exist many proposals for cultural initiatives in the humanistic sphere: the Italian tradition, after all, has seen the cultural evolution of citizens and workers take place above all through literature, painting, theatre, music, the forms of communication that tell of those same people, of their struggles, their conditions, their dreams. Lately this panorama has developed in a new direction, responding to cultural transformations of society as a whole.

Industrial societies are transforming themselves ever more rapidly, at the pace of the new technologies, and this produces in culture two connected and only apparently antithetical effects: interest in the exact sciences, even in the most difficult aspects of those disciplines (of which technologies and progress are a by-product), and the appeal of the natural world, the birth of attention to ecological equilibria (which have often entered into dramatic conflict with that progress). There is a reawakening, a new curiosity towards living beings, plants and animals; there is an ecological awareness that laboriously makes its way, that struggles to emerge, to demand from culture explanations and new attitudes, different values; there is an attention to one's own health, and to one's own rights in environmental matters, which makes the citizen more curious about how the territory as such is governed, and curiosity becomes demand, petition, mobilization. The associative world registers this reawakening, and new associations devoted to the environment and to naturalistic culture have been formed.

More difficult, perhaps because less "playful", is the task of cultural groups that turn their attention to the popularization of the exact sciences in themselves; in 1990 the activity of Math90 started in Sesto Fiorentino, operating around the public Library, with the ambitious intention of making mathematics a subject of socialization, amusement and "recreational" cultural enrichment; in 1998 two young associations, Galea and Perseverare Ovest, operating at the inter-municipal level between Sesto, Campi and Calenzano, set up an exhibition in which pupils of compulsory and secondary schools came into contact with elementary and less elementary facts of Physics and Chemistry, arousing considerable interest, a sign of the great possibilities that the field of popularization of the sciences, "recreational" and not only, offers in Sesto Fiorentino.

As can be seen, there is plenty of work to be done in the field of the diffusion of scientific culture: the society we live in requires an ever greater number of technicians specialized in the most varied disciplines, but excessive specialization often leads to losing contact with science as a whole, and whoever knows everything about one subject often ignores everything else. This effect leads to the formation of highly cultivated technological tribes closed in on themselves, at times distrustful of or in competition with one another, and this is a regression, not a progress, of civil life: this science, the patrimony of sectarian professionals, is seen by the non-initiated as something extremely mysterious and obscure, so much so as to represent for many what alchemy and magic were in the Middle Ages. This can be countered by trying to bring the population closer to science in a simple and intuitive way, connecting phenomena observable every day to the laws of Physics, Chemistry and Mathematics.

Although the problem of pollution is strongly felt, the population waits for solutions handed down from above; it can nevertheless be educated to respect the environment, first of all through knowledge of the territory and of its nature, amid which it often lives distractedly. The citizenry can not only be accustomed to respect for the environment, to the recycling of waste, to the reuse of used materials and to energy saving, but can also become a valid support to the Institutions in the fight against pollution and against the degradation of the territory it inhabits, if suitably encouraged to defend it.
## II The idea.

Such premises lead the associations Galea (Navigare nella Cultura) and Set (Scienza, Educazione e Territorio) to take action in order to spread scientific culture, consistently with their work in that field and with particular attention to popularization. Its characteristics, after all, make this city a truly attractive context for those who want to work in the field of the natural, physical and mathematical sciences, at various levels.

In addition to all this we must bear in mind that Sesto Fiorentino is about to be affected by powerful changes, which will concern its environment and the fruition of scientific culture.

* Within a few years important environmental transformations will take place, with the start of the establishment of the Parco della Piana and the recovery of the Parco di Doccia, with consequent changes to the road network.
* The transfer of the Polo Scientifico to the Piana is by now very close; it will bring to Sesto Fiorentino the centre of gravity of university scientific culture and research in Tuscany, besides suddenly increasing its student population.
* Following the reform of university autonomy, the life and management of the University will be deeply modified; the Faculties will be forced to open up to those parts of the civil and economic world able to offer them services, collaborations and synergies.
* Following the reform of school autonomy we will witness profound changes in the life and management of schools, which will similarly be able to build collaborations with all external cultural operators.

The community of Sesto, its Municipal Administration, and more generally its Institutions and its socio-economic realities, should not be spectators but protagonists of all this: protagonists of a true challenge to make their own that mine of culture, and of work, that these changes can become!

Great, as is well known, is the potential of the social fabric of Sesto, as its history demonstrates; great has been the far-sightedness of the associations, of the Municipal Administration and of many economic operators. Important is the presence of Public Education, with a Liceo Scientifico, a Liceo Artistico and an Istituto per Ragionieri. Finally, very great is the generosity of all those who have worked in culture up to now.

Perhaps, however, to take up that challenge successfully something even more organized, targeted and strengthened is needed. Perhaps what exists today is not yet enough. Indeed:

* there does not exist a single scientific cultural operator of significant size with which the Administration, companies, schools and the University dialogue and interact;
* the knowledge that the average citizen of Sesto has of the natural environment that surrounds him is in any case very poor, all the more so in a period of "ebb" of environmentalist culture and presence;
* schools do not yet have the tools, programmes or mentality to integrate fruitfully with the external environment: they must be "sought out", "flushed out" if necessary;
* the transfer of the science faculties to Sesto is regarded with condescension by the university environment, which does not know the local reality, much less is attracted by it. On the other hand, neither is the local reality, Institutions and civil society, fully aware of the opportunity that the encounter with the academic world represents.

We fear that if this situation is not rapidly changed, great opportunities will be lost: the encounter with the science faculties, as well as the realization of the Parco della Piana or the rearrangement of the hillside park. It is therefore fully necessary to work on the question of the cultural offering in environmental and scientific matters.
## III Description of the project.

The idea around which we are working, the subject of this Project, is the establishment, on the territory of the Comune of Sesto Fiorentino, of a Centre for the Popularization of Scientific Culture (hereinafter referred to as Cdcs), an autonomous and self-managed non-profit body, which will grow as a patrimony of the city both through amateur activities (exhibitions, lecture series, training courses, scientific recreation) and as a genuine social enterprise aimed at providing specific services to institutions and citizens.

The Cdcs will have to be a multifunctional structure comprising:

* a scientific-naturalistic laboratory;
* a station for the production and diffusion of multimedia materials;
* a space in which to hold seminars and to show slides and audiovisual material;
* a starting point for educational excursions on the territory;
* a permanent media library in which scientific material of various kinds is kept and can be consulted by the population.

Naturally, a single structure is needed for all this. Alongside the seat of the scientific-naturalistic laboratory (which because of its peculiar character needs a specially equipped space), we intend to avail ourselves of the collaboration of structures already existing and already operating on the territory as attractors of cultural initiatives. Notably, of the Municipal Library, which in the winter months is open in the evening hours as well, and is therefore particularly suitable as a meeting point for gatherings, including those with adults.

We describe below the spaces and functions that we ask to be able to realize in the single seat assigned to the Cdcs by the Municipal Administration of Sesto Fiorentino.
### A Scientific laboratory with a chemical, physical and mathematical orientation.

Within the project of the Cdcs of Sesto Fiorentino we consider it appropriate to create a laboratory dedicated to activities of education and experimentation in chemistry, physics and mathematics, addressed both to schools and to individual citizens with an interest in the subjects in question.

The functions that will be activated within this structure will be the following:

* Experiments and easily understandable practical demonstrations of a physical, chemical and mathematical character, carried out with scientific instruments by qualified personnel.

Such experiments, because of the complexity of their realization and the use of material unsuitable in particular for the youngest users, will be carried out by operators who will illustrate all the phases of the experiments in a clear and exhaustive way, possibly with the help of audiovisual aids, handouts of our own production and hypertexts, as described in detail below.

* Experiments that can be performed by the users themselves, to be realized with easily obtainable materials of everyday use, under the supervision of the laboratory staff.

These experiences are very useful for bringing the public closer to scientific subjects by way of everyday experiences through which we constantly come into contact with nature and its laws. The activities just described involve two phases: a first one in which the staff explain both the theoretical foundations underlying the observed phenomenon and the realization of the experiment, and a second one consisting in the practical realization of what has just been learned.

These activities are particularly suitable for primary schools, since the second phase can represent a genuinely playful moment of learning.

* Realization of a permanent documentation centre both on the activity carried out by the laboratory itself and by the other structures within the Centre, and on the scientific initiatives that will be carried out on the territory by other bodies and structures analogous to ours, in particular by primary and lower secondary schools.

* Realization of an information point, with the task of being a point of reference for anyone wishing to carry out scientific activities, providing assistance by qualified personnel and structures suitable for the realization of such activities. In particular, this information point can be a useful tool for teachers who wish to interact more closely with the Centre itself.

* A meeting point with the scientific realities of the neighbouring areas, for organizing guided visits to museums, laboratories, industries, research centres, astronomical observatories, etc.

* Realization of a mathematics laboratory whose purpose is to bring young people closer to the study of mathematics through tactile and visual experiences, trying to turn mathematical applications into play. It is our intention to build teaching tools along these lines, which can both be used inside the Cdcs and be lent to schools or to private individuals.

The laboratory aims to be a reality as open as possible to the citizenry; for this reason it will seek the collaboration of all those who show interest in the activities described above, and it will also organize initiatives outside its own seat so as to come closer to the individual citizen.
### B Scientific laboratory oriented to natural history, biology and ecology
The laboratory, multifunctional in character and open to continuous use, intends to make equipment, materials and qualified staff available to users in order to:
* Carry out educational activities with schools of every type and level so as to acquire, through naturalistic experiences, notions relating both to ecological farming methods and to the knowledge of the natural environment of the district;
* Create workshops for children and adults where, in their free time, they can experience organic vegetable growing, gardening, composting, identification of plant species, plant physiology and herbalism;
* Set up an information and documentation center (equipped with audiovisuals, books, specialized journals, multimedia material…) aimed at the acquisition of historical and environmental knowledge;
* Set up a center for the dissemination of techniques of eco-building, bioclimatics, lagooning and saving of energy resources through alternative and renewable energies (solar, biogas, etc.), of which the building itself can be a concrete example;
* Carry out educational and explanatory activities on the settlements of the "Piana" and of Monte Morello, with particular reference to the archaeological evidence of demographic development since prehistory.
The activities to be undertaken in the laboratory will be interdisciplinary in character, and will therefore be led by experts from every naturalistic field (biologists, naturalists, foresters, agronomists, etc.) in order to provide the most exhaustive explanations in every scientific-naturalistic domain, and in an attempt to demonstrate the importance and usefulness of making different scientific sectors communicate with one another, both during research and during dissemination.
The interdisciplinary approach of this activity thus aims to create an exhaustive knowledge of the natural world, one reasoned through and examined from every angle.
What characterizes the activity of scientific-naturalistic education and knowledge of the territory is a basic methodology founded on direct contact between the user and the topic addressed, and on a depth of treatment differentiated for each category of recipients: schoolchildren, students, the elderly, or enthusiasts of all ages.
In carrying out every educational-recreational activity, the educational purpose will always be kept in mind, aiming at an understanding of the effects that certain human actions can have on ecosystems.
The center provides the support point and the base for:
* Vegetable-growing activities with nursery, primary, middle and secondary schools
* Educational-naturalistic activities for middle and secondary schools
* Eco-play center: recreational activities for children in their free time
* Practical activities for adolescents, adults and the elderly, also for social purposes (at-risk groups, the disabled)
* Itineraries for discovering the historical features of the territory of the plain
* Nature trails in the wetlands of the plain and along the canals
* Retrieval of information concerning: nursery gardening and organic farming, composting, bio-architecture, environmental education, naturalistic aspects of the district
* Creation of a communication network with other Environmental Education Centers nationally (LABNET) and internationally (INTERNET).
### C Organization of the educational activities.
Activities for the dissemination of the Natural Sciences (hereinafter NS) are planned, addressed both to the citizenry at large and to sectors of it defined by specific working and social characteristics. These activities are grouped into the four sectors illustrated below, Dissemination, Schools, Free University and Individual, distinguished by the user base they address and characterized accordingly.
#### 1 Dissemination sector.
The first type of activity aims at spreading scientific knowledge of the NS at the popular level and is addressed to the citizenry without distinction; these are activities designed to make the NS an occasion for socializing, recreation and deeper study, and essentially intended for leisure time. The objective is to put the user in contact with the NS, with the problems they pose at the theoretical level, and with the very world of those who work in the NS professionally.
The activities of the dissemination sector therefore do not require an overall organic program, and are conceived and realized as monographic initiatives, none requiring attendance at another.
The scientific contents the user comes into contact with are of varying, and not necessarily small, "difficulty": there is no intention of relegating to the dissemination sector only "simple" concepts, problems and topics. Rather, it is in the form of their presentation that one must take into account that such contents are meant as an occasion for socializing and "cultural" recreation rather than for enriching a "professional" toolkit.
Along these lines, preference will be given to a discursive, illustrative presentation, aimed at describing the phenomenology rather than at technical formalization, intended to bring the user into contact with the unfolding of a process or the working of a mechanism. The phenomenon is thus explained, even in a tight and above all rigorous fashion, without conceding anything to gratuitous spectacle, but necessarily dispensing with the technical and mathematical formalization that concerns only a professional approach.
The dissemination activities of the Cdcs can be grouped into four types, according to the physical place where they take place and the staff in charge of them.
##### a Monographic dissemination meetings.
Monographic lessons on the NS are held, centered above all on images projected with an overhead projector and slide projector; videocassette material is viewed and discussed; demonstrations and experiments drawn from the program for compulsory schools, considered particularly significant for non-specialists, are performed and explained.
Dissemination activities are also proposed that make use of the contribution, as scientific animators, of people from outside the Associations responsible for the project. We have in mind researchers, scientists, technicians and writers in general, invited by the Cdcs to present, in the context and spirit of the Dissemination activities, their own works, results and publications considered of interest.
These meetings are planned both at the Center's own premises and at other venues of public interest, led by scientific animators belonging to the Associations responsible for the Cdcs project or in collaboration with the staff of those other venues. Specifically, this proposal of dissemination activity is addressed to: libraries, clubs of the Sesto associational network, social centers, parish clubs, private cultural and sports clubs, businesses, public venues, shopping centers, the local health authority (Asl), workers' recreational clubs and others of similar character and interest.
##### b Educational outings.
In collaboration with the scientific dissemination workers directly connected with the university and the research centers, guided visits are made to astrophysical observatories, national and European laboratories, museums, natural parks and aquariums, as well as excursions to panoramic places with astronomical observations.
The scientific leaders of the Educational outings, who may or may not belong to the Associations responsible for the project, will be designated in accordance with the Regulations of the Cdcs.
##### c Production of teaching materials.
The content of the various scientific teaching initiatives will also be presented in the form of monographic handouts self-produced by the members of the Associations responsible for the project, which may take the forms listed below or others.
* Printed monographic handouts.
* Interactive handouts in hypertext form on CD-ROM or floppy disk.
* Slide collections.
* Photograph collections.
* Video handouts.
* Audio handouts.
Said dissemination material will be marketed in accordance with the law on non-profit organizations, and the proceeds will be reinvested in the running of the Cdcs according to the Regulations and the resolutions of the Center's Board of Directors.
#### 2 School activities.
A fundamental chapter of the NS program is naturally the work in schools, also in view of the implementation of the school-autonomy reform. Several types of activity can be identified:
* cycles of lessons on modern physics (quantum mechanics and relativity), bringing into the school topics almost totally absent from the current ministerial programs;
* support lessons for the institutional curriculum: we could offer teachers the chance to lighten their own teaching load by taking care of in-depth treatments and reviews of topics in the syllabus;
* special courses for groups of students who wish to study the subject in particular depth;
* experiments, within the Cdcs laboratory, conceived and realized together with the teachers so as to answer precise curricular needs;
* seminars on forest ecosystems and their species, in monographic and/or sequential (course-like) form, aimed at specific in-depth studies;
* courses in ecology, botany and zoology, monographic and/or sequential (course-like), aimed not so much at professional scientific training as at making participants aware of the complexity and delicacy of the equilibria of ecosystems;
* naturalistic outings and treks for the purpose of observing ecosystems, their components and their dynamics.
#### 3 Free University.
On the Cdcs premises, self-managed courses are organized, with programs set by leaders designated according to the Center's Regulations from among internal members or external staff; taken together, these courses constitute a genuine curricular cycle which we call the Free University.
This is the ambitious and heartfelt project of creating, outside the institutional cycle of school and university studies, an opportunity for professional or semi-professional study of the NS, offered to citizens who, for various reasons (economic conditions, personal history, choice of work, other needs), were unable to attend university courses despite having an interest in and inclination for the subject.
The Free University involves enrollment in and attendance of cycles of lessons, from which one expects not only the assimilation of notions and concepts belonging to the discipline studied, but also the acquisition of a certain technical toolkit potentially usable in a professional setting.
#### 4 Individual activities.
The individual activities are cultural pathways addressed to the citizen as individual lessons or self-teaching sessions in which he is followed by our qualified staff. They comprise the Scientific after-school service (addressed to students and pupils of public and private schools who want to support or deepen their preparation), Professional pathways (designed specifically for the needs of workers) and the Scientific play center (addressed to children, young people and adults who want to approach the world of the NS through experiments and audiovisuals).
##### a Scientific after-school service.
The public and private school offering in our country has great merits and limits on which it would take too long to dwell here, and which it is not our intention to criticize.
It is nevertheless a fact that a very large share of the students and pupils of Italian schools resort, for the most diverse reasons and with highly variable outlays, to private remedial and support lessons. It is another incontrovertible fact that the supply of private lessons remains almost entirely outside any legislation, and is therefore left almost completely to the arbitrariness of a teacher-student bargaining that works like a double blackmail: those who offer private lessons do so mainly for lack of other regular income; those who seek them do so because of their inability, partly their own and partly the school system's, to learn through "institutional" channels the notions their curriculum requires.
Thus those who need this kind of educational support must turn either to a clandestine market with completely fluctuating prices (and with teaching staff whose preparation is generally beyond the user's control), or to private institutes that do operate in the open, but often at wildly prohibitive rates. Needless to say, this situation amounts ipso facto to an unacceptable selection of students by wealth!
The public intervention that sets the State school in motion can try not to stop at the school gates, and can devise, through the local authority, forms of help for students in difficulty, offering them a valid alternative to the extortionate market of private lessons. Naturally this must happen in a dynamic and not a welfare-dependent form: in the case of the project we are proposing, the Municipal Administration promotes the Cdcs, which guarantees a "municipalized" service of individual lessons in the scientific subjects, working at rates designed to make accessible to the largest number of users a service whose costs are currently burdensome.
We are thinking not so much of replacing the private lesson with a "municipalized" lesson given at the individual student's home: rather, we will create an "after-school" service in which students can be followed individually, with the same attention they would receive in a private lesson, but with the benefit of the Cdcs premises and of the teaching and school materials usable there! By studying the characteristics and needs of the preparation of the struggling students who turn to the Cdcs, the Center's teachers will encourage them to discuss the subjects examined among themselves, forming groups in which individual, rigorous study is followed by comparisons and debates on the topics studied.
The teachers will be designated by vote of the Management Committee of the Cdcs in accordance with the Center's Regulations, from among members of the Associations responsible for the project or non-member citizens, and will work as freelance professionals, within the terms of the law.
##### b Individual professional pathways.
For workers and/or the unemployed who are interested in improving their knowledge in the scientific-naturalistic field, in order to upgrade their professional services, whether self-employed or salaried. These pathways will always have a practical-applicative character, so that users can immediately put the knowledge acquired into practice.
##### c Scientific-naturalistic play center.
The play center, to be housed in the same rooms as the laboratory but at different times, is addressed to the whole population, giving anyone who so requests the possibility of performing or watching the experiments they wish, of consulting all the multimedia materials available, and of receiving explanations on scientific matters from the operators present. Besides being a moment of play for children, this can be an opportunity for those who already take a non-professional interest in science to deepen their knowledge, and for those entirely new to it to begin taking an interest.
It is also meant to be the place where the youngest can begin to get to know nature directly, through sensory experiences, and indirectly, through games, images and audiovisuals featuring subjects found in nature.
The characteristic of these activities will be to bring the young pupils into contact with the world of nature through the five senses, educating them through play to the knowledge of, and curiosity about, the world around them and above all to respect for the environment in the broad sense.
## IV Expected results.
The environmental education center will contribute, directly and indirectly through its functions, to an increase in employment, even though voluntary work and public contributions are foreseen, by running training, refresher, dissemination, research and information courses (also through telematic networks and ecologically oriented networks).
## V Innovative character.
The means employed will allow us to give a new character to the orientation of our activity, which aims to neglect or overlook no type of user, and to treat the topics studied exhaustively, in a manner appropriate to the needs of the participants. This implies not only the production and diffusion, through publications of various kinds (botanical, zoological, landscape-related, nature-tourism, recreational and teaching materials), of ecological and scientific-naturalistic notions, but also the will to foster in people a new ecological awareness and knowledge.
The Cdcs is new also for its mentality, or perhaps better, for its attitude toward the world of the sciences: it aims to bring the various scientific disciplines together, in particular those concerning the study of natural phenomena, during research, during discussion and during dissemination alike.
Finally, the great novelty lies in the objective, as arduous as it is important and innovative, of looking at environmental education and education in the discovery of the territory as the means to form an ecological knowledge and awareness in citizens, to be translated subsequently into a spontaneous, eco-compatible lifestyle on the part of the whole community.
## VI Reproducibility of the project.
The educational center, besides offering cultural and scientific support, is at the same time a place of meeting and social exchange; this type of function can therefore be extended to analogous centers devoted to other areas of interest, such as the historical-cultural, architectural and archaeological heritage, etc., acting as poles of creative revitalization in the territory and in social interrelations and exchanges.
The provision of a plan that is variously articulated yet organic as a whole ensures a proper insertion into the territory even of specialized interventions such as wildlife oases, natural parks, hunting reserves, lagooning plants, etc., making it possible to satisfy different functions and needs and to ease the tensions and competing demands that currently contend for the few available areas.
## VII Conclusions.
The Associations responsible for this project do not hide that they have in mind to build, piece by piece, a genuine employment opportunity for young scholars who, in the future, may turn it into a full-time professional activity.
In keeping with the inspirations and social aims set out in their statutes, the Galea and Set Associations are counting on this project to create, in the Municipality of Sesto Fiorentino, a structure that will become a precious asset for the town community and at the same time an opportunity to create employment in the "social services" of the cultural sphere. Thus what today is a contribution for cultural-recreational activities in the strictly non-profit sector, the start of an associational activity by a group of enthusiasts, could grow into a far-sighted investment capable of bearing fruit in terms of quality of life in Sesto, accessibility of culture, and employment.
We look at the construction of the Cdcs with great enthusiasm, first of all as a social fact: beginning to talk about it, drafting the present project, trying to sketch it in our aspirations and our imagination, testing it against objective conditions and against the lessons each of us draws from past experience, has been an operation of great human and organizational value. For the first time, as far as we know, a pre-existing Association (Galea) and a newly formed one (Set) have sat down at the same table to attempt a joint creation, a unifying project, reversing the latent process of fragmentation that often "enriches" the associational landscape by breaking up existing groups. It seemed to us a road worth taking, even against the individual pride of the participating associations, even against jealousy over one's own identity, betting more on joint construction, on the service that can only be provided together, than on how good each can be on its own.
It seemed to us an ambitious and distant project, and for that very reason one to pursue!
And surely even more ambitious is the project of making the Cdcs grow to the point of becoming, in a distant future that we will work to bring closer, a real employment opportunity, a way to work in a field congenial to us, making ourselves useful to the territory and masters of our own work.
no-problem/9905/hep-th9905045.html
# On Conformally Compactified Phase Space
## 1 Introduction
The discovery of the covariance of Maxwell's equations with respect to the conformal group $`C=\{L,D,P_4,S_4\}`$ (where $`L`$, $`D`$, $`P_4`$, $`S_4`$ denote Lorentz, dilatation, Poincaré and special conformal transformations, respectively, building up $`C`$) has induced several authors to conjecture that Minkowski space-time $`𝕄=ℝ^{3,1}`$ may be densely contained in conformally compactified<sup>1</sup><sup>1</sup>1From which Robertson-Walker space-time $`𝕄_{RW}=S^3\times R^1`$ is obtained, familiar to cosmologists (being at the origin of the Cosmological Principle ), where $`R^1`$ is conceived as the infinite covering of $`S^1`$. space-time $`𝕄_c`$:
$$𝕄_c=\frac{S^3\times S^1}{ℤ_2}$$
(1)
often conceived as the homogeneous space of the conformal group:
$$𝕄_c=\frac{C}{c_1}$$
where $`c_1=\{L,D,S_4\}`$ is the stability group of the origin $`x_\mu =0`$.
As is well known, $`C`$ may be linearly represented by $`SO(4,2)`$, acting in $`ℝ^{4,2}`$ and containing the Lorentz group $`SO(3,1)`$ as a subgroup which, because of the relevance of space-time reflections for natural phenomena, should be extended to $`O(3,1)`$. But then $`SO(4,2)`$ should also be extended to $`O(4,2)`$, including conformal reflections $`I`$ (with respect to hyperplanes orthogonal to the $`5^{th}`$ and $`6^{th}`$ axes), whose relevance for physics should then be expected as well. To start with, in fact, in this case $`𝕄_c=C/c_1`$ appears not to be the only automorphism space of $`O(4,2)`$, since:
$$I𝕄_cI^{-1}=I\frac{C}{c_1}I^{-1}=\frac{C}{c_2}=ℙ_c=\frac{S^3\times S^1}{ℤ_2}$$
(2)
where $`c_2=\{L,D,P_4\}`$ is the stability group of infinity. Therefore, $`c_1`$ and $`c_2`$ being conjugate, $`𝕄_c`$ and $`ℙ_c`$ are two copies of the same homogeneous space of the conformal group including reflections, and, as we will see, both are needed to represent the group linearly . Because of eq. (2) we will call $`𝕄_c`$ and $`ℙ_c`$ conformally dual.
There are good arguments (see also footnote 4) in favor of the hypothesis that $`ℙ_c`$ may represent conformally compactified momentum space $`ℙ=ℝ^{3,1}`$. In this case $`𝕄_c`$ and $`ℙ_c`$ build up conformally compactified phase space, which is then a space of automorphisms of the conformal group $`C`$ including reflections.
## 2 Conformally compactified phase space
For simultaneous compactification of $`𝕄`$ and $`ℙ`$ in $`𝕄_c`$ and $`ℙ_c`$ no exact Fourier transform is known. It can only be approximated by a finite lattice phase space .
An exact Fourier transform may be defined instead in two-dimensional space-time, when $`𝕄=ℝ^{1,1}=ℙ`$, for which
$$𝕄_c=\frac{S^1\times S^1}{ℤ_2}=ℙ_c$$
(3)
Then, inscribing in each $`S^1`$, of radius $`R`$, of $`𝕄_c`$ and in each $`S^1`$, of radius $`K`$, of $`ℙ_c`$ a regular polygon with
$$2N=2\pi RK$$
(4)
vertices, any function $`f(x_{nm})`$ defined on the resulting lattice $`M_L\subset 𝕄_c`$ is correlated to its Fourier transform $`F(k_{\rho \tau })`$ on the lattice $`P_L\subset ℙ_c`$ by the finite Fourier series:
$`f\left(x_{nm}\right)={\displaystyle \frac{1}{2\pi R^2}}{\displaystyle \sum _{\rho ,\tau =-N}^{N-1}}\epsilon ^{\left(n\rho -m\tau \right)}F\left(k_{\rho \tau }\right)`$
(5)
$`F\left(k_{\rho \tau }\right)={\displaystyle \frac{1}{2\pi K^2}}{\displaystyle \sum _{n,m=-N}^{N-1}}\epsilon ^{-\left(n\rho -m\tau \right)}f\left(x_{nm}\right)`$
where $`\epsilon =e^{i\frac{\pi }{N}}`$ is the $`2N`$-th root of unity. They may be called Fourier transforms since for $`R\to \infty `$ or $`K\to \infty `$ (or both) they coincide with the standard ones. This further confirms the identification of $`ℙ_c`$ with momentum space, which is here characterized on purpose geometrically rather than algebraically (Poisson bracket). On this model the action of the conformal group $`O(2,2)`$ may be easily implemented and tested.
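As a concrete check of the pair (5), the following sketch (Python with NumPy; the lattice size and the random test data are arbitrary choices of ours, and the relative signs in the two kernels are the ones assumed in the reconstruction above) verifies that the two sums invert each other exactly once the constraint (4), $`2N=2\pi RK`$, is imposed.

```python
import numpy as np

# Numerical check of the finite Fourier pair (5).  N, R and the test data
# are arbitrary; the kernel signs are those assumed in the reconstruction.
N, R = 8, 1.0
K = 2 * N / (2 * np.pi * R)                  # the constraint (4): 2N = 2*pi*R*K
eps = np.exp(1j * np.pi / N)                 # the 2N-th root of unity
idx = np.arange(-N, N)                       # labels n, m, rho, tau = -N ... N-1
M = eps ** np.outer(idx, idx)                # M[n, rho] = eps^(n*rho)

rng = np.random.default_rng(0)
F = rng.normal(size=(2 * N, 2 * N)) + 1j * rng.normal(size=(2 * N, 2 * N))

f = (M @ F @ M.conj().T) / (2 * np.pi * R**2)        # F(k) -> f(x)
F_back = (M.conj() @ f @ M.T) / (2 * np.pi * K**2)   # f(x) -> F(k)
assert np.allclose(F_back, F)                # exact inversion on the lattice
```

The exactness of the round trip rests on the geometric sum over the lattice, $`\sum _{\rho =-N}^{N-1}\epsilon ^{(n-n^{})\rho }=2N\delta _{nn^{}}`$, which is precisely where the constraint (4) enters.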
## 3 Conformal duality
The nonlinear, local action of $`I`$ on $`𝕄`$ is well known; for $`x_\mu \in 𝕄`$ we have:
$$I:x_\mu \to I\left(x_\mu \right)=\pm \frac{x_\mu }{x^2}$$
(6)
For $`x^2\ne 0`$ and $`x_\mu `$ space-like, if $`x`$ indicates the distance of a point from the origin, we have <sup>2</sup><sup>2</sup>2$`I`$ maps every point inside the sphere $`S^2`$, at a distance $`x`$ from the center, to a point outside of it at a distance $`x^{-1}`$. For $`𝕄=ℝ^{2,1}`$ the sphere $`S^2`$ reduces to a circle $`S^1`$, and then (7) is reminiscent of target-space duality in string theory , which then might be a consequence of conformal inversion. For $`𝕄=ℝ^{1,1}`$, that is for the two-dimensional model, $`I`$ may be locally represented through fractional linear transformations by means of $`I=i\sigma _2`$, and the result is (7), which in turn represents the action of $`I`$ for the conformal group $`G=\{D,P_1,T_1\}`$ on a straight line $`ℝ^1`$. :
$$I:x\to I\left(x\right)=\frac{1}{x}.$$
(7)
Since $`𝕄`$ is densely contained in $`𝕄_c`$, $`x_\mu `$ defines a point of the homogeneous space of automorphisms of $`C`$; as such, $`x_\mu `$ must then be conceived as dimensionless in (6), as is usually done in mathematics. Therefore, for physical applications, in order to represent space-time we must substitute $`x_\mu `$ with $`x_\mu /l`$, where $`l`$ represents an (arbitrary) unit of length; then from (6) and (7) we obtain:
Proposition $`P_1`$: Conformal reflections determine a map, in space, of the microworld to the macroworld (with respect to $`l`$) and vice versa.
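A minimal numerical illustration of proposition $`P_1`$ (the unit $`l=1`$ m below is our arbitrary choice, made only to fix ideas):

```python
# Conformal inversion I: x -> 1/x acting on the dimensionless ratio x/l.
# The unit l = 1 m is an arbitrary assumption used only to fix ideas.
l = 1.0                                      # unit of length (metres)
for x in (1e-15, 1e-9, 1.0, 1e9):            # femtometre ... gigametre
    print(f"{x:10.0e}  ->  {l * l / x:10.0e}")
# e.g. 1e-15 m -> 1e+15 m: scales below l are exchanged with scales above l.
```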
The conformal group may equally well be represented in momentum space $`ℙ=ℝ^{3,1}`$, densely contained in $`ℙ_c`$, where the action of $`I`$ induces nonlinear transformations like (6) and (7), with $`x_\mu `$ and $`x`$ replaced by $`k_\mu `$ and $`k`$. If we then take (7) and its counterpart for $`ℙ`$ we obtain (see also footnote 2):
$$I:xk\to I\left(xk\right)=\frac{1}{xk}$$
(8)
Now physical momentum $`p`$ is obtained after multiplying the wave-number $`k`$ by an (arbitrary) unit of action $`H`$ by which (8) becomes:
$$I:\frac{xp}{H}\to I\left(\frac{xp}{H}\right)=\frac{H}{xp}$$
(9)
from which we obtain:
Proposition $`P_2`$: Conformal reflections determine a map, in phase space, of the world of micro actions to the one of macro actions (with respect to $`H`$), and vice versa.
Now, if we choose for the arbitrary unit $`H`$ Planck's constant $`\hbar `$, then from propositions $`P_1`$ and $`P_2`$ we have:
Corollary $`C_2`$: Conformal reflections determine a map between classical and quantum mechanics.
Let us now recall that the identifications $`𝕄_c\simeq C/c_1`$ and $`ℙ_c\simeq C/c_2`$ are to be conceived as two copies of the homogeneous space representing conformally compactified space-time and momentum space respectively, and that $`I𝕄_cI^{-1}=ℙ_c`$; then $`I`$ represents a map<sup>3</sup><sup>3</sup>3 The action of $`I`$ may be rigorously tested in the two-dimensional model, where $`I(x_{nm})=k_{nm}`$ and the action of $`I`$ is linear in the compactified phase space, in contrast to its local nonlinear action in $`𝕄`$ and $`ℙ`$, as will be further discussed elsewhere. of every point $`x_\mu `$ of $`𝕄`$ to a point $`k_\mu `$ of $`ℙ`$: $`I(x_\mu )=k_\mu `$, and we have:
Proposition $`P_3`$: Conformal reflections determine a map between space-time and momentum space.
Let us now assume, as the history of celestial mechanics suggests, that space-time $`𝕄`$ is the most appropriate arena for the description of classical mechanics; then, as a consequence of propositions $`P_1`$, $`P_3`$ and of corollary $`C_2`$, momentum space should be the most appropriate for the description of quantum mechanics. The legitimacy of this conjecture seems in fact to be supported by spinor geometry, as we will see.
## 4 Quantum mechanics in momentum space
As is well known, the most elementary constituents of matter are fermions, represented by spinors, whose geometry, as formulated by its discoverer E. Cartan , already has the form of equations of motion for fermions in momentum space.
In fact, given a pseudo-Euclidean $`2n`$-dimensional vector space $`V`$ with signature $`(k,l)`$, $`k+l=2n`$, and the associated Clifford algebra $`\mathrm{Cl}(k,l)`$ with generators $`\gamma _a`$, a Dirac spinor $`\psi `$ is an element of the endomorphism space of $`\mathrm{Cl}(k,l)`$ and is defined by Cartan's equation
$$\gamma _ap^a\psi =0$$
(10)
where $`p_a`$ are the components of a vector $`p\in V`$.
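The content of eq. (10) is easy to probe numerically. The sketch below (Python/NumPy; the particular Dirac representation of the $`\gamma `$'s is our own choice) checks the Clifford relations for signature $`(3,1)`$ and shows that $`\gamma _ap^a`$ has a nontrivial kernel, i.e. Cartan's equation admits $`\psi \ne 0`$, precisely for null $`p`$.

```python
import numpy as np

# Check of Cartan's equation (10) for signature (3,1).  The Dirac
# representation of the gammas used here is our own (standard) choice.
s = [np.array(m, dtype=complex) for m in
     ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
I2 = np.eye(2)
is2 = np.array([[0, 1], [-1, 0]], dtype=complex)          # i * sigma_2
g = [np.kron(s[2], I2)] + [np.kron(is2, si) for si in s]  # gamma^0 ... gamma^3
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Clifford relations: {gamma_a, gamma_b} = 2 eta_ab
for a in range(4):
    for b in range(4):
        acomm = g[a] @ g[b] + g[b] @ g[a]
        assert np.allclose(acomm, 2 * eta[a, b] * np.eye(4))

def slash(p):                     # gamma_a p^a, indices lowered with eta
    return sum(eta[a, a] * p[a] * g[a] for a in range(4))

print(np.linalg.matrix_rank(slash([1, 0, 0, 1])))  # 2: p null, psi != 0 exists
print(np.linalg.matrix_rank(slash([1, 0, 0, 0])))  # 4: p timelike, only psi = 0
```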
Now it may be shown that for the signatures $`(k,l)=(3,1),(4,1)`$ the Weyl (Maxwell), Majorana and Dirac equations, respectively, may be naturally obtained from (10), precisely in momentum space. For the signature $`(4,2)`$ eq. (10) contains the twistor equations and, for $`\psi \ne 0`$, the vector $`p`$ is null: $`p_ap^a=0`$, and the directions of $`p`$ form the projective quadric<sup>4</sup><sup>4</sup>4Since the Weyl equation in $`ℙ=ℝ^{3,1}`$, out of which Maxwell's equation for the electromagnetic tensor $`F_{\mu \nu }`$ (expressed bilinearly in terms of Weyl spinors) may be obtained, is contained in the twistor equation in $`ℙ=ℝ^{4,2}`$, defining the projective quadric $`ℙ_c=\left(S^3\times S^1\right)/ℤ_2`$. This is a further argument why momentum space $`ℙ`$ should be densely contained in $`ℙ_c`$. $`(S^3\times S^1)/ℤ_2`$ identical to conformally compactified momentum space $`ℙ_c`$ given in (2).
For the signature $`(5,3)`$ instead one may easily obtain from (10) the equation
$$\left(\gamma _\mu p^\mu +\vec{\pi }\cdot \vec{\sigma }\gamma _5+M\right)N=0$$
(11)
where $`\vec{\pi }=\tilde{N}\vec{\sigma }\gamma _5N`$ and $`N=\left[\begin{array}{c}\psi _1\\ \psi _2\end{array}\right]`$; $`M=\left[\begin{array}{cc}m& 0\\ 0& m\end{array}\right]`$; $`\tilde{N}=[\tilde{\psi }_1,\tilde{\psi }_2]`$; $`\tilde{\psi }_j=\psi _j^{\dagger }\gamma _0`$, with $`\psi _1`$, $`\psi _2`$ space-time Dirac spinors, and $`\vec{\sigma }=(\sigma _1,\sigma _2,\sigma _3)`$ the Pauli matrices.
Eq. (11) represents the equation, in momentum space, of the proton-neutron doublet interacting with the pseudoscalar pion triplet $`\vec{\pi }`$.
Also the equation for the electroweak model may be easily obtained in the frame of $`\mathrm{Cl}(5,3)`$.
All this may further justify the conjecture that spinor geometry in conformally compactified momentum space is the appropriate arena for the description of quantum mechanics of fermions.
It is remarkable that for signatures $`(3,1)`$, $`(4,1)`$ and $`(5,3)`$, $`(7,1)`$ (while not for $`(4,2)`$) the real components $`p_a`$ of $`p`$ may be bilinearly expressed in terms of spinors .
If spinor geometry in momentum space is at the origin of the quantum mechanics of fermions, then their multiplicity, observed in natural phenomena, could be a natural consequence of the fact that a Dirac spinor associated with $`\mathrm{Cl}(2n)`$ has $`2^n`$ components, which naturally split into multiplets of space-time spinors (as already appears in eq. (11)). Since in this approach vectors appear as bilinearly composed of spinors, some of the problematic aspects of dimensional reduction could be avoided by dealing merely with spinors .
Rotations also arise naturally in spinor spaces, as products of reflections operated by the generators $`\gamma _a`$ of the Clifford algebras. These could then be at the origin of the so-called internal symmetries. This appears in eq. (11), where the isospin symmetry of nuclear forces arises from conformal reflections, which appear there as the units of the quaternion field of numbers, of which the proton-neutron equivalence for strong interactions could be a realization in nature. For $`\mathrm{Cl}(8)`$ and higher Clifford algebras, and the associated spinors, octonions could be expected to play a role, as recently advocated .
## 5 Some further consequences of conformal duality
Compact phase space implies, for field theories, the absence of the concept of infinity in both space-time and momentum space; provided we may rigorously define Fourier-dual manifolds on which fields may be defined, it would also imply the absence of both infrared and ultraviolet divergences in perturbation expansions. This is, for the moment, possible only in the four-dimensional phase-space model, where such manifolds restrict to the lattices $`M_L`$ and $`P_L`$ on a double torus, where the Fourier transforms (5) hold. One could call $`M_L`$ and $`P_L`$ the physical spaces, which are compact and discrete, to distinguish them from the mathematical spaces $`𝕄_c`$ and $`ℙ_c`$, which are compact and continuous; the latter are only conformally dual while the former are both conformally and Fourier dual.
In the realistic eight-dimensional phase space one would also expect to find physical spaces represented by lattices, as noncommutative geometry also seems to suggest .
Let us now consider, as a canonical example of a quantum system, the hydrogen atom in stationary states. According to our hypothesis it should be appropriate to deal with it in momentum space (as a nonrelativistic limit of the Dirac equation for an electron subject to an external e.m. field). This is possible, as shown by V. Fock , through the integral equation:
$$\varphi \left(p\right)=\frac{\lambda }{2\pi ^2}\int _{S^3}\frac{\varphi \left(q\right)}{\left(p-q\right)^2}d^3q$$
(12)
where $`S^3`$ is the one-point compactification of momentum space $`ℙ=ℝ^3`$, and $`\lambda =\frac{e^2}{\hbar c}\sqrt{\frac{p_0^2}{-2mE}}`$, where $`p_0`$ is a unit of momentum and $`E`$ the (negative) energy of the H-atom.
For $`\lambda =n+1`$, $`\varphi (p)=Y_{nlm}(\alpha ,\beta ,\gamma )`$, which are the harmonics on $`S^3`$, and for $`p_0=mc`$ we obtain:
$$E_n=-\frac{me^4}{2\hbar ^2\left(n+1\right)^2}$$
which are the energy eigenvalues of the H-atom.
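As a quick consistency check (in atomic units; the sign $`-2mE`$ under the square root is our reading of the garbled source, chosen so that the result below comes out right), solving $`\lambda =n+1`$ with $`p_0=mc`$ indeed reproduces these eigenvalues:

```python
# In atomic units (hbar = m = e = 1, c = 1/alpha), solving lambda = n + 1
# for E with p0 = mc reproduces E_n = -m e^4 / (2 hbar^2 (n+1)^2).
alpha = 1 / 137.036                 # fine structure constant e^2 / (hbar c)
c, m = 1 / alpha, 1.0
for n in range(4):
    E = -(alpha * m * c) ** 2 / (2 * m * (n + 1) ** 2)   # from lambda = n + 1
    expected = -m / (2 * (n + 1) ** 2)                   # Bohr levels, a.u.
    assert abs(E - expected) < 1e-12
    print(n, E)                     # -0.5, -0.125, -0.0555..., ... hartree
```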
It is interesting to observe that eq. (12) is a purely geometrical equation, where the only quantum parameter<sup>5</sup><sup>5</sup>5The geometrical determination of this parameter in eq. (12) (through harmonic analysis, say) could furnish a clue to understanding the geometrical origin of quantum mechanics. This was a persistent hope of the late Wolfgang Pauli. is the dimensionless fine structure constant
$$\frac{e^2}{\hbar c}=\frac{1}{137.0\ldots }$$
According to this equation the stationary states of the H-atom may be represented as eigenvibrations of the $`S^3`$ sphere (of radius $`p_0`$) in conformally compactified momentum space, out of which the quantum numbers $`n`$, $`l`$, $`m`$ result. If conformal duality is realized in nature, then there should exist a classical system represented by eigenvibrations of $`S^3`$ in $`𝕄_c`$ or $`𝕄_{RW}`$.
In fact this system could be the universe, since recent observations of distant galaxies (in the direction of the N-S galactic poles) have revealed that their distribution may be represented by the $`S^3`$ eigenfunction
$$Y_{n,0,0}=k_n\frac{\sin \left[(n+1)\chi \right]}{\sin \chi }$$
(13)
with $`k_n`$ a constant and $`\chi `$ the geodetic distance from the center of the corresponding eigenvibration on the $`S^3`$ sphere of the $`𝕄_{RW}`$ universe. Now $`Y_{n,0,0}`$ is exactly equal to the eigenfunction of the H-atom, albeit in momentum space. If the astronomical observations confirm eq. (13), then the universe and the H-atom would represent a realization in nature of conformal duality. Here we have in fact that $`Y_{n,0,0}`$ on $`ℙ_c`$ represents the (most symmetric) eigenfunction of the (quantum) H-atom and the same $`Y_{n,0,0}`$ on $`𝕄_c`$ may represent the (visible) mass distribution of the (classical) universe. They could then be an example<sup>6</sup><sup>6</sup>6There could be other examples of conformal duality represented by our planetary system. In fact, observe that in order to compare with the density of matter the square of the $`S^3`$ harmonic $`Y_{nlm}`$ has to be taken. Now $`Y_{n00}^2`$ presents maxima for $`r_n=\left(n+\frac{1}{2}\right)r_0`$, and it has been shown by Y.K. Gulak that the values of the large semi-axes of the 10 major solar planets satisfy this rule, which could then suggest that they arise from a planetary cloud presenting the structure of an $`S^3`$ eigenvibration, as will be discussed elsewhere. of conformally dual systems: one classical and one quantum mechanical.
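Two elementary properties of the harmonics (13) can be verified numerically. In the sketch below (with the normalization constant $`k_n`$ set to 1, our choice) we check the $`\chi \to 0`$ limit, $`Y_{n,0,0}\to n+1`$, and the orthogonality of distinct harmonics with respect to the $`S^3`$ measure $`\mathrm{sin}^2\chi \,d\chi `$.

```python
import numpy as np

# Checks on the S^3 harmonic (13), with k_n = 1 (our normalization).
def Y_n00(n, chi):
    return np.sin((n + 1) * chi) / np.sin(chi)

print(Y_n00(5, 1e-6))                     # ~ 6.0 = n + 1
chi = np.linspace(1e-4, np.pi - 1e-4, 100001)
w = np.sin(chi) ** 2                      # S^3 measure in these coordinates
y = Y_n00(5, chi) * Y_n00(3, chi) * w
overlap = np.sum((y[1:] + y[:-1]) * np.diff(chi)) / 2   # trapezoidal rule
print(abs(overlap) < 1e-6)                # True: orthogonality for n = 5, 3
```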
There could be another important consequence of conformal duality. In fact, suppose that $`𝕄_c`$ and $`ℙ_c`$ are also Fourier dual<sup>7</sup><sup>7</sup>7 Even if, in order to define Fourier duality for $`𝕄_c`$ and $`ℙ_c`$ rigorously, one may have to abandon standard differential calculus, locally it may be assumed to hold approximately, with reasonable confidence, since the spacing of the possible lattice would be extremely small. In fact, taking for $`K`$ the Planck radius one could have of the order of $`10^{30}`$ lattice points per centimeter.. Then to the eigenfunctions $`Y_{nlm}(\alpha ,\beta ,\gamma )`$ on $`ℙ_c`$ of the H-atom there will correspond on $`𝕄=ℝ^{3,1}`$ (densely contained in $`𝕄_c`$) their Fourier transforms, that is, the known eigenfunctions $`\mathrm{\Psi }_{nlm}(x_1,x_2,x_3)`$ of the H-atom stationary states.
Now, according to propositions $`P_1`$, $`P_3`$ and corollary $`C_2`$, for high values of the action of the system, that is for high values of $`n`$ in $`Y_{nlm}`$, the system should be identified with the corresponding classical one, in space-time $`𝕄`$, where it is represented by $`\mathrm{\Psi }_{nlm}(x_1,x_2,x_3)`$.
In fact this is what is postulated by the correspondence principle: for high values of the quantum numbers the wavefunction $`\mathrm{\Psi }_{nlm}(x_1,x_2,x_3)`$ identifies with the Kepler orbits, that is, with the same system (with potential proportional to $`1/r`$) dealt with in the frame of classical mechanics. In this way, at least in this particular example, the correspondence principle appears as a consequence of conformal duality<sup>8</sup><sup>8</sup>8The properties of Fourier transforms also play a role. Consider in fact a classical system with fixed orbits: a massive point particle on $`S^1`$, say. Its quantization appears in the Fourier-dual momentum space $`ℤ`$ ($`m=0,\pm 1,\pm 2,\ldots `$), and for large $`m`$ the eigenfunctions identify with $`S^1`$. and precisely of propositions $`P_1`$, $`P_3`$ and corollary $`C_2`$. Obviously, in contrast to the previous case of duality, here it is the same system (the two-body problem) dealt with once quantum mechanically in $`ℙ_c`$ and once classically in $`𝕄_c`$. What they keep in common is the $`SO(4)`$ symmetry group, called accidental symmetry when discovered by W. Pauli for the Kepler orbits, while here it derives from the properties of conformal reflections, which preserve $`S^3`$, as seen from (2).
no-problem/9905/physics9905041.html
# Quiet Violin, Comment on ”The dead zone for string players”
I read with interest your recent article about how a violin generates sound, which reported that Johan Broomfield and Michael Leask had found it becomes impossible to produce a sound from a string that is bowed at its midpoint (January p5). However, this finding is trivial if one knows the mechanism of bowing.
When a horse-tail bow is drawn across a string, tiny microscopic ”nails” on the surface of the bow constantly pluck the string as the bow moves over the string (see, for example, E G Gray, Splitting Hairs, Strad vol.82 p107-108 (1989)). This is different from what happens when you pluck a string with your finger. The string can then vibrate freely once your finger has left the string. But with a bow, the nails continuously intervene in the vibration, producing a sort of ”forced” or ”kicked” vibration. Once one nail has passed over the string, the string can only vibrate freely for a short period before the next nail arrives. Indeed, the nails are so closely spaced that it is perhaps better to describe them as ”brushes” or ”combs”.
It is therefore better for the amplitude of vibration at the bowing point to be about the same as the spacing of the nails, so that the intervention of the nails is minimized. If the amplitude is larger than the spacing of the nails, the string does not manage to vibrate much before the next nail arrives. (Actually, the presence of the next nail does not stop the vibration completely; it only ”constrains” the vibration to the position of the next nail.) This mechanism applies to every part of the string, but the influence of the next nail becomes most significant as you bow at the midpoint, because the amplitude of vibration is then at its largest.
Julian Juhi-Lian Ting
Taichung, Taiwan, Republic of China
jlting@yahoo.com
no-problem/9905/quant-ph9905019.html
# Quantum-mechanical model for particles carrying electric charge and magnetic flux in two dimensions
Published in Phys. Rev. A 59 (1999) 3228-3235. ©The American Physical Society
## I Introduction
Field theories with Chern-Simons (CS) term in (2+1)-dimensional space-time admit soliton solutions carrying both electric charge and magnetic flux \[1-8\]. These solutions are often called CS vortices or vortex solitons, as compared with Nielsen-Olesen vortices which are electrically neutral. They appear in both relativistic and nonrelativistic field theories, and regardless of whether the gauge field action involves both Maxwell and CS terms or only a pure CS term. The ratio of electric charge $`q`$ to magnetic flux $`\mathrm{\Phi }`$ depends only on the parameters in the field theoretical model, not on the specific solution. Such solutions are not only of interest in field theories, but also expected to be useful in condensed matter physics. However, the interaction of these vortex solitons is very complicated. A single soliton solution is available in analytic form only for nonrelativistic theory and when the Maxwell term is absent. It seems difficult to find multi-soliton solutions in closed forms, especially when both Maxwell and CS terms are present. Therefore a simple quantum mechanical model for the interaction of such vortex solitons may be of interest. The purpose of the present paper is to study such a model.
The real CS vortices have finite sizes. The electric charge density and the magnetic flux density (the magnetic field) depend on the specific solution. As a simple approximation, we use point-like particles to represent them in this paper. Both the magnetic flux and the electric charge are then confined to a region of infinitesimal area, in other words, to a point where the particle is located. The vector potential associated with the flux is the Aharonov-Bohm (AB) potential . (see also Refs. for some more works on the subject.) This is responsible to the charge-flux interaction. As for the charge-charge interaction, we make use of the Coulomb potential. Note that in two-dimensional space there are two kinds of Coulomb potentials. The first one satisfies the two-dimensional Poisson equation with point source and is proportional to $`\mathrm{ln}r`$, where $`r`$ is the distance between the two point charges. The second simply imitates the form of the three-dimensional one and is proportional to $`1/r`$. It should be remarked that the real interaction between the CS vortices may be very complicated, and it depends on whether the field theoretical model involves both Maxwell and CS terms or only a CS term. Neither of the above forms can be expected to be capable of well describing the real situation. Either one is in any case a rough approximation. We prefer the latter one since it is easier to obtain exact solutions in this case. This is the potential adopted in the study of the so called two-dimensional hydrogen atom (2H) \[13-18\].
In this paper we confine ourselves to the framework of nonrelativistic quantum mechanics. Now that the forms of the interaction potentials are established, we can write down an $`n`$-body Schrödinger equation for these particles carrying magnetic flux as well as electric charges. This is done in Sec. II. The $`a`$th particle has charge and flux ($`q_a`$, $`\mathrm{\Phi }_a`$), where $`a=1,2,\mathrm{},n`$. It should be emphasized that the ratio $`q_a/\mathrm{\Phi }_a`$ does not depend on $`a`$, as pointed out in the first paragraph. After the time variable is separated out to obtain a stationary Schrödinger equation, we concentrate our attention on the two-body problem. This is separable into two equations. One governs the center-of-mass motion, which is free, and the other governs the relative motion, which is of main interest to us and is the main subject of the remaining part of this paper. It is remarkable that the separability of the two-body equation crucially depends on the condition $`q_1/\mathrm{\Phi }_1=q_2/\mathrm{\Phi }_2`$. We then denote $`(q_1,\mathrm{\Phi }_1)=(q,-\mathrm{\Phi }/Z)`$, $`(q_2,\mathrm{\Phi }_2)=(-Zq,\mathrm{\Phi })`$, where $`Z`$ is a nonvanishing real number. The relative Hamiltonian has the same form as that for a particle of reduced mass moving in the composite field of a vector AB potential and a scalar Coulomb one. This may be called an Aharonov-Bohm-Coulomb (ABC) system. Although the so-called ABC system has been dealt with by numerous works \[19-24\] in the literature, the Coulomb potential considered there is a three-dimensional one. Thus the situation is quite different from that studied here. In other words, the model studied in the above cited works is a three-dimensional ABC system, while that encountered here is a two-dimensional one.
In Sec. III we study the bound state problem. Bound states are possible only when $`Z>0`$, i.e., when the Coulomb field represents an attractive force, regardless of whether an AB potential is present. When $`\mathrm{\Phi }=0`$, the spectrum is just that of the 2H. The level $`E_N`$ has degeneracy $`2N+1`$ ($`N=0,1,2,\mathrm{}`$). If $`q\mathrm{\Phi }/2\pi \hbar c`$ takes nonvanishing integers, the spectrum is roughly the same except that the ground state has energy $`E_1`$ and the level $`E_N`$ has degeneracy $`2N`$ ($`N=1,2,\mathrm{}`$), since some solutions are not acceptable. In the general case each level $`E_N`$ of the 2H splits into two, each with lower degeneracy. When $`q\mathrm{\Phi }/2\pi \hbar c`$ takes half integers, however, some of the split levels coincide and we again have a high degeneracy. The degeneracy implies that the system should have SU(2) symmetry in this case, like the SO(3) symmetry of the ordinary 2H \[13, 16-17\]. But this has not been explicitly proved.
In Sec. IV we study the scattering problem. In the general case a partial wave expansion in the polar coordinates should be employed. However, as the asymptotic form of the partial wave involves a logarithmic distortion due to the long-range nature of the Coulomb field, the partial wave expansion is somewhat difficult to handle. In this paper we restrict our discussion to special cases where $`q\mathrm{\Phi }/2\pi \hbar c`$ takes integers or half integers. In these cases the scattering problem can be exactly solved in parabolic coordinates, as for the ordinary Coulomb scattering in two dimensions . Note that what we use here are parabolic coordinates on the plane, and thus they are quite different from the rotational parabolic coordinates used in the discussion of the ordinary three-dimensional Coulomb problem in the textbooks of quantum mechanics. The latter are also used in the study of the three-dimensional ABC system . When $`\mathrm{\Phi }=0`$ the cross section is just that for the Coulomb scattering in two dimensions. When $`q\mathrm{\Phi }/2\pi \hbar c`$ takes nonzero integers, the cross section gains an additional term which comes from the interference of the scattered wave with an additional stationary wave present in the scattering solution. Without the stationary wave term the solution would become meaningless at the origin. To the best of our knowledge, such circumstances were not encountered previously in the literature. When $`q\mathrm{\Phi }/2\pi \hbar c`$ takes half integers, the result is simple but of course rather different from that for pure Coulomb scattering. Without the Coulomb field our results reduce to those for pure AB scattering \[10-11\]. The classical limit of the results is also discussed.
Sec. V is devoted to a brief summary and some more remarks.
## II The model
Consider $`n`$ point-like particles carrying magnetic flux as well as electric charges in two-dimensional space. The $`a`$th particle has mass $`\mu _a`$, carries electric charge and magnetic flux ($`q_a`$, $`\mathrm{\Phi }_a`$), $`a=1,2,\mathrm{},n`$. The position of the $`a`$th particle is denoted by $`𝐫_a=(x_a,y_a)`$. As remarked in the introduction, the ratio $`q_a/\mathrm{\Phi }_a`$ is independent of $`a`$. More precisely, we have
$$\frac{q_1}{\mathrm{\Phi }_1}=\frac{q_2}{\mathrm{\Phi }_2}=\cdots =\frac{q_n}{\mathrm{\Phi }_n}.$$
(1)
We describe the charge-flux interactions among the particles by the vector AB potentials and the charge-charge interactions by the scalar Coulomb ones. The $`n`$-body wave function is denoted by $`\mathrm{\Psi }^{(n)}(t,𝐫_1,\ldots ,𝐫_n)`$. In this paper we work in the domain of nonrelativistic quantum mechanics. The Schrödinger equation for the wave function is then
$$i\hbar \frac{\partial \mathrm{\Psi }^{(n)}}{\partial t}=H_\mathrm{T}\mathrm{\Psi }^{(n)},$$
$`(2\mathrm{a})`$
where $`H_\mathrm{T}`$ is the Hamiltonian of the system given by
$$H_\mathrm{T}=-\sum _{a=1}^{n}\frac{\hbar ^2}{2\mu _a}\left[\nabla _a-\frac{iq_a}{\hbar c}𝐀_a(𝐫_1,\ldots ,𝐫_n)\right]^2+\sum _{a<b}\frac{q_aq_b}{|𝐫_a-𝐫_b|},$$
$`(2\mathrm{b})`$
where the second term (the Coulomb interaction) involves a double summation subject to the condition $`a<b`$, and $`𝐀_a(𝐫_1,\ldots ,𝐫_n)`$ is the AB vector potential at the position $`𝐫_a`$. Note that all particles, except the $`a`$th one, contribute to $`𝐀_a`$. Thus the components of $`𝐀_a`$ are given by
$$A_{ax}(𝐫_1,\ldots ,𝐫_n)=\sum _{b\ne a}\frac{\mathrm{\Phi }_b}{2\pi }\frac{y_a-y_b}{|𝐫_a-𝐫_b|^2},$$
$$A_{ay}(𝐫_1,\ldots ,𝐫_n)=-\sum _{b\ne a}\frac{\mathrm{\Phi }_b}{2\pi }\frac{x_a-x_b}{|𝐫_a-𝐫_b|^2}.$$
$`(2\mathrm{c})`$
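The defining property of the potential (2c), namely that each particle sees the others as pure point fluxes, can be checked by computing the circulation of $`𝐀_a`$ around a closed loop: it equals the enclosed flux, up to the overall sign convention we adopted when restoring the minus signs lost in the extraction. A minimal sketch in Python:

```python
import numpy as np

# Circulation check for the point-flux potential (2c): the line integral
# of A around a closed loop gives (minus, with our sign convention) the
# enclosed flux; distant fluxes contribute nothing.
def A_at(r, sources):                    # sources: iterable of (Phi_b, r_b)
    A = np.zeros(2)
    for Phi_b, r_b in sources:
        d = r - r_b
        A += Phi_b / (2 * np.pi) * np.array([d[1], -d[0]]) / (d @ d)
    return A

sources = [(1.7, np.array([0.2, -0.1])), (0.5, np.array([5.0, 5.0]))]
t = np.linspace(0.0, 2 * np.pi, 4001)
loop = np.stack([np.cos(t), np.sin(t)], axis=1)   # unit circle, encloses 1.7
mid = 0.5 * (loop[1:] + loop[:-1])
dl = np.diff(loop, axis=0)
circ = sum(A_at(p, sources) @ d for p, d in zip(mid, dl))
print(circ)                              # ~ -1.7; the flux at (5, 5) drops out
```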
Since the Hamiltonian $`H_\mathrm{T}`$ does not involve $`t`$, the time-dependent factor in $`\mathrm{\Psi }^{(n)}`$ can be separated out. Let
$$\mathrm{\Psi }^{(n)}(t,𝐫_1,\ldots ,𝐫_n)=e^{-iE_\mathrm{T}t/\hbar }\psi ^{(n)}(𝐫_1,\ldots ,𝐫_n),$$
(3)
we have for $`\psi ^{(n)}`$ the stationary Schrödinger equation
$$H_\mathrm{T}\psi ^{(n)}=E_\mathrm{T}\psi ^{(n)}.$$
(4)
In the following we concentrate our attention on the two-body problem, since this is the only case where exact analysis is possible. In this case the first summation in Eq. (2b) contains two terms and the second contains only one. We introduce the relative position $`𝐫`$ and the center-of-mass position $`𝐑`$ defined by
$$𝐫=𝐫_1-𝐫_2,\qquad 𝐑=\frac{\mu _1𝐫_1+\mu _2𝐫_2}{M},$$
(5)
where $`M=\mu _1+\mu _2`$ is the total mass of the system. Noting that both $`𝐀_1`$ and $`𝐀_2`$ depend only on $`𝐫`$, it is not difficult to recast $`H_\mathrm{T}`$ in the form
$`H_\mathrm{T}`$ $`=`$ $`-{\displaystyle \frac{\hbar ^2}{2\mu _1}}\left(\nabla _r-{\displaystyle \frac{iq_1}{\hbar c}}𝐀_1\right)^2-{\displaystyle \frac{\hbar ^2}{2\mu _2}}\left(\nabla _r+{\displaystyle \frac{iq_2}{\hbar c}}𝐀_2\right)^2+{\displaystyle \frac{q_1q_2}{r}}`$
$``$ $``$ $`-{\displaystyle \frac{\hbar ^2}{2M}}\nabla _R^2+{\displaystyle \frac{i\hbar }{Mc}}(q_1𝐀_1+q_2𝐀_2)\cdot \nabla _R,`$ (6)
where $`r=|𝐫|`$. Using Eqs. (1) and (2c), it can be shown that
$$q_1𝐀_1=-q_2𝐀_2.$$
(7)
Thus Eq. (6) reduces to
$$H_\mathrm{T}=-\frac{\hbar ^2}{2\mu }\left(\nabla _r-\frac{iq_1}{\hbar c}𝐀_1\right)^2+\frac{q_1q_2}{r}-\frac{\hbar ^2}{2M}\nabla _R^2,$$
(8)
where $`\mu =\mu _1\mu _2/(\mu _1+\mu _2)`$ is the reduced mass of the system. Now Eq. (4) can be separated into two equations. Let
$$\psi ^{(2)}(𝐫_1,𝐫_2)=\psi _{\mathrm{cm}}(𝐑)\psi (𝐫),$$
(9)
we have
$$-\frac{\hbar ^2}{2M}\nabla _R^2\psi _{\mathrm{cm}}=E_{\mathrm{cm}}\psi _{\mathrm{cm}},$$
(10)
$$H\psi =E\psi ,$$
$`(11\mathrm{a})`$
where
$$H=-\frac{\hbar ^2}{2\mu }\left(\nabla _r-\frac{iq_1}{\hbar c}𝐀_1\right)^2+\frac{q_1q_2}{r},$$
$`(11\mathrm{b})`$
$$A_{1x}=\frac{\mathrm{\Phi }_2}{2\pi }\frac{y}{r^2},\qquad A_{1y}=-\frac{\mathrm{\Phi }_2}{2\pi }\frac{x}{r^2},$$
$`(11\mathrm{c})`$
and $`E_{\mathrm{cm}}+E=E_\mathrm{T}`$. Eq. (10) governs the center-of-mass motion of the system, which is obviously free and will not be discussed any further. Eq. (11) governs the relative motion of the two particles, which is of essential interest to us and is the main subject of the remaining part of this paper. In the following we omit the subscript $`r`$ of $`\nabla _r`$. We also denote $`(q_1,\mathrm{\Phi }_1)=(q,-\mathrm{\Phi }/Z)`$ and $`(q_2,\mathrm{\Phi }_2)=(-Zq,\mathrm{\Phi })`$, where $`Z`$ is a nonvanishing real number. The Hamiltonian (11b) can be written as
$$H=-\frac{\hbar ^2}{2\mu }\left(\nabla +i\frac{q\mathrm{\Phi }}{2\pi \hbar c}\nabla \theta \right)^2-\frac{Zq^2}{r},$$
(12)
where $`(r,\theta )`$ are polar coordinates on the $`xy`$ plane, and $`r`$ has been used above.
As pointed out in the introduction, the Hamiltonian (12) is the same as the one that governs the motion of a charged particle in the combined field of a vector AB potential and a scalar Coulomb one. However, it is quite different from that for the so-called ABC system studied in the literature, since that is a three-dimensional model while ours is a two-dimensional one. More precisely, in their Coulomb potential $`r=(x^2+y^2+z^2)^{1/2}`$, whereas in ours $`r=(x^2+y^2)^{1/2}`$. In fact, everything is independent of $`z`$ here, or, if one prefers, there is no $`z`$ component here.
To conclude this section we emphasize that the separability of Eq. (4) (for $`n=2`$) crucially depends on the relation (7) and thus on the condition (1).
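For the reader's convenience, we recall the elementary algebra behind Eq. (6). Inverting Eq. (5) gives $`\mathbf{r}_1=\mathbf{R}+(\mu_2/M)\mathbf{r}`$ and $`\mathbf{r}_2=\mathbf{R}-(\mu_1/M)\mathbf{r}`$, so that the gradients split as
$$\nabla_1=\frac{\mu_1}{M}\nabla_R+\nabla_r,\qquad \nabla_2=\frac{\mu_2}{M}\nabla_R-\nabla_r.$$
Substituting these into Eq. (2b) (with $`n=2`$), the $`\nabla_R\cdot\nabla_r`$ cross terms cancel, and the terms proportional to $`\nabla_R\cdot\mathbf{A}_a`$ combine into the last term of Eq. (6).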
## III Bound states
In this section we study bound states of the two-body system. These are solutions of Eq. (11) that vanish at infinity. It is convenient to solve Eq. (11a), with the Hamiltonian written in the form of Eq. (12), in polar coordinates. We denote
$$\frac{q\Phi}{2\pi\hbar c}=m_0+\nu,$$
(13)
where $`m_0`$ is an integer and $`0\le\nu <1`$. Eq. (11) can be written in polar coordinates as
$$\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial\psi}{\partial r}\right)+\frac{1}{r^2}\left(\frac{\partial}{\partial\theta}+im_0+i\nu\right)^2\psi+\left(\frac{2\mu E}{\hbar^2}+\frac{2\mu Zq^2}{\hbar^2r}\right)\psi=0.$$
(14)
We write $`\psi `$ as
$$\psi(r,\theta)=R(r)e^{i(m-m_0)\theta},\qquad m=0,\pm 1,\pm 2,\ldots,$$
(15)
then $`R(r)`$ satisfies the equation
$$\frac{d^2R}{dr^2}+\frac{1}{r}\frac{dR}{dr}+\left[\frac{2\mu E}{\hbar^2}+\frac{2\mu Zq^2}{\hbar^2r}-\frac{(m+\nu)^2}{r^2}\right]R=0.$$
(16)
Now it can be shown that $`E>0`$ gives scattering solutions which will not be discussed in this section. Thus bound states have $`E<0`$. It will also become clear in the following that bound states are possible only when $`Z>0`$, i.e., when the Coulomb potential represents attraction. These are all familiar conclusions in the pure Coulomb problem in three or two dimensions. Note that the factorized form of the solution (15) itself requires
$$R(0)=0$$
(17)
except for $`m=m_0`$. This is because $`\theta `$ is not well defined at the origin. It will exclude some well behaved solutions of Eq. (16). As $`E<0`$, we introduce a dimensionless variable $`\rho `$ defined as
$$\rho=\alpha r,\qquad \alpha=\frac{\sqrt{-8\mu E}}{\hbar},$$
(18)
and a new parameter
$$\lambda=\frac{Zq^2}{\hbar}\sqrt{\frac{\mu}{-2E}},$$
(19)
then Eq. (16) can be written as
$$\frac{d^2R}{d\rho^2}+\frac{1}{\rho}\frac{dR}{d\rho}+\left[-\frac{1}{4}+\frac{\lambda}{\rho}-\frac{(m+\nu)^2}{\rho^2}\right]R=0.$$
(20)
Now we define a new function $`u(\rho )`$ through the relation
$$R(\rho)=e^{-\rho/2}\rho^{|m+\nu|}u(\rho),$$
(21)
then we have for $`u(\rho )`$ the equation
$$\rho\frac{d^2u}{d\rho^2}+(2|m+\nu|+1-\rho)\frac{du}{d\rho}-\left(|m+\nu|+\frac{1}{2}-\lambda\right)u=0.$$
(22)
This is the confluent hypergeometric equation. It is solved by the confluent hypergeometric function
$$u(\rho)=CF(|m+\nu|+\frac{1}{2}-\lambda,\;2|m+\nu|+1,\;\rho),$$
(23)
where $`C`$ is a normalization constant to be determined below. The other solution to Eq. (22) makes $`R(r)`$ infinite at $`r=0`$ and is thus dropped. The above solution, though well behaved at $`r=0`$, diverges when $`r\to\infty`$: $`u(\rho )`$ behaves like $`e^\rho `$ and $`R(\rho )`$ like $`e^{\rho /2}`$. Therefore it is still not acceptable in general. Physically acceptable solutions appear when $`E`$ or $`\lambda `$ takes special values so that the confluent hypergeometric series terminates. This happens when
$$|m+\nu|+\frac{1}{2}-\lambda=-n_r,\qquad n_r=0,1,2,\ldots,$$
(24)
and $`u(\rho )`$ becomes a polynomial of order $`n_r`$. From Eq. (19) we see that this can be satisfied only when $`Z>0`$, and the energy levels are given by
$$E=-\frac{\mu Z^2q^4}{2\hbar^2(n_r+|m+\nu|+1/2)^2}.$$
(25)
The corresponding wave function is
$$\psi_{n_rm}(r,\theta)=C_{n_rm}e^{-\rho/2}\rho^{|m+\nu|}F(-n_r,2|m+\nu|+1,\rho)e^{i(m-m_0)\theta}.$$
(26)
There are degeneracies in the energy levels. This is why we have not attached any subscript to $`E`$. The degeneracy depends on the values of $`\nu `$ and $`m_0`$. The various cases are discussed as follows.
1. $`\nu =m_0=0`$. This is the case of a pure Coulomb problem, or the 2H. We introduce the principal quantum number
$$N=n_r+|m|,$$
(27)
then the energy levels are written as
$$E_N=-\frac{\mu Z^2q^4}{2\hbar^2(N+1/2)^2},\qquad N=0,1,2,\ldots.$$
(28)
With a given $`N`$, the possible values for ($`n_r,m`$) are ($`N,0`$), ($`N-1,\pm 1`$), $`\ldots`$, ($`0,\pm N`$), and the degeneracy is $`d_N=2N+1`$. These results are well known \[13-18\].
2. $`\nu =0`$, $`m_0\ne 0`$. In other words, $`q\Phi/2\pi\hbar c`$ takes nonzero integers. In this case the energy levels are roughly the same. However, from Eq. (26) we see that the solution with $`m=0`$ is not acceptable, regardless of the value of $`n_r`$, because the radial part of the wavefunction does not satisfy Eq. (17). Therefore the ground state has energy $`E_1`$, and the level $`E_N`$ has degeneracy $`d_N=2N`$ ($`N=1,2,\ldots`$).
3. $`0<\nu <\frac{1}{2}`$ or $`\frac{1}{2}<\nu <1`$. In this case each level of the 2H splits into two. When $`m\ge 0`$ we have
$$E_N^+=-\frac{\mu Z^2q^4}{2\hbar^2(N+\nu+1/2)^2},\qquad N=0,1,2,\ldots,$$
(29a)
while when $`m<0`$ we have
$$E_N^-=-\frac{\mu Z^2q^4}{2\hbar^2(N-\nu+1/2)^2},\qquad N=1,2,\ldots.$$
(29b)
The possible values of ($`n_r,m`$) corresponding to $`E_N^+`$ are ($`N,0`$), ($`N-1,1`$), $`\ldots`$, ($`0,N`$), thus the degeneracy is $`d_N^+=N+1`$. Those corresponding to $`E_N^-`$ are ($`N-1,-1`$), ($`N-2,-2`$), $`\ldots`$, ($`0,-N`$), thus the degeneracy is $`d_N^-=N`$. The difference between the case $`0<\nu <\frac{1}{2}`$ and the case $`\frac{1}{2}<\nu <1`$ lies in the order of the energy levels. In the first case the order of the levels is
$$E_0^+<E_1^-<E_1^+<\cdots<E_N^-<E_N^+<E_{N+1}^-<\cdots.$$
(30a)
In the second case it is
$$E_1^-<E_0^+<E_2^-<\cdots<E_N^-<E_{N-1}^+<E_{N+1}^-<\cdots.$$
(30b)
4. $`\nu =\frac{1}{2}`$. In other words, $`q\Phi/2\pi\hbar c`$ takes half integers. In this case we have
$$E_N^+=E_{N+1}^-=-\frac{\mu Z^2q^4}{2\hbar^2(N+1)^2},\qquad N=0,1,2,\ldots.$$
(31)
The degeneracy of the level is $`d_N^++d_{N+1}^-=2N+2`$. This implies that the system has higher dynamical symmetry than the geometrical SO(2). It is well known that the 2H possesses SO(3) symmetry, just like the ordinary three-dimensional hydrogen atom possesses SO(4) symmetry. It seems that the symmetry for the present case is SU(2), and the above energy level corresponds to the value $`(N+\frac{1}{2})(N+\frac{3}{2})`$ for the Casimir operator of the SU(2) algebra. But this has not been explicitly proved. One can construct the Runge-Lenz vector in a way similar to that in the case of 2H . However, the conservation of it and the closure of the algebra involve some difficulty due to the singularity of the AB potential at the origin. Perhaps some other method should be employed to deal with the problem.
Both bound state and scattering problems of the two-dimensional Coulomb field can be solved in parabolic coordinates \[17-18, 25\]. Here we point out that cases 2 and 4 discussed above can also be solved in parabolic coordinates. As no new result can be obtained, we will not discuss the solutions in detail. In the next section we will deal with the scattering problem. It is in these two cases that exact solutions are available.
Finally we give the value of the normalization constant $`C_{n_rm}`$ in the wave function (26):
$$C_{n_rm}=\frac{4\mu Zq^2}{\hbar^2(2n_r+2|m+\nu|+1)\Gamma(2|m+\nu|+1)}\left[\frac{\Gamma(n_r+2|m+\nu|+1)}{2\pi n_r!(2n_r+2|m+\nu|+1)}\right]^{\frac{1}{2}}.$$
(32)
## IV Scattering problem
In this section we study the scattering problem of the two-body system. Here the Coulomb field may be either attractive or repulsive. We denote $`\kappa =Zq^2`$, which may be positive or negative. For general values of $`m_0`$ and $`\nu `$, one may employ the method of partial wave expansion in polar coordinates. Then the starting point may be Eqs. (15) and (16). However, the asymptotic form of $`R(r)`$ when $`r\to\infty`$ involves the $`\mathrm{ln}r`$ distortion, due to the long-range nature of the Coulomb field. This may be seen more clearly in the following. Thus it is not easy to treat the partial wave expansion and to obtain the scattering cross section in a closed form. For this reason we confine ourselves in this paper to two special cases where exact analysis can be carried out in parabolic coordinates, and defer the general discussion to subsequent study.
Consider Eqs. (11a) and (12). Let us make a transformation
$$\psi(r,\theta)=e^{-i(m_0+\nu)\theta}\psi_0(r,\theta).$$
(33)
The new wavefunction $`\psi _0(r,\theta )`$ satisfies the Schrödinger equation with a pure Coulomb field:
$$-\frac{\hbar^2}{2\mu}\nabla^2\psi_0-\frac{\kappa}{r}\psi_0=E\psi_0.$$
(34)
In parabolic coordinates this equation can be separated into two ordinary differential equations while Eq. (11) cannot be separated. The probability current density
$$\mathbf{j}=\frac{\hbar}{2i\mu}(\psi^{*}\nabla\psi-\psi\nabla\psi^{*})+\frac{(m_0+\nu)\hbar}{\mu}\psi^{*}\psi\nabla\theta$$
(35)
can be written in terms of $`\psi _0`$ as
$$\mathbf{j}=\frac{\hbar}{2i\mu}(\psi_0^{*}\nabla\psi_0-\psi_0\nabla\psi_0^{*}).$$
(36)
Although $`\psi _0`$ satisfies a simpler equation, the problem does not become easier since $`\psi _0`$ must satisfy a nontrivial boundary condition
$$\psi _0(r,\theta +2\pi )=e^{i2\pi \nu }\psi _0(r,\theta )$$
(37)
such that $`\psi (r,\theta )`$ is single valued. Moreover, $`\psi _0(r,\theta )`$ should have proper behavior at the origin, so that $`\psi `$ is well defined there. The latter condition also imposes a constraint on the solution.
It is in general difficult to deal with Eq. (37). In the following we only consider two special cases. The first is $`\nu =0`$, or $`q\Phi/2\pi\hbar c`$ takes integers. In this case Eq. (37) becomes
$$\psi _0(r,\theta +2\pi )=\psi _0(r,\theta ),(\nu =0),$$
(38)
which means $`\psi _0`$ is single valued. This is because the first factor in Eq. (33) is also single valued in the present case. The second case we are to consider is $`\nu =\frac{1}{2}`$, or $`q\Phi/2\pi\hbar c`$ takes half integers. In this case Eq. (37) becomes
$$\psi_0(r,\theta+2\pi)=-\psi_0(r,\theta),\qquad (\nu=\frac{1}{2}).$$
(39)
Though this is not convenient in polar coordinates, it may be easily treated in parabolic coordinates.
Now we introduce the parabolic coordinates ($`\xi ,\eta `$) whose relation with ($`x,y`$) and ($`r,\theta `$) are given by
$$x=\frac{1}{2}(\xi^2-\eta^2),\qquad y=\xi\eta,$$
(40)
$$\xi =\sqrt{2r}\mathrm{cos}\frac{\theta }{2},\eta =\sqrt{2r}\mathrm{sin}\frac{\theta }{2}.$$
(41)
In these coordinates, Eqs. (38) and (39) become
$$\psi_0(-\xi,-\eta)=\psi_0(\xi,\eta),\qquad (\nu=0)$$
(42)
and
$$\psi_0(-\xi,-\eta)=-\psi_0(\xi,\eta),\qquad (\nu=\frac{1}{2})$$
(43)
respectively, where for convenience we have used the same notation $`\psi _0`$ to denote the wave function in parabolic coordinates. It is easy to see that other values of $`\nu `$ in Eq. (37) render $`\psi _0(\xi ,\eta )`$ multivalued and are thus difficult to deal with. Though $`\psi _0`$ is double valued in polar coordinates in the case $`\nu =\frac{1}{2}`$, it becomes single valued in the parabolic coordinates. This is essentially because a $`\xi \eta `$ plane covers the $`xy`$ plane twice, which is obvious from the relation $`x+iy=(\xi +i\eta )^2/2`$.
In the parabolic coordinates Eq. (34) becomes
$$(\partial_\xi^2+\partial_\eta^2)\psi_0+k^2(\xi^2+\eta^2)\psi_0+4\beta k\psi_0=0,$$
(44)
where
$$k=\frac{\sqrt{2\mu E}}{\hbar},\qquad \beta=\frac{\mu\kappa}{\hbar^2k}.$$
(45)
Note that $`E>0`$ since we are considering scattering states, and $`\beta `$ is dimensionless. Equation (44) can be solved by separation of variables. Let
$$\psi _0(\xi ,\eta )=v(\xi )w(\eta ),$$
(46)
we have for $`v`$ and $`w`$ the following equations:
$$v^{\prime\prime}+k^2\xi^2v+\beta_1kv=0,$$
(47)
$$w^{\prime\prime}+k^2\eta^2w+\beta_2kw=0,$$
(48)
where $`\beta _1+\beta _2=4\beta `$, and primes denote differentiation with respect to argument. The general solution of Eq. (44) can be obtained by superposition of solutions of the form (46) over the parameter $`\beta _1`$. For the scattering problem at hand we will see, however, that a single $`\beta _1`$ is sufficient. No superposition is necessary. Specifically, we are looking for solutions that have the asymptotic property
$$\psi_0\to e^{ikx},\qquad \mathrm{for}\;x\to-\infty.$$
(49)
This represents particles incident in the $`+x`$ direction, as is easily verified by using Eq. (36). In the parabolic coordinates it becomes
$$\psi_0\to e^{ik(\xi^2-\eta^2)/2},\qquad \mathrm{for}\;\eta\to\infty\;\mathrm{and\;all}\;\xi.$$
(50)
This can be satisfied only if
$$v(\xi )=e^{ik\xi ^2/2}$$
(51)
and $`w(\eta )`$ has the asymptotic form
$$w(\eta)\to e^{-ik\eta^2/2},\qquad \mathrm{for}\;\eta\to\infty.$$
(52)
It is easy to verify that $`v(\xi )`$ given by Eq. (51) does satisfy Eq. (47) with $`\beta _1=-i`$. Then the constant $`\beta _2`$ in Eq. (48) is given by $`\beta _2=4\beta +i`$. The subsequent discussions depend on the value of $`\nu `$, and we should distinguish between the two cases $`\nu =0`$ and $`\nu =\frac{1}{2}`$.
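Indeed, differentiating $`v(\xi )=e^{ik\xi ^2/2}`$ twice gives
$$v^{\prime}=ik\xi v,\qquad v^{\prime\prime}=(ik-k^2\xi^2)v,$$
so that $`v^{\prime\prime}+k^2\xi^2v=ikv`$, and Eq. (47) holds precisely when $`\beta_1k=-ik`$, i.e. $`\beta_1=-i`$.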
For $`\nu =0`$ we define a new function $`u(\eta )`$ by
$$w(\eta)=e^{-ik\eta^2/2}u(\eta),$$
(53)
then Eq. (48) becomes
$$u^{\prime\prime}-2ik\eta u^{\prime}+4\beta ku=0.$$
(54)
On account of Eqs. (51) and (53), the condition (42) now simply means that $`u(\eta )`$ is an even function of $`\eta `$:
$$u(-\eta)=u(\eta).$$
(55)
It is easy to find the solution of Eq. (54) that satisfies this condition:
$$u(\eta )=c_1F(i\beta ,\frac{1}{2},ik\eta ^2),$$
(56)
where $`c_1`$ is a normalization constant. Collecting Eqs. (46), (51), (53), and (56) we obtain the solution
$$\psi_0=c_1e^{ik(\xi^2-\eta^2)/2}F(i\beta,\frac{1}{2},ik\eta^2)=c_1e^{ikx}F(i\beta,\frac{1}{2},ik\eta^2).$$
(57)
If in addition to $`\nu =0`$ we have $`m_0=0`$, i.e., for a pure Coulomb potential, this is the required solution. Taking the limit $`r\to\infty`$, and choosing the constant $`c_1=e^{\beta\pi/2}\Gamma(1/2-i\beta)/\sqrt{\pi}`$, we have for $`\psi _0`$ the asymptotic form
$$\psi_0\to\mathrm{exp}[ikx-i\beta\mathrm{ln}k(r-x)]+f_\mathrm{C}(\theta)\frac{\mathrm{exp}(ikr+i\beta\mathrm{ln}2kr)}{\sqrt{r}},\qquad (r\to\infty)$$
(58)
up to the order $`r^{-1/2}`$, where
$$f_\mathrm{C}(\theta)=\frac{\Gamma(1/2-i\beta)}{\Gamma(i\beta)}\frac{\mathrm{exp}(i\beta\mathrm{ln}\mathrm{sin}^2\theta/2-i\pi/4)}{\sqrt{2k\mathrm{sin}^2\theta/2}}.$$
(59)
The first term in the above equation represents the incident wave while the second represents the scattered one. Both of them are distorted by a logarithmic term in the phase due to the long-range nature of the Coulomb field. Despite these distortions, it can be shown that the scattering cross section is given by
$$\sigma (\theta )=|f_\mathrm{C}(\theta )|^2,$$
(60)
where the subscript C indicates pure Coulomb scattering. Using the mathematical formulas
$$|\mathrm{\Gamma }(\pm i\beta )|^2=\frac{\pi }{\beta \mathrm{sinh}\beta \pi },|\mathrm{\Gamma }(\frac{1}{2}\pm i\beta )|^2=\frac{\pi }{\mathrm{cosh}\beta \pi },$$
(61)
we arrive at
$$\sigma _\mathrm{C}(\theta )=\frac{\beta \mathrm{tanh}\beta \pi }{2k\mathrm{sin}^2\theta /2}.$$
(62)
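Explicitly, taking the squared modulus of Eq. (59) and inserting the formulas (61) gives
$$|f_\mathrm{C}(\theta)|^2=\frac{|\Gamma(1/2-i\beta)|^2}{|\Gamma(i\beta)|^2}\frac{1}{2k\mathrm{sin}^2\theta/2}=\frac{\pi/\mathrm{cosh}\beta\pi}{\pi/(\beta\mathrm{sinh}\beta\pi)}\frac{1}{2k\mathrm{sin}^2\theta/2}=\frac{\beta\mathrm{tanh}\beta\pi}{2k\mathrm{sin}^2\theta/2},$$
which is Eq. (62).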
This is the result obtained in Ref. . If $`m_0\ne 0`$, i.e., if $`q\Phi/2\pi\hbar c`$ takes nonzero integers, the solution (57) is problematic, however. This is because $`\psi _0(𝐫=0)=c_1\ne 0`$, and according to Eq. (33) $`\psi (𝐫=0)=c_1e^{-im_0\theta }`$, which is not well defined since $`\theta `$ is not well defined at the origin. The correct solution for $`m_0\ne 0`$ should be
$$\psi_0=c_1[e^{ikx}F(i\beta,\frac{1}{2},ik\eta^2)-e^{ikr}F(\frac{1}{2}-i\beta,1,-2ikr)],$$
(63)
where the second term in the square bracket also solves Eq. (34) with the condition (38), and does not affect the boundary condition (49). We have now $`\psi _0(𝐫=0)=0`$ and no problem arises. Due to this additional term, the solution now behaves at infinity like
$$\psi_0\to\psi_{\mathrm{in}}+\psi_{\mathrm{sc}}+\psi_{\mathrm{st}},\qquad (r\to\infty)$$
(64)
where $`\psi _{\mathrm{in}}`$ and $`\psi _{\mathrm{sc}}`$ represent the incident and scattered waves which are given by the first and second terms in Eq. (58), respectively, and $`\psi _{\mathrm{st}}`$ represents a stationary wave which comes from the second term in Eq. (63) and is given by
$$\psi_{\mathrm{st}}=-e^{i\delta_0}\sqrt{\frac{2}{\pi k}}\frac{\mathrm{cos}(kr+\beta\mathrm{ln}2kr+\delta_0-\pi/4)}{\sqrt{r}},$$
(65)
where
$$\delta_0=\mathrm{arg}\Gamma(\frac{1}{2}-i\beta).$$
(66)
Since the second term in Eq. (63) is in fact the $`s`$-wave term in the partial wave expansion for a pure Coulomb field, the logarithmic distortion in its asymptotic form mentioned before becomes clear here. Similar distortions appear in all partial waves regardless of whether the AB potential is present. The first term in Eq. (64) gives an incident current in the $`+x`$ direction (when $`x\to-\infty`$). The second gives a scattered one in the radial direction (the component in the $`\theta `$ direction can be ignored when $`r\to\infty`$) and leads to the cross section $`\sigma _\mathrm{C}(\theta )`$ obtained above. The third term, as a stationary wave, contributes nothing to the cross section. There are, however, interference terms. The interference of the first term with the subsequent ones does not lead to physically significant results. However, the interference of the second and the third terms actually gives rise to an additional term in the cross section, which will be denoted by $`\sigma _\times (\theta )`$. The differential cross section in the present case is thus given by
$$\sigma _1(\theta )=\sigma _\mathrm{C}(\theta )+\sigma _\times (\theta ),$$
(67)
where
$$\sigma_\times(\theta)=-\frac{\sqrt{\beta\mathrm{tanh}\beta\pi}}{\sqrt{\pi}k}\frac{\mathrm{cos}(\delta_0+\delta_1-\beta\mathrm{ln}\mathrm{sin}^2\theta/2)}{|\mathrm{sin}\theta/2|},$$
(68)
and
$$\delta _1=\mathrm{arg}\mathrm{\Gamma }(i\beta ).$$
(69)
In the neighbourhood of $`\theta =0`$, $`\sigma _\times (\theta )`$ oscillates rapidly and thus the total contribution in a finite (but small) interval of $`\theta `$ may be neglected. For large $`\theta `$, especially near $`\theta =\pi `$, however, $`\sigma _\times (\theta )`$ gives a considerable contribution. It is remarkable that $`\sigma _\times (\theta )`$ is not positive definite and thus $`\sigma _1(\theta )`$ may become negative somewhere. This means that the particles move toward the origin in some directions. To the best of our knowledge, similar results were not encountered previously in the literature. Though the differential cross section $`\sigma _1(\theta )`$ may become negative in some direction, it does not cause any trouble physically because the total cross section is positive (actually positively infinite due to the long-range nature of the potentials). Indeed, $`\sigma _\times (\theta )`$ gives a finite contribution (positive or negative) to the total cross section, while $`\sigma _\mathrm{C}(\theta )`$ gives a positively infinite one.
Now we turn to the case $`\nu =\frac{1}{2}`$. In this case we make the transformation
$$w(\eta)=e^{-ik\eta^2/2}\eta u(\eta),$$
(70)
then the equation for $`u`$ reads
$$\eta u^{\prime\prime}+2(1-ik\eta^2)u^{\prime}+2k(2\beta-i)\eta u=0.$$
(71)
The condition (43) means that $`u(\eta )`$ is an even function of $`\eta `$. The required solution can be found to be
$$u(\eta )=c_2F(i\beta +\frac{1}{2},\frac{3}{2},ik\eta ^2),$$
(72)
where $`c_2`$ is a normalization constant. Collecting Eqs. (46), (51), (70), and (72) we obtain the solution
$$\psi _0=c_2e^{ikx}\eta F(i\beta +\frac{1}{2},\frac{3}{2},ik\eta ^2).$$
(73)
Here two remarks should be made. First, as a function of $`r`$ and $`\theta `$, $`\psi _0`$ is double valued, so that $`\psi `$ is single valued \[cf. Eq. (33) where now $`\nu =\frac{1}{2}`$\] . Second, as a consequence of Eq. (43) and obvious from the above result, we have $`\psi _0(𝐫=0)=0`$ here, so that $`\psi `$ is well defined at the origin. We choose
$$c_2=2\sqrt{\frac{k}{\pi}}\mathrm{exp}\left(\frac{\beta\pi}{2}-i\frac{\pi}{4}\right)\Gamma(1-i\beta),$$
then the asymptotic form of $`\psi _0`$ is given by
$$\psi_0\to\mathrm{exp}[ikx-i\beta\mathrm{ln}k(r-x)]\frac{\mathrm{sin}\theta/2}{|\mathrm{sin}\theta/2|}+f(\theta)\frac{\mathrm{exp}(ikr+i\beta\mathrm{ln}2kr)}{\sqrt{r}},\qquad (r\to\infty)$$
(74)
where
$$f(\theta)=\frac{\beta\Gamma(-i\beta)}{\Gamma(1/2+i\beta)}\frac{\mathrm{exp}(i\beta\mathrm{ln}\mathrm{sin}^2\theta/2+i3\pi/4)}{\sqrt{2k}\,\mathrm{sin}\theta/2}.$$
(75)
Again note that both terms are double valued. The double-valuedness does not cause much trouble in the calculation. Using the formulas (61) the cross section can be shown to be
$$\sigma _2(\theta )=|f(\theta )|^2=\frac{\beta \mathrm{coth}\beta \pi }{2k\mathrm{sin}^2\theta /2}.$$
(76)
This has the same angular distribution as $`\sigma _\mathrm{C}(\theta )`$, but the dependence on other parameters is quite different.
If we ignore the relation $`\kappa =Zq^2`$ and treat $`\kappa `$ as an independent parameter, we may set $`\kappa =0`$ in the above results (note that $`Z=0`$ is not allowed in our formalism). Then we have
$$\sigma _1(\theta )=0,\sigma _2(\theta )=\frac{1}{2\pi k\mathrm{sin}^2\theta /2}.$$
(77)
These are the AB scattering cross sections for the corresponding values of $`\nu `$.
Finally we point out that the cross sections (62), (67), and (76), when expressed in terms of the classical velocity $`v_\mathrm{c}=\hbar k/\mu `$ instead of $`k`$, involve $`\hbar `$ explicitly. In the classical limit, $`\hbar\to 0`$, $`\beta =\kappa /\hbar v_\mathrm{c}\to\pm\infty `$ (this is actually realized in the low energy limit), we see that $`\sigma _\times (\theta )`$ is negligible compared with $`\sigma _\mathrm{C}(\theta )`$, and both $`\mathrm{tanh}\beta \pi `$ and $`\mathrm{coth}\beta \pi `$ tend to $`\pm 1`$. So we have in this limit
$$\sigma _\mathrm{C}(\theta )=\sigma _1(\theta )=\sigma _2(\theta )=\frac{|\kappa |}{2\mu v_\mathrm{c}^2\mathrm{sin}^2\theta /2},$$
(78)
which is the classical scattering cross section for a pure Coulomb field in two dimensions. This result implies that the AB potential has no significant effect in the classical limit as expected.
## V Summary and discussions
In this paper we propose an $`n`$-body Schrödinger equation for particles carrying magnetic flux as well as electric charges. The ratio of electric charge to magnetic flux is the same for all particles. The two-body problem is studied in detail. The bound state problem is exactly solved in the general case, while the scattering problem is exactly solved in two special cases.
The original intention of this work is to describe the CS vortex solitons by a simple quantum mechanical model. If the sizes of the solitons are small, the AB potential may be a good approximation in describing the charge-flux interaction. On the other hand, the real charge-charge interaction may be quite complicated, thus the Coulomb potential used here may be questionable. If a better form $`V(q_a,q_b,|\mathbf{r}_a-\mathbf{r}_b|)`$ can be found for the interaction potential of charge $`q_a`$ at $`\mathbf{r}_a`$ and charge $`q_b`$ at $`\mathbf{r}_b`$, then the $`n`$-body equation may be improved by substituting this potential for $`q_aq_b/|\mathbf{r}_a-\mathbf{r}_b|`$ in Eq. (2b). In this case the last term $`q_1q_2/r`$ in the two-body relative Hamiltonian (11b) should be replaced by $`V(q_1,q_2,r)`$. With an improved potential, the Schrödinger equation might become more difficult to solve, however. Therefore, the model studied in this paper, even though it cannot well describe the interaction of the vortex solitons, may have some interest in itself since it allows exact analysis to some extent.
Several aspects of this model that need further studies may be: the dynamical symmetry of the two-body system, the scattering problem for general value of $`\nu `$, and finally, the relativistic generalization of the model.
## Acknowledgment
The author is grateful to Professor Guang-jiong Ni for encouragement. This work was supported by the National Natural Science Foundation of China.
# Entanglement measure and distance
## ACKNOWLEDGMENTS
I am grateful to K.Suehiro for careful reading of the manuscript and helpful comments. I am also indebted to N.Imoto and H.-K. Lo for pointing out the fault in the previous version.
# Test your surrogate data before you test for nonlinearity
## I Introduction
Often an indirect approach is followed to investigate the existence of nonlinear dynamics in time series by means of hypothesis testing using surrogate data . In this respect, the null hypothesis of a linear stochastic process undergoing a nonlinear static transform is considered as the most appropriate because it is the closest to nonlinearity one can get with linear dynamics. Surrogate data representing this null hypothesis ought to be random data, but possess the same power spectrum (or autocorrelation) and amplitude distribution as the original time series. To test the null hypothesis, a method sensitive to nonlinearity is applied to the original time series and to a set of surrogate time series. The null hypothesis is rejected if a statistic derived from the method statistically discriminates the original from the surrogate data.
For the generation of surrogate data the algorithm of the so-called amplitude adjusted Fourier transform (AAFT), by Theiler and co-workers , has been followed in a number of applications so far .
Recently, another algorithm similar to that of Theiler, but making use of an iterative scheme in order to achieve an arbitrarily close approximation to the autocorrelation and the amplitude distribution, was proposed by Schreiber and Schmitz . We refer to it as the iterative AAFT (IAAFT) algorithm. A more advanced algorithm designed to generate surrogates for any given constraints is supposed to be very accurate for moderate and large data sets, but at the deterring cost of long computation time . Therefore it is not considered in our comparative study.
Other schemes circumvent the problem of a non-Gaussian distribution by first applying a so-called ”Gaussianization” to the original data and then generating Fourier transform (FT) surrogates from this data set . In this way the results of the FT surrogate test concern the ”Gaussianized” data and it is not clear why the same results should be valid for the original non-Gaussian data.
Shortcomings of the AAFT algorithm due to the use of FT on short periodic signals and signals with long coherent times have been reported elsewhere . Here, we pinpoint more general caveats of the method, often expected to occur in applications, and give comparative results with the IAAFT method.
In Section II, the AAFT and IAAFT algorithms are presented and discussed. In Section III, the dependence of the hypothesis test on the generating scheme for the surrogate data is examined and in Section IV the performance of the algorithms on real data is presented.
## II The AAFT and IAAFT algorithms
Let $`\{x_i\},i=1,\ldots,N`$, be the observed time series. According to the null hypothesis, $`x_i=h(s_i)`$, where $`\{s_i\},i=1,\ldots,N`$, is a realization of a Gaussian stochastic process (and thus linear) and $`h`$ is a static measurement function, possibly nonlinear. In order for a surrogate data set $`\{z\}`$ (of the same length $`N`$) to represent the null hypothesis it must fulfill the following two conditions: 1) $`R_z(\tau )=R_x(\tau )`$ for $`\tau =1,\ldots,\tau ^{*}`$, 2) $`A_z(z)=A_x(x)`$, where $`R`$ is the autocorrelation, $`A`$ the empirical amplitude distribution and $`\tau ^{*}`$ a sufficiently large lag time.
### A The AAFT algorithm
The AAFT algorithm starts with the important assumption that $`h`$ is a monotonic function, i.e., $`h^{-1}`$ exists. The idea is to simulate first $`h^{-1}`$ (by reordering white noise data to have the same rank structure as $`\{x\}`$, call it $`\{y\}`$, step 1), then make a randomized copy of the obtained version of $`\{s\}`$ (by making a FT surrogate of $`\{y\}`$, call it $`\{y^{\text{FT}}\}`$, step 2) and transform it back simulating $`h`$ (by reordering $`\{x\}`$ to have the same rank structure as $`\{y^{\text{FT}}\}`$, call it $`\{z\}`$, step 3).
In step 1, it is attempted to bring $`\{x\}`$ back to $`\{s\}`$, in a loose statistic manner, by constructing a time series with similar structure to $`\{x\}`$, but with Gaussian amplitude distribution. However, it is not clear what is the impact of this process on the autocorrelation $`R`$. From the probabilistic theory for transformations of stochastic variables it is known that $`R_x\le |R_s|`$ in general , and moreover $`R_x=g(R_s)`$, for a smooth function $`g`$ (under some assumptions on $`h`$). Assuming that $`h^{-1}`$ exists and is successfully simulated in step 1, we get $`R_y\approx R_s`$ and thus $`R_x\le |R_y|`$. Analytic verification of this is not possible because reordering constitutes an ”empirical transformation”. When $`h`$ is not monotonic or not successfully simulated by the reordering in step 1 the relation $`R_y\approx R_s`$ cannot be established and $`R_y`$ will be somehow close to $`R_x`$.
The phase randomization process in step 2 does not affect $`R`$ apart from statistical fluctuations ($`R_y\approx R_y^{\text{FT}}`$), neither alters the Gaussian distribution, but it just destroys any possible nonlinear structure in $`\{y\}`$. The reordering in step 3 gives $`A_z(z)=A_x(x)`$. This process changes also $`R`$ according to the function $`g`$, i.e., $`R_z=g(R_y)`$, assuming again that $`h`$ is successfully simulated by the reordering. For the latter to be true, a necessary condition is $`R_z\le |R_y|`$. So, the preservation of the autocorrelation $`R_z\approx R_x`$ is established only if in step 1 $`R_y\approx R_s`$ is achieved after the reordering, which is not the case when $`h`$ is not monotonic. Otherwise, the AAFT algorithm gives biased autocorrelation with a bias determined by the reordering in step 1 and the form of $`g`$, and subsequently the form of $`h`$.
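For concreteness, the three steps admit a compact implementation. The following Python sketch is ours (the function name, the numpy routines and the handling of the mean and Nyquist components are implementation choices, not part of the original prescription):

```python
import numpy as np

def aaft_surrogate(x, rng=None):
    """One AAFT surrogate of the time series x; a minimal sketch of steps 1-3."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    n = len(x)
    ranks = lambda a: np.argsort(np.argsort(a))
    # Step 1: reorder Gaussian white noise to the rank structure of x
    # (this simulates h^{-1} and yields a Gaussian copy y of x).
    y = np.sort(rng.standard_normal(n))[ranks(x)]
    # Step 2: FT surrogate of y (randomize the phases, keep the amplitudes).
    fy = np.fft.rfft(y)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(fy))
    phases[0] = 0.0                  # keep the mean component real
    if n % 2 == 0:
        phases[-1] = 0.0             # keep the Nyquist component real
    y_ft = np.fft.irfft(np.abs(fy) * np.exp(1j * phases), n)
    # Step 3: reorder x to the rank structure of y_ft (this simulates h
    # and restores the original amplitude distribution exactly).
    return np.sort(x)[ranks(y_ft)]
```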
To elucidate, we consider the simplest case of an AR($`1`$) process \[$`s_i=bs_{i-1}+w_i`$, $`b=0.4`$ and $`w_i\sim 𝒩(0,1-b^2)`$\], and static power transforms, $`x=h(s)=s^a`$, for positive $`a`$. For $`s_i\in\text{I}\text{R}`$, $`h^{-1}`$ exists only for odd values of $`a`$. For even values of $`a`$, a deviation of $`R_y`$ from $`R_s`$ is expected, resulting in surrogate data with different autocorrelation than that of the original ($`R_z\ne R_x`$). Monte Carlo simulation approves this finding as shown in Fig. 1(a) ($`N=2048`$, $`a=1,2,\ldots,10`$, $`M=40`$ surrogates, $`100`$ realizations). Note that for $`R_y(1)`$ the standard deviation (SD) is almost zero indicating that all $`40`$ reordered noise data $`\{y\}`$ obtain about the same $`R_y(1)`$ for each realization of $`\{s\}`$, which depends on $`R_s(1)`$ at each realization. The results show the good matching $`R_y(1)\approx R_s(1)`$ and $`R_z(1)\approx R_x(1)`$ for odd $`a`$. For even $`a`$, $`R_y(1)`$ is always on the same level, well below $`R_s(1)`$, and $`R_z(1)<R_x(1)`$, with the difference decreasing for larger powers.
### B The IAAFT algorithm
The IAAFT algorithm makes no assumption for the form of the $`h`$ transform . Starting with a random permutation $`\{z^{(0)}\}`$ of the data $`\{x\}`$, the idea is to repeat a two-step process: approach $`R_x`$ in the first step (by bringing the power spectrum of $`\{z^{(i)}\}`$ to be identical to that of $`\{x\}`$, call the resulting time series $`\{y^{(i)}\}`$), and regain the identical $`A_x`$ in the second step (by reordering $`\{x\}`$ to have the same rank structure as $`\{y^{(i)}\}`$, call it $`\{z^{(i+1)}\}`$).
The desired power spectrum gained in step 1 is changed in step 2 and therefore several iterations are required to achieve convergence of the power spectrum. The algorithm terminates when the exact reordering is repeated in two consecutive iterations indicating that the power spectrum of the surrogate cannot be brought closer to the original. The original algorithm gives as outcome the data set derived from step 2 at the last iteration ($`\{z^{(i+1)}\}`$, where $`i`$ the last iteration), denoted here IAAFT-1. The IAAFT-1 surrogate has exactly the same amplitude distribution as the original, but discrepancies in autocorrelation are likely to occur. If one seeks the best matching in autocorrelation leaving the discrepancies for the amplitude distribution, the data set from step 1 after the last iteration ($`\{y^{(i)}\}`$), denoted IAAFT-2, must be chosen instead.
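A minimal Python sketch of the iteration, returning both variants discussed above (the code follows the description just given, but the implementation choices are ours):

```python
import numpy as np

def iaaft_surrogates(x, max_iter=1000, rng=None):
    """IAAFT surrogates of x; returns (IAAFT-1, IAAFT-2). A minimal sketch."""
    rng = rng or np.random.default_rng()
    x = np.asarray(x, dtype=float)
    n = len(x)
    target_amp = np.abs(np.fft.rfft(x))   # power spectrum of the original
    x_sorted = np.sort(x)                 # amplitude distribution of the original
    z = rng.permutation(x)                # start from a random permutation of x
    prev_ranks = None
    for _ in range(max_iter):
        # Step 1: impose the original power spectrum, keeping the phases of z.
        y = np.fft.irfft(target_amp * np.exp(1j * np.angle(np.fft.rfft(z))), n)
        # Step 2: impose the original amplitude distribution by rank ordering.
        ranks = np.argsort(np.argsort(y))
        z = x_sorted[ranks]
        if prev_ranks is not None and np.array_equal(ranks, prev_ranks):
            break                         # the exact reordering repeated: converged
        prev_ranks = ranks
    return z, y   # z: IAAFT-1 (exact distribution), y: IAAFT-2 (exact spectrum)
```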
The bias of the autocorrelation with IAAFT-1 can be significant, as shown in Fig. 1(b). For reasons we cannot explain, the bias in $`R(1)`$ gets larger for odd values of $`a`$ (monotonic $`h`$), for which AAFT gives no bias (the same bias was observed using 16384 samples ruling out that it can be due to limited data size). On the other hand, the matching in autocorrelation with IAAFT-2 was perfect (not shown). The surrogate data derived from the IAAFT algorithm exhibit little (IAAFT-1) or essentially zero (IAAFT-2) variation in autocorrelation compared to AAFT. This is an important property of the IAAFT algorithm in general because it increases the significance of the discriminating statistic as it will be shown below.
## III The effect of biased autocorrelation
By construction, the IAAFT algorithm can represent the null hypothesis regardless of the form of $`h`$ while the AAFT algorithm cannot represent it when $`h`$ is not monotonic. One can argue that the deviation in the autocorrelation due to possible non-monotonicity of $`h`$ is small and does not affect the results of the test. This is correct only for discriminating statistics which are not sensitive to linear data correlations, but most of the nonlinear methods, including all nonlinear prediction models, are sensitive to data correlations and therefore they are supposed to have the power to distinguish nonlinear correlations from linear.
We show in Fig. 2 the results of the test with AAFT and IAAFT surrogate data from the example with the AR(1) process. Two discriminating statistics are drawn from the correlation coefficient of the one step ahead fit $`\rho (1)`$ using an AR(1) model and from the $`\rho (1)`$ using a local average mapping (LAM) (embedding dimension $`m=3`$, delay time $`\tau =1`$, neighbors $`k=5`$). The significance of each statistic $`q`$ \[here $`q=\rho (1)`$\] is computed as
$$S=\frac{|q_0-\overline{q}|}{\sigma_q}$$
(1)
where $`q_0`$ is for the original data, and $`\overline{q}`$ and $`\sigma _q`$ are the mean and SD of $`q_i`$, $`i=1,\ldots,M`$, for the $`M`$ surrogate data (here $`M=40`$). The significance $`S`$ is a dimensionless quantity, but it is customarily given in units of ”sigmas” $`\sigma `$. A value of $`2\sigma `$ suggests the rejection of the null hypothesis at the $`95\%`$ confidence level.
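In code, Eq. (1) is a one-liner; a minimal sketch (the unbiased SD estimator is our choice):

```python
import numpy as np

def significance(q0, q_surr):
    """S = |q0 - mean(q_i)| / std(q_i), in units of 'sigmas' (Eq. 1)."""
    q = np.asarray(q_surr, dtype=float)
    return np.abs(q0 - q.mean()) / q.std(ddof=1)
```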
For AAFT, the number of rejections with both AR(1) and LAM is at the level of the ”size” of the test ($`5\%`$, denoted with a horizontal line in Fig. 2) when $`h`$ is monotonic, but at much higher levels when $`h`$ is not monotonic, depending actually on the magnitude of $`a`$. For the IAAFT algorithm the results are very different. Using IAAFT-1 surrogates the number of rejections is almost always much larger than the ”size” of the test and the opposite feature from that for AAFT is somehow observed for the even and odd values of $`a`$. The extremely large number of rejections with IAAFT-1 is actually due to the small variance of the statistics for the IAAFT surrogates \[see also Fig. 1(b)\]. On the other hand, the $`\rho (1)`$ of the AR(1) for the IAAFT-2 surrogates seems to coincide with the original because, besides the small SD in Eq. (1), the significance is almost always below $`2\sigma `$. Note that the $`\rho (1)`$ of AR(1) behaves similar to $`R(1)`$ for this particular example. The values of $`\rho (1)`$ of LAM for each surrogate data group are more spread, giving thus less rejections for AAFT and IAAFT-1. For IAAFT-2 and for $`a=3`$ and $`a=5`$, there are too many rejections and this cannot be easily explained since this behavior is not observed for the linear measure. We suspect that the rejections are due to the difference in the amplitude distribution of the IAAFT-2 surrogate data from the original, which may affect measures based on the local distribution of the data such as LAM.
Simulations with AR-processes of higher order and with stronger correlations showed qualitatively the same results.
## IV Surrogate tests with AAFT and IAAFT on real data
In order to verify the effect of the bias in autocorrelation in a more comprehensive manner, we consider in the following examples with real data the discriminating statistics $`q^i=\rho (T,i)`$, $`i=1,\ldots,n`$, from the $`T`$ time step ahead fit with polynomials $`p_i`$ of the Volterra series of degree $`d`$ and memory (or embedding dimension) $`m`$
$$\hat{x}_{i+T}=p_n(\mathbf{x}_i)=p_n(x_i,x_{i-1},\ldots,x_{i-(m-1)})=a_0+a_1x_i+\cdots+a_mx_{i-(m-1)}+a_{m+1}x_i^2+a_{m+2}x_ix_{i-1}+\cdots+a_{n-1}x_{i-(m-1)}^d,$$
(2)
where $`n=(m+d)!/(m!d!)`$ is the total number of terms. To distinguish nonlinearity, $`d=2`$ is sufficient. Starting from $`p_1=a_0`$, we construct $`n`$ polynomials adding one term at a time. Note that $`p_2,\ldots,p_{m+1}`$ are actually the linear AR models of order $`1,\ldots,m`$, respectively, and the first nonlinear term enters in the polynomial $`p_{m+2}`$. So, if the data contains dynamic nonlinearity, this can be diagnosed by an increase of $`\rho (T,i)`$ when nonlinear terms are added in the polynomial form. This constitutes a direct test for nonlinearity, independent of the surrogate test, and its significance is determined by the increase of $`\rho (T,i)`$ for $`i>m+1`$. Note that a smooth increase of $`\rho (T,i)`$ with the polynomial terms is always expected because the fitting becomes better even if only noise is ”modeled” in this way. In Ref. , where this technique was applied first, this is avoided by punishing the addition of terms and the AIC criterion is used instead of $`\rho `$. Here, we retain the $`\rho `$ measure bearing this remark in mind when we interpret the results from the fit.
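A Python sketch of this incremental fit for $`d=2`$ (the ordinary least-squares solve is our own choice, and $`\rho `$ of the constant polynomial $`p_1`$ is set to zero by convention):

```python
import numpy as np
from itertools import combinations_with_replacement

def volterra_rho(x, m=10, T=1):
    """rho(T, i) of the T-step-ahead fit with Volterra polynomials p_1, ..., p_n
    of degree d = 2 and memory m, adding one term at a time; a minimal sketch."""
    x = np.asarray(x, dtype=float)
    rows = len(x) - (m - 1) - T
    emb = np.column_stack([x[j:rows + j] for j in range(m)])[:, ::-1]
    target = x[(m - 1) + T:]                 # x_{i+T}
    cols = [np.ones(rows)]                   # constant term a_0
    cols += [emb[:, j] for j in range(m)]    # linear terms x_i, ..., x_{i-(m-1)}
    cols += [emb[:, j] * emb[:, k]           # quadratic terms x_{i-j} x_{i-k}
             for j, k in combinations_with_replacement(range(m), 2)]
    rho = [0.0]                              # p_1 (a constant) by convention
    for i in range(2, len(cols) + 1):
        A = np.column_stack(cols[:i])
        coef = np.linalg.lstsq(A, target, rcond=None)[0]
        rho.append(np.corrcoef(A @ coef, target)[0, 1])
    return np.array(rho)
```

For $`m=10`$ this yields $`n=66`$ terms, matching the range $`i=1,\ldots,66`$ used below.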
We choose this approach because on the one hand, it gives a clear indication for the existence or not of nonlinearity, and on the other hand, it preserves or even amplifies the discrepancies in the autocorrelation, so that we can easily verify the performance of the AAFT and IAAFT methods.
### A The sunspot numbers
We start with the celebrated annual sunspot numbers (e.g. see ). Many suggest that this time series involves nonlinear dynamics (e.g. see for a comparison of classical statistical models on sunspot data, and for a comparison of empirical nonlinear prediction models). The sunspot data follows a squared-Gaussian rather than a Gaussian distribution and therefore the AAFT does not reach a high level of accuracy in autocorrelation \[see Fig. 3(a)\]. Note that the $`R_y(\tau )`$ of the reordered noise data $`\{y\}`$ follows well the $`R_z(\tau )`$ of the AAFT, and is even closer to the original $`R_x(\tau )`$. This behavior is reminiscent of that of squared transformed AR-data \[see Fig. 1(a) for $`a=2`$\], which is in agreement with the comment in that the sunspot numbers are in first approximation proportional to the squared magnetic field strength. The condition $`R_z(\tau )\le |R_y(\tau )|`$ holds, supporting that the simulation of the $`h`$ transform (step 3 of the AAFT algorithm) is successful. Due to the short size of the sunspot data also IAAFT-1 surrogates cannot mimic perfectly the autocorrelation, as shown in Fig. 3(b). On the other hand, the IAAFT-2 surrogates match perfectly the autocorrelation and follow closely the original amplitude distribution.
The discrepancies in autocorrelation are well reflected in the correlation coefficient $`\rho `$ from the polynomial modeling as shown in Fig. 4. To avoid bias in favor of rejecting the null hypothesis, we use arbitrarily $`m=10`$ in all our examples in this section. The fit of the original sunspot data is improved as linear terms increase from $`1`$ to $`9`$, and no improvement is observed adding the tenth linear term, which is in agreement with the choice of AR(9) as the best linear model . As expected, this feature is observed for the AAFT and IAAFT surrogates as well \[see Fig. 4(a)\]. Further, the inclusion of the first nonlinear term ($`x_i^2`$) improves the fitting of the original sunspot numbers, but not of the surrogates. Actually, the Volterra polynomial fitting shows that the interaction of $`x_i`$ and $`x_{i-1}`$ with themselves and with each other is prevalent for the sunspot data \[note the increase of $`\rho (1,i)`$ for $`i=12`$, $`i=13`$, and $`i=22`$\]. When compared with AAFT surrogates this feature is not so obvious, mainly due to the large variance of $`\rho (1,i)`$ of the AAFT surrogates and the large discrepancy from the $`\rho (1,i)`$ of the original data, which persists also for the linear terms (for $`i=2,\ldots,11`$).
The significance $`S`$ of the discriminating statistics $`\rho (1,i)`$, $`i=1,\ldots,66`$, shown in Fig. 4(b), reveals the inappropriateness of the use of AAFT. The null hypothesis is rejected even for $`i=2,\ldots,11`$, i.e., when a linear statistic is used. On the other hand, using IAAFT-1 or IAAFT-2 surrogate data only the $`\rho (1,i)`$ for $`i\ge m+2`$, i.e., involving nonlinear terms, give $`S`$ over the $`2\sigma `$ level. For only linear terms, $`S`$ is at the $`2\sigma `$ level using IAAFT-1 surrogate data and falls to zero when IAAFT-2 surrogate data are used instead. Employing as discriminating statistic the difference of $`\rho (1)`$ after, for example, the inclusion of the $`x_i^2`$ term, i.e., $`q=\rho (1,12)-\rho (1,11)`$, gives for AAFT $`S=1.76\sigma `$, for IAAFT-1 $`S=3.35\sigma `$, and for IAAFT-2 $`S=4.05\sigma `$.
So, even for short time series, for which IAAFT-1 cannot match perfectly the autocorrelation, it turns out that the IAAFT algorithm distinguishes correctly nonlinear dynamics while following the AAFT algorithm one can be fooled or at least left in doubt as to the rejection of the null hypothesis. Here, we had already strong evidence from previous works that nonlinear dynamics is present, and used this example to demonstrate the confusion AAFT method can cause. However, if one checks first the autocorrelation of AAFT \[Fig. 3(a)\], then these results should be anticipated \[Fig. 4(a) and (b)\].
### B The AE Index data
We examine here a geophysical time series, the auroral electrojet index (AE) data from magnetosphere . Surrogate data tests for the hypothesis of nonlinear dynamics have been applied to records of AE index of different time scales and sizes with contradictory results . Here, we use a long record of six months, but smoothly resampled to a final data set of $`4096`$ samples \[see Fig. 5(a)\]. The amplitude distribution of the AE index data is non-Gaussian and has a bulk at very small magnitudes and a long tail along large magnitudes. This is so, because the AE index is characterized by resting periods interrupted by periods of high activity probably coupled to the storms of solar wind.
For the autocorrelation, it turns out that the AAFT algorithm gives positive bias in this case, i.e. the AAFT surrogates are significantly more correlated than the original. The $`R_y(\tau )`$ of the reordered noise data $`\{y\}`$ are slightly larger than $`R_x(\tau )`$, which, according to Section II, is a sign that under the null hypothesis the $`h`$ transform is not monotonic. Also $`R_z(\tau )\le |R_y(\tau )|`$ holds so that $`h`$ seems to have been simulated successfully. On the other hand, the IAAFT-1 surrogates match almost perfectly the autocorrelation and represent exactly the null hypothesis (therefore IAAFT-2 surrogate data are not used here).
The $`\rho (1,i)`$ from the Volterra polynomial fit on the original AE index shows a characteristic improvement of fitting with the addition of the first nonlinear term (see Fig. 6). This result alone gives evidence for nonlinear dynamics in the AE index. However, the surrogate data test using the AAFT does not support this finding due to the bias and variance in the autocorrelation. To the contrary, as shown in Fig. 6(b), it gives the confusing pattern that the null hypothesis is marginally rejected at the $`95\%`$ confidence level with linear discriminating statistics \[$`\rho (1,i)`$ for $`i=2,\ldots,11`$\], but not rejected with nonlinear statistics \[$`\rho (1,i)`$ for $`i=12,\ldots,66`$\]. The IAAFT algorithm is obviously proper here. The $`\rho (1,i)`$ for the IAAFT-1 follows closely the $`\rho (1,i)`$ for the original only for the linear fitting \[Fig. 6(a)\]. Consequently, the significance changes dramatically with the inclusion of the first nonlinear term from $`1\sigma `$ to $`7\sigma `$ and stabilizes at this level for all $`i=12,\ldots,66`$ \[Fig. 6(b)\].
The discrimination of the original AE data from AAFT surrogates can still be achieved by employing the discriminating statistic $`q=\rho (1,12)-\rho (1,11)`$ giving $`S=5.3\sigma `$ (and $`S=6.4\sigma `$ for IAAFT-1). Actually, the nonlinearity indicated from the Volterra polynomial fit is very weak and can be marginally detected with other discriminating statistics . For example, a local fit would give the erroneous result that the null hypothesis is marginally rejected using AAFT, but not using IAAFT, because the local fit is mainly determined by the linear correlations. In particular, a fit with LAM ($`m=10`$, $`\tau =1`$, $`k=10`$) gave for $`\rho (1)`$ significance $`S=1.84\sigma `$ for AAFT and only $`S=0.3\sigma `$ for IAAFT.
### C Breath rate data
The next real data set is from the breath rate of a patient with sleep apnea. The time series consists of the first $`4096`$ samples of the set B of the Santa Fe Institute time series contest \[see Fig. 5(b)\]. This time series is characterized by periods of relaxation succeeded by periods of strong oscillations and follows a rather symmetric amplitude distribution but not Gaussian (more spiky at the bulk). These data are also believed to be nonlinear, but it is not clear whether the nonlinearity is autonomous or merely due to nonlinear coupling with the heart rate .
The breath rate time series does not have strong linear correlations. However, AAFT gives again bias in the autocorrelation, but not large variance while IAAFT-1 matches perfectly the original autocorrelation (therefore IAAFT-2 is not used here).
The Volterra polynomial fit, shown in Fig. 7, reflects exactly the results on the autocorrelation. For the linear terms, the $`\rho (1,i)`$ for AAFT are rather concentrated at a level roughly $`10\%`$ lower than the $`\rho (1,i)`$ for the original data. This large difference combined with the small variance does not validate the comparison of AAFT surrogate data to the original data with any nonlinear tool sensitive to data correlations. For the IAAFT-1, the situation is completely different. The perfect matching in $`\rho (1,i)`$ for the linear terms in combination with the abrupt rise of the $`\rho (1,i)`$ of the breath rate data after the inclusion of the second (not first!) nonlinear term, constitutes a very confident rejection of the null hypothesis at the level of at least $`25\sigma `$, as shown in Fig. 7(b).
It seems that for the modeling of the breath rate data, the interaction of $`x_i`$ and $`x_{i-1}`$ (term $`13`$) is very important. Using the discriminating statistic $`q=\rho (1,12)-\rho (1,11)`$ as before gives significance around $`3\sigma `$ for both AAFT and IAAFT, but using $`q=\rho (1,13)-\rho (1,12)`$ instead gives significance about $`80\sigma `$ for both AAFT and IAAFT.
### D EEG data
The last example is the measurement of $`4096`$ samples from an arbitrary channel of an EEG recording assumed to be during normal activity \[actually, the record is from an epileptic patient long before the seizure, see Fig. 5(c)\]. Though the deviation of the amplitude distribution from Gaussian is small, the AAFT algorithm gives again large bias in the autocorrelation, while IAAFT-1 achieves good matching. Particularly, the condition $`R_z(\tau )\le |R_y(\tau )|`$ does not hold here for small $`R`$ values, implying that the bias in autocorrelation may also be due to unsuccessful simulation of $`h`$ (in the step 3 of the AAFT algorithm).
This EEG time series does not exhibit nonlinearity, at least as observed from the one step ahead fit with Volterra polynomials (Fig. 8). Obviously, the difference in $`\rho (1,i)`$ between original and AAFT surrogates wrongly suggests rejection of the null hypothesis when the nonlinear Volterra polynomial fit (terms $`>11`$) is used as discriminating statistic. This is solely due to the bias in autocorrelation as this difference remains also for the linear terms. For IAAFT-1, there is a small difference in $`\rho (1,i)`$ for the linear terms, as shown in Fig. 8(a), though IAAFT-1 seems to give good matching in the autocorrelation. This probably implies that the $`\rho `$ from the linear fit amplifies even small discrepancies in autocorrelation, not detected by eye-ball judgement. Moreover, this small difference in $`\rho (1,i)`$ is significant, as shown in Fig. 8(b), because again IAAFT-1 tends to give dense statistics. Remarkably, the significance degrades to less than $`2\sigma `$ when nonlinear terms are added.
We employ IAAFT-2 surrogate data as well \[see Fig. 8(a)\]. These do not match perfectly the original amplitude distribution (especially at the bulk of the distribution), but possess exactly the same linear correlations as the original, as approved also from the linear fit in Fig. 8(a). For the IAAFT-2 surrogates, the significance from the $`\rho (1,i)`$ is correctly less than $`2\sigma `$ for both the linear and nonlinear terms of the polynomial fit, as shown in Fig. 8(b).
We want to stress here that the results on the EEG data are by no means conclusive, as they are derived from a simulation with a single tool (polynomial fit) on a single EEG time series. However, they do insinuate that the use of AAFT surrogate data in the numerous applications with EEG data should be treated with caution at least when a striking difference between the original data and the AAFT surrogate data cannot be established, which otherwise would rule out that the difference is solely due to biased autocorrelation.
## V Discussion
The study on the two methods for the generation of surrogate data that represent the null hypothesis of Gaussian correlated noise undergoing nonlinear static distortion revealed interesting characteristics and drawbacks of their performance. The more prominent of the two methods, the amplitude adjusted Fourier transform (AAFT) surrogates, can represent successfully the null hypothesis only if the static transform $`h`$ is monotonic. This is an important generic characteristic of the AAFT algorithm and not just a technical detail of minor importance, as treated in all applications with AAFT so far . The bias in autocorrelation induced by the non-monotonicity of $`h`$ can lead to false rejections of the null hypothesis.
Our simulations revealed a drawback for the other method, the iterated AAFT (called IAAFT here), which was not initially expected. Depending on the data type, the iterative algorithm may naturally terminate while the matching in autocorrelation is not yet exact (we call the derived surrogate data IAAFT-1). In this case, all IAAFT-1 surrogate data achieve approximately the same level of accuracy in autocorrelation. Thus, the variance of autocorrelation is very small and therefore the mismatching becomes significant. Consequently, applying a nonlinear statistic sensitive to data correlations gives also significant discrimination. So, when using IAAFT surrogate data, the exact matching in autocorrelation must be first guaranteed and then differences due to nonlinearity become more distinct due to the small variance of the statistics on IAAFT. In cases the IAAFT-1 data set does not match exactly the original autocorrelation, we suggest to use a second data set derived from the same algorithm, IAAFT-2, which possesses exactly the same linear correlations as the original and may slightly differ in the amplitude distribution. Our simulations suggest the use of IAAFT-2 in general, but there may be cases where a detailed feature of the amplitude distribution should be preserved (e.g. data outliers of special importance) and then IAAFT-1 should be used instead.
The application of the AAFT and IAAFT algorithms to real world data demonstrated the inappropriateness of AAFT and the ”too good” significance obtained with IAAFT surrogates if nonlinearity is actually present. The results generally suggest that one has first to assure a good matching in autocorrelation of the surrogate data to the original before using them further to compute nonlinear discriminating statistics and test the null hypothesis. If a bias in autocorrelation is detected, statistical difference in the nonlinear statistics may also occur and then the rejection of the null hypothesis is not justified by a high significance level because it can be just an artifact of the bias in autocorrelation.
One can argue that when $`h`$ is not invertible then the assumption that the examined time series stems from a Gaussian process cannot be assessed by this test because there is not one to one correspondence between the measured time series $`\{x\}`$ and the Gaussian time series $`\{s\}`$, where $`x=h(s)`$. We would like to stress that the hypothesis does not yield a single Gaussian process, but any Gaussian process that under $`h`$ (even not monotonic) can give $`\{x\}`$, i.e., the existence of multiple solutions is not excluded. More precisely, the null hypothesis states that the examined time series belongs to the family of statically transformed Gaussian data with linear properties and deviation from the Gaussian distribution determined by the corresponding sample quantities of the original time series. Thus the surrogate data generated under the two conditions (matching in autocorrelation and amplitude distribution) may as well be considered as realizations of different Gaussian processes statically transformed under $`h`$. Differences within the different underlying linear processes are irrelevant when the presence of nonlinearity is investigated.
Concerning the discriminating statistics, our findings with synthetic and real data suggest that local models, such as the local average map, are not always suitable to test the null hypothesis and can give confusing results. On the other hand, the Volterra polynomial fit turned out to be a useful diagnostic tool for detecting dynamic nonlinearity directly on the original data as well as verifying the performance of the surrogate data because it offers direct detection of changes in the discriminating statistic from the linear to nonlinear case.
## Acknowledgements
The author thanks Giorgos Pavlos for providing the AE index data, Pål Larsson for providing the EEG data, and Holger Kantz for his valuable comments on the manuscript.
# Observations of short-duration X-ray transients by WATCH on Granat
## 1 Introduction
Amongst the many kinds of sources in the variable X-ray sky, X-ray transients have been observed since the first experiments in the late 60's. According to their duration, it is possible to distinguish between Long-duration X-ray Transients (lasting from weeks to a few months) and Short-duration X-ray Transients (lasting from hours to very few days).
Long-duration X-ray Transients are mainly related to Be-neutron star systems (Be X-ray Transients) and to Soft X-ray Transients (a subclass of low-mass X-ray binaries). The latter include the best black hole candidates found so far (Tanaka & Shibazaki 1996).
But not much is known about most of the Short-duration X-ray Transients. The main reason is the lack of counterparts at other wavelengths. One subclass is the so-called Fast X-ray Transients, which have been observed with many detectors since the launching of the Vela satellites (Heise et al. 1975, Cooke 1976) until the advent of the BeppoSAX satellite (Heise et al. 1998). Durations range from seconds (Belian, Conner & Evans 1976) to less than a few hours. Normally, they have been seen once, and never seen in quiescence, implying high peak-to-quiescent flux ratios ($`10^2`$–$`10^3`$) (Ambruster et al. 1983). Spectral characteristics vary substantially, from a hard spectrum in MX 2346-65 ($`kT\approx 20`$ keV; Rappaport et al. 1976) to soft spectra with blackbody temperatures from $`kT=0.87`$ to $`2.3`$ keV (Swank et al. 1977). In two cases, precursors to the main event were observed by SAS 3 (Hoffman et al. 1978). The precursors rose and fell in brightness in less than 0.4 s. In the Ariel V database, 27 sources were discovered (Pye & McHardy 1983), and 10 more were detected in the HEAO 1 A-1 all sky survey (Ambruster & Wood, 1986), implying a fast-transient all-sky event rate of ∼1500 yr⁻¹ for fluxes $`F_x`$ ≳ 3 × 10⁻¹⁰ erg cm⁻² s⁻¹ in the 2-10 keV energy band. In the HEAO 1 A-2 survey, 52 events were found (Connors 1988), but 37 of them were related to four of the brightest X-ray sources in the LMC.
Due to the large differences in observational characteristics, it seems that these events are caused by more than one physical mechanism. In several cases, there have been tentative optical identifications on the basis of known sources in the transient error boxes. One suggestion has been that many of the fast X-ray transients are related to stellar flares originating in active coronal sources, like RS CVn binaries or dMe-dKe flare stars. RS CVn systems are binaries formed by a cool giant/subgiant (like a K0 IV) with an active corona and a less massive companion (a late G-dwarf) in a close synchronous orbit, with typical periods of 1-14 d. Peak luminosities are usually L<sub>x</sub> $``$ 10<sup>32</sup> erg s<sup>-1</sup>. The RS CVn system LDS 131 was identified with the X-ray transient detected by HEAO 1 on 9 Feb 1977 (Kaluzienski et al. 1978, Griffiths et al. 1979). The highest peak luminosity was recorded by Ariel V for the flare observed from DM UMa in 1975 (Pye & Mc Hardy, 1983). The hardest flare yet observed was for the system HR 1099, on 17 Feb 1980, which was detected by HEAO 2 at energies up to 20 keV (Agrawal & Vaidya 1988). The most energetic X-ray flare was observed by GINGA from UX Ari; its decay time was quite long ($``$ 0.5 days).
X-ray flares can also be observed from M or K dwarfs with Balmer lines in emission (these are the active cool dwarf stars dMe-dKe). In the Ariel V sky survey, Rao & Vahia (1984) suggested seven dMe stars as responsible for X-ray flares that reached peak luminosities L<sub>x</sub> $``$ 10<sup>32</sup> erg s<sup>-1</sup> (2-18 keV). AT Mic is the dMe star with the largest number of recorded events (four), with a big flare in 1977 (Kahn 1979) reaching a peak luminosity L<sub>x</sub> = 1.6 $`\times `$ 10<sup>31</sup> erg s<sup>-1</sup> (2-18 keV), which is $``$ 100 times larger than the strongest solar flares. One of the most energetic flares was that observed by EXOSAT in YY Gem on 15 Nov 1984. Pallavicini et al. (1990) estimated a total flare energy E<sub>x</sub> = 10<sup>34</sup> erg (0.005-2 keV), and a decay time t<sub>d</sub> = 65 min (one of the longest decay times ever measured for such events). One of the longest flares ever reported was observed for more than 2 hours from the X-ray source EXO 040830-7134.7 (van der Woerd et al. 1989).
X-ray flares have been observed from Algol-type binaries (Schnopper et al. 1976, Favata 1998), W UMa systems and young stellar objects (YSOs). Most of the YSOs are deeply embedded young stars and T Tauri stars. T Tauri stars are pre-main sequence stars (ages 10<sup>5</sup>-10<sup>7</sup> yr) that may exhibit X-ray flaring activity via thermal bremsstrahlung from a hot plasma (see Linsky 1991). Montmerle et al. (1983) reported a ”superflare” in the T Tauri star ROX-20, in the $`\rho `$ Oph Cloud Complex, which reached a peak luminosity L<sub>x</sub> = 1.1 $`\times `$ 10<sup>32</sup> erg s<sup>-1</sup> (0.3-2.5 keV), and an integrated flare X-ray energy E = 10<sup>34</sup> erg. An ASCA observation of a flare in V773 Tau implied L<sub>x</sub> $``$ 10<sup>33</sup> erg s<sup>-1</sup> (0.7-10 keV), and a total energy release of $``$ 10<sup>37</sup> erg, which is among the highest X-ray luminosities observed for T Tau stars (Tsuboi et al. 1998). Recently, X-ray activity in protostars has been detected. These are objects closely related to T Tau stars, with ages 10<sup>4</sup>-10<sup>6</sup> yr, which are known to be strong X-ray sources when they enter the T Tauri phase. See Montmerle et al. (1986) and Montmerle & Casanova (1996) for comprehensive reviews. Koyama et al. (1996) reported an extremely high temperature (kT $``$ 7 keV) quiescent X-ray emission from a cluster of protostars detected by ASCA in the R CrA molecular cloud. Preibisch (1998) also reported a ROSAT observation of the source EC 95 within the Serpens star-forming region, for which a quiescent (dereddened) soft X-ray luminosity of $``$ (6-18) $`\times `$ 10<sup>32</sup> erg/s is derived, which is at least 60 times larger than that observed for 7 other YSOs. Two of these YSOs have displayed flaring activity. In particular, IRS 43 in the $`\rho `$ Oph cloud showed an extremely energetic superflare with a peak luminosity of L<sub>x</sub> = 6 $`\times `$ 10<sup>33</sup> erg s<sup>-1</sup> (0.1-2.4 keV) (Grosso et al. 1997).
Theoreticians have suggested other sources, like dwarf novae (Stern, Agrawal & Riegler 1981) or bizarre type I X-ray bursts (Lewin & Joss 1981), as the origin of some fast X-ray transients.
## 2 Observations
The WATCH instrument is the all-sky X-ray monitor onboard the Granat satellite, launched on 1 Dec 1989. It is based on the rotation modulation collimator principle (Lund 1985). The two energy ranges are approximately 8-15 and 15-100 keV. Usually, the uncertainty in the location of a new, short-duration source is a 1° error radius (3-$`\sigma `$). More details can be found in Castro-Tirado (1994) and Brandt (1994).
### 2.1 Fast X-ray Transients discovered by WATCH
During 1990-1992, WATCH discovered three bright fast X-ray transients. This is a small number compared with the other instruments mentioned above, mainly due to the higher low-energy cut-off of WATCH ($``$ 8 keV), implying that only the harder events are detected. The events were immediately noticeable as an increase in the low energy count rate (Fig. 1) lasting up to a few hours. No positive detections were made in the higher energy band. Their observational characteristics are summarized in Table 1.
### 2.2 Longer Duration X-ray Transients discovered by WATCH
Three objects were also discovered by WATCH during 1990-92, although it is likely that some others are present in the whole WATCH data base (1990-96). The sources reported here were observed to peak at X-ray fluxes up to 3 $`\times `$ 10<sup>-9</sup> erg cm<sup>-2</sup> s<sup>-1</sup> in the 8-15 keV band, and were discovered by analysis of the corresponding modulation pattern data. Their observational characteristics are summarized in Table 2. A fourth event, GRS 1133+54, lasting for $``$ 1 day on 19-20 Nov 1992, was reported by Lapshov et al. (1992).
### 2.3 Search for correlated X-ray flares.
We have searched the WATCH data base for X-ray flares that could have occurred simultaneously with flares observed at other wavelengths and reported elsewhere. The results were negative both for $`\lambda `$ Eridani (a flare was observed by ROSAT on 21 Feb 1991, Smith et al. 1993) and AU Mic (a flare was observed by EUVE on 15 July 1992, Cully et al. 1993). In the case of EV Lac, no WATCH data were available for an extraordinary optical flare that occurred on 15 Sep 1991 (Gershberg et al. 1992).
## 3 Discussion
### 3.1 The fast X-ray Transients
The three events quoted in Table 1 imply a rate of $``$ 5 year<sup>-1</sup>. If the sources of the three events are galactic, their high latitude would suggest that they are close to us, although no nearby known flare stars (Pettersen 1991) have been found in the corresponding WATCH error boxes.
GRS 1100-771. Only four variable stars are catalogued in the WATCH error box: CS Cha, CT Cha, CR Cha and TW Cha. All are Orion-type variables. TW Cha undergoes rapid light changes with typical amplitudes of 1<sup>m</sup>. The whole error box lies in the Chamaeleon I star-forming cloud, and the position of 41 (out of $``$ 75) soft X-ray sources detected by ROSAT in the cloud (Alcalá et al. 1995), are compatible with the position for GRS 1100-771 derived from the WATCH data. The DENIS near-IR survey of the region has revealed 170 objects in the cloud, mostly T Tau stars (Cambrésy et al. 1998). Feigelson et al. (1993) also found that $``$80 % of the X-ray sources are identified with T Tau stars with X-ray luminosities ranging from 6 $`\times `$ 10<sup>28</sup> to 2 $`\times `$ 10<sup>31</sup> erg s<sup>-1</sup>.
The WATCH detection implies a flaring luminosity L<sub>x</sub> = 0.6 $`\times `$ 10<sup>30</sup> (d/1 pc)<sup>2</sup> erg s<sup>-1</sup> (8-15 keV), or L<sub>x</sub> = 1.2 $`\times `$ 10<sup>30</sup> (d/ 1 pc)<sup>2</sup> erg s<sup>-1</sup> (0.1-2.4 keV) for a thermal spectrum with kT = 10 keV and no absorption (Greiner 1999). If the X–ray source lies in the Chamaeleon I cloud (at $``$ 150 pc), L<sub>x</sub> = 1.35 $`\times `$ 10<sup>34</sup> erg s<sup>-1</sup> (8-15 keV), or L<sub>x</sub> = 2.6 $`\times `$ 10<sup>34</sup> erg s<sup>-1</sup> (0.1-2.4 keV), with an integrated flare energy of $``$ 10<sup>38</sup> erg across the X-ray band (0.1-10 keV). Taking into account that kT $``$ 10 keV, a value higher than that observed in the T Tau star V773 (Tsuboi et al. 1998), and that the energy release would be $``$ 100 times larger than that seen in the YSO IRS 15 in the $`\rho `$ Oph cloud (Grosso et al. 1997), we tentatively suggest a superflare arising from one of the YSOs in the Chamaeleon I star-forming cloud as the most likely candidate for GRS 1100-771. If this is indeed the case, it will be difficult to account for such an exceptionally high X–ray luminosity with a solar-like coronal emission mechanism. However, a large amount of energy, stored in large magnetic structures and set free by magnetic reconnection events (see Grosso et al. 1997), can account for the WATCH detection. But we notice that the highest quiescent luminosity for any of the X-ray sources detected by ROSAT in the WATCH error box is L<sub>x</sub> $``$ 10<sup>31</sup> erg s<sup>-1</sup> (0.1-2.4 keV) at the utmost (Alcalá et al. 1997), i.e. 100 times lower than that seen in IRS 43, for which a steady supply of energy (like a large number of reconnection events) was proposed (Preibisch 1998). In any case, the possibility of GRS 1100-771 being an isolated neutron star unrelated to the cloud cannot be excluded, given the relatively large error box provided by WATCH.
GRS 2037-404. It was discovered on 23 Sep 1992 (Castro-Tirado, Brandt & Lund 1992), and lasted for 110 min, reaching a peak intensity of $``$ 0.8 Crab. On our request, a Schmidt plate was taken at La Silla on 29 Sep, and based on this plate the presence of the Mira star U Mic near maximum brightness at 7.0 mag was reported (Della Valle & Pasquini, 1992). We do not consider this star a candidate, owing to the different timescales of the physical processes involved; moreover, it is too far away from the error box centre ($``$ 2°). Hudec (1992) proposed another variable object inside the WATCH error box as a candidate to be further investigated. This is the star RV Mic (B $``$ 10.5) discovered in 1948 (Hoffmeister 1963). Although it is classified as a Mira Cet type star, it is apparently a highly variable object. Another four variables lie within the 1° radius WATCH error box: UU Mic (a RR Lyr type), UW Mic (a semi-regular pulsating star), SZ Mic (an eclipsing binary) and UZ Mic (an eclipsing binary of W UMa type). In any case, we carefully examined the Schmidt plate and found no object inside the 1° radius WATCH error box that would have varied by more than 0.5<sup>m</sup> when compared with the corresponding ESO Sky Survey plate.
GRS 2220-150. No variable stars are catalogued within the WATCH error box. The high flux (2.1 Crab) is similar to the peak of the fast transient at 20h14m + 30.9 discovered by OSO-8 in 1977 (Selermitsos, Burner & Swank 1979). The peak luminosity for GRS 2220-150 was L<sub>x</sub> = 1.5 $`\times `$ 10<sup>30</sup> (d/1 pc)<sup>2</sup> erg s<sup>-1</sup> (8-15 keV), or L<sub>x</sub> = 6 $`\times `$ 10<sup>30</sup> (d/1 pc)<sup>2</sup> erg s<sup>-1</sup> (2-15 keV) if we assume a Crab-like spectrum.
Considering the high peak luminosities, a flare of a dMe or a dKe star may be excluded in these two latter cases. No nearby RS CVn stars from the list of Lang (1992) were found in the error boxes. It is possible that these events are associated with old isolated neutron stars accreting interstellar matter with unstable nuclear burning (Ergma & Tutukov 1980). According to Zduenek et al. (1992), when the accretion rate is higher than 10<sup>13</sup> g s<sup>-1</sup>, the hydrogen burning triggered by electron capture becomes unstable. As the mass of the accreted envelope should be 10<sup>23</sup> g, several hundred years will be required for accreting this amount of matter. Assuming that the number of isolated neutron stars with such a high rate is 10<sup>5</sup> (Blaes & Madau 1993), one should expect 10-100 fast X-ray transients per year.
### 3.2 The longer duration X-ray Transients
With the exception of GRS 1133+54, the other three sources quoted in Table 2 are concentrated near the galactic plane, suggesting that they could be more distant than the three fast X-ray transients mentioned above. In this case, no nearby dwarf star or RS CVn systems have been identified within the corresponding error boxes.
GRS 0817-524. No variable stars are catalogued in the error box. However, we note the presence of an X-ray source detected by ROSAT: 1RXS J081938.3-520421 is listed in the All-Sky Bright Source Catalogue (Voges et al. 1996), with coordinates R.A.(2000) = 8h19m38.3s, Dec(2000) = -52 04’ 21”.
GRS 1148-665. Two variable stars lie within the error box: CY Mus (a RR Lyr type) and TW Mus (an eclipsing binary of W UMa type). We also note here the existence of a ROSAT All-Sky bright source: 1RXS J115222.9-673815, at R.A.(2000) = 11h52m22.9s, Dec(2000) = -67 38’ 15”.
GRS 1624-375. No variable stars are reported within the error box. A 300-s optical spectrum of the possible candidate suggested by Castro-Tirado (1994), taken in March 1997 at the 2.2-m ESO telescope, reveals that it is an M-star unrelated to GRS 1624-375 (Lehmann 1998). However, there are three sources in the ROSAT Catalogue: 1RXS J162730.0-374929, 1RXS J162751.2-371923 and 1RXS J163137.0-380439.
Taking into account the a priori probability of finding such bright X-ray sources in the typical WATCH error boxes ($``$ 1.5 sources per error box), we conclude that none of the 1RXS sources is related to the longer duration X-ray Transients detected by WATCH.
In the Ariel V Catalogue of fast X-ray Transients, Pye & McHardy (1983) described 10 X-ray transients with durations ranging from 0.5 to 4 days. Some of them were associated with known Be-neutron star systems, like 4U 0114+65, and others with RS CVn systems, like $`\sigma `$ Cen or DM UMa (for which a flare lasting 1.5 days was observed by HEAO-2). The RossiXTE satellite has recently detected two other X-ray transients: XTE J0421+560 and XTE J2123-058. XTE J0421+560 lasted for $``$ 2 days (Smith et al. 1998) and it is probably related to a black hole in a binary system. XTE J2123-058 lasted for $``$ 5 days and is presumably related to a low-mass binary in which a neutron star undergoes type-I bursts (Levine et al. 1998, Takeshima and Strohmayer 1998). Another short-duration event, lasting $``$ 2 days, has been observed in X-rays and radio wavelengths in the superluminal galactic transient GRS 1915+105 (Waltman et al. 1998). We consider that the three long duration events reported in this paper, with fluxes in the range 0.35–0.5 Crab assuming a Crab-like spectrum, are likely to have originated from compact objects in low-mass or high-mass X-ray binaries, similarly to XTE J0421+560 and XTE J2123-058.
## 4 Conclusions
Amongst the 3 fast X-ray transients discovered by WATCH in 1990-92, GRS 1100-771 might be related to a superflare arising from a YSO in the Chamaeleon I cloud, which would make it, to our knowledge, the strongest flare ever seen in such an object. But the possibility of GRS 1100-771 being an isolated neutron star unrelated to the cloud cannot be excluded, given the relatively large error box provided by WATCH. Amongst the other 3 longer duration X-ray transients, none of them seems to be related to known objects, and we suggest that the latter are likely to have originated from compact objects in low-mass or high-mass X-ray binaries.
###### Acknowledgements.
We would like to thank the referee, Thierry Montmerle, for his very constructive comments. We are grateful to J. Greiner for useful discussions and for providing the expected fluxes in the 0.1-2.4 keV band by means of the EXSAS package. We also thank R. Hudec for valuable comments, G. Pizarro, A. Smette and R. West for the Schmidt plates taken at ESO's La Silla Observatory, and I. Lehmann for the optical spectrum taken at the 2.2-m ESO telescope in the GRS 1624-375 field. We are indebted to the staff of the Evpatoria ground station in Ukraine and those of the Lavotchkin and Babakin Space Center in Moscow, where Granat was built. This search made use of the SIMBAD database, at the Centre de Données astronomiques de Strasbourg. This work has been partially supported by Spanish CICYT grant ESP95-0389-C02-02.
# Circular Kinks on the Surface of Granular Material Rotated in a Tilted Spinning Bucket
## Abstract
We find that circular kinks form on the surface of granular material when the axis of rotation is tilted by more than the angle of internal friction of the material. The radius of the kinks is measured as a function of the spinning speed and the tilting angle. A stability analysis of the surface shows that the kink is the boundary between the inner unstable and outer stable regions. A simple cellular automata model also displays kinks at the stability boundary.
PACS number: 46.10.+z, 47.20.Ma, 47.54.+r, 81.05.Rm
Granular materials behave differently from any other familiar forms of matter. They possess both solid- and fluid-like nature and exhibit unusual dynamic behaviors, such as segregation, surface waves, heap formation and convection. The surface of granular material in a spinning bucket is an example of such interesting phenomena. Vavrek and Baxter showed that the surface shape of sand in a vertical spinning bucket can largely be explained using Coulomb’s criterion. Medina et al. investigated hysteresis of the surface shape with semi two-dimensional rotating bins and showed the existence of multiple steady-state solutions. Yeung studied the system with an initially conical surface. By using a model of granular surface flow, he found that the behaviors of the model agreed well with the experiments.
When we tilt the rotational axis of a bucket from the vertical direction, circular kinks develop on the granular surface if the tilting angle is greater than the angle of internal friction of the material. A glass beaker of 10.5 cm diameter and 1 liter capacity is used as a spinning bucket. The beaker is mounted on a dc motor to rotate around its axis of symmetry. The motor is fixed to a stand such that the tilting angle $`\alpha `$ can be varied. The rotation rate is varied from 0 to 300 rpm by controlling the voltage. The angular velocity $`\omega `$ is measured by using a photogate timer and is constant within an error smaller than 1 rpm. We use natural sand, as used in general construction, as a prototypical granular material. In order to ensure the monodispersity of the sand, two sieves with 0.35 and 0.25 mm meshes are used and the sizes in between are selected. The angle of internal friction (angle of repose) and apparent density are found to be $`\theta _f`$ = 34° $`\pm `$ 1° and $`\rho `$ = 1.52 g/cm<sup>3</sup>, respectively.
Figure 1(a) is a schematic side-view and 1(b) is a top-view photograph of a circular kink formed on the sand surface. To measure the diameter of the kinks, a divider is placed near a kink and matched with the diameter. Several measurements are carried out for each kink and errors of measurements are about 1 mm. We also measure the surface shapes in some cases using the method of Ref. 6.
First, we tilt the bucket by angle $`\alpha `$ and then turn on the motor to various speeds. In this type of experiment the initial granular surfaces are inclined flat surfaces. After a few minutes of rotation, we measure the radius of kinks $`r_k`$. The radius depends on $`\omega `$ as $`r_k\propto \omega ^{-2+x}`$ ($`-0.2\leq x\leq 0.1`$), thus a dimensionless radius $`R_k=r_k\omega ^2/g`$, which is roughly constant, can be introduced (Fig. 2). Next, the tilting angle is varied at fixed $`\omega `$. When $`\alpha \lesssim 30^{\circ }`$, there forms a paraboloid-like granular surface whose shape depends on the initial condition but does not change with time. As $`\alpha `$ becomes larger than about 35°, a circular kink forms which is independent of the initial granular surface. Inside the kink, most of the grains on the surface avalanche during the rotation and we can see a complex motion like a whirlpool. The surface shape of the inner region is asymmetric about the rotation axis. On the other hand, those grains at and outside the kink are stationary with respect to the bucket. The surface shape of the solid-like region is in general asymmetric and depends on the initial surface. However, when we first rotate the bucket vertically with an initially conical surface and then tilt slowly, the surface shape remains nearly symmetric. The kink is a boundary between the two dynamically distinguishable regions. As $`\alpha `$ increases, $`R_k`$ tends to increase. The measured values of $`R_k`$ at several tilting angles are shown in Fig. 3. For $`\alpha >70^{\circ }`$, a new type of instability appears; some sand grains are separated from the surface and fall freely during the rotation.
We also study hysteresis by sequentially increasing, decreasing or randomly changing the angular velocity. For fixed $`\alpha `$, the radius of kinks is determined only by $`\omega `$ and does not depend on the past history of changing $`\omega `$. But the shape of the surface shows hysteresis. When $`\omega `$ increases a new kink appears inside the previous one. The previous kink formed at slower $`\omega `$ can be frozen in the solid-like outer region. Thus, there can be many concentric kinks. On the other hand, when $`\omega `$ decreases a new kink appears outside the previous one and the previous kinks are always washed out.
Various vessels are tested as spinning buckets. If the width of a container is larger than the diameter of a kink, the radius of the kink does not depend on the size or shape of the container. We find that, to form the kinks, it is not necessary for the rotational axis to coincide with the axis of symmetry of a container. When we rotate a rectangular box around an axis which is fixed on one of the side-walls, there appears a semicircular kink.
Similar kinks are also found with the silica-gel (Matrex Silica, Amicon Corp., Danvers, MA 01923, U.S.A., of apparent density $`\rho `$ = 0.35 g/cm<sup>3</sup>, particle diameter around 0.2 mm, and the angle of internal friction $`\theta _f=30^{\circ }\pm 1^{\circ }`$), but the $`R_k`$’s are smaller by about 20 % than with the sand. We try sugar powder (sucrose, grain size $``$ 0.2 mm, after drying at 100°C for 2 hours to remove moisture), and observe kinks too.
We now discuss the stability of the granular surface. There are four forces acting on a grain at the surface in the rotating frame: gravity $`m\stackrel{}{g}`$, centrifugal force $`m\stackrel{}{r}\omega ^2`$, normal force $`\stackrel{}{N}`$ and frictional (shearing) force $`\stackrel{}{f}`$. Here, $`m`$ is the mass of a grain, $`\stackrel{}{g}`$ the gravitational acceleration and $`\stackrel{}{r}`$ is the radial displacement of the grain from the rotational axis. We make the following assumptions. First, there is no bulk motion in the pile and the grain at the surface can only slide or roll on the surface. Second, there is no inertial effect. The grain stops as soon as it satisfies a force balance equation. Third, we also assume the Coulomb yield condition , which states that the grain does not move when $`f\leq \mu N`$, where $`\mu `$ $`(\mu =\mathrm{tan}\theta _f)`$ is the coefficient of friction. The force balance equation for the granular system is
$`\stackrel{}{N}+m\stackrel{}{g}+m\stackrel{}{r}\omega ^2+\stackrel{}{f}=0,f\leq \mu N.`$ (1)
We define a dimensionless vector
$`\stackrel{}{n}_h={\displaystyle -\frac{m\stackrel{}{g}+m\stackrel{}{r}\omega ^2}{mg}},`$ (2)
which is normal to the stable surface when there is no friction ($`\stackrel{}{f}=0`$). The inclination of the rotational axis breaks the cylindrical symmetry of the vector $`\stackrel{}{n}_h`$ and makes the angle $`\beta `$ between $`\stackrel{}{n}_h`$ and $`\widehat{z}`$ time-dependent, where $`\beta `$ is also the angle between the stable surface without friction and the bottom plane of the spinning bucket (Fig. 4).
The kink is an abrupt change in the slope of the granular surface along the radial direction. Therefore, we concentrate on the radial motion of grains. The radial stability condition can be expressed in terms of the angle $`\beta `$
$`\mathrm{tan}(\beta (t)-\theta _f)\leq \mathrm{tan}\theta \leq \mathrm{tan}(\beta (t)+\theta _f),`$ (3)
where $`\mathrm{tan}\theta `$ is the local slope of the granular surface in the radial direction with respect to the bottom of the bucket (Fig. 4).
The stable surface satisfying Eq. (3) for all $`t`$ can exist only if the following condition is satisfied:
$`\beta _{\text{max}}-\theta _f\leq \beta _{\text{min}}+\theta _f,`$ (4)
where $`\beta _{\text{max}}`$ ($`\beta _{\text{min}}`$) is the maximum (minimum) value of $`\beta (t)`$ during rotation. If we use a cylindrical coordinate system ($`\rho ,\varphi ,z`$) aligned to the rotation axis, the $`\stackrel{}{n}_h`$ and $`\beta (t)`$ can be expressed as
$`\stackrel{}{n}_h=\widehat{\rho }(\mathrm{sin}\alpha \mathrm{sin}\varphi -R)+\widehat{\varphi }\mathrm{sin}\alpha \mathrm{cos}\varphi +\widehat{z}\mathrm{cos}\alpha `$ (5)
and
$`\mathrm{cos}\beta (t)={\displaystyle \frac{\mathrm{cos}\alpha }{\sqrt{R^2+1-2R\mathrm{sin}\alpha \mathrm{sin}\omega t}}},`$ (6)
where $`R=r\omega ^2/g`$ is a dimensionless radius and $`\varphi =\omega t`$ (the phase is chosen so that $`\varphi =\pi /2`$ corresponds to the highest position during rotation). The $`\beta _{\text{max}}`$ ($`\beta _{\text{min}}`$) in Eq. (4) becomes the angle $`\beta (t)`$ at $`\varphi =3\pi /2`$ ($`\varphi =\pi /2`$).
Finally, we reach the following radial stability condition,
$`R\geq \{\begin{array}{cc}0\hfill & (\alpha \leq \theta _f)\hfill \\ R_c\equiv \sqrt{\frac{\mathrm{sin}2(\alpha -\theta _f)}{\mathrm{sin}2\theta _f}}\hfill & (\alpha >\theta _f).\hfill \end{array}`$ (9)
If $`\alpha \leq \theta _f`$, the steady state surface is stable for all $`R`$. If $`\alpha >\theta _f`$, the surface can be stable only in the region where $`R`$ is larger than the critical radius $`R_c`$. In the region with smaller $`R`$, avalanches of grains occur. We plot $`R_c(\alpha )`$ in Fig. 3 for the sand sample with $`\theta _f=34^{\circ }`$ to compare with the kink radius. The critical radii calculated by Eq. (7) reflect the qualitative features of the experimental kink radius; they increase with the inclination angle and do not depend on the angular velocity.
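As a quick numerical illustration, $`R_c`$ of Eq. (7) can be evaluated directly; the short sketch below (our own, with the sand sample's $`\theta _f=34^{\circ }`$) prints the critical radius at a few illustrative tilting angles.

```python
import math

def critical_radius(alpha_deg, theta_f_deg=34.0):
    """Dimensionless critical radius R_c of Eq. (7); zero below threshold."""
    a = math.radians(alpha_deg)
    t = math.radians(theta_f_deg)
    if a <= t:
        return 0.0
    return math.sqrt(math.sin(2.0*(a - t)) / math.sin(2.0*t))

for alpha in (35.0, 45.0, 55.0, 65.0):   # tilting angles within the kink regime
    print(f"alpha = {alpha:.0f} deg  ->  R_c = {critical_radius(alpha):.3f}")
```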
The angular stability condition is also examined. The corresponding critical radius $`R_c^\varphi `$ for the azimuthal direction is always larger than $`R_c`$, and their difference increases monotonically as $`\alpha `$ increases. In our experiment ($`35^{\circ }\lesssim \alpha \lesssim 70^{\circ }`$), however, the difference is small and the angular instability is expected to have little effect on the formation of kinks.
In order to gain insight into the formation of the kinks, we study the system using a simple cellular automata model similar to that of Bak et al.. In addition to the assumptions made earlier, we further assume that the relative motion of grains in the azimuthal direction is not significant. Experimentally, grains just inside the kink show nearly circular trajectories without relative movement in the azimuthal direction. Since the kinks are formed by the motion of grains nearby the kinks, we expect that the above assumption does not alter their formation. The two-dimensional granular surfaces can now be described by one-dimensional curves: the surface profile $`h(R)`$ at a given $`\varphi `$.
The spatial coordinate $`R`$ is made discrete: it is replaced by $`R_i=i\mathrm{\Delta }R`$ ($`i=-n,-n+1,\mathrm{},n`$), which runs from one end of the container to the other end. We measure local slopes $`s_i`$, defined as $`(h(i)-h(i+1))/\mathrm{\Delta }R`$, and check the stability of all the slopes following the criterion of Eq. (3). We then update the heights of the pile to make the local slopes at least marginally stable, starting from the uphill and proceeding towards the downhill.
If the local slope $`s_{i-1}`$ becomes unstable due to an update at the $`i`$-th site, we allow “backward propagation”, where a perturbation at a site can influence a site uphill from the perturbation. To be more specific, we decrease the height $`h(i-1)`$ and transfer the excess amount to the $`(i+1)`$th site. If the change makes the $`(i-2)`$th site unstable, its height is decreased in a similar way. We proceed until all local slopes become stable. Then the container is rotated by a small angle $`\delta \varphi `$ and we again update the height. For computational simplicity we assume that the relaxation time of the pile is much shorter than the period of the rotation. At the boundary, we apply a mass conservation condition; grains can neither enter nor leave the container.
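The update rule just described can be condensed into a short sketch. The following is our own simplified rendering (it enforces only the upper bound of Eq. (3), treats a single radial ray, and uses marginal-stability transfers), not the authors' code; the parameter values follow the run quoted below.

```python
import numpy as np

def beta(R, phi, alpha):
    """Angle beta of Eq. (6) at dimensionless radius R and rotation phase phi."""
    return np.arccos(np.cos(alpha) /
                     np.sqrt(R**2 + 1.0 - 2.0*R*np.sin(alpha)*np.sin(phi)))

def relax(h, R, phi, alpha, theta_f, dR):
    """One relaxation sweep: any slope above the stable limit of Eq. (3) is
    cut to marginal stability, the excess being moved downhill; after each
    cut the uphill neighbour is rechecked (backward propagation)."""
    i = 0
    while i < len(h) - 1:
        b = beta(R[i], phi, alpha) + theta_f
        s_max = np.tan(b) if b < 0.5*np.pi else np.inf  # bound saturates at 90 deg
        excess = (h[i] - h[i + 1])/dR - s_max
        if excess > 0.0:
            move = 0.5*excess*dR      # leaves the cut slope marginally stable
            h[i] -= move              # mass-conserving pairwise transfer
            h[i + 1] += move
            i = max(i - 1, 0)         # backward propagation: recheck uphill
        else:
            i += 1

alpha, theta_f, dR = np.radians(45.0), np.radians(34.0), 0.01
R = np.arange(0, 201)*dR              # radial profile from the axis to R = 2
h = np.zeros_like(R)                  # flat initial surface
for phi in np.arange(0.0, np.pi, np.pi*1.0e-2/3.0):   # half rotation
    relax(h, R, phi, alpha, theta_f, dR)
```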
In Fig. 5, we show time evolutions of the heights $`h(i,t)`$ with different initial conditions plotted during a half rotation. Here, $`\mathrm{\Delta }R=0.01`$, $`\alpha =45^{\circ }`$, $`\theta _f=34^{\circ }`$, and $`\delta \varphi =\frac{\pi }{3}\times 10^{-2}`$ rad. One can see the region with large $`|R|`$ is solid-like; the profile does not change with time. On the other hand, the inner region is fluid-like; the heights keep changing. There is a discontinuity in the slope at the border of the two regions ($`|R|\approx 1.2`$). The resulting kink looks similar to what is observed in the experiments. Although the formation of the kinks does not depend on the initial surface, the profile of the solid-like region does. The radii $`R_k(\alpha )`$ resulting from the simulation with $`\theta _f=34^{\circ }`$ are shown in Fig. 3 for different $`\alpha `$ in the range 0°–90°. One notices that not only the qualitative features such as the threshold angle $`(\alpha =\theta _f)`$ and the overall shape, but also their numerical values are in good agreement with the measured ones. However, this quantitative agreement may be accidental, since we expect the present simple model to reproduce qualitative, but not quantitative behaviors. To study the hysteresis behaviors, we start with a kink formed on the surface at $`\alpha =40^{\circ }`$, and suddenly increase $`\alpha `$ to $`65^{\circ }`$. There indeed appears a new kink with smaller radius, leaving the old one stable, similar to what is observed in the experiments.
Finally, the formation of the kinks can be viewed as follows. Consider the border at $`R_c`$ from the stability analysis. There will be avalanches just inside the border. Due to the backward propagation, which we expect to exist in real sandpiles, the region just outside the border should be involved in the avalanche. This explains why the measured $`R_k`$ is larger than $`R_c`$ as shown in Fig. 3. The surface in the stable region will be marginally stable, so we expect its slope is a smooth function of $`R`$. The surface in the inner unstable region, however, is produced by a process different from that for the surface outside the border, so there is no reason to expect that the two slopes join smoothly. It is thus natural to expect that a sudden change in the slope is observed at the border between the solid-like and fluid-like regions.
In summary, we have discovered circular kinks on the surface of granular material in a spinning bucket when its axis of rotation is tilted beyond the angle of internal friction. The radius of the kinks depends on the angular velocity, the tilting angle and the angle of internal friction. With a fixed tilting angle, the dimensionless radius of kinks $`R_k=r_k\omega ^2/g`$ remains roughly constant. We find that the surface is divided into two regions: a fluid-like inner and a solid-like outer regions. We determine the critical radius $`R_c`$ from the radial stability condition and the prediction reflects the basic features of the experiments. Using a simple cellular automata model, with the same stability condition and by allowing the propagation effect of avalanche, we obtain the granular surface in good accord with the experiments.
This work is supported by the Basic Science Research Institute, Seoul National University and by Korea Science and Engineering Foundation (KOSEF) through the Science Research Center for Dielectrics and Advanced Matter Physics. One of us (J.L.) is supported in part by KOSEF through the Brain-pool program and SNU-CTP.
Electronic address : isyu@snu.ac.kr
# Breakdown of Modulational Approximations in Nonlinear Wave Interaction
## I Introduction
Modulational instability of high-frequency nonlinear waves is a common process in a variety of circumstances involving wave propagation in continuous systems. Modulational processes can be seen to occur in a wide range of physical situations, from nonlinear waves in plasmas to nonlinear electromagnetic waves propagating in optical fibers . What usually happens in all those cases is that due to generic nonlinear interactions, the amplitude of a high-frequency carrier develops slow modulations in space and time. If the modulations are indeed much slower than the high-frequencies involved, one can obtain simplified equations describing the dynamics of the slowly varying amplitudes solely, the amplitude equations . In the present analysis we consider systems that become integrable in this modulational limit, a feature often displayed. Should this be indeed the case, no spatiotemporal chaos would be observed there. The basic interest then would be to see what happens when the approximations leading to modulational approximations cease to be satisfied. The paper is organized as follows: in §2 we introduce our model equation and discuss how and when it can be approximated by appropriate amplitude equations; in §3 we investigate the modulational process from the point of view of nonlinear dynamics; in §4 we perform full spatiotemporal simulations and compare the results with those obtained in §3; and in §5 we conclude the work.
## II Model equation, modulational approximations, and amplitude equations
In the present paper we focus attention on a nonlinear variant of the Klein-Gordon equation (NKGE) to investigate the breakdown of modulational approximations in the context of nonlinear wave fields. The NKGE used here reads
$$\partial _t^2A(x,t)-\partial _x^2A(x,t)+\frac{\partial \mathrm{\Phi }}{\partial A}=0,$$
(1)
($`\partial _t\equiv \partial /\partial t,\partial _x\equiv \partial /\partial x`$) where we write the generalized nonlinear potential as
$$\mathrm{\Phi }(A)=\omega ^2\frac{A(x,t)^2}{2}-\frac{A(x,t)^4}{4}+\frac{A(x,t)^6}{6},$$
(2)
$`\omega `$ playing the role of a linear frequency which will set the fast time scale. The remaining coefficients on the right-hand side were chosen to allow for modulational instability and saturation; we shall see that while the negative sign of the second term fulfills the condition for modulational instability, the positive sign of the third term provides saturation. The choice of their numerical values is arbitrary, but our results are nevertheless generic. The NKGE is known to describe wave propagation in nonlinear media and the idea here is to see how the dynamics changes as a function of the parameters of the theory: wave amplitude, and time and length scales.
Let us first derive the conditions for slow modulations. We start by supposing that the field $`A(x,t)`$ be expressed in the form
$$A(x,t)=\stackrel{~}{A}(x,t)e^{i\omega t}+\mathrm{complex}\mathrm{conjugate}.$$
(3)
Then, if one assumes slow modulations and discards terms like $`\partial _t^2\stackrel{~}{A}`$ and the highest-order power of the potential $`\mathrm{\Phi }`$, one obtains
$$2i\omega \partial _t\stackrel{~}{A}(x,t)-\partial _x^2\stackrel{~}{A}(x,t)-3|\stackrel{~}{A}(x,t)|^2\stackrel{~}{A}(x,t)=0,$$
(4)
which is, apart from some rescalings, the Nonlinear Schrödinger Equation (we shall refer to it as the NLSE here), an integrable equation.
This is no novelty; it is known that the modulational approximation is obtainable when there is a great disparity between the time scales of the high-frequency $`\omega `$ and the modulational frequency, which we call $`\mathrm{\Omega }`$, such that terms of order $`\mathrm{\Omega }^2\stackrel{~}{A}`$, when compared to $`\omega ^2\stackrel{~}{A}`$, can be dropped from the governing equation. The magnitude of the modulational frequency can be estimated as follows. Consider Eq. (4) and suppose a democratic balance among the magnitudes of its various terms, $`\omega \partial _t\stackrel{~}{A}\sim \partial _x^2\stackrel{~}{A}\sim \stackrel{~}{A}^3`$. Then one obtains, for $`\partial _t\sim \mathrm{\Omega }`$,
$$\frac{\mathrm{\Omega }}{\omega }\sim (\frac{\stackrel{~}{A}}{\omega })^2.$$
(5)
It is thus clear that the modulational approximation is valid only when $`\stackrel{~}{A}\ll \omega `$, since this condition slows down the modulational process causing $`\mathrm{\Omega }\ll \omega `$. The next question is what should be expected when the modulational approach ceases to be valid. Before proceeding along this line, let us mention that a stability analysis can be performed on Eq. (4). One perturbs a homogeneous self-sustained state with small fluctuations of a given wavevector $`k>0`$ (we choose $`k>0`$ here, but the theory is invariant under $`k\to -k`$) and after some algebra one concludes that: (i) the field in the homogeneous state, let us call this field $`A_h`$, is given by
$$A_h=a_oe^{-i\frac{3a_o^2}{2\omega }t}$$
(6)
where $`a_o`$ is an arbitrary amplitude parameter and where the exponential term should be seen as providing a small nonlinear correction to the linear frequency $`\omega `$; and (ii) the perturbation is unstable when
$$k<k_{tr}\equiv \sqrt{6}a_o.$$
(7)
with maximum growth rate at
$$k_{max}\simeq \frac{k_{tr}}{\sqrt{2}}.$$
(8)
When unstable, the homogeneous state typically evolves towards a state populated by regular structures which can be formed precisely because the underlying governing NLSE, Eq. (4), is of the integrable type as mentioned earlier.
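For completeness, the rate behind conditions (7) and (8) can be checked numerically. Linearizing Eq. (4) around the state (6) gives the standard modulational-instability growth rate $`\mathrm{\Gamma }(k)=(k/2\omega )\sqrt{6a_o^2-k^2}`$ inside the band; the short sketch below (our own) verifies that it vanishes at $`k_{tr}`$ and peaks at $`k_{max}`$.

```python
import numpy as np

def mi_growth_rate(k, a_o, omega):
    """Growth rate of a modulation of wavevector k on the state (6),
    obtained by linearizing Eq. (4); nonzero only for k < sqrt(6) a_o."""
    band = 6.0*a_o**2 - k**2
    return np.where(band > 0.0,
                    (k/(2.0*omega))*np.sqrt(np.clip(band, 0.0, None)), 0.0)

a_o, omega = 0.05, 0.1                      # illustrative values
k = np.linspace(0.0, 3.0*a_o, 400)
g = mi_growth_rate(k, a_o, omega)
print("k_tr  =", np.sqrt(6.0)*a_o)          # edge of the unstable band, Eq. (7)
print("k_max =", k[np.argmax(g)])           # close to k_tr/sqrt(2), Eq. (8)
```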
## III Beyond the modulational approximation
To advance the analysis beyond the modulational regimes we start from the basic equation, Eq. (1), but do not use the approximation $`\partial _t^2\stackrel{~}{A}\ll \omega ^2\stackrel{~}{A}`$ leading to Eq. (4) and eventually to the condition $`\stackrel{~}{A}\ll \omega `$. The idea is precisely to examine what happens as the ratio $`\stackrel{~}{A}/\omega `$ of Eq. (5) grows from values much smaller than unity up to values comparable to it.
Our first task is to examine how the regular and well known modulational instability analyzed in the previous section comes directly from Eq. (1). To do that, let us write a truncated solution as the sum of a homogeneous term plus fluctuations with wavevector $`k`$, $`A=A_h(t)+A_1(t)(e^{ikx}+e^{-ikx})`$, $`A_h`$ and $`A_1`$ real. The truncation, which discards higher harmonics, is legitimate within linear regimes, but fortunately we shall see that it is not as restrictive as it might appear even in nonlinear regimes. The general idea favouring truncation here is that in the reasonable situation where modes with the fastest growth rates are more strongly excited, relation (8) indicates that second harmonics are already outside the instability band, since $`2k_{max}=\sqrt{2}k_{tr}>k_{tr}`$. Under these circumstances one would be led to think that the most important modes are the homogeneous one and those at the fundamental spatial harmonic. We shall actually see that the truncation provides a nice and representative approach to the case of regular regimes.
Note that we are already considering the amplitudes of the exponential functions as equal. This results from a simplified symmetrical choice of initial conditions and is totally consistent with the real character of Eq. (1). After some lengthy algebra, one finds out that the coupled nonlinear dynamics of the fields $`A_h`$ and $`A_1`$ is governed by the Hamiltonian
$$H=\frac{p_h^2}{2}+\frac{\omega ^2q_h^2}{2}-\frac{q_h^4}{2}+\frac{2q_h^6}{3}+\frac{p_1^2}{2}+\frac{\chi ^2q_1^2}{2}-\frac{3q_1^4}{4}+\frac{5q_1^6}{3}$$
$$-3q_h^2q_1^2+10q_h^4q_1^2+15q_h^2q_1^4,$$
(9)
where $`\chi ^2\equiv \omega ^2+k^2`$, $`q_h=A_h/\sqrt{2}`$, $`q_1=A_1`$, and where the $`p`$’s denote the two momenta conjugate to the respective $`q`$-coordinates.
The Hamiltonian (9) can be informative. As a first instance it can be used to determine the stability properties of the homogeneous pump, as mentioned before. To see this, assume that on average $`q_h\gg q_1`$ and solve the dynamics perturbatively. In zeroth order, one would have the following Hamiltonian governing the $`(p_h,q_h)`$ dynamics:
$$h_o=\frac{p_h^2}{2}+\frac{\omega ^2q_h^2}{2}-\frac{q_h^4}{2}+\frac{2q_h^6}{3}.$$
(10)
With help of action-angle variables ($`\rho ,\mathrm{\Theta }`$) for the zeroth-order part and conventional perturbative techniques the solution reads
$$q_h=\sqrt{\frac{2\rho }{\omega }}\mathrm{cos}(\omega t-\frac{3}{2}\frac{\rho }{\omega ^2}t),$$
(11)
if $`\rho `$, the amplitude parameter, is not too large. Note that the oscillatory frequency undergoes a small nonlinear correction which is determined by the quartic term in $`q_h`$ of the Hamiltonian $`h_o`$.
Next we consider the driven Hamiltonian controlling the dynamics of the canonical pair $`(p_1,q_1)`$
$$h_1=\frac{p_1^2}{2}+\frac{\chi ^2q_1^2}{2}+3q_h^2q_1^2,$$
(12)
where we recall that the pair $`(p_1,q_1)`$ describes the inhomogeneity of the system. One again introduces action-angle variables $`(I,\theta )`$ to rewrite the linear Hamiltonian (12) in the form
$$h_1=\chi I+12\frac{\rho }{\omega }\frac{I}{\chi }\mathrm{cos}^2\theta \mathrm{cos}^2(\omega t-\frac{3}{2}\frac{\rho }{\omega ^2}t),$$
(13)
from which we obtain the resonant form
$$h_{1,r}=\frac{1}{2\omega ^2}(k^2-3\rho )I+\frac{3\rho I}{2\omega ^2}\mathrm{cos}(2\varphi ),$$
(14)
with $`\varphi =\theta -(\omega -3\rho /2\omega ^2)t`$, and where use is made of the approximation $`\chi \approx \omega +k^2/(2\omega )`$, valid when $`k\ll \omega `$. Now consider the stability of a small perturbation $`I\approx 0`$. For this initial condition $`h_{1,r}\approx 0`$. Since $`h_{1,r}`$ is a constant of motion, solutions with arbitrarily large values of $`I`$, which would indicate instability, are possible only when $`|\mathrm{cos}(2\varphi )|\to 1`$. From the resonant Hamiltonian (14), this demands
$$k<\sqrt{\frac{6\rho }{\omega }},$$
(15)
and also indicates that the maximum growth rate for $`I`$ occurs at $`k_{max}=\sqrt{3\rho /\omega }`$. Comparison of the temporal dependence of Eqs. (3), (6) and (11) shows that $`\rho /\omega =a_o^2`$, and that conditions (7) and (15) are therefore one and the same, as they should be. In other words, starting from the full nonlinear wave equation, we recover the typical results naturally yielded by the NLSE.
But the Hamiltonian (9) gives further information because, apart from the truncation, it is not an adiabatic approximation like Eq. (4); it can therefore tell us whether the reduced dynamics, if unstable, is of the regular or chaotic type. The interest in this issue comes from the fact that the reduced dynamics usually helps to determine the spatiotemporal patterns of the full system: while regular reduced dynamics is associated with regular structures - frequently a collection of solitons or soliton-like structures - chaotic reduced dynamics is associated with spatiotemporal chaos. The correlation has its roots in the so-called stochastic pump model. The model states that intense chaos in a low-dimensional subsystem of a multidimensional environment can make the subsystem act like a thermal source, irreversibly delivering energy into other degrees-of-freedom. In the limit where the dynamics in the reduced system is predominantly regular, irreversibility is greatly reduced and energy tends to remain confined within the subsystem. On the other hand, in the limit of deeply chaotic dynamics with no periodicity at all, energy flow out of the subsystem is fast and there is not even much sense in defining the subsystem as an approximately isolated entity. The intermediate cases are those more amenable to a description in terms of the stochastic pump. We point out, however, that in all cases, even in the chaotic one, analysis of the reduced subsystem serves as an orientation on what to expect in the full spatiotemporal dynamics. In our case, the appropriate subsystem is precisely the one we have been using. This is so because it is the smallest subsystem comprising the most important ingredients of the dynamics: the homogeneous and the only linearly unstable modes.
We now examine these points in some detail. Let us consider $`\omega =0.1`$. When $`A_{h,o}\ll \omega `$, where $`A_{h,o}\equiv A_h(t=0)`$, the modulational approximation can be used to determine the instability range, $`k<k_{tr}`$. A surface of section plot based on the two-degrees-of-freedom Hamiltonian (9) produces Fig. (1), where we record the values of $`(q_1,p_1)`$ whenever $`p_h=0`$ with $`dp_h/dt>0`$, and where we take $`q_1=0.01q_h\ll q_h`$ and $`p_h=p_1=0`$ to fix the single total energy shared by the various initial conditions we launch in the simulations. Note that the particular “seed” initial condition introduced above is included in the ensemble of initial conditions launched and represents small perturbations to a homogeneous background.
In Fig. (1a) we take $`A_{h,o}/\omega =0.5`$ and $`k=2k_{max}>k_{tr}`$. This value lies outside the instability range, and the figure reveals that the origin, which represents the purely homogeneous state $`q_1=A_1=0`$, is indeed stable. The remaining panels are all made for $`k=k_{max}`$ and increasing values of the ratio $`A_{h,o}/\omega `$. One sees that for such values of $`k`$ and $`A_{h,o}`$ not only is the origin rendered unstable, but it also becomes progressively surrounded by chaotic activity. The inner chaotic trajectory issuing from the origin - i.e., the trajectory representing the modulational instability - is encircled by invariant curves, but the last panel already shows that some external chaos, here represented by scattered points around the invariant curves, is also present. At some amplitude $`A_{h,o}=A_{critical}`$ slightly larger than the one used in panel (d), invariant curves are completely destroyed. Above the critical field, inner orbits are no longer restricted to move within confined regions of phase-space - these orbits are in fact engulfed by the external chaos seen in panel (d). External chaos here is a result of the hyperbolic point positioned at the local maximum of the generalized potential $`\mathrm{\Phi }(A)`$, at $`A_1\approx 0.1`$. One gross determination of the critical field based on simple numerical observation yields $`A_{critical}/\omega \approx 1/1.62`$, although the corresponding transition from order to disorder in the full simulations may not be so sharply defined. Yet, one may expect that in both low-dimensional and full simulations the transition should occur at comparable amplitudes.
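A minimal sketch of how such a surface of section can be generated from the reconstructed Hamiltonian (9) is given below; the fixed-step RK4 integrator, step size, and iteration count are our own illustrative choices, and the wavevector k must be supplied by the user (e.g. a value near the fastest-growing mode of the run one wants to reproduce).

```python
import numpy as np

def deriv(y, omega, chi2):
    """Hamilton's equations for the two-mode Hamiltonian (9)."""
    qh, ph, q1, p1 = y
    dph = -(omega**2*qh - 2*qh**3 + 4*qh**5
            - 6*qh*q1**2 + 40*qh**3*q1**2 + 30*qh*q1**4)
    dp1 = -(chi2*q1 - 3*q1**3 + 10*q1**5
            - 6*qh**2*q1 + 20*qh**4*q1 + 60*qh**2*q1**3)
    return np.array([ph, dph, p1, dp1])

def surface_of_section(ah0, k, omega=0.1, dt=0.01, n_steps=500_000):
    """(q1, p1) recorded whenever p_h crosses zero with dp_h/dt > 0."""
    chi2 = omega**2 + k**2
    qh0 = ah0/np.sqrt(2.0)
    y = np.array([qh0, 0.0, 0.01*qh0, 0.0])   # the "seed" initial condition
    points = []
    for _ in range(n_steps):                  # fixed-step RK4
        k1 = deriv(y, omega, chi2)
        k2 = deriv(y + 0.5*dt*k1, omega, chi2)
        k3 = deriv(y + 0.5*dt*k2, omega, chi2)
        k4 = deriv(y + dt*k3, omega, chi2)
        y_new = y + (dt/6.0)*(k1 + 2*k2 + 2*k3 + k4)
        if y[1] < 0.0 <= y_new[1]:            # p_h = 0 crossing, dp_h/dt > 0
            points.append((y_new[2], y_new[3]))
        y = y_new
    return np.array(points)
```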
## IV Full spatiotemporal simulations
To make the appropriate comparisons, we now look into the full simulation of Eq. (1). Full simulations are made through a discretization of the spatial domain via a finite difference method. The dynamics is resolved temporally by means of a symplectic integrator, as was the purely temporal Hamiltonian. The results are quite robust and energy is conserved to $`10^{-6}`$ parts in one. In Fig. (2) we display the spacetime history of the quantity $`Q=\sqrt{\omega ^2A(x,t)^2+\dot{A}(x,t)^2}`$ - we plot this quantity because it becomes a constant of motion in the limit where we discard nonlinearities and inhomogeneities. Initial conditions are the same as the seed initial condition used in Fig. (1), but the control parameters differ. In the case of Fig. (2a), where we choose $`A_{h,o}/\omega =0.1`$ so as to safely satisfy $`A_{h,o}\ll A_{critical}`$ and $`A_{h,o}\ll \omega `$, it is seen that the spatiotemporal dynamics is very regular. The homogeneous state is unstable, but only periodic spatiotemporal spikes can be devised. This is the regular spatiotemporal dynamics so typical of the integrable NLSE, and this kind of dynamics agrees very well with the low-dimensional predictions of the reduced Hamiltonian (9). The fact that only a single structure can be found along the spatial axis at any given time indicates that the dynamics is singly periodic (period-1) along this axis and thus basically understood in terms of the reduced number of active modes (homogeneous plus fundamental harmonics) of Hamiltonian (9). We now move into the vicinity of the critical amplitude $`A_{critical}`$ discussed earlier. Under such conditions, one may expect to see the effects of spatiotemporal chaos. The value $`A_{h,o}/\omega =1/1.625`$ is chosen in Fig. (2b), where full simulations indeed display a highly disorganized state after a short regular transient. Regularly interspersed spikes can no longer be seen and spatial and temporal periodicities are lost, which characterizes spatiotemporal chaos. In this regime many modes become active (we shall return to this point later) and the reduced Hamiltonian (9) fails to provide an accurate description of the full dynamics. However, it still provides a good estimate of the point of transition. A more thorough examination of the transition in the full simulations suggests that the critical field there is a bit smaller - a value close to $`\omega /1.95`$; for smaller values we have not observed noticeable signs of spatiotemporal chaos, even for much longer runs than those presented here. The much larger oscillations executed by $`Q`$ in chaotic cases (see the legend of Fig. (2)) are a direct result of the destruction of the invariant curves seen in Fig. (1). In the absence of invariant curves, initial conditions are no longer restricted to move near the origin.
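As an illustration of such a scheme, a compact sketch follows; it uses a leapfrog (Stormer-Verlet) step, which is symplectic for this separable system, with a grid size, time step, and seed amplitude that are merely illustrative rather than the exact values behind Fig. 2.

```python
import numpy as np

def evolve_nkge(A, Adot, omega, dx, dt, n_steps):
    """Leapfrog (Stormer-Verlet) integration of Eq. (1) on a periodic grid.

    The force is A_xx - dPhi/dA with Phi from Eq. (2); the history of
    Q = sqrt(omega**2 A**2 + Adot**2) is returned, as plotted in Fig. 2."""
    def force(a):
        lap = (np.roll(a, -1) - 2.0*a + np.roll(a, 1)) / dx**2
        return lap - (omega**2*a - a**3 + a**5)   # dPhi/dA from Eq. (2)
    A, Adot = A.copy(), Adot.copy()
    acc = force(A)
    history = []
    for _ in range(n_steps):
        Adot += 0.5*dt*acc            # half kick
        A += dt*Adot                  # drift
        acc = force(A)
        Adot += 0.5*dt*acc            # half kick
        history.append(np.sqrt(omega**2*A**2 + Adot**2))
    return np.array(history)

# Illustrative run: homogeneous state plus a weak fundamental-mode ripple.
omega, L, N = 0.1, 400.0, 1024
x = np.linspace(0.0, L, N, endpoint=False)
A0 = 0.5*omega*(1.0 + 0.01*np.cos(2*np.pi*x/L))
Q = evolve_nkge(A0, np.zeros(N), omega, L/N, 0.05, 20_000)
```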
It is thus seen that there are limits to an integrable modulational description of the dynamics of a wave field. The limits are essentially set by the parameter $`A_{h,o}/\omega `$. If it is much smaller than unity, the modulational description is valid and one can expect to see a collection of spatiotemporally periodic structures being formed as asymptotic states of the dynamics. On the other hand, as the parameter approaches unity, nonintegrable features are likely to be seen. In particular, regularity does not survive for very long, and fluctuations with various length scales appear in the system. This is a regime of spatiotemporal chaos which fundamentally involves the presence of nonlinear resonances between the frequency of the carrier, $`\omega `$, and the intrinsic nonlinear modulational frequency, $`\mathrm{\Omega }`$.
Due to the presence of chaos, one is led to suspect that the transition involves an irreversible energy flow out of the reduced subsystem. If one computes the average number of active modes
$$<N^2>\equiv \frac{\sum _nn^2|A_n|^2}{\sum _n|A_n|^2},$$
(16)
where the amplitudes are defined in the form
$$A_n=\sum _jA(x_j,t)e^{inkx_j},$$
(17)
one obtains the plots shown in Fig. (3).
In Eq. (17), “$`j`$” is the discretization index. In Fig. (3a) we use the same conditions as in Fig. (2a). This panel shows that, in rough terms, energy keeps periodically migrating between the homogeneous mode (when $`\sqrt{<N^2>}\approx 0`$) and the fundamental harmonic (when $`\sqrt{<N^2>}\approx 1`$). Conditions of Fig. (3b) are the same as those of Fig. (2b); one sees that as the ratio $`A_{h,o}/\omega `$ grows, periodicity is lost, and that energy flow out of the initial subsystem into other modes becomes clearly irreversible. In Fig. (3c) we use the same previous conditions with the exception of $`A_{h,o}`$, which we take as $`A_{h,o}=\omega /1.95`$. This slightly smaller, but not too small, value of the initial amplitude allows one to observe the slow diffusive transit of energy during the initial stages of the corresponding simulations. During this stage one can actually look at the subsystem as an energy source adiabatically delivering energy into other modes - the concept of the stochastic pump applies more appropriately in those situations.
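For reference, the diagnostic of Eqs. (16)-(17) reduces to a few lines; the sketch below uses the one-sided FFT of the real field, an illustrative normalization choice of our own, and the field-history array it is applied to is hypothetical.

```python
import numpy as np

def mean_mode_number(A_row):
    """sqrt(<N^2>) of Eq. (16) from one spatial snapshot A(x_j, t); for a
    real field the one-sided spectrum carries the same ratio of sums."""
    power = np.abs(np.fft.rfft(np.asarray(A_row, dtype=float)))**2
    n = np.arange(len(power))
    return np.sqrt(np.sum(n**2*power)/np.sum(power))

# Example usage on a stored (n_t, n_x) field history, e.g. from a full run:
# N_of_t = [mean_mode_number(row) for row in field_history]   # hypothetical
```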
## V Final conclusions
To summarize, in this paper we have studied the breakdown of modulational approximations in nonlinear wave interactions. We have analyzed a nonlinear Klein-Gordon equation to draw the following conclusions. Adiabatic or modulational approximations are accurate while the high-frequency of the carrier wave keeps much larger than the modulational frequency. Under these circumstances the full spatiotemporal patterns are regular as is the dynamics in the reduced subsystem where the system energy is initially injected. There is no net flow of energy out of the reduced subsystem into the remaining modes.
On the other hand, when both frequencies become of the same order of magnitude, the reduced subsystem undergoes a transition to chaos. Correspondingly, the spatiotemporal patterns of the full system become highly disordered and energy spreads out over many modes. The correlation between the low-dimensional and high-dimensional spatiotemporal chaos has its roots in the stochastic pump model. According to the model, a low-dimensional chaotic subsystem may act like a thermal source, delivering energy in an irreversible fashion to other degrees-of-freedom of the entire system. The spectral simulations performed here indicate that this seems to be the case in the present setting.
The transition to chaos involves a noticeable increase in wave amplitude. This takes place when the invariant curves of Fig. (1) are destroyed, allowing for the merging of the external and internal chaotic bands into one extended chaotic sea. This merging of chaotic bands is actually a result of reconnections involving the manifolds of the unstable fixed point at the origin and of other hyperbolic points associated with the curvature of the generalized potential $`\mathrm{\Phi }`$. When both sets of manifolds reconnect, inner trajectories issuing from the origin start to execute the large and irregular oscillations seen in Fig. (2). A more detailed study of the process is currently under way.
## ACKNOWLEDGMENTS
This work was partially supported by Financiadora de Estudos e Projetos (FINEP), Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), and Fundação da Universidade Federal do Paraná - FUNPAR, Brazil. S.R. Lopes wishes to express his thanks for the hospitality at the Plasma Research Laboratory, University of Maryland. Part of the numerical work was performed on the CRAY Y-MP2E at the Supercomputing Center of the Universidade Federal do Rio Grande do Sul.
# Deciphering Secure Chaotic Communication
## Abstract
A simple technique for decoding an unknown modulated chaotic time-series is presented. We point out that, by fitting a polynomial model to the modulated chaotic signal, the error in the fit gives sufficient information to decode the modulating signal. For analog implementation, a lowpass filter can be used for fitting. This method is simple and easy to implement in hardware.
Indexing terms: chaotic time-series, secure communication (PACS: 05.45).
Secure communication using chaos has received much attention recently. Various methods for modulating and demodulating a chaotic oscillator have been proposed (see and references cited in ). There are two ways to hide the modulating signal inside the chaotic carrier signal. One way is to add a small modulating signal to the chaotic carrier signal whose amplitude is much larger. This is called chaotic masking. In this case, power is wasted in the carrier. Another way to encode the modulating signal is to modulate the parameters of the carrier chaotic oscillator. This method has been demonstrated experimentally by many groups.
However, it has also been pointed out that secure communication using chaos can be broken (,). The first method in is specific to the Lorenz oscillator case. The second method by Short is more general. In this approach, the modulating signal is assumed to be small and the phase-space of the carrier chaotic oscillator is reconstructed from the transmitted time-series using the standard delay embedding techniques (, ). The chaotic time-series is predicted by noting the flow of nearby trajectories in the embedded phase-space. Then the Fourier transform of the difference between the predicted series and the actual transmitted series is taken and a comb filter is applied. This Fourier spectrum now reveals the modulating signal. As noted in , this technique works well when the modulating signal amplitude is small. If the amplitude of the modulating signal is large, the phase-space structure of the carrier gets greatly altered and it may not be possible to get a good delay embedding of the carrier oscillator dynamics.
Another way to unmask the modulating signal is to plot the correlation integral of the chaotic time-series as a function of time. This also contains some information about the modulating signal. However, this method requires a lot of computation and works only when the modulating waveform is very slowly varying.
In this brief, we point out a much simpler way to extract the modulating signal without using the phase-space information or sophisticated frequency filtering. This method is simple and easy to implement in hardware for real-time decoding, and works well when the modulating parameter variation is sufficiently large. It has been tested for many continuous-time chaotic oscillator systems, including the Glass-Mackey equation, which generates more complex chaotic waveforms.
We take a short segment of the data and fit a polynomial. The number of data points for the fit is kept larger than the order of the polynomial. The error in the fit now gives some information about the modulating waveform. As the modulating parameter is varied, the Lyapunov exponents ($`\lambda `$’s) of the chaotic oscillator vary and the waveform also varies from, say, a ‘more’ chaotic nature to a ‘less’ chaotic nature. When the waveform is less chaotic (smaller positive $`\lambda `$), we expect the error in the fit to be smaller. The error in the fit becomes larger when the modulating parameter moves the oscillator to a more chaotic region (larger positive $`\lambda `$).
Since the information about the initial fit is lost within a time interval of about $`1/\lambda `$, the next fit to a nearby segment gives another independent estimate of the fit error. It is easy to get a sufficient number of such error points for averaging. The fit error data is then averaged (over about 100-200 points) and applied to a lowpass filter. We used a simple 4th order Butterworth lowpass filter. In the case of a single-tone modulating signal, accurate demodulation is easy and a sharp bandpass filter will do. In a more realistic communication system, the baseband modulating signal will consist of a band of frequencies. The results presented here are shown for a two-tone modulating signal of the type $`m(t)=A_1\mathrm{sin}\omega _1t+A_2\mathrm{sin}\omega _2t`$.
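A minimal sketch of this decoding loop, assuming the parameters quoted above (segments of 10 points, a 4th order polynomial, averaging over 100-200 error points and a 4th order Butterworth filter) and with a function name and interface of our own choosing, could look as follows:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def demodulate(x, seg=10, order=4, n_avg=150, cutoff=0.001):
    """Sketch of the fit-error demodulator: fit a polynomial to short
    segments of the transmitted series x, collect the rms fit errors,
    then average and lowpass filter them."""
    t = np.arange(seg)
    errors = []
    for i in range(0, len(x) - seg, seg):
        chunk = x[i:i + seg]
        resid = chunk - np.polyval(np.polyfit(t, chunk, order), t)
        errors.append(np.sqrt(np.mean(resid**2)))  # rms fit error
    e = np.convolve(errors, np.ones(n_avg) / n_avg, mode='same')
    b, a = butter(4, cutoff)     # cutoff as a fraction of Nyquist
    return filtfilt(b, a, e)     # estimate of the modulating waveform
```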
We have explored other minor variations of the above technique for calculating the fit error. Some improvement in the demodulated waveform can be obtained by averaging the demodulated signals from different orders of polynomial curve fit. Another way to find the fit error is to project the fitted polynomial to a nearby time location and calculate the fit error at that location. The improvements are only marginal.
First, we show the results for the Lorenz oscillator case with parameters $`\sigma =10`$, $`b=8/3`$ and $`r=35`$ (notations as in ). For modulation, the parameter $`r`$ is varied around its nominal value by the two-tone signal with amplitudes $`A_1=A_2=5`$ and $`\omega _1=2\omega _2=0.233`$. That is: $`r=r_0+A_1\mathrm{sin}\omega _1t+A_2\mathrm{sin}\omega _2t`$. The integration is done using a 4th order Runge-Kutta method with a time step of $`\mathrm{\Delta }T=0.01`$. The variable $`x(t)`$ of the Lorenz oscillator is assumed to be transmitted. A 4th order polynomial is used to fit every 10 points in the time-series. A 4th order lowpass filter with a small cut-off frequency (0.001/$`\mathrm{\Delta }T`$) is used to filter the error signal. The results are summarised in Fig. 1. The demodulated output $`\widehat{m}(t)`$ is shown in Fig. 1d. The Fourier frequency spectrum of the error signal (before the final lowpass filter) is shown in Fig. 2. It is clear that the fit error data contains information about the modulating waveform.
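For the reader who wishes to reproduce this numerically, a sketch of the transmitter side is given below; it integrates the Lorenz equations with the modulated parameter $`r`$ by a 4th order Runge-Kutta scheme (the initial condition is an arbitrary choice of ours). The returned series can then be passed to the demodulation sketch above.

```python
import numpy as np

def lorenz_modulated(n_steps, dt=0.01, sigma=10.0, b=8.0/3.0, r0=35.0,
                     A1=5.0, A2=5.0, w1=0.233):
    """Lorenz oscillator with r modulated by a two-tone signal
    (w2 = w1/2, as in the text); returns x(t) and the modulation m(t)."""
    w2 = w1 / 2.0
    m = lambda t: A1 * np.sin(w1 * t) + A2 * np.sin(w2 * t)
    def f(s, t):
        x, y, z = s
        r = r0 + m(t)
        return np.array([sigma * (y - x), r * x - y - x * z, x * y - b * z])
    s = np.array([1.0, 1.0, 1.0])
    xs, ms = np.empty(n_steps), np.empty(n_steps)
    for i in range(n_steps):
        t = i * dt
        k1 = f(s, t)
        k2 = f(s + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = f(s + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = f(s + dt * k3, t + dt)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[i], ms[i] = s[0], m(t)
    return xs, ms
```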
This demodulation technique requires a digital signal processor (DSP) for real-time decoding. To avoid this, we present another, simpler method that can be easily implemented with analog circuitry (Fig. 3). Here, instead of using a DSP or a computer, we use a simple lowpass filter LPF1 (of order 1 or 2) with a low time-constant to predict a short segment of the input chaotic time-series. Then the error $`e(t)`$ between the input signal $`x(t)`$ and the LPF1 output is calculated. The error can be computed with a squaring or an absolute value circuit. The delay in the input path can be implemented with a simple I or II order allpass filter; this compensates for the delay in LPF1. LPF2 is a sharper lowpass filter (say, of order 4 or 5) with a large time-constant that averages the error signal and produces the output demodulated signal $`\widehat{m}(t)`$. This analog implementation gives performance comparable to the polynomial fit algorithm given earlier.
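A digital emulation of this analog chain (a sketch only; the allpass delay compensation is omitted, and the cutoff values must be tuned per oscillator) might read:

```python
import numpy as np
from scipy.signal import butter, lfilter

def analog_decode(x, dt, f1, f2):
    """Emulation of Fig. 3: LPF1 acts as a crude short-time predictor
    of the chaotic input, and the absolute prediction error is then
    averaged by the sharper, slower lowpass LPF2."""
    b1, a1 = butter(2, min(2.0 * f1 * dt, 0.99))  # LPF1: low order, high cutoff
    b2, a2 = butter(4, 2.0 * f2 * dt)             # LPF2: sharp, low cutoff
    error = np.abs(x - lfilter(b1, a1, x))        # a squarer would also do
    return lfilter(b2, a2, error)
```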
The results for the Rossler oscillator case are shown in Fig. 4. The Rossler oscillator is simulated with parameters $`a=0.2`$, $`b=0.2`$ and $`c=5.7`$. The $`b`$ parameter is modulated, as before, with a two-tone modulating signal with $`A_1=A_2=0.015`$ and $`\omega _1=2\omega _2=0.02`$. The time-step for the Runge-Kutta method is $`\mathrm{\Delta }T=0.15`$. LPF1 is a 2nd order Butterworth lowpass filter with a large cut-off frequency ($`0.15/\mathrm{\Delta }T`$). The averaging lowpass filter LPF2 is a 4th order Butterworth filter with a small cut-off frequency ($`0.002/\mathrm{\Delta }T`$).
Again, the Fourier spectrum clearly shows the modulating waveforms. The variation of the output, or of the $`\lambda `$’s, may not be linear when the modulating parameter varies by a large amount. To compensate for this, a non-linear companding/expanding circuit may be used after demodulation. Thus, it is easy to intercept and decode a continuous-time secure chaotic communication.
In conclusion, we have presented a simple algorithm for decoding an unknown modulated chaotic time-series. This technique can be used for real-time decoding using a DSP or an analog circuitry.
# Experimental study of Taylor’s hypothesis in a turbulent soap film
## I Introduction
In a 1938 paper on the statistics of turbulence, G. I. Taylor presented an assumption from which he could infer the spatial structure of a turbulent velocity field from a single point measurement of its temporal fluctuation . This assumption, known as Taylor’s hypothesis or the frozen turbulence assumption, relies on the existence of a large mean flow which translates the fluctuations past the stationary probe in a time short compared to the evolution time of the turbulence. The experimental measurements treated by Taylor were made on the turbulence generated behind a stationary grid in a wind tunnel, and his hypothesis has become a standard technique employed in similar experiments which inform our current views on turbulence (see for example Refs. , , and ). The importance of this hypothesis stems from the fact that most turbulence theories are framed in terms of the spatial structure of the velocity field .
In practical terms, the limits of Taylor’s hypothesis are determined by how large the mean velocity must be relative to the fluctuations. Recently Yakhot has pointed out that the corrections to Taylor’s hypothesis could well be of the same order as corrections to the standard model of turbulence, Kolmogorov’s 1941 theory . There is at present, however, no firm theoretical derivation of the hypothesis, and thus no way to evaluate its reliability, calculate higher order corrections, or predict in what way it will break down. A few theoretical discussions do exist , and a promising direction has recently been taken by Hayot & Jayaprakash for the Burgers equation . The treatment by Lumley in particular offers corrections to statistical measures of spatial gradients for non-negligible velocity fluctuations . Still, a general theoretical framework is lacking.
Experimentally there have been many studies of Taylor’s hypothesis, all of which have treated three dimensional (3D) turbulence; we give a brief overview in Section II. Our experiments, however, are performed on the quasi-two dimensional flow of a soap film. Two-dimensional (2D) fluid flows occur in many physical situations, mostly due to the effects of rotation or stratification in the atmosphere and ocean . Turbulence in 2D is different from 3D in several ways, largely due to the absence of vortex stretching in 2D . This means that the squared vorticity (called enstrophy) becomes a nearly-conserved quantity in 2D, like the energy, and thus two cascades are expected: a direct cascade of enstrophy to smaller scales, and an inverse cascade of energy to larger scales ; for decaying 2D turbulence the inverse cascade is apparently absent . Although Taylor’s hypothesis is also an important assumption in the study of 2D turbulence, to our knowledge it has never been tested, nor is it clear that the hypothesis should do relatively better or worse than in 3D.
It may be helpful at this point to discuss the essential differences between 2D and 3D turbulence in order to better relate the present measurements to prior three dimensional tests of the Taylor hypothesis. In three dimensions the vorticity vector $`\omega (x,y,z,t)`$ can point in any direction, but in 2D it is restricted to be perpendicular to the $`x,y`$ plane of the flow. This fact alone assures that in inviscid flows, the enstrophy $`\mathrm{\Omega }=\frac{1}{2}<\omega ^2>`$ is a constant of the motion, in addition to kinetic energy conservation $`K=\frac{1}{2}<v^2>`$ (the angular brackets designate an appropriate average). From the Navier-Stokes equation it follows that in 2D, vorticity cannot be amplified (or attenuated except by viscous damping) by a velocity field gradient. The existence of vortex stretching in 3D is intimately related to the energy cascade from large scales to small, with the process controlled by the rate $`ϵ`$ at which kinetic energy is injected at large scales.
In 2D, one expects that energy injected at an intermediate scale $`r_{inj}`$ will be transferred to larger scales and dissipated at the boundaries of the system. This inverse cascade process is not expected to be local in k-space. For scales $`r<r_{inj}`$, the small-scale velocity fluctuations $`<|\delta v(r)|>`$, which carry little of the energy, are expected to cascade down to the dissipative scale under the control of the enstrophy injection rate $`\beta \equiv \partial \mathrm{\Omega }/\partial t`$ (the energy injection rate $`ϵ`$ plays the corresponding role in the 2D inverse cascade).
It seems reasonable that the absence of vortex stretching in 2D will increase the likelihood of a velocity fluctuation being transported intact to a distant downstream point, thereby enhancing the validity of Taylor’s hypothesis over 3D turbulence. Taylor’s hypothesis can also fail when a local fluctuation is transported laterally into the flow path between adjacent points separated by the distance $`\mathrm{\Delta }x=U_0\mathrm{\Delta }t`$. For purely geometrical reasons, this would seem to be a less likely occurrence in 2D than in 3D, so that the frozen turbulence relation $`\delta v(\mathrm{\Delta }x)=\delta v(U_0t)`$ should presumably hold out to larger values of $`\mathrm{\Delta }x`$. It is harder for us to assess the impact of 2D nonlocality on the validity of the Taylor hypothesis in 2D as compared to 3D.
Our experiment uses a laser Doppler velocimeter (LDV) with two probes to nonintrusively examine Taylor’s hypothesis in a turbulent soap film. We differentiate between two different aspects of the hypothesis. By Taylor’s hypothesis of coherence we mean the assertion that the velocity field is unchanged as it is advected downstream: that $`v(x,t)`$ is identical to $`v(x+\mathrm{\Delta }x,t+\tau _0)`$, where $`\tau _0=\mathrm{\Delta }x/U_0`$, with $`U_0`$ being the mean velocity and $`\mathrm{\Delta }x`$ the distance between the two points . For a clear illustration of this, glance ahead to Figure 9. This coherence hypothesis is unquestionably an approximation and must fail as $`\mathrm{\Delta }x`$ becomes large; here we quantify this failure, and relate it to the predictability problem for turbulent flows. We measure the breakdown of the coherence hypothesis via the correlation between two points in the flow, displaced in both position and time . We find nevertheless that the expectation value of the lowest six moments of the longitudinal velocity difference is unaffected by the loss of coherence. Thus time correlation statistics of velocity fluctuations appear to be the same as spatial ones; we will refer to this as Taylor’s statistical hypothesis, which is implied by the coherence hypothesis, but does not actually require it.
## II Previous Studies of Taylor’s Hypothesis
Taylor’s hypothesis has been subjected to many experimental tests in three dimensional turbulence, and we do not intend to give an exhaustive survey here. These experimental studies can be divided into two broad categories, concerned with either correlations over finite distances , or local spatial derivatives used in turbulent dissipation estimates . It has long been appreciated that the validity of Taylor’s frozen turbulence assumption requires the smallness of the turbulent intensity $`I_t`$, defined as the ratio of rms velocity fluctuations to the mean flow speed $`U_0`$. Additionally, the mean shear rate and the viscous damping must be small in the range of spatial scales $`\mathrm{\Delta }x`$ being probed. In this section we give a sampling of the sort of work which has been done in 3D; a more detailed discussion of studies with which we directly compare our results is given in Section V. The reader is referred to the introductory reviews in two recent articles , which are somewhat complementary to what is given here.
To test the application of Taylor’s hypothesis to velocity correlations over finite distances, one approach is to compare measurements made at a single observation point with measurements made at points displaced downstream. The first such tests were made in a wind tunnel by Favre, Gaviglio, & Dumas . The measurements were performed within the turbulent boundary layer of a plate at various $`I_t`$ up to 15%. They found that Taylor’s statistical hypothesis is valid for measurements of the velocity correlation function $`R(\mathrm{\Delta }x,\tau )=\langle v_1(x_1,t)v_2(x_1+\mathrm{\Delta }x,t-\tau )\rangle `$ made not too close to the plate. Fisher & Davies made careful measurements of the velocity correlation function $`R(\mathrm{\Delta }x,\tau )`$ in a jet. They found that the relation $`\tau =\mathrm{\Delta }x/U_0`$ was not well satisfied in the mixing region, where $`I_t`$ is typically $`\sim `$ 20%. These authors observed, as have many others, that the functional form of $`R`$ changes with increasing $`\mathrm{\Delta }x`$, and that this function is not very sharply peaked, as it would be if Taylor’s coherence hypothesis were satisfied. Comte-Bellot & Corrsin measured $`R(\mathrm{\Delta }x,\tau )`$ for grid-generated turbulence in a wind tunnel, where both $`I_t`$ and the mean shear rates are rather small. Though downstream decay causes the maximum value of $`R`$ vs. $`\tau `$ to decrease with increasing $`\mathrm{\Delta }x`$, the correlation functions could still be collapsed onto the same functional form.
One of the fundamental effects of turbulence is its enhancement of dissipation, and measurements of this require knowledge of spatial derivatives. Using Taylor’s hypothesis allows the time derivative of a single point measurement to be related to spatial derivatives, the simplest relation being $`\partial \varphi /\partial x=-U_0^{-1}\partial \varphi /\partial t`$, where $`\varphi `$ could be a passive scalar concentration, temperature, a velocity component, or a product of velocity components. Kailasneth, Sreenivasan, & Saylor studied a variety of turbulent systems and tested Taylor’s hypothesis using a fluorescent dye in a jet, the heated wake of a cylinder, and the atmospheric boundary layer. They were interested in the probability density function of these scalars, and found that Taylor’s statistical hypothesis worked well for conditional probability densities of the scalar fluctuations. Mi & Antonia used a heated jet of air, with a turbulent intensity of about 26%, to verify different theoretical relations between spatial and temporal derivatives of temperature, corrected for finite turbulent intensities. Dahm & Southerland compared 2D spatial and spatio-temporal gradient fields of fluorescent dye in a water jet using a fast photodiode array. They found that Taylor’s coherence hypothesis was only approximately verified for these fields. Piomelli, Balint, & Wallace studied Taylor’s hypothesis for various velocity derivatives and compared hot wire measurements, large eddy simulations, and direct numerical simulations of the Navier-Stokes equation for wall-bounded flows. Taylor’s hypothesis was found to be in accord with the calculations and measurements made sufficiently far from the wall, where the mean shear is not excessive. In our experiments we have not treated the application of Taylor’s hypothesis to gradients.
All hot wire measurements are sufficiently intrusive that one must compensate or otherwise adjust for the perturbations produced by the wake of the upstream probe on the velocity measured at the downstream probe. Cenedese et al. avoided this problem by making velocity measurements with a Laser Doppler system. Only for small $`\mathrm{\Delta }x`$ did their correlation measurements satisfy Taylor’s hypothesis very well, though admittedly $`I_t`$ was rather high (13%). We discuss their results in more detail in Section V.
In the present experiment, Laser Doppler velocimetry is also used to measure $`R(\mathrm{\Delta }x,\tau )`$, so that there is no perturbation on the downstream probe. In contrast to the studies discussed above, however, we examine Taylor’s hypothesis in a quasi-2D system, a flowing soap film; we have also studied the effects on the higher order moments of velocity differences. Before presenting our results, we discuss in some detail our experimental system.
## III Experimental Setup
The use of soap films as convenient systems for the experimental study of 2D hydrodynamics began with the pioneering work of Couder and coworkers and Gharib and Derango . Our measurements are performed using a flowing soap film apparatus developed at the University of Pittsburgh by Kellay, Wu, & Goldburg , and Rutgers, Wu, & Goldburg ; we are using the latest version of this system, built by Rutgers. In our setup, a thin soap film several $`\mu `$m thick is allowed to fall between two taut plastic wires from an upper reservoir into a lower one, see Figure 7. The channel width is $`W=6.2`$ cm over a distance of 120 cm, where the measurements are performed. Quasi-two-dimensional turbulence is generated behind a comb (tooth diameter 1 mm and spacing $`M=`$ 3.8 mm) which perforates the film at a fixed height. The typical transit time between the comb and the lower reservoir is of the order of a second.
The turbulence generated in such a soap film decays downstream from the grid, exhibiting many aspects which agree with theories of 2D turbulence , though there are also some differences. It therefore seems worthwhile to evaluate the experimental situation to date. Because the lateral dimensions of soap films are many orders of magnitude larger than their thickness, there would seem to be no doubt that the vorticity can indeed be regarded as a scalar quantity, so that vortex stretching is absent . This would be a key requirement for the occurrence of two-dimensional turbulence in soap films. In addition, to be able to compare with theoretical and numerical results concerning incompressible turbulence, the two-dimensional compressibility of the film should be zero. This condition is clearly not fully met, since fluctuations in film thickness are visible to the eye through optical interference of light reflected from the front and back faces of the film . There are, however, recent experiments in which the two dimensional divergence $`D_2=\nabla _2\cdot 𝐯(x,y)`$ was measured by particle imaging velocimetry . In a set-up very similar to that used here, $`D_2`$ was measured to be 10-15% of the rms vorticity near the comb. Note also that because the velocity of peristaltic waves in a soap film is orders of magnitude larger than the turbulent velocity fluctuations , it is expected that the film may be regarded as incompressible from the point of view of this study.
Another factor compromising the two-dimensionality of soap film flow is the friction between the film and the surrounding air. In a set of experiments in which the film was placed in a partial vacuum, where the air pressure was 3% of the atmospheric value, it was found that the energy spectrum $`E(k)`$ decayed for a decade in $`k`$ as $`E(k)\propto k^{-\zeta }`$, with $`\zeta =3.3\pm `$ 0.3. This exponent had the same value in both the partial vacuum and at atmospheric pressure. The main effect of the reduced pressure was to magnify the magnitude of $`E(k)`$ near the comb . These measurements also showed that the total kinetic energy initially decayed downstream, but ultimately leveled off at a sufficient distance below the comb, which is expected theoretically at very high Reynolds numbers . If the levelling off distance is called $`x^{}`$ and the corresponding time $`t^{}=x^{}/U_0`$, then all measurements reported in this paper were made at values of $`t<t^{}`$, i.e. at distances from the comb too close for the levelling off to have occurred.
In most (but not all) measurements of decaying turbulence in a soap film, only the enstrophy cascade is observed, i.e. the inverse energy cascade ($`E(k)\propto k^{-5/3}`$) is not. Recently an experiment was performed in which turbulence was forced by an array of teeth parallel to the direction of flow . There one finds evidence of both the inverse cascade ($`k^{-5/3}`$) and the enstrophy cascade spectrum, where $`E(k)\propto k^{-3}`$.
We use a commercial LDV system to measure the film velocity fluctuations . The soap solution (water and 2% commercial detergent by volume) is seeded with 1 $`\mu `$m polystyrene spheres at a volume fraction of about $`10^{-4}`$, and the data rate ranges from 1 to 8 kHz. At a distance $`x=`$ 8 cm below the comb, the mean and RMS velocities were typically $`U_0=`$ 180 cm/s and $`v_{RMS}\equiv \langle (v^{})^2\rangle ^{1/2}=`$ 24 cm/s in the longitudinal (streamwise) direction, where $`v^{}\equiv v-U_0`$. The turbulent intensity in these experiments was $`I_t\approx 0.14`$. This is the quantity which is assumed in Taylor’s hypothesis to be small , and we have explicitly chosen for our study a value of $`I_t`$ which is not very small. The Reynolds number for the channel is $`Re_W\equiv U_0W/\nu \approx 11,000`$, and for the comb $`Re_M\equiv U_0M/\nu \approx 700`$. The viscosity of a flowing soap film is not a well established quantity; here we use $`\nu =0.1`$ cm²/s as measured in a 2D Couette viscometer by Martin & Wu . Deviations from two-dimensionality caused by air friction appear not to affect the turbulence for the scales of interest here .
In order to test Taylor’s hypothesis, two LDV heads are used at spatially separated points. Figure 1 shows the arrangement, with the downward (flow) direction defined as the $`x`$ direction. One head (labeled “LDV 1”) is kept fixed at $`x_1=`$ 8 cm below the comb, while a second head (“LDV 2”) is placed at various points ranging from $`x_2`$ = 4 to 30 cm below the comb. The two probes are arranged to be directly in line with each other, so that the lower probe measures the same part of the flow as the upper one, with a delay given by the transit time between them.
Because the LDV only measures velocity when there is a scatterer in its measuring volume, the two probes do not in general measure velocity simultaneously. Therefore some binning or “simultaneity window” $`\mathrm{\Delta }t`$ is needed to perform statistical comparisons: two measurements are treated as simultaneous if they occur within $`\mathrm{\Delta }t`$ of one another. Here we use binning windows from $`\mathrm{\Delta }t=`$ 25 $`\mu `$s to 200 $`\mu `$s, corresponding to frequencies $`1/\mathrm{\Delta }t`$ from 5 to 40 kHz, which are higher than the largest observed frequency in the velocity power spectrum. All measurements reported here are insensitive to small changes in $`\mathrm{\Delta }t`$.
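In software, such a simultaneity window can be implemented by matching each sample of one probe to the nearest-in-time sample of the other, along the following lines (a sketch with our own variable names, not the analysis code itself):

```python
import numpy as np

def pair_events(t1, v1, t2, v2, window=1e-4):
    """Pair asynchronous LDV samples (t1, v1) and (t2, v2): a pair is
    kept only if the arrival times differ by less than `window` (s)."""
    j = np.clip(np.searchsorted(t2, t1), 1, len(t2) - 1)
    nearest = np.where(np.abs(t2[j] - t1) < np.abs(t2[j - 1] - t1), j, j - 1)
    ok = np.abs(t2[nearest] - t1) < window
    return v1[ok], v2[nearest[ok]]
```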
One of the difficulties in examining the validity of Taylor’s hypothesis in decaying turbulence is that the “small parameter” $`I_t`$ is not constant, but decreases downstream as the turbulence decays. In Fig. 8a we plot our turbulent intensity as a function of distance from the comb. Thus, although we measure the correlation of velocity fluctuations relative to $`x_1=8`$ cm, where $`I_t\approx `$ 0.14, the actual turbulent intensity affecting the velocity field downstream is always less than 0.14. This would seem to complicate the matter significantly.
How does the turbulence decay in our system? A standard result from 3D decaying grid turbulence is that the inverse square of the turbulent intensity depends on the distance from the grid as: $`I_t^{-2}=A(x/M-B)^\beta `$, where the dimensionless constants are typically found to be $`A\approx `$ 130 - 150 and $`B\approx `$ 3 - 20 for $`\beta =1`$ , or $`A\approx `$ 20 and $`B\approx `$ 3.5 for $`\beta =1.25`$ ; note that $`B`$ is the effective position of the origin for this scaling in units of $`M`$. Assuming that $`\beta =1.25`$ means that $`I_t^{-1.6}`$ should be a linear function of $`x/M`$; however we find that by taking $`I_t^{-1.1}`$ we get the best linear plot vs. $`x/M`$, as shown in Fig. 8b. The line corresponds to
$$\frac{1}{I_t^2}=0.2\left(\frac{x}{M}\right)^{1.8},$$
with $`B`$ = 0 for our fit, which means that the virtual origin is located at the position of the grid itself. Note that we do not measure far enough downstream to see any evidence of the kinetic energy saturation as the turbulence decays downstream ($`I_t`$ constant).
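The exponent of such a decay law can be estimated with a straight-line fit in log-log coordinates; the sketch below assumes measured intensities $`I_t`$ at several downstream positions $`x/M`$ and, as in our fit, a virtual origin $`B=0`$.

```python
import numpy as np

def decay_exponent(x_over_M, I_t):
    """Least-squares estimate of beta and A in 1/I_t**2 = A*(x/M)**beta,
    from a linear fit of log(1/I_t**2) against log(x/M)."""
    beta, logA = np.polyfit(np.log(x_over_M), np.log(1.0 / I_t**2), 1)
    return beta, np.exp(logA)
```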
## IV Results
### A Testing Taylor’s Coherence Hypothesis
Figure 9 shows the velocity fluctuations measured by two probes with $`\mathrm{\Delta }x=`$ 0.5 cm. For this small separation, Taylor’s hypothesis is clearly a good estimate: the two velocity traces are nearly identical except for a small shift in time, which should correspond to the transit time across the spatial separation $`\mathrm{\Delta }x`$. To test whether the velocity trace is translated spatially without evolving dynamically, we measure the cross-correlation $`C_{12}(\tau ,\mathrm{\Delta }x,x_1)`$ between the two probes:
$$C_{12}(\tau ,\mathrm{\Delta }x,x_1)\equiv \frac{\langle v_1(x_1,t)v_2(x_1+\mathrm{\Delta }x,t-\tau )\rangle }{v_{1RMS}\times v_{2RMS}}.$$
(1)
Here $`v_1(x_1,t)`$ and $`v_2(x_2,t)`$ are the two velocities measured by the probes, $`v_{iRMS}`$ are the RMS velocity fluctuations, $`\mathrm{\Delta }x\equiv x_2-x_1`$, and the brackets $`\langle \rangle `$ denote a time average. Note that $`C_{12}`$ is a function not only of delay time $`\tau `$ and separation $`\mathrm{\Delta }x`$, but of the absolute location of the first probe $`x_1`$. This last dependence comes from the fact that the turbulence is decaying. In this study we fix $`x_1=`$ 8 cm, and ignore this dependence.
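Given two records binned onto a common time base, the correlation function and the delay of its maximum can be estimated as sketched below (our own minimal implementation; fluctuations about the mean are used, and the downstream record is shifted later in time):

```python
import numpy as np

def cross_correlation(v1, v2, max_lag):
    """Normalized cross-correlation of two simultaneously sampled
    velocity records, for lags 0 .. max_lag-1 samples."""
    u1, u2 = v1 - v1.mean(), v2 - v2.mean()
    norm = u1.std() * u2.std()
    lags = np.arange(max_lag)
    C = np.array([np.mean(u1[:len(u1) - k] * u2[k:]) for k in lags]) / norm
    return lags, C

# lags, C = cross_correlation(v1, v2, 2000)
# tau_max = lags[np.argmax(C)] * dt      # compare with dx / U0
```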
Figure 10 shows $`C_{12}(\tau ,\mathrm{\Delta }x)`$ vs. $`\tau `$ for several different separations $`\mathrm{\Delta }x`$. As expected, there is a well-defined maximum correlation
$$C_{12}^{MAX}(\mathrm{\Delta }x)\equiv C_{12}(\tau _{MAX},\mathrm{\Delta }x)$$
at a particular value of the delay time $`\tau _{MAX}(\mathrm{\Delta }x)`$. Taylor’s coherence hypothesis requires that $`C_{12}^{MAX}(\mathrm{\Delta }x)`$ be close to 1, and $`\tau _{MAX}(\mathrm{\Delta }x)`$ be equal to the transit time $`\mathrm{\Delta }x/U_0`$. Figure 11 shows $`\tau _{MAX}`$ as a function of $`\mathrm{\Delta }x/U_0`$, in agreement with the line drawn for $`\tau _{MAX}=\mathrm{\Delta }x/U_0`$ . As predicted by Taylor’s hypothesis, the slope of this line is unity. The small deviations are due to errors in our measurement of $`\mathrm{\Delta }x`$.
In Figure 12 we plot the maximum correlation $`C_{12}^{MAX}(\mathrm{\Delta }x)`$. As one would expect, the correlation decreases as $`\mathrm{\Delta }x`$ increases, though we have not found any simple functional form to fit to this decrease, nor is there to our knowledge any predicted form. The loss of correlation is due to the dynamic evolution of the velocity fluctuations and sets a limit to Taylor’s hypothesis, which we quantify by defining an “evolution length” $`\delta _e`$ as the separation for which the correlation drops to 50%. For our experiment we find $`\delta _e\approx `$ 7 cm, corresponding to an evolution time $`\tau _e=\delta _e/U_0\approx 40`$ ms. This length is much larger than the relevant lengths of the turbulent velocity field, as we show next.
### B Testing Taylor’s Statistical Hypothesis
The statistical study of turbulence is framed in terms of velocity correlation functions, structure functions, and energy spectra; here we focus on the structure function. The longitudinal velocity difference between two points separated by a distance $`r`$ is written as
$$\delta v(r,t)\equiv \left(𝐯(x_1+r,t)-𝐯(x_1,t)\right)\cdot \widehat{𝐫},$$
where the unit vector $`\widehat{𝐫}`$ is in the downward direction of the flow. The $`n`$th order structure function is defined as:
$$S_n(r)\equiv \langle (\delta v(r,t))^n\rangle .$$
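From a single-point record sampled at interval $`dt`$, these moments are estimated by substituting $`r=U_0\tau `$; a minimal sketch (the shift range and orders are illustrative):

```python
import numpy as np

def structure_functions(v, U0, dt, orders=(2, 4, 6), max_shift=500):
    """S_n(r) from a single-point velocity record v, via Taylor's
    hypothesis r = U0 * tau; returns r and a dict {n: S_n}."""
    shifts = np.arange(1, max_shift)
    r = U0 * shifts * dt
    S = {n: np.empty(len(shifts)) for n in orders}
    for i, k in enumerate(shifts):
        dv = v[k:] - v[:-k]          # longitudinal velocity difference
        for n in orders:
            S[n][i] = np.mean(dv**n)
    return r, S
```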
In Figure 13 we show the structure functions $`S_2(r)`$, $`S_4(r)`$, and $`S_6(r)`$, calculated from single point velocity measurements using Taylor’s hypothesis: $`r=U_0\tau `$ (solid circles). We also plot direct spatial measurements of these structure functions made using two probes (open squares). This is a direct confirmation of Taylor’s statistical hypothesis, which is one of the central results of our study. Note that $`S_2(r)`$ shows a scaling region of about a decade where $`S_2(r)\propto r^{1.6}`$, in good agreement with other experiments on turbulent soap films . We also find approximately that $`S_4(r)\propto r^{2.9}`$ and $`S_6(r)\propto r^{4.0}`$, as shown in the figure. The third moment $`S_3(r)`$ has been treated in detail elsewhere .
Some comment must be made on the observed values of the exponents, which are so different from $`S_n(r)\propto r^n`$, the theoretical expectation for the enstrophy cascade in 2D turbulence . It is well known that in 3D turbulence the scaling law exponents of the $`n`$th order structure functions deviate from their expected value of $`n/3`$ as $`n`$ gets large . This systematic difference is attributed to the intermittency of the fluctuations . However, the third order structure function must scale as $`S_3(r)\propto r`$ even with intermittency, as can be derived directly from the Navier-Stokes equations . An equivalent derivation for 2D turbulence would not be relevant to the third moment in the enstrophy cascade range discussed here. In fact $`S_3`$ is observed to be approximately zero in this range (and is positive for large $`r`$) . This observation suggests to us that the scaling exponents of the enstrophy range are more sensitive to intermittency than in 3D turbulence. Evidence of intermittency, indicated by non-Gaussian velocity fluctuations, has been reported previously for our experiment .
For $`r>`$ 1 cm, the $`S_n(r)`$ saturate to constant values, which for $`S_2(r)`$ is equal to $`2v_{RMS}^2`$. This occurs roughly at the integral or outer scale , which characterizes the largest scales on which the velocity is correlated. The integral scale $`\ell _0`$ is defined as
$$\ell _0\equiv \int _0^{\mathrm{\infty }}b(r)𝑑r/v_{RMS}^2,$$
(2)
where $`b(r)`$ is the velocity correlation function:
$$b(r)\equiv \langle v^{}(x,t)v^{}(x+r,t)\rangle =v_{RMS}^2-\frac{1}{2}S_2(r).$$
At $`x_1=`$ 8 cm we find $`\ell _0=`$ 0.6 cm, which is much less than the evolution length $`\delta _e`$ = 7 cm. Thus for the turbulence in our soap film, Taylor’s hypothesis is justified: the two signals are correlated better than 90% for scales $`r<\ell _0`$ (see Figs. 12 and 13).
### C Detailed Study of the Velocity Decorrelation
There are in general two reasons for the breakdown of Taylor’s hypothesis: the entrance of new structures into the line of travel, introducing new fluctuations into the signal, and the evolution of the velocity field itself. In Figure 14 we show an overlay of the velocity measured at $`x_1=`$ 8.0 cm vs. $`t`$, and the velocity measured at $`x_2=`$ 10.0 cm vs. $`t-\tau _{MAX}`$ ($`\mathrm{\Delta }x=2`$ cm). For a perfect correlation ($`C_{12}^{MAX}=1`$) the two curves would fall on top of each other. The arrows indicate fluctuations which have either appeared or disappeared during the transit time between the two probes. In effect this means that information is being generated, and this “new information” is partially responsible for the velocity decorrelation (Fig. 5).
To explore the details of this process, we measure the coherence spectrum $`Cs(f)`$ of the fluctuations at $`x_1`$ and $`x_2`$ . If Taylor’s hypothesis of coherence were justified, then $`v_1\equiv v(x_1,t)`$ would be identical to $`v_2\equiv v(x_2,t-\tau _{MAX})`$. If $`\widehat{v}_1(f)`$ is the complex Fourier transform of $`v_1`$, then the standard power spectrum is $`Ps_1(f)=\widehat{v}_1(f)\widehat{v}_1^{*}(f)`$, where $`\widehat{v}^{*}`$ is the complex conjugate of $`\widehat{v}`$. The coherence spectrum is
$$Cs(f)\equiv \frac{1}{2}\frac{\widehat{v}_1(f)\widehat{v}_2^{*}(f)+\widehat{v}_1^{*}(f)\widehat{v}_2(f)}{\sqrt{Ps_1(f)\times Ps_2(f)}},$$
(3)
normalized so that $`Cs=`$ 1.0 for frequencies where the two time series are coherent. A measurement of $`Cs(f)`$ with increasing probe separation shows which modes in the turbulent spectrum persist longer and which evolve faster.
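A segment-averaged estimate of Eq. (3) can be sketched as follows (a bare-bones version: windowing and segment overlap, which a careful estimate would use, are omitted, and $`v_2`$ is assumed to be already shifted by $`\tau _{MAX}`$):

```python
import numpy as np

def coherence_spectrum(v1, v2, dt, nseg=1024):
    """Cs(f) of Eq. (3), estimated by averaging FFTs over segments."""
    n = (len(v1) // nseg) * nseg
    a = np.fft.rfft(np.reshape(v1[:n], (-1, nseg)), axis=1)
    b = np.fft.rfft(np.reshape(v2[:n], (-1, nseg)), axis=1)
    P1 = np.mean(np.abs(a)**2, axis=0)
    P2 = np.mean(np.abs(b)**2, axis=0)
    cross = np.mean(a * np.conj(b), axis=0)
    # Re(a b*) equals (a b* + a* b) / 2, i.e. the numerator of Eq. (3)
    return np.fft.rfftfreq(nseg, dt), cross.real / np.sqrt(P1 * P2)
```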
Consider first the velocity power spectrum at a single point, shown as the thin line in Figure 15. This spectrum, according to the standard picture of 2D decaying turbulence , should have a power law dependence $`Ps(f)\propto f^{-\alpha }`$ in the enstrophy cascade range, with $`\alpha =`$ 3. In an earlier soap film experiment , this exponent was found to be measurably larger than 3; here we find $`\alpha =3.6\pm 0.2`$. We compare this to the coherence spectrum, which we expect to be nearly equal to 1.0 for small separations. The coherence spectrum for $`\mathrm{\Delta }x=`$ 0.2 cm is also shown in Figure 15 (thick line). We see that $`Cs(f)`$ is indeed close to unity over most of the frequency range in which the power spectrum appears. However, the coherence drops below $`Cs\approx 0.9`$ at $`f\approx `$ 300 Hz, around the middle of the range where $`Ps(f)\propto f^{-\alpha }`$, and for $`f\approx `$ 850 Hz, where the power spectrum is reaching the noise floor in our measurement, $`Cs\approx 0.2`$. Thus already at $`\mathrm{\Delta }x=`$ 0.2 cm it appears that the high frequency components are the most rapidly evolving. In contrast, for the 2D enstrophy cascade it is expected that the ‘eddy turnover time’ is independent of size ; the observed falloff in $`Cs(f)`$ may indicate viscous dissipation effects.
The coherence spectra at five increasing separations $`\mathrm{\Delta }x`$ are shown in Figure 16, as a linear-log plot. In each case we see decorrelation at higher frequencies (smaller scales), while the low frequency part remains at a constant value which decreases as $`\mathrm{\Delta }x`$ increases. This constant correlation is approximately equal to $`C_{12}^{MAX}(\mathrm{\Delta }x)`$, which means that the overall coherence of the velocity field is determined mainly by the low frequency components. The high frequency decorrelation also moves to lower frequencies as $`\mathrm{\Delta }x`$ increases. To see whether the whole shape of the $`Cs(f)`$ follows the decay of $`C_{12}^{MAX}(\mathrm{\Delta }x)`$, we normalize the coherence spectra as $`Cs(f)/C_{12}^{MAX}`$ in Figure 17. The curves do not lie entirely on top of each other, indicating that the cutoff at high frequencies follows a different evolution than $`C_{12}^{MAX}`$. The cutoff shape is well described as $`Cs(f)\propto \mathrm{log}(1/f)`$, shown as the straight lines drawn through the data. This advancing cutoff is reminiscent of the loss of predictability in the spectra of atmospheric turbulence simulations .
At larger separations ($`\mathrm{\Delta }x>12`$ cm), we find that the spectral position of this cutoff no longer moves to lower frequencies as $`\mathrm{\Delta }x`$ increases. In Figure 18 we superpose the coherence spectra for $`\mathrm{\Delta }x`$ from 12 to 22 cm, normalized by $`C_{12}^{MAX}(\mathrm{\Delta }x)`$. The curves lie reasonably on top of each other, which means that the entire coherence spectrum follows the overall decrease of $`C_{12}^{MAX}`$. By taking the intersection of the logarithmic fit with the line $`Cs(f)=C_{12}^{MAX}(\mathrm{\Delta }x)`$ in Figs. 17 and 18, we use the frequency $`f_d`$ of the intersection to characterize the position of the spectral cutoff . We plot this frequency in Figure 19. Up to $`\mathrm{\Delta }x\approx 12`$ cm, the advance of $`f_d`$ to lower frequencies is consistent with the scaling $`f_d\propto \mathrm{\Delta }x^{-1/2}`$, which is slower than that seen for the wavenumber cutoff in 2D numerical simulations of atmospheric predictability . For $`\mathrm{\Delta }x>12`$ cm, $`f_d`$ reaches a constant value of about 30 Hz, corresponding to a length of 6 cm, which is the size of our system (the channel width $`W=6`$ cm).
### D Turbulent Predictability and Taylor’s Hypothesis
The failure of Taylor’s hypothesis in our experiment is closely related to the question of predictability in 2D turbulence . The general study of predictability in turbulence (see for example ) is of particular importance to the weather prediction problem . Here we briefly show how our analysis parallels this general framework. Note that here we are comparing a developing turbulent velocity with its initial state, whereas studies of predictability treat the diverging evolution of two nearly identical initial states. Nonetheless there are several similarities between the two.
Following Métais & Lesieur, we first define the time series of the velocity difference, or error time series , which for our experiment is written
$$\mathrm{\Delta }v(\mathrm{\Delta }x,t)\equiv v_1(x_1,t)-v_2(x_1+\mathrm{\Delta }x,t-\tau _{MAX}).$$
(4)
For a perfectly correlated signal this time series would be identically zero. We define the difference energy $`\mathrm{\Delta }E(\mathrm{\Delta }x)=\langle (\mathrm{\Delta }v(\mathrm{\Delta }x,t))^2\rangle `$, which is the second moment of $`\mathrm{\Delta }v`$ and thus a kinetic energy associated with the difference series; it is analogous to the error energy in predictability studies . In Figure 20 we plot a dimensionless $`\mathrm{\Delta }E`$, namely
$$\rho (\mathrm{\Delta }x)\equiv \frac{\mathrm{\Delta }E(\mathrm{\Delta }x)}{v_{1RMS}^2+v_{2RMS}^2},$$
(5)
as a function of the decay time $`\mathrm{\Delta }x/U_0`$. The function $`\rho `$ is defined to increase from 0 to 1 as $`\mathrm{\Delta }x`$ increases, and acts as a sort of distance function between the two velocities. By comparing Eqs. 1 and 5, one sees that $`\rho `$ and $`C_{12}`$ are simply related.
The inset to Fig. 20 shows an enlargement of $`\rho (\mathrm{\Delta }x/U_0)`$ near $`\mathrm{\Delta }x/U_0=0`$. There is no clear linear portion in the plot which would correspond to an exponential error increase. In the study of turbulent predictability an exponential error growth is used to define a sort of Lyapunov exponent , with the error energy serving as a metric for evaluating the distance between co-evolving turbulent states. The data shown in Fig. 20 are in fact better described by a power law $`\rho \propto (\mathrm{\Delta }x/U_0)^{1/2}`$, as shown in Fig. 21. This apparent square root dependence should be interpreted only as a power law dependence: the actual value of the exponent depends on the choice of metric function, Eq. 5. Since an exponential growth of the error energy depends on the linearization of an underlying equation for $`\rho `$, Fig. 21 may indicate the presence of higher order terms, similar to the quadratic saturation term used by Lorenz to fit error growth in an iterated map .
The predictability time $`T_p`$ is a standard measure of the time beyond which one can no longer project the state of a turbulent system . The exact definition of such a time is somewhat arbitrary, though it is usually much larger than the large scale eddy turnover time for the turbulent flow . We can characterize the long time growth of $`\rho (\tau )`$ by quantifying how long the velocity $`v_2`$ remains similar to $`v_1`$. The predictability time used by Métais & Lesieur was defined by $`\rho (T_p)=0.5`$ ; we define an evolution time $`T_e`$ such that $`\rho (T_e)=0.5`$, and find that $`T_e\approx `$ 25 ms. This is of the same order as our decorrelation time $`\tau _e\approx 40`$ ms given by $`C_{12}^{MAX}`$ (Fig. 12), a result which is not surprising given that the two functions are related. The analogy between the loss of predictability and the failure of Taylor’s hypothesis is in fact rooted in a common cause: the loss of velocity coherence due to turbulence. Whether any implications can be drawn from this connection remains to be seen.
## V Discussion
### A Comparison with 3D Measurements
We have measured the breakdown of Taylor’s hypothesis for decaying turbulence in a flowing soap film and shown that the hypothesis is a valid assumption for statistical measurements of the turbulence (the structure functions). How do our measurements compare to similar experimental studies of 3D decaying turbulence? Of the six studies which to our knowledge provide information comparable to Fig. 12, we will examine three in detail . Two of these studies used hot wire anemometry , and thus additional techniques were required to compensate for the wake of the upstream probe. Champagne et al. used a ‘grid’ made of 12 parallel channels (spacing $`M^{}=`$ 2.54 cm) in a wind tunnel with a mean speed of 12 m/s, and $`Re_M^{}=`$ 21,000. Cross-correlation measurements started at $`x_1=`$ 259 cm, where $`I_t\approx `$ 0.018 and $`\ell _0\approx `$ 4.2 cm. Comte-Bellot & Corrsin made measurements behind a standard grid ($`M=`$ 5.08 cm) in a wind tunnel with $`Re_M=`$ 34,000. The two hot-wire cross correlation measurements were made starting at $`x_1=`$ 210 cm, where $`I_t\approx `$ 0.022 and $`\ell _0\approx `$ 1.1 cm. Cenedese et al. used a nonintrusive laser Doppler anemometer similar to our LDV (see ), but did not use a standard grid; the turbulence was produced by a combination of a honeycomb and the channel walls. Their measurements were made in a water channel (height $`h=`$ 2 cm) starting at $`x_1=`$ 14 cm, where $`I_t\approx `$ 0.13 and $`\ell _0\approx `$ 1.0 cm. Their Reynolds numbers were also significantly lower: $`Re_h=`$ 4,800.
Are there any differences between Taylor’s hypothesis in our approximately two-dimensional soap film and in these 3D experiments? We address this question by plotting $`C_{12}^{MAX}(\mathrm{\Delta }x)`$ from these three studies along with our measurements in Figure 22. The independent variable in this plot is $`\mathrm{\Delta }x`$ in units of the integral scale $`\ell _0`$. One might expect the decorrelation to occur more slowly in the soap film due to the absence of vortex stretching. However, as the turbulent intensity in our experiment is high ($`I_t=`$ 0.14) compared to the two wind tunnel experiments ($`I_t\approx `$ 0.02), our data should be directly compared only to that of Cenedese et al. ($`I_t=`$ 0.13). In this case we see that indeed the correlation in our soap film extends to much larger values of $`\mathrm{\Delta }x/\ell _0`$ than in their 3D experiment. Note that $`C_{12}^{MAX}`$ from the wind tunnel experiments also extends to much larger values of $`\mathrm{\Delta }x/\ell _0`$ than the data of Cenedese et al., probably due to the fact that their turbulent intensities are much lower. More definitive conclusions would come from a single experiment (in 2D or 3D) which measures $`C_{12}^{MAX}(\mathrm{\Delta }x)`$ for several different $`I_t`$.
### B Detailed Shape of the Cross Correlation $`C_{12}(\tau )`$
As of yet there is no rigorous underpinning to Taylor’s hypothesis which would allow for the calculation of higher order corrections to velocity correlations, though an intriguing suggestion was implemented in . To provide detailed information for some future theory, we focus on the shape of the cross-correlation function $`C_{12}(\tau ,\mathrm{\Delta }x)`$ around $`\tau _{MAX}`$. This shape is by definition (Eq. 1) the average convolution of a velocity fluctuation taken with itself a time $`\tau _{MAX}`$ later. In Figure 23 we show as an example $`C_{12}(\tau )`$ for $`\mathrm{\Delta }x=`$ 4 cm, along with a Gaussian distribution centered on $`\tau _{MAX}`$. We find that the shape is always nearly Gaussian, with a slight skewness around $`\tau _{MAX}`$ consistently towards the positive. The width of the Gaussian does not broaden as $`\mathrm{\Delta }x`$ increases, though the maximum does decrease as shown in Figure 24. Thus the development of the cross-correlation cannot be treated as a diffusion-like process, for which the width would increase as the maximum decreases. The small positive skewness is also not strongly dependent on $`\mathrm{\Delta }x`$.
## VI Conclusion
In this paper we focused on the breakdown of Taylor’s coherence hypothesis in a turbulent soap film, a quasi-2D experimental system. We have shown that for the lower order moments the statistical hypothesis works well, even when the actual cross correlation between the two probes is low. As the relevant length scale of this decorrelation is much larger than the integral scale of the turbulence ($`\delta _e\gg \ell _0`$), this phenomenon is outside the region usually considered by most studies: it is the turbulence beyond the scaling range. Yet this evolution contains untapped information, as we have indicated. The failure of Taylor’s hypothesis may thus shed light on deeper problems in turbulence.
## VII Acknowledgements
We would like to thank M. Rivera and X. L. Wu for beneficial and insightful discussions, and H. Kellay, M. A. Rutgers, R. Cressman, and the referees for helpful comments on the manuscript. This work was supported by NASA and the National Science Foundation.
# Is supernovae data in favour of isotropic cosmologies?
## Abstract
Most of the observational claims in cosmology are based on the assumption that the universe is isotropic and homogeneous, so they essentially test different types of Friedmann models. This also refers to recent observations of supernovae Ia, which, within the framework of Friedmann cosmologies, give strong support to negative pressure matter and also ease the age conflict. In this essay we drop the assumption of homogeneity, while temporarily retaining the assumption of isotropy with respect to one point, and show that supernovae data can be consistent with a model of the universe with inhomogeneous pressure known as the Stephani model. Remaining consistent with the supernovae data, we find the age of the universe in this model to be about 3.8 Gyr more than in its Friedmann counterpart.
An essay presented for Gravity Research Foundation Essay Competition ’99.
The standard isotropic cosmological models have intensively been studied as the models of the large-scale structure of the universe. One of the main reasons is their mathematical simplicity expressed in terms of the Cosmological Principle. There is of course some ‘evidence’ for these models to be the right ones from many different astronomical tests and especially from the low-redshift linear Hubble expansion law (e.g. ). However, the situation is not so clear for large-redshift objects, since the generalized Hubble law – the redshift-magnitude relation – becomes nonlinear and the effects of the spatial curvature of the universe are important. In the past the main problem was that the luminosity function of prospective ‘standard candles’ (whose absolute magnitude is presumably known) was poorly known for most of them at redshifts $`z\sim 1`$. This is not the case for supernovae type Ia (SnIa), and these objects have recently been used to determine the curvature and consequently the matter content of the universe . The results of these investigations give strong support to Friedmann models with negative pressure matter such as the cosmological constant, domain walls or cosmic strings . It is a very strong claim, since, despite the very long history of the cosmological constant, and the relatively long history of topological defects , people hardly believed in their large contribution to the total energy density of matter in the universe at the present epoch of the evolution.
In this essay we try to make an alternative proposal for the explanation of supernovae data and suggest an inhomogeneous model of the universe which belongs to the class of models known as the Stephani universes . We basically try to fit this model to SnIa data as given in . Our model is described by the following metric tensor
$$ds^2=-\frac{c^2}{V^2}d\tau ^2+\frac{R^2}{V^2}\left[dr^2+r^2\left(d\theta ^2+\mathrm{sin}^2\theta d\phi ^2\right)\right],$$
(1)
with
$`R(\tau )`$ $`=`$ $`a\tau ^2+\tau ,`$ (2)
$`V(\tau ,r)`$ $`=`$ $`1-{\displaystyle \frac{a}{c^2}}\left(a\tau ^2+\tau \right)r^2`$ (3)
$`k(\tau )`$ $`=`$ $`4{\displaystyle \frac{a^2}{c^2}}R(\tau ),`$ (4)
where $`\tau `$ is the cosmic time coordinate and $`r`$ is the radial coordinate. In Eq. (1) $`R(\tau )`$ is the generalized scale factor and $`k(\tau )`$ is the time-dependent spatial curvature index, so the spatial curvature of the universe may change during the evolution, which is impossible in Friedmann models. The constant $`c`$ is the velocity of light and the parameter $`a`$ is measured in $`\mathrm{km}^2\,\mathrm{s}^{-2}\,\mathrm{Mpc}^{-1}`$. The physical meaning of $`a`$ is that it measures the non-uniformity of the pressure (acceleration) of the model (see ). One is able to obtain the flat Friedmann model from (1) if one takes the limit $`a\to 0`$.
In the model described by the metric (1) the energy density $`\varrho `$ depends only on the cosmic time, just as in Friedmann models, but the pressure, $`p`$, is a function of both the time and radial coordinates . This justifies its name as an ‘inhomogeneous pressure universe’. It is spherically symmetric and it can certainly be used as a first step towards the observational verification of inhomogeneous cosmologies. A general class of Stephani models is really inhomogeneous, which means there are no Killing vectors in spacetime. The assumption of spherical symmetry is of course equivalent to dropping the assumption of the Cosmological Principle, provided we put an observer outside of the center of symmetry. In this essay we do not consider off-centre observers, although the suitable relations are known . Despite that, we can have an important effect on the observational relations at the center, because the light reaching the observer there was emitted from the off-center galaxies.
The spherically symmetric Stephani model is a model of concentric pressure spheres (the pressure varies from sphere to sphere), and it can be contrasted with the Tolman model, which is a model of concentric density spheres (the energy density varies from sphere to sphere). Neither model necessarily has to be used as a model of the global geometry of the universe; both can also be applied to model a local inhomogeneity (or void) in the universe. Kinematically, both models expand; in the Tolman model there is shear, while in the Stephani model there is acceleration. The acceleration is the result of the combined effect of gravitational and inertial forces on the fluid, which cannot be separated, and appears due to the spatial pressure gradient on the concentric spheres – the particles are accelerated in the direction from high-pressure regions to low-pressure regions.
We consider our investigations of Stephani models in the context of supernovae data to be an important step towards understanding the large-scale structure of the universe, because very few, if any, attempts to compare inhomogeneous models of the universe (see ) with astronomical data have been made so far. This was done for the first time, using preliminary SnIa data, in , and in this essay we try to give new insight into the problem using the large sample data given in .
The parameter space of Friedmann models consists of three parameters: the Hubble constant $`H_0`$, the deceleration parameter $`q_0`$ and the density of nonrelativistic matter $`\mathrm{\Omega }_{m0}`$, reducing to just two of them in a flat universe.
The Stephani model under consideration is a simple generalization of a flat Friedmann model, and its parameter space can mimic (as far as the redshift-magnitude relation is concerned) that of the Friedmann model, with the important addition of the effect of a pressure gradient (acceleration) in the universe.
The standard cosmological test – a redshift-magnitude relation – to second order in redshift $`z`$ for Friedmann models reads as (e.g. )
$`m_\mathrm{B}`$ $`=`$ $`M_\mathrm{B}+25+5\mathrm{log}_{10}cz-5\mathrm{log}_{10}H_0`$ (5)
$`+`$ $`1.086\left(1-q_0\right)z+0.2715\left[3(1+q_0)^2-4(1+\mathrm{\Omega }_{m0})\right]z^2+O(z^3),`$
where $`m_B`$ is the apparent bolometric magnitude of a galaxy, $`M_B`$ is its absolute magnitude and $`z`$ is the redshift. For Friedmann cosmologies the following relations between the parameters are fulfilled
$`\mathrm{\Omega }_\mathrm{\Lambda }={\displaystyle \frac{\mathrm{\Lambda }}{3H_0^2}}={\displaystyle \frac{1}{2}}\mathrm{\Omega }_{m0}-q_0,{\displaystyle \frac{k}{H_0^2R_0^2}}={\displaystyle \frac{3}{2}}\mathrm{\Omega }_{m0}-q_0-1,`$ (6)
where $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is the density of the cosmological constant $`\mathrm{\Lambda }`$, $`k`$ the curvature index and $`R_0`$ the present value of the scale factor. The relation (5) was tested by supernovae data and the best fit values of the cosmological parameters in a flat $`(k=0=\mathrm{\Omega }_{m0}+\mathrm{\Omega }_\mathrm{\Lambda }-1)`$ universe were claimed to be
$`q_0`$ $`=`$ $`-0.55,`$ (7)
$`\mathrm{\Omega }_{m0}`$ $`=`$ $`0.3,`$ (8)
$`\mathrm{\Omega }_\mathrm{\Lambda }`$ $`=`$ $`0.7,`$ (9)
for the Hubble constant
$$H_0=63\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1},$$
(10)
giving the best-fit age of the universe
$$t_0=14.9\mathrm{Gyr}.$$
(11)
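For reference, Eq. (5) with these best-fit parameters is straightforward to evaluate; in the sketch below the absolute magnitude $`M_B=-19.5`$ is a representative SnIa value inserted purely for illustration.

```python
import numpy as np

def m_B(z, M_B=-19.5, H0=63.0, q0=-0.55, Omega_m0=0.3, c=2.9979e5):
    """Apparent magnitude from the second-order expansion, Eq. (5);
    H0 in km/s/Mpc and c in km/s, valid only for modest z."""
    return (M_B + 25.0 + 5.0 * np.log10(c * z) - 5.0 * np.log10(H0)
            + 1.086 * (1.0 - q0) * z
            + 0.2715 * (3.0 * (1.0 + q0)**2 - 4.0 * (1.0 + Omega_m0)) * z**2)
```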
The redshift-magnitude relation for Stephani universes has been found in . Two exact cases were presented and the theoretical relations were plotted for a range of different parameter values. The relations were obtained following the method of Kristian & Sachs of expanding all relativistic quantities in power series and truncating at a suitable order, though one can use an exact relation too . An analogous to (5) relation for the Stephani universe (1), to second order in redshift $`z`$, reads as
$`m_\mathrm{B}`$ $`=`$ $`M_\mathrm{B}+25+5\mathrm{log}_{10}cz-5\mathrm{log}_{10}\stackrel{~}{H}_0+1.086\left(1-\stackrel{~}{q}_0\right)z`$ (12)
$`+`$ $`0.2715\left[3(1+\stackrel{~}{q}_0)^2-4(1+\stackrel{~}{\mathrm{\Omega }}_{m0})\right]z^2+O(z^3),`$
where
$`\stackrel{~}{H}_0`$ $`=`$ $`{\displaystyle \frac{2a\tau _0+1}{a\tau _0^2+\tau _0}},`$ (13)
$`\stackrel{~}{q}_0`$ $`=`$ $`-4a{\displaystyle \frac{a\tau _0^2+\tau _0}{(2a\tau _0+1)^2}}.`$ (14)
Equation (12) now takes the same functional form as equation (5), as was similarly pointed out in , with $`\stackrel{~}{H}_0`$ and $`\stackrel{~}{q}_0`$ replacing $`H_0`$ and $`q_0`$. We can think of $`\stackrel{~}{H}_0`$ and $`\stackrel{~}{q}_0`$ as a generalised Hubble parameter and deceleration parameter which are related to the age of the universe in a different way from the Friedmann case. The key question of interest here is therefore whether one can construct generalised parameters, $`\stackrel{~}{H}_0`$ and $`\stackrel{~}{q}_0`$, which are in good agreement with the supernovae data but which correspond to a value of $`\tau _0`$ that exceeds the Friedmann age obtained with $`H_0=\stackrel{~}{H}_0`$ and $`q_0=\stackrel{~}{q}_0`$. More precisely, the two relations (5) and (12) are equal provided the generalized density parameter is $`\stackrel{~}{\mathrm{\Omega }}_{m0}=(1/3)(1+\stackrel{~}{q}_0)`$. Assuming the following values for the replaced parameters
$`\stackrel{~}{H}_0`$ $`=`$ $`63\,\mathrm{km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1},`$ (15)
$`\stackrel{~}{q}_0`$ $`=`$ $`-0.55,`$ (16)
we obtain the age of the universe in the Stephani model (1) to be
$$\tau _0=18.67\mathrm{Gyr},$$
(17)
and $`\stackrel{~}{\mathrm{\Omega }}_{m0}=0.15`$. Then we have agreement with the supernovae data, provided the non-uniform pressure parameter is equal to
$$a=12.3\,\mathrm{km}^2\,\mathrm{s}^{-2}\,\mathrm{Mpc}^{-1},$$
(18)
which translates into the following value of the acceleration scalar:
$$\dot{u}=2\frac{a}{c^2}r=2.73\times 10^{-10}\,r\,\mathrm{Mpc}^{-1},$$
(19)
with $`r`$ being the radial coordinate of the model. Since the non-uniform pressure parameter $`a`$ is positive, then the high pressure region is at $`r=0`$, while the low (negative) pressure regions are outside the center, so the particles are accelerated away from the center. This is similar effect as that caused by the positive cosmological constant $`\mathrm{\Lambda }>0`$ in Friedmann models, although the physical mechanism is somewhat different. However, in both cases it is worth to appeal to ideas from the theory of elementary particles and especially to the notion of the energy of the vacuum. While in Friedmann models vacuum gives constant pressure on every spatial section of constant time, in Stephani models it gives the pressure which depends on spatial coordinates.
Finally, we emphasize that the result obtained here is based only on the studies of the redshift-magnitude relation. The value of the non-uniform pressure parameter in (18) should also be tested by the level of the microwave background anisotropies and other astronomical data.
Spherically symmetric models in which cosmic acceleration is also explained by inhomogeneity in pressure (though, unlike the models considered here, admitting shear in addition to acceleration) have been considered recently by Pascual-Sánchez . They form a larger class than Stephani models.
Acknowledgments
I acknowledge useful discussions with Richard Barrett, Chris Clarkson and Martin Hendry.
# Long-range interacting solitons: pattern formation and nonextensive thermostatistics
## I Introduction
There is a great interest in the formulation of models in which the solitons can interact with long-range forces. The soliton solutions of the well-known models (e.g., sine-Gordon and $`\varphi ^4`$ equations) interact with short-range forces. There is experimental evidence that most real transfer mechanisms have long-range character. We are interested in models where there is spontaneous formation of particle-like objects that possess long-range interactions. In such systems we can study pattern formation and other complex phenomena. Due to the fact that systems with long-range interactions can exhibit nonextensive behavior, the models we are investigating are relevant for the recently proposed thermostatistical theories.
Some authors have considered long-range effects by including ad-hoc nonlocal terms in the equations. Spin systems have also been studied considering the coupling constant $`J_{ij}`$ between the lattice spins to be proportional to $`r_{ij}^{-\alpha }`$. González and Estrada-Sarlabous demonstrated that pure Klein-Gordon equations without coordinate-dependent terms can support solitons with long-range interactions.
In this paper we investigate a system that is an extension of the Klein-Gordon equation and possesses soliton solutions with long-range interaction. We also introduce a generalized version of the Ginzburg-Landau equation which supports topological defects whose interaction force decays very slowly. It is possible to create a gas of such topological defects with an interaction force that decays so slowly that we enter the nonextensivity regime. We apply these results to nonequilibrium systems, pattern formation and growth models.
## II Modified Klein-Gordon equation
In this section we will study the Klein-Gordon equation
$$\varphi _{tt}-\varphi _{xx}-G(\varphi )=0.$$
(1)
Here $`G(\varphi )=-\frac{\partial U(\varphi )}{\partial \varphi }`$. In Ref. the authors investigated systems of type (1) where $`U(\varphi )`$ possesses at least two minima (at points $`\varphi _1`$ and $`\varphi _3`$) and a maximum at point $`\varphi _2`$ ($`\varphi _1<\varphi _2<\varphi _3`$). In particular, it was shown that if the potential $`U(\varphi )`$ behaves in the neighborhood of a minimum as $`U(\varphi )\sim \left(\varphi -\varphi _i\right)^{2n}`$, then the solitons supported by the system interact with a force $`F(d)`$ that decays exponentially with the distance for $`n=1`$. On the other hand, for $`n>1`$ the solitons interact with a force that decays following the law $`F\sim d^{\frac{2n}{1-n}}`$.
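The difference between the two asymptotic regimes can be made concrete numerically. The sketch below is ours; the unit amplitude and soliton width are arbitrary normalisations chosen only for illustration:

```python
import numpy as np

def force_short_range(d, width=1.0):
    # n = 1: inter-soliton force decays exponentially (arbitrary unit amplitude)
    return np.exp(-d / width)

def force_long_range(d, n):
    # n > 1: power-law decay, F ~ d^(2n/(1-n)); the exponent is negative
    return d ** (2.0 * n / (1.0 - n))

for d in (2.0, 5.0, 10.0, 50.0):
    print(f"d={d:5.1f}  exp: {force_short_range(d):.2e}  "
          f"n=2: {force_long_range(d, 2):.2e}  n=3: {force_long_range(d, 3):.2e}")
```

Note that the power-law exponent is $`-4`$ for $`n=2`$ and $`-3`$ for $`n=3`$, so the decay becomes slower as $`n`$ grows.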
### A Pattern formation
The most investigated growth model is the KPZ equation. Nevertheless other models have been proposed including the sine-Gordon model.
In this section we present an alternative model which is given by the equation
$$\varphi _{tt}+\gamma \varphi _t-\varphi _{xx}-G(\varphi )=\eta (x,t),$$
(2)
where $`\eta (x,t)`$ is spatiotemporal white noise with the properties $`\langle \eta (x,t)\rangle =0`$, $`\langle \eta (x,t)\eta (x^{\prime},t^{\prime})\rangle =2D\delta (t-t^{\prime})\delta (x-x^{\prime})`$. Here the potential is $`U(\varphi )=2\mathrm{sin}^{2n}\left(\frac{\varphi }{2}\right)`$. It can be shown that this potential has the property $`U(\varphi )\sim \left(\varphi -\varphi _i\right)^{2n}`$. This equation with $`n>1`$ can be used as a growth model for periodic media with marginal stability.
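For readers wishing to experiment, a minimal explicit integration of the driven model (2) might look as follows. The scheme is the simplest semi-implicit Euler stepping with periodic boundaries, and all parameter values ($`n`$, $`\gamma `$, $`D`$, grid sizes) are our illustrative choices rather than anything used to produce the results discussed here. Note that with $`U(\varphi )=2\mathrm{sin}^{2n}(\varphi /2)`$ and $`G(\varphi )=-\partial U/\partial \varphi `$, one has $`\partial U/\partial \varphi =2n\mathrm{sin}^{2n-1}(\varphi /2)\mathrm{cos}(\varphi /2)`$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not from the paper): n=2 potential, weak damping and noise.
n, gamma, D = 2, 0.1, 0.01
L, N, dt, t_max = 100.0, 1000, 0.01, 50.0
dx = L / N

def dU(phi):
    # U(phi) = 2 sin^{2n}(phi/2)  =>  U'(phi) = 2n sin^{2n-1}(phi/2) cos(phi/2)
    return 2 * n * np.sin(phi / 2) ** (2 * n - 1) * np.cos(phi / 2)

phi = rng.normal(0.0, 0.1, N)   # small random initial data
vel = np.zeros(N)

for _ in range(int(t_max / dt)):
    lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2  # periodic Laplacian
    # discretized white noise with <eta eta'> = 2 D delta(t-t') delta(x-x')
    eta = rng.normal(0.0, np.sqrt(2 * D / (dt * dx)), N)
    vel += dt * (lap - dU(phi) - gamma * vel + eta)
    phi += dt * vel

print("final field range:", phi.min(), phi.max())
```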
The growth model described by equation (2) presents noise-induced pattern formation that can be studied through the calculation of the roughness exponent. The sine-Gordon equation ($`n=1`$) does not eliminate disorder at large scales. Meanwhile, the systems with $`n>1`$ display self-affine behavior at all scales. The long-range interaction between the solitons is the most relevant feature of this system. This can also explain the wavelet analysis performed in Ref. . There, for $`n>1`$, the authors found the existence of structures at all scales.
Summarizing, in this system there is pattern formation produced by the spontaneous generation of topological objects with long-range interactions which can create complex structures exhibiting fractal behavior.
## III A generalized Ginzburg-Landau equation
In this section we introduce a generalized version of the Ginzburg-Landau equation:
$$\frac{\partial u}{\partial t}=\nabla ^2u+u(1-\left|u\right|^2)^{2n-1}.$$
(3)
Note that for $`n=1`$ we recover the well-known Ginzburg-Landau (G-L) equation. Equation (3) preserves all the qualitative properties of the G-L equation even for $`n>1`$. There exists an unstable state at $`u=0`$ and a degenerate stable state at $`\left|u\right|=1`$. For all integer $`n`$, Equation (3) possesses topological solitons. However, for $`D=1`$ ($`n=1`$) the G-L solitons interact with a force that decays exponentially. On the other hand, the soliton solutions of Equation (3) with $`n>1`$ interact with long-range forces, as in the modified sine-Gordon equation.
Nevertheless, here we will be interested in vortex-like topological defects in $`D=2`$. A vortex-like topological defect with topological charge $`\kappa `$ can be expressed in polar coordinates ($`r`$, $`\phi `$) by the following equation $`u=\rho (r)e^{i\kappa \phi }`$, where $`\rho (r)`$ is a solution of the equation
$$\frac{\partial ^2\rho }{\partial r^2}+\frac{1}{r}\frac{\partial \rho }{\partial r}-\frac{\kappa ^2\rho }{r^2}+\rho \left(1-\rho ^2\right)^{2n-1}=0.$$
(4)
The analysis of the asymptotic behavior of the vortex solution of Equation (4) for $`n=1`$ yields that in the limit $`r\to \infty `$, $`\left(\rho (r)-1\right)\sim r^{-2}`$. If $`n>1`$, then $`\left(\rho (r)-1\right)\sim r^{\frac{1}{1-n}}`$. Note that the transition from $`n=1`$ to $`n>1`$ is not continuous because for the long-range potentials that we are considering $`n`$ is usually taken as an integer number. In principle, $`n`$ can be generalized to be a real number. In this case, we define $`n=1+\epsilon `$. Hence for $`\epsilon <\frac{1}{2}`$ we have $`\left(\rho (r)-1\right)\sim r^{-2}`$. On the other hand, for $`\epsilon >\frac{1}{2}`$, $`\left(\rho (r)-1\right)\sim r^{-1/\epsilon }`$. As we can see, the solutions for $`\epsilon <\frac{1}{2}`$ and $`\epsilon >\frac{1}{2}`$ match if $`\epsilon `$ is real.
In the case of the generalized G-L equation (3), the force that acts on a vortex situated at the point $`r`$ due to the existence of another vortex at the origin of coordinates satisfies the relation $`F\sim \left(\rho (r)-1\right)`$. Thus the vortices produced by equation (3) with $`n=1`$ have Coulomb interactions. Meanwhile, for $`n>1`$ the interaction decays much more slowly.
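Equation (4) has no closed-form solution in general, but the vortex profile can be obtained by shooting on the amplitude of the small-$`r`$ behavior $`\rho \sim cr^{\kappa }`$ while demanding $`\rho \to 1`$ as $`r\to \infty `$. The sketch below is ours; $`\kappa =1`$, $`n=2`$, the integration range and the deliberately crude bisection criterion are all illustrative choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

kappa, n = 1, 2          # illustrative vortex charge and nonlinearity
r0, r_max = 1e-3, 40.0

def rhs(r, y):
    rho, drho = y        # radial profile and its derivative, from eq. (4)
    return [drho, -drho / r + kappa**2 * rho / r**2
                  - rho * (1 - rho**2) ** (2 * n - 1)]

def profile(c):
    # start from the small-r behaviour rho ~ c r^kappa
    y0 = [c * r0**kappa, c * kappa * r0 ** (kappa - 1)]
    return solve_ivp(rhs, (r0, r_max), y0, max_step=0.05)

# Bisection on the shooting parameter c: too large -> rho overshoots 1,
# too small -> rho turns over and never reaches 1.
lo, hi = 0.0, 2.0
for _ in range(40):
    c = 0.5 * (lo + hi)
    if np.any(profile(c).y[0] > 1.0):
        hi = c
    else:
        lo = c
sol = profile(0.5 * (lo + hi))
print("rho(r_max) =", sol.y[0, -1])   # should approach 1 from below
```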
## IV Nonextensivity
In principle, it is possible to construct a system described by equations of the type
$$\frac{\partial ^2\varphi _1}{\partial t^2}+\gamma \frac{\partial \varphi _1}{\partial t}-\nabla ^2\varphi _1=-\frac{\partial V(\left|\varphi _1\right|,\left|\varphi _2\right|)}{\partial \varphi _1},$$
(5)
$$\frac{\partial ^2\varphi _2}{\partial t^2}+\gamma \frac{\partial \varphi _2}{\partial t}-\nabla ^2\varphi _2=-\frac{\partial V(\left|\varphi _1\right|,\left|\varphi _2\right|)}{\partial \varphi _2},$$
(6)
where the potential $`V(\left|\varphi _1\right|,\left|\varphi _2\right|)`$ satisfies the conditions necessary to produce long-range interactions. When we are in the presence of systems of equations like (5)-(6) with two order parameters, we can have the situation where the sustained topological defects repel each other at very small distances and attract each other at large distances. We can have an effective interaction potential like the following:
$$V_{eff}(r)=\epsilon \left[\left(\frac{\sigma }{1+r^2}\right)^{\rho /2}-\left(\frac{\sigma }{1+r^2}\right)^{\alpha /2}\right],$$
(7)
where $`\alpha <\rho `$. This is a situation equivalent to that discussed in Ref. . Thus when we have $`N`$ particles in the system, the energy will grow with $`N`$ following the laws:
$`E\sim \{\begin{array}{cc}N\hfill & \text{if }\frac{\alpha }{D}>1\text{,}\hfill \\ N\mathrm{ln}N\hfill & \text{if }\frac{\alpha }{D}=1\text{,}\hfill \\ N^{2-\frac{\alpha }{D}}\hfill & \text{if }\frac{\alpha }{D}<1\text{.}\hfill \end{array}`$ (8)
In our case $`\alpha =\frac{2n}{n-1}`$. When $`n>1`$ (for $`n`$ integer and $`D=2`$) we are in the nonextensive regime. In the general case, for $`n=1+\epsilon `$ (real) we obtain the nonextensivity condition $`\frac{1-\epsilon }{\epsilon }<D`$.
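The scaling laws (8) are easy to verify by direct summation. The sketch below (ours) places $`N`$ particles on a unit-spacing square lattice ($`D=2`$) and sums the pairwise tail $`r^{-\alpha }`$; for $`\alpha /D>1`$ the energy per particle saturates, while for $`\alpha /D<1`$ it keeps growing roughly as $`N^{1-\alpha /D}`$:

```python
import numpy as np

def energy_per_particle(m, alpha):
    # N = m*m particles on a unit-spacing square lattice (D = 2);
    # sum of the pairwise tail r^(-alpha), divided by N
    xs, ys = np.meshgrid(np.arange(m), np.arange(m))
    pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    iu = np.triu_indices(len(pts), k=1)
    return np.sum(r[iu] ** (-alpha)) / len(pts)

for alpha in (3.0, 1.0):   # alpha/D > 1 (extensive) vs alpha/D < 1 (nonextensive)
    print(f"alpha = {alpha}:",
          [round(energy_per_particle(m, alpha), 2) for m in (10, 20, 40)])
# The alpha = 3 column saturates with N, while the alpha = 1 column keeps
# growing, which is the hallmark of nonextensivity in the last line of (8).
```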
This work has been partially supported by Consejo Nacional de Investigaciones Científicas y Tecnológicas (CONICIT) under Project S1-2708.
# Millimetric Astronomy from the High Antarctic Plateau: site testing at Dome C
## 1 Introduction
The high Antarctic Plateau is considered the best site on Earth for astrophysical observations in the millimetric and sub-millimetric wavelength ranges (Burton et al. 1994). In the last few years many astrophysical experiments have been deployed on this continent (Ruhl et al. 1995; Viper Home Page; Balm 1996; Storey, Ashley & Burton 1996; Valenziano et al. 1998) and large telescope projects are presently being developed (Stark 1998). The main astrophysical observing site, already operational, is the U.S. Amundsen-Scott station at the South Pole (SP hereafter), where the CARA (Center for Astrophysical Research in Antarctica) installed quite a large laboratory. Data on the observing conditions at SP have already been reported in the literature (Dragovan et al. 1990; Chamberlin & Bally 1994; Chamberlin 1995; Chamberlin, Lane & Stark 1997; Lane 1998), while sophisticated tests are being performed by Australian researchers, using an automated instrument (Storey, Ashley & Burton 1996).
Italy has been deeply involved in astrophysical activities in Antarctica since 1987. The Italian Antarctic Program (PNRA - Programma Nazionale di Ricerche in Antartide) established an astrophysical observatory (OASI, Osservatorio Antartico Submillimetrico e Infrarosso, Dall’Oglio et al., 1992) at the Italian station Baia Terra Nova (74° 41’ 36” S, 164° 05’ 58” E). A 2.6 meter sub-millimetric telescope is operating there during the southern summer season. Many facilities are available, including Nitrogen and Helium liquefiers, mechanical workshops, electronic and cryogenic laboratories. The site quality is comparable to mid-latitude mountain observatories for millimetric (Dall’Oglio et al. 1988) and mid-infrared observations (Valenziano 1996).
In 1994, Italy and France started a program for building a permanent scientific station (Concordia) on the high antarctic plateau, at Dome C, hereafter DC (75° 06’ 25” S, 123° 20’ 44” E). Domes are regions more elevated than the rest of the continent (DC is at 3280 m), barring the Trans Antarctic Mountains. The highest is Dome A (4100 m), potentially the best observing site on the planet, but it is very difficult to reach.
In December 1995, a test experiment was run to directly compare the short term atmospheric stability (the so-called sky noise) between DC and the Italian station on the coast. Raw data show a reduction in rms noise of factors of 3 and 10 at $`\lambda `$=2 mm and $`\lambda `$=1.2 mm respectively (Dall’Oglio 1997). In December 1996 the APACHE96 (Antarctic Plateau Anisotropy CHasing Experiment) (Valenziano et al., 1997,1998) was set up at DC. Preliminary data analysis shows good atmospheric stability in terms of sky-noise at mm wavelengths. Atmospheric PWV content was measured during the latter mission.
The AASTO experiment, presently running at the SP, is planned to be moved to DC in the next few years. It will allow a careful assessment of the observational quality at DC over a wide wavelength range. However, some interesting information can be obtained from meteorological data, taking advantage of the good statistics based on data collected over eight years.
The AWS project, started in the early 80s, installs automatic units in remote areas of the Antarctic continent. The main objective of this program is to support meteorological research by unattended, low cost, data collecting stations. AWS units operate continuously throughout the year. AWS data for all the stations are publicly available at the University of Wisconsin - Madison. One AWS unit is located a few kilometers from the Concordia Station site. Therefore, it shares the same atmospheric conditions as the planned astrophysics observatory. Another unit is at the geographical SP (Clean Air, elevation 2836 m). Therefore it is possible to compare the two sites on the basis of homogeneous data.
The main goal of this paper is to present data on DC conditions and to compare them with those of well established observing sites. They are useful for planning experiments and for stimulating a deeper exploration of this interesting and promising observing site.
## 2 Automatic Weather Stations
AWS were developed by the Radio Science Laboratory at Stanford University. The basic AWS units measure air temperature, wind speed and wind direction at a nominal height of three meters above the surface, and air pressure at the electronic enclosure at about 1.75 meters above the surface. The height is only nominal, due to possible snow accumulation. Some AWS units can also measure humidity and vertical air temperature difference, but these sensors are not available at either DC or SP stations. Data are transmitted to a NOAA (National Oceanic and Atmospheric Administration) satellite and then stored at the University of Wisconsin (Keller et al. 1997). More details on the AWS program are available at http://uwamrc.ssec.wisc.edu/aws/.
## 3 AWS data analysis
AWS data used in this work are already binned in 3 hour intervals. This is useful in order to evaluate the stability of the observing conditions over a reasonably short interval. Some data can be missing, due to instrumental or transmission failures. Data were further averaged in one month intervals. The typical uncertainty in the monthly averages is 15% (standard deviation) for temperature, 1-6% for pressure and 50-75% for wind speed. Plots of these data are shown in Figures 1, 3 and 5. Statistical distributions of the whole data set have also been calculated. Data have been binned in 1 °C intervals for temperature, 1 hPa for pressure, 1 m/s for wind speed and 10° for wind azimuth. Histograms for temperature, pressure, wind azimuth and wind speed are presented in Figures 3, 5, 7 and 7. Monthly plots of the whole data set and data distributions, along with a table with median values and mean absolute deviation, are reported elsewhere (Valenziano 1997).
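The averaging just described is elementary to reproduce. The sketch below (ours) applies it to synthetic 3-hourly records; the column name and the synthetic statistics are placeholders of our own, not the AWS archive format:

```python
import numpy as np
import pandas as pd

# Synthetic 3-hourly temperature record, one year long (illustrative only).
idx = pd.date_range("1995-01-01", "1995-12-31 21:00", freq="3h")
rng = np.random.default_rng(2)
df = pd.DataFrame({"temperature_C": -53 + 12 * rng.standard_normal(len(idx))},
                  index=idx)

# Monthly averages with their standard deviations, as used for Figures 1, 3, 5
monthly = df["temperature_C"].resample("MS").agg(["mean", "std", "count"])
print(monthly.head(3))

# 1-degree histogram of the full data set, as used for the distributions
counts, edges = np.histogram(df["temperature_C"], bins=np.arange(-90.0, -10.0, 1.0))
```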
## 4 Precipitable Water Vapor measurements
To the best of our knowledge, the first published measurement of the DC PWV content of the atmosphere was performed by the authors in January 1997 (Valenziano et al. 1998). The instrument used was a portable photometer (Volz 1974), with an accuracy of 20 %. The limited sensitivity of the instrument allowed only upper limits to be set in some cases. Data are presented in Figure 8. The average PWV at DC is around 0.6 mm.
A comparison between DC, SP, Atacama and Mauna Kea sites is reported in Table 1. Data for DC were measured by the authors (in one month), while quartile data for the other sites are from Lane (1998). Results for the Vostok station (elevation 3488 m) (Townes & Melnick 1990) in summer do not differ significantly from those reported here, but values of less than 0.1 mm were measured during winter. We have also calculated the 225 GHz and 492 GHz opacities from the 50th percentile PWV data, using a model evaluated from the SP data (Lane 1998). We used the following relations:
$`\tau _0(225GHz)=0.030+0.069PWV(mm)`$ (1)
$`\tau _0(492GHz)=0.33+1.49PWV(mm)`$ (2)
and we evaluated the corresponding transmissions at the zenith as $`T=\mathrm{exp}(-\tau _0)`$. These values are reported in Table 1, along with available data for other sites (from Lane, 1998).
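The model (1)-(2) is trivial to evaluate; the short sketch below (ours) applies it to the average DC summer value measured above, PWV $`\simeq 0.6`$ mm:

```python
import math

def zenith_transmission(pwv_mm):
    """Zenith transmissions T = exp(-tau0) from the opacity model (1)-(2)."""
    tau225 = 0.030 + 0.069 * pwv_mm
    tau492 = 0.33 + 1.49 * pwv_mm
    return math.exp(-tau225), math.exp(-tau492)

t225, t492 = zenith_transmission(0.6)   # average DC summer PWV measured above
print(f"225 GHz: T = {t225:.2f};  492 GHz: T = {t492:.2f}")   # ~0.93 and ~0.29
```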
## 5 Discussion
The analysis of the AWS data set and the PWV measurements shows the following main results:
* DC and SP average temperatures are comparable, ranging between typical values of -65 °C in winter and -26 °C in summer. The median value for both sites is -53 °C, while the correlation of monthly average temperatures between DC and SP is 96%.
* The pressure is always lower at DC than at the SP. Median values are 644 hPa and 682 hPa respectively. Monthly averaged pressure data show a correlation of 92 % between the two sites.
* The wind speed is very low at both sites, with a maximum speed of 15.9 m/s at DC and 18.9 m/s at SP.
* A Kolmogorov-Smirnov test shows that wind speed distributions for the two sites are different. Correlation between monthly averaged wind speed data is less than 30 %. 50th percentile values for wind speed, evaluated for the whole data set, are 1 m/s at DC and 2 m/s at SP.
* Wind azimuth distributions are different: the prevalent direction is approximately azimuth 180 at DC and between azimuth 0 and 90 at SP.
* PWV values measured at DC in January 1997 are comparable with SP values in the same season and lower than those measured at other sites.
In Table 1 and Table 2 our results for Antarctica (over eight years) are compared with a well-established observing site, Mauna Kea (Hawaii Islands) and a future important site in the Atacama desert (Northern Chile). Data for these latter sites are reported from Holdaway (1996).
Some conclusions can be derived from these results:
* Lower wind regimes at the Antarctic sites result in lower turbulence, implying a smaller contamination of observations. It is worth considering that most of the sky-noise at infrared and millimetric wavelengths is induced by convective motion and wind-driven turbulence in the lowest layers of the atmosphere, where the bulk of the water vapor is found (Smoot et al. 1987; Ade et al. 1984; Andreani et al. 1990). While the former needs further investigation (Argentini 1998, Burton 1995), the latter is minimal on the Antarctic Plateau.
* Lower average wind speed reduces pointing errors for large antennas (see Holdaway, 1996).
* In terms of wind speed, DC shows better conditions with respect to SP.
* The high Antarctic Plateau is the driest observing site on Earth.
* DC and SP pressure and temperature conditions are strongly correlated, indicating that they share similar meteorological conditions. It is possible to infer that PWV values at DC during wintertime are similar to SP ones in the same season.
* 225 GHz and 492 GHz opacities for DC, calculated using a model valid for SP, show results similar to this site and better than Mauna Kea, when comparable water vapor amounts are considered.
## 6 Conclusions
An analysis of available site quality data for DC has been accomplished, including PWV measurements made by the authors. A comparison with SP and other sites has been performed.
AWS data show that the quality of DC and SP is comparable, at least for meteorological conditions. DC seems to be a more suitable site in terms of wind speed (which is related to atmospheric turbulence).
The PWV content, measured at DC during the 1996-97 antarctic summer, is comparable to that at SP during summertime. The high Antarctic Plateau is shown to be the driest site compared to other observing sites. Further measurements at DC, with improved sensitivity and automatic operation of the instrument, are mandatory in order to explore conditions also during wintertime.
In conclusion, the overall exceptional quality of the high Antarctic Plateau at millimetric wavelengths, widely discussed in the literature (Dragovan et al. 1990; Chamberlin & Bally 1994; Burton et al. 1994; Chamberlin 1995; Chamberlin, Lane & Stark 1997; Lane 1998, Stark 1998), is confirmed by our work. DC conditions are at least comparable and probably better than those at the SP. However, further and more sophisticated site testing experiments are required to completely assess the DC observing quality.
## Acknowledgments
Data were provided by the Automatic Weather Station Project, run by Dr. Charles R. Stearns at the University of Wisconsin - Madison, which is funded by the National Science Foundation of the United States of America. Data are available by anonymous ftp at ice.ssec.wisc.edu. This work is partly supported by PNRA. We wish to thank G. Pizzichini, E. Pian and J. Stephen for the careful revision of the text.
## References
Ade, P.A.R. 1984, Infr. Phys., 24, 403
Andreani, P., et al. 1990, Infr. Phys., 30, 479
Argentini S. 1998 in Atti della Conferenza Nazionale sull’Antartide, (Roma: PNRA)
Balm, S.P. 1996 PASA, 13, 1, 14
Burton, M. et al. 1994 PASA, 11, 127
Burton, M.G. 1995, in ASP Conf. Ser. 73, Airborne Astronomy Symp. on the Galactic Ecosystem: From Gas to Stars to Dust, eds. M. R. Haas, J. A. Davidson & E. F. Erickson, (San Francisco: ASP), 559-562.
Chamberlin, R.A. 1995 Int. J. Infr. Millim. Waves, 16, 907
Chamberlin, R.A.& Bally, J. 1994 Appl. Opt, 33, 1095
Chamberlin, R.A., Lane, A.P. & Stark, A.A. 1997 Ap.J., 476, 428
Dall’Oglio, G. 1997 Communication at the Concordia Project Meeting, Siena (Italy), June 3-6
Dall’Oglio, G., et al. 1988 Infr. Phys., 28, 155
Dall’Oglio, G., et al. 1992 Exp. Astr., 2, 275
Dragovan, M., Stark, A. A., Pernic, R. & Pomerantz, M. A. 1990, Appl. Opt., 29, 463
Gundersen, J.O., et al. 1995 Ap.J., 443, L57
Holdaway, M.A., et al., 1996, MMA Memo 159
Keller, L.M., et al. 1997 Antarctic AWS data for Calendar Year 1995, Space Science and Engineering Center, Univ. of Wisconsin, Madison USA
Lane, A.P. 1998, in ASP Conf. Ser. 141, Astrophysics from Antarctica, ed. R. Landsberg & G. Novak (San Francisco: ASP), 289
Ruhl, J.R., et al. 1995 Ap.J., 451, L1
Smoot, G.F. et al. 1987, Radio Sci., 22, 521-528
Stark, A. 1998, in ASP Conf. Ser. 141, Astrophysics from Antarctica, ed. R. Landsberg & G. Novak (San Francisco: ASP), 349
Storey, J.W.V., Ashley, M.C.B., Burton, M.G. 1996 PASA, 13, 35
Townes C.H. & Melnick, G. 1990 PASP, 102, 357
Valenziano, L. 1996 Ph.D. thesis, Università di Perugia
Valenziano, L. 1997 Te.S.R.E. Rep. 192/97, available at
http://tonno.tesre.bo.cnr.it/~valenzia/APACHE/apache.htm
Valenziano, L., et al. 1997 Proc. PPEUC, available at
http://www.mrao.cam.ac.uk/ppeuc/astronomy/papers/valenziano/valenziano.html
Valenziano, L., et al. 1998, in ASP Conf. Ser. 141, Astrophysics from Antarctica, ed. R. Landsberg & G. Novak (San Francisco: ASP), 81
Viper Home Page, http://cmbr.phys.cmu.edu/vip.html
Volz, F. 1974 Appl. Opt., 13, 1732
# Spectral Transition and Torque Reversal in X-ray Pulsar 4U 1626-67
## 1 Introduction
The continuous monitoring of the accretion-powered X-ray pulsar 4U 1626-67 by BATSE (e.g. Chakrabarty et al. 1997a) has revealed that the pulsar system has recently undergone a puzzling, abrupt torque-reversal from spin-up to spin-down. Similar phenomena have also been detected in other pulsar systems (e.g. Bildsten et al. 1997). The X-ray luminosity in the 1-20 keV band changes little around the torque reversal (Vaughan & Kitamoto 1997), which indicates that the reversal event is not likely to be caused by a large, discontinuous change of mass accretion rate (Yi et al. 1997, Yi & Wheeler 1998). This phenomenon is difficult to explain within the context of a Ghosh-Lamb magnetized disk model (Ghosh & Lamb 1979) in which the torque and mass accretion rate would be expected to change together.
Nelson et al. (1997) proposed that it is due to a sudden change in the sense of accretion disk rotation with a nearly constant mass accretion rate. This possibility seems suspect given that the accretion flow is apparently stable up until the reversal event and then has to suddenly reverse its rotational direction. The formation of a retrograde disk is more likely in wind-fed systems in which small fluctuations due to fluctuating angular momentum are apparent observationally (e.g. Nagase 1989). Van Kerkwijk et al. (1998) have suggested an intriguing possibility in which the inner disk is warped by an irradiation instability to such an extent that the tilt angle becomes larger than 90 degrees and the accretion flow’s rotation becomes retrograde. In this explanation, there exist several outstanding issues such as (i) the uncertain flip time scale, (ii) the return mechanism from the high tilt back to low tilt angle, (iii) the X-ray irradiation efficiency in strongly magnetized accretion (e.g. Yi & Vishniac 1994), and (iv) the severe X-ray obscuration by the tilted disk material (van Kerkwijk et al. 1998).
The observed X-ray luminosity appears to decrease slightly from spin-up to spin-down. In the 1-20 keV GINGA band, the X-ray luminosity decreases by about $`20`$%. This flux decrease seems to be more significant at lower energies within this band (Vaughan & Kitamoto 1997). During spin-up, the phase-averaged spectrum can be modeled by a blackbody with a temperature $`\sim 0.6`$ keV (Angelini et al. 1995) together with a power-law component with a spectral index $`\sim 1`$ (Kii et al. 1986). Vaughan and Kitamoto (1997) discuss simple power-law fits to the time-averaged GINGA ASM count data. The strong energy dependence of the pulse profile indicates anisotropic radiative transfer within a magnetically channeled accretion column with a strong magnetic field $`>6\times 10^{12}`$ G (e.g. Kii et al. 1986). Recently, BeppoSAX (0.1-10 keV) detected X-ray emission during the spin-down phase (Orlandini et al. 1997, Owens et al. 1997). During this phase the spectrum is well fit by an absorbed power-law of index $`\sim 0.6`$ and a blackbody of temperature $`\sim 0.3`$ keV. The absorption column depth is $`\sim 10^{21}\mathrm{cm}^{-2}`$ (Owens et al. 1997). Vaughan & Kitamoto (1997) suggest much harder spectra during spin-down with a power law index $`\sim 0.41`$, although $`0.6`$ is within their allowed spectral index range. GINGA spin-up spectra are described by a power-law index of $`\sim 1.48`$, which is somewhat steeper than the index seen in the mid- to late 1980s. This could well be the result of the wider energy band. A self-consistent explanation for the observed spectral transition, compatible with the torque reversal, is required. Absorption has been suggested as a possible explanation. The required absorption column density is however as high as $`N_H\sim 10^{23}-10^{24}\mathrm{cm}^{-2}`$ (Vaughan & Kitamoto 1997), which is inconsistent with the relatively low accretion rate. On the other hand, in the model of van Kerkwijk et al. (1998), as the disk tilt angle increases, absorption by the disk is expected to be too severe to account for the observed X-ray spectra.
In this paper, we propose that the spectral change is directly related to the formation of a geometrically thick hot accretion flow during the spin-down, which Compton up-scatters the soft photons emitted near the stellar surface (Yi et al. 1997). Such a hot flow is available only during spin-down, where the spin-down itself is the result of the sub-Keplerian rotation of the hot accretion flow (e.g. Narayan & Yi 1995).
## 2 Torque Reversal, Mass Accretion Rate, and X-ray Luminosity
In a conventional disk-magnetosphere interaction model with a Keplerian accretion flow with the mass accretion rate $`\dot{M}`$, the torque exerted on the pulsar is
$$N=(7N_0/6)\left[1-(8/7)(R_0/R_c)^{3/2}\right]\left[1-(R_0/R_c)^{3/2}\right]^{-1}$$
(2-1)
where $`R_c=(GM_*P_*^2/4\pi ^2)^{1/3}`$ is the Keplerian corotation radius for a pulsar of mass $`M_*=1.4M_{\mathrm{\odot }}`$ and spin period $`P_*`$, and $`N_0=\dot{M}(GM_*R_0)^{1/2}`$ is the material torque at the disk disruption radius $`R_0`$, which is determined by
$$(R_0/R_c)^{7/2}\left[1-(R_0/R_c)^{3/2}\right]^{-1}=2(2\pi )^{7/3}b_pB_*^2R_*^5/(GM_*)^{2/3}P_*^{7/3}L_x=\delta .$$
(2-2)
$`b_p`$ is a parameter of order unity which sets the pitch of the magnetic field (Wang 1995, Yi et al. 1997, and references therein). $`B_*`$ is the polar surface magnetic field strength of the pulsar and $`L_x\simeq GM_*\dot{M}/R_*`$ is the X-ray luminosity from the surface of the accreting pulsar with radius $`R_*`$ and accretion rate $`\dot{M}`$. In this picture, as $`\dot{M}`$ varies, the pulsar spin period changes according to $`\dot{P}_*/P_*^2=-N/2\pi I_*`$ where $`I_*=10^{45}\mathrm{g}\mathrm{cm}^2`$ is the neutron star moment of inertia. If the $`\dot{M}`$ variation is smooth, the corresponding torque change is also smooth (e.g. Yi et al. 1997). The torque decreases in response to the decreasing $`\dot{M}`$. In this case, the torque should show a wide range of positive and negative values and pass through $`N=0`$. This is apparently not observed in 4U 1626-67 and other similar systems (Chakrabarty 1997a, Nelson et al. 1997). The observed spin-down to spin-up transition is too abrupt to be reproduced by a gradual $`\dot{M}`$ change in the Keplerian models. The observed event could be reproduced only if the mass accretion rate jumps by a factor $`\sim 4`$ almost discontinuously (Yi et al. 1997), which is highly unlikely given the fact that the observed luminosity decreases only by about $`20`$% (Vaughan & Kitamoto 1997).
The model proposed by Yi, Wheeler, & Vishniac (1997) suggests that the torque reversal is due to the transition of the accretion flow from (to) Keplerian rotation to (from) sub-Keplerian rotation. The transition occurs when $`\dot{M}`$ crosses the critical rate $`\dot{M}_c`$ which lies somewhere in the range $`10^{16}-10^{17}`$ g/s. After the transition, the rotation of the accretion flow becomes sub-Keplerian due to the large internal pressure of the hot accretion flow. For sub-Keplerian rotation, $`\mathrm{\Omega }=A\mathrm{\Omega }_K<\mathrm{\Omega }_K`$ (i.e. $`A<1`$ and $`\mathrm{\Omega }_K=(GM_*/R^3)^{1/2}`$ is the Keplerian rotation), the new corotation radius $`R_c^{\prime}=A^{2/3}R_c`$ and the new inner edge $`R_0^{\prime}`$ is determined by
$$(R_0^{\prime}/R_c^{\prime})^{7/2}\left[1-(R_0^{\prime}/R_c^{\prime})^{3/2}\right]^{-1}=2(2\pi )^{7/3}b_pA^{-7/3}B_*^2R_*^5/(GM_*)^{2/3}P_*^{7/3}L_x=\delta ^{\prime}.$$
(2-3)
The new torque on the star after the transition to the sub-Keplerian rotation is
$$N^{\prime}=(7N_0^{\prime}/6)\left[1-(8/7)(R_0^{\prime}/R_c^{\prime})^{3/2}\right]\left[1-(R_0^{\prime}/R_c^{\prime})^{3/2}\right]^{-1},$$
(2-4)
where $`N_0^{\prime}=A\dot{M}(GM_*R_o^{\prime})^{1/2}`$. The reversal is simply a consequence of the change in $`A`$ from 1 to $`<1`$ (Yi et al. 1997) as $`\dot{M}`$ gradually crosses $`\dot{M}_c`$. It has been shown that the observed torque reversals occur near spin-equilibrium (i.e. $`N=0`$) before and after the transition (Yi et al. 1997), satisfying $`b_p^{1/2}B_*\simeq 7\times 10^{11}\delta ^{1/2}(L_x/10^{36}\mathrm{erg/s})^{1/2}(P_*/10\mathrm{s})^{7/6}`$ G, where $`2A^{7/6}<\delta ^{1/2}<4A^{7/6}`$ is likely to be of order unity (Yi & Wheeler 1998).
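The mechanics of the reversal can be illustrated numerically. In the sketch below (ours), equation (2-2) (and its primed analogue) is solved for $`R_0/R_c`$ and the torque is then evaluated from (2-1) and (2-4). The choices $`\delta =2`$ and the 4U 1626-67 pulse period $`P_*=7.66`$ s are illustrative inputs, with $`\delta ^{\prime}=\delta A^{-7/3}`$ following eq. (2-3) at fixed $`B_*`$, $`P_*`$ and $`L_x`$:

```python
import numpy as np
from scipy.optimize import brentq

G = 6.674e-8                 # cgs
M = 1.4 * 1.989e33           # 1.4 solar masses, g
I = 1.0e45                   # moment of inertia, g cm^2

def inner_radius_ratio(delta):
    # solve x^{7/2} [1 - x^{3/2}]^{-1} = delta for x = R_0/R_c, eq. (2-2)/(2-3)
    return brentq(lambda x: x**3.5 / (1.0 - x**1.5) - delta, 1e-6, 1.0 - 1e-9)

def torque(mdot, P, A, delta):
    # eqs. (2-1)/(2-4), with the corotation radius scaled by A^{2/3}
    Rc = (G * M * P**2 / (4.0 * np.pi**2)) ** (1.0 / 3.0) * A ** (2.0 / 3.0)
    x = inner_radius_ratio(delta)
    N0 = A * mdot * np.sqrt(G * M * x * Rc)
    return x * Rc, (7.0 * N0 / 6.0) * (1.0 - (8.0 / 7.0) * x**1.5) / (1.0 - x**1.5)

P, mdot, A, delta = 7.66, 2.4e16, 0.462, 2.0
for a, d, label in ((1.0, delta, "spin-up (Keplerian)"),
                    (A, delta * A ** (-7.0 / 3.0), "spin-down (sub-Keplerian)")):
    R0, N = torque(mdot, P, a, d)
    print(f"{label}: R_0 = {R0:.2e} cm, nu_dot = {N / (2 * np.pi * I):+.1e} Hz/s")
```

With these inputs the disruption radius moves from $`\simeq 5.4\times 10^8`$ cm to $`\simeq 3.7\times 10^8`$ cm and the torque changes sign at nearly constant $`\dot{M}`$, in line with the numbers quoted below.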
The observed spin frequency evolution is easily accounted for by $`\dot{M}_c=2.4\times 10^{16}`$ g/s and $`A=0.462`$ at torque reversal, with $`B_*=5.4(b_p/10)^{1/2}\times 10^{11}`$ G, which are consistent with the above criterion (Yi & Wheeler 1998). The model requires only a small change of $`\dot{M}`$ at a rate $`d\dot{M}/dt\simeq 6\times 10^{14}`$ g/s/yr, about a factor of $`5`$ less than the required $`d\dot{M}/dt`$ for the Keplerian model. The observed small flux change $`\simeq 20`$% around the torque reversal is consistent with the present model. This model is yet to explain the negative second derivative of the spin frequency before and after the reversal (Chakrabarty et al. 1997a). The truncation radius changes from $`R_o\simeq 5.5\times 10^8`$ cm just before reversal to $`R_o^{\prime}\simeq 3.7\times 10^8`$ cm right after reversal. We look for a mechanism for the spectral transition with small changes in $`\dot{M}`$ within the model of Yi et al. (1997). The small change in the integrated X-ray luminosity suggests a comparably small change in the mass accretion rate. In the magnetized accretion picture, if the accretion flow is truncated far away from the neutron star, most of the X-ray emission should originate in the column accretion flow near the neutron star. In this case, the X-ray luminosity is likely to reflect the mass accretion rate change. Since the observed spectra show a strong thermal emission component, it is plausible that the X-ray emission comes from the thermalized material near the accreting magnetic poles (e.g. Kaminker et al. 1976, Frank et al. 1992).
When the accretion flow is Keplerian, the accretion flow is dominated by cooling and the accreted plasma remains cold until it undergoes shock heating and thermalization near the magnetic poles. In this case, the accretion column is not hot enough for significant Compton scattering. The cold accretion column is more likely to act as absorbing gas than a Comptonizing gas. The geometric thickness of the accretion column flow is much smaller than the stellar radius, so that only a small fraction of the radiation from the star will interact with the accreted material. After transition, the accretion flow is hot and its internal pressure becomes large enough to support a large scale height. The accretion flow extending from $`R_o^{}`$ to $`R_{}`$ is geometrically thick and hot enough to Comptonize the thermal stellar radiation.
## 3 Scattering in Hot Accretion Column during Spin-down
The sub-Keplerian flow is advection-dominated (e.g. Narayan & Yi 1995), which implies that the bulk of the viscously dissipated energy is stored within the accretion flow. Assuming that advection is the dominant channel for the viscously dissipated energy, the temperature of the accretion flow is $`T_{acc}1.2\times 10^9K`$ and the electron scattering depth along the vertical direction is $`\tau _{es,acc}2.8\times 10^2`$ at $`R=R_o^{}4\times 10^8cm`$. Here we have assumed an equipartition magnetic field and a viscosity parameter $`\alpha =0.3`$ (Narayan & Yi 1995). This implies that the advection-dominated accretion flow itself has a very small scattering depth. However, the accretion flow inside the radius $`R=R_o^{}`$ is channeled by the magnetic fields lines, and is a very promising site for scattering due to geometric focusing. In the conventional column accretion model (e.g. Frank et al. 1992, Yi & Vishniac 1994), if the dipole magnetic field axis is misaligned with respect to rotation axis of the star (which is assumed to be aligned with the accretion flow’s rotational axis) by an angle $`\pi /2\beta _1`$, then the angle between the magnetic axis and the magnetic field lines connected to the disk plane at $`R_o^{}`$, $`\beta _2`$, is given by $`\mathrm{sin}^2\beta _1=\mathrm{sin}^2\beta _2\times (R_{}/R_o^{})`$. It can also be shown that the fraction of the polar surface area threaded by magnetic field lines connected to the accretion flow at $`RR_o^{}`$ is $`sR_{}\mathrm{sin}^2\beta _2/4R_o^{}R_{}/2R_o^{}`$ after angle-averaging over $`\beta _2`$. This implies that unless $`\beta _2`$ is very small the accreting fraction of the stellar surface area $`s`$ is likely to be of order $`R_{}/R_o^{}10^210^3`$. The accretion is assumed to continue to the poles without mass loss and then the soft X-ray radiation from the polar regions is uniquely determined by the mass accretion rate derived from the torque reversal (Yi & Wheeler 1998).
The detailed emission process near the magnetic poles is complicated (e.g. Frank et al. 1992 and references therein). The existence of a thermal component at low X-ray energies implies that a significant fraction of the accretion energy is thermalized near the surface of the neutron star. If the entire accretion energy is thermalized at the stellar surface (cf. Kaminker et al. 1976), the anticipated blackbody temperature is $`kT_*\simeq 5s^{-1/4}`$ keV. Since the accretion rate change is small near the torque reversal, the observed spectral transition is unlikely to be caused by a sudden change in physical conditions. If there is no scattering effect from the infalling column material, the underlying emission process is likely to remain unchanged around the torque reversal. We therefore assume that the intrinsic spectrum (before scattering and absorption) remains unchanged.
The column accretion flow follows the bent magnetic field lines. Since a detailed geometry is hard to specify, we assume that the solid angle subtended by the accretion column at the accretion poles is constant at all radii along the field lines. The radial bulk infall velocity can be parametrized as a fraction of the free-fall velocity as $`v_R=x(2GM_*/R)^{1/2}`$ and we get
$$\tau _{es}\simeq \frac{\sigma _T}{4\pi \mu m_p}\frac{1}{s^{1/2}x}\frac{\dot{M}}{(2GM_*R)^{1/2}}$$
(3-1)
where $`\mu \simeq 1`$ is the mean molecular weight. We find that $`\tau _{es}\simeq 0.4\mu ^{-1}(R/10^6\mathrm{cm})^{-1/2}`$ with $`s\simeq 10^{-2}`$, $`x\simeq 1`$, $`\dot{M}\simeq 2.4\times 10^{16}`$ g/s, where the mass accretion rate is based on the torque reversal fitting (e.g. Yi & Wheeler 1998). From the inner disk radius $`R=R_o^{\prime}\simeq 4\times 10^8`$ cm to the base of the accretion column near the stellar surface $`R\simeq 10^6`$ cm, the scattering depth is likely to range from $`\tau _{es}\simeq 2\times 10^{-2}\mu ^{-1}`$, through $`\simeq 0.1\mu ^{-1}`$ at $`R\simeq 10^7`$ cm, to $`\simeq 0.4\mu ^{-1}`$ at $`R\simeq 10^6`$ cm. Evidently the accretion column can have a dramatic effect on the outgoing soft radiation through scattering.
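Equation (3-1) is easily evaluated; the following sketch (ours) reproduces the quoted run of $`\tau _{es}`$ with radius for the fiducial parameters $`s=10^{-2}`$, $`x=1`$, $`\mu =1`$:

```python
import numpy as np

SIGMA_T, M_P, G = 6.652e-25, 1.673e-24, 6.674e-8   # cgs
M_STAR = 1.4 * 1.989e33

def tau_es(R, mdot=2.4e16, s=1e-2, x=1.0, mu=1.0):
    """Electron-scattering depth of the accretion column, eq. (3-1)."""
    return (SIGMA_T / (4 * np.pi * mu * M_P)) / (np.sqrt(s) * x) \
           * mdot / np.sqrt(2 * G * M_STAR * R)

for R in (4e8, 1e7, 1e6):
    print(f"R = {R:.0e} cm: tau_es = {tau_es(R):.2f}")
# -> ~0.02, ~0.12, ~0.39: the column is optically thin far out but reaches
#    tau_es ~ 0.4 near the stellar surface, as quoted above.
```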
The angular momentum has to be continuously transported along the accretion column (corresponding to the torque action between the star and accretion flow). We assume the radial infall is nearly adiabatic and the shear dissipation is decoupled from the bulk radial motion. We take the dissipation rate per unit volume within the accretion column as $`q^+=\nu \rho R^2(d\mathrm{\Omega }/dR)^2`$ where $`\nu =c_sH`$ is the kinematic viscosity and $`H\simeq 2\sqrt{s}R`$ is the cross-sectional thickness of the accretion column. We have assumed that the internal random energies, thermal and turbulent, are comparable and the bulk radial motion is supersonic. We then obtain $`q^+\simeq 18G\sqrt{s}M_*c_s\rho /4r^2`$, or the integrated dissipation rate per unit length of the column $`Q^+\simeq q^+\times \pi D^2\simeq 9c_s|v_R|\dot{M}D^2/8R^2`$. The cooling of the column material occurs via Compton up-scattering of the soft photons from the thermalized radiation near the stellar surface (e.g. Narayan & Yi 1995). Assuming that the Compton interaction occurs at a distance sufficiently far away from the stellar surface, we treat the thermal radiation as originating from a point source. Then the integrated cooling rate across the column is estimated as $`Q^{}\simeq 4kGM_*\dot{M}D\tau _{es}T/2m_ec^2R_*R^2`$. The energy balance $`Q^+=Q^{}`$ gives $`(kT)^{1/2}\tau _{es}\simeq 9m_ec^2R_*D/16(\mu m_p)^{1/2}\sqrt{s}|v_R|R^2`$, which shows the expected inverse correlation between the Compton depth and the column temperature. In general, $`Q^+=Q^{}`$ is not guaranteed, but as $`\tau _{es}`$ approaches unity near the stellar surface (e.g. Yi & Vishniac 1994), a significant Compton cooling could lead to $`Q^+=Q^{}`$ (cf. Narayan & Yi 1995). In this simple picture, the column temperature remains nearly independent of $`R`$ until the column material becomes optically thick and the radiation is thermalized. For parameters $`s\simeq 10^{-2}`$ and $`\dot{M}\simeq 10^{16}`$ g/s, we expect $`kT\simeq 60`$ keV. This implies that the resulting Comptonized radiation is likely to show the signature of a hot plasma with this temperature. The sub-Keplerian accretion flow at $`R>R_o^{\prime}`$ is advection-dominated (e.g. Narayan & Yi 1995). At radii $`R>R_o^{\prime}`$, the temperature difference between electrons and ions is likely to be small. After the accretion flow enters the magnetic column, the accreted material is rapidly compressed, which justifies the single-temperature treatment of the energy balance between viscous heating and Compton cooling. The Comptonized radiation is calculated according to $`E_f=\eta E_i`$ where $`E_f`$ is the photon energy after scattering and $`E_i`$ is the incident soft energy. $`\eta `$ is the energy boost parameter originally derived by Dermer et al. (1991), i.e. $`\eta =1+\eta _1-\eta _2(E_i/kT)^{\eta _3}`$ where $`\eta _1`$, $`\eta _2`$, and $`\eta _3`$ are defined in Narayan & Yi (1995). These functions are completely specified by $`\tau _{es}`$ and $`T`$ for a given $`E_i`$. Therefore, given the incident photon spectrum $`N_i(E_i)`$, the resulting Comptonized spectrum is completely specified. The radial infall speed in the scattering region and its effects on the scattering could be significant, which is beyond the scope of this work.
Although the soft radiation is likely to be the thermalized radiation from the stellar surface, the details of the emission remains unclear. We simply assume that the soft photon spectrum remains unchanged near the torque reversal and that the unscattered soft photon spectrum during spin-down is similar to that of the spin-up period. Since during spin-up scattering is negligible, this assumption is quite plausible. We further assume that the luminosity of the source decreases according to the mass accretion rate decline. We expect the soft photon luminosity to decrease by a few $`\times 10`$% while the spectral shape remains unchanged. We also assume that all outgoing radiation goes through the Comptonizing accretion column. Figure 1 shows the result of the Compton scattering from a column material with $`T_e=10^9K`$ and $`\tau _{es}0.3`$ which are close to the values we have derived above. These parameters adequately reproduce the qualitative features of the observed spectral transition. The shown spectral transition requires that the soft photon flux decrease by $`40`$% during spin-down, which is not far from the change required for the torque reversal (Yi et al. 1997, Yi & Wheeler 1998). Despite the uncertainties in accretion column geometry, the expected spectral change is remarkably close to the observed change. This strongly indicates that the observed spectral transition could be due to formation of the hot accretion flow during spin-down. More detailed, quantitative answers would require exact scattering geometric information and physics of thermal radiation emission near the stellar surface.
## 4 Discussion
We have shown that Compton-scattering during spin-down could account for the sudden spectral transition correlated with the torque reversal seen in 4U 1626-67. The small luminosity change supports the possibility that the mass accretion rate varies little as the present model indicates. The expected beat-type QPOs would be hard to detect during spin-down (Yi & Wheeler 1998). Kommers et al. (1998) reported the detection of QPOs at a frequency of 0.048Hz using RXTE, but they found that the observed QPOs are not likely to be attributable to a magnetospheric beat.
Although the spectral transition appears robust, the time-averaging of the X-ray spectra over $`30`$ days (Vaughan & Kitamoto 1997) could be questionable due to some changes in accretion flows near the torque reversal on short time scales. If the accretion flow transition occurs gradually over a period of days (e.g. Yi et al. 1997), the early spin-down spectra could be substantially softer than late spin-down spectra. If such a difference is not found, the accretion flow transition should occur on a time scale $`<1`$ day. Such a short transition time scale is still plausible within the proposed model (Yi et al. 1997), although the details of the transition process have not been addressed (Yi & Wheeler 1998). On long time scales, Owens et al. (1997) found that the X-ray flux detected by BeppoSAX is about a factor 2 lower than the ASCA flux measured three years earlier (Angelini et al. 1995). This suggests that the mass accretion rate does decrease on long time scales, although the accretion rate decrease on short time scales near the reversal is not large enough to drive the torque reversal and spectral transition.
Orlandini et al. (1998) reported an absorption feature at $`\sim 37`$ keV, which they claim to be the cyclotron absorption line. The estimated surface field strength is $`3\times 10^{12}`$ G. This field strength is substantially higher than our estimate $`5(b_p/10)^{1/2}\times 10^{11}`$ G. Such a discrepancy is not unexpected given the uncertainties in the distance estimate. If the magnetic pitch $`b_p`$ is as small as $`1`$, the Orlandini et al. (1998) estimate is largely consistent with our estimate.
GX 1+4 has shown a similar torque reversal and an unexplained torque-flux correlation, which is exactly opposite to that predicted in the conventional model (Chakrabarty et al. 1997b). A significant spectral hardening similar to that seen in 4U 1626-67 could account for the correlation. So far such a transition has not been reported in GX 1+4. The prograde-retrograde transition (Nelson et al. 1997) could provide an explanation (van Kerkwijk 1998) despite several outstanding difficulties.
We acknowlege the support of the SUAM Foundation and KRF grant 1998-001-D00365 (IY), NASA grant NAG5-2773 and NSF grant AST-9318185 (ETV). ETV is grateful for the hospitality of MIT and the CfA during the completion of this work. IY thanks Josh Grindlay and Craig Wheeler for discussions on LMXBs and pulsars.
Figure 1: Expected X-ray photon spectra before and after torque reversal. The scattering region used in the theoretical post-reversal spectrum has $`\tau _{es}=0.3`$ and $`T_e=10^9K`$. The un-scattered soft radiation changes luminosity by 40% while its spectral shape remains unchanged (see text).
# GLOBAL DYNAMICS OF ADVECTION-DOMINATED ACCRETION REVISITED
## 1 Introduction
Accretion of rotating matter onto a compact object powers many energetic astrophysical systems such as cataclysmic variables, X-ray binaries, and active galactic nuclei. In order to explain the observational radiative output of these systems, a full understanding of the global structure and dynamical behavior of accretion flows is of fundamental importance. Here the word ’global’ means the entire region of the flow, i.e. from the place where accretion starts till the surface of the compact object. The detailed microphysics of radiative cooling processes, although necessary for calculating the emission characteristics, may or may not significantly affect the flow dynamics, depending on whether the cooling is efficient or not.
The birth of modern accretion disk theory is traditionally attributed to the model of Shakura & Sunyaev (1973). In this model the accretion flow was considered to be geometrically thin (equivalently, thermally cool, cf. eq. below), optically thick, Keplerian rotating, and radially highly subsonic. Such a simple model has been very successful, particularly for the case of unmagnetized white dwarf accretion, i.e. in cataclysmic variables (see Frank, King, & Raine 1992 for a review). The reason for this fact is that the gravitational potential well of a white dwarf is not very deep, thus the radial velocity of the accreted matter is small, and the temperature is low (both the kinetic energy and the thermal energy are converted from the gravitational energy); such a physical situation just meets the basic assumptions of Shakura & Sunyaev model, and the model in turn provides an excellent description of the global structure and dynamics of the white dwarf accretion disk.
However, black hole accretion must be transonic, i.e. the radial velocity must be important in the region near the black hole, and some or even almost all the energy and entropy of the accreted matter can be advected into the hole, rather than being radiated away from the surface of the flow (neutron star accretion could be either transonic or entirely subsonic). Such a transonic or advective feature cannot be reflected by Shakura & Sunyaev model at all, and adequate models should be developed. There are two known regimes in which the radiative cooling is very inefficient and most of the dissipated energy is advected into the black hole, namely, when the mass accretion rate is very high and the flow is optically thick, and when the accretion rate is very low and the flow is optically thin. The resulting so-called advection- dominated accretion flows (ADAF) have been a very attractive subject for the recent years, especially because of promising applications of the optically thin ADAF model to explaining properties of X-ray transients, the Galactic center, low luminosity active galactic nuclei, and some other high energy objects (see Narayan, Mahadevan, & Quataert 1998 for a review). In particular, a number of papers have been devoted to the study of global solutions of ADAF (Matsumoto, Kato, & Fukue 1985; Abramowicz et al. 1988; Honma, Matsumoto, & Kato 1991; Chen & Taam 1993; Abramowicz et al. 1996; Chakrabarti 1996a; Igumenshchev, Chen, & Abramowicz 1996; Chen, Abramowicz, & Lasota 1997; Nakamura et al. 1997; Narayan, Kato, & Honma 1997; Peitz & Appl 1997; Igumenshchev, Abramowicz, & Novikov 1998; Popham & Gammie 1998). Among these papers, we feel that the one of Narayan, Kato, & Honma (1997, hereafter NKH97) was most clearly written, and we would like to use it as a representative of the previous works on global solutions of ADAF.
NKH97 concentrated itself on the global structure and dynamics of ADAF around black holes. It did not specify any particular values for the accretion rate and the optical depth, so the global solution obtained in it could be applied to both the above mentioned regimes in which advection-dominated accretion can occur. This is an advantage of that paper. However, there are some points that we feel questionable. The first is about the boundary conditions of the flow, which are necessary and important for obtaining a global solution. An accretion flow into a black hole is definitely supersonic when crossing the hole’s horizon, and it is likely to be subsonic when locating at some large distance from the hole, therefore the flow motion must be transonic, i.e. there must exist at least one sonic point between the horizon and the large distance. Among these three boundary conditions what is unclear for an ADAF solution is the outer one. No one knows yet where and how an accretion flow becomes advection-dominated, or what kind of structure to which an ADAF should connect outwards. Perhaps because Shakura & Sunyaev model is the only well studied and successful one in the accretion disk theory, NKH97, as well as almost all previous works on global ADAF solutions, assumed that an ADAF connects outwards to a thin, cool, Keplerian accretion disk. However, this is only an assumption, and yet no justification has been made that it should be so. One may ask if conditions other than the thin disk condition are also possible for ADAF’s outer boundary. What we wish to have is a complete overview of the flow parameter space, in which all possible ADAF solutions corresponding to different boundary conditions are presented. This is the first purpose of our present paper.
The second question concerning NKH97, which we also try to answer in the present paper, is about discrepancies between the results of NKH97 for viscous ADAF and that in the literature for adiabatic, inviscid flows (e.g. Abramowicz & Zurek 1981; Chakrabarti 1990; Chakrabarti 1996b; Lu et al. 1997). Namely, for viscous ADAF there is always only one sonic point which is located close to the black hole, i.e. at several gravitational radii, and no shocks were found in the flow; while for the inviscid flow the sonic point can locate further away from the black hole, and there may exist two physical sonic points in the flow, consequently shocks may form. It is hard to understand these qualitative discrepancies, because adiabatic flows are the ideal case of ADAF, in the sense that all the energy of the accreted matter is carried into the black hole without any loss, there should be no such a jump between the dynamical behavior of adiabatic flows and that of ADAF. Rather, we think it is more reasonable to expect some similarities between the solution for adiabatic flows and that for ADAF, as well as a continuous and smooth change in the properties of flows, i.e. from inviscid flows via weakly viscous flows to strongly viscous flows.
## 2 Dynamics
### 2.1 Equations
We consider a steady state axisymmetric accretion flow. The governing equations are unique, although they may be written in more or less different forms by various authors. Here we employ exactly the same set of equations as in NKH97, which is similar to the ’slim disk equations’ of Abramowicz et al. (1988). In this formulation hydrostatic equilibrium in the vertical direction is assumed and all physical variables are averaged vertically, so that they are functions only of the cylindrical radius $`R`$; the resulting height-integrated equations, as shown by Narayan & Yi (1995), are a very good representation of the behavior of both thin disks and ADAFs, which tend to be nearly spherical.
(i) The continuity equation:
$`{\displaystyle \frac{d}{dR}}\left[(2\pi R)(2H\rho \upsilon )\right]=0`$ (1)
or, its integration form
$`4\pi RH\rho \upsilon =-\dot{M}=\mathrm{constant}`$ (2)
where $`\rho `$ is the density of the accreted gas, $`H`$ is the vertical half-thickness of the flow, $`\upsilon `$ is the radial velocity, which is defined to be negative when the flow is inward, and $`\dot{M}`$ is the mass accretion rate.
(ii) The equation of vertical hydrostatic equilibrium:
$`H=({\displaystyle \frac{5}{2}})^{1/2}{\displaystyle \frac{c_s}{\mathrm{\Omega }_k}},`$ (3)
where the coefficient (5/2)<sup>1/2</sup> is chosen on the basis of the self-similar solution obtained by Narayan & Yi (1995); $`c_s`$ is the isothermal sound speed of the gas defined as
$`c_s^2=p/\rho ,`$ (4)
with $`p`$ being the pressure; and $`\mathrm{\Omega }_k`$ is the Keplerian angular velocity which takes the form
$`\mathrm{\Omega }_k^2={\displaystyle \frac{GM}{(R-R_g)^2R}},`$ (5)
if the gravitational potential of the central black hole is assumed to be described by Paczynski & Wiita (1980) potential
$`\mathrm{\Phi }(R)=-{\displaystyle \frac{GM}{(R-R_g)}},`$ (6)
with $`M`$ being the mass of the black hole, and $`R_g`$ the gravitational radius, $`R_g=2GM/c^2`$.
(iii) The radial momentum equation:
$`\upsilon {\displaystyle \frac{d\upsilon }{dR}}=\mathrm{\Omega }^2R-\mathrm{\Omega }_k^2R-{\displaystyle \frac{1}{\rho }}{\displaystyle \frac{d}{dR}}(\rho c_s^2),`$ (7)
where $`\mathrm{\Omega }`$ is the angular velocity of the gas.
(iv) The angular momentum equation:
$`\upsilon {\displaystyle \frac{d}{dR}}(\mathrm{\Omega }R^2)={\displaystyle \frac{1}{\rho RH}}{\displaystyle \frac{d}{dR}}(\nu \rho R^3H{\displaystyle \frac{d\mathrm{\Omega }}{dR}}),`$ (8)
where $`\nu `$ is the kinematic coefficient of viscosity. In the spirit of Shakura & Sunyaev (1973), $`\nu `$ can be written as
$`\nu =\alpha {\displaystyle \frac{c_s^2}{\mathrm{\Omega }_k}},`$ (9)
where $`\alpha `$ is assumed to be a constant. Substituting equation (9) in equation (8) and integrating, one obtains
$`{\displaystyle \frac{d\mathrm{\Omega }}{dR}}={\displaystyle \frac{\upsilon \mathrm{\Omega }_k(\mathrm{\Omega }R^2-j)}{\alpha R^2c_s^2}},`$ (10)
where the integration constant $`j`$ represents the specific angular momentum per unit mass accreted by the black hole.
(v) The energy equation for ADAF:
$`{\displaystyle \frac{\rho \upsilon }{(\gamma -1)}}{\displaystyle \frac{dc_s^2}{dR}}-c_s^2\upsilon {\displaystyle \frac{d\rho }{dR}}={\displaystyle \frac{\alpha \rho c_s^2R^2}{\mathrm{\Omega }_k}}({\displaystyle \frac{d\mathrm{\Omega }}{dR}})^2,`$ (11)
where $`\gamma `$ is the ratio of specific heats of the gas.
As mentioned in NKH97, Shakura & Sunyaev (1973) originally wrote a simpler prescription for the shear stress of the form
$`\text{shear stress}=\alpha p{\displaystyle \frac{d\mathrm{ln}\mathrm{\Omega }_k}{d\mathrm{ln}R}}.`$ (12)
With this prescription, equation (8) is modified to a different form which integrates to give an algebraic relation instead of differential equation (10), i.e. the resulting set of equations contains one less differential equation, and the task of finding solutions becomes considerably easier. For this reason the shear stress prescription (12) has been used by most of the authors in the field (see Chakrabarti 1996a for references).
### 2.2 Boundary conditions and numerical method
The set of dynamical equations consists of two algebraic equations (2) and (3), and three first-order differential equations (7), (10), and (11), for five unknown variables $`H`$, $`\rho `$, $`c_s`$, $`\upsilon `$, and $`\mathrm{\Omega }`$, all being functions of $`R`$. As stated in the introduction section, a global solution for black hole accretion requires three boundary conditions, namely, that at the inner boundary $`R_{in}`$, that at the outer boundary $`R_{out}`$, and that at the sonic point $`R_s`$, respectively. We discuss them in turn.
(i) The sonic point condition:
By combining equations (1), (3), (7), (10), and (11) to eliminate $`H`$, $`d\mathrm{\Omega }/dR`$, $`d\mathrm{ln}\rho /dR`$, and $`dc_s/dR`$, one obtains the following differential equation for $`d\upsilon /dR`$:
$`({\displaystyle \frac{2\gamma }{\gamma +1}}-{\displaystyle \frac{\upsilon ^2}{c_s^2}}){\displaystyle \frac{d\mathrm{ln}\upsilon }{dR}}={\displaystyle \frac{(\mathrm{\Omega }_k^2-\mathrm{\Omega }^2)R}{c_s^2}}-{\displaystyle \frac{2\gamma }{\gamma +1}}`$
$`\times ({\displaystyle \frac{1}{R}}-{\displaystyle \frac{d\mathrm{ln}\mathrm{\Omega }_k}{dR}})+{\displaystyle \frac{(\gamma -1)\mathrm{\Omega }_k\upsilon (\mathrm{\Omega }R^2-j)^2}{\alpha (\gamma +1)R^2c_s^4}}.`$ (13)
To make a smooth sonic transition for the flow, $`d\upsilon /dR`$ must be well behaved across the sonic point; this gives two boundary conditions:
$`\upsilon ^2-{\displaystyle \frac{2\gamma }{\gamma +1}}c_s^2=0,R=R_s,`$ (14)
$`{\displaystyle \frac{(\mathrm{\Omega }_k^2-\mathrm{\Omega }^2)R}{c_s^2}}-{\displaystyle \frac{2\gamma }{\gamma +1}}({\displaystyle \frac{1}{R}}-{\displaystyle \frac{d\mathrm{ln}\mathrm{\Omega }_k}{dR}})+{\displaystyle \frac{(\gamma -1)\mathrm{\Omega }_k\upsilon (\mathrm{\Omega }R^2-j)^2}{\alpha (\gamma +1)R^2c_s^4}}=0,R=R_s.`$ (15)
Conditions (14) and (15) are exactly the same as in NKH97, and essentially the same as in all the previous works: a sonic point in physics is simply a critical point of the ordinary differential equation (13) in mathematics, so there is no question about them.
(ii) The inner boundary condition:
NKH97 imposed a no-torque condition ($`d\mathrm{\Omega }/dR=0`$) at $`R=R_{in}`$, which corresponds to (cf. eq. (10))
$`\mathrm{\Omega }R^2-j=0,R=R_{in},`$ (16)
and argued that, although technically this condition must be applied at the black hole horizon (i.e. $`R=R_{in}=R_g`$), in practice it could be applied equally well at any other radius between the horizon and the sonic point, or even at the sonic point itself. Such a condition arises because the viscous stress term in equation (8) is diffusive in form: signals due to the shear stress can propagate backward even in the supersonic zone of the flow, and this makes it necessary to impose a downstream boundary condition on the angular momentum. If the prescription for shear stress (12) is adopted, as in most of the previous studies, signals cannot propagate backward from the supersonic zone, and condition (16) is then unnecessary.
(iii) The outer boundary condition:
Unlike the above two, this condition is very uncertain, and it is the treatment of this condition that distinguishes the different numerical models of global solutions. NKH97 assumed that the gas starts off from $`R_{out}=10^6R_s`$ in a state that corresponds to a standard Shakura-Sunyaev thin disk, and accordingly applied two conditions:
$`\mathrm{\Omega }=\mathrm{\Omega }_k,R=R_{out},`$ (17)
$`c_s=10^{-3}\mathrm{\Omega }_kR,R=R_{out},`$ (18)
of which condition (17) means that the flow is rotating at the Keplerian rate, and condition (18) means that the disk is cool, or equivalently thin ($`H\sim 10^{-3}R`$, cf. eq. (3)). With five boundary conditions (14), (15), (16), (17), and (18), NKH97 dealt with a very challenging numerical boundary value problem using a relaxation method. Since there are only three differential equations (7), (10), and (11) to be solved, the five boundary conditions enabled them to determine the two unknowns, i.e. the sonic radius $`R_s`$ and the integration constant $`j`$, as eigenvalues. However, such an eigenvalue problem is numerically very time-consuming and difficult to solve; besides, one may still wonder how the location of the outer boundary is to be known (why should it be $`10^6R_s`$?), and whether conditions other than (17) and (18) are also possible for ADAF solutions.
Chakrabarti and his collaborators introduced a very clever procedure (e.g. Chakrabarti 1996a). The difficulty of finding the eigenvalues was simply avoided: $`R_s`$ and $`j`$ were chosen to be free parameters whose values were supplied, and the equations were then integrated from the sonic point inwards until the black hole horizon, and outwards until the Keplerian value of the angular momentum was achieved, i.e. until condition (17) was satisfied. The outer boundary of the flow $`R_{out}`$ was obtained in this way, rather than being specified as in NKH97. However, it should be noted that condition (17) is only half of the thin disk condition, while the other half, i.e. condition (18), which describes the shape (or temperature) of the flow at $`R_{out}`$, was not mentioned in Chakrabarti’s work. Indeed, since Chakrabarti’s approach is to let the outer boundary conditions follow automatically from an already known solution, there is no guarantee that solutions constructed in such a way will always be physically reasonable: incorrect choices of the eigenvalues may produce troubled solutions with which no astrophysically acceptable outer boundary conditions are consistent. Very recently, adopting a procedure similar to Chakrabarti’s, Igumenshchev, Abramowicz, & Novikov (1998) obtained global ADAF solutions for which the flow’s shape is slim everywhere ($`H/R\sim 1`$) and the distribution of angular momentum has a super-Keplerian part at the outer boundary. In particular, they found that the ADAF solution could match outwards to the solution of Shapiro, Lightman, & Eardley (1976) in the case of bremsstrahlung cooling. The importance of these results is that outer boundary conditions other than the thin disk conditions (17) and (18) are shown to be possible for ADAF solutions.
In the present paper we solve the dynamical equations described in 2.1 in a way also similar to Chakrabarti’s. There are three differential equations (7), (10), and (11) for three unknown variables $`c_s`$, $`\upsilon `$, and $`\mathrm{\Omega }`$ (the term $`d\mathrm{ln}\rho /dR`$ in eqs. (7) and (11) can be substituted for by using eqs. (2) and (3)). We integrate the three equations from the sonic point both inwards and outwards, and do not specify any ad hoc outer boundary conditions. To do so, we first take the sonic radius $`R_s`$ and the integration constant $`j`$ as two free parameters. Three more boundary values, namely the values of $`c_s`$, $`\upsilon `$, and $`\mathrm{\Omega }`$ at $`R=R_s`$, are needed to start the integration, but there exist two constraint conditions (14) and (15), so only one more boundary value has to be supplied. Our choice is to apply the no-torque condition (16) at the sonic point $`R_s`$; this fixes the value of $`\mathrm{\Omega }`$ at $`R_s`$, and the values of $`c_s`$ and $`\upsilon `$ at $`R_s`$ then follow from conditions (14) and (15). We use the standard fourth-order Runge-Kutta method to perform the integration from $`R_s`$, as sketched below. The derivative $`d\upsilon /dR`$ at $`R_s`$ is evaluated by applying l’Hôpital’s rule and solving a quadratic equation. The inward part of the solution extends until the black hole horizon, showing the flow’s supersonic motion; and the outward, subsonic part extends until a large radial distance which can be considered to be the flow’s outer boundary. After having $`c_s`$, $`\upsilon `$, and $`\mathrm{\Omega }`$ as functions of $`R`$, the other variables $`H`$, $`\rho `$, and $`p`$ can be calculated from equations (2), (3), and (4), and a global solution is obtained. By varying the values of the two free parameters $`R_s`$ and $`j`$, we can find all the possible solutions: although some of the solutions constructed this way may not be physically acceptable, no physical solutions will be missed under such a ‘carpet bombing’.
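The following Python fragment (again ours, an illustrative reading of this procedure rather than the authors’ code) continues the sketch above. With the no-torque condition (16) applied at $`R_s`$, the last term of condition (15) vanishes and the starting values follow in closed form; the right-hand sides combine eqs. (7), (10), (11) and the continuity relation, and one step of the fourth-order Runge-Kutta scheme is shown. Since eq. (13) is of the form 0/0 exactly at $`R_s`$ (which the paper resolves with l’Hôpital’s rule and a quadratic equation), the sketch must start the integration a small step away from the sonic point:

```python
GAMMA, ALPHA = 1.5, 0.01  # gamma = 1.5 as in Section 3; alpha = 0.01

def sonic_start(Rs, j):
    """Boundary values (upsilon, c_s, Omega) at R_s from conditions (14)-(16)."""
    Ok, dlnOk = omega_k(Rs), dln_omega_k_dR(Rs)
    Om = j / Rs ** 2                                   # no-torque condition (16)
    fac = 2.0 * GAMMA / (GAMMA + 1.0)
    cs2 = (Ok ** 2 - Om ** 2) * Rs / (fac * (1.0 / Rs - dlnOk))  # condition (15)
    return -np.sqrt(fac * cs2), np.sqrt(cs2), Om       # condition (14); inflow v < 0

def rhs(R, y, j):
    """d/dR of (upsilon, c_s, Omega) away from the sonic point."""
    v, cs, Om = y
    Ok, dlnOk = omega_k(R), dln_omega_k_dR(R)
    dOm = dOmega_dR(R, v, cs, Om, ALPHA, j)                        # eq. (10)
    fac = 2.0 * GAMMA / (GAMMA + 1.0)
    dlnv = ((Ok ** 2 - Om ** 2) * R / cs ** 2
            - fac * (1.0 / R - dlnOk)
            + (GAMMA - 1.0) * Ok * v * (Om * R ** 2 - j) ** 2
            / (ALPHA * (GAMMA + 1.0) * R ** 2 * cs ** 4)) / (fac - v ** 2 / cs ** 2)  # eq. (13)
    heat = ALPHA * cs ** 2 * R ** 2 * dOm ** 2 / Ok                # viscous heating, eq. (11)
    dcs = ((heat + cs ** 2 * v * (-1.0 / R + dlnOk - dlnv))
           * (GAMMA - 1.0) / ((GAMMA + 1.0) * v * cs))             # eq. (11) plus continuity
    return np.array([v * dlnv, dcs, dOm])

def rk4_step(R, y, h, j):
    """One standard fourth-order Runge-Kutta step (h < 0 integrates inwards)."""
    k1 = rhs(R, y, j)
    k2 = rhs(R + 0.5 * h, y + 0.5 * h * k1, j)
    k3 = rhs(R + 0.5 * h, y + 0.5 * h * k2, j)
    k4 = rhs(R + h, y + h * k3, j)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```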
## 3 Results and discussion
We now present our numerical results. In our calculations the ratio of specific heats of the accreted gas, $`\gamma `$, is fixed at 1.5. Figure 1 gives an overview of the results, showing all the possible solutions in the flow parameter space spanned by the sonic radius $`R_s`$ and the specific angular momentum accreted by the black hole $`j`$. The figure is for the viscosity parameter $`\alpha `$ = 0.01; when $`\alpha `$ takes other non-zero values the results remain qualitatively similar and change only quantitatively. It is seen from the figure that the whole parameter space is divided into six regions. The dashed line is for the Keplerian angular momentum ($`l_k=\mathrm{\Omega }_kR^2`$) corresponding to a given $`R_s`$, and region I is that above the dashed line. This region is unphysical, because it would require the square of the radial velocity $`\upsilon ^2`$ at the sonic point to have negative values; in other words, the value of $`j`$ is chosen to form too high a centrifugal barrier, so that it is impossible for the gas to be accreted. Region II, above the dotted line, is also unphysical, again because of wrong choices of the parameters: in this region the derivative $`d\upsilon /dR`$ at $`R_s`$, calculated using l’Hôpital’s rule, has no real solution, and such a sonic point, classified as being of spiral type, is not physical. Apart from these two no-solution regions, the other regions in the parameter space all give solutions. We describe them in detail in the following subsections.
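Schematically, the ‘carpet bombing’ of the $`(R_s,j)`$ plane described in Section 2.2 amounts to a scan of the following kind (a sketch under the same assumptions as the code above; the spiral-type test of region II, which requires the quadratic from l’Hôpital’s rule, is only indicated):

```python
import itertools

for Rs, j in itertools.product(np.linspace(2.1, 40.0, 40),
                               np.linspace(1.2, 2.0, 40)):
    Ok, dlnOk = omega_k(Rs), dln_omega_k_dR(Rs)
    Om = j / Rs ** 2
    fac = 2.0 * GAMMA / (GAMMA + 1.0)
    cs2 = (Ok ** 2 - Om ** 2) * Rs / (fac * (1.0 / Rs - dlnOk))
    if cs2 <= 0.0:
        continue   # region I: upsilon^2 < 0, too high a centrifugal barrier
    # region II would be rejected here if d(upsilon)/dR at R_s has no real root;
    # otherwise integrate inwards and outwards from R_s with rk4_step(...)
```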
### 3.1 ADAF-thin disk solutions
We obtain ADAF solutions that connect outwards to thin disk solutions, i.e. conditions (17) and (18) at the outer boundary $`R_{out}`$ are both fulfilled. These solutions are just those constructed in NKH97, and all sit on the solid line AB in figure 1, which forms a part of the boundary between regions III and V. Figures 2a-c give a typical example of this class of solution, for which $`R_s=2.29R_g`$ and $`j=1.749(cR_g)`$. Figure 2a shows the radial velocity $`\upsilon `$ and the sound speed $`c_s`$ of the flow as functions of the radius $`R`$, drawn as a solid line and a dashed line, respectively. The crossing point of the two lines (marked by a filled circle) is the sonic point $`R_s`$. In figure 2b the radial distribution of the angular momentum of the flow ($`l=\mathrm{\Omega }R^2`$) is shown by a solid line, and the dashed line gives the Keplerian angular momentum distribution ($`l_k=\mathrm{\Omega }_kR^2`$). It is seen that beyond the outer boundary $`R_{out}=10^{3.99}R_g`$ (marked by a filled square) where the flow’s angular momentum equals the Keplerian value (condition (17)), there is a narrow region in which the rotation of the flow is super-Keplerian. This feature was not found in NKH97, but has been noticed recently in Igumenshchev, Abramowicz, & Novikov (1998). We agree with the latter authors about the physical reason for this feature: in this narrow region the pressure increases with radius, owing to the rapid decrease of the disk thickness (cf. Fig. 2c) and the corresponding increase in the matter density; the fact that both the gravitational force and the pressure-gradient force are directed inwards results in super-Keplerian rotation. Indeed, super-Keplerian rotation often occurs in a region where a sharp transition in qualitative properties of the accretion flow takes place, e.g. at the cusp-like inner edge of thick disks (Abramowicz, Calvani, & Nobili 1980). NKH97 also noticed super-Keplerian rotation, but on the inside, i.e. in a narrow region just outside the sonic point for flows with low values of $`\alpha `$. However, the way in which we find the super-Keplerian rotation in the outermost region of the ADAF is different from that of Igumenshchev, Abramowicz, & Novikov (1998). Those authors still assumed some outer boundary condition, i.e. at $`R_{out}`$ the angular velocity of the flow was fixed as a fraction of the Keplerian angular velocity, although they showed that the presence of super-Keplerian rotation did not depend on the particular value taken for $`\mathrm{\Omega }(R_{out})`$; we do not assume any outer boundary conditions, and the super-Keplerian rotation is found naturally by the outward integration of the equations. Figure 2c shows the relative thickness of the disk $`H/R`$ over the full range of radii. It is seen that the disk is thin in its inner region, is slim in the middle region ($`H/R`$ is close to 1), and that the thickness of the disk decreases rapidly with radius in the outer region. Beyond the outer boundary $`R_{out}`$ (the filled square) where the thin disk condition (18) is fulfilled, $`H`$ tends to zero, and the outward integration has to be stopped.
It should be stressed that the way of finding ADAF-thin disk solutions in NKH97 is very restrictive. Adopting five boundary conditions (14)-(18) for three differential equations (7), (10), and (11) enabled those authors to determine the two unknowns $`R_s`$ and $`j`$ as eigenvalues. However, the range of variation of the value of $`R_{out}`$ is very large (cf. Fig. 3); perhaps because of this uncertainty NKH97 constructed only a few global solutions, with $`R_{out}=10^6R_s`$. We take $`R_s`$ and $`j`$ as free parameters instead; $`R_{out}`$ is then found by the meeting of conditions (17) and (18) in the outward integration of the equations, rather than being assumed. Technically, our approach is equivalent to that of NKH97, because the three quantities $`R_s`$, $`j`$, and $`R_{out}`$ in ADAF-thin disk solutions are eigenvalues of one another: if one of them is correctly given, then the other two are fixed accordingly and have to be found. Figure 1 has shown such a unique correspondence between $`R_s`$ and $`j`$ (the solid line AB). In figure 3 we present the unique correspondence between $`R_s`$ and $`R_{out}`$ for three different values of $`\alpha `$; the middle line AB is for $`\alpha =0.01`$ and corresponds to the solid line AB in figure 1. It is obvious from figures 1 and 3 that the acceptable range of $`R_{out}`$ is very large, while those of $`R_s`$ and $`j`$ are very small. Thus our approach has the advantage that, by varying the values of $`R_s`$ and $`j`$, we are able to find all the ADAF-thin disk solutions, not only some examples as in NKH97.
### 3.2 ADAF-thick disk solutions
For parameters $`R_s`$ and $`j`$ located in region V of figure 1, i.e. the left and lower part of the parameter space, global solutions are of another class: an ADAF solution connects outwards to a thick disk solution. Figures 4a-c provide an example of this class of solution, for which $`R_s=2.29R_g`$, $`j=1.74(cR_g)`$. Figure 4a shows the radial variation of the radial velocity $`\upsilon `$ (solid line) and of the sound speed $`c_s`$ (dashed line); the crossing point (the filled circle) of the two lines is the sonic point $`R_s`$. Figure 4b draws the radial distribution of the flow’s angular momentum $`l`$ (solid line) and that of the Keplerian angular momentum $`l_k`$ (dashed line). The crossing point (the filled square) of the two lines defines the outer boundary of the ADAF solution, $`R_{out}=10^{1.837}R_g`$, at which condition (17) is fulfilled. Super-Keplerian rotation is also present outside $`R_{out}`$, again because the pressure gradient is directed inwards. However, the increase of pressure with radius is not due to a sharp decrease of the disk’s thickness and the corresponding increase in the matter density, as for ADAF-thin disk solutions; rather, it is due to the increase of the sound speed (i.e. temperature), as reflected by the increase of the disk’s thickness (cf. Fig. 4c and eq. (3)).
The most noticeable feature of this class of solution is that, as seen from figure 4c, condition (18) is not fulfilled, i.e. the disk is not thin at the outer boundary of the ADAF solution $`R_{out}`$ (the filled square), and outside $`R_{out}`$ the relative thickness $`H/R`$ increases with radius and gets larger than 1. We note that, unlike NKH97, many previous works which assumed hydrostatic equilibrium in the vertical direction of the flow adopted only condition (17), i.e. Keplerian rotation, to determine the outer boundary $`R_{out}`$, while the shape (or the temperature) of the flow at $`R_{out}`$ was not reported (e.g. Chakrabarti 1996a and references therein). Igumenshchev, Abramowicz, & Novikov (1998) ruled out solutions with $`H/R\gg 1`$ for the whole range of radii, $`R_{in}<R<R_{out}`$, because those solutions did not satisfy the slim disk condition $`H/R<1`$ (note however that those solutions are different from ours: in our solutions the disk is thin or slim for $`R<R_{out}`$, and $`H/R>1`$ only for $`R\geq R_{out}`$). To our knowledge, there have been only two papers, namely Peitz & Appl (1997, see its Figs. 5-10) and Popham & Gammie (1998, see its Figs. 1-4), that assumed the flow to be in vertical hydrostatic equilibrium and obtained results similar to ours: $`H/R`$ increases with $`R`$ and exceeds unity for large $`R`$. We think that from a mathematical point of view the problem of a flow in vertical hydrostatic equilibrium with $`H/R\gtrsim 1`$ needs a more sophisticated study; from a physical point of view, however, the possibility of the connection of an ADAF to a geometrically thick flow is worth considering. For example, Narayan et al. (1998) applied an ADAF model to the Galactic center. The model naturally accounted for the apparently contradictory observations of Sgr A\*, i.e. a moderate mass accretion rate and an extremely low bolometric luminosity. On the other hand, Melia (1992) showed that the accretion is likely initiated by a bow shock that dissipates most of the directed kinetic energy and heats the gas to a temperature as high as about $`7\times 10^6`$ K. Such an initial state of the accretion flow hardly corresponds to a cool, thin disk, while a hot, thick disk is probably a better alternative. It is also known that the model for quiescent soft X-ray transients by Narayan, McClintock, & Yi (1996), in which an ADAF is surrounded by an outer geometrically thin and optically thick disk, although very promising, encountered some difficulties. In particular, Lasota, Narayan, & Yi (1996) pointed out that the outer disk is too cold to account for the observed UV flux, and suggested that an optically thin disk could have a higher temperature. However, an optically thin and geometrically thin disk, known as the solution of Shapiro, Lightman, & Eardley (1976), is thermally unstable. Therefore, an optically thin, geometrically slim or thick disk could again be an alternative for the outer disk of the Narayan, McClintock, & Yi model.
### 3.3 $`\alpha `$-type solutions
Following Abramowicz & Chakrabarti (1990), a solution is called X type if it is complete, i.e. if it is able to join the black hole horizon to a large radius which can be considered as the outer boundary; a solution is called $`\alpha `$ type if it is incomplete, i.e. if it can extend to the black hole horizon only, or to the outer boundary only. The ADAF-thin disk solution and the ADAF-thick disk solution described in the above two subsections are both of X type. In the $`R_s`$-$`j`$ parameter space (Fig. 1) there are two regions in which solutions are of $`\alpha `$ type, namely region III (bounded by the solid line AB, the dashed line, the dotted line, and the dot-dot-dot-dashed line) and region IV (between the dotted line and the dot-dashed line). An example of the solution in region III, for which $`R_s=2.29R_g`$ and $`j=1.76(cR_g)`$, is given by figures 5a-c. Figure 5a is again for $`\upsilon `$ vs. $`R`$ (solid line) and $`c_s`$ vs. $`R`$ (dashed line). The crossing point (the filled circle) of the two lines is the sonic point $`R_s`$. The inward part of the solution extends until the black hole horizon. In the outward integration the two lines meet again at $`R=10^{2.02}R_g`$ (marked by a filled triangle). However, this is not another physical sonic point but a singular point of differential equation (13): only condition (14) is satisfied there while condition (15) is not, the derivative $`d\upsilon /dR`$ tends to infinity, and the outward integration cannot be performed any further. Such a solution is obviously of $`\alpha `$ type. Figure 5b shows that the angular momentum $`l`$ (solid line) is sub-Keplerian for the entire range of the solution (the dashed line is for the Keplerian angular momentum $`l_k`$); and figure 5c shows that the relative thickness $`H/R`$ remains smaller than 1.
The $`\alpha `$-type solution in region IV is somewhat different. Figures 6a-c provide an example with $`R_s=30R_g`$, $`j=1.4(cR_g)`$. The solution extends to $`R_{out}=10^{3.736}R_g`$ (the crossing point of the solid line for $`l`$ and the dashed line for $`l_k`$ in Fig. 6b, marked by a filled square) defined by condition (17), but not to the black hole horizon, just the opposite of the solution in region III. After passing through the sonic point $`R_s`$ (the crossing point of the solid line for $`\upsilon `$ and the dashed line for $`c_s`$ in Fig. 6a, marked by a filled circle), the inward integration has to be stopped at $`R=4.49R_g`$ (the other meeting point of the two lines in Fig. 6a, marked by a filled triangle), also because of the singularity, i.e. an infinite derivative $`d\upsilon /dR`$ resulting from the fact that only condition (14) is satisfied and condition (15) is not. The distribution of angular momentum has a super-Keplerian part outside $`R_{out}`$ (Fig. 6b). The relative thickness $`H/R`$ increases rapidly with radius and gets much larger than 1 for large $`R`$ (Fig. 6c).
A single $`\alpha `$-type solution is unacceptable, simply because it is not a global solution. We present this class of solution here for the following reasons. First, it does occupy certain regions in the $`R_s`$-$`j`$ parameter space; if it were ignored, the picture of the solution distribution in parameter space would be incomplete. Second, and perhaps more importantly, $`\alpha `$-type solutions are not totally useless. An $`\alpha `$-type solution could join another $`\alpha `$-type or an X-type solution via a standing shock, forming a global solution. Lu & Yuan (1998) obtained three types of shock-included global solutions for adiabatic black hole accretion flows, namely the $`\alpha `$-X, X-$`\alpha `$, and $`\alpha `$-$`\alpha `$ types. We are currently working on the problem of shock formation in ADAF around black holes. Our initial results show that an ADAF-thick disk solution (X type) in region V of figure 1 may join an $`\alpha `$-type solution in region III via a standing shock, forming an $`\alpha `$-X type shock-included global solution. However, the parameter range of $`\alpha `$-type solutions which imply the existence of shocks is very small, much smaller than region III itself. For ADAF-thin disk solutions (the solid line AB in figure 1) there are indeed no standing shocks at all, i.e. the conclusion made in NKH97 on this point is correct. We shall discuss the problem of shocks in ADAF in detail in a subsequent paper.
### 3.4 Solutions for inviscid flows
For comparison we show in figure 7 the distribution of solutions for inviscid flows (the viscosity parameter $`\alpha =0`$) in the $`R_s`$-$`j`$ parameter space. The parameter $`j`$ is now understood as the constant specific angular momentum of the flow. The other parameter $`R_s`$ could be equivalently replaced by the constant specific total energy if the flow is further assumed to be adiabatic. It is seen that figure 7 is qualitatively similar to figure 1. The whole parameter space is divided into five regions. No solution exists in region I, because a too high centrifugal barrier prevents the flow from accreting; nor in region II, because the sonic point is of vortex type (not spiral type), which is also unphysical. Region V is for thick disk solutions, and regions III and IV are for $`\alpha `$-type solutions. However, in these solutions no outer boundary defined by condition (17) can be found: for large radii the flow’s angular momentum is always sub-Keplerian. A remarkable feature of inviscid flows is that for them the ADAF-thin disk solutions of NKH97 do not exist at all. In fact, as the value of $`\alpha `$ decreases, the range of parameters for ADAF-thin disk solutions shrinks, i.e. the solid line AB in figure 1 shortens. When $`\alpha =0`$ (figure 7), the line AB shrinks to a single point at $`R_s=2R_g`$ and $`j=2(cR_g)`$, and the corresponding $`R_{out}`$ does not exist; in other words, it becomes infinite. All these results agree with our expectation that there should be similarities between the dynamical behavior of ADAF and that of adiabatic inviscid flows, and that the behavior of viscous flows should change continuously into that of inviscid flows as the value of $`\alpha `$ decreases continuously.
## 4 Summary
In this paper we construct global solutions describing ADAF around black holes. The major points we wish to stress are as follows. (i) We recover the ADAF-thin disk solution as a whole class, not only some examples as in NKH97. (ii) We obtain a new class of global solution, namely the ADAF-thick disk solution, and suggest possible applications of this class of solution to some astrophysical systems. (iii) We present the $`\alpha `$-type solution as still another class of solution. Most of $`\alpha `$-type solutions are unacceptable because they are not global solutions, but some of them could be involved as part of global solutions with shocks. (iv) We support the conclusion of NKH97 that no standing shocks are relevant to ADAF-thin disk solutions. Global solutions with shocks, if they ever exist, could only result from combination of $`\alpha `$-type solutions and ADAF-thick disk solutions. (v) We show the solution distribution for inviscid flows in the parameter space, which agrees with the view that there should be some similarities between ADAF and adiabatic flows, as well as some continuity from the properties of viscous flows to those of inviscid ones.
We thank S. Kato and S. Mineshige for discussions, and R. Narayan and E. Quataert for showing us their numerical code which enabled us to understand the method and results of NKH97. We also thank M.A. Abramowicz for helpful suggestions to improve our manuscript.
# Considerations Concerning the Radiative Corrections to Muon Decay in the Fermi and Standard Theories
## I Introduction
The one-loop radiative corrections to $`\mu `$-decay in the Fermi theory were evaluated approximately four decades ago. These studies included the correction to the electron spectrum and muon lifetime $`\tau _\mu `$, as well as the momentum dependence of the asymmetry and the integrated asymmetry for polarized muons. Although the Fermi theory is, in general, non-renormalizable, the $`\mu `$-decay case is special. For vector and axial vector interactions of the charge retention order, which are the relevant couplings in the two-component theory of the neutrino and in the V-A theory, a theorem assures us that, to first order in $`G_\mu `$, but all orders in $`\alpha `$, the corrections to $`\mu `$-decay are convergent after mass and charge renormalization. A striking cancellation of mass singularities in the correction to the lifetime and integrated asymmetries was found, which occurs also for the scalar, pseudoscalar, and tensor Fermi interactions, and the $`\beta `$-decay lifetime. These observations were one of the motivations in the derivation of the KLN theorem. Years later, in order to clarify a controversy that arose in the determination of the Fermi constant, the corrections of $`𝒪\left(\alpha ^2\mathrm{ln}(m_\mu /m_e)\right)`$ to $`\tau _\mu `$ were obtained.
Very recently, in an important theoretical development, van Ritbergen and Stuart completed the evaluation of the $`𝒪\left(\alpha ^2\right)`$ corrections to $`\tau _\mu `$ in the local V-A theory, in the limit $`m_e^2/m_\mu ^2\to 0`$. Their final answer can be expressed succinctly as
$$C\left(m_\mu \right)=\frac{\alpha (m_\mu )}{\pi }c_1+\left(\frac{\alpha (m_\mu )}{\pi }\right)^2c_2,$$
(1)
$`c_1={\displaystyle \frac{1}{2}}\left({\displaystyle \frac{25}{4}}-\pi ^2\right)`$ ; $`c_2=6.700.`$ (2)
In Eq.(1), $`\alpha (m_\mu )c_1/\pi `$ is the one-loop result and $`\alpha (m_\mu )`$ is a running coupling defined by
$$\alpha (m_\mu )=\frac{\alpha }{1-\left(\frac{2\alpha }{3\pi }+\frac{\alpha ^2}{2\pi ^2}\right)\mathrm{ln}\frac{m_\mu }{m_e}}.$$
(3)
In Refs., the contribution of the last term in the denominator of Eq.(3) is separated out as $`\left(\alpha ^3/2\pi ^2\right)\mathrm{ln}(m_\mu /m_e)`$. The two expressions differ by only $`0.2`$ ppm, so we will employ the more succinct expression of Eq.(3).
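As a numerical aside (our own check, not part of Refs.), Eq.(3) is easily evaluated; the short Python routine below, with the inputs quoted in Section 2, gives $`\alpha (m_\mu )\approx 1/135.90`$:

```python
import math

# Inputs quoted in the text: the fine structure constant and the mass ratio.
ALPHA_EM = 1.0 / 137.03599959
M_RATIO = 206.768273          # m_mu / m_e

def alpha_run(log_ratio, alpha0=ALPHA_EM):
    """Running coupling of Eq.(3)/(10), with log_ratio = ln(mu/m_e)."""
    k = 2.0 * alpha0 / (3.0 * math.pi) + alpha0 ** 2 / (2.0 * math.pi ** 2)
    return alpha0 / (1.0 - k * log_ratio)

alpha_mmu = alpha_run(math.log(M_RATIO))   # alpha(m_mu) ~ 1/135.90
```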
In Section 2, we apply the traditional FAC, PMS, and BLM optimization schemes to the expansion of Eq.(1). We use those methods to estimate the third order coefficient and the theoretical error in Eq.(1). In Section 3 we turn our attention to the radiative corrections to muon decay in the Standard Model (SM). Using arguments based on effective field theory and simple considerations of Feynman diagrams, we show that, if terms of $`𝒪(\alpha m_\mu ^2/M_W^2)`$ are neglected, the radiative corrections factor out into $`[1+C(m_e)]`$ (defined in Section 2) and the electroweak amplitude $`g^2/(1-\mathrm{\Delta }r)`$, which are separately scale-independent. We use the results to discuss the application of the Fermi constant $`G_F`$ in electroweak physics. Section 4 presents our conclusions.
## II Application of Optimization Methods
As is well known, $`\alpha (\mu )`$ does not run below the $`m_e`$ scale and $`\alpha (m_e)`$ is identified with the conventional fine structure constant, $`\alpha ^{-1}=137.03599959(38)(13)`$. Setting then $`\alpha =\alpha (m_e)`$ in Eq.(3) and replacing $`m_e\to \mu `$, we have
$$\alpha (m_\mu )=\frac{\alpha (\mu )}{1-\left(\frac{2\alpha (\mu )}{3\pi }+\frac{\alpha ^2(\mu )}{2\pi ^2}\right)\mathrm{ln}\frac{m_\mu }{\mu }}.$$
(4)
Inserting this expression into Eq.(1), expanding, and truncating at second order, we find
$$C(\mu )=\frac{\alpha (\mu )}{\pi }c_1+\left(\frac{\alpha (\mu )}{\pi }\right)^2\left[c_2+\frac{2}{3}c_1\mathrm{ln}\frac{m_\mu }{\mu }\right].$$
(5)
The method of Fastest Apparent Convergence (FAC) chooses the scale $`\mu _{\text{FAC}}`$ in such a manner that the NLO term vanishes. Thus, we have $`\mathrm{ln}(m_\mu /\mu _{\text{FAC}})=-3c_2/(2c_1)`$, which leads to $`\mu _{\text{FAC}}=0.801m_e`$. As $`\alpha (\mu )`$ does not run below $`\mu =m_e`$, we interpret this result as $`\mu _{\text{FAC}}=m_e`$. Eq.(5) becomes
$$C(\mu _{\text{FAC}})=C(m_e)=\frac{\alpha }{\pi }c_1+\left(\frac{\alpha }{\pi }\right)^2\left[c_2+\frac{2}{3}c_1\mathrm{ln}\frac{m_\mu }{m_e}\right],$$
(6)
an expansion in terms of the fine structure constant $`\alpha `$. The $`𝒪`$$`\left((\alpha /\pi )^2\mathrm{ln}(m_\mu /m_e)\right)`$ term coincides with the expression found in Ref.. Recalling $`m_\mu /m_e=206.768273(24)`$, we see that the two terms between square brackets nearly cancel each other and Eq.(6) becomes
$$C(\mu _{\text{FAC}})=C(m_e)=\frac{\alpha }{\pi }c_1+\left(\frac{\alpha }{\pi }\right)^20.26724.$$
(7)
The Principle of Minimal Sensitivity (PMS) identifies $`\mu _{\text{PMS}}`$ with the stationary point of $`C(\mu )`$. Applying $`\mu (d/d\mu )`$ to $`C(\mu )`$, and recalling the renormalization group equation (RGE)
$$\mu \frac{d}{d\mu }\alpha (\mu )=\frac{2}{3\pi }\alpha ^2(\mu )+\frac{\alpha ^3(\mu )}{2\pi ^2},$$
(8)
we obtain
$$\mu \frac{d}{d\mu }C(\mu )=\frac{4}{3}\left(\frac{\alpha (\mu )}{\pi }\right)^3\left[c_2+\frac{3}{8}c_1+\frac{2}{3}c_1\mathrm{ln}\frac{m_\mu }{\mu }\right].$$
(9)
Thus, the stationary point is given by
$$\mathrm{ln}\frac{\mu _{\text{PMS}}}{m_\mu }=\frac{3}{2}\left(\frac{3}{8}+\frac{c_2}{c_1}\right),$$
which leads to $`\mu _{\text{PMS}}=1.40636m_e`$. We note that Eq.(8) is consistent with
$$\alpha (\mu )=\frac{\alpha }{1-\left(\frac{2\alpha }{3\pi }+\frac{\alpha ^2}{2\pi ^2}\right)\mathrm{ln}\frac{\mu }{m_e}},$$
(10)
an expression that can be obtained by replacing $`m_\mu \to \mu `$, $`\mu \to m_e`$ in Eq.(4), and will be useful in the following discussion. Inserting $`\mu =\mu _{\text{PMS}}`$ in Eq.(5), we obtain
$$C(\mu _{\text{PMS}})=\frac{\alpha (\mu _{\text{PMS}})c_1}{\pi }-\left(\frac{\alpha (\mu _{\text{PMS}})}{\pi }\right)^2\frac{3}{8}c_1,$$
(11)
or
$$C(\mu _{\text{PMS}})=\frac{\alpha (\mu _{\text{PMS}})c_1}{\pi }+\left(\frac{\alpha (\mu _{\text{PMS}})}{\pi }\right)^20.67868.$$
(12)
The BLM method chooses $`\mu _{\text{BLM}}`$ so as to cancel the terms in $`c_2`$ proportional to $`n_f`$, the number of light fermions. In the $`\mu `$-decay case, there is only one light fermion, the electron. From Ref. we learn that the electron loop contribution to $`c_2`$ (virtual loops and pair creation) is $`3.22034`$. Splitting $`c_2=3.22034+3.47966`$, one chooses $`\mu _{\text{BLM}}`$ to cancel the first contribution in the second order coefficient of Eq.(5). Thus, $`\mathrm{ln}(m_\mu /\mu _{\text{BLM}})=-(3/2)(3.22034/c_1)`$, which leads to $`\mu _{\text{BLM}}=m_\mu /14.4267=14.3323m_e`$. We note that this is very close to $`\sqrt{m_em_\mu }=14.38m_e`$, the geometric average of the two masses. The BLM expansion is
$$C(\mu _{\text{BLM}})=\frac{\alpha (\mu _{\text{BLM}})c_1}{\pi }+\left(\frac{\alpha (\mu _{\text{BLM}})}{\pi }\right)^23.47966.$$
(13)
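Continuing the numerical sketch of the Introduction, the three optimized scales can be reproduced directly from the formulas quoted above (values in units of $`m_e`$):

```python
C1 = 0.5 * (25.0 / 4.0 - math.pi ** 2)   # one-loop coefficient, Eq.(2)
C2 = 6.700                                # two-loop coefficient, Eq.(2)
C2_E = 3.22034                            # electron-loop part of c2 (BLM split)

mu_fac = M_RATIO * math.exp(1.5 * C2 / C1)            # ~ 0.801 m_e
mu_pms = M_RATIO * math.exp(1.5 * (0.375 + C2 / C1))  # ~ 1.406 m_e
mu_blm = M_RATIO * math.exp(1.5 * C2_E / C1)          # ~ 14.33 m_e
```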
The FAC, PMS, and BLM optimization expressions of Eqs.(7, 12, 13) can be used to estimate the coefficient $`c_3`$ of $`\left(\alpha (m_\mu )/\pi \right)^3`$ in the expansion of Eq.(1). It is well known that such estimations are not particularly useful if contributions of an important class of new diagrams open up at the relevant level. For instance, in the radiative corrections to $`g2`$, large contributions due to light-by-light scattering open up already in $`𝒪\left(\alpha ^3\right)`$. In $`\mu `$-decay we encounter a more felicitous situation, as light-by-light scattering contributes only in $`𝒪`$$`\left(\alpha ^4\right)`$. Writing the three optimized expansions in the generic form
$$C\left(\mu ^{\prime }\right)=\frac{\alpha (\mu ^{\prime })}{\pi }c_1+\left(\frac{\alpha (\mu ^{\prime })}{\pi }\right)^2c_2^{\prime },$$
(14)
and expressing $`\alpha (\mu ^{\prime })`$ in terms of $`\alpha (m_\mu )`$ by replacing $`m_\mu \to \mu ^{\prime }`$, $`\mu \to m_\mu `$ in Eq.(4), we find the estimation
$$\left(c_3\right)_{est}=\frac{4}{9}c_1\mathrm{ln}^2\frac{\mu ^{\prime }}{m_\mu }+\left(\frac{c_1}{2}+\frac{4}{3}c_2^{\prime }\right)\mathrm{ln}\frac{\mu ^{\prime }}{m_\mu }.$$
(15)
Using the values of $`\mu ^{\prime }`$ and $`c_2^{\prime }`$ obtained above, Eq.(15) leads to
$$\left(c_3\right)_{est}=\{\begin{array}{cc}-19.9\hfill & \text{(FAC)}\hfill \\ -20.0\hfill & \text{(PMS)}\hfill \\ -15.7\hfill & \text{(BLM)}\hfill \end{array}$$
(16)
We see that the three estimates are quite close.
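These numbers follow at once from Eq.(15); continuing the sketch:

```python
def c3_est(mu_over_mmu, c2_prime):
    """Third-order estimate of Eq.(15); mu_over_mmu = mu'/m_mu."""
    L = math.log(mu_over_mmu)
    return (4.0 / 9.0) * C1 * L ** 2 + (C1 / 2.0 + (4.0 / 3.0) * c2_prime) * L

# FAC: c2' = 0; PMS: c2' = -(3/8) c1; BLM: c2' = 3.47966
# -> roughly -19.9, -20.0 and -15.7, as in Eq.(16)
estimates = [c3_est(mu_fac / M_RATIO, 0.0),
             c3_est(mu_pms / M_RATIO, -0.375 * C1),
             c3_est(mu_blm / M_RATIO, 3.47966)]
```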
It is also interesting to compare the numerical results of the optimized expansions among themselves and with that of Eq.(1). Using the values of $`\mu _{\text{PMS}}`$ and $`\mu _{\text{BLM}}`$ found before, and evaluating $`\alpha (\mu ^{\prime })`$ ($`\mu ^{\prime }=\mu _{\text{PMS}},\mu _{\text{BLM}}`$) via Eq.(10), the expansions of Eqs.(7,12,13,1) lead to
$`C(m_e)`$ $`=`$ $`-4.202402\times 10^{-3}(FAC),`$ (17)
$`C(\mu _{\text{PMS}})`$ $`=`$ $`-4.202403\times 10^{-3}(PMS),`$ (18)
$`C(\mu _{\text{BLM}})`$ $`=`$ $`-4.202348\times 10^{-3}(BLM),`$ (19)
$`C(m_\mu )`$ $`=`$ $`-4.202147\times 10^{-3}(\alpha (m_\mu )\text{exp.}).`$ (20)
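These four numbers are reproduced by evaluating Eq.(5) with the running coupling; continuing the sketch:

```python
def C_mu(mu_over_me):
    """Truncated expansion of Eq.(5) at the scale mu (in units of m_e)."""
    a = alpha_run(math.log(mu_over_me)) / math.pi
    bracket = C2 + (2.0 / 3.0) * C1 * math.log(M_RATIO / mu_over_me)
    return a * C1 + a ** 2 * bracket

for mu in (1.0, mu_pms, mu_blm, M_RATIO):
    print(C_mu(mu))   # -4.2024e-3, -4.2024e-3, -4.2023e-3, -4.2021e-3
```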
We see that the three optimization methods give very close results, with a maximum absolute difference of $`5.4\times 10^{-8}`$. As the important mass scales for $`\mu `$ decay in the Fermi theory are $`m_e`$ and $`m_\mu `$, it is natural to consider $`m_e<\mu <m_\mu `$ as the relevant range. We may therefore take the maximum difference among the four evaluations above as an estimate of the theoretical error. This is given by the difference $`C(m_\mu )-C(\mu _{\text{PMS}})=2.6\times 10^{-7}`$, which is equivalent to the estimated third order contribution $`-20.0[\alpha (m_\mu )/\pi ]^3`$ in the PMS estimate \[Cf.Eq.(16)\]. This leads to an error of $`1.3\times 10^{-7}`$ in the determination of $`G_F`$, which is very close to the estimate given in Ref. from the consideration of the known leading third order logarithm in $`C(m_e)`$. We also recall that the one-loop corrections proportional to powers and logarithms of $`m_e^2/m_\mu ^2`$ can be obtained by combining Ref. and Ref., and have been reported in Refs.. When the tree-level $`\mu `$ decay phase space is factored out, the leading correction of this type is
$$\frac{\alpha }{\pi }\frac{m_e^2}{m_\mu ^2}\left[24\mathrm{ln}\frac{m_\mu }{m_e}-9-4\pi ^2\right]=4.3\times 10^{-6}.$$
(21)
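Numerically (continuing the sketch above):

```python
x = 1.0 / M_RATIO ** 2                                   # m_e^2 / m_mu^2
mass_corr = (ALPHA_EM / math.pi) * x * (24.0 * math.log(M_RATIO)
                                        - 9.0 - 4.0 * math.pi ** 2)
# mass_corr ~ 4.3e-6, to be added to the C values of Eqs.(17)-(20)
```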
In very high precision calculations, Eq.(21) should be added to Eqs.(17-20) so that the expansions, rounded to four decimals, become
$`C(m_e)=C(\mu _{\text{PMS}})`$ $`=`$ $`-4.1981\times 10^{-3},`$ (22)
$`C(\mu _{\text{BLM}})`$ $`=`$ $`-4.1980\times 10^{-3},`$ (23)
$`C(m_\mu )`$ $`=`$ $`-4.1978\times 10^{-3}.`$ (24)
The error due to the uncalculated terms of $`𝒪((\alpha /\pi )^2(m_e^2/m_\mu ^2)\mathrm{ln}^2m_\mu /m_e)`$ has been estimated to be a few times $`10^{-7}`$. This seems quite reasonable. In fact, we note that if the $`𝒪((\alpha /\pi )^2(m_e^2/m_\mu ^2))`$ contributions had the same magnitude relative to the leading $`𝒪(\alpha ^2)`$ term $`(\alpha (m_\mu )/\pi )^2c_2`$ in Eq.(1) as the $`𝒪((\alpha /\pi )(m_e^2/m_\mu ^2))`$ contributions bear with respect to $`(\alpha (m_\mu )/\pi )c_1`$, their contribution would be even smaller, a few times $`10^{-8}`$.
## III Factorization of the Radiative Corrections to Muon Decay in the SM
In the on-shell renormalization scheme of the SM, it is customary to write $`1/\tau _\mu `$ in the form
$`{\displaystyle \frac{1}{\tau _\mu }}`$ $`=`$ $`{\displaystyle \frac{P}{32}}{\displaystyle \frac{g^4}{M_W^4}}{\displaystyle \frac{[1+C(m_e)]}{(1-\mathrm{\Delta }r)^2}},`$ (25)
$`P`$ $`=`$ $`f\left({\displaystyle \frac{m_e^2}{m_\mu ^2}}\right)\left[1+{\displaystyle \frac{3}{5}}{\displaystyle \frac{m_\mu ^2}{M_W^2}}\right]{\displaystyle \frac{m_\mu ^5}{192\pi ^3}},`$ (26)
$`f(x)`$ $`=`$ $`1-8x-12x^2\mathrm{ln}x+8x^3-x^4,`$ (27)
where $`g^2=e^2/\mathrm{sin}^2\theta _w`$, $`\mathrm{sin}^2\theta _w=1-M_W^2/M_Z^2`$, $`M_W`$ and $`M_Z`$ are pole masses, $`C(m_e)`$ is the radiative correction of the Fermi V-A theory and $`\mathrm{\Delta }r`$ is the electroweak radiative correction introduced in Ref.. Before the work of Refs., only the terms involving $`c_1`$ in Eq.(6) were known and $`C(m_e)`$ was approximated by the expression
$$\frac{\alpha }{\pi }c_1\left[1+\frac{2\alpha }{3\pi }\mathrm{ln}\frac{m_\mu }{m_e}\right].$$
In Eq.(26), $`f(m_e^2/m_\mu ^2)m_\mu ^5/192\pi ^3`$ is a phase space factor and $`3m_\mu ^2/5M_W^2`$ is the tree-level contribution of the W-propagator.
If terms of $`𝒪(\alpha m_\mu ^2/M_W^2)`$ are neglected in the radiative corrections, and $`\mathrm{\Delta }r`$ is approximated by $`\mathrm{\Delta }r^{(1)}-e^4\text{Re}\mathrm{\Pi }_2^{(r)}(M_Z^2)`$, where $`\mathrm{\Delta }r^{(1)}`$ is the one-loop contribution to $`\mathrm{\Delta }r`$, and $`e^4\mathrm{\Pi }_2^{(r)}(M_Z^2)`$ is the two-loop contribution to the renormalized vacuum polarization function at $`q^2=M_Z^2`$, it was shown in Ref. that Eq.(25) contains all the one-loop corrections, as well as all two-loop contributions involving mass singularities (i.e. logarithms $`\mathrm{ln}m`$, where $`m`$ is a generic light fermion mass). It also contains all the terms of $`𝒪\left[\left(\alpha \mathrm{ln}(M_Z/m)\right)^n\right]`$. In fact, if so desired, all these logarithms can be absorbed by expressing $`g^4/(1-\mathrm{\Delta }r)^2`$ in terms of the running coupling $`\alpha (M_Z)=\alpha /(1-\mathrm{\Delta }\alpha )`$, where $`\mathrm{\Delta }\alpha =-\text{Re}\mathrm{\Pi }^{(r)}(M_Z^2)`$. This result follows from the observation that, in $`g^4/(1-\mathrm{\Delta }r)^2`$, such logarithms arise from the renormalization of $`\alpha _0`$ in terms of $`\alpha `$.
We now show that, if contributions of $`𝒪`$$`(\alpha m_\mu ^2/M_W^2)`$ are neglected in the radiative corrections, the factorization displayed in Eq.(25), involving the QED corrections of the Fermi theory and the electroweak factor $`g^4/(1-\mathrm{\Delta }r)^2`$, is valid to all orders in perturbation theory. This applies not only to the corrections to the lifetime, but also to those affecting the electron spectrum, as well as the momentum dependence of the asymmetry and the integrated asymmetry in the case of polarized muons. We present two arguments, one based on the effective field theory approach, the other involving a simple discussion of higher order Feynman diagrams.
If contributions of $`𝒪(\alpha m_\mu ^2/M_W^2)`$ are neglected in the radiative corrections and the tree-level term $`3m_\mu ^2/5M_W^2`$ in Eq.(26) is for the moment disregarded, the effective field theory at the muon mass scale is the local V-A four-fermion Lagrangian density
$$\mathcal{L}=\frac{G_F}{\sqrt{2}}\left[\overline{\mathrm{\Psi }}_e\gamma ^\mu (1-\gamma ^5)\mathrm{\Psi }_{\nu _e}\right]\left[\overline{\mathrm{\Psi }}_{\nu _\mu }\gamma _\mu (1-\gamma ^5)\mathrm{\Psi }_\mu \right],$$
plus QED, plus QCD. Therefore, one can systematically evaluate the corrections to the spectrum, lifetime, and asymmetry in muon decay on the basis of this effective Lagrangian. This procedure results in the usual expressions involving $`G_F`$ and the radiative corrections of the Fermi V-A theory. As the latter are convergent to all orders in $`\alpha `$, there is no need to cancel ultraviolet divergences and, therefore, in their expression, there is no reference to the high mass scale $`M_Z`$ of the underlying theory. As mentioned in the Introduction, these corrections are known at the one-loop level in the case of the electron spectrum and asymmetry, and have now been evaluated at the two-loop level in the case of $`1/\tau _\mu `$. In particular, one finds
$$\frac{1}{\tau _\mu }=\frac{G_F^2m_\mu ^5}{192\pi ^3}f(m_e^2/m_\mu ^2)\left[1+C(m_e)\right],$$
(28)
where $`C(m_e)`$ is the two-loop correction discussed in Sections 1 and 2. This leads to Eq.(25) provided we identify
$$\frac{G_F}{\sqrt{2}}=\frac{g^2}{8M_W^2}\frac{1+\frac{3}{10}\frac{m_\mu ^2}{M_W^2}}{(1-\mathrm{\Delta }r)}.$$
(29)
Eq.(29) has a very simple interpretation: it is the matching relation that expresses the coupling constant of the effective, low energy theory in terms of the coupling constants and radiative corrections of the underlying theory. It is very important to note that $`C(m_e)`$ and $`g^2/(1-\mathrm{\Delta }r)`$ are separately scale-independent quantities. The $`\mu `$-independence of $`C(m_e)`$ follows from the fact that, when expressed in terms of physical parameters such as $`\alpha `$, $`m_e`$, $`m_\mu `$, it is convergent to all orders of perturbation theory. The $`\mu `$-independence of $`g^2/(1-\mathrm{\Delta }r)`$ follows then from the fact that it can be expressed in terms of physical observables via Eqs.(28,29). As explained above, in the on-shell scheme of renormalization $`g^2`$ is defined in terms of $`e^2`$, $`M_W`$ and $`M_Z`$, which are physical quantities and, therefore, $`\mu `$-independent. It follows that $`\mathrm{\Delta }r`$ is $`\mu `$-independent, an important property that has been verified at the one-loop level, and through terms of $`𝒪(g^4M_t^2/M_W^2)`$ at the two-loop level.
The same conclusion, concerning the factorization of QED and electroweak corrections when terms of $`𝒪(\alpha m_\mu ^2/M_W^2)`$ are neglected, can be reached by a simple analysis of Feynman diagrams. Consider, for instance, a two-loop diagram involving a photon of virtual momentum $`k`$ attached to external $`\mu `$ and/or $`e`$ lines and a second loop that includes heavy particles. The relevant momenta in the QED correction of the Fermi theory are $`|k^2|\lesssim m_\mu ^2`$. If we neglect terms of $`𝒪`$$`(\alpha m_\mu ^2/M_W^2)`$, we can set $`k=0`$ in the heavy-loop integration and the two loops factor out. We also note that when $`|k^2|`$ becomes a few times larger than $`m_\mu ^2`$, we can set $`p_e=0`$, where $`p_e`$ is the electron four-momentum. All reference to $`p_e`$ is lost and we see that such a domain of virtual momenta does not contribute to the corrections to the electron spectrum. Thus, the latter involve essentially the domain $`|k^2|\lesssim m_\mu ^2`$, for which the factorization is valid. A second consideration, based on Feynman diagrams, refers to the factorization of QED corrections relative to the dominant electroweak corrections. These arise from the renormalization of the bare coupling constant $`g_0^2=e_0^2/s_0^2`$. Consider, for simplicity, the one-loop photonic diagrams in $`\mu `$-decay in the SM. Each diagram carries a factor $`e_0^2/s_0^2`$ associated with the virtual $`W`$ interchange. The renormalization of $`e_0^2/s_0^2`$ in such diagrams leads, in higher orders, to dominant electroweak corrections that clearly factorize with respect to the QED corrections. The same arguments can be extended to higher orders.
The crucial Eq.(29) can be simplified by introducing a coupling constant
$$G_\mu =\frac{G_F}{1+\frac{3m_\mu ^2}{10M_W^2}},$$
so that Eqs.(28,29) become
$`{\displaystyle \frac{1}{\tau _\mu }}`$ $`=`$ $`{\displaystyle \frac{G_\mu ^2m_\mu ^5}{192\pi ^3}}f(m_e^2/m_\mu ^2)\left[1+{\displaystyle \frac{3}{5}}{\displaystyle \frac{m_\mu ^2}{M_W^2}}\right]\left[1+C(m_e)\right],`$ (30)
$`{\displaystyle \frac{G_\mu }{\sqrt{2}}}`$ $`=`$ $`{\displaystyle \frac{g^2}{8M_W^2}}{\displaystyle \frac{1}{(1-\mathrm{\Delta }r)}}.`$ (31)
These are, in fact, the expressions most commonly used in the literature. The difference between $`G_\mu `$ and $`G_F`$ is $`0.5`$ ppm; it is completely negligible at present and would only be of marginal interest if the experimental error in $`\tau _\mu `$ were reduced by a factor of 10. Nonetheless, this factor is frequently included in the theoretical expressions, as it is a tree-level contribution from the SM. Eq.(30) has the convenient feature that all the kinematical factors of the SM calculation are separated out.
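For orientation, a rough numerical sketch of how Eq.(30) is inverted to extract $`G_\mu `$ from the measured lifetime is given below; the input values ($`\tau _\mu `$, $`\mathrm{}`$, the masses) are standard ones inserted by us for illustration, not numbers quoted in this paper:

```python
import math

# Standard-style inputs (our illustrative choices, not values from this paper)
HBAR = 6.582122e-22        # MeV s
TAU_MU = 2.196981e-6       # s
M_MU, M_E = 105.658389, 0.51099907   # MeV
M_W = 80.39e3              # MeV
C_QED = -4.1981e-3         # QED correction, Eq.(22)

x = (M_E / M_MU) ** 2
f = 1.0 - 8.0 * x - 12.0 * x ** 2 * math.log(x) + 8.0 * x ** 3 - x ** 4  # Eq.(27)
wprop = 1.0 + 0.6 * M_MU ** 2 / M_W ** 2   # W-propagator factor of Eq.(30)
rate = HBAR / TAU_MU                        # 1/tau_mu in MeV

G_mu = math.sqrt(rate * 192.0 * math.pi ** 3
                 / (M_MU ** 5 * f * wprop * (1.0 + C_QED)))   # in MeV^-2
# G_mu * 1e6 ~ 1.1664e-5 GeV^-2, consistent with the value quoted in the text
```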
The arguments carried out above can be applied mutatis mutandis to other renormalization frameworks, such as the $`\overline{MS}`$ scheme of Ref., in which the couplings in $`g^2/(1-\mathrm{\Delta }r)`$ are identified with running $`\overline{MS}`$ parameters evaluated at $`M_Z`$, masses are still interpreted as pole masses, and $`\mathrm{\Delta }r`$ is modified accordingly.
The authors of Ref. discuss a renormalization framework in which the couplings in $`g^2/(1-\mathrm{\Delta }r)`$ are also identified with running parameters evaluated at $`M_Z`$. In particular, they claim that, in order to ensure the consistency of the renormalization scheme, the QED correction $`C`$ of the local theory, when applied to the SM, must be evaluated via Eq.(1) with $`\alpha (m_\mu )\to \alpha _e(M_Z)`$ ($`\alpha _e(M_Z)`$ is the expression obtained from Eq.(3) by replacing $`m_\mu \to M_Z`$). This claim has a curious consequence: before $`c_2`$ was evaluated, the current experimental value of $`\tau _\mu `$ led, via Eq.(28), to $`G_F=1.16639(2)\times 10^{-5}/GeV^2`$. With the incorporation of the $`c_2`$ term in Eq.(1) one obtains $`G_F=1.16637(1)\times 10^{-5}/GeV^2`$. However, according to the new claim, the correction $`C`$ is to be evaluated differently when applied to the SM than in the Fermi theory, with the result that the relevant value is turned back to $`G_F=1.16639(1)\times 10^{-5}/GeV^2`$!
In order to clarify this strange situation, we make the following observations:
i) As explained above, both the QED correction $`C`$ and the electroweak amplitude $`g^2/(1-\mathrm{\Delta }r)`$ are separately $`\mu `$-independent quantities. One can certainly evaluate these corrections using running couplings. In this case, if the expressions are carried out to all orders, the $`\mu `$-dependence of the couplings is cancelled by the $`\mu `$-dependence of the radiative corrections. In actual calculations, due to the necessary truncation of the perturbative series, a $`\mu `$-dependence emerges. However, there is no reason to require that the same scale be employed in $`g^2/(1-\mathrm{\Delta }r)`$ and in $`C(\mu )`$. Indeed, the natural scale in the former is $`\mu \sim M_Z`$ and, in the latter, it is somewhere in the range $`m_e<\mu <m_\mu `$, with the optimization methods discussed in Section 2 favoring the lower values of $`\mu `$. For instance, part of $`C`$ consists of I.B. contributions. The photons here carry $`k^2=0`$ and it is clear that their natural coupling is $`\alpha (m_e)=\alpha `$, in sharp contrast with $`\alpha _e(M_Z)`$.
ii) Although the choice $`\mu =M_Z`$ for the QED correction is not natural, its implementation should be done by setting $`\mu =M_Z`$ in Eq.(5), rather than by replacing $`\alpha (m_\mu )\to \alpha _e(M_Z)`$ in Eq.(1). Using Eq.(5) and $`\alpha _e(M_Z)`$ to evaluate $`C(M_Z)`$, we find that $`G_F`$ would be decreased by only $`0.6`$ ppm rather than increased by $`20`$ ppm. In particular, we note that changing the scale of $`\alpha (\mu )`$ in Eq.(1), without modifying the second order coefficient according to Eq.(5), is inconsistent at the two-loop level.
In summary, our conclusion is that, subject to the neglect of terms of $`𝒪`$$`(\alpha m_\mu ^2/M_W^2)`$, one should apply the same QED corrections in the SM as in the Fermi theory. In particular, the value $`G_F=1.16637(1)\times 10^{-5}/GeV^2`$, obtained in the Fermi theory, is the one that should be applied in the analysis of the SM.
## IV Conclusions
In Section 2, we have applied the traditional FAC, PMS and BLM optimization methods to the radiative corrections to the muon lifetime in the Fermi V-A theory. As the corrections are convergent, one expects on general grounds the natural mass scale to lie in the range $`m_e<\mu <m_\mu `$. The analysis shows that the FAC and PMS approaches select a mass scale very close to $`m_e`$, while the BLM scheme leads to one very near the geometric average $`\sqrt{m_em_\mu }`$ of the two masses. The FAC and PMS methods provide nearly identical estimations of the third order coefficient in the $`\alpha (m_\mu )`$ expansion, while the BLM estimation is of the same sign and differs only by $`20\%`$ in magnitude. The three optimized expansions give very close results, with a maximum absolute difference of $`5.4\times 10^{-8}`$. We use the maximal difference between the optimized and $`\alpha (m_\mu )`$ expansions, which amounts to $`2.6\times 10^{-7}`$, as an estimate of the theoretical error due to the truncation of the perturbative series. This translates into an error of $`1.3\times 10^{-7}`$ in the determination of $`G_F`$, a result very close to estimates obtained in Ref. by different considerations. We also find that the expansion in powers of the conventional fine structure constant $`\alpha `$, which in this case is essentially equivalent to the FAC expansion, contains a very small second-order coefficient. In fact, the second-order term is about $`2900`$ times smaller than the first. Thus, it turns out that, by a curious cancellation of two-loop effects, the original one-loop calculation, expressed in terms of $`\alpha `$, had an error of only $`1.4\times 10^{-6}`$! Of course, this amusing historical fact could not be known before the work of Refs. was carried out.
In Section 3, we return to the radiative corrections to muon decay in the SM. Using arguments based on the effective field theory approach, as well as considerations involving higher order Feynman diagrams, we show that, subject to the neglect of terms of $`𝒪(\alpha m_\mu ^2/M_W^2)`$, the overall answer factorizes into two separately scale-independent corrections: those of the Fermi V-A theory and the electroweak amplitude $`g^2/(1-\mathrm{\Delta }r)`$. This important property applies to the corrections to the various observables, such as the electron spectrum, the electron asymmetry in the case of polarized muons, and the muon lifetime. Therefore, if running couplings are employed, the scales may be chosen judiciously and independently in both factors, with $`\mu \sim M_Z`$ being the natural scale in $`g^2/(1-\mathrm{\Delta }r)`$, and $`m_e<\mu <m_\mu `$ being the logical range in the corrections of the Fermi V-A theory. We reach the conclusion that, subject to the neglect of terms of $`𝒪(\alpha m_\mu ^2/M_W^2)`$, one should apply the same QED corrections to muon decay in the SM as in the Fermi theory. In particular, at variance with a claim presented in Ref., we find that the value $`G_F=1.16637(1)\times 10^{-5}/GeV^2`$, currently obtained in the Fermi theory, is the one that should be applied in the analysis of the SM.
We conclude our discussion with the following comments:
i) Although the shifts discussed in Refs. and this paper are numerically very small, it is clearly desirable to evaluate fundamental parameters such as $`G_F`$ as accurately as possible.
ii) At present, the Fermi V-A theory is generally viewed not as independent, but rather as an effective low-energy theory derived from the SM. The arguments presented in this paper make clear that the same constant $`G_F`$ that is precisely determined in the low-energy theory is, to a high degree of accuracy, the relevant parameter in the analysis of the more fundamental, underlying gauge theory.
## Acknowledgements
One of us (A.S.) would like to thank L.Dixon and M.Schaden for very useful observations. This research was supported in part by NSF Grant No. PHY-9722083.
# Imaging of the Shell Galaxies NGC 474 and NGC 7600, and Implications for their Formation
## 1 Introduction
In recent years it has become clear that elliptical galaxies are host to much fine structure. Perhaps the most dramatic fine structures observed are the edge-brightened arcs of stars surrounding and existing within the envelopes of some ellipticals. Originally classified as peculiar galaxies by Arp (1966), but later rediscovered and renamed ‘shell galaxies’ by Malin & Carter (1980), the exact nature and origin of these features still remains unclear.
The publication of Malin and Carter’s (1983) catalogue of 137 shell galaxies, found by eye in the SRC/ESO Southern Sky Survey, showed that about 17% of field ellipticals were surrounded by shells. Seitzer & Schweizer (1990) examined 74 elliptical and S0 galaxies, north of $`\delta =-15\mathrm{deg}`$, brighter than $`B_T=13.5`$, and nearby $`(cz<4000\mathrm{km}\mathrm{s}^{-1})`$. They found shells (referred to by Schweizer as ripples, but the name is interchangeable) in more than half of the field ellipticals and in one third of the field S0s. Both studies found the frequency of shells in spirals to be almost negligible. Clearly shells are common in elliptical and S0 galaxies. Shell formation and evolution needs to be understood if a complete picture of the formation and evolution of the galaxies they inhabit is to be obtained.
A variety of models have been proposed to form shells (see reviews by Prieur 1990 and Carter 1998). In the minor merger models (Quinn 1984; Dupraz & Combes 1986; Hernquist & Quinn 1988, 1989) shells are formed from the stars of the secondary galaxy, which merges within the primary. The shells are a result of the phase wrapping (low orbital angular momentum encounters) or spatial wrapping (high orbital angular momentum encounters) of stars belonging to the secondary. The nature of the secondary can also affect the resulting shell system. Not surprisingly then, combining the small cross section for nearly radial encounters with the many types of accretion candidates leads to a wide variety of observed shell morphologies. Heisler & White (1990) pointed out the shortcomings of the three-body approach, and used an accretion scenario to study the effect of tidal disruption on the secondary. They found that although the positions of the shells can be reproduced using test particles, in order to accurately reproduce the population of the shells, a self-consistent treatment of the disruption is essential. Dynamical friction against the primary galaxy is another important ingredient, and is believed to allow shells to be produced deep within the potential well. Dupraz & Combes (1987) investigated this analytically. They proposed that the stars least tightly bound to the companion (probably a large fraction of its mass) are pulled away by tidal forces during the first passage close to the main galaxy centre. Afterwards, the surviving companion is braked by friction as it orbits through the galaxy. More stars are ‘peeled’ off by tidal forces and launched with continuously lower and lower energies. The innermost shells develop at the end, when merging is almost complete.
Whether major disk-disk mergers can also form shells was asked by Schweizer (1980). Hernquist & Spergel (1992) reproduced shell-like structures from such a scenario and Hibbard & Mihos (1995) carried out major merger modelling of NGC 7252 and found that infalling tidal material may be able to create shells; however the simulation was not evolved to this latter stage. The discovery by Balcells (1997) of two very faint, opposed tidal tails, usually considered as a signature of a disk-disk merger, in the shell elliptical NGC 3656 lends some weight to the theory.
Thomson & Wright (1990) put forward a weak interaction model (WIM) as an alternative to the above merger models. The shells are produced from a one-armed spiral density wave induced within a dynamically cold ‘thick disk’ component of the primary galaxy during a fly-by interaction (not a merger) with a secondary companion; the shell-forming stars are initially on circular but not co-planar orbits. Although hot systems such as ellipticals would not be expected to contain such a population of stars on circular orbits, the effectiveness of the test particle model in reproducing the shell distributions of NGC 3923 and 0422-476 so closely, particularly the inner shells (Thomson 1991), lends credence to the WIM. However, Carter, Thomson & Hau (1998) have recently reported minor axis rotation in NGC 3923, which implies that the underlying potential is in fact prolate. Together with the high velocity dispersion of the galaxy, this argues against the existence of a thick disk as required by the WIM. Carter, Thomson & Hau (1998) include a full discussion of these points.
Photometry offers a simple yet effective way of constraining models of shell formation. Assuming that no significant star formation is induced by the one-armed spiral density wave, a natural prediction of the WIM is that the shell colours should follow the colour gradient of the host galaxy. Accretion of a cold disk galaxy in its early stages of merging should show a marked colour difference between the red host elliptical and the bluer companion. As the disk ages the colours will redden and the colour difference will be less pronounced. Other secondary candidates are low mass elliptical galaxies, which should still show a slight colour difference, as low mass ellipticals are on average bluer than larger E type galaxies (Visvanathan & Sandage 1977).
Previous photometric studies of shell galaxies (Carter, Allen & Malin 1982; Fort et al 1986; Schombert & Wallin 1987; Forbes et al 1995) have in general shown that the shells are similar in colour to, or slightly bluer than, the underlying galaxy, although uncertainties in the shell colours and geometry are large. The motivation of this study is to provide high signal-to-noise data capable of allowing clear distinctions to be drawn between formation models, and also allowing us to learn more about the stellar population(s) the shells contain.
The surface brightness of the shells, when compared to the underlying galaxy, should provide a further test of the models. In the WIM the shells are a density wave excited in a component of the galaxy, and will be an approximately constant fraction of the surface brightness of this component. It is more difficult to predict the expected distribution in a merger scenario, for reasons already mentioned. Through their investigation, Dupraz & Combes (1987) suggested that the outer shells should have the highest surface brightness. Isophotal data are more difficult to interpret. Bender et al (1988), Bender (1997) and references therein used the fourth cosine term ($`B_4`$) in the Fourier expansion of the azimuthal variation to quantify the isophotal deviations from pure ellipses. It is thought that pointy isophotes (positive $`B_4`$) are associated with discs (Nieto et al 1991), whereas galaxies with box-shaped isophotes (negative $`B_4`$) are thought to have been produced by a merger or interaction (Binney & Petrou 1985, Bender 1990). Shells are expected to produce boxy isophotes as they follow isopotential rather than isodensity surfaces, and isopotentials are always rounder (Dupraz & Combes 1986). It is interesting to ask whether the correlations between boxiness and other evidence for interactions can be entirely explained by the presence of shells, as suggested by Forbes & Thomson (1992). Changes in the semi-major axis position angle with radius can suggest triaxiality.
We present $`B-R`$ colours and surface brightnesses for NGC 474 and NGC 7600, and their shells. We also present an isophotal analysis for each galaxy, as well as an investigation into associated fine structure and environment. Observations are discussed in §2, data reduction in §3 and results in §4. We analyse our findings in the context of the shell formation models in §5 and summarise our conclusions in §6.
## 2 Observations
Broadband B images of NGC 474 were taken at the prime focus of the 4.2m William Herschel Telescope in December 1994. We used a 1024 $`\times `$ 1024 Tektronix thinned CCD, giving a field of 7.2 $`\times `$ 7.2 arcminutes with 0.42 arcsec/pixel. NGC 474 fitted comfortably onto the chip, although there is some slight clipping of the outer shell, lying 190 arcseconds from the galaxy centre. Broadband R and narrowband H$`\alpha `$ images of NGC 474, and B, R for NGC 7600, were obtained at the 2.5m Isaac Newton Telescope in November 1996. Again a Tektronix CCD was used at prime focus, giving a field of view of 11.5 $`\times `$ 10.6 arcminutes and an image scale of 0.59 arcsec/pixel. The seeing was $`\sim `$ 1.5 arcsec for each run and filter. The total integration times are listed in Table 1. Although included in the table, the H$`\alpha `$ image is not presented in this paper. When a scaled $`R`$-band image was subtracted from the H$`\alpha `$ image, no traces of H$`\alpha `$ emission were visible.
## 3 Data Reduction
Standard iraf data reduction techniques were used. After debiasing, our images were divided by a normalised flat field in the appropriate passband, each obtained from a number of exposures taken during twilight. Each passband’s individual images, aligned to within a fraction of a pixel using imalign, were then combined using imcombine. This employed a median algorithm, scaled to the mode, and was very efficient at removing the cosmic rays. The seeing was $`\sim `$ 1.5 arcsec for all images, and this, as well as the accuracy of the alignment, was checked by examining the FWHM of several stars in the frame before and after combining. When aligning the combined images with each other the same procedure was used as before, except in the case of NGC 474 where, as mentioned earlier, the B and R images were taken on different telescopes. It was therefore necessary to correct for different chip scales and pixel sizes. The packages geomap and geotran were used for this. geomap computes the transformation required to map the coordinate system of the reference image to that of the input image, using over a dozen stars common to each frame, while geotran performs the transformation. Total flux was conserved and the accuracy checked as before.
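To make the combine step concrete, the following is a minimal numpy sketch of a median combine with frames scaled to a common mode, in the spirit of what imcombine does here. The function names and the histogram-based mode estimator are our own assumptions, not the iraf implementation.

```python
import numpy as np

def mode_estimate(image, bins=2048):
    """Crude sky-mode estimate from the peak of the pixel histogram."""
    counts, edges = np.histogram(image, bins=bins)
    peak = np.argmax(counts)
    return 0.5 * (edges[peak] + edges[peak + 1])

def median_combine(frames):
    """Median-combine aligned frames after scaling each to a common mode;
    the median then rejects cosmic rays efficiently."""
    frames = np.asarray(frames, dtype=float)
    modes = np.array([mode_estimate(f) for f in frames])
    scaled = frames * (modes.mean() / modes)[:, None, None]
    return np.median(scaled, axis=0)
```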
### 3.1 Photometric Calibration
Calibration was performed using the phot package in iraf. The zero point for each night was found using a number of Landolt (1992) standard stars taken throughout the night; it was found that colour terms were not required. Mean extinction values provided by the Carlsberg Meridian Telescope, based on many observations of standard stars throughout the night, were used to correct for atmospheric extinction. Galactic extinction, although small, has been corrected for using the interstellar extinction law given by Rieke & Lebofsky (1985). The zero point was calculated for an individual galaxy frame and then recalibrated to the combined galaxy frame. The resulting zero points of the combined galaxy frames are displayed in Table 2. The difference in R zero points between NGC 474 and NGC 7600 is due to the different scaling used to produce the final composite galaxy images.
To test the reliability of our calibration, we have compared the surface brightness profiles of our galaxies with those published by other authors. Shown in Figure 1 is the $`B`$ surface brightness profile for NGC 474 taken from Schombert & Wallin (1987). Our results, overplotted as filled circles, are in very good agreement. For NGC 7600 we have compared our data with that of Penereiro et al. (1994). Shown in Figure 2 as open circles with error bars is the surface brightness of NGC 7600, transformed from the Gunn r photometry of Penereiro et al. into our Kron-Cousins R; our data are overplotted as filled circles, the symbol size being of the order of our error. We used the transformation given by Jorgensen (1994) to convert from the Gunn system to our Kron-Cousins photometric system. There is a zero-point difference of $`\sim `$ 0.2 mag between our photometry and that of Penereiro et al., but the agreement is satisfactory.
### 3.2 Modelling the Underlying Galaxy
Apart from rare cases such as the outer shells of NGC 474, shells are faint structures and difficult to detect, since they are superimposed upon the bright background of the galaxy. The galaxy signal also contaminates the signal from a shell. It is therefore necessary to produce and subtract off a model of the underlying galaxy. First, the contribution of the sky background was estimated from the mean value of several 10 $`\times `$ 10 pixel boxes placed in regions well away from the galaxy. Our isophotal analysis made use of the ellipse task within stsdas (the Space Telescope Science Data Analysis System, distributed by the Space Telescope Science Institute), which is based on the method described in detail by Jedrzejewski (1987). We used a similar technique of ellipse fitting to that described by Forbes & Thomson (1992). In the case of NGC 474, where the bright shells significantly affect the galaxy isophotes, a first model was run with the centre, position angle and ellipticity allowed to vary. A second model was then run with the central position fixed at the value found from the first run. The brightest 10% of pixels in each ellipse were excluded, to avoid the brightest points in the shells. NGC 7600 was found to produce a residual image of comparable quality whether or not the parameters were fixed, and for this galaxy all parameters were allowed to vary and no clipping was used.
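The clipping used for NGC 474 can be illustrated with a short sketch. The helper names below (`ellipse_samples`, `clipped_isophote_mean`) are hypothetical, and this is a simplification of a Jedrzejewski-style fit rather than the stsdas ellipse task itself: pixels are sampled along a fitted ellipse and the brightest 10% are excluded before the isophotal intensity is measured.

```python
import numpy as np

def ellipse_samples(image, x0, y0, sma, eps, pa, n=720):
    """Sample image values along an ellipse of semi-major axis sma,
    ellipticity eps = 1 - b/a, and position angle pa (radians)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    b = 1.0 - eps                      # axis ratio b/a
    r = sma * b / np.sqrt((b * np.cos(phi))**2 + np.sin(phi)**2)
    ix = np.clip(np.round(x0 + r * np.cos(phi + pa)).astype(int),
                 0, image.shape[1] - 1)
    iy = np.clip(np.round(y0 + r * np.sin(phi + pa)).astype(int),
                 0, image.shape[0] - 1)
    return image[iy, ix]

def clipped_isophote_mean(samples, clip_frac=0.10):
    """Mean isophotal intensity after excluding the brightest clip_frac
    of pixels, so bright shell segments do not bias the galaxy model."""
    cut = np.quantile(samples, 1.0 - clip_frac)
    return samples[samples <= cut].mean()
```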
### 3.3 Aperture Photometry of Shells
First we identified the regions of the residual map where shells are present. A 10 $`\times `$ 10 pixel box was placed on the shell, taking care to avoid foreground stars, globular clusters and any other unwanted features. On either side of the shell the task was repeated in order to deduce the local sky+galaxy background. The procedure was repeated along the shell, the number of apertures depending upon the size of the shell; typically this ranged from 5 to 15 independent boxes. The corresponding shell surface brightness $`(SB)`$ was thus deduced for each aperture using the formula:
$$SB=-2.5\mathrm{log}\left(\frac{counts}{itime}\right)+m_0$$
(1)
where $`counts`$ are per square arcsecond, $`itime`$ is the exposure time in seconds, and $`m_0`$ the zero point for that filter. This was repeated for the other filter using identical positions, and the colour indices calculated. After checking for possible colour gradients, the mean value for the shell was determined. Any remaining spurious features were re-examined, and those apertures lying further than two $`\sigma `$ away from the mean were rejected and the mean recalculated. The final shell surface brightnesses were calculated by averaging the surface brightnesses calculated for each aperture along the shell.
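As a minimal sketch of this aperture photometry (the function names are our own; the background subtraction and 2$`\sigma `$ rejection follow the text above):

```python
import numpy as np

def aperture_sb(net_counts_per_sq_arcsec, itime, m0):
    """Surface brightness (mag arcsec^-2) from background-subtracted counts."""
    return -2.5 * np.log10(net_counts_per_sq_arcsec / itime) + m0

def shell_aperture_sb(box, side1, side2, itime, m0):
    """One aperture in one filter: box is the mean counts per square
    arcsecond on the shell; side1/side2 are the flanking background boxes."""
    net = box - 0.5 * (side1 + side2)
    return aperture_sb(net, itime, m0)

# B-R for one aperture is then
# shell_aperture_sb(...B...) - shell_aperture_sb(...R...)

def clipped_mean_colour(colours, nsigma=2.0):
    """Mean shell colour after rejecting apertures more than nsigma
    from the mean, then averaging the remainder."""
    c = np.asarray(colours, dtype=float)
    keep = np.abs(c - c.mean()) <= nsigma * c.std()
    return c[keep].mean()
```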
#### 3.3.1 Error Analysis
The low surface brightness of the shells and the high signal from the galaxy and sky lead to some uncertainty in the final shell magnitude due to random measurement errors. We adopted the following procedure to calculate this uncertainty. The value $`\sigma _D`$ was calculated for one of the passbands:
$$\sigma _D=\sqrt{\sigma _{shell}^2+\frac{\sigma _{sky1+galaxy1}^2+\sigma _{sky2+galaxy2}^2}{2}}$$
(2)
These are the associated standard deviations of the values returned from the 10 $`\times `$ 10 pixel box placed on and to each side of the shell. The procedure was repeated for all apertures along the shell in that filter, and the standard error for the whole shell calculated using (assuming we are in the $`R`$ band):
$$\sigma _R=\sqrt{\underset{i=1}{\overset{N}{}}\frac{\left(\frac{\sigma _{Di}}{\sqrt{n}}\right)^2}{N}}$$
(3)
Here, $`N`$ is the number of apertures along the shell, which was typically no less than five and reached fourteen for some of NGC 474’s larger shells, and $`n`$ is the number of pixels in the aperture, which is 100 in our case. The above process was repeated for the B filter and the total error calculated as:
$$\sigma _{B-R}=\sqrt{\left(\frac{\sigma _B}{\overline{n}_B}\right)^2+\left(\frac{\sigma _R}{\overline{n}_R}\right)^2+(0.01)^2+(0.02)^2}$$
(4)
$`\overline{n}_R`$ and $`\overline{n}_B`$ are the average counts for the shell in $`R`$ and $`B`$ respectively. We have adopted zero-point errors of 0.01 and 0.02 for B and R respectively, for NGC 474 (see Table 2). For the NGC 474 shell at 202 arcseconds radius, we calculated $`\sigma _{B-R}`$ = 0.05. We further calculated that this shell had an average $`B-R`$ of 1.00, leading to a final value of 1.00 $`\pm 0.05`$. We believe $`\pm `$ 0.05 is a realistic error when obtaining shell colours based on the techniques described in this paper.
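A direct transcription of equations (2)-(4) is given below (hypothetical function names; note that, following equation (4), the band errors are converted to magnitude-like errors simply as $`\sigma /\overline{n}`$):

```python
import numpy as np

def sigma_d(sig_shell, sig_bg1, sig_bg2):
    """Eq. (2): per-aperture scatter from the on-shell box and the two
    flanking sky+galaxy boxes."""
    return np.sqrt(sig_shell**2 + 0.5 * (sig_bg1**2 + sig_bg2**2))

def sigma_band(sig_d_values, n_pix=100):
    """Eq. (3): standard error for the whole shell in one band, from the
    per-aperture values and the number of pixels per aperture."""
    s = np.asarray(sig_d_values, dtype=float)
    return np.sqrt(np.mean((s / np.sqrt(n_pix))**2))

def sigma_b_minus_r(sig_b, sig_r, nbar_b, nbar_r,
                    zp_err_b=0.01, zp_err_r=0.02):
    """Eq. (4): total colour error, combining both bands with the
    zero-point errors adopted for NGC 474."""
    return np.sqrt((sig_b / nbar_b)**2 + (sig_r / nbar_r)**2
                   + zp_err_b**2 + zp_err_r**2)
```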
## 4 Results
In Tables 3 and 4 we list, for NGC 474 and NGC 7600 respectively, the average radius, surface brightness, and $`B-R`$ colour of each shell. We also present plots of $`B-R`$ and surface brightness for the shells and host galaxies in this section.
### 4.1 $`B-R`$, Surface Brightness, and Isophotal Parameters
#### 4.1.1 NGC 474
In Figure 5a we plot $`R`$ surface brightness versus $`r^{1/4}`$ for NGC 474. Figures 5b and 6ab contain the isophote data obtained for NGC 474, namely ellipticity, $`B_4`$ (the fourth cosine term in the Fourier expansion of the azimuthal variation from a pure ellipse), and position angle plotted against radius. The position angle of the semi-major axis and the ellipticity of NGC 474 change quite dramatically with radius. The $`B_4`$ term shows significant scatter and prevents conclusions from being drawn.
Figure 7a shows the $`B-R`$ profiles for NGC 474 and its shells. The galaxy signal falls off to background-dominated values after 150 arcseconds and these points are not included. The shell $`B-R`$ colours seem to follow those of the galaxy; this is more uncertain for the two outer-most points, since this involves an extrapolation of the galaxy colours to larger radius. It could also be argued that the colours of the inner shells are constant with radius, within the errors; the two outer-most shells are significantly bluer than the rest. The $`R`$ surface brightnesses of NGC 474 and its shells are plotted in Figure 7b. The shell surface brightness appears to remain constant with radius, and lies below that of the galaxy within 100 arcsec. The outer shells, however, appear to be bluer, and it would be useful to measure the galaxy surface brightness out to this radius. The two outer-most shells also have a higher surface brightness than most of the inner shells, though not significantly so. The fact that the shell surface brightness does not follow that of the galaxy argues strongly against the WIM; we return to this point in Section 5.
Despite clipping, the question remains as to whether some shell light is included in the model images. The residual images would then be an underestimate of the true shell light which, if a function of radius, would affect the final shell colours. With no galaxy subtraction, the bright shell at a radius of 202 arcseconds has a $`B-R`$ of 1.05 $`\pm 0.05`$; this value and its associated error were obtained using the procedure outlined in section 3.3. From Table 3, we see that this is not a significant change within the errors. This shell has a high surface brightness and is furthest from the galaxy centre, so we believe that a realistic upper limit for the systematic uncertainty in the shell colours introduced by the possible inclusion of shell light in the galaxy model is 0.05 in $`B-R`$. The effect will be smaller for NGC 7600, which has fainter shells. Despite some scatter in the points, the isophote plots do not seem to show features at the radii of the shells, which argues that little shell light is included in the models. However, NGC 474 has a large, high surface-brightness shell at a radius of 55 arcseconds. A model run with this shell feature masked out significantly reduced the boxiness at that radius: specifically, the $`B_4`$ parameter was reduced from -0.04 to -0.01, with errors of the order $`\pm 0.005`$. It would seem this shell is responsible for the observed boxiness. The position angle and ellipticity were unaffected.
#### 4.1.2 NGC 7600
In Figure 8a, we plot the $`R`$ surface brightness versus $`r^{1/4}`$ for NGC 7600. Figures 8b and 9ab plot the isophotal data for NGC 7600. The position angle changes little with radius, while the ellipticity rises sharply within 10 arcsec, and flattens out at $`\sim `$ 0.55 at larger radius. The $`B_4`$ term is negative (boxy) at most radii.
Figure 10a shows the $`B-R`$ for the galaxy and shells. The galaxy $`B-R`$ is flat over the radius plotted (again, the galaxy signal is dominated by sky after 150 arcseconds and these points are not included). The inner shells have roughly the same colour as the galaxy, while the three outer-most shells are bluer than the galaxy, with $`B-R\sim 1.35`$. Given the errors, it could be argued that there is a general trend for the shells to become bluer with increasing radius. The shell and galaxy surface brightness are shown in Figure 10b. The shells in NGC 7600 have a surface brightness roughly constant with radius, and do not follow the galaxy surface brightness at all (except possibly for the two outer-most points).
### 4.2 Associated Fine Structure and Environment
#### 4.2.1 NGC 474
NGC 474 has a peculiar kinematic structure in its core (Balcells, Hau & Carter 1998), with the core rotating about an axis intermediate between the photometric major and minor axes. There is also evidence in the rotation curves of this galaxy for kinematic shells, of the type also seen in NGC 7626 (Balcells & Carter 1993). The shell system of NGC 474 was classed as a type II by Prieur (1988), meaning that the shells have no preferred orientation around the galaxy. From the residual image in Figure 3 we see that the two outer-most shells appear to be linked to each other by a ‘tidal feature’ which crosses the centre of the galaxy. NGC 474 is a close partner of the smaller spiral NGC 470. Both have identical recessional velocities and are at a projected separation of 300 arcseconds (60 kpc for $`H_0=75\ \mathrm{km\ s^{-1}\ Mpc^{-1}}`$). NGC 470 is undergoing an intense nuclear starburst and has a weak bar (Friedli et al 1996). It has strong central CO emission, whereas that of NGC 474 is weaker (Sofue et al 1992). The two are linked via an HI tidal bridge, and NGC 470 appears to be in orbit about NGC 474, as the HI mimics a ‘Magellanic stream’ around NGC 474 (Schiminovich et al 1997).
#### 4.2.2 NGC 7600
Dressler & Sandage (1983) studied the kinematics in the central 20 arcsec of NGC 7600. Although the uncertainties are large, the bulge $`\frac{v}{\sigma }`$ versus ellipticity places NGC 7600 close to the theoretical line for a prolate elliptical (see their Figure 2). Based on its morphology, especially the extended envelope, it has been classified as an S0. Dressler & Sandage claim that the low rotation of NGC 7600 implies that it is closer to an elliptical, with its support and flattening arising from a large velocity dispersion. They coined the phrase ‘diskless S0’ to describe NGC 7600 and similar galaxies. Schweizer & Seitzer (1988) noted that the ripples of NGC 7600 interleave with radius, and concluded that they probably arose from an external origin. We give NGC 7600 the classification of type I, based on this interleaving of shells aligned along the major axis. On inspection of the surrounding area, there appears to be no companion to NGC 7600.
## 5 Discussion
Here we discuss our results in light of models of shell formation.
### 5.1 NGC 474
Hernquist & Quinn (1988) showed that mass transfer, rather than a complete merger, is also capable of producing shell structures. They also noted that the efficiency of tidal stripping is greatly reduced in retrograde encounters. Longo & de Vaucouleurs (1983) quote a mean outer-disk colour for NGC 470 of $`B-V`$ = 0.68. We transform this to $`B-R`$ = 1.00, with an uncertainty $`(\frac{\sigma }{\sqrt{N}})`$ of $`\sim 0.08`$, based on observations of many disk galaxies in different filters by de Jong (1995). If (some of) the shells of NGC 474 are indeed stripped matter from NGC 470, then we must take into account subsequent passive stellar evolution (which would redden the shells) and also the expected blueing of shells as dust is removed during the interaction. We will assume for the moment that the two processes cancel each other. (It may also be the case, as suggested by the HI maps of the two systems, that NGC 470 is in orbit about NGC 474 and thus periodically replenishes the outer shells; the colours of the (outermost) shells would then also depend on the orbital timescale, further complicating the issue.) Within our uncertainties, the two outer-most shells have colours consistent with being stripped matter from NGC 470 ($`B-R`$ of 0.91 and 0.99). The inner shells however have a mean $`B-R`$ of 1.27 $`\pm `$ 0.04, not compatible with the outer-disk colour of NGC 470. The discovery of kinematic shells and a peculiar core deep within the potential well of NGC 474 is important in this context. It is very unlikely that mass transfer of material could form shells at very small radii which would survive for the timescales required. Kinematically-distinct core (KDC) ellipticals have central regions that rotate rapidly, and often in the opposite direction to the stars in the outer parts of the galaxy. KDCs are believed to be the result of a merger (Illingworth & Franx 1989), either the accretion of a small secondary (Balcells & Quinn 1990) or a major merger of two nearly equal-mass disk galaxies (Hernquist & Barnes 1991). Forbes (1992) found that in a sample of galaxies with peculiar cores almost all had shells, implying that shells and KDCs may form in a similar way, namely via mergers (although the role of major mergers in shell formation is poorly understood).
In the WIM (Thomson 1991), the shell system of NGC 474 was formed via a weak interaction with NGC 470. Mass transfer to some degree is also compatible with the WIM, and could explain the origin of the outer shells. The similarity of the galaxy and shell $`B-R`$ profiles in NGC 474 is in agreement with the WIM, but the fact that the shells’ surface brightness does not follow that of the galaxy is a very strong (perhaps fatal) argument against the WIM. Hau & Thomson (1994) showed how a fly-by interaction can form a decoupled core by spinning up the envelope of the galaxy. How this would affect the thick disk is not known, but we would expect it to be heated, thus suppressing shell formation. It seems therefore that in order to produce the peculiar core and the shells a merger would still be needed even in this scenario.
It seems plausible to use an accretion event to explain the inner shells of NGC 474 and the peculiar core kinematics. More speculatively, the two outer-most shells may be the result of mass transfer from NGC 470.
### 5.2 NGC 7600
With no close companions and no evidence for other peculiar structure apart from the shells, NGC 7600 is a much cleaner system than NGC 474. NGC 7600 is, like NGC 3923, an example of the expected result of a minor merger between a prolate elliptical and a dwarf elliptical on a near radial orbit (Dupraz & Combes 1986). This indeed seems to be consistent with the data. First, the shell system is aligned along the apparent major axis and appears interleaved. Second, although tentative, the $`B-R`$ of the outer shells appears different from that of the galaxy, while the inner shells are consistent with the galaxy $`B-R`$. It is difficult to know whether a gradient exists in the $`B-R`$ of the shells, due to their low surface brightness coupled with missing values for some of them. On balance, however, we believe that the $`B-R`$ profile and surface brightness of the shells point towards them being formed from the stars of a merged secondary.
## 6 Conclusions and Future Work
We have presented the first results from our investigation into the photometric colours of shell galaxies. Once reduction and calibration of the rest of the sample are complete, we hope to make some statements concerning the generic properties of these galaxies. A key result from the present study of NGC 474 and NGC 7600 is that the shell surface brightness is roughly constant with radius. This is a strong argument against the Weak Interaction Model (Thomson 1991), which predicts that the surface brightness profile of the shells should follow that of the galaxy. Wilkinson et al. (1998) also argue against the WIM from their study of the shell system of 0422-476.
Our hypotheses for the two galaxies are as follows. NGC 474 may have two families of shells. The outer-most shells were possibly formed from tidally liberated material from its interacting companion NGC 470, while the inner shells were formed by a minor merger which is now shaping the galaxy’s core. For NGC 7600, our results favour shell formation from a merger/accretion of a smaller companion.
Unfortunately, the various formation models, chiefly mergers and the WIM, are not developed enough to allow unique, testable predictions for shell morphologies, surface brightnesses and colours. For instance, the shell colours of NGC 474 and NGC 7600 do not allow us to distinguish between a WIM and a merger model. Detailed numerical simulations of shell formation in the various scenarios, incorporating dynamical friction and realistic treatments of gas and dust, are needed to differentiate between the different models, and to learn more about the progenitors. On the observational side, shell kinematics holds considerable promise. For instance, Merrifield & Kuijken (1998) showed that the measurement of shell kinematics in regular, aligned systems provides a means for determining the gravitational potential of their host galaxies out to large radii. Such measurements will become feasible with integral-field spectrographs on 8–10m class telescopes.
## 7 Acknowledgements
AJT would like to thank Bob Thomson for initiating the project and for many helpful ideas and interesting discussions. We would like to thank Jim Collett for contributions to the discussion.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
The Isaac Newton Telescope is operated on the island of La Palma by the Royal Greenwich Observatory in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.
## 8 References
* Arp, H. 1966, in Atlas of Peculiar Galaxies, Pasadena, California Institute of Technology
* Balcells, M. 1997, ApJ, 486, L87
* Balcells, M., & Carter, D. 1993, Astron. Astrophys, 279, 376
* Bender, R., Dobereiner, S., & Mollenhoff, C. 1988, Astr. Astrophys. Suppl., 74, 385
* Bender, R., 1990, in Dynamics and Interactions of Galaxies, p.232, ed. R. Wielen (Springer, Berlin)
* Binney, J., & Petrou, M. 1985, MNRAS, 214 449
* Carter, D., preprint for IAU 186, 1997
* Carter, D., Allen, D. A., & Malin, D. F. 1982, Nature, 295, 126
* Carter, D., Thomson, R. C., & Hau, G. K. T. 1998, MNRAS, 294, 182
* de Jong, R. S. 1995, Thesis, Groningen, Netherlands.
* Dressler, A., & Sandage, A. 1983, ApJ, 265, 664
* Dupraz, C., & Combes, F. 1986, A&A, 166, 53
* Dupraz, C., & Combes, F. 1987, A&A, 185, L1
* Fort, B. P., Prieur, J-L., Carter, D., Meatheringham, S. J., & Vigroux, L. 1986, ApJ, 306, 110
* Forbes, D. A., Reitzel, D. B., Williger, G. M. 1995, Astron. J, 109(4), 1576
* Forbes, D. A., & Thomson, R. C. 1992, MNRAS, 254, 723
* Friedli, D., Wozniak, H., Rieke, M,. Martinet, L., & Bratschi, P. 1996, A&Asupp, 118, 461
* Heisler, J., & White, S. D. M. 1990, MNRAS, 243, 199
* Hibbard, J. E., & Mihos, J. C. 1995, Astron. J, 110, 140
* Hernquist, L., & Quinn, P. J. 1988, ApJ, 331, 682
* Hernquist, L., & Quinn, P. J. 1989, ApJ, 342, 1
* Hernquist, L., & Spergel, D. N. 1992, ApJ, 399, L117
* Jedrzejewski, R. I. 1987, MNRAS, 226, 747
* Jorgensen, I. 1994, PASP, 106, 967
* Landolt, A. U., 1992, AJ, 104(1), 340
* Longo, G & de Vaucouleurs, A. 1983, Univ. Tex. Monogr. Astron. No.3
* Malin, D. F., & Carter, D. 1983, ApJ, 274, 534
* Malin, D. F., & Carter, D. 1980, Nature, 285, 643
* Merrifield, M., & Kuijken, K. 1998, MNRAS, 297, 1292
* Nieto, J-L., Bender, R., Arnaud, J., & Surma, P. 1991, Astr. Astrophys, 244, L25
* Penereiro, J. C., de Carvalho, R. R., Djorgovski, S., & Thompson, D. 1994, A&A Supp, 108, 461
* Prieur, J-L., 1988, Thesis, Toulouse, (France)
* Prieur, J-L., 1990, in Dynamics and Interactions of Galaxies, p.72, ed. R. Wielen (Springer, Berlin)
* Quinn, P.J. 1984, ApJ, 279, 596
* Schiminovich, D., Van Gorkom, J. H & Van der Hulst, J. M. 1997, IAU S186
* Schombert, J. M., & Wallin, J. F. 1987, AJ, 94(2), 300
* Schweizer, F. 1980, ApJ, 237, 303
* Schweizer, F., & Seitzer, P. 1988, ApJ, 328, 88
* Seitzer, P., & Schweizer, F. 1990, in Dynamics and Interactions of Galaxies, p.270, ed. R. Wielen (Springer, Berlin)
* Thomson, R. C., & Wright, A. E. 1990, MNRAS, 247, 122
* Thomson, R.C. 1991, MNRAS, 257, 689
* Visvanathan, N., & Sandage, A. 1977, ApJ, 216, 214
* Wilkinson, A., Prieur, J.-L., Lemoine, R., Carter, D., Malin, D., Sparks, W. B. 1998, preprint
# Noise-induced memory in extended excitable systems
## Abstract
We describe a form of memory exhibited by extended excitable systems driven by stochastic fluctuations. Under such conditions, the system self-organizes into a state characterized by power-law correlations thus retaining long-term memory of previous states. The exponents are robust and model-independent. We discuss novel implications of these results for the functioning of cortical neurons as well as for networks of neurons.
Neurons receive thousands of perturbations affecting the transmembrane voltage at various points of the synaptic membrane. Recent experimental evidence has shown active nonlinearities at the dendrites of cortical neurons, implying that a model representing these neurons must have many nonlinear spatial degrees of freedom.
What are the dynamical consequences of these distributed nonlinearities for the neuron’s function? The answer is not immediately certain. The prevailing view has been, since Lapicque in 1907, that all input regions (i.e., dendrites) are linear, and thus neurons were represented as a single compartment. In this view incoming excitations are linearly integrated, and whenever the resulting value exceeds a predefined threshold an output (action potential) is generated. Thus, the neuron is considered to have a single non-linear degree of freedom (i.e., the spatial region where the thresholding dynamics takes place).
This Letter describes a robust form of noise-induced memory which appears naturally as a direct consequence of including distributed nonlinearities in the formulation of a neuron’s input region. Besides having relevance at the neural level, it touches other areas of biology where excitable models have been used, as is the case for models of forest-fire propagation, spreading of epidemics and noise-induced waves. From the outset, it needs to be noted that the phenomena to be described do not depend on the type of excitable model one uses.
To show the essence of the main point we adapt the Greenberg-Hastings cellular automaton model of excitable media. For the purpose of this report let us restrict ourselves to the case of a one-dimensional lattice of coupled identical compartments ($`n=1,\mathrm{},N`$), with open boundary conditions. Thus, each spatial location is assigned a discrete state $`S_n^t`$ which can be one of three: Quiescent, Excited or Refractory, with the dynamics determined by the transition rules: E $``$ R (always), R $``$ Q (always), Q $``$ E (with probability $`\rho `$, or if at least one neighbor is in the E state), Q $``$ Q (otherwise). To introduce the so-called “refractory period” typical of all excitable systems, during which no re-excitation is possible, the transition from the R state to the Q state is delayed for $`r`$ time steps. Thus, the only two parameters in the system are $`\rho `$, which determines the probability that an input to a given site $`n`$ results in an excitation (i.e., a transition Q $``$ E); and $`r`$, determining the time scale of recovery from the excited state. It turns out that the precise value of $`r`$ is not crucial, but choosing a value of $`r`$ at least equal to or larger than $`N`$ eliminates a number of numerical complications. A dendritic region bombarded by many weak synaptic inputs is simulated by adopting a relatively small value for $`\rho `$ (here $`10^{-2}`$). The typical response of the model under such conditions is illustrated in Fig. 1. One can see that, starting from arbitrary initial conditions, eventually an element is first excited (left arrow in Fig. 1). This initiates a propagated wavefront which collides with others initiated in the same way somewhere else in the system. After the completion of the refractory period the process repeats, originating another wavefront (right side of Fig. 1). An immediately apparent feature is the overall similarity of any two consecutive fronts. The large-scale shape is preserved, despite the fact that each element is being randomly perturbed.
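A minimal numpy sketch of this automaton is given below. The encoding is our own (state 0 = Q, 1 = E, and values 2..r+1 count down the refractory period), and the leader bookkeeping is simplified to recording the first noise-triggered site after a fully quiet step:

```python
import numpy as np

def gh_leaders(N=512, rho=1e-2, r=512, steps=50000, seed=0):
    """1D Greenberg-Hastings model with refractory delay r; returns the
    leading (first-excited) site of each wavefront."""
    rng = np.random.default_rng(seed)
    state = np.zeros(N, dtype=int)   # 0=Q, 1=E, 2..r+1 = refractory
    leaders, quiet = [], True
    for _ in range(steps):
        excited = state == 1
        neigh = np.zeros(N, dtype=bool)
        neigh[1:] |= excited[:-1]
        neigh[:-1] |= excited[1:]
        fire = (state == 0) & (neigh | (rng.random(N) < rho))
        new = np.where(state > 2, state - 1, 0)  # refractory countdown
        new[excited] = r + 1                     # E -> R for r steps
        new[fire] = 1                            # Q -> E
        if quiet and fire.any():                 # first site of a new front
            leaders.append(int(np.flatnonzero(fire)[0]))
        quiet = not (new == 1).any()
        state = new
    return leaders
```

With $`r`$ of the order of $`N`$, essentially one front generation is active at a time, so tracking only the first firing site per generation is enough to accumulate the jump statistics discussed below.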
We found that important information can be extracted from the analysis of the dynamics of the first element to be excited in each wavefront, denoted as $`L(n)`$. Figure 2 shows results of numerical simulations where $`L(n)`$ of each wavefront is plotted as a function of time. Note the tendency of $`L(n)`$ to remain near the previous leading site, which is especially apparent in the larger systems. To quantify this dynamics we estimated numerically $`<|L^t(n)-L^{t+1}(n)|>`$, which is how far (on average) from its current position the leader will be in the next wavefront. The resulting distributions $`<P(\mathrm{\Delta }n)>`$ of these jumps are plotted in Figure 3A for all system sizes. The largest probability corresponds to the case in which the wavefront is first triggered from the same element as in the previous event. The power-law decay $`\mathrm{\Delta }n^{-\pi }`$ tells us that there is always a non-zero probability for a very long jump, indeed as large as the entire system. Therefore, the cut-off of the power law is the only difference between the results obtained with small or large N (see Panel A in Figure 3). Another related measure is the estimation of the average distance the leader drifts from its current position as a function of time lag $`\mathrm{\Delta }t`$ (t is always given in wavefront units). The results are plotted in panel B of Figure 3. The fact that the log-log plot of $`|\mathrm{\Delta }n|`$ vs $`\mathrm{\Delta }t`$ is linear implies a power law $`\mathrm{\Delta }t^H`$. The best-fit line of the results in Figure 3B gives an exponent $`H=0.19`$. For this case it is known that the power spectrum decays as $`1/f^\beta `$ and that $`\beta `$ relates to $`H`$ as $`\beta =2H+1=1.4`$. A random walk would have similar statistical behavior but with an exponent $`H=1/2`$. These power-laws, with cutoffs given only by the system size, imply a lack of characteristic scale (both in time and space), a situation which resembles some of the scenarios described in the context of self-organized criticality.
What causes this memory is trivially simple: the first site to be activated by the noise will necessarily be the first (exactly after $`r`$ time steps) to recover, and consequently to be ready to be re-excited. The two adjacent sites which were excited by the leader will recover only after $`r+1`$ time steps, and so on for the other adjacent sites. Thus, excitation by the noise will always be biased by the previous sequence of excitation. Therefore, this “memory” can be preserved as long as the cycle of recovery (in this model the $`r`$ time steps) is not affected by the noise. Regarding the dependence on the noise intensity, for vanishingly small $`\rho `$ all sites will have enough time to cycle to the Q state and no memory will be kept (see below).

\- Exponents are robust - The phenomenology described, as well as its characteristic power laws, is not model-dependent. The fact that similar results were obtained with various numerical models motivated the search for the simplest numerical simulation scheme. It turns out that the dynamics and the statistical behaviour are preserved by a simple kinematic description of the motion of these noise-induced propagated excitable waves. The approach is described using the cartoon in Figure 4 as follows. Time and space are considered continuous variables. It is assumed that excitations can initiate a wavefront at any point in the 1D space with uniform probability. Therefore, after choosing the noise amplitude ($`\rho `$), the first step of the algorithm is to distribute all the potential excitation spots at random locations and at random times. Larger values of $`\rho `$ imply more events to be distributed. Filled circles in Fig. 4 denoted “a” through “e” correspond to a few of these events. Then the space is scanned to locate the earliest excitation point (point “a” in the figure). Subsequently, two wavefronts are drawn from that point with uniform (arbitrary) speed. A front dies either when it reaches the boundary, as in the initial case, or upon colliding with another front, such as the one initiated by the event labeled “b” (the dotted lines indicate two of these interrupted fronts). Thus the algorithm is the repetition of scanning followed by the identification of potential collisions. It needs to be noted that nothing is peculiar about these rules; they are simply the algorithmic description of what is known about excitable waves. The results of extensive simulations are plotted in Figure 3 side by side with those already described for the discrete model. The jump distributions are plotted for four noise levels in Panel C. The mean drifts as a function of time lag are plotted in Panel D. It can be seen that there is a remarkable agreement between the numerical values of both scaling exponents.
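The kinematic scheme is compact enough to state in a few lines. The sketch below (hypothetical names, the continuum replaced by a fine grid, unit front speed) exploits the fact that, with pairwise annihilation of fronts in 1D, the excitation time of every point is still the first-arrival time $`\mathrm{min}_j(t_j+|x-x_j|)`$ over the accepted nucleation events, so explicit collision bookkeeping is unnecessary:

```python
import numpy as np

def next_generation(ready_time, rho, rng):
    """One wavefront generation.  ready_time[x] is when site x becomes
    excitable again; candidate excitations occur at ready_time + Exp(1/rho).
    A candidate nucleates only if no earlier front has swept it first.
    Returns the excitation time of every site and the leader position."""
    x = np.arange(ready_time.size, dtype=float)
    cand = ready_time + rng.exponential(1.0 / rho, size=x.size)
    exc = np.full(x.size, np.inf)
    leader = int(np.argmin(cand))
    for i in np.argsort(cand):
        if cand[i] < exc[i]:                     # genuine nucleation
            exc = np.minimum(exc, cand[i] + np.abs(x - x[i]))
        elif cand[i] >= exc.max():
            break                                # nothing later can fire
    return exc, leader

# minimal driver: iterate generations and follow the leader L(n)
rng = np.random.default_rng(1)
N, rho, r = 1024, 1e-2, 1024
exc = np.zeros(N)
leaders = []
for _ in range(500):                             # 500 wavefronts
    exc, lead = next_generation(exc + r, rho, rng)
    leaders.append(lead)
jumps = np.abs(np.diff(leaders))                 # jump statistics
```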
\- How long does it remember? - For the sake of demonstration, the dissipation of memory can be estimated by first imposing an initial activation sequence on the system (i.e., writing) and then calculating the Hamming distance between the initial and subsequent wavefronts separated by $`\mathrm{\Delta }t`$. Using the discrete model we impose an arbitrary initial configuration of excitation, in this case the sinusoidal pattern plotted in the inset of panel A of Figure 5. As time passes, the pattern deforms, as shown by the snapshots at times 2, 5, 10 and 50 in the figure; the deformation can be quantified by the Hamming distance, defined as:
$$<D(t)>=\frac{1}{N}\underset{n=1}{\overset{N}{\sum }}|S_n^t-S_n^{t+\mathrm{\Delta }t}|$$
(1)
where $`S`$ are the initial and subsequent states, ranked by the excitation order of each element. Means and SEMs of $`D(t)`$ were calculated, and the results are plotted in the main body of Figure 5A as a function of $`\mathrm{\Delta }t`$. It can be seen that the Hamming distance follows a power-law up to times of about 50 events. It was already mentioned that for vanishingly small $`\rho `$ no memory of previous states can be maintained, since this condition implies that all the elements have enough time to return to the Q state before re-excitation. Thus, rather paradoxically, more noise implies longer memory. This is illustrated by the results in Figure 5B, where $`D(t)`$ was calculated for increasing noise $`\rho `$. Thus, we can call this phenomenon a form of noise-induced memory.
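In code, the “writing/reading” measure of equation (1) only needs the excitation order of each site in two wavefronts. The helper names below are hypothetical, and the normalisation of the ranks is our choice:

```python
import numpy as np

def excitation_ranks(exc_times):
    """Rank each site by its excitation order within one wavefront,
    normalised to [0, 1)."""
    ranks = np.empty(exc_times.size)
    ranks[np.argsort(exc_times)] = np.arange(exc_times.size)
    return ranks / exc_times.size

def hamming_distance(exc_a, exc_b):
    """Eq. (1): mean absolute difference of the ranked states of two
    wavefronts separated by some number of events."""
    return np.abs(excitation_ranks(exc_a) - excitation_ranks(exc_b)).mean()
```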
\- Implications for learning and memory - The dynamics described here might have important consequences for neural “plasticity”. This is the name given, in neuroscience, to the process by which interconnected neurons can strengthen or weaken their synaptic contacts to modulate their communication. The dogma is that memory and learning in animal brains are based on long-term changes of the synaptic connectivity. An important point in contemporary thinking is that whatever the plastic process is, it must be able to modify the synaptic strength during the time window imposed by the longest time-scale in the neuron dynamics. This window is given by the relaxation kinetics of the membrane and is at most of the order of hundreds of milliseconds. This length is considered too short for producing most of the necessary synaptic changes. The length of that window can be many orders of magnitude longer if the results presented in this Letter survive the intricate complexity of the spatial structure in real neurons, as well as perhaps other caveats. This would imply that the correlated spatial activity along the dendrites established by the inputs will only decay after hundreds of firing events. This time scale is much longer than the fraction of a second currently considered as the longest time scale over which a neuron could remember its past history. This correlated sequence of activation can in turn influence the spatial distribution of the molecular machinery supposedly responsible for the long-term synaptic modifications.
The work reported here is restricted, for simplicity, to the one-dimensional case and the use of the simplest conceivable excitable model. Nevertheless, the phenomenon is robust and similar results can be obtained using more detailed models. If the dynamics described here exists as such in real neurons, it would be very relevant to neural functioning.
Supported by the Mathers Foundation. R.U. Computer resources are supported by NSF ARI Grants. Discussions with P. Bak, R. Llinas and Mark Millonas are appreciated. Communicated in part by DRC at the First International Conference on Stochastic Resonance in Biological Systems. Arcidosso, Italy, May 5-9, 1998 where the hospitality of the colleagues of the Istituto di Biofisica of Pisa was cherished.
# Multifractality in Human Heartbeat Dynamics
Recent evidence suggests that physiological signals under healthy conditions may have a fractal temporal structure. We investigate the possibility that time series generated by certain physiological control systems may be members of a special class of complex processes, termed multifractal, which require a large number of exponents to characterize their scaling properties. We report on evidence for multifractality in a biological dynamical system — the healthy human heartbeat. Further, we show that the multifractal character and nonlinear properties of the healthy heart rate are encoded in the Fourier phases. We uncover a loss of multifractality for a life-threatening condition, congestive heart failure.
Biomedical signals are generated by complex self-regulating systems that process inputs with a broad range of characteristics. Many physiological time series, such as the one shown in Fig. 1a, are extremely inhomogeneous and nonstationary, fluctuating in an irregular and complex manner. The analysis of the fractal properties of such fluctuations has been restricted to second-order linear characteristics such as the power spectrum and the two-point autocorrelation function. These analyses reveal that the fractal behavior of healthy, free-running physiological systems is often characterized by $`1/f`$-like scaling of the power spectra.
Monofractal signals are homogeneous in the sense that they have the same scaling properties throughout the entire signal. Therefore monofractal signals can be indexed by a single global exponent—the Hurst exponent $`H`$. On the other hand, multifractal signals can be decomposed into many subsets characterized by different local Hurst exponents $`h`$, which quantify the local singular behavior and thus relate to the local scaling of the time series. Thus multifractal signals require many exponents to fully characterize their scaling properties.
The statistical properties of the different subsets characterized by these different exponents $`h`$ can be quantified by the function $`D(h)`$, where $`D(h_o)`$ is the fractal dimension of the subset of the time series characterized by the local Hurst exponent $`h_o`$. Thus, the multifractal approach for signals, a concept introduced in the context of multi-affine functions, has the potential to describe a wide class of signals that are more complex than those characterized by a single fractal dimension (such as classical 1/f noise).
We test whether a large number of exponents is required to characterize heterogeneous heartbeat interval time series \[Fig. 1\] by undertaking multifractal analysis. The first problem is to extract the local value of $`h`$. To this end we use methods derived from wavelet theory. The properties of the wavelet transform make wavelet methods attractive for the analysis of complex nonstationary time series such as one encounters in physiology. In particular, wavelets can remove polynomial trends that could lead box-counting techniques to fail to quantify the local scaling of the signal. Additionally, the time-frequency localization properties of the wavelets make them particularly useful for the task of revealing the underlying hierarchy that governs the temporal distribution of the local Hurst exponents. Hence, the wavelet transform enables a reliable multifractal analysis. As the analyzing wavelet, we use derivatives of the Gaussian function, which allows us to estimate the singular behavior and the corresponding exponent $`h`$ at a given location in the time series. The higher the order $`n`$ of the derivative, the higher the order of the polynomial trends removed and the better the detection of the temporal structure of the local scaling exponents in the signal.
We evaluate the local exponent $`h`$ through the modulus of the maxima values of the wavelet transform at each point in the time series. We then estimate the scaling of the partition function $`Z_q(a)`$, which is defined as the sum of the $`q^{th}`$ powers of the local maxima of the modulus of the wavelet transform coefficients at scale $`a`$. For small scales, we expect
$$Z_q(a)\sim a^{\tau (q)}.$$
(1)
For certain values of $`q`$, the exponents $`\tau (q)`$ have familiar meanings. In particular, $`\tau (2)`$ is related to the scaling exponent of the Fourier power spectra, $`S(f)\sim 1/f^\beta `$, as $`\beta =2+\tau (2)`$. For positive $`q`$, $`Z_q(a)`$ reflects the scaling of the large fluctuations and strong singularities, while for negative $`q`$, $`Z_q(a)`$ reflects the scaling of the small fluctuations and weak singularities. Thus, the scaling exponents $`\tau (q)`$ can reveal different aspects of cardiac dynamics.
Monofractal signals display a linear $`\tau (q)`$ spectrum, $`\tau (q)=qH-1`$, where $`H`$ is the global Hurst exponent. For multifractal signals, $`\tau (q)`$ is a nonlinear function: $`\tau (q)=qh(q)-D(h)`$, where $`h(q)\equiv d\tau (q)/dq`$ is not constant. The fractal dimension $`D(h)`$, introduced earlier, is related to $`\tau (q)`$ through a Legendre transform,
$$D(h)=qh-\tau (q).$$
(2)
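As a rough illustration of this pipeline, the sketch below computes $`Z_q(a)`$ from the local maxima of a Mexican-hat (second-derivative-of-Gaussian) wavelet transform, fits $`\tau (q)`$, and applies the Legendre transform. It is a simplification of the full modulus-maxima (WTMM) method — proper WTMM traces maxima lines across scales — and all names are our own; it assumes scales well below the record length.

```python
import numpy as np

def mexican_hat(scale, support=8):
    """Second derivative of a Gaussian, sampled at the given scale."""
    t = np.arange(-support * scale, support * scale + 1) / scale
    return (1.0 - t**2) * np.exp(-0.5 * t**2) / np.sqrt(scale)

def multifractal_spectrum(signal, scales, qs):
    """tau(q) from the scaling of Z_q(a) over the given scales, plus the
    Legendre transform h(q) = d tau/dq and D(h) = q h - tau."""
    qs = np.asarray(qs, dtype=float)
    log_z = np.empty((qs.size, len(scales)))
    for j, a in enumerate(scales):
        w = np.abs(np.convolve(signal, mexican_hat(a), mode='same'))
        inner = w[1:-1]
        maxima = inner[(inner > w[:-2]) & (inner > w[2:])]
        maxima = maxima[maxima > 1e-12]          # guard negative-q sums
        for i, q in enumerate(qs):
            log_z[i, j] = np.log(np.sum(maxima**q))
    log_a = np.log(np.asarray(scales, dtype=float))
    tau = np.array([np.polyfit(log_a, row, 1)[0] for row in log_z])
    h = np.gradient(tau, qs)
    return tau, h, qs * h - tau

# e.g., for a hypothetical interbeat series `nn`:
# tau, h, D = multifractal_spectrum(np.diff(nn),
#                 scales=2**np.arange(5, 10), qs=np.linspace(-5, 5, 41))
```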
We analyze both daytime (12:00 to 18:00) and nighttime (0:00 to 6:00) heartbeat time series records of healthy subjects, and the daytime records of patients with congestive heart failure. These data were obtained by Holter monitoring. Our database includes 18 healthy subjects (13 female and 5 male, with ages between 20 and 50, average 34.3 years), and 12 congestive heart failure subjects (3 female and 9 male, with ages between 22 and 71, average 60.8 years) in sinus rhythm (see Methods section for details on data acquisition and preprocessing). For all subjects, we find that for a broad range of positive and negative $`q`$ the partition function $`Z_q(a)`$ scales as a power law \[Figs. 2a,b\].
For all healthy subjects, we find that $`\tau (q)`$ is a nonlinear function \[Fig. 2c and Fig. 3a\], which indicates that the heart rate of healthy humans is a multifractal signal. Figure 3b shows that for healthy subjects, $`D(h)`$ has nonzero values for a broad range of local Hurst exponents $`h`$. The multifractality of healthy heartbeat dynamics cannot be explained by activity, as we analyze data from subjects during nocturnal hours. Furthermore, this multifractal behavior cannot be attributed to sleep-stage transitions, as we find multifractal features during daytime hours as well. The range of scaling exponents — $`0<h<0.3`$ — with nonzero fractal dimension $`D(h)`$, suggests that the fluctuations in the healthy heartbeat dynamics exhibit anti-correlated behavior ($`h=1/2`$ corresponds to uncorrelated behavior while $`h>1/2`$ corresponds to correlated behavior).
In contrast, we find that heart rate data from subjects with a pathological condition — congestive heart failure — show a clear loss of multifractality \[Figs. 3a,b\]. For the heart failure subjects, $`\tau (q)`$ is close to linear and $`D(h)`$ is non-zero only over a very narrow range of exponents $`h`$ indicating monofractal behaviour \[Fig. 3\].
Our results show that, for healthy subjects, local Hurst exponents in the range $`0.07<h<0.17`$ are associated with fractal dimensions close to one. This means that the subsets characterized by these local exponents are statistically dominant. On the other hand, for the heart failure subjects, we find that the statistically dominant exponents are confined to a narrow range of local Hurst exponents centered at $`h0.22`$. These results suggest that for heart failure the fluctuations are less anti-correlated than for healthy dynamics since the dominant scaling exponents $`h`$ are closer to $`1/2`$.
We systematically compare our method with other widely used methods of heart rate time series analysis. Several of these methods do not result in a fully consistent assignment of healthy versus diseased subjects. In Fig. 4a we illustrate the results of our method based on the multifractal formalism. Each subject’s dataset is characterized by three quantities: (1) the standard deviation of the interbeat intervals; (2) the exponent value $`\tau (q=3)`$ obtained from the scaling of the third moment $`Z_3(a)`$; and (3) the degree of multifractality, defined as the difference between the maximum and minimum values of the local Hurst exponent $`h`$ for each individual \[Fig. 5\]. We find that the multifractal approach robustly discriminates the healthy from the heart failure subjects.
We next blindly analyze a separate database containing 10 records, 5 from healthy individuals and 5 from patients with congestive heart failure. The time series in the new database are shorter than the ones in our database; on average they are only 2 hours long — that is, less than 8000 beats. Figure 4b shows the projection onto the x-y plane of our data presented in Fig. 4a. Marked in black, we show the results for the blind test. Our approach clearly separates the blind test subjects into two groups: 1, 3, 5, 6, and 10 fall in the healthy group and 2, 4, 7, 8 and 9 in the heart failure group. Unblinding the test code reveals that indeed subjects 1, 3, 5, 6 and 10 are healthy, while 2, 4, 7, 8 and 9 are heart failure patients. We conclude that an analysis incorporating the multifractal method may add diagnostic power to contemporary analytic methods of heartbeat (and other physiological) time series analysis.
The multifractality of heart beat time series also enables us to quantify the greater complexity of the healthy dynamics compared to pathological conditions. Power spectrum analysis defines the complexity of heart beat dynamics through its scale-free behavior, identifying a single scaling exponent as an index of healthy or pathologic behavior. Hence, the power spectrum is not able to quantify the greater level of complexity of the healthy dynamics, reflected in the heterogeneity of the signal. On the other hand, the multifractal analysis reveals this new level of complexity by the broad range of exponents necessary to characterize the healthy dynamics. Moreover, the change in shape of the $`D(h)`$ curve for the heart failure group may provide insights into the alteration of the cardiac control mechanisms due to this pathology.
To further study the complexity of the healthy dynamics, we perform two tests with surrogate time series. First, we generate a surrogate time series by shuffling the interbeat interval increments of a record from a healthy subject. The new signal preserves the distribution of interbeat interval increments but destroys the long-range correlations among them. Hence, the signal is a simple random walk, which is characterized by a single Hurst exponent $`H=1/2`$ and exhibits monofractal behavior \[Fig. 5a\]. Second, we generate a surrogate time series by performing a Fourier transform on a record from a healthy subject, preserving the amplitudes of the Fourier transform but randomizing the phases, and then performing an inverse Fourier transform. This procedure eliminates nonlinearities, preserving only the linear features of the original time series. The new surrogate signal has the same $`1/f`$ behavior in the power spectrum as the original heartbeat time series; however, it exhibits monofractal behavior \[Fig. 5a\]. We repeat this test on a record of a heart failure subject. In this case, we find a smaller change in the multifractal spectrum \[Fig. 5b\]. The results suggest that the healthy heartbeat time series contains important phase correlations cancelled in the surrogate signal by the randomization of the Fourier phases, and that these correlations are weaker in heart failure subjects. Furthermore, the tests indicate that the observed multifractality is related to nonlinear features of the healthy heartbeat dynamics. A number of recent studies have tested for nonlinear and deterministic properties in recordings of interbeat intervals. Our results are the first to demonstrate an explicit relation between the nonlinear features (represented by the Fourier phase interactions) and the multifractality of healthy cardiac dynamics \[Fig. 5\].
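Both surrogates are standard and easy to reproduce; a minimal numpy version (our own function names) is:

```python
import numpy as np

def shuffled_increments(nn, rng):
    """Shuffle interbeat increments: same increment distribution,
    long-range correlations destroyed (a simple random walk, H = 1/2)."""
    inc = np.diff(nn)
    rng.shuffle(inc)
    return np.concatenate(([nn[0]], nn[0] + np.cumsum(inc)))

def phase_randomized(x, rng):
    """Keep the Fourier amplitudes (hence the 1/f spectrum), randomise
    the phases: linear features kept, nonlinearities eliminated."""
    spec = np.fft.rfft(x)
    phases = rng.uniform(0.0, 2.0 * np.pi, spec.size)
    phases[0] = 0.0                    # DC term must stay real
    if x.size % 2 == 0:
        phases[-1] = 0.0               # so must the Nyquist term
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=x.size)
```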
From a physiological perspective, the detection of robust multifractal scaling in the heart rate dynamics is of interest because our findings raise the intriguing possibility that the control mechanisms regulating the heartbeat interact as part of a coupled cascade of feedback loops in a system operating far from equilibrium. Furthermore, the present results indicate that the healthy heartbeat is even more complex than previously suspected, posing a challenge to ongoing efforts to develop realistic models of the control of heart rate and other processes under neuroautonomic regulation.
## Methods
Heart failure (CHF) data were recorded using a Del Mar Avionics Model 445 Holter recorder and digitized at 250 Hz. Beats were labeled using “Aristotle” arrhythmia analysis software. By using criteria based on timing and QRS morphology, “Aristotle” labels each detected beat as normal, ventricular ectopic, supraventricular ectopic, or unknown and the location of the R-wave peaks is determined with a resolution of 4 ms.
Healthy datasets were recorded using a Marquette Electronics Series 8500 Holter Recorder. Using a Marquette Electronics Model 8000T Holter Scanner, the tapes were then digitized at 128 Hz., scanned and annotated. The annotations were manually verified by an experienced Holter scanning technician. The location of the R-wave peaks was thus determined to a resolution of 8 ms.
The finite resolution implies that our estimates of the interbeat intervals are affected by a white noise due to estimation error. The signal-to-noise ratio is of the order of 100 for both healthy and heart failure records. Furthermore, the white noise due to the measurements would lead to the detection of a local Hurst exponent $`h=0`$ at very small scales. For that reason, in this study we have considered only scales larger than 16 beats.
From the beat annotation file, only the intervals (NN) between consecutive normal beats were determined; thus intervals containing non-normal beats were eliminated from the NN interval series. For the heart failure (CHF) data an average of 2% (0.1% min to 6% max) of the intervals were eliminated, and for the normal data an average of 0.01% (0% min to 0.06% max) were eliminated. No interpolation was done for eliminated intervals.
In order to eliminate outliers due to missed QRS detections, which would give rise to erroneously large intervals that may have been included in the NN interval series, a moving window average filter was applied. For each set of 5 contiguous NN intervals, a local mean was computed, excluding the central interval. If the value of the central interval was greater than twice the local average, it was considered to be an outlier and excluded from the NN interval series. This criterion was applied to each NN interval in the series. For the CHF data an average of 0.02% (0% min to 0.1% max) of the intervals were eliminated and for the normal data an average of 0.07% (0% min to 0.7% max) were eliminated. No interpolation was done for eliminated intervals. Overall, a total of 2% (0.1% min to 6% max) of the total number of RR intervals were eliminated for the CHF data and 0.08% (0% min to 0.7% max) for the normal data.
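A direct transcription of this moving-window filter (hypothetical name):

```python
import numpy as np

def remove_outliers(nn, factor=2.0):
    """Drop any NN interval larger than `factor` times the local mean of
    its four neighbours (two on each side, central interval excluded)."""
    nn = np.asarray(nn, dtype=float)
    keep = np.ones(nn.size, dtype=bool)
    for i in range(2, nn.size - 2):
        local = 0.25 * (nn[i-2] + nn[i-1] + nn[i+1] + nn[i+2])
        if nn[i] > factor * local:
            keep[i] = False
    return nn[keep]
```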
Next, we built a time series of increments between consecutive NN intervals and calculated their standard deviation. We then identified all pairs of associated increments with opposite signs and with an amplitude larger than 3 standard deviations. The values of the increments for each pair were replaced by linear interpolations of their values, and the time series of NN intervals was reconstructed by integration of the filtered increments. About 1% of the time intervals were corrected by this procedure.
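For concreteness, the two filtering steps just described can be sketched as follows (a minimal Python rendering of our reading of the procedure; the function names, and the interpretation of “associated” increments as adjacent opposite-sign pairs, are assumptions):

```python
import numpy as np

def remove_window_outliers(nn, window=5, factor=2.0):
    """Moving-window filter: drop any NN interval larger than
    `factor` times the local mean of its 4 neighbours."""
    nn = np.asarray(nn, dtype=float)
    half = window // 2
    keep = np.ones(nn.size, dtype=bool)
    for i in range(half, nn.size - half):
        local = np.concatenate((nn[i - half:i], nn[i + 1:i + 1 + half]))
        if nn[i] > factor * local.mean():
            keep[i] = False
    return nn[keep]

def correct_increment_pairs(nn, nsigma=3.0):
    """Replace pairs of adjacent, opposite-sign increments larger than
    `nsigma` standard deviations by linear interpolation, then rebuild
    the NN series by integrating the filtered increments."""
    inc = np.diff(np.asarray(nn, dtype=float))
    sigma = inc.std()
    bad = np.zeros(inc.size, dtype=bool)
    for i in range(inc.size - 1):
        big = abs(inc[i]) > nsigma * sigma and abs(inc[i + 1]) > nsigma * sigma
        if big and inc[i] * inc[i + 1] < 0:
            bad[i] = bad[i + 1] = True
    idx = np.arange(inc.size)
    inc[bad] = np.interp(idx[bad], idx[~bad], inc[~bad])
    return np.concatenate(([nn[0]], nn[0] + np.cumsum(inc)))
```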
# Mean-field nuclear structure calculations on a Basis-Spline Galerkin lattice
## 1 Introduction
In recent years, the area of nuclear structure physics has shown substantial progress and rapid growth . With detectors such as GAMMASPHERE and EUROGAM, the limits of total angular momentum and deformation in atomic nuclei have been explored, and new neutron-rich nuclei have been identified in spontaneous fission studies. Gamma-ray detectors under development such as GRETA will have improved resolving power and should allow for the identification of weakly populated states never seen before in nuclei. Particularly exciting is the proposed construction of a next-generation ISOL FACILITY in the United States which has been identified in the 1996 DOE/NSAC Long Range Plan as the highest priority for major new construction.
These experimental developments as well as recent advances in computational physics have sparked renewed interest in nuclear structure theory. In contrast to the well-understood behavior near the valley of stability, there are many open questions as we move towards the proton and neutron driplines and towards the limits in mass number (superheavy region). While the proton dripline has been explored experimentally up to Z=83, the neutron dripline represents mostly “terra incognita”. In these exotic regions of the nuclear chart, one expects to see several new phenomena: near the neutron dripline, the neutron-matter distribution will be very diffuse and of large size, giving rise to “neutron halos” and “neutron skins”. One also expects new collective modes associated with this neutron skin, e.g. the “scissors” vibrational mode or the “pygmy” resonance. In proton-rich nuclei, we have recently seen both spherical and deformed proton emitters; this “proton radioactivity” is caused by the tunneling of weakly bound protons through the Coulomb barrier. The investigation of the properties of exotic nuclei is also essential for our understanding of nucleosynthesis in stars and stellar explosions (rp- and r-process). Our primary goal is to carry out high-precision nuclear structure calculations in connection with Radioactive Ion Beam Facilities. Some of the topics of interest are the effective N-N interaction at large isospin, large pairing correlations and their density dependence, neutron halos/skins, and proton radioactivity. Specifically, we are interested in calculating observables such as the total binding energy, charge radii, densities $`\rho _{p,n}(𝐫)`$, separation energies for neutrons and protons, pairing gaps, and potential energy surfaces.
There are many theoretical approaches to nuclear structure physics. For lack of space, we mention only three of these: in the macroscopic–microscopic method, one combines the liquid drop / droplet model with a microscopic shell correction from a deformed single-particle shell model (Möller and Nix , Nazarewicz et al. ). For relatively light nuclei, it is possible to diagonalize the nuclear Hamiltonian in a shell model basis. Barrett et al. have recently carried out large-basis no-core shell model calculations for p-shell nuclei. A different approach to the interacting nuclear shell model is the Shell Model Monte Carlo (SMMC) method developed by Dean et al. which does not involve matrix diagonalization but a path integral over auxiliary fields. This method has been applied to fp-shell and medium-heavy nuclei. Finally, for heavier nuclei one utilizes either nonrelativistic or relativistic mean field theories.
## 2 Outline of the theory: HFB formalism in coordinate space
As we move away from the valley of stability, surprisingly little is known about the pairing force: For example, what is its density dependence? Large pairing correlations, which can no longer be treated as a small residual interaction, are expected near the drip lines. Neutron-rich nuclei are expected to be highly superfluid due to continuum excitation of neutron “Cooper pairs”. The Hartree-Fock-Bogoliubov (HFB) theory unifies the HF mean field theory and the BCS pairing theory into a single selfconsistent variational theory. The main challenge in the theory of exotic nuclei near the proton or neutron drip line is that the outermost nucleons are weakly bound (which implies a very large spatial extent), and that the weakly-bound states are strongly coupled to the particle continuum. This represents a major problem for mean field theories that are based on the traditional shell model basis expansion method in which one utilizes bound harmonic oscillator basis wavefunctions. As illustrated in Figure 1, a weakly bound state can still be reasonably well represented in the oscillator basis, but this is no longer true for the continuum states. In fact, Nazarewicz et al. have shown that near the driplines the harmonic oscillator basis expansion does not converge even if $`N=50`$ oscillator quanta are used. This implies that one either has to use a continuum-shell model basis or one has to solve the problem directly on a coordinate space lattice. We have chosen the latter method.
Several years ago, Umar et al. have developed a three-dimensional HF code in Cartesian coordinates using the Basis-Spline discretization technique. The program is based on a density dependent effective N-N interaction (Skyrme force) which also includes the spin-orbit interaction. The code has proven efficient and extremely accurate; it incorporates BCS and Lipkin-Nogami pairing, and various constraints. The configuration space Hartree-Fock approach has had great successes in predicting systematic trends in the global properties of nuclei, in particular the mass, radii, and deformations across large regions of the periodic table.
So far, our attempts to generalize this 3D code to include self-consistent pairing forces (Hartree-Fock-Bogoliubov theory on the lattice) have proven too ambitious. The reason may be the lack of a suitable damping operator in 3D. We have therefore taken a different approach and developed a new Hartree-Fock + BCS pairing code in cylindrical coordinates for axially symmetric nuclei, based on the Galerkin method with B-Spline test functions . The new code has been written in Fortran 90 and makes extensive use of new data concepts, dynamic memory allocation and pointer variables. Extending this code, we believe that it will be easier to implement HFB in 2D because one can use highly efficient LAPACK routines to diagonalize the lattice Hamiltonian and does not necessarily rely on a damping operator.
We outline now our basic theoretical approach for lattice HFB. As is customary, we start by expanding the nucleon field operator into a complete orthonormal set of s.p. basis states $`\varphi _i`$
$$\widehat{\psi }^{\dagger }(\mathbf{r},s)=\sum _i\widehat{c}_i^{\dagger }\varphi _i^{*}(\mathbf{r},s)$$
(1)
which leads to the Hamiltonian in occupation number representation
$$\widehat{H}=\sum _{i,j}\langle i|t|j\rangle \,\widehat{c}_i^{\dagger }\widehat{c}_j+\frac{1}{4}\sum _{i,j,m,n}\langle ij|\stackrel{~}{v}^{(2)}|mn\rangle \,\widehat{c}_i^{\dagger }\widehat{c}_j^{\dagger }\widehat{c}_n\widehat{c}_m.$$
(2)
As in the BCS theory, one performs a canonical transformation to quasiparticle operators $`\widehat{\beta },\widehat{\beta }^{\dagger }`$
$$\left(\begin{array}{c}\widehat{\beta }\\ \widehat{\beta }^{\dagger }\end{array}\right)=\left(\begin{array}{cc}U^{\dagger }& V^{\dagger }\\ V^{T}& U^{T}\end{array}\right)\left(\begin{array}{c}\widehat{c}\\ \widehat{c}^{\dagger }\end{array}\right).$$
(3)
The HFB ground state is defined as the quasiparticle vacuum
$$\widehat{\beta }_k|\mathrm{\Phi }_0\rangle =0.$$
(4)
The basic building blocks of the theory are the normal density
$$\rho _{ij}=\langle \mathrm{\Phi }_0|\widehat{c}_j^{\dagger }\widehat{c}_i|\mathrm{\Phi }_0\rangle =(V^{*}V^{T})_{ij}$$
(5)
and the pairing tensor
$$\kappa _{ij}=\langle \mathrm{\Phi }_0|\widehat{c}_j\widehat{c}_i|\mathrm{\Phi }_0\rangle =(V^{*}U^{T})_{ij}$$
(6)
from which one can form the generalized density matrix
$$\mathcal{R}=\left(\begin{array}{cc}\rho & \kappa \\ -\kappa ^{*}& 1-\rho ^{*}\end{array}\right),\qquad \mathcal{R}^{\dagger }=\mathcal{R},\qquad \mathcal{R}^{2}=\mathcal{R}.$$
(7)
Using the definition of the HFB ground state energy
$$E^{\prime }(\mathcal{R})=\langle \mathrm{\Phi }_0|\widehat{H}-\lambda \widehat{N}|\mathrm{\Phi }_0\rangle$$
(8)
we derive the equations of motion from the variational principle
$$\delta \left[E^{\prime }(\mathcal{R})-\mathrm{tr}\,\mathrm{\Lambda }(\mathcal{R}^{2}-\mathcal{R})\right]=0$$
(9)
which results in the standard HFB equations
$$[\mathcal{H},\mathcal{R}]=0$$
(10)
with the generalized single-particle Hamiltonian
$$\mathcal{H}=\left(\begin{array}{cc}h& \mathrm{\Delta }\\ -\mathrm{\Delta }^{*}& -h^{*}\end{array}\right);\qquad h=\partial E^{\prime }/\partial \rho ,\quad \mathrm{\Delta }=\partial E^{\prime }/\partial \kappa ^{*}.$$
(11)
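The algebra of Eqs. (3)–(11) can be verified numerically in a few lines. The sketch below (a toy demonstration with random matrices, not a physical interaction) assembles a Hermitian generalized Hamiltonian, reads off the Bogoliubov amplitudes from its positive-energy eigenvectors, and confirms that the generalized density matrix is idempotent and commutes with $`\mathcal{H}`$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6  # size of the s.p. basis (arbitrary for this demonstration)

# Random Hermitian "particle" Hamiltonian h and antisymmetric pairing
# field Delta; the entries are placeholders, not a physical force.
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
h = (A + A.conj().T) / 2
B = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
Delta = (B - B.T) / 2

# Generalized single-particle Hamiltonian, Eq. (11)
H = np.block([[h, Delta], [-Delta.conj(), -h.conj()]])

E, W = np.linalg.eigh(H)
pos = E > 0                        # positive quasiparticle energies
U, V = W[:N, pos], W[N:, pos]      # Bogoliubov amplitudes, Eq. (3)

rho = V.conj() @ V.T               # normal density, Eq. (5)
kappa = V.conj() @ U.T             # pairing tensor, Eq. (6)

R = np.block([[rho, kappa],
              [-kappa.conj(), np.eye(N) - rho.conj()]])
print(np.allclose(R @ R, R))       # Eq. (7): R is idempotent -> True
print(np.allclose(H @ R, R @ H))   # Eq. (10): [H, R] = 0 -> True here,
                                   # since R is built from H's eigenvectors
```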
Our goal is to transform to a coordinate space representation and solve the resulting differential equations on a lattice. For this purpose, we define two types of quasi-particle wavefunctions $`\varphi _1,\varphi _2`$
$$\varphi _1^{*}(E_n,\mathbf{r}\sigma )=\sum _iU_{in}\varphi _i(\mathbf{r}\sigma ),\qquad \varphi _2(E_n,\mathbf{r}\sigma )=\sum _iV_{in}^{*}\varphi _i(\mathbf{r}\sigma )$$
(12)
in terms of which the particle density matrix for the HFB ground state assumes a very simple mathematical structure
$$\rho _0(\mathbf{r},\sigma ,\mathbf{r}^{\prime },\sigma ^{\prime })=\langle \mathrm{\Phi }_0|\widehat{\psi }^{\dagger }(\mathbf{r}^{\prime }\sigma ^{\prime })\widehat{\psi }(\mathbf{r}\sigma )|\mathrm{\Phi }_0\rangle =\sum _{i,j}\rho _{ij}\varphi _i(\mathbf{r}\sigma )\varphi _j^{*}(\mathbf{r}^{\prime }\sigma ^{\prime })=\sum _{E_n>0}^{\mathrm{\infty }}\varphi _2(E_n,\mathbf{r}\sigma )\varphi _2^{*}(E_n,\mathbf{r}^{\prime }\sigma ^{\prime }).$$

(13)
In a similar fashion we obtain for the pairing tensor
$$\kappa _0(\mathbf{r},\sigma ,\mathbf{r}^{\prime },\sigma ^{\prime })=\langle \mathrm{\Phi }_0|\widehat{\psi }(\mathbf{r}^{\prime }\sigma ^{\prime })\widehat{\psi }(\mathbf{r}\sigma )|\mathrm{\Phi }_0\rangle =\sum _{i,j}\kappa _{ij}\varphi _i(\mathbf{r}\sigma )\varphi _j(\mathbf{r}^{\prime }\sigma ^{\prime })=\sum _{E_n>0}^{\mathrm{\infty }}\varphi _2(E_n,\mathbf{r}\sigma )\varphi _1^{*}(E_n,\mathbf{r}^{\prime }\sigma ^{\prime }).$$

(14)
For certain types of effective interactions (e.g. Skyrme forces) the HFB equations in coordinate space are local and have a structure which is reminiscent of the Dirac equation
$$\left(\begin{array}{cc}(h-\lambda )& \stackrel{~}{h}\\ \stackrel{~}{h}& -(h-\lambda )\end{array}\right)\left(\begin{array}{c}\varphi _1(\mathbf{r})\\ \varphi _2(\mathbf{r})\end{array}\right)=E\left(\begin{array}{c}\varphi _1(\mathbf{r})\\ \varphi _2(\mathbf{r})\end{array}\right),$$
(15)
where $`h`$ is the “particle” Hamiltonian and $`\stackrel{~}{h}`$ denotes the “pairing” Hamiltonian.
The various terms in $`h`$ depend on the particle densities $`\rho _q(𝐫)`$ for protons and neutrons, on the kinetic energy density $`\tau _q(𝐫)`$, and on the spin-current tensor $`J_{ij}(𝐫)`$. The pairing Hamiltonian $`\stackrel{~}{h}`$ has a similar structure, but depends on the pairing densities $`\stackrel{~}{\rho }_q(𝐫)`$, $`\stackrel{~}{\tau }_q(𝐫)`$, and $`\stackrel{~}{J}_{ij}(𝐫)`$ instead. Because of the structural similarity between the Dirac equation and the HFB equation in coordinate space, we encounter here similar computational challenges: for example, the spectrum of quasiparticle energies $`E`$ is unbounded from above and below. The spectrum is discrete for $`|E|<-\lambda `$ and continuous for $`|E|>-\lambda `$ (recall that the Fermi energy $`\lambda `$ is negative for a particle-bound system). In the case of axially symmetric nuclei, the spinor wavefunctions $`\varphi _1(𝐫)`$ and $`\varphi _2(𝐫)`$ have the structure
$$\psi ^\mathrm{\Omega }(\varphi ,r,z)=\frac{1}{\sqrt{2\pi }}\left(\begin{array}{c}e^{i(\mathrm{\Omega }-\frac{1}{2})\varphi }U(r,z)\\ e^{i(\mathrm{\Omega }+\frac{1}{2})\varphi }L(r,z)\end{array}\right).$$
(16)
## 3 Computational method: Spline-Galerkin lattice representation
For nuclei near the p/n driplines, we overcome the convergence problems of the traditional shell-model expansion method by representing the nuclear Hamiltonian on a lattice utilizing a Basis-Spline expansion . B-Splines $`B_i^M(x)`$ are piecewise-continuous polynomial functions of order $`(M-1)`$. They represent generalizations of finite elements which are B-splines with $`M=2`$. A set of fifth-order B-Splines is shown in Figure 2.
Let us now discuss the Galerkin method with B-Spline test functions. We consider an arbitrary (differential) operator equation
$$\mathcal{O}\overline{f}(x)-\overline{g}(x)=0.$$
(17)
Special cases include eigenvalue equations of the HF/HFB type where $`𝒪=h`$ and $`\overline{g}(x)=E\overline{f}(x)`$. We assume that both $`\overline{f}(x)`$ and $`\overline{g}(x)`$ are well approximated by Spline functions
$$\overline{f}(x)\approx f(x)=\sum _{i=1}^{\mathcal{N}}B_i^M(x)\,a^i,\qquad \overline{g}(x)\approx g(x)=\sum _{i=1}^{\mathcal{N}}B_i^M(x)\,b^i.$$
(18)
Because the functions $`f(x)`$ and $`g(x)`$ are approximations to the exact functions $`\overline{f}(x)`$ and $`\overline{g}(x)`$, the operator equation will in general only be approximately fulfilled
$$\mathcal{O}f(x)-g(x)=R(x).$$
(19)
The quantity $`R(x)`$ is called the residual; it is a measure of the accuracy of the lattice representation. We multiply the last equation from the left with the spline function $`B_k(x)`$ and integrate over $`x`$
$$\int v(x)\,dx\,B_k(x)\,\mathcal{O}f(x)-\int v(x)\,dx\,B_k(x)\,g(x)=\int v(x)\,dx\,B_k(x)\,R(x).$$
(20)
We have included a volume element weight function $`v(x)`$ in the integrals to emphasize that the formalism applies to arbitrary curvilinear coordinates. Various schemes exist to minimize the residual function $`R(x)`$; in the Galerkin method one requires that there be no overlap between the residual and an arbitrary B-spline function
$$\int v(x)\,dx\,B_k(x)\,R(x)=0.$$
(21)
This so-called Galerkin condition amounts to a global reduction of the residual. Applying the Galerkin condition and inserting the B-Spline expansions for $`f(x)`$ and $`g(x)`$ results in
$$\sum _i\left[\int v(x)\,dx\,B_k(x)\,\mathcal{O}B_i(x)\right]a^i-\sum _i\left[\int v(x)\,dx\,B_k(x)\,B_i(x)\right]b^i=0.$$
(22)
Defining the matrix elements
$$\mathcal{O}_{ki}=\int v(x)\,dx\,B_k(x)\,\mathcal{O}B_i(x),\qquad G_{ki}=\int v(x)\,dx\,B_k(x)\,B_i(x)$$
(23)
transforms the (differential) operator equation into a matrix $`\times `$ vector equation
$$\sum _i\mathcal{O}_{ki}\,a^i=\sum _iG_{ki}\,b^i$$
(24)
which can be implemented on modern vector or parallel computers with high efficiency. The matrix $`G_{ki}`$ is sometimes referred to as the Gram matrix; it represents the nonvanishing overlap integrals between different B-Spline functions (see Fig. 2). We eliminate the expansion coefficients $`a^i,b^i`$ in the last equation by introducing the function values at the lattice support points $`x_\alpha `$ including both interior and boundary points.
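As a concrete illustration of Eqs. (22)–(24), the following 1-D Python sketch (our own toy example, not the production code described here) assembles the Gram matrix $`G_{ki}`$ and the operator matrix for a harmonic-oscillator test Hamiltonian using Gauss–Legendre quadrature, and solves the resulting generalized eigenvalue problem; for simplicity it works directly with the expansion coefficients rather than with function values at the support points:

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.linalg import eigh

# Model problem: -u''/2 + (x^2/2) u = E u on a box wide enough that
# u ~ 0 at the walls; the domain, degree, and knot number are our choices.
a, b, deg, nbreak = -8.0, 8.0, 5, 30
breaks = np.linspace(a, b, nbreak)
knots = np.concatenate((np.full(deg, a), breaks, np.full(deg, b)))
nbasis = len(knots) - deg - 1

def basis(i, x, deriv=0):
    c = np.zeros(nbasis); c[i] = 1.0
    spl = BSpline(knots, c, deg)
    return spl(x) if deriv == 0 else spl.derivative(deriv)(x)

# Gauss-Legendre nodes/weights assembled interval by interval
xg, wg = np.polynomial.legendre.leggauss(deg + 3)
x = np.concatenate([0.5*(r-l)*xg + 0.5*(r+l) for l, r in zip(breaks, breaks[1:])])
w = np.concatenate([0.5*(r-l)*wg for l, r in zip(breaks, breaks[1:])])

Bmat = np.array([basis(i, x) for i in range(nbasis)])      # B_i(x_q)
Bder = np.array([basis(i, x, 1) for i in range(nbasis)])   # B_i'(x_q)
V = 0.5 * x**2

# Galerkin matrices, Eq. (23); the kinetic term is integrated by parts,
# so the boundary terms vanish once u(a) = u(b) = 0 is imposed.
G = (Bmat * w) @ Bmat.T
O = 0.5 * (Bder * w) @ Bder.T + (Bmat * (w * V)) @ Bmat.T

# Impose Dirichlet boundaries by dropping the first and last B-splines
sl = slice(1, nbasis - 1)
E = eigh(O[sl, sl], G[sl, sl], eigvals_only=True)
print(E[:4])   # -> approximately [0.5, 1.5, 2.5, 3.5]
```

The lowest eigenvalues converge rapidly to the exact values $`E_n=n+1/2`$, illustrating the accuracy attainable with a Spline-Galerkin representation.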
The upper $`(U)`$ and lower $`(L)`$ components of the spinor wavefunctions defined earlier are represented on the 2-D lattice $`(r_\alpha ,z_\beta )`$ by a product of Basis Splines $`B_i(x)`$ evaluated at the lattice support points
$$U(r_\alpha ,z_\beta )=\sum _{i,j}B_i(r_\alpha )B_j(z_\beta )\,U^{ij},\qquad L(r_\alpha ,z_\beta )=\sum _{i,j}B_i(r_\alpha )B_j(z_\beta )\,L^{ij}.$$
(25)
We are also extending our previous B-spline work to include nonlinear grids. Use of a nonlinear lattice should be most useful for loosely bound systems near the proton or neutron drip lines. Non-Cartesian coordinates necessitate the use of fixed endpoint boundary conditions; much effort has been directed toward improving the treatment of these boundaries .
## 4 Numerical tests and results
We expect our Spline techniques to be superior to the traditional harmonic oscillator basis expansion method in cases of very strong nuclear deformation. To illustrate this point, we have performed a numerical test using a phenomenological (Woods-Saxon) deformed shell model potential. We calculate the single-particle energy spectrum for neutrons in $`{}^{40}`$Ca for quadrupole deformations ranging from strong oblate ($`\beta _2=-1.25`$) to extreme prolate ($`\beta _2=+2.25`$). The results are shown in Fig. 3. Evidently, for $`\beta _2=0`$ we correctly reproduce the spherical shell structure of magic nuclei. As $`\beta _2`$ approaches large positive values our s.p. potential approaches the structure of two separated potential wells; as expected, we observe pairs of levels with opposite parity that are becoming degenerate in energy. The largest quadrupole deformation corresponds physically to a symmetric fission configuration. Clearly, such configurations cannot be described in a single oscillator basis, which confirms the numerical superiority of the B-Spline lattice technique.
In a second test calculation, we have investigated the properties of a nucleus near the neutron drip line. During the last decade the discovery of a ‘neutron halo’ in several neutron-rich isotopes generated a great deal of interest in the area of weakly bound quantum systems. The halo effect was first observed in $`{}_{3}^{11}`$Li, which consists of three protons and six neutrons in a central core and two planetary neutrons which comprise the halo. By adjusting the depth of the Woods-Saxon potential so that the separation energy of the last bound neutron is only $`10`$ keV, i.e. very close to the continuum, we were able to determine this neutron wavefunction on the lattice which shows a very large spatial extent (see Fig. 4). We conclude that the B-Spline lattice techniques are well-suited for representing weakly bound states near the drip lines; a similar calculation in the basis expansion method would require a large number of oscillator shells.
We now discuss our numerical results for the selfconsistent Hartree-Fock calculations with Skyrme-M interaction and BCS pairing. This is a special case of the HFB equation with a constant pairing matrix element. In Fig. 5 we display the proton density for a heavy nucleus, $`{}_{64}^{154}`$Gd, calculated with our new 2-D (HF+BCS) code. It should be noted that ALL 154 nucleons are treated dynamically (no inert core approximation). The theoretical charge density looks quite similar to the experimental result which is shown on the right hand side.
For several spherical nuclei, we have also compared the selfconsistent s.p. energy levels of our 2-D Spline-Galerkin code with a fully converged 1-D radial calculation. The result is shown in Table 1.
### 4.1 Plans and Future Directions
Having validated our new (HF+BCS) code on a 2D lattice with the Spline-Galerkin method, we plan to proceed as follows: We are currently working on the 2D HFB implementation with a pairing delta-force. After that, we will generalize the code utilizing the full SkP force with mean pairing field and pairing spin-orbit term. We will also add appropriate constraints, e.g. $`Q_{20},Q_{30},\omega j_x`$ for calculating potential energy surfaces and rotational bands. As we compare the observables (e.g. total binding energy, charge radii, densities $`\rho _{p,n}(𝐫)`$, separation energies for neutrons and protons, pairing gaps) with experimental data from the RIB facilities, it will almost certainly be necessary to develop new effective N-N interactions as we move farther away from the stability line towards the p/n drip lines.
## Acknowledgments
This research project was sponsored by the U.S. Department of Energy under contract No. DE-FG02-96ER40975 with Vanderbilt University. Some of the numerical calculations were carried out on CRAY supercomputers at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory. We also acknowledge travel support from the NATO Collaborative Research Grants Program.
# Type Ia Supernovae and Their Cosmological Implications
## 1 Introduction
Supernovae (SNe) come in two main varieties (see Filippenko 1997b for a review). Those whose optical spectra exhibit hydrogen are classified as Type II, while hydrogen-deficient SNe are designated Type I. SNe I are further subdivided according to the appearance of the early-time spectrum: SNe Ia are characterized by strong absorption near 6150 Å (now attributed to Si II), SNe Ib lack this feature but instead show prominent He I lines, and SNe Ic have neither the Si II nor the He I lines. SNe Ia are believed to result from the thermonuclear disruption of carbon-oxygen white dwarfs, while SNe II come from core collapse in massive supergiant stars. The latter mechanism probably produces most SNe Ib/Ic as well, but the progenitor stars previously lost their outer layers of hydrogen or even helium.
It has long been recognized that SNe Ia may be very useful distance indicators for a number of reasons (Branch & Tammann 1992; Branch 1998, and references therein). (1) They are exceedingly luminous, with peak absolute blue magnitudes averaging $`-19.2`$ if the Hubble constant, $`H_0`$, is 65 km s<sup>-1</sup> Mpc<sup>-1</sup>. (2) “Normal” SNe Ia have small dispersion among their peak absolute magnitudes ($`\sigma <0.3`$ mag). (3) Our understanding of the progenitors and explosion mechanism of SNe Ia is on a reasonably firm physical basis. (4) Little cosmic evolution is expected in the peak luminosities of SNe Ia, and it can be modeled. This makes SNe Ia superior to galaxies as distance indicators. (5) One can perform local tests of various possible complications and evolutionary effects by comparing nearby SNe Ia in different environments.
Research on SNe Ia in the 1990s has demonstrated their enormous potential as cosmological distance indicators. Although there are subtle effects that must indeed be taken into account, it appears that SNe Ia provide among the most accurate values of $`H_0`$, $`q_0`$ (the deceleration parameter), $`\mathrm{\Omega }_M`$ (the matter density), and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ (the cosmological constant, $`\mathrm{\Lambda }c^2/3H_0^2`$).
There are now two major teams involved in the systematic investigation of high-redshift SNe Ia for cosmological purposes. The “Supernova Cosmology Project” (SCP) is led by Saul Perlmutter of the Lawrence Berkeley Laboratory, while the “High-Z Supernova Search Team” (HZT) is led by Brian Schmidt of the Mt. Stromlo and Siding Springs Observatories. One of us (A.V.F.) has worked with both teams, but his primary allegiance is now with the HZT. In this lecture we present results from the HZT.
## 2 Homogeneity and Heterogeneity
The traditional way in which SNe Ia have been used for cosmological distance determinations has been to assume that they are perfect “standard candles” and to compare their observed peak brightness with those of SNe Ia in galaxies whose distances have been independently determined (e.g., Cepheids). The rationale is that SNe Ia exhibit relatively little scatter in their peak blue luminosity ($`\sigma _B\approx 0.4`$–0.5 mag; Branch & Miller 1993), and even less if “peculiar” or highly reddened objects are eliminated from consideration by using a color cut. Moreover, the optical spectra of SNe Ia are usually quite homogeneous, if care is taken to compare objects at similar times relative to maximum brightness (Riess et al. 1997, and references therein). Branch, Fisher, & Nugent (1993) estimate that over 80% of all SNe Ia discovered thus far are “normal.”
From a Hubble diagram constructed with unreddened, moderately distant SNe Ia ($`z<0.1`$) for which peculiar motions should be small and relative distances (as given by ratios of redshifts) are accurate, Vaughan et al. (1995) find that
$$\langle M_B(\mathrm{max})\rangle =(-19.74\pm 0.06)+5\,\mathrm{log}\,(H_0/50)\ \mathrm{mag}.$$
(2.1)
In a series of papers, Sandage et al. (1996) and Saha et al. (1997) combine similar relations with Hubble Space Telescope (HST) Cepheid distances to the host galaxies of seven SNe Ia to derive $`H_0=57\pm 4`$ km s<sup>-1</sup> Mpc<sup>-1</sup>.
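For illustration, Eq. (2.1) can be inverted directly once the absolute peak magnitude is calibrated externally; in the short sketch below the adopted calibration value is hypothetical, chosen only to show the arithmetic:

```python
def hubble_constant(M_B):
    """Invert Eq. (2.1): <M_B(max)> = (-19.74 +/- 0.06) + 5 log10(H0/50),
    returning H0 in km/s/Mpc for a given externally calibrated M_B."""
    return 50.0 * 10.0 ** ((M_B + 19.74) / 5.0)

print(hubble_constant(-19.47))   # hypothetical M_B -> H0 ~ 57 km/s/Mpc
```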
Over the past decade it has become clear, however, that SNe Ia do not constitute a perfectly homogeneous subclass (e.g., Filippenko 1997a,b). In retrospect this should have been obvious: the Hubble diagram for SNe Ia exhibits scatter larger than the photometric errors, the dispersion actually rises when reddening corrections are applied (under the assumption that all SNe Ia have uniform, very blue intrinsic colors at maximum; van den Bergh & Pazder 1992; Sandage & Tammann 1993), and there are some significant outliers whose anomalous magnitudes cannot possibly be explained by extinction alone.
Spectroscopic and photometric peculiarities have been noted with increasing frequency in well-observed SNe Ia. A striking case is SN 1991T; its pre-maximum spectrum did not exhibit Si II or Ca II absorption lines, yet two months past maximum the spectrum was nearly indistinguishable from that of a classical SN Ia (Filippenko et al. 1992b; Phillips et al. 1992). The light curves of SN 1991T were slightly broader than the SN Ia template curves, and the object was probably somewhat more luminous than average at maximum. The reigning champion of well observed, peculiar SNe Ia is SN 1991bg (Filippenko et al. 1992a; Leibundgut et al. 1993; Turatto et al. 1996). At maximum brightness it was subluminous by 1.6 mag in $`V`$ and 2.5 mag in $`B`$, its colors were intrinsically red, and its spectrum was peculiar (with a deep absorption trough due to Ti II). Moreover, the decline from maximum brightness was very steep, the $`I`$-band light curve did not exhibit a secondary maximum like normal SNe Ia, and the velocity of the ejecta was unusually low. The photometric heterogeneity among SNe Ia is well demonstrated by Suntzeff (1996) with five objects having excellent $`BVRI`$ light curves.
## 3 Cosmological Uses
### 3.1 Luminosity Corrections and Nearby Supernovae
Although SNe Ia can no longer be considered perfect “standard candles,” they are still exceptionally useful for cosmological distance determinations. Excluding those of low luminosity (which are hard to find, especially at large distances), most SNe Ia are nearly standard (Branch, Fisher, & Nugent 1993). Also, after many tenuous suggestions (e.g., Pskovskii 1977, 1984; Branch 1981), convincing evidence has finally been found for a correlation between light-curve shape and luminosity. Phillips (1993) achieved this by quantifying the photometric differences among a set of nine well-observed SNe Ia using a parameter, $`\mathrm{\Delta }m_{15}(B)`$, which measures the total drop (in $`B`$ magnitudes) from maximum to $`t=15`$ days after $`B`$ maximum. In all cases the host galaxies of his SNe Ia have accurate relative distances from surface brightness fluctuations or from the Tully-Fisher relation. In $`B`$, the SNe Ia exhibit a total spread of $`2`$ mag in maximum luminosity, and the intrinsically bright SNe Ia clearly decline more slowly than dim ones. The range in absolute magnitude is smaller in $`V`$ and $`I`$, making the correlation with $`\mathrm{\Delta }m_{15}(B)`$ less steep than in $`B`$, but it is present nonetheless.
Using SNe Ia discovered during the Calán/Tololo survey ($`z<0.1`$), Hamuy et al. (1995, 1996b) confirm and refine the Phillips (1993) correlation between $`\mathrm{\Delta }m_{15}(B)`$ and $`M_{max}(B,V)`$: it is not as steep as had been claimed. Apparently the slope is steep only at low luminosities; thus, objects such as SN 1991bg skew the slope of the best-fitting single straight line. Hamuy et al. reduce the scatter in the Hubble diagram of normal, unreddened SNe Ia to only 0.17 mag in $`B`$ and 0.14 mag in $`V`$; see also Tripp (1997).
In a similar effort, Riess, Press, & Kirshner (1995a) show that the luminosity of SNe Ia correlates with the detailed shape of the light curve, not just with its initial decline. They form a “training set” of light-curve shapes from 9 well-observed SNe Ia having known relative distances, including very peculiar objects (e.g., SN 1991bg). When the light curves of an independent sample of 13 SNe Ia (the Calán/Tololo survey) are analyzed with this set of basis vectors, the dispersion in the $`V`$-band Hubble diagram drops from 0.50 to 0.21 mag, and the Hubble constant rises from $`53\pm 11`$ to $`67\pm 7`$ km s<sup>-1</sup> Mpc<sup>-1</sup>, comparable to the conclusions of Hamuy et al. (1995, 1996b). About half of the rise in $`H_0`$ results from a change in the position of the “ridge line” defining the linear Hubble relation, and half is from a correction to the luminosity of some of the local calibrators which appear to be unusually luminous (e.g., SN 1972E).
By using light-curve shapes measured through several different filters, Riess, Press, & Kirshner (1996a) extend their analysis and objectively eliminate the effects of interstellar extinction: a SN Ia that has an unusually red $`B-V`$ color at maximum brightness is assumed to be intrinsically subluminous if its light curves rise and decline quickly, or of normal luminosity but significantly reddened if its light curves rise and decline slowly. With a set of 20 SNe Ia consisting of the Calán/Tololo sample and their own objects, Riess, Press, & Kirshner (1996a) show that the dispersion decreases from 0.52 mag to 0.12 mag after application of this “multi-color light curve shape” (MLCS) method. The results from a very recent, expanded set of nearly 50 SNe Ia indicate that the dispersion decreases from 0.44 mag to 0.15 mag (Riess et al. 1999b). The resulting Hubble constant is $`65\pm 2`$ (statistical) km s<sup>-1</sup> Mpc<sup>-1</sup>, with an additional systematic and zero-point uncertainty of $`\pm 5`$ km s<sup>-1</sup> Mpc<sup>-1</sup>. Riess, Press, & Kirshner (1996a) also show that the Hubble flow is remarkably linear; indeed, SNe Ia now constitute the best evidence for linearity. Finally, they argue that the dust affecting SNe Ia is not of circumstellar origin, and show quantitatively that the extinction curve in external galaxies typically does not differ from that in the Milky Way (cf. Branch & Tammann 1992; but see Tripp 1998).
Riess, Press, & Kirshner (1995b) capitalize on another use of SNe Ia: determination of the Milky Way Galaxy’s peculiar motion relative to the Hubble flow. They select galaxies whose distances were accurately determined from SNe Ia, and compare their observed recession velocities with those expected from the Hubble law alone. The speed and direction of the Galaxy’s motion are consistent with what is found from COBE (Cosmic Background Explorer) studies of the microwave background, but not with the results of Lauer & Postman (1994).
The advantage of systematically correcting the luminosities of SNe Ia at high redshifts rather than trying to isolate “normal” ones seems clear in view of recent evidence that the luminosity of SNe Ia may be a function of stellar population. If the most luminous SNe Ia occur in young stellar populations (Hamuy et al. 1995, 1996a; Branch, Romanishin, & Baron 1996), then we might expect the mean peak luminosity of high-redshift SNe Ia to differ from that of a local sample. Alternatively, the use of Cepheids (Population I objects) to calibrate local SNe Ia can lead to a zero point that is too luminous. On the other hand, as long as the physics of SNe Ia is essentially the same in young stellar populations locally and at high redshift, we should be able to adopt the luminosity correction methods (photometric and spectroscopic) found from detailed studies of nearby samples of SNe Ia.
## 4 High-Redshift Supernovae
### 4.1 The Search
These same techniques can be applied to construct a Hubble diagram with high-redshift SNe, from which the value of $`q_0`$ can be determined. With enough objects spanning a range of redshifts, we can determine $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$ independently (e.g., Goobar & Perlmutter 1995). Contours of peak apparent $`R`$-band magnitude for SNe Ia at two redshifts have different slopes in the $`\mathrm{\Omega }_M`$–$`\mathrm{\Omega }_\mathrm{\Lambda }`$ plane, and the regions of intersection provide the answers we seek.
Based on the pioneering work of Norgaard-Nielsen et al. (1989), whose goal was to find SNe in moderate-redshift clusters of galaxies, Perlmutter et al. (1997) and our HZT (Schmidt et al. 1998) devised a strategy that almost guarantees the discovery of many faint, distant SNe Ia on demand, during a predetermined set of nights. This “batch” approach to studying distant SNe allows follow-up spectroscopy and photometry to be scheduled in advance, resulting in a systematic study not possible with random discoveries. Most of the searched fields are equatorial, permitting follow-up from both hemispheres.
Our approach is simple in principle; see Schmidt et al. (1998) for details, and for a description of our first high-redshift SN Ia (SN 1995K). Pairs of first-epoch images are obtained with the CTIO or CFHT 4-m telescopes and wide-angle imaging cameras during the nights just after new moon, followed by second-epoch images 3–4 weeks later. (Pairs of images permit removal of cosmic rays, asteroids, and distant Kuiper-belt objects.) These are compared immediately using well-tested software, and new SN candidates are identified in the second-epoch images. Spectra are obtained as soon as possible after discovery to verify that the objects are SNe Ia and determine their redshifts. Each team has already found over 70 SNe in concentrated batches, as reported in numerous IAU Circulars (e.g., Perlmutter et al. 1995 — 11 SNe with $`0.16<z<0.65`$; Suntzeff et al. 1996 — 17 SNe with $`0.09<z<0.84`$).
Intensive photometry of the SNe Ia commences within a few days after procurement of the second-epoch images; it is continued throughout the ensuing and subsequent dark runs. In a few cases HST images are obtained. As expected, most of the discoveries are on the rise or near maximum brightness. When possible, the SNe are observed in filters which closely match the redshifted $`B`$ and $`V`$ bands; this way, the $`K`$-corrections become only a second-order effect (Kim, Goobar, & Perlmutter 1996). Custom-designed filters for redshifts centered on 0.35 and 0.45 are used by our HZT (Schmidt et al. 1998), when appropriate. We try to obtain excellent multi-color light curves, so that reddening and luminosity corrections can be applied (Riess, Press, & Kirshner 1996a; Hamuy et al. 1996a,b).
Although SNe in the magnitude range 22–22.5 can sometimes be spectroscopically confirmed with 4-m class telescopes, the signal-to-noise ratios are low, even after several hours of integration. Certainly Keck is required for the fainter objects (mag 22.5–24.5). With Keck, not only can we rapidly confirm a large number of candidate SNe, but we can search for peculiarities in the spectra that might indicate evolution of SNe Ia with redshift. Moreover, high-quality spectra allow us to measure the age of a supernova: we have developed a method for automatically comparing the spectrum of a SN Ia with a library of spectra corresponding to many different epochs in the development of SNe Ia (Riess et al. 1997). Our technique also has great practical utility at the telescope: we can determine the age of a SN “on the fly,” within half an hour after obtaining its spectrum. This allows us to rapidly decide which SNe are best for subsequent photometric follow-up, and we immediately alert our collaborators on other telescopes.
### 4.2 Results
First, we note that the light curves of high-redshift SNe Ia are broader than those of nearby SNe Ia; the initial indications of Leibundgut et al. (1996) and Goldhaber et al. (1997) are amply confirmed with our larger samples. Quantitatively, the amount by which the light curves are “stretched” is consistent with a factor of $`1+z`$, as expected if redshifts are produced by the expansion of space rather than by “tired light.” We were also able to demonstrate this spectroscopically at the $`2\sigma `$ confidence level for a single object: the spectrum of SN 1996bj ($`z=0.57`$) evolved more slowly than those of nearby SNe Ia, by a factor consistent with $`1+z`$ (Riess et al. 1997). More recently, we have used observations of SN 1997ex ($`z=0.36`$) at three epochs to conclusively verify the effects of time dilation: temporal changes in the spectra are slower than those of nearby SNe Ia by roughly the expected factor of 1.36 (Filippenko et al. 1999).
Following our Spring 1997 campaign, in which we found a SN with $`z=0.97`$ (SN 1997ck), and for which we obtained HST follow-up of three SNe, we published our first substantial results concerning the density of the Universe (Garnavich et al. 1998a): $`\mathrm{\Omega }_M=0.35\pm 0.3`$ under the assumption that $`\mathrm{\Omega }_{\mathrm{total}}=1`$, or $`\mathrm{\Omega }_M=-0.1\pm 0.5`$ under the assumption that $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$. Our independent analysis of 10 SNe Ia using the “snapshot” distance method (with which conclusions are drawn from sparsely observed SNe Ia) gives quantitatively similar conclusions (Riess et al. 1998a).
Our next results, obtained from a total of 16 high-$`z`$ SNe Ia, were announced at a conference in February 1998 (Filippenko & Riess 1998) and formally published in September 1998 (Riess et al. 1998b). The Hubble diagram (from a refined version of the MLCS method; Riess et al. 1998b) for the 10 best-observed high-$`z`$ SNe Ia is given in Figure 1, while Figure 2 illustrates the derived confidence contours in the $`\mathrm{\Omega }_M`$–$`\mathrm{\Omega }_\mathrm{\Lambda }`$ plane. We confirm our previous suggestion that $`\mathrm{\Omega }_M`$ is low. Even more exciting, however, is our conclusion that $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is nonzero at the 3$`\sigma `$ statistical confidence level. With the MLCS method applied to the full set of 16 SNe Ia, our formal results are $`\mathrm{\Omega }_M=0.24\pm 0.10`$ if $`\mathrm{\Omega }_{\mathrm{total}}=1`$, or $`\mathrm{\Omega }_M=-0.35\pm 0.18`$ (unphysical) if $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$. If we demand that $`\mathrm{\Omega }_M=0.2`$, then the best value for $`\mathrm{\Omega }_\mathrm{\Lambda }`$ is $`0.66\pm 0.21`$. These conclusions do not change significantly if only the 9 best-observed SNe Ia are used (Fig. 1; $`\mathrm{\Omega }_M=0.28\pm 0.10`$ if $`\mathrm{\Omega }_{\mathrm{total}}=1`$). The $`\mathrm{\Delta }m_{15}(B)`$ method yields similar results; if anything, the case for a positive cosmological constant strengthens. (For brevity, in this paper we won’t quote the $`\mathrm{\Delta }m_{15}(B)`$ numbers; see Riess et al. 1998b.) From an essentially independent set of 42 high-$`z`$ SNe Ia (only 2 objects in common), the SCP obtains almost identical results (Perlmutter et al. 1999). This suggests that neither team has made a large, simple blunder!
Very recently, we have calibrated an additional sample of 9 high-$`z`$ SNe Ia, including several observed with HST. Preliminary analysis suggests that the new data are entirely consistent with the old results, thereby strengthening their statistical significance. Figure 3 shows the tentative Hubble diagram; full details will be published elsewhere.
Though not drawn in Figure 2, the expected confidence contours from measurements of the angular scale of the first Doppler peak of the cosmic microwave background radiation (CMBR) are nearly perpendicular to those provided by SNe Ia (e.g., Zaldarriaga et al. 1997; Eisenstein, Hu, & Tegmark 1998); thus, the two techniques provide complementary information. The space-based CMBR experiments in the next decade (e.g., MAP, Planck) will give very narrow ellipses, but a stunning result is already provided by existing measurements (Hancock et al. 1998; Lineweaver & Barbosa 1998): our analysis of the data in Riess et al. (1998b) demonstrates that $`\mathrm{\Omega }_M+\mathrm{\Omega }_\mathrm{\Lambda }=0.94\pm 0.26`$, when the SN and CMBR constraints are combined (Garnavich et al. 1998b; see also Lineweaver 1998, Efstathiou et al. 1999, and others). As shown in Figure 4, the confidence contours are nearly circular, instead of highly eccentric ellipses as in Figure 2. We eagerly look forward to future CMBR measurements of even greater precision.
The dynamical age of the Universe can be calculated from the cosmological parameters. In an empty Universe with no cosmological constant, the dynamical age is simply the “Hubble time” (i.e., the inverse of the Hubble constant); there is no deceleration. SNe Ia yield $`H_0=65\pm 2`$ km s<sup>-1</sup> Mpc<sup>-1</sup> (statistical uncertainty only), and a Hubble time of $`15.1\pm 0.5`$ Gyr. For a more complex cosmology, integrating the velocity of the expansion from the current epoch ($`z=0`$) to the beginning ($`z=\mathrm{\infty }`$) yields an expression for the dynamical age. As shown in detail by Riess et al. (1998b), we obtain a value of $`14.2_{-0.8}^{+1.0}`$ Gyr using the likely range for $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$ that we measure. (The precision is so high because our experiment is sensitive to roughly the difference between $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$, and the dynamical age also varies approximately this way.) Including the systematic uncertainty of the Cepheid distance scale, which may be up to 10%, a reasonable estimate of the dynamical age is $`14.2\pm 1.7`$ Gyr.
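The integral just described is easy to evaluate numerically; the sketch below (our own, with illustrative parameter choices) computes the dynamical age for a given $`(H_0,\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$. Note that the quoted $`14.2_{-0.8}^{+1.0}`$ Gyr folds in the full likelihood over $`(\mathrm{\Omega }_M,\mathrm{\Omega }_\mathrm{\Lambda })`$, so a single parameter pair need not reproduce it exactly:

```python
import numpy as np
from scipy.integrate import quad

def age_gyr(H0, Om, OL):
    """t0 = (1/H0) * Int_0^inf dz / [(1+z) E(z)], where
    E(z) = sqrt(Om (1+z)^3 + Ok (1+z)^2 + OL) and Ok = 1 - Om - OL."""
    Ok = 1.0 - Om - OL
    E = lambda z: np.sqrt(Om*(1+z)**3 + Ok*(1+z)**2 + OL)
    integral, _ = quad(lambda z: 1.0 / ((1+z) * E(z)), 0.0, np.inf)
    return (977.8 / H0) * integral   # 977.8/H0 = Hubble time in Gyr

print(age_gyr(65.0, 0.24, 0.76))   # flat Lambda-dominated model: ~15 Gyr
print(age_gyr(65.0, 1.00, 0.00))   # Einstein-de Sitter: ~10 Gyr
print(age_gyr(65.0, 0.00, 0.00))   # empty universe: the Hubble time, ~15 Gyr
```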
This result is consistent with ages determined from various other techniques such as the cooling of white dwarfs (Galactic disk $`>9.5`$ Gyr; Oswalt et al. 1996), radioactive dating of stars via the thorium and europium abundances ($`15.2\pm 3.7`$ Gyr; Cowan et al. 1997), and studies of globular clusters (10–15 Gyr, depending on whether Hipparcos parallaxes of Cepheids are adopted; Gratton et al. 1997; Chaboyer et al. 1998). Evidently, there is no longer a problem that the age of the oldest stars is greater than the dynamical age of the Universe.
## 5 Discussion
High-redshift SNe Ia are observed to be dimmer than expected in an empty Universe (i.e., $`\mathrm{\Omega }_M=0`$) with no cosmological constant. A cosmological explanation for this observation is that a positive vacuum energy density accelerates the expansion. Mass density in the Universe exacerbates this problem, requiring even more vacuum energy. For a Universe with $`\mathrm{\Omega }_M=0.2`$, the average MLCS distance moduli of the well-observed SNe are 0.25 mag larger (i.e., 12.5% greater distances) than the prediction from $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$. The average MLCS distance moduli are still 0.18 mag bigger than required for a 68.3% (1$`\sigma `$) consistency for a Universe with $`\mathrm{\Omega }_M=0.2`$ and without a cosmological constant. The derived value of $`q_0`$ is $`-0.75\pm 0.32`$, implying that the expansion of the Universe is accelerating. It appears that the Universe will expand eternally.
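The quoted excess can be checked with a short numerical integration of the luminosity distance (a sketch of our own; the Hubble constant cancels in the difference of distance moduli):

```python
import numpy as np
from scipy.integrate import quad

C = 299792.458  # speed of light, km/s

def lum_dist_mpc(z, H0, Om, OL):
    """Luminosity distance, handling open/flat/closed curvature."""
    Ok = 1.0 - Om - OL
    E = lambda zz: np.sqrt(Om*(1+zz)**3 + Ok*(1+zz)**2 + OL)
    I, _ = quad(lambda zz: 1.0 / E(zz), 0.0, z)
    dH = C / H0
    if Ok > 0:
        dC = dH * np.sinh(np.sqrt(Ok) * I) / np.sqrt(Ok)
    elif Ok < 0:
        dC = dH * np.sin(np.sqrt(-Ok) * I) / np.sqrt(-Ok)
    else:
        dC = dH * I
    return (1.0 + z) * dC

def mu(z, H0, Om, OL):
    """Distance modulus m - M = 5 log10(d_L / 10 pc)."""
    return 5.0 * np.log10(lum_dist_mpc(z, H0, Om, OL)) + 25.0

# Excess of a Lambda-dominated flat model over an open, Lambda = 0 model
# at z = 0.5; for these illustrative parameter pairs it is ~0.23 mag,
# of the order of the 0.25 mag excess quoted above.
print(mu(0.5, 65, 0.24, 0.76) - mu(0.5, 65, 0.20, 0.0))
```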
### 5.1 Systematic Effects
A very important point is that the dispersion in the peak luminosities of SNe Ia ($`\sigma =0.15`$ mag) is low after application of the MLCS method of Riess et al. (1996a, 1998b). With 16 SNe Ia, our effective uncertainty is $`0.15/4\approx 0.04`$ mag, less than the expected difference of 0.25 mag between universes with $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ and 0.76 (and low $`\mathrm{\Omega }_M`$); see Figure 1. Systematic uncertainties of even 0.05 mag (e.g., in the extinction) are significant, and at 0.1 mag they dominate any decrease in statistical uncertainty gained with a larger sample of SNe Ia. Thus, our conclusions with only 16 SNe Ia are already limited by systematic uncertainties, not by statistical uncertainties — but of course the 10 new objects further strengthen our case.
Here we explore possible systematic effects that might invalidate our results. Of those that can be quantified at the present time, none appears to reconcile the data with $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$. Further details can be found in Schmidt et al. (1998) and especially Riess et al. (1998b).
#### 5.1.1 Evolution
The local sample of SNe Ia displays a weak correlation between light-curve shape (or luminosity) and host galaxy type, in the sense that the most luminous SNe Ia with the broadest light curves only occur in late-type galaxies. Both early-type and late-type galaxies provide hosts for dimmer SNe Ia with narrower light curves (Hamuy et al. 1996a). The mean luminosity difference for SNe Ia in late-type and early-type galaxies is $`0.3`$ mag. In addition, the SN Ia rate per unit luminosity is almost twice as high in late-type galaxies as in early-type galaxies at the present epoch (Cappellaro et al. 1997). These results may indicate an evolution of SNe Ia with progenitor age. Possibly relevant physical parameters are the mass, metallicity, and C/O ratio of the progenitor (Höflich, Wheeler, & Thielemann 1998).
We expect that the relation between light-curve shape and luminosity that applies to the range of stellar populations and progenitor ages encountered in the late-type and early-type hosts in our nearby sample should also be applicable to the range we encounter in our distant sample. In fact, the range of age for SN Ia progenitors in the nearby sample is likely to be larger than the change in mean progenitor age over the 4–6 Gyr lookback time to the high-$`z`$ sample. Thus, to first order at least, our local sample should correct our distances for progenitor or age effects.
We can place empirical constraints on the effect that a change in the progenitor age would have on our SN Ia distances by comparing subsamples of low-redshift SNe Ia believed to arise from old and young progenitors. In the nearby sample, the mean difference between the distances for the early-type (8 SNe Ia) and late-type hosts (19 SNe Ia), at a given redshift, is 0.04 $`\pm `$ 0.07 mag from the MLCS method. This difference is consistent with zero. Even if the SN Ia progenitors evolved from one population at low redshift to the other at high redshift, we still would not explain the surplus in mean distance of 0.25 mag over the $`\mathrm{\Omega }_\mathrm{\Lambda }=0`$ prediction.
Moreover, it is reassuring that initial comparisons of high-redshift SN Ia spectra appear remarkably similar to those observed at low redshift. For example, the spectral characteristics of SN 1998ai ($`z=0.49`$) appear to be essentially indistinguishable from those of low-redshift SNe Ia; see Figure 5. In fact, the most obviously discrepant spectrum in this figure is the fourth one from the top, that of SN 1994B ($`z=0.09`$); it is intentionally included as a “decoy” that illustrates the degree to which even the spectra of nearby SNe Ia can vary.
We expect that our local calibration will work well at eliminating any pernicious drift in the supernova distances between the local and distant samples. However, we need to be vigilant for changes in the properties of SNe Ia at significant lookback times. Our distance measurements could be especially sensitive to changes in the colors of SNe Ia for a given light-curve shape.
#### 5.1.2 Extinction
Our SN Ia distances have the important advantage of including corrections for interstellar extinction occurring in the host galaxy and the Milky Way. Extinction corrections based on the relation between SN Ia colors and luminosity improve distance precision for a sample of nearby SNe Ia that includes objects with substantial extinction (Riess, Press, & Kirshner 1996a); the scatter in the Hubble diagram is much reduced. Moreover, the consistency of the measured Hubble flow from SNe Ia with late-type and early-type hosts (see above) shows that the extinction corrections applied to dusty SNe Ia at low redshift do not alter the expansion rate from its value measured from SNe Ia in low dust environments.
In practice, our high-redshift SNe Ia appear to suffer negligible extinction; their $`B-V`$ colors at maximum brightness are normal, suggesting little color excess due to reddening. Riess, Press, & Kirshner (1996b) found indications that the Galactic ratios between selective absorption and color excess are similar for host galaxies in the nearby ($`z\lesssim 0.1`$) Hubble flow. Yet, what if these ratios changed with lookback time (e.g., Aguirre 1999)? Could an evolution in dust grain size descending from ancestral interstellar “pebbles” at higher redshifts cause us to underestimate the extinction? Large dust grains would not imprint the reddening signature of typical interstellar extinction upon which our corrections rely.
However, viewing our SNe through such grey interstellar grains would also induce a dispersion in the derived distances. Using the results of Hatano, Branch, & Deaton (1998), Riess et al. (1998b) estimate that the expected dispersion would be 0.40 mag if the mean grey extinction were 0.25 mag (the value required to explain the measured MLCS distances without a cosmological constant). This is significantly larger than the 0.21 mag dispersion observed in the high-redshift MLCS distances. Furthermore, most of the observed scatter is already consistent with the estimated statistical errors, leaving little to be caused by grey extinction. Nevertheless, if we assumed that all of the observed scatter were due to grey extinction, the mean shift in the SN Ia distances would only be 0.05 mag. With the observations presented here, we cannot rule out this modest amount of grey interstellar extinction.
Grey intergalactic extinction could dim the SNe without either telltale reddening or dispersion, if all lines of sight to a given redshift had a similar column density of absorbing material. The component of the intergalactic medium with such uniform coverage corresponds to the gas clouds producing Lyman-$`\alpha `$ forest absorption at low redshifts. These clouds have individual H I column densities less than about $`10^{15}\mathrm{cm}^{-2}`$ (Bahcall et al. 1996). However, they display low metallicities, typically less than 10% of solar. Grey extinction would require larger dust grains which would need a larger mass in heavy elements than typical interstellar grain size distributions to achieve a given extinction. Furthermore, these clouds reside in hard radiation environments hostile to the survival of dust grains. Finally, the existence of grey intergalactic extinction would only augment the already surprising excess of galaxies in high-redshift galaxy surveys (e.g., Huang et al. 1997).
We conclude that grey extinction does not seem to provide an observationally or physically plausible explanation for the observed faintness of high-redshift SNe Ia.
#### 5.1.3 Selection Bias
Sample selection has the potential to distort the comparison of nearby and distant SNe. Most of our nearby ($`z<0.1`$) sample of SNe Ia was gathered from the Calán/Tololo survey (Hamuy et al. 1993), which employed the blinking of photographic plates obtained at different epochs with Schmidt telescopes, and from less well-defined searches (Riess et al. 1999a). Our distant ($`z>0.16`$) sample was obtained by subtracting digital CCD images at different epochs with the same instrument setup.
Although selection effects could alter the ratio of intrinsically dim to bright SNe Ia in the nearby and distant samples, our use of the light-curve shape to determine the supernova’s luminosity should correct most of this selection bias on our distance estimates. Nevertheless, the final dispersion is nonzero, and to investigate its consequences we used a Monte Carlo simulation; details are given by Riess et al. (1998b). The results are very encouraging, with recovered values of $`\mathrm{\Omega }_M`$ or $`\mathrm{\Omega }_\mathrm{\Lambda }`$ exceeding the simulated values by only 0.02–0.03 for these two parameters considered separately. There are two reasons we find such a small selection bias in the recovered cosmological parameters. First, the small dispersion of our distance indicator ($`\sigma 0.15`$ mag after light-curve shape correction) results in only a modest selection bias. Second, both nearby and distant samples include an excess of brighter than average SNe, so the difference in their individual selection biases remains small.
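A toy version of such a simulation shows why the bias is small when the corrected dispersion is only 0.15 mag (the magnitude limit and fiducial magnitudes below are invented for illustration): a detection limit several $`\sigma `$ fainter than the population barely shifts the mean, whereas a limit close to the population shifts it noticeably:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma = 0.15            # dispersion after light-curve-shape correction (mag)
m_lim = 23.0            # hypothetical survey detection limit

def mean_bias(m_true, n=200000):
    """Mean magnitude bias of detected SNe drawn around m_true."""
    m = m_true + rng.normal(0.0, sigma, size=n)
    return m[m < m_lim].mean() - m_true

print(mean_bias(22.5))   # limit 3.3 sigma away: bias ~ -0.0002 mag
print(mean_bias(22.9))   # limit 0.7 sigma away: bias ~ -0.06 mag
```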
Additional work on quantifying the selection criteria of the nearby and distant samples is needed. Although the above simulation and others bode well for using SNe Ia to measure cosmological parameters, we must continue to be wary of subtle effects that might bias the comparison of SNe Ia near and far.
#### 5.1.4 Effect of a Local Void
Zehavi et al. (1998) find that the SNe Ia out to 7000 km s<sup>-1</sup> may (2–3$`\sigma `$ confidence level) exhibit an expansion rate which is 6% greater than that measured for the more distant objects; see the low-redshift portion of Figure 1. The implication is that the volume out to this distance is underdense relative to the global mean density.
In principle, a local void would increase the expansion rate measured for our low-redshift sample relative to the true, global expansion rate. Mistaking this inflated rate for the global value would give the false impression of an increase in the low-redshift expansion rate relative to the high-redshift expansion rate. This outcome could be incorrectly attributed to the influence of a positive cosmological constant. In practice, only a small fraction of our nearby sample is within this local void, reducing its effect on the determination of the low-redshift expansion rate.
As a test of the effect of a local void on our constraints for the cosmological parameters, we reanalyzed the data discarding the seven SNe Ia within 7000 km s<sup>-1</sup> ($`d=108`$ Mpc for $`H_0=65`$ km s<sup>-1</sup> Mpc<sup>-1</sup>). The result was a reduction in the confidence that $`\mathrm{\Omega }_\mathrm{\Lambda }>0`$ from 99.7% (3.0$`\sigma `$) to 98.3% (2.4$`\sigma `$) for the MLCS method.
#### 5.1.5 Weak Gravitational Lensing
The magnification and demagnification of light by large-scale structure can alter the observed magnitudes of high-redshift SNe (Kantowski, Vaughan, & Branch 1995). The effect of weak gravitational lensing on our analysis has been quantified by Wambsganss et al. (1997) and summarized by Schmidt et al. (1998). SN Ia light will, on average, be demagnified by $`\sim 0.5`$% at $`z=0.5`$ and $`\sim 1`$% at $`z=1`$ in a Universe with a non-negligible cosmological constant. Although the sign of the effect is the same as the influence of a cosmological constant, the size of the effect is negligible.
Holz & Wald (1998) have calculated the weak lensing effects on supernova light from ordinary matter which is not smoothly distributed in galaxies but rather clumped into stars (i.e., dark matter contained in massive compact halo objects). With this scenario, microlensing becomes a more important effect, further decreasing the observed supernova luminosities at $`z=0.5`$ by 0.02 mag for $`\mathrm{\Omega }_M`$=0.2 (Holz 1998a). Even if most ordinary matter were contained in compact objects, this effect would not be large enough to reconcile the SNe Ia distances with the influence of ordinary matter alone.
#### 5.1.6 Sample Contamination
Riess et al. (1998b) consider in detail the possibility of sample contamination by SNe that are not SNe Ia. Of the 16 initial objects, 12 are clearly SNe Ia and 2 others are almost certainly SNe Ia (though the region near Si II $`\lambda `$6150 was poorly observed in the latter two). One object (SN 1997ck at $`z=0.97`$) does not have a good spectrum, and another (SN 1996E) might be a SN Ic. A reanalysis with only the 14 most probable SNe Ia does not significantly alter our conclusions regarding a positive cosmological constant. However, without SN 1997ck we cannot obtain independent values for $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$.
Riess et al. (1998b) and Schmidt et al. (1998) discuss several other uncertainties (e.g., differences between fitting techniques; $`K`$-corrections) and suggest that they, too, are not serious in our study.
### 5.2 Future Tests
We are currently trying to increase the sample of high-$`z`$ SNe Ia used for measurements of cosmological parameters, primarily to quantify systematic effects. Perhaps the most probable one is intrinsic evolution of SNe Ia with redshift: What if SNe Ia were different at high redshift than locally? One way to observationally address this is through careful, quantitative spectroscopy of the quality shown in Figure 5 for SN 1998ai. Nearby SNe Ia exhibit a range of spectral properties that correlate with light-curve shape, and we need to determine whether high-$`z`$ SNe Ia show the same correlations. Comparisons of spectra obtained at very early times might be most illuminating: theoretical models predict that differences between SNe Ia should be most pronounced soon after the explosion, when the photosphere is in the outermost layers of the ejecta, which potentially undergo different amounts of nuclear burning.
Light curves can also be used to test for evolution of SNe Ia. For example, one can determine whether the rise time (from explosion to maximum brightness) is the same for high-$`z`$ and low-$`z`$ SNe Ia; a difference might indicate that the peak luminosities are also different (Höflich, Wheeler, & Thielemann 1998). Moreover, we should see whether high-$`z`$ SNe Ia show a second maximum in the rest-frame $`I`$-band about 25 days after the first maximum, as do normal SNe Ia (e.g., Ford et al. 1993; Suntzeff 1996). Very subluminous SNe Ia do not show this second maximum, but rather follow a linear decline (Filippenko et al. 1992a).
One of the most decisive tests to distinguish between $`\mathrm{\Lambda }`$ and systematic effects utilizes SNe Ia at $`z>0.85`$. We observe that $`z=0.5`$ SNe Ia are fainter than expected even in a non-decelerating universe, and interpret this as excessive distances (a consequence of positive $`\mathrm{\Omega }_\mathrm{\Lambda }`$). If so, the deviation of apparent magnitude from the low-$`\mathrm{\Omega }_M`$, zero-$`\mathrm{\Lambda }`$ model should actually begin to decrease at $`z\approx 0.85`$; we are looking so far back in time that the $`\mathrm{\Lambda }`$ effect becomes small compared with $`\mathrm{\Omega }_M`$ (see Fig. 6). If, on the other hand, a systematic effect such as non-standard extinction (Aguirre 1998), evolution of the white dwarf progenitors (Höflich, Wheeler, & Thielemann 1998), or gravitational lensing (Wambsganss, Cen, & Ostriker 1998) is the culprit, we expect that the deviation of the apparent magnitude will continue growing, to first order (Fig. 6). Given the expected uncertainties, one or two objects clearly will not be sufficient, but a sample of 10–20 SNe Ia should give a good statistical result.
Note that a very large sample of SNe Ia in the range $`z=0.3`$–1.5 could be used to see whether $`\mathrm{\Lambda }`$ varies with time (“quintessence” models — e.g., Caldwell, Dave, & Steinhardt 1998). This is a very ambitious plan, but it should be achievable within 5–10 years. Moreover, with a very large sample (200–1000) of high-redshift SNe Ia, and accurate photometry, it should be possible to conduct a “super-MACHO” experiment. That is, the light from some SNe Ia should be strongly amplified by the presence of intervening matter, while the majority will be deamplified (e.g., Holz 1998b; Kantowski 1998). The distribution of amplification factors can be used to determine the type of dark matter most prevalent in the Universe (compact objects, or smoothly distributed).
## 6 Conclusions
When used with care, SNe Ia are excellent cosmological probes; the current precision of individual distance measurements is roughly 5–10%, but a number of subtleties must be taken into account to obtain reliable results. SNe Ia at $`z<0.1`$ have been used to demonstrate that (a) the Hubble flow is definitely linear, (b) the value of the Hubble constant is $`65\pm 2`$ (statistical) $`\pm 5`$ (systematic) km s<sup>-1</sup> Mpc<sup>-1</sup>, (c) the bulk motion of the Local Group is consistent with that derived from COBE measurements, and (d) the dust in other galaxies is similar to Galactic dust.
More recently, we have used a sample of 16 high-redshift SNe Ia ($`0.16\le z\le 0.97`$) to make a number of deductions, as follows. These are supported by our preliminary analysis of 9 additional high-$`z`$ SNe Ia.
(1) The luminosity distances exceed the prediction of a low mass-density ($`\mathrm{\Omega }_M\approx 0.2`$) Universe by about 0.25 mag. A cosmological explanation is provided by a positive cosmological constant at the 99.7% (3$`\sigma `$) confidence level, with the prior belief that $`\mathrm{\Omega }_M\ge 0`$. We also find that the expansion of the Universe is currently accelerating ($`q_0<0`$, where $`q_0\equiv \mathrm{\Omega }_M/2-\mathrm{\Omega }_\mathrm{\Lambda }`$), and that the Universe will expand forever.
(2) The dynamical age of the Universe is 14.2 $`\pm 1.7`$ Gyr, including systematic uncertainties in the Cepheid distance scale used for the host galaxies of three nearby SNe Ia.
(3) These conclusions do not depend on inclusion of SN 1997ck ($`z=0.97`$; uncertain classification and extinction), nor on which of two light-curve fitting methods is used to determine the SN Ia distances.
(4) The systematic uncertainties presented by grey extinction, sample selection bias, evolution, a local void, weak gravitational lensing, and sample contamination currently do not provide a convincing substitute for a positive cosmological constant. Further studies of these and other possible systematic uncertainties are needed to increase the confidence in our results.
We emphasize that the most recent results of the SCP (Perlmutter et al. 1999) are consistent with those of our HZT. This is reassuring — but other, completely independent methods are certainly needed to verify these conclusions. The upcoming space CMBR experiments hold the most promise in this regard. Although new questions are arising (e.g., What is the physical source of the cosmological constant, if nonzero? Are evolving cosmic scalar fields a better alternative?), we speculate that this may be the beginning of the “end game” in the quest for the values of $`\mathrm{\Omega }_M`$ and $`\mathrm{\Omega }_\mathrm{\Lambda }`$.
###### Acknowledgements.
We thank all of our collaborators in the HZT for their contributions to this work. Our supernova research at U.C. Berkeley is supported by the Miller Institute for Basic Research in Science, by NSF grant AST-9417213, and by grant GO-7505 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. A.V.F. and A.G.R. are grateful for travel funds to this stimulating meeting in Chicago, from the Committee on Research (U.C. Berkeley) and the meeting organizers, respectively.
# Model of self-replicating cell capable of self-maintenance.
## 1 Introduction
The emergence of cells is one of the major transitions in the evolution of life. When primitive self-replicators such as a hypercycle of RNA enzymes evolve into a living cell, they must acquire membranes that separate them from their noisy environment. It is well known that a hypercycle system can easily be broken down by the occurrence of parasites. Compartmentalization of a hypercycle system is the simplest way to avoid this disaster . At the same time, it should be noted that parasites can also drive the increase of diversity and complexity of the replicator network . In order to examine the balance between stable reproduction and diversity, we should study the relationship between internal replication and the cellular structure that encloses it.
Many models of proto-cell structures have been proposed. For example, it is well known that long-chained fatty acids spontaneously form micelles or vesicles when submerged in water. Luisi and his group demonstrated experimentally the self-organization and self-reproduction of liposomes, and showed that such vesicles maintain self-replicating RNA within them . Theoretical models for the self-organization and self-reproduction of micelles have also been studied extensively .
There is another essential feature of cells: self-maintenance. Living cells metabolize and sustain their membranes by themselves, and the boundary of a cell is defined by its membrane. This mutual dependence enables the coevolution of internal chemical networks and membranes, and such coevolution is presumed to have occurred in the early stages of cell evolution. With respect to this point, Gánti proposed a model for primitive life termed ’the chemoton’, which presents three indispensable functions of the proto-cell: it has a metabolic cycle for assimilation; it maintains its membrane; and it replicates its genetic information . Varela also insisted that the boundary of a cell (i.e. the cell membrane) must be organized and maintained by the cell itself . He presented a model on a two-dimensional lattice of an autopoietic cell that can maintain its membrane.
Our purpose in the present study is to demonstrate how such primitive cells can emerge and evolve from a simple chemical reaction network. A model of self-maintaining and self-replicating cells in one-dimensional space was proposed by the same author . In that model, we showed that self-reproduction of the cellular structure emerges spontaneously and that there are two distinct processes of replication showing potentially different heredities.
In this paper, we extend our previous model to two-dimensional space and show that this system has a potential for further evolution.
## 2 A stochastic particle model
We simulate discrete space-time dynamics of chemicals in a two-dimensional space, where chemicals catalyze each other’s reactions. Each chemical is represented as a particle, with or without an anisotropic shape, that moves around on a triangular lattice. Particles exhibit two basic motions: hopping to neighbouring sites and rotating at a site. In addition to this behavior, a particle can change its chemical identity. The former is termed a mobile transition, the latter a chemical transition, and both are governed by the potential energy of the particle.
We assume that there is a repulsive force between some chemicals; thus the physical potential of a chemical $`C`$ at the site $`x`$ ($`E_C(x)`$) is computed by summing up the repulsion potentials of all chemicals at the site $`x`$ and its six neighbouring sites. The mobile transition probability $`P_C(x,x_0)`$ from site $`x`$ to $`x_0`$ is computed from the difference in the potential magnitudes as
$$P_C(x,x_0)=R_{dif}f(E_C(x_0)-E_C(x)),$$
where $`E_C(x)`$ gives the potential energy of the particle $`C`$ at the site $`x`$. The diffusion parameter $`R_{dif}`$ is fixed for all particles.
The chemical transition probability $`P_{CC^{\prime }}(x)`$ from the state $`C`$ to $`C^{\prime }`$ at the site $`x`$ is given as
$$P_{CC^{\prime }}(x)=R_{CC^{\prime }}(x)f(G_{C^{\prime }}-G_C+E_{C^{\prime }}(x)-E_C(x)),$$
where $`G_C`$ represents the chemical potential of $`C`$. The reaction parameter $`R_{CC^{\prime }}(x)`$ is controlled by a catalyst found on the site $`x`$. One constraint is imposed on the form of the function $`f`$ in order to satisfy the thermal equilibrium condition, as follows:
$$\frac{f(\mathrm{\Delta }E)}{f(-\mathrm{\Delta }E)}=e^{-\mathrm{\Delta }E}$$
(1)
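For concreteness, one admissible choice obeying Eq. (1) is the Glauber form $`f(\mathrm{\Delta }E)=1/(1+e^{\mathrm{\Delta }E})`$. The sketch below (Python; our own illustration, not the authors' code, since the model does not fix $`f`$ beyond Eq. (1)) verifies the detailed-balance ratio numerically and assembles the mobile transition probability defined above.

```python
import math

def f(dE):
    """Glauber acceptance function; it obeys f(dE)/f(-dE) = exp(-dE)."""
    return 1.0 / (1.0 + math.exp(dE))

# Numerical check of the thermal-equilibrium condition, Eq. (1).
for dE in (-2.0, -0.5, 0.5, 2.0):
    assert abs(f(dE) / f(-dE) - math.exp(-dE)) < 1e-12

def hop_probability(R_dif, E_here, E_there):
    """Mobile transition probability P_C(x, x0) = R_dif * f(E_C(x0) - E_C(x))."""
    return R_dif * f(E_there - E_here)
```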
We define five different kinds of chemicals, $`A`$, $`M`$, $`W`$, $`X`$ and $`Y`$, in the system. Each particle can be in any one of these chemical states. $`W`$ plays the role of abstract ’water’, and cannot change into any other chemical. $`X`$ is a material with high chemical potential, though it is not an autocatalytic chemical. $`A`$ is the unique autocatalytic chemical in the system. The reaction processes are
$`A+A`$ $`\rightleftharpoons `$ $`AA`$
and
$`X+AA`$ $`\rightleftharpoons `$ $`A+AA.`$
These chemical reactions only occur among particles that occupy the same site. (Note that a number of chemical particles can occupy each site; in this study, the average is one hundred.) The forward and backward reactions have equal reaction parameters, given by the following formula:
$$R_{XA}(x)=R_{AX}(x)=B_{XA}+C_AA(x)^2,$$
where $`A(x)`$ denotes the number of $`A`$ on the site $`x`$. In the above equation, $`B_{XA}`$ and $`C_A`$ are the base rate and the catalysis coefficient, respectively. As a secondary process, $`A`$ produces $`M`$ as a co-product of the total reaction network.
$`X+AA`$ $`\rightleftharpoons `$ $`M+AA.`$
The reaction parameters are given by
$$R_{XM}(x)=R_{MX}(x)=B_{XM}+C_MA(x)^2.$$
In addition to the above reactions, we introduce the natural decay of chemicals into $`Y`$ where $`Y`$ has the lowest chemical potential.
We assume that there is a source of material $`X`$ in this system, so that the reaction parameters between $`X`$ and $`Y`$ are asymmetric:
$`R_{XY}`$ $`=`$ $`B_{XY}`$
$`R_{YX}`$ $`=`$ $`B_{XY}+S_x`$
where $`S_x`$ denotes the strength of the source $`X`$.
We assume there is a repulsive force between $`M`$ and other chemicals, like oil in water. In the following simulations, we examine three different kinds of potentials on the chemical $`M`$. First, $`M`$ equally repels all the other chemicals around it. Second, $`M`$ has an anisotropic repulsion regardless of the kinds of chemicals; this feature will be described later. Third, in addition to the anisotropic repulsion, the repulsion force also depends on the kinds of chemicals.
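To make the update rule concrete, the sketch below implements one round of chemical-transition attempts for the autocatalytic step $`X\to A`$ on a single site. It is a minimal illustration: the base rate, catalysis coefficient and chemical potentials are placeholders rather than the values used in our simulations, and $`f`$ is the Glauber function of the previous sketch.

```python
import math
import random

B_XA, C_A = 1e-3, 1e-6            # base rate and catalysis coefficient (placeholders)
G = {'A': 1.0, 'X': 2.0}          # chemical potentials G_C (placeholder values)

def f(dE):
    return 1.0 / (1.0 + math.exp(dE))

def attempt_X_to_A(site, E):
    """One sweep of X -> A attempts on `site`, a list of species labels sharing
    one lattice site.  E[c] is the local repulsion energy of species c here.
    The rate R_XA = B_XA + C_A * A(x)**2 implements catalysis by pairs of A."""
    R = B_XA + C_A * site.count('A') ** 2
    p = R * f(G['A'] - G['X'] + E['A'] - E['X'])   # chemical transition P_{CC'}(x)
    for i, c in enumerate(site):
        if c == 'X' and random.random() < p:
            site[i] = 'A'

site = ['X'] * 60 + ['A'] * 40
attempt_X_to_A(site, E={'A': 0.0, 'X': 0.0})
print(site.count('A'))            # slightly above 40 on average
```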
## 3 Simulation Results
### 3.1 Formation of Cells
First, we simulate the case where the repulsion force depends on neither the kinds of chemicals nor the shape of $`M`$. Starting from a homogeneous initial state rich in $`A`$, the system can sustain the replication of $`A`$ and the production of the co-product $`M`$. Chemicals $`A`$ and $`M`$ aggregate to form Turing-like patterns. Figure 1.a shows an example of the pattern generated; spots of $`M`$ are formed among $`W`$ and $`A`$.
The second case, where the repulsion force depends on the orientation of the $`M`$ molecule, gives a different observation. Here $`M`$ has an anisotropic potential, illustrated in Fig. 1.b. When these $`M`$s are placed on a triangular lattice, the ’head’ can be aligned in any of three directions (e.g. $`0`$, $`\pi /3`$ and $`2\pi /3`$). We assume that it can change its direction stochastically, with the transition probability given by
$$P_{MM^{\prime }}(x)=R_{rot}f(E_{M^{\prime }}(x)-E_M(x)),$$
where $`M^{\prime }`$ denotes $`M`$ with another orientation.
Figure 1.b shows the repulsion potential generated by $`M`$ for the other chemicals $`A`$, $`W`$, $`X`$ and $`Y`$. The repulsion force from $`M`$ is strongest when $`M`$ and the other chemical are on the same site (indicated by black in Fig. 1.b). The repulsion is second strongest in front of or behind $`M`$ (dark gray sites) and relatively weaker at the other four side-sites (light gray). We also assume there is repulsion between $`M`$s when their directions are different. Thus, an $`M`$ molecule tends to take the same direction as its neighbouring $`M`$s.
When $`M`$ has this kind of anisotropic repulsive force, the clusters are organized differently from the isotropic case. The clusters of $`M`$ become thin films that we simply call ’membranes’ (see Fig. 1.c). The difference in the repulsion potential between the front- and side-sites of $`M`$ affects the thickness of the membrane. Also, when the repulsion between $`M`$s with different directions is stronger, the membranes tend to run straighter. These effects allow us to obtain membranes with various degrees of flexibility.
When we start from a single ’cell’, that is, a spot of $`A`$ enveloped by membranes as shown in Fig. 1.d, this structure can maintain itself stably: the $`A`$ particles within the cell keep reproducing themselves and sustain the membranes by supplying $`M`$; simultaneously, the membranes keep $`A`$ from diffusing outward. Note that this structure collapses when the membranes are broken (see Fig. 1.e). Chemical $`A`$ cannot sustain reproduction because it leaks away through the defect in the membrane. In the absence of a supply from $`A`$, the membranes decay and disappear.
### 3.2 Cell division
Living cells are not closed systems. They must ingest nutrients and excrete wastes through their membranes. In this section, we study the case where $`M`$ shows selective repulsion depending on the kind of molecules. We assume that the repulsion between $`M`$ and two chemicals $`X`$ and $`Y`$ is much weaker than that of the other chemicals.
In this case, $`X`$ and $`Y`$ can permeate through the membranes at a rate proportional to the gradient of their density. Because there is more $`X`$ in the environment than inside the cell, the cell absorbs the external chemical $`X`$ and grows gradually. When the cell reaches a certain size at which it has outgrown its stability, it begins to generate a new membrane inside. This finally divides the mother cell into daughter cells (see Fig. 2). These new cells repeat the process of growing and dividing. Sometimes a cell fails to sustain its membrane structure and dies, due to a shortage of materials or to interference from other cells.
We can change the flexibility of the membranes by altering the repulsion strength between $`M`$s. This varies the division dynamics of the cells. Examples are presented in Fig. 3. A strong repulsion between $`M`$s results in the formation of stiff membranes, and the shape of the cells becomes more regular (Fig. 3.a and 3.b). On the other hand, Fig. 3.c represents a cell with more flexible membranes, which form in the presence of low repulsion values for $`M`$. These cells divide themselves irregularly at some narrow part.
## 4 Discussion
We have presented a model of a self-maintaining cell. The cell has an internal autocatalytic cycle of chemicals, which maintains the membrane by itself, while the membrane keeps the cell from collapsing. We have also shown that the self-maintaining cell can replicate itself spontaneously; a transition is made from molecular reproduction to cellular reproduction.
In real life, the earliest membranes may have been simpler and rougher than phospholipid membranes. The marigranule represents an example of a primitive cell. It has a rough shell and can ingest amino acids from the environment. Though the materials of which marigranules consist are very elementary molecules, there is no linkage between the organization of the shell and the internal dynamics. If such a structure established a symbiotic relationship between its embedded chemical network and its membrane, the membranes could become targets of Darwinian selection and evolve into more complex structures.
In this study, we have shown that the cell divides in different manners according to variation in the interaction strength between $`M`$s. It is also true that we can change some properties of biological membranes by varying their components. For example, unsaturated fatty acid components soften the membrane, while cholesterol does the opposite.
Self-replicating spots in two-dimensional reaction-diffusion systems have been well studied. However, the kinds of replicating patterns they produce cannot be as diverse as the patterns generated by cells with membranes. The cellular membrane can function as a boundary condition for the internal chemical network and, conversely, the internal reactions determine the cell shape. In this sense, the chemical reactions within cell membranes can be richer than those without membranes. The compartmentalization of chemicals will allow cells to be regarded as units of evolution, because it maintains the identity of their contents during reproduction.
Koch previously discussed the division mechanisms of phospholipid vesicles by considering the property of mechanical energy . Our model demonstrates an analogous dynamic division mechanism.
The evolution of selective permeability of the membrane must be considered in future studies. Cell membranes determine how cells communicate with the environment, including other cells. Cells selectively receive stimuli from the environment and from other cells, and respond to these stimuli. Our cell model provides several possible approaches by which to observe the formation and evolution of membrane functions, and of how interactions between cells are generated by each cell’s own internal dynamics.
## Acknowledgments
This work is partially supported by a Grant-in-Aid (No. 09640454) from the Ministry of Education, Science, Sports and Culture.
# Trapped Bose-Einstein Condensates: Role of Dimensionality
## I Introduction
Experimental realisations of Bose-Einstein condensation (BEC) in ultracold and dilute atomic gases generated extensive theoretical studies of the macroscopic dynamics of condensed atomic clouds (see, e.g., Ref. 3 and references therein). In experiments, the BEC atoms are trapped in a three-dimensional, generally anisotropic, external potential created by a magnetic trap, and their collective dynamics can be described by the well-known Gross-Pitaevskii (GP) equation,
$$i\hbar \frac{\partial \mathrm{\Psi }}{\partial t}=-\frac{\hbar ^2}{2m}\nabla _D^2\mathrm{\Psi }+V(\vec{r})\mathrm{\Psi }+U_0|\mathrm{\Psi }|^2\mathrm{\Psi },$$
(1)
where $`\mathrm{\Psi }(\vec{r},t)`$ is the macroscopic wave function of a condensate in the $`D`$-dimensional space, $`V(\vec{r})`$ is a parabolic trapping potential, and the parameter $`U_0=4\pi \hbar ^2a/m`$ characterises the two-particle interaction proportional to the s-wave scattering length $`a`$. When $`a>0`$, the interaction between the particles in the condensate is repulsive, as in most current experiments , whereas for $`a<0`$ the interaction is attractive .
From the viewpoint of the nonlinear dynamics of solitary waves, the case of the negative scattering length is the most interesting one. Indeed, it is well known that the solutions of Eq. (1) without a parabolic potential display collapse for both $`D=2`$ and $`D=3`$, so that no stable spatially localised solutions exist in those dimensions at all. A trapping potential allows a stabilisation of otherwise unstable and collapsing solutions and, as has been already shown for the 3D case with the help of different approximate techniques , there exists a critical number $`N_{\mathrm{cr}}`$ of the total condensate particles $`N`$, defined through the normalisation of the condensate wavefunction as
$$N=\int |\mathrm{\Psi }|^2\,d^3\vec{r},$$
(2)
such that the condensate is stable for $`N<N_{\mathrm{cr}}`$, and it becomes unstable for $`N\ge N_{\mathrm{cr}}`$, and then its local density $`|\mathrm{\Psi }|^2`$ grows to infinity. This critical value of the particles’ number is readily observed in experiments .
In this paper we analyse systematically, in the framework of the GP equation (1), the structure and stability of a trapped condensate depending on its dimension $`D`$. In spite of the fact that the first experimental results were obtained for the BEC in a 3D anisotropic trap, the cases of lower dimensions are also of great importance and, moreover, they have a clear physical motivation. First of all, Eq. (1) with $`D=1`$ appears as the fundamental model of BEC in highly anisotropic traps of the axial symmetry
$$V(\vec{r})=\frac{m\omega ^2}{2}(r_{\perp }^2+\lambda z^2),\qquad r_{\perp }=\sqrt{x^2+y^2},$$
(3)
provided $`\lambda \equiv \omega _z^2/\omega _{\perp }^2\ll 1`$, i.e. for cigar-shaped traps. In this case, the transverse structure of the condensate is controlled by a trapping potential, and it can be described by the eigenmodes of a two-dimensional isotropic harmonic oscillator. Then, after averaging over the cross-section, Eq. (1) appears as a 1D model with renormalised parameters, and it describes the longitudinal dynamics of the condensate (see Ref. 6 and Sec. 2 below).
Secondly, in the case $`D=2`$, the model (1) can be employed to describe the condensate dynamics in the so-called pancake traps ($`\lambda \gg 1`$), and it can be related to recent experiments with quasi-two-dimensional condensates or coherent atomic systems .
The paper is organised as follows. In Sec. 2 we consider the 1D case and find numerically the ground-state solutions for the trapped condensate in both $`a<0`$ and $`a>0`$ cases. In the former case, the problem is found to be remarkably similar to that of the soliton propagation in optical fibers in the presence of the dispersion management . The most interesting case of a 2D trap is considered in Sec. 3, where we discuss the necessary conditions for the instability of the radially symmetric condensate in a trap, and also analyse the invariants of the stationary solutions found numerically. We point out, for the first time to our knowledge, that 2D condensates can be stable, in spite of the possibility of collapse in the model described by the 2D GP equation. Similar results are briefly presented in Sec. 4 for the 3D BECs of radial symmetry, and they provide a rigorous background for the variational results obtained earlier by different methods . In all the cases, we also find the condensate structure for the positive scattering length where the instability is absent in all the dimensions.
## II Quasi-One-Dimensional Trap
### A Model
To derive the 1D GP equation from Eq. (1), we assume that the parabolic potential $`V(\vec{r})`$ describes a cigar-shaped trap. For the potential (3), this means that $`\lambda \ll 1`$, and the transverse structure of the condensate, being close to a Gaussian in shape, is mostly defined by the trapping potential. We are interested in the stationary solutions of the form $`\mathrm{\Psi }(\vec{r},t)=\mathrm{\Psi }(\vec{r})\mathrm{e}^{-i\mu t/\hbar }`$, where $`\mu `$ has the meaning of the chemical potential. Measuring the spatial variables in the units of the longitudinal harmonic oscillator length $`a_{ho}=(\hbar /m\omega \sqrt{\lambda })^{1/2}`$, and the wavefunction in units of $`(\hbar \omega /2U_0\sqrt{\lambda })^{1/2}`$, we obtain the following stationary dimensionless equation:
$$\nabla _D^2\mathrm{\Phi }+\mu ^{\prime }\mathrm{\Phi }-\left[\frac{1}{\lambda }(\xi ^2+\eta ^2)+\zeta ^2\right]\mathrm{\Phi }+\sigma |\mathrm{\Phi }|^2\mathrm{\Phi }=0,$$
(4)
where $`\mu ^{\prime }=2\mu /(\hbar \omega \sqrt{\lambda })`$, $`(\xi ,\eta ,\zeta )=(x,y,z)/a_{ho}`$, and the sign $`\sigma =-\mathrm{sgn}(a)=\pm 1`$ in front of the nonlinear term is defined by the sign of the s-wave scattering length of the two-body interaction.
We assume that in Eq. (1) the nonlinear interaction is weak relative to the transverse trapping force, i.e. $`U_0/m\omega \lambda \ll 1`$. Then, it follows from Eq. (4) that the transverse width of the condensate is of order $`\lambda ^{1/4}`$, and the condensate has a cigar-like shape. Therefore, we can look for solutions of Eq. (4) in the form,
$$\mathrm{\Phi }(r,\zeta )=\varphi (r)\psi (\zeta ),$$
(5)
where $`r=\sqrt{\xi ^2+\eta ^2}`$, and $`\varphi `$ is a solution of the auxiliary problem for the 2D quantum harmonic oscillator
$$\nabla _{\perp }^2\varphi +\gamma \varphi -\lambda ^{-1}r^2\varphi =0,$$
(6)
which we take in the form of the no-node ground state,
$$\varphi _0(r)=C\mathrm{exp}\left(-\frac{r^2}{2\sqrt{\lambda }}\right),\qquad \gamma =\frac{2}{\sqrt{\lambda }},$$
(7)
$`C`$ being constant. To preserve all the information about the 3D condensate in an asymmetric trap describing its properties by the 1D GP equation for the longitudinal profile, we impose the normalisation,
$$\int _{-\infty }^{\infty }\varphi ^2(r)\,dx\,dy=2\pi \int _0^{\infty }\varphi ^2(r)\,r\,dr=1$$
(8)
that yields $`C^2=1/\pi \sqrt{\lambda }`$.
After substituting the solution (5) with the function (7) into Eq. (4), dividing by $`\varphi `$ and integrating over the transverse cross-section $`(\xi ,\eta )`$ of the cigar-shaped condensate, we finally obtain the following 1D stationary GP equation (see also the similar results in Ref. 6)
$$\frac{d^2\psi }{d\zeta ^2}+\beta \psi -\zeta ^2\psi +\sigma |\psi |^2\psi =0,$$
(9)
where $`\beta =\mu ^{\prime }-\gamma `$, and the normalisation (8) for $`\varphi (r)`$ is used.
Importantly, the number of the condensate particles $`N`$ is now defined as $`N=(\hbar \omega /2U_0\sqrt{\lambda })Q`$, where
$$Q=\int _{-\infty }^{\infty }|\psi |^2\,d\zeta $$
(10)
is the integral of motion for the normalised nonstationary GP equation.
### B Stationary Ground-State Solutions
Equation (9) includes all the terms of the same order, and it describes the longitudinal profile of the condensate state in an anisotropic trap. In the linear limit, i.e. when formally $`\sigma \to 0`$, Eq. (9) becomes the well-known equation for a harmonic quantum oscillator. Its localised solutions exist only for $`\beta =\beta _n=1+2n`$ ($`n=0,1,2,\mathrm{\dots }`$), and they are defined through the Gauss-Hermite polynomials, $`\psi _n(\zeta )=c_n\mathrm{e}^{-\zeta ^2/2}H_n(\zeta )`$, where
$$H_n(\zeta )=(-1)^n\mathrm{e}^{\zeta ^2}\frac{d^n(\mathrm{e}^{-\zeta ^2})}{d\zeta ^n},$$
(11)
so that $`H_0=1`$, $`H_1=2\zeta `$, etc.
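These linear modes are easy to generate numerically. A minimal sketch (Python with NumPy; our own illustration) evaluates $`\psi _n`$ on a grid using the physicists' Hermite polynomials:

```python
import numpy as np
from numpy.polynomial.hermite import hermval

def linear_mode(n, zeta):
    """Unnormalized harmonic-oscillator eigenfunction
    psi_n(zeta) = exp(-zeta**2/2) * H_n(zeta),
    the sigma -> 0 limit of Eq. (9) existing at beta_n = 1 + 2n."""
    c = np.zeros(n + 1)
    c[n] = 1.0                       # coefficient vector selecting H_n
    return np.exp(-zeta**2 / 2) * hermval(zeta, c)

zeta = np.linspace(-6, 6, 601)
psi1 = linear_mode(1, zeta)          # proportional to 2*zeta*exp(-zeta**2/2)
```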
To describe the effect of weak nonlinearity, we can employ a perturbation theory based on the expansion of the general solution of Eq. (9) in the infinite set of eigenfunctions defined by Eq. (11). The similar approach has been used earlier in the theory of the dispersion-managed optical solitons (see, e.g., Ref. 10), and it allows us to describe the effect of weak interaction on the condensate structure and stability. These results will be presented elsewhere .
In the opposite limit, i.e. when the nonlinear term is much larger than the parabolic potential, localised solutions exist only for $`\beta <0`$ and $`\sigma =+1`$, and they are described by the stationary nonlinear Schrödinger (NLS) equation. The one-soliton solution is
$$\psi _s(\zeta )=\frac{\sqrt{-2\beta }}{\mathrm{cosh}(\zeta \sqrt{-\beta })},$$
(12)
so that $`Q_s=4\sqrt{-\beta }`$ coincides with the soliton invariant.
In general, the ground-state solution of Eq. (9) can be found only numerically. Figures 1(a) and 1(b) present some examples of the numerically found solutions of Eq. (9) for several values of the dimensionless parameter $`\beta `$, for both negative (a) and positive (b) scattering length. For $`\beta \approx 1`$, i.e. in the limit of the harmonic oscillator mode, the solution is close to Gaussian in both cases. When $`\beta `$ deviates from 1, the solution profile is defined by the type of nonlinearity. For attraction ($`\sigma =+1`$), the profile approaches the sech-type soliton (12), whereas for repulsion ($`\sigma =-1`$) the solution flattens, cf. Fig. 1(a) and Fig. 1(b).
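A standard route to such profiles is a shooting method: integrate Eq. (9) from $`\zeta =0`$ with $`\psi ^{\prime }(0)=0`$ and bisect on $`\psi (0)`$ until the tail decays. The sketch below (Python with NumPy; our own construction, not the code used for the figures, and the bracket $`[10^6,5]`$ for the amplitude is an assumption that may need adjusting) finds the nodeless state for $`\sigma =+1`$ and $`\beta `$ below the linear eigenvalue $`\beta _0=1`$.

```python
import numpy as np

def shoot(beta, sigma, psi0, zmax=8.0, n=4000):
    """RK4-integrate psi'' + beta*psi - zeta^2*psi + sigma*psi^3 = 0
    from zeta = 0 with psi(0) = psi0, psi'(0) = 0 (even ground state)."""
    h = zmax / n
    rhs = lambda z, y: np.array([y[1], (z*z - beta)*y[0] - sigma*y[0]**3])
    y, z = np.array([psi0, 0.0]), 0.0
    for _ in range(n):
        k1 = rhs(z, y)
        k2 = rhs(z + h/2, y + h/2*k1)
        k3 = rhs(z + h/2, y + h/2*k2)
        k4 = rhs(z + h, y + h*k3)
        y, z = y + (h/6)*(k1 + 2*k2 + 2*k3 + k4), z + h
        if abs(y[0]) > 1e6:          # solution already diverging; stop early
            break
    return y[0]

def amplitude(beta, sigma=+1, lo=1e-6, hi=5.0):
    """Bisect on psi(0): a too-small amplitude diverges to +inf, a too-large
    one crosses zero and diverges to -inf (valid for sigma=+1, beta < 1)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if shoot(beta, sigma, mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(amplitude(0.5))   # psi(0) of the ground state for beta = 0.5
```

The invariant of Eq. (10) then follows from the converged profile as $`Q=2\int _0^{\infty }\psi ^2\,d\zeta `$, by symmetry.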
In Fig. 2 we show the dependence of the invariant $`Q`$ on the parameter $`\beta `$, for both families of localised solutions, corresponding to the two different signs of the scattering length. The dotted line shows the limit of the soliton solution of the NLS equation without a trapping potential. In the asymptotic regime, i.e. for $`\beta <-2`$, the curve $`Q_\mathrm{s}(\beta )`$ coincides with the invariant $`Q`$ for the BEC in a trap. This means that for such localised condensates the effect of the trap is negligible and the condensate becomes localised mostly due to the attractive interparticle interaction.
To discuss the stability of localised solutions, we refer to a number of corresponding problems well analysed in nonlinear optics of guided modes and solitary waves . In terms of the so-called Vakhitov-Kolokolov stability criterion, the condition $`dQ/d(-\beta )>0`$ gives the solution stability for the case $`\sigma =+1`$. Therefore, all 1D solitons in the attraction case are stable. A detailed discussion of the soliton stability for this case can be found in Ref. 9.
Formally, the case $`\sigma =-1`$ corresponds to the opposite sign of the derivative $`dQ/d\beta `$. However, this case of the so-called defocusing nonlinearity is also well known in nonlinear optics of guided waves, and it has been analysed for different types of the trapping potential. In particular, it is known that the Vakhitov-Kolokolov criterion does not apply to this case, and the whole family of spatially localised solutions is stable . Extension of all those results to the case of a parabolic trap is a straightforward technical problem.
### C Higher-Order Modes and Multi-Soliton States
It is well known that in the linear limit $`\sigma \to 0`$, Eq. (9) possesses a discrete set of localised modes corresponding to the different orders of the Gauss-Hermite polynomials. We demonstrate that all those modes can be readily obtained for the nonlinear problem as well, giving a continuation of the Gauss-Hermite modes to the nonlinear regime. Figure 3 shows several examples of those modes for both negative ($`\sigma =+1`$) and positive ($`\sigma =-1`$) scattering length, respectively. In the linear limit, those modes are defined by the eigenfunctions of the harmonic oscillator. The effect of nonlinearity is different for the negative and positive scattering length. For the negative scattering length (attraction), the higher-order modes transform into multi-soliton states consisting of a sequence of solitary waves with alternating phases \[see Figs. 3(a) and 3(b)\]. This is further confirmed by the analysis of the invariant $`Q`$ vs. $`\beta `$, where all the branches of the higher-order modes tend asymptotically to the soliton dependences $`Q_N\to (N+1)Q_\mathrm{s}`$, where $`N`$ is the order of the mode ($`N=0,1,\mathrm{\dots }`$). Examples of the dependences $`Q(\beta )`$ for the first- and second-order modes are shown in Fig. 4. From the physical point of view, the higher-order modes in the case of attractive interaction correspond to a balance between the repulsion of out-of-phase bright solitons and the attraction of the trapping potential. It is clear that such a balance can easily be destroyed by, for example, making the soliton amplitudes different. However, the analysis of the stability of such higher-order multi-soliton modes is beyond the scope of this paper.
For the positive scattering length ($`\sigma =-1`$), the higher-order modes transform into a sequence of dark solitons , so that the first-order mode corresponds to a single dark soliton, the second-order mode to a pair of dark solitons, etc. \[see Figs. 3(c) and 3(d)\]. Again, these stationary solutions satisfy the force balance condition: the repulsion between dark solitons is exactly compensated by the attractive force of the trapping potential.
## III Quasi-Two-Dimensional Trap
### A Model and Stationary Ground-State Solutions
The case of the two-dimensional (2D) GP equation can be associated with a quasi-condensate of atomic hydrogen or a quasi-2D gas of laser-cooled atoms . Derivation of the GP equation from the first principles of scattering theory in the two-dimensional geometry is not trivial, and it has been recently shown that the correct form of the 2D GP equation (1) should have the 2D interaction potential $`U_0`$ in the form $`U_0=(2\pi \hbar ^2/m)|\mathrm{ln}(ka)|^{-1}`$, where $`0<ka\ll 1`$, $`a`$ is the scattering length, and the characteristic wavenumber $`k`$ can be approximated as $`k\approx 1/a_{ho}`$. Here, we derive the 2D GP equation directly from its 3D form of Eq. (1), for the case of a pancake trap when $`\lambda \gg 1`$.
In the case of a pancake trap, in the parabolic potential (3) we assume the parameter $`\lambda `$ is large, i.e. $`\lambda \gg 1`$. Then, the longitudinal profile of the condensate is controlled by the parabolic potential $`(m\lambda \omega ^2/2)z^2`$. Measuring the spatial variables in the units of the transverse harmonic oscillator length $`(\hbar /m\omega )^{1/2}`$ and the wavefunction in units of $`(\hbar \omega /2U_0)^{1/2}`$, from Eq. (1) we obtain the following stationary equation,
$$\nabla _D^2\mathrm{\Phi }+\mu ^{\prime }\mathrm{\Phi }-\left[(\xi ^2+\eta ^2)+\lambda \zeta ^2\right]\mathrm{\Phi }+\sigma |\mathrm{\Phi }|^2\mathrm{\Phi }=0,$$
(13)
where $`\mu ^{\prime }=2\mu /\hbar \omega `$. For $`\lambda \gg 1`$, the longitudinal structure of the condensate is squeezed in the $`z`$-direction and its size in this direction is of the order of $`\lambda ^{-1/4}`$. Therefore, we can look for stationary solutions in the form,
$$\mathrm{\Phi }(r,\zeta )=\psi (r)\varphi (\zeta ),$$
(14)
where $`\psi (r)`$ depends on the radial coordinate $`r=\sqrt{x^2+y^2}`$, and this time the function $`\varphi (\zeta )`$ is a solution of the 1D quantum harmonic oscillator,
$$\frac{d^2\varphi }{d\zeta ^2}+\gamma \varphi -\lambda \zeta ^2\varphi =0,$$
(15)
with the Gaussian form,
$$\varphi _0(\zeta )=C\mathrm{exp}\left(-\frac{\zeta ^2\sqrt{\lambda }}{2}\right),\qquad \gamma =\sqrt{\lambda }.$$
(16)
Normalisation condition $`\int _{-\infty }^{\infty }\varphi _0^2(\zeta )\,d\zeta =1`$ yields $`C=\pi ^{-1/4}\lambda ^{1/8}`$.
Substituting the solution (14) into Eq. (13) and averaging over $`\zeta `$, we obtain the following 2D equation for $`\psi (r)`$,
$$\left(\frac{d^2}{dr^2}+\frac{1}{r}\frac{d}{dr}\right)\psi +\beta \psi -r^2\psi +\sigma |\psi |^2\psi =0,$$
(17)
where $`\sigma =\pm 1`$ and $`\beta =\mu ^{\prime }-\gamma `$. Equation (17) is the 2D GP equation for stationary solutions without any angular dependence. In the linear limit, i.e. when we neglect the nonlinear term ($`\sigma \to 0`$), the solution exists only at $`\beta =2`$, and it can be written in the form $`\psi _0(r)=C_0\mathrm{exp}(-r^2/2)`$, where $`C_0`$ is defined by normalisation. In the opposite limit, the localised solution exists only for $`\sigma =+1`$, and it is described by the radially symmetric solitary wave of the 2D NLS equation.
We integrate Eq. (17) numerically and find a family of radially symmetric localised solutions $`\psi (r)`$ for the condensate ground state, for both $`\sigma =+1`$ (attraction) and $`\sigma =-1`$ (repulsion). Some results are presented in Figs. 5(a,b), for different values of the parameter $`\beta `$. The structure of the localised solutions is similar to that in the 1D case. However, in the limit of large negative $`\beta `$, the solution transforms into the 2D NLS soliton, known to be a self-similar radially symmetric solution corresponding to critical collapse. This is further illustrated in Fig. 6, where we present the dependence of the invariant $`Q`$ on the parameter $`\beta `$. The 2D NLS soliton exists only for $`\beta <0`$ and $`\sigma =+1`$, and it corresponds to the fixed value of the invariant, $`Q_s\approx 11.7`$, shown as a dotted straight line in Fig. 6.
### B Stability and collapse
Stability of the 2D condensates in a parabolic trap is an important issue. In particular, as was shown by Bergé and Tsurumi and Wadati , the presence of a trapping potential does not remove collapse from the 2D GP equation. This result is very much expected because it has been known for a long time that, in the case of the 2D NLS equation with a parabolic potential, there exists an exact analytical transformation that allows the potential to be removed from the 2D NLS equation . This means that for some classes of input conditions of the GP equation, collapse should be observed even in the presence of a trapping potential. Nevertheless, as follows from our results summarised in Fig. 6, for the attractive case ($`\sigma =+1`$) the derivative $`dQ/d(-\beta )`$ is positive and does not change sign. Moreover, according to the collapse conditions derived in Ref. 15, for the radius $`R`$ of the time-dependent 2D condensate the following equation holds
$$\frac{d^2R^2}{dt^2}=8H-4R^2,$$
(18)
where $`H`$ is the system Hamiltonian
$$H=2\pi \int _0^{\infty }\left(\left|\frac{d\psi }{dr}\right|^2+r^2|\psi |^2-\frac{1}{2}|\psi |^4\right)r\,dr$$
(19)
and $`R`$ is defined as
$$R^2=2\pi \int _0^{\infty }r^3|\psi |^2\,dr.$$
(20)
Equation (18) means that the condition $`H<0`$ is a sufficient condition for collapse: the right-hand side is then always negative, so $`R^2`$ must reach zero at some finite time, at which point the wavefunction becomes singular and the condensate collapses.
When $`H`$ is zero or positive, the dynamics depend on the applied perturbation, so that for a large enough perturbation we can lower the stationary value of the Hamiltonian $`H`$. In Fig. 7 we present the dependences $`H(\beta )`$ and $`H(Q)`$ for the family of stationary solutions found numerically for the attraction case $`\sigma =+1`$. It is clear that the whole family of stationary solutions corresponds to $`H>0`$. Therefore, the ground-state is expected to be stable everywhere, but its dynamics should be sensitive to the perturbation amplitude for larger $`\beta `$ since a perturbation can make the Hamiltonian negative. This statement should be further verified by numerical simulations of the 2D GP equation.
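These integrals are straightforward to evaluate for any trial or numerically obtained profile. The following sketch (Python with NumPy; our own illustration) computes $`Q`$, $`H`$ and $`R^2`$ by trapezoidal quadrature and illustrates the sufficient collapse condition $`H<0`$ with a Gaussian trial function whose amplitude $`A`$ is a free parameter; one can check that for $`\psi =A\mathrm{e}^{-r^2/2}`$, $`H=\pi (2A^2-A^4/4)`$, which is negative for $`A^2>8`$.

```python
import numpy as np

def invariants_2d(r, psi):
    """Q, Hamiltonian H (Eq. 19) and width R^2 (Eq. 20) of a radial
    profile psi(r) sampled on a uniform grid starting at r = 0."""
    dpsi = np.gradient(psi, r)
    Q  = 2*np.pi * np.trapz(np.abs(psi)**2 * r, r)
    H  = 2*np.pi * np.trapz((np.abs(dpsi)**2 + r**2*np.abs(psi)**2
                             - 0.5*np.abs(psi)**4) * r, r)
    R2 = 2*np.pi * np.trapz(r**3 * np.abs(psi)**2, r)
    return Q, H, R2

r = np.linspace(0.0, 10.0, 4001)
for A in (1.0, 3.0):                  # H > 0 for A = 1, but H < 0 for A = 3
    Q, H, R2 = invariants_2d(r, A*np.exp(-r**2/2))
    print(f"A={A}: Q={Q:.2f}  H={H:.2f}  R^2={R2:.2f}")
```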
### C Higher-Order Modes and Vortices
Similar to the 1D case discussed above, the higher-order modes of a harmonic oscillator can be found in the 2D case too, thus providing an analytical continuation of the 2D Gauss-Hermite modes. The set of such solutions is broader because it includes a continuation of a superposition of different modes. For example, the mode $`\psi _{00}(r)`$ is the fundamental ground-state solution, that continues the zero-order Gauss-Hermite mode $`H_0(x)H_0(y)`$, as discussed above. In general, the modes $`\psi _{nm}(x,y)`$ have no radial symmetry and they include both dipole-like and vortex-like states.
The simplest higher-order radially symmetric mode describes the condensate with a single vortex, shown in Fig. 8. Figures 8(a) and 8(b) show the density profile of the condensate with a vortex state, for both negative ($`\sigma =+1`$) and positive ($`\sigma =-1`$) scattering length, respectively. Figure 8(c) presents the dependence of the invariant $`Q`$ vs. $`\beta `$ for the vortex state in comparison with the ground-state mode. Because the vortex state is a nonlinear mode that extends the corresponding mode of the linear system, in the linear limit it approaches the value $`\beta =4`$.
## IV Three-Dimensional Trap of Radial Symmetry
The case of a radially symmetric 3D trap is the most studied in the literature, so that the corresponding results are well known, and therefore we reproduce some of them just for the completeness of the general picture of the stationary states. Similar results for the stationary states can be found, e.g. in Refs. 3 and 17, whereas the BEC stability has been recently discussed by Huepe et al. .
Figures 9(a) and 9(b) show the density profiles of the 3D condensate of radial symmetry for different values of $`\beta `$, for both negative ($`\sigma =+1`$) and positive ($`\sigma =-1`$) scattering length. While in the case of repulsion ($`\sigma =-1`$) the condensate broadens for larger $`\beta `$, remaining always stable, for the attractive interparticle interaction ($`\sigma =+1`$) the condensate is stable only for $`\beta <\beta _{\mathrm{cr}}`$, where $`\beta _{\mathrm{cr}}=0.72`$ corresponds to the maximum value $`Q_{\mathrm{cr}}\approx 14.45`$ \[see Fig. 10\]. This result is consistent with the Vakhitov-Kolokolov stability criterion $`dQ/d(-\beta )>0`$. The critical value $`\beta _{\mathrm{cr}}`$ corresponds to a critical number of particles $`N_{\mathrm{cr}}`$, and it has already been observed in experiment . Without the trapping potential, i.e. in the limit of the 3D NLS equation, all localised solutions for $`\sigma =+1`$ are unstable (shown as a dotted curve in Fig. 10).
## V Conclusions
We have presented a systematic study of the ground-state and higher-order spatially localised solutions of the GP equation for the Bose-Einstein condensates in a parabolic trap of different dimensions. While many results for the radially symmetric ground-state and vortex modes in 3D are available in the literature for the condensates with the positive scattering length (i.e., for repulsive interaction of the condensate atoms), little is known for the condensates with the negative scattering length, especially for the 2D traps. In particular, we have presented the results indicating that, in spite of the collapse condition in the 2D case, the family of 2D localised solutions of the GP equation can be stable. Additionally, we have clarified the meaning of higher-order localised modes, i.e. nonlinear Gauss-Hermite eigenmodes, and demonstrated their link to multi-soliton states. We believe the systematic study of the stationary states in all the dimensions carried out here from the viewpoint of the nonlinear physics of localised states and solitary waves allows us to close a gap in the literature and deepens our knowledge about the condensate properties and stability.
# Statistics of DNA sequences: a low frequency analysis
## I Introduction
The statistics of DNA sequences is an active topic of research nowadays. There are studies on the power spectral density, random walker representations, correlation functions, etc. Although some of the studies contradict each other, there is a consensus with respect to the reported behavior of the power spectrum of DNA sequences. For high frequencies it is roughly flat, with a sharp peak at $`f=1/3`$, which has been shown to be due to nonuniform codon usage. For smaller frequencies, it has been reported that it presents a power-law behavior with exponent approximately equal to $`-1`$, that is, $`1/f`$ noise. Since a cutoff of the power-law exists at high frequencies, it has been called a “partial power-law”. The presence of $`1/f`$ noise in a given frequency interval indicates the presence of a self-similar (fractal) structure in the corresponding range of wavelengths, whereas a flat power spectrum indicates absence of correlations (white noise).
It is an important question whether or not the power-law behavior of the power spectrum of a given DNA chain extends up to the smallest frequencies. If it does, this would imply that the fractal behavior of that DNA chain spans the entire chain, and that the correlation length of the chain is not smaller than the chain size. Some studies have claimed that the fractal behavior of DNA prevails through the entire DNA molecule. The aim of this paper is to show that this is not generally correct.
We have done statistical analyses of the DNA of thirteen complete microbial genomes, namely Archaeoglobus fulgidus (2178400 bp), Aquifex aeolicus (1551335 bp), Bacillus subtilis (4214814 bp), Chlamydia trachomatis (1042519 bp), Escherichia coli, also known as Ecoli (4639221 bp), Treponema pallidum (1138011 bp), Haemophilus influenzae Rd (1830138 bp), Helicobacter pylori 26695 (1667867 bp), Mycoplasma pneumoniae (816394 bp), Mycobacterium tuberculosis H37Rv (4411529 bp), Pyrococcus horikoshii OT3 (1738505 bp), Synechocystis PCC6803 (3573470 bp), and Mycoplasma genitalium G37 (580073 bp). We have found that the behavior of the power spectrum at small frequencies can be different for different organisms. Also, it can be different for different nucleotides in the same organism. Thus, for some organisms, the behavior of the power spectrum (PS) as a function of the frequency shows, in a log-log plot, three different regions, instead of the two reported previously. That is, as the frequency increases, it changes from (on average) a flat function to a power law and then to a flat function again, showing that the fractal structure of DNA sequences does not necessarily extend up to the total length of the chain. The flattening of the power spectrum at low frequencies is just a signature of the fact that the correlation length of DNA sequences is, for many sequences, much smaller than the entire length of the DNA chain. We have calculated the autocorrelation function (AF) of the nucleotides in the DNA chains of the organisms mentioned above. We have found that in some of the organisms the correlation length is of the order of a few thousand base-pairs. In others, the correlation length is very large, being not smaller than 100,000 base-pairs.
A DNA chain is represented by a sequence of four letters, corresponding to four different nucleotides: adenine (A), cytosine (C), guanine (G) and thymine (T). The calculation of the power spectrum or the autocorrelation function requires that this symbolic sequence be transformed into a numerical one. Several methods have been proposed for this. Here we use the method introduced by Voss, which has been shown in to be equivalent to the method used in. In Voss’s method one associates 0 to the site in which a given symbol is absent and 1 to the location where it is present. So, for a given DNA sequence there will be four different numerical sequences, corresponding to the sequences associated with A, C, G and T. In his original paper, Voss calculated the PS for each one of these sequences and summed them to find the average PS. Here, we treat them distinctly, because we also want to know about the similarities and differences of the statistical features of different nucleotides in a given DNA sequence.
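In practice, Voss's mapping is a one-line indicator construction. A minimal sketch (Python with NumPy; our own code, not part of the original analysis):

```python
import numpy as np

def voss_indicators(seq):
    """Map a DNA string onto four 0/1 indicator sequences (Voss's method)."""
    s = np.frombuffer(seq.upper().encode(), dtype='S1')
    return {b: (s == b.encode()).astype(float) for b in 'ACGT'}

x = voss_indicators("ATGCGATTACA")
print(x['A'])   # [1. 0. 0. 0. 0. 1. 0. 0. 1. 0. 1.]
```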
By artificially linking flank sequences together, Borstnik et al. found a behavior for the PS as a function of the frequency that was flat, then an exponential decay, then flat again. Our studies of complete sequences show that the behavior of the PS does not show any exponential decay in the region of intermediate frequencies. We found instead a power law. However, for low frequencies we also find a flat PS in several of the sequences studied. A flat PS at high frequencies is observed in all cases.
## II Statistical Analysis
### A Power Spectrum
Let us use Voss’s method and denote by $`x_j^A`$ the numerical value associated with the symbol A. Then one has $`x_j^A=1`$ if symbol $`A`$ is present at location $`j`$ and $`x_j^A=0`$ otherwise. Similar transformation is made for symbols C, G and T. Consequently, the DNA can be divided into four different binary subsequences of 0’s and 1’s, associated with the symbols A, C, G, and T.
The Fourier transform of a numerical sequence $`x_k`$ of length $`N`$ is by definition,
$$V(f_j)\equiv \frac{1}{N}\sum _{k=0}^{N-1}x_k\mathrm{exp}(-2\pi ikf_j),$$
(1)
where the frequency $`f_j`$ is given by $`f_j=j/N`$, with $`j=0,\mathrm{\dots },N-1`$. The PS is defined as $`S(f_j)=V(f_j)V^{\ast }(f_j)=|V(f_j)|^2`$. From the definition, we can see that $`V(f_0)=<x_k>`$, where the brackets denote an average along the chain. Consequently, this quantity carries no information about the relative positions of the nucleotides. Because of this, we usually neglect this quantity in our calculations, that is, we concentrate only on frequencies with $`j>0`$.
Since DNA sequences have a large number of base-pairs, and the PS presents considerable fluctuation, some kind of averaging is usually done to plot this quantity as a function of the frequency. The way of averaging done so far is the following: the DNA chain of length $`N`$ is divided into non-overlapping subsequences of length $`L`$. Then, the power spectrum of each of these segments is computed and averaged over the $`N/L`$ subsequences. In this method the smallest frequency for which the PS can be calculated is, of course, $`f=1/L`$. Consequently, the behavior of frequencies in the range $`[1/N,1/L]`$ is unknown. An example of such a calculation for Ecoli is shown in Fig. 1, where the DNA chain was divided into subsequences of 8192 nucleotides. A clear power law, followed by an approximately flat region with a sharp peak at $`f=1/3`$, is seen. To avoid overlap of the curves, we have displaced the PS of cytosine, guanine and thymine by dividing it by $`10`$, $`10^2`$ and $`10^3`$, respectively. Since the power spectrum for sequences of real numbers is symmetric with respect to the axis $`f=0.5`$, we plot only the PS for frequencies in the interval of $`0`$ to $`0.5`$. A similar figure for adenine is shown in .
In this paper we show that another way of averaging allows one to calculate the PS for smaller frequencies than the method described above, and then verify what happens to the power-law as the frequency decreases. More specifically, we calculate the mean PS in a sliding window of $`n`$ points, with adjacent windows having an overlap of $`n-1`$ points. The average PS in each window will determine the values of the smoothed resulting sequence. In mathematical terms we can express this as
$$\overline{S}(f_j)=\frac{1}{n}\sum _{m=j-\mathrm{\Delta }}^{j+\mathrm{\Delta }}S(f_m)$$
(2)
where $`\mathrm{\Delta }=(n-1)/2`$, $`n`$ is taken to be an odd number, and $`j`$ varies from $`\mathrm{\Delta }+1`$ to $`N-\mathrm{\Delta }-1`$. Although the new sequence in this method is smoother than the original one, its length is smaller by only $`2\mathrm{\Delta }`$ points. We have found that this method shows the same behavior for moderate and high frequencies as the method used in. However, it is much superior for studies at low frequencies.
To speed up the calculations of the PS we have used, as is normally done, the Fast Fourier Transform algorithm. This algorithm speeds up the calculation of the PS by a factor of $`N/log_\alpha N`$, but it requires that the length of the sequence analyzed be an integer power of the integer $`\alpha `$, which usually is taken to be two. Since the lengths of DNA sequences are not generally equal to an integer power of two, we take in our computation the largest subsequence, starting from the beginning of the chain, that fulfills this requirement. More specifically, we take the first $`N^{\prime }=2^K`$ nucleotides, where $`K`$ is the largest integer satisfying the requirement that $`N^{\prime }\le N`$, with $`N`$ being the total size of the DNA chain. In this way, the number of nucleotides not included in the calculation is always smaller, and in many cases much smaller, than $`N/2`$. We have also done calculations considering the entire length of the DNA and zero padding the sequence to the next integer power of 2, as described in. The results remain essentially the same as the ones we show here.
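Putting these pieces together, a minimal implementation of the power-of-two truncation and the sliding-window average of Eq. (2) could read as follows (Python with NumPy; our own sketch, in which the convolution simply discards the $`n-1`$ edge frequencies where a centered window is undefined):

```python
import numpy as np

def smoothed_power_spectrum(x, n=33):
    """PS of the leading 2^K points of x (K the largest integer with 2^K <= len(x)),
    smoothed by a centered moving average over n (odd) frequency bins."""
    K = int(np.log2(len(x)))
    x = np.asarray(x, dtype=float)[:2**K]
    V = np.fft.fft(x) / len(x)              # V(f_j), with f_j = j/N
    S = np.abs(V)**2
    S = S[1:len(S)//2 + 1]                  # drop S(f_0); keep 1/N <= f <= 0.5
    return np.convolve(S, np.ones(n)/n, mode='valid')
```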
Since our method shows the same behavior of the PS in the range of intermediate and large frequencies as the other averaging method, and also due to the large size of the DNA chains, we plot the PS only in the frequency range $`[1/N,0.01]`$. We show in Fig. 2 the results of our calculation for $`n=33`$ for four representative cases of the thirteen studied. For clarity, we have displaced the PS of C, G and T by dividing it by $`10`$, $`10^2`$ and $`10^3`$, respectively. In this way, an overlap of the curves is avoided. Our results show that the low-frequency PS associated with each of the nucleotides in the organisms studied falls into one of the following cases:
(a) All four PS associated with the four different nucleotides flatten off at low frequencies. In these cases there are three regions in the PS versus frequency curve. At both low and high frequencies the PS is of white-noise type, and the middle region is characterized approximately by a power-law behavior, that is, in a log-log plot the PS satisfies $`S\propto f^{-\gamma }`$, with $`\gamma >0`$. This is for example the case of Ecoli, shown in Fig. 2(a). When compared with Fig. 1 or with Fig. 1 of we see that the averaging method of does not show the true behavior of the PS at low frequencies. In this calculation we used the first $`2^{22}`$ nucleotides, which corresponds to $`90\%`$ of the Ecoli DNA. We show another case with the same behavior in Fig. 2(b), which is the PS of Aquifex aeolicus. For the PS of Aquifex aeolicus we used the first $`2^{20}`$ sites, which corresponds to $`68\%`$ of the chain length. The other organisms, among the ones studied, that show the same PS behavior are Archaeoglobus fulgidus, Synechocystis PCC6803, Mycoplasma pneumoniae, and Mycobacterium tuberculosis.
(b) The second type of behavior is the one in which the PS at small frequencies of all the nucleotides presents a power-law behavior, which is approximately an extension of the PS behavior at intermediate frequencies. For these organisms, the PS presents only two regions: a flat one at high frequencies, and a power-law behavior for intermediate and low frequencies. A typical case of this kind of behavior is shown in Fig. 2(c), which is the PS of Bacillus subtilis. In the calculation of the PS in this case we have used the first $`2^{22}`$ sites, which corresponds to $`99\%`$ of the total length of the chain. The other organisms studied that have a similar PS are: Treponema pallidum, Pyrococcus horikoshii OT3, and Mycoplasma genitalium.
(c) The third, and last, type of behavior we have seen is the one in which, for a given organism, different nucleotides present different asymptotic behavior of the PS at low frequencies. That is, the PS flattens off for some of the nucleotide sequences, while for the others it remains approximately a power-law. An example of such behavior is shown in Fig. 2(d), which is the PS of Haemophilus influenzae Rd. We see that the different behaviors of the PS are grouped in pairs. In all the cases studied we found that the PS of A is qualitatively similar to the PS of T, and the one of C is similar to the one of G. This kind of pairing of the statistical features of nucleotides has been reported for yeast chromosomes in . This is probably caused by the strand symmetry of DNA sequences, reported in. In the calculation of the PS we have used the first $`2^{20}`$ sites of the DNA chain, which corresponds to $`57\%`$ of the total number of nucleotides. Since a large number of sites are left out of the calculation, we have also analyzed the PS of the central and final regions of the chain. We verified that the results remain essentially the same as the ones shown in Fig. 2(d). The other organisms that have similar statistical features of the PS are Chlamydia trachomatis and Helicobacter pylori 26695.
### B Autocorrelation Function
The autocorrelation function R(l) of a numerical sequence is, by definition,
$$R(l)=<x_kx_{k+l}>,$$
(3)
where the brackets denote average over the sites along the chain. For $`l=0`$, Eq. (3) implies $`R(0)=<x_k^2>`$, which is a quantity carrying no information about the relative position of the nucleotides. As in the case of the power spectrum for $`S(0)`$, this quantity will be neglected in our calculations.
Statistical independence between sites separated by a distance $`l`$ implies that $`<x_kx_{k+l}>=<x_k>^2`$. The value of $`l`$ above which this condition is satisfied (on average) is called the correlation length. DNA molecules, depending on the organism, can form an open or a closed loop. Bacterial DNA usually forms a closed loop. For circular chains, the autocorrelation function and the PS form Fourier transform pairs (this is the Wiener-Khintchine theorem) . In order to consider the entire DNA sequence (without the constraints of the Fast Fourier algorithm) we calculate the AF using its plain definition, that is, Eq. (3), and not via Fourier transforming the PS. We present results for $`l`$ in the interval $`[1,10^5]`$. This is a much larger interval than the ones considered in previous publications, which took $`l`$ in $`[1,10^3]`$. It is obvious that when $`l<<N`$, as is the case here, it does not matter whether we consider open or closed boundary conditions. Since we find it computationally easier to consider open boundary conditions, we present the results of the AF for this case. It is beyond the scope of this paper to study cross-correlations between two different kinds of nucleotides. Such a kind of study can be found for example in .
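The direct estimator of Eq. (3) with open boundaries, together with the period-3 smoothing used below, can be written as follows (Python with NumPy; our own sketch, adequate although not the fastest choice for chains of $`10^6`$ sites):

```python
import numpy as np

def autocorrelation(x, lmax):
    """R(l) = <x_k x_{k+l}> for l = 1..lmax, with open boundary conditions."""
    x = np.asarray(x, dtype=float)
    return np.array([np.mean(x[:-l] * x[l:]) for l in range(1, lmax + 1)])

def smooth(R, n=33):
    """Centered moving average over n points; taking n a multiple of 3
    removes the strong period-3 (codon) oscillation."""
    return np.convolve(R, np.ones(n)/n, mode='valid')
```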
We show in Fig. 3 the AF versus $`l`$ for the sequences whose PS we displayed in Fig. 2. Since the AF presents a strong oscillation of period 3, we chose $`n`$ to be a multiple of 3 in order to smooth it out. Here we have used $`n=33`$ (there was no particular reason for choosing $`n`$ a multiple of 3 in the calculation of the PS). In Fig. 3 the horizontal lines are the corresponding values of $`<x_k>^2`$. When $`R(l)=<x_kx_{k+l}>\simeq <x_k>^2`$, statistical independence between the nucleotides of a given type holds. As Fig. 3 shows, when $`l\lesssim 100`$ the AF is roughly flat for some sequences, and for others it is approximately a power-law . Then, as $`l`$ increases we see a power-law regime in all cases. For the interval of $`l`$ studied, we observe that the AF can get flat again as $`l`$ increases even more (with $`R(l)\simeq <x_k>^2`$), or not reach a plateau. For the sequences where the PS flattens off at low frequencies, we expect that the AF will get flat for larger $`l`$, with statistical independence holding. However, for most of the cases studied, this happens when $`l>>10^5`$. Only the AF of Aquifex aeolicus seems to reach a plateau for $`l`$ in the interval $`[1,10^5]`$ for all the nucleotides. This is shown in Fig. 3(c) and Fig. 3(d), where we observe that the correlation lengths for this organism appear to be between $`10^3`$ and $`10^4`$. For the other organisms studied, we see a wide variety of behaviors of the AF in the region $`l\in [10^3,10^5]`$. As Fig. 3 shows, we find cases in which the AF reaches a plateau with statistical independence between the nucleotides; in others we see a slow decrease of the AF, such as the AF of A for Bacillus subtilis. We also find an abrupt change of slope in a plateau region, like the AF of A and of G in Haemophilus influenzae Rd. And most interestingly, we find the presence of anti-correlations, that is, $`<x_kx_{k+l}>`$ being smaller than $`<x_k>^2`$. This implies that sites separated by a given distance tend to be occupied by different nucleotides. The case in which this appears most strongly is the AF of C for Haemophilus influenzae Rd. We have also observed that most of the sequences present a peak in the AF at $`l\simeq 100`$. The reason for this is unknown to us.
## III Conclusion
In summary, we have studied statistical properties of the complete DNA of thirteen microbial genomes and shown that its fractal behavior does not always prevail through the entire chain. For some sequences the power spectrum gets flat at low frequencies, and for others it remains a power-law. In the study of the autocorrelation function we have found a rich variety of behaviors, including the presence of anti-correlations.
###### Acknowledgements.
This paper is an outgrowth of work done with H. J. Herrmann, to whom I am grateful for introducing me to this subject.
# Self-organized Networks of Competing Boolean Agents
## Abstract
A model of Boolean agents competing in a market is presented where each agent bases his action on information obtained from a small group of other agents. The agents play a competitive game that rewards those in the minority. After a long time interval, the poorest player’s strategy is changed randomly, and the process is repeated. Eventually the network evolves to a stationary but intermittent state where random mutation of the worst strategy can change the behavior of the entire network, often causing a switch in the dynamics between attractors of vastly different lengths.
PACS numbers: 05.65.+b, 87.23.Ge, 87.23.Kg
Dynamical systems with many elements under mutual regulation or influence are thought to underlie much of the phenomena associated with complexity. Such systems arise naturally in biology, as, for instance, genetic regulatory networks , or ecosystems, and in the social sciences, in particular the economy . Economic agents make decisions to buy or sell, adjust prices, and so on based on individual strategies which take into account the heterogeneous external information each agent has available at the time, as well as internal preferences such as tolerance for risk. External information may include both globally available signals that represent aggregate behavior of many agents such as a market index, or specific (local) information on what some other identified players are doing. In this case each agent has a specified set of inputs, which are the actions of other agents, and a set of outputs, his own actions, that may be conveyed to some other agents. Thus, the economy can be represented as a dynamical network of interconnected agents sending signals to each other with possible, global feedback to the agents coming from aggregate measures of their behavior plus any exogenous forces.
Each agent’s current strategy can be represented as a function which specifies a set of outputs for each possible input. In the simplest case the agents have only one binary choice such as either buying or selling a stock . As indicated first by B. Arthur this simple case already presents a number of intriguing problems. In his “bar problem”, each agent must decide whether to attend a bar or refrain based on the previous aggregate attendance history . Challet and Zhang made a perspicuous adaptation, the so-called minority model, where agents in the minority are rewarded, and those in the majority punished . Common to all these and related works is that the network of interconnections between the agents is totally ignored. They are mean field descriptions. Each agent responds only to an aggregate signal, e.g. which value (0 or 1) was in the majority for the last $`T_i`$ time steps, rather than any detailed information he may have about other specified agents. It is not unexpected that an extended system with globally shared information can organize. A basic question in studies of complexity is how large systems with only local information available to the agents may become complex through a self-organized dynamical process.
Here we explicitly consider the network of interconnections between agents, and for simplicity exclude all other effects. We represent agents in a market as a random network of interconnected Boolean elements under mutual influence, the so-called Kauffman network . The market payoff takes the form of a competitive game. The performance of the individual agents is measured by counting the number of times each agent is in the majority. After a time scale, defining an epoch, the worst performer, who was in the majority most often, changes his strategy. The Boolean function of that agent is replaced with a new Boolean function chosen at random, and the process is repeated indefinitely. Note that it is not otherwise indicated to the agents what is rewarded, i.e. being in the minority. The agents are only given their individual scores and otherwise play blindly; they do not know directly that they are rewarded by the outcome of a minority game, unlike the original minority game model.
We observe that irrespective of initial conditions, the network ultimately self-organizes into an intermittent steady state at a borderline between two dynamical phases. This border may correspond to an “edge of chaos” . In some epochs the dynamics of the network takes place on a very long attractor, while otherwise the network is either completely frozen or the dynamics is localized on some attractor with a smaller period. More precisely, numerical simulation results indicate that the distribution of attractor lengths in the self-organized state is broad, with no apparent cutoff other than the one that must be numerically imposed, and consistent with power-law behavior for large enough attractor lengths. A single agent’s change of strategy from one epoch to the next can cause the entire network to flip between attractors of vastly different lengths. Thus the network can act as a switch.
Consider a network of $`N`$ agents where each agent is assigned a Boolean variable $`\sigma _i=0`$ or $`1`$. Each agent receives input from $`K`$ other distinct agents chosen at random in the system. The set of inputs for each agent $`i`$ is quenched. The evolution of the system is specified by $`N`$ Boolean functions of $`K`$ variables, each of the form
$$\sigma _i(t+1)=f_i(\sigma _{i_1}(t),\sigma _{i_2}(t),\mathrm{}\sigma _{i_K}(t)).$$
(1)
There exist $`2^{2^K}`$ possible Boolean functions of $`K`$ variables. Each function is a lookup table which specifies the binary output for a given set of binary inputs. In the simplest case defined by Kauffman, where the networks do not organize, each function $`f_i`$ is chosen randomly among these $`2^{2^K}`$ possible functions with no bias; we refer to this case as the random Kauffman network (RKN).
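A minimal sketch of such a network in Python may help fix the notation; the function names and the generator seed below are ours, but the snippet implements exactly the quenched wiring and the unbiased random lookup tables just described.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_network(N, K):
    """K quenched inputs per agent (distinct agents, excluding itself) and a
    random Boolean lookup table: one of the 2^(2^K) functions, chosen unbiased."""
    inputs = np.array([rng.choice(np.delete(np.arange(N), i), size=K, replace=False)
                       for i in range(N)])
    tables = rng.integers(0, 2, size=(N, 2 ** K))   # output bit per input pattern
    return inputs, tables

def step(state, inputs, tables):
    """Parallel update of Eq. (1): the K input bits form a binary index
    into each agent's lookup table."""
    idx = np.zeros(len(state), dtype=int)
    for j in range(inputs.shape[1]):
        idx = (idx << 1) | state[inputs[:, j]]
    return tables[np.arange(len(state)), idx]
```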
We will now briefly review some facts about Kauffman networks. First, a phase transition occurs on increasing $`K`$. For $`K<2`$ RKN starting from random initial conditions reach frozen configurations, while for $`K>2`$ RKN reach attractors whose length typically grows exponentially with $`N`$ and are called chaotic. RKN with $`K=2`$ are critical, and the distribution of attractor lengths that the system reaches, starting from random initial conditions, approaches a power law , for large enough system sizes, when averaged over many network realizations. This phase transition in the Kauffman networks can also be observed by biasing the random functions $`f_i`$ so that the output variables switch more or less frequently if the input variables are changed. Boolean functions can be characterized by a “homogeneity parameter” $`P`$ which represents the fraction of 1’s or 0’s in the output, whichever is the majority for that function. In general, on increasing $`P`$ at fixed $`K`$, a phase transition is observed from chaotic to frozen behavior. For $`K<2`$ the unbiased, random value happens to fall above the transition in the frozen phase, while for $`K\geq 3`$ the opposite occurs . Kauffman networks are examples of strongly disordered systems and have attracted attention from physicists over the years (see for example Refs. ). Note that the phase transition previously observed in Kauffman networks arises by externally tuning parameters such as $`P`$ or $`K`$.
We consider random Boolean networks of $`K`$ inputs, and with lookup tables chosen independently from the $`2^{2^K}`$ possibilities with equal probability. With specified initial conditions, generally random, each agent is updated in parallel according to Eq. 1. The agents are competing against each other and at each time step those in the minority win. Thus there is a penalty for being in the herd. One may ascribe to agents a reluctance to change strategies. Only in the face of long-term failure will an agent overcome his barrier to change. In the limiting case of high barriers to change, the time scale for changing strategies will be set by the poorest performer in the network. The change of strategies is approximated as an extremal process where the agent who was in the majority the most often over a long time scale, the epoch, is chosen for “Darwinian” selection. In our simulations, the network was updated until either the attractor of the dynamics was found, or the length of the attractor was found to be larger than some limiting value which was typically set at 10,000 time steps, solely for reasons of numerical convenience. The performance of the agents was then measured over either the attractor or the portion of the attractor up to the cutoff length.
The Boolean function of the worst player is replaced with a new Boolean function chosen completely at random with equal probability from the $`2^{2^K}`$ possible Boolean functions. If two or more agents are the worst performers, one of them is chosen at random and changed. The performance of all the agents is then measured in the new epoch, and this process is continued indefinitely. Note that the connection matrix of the network does not evolve; the set of agents who are inputs to each agent is fixed by the initial conditions.
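The whole selection cycle can then be summarized in code. The sketch below is a schematic implementation of ours (reusing `make_network` and `step` from the previous snippet), not the authors' original program; in particular, a cutoff-truncated run is scored over the whole recorded stretch, the simplest possible choice.

```python
def epoch(state, inputs, tables, cutoff=10_000):
    """Iterate until a state repeats (attractor found) or the cutoff is hit,
    score each agent by how often it was in the majority over that stretch,
    then redraw the Boolean table of the worst performer at random."""
    N = len(state)
    seen, history = {}, []
    while tuple(state) not in seen and len(history) < cutoff:
        seen[tuple(state)] = len(history)
        history.append(state.copy())
        state = step(state, inputs, tables)
    start = seen.get(tuple(state), 0)        # attractor start (0 if cutoff was hit)
    cycle = np.array(history[start:])
    on_majority = (cycle.sum(axis=1) > N / 2)[:, None]
    majority = np.where(on_majority, cycle, 1 - cycle)
    scores = majority.sum(axis=0)            # times each agent was in the majority
    worst = rng.choice(np.flatnonzero(scores == scores.max()))
    tables[worst] = rng.integers(0, 2, size=tables.shape[1])
    return state, len(cycle)

N, K = 99, 3
inputs, tables = make_network(N, K)
state = rng.integers(0, 2, size=N)
for _ in range(1000):                        # epochs
    state, period = epoch(state, inputs, tables)
```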
Independent of initial conditions, a $`K=3`$ network evolves to a statistically stationary but intermittent state, shown in Fig. 1. Initially the attractors that the system reaches are always very long, consistent with all previous work on Kauffman networks. But after many epochs of selecting the worst strategy, short attractors first appear and a new statistically stationary state emerges. In this figure we roughly characterize an attractor as “chaotic” or long if its length is greater than $`l=10,000`$ time steps. On varying $`l`$ a similar picture is obtained as long as $`l`$ is sufficiently large to distinguish long-period attractors from short-period ones. In the stationary state, one observes that the network can switch behaviors on changing a single strategy. Intriguingly, Kauffman initially proposed random Boolean networks as simplified models of genetic regulation, where it is known that switches exist and are an important aspect of genetic control .
To be more precise, the histogram of the distribution of the lengths of the attractor in the self-organized state was measured as shown in Fig. 2 for different system sizes with the same numerically imposed cutoff $`l`$. The apparent peak at small periods is due to the relative presence or absence of prime numbers, and numbers which can be factored many ways. The last point represents all attractors larger than our numerically imposed cutoff 10,000, which is why a bump appears. In between these two regions, the behavior suggests a power-law, $`P_{atr}(t)\sim 1/t`$ asymptotically, as is the case at the phase transition in RKN . If we increase or decrease our numerically imposed cutoff then the bump at $`l`$ correspondingly moves left or right and the intermediate region expands or contracts, both consistent with the power law. Also the power-law behavior becomes more apparent for increasing system size, suggesting that the self-organized state we observed is not merely an effect of finite system size, but becomes more distinct as the system size increases.
The process of evolution towards the steady state is monitored by measuring the average value of the homogeneity parameter $`P`$ in the network from epoch to epoch. As shown in Fig. 3, for $`K=3`$, the average value of $`P`$ tends to increase from the random value set by the initial conditions during the transient. For finite $`N`$, there are fluctuations in $`P`$ in the steady state, as well as finite size effects in the average value $`P`$. For $`N=(99,315,999,3161)`$ we measured an average value in the steady state $`P=(0.656(1),0.664(1),0.669(1),0.671(1))`$ and root-mean-square fluctuations $`\mathrm{\Delta }P_{rms}\simeq (0.015,0.007,0.004,0.001)`$. These numerical results suggest that in the thermodynamic limit, $`N\to \mathrm{\infty }`$, $`P`$ approaches a unique value $`P_c\simeq 0.672`$. This value is below the $`P_c\simeq 0.792438`$ of random Kauffman $`K=3`$ networks, but is many standard deviations away from the initial value.
The dynamical state that the system evolves toward is different from the phase transition of Kauffman networks in other (less trivial) ways. In particular, the phase transition in RKN is a freezing transition where most elements do not change state. Only a few elements, strictly fewer than $`𝒪(N)`$, are changing state at the phase transition of RKN, whereas in our self-organized networks, there can be short attractors associated with many elements ($`𝒪(N)`$) changing state. This can only occur if the Boolean tables in the network become correlated by the evolutionary process, which, by construction, is not allowed for RKN. Thus our initially chaotic networks are not freezing as in Kauffman networks at the phase transition, but are somehow phase locking many elements together.
The distribution of performances of agents in the network fluctuates a great deal from epoch to epoch. The performance is measured by counting the fraction of times each agent is in the majority. In the case where the network has period one, there are obviously two peaks, one corresponding to the group always in the minority and the other corresponding to the group always in the majority. In fact we find that even on the long attractors encountered in the steady state, typically a significant fraction of the agents are frozen. The number of these frozen agents fluctuates from epoch to epoch.
Fig. 4 is a histogram of performances for agents in a self-organized network in a particular epoch which had a period greater than $`10,000`$. Note that the relative performances vary considerably. The two peaks represent the frozen agents. As indicated in the figure, the frozen agents are typically divided between the two states unevenly. At any given instant, despite the uneven division between the frozen agents, the total number of agents in the two states (0,1) is almost evenly divided, with fluctuations that are much smaller than in RKN. Active agents, who are changing their state in response to the inputs of others, comprise the remainder of the histogram outside of the two peaks. As shown in this figure, some agents who are inflexible and do not respond to their environment perform better than some agents who respond to their changing inputs and change states. This suggests that somehow the losers are being exploited by some information travelling in the network that they respond to. Also, somewhat counterintuitively, a large group of agents who take the same action, corresponding to the left-hand peak, can compete very well in spite of the fact that the minority game tends to punish herd behavior.
Although we currently have no adequate theoretical description of our numerical observations, we can still discuss, to some extent, the generality and robustness of our results. First, if instead of changing the entire Boolean table of the worst performer just one element in it is changed, the self-organization process still takes place. If on the other hand, the Boolean function of the worst performer and those who receive input from it are changed, no self-organization takes place. Of course it doesn’t make sense to change the Boolean functions of the agents who listen to the worst performer because in our context the barrier to change is an internal function of the performance for each individual. The precise behavior on varying $`K`$ is not determined at present. For $`K=6`$, we have simulated systems with $`N=99`$ as long as $`10^6`$ epochs and never observed the system to reach any frozen state when starting from a random, unbiased state in the chaotic phase, so it is possible that the self-organization process as described here using completely random tables does not occur for high enough $`K`$.
However, other significant modifications were done where the self-organization process survives. For example, if instead of changing the Boolean tables of the worst performer, we keep the Boolean tables fixed at their initial state, but change the inputs for the worst performer by rewiring the network, then the $`K=3`$ networks still self-organize to a similar state at an “edge of chaos” with similar statistical properties for the periods of the attractors and performances of the agents. This occurs despite the fact that in this case the average homogeneity parameter, $`P`$, of the network cannot evolve.
Rather than define an arbitrary fitness, and select those agents with lowest fitness, an approach that was used by Bak and Sneppen , to describe co-evolution, we eliminate the concept of fitness and define a performance based on a specific game. Clearly if the agents are rewarded for being in the majority then the behavior of the system is completely trivial; the agents gain by cooperating instead of competing and the network is driven deep into the frozen phase. This naturally raises the question of which types of games lead to self-organized complex states. In our model, selection of agents in the majority for random change tends to increase the number in the minority. Even in the absence of interactions, eventually those in the minority would become the majority and lose. We suspect that, in general, the game must make agents compete for a reward that depends on the behavior of other agents in a manner that intrinsically frustrates any group of agents from permanently taking over and winning. This frustration may be an essential feature of the dynamics of many complex systems, and our model may be interpreted, as, for instance, describing an ecosystem of interacting and competing species.
We thank P. Bak, S. Kauffman, and K. Sneppen for stimulating discussions. This work was funded in part by EU Grant No. FMRX-CT98-0183.
# SOME COMMENTS ON THE SPIN OF THE CHERN-SIMONS VORTICES
R. Banerjee (e-mail: rabin@boson.bose.res.in)
S.N.Bose National Centre for Basic Sciences
Block JD, Sector III, Salt Lake City, Calcutta 700091, India
and
P. Mukherjee
A.B.N. Seal College
Cooch Behar, West Bengal
India
## Abstract
We compute the spin of both the topological and nontopological solitons of the Chern - Simons - Higgs model by using our approach based on constrained analysis. We also propose an extension of our method to the non - relativistic Chern - Simons models. The spin formula for both the relativistic and nonrelativistic theories turn out to be structurally identical. This form invariance manifests the topological origin of the Chern - Simons term responsible for inducing fractional spin. Also, some comparisons with the existing results are done.
In (2+1) dimensional field theories the gauge - field dynamics may be dictated by the Chern - Simons (C-S) piece instead of the Maxwell ( Yang - Mills) term. Interest in the C-S field theories originated from the observation that the C-S term implements the Hopf interaction in (2+1) dimensional O(3) nonlinear sigma model within the framework of local gauge theory . The soliton sector of the theory offers excitations (baby skyrmions) carrying fractional spin and statistics . Pure C-S coupled field theories can support self dual vortex configurations - a fact exemplified by numerous models . A remarkable aspect of the C-S gauge field is that it can be coupled with both Poincare and Galileo symmetric models . The latter possibility is very useful in view of the applications of the C-S theories in condensed matter physics . There are different computations of the spin of the C - S vortices, both relativistic and non-relativistic . However, the results do not always agree . Recently, we have proposed a general framework for obtaining the spin of relativistic C - S vortices which is found to give consistent results for a variety of C - S theories . In this paper an extension of our method applicable to the nonrelativistic C - S models is presented. Consequently, a general method, equally viable in the relativistic and non-relativistic cases, is provided. To put our work in the proper perspective let us first give a brief digression.
The usual method of defining the spin of the C-S vortices is to identify the spin with the total angular momentum in the static soliton configuration . This angular momentum integral is constructed from the symmetric energy - momentum (E - M) tensor obtained by varying the action with respect to a background metric. Since this E - M tensor is relevant in formulating the Dirac - Schwinger conditions, it will henceforth be referred to as the Schwinger E - M tensor. Correspondingly, the angular momentum following from this energy momentum tensor usually goes by the name of Schwinger. It is both symmetric and gauge invariant, and also occurs naturally in the context of the general theory of relativity. For these properties it is also interpreted as the physical angular momentum. Contrary to the Noether angular momentum, however, the Schwinger angular momentum does not have a natural splitting into an orbital and a spin part . Thus it is not transparent how the value of this angular momentum in the static limit may be identified with the intrinsic spin of the vortices.
In the alternative field - theoretic definition of the spin of the C-S vortices advanced in , one abstracts the canonical part from the physical angular momentum. The canonical part is obtained by using the conventional Noether definition. Both the canonical as well as the physical angular momentum are obtained from the improved versions of the corresponding E - M tensors to properly account for the constraints of the theory. Now the Noether angular momentum contains the orbital part plus the contribution coming from the spin degrees of freedom, as appropriate for generating local transformations of the fields under Lorentz transformations . Its difference from the physical angular momentum is shown to be a total boundary term containing the C-S gauge field only, the value of which depends on the asymptotic limit of the C-S field. It vanishes for nonsingular configurations. However, for the C-S vortex configurations we get a nonzero contribution. This contribution is found to be independent of the origin of the coordinate system. It is possible to interpret this difference as an internal angular momentum characterising the intrinsic spin of the C-S vortices. The connection of the anomalous spin with the topological C-S interaction is thus clearly revealed.
The C-S coupled O(3) nonlinear sigma model provides an ideal example for the comparison of the different methods. The Schwinger E - M tensor for the model is given by the well known expression ,
$$\theta _{\mu \nu }^s=\frac{2}{f^2}\left(2\partial _\mu n^a\partial _\nu n^a-g_{\mu \nu }\partial _\alpha n^a\partial ^\alpha n^a\right),$$
(1)
where $`n^a(a=1,2,3)`$ are the sigma model fields satisfying
$$n^an^a=1$$
(2)
The angular momentum integral is,
$$J=\int d^2x\,ϵ_{ij}x_i\theta _{0j}^s$$
(3)
Since the (0,j) component of $`\theta _{\mu \nu }^s`$ explicitly involves a time derivative of $`n^a`$, J vanishes in the static configuration. The definition of then predicts zero spin of the baby skyrmions. However, it is definitely proved from quite general arguments that these excitations carry fractional spin and statistics . In fact it has been shown that the spin value is given by,
$$S=\frac{\theta }{2\pi }$$
(4)
where $`\theta `$ is the C-S coupling strength. The connection of the baby skyrmions with the quasiparticles found in the quantum Hall state has been established, where the anomalous spin of the excitations plays a crucial role . So the vanishing spin of the baby skyrmions predicted by the definition of is clearly a contradiction. This contradiction, which was earlier noticed in , prompted alternative definitions for obtaining the spin of the vortices. In this context it is interesting to note that the definition of gives an entirely satisfactory result for this model.
In we work in a gauge - independent formalism and the Schwinger E - M tensor is required to be augmented by appropriate linear combinations of the constraints of the theory , so as to generate proper transformation of the fields . The difference of the angular momenta obtained from the Schwinger and the canonical E - M tensors is
$$K=\theta \int d^2x\,\partial _i\left(x^iA^jA_j-x^jA^iA_j\right),$$
(5)
where $`A^i`$ is the C-S gauge field. Only the asymptotic form of the gauge field is required to compute K, and this form is dictated by the requirements of rotational symmetry and the Gauss constraint of the theory to be,
$$A^i(x)=\frac{Q}{2\pi }ϵ_{ij}\frac{x^j}{|x|^2},$$
(6)
where Q is the topological charge. Using (5) we get
$$K=\frac{Q^2}{2\pi }\theta .$$
(7)
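As a consistency check (ours, not part of the original argument), the boundary integral (5) can be evaluated explicitly with the asymptotic field (6). Since $`x_jA^j`$ is proportional to $`ϵ_{ij}x^ix^j=0`$, the second term in (5) drops out, and on a circle of radius $`R`$, where $`A^jA_j=Q^2/(4\pi ^2R^2)`$,

$$K=\theta \oint _{|x|=R}n_i\,x^iA^jA_j\,dl=\theta \,R\,\frac{Q^2}{4\pi ^2R^2}\,2\pi R=\frac{Q^2}{2\pi }\theta ,$$

independently of $`R`$, which reproduces (7).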
For the baby skyrmions Q = 1 and the spin value agrees with equation (4). This example shows that the definition of yields results compatible with general arguments .
We now extend our formalism for calculating the spin of the vortices in nonrelativistic models with C-S coupling, which is the main thrust of the paper. The Galileo invariant models cannot be made generally covariant and so a gauge invariant E - M tensor cannot be constructed along the lines of Schwinger. Nevertheless we can build up a gauge invariant E - M tensor using the equations of motion . We are then able to apply our formalism developed for the relativistic theories by substituting the Schwinger E - M tensor with this one. The resulting spin formula comes out to be exactly equivalent to that obtained in the relativistic case, revealing the topological connection of the origin of the fractional spin. Before discussing the nonrelativistic case, a quick survey of a relativistic example is appropriate.
The Lagrangian of the C-S-H model is
$$\mathcal{L}=(D_\mu \varphi )^{*}(D^\mu \varphi )+\frac{k}{2}ϵ^{\mu \nu \lambda }A_\mu \partial _\nu A_\lambda -V(|\varphi |)$$
(8)
where the covariant derivative is defined by,
$$D_\mu =\partial _\mu +ieA_\mu $$
(9)
This model is known to possess both topological as well as nontopological vortices. According to our definition the spin of the vortices is defined as
$$K=J^S-J^N$$
(10)
where $`J^S`$ and $`J^N`$ are, respectively, the Schwinger and Noether angular momentum. We work in a gauge independent formalism where the constraints of the theory are weakly zero. Different E-M tensors are thus required to be augmented by appropriate linear combination of the constraints of the theory to obtain proper transformation of the fields.
From a detailed analysis of the model (8) we arrive at the following augmented expressions for $`J^S`$ and $`J^N`$ ,
$`J^S`$ $`=`$ $`{\displaystyle \int d^2𝐱\,ϵ^{ij}x_i\left[\pi \partial _j\varphi +\pi ^{*}\partial _j\varphi ^{*}-kϵ^{lm}A_j\partial _lA_m\right]}`$ (11)
$`J^N`$ $`=`$ $`{\displaystyle \int d^2𝐱\left[ϵ^{ij}x_i\left(\pi \partial _j\varphi +\pi ^{*}\partial _j\varphi ^{*}-\frac{k}{2}ϵ^{lm}A_l\partial _jA_m\right)+\frac{k}{2}A^jA_j\right]}`$ (12)
where $`\pi `$ ($`\pi ^{*}`$) is the momentum conjugate to $`\varphi `$ ($`\varphi ^{*}`$). Using (11) and (12) in (10) we get
$$K=\frac{k}{2}\int d^2\vec{x}\,\partial ^i\left[x_iA_jA^j-A_ix_jA^j\right]$$
(13)
which is the same formula as obtained for the O(3) nonlinear sigma model ( see equation (5) ). Note that the integrand is a boundary term so that only the asymptotic form of the gauge field $`A_i`$ is required for the computation of K.
For topological vortices, the matter field $`\varphi `$ at infinity bears a representation of the broken U(1) symmetry,
$$\varphi \to v\mathrm{e}^{in\theta }$$
(14)
where n is the topological charge. The requirement of finite energy of the configuration dictates that asymptotically,
$$eA_i=nϵ_{ij}\frac{x^j}{|x|^2}$$
(15)
The above form is rotationally symmetric and satisfies the Gauss law. As a consequence the magnetic flux $`\mathrm{\Phi }`$ is quantised
$$\mathrm{\Phi }=\frac{2\pi n}{e}$$
(16)
After a straightforward calculation using (13) and (15), we obtain,
$$K=\frac{\pi kn^2}{e^2}$$
(17)
The nontopological vortices lie at the threshold of stability . For these the magnetic flux $`\mathrm{\Phi }`$ is arbitrary. The asymptotic form of the gauge field is now expressed as
$$A_i=\frac{\mathrm{\Phi }}{2\pi }ϵ_{ij}\frac{x^j}{|x|^2}$$
(18)
and the spin computed from (13) is
$$K=\frac{k\mathrm{\Phi }^2}{4\pi }$$
(19)
Equations (17) and (19) give the spin of the topological and nontopological vortices of the C-S-H model respectively. Notice that the sign of the spin is positive in both cases, which again is the same as that of the elementary excitations of the model. In earlier analyses there was a difference in sign which was explained by the introduction of a new interaction. This is not necessary in the present discussion.
Now we will apply the same general method to nonrelativistic models. Consider the Lagrangian
$$\mathcal{L}=i\varphi ^{*}D_t\varphi -\frac{1}{2m}(D_k\varphi )^{*}(D_k\varphi )+\frac{k}{2}ϵ_{\mu \nu \lambda }A_\mu \partial _\nu A_\lambda $$
(20)
where $`\varphi `$ is a bosonic Schrodinger field. The model (20) is invariant under the Galilean transformations and not under the transformations of the Poincare group. Note that the Galilean transformations put time and space on an unequal footing, so a space-time metric is not defined. In writing (20) we adopt a spatial Euclidean metric; covariant and contravariant components are thus not to be distinguished.
The action of the model (20) cannot be made generally covariant. The powerful method of constructing a gauge invariant energy - momentum (E - M) tensor, as formulated by Schwinger, is thus not available. Nonetheless, it is possible to construct a gauge invariant E - M tensor by appealing to the equations of motion . Our program is then clear. We will find a gauge-invariant momentum density from the matter current obtained by using the equations of motion. These equations will then be exploited to show the conservation of the corresponding momentum. We work in the gauge independent formalism, in contrast with the gauge-fixed approach of . A suitable linear combination of the Gauss constraint is to be added to the gauge invariant momentum operator, in order to generate the correct transformation of the fields under spatial translation. A gauge invariant angular momentum is then constructed using this momentum density. The canonical angular momentum obtained by Noether’s prescription is now subtracted from it. The spin of the vortices is, as usual, defined by
$$K=J-J^N$$
(21)
which is exactly similar to equation (10) with the exception that J is now the gauge invariant angular momentum constructed by using the equations of motion.
From the Lagrangian (20) we write the Euler-Lagrange equation corresponding to $`A_\mu `$,
$$kϵ_{\mu \nu \lambda }\partial _\nu A_\lambda =j_\mu $$
(22)
where $`j_\mu `$ is given by
$`j_0`$ $`=`$ $`\varphi ^{*}\varphi `$ (23)
$`j_i`$ $`=`$ $`{\displaystyle \frac{1}{2im}}\left[\varphi ^{*}(D_i\varphi )-\varphi (D_i\varphi )^{*}\right]`$ (24)
Observe that (22) leads to a continuity equation
$$\partial _0j_0+\partial _ij_i=0$$
(25)
so that $`j_0`$ and $`j_i`$ can be interpreted as the matter density and current density respectively.
From the E-L equation corresponding to $`A_0`$ we get the Gauss constraint of the theory
$$G=\varphi ^{*}\varphi -kϵ_{ij}\partial _iA_j\approx 0$$
(26)
Now we come to the construction of the gauge invariant momentum operator. The (0-i)th component of the EM tensor $`T_{0i}`$(i.e. the momentum density) is obtained from the matter current
$$T_{0i}=-\frac{i}{2}\left[\varphi ^{*}(D_i\varphi )-\varphi (D_i\varphi )^{*}\right]$$
(27)
We verify by a straightforward calculation using the equations of motion that $`T_{0i}`$ indeed satisfies the appropriate continuity equation,
$$\partial _0T_{0i}+\partial _kT_{ki}=0$$
(28)
where $`T_{ki}`$ is the stress - tensor,
$$T_{ki}=\frac{1}{2m}\left[(D_k\varphi )^{*}(D_i\varphi )+(D_k\varphi )(D_i\varphi )^{*}-\partial _i\left(\varphi ^{*}D_k\varphi +\varphi (D_k\varphi )^{*}\right)\right]$$
(29)
Using the expression(27) of $`T_{0i}`$ we construct a gauge invariant momentum operator
$$P_i=\int d^2𝐱\,T_{0i}$$
(30)
Exploiting (28) and neglecting the boundary term we find that $`P_i`$ is indeed conserved,
$$\frac{dP_i}{dt}=0$$
(31)
The boundary term vanishes due to the condition that the covariant derivative $`D_i\varphi `$ is zero on the boundary which is required to keep the energy finite.
The Lagrangian (20) is first order in time derivatives. It is then easy to read off the basic brackets from (20) by symplectic arguments. The nontrivial brackets are
$`\{\varphi (𝐱),\varphi ^{*}(𝐲)\}`$ $`=`$ $`-i\delta (𝐱-𝐲)`$
$`\{A_i(𝐱),A_j(𝐲)\}`$ $`=`$ $`{\displaystyle \frac{1}{k}}ϵ_{ij}\delta (𝐱-𝐲)`$ (32)
Using these basic brackets we obtain,
$$\{\varphi (𝐱),P_i\}=\partial _i\varphi (𝐱)+iA_i\varphi (𝐱)$$
(33)
Hence the transformation of $`\varphi `$ deviates from the expected canonical structure. For proper transformation of the fields under spatial translation we require to supplement $`T_{0i}`$ by the Gauss constraint,
$$T_{0i}^T=T_{0i}+A_iG$$
(34)
and the corresponding momentum operator
$$P_i=\int d^2𝐱\left[-\frac{i}{2}\left(\varphi ^{*}D_i\varphi -\varphi (D_i\varphi )^{*}\right)+A_iG\right]$$
(35)
turns out to be an appropriate generator of spatial translation. The term containing the Gauss operator in (35) exactly generates a piece in $`\{\varphi (𝐱),P_i\}`$ which cancels the anomalous term in (33).
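This cancellation is immediate to verify with the brackets (32); the short check below is ours. Since $`\{\varphi (𝐱),G(𝐲)\}=\{\varphi (𝐱),\varphi ^{*}(𝐲)\}\varphi (𝐲)=-i\delta (𝐱-𝐲)\varphi (𝐲)`$, one finds

$$\left\{\varphi (𝐱),\int d^2𝐲\,A_i(𝐲)G(𝐲)\right\}=-iA_i(𝐱)\varphi (𝐱),$$

which removes the anomalous piece $`+iA_i\varphi `$ in (33) and leaves $`\{\varphi (𝐱),P_i\}=\partial _i\varphi (𝐱)`$.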
We now come to the construction of $`J`$, the gauge invariant angular momentum, from the momentum density (35),
$$J=\int d^2𝐱\,ϵ_{ij}x_i\left[-\frac{i}{2}\left(\varphi ^{*}D_j\varphi -\varphi (D_j\varphi )^{*}\right)+A_jG\right]$$
(36)
The canonical angular momentum $`J^N`$ is obtained from Noether’s theorem as ,
$$J^N=\int d^2𝐱\left[ϵ_{ij}x_i\left(-\frac{i}{2}\left(\varphi ^{*}\partial _j\varphi -\varphi (\partial _j\varphi )^{*}\right)-\frac{k}{2}ϵ_{mn}A_m\partial _jA_n\right)+\frac{k}{2}A_jA_j\right]$$
(37)
Substituting (36) and (37) in (21) we obtain,
$$K=\frac{k}{2}\int d^2𝐱\,\partial _i\left[x_iA^2-x_jA_jA_i\right]$$
(38)
Observe that the master formula for the calculation of spin (38) is identical with equation (13). The asymptotic form of $`A_i`$ following from general considerations already elaborated leads to the same structure as in (15). Inserting this in (38) exactly reproduces (17) as the spin of the vortices.
We note in passing that self - dual soliton solutions can be obtained by including a quartic self - interaction in (20) , which are interpreted as the nonrelativistic limit of the nontopological vortices of the relativistic Chern - Simons - Higgs model considered previously. The spin of these solitons can be calculated by (38) using the asymptotic form (18). The result comes out to be identical with (19). This is expected, because the existence of the fractional spin is connected to the Chern - Simons piece which is a topological term. The spin of the vortices of the model (20) with quartic self - interaction and an external magnetic field was calculated earlier . Their method was in spirit akin to that of but they had to subtract the background contribution to get the exact spin. The result of scales with the topological number as in our case, but with opposite signature. The same comments apply to this comparison as made earlier in connection with the C - S - H model.
To conclude, we found that the usual method of defining the spin as the static limit of the physical angular momentum yields contradictory results when applied to compute the spin of the solitons of the Chern - Simons (C-S) coupled O(3) nonlinear sigma model. In this connection we have observed that a consistent result is obtained when we apply our general formalism for computing the spin in the C-S theories , which exploits the constraints of the theory. Here the canonical part of the physical angular momentum is abstracted by subtracting the canonical (Noether) angular momentum from the angular momentum obtained from Schwinger’s E - M tensor . The difference was found to be nonzero for singular configurations. In particular for C-S vortices this difference was shown to be independent of the origin of the coordinate system. Consequently we interpret it as the intrinsic spin of the vortices. The formula for the spin comes out to be model independent and contrary to other approaches where detailed field configurations are necessary, only the asymptotic form of the gauge field is required for its evaluation.
The spin of the topological and nontopological vortices of the C-S-H model was reviewed by the general formalism mentioned above. The spin of both types of vortices of the model comes out with the same sign. We also find that the sign of the spin of the topological vortices is the same as that of the elementary excitations of the model . This is a satisfactory result because the spin-statistics connection is then respected with the usual Aharonov - Bohm phase. Notably, in an opposite sign was found so that a new interaction was required to properly account for this phase .
Our formalism is directly applicable to the relativistic theories, but the Chern - Simons interaction enjoys the rare distinction of being suitable to be coupled to both Poincare and Galileo symmetric models \[5 -9\]. We were thus motivated to extend our formalism to the nonrelativistic theories. Moreover, a systematic discussion of the spin in such theories is nonexistent. Although an explicit calculation exists , the connection of the method with the corresponding ones used for the relativistic models is not quite clear. Our extended formalism treated the nonrelativistic models within the same general framework used for the relativistic case. The resulting spin formula was identical with that obtained for the relativistic theories. This points to the topological origin of the C-S term, responsible for the induction of the fractional spin, in either relativistic or nonrelativistic models.
Acknowledgements
One of the authors (PM) thanks Professor C.K.Majumdar, Director of the S.N.Bose National Centre for Basic Sciences, for allowing him to use the institute facilities.
# On the possibility of optimal investment
## I Introduction
Non-equilibrium statistical mechanics, especially the theory of stochastic processes, has recently found wide applicability in economics. The first area, intensively studied in the last several years, is the phenomenology of the signal (price, production, and other economic variables) measured on the economic system . Scaling concepts proved to be a very useful tool for such analysis.
The second area concerns optimization. In a competitive economy, agents should maximize their survival probability by balancing several requirements, often mutually exclusive, like profit and risk . The third area comprises the creation of models which should grasp particular features of the behavior of real economies, like price fluctuations .
We focus here on an aspect of optimization, discussed recently by Marsili, Maslov and Zhang . In a simplified version of the economy, there are two possibilities where to put the cash: to buy either a risky asset (we shall call it stock, but it can be any kind of asset) or a riskless asset (a deposit in a bank). In the latter case we are sure to gain a fixed amount each year, according to the interest rate. On the contrary, putting the money entirely into the stock is risky, but the gain may be larger (sometimes quite substantially). We may imagine that increasing our degrees of freedom by putting a specified portion of our capital into the stock and the rest into the bank may lead to increased growth of our wealth. This way was first studied by Kelly and followers and intensively re-investigated recently .
The point of Kelly’s approach is that if we suppose that the stock price performs a multiplicative process , the quantity to maximize is not the average value of the capital, but the typical value, which may be substantially different if the process is dominated by rare big events. It was found that given the probability distribution of the stock price changes, there is a unique optimal value of the fraction of the investor’s capital put into the stock.
The purpose of the present work is to investigate the practical applicability of the strategy suggested in . Let us first briefly summarize this approach. We suppose that the price $`p_t`$ of the stock changes from time $`t`$ to $`t+1`$ according to a simple multiplicative process
$$p_{t+1}=p_t\mathrm{e}^{\eta _t}$$
(1)
where $`\eta _t`$ for different $`t`$ are independent and identically distributed random variables. The angle brackets $`<>`$ will denote the average over these variables.
We denote by $`W_t`$ the total capital of the investor at the moment $`t`$. The fraction $`r`$ of the capital is stored in stock and the rest is deposited in a bank. We will call the number $`r`$ the investment ratio. The interest rate provided by the bank is supposed to be fixed and equal to $`\rho `$ per one time unit. The strategy of the investor consists in keeping the investment ratio constant. It means that he/she sells a certain amount of stock every time the stock price rises and buys when the price goes down.
If we suppose that the investor updates its portfolio (i. e. buys or sells some stock in order to keep the investment ratio constant) at each time step, then starting from the capital $`W_0`$, after $`N`$ time steps the investor owns
$$W_N=\underset{t=0}{\overset{N-1}{\prod }}\left(1+\rho +r(\mathrm{e}^{\eta _t}-1-\rho )\right)W_0.$$
(2)
The formula can be simply generalized to the situation when there is a non-zero transaction cost equal to $`\gamma `$ (see also ) and the update of the portfolio is done each $`M`$ time steps. We assume for simplicity that $`N`$ is a multiple of $`M`$.
$$W_N=\underset{t=0}{\overset{N/M-1}{\prod }}\frac{(1+\rho )^M+r\left(\mathrm{e}^{\overline{\eta }_{Mt}}(1+G)-(1+\rho )^M\right)}{1+rG}W_0$$
(3)
where we denoted $`\overline{\eta }_{Mt}=\sum _{i=Mt}^{Mt+M-1}\eta _i`$ and $`G=\gamma \,\mathrm{sign}\left(M\mathrm{ln}(1+\rho )-\overline{\eta }_{Mt}\right).`$
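For concreteness, Eqs. (2) and (3) are easy to evaluate numerically. The sketch below is ours and follows our reconstructed reading of Eq. (3), so the exact sign placement inside the product carries that caveat; the function name is hypothetical.

```python
import numpy as np

def wealth(eta, r, rho, gamma=0.0, M=1, W0=1.0):
    """Capital after keeping a constant investment ratio r, Eqs. (2)-(3).
    eta: log price increments; rho: riskless rate per step; gamma: transaction
    cost; M: steps between portfolio updates (gamma=0, M=1 reduces to Eq. 2)."""
    eta = np.asarray(eta, dtype=float)
    eta_bar = eta[: len(eta) // M * M].reshape(-1, M).sum(axis=1)
    G = gamma * np.sign(M * np.log(1 + rho) - eta_bar)
    factors = ((1 + rho) ** M
               + r * (np.exp(eta_bar) * (1 + G) - (1 + rho) ** M)) / (1 + r * G)
    return W0 * np.prod(factors)
```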
We can see that like the stock price itself, the capital performs a multiplicative process.
$$W_{t+1}=e_t(r)W_t$$
(4)
where the random variables $`e_t(r)`$ depend on the investment ratio as a parameter.
For $`N`$ sufficiently large the typical growth of the capital $`(W_{t+1}/W_t)_{typical}`$ is not equal to the mean $`<e(r)>`$ as one would naively expect, but is given by the median , which in this case gives
$$g(r)=\mathrm{log}((W_{t+1}/W_t)_{typical})=<\mathrm{log}e(r)>.$$
(5)
Therefore we look for the maximum of $`g`$ as a function of $`r`$, which in the simplest case without transaction costs leads to the equation
$$\left<\frac{\mathrm{e}^\eta -1-\rho }{1+\rho +r_{\mathrm{opt}}(\mathrm{e}^\eta -1-\rho )}\right>=0.$$
(6)
for the optimum strategy $`r_{\mathrm{opt}}`$. If the solution falls outside the interval $`[0,1]`$, one of the boundary points is the true optimum, based on the following conditions: if $`g^{\prime }(0)<0`$ the optimum is $`r_{\mathrm{opt}}=0`$; if $`g^{\prime }(1)>0`$ the optimum is $`r_{\mathrm{opt}}=1`$.
If $`\eta `$ is a random variable with probability density
$$P(\eta )=\frac{1}{2}\left(\delta (\eta -m-d)+\delta (\eta -m+d)\right)$$
(7)
the solution of (6) is straightforward:
$$r_{\mathrm{opt}}=\frac{1}{2}\left(\frac{1+\rho }{1+\rho -\mathrm{e}^{m+d}}+\frac{1+\rho }{1+\rho -\mathrm{e}^{m-d}}\right).$$
(8)
In more complicated cases we need to solve equation (6) numerically. However, for small mean and variance of $`\eta `$ approximate analytical formulae are fairly accurate . We found that an equally good approximation is obtained if we set $`m=<\eta >`$ and $`d=\sqrt{<\eta ^2>-<\eta >^2}`$ in Eq. (8).
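In practice, Eq. (6) is a one-dimensional root-finding problem, since $`g^{\prime }(r)`$ is monotonically decreasing in $`r`$. The sketch below is ours (the function names are illustrative) and compares the numerical root with the binary approximation of Eq. (8).

```python
import numpy as np
from scipy.optimize import brentq

def r_optimal(eta, rho):
    """Solve Eq. (6): root of g'(r) on [0, 1], with the boundary rules
    r_opt = 0 if g'(0) <= 0 and r_opt = 1 if g'(1) >= 0."""
    e = np.exp(np.asarray(eta, dtype=float)) - 1 - rho
    gp = lambda r: np.mean(e / (1 + rho + r * e))
    if gp(0.0) <= 0:
        return 0.0
    if gp(1.0) >= 0:
        return 1.0
    return brentq(gp, 0.0, 1.0)

def r_binary(m, d, rho):
    """Closed form of Eq. (8), used with m = <eta> and d = std(eta)."""
    return 0.5 * ((1 + rho) / (1 + rho - np.exp(m + d))
                  + (1 + rho) / (1 + rho - np.exp(m - d)))
```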
In the next section we investigate the method with real data. Section III shows the influence of the transaction costs. In Sec. IV a generalization of the method for the case of time-correlated price is shown. Finally, in Sec. V we discuss the obtained results.
## II Two-time optimal strategies
In the previous section we supposed the following procedure: the investor takes the stock price data and extracts some statistical information from them. This information is then plugged into the theoretical machinery, which returns the suggested number $`r`$. However, we may also follow a different path, which should be in principle equivalent, but in practice it looks different.
Namely, suppose we observe the past evolution of the stock price during some period starting at time $`t_1`$ and finishing at time $`t_2`$ (most probably it will be the present moment, but not necessarily). Then, we imagine that at time $`t_1`$ an investor started with capital $`W_{t_1}=1`$ and during that period followed the strategy determined by a certain value of $`r`$. We compute his/her capital $`W_{t_2}(r)`$ at the final time and find the maximum of the final capital $`W_{t_2}(r)`$ with respect to $`r`$. We call the value $`r_{\mathrm{opt}}`$ maximizing the final capital the two-time optimal strategy. The optimal strategy in the past can then be used as the predicted optimal strategy for the future.
The capital at time $`t_2`$ is again
$$W_{t_2}(r)=\underset{t=t_1}{\overset{t_2-1}{\prod }}\left(1+\rho +r(\mathrm{e}^{\eta _t}-1-\rho )\right)$$
(9)
and its maximization with respect to $`r`$ leads to equation
$$g^{\prime }(r_{\mathrm{opt}})=\underset{t=t_1}{\overset{t_2-1}{\sum }}\frac{\mathrm{e}^{\eta _t}-1-\rho }{1+\rho +r_{\mathrm{opt}}(\mathrm{e}^{\eta _t}-1-\rho )}=0$$
(10)
which gives the optimal investment ratio $`r_{\mathrm{opt}}(t_1,t_2)`$ as a function of initial time $`t_1`$ and final time $`t_2`$. Note that it is an analog of the equation (6) but we deal with time averages here, not with sample averages as before. This is also another justification of the procedure of maximizing $`<\mathrm{log}(W_{t+1}/W_t)>`$ instead of $`<W_{t+1}/W_t>`$.
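Numerically, the two-time strategy is the same root-finding problem applied to the realized returns between $`t_1`$ and $`t_2`$. The helper below (ours) reuses `r_optimal` from the previous snippet and therefore clips $`r`$ to $`[0,1]`$, i.e. it corresponds to Fig. 2(a) rather than to the leveraged case of Fig. 2(c).

```python
def r_two_time(prices, t1, t2, rho):
    """Two-time optimal strategy r_opt(t1, t2) from a price series,
    i.e. the solution of the time-averaged condition (10)."""
    eta = np.diff(np.log(np.asarray(prices[t1 : t2 + 1], dtype=float)))
    return r_optimal(eta, rho)
```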
For comparison with reality we took the daily values of the New York Stock Exchange (NYSE) composite index. The time is measured in working days. The period studied started on 2 January 1990 ($`t=0`$) and finished on 31 December 1998 ($`t=2181`$). The time evolution of the index $`x(t)`$ is shown in Fig. 1. The values of $`\eta `$ are determined by $`\mathrm{exp}(\eta _t)=x(t+1)/x(t)`$.
The data of the NYSE composite index were analyzed by calculating the two-time optimal strategies $`r_{\mathrm{opt}}(t_1,t_2)`$. As a typical example of the behavior observed, for initial time $`t_1=300`$ we vary the final time $`t_2`$ up to 2180. We used the interest rate 6.5% per 250 days (a realistic value for approximately 1 year). In this case we neglect the transaction costs, $`\gamma =0`$. The influence of non-zero transaction costs will be investigated in Sec. III. The results are in Fig. 2(a). We also investigated the possibility that the investment ratio goes beyond the limits 0 and 1, which means that the investor borrows either money or stock. We imposed the interest rate 8% on the loans and calculated again the optimal $`r`$. The results are in Fig. 2(c). We can see several far-reaching excursions above 1 and some also below 0, which indicates that quite often the optimal strategy requires borrowing a considerable amount of money or stock.
An important conclusion may be drawn from the results obtained: the optimal strategy $`r_{\mathrm{opt}}(t_1,t_2)`$ as a function of the final time $`t_2`$ does not follow any smooth trajectory. On the contrary, the dependence is extremely noisy, as can be seen very well in the Fig. 2. Moreover, the strategy is very sensitive to initial conditions. If we compare the strategy $`r_{\mathrm{opt}}(t_1,t_2)`$ and $`r_{\mathrm{opt}}(t_1+\mathrm{\Delta }t,t_2)`$ for slightly different initial time, big differences are found in regions, where the strategy is non-trivial $`(0<r_{\mathrm{opt}}<1)`$. In Fig. 3 we show for $`\mathrm{\Delta }t=1`$ the average difference in optimal strategy
$$\mathrm{\Delta }r_{\mathrm{opt}}(t)=|r_{\mathrm{opt}}(t_1,t_1+t)-r_{\mathrm{opt}}(t_1+1,t_1+t)|$$
(11)
where the average is taken over all initial times $`t_1`$ with the constraint, that we take into account only the points where both optimal strategies $`r_{\mathrm{opt}}(t_1,t_1+t)`$ and $`r_{\mathrm{opt}}(t_1+1,t_1+t)`$ are non-trivial.
Due to poor statistics, the data are not very smooth. We can also observe two apparent branches of the dependence, which is caused by superimposing data from different portions of the time evolution of the index. However, despite the poor quality of the data, we can conclude that even after a period as long as 1000 days (approximately 4 years) a difference of 1 day in the starting time leads to a difference in optimal strategy as large as about 0.2. This finding challenges the reliability of the investment strategy based on finding the optimal investment ratio $`r`$.
Moreover, we can see that if loans are prohibited, there are long periods where the optimal strategy is trivial ($`r_{\mathrm{opt}}=0`$ or $`r_{\mathrm{opt}}=1`$). We investigated the whole history of the NYSE composite index shown in Fig. 1 and determined, for which pairs $`(t_1,t_2)`$ the optimal strategy $`r_{\mathrm{opt}}(t_1,t_2)`$ is non-trivial. In the Fig. 4 each dot represents such pair. (In fact, not every point was checked: the grid $`5\times 5`$ was used, i. e. only such times which are multiples of 5 were investigated.)
We can observe large empty regions, which indicate absence of a non-trivial investment. In order to understand the origin of such empty spaces, let us consider a simple model. Suppose we have the random variable distributed according to (7), and $`\rho =0`$. Then the conditions for the existence of non-trivial optimal strategy between $`t_1=0`$ and $`t_2=N`$ are
$$g^{\prime }(0)=\underset{t=0}{\overset{N-1}{\sum }}(\mathrm{e}^{\eta _t}-1)>0$$
(12)
and
$$g^{\prime }(1)=\underset{t=0}{\overset{N-1}{\sum }}(1-\mathrm{e}^{-\eta _t})<0.$$
(13)
Let us compute the probability $`p_{\mathrm{nt}}`$ that both of these conditions are satisfied. We have
$$g^{\prime }(0)=N(\mathrm{e}^m\mathrm{cosh}d-1)+\mathrm{e}^m\mathrm{sinh}d\underset{t=0}{\overset{N-1}{\sum }}z_t$$
(14)
and
$$g^{\prime }(1)=N(1-\mathrm{e}^{-m}\mathrm{cosh}d)+\mathrm{e}^{-m}\mathrm{sinh}d\underset{t=0}{\overset{N-1}{\sum }}z_t$$
(15)
where $`z`$’s can have values +1 or -1 with probability 1/2. The sum $`_{t=0}^{N1}z_t`$ has binomial distribution, and for large $`N`$ we can write
$$p_{\mathrm{nt}}=\int _{-\sqrt{N}(\mathrm{coth}d-\mathrm{e}^{-m}/\mathrm{sinh}d)}^{\sqrt{N}(\mathrm{coth}d-\mathrm{e}^{m}/\mathrm{sinh}d)}\frac{\mathrm{d}\zeta }{\sqrt{2\pi }}\mathrm{exp}\left(-\frac{\zeta ^2}{2}\right).$$
(16)
We can see immediately that $`p_{\mathrm{nt}}`$ has a value close to 1 only if the number of time steps is at least
$$N\gtrsim d^{-2}.$$
(17)
For the data in Fig. 1 we found $`d\simeq 0.01`$, which means $`N\simeq 10000`$ days, or 40 years. This is thus an estimate of how long we need to observe the stock price before a reliable strategy can be fixed. However, during such a long period the market changes substantially many times. That is why no simple strategy of the kind investigated here can lead to sure profit.
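Expression (16) is a Gaussian integral and can be evaluated with the error function. The helper below is ours and uses our reconstructed integration limits, so the exact boundaries carry that caveat.

```python
from math import cosh, erf, exp, sinh, sqrt

def p_nontrivial(m, d, N):
    """Probability (16) that g'(0) > 0 and g'(1) < 0 both hold after N steps
    of the binary model (7), in the normal approximation."""
    lo = -sqrt(N) * (cosh(d) - exp(-m)) / sinh(d)
    hi = sqrt(N) * (cosh(d) - exp(m)) / sinh(d)
    Phi = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return max(Phi(hi) - Phi(lo), 0.0)

# With d = 0.01 and a small positive drift, p_nontrivial approaches 1 only
# once N is of the order d**-2, i.e. about 10^4 days, as estimated above.
```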
## III Transaction costs
We investigated the influence of the transaction costs $`\gamma `$ and of the time lag $`M`$ between transactions. We found nearly no dependence on $`M`$, but the dependence on $`\gamma `$ is rather strong. It can be qualitatively seen in Fig. 2(b). When we compare the optimal strategy for $`\gamma =0`$ and $`\gamma =0.005`$, we can see that already transaction costs of $`0.5\%`$ decrease substantially the fraction of time when the strategy is non-trivial. We investigated the dependence of the fraction $`f_{\mathrm{nontrivial}}`$ of time pairs $`(t_1,t_2)`$ between which a nontrivial strategy exists on the transaction costs. We have found that it decreases with $`\gamma `$ and reaches negligible values for $`\gamma \gtrsim 0.006`$. This behavior is shown in Fig. 5.
The explanation of this behavior lies in the fact that the transaction costs introduce some friction in the market, which means that large changes of the investment ratio are suppressed. Because the investment ratio is mostly 0 or 1 even for $`\gamma =0`$, this implies that changing $`r`$ from 0 or 1 to a non-trivial value is even harder for $`\gamma >0`$, and a non-trivial investment becomes nearly impossible for large transaction costs.
## IV Investment in presence of correlations
In order to improve the strategy based only on the knowledge of the distribution of $`\eta `$, we would like to investigate a possible profit taken from the short-time correlations.
Imagine again the simplest case, when $`\eta `$ can have only two values, $`\eta ^+=m+d`$ and $`\eta ^-=m-d`$. However, now $`\eta _t`$ and $`\eta _{t-1}`$ may be correlated and we suppose the following probability distribution: $`P(\eta _{t-1},\eta _t)=1/4+c`$ if $`\eta _{t-1}=\eta _t`$ and $`P(\eta _{t-1},\eta _t)=1/4-c`$ if $`\eta _{t-1}\ne \eta _t`$. The parameter $`c`$ quantifies the short-time correlations.
At time $`t`$ the strategy $`r(\eta _{t-1})`$ should depend on the value of $`\eta `$ in the previous step. In our simplified situation we have only two possibilities, $`r^+=r(\eta ^+)`$ and $`r^-=r(\eta ^-)`$. The problem then reduces to the maximization of the typical gain
$$g(r^+,r^-)=<\mathrm{ln}(1+\rho +r(\eta _{t-1})(\mathrm{e}^{\eta _t}-1-\rho ))>$$
(18)
which leads to decoupled equations for $`r_{\mathrm{opt}}^+`$ and $`r_{\mathrm{opt}}^-`$
$$\frac{\partial g(r_{\mathrm{opt}}^{+},r_{\mathrm{opt}}^{-})}{\partial r^{+}}=\left(\frac{1}{4}+c\right)\frac{\mathrm{e}^{m+d}-1-\rho }{1+\rho +r_{\mathrm{opt}}^{+}(\mathrm{e}^{m+d}-1-\rho )}+\left(\frac{1}{4}-c\right)\frac{\mathrm{e}^{m-d}-1-\rho }{1+\rho +r_{\mathrm{opt}}^{+}(\mathrm{e}^{m-d}-1-\rho )}=0$$
(19)
$$\frac{\partial g(r_{\mathrm{opt}}^{+},r_{\mathrm{opt}}^{-})}{\partial r^{-}}=\left(\frac{1}{4}+c\right)\frac{\mathrm{e}^{m-d}-1-\rho }{1+\rho +r_{\mathrm{opt}}^{-}(\mathrm{e}^{m-d}-1-\rho )}+\left(\frac{1}{4}-c\right)\frac{\mathrm{e}^{m+d}-1-\rho }{1+\rho +r_{\mathrm{opt}}^{-}(\mathrm{e}^{m+d}-1-\rho )}=0.$$
(20)
The solution is a straightforward generalization of Eq. (7).
The above procedure works equally well even in the case of more complicated time correlations. For example, we may imagine that the price evolution is positively correlated over two time steps, i.e. $`\mathrm{Prob}(\eta _{t-2}=\eta _t)>1/2`$, while $`\mathrm{Prob}(\eta _{t-1}=\eta _t)=1/2`$. Generally, we have some joint probability distribution for the past and present, $`P(\eta ^<,\eta )`$, where we denote $`\eta ^<=[\dots ,\eta _{t-3},\eta _{t-2},\eta _{t-1}]`$ and $`\eta =\eta _t`$. The typical gain becomes a functional depending on the strategy $`r(\eta ^<)`$, which itself depends on the past price history.
However, maximizing this functional by looking for its stationary point leads to a very simple set of decoupled equations for the strategies
$$\int d\eta \,P(\eta ^<,\eta )\frac{\mathrm{e}^\eta -1-\rho }{1+\rho +r_{\mathrm{opt}}(\eta ^<)(\mathrm{e}^\eta -1-\rho )}=0.$$
(23)
In the simplest case, assuming that the strategy depends only on the sign of $`\eta `$ in the previous step, we performed the analysis on the NYSE composite index shown in Fig. 1 and found the optimal pairs $`[r_{\mathrm{opt}}^+,r_{\mathrm{opt}}^-]`$. Contrary to the case when correlations were not taken into account, no non-trivial investment strategy was found. So, instead of improving the method of Ref. , taking correlations into account further discredits its applicability.
## V Conclusions
We investigated the method of finding the optimal investment strategy based on the Kelly criterion. We checked the method on real data, the time evolution of the New York Stock Exchange composite index. We found that it is rarely possible to find an optimal strategy which would be stable at least for a short period of time. There are several reasons which discredit the method based on the Kelly criterion. First, the optimal investment ratio fluctuates very rapidly in time. Second, it depends strongly on the time at which the investment strategy starts to be applied: a difference of 1 day in the starting moment makes a substantial difference even after 1000 days of investment. Third, the fraction of days for which a non-trivial investment strategy is possible is very low. This fraction also decreases with transaction costs and reaches negligible values for transaction costs of about $`0.6\%`$. Taking into account possible correlations in the time evolution of the index makes the situation even less favorable, further reducing the fraction of times when a non-trivial investment is possible.
We conclude that a straightforward application of the investment strategy based on the Kelly criterion would be very difficult in real conditions. The question remains whether there are other optimization schemes which would lead to more reliable investment strategies. It would also be interesting to apply the approach used in this paper to check the reliability of option-pricing strategies.
###### Acknowledgements.
I am obliged to Y.-C. Zhang, M. Serva, and M. Kotrla for many discussions and stimulating comments. I wish to thank the Institut de Physique Théorique, Université de Fribourg, Switzerland, for financial support.
# Multiple Andreev reflections and enhanced shot noise in diffusive SNS junctions
## Abstract
We study the dc conductance and current fluctuations in diffusive voltage biased SNS junctions with a tunnel barrier inside the mesoscopic normal region. We find that at subgap voltages, $`eV<2\mathrm{\Delta }/n`$, the current associated with the chain of $`n`$ Andreev reflections is mapped onto the quasiparticle flow through a structure of $`n+1`$ voltage biased barriers connected by diffusive conductors. As a result, the current-voltage characteristic of a long SNINS structure obeys Ohm’s law, in spite of the complex multiparticle transport process. At the same time, nonequilibrium heating of subgap electrons produces giant shot noise with pronounced subharmonic gap structure which corresponds to stepwise growth of the effective transferred charge. At $`eV\to 0`$, the shot noise approaches the magnitude of the Johnson-Nyquist noise with the effective temperature $`T^{*}=\mathrm{\Delta }/3`$, and the effective charge increases as $`(e/3)(1+2\mathrm{\Delta }/eV)`$, with the universal “one third suppression” factor. We analyse the role of inelastic scattering and present a criterion of strong nonequilibrium.
Current transport through mesoscopic resistive elements (tunnel barriers and disordered normal conductors) attached to superconductors is a subject of permanent interest and intensive experimental studies. The investigations of superconducting junctions are primarily focused on the complex nonlinear behavior of current-voltage characteristics, which exhibit subharmonic gap structure, zero bias anomaly, etc. In this Letter, we will discuss an elementary mesoscopic superconducting structure where the current shot noise manifests anomalous transport properties while the average current shows perfect Ohmic behavior.
The circuit under discussion consists of a low-transmission tunnel junction (or point contact) connected to voltage biased superconducting reservoirs via diffusive normal leads (Fig. 1a), like e.g. a short-gate Josephson field effect transistor . In a NIN structure connected to normal reservoirs, the average current obeys Ohm’s law and the current fluctuations show full Poissonian shot noise, $`S=2eI`$, if the tunnel resistance $`R`$ dominates over the resistance of the normal leads . The effect of the superconducting reservoirs, which has recently attracted much attention, is to modify the density of states and to create a gap $`E_g`$ in the electron spectrum of the normal leads . This proximity effect provides dc Josephson current flow and, simultaneously, blocks the single-particle tunneling at applied voltages $`eV<2E_g`$. At these subgap voltages, the current is due to multiparticle tunneling (MPT) . The MPT regime is manifested by the stepwise decrease of the current with decreasing applied voltage (subharmonic gap structure at $`eV=2E_g/n`$), which provides exponential decay of the current . At the same time, the current shot noise undergoes enhancement due to the growth of the elementary tunneling charge $`ne`$ . MPT has been extensively studied theoretically in quantum point contacts , and in short diffusive constrictions with a wide proximity gap of the order of the energy gap $`\mathrm{\Delta }`$ in the superconducting reservoirs. Both the subharmonic gap structure and the enhanced shot noise have been observed experimentally .
A distinctly different transport regime occurs in long diffusive SNS junctions with a small proximity gap of the order of the Thouless energy, $`E_g\sim E_{Th}\ll \mathrm{\Delta }`$. In this case, the Josephson current is suppressed, and single-particle tunneling dominates at virtually all applied voltages. However, when the inelastic mean free path exceeds the distance between the superconducting reservoirs, current transport at subgap voltages $`eV<2\mathrm{\Delta }`$ is still non-trivial: the tunneling electrons must undergo multiple Andreev reflections (MAR) before they may enter the reservoirs . We will show that in long SNS structures with opaque tunnel barriers, the current-voltage characteristic is perfectly linear and structureless, while the current shot noise is greatly enhanced and reveals subharmonic gap structure (kinks) at voltages $`eV=2\mathrm{\Delta }/n`$ (incoherent MAR regime).
The origin of the linear current-voltage dependence and the significant deviation of tunnel shot noise from the Poisson law can be qualitatively explained in the following way. In order to overcome the energy gap at low voltages ($`eV\ll 2\mathrm{\Delta }`$) the electron has to undergo a large number ($`M\approx 2\mathrm{\Delta }/eV`$) of Andreev reflections, gaining the energy $`eV`$ in each passage of the tunnel barrier (Fig. 1b). Thus, MAR tunneling in real space is associated with probability current flow along the energy axis through a structure of $`M+1`$ tunnel barriers with the total effective resistance $`R_M=(M+1)R`$. Since only electrons incoming within the energy layer $`eV`$ below the gap $`2\mathrm{\Delta }`$ participate in MAR transport, the total probability current is $`I_p=V/R_M`$. However, each pair of consecutive Andreev reflections transfers the charge $`2e`$ through the junction, and the real current $`I`$ is therefore $`M+1`$ times greater than the probability current: $`I=(M+1)I_p=V/R`$. The current flow in energy space generates shot noise $`S_p`$ which is related to the probability current as $`S_p=(2/3)eI_p`$ in the limit $`M\to \infty `$. The arguments for the 1/3 suppression of the Poissonian noise in multibarrier tunnel structures are similar to the ones presented in Ref. . Since the noise spectral density is given by the current-current correlation function, the real shot noise $`S`$ is $`(M+1)^2`$ times greater than $`S_p`$, i.e. approaches a constant value $`S=(4/R)(\mathrm{\Delta }/3)`$. This coincides with the exact result, Eqs. (11), (12) below, in the limit $`eV\to 0`$.
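This estimate can be made concrete with a few lines of arithmetic. The sketch below is our own illustration, in units $`\mathrm{\Delta }=e=R=1`$; it shows the Ohmic current and the saturation of the noise at $`4\mathrm{\Delta }/3R`$ as the voltage is lowered.

```python
# Our own numerical illustration of the qualitative resistor-chain argument:
# M+1 barriers in series give I = V/R, while the noise inherits the (M+1)^2
# enhancement together with the 1/3 suppression factor. Units: Delta = e = R = 1.
import numpy as np

Delta, R, e = 1.0, 1.0, 1.0

for eV in [0.5, 0.1, 0.01, 0.001]:
    M = int(np.floor(2 * Delta / eV))   # number of Andreev reflections
    I_p = (eV / e) / ((M + 1) * R)      # probability current through M+1 barriers
    I = (M + 1) * I_p                   # real current: charge transferred M+1 times
    S_p = (2.0 / 3.0) * e * I_p         # 1/3-suppressed Poisson noise (large-M limit)
    S = (M + 1) ** 2 * S_p              # current-current correlation scales as (M+1)^2
    print(f"eV={eV:6.3f}  I*R*e/V={I * R * e / eV:.3f}  S={S:.4f}"
          f"  -> 4*Delta/(3R)={4 * Delta / (3 * R):.4f}")
```

The printed current ratio is 1 for every voltage, and $`S`$ converges to $`4/3`$ from above as $`eV\to 0`$, in agreement with the limit quoted above.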
For a quantitative treatment of incoherent MAR, we consider the diffusive kinetic equation for the 4$`\times `$4 supermatrix Keldysh-Green’s function $`\stackrel{ˇ}{G}(x,t_1,t_2)`$
$$i\nabla \stackrel{ˇ}{J}=[\stackrel{ˇ}{H},\stackrel{ˇ}{G}],\stackrel{ˇ}{G}^2=\stackrel{ˇ}{1},\stackrel{ˇ}{J}=𝒟\stackrel{ˇ}{G}\nabla \stackrel{ˇ}{G},$$
(1)
where $`𝒟`$ is the diffusion constant. We apply Eqs. (1) to the electrons in the normal leads and match the Green’s functions at the tunnel barrier ($`x=\pm 0`$) of low transparency ($`\mathrm{\Gamma }\ll 1`$) using the boundary condition
$$\stackrel{ˇ}{J}(-0)=\stackrel{ˇ}{J}(+0)=(j/2)[\stackrel{ˇ}{G}(-0),\stackrel{ˇ}{G}(+0)],j=(1/2)v_F\mathrm{\Gamma }.$$
(2)
Equation (2) represents the conservation law for the supermatrix current $`\stackrel{ˇ}{J}`$ and connects it with the voltage-induced imbalance of elementary probability currents $`j`$ of tunneling electrons. Since we have assumed that the barrier resistance $`R`$ dominates over the resistance $`R_d`$ of the leads ($`R\gg R_d`$), the voltage drop at the barrier determines the time-dependent phase difference between the reservoirs by virtue of the Josephson relation. A gauge transformation allows us to remove the time dependence in the Hamiltonian in Eq. (1) and cancel the electric potential. Simultaneously, a periodic time dependence appears in the boundary condition of Eq. (2), which implies that the Green’s function $`\stackrel{ˇ}{G}(x,t_1,t_2)`$ consists of a set of harmonics $`\stackrel{ˇ}{G}(x,E_n,E_m)`$, $`E_n=E+neV`$. The problem is solvable in the limit of weak Josephson coupling, $`\mathrm{\Gamma }\ll 1`$ and/or $`E_{Th}\ll \mathrm{\Delta }`$, when off-diagonal Green’s function harmonics can be neglected and the diagonal harmonics ($`n=m`$) satisfy the static equations, similar to the case of zero applied voltage.
The dc current $`I`$ can be expressed, to first order in $`\mathrm{\Gamma }`$, through the quasiparticle distribution function $`f(E)`$ defined by the following representation of the Keldysh function: $`\widehat{g}^K=\widehat{g}^R\widehat{f}-\widehat{f}\widehat{g}^A`$, $`\widehat{f}=\widehat{1}(1-f)+\sigma _zf_z`$, where $`\widehat{g}^{R,A}`$ are the retarded (advanced) Green’s functions, giving
$$I=\frac{1}{2eR}\int _{-\infty }^{+\infty }dE\,N(E)N(E+eV)[f(E)-f(E+eV)].$$
(3)
The density of states, $`N(E)=(1/4)\text{Tr}\sigma _z(\widehat{g}^R-\widehat{g}^A)`$, is calculated for a nontransparent barrier and normalized by the normal-electron density of states; all values are taken at the interface, $`x=0`$. Eq. (3) has the same form as the conventional equation of the tunnel model , with the nonequilibrium distribution function $`f(E,x)`$ obeying a kinetic equation following from Eq. (1),
$$\frac{\partial }{\partial x}D(E,x)\frac{\partial f}{\partial x}=\frac{N(E,x)}{\tau _\epsilon }[f(E,x)-f_0(E)],$$
(4)
$`D(E,x)=(𝒟/4)\text{Tr}(1-\widehat{g}^R\widehat{g}^A)`$. The inelastic scattering term in Eq. (4), describing relaxation to equilibrium population $`f_0(E)=2n_F(E)`$, is written for simplicity in the relaxation time approximation. The boundary condition for the function $`f`$ at the tunnel interface obtained from Eq. (2) reads
$$D(E,x)(\partial f/\partial x)|_0=Q^+(E)-Q^{-}(E),$$
(5)
$$Q^\pm (E)=\pm (j/2)N(E)N(E\pm eV)[f(E)-f(E\pm eV)].$$
(6)
The quantities $`Q^\pm `$, which also determine the current in Eq. (3), can be interpreted as spectral quasiparticle currents, i.e. probability currents flowing upwards in energy space (Fig. 1): the current $`Q^+`$ exits from energy $`E`$ towards energy $`E+eV`$, while the current $`Q^{-}`$ arrives at energy $`E`$ from energy $`E-eV`$. Along this line of reasoning, the boundary condition, Eq. (5), represents the detailed balance between the spectral quasiparticle current and the leakage current \[the term on the left hand side of Eq. (5)\] due to either inelastic relaxation or escape into the reservoirs.
Let us consider the limit of infinitely large inelastic relaxation time . In this case, the leakage current is spatially homogeneous according to Eq. (4). Within the energy gap, $`|E|<\mathrm{\Delta }`$, the diffusion coefficient $`D(E,x)`$ turns to zero in the superconducting reservoirs, and therefore the leakage current is blocked, indicating complete Andreev reflection. Thus, the spectral current $`Q^\pm `$ is conserved within the superconducting gap: $`Q^+=Q^{-}`$. This equation provides recurrence relations for the nonequilibrium distribution functions $`f(E_n)`$ in different side bands associated with MAR. The boundary conditions are established by the requirement of equilibrium outside the gap, $`f(E_n)=2n_F(E_n)`$, $`|E_n|>\mathrm{\Delta }`$. Indeed, the reservoirs maintain the equilibrium at the NS boundaries, $`f(E,\pm d)=2n_F(E)`$; on the other hand, the gradient of the distribution function given by Eq. (5) is small, $`R_d/R\ll 1`$, and may be neglected. We note that the latter condition is equivalent to a small ratio between the diffusion time through the normal lead, $`d^2/𝒟=E_{Th}^{-1}`$, and the inverse tunneling rate, $`(\mathrm{\Gamma }v_F/d)^{-1}`$, i.e. $`\mathrm{\Gamma }v_F/d\ll E_{Th}\ll \mathrm{\Delta }`$.
The physical picture of MAR in diffusive SNINS systems is illustrated in Fig. 1b. The equilibrium electron-like quasiparticles incoming from the left electrode with energy $`-\mathrm{\Delta }-eV<E_0<-\mathrm{\Delta }`$ create a probability current $`Q^{+}(E_0)=jn_F(E_0)N(E_0)N(E_0+eV)`$ across the tunnel junction into the subgap region. Due to low transmissivity of the barrier and fast electron diffusion through the normal leads, the particle undergoes many Andreev reflections from the superconductor before the next tunneling event will occur and, therefore, the electron and hole states at energy $`E_1=E_0+eV`$ are occupied with equal probability $`(1/2)f(E_1)`$ . Thus, the population $`f(E_1)`$ produces both the current of holes $`Q^{+}(E_1)=(j/2)f(E_1)N(E_1)N(E_2)`$ moving upwards along the energy axis to the next side band, and the counter current of electrons $`Q^{-}(E_1)=(j/2)f(E_1)N(E_0)N(E_1)`$ down to the initial state, determining the net probability current $`Q(E_0)=Q^{+}(E_0)-Q^{-}(E_1)`$, and so on. As a result, the electron tunneling in real space is associated with the flow of spectral current $`Q^+(E_m)=Q^{-}(E_m)=Q(E_0)`$ through $`M+1`$ tunnel barriers connected in series by a number $`M(E_0)=[(\mathrm{\Delta }-E_0)/eV]`$ of Andreev side bands ($`[x]`$ denotes the integer part of $`x`$). In this transport problem in energy space, an effective bias voltage $`(M+1)eV`$ drops between the reservoirs represented by the spectral regions outside the energy gap, $`|E|>\mathrm{\Delta }`$, and it is equally distributed among the tunnel barriers. Therefore, the distribution function has a steplike form,
$$f(E_m)=2\left[[n_F(E_{M+1})-n_F(E_0)]\frac{Z_m}{Z_{M+1}}+n_F(E_0)\right],$$
(7)
$$Z_m(E_0)=\underset{k=0}{\overset{m-1}{\sum }}N^{-1}(E_k)N^{-1}(E_{k+1}),Z_0=0.$$
(8)
The tunnel current in Eq. (3) is determined by the spectral current $`Q^+`$. At low temperature, the equilibrium spectral current at $`|E|>\mathrm{\Delta }`$ is exponentially small, and the main contribution to the total current comes from the nonequilibrium subgap region. Dividing it into pieces of length $`eV`$ and taking into account spectral current conservation, one finds from Eqs. (3), (7)
$$I(V)=\frac{1}{eR}\int _{-\mathrm{\Delta }-eV}^{-\mathrm{\Delta }}dE_0\frac{M+1}{Z_{M+1}}\left(n_F(E_0)-n_F(E_{M+1})\right).$$
(9)
Equation (9) describes the single-particle current in a tunnel junction of arbitrary length. If some MAR chain contains a side band $`E_n`$ within the proximity-induced gap $`2E_g`$, the corresponding density of states $`N(E_n)`$ is zero, and the spectral current associated with this chain is blocked. At $`eV<2E_g`$, any MAR chain has at least one side band within the gap, and the total single particle current in Eq. (9) vanishes. In the limit of a long junction, the proximity gap closes and the local density of states becomes constant, $`N(E)=1`$. In this case, the current in Eq. (9) shows Ohmic behaviour, $`I=V/R`$, with the same resistance $`R`$ as in the absence of superconducting “mirrors”.
Let us turn to the calculation of the tunnel current shot noise power $`S(V)`$. A general quantum equation for the shot noise in superconducting junctions has been derived in . Assuming the asymptotic limit of a highly resistive tunnel barrier and the long-junction approximation, we write it in the form
$$S(V)=\int _{-\infty }^{+\infty }\frac{dE}{R}(f(E)+f(E+eV)-f(E)f(E+eV)).$$
(10)
Taking into account the distribution function in Eq. (7), the noise power at zero temperature becomes
$$S(V)=\frac{2}{R}\int _{-\mathrm{\Delta }-eV}^{-\mathrm{\Delta }}\frac{dE}{3}\left(M(E)+1+\frac{2}{M(E)+1}\right).$$
(11)
At voltages $`eV>2\mathrm{\Delta }`$ this formula gives conventional Poissonian noise $`S=2eI`$. At subgap voltages, the noise power undergoes enhancement: it shows a piecewise linear voltage dependence, $`dS/dV=(2e/3R)(1+4/([2\mathrm{\Delta }/eV]+2))`$, with kinks at the subharmonics of the superconducting gap, $`eV_n=2\mathrm{\Delta }/n`$ (see Fig. 2). At zero voltage, the noise power approaches the constant value $`S(0)=(4/R)(\mathrm{\Delta }/3)`$, which corresponds to the thermal Johnson-Nyquist noise with the effective temperature $`T^{*}=\mathrm{\Delta }/3`$.
The enhancement of the shot noise power can be alternatively interpreted as an increase of the effective charge $`q(V)=S(V)/2I`$ with decreasing voltage,
$$\frac{q(V_n)}{e}=\frac{1}{3}\left(n+1+\frac{2}{n+1}\right)=1,\frac{11}{9},\frac{3}{2},\mathrm{}$$
(12)
In the limit $`eV\to 0`$ the effective charge increases as $`q(V)/e\approx (1/3)(1+2\mathrm{\Delta }/eV)`$. This result differs by a factor 1/3 from the value expected from a straightforward MAR argument which assumes the shot noise to be equal to the Poisson noise enhanced by the factor $`M`$. We stress that the 1/3 factor here results from multiple traversal of the tunnel barrier due to incoherent MAR and has nothing to do with the diffusive normal leads.
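As a cross-check, Eq. (11) can be integrated numerically. The sketch below is our own: it takes $`M(E)=[(\mathrm{\Delta }-E)/eV]`$ over the window $`-\mathrm{\Delta }-eV<E<-\mathrm{\Delta }`$, consistent with the conventions of Eq. (9), and reproduces the plateau values of Eq. (12). Units are $`\mathrm{\Delta }=e=R=1`$.

```python
# Sketch (our own): evaluate Eq. (11) numerically at zero temperature, with
# M(E) = floor((Delta - E)/eV) over -Delta-eV < E < -Delta, as in Eq. (9).
import numpy as np

Delta, R, e = 1.0, 1.0, 1.0

def S_of_V(eV, n_grid=20001):
    E = np.linspace(-Delta - eV, -Delta, n_grid)
    M = np.floor((Delta - E) / eV)
    integrand = (M + 1.0 + 2.0 / (M + 1.0)) / 3.0
    return (2.0 / R) * np.trapz(integrand, E)

for n in [1, 2, 3]:
    eV = 2 * Delta / n
    I = eV / (e * R)                  # Ohmic current, I = V/R
    q = S_of_V(eV) / (2.0 * I)        # effective charge in units of e
    print(f"eV = 2Delta/{n}:  q/e = {q:.4f}"
          f"  vs  (1/3)(n+1+2/(n+1)) = {(n + 1 + 2 / (n + 1)) / 3:.4f}")
```

For $`eV>2\mathrm{\Delta }`$ the same integral returns the Poissonian value $`q=e`$, since the window then mixes $`M=0`$ and $`M=1`$ contributions in exactly the right proportion.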
A more detailed analysis taking account of inelastic scattering shows that the current shot noise is suppressed at low voltage, when the lifetime of a quasiparticle within the normal leads becomes comparable to the inelastic relaxation time ($`\alpha \sim 1`$, see Eq. (14)). Generalized recurrences for the distribution function in long junctions ($`N(E)=1`$) then take the form
$$f(E)-f_0(E)=W_\epsilon \left[f(E+eV)+f(E-eV)-2f(E)\right].$$
(13)
The level of nonequilibrium of the subgap electrons is controlled by the parameter $`W_\epsilon =(R_d/R)(E_{Th}\tau _\epsilon /4)`$. The strong nonequilibrium state discussed above is only possible at $`W_\epsilon \gg 1`$ whereas in the opposite limit, $`W_\epsilon \ll 1`$, the normal leads may always be considered as reservoirs, and the enhanced noise disappears.
Numerical results for $`W_\epsilon =5`$ are presented in Fig. 2 by dashed curves. The rapid decrease of $`S(V)`$ at low voltage, described by the following analytical approximation,
$$S(V)=S(0)\frac{3}{\alpha }\left(\mathrm{tanh}\frac{\alpha }{2}+\frac{\alpha -\mathrm{sinh}\alpha }{\mathrm{sinh}^2\alpha }\right),\alpha =\frac{\mathrm{\Delta }}{eV\sqrt{W_\epsilon }},$$
(14)
occurs when the length of the MAR chain interrupted by inelastic scattering, $`eV\sqrt{W_\epsilon }`$, becomes less than $`2\mathrm{\Delta }`$.
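The crossover of Eq. (14) is easy to tabulate. The short sketch below is ours, using $`W_\epsilon =5`$ (the value of the dashed curves in Fig. 2) and units $`\mathrm{\Delta }=1`$; it shows $`S/S(0)\to 1`$ at moderate voltages and the $`3/\alpha `$ fall-off at low voltage.

```python
# Sketch: the inelastic suppression factor of Eq. (14), for W_eps = 5.
import numpy as np

Delta, W_eps = 1.0, 5.0

def S_over_S0(eV):
    a = Delta / (eV * np.sqrt(W_eps))
    return (3.0 / a) * (np.tanh(a / 2.0) + (a - np.sinh(a)) / np.sinh(a) ** 2)

for eV in [1.0, 0.3, 0.1, 0.03]:
    print(f"eV/Delta = {eV:5.2f}   S/S(0) = {S_over_S0(eV):.3f}")
```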
In conclusion, we have studied subgap tunnel current and current shot noise in diffusive SNINS structures. We found that in junctions with normal leads which are much longer than the coherence length but much shorter than the inelastic mean free path, the strongly nonequilibrium distribution of the subgap electrons created by MAR is manifested in the shot noise rather than in the tunnel current. While the tunnel current obeys Ohm’s law, the current shot noise is significantly enhanced and shows subharmonic gap structure.
Support from KVA, NFR, and NUTEK (Sweden), and from NEDO (Japan) is gratefully acknowledged.
# Search for Scalar Leptoquarks with polarized protons (and neutrons) at HERA and future 𝑒𝑝(𝑛) Machines
## 1 Introduction
We present the effects of Scalar LQ in the Neutral Current (NC) and Charged Current (CC) channels at HERA, with high integrated luminosities, and also at a possible new $`ep`$ collider running at higher energies, like the TESLAxHERA or LEPxLHC projects . We estimate the constraints that can be reached using those facilities for several Leptoquark scenarios. We emphasize the relevance of having polarized lepton and proton beams, as well as of having neutron beams (through polarized $`He^3`$ nuclei), in order to disentangle the chiral structure of these various models.
We adopt the “model independent” approach of Buchmüller-Rückl-Wyler (BRW) where the LQ are classified according to their quantum numbers and have to fulfill several assumptions like $`B`$ and $`L`$ conservation, $`SU(3)`$x$`SU(2)`$x$`U(1)`$ invariance … (see for more details). The interaction lagrangian is given by :
$`\mathcal{L}`$ $`=`$ $`\left(g_{1L}\overline{q}_L^ci\tau _2\mathrm{\ell }_L+g_{1R}\overline{u}_R^ce_R\right).𝐒_\mathrm{𝟏}+\stackrel{~}{g}_{1R}\overline{d}_R^ce_R.\stackrel{~}{𝐒}_\mathrm{𝟏}`$ (1)
$`+`$ $`g_{3L}\overline{q}_L^ci\tau _2\tau \mathrm{\ell }_L.𝐒_\mathrm{𝟑}+\stackrel{~}{h}_{2L}\overline{d}_R\mathrm{\ell }_L.\stackrel{~}{𝐑}_\mathrm{𝟐}`$
$`+`$ $`\left(h_{2L}\overline{u}_R\mathrm{\ell }_L+h_{2R}\overline{q}_Li\tau _2e_R\right).𝐑_\mathrm{𝟐},`$
where the LQ $`S_1`$, $`\stackrel{~}{S}_1`$ are singlets, $`R_2`$, $`\stackrel{~}{R}_2`$ are doublets and $`S_3`$ is a triplet. $`\mathrm{\ell }_L`$, $`q_L`$ ($`e_R`$, $`d_R`$, $`u_R`$) are the usual lepton and quark doublets (singlets). In what follows we denote generically by $`\lambda `$ the LQ coupling and by $`M`$ the associated mass.
These LQ are severely constrained by several different experiments, and we refer to  for some detailed discussions.
Now, in order to simplify the analysis, we make the following assumptions: i) the LQ couple to the first generation only; ii) one LQ multiplet is present at a time; iii) the different LQ components within one LQ multiplet are degenerate in mass; iv) there is no mixing among LQs. From these assumptions and from Eq. (1), it is possible to deduce some of the coupling properties of the LQ, which are summarized in table 1 of . We stress from this table that the LQ couplings are flavour dependent and chiral.
## 2 Future Constraints
We consider the HERA collider but with high integrated luminosities, namely $`L_{e^-}=L_{e^+}=500pb^{-1}`$. The other parameters for the analysis are: $`e^\pm p`$ collisions, $`\sqrt{s}=300GeV`$, $`0.01<y<0.9`$, $`\left(\mathrm{\Delta }\sigma /\sigma \right)_{syst}\simeq 3\%`$ and the GRV pdf set . We have also considered the impact on the constraints of higher energies by considering, on the one hand, an energy $`\sqrt{s}=380GeV`$, which is close to the maximal reach of HERA, and on the other hand, an energy $`\sqrt{s}=1TeV`$, which could be obtained at the distant projects TESLAxHERA and/or LEPxLHC . Limits at 95% CL for the various LQ models have been obtained from a $`\chi ^2`$ analysis performed on the unpolarized NC cross sections (the best observables). In figure 1 we compare the sensitivities of various present and future experiments for $`R_{2L}`$ as an example.
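To make the procedure concrete, the toy sketch below illustrates how such a $`\chi ^2`$ scan yields a 95% CL coupling limit as a function of the LQ mass. We stress that the cross-section model, the binning, and the form of the LQ correction are hypothetical stand-ins of our own, not the actual NC computation used in this analysis.

```python
# Schematic toy sketch of a chi^2 limit scan in the (M, lambda) plane.
# dsig_sm and the LQ correction below are illustrative assumptions only.
import numpy as np

Q2 = np.logspace(2.5, 4.2, 12)               # Q^2 bins in GeV^2 (assumed)
dsig_sm = 1.0e5 * Q2 ** -1.5                 # toy SM NC spectrum (arbitrary norm)
lumi, syst = 500.0, 0.03                     # 500 pb^-1, 3% systematics

def dsig_lq(lam, M):
    # Toy LQ contribution: a contact-term-like relative shift ~ (lam/M)^2 * Q^2
    return dsig_sm * (1.0 + 0.05 * (lam * 1000.0 / M) ** 2 * Q2 / Q2[-1])

def chi2(lam, M):
    n_sm = dsig_sm * lumi                    # expected SM events per bin
    err = np.sqrt(n_sm + (syst * n_sm) ** 2) # stat + syst in quadrature
    return np.sum(((dsig_lq(lam, M) * lumi - n_sm) / err) ** 2)

# 95% CL for one parameter: delta chi^2 = 3.84 relative to the SM point
for M in [250.0, 500.0, 1000.0]:             # LQ mass in GeV
    lams = np.linspace(0.0, 2.0, 2001)
    excluded = lams[np.array([chi2(l, M) for l in lams]) > 3.84]
    lim = excluded[0] if excluded.size else np.inf
    print(f"M = {M:6.0f} GeV  ->  lambda_95 ~ {lim:.3f} (toy model)")
```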
We can make the following remarks: 1) LEP limits are already covered by present HERA data . 2) For virtual exchange, the LENC constraints (in particular APV experiments) are stronger than what could be obtained at HERA, even with higher integrated luminosities and energies. 3) For real exchange, Tevatron data cover an important part of the parameter space. However, the bounds obtained from LQ pair production at Tevatron are strongly sensitive to $`BR(LQ\to eq)`$ . This means that there is still an important window for discovery at HERA in the real domain, especially for more exotic models like R-parity violating squarks in SUSY models . 4) To increase this window of sensitivity (for real exchange), it is more important to increase the energy than the integrated luminosity. 5) A $`1TeV`$ $`ep`$ collider will give access to a domain (both real and virtual) which is presently unconstrained.
## 3 Chiral structure analysis
### 3.1 Unpolarized case
An effect in NC allows the separation of two classes of models. A deviation for $`\sigma _{e^-p}^{NC}`$ indicates the class ($`S_{1L}`$,$`S_{1R}`$, $`\stackrel{~}{S}_1`$,$`S_3`$), whereas for $`\sigma _{e^+p}^{NC}`$ it corresponds to ($`R_{2L}`$,$`R_{2R}`$,$`\stackrel{~}{R}_2`$). For CC events, only $`S_{1L}`$ and $`S_3`$ can induce a deviation from SM expectations (if we do not assume LQ mixing). This means that the analysis of $`\sigma _{e^-p}^{CC}`$ can separate the former class into ($`S_{1L}`$,$`S_3`$) and ($`S_{1R}`$,$`\stackrel{~}{S}_1`$). If we want to go further into the identification of the LQ, we need to separate ”$`eu`$” from ”$`ed`$” interactions, which seems impossible with $`ep`$ collisions alone unless the number of anomalous events is huge. So, if we want a ”complete” separation of the LQ species, we need to consider $`ep`$ and $`en`$ collisions as well, where some observables, like the ratio of cross sections $`R=\sigma _{ep}^{NC}/\sigma _{en}^{NC}`$ for instance, will allow it. However, as soon as we relax one of our working assumptions (i-iv), some ambiguities will remain. The situation will be better with polarized collisions.
### 3.2 Polarized case
According to our previous experience, we know that in general the Parity Violating (PV) two-spin asymmetries exhibit stronger sensitivities to new chiral effects than the single-spin asymmetries. We therefore consider the case where the $`e`$ and $`p`$ (or neutron) beams are both polarized. The PV asymmetries are defined by $`A_{LL}^{PV}=(\sigma _{NC}^{--}-\sigma _{NC}^{++})/(\sigma _{NC}^{--}+\sigma _{NC}^{++})`$, where $`\sigma _{NC}^{\lambda _e\lambda _p}\equiv (d\sigma _{NC}/dQ^2)^{\lambda _e\lambda _p}`$, and $`\lambda _e,\lambda _p`$ are the helicities of the lepton and the proton, respectively. A LQ will induce some effects in these asymmetries, and the directions of the deviations from SM expectations allow the distinction between several classes of models. For instance, a positive deviation for $`A_{LL}^{PV}(e^-p)`$ pins down the class ($`S_{1L}`$,$`S_3`$) and, a negative one, the class ($`S_{1R}`$,$`\stackrel{~}{S}_1`$). Similarly, an effect for $`A_{LL}^{PV}(e^+p)`$ makes a distinction between the model $`R_{2R}`$ and the class ($`R_{2L}`$,$`\stackrel{~}{R}_2`$). This last fact can be seen in figure 2, which represents $`A_{LL}^{PV}`$ for $`e^+p`$ collisions at TESLAxHERA energies with a LQ of mass 500 $`GeV`$ and coupling $`\lambda =0.2`$, the large (small) bars corresponding to $`L=100(500)pb^{-1}`$ (a global systematic error of $`\left(\mathrm{\Delta }A/A\right)_{syst}=10\%`$ has been added in quadrature).
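As an aside, the statistical reach of such an asymmetry measurement follows from simple counting. In the sketch below (our own), the event numbers are hypothetical and the beams are taken as fully polarized, so the error is purely the binomial counting uncertainty.

```python
# Sketch: a measured double-spin asymmetry and its statistical error.
# N_mm and N_pp are hypothetical event counts in the (-,-) and (+,+) settings.
import numpy as np

N_mm, N_pp = 52000.0, 48000.0

A = (N_mm - N_pp) / (N_mm + N_pp)             # A_LL^PV estimator
dA = np.sqrt((1.0 - A ** 2) / (N_mm + N_pp))  # binomial (counting) error
print(f"A_LL^PV = {A:.4f} +/- {dA:.4f}")
```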
Some other observables, defined in , could be used to go further into the separation of the models. However, the sensitivities of these asymmetries are rather weak, and they can be useful only for some particularly favorable values of the parameters ($`M`$,$`\lambda `$). Consequently, polarized $`\stackrel{}{e}\stackrel{}{n}`$ collisions are mandatory to perform the distinction between the LQ models. This can be seen through the ratio of asymmetries $`R=A_{LL}^{PV}(ep)/A_{LL}^{PV}(en)`$, which for an $`e^+`$ beam distinguishes the models $`R_{2L}`$ (positive deviation) and $`\stackrel{~}{R}_2`$ (negative one). This ratio is presented in figure 3 and the separation is obvious.
Similarly, for an $`e^-`$ beam, a positive (negative) deviation in $`R(e^-)`$ indicates the class ($`S_{1R}`$,$`S_3`$) (($`S_{1L}`$,$`\stackrel{~}{S}_1`$)). Since these classes are complementary to the ones obtained from $`A_{LL}^{PV}(e^-p)`$, it indicates a non-ambiguous separation of the LQ models.
Finally, if we relax the working assumptions i-iv, the LQ can have some more complex structures, and some ambiguities can then remain. Nevertheless, the use of additional asymmetries, like the large number of charge and PC spin asymmetries that one can define with lepton plus nucleon polarizations , should be very useful for the determination of the chiral structure of the new interaction.
# The Impact of Atmospheric Fluctuations on Degree-scale Imaging of the Cosmic Microwave Background
## 1 Introduction
Many groups are currently engaged in measuring the level of anisotropy present in the cosmic microwave background (CMB) from arcminute to degree angular scales. The brightness temperature of the fluctuations is of order $`10^{-5}`$ K, requiring carefully designed experiments. Contamination from point sources and galactic dust emission is minimized by choosing an observing frequency of between 20 and 300 GHz (1.5 cm to 1 mm), depending on the angular scale being investigated (Tegmark & Efstathiou 1996). Differential measurements are employed to minimize systematic errors: chopped beam instruments measure the difference in emission between two or more directions on the sky, swept beam instruments sweep a beam rapidly backwards and forwards, and interferometers correlate signals from two or more antennas. Some examples of chopped beam experiments are Python I-IV (Dragovan et al. 1994, Ruhl et al. 1995, Platt et al. 1997, Kovac et al. 2000), OVRO RING5M (Leitch et al. 1998), and Tenerife (Davies et al. 1996). Current or recent swept beam experiments include Python V (Coble et al. 1999), Saskatoon (Netterfield et al. 1997), and the Mobile Anisotropy Experiment (Torbet et al. 1999). Interferometers currently being built include the Degree Angular Scale Interferometer (DASI) (Halverson et al. 1998), the Cosmic Background Interferometer (CBI) (A.C.S Readhead & S. Padin, personal communication), and the Very Small Array (VSA) (Jones & Scott 1998). In addition, the Cosmic Anisotropy Telescope (CAT) has already been operational for several years (Scott et al. 1996). These employ wide bandwidth, low-noise receivers to maximize sensitivity, and should be deployed at sites where the Earth’s atmosphere does not significantly compromise performance.
The atmosphere is also a source of brightness temperature variations, originating primarily from water molecules. Of the three states that may be present – vapor, liquid and ice – it is water vapor that is most important. Most of it is contained in the troposphere with a scale height of $`\sim 2`$ km, and, because the water vapor is close to its condensation point, it is poorly mixed with the ‘dry’ component of the atmosphere (mostly nitrogen and oxygen). This, in combination with turbulence, leads to a clumpy, non-uniform distribution of water vapor in the troposphere. Since the water molecule has a strong dipole moment, rotational transitions couple strongly to millimeter-wave radiation, and water vapor is the dominant source of atmospheric emission (and therefore opacity) at most millimeter wavelengths. Liquid water, in the form of clouds, is also a source of non-uniform emission, but radiates much less per molecule. Ice is the least efficient radiator, since the molecules are unable to rotate. Fluctuations in temperature caused by turbulent mixing are also a source of brightness temperature variations, although they are generally much less significant than the water vapor contribution at millimeter wavelengths.
It should also be noted that the high refractive index of water vapor causes an excess propagation delay, and a non-uniform distribution of water vapor distorts an incoming wavefront. This sets a ‘seeing’ limit on interferometric observations at millimeter wavelengths, and there is an ongoing effort to correct for this effect. It is this problem, which limits performance at arcsecond spatial resolution and high frequencies, that has driven much of the recent research into the distribution of water vapor (e.g. Armstrong & Sramek 1982; Treuhaft & Lanyi 1987, Wright 1996, Lay 1997).
The wavefront distortions are not significant for the low resolution experiments considered here. The atmospheric emission fluctuations, however, can only be distinguished from fluctuations in the CMB by the wind-induced motion of the atmosphere with respect to the background. This paper investigates how well the two can be separated. The next section describes a model of atmospheric fluctuations and the responses of the different types of instrument. Section 3 describes how data from the Python V experiment have been used to characterize the fluctuations at the South Pole, and Section 4 estimates the level of emission fluctuations for the Atacama Desert in Chile using rms path fluctuation data. Section 5 shows how the theory of Section 2 can be combined with the fluctuation data from each site to predict the residual noise level due to the atmosphere for a given instrument configuration.
## 2 Models
In this section, we develop a model that describes the atmospheric fluctuations and their interaction with the instruments commonly used to measure CMB fluctuations.
Church (1995) presented a rigorous mathematical analysis based on the autocorrelation function of the fluctuations. We adopt a more pictorial approach based on the power spectrum of the fluctuations, using an atmospheric model that differs in a fundamental way from that assumed by Church.
### 2.1 Sky brightness and instrument response
A given pointing on the sky can be represented by a point on the surface of a sphere. Consider a patch of sky with angular extent $`\mathrm{\Delta }\theta _x\ll 1`$ radian, $`\mathrm{\Delta }\theta _y\ll 1`$ radian, such that the curved surface is well represented by a pair of ortholinear angular coordinates $`(\theta _x,\theta _y)`$. In addition, we will consider only the fine structure (angular scales $`\ll 1`$ radian) in the sky brightness distribution. These approximations greatly simplify the analysis; simple Fourier Transforms can be used, and the gradient in brightness temperature due to airmass can be ignored, without significant impact on the degree scales of interest.
The Fourier Transform of the (fine structure) sky brightness distribution $`T_{\mathrm{sky}}(\theta _x,\theta _y)`$ is given by
$$\stackrel{~}{𝐓}_{\mathrm{sky}}(\alpha _x,\alpha _y)=\frac{1}{\sqrt{\mathrm{\Delta }\theta _x\mathrm{\Delta }\theta _y}}\int _{-\mathrm{\Delta }\theta _x/2}^{+\mathrm{\Delta }\theta _x/2}\int _{-\mathrm{\Delta }\theta _y/2}^{+\mathrm{\Delta }\theta _y/2}T_{\mathrm{sky}}(\theta _x,\theta _y)e^{-2\pi i(\alpha _x\theta _x+\alpha _y\theta _y)}d\theta _xd\theta _y.$$
(1)
The angular wavenumbers $`(\alpha _x,\alpha _y)`$ have units of cycles per radian. In general $`\stackrel{~}{𝐓}_{\mathrm{sky}}(\alpha _x,\alpha _y)`$ is a complex variable. Its amplitude is denoted by $`\stackrel{~}{T}_{\mathrm{sky}}(\alpha _x,\alpha _y)`$. The normalization in equation (1) is chosen such that the variance of the sky brightness fluctuations is given by
$`T_{\mathrm{sky},\mathrm{rms}}^2`$ $`=`$ $`{\displaystyle \frac{1}{\mathrm{\Delta }\theta _x\mathrm{\Delta }\theta _y}}{\displaystyle \int _{-\mathrm{\Delta }\theta _x/2}^{+\mathrm{\Delta }\theta _x/2}}{\displaystyle \int _{-\mathrm{\Delta }\theta _y/2}^{+\mathrm{\Delta }\theta _y/2}}T_{\mathrm{sky}}^2(\theta _x,\theta _y)d\theta _xd\theta _y`$ (2)
$`=`$ $`{\displaystyle \int _{-\infty }^{+\infty }}{\displaystyle \int _{-\infty }^{+\infty }}\stackrel{~}{T}_{\mathrm{sky}}^2(\alpha _x,\alpha _y)d\alpha _xd\alpha _y,`$ (3)
where $`\stackrel{~}{T}_{\mathrm{sky}}^2`$ represents the Power Spectral Density (PSD) of the (fine structure) sky brightness distribution (units: temperature<sup>2</sup> radian<sup>2</sup>).
A radiometer has a beam pattern represented by gain function $`G(\theta _x^{\prime },\theta _y^{\prime })`$, where $`\theta _x^{\prime }`$ and $`\theta _y^{\prime }`$ are angular offsets from the optical axis of the instrument. For a beam pattern localized within $`\theta _x^{\prime }\ll 1,\theta _y^{\prime }\ll 1`$, the response to fluctuations of a given angular wavenumber is
$$\widehat{𝐆}(\alpha _x,\alpha _y)=\int _{-\infty }^{+\infty }\int _{-\infty }^{+\infty }G(\theta _x^{\prime },\theta _y^{\prime })e^{-2\pi i(\alpha _x\theta _x^{\prime }+\alpha _y\theta _y^{\prime })}d\theta _x^{\prime }d\theta _y^{\prime }.$$
(4)
The gain $`G(\theta _x^{\prime },\theta _y^{\prime })`$ is normalized such that the maximum value of $`\widehat{G}(\alpha _x,\alpha _y)`$ is unity. The hat symbol is used instead of the tilde to distinguish between the different normalizations applied in equation (1) and equation (4). The response of the instrument to the sky is given by the convolution of the beam and the sky brightness. In the angular wavenumber domain,
$$\stackrel{~}{𝐓}_{\mathrm{inst}}=\stackrel{~}{𝐓}_{\mathrm{sky}}\widehat{𝐆}.$$
(5)
The instrument acts as a spatial filter that is only sensitive to a particular range of angular wavenumbers on the sky. For example, a wide-angle beam smears out the small-scale structure, and the corresponding $`\widehat{G}`$ has a narrow distribution. An ideal pencil beam responds equally to all angular wavenumbers with $`\widehat{G}=1`$.
Atmospheric features in the sky brightness distribution are blown across the angular coordinate frame, causing the output of the radiometer to vary with time. Features on the sky with large angular wavenumber (many cycles per radian) produce rapid variations with time compared to those with small angular wavenumber. Time averaging of the instrument output over a period $`t_{\mathrm{av}}`$ suppresses the rapid fluctuations (large wavenumber). This relationship can be expressed by including a temporal filter function $`\widehat{Q}(\alpha _x,\alpha _y)`$, i.e.
$$T_{\mathrm{out},\mathrm{rms}}^2=\int _{-\infty }^{+\infty }\int _{-\infty }^{+\infty }\stackrel{~}{T}_{\mathrm{sky}}^2\widehat{G}^2\widehat{Q}^2d\alpha _xd\alpha _y.$$
(6)
Equation (6) can be used to calculate the rms noise due to atmospheric emission fluctuations for a radiometric system with time averaging. Section 2.2 describes an approximate model for $`\stackrel{~}{T}_{\mathrm{sky}}^2`$. Section 2.3 derives the spatial filter function $`\widehat{G}`$ for chopped beam, swept beam and interferometric experiments. Section 2.4 characterizes the instrument temporal response $`\widehat{Q}`$. Section 2.5 applies the results of 2.2, 2.3 and 2.4 to derive the residual level of atmospheric fluctuations for a measurement, using equation (6). Section 2.6 highlights the atmospheric parameter most relevant for degree-scale imaging of the CMB.
### 2.2 Model of Atmospheric Emission Fluctuations
We adopt the Kolmogorov model of turbulence (Tatarskii 1961), summarized briefly below. Turbulent energy is injected into the atmosphere on large scales from processes such as convection and wind shear, and then cascades down through a series of eddies to smaller scales, until it is dissipated by viscous forces on size scales of order 1 mm. If energy is conserved in the cascade then simple dimensionality arguments can be used to show that the power spectrum of the fluctuations in a large 3-dimensional volume is proportional to $`q^{-11/3}`$, where $`q`$ is the spatial wavenumber (units: length<sup>-1</sup>). This holds from the outer scale size $`L_\mathrm{o}`$ on which the energy is injected to the inner scale size $`L_\mathrm{i}`$ on which it is dissipated. Tatarskii showed that this same power law applies to quantities that are passively entrained in the flow of air, such as the mass fraction of water vapor.
Figure 1 shows the geometry used for this analysis. The water vapor fluctuations are present in a layer of thickness $`\mathrm{\Delta }h`$ at average altitude $`h_{\mathrm{av}}`$. The $`x`$-, $`y`$\- and $`z`$-axes form an orthogonal set with the $`z`$-axis parallel to the line of sight of the observations at elevation $`ϵ`$.
Consider first the case of observing in the zenith direction, $`ϵ=90^{\circ }`$. In the optically thin limit, the brightness temperature contribution at frequency $`\nu `$ from a volume element of atmosphere with thickness $`dz`$ along the line of sight and water vapor density $`\rho _{\mathrm{H}_2\mathrm{O}}`$ is given by
$$dT_{\mathrm{sky}}(x,y,z,\nu )=\rho _{\mathrm{H}_2\mathrm{O}}(x,y,z)\kappa _{\mathrm{H}_2\mathrm{O}}(\nu )T_{\mathrm{phys}}(x,y,z)dz,$$
(7)
where $`\kappa _{\mathrm{H}_2\mathrm{O}}(\nu )`$ is the mass opacity function for water vapor and $`T_{\mathrm{phys}}`$ is the physical temperature of the volume element. The brightness temperature of the atmosphere looking vertically up from the ground is
$$T_{\mathrm{sky}}(x,y,\nu )=\kappa _{\mathrm{H}_2\mathrm{O}}(\nu )\int _0^{\infty }T_{\mathrm{phys}}(z)\rho _{\mathrm{H}_2\mathrm{O}}dz.$$
(8)
The dependence on $`\nu `$ is considered to be implicit from now on. The Fourier transform of this distribution is
$$\stackrel{~}{T}_{\mathrm{sky}}(q_x,q_y)=\underset{X,Y\to \infty }{lim}\left[\frac{1}{\sqrt{XY}}\int _{-Y/2}^{+Y/2}\int _{-X/2}^{+X/2}T_{\mathrm{sky}}(x,y)e^{-2\pi i(q_xx+q_yy)}dxdy\right],$$
(9)
where $`q_x`$ and $`q_y`$ are the spatial wavenumbers for the $`x`$ and $`y`$ directions.
For Kolmogorov turbulence it can be shown (e.g. Lay 1997) that the Power Spectral Density (PSD) for the fluctuations as seen in projection is given by
$$\stackrel{~}{T}_{\mathrm{sky}}^2(q_x,q_y)=\{\begin{array}{cc}A(q_x^2+q_y^2)^{-11/6}\hfill & L_\mathrm{i}\le (q_x^2+q_y^2)^{-1/2}\le 2\mathrm{\Delta }h\hfill \\ A^{\prime }(q_x^2+q_y^2)^{-8/6}\hfill & 2\mathrm{\Delta }h\le (q_x^2+q_y^2)^{-1/2}\le L_\mathrm{o}\hfill \end{array}$$
(10)
The coefficient $`A`$, and the related value $`A^{\prime }`$, are a measure of the turbulent intensity. $`\stackrel{~}{T}_{\mathrm{sky}}^2`$ has units of K<sup>2</sup> m<sup>2</sup> and the angular braces denote an average over many realizations of the random atmosphere. The first case in equation (10), applies to spatial wavelengths smaller than $`2\mathrm{\Delta }h`$, where the turbulence is considered to be isotropic in three dimensions. In the second case the turbulence is isotropic in the horizontal plane, but is constrained vertically to lie in a layer of thickness $`\mathrm{\Delta }h`$. The increase of the PSD as $`(q_x,q_y)\to (0,0)`$ is less rapid than for the 3D case. Beyond the outer scale $`L_o`$ the PSD should become constant.
The derivation of the $`11/3`$ power law by Kolmogorov applies in the isotropic three-dimensional case, well within the inner and outer scales, where the turbulence can be considered scale-free. There is extensive experimental evidence to support this. In previous analyses (e.g. Church 1995, Andreani et al. 1990), it was assumed that there were no correlations present in the turbulent layer on scales greater than the thickness $`\mathrm{\Delta }h`$, i.e. $`\mathrm{\Delta }h`$ corresponded to the outer scale size (outer scale sizes of between 1 m and 100 m were used in Church’s calculations). Data from atmospheric phase monitors operating at 12 GHz (Masson 1994; Holdaway 1995, Lay 1997) and radio telescope arrays such as the Very Large Array (Armstrong & Sramek 1982; Carilli & Holdaway 1997) show no evidence for such a low value of $`L_\mathrm{o}`$; indeed, correlation is observed over separations in excess of 10 km. While there may be conditions in which the outer scale length is greatly reduced (Coulman & Vernin 1991), the data are generally well described by a $`11/3`$ power law on small scales, and a $`8/3`$ power law on large scales, with the transition occurring for sizes comparable to the layer thickness as described in equation (10). The $`8/3`$ power law for the two-dimensional regime can be derived from similar scaling arguments to the 3D case, but cannot be justified well on theoretical grounds, since turbulence is not strictly possible in a two-dimensional medium – the layer must have some vertical extent – and the model should therefore be regarded as somewhat empirical. This distribution of fluctuation power is shown schematically in Fig. 2.
Linear coordinates $`(x,y)`$ for fluctuations in a layer at average height above the ground, $`h_{\mathrm{av}}`$, are converted to angular coordinates in radians: $`\theta _x\approx x/h_{\mathrm{av}},\theta _y\approx y/h_{\mathrm{av}}`$ (still assuming $`ϵ\approx 90^{\circ }`$). The approximation holds when $`\theta _x\ll 1`$ and $`\theta _y\ll 1`$. Linear wavenumbers $`(q_x,q_y)`$ are converted to angular wavenumbers: $`\alpha _x=q_xh_{\mathrm{av}}`$, $`\alpha _y=q_yh_{\mathrm{av}}`$. Therefore
$$\stackrel{~}{T}_{\mathrm{sky}}^2(\alpha _x,\alpha _y)=\{\begin{array}{cc}Ah_{\mathrm{av}}^{5/3}(\alpha _x^2+\alpha _y^2)^{-11/6}\hfill & h_{\mathrm{av}}/(2\mathrm{\Delta }h)\le (\alpha _x^2+\alpha _y^2)^{1/2}\le \alpha _\mathrm{i}\hfill \\ A^{\prime }h_{\mathrm{av}}^{2/3}(\alpha _x^2+\alpha _y^2)^{-8/6}\hfill & \alpha _\mathrm{o}\le (\alpha _x^2+\alpha _y^2)^{1/2}\le h_{\mathrm{av}}/(2\mathrm{\Delta }h).\hfill \end{array}$$
(11)
The transition between the three- and two-dimensional regimes is at approximately $`h_{\mathrm{av}}/(2\mathrm{\Delta }h)`$; $`\alpha _\mathrm{o}`$ and $`\alpha _\mathrm{i}`$ correspond to the outer and inner scales of the turbulence, respectively. A factor of $`h_{\mathrm{av}}^{-2}`$ is included for the correct normalization of the PSD, such that $`\stackrel{~}{T}_{\mathrm{sky}}^2(\alpha _x,\alpha _y)d\alpha _xd\alpha _y=\stackrel{~}{T}_{\mathrm{sky}}^2(q_x,q_y)dq_xdq_y`$.
Up until this point it has been assumed that the observations are in the zenith direction. In the 3D regime, two factors are needed to scale to arbitrary elevation. First, the atmospheric fluctuation power is proportional to the path length through the layer, which scales as $`1/\mathrm{sin}ϵ`$, i.e. $`A`$ in equation (11) must be replaced by $`A/\mathrm{sin}ϵ`$. In addition, the distance to the layer of fluctuations has increased from $`h_{\mathrm{av}}`$ to $`h_{\mathrm{av}}/\mathrm{sin}ϵ`$. Applying these substitutions to equation (11), for the 3D regime only ($`h_{\mathrm{av}}/(2\mathrm{\Delta }h\mathrm{sin}ϵ)\le (\alpha _x^2+\alpha _y^2)^{1/2}\le \alpha _\mathrm{i}`$)
$`\stackrel{~}{T}_{\mathrm{sky}}^2(\alpha _x,\alpha _y)`$ $`=`$ $`\left({\displaystyle \frac{A}{\mathrm{sin}ϵ}}\right)\left({\displaystyle \frac{h_{\mathrm{av}}}{\mathrm{sin}ϵ}}\right)^{5/3}(\alpha _x^2+\alpha _y^2)^{-11/6}`$ (12)
$`=`$ $`Ah_{\mathrm{av}}^{5/3}(\mathrm{sin}ϵ)^{-8/3}\alpha _{xy}^{-11/3}.`$ (13)
This prescription for atmosphere fluctuations, valid for small angular scales, is used in §2.5. On larger scales, where the 2D regime becomes more important, the situation has a much more complicated dependence on layer thickness and elevation, and a numerical calculation should be performed (e.g. Treuhaft and Lanyi 1987; Lay 1997).
### 2.3 Spatial Filtering
This section is concerned with the derivation of the spatial filter function $`\widehat{G}^2(\alpha _x,\alpha _y)`$ (eq. 6) for three common instrument configurations: (1) chopped beam, (2) swept beam, and (3) interferometer. The function $`\widehat{G}(\alpha _x,\alpha _y)`$ is the Fourier Transform of the instrument gain pattern, $`G(\theta _x,\theta _y)`$ (eq. 4). We assume that the emission fluctuations are present in the far-field of the instrument, i.e. $`h_{\mathrm{av}}\gg \frac{D^2}{\lambda }`$, so that this analysis does not apply to large telescopes and short wavelengths.
The result for a single aperture with a circular, Gaussian beam pattern is useful for all 3 configurations. For a beam with Full-Width-to-Half-Maximum (FWHM) power of $`\theta _\mathrm{b}`$, it can be shown that the spatial filter is another circular Gaussian:
$$\widehat{G}^2(\alpha _x,\alpha _y)=\mathrm{exp}\left\{-\frac{\pi ^2\theta _\mathrm{b}^2(\alpha _x^2+\alpha _y^2)}{2\mathrm{log}_e2}\right\},$$
(14)
with FWHM of $`0.62\theta _\mathrm{b}^{-1}`$.
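As a quick numerical sanity check (our own sketch, with an assumed beam width), the squared transform of a sampled Gaussian beam reproduces this FWHM relation:

```python
# Sketch: verify that |FT{Gaussian beam of FWHM theta_b}|^2 has FWHM ~0.62/theta_b.
import numpy as np

theta_b = np.radians(0.75)                  # assumed beam FWHM [rad]
n, span = 2 ** 14, 0.5                      # samples, angular window of 0.5 rad
theta = np.linspace(-span / 2, span / 2, n)
G = np.exp(-4.0 * np.log(2.0) * (theta / theta_b) ** 2)

G_hat2 = np.abs(np.fft.fftshift(np.fft.fft(G))) ** 2
G_hat2 /= G_hat2.max()
alpha = np.fft.fftshift(np.fft.fftfreq(n, d=theta[1] - theta[0]))  # cycles/radian

fwhm = 2.0 * np.abs(alpha[G_hat2 >= 0.5]).max()
print(f"measured FWHM = {fwhm:.1f} cyc/rad vs 0.62/theta_b = {0.62 / theta_b:.1f} cyc/rad")
```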
#### 2.3.1 Chopped beam
A single aperture is chopped between two positions on the sky at equal elevation separated by angle $`\theta _{\mathrm{chop}}`$ and the output is the difference between the two signals. The power gain pattern $`G(\theta _x,\theta _y)`$ is shown schematically in Fig. 3a, with the $`\theta _x`$ axis defined to be along the chop direction. The individual beams are assumed to be circular Gaussians with FWHM of $`\theta _\mathrm{b}`$.
The corresponding spatial filter $`\widehat{G}^2(\alpha _x,\alpha _y)`$, depicted in Fig. 3d, is derived using the Fourier Transform relationship in equation (4), and is given by
$$\widehat{G}^2(\alpha _x,\alpha _y)=\mathrm{sin}^2(\pi \theta _{\mathrm{chop}}\alpha _x)\mathrm{exp}\left\{-\frac{\pi ^2\theta _\mathrm{b}^2(\alpha _x^2+\alpha _y^2)}{2\mathrm{log}_e2}\right\}.$$
(15)
The normalization is such that the maximum value of $`\widehat{G}(\alpha _x,\alpha _y)`$ is unity (Section 2.1). The lower half of Fig. 3d shows a cross section of the spatial filter along the $`\alpha _x`$ axis. The instrument has zero response to brightness corrugations on the sky that have $`\alpha _x=n\theta _{\mathrm{chop}}^{-1},(n=0,1,2,\mathrm{\ldots })`$. It is also insensitive to scales much smaller than the beam size $`\theta _\mathrm{b}`$, corresponding to large values of $`\alpha `$.
#### 2.3.2 Swept beam
The swept beam configuration sweeps the beam from a single aperture back and forth across a strip of sky of width $`\theta _{\mathrm{sweep}}`$ (Fig. 3b). The single beam is assumed to be a circular Gaussian with FWHM of $`\theta _\mathrm{b}`$, and the $`\theta _x`$ axis is defined by the sweep direction. It is assumed that the data are sampled at the Nyquist rate to generate $`N=2\theta _{\mathrm{sweep}}/\theta _\mathrm{b}`$ samples per sweep. An FFT of these data generates $`N`$ real channels (or alternatively $`N/2`$ complex channels for $`\alpha _x>0`$) in the angular wavenumber domain spaced by $`\theta _{\mathrm{sweep}}^{-1}`$ (dashed lines). The resulting spatial filter is illustrated in Fig. 3e. This case is different from the single chop, since there are $`N`$ independently determined quantities, rather than one. Each channel is sensitive to corrugations on the sky with a particular range of wavenumber $`\alpha _x`$. In practice, data from the swept ‘slot’ on the sky must be tapered at the ends to avoid ‘ringing’ effects in the transform. This gives rise to an effective channel width $`\mathrm{\Delta }\alpha `$ that is larger than the channel separation, as indicated by the shaded region in Fig. 3e. The beam size again sets an upper limit on the angular wavenumber to which the instrument is sensitive.
#### 2.3.3 Interferometer
Figure 3c and f show the response of an interferometer consisting of two circular apertures producing Gaussian beams with FWHM of $`\theta _\mathrm{b}`$. The aperture centers are separated by a baseline of length $`B`$ and the $`\theta _x`$ axis is defined to be in the same direction. The interferometer responds to a range of spatial wavenumbers centered on $`\alpha _x=\pm B/\lambda `$ (Fig 3f). The larger the aperture, the larger the range of wavenumbers the instrument is sensitive to. It is assumed that a complex correlator is used to measure both the sine and cosine components on the sky; half of the total fluctuation power is present in each component (for the sake of clarity, Fig. 3 shows only the cosine component). Note that the Gaussian profile would imply that the interferometer has a finite response to $`(\alpha _x,\alpha _y)=(0,0)`$. In fact there must be zero response and the Gaussian approximation breaks down near the origin, since there is no overlap between the apertures.
### 2.4 Temporal Filtering
This section derives the form of the temporal filter $`\widehat{Q}(\alpha _x,\alpha _y)`$ of equation (6).
The wind vector $`𝐰`$ advects the layer containing the water vapor fluctuations in a horizontal direction, so that the distribution of fluctuations projected onto the $`xy`$ plane appears to move at speed $`𝐰_{xy}=(w_x,w_y)`$, the component of $`𝐰`$ parallel to the $`xy`$ plane. A fluctuation component characterized by wavenumbers $`(q_x,q_y)`$ sampled along a line of sight parallel to the $`z`$-axis produces a signal that varies in time with frequency $`\nu =w_xq_x+w_yq_y`$. Converting to angular wavenumber,
$$\nu =\frac{\mathrm{sin}ϵ}{h_{\mathrm{av}}}(w_x\alpha _x+w_y\alpha _y).$$
(16)
Time-averaging the output of an instrument is equivalent to a low pass filter which rejects signals that are varying rapidly. Since time-averaging is equivalent to convolution of the output time series with a boxcar function, the equivalent frequency response is given by $`\mathrm{sinc}(\pi \nu t_{\mathrm{av}})`$. Substituting for the frequency $`\nu `$ we obtain the temporal filter function $`\widehat{Q}`$:
$$\widehat{Q}(\alpha _x,\alpha _y)=\mathrm{sinc}\left(\frac{\pi t_{\mathrm{av}}\mathrm{sin}ϵ}{h_{\mathrm{av}}}(w_x\alpha _x+w_y\alpha _y)\right).$$
(17)
This function is represented schematically by the dark strips in Fig. 3d, e and f, perpendicular to the direction of the projected wind vector $`𝐰_{xy}`$.
### 2.5 Residual Fluctuation Power
The atmospheric fluctuation power remaining at the output of the instrument is determined by calculating the overlap integral between the unfiltered atmospheric power (Fig. 2) and the spatial and temporal filtering functions for the instrument as depicted in Fig. 3. This is analogous to the application of a window function for assessing the response of an instrument to the CMB power spectrum, or to the use of the Optical Transfer Function for optical systems. Each of the three instrument configurations is considered below.
#### 2.5.1 Chopped beam
The residual level of brightness temperature fluctuations at the output of a chopped beam experiment after time averaging is given by
$`T_{\mathrm{out},\mathrm{rms}}^2`$ $`=`$ $`{\displaystyle \int _{-\infty }^{+\infty }}{\displaystyle \int _{-\infty }^{+\infty }}\stackrel{~}{T}_{\mathrm{sky}}^2\widehat{G}^2\widehat{Q}^2d\alpha _xd\alpha _y`$ (19)
$`=`$ $`{\displaystyle \int _{-\infty }^{+\infty }}{\displaystyle \int _{-\infty }^{+\infty }}\left\{Ah_{\mathrm{av}}^{5/3}(\mathrm{sin}ϵ)^{-8/3}(\alpha _x^2+\alpha _y^2)^{-11/6}\right\}\left\{\mathrm{sin}^2(\pi \theta _{\mathrm{chop}}\alpha _x)\right\}`$
$`\left\{\mathrm{sinc}^2(\pi t_{\mathrm{av}}{\displaystyle \frac{\mathrm{sin}ϵ}{h_{\mathrm{av}}}}(\alpha _xw_x+\alpha _yw_y))\right\}d\alpha _xd\alpha _y.`$
The integral is dominated by the contribution from close to the origin, in which regime the spatial filter function can be approximated as $`\pi ^2\theta _{\mathrm{chop}}^2\alpha _x^2`$. The integral can then be reduced to the following expression:
$$T_{\mathrm{out},\mathrm{rms}}^2\approx (11.2\mathrm{cos}^2\varphi +16.7\mathrm{sin}^2\varphi )Ah_{\mathrm{av}}^2(\mathrm{sin}ϵ)^{-3}\theta _{\mathrm{chop}}^2w_{xy}^{-1/3}t_{\mathrm{av}}^{-1/3},$$
(20)
where $`\varphi `$ is the angle between the projected wind vector $`𝐰_{xy}`$ and the $`\theta _x`$ axis. The expression should apply as long as the 3D turbulence model is a good approximation for $`\stackrel{~}{T}_{\mathrm{sky}}^2`$ (Section 2.2). This requires that the chop angle $`\theta _{\mathrm{chop}}\lesssim 2\mathrm{\Delta }h\mathrm{sin}ϵ/h_{\mathrm{av}}`$ and $`w_{xy}t_{\mathrm{av}}\lesssim 2\mathrm{\Delta }h`$. If either constraint is exceeded, equation (20) represents an upper limit, since the regime of ‘2D turbulence’ has reduced power at low angular wavenumbers. It is interesting to note that the rms level of the fluctuations goes down as $`t_{\mathrm{av}}^{-1/6}`$, much more slowly than the usual $`t_{\mathrm{av}}^{-1/2}`$. This is because the filtered fluctuations do not have a white noise spectrum; most of the power is concentrated at the low frequencies.
It is clear that simple chopped observations are very susceptible to atmospheric fluctuations; in practice more complicated three and four beam chops have been used to alleviate this.
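Putting the pieces together, the overlap integral of equation (19) can be checked against the closed form of equation (20). The sketch below is our own; all parameter values are illustrative assumptions, the turbulence amplitude is an arbitrary placeholder, and the two numbers should agree at the level of tens of percent, given the small-angle approximation behind the closed form.

```python
# Sketch: evaluate Eq. (19) on a grid (PSD of Eq. (13), spatial filter of
# Eq. (15), temporal filter of Eq. (17)) and compare with Eq. (20).
import numpy as np

A_turb = 1.0e-9                              # K^2 m^{-5/3}; placeholder amplitude
h_av, elev = 1000.0, np.radians(60.0)        # layer height [m], elevation
theta_b, theta_chop = np.radians(0.75), np.radians(3.0)
w_xy, t_av, phi = 10.0, 30.0, 0.0            # wind [m/s], averaging [s], wind angle

se = np.sin(elev)
a = np.linspace(-100.0, 100.0, 1501)         # angular wavenumbers [cycles/rad]
ax, ay = np.meshgrid(a, a)
a2 = ax ** 2 + ay ** 2
a2[a2 == 0.0] = np.inf                       # drop the integrable (0,0) point

psd = A_turb * h_av ** (5.0 / 3.0) * se ** (-8.0 / 3.0) * a2 ** (-11.0 / 6.0)
G2 = np.sin(np.pi * theta_chop * ax) ** 2 * np.exp(
    -np.pi ** 2 * theta_b ** 2 * a2 / (2.0 * np.log(2.0)))
wx, wy = w_xy * np.cos(phi), w_xy * np.sin(phi)
Q2 = np.sinc(t_av * se / h_av * (wx * ax + wy * ay)) ** 2   # np.sinc(x)=sin(pi x)/(pi x)

da = a[1] - a[0]
T2_num = np.sum(psd * G2 * Q2) * da * da

coeff = 11.2 * np.cos(phi) ** 2 + 16.7 * np.sin(phi) ** 2
T2_ana = (coeff * A_turb * h_av ** 2 * se ** -3
          * theta_chop ** 2 * (w_xy * t_av) ** (-1.0 / 3.0))
print(f"numerical: {T2_num:.3e} K^2   closed form: {T2_ana:.3e} K^2")
```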
#### 2.5.2 Swept beam
For a channel with $`\alpha _x^{\prime }\gg \mathrm{\Delta }\alpha _{\mathrm{chan}}`$, the atmospheric fluctuation power $`\stackrel{~}{T}_{\mathrm{sky}}^2`$ can be considered approximately constant in the region of overlap between the spatial filter and the temporal filter (Fig. 3e). For such a case, it can be shown that
$`T_{\mathrm{out},\mathrm{rms}}^2`$ $`=`$ $`{\displaystyle \int _{-\infty }^{+\infty }}{\displaystyle \int _{-\infty }^{+\infty }}\stackrel{~}{T}_{\mathrm{sky}}^2\widehat{G}^2\widehat{Q}^2d\alpha _xd\alpha _y`$ (22)
$`\approx `$ $`\left\{Ah_{\mathrm{av}}^{5/3}(\mathrm{sin}ϵ)^{-8/3}\left({\displaystyle \frac{\alpha _x^{\prime }}{\mathrm{sin}\varphi }}\right)^{-11/3}\right\}\left\{\mathrm{exp}\left[-8\mathrm{ln}2\left({\displaystyle \frac{\alpha _x^{\prime }}{\alpha _\mathrm{b}\mathrm{sin}\varphi }}\right)^2\right]\right\}`$
$`\left\{{\displaystyle \frac{\mathrm{\Delta }\alpha _{\mathrm{chan}}}{\mathrm{sin}\varphi }}\right\}\left\{(w_{xy}t_{\mathrm{av}})^{-1}{\displaystyle \frac{h_{\mathrm{av}}}{\mathrm{sin}ϵ}}\right\},`$
where $`\alpha _x^{\prime }`$ is the angular wavenumber of the channel and $`\varphi `$ is the angle that $`𝐰_{xy}`$ makes with the $`\theta _x`$-axis. The first factor is $`\stackrel{~}{T}_{\mathrm{sky}}^2`$ evaluated at the intersection of the channel spatial filter (shaded region in Fig. 3e) and the temporal filter (dark strip in Fig. 3e). The second factor is the taper imposed by the beam size of the aperture (a wide angle beam is insensitive to high angular wavenumbers). The third factor is the effective length of the temporal filter strip that overlaps with the channel. The fourth factor is the effective width of the temporal filter function. The third and fourth factors represent the area of overlap in Fig. 3e; the first and second represent the level of fluctuations in the overlap region. The residual fluctuations are largest when the wind $`𝐰_{xy}`$ is blowing perpendicular to the sweep direction, i.e. $`\varphi =90`$ degrees.
The channels with $`\alpha _x^{\prime }`$ close to zero may have a significant response to the strong peak in $`\stackrel{~}{T}_{\mathrm{sky}}^2`$ near $`(\alpha _x,\alpha _y)=(0,0)`$. Careful thought must be given to the tapering (or ‘windowing’) function applied over the swept slot on the sky, since this determines the profile of the channel as a function of $`\alpha _x`$. Little or no tapering leads to strong sidelobes on the channel response; too much tapering substantially reduces the resolution of the power spectrum. Specific cases can be computed numerically if the altitude $`h_{\mathrm{av}}`$ and thickness $`\mathrm{\Delta }h`$ of the layer are known.
#### 2.5.3 Interferometer
For an interferometer baseline $`B`$ that is large compared to the diameter of the antennas, the circles representing the spatial filter response in Fig. 3f are widely spaced and the atmospheric power spectral density $`\stackrel{~}{T}_{\mathrm{sky}}^2`$ can be considered constant over these regions. The overlap with the temporal filter (dark strip in Fig. 3f) is maximum when the wind blows perpendicular to the baseline. For cases when the wind is close to perpendicular ($`\varphi >60^{\circ }`$), the variance of the residual brightness temperature fluctuations at the time-averaged output of the instrument is approximated by
$`T_{\mathrm{out},\mathrm{rms}}^2`$ $`=`$ $`{\displaystyle \int _{-\infty }^{+\infty }}{\displaystyle \int _{-\infty }^{+\infty }}\stackrel{~}{T}_{\mathrm{sky}}^2\widehat{G}^2\widehat{Q}^2𝑑\alpha _x𝑑\alpha _y`$ (24)
$`\simeq `$ $`\left\{Ah_{\mathrm{av}}^{5/3}(\mathrm{sin}ϵ)^{-8/3}\left({\displaystyle \frac{B}{\lambda }}\mathrm{sin}\varphi \right)^{-11/3}\right\}\left\{\mathrm{exp}\left[-8\mathrm{ln}2\left({\displaystyle \frac{B\mathrm{cos}\varphi }{\lambda \alpha _\mathrm{b}}}\right)^2\right]\right\}`$
$`\left\{{\displaystyle \frac{\alpha _\mathrm{b}\sqrt{\pi }}{\sqrt{2\mathrm{ln}2}}}\right\}\left\{(w_{xy}t_{\mathrm{av}})^{-1}{\displaystyle \frac{h_{\mathrm{av}}}{\mathrm{sin}ϵ}}\right\}.`$
The first factor is $`\stackrel{~}{T}_{\mathrm{sky}}^2`$ evaluated at the intersection of the channel spatial filter and the temporal filter (assumes ‘3D turbulence’). The second factor accounts for the taper of the primary beam. The third factor is the equivalent width of the Gaussian spatial filter along the direction of the temporal filter strip for both positive and negative $`\alpha _x`$. The fourth factor is the equivalent width of the temporal filter function. When the wind blows more parallel to the baseline there is very little overlap between the spatial and temporal filters (Fig. 3f) and $`T_{\mathrm{out},\mathrm{rms}}^2`$ is very small compared to the perpendicular case.
If the edge-to-edge separation $`\mathrm{\Delta }D`$ of the apertures is small compared to their diameter $`d`$, the circles in Fig. 3f are close together, and it is necessary to make a numerical calculation to account for the change in $`\stackrel{~}{T}_{\mathrm{sky}}^2`$ in the region of overlap between the spatial and temporal filters. For the case where the wind is perpendicular to the baseline, equation (24) can be approximated by a one dimensional integral of the atmospheric power law and the spatial filter along the $`\alpha _x`$ direction, multiplied by the effective width of the temporal filter:
$$T_{\mathrm{out},\mathrm{rms}}^2\simeq \left\{Ah_{\mathrm{av}}^{5/3}(\mathrm{sin}ϵ)^{-8/3}\right\}\left\{(w_{xy}t_{\mathrm{av}})^{-1}\frac{h_{\mathrm{av}}}{\mathrm{sin}ϵ}\right\}S_{\mathrm{int}},$$
(25)
where
$$S_{\mathrm{int}}=\int _{-\infty }^{+\infty }|\alpha _x|^{-11/3}\widehat{G}^2𝑑\alpha _x.$$
(26)
This integral was evaluated for two different types of aperture: (1) a truncated Gaussian distribution of the electric field strength with the edge cut-off at the -10 dB level; (2) a truncated Bessel distribution of the electric field, cut off at the first zero. The first is typical for a dish illuminated by a feedhorn, and the second is obtained at the aperture of a corrugated horn. In each case, $`\widehat{G}`$ was calculated by cross-correlating the E-field distribution of two apertures. It was found that the result is almost independent of the transition from the 3D to the 2D turbulence regime. Also, for a given value of $`\mathrm{\Delta }D/d`$, the integral $`S_{\mathrm{int}}\propto (d/\lambda )^{-8/3}`$ (to a good approximation), i.e. $`S_{\mathrm{int}}(d/\lambda )^{8/3}`$ is a function of $`\mathrm{\Delta }D/d`$. This is plotted in Fig. 4. At large values of $`\mathrm{\Delta }D/d`$ (where $`\mathrm{\Delta }D\simeq B`$), there is a $`-11/3`$ power-law dependence, as predicted by equation (24).
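A minimal one-dimensional sketch of this calculation is given below for the truncated-Gaussian case. The grid, the aperture geometry and the restriction to one dimension are assumptions of the illustration; a full two-dimensional treatment would be needed to reproduce Fig. 4 precisely.

```python
import numpy as np

# 1D sketch of S_int (eq. 26) for two identical truncated-Gaussian apertures;
# the geometry (d, Delta_D, lambda) is an illustrative assumption.
lam, d, dD = 0.0075, 0.187, 0.02        # wavelength, diameter, gap [m]
B = d + dD                              # centre-to-centre baseline [m]

x = np.linspace(-d / 2, d / 2, 2001)
dx = x[1] - x[0]
sigma = d / (2.0 * np.sqrt(np.log(10.0)))   # -10 dB power at the aperture edge
E = np.exp(-x**2 / (2.0 * sigma**2))        # truncated Gaussian E-field

corr = np.correlate(E, E, mode="full") * dx     # aperture cross-correlation
lags = np.arange(-len(x) + 1, len(x)) * dx
alpha = (B + lags) / lam                        # filter is centred on B/lam
G2 = (corr / corr.max())**2

dalpha = dx / lam
S_int = 2.0 * np.sum(alpha**(-11.0 / 3.0) * G2) * dalpha   # +/- baselines
print(f"S_int ~ {S_int:.2e},  S_int*(d/lam)^(8/3) ~ {S_int*(d/lam)**(8/3):.2f}")
```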
### 2.6 The Unfiltered Fluctuation Strength
The atmospheric brightness fluctuations at a given site and at a given time can be characterized by four parameters: the wind vector $`𝐰`$, the altitude of the turbulent layer $`h_{\mathrm{av}}`$, the fluctuation intensity $`A`$, and the thickness of the layer $`\mathrm{\Delta }h`$. Of particular importance is the combination $`Ah_{\mathrm{av}}^{8/3}`$, which is the measure of fluctuation ‘strength’ relevant to the swept beam and interferometric observations of the CMB (eqs. and ).
In the next sections, data from the South Pole and Chile are analyzed to constrain these parameters. The models developed above are then used to estimate the sky brightness fluctuations that would be expected for an interferometer located at these sites.
## 3 The South Pole
The South Pole has been chosen as a site for several CMB anisotropy experiments over the past decade (see for example Meinhold & Lubin 1991, Tucker et al. 1993, Dragovan et al. 1994, Platt et al. 1997). It is high (2800 m), extremely cold and dry, and is situated on an expansive ice sheet, with the surface wind dominated by weak katabatic airflow from higher terrain several hundred kilometers away to grid northeast (see discussion in King & Turner 1997). We characterize atmospheric brightness fluctuations at the South Pole using data from the Python telescope, a swept beam CMB experiment, obtained during the austral summer 1996-1997.
### 3.1 The Python Experiment
The Python telescope, in its configuration for the 1996-1997 season, employs a dual feed 40 GHz HEMT based receiver. The receiver has two corrugated feeds separated by $`2.75^{\circ }`$ on the sky, with separate RF chains, HEMT amplifiers, and backend signal processing. The post-detector output is AC coupled with a cut-off frequency of 1 Hz, and incorporates a low-pass anti-alias filter which attenuates the signal at frequencies above 100 Hz. The RF signal is separated into two frequency bands, centered at 39 GHz and 41.5 GHz with bandwidths of approximately 2 GHz and 5 GHz, respectively. Data from the two bands are combined for atmospheric analysis.
The feeds are in the focal plane of a 0.8 m off-axis paraboloidal mirror, resulting in two $`1.1^{\circ }`$ beams on the sky which are swept through $`10^{\circ }`$ at constant elevation at a rate of 5.1 Hz by a large vertical flat mirror. The two beams are at the same elevation and their sweeps partially overlap on the sky. Beam spillover is reflected to the sky by two sets of shields, one set fixed to the tracking telescope, and a set of larger stationary ground shields which also shield the telescope structure from the sun and any local sources of interference. Data are taken for $`30`$ s while sweeping and tracking a central position on the sky; the telescope is then slewed to another position a few degrees away. Between 5 and 13 pointings are stored in one data file, representing 5 to 10 minutes of observing time. Data were taken over 80% of the period from early December 1996 through early February 1997.
### 3.2 Data Analysis Technique
In order to differentiate atmospheric fluctuation power from instrument noise, the covariance of the data from the two beams is taken for the portion of each sweep in which their positions overlap on the sky, approximately $`6^{\circ }`$. Atmospheric brightness fluctuations are correlated between the two beams, while most of the instrument noise is uncorrelated. Thus the signal-to-noise of the correlated fluctuation power can be increased by averaging the covariance over many sweeps on the sky, allowing the atmospheric brightness fluctuation power to be estimated during stable periods when the system is receiver noise limited. The mean covariance represents the mean ‘snapshot’ fluctuation power in the $`6^{\circ }`$ sweep, regardless of the number of sweeps that are subsequently averaged together.
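Schematically, the estimator amounts to the following (a sketch; the array names and shapes are illustrative and do not describe the actual analysis pipeline of the experiment):

```python
import numpy as np

# Schematic dual-beam covariance estimate; d1, d2 have shape
# (n_sweeps, n_samples) and hold the overlapping portion of each sweep.
def mean_sweep_covariance(d1, d2):
    a = d1 - d1.mean(axis=1, keepdims=True)   # remove per-sweep offsets
    b = d2 - d2.mean(axis=1, keepdims=True)
    cov = np.mean(a * b, axis=1)              # covariance of each sweep pair
    return cov.mean()                         # average over many sweeps

# Uncorrelated receiver noise averages toward zero here, while atmospheric
# structure, which is common to both beams, survives the averaging.
```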
Several instrumental effects are accounted for when estimating the atmospheric brightness fluctuation power:
1. The fluctuation power is corrected for the effect of the anti-alias filter roll-off and backend electronics delay.
2. Correlated 60 Hz line noise is removed from the data.
3. An offset dependent on the position of the sweeping mirror is correlated between the two beams. This is removed by subtracting the component of the signal that is constant on the sky over multiple pointings.
We determine the effect of the 60 Hz line noise and stationary signal removal techniques on the true atmospheric signal by examining their effect on the data during periods when the data are dominated by atmospheric fluctuations. We calculate that the atmospheric power should be increased by 30% to compensate for the combined effect of these removal techniques and instrumental effects. There is an additional correlated signal due to the stationary ground shield and due to the CMB itself, which is not removed with these techniques. Therefore, the quartiles which we report should be taken as an upper limit of the true atmospheric signal.
### 3.3 South Pole Atmospheric Fluctuation Data
From the Python data, we determine the South Pole brightness fluctuation power for a $`6^{\circ }`$ sweep at a mean elevation angle of $`49^{\circ }`$ over 2 months of the austral summer (Fig. 5). The atmospheric fluctuations at the South Pole are bimodal in nature, with long periods of high stability broken by periods of high atmospheric fluctuation power. An examination of the meteorological records shows that the latter are always associated with at least partial cloud cover. The infrequent periods of high atmospheric fluctuation power are correlated with a (grid) westerly shift in the wind direction, and are weakly correlated with increased windspeed and precipitable water vapor content. These correlations indicate that the periods of high atmospheric fluctuation power are associated, at least in part, with synoptically forced moist air from West Antarctica, a condition which occurs infrequently at the pole (Hogan et al. 1982). Variability in the fluctuation power during periods of low atmospheric fluctuation power is consistent with instrument noise.
From these data we construct a cumulative distribution function for the brightness fluctuation power (Fig. 6), and derive quartile values for brightness fluctuation power over the 2 month time period during which the data were taken. The cumulative distribution function for the data files, in which the covariance has been averaged for only a few minutes, is adequate for deriving the 50% and 75% quartile values. However, instrument noise dominates the distribution function at the 25% quartile level; therefore data that are binned in 6 hour intervals are used to derive the 25% quartile value. The quartile values for the 2 months of the austral summer 1996-1997 are {0.20, 0.51, 1.62} mK<sup>2</sup>, where the brackets are used to denote the 25%, 50% and 75% quartile values. To compensate for instrumental filtering effects described above, we increase these values by 30% to {0.27, 0.68, 2.15} mK<sup>2</sup>. The Python experiment was not operated during the austral winter, so data are not available for this period. However, precipitable water vapor and sky opacity quartile values are lower in the winter months (Chamberlin et al. 1997). It is therefore likely that the atmospheric stability improves during the austral winter as well.
The sharp roll-off of the Python primary beam spatial filter (see Fig. 3e) prevents an accurate determination of the underlying atmospheric angular power spectrum. The angular power spectrum for data that have not been tapered at the edges of the sweep exhibits power proportional to the inverse square of the angular wavenumber. This power law dependence is due to power from low angular wavenumbers (large angular scales) leaking into higher angular wavenumber channels. The observations demonstrate that it is desirable to taper CMB data to reduce contamination from the high atmospheric fluctuation power on large angular scales.
### 3.4 Estimating the fluctuation intensity
The value of $`Ah_{\mathrm{av}}^{5/3}`$ (eq. ) appropriate for the South Pole can be estimated from the rms of the Python measurements. Model atmospheric fluctuation power spectra for layers of turbulence seen at an elevation of $`49^{\circ }`$ were computed using a full three dimensional integration (similar to Lay 1997). These were then multiplied by the Python spatial filter function $`\widehat{G}^2`$ (the sum of all but the DC channel) and integrated to give a value for $`\mathrm{\Delta }T_{\mathrm{out},\mathrm{rms}}^2`$. The model value for $`Ah_{\mathrm{av}}^{5/3}`$ was then scaled so that the model $`\mathrm{\Delta }T_{\mathrm{out},\mathrm{rms}}^2`$ matched the measured value. Note that since we are estimating the rms fluctuation level from ‘snapshots’ of the atmosphere, there is no correction needed for time averaging, i.e. the temporal filter $`\widehat{Q}^2=1`$ in equation 6.
The result is a function of the ratio of the layer thickness to the average altitude of the layer, $`\mathrm{\Delta }h/h_{\mathrm{av}}`$. For $`\mathrm{\Delta }h/h_{\mathrm{av}}=1`$, it was found that $`Ah_{\mathrm{av}}^{5/3}=0.99\mathrm{\Delta }T_{\mathrm{out},\mathrm{rms}}^2`$; for $`\mathrm{\Delta }h/h_{\mathrm{av}}=2`$, 0.5 and 0.25, the scale factor changed to 0.89, 1.13 and 1.36, respectively. Therefore the ratio does not have a major effect on the inferred value of $`Ah_{\mathrm{av}}^{5/3}`$. For $`\mathrm{\Delta }h=h_{\mathrm{av}}=500`$ m, the measured quartile values give $`Ah_{\mathrm{av}}^{5/3}=0.99\mathrm{\Delta }T_{\mathrm{atm}}^2=\{0.27,0.67,2.13\}`$ mK<sup>2</sup>. Corresponding values for the quantity $`Ah_{\mathrm{av}}^{8/3}/(\mathrm{mK}^2\mathrm{m})`$ (which determines the residual level of fluctuations for swept beam and interferometer experiments – see eqs. and ) are tabulated in Table 1 for three different values of $`h_{\mathrm{av}}`$. These numbers are appropriate for an observing frequency of 40 GHz. The brightness temperature of fluctuations due to water vapor scales approximately as the square of the observing frequency (except close to the line centers at 22 GHz and 183 GHz), so that $`Ah_{\mathrm{av}}^{8/3}`$ should be scaled as observing frequency to the fourth power.
### 3.5 Altitude of the fluctuations
By combining the Python data with radiosonde wind measurements, it is possible to determine the altitude of the fluctuations during periods of bad weather. This is illustrated by Fig. 7, which shows the measured emission as a function of angular position $`\theta _x`$ on the sky and time $`t`$. The plot represents an interval of 30 s, at a time when the wind was blowing parallel to the sweep direction. The stripes are produced by blobs of water vapor moving from left to right; the diagram shows that a blob moves through an angle of $`7^{\circ }`$ in about 13 s. A $`7^{\circ }`$ angular distance corresponds to a physical length of $`0.12h_{\mathrm{av}}/\mathrm{sin}(49^{\circ })`$, where $`h_{\mathrm{av}}`$ is the average altitude of the fluctuation. The radiosonde launched 2 hours after this dataset indicated a fairly uniform wind speed of $`w=`$16 m s<sup>-1</sup> for the lower 2 km, so a blob is expected to move 208 m in 13 s. Solving for $`h_{\mathrm{av}}`$ gives 1300 m. The slope of the stripes is proportional to $`h_{\mathrm{av}}/w`$.
In order to average the data together from many 30-s intervals, we compute the power spectrum for each time–angle plot and average those together. The power spectrum for a 30-s interval is calculated by computing the Fast Fourier Transform of the time–angle data from each of the two feedhorns (using a Hann taper to minimize sidelobes), and then calculating the covariance between the two transformed datasets. Figure 7b shows the power spectrum averaged over one hour of data (containing the 30-s interval shown in Fig. 7a). The power is distributed along a radial line, perpendicular to the striping. The gradient of this line is proportional to $`w/h_{\mathrm{av}}`$; the overlaid radial lines represent (from vertical) altitudes of 0, 500, 1000, 1500, 2000, 2500 and 3000 m, calibrated using $`w=`$16 m s<sup>-1</sup>. The fluctuations have $`h_{\mathrm{av}}1300`$ m. The physical periodicity is given by $`w/\nu `$; e.g. $`\nu =0.1`$ Hz corresponds to a fluctuation with period 160 m. There is little power present at the origin because the DC level was removed from the time–angle plane before the Hann taper and Fourier transform were applied. The contours appear to fall off faster than would be expected for Kolmogorov turbulence. This is partly due to the effect of the primary beam taper in the angular wavenumber direction, which can be represented by a Gaussian centered on zero with a FWHM size of about 33 rad<sup>-1</sup>, but may also indicate that these bad weather fluctuations do not follow a Kolmogorov power law.
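The averaging procedure just described amounts to the following operations on each 30-s time-angle plane (a hedged sketch; array shapes and normalization conventions are assumptions):

```python
import numpy as np

# Sketch of the cross power spectrum of two time-angle planes, each of
# shape (n_time, n_angle); normalization conventions are assumed here.
def cross_power_spectrum(feed1, feed2):
    taper = (np.hanning(feed1.shape[0])[:, None]
             * np.hanning(feed1.shape[1])[None, :])    # 2D Hann taper
    F1 = np.fft.fft2((feed1 - feed1.mean()) * taper)
    F2 = np.fft.fft2((feed2 - feed2.mean()) * taper)
    return (F1 * np.conj(F2)).real                     # one 30-s spectrum

# Averaging cross_power_spectrum(...) over many intervals suppresses receiver
# noise; the gradient of the radial ridge in the result is ~ w / h_av, which
# yields h_av once the radiosonde windspeed w is known.
```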
Average power spectra were computed for all the hours when the wind was parallel to the sweep direction (approximately every 12 hours). Two more examples during periods of bad weather are shown in Fig. 7c and d. The first indicates fluctuations at an altitude of $`500`$ m; the second shows two components: one at $`300`$ m and another at much higher altitude ($`>3`$ km). Unfortunately it was not possible to detect structure in the power spectrum during stable periods; the emission is too weak.
In most cases that were measured, the altitude determined for the strong fluctuations, which varies from 300 m to well over 3 km, agreed well with the altitude at which the relative humidity was a maximum (measured by radiosonde launches). This strengthens the case for the connection between clouds and strong fluctuations, and may explain the possible non-Kolmogorov nature of the power spectrum. In the cases of high altitude turbulence ($`h_{\mathrm{av}}>3`$ km), however, there was no corresponding maximum in the relative humidity and there is generally little water vapor present in the atmosphere. Another mechanism must be at work in these cases.
## 4 The Atacama Desert in Chile
The Atacama Desert in Northern Chile is extremely dry, and is the proposed location for the next generation of large millimeter-wave arrays, as well as the Cosmic Background Interferometer experiment. The Mobile Anisotropy Telescope has been deployed there for two seasons during 1997-1998 (Torbet et al. 1999, Miller et al. 1999). Monitoring of the atmospheric stability at Cerro Chajnantor (the proposed site for the Atacama Large Millimeter Array) has been underway for over 4 years, using a site test interferometer (Radford et al. 1996).
### 4.1 The site test interferometer
The interferometer used in Chile consists of two dishes 1.8 m in diameter, separated by an East-West baseline of 300 m, that observe the 11 GHz CW beacon from a geostationary communications satellite at an elevation of $`36^{\circ }`$ and an azimuth of $`65^{\circ }`$. The instrument measures the phase difference between the two received signals, which depends on the difference in the electrical path lengths along the two lines of sight to the satellite. The random component of the fluctuations is dominated by the non-uniform distribution of water vapor in the troposphere being blown over the interferometer, since water vapor has a high refractive index compared to dry air. Quartile values for the rms difference in path length over 1996 are {136, 300, 624} $`\mu `$m. The data processing is described by Holdaway et al. (1995). These values have not been scaled to zenith.
Water vapor is the principal source of both path length fluctuations and brightness temperature fluctuations. We wish to estimate the latter from a measurement of the former, which requires a conversion factor. This is described in §4.2. It is also necessary to determine $`A`$ (the intensity of fluctuations integrated through the atmosphere) from the interferometer measurements. This requires a model, and is described in §4.3.
### 4.2 Converting rms path to brightness temperature
The millimeter and submillimeter absorption spectrum of water vapor is dominated by a series of rotational line transitions, the lowest of which are at 22 GHz and 183 GHz. At frequencies close to the line centers, the optical depth $`\tau `$ for a given amount of precipitable water vapor (PWV: the depth of liquid water obtained if all the vapor is condensed) is well-determined, but between the lines experiments have shown that the absorption is higher than expected from theory (e.g. Waters 1976, Sutton and Hueckstaedt 1996). The reason for this is still not clear, although several hypotheses have been advanced. At 40 GHz the theoretical absorption from lines is 0.012 per centimeter PWV. The Gaut and Reifenstein (1971) model for the ‘continuum’ absorption increases this to 0.03 cm<sup>-1</sup>, and is believed to be a reasonable estimate at the frequencies of interest here (Sutton and Hueckstaedt 1996). If the physical temperature of the atmosphere is 250 K, then 1 cm PWV has a brightness temperature of $`0.03\times 250`$ K$`=7.5`$ K. The excess path resulting from 1 cm of water vapor at 250 K is approximately 7 cm (Thompson, Moran & Swenson 1994), so that 1 cm of excess path corresponds to a brightness temperature of 1.1 K, and 1 mK brightness corresponds to 9 $`\mu `$m excess path. This conversion should be considered an estimate with an uncertainty of order 50%. A hard upper limit of 22 $`\mu `$m mK<sup>-1</sup> is set by the contribution from line emission only (no continuum term).
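The arithmetic of this conversion is compactly restated by the following sketch, which uses only the numbers quoted above.

```python
# Path-to-brightness conversion at 40 GHz, restating the values in the text.
tau_per_cm_pwv = 0.03      # optical depth per cm PWV (incl. continuum term)
T_phys = 250.0             # physical temperature of the atmosphere [K]
path_per_cm_pwv = 7.0      # cm of excess path per cm PWV

T_b_per_cm_pwv = tau_per_cm_pwv * T_phys             # 7.5 K per cm PWV
microns_per_mK = path_per_cm_pwv * 1e4 / (T_b_per_cm_pwv * 1e3)
print(f"{microns_per_mK:.1f} micron of excess path per mK")   # ~9.3
```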
Using the value of 9 $`\mu `$m mK<sup>-1</sup>, the rms path fluctuation quartile values are mapped to brightness temperature fluctuations of {16, 34, 69} mK at 40 GHz.
### 4.3 Estimating $`A`$
Since the phase monitor measures the path difference between two lines of sight, its spatial filtering properties are analogous to the chopping instrument depicted in Fig. 3a and d. The main difference is that the lines of sight are parallel to one another through the atmosphere, rather than diverging from the point of observation. As discussed in §2.5, calculation of the residual fluctuation power after spatial and temporal filtering for this configuration requires a model that includes the thickness of the layer containing the fluctuations and the detailed geometry of the observations. The relevant analysis can be found in Lay (1997).
The input model parameters are: baseline 300 m East-West; elevation $`36^{\circ }`$; azimuth $`65^{\circ }`$; layer thickness 500 m. The windspeed and altitude of the layer are not needed for calculation of $`A`$. To obtain rms brightness temperatures of $`\{16,34,69\}`$ mK with the above model parameters requires that $`A=\{9.4\times 10^{-5},4.4\times 10^{-4},1.9\times 10^{-3}\}`$ mK<sup>2</sup> m<sup>-5/3</sup>. The layer thickness has only a small effect on the calculation; if instead the layer is actually 2 km thick, then $`A=\{8.0\times 10^{-5},3.7\times 10^{-4},1.6\times 10^{-3}\}`$ mK<sup>2</sup> m<sup>-5/3</sup>.
The average altitude of the turbulent layer is much more important, but is not known. The quantity $`Ah_{\mathrm{av}}^{8/3}/(\mathrm{mK}^2\mathrm{m})`$ is tabulated in Table 1 for different values of $`h_{\mathrm{av}}`$ and $`\mathrm{\Delta }h=500`$ m. This is the value that determines the residual level of fluctuations in a CMB experiment (eq. ).
## 5 Example
The analysis and results presented above are applied to calculate the contribution from atmospheric brightness fluctuations to the noise measured by a small interferometer, of the kind that might be used for measurements of the CMB anisotropy.
We specify a field of view with FWHM of $`\theta _\mathrm{b}=3^{\circ }=0.053`$ rad at a frequency of 40 GHz ($`\lambda =0.75`$ cm). The diameter of the aperture required for a Gaussian distribution truncated at the $`-`$10 dB level is $`d=1.15\lambda /\theta _\mathrm{b}=16.3`$ cm. For a corrugated horn, the diameter is $`d=1.32\lambda /\theta _\mathrm{b}=18.7`$ cm, and it is this case that we now investigate further.
The total thermal noise (for the combined sine and cosine components, no atmospheric fluctuations) for a 2-element interferometer with system temperature $`T_{\mathrm{sys}}`$ and bandwidth $`\mathrm{\Delta }\nu `$ is given by (see Thompson, Moran & Swenson 1994)
$$T_{\mathrm{therm},\mathrm{rms}}=0.28\left(\frac{T_{\mathrm{sys}}}{20\mathrm{K}}\right)\left(\frac{\mathrm{\Delta }\nu }{5\mathrm{GHz}}\right)^{-1/2}\left(\frac{t_{\mathrm{av}}}{\mathrm{s}}\right)^{-1/2}\mathrm{mK}.$$
(27)
The rms atmospheric contribution is given by the square root of equation (25):
$$T_{\mathrm{out},\mathrm{rms}}=\left(Ah_{\mathrm{av}}^{8/3}\right)^{1/2}(\mathrm{sin}ϵ)^{-11/6}w_{xy}^{-1/2}t_{\mathrm{av}}^{-1/2}S_{\mathrm{int}}^{1/2}.$$
(28)
The value of $`S_{\mathrm{int}}`$ is determined from Figure 4, and depends on the separation of the apertures. The value of $`Ah_{\mathrm{av}}^{8/3}`$ depends on the conditions at the site, as shown in Table 1, as does the projected windspeed $`w_{xy}`$.
An ideal site should not significantly compromise the sensitivity of the experiment. We can determine the range of $`Ah_{\mathrm{av}}^{8/3}`$ for which $`T_{\mathrm{out},\mathrm{rms}}<T_{\mathrm{therm},\mathrm{rms}}`$. From Fig. 4, the maximum value of $`\mathrm{log}S_{\mathrm{int}}(d/\lambda )^{8/3}`$ is approximately $`+0.3`$, which applies when the wind is perpendicular to the baseline for an interferometer with corrugated horns that are very close together. In this example, $`d/\lambda =24.9`$, which implies $`S_{\mathrm{int}}\simeq 3.8\times 10^{-4}`$. Substituting these values into equation (28), along with a representative elevation of $`60^{\circ }`$ and a projected windspeed of 10 m s<sup>-1</sup>, we obtain
$$T_{\mathrm{out},\mathrm{rms}}\simeq 8.0\times 10^{-3}\left(Ah_{\mathrm{av}}^{8/3}\right)^{1/2}t_{\mathrm{av}}^{-1/2}\mathrm{m}^{-1/2}\mathrm{s}^{1/2}.$$
(29)
For this to be less than the thermal noise contribution we require $`Ah_{\mathrm{av}}^{8/3}<1.2\times 10^3`$ mK<sup>2</sup> m. Reference to Table 1 indicates that the South Pole should satisfy this requirement most of the time during the summer months, unless the fluctuations are higher than 2 km above the ground. At the Chile site, it appears that the instrument noise for this configuration would be dominated by atmospheric fluctuation power, regardless of the altitude of the fluctuations. It is important, however, to realize that this example was calculated for the extreme case of the wind blowing perpendicular to a baseline for which the apertures are almost touching. The situation is improved by separating the apertures to observe smaller angular scales (Fig. 4), or when the wind blows parallel to the baseline (Fig. 3f).
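The chain of numbers in this example can be re-evaluated in a few lines (a sketch that simply restates equations (27)-(29) with the inputs given above):

```python
import numpy as np

# Re-evaluating the example: corrugated horns with d/lam = 24.9, elevation
# 60 deg, w_xy = 10 m/s, and log[S_int*(d/lam)^(8/3)] ~ +0.3 from Fig. 4.
S_int = 10**0.3 / 24.9**(8.0 / 3.0)
coeff = (np.sin(np.radians(60.0))**(-11.0 / 6.0)
         * 10.0**(-0.5) * np.sqrt(S_int))            # eq. (29) prefactor
print(f"coefficient = {coeff:.1e}")                  # ~8.0e-3

Ah_max = (0.28 / coeff)**2                           # equate with eq. (27)
print(f"Ah^(8/3) threshold = {Ah_max:.2e} mK^2 m")   # ~1.2e3
```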
## 6 Summary
1. The impact of non-uniform emission from the atmosphere on measurements of the CMB is assessed. Chopped beam, swept beam and interferometric measurement schemes are analyzed.
2. The analysis is based on a model where the atmospheric fluctuations are confined to a turbulent layer. Data from phase monitors indicate that there is a transition from a three- to a two-dimensional regime on scales comparable with the thickness of the layer. The outer correlation scale of fluctuations is much larger than the layer thickness. Previous analyses have assumed a much smaller outer scale length and therefore underestimate the level of fluctuations on large scales. The impact of atmospheric fluctuations is assessed by considering the instruments as a combination of spatial and temporal filters that act on the power spectrum of the turbulence.
3. Data from the Python V experiment during summer at the South Pole are analyzed to determine the level of fluctuations. The distribution is bimodal, with stable conditions for 75% of the time and much stronger fluctuations 25% of the time.
4. During normal stable conditions the rms brightness temperature variation across a $`6^{\circ }`$ strip was less than 1 mK, at a frequency of 40 GHz. It was not possible to determine either the average altitude or the power spectrum of these very weak fluctuations.
5. The bad weather fluctuations have rms brightness temperature variations of up to 100 mK across a $`6^{\circ }`$ strip of sky, at a frequency of 40 GHz. They are only present when there is at least partial cloud cover. The windspeed as a function of altitude was used to determine the average altitude of the fluctuations. This varied from 500 m to over 4 km, and in most cases corresponded to the altitude at which the relative humidity was a maximum. The power spectrum measured for these bad weather fluctuations appears to fall off faster than the predicted Kolmogorov power law, although this may be due to the taper imposed by the primary beam. We conclude that these strong fluctuations are probably associated in some way with cloud activity.
6. Path fluctuation data from a satellite phase monitor located in the Atacama Desert in Chile were used to estimate the corresponding level of brightness temperature fluctuations at 40 GHz for that site. Although this conversion is uncertain by a factor of 2, and the result depends strongly on the unknown altitude of the turbulent layer, the two months of South Pole data indicate a significantly lower level of fluctuations compared to the Chile site, as shown in Table 1.
7. The theoretical analysis and experimental measurements were combined to predict the residual atmospheric noise that would be present at the output of a small interferometer with antennas of diameter $`25\lambda `$ that could be used to measure CMB anisotropy. It was found that the atmospheric contribution is likely to be small compared to the thermal noise when the apertures are well-separated, but that the atmosphere can dominate in cases where the edges of the apertures are separated by only a few wavelengths. For the latter case, a good site, such as the South Pole, is critical.
8. The analysis shows that the average altitude of the turbulent layer has a big impact on the suitability of a site for CMB measurements. This parameter is not well constrained by the data from either site. Measurement of this altitude should be a priority for future site testing experiments.
## Acknowledgments
The authors wish to express their gratitude to all of the members of the Python V observing team: S. R. Platt, M. Dragovan, G. Novak, J. B. Peterson, D. L. Alvarez, J. E. Carlstrom, J. L. Dotson, G. Griffin, W. L. Holzapfel, J. Kovac, K. Miller, M. Newcomb, and T. Renbarger, who provided us with the Python V data. We thank John Kovac, John Carlstrom and Bill Holzapfel for their insights and helpful discussions, and Greg Griffin and Steve Platt for providing assistance with data and code. We thank the South Pole meteorology office for providing weather data. This research was supported in part by the National Science Foundation under a cooperative agreement with the Center for Astrophysical Research in Antarctica (CARA), grant NSF OPP 89-20223. CARA is a National Science Foundation Science and Technology Center.
## References
Andreani, P., Dall’oglio, G., Martinis, L., Piccirillo, L., & Rossi, L. 1990, Infrared Phys., 30, 479
Armstrong, J. W., & Sramek, R. A., 1982, Radio Science 17, 1579
Chamberlin, R. A., Lane, A. P., & Stark, A. A. 1997, ApJ, 476, 428
Church, S. E. 1995, MNRAS, 272, 551
Coble, K., Dragovan, M., Kovac, J., Halverson, N. W., Holzapfel, W. L., Knox, L., Dodelson, S., Ganga, K., Alvarez, D., Peterson, J. B., Griffin, G., Newcomb, M., Miller, K., Platt, S. R., Novak, G. 1999, ApJ, 519, L5
Coulman, C. E., & Vernin, J., 1991, Applied Optics 30, 118
Davies, R. D., Gutierrez, C. M., Hopkins, J., Melhuish, S. J., Watson, R. A., Hoyland, R. J., Rebolo, R., Lasenby, A. N., & Hancock, S. 1996, MNRAS, 278, 883
Dragovan, M., Ruhl, J., Novak, G., Platt, S., Crone, B., Pernic, R., & Peterson, J. 1994, ApJ, 427, L67
Gaut, N. E., & Reifenstein, E. C. III, 1971, Environmental Res. and Tech. Rep. No. 13 (Lexington, Mass.)
Halverson, N. W., Carlstrom, J. E., Dragovan, M., Holzapfel, W. L., Kovac, J. 1998, In: Phillips, T. G. (Ed.), Advanced Technology MMW, Radio, and Terahertz Telescopes, Proc. SPIE, 3357, p. 416
Hogan, A., Barnard, S., Samson, J., & Winters, W. 1982, J. Geophys. Res., 87, 4287
Holdaway, M. A., Radford, S. J. E., Owen, F. N., & Foster, S. M. 1995, Millimeter Array Technical Memo No. 129
Jones, M. E., & Scott, P. F. 1998, The Very Small Array: Status Report, In: Tran Thanh Van, J., Giraud-Heraud, Y., Bouchet, F., Damour, T. & Mellier, Y. (eds.), Fundamental Parameters in Cosmology, Proceedings of the 33rd Recontres de Moriond, p. 233
King, J. C., & Turner, J. 1997, Antarctic Meteorology and Climatology (Cambridge: Cambridge University Press), 92
Kovac, J., Dragovan, M., Schleuning, D. A., Alvarez, D., Peterson, J. B., Miller, K., Platt, S. R., & Novak, G. 2000 in preparation
Lay, O. P. 1997, A&AS, 122, 535
Leitch, E. M., Readhead, A. C. S., Pearson, T. J., Myers, S. T., & Gulkis, S. 1998, ApJ, submitted (astro-ph/9807312)
Masson, C. R., 1994, Atmospheric Effects and Calibrations, In: Ishiguro, M. & Welch, W. J. (eds.), Astronomy with Millimeter and Submillimeter Wave Interferometry, ASP Conference Series Vol. 59, p. 87
Meinhold, P., & Lubin, P. 1991, ApJ, 370, L11
Miller, A. D., Caldwell, R., Devlin, M. J., Dorwart, W. B., Herbig, T., Nolta, M. R., Page, L. A., Puchalla, J., Torbet, E., Tran, H. T. 1999, ApJ, 524, L1
Netterfield, C. B., Devlin, M. J., Jarolik, N., Page, L., Wollack, E. J. 1997, ApJ, 474, 47
Platt, S. R., Kovac, J., Dragovan, M., Peterson, J. B., & Ruhl, J. E. 1997, ApJ, 475, L1
Radford, S. J. E., Reiland, G., & Shillue, B. 1996, PASP, 108, 441
Ruhl, J. E., Dragovan, M., Platt, S. R., Kovac, J., & Novak, G. 1995, ApJ, 453, L1
Scott, P. F., Saunders, R., Pooley, G., O’Sullivan, C., Lasenby, A. N., Jones, M., Hobson, M. P., Duffett-Smith, P. J., & Baker, J. 1996, ApJ, 461, L1
Sutton, E. C., & Hueckstaedt, R. M. 1996, A&AS, 119, 559
Tatarskii, V. I., 1961, Wave Propagation in a Turbulent Medium, Dover: New York
Tegmark, M., & Efstathiou, G. 1996 MNRAS, 281, 1297
Thompson, A. R., Moran, J. M., & Swenson, G., 1994, Interferometry and Synthesis in Radio Astronomy, Krieger
Torbet, E., Devlin, M. J., Dorwart, W. B., Herbig, T., Miller, A. D., Nolta, M. R., Page, L., Puchalla, J., Tran, H. T. 1999, ApJ, 521, L79
Treuhaft, R. N., & Lanyi, G. E. 1987, Radio Sci., 22 , 251
Tucker, G. S., Griffin, G. S., Nguyen, H. T., & Peterson, J. B. 1993, ApJ, 419, L45
Waters, J. W. 1976, Methods of Experimental Physics, Vol. 12B (M. L. Meeks, ed.), pp. 142-176
Wright, M. C. H. 1996, PASP, 108, 520
## I introduction
For nearly twenty years, two major theories addressing the pressing problems of the early universe, inflation and topological defects, seem to have stood the test of time. One current goal is to determine which, if any, best fits the steadily accumulating astrophysics data, especially COBE’s cosmic microwave background anisotropy detections as well as the more accurate observations from the soon-to-be-launched MAP and PLANCK missions, and more reasonably interprets the origin of the large scale structures of the Universe. They are usually regarded as mutually exclusive theories, in that defects formed before a period of inflation would be diluted so rapidly during the inflationary era as to make them of little interest to cosmology. But in some inflation models inspired by particle physics considerations, the formation of topological defects can be obtained naturally at the end of the inflationary period. On the other hand, as is widely supposed, the initial conditions for the successful hot big bang are set by inflation, and an adiabatic, Gaussian and more or less scale invariant density perturbation spectrum at horizon entry is then predicted. Such a perturbation is generated by the vacuum fluctuation during inflation, so the dazzling prospect of a window on the fundamental interactions at scales approaching the Planck energy appears. Studying the inflation paradigm will help us to understand the basic laws of physics up to possibly the highest scale in Nature.
The inflationary Universe scenario has the universe undergoing a period of accelerated expansion, the effect originally being to dilute monopoles (and any other defect formed before this period) outside of the observable universe, thereby dramatically reducing their density to below the observable limits. In a homogeneous, isotropic Universe with a flat Friedmann-Robertson-Walker (FRW) metric described by a scale factor a(t), the acceleration is given via
$$\ddot{a}=-a(4\pi G/3)(\rho +3p)$$
(1)
where $`\rho `$ is the energy density and p the pressure. Usually the energy density which drives inflation is identified with a scalar potential energy density that is positive, and flat enough to result in an effective equation of state
$$\rho \simeq -p\simeq V(\varphi )$$
(2)
satisfying the acceleration condition $`\ddot{a}>0`$.
The scalar potential is associated with a scalar field known as the inflaton. During the inflationary period, the inflaton potential is fairly flat in the direction the field evolves, dominating the energy density of the universe for a finite period of time. Over this period it evolves slowly towards a minimum of the potential, either through a classical roll-over or through a quantum mechanical tunnelling transition. Inflation then ends when the inflaton starts to execute decaying oscillations around its vacuum value, and the hot Big Bang (reheating) ensues when the vacuum value has been reached and the decay products have thermalised. Over the past decades many inflation models have been constructed, and there will certainly be more with the coming of the MAP and PLANCK satellite missions. With our knowledge so far, we understand that any reasonable inflation model should satisfy at least the COBE normalization, the observational cosmology constraint on the spectral index, and adequate e-folds of inflation for consistency. In this line, the running-mass models of inflation without a quartic term have been studied, and we will discuss a general tree-level inflation model, especially the constraint on the quartic self-coupling constant.
This paper is arranged as follows. In the next section we give general comments on the properties of the inflaton potential, which must satisfy the COBE normalization condition besides the flatness conditions. In section three we examine in detail a tree-level hybrid inflation potential, which can be regarded as a generalization of some previously fully discussed inflation models. We use the slow-roll approximation to derive an analytic expression for the e-folds number N between a given epoch and the end of slow-roll inflation, and derive the spectral index of the spectrum of the curvature perturbation of this model. Confronting these with the COBE measurement of the spectrum on large scales (the normalization), the required e-folds number N and the observational constraint on the spectrum over the whole range of cosmological scales, we give an allowed region for the coupling constant of the inflaton model, specifying its parameter space by reducing its two free parameters to one. Finally we give a discussion and conclusion.
## II general considerations on inflation models
Cosmological inflation has been regarded as the most elegant solution to the horizon and flatness problems of the standard Big Bang universe. If the later-stage thermal inflation is also considered, it beautifully solves the moduli problem as well. Even though it successfully explains why the current Universe appears so homogeneous and flat in a very natural manner, it has been difficult to construct a model of inflation without a small parameter (the fine-tuning problem). The key point is to obtain a reasonable scalar field potential, either from a more underlying gravity theory like effective superstring theory or from a more fundamental particle physics theory such as supergravity or the so-called M-theory. In fact, in any case one needs at least one scalar field component (the inflaton) that rolls down the potential very slowly, with a large enough e-folds number, to successfully generate a viable inflationary scenario. This requires the potential to be almost flat in the direction of the inflaton. Many inflation models have been constructed so far. If the gravitational wave contribution is negligible, as is at least the case at present, the curvature perturbation spectral index is the most powerful discriminator between inflation models. In this section we discuss the general properties of an inflaton potential in the light of astrophysical considerations. In the effective slow-roll inflation scheme, physics requires that the general inflaton potential $`V(\varphi )`$ satisfy the flatness conditions
$$ϵ\ll 1$$
(3)
and
$$|\eta |\ll 1$$
(4)
where the notations
$`ϵ`$ $`\equiv `$ $`{\displaystyle \frac{1}{2}}M_{\mathrm{Pl}}^2(V^{\prime }/V)^2`$ (5)
$`\eta `$ $`\equiv `$ $`M_{\mathrm{Pl}}^2V^{\prime \prime }/V`$ (6)
the prime indicates differentiation with respect to $`\varphi `$ and $`M_{\mathrm{Pl}}=(8\pi G)^{-1/2}=2.4\times 10^{18}\text{GeV}`$ is the reduced Planck mass (scale). When these are satisfied, the time dependence of the inflaton $`\varphi `$ is generally given by the slow-roll expression
$$3H\dot{\varphi }=-V^{\prime },$$
(7)
where the quantity
$$H\simeq \sqrt{\frac{V}{3M_{\mathrm{Pl}}^2}}$$
(8)
is the Hubble parameter during inflation. On a given scale, the spectrum of the primordial curvature perturbation, thought to be the origin of structure in the Universe, is given by
$$\delta _H^2(k)=\frac{1}{150\pi ^2M_{\mathrm{Pl}}^4}\frac{V}{ϵ}$$
(9)
The right hand side is evaluated when the relevant scale $`k`$ leaves the horizon. On large scales, the COBE observation of the Cosmic Microwave Background(CMB) anisotropy corresponds to
$$V^{1/4}/ϵ^{1/4}=.027M_{\mathrm{Pl}}=6.7\times 10^{16}\text{GeV}$$
(10)
The spectral index of the primordial curvature perturbation is given by
$$n-1=2\eta -6ϵ$$
(11)
A perfectly scale-independent spectrum would correspond to $`n=1`$, and observation already demands
$$|n-1|<0.2$$
(12)
Thus $`ϵ`$ and $`\eta `$ have to be $`\lesssim 0.1`$ (barring a cancellation) and this constraint will get much tighter if future observations, such as the near-future MAP and PLANCK satellite missions, move $`n`$ closer to 1. Many models of inflation predict that this will be the case, some giving a value of $`n`$ completely indistinguishable from 1.
Usually, $`\varphi `$ is supposed to be charged under at least a $`Z_2`$ symmetry, that is, the system is unchanged under
$$\varphi \to -\varphi $$
(13)
which is unbroken during inflation. Then $`V^{\prime }=0`$ at the origin, and inflation typically takes place near the origin. As a result $`ϵ`$ is negligible compared with $`\eta `$, and
$$n-1=2\eta =2M_{\mathrm{Pl}}^2V^{\prime \prime }/V$$
(14)
We assume that this is the case, as in most inflation models, in what follows. If it is not, inflation model-building in the slow-roll approximation is even more tricky. It should also be clear that the possible later-stage thermal inflation lasts only a few e-folds and occurs at a lower energy scale, around the electroweak symmetry breaking scale, so it hardly affects the extremely successful hot Big Bang nucleosynthesis, which takes place at the MeV scale. It occurs while a lighter scalar field (with mass around 100 GeV), having a nonzero vacuum expectation value, is trapped by thermal effects in the false vacuum at zero field value. Models with more components only make the physical picture more complex. In this paper we consider two scalar fields, one of which is the Higgs-like scalar field that ensures a graceful exit after inflation ends and contributes to the vacuum energy expectation value when taking a critical value; the other, explicit one is the inflaton, which drives the (exponential) inflationary expansion of the cosmos.
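For later reference, the slow-roll diagnostics above are easy to evaluate for any candidate potential. The sketch below works in reduced Planck units and uses finite differences; the example potential and its numbers are illustrative assumptions, not a fit.

```python
# Generic slow-roll diagnostics in reduced Planck units (M_Pl = 1), using
# central finite differences; a sketch, not tied to a particular model.
def slow_roll(V, phi, h=1e-3):
    Vp = (V(phi + h) - V(phi - h)) / (2.0 * h)
    Vpp = (V(phi + h) - 2.0 * V(phi) + V(phi - h)) / h**2
    eps = 0.5 * (Vp / V(phi))**2        # eq. (5) with M_Pl = 1
    eta = Vpp / V(phi)                  # eq. (6)
    return eps, eta, 1.0 + 2.0 * eta - 6.0 * eps   # n from eq. (11)

# Example: a vacuum-dominated potential with assumed (illustrative) numbers.
V = lambda phi: 1e-10 + 1e-12 * phi**2 / 2 + 1e-11 * phi**4 / 4
print(slow_roll(V, 0.1))   # small eps and |eta|: slow roll holds here
```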
## III A tree-level hybrid inflation model
In this section we present the allowed parameter regions, not given before, for a particular vacuum-dominated potential. A potential of similar form, differing only in signs, appears in a supersymmetric particle physics model. The vacuum-dominated potential we consider has the usual form, of dimension four with renormalizability in mind,
$$V=b(M^2-h^2)^2/2+m^2\varphi ^2/2+\lambda \varphi ^4/4,$$
(15)
where b is a coupling constant, m is the $`h`$-expectation-value dependent mass parameter of the inflaton, and h is the Higgs-like scalar field responsible for the graceful exit, as in the new inflation scenario. If the expectation value of h equals M during the evolution, the model turns into the general chaotic inflationary potential; if not, it is the usual hybrid inflation model with the resulting $`V_0`$ as the dominating vacuum term,
$$V=V_0+m^2\varphi ^2/2+\lambda \varphi ^4/4.$$
(16)
with all parameters in it positive; this can be regarded as a generalization of previously fully discussed inflation models. For symmetry reasons we discard the cubic term. Higher-order inflaton terms may appear in some SUSY particle physics effective models. There are two particular limits of vacuum-energy inflation, according to whether the energy density is dominated by the vacuum energy density or by the inflaton energy density. We assume the former in our case, as preferred in the slow-roll approximation. With the COBE normalization $`V^{1/4}/ϵ^{1/4}=.027M_{\mathrm{Pl}}`$ and $`ϵ\equiv \frac{1}{2}M_{\mathrm{Pl}}^2(V^{\prime }/V)^2`$, in our case the false vacuum energy density is
$$V_0\simeq 0.3^4M_{\mathrm{Pl}}^2(m^2\varphi _1+\lambda \varphi _1^3)^{2/3}/2^{1/3}.$$
(17)
where $`\varphi _1`$ is the inflaton value when the COBE scale leaves the horizon. To reduce the number of free parameters we define
$$y_1=m^2/\varphi _1^2.$$
(18)
By the cosmological observational constraint on the power spectral index,
$$|n-1|/2=\eta =M_{\mathrm{Pl}}^2V^{\prime \prime }/V<0.1$$
(19)
and if we treat today’s observational upper limit 0.1 as a parameter x that may change, we have
$$\eta =M_{\mathrm{Pl}}^2(m^2+3\lambda \varphi ^2)/V_0<x.$$
(20)
Inserting the potential form into equation (19) and using our definition of $`y_1`$, we can define a function of $`y_1`$ satisfying
$$f(y_1)=\frac{y_1+3\lambda }{(y_1+\lambda )^{2/3}}<x\cdot 0.3^4/2^{1/3}.$$
(21)
In this expression, the parameter $`x`$, as an observational input, runs from today’s 0.1 down to, hopefully, 0.01 from the planned MAP and PLANCK satellite missions in the near future. We easily find
$$df/dy_1=(y_1-3\lambda )(y_1+\lambda )^{-5/3}/3$$
(22)
For $`y_1>3\lambda `$ we have $`df/dy_1>0`$, while for $`y_1<3\lambda `$ we have $`df/dy_1<0`$, so that gives
$$f_{min}(3\lambda )=3(\lambda /2)^{1/3}$$
(23)
which, taking relation (21) into account, corresponds to
$$\lambda _{max}<1.97\times 10^{-8}x^3$$
(24)
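This minimization is easy to confirm numerically; in the sketch below the value of $`\lambda `$ is arbitrary.

```python
import numpy as np

# Check that f(y) = (y + 3*lam)/(y + lam)**(2/3) is minimized at y = 3*lam
# with f_min = 3*(lam/2)**(1/3); the value of lam is arbitrary here.
lam = 1.0e-11
y = np.logspace(-13, -8, 4001)
f = (y + 3.0 * lam) / (y + lam)**(2.0 / 3.0)
print(y[np.argmin(f)], 3.0 * lam)                 # minimum sits near 3*lam
print(f.min(), 3.0 * (lam / 2.0)**(1.0 / 3.0))    # matches eq. (23)
```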
Using the obvious inequality
$$y_1+\lambda <y_1+3\lambda $$
(25)
together with relation (21), we directly have
$$y_1+\lambda <x^3\cdot 0.027^4/2.$$
(26)
which together give the allowed parameter regions for the quartic self-coupling constant $`\lambda `$ and the reduced mass parameter, with various observed (or to-be-obtained) values of the parameter x as inputs. Taking today’s upper limit $`x=0.1`$, we find a quartic self-coupling constant of order $`O(10^{-11})`$, far too small. The reduced mass parameter, in contrast, is normal: inflation starts at an inflaton field value around 0.1$`M_{\mathrm{Pl}}`$, which gives the inflaton an effective mass of about 1000 GeV.
With the e-folds number constraint required to overcome the horizon and flatness problems,
$$N=\left|\int _{\varphi _1}^{\varphi _c}\frac{V}{V^{\prime }}𝑑\varphi \right|/M_{\mathrm{Pl}}^2.$$
(27)
where $`\varphi _c`$ is the inflaton value at the end of inflation. Inserting the potential form and the COBE normalization, we get another relation for the reduced mass parameter, defined as
$$y_c=m^2/\varphi _c^2=(\lambda +y_1)\mathrm{exp}\left(\frac{10^3Ny_1}{3.6(\lambda +y_1)^{2/3}}\right)-\lambda .$$
(28)
which limits the allowed parameters, with the astrophysically required e-folds number $`N`$ and the spectral index parameter x at today’s observationally required value as inputs. We can see that the limiting cases are consistent with the previous results of numerical calculations. The self-coupling constant is very tiny, and the fine-tuning problem appears. (We can also obtain directly from the above relation curves of the reduced parameter $`y_c`$ versus the parameter $`\lambda `$.) It is clear that the reduced parameter $`y_c`$ is approximately linear in $`\lambda `$ when the exponential factor is of order one.
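A sketch of how such curves can be generated from equation (28) is given below. The choice $`y_1=3\lambda `$ (the minimum of $`f`$ found above) and the scanned range of $`\lambda `$ are illustrative assumptions.

```python
import numpy as np

# Scanning eq. (28) for y_c as a function of lambda; x and N are inputs,
# y_1 = 3*lambda is one representative (f-minimizing) choice.
x, N = 0.1, 60
lam_max = 1.97e-8 * x**3                      # from the bound above
for lam in np.logspace(-14, np.log10(lam_max), 5):
    y1 = 3.0 * lam
    yc = (lam + y1) * np.exp(1e3 * N * y1
                             / (3.6 * (lam + y1)**(2.0 / 3.0))) - lam
    print(f"lambda = {lam:.2e}   y_c = {yc:.2e}")
```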
Taking into account the power spectral index constraint (19) and the relation (21) together with the above equation, we can get a constraint relation, i.e. approximately allowed regions for the reduced parameters $`y_c`$ and $`\lambda `$:
$$y_c+\lambda <(y_1+\lambda )\mathrm{exp}\left(1000N(y_1+\lambda )^{1/3}/3.608\right)<2.657\times 10^{-7}x^3\mathrm{exp}(1.6518Nx),$$
(29)
From this, the approximately allowed parameter regions for $`\lambda `$ vs $`y_c`$, with x from 0.1 down to a possible future 0.01, are straightforwardly worked out.
If we put the power spectral index expression and the e-folding expression together, we find
$$|n-1|=N^{-1}2^{1/6}(1+3\lambda /y_1)\mathrm{ln}\left(1+\frac{y_c-y_1}{\lambda +y_1}\right).$$
(30)
qualitatively, that is roughly
$$|n-1|\sim N^{-1}.$$
(31)
which implies that these two constraints have an intrinsic connection. When $`|n-1|<0.1`$, as available from today’s cosmological observations, then $`N\gtrsim 10`$. If $`|n-1|<0.01`$, as the coming satellite missions of MAP and PLANCK are designed to hopefully give, then $`N\gtrsim 100`$. This kind of generic character, as expressed by relation (31), appears in a class of dynamical supersymmetry breaking particle physics models.
## IV discussion and conclusion
Models of inflation driven by a false vacuum formed by the Higgs-like scalar field differ from true-vacuum cases mainly in their nonzero false vacuum energy density; they are simple but can nonetheless reflect the astrophysical observations. We have discussed a regular tree-level hybrid inflation model, whose special case is the general chaotic inflation model (not a toy model), to show how to constrain its parameters when confronting data and cosmological consistency requirements, and have given several new parameter relations and allowed regions, which can be used as a prototype model for tests by the two planned Microwave Anisotropy Probe (MAP) and PLANCK satellite missions. The results we have obtained are based on the reasonable assumption that the spectral index constraint we concentrate on naturally satisfies the flatness conditions. Otherwise the slow-roll approximation is not applicable.
The origin of this tree-level hybrid inflation model, or of its more complicated extensions, may lie in some kind of supersymmetric particle physics or supergravity model, which is generally considered the appropriate framework for a description of the fundamental interactions at higher energy scales, and in particular for the description of their scalar interaction potential in the D-term and the gauge coupling F-term. Here we only study their essence, in order to obtain more viable inflation models from supersymmetric particle physics and supergravity as well as superstring (M-)theories. No matter what kind of theoretical model is to be built, it must satisfy at least the above observations, especially the spectral index constraint at x=0.01. The parameter space in our case is then very tiny, which demands that we build inflation models in a more natural way so as to avoid the fine-tuning problem; this is a challenge facing us. Within the next few years, with the rapid increase in the variety and accuracy of cosmological observation data, like the measurements of temperature anisotropies in the cosmic microwave background at the accuracy expected from MAP and PLANCK, it will become possible for us to discriminate among inflation models.
As a flood of high-quality cosmological data is coming from experiments in outer space, on earth and underground, even in the sea or at the bottom of the sea, we are really entering the era in which observations constrain theory. There are mainly six projects on the way. Here we only discuss the one most closely related to our topic, inflation model building:
The CMB map of the Universe. COBE mapped the CMB with an angular resolution of around $`10^{\circ }`$; two new satellite missions, NASA’s MAP (launch in 2000) and ESA’s PLANCK Surveyor (launch in 2007), will map the CMB with 100 times better resolution ($`0.1^{\circ }`$), providing a detailed map of our Universe at 300,000 years. From these maps of the Universe as it existed at a simpler time, long before the first stars and galaxies, will come a gold mine of information: a definitive measurement of the matter fraction of the Universe today; a characterization of the primeval lumpiness and a possible detection of the relic gravity waves from inflation; as well as a determination of the Hubble constant to a precision of better than 0.05. Direct measurements of the expansion rate using standard candles, gravitational time delay and SZ imaging, besides the precision CMB map, will pin down the elusive Hubble constant once and for all. It is the fundamental parameter that sets the size, in time (the puzzling Universe age problem) and in space, of the Universe. Its value is critical to testing the self-consistency of early Universe models. After all, the precision maps of the CMB that will be made are crucial to establishing the astroparticle physics inflation theory.
For the past two decades the hot big bang model has been referred to as the standard cosmology, and for good reasons. For just as long, particle cosmologists have known that there are fundamental questions not answered by the standard cosmology which point to a grander theory. The best candidate for that grander theory is a viable particle physics inflation theory plus dark components (dark energy, hot and cold dark matter) as the dominant contents of the Universe. It holds that the Universe is flat, that slowly moving elementary particles left over from the earliest moments provide the cosmic infrastructure, and that the primeval density inhomogeneities that seed all the structure arose from quantum fluctuations. There is now much prima facie evidence that supports the two basic tenets of this paradigm. An avalanche of high-quality cosmological observations will soon make this case stronger or even break it. Key questions remain to be answered; foremost among them are: a viable inflation model to be built, the elucidation of the dark-energy component, and the identification as well as detection of the cold dark matter particles. The next two decades, at least, will be exciting times in particle physics cosmology, with the planned astrophysics experiments continuously under way.
Acknowledgements: The author thanks Laura Covi, Robert Brandenberger, Ilia Gogoladze, Christopher Kolda, Xueqian Li, David Lyth, Leszek Roszkowski, Lewis Ryder, Goran Senjanovic, Roy Tung and Xinmin Zhang for many useful discussions on this topic during his work in Lancaster, UK and his visit to the Abdus Salam ICTP, Italy. He also benefited greatly from comments on his related work by R. Brandenberger, D. Lyth and X. Zhang. This work is partially supported by grants from the overseas research scholarship of the China National Education Ministry and from the Natural Science Foundation of China: the important project for theoretical physics key field research: Particle Physics Cosmology.
# Difficult Problems Having Easy Solutions
## Abstract
We discuss how a class of difficult kinematic problems can play an important role in an introductory course in stimulating students’ reasoning on more complex physical situations. The problems presented here have an elementary analysis once certain symmetry features of the motion are revealed. We also explore some unexpected directions to which these problems lead.
Key Words: Plane Motion, Central Force Motion PACS numbers: 01.40.-d, 01.40.Fk, 01.55.+b.
There is a class of kinematical problems whose general analysis lies beyond the level of undergraduate students but which nonetheless can play an important role in introductory courses. This happens because the problems involve certain symmetry features that allow for an easy solution once the symmetry is revealed.
What makes these problems worth mentioning here is that they are particularly useful in stimulating students’ reasoning on richer physical situations.
Consider this typical example. A boat crosses a river of width l with velocity of constant magnitude u, always aimed toward a point S on the opposite shore directly across from its starting position. If the river also runs with uniform velocity u, how far downstream from S does the boat reach the opposite shore? Fig. 1 depicts the situation when the boat is at point B in its path RV toward the opposite shore. Its velocity vector along BS makes an angle $`\theta `$ with RS. The river velocity is represented along TB.
The (easy) solution comes from the observation that in a time interval $`\mathrm{\Delta }t`$ the distance BS of the boat to the point S on the opposite shore diminishes by $`(u-u\mathrm{cos}\theta )\mathrm{\Delta }t`$, while, at the same time, its distance TB to T increases by the same amount, thus keeping the sum $`BS+TB`$ constant throughout the trip along RV. Since it is clear that at the starting position $`BS=l`$ and $`TB=0`$, while at the final position $`BS=BT=d`$, which is the distance downstream, we conclude that $`d=l/2`$ (See for further examples).
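As a quick numerical check of this result (an illustration added here, not part of the original analysis; the unit values of $`u`$, $`l`$ and the step size are arbitrary choices), one can integrate the boat's kinematics directly:

```python
import math

# Boat aimed at S with speed u, river current of equal speed u along +x.
# The symmetry argument predicts a downstream landing distance d = l/2.
u = 1.0       # boat speed = river speed (arbitrary units)
l = 1.0       # river width (arbitrary units)
dt = 1e-4     # integration time step

x, y = 0.0, 0.0            # boat starts at R = (0, 0); S = (0, l)
while l - y > 1e-6:        # the far shore is approached asymptotically
    dx, dy = -x, l - y     # vector from the boat B toward S
    dist = math.hypot(dx, dy)
    vx = u * dx / dist + u # heading toward S plus downstream drift
    vy = u * dy / dist
    x += vx * dt
    y += vy * dt

print(f"downstream landing distance d = {x:.4f} (expected l/2 = {l/2})")
```

The printed distance converges to $`l/2`$ as the shore tolerance is tightened, in agreement with the symmetry argument.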
The importance of this example lies in its far-reaching scope. For instance, many problems in central force motion (planetary motion) can be formulated in similar terms, and the same kind of analysis applies as well. Note the similarity we obtain if we turn the boat of the example into a comet describing a parabolic orbit around a sun S at the focus, as in Fig. 2. In this case, it is the velocity components along TB and normal to SB that are constant throughout the orbit (compare with Fig. 1). As a result we learn that now the sum of the distances $`SB+BP`$ is constant along the comet’s orbit. This fact defines the parabolic orbit and, with $`SB=r`$ and $`BP=r\mathrm{cos}\theta `$, yields its equation in polar coordinates as $`r(1+\mathrm{cos}\theta )=\mathrm{const}.`$
We can go still further and realize that a parabolic orbit is unstable under the influence of possible nearby planets, and may easily be converted into an ellipse or a hyperbola, according to whether the velocity of the comet decreases or increases as a result of the planetary perturbation. When this happens, the pattern of constant velocity components, $`u`$ parallel to a fixed direction (as BT) and $`v`$ normal to the radius vector BS, is preserved in the new orbit. The difference is that these two velocities will no longer be equal to each other, and the ratio $`u/v`$ will determine the shape of the orbit (though not its size), the orbit being an ellipse when $`u/v<1`$ and a hyperbola when $`u/v>1`$.
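The claimed decomposition can also be checked numerically. The sketch below (our illustration; the units $`GM=1`$ and the eccentricity $`e=0.5`$ are arbitrary assumptions) integrates a bound Kepler orbit starting at perihelion on the x-axis, so that the fixed direction is the y-axis, and solves at several points for the component $`u`$ along that fixed direction and the component $`v`$ normal to the radius vector:

```python
import numpy as np

GM, e = 1.0, 0.5
r0 = 1.0                         # perihelion distance
v0 = np.sqrt(GM * (1 + e) / r0)  # perihelion speed for eccentricity e

pos = np.array([r0, 0.0])
vel = np.array([0.0, v0])        # fixed direction is +y (normal to the apsidal line)
dt, ratios = 1e-4, []
for step in range(200000):
    # leapfrog (kick-drift-kick) integration of Newtonian gravity
    acc = -GM * pos / np.linalg.norm(pos) ** 3
    vel_half = vel + 0.5 * dt * acc
    pos = pos + dt * vel_half
    acc = -GM * pos / np.linalg.norm(pos) ** 3
    vel = vel_half + 0.5 * dt * acc
    if step % 20000 == 0:
        theta = np.arctan2(pos[1], pos[0])
        e_theta = np.array([-np.sin(theta), np.cos(theta)])  # normal to the radius
        # solve vel = u * (0,1) + v * e_theta for (u, v)
        A = np.column_stack(([0.0, 1.0], e_theta))
        if abs(np.linalg.det(A)) > 1e-3:  # skip the degenerate apsidal points
            u, v = np.linalg.solve(A, vel)
            ratios.append(u / v)

print(np.round(ratios, 4))  # all ~0.5 = e: the ratio u/v fixes the orbit's shape
```

Both components come out constant along the orbit, and their ratio $`u/v`$ equals the eccentricity, consistent with the classification stated above.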
We leave it to the reader to explore other features of planetary motion opened up by our example.
|
no-problem/9905/hep-ph9905321.html
|
ar5iv
|
text
|
# Combined analysis of diffractive and inclusive structure functions in the semiclassical framework
## Abstract
Small-$`x`$ DIS is described as the scattering of a partonic fluctuation of the photon off a superposition of target color fields. Diffraction occurs if the emerging partonic state is in a color singlet. Introducing a specific model for the averaging over all relevant color field configurations, both diffractive and inclusive parton distributions at some low scale $`Q_0^2`$ can be calculated. A conventional DGLAP analysis results in a good description of diffractive and inclusive structure functions at higher values of $`Q^2`$.
At this workshop, several numerical analyses of recent precise measurements of the diffractive structure function have been reported. In this contribution, a combined description of inclusive and diffractive DIS in the semiclassical framework is discussed.
From the target rest frame point of view, leading order diffractive DIS is the color singlet production of a $`q\overline{q}`$ pair, as shown on the l.h. side of Fig. 1a. The process is dominated by kinematic configurations corresponding to Bjorken’s aligned jet model, i.e., one of the quarks carries most of the photon’s longitudinal momentum and the transverse momenta are small. The dependence of the cross section on the target color field is encoded in the expression
$$\int d^2x_{\perp}\,\text{tr}W_{x_{\perp}}(y_{\perp})\,\text{tr}W_{x_{\perp}}^{\dagger}(y_{\perp}^{\prime}),$$
(1)
where the function
$$W_{x_{\perp}}(y_{\perp})=U(x_{\perp})U^{\dagger}(x_{\perp}+y_{\perp})-1$$
(2)
is built from two SU(3) matrices, $`U`$ and $`U^{\dagger}`$, corresponding to the non-Abelian phase factors picked up by the quark and antiquark penetrating the color field at transverse positions $`x_{\perp}`$ and $`x_{\perp}+y_{\perp}`$.
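To make the algebraic structure of Eqs. (1) and (2) concrete, the following toy computation (purely illustrative; Haar-random SU(3) matrices stand in for the eikonal factors of an unspecified target field configuration) evaluates the color-singlet combination $`|\text{tr}W|^2`$, i.e. the integrand of Eq. (1) for $`y_{\perp}^{\prime}=y_{\perp}`$:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_su3():
    # Haar-random SU(3) matrix via QR decomposition of a complex Gaussian matrix
    z = (rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))  # fix the U(1) phases
    return q / np.linalg.det(q) ** (1 / 3)            # enforce det = 1

# eikonal factors picked up at transverse positions x_perp and x_perp + y_perp
U1, U2 = haar_su3(), haar_su3()
W = U1 @ U2.conj().T - np.eye(3)          # Eq. (2)
amp = np.trace(W) * np.trace(W).conjugate()  # |tr W|^2, integrand of Eq. (1)
print(amp.real)
```

Averaging such samples over many field configurations is the analogue of the color field averaging discussed below.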
In the Breit frame, leading order diffractive DIS is most naturally described by photon-quark scattering, with the quark coming from the diffractive parton distribution of the target hadron. This is illustrated on the r.h. side of Fig. 1a. Identifying the leading twist part of the $`q\overline{q}`$ pair production cross section (l.h. side of Fig. 1a) with the result of the conventional partonic calculation (r.h. side of Fig. 1a), the diffractive quark distribution of the target is expressed in terms of the color field dependent function given in Eq. (1).
Similarly, the cross section for the color singlet production of a $`q\overline{q}g`$ state (l.h. side of Fig. 1b) is identified with the boson-gluon fusion process based on the diffractive gluon distribution of the target (r.h. side of Fig. 1b). This allows for the calculation of the diffractive gluon distribution in terms of a function similar to Eq. (1) but with the $`U`$ matrices in the adjoint representation.
In the semiclassical approach, the cross sections for inclusive DIS are obtained from the same calculations as in the diffractive case where, however, the color singlet condition for the final state parton configuration is dropped. As a result, the $`q\overline{q}`$ production cross section (cf. the l.h. side of Fig. 1a) receives contributions from both the aligned jet and the high-$`p_{\perp}`$ region. In the latter, the logarithmic $`dp_{\perp}^{2}/p_{\perp}^{2}`$ integration gives rise to a $`\mathrm{ln}Q^2`$ term in the full cross section.
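Schematically (our notation; $`\mu ^2`$ denotes a generic infrared scale separating the aligned jet region), the logarithm arises from

$$\int_{\mu^2}^{Q^2}\frac{dp_{\perp}^{2}}{p_{\perp}^{2}}=\mathrm{ln}\frac{Q^2}{\mu^2}.$$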
In the leading order partonic analysis, the full cross section is described by photon-quark scattering. The gluon distribution is responsible for the scaling violations at small $`x`$, $`\partial F_2(x,Q^2)/\partial \mathrm{ln}Q^2\sim xg(x,Q^2)`$. Thus, the semiclassical result for $`q\overline{q}`$ production, with its $`\mathrm{ln}Q^2`$ contribution, is sufficient to calculate both the inclusive quark and the inclusive gluon distribution. The results are again expressed in terms of the function in Eq. (1), where now the color trace is taken after the two $`W`$ matrices (corresponding to the amplitude and its complex conjugate) have been multiplied.
To obtain explicit formulae for the above parton distributions, a model for the averaging over the color fields, which underlie the eikonal factors in Eq. (2), has to be introduced. In the case of a very large hadronic target, such a model is naturally obtained from the observation that, even in the aligned jet region, the transverse separation of the $`q\overline{q}`$ pair remains small. This is a result of the saturation of the dipole cross section at smaller dipole size. Under the additional assumption that color fields in distant regions of the large target are uncorrelated, a simple Glauber-type exponentiation of the averaged local field strength results in explicit formulae for all the relevant functions of the type shown in Eq. (1).
Thus, diffractive and inclusive quark and gluon distributions at some small scale $`Q_0^2`$ are expressed in terms of only two parameters, the average color field strength and the total size of the large target hadron. The energy dependence arising from the large-momentum cutoff applied in the process of color field averaging cannot be calculated from first principles. It is described by a $`\mathrm{ln}^2x`$ ansatz, consistent with unitarity, which is universal for both the inclusive and diffractive structure function. This introduces a further parameter, the unknown constant that comes with the logarithm.
A conventional leading order DGLAP analysis of data at small $`x`$ and $`Q^2>Q_0^2`$ results in a good four-parameter fit ($`Q_0`$ being the fourth parameter) to both the inclusive and diffractive structure function. Diffractive data with $`M^2<4\text{GeV}^2`$ are excluded from the fit since higher twist effects are expected to affect this region. As an illustration, the $`\beta `$ dependence of $`F_2^{D(3)}`$ at different values of $`Q^2`$ is shown in Figs. 2 and 3 (see for further plots, in particular of the inclusive structure function, and more details of the analysis).
Finally, two important qualitative features of the approach should be emphasized. First, the diffractive gluon distribution is much larger than the diffractive quark distribution, a result reflected in the pattern of scaling violations of $`F_2^{D(3)}`$. This feature is also present in the analysis of , where, in contrast to the present approach, the target is modelled as a small color dipole. Second, the inclusive gluon distribution, calculated from $`q\overline{q}`$ pair production at high $`p_{\perp}`$ and determined by the small-distance structure of the color field, is large and leads to the dominance of inclusive over diffractive DIS.
|
no-problem/9905/astro-ph9905045.html
|
ar5iv
|
text
|
# The Galaxy Luminosity and Selection Functions of the NOG sample
## 1. Introduction
We use the Nearby Optical Galaxy (NOG) sample to reconstruct the galaxy density field in the local universe. This sample is an all-sky, magnitude-limited sample of nearby galaxies (with $`cz<5500`$ km/s), which is nearly complete down to the limiting total corrected blue magnitude $`B=14`$ mag and comprises 6392 galaxies, of which 2789 objects are members of galaxy systems (with at least three members). The completeness level of the NOG sample limited to $`|b|>15^{}`$ (5832 galaxies) is estimated to be $`\sim `$80%.
The redshift-dependent distances of the field galaxies and galaxy systems have been corrected for non-cosmological motions by means of peculiar velocity field models (Marinoni et al. 1998a). Specifically, we employed two independent models: i) a semi-linear approach which uses a multi-attractor model (with Virgo, Great Attractor, Perseus-Pisces Supercluster and Shapley Concentration) fitting the Mark III peculiar velocity catalog (Willick et al. 1997); ii) a modified version of the optical cluster 3D-dipole reconstruction scheme by Branchini & Plionis (1996).
## 2. The Total Galaxy Luminosity Function
Adopting Turner’s (1979) method we evaluate the total galaxy luminosity function (LF) for field and grouped galaxies, using their location in real distance space. Since the NOG sample comprises both bright and nearby galaxies, systematic errors in the determination of the LF are likely to be minimized.
We find that the galaxy LF is well described by a Schechter function with $`\alpha \simeq -1.1`$, a low normalization factor $`\mathrm{\Phi }^{*}\simeq 0.006`$ Mpc<sup>-3</sup>, and a particularly bright characteristic magnitude $`M_B^{*}\simeq -20.7`$ ($`H_0=75\text{ km s}^{-1}\text{ Mpc}^{-1}`$) (see Marinoni et al. 1998b for details). Our $`M_B^{*}`$ value is brighter, on average, by 0.4 mag than previous estimates because we refer to total magnitudes, corrected for Galactic extinction, internal extinction, and K-dimming, which better represent the galaxy light.
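For reference, the fitted form can be evaluated directly. The sketch below (our illustration; the per-magnitude normalization convention is the standard one and should be checked against the authors' definition) uses the quoted best-fit parameters:

```python
import numpy as np

# Schechter luminosity function in absolute magnitudes:
# phi(M) dM = number density of galaxies in [M, M + dM]
def schechter(M, alpha=-1.1, M_star=-20.7, phi_star=0.006):  # NOG fit values
    x = 10.0 ** (0.4 * (M_star - M))        # luminosity ratio L / L*
    return 0.4 * np.log(10) * phi_star * x ** (alpha + 1) * np.exp(-x)

M = np.arange(-22.5, -16.0, 0.5)
for m, phi in zip(M, schechter(M)):
    print(f"M_B = {m:6.1f}   phi = {phi:.2e}  Mpc^-3 mag^-1")
```

The exponential cutoff brighter than $`M_B^{*}`$ and the near power-law faint end controlled by $`\alpha `$ are evident in the printed values.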
The exact values of the Schechter parameters of the LF depend slightly on the adopted velocity field model (see Fig. 1), but peculiar motion effects are of the order of the statistical errors; at most, they cause variations of 0.08 in $`\alpha `$ ($`1\sigma `$ error) and 0.2 mag ($`2\sigma `$ error) in $`M_B^{*}`$.
The presence of galaxy systems in the NOG sample does not affect the field galaxy LF significantly. Environmental effects on the total LF prove to be marginal. The LF of the galaxy members of the richest systems tends to show a slightly brighter value of $`M_B^{*}`$, which gives some evidence of luminosity segregation with density.
## 3. The Morphological–Type Dependence of the Luminosity Function
We also evaluate the morphological type-specific LFs. The morphological types are available for almost all NOG galaxies.
The LF of E+S0 galaxies does not differ significantly from that of spirals. But the E galaxies clearly decrease in number towards low luminosities (with $`\alpha \simeq -0.5`$), whereas the number of late-type spirals and irregulars rises steeply towards the faint end (with $`\alpha \simeq -2.3`$ to $`-2.4`$). This behaviour hints at an upturn of the total LF at the unexplored faint end (at $`M_B>-15`$). In Fig. 2 we show a comparison of our type-specific LFs with those obtained from the CfA2 (Marzke et al. 1994) and the Stromlo-APM (Loveday et al. 1992; see also Driver, Windhorst & Griffiths 1995) samples.
As regards the morphological type dependence of the LF, our results agree better with those derived from the CfA2 and SSRS2 (Marzke et al. 1998) samples than with the ones obtained from the Stromlo-APM survey. Moreover, the dependence of the LF on morphological type differs appreciably from its dependence on the galaxy spectral classification as given by the LCRS (Bromley et al. 1998) and the Autofib redshift survey (Heyl et al. 1997).
## 4. Local Galaxy Density and Environmental Effects
Since it underpredicts the observed galaxy number counts (e.g., Ellis 1997) at bright magnitudes ($`B\lesssim 18`$ mag), where little galaxy evolution is observed for the bulk of the galaxy population, our relatively low local normalization, which cannot be biased low by photometric problems or by incompleteness of the sample, suggests that the nearby universe is underdense in galaxies (by a factor $`\sim 1.5`$).
Although the galaxy LF, as well as the intimately related selection function of the NOG sample, appears to be rather insensitive to peculiar motion effects, these effects have a quite large impact on the local galaxy density, especially on the smallest scales.
We are calculating the local galaxy density of each galaxy in terms of the number density of neighbouring galaxies. This is done by smoothing every galaxy with a Gaussian filter (Giuricin et al. 1993; Monaco et al. 1994) and by correcting for the incompleteness of the sample at large distances through the selection function of the sample. The main goal of this line of research is to use small-scale density parameters to analyze environmental effects on the properties of nearby galaxies.
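A minimal sketch of such an estimator is given below (our illustration; the smoothing length `sigma` and the toy selection function are placeholder assumptions, not the values adopted by the authors):

```python
import numpy as np

def local_density(positions, sigma, selection):
    """Gaussian-smoothed number density at each galaxy position.

    positions : (N, 3) comoving coordinates in Mpc
    sigma     : Gaussian smoothing length in Mpc (placeholder choice)
    selection : (N,) selection function at each galaxy's distance;
                weighting by 1/selection corrects for the sample
                incompleteness at large distances.
    """
    norm = (2.0 * np.pi * sigma**2) ** 1.5      # 3D Gaussian normalization
    dens = np.empty(len(positions))
    for i, p in enumerate(positions):
        d2 = np.sum((positions - p) ** 2, axis=1)
        w = np.exp(-0.5 * d2 / sigma**2) / selection
        dens[i] = (w.sum() - 1.0 / selection[i]) / norm  # exclude the galaxy itself
    return dens

# toy usage with random points and a trivial (complete) selection function
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 50.0, size=(500, 3))
print(local_density(pts, sigma=5.0, selection=np.ones(500))[:5])
```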
## References
Branchini, E. & Plionis, M. 1996, ApJ, 460, 569.
Bromley, B. C., Press, W. H., Lin, H. & Kirshner, R. P. 1998, preprint astro-ph/9711227.
Driver, S. P., Windhorst, R. O. & Griffiths, R. E. 1995, ApJ, 453, 48.
Ellis, R. S. 1997, ARA&A, 35, 389.
Giuricin, G., Mardirossian, F., Mezzetti, M. & Monaco, P. 1993, ApJ, 407, 22.
Heyl, J., Colless, M., Ellis, R. S. & Broadhurst, T. 1997, MNRAS, 285, 613.
Loveday, J., Peterson, B. A., Efstathiou, G. & Maddox, S. J. 1992, ApJ, 390, 338.
Marinoni, C., Monaco, P., Giuricin, G. & Costantini, B. 1998a, ApJ, 505, 484.
Marinoni, C., Monaco, P., Giuricin, G. & Costantini, B. 1998b, ApJ, in press.
Marzke, R. O., da Costa, L. N., Pellegrini, P. S. & Willmer, C. N. A. 1998, ApJ, 503, 617.
Marzke, R. O., Geller, M., Huchra, J. P. & Corwin, Jr., H. G. 1994, AJ, 108, 437.
Monaco, P., Giuricin, G., Mardirossian, F. & Mezzetti, M. 1994, ApJ, 436, 576.
Turner, E. L. 1979, ApJ, 231, 645.
Willick, J. A., Courteau, S., Faber, S. M., Burstein, D., Dekel, A. & Strauss, M. A. 1997, ApJS, 109, 333.
|